
References

CPS-2408

Current Instance Configuration

  • As per the current configuration:

    • We have a single cluster, exposed via CPS_NCMP_CACHES_CLUSTER_NAME, which defaults to cps-and-ncmp-common-cache-cluster.

    • Each data structure has its own Hazelcast instance per JVM, so for the data structures below we have 6 instances per JVM (see the sketch after the table).

  #   Data Structure Name               Configuration Name                        Instance Name
  1   moduleSyncWorkQueue               defaultQueueConfig                        moduleSyncWorkQueue
  2   moduleSyncStartedOnCmHandles      moduleSyncStartedConfig                   moduleSyncStartedOnCmHandles
  3   dataSyncSemaphores                dataSyncSemaphoresConfig                  dataSyncSemaphores
  4   trustLevelPerCmHandle             trustLevelPerCmHandleCacheConfig          hazelcastInstanceTrustLevelPerCmHandleMap
  5   trustLevelPerDmiPlugin            trustLevelPerDmiPluginCacheConfig         hazelcastInstanceTrustLevelPerDmiPluginMap
  6   cmNotificationSubscriptionCache   cmNotificationSubscriptionCacheMapConfig  hazelCastInstanceCmNotificationSubscription
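For illustration, a minimal sketch of the current pattern: each data structure creates its own Hazelcast member that joins the common cluster. The names are taken from the table above; the backup counts and the exact config-to-structure wiring in CPS-NCMP are assumptions here and may differ.

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.QueueConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class CurrentCachePattern {

    // Cluster name shared by all members (CPS_NCMP_CACHES_CLUSTER_NAME).
    private static final String CLUSTER_NAME = "cps-and-ncmp-common-cache-cluster";

    // One dedicated Hazelcast member (cluster instance) created just for the work queue.
    static HazelcastInstance createModuleSyncWorkQueueInstance() {
        final Config config = new Config();
        config.setInstanceName("moduleSyncWorkQueue");
        config.setClusterName(CLUSTER_NAME);
        config.addQueueConfig(new QueueConfig("defaultQueueConfig").setBackupCount(1));
        return Hazelcast.newHazelcastInstance(config);
    }

    // ...and another dedicated member just for the trust-level map.
    // This is repeated for each of the 6 data structures, giving 6 members per JVM.
    static HazelcastInstance createTrustLevelPerCmHandleInstance() {
        final Config config = new Config();
        config.setInstanceName("hazelcastInstanceTrustLevelPerCmHandleMap");
        config.setClusterName(CLUSTER_NAME);
        config.addMapConfig(new MapConfig("trustLevelPerCmHandleCacheConfig").setBackupCount(1));
        return Hazelcast.newHazelcastInstance(config);
    }
}
```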

Target Instance Configuration

  • What we want to achieve:

    • The cluster stays the same, but we want to limit the number of Hazelcast instances.

    • A single Hazelcast instance holds the configurations of all the different data structures, and we get hold of a data structure through that instance when needed.

    • That means a single instance per JVM.

  • How we can achieve it (see the sketch after this list):

    • By having one common configuration, i.e. a master configuration, for all the data structure types we need. Today the configuration is essentially the same each time; we simply give it a different name every time.

    • CPS_NCMP_INSTANCE_CONFIG_NAME is the environment variable to set the common configuration name; it defaults to cps-and-ncmp-hz-instance-config.

    • During initialization, check whether an instance with this configuration already exists; if it does, return the already initialized instance and get hold of the required data structures from it.
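A minimal sketch of the target pattern, assuming Hazelcast 5.x: one master Config holds all the named queue/map configurations, Hazelcast.getOrCreateHazelcastInstance() performs the "create if absent, otherwise reuse" check, and callers get hold of the individual data structures from that single shared member. The configuration names mirror the table above; the exact name matching and backup settings used by CPS-NCMP are assumptions here.

```java
import com.hazelcast.collection.IQueue;
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.QueueConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class SharedHazelcastInstance {

    // Common instance/configuration name (CPS_NCMP_INSTANCE_CONFIG_NAME) with its default.
    private static final String INSTANCE_CONFIG_NAME =
        System.getenv().getOrDefault("CPS_NCMP_INSTANCE_CONFIG_NAME", "cps-and-ncmp-hz-instance-config");

    // Cluster name stays the same as today (CPS_NCMP_CACHES_CLUSTER_NAME) with its default.
    private static final String CLUSTER_NAME =
        System.getenv().getOrDefault("CPS_NCMP_CACHES_CLUSTER_NAME", "cps-and-ncmp-common-cache-cluster");

    // Single master configuration holding the queue/map configs of all 6 data structures.
    private static Config masterConfig() {
        final Config config = new Config();
        config.setInstanceName(INSTANCE_CONFIG_NAME);
        config.setClusterName(CLUSTER_NAME);
        config.addQueueConfig(new QueueConfig("defaultQueueConfig").setBackupCount(1));
        config.addMapConfig(new MapConfig("moduleSyncStartedConfig").setBackupCount(1));
        config.addMapConfig(new MapConfig("dataSyncSemaphoresConfig").setBackupCount(1));
        config.addMapConfig(new MapConfig("trustLevelPerCmHandleCacheConfig").setBackupCount(1));
        config.addMapConfig(new MapConfig("trustLevelPerDmiPluginCacheConfig").setBackupCount(1));
        config.addMapConfig(new MapConfig("cmNotificationSubscriptionCacheMapConfig").setBackupCount(1));
        return config;
    }

    // Returns the already initialized instance if one with this name exists in the JVM,
    // otherwise creates it: one member per JVM instead of six.
    public static HazelcastInstance getOrCreate() {
        return Hazelcast.getOrCreateHazelcastInstance(masterConfig());
    }

    // Callers take hold of the individual data structures from the shared instance.
    public static IQueue<Object> moduleSyncWorkQueue() {
        return getOrCreate().getQueue("moduleSyncWorkQueue");
    }

    public static IMap<String, String> trustLevelPerCmHandle() {
        return getOrCreate().getMap("trustLevelPerCmHandle");
    }
}
```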

Impacts

  • There would be just one member per JVM, as opposed to 6 members per JVM today.

  • For a multi-instance setup, say 2 application instances, the new setup gives us 2 members as opposed to 12.

    • Fewer members means fewer TCP connections between them.

    • The more members there are, the chattier they become when replicating data, electing partition owners, etc.

  • Fewer members also means less data to replicate, and since everything is held in memory we expect some heap space to be freed.

  • The number of exposed ports should be just one per JVM, as opposed to 6 per JVM (we may need to change the charts configuration to free up the exposed ports).
