CPS-2161: Remove Hazelcast from NCMP Module Sync
References
- CPS-2161
- CPS-2146
Issues & Decisions
# | Issue | Notes | Decision |
---|---|---|---|
1 | Placeholder for issue | | |
Background
The use of Hazelcast during NCMP's CM-handle Module Sync is leading to:
- High memory usage during CM-handle registration
- Consistency problems
- Poor load balancing between NCMP instances for module sync
Summary of Hazelcast structures for Module/Data Sync
Structure | Type | Notes |
---|---|---|
moduleSyncWorkQueue | BlockingQueue<DataNode> | Entire CM-handle DataNode objects are stored in the work queue for module sync, which creates very high memory usage during CM-handle registration. The use of this blocking queue also likely causes the load-balancing issues during module sync. |
moduleSyncStartedOnCmHandles | Map<String, Object> | One entry is stored in memory per CM handle in ADVISED state. |
dataSyncSemaphores | Map<String, Boolean> | Only populated when the data sync feature is enabled for CM handles; it then stores one entry per CM handle with data sync enabled. |
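For illustration, a minimal sketch of how these three structures are obtained from Hazelcast (the structure names match the table above; the element types and configuration are simplified assumptions, not NCMP's actual code):

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import java.util.concurrent.BlockingQueue;

public class ModuleSyncStructuresSketch {
    public static void main(String[] args) {
        HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance(new Config());

        // Work queue holding entire CM-handle DataNode objects (Object used
        // here for brevity); the source of the high memory usage noted above.
        BlockingQueue<Object> moduleSyncWorkQueue = hazelcast.getQueue("moduleSyncWorkQueue");

        // One entry per CM handle in ADVISED state, used to mark that an
        // instance has started module sync for that handle.
        IMap<String, Object> moduleSyncStartedOnCmHandles = hazelcast.getMap("moduleSyncStartedOnCmHandles");

        // Only populated when data sync is enabled for a CM handle.
        IMap<String, Boolean> dataSyncSemaphores = hazelcast.getMap("dataSyncSemaphores");
    }
}
```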
Consistency problems
Consistency problems are evidenced by log entries showing duplicate CM-handles being created:
ERROR: duplicate key value violates unique constraint "fragment_anchor_id_xpath_key"
DETAIL: Key (anchor_id, xpath)=(2, /dmi-registry/cm-handles[@id='C9B31349E93B850D52EFD2F632BAE598']) already exists.
STATEMENT: insert into fragment (anchor_id,attributes,parent_id,xpath) values ($1,$2,$3,$4) RETURNING *
Additionally, in CPS-2146 it was reported that:
moduleSync was quite chaotic between the two NCMP pods, both of them logged that the other one is working on the given cmHandle which reached the READY state minutes ago.
The consistency issues are likely a result of Hazelcast requiring an odd number of cluster members (at least three) to resolve conflicts via quorum; as the CPS-2146 report above shows, NCMP was running with only two pods, so the cluster could not form a tie-breaking quorum.
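For reference, Hazelcast's Raft-based CP Subsystem, which provides the quorum-backed consistency guarantees, is only active when configured with at least three CP members; a minimal sketch of that configuration:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;

public class CpSubsystemSketch {
    public static void main(String[] args) {
        Config config = new Config();
        // The CP Subsystem requires at least 3 CP members to form a Raft
        // quorum; with the default of 0 it runs in "unsafe mode" and its
        // data structures provide no strong consistency guarantees.
        config.getCPSubsystemConfig().setCPMemberCount(3);
        Hazelcast.newHazelcastInstance(config);
    }
}
```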
Proposed Changes
It is proposed that the LCM (Lifecycle Management) State Machine be changed to include an explicit state for syncing modules (or data).
The previous LCM State Machine is outlined here: (diagram not included; CM handles move directly from ADVISED to READY, or to LOCKED on failure)
The proposed LCM State Machine is: (diagram not included; CM handles move from ADVISED to a new SYNCING state, and from there to READY or LOCKED)
Aside: For Module Upgrade, the state transition from READY to LOCKED to ADVISED could be simplified to READY to ADVISED.
A side effect of introducing a SYNCING state will be an additional LCM event notification per CM handle, since each handle will now transition ADVISED to SYNCING to READY rather than directly ADVISED to READY.
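A minimal sketch of the proposed state model (state names follow this page; the exact transition set is an assumption based on the proposal text, including the simplified READY to ADVISED upgrade path from the aside):

```java
import java.util.Map;
import java.util.Set;

public final class LcmStateMachineSketch {

    enum CmHandleState { ADVISED, SYNCING, READY, LOCKED }

    // Allowed transitions under the proposal: handles are claimed into an
    // explicit SYNCING state before reaching READY or LOCKED.
    private static final Map<CmHandleState, Set<CmHandleState>> ALLOWED_TRANSITIONS = Map.of(
            CmHandleState.ADVISED, Set.of(CmHandleState.SYNCING),
            CmHandleState.SYNCING, Set.of(CmHandleState.READY, CmHandleState.LOCKED),
            CmHandleState.LOCKED, Set.of(CmHandleState.ADVISED),
            // Simplified module-upgrade path from the aside: READY -> ADVISED
            CmHandleState.READY, Set.of(CmHandleState.ADVISED));

    static boolean isValidTransition(CmHandleState from, CmHandleState to) {
        return ALLOWED_TRANSITIONS.getOrDefault(from, Set.of()).contains(to);
    }
}
```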
Module Set Syncing
Proof of Concept
A PoC is being constructed: WIP Remove hazelcast map for module sync | https://gerrit.nordix.org/c/onap/cps/+/20724
From the PoC, it was determined that when running multiple NCMP instances, approximately 10% of batches were processed by both instances simultaneously, which led to some handles going to LOCKED state due to database exceptions. Two solutions were proposed (sketched after this list):
- Add a distributed lock (from Hazelcast) to create a critical section, allowing only one instance at a time to move handles to SYNCING state
- Allow collisions, by gracefully handling AlreadyDefinedException in the code
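A hedged sketch of both proposed solutions (class and method names are illustrative stand-ins, not the PoC's actual code; only the Hazelcast FencedLock API is real):

```java
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.lock.FencedLock;
import java.util.Collection;

public class ModuleSyncBatchClaimerSketch {

    private final FencedLock workQueueLock;

    public ModuleSyncBatchClaimerSketch(HazelcastInstance hazelcastInstance) {
        // Solution 1: a Hazelcast distributed lock guarding the critical
        // section, so only one instance moves a batch to SYNCING at a time.
        this.workQueueLock = hazelcastInstance.getCPSubsystem().getLock("workQueueLock");
    }

    public void claimBatchWithLock(Collection<String> cmHandleIds) {
        workQueueLock.lock();
        try {
            setCmHandleStatesToSyncing(cmHandleIds);
        } finally {
            workQueueLock.unlock();
        }
    }

    // Solution 2: no lock; both instances may attempt the same batch, and a
    // collision surfaces as AlreadyDefinedException, which is treated as
    // "another instance already claimed this batch" rather than going LOCKED.
    public void claimBatchAllowingCollisions(Collection<String> cmHandleIds) {
        try {
            setCmHandleStatesToSyncing(cmHandleIds);
        } catch (AlreadyDefinedException e) {
            // Skip: the batch was concurrently claimed by the other instance.
        }
    }

    private void setCmHandleStatesToSyncing(Collection<String> cmHandleIds) {
        // Stand-in for the real NCMP persistence call that updates states.
    }

    // Stand-in for CPS's AlreadyDefinedException so this sketch compiles alone.
    static class AlreadyDefinedException extends RuntimeException {
    }
}
```

Note the trade-off: Solution 1 still retains a Hazelcast dependency (the lock itself), whereas Solution 2 would remove Hazelcast entirely from this path.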
Solution 1 has been verified to work and gives roughly 50% faster registration than the current implementation. Solution 2 has not yet been tested, so it remains to be determined which solution offers better performance and reliability.