
References

  1. CPS-2478 (System Jira)

  2. https://gerrit.onap.org/r/c/cps/+/139344

...

  1. Issue: Calls to the DB for modules (checking for an existing tag).
     Notes: Could easily cache the Module Set Tag in memory to reduce this.
     Decision: Toine Siebelink: Implemented as part of https://gerrit.onap.org/r/c/cps/+/139344. A minimal sketch of the caching idea follows this list.

  2. Issue: The first batch (on each thread) calls DMI for the same tag.
     Notes: Use the cache from #1, or store the first CM handle in the DB immediately instead of as part of the batch.
     Decision: Toine Siebelink: PoC-ed as part of https://gerrit.onap.org/r/c/cps/+/139344, but then replaced with a distributed Hazelcast Set instead.

  3. Issue: A new schema set is stored for each CM handle (instead of one per tag).
     Notes: Use the schema set concept (in CPS Core) to store each new Module Set Tag only once. This seems the correct usage of the Schema Set concept and will have the greatest performance benefit. However, it requires a more costly and difficult solution, as NCMP code is developed assuming each CM handle's schema set name is the same as its id. Affected use cases: (1) Initial Registration, (2) Upgrade.
     Decision: Toine Siebelink: Not considered as part of this user story; a new technical-debt Jira was created instead: CPS-2506 (System Jira).
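Decision #1 relies on a simple in-memory cache. A minimal sketch of the idea, assuming a hypothetical ModuleSetCacheExample class with a placeholder DB query (the map name ‘privateModuleSetCache’ is taken from the measurements below; everything else is illustrative, not the actual NCMP code):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: a cache local to one worker thread so that, within a sync batch,
// module references for a given Module Set Tag are fetched from the DB at most once.
public class ModuleSetCacheExample {

    // Lives only as long as the worker thread's batch; discarded afterwards.
    private final Map<String, List<String>> privateModuleSetCache = new HashMap<>();

    public List<String> getModuleReferences(final String moduleSetTag) {
        // computeIfAbsent only hits the DB on the first request for each tag.
        return privateModuleSetCache.computeIfAbsent(moduleSetTag, this::fetchModuleReferencesFromDb);
    }

    private List<String> fetchModuleReferencesFromDb(final String moduleSetTag) {
        // Placeholder for the real CPS DB query for module references.
        return List.of("module-a", "module-b");
    }
}
```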

Analysis

A small Spock & Groovy integration test has been created to sync a few hundred CM handles with multiple threads; see https://gerrit.onap.org/r/c/cps/+/139344. A minimal sketch of the threading set-up follows.
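The sketch below only illustrates the parallelism; the real test is written in Spock/Groovy, and the syncCmHandle method here is a hypothetical placeholder:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch only: distribute CM handle sync work over a fixed number of worker threads.
public class ParallelSyncExample {

    public static void main(final String[] args) throws InterruptedException {
        final int parallelism = 2; // matches 'Worker Threads' in the table below
        final List<String> cmHandleIds = List.of("ch-1", "ch-2", "ch-3", "ch-4");

        final ExecutorService workers = Executors.newFixedThreadPool(parallelism);
        for (final String cmHandleId : cmHandleIds) {
            workers.submit(() -> syncCmHandle(cmHandleId));
        }
        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
    }

    private static void syncCmHandle(final String cmHandleId) {
        // Placeholder: query module references, call DMI for new tags,
        // store the schema set and update the CM handle state.
        System.out.println("synced " + cmHandleId);
    }
}
```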

...

Parameter | Value | Notes
Cm Handles | 500 |
Module Set Tags | 2 | 250 CM Handles each
Worker Threads (parallelism) | 2 |
Environment | Windows 11, 13th Gen Intel(R) Core(TM) i9-13900H 2.60 GHz |

Measurements Before & After PoC

Before = average of 4 runs; After = average of 6 runs.

Method | # Calls (Before) | Time (ms, Before) | % (Before) | # Calls (After) | Time (ms, After) | % (After) | Notes (improvements)
query module references | 500 | 1,017 | 7% | 2 | 5 | 0% | Used a ‘privateModuleSetCache’ map to store the required data locally on each thread (see the sketch above). The data is discarded when the thread finishes, but this eliminates the vast majority of DB calls.
get modules from DMI | 100-200 | 1,326 | 9% | 2 | 13 | 0% | Used a Hazelcast distributed Set, ‘moduleSetTagsBeingProcessed’, to prevent multiple threads/instances from attempting to process the same new tag (see the sketch after this table).
store schema set | 500 | 10,449 | 73% | 500 | 5,156 | 86% | 2x faster, probably due to less contention with read queries.
update states | 5+ | 1,429 | 10% | 5+ | 833 | 14% | 1.7x faster.
Total | | 14,221 | 100% | | 6,006 | 100% | > 2x faster!
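The ‘moduleSetTagsBeingProcessed’ guard can be sketched with Hazelcast's distributed Set, whose add() is atomic across the cluster. A minimal sketch, assuming a hypothetical DMI-fetch placeholder (only the set name comes from the table above; the rest is illustrative, not the actual PoC code):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Set;

// Sketch only: ensure just one thread/instance processes a new Module Set Tag.
public class ModuleSetTagGuardExample {

    public static void main(final String[] args) {
        final HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
        final Set<String> moduleSetTagsBeingProcessed =
                hazelcast.getSet("moduleSetTagsBeingProcessed");

        final String moduleSetTag = "tag-1";
        // add() returns true for exactly one caller cluster-wide.
        if (moduleSetTagsBeingProcessed.add(moduleSetTag)) {
            try {
                // First to claim the tag: fetch its modules from DMI (placeholder).
                System.out.println("fetching modules from DMI for " + moduleSetTag);
            } finally {
                moduleSetTagsBeingProcessed.remove(moduleSetTag);
            }
        } else {
            // Another worker is already handling this tag; skip the DMI call.
            System.out.println("tag already being processed: " + moduleSetTag);
        }
        hazelcast.shutdown();
    }
}
```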

Extrapolated Results for 20,000 Nodes and DMI Delay

The figures below are calculated by scaling the measured times from 500 up to 20,000 CM handles (a factor of 40) and adding fixed delays for the DMI requests; a worked sketch follows the table.

Method | Time (ms, Before) | % (Before) | Time (ms, After) | % (After) | Notes
query module references | 40,670 | 7% | 5 | 0% |
get modules from DMI | 54,050 | 9% | 2,667 | 1% | Adds a 200 ms delay for each of the first 10 batches of 100
store schema set | 417,960 | 73% | 206,220 | 85% |
update states | 57,170 | 10% | 33,313 | 14% |
Total | 569,850 (~9m30s) | 100% | 242,205 (~4m3s) | 100% | Add 2 minutes for the initial delay: ~6 minutes in total, i.e. ~55 CM Handles/sec
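The extrapolation itself is simple arithmetic. A worked sketch (the table above was scaled from unrounded per-method averages, so its values differ slightly from these rounded figures):

```java
// Sketch only: reproduce the extrapolation from 500 to 20,000 CM handles.
public class ExtrapolationExample {

    public static void main(final String[] args) {
        final double scaleFactor = 20_000.0 / 500.0; // 40x more CM handles

        // Measured totals for 500 CM handles (ms), from the measurements table.
        final double beforeTotalMs = 14_221;
        final double afterTotalMs = 6_006;

        // Fixed DMI delay: 200 ms for each of the first 10 batches of 100.
        final double dmiDelayMs = 10 * 200;

        System.out.printf("before: ~%.0f ms (~9m30s)%n", beforeTotalMs * scaleFactor);
        System.out.printf("after:  ~%.0f ms (~4m3s)%n", afterTotalMs * scaleFactor + dmiDelayMs);
    }
}
```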

K6 Historical and current results, detailed analysis

Attachment: CPS-2478 K6 Stats.xlsx