Performance for CM Data Notification Event Schema
Reference
CPS-2329: Performance for CM Data Notification Event Schema
This workload setup provides an overview of the Kafka consumer environment, detailing how cloud events are handled, how system resources are configured, and how messages are published; results are compared for Linux and Windows.
Test Flow Description
According to the requirements, the test flow included the following steps:
Test Flow Steps:
Start Required Containers with Docker Compose:
Navigate to the Docker Compose file:
C:\CPS\master\cps\docker-compose\docker-compose.yml
Use the following command to bring up all necessary containers:
docker-compose --profile dmi-stub --profile monitoring up -d
Stop the "cps-and-ncmp" Container:
Manually stop the cps-and-ncmp container to simulate a controlled interruption in the service.
Publish CM AVC Cloud Events to Kafka:
Use the K6 script located at k6-tests\once-off-test\kafka\produce-avc-event.js to publish Cloud Events to the Kafka topic "dmi-cm-events" (an illustrative producer sketch is shown after the K6 scenario section below).
Verify Published Messages in Kafka UI:
Open Kafka UI at http://localhost:8089/.
Verify that the expected number of messages has been published to the "dmi-cm-events" topic.
Restart the "cps-and-ncmp" Container:
After the messages are published, restart the cps-and-ncmp container to resume normal operations.
Verify Message Consumption:
Once the cps-and-ncmp container is running again, verify the number of messages consumed from the "cm-events" topic using Kafka UI at http://localhost:8089/.
Monitor System Metrics in Grafana:
Log in to Grafana at http://localhost:3000/.
Capture relevant metrics such as CPU, Memory, and Threads during the test execution.
K6 test load:
The performance test load is generated with k6, using the scenario configuration below.
K6 Scenarios
produce_cm_avc_event: {
    executor: 'shared-iterations',
    exec: 'produce_cm_avc_event',
    vus: 1000,                      // You can adjust VUs to meet performance requirements
    iterations: $TOTAL_MESSAGES,    // Total messages to publish (for example: 100 K, 200 K)
    maxDuration: '15m',             // Adjust depending on the expected completion time
}
Explanation:
executor: 'shared-iterations': The shared-iterations executor divides the total number of iterations among the virtual users (VUs). This is ideal if you know the total amount of work (in this case, messages) that you want to execute but not how long it will take.
exec: Refers to the function or scenario name that handles the actual work (in this case, generating CM AVC events).
vus: The number of virtual users that will be running concurrently. You can adjust this based on how much load you want to simulate.
iterations: The total number of iterations (or messages in your case) that the virtual users will complete. You can set this based on your performance test needs, e.g., 100000 or 200000.
maxDuration: The maximum time the test is allowed to run. If the iterations are not completed within this time, the test will stop.
Suggested Adjustments:
Make sure to replace $TOTAL_MESSAGES with a specific value, such as 100000 or 200000, depending on your test requirements.
You may want to tune the vus and maxDuration depending on your system's capacity and how fast you expect the messages to be processed. If you're testing scalability, start with a smaller number and increase.
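For orientation, below is a minimal sketch of how this scenario could drive the producer function, assuming the xk6-kafka extension; the broker address, key format, and payload file location are illustrative assumptions, not necessarily what produce-avc-event.js actually does.

import { Writer, SchemaRegistry, SCHEMA_TYPE_STRING } from 'k6/x/kafka';

// Illustrative broker address and topic; adjust to the local Kafka exposed by docker-compose.
const writer = new Writer({
    brokers: ['localhost:9092'],
    topic: 'dmi-cm-events',
});
const schemaRegistry = new SchemaRegistry();

// Sample CM AVC payload, loaded once in the init context (file location is an assumption).
const sampleAvcEvent = open('./SampleAvcInputEvent.json');

export const options = {
    scenarios: {
        produce_cm_avc_event: {
            executor: 'shared-iterations',
            exec: 'produce_cm_avc_event',
            vus: 1000,
            iterations: parseInt(__ENV.TOTAL_MESSAGES || '100000', 10),
            maxDuration: '15m',
        },
    },
};

export function produce_cm_avc_event() {
    writer.produce({
        messages: [
            {
                // Key format is an illustrative choice to keep each message unique per VU/iteration.
                key: schemaRegistry.serialize({ data: `${__VU}-${__ITER}`, schemaType: SCHEMA_TYPE_STRING }),
                value: schemaRegistry.serialize({ data: sampleAvcEvent, schemaType: SCHEMA_TYPE_STRING }),
            },
        ],
    });
}

export function teardown() {
    writer.close();
}

With a layout like this, the test could be started with something like k6 run -e TOTAL_MESSAGES=100000 produce-avc-event.js while the cps-and-ncmp container is stopped. The per-message cloud event headers (see the "Cloud event headers" row in the environment table below) would be attached via the headers field of each message; an illustrative header sketch follows that table.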
Test Environment
# | Environment/Workload | Description |
---|---|---|
1 | Tested on Linux | Laptop: Dell Inc. XPS 15 9530 |
2 | Tested on Windows | Laptop: Lenovo ThinkPad |
3 | Number of CPS Instances | 1 |
| NCMP resource config | YAML Configuration: Defines deployment resources for the NCMP service: |
| Kafka Topic configuration | CM Notification Topic Configuration: |
| Publishing topic name | dmi-cm-events |
| Forwarded topic name | cm-events |
4 | Total number of CM AVC cloud events | 100,000 / 200,000 Kafka messages sent through the Kafka topic. |
5 | Cloud event headers | The headers for each Kafka message contain the following fields (an illustrative header sketch is shown after this table): |
6 | Kafka payload | SampleAvcInputEvent.json:
{
"data": {
"push-change-update": {
"datastore-changes": {
"ietf-yang-patch:yang-patch": {
"patch-id": "34534ffd98",
"edit": [
{
"edit-id": "ded43434-1",
"operation": "replace",
"target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver2']/NRCellCU[@id='15549']/NRCellRelation[@id='14427']",
"value": {
"attributes": []
}
},
{
"edit-id": "ded43434-2",
"operation": "create",
"target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver1']/NRCellCU[@id='15548']/NRCellRelation[@id='14426']",
"value": {
"attributes": [
{
"isHoAllowed": false
}
]
}
},
{
"edit-id": "ded43434-3",
"operation": "delete",
"target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver1']/NRCellCU[@id='15548']/NRCellRelation[@id='14426']"
}
]
}
}
}
}
}
|
8 | Number of DMI Plugin stubs | 1 DMI Plugin stub is used for testing purposes. |
9 | Commit ID | 81eb7dfc2f100a72692d2cbd7ce16540ee0a0fd4 |
10 | Commit ID link | |
11 | K6 script (to publish cloud events) | The test script used to publish cloud events is located at: ..\cps\k6-tests\once-off-test\kafka\produce-avc-event.js |
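As an illustration of the kind of headers the "Cloud event headers" row refers to, here is a small sketch of a header-building helper, assuming the binary-mode CloudEvents Kafka protocol binding (ce_-prefixed attribute names). All values below are placeholders, not the actual header contents used in this test.

// Illustrative only: builds a CloudEvents-style header map for one Kafka message.
// Attribute names follow the CloudEvents Kafka protocol binding; values are placeholders.
function buildCloudEventHeaders(eventId) {
    return {
        ce_specversion: '1.0',
        ce_id: eventId,                         // unique id per message
        ce_source: 'example-dmi-plugin',        // placeholder source identifier
        ce_type: 'example.cm.avc.event',        // placeholder event type
        ce_time: new Date().toISOString(),      // event creation time
        'content-type': 'application/json',     // payload media type
    };
}

// Example usage in the k6 producer sketched earlier:
// writer.produce({ messages: [{ value: ..., headers: buildCloudEventHeaders('1001') }] });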
This analysis compares the performance of two environments, Linux and Windows, when forwarding Kafka messages under different loads. The test captures metrics such as CPU usage, memory usage, thread count, and the time taken to forward Kafka messages. Here's the detailed breakdown of the data provided:
Metric | Linux (i9) | Windows (i5) | Linux (i9) | Windows (i5) |
---|---|---|---|---|
Total number of Kafka messages | 100,000 | 100,000 | 200,000 | 200,000 |
CPU Usage (%) | 44.2 | 82.1 | 78.3 | 72.6 |
Memory Usage (MB) | 244 | 195 | 212 | 222 |
Total Threads | 321 | 320 | 320 | 319 |
First Message Processed (HH:MM:SS) | 16:37:11 | 17:30:51 | 16:52:54 | 17:42:56 |
Last Message Processed (HH:MM:SS) | 16:37:14 | 17:31:03 | 16:52:59 | 17:43:10 |
Total Time to Process Messages (Seconds)* | 3 | 12 | 5 | 14 |
Message Throughput (Messages/Second) | 33,333 | 8,333 | 40,000 | 14,286 |
Estimated Bandwidth (1.5 KB/message) (Mbps) | 391 | 98 | 469 | 167 |
Note: Given that the time measurements are accurate only to whole seconds, the error margin in the calculated throughput is relatively large. Larger-volume tests should have been performed to get more accurate figures, but since the achieved figures are many times higher than required, the current test results were deemed good enough, as agreed with @kieran mccarthy on Oct 2, 2024.
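As a worked example of how the derived rows are calculated (assuming 1.5 KB per message): for the Linux run with 100,000 messages, throughput = 100,000 messages / 3 seconds ≈ 33,333 messages/second, and estimated bandwidth ≈ 33,333 messages/second × 1.5 KB/message × 8 bits/byte ≈ 400 Mbps (≈ 391 Mbps using 1024-based unit conversion).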
Grafana metric captures (CPU, memory, threads) were taken for each run: Linux (i9), 100,000; Linux (i9), 200,000; Windows (i5), 100,000; Windows (i5), 200,000.
Highlighted Points:
Total Number of Kafka Messages:
Linux and Windows are tested with two sets of Kafka messages: 100,000 and 200,000 messages. This allows you to evaluate the system performance under both moderate and high loads.
CPU Usage:
Linux shows significantly lower CPU usage than Windows at 100,000 messages (44.2% vs 82.1%); at 200,000 messages the two environments are comparable (78.3% vs 72.6%).
Important: Windows CPU usage is already very high at 100,000 messages (82.1%), indicating that the Windows environment is under heavy load. At 200,000 messages, the load decreases slightly (72.6%), likely due to optimization at higher loads.
Memory Usage:
Linux uses 244 MB and 212 MB for 100,000 and 200,000 messages respectively, while Windows uses 195 MB and 222 MB.
Important: Memory consumption is slightly lower on Linux for 200,000 messages compared to Windows.
Thread Count:
The total number of threads used is relatively consistent across environments (around 320 threads).
Message Forwarding Times:
The first and last Kafka messages were forwarded between 16:37:11 and 16:37:14 on Linux (for 100K messages), and between 17:30:51 and 17:31:03 on Windows.
For 200,000 messages, Linux starts forwarding at 16:52:54 and ends at 16:52:59, whereas Windows starts at 17:42:56 and finishes at 17:43:10.
Important: Linux consistently forwards messages faster, finishing in fewer seconds compared to Windows.
Time Taken to Forward Kafka Messages:
For 100,000 messages:
Linux forwards in 3 seconds, while Windows takes 12 seconds.
For 200,000 messages:
Linux forwards in 5 seconds, while Windows takes 14 seconds.
Important: Linux is significantly faster than Windows at processing the Kafka messages.
Consumption Rate (Messages/Second):
For 100,000 messages, Linux processes at 33,333 messages/second, while Windows processes only 8,333 messages/second.
For 200,000 messages, Linux processes at 40,000 messages/second, while Windows handles 14,286 messages/second.
Important: Linux consistently handles Kafka messages at a higher rate, showing better throughput.
Verdict / Conclusion:
CPU performance is the main factor:
Message Processing Speed: The i9 machine is able to process Kafka messages much faster than the i5 machine. For both 100,000 and 200,000 messages, the i9 takes significantly less time to forward messages, indicating better handling of high message throughput.
Recommendation:
To process a high Kafka load, high CPU availability is recommended, as CPU appears to be the main factor determining message throughput.