Test Environment

Metric | Linux | Windows
CPU (%) | 113.98 | 268.01
Memory (MB) | 522 | 497.3
Network data sent (MB) | 59.6 | 120
Network data received (MB) | 113 | 168
Total number of threads | 360-361 | 358-362
1st Kafka message forwarded (HH:MM:SS) | 13:49:25 | 14:56:33
Last Kafka message forwarded (HH:MM:SS) | 13:49:28 | 14:56:39
Consumption rate (Messages/second) | 33,333 | 16,667

Kafka Consumer Performance on Linux:

  1. CPU Efficiency:

    • Linux consumes 113.98% CPU, much lower than Windows (268.01%). For a Kafka consumer, this indicates that Linux handles message consumption with lower CPU overhead, which is crucial in production environments where maximizing resource efficiency is key.
  2. Memory Usage:

    • 522 MB of memory is used by the Kafka consumer on Linux. While Linux consumes slightly more memory than Windows, this difference is not significant. The memory footprint appears stable and manageable, showing that Linux maintains Kafka consumption processes effectively.
  3. Network Data:

    • The Kafka consumer on Linux sends 59.6 MB of data and receives 113 MB of data, indicating a balanced network workload. Lower network data usage could imply efficient message batching or optimized data handling strategies, which are often critical in Kafka consumers for reducing latency and improving throughput.
  4. Thread Management:

    • The 360-361 threads on Linux suggest a stable and scalable multi-threading architecture for consuming Kafka messages. Since Kafka consumers can handle parallel processing of messages, this thread count shows Linux is performing within a controlled range without overloading the system with too many threads.
  5. Message Consumption Rate:

    • 33,333 messages per second on Linux is an exceptionally high consumption rate compared to Windows (16,667). This suggests that the Linux environment is highly optimized for Kafka consumers, allowing it to process a large volume of messages in real-time scenarios. This high throughput is vital in scenarios with large-scale data streams, where message backlogs need to be minimized.
  6. Message Latency:

    • The first Kafka message is forwarded at 13:49:25 and the last at 13:49:28, showing a minimal delay between the start and completion of message forwarding. This low latency is essential in Kafka-based systems that rely on quick, real-time message processing, and it demonstrates that Linux provides fast message handling.
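For reference, the consumption rates follow directly from the forwarding window and the total message count (assuming the 100,000-message load described in the test environment below): Linux forwards the messages in about 3 seconds (13:49:25 to 13:49:28), i.e. 100,000 / 3 ≈ 33,333 messages/second, while Windows takes about 6 seconds (14:56:33 to 14:56:39), i.e. 100,000 / 6 ≈ 16,667 messages/second.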

Conclusion for Kafka Consumers on Linux:

  • Efficient Resource Usage: Linux is shown to handle Kafka consumer tasks with efficient CPU usage and a stable memory footprint.
  • Superior Throughput: The 33,333 messages/second throughput shows Linux can handle very high message ingestion rates, making it suitable for data-intensive Kafka streams.
  • Low Latency: The minimal lag in forwarding Kafka messages indicates that Linux performs well under real-time processing demands.
  • Optimized for Network and Threads: Balanced network data handling and stable thread management further highlight Linux’s ability to maintain performance under high workloads.

Overall, Linux is an excellent environment for running Kafka consumers, especially in scenarios that demand high throughput, low latency, and efficient resource management.

Key Observations:

  1. CPU Usage:
    • Linux shows significantly lower CPU usage (113.98%) compared to Windows (268.01%), indicating that the process is more CPU-efficient on Linux.
  2. Memory Usage:
    • Linux uses slightly more memory (522 MB) than Windows (497.3 MB), but the difference is minimal.
  3. Network Data:
    • Linux sends less data (59.6 MB) than Windows (120 MB) but also receives less data (113 MB vs 168 MB). This could indicate different network handling efficiencies or workload patterns.
  4. Threads:
    • The number of threads is almost identical between the two environments, so thread management is consistent.
  5. Kafka Message Timing:
    • The tests were run at different times, so the absolute timestamps are not directly comparable. What matters is the forwarding window: roughly 3 seconds on Linux (13:49:25 to 13:49:28) versus roughly 6 seconds on Windows (14:56:33 to 14:56:39).
  6. Consumption Rate:
    • Linux has a much higher message consumption rate (33,333 messages/second) than Windows (16,667 messages/second). This suggests that Linux is significantly more efficient in handling Kafka messages.

Conclusion:

  • Linux demonstrates better performance overall in terms of CPU usage and message consumption rate. Despite similar memory usage, Linux handles Kafka message forwarding faster and consumes more messages per second.
  • Windows consumes significantly more CPU resources and handles Kafka messages at a slower rate, making it less efficient for high-performance use cases.

If high throughput and CPU efficiency are critical, Linux would be the better environment for this application based on these metrics.





Reference

CPS-2329


Performance for CM Data Notification Event Schema

This page provides an overview of the Kafka consumer test setup, detailing how cloud events are handled, the system resource configuration, and the message publishing mechanism.

Test Flow Description

According to the requirements, the test flow included the following steps (a condensed command-line sketch follows the list):

Test Flow Steps:

  1. Start Required Containers with Docker Compose:

    • Navigate to the Docker Compose file: C:\CPS\master\cps\docker-compose\docker-compose.yml.

    • Use the following command to bring up all necessary containers:

      Code Block
      languagepowershell
      docker-compose --profile dmi-stub --profile monitoring up -d

  2. Stop the "cps-and-ncmp" Container:

    • Manually stop the cps-and-ncmp container to simulate a controlled interruption in the service.

  3. Publish CM AVC Cloud Events to Kafka:

    • Use the K6 script located at k6-tests\once-off-test\kafka\produce-avc-event.js to publish Cloud Events to the Kafka topic "dmi-cm-events".

  4. Verify Published Messages in Kafka UI:


    • Verify that the expected number of messages has been published to the "dmi-cm-events" topic.

  5. Restart the "cps-and-ncmp" Container:

    • After the messages are published, restart the cps-and-ncmp container to resume normal operations.

  6. Verify Message Consumption:

    • Once the cps-and-ncmp container is running again, verify the number of messages consumed from the "cm-events" topic using Kafka UI at http://localhost:8089/.


  7. Monitor System Metrics in Grafana:


    • Capture relevant metrics such as CPU, Memory, and Threads during the test execution.
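A condensed command-line sketch of the flow above (the container name cps-and-ncmp, the use of an environment variable for the message count, and the relative paths are assumptions based on the steps described):

Code Block
languagebash
# 1. Start the required containers (run from the docker-compose directory)
docker-compose --profile dmi-stub --profile monitoring up -d

# 2. Stop the consumer before publishing (assumed container name)
docker stop cps-and-ncmp

# 3. Publish the CM AVC cloud events with K6
#    (assumes the script reads the message count from __ENV.TOTAL_MESSAGES)
k6 run -e TOTAL_MESSAGES=100000 k6-tests/once-off-test/kafka/produce-avc-event.js

# 4. Verify the published message count on the dmi-cm-events topic in Kafka UI

# 5. Restart the consumer so it processes the backlog
docker start cps-and-ncmp

# 6. Verify consumption of the cm-events topic in Kafka UI at http://localhost:8089/
# 7. Capture CPU, memory and thread metrics in Grafana during the run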

K6 test load:

  •  The performance test scenario is configured with the k6 load-testing tool, which generates the Kafka publishing load.

    K6 Scenarios

    Code Block
    languagejs
    produce_cm_avc_event: {
        executor: 'shared-iterations',
        exec: 'produce_cm_avc_event',
        vus: 1000, // You can adjust VUs to meet performance requirements
        iterations: $TOTAL_MESSAGES, // Total messages to publish (for example: 100K, 200K)
        maxDuration: '15m', // Adjust depending on the expected completion time
    }

    Explanation:

    • executor: 'shared-iterations': The shared-iterations executor divides the total number of iterations among the virtual users (VUs). This is ideal if you know the total amount of work (in this case, messages) that you want to execute but not how long it will take.

    • exec: Refers to the function or scenario name that handles the actual work (in this case, generating CM AVC events).

    • vus: The number of virtual users that will be running concurrently. You can adjust this based on how much load you want to simulate.

    • iterations: The total number of iterations (messages, in this case) that the virtual users will complete. Set this based on your performance test needs, e.g., 100000 or 200000.

    • maxDuration: The maximum time the test is allowed to run. If the iterations are not completed within this time, the test will stop.

    Suggested Adjustments:

    • Make sure to replace $TOTAL_MESSAGES with a specific value, such as 100000 or 200000, depending on your test requirements.

    • You may want to tune the vus and maxDuration depending on your system's capacity and how fast you expect the messages to be processed. If you're testing scalability, start with a smaller number and increase.
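For context, here is a minimal sketch of how this scenario block sits inside a k6 script. The exec function body and the use of __ENV.TOTAL_MESSAGES are illustrative assumptions; the actual publishing logic lives in produce-avc-event.js.

Code Block
languagejs
export const options = {
    scenarios: {
        produce_cm_avc_event: {
            executor: 'shared-iterations',
            exec: 'produce_cm_avc_event',
            vus: 1000,
            // e.g. k6 run -e TOTAL_MESSAGES=200000 produce-avc-event.js
            iterations: parseInt(__ENV.TOTAL_MESSAGES || '100000'),
            maxDuration: '15m',
        },
    },
};

// Function named by 'exec' above; each iteration publishes one CM AVC cloud event.
export function produce_cm_avc_event() {
    // ... build cloud-event headers and payload, then send to the Kafka topic ...
}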

Test Environment

#

Environment/Workload

Description

1

Tested on Linux

Laptop:               Dell Inc. XPS 15 9530
Processor:          13th Gen Intel® Core™ i9-13900H @ 2.60GHz
Installed RAM:    32.0 GiB
Edition:               Fedora Linux 40 (Workstation Edition)

PassMark Benchmark: 28820

2

Tested on Windows

Laptop:               Lenovo ThinkPad
Processor:          11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
Installed RAM:    40.0 GB (39.7 GB usable)
Edition:               Windows 11 Pro

PassMark Benchmark: 9761

3

Number of CPS Instance

1

NCMP resource config

YAML Configuration: Defines deployment resources for the NCMP service:
Replicas: 1
CPU Reservations: 2 CPUs
Memory Reservations: 2 GB
CPU Limits: 3 CPUs
Memory Limits: 3 GB
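A sketch of the corresponding resource block in Docker Compose deploy syntax (the orchestrator format is an assumption; the values are those listed above). Reservations are the minimum resources guaranteed to the container; limits are the maximum it may use before being throttled or killed by the orchestrator.

Code Block
languageyml
deploy:
  replicas: 1
  resources:
    reservations:   # minimum guaranteed to the container
      cpus: '2'
      memory: 2G
    limits:         # maximum the container may use
      cpus: '3'
      memory: 3G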

Kafka Topic configuration

CM Notification Topic Configuration:
Enabled: false
Group ID: cm_events
Topic Name: dmi-cm-events
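The corresponding application configuration block used for the test:

Code Block
languageyml
cmNotificationTopic:
    enabled: false
    groupId: cm_events
    topic: "dmi-cm-events"
    sendTimeout: 5000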

Publishing topic name

dmi-cm-events

Forwarded topic name

cm-events

4

Total number of Cm Avc cloud events

100,000/200,000 Kafka messages sent through the Kafka topic.

5

Cloud event headers

The headers for each Kafka message contain the following fields:
ce_type: "org.onap.cps.ncmp.events.avc1_0_0.AvcEvent"
ce_source: "DMI"
ce_destination: "dmi-cm-events"
ce_specversion: "1.0"
ce_time: ISO-formatted timestamp
ce_id: A unique ID generated using crypto.randomUUID()
ce_dataschema: "urn:cps:org.onap.cps.ncmp.events.avc1_0_0.AvcEvent:1.0.0"
ce_correlationid: Correlation ID generated using crypto.randomUUID()
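As set in the K6 publishing script, the header fields look like this:

Code Block
languagejs
"ce_type": "org.onap.cps.ncmp.events.avc1_0_0.AvcEvent",
"ce_source": "DMI",
"ce_destination": "dmi-cm-events",
"ce_specversion": "1.0",
"ce_time": new Date().toISOString(),
"ce_id": crypto.randomUUID(),
"ce_dataschema": "urn:cps:org.onap.cps.ncmp.events.avc1_0_0.AvcEvent:1.0.0",
"ce_correlationid": crypto.randomUUID()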

6

Kafka payload 

SampleAvcInputEvent.json
Code Block
languagejs
{
  "data": {
    "push-change-update": {
      "datastore-changes": {
        "ietf-yang-patch:yang-patch": {
          "patch-id": "34534ffd98",
          "edit": [
            {
              "edit-id": "ded43434-1",
              "operation": "replace",
              "target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver2']/NRCellCU[@id='15549']/NRCellRelation[@id='14427']",
              "value": {
                "attributes": []
              }
            },
            {
              "edit-id": "ded43434-2",
              "operation": "create",
              "target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver1']/NRCellCU[@id='15548']/NRCellRelation[@id='14426']",
              "value": {
                "attributes": [
                  {
                    "isHoAllowed": false
                  }
                ]
              }
            },
            {
              "edit-id":
[
 "ded43434-3",
              
{
"operation": "delete",
              "
edit-id
target": "
ded43434-1", "operation": "replace", "target": "ran-network:
ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='
cucpserver2
cucpserver1']/NRCellCU[@id='
15549
15548']/NRCellRelation[@id='
14427
14426']"
,

            
"value": {
}
          
"attributes": [
]

        
}
      }
    
}
,

  
{ "edit-id": "ded43434-2", "operation": "create", "target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver1']/NRCellCU[@id='15548']/NRCellRelation[@id='14426']", "value": { "attributes": [ { "isHoAllowed": false } ] } }, { "edit-id": "ded43434-3", "operation": "delete", "target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver1']/NRCellCU[@id='15548']/NRCellRelation[@id='14426']" } ] } } } } }8Number of DMI Plugin stub19Commit ID 81eb7dfc2f100a72692d2cbd7ce16540ee0a0fd410Commit ID linkhttps://gerrit.onap.org/r/gitweb?p=cps.git;a=commit;h=81eb7dfc2f100a72692d2cbd7ce16540ee0a0fd411K6 script (to publish cloud events)..\cps\k6-tests\once-off-test\kafka\produce-avc-event.js
}
}

8

Number of DMI Plugin stub

1 DMI Plugin Stub is used for testing purposes.

9

Commit ID 

81eb7dfc2f100a72692d2cbd7ce16540ee0a0fd4  

10

Commit ID link

https://gerrit.onap.org/r/gitweb?p=cps.git;a=commit;h=81eb7dfc2f100a72692d2cbd7ce16540ee0a0fd4

11

K6 script (to publish cloud events)

The test script used to publish cloud events is located at:

..\cps\k6-tests\once-off-test\kafka\produce-avc-event.js

This analysis compares the performance of two environments, Linux and Windows, when forwarding Kafka messages under different loads. The test captures metrics such as CPU usage, memory usage, thread count, and the time taken to forward Kafka messages. Here's the detailed breakdown of the data provided:

Metric | Linux (i9) | Windows (i5) | Linux (i9) | Windows (i5)
Total number of Kafka messages | 100,000 | 100,000 | 200,000 | 200,000
CPU Usage (%) | 44.2 | 82.1 | 78.3 | 72.6
Memory Usage (MB) | 244 | 195 | 212 | 222
Total Threads | 321 | 320 | 320 | 319
First Message Processed (HH:MM:SS) | 16:37:11 | 17:30:51 | 16:52:54 | 17:42:56
Last Message Processed (HH:MM:SS) | 16:37:14 | 17:31:03 | 16:52:59 | 17:43:10
Total Time to Process Messages (seconds)* | 3 | 12 | 5 | 14
Message Throughput (Messages/Second) | 33,333 | 8,333 | 40,000 | 14,286
Estimated Bandwidth (1.5 KB/message) (Mbps) | 391 | 98 | 469 | 167

  • *Note: Since time is measured in whole seconds, the error margin in the calculated throughput is relatively large. Larger-volume tests should be performed to obtain more accurate figures, but since the achieved figures are many times higher than required, the current test results were deemed good enough, as agreed with Kieran McCarthy.
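For reference, the derived rows appear to be computed as follows: throughput = total messages / total processing time (e.g. 100,000 / 3 s ≈ 33,333 messages/second), and estimated bandwidth = throughput × 1.5 KB × 8 bits / 1024 (e.g. 33,333 × 1.5 × 8 / 1024 ≈ 391 Mbps).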

Grafana screenshots of the captured metrics were attached for each run: Linux (i9) with 100,000 messages; Linux (i9) with 200,000 messages; Windows (i5) with 100,000 messages; Windows (i5) with 200,000 messages.

Highlighted Points:

  1. Total Number of Kafka Messages:

    • Linux and Windows are tested with two sets of Kafka messages: 100,000 and 200,000 messages. This allows you to evaluate the system performance under both moderate and high loads.

  2. CPU Usage:

    • Linux shows significantly lower CPU usage than Windows at 100K messages (44.2% vs 82.1%); at 200K messages the two are much closer (78.3% on Linux vs 72.6% on Windows).

    • Important: Windows CPU usage is very high at 100,000 messages (82.1%), indicating that the Windows environment is under heavy load. At 200,000 messages, the load decreases slightly (72.6%), likely due to optimization at higher loads.

  3. Memory Usage:

    • Linux uses 244 MB and 212 MB for 100,000 and 200,000 messages respectively, while Windows uses 195 MB and 222 MB.

    • Important: Memory consumption is slightly lower on Linux for 200,000 messages compared to Windows.

  4. Thread Count:

    • The total number of threads used is relatively consistent across environments (around 320 threads).

  5. Message Forwarding Times:

    • The first and last Kafka messages were forwarded between 16:37:11 and 16:37:14 on Linux (for 100K messages), and between 17:30:51 and 17:31:03 on Windows.

    • For 200,000 messages, Linux starts forwarding at 16:52:54 and ends at 16:52:59, whereas Windows starts at 17:42:56 and finishes at 17:43:10.

    • Important: Linux consistently forwards messages faster, finishing in fewer seconds compared to Windows.

  6. Time Taken to Forward Kafka Messages:

    • For 100,000 messages:

      • Linux forwards in 3 seconds, while Windows takes 12 seconds.

    • For 200,000 messages:

      • Linux forwards in 5 seconds, while Windows takes 14 seconds.

    • Important: Linux is significantly faster than Windows at processing the Kafka messages.

  7. Consumption Rate (Messages/Second):

    • For 100,000 messages, Linux processes at 33,333 messages/second, while Windows processes only 8,333 messages/second.

    • For 200,000 messages, Linux processes at 40,000 messages/second, while Windows handles 14,286 messages/second.

    • Important: Linux consistently handles Kafka messages at a higher rate, showing better throughput.


Verdict / Conclusion:

  1. CPU performance is the main factor:

    • Message Processing Speed: The i9 machine processes Kafka messages much faster than the i5 machine. For both 100,000 and 200,000 messages, the i9 takes significantly less time to forward the messages, indicating better handling of high message throughput.

Recommendation:

To process a high Kafka load, high CPU availability is recommended, as CPU appears to be the main factor determining message throughput.