
This workload setup provides an overview of the Kafka consumer environment on Linux: how cloud events are handled, how system resources are configured, and how messages are published.

Test Flow Description

According to the requirements, the test flow included the following steps:

Test Flow Steps:

  1. Start Required Containers with Docker Compose:

    • Navigate to the Docker Compose file: C:\CPS\master\cps\docker-compose\docker-compose.yml.
    • Use the following command to bring up all necessary containers:


      docker-compose --profile dmi-stub --profile monitoring up -d


  2. Stop the "cps-and-ncmp" Container:

    • Manually stop the cps-and-ncmp container to simulate a controlled interruption in the service.
  3. Publish CM AVC Cloud Events to Kafka:

    • Use the K6 script located at k6-tests\once-off-test\kafka\produce-avc-event.js to publish Cloud Events to the Kafka topic "dmi-cm-events".
  4. Verify Published Messages in Kafka UI:

    • Verify that the expected number of messages has been published to the "dmi-cm-events" topic.
  5. Restart the "cps-and-ncmp" Container:

    • After the messages are published, restart the cps-and-ncmp container to resume normal operations.
  6. Verify Message Consumption:

    • Once the cps-and-ncmp container is running again, verify the number of messages consumed from the "cm-events" topic using Kafka UI at http://localhost:8089/.

  7. Monitor System Metrics in Grafana:

    • Capture relevant metrics such as CPU, Memory, and Threads during the test execution.

K6 test load:

  • The performance test scenario is configured with k6, which generates the publishing load (a minimal sketch of the full script structure follows the Suggested Adjustments below).

    K6 Scenarios

    produce_cm_avc_event: {
        executor: 'shared-iterations',
        exec: 'produce_cm_avc_event',
        vus: 1000,                    // Adjust VUs to meet performance requirements
        iterations: $TOTAL_MESSAGES,  // Total messages to publish (for example: 100 K, 200 K)
        maxDuration: '15m',           // Adjust depending on the expected completion time
    }

    Explanation:

    • executor: 'shared-iterations': The shared-iterations executor divides the total number of iterations among the virtual users (VUs). This is ideal if you know the total amount of work (in this case, messages) that you want to execute but not how long it will take.
    • exec: Refers to the function or scenario name that handles the actual work (in this case, generating CM AVC events).
    • vus: The number of virtual users that will be running concurrently. You can adjust this based on how much load you want to simulate.
    • iterations: The total number of iterations (messages, in this case) that the virtual users will complete between them. Set this based on your performance test needs, e.g., 100,000 or 200,000.
    • maxDuration: The maximum time the test is allowed to run. If the iterations are not completed within this time, the test will stop.

    Suggested Adjustments:

    • Make sure to replace $TOTAL_MESSAGES with a specific value, such as 100000 or 200000, depending on your test requirements.
    • You may want to tune the vus and maxDuration depending on your system's capacity and how fast you expect the messages to be processed. If you're testing scalability, start with a smaller number and increase.
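
    The scenario above only shapes the load; the actual publishing happens in the function named by exec. Below is a minimal sketch of how such a producer script could be structured, assuming the xk6-kafka extension (k6/x/kafka) is used to write to Kafka. The broker address, serialization calls, and payload path are illustrative assumptions and are not taken from produce-avc-event.js.

    import { Writer, SchemaRegistry, SCHEMA_TYPE_JSON } from 'k6/x/kafka';

    // Assumed broker address; adjust to match the docker-compose Kafka setup.
    const writer = new Writer({
        brokers: ['localhost:9092'],
        topic: 'dmi-cm-events',
    });
    const schemaRegistry = new SchemaRegistry();

    // Sample payload as listed in the "Kafka payload" row of the Test Environment table (path assumed).
    const samplePayload = JSON.parse(open('./SampleAvcInputEvent.json'));

    export const options = {
        scenarios: {
            produce_cm_avc_event: {
                executor: 'shared-iterations',
                exec: 'produce_cm_avc_event',
                vus: 1000,
                iterations: 100000, // or 200000, depending on the test run
                maxDuration: '15m',
            },
        },
    };

    export function produce_cm_avc_event() {
        writer.produce({
            messages: [
                {
                    value: schemaRegistry.serialize({ data: samplePayload, schemaType: SCHEMA_TYPE_JSON }),
                    // The ce_* Cloud Event headers are attached per message;
                    // see the header sketch after the Test Environment table.
                },
            ],
        });
    }

    export function teardown() {
        writer.close();
    }

    With this structure, each of the 1000 VUs repeatedly calls produce_cm_avc_event() until the shared iteration budget (the total message count) is used up or maxDuration expires.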

Test Environment


#   Environment/Workload                  Description

1. Tested on Linux

   Laptop:          Dell Inc. XPS 15 9530
   Processor:       13th Gen Intel® Core™ i9-13900H @ 2.60 GHz
   Installed RAM:   32.0 GiB
   Edition:         Fedora Linux 40 (Workstation Edition)
   PassMark Benchmark: 28820

2. Tested on Windows

   Laptop:          Lenovo ThinkPad
   Processor:       11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40 GHz
   Installed RAM:   40.0 GB (39.7 GB usable)
   Edition:         Windows 11 Pro
   PassMark Benchmark: 9761

3. Number of CPS Instances

   1

4. NCMP resource config

   YAML configuration defining deployment resources for the NCMP service:
   Replicas: 1
   CPU Reservations: 2 CPUs
   Memory Reservations: 2 GB
   CPU Limits: 3 CPUs
   Memory Limits: 3 GB

5. Kafka Topic configuration

   CM Notification Topic Configuration:
   Enabled: false
   Group ID: cm_events
   Topic Name: dmi-cm-events
   Publishing topic name: dmi-cm-events
   Forwarded topic name: cm-events
6. Total number of CM AVC cloud events

   100,000 / 200,000 Kafka messages sent through the Kafka topic.

7. Cloud event headers

   The headers for each Kafka message contain the following fields (see the sketch after this table):
   ce_type: "org.onap.cps.ncmp.events.avc1_0_0.AvcEvent"
   ce_source: "DMI"
   ce_destination: "dmi-cm-events"
   ce_specversion: "1.0"
   ce_time: ISO-formatted timestamp
   ce_id: A unique ID generated using crypto.randomUUID()
   ce_dataschema: "urn:cps.onap.cps.ncmp.events.avc1_0_0.AvcEvent:1.0.0"
   ce_correlationid: Correlation ID generated using crypto.randomUUID()
8. Kafka payload

   SampleAvcInputEvent.json:
{
  "data": {
    "push-change-update": {
      "datastore-changes": {
        "ietf-yang-patch:yang-patch": {
          "patch-id": "34534ffd98",
          "edit": [
            {
              "edit-id": "ded43434-1",
              "operation": "replace",
              "target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver2']/NRCellCU[@id='15549']/NRCellRelation[@id='14427']",
              "value": {
                "attributes": []
              }
            },
            {
              "edit-id": "ded43434-2",
              "operation": "create",
              "target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver1']/NRCellCU[@id='15548']/NRCellRelation[@id='14426']",
              "value": {
                "attributes": [
                  {
                    "isHoAllowed": false
                  }
                ]
              }
            },
            {
              "edit-id": "ded43434-3",
              "operation": "delete",
              "target": "ran-network:ran-network/NearRTRIC[@id='22']/GNBCUCPFunction[@id='cucpserver1']/NRCellCU[@id='15548']/NRCellRelation[@id='14426']"
            }
          ]
        }
      }
    }
  }
}
9. Number of DMI Plugin stubs

   1 DMI Plugin Stub is used for testing purposes.

10. Commit ID

   81eb7dfc2f100a72692d2cbd7ce16540ee0a0fd4

11. Commit ID link

   Commit Link

12. K6 script (to publish cloud events)

   The test script used to publish cloud events is located at:
   ..\cps\k6-tests\once-off-test\kafka\produce-avc-event.js
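
For illustration, the ce_* headers listed in row 7 above could be assembled per message roughly as follows. This is only a sketch: the helper name buildCloudEventHeaders is made up for this page, and it assumes the k6 runtime exposes crypto.randomUUID() (as referenced in the headers row); the actual produce-avc-event.js may build the headers differently.

    // Illustrative helper returning the Cloud Event headers described in the table above.
    export function buildCloudEventHeaders() {
        return {
            ce_type: 'org.onap.cps.ncmp.events.avc1_0_0.AvcEvent',
            ce_source: 'DMI',
            ce_destination: 'dmi-cm-events',
            ce_specversion: '1.0',
            ce_time: new Date().toISOString(),     // ISO-formatted timestamp
            ce_id: crypto.randomUUID(),            // unique event ID
            ce_dataschema: 'urn:cps.onap.cps.ncmp.events.avc1_0_0.AvcEvent:1.0.0',
            ce_correlationid: crypto.randomUUID(), // correlation ID
        };
    }

The returned object would then be passed as the headers field of each message handed to writer.produce(...) in the producer sketch earlier on this page.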

This analysis compares the performance of two environments, Linux and Windows, when forwarding Kafka messages under different loads. The test captures metrics such as CPU usage, memory usage, thread count, and the time taken to forward Kafka messages. Here's the detailed breakdown of the data provided:

Metric                                   | Linux (i9), 100,000 | Windows (i5), 100,000 | Linux (i9), 200,000 | Windows (i5), 200,000
Total number of Kafka messages           | 100,000             | 100,000               | 200,000             | 200,000
CPU Usage (%)                            | 44.2                | 82.1                  | 78.3                | 72.6
Memory Usage (MB)                        | 244                 | 195                   | 212                 | 222
Total Threads                            | 321                 | 320                   | 320                 | 319
First Message Processed (HH:MM:SS)       | 16:37:11            | 17:30:51              | 16:52:54            | 17:42:56
Last Message Processed (HH:MM:SS)        | 16:37:14            | 17:31:03              | 16:52:59            | 17:43:10
Total Time to Process Messages (Seconds) | 3                   | 12                    | 5                   | 14
Message Throughput (Messages/Second)     | 33,333              | 8,333                 | 40,000              | 14,286


Highlighted Points:

  1. Total Number of Kafka Messages:

    • Linux and Windows are tested with two sets of Kafka messages: 100,000 and 200,000 messages. This allows you to evaluate the system performance under both moderate and high loads.
  2. CPU Usage:

    • Linux uses 44.2% CPU for 100,000 messages and 78.3% for 200,000 messages; Windows uses 82.1% and 72.6% respectively. At 100,000 messages Linux shows significantly lower CPU usage than Windows, while at 200,000 messages the two are comparable.
    • Important: Windows CPU usage is already very high at 100,000 messages (82.1%), indicating that the Windows environment is under heavy load. At 200,000 messages it decreases slightly (72.6%), possibly due to some optimization at higher loads.
  3. Memory Usage:

    • Linux uses 244 MB and 212 MB for 100,000 and 200,000 messages respectively, while Windows uses 195 MB and 222 MB.
    • Important: Memory consumption is broadly similar across environments; Linux uses slightly less memory than Windows for 200,000 messages, while Windows uses slightly less for 100,000 messages.
  4. Thread Count:

    • The total number of threads used is relatively consistent across environments (around 320 threads).
  5. Message Forwarding Times:

    • The first and last Kafka messages were forwarded between 16:37:11 and 16:37:14 on Linux (for 100K messages), and between 17:30:51 and 17:31:03 on Windows.
    • For 200,000 messages, Linux starts forwarding at 16:52:54 and ends at 16:52:59, whereas Windows starts at 17:42:56 and finishes at 17:43:10.
    • Important: Linux consistently forwards messages faster, finishing in fewer seconds compared to Windows.
  6. Time Taken to Forward Kafka Messages:

    • For 100,000 messages:
      • Linux forwards in 3 seconds, while Windows takes 12 seconds.
    • For 200,000 messages:
      • Linux forwards in 5 seconds, while Windows takes 14 seconds.
    • Important: Linux is significantly faster than Windows at processing the Kafka messages.
  7. Message Throughput (Messages/Second):

    • For 100,000 messages, Linux processes 33,333 messages/second, while Windows processes only 8,333 messages/second.
    • For 200,000 messages, Linux processes 40,000 messages/second, while Windows handles 14,286 messages/second.
    • Important: Linux consistently handles Kafka messages at a higher rate, showing better throughput (see the worked example after this list).
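
The throughput figures are simply the message totals divided by the processing window taken from the results table; a minimal worked example for Linux at 100,000 messages:

    // Throughput = total messages / (last message time - first message time)
    const totalMessages = 100000;
    const processingSeconds = 3;                           // 16:37:14 - 16:37:11 on Linux (i9)
    const throughput = totalMessages / processingSeconds;  // ≈ 33,333 messages/second

The same calculation gives 8,333 messages/second for Windows at 100,000 messages, 40,000 for Linux at 200,000 messages, and roughly 14,286 for Windows at 200,000 messages.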

Verdict / Conclusion:

  1. CPU performance is the main factor:

    • Message Processing Speed: The i9 machine processes Kafka messages much faster than the i5 machine. For both 100,000 and 200,000 messages, the i9 takes significantly less time to forward the messages, indicating better handling of high message throughput.

Recommendation:

To process a high Kafka load, high CPU availability is recommended, as CPU appears to be the main factor determining message throughput.
