...

Test Name | No Load (ms) | 500 AVC events/sec (ms) | % Loss/Gain | 1000 AVC events/sec (ms) | % Loss/Gain
Read | 34 | 35 | -2.5% | 37 | -7.5%
Read (Alternate ID) | 92 | 95 | -2.9% | 98 | -6.1%
Write | 40 | 42 | -5.6% | 46 | -14.6%
Write (Alternate ID) | 90 | 97 | -7.0% | 100 | -10.7%

...

Test Name | No Load (events/sec) | 500 AVC events/sec Load (events/sec) | % Loss/Gain | 1000 AVC events/sec Load (events/sec) | % Loss/Gain
Legacy Batch Read | 116 | 110 | -5.5% | 107 | -8.1%

Analysis:

  • Consistent with the read and write latencies above, legacy batch read throughput declines under load, dropping 5.5% at 500 AVC events/sec and 8.1% at 1,000 AVC events/sec. The growing loss suggests that higher event rates introduce processing overhead, likely from resource contention, increased latency, or queuing delays, which slow response times and reduce batch read throughput under heavy system load.

Conclusion & Recommendations

...

This indicates a gradual performance degradation as the event rate increases, emphasizing the need for optimization in high-load scenarios.

Recommendations:

...

Possible Bottlenecks

  • Kafka partitioning

    • Using a single Kafka partition can significantly affect event processing, especially under high load.

...

Parallel Processing for Write Operations: Implementing parallelism or batching for synchronous writes may reduce the observed degradation.

Impact of a Single Kafka Partition on Event Processing:

  1. Throughput Bottleneck:

    • Kafka distributes messages across partitions, allowing multiple consumers to process data in parallel.

    • With only one partition, all events are handled by a single consumer, limiting the processing speed.

  2. Increased Latency:

    • Since there is no parallelism, events are processed sequentially rather than concurrently.

    • As load increases (e.g., 500 or 1000 AVC events/sec), processing delays may accumulate, causing performance degradation.

  3. Consumer Scaling Limitation:

    • Kafka allows multiple consumers within a consumer group to read from different partitions.

    • With a single partition, adding more consumers will not improve performance since only one consumer can read from it at a time.
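
One way to observe this constraint on a live system is to inspect the consumer group's partition assignment: with a single partition, only one group member will ever hold an assignment, no matter how many consumers are started. A minimal check, assuming the broker is reachable at localhost:9092 and using a placeholder group name:

Code Block
languagebash
# Shows per-partition assignment and lag for the group; with one
# partition, only a single member appears with an assigned partition.
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group <cps-consumer-group>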

Recommendations to Improve Performance:

Increase the Number of Partitions

  • Use multiple partitions to enable parallel processing and higher throughput.

  • A good rule of thumb: Number of partitions = Number of consumers × Desired parallelism level.
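
As a sketch of this recommendation (the topic name and broker address are placeholders, not the actual CPS settings), the partition count of an existing topic can be raised with the standard kafka-topics.sh tool:

Code Block
languagebash
# Kafka only allows increasing the partition count, and adding
# partitions changes the key-to-partition mapping for keyed messages.
kafka-topics.sh --bootstrap-server localhost:9092 \
  --alter --topic <cps-events-topic> --partitions 6

For the extra partitions to translate into parallelism, the consumer group must also be scaled, up to one active consumer per partition.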

Tune Consumer Configuration

  • Increase fetch.max.bytes and max.poll.records, and tune auto.commit.interval.ms, to improve consumer throughput (an illustrative snippet follows below).
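
For illustration only, the snippet below shows where these settings would live in the consumer configuration; the values are assumptions to be validated against CPS message sizes and load, not measured recommendations:

Code Block
# Example values; tune against observed CPS throughput and latency.
fetch.max.bytes=52428800
max.poll.records=1000
auto.commit.interval.ms=5000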

Conclusion:

Yes, using a single Kafka partition is likely affecting event processing, especially under high event rates (500 or 1,000 AVC events/sec). Scaling partitions and optimizing consumer settings can help mitigate performance issues. 🚀

Notes: How to check the Kafka configuration of CPS?

Checking the Kafka Broker Configuration (Default Settings)

If you haven't explicitly set the number of partitions while creating the topic, Kafka may use the default partition count from the broker settings.

To check Kafka's default number of partitions, run:

Code Block
languagebash
grep num.partitions /etc/kafka/server.properties

Example Output:

Code Block
num.partitions=1

  • If it's 1, all newly created topics will have a single partition by default unless overridden.

  • This output confirms that Kafka in CPS is configured with a single partition by default (num.partitions=1).
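
The broker default only determines what newly created topics receive. The partition count of the topic CPS actually consumes can be confirmed directly (the topic name and broker address below are placeholders):

Code Block
languagebash
# Prints PartitionCount, ReplicationFactor and per-partition leaders.
kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic <cps-events-topic>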

...