
Gliffy diagram: Perf test env

DMaaP Simulator


The DMaap Simulator is a simple Spring Boot application that exposes two endpoints: a POST endpoint (@PostMapping("/events/unauthenticated.SEC_FAULT_OUTPUT")) that receives events on the fault topic, and a GET endpoint (@GetMapping("/summary")) that displays the event count and the average processing time in milliseconds (based on "startEpochMicrosec").
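The bookkeeping behind those two endpoints can be sketched as below. This is a minimal illustration, not the actual simulator code: the class and method names are mine; in the real application the POST handler would call something like onEvent(...) per received fault event and the GET /summary handler would render summary().

```java
// Minimal sketch (hypothetical names) of the state behind the two endpoints:
// the POST handler records each event, the GET /summary handler reports
// event count and average processing time in milliseconds.
import java.util.concurrent.atomic.AtomicLong;

public class EventStats {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalMicros = new AtomicLong();

    // startEpochMicrosec comes from the event body (set by JMeter);
    // receivedEpochMicrosec is the simulator's receive timestamp.
    public void onEvent(long startEpochMicrosec, long receivedEpochMicrosec) {
        count.incrementAndGet();
        totalMicros.addAndGet(receivedEpochMicrosec - startEpochMicrosec);
    }

    // What GET /summary would render.
    public String summary() {
        long n = count.get();
        double avgMs = n == 0 ? 0.0 : totalMicros.get() / (double) n / 1000.0;
        return "events=" + n + ", avgProcessingMs=" + avgMs;
    }

    public static void main(String[] args) {
        EventStats stats = new EventStats();
        stats.onEvent(1_000_000L, 1_050_000L);   // 50 ms
        stats.onEvent(2_000_000L, 2_150_000L);   // 150 ms
        System.out.println(stats.summary());     // events=2, avgProcessingMs=100.0
    }
}
```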

JMeter generates the current epoch timestamp (current time) and updates this field:

Image Added
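What the JMeter preprocessor does can be illustrated as follows: compute the current epoch time in microseconds and overwrite the startEpochMicrosec field in the event template. This is only a sketch under assumed names; JMeter itself does this with its own scripting elements, and the regex-based patching here is purely illustrative.

```java
// Hypothetical illustration of the JMeter step: stamp the event template's
// startEpochMicrosec field with the current epoch in microseconds.
import java.time.Instant;

public class EpochPatcher {
    // Epoch time in microseconds, as expected by startEpochMicrosec.
    public static long epochMicros(Instant now) {
        return now.getEpochSecond() * 1_000_000L + now.getNano() / 1_000L;
    }

    // Replace the field's value in the raw JSON template (sketch only;
    // a real script would use a proper JSON or template mechanism).
    public static String patch(String eventJson, long micros) {
        return eventJson.replaceAll("\"startEpochMicrosec\"\\s*:\\s*\\d+",
                "\"startEpochMicrosec\": " + micros);
    }

    public static void main(String[] args) {
        String template = "{\"startEpochMicrosec\": 0, \"eventName\": \"fault\"}";
        System.out.println(patch(template, epochMicros(Instant.now())));
    }
}
```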


Architecture

Gliffy diagram: DMaap Simulator architecture


DMaap Simulator image:

View file: ves-dmaa-simulator-image.tar


The DMaap Simulator supports the VES collector in the Frankfurt release.

What is measured

JMeter test results & metrics

  • Total Events Sent - total number of events sent by JMeter (including failed requests)
  • Failed Requests - total number of failed requests 
  • Error Rate % - 'Failed Requests' to 'Total Events Sent' ratio in percentages
  • DMaaP - Received Events -  total number of events received by DMaaP on Fault topic
  • Total Throughput - number of events sent per second by JMeter
  • Total Errors - failed requests per second
  • Active Threads - number of active threads per second
  • Sync Processing Time (Client → VES) - time measured from sending the request by JMeter to receiving the response by JMeter 
  • Async Processing Time (Client → VES → DMaaP) -  time measured from sending the request by JMeter to receiving the event by DMaaP
  • RAM Usage - RAM usage of JMeter VM
  • CPU Usage - CPU usage of JMeter VM
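The derived numbers in the list above are simple arithmetic over the raw counters; a sketch of the computations (method names are mine, not JMeter's):

```java
// Sketch of how Error Rate % and Total Throughput are derived from the raw
// JMeter counters listed above (hypothetical helper names).
public class JmeterMath {
    // 'Failed Requests' to 'Total Events Sent' ratio, in percent.
    static double errorRatePercent(long totalSent, long failed) {
        return totalSent == 0 ? 0.0 : failed * 100.0 / totalSent;
    }

    // Events sent per second over the whole test duration.
    static double throughputPerSec(long totalSent, double testDurationSec) {
        return totalSent / testDurationSec;
    }

    public static void main(String[] args) {
        // e.g. 30000 events over 600 s (10 min) with 15 failed requests
        System.out.println(errorRatePercent(30_000, 15));   // 0.05
        System.out.println(throughputPerSec(30_000, 600));  // 50.0
    }
}
```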

Ves metrics

  • Uptime - how long VES is running
  • Start time - when VES has been started
  • Heap used - current Heap usage in percentages
  • Non-Heap used - current Non-Heap usage in percentages
  • Processing time eventListener endpoint - method execution time in VES
  • Rate - number of HTTP requests per second
  • Duration - maximum and average HTTP request processing time (HTTP request other than 5xx) in milliseconds
  • Errors - number of 4xx and 5xx requests per second
  • JVM Heap - JVM Heap usage 
    • used - the amount of used memory
    • committed - the amount of memory in bytes that is committed for the Java virtual machine to use
    • max - the maximum amount of memory in bytes that can be used for memory management
  • JVM Non-Heap - JVM Non-Heap usage
    • used, committed, max as in JVM Heap 
  • JVM Total - JVM Heap + JVM Non-Heap 
    • used, committed, max as in JVM Heap
  • CPU Usage - VES CPU usage (note that VES can use the whole CPU available on the Worker Node)
    • system - CPU usage for the whole system
    • process - CPU usage for the Java Virtual Machine process
    • process-1h - average CPU usage for the Java Virtual Machine process over 1h
  • Load
    • system-1m - number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
    • cpus - the number of processors available to the Java virtual machine
  • Threads 
    • live -  the current number of live threads including both daemon and non-daemon threads
    • daemon - the current number of live daemon threads
    • peak - the peak live thread count since the Java virtual machine started or peak was reset
  • Thread States - the current number of threads in each state
    • runnable
    • blocked
    • waiting
    • timed-waiting
    • new
    • terminated
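All of the JVM figures above (heap/non-heap usage, thread counts and states, processors, load average) come from the standard platform MXBeans; whatever monitoring agent is scraping VES ultimately reads the same sources. A minimal, self-contained probe:

```java
// Read the JVM metrics listed above directly from the platform MXBeans.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.ThreadMXBean;

public class JvmMetricsProbe {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();       // used / committed / max
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();

        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

        System.out.println("heap used=" + heap.getUsed()
                + " committed=" + heap.getCommitted() + " max=" + heap.getMax());
        System.out.println("non-heap used=" + nonHeap.getUsed());
        System.out.println("threads live=" + threads.getThreadCount()
                + " daemon=" + threads.getDaemonThreadCount()
                + " peak=" + threads.getPeakThreadCount());
        System.out.println("cpus=" + os.getAvailableProcessors()
                + " load-1m=" + os.getSystemLoadAverage());
    }
}
```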

K8s metrics

  • Nodes CPU Usage - current CPU usage on each worker node 
  • Nodes RAM Usage - current RAM usage on each worker node
  • Nodes Total CPU usage - CPU usage on each node over time
  • Network Usage Receive - incoming network traffic on each node in MBs
  • Nodes Total RAM Usage - RAM usage on each node over time
  • Usage of each core - CPU usage of each core on each worker node
  • Network Usage Transmit - outgoing network traffic on each node in MBs

Results

Environment 1

  • CPU - 8 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB 


Test scenario | Description | JMeter test results & metrics | Ves metrics | K8s metrics

1_test_scenario_6_steps.jmx

  • 1000req/2.5min → 6.6RPS
  • 1750req/2.5min → 11.6RPS
  • 3500req/2.5min → 23.3RPS
  • 5000req/2.5min → 33.3RPS
  • 7500req/2.5min → 50RPS
  • 10000req/2.5min → 66.6RPS


Image Modified


Image Modified


Image Modified

2_test_scenario_1k_rps.jmx

  • 1000RPS → 1s

Image Modified

Image Modified

Image Modified

2_test_scenario_2k_rps.jmx

  • 2000RPS → 1s

Image Added

Image Added

Image Added

2_test_scenario_3k_rps.jmx

  • 3000RPS → 1s

Image Added

Image Added

Image Added

2_test_scenario_4k_rps.jmx
  • 4000RPS → 1s 

Image Added

Image Added

Image Added

2_test_scenario_5k_rps.jmx

  • 5000RPS → 1s

Image Added

Image Added

Image Added

custom
  • 11.1RPS → 15min

Image Added

Image Added

Image Added

custom
  • 22.2RPS → 15min
Image Added

Image Added

Image Added

custom
  • 33.3RPS → 15min 

Image Added

Image Added

Image Added

custom 
  • 44.4RPS → 15min

Image Added

Image Added

Image Added


Environment 2

  • CPU - 24 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB 


Test scenario | Description | JMeter test results & metrics | Ves metrics | K8s metrics
custom
  • 11.1RPS → 1h

Image Added

Image Added

Image Added

custom
  • 22.2RPS→ 30min

Image Added

Image Added

Image Added

custom
  • 30RPS → 10min

Image Added

Image Added

Image Added

custom
  • 35RPS → 10min

Image Added

Image Added

Image Added

custom
  • 45RPS → 10min

Image Added

Image Added

Image Added

custom
  • 50RPS → 10min

Image Added

Image Added

Image Added

custom
  • 80RPS → 10min

Image Added

Image Added

Image Added

custom
  • 120RPS → 5min

Image Added

Image Added

Image Added

custom
  • 130RPS → 5min

Image Added

Image Added

Image Added


Test scenario | Description | JMeter test results & metrics | Ves metrics | Ves additional metrics
custom
  • 11RPS → 2days

Image Added

Image Added

Image Added

Presentation


View file: DCAEGEN2 VES performance tests.odp


Replacing Cambria with DMaaP Client

Presentation


View file: VesCollectorEventBatchAndNewClient.pptx


Performance Tests with real DMaaP

Environment 

  • CPU - 24 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB

Ves with Dmaap client

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

Image Added

Image Added

Image Added


Ves with Cambria client

Ves version: 1.9.1


3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

Image Added

Image Added

Image Added

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

Image Added

Image Added

Image Added


Summary test results:

Environment 

  • CPU - 24 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB




Sync Processing Time (Client → VES) cells are given as "average, max".

VES with Dmaap | 50th percentile | 95th percentile | 99th percentile | Average VES Processing time | Error Rate [%] | Max CPU Usage [%]
50RPS → 10min | 122ms, 349ms | 181ms, 3.23s | 591ms, 4.43s | 91ms | 0 | 36
| 124ms, 515ms | 204ms, 4.27s | 609ms, 5.48s | 90ms | 0 | 42
| 121ms, 399ms | 177ms, 2.17s | 561ms, 5.22s | 91ms | 0 | 30
100RPS → 10min | 274ms, 7.06s | 570ms, 7.93s | 1.0s, 8.06s | 139ms | 0.05 | 72
| 574ms, 6.07s | 1.17s, 14.19s | 1.89s, 15.37s | 201ms | 0 | 91
| 291ms, 5.88s | 415.5ms, 6.45s | 922ms, 11.3s | 143ms | 0 | 78

VES with Cambria | 50th percentile | 95th percentile | 99th percentile | Average VES Processing time | Error Rate [%] | Max CPU Usage [%]
50RPS → 10min | 118ms, 520ms | 174ms, 2.10s | 571ms, 6.48s | 90ms | 0 | 48
| 122ms, 548ms | 230ms, 5.17s | 581ms, 5.54s | 88ms | 0 | 32
| 123ms, 557ms | 194ms, 2.50s | 676ms, 5.12s | 88ms | 0 | 42
100RPS → 10min | 301ms, 5.79s | 772ms, 16.97s | 1.16s, 17.08s | 153ms | 0 | 88
| 340ms, 7.13s | 636ms, 17.87s | 1.15s, 18.48s | 149ms | 0 | 76
| 307ms, 8.29s | 506ms, 9.44s | 855ms, 9.78s | 155ms | 0 | 88



Conclusion:

Results of the performance tests for both VES collector client implementations (DMaap client and Cambria client) are very similar.

Max CPU usage, error rate, average VES processing time, and average and max sync processing time (Client → VES) are almost the same.


In the VES collector with the DMaap client (100RPS for 10 min), an error with the connection pool appeared. In that specific case, the connection pool size was set to 16 and we got an error that the connection pool limit had been reached (stack trace in attachment).

We have to handle that kind of error in the code.
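One way such pool-exhaustion errors could be handled is to retry the publish with backoff instead of failing the event. The sketch below is a hypothetical illustration only: the helper name, the use of a generic RuntimeException, and the backoff policy are my assumptions; the real fix depends on the HTTP client library the DMaap client uses.

```java
// Hypothetical sketch: retry an operation (e.g. publishing to DMaaP) with
// linear backoff when it fails, instead of dropping the event on the first
// "connection pool limit reached" error.
import java.util.function.Supplier;

public class RetryOnPoolExhaustion {
    static <T> T withRetry(int maxAttempts, long backoffMs, Supplier<T> op) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {      // e.g. pool-exhausted exception
                last = e;
                try {
                    Thread.sleep(backoffMs * attempt);  // linear backoff
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw last;
                }
            }
        }
        throw last;                             // all attempts exhausted
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated publish that fails twice, then succeeds.
        String result = withRetry(5, 1, () -> {
            if (++calls[0] < 3) throw new RuntimeException("connection pool limit reached");
            return "published";
        });
        System.out.println(result + " after " + calls[0] + " attempts"); // published after 3 attempts
    }
}
```

Whether retrying is the right policy (versus enlarging the pool or applying backpressure to the collector's input) is a design decision the stack trace in the attachment should inform.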