VES Collector Performance Test

DMaaP Simulator


The DMaaP Simulator is a simple Spring Boot application that exposes two endpoints. The first (@PostMapping("/events/unauthenticated.SEC_FAULT_OUTPUT")) receives events published to the fault topic; the second (@GetMapping("/summary")) returns the number of received events and their average processing time in milliseconds (derived from the "startEpochMicrosec" field of each event).

JMeter generates the current epoch timestamp (the current time) and updates this field in every event it sends.
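The summary bookkeeping described above can be sketched roughly as follows. This is a minimal illustration, not the actual simulator code: the class and method names are made up, and the simulator is assumed to keep simple in-memory counters that compare "startEpochMicrosec" against the arrival time of each event.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the simulator's bookkeeping: for every event
// received on the fault topic we compare "startEpochMicrosec" (set by
// JMeter) with the current time to derive the async processing latency,
// and the /summary endpoint reports the count and the average.
public class EventSummary {
    private final AtomicLong eventCount = new AtomicLong();
    private final AtomicLong totalLatencyMillis = new AtomicLong();

    // Called for each event received on the fault topic endpoint.
    public void record(long startEpochMicrosec, long nowEpochMicrosec) {
        long latencyMillis = (nowEpochMicrosec - startEpochMicrosec) / 1000;
        eventCount.incrementAndGet();
        totalLatencyMillis.addAndGet(latencyMillis);
    }

    // Data served by the /summary endpoint.
    public long count() {
        return eventCount.get();
    }

    public long averageProcessingTimeMillis() {
        long n = eventCount.get();
        return n == 0 ? 0 : totalLatencyMillis.get() / n;
    }
}
```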


Architecture


DMaaP Simulator image:


The DMaaP Simulator supports the VES collector in the Frankfurt release.

What is measured

JMeter test results & metrics

  • Total Events Sent - total number of events sent by JMeter (including failed requests)
  • Failed Requests - total number of failed requests
  • Error Rate % - ratio of 'Failed Requests' to 'Total Events Sent', in percent
  • DMaaP - Received Events - total number of events received by DMaaP on the fault topic
  • Total Throughput - number of events sent per second by JMeter
  • Total Errors - number of failed requests per second
  • Active Threads - number of active threads per second
  • Sync Processing Time (Client → VES) - time measured from JMeter sending the request to JMeter receiving the response
  • Async Processing Time (Client → VES → DMaaP) - time measured from JMeter sending the request to DMaaP receiving the event
  • RAM Usage - RAM usage of the JMeter VM
  • CPU Usage - CPU usage of the JMeter VM
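To make the two derived metrics concrete, here is a tiny worked example; the counter values are made up purely for illustration and do not come from any test run in this page.

```java
public class MetricsExample {
    public static void main(String[] args) {
        // Hypothetical raw counters from a 5-minute run (illustrative only).
        long totalEventsSent = 9000;      // includes failed requests
        long failedRequests = 135;
        double testDurationSeconds = 300; // 5 minutes

        // Error Rate % = Failed Requests / Total Events Sent, in percent.
        double errorRatePercent = 100.0 * failedRequests / totalEventsSent;

        // Total Throughput = events sent per second.
        double totalThroughput = totalEventsSent / testDurationSeconds;

        System.out.println(errorRatePercent + " % error rate, "
                + totalThroughput + " RPS throughput");
    }
}
```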

VES metrics

  • Uptime - how long VES has been running
  • Start time - when VES was started
  • Heap used - current heap usage in percent
  • Non-Heap used - current non-heap usage in percent
  • Processing time eventListener endpoint - execution time of the eventListener method in VES
  • Rate - number of HTTP requests per second
  • Duration - maximum and average HTTP request processing time (requests other than 5xx) in milliseconds
  • Errors - number of 4xx and 5xx responses per second
  • JVM Heap - JVM Heap usage 
    • used - the amount of used memory
    • committed - the amount of memory in bytes that is committed for the Java virtual machine to use
    • max - the maximum amount of memory in bytes that can be used for memory management
  • JVM Non-Heap - JVM Non-Heap usage
    • used, committed, max as in JVM Heap 
  • JVM Total - JVM Heap + JVM Non-Heap 
    • used, committed, max as in JVM Heap
  • CPU Usage - VES CPU usage (note that VES can use all the CPU available on the worker node)
    • system - CPU usage for the whole system
    • process - CPU usage for the Java Virtual Machine process
    • process-1h - average CPU usage for the Java Virtual Machine process over 1h
  • Load
    • system-1m - number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
    • cpus - the number of processors available to the Java virtual machine
  • Threads 
    • live -  the current number of live threads including both daemon and non-daemon threads
    • daemon - the current number of live daemon threads
    • peak - the peak live thread count since the Java virtual machine started or peak was reset
  • Thread States - the current number of threads in each state
    • runnable
    • blocked
    • waiting
    • timed-waiting
    • new
    • terminated

K8s metrics

  • Nodes CPU Usage - current CPU usage on each worker node 
  • Nodes RAM Usage - current RAM usage on each worker node
  • Nodes Total CPU usage - CPU usage on each node over time
  • Network Usage Receive - incoming network traffic on each node in MBs
  • Nodes Total RAM Usage - RAM usage on each node over time
  • Usage of each core - CPU usage of each core on each worker node
  • Network Usage Transmit - outgoing network traffic on each node in MBs

Results

Environment 1

  • CPU - 8 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB 


Test scenario | Description | JMeter test results & metrics | VES metrics | K8s metrics

1_test_scenario_6_steps.jmx

  • 6.6RPS → 2.5min
  • 11.6RPS → 2.5min 
  • 23.3RPS → 2.5min
  •  33.3RPS → 2.5min
  • 50RPS → 2.5min
  • 66.6RPS → 2.5min




2_test_scenario_1k_rps.jmx

  • 1000RPS → 1s

2_test_scenario_2k_rps.jmx

  • 2000RPS → 1s

2_test_scenario_3k_rps.jmx

  • 3000RPS → 1s 

2_test_scenario_4k_rps.jmx
  • 4000RPS → 1s 

2_test_scenario_5k_rps.jmx

  • 5000RPS → 1s 

custom
  • 11.1RPS → 15min

custom
  • 22.2RPS → 15min

custom
  • 33.3RPS → 15min 

custom 
  • 44.4RPS → 15min


Environment 2

  • CPU - 24 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB 
Test scenario | Description | JMeter test results & metrics | VES metrics | K8s metrics
custom
  • 11.1RPS → 1h

custom
  • 22.2RPS→ 30min

custom
  • 30RPS → 10min

custom
  • 35RPS → 10min

custom
  • 45RPS → 10min

custom
  • 50RPS → 10min

custom
  • 80RPS → 10min

custom
  • 120RPS → 5min

custom
  • 130RPS → 5min


Test scenario | Description | JMeter test results & metrics | VES metrics | VES additional metrics
custom
  • 11RPS → 2days

Presentation



Replacing Cambria with DMaaP Client

Presentation



Performance Tests with real DMaaP 


Environment 1:

  • CPU - 8 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB

VES with DMaaP Client


Test scenario | Description | JMeter test results & metrics | VES metrics | K8s metrics

3_test_scenario_10_rps_time_300.jmx | 10RPS → 5min

3_test_scenario_10_rps_time_300.jmx | 10RPS → 5min

3_test_scenario_10_rps_time_300.jmx | 10RPS → 5min

3_test_scenario_20_rps_time_300.jmx | 20RPS → 5min

3_test_scenario_20_rps_time_300.jmx | 20RPS → 5min

3_test_scenario_20_rps_time_300.jmx | 20RPS → 5min

3_test_scenario_50_rps_time_300.jmx | 50RPS → 5min

3_test_scenario_50_rps_time_300.jmx | 50RPS → 5min

3_test_scenario_50_rps_time_300.jmx | 50RPS → 5min




VES with Cambria


Test scenario | Description | JMeter test results & metrics | VES metrics | K8s metrics

3_test_scenario_10_rps_time_300.jmx | 10RPS → 5min

3_test_scenario_10_rps_time_300.jmx | 10RPS → 5min

3_test_scenario_10_rps_time_300.jmx | 10RPS → 5min

3_test_scenario_20_rps_time_300.jmx | 20RPS → 5min

3_test_scenario_20_rps_time_300.jmx | 20RPS → 5min

3_test_scenario_20_rps_time_300.jmx | 20RPS → 5min

3_test_scenario_50_rps_time_300.jmx | 50RPS → 5min

3_test_scenario_50_rps_time_300.jmx | 50RPS → 5min

3_test_scenario_50_rps_time_300.jmx | 50RPS → 5min

Environment 2 

  • CPU - 24 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB

VES with DMaaP client

Test scenario | Description | JMeter test results & metrics | VES metrics | K8s metrics

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min


VES with Cambria client

Test scenario | Description | JMeter test results & metrics | VES metrics | K8s metrics

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

3_test_scenario_50_rps_time_600.jmx | 50RPS → 10min

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min

3_test_scenario_100_rps_time_600.jmx | 100RPS → 10min


Summary test results:


Environment 1

  • CPU - 8 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB




Scenario | Average and Max Sync Processing Time (Client → VES) | Average VES Processing time | Error Rate [%] | Max CPU Usage [%]

VES with DMaaP | 50th percentile | 95th percentile | 99th percentile

10RPS → 5min | 127ms, 287ms | 160ms, 1.06s | 180ms, 1.07s | 98ms | 0 | 39
 | 123ms, 172ms | 137ms, 528ms | 151ms, 544ms | 98ms | 0 | 37
 | 123ms, 170ms | 139ms, 513ms | 160ms, 531ms | 98ms | 0 | 37
20RPS → 5min | 125ms, 412ms | 157ms, 1.23s | 178ms, 1.33s | 101ms | 0 | 48
 | 123ms, 412ms | 157ms, 1.12s | 231ms, 1.53s | 101ms | 0 | 53
 | 123ms, 215ms | 158ms, 985ms | 221ms, 2.91s | 100ms | 0 | 47
50RPS → 5min | 293ms, 3.33s | 484ms, 6.27s | 607ms, 6.48s | 269ms | 4 | 95
 | 281ms, 3.14s | 427ms, 5.99s | 531ms, 7.35s | 270ms | 0 | 94*
 | 298ms, 3.36s | 463ms, 6.04s | 615ms, 7.48s | 271ms | 0 | 97

VES with Cambria | 50th percentile | 95th percentile | 99th percentile

10RPS → 5min | 123ms, 272ms | 153ms, 1.04s | 172ms, 1.05s | 93ms | 0 | 40
 | 119ms, 174ms | 135ms, 547ms | 147ms, 547ms | 95ms | 0 | 33
 | 119ms, 174ms | 135ms, 538ms | 149ms, 546ms | 93ms | 0 | 34
20RPS → 5min | 124ms, 544ms | 152ms, 1.08s | 217ms, 3.86s | 96ms | 0 | 46
 | 125ms, 595ms | 150ms, 1.08s | 202ms, 3.03s | 96ms | 0 | 49
 | 127ms, 682ms | 152ms, 1.18s | 213ms, 3.46s | 97ms | 0 | 46
50RPS → 5min | 219ms, 2.6s | 335ms, 3.13s | 504ms, 7.s | 219ms | 144 | 97
 | 240ms, 3.23s | 353ms, 4.84s | 458ms, 6.38s | 236ms | 144 | 93
 | 312ms, 3.82s | 569ms, 6.20s | 774ms, 9.47s | 276ms | 144 | 98



Environment 2

  • CPU - 24 cores
  • CPU clock speed - 2.4 GHz
  • Max Heap - 512 MB
  • Start Heap - 256 MB




Scenario | Average and Max Sync Processing Time (Client → VES) | Average VES Processing time | Error Rate [%] | Max CPU Usage [%]

VES with DMaaP | 50th percentile | 95th percentile | 99th percentile

50RPS → 10min | 122ms, 349ms | 181ms, 3.23s | 591ms, 4.43s | 91ms | 0 | 36
 | 124ms, 515ms | 204ms, 4.27s | 609ms, 5.48s | 90ms | 0 | 42
 | 121ms, 399ms | 177ms, 2.17s | 561ms, 5.22s | 91ms | 0 | 30
100RPS → 10min | 274ms, 7.06s | 570ms, 7.93s | 1.0s, 8.06s | 139ms | 0.05 | 72
 | 574ms, 6.07s | 1.17s, 14.19s | 1.89s, 15.37s | 201ms | 0 | 91
 | 291ms, 5.88s | 415.5ms, 6.45s | 922ms, 11.3s | 143ms | 0 | 78

VES with Cambria | 50th percentile | 95th percentile | 99th percentile

50RPS → 10min | 118ms, 520ms | 174ms, 2.10s | 571ms, 6.48s | 90ms | 0 | 48
 | 122ms, 548ms | 230ms, 5.17s | 581ms, 5.54s | 88ms | 0 | 32
 | 123ms, 557ms | 194ms, 2.50s | 676ms, 5.12s | 88ms | 0 | 42
100RPS → 10min | 301ms, 5.79s | 772ms, 16.97s | 1.16s, 17.08s | 153ms | 0 | 88
 | 340ms, 7.13s | 636ms, 17.87s | 1.15s, 18.48s | 149ms | 0 | 76
 | 307ms, 8.29s | 506ms, 9.44s | 855ms, 9.78s | 155ms | 0 | 88



Conclusion:

The results of the performance tests for both VES collector client implementations (DMaaP client and Cambria client) are very similar: max CPU usage, error rate, average VES processing time, and average and max sync processing time (Client → VES) are almost the same.

In the VES collector with the DMaaP client (100RPS for 10 min), an error related to the connection pool appeared. In that specific case the connection pool size was set to 16, and we got an error that the connection pool limit had been reached (stack trace in attachment).

We have to handle that kind of error in code.
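As a generic illustration of how such pool exhaustion could be handled, a sketch is shown below. This is not the actual VES collector fix: the Semaphore stands in for the HTTP connection pool of size 16, and all class and method names are made up.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Generic sketch of guarding against connection-pool exhaustion:
// the Semaphore stands in for a pool of 16 HTTP connections.
public class PooledSender {
    private final Semaphore pool = new Semaphore(16);

    // Try to take a connection; back off and retry instead of
    // surfacing a "pool limit reached" error to the caller.
    public boolean send(Runnable request) throws InterruptedException {
        for (int attempt = 0; attempt < 3; attempt++) {
            if (pool.tryAcquire(100, TimeUnit.MILLISECONDS)) {
                try {
                    request.run(); // stand-in for the actual HTTP call
                    return true;
                } finally {
                    pool.release();
                }
            }
            // Pool exhausted: wait briefly before the next attempt.
            Thread.sleep(50L * (attempt + 1));
        }
        return false; // all retries exhausted
    }
}
```

The key design choice is to treat pool exhaustion as a transient, retryable condition rather than an immediate failure.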




