Summary
The performance test was triggered by injecting vCPE, vFirewall, vDNS, and VOLTE onset and abatement (where applicable) messages through the REST interface of the Drools PDP installed in the Windriver lab (PFPP). The PDP was running on a Kubernetes host using the m2.xlarge flavor, with the following configuration:
- 32GB RAM
- 8 VCPU
- 160GB Disk
The test environment was set up with simulated A&AI, VFC, and SO components, enabled via "features enable controlloop-utils". The performance test does, however, subscribe to the POLICY-CL-MGT topic, which is not simulated, to determine when the PDP has completed each step.
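As a point of reference, the simulators can be enabled and the POLICY-CL-MGT topic polled from a shell. This is only a sketch: the pod name, Message Router host/port, and consumer group/id below are illustrative assumptions rather than values from the test environment.

    # Enable the simulated A&AI, VFC and SO components inside the drools-pdp container,
    # then restart the engine so the feature takes effect (pod name is an assumption).
    kubectl exec -it <drools-pdp-pod> -- bash -c "policy stop; features enable controlloop-utils; policy start"

    # Poll the (non-simulated) POLICY-CL-MGT topic to observe the PDP's notifications
    # as it completes each step; host/port and consumer group/id are assumptions.
    MR_HOST=message-router:3904
    curl -s "http://${MR_HOST}/events/POLICY-CL-MGT/perfGroup/perfConsumer1?timeout=15000"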
...
The average processing time was calculated from the last 10000 records in audit.log for each use case, together with the matching transaction times extracted from network.log, using the script below:
https://gerrit.onap.org/r/gitweb?p=policy/drools-applications.git;a=blob;f=testsuites/performance/src/main/resources/amsterdam/generate_mt_performace_report.sh;h=846628543b5a127c49f57a1fa1f4a254dfff64da;hb=refs/heads/master
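The full report generation lives in the script linked above. As a rough illustration of the same calculation, the sketch below averages the elapsed-time values of the last 10000 audit.log records for a single use case; the grep pattern and the pipe-delimited field number are assumptions about the log layout, not taken from the actual script.

    # Average the elapsed-time column over the last 10000 audit.log records for one use case.
    # "vCPE" and the field number ($7) are illustrative assumptions about the audit.log format.
    tail -n 10000 audit.log | grep "vCPE" \
      | awk -F'|' '{ sum += $7; n++ } END { if (n > 0) printf "samples=%d average=%.0f ms\n", n, sum / n }'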
Result
Time taken by Drools PDP
    Elapsed time for vCPE      : matched 88238870 samples, average 1918 ms; unmatched 11771130 samples, average 338339 ms
    Elapsed time for vFirewall : matched 88268871 samples, average 5855 ms; unmatched 11741129 samples, average 177 ms
    Elapsed time for vDNS      : matched 8869 samples, average 9 ms; unmatched 100001131 samples, average 1816 ms
    Elapsed time for VOLTE     : matched 8868 samples, average 9 ms; unmatched 100001132 samples, average 1615 ms
Note: the “unmatched samples” are requests for which no corresponding network time could be identified by the reporting tool. Hence those numbers represent the total elapsed time, with nothing subtracted out.
CPU Utilization
The total CPU time used by the PDP was measured before and after the test using "ps -l".
Initial CPU time | Final CPU time | Total CPU used during test | Average CPU per ONSET |
---|---|---|---|
00:36:55 | 00:43:28 | 393 s | 9.8 ms |
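The figures above can be reproduced by sampling the PDP process's cumulative CPU time before and after the run. The sketch below uses "ps -o cputime" (the same TIME value shown by "ps -l"); the process id and the total ONSET count are placeholders to be filled in for the actual test.

    # Cumulative CPU time (HH:MM:SS) of the PDP process, sampled before and after the test.
    PDP_PID=<drools-pdp-pid>          # assumption: e.g. obtained with pgrep
    before=$(ps -o cputime= -p "$PDP_PID")
    # ... run the performance test ...
    after=$(ps -o cputime= -p "$PDP_PID")

    # Convert HH:MM:SS to seconds, take the difference, and divide by the number of
    # ONSETs injected to get the per-ONSET average shown in the table.
    to_secs() { IFS=: read -r h m s <<< "$1"; echo $(( 10#$h * 3600 + 10#$m * 60 + 10#$s )); }
    total=$(( $(to_secs "$after") - $(to_secs "$before") ))
    ONSETS=<total-onsets-injected>    # assumption: total ONSET count across all use cases
    awk -v t="$total" -v n="$ONSETS" 'BEGIN { printf "total CPU: %d s, per ONSET: %.1f ms\n", t, t * 1000 / n }'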
Memory Utilization
Number of young garbage collections during the test: 1468
Average young garbage collection time: ~5.8 ms per collection
Total number of full garbage collections: 3
Average full garbage collection time: ~112 ms per collection

    S0C     S1C     S0U    S1U      EC        EU        OC        OU        MC       MU       CCSC    CCSU    YGC    YGCT    FGC   FGCT    GCT
    2048.0  2048.0  0.0    704.0    68096.0   59110.3   432640.0  86116.6   73344.0  71849.4  8320.0  7801.2  321    2.033   3     0.337   2.370
    2560.0  1536.0  0.0    1088.0   93184.0   82890.4   432640.0  143366.1  73344.0  71882.9  8320.0  7804.8  1789   10.564  3     0.337   10.901
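The garbage-collection figures come from "jstat -gc" samples of the PDP's JVM taken before and after the run (the two rows of counters above). A minimal sketch of that measurement, assuming the PDP's process id is known:

    # Sample the JVM heap and GC counters before and after the test run; jstat -gc prints
    # the S0C/S1C/.../YGC/YGCT/FGC/FGCT/GCT columns shown above.
    PDP_PID=<drools-pdp-pid>      # assumption
    jstat -gc "$PDP_PID" | tee gc_before.txt
    # ... run the performance test ...
    jstat -gc "$PDP_PID" | tee gc_after.txt
    # Young-collection count and time for the test are the deltas of the YGC and YGCT
    # columns: 1789 - 321 = 1468 collections, 10.564 - 2.033 = 8.531 s, i.e. ~5.8 ms each.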
Performance Metrics
No. | Metric | Description | Result | Comments |
---|---|---|---|---|
4 | Maximum Simultaneous Executions | Measure the maximum number of simultaneous policy executions that can be achieved whilst maintaining system stability and acceptable resource utilization | 10 | DMaaP connection limitations prevented the test from running more than 10 simultaneous threads/ONSETs |
5 | Multi-Threaded Response Time | Measure the execution time for onset and abatement in each use case when multiple threads are injecting ONSET events simultaneously | vCPE - 19 18 ms vFirewall - 58 55 ms vDNS - 18 9 ms VOLTE - 16 9 ms | |
6 | Multi Threaded CPU Usage | CPU Usage for each use case when multiple threads are injecting ONSET events simultaneously | 9.8 ms | |
7 | Multi Threaded Memory Usage | Memory Usage for each use case when multiple threads are injecting ONSET events simultaneously | See the Memory Utilization section above | |
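For completeness, the multi-threaded injection itself can be driven by a handful of parallel workers posting ONSET messages at the PDP's REST interface. The sketch below is illustrative only: the endpoint URL, port, and payload file are placeholders, not the actual values used by the drools-applications performance test suite.

    # Inject ONSET messages from 10 parallel workers (the maximum sustainable per row 4 above).
    # ONSET_URL and onset-vcpe.json are placeholders; substitute the endpoint and payloads
    # used by the drools-applications performance test suite.
    ONSET_URL="http://<drools-pdp-host>:<port>/<onset-injection-endpoint>"
    THREADS=10
    ONSETS_PER_THREAD=1000

    for t in $(seq 1 "$THREADS"); do
      (
        for i in $(seq 1 "$ONSETS_PER_THREAD"); do
          curl -s -o /dev/null -X POST -H "Content-Type: application/json" -d @onset-vcpe.json "$ONSET_URL"
        done
      ) &
    done
    wait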