Summary

The performance test was executed by making requests against the Policy RESTful APIs residing on the XACML PDP installed in the windriver lab (PFPP) to create, push, and get policy decisions (see the example request sketched below).  The PDP was running on a kubernetes host, using the m2.xlarge spec, with the following configuration:

  • 16GB RAM
  • 8 VCPU
  • 160GB Disk
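
For illustration, a decision request against the PDP REST API might look roughly like the sketch below. The endpoint path, credentials, and payload fields here are placeholders for illustration only and are not taken from the actual test plan.

# Illustrative only: endpoint, credentials and payload are assumed,
# not taken from the actual test plan.
import requests

PDP_URL = "https://pdp.example.com:8081/pdp/api/getDecision"  # assumed endpoint and port

payload = {
    # Hypothetical decision attributes; the real test exercised its own policies.
    "decisionAttributes": {"onapName": "SampleApp", "key": "value"},
    "onapName": "SampleApp",
}

resp = requests.post(
    PDP_URL,
    json=payload,
    auth=("pdp-user", "pdp-password"),  # placeholder credentials
    verify=False,                       # lab certificates are typically self-signed
    timeout=10,
)
print(resp.status_code, resp.text)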

The performance test ran 10 simultaneous threads injecting ONSET messages.  Within each thread, the different ONSET types were injected and processed serially: first vCPE was injected and its associated APPC requests answered, then vFirewall, and so on.  Each thread repeated this process 1000 times.  APPC response messages were injected by the test plan wherever needed.
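
As a rough illustration of that injection pattern (not the actual test plan), the sketch below runs 10 threads, each injecting the four ONSET types serially for 1000 iterations; inject_onset and answer_appc_requests are hypothetical stand-ins for the real test-plan steps.

# Minimal sketch of the injection pattern described above.
import threading

USE_CASES = ["vCPE", "vFirewall", "vDNS", "VOLTE"]
THREADS = 10
ITERATIONS = 1000

def inject_onset(use_case):
    """Placeholder: publish an ONSET event for the given use case."""
    pass

def answer_appc_requests(use_case):
    """Placeholder: inject the APPC responses the use case expects."""
    pass

def worker():
    for _ in range(ITERATIONS):
        for use_case in USE_CASES:      # each use case handled serially within the thread
            inject_onset(use_case)
            answer_appc_requests(use_case)

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()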

Average processing time was calculated from the last 10,000 records in audit.log for each use case, together with the matching transaction times extracted from network.log, using a script that pairs each audit record with its corresponding network transaction.
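
A minimal sketch of such a pairing calculation is shown below. This is not the original script; the file paths, delimiter, and field positions are assumptions, not the actual ONAP log layouts.

# Rough sketch of the elapsed-time calculation described above.
# Assumes each audit.log line carries a request ID and a total elapsed time (ms),
# and each network.log line carries the same request ID and a network time (ms).
from collections import defaultdict
from statistics import mean

def load_times(path, id_field, time_field, delimiter="|"):
    times = {}
    with open(path) as f:
        for line in f:
            parts = line.strip().split(delimiter)
            if len(parts) > max(id_field, time_field):
                times[parts[id_field].strip()] = float(parts[time_field].strip())
    return times

audit = load_times("audit.log", id_field=0, time_field=5)      # assumed layout
network = load_times("network.log", id_field=0, time_field=3)  # assumed layout

matched, unmatched = [], []
for req_id, total_ms in audit.items():
    if req_id in network:
        matched.append(total_ms - network[req_id])  # subtract matching network time
    else:
        unmatched.append(total_ms)                  # no network time found; nothing subtracted

print(f"matched {len(matched)} samples, average {mean(matched):.0f} ms")
print(f"unmatched {len(unmatched)} samples, average {mean(unmatched):.0f} ms")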

The 72-hour stability test was executed on a VM running the Policy docker containers in an OpenStack cloud instance. The test execution resulted in ~135 million getDecision requests. Of those, 99.74% returned the expected response, while the remaining 0.26% unexpectedly returned response code 401 (Unauthorized). CPU and memory usage over the test period can be seen in the graphs below:

Result

Time taken by Drools PDP

Elapsed time for vCPE:
  matched 8870 samples, average 18 ms
  unmatched 1130 samples, average 339 ms

Elapsed time for vFirewall:
  matched 8871 samples, average 55 ms
  unmatched 1129 samples, average 177 ms

Elapsed time for vDNS:
  matched 8869 samples, average 9 ms
  unmatched 1131 samples, average 16 ms

Elapsed time for VOLTE:
  matched 8868 samples, average 9 ms
  unmatched 1132 samples, average 15 ms


Note: the “unmatched samples” are requests for which no corresponding network time could be identified by the reporting tool.  Hence those numbers represent the total elapsed time, with nothing subtracted out.


CPU Utilization

Total CPU used by the PDP was measured before and after the test, using "ps -l".

Initial CPU time: 00:36:55
Final CPU time: 00:43:28
Total CPU used during test: 393 s
Average CPU per ONSET: 9.8 ms
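
The figures above can be cross-checked directly: the test description implies 10 threads × 1000 iterations × 4 use cases = 40,000 ONSETs, and the CPU-time delta divided by that count gives the per-ONSET average. A quick sketch:

# Reproduce the average-CPU-per-ONSET figure from the ps CPU-time readings.
def to_seconds(hms):
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

total_cpu_s = to_seconds("00:43:28") - to_seconds("00:36:55")   # 393 s
onsets = 10 * 1000 * 4                                          # threads x iterations x use cases
print(total_cpu_s, "s total,", round(total_cpu_s * 1000 / onsets, 1), "ms per ONSET")
# -> 393 s total, 9.8 ms per ONSET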


Memory Utilization

Number of young garbage collections during the test: 1468
Average young garbage collection time: ~5.8 ms per collection
Total number of full garbage collections: 3
Average full garbage collection time: ~112 ms per collection

S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT
2048.0 2048.0  0.0   704.0  68096.0  59110.3   432640.0   86116.6   73344.0 71849.4 8320.0 7801.2    321    2.033   3      0.337    2.370
2560.0 1536.0  0.0   1088.0 93184.0  82890.4   432640.0   143366.1  73344.0 71882.9 8320.0 7804.8   1789   10.564   3      0.337   10.901
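
The two rows above appear to be jstat -gc samples taken before and after the test (GC times in seconds). Assuming that, the figures quoted above can be reproduced from the deltas:

# Derive the GC averages from the before/after jstat samples shown above.
ygc_before, ygct_before = 321, 2.033
ygc_after,  ygct_after  = 1789, 10.564
fgc_total,  fgct_total  = 3, 0.337          # FGC count did not change during the test

young_collections = ygc_after - ygc_before                   # 1468
avg_young_ms = (ygct_after - ygct_before) * 1000 / young_collections
avg_full_ms  = fgct_total * 1000 / fgc_total                 # average over the cumulative count

print(young_collections, "young GCs, average", round(avg_young_ms, 1), "ms per collection")
print("full GC average", round(avg_full_ms), "ms per collection")
# -> 1468 young GCs, average 5.8 ms per collection; full GC average 112 ms per collection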

Performance Metrics

Metric 4 - Maximum Simultaneous Executions
  Description: Measure the maximum number of simultaneous policy executions that can be achieved whilst maintaining system stability and resource utilization
  Result: 10
  Comments: DMaaP connection limitations prevented the test from running more than 10 simultaneous threads/ONSETs

Metric 5 - Multi-Threaded Response Time
  Description: Measure the execution time for onset and abatement in each use case when multiple threads are injecting ONSET events simultaneously
  Result: vCPE - 18 ms; vFirewall - 55 ms; vDNS - 9 ms; VOLTE - 9 ms

Metric 6 - Multi-Threaded CPU Usage
  Description: CPU usage for each use case when multiple threads are injecting ONSET events simultaneously
  Result: 9.8 ms average CPU per ONSET

Metric 7 - Multi-Threaded Memory Usage
  Description: Memory usage for each use case when multiple threads are injecting ONSET events simultaneously
  Result: see the Memory Utilization figures above
