Test architecture
Client tests are conducted using the following architecture:
- HV-VES Client - produces a high volume of events for processing.
- Processing Consumer - consumes events from Kafka topics and creates performance metrics.
- Offset Consumer - reads Kafka offsets.
- Prometheus - scrapes performance metrics from HV-VES, the Processing Consumer and the Offset Consumer, and provides the data to Grafana.
- Grafana - delivers analytics and visualization of the collected metrics.
The link between the HV-VES Client and HV-VES is TLS-secured (the provided scripts generate certificates and place them in the proper containers).
Note: In the "Without DMaaP Kafka" tests, the DMaaP/Kafka service was substituted with wurstmeister/kafka.
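For reference, the scraping side of this architecture can be summarized in a Prometheus configuration. The sketch below is an illustrative assumption, not the file used by the test scripts: the HV-VES target uses the metrics port 6060 exposed in the Test Setup section, while the consumer service names and ports are placeholders.

# prometheus.yml (sketch) - consumer targets are placeholders
scrape_configs:
  - job_name: 'hv-ves'
    static_configs:
      - targets: ['dcae-hv-ves-collector.onap:6060']        # metrics port exposed in Test Setup
  - job_name: 'processing-consumer'
    static_configs:
      - targets: ['hv-collector-kafka-consumer.onap:8080']  # assumed service name and port
  - job_name: 'offset-consumer'
    static_configs:
      - targets: ['hv-collector-offset-consumer.onap:8080'] # assumed service name and port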
Environment and Resources
A Kubernetes cluster with 4 worker nodes, sharing the hardware configuration shown in the table below, is deployed on an OpenStack cloud. The test components are further deployed on the Kubernetes cluster as Docker containers.
Configuration | | |
---|---|---|
CPU | Model | Intel(R) Xeon(R) CPU E5-2680 v4 |
 | No. of cores | 24 |
 | CPU clock speed [GHz] | 2.40 |
RAM | Total RAM [GB] | 62.9 |
Network Performance
Pod measurement method
Cluster network performance was checked with iperf3, a tool for measuring the maximum bandwidth on IP networks. It runs in two modes: server and client. We used the networkstatic/iperf3 Docker image.
The following deployment creates a pod with iperf (server mode) on one worker and one iperf client pod on each worker.
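The deployment.yaml itself is not reproduced here; the sketch below shows one way it could be structured. Only the iperf3-server service name and the networkstatic/iperf3 image come from this section; the labels, the port and the DaemonSet used to get one client per worker are illustrative assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf3-server
  namespace: onap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3
          args: ["-s"]                    # run iperf3 in server mode
          ports:
            - containerPort: 5201         # iperf3 default port
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-server                     # name used by the client commands below
  namespace: onap
spec:
  selector:
    app: iperf3-server
  ports:
    - port: 5201
---
apiVersion: apps/v1
kind: DaemonSet                           # schedules one client pod on every worker
metadata:
  name: iperf3-client
  namespace: onap
spec:
  selector:
    matchLabels:
      app: iperf3-client
  template:
    metadata:
      labels:
        app: iperf3-client
    spec:
      containers:
        - name: iperf3-client
          image: networkstatic/iperf3
          command: ["sleep", "infinity"]  # keep the pod idle; iperf3 is run via kubectl exec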
To create the deployment, execute the following command:
kubectl create -f deployment.yaml
To find all iperf pods, execute:
kubectl -n onap get pods -o wide | grep iperf
To measure the connection between pods, run iperf3 on an iperf-client pod using the following command:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server
To change the output format from MBits/sec to MBytes/sec:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
To change the measurement time:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -t <time-in-seconds>
To gather the results, the following command was executed:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
Results of the performed tests
- worker1: 136 MBytes/sec
- worker2: 87 MBytes/sec
- worker3: 135 MBytes/sec
- worker0: 2282 MBytes/sec (iperf client and server run on the same worker)

Average speed (without worker0): 119 MBytes/sec
Test Setup
Preconditions
- Installed ONAP (Frankfurt)
- Plain TCP connection between HV-VES and clients (default configuration)
- Metric port exposed on HV-VES service
In order to reach the metrics endpoint of HV-VES, the following lines have to be added to the ports section of the HV-VES service configuration file:
- name: port-t-6060
  port: 6060
  protocol: TCP
  targetPort: 6060
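After the service is updated you can confirm that the port is exposed (the service name dcae-hv-ves-collector is the one referenced by the default HV-VES address in test.properties below):

kubectl -n onap get svc dcae-hv-ves-collector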
Before starting the tests, download the Docker image of the producer, which is available here:
To load the image locally, use the command:
docker load < hv-collector-go-client.tar.gz
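To verify that the image was loaded (the image name is the one referenced in producer-pod.yaml below):

docker images | grep hv-collector-go-client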
Modify the tools/performance/cloud/producer-pod.yaml file to use the above image and set imagePullPolicy to IfNotPresent:
...
spec:
  containers:
    - name: hv-collector-producer
      image: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-go-client:latest
      imagePullPolicy: IfNotPresent
      volumeMounts:
...
To execute the performance tests, we run functions from the cloud-based-performance-test.sh shell script located in the HV-VES project directory: ~/tools/performance/cloud/
First we have to generate certificates in the ~/tools/ssl folder by using gen_certs. This step only needs to be performed during the first test setup (or if the generated files have been deleted).
Generating certificates:
./cloud-based-performance-test.sh gen_certs
Then we call setup in order to send the certificates to HV-VES, deploy the Consumers, Prometheus and Grafana, and create their ConfigMaps.
Setting up the test environment:
./cloud-based-performance-test.sh setup
After completing the previous steps we can call the start function, which deploys the Producers and starts the test.
Performing the test:
./cloud-based-performance-test.sh start
For the start function we can use optional arguments:
- --load - whether the test should keep the defined number of producers running until the script is interrupted (default: false)
- --containers - number of producer containers to create (default: 1)
- --properties-file - path to the file with benchmark properties (default: ./test.properties)
- --retention-time-minutes - retention time of messages in Kafka, in minutes (default: 60)

Example invocations of test start:
Starting performance test with single producers creation:
./cloud-based-performance-test.sh start --containers 10
The command above starts a test that creates 10 producers, which send the number of messages defined in test.properties once.
Starting performance test with constant message load:
./cloud-based-performance-test.sh start --load true --containers 10 --retention-time-minutes 30
This invocation starts a load test, meaning the script will try to keep the number of running containers at 10, with a Kafka message retention of 30 minutes.
The test.properties file contains the Producer and Consumer configurations and allows setting the following properties:
Producer:
- hvVesAddress - HV-VES address (default: dcae-hv-ves-collector.onap:6061)
- client.count - number of clients per pod (default: 1)
- message.size - size of a single message in bytes (default: 16384)
- message.count - number of messages to be sent by each client (default: 1000)
- message.interval - interval between messages in milliseconds (default: 1)

Certificate paths:
- client.cert.path - path to the cert file (default: /ssl/client.p12)
- client.cert.pass.path - path to the cert's pass file (default: /ssl/client.pass)

Consumer:
- kafka.bootstrapServers - address of the Kafka service to consume from (default: message-router-kafka:9092)
- kafka.topics - Kafka topics to subscribe to (default: HV_VES_PERF3GPP)
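Put together, a test.properties file that just restates the defaults above would look like this (a sketch assembled from the listed keys and default values; the file shipped with the project may differ in ordering and comments):

# Producer
hvVesAddress=dcae-hv-ves-collector.onap:6061
client.count=1
message.size=16384
message.count=1000
message.interval=1

# Certificate paths
client.cert.path=/ssl/client.p12
client.cert.pass.path=/ssl/client.pass

# Consumer
kafka.bootstrapServers=message-router-kafka:9092
kafka.topics=HV_VES_PERF3GPP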
Results can be accessed under the following links:
HV-VES Performance test results
With DMaaP Kafka
Conditions
Tests were performed with 5 repetitions for each configuration shown in the table below.
Number of producers | Messages per producer | Payload size [B] | Interval [ms] |
---|---|---|---|
2 | 90000 | 8192 | 10 |
4 | 90000 | 8192 | 10 |
6 | 60000 | 8192 | 10 |
Raw results data
Raw results data with screenshots can be found in the following files:
- Series 1 - results_series_1.zip
- Series 2 - results_series_2.zip
Test Results - series 1
Test Results - series 2
No DMaaP Kafka Setup
Install Kafka Docker on Kubernetes
(based on: ultimate-guide-to-installing-kafka-docker-on-kuber)
Create config maps
ConfigMaps are required by the zookeeper and kafka-broker deployments.
kubectl -n onap create cm kafka-config-map --from-file=kafka_server_jaas.conf
kubectl -n onap create cm zk-config-map --from-file=zk_server_jaas.conf
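The contents of the two JAAS files are not shown here; for SASL/PLAIN, a minimal kafka_server_jaas.conf could look like the sketch below (the credentials are placeholders, not values from this setup):

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin_secret"
    user_admin="admin_secret";
};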
Create deployments
kubectl -n onap create -f zookeeper.yml
kubectl -n onap create -f kafka-service.yml
kubectl -n onap create -f kafka-broker.yml
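As an illustration, kafka-service.yml could be as simple as the following sketch (the service name kafka-service and the broker name are taken from the commands in this section; the label selector and Kafka's default port 9092 are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  namespace: onap
spec:
  selector:
    app: kafka-broker0      # assumed label on the broker pods
  ports:
    - port: 9092            # Kafka's default listener port
      targetPort: 9092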
Verify that pods are up and running
kubectl -n onap get pods | grep 'zookeeper-deployment-1\|broker0'
kubectl -n onap get svc | grep kafka-service
If you need to change a variable or anything else in a yml file, delete the current deployment first, for example:
kubectl -n onap delete deploy kafka-broker0
After modifying the file, create a new deployment as described above.
Run the test
Modify the tools/performance/cloud scripts to match the names used in your deployments, as described in the previous step. Here is a diff file (you may need to adapt it to the current state of the code):
Go to tools/performance/cloud and reboot the environment:
./reboot-test-environment.sh -v
Now you are ready to run the test.
Without DMaaP Kafka
Conditions
Tests were performed with the following configuration:
Messages per producer | Payload size [B] | Interval [ms] |
---|---|---|
90000 | 8192 | 10 |
Raw results data
Raw results data with screenshots can be found in the following files:
- Series 1 - results_series_1.zip
- Series 2 - results_series_2.zip
To see custom Kafka metrics, you may want to change kafka-and-producers.json (located in the HV-VES project directory: tools/performance/cloud/grafana/dashboards) to: