Test architecture
The client tests are conducted using the following architecture:
- HV-VES Client - produces a high volume of events for processing.
- Processing Consumer - consumes events from Kafka topics and creates performance metrics.
- Offset Consumer - reads Kafka offsets.
- Prometheus - scrapes performance metrics from HV-VES, the Processing Consumer and the Offset Consumer, and provides data to Grafana.
- Grafana - delivers analytics and visualization of the metrics.
Note: In the Without DMaaP Kafka tests, the DMaaP/Kafka service was substituted with a wurstmeister kafka instance.
Environment and Resources
A Kubernetes cluster with 4 worker nodes, sharing the hardware configuration shown in the table below, is deployed in the OpenStack cloud operating system. The test components are deployed on the Kubernetes cluster as docker containers.
Configuration | | |
---|---|---|
CPU | Model | Intel(R) Xeon(R) CPU E5-2680 v4 |
 | No. of cores | 24 |
 | CPU clock speed [GHz] | 2.40 |
RAM | Total RAM [GB] | 62.9 |
Network Performance
Pod measurement method
To check cluster network performance, tests using Iperf3 were performed. Iperf is a tool for measuring the maximum achievable bandwidth on IP networks; it runs in two modes: server and client. We used the docker image networkstatic/iperf3.
The following deployment creates a pod with iperf (server mode) on one worker, and a pod with an iperf client on each worker.
To create the deployment, execute the following command:
kubectl create -f deployment.yaml
To find all iperf pods, execute:
kubectl -n onap get pods -o wide | grep iperf
To measure the connection between pods, run iperf on an iperf-client pod using the following command:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server
To change output format from MBits/sec to MBytes/sec:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
To change the measurement time:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -t <time-in-seconds>
To gather results, the following command was executed:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
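When comparing several runs, it can help to pull just the throughput number out of the iperf3 summary. A minimal sketch with the summary line hard-coded for illustration (the field positions assume the default iperf3 output layout; in practice, pipe in the command output instead):

```shell
# Parse the sender summary line of an iperf3 run; the sample line below is
# hard-coded for illustration, mirroring typical iperf3 output.
summary='[  5]   0.00-10.00  sec  1360 MBytes  136 MBytes/sec    0    sender'
speed=$(echo "$summary" | awk '{print $(NF-3)}')
echo "$speed"
```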
Results of performed tests
worker1 (136 MBytes/sec)
worker2 (87 MBytes/sec)
worker3 (135 MBytes/sec)
worker0 (2282 MBytes/sec) (iperf client and server exist on the same worker)
Average speed (without worker0): 119 MBytes/sec
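The average above can be recomputed from the per-worker figures; a quick sanity check in shell arithmetic:

```shell
# Average pod-to-pod throughput across workers 1-3; worker0 is excluded
# because its iperf client and server ran on the same node.
w1=136; w2=87; w3=135
avg=$(( (w1 + w2 + w3) / 3 ))
echo "${avg} MBytes/sec"
```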
Test Setup
Preconditions
- Installed ONAP
- Plain TCP connection between HV-VES and clients (default configuration)
- Metric port exposed on HV-VES service
In order to reach the metrics endpoint in HV-VES, add the following lines to the ports section of the HV-VES service configuration file:
- name: port-t-6060
  port: 6060
  protocol: TCP
  targetPort: 6060
Before starting the tests, download the docker image of the producer, which is available here. To load the image locally, use the command:
docker load < hv-collector-go-client.tar.gz
To execute the performance tests, we run functions from the shell script cloud-based-performance-test.sh, located in the HV-VES project directory: ~/tools/performance/cloud/
First, we have to generate certificates in the ~/tools/ssl folder by using gen_certs. This step only needs to be performed during the first test setup (or if the generated files have been deleted).
Generating certificates:
./cloud-based-performance-test.sh gen_certs
Then we call setup in order to send the certificates to HV-VES, deploy the Consumers, Prometheus and Grafana, and create their ConfigMaps.
Setting up the test environment:
./cloud-based-performance-test.sh setup
After that, we have to change the HV-VES configuration in the Consul KEY/VALUE tab (typically Consul can be accessed at port 30270 of any Controller node, e.g. http://slave1:30270/ui/#/dc1/kv/dcae-hv-ves-collector/edit).
Hint
How to access Consul UI?
Consul's address: http://<worker external IP>:<Consul External Port>
To check Consul External Port, execute:
ubuntu@onap-5422-rke-node:~$ kubectl -n onap get svc | grep consul
consul-server-ui NodePort 10.43.132.178 <none> 8500:31190/TCP 6d20h
----------------------------------------------------------------------------------------------------------------------------------------------
If the "consul-server-ui" service is not exposed on an external port (NodePort) but is configured as ClusterIP, please follow the steps below.
ubuntu@onap-5422-rke-node:~$ kubectl -n onap get svc | grep consul
consul-server-ui ClusterIP 10.43.132.178 <none> 8500/TCP 25h
ubuntu@onap-5422-rke-node:~$ kubectl -n onap edit svc consul-server-ui
----
...
apiVersion: v1
kind: Service
metadata:
...
spec:
...
ports:
...
type: ClusterIP --> NodePort ### change value to NodePort
status:
...
---------------------------------------
service/consul-server-ui edited
ubuntu@onap-5422-rke-node:~$ kubectl -n onap get svc | grep consul
consul-server-ui NodePort 10.43.132.178 <none> 8500:31190/TCP 25h
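The Consul External Port can also be extracted from the kubectl output with a small text-processing one-liner. A sketch, with the service line hard-coded from the transcript above (the pattern assumes the 8500 container port shown there; in practice, pipe in `kubectl -n onap get svc | grep consul-server-ui`):

```shell
# Extract the NodePort (the number after "8500:") from a 'kubectl get svc'
# line; the line is hard-coded here for illustration.
svc_line='consul-server-ui    NodePort    10.43.132.178   <none>    8500:31190/TCP    25h'
consul_port=$(echo "$svc_line" | sed -n 's/.*8500:\([0-9]*\)\/TCP.*/\1/p')
echo "$consul_port"
```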
After completing the previous steps, we can run the test. The table below lists the functions and parameters that can be passed to the cloud-based-performance-test.sh script.
Function | Description |
---|---|
gen_certs | generate certs in the ../../ssl directory |
setup | set up ConfigMaps and consumers |
setup_all | set up ConfigMaps, consumers and producers |
send_config | send the producers' configuration (message interval and payload), located in producers-config/producer-config.json, to each producer |
start_interval | start interval mode; the config file is located in producers-config/interval-config.json. Optional parameters: --producers (number of producers in the deployment, default 10), --retention-time-minutes (message retention time on Kafka in minutes, default 60) |
start_instant | start instant mode; the config file is located in producers-config/instant-config.json. Optional parameters: as for start_interval |
scale_producers | scale the producer deployment to the number provided in the argument |
stop | stop all producers |
reset_producers | reset all metrics on each producer |
clean | remove ConfigMaps and HV-VES consumers |
clean_all | remove ConfigMaps, HV-VES consumers and producers |
help | print usage |

Performing the test:
./cloud-based-performance-test.sh start_interval --producers 10
The command above starts the test that creates 10 producers which send messages in the interval mode. The parameters can be changed in configuration files located in producers-config folder.
The above request body queues a task to create new connections for 1 second with a 100 millisecond interval, and then for 2 seconds with a 100 millisecond interval, meaning that a total of 30 connections should be set up over 3 seconds.
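The connection count quoted above follows directly from the interval settings; a quick check in shell arithmetic:

```shell
# 1 s of connections every 100 ms, then 2 s of connections every 100 ms:
# 10 + 20 connections over 3 seconds in total.
interval_ms=100
conns=$(( (1000 / interval_ms) * 1 + (1000 / interval_ms) * 2 ))
echo "${conns} connections over 3 seconds"
```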
The above request updates the producers' configuration to send messages of 8192 bytes with a 100 millisecond interval.
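That configuration implies the following per-producer data rate:

```shell
# An 8192-byte payload sent every 100 ms.
payload_bytes=8192
interval_ms=100
bytes_per_sec=$(( payload_bytes * 1000 / interval_ms ))
echo "${bytes_per_sec} B/s per producer"   # 81920 B/s = 80 KiB/s
```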
It is also possible to run the test in instant mode.
The above request queues a task to create 500 connections with no interval between them.
The test.properties file contains the Producers' and Consumers' configuration and allows setting the following properties:
Producer | |
---|---|
hvVesAddress | HV-VES address (dcae-hv-ves-collector.onap:6061) |
Certificates paths | |
client.cert.path | Path to the cert file (/ssl/client.p12) |
client.cert.pass.path | Path to the cert's pass file (/ssl/client.pass) |
Consumer | |
kafka.bootstrapServers | Address of the Kafka service to consume from (message-router-kafka:9092) |
kafka.topics | Kafka topics to subscribe to (HV_VES_PERF3GPP) |
Results can be accessed under the following links:
- Prometheus: http://slave1:30000/graph?g0.range_input=1h&g0.expr=hv_kafka_consumer_travel_time_seconds_count&g0.tab=1
- Grafana: http://slave1:30001/d/V94Kjlwmz/hv-ves-processing?orgId=1&refresh=5s
To remove the created ConfigMaps, Consumers, Producers, Grafana and Prometheus from the Kubernetes cluster, we call the clean function. Note: clean doesn't remove certificates from HV-VES.
Cleaning the environment:
./cloud-based-performance-test.sh clean
In order to restart the test environment (redeploy the hv-ves pod, reset the Kafka topic and perform setup), we use reboot-test-environment.sh:
./reboot-test-environment.sh
HV-VES Performance test results
With DMaaP Kafka
Conditions
Tests were performed with 5 repetitions for each configuration shown in the table below.
Number of producers | Messages per producer | Payload size [B] | Interval [ms] |
---|---|---|---|
2 | 90000 | 8192 | 10 |
4 | 90000 | 8192 | 10 |
6 | 60000 | 8192 | 10 |
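The table translates into the following total load per configuration; a quick derivation in shell (the 10 ms interval gives the minimum run duration implied by each row):

```shell
# For each (producers, messages-per-producer) pair from the table above,
# compute the total message count and the implied minimum duration at a
# 10 ms send interval.
for cfg in "2 90000" "4 90000" "6 60000"; do
  set -- $cfg
  producers=$1; messages=$2
  total=$(( producers * messages ))
  duration_s=$(( messages * 10 / 1000 ))
  echo "${producers} producers: ${total} messages total, >= ${duration_s} s"
done
```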
Raw results data
Raw results data with screenshots can be found in the following files:
- Series 1 - results_series_1.zip
- Series 2 - results_series_2.zip
Test Results - series 1
Test Results - series 2
No DMaaP Kafka Setup
Install Kafka Docker on Kubernetes
(based on: ultimate-guide-to-installing-kafka-docker-on-kuber)
Create config maps
Config maps are required by the zookeeper and kafka-broker deployments.
kubectl -n onap create cm kafka-config-map --from-file=kafka_server_jaas.conf
kubectl -n onap create cm zk-config-map --from-file=zk_server_jaas.conf
Create deployments
kubectl -n onap create -f zookeeper.yml
kubectl -n onap create -f kafka-service.yml
kubectl -n onap create -f kafka-broker.yml
Verify that pods are up and running
kubectl -n onap get pods | grep 'zookeeper-deployment-1\|broker0'
kubectl -n onap get svc | grep kafka-service
If you need to change a variable or anything else in a yml file, delete the current deployment first, for example:
kubectl -n onap delete deploy kafka-broker0
Then, after modifying the file, create a new deployment as described above.
Run the test
Modify the tools/performance/cloud scripts to match the names used in your deployments, as described in the previous step. Here is a diff file (you may need to adapt it to the current state of the code):
Go to tools/performance/cloud and reboot the environment:
./reboot-test-environment.sh -v
Now you are ready to run the test.
Without DMaaP Kafka
Conditions
Tests were performed with the following configuration:
Messages per producer | Payload size [B] | Interval [ms] |
---|---|---|
90000 | 8192 | 10 |
Raw results data
Raw results data with screenshots can be found in the following files:
- Series 1 - results_series_1.zip
- Series 2 - results_series_2.zip
To see custom Kafka metrics you may want to change kafka-and-producers.json (located in HV-VES project directory: tools/performance/cloud/grafana/dashboards) to