...
Suggested Tasks
Description | Jira
---|---
Add new k6 performance test suite profile ‘Endurance’ |
Add new Jenkins job to run endurance test |
Add Grafana support to visualize memory usage pattern |
Two docker-compose deployments simultaneously |
Agree and define new ‘Suite’ (js) |
Solution Proposal
...
For the sake of simplicity, the solution is most straightforward if a new test server is provided.
Functional/test-suite improvements will be made to the current k6 performance tests if required.
...
Agree and Define new ‘Suite’ (js)
The existing k6 performance tests are sufficient to detect any memory leakage, so no new k6 test will be added. (Daniel Hanrahan, Halil Cakal)
There should be a new test suite (CPS-2493). (Toine Siebelink, Kolawole Adebisi-Adeolokun, Halil Cakal)
HOW: Daniel and I decided to define a new test suite (endurance). Daniel Hanrahan Halil Cakal
Have one ncmp-test-runner.js with different configs, kpi.json and endurance.json, moving the scenario and threshold settings into the JSON configs (see the runner sketch after the example config below).
The endurance test should run all tests in parallel. The scenarios, executor types, and VUs should remain the same in the Endurance suite except for the legacy_batch_consume_scenario, whose executor type should be changed to constant-arrival-rate with 1 request/second (see the example config after the parameter table below). (Toine Siebelink, Daniel Hanrahan, Kolawole Adebisi-Adeolokun, Halil Cakal)
As an example:
```json
{
  "hosts": {
    "ncmpBaseUrl": "http://localhost:8883",
    "dmiStubUrl": "http://ncmp-dmi-plugin-demo-and-csit-stub:8092",
    "kafkaBootstrapServer": "localhost:9092"
  },
  "kafka": {
    "legacyBatchTopic": "legacy_batch_topic"
  },
  "scenarios": {
    "passthrough_read_scenario": {
      "executor": "constant-vus",
      "exec": "passthroughReadScenario",
      "vus": 2,
      "duration": "15m"
    }
  },
  "thresholds": {
    "http_req_failed": ["rate == 0"],
    "cmhandles_created_per_second": ["avg >= 22"],
    "cmhandles_deleted_per_second": ["avg >= 22"],
    "ncmp_overhead_passthrough_read": ["avg <= 40"]
  }
}
```
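As a rough sketch (not the agreed implementation), the runner could load the selected profile in the k6 init context. The TEST_PROFILE environment variable, the ./config folder layout, and the placeholder request inside the scenario function are assumptions for illustration only:

```javascript
// ncmp-test-runner.js (sketch only) -- loads scenario and threshold settings from a JSON profile.
// TEST_PROFILE, the ./config folder and the placeholder request below are assumptions.
import http from 'k6/http';

// open() is only available in the k6 init context; pick kpi.json or endurance.json.
const profileName = __ENV.TEST_PROFILE || 'kpi';
const testConfig = JSON.parse(open(`./config/${profileName}.json`));

// k6 picks up scenarios and thresholds from the exported options object.
export const options = {
    scenarios: testConfig.scenarios,
    thresholds: testConfig.thresholds,
};

// Scenario function referenced by the "exec" field in the JSON config.
// The real test logic lives in the existing scenario modules; this is a placeholder call.
export function passthroughReadScenario() {
    http.get(`${testConfig.hosts.ncmpBaseUrl}/actuator/health`);
}
```

The suite could then be selected at run time with, for example, `k6 run -e TEST_PROFILE=endurance ncmp-test-runner.js`.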
The
...
The current KPI and ENDURANCE (proposed) test parameters
Test Stage | KPI Scenario Name | KPI Unit | KPI Executor Type | KPI VUs | KPI Duration | ENDURANCE Scenario Name | ENDURANCE VUs | ENDURANCE Executor Type | ENDURANCE Duration
---|---|---|---|---|---|---|---|---|---
setup | create_cm_handles | CM-handles/second | N/A | N/A | 20m | create_cm_handles | N/A | N/A | 20m
scenario | passthrough_read_scenario | overhead | constant-vus | 2 | 15m | passthrough_read_scenario | 2 | constant-vus | 1h
 | passthrough_read_alt_id_scenario | overhead | constant-vus | 2 | | passthrough_read_alt_id_scenario | 2 | constant-vus |
 | passthrough_write_scenario | overhead | constant-vus | 2 | | passthrough_write_scenario | 2 | constant-vus |
 | passthrough_write_alt_id_scenario | overhead | constant-vus | 2 | | passthrough_write_alt_id_scenario | 2 | constant-vus |
 | cm_handle_id_search_nofilter_scenario | milliseconds | constant-vus | 1 | | cm_handle_id_search_nofilter_scenario | 1 | constant-vus |
 | cm_handle_id_search_module_scenario | milliseconds | constant-vus | 1 | | cm_handle_id_search_module_scenario | 1 | constant-vus |
 | cm_handle_id_search_property_scenario | milliseconds | constant-vus | 1 | | cm_handle_id_search_property_scenario | 1 | constant-vus |
 | cm_handle_id_search_cpspath_scenario | milliseconds | constant-vus | 1 | | cm_handle_id_search_cpspath_scenario | 1 | constant-vus |
 | cm_handle_id_search_trustlevel_scenario | milliseconds | constant-vus | 1 | | cm_handle_id_search_trustlevel_scenario | 1 | constant-vus |
 | cm_handle_search_nofilter_scenario | milliseconds | constant-vus | 1 | | cm_handle_search_nofilter_scenario | 1 | constant-vus |
 | cm_handle_search_module_scenario | milliseconds | constant-vus | 1 | | cm_handle_search_module_scenario | 1 | constant-vus |
 | cm_handle_search_property_scenario | milliseconds | constant-vus | 1 | | cm_handle_search_property_scenario | 1 | constant-vus |
 | cm_handle_search_cpspath_scenario | milliseconds | constant-vus | 1 | | cm_handle_search_cpspath_scenario | 1 | constant-vus |
 | cm_handle_search_trustlevel_scenario | milliseconds | constant-vus | 1 | | cm_handle_search_trustlevel_scenario | 1 | constant-vus |
 | legacy_batch_produce_scenario | milliseconds | shared-iterations | 2 | N/A | legacy_batch_produce_scenario | 1 (1 req./sec) | constant-arrival-rate |
 | legacy_batch_consume_scenario | events/second | per-vu-iterations | 1 | | | | |
teardown | delete_cm_handles | CM-handles/second | N/A | N/A | 20m | delete_cm_handles | N/A | N/A | 10m
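For illustration only, the endurance override for the legacy batch scenario could look like the snippet below. The executor field names (rate, timeUnit, duration, preAllocatedVUs) are standard k6 constant-arrival-rate options, but the exec name and the duration value are assumptions:

```json
{
  "scenarios": {
    "legacy_batch_produce_scenario": {
      "executor": "constant-arrival-rate",
      "exec": "legacyBatchProduceScenario",
      "rate": 1,
      "timeUnit": "1s",
      "duration": "1h",
      "preAllocatedVUs": 1
    }
  }
}
```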
...
Visualizing the ENDURANCE test results
As mentioned in issues/decisions, there are two alternative ways of representing memory trends: Grafana and GnuPlot.
Grafana
...
EST has its own Prometheus and Grafana (externally accessible, so there is no need to install GlobalProtect), and these can be configured to show cps-and-ncmp memory trends.
Link to EST Grafana: https://monitoring.nordix.org/login
A new scrape_configs entry for the cps-and-ncmp microservice can be added to the EST prometheus.yml.
```yaml
scrape_configs:
  - job_name: 'cps-and-ncmp'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets:
          - 'cps-and-ncmp:8080'  # replace by <physical-server-ip:port>
```
Also, the dashboard provider and dashboard config (jvm-micrometer-dashboard.json) can be added.
```yaml
apiVersion: 1   # standard header for Grafana provisioning files

providers:
  - name: default
    orgId: 1
    type: file
    options:
      path: /var/lib/grafana/dashboards
      foldersFromFilesStructure: true
```
Then, the trend of G1 Old Gen space can be observed as seen below:
...
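The dashboard panel behind such a trend would typically query the Micrometer JVM memory metric exposed via /actuator/prometheus; a hedged example query (the exact pool id label depends on the JVM and GC in use):

```
jvm_memory_used_bytes{area="heap", id="G1 Old Gen"}
```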
Permanent Storage Alternatives
Prometheus
It is possible to configure Prometheus with a persistent volume so that data is retained.
This is an example service config for Prometheus:
```yaml
# Service definition (under the top-level services: key of the compose file)
prometheus:
  container_name: ${PROMETHEUS_CONTAINER_NAME:-prometheus}
  image: prom/prometheus:latest
  ports:
    - ${PROMETHEUS_PORT:-9090}:9090
  restart: always
  volumes:
    - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
    - prometheus_data:/prometheus
  profiles:
    - monitoring

# Top-level named volume declaration
volumes:
  prometheus_data:
    driver: local
```
GnuPlot
GnuPlot can also be used to plot just the G1 Old Gen space.
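As a rough sketch, assuming the G1 Old Gen samples have first been exported to a two-column data file (Unix epoch seconds and bytes used) from Prometheus or a JMX export, a gnuplot script could look like this; the file name and column layout are assumptions:

```gnuplot
# plot-g1-old-gen.gp -- sketch only; input file name and format are assumptions.
set xdata time
set timefmt "%s"                       # column 1: Unix epoch seconds
set format x "%H:%M"
set xlabel "Time"
set ylabel "G1 Old Gen used (bytes)"
set title "G1 Old Gen heap usage during the endurance run"
set terminal pngcairo size 1200,600
set output "g1-old-gen.png"
plot "g1_old_gen.dat" using 1:2 with lines title "G1 Old Gen"
```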
...
Support from Team Kraken will be needed for external link/server access if we decide to visualize memory trends with Grafana.
...