Test Steps & Test Data
- Create Dataspace: curl -u cpsuser:cpsr0cks! -X POST "http://localhost:8883/cps/api/v1/dataspaces?dataspace-name=openroadm" -H "accept: text/plain" -d ''
- Create Schemaset: curl -u cpsuser:cpsr0cks! -X POST "http://localhost:8883/cps/api/v1/dataspaces/openroadm/schema-sets?schema-set-name=owb-msa221-schema" -H "accept: text/plain" -H "Content-Type: multipart/form-data" -F 'file=@owb-msa221.zip;type=application/zip' (Schemaset: owb-msa221.zip)
- Create Anchor: curl -u cpsuser:cpsr0cks! -X POST "http://localhost:8883/cps/api/v1/dataspaces/openroadm/anchors?schema-set-name=owb-msa221-schema&anchor-name=owb-msa221-anchor" -H "accept: text/plain" -d ''
- Post the initial Node: curl -g -H "Authorization: Basic Y3BzdXNlcjpjcHNyMGNrcyE=" -H "Content-Type: application/json" --request POST 'http://localhost:8883/cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/nodes' -d @owb-msa22-first-node-payload.json -i (Test data: owb-msa22-first-node-payload.json)
- Use the test client to post multiple data nodes concurrently (Test data: msa221-data.zip); a minimal sketch of such a client follows this list.
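The test client itself is not included above; the sketch below is one minimal way to drive the concurrent posts with plain curl. The file naming (node-1.json, node-2.json, … unpacked from msa221-data.zip), node count, and thread count are assumptions, not the actual client used for these runs:

```bash
#!/bin/bash
# Minimal concurrent test client sketch (not the actual test client used here).
# Assumption: payloads from msa221-data.zip are unpacked as node-1.json .. node-N.json.
NODES=${1:-500}    # number of data nodes to post
THREADS=${2:-10}   # number of curl processes to run in parallel
URL='http://localhost:8883/cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/nodes'

seq 1 "$NODES" | xargs -P "$THREADS" -I{} \
  curl -s -o /dev/null \
       -w 'node {}: HTTP %{http_code} in %{time_total}s\n' \
       -u 'cpsuser:cpsr0cks!' \
       -H 'Content-Type: application/json' \
       --request POST "$URL" \
       -d @node-{}.json
```

Running it as ./post-nodes.sh 500 10 posts 500 nodes with 10 parallel workers and prints per-request HTTP status and timing.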
Write Test Cases
Test Iteration 1 - 4 GB RAM, 2 vCPU - Kubernetes Cluster
Test Environment | |
---|---|
CPUs | 2 vCPU |
Memory | 4 Gi (~4.2 GB) |
Data Size | 269 KB/DataNode |
VM/K8s cluster | K8s cluster: 1 application (CPS) instance, 2 postgres instances |
Test Results
# | #Nodes | Thread | Time taken to post single node | Time taken to post all data nodes | Comments |
---|---|---|---|---|---|
1 | 50 | 1 | 0.8 sec | 42.6 sec | |
2 | 500 | 1 | 0.9 sec | 9 m 36 sec | Connection drops for a while, then resumes. Reason for the connection drops: the application restarts due to an Out of Memory error |
3 | 500 | 10 | ~1.2 sec | 8 m 42 sec | No improvement with increased thread count |
Test Iteration 2 - 8 GB RAM, 4 vCPU - Kubernetes Cluster
Test Environment | |
---|---|
CPUs | 4 vCPU |
Memory | 8 Gi (~8.5 GB) |
Data Size | 269 KB/DataNode |
VM/K8s cluster | K8s cluster: 1 application (CPS) instance, 2 postgres instances |
Test Results
# | #Nodes | Thread | Time taken to post single node | Time taken to post all data nodes | Comments |
---|---|---|---|---|---|
1 | 500 | 1 | 0.8 - 0.9 sec | 7 m 53 sec | Not able to post more than 500 nodes; this led to an application pod restart with reason code 137 (OOMKilled), see the check below this table |
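Exit code 137 indicates the container was killed for exceeding its memory limit. For reference, here is a hedged way to confirm it on the cluster (the app=cps label selector and pod name placeholder are assumptions about the deployment):

```bash
# Find the CPS pod and inspect why its last container instance terminated.
# The label selector is an assumption; adjust to the actual deployment labels.
kubectl get pods -l app=cps
kubectl describe pod <cps-pod-name> | grep -A 5 'Last State'
# A memory-limit kill shows: Reason: OOMKilled, Exit Code: 137
```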
Test Iteration 3 - 17 GB RAM, 8 vCPU - Kubernetes Cluster
Test Environment | |
---|---|
CPUs | 8 vCPU |
Memory | 16 Gi (~17 GB) |
Data Size | 269 KB/DataNode |
VM/K8s cluster | K8s cluster: 1 application (CPS) instance, 2 postgres instances |
Test Results
# | #Nodes | Thread | Time taken to post single node | Time taken to post all data nodes | Comments |
---|---|---|---|---|---|
1 | 500 | 1 | ~0.9 sec | 7 m 44 sec | Not able to post more than 500 nodes; this led to an application pod restart with reason code 137 (OOMKilled) |
Test Iteration 4 - 16 GB RAM, 12 vCPU - Kubernetes Cluster
Test Environment | |
---|---|
CPUs | 12 vCPU |
Memory | 16 Gi (~17 GB) |
Data Size | 269 KB/DataNode |
VM/K8s cluster | K8s cluster: 1 application (CPS) instance, 2 postgres instances |
Test Results
# | #Nodes | Thread | Time taken to post single node | Time taken to post all data nodes | Comments |
---|---|---|---|---|---|
1 | 500 | 1 | 0.8 - 0.9 sec | 7 m 23 sec | Not able to post more than 500 nodes; this led to an application pod restart with reason code 137 (OOMKilled) |
Test 1: initial bulk insert (single thread)
Test 2: bulk insert, CPS temporal notifications disabled (single thread)
Test Iteration 5 - 128 GB RAM, 64 vCPU, Single VM - CPS temporal notifications enabled
Test Environment | |
---|---|
CPUs | 64 vCPU |
Memory | 128 GB |
Data Size | 269 KB/DataNode |
VM/K8s cluster | Single VM |
Test Results
# | #Nodes | Thread | Time taken to post single node | Time taken to post all data nodes | Comments |
---|---|---|---|---|---|
1 | 500 | 1 | ~ 2 sec | 16 m 6 sec | |
2 | 1000 | 1 | ~ 2 sec | | |
3 | 1000 | 60 | 2 - 6 sec | 1h 19 min 2 sec | No improvement with increased thread count |
4 | 2000 | 1 | | 1h 34 min for 1131 nodes | Got stuck due to a postgres connection leak; see the check below this table |
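A connection leak like the one in run 4 can be watched from the database side while the test runs; a minimal sketch, assuming direct psql access to the CPS postgres instance (host, user, and database names are assumptions):

```bash
# Count postgres sessions by state during the test. A steadily growing
# 'idle in transaction' count is the usual signature of a connection leak.
psql -h localhost -U cps -d cpsdb \
     -c "SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY count DESC;"
```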
Test Iteration 6 - 128 GB RAM, 64 vCPU, Single VM - CPS temporal notifications disabled
Test Environment | |
---|---|
CPUs | 64 vCPU |
Memory | 128 GB |
Data Size | 269 KB/DataNode |
VM/K8s cluster | Single VM |
Test Results
# | #Nodes | Thread | Time taken to post single node | Time taken to post all data nodes | Comments |
---|---|---|---|---|---|
1 | 1000 | 1 | 1.8 - 2 sec | 30 m 15 sec | |
2 | 1000 | 2 | 1.8 - 2 sec | 29 m 49 sec | No performance improvement with increased thread count |
3 | 1000 | 5 | 1.8 - 2 sec | 30 m 44 sec | |
4 | 1000 | 10 | 1.8 - 2 sec | 30 m 21 sec | |
5 | 2000 | 1 | 1.8 - 2 sec | 59 m 26 sec | |
6 | 3000 | 1 | 1.8 - 2 sec | 1h 30 min 29 sec | |
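How the temporal notifications were disabled for this iteration is not recorded above. One plausible way is via the CPS Spring configuration; the property name notification.data-updated.enabled is an assumption and should be verified against the deployed version's application.yml:

```bash
# Hedged sketch: start the CPS application with data-updated notifications off.
# The property name is an assumption based on the CPS Spring configuration.
java -Dnotification.data-updated.enabled=false -jar cps-application.jar
```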
Read Test Cases
See Reading Large Data Instrumentation Comparisons
Test Results (After xpath query performance improvements)
No of nodes | Threads count | Time to read single txn | Time to read all | Comments |
---|---|---|---|---|
500 | 1 | ~0.5 sec | 4 min 10 sec | Drastic performance improvement: before the xpath query fix it took 2 hours to retrieve the same data |
500 | 2 | ~0.5 sec | 4 min 12 sec | No performance improvement with increased thread count |
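For context, the per-transaction read is along these lines; a hedged sketch against the CPS v1 single-node endpoint (the xpath value is an assumption, substitute the node under test):

```bash
# Read one data node with all its descendants from the anchor.
curl -s -u 'cpsuser:cpsr0cks!' \
     -G 'http://localhost:8883/cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node' \
     --data-urlencode 'xpath=/org-openroadm-device' \
     --data-urlencode 'include-descendants=true'
```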