...
Query: cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']&include-descendants=true
Durations are the average of 100 measurements
...
1 object containing all nodes as descendants (mainly one big list)
Query: cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-device&include-descendants=true
All queries were run 10 times
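For reference, a single timing measurement can be reproduced with curl. This is a minimal sketch, assuming CPS is reachable on localhost:8080 and that credentials are supplied via CPS_USER/CPS_PASS environment variables (host, port, and credential variables are assumptions, not part of the original test setup):

```bash
#!/bin/bash
# Time the single-device query 100 times and report the average E2E duration.
# The URL and xpath come from the query above; host/port are assumptions.
BASE="http://localhost:8080/cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor"
XPATH="/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']"

for i in $(seq 1 100); do
  curl -s -o /dev/null -w '%{time_total}\n' -u "$CPS_USER:$CPS_PASS" \
    --get "$BASE/node" \
    --data-urlencode "xpath=$XPATH" \
    --data-urlencode "include-descendants=true"
done | awk '{ sum += $1 } END { printf "average E2E: %.3f s\n", sum/NR }'
```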
Patch | Devices | E2E duration (s) | Fragment Query duration (s) | Service duration (s) | Object Size (bytes) | Object Size (MB) | #Fragments | Graph
---|---|---|---|---|---|---|---|---
1) Baseline | 1,000 | 11.8 | <0.0311 * | 11.740 | 332,997 | 0.3 | 86,000 | |
 | 2,000 | 28.5 | <0.0491 * | 28.401 | 665,994 | 0.7 | 172,000 | |
 | 5,000 | 87.0 | <0.1581 * | 86.814 | 1,664,985 | 1.7 | 430,000 | |
 | 10,000 | 201.0 | <0.4451 * | 201.008 | 3,329,970 | 3.3 | 860,000 | |
2) | 1,000 | 0.5 | 0.2232 | 0.3213 | 332,997 | 0.3 | 86,000 | |
 | 2,000 | 1.0 | 0.4174 | 0.5586 | 665,994 | 0.7 | 172,000 | |
 | 5,000 | 2.5 | 1.0871 | 1.4394 | 1,664,985 | 1.7 | 430,000 | |
 | 10,000 | 7.0 ??? Was this wrong?! | 2.9289 | 4.0490 | 3,329,970 | 3.3 | 860,000 | |
3) | 1,000 | 3.0 | 1.2623 | 1.6917 | 332,997 | 0.3 | 86,000 | |
 | 2,000 | 5.5 | 2.3173 | 3.1732 | 665,994 | 0.7 | 172,000 | |
 | 5,000 | 11.0 | 5.4334 | 5.5916 | 1,664,985 | 1.7 | 430,000 | |
 | 10,000 | 25.4 | 11.6987 | 13.6616 | 3,329,970 | 3.3 | 860,000 | |
\* Only the initial Hibernate query; Hibernate will lazily fetch data later, which is reflected in the E2E time
Observations:
- Patch Set #2 performed better than the latest patch! Need to compare; Daniel Hanrahan will follow up
Get nodes in parallel
Fetch 1 device from a database with 10,000 devices
Bash parallel curl commands; each thread executed 10 sequential requests with no delays; average response times are reported
Query: cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']&include-descendants=true
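A minimal sketch of how such a parallel run can be scripted in Bash: each "thread" is a background subshell firing 10 sequential requests with no delay, writing out the per-request HTTP code and total time so that the success ratio and average E2E duration can be computed afterwards. The script name, host, port, and credential variables are assumptions:

```bash
#!/bin/bash
# Usage (hypothetical): ./parallel_get.sh <threads>   e.g. ./parallel_get.sh 50
THREADS=${1:-10}
URL="http://localhost:8080/cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node"
XPATH="/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']"

for t in $(seq 1 "$THREADS"); do
  (
    # Each thread fires 10 sequential requests with no delay between them.
    for i in $(seq 1 10); do
      curl -s -o /dev/null -w '%{http_code} %{time_total}\n' \
        -u "$CPS_USER:$CPS_PASS" --get "$URL" \
        --data-urlencode "xpath=$XPATH" \
        --data-urlencode "include-descendants=true"
    done
  ) &
done
wait
# Redirect stdout to a file to post-process: lines starting with 200 are
# successes; the second field is the per-request E2E time.
```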
Patch | Threads | E2E duration (s) | Success Ratio | Fragment Query duration (s)
---|---|---|---|---
https://gerrit.onap.org/r/c/cps/+/133511/12 | 1 | 0.082 | 100% | 0.2382
 | 2 | 0.091 | 100% | 0.0991
 | 3 | 0.1231 | 100% | 0.1271
 | 5 | 0.1803 | 100% | 0.1802
 | 10 | 0.2883 | 99.9% | 0.2833
 | 20 | 0.4995 | 99.5% | 0.4875
 | 50 | 1.0007 | 99.4% | 0.982
 | 100 | 2.3443 | 99.7% | 2.2713
 | 200 | 7.6567 | 99.7% | 6.2252
 | 500 | 17.1134 | 41.4% | 13.8338
 | 1,000 | 15.306 | 26.0% (many errors) | 11.866
Graph:
...
Observations:
- From 10 parallel requests (each of 10 sequential requests), the client can't always connect and we see timeout errors (success ratio < 100%)
- Sequential requests are fired faster than responses arrive, so from the DB's perspective they are almost parallel requests as well
- The database probably already becomes the bottleneck with 2 threads, which effectively fire a total of 20 calls very quickly. It is known that the DB connection pool/internals slow down from 12 or more 'parallel' requests
Graphs:
- Average E2E Execution Time
- Internal Method Counts (total)
Observations:
Data sheets:
...