
Setup

Environment:

OS: Zorin OS 16.2

RAM: 32 GB

CPU: Intel® Core™ i7-10610U CPU @ 1.80GHz × 8

Data: 

Included in ZIP file (at bottom)

  1. All data is under 1 anchor
    1. Under /openroadm-devices there is a list of 10,000 openroadm-device[..] entries
  2. Tree size per 'device': 86 fragments
  3. Size per device: 333 KB (see the sizing sketch below)
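
A minimal sizing sketch (plain Python arithmetic, using only the per-device figures listed above) that reproduces the fragment counts and object sizes appearing in the result tables below:

```python
# Derived totals per test size, based on 86 fragments and ~333 KB per device.
FRAGMENTS_PER_DEVICE = 86
KB_PER_DEVICE = 333

for devices in (1_000, 2_000, 5_000, 10_000):
    fragments = devices * FRAGMENTS_PER_DEVICE
    size_gb = devices * KB_PER_DEVICE / 1_000_000  # KB -> GB (decimal)
    print(f"{devices:>6} devices: {fragments:>7,} fragments, ~{size_gb:.1f} GB")
```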


Single-large object request

Query: cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']&include-descendants=true

Durations are the average of 100 measurements (1 object out of many).
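
A minimal sketch of how such a measurement could be reproduced; the host, port, and credentials below are assumptions and not taken from this page:

```python
import statistics
import time

import requests

# Assumed local CPS endpoint and credentials - adjust to your deployment.
BASE_URL = "http://localhost:8080/cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node"
PARAMS = {
    "xpath": "/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']",
    "include-descendants": "true",
}
AUTH = ("cpsuser", "cpsr0cks!")  # placeholder credentials

durations = []
for _ in range(100):  # the table below reports the average of 100 measurements
    start = time.perf_counter()
    response = requests.get(BASE_URL, params=PARAMS, auth=AUTH)
    response.raise_for_status()
    durations.append(time.perf_counter() - start)

print(f"average E2E duration: {statistics.mean(durations):.3f} s")
```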

Patch | Devices | E2E duration (s) | Fragment Query duration (s) | Service Overhead (s) | Graph
1) Baseline https://gerrit.onap.org/r/c/cps/+/133482 | 1,000 | 0.045 | 0.023 | 0.022 |
 | 2,000 | 0.054 | 0.035 | 0.018 |
 | 5,000 | 0.144 | 0.117 | 0.027 |
 | 10,000 | 0.290 | 0.260 | 0.030 |
2) https://gerrit.onap.org/r/c/cps/+/133511/2 | 1,000 | 0.054 | 0.053 | 0.001 |
 | 2,000 | 0.100 | 0.100 | 0.000 |
 | 5,000 | 0.229 | 0.229 | 0.000 |
 | 10,000 | 0.213 | 0.212 | 0.000 |
3) | 1,000 | 0.020 | 0.016 | 0.004 |
 | 2,000 | 0.030 | 0.026 | 0.003 |
 | 5,000 | 0.113 | 0.108 | 0.005 |
 | 10,000 | 0.100 | 0.096 | 0.003 |


Observations (patch 3) 

  1. Is 'findByAnchorAndCspPath' being used? (it shouldn't be!)
  2. Query time increases until the list size reaches 6,000 elements and then levels off

Whole data tree as one request

1 object containing all nodes as descendants (mainly one big list)

Query: cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-device&include-descendants=true

All queries were run 10 times.
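
The same kind of timing sketch for the whole-tree read, additionally reporting the payload size; the endpoint and credentials are assumptions as before, and the xpath targets the top-level /openroadm-devices container described in the setup:

```python
import time

import requests

# Assumed local CPS endpoint and credentials - adjust to your deployment.
BASE_URL = "http://localhost:8080/cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node"
PARAMS = {"xpath": "/openroadm-devices", "include-descendants": "true"}
AUTH = ("cpsuser", "cpsr0cks!")  # placeholder credentials

start = time.perf_counter()
response = requests.get(BASE_URL, params=PARAMS, auth=AUTH)
response.raise_for_status()
elapsed = time.perf_counter() - start

size_gb = len(response.content) / 1_000_000_000
print(f"E2E duration: {elapsed:.1f} s, payload size: ~{size_gb:.1f} GB")
```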

Patch | Devices | E2E duration (s) | Fragment Query duration (s) | Service duration (s) | Object Size (GB) | Object Size (#Fragments) | Graph
1) Baseline https://gerrit.onap.org/r/c/cps/+/133482 | 1,000 | 11.8 | <0.1 * | 11.740 | 0.3 | 86,000 |
 | 2,000 | 28.5 | <0.1 * | 28.401 | 0.7 | 172,000 |
 | 5,000 | 87.0 | <0.1 * | 86.814 | 1.7 | 430,000 |
 | 10,000 | 201.0 | <0.1 * | 201.008 | 3.3 | 860,000 |
2) https://gerrit.onap.org/r/c/cps/+/133511/2 ** | 1,000 | 0.5 | 0.2 | 0.3 | 0.3 | 86,000 |
 | 2,000 | 1.0 | 0.4 | 0.6 | 0.7 | 172,000 |
 | 5,000 | 2.5 | 1.1 | 1.4 | 1.7 | 430,000 |
 | 10,000 | 7.0 | 2.9 | 4.0 | 3.3 | 860,000 |
3) | 1,000 | 3.0 | 1.3 | 1.7 | 0.3 | 86,000 |
 | 2,000 | 5.5 | 2.3 | 3.2 | 0.7 | 172,000 |
 | 5,000 | 11.0 | 5.4 | 5.6 | 1.7 | 430,000 |
 | 10,000 | 25.4 | 11.7 | 13.6 | 3.3 | 860,000 |

* Only the initial Hibernate query; Hibernate will lazily fetch data later, which is reflected in the E2E time.

Observations:

  1. Patch set #2 performed better than the latest patch! This needs to be compared; Daniel Hanrahan will follow up

Get nodes in parallel

Fetch 1 device from a database with 10,000 devices

Parallel curl commands from Bash; each thread executed 10 sequential requests with no delays; average response times are reported.

Query: cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']&include-descendants=true
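
The original run used parallel curl commands from Bash; an equivalent sketch in Python (endpoint, credentials, and timeout are assumptions) that reports the average E2E time and success ratio per thread count could look like this:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

# Assumed local CPS endpoint and credentials - adjust to your deployment.
BASE_URL = "http://localhost:8080/cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node"
PARAMS = {
    "xpath": "/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']",
    "include-descendants": "true",
}
AUTH = ("cpsuser", "cpsr0cks!")  # placeholder credentials
REQUESTS_PER_THREAD = 10


def worker(_: int) -> list[tuple[float, bool]]:
    """Fire 10 sequential requests with no delay; record duration and success."""
    results = []
    for _ in range(REQUESTS_PER_THREAD):
        start = time.perf_counter()
        try:
            ok = requests.get(BASE_URL, params=PARAMS, auth=AUTH, timeout=30).ok
        except requests.RequestException:
            ok = False
        results.append((time.perf_counter() - start, ok))
    return results


def run(threads: int) -> None:
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = [r for batch in pool.map(worker, range(threads)) for r in batch]
    avg = sum(d for d, _ in results) / len(results)
    success_ratio = sum(ok for _, ok in results) / len(results)
    print(f"{threads:>5} threads: avg E2E {avg:.3f} s, success ratio {success_ratio:.1%}")


for n in (1, 2, 3, 5, 10, 20, 50, 100, 200, 500, 1000):
    run(n)
```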

Patch | Threads | E2E duration (s) | Success Ratio | Fragment Query duration (s)
https://gerrit.onap.org/r/c/cps/+/133511/12 | 1 | 0.082 | 100% | 0.2
 | 2 | 0.091 | 100% | 0.1
 | 3 | 0.120 | 100% | 0.1
 | 5 | 0.3 | 100% | 0.2
 | 10 | 0.3 | 99.9% | 0.3
 | 20 | 0.5 | 99.5% | 0.5
 | 50 | 1.0 | 99.4% | 1.0
 | 100 | 2.3 | 99.7% | 2.3
 | 200 | 7.6 | 99.7% | 6.2
 | 500 | 17.1 | 41.4% | 13.8
 | 1,000 | 15.3 (many errors) | 26.0% | 11.9

Observations

  1. From 10 parallel requests (each of 10 sequential requests) onwards the client can't always connect and we see timeout errors (success ratio < 100%)
    1. Sequential requests are fired faster than the responses arrive, so from the DB perspective they are almost parallel requests as well
  2. The database probably already becomes the bottleneck with 2 threads, which effectively fire a total of 20 calls very quickly. It is known that the DB connection pool/internals slow down from 12 or more 'parallel' requests

Graphs:

  1. Average E2E Execution Time
  2. Internal Method Counts (total)

Observations:


Data sheets:
