Guiding Principles
- The NCMP REST interface will follow, and be inspired by, the RESTCONF interface for easy acceptance of and transition to that interface
- The interface will follow ONAP's RESTful API Design Specification
- The interface will include the concept of datastores, inspired by the Network Management Datastore Architecture (NMDA) and as used in RESTCONF
- The application should be able to easily switch between 'pass-through' and other datastores, using identical REST endpoints and responses (see the sketch below)
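As a minimal sketch of the datastore principle above, the helper below builds NCMP resource-data URLs in which only the datastore segment changes between a cached read and a pass-through read. The base path, path layout and datastore names used here are assumptions for illustration and should be verified against the published NCMP OpenAPI specification.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Minimal sketch (not the official client): switching between the cached
// 'operational' datastore and the 'passthrough-operational' datastore only
// changes the datastore segment of the URL, mirroring RESTCONF/NMDA datastore
// resources. Base path, path layout and datastore names are assumptions here.
public class NcmpDatastoreUrlSketch {

    private static final String NCMP_BASE = "http://ncmp-host:8080/ncmp/v1"; // assumed base path

    static URI resourceDataUri(String cmHandleId, String datastore, String resourceIdentifier) {
        String encodedResource = URLEncoder.encode(resourceIdentifier, StandardCharsets.UTF_8);
        return URI.create(NCMP_BASE + "/ch/" + cmHandleId
                + "/data/ds/" + datastore
                + "?resourceIdentifier=" + encodedResource);
    }

    public static void main(String[] args) {
        String cmHandleId = "cm-handle-1";      // hypothetical CM handle id
        String resource = "/SubNetwork=Europe"; // hypothetical resource identifier

        // Same endpoint shape and response format; only the datastore differs.
        System.out.println(resourceDataUri(cmHandleId, "ncmp-datastore:operational", resource));
        System.out.println(resourceDataUri(cmHandleId, "ncmp-datastore:passthrough-operational", resource));
    }
}
```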
References
Follow principles/patterns of RESTCONF, RFC 8040: https://datatracker.ietf.org/doc/html/rfc8040
Follow principles/patterns of YANG Patch, RFC 8072: https://datatracker.ietf.org/doc/html/rfc8072
Follow principles/patterns of RESTCONF NMDA, RFC 8527: https://datatracker.ietf.org/doc/html/rfc8527
Requirements
Note: this section was added long after the implementation and focuses on characteristics only.
Characteristics - WIP
It is proposed that the reported characteristics will be used as a baseline for NCMP once agreed and signed off.
...
Expected duration
...
100 ms to get module references
1000 ms to get module resources
...
N/A
...
N/A
...
N/A
...
5
...
10
...
N/A
...
TBD e.g. 5 KB
...
25 op/sec
...
10
...
N/A
...
within 1 sec
...
10
...
N/A
...
13 ops/sec
...
10
...
Within 2 sec
Note
- This is for mixed TCs
- Single KPIs will be monitored in the NCMP-owned pipeline with our performance runs every day (2-hour interval) - Performance
- New BATCH KPI - TBD
- Organise a call to walk through the code and TCs with //; the aim is to see the code of the test cases. Csaba Szabó to walk through the TCs with the CPS team
- Check registration & de-registration; agree with //
Open Questions
- Concurrency: which number are we agreeing to test? 10 parallel operations according to the input load; the average response time mentioned in the comment assumes 20 parallel operations
- DMI delay - we need feedback from ETH on this
- Is it better to keep read and write use cases separated, as in the FS, for both throughput and response time, or should these be merged on NCMP?
- Check registration - Kolawole Adebisi-Adeolokun to refer to Michelle's email
- Check de-registration: was this ever agreed?
Ongoing Discussion
- Share test cases per KPI with CPS. AP @Csaba Szabó to walk through TCs with Toine & Daniel (CPS)
- Functional Specification document.
- Confluence: Configuration Handling Functional Specification.
i. Current content shows mixed test cases, with all TCs happening in parallel. AP @Csaba Kocsis to clarify what is being run in parallel and update the table accordingly. - Waiting for feedback
ii. AP @Csaba Kocsis to update UseCase_0005_IdSearch_NoFilter & UseCase_0008_Search_NoFilter to merge the Input Load column to show a total of 5 parallel requests. - Waiting for feedback
iii. AP @Csaba Kocsis to notify Product Engineering (Dagda) about the change. - Waiting for feedback
- AP @Csaba Kocsis, @Toine Siebelink, @Kolawole Adeolokun: Review the FS and identify if additional information is required. ONGOING
...
- Question about the FS and stability test strategy – what is expected to be supported in parallel and what is expected to be tested in series? How is that decided? AP Michelle & @Csaba Kocsis - Waiting for feedback
...
Issues & Decisions
 | Issue | Notes | Decision
---|---|---|---
1 | KPI for de-registration of 100 CM-handles | This was mentioned. Was this ever agreed, and is it a valid use case that needs to be covered together with registration? | Not a priority for now, but acceptable if we match the registration requirement.
2 | DMI delay | Could we get some feedback on DMI delays for the other use cases, as these are not mentioned in the FS document? | Awaiting ETH feedback. AP on Kolawole Adebisi-Adeolokun and Csaba Kocsis. Provided.
3 | Number of instances | In some cases ETH have used 2 instances; can we verify the number of instances for each use case? Some of the requirements were defined per instance and the resources used: identify which of these. | Agreed; CPS use 1 instance currently, but should focus on aligning performance with 2 instances for all use cases.
4 | Input load distribution for the CM-handle search and ID search | Currently there are 5 parallel requests between them, distributed at 2.5 each. This fractional distribution isn't feasible for parallel processing; the load should be allocated as whole numbers. Would it be acceptable to adjust this distribution to either 2 or 3 parallel requests each (and vice versa) without any negative repercussions? | Agreed to a combined total of 6 parallel requests, divided into 3 parallel requests each.
5 | CM-handle search and ID search | The FS only identified module performance; has any testing been done towards a combined search of properties and modules in a single query? | Confirmed no other testing was previously done on this. CPS has the capability to do mixed testing. ETH to confirm whether they want to consider this (Csaba Kocsis).
Requirements
Note: this section was added long after the implementation and focuses only on characteristics and enhancements identified after this study.
Characteristics
It is proposed that the reported characteristics will be used as a baseline for NCMP once agreed and signed off.
 | Operation | Concurrent requests/parallel | DMI Delay | Response size | Performance Requirement (Blue Stone Tablet KPI) | Notes | Sign-Off
---|---|---|---|---|---|---|---
1 | Registration of 20,000 CM-handles (in batches of 100) | 1 (requests are sequential) | 100 ms to get module references | N/A | | |
2 | De-registration of 100 CM-handles | 1 (requests are sequential) | No module delays | N/A | | De-registration is currently not mentioned in the Stone Tablet KPI or FS; however, we have agreed to match the performance of registration for now, as de-registration is also not a priority at this point in time. |
3 | CM-handle ID search with module filter | 5 parallel requests | N/A | 20,000 CM handles, i.e. 100 * 20,000 = 2 MB | 2 seconds/operation | FS stated 5 parallel requests for each of ID search and search, meaning a combined total of 10 parallel search requests. |
4 | CM-handle search with module filter | 5 parallel requests | N/A | 20,000 CM handles, i.e. 500 * 20,000 = 10 MB | 15 seconds/operation | FS stated 5 parallel requests for each of ID search and search, meaning a combined total of 10 parallel search requests. |
5 | Synchronous single CM-handle pass-through read | 4 (parallel operations) | 300 ms | 5 KB | 10 requests/second | Reads are done in parallel with writes and searches. Note: CPS will test pass-through read using both cmHandleId and alternateId. |
6 | Synchronous single CM-handle pass-through write (CUD) | 4 (parallel operations) | 670 ms | 5 KB | 5 requests/second | No response is expected. |
7 | Batch/bulk read | 60 read requests with 200 cmHandles each at 1 request/second | | | 150 cmHandles/second | |
Notes
- This is for mixed TCs
- Single KPIs will be monitored in the NCMP-owned pipeline with our performance runs every day (2-hour interval) - Performance
- Test cases 3 through 7 are to run in parallel.
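As a minimal sketch of how the pass-through read KPI in row 5 (4 parallel operations, 300 ms DMI delay, 10 requests/second) could be spot-checked outside the official performance pipeline, the snippet below fires a fixed number of synchronous pass-through reads from 4 worker threads and reports the achieved throughput. The host, path layout, datastore name and CM-handle ids are assumptions for illustration only; the authoritative numbers come from the NCMP performance pipeline mentioned above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Minimal throughput sketch, assuming an NCMP instance at the URL below and a
// DMI stub adding ~300 ms delay. Four worker threads mirror the '4 parallel
// operations' column; the result is compared against the 10 requests/second KPI.
public class PassThroughReadKpiSketch {

    private static final String URL_TEMPLATE =
            "http://ncmp-host:8080/ncmp/v1/ch/%s/data/ds/ncmp-datastore:passthrough-operational"
                    + "?resourceIdentifier=%s"; // assumed path layout

    public static void main(String[] args) throws Exception {
        int parallelOperations = 4; // from the characteristics table
        int totalRequests = 200;    // arbitrary sample size for this sketch

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        ExecutorService pool = Executors.newFixedThreadPool(parallelOperations);

        List<Callable<Integer>> tasks = new ArrayList<>();
        for (int i = 0; i < totalRequests; i++) {
            String cmHandleId = "cm-handle-" + (i % 20000); // hypothetical handle ids
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(String.format(URL_TEMPLATE, cmHandleId, "/parent/child")))
                    .GET()
                    .build();
            tasks.add(() -> client.send(request, HttpResponse.BodyHandlers.ofString()).statusCode());
        }

        long start = System.nanoTime();
        List<Future<Integer>> results = pool.invokeAll(tasks);
        for (Future<Integer> result : results) {
            result.get(); // propagate any request failure
        }
        double elapsedSeconds = (System.nanoTime() - start) / 1_000_000_000.0;
        pool.shutdown();

        double throughput = totalRequests / elapsedSeconds;
        System.out.printf("%.1f requests/second (KPI target: 10 requests/second)%n", throughput);
    }
}
```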
Synchronous single cm-handle pass-through (read) requests
...
...
...