OOM gating has been introduced for Dublin.
It consists of a deployment followed by a set of tests run on each patchset submitted to the OOM repository.
The CI part is managed on gitlab.com and the deployment is executed on the ONAP Orange lab.
The goal is to provide feedback - and ultimately to vote - on a code change prior to merge, in order to consolidate the Master branch.
The developer can evaluate the consequences of their patchset on a fresh installation.
The gating is triggered in 2 scenarios:
- a new patchset is pushed to OOM (so if you submit 8 patchsets, the gating is queued 8 times...)
- a comment with the magic word oom_redeploy is posted in Gerrit's comment section
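For the second trigger, a comment can also be posted from the command line. The sketch below is a hypothetical example using Gerrit's SSH CLI; the user name, change number, and patchset number are placeholders you must adapt, and the command is printed rather than executed:

```shell
#!/bin/sh
# Hypothetical sketch (not an official procedure): re-trigger the gating on
# an existing patchset by posting the magic word "oom_redeploy" as a Gerrit
# review comment via Gerrit's SSH CLI.
# GERRIT_USER, CHANGE and PATCHSET are placeholder assumptions.
GERRIT_USER=jdoe
CHANGE=12345
PATCHSET=2

# The inner quotes keep the message intact through SSH word splitting.
CMD="ssh -p 29418 ${GERRIT_USER}@gerrit.onap.org gerrit review ${CHANGE},${PATCHSET} --message '\"oom_redeploy\"'"

# Printed instead of executed so the sketch stays side-effect free.
echo "$CMD"
```

Posting the same word through the Gerrit web UI has the same effect.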
Please note that the result is only an indicator: since the gating is based on the OOM Master branch, the errors reported may come from already existing code and may not be related to the patch itself.
It is usually easy to tell when the errors have nothing to do with the patch, but in some cases pods or helm charts may already be failing before the patchset modifies code in the same component.
The goal is to converge towards a rolling release, meaning that Master would always be valid; this may take some time and would require some evolution in the management of docker versioning.
The Gating process can be described as follows:
On patchset submission, the gating chain is triggered. It can be seen in the Gerrit History window.
A fourth notification is posted to display the results in the Gerrit review.
Please note that submitting another patchset re-triggers the deployment and testing sequence. At the moment there is no filter, so even a documentation-only change triggers the pipeline.
At the end of the processing, the ONAP jobDeployer reports the results.
If you follow the links, you will reach the xtesting-onap GitLab pages (on gitlab.com).
Note that you need a GitLab account to access these pages.
This link is unique and corresponds to the gating pipeline for a specific patchset.
You can download the artifacts corresponding to this specific test.
If you want to download all of the results, you first need to select the pipeline.
You should see the following menu.
Then, to download the artifact containing all the results, select the page stage and click download.
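As an alternative to the web UI, artifacts can also be fetched through the GitLab REST API. The sketch below is hedged: the project ID, ref, and job name are placeholder assumptions to adapt to the xtesting-onap project, and the download command is shown as a comment so the sketch needs no network access or token:

```shell
#!/bin/sh
# Hedged sketch: fetching a pipeline's artifact archive through the GitLab
# jobs/artifacts API instead of the web UI.
# PROJECT_ID, REF and JOB are placeholder assumptions.
PROJECT_ID=123456
REF=master
JOB=pages

URL="https://gitlab.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${REF}/download?job=${JOB}"

# To actually download (requires a personal access token):
# curl --header "PRIVATE-TOKEN: <your_token>" --output artifacts.zip "$URL"
echo "$URL"
```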
| Test | Description | Criteria | Results |
|---|---|---|---|
| onap_healthcheck_k8s/onap-k8s | We check the status of the ONAP pods | FAIL if at least 1 pod is in a non-Running state | Logs available under results/healthcheck-k8s/k8s/xtesting.log; results are also published in a public database |
| onap_healthcheck_k8s/onap-helm | We check the status of the ONAP helm charts | FAIL if at least 1 chart is not DEPLOYED | Logs available under results/healthcheck-k8s/k8s/xtesting.log; results are also published in a public database |
| healthcheck | Traditional Robot healthcheck test suites with the label core, small, medium, or full; the test suites are run in parallel | FAIL if 1 of the healthchecks FAILs | Logs available under results/healthcheck/<core\|small\|medium\|full>/xtesting.log; Robot outputs are available in results/healthcheck/<core\|small\|medium\|full>/<core\|small\|medium\|full>; results are also published in a public database |
| healthdist | Robot test to check the onboarding and distribution of the model in ONAP | FAIL if the csar cannot be retrieved or the model cannot be distributed | Logs available under results/healthcheck/healthdist/xtesting.log |
| End to End tests | Full onboarding/instantiation of some VNFs; they are triggered only if the core and small healthcheck test suites are successful; the tests include the creation and the cleanup of the resources | FAIL if the VNF cannot be created at any stage (onboarding, distribution, instantiation, ...); if basic_vm fails, the other cases are not executed to save time | Logs available under results/vnf/vnf/xtesting.log |
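The onap-k8s criterion in the table above can be sketched as a simple status count, assuming the usual `kubectl get pods` column layout. The pod names and states below are fabricated sample data; on a real cluster you would pipe `kubectl get pods -n onap --no-headers` instead:

```shell
#!/bin/sh
# Illustrative sketch of the onap-k8s criterion: FAIL as soon as one pod is
# not in Running (or Completed) state. sample_pods emits fabricated output
# standing in for `kubectl get pods -n onap --no-headers`.
sample_pods() {
cat <<'EOF'
onap-aai-6d4f   1/1   Running            0    2h
onap-so-7c9b    0/1   CrashLoopBackOff   12   2h
onap-sdc-5f2a   1/1   Running            0    2h
EOF
}

# Column 3 holds the pod status in `kubectl get pods` output.
FAILED=$(sample_pods | awk '$3 != "Running" && $3 != "Completed" {n++} END {print n+0}')
if [ "$FAILED" -gt 0 ]; then
  echo "FAIL: $FAILED pod(s) not Running"
else
  echo "PASS: all pods Running"
fi
```

The onap-helm criterion is analogous, counting charts whose `helm ls` status is not DEPLOYED.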