OOM gating was introduced for the Dublin release.
It consists of running a full deployment followed by a set of tests on each patchset submitted to the OOM repository.
The CI part is managed on gitlab.com and the deployment is executed on the ONAP Orange lab.
...
The developer can then evaluate the consequences of the patchset on a fresh installation.
The gating is triggered in two scenarios:

- a new patchset is submitted to OOM (so if you submit 8 versions of the patch, the gating is queued 8 times)
- a comment containing the magic word oom_redeploy is posted in the Gerrit comment section
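For the second trigger, the comment can be posted through Gerrit's SSH command-line interface. A minimal sketch, in which the host, change number and patchset number are placeholder assumptions and the command is printed rather than executed (running it requires configured SSH access to Gerrit):

```shell
# Placeholders: adjust the host and the change,patchset numbers to your review.
GERRIT=gerrit.onap.org
CHANGE=12345
PATCHSET=2
# 'gerrit review -m' posts a comment on the given change,patchset over SSH (port 29418).
REVIEW_CMD="ssh -p 29418 $GERRIT gerrit review $CHANGE,$PATCHSET -m '\"oom_redeploy\"'"
# Printed instead of executed, so the sketch stays side-effect free.
echo "$REVIEW_CMD"
```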
Please note that the result is only an indicator: since the gating is based on the OOM master branch, the errors reported in the results may be due to pre-existing code and not related to the patch itself.
It is usually easy to see when the patch has nothing to do with the errors, but in some cases pods or helm charts may already be failing before the patchset modifies code from the same component.
The goal is to converge towards a rolling release, meaning that master would always be valid, but this may take some time and would require some evolution in the management of docker versioning.
...
In order to simplify the integration and avoid dependencies towards manifests or branches that quickly become out of sync, a 3-step gating was introduced. It can be described as follows:
On patchset submission, the gating chain is triggered. It can be seen in the Gerrit History window.
...
A fourth notification displays the results in the Gerrit review.
Please note that if you submit another patchset, it will retrigger the deployment and testing sequence. At the moment there is no filter, so even a documentation-only change triggers the pipeline.
...
Note that you need a gitlab.com account to access these pages.
This link is unique and corresponds to the gating pipeline of a specific patchset.
You can download the artifact that corresponds to this specific test.
If you want to download all the results, you first need to select the pipeline.
You should see the following menu:
...
Then, to download the artifact with all the results, select the stage and click on download.
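Artifacts can also be fetched through the GitLab jobs API rather than the web UI. A sketch, where the project and job ids are placeholders (look them up in your pipeline page) and the actual download requires a valid personal access token:

```shell
# Placeholder ids: substitute the real project and job ids of your pipeline.
PROJECT_ID=12345
JOB_ID=67890
ART_URL="https://gitlab.com/api/v4/projects/${PROJECT_ID}/jobs/${JOB_ID}/artifacts"
echo "$ART_URL"
# To actually download the artifact zip (needs a valid token):
#   curl --header "PRIVATE-TOKEN: <token>" -o artifacts.zip "$ART_URL"
```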
...
Logs are available under results/healthcheck-k8s/k8s/xtesting.log.
In the logs you will find:

- the list of pods
- the list of deployments
- the list of services (SVC)
- the describe output for non-Running pods
- the events
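As an illustration of the "non-Running pods" part, the filtering can be reproduced on a sample of `kubectl get pods` output (the pod names and states below are made up for the example):

```shell
# Hypothetical sample of 'kubectl get pods -n onap' output
cat > pods.txt <<'EOF'
NAME            READY  STATUS            RESTARTS
onap-aai-0      1/1    Running           0
onap-so-bpmn-0  0/1    CrashLoopBackOff  12
EOF
# Keep only the pods that are not in Running state, as the gating log does
awk 'NR>1 && $3 != "Running" {print $1}' pods.txt
```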
Results are also published in a public database:
...
Logs are available under results/healthcheck-k8s/k8s/xtesting.log.
In the logs you will find:

- the list of helm charts and their status
- the details of non-DEPLOYED helm charts
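The non-DEPLOYED filtering can likewise be reproduced on a sample of `helm ls`-style output (release names and statuses below are made up for the example):

```shell
# Hypothetical sample of 'helm ls' output
cat > charts.txt <<'EOF'
NAME      STATUS
onap-aai  DEPLOYED
onap-so   FAILED
EOF
# Details are reported for the charts whose status is not DEPLOYED
awk 'NR>1 && $2 != "DEPLOYED" {print $1, $2}' charts.txt
```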
Results are also published in a public database:
...
healthcheck
- core
- small
- medium
- full
...
Traditional Robot healthcheck test suites with the label core, small, medium or full.
The test suites are run in parallel.
...
Logs are available under results/healthcheck/<core|small|medium|full>/xtesting.log.
Robot outputs are available in results/healthcheck/<core|small|medium|full>/<core|small|medium|full>
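The layout implied by the two paths above can be sketched locally (directories are created here purely for illustration; real artifacts contain more files, such as Robot Framework's output.xml and report.html):

```shell
# Recreate the described layout for the core suite only, as an illustration
mkdir -p results/healthcheck/core/core
touch results/healthcheck/core/xtesting.log   # xtesting log
touch results/healthcheck/core/core/log.html  # Robot Framework log
find results/healthcheck -type f | sort
```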
Results are also published in a public database:
...
End to End tests
- basic_vm (ubuntu16)
- freeradius (freeradius on basic_vm + use of NBI for instantiation)
- clearwater_ims
They correspond to the full onboarding/instantiation of some VNFs.
They are triggered only if the core and small healthcheck test suites are successful.
The test includes the creation and the cleanup of the resources.
FAIL if the VNF cannot be created at any stage (onboarding, distribution, instantiation, ...).
If basic_vm fails, the other cases are not executed to save time.
Logs are available under results/vnf/vnf/xtesting.log.
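The short-circuit behaviour described above can be sketched as follows. Here run_test is a stub standing in for the real suites, hard-coded so that basic_vm fails:

```shell
# Stub standing in for the real test suites: basic_vm fails, the others pass.
run_test() {
  [ "$1" != "basic_vm" ]
}

if run_test basic_vm; then
  run_test freeradius
  run_test clearwater_ims
else
  # Mirrors the gating behaviour: the remaining VNF cases are skipped.
  echo "basic_vm failed: skipping freeradius and clearwater_ims"
fi
```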
...
Please see Where can I find the list of supported use cases? for the list of tests