
OOM gating has been introduced for Dublin.

It consists of a deployment followed by a set of tests run on patchsets submitted to the OOM repository.

The CI part is managed on gitlab.com and the deployment is executed on the ONAP Orange lab.

The goal is to provide feedback - and ultimately to vote - on code changes prior to merge, in order to consolidate the Master branch.

The developer can evaluate the consequences of his/her patchset on a fresh installation.


The gating is triggered in 2 scenarios:

  • a new patchset is submitted to OOM (so if you submit 8 patchsets, the gating will be queued 8 times)
  • a comment containing the magic word oom_redeploy is posted in the Gerrit comment section (see the sketch below)
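
If you prefer to post the magic word from a script rather than from the Gerrit UI, the Gerrit REST review endpoint can be used. A minimal sketch in Python, assuming you have a Gerrit HTTP password; the change number and credentials below are hypothetical values to adapt:

    # Minimal sketch: post the oom_redeploy magic word as a review comment.
    # GERRIT_URL, CHANGE_ID and AUTH are hypothetical values to adapt.
    import requests

    GERRIT_URL = "https://gerrit.onap.org/r"
    CHANGE_ID = "12345"                     # number of your OOM review
    AUTH = ("my-user", "my-http-password")  # Gerrit HTTP credentials

    resp = requests.post(
        f"{GERRIT_URL}/a/changes/{CHANGE_ID}/revisions/current/review",
        json={"message": "oom_redeploy"},
        auth=AUTH,
    )
    resp.raise_for_status()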


Please note that it is just an indicator: as it is based on the OOM Master branch, the errors reported in the results may be due to already existing code and not be related to the patchset itself.

It is usually trivial to see when the patchset has nothing to do with the errors, but in some cases pods or helm charts may already be failing before a patchset that modifies code from the same component.

The goal is to converge towards a rolling release, meaning that Master would always be valid; this may take some time and would require some evolution in the management of docker versioning.


The Gating process can be described as follows:


In order to simplify the integration and avoid dependencies towards manifests or branches that quickly become out of sync, a 3-step gating was introduced. It works as follows:

On patchset submission, the gating chain is triggered. It can be seen in the Gerrit History window.

A fourth notification is done to display the results in the Gerrit review.

Please note that submitting another patchset retriggers the deployment and testing sequence. At the moment there is no filter, so even a documentation-only change triggers the pipeline.


At the end of the processing, the ONAP jobDeployer reports the results.


If you follow the links you will reach the xtesting-onap gitlab page (on gitlab.com).

Note you need a gitlab account to access these pages.


This link is unique and corresponds to the gating pipeline related to a specific patchset.

You can download the artifact that corresponds to this specific test.

If you want to download all of the results, first you need to select the pipeline.

You should see the following menu:

Then, if you want to download the artifact with all the results, select the page stage and click download (or fetch it from a script as sketched below).
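
If you need to fetch the artifact from a script instead of the web UI, the GitLab jobs API can be used. A minimal sketch in Python; the project id, job id and token are hypothetical values taken from the pipeline page:

    # Minimal sketch: download the artifacts archive of a gating job.
    # PROJECT_ID, JOB_ID and TOKEN are hypothetical values to adapt.
    import requests

    GITLAB_API = "https://gitlab.com/api/v4"
    PROJECT_ID = "1234567"     # id of the xtesting-onap project
    JOB_ID = "98765432"        # id of the job holding the artifacts
    TOKEN = "my-gitlab-token"  # personal access token

    resp = requests.get(
        f"{GITLAB_API}/projects/{PROJECT_ID}/jobs/{JOB_ID}/artifacts",
        headers={"PRIVATE-TOKEN": TOKEN},
    )
    resp.raise_for_status()
    with open("results.zip", "wb") as archive:
        archive.write(resp.content)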


Each test is listed below with its description, its pass/fail criteria and where to find its results.
Test: onap_healthcheck_k8s/onap-k8s
Description: checks the status of the ONAP pods.
Criteria: FAIL if at least 1 pod is in a non-Running state (see the sketch below for an equivalent manual check).
Results: logs available under results/healthcheck-k8s/k8s/xtesting.log

In the logs you will find:

  • the list of pods
  • the list of deployments
  • the list of SVC
  • the describe of the non-Running pods
  • the events

Results are also published in a public database:

http://testresults.opnfv.org/onap/api/v1/results?case_name=onap-k8s&build_tag=gitlab_ci-functest-rancher-baremetal-daily-master-<Gerrit_ID>
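
For reference, an equivalent pod check can be run manually against the same cluster. A minimal sketch using the official Kubernetes Python client; the actual onap-k8s test may perform additional checks, and the onap namespace is an assumption based on the default OOM deployment:

    # Minimal sketch: list ONAP pods that are not Running (completed job pods
    # report the Succeeded phase and are not counted as failures here).
    from kubernetes import client, config

    config.load_kube_config()   # use load_incluster_config() when run from a pod
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod(namespace="onap")
    not_running = [p.metadata.name for p in pods.items
                   if p.status.phase not in ("Running", "Succeeded")]

    if not_running:
        print("FAIL - pods not in Running state:", not_running)
    else:
        print("PASS - all ONAP pods are Running")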

Test: onap_healthcheck_k8s/onap-helm
Description: checks the status of the ONAP helm charts.
Criteria: FAIL if at least 1 chart is not DEPLOYED (see the sketch below for an equivalent manual check).
Results: logs available under results/healthcheck-k8s/k8s/xtesting.log

In the logs you will find:

  • the list of helm charts and their status
  • the details of the non-DEPLOYED helm charts

Results are also published in a public database:

http://testresults.opnfv.org/onap/api/v1/results?case_name=onap-helm&build_tag=gitlab_ci-functest-rancher-baremetal-daily-master-<Gerrit_ID>
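
A similar chart check can be reproduced by shelling out to the helm CLI. A minimal sketch in Python; the --output json flag and the Releases/Status JSON keys are assumptions based on a Helm 2 client and may differ with other Helm versions:

    # Minimal sketch: list helm releases that are not in the DEPLOYED state.
    # JSON keys ("Releases", "Name", "Status") assume a Helm 2 client.
    import json
    import subprocess

    output = subprocess.check_output(["helm", "ls", "--output", "json"])
    releases = json.loads(output).get("Releases", [])

    not_deployed = [r["Name"] for r in releases if r["Status"] != "DEPLOYED"]
    if not_deployed:
        print("FAIL - charts not DEPLOYED:", not_deployed)
    else:
        print("PASS - all ONAP charts are DEPLOYED")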


Test: healthcheck

  • core
  • small
  • medium
  • full

Description: traditional Robot healthcheck test suites with the label core, small, medium or full. The test suites are run in parallel (see the sketch below).
Criteria: FAIL if 1 of the healthchecks is FAIL.
Results: logs available under results/healthcheck/<core|small|medium|full>/xtesting.log

Robot outputs are available in results/healthcheck/<core|small|medium|full>/<core|small|medium|full>

Results are also published in a public database:

http://testresults.opnfv.org/onap/api/v1/results?case_name=core&build_tag=gitlab_ci-functest-rancher-baremetal-daily-master-<Gerrit_ID>
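
To reproduce the labelled suites locally, Robot Framework's Python API can be used. A minimal sketch; the path to the checkout of the ONAP Robot testsuites is a hypothetical value, and the real gating drives this through xtesting rather than a hand-written script:

    # Minimal sketch: run the core/small/medium/full labelled suites in parallel.
    # TESTSUITES is a hypothetical path to a checkout of the ONAP Robot testsuites.
    from concurrent.futures import ProcessPoolExecutor
    import robot

    TESTSUITES = "/opt/testsuite/robot"

    def run_label(label):
        # robot.run returns 0 when every test tagged with this label passed
        rc = robot.run(TESTSUITES, include=label,
                       outputdir=f"results/healthcheck/{label}")
        return label, rc

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            for label, rc in pool.map(run_label, ["core", "small", "medium", "full"]):
                print(label, "PASS" if rc == 0 else "FAIL")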

Test: healthdist
Description: Robot test to check the onboarding and distribution of the model in ONAP.
Criteria: FAIL if the csar cannot be retrieved or the model cannot be distributed.
Results: logs available under results/healthcheck/healthdist/xtesting.log

Test: End to End tests

  • basic_vm (ubuntu16)
  • freeradius (freeradius on basic_vm + use of NBI for instantiation)
  • clearwater_ims

Description: they correspond to a full onboarding/instantiation of some VNFs. They are triggered only if the core and small healthcheck test suites are successful. The tests include the creation and the cleaning of the resources.
Criteria: FAIL if the VNF cannot be created at any stage (onboarding, distribution, instantiation, ...). If basic_vm fails, the other cases are not executed to save time.
Results: logs available under results/vnf/vnf/xtesting.log

Results are also published in a public database (see the query sketch below):

http://testresults.opnfv.org/onap/api/v1/results?case_name=basic_vm&build_tag=gitlab_ci-functest-rancher-baremetal-daily-master-<Gerrit_ID>
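
The public database can also be queried from a script instead of the browser. A minimal sketch in Python mirroring the URLs above; the Gerrit review number is a hypothetical value and the JSON field names are assumptions about the test API response format:

    # Minimal sketch: fetch the published results for a given case and patchset.
    # gerrit_id is a hypothetical review number; JSON field names may differ.
    import requests

    API = "http://testresults.opnfv.org/onap/api/v1/results"
    gerrit_id = "12345"

    resp = requests.get(API, params={
        "case_name": "basic_vm",
        "build_tag": f"gitlab_ci-functest-rancher-baremetal-daily-master-{gerrit_id}",
    })
    resp.raise_for_status()
    for result in resp.json().get("results", []):
        print(result.get("case_name"), result.get("criteria"), result.get("start_date"))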

