Introduction
CSIT ("Continuous System and Integration Testing", even though you could also read that as "Component-Specific Integration Testing") cases in ONAP exist typically for verifying functionalities of a single or few ONAP components in limited testing environments (suitable for lightweight installation in local development environment or and Jenkins) and often rely on simulators. Currently these tests are executed once per day on images from master branch (as well as on maintenance branches of selected previous releases) or whenever the test cases themselves are committed for review. This means that by ONAP's current automated quality assurance procedure any new bugs that could be caught by the existing CSIT cases are not discovered until the bugs are already merged to master. In addition, any failures in daily CSIT Jenkins are not sending any kind of notifications or raising tickets or alarms, so the teams are not even getting any automated feedback from the failures that can therefore go days or in worst cases weeks without notice (much less action).
Goal
The ultimate goal for the automated quality assurance procedure should be that code review verification jobs also run the relevant CSIT cases and cast a verification vote on the review before it is merged. Moreover, developers should always execute those CSIT tests locally before committing, to minimize the chance of a failure in Jenkins and to keep the feedback loop as short as possible.
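To make the goal concrete, the sketch below shows roughly what a Gerrit-triggered CSIT verify job could look like in Jenkins Job Builder terms. It is an illustrative assumption, not an existing ONAP job definition: the template name, node label and the run-csit.sh plan path are placeholders that would have to match the real ci-management templates and csit repository layout.

```yaml
# Sketch only: a hypothetical JJB job template that runs a CSIT plan whenever
# a new patch set is uploaded, so its result can participate in the review's
# verification vote. All names below are placeholders.
- job-template:
    name: '{project-name}-{stream}-csit-verify'
    node: ubuntu1804-docker-8c-8g      # hypothetical builder label
    triggers:
      - gerrit:
          server-name: 'Primary'
          trigger-on:
            - patchset-created-event:
                exclude-drafts: 'false'
                exclude-trivial-rebase: 'false'
                exclude-no-code-change: 'false'
          projects:
            - project-compare-type: 'ANT'
              project-pattern: '{project}'
              branches:
                - branch-compare-type: 'ANT'
                  branch-pattern: '**/{branch}'
    builders:
      - shell: './run-csit.sh plans/{project-name}/healthcheck'
```

Tied to the Gerrit verification label, a job along these lines could block a merge on a failing CSIT plan in the same way unit test verification jobs do today.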
Obstacles
The writer of this analysis is not aware of the original reasons for excluding CSIT from review verification jobs (or whether including them was ever considered), but several reasons for keeping them separate can be seen in the current state of ONAP:
- Many CSIT suites take a long time to execute, significantly prolonging review verification feedback and most likely causing bottlenecks in Jenkins that would lead to even further delays in day-to-day development
- Review-specific verification images would significantly increase the space used in Nexus and could lead to capacity problems if not considered properly
- CSIT tests might not yet be stable enough in all cases to rely on in continuous development: we cannot have random failures delaying or preventing merges, and we have to be able to rely on the results