

Motivation

The most obvious solution for taking full advantage of CSIT is to add the test cases to the same repository as the functionality they test (instead of keeping them in a separate, centralized CSIT repository as we currently do). This would have the following benefits:

  • CSIT could be triggered by any commit to the project repo
  • CSIT tests the code (specifically, the Docker images that have been built) from the committed branch
  • CSIT could have a vote on the commit based on the result of the test run
  • If the implementation changes require changes in CSIT tests (to cover new functionality or to pass in the first place), that could be handled within the same commit
  • Ideally, local verification would become less complex (no need to work across the CSIT repo and the project repo)
  • No need for the Integration team to merge changes related to your project

Issues

Given that the CSITs are currently a very colourful collection of suites with different scopes and strategies, transitioning them into the project repositories is not necessarily trivial:

  • CSIT suites that test components from multiple project repositories at the same time 
    • such CSIT tests may have to be separated using additional simulators, or
    • project repository structures themselves may have to be reconsidered
    • In any case, the division between the images under test and images that are just necessary dependencies should be clearly made and documented
      • the images under test should be coming from the commit branch
      • in the case of necessary dependencies it must be decided whether they should be provided as simulators or real components
        • if provided as real components, they should be referred to with a fixed, unchanging version number and should be stable and mature enough to develop on
          • ONAP's (unintentional?) practice of allowing the same versioned image to be changed might prove problematic
      • project-specific simulators are a trivial case from the dependency-handling point of view, but if common simulators are in use, the dependency considerations for them are the same as for real components
  • Jenkins templates may have to be redesigned to support a unified approach for triggering the review-branch-specific docker build, CSIT execution and voting chain as part of review verification
    • What is the most elegant and effortless solution for ensuring that the docker images built from commit branch are taken in use in the CSIT (both in Jenkins and in local development environment)?
  • CSITs would become blockers for merging code
    • local pre-commit verification should be supported better by common CSIT tools
    • are all projects and suites mature enough to deal with that?

  • Docker image production practices will have to be unified (see Docker Image Build Guidelines, Independent Versioning and Release Process and Release Versioning Strategy)
    • To which extent are the current guidelines actually followed?
    • Are the guidelines good enough for CSIT development as they are now if they are followed?
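To make the chain above concrete: one way to ensure the CSIT run picks up exactly the image built from the review branch is to give that image a unique, human-traceable tag. The sketch below assumes hypothetical names throughout (the `review_tag` helper, the `DOCKER_IMAGE_TAG` variable, the `run-csit.sh` invocation); only `JOB_NAME` and `BUILD_NUMBER` are standard Jenkins variables, and the actual ONAP job templates may do this differently.

```shell
#!/bin/sh
# Sketch only: derive a unique tag for the image built from the review
# branch, so the CSIT run uses exactly that image rather than a released
# one from the registry. All names other than the standard Jenkins
# JOB_NAME/BUILD_NUMBER variables are hypothetical.

# Turn a Jenkins job name + build number into a valid docker tag
# (lowercase, slashes replaced), e.g. "ccsdk/distribution" + "42"
# becomes "ccsdk-distribution-42".
review_tag() {
    printf '%s-%s\n' "$1" "$2" | tr '/A-Z' '-a-z'
}

review_tag "ccsdk/distribution" "42"   # prints: ccsdk-distribution-42

# In the docker-build job, the tag would then be applied and exported,
# and the CSIT setup scripts would consume it instead of a fixed version:
#   TAG=$(review_tag "${JOB_NAME}" "${BUILD_NUMBER}")
#   docker build -t "onap/example-component:${TAG}" .
#   DOCKER_IMAGE_TAG="${TAG}" ./run-csit.sh plans/example-component/healthcheck
```

A tag built from the job name and build number would also answer the traceability concern below: a human reader can follow the tag straight back to the Jenkins build that produced the image.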

Technical details to be decided

  • Should we keep separate docker build and CSIT jobs and just chain them into review verification, or should we try to incorporate docker building and CSIT execution into existing review jobs?
    • Reusing existing jobs and chaining them would require some docker image tag tuning to make sure CSIT tests pick up the exact image that was produced by preceding docker build job 
    • Either way, JJB templates will have to be touched
    • How should the following issue be solved?
      • give unique tag to the docker image to be tested in review and pass it to CSIT job
        • what should the tag be?
          • timestamp (from maven.build.timestamp) already seems to be used, but how to extract it?
          • jenkins build identifier? That can be determined from the triggered CSIT job, and it would also give a human reader a direct way to trace afterwards where the image came from (I'm intending to use this in my initial PoC with CCSDK)
          • gerrit commit id that triggered the job?
          • sha-256 of the docker image? 
        • how to pass the parameters? 
          • triggered jenkins build identifier can be found from ${BUILD_URL}/api/json (if triggered at all)
          • the reverse-trigger mechanism used as the basis of the current trigger_jobs doesn't seem to support parameter passing?
          • file-based or some other custom mechanism?
          • replace reverse trigger mechanism with normal trigger with parameters (i.e. define the trigger in the triggering job instead in the triggered job)?
      • Do we need new docker image job templates for in-review docker builds or can the existing ones be reused somehow?
      • Is it possible to chain triggered jobs and let them all vote on the original review or does it need an umbrella job?
  • Should we still have common CSIT scripts (run-csit.sh etc) in CSIT repo and related procedures (setup, tests, teardown and related result collection) as the basis of project-specific test execution? 
  • Executing the CSIT tests and incorporating locally built test images should be made as easy as possible, following common guidelines
    • Setting up the test environment (project-specific dependencies should be handled by the setup scripts)
    • Specific environment variables expected by the test suite (like GERRIT_BRANCH)
  • What is the significance of Java/Python/etc. SNAPSHOT/STAGING/RELEASE artifacts in Nexus? Do they have any actual role, or are the docker builds always creating those artifacts for themselves on the fly (against current Docker Image Build Guidelines)?
    • The Maven repository is empty
    • The NuGet repository (whatever that is) has various rather old Mongo NuGet archives that don't seem to be produced by anything in ONAP?
    • PyPI repository has a lot of 3rd party Python Wheel packages and two relatively recent tar packages from ONAP (onap_dcae_cbs_docker_client-1.0.1.tar.gz and onap-dcae-dcaepolicy-lib-2.4.1.tar.gz)
    • npm repository has various versions of clamp-ui tar packages 
  • What about code coverage/Sonar? Apparently there are currently no templates dealing with Sonar (instead, all the projects have their own custom Sonar JJB definitions), and all the Sonar runs happen on a daily schedule instead of being triggered
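As a sketch of the "triggered jenkins build identifier can be found from ${BUILD_URL}/api/json" option above: in the real chain the JSON would come from something like `curl -s "${BUILD_URL}api/json"`, while here a canned response stands in for it. The field layout (`subBuilds`/`buildNumber`) is an assumption about the multijob response format and must be checked against a real Jenkins instance, so treat this as an illustration rather than a working recipe.

```shell
#!/bin/sh
# Sketch only: extract the identifier of the triggered CSIT build from a
# Jenkins api/json response. The canned JSON below stands in for what
# curl would fetch from ${BUILD_URL}api/json, and its field layout is an
# assumption, not a verified Jenkins response shape.
api_json='{"number":118,"subBuilds":[{"jobName":"project-csit-verify","buildNumber":57}]}'

# Pull out the triggered build number with POSIX tools only, so the same
# snippet works both in Jenkins shell steps and in a local environment.
csit_build=$(printf '%s' "$api_json" \
    | sed -n 's/.*"buildNumber":\([0-9][0-9]*\).*/\1/p')

echo "Triggered CSIT build: ${csit_build}"   # Triggered CSIT build: 57
```

Note that the "normal trigger with parameters" option listed above would avoid this lookup entirely: the docker-build job would simply hand the image tag to the CSIT job as a predefined build parameter.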

Project status and readiness at the end of Guilin

