
Sorted inputs - by category

Release Process

  • Self-release process had a few hiccups but seems better now
  • Gerrit/Jenkins/Jira stability issues still occur sporadically

Product Creation

  • The Job Deployer for OOM (and now SO) was a great CI improvement; there was a noticeable impact on merge reviews when the Job Deployer was offline due to lab issues.
  • Addition of Azure public cloud resources is helping with the verify job load.
  • Need to continue adding projects to the CI pipeline, with tests targeted at each specific project (e.g., instantiateVFWCL, ./vcpe.py infra, rescustservice for SO)


Testing Process

  • Adding the Orange and Ericsson labs to installation testing was a good change; more labs are coming on board.
  • There are still some issues due to lab capabilities; things seem better, but some slowness still occurs and is hard to troubleshoot (infrastructure still has a significant impact)
  • CSIT refactoring provided more clarity (some tests were running on very old versions, sometimes unmaintained since Casablanca); moreover, teams were not notified of errors (changed in Frankfurt)
  • Still space for improvements
    • Robot healthchecks do not have the same level of maturity from one component to another: some still PASS even when the component is clearly not working as expected (they only check that a web server answers, without really exercising the component's features). Good examples should be promoted as best practices.
    • CSIT tests are more functional tests (which is good); integration tests in the target deployment (using OOM) should be possible by extending the gating to the different components (but resources are needed)
    • There is still a lot of manual processing to deal with the use cases; there is no programmatic way to verify all the release use cases on any ONAP solution
    • It is hard to get a good view of the real coverage in terms of APIs/components; the Daily chain mainly used VNF-API, for instance, and there are no end-to-end automated tests covering Policy/DCAE
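The healthcheck-maturity point above can be made concrete with a small sketch. This is purely illustrative (none of these names come from any ONAP project): it contrasts a shallow "is the web server answering?" probe with a deeper check that also performs a real feature round-trip, so that a broken backend behind a healthy web server reports FAIL instead of PASS.

```python
# Hypothetical sketch of a "deep" healthcheck, as contrasted with a shallow
# reachability-only check. All names (HealthReport, deep_healthcheck, and the
# probe callables) are illustrative assumptions, not actual ONAP code.

from dataclasses import dataclass


@dataclass
class HealthReport:
    reachable: bool   # did the component's endpoint answer at all?
    feature_ok: bool  # did a real feature round-trip succeed?

    @property
    def status(self) -> str:
        # A shallow check would report PASS on reachability alone;
        # here PASS additionally requires the feature probe to succeed.
        return "PASS" if self.reachable and self.feature_ok else "FAIL"


def deep_healthcheck(ping, create_and_fetch) -> HealthReport:
    """Run a reachability probe plus a feature probe.

    `ping` returns True when the component's endpoint answers;
    `create_and_fetch` performs a small round-trip through the component
    (e.g. create a record, then read it back) and returns True on success.
    """
    reachable = ping()
    # Only attempt the feature probe when the endpoint is reachable.
    feature_ok = reachable and create_and_fetch()
    return HealthReport(reachable=reachable, feature_ok=feature_ok)


# Example: a component whose web server answers but whose backend is broken.
# A reachability-only healthcheck would have reported PASS here.
report = deep_healthcheck(ping=lambda: True, create_and_fetch=lambda: False)
print(report.status)  # prints "FAIL"
```

In a Robot Framework healthcheck, the equivalent would be a test case that exercises one representative feature of the component rather than only asserting an HTTP 200 from its web server.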

ONAP Project Oversight

  • Jira was an improvement over wiki pages for milestone tracking, but it still seems onerous for the PTLs
  • Need automated test-result tracking for use cases (partially done in El Alto through the first PoC leveraging xtesting: a test DB was used and collected results from the E/// and Orange labs)

Others

  • PTL meetings seem more like tracking calls. Might want to consider a PTL committee that would run the PTL meetings.
  • Rocket.Chat over SSL seemed to work for folks; need to consider moving this to a supported solution.
  • Rocket.Chat private server ACL issues
    • onapci.org access to Jenkins conflicts with Rocket.Chat, since they share the same gateway
    • IP ACLs that blocked some high-volume log downloads from Jenkins also blocked Rocket.Chat access from some proxies

====================================================================================================================================================

Have we fixed anything captured during the Dublin Retrospective?

Release Process

Product Creation

  • The OOM verify job was very helpful, finding defects before they were merged into the charts
  • SECCOM TTL for test certificates doesn't impact the security - probably longer than a year
  • AAF underestimated the difficulty of changing the service locator
    • Infrastructure changes need to start very early in the release 
    • Plan for backward compatibility up front
  • The addition of the OJSI Jira has been useful to create more focus on security issues; vulnerabilities are easier to track
    • Nexus vulnerability analysis is still very difficult; some innovation is needed

Testing Process

  • No overall platform team - not enough people have a picture of the complete platform
  • Offline installation is finding new issues; it could be done earlier to catch them sooner
  • Need to do installation testing in places other than just Windriver to catch environment assumptions
    • Diverse lab test envs (Orange) was helpful
    • Many versions of OpenStack
  • Project teams engaging in the integration process accelerated integration; Zoom shared debug sessions were much faster than Jira
  • An early start to integration testing helps resolve issues sooner than at the tail end of the release
  • The RC0 attempt to integrate and test was not successful; without CI/CD, a large set of changes integrated at once negatively impacted system stability

ONAP Project Oversight

  • A PoC should not interfere with the content/timeline of the ongoing ONAP release (i.e., Dublin), and should not hold up the following release (i.e., El Alto)
  • Retrospectives best done in person
  • Delay in Dublin - issues not found until the very end
    • Testing started earlier in the release
    • Functional deployments exposed more defects and happen late in the release
      • Intersystem testing should start sooner
      • Instantiate vFW as a step in the CI/CD process - fully automated testing and very stable
    • Requirements came too late in the process
    • Need to begin the testing of Functional requirements earlier
    • Teams need to be transparent about the actual status of the milestones; for example, projects marked as code complete were not complete
    • Delay generated by the management of the Casablanca Maintenance Release
  • Lack of available lab resources   
  • Need to start with the Keystone v3 install in El Alto; installation jobs will be changed (authentication)



