...

Use Case teams are cross-functional in nature: they are composed of a use case leader, developers/committers/contributors, and (indirectly) the ONAP platform members and PTLs from the components that need to be involved. Use Case teams are also dynamic in nature; they are formed when new use cases are defined.



Diagram showing the cross-interaction with other teams (attached to this page).

Use Case Teams Process

...

  • Architecture - The Architecture team should be aware of any major updates coming from the requirements and use case teams. If there is architectural impact arising from your use case, or from the requirements comprising it, present it at the Architecture sub-committee meetings. Changes may include impacts to the functional architecture or to the platform architecture.
    • #1 ARCHITECTURE PROPOSALS - Teams must make Architecture sub-committee presentations following the template: R7 Guilin Architecture Review (template) - functional requirements
    • #1a REQ-TO-ARCH JIRA LINK

      Steps to link your REQ Jira to the ARCH Jira:
      1. Before a requirement can be reviewed by the Architecture sub-committee, the requirement must be documented in JIRA (REQ), and the REQ reference must be submitted when requesting an architecture review.
      2. After the Architecture review sub-committee has completed its review and made a determination, add an issue link from your REQ issue to the review:
        1. Navigate to your requirement issue in JIRA (REQ).
        2. Select Edit.
        3. Scroll down to the "Linked Issues" field (see attachment).
        4. Make sure that the link type is "is blocked by" (see attachment).
        5. Add the JIRA reference for your architecture review (ONAPARC) to the "Issue" field (see attachment).
        6. Click the "Update" button.
      The same link can also be created programmatically, as sketched below.
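For teams that prefer to script this step, the sketch below uses the standard Jira REST API issue-link endpoint. It is a minimal, illustrative example: the issue keys, base URL, and credentials are placeholders, not real tickets, so adapt them to your own environment.

```python
# Minimal sketch: create an "is blocked by" link from a REQ issue to its
# ONAPARC architecture-review issue via the Jira REST API.
# All keys, URLs, and credentials below are placeholders.
import requests

JIRA_BASE = "https://jira.onap.org"        # ONAP Jira instance
REQ_KEY = "REQ-123"                        # hypothetical requirement issue
ARCH_KEY = "ONAPARC-456"                   # hypothetical arch review issue

payload = {
    # For the "Blocks" link type, the inward issue reads "is blocked by"
    # and the outward issue reads "blocks".
    "type": {"name": "Blocks"},
    "inwardIssue": {"key": REQ_KEY},       # REQ-123 is blocked by ...
    "outwardIssue": {"key": ARCH_KEY},     # ... the ONAPARC review
}

response = requests.post(
    f"{JIRA_BASE}/rest/api/2/issueLink",
    json=payload,
    auth=("your-lf-id", "your-api-token"), # placeholder credentials
    timeout=30,
)
response.raise_for_status()
print(f"Linked {REQ_KEY} (is blocked by) {ARCH_KEY}")
```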


    • #2: PROCEDURE - Schedule presentations with the Architecture sub-committee; see Project Architectural Review Requirements and Process (Draft).
  • PTL - The Use Case teams should engage the PTLs to inform them that the requirements in the use cases may impact their particular platform components. This can be done by attending the individual component meetings or coordinated at the Use Case Realization calls. Each platform component often has its own release planning page as well.

...

  • M2 REQUIREMENTS & FUNCTIONALITY FREEZE

    • USE CASE TEAM ENGAGEMENT -
      • FUNCTIONALITY IDENTIFIED - The Use Case/requirements functionality has been specified, agreed to and frozen. No new visible functionality is to be added to the use case after M2.
      • FUNCTIONAL TEST CASES - Each Project team at M2 has defined and documented their Functional Test Cases. The integration teams have been engaged, and each individual project will likely have its own dedicated integration team wiki page.
      • DESCOPING IDENTIFIED - At this point, if functionality cannot be delivered, the Requirements JIRA (your clone of REQ-1) should be updated, and descoped items should be identified and reported to the project management team/TSC.
      • API DOCUMENTATION - The Use Case teams have identified the basic changes to their APIs and have started to document the API changes associated with their use case. The APIs need to be documented and available in the Use Case wiki. The API freeze occurs at M3.
      • VNF REQUIREMENTS - The base set of impacts to VNF requirements is identified by each use case/project. The VNF Requirements project documents requirements for both PNFs and VNFs. Requirements updates should be coordinated with the VNF-REQTS project.
      • DATA MODEL DEVELOPMENT - Discussion of the data model by the Use Case/requirements team with input from the Modeling sub-committee. The Modeling sub-committee will communicate the clean release information model as a refining input to the development of the data model for use by the Use Case teams on specific projects.
      • DATA MODEL REVIEW - Reviews of the data model with the project (Use Case) teams. The data model is reviewed by the Use Case teams, with input from the Modeling S/C, by bringing the developing data model (in the discussion state) to the modeling S/C. It would not be feasible for the members of the modeling S/C to attend all of the various U/C meetings, although one-off sync-ups might occur in this stage. U/C teams that have significant data modeling work are advised to reserve a slot in the modeling S/C meeting(s) to present their data modeling changes and information flows, so that the modeling S/C can advise the U/C team as they develop their data model.
      • MAPPING INFORMATION & DATA MODEL - Mapping between the information model and the data model is also done between the modeling S/C and the Use Case teams. This might happen in the project teams or on the modeling S/C calls. Active discussion and interaction between the Use Case team and the Modeling S/C ensures that the information model and the data model development stay in lock-step.
      • CROSS-REFERENCING JIRA TICKETS - The modeling S/C uses Jira tickets to track activities, and the Use Case teams also use Jira tickets to track platform work, modeling work, epics, and stories, so relevant Jira tickets should be linked or associated together.
      • JOINT REVIEWS - The data model should be reviewed with the Modeling S/C. The data model being developed by the component team uses the component model as input.
      • VENDOR EQUIPMENT - If any vendor equipment needs to be delivered, it should be shipped in this time frame.
    • MODELING SUBCOMMITTEE ENGAGEMENT -
    • For the RELEASE Information Model these are the activities that the Modeling sub-committee is engaged in leading up to M2.
      • RELEASE INFORMATION MODEL (Starting Point) - The release starts with the clean release information model from the PREVIOUS release (with all of its attendant contributions). New contributions for the current release are then considered (see below for the process for handling each specific contribution).
      • DELAYED ELEMENTS OF THE RELEASE INFO-MODEL - If any elements (parameters/classes) of a contribution are delayed and cannot make the current release, those elements stay experimental, and a decision is needed on what happens to the overall contribution and to the release information model (i.e., whether it is still allowed to go clean).
      • INFORMATION MODEL FREEZE - The aggregate/release information model for the release is approved by association with the fragment/component reviews. Each of the fragments (contributions) is individually approved.
      • RELEASE INFO MODEL DECLARED CLEAN - After the component reviews have concluded and the modeling S/C has frozen the release info model, the info model is called the "clean model" at M2. The Use Case teams developing data models can be certain that the information model will be usable as shown. The diagrams and model wiki pages will indicate that this is a clean model, and it is put into the information model for that release. Unfinished contributions are postponed.
    • DISCUSSION OF CONTRIBUTIONS - Each contribution is discussed according to the following process. This is where the Modeling Sub-committee (S/C) refines each of the contribution models. The release information model is not separately tracked, composed, updated, or released during this period; rather, each individual contribution has its own wiki. Thus, for each contribution (a summary sketch of this lifecycle follows the list below):
      • CONSIDER CONTRIBUTION - START: Input Contribution (verb Consider) END: Contribution in Discussion State
        • An individual model contribution is a model that will eventually be part of the total release information model. It is generally a self-contained model which depicts a particular capability or function of the system. The contribution starts as an "input contribution" and undergoes consideration by the modeling sub-committee. Consideration means that the modeling S/C assesses whether the input contribution should be accepted into the current (or a future) release, weighing the contribution's relevance against the available resources (modelers) in the release.
      • REVIEW & REFINE CONTRIBUTION - START: Contribution in Discussion State (verb Reviewing & Refine) END: Contribution in Discussion state
        • The contribution undergoes reviewing & refining during the discussion state. Reviewing & refining means that the modeling S/C discusses the modeling and updates the contribution based on feedback and comments from the modeling team. Each contribution can be reviewed and refined independently and concurrently with other contributions. Classes, attributes, and relationships in the discussion state are tagged IISOMI experimental.
      • FINAL CALL FOR COMMENTS & INITIATE POLLING - START: Contribution in Discussion State (verb Approving/Poll) END: Contribution in Discussion state
        • (a) FINAL PRESENTATION - When the contribution is ready to undergo the approval process, it is brought one final time to the modeling S/C for discussion and socialization.
        • (b) FINAL CALL FOR COMMENTS - After that, a final call for comments is issued by a sub-team lead to the modeling team whereby final thoughts & input can be given. This final call for comments signals that the discussion is wrapping up for this contribution and will soon go to a poll.
        • (c) INITIATING POLL - After final call and no further outstanding comments exist, the contribution is brought to a poll by a sub-committee chair. A poll is created whereby modeling S/C members can give the contribution a vote of "yes" or "no". 
      • APPROVING CONTRIBUTION - START: Contribution in Discussion State Post-Poll (verb Approving) END: Contribution in Clean State
        • After the poll has concluded, the contribution has finished the approval process and is now considered to be in the clean state. The items that were in the IISOMI experimental state are promoted to the preliminary state. A gendoc is generated and put on the wiki page; the gendoc is then translated and published on the readthedocs site.
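The contribution lifecycle above can be read as a small state machine. The sketch below is purely illustrative: the names and transition labels are ours, not part of any ONAP tooling, and it simply encodes the states and verbs described in the bullets above.

```python
# Illustrative sketch of the contribution lifecycle described above.
# The names are ours, for clarity only; this is not an ONAP tool.
from enum import Enum

class ContributionState(Enum):
    INPUT = "input contribution"      # submitted to the modeling S/C
    DISCUSSION = "discussion"         # elements tagged IISOMI experimental
    CLEAN = "clean"                   # elements promoted to preliminary

# Allowed transitions and the activity (verb) that drives each one.
TRANSITIONS = {
    (ContributionState.INPUT, "consider"): ContributionState.DISCUSSION,
    (ContributionState.DISCUSSION, "review_and_refine"): ContributionState.DISCUSSION,
    (ContributionState.DISCUSSION, "final_call_and_poll"): ContributionState.DISCUSSION,
    (ContributionState.DISCUSSION, "approve"): ContributionState.CLEAN,
}

def advance(state: ContributionState, activity: str) -> ContributionState:
    """Return the next state, or raise if the activity is not allowed."""
    try:
        return TRANSITIONS[(state, activity)]
    except KeyError:
        raise ValueError(f"{activity!r} is not valid from state {state.value!r}")

# Example walk-through of one contribution:
state = ContributionState.INPUT
for step in ("consider", "review_and_refine", "final_call_and_poll", "approve"):
    state = advance(state, step)
    print(step, "->", state.value)
```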


  • Architecture Engagement -
    • M2 ARCHITECTURE WORK - Before M2, the architecture team is refining four basic things: (1) functional architecture, (2) use case impacts, (3) component architecture, and (4) architecture enhancements. At M2, the project teams working on requirements and use cases should be aware of the reviews for the major architecture initiatives and be involved in those reviews.
    • SYNC UP - The architecture sub-committee should have a sync-up with the project teams as a checkpoint to share updates on the project impacts.
  • Components (PTL) Engagement - ONAP platform teams (A&AI, SO, SDC, etc.):
    • FEEDBACK - The platform PTLs should be kept up to date and involved as the project teams freeze requirements, and should discuss any de-scoping that may be necessary. API impacts from projects should also be communicated to the PTLs.
    • SOCIALIZATION - Project scope changes and API updates should be socialized. Attendance at the PTL meetings will help keep the project teams in sync.

...

  • M3 API FREEZE

    • USE CASE / REQUIREMENTS PROJECT TEAMS -
      • API FREEZE - M3 is characterized by the API freeze: the main thing that happens at M3 is that the Use Case / Requirements teams freeze their APIs.
      • DATA MODEL FREEZE - Developers are working to review & finalize the data model in order to develop the API.
        • Info Model - The Modeling S/C develops the info-model; the Use Case teams should present their proposed data model (the one to be frozen) at the Modeling S/C so that the modeling S/C can provide assistance and assess any info-model impacts.
        • DATA MODEL IMPACTING INFO MODEL - If changes in the data model impact the information model, those changes need to be worked by the modeling S/C. The Modeling S/C would evaluate the change to the information model and possibly make updates.
        • EVALUATE DATA MODEL - START: Input Data Model > verb Consider > END: Data Model in Discussion State
          • The data model is a model that is used in a use case and is based on the information model. It is generally a self-contained model which depicts a particular capability or function of the system. The data model starts as an "input data model" and undergoes consideration by the Use Case team. Consideration means that the Use Case team assesses whether the input data model should be accepted. If the Use Case team thinks the data model is not ready for the current release, it might be postponed, and it would be noted as such on the Release Management project page.
        • REVIEW & REFINE DATA MODEL - START: Data Model in Discussion State > verb Reviewing & Refine > END: Data Model in Discussion state
          • The data model undergoes reviewing & refining during the discussion state. Reviewing & refining means that the Use Case teams discuss the data model and update it based on feedback and comments from the Use Case team and the modeling team. Each data model can be reviewed and refined independently and concurrently with other use case projects. Classes, attributes, and relationships in the discussion state are tagged as IISOMI experimental.
        • FINAL CALL FOR COMMENTS & INITIATE POLLING - START: Data Model in Discussion State > verb Approving/Poll > END: Data Model in Discussion state
          • (a) FINAL DATA MODEL - When the data model has reached a point where the use case team feels that it can start to undergo the approval process, the data model is brought one final time to the use case project team.
          • (b) FINAL CALL FOR COMMENTS - After that, a final call for comments is issued by a use case lead whereby final thoughts & input can be given. This final call for comments signals that the discussion is wrapping up for this contribution and will soon go to a poll.
          • (c) INITIATING POLL - After final call and no further outstanding comments exist, the contribution is brought to a poll by a use case lead. A poll is created whereby use case team members can give the contribution a vote of "yes" or "no". 
        • APPROVING CONTRIBUTION - START: Data Model in Discussion State Post-Poll > verb Approving > END: Data Model in Clean State
          • After the poll has concluded, the data model has finished the approval process. The data model is now considered to be in the clean state. The items that are in the IISOMI experimental state get promoted to a preliminary state.
      • RECONCILE DATA & INFO MODEL - Reconciling the info-model with the data-model. If there are impacts to the information model, it should be captured.
      • SWAGGER - Publish Swagger, Edge Rules.
      • MAINTENANCE RELEASE - Use Case teams should determine what content should go to a maintenance release. Content moved to a maintenance release opens a lot of questions: Integration, testing, phasing, re-introduced (later release), re-scoping, tracking & project management.

      • INSIGHTS STATISTICS PAGE - https://lfanalytics.io/projects/lfn%2Fonap/dashboard

      • TASK CHECKLIST AT M3 (task - description):
        1. Update API documentation.
        2. Verify that merge requests are code reviewed.
        3. Update the OOM port list.
        4. Share data models with the Modeling sub-committee.
        5. Complete the Architecture sub-committee review - go to the Architecture sub-committee, present at the meeting, and hold a component review.
        6. Integration blockers.
        7. Review license scan issues.
        8. Update documented risks - update the Project Status page; for release milestone exceptions, submit an exception request.
        9. Resolve all high-priority bugs in Jira - any high-priority bug left open by M3 should be resolved if possible; release notes will be updated with open issues if they are not completed.


      • Example of A&AI tasks at M3 (screenshot attached to this page).
    • MODELING S/C ENGAGEMENT at M3-
      • MODELING S/C ENGAGEMENT - The Use Case teams may wish to solicit the opinion of the modeling S/C and present their data model for discussion and socialization.
      • REFINEMENTS TO THE RELEASE INFO MODEL - The release information model is clean (base-lined) at M3. However, updates can still happen to the release information model and the contributions, and certain elements within the model(s) could go back to an experimental state. Only certain elements (e.g., attributes, ranges) are likely to go to the experimental state, NOT the entire contribution. New additions could be added to a contribution model. A contribution cannot be both clean and experimental: "clean" has a relationship to the IISOMI states, and for an entity to be clean it must be either preliminary or mature.
    • ARCHITECTURE ENGAGEMENT -
      • API FREEZE & ARCHITECTURE - API updates can impact the component architecture, i.e., the architecture related to the ONAP components (A&AI, SO, SDC, etc.). Impacts to the API also affect the architecture landing page portal and the architecture component descriptions; this is where the architecture team captures links to the API descriptions and documentation. Impacts to the API should have been identified during the architecture sub-committee review at M0.
    • COMPONENT  (PTL) ENGAGEMENT -
      • API FREEZE & COMPONENT IMPACT - The API freeze most directly affects the ONAP components (A&AI, SO, SDC, etc.), since the project teams working on use cases and requirements directly impact the software used by the micro-services and platform components. Software changes are tracked in Jira and should be coordinated with the platform component PTLs and technical leads.

...

  • M4 CODE FREEZE

    • USE CASE / REQUIREMENTS PROJECT TEAMS -

    • CODE FREEZE - The Use Case teams are delivering the software for the release. Requirements and use case teams are working to complete the software defined in their Jiras and wiki pages; this includes the tasks listed here.

    • COMPONENT S/W DELIVERY - S/W drops should be coordinated with the component PTLs, and a sync-up with each component and its PTL should be done. Each component that is impacted should already have been tracked at M0 and M1, and each of the component S/W impacts should be tracked by Jira tickets. Historically, certain functionality of a component sometimes cannot be delivered; in this case, an assessment should be made of whether this will impact other platform components or other aspects of the use case.
    • JIRA TICKETS - Jira tickets should be updated with the S/W delivery status. Delayed Jira tickets, or tickets that cannot be completed, need to be communicated to the ONAP release manager. The Jira tickets that track the overall project (the REQ tickets) need to be updated if there have been changes in content or status.
    • DEFERRED - Deferred elements that could not be delivered in the release should be noted. These can now be scheduled for the next release, as by M4 the content of the next ONAP release is generally already being considered and early planning is occurring.
    • INTEGRATION WORK - Integration work and test cases should be worked. The integration teams need to be aware of any delays in software component delivery. If there are items that cannot make it into the current release, the test cases need to be updated.
    • API UPDATES - Swagger updates and API updates should be made if necessary. The API delivery was at M3; however, some things may change going into or during M4, which may require API updates.
    • JIRA S/W BUGS COMMUNICATED - Track new software bugs with Jira tickets as necessary. As new software issues are encountered, they need to be communicated to release management.

    • RELEASE MANAGEMENT REPORTING - During M4, the status of the projects is tracked for evaluation and software delivery. Potential delays need to be communicated to ONAP project management. The TSC will consider and assess the status of each requirement/use case and the health of the release based on the Jira tickets. Any deferrals should be noted with project management.
    • TSC REPORTING - The TSC tracks delivery and the health of the release at M4.
    • MARKETING RELEASE - The marketing report for overall ONAP is being drafted at this point.
    • DOCUMENT GENERATION - Read the Docs updates should be made for the use case. The documentation can be found here: https://onap.readthedocs.io/en/latest/index.html


    • TASK CHECKLIST AT M4 (task - description):
      1. Review license scan issues - Refer to the most recent license scan to determine whether there are any license scan issues, and resolve them. If your project includes open source or proprietary software, it may have licensing issues.
      2. Merge requests - Verify that merge requests have been code reviewed and merged.
      3. Address all security issues - See the Security Violation page; some are common vulnerabilities across all of ONAP, depending on the libraries used.
      4. FOSS wiki - Run the Maven command that shows the dependency tree (mvn dependency:tree) and upload the output to the Project FOSS page.
      5. High-priority Jira issues - Resolve high-priority Jira issues and document any remaining open issues.
      6. Release platform maturity & CII - Review the Platform Maturity Requirements (S3P): performance, stability, resilience, security, scalability, and manageability. See, for example, the Project Maturity Review for AAI and the CII Badging scorecard. LF Analytics (what used to be Bitergia) is used to show the commits, the different projects committing, that the project is active, and the span of committers across different companies; it helps find outliers and projects that are not being supported, to evaluate for a maturity review. https://lfanalytics.io/projects/lfn%2Fonap/dashboard
      7. Test coverage - Verify that test coverage goals have been met. This is done in SonarCloud, which shows the lines of code that are and are not covered by tests. JUnit tests are cross-indexed against the software, and statistics are compiled for each component in SonarCloud. The overall line coverage goal means that the tests cover >50% of the code. Quality profiles are generated; bugs, vulnerabilities, code smells, security hotspots, and active rules are applied; code duplication is identified; and bug fixes are suggested. Examples per component are attached to this page. https://sonarcloud.io/organizations/onap/projects
      8. Undocumented risk - Check the Integration Weatherboard (Integration Weather Board for the Frankfurt release).
      9. Update the INFO.yaml - Review and update the INFO.yaml, which holds project meta-data: information about the project, its life-cycle state, committers, and PTLs. Keep track of committer changes. Each micro-service (repository) of the project has its own INFO.yaml (examples attached to this page). Apply for project status as appropriate. A sketch of the typical INFO.yaml fields is shown after this list.
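Since INFO.yaml comes up repeatedly at M4, a minimal sketch of its typical layout is shown below. The field names follow the common LF/ONAP INFO.yaml pattern, but every value is a placeholder (the project name, people, IDs, dates, and links are invented for illustration), so always start from the current template in your own repository rather than this sketch.

```yaml
# Minimal INFO.yaml sketch with placeholder values only.
# Field names follow the usual LF/ONAP layout; copy the real template
# from your repository rather than this example.
---
project: 'example-component'              # hypothetical repository name
project_creation_date: '2020-01-01'
lifecycle_state: 'Incubation'
project_lead: &onap_example_ptl
    name: 'Jane Doe'                      # placeholder PTL
    email: 'jane.doe@example.com'
    id: 'jdoe'
    company: 'Example Corp'
    timezone: 'America/New_York'
primary_contact: *onap_example_ptl
issue_tracking:
    type: 'jira'
    url: 'https://jira.onap.org/projects/EXAMPLE'   # placeholder project
committers:
    - <<: *onap_example_ptl
    - name: 'John Smith'                  # placeholder committer
      email: 'john.smith@example.com'
      id: 'jsmith'
      company: 'Example Corp'
      timezone: 'Europe/Paris'
tsc:
    approval: 'https://lists.onap.org/...'    # link to the TSC approval record
    changes:
        - type: 'Addition'
          name: 'John Smith'
          link: 'https://lists.onap.org/...'  # link documenting the change
```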


    • MODELING S/C ENGAGEMENT at M4 -


    • ARCHITECTURE ENGAGEMENT -


    • COMPONENT (PTL) ENGAGEMENT -


...