
This page contains work in progress!

Questions and comments are inserted in the text as needed; prefix them with "Question:" or "Comment:".

Text below the line "----temporary text ----" is a placeholder for text that may or may not be used later on.

Page History

Rev      Author   Comment
9/7/17   Peter L  Copied text from the v4 document; must check the v5 document for additional parts
9/14/17  Oskar M  Some restructuring and clarifications. Temporary text either removed or inserted into the various UC steps.
9/21/17  Oskar M  Added some sequence diagrams and made some minor adjustments to descriptions as well as overall assumptions in order to align with the diagrams. Policy-based automation to handle network faults or service degradation has been moved to a separate step.

Goal

A planned list of 5G nodes is onboarded into ONAP, and ONAP configures the nodes to the point where they are ready to handle traffic. ONAP then begins to actively monitor and manage the nodes.

Assumptions

  • The 5G nodes consist of both PNFs (DU) and VNFs (CU). A single CU may consist of several VNFs.
  • Scope is limited to one PLMN without slicing.
  • A single vendor delivers RAN equipment and software.
  • A single service provider:
    • Owns or leases data center equipment, cell sites, physical transport, and any new equipment installed on these sites
    • Owns and operates the resulting RAN
    • Is the single user of the entire ONAP based management system
      • The VNF/PNF provider is not visible as an actor in this use case, and self-service for VNF/PNF onboarding is not supported.
  • Network status including KPIs can be monitored in Portal (dashboard), but exporting data via APIs to external monitoring applications is out of scope.
  • This use case covers only initial deployment of nodes. Thus, change management such as software upgrade is out of scope.

<Question - Karpura Suryadevara - re: 3rd bullet above> Is there a specific reason why only a single vendor is considered for all the components of RAN equipment and software?
<Peter L> Yes - the intention is to simplify the use case by avoiding (a) interoperability problems between components and (b) the mapping of the same ONAP-level configurations to multiple, vendor-specific models (with this limitation we only need to show one mapping). If we want multi-vendor equipment in the RAN, then the UC should perhaps be relabeled "5G Multi Vendor RAN Deployment" to clearly reflect that - but then we also need equipment from multiple vendors to run a demo. Or?

Preconditions

To clarify the limits of this use case for this release of ONAP, the following preconditions are assumed:

  • Requirements on RAN coverage are well defined and documented (frequencies, power levels, coverage, and capacity)
  • Based on these requirements, network planning has been performed, including:
    • Cell sites, equipment and cell structure
    • Transport network and fronthaul infrastructure
    • Data center usage
    • Other infrastructure components that may be needed such as CA/RA server
  • Needed new hardware has already been delivered and installed, both outside the data center (new PNFs and their cabling) and inside the data center
  • Additional infrastructure components have been deployed and configured
  • The Core Network with its VNFs and transport network is operational and known to ONAP (is managed by the same ONAP instance or is known and can be addressed and connected to according to 3GPP defined methods)
  • Software packages and licenses have been procured and provided by the RAN vendor

Postconditions

The 5G RAN is providing RAN service for the user equipment according to expectations:

  • All planned services are online, providing FCAPS data through the relevant channels
  • RAN NOC personnel have full access to the FCAPS data and ONAP automation framework
    • Dashboard displays relevant RAN data
    • Calculation and monitoring of key performance indicators is activated, used to verify the capacity requirements
  • A first automation use case reacting to an incident or state change has been implemented

Steps

(Oskar M.) More sequence diagrams to be added to the steps below. The diagram in step 3 needs some more detail.

Step 1: Service design

  • Onboard SW packages, descriptors and any other artefacts provided by the RAN vendor
  • Design RAN-level templates, recipes and workflows covering common network elements, transport network, data collection and analytics, policies and corrective actions
  • Design node-level templates, recipes and workflows covering network elements (PNFs and VNFs), transport network, placement or QoS constraints, data collection and analytics, policies and corrective actions
  • Distribute the completed design as well as vendor-provided artefacts to the various run-time components.
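As an illustration only (the real ONAP models are TOSCA-based, and all field names below are hypothetical assumptions, not the actual ONAP data model), the kind of parameters a node-level design artefact must capture can be sketched as:

```python
# Illustrative sketch: the parameters a node-level RAN template might
# expose at design time. Field names are hypothetical, not the actual
# ONAP/TOSCA model.

def make_node_template(node_id, node_type, cell_ids, placement_zone):
    """Build a minimal node-level template parameter set."""
    if node_type not in ("PNF-DU", "VNF-CU"):
        raise ValueError("unsupported node type: %s" % node_type)
    return {
        "node_id": node_id,
        "node_type": node_type,                 # DU delivered as PNF, CU as VNF(s)
        "cells": list(cell_ids),                # cells served by this node
        "placement": {"zone": placement_zone},  # data-center placement constraint
        "monitoring": {                         # data collection defaults for DCAE
            "fault": True,
            "performance": True,
            "log": True,
        },
    }

template = make_node_template("du-0001", "PNF-DU", ["cell-1", "cell-2"], "dc-west")
```

The point of the sketch is that the node-level design fixes node type, served cells, placement constraints, and monitoring scope at design time, so that deployment in steps 3 and 4 only needs to supply instance-specific values.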

(Oskar M.) General ONAP question: Are WAN resource requirements embedded in the service template and parsed by SO, or do they use separate descriptors that must be distributed to SDNC?

Step 2: Verify design

  • Verify the templates and recipes from step 1, using a dedicated test environment or a limited trial following the steps below. If necessary, make adjustments according to step 1.

Step 3: Deploy shared services

This step refers to deployment of any shared RAN services and functions defined by templates, recipes and workflows in step 1. Note that some of the functions below may be partially inactive until nodes are added in step 4.

  • On receiving service instantiation request via Portal or external API, SO and controllers will decompose the request, and allocate and connect the various resources.
  • DCAE will start fault, performance, and log data collection as described during design time.
  • DCAE will perform data analytics as configured in recipes, to monitor the environment and detect anomalous conditions. Output from analytics is forwarded to Policy and dashboard.
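The instantiation flow above can be sketched with a hypothetical request payload towards SO. The endpoint is omitted and the payload shape is an illustrative assumption, not the actual SO northbound API:

```python
import json

# Hypothetical sketch of a service instantiation request towards SO.
# The payload structure is an illustrative assumption, not the actual
# SO northbound API schema.

def build_instantiation_request(service_name, model_uuid, params):
    """Assemble a minimal request body for instantiating a designed service."""
    return {
        "requestDetails": {
            "modelInfo": {"modelType": "service", "modelUuid": model_uuid},
            "requestInfo": {"instanceName": service_name, "source": "Portal"},
            "requestParameters": {"userParams": params},
        }
    }

req = build_instantiation_request(
    "ran-shared-services",
    "0000-aaaa",                     # hypothetical model UUID from design time
    [{"name": "plmn_id", "value": "00101"}],
)
body = json.dumps(req)               # serialized for the request towards SO
```

SO would then decompose such a request against the distributed design artefacts and drive the controllers to allocate and connect the resources.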

(Oskar M.) General ONAP question: The diagram below is modeled on some other ONAP use cases. Why is there a service instantiation request towards SO, but no corresponding requests towards DCAE or Policy to instantiate/activate analytics blueprints or policy rules for this particular service instance?

We can replace the above sequence diagram as follows, when we make use of OOF (ONAP Optimization Framework) for optimal placement of various virtual network functions.


 

Step 4: Add nodes

This step refers to deployment of services and functions defined by templates, recipes and workflows in step 1 for a new 5G node.

  • A sub-flow includes the onboarding process for related PNFs.
  • In this step node-specific data from planning is also inserted.
  • On receiving request, SO and controllers will decompose the request, and allocate and connect the various resources.
  • DCAE will start fault, performance, and log data collection as described during design time.
  • DCAE will perform data analytics as configured in recipes, to monitor the environment and detect anomalous conditions. Output from analytics is forwarded to Policy and dashboard.
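The insertion of node-specific planning data can be sketched as a merge of per-site values (from the network planning done in the preconditions) into the generic node-level template parameters before instantiation. All field names are hypothetical:

```python
# Illustrative only: merging node-specific planning data into the
# generic node template parameters before instantiation. The planning
# fields (site, frequency, power) mirror the preconditions; all names
# are hypothetical.

def apply_planning_data(template_params, planning):
    """Overlay per-node planning values onto the template defaults."""
    merged = dict(template_params)
    merged.update({
        "site_id": planning["site_id"],
        "frequency_mhz": planning["frequency_mhz"],
        "tx_power_dbm": planning["tx_power_dbm"],
    })
    return merged

params = apply_planning_data(
    {"node_type": "PNF-DU", "cells": 3},
    {"site_id": "site-42", "frequency_mhz": 3500, "tx_power_dbm": 24},
)
```

The merged parameter set is what would accompany the per-node instantiation request that SO and the controllers decompose.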

Step 5: Verify operation

  • Verify that service is provided and can be monitored through dashboard using basic observability data and calculated KPIs.
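As a hedged example of the KPI calculation mentioned above, one way a dashboard KPI could be derived from raw performance counters collected by DCAE is sketched below. The counter names and the formula are illustrative; real KPI definitions come from the design-time recipes:

```python
# Hedged example: deriving a setup success rate KPI from raw
# performance counters. Counter names are hypothetical; real KPI
# definitions are part of the design-time recipes.

def setup_success_rate(counters):
    """Return the setup success rate in percent, or None if no traffic."""
    attempts = counters.get("rrc_setup_attempts", 0)
    successes = counters.get("rrc_setup_successes", 0)
    if attempts == 0:
        return None  # no traffic yet; KPI undefined rather than 0 or 100
    return 100.0 * successes / attempts

kpi = setup_success_rate(
    {"rrc_setup_attempts": 200, "rrc_setup_successes": 198})
# kpi == 99.0
```

Returning None for the no-traffic case matters during this verification step: freshly added nodes with no traffic should show as "not yet measurable" on the dashboard, not as a KPI violation.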

Step 6: React to incident

  • Corrective/remedial actions for network impairments or for violations of service levels, as described by the defined policies, are initiated using SO and/or the controllers.
    • For verification purposes this may require fault injection.
  • Verify that policy definitions and their corrective actions have intended effect.
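The policy-driven reaction above can be sketched as a simple threshold evaluation that maps an anomalous KPI reading to a corrective action dispatched via SO or a controller. This is not actual ONAP Policy syntax; the KPI names, thresholds, and action labels are hypothetical:

```python
# Sketch only, not actual ONAP Policy syntax: threshold rules mapping
# anomalous KPI readings (as reported by DCAE analytics) to corrective
# actions. All names and thresholds are hypothetical.

POLICIES = [
    # (KPI name, lower bound in percent, corrective action)
    ("setup_success_rate", 95.0, "restart-cu-vnf"),
    ("cell_availability", 99.0, "notify-noc"),
]

def evaluate(kpi_readings):
    """Return the corrective actions triggered by the current readings."""
    actions = []
    for name, lower_bound, action in POLICIES:
        value = kpi_readings.get(name)
        if value is not None and value < lower_bound:
            actions.append(action)
    return actions

triggered = evaluate({"setup_success_rate": 93.5, "cell_availability": 99.9})
# triggered == ["restart-cu-vnf"]
```

For the verification bullet above, fault injection would drive a KPI below its bound and the test then checks that exactly the intended corrective action fires, and nothing else.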