Summary: Edge Scoping 

...

  • Cloud SW Capability example 
    • Cloud region "x" with SR-IOV, GPU, Min-guarantee support
    • Cloud region "y" with SR-IOV support
  • Cloud HW Capability example 
    • Resource cluster "xa" in Cloud region "x" with SR-IOV and GPU support 
    • Resource cluster "xb" in Cloud region "x" with GPU support
    • Resource cluster "ya" in Cloud region "y" with SR-IOV support

Note 3:

  • 5G Service/VNF placement example
    • Constraints used by Optimization Framework (OOF)
      • 5G CU-UP VNF location to be fixed to a specific physical DC, chosen based on the 5G DU and bounded by a maximum distance from the 5G DU

    • Optimization Policy used by OOF
      • Choose an optimized cloud region (or cloud instance) for the placement of the 5G CU-UP for the subscriber group, based on the above constraints
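
As a sketch only, the constraint and optimization policy above could be expressed roughly as follows; the structure and field names (distance_to_location, minimize, etc.) are assumptions and not the exact OOF/HAS policy schema.

  constraints:
    cu-up-near-du:
      type: distance_to_location      # bound CU-UP placement by distance from the 5G DU site
      demands: [5g-cu-up]
      properties:
        location: du-site-123         # hypothetical physical DC hosting the 5G DU
        max-distance: 50 km           # illustrative maximum distance
  optimization:
    goal: minimize                    # pick the closest qualifying cloud region / physical DC
    operand: distance
    between: [du-site-123, 5g-cu-up]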

Note 4:

  • For the 5G Service/VNF placement example in Note 3
    • 5G CU-UP VNF preferably maps to a specific Cloud region & Physical DC End Point 

Note 5:

  • For the 5G Service/VNF placement example in Note 3
    • OOF will pass the Physical DC End Point to SO as opaque data

Note 6:

  • For the 5G Service/VNF placement example in Note 3
    • SO passes the Physical DC End Point to Multi-Cloud as opaque data, in addition to the Cloud Region
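
Putting Notes 4-6 together, an illustrative homing result is sketched below. The key names are assumptions; the intent is only that <cloud owner, cloud region> is interpreted by SO, while the directives blob carrying the Physical DC End Point is echoed to Multi-Cloud unchanged.

  # Illustrative homing result from OOF to SO (key names are assumptions)
  cloud-owner: CloudOwner1                   # interpreted by SO and Multi-Cloud
  cloud-region-id: "x"                       # interpreted by SO and Multi-Cloud
  oof-directives:                            # opaque to SO; echoed to Multi-Cloud as-is
    physical-dc-endpoint: "dc-abc-pod-7"     # hypothetical Physical DC End Point for the 5G CU-UP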

Cloud-agnostic Placement/Networking & Homing Policies (Phase 1 - Casablanca MVP, Phase 2 - Stretch Goal)

...

Step 7. SO → MC - Deploy VNF template in the target <cloud owner, cloud region> for the Service Instance

7a) MC Processing (needs MC code changes)

  • Parse Template (e.g. OpenStack Heat Template)
    • For each VNFC (instance type) in the template
      • Fetch Cloud-Agnostic Workload Deployment Policy (Intent) based on <Service (e.g. vCPE), VNFC (e.g. vGW)>
        • Value/Content: <Policy JSON> Note: The policy details are in Section 7b).
      • Parse Policy JSON
      • Modify template (if needed) according to Intent 
        • Intent examples of interest for R3 
          • "Infrastructure High Availability (HA) for VNF" 
          • "Infrastructure Resource Isolation for VNF"   
            • "Burstable QoS"
          • "Infrastructure Resource Isolation for VNF"   
            • "Guaranteed QoS"
  • Policy (Intent) Realization

    • "Infrastructure High Availability (HA) for VNF"OpenStack-based Cloud realization For R3, Host-based anti-affinity using Determining the flavor (OpenStack-based VIMs) # same logic applies for instance type in Azure
      • Each VNFC uniquely maps to a Flavor - for e.g. VNFC "vgw" maps to "vgw-base", "vDNS" maps to "vDNS-base"
      • Beyond Casablanca
        • VNFC intent to realization mapping happens through A&AI. 
    • "Infrastructure High Availability (HA) for VNF"
      • OpenStack-based Cloud realization 
        • For R3, Host-based anti-affinity using server groups; beyond R3, support other anti-affinity models (e.g. at the availability zone level)
        • Implementation Notes: 
          • Instance "count" in heat template specifies VNFC scale out factor 
          • While dynamic injection of server group into heat template is ideal, a simple starting point could be just switching to an alternate heat template which is identical to the deployment template and additionally has server group
      • Azure realization 
        • Availability Set?
    • "Infrastructure Resource Isolation for VNF" – { "qosProperty": { {"Burstable QoS": "TRUE", "Burstable QoS Oversubscription Percentage": "25"} } }

  • For R3, Cloud-Agnostic Workload Deployment Policy (Intent) 
    • Can be directly mapped to a specific realization (e.g. OpenStack Flavor, Azure Instance Type) to simplify implementation. 
    • This policy is exactly the same as the policies with "deployment-intent" in the Cloud Selection Policy for Homing described in Section 2.
        • Example 
          • VNFC "vgw" with "Guaranteed QoS" 
            • "flavor-xyz-no-oversubscription"
            • vCPU (Min/Max) - 16, Mem (Min/Max) - 32GB 
            • Maps to "vgw-Guaranteed-QoS" flavor for OpenStack-based VIMs
          • Same VNFC "vgw" with "Burstable QoS", 25% over-subscription 
            • "flavor-xyz-25-percent-oversubscription"
            • vCPU (Min) - 16, Mem (Min) - 32GB 
            • vCPU (Max) - 20, Mem (Max) - 40GB  
            • Maps to "vgw-Burstable-QoS-25-percent-oversubscription" flavor for OpenStack-based VIMs
          • VNFC "vDNS" with "Guaranteed QoS" & "Infrastructure High Availability"
            • Maps to "vDNS-Guaranteed-QoS" flavor and "vDNS-infrastructure-high-availability" heat template
        • Only certain pre-defined over-subscription values are allowed to simplify implementation
        • Implementation Notes:
          • While dynamic injection of limit/reservation into the flavor is ideal, a simple starting point would be to switch to a pre-defined flavor in the environment file
            • For aforementioned example
              • Original flavor - "flavor-xyz-no-oversubscription"
              • Modified flavor based on Policy - "flavor-xyz-25-percent-oversubscription" 
    • Implementation Notes:
      • From an implementation standpoint, MC would expose a Workload Deployment Policy (Intent) API, illustrated below
        • Input: deployment-intent, cloud owner, cloud region, deployment template, deployment environment file, ...
        • Output: Success or Failure with reason, modified deployment template, modified deployment environment file, ...
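
For illustration, a request/response pair for such an API might look as follows; the field names, file names, and flavor names are assumptions rather than a committed Multi-Cloud interface.

  # Hypothetical Workload Deployment Policy (Intent) API exchange
  request:
    deployment-intent:
      "Infrastructure Resource Isolation for VNF": "Burstable QoS"
      "Burstable QoS Oversubscription Percentage": "25"
    cloud-owner: CloudOwner1
    cloud-region-id: "x"
    deployment-template: base_vgw.yaml         # original heat template
    deployment-environment: base_vgw.env       # originally points to "flavor-xyz-no-oversubscription"
  response:
    status: success
    modified-deployment-template: base_vgw.yaml          # unchanged in this example
    modified-deployment-environment: base_vgw_mod.env    # now points to "flavor-xyz-25-percent-oversubscription"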

7b) Cloud-Agnostic Workload Deployment Policy (Intent)

SO ↔ MC API extension - JSON schema with use case examples (the exact same data is sent from OOF to SO; SO transparently echoes this data to MC)

...

[Attachment: Promethus-aggregation.pdf]

ONAP Edge Analytics with DCAE/DMaaP independent of closed loop (Beyond Casablanca)

Value

  • 5G Analytics

ONAP Component: OOM - ONAP Central
Life cycle phase: Deploy
Enhancements:
  • Separate ONAP-edge instance per 'edge domain' (i.e., separate from the onap-central instance)
    • Note: Independent of any Edge CP's Orchestration components.
  • The SP uses central OOM with a 'policy' for deployment of an onap-edge instance, e.g., xyz edge provider with abc components
    • However, the onap-edge instance can be 'lighter weight', with only the subset of components needed (per the MVP discussed below)
    • It is desirable to manage it as a separate K8s cluster and to use it only for onap-edge, i.e., not for other 'workloads' such as network apps or 3rd-party apps
  • Central OOM to deploy the ONAP edge instance with the following components
    • DMaaP with mirror capability
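
As a sketch only, the 'policy' driven, lighter-weight onap-edge deployment could be expressed as an OOM Helm override file that enables just the needed components; the component set and values below are illustrative assumptions.

  # Hypothetical onap-edge override values for central OOM (illustrative subset)
  dmaap:
    enabled: true        # DMaaP with mirror capability at the edge
  dcaegen2:
    enabled: true        # edge analytics collection
  so:
    enabled: false       # orchestration remains in onap-central
  aai:
    enabled: false
  sdc:
    enabled: false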

...