
BUSINESS DRIVER

Executive Summary - Edge locations are increasingly becoming K8S based, since K8S can support multiple deployment types (VNFs, CNFs, VMs, and containers).  This work enables ONAP to deploy workloads in K8S-based sites.

Business Impact - This will enable operators to deploy workloads in both OpenStack-based sites and K8S-based sites.  It also enables usage of common compute resources for both network functions and applications, thereby utilizing compute infrastructure efficiently. Since K8S sites are supported via the same API, superior workload mobility can be achieved.

Business Markets - Applicable across the compute continuum: on-prem edges, network edges, edge clouds, and public clouds.

Funding/Financial Impacts - Potential to avoid deploying multiple service orchestrators and multiple infrastructure managers, thereby saving on CAPEX.

Organization Mgmt, Sales Strategies - There are no additional organizational management or sales strategies for this use case beyond a service provider's "normal" ONAP deployment and its attendant organizational resources.

Technical Debt

R4 has many features planned, and a few items may spill over to R5.

Some of the features being implemented in R4 (for recollection):

  • K8S-based cloud region support
  • Deployment of VM- and container-based workloads
  • VM and container description in Helm
  • Support for multiple resource types, including Deployment, Pod, Service, ConfigMap, CRDs, StatefulSet, etc.
  • Support for multiple profiles, where a given resource bundle definition can be deployed multiple times.
  • Support for Day 2 configuration of each individual profile/instance.
  • Networking:
    • Support for dynamic and multiple networks
    • Ability to place a pod on multiple networks
    • Initial OVN support for data networks
    • Provider network support (using OVN)
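
Multi-network pod placement of this kind is typically expressed through a secondary-network CNI such as Multus. A minimal sketch, assuming a NetworkAttachmentDefinition named `ovn-data-net` (the resource names and images here are illustrative assumptions, not the exact resources this work defines):

```yaml
# Hypothetical OVN-backed secondary data network; the CNI configuration in
# spec.config depends on the OVN CNI plugin actually deployed in the cluster.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-data-net
---
# Pod attached to the default cluster network plus the OVN data network.
apiVersion: v1
kind: Pod
metadata:
  name: multi-net-pod              # placeholder name
  annotations:
    # Multus attaches the listed secondary networks in addition to the
    # default pod network.
    k8s.v1.cni.cncf.io/networks: ovn-data-net
spec:
  containers:
  - name: app
    image: busybox                 # placeholder image
    command: ["sleep", "3600"]
```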

Some features that are postponed to R5 are:

  • Dynamic route and provider network operator
  • OVN operator 
  • ISTIO security
  • Modularity items:
    • Logging
    • Monitoring
  • Visualization of resource bundles
  • A use case that showcases Day 2 configuration (Kafka or the Collection package of Distributed Analytics as a Service)
  • CLI commands


New requirements coming from various use cases

(Most of these requirements come from the big data AI platform use case)

  • A way to deploy apps/services that span multiple clusters.
  • Day 2 configuration control of workloads at the app/service level, as a transaction
  • A dependency graph (DAG) for deploying workloads across multiple clusters
  • Bulk deployment of apps/services in multiple clusters.
  • Function chaining
  • Multi-tenant management (namespaces, users, etc.)
  • Edge DaemonSet via labeling (so the scheduler knows which apps/services to deploy without user intervention)
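
The edge-DaemonSet-via-labeling idea can be sketched with a standard Kubernetes DaemonSet whose nodeSelector matches an edge label. The label key `node-role.example.com/edge`, the workload name, and the image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-agent                      # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: edge-agent
  template:
    metadata:
      labels:
        app: edge-agent
    spec:
      # Only nodes carrying the edge label run this pod; once nodes are
      # labeled, no further user intervention is needed for placement.
      nodeSelector:
        node-role.example.com/edge: "true"   # assumed edge label
      containers:
      - name: agent
        image: busybox                       # placeholder image
        command: ["sleep", "infinity"]
```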

Functional requirements

  • SRIOV-NIC Support
  • Multi-Cluster scheduler
  • Edge labeling & DaemonSet implementation across edges
  • User Manager
  • Meta-configuration scheduler
  • Placement support (if there are multiple edge candidates)
  • HPA support (being handled as part of the HPA work)
  • NSM and OVN SFC for function chaining - PoC item
  • CLI support for all relevant APIs (applications, resource-bundle definitions, profiles, configuration templates, configs, meta-configs, etc.)
  • Continuous monitoring (using Kubernetes APIs), keeping the DB updated with the latest status and allocated resources.
  • CLI/GUI support for resource status (at the app level, the resource-bundle level, and the individual resource level)

  • Study: ETSI-defined container definition
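
The continuous-monitoring requirement boils down to polling (or watching) resource status via the Kubernetes API and writing only the changes to the DB. A minimal sketch of the status-diff step, assuming a flat `{resource: status}` snapshot shape; the function name and snapshot values are illustrative assumptions, and in a real deployment the snapshots would come from the Kubernetes client (e.g. list/watch calls) rather than static dicts:

```python
def diff_status(previous, current):
    """Return ({resource: new_status} for changed or newly seen resources,
    set of resources that disappeared) between two status snapshots."""
    changed = {k: v for k, v in current.items() if previous.get(k) != v}
    deleted = set(previous) - set(current)
    return changed, deleted

# Static snapshots for illustration; in production these would be built
# from Kubernetes API responses on each monitoring cycle.
prev = {"deploy/app": "Progressing", "pod/app-1": "Running"}
curr = {"deploy/app": "Available", "pod/app-1": "Running", "svc/app": "Active"}

changed, deleted = diff_status(prev, curr)
print(changed)   # {'deploy/app': 'Available', 'svc/app': 'Active'}
print(deleted)   # set()
```

Only the `changed` and `deleted` results need to be pushed to the DB, which keeps the stored status current without rewriting unchanged rows on every cycle.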



