BUSINESS DRIVER
Executive Summary - Edge locations are increasingly becoming K8S-based, as K8S can support multiple deployment types (VNFs, CNFs, VMs and containers). This work enables ONAP to deploy workloads in K8S-based sites.
Business Impact - This enables operators to deploy workloads to both OpenStack-based and K8S-based sites. It also enables common compute resources to serve both network functions and applications, thereby utilizing compute infrastructure efficiently. Since K8S-based sites are supported with the same API, superior workload mobility can be achieved.
Business Markets - Applicable across the compute continuum: on-prem edges, network edges, edge clouds and public clouds.
Funding/Financial Impacts - Potential to avoid deploying multiple service orchestrators and multiple infrastructure managers, thereby saving on CAPEX.
Organization Mgmt, Sales Strategies - There are no additional organizational management or sales strategies for this use case outside of a service provider's "normal" ONAP deployment and its attendant organizational resources.
Technical Debt
R4 has many features planned, and a few items may spill over to R5.
...
Some features that are postponed to R5 are:
- SRIOV-NIC support
- Dynamic route addition and provider network operator
- OVN operator
- ISTIO security
- Modularity enhancements
- Logging (Each Micro-service is expected to log messages as expected by fluentd)
- Monitoring (Each Micro-service is expected to expose metrics as expected by Prometheus)
- Tracing (Ensure that all HTTP based applications use tracing libraries to enable distributed tracing)
- Visualization of resource bundle
- Use case that showcases Day 2 configuration (Kafka or the Collection package of Distributed Analytics as a Service)
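The monitoring expectation above means each micro-service serves its metrics in the Prometheus text exposition format. A minimal stdlib-only sketch of such an endpoint follows; the metric name `demo_requests_total` and the handler are illustrative assumptions, not part of any ONAP micro-service:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

REQUESTS = {"count": 0}  # toy in-process counter

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            REQUESTS["count"] += 1
            # Prometheus text exposition format: HELP/TYPE lines, then samples
            body = (
                "# HELP demo_requests_total Total requests served.\n"
                "# TYPE demo_requests_total counter\n"
                f"demo_requests_total {REQUESTS['count']}\n"
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
text = urllib.request.urlopen(f"http://127.0.0.1:{port}/metrics").read().decode()
print(text)
server.shutdown()
```

In practice a service would use an official Prometheus client library rather than hand-formatting, but the scrape contract (a plain-text `/metrics` endpoint) is the same.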
New requirements coming from various use cases
...
- A way to deploy apps/services that span across multiple clusters.
- Day2 configuration control of workloads at the app/service level as a transaction
- Dependency graph (DAG) of deploying workloads across multiple clusters
- Bulk deployment of apps/services in multiple clusters.
- Function chaining
- Multi-tenant management (such as namespaces, users etc...)
- Edge Daemonset via labeling (so the scheduler knows which kinds of apps/services are to be deployed without any user intervention)
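The edge-labeling idea in the last item can be sketched as follows: edge nodes carry a label, and a DaemonSet's `nodeSelector` lets the scheduler place the workload on every matching node with no per-node user action. The label key `node-role/edge`, the workload name, and the image are illustrative assumptions; the manifest is emitted as JSON, which `kubectl` also accepts:

```python
import json

# Hypothetical DaemonSet targeting nodes labeled as edge nodes.
daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "edge-agent"},
    "spec": {
        "selector": {"matchLabels": {"app": "edge-agent"}},
        "template": {
            "metadata": {"labels": {"app": "edge-agent"}},
            "spec": {
                # Assumed edge label; set on nodes by the admin, e.g.
                #   kubectl label node <node> node-role/edge=true
                "nodeSelector": {"node-role/edge": "true"},
                "containers": [
                    # Placeholder image for illustration only
                    {"name": "agent", "image": "example/edge-agent:latest"}
                ],
            },
        },
    },
}
manifest = json.dumps(daemonset, indent=2)
print(manifest)
# On a live cluster: kubectl apply -f <file containing this manifest>
```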
Functional requirements
- SRIOV-NIC Support
- Multi-Cluster scheduler support
- Edge-Labeling & Daemon-set implementation across edges
- User Manager
- Meta-configuration scheduler
- Cluster-Labeling
- Distributed Cloud support
- Multi-tenancy
- Placement support (if there are multiple edge candidates)
- HPA support (being taken care of as part of the HPA work)
- NSM and OVN SFC for function chaining - PoC item with HPA
- Service Coupling using ISTIO and WGRD
- CLI Support for all relevant APIs. (Applications, Resource-Bundle definitions, Profiles, Configuration templates, configs and meta-configs etc...)
- Continuous monitoring (using Kubernetes APIs) and updating the DB with latest status and resources allocated.
- CLI/GUI support on the status of resources (At app level, At resource bundle level and at each resource level)
- Integration with CDS
- Study: Security orchestration (possibly in R7)
- Study: ETSI-defined container definition
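The placement-support item above (choosing among multiple edge candidates) can be sketched as a simple filter-then-score step. The cluster data, label scheme, and the "most free CPU wins" rule below are illustrative assumptions, not the actual Multi-Cloud placement algorithm:

```python
# Toy placement among edge candidates: keep clusters matching the
# required labels, then pick the one with the most free CPU.
clusters = [
    {"name": "edge-01", "labels": {"region": "west"}, "free_cpu": 8},
    {"name": "edge-02", "labels": {"region": "west"}, "free_cpu": 16},
    {"name": "cloud-01", "labels": {"region": "central"}, "free_cpu": 64},
]

def place(required_labels, candidates):
    """Return the name of the best-matching cluster, or None."""
    eligible = [
        c for c in candidates
        if all(c["labels"].get(k) == v for k, v in required_labels.items())
    ]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c["free_cpu"])["name"]

print(place({"region": "west"}, clusters))  # edge-02 has more free CPU
```

A real scheduler would also weigh memory, network proximity, and the HPA/SR-IOV capabilities listed above, but the candidate-filtering shape stays the same.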
ONAP Architecture impact
None
All the changes are expected to be done in the Multi-Cloud project. We don't expect any changes to the API exposed by Multi-Cloud to SO. Also, no changes are expected in the SDC-to-Multi-Cloud interface. All the work that was done in SDC and SO will be sufficient for R6.
There are some suggestions from the community to make "K8S workload support" a first-class citizen, which may require changes to SDC and SO. However, that is not planned for R6.
A few conceptual differences:
- In R4/R5, each K8S cluster is expected to be registered as a cloud-region in A&AI by the ONAP admin user. In R6, it is expected that each 'distributed cloud' is registered as the cloud-region.
- In R4/R5, each RB has only one Helm chart. In R6, this is enhanced so that one RB can contain multiple Helm charts, with one meta file in the RB describing them. Since the entire RB is represented as a tar file, no code changes are expected in SDC or the SDC client in Multi-Cloud.
- In R4/R5, there is no concept of 'Deployment intent'. In R6, deployment intents are to be created by the user before instantiating the service/RB.
- In R4/R5, each profile has only one values.yaml file; now each profile can have multiple values.yaml files, since an RB can contain multiple Helm charts (sub-apps).
All these conceptual differences are localized to Multi-Cloud project and no change is expected in any other project.
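The multi-chart RB described above can be sketched as a tar archive holding several Helm chart directories plus one meta file. The chart names and the meta-file format (`manifest.yaml`) are illustrative assumptions; only the "one tar, many charts, one meta file" shape comes from the text:

```python
import io
import tarfile

# Hypothetical resource bundle (RB) contents: a meta file listing the
# sub-apps, plus one directory per Helm chart.
files = {
    "manifest.yaml": "charts:\n  - app-a\n  - app-b\n",  # assumed meta-file format
    "charts/app-a/Chart.yaml": "name: app-a\nversion: 0.1.0\n",
    "charts/app-b/Chart.yaml": "name: app-b\nversion: 0.1.0\n",
}

# Build the RB as an in-memory gzipped tar, as SDC would distribute it.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for path, text in files.items():
        data = text.encode()
        info = tarfile.TarInfo(path)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Unpack side: list the bundle contents, as the Multi-Cloud plugin would.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    names = sorted(tar.getnames())
print(names)
```

Because the RB stays a single opaque tar file end to end, the SDC distribution path does not need to know how many charts are inside it, which is why no SDC-side code change is required.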
R4 Page Link: K8S based Cloud Region Support