Container Background
Linux containers allow an application and all of its operating system dependencies to be packaged and deployed as a single unit, without including a guest operating system as virtual machines do. The most popular container solution is Docker, which provides tools for container management such as the Docker host daemon (dockerd), which can create, run, stop, move, or delete a container. Docker has a very popular registry of container images that can be used by any Docker system; however, in the ONAP context, Docker images are built by the standard CI/CD flow and stored in Nexus repositories. OOM uses the "standard" ONAP Docker containers and three new ones specifically created for OOM.
Containers are isolated from each other primarily via namespaces within the Linux kernel, without the need for multiple guest operating systems. As such, multiple containers can be deployed with little overhead, such that all of ONAP can be deployed on a single host. With some optimization of the ONAP components (e.g. elimination of redundant database instances) it may be possible to deploy ONAP on a single laptop computer.
Life Cycle Management via Kubernetes
As with the VNFs deployed by ONAP, the components of ONAP have their own life-cycle in which the components are created, run, healed, scaled, stopped and deleted. These life-cycle operations are managed by the Kubernetes container management system, which maintains the desired state of the container system as described by one or more deployment descriptors - similar in concept to OpenStack HEAT Orchestration Templates. The following sections describe the fundamental objects managed by Kubernetes, the network these components use to communicate with each other and with entities outside of ONAP, and the templates that describe the configuration and desired state of the ONAP components.
ONAP Components to Kubernetes Object Relationships
Kubernetes deployments consist of multiple objects:
- nodes - worker machines, either physical or virtual, that host the containers managed by kubernetes.
- services - an abstraction of a logical set of pods that provide a micro-service.
- pods - one or more (but typically one) container(s) that provide specific application functionality.
- persistent volumes - permanent storage established to hold non-ephemeral configuration and state data.
The relationship between these objects is shown in the following figure:
OOM uses these kubernetes objects as described in the following sections.
Nodes
OOM works with both physical and virtual worker machines.
- Virtual Machine Deployments - If ONAP is to be deployed onto a set of virtual machines, the creation of the VMs is outside of the scope of OOM and could be done in many ways, such as:
  - manually, for example by a user through the OpenStack Horizon dashboard or AWS EC2, or
  - automatically, for example with the use of an OpenStack Heat Orchestration Template that builds an ONAP stack, an Azure ARM template, or an AWS CloudFormation template, or
  - orchestrated, for example with Cloudify creating the VMs from a TOSCA template and controlling their life cycle for the life of the ONAP deployment.
- Physical Machine Deployments - If ONAP is to be deployed onto physical machines there are several options but the recommendation is to use Rancher along with Helm to associate hosts with a kubernetes cluster.
Pods
A group of containers with shared storage and networking can be grouped together into a kubernetes pod. All of the containers within a pod are co-located and co-scheduled so they operate as a single unit. Within the ONAP Amsterdam release, pods are mapped one-to-one to Docker containers, although this may change in the future. As explained in the Services section below, the use of pods within each ONAP component is abstracted from other ONAP components.
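The one-container-per-pod mapping described above can be sketched as a minimal pod specification; the names, namespace and image path below are illustrative placeholders, not taken from the OOM repo:

```yaml
# Minimal single-container pod, mirroring the one-to-one
# pod-to-container mapping used in the Amsterdam release.
# All names and the image path are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: example-onap-pod
  namespace: onap-example
  labels:
    app: example-onap-app
spec:
  containers:
  - name: example-app
    image: nexus3.onap.org:10001/example/image:1.0
    ports:
    - containerPort: 8080
```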
Services
OOM uses the kubernetes service abstraction to provide a consistent access point for each of the ONAP components, independent of the pod or container architecture of that component. For example, the SDNC component may introduce OpenDaylight clustering at some point and change the number of pods in this component to three or more, but this change will be isolated from the other ONAP components by the service abstraction. A service can include a load balancer on its ingress to distribute traffic between the pods, and can even react to dynamic changes in the number of pods if they are part of a replica set (see the MSO example below for a brief explanation of replica sets).
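A sketch of the service abstraction: consumers address the service name, and the label selector decouples them from however many pods currently sit behind it. Names and ports are illustrative assumptions:

```yaml
# Illustrative service fronting pods selected by label. Other ONAP
# components address "example-service" and never see the pod
# architecture behind it; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: onap-example
spec:
  type: NodePort
  selector:
    app: example-onap-app
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080
```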
Persistent Volumes
As pods and containers are ephemeral, any data that must be persisted across pod restart events needs to be stored outside of the pod in one or more persistent volumes. Kubernetes supports a wide variety of persistent volume types, such as Fibre Channel, NFS, iSCSI, CephFS, and GlusterFS (see the kubernetes documentation for the full list), so there are many options as to how storage is configured when deploying ONAP via OOM.
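As an illustrative sketch of one of these options, an NFS-backed persistent volume and the claim a pod would use to bind to it might look as follows; the server address and export path are placeholders for site-specific values:

```yaml
# Illustrative NFS-backed persistent volume plus a matching claim.
# The NFS server address and export path are placeholders that
# would be replaced with site-specific values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.1
    path: /exports/example
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  namespace: onap-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```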
OOM Networking with Kubernetes
- DNS
- Ports - Flattening the containers also exposes port conflicts between the containers, which need to be resolved.
Name Spaces
Within the namespaces are kubernetes services that provide external connectivity to pods that host Docker containers. The following is a list of the namespaces and the services within:
- onap-aai
- aai-service
- hbase
- model-loader-service
- aai-dmaap
- aai-kafka
- aai-zookeeper
- aai-resources
- aai-traversal
- data-router
- elasticsearch
- gremlin
- search-data-service
- sparky-be
- onap-appc
- dbhost
- dgbuilder
- sdnctldb01
- sdnctldb02
- sdnhost
- onap-dcae
- cdap0
- cdap1
- cdap2
- dcae-collector-common-event
- dcae-collector-dmaapbc
- dcae-controller
- dcae-pgaas
- dmaap
- kafka
- zookeeper
- onap-message-router
- dmaap
- global-kafka
- zookeeper
- onap-mso
- mariadb
- mso
- onap-policy
- brmsgw
- drools
- mariadb
- nexus
- pap
- pdp
- pypdp
- onap-portal
- portalapps
- portaldb
- vnc-portal
- onap-robot
- robot
- onap-sdc
- sdc-be
- sdc-cs
- sdc-es
- sdc-fe
- sdc-kb
- onap-sdnc
- dbhost
- sdnc-dgbuilder
- sdnc-portal
- sdnctldb01
- sdnctldb02
- sdnhost
- onap-vid
- vid-mariadb
- vid-server
Note that services listed in italics are local to the namespace itself and not accessible from outside of the namespace.
Kubernetes Deployment Specifications for ONAP
Each of the ONAP components is deployed as described in a deployment specification. This specification documents key parameters and dependencies between the pods of an ONAP component such that kubernetes is able to repeatably start up the component. The component artifacts are stored in the oom/kubernetes repo in ONAP gerrit. The mso project is a relatively simple example, so let's start there.
MSO Example
Within the oom/kubernetes/mso repo, one will find three files in YAML format:
The db-deployment.yaml file describes the deployment of the database component of mso.
As one might imagine, the mso-deployment.yaml file describes the deployment artifacts of the mso application itself.
The last of the three files is the all-services.yaml file, which defines the kubernetes service(s) that will be exposed in this name space.
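While the authoritative file contents live in the oom/kubernetes/mso repo, the shape of the deployment/service pairing can be sketched as follows; the image tag, port and API version here are assumptions for illustration, not copied from the repo:

```yaml
# Sketch of the db-deployment.yaml / all-services.yaml pairing:
# a deployment that creates the mariadb pod, and a service that
# exposes it within the onap-mso namespace. Values are illustrative.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mariadb
  namespace: onap-mso
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: nexus3.onap.org:10001/mariadb:10.1.11
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mariadb
  namespace: onap-mso
spec:
  selector:
    app: mariadb
  ports:
  - port: 3306
```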
Customizing Deployment Specifications
For each ONAP component deployed by OOM, a set of deployment specifications is required. Fortunately, there are many examples to use as references, such as the previous 'mso' example, as well as: aai, appc, message-router, policy, portal, robot, sdc, sdnc and vid. If your component isn't already deployed by OOM, you can create your own set of deployment specifications and easily add them to OOM.
Development Deployments
The deployment specifications currently represent a simple simplex deployment of ONAP that may not have the robustness typically required of a full operational deployment. Follow-on releases will enhance these deployment specifications as follows:
- Load Balancers - kubernetes has built-in support for user-defined or simple 'ingress' load balancers at the service layer to hide the complexity of multi-pod deployments from other components.
- Horizontal Scaling - replica sets can be used to dynamically scale the number of pods behind a service to that of the offered load.
- Stateless Pods - using concepts such as DBaaS (database as a service) database technologies could be removed (where appropriate) from the services thus moving to the 'cattle' model so common in cloud deployments.
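The horizontal-scaling point above amounts to raising the replica count on a deployment; kubernetes then maintains that many pods and the service in front load-balances across them. A sketch, with all names and the image path as illustrative placeholders:

```yaml
# Illustrative horizontal scaling: kubernetes keeps three replicas
# of the pod running, and the service fronting label
# app=example-onap-app spreads traffic across them.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example-app
  namespace: onap-example
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: example-onap-app
    spec:
      containers:
      - name: example-app
        image: nexus3.onap.org:10001/example/image:1.0
        ports:
        - containerPort: 8080
```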
Kubernetes Under-Cloud Deployments
The automated ONAP deployment depends on a fully functional kubernetes environment being available prior to ONAP installation. Fortunately, kubernetes is supported on a wide variety of systems such as Google Compute Engine, AWS EC2, Microsoft Azure, CenturyLink Cloud, IBM Bluemix and more. If you're setting up your own kubernetes environment, please refer to ONAP on Kubernetes for a walk-through of how to set this environment up on several platforms.
ONAP 'OneClick' Deployment Walk-through
Once a kubernetes environment is available and the deployment artifacts have been customized for your location, ONAP is ready to be installed.
The first step is to set up the /oom/kubernetes/config/onap-parameters.yaml file with key-value pairs specific to your OpenStack environment. There is a sample that may help you out, or may even be usable directly if you don't intend to actually use OpenStack resources.
Note that these values are required or the following steps will fail.
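A sketch of the file's shape follows; the parameter names and values here are examples of the kind of key-value pairs expected, not the authoritative list, which is in the sample file in oom/kubernetes/config:

```yaml
# Illustrative onap-parameters.yaml fragment. Key names and values
# are examples only - consult the sample file in the repo for the
# authoritative set required by your release.
OPENSTACK_USERNAME: "vnf_user"
OPENSTACK_API_KEY: "vnf_password"
OPENSTACK_TENANT_NAME: "vnfs"
OPENSTACK_REGION: "RegionOne"
OPENSTACK_KEYSTONE_URL: "http://1.2.3.4:5000"
OPENSTACK_OAM_NETWORK_CIDR: "10.0.0.0/16"
```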
In order to support multiple ONAP instances within a single kubernetes environment, a configuration set is required. The createConfig.sh script is used to create it.
> ./createConfig.sh -n onap
The bash script createAll.bash is used to create an ONAP deployment with kubernetes. It has two primary functions:
- Creating the namespaces used to encapsulate the ONAP components, and
- Creating the services, pods and containers within each of these namespaces that provide the core functionality of ONAP.
> ./createAll.bash -n onap
Namespaces provide isolation between ONAP components, as ONAP release 1.0 contains duplicate applications (e.g. mariadb) and duplicate port usage. As such, createAll.bash requires the user to enter a namespace prefix string that can be used to separate multiple deployments of ONAP. The result will be a set of 10 namespaces (e.g. onap-sdc, onap-aai, onap-mso, onap-message-router, onap-robot, onap-vid, onap-sdnc, onap-portal, onap-policy, onap-appc) being created within the kubernetes environment. A prerequisite pod, config-init (pod-config-init.yaml), may need editing to match your environment and deployment into the default namespace before running createAll.bash.
Integration with MSB
The Microservices Bus Project provides facilities to integrate micro-services into ONAP and therefore needs to integrate with OOM - primarily through Consul, which is the backend of MSB service discovery. The following is a brief description of how this integration will be done (thanks Huabing):
A registrator pushes the service endpoint info to MSB service discovery:
- The needed service endpoint info is put into the kubernetes yaml file as an annotation, including service name, protocol, version, visual range, LB method, IP, port, etc.
- OOM deploys/starts/restarts/scales in/scales out/upgrades ONAP components.
- The registrator watches kubernetes events.
- When an ONAP component instance has been started or destroyed by OOM, the registrator gets a notification from kubernetes.
- The registrator parses the service endpoint info from the annotation and registers/updates/unregisters it with MSB service discovery.
- MSB API Gateway uses the service endpoint info for service routing and load balancing.
Details of the registration service API can be found at Microservice Bus API Documentation.
How to define the service endpoints using annotations is described in ONAP Services List#OOMIntegration.
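A sketch of how such endpoint info might be attached to a service specification as an annotation; the annotation key and the JSON schema of its value are assumptions here, so the linked MSB documentation should be treated as authoritative:

```yaml
# Sketch: MSB endpoint info carried as a kubernetes annotation that
# the registrator parses. The annotation key and value schema are
# assumed for illustration - see the MSB docs for the exact format.
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: onap-example
  annotations:
    msb.onap.org/service-info: '[
      {
        "serviceName": "example",
        "version": "v1",
        "url": "/api/example/v1",
        "protocol": "REST",
        "visualRange": "1"
      }
    ]'
spec:
  selector:
    app: example-onap-app
  ports:
  - port: 8080
```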
A preliminary view of the OOM-MSB integration is as follows:
A message sequence chart of the registration process:
MSB Usage Instructions
FAQ (Frequently Asked Questions)
Does OOM enable the deployment of VNFs on containers?
- No. OOM provides a mechanism to instantiate and manage the ONAP components themselves with containers but does not provide a Multi-VIM capability such that VNFs can be deployed into containers. The Multi VIM/Cloud Project may provide this functionality at some point.
DCAE has its own controller - how is this managed with OOM?
- The DCAE controller will merge with OOM during the Amsterdam release as described in the Data Collection Analytics & Events Project. In the short term the DCAE controller is problematic in a container environment, as it directly interfaces to OpenStack and requests multiple VMs (e.g. CDAP, etc.). The short term proposal is to containerize the DCAE components and statically create them as part of the larger ONAP deployment. Advanced DCAE controller features like hierarchical and geographically diverse deployments need further investigation.
Related Tools
Current Limitations and Feature Requests
- DCAE - The DCAE component not only is not containerized but also includes its own VM orchestration system. A possible solution is to not use the DCAE Controller but to port this controller's policies to Kubernetes directly, such as scaling CDAP nodes to match offered capacity.
- Single Name Space
- Deployment Parameter Optimization
Configuration Parameters
Currently, ONAP configuration parameters are stored in multiple files; a solution to coordinate these configuration parameters is required. Kubernetes Config Maps may provide a solution, or at least a partial solution, to this problem.
Centralized Parameter Control
- Component Rationalization
Duplicate containers - The VM structure of ONAP hides the internal container structure of each of the components, including the existence of duplicate containers such as MariaDB.
Presentations and Demos
ONAP OOM K8S Deployment v1.1-final.pptx
Jira Stories
References
- Docker container list - source of truth: https://git.onap.org/integration/tree/packaging/docker/docker-images.csv
- Docker - http://docker.com
- Kubernetes - http://kubernetes.io