
What's needed to deploy ONAP


ONAP is a set of different applications. Since the Casablanca release, the preferred way to deploy ONAP is OOM (ONAP Operations Manager).

OOM is a set of Helm charts plus a Helm plugin (deploy).

Each Helm chart deploys one ONAP component (AAI, SO, VFC, ...) on a Kubernetes cluster.

The helm deploy plugin simplifies the deployment of the whole solution (faster deployment, use of standard Helm metadata storage).
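
As an illustration, a deployment with the plugin might look like the following (the release name, chart reference and override file name are examples, not prescriptive; check the OOM documentation for the exact invocation for your release):

```shell
# Sketch: deploying ONAP with the helm deploy plugin.
# Release name "onap", chart "local/onap" and the override file are examples.
helm plugin list                 # check that the "deploy" plugin is installed

helm deploy onap local/onap \
  --namespace onap \
  -f onap-overrides.yaml
```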


So in order to deploy ONAP, you'll need a working Kubernetes environment (Ingress, storage class, CNI, ...) plus Helm installed (see the software requirements for the correct versions to use according to the ONAP version).
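
A few quick sanity checks before starting (assuming kubectl and helm are already on the PATH; version requirements depend on the ONAP release):

```shell
# Verify the cluster and tooling prerequisites.
kubectl version                  # client and server versions
kubectl get nodes                # all nodes should be Ready
kubectl get storageclass         # a default storage class must exist
helm version                     # must match the version required by the ONAP release
```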

On top of this Kubernetes installation, you may also need third party components for specific use cases:

  • Cert Manager if you want to use CMPv2 certificates in DCAE and SDNC
  • Prometheus (preferably installed with this helm chart) if you want to scrape some metrics
  • Strimzi (PoC in Jakarta, default in Kohn) for Kafka deployment
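
These third-party components can be installed from their upstream Helm charts, for example as follows (namespaces and release names are illustrative; pin chart versions according to your ONAP release):

```shell
# Cert Manager (for CMPv2 certificates in DCAE and SDNC)
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true

# Prometheus stack (for metrics scraping)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Strimzi operator (for Kafka deployment)
helm repo add strimzi https://strimzi.io/charts/
helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator \
  --namespace strimzi-system --create-namespace
```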

Strategy to deploy the stack


The OOM and Integration teams have taken a modular approach in order to cover all the operations needed to deploy and validate an ONAP instance:

  • Create the virtual machines
  • Deploy Kubernetes
  • Deploy Platform Services on Kubernetes
  • Deploy ONAP
  • Test ONAP

Important choices have been made:

  • All the deployments are run from GitLab CI
  • All deployments use Ansible playbooks
  • An "orchestrator" of GitLab CI deployments is used

Chained CI: the GitLab CI deployment "orchestrator"

Chained CI is a dynamic GitLab CI pipeline manager which calls an underlying pipeline at each stage, waits for completion, retrieves the artifacts, and moves to the next stage if successful.

Via a declarative (yaml) file, we can:

  • chain any pipeline
  • use outputs of previous stages as input of a stage
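
For illustration only, a scenario file chaining the stages described in this page might look like the sketch below. The structure and keys here are hypothetical, shown only to convey the idea of chaining pipelines and passing artifacts; refer to the official chained-ci documentation for the real schema:

```yaml
# Hypothetical chained-ci scenario: each step triggers a downstream GitLab CI
# pipeline, waits for completion, and passes its artifacts to the next step.
# Project paths and the "input" key are illustrative, not the real schema.
my-onap-deployment:
  steps:
    - name: create-virtual-machines
      project: infra/os-infra-manager
    - name: deploy-kubernetes
      project: infra/kubespray-install
      input: inventory          # artifact produced by the previous step
    - name: deploy-onap
      project: onap/oom-install
      input: kubeconfig
    - name: test-onap
      project: onap/xtesting
      input: kubeconfig
```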

Chained CI configuration is mainly split into 4 types of files:


More information:

official documentation: https://docs.onap.org/projects/onap-integration/en/latest/onap-integration-ci.html#integration-ci

Creating virtual machines

input

  • the description of the wanted infrastructure (networks, servers, volume, floating IPs, ...)
  • credentials to use IaaS API

output

  • an inventory file with the created machines and their purpose

Implementations:

OS Infra Manager for OpenStack deployments

AZ Infra Manager for Azure VM deployments

Creating Kubernetes cluster

input

  • an inventory file with servers in specific groups:
    • kube-master for servers hosting the API part of K8S
    • etcd for servers hosting etcd
    • kube-worker for servers hosting the "compute" part of K8S
    • k8s-cluster with the kube-master and kube-worker servers
    • k8s-full-cluster with all the servers (master, etcd, worker, jumphost)
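
As an example, a minimal Ansible inventory providing these groups could look like this (host names are illustrative; the group layout follows the list above):

```ini
; Example inventory (INI format); host names are illustrative
[kube-master]
master01

[etcd]
master01

[kube-worker]
worker01
worker02

[jumphost]
jumphost01

[k8s-cluster:children]
kube-master
kube-worker

[k8s-full-cluster:children]
k8s-cluster
etcd
jumphost
```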

output

  • a deployed kubernetes cluster
  • the admin kube config

implementations

Kubespray Automatic installation (used in dailies / weeklies)

RKE Automatic installation (not used anymore)

RKE2 Automatic installation (used internally)

AKS Automatic installation (used for gating)

Adding Services to the Kubernetes cluster

input

  • a valid kube config
  • an accessible kubernetes API

output

  • installed platform components (helm, prometheus, others)

implementations

This installation is done in the "postconfiguration" part of the Kubernetes cluster project.

There are 2 types of implementations:

Deploying ONAP

input

  • a valid kube config
  • helm
  • optionally an override file to choose which components to deploy and/or to set specific configuration for ONAP or some components
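
For example, an override file enabling only a subset of components follows the usual OOM chart layout (component names below are examples; the available keys depend on the ONAP release):

```yaml
# Example OOM override: enable only the components needed for a use case
aai:
  enabled: true
so:
  enabled: true
sdnc:
  enabled: true
vfc:
  enabled: false
```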

output

  • a (hopefully working) ONAP deployment

implementation

ONAP OOM Automatic installation

ONAP OOM Automatic installation refresh (unfortunately not used today on community deployments)

Testing ONAP

input

  • a valid kube config
  • the name of the namespace where ONAP is installed (onap by default)
  • a docker service

output

  • a report on the performed tests

implementation

Xtesting ONAP

Specificities of ONAP Dailies / Weeklies on Orange premises


The OpenStack API is not reachable from the Internet, and thus all calls must be made via a jumphost (rebond.opnfv.fr).

Specificities of ONAP gating on Azure


As Azure has no OpenStack API, a small OpenStack instance using DevStack (via DevStack Automatic Installation) is created near each worker.

Gating

Gating is built on top of "automatic deployment" seen before.

As for daily deployments, two chains in chained CI are created per gating environment (2 gating environments today):

  • Infrastructure deployment (Virtual Machines + Kubernetes + Platform services)
  • ONAP deployment and test

One of the differences is that the first chain does not trigger the second one.

The infrastructure deployment chain is meant to be run only once in a while (after ~100 days, the artifacts in GitLab are too old and the infrastructure must be reinstalled).

The ONAP deployment and test chain is meant to be run any time a gate is ready to be launched.

As we have a limited number of platforms and a potentially bigger number of gates to run, a queue system needs to be put in front.

At the time this gating system was created, no "out of the box" queue system was found (or understood; we never understood how to use Zuul, for example).

So the decision was made to create 4 μservices using an MQTT broker named Mosquitto as the messaging system:

  • Gerrit 2 MQTT: creates topics/messages for every event sent by Gerrit (via SSH)
  • MQTT 2 Gerrit: sends comments (optionally with a score) to a specific Gerrit review when a message is sent on a specific topic
  • Chained CI MQTT Trigger (master mode): listens for messages on specific topics and queues them when they belong to a watched topic; resends them when a worker asks for a job
  • Chained CI MQTT Trigger (worker mode): when free, listens for messages on specific topics and launches a gate (if elected) when receiving one; asks for a job every xx seconds when free
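
The message exchange can be illustrated with the standard mosquitto command-line clients (the broker host and JSON payload below are examples; the patchset topic follows the /<project>/<repo>/<event> pattern used by Gerrit 2 MQTT):

```shell
# Watch Gerrit events republished by Gerrit 2 MQTT
# (broker host "mqtt.gating.example" is an example):
mosquitto_sub -h mqtt.gating.example -t '/onap/oom/patchset-created'

# Publish a message that MQTT 2 Gerrit would turn into a review comment
# (topic and payload fields are illustrative):
mosquitto_pub -h mqtt.gating.example -t '/gerrit/comment' \
  -m '{"change": "...", "patchset": "...", "message": "gate passed"}'
```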

This is how it's done in the two "main" cases:

Workers are free

  1. A new patchset is created on a watched repo (OOM for example)
  2. Gerrit 2 MQTT creates a message on /onap/oom/patchset-created
  3. Chained CI MQTT Trigger master reads the message and puts it in its internal queue
  4. A free worker proposes to take it
  5. The master acknowledges and removes the message from the queue
  6. The worker starts a chained CI pipeline and waits for completion. According to the completion status, it retrieves the failed jobs and abstract messages
  7. The worker sends them to the Gerrit notification topic
  8. MQTT 2 Gerrit sees the message, retrieves the Gerrit number and patchset number, and uploads the comment

Workers are not free

  1. A new patchset is created on a watched repo (OOM for example)
  2. Gerrit 2 MQTT creates a message on /onap/oom/patchset-created
  3. Chained CI MQTT Trigger master reads the message and puts it in its internal queue
  4. Later, a worker becomes free and sends a message to its master to announce it can take a job
  5. The master dequeues the oldest message and resends it
  6. The worker proposes to take it
  7. The master acknowledges and removes the message from the queue
  8. The worker starts a chained CI pipeline and waits for completion. According to the completion status, it retrieves the failed jobs and abstract messages
  9. The worker sends the abstract and the failed-job list to the Gerrit notification topic
  10. MQTT 2 Gerrit sees the message, retrieves the Gerrit number and patchset number, and uploads the comment
  11. The worker announces it's free

Current deployments

All gating μservices are deployed on the Azure ONAP "gating" Kubernetes cluster (alongside a Nexus).

Each gating system has a Chained CI MQTT Trigger worker μservice.

One Chained CI MQTT Trigger master is created (we could have several, each monitoring different repos / having different workers).



