

Note

Hint: This page refers to "Casablanca". For Dublin/El Alto, go here: SDN-R with OOM Rancher/Kubernetes Installation

Introduction

This page discusses the process to install SDNR/SDNC into the ONAP installation at OWL (ONAP Open Wireless Laboratory) in WINLAB at Rutgers University.  The OWL/WINLAB laboratory environment is described in the wiki page ONAP Open Wireless Laboratory (OWL) at Wireless Information Network Laboratory (WINLAB).  This page describes how to install a development Docker image of SDNC into ONAP rather than the default image taken from the nexus3.onap.org:10001 repository.

...

Given the close deadline for the proof-of-concept, we have decided to develop our code in a GitHub site that is outside of the ONAP gerrit (there is a description at this wiki page).  The starting point for the code will be a branch of the ONAP gerrit, and we will fully conform with ONAP practices with the intention of submitting the code to the ONAP gerrit after the proof-of-concept.  We have agreed to install the karaf features into CCSDK and then create an SDNC docker image from that CCSDK image.  This approach accords with the policy of keeping features in CCSDK and will help us better leverage the work of the OOM group because their Helm charts install SDNC and not CCSDK.  We have also agreed to use the Casablanca branch of both CCSDK and SDNC rather than the master branch because the master branch has been updated to evolve into Dublin and Casablanca will be a stable environment as we work on the proof-of-concept.

...

The Dockerfile in ccsdk/distribution/odlsli/src/main/docker that creates the CCSDK Docker images needs to be updated with the correct tag for the OpenDaylight Oxygen image. 

Change:

Code Block
# Base ubuntu with added packages needed for open ecomp
FROM onap/ccsdk-odl-oxygen-image:${project.version}
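
If you are not sure which tag the OpenDaylight Oxygen base image carries, one way to check (assuming the image has already been built locally) is to list the local Docker images; this is just a quick sanity check, not part of the official build instructions:

Code Block
# list locally built CCSDK ODL Oxygen images and their tags
docker images | grep ccsdk-odl-oxygen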

...

The instructions to create an ONAP installation using the OOM Rancher/Kubernetes approach are in the ONAP wiki site (be sure to select the Casablanca version of the instructions).  Once ONAP is installed, there are further instructions on deploying it at this wiki page.

Working with the ONAP oom and integration repositories in the "ubuntu" home directory on sb4-rancher

...

sudo -i -u ubuntu

Change to user ubuntu.

cd ~/git/oom && git status && git checkout . && git pull

Discard any changes in the oom repository and pull down the latest code. I assume that we keep all of our changes in override files and other locations.
cd ~/git/integration

This repository maintains version numbers of the latest code for the ONAP components. There is information about the repository at https://gerrit.onap.org/r/gitweb?p=integration.git;a=summary.

git pull

Get the latest code in the repository.

git checkout casablanca

We agreed to use the casablanca release for the proof-of-concept.

cd ~/git/integration/version-manifest/src/main/scripts

This folder contains scripts that update the OOM repository with the correct version numbers.

./update-oom-image-versions.sh \

~/git/integration/version-manifest/src/main/resources/docker-manifest-release.csv \

~/git/oom

Execute a script to update version numbers in the Helm charts in the oom/kubernetes directory. This will make changes to the values.yaml files, so "git status" in ~/git/oom will return many changes. I emphasize "release" because there is also a "staging" manifest. We want to use the release version numbers.

cd ~/git/oom/kubernetes

Start following instructions at https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins

sudo cp -r ~/git/oom/kubernetes/helm/plugins/ ~/.helm

Get the Helm deploy plugin developed by the OOM group.

make repo

This updates the Helm repo served by a local Helm process listening on localhost:8879.

make && make onap

I think this updates the local Helm repo with the latest versions in ~/git/oom/kubernetes. These commands take a while.
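
As a quick check that the charts were built and are being served by the local repo, something like this should work (a sketch, assuming Helm 2, which the Casablanca OOM instructions use):

Code Block
# the "local" repo should be listed and the onap charts searchable
helm repo list
helm search onap | head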

...

To install the development image rather than the nexus3 image, open a terminal session on the VM containing the Rancher controller (sb4-rancher).  There are instructions on how to create an ssh tunnel to sb4-rancher at this wiki page.  Once logged in, we must update parameters in the values.yaml file in the Helm chart for SDNC in the OOM repository, shown here.

...

The simplest way to override the values is to copy the entire values.yaml file into a separate file (I use ~/oof-pci/override-sdnc.yaml) and modify the relevant parameters in that new file.  The new values are shown below.  We identify the repository with the source image name and tag, create a cluster of three ODL members, and create a redundant MySQL deployment of two instances.
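
Assuming the standard chart layout under ~/git/oom/kubernetes, creating the override file might look like this sketch (the ~/oof-pci directory is simply the convention used on this page):

Code Block
mkdir -p ~/oof-pci
cp ~/git/oom/kubernetes/sdnc/values.yaml ~/oof-pci/override-sdnc.yaml
# then edit ~/oof-pci/override-sdnc.yaml to set the repository, image, replicaCount, and mysql values shown below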

...

#################################################################
# Application configuration defaults.
#################################################################
# application images
repository: nexus3.onap.org:10001
repositoryOverride: registry.hub.docker.com
pullPolicy: Always
#image: onap/sdnc-image:1.4.1
image: ft3e0tab7p92qsoceonq/oof-pci-sdnr:1.4.2-SNAPSHOT

...

mysql:
  nameOverride: sdnc-db
  service:
    name: sdnc-dbhost
    internalPort: 3306
  nfsprovisionerPrefix: sdnc
  sdnctlPrefix: sdnc
  persistence:
    mountSubPath: sdnc/mysql
    enabled: true
  disableNfsProvisioner: true
  replicaCount: 2
  geoEnabled: false

...

# default number of instances
replicaCount: 3

...

By default, the OOM Rancher/Kubernetes script installs all of the ONAP components, not all of which we need for the proof-of-concept.  We identify which components to install by copying the ~/git/oom/kubernetes/onap/values.yaml file into a separate "override" file (~/oof-pci/override-onap.yaml) and changing "enabled: true" to "enabled: false" for the unneeded components.  Currently, these are the selected components (a sketch of the corresponding override entries follows the table).

Component        Enabled
aaf              false
aai              true
appc             false
clamp            false
cli              false
consul           false
contrib          false
dcaegen2         false
dmaap            true
esr              false
log              true
sniro-emulator   true
oof              true
msb              false
multicloud       false
nbi              false
policy           true
pomba            false
portal           true
robot            true
sdc              false
sdnc             true
so               true
uui              false
vfc              false
vid              false
vnfsdk           false
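
For reference, the corresponding entries in ~/oof-pci/override-onap.yaml follow the usual OOM pattern of one "enabled" flag per component; this excerpt is only a sketch covering a few of the components from the table above:

Code Block
# excerpt of ~/oof-pci/override-onap.yaml (sketch)
aaf:
  enabled: false
aai:
  enabled: true
appc:
  enabled: false
# ...
sdnc:
  enabled: true
so:
  enabled: true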

Command to install ONAP with the development image

Following the guidelines at the OOM wiki page, I use this command to install ONAP with the desired configuration. The files in ~/oof-pci are kept in the https://github.com/onap-oof-pci-poc/ccsdk repository.

Code Block
cd ~/git/oom/kubernetes
helm install sdnc/ -n demo-sdnc --namespace onap -f ~/oof-pci/override-onap.yaml -f ~/oof-pci/override-sdnc.yaml
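
After the install command returns, you can watch the release come up with something like this (a sketch; the release name demo-sdnc matches the command above):

Code Block
helm ls | grep demo-sdnc
kubectl get pods -n onap -o wide | grep demo-sdnc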

...

If there is already an instance of SDNC installed, it must be deleted before installing a new version.  Use these commands.

Code Block
helm del demo-sdnc --purge
kubectl get persistentvolumeclaims -n onap | grep demo-sdnc | sed -r 's/(^[^ ]+).*/kubectl delete persistentvolumeclaims -n onap \1/' | bash
kubectl get persistentvolumes      -n onap | grep demo-sdnc | sed -r 's/(^[^ ]+).*/kubectl delete persistentvolumes      -n onap \1/' | bash
kubectl get secrets                -n onap | grep demo-sdnc | sed -r 's/(^[^ ]+).*/kubectl delete secrets                -n onap \1/' | bash
kubectl get clusterrolebindings    -n onap | grep demo-sdnc | sed -r 's/(^[^ ]+).*/kubectl delete clusterrolebindings    -n onap \1/' | bash

The first command deletes SDNC but, despite the "--purge" option, some residual resources remain.  The subsequent commands discover and delete those resources.  The "helm del..." command takes some time, so please be patient.  Once SDNC has been deleted, you can install the new version using the commands in the previous section.
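
Before reinstalling, a quick check like this should return nothing if the release and its residual resources are really gone (a sketch using the same demo-sdnc name):

Code Block
helm ls --all | grep demo-sdnc
kubectl get pods,pvc,secrets -n onap | grep demo-sdnc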

...

Now that SDNC/SDNR is deployed, how can you access it?  To access the browser interfaces of SDNC/SDNR, you can use this sequence of commands.  First:

...

We see that there are three instances of SDNC running and two instances of SDNC-DB, and that they are deployed on different nodes, as expected.  All of the pods have private IP addresses that are not accessible from outside the ONAP deployment, but demo-sdnc-sdnc-0 is installed on node sb4-k8s-4, which has IP address 10.31.1.79.  If you cannot use ping to determine the IP address of the node, the command "kubectl describe node <node-name> -n <namespace>" will provide the address.
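
For example (sb4-k8s-4 is the node from the listing above; "kubectl get nodes -o wide" is an alternative that shows all node addresses at once):

Code Block
kubectl describe node sb4-k8s-4 | grep -A 3 Addresses
# or list all nodes with their INTERNAL-IP addresses
kubectl get nodes -o wide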

You can now enter this command.

Code Block
% kubectl get svc -n onap | grep NAME && kubectl get svc -n onap | grep sdnc
NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                                                       AGE
sdnc                          NodePort       10.43.141.133   <none>                                 8282:30202/TCP,8202:30208/TCP,8280:30246/TCP,8443:30267/TCP   20h
sdnc-ansible-server           ClusterIP      10.43.41.91     <none>                                 8000/TCP                                                      20h
sdnc-cluster                  ClusterIP      None            <none>                                 2550/TCP                                                      20h
sdnc-dbhost                   ClusterIP      None            <none>                                 3306/TCP                                                      20h
sdnc-dbhost-read              ClusterIP      10.43.100.184   <none>                                 3306/TCP                                                      20h
sdnc-dgbuilder                NodePort       10.43.16.12     <none>                                 3000:30203/TCP                                                20h
sdnc-dmaap-listener           ClusterIP      None            <none>                                 <none>                                                        20h
sdnc-portal                   NodePort       10.43.40.149    <none>                                 8843:30201/TCP                                                20h
sdnc-sdnctldb01               ClusterIP      None            <none>                                 3306/TCP                                                      20h
sdnc-sdnctldb02               ClusterIP      None            <none>                                 3306/TCP                                                      20h
sdnc-ueb-listener             ClusterIP      None            <none>                                 <none>                                                        20h
so-sdnc-adapter               ClusterIP      10.43.141.124   <none>                                 8086/TCP                                                      2d

SDNC is presenting a service at a NodePort that is accessible from outside the ONAP installation.  PORT 8282:30202 means that port 30202 is accessible externally and maps to internal port 8282 (the Dockerfile that creates the SDNC image maps host port 8282 to container port 8181).  Therefore, SDNC is listening at sb4-k8s-4:30202, or 10.31.1.79:30202.  By creating an ssh tunnel to sb4-k8s-4 (described here), one can open a browser to localhost:30202/apidoc/explorer/index.html and see this.
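
The tunnel command looks roughly like the sketch below; the user name and gateway host are placeholders, since the actual login details are on the wiki page linked above:

Code Block
# forward local port 30202 to the SDNC NodePort on sb4-k8s-4 (10.31.1.79)
ssh -L 30202:10.31.1.79:30202 <user>@<winlab-gateway>
# then browse to http://localhost:30202/apidoc/explorer/index.html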

...