4. Deploy/Undeploy the SDN-C Cluster

All steps described on this page are run as "ubuntu", a non-root user.





Clone the OOM project from ONAP gerrit

Run the following command on the master node, from any directory you prefer, to clone the OOM project from ONAP gerrit; that directory is referred to as "{$OOM}" throughout this page.

git clone http://gerrit.onap.org/r/oom

SDN-C Cluster Deployment

Configure SDN-C Cluster Deployment

We are using Kubernetes replicas to achieve the SDN-C cluster deployment (see About SDN-C Clustering for details on the desired goal).

This only needs to be done once. At the moment, all modifications are made manually; they can be automated via scripting in the future if the need arises.



Get New startODL.sh Script From Gerrit Topic SDNC-163

The source of the new startODL.sh script, gerrit change 25475, has been merged into sdnc/oam project on December 15th, 2017.


Do the following to get the new startODL.sh script, which provides the ODL clustering configuration for the SDN-C cluster.

#

Purpose

Command Examples


1

Get the shared new startODL.sh script content

Go to gerrit change 25475.

Click on installation/sdnc/src/main/scripts/startODL.sh under the Files section to view the text of the script.

Click the Download button to download the startODL_new.sh.zip file.

Extract the .sh file from the zip file and rename it to "startODL.sh".

2

Create new startODL.sh on the Kubernetes node VM

mkdir -p /dockerdata-nfs/cluster/script

vi /dockerdata-nfs/cluster/script/startODL.sh

Paste the content copied in step 1 into this file.

3

Give execution permission to the new startODL.sh script

chmod 777 /dockerdata-nfs/cluster/script/startODL.sh

sudo chown $(id -u):$(id -g) /dockerdata-nfs/cluster/script/startODL.sh



Get SDN-C Cluster Templates From Gerrit Topic SDNC-163

The source of the templates, gerrit change 25467, has been merged into sdnc/oam project on December 20th, 2017.

Skip steps 1 and 2 if your cloned OOM project already includes this change.

Skip step 3 if you skipped the previous section (adding the startODL.sh script).

Skip step 4 if you don't want to add/deploy extra features/bundles/packages.

Step 5 is important: it determines the number of sdnc and db pods.

#

Purpose

Command and Examples


1

Get the git fetch command for the shared templates

Go to gerrit change 25467

Click the Download downward arrow.

From the drop-down list at the bottom right corner, select anonymous http.

Click the clipboard icon on the same line as Checkout to copy the git commands (which include the git fetch and checkout commands) to the clipboard.

git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/23 && git checkout FETCH_HEAD

2

Fetch the shared template to the oom directory on the Kubernetes node VM

cd {$OOM}

Execute the git command from step 1.

3

Link the new startODL.sh

Skip this step if you skipped the "Get New startODL.sh Script" section.

Be careful when editing YAML files: they are sensitive to indentation (the number of spaces), so take extra care when copying/pasting from a browser.



vi kubernetes/sdnc/templates/sdnc-statefulset.yaml

Make the following changes:
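The exact edit is not reproduced on this page. As a hedged sketch, the change mounts the shared script from the dockerdata-nfs directory over the container's startODL.sh; the mount path and volume name below are assumptions and must match the existing volumeMounts/volumes sections and the sdnc container's actual script location in your template:

```yaml
# Hypothetical fragment for sdnc-statefulset.yaml (align indentation with the
# existing container spec; names shown here are illustrative):
        volumeMounts:
        - name: sdnc-startodl
          mountPath: /opt/onap/sdnc/bin/startODL.sh
      volumes:
      - name: sdnc-startodl
        hostPath:
          path: /dockerdata-nfs/cluster/script/startODL.sh
```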

4

Link the ODL deploy directory

If you are not going to use the test bundle to test out SDN-C cluster and load balancing, you can skip this step.

ODL automatically installs bundles/packages that are placed under its deploy directory. This mount point lets you drop a bundle/package into the /dockerdata-nfs/cluster/deploy directory on the Kubernetes node, and it will then automatically be installed in the sdnc pods (under the /opt/opendaylight/current/deploy directory).



vi kubernetes/sdnc/templates/sdnc-statefulset.yaml

Make the following changes:
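The exact edit is not reproduced on this page. As a hedged sketch based on the two directories named above, the change adds a hostPath mount of the shared deploy directory into the sdnc container; the volume name is illustrative and the indentation must match your template:

```yaml
# Hypothetical fragment for sdnc-statefulset.yaml: expose the shared deploy
# directory inside the sdnc container.
        volumeMounts:
        - name: sdnc-deploy
          mountPath: /opt/opendaylight/current/deploy
      volumes:
      - name: sdnc-deploy
        hostPath:
          path: /dockerdata-nfs/cluster/deploy
```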



5

Enable cluster configuration

vi kubernetes/sdnc/values.yaml

Change the following fields to the new values:
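The exact fields are not reproduced on this page. As a hedged sketch (the field names below are assumptions based on the SDNC-163 change; verify them against your values.yaml), the edit enables clustering and sets the sdnc and db replica counts:

```yaml
# Hypothetical fragment for kubernetes/sdnc/values.yaml:
enableODLCluster: true
numberOfODLReplicas: 3   # number of sdnc pods
numberOfDbReplicas: 2    # number of db pods
```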



Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs

Skip the following section if any of the following conditions applies.



Verify (from Master node)

#

Purpose

Command and Example


1

Find the node name

Run the command "ps -ef | grep nfs" on each node; you should see that:

  • the node with the NFS server runs nfsd processes:

ubuntu@sdnc-k8s:~$ ps -ef|grep nfs
root 3473 2 0 Dec07 ? 00:00:00 [nfsiod]
root 11072 2 0 Dec06 ? 00:00:00 [nfsd4_callbacks]
root 11074 2 0 Dec06 ? 00:00:00 [nfsd]
root 11075 2 0 Dec06 ? 00:00:00 [nfsd]
root 11076 2 0 Dec06 ? 00:00:00 [nfsd]
root 11077 2 0 Dec06 ? 00:00:00 [nfsd]
root 11078 2 0 Dec06 ? 00:00:00 [nfsd]
root 11079 2 0 Dec06 ? 00:00:03 [nfsd]
root 11080 2 0 Dec06 ? 00:00:13 [nfsd]
root 11081 2 0 Dec06 ? 00:00:42 [nfsd]
ubuntu@sdnc-k8s:~$

  • the node with the NFS client runs the NFS service process:

ubuntu@sdnc-k8s-2:~$ ps -ef|grep nfs
ubuntu 5911 5890 0 20:10 pts/0 00:00:00 grep --color=auto nfs
root 18739 2 0 Dec06 ? 00:00:00 [nfsiod]
root 18749 2 0 Dec06 ? 00:00:00 [nfsv4.0-svc]
ubuntu@sdnc-k8s-2:~$

kubectl get node

ubuntu@sdnc-k8s:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
sdnc-k8s Ready master 6d v1.8.4
sdnc-k8s-2 Ready <none> 6d v1.8.4
ubuntu@sdnc-k8s:~$
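The two process listings above can be condensed into a small helper. This is a sketch (the function name is illustrative, not from the guide): the server node shows [nfsd] kernel threads, while client nodes only show the nfsiod/nfsv4.0-svc helpers.

```shell
# Sketch: report whether the current node runs the NFS server.
nfs_role() {
  if pgrep -x nfsd >/dev/null 2>&1; then
    echo server
  else
    echo client
  fi
}
nfs_role
```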

2

Set label on the node

kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd

ubuntu@sdnc-k8s:~$ kubectl label nodes sdnc-k8s disktype=ssd

node "sdnc-k8s" labeled

ubuntu@sdnc-k8s:~$

3

Check the label has been set on the node

kubectl get node --show-labels

ubuntu@sdnc-k8s:~$ kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
sdnc-k8s Ready master 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=sdnc-k8s,node-role.kubernetes.io/master=
sdnc-k8s-2 Ready <none> 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=sdnc-k8s-2
ubuntu@sdnc-k8s:~$

4

Update the nfs-provisioner pod template to force it to run on the NFS server node

In the nfs-provisoner-deployment.yaml file, add "spec.template.spec.nodeSelector" for the "nfs-provisioner" pod
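The nodeSelector addition can be sketched as follows, using the disktype=ssd label set in step 2 (merge it into the existing pod template rather than replacing it):

```yaml
# Fragment for nfs-provisoner-deployment.yaml: pin the nfs-provisioner pod to
# the node labeled disktype=ssd (the NFS server node).
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```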



Create the ONAP Config

Setup onap-parameters.yaml file

The following commands must be run on the master node before creating the ONAP configuration.

cd {$OOM}/kubernetes/config

cp onap-parameters-sample.yaml onap-parameters.yaml



Run createConfig

To simplify the steps in this section

You can skip the steps in this section by following the instructions in autoCreateOnapConfig of the Scripts section to:

  • create {$OOM}/kubernetes/oneclick/tools/autoCreateOnapConfig.bash file

  • run it and wait until the script completes

#

Purpose

Command and Examples

#

Purpose

Command and Examples

0

Set the OOM Kubernetes config environment

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1

Run the createConfig script to create the ONAP config

cd {$OOM}/kubernetes/config
./createConfig.sh -n onap

**** Creating configuration for ONAP instance: onap

namespace "onap" created

NAME:   onap-config

LAST DEPLOYED: Wed Nov  8 20:47:35 2017

NAMESPACE: onap

STATUS: DEPLOYED

 

RESOURCES:

==> v1/ConfigMap

NAME                   DATA  AGE

global-onap-configmap  15    0s

 

==> v1/Pod

NAME    READY  STATUS             RESTARTS  AGE

config  0/1    ContainerCreating  0         0s

 

 

**** Done ****

Wait for the config-init container to finish

Use the following command to monitor the onap config init until it reaches the Completed STATUS:

kubectl get pod --all-namespaces -a

The final output should look like the following, with the onap config pod in the Completed STATUS:

Additional checks for config-init



Deploy the SDN-C Application

To simplify the steps in this section

You can skip the steps in this section by following the instructions in autoDeploySdnc of the Scripts section to:

  • create {$OOM}/kubernetes/oneclick/tools/autoDeploySdnc.bash file

  • run it and wait until the script completes

Execute the following on the master node.

#

Purpose

Command and Examples


0

Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1

Run the createAll script to deploy the SDN-C application

cd {$OOM}/kubernetes/oneclick

./createAll.bash -n onap -a sdnc

********** Creating instance 1 of ONAP with port range 30200 and 30399

********** Creating ONAP:


********** Creating deployments for sdnc **********

Creating namespace **********
namespace "onap-sdnc" created

Creating service account **********
clusterrolebinding "onap-sdnc-admin-binding" created

Creating registry secret **********
secret "onap-docker-registry-key" created

Creating deployments and services **********
NAME: onap-sdnc
LAST DEPLOYED: Thu Nov 23 20:13:32 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolume
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
onap-sdnc-db 2Gi RWX Retain Bound onap-sdnc/sdnc-db 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
sdnc-db Bound onap-sdnc-db 2Gi RWX 1s

==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbhost None <none> 3306/TCP 1s
sdnctldb01 None <none> 3306/TCP 1s
sdnctldb02 None <none> 3306/TCP 1s
sdnc-dgbuilder 10.43.97.219 <nodes> 3000:30203/TCP 1s
sdnhost 10.43.99.163 <nodes> 8282:30202/TCP,8201:30208/TCP 1s
sdnc-portal 10.43.72.72 <nodes> 8843:30201/TCP 1s

==> extensions/v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdnc-dgbuilder 1 1 1 0 1s

==> apps/v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
sdnc-dbhost 2 1 1s
sdnc 3 3 1s
sdnc-portal 2 2 1s

 


**** Done ****



Ensure that the SDN-C application has started

Use the kubectl get pods command to monitor the SDN-C startup; you should observe:

  • sdnc-dbhost-0 pod starts and gets into Running STATUS first,

    • while

      • sdnc-dbhost-1 pod does not exist and

      • sdnc, sdnc-dgbuilder and sdnc-portal pods are staying in Init:0/1 STATUS

  • once sdnc-dbhost-0 pod is fully started with the READY "1/1",

    • sdnc-dbhost-1 will be starting from ContainerCreating STATUS and runs up to Running STATUS

  • once sdnc-dbhost-1 pod is in Running STATUS,

    • sdnc pods will be starting from PodInitializing STATUS and end up with Running STATUS in parallel

    • while

      • sdnc-dgbuilder and sdnc-portal pods are staying in Init:0/1 STATUS

  • once the sdnc pods are in Running STATUS,

    • sdnc-dgbuilder and sdnc-portal will be starting from PodInitializing STATUS and end up with Running STATUS in parallel
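The startup ordering above can be watched with a small polling loop instead of re-running kubectl by hand. This is a sketch (the helper name is illustrative): the function succeeds only when every pod status line fed to it is Running or Completed.

```shell
# Sketch: succeed only when every status line on stdin is Running or Completed.
all_pods_ready() {
  ! grep -vqE 'Running|Completed'
}

# Usage (assumes kubectl is configured for the cluster):
# until kubectl get pods -n onap-sdnc --no-headers | awk '{print $3}' | all_pods_ready; do
#   sleep 10
# done
```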

2

Validate that all SDN-C pods and services are created properly

helm ls --all

ubuntu@sdnc-k8s:~$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
onap-sdnc 1 Thu Nov 23 20:13:32 2017 DEPLOYED sdnc-0.1.0 onap
ubuntu@sdnc-k8s:~$

kubectl get namespace

ubuntu@sdnc-k8s:~$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
onap-sdnc Active 12m
ubuntu@sdnc-k8s:~$

kubectl get deployment --all-namespaces

ubuntu@sdnc-k8s-2:~$ kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system heapster 1 1 1 1 15d
kube-system kube-dns 1 1 1 1 15d
kube-system kubernetes-dashboard 1 1 1 1 15d
kube-system monitoring-grafana 1 1 1 1 15d
kube-system monitoring-influxdb 1 1 1 1 15d
kube-system tiller-deploy 1 1 1 1 15d
onap-sdnc sdnc-dgbuilder 1 1 1 0 26m
ubuntu@sdnc-k8s-2:~$

kubectl get clusterrolebinding --all-namespaces

ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces
NAMESPACE NAME AGE
addons-binding 15d
onap-sdnc-admin-binding 13m
ubuntu@sdnc-k8s:~$

kubectl get serviceaccounts --all-namespaces

ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
onap-sdnc default 1 14m
ubuntu@sdnc-k8s:~$

kubectl get service -n onap-sdnc

kubectl get pods --all-namespaces -a

docker ps |grep sdnc

On Server 1:

$ docker ps |grep sdnc |wc -l
9

$ docker ps |grep sdnc
ebcb2f7f1a4a docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0
55a82019ce10 nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0
bbdfdfc2b1f0 gcr.io/google-samples/xtrabackup@sha256:29354f70c9d9207e757a1bae6a4cbf2f57a56b18fe5c2b0acc1198a053b24b38 "bash -c 'set -ex\ncd " About an hour ago Up About an hour k8s_xtrabackup_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
26854595164d mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" About an hour ago Up About an hour k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
b577493b5725 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
14dcc0985259 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_1
b52be823997b quay.io/kubernetes_incubator/nfs-provisioner@sha256:b5328a3825032d7e1719015260260347bda99c5d830bcd5d9da1175e7d1da989 "/nfs-provisioner -pr" About an hour ago Up About an hour k8s_nfs-provisioner_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0
dc6dfd3fde3b gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0
006aaa34c5af mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 32 hours ago Up 32 hours k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_d1cc3b28-e59e-11e7-a01e-026c942e0e8c_0

On Server 2:

$ docker ps |grep sdnc|wc -l
20

$ docker ps |grep sdnc