
The steps described on this page are run as "ubuntu", a non-root user.



Clone the OOM project from ONAP gerrit

Run the following command to clone the OOM project from ONAP gerrit into any directory you prefer; that directory is referred to as "{$OOM}" throughout this page.

git clone http://gerrit.onap.org/r/oom

SDN-C Cluster Deployment

Configure SDN-C Cluster Deployment

We are using Kubernetes replicas to achieve the SDN-C cluster deployment (see details from About SDN-C Clustering for the desired goal).

This only needs to be done once and, at the moment, all modifications are done manually (they can be automated via scripting in the future if the need arises).


Get New startODL.sh Script From Gerrit Topic SDNC-163

The source of the new startODL.sh script, gerrit change 25475, has been merged into sdnc/oam project on December 15th, 2017.

Skip this section if your SDN-C image includes this change.


Do the following to get the new startODL.sh script, which configures ODL clustering for the SDN-C cluster.

# | Purpose | Command Examples
1 | Get the shared new startODL.sh script content

  • Go to gerrit change 25475
  • Click installation/sdnc/src/main/scripts/startODL.sh under the Files section to view the details of the changes
  • Click the Download button to download the startODL_new.sh.zip file
  • Open the .sh file inside the zip file and copy its content (to be used in step 2)

2 | Create the new startODL.sh on the Kubernetes node VM

mkdir -p /home/ubuntu/cluster/script

vi /home/ubuntu/cluster/script/startODL.sh

Paste the content copied in step 1 into this file

3 | Give execution permission to the new startODL.sh script

chmod 777 /home/ubuntu/cluster/script/startODL.sh
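Steps 1-3 above can be sketched as one shell sequence. This is only a sketch: SCRIPT_DIR stands in for /home/ubuntu/cluster/script on the node, and the heredoc body is a placeholder for the real content copied from gerrit change 25475.

```shell
# Sketch of steps 1-3; SCRIPT_DIR stands in for /home/ubuntu/cluster/script
# and the heredoc body is only a placeholder for the content copied from
# gerrit change 25475.
SCRIPT_DIR=$(mktemp -d)/cluster/script   # on the node: /home/ubuntu/cluster/script
mkdir -p "$SCRIPT_DIR"

# step 2: create the script file with the copied content
cat > "$SCRIPT_DIR/startODL.sh" <<'EOF'
#!/bin/bash
echo "placeholder for the startODL.sh content copied in step 1"
EOF

# step 3: give execution permission
chmod 777 "$SCRIPT_DIR/startODL.sh"
ls -l "$SCRIPT_DIR/startODL.sh"
```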


Get SDN-C Cluster Templates From Gerrit Topic SDNC-163

Only do this if the SDN-C cluster code has not yet been merged into the gerrit OOM project.

# | Purpose | Command and Examples
1 | Get the git fetch command for the shared templates code

Go to gerrit change 25467

Click the Download downward arrow, then click the clipboard icon on the same line as Checkout to copy the git commands (the git fetch and checkout commands).

 An example of the git commands with patch set 19

git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/19 && git checkout FETCH_HEAD

2 | Fetch the shared templates into the oom directory on the Kubernetes node VM

cd {$OOM}

Run the git commands obtained in step 1

3 | Link the new startODL.sh

vi kubernetes/sdnc/templates/sdnc-statefulset.yaml

Make the following changes:

Purpose | Changes

Skip this change if you skipped the "Get New startODL.sh Script From Gerrit Topic SDNC-163" section

mount point for new startODL.sh script

Field | Added Value

.spec.template.spec.containers.volumeMounts

(of container sdnc-controller-container)

- mountPath: /opt/onap/sdnc/bin/startODL.sh
  name: sdnc-startodl

 .spec.template.spec.volumes

- name: sdnc-startodl
  hostPath:
    path: /home/ubuntu/cluster/script/startODL.sh

Mount point for the ODL deploy directory

(ODL automatically installs bundles/packages placed under its deploy directory. This mount point lets you drop a bundle/package into the /home/ubuntu/cluster/deploy directory on the Kubernetes node, and it will automatically be installed in the sdnc pods.)

Field | Added Value

.spec.template.spec.containers.volumeMounts

(of container sdnc-controller-container)

- mountPath: /opt/opendaylight/current/deploy
  name: sdnc-deploy

.spec.template.spec.volumes

- name: sdnc-deploy
  hostPath:
    path: /home/ubuntu/cluster/deploy
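Putting the four additions together, the touched parts of sdnc-statefulset.yaml look roughly like this (a sketch showing only the added fields; all surrounding fields in the real file are unchanged):

```yaml
# sdnc-statefulset.yaml (fragment) - sketch of the added fields only
spec:
  template:
    spec:
      containers:
      - name: sdnc-controller-container
        volumeMounts:
        - mountPath: /opt/onap/sdnc/bin/startODL.sh
          name: sdnc-startodl
        - mountPath: /opt/opendaylight/current/deploy
          name: sdnc-deploy
      volumes:
      - name: sdnc-startodl
        hostPath:
          path: /home/ubuntu/cluster/script/startODL.sh
      - name: sdnc-deploy
        hostPath:
          path: /home/ubuntu/cluster/deploy
```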

4 | Enable cluster configuration

vi kubernetes/sdnc/values.yaml

Change the following fields to the new values:

field | new value | old value
enableODLCluster | true | false
numberOfODLReplicas | 3 | 1
numberOfDbReplicas | 2 | 1
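The three edits can also be scripted with sed. This sketch runs against a stub file; the key names come from the table above, but verify the exact layout of {$OOM}/kubernetes/sdnc/values.yaml in your checkout before pointing VALUES at the real file.

```shell
# Sketch: apply the three values.yaml changes with sed. Run here against
# a stub file; point VALUES at kubernetes/sdnc/values.yaml in practice.
VALUES=$(mktemp)
cat > "$VALUES" <<'EOF'
enableODLCluster: false
numberOfODLReplicas: 1
numberOfDbReplicas: 1
EOF

sed -i \
  -e 's/^enableODLCluster:.*/enableODLCluster: true/' \
  -e 's/^numberOfODLReplicas:.*/numberOfODLReplicas: 3/' \
  -e 's/^numberOfDbReplicas:.*/numberOfDbReplicas: 2/' \
  "$VALUES"

cat "$VALUES"
```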


Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs

Skip this section if you have skipped "Share the /dockerdata-nfs Folder between Kubernetes Nodes".


On the node where you have configured the nfs server (from step 3, "Share the /dockerdata-nfs Folder between Kubernetes Nodes"), run the following:

# | Purpose | Command and Example
1 | Find the node name

Find the nfs server node: run "ps -ef | grep nfs". You should see that:

  • the node with the nfs server runs nfsd processes:

ubuntu@sdnc-k8s:~$ ps -ef|grep nfs
root 3473 2 0 Dec07 ? 00:00:00 [nfsiod]
root 11072 2 0 Dec06 ? 00:00:00 [nfsd4_callbacks]
root 11074 2 0 Dec06 ? 00:00:00 [nfsd]
root 11075 2 0 Dec06 ? 00:00:00 [nfsd]
root 11076 2 0 Dec06 ? 00:00:00 [nfsd]
root 11077 2 0 Dec06 ? 00:00:00 [nfsd]
root 11078 2 0 Dec06 ? 00:00:00 [nfsd]
root 11079 2 0 Dec06 ? 00:00:03 [nfsd]
root 11080 2 0 Dec06 ? 00:00:13 [nfsd]
root 11081 2 0 Dec06 ? 00:00:42 [nfsd]
ubuntu@sdnc-k8s:~$

  • the node with the nfs client runs nfs svc processes:

ubuntu@sdnc-k8s-2:~$ ps -ef|grep nfs
ubuntu 5911 5890 0 20:10 pts/0 00:00:00 grep --color=auto nfs
root 18739 2 0 Dec06 ? 00:00:00 [nfsiod]
root 18749 2 0 Dec06 ? 00:00:00 [nfsv4.0-svc]
ubuntu@sdnc-k8s-2:~$

kubectl get node

 Example of response

ubuntu@sdnc-k8s:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
sdnc-k8s Ready master 6d v1.8.4
sdnc-k8s-2 Ready <none> 6d v1.8.4
ubuntu@sdnc-k8s:~$

2 | Set a label on the node

kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd

 An example

ubuntu@sdnc-k8s:~$ kubectl label nodes sdnc-k8s disktype=ssd

node "sdnc-k8s" labeled

ubuntu@sdnc-k8s:~$

3 | Check that the label has been set on the node

kubectl get node --show-labels

 An example

ubuntu@sdnc-k8s:~$ kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
sdnc-k8s Ready master 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=sdnc-k8s,node-role.kubernetes.io/master=
sdnc-k8s-2 Ready <none> 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=sdnc-k8s-2
ubuntu@sdnc-k8s:~$

4 | Update the nfs-provisioner pod template to force it to run on the nfs server node

In the nfs-provisoner-deployment.yaml file, add "spec.template.spec.nodeSelector" to the "nfs-provisioner" pod spec

 An example of the nfs-provisioner pod with nodeSelector
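The expanded example did not survive the page export; a minimal sketch of the addition, assuming the disktype=ssd label set in step 2, would be:

```yaml
# nfs-provisoner-deployment.yaml (fragment) - hypothetical sketch
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```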


Create the ONAP Config

# | Purpose | Command and Examples
0.1 | (Only Once) Create the ONAP config using a sample YAML file

cd {$OOM}/kubernetes/config

cp onap-parameters-sample.yaml onap-parameters.yaml

0 | Set the OOM Kubernetes config environment

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1 | Run the createConfig script to create the ONAP config

cd {$OOM}/kubernetes/config
./createConfig.sh -n onap

 Example of output

**** Creating configuration for ONAP instance: onap

namespace "onap" created

NAME:   onap-config

LAST DEPLOYED: Wed Nov  8 20:47:35 2017

NAMESPACE: onap

STATUS: DEPLOYED

 

RESOURCES:

==> v1/ConfigMap

NAME                   DATA  AGE

global-onap-configmap  15    0s

 

==> v1/Pod

NAME    READY  STATUS             RESTARTS  AGE

config  0/1    ContainerCreating  0         0s

 

 

**** Done ****

Wait for the config-init container to finish

Use the following command to monitor the onap config init until it reaches Completed STATUS:

kubectl get pod --all-namespaces -a

 Example of final output

The final output should look like the following, with the onap config pod in Completed STATUS:
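The wait can also be scripted. The kubectl line is commented out below; the matching function is exercised here on a canned sample line (matching the expected output) so the logic can be checked offline.

```shell
# is_completed succeeds once a pod named "config" reports Completed in
# 'kubectl get pod --all-namespaces -a' output.
is_completed() { awk '$2 == "config" && $4 == "Completed" {found=1} END {exit !found}'; }

# canned sample of the expected final line, for offline checking
sample='onap          config    0/1    Completed   0    2d'
echo "$sample" | is_completed && echo "config init finished"

# on the cluster, poll until the pod completes:
# until kubectl get pod --all-namespaces -a | is_completed; do sleep 10; done
```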

Additional checks for config-init
helm

helm ls --all

 Example of output

NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap

helm status onap-config

 Example of output

LAST DEPLOYED: Tue Nov 21 17:07:13 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
global-onap-configmap 15 2d

==> v1/Pod
NAME READY STATUS RESTARTS AGE
config 0/1 Completed 0 2d

 kubernetes namespaces

kubectl get namespaces

 Example of output

NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d


Deploy the SDN-C Application

# | Purpose | Command and Examples
0 | Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1 | Run the createAll script to deploy the SDN-C application

cd {$OOM}/kubernetes/oneclick

./createAll.bash -n onap -a sdnc

 Example of output

********** Creating instance 1 of ONAP with port range 30200 and 30399

********** Creating ONAP:


********** Creating deployments for sdnc **********

Creating namespace **********
namespace "onap-sdnc" created

Creating service account **********
clusterrolebinding "onap-sdnc-admin-binding" created

Creating registry secret **********
secret "onap-docker-registry-key" created

Creating deployments and services **********
NAME: onap-sdnc
LAST DEPLOYED: Thu Nov 23 20:13:32 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolume
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
onap-sdnc-db 2Gi RWX Retain Bound onap-sdnc/sdnc-db 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
sdnc-db Bound onap-sdnc-db 2Gi RWX 1s

==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbhost None <none> 3306/TCP 1s
sdnctldb01 None <none> 3306/TCP 1s
sdnctldb02 None <none> 3306/TCP 1s
sdnc-dgbuilder 10.43.97.219 <nodes> 3000:30203/TCP 1s
sdnhost 10.43.99.163 <nodes> 8282:30202/TCP,8201:30208/TCP 1s
sdnc-portal 10.43.72.72 <nodes> 8843:30201/TCP 1s

==> extensions/v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdnc-dgbuilder 1 1 1 0 1s

==> apps/v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
sdnc-dbhost 2 1 1s
sdnc 3 3 1s
sdnc-portal 2 2 1s

 


**** Done ****


Ensure that the SDN-C application has started

Use the kubectl get pods command to monitor the SDN-C startup; you should observe:

  • the sdnc-dbhost-0 pod starts and gets into Running STATUS first,
    • while
      • the sdnc-dbhost-1 pod does not exist yet and
      • the sdnc, sdnc-dgbuilder and sdnc-portal pods stay in Init:0/1 STATUS
  • once the sdnc-dbhost-0 pod is fully started with READY "1/1",
    • sdnc-dbhost-1 starts from ContainerCreating STATUS and runs up to Running STATUS
  • once the sdnc-dbhost-1 pod is in Running STATUS,
    • the sdnc pods start from PodInitializing STATUS and end up in Running STATUS in parallel,
    • while
      • the sdnc-dgbuilder and sdnc-portal pods stay in Init:0/1 STATUS
  • once the sdnc pods are in Running STATUS,
    • sdnc-dgbuilder and sdnc-portal start from PodInitializing STATUS and end up in Running STATUS in parallel
 Example of start up status changes

2 | Validate that all SDN-C pods and services are created properly

helm ls --all

 Example of SDNC release

ubuntu@sdnc-k8s:~$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
onap-sdnc 1 Thu Nov 23 20:13:32 2017 DEPLOYED sdnc-0.1.0 onap
ubuntu@sdnc-k8s:~$

kubectl get namespaces

 Example of SDNC namespace

ubuntu@sdnc-k8s:~$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
onap-sdnc Active 12m
ubuntu@sdnc-k8s:~$

kubectl get deployment --all-namespaces

 Example of SDNC deployment

ubuntu@sdnc-k8s-2:~$ kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system heapster 1 1 1 1 15d
kube-system kube-dns 1 1 1 1 15d
kube-system kubernetes-dashboard 1 1 1 1 15d
kube-system monitoring-grafana 1 1 1 1 15d
kube-system monitoring-influxdb 1 1 1 1 15d
kube-system tiller-deploy 1 1 1 1 15d
onap-sdnc sdnc-dgbuilder 1 1 1 0 26m
ubuntu@sdnc-k8s-2:~$

kubectl get clusterrolebinding --all-namespaces

 Example of SDNC cluster role binding

ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces
NAMESPACE NAME AGE
addons-binding 15d
onap-sdnc-admin-binding 13m
ubuntu@sdnc-k8s:~$

kubectl get serviceaccounts --all-namespaces

 Example of SDNC service account

ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
onap-sdnc default 1 14m
ubuntu@sdnc-k8s:~$

kubectl get service --all-namespaces

 Example of all SDNC services

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 15d
kube-system heapster ClusterIP 10.43.210.11 <none> 80/TCP 15d
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 15d
kube-system kubernetes-dashboard ClusterIP 10.43.196.205 <none> 9090/TCP 15d
kube-system monitoring-grafana ClusterIP 10.43.90.8 <none> 80/TCP 15d
kube-system monitoring-influxdb ClusterIP 10.43.52.1 <none> 8086/TCP 15d
kube-system tiller-deploy ClusterIP 10.43.106.73 <none> 44134/TCP 15d
onap-sdnc dbhost ClusterIP None <none> 3306/TCP 17m
onap-sdnc sdnc-dgbuilder NodePort 10.43.97.219 <none> 3000:30203/TCP 17m
onap-sdnc sdnc-portal NodePort 10.43.72.72 <none> 8843:30201/TCP 17m
onap-sdnc sdnctldb01 ClusterIP None <none> 3306/TCP 17m
onap-sdnc sdnctldb02 ClusterIP None <none> 3306/TCP 17m
onap-sdnc sdnhost NodePort 10.43.99.163 <none> 8282:30202/TCP,8201:30208/TCP 17m
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$

kubectl get pods --all-namespaces -a

 Example of all SDNC pods

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-4285517626-32km8 1/1 Running 0 15d
kube-system kube-dns-638003847-vqz8t 3/3 Running 0 15d
kube-system kubernetes-dashboard-716739405-tnxj6 1/1 Running 0 15d
kube-system monitoring-grafana-2360823841-qfhzm 1/1 Running 0 15d
kube-system monitoring-influxdb-2323019309-41q0l 1/1 Running 0 15d
kube-system tiller-deploy-737598192-5663c 1/1 Running 0 15d
onap config 0/1 Completed 0 2d
onap-sdnc sdnc-0 2/2 Running 0 17m
onap-sdnc sdnc-1 0/2 CrashLoopBackOff 16 17m
onap-sdnc sdnc-2 2/2 Running 0 17m
onap-sdnc sdnc-dbhost-0 1/1 Running 0 17m
onap-sdnc sdnc-dbhost-1 1/1 Running 0 16m
onap-sdnc sdnc-dgbuilder-356329770-cpfzj 0/1 Running 6 17m
onap-sdnc sdnc-portal-0 0/1 Running 6 17m
onap-sdnc sdnc-portal-1 0/1 CrashLoopBackOff 7 17m
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$

docker ps |grep sdnc

 Example of SDNC docker container

$ docker ps |grep sdnc |wc -l
14
$ docker ps |grep sdnc

9a1fc91b6dcc docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" 19 minutes ago Up 19 minutes k8s_filebeat-onap_sdnc-2_onap-sdnc_c7feb9d1-d08a-11e7-957f-0269cb13eff1_0
9dbaa04e160c docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" 19 minutes ago Up 19 minutes k8s_filebeat-onap_sdnc-0_onap-sdnc_c7f28480-d08a-11e7-957f-0269cb13eff1_0
fca36e9b5353 nexus3.onap.org:10001/onap/sdnc-image@sha256:1049151464b3e60d9a553bc2f3bdaf79555839217f0557652e982ca99398375a "/opt/onap/sdnc/bin/s" 19 minutes ago Up 19 minutes k8s_sdnc-controller-container_sdnc-2_onap-sdnc_c7feb9d1-d08a-11e7-957f-0269cb13eff1_0
00efa164a58a nexus3.onap.org:10001/onap/sdnc-image@sha256:1049151464b3e60d9a553bc2f3bdaf79555839217f0557652e982ca99398375a "/opt/onap/sdnc/bin/s" 19 minutes ago Up 19 minutes k8s_sdnc-controller-container_sdnc-0_onap-sdnc_c7f28480-d08a-11e7-957f-0269cb13eff1_0
4a2769dfee37 mysql/mysql-server@sha256:720f301388709af2c84ee09ba51340d09d1e9f7ba45f19719b5b18b5fa696771 "/entrypoint.sh mysql" 19 minutes ago Up 19 minutes (healthy) k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_c7e050b3-d08a-11e7-957f-0269cb13eff1_0

8b4a21cb2bd2 mysql/mysql-server@sha256:720f301388709af2c84ee09ba51340d09d1e9f7ba45f19719b5b18b5fa696771 "/entrypoint.sh mysql" 19 minutes ago Up 19 minutes (healthy) k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_cde0fde0-d08a-11e7-957f-0269cb13eff1_0

04904cb18336 gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 minutes ago Up 19 minutes k8s_POD_sdnc-portal-0_onap-sdnc_c810f3e4-d08a-11e7-957f-0269cb13eff1_0
e89a3e28505a gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 minutes ago Up 19 minutes k8s_POD_sdnc-2_onap-sdnc_c7feb9d1-d08a-11e7-957f-0269cb13eff1_0
7d2c4ac066f4 gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 minutes ago Up 19 minutes k8s_POD_sdnc-dgbuilder-356329770-cpfzj_onap-sdnc_c7dd4b22-d08a-11e7-957f-0269cb13eff1_0
660ee9119001 gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 minutes ago Up 19 minutes k8s_POD_sdnc-0_onap-sdnc_c7f28480-d08a-11e7-957f-0269cb13eff1_0
fa0e59e1a7c7 gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 minutes ago Up 19 minutes k8s_POD_sdnc-dbhost-0_onap-sdnc_c7e050b3-d08a-11e7-957f-0269cb13eff1_0


24b6d3eb3020 gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 minutes ago Up 19 minutes k8s_POD_sdnc-dbhost-1_onap-sdnc_cde0fde0-d08a-11e7-957f-0269cb13eff1_0
a6e6445b87eb gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 minutes ago Up 19 minutes k8s_POD_sdnc-1_onap-sdnc_c7f6b06f-d08a-11e7-957f-0269cb13eff1_0
1d00b4bb46c0 gcr.io/google_containers/pause-amd64:3.0 "/pause" 19 minutes ago Up 19 minutes k8s_POD_sdnc-portal-1_onap-sdnc_c813c667-d08a-11e7-957f-0269cb13eff1_0

$

In the above example, the sdnc-1 containers are missing because that pod failed due to a directory-sharing issue.

3 | Validate that the SDN-C bundles are up

 Enter the pod's container in one of two ways
 Option 1: by pod name, from anywhere

Use command

kubectl exec -it <POD_NAME_WITH_NAME_SPACE> bash

 Example

 Option 2: by docker container ID, from the node where the container runs

Use command

docker exec -it <DOCKER_CONTAINER_ID> bash

 Example

 Check SDNC bundles in the ODL client
 Enter the ODL client

 Check SDNC bundles

4 | Validate that the SDN-C APIs are shown on the ODL RestConf page

Access the ODL RestConf page from the following URL:

http://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html

 Example of SDNC APIs in ODL RestConf page

5 | Validate the SDN-C ODL cluster

Goal: verify that the SDN-C ODL cluster is running properly

Prerequisites
  1. Run this test on one of your Kubernetes nodes
  2. Make sure python-pycurl is installed
    • If not, on Ubuntu use "apt-get install python-pycurl" to install it
Use the ODL integration tool to monitor the ODL cluster

Clone the ODL Integration-test project

git clone https://github.com/opendaylight/integration-test.git

Enter the cluster-monitor folder

cd integration-test/tools/clustering/cluster-monitor

Create the cluster-monitor.bash script

vi cluster-monitor.bash

Content of cluster-monitor.bash
#!/bin/bash
#########################################################################
# This script wraps ODL's monitor.py and dynamically picks up the       #
# cluster IPs of the sdnc clustered pods, updating the IP addresses in  #
# the cluster.json file that feeds monitor.py.                          #
# It also changes the port, username and password in cluster.json.      #
#                                                                       #
# If the sdnc pods' IP addresses are re-assigned, the running session   #
# of this script should be restarted.                                   #
#                                                                       #
# To run it, just enter the following command:                          #
#    ./cluster-monitor.bash                                             #
#########################################################################

# get the pod IPs using kubectl
ips_string=$(kubectl get pods --all-namespaces -o wide | grep 'sdnc-[0-9]' | awk '{print $7}')
ip_list=($(echo ${ips_string} | tr ' ' '\n'))

# loop over the three sdnc pods and replace the existing IPs in cluster.json
for ((i=0;i<=2;i++));
do
   if [ "${ip_list[$i]}" == "<none>" ]; then
     echo "IP of deleted pod is not ready yet"
     exit 1;
   fi

   let "j=$i+4"
   sed -i -r "${j}s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/${ip_list[$i]}/" cluster.json
done

# replace port, username and password
sed -i 's/8181/8080/g' cluster.json
sed -i 's/username/admin/' cluster.json
sed -i 's/password/admin/' cluster.json

# start monitoring
python monitor.py

This script fetches the IPs of all SDN-C pods and automatically updates the cluster.json file.
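The per-line IP rewrite is the least obvious part of the script: it assumes the three controller IPs sit on lines 4-6 of cluster.json (hence j = i + 4). This standalone sketch applies the same sed expression to a stub file with sample IPs, purely for illustration.

```shell
# Standalone sketch of the IP rewrite in cluster-monitor.bash: the three
# controller IPs are assumed to sit on lines 4-6 of cluster.json.
# A stub file and sample pod IPs are used here.
CLUSTER=$(mktemp)
cat > "$CLUSTER" <<'EOF'
{
  "cluster": {
    "controllers": [
      "10.0.0.1:8080",
      "10.0.0.2:8080",
      "10.0.0.3:8080"
    ]
  }
}
EOF

ip_list=(10.42.0.10 10.42.0.11 10.42.0.12)
for i in 0 1 2; do
  j=$((i + 4))
  # replace the first dotted-quad on line j with the pod IP
  sed -i -r "${j}s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/${ip_list[$i]}/" "$CLUSTER"
done

cat "$CLUSTER"
```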

Start the cluster monitor UI

./cluster-monitor.bash

Note:

If the applications inside any of the three SDNC pods are not fully started, this script will fail with errors such as connection errors or value errors.

Otherwise, you should see the monitoring UI as the following:

Use testCluster RPC to test SDN-C load sharing

The testCluster-bundle.zip provides a testBundle which offers a testCluster API to help validate SDN-C RPC load sharing in the deployed SDN-C cluster.

It is as easy as doing the following:

  1. Download testCluster-bundle.zip (by clicking on the hyperlinked text) and place it in the sdnc-deploy hostPath defined in .spec.volumes of the sdnc-statefulset.yaml file
  2. Unzip testCluster-bundle.zip. If the unzip command is not installed, install it with "sudo apt install unzip" and then run "unzip testCluster-bundle.zip".

     Here's an example

    ubuntu@sdnc-k8s-1:~/cluster/deploy$ unzip testCluster-0.2.0.zip

    The program 'unzip' is currently not installed. You can install it by typing:
    sudo apt install unzip
    ubuntu@sdnc-k8s-1:~/cluster/deploy$ sudo apt install unzip
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Suggested packages:
    zip
    The following NEW packages will be installed:
    unzip
    0 upgraded, 1 newly installed, 0 to remove and 117 not upgraded.
    Need to get 158 kB of archives.
    After this operation, 530 kB of additional disk space will be used.
    Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 unzip amd64 6.0-20ubuntu1 [158 kB]
    Fetched 158 kB in 0s (230 kB/s)
    Selecting previously unselected package unzip.
    (Reading database ... 93492 files and directories currently installed.)
    Preparing to unpack .../unzip_6.0-20ubuntu1_amd64.deb ...
    Unpacking unzip (6.0-20ubuntu1) ...
    Processing triggers for mime-support (3.59ubuntu1) ...
    Processing triggers for man-db (2.7.5-1) ...
    Setting up unzip (6.0-20ubuntu1) ...
    ubuntu@sdnc-k8s-1:~/cluster/deploy$ unzip testCluster-0.2.0.zip
    Archive: testCluster-0.2.0.zip
    inflating: testDataBroker-api-0.2.0-SNAPSHOT.jar
    inflating: testDataBroker-features-0.2.0-SNAPSHOT.jar
    inflating: testDataBroker-impl-0.2.0-SNAPSHOT.jar
    ubuntu@sdnc-k8s-1:~/cluster/deploy$

  3. As this hostPath is mounted as ODL's deploy directory, once the zip file is unzipped the testBundle will be automatically loaded by ODL and the testCluster API will be available as an ODL RestConf API.
    • testCluster API is accessible from
      •  ODL RestConf API Documentation

      •  Postman

        Example of postman code snippets
        POST /restconf/operations/testCluster:who-am-i HTTP/1.1
        Host: ${KUBERNETES MASTER VM IP}:30202
        Accept: application/json
        Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==
        Cache-Control: no-cache
        Postman-Token: 9683538b-de47-dec8-3e88-c491be9dd6ef
        
        
        
        
      •  Curl command
        curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U -H 'Accept: application/json' -X POST 'http://${KUBERNETES MASTER VM IP}:30202/restconf/operations/testCluster:who-am-i'
    •  An example of testCluster API response
      {
      	"output": {
      		"node": "sdnc-2"
      	}
      }
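For a quick load-sharing check from the shell, the answering node can be pulled out of the who-am-i response. This is a sketch against a canned response; in practice, pipe in the output of the curl command above.

```shell
# Sketch: extract the answering node from a testCluster:who-am-i
# response. A canned response is used here; in practice pipe in the
# output of the curl command shown above.
response='{"output":{"node":"sdnc-2"}}'
node=$(printf '%s' "$response" | sed -n 's/.*"node"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
echo "RPC answered by: $node"
```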


Undeploy the SDN-C Application

# | Purpose | Command and Examples
0 | Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1 | Run the deleteAll script to delete all SDN-C pods and services

./deleteAll.bash -n onap -a sdnc

 Example of output

********** Cleaning up ONAP:
release "onap-sdnc" deleted
namespace "onap-sdnc" deleted
clusterrolebinding "onap-sdnc-admin-binding" deleted
Service account onap-sdnc-admin-binding deleted.

Waiting for namespaces termination...

********** Gone **********

2 | Validate that all SDN-C pods and services are cleaned up

docker ps |grep sdnc

 Example of no more SDNC docker container

ubuntu@sdnc-k8s:~$ docker ps |grep sdnc
ubuntu@sdnc-k8s:~$

kubectl get pods --all-namespaces -a

 Example of no more SDNC pods

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-4285517626-32km8 1/1 Running 0 15d
kube-system kube-dns-638003847-vqz8t 3/3 Running 0 15d
kube-system kubernetes-dashboard-716739405-tnxj6 1/1 Running 0 15d
kube-system monitoring-grafana-2360823841-qfhzm 1/1 Running 0 15d
kube-system monitoring-influxdb-2323019309-41q0l 1/1 Running 0 15d
kube-system tiller-deploy-737598192-5663c 1/1 Running 0 15d
onap config 0/1 Completed 0 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$

kubectl get service --all-namespaces

 Example of no more SDNC services

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 15d
kube-system heapster ClusterIP 10.43.210.11 <none> 80/TCP 15d
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 15d
kube-system kubernetes-dashboard ClusterIP 10.43.196.205 <none> 9090/TCP 15d
kube-system monitoring-grafana ClusterIP 10.43.90.8 <none> 80/TCP 15d
kube-system monitoring-influxdb ClusterIP 10.43.52.1 <none> 8086/TCP 15d
kube-system tiller-deploy ClusterIP 10.43.106.73 <none> 44134/TCP 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$

kubectl get serviceaccounts --all-namespaces

 Example of no more SDNC service account

ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
ubuntu@sdnc-k8s:~$

kubectl get clusterrolebinding --all-namespaces

 Example of no more SDNC cluster role binding

ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces
NAMESPACE NAME AGE
addons-binding 15d
ubuntu@sdnc-k8s:~$

kubectl get deployment --all-namespaces

 Example of no more SDNC deployment

ubuntu@sdnc-k8s:~$ kubectl get deployment --all-namespaces

NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE

kube-system   heapster               1         1         1            1           2d

kube-system   kube-dns               1         1         1            1           2d

kube-system   kubernetes-dashboard   1         1         1            1           2d

kube-system   monitoring-grafana     1         1         1            1           2d

kube-system   monitoring-influxdb    1         1         1            1           2d

kube-system   tiller-deploy          1         1         1            1           2d

ubuntu@sdnc-k8s:~$

kubectl get namespaces

 Example of no more SDNC namespace

ubuntu@sdnc-k8s:~$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
ubuntu@sdnc-k8s:~$

helm ls --all

 Example of no more SDNC release

ubuntu@sdnc-k8s:~$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
ubuntu@sdnc-k8s:~$


Remove the ONAP Config

# | Purpose | Command and Examples
0 | Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1 | Remove the ONAP config and any deployed applications in one shot

./deleteAll.bash -n onap

 Example of removing ONAP config output

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ ./deleteAll.bash -n onap

 

********** Cleaning up ONAP:

Error: release: not found

Error from server (NotFound): namespaces "onap-consul" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-consul-admin-binding" not found

Service account onap-consul-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-msb" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-msb-admin-binding" not found

Service account onap-msb-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-mso" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-mso-admin-binding" not found

Service account onap-mso-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-message-router" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-message-router-admin-binding" not found

Service account onap-message-router-admin-binding deleted.

 

 

release "onap-sdnc" deleted

namespace "onap-sdnc" deleted

clusterrolebinding "onap-sdnc-admin-binding" deleted

Service account onap-sdnc-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-vid" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vid-admin-binding" not found

Service account onap-vid-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-robot" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-robot-admin-binding" not found

Service account onap-robot-admin-binding deleted.

 

Error: release: not found
Error from server (NotFound): namespaces "onap-portal" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-portal-admin-binding" not found
Service account onap-portal-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-policy" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-policy-admin-binding" not found
Service account onap-policy-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-appc" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-appc-admin-binding" not found
Service account onap-appc-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-aai" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-aai-admin-binding" not found
Service account onap-aai-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-sdc" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-sdc-admin-binding" not found
Service account onap-sdc-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-dcaegen2" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-dcaegen2-admin-binding" not found
Service account onap-dcaegen2-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-log" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-log-admin-binding" not found
Service account onap-log-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-cli" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-cli-admin-binding" not found
Service account onap-cli-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-multicloud" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-multicloud-admin-binding" not found
Service account onap-multicloud-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-clamp" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-clamp-admin-binding" not found
Service account onap-clamp-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-vnfsdk" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vnfsdk-admin-binding" not found
Service account onap-vnfsdk-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-uui" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-uui-admin-binding" not found
Service account onap-uui-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-aaf" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-aaf-admin-binding" not found
Service account onap-aaf-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-vfc" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vfc-admin-binding" not found
Service account onap-vfc-admin-binding deleted.

Error: release: not found
Error from server (NotFound): namespaces "onap-kube2msb" not found
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-kube2msb-admin-binding" not found
Service account onap-kube2msb-admin-binding deleted.

Waiting for namespaces termination...

********** Gone **********

2
  • Manually clean up

This step cleans up the leftover items that were created by the config/createConfig.sh script but are not removed by the oneclick/deleteAll.bash script.

 Example of leftover ONAP config

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ ./createConfig.sh -n onap

**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
Error: a release named "onap-config" already exists.
Please run: helm ls --all "onap-config"; helm del --help
**** Done ****
ubuntu@sdnc-k8s:

ONAP serviceaccount

No action needed.

The service account cannot be deleted by a specific command; it is removed automatically when its namespace is deleted.

 Example of a service account that cannot be deleted

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl delete serviceaccounts default -n onap
serviceaccount "default" deleted
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 6s
ubuntu@sdnc-k8s:

... after ONAP namespace is deleted...

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$

ONAP namespace 
 Example of deleting the ONAP namespace

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl delete namespace onap
namespace "onap" deleted
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Terminating 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$

ONAP config release
 Example of deleting ONAP config release

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ helm delete onap-config --purge
release "onap-config" deleted
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ helm ls --all
ubuntu@sdnc-k8s:~/oom/kubernetes/config$

3
  • Delete the shared folder

 sudo rm -rf /dockerdata-nfs/onap
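A minimal sketch of a safer variant of the command above, assuming the shared config lives at /dockerdata-nfs/onap as stated on this page. The existence check is an addition, not part of the OOM scripts; it makes the action explicit instead of silently doing nothing.

```shell
#!/bin/bash
# Guarded removal of the shared ONAP config folder (sketch, not an OOM script).
target="/dockerdata-nfs/onap"
if [ -d "$target" ]; then
  echo "removing $target"
  sudo rm -rf "$target"
else
  echo "$target not present, nothing to do"
fi
```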


  • Scripts

The following scripts simplify various procedures by automating them.

autoCreateOnapConfig
#!/bin/bash
########################################################################################
# This script replaces the {$OOM}/kubernetes/config/createConfig.sh script             #
# and does not terminate until the ONAP configuration pod reaches Completed status.    #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
#   1. cd {$OOM}/kubernetes/oneclick                                                   #
#   2. vi autoCreateOnapConfig.bash                                                    #
#   3. paste the full content here into autoCreateOnapConfig.bash and save the file    #
#   4. chmod 777 autoCreateOnapConfig.bash                                             #
# To run it, just enter the following command:                                         #
#   ./autoCreateOnapConfig.bash                                                        #
########################################################################################


echo "Create ONAP config under config directory..."
cd ../config
./createConfig.sh -n onap
cd -


echo "...done : kubectl get namespace
-----------------------------------------------
>>>>>>>>>>>>>> k8s namespace"
kubectl get namespace


echo "
-----------------------------------------------
>>>>>>>>>>>>>> helm : helm ls --all"
helm ls --all


echo "
-----------------------------------------------
>>>>>>>>>>>>>> pod : kubectl get pods --all-namespaces -a"
kubectl get pods --all-namespaces -a


status=`kubectl get pods --all-namespaces -a |grep onap |xargs echo | cut -d' ' -f4`
while true
do
  echo "waiting for onap config pod to reach Completed status"
  sleep 5
  echo "-----------------------------------------------"
  kubectl get pods --all-namespaces -a
  status=`kubectl get pods --all-namespaces -a |grep onap |xargs echo | cut -d' ' -f4`
  if [ "$status" = "Completed" ]
  then
    echo "onap config is Completed!!!"
    break
  fi
done
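The polling pattern above can be factored into a reusable helper. The sketch below is a hypothetical addition, not part of OOM: it retries a caller-supplied status command until the output matches an expected value or the retry budget runs out, which is exactly what the loops in these scripts do by hand.

```shell
#!/bin/bash
# Generic status poller (sketch): wait_for_status EXPECTED TRIES COMMAND...
wait_for_status() {
  local expected="$1" tries="$2"
  shift 2
  local i=0 status
  while [ "$i" -lt "$tries" ]; do
    status=$("$@")                      # run the caller-supplied status command
    if [ "$status" = "$expected" ]; then
      return 0                          # desired status reached
    fi
    i=$((i + 1))
    sleep 5                             # same poll interval as the scripts above
  done
  return 1                              # gave up after $tries attempts
}
```

With this helper, the onap-config wait would look like `wait_for_status Completed 60 sh -c 'kubectl get pods --all-namespaces -a | grep onap | xargs echo | cut -d" " -f4'`.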
autoCleanOnapConfig
#!/bin/bash
########################################################################################
# This script wraps the {$OOM}/kubernetes/oneclick/deleteAll.bash script along with    #
# the following steps to clean up the ONAP configuration:                              #
#     - remove the ONAP namespace                                                      #
#     - remove the ONAP release                                                       #
#     - remove the ONAP shared directory                                               #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
#   1. cd {$OOM}/kubernetes/oneclick                                                   #
#   2. vi autoCleanOnapConfig.bash                                                     #
#   3. paste the full content here into autoCleanOnapConfig.bash and save the file     #
#   4. chmod 777 autoCleanOnapConfig.bash                                              #
# To run it, just enter the following command:                                         #
#   ./autoCleanOnapConfig.bash                                                         #
########################################################################################

./deleteAll.bash -n onap

echo "----------------------------------------------
Force remove namespace..."
kubectl delete namespace onap
echo "...done"
kubectl get namespace

echo "Force delete helm process ..."
helm delete onap-config --purge --debug
echo "...done"
helm ls --all

echo "Remove ONAP dockerdata..."
sudo rm -rf /dockerdata-nfs/onap
echo "...done"
ls -altr /dockerdata-nfs
autoDeploySdnc
#!/bin/bash
########################################################################################
# This script wraps the {$OOM}/kubernetes/oneclick/createAll.bash script along with    #
# the following steps to deploy the ONAP SDNC application:                             #
#     - wait until sdnc-0 is running properly with both (2) containers up              #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
#   1. cd {$OOM}/kubernetes/oneclick                                                   #
#   2. vi autoDeploySdnc.bash                                                          #
#   3. paste the full content here into autoDeploySdnc.bash and save the file          #
#   4. chmod 777 autoDeploySdnc.bash                                                   #
# To run it, just enter the following command:                                         #
#   ./autoDeploySdnc.bash                                                              #
########################################################################################

echo "Deploy SDNC..."
./createAll.bash -n onap -a sdnc

echo "...done
-----------------------------------------------
>>>>>>>>>>>>>> pod : kubectl get pods --all-namespaces -a"
kubectl get pods --all-namespaces -a

status=`kubectl get pods --all-namespaces -a |grep sdnc-0 |xargs echo | cut -d' ' -f3`
while true
do
  echo "waiting for onap sdnc-0 to reach fully running status"
  sleep 5
  echo "-----------------------------------------------"
  kubectl get pods --all-namespaces -a

  status=`kubectl get pods --all-namespaces -a |grep sdnc-0 |xargs echo | cut -d' ' -f3`
  if [ "$status" = "2/2" ]
  then
    echo "onap sdnc-0 is running!!!"
    break
  fi
done
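The readiness check above hinges on one pipeline. The sketch below isolates it against a sample line (illustrative, not captured from a live cluster): `xargs echo` squeezes kubectl's variable-width columns down to single spaces so `cut` can address the READY column as field 3.

```shell
#!/bin/bash
# Extract the READY column from a sample "kubectl get pods" line (sketch).
line="onap-sdnc     sdnc-0     2/2     Running     0     5m"
ready=$(echo "$line" | xargs echo | cut -d' ' -f3)
echo "READY column: $ready"
```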
autoCleanSdnc
#!/bin/bash
########################################################################################
# This script wraps the {$OOM}/kubernetes/oneclick/deleteAll.bash script along with    #
# the following steps to fully un-deploy the ONAP SDNC application:                    #
#     - force remove the clusterrolebinding for onap-sdnc                              #
#     - force remove the namespace for onap-sdnc                                       #
#     - force remove the release for onap-sdnc                                         #
#     - wait until the onap-sdnc namespace is removed                                  #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
#   1. cd {$OOM}/kubernetes/oneclick                                                   #
#   2. vi autoCleanSdnc.bash                                                           #
#   3. paste the full content here into autoCleanSdnc.bash and save the file           #
#   4. chmod 777 autoCleanSdnc.bash                                                    #
# To run it, just enter the following command:                                         #
#   ./autoCleanSdnc.bash                                                               #
########################################################################################

./deleteAll.bash -n onap -a sdnc

echo "----------------------------------------------
Remove clusterrolebinding..."
kubectl delete clusterrolebinding onap-sdnc-admin-binding
echo "...done : kubectl get clusterrolebinding"
kubectl get clusterrolebinding

echo "Remove onap-sdnc namespace..."
kubectl delete namespaces onap-sdnc
echo "...done : kubectl get namespaces"
kubectl get namespaces

echo "Delete onap-sdnc release..."
helm delete onap-sdnc --purge
echo "...done: helm ls --all"
helm ls --all


sdncCount=`kubectl get namespaces | grep onap-sdnc | wc -l`
while true
do
  echo "waiting for onap-sdnc namespace to be removed"
  sleep 5
  echo "-----------------------------------------------"
  kubectl get namespaces

  sdncCount=`kubectl get namespaces | grep onap-sdnc | wc -l`
  if [ "$sdncCount" = "0" ]
  then
    echo "sdnc removed!!!"
    break
  fi
done
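The termination check above can be exercised in isolation. In this sketch a fixed string stands in for live `kubectl get namespaces` output (illustrative only); `grep | wc -l` counts how many lines still mention onap-sdnc, and the loop exits once that count reaches zero.

```shell
#!/bin/bash
# Count lines mentioning onap-sdnc in sample namespace output (sketch).
namespaces="default
kube-system
onap-sdnc"
sdncCount=$(echo "$namespaces" | grep onap-sdnc | wc -l)
echo "onap-sdnc namespaces still present: $sdncCount"
```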