Info

The steps described on this page are run as "ubuntu", a non-root user.

...

# | Purpose | Command and Examples
1 | Get the shared templates code (git fetch command)

Go to Gerrit change 25467.

Click the Download downward arrow.

From the drop-down list at the bottom right corner, select "anonymous http".

Click the clipboard icon on the same line as "Checkout" to copy the git commands (which include the git fetch and git checkout commands) to the clipboard.

Expand
titleAn example of the git commands with patch set 23
git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/23 && git checkout FETCH_HEAD
2 | Fetch the shared template into the oom directory on the Kubernetes node VM

cd {$OOM}

Run the git commands obtained in step 1.

3 | Link the new startODL.sh
Info

Skip this step if you have skipped the "get new startODL.sh script" section.


vi kubernetes/sdnc/templates/sdnc-statefulset.yaml

make the following changes:

Purpose | Changes


Add a mount point for the new startODL.sh script

Field | Added Value

.spec.template.spec.containers.volumeMounts

(of container sdnc-controller-container)

- mountPath: /opt/onap/sdnc/bin/startODL.sh
  name: sdnc-startodl

.spec.template.spec.volumes

- name: sdnc-startodl
  hostPath:
    path: /dockerdata-nfs/cluster/script/startODL.sh

4 | Link the ODL deploy directory
Info

If you are not going to use the test bundle to test the SDN-C cluster and load balancing, you can skip this step.

ODL automatically installs bundles/packages placed under its deploy directory. This mount point lets you drop a bundle/package on the Kubernetes node under the /dockerdata-nfs/cluster/deploy directory, and it will automatically be installed in the sdnc pods.


vi kubernetes/sdnc/templates/sdnc-statefulset.yaml

make the following changes:

Purpose | Changes

Add a mount point for the ODL deploy directory

Field | Added Value

.spec.template.spec.containers.volumeMounts

(of container sdnc-controller-container)

- mountPath: /opt/opendaylight/current/deploy
  name: sdnc-deploy

.spec.template.spec.volumes

- name: sdnc-deploy
  hostPath:
    path: /dockerdata-nfs/cluster/deploy
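Taken together, the two mount points above can be sketched as a single fragment of sdnc-statefulset.yaml. This is an illustrative sketch only: the surrounding structure is assumed, and only the added volumeMounts/volumes entries come from the steps above.

```yaml
# Sketch of the added entries in kubernetes/sdnc/templates/sdnc-statefulset.yaml
# (surrounding fields abbreviated; only the additions are from this guide)
spec:
  template:
    spec:
      containers:
        - name: sdnc-controller-container
          volumeMounts:
            - mountPath: /opt/onap/sdnc/bin/startODL.sh
              name: sdnc-startodl
            - mountPath: /opt/opendaylight/current/deploy
              name: sdnc-deploy
      volumes:
        - name: sdnc-startodl
          hostPath:
            path: /dockerdata-nfs/cluster/script/startODL.sh
        - name: sdnc-deploy
          hostPath:
            path: /dockerdata-nfs/cluster/deploy
```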


5 | Enable cluster configuration

vi kubernetes/sdnc/values.yaml

change the following fields to the new values:

Field | New Value | Old Value
enableODLCluster | true | false
numberOfODLReplicas | 3 | 1
numberOfDbReplicas | 2 | 1
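If you prefer to script this edit, the three values can also be changed with sed. The snippet below is a sketch run against a scratch copy; it assumes the file uses the plain "key: value" formatting shown, so verify against your values.yaml and point the commands at kubernetes/sdnc/values.yaml in practice.

```shell
# Demo on a scratch copy; in practice target kubernetes/sdnc/values.yaml.
# Assumption: values.yaml uses the plain "key: value" formatting below.
cat > /tmp/values.yaml <<'EOF'
enableODLCluster: false
numberOfODLReplicas: 1
numberOfDbReplicas: 1
EOF
sed -i 's/^enableODLCluster: false/enableODLCluster: true/' /tmp/values.yaml
sed -i 's/^numberOfODLReplicas: 1/numberOfODLReplicas: 3/' /tmp/values.yaml
sed -i 's/^numberOfDbReplicas: 1/numberOfDbReplicas: 2/' /tmp/values.yaml
```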

...

# | Purpose | Command and Examples
0

Set the OOM Kubernetes config environment

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1

Run the createConfig script to create the ONAP config

cd {$OOM}/kubernetes/config
./createConfig.sh -n onap

Expand
titleExample of output

**** Creating configuration for ONAP instance: onap

namespace "onap" created

NAME:   onap-config

LAST DEPLOYED: Wed Nov  8 20:47:35 2017

NAMESPACE: onap

STATUS: DEPLOYED

 

RESOURCES:

==> v1/ConfigMap

NAME                   DATA  AGE

global-onap-configmap  15    0s

 

==> v1/Pod

NAME    READY  STATUS             RESTARTS  AGE

config  0/1    ContainerCreating  0         0s

 

 

**** Done ****

Wait for the config-init container to finish

Use the following command to monitor the onap config init until it reaches the Completed STATUS:

kubectl get pod --all-namespaces -a

Expand
titleExample of final output

The final output should look like the following, with the onap config pod in Completed STATUS:
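Instead of re-running kubectl get pod by hand, the wait can be scripted. The helper below is a sketch: it takes any shell command string that prints "kubectl get pod"-style output and polls until "Completed" appears; the function name and default timeout are illustrative.

```shell
# Sketch: poll a status command until the onap config pod reports Completed.
# check_cmd is any shell command string that prints "kubectl get pod"-style output.
wait_for_completed() {
  check_cmd="$1"
  timeout="${2:-600}"
  waited=0
  while [ "$waited" -lt "$timeout" ]; do
    if sh -c "$check_cmd" | grep -q 'Completed'; then
      echo "config pod reached Completed"
      return 0
    fi
    sleep 5
    waited=$((waited + 5))
  done
  echo "timed out waiting for Completed status" >&2
  return 1
}

# In practice: wait_for_completed "kubectl get pod --all-namespaces -a" 600
```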

Additional checks for config-init
helm

helm ls --all

Expand
titleExample of output

NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap

helm status onap-config

Expand
titleExample of output

LAST DEPLOYED: Tue Nov 21 17:07:13 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
global-onap-configmap 15 2d

==> v1/Pod
NAME READY STATUS RESTARTS AGE
config 0/1 Completed 0 2d

Kubernetes namespaces

kubectl get namespaces

Expand
titleExample of output

NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d


Deploy the SDN-C Application

...

# | Purpose | Command and Examples
0

Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step.)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1

Run the createAll script to deploy the SDN-C application

cd {$OOM}/kubernetes/oneclick

./createAll.bash -n onap -a sdnc

Expand
titleExample of output

********** Creating instance 1 of ONAP with port range 30200 and 30399

********** Creating ONAP:


********** Creating deployments for sdnc **********

Creating namespace **********
namespace "onap-sdnc" created

Creating service account **********
clusterrolebinding "onap-sdnc-admin-binding" created

Creating registry secret **********
secret "onap-docker-registry-key" created

Creating deployments and services **********
NAME: onap-sdnc
LAST DEPLOYED: Thu Nov 23 20:13:32 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolume
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
onap-sdnc-db 2Gi RWX Retain Bound onap-sdnc/sdnc-db 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
sdnc-db Bound onap-sdnc-db 2Gi RWX 1s

==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbhost None <none> 3306/TCP 1s
sdnctldb01 None <none> 3306/TCP 1s
sdnctldb02 None <none> 3306/TCP 1s
sdnc-dgbuilder 10.43.97.219 <nodes> 3000:30203/TCP 1s
sdnhost 10.43.99.163 <nodes> 8282:30202/TCP,8201:30208/TCP 1s
sdnc-portal 10.43.72.72 <nodes> 8843:30201/TCP 1s

==> extensions/v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdnc-dgbuilder 1 1 1 0 1s

==> apps/v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
sdnc-dbhost 2 1 1s
sdnc 3 3 1s
sdnc-portal 2 2 1s

 


**** Done ****


Ensure that the SDN-C application has started

Use the kubectl get pods command to monitor the SDN-C startup; you should observe:

  • sdnc-dbhost-0 pod starts and enters Running STATUS first,
    • while
      • sdnc-dbhost-1 pod does not exist yet and
      • the sdnc, sdnc-dgbuilder and sdnc-portal pods stay in Init:0/1 STATUS
  • once sdnc-dbhost-0 pod is fully started with READY "1/1",
    • sdnc-dbhost-1 starts from ContainerCreating STATUS and runs up to Running STATUS
  • once sdnc-dbhost-1 pod is in Running STATUS,
    • the sdnc pods start from PodInitializing STATUS and end up in Running STATUS in parallel,
    • while
      • the sdnc-dgbuilder and sdnc-portal pods stay in Init:0/1 STATUS
  • once the sdnc pods are in Running STATUS,
    • sdnc-dgbuilder and sdnc-portal start from PodInitializing STATUS and end up in Running STATUS in parallel
Expand
titleExample of start up status changes through "kubectl get pods --all-namespaces -a -o wide"
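When watching the startup sequence above, a small filter can reduce the output to the two columns that matter. This is a sketch; it assumes the default "kubectl get pods -n onap-sdnc" column layout (NAME READY STATUS ...), and the function name is illustrative.

```shell
# Sketch: print only pod name and STATUS from "kubectl get pods -n onap-sdnc" output.
summarize_sdnc_status() {
  awk 'NR > 1 { print $1, $3 }'   # skip the header row; $1=NAME, $3=STATUS
}

# In practice: kubectl get pods -n onap-sdnc | summarize_sdnc_status
```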

2

Validate that all SDN-C pods and services are created properly

helm ls --all

Expand
titleExample of SDNC release

ubuntu@sdnc-k8s:~$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
onap-sdnc 1 Thu Nov 23 20:13:32 2017 DEPLOYED sdnc-0.1.0 onap
ubuntu@sdnc-k8s:~$

kubectl get namespace

Expand
titleExample of SDNC namespace

ubuntu@sdnc-k8s:~$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
onap-sdnc Active 12m
ubuntu@sdnc-k8s:~$

kubectl get deployment --all-namespaces

Expand
titleExample of SDNC deployment

ubuntu@sdnc-k8s-2:~$ kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system heapster 1 1 1 1 15d
kube-system kube-dns 1 1 1 1 15d
kube-system kubernetes-dashboard 1 1 1 1 15d
kube-system monitoring-grafana 1 1 1 1 15d
kube-system monitoring-influxdb 1 1 1 1 15d
kube-system tiller-deploy 1 1 1 1 15d
onap-sdnc sdnc-dgbuilder 1 1 1 0 26m
ubuntu@sdnc-k8s-2:~$

kubectl get clusterrolebinding --all-namespaces

Expand
titleExample of SDNC cluster role binding

ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces
NAMESPACE NAME AGE
addons-binding 15d
onap-sdnc-admin-binding 13m
ubuntu@sdnc-k8s:~$

kubectl get serviceaccounts --all-namespaces

Expand
titleExample of SDNC service account

ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
onap-sdnc default 1 14m
ubuntu@sdnc-k8s:~$

kubectl get service -n onap-sdnc

Expand
titleExample of all SDNC services

kubectl get pods --all-namespaces -a

Expand
titleExample of all SDNC pods

docker ps |grep sdnc

Expand
titleExample of SDNC docker container

On Server 1:

$ docker ps |grep sdnc |wc -l
9

$ docker ps |grep sdnc
ebcb2f7f1a4a docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0
55a82019ce10 nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0
bbdfdfc2b1f0 gcr.io/google-samples/xtrabackup@sha256:29354f70c9d9207e757a1bae6a4cbf2f57a56b18fe5c2b0acc1198a053b24b38 "bash -c 'set -ex\ncd " About an hour ago Up About an hour k8s_xtrabackup_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
26854595164d mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" About an hour ago Up About an hour k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
b577493b5725 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
14dcc0985259 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_1
b52be823997b quay.io/kubernetes_incubator/nfs-provisioner@sha256:b5328a3825032d7e1719015260260347bda99c5d830bcd5d9da1175e7d1da989 "/nfs-provisioner -pr" About an hour ago Up About an hour k8s_nfs-provisioner_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0
dc6dfd3fde3b gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0
006aaa34c5af mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 32 hours ago Up 32 hours k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_d1cc3b28-e59e-11e7-a01e-026c942e0e8c_0

On Server 2:

$ docker ps |grep sdnc|wc -l
20

$ docker ps |grep sdnc

5155b7fcd0b9 nexus3.onap.org:10001/onap/admportal-sdnc-image@sha256:9cfdfa8aac18da5571479e0c767b92dbc72a3a5b475be37bd84fb65400696564 "/bin/bash -c 'cd /op" About an hour ago Up About an hour k8s_sdnc-portal-container_sdnc-portal-1380828306-7xxdz_onap-sdnc_261d5989-e69e-11e7-a01e-026c942e0e8c_0
d3a58f2bb662 nexus3.onap.org:10001/onap/ccsdk-dgbuilder-image@sha256:c52ad4dacc00da4a882d31b7022e59e3dd4bd4ec104380910949bce4d2d0c7b9 "/bin/bash -c 'cd /op" About an hour ago Up About an hour k8s_sdnc-dgbuilder-container_sdnc-dgbuilder-3612718752-m189c_onap-sdnc_262301b1-e69e-11e7-a01e-026c942e0e8c_0
275e0173f109 nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-2_onap-sdnc_263d495f-e69e-11e7-a01e-026c942e0e8c_0
2566062bd408 docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-2_onap-sdnc_263d495f-e69e-11e7-a01e-026c942e0e8c_0
53df8341738c docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-0_onap-sdnc_2629ed07-e69e-11e7-a01e-026c942e0e8c_0
e147f48ffb5b nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-0_onap-sdnc_2629ed07-e69e-11e7-a01e-026c942e0e8c_0
c5f63561e196 gcr.io/google-samples/xtrabackup@sha256:29354f70c9d9207e757a1bae6a4cbf2f57a56b18fe5c2b0acc1198a053b24b38 "bash -c 'set -ex\ncd " About an hour ago Up About an hour k8s_xtrabackup_sdnc-dbhost-0_onap-sdnc_265aef90-e69e-11e7-a01e-026c942e0e8c_0
4a0cc594d12d mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" About an hour ago Up About an hour k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_265aef90-e69e-11e7-a01e-026c942e0e8c_0
6c02ac51ea56 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dbhost-0_onap-sdnc_265aef90-e69e-11e7-a01e-026c942e0e8c_0
8eafb15b5e7c gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-2_onap-sdnc_263d495f-e69e-11e7-a01e-026c942e0e8c_0
887ac4e733e8 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-0_onap-sdnc_2629ed07-e69e-11e7-a01e-026c942e0e8c_0
41044c2ed166 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dgbuilder-3612718752-m189c_onap-sdnc_262301b1-e69e-11e7-a01e-026c942e0e8c_0
6771b1319b20 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-portal-1380828306-7xxdz_onap-sdnc_261d5989-e69e-11e7-a01e-026c942e0e8c_0
fd57d6e9577b mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 2 hours ago Up 2 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_7c3c3068-e699-11e7-a01e-026c942e0e8c_0
0be895c2ccad mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 2 hours ago Up 2 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_ea765fb6-e696-11e7-a01e-026c942e0e8c_0
7ddaf2cf9806 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 4 hours ago Up 4 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_e04ab438-e689-11e7-a01e-026c942e0e8c_0
8ad307977830 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 25 hours ago Up 25 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_87db0033-e5d9-11e7-a01e-026c942e0e8c_0
979b3ff11974 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 25 hours ago Up 25 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_6c1b0064-e5d8-11e7-a01e-026c942e0e8c_0
b903e3e52f51 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 25 hours ago Up 25 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_744c98a4-e5d3-11e7-a01e-026c942e0e8c_0
36857a112463 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 32 hours ago Up 32 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_abe261cd-e59d-11e7-a01e-026c942e0e8c_0
$


3

Validate that the SDN-C bundles are up

Expand
titleEnter pod container with 2 options
Expand
titleOption 1: through pod name from anywhere

Use command

kubectl exec -it <POD_NAME_WITH_NAME_SPACE> bash

Expand
titleExample

Expand
titleOption 2: through docker container ID from where the container is

Use command

docker exec -it <DOCKER_CONTAINER_ID> bash

Expand
titleExample

Expand
titleCheck SDNC bundles in ODL client
Expand
titleEnter ODL client

Expand
titleCheck SDNC bundles

4

Validate that the SDN-C APIs are shown on the ODL RestConf page

Access the ODL RestConf page from the following URL:

http://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html

Expand
titleExample of SDNC APIs in ODL RestConf page

5

Validate the SDN-C ODL cluster

Goal:

Verify that the SDN-C ODL cluster is running properly

Prerequisites
  1. This test is on one of your Kubernetes nodes
  2. Make sure python-pycurl is installed
    • If not, on Ubuntu install it with "apt-get install python-pycurl"
Use the ODL integration tool to monitor the ODL cluster

Clone ODL Integration-test project

git clone https://github.com/opendaylight/integration-test.git

Enter the cluster-monitor folder

cd integration-test/tools/clustering/cluster-monitor

Create cluster-monitor.bash script

vi cluster-monitor.bash

Code Block
languagebash
titleContent of cluster-monitor.bash
linenumberstrue
collapsetrue
#!/bin/bash
########################################################################################
# This script wraps ODL monitor.py and dynamically picks up the clustered IPs of the   #
# sdnc pods, updating the IP addresses in the cluster.json file that feeds the ODL     #
# monitor.py.                                                                          #
# This script also changes the username and password in the cluster.json file.         #
#                                                                                      #
# If the sdnc pod IP addresses are re-assigned, the running session of this script     #
# should be restarted.                                                                 #
#                                                                                      #
# To run it, just enter the following command:                                         #
#    ./cluster-monitor.bash                                                            #
########################################################################################

# get IPs string by using kubectl
ips_string=$(kubectl get pods --all-namespaces -o wide | grep 'sdnc-[0-9]' | awk '{print $7}')
ip_list=($(echo ${ips_string} | tr ' ' '\n'))

# loop and replace existing IP
for ((i=0;i<=2;i++));
do
   if [ "${ip_list[$i]}" == "<none>" ]; then
     echo "Ip of deleted pod is not ready yet"
     exit 1;
   fi

   let "j=$i+4"
   sed -i -r "${j}s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/${ip_list[$i]}/" cluster.json
done

# replace port, username and password
sed -i  's/8181/8080/g' cluster.json
sed -i 's/username/admin/' cluster.json
sed -i 's/password/admin/' cluster.json

# start monitoring
python monitor.py

This script fetches the IPs of all SDN-C pods and automatically updates the cluster.json file.

Start the cluster monitor UI

./cluster-monitor.bash

Note:

If the applications inside any of these three SDNC pods are not fully started, this script will not run successfully, failing with issues such as connection errors or value errors.

Otherwise, you should see the monitoring UI like the following:

Use testCluster RPC to test SDN-C load sharing

The testCluster-bundle.zip provides a testBundle which offers a testCluster API to help validate SDN-C RPC load sharing in the deployed SDN-C cluster.

It is as easy as doing the following:

  1. Download testCluster-bundle.zip (by clicking the hyperlinked text) and place it in the sdnc-deploy hostPath defined in .spec.volumes of the sdnc-statefulset.yaml file.
  2. Unzip testCluster-bundle.zip. If the unzip command is not installed, install it with "sudo apt install unzip" and then run "unzip testCluster-bundle.zip".

    Expand
    titleHere's an example

    ubuntu@sdnc-k8s-1:~/cluster/deploy$ unzip testCluster-0.2.0.zip

    The program 'unzip' is currently not installed. You can install it by typing:
    sudo apt install unzip
    ubuntu@sdnc-k8s-1:~/cluster/deploy$ sudo apt install unzip
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Suggested packages:
    zip
    The following NEW packages will be installed:
    unzip
    0 upgraded, 1 newly installed, 0 to remove and 117 not upgraded.
    Need to get 158 kB of archives.
    After this operation, 530 kB of additional disk space will be used.
    Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 unzip amd64 6.0-20ubuntu1 [158 kB]
    Fetched 158 kB in 0s (230 kB/s)
    Selecting previously unselected package unzip.
    (Reading database ... 93492 files and directories currently installed.)
    Preparing to unpack .../unzip_6.0-20ubuntu1_amd64.deb ...
    Unpacking unzip (6.0-20ubuntu1) ...
    Processing triggers for mime-support (3.59ubuntu1) ...
    Processing triggers for man-db (2.7.5-1) ...
    Setting up unzip (6.0-20ubuntu1) ...
    ubuntu@sdnc-k8s-1:~/cluster/deploy$ unzip testCluster-0.2.0.zip
    Archive: testCluster-0.2.0.zip
    inflating: testDataBroker-api-0.2.0-SNAPSHOT.jar
    inflating: testDataBroker-features-0.2.0-SNAPSHOT.jar
    inflating: testDataBroker-impl-0.2.0-SNAPSHOT.jar
    ubuntu@sdnc-k8s-1:~/cluster/deploy$

  3. As this hostPath is mounted as ODL's deploy directory, once the zip file is unzipped, the testBundle is automatically loaded by ODL and the testCluster API becomes available as an ODL RestConf API.
    • testCluster API is accessible from
      • Expand
        titleODL RestConf API Documentation

      • Expand
        titlePostman

        Code Block
        languageactionscript3
        titleExample of postman code snippets
        linenumberstrue
        collapsetrue
        POST /restconf/operations/testCluster:who-am-i HTTP/1.1
        Host: ${KUBERNETES MASTER VM IP}:30202
        Accept: application/json
        Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==
        Cache-Control: no-cache
        Postman-Token: 9683538b-de47-dec8-3e88-c491be9dd6ef
        
        
        
        
      • Expand
        titleCurl command
        curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U -H 'Accept: application/json' -X POST 'http://${KUBERNETES MASTER VM IP}:30202/restconf/operations/testCluster:who-am-i'
    • Expand
      titleAn example of testCluster API response
      {
      	"output": {
      		"node": "sdnc-2"
      	}
      }
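To verify load sharing, the who-am-i call can be repeated and the responding node tallied. The helper below only parses the JSON response format shown above; the curl loop in the comment is illustrative, with the master IP and credentials left as placeholders you must substitute.

```shell
# Sketch: pull the "node" value out of a testCluster:who-am-i JSON response.
extract_node() {
  tr -d '\n' | sed -n 's/.*"node"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# In practice, something like (placeholders: master IP and password):
#   for i in 1 2 3 4 5 6; do
#     curl -s -u admin:PASSWORD -X POST \
#       "http://${KUBERNETES_MASTER_VM_IP}:30202/restconf/operations/testCluster:who-am-i" | extract_node
#   done | sort | uniq -c
```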

...

# | Purpose | Command and Examples
0
  • Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step.)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1
  • Run the deleteAll script to delete all SDN-C pods and services

./deleteAll.bash -n onap -a sdnc

Expand
titleExample of output

********** Cleaning up ONAP:
release "onap-sdnc" deleted
namespace "onap-sdnc" deleted
clusterrolebinding "onap-sdnc-admin-binding" deleted
Service account onap-sdnc-admin-binding deleted.

Waiting for namespaces termination...

********** Gone **********

2
  • Validate that all SDN-C pods and services are cleaned up

docker ps |grep sdnc

Expand
titleExample of no more SDNC docker container

ubuntu@sdnc-k8s:~$ docker ps |grep sdnc
ubuntu@sdnc-k8s:~$

kubectl get pods --all-namespaces -a

Expand
titleExample of no more SDNC pods

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-4285517626-32km8 1/1 Running 0 15d
kube-system kube-dns-638003847-vqz8t 3/3 Running 0 15d
kube-system kubernetes-dashboard-716739405-tnxj6 1/1 Running 0 15d
kube-system monitoring-grafana-2360823841-qfhzm 1/1 Running 0 15d
kube-system monitoring-influxdb-2323019309-41q0l 1/1 Running 0 15d
kube-system tiller-deploy-737598192-5663c 1/1 Running 0 15d
onap config 0/1 Completed 0 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$

kubectl get service --all-namespaces

Expand
titleExample of no more SDNC services

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 15d
kube-system heapster ClusterIP 10.43.210.11 <none> 80/TCP 15d
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 15d
kube-system kubernetes-dashboard ClusterIP 10.43.196.205 <none> 9090/TCP 15d
kube-system monitoring-grafana ClusterIP 10.43.90.8 <none> 80/TCP 15d
kube-system monitoring-influxdb ClusterIP 10.43.52.1 <none> 8086/TCP 15d
kube-system tiller-deploy ClusterIP 10.43.106.73 <none> 44134/TCP 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$

kubectl get serviceaccounts --all-namespaces

Expand
titleExample of no more SDNC service account

ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
ubuntu@sdnc-k8s:~$

kubectl get clusterrolebinding --all-namespaces

Expand
titleExample of no more SDNC cluster role binding

ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces
NAMESPACE NAME AGE
addons-binding 15d
ubuntu@sdnc-k8s:~$

kubectl get deployment --all-namespaces

Expand
titleExample of no more SDNC deployment

ubuntu@sdnc-k8s:~$ kubectl get deployment --all-namespaces

NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE

kube-system   heapster               1         1         1            1           2d

kube-system   kube-dns               1         1         1            1           2d

kube-system   kubernetes-dashboard   1         1         1            1           2d

kube-system   monitoring-grafana     1         1         1            1           2d

kube-system   monitoring-influxdb    1         1         1            1           2d

kube-system   tiller-deploy          1         1         1            1           2d

ubuntu@sdnc-k8s:~$

kubectl get namespaces

Expand
titleExample of no more SDNC namespace

ubuntu@sdnc-k8s:~$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
ubuntu@sdnc-k8s:~$

helm ls --all

Expand
titleExample of no more SDNC release

ubuntu@sdnc-k8s:~$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
ubuntu@sdnc-k8s:~$


Remove the ONAP Config

...

# | Purpose | Command and Examples
0
  • Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step.)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1
  • Remove the ONAP config and any deployed applications in one shot

./deleteAll.bash -n onap

Expand
titleExample of removing ONAP config output

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ ./deleteAll.bash -n onap

 

********** Cleaning up ONAP:

Error: release: not found

Error from server (NotFound): namespaces "onap-consul" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-consul-admin-binding" not found

Service account onap-consul-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-msb" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-msb-admin-binding" not found

Service account onap-msb-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-mso" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-mso-admin-binding" not found

Service account onap-mso-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-message-router" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-message-router-admin-binding" not found

Service account onap-message-router-admin-binding deleted.

 

 

release "onap-sdnc" deleted

namespace "onap-sdnc" deleted

clusterrolebinding "onap-sdnc-admin-binding" deleted

Service account onap-sdnc-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-vid" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vid-admin-binding" not found

Service account onap-vid-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-robot" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-robot-admin-binding" not found

Service account onap-robot-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-portal" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-portal-admin-binding" not found

Service account onap-portal-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-policy" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-policy-admin-binding" not found

Service account onap-policy-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-appc" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-appc-admin-binding" not found

Service account onap-appc-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-aai" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-aai-admin-binding" not found

Service account onap-aai-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-sdc" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-sdc-admin-binding" not found

Service account onap-sdc-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-dcaegen2" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-dcaegen2-admin-binding" not found

Service account onap-dcaegen2-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-log" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-log-admin-binding" not found

Service account onap-log-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-cli" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-cli-admin-binding" not found

Service account onap-cli-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-multicloud" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-multicloud-admin-binding" not found

Service account onap-multicloud-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-clamp" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-clamp-admin-binding" not found

Service account onap-clamp-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-vnfsdk" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vnfsdk-admin-binding" not found

Service account onap-vnfsdk-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-uui" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-uui-admin-binding" not found

Service account onap-uui-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-aaf" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-aaf-admin-binding" not found

Service account onap-aaf-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-vfc" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vfc-admin-binding" not found

Service account onap-vfc-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-kube2msb" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-kube2msb-admin-binding" not found

Service account onap-kube2msb-admin-binding deleted.

 

Waiting for namespaces termination...

 

********** Gone **********

2
  • Manually clean up

This step cleans up leftover items that were created by the config/createConfig script but were not cleaned up by the oneclick/deleteAll script.

Expand
titleExample of left over ONAP config

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ ./createConfig.sh -n onap

**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
Error: a release named "onap-config" already exists.
Please run: helm ls --all "onap-config"; helm del --help
**** Done ****
ubuntu@sdnc-k8s:

ONAP serviceaccount

No action needed.

It cannot be deleted by a specific command; instead, it is automatically deleted when the namespace is deleted.

Expand
titleExample of service account can not be deleted

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl delete serviceaccounts default -n onap
serviceaccount "default" deleted
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 6s
ubuntu@sdnc-k8s:

... after ONAP namespace is deleted...

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$

ONAP namespace 
Expand
titleExample of deleting ONAP name space

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl delete namespace onap
namespace "onap" deleted
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Terminating 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$

ONAP config release
Expand
titleExample of deleting ONAP config release

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ helm delete onap-config --purge
release "onap-config" deleted
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ helm ls --all
ubuntu@sdnc-k8s:~/oom/kubernetes/config$

3
  • Delete the shared folder

 sudo rm -rf /dockerdata-nfs/onap

...