4. Deploy/Undeploy the SDN-C Cluster
Steps described in this page are run by "ubuntu", a non-root user.
- 1 Clone the OOM project from ONAP gerrit
- 2 SDN-C Cluster Deployment
- 2.1 Configure SDN-C Cluster Deployment
- 2.2 Create the ONAP Config
- 2.2.1 Setup onap-parameters.yaml file
- 2.2.2 Run createConfig
- 2.3 To simplify steps in this section
- 2.4 Deploy the SDN-C Application
- 2.5 To simplify steps in this section
- 2.5.1 Set the OOM Kubernetes config environment
- 2.5.2 Run the createAll script to deploy the SDN-C application
- 2.5.3 Validate that all SDN-C pods and services are created properly
- 2.5.4 Validate that the SDN-C bundles are up
- 2.5.5 Validate that the SDN-C APIs are shown on the ODL RestConf page
- 2.5.6 Validate the SDN-C ODL cluster
- 3 Undeploy the SDN-C Application
- 4 Remove the ONAP Config
- 5 Scripts
Clone the OOM project from ONAP gerrit
Run the following command on the master node to clone the OOM project from ONAP gerrit into any directory you prefer; that directory is referred to as "{$OOM}" in this page.
git clone http://gerrit.onap.org/r/oom
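The clone step can be captured in a short script. The directory layout below is just one choice; the OOM variable mirrors the {$OOM} placeholder used throughout this page.

```shell
# Sketch: clone the OOM project and remember its location.
# "oom" is the default checkout directory name created by git clone;
# any parent directory you prefer works.
git clone http://gerrit.onap.org/r/oom
export OOM="$(pwd)/oom"
echo "OOM project is at ${OOM}"
```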
SDN-C Cluster Deployment
Configure SDN-C Cluster Deployment
We use Kubernetes replicas to achieve the SDN-C cluster deployment (see About SDN-C Clustering for details of the desired goal).
This only needs to be done once and, at the moment, all modifications are done manually (they can be automated via scripting in the future if the need arises).
Get New startODL.sh Script From Gerrit Topic SDNC-163
The source of the new startODL.sh script, gerrit change 25475, has been merged into sdnc/oam project on December 15th, 2017.
Do the following to get the new startODL.sh script which provides the configuration of ODL clustering for SDN-C cluster.
# | Purpose | Command Examples |
|---|---|---|
1 | Get the shared new startODL.sh script content | Go to gerrit change 25475 and click installation/sdnc/src/main/scripts/startODL.sh under the Files section to view the text of the script. Click the Download button to download the startODL_new.sh.zip file, extract the .sh file inside the zip file, and rename it to "startODL.sh". |
2 | Create new startODL.sh on the Kubernetes node VM | mkdir -p /dockerdata-nfs/cluster/script vi /dockerdata-nfs/cluster/script/startODL.sh and paste the content copied in step 1 into this file |
3 | Give execution permission to the new startODL.sh script | chmod 777 /dockerdata-nfs/cluster/script/startODL.sh |
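Taken together, steps 1-3 amount to the following sketch, assuming the script from gerrit change 25475 has already been downloaded and renamed to startODL.sh in the current directory:

```shell
# Sketch: place the new startODL.sh in the shared cluster script directory.
# Assumes ./startODL.sh was already downloaded from gerrit change 25475 (step 1).
mkdir -p /dockerdata-nfs/cluster/script
cp ./startODL.sh /dockerdata-nfs/cluster/script/startODL.sh
chmod 777 /dockerdata-nfs/cluster/script/startODL.sh
```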
Get SDN-C Cluster Templates From Gerrit Topic SDNC-163
The source of the templates, gerrit change 25467, has been merged into sdnc/oam project on December 20th, 2017.
Skip steps 1 and 2 if your cloned OOM project already includes this change.
Skip step 3 if you skipped the previous section (adding the startODL.sh script).
Skip step 4 if you don't want to add/deploy extra features/bundles/packages.
Step 5 is important: it determines the number of sdnc and db pods.
# | Purpose | Command and Examples |
|---|---|---|
1 | Get the git fetch command for the shared templates | Go to gerrit change 25467, click the Download downward arrow, select anonymous http from the drop-down list at the bottom right, then click the clipboard icon on the Checkout line to copy the git commands (the git fetch and checkout commands) to the clipboard. git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/23 && git checkout FETCH_HEAD |
2 | Fetch the shared template to the oom directory on the Kubernetes node VM | cd {$OOM} Execute the git command from step 1. |
3 | Link the new startODL.sh | Skip this change if you have skipped the get new startODL.sh script section Be careful with editing YAML files. They are sensitive to number of spaces. Be careful with copy/paste from browser. vi kubernetes/sdnc/templates/sdnc-statefulset.yaml Make the following changes: |
4 | Link the ODL deploy directory | If you are not going to use the test bundle to test out SDN-C cluster and load balancing, you can skip this step. ODL automatically installs bundles/packages placed under its deploy directory; this mount point lets you drop a bundle/package on the Kubernetes node at /dockerdata-nfs/cluster/deploy and have it automatically installed in the sdnc pods (under /opt/opendaylight/current/deploy). vi kubernetes/sdnc/templates/sdnc-statefulset.yaml Make the following changes: |
5 | Enable cluster configuration | vi kubernetes/sdnc/values.yaml Change the following fields with the new value: |
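Steps 1 and 2 above reduce to the following sketch; the template edits in steps 3-5 still have to be made by hand as described in the table.

```shell
# Sketch: fetch the SDN-C cluster templates (gerrit change 25467)
# into the cloned OOM tree. ${OOM} is the clone directory from this page.
cd ${OOM}
git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/23 && git checkout FETCH_HEAD
# Then apply the manual edits from steps 3-5:
#   vi kubernetes/sdnc/templates/sdnc-statefulset.yaml  # link startODL.sh and deploy dir
#   vi kubernetes/sdnc/values.yaml                      # enable cluster configuration
```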
Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs
Skip the following section if any of the following conditions matches:
you have skipped 2. Share the /dockerdata-nfs Folder between Kubernetes Nodes
Kubernetes Master and workers are located on the same VM
Kubernetes Federation (on multiple VMs verified)
Verify (from Master node)
# | Purpose | Command and Example |
|---|---|---|
1 | Find the node name | Run "ps -ef | grep nfs" on each node to identify the node where the NFS server runs, then list the node names: kubectl get node ubuntu@sdnc-k8s:~$ kubectl get node |
2 | Set label on the node | kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd ubuntu@sdnc-k8s:~$ kubectl label nodes sdnc-k8s disktype=ssd node "sdnc-k8s" labeled ubuntu@sdnc-k8s:~$ |
3 | Check the label has been set on the node | kubectl get node --show-labels ubuntu@sdnc-k8s:~$ kubectl get node --show-labels |
4 | Update the nfs-provisioner pod template to force it to run on the NFS server node | In the nfs-provisoner-deployment.yaml file, add "spec.template.spec.nodeSelector" for the "nfs-provisioner" pod |
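The four steps above can be sketched as follows; sdnc-k8s is the example node name from the table and must be replaced by the node that "ps -ef | grep nfs" identified on your setup. The nodeSelector fragment is shown as a comment because it is a manual YAML edit.

```shell
# Sketch: pin the nfs-provisioner pod to the node running the NFS server.
# Replace sdnc-k8s with your own node name from "kubectl get node".
kubectl label nodes sdnc-k8s disktype=ssd
kubectl get node --show-labels   # verify the disktype=ssd label is present
# Finally, in nfs-provisoner-deployment.yaml add, under spec.template.spec:
#   nodeSelector:
#     disktype: ssd
```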
Create the ONAP Config
Setup onap-parameters.yaml file
The following commands must be run on the master node before creating the ONAP configuration.
cd {$OOM}/kubernetes/config
cp onap-parameters-sample.yaml onap-parameters.yaml
Run createConfig
To simplify steps in this section
You can skip the steps in this section by following the instructions in autoCreateOnapConfig of the Scripts section to
create {$OOM}/kubernetes/oneclick/tools/autoCreateOnapConfig.bash file
run it and wait until the script completes
# | Purpose | Command and Examples |
|---|---|---|
0 | Set the OOM Kubernetes config environment | cd {$OOM}/kubernetes/oneclick source setenv.bash |
1 | Run the createConfig script to create the ONAP config | cd {$OOM}/kubernetes/config ./createConfig.sh -n onap **** Creating configuration for ONAP instance: onap namespace "onap" created NAME: onap-config LAST DEPLOYED: Wed Nov 8 20:47:35 2017 NAMESPACE: onap STATUS: DEPLOYED
RESOURCES: ==> v1/ConfigMap NAME DATA AGE global-onap-configmap 15 0s
==> v1/Pod NAME READY STATUS RESTARTS AGE config 0/1 ContainerCreating 0 0s
**** Done **** |
2 | Wait for the config-init container to finish | Use the following command to monitor the onap config init until it reaches the Completed STATUS:
The final output should look like the following, with the onap config pod in Completed STATUS: |
Additional checks for config-init |
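To avoid eyeballing the output in step 2, the wait can be scripted. The helper below is an illustrative sketch, not part of OOM (the count_not_ready name is ours); it counts pods whose STATUS column is neither Running nor Completed in kubectl get pods output.

```shell
# Hypothetical helper: read "kubectl get pods" output on stdin and print
# how many pods are in neither Running nor Completed STATUS.
count_not_ready() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { n++ } END { print n + 0 }'
}

# Example polling loop (requires a live cluster):
#   while [ "$(kubectl get pods -n onap -a | count_not_ready)" -ne 0 ]; do
#     sleep 5
#   done
```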
Deploy the SDN-C Application
To simplify steps in this section
You can skip the steps in this section by following the instructions in autoDeploySdnc of the Scripts section to
create {$OOM}/kubernetes/oneclick/tools/autoDeploySdnc.bash file
run it and wait until the script completes
Execute the following on the master node.
# | Purpose | Command and Examples |
|---|---|---|
0 | Set the OOM Kubernetes config environment (skip this step if you have already set the OOM Kubernetes config environment in the same terminal) | cd {$OOM}/kubernetes/oneclick source setenv.bash |
1 | Run the createAll script to deploy the SDN-C application | cd {$OOM}/kubernetes/oneclick ./createAll.bash -n onap -a sdnc ********** Creating instance 1 of ONAP with port range 30200 and 30399 ********** Creating ONAP:
Creating namespace ********** Creating service account ********** Creating registry secret ********** Creating deployments and services ********** RESOURCES: ==> v1/PersistentVolumeClaim ==> v1/Service ==> extensions/v1beta1/Deployment ==> apps/v1beta1/StatefulSet
|
Ensure that the SDN-C application has started | Use the kubectl get pods command to monitor the SDN-C startup; you should observe: |
2 | Validate that all SDN-C pods and services are created properly | helm ls --all ubuntu@sdnc-k8s:~$ helm ls --all kubectl get namespace ubuntu@sdnc-k8s:~$ kubectl get namespaces kubectl get deployment --all-namespaces ubuntu@sdnc-k8s-2:~$ kubectl get deployment --all-namespaces kubectl get clusterrolebinding --all-namespaces ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces kubectl get serviceaccounts --all-namespaces ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces kubectl get service -n onap-sdnc kubectl get pods --all-namespaces -a docker ps |grep sdnc On Server 1: $ docker ps |grep sdnc |wc -l $ docker ps |grep sdnc On Server 2: $ docker ps |grep sdnc|wc -l $ docker ps |grep sdnc |
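The deployment and the spot checks from the table above can be condensed into one sketch; all commands are taken from the table and assume a working cluster and the {$OOM} checkout.

```shell
# Sketch: deploy SDN-C and validate the result (run on the master node).
cd ${OOM}/kubernetes/oneclick
source setenv.bash
./createAll.bash -n onap -a sdnc
# Watch the pods come up in the onap-sdnc namespace (Ctrl-C to stop):
kubectl get pods -n onap-sdnc -o wide -w
# Spot checks from the validation table:
helm ls --all
kubectl get service -n onap-sdnc
docker ps | grep sdnc | wc -l   # on each worker: count running sdnc containers
```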