SDNC Cluster Deployment
Configure SDNC Cluster Deployment
We use Kubernetes replicas to achieve the SDN-C cluster deployment (see About SDN-C Clustering for the desired goal).
This only needs to be done once and, at the moment, all modifications are made manually (they can be automated via scripting in the future if the need arises).
Under the oom project, check out the patch:
git fetch https://haok@gerrit.onap.org/r/a/oom refs/changes/67/25467/11 && git checkout FETCH_HEAD
or you can follow the change details to update the YAML files manually.
Change details
Edit SDNC Templates
The following SDN-C deployment templates need to be modified for an SDN-C cluster deployment.
Under directory {$OOM}/kubernetes/sdnc:
| # | File name | Changed/Added fields and values |
|---|---|---|
| 1 | values.yaml | Defined a new variable `mysql: mysql:5.6` |
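The values.yaml change is small enough to show inline; the comment is an assumption about how the variable is consumed:

```yaml
# New variable in {$OOM}/kubernetes/sdnc/values.yaml
# (presumably referenced by db-deployment.yaml to pin the MySQL image)
mysql: mysql:5.6
```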
Under directory {$OOM}/kubernetes/sdnc/templates:
| # | File name | Changed/Added fields and values |
|---|---|---|
| 0 | sdnc-configmap.yaml (create this file) | New ConfigMap carrying the MySQL master/slave configuration (content below) |
| 1 | db-deployment.yaml | See the change details |
| 2 | sdnc-deployment.yaml | See the change details |
| 3 | web-deployment.yaml | See the change details |
| 4 | sdnc-pv-pvc.yaml | Changed from static volume mounts (PV and PVCs) to dynamic volume mounts |
| 5 | all-services.yaml | Added a new headless service sdnhostcluster and exposed one more port on the existing sdnhost service (both shown below) |

Content of the new sdnc-configmap.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  namespace: "{{ .Values.nsPrefix }}-sdnc"
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
    [localpathprefix]
    master
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
    [localpathprefix]
    slave
```

New headless service added to all-services.yaml:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: sdnhostcluster
  namespace: onap-sdnc
  labels:
    app: sdnc
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
    - name: "sdnc-cluster-port"
      port: 2550
  clusterIP: None
  selector:
    app: sdnc
  sessionAffinity: None
  type: ClusterIP
```

Find the sdnhost service and expose one more port under .spec.ports:

```yaml
- name: "sdnc-jolokia-port-8080"
  port: 9090
  targetPort: 8080
  nodePort: {{ .Values.nodePortPrefix }}00
```
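Once the templates are in place, the headless service can be sanity-checked; it should report CLUSTER-IP "None" and port 2550:

```bash
kubectl get service sdnhostcluster -n onap-sdnc
```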
Notes (a sketch of the affected StatefulSet fields follows this list):
- Use .apiVersion "apps/v1beta1" for Kubernetes versions before 1.8.0; otherwise, use .apiVersion "apps/v1beta2".
  - Check the Kubernetes version with the command "kubectl version".
- The StatefulSet's .spec.serviceName value must align with the associated service name in the all-services.yaml file under the same directory.
- By default, .spec.podManagementPolicy has the value "OrderedReady".
  - With "OrderedReady", the StatefulSet controller respects the ordering guarantees: it waits for a Pod to become Running and Ready, or to terminate completely, before launching or terminating another Pod.
  - With "Parallel", the StatefulSet controller launches or terminates all Pods in parallel, without waiting for Pods to become Running and Ready or to terminate completely before launching or terminating another Pod.
- Expose 2 new ports:
  - SDNC cluster (2550)
  - Jolokia (8080)
- Since startODL.sh has to be changed to enable clustering, two paths must be mounted:
  - Mount /home/ubuntu/cluster/script/cluster-startODL.sh (local) over /opt/onap/sdnc/bin/startODL.sh (docker), so that our locally updated script with the cluster config is used.
  - Mount /home/ubuntu/cluster/deploy (local) to /opt/opendaylight/current/deploy (docker), so that test bundles can be deployed dynamically from outside the pods.
- The newly added headless service lets the SDNC pods in the SDN-C cluster find each other directly via fixed FQDNs.
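For orientation, the following is a minimal sketch of how these notes map onto sdnc-deployment.yaml, assuming it becomes a StatefulSet named sdnc with 3 replicas; everything beyond the fields called out in the notes is an illustrative assumption, not the exact patch content:

```yaml
# Sketch only -- see the change details for the real diff.
apiVersion: apps/v1beta2            # use apps/v1beta1 before Kubernetes 1.8.0
kind: StatefulSet
metadata:
  name: sdnc
  namespace: "{{ .Values.nsPrefix }}-sdnc"
spec:
  serviceName: sdnhostcluster       # must match the headless service name
  replicas: 3                       # assumption: a 3-member ODL cluster
  podManagementPolicy: Parallel     # default is OrderedReady
  selector:
    matchLabels:
      app: sdnc
  template:
    metadata:
      labels:
        app: sdnc
    spec:
      containers:
        - name: sdnc-controller-container
          # image, env, etc. omitted
          ports:
            - containerPort: 2550   # SDN-C cluster port
            - containerPort: 8080   # Jolokia
          volumeMounts:
            - name: sdnc-startodl
              mountPath: /opt/onap/sdnc/bin/startODL.sh
            - name: sdnc-deploy
              mountPath: /opt/opendaylight/current/deploy
      volumes:
        - name: sdnc-startodl
          hostPath:
            path: /home/ubuntu/cluster/script/cluster-startODL.sh
        - name: sdnc-deploy
          hostPath:
            path: /home/ubuntu/cluster/deploy
```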
Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs
On the node where you have configured the NFS server, run the following:
| # | Purpose | Command and Example |
|---|---|---|
| 1 | Find the node name | `kubectl get node` |
| 2 | Set a label on the node | `kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd` |
| 3 | Check that the label has been set on the node | `kubectl get node --show-labels` |
| 4 | Update the nfs-provisioner pod template to force it to run on the NFS server node | In the sdnc-pv-pvc.yaml file, add "spec.template.spec.nodeSelector" to the "nfs-provisioner" pod (see the snippet below) |
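A minimal sketch of the nodeSelector addition, assuming the disktype=ssd label applied in step 2:

```yaml
# In sdnc-pv-pvc.yaml, under the nfs-provisioner pod template
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd   # matches the label from step 2
```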
Create New Script: cluster-startODL.sh
This is manual for now; it can be automated when we automate the SDN-C cluster deployment.
Create cluster-startODL.sh under /home/ubuntu/cluster/script/ (see the sketch below).
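The script content itself is not reproduced in this guide; the following is only a sketch of what such a script needs to do, assuming a 3-member StatefulSet named sdnc, the sdnhostcluster headless service from the templates above, and ODL's stock bin/configure_cluster.sh helper (all of these names are assumptions):

```bash
#!/bin/bash
# Hypothetical sketch of /home/ubuntu/cluster/script/cluster-startODL.sh:
# configure ODL clustering first, then run the original startODL steps.

ODL_HOME=${ODL_HOME:-/opt/opendaylight/current}

# StatefulSet pods are named sdnc-0, sdnc-1, sdnc-2;
# configure_cluster.sh expects a 1-based member index.
ORDINAL=${HOSTNAME##*-}
NODE_INDEX=$((ORDINAL + 1))

# Fixed FQDNs are provided by the sdnhostcluster headless service.
DOMAIN="sdnhostcluster.onap-sdnc.svc.cluster.local"

# Generate akka.conf / module-shards.conf for this cluster member.
"${ODL_HOME}/bin/configure_cluster.sh" "${NODE_INDEX}" \
  "sdnc-0.${DOMAIN}" "sdnc-1.${DOMAIN}" "sdnc-2.${DOMAIN}"

# ... then continue with the original startODL.sh startup logic.
```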
Create the ONAP Config
| # | Purpose | Command and Examples |
|---|---|---|
| 0.1 | (Only once) Create the ONAP config from the sample YAML file | `cd {$OOM}/kubernetes/config`<br>`cp onap-parameters-sample.yaml onap-parameters.yaml` |
| 0 | Set the OOM Kubernetes config environment | `cd {$OOM}/kubernetes/oneclick`<br>`source setenv.bash` |
| 1 | Run the createConfig script to create the ONAP config | `cd {$OOM}/kubernetes/config`<br>`./createConfig.sh -n onap` |
| 2 | Wait for the config-init container to finish | Monitor the ONAP config init with `kubectl get pods --all-namespaces -a` until it reaches the Completed status (see the loop below) |
| | Additional checks for config-init | |
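Rather than re-running the command by hand, you can poll with a simple loop like the following (assumption: the config pod's name contains "config"):

```bash
# Poll every 5 seconds until the ONAP config-init pod reports Completed
while ! kubectl get pods --all-namespaces -a | grep config | grep -q Completed; do
  sleep 5
done
echo "ONAP config-init completed"
```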
Deploy the SDN-C Application
| # | Purpose | Command and Examples |
|---|---|---|
| 0 | Set the OOM Kubernetes config environment (skip this step if you have already set it in the same terminal) | `cd {$OOM}/kubernetes/oneclick`<br>`source setenv.bash` |
| 1 | Run the createAll script to deploy the SDN-C application | `cd {$OOM}/kubernetes/oneclick`<br>`./createAll.bash -n onap -a sdnc` |
| | Ensure that the SDN-C application has started | Use `kubectl get pods --all-namespaces -a` to monitor the SDN-C startup until all SDN-C pods are Running |
| 2 | Validate that all SDN-C pods and services are created properly | `helm ls --all`<br>`kubectl get namespaces`<br>`kubectl get deployment --all-namespaces`<br>`kubectl get clusterrolebinding --all-namespaces`<br>`kubectl get serviceaccounts --all-namespaces`<br>`kubectl get service --all-namespaces`<br>`kubectl get pods --all-namespaces -a`<br>`docker ps \| grep sdnc` |
| 3 | Validate that the SDN-C bundles are up | Check the ODL bundles inside an SDN-C pod (see the examples below) |
| 4 | Validate that the SDN-C APIs are shown on the ODL RESTCONF page | Access the ODL RESTCONF page in a browser |
| 5 | Validate the SDN-C ODL cluster | Goal: verify that the SDN-C ODL cluster is running properly. Either use the ODL integration tool to monitor the ODL cluster, or use the testCluster RPC to test SDN-C load sharing: testCluster-bundle.zip provides a testBundle that offers a testCluster API for validating SDN-C RPC load sharing in the deployed SDN-C cluster (see the examples below) |
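For row 3, a hedged way to check the bundles is through the Karaf client inside a pod (pod name and namespace are assumptions):

```bash
# List the ODL bundles inside the first SDN-C pod
kubectl exec -it sdnc-0 -n onap-sdnc -- /opt/opendaylight/current/bin/client "bundle:list"
```

For row 5's second option, the test bundle can be hot-deployed through the /home/ubuntu/cluster/deploy mount described in the notes above, and the RPC then invoked over RESTCONF. The jar name, RPC path, port, and credentials below are illustrative assumptions, not values from this guide:

```bash
# Hot-deploy the test bundle: files copied into the mounted deploy directory
# appear in /opt/opendaylight/current/deploy inside the SDN-C pods.
unzip testCluster-bundle.zip
cp testCluster-*.jar /home/ubuntu/cluster/deploy/

# Invoke the testCluster RPC repeatedly and observe which cluster member
# answers each call (hypothetical RPC path; default ODL credentials).
for i in $(seq 1 10); do
  curl -s -u admin:admin -X POST \
    -H "Content-Type: application/json" \
    "http://<any-cluster-node>:<sdnc-restconf-nodeport>/restconf/operations/testCluster:who-am-i"
  echo
done
```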
Undeploy the SDN-C Application
| # | Purpose | Command and Examples |
|---|---|---|
| 0 | Set the OOM Kubernetes config environment (skip this step if you have already set it in the same terminal) | `cd {$OOM}/kubernetes/oneclick`<br>`source setenv.bash` |
| 1 | Run the deleteAll script to delete all SDN-C pods and services | `./deleteAll.bash -n onap -a sdnc` |
| 2 | Validate that all SDN-C pods and services are cleaned up (a polling example follows the table) | `docker ps \| grep sdnc`<br>`kubectl get pods --all-namespaces -a`<br>`kubectl get service --all-namespaces`<br>`kubectl get serviceaccounts --all-namespaces`<br>`kubectl get clusterrolebinding --all-namespaces`<br>`kubectl get deployment --all-namespaces`<br>`kubectl get namespaces`<br>`helm ls --all` |
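Deletion is asynchronous, so the checks above may show terminating resources for a while. A simple hedged way to wait until everything SDN-C related is gone:

```bash
# Poll until no sdnc pods remain anywhere in the cluster
while kubectl get pods --all-namespaces -a | grep -q sdnc; do
  sleep 5
done
echo "All SDN-C pods are gone"
```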
Remove the ONAP Config
| # | Purpose | Command and Examples |
|---|---|---|
| 0 | Set the OOM Kubernetes config environment (skip this step if you have already set it in the same terminal) | `cd {$OOM}/kubernetes/oneclick`<br>`source setenv.bash` |
| 1 | Remove the ONAP config and any deployed applications in one shot | `./deleteAll.bash -n onap` |
| 2 | Manually clean up. This step cleans up the leftover items that were created by the config/createConfig script but are not cleaned up by the oneclick/deleteAll script (see the example below) | |
| 3 | Delete the shared folder | `sudo rm -rf /dockerdata-nfs/onap` |
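A hedged example of what the manual cleanup in step 2 can look like; the onap-config release name is an assumption derived from the onap namespace used throughout this guide:

```bash
# Remove the Helm release left behind by config/createConfig (Helm 2 syntax)
helm delete onap-config --purge

# Remove the onap namespace if it is still lingering
kubectl delete namespace onap
```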
Scripts
The following scripts help simplify various procedures by automating them.