SDNC Cluster Deployment
Configure SDNC Cluster Deployment
We use kubernetes replicas to achieve the SDNC cluster deployment (see 4. Deploy/Un-deploy SDN-C Cluster for the desired goal).
This only needs to be done once, and at the moment all modifications are done manually. (They can be automated via scripting in the future when the need comes up.)
Edit SDNC templates
The following is the list of SDNC deployment templates that need to be modified for SDNC cluster deployment.
| # | Template file under {$OOM}/kubernetes/sdnc/templates | Changed/Added fields and values |
|---|---|---|
| 1 | db-deployment.yaml | |
| 2 | sdnc-deployment.yaml | |
| 3 | web-deployment.yaml | |
| 4 | all-services.yaml | Add a new headless service and expose one more port on the sdnhost service (see below) |

Add the following headless service:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: sdnhostcluster
  namespace: onap-sdnc
  labels:
    app: sdnc
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - name: "sdnc-cluster-port"
    port: 2550
  clusterIP: None
  selector:
    app: sdnc
  sessionAffinity: None
  type: ClusterIP
```

Find the sdnhost service and expose one more port under .spec.ports:

```yaml
  - name: "sdnc-jolokia-port-8080"
    port: 9090
    targetPort: 8080
    nodePort: {{ .Values.nodePortPrefix }}00
```
Notes:
- Use .apiVersion "apps/v1beta1" for kubernetes versions before 1.8.0; otherwise, use .apiVersion "apps/v1beta2".
  Check the kubernetes version with the command "kubectl version".
- The value must align with the associated service name in the all-services.yaml file under the same directory.
- By default, .spec.podManagementPolicy has the value "OrderedReady".
  - With the value "OrderedReady", the StatefulSet controller respects the ordering guarantees: it waits for a Pod to become Running and Ready, or to be completely terminated, before launching or terminating another Pod.
  - With the value "Parallel", the StatefulSet controller launches or terminates all Pods in parallel, without waiting for Pods to become Running and Ready or be completely terminated before launching or terminating another Pod.
- Expose 2 new ports for
  - SDNC cluster (2550)
  - Jolokia (8080)
- Since startODL.sh has to be changed to enable the cluster function, two mount paths are added:
  - mount /home/ubuntu/cluster/script/cluster-startODL.sh (local) to replace /opt/onap/sdnc/bin/startODL.sh (docker), so that our locally updated script with the cluster config is used.
  - mount /home/ubuntu/cluster/deploy (local) to /opt/opendaylight/current/deploy (docker), so that we can dynamically deploy test bundles from outside the pods.
- The newly added headless service enables the SDNC pods in the SDNC cluster to find each other directly via fixed FQDNs.
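With the headless service in place, each StatefulSet pod gets a stable DNS name of the form `<pod>.<service>.<namespace>.svc.cluster.local`. A minimal sketch of that naming pattern (the service and namespace names come from the all-services.yaml change above; the helper name is illustrative):

```shell
# Build the fixed FQDN a peer pod is reachable at through the headless service.
pod_fqdn() {
  echo "$1.sdnhostcluster.onap-sdnc.svc.cluster.local"
}

pod_fqdn sdnc-0
```

Any pod in the cluster can use these names to reach a specific peer on the cluster port 2550, regardless of which node the peer is scheduled on.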
Create New Script: cluster-startODL.sh
This is manual for now; it can be automated when we automate the SDNC cluster deployment.
Create cluster-startODL.sh under /home/ubuntu/cluster/script/.
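A minimal sketch of the core computation cluster-startODL.sh has to perform before starting ODL: derive the pod's 1-based cluster member index from its StatefulSet ordinal and build the seed-node list from the replica count. The function name, the `sdnc-<ordinal>` hostname pattern, and the use of the ODL distribution's `bin/configure_cluster.sh` are assumptions for illustration; the real script must match your ODL distribution.

```shell
# Compose the cluster-configuration command for one pod.
# $1 = pod hostname (e.g. sdnc-0), $2 = number of replicas.
cluster_config_cmd() {
  hostname=$1 replicas=$2
  ordinal=${hostname##*-}     # StatefulSet pods end in their ordinal: sdnc-2 -> 2
  seeds=""
  i=0
  while [ "$i" -lt "$replicas" ]; do
    # Peer addresses use the fixed FQDNs provided by the headless service.
    seeds="$seeds sdnc-$i.sdnhostcluster.onap-sdnc.svc.cluster.local"
    i=$((i + 1))
  done
  # configure_cluster.sh takes a 1-based member index plus the seed hosts.
  echo "bin/configure_cluster.sh $((ordinal + 1))$seeds"
}

cluster_config_cmd sdnc-0 3
```

After configuring the cluster, the script would start ODL in the foreground (as the original startODL.sh does) so the container stays alive.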
Create ONAP Config
| # | Purpose | Command and Examples |
|---|---|---|
| 0.1 | (only once) Create the ONAP config using the sample YAML file | cd {$OOM}/kubernetes/config<br>cp onap-parameters-sample.yaml onap-parameters.yaml |
| 0 | Always set the OOM kubernetes config environment | cd {$OOM}/kubernetes/oneclick<br>source setenv.bash |
| 1 | Run the createConfig script to create the ONAP config | cd {$OOM}/kubernetes/config |
| 2 | Wait for the config-init container to finish | Monitor the onap config init until it reaches the Completed status, e.g. with kubectl get pods --all-namespaces -a |
| | Additional checks for config-init | |
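The "wait until Completed" step can be scripted instead of watched by hand. A small polling sketch (the helper name and retry limit are illustrative):

```shell
# Poll a status command until it prints the wanted value, with bounded retries.
# $1 = wanted status, remaining args = command that prints the current status.
wait_for_status() {
  want=$1; shift
  tries=0
  until [ "$("$@")" = "$want" ]; do
    tries=$((tries + 1))
    [ "$tries" -ge 60 ] && return 1   # give up after ~5 minutes
    sleep 5
  done
  return 0
}

# Against a live cluster one might pass a kubectl-based status command, e.g.:
# wait_for_status Completed sh -c \
#   "kubectl get pods --all-namespaces -a | awk '/config/ {print \$4}'"
```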
Deploy SDNC Application
| # | Purpose | Command and Examples |
|---|---|---|
| 0 | Always set the OOM kubernetes config environment (skip this step if you have already set it in the same terminal) | cd {$OOM}/kubernetes/oneclick<br>source setenv.bash |
| 1 | Run the createAll script to deploy the SDNC application | cd {$OOM}/kubernetes/oneclick<br>./createAll.bash -n onap -a sdnc |
| | Monitor the SDNC application start up | Use the kubectl get pods command to monitor the SDNC start up |
| 2 | Validate that all SDNC pods and services are created properly | helm ls --all<br>kubectl get namespaces<br>kubectl get deployment --all-namespaces<br>kubectl get clusterrolebinding --all-namespaces<br>kubectl get serviceaccounts --all-namespaces<br>kubectl get service --all-namespaces<br>kubectl get pods --all-namespaces -a<br>docker ps \| grep sdnc |
| 3 | Validate that the SDNC bundles are up | |
| 4 | Validate that the SDNC APIs are shown in the ODL RestConf page | Access the ODL RestConf page from the following URL |
| 5 | Validate the SDNC ODL cluster | Goal: verify that the SDNC ODL cluster is running properly. Use the ODL integration tool to monitor the ODL cluster, and use the testCluster RPC to test SDN-C load sharing: the testCluster-bundle.zip provides a testBundle which offers a testCluster API to help validate SDN-C RPC load sharing in the deployed SDN-C cluster. |
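The pod-monitoring step above can also be scripted. A sketch of a quick readiness check over `kubectl get pods` output (the helper name is illustrative; column 4 is the STATUS column of `kubectl get pods --all-namespaces`):

```shell
# Count pods whose STATUS is neither Running nor Completed.
# 0 means start-up is done; reads the `kubectl get pods` table on stdin.
count_not_running() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { n++ } END { print n + 0 }'
}

# Usage: kubectl get pods --all-namespaces -a | count_not_running
```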
Un-deploy SDNC application
| # | Purpose | Command and Examples |
|---|---|---|
| 0 | Always set the OOM kubernetes config environment (skip this step if you have already set it in the same terminal) | cd {$OOM}/kubernetes/oneclick<br>source setenv.bash |
| 1 | Run the deleteAll script to delete all SDNC pods and services | ./deleteAll.bash -n onap -a sdnc |
| 2 | Validate that all SDNC pods and services are cleared up | docker ps \| grep sdnc<br>kubectl get pods --all-namespaces -a<br>kubectl get service --all-namespaces<br>kubectl get serviceaccounts --all-namespaces<br>kubectl get clusterrolebinding --all-namespaces<br>kubectl get deployment --all-namespaces<br>kubectl get namespaces<br>helm ls --all |
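The cleanup validation can be reduced to counting leftovers. A sketch (the helper name is illustrative; a count of 0 from every validation command means the un-deploy is complete):

```shell
# Count lines mentioning sdnc in whatever listing is piped in.
sdnc_leftovers() {
  grep -c sdnc
}

# Usage: docker ps | sdnc_leftovers
#        kubectl get pods --all-namespaces -a | sdnc_leftovers
```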
Remove ONAP Config
| # | Purpose | Command and Examples |
|---|---|---|
| 0 | Always set the OOM kubernetes config environment (skip this step if you have already set it in the same terminal) | cd {$OOM}/kubernetes/oneclick<br>source setenv.bash |
| 1 | Remove the ONAP config and any deployed applications in one shot | ./deleteAll.bash -n onap |
| 2 | Manually clean up: this step cleans up the leftover items that were created by the config/createConfig script but not cleaned up by the oneclick/deleteAll script | |
| 3 | Delete the shared folder | sudo rm -rf /dockerdata-nfs/onap |
Scripts
The following scripts help simplify the related procedures by reducing the keystrokes needed.
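As one possible shape for such a script, a hypothetical wrapper sketch (the function name is an assumption): it prints the command sequence for a given action, so the procedures above can be recalled with a single call; replace `echo` with real execution once verified in your environment.

```shell
# Print the command sequence for deploying or un-deploying SDNC.
sdnc_cmds() {
  case $1 in
    deploy)
      echo 'cd {$OOM}/kubernetes/oneclick'
      echo 'source setenv.bash'
      echo './createAll.bash -n onap -a sdnc'
      ;;
    undeploy)
      echo 'cd {$OOM}/kubernetes/oneclick'
      echo 'source setenv.bash'
      echo './deleteAll.bash -n onap -a sdnc'
      ;;
  esac
}

sdnc_cmds deploy
```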