This wiki describes how to deploy SDN-C on a Kubernetes cluster using the latest SDN-C Helm chart.
...
Code Block |
---|
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# sudo mkdir /onapDev
#Note that all components are changed to enabled:false except sdnc (and its underlying mysql).
#Here we set the number of SDNC/MySQL replicas to 3/2.
#Note that global.persistence.mountPath is set to the non-mounted directory /onapDev
#(required because we keep nfs-provisioner enabled in the SDN-C configuration)
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# cat ~/oom/kubernetes/onap/values.yaml
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302

  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositorySecret: eyJuZXh1czMub25hcC5vcmc6MTAwMDEiOnsidXNlcm5hbWUiOiJkb2NrZXIiLCJwYXNzd29yZCI6ImRvY2tlciIsImVtYWlsIjoiQCIsImF1dGgiOiJaRzlqYTJWeU9tUnZZMnRsY2c9PSJ9fQ==

  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co

  # image pull policy
  pullPolicy: Always

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /onapDev

  # flag to enable debugging - application support required
  debugEnabled: false

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: false
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
dcaegen2:
  enabled: false
esr:
  enabled: false
log:
  enabled: false
message-router:
  enabled: false
mock:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
policy:
  enabled: false
portal:
  enabled: false
robot:
  enabled: false
sdc:
  enabled: false
sdnc:
  enabled: true
  replicaCount: 3
  config:
    enableClustering: false
  mysql:
    disableNfsProvisioner: true
    replicaCount: 2
so:
  enabled: false
  replicaCount: 1
  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: true
  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
  # configure embedded mariadb
  mariadb:
    config:
      mariadbRootPassword: password
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# |
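The `repositorySecret` above is just a base64-encoded dockercfg JSON document for the `nexus3.onap.org:10001` registry. If you need to point at a different registry, or want to confirm what credentials this secret actually carries, you can decode it locally (no cluster access needed):

```shell
# The repositorySecret value from onap/values.yaml above
secret='eyJuZXh1czMub25hcC5vcmc6MTAwMDEiOnsidXNlcm5hbWUiOiJkb2NrZXIiLCJwYXNzd29yZCI6ImRvY2tlciIsImVtYWlsIjoiQCIsImF1dGgiOiJaRzlqYTJWeU9tUnZZMnRsY2c9PSJ9fQ=='

# Decode it to see the dockercfg JSON it contains
echo "$secret" | base64 -d
```

To build a secret for your own registry, construct the equivalent JSON for your registry host and credentials and pipe it through `base64 -w 0`.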
Note: If you set the number of SDN-C/MySQL replicas in onap/values.yaml, it will override the settings you are about to make in the next step.
Customize the oom/kubernetes/sdnc chart (for example, its values.yaml file) to configure the number of replicas for the SDN-C and MySQL services to match your deployment needs.
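As an alternative to editing values.yaml in place, Helm lets you keep your replica customizations in a separate override file and pass it at install time with `-f`. A minimal sketch (the file name `/tmp/sdnc-overrides.yaml` is our choice, not part of the charts):

```shell
# Write a small override file for the sdnc chart; values here mirror the
# 3 SDN-C / 2 MySQL replica example used elsewhere on this page
cat > /tmp/sdnc-overrides.yaml << 'EOF'
sdnc:
  enabled: true
  replicaCount: 3
  mysql:
    disableNfsProvisioner: true
    replicaCount: 2
EOF

# Show the generated override file
cat /tmp/sdnc-overrides.yaml

# Then pass it at install time (run against your cluster):
#   helm install local/onap --name dev --namespace onap -f /tmp/sdnc-overrides.yaml
```

Values passed with `-f` take precedence over the defaults baked into the charts, which keeps your customizations out of the git-tracked values.yaml.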
...
Code Block |
---|
#Press "Enter" after running the command to get the prompt back
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# nohup sudo helm serve >/dev/null 2>&1 &
[1] 2316
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879

# Verify the helm serve process is running
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# ps -ef | grep helm
root      7323 18581  0 20:52 pts/8    00:00:00 sudo helm serve
root      7324  7323  0 20:52 pts/8    00:00:00 helm serve
ubuntu    7445 18581  0 20:52 pts/8    00:00:00 grep --color=auto helm

# Verify the local repo is registered
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# |
...
Note |
---|
Setting up this Helm repository is a one-time activity. If you make changes to your deployment charts or values, make sure to run the **make** command again to update your local Helm repository. |
If Change 41597 has not been merged, create persistent volumes so they are available for the persistent volume claims created during MySQL pod deployment. (As of late April 2018, the change has been merged, so you do not need to create PVs in the following step.)
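If you do need to create the PVs manually, a sketch of what one such manifest might look like is below. This is a hypothetical example, not the exact manifest from Change 41597: the capacity, access modes, reclaim policy, and storage class name are taken from the `kubectl get pv` output shown later on this page, while the NFS server address and export path are placeholders you must replace with your own.

```yaml
# Hypothetical PV manifest; create one PV per MySQL replica (nfs-vol1, nfs-vol2, ...)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol1
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: dev-sdnc-db-data
  nfs:
    server: 1.2.3.4          # placeholder: your NFS server address
    path: /dockerdata-nfs    # placeholder: your exported NFS path
```

Apply it with `kubectl create -f <file>.yaml` before installing the charts, so the MySQL PVCs have volumes to bind to.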
...
Code Block |
---|
Example:
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm install local/onap --name <Release-name> --namespace onap
Execute:
# we choose "dev" as our release name here
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm install local/onap --name dev --namespace onap
NAME: dev
LAST DEPLOYED: Thu Apr 5 15:29:43 2018
NAMESPACE: onap
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/ClusterRoleBinding
NAME AGE
onap-binding <invalid>
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dev-aaf-cs ClusterIP None <none> 7000/TCP,7001/TCP,9042/TCP,9160/TCP <invalid>
dev-aaf NodePort 10.102.199.114 <none> 8101:30299/TCP <invalid>
dev-sdnc-dgbuilder NodePort 10.98.198.119 <none> 3000:30203/TCP <invalid>
dev-sdnc-dmaap-listener ClusterIP None <none> <none> <invalid>
sdnc-dbhost-read ClusterIP 10.97.164.247 <none> 3306/TCP <invalid>
dev-sdnc-nfs-provisioner ClusterIP 10.96.108.12 <none> 2049/TCP,20048/TCP,111/TCP,111/UDP <invalid>
dev-sdnc-dbhost ClusterIP None <none> 3306/TCP <invalid>
sdnc-sdnctldb02 ClusterIP None <none> 3306/TCP <invalid>
sdnc-sdnctldb01 ClusterIP None <none> 3306/TCP <invalid>
dev-sdnc-portal NodePort 10.98.82.180 <none> 8443:30201/TCP <invalid>
dev-sdnc-ueb-listener ClusterIP None <none> <none> <invalid>
sdnc-cluster ClusterIP None <none> 2550/TCP <invalid>
dev-sdnc NodePort 10.109.177.114 <none> 8282:30202/TCP,8202:30208/TCP,8280:30246/TCP <invalid>
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
dev-aaf-cs 1 1 1 0 <invalid>
dev-aaf 1 1 1 0 <invalid>
dev-sdnc-dgbuilder 1 1 1 0 <invalid>
dev-sdnc-dmaap-listener 1 1 1 0 <invalid>
dev-sdnc-nfs-provisioner 1 1 1 0 <invalid>
dev-sdnc-portal 1 0 0 0 <invalid>
dev-sdnc-ueb-listener 1 0 0 0 <invalid>
==> v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
dev-sdnc-db 2 1 <invalid>
dev-sdnc 3 3 <invalid>
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
dev-aaf-cs-7c5b64d884-msh74 0/1 ContainerCreating 0 <invalid>
dev-aaf-775bdc6b48-cf7fr 0/1 Init:0/1 0 <invalid>
dev-sdnc-dgbuilder-6fdfb498f-wf7bt 0/1 Init:0/1 0 <invalid>
dev-sdnc-dmaap-listener-5998b5774c-wz24j 0/1 Init:0/1 0 <invalid>
dev-sdnc-nfs-provisioner-75dcd8c86c-qz2qh 0/1 ContainerCreating 0 <invalid>
dev-sdnc-portal-5cd7598547-46gn4 0/1 Init:0/1 0 <invalid>
dev-sdnc-ueb-listener-598c68f8d8-frbfz 0/1 Init:0/1 0 <invalid>
dev-sdnc-db-0 0/2 Init:0/3 0 <invalid>
dev-sdnc-0 0/2 Init:0/1 0 <invalid>
dev-sdnc-1 0/2 Init:0/1 0 <invalid>
dev-sdnc-2 0/2 Init:0/1 0 <invalid>
==> v1/Secret
NAME TYPE DATA AGE
dev-aaf-cs Opaque 0 <invalid>
dev-sdnc-dgbuilder Opaque 1 <invalid>
dev-sdnc-db Opaque 1 <invalid>
dev-sdnc-portal Opaque 1 <invalid>
dev-sdnc Opaque 1 <invalid>
onap-docker-registry-key kubernetes.io/dockercfg 1 <invalid>
==> v1/ConfigMap
NAME DATA AGE
dev-aaf 0 <invalid>
dev-sdnc-dgbuilder-config 1 <invalid>
dev-sdnc-dgbuilder-scripts 2 <invalid>
sdnc-dmaap-configmap 1 <invalid>
dev-sdnc-db-db-configmap 2 <invalid>
sdnc-portal-configmap 1 <invalid>
sdnc-ueb-configmap 1 <invalid>
dev-sdnc-installsdncdb 1 <invalid>
dev-sdnc-dblib-properties 1 <invalid>
dev-sdnc-aaiclient-properties 1 <invalid>
dev-sdnc-startodl 1 <invalid>
dev-sdnc-onap-sdnc-svclogic-config 1 <invalid>
dev-sdnc-svclogic-config 1 <invalid>
dev-sdnc-filebeat-configmap 1 <invalid>
dev-sdnc-log-configmap 1 <invalid>
==> v1/StorageClass
NAME PROVISIONER AGE
dev-sdnc-db-data dev-sdnc-db/nfs <invalid>
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
|
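Right after `helm install`, most pods sit in `Init:*` or `ContainerCreating` for a while, as in the listing above. A quick way to track progress is to filter the pod listing by its STATUS column; the snippet below runs the filter against a captured sample (taken from the output above) so the logic is clear, and the comment shows the live-cluster equivalent:

```shell
# Captured sample of "kubectl get pods -n onap --no-headers" output
sample='dev-aaf-cs-7c5b64d884-msh74   0/1   ContainerCreating   0   5s
dev-sdnc-db-0   0/2   Init:0/3   0   5s
dev-sdnc-0   2/2   Running   0   5m'

# Count pods whose STATUS (third column) is not yet Running; on a live
# cluster, pipe "kubectl get pods -n onap --no-headers" into the same awk
echo "$sample" | awk '$3 != "Running" {n++} END {print n" pod(s) not Running"}'
```

Repeat (or use `kubectl get pods -n onap -w`) until every pod reports Running before exercising SDN-C.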
Downgrade helm
The helm installation procedure installs the latest version of helm on your master node. Tiller (the helm server) follows the helm client version, so Tiller will also be at the latest version.
If the helm/tiller version on your K8s master node is not what the ONAP installation expects, you will get "Chart incompatible with Tiller v2.9.1". See below:
ubuntu@kanatamaster:~/oominstall/kubernetes$ helm install local/onap --name dev --namespace onap
Error: Chart incompatible with Tiller v2.9.1
ubuntu@kanatamaster:~/oominstall/kubernetes$
A temporary fix is often to downgrade helm/tiller. Here is the procedure:
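The error itself is nothing more than a client/server version mismatch: the chart's `tillerVersion` requirement rejects the Tiller that `helm init` deployed. A tiny illustrative check (the version strings here are examples matching this page, not read from a live cluster):

```shell
# Example versions: what "helm version" might report for client and server
client="v2.8.2"   # helm client version after the downgrade below
tiller="v2.9.1"   # Tiller version currently running in kube-system

# The incompatibility boils down to this comparison
if [ "$client" != "$tiller" ]; then
  echo "mismatch: helm client $client vs Tiller $tiller"
fi
```

On a real cluster, `helm version` prints both numbers; the downgrade steps below bring them back in line.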
Step 1) Downgrade the helm client (helm)
- Download the desired version (tar.gz file) from the Kubernetes website, for example: https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz (change the version number in the file name to fetch a different release):
(curl https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz --output helm-v2.8.1-linux-amd64.tar.gz --silent)
(try 2.8.2 if you get the same error with 2.8.1)
- Unzip and untar the file. It will create a "linux-amd64" directory.
- Copy the helm binary from the linux-amd64 directory to /usr/local/bin/ (kill the helm process if it blocks the copy).
- Run "helm version" to confirm the client version.
Step 2) Downgrade the helm server (Tiller)
Use helm reset, then follow the steps below:
Code Block |
---|
# Uninstall Tiller from the cluster
helm reset --force

# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding

cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF

# Run the below command to get the matching tiller version for helm
kubectl create -f tiller-serviceaccount.yaml

# Then run helm init
helm init --service-account tiller --upgrade

# Verify
helm version

#Note: Don't forget to restart the local helm repo server
nohup sudo helm serve >/dev/null 2>&1 & |
Note |
---|
The **--namespace onap** flag is currently required while the ONAP helm charts are being migrated to version 2.0 of the chart structure. Once this migration is complete, the namespace will be optional. |
...
Code Block |
---|
#query existing pv in onap namespace
ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pv -n onap
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-vol1 11Gi RWO,RWX Retain Bound dev-sdnc-db-data 38s
nfs-vol2 11Gi RWO,RWX Retain Bound dev-sdnc-db-data 34s
#query existing pvc in onap namespace
ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pvc -n onap
NAME                             STATUS    VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
dev-sdnc-db-data-dev-sdnc-db-0   Bound     nfs-vol1   11Gi       RWO,RWX        dev-sdnc-db-data   21h
dev-sdnc-db-data-dev-sdnc-db-1   Bound     nfs-vol2   11Gi       RWO,RWX        dev-sdnc-db-data   21h
#delete existing pv
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pv nfs-vol1 -n onap
pv "nfs-vol1" deleted
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pv nfs-vol2 -n onap
pv "nfs-vol2" deleted
#delete existing pvc
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pvc dev-sdnc-db-data-dev-sdnc-db-0 -n onap
pvc "dev-sdnc-db-data-dev-sdnc-db-0" deleted
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pvc dev-sdnc-db-data-dev-sdnc-db-1 -n onap
pvc "dev-sdnc-db-data-dev-sdnc-db-1" deleted
#alternatively, delete all pvc in the onap namespace at once
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pvc -n onap --all |
...