This wiki describes how to deploy SDN-C on a Kubernetes cluster using the latest SDN-C helm chart.
...
Code Block |
---|
#Press "Enter" after running the command to get the prompt back ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# nohup sudo helm serve >/dev/null 2>&1 & [1] 2316 ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# Regenerating index. This may take a moment. Now serving you on 127.0.0.1:8879 ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# # Verify $ ps ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm repo list NAME-ef | grep helm root 7323 URL18581 stable 0 https20://kubernetes-charts.storage.googleapis.com local52 pts/8 http://127.0.0.1:8879 ubuntu@k8s-s1-master:/home/00:00:00 sudo helm serve root 7324 7323 0 20:52 pts/8 00:00:00 helm serve ubuntu 7445 18581 0 20:52 pts/8 00:00:00 grep --color=auto helm $ # Verify ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm repo list NAME URL stable https://kubernetes-charts.storage.googleapis.com local http://127.0.0.1:8879 ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# |
If you don't find the local repo, add it manually.
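For example, the local repository served on port 8879 can be re-added with:

Code Block |
---|
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories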
...
Note |
---|
Setup of this Helm repository is a one-time activity. If you make changes to your deployment charts or values, make sure to run the **make** command again to update your local Helm repository.
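For example, after editing charts or values, rebuild and re-publish them to the local repository (run from the oom/kubernetes directory; **make all** is the standard OOM build target):

Code Block |
---|
# Rebuild all charts and refresh the local Helm repository
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# make all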
If Change 41597 has not been merged, create persistent volumes so they are available for claim by the persistent volume claims created during MySQL pod deployment. (As of late April 2018 the change has been merged, so you do not need to create PVs per the following step.)
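If you do need to create them, the sketch below shows the general shape of one such PersistentVolume. The name, storage size, and hostPath are illustrative assumptions, not values taken from Change 41597; match them to what the MySQL persistent volume claims actually request.

Code Block |
---|
# Illustrative PV only: the name, storage size, and path below are assumptions,
# not the actual values from Change 41597
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-sdnc-db-data-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /dockerdata-nfs/dev/sdnc/data/0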
...
Code Block |
---|
Example:
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm install local/onap --name <Release-name> --namespace onap

Execute:
# we choose "dev" as our release name here
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm install local/onap --name dev --namespace onap
NAME:   dev
LAST DEPLOYED: Thu Apr 5 15:29:43 2018
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/ClusterRoleBinding
NAME          AGE
onap-binding  <invalid>

==> v1/Service
NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                                       AGE
dev-aaf-cs                ClusterIP  None            <none>       7000/TCP,7001/TCP,9042/TCP,9160/TCP           <invalid>
dev-aaf                   NodePort   10.102.199.114  <none>       8101:30299/TCP                                <invalid>
dev-sdnc-dgbuilder        NodePort   10.98.198.119   <none>       3000:30203/TCP                                <invalid>
dev-sdnc-dmaap-listener   ClusterIP  None            <none>       <none>                                        <invalid>
sdnc-dbhost-read          ClusterIP  10.97.164.247   <none>       3306/TCP                                      <invalid>
dev-sdnc-nfs-provisioner  ClusterIP  10.96.108.12    <none>       2049/TCP,20048/TCP,111/TCP,111/UDP            <invalid>
dev-sdnc-dbhost           ClusterIP  None            <none>       3306/TCP                                      <invalid>
sdnc-sdnctldb02           ClusterIP  None            <none>       3306/TCP                                      <invalid>
sdnc-sdnctldb01           ClusterIP  None            <none>       3306/TCP                                      <invalid>
dev-sdnc-portal           NodePort   10.98.82.180    <none>       8443:30201/TCP                                <invalid>
dev-sdnc-ueb-listener     ClusterIP  None            <none>       <none>                                        <invalid>
sdnc-cluster              ClusterIP  None            <none>       2550/TCP                                      <invalid>
dev-sdnc                  NodePort   10.109.177.114  <none>       8282:30202/TCP,8202:30208/TCP,8280:30246/TCP  <invalid>

==> v1beta1/Deployment
NAME                      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
dev-aaf-cs                1        1        1           0          <invalid>
dev-aaf                   1        1        1           0          <invalid>
dev-sdnc-dgbuilder        1        1        1           0          <invalid>
dev-sdnc-dmaap-listener   1        1        1           0          <invalid>
dev-sdnc-nfs-provisioner  1        1        1           0          <invalid>
dev-sdnc-portal           1        0        0           0          <invalid>
dev-sdnc-ueb-listener     1        0        0           0          <invalid>

==> v1beta1/StatefulSet
NAME         DESIRED  CURRENT  AGE
dev-sdnc-db  2        1        <invalid>
dev-sdnc     3        3        <invalid>

==> v1/Pod(related)
NAME                                       READY  STATUS             RESTARTS  AGE
dev-aaf-cs-7c5b64d884-msh74                0/1    ContainerCreating  0         <invalid>
dev-aaf-775bdc6b48-cf7fr                   0/1    Init:0/1           0         <invalid>
dev-sdnc-dgbuilder-6fdfb498f-wf7bt         0/1    Init:0/1           0         <invalid>
dev-sdnc-dmaap-listener-5998b5774c-wz24j   0/1    Init:0/1           0         <invalid>
dev-sdnc-nfs-provisioner-75dcd8c86c-qz2qh  0/1    ContainerCreating  0         <invalid>
dev-sdnc-portal-5cd7598547-46gn4           0/1    Init:0/1           0         <invalid>
dev-sdnc-ueb-listener-598c68f8d8-frbfz     0/1    Init:0/1           0         <invalid>
dev-sdnc-db-0                              0/2    Init:0/3           0         <invalid>
dev-sdnc-0                                 0/2    Init:0/1           0         <invalid>
dev-sdnc-1                                 0/2    Init:0/1           0         <invalid>
dev-sdnc-2                                 0/2    Init:0/1           0         <invalid>

==> v1/Secret
NAME                      TYPE                     DATA  AGE
dev-aaf-cs                Opaque                   0     <invalid>
dev-sdnc-dgbuilder        Opaque                   1     <invalid>
dev-sdnc-db               Opaque                   1     <invalid>
dev-sdnc-portal           Opaque                   1     <invalid>
dev-sdnc                  Opaque                   1     <invalid>
onap-docker-registry-key  kubernetes.io/dockercfg  1     <invalid>

==> v1/ConfigMap
NAME                                DATA  AGE
dev-aaf                             0     <invalid>
dev-sdnc-dgbuilder-config           1     <invalid>
dev-sdnc-dgbuilder-scripts          2     <invalid>
sdnc-dmaap-configmap                1     <invalid>
dev-sdnc-db-db-configmap            2     <invalid>
sdnc-portal-configmap               1     <invalid>
sdnc-ueb-configmap                  1     <invalid>
dev-sdnc-installsdncdb              1     <invalid>
dev-sdnc-dblib-properties           1     <invalid>
dev-sdnc-aaiclient-properties       1     <invalid>
dev-sdnc-startodl                   1     <invalid>
dev-sdnc-onap-sdnc-svclogic-config  1     <invalid>
dev-sdnc-svclogic-config            1     <invalid>
dev-sdnc-filebeat-configmap         1     <invalid>
dev-sdnc-log-configmap              1     <invalid>

==> v1/StorageClass
NAME              PROVISIONER      AGE
dev-sdnc-db-data  dev-sdnc-db/nfs  <invalid>

ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
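You can watch the pods come up and check the release status with standard commands, for example:

Code Block |
---|
# Watch the ONAP pods until they reach Running/Ready
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# kubectl get pods --namespace onap -w
# Check the overall status of the "dev" release
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm status dev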
Downgrade helm
The helm installation procedure puts the latest version on your master node. The Tiller (helm server) version follows the helm (helm client) version, so Tiller will also be at the latest version.
If the helm/tiller version on your K8S master node is not what the ONAP installation expects, you will get an error such as “Chart incompatible with Tiller v2.9.1”. See below:
ubuntu@kanatamaster:~/oominstall/kubernetes$ helm install local/onap --name dev --namespace onap
Error: Chart incompatible with Tiller v2.9.1
ubuntu@kanatamaster:~/oominstall/kubernetes$
A temporary fix for this is often to downgrade helm/tiller. Here is the procedure:
Step 1) downgrade helm client (helm)
- Download the desired version (tar.gz file) from the Kubernetes website. Example: https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz . Changing the version number in the file name downloads that version instead. (Try 2.8.2 if you get the same error with 2.8.1.)
(curl https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz --output helm-v2.8.1-linux-amd64.tar.gz --silent)
- Unzip and untar the file. It will create a "linux-amd64" directory.
- Copy the helm binary from the linux-amd64 directory to /usr/local/bin/ (kill the helm process if it is blocking the copy).
- Run "helm version" to confirm the downgrade. The consolidated commands are sketched after this list.
Step 2) downgrade helm server (Tiller)
Use helm reset, then re-initialize Tiller. Follow the steps below:
Code Block |
---|
# Uninstalls Tiller from a cluster
helm reset --force
# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete clusterrolebinding tiller-clusterrolebinding
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: ""
EOF
# Run the below command to create the tiller service account and cluster role binding
kubectl create -f tiller-serviceaccount.yaml
# Then re-initialize helm; --upgrade installs the Tiller version matching the helm client
helm init --service-account tiller --upgrade
# Verify
helm version
# Note: Don't forget to restart the local helm repository server
nohup sudo helm serve >/dev/null 2>&1 &
Note |
---|
The **--namespace onap** argument is currently required while all ONAP helm charts are being migrated to version 2.0. After this activity is complete, the namespace will be optional.
...