This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy SDN-C within that Kubernetes cluster.
...
Code Block
helm init --service-account tiller --upgrade

# A new pod is created, but will be in pending status.
kubectl get pods --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5   0/1   Pending   0   7m   <none>   <none>

# A new service is created
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>   44134/TCP   47m   app=helm,name=tiller

# A new deployment is created, but the AVAILABLE flag is set to "0".
kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m
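The tiller pod often stays in Pending until at least one worker node has joined the cluster. If it remains Pending, kubectl describe shows the scheduling events; the pod name below is only an example taken from the output above, so substitute your own.

Code Block

# Inspect why the tiller pod is not being scheduled
kubectl -n kube-system describe pod tiller-deploy-b6bf9f4cc-vbrc5
# Check the "Events" section at the bottom for messages such as
# "0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate."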
Downgrade helm
The helm installation procedure installs the latest helm version on your master node. The Tiller (helm server) version then follows the helm (helm client) version, so Tiller will also be the latest.
If the helm/tiller version on your K8S master node is not the one the ONAP installation expects, you will get “Chart incompatible with Tiller v2.9.1”. See below:
ubuntu@kanatamaster:~/oominstall/kubernetes$ helm install local/onap --name dev --namespace onap
Error: Chart incompatible with Tiller v2.9.1
ubuntu@kanatamaster:~/oominstall/kubernetes$
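To check which client and server versions are actually installed, run helm version; the version strings shown in the comments below are illustrative only.

Code Block

# Show the installed helm client and Tiller (server) versions
helm version
# Client: &version.Version{SemVer:"v2.9.1", ...}
# Server: &version.Version{SemVer:"v2.9.1", ...}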
A temporary fix for this is often to downgrade helm/tiller. Here is the procedure:
Step 1) downgrade helm client (helm)
...
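The exact client downgrade steps are elided above; as a rough sketch, assuming helm v2.8.2 is the version your installation expects, the client binary can be replaced by downloading that release manually:

Code Block

# Download and install a specific helm client release (v2.8.2 is an assumed target version)
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
tar -zxvf helm-v2.8.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
# Confirm the client version
helm version --client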
Step 2) downgrade helm server (Tiller)
Use helm reset. Follow the steps below:
Code Block
# Uninstalls Tiller from a cluster
helm reset --force
# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete clusterrolebinding tiller-clusterrolebinding
# Run the commands below to install the Tiller version that matches the helm client
kubectl create -f tiller-serviceaccount.yaml
# Then run init helm
helm init --service-account tiller --upgrade
# Verify
helm version
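The tiller-serviceaccount.yaml referenced above is not reproduced in this section; as a sketch of what such a file commonly contains (a ServiceAccount plus a cluster-admin ClusterRoleBinding, matching the names deleted above), it could be created like this:

Code Block

# Hypothetical contents for tiller-serviceaccount.yaml; adjust to the file used in your install
cat <<EOF > tiller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF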
Configure the Kubernetes Worker Nodes (k8s-node<n>)
Setting up cluster nodes is very easy. Just refer back to the "kubeadm init" output logs (/root/kubeadm_init.log). In the last line of the logs, there is a “kubeadm join” command with token information and other parameters.
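Run that command on each worker node. It has roughly the following shape; the address, token, and hash below are placeholders, so copy the real command from your own kubeadm_init.log.

Code Block

# Example shape of the join command (values are placeholders)
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# Back on the master, verify the node registered
kubectl get nodes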
...
Code Block
# decrease sdnc pods to 1
$ kubectl scale statefulset sdnc -n onap-sdnc --replicas=1
statefulset "sdnc" scaled

# verify ... 2 sdnc pods will terminate
$ kubectl get pods --all-namespaces -a | grep sdnc
onap-sdnc   nfs-provisioner-5fb9fcb48f-cj8hm   1/1   Running       0   21h
onap-sdnc   sdnc-0                             2/2   Running       0   2h
onap-sdnc   sdnc-1                             0/2   Terminating   0   40m
onap-sdnc   sdnc-2                             0/2   Terminating   0   15m

# increase sdnc pods to 5
$ kubectl scale statefulset sdnc -n onap-sdnc --replicas=5
statefulset "sdnc" scaled

# increase db pods to 5
$ kubectl scale statefulset sdnc-dbhost -n onap-sdnc --replicas=5
statefulset "sdnc-dbhost" scaled

$ kubectl get pods --all-namespaces -o wide | grep onap-sdnc
onap-sdnc   nfs-provisioner-7fd7b4c6b7-d6k5t   1/1   Running   0   13h   10.42.0.149     sdnc-k8s
onap-sdnc   sdnc-0                             2/2   Running   0   13h   10.42.134.186   sdnc-k8s
onap-sdnc   sdnc-1                             2/2   Running   0   13h   10.42.186.72    sdnc-k8s
onap-sdnc   sdnc-2                             2/2   Running   0   13h   10.42.51.86     sdnc-k8s
onap-sdnc   sdnc-dbhost-0                      2/2   Running   0   13h   10.42.190.88    sdnc-k8s
onap-sdnc   sdnc-dbhost-1                      2/2   Running   0   12h   10.42.213.221   sdnc-k8s
onap-sdnc   sdnc-dbhost-2                      2/2   Running   0   5m    10.42.63.197    sdnc-k8s
onap-sdnc   sdnc-dbhost-3                      2/2   Running   0   5m    10.42.199.38    sdnc-k8s
onap-sdnc   sdnc-dbhost-4                      2/2   Running   0   4m    10.42.148.85    sdnc-k8s
onap-sdnc   sdnc-dgbuilder-6ff8d94857-hl92x    1/1   Running   0   13h   10.42.255.132   sdnc-k8s
onap-sdnc   sdnc-portal-0                      1/1   Running   0   13h   10.42.141.70    sdnc-k8s
onap-sdnc   sdnc-portal-1                      1/1   Running   0   13h   10.42.60.71     sdnc-k8s
onap-sdnc   sdnc-portal-2
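To confirm that a scaling operation has fully converged, the StatefulSet status can also be checked directly; this quick verification is a suggestion and not part of the original procedure.

Code Block

# Show desired vs. current replica counts for the SDN-C StatefulSets
kubectl get statefulsets -n onap-sdnc
# Watch the pods until every replica reports Running
kubectl get pods -n onap-sdnc -o wide -w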
sudo ntpdate -s 10.247.5.11