This wiki describes how to set up a Kubernetes cluster with kubeadm, and then how to deploy SDN-C within that Kubernetes cluster.
...
Code Block
# On the k8s-master vm, set up the kubernetes master node.
# The "sudo -i" changes user to root.
sudo -i

# Verify that no kubernetes app is running yet.
ps -ef | grep -i kube | grep -v grep

# Pick one DNS add-on: either "kube-dns" or "CoreDNS".
# If your environment setup is for "Kubernetes federation" or "SDN-C Geographic Redundancy", use the "CoreDNS" add-on.
# Note that kubeadm version 1.8.x does not support the CoreDNS feature gate.
# Upgrade kubeadm to the latest version before running the command below.

# With the "CoreDNS" add-on (recommended):
kubeadm init --feature-gates=CoreDNS=true | tee ~/kubeadm_init.log

# With the "kube-dns" add-on:
kubeadm init | tee ~/kubeadm_init.log

# Verify that the kubernetes apps are now running (kubelet, kube-scheduler, etcd, kube-apiserver, kube-proxy, kube-controller-manager).
ps -ef | grep -i kube | grep -v grep

# The "exit" reverts user back to ubuntu.
exit
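Once "kubeadm init" completes, the ubuntu user needs a kubeconfig before it can run kubectl against the new master. A minimal sketch, assuming kubeadm's default admin.conf location (this may already be covered in the elided steps that follow):

Code Block
# As the ubuntu user (not root), copy kubeadm's admin kubeconfig into place
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Sanity check: the master node should be listed
kubectl get nodes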
...
Code Block
sudo kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE
kube-system   etcd-k8s-master                      1/1     Running   0          1m    10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1     Running   0          1m    10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          1m    10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3     Running   0          44m   10.32.0.2        k8s-master
kube-system   kube-proxy-lnv7r                     1/1     Running   0          44m   10.147.112.140   k8s-master
kube-system   kube-scheduler-k8s-master            1/1     Running   0          1m    10.147.112.140   k8s-master
kube-system   weave-net-b2hkh                      2/2     Running   0          1m    10.147.112.140   k8s-master

# (There will be 2 coredns pods with different IP addresses, with kubernetes version 1.10.1.)

# Verify that the AVAILABLE count for the "kube-dns" or "coredns" deployment changes to 1 (2 with kubernetes version 1.10.1).

# For coredns:
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns   2         2         2            2           2m

# For kube-dns:
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           1h
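Rather than polling the deployment manually, you can wait for it to report all replicas available. A small sketch, assuming the CoreDNS add-on was chosen (use deployment/kube-dns otherwise):

Code Block
# Block until the DNS deployment reports all replicas available
sudo kubectl -n kube-system rollout status deployment/coredns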
...
Code Block
helm init --service-account tiller --upgrade

# A new pod is created, but will be in Pending status.
kubectl get pods --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5   0/1   Pending   0   7m   <none>   <none>

# A new service is created.
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>   44134/TCP   47m   app=helm,name=tiller

# A new deployment is created, but the AVAILABLE flag is set to "0".
kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m
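The tiller-deploy pod typically stays Pending because the master node carries a NoSchedule taint and no worker nodes have joined yet; it should move to Running once workers join. If you instead plan to run everything on a single node (an assumption about your topology, not required by this guide), one option is to remove the master taint:

Code Block
# Allow regular workloads to be scheduled on the master node (single-node setups only)
kubectl taint nodes --all node-role.kubernetes.io/master-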
Downgrade helm
The helm installation procedure installs the latest helm version on your master node. The Tiller (helm server) version follows the helm (helm client) version, so Tiller will also be the latest version.
If the helm/tiller version on your K8S master node is not what the ONAP installation wants, you will get “Chart incompatible with Tiller v2.9.1”. See below:
ubuntu@kanatamaster:~/oominstall/kubernetes$ helm install local/onap --name dev --namespace onap
Error: Chart incompatible with Tiller v2.9.1
ubuntu@kanatamaster:~/oominstall/kubernetes$
A temporary fix for this is often to downgrade helm/tiller. Here is the procedure:
Step 1) downgrade helm client (helm)
...
...
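If the elided step above does not already spell it out, downgrading the helm client usually means replacing the binary with an older release. A minimal sketch, assuming v2.8.2 is the version the charts want (both the version and the download URL are assumptions; adjust them to the release your ONAP version requires):

Code Block
# Download and install an older helm client release (version is an assumption)
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
tar -zxvf helm-v2.8.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

# Confirm the client version
helm version --client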
Step 2) downgrade helm server (Tiller)
Use helm reset and follow the steps below:
Code Block
# Uninstalls Tiller from a cluster
helm reset --force
# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete clusterrolebinding tiller-clusterrolebinding
# Run the below command to get the matching tiller version for helm
kubectl create -f tiller-serviceaccount.yaml
# Then run helm init
helm init --service-account tiller --upgrade
# Verify
helm version
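The tiller-serviceaccount.yaml file referenced above is not shown on this page. A typical manifest that matches the resource names used in the delete commands (its exact contents here are an assumption) can be created like this; if "helm init --upgrade" still pulls a Tiller image that is too new, helm 2 also accepts a --tiller-image flag to pin a specific version:

Code Block
# Create tiller-serviceaccount.yaml (contents are a typical RBAC setup for Tiller)
cat > tiller-serviceaccount.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF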
Configure the Kubernetes Worker Nodes (k8s-node<n>)
Setting up the cluster worker nodes is straightforward. Just refer back to the "kubeadm init" output logs (/root/kubeadm_init.log). The last line of the logs contains a “kubeadm join” command with the token information and other parameters.
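Run that command, as root, on each worker node. The general shape is shown below; the actual token, master address, and CA hash must be copied from /root/kubeadm_init.log (the values here are placeholders):

Code Block
# On each k8s-node<n> vm, join the node to the cluster
sudo kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>

# Back on the master, the new node should appear
sudo kubectl get nodes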
...
See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instructions on how to set this up.
Configuring SDN-C ONAP
Clone OOM project only on the Kubernetes Master Node
Note
OOM deployment is now done using helm. It is highly recommended to stop here and continue from the page Deploying SDN-C using helm chart.
As the ubuntu user, clone the oom repository.
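A minimal sketch of the clone step, in case the elided commands below do not show it (the repository URL is the usual ONAP gerrit location; the branch to check out depends on your ONAP release and is not specified here):

Code Block
# Clone the ONAP OOM repository as the ubuntu user
git clone https://gerrit.onap.org/r/oom
cd oom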
...
Get the following 2 gerrit changes from Configure SDN-C Cluster Deployment.
- Get New startODL.sh Script From Gerrit Topic SDNC-163 (download the startODL_new.sh.zip script and copy it into /dockerdata-nfs/cluster/script/startODL.sh; see the sketch after this list)
- Get SDN-C Cluster Templates From Gerrit Topic SDNC-163 (skip steps 1 and 2; gerrit change 25467 has already been merged. Just update sdnc-statefulset.yaml and values.yaml)
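A sketch of the script copy mentioned in the first item, assuming the zip has been downloaded to the current directory and the NFS share from the earlier step is mounted at /dockerdata-nfs:

Code Block
# Unpack the downloaded script and install it under the shared NFS folder
unzip startODL_new.sh.zip
sudo cp startODL_new.sh /dockerdata-nfs/cluster/script/startODL.sh
sudo chmod +x /dockerdata-nfs/cluster/script/startODL.sh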
...
Verify SDNC Clustering
Refer to Validate the SDN-C ODL cluster.
Undeploy SDNC
Code Block
$ cd ~/oom/kubernetes/oneclick/
$ source setenv.bash

$ ./deleteAll.bash -n onap
$ ./deleteAll.bash -n onap -a sdnc

$ sudo rm -rf /dockerdata-nfs
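After running deleteAll.bash, it can help to confirm that no sdnc pods remain (a simple check, not part of the original procedure):

Code Block
# No onap-sdnc pods should remain (they may stay in Terminating for a short while)
$ kubectl get pods --all-namespaces | grep sdnc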
...
Code Block
# decrease sdnc pods to 1
$ kubectl scale statefulset sdnc -n onap-sdnc --replicas=1
statefulset "sdnc" scaled

# verify: 2 sdnc pods will terminate
$ kubectl get pods --all-namespaces -a | grep sdnc
onap-sdnc   nfs-provisioner-5fb9fcb48f-cj8hm   1/1   Running       0   21h
onap-sdnc   sdnc-0                             2/2   Running       0   2h
onap-sdnc   sdnc-1                             0/2   Terminating   0   40m
onap-sdnc   sdnc-2                             0/2   Terminating   0   15m

# increase sdnc pods to 5
$ kubectl scale statefulset sdnc -n onap-sdnc --replicas=5
statefulset "sdnc" scaled

# increase db pods to 5
$ kubectl scale statefulset sdnc-dbhost -n onap-sdnc --replicas=5
statefulset "sdnc-dbhost" scaled

$ kubectl get pods --all-namespaces -o wide | grep onap-sdnc
onap-sdnc   nfs-provisioner-7fd7b4c6b7-d6k5t   1/1   Running   0   13h   10.42.0.149     sdnc-k8s
onap-sdnc   sdnc-0                             2/2   Running   0   13h   10.42.134.186   sdnc-k8s
onap-sdnc   sdnc-1                             2/2   Running   0   13h   10.42.186.72    sdnc-k8s
onap-sdnc   sdnc-2                             2/2   Running   0   13h   10.42.51.86     sdnc-k8s
onap-sdnc   sdnc-dbhost-0                      2/2   Running   0   13h   10.42.190.88    sdnc-k8s
onap-sdnc   sdnc-dbhost-1                      2/2   Running   0   12h   10.42.213.221   sdnc-k8s
onap-sdnc   sdnc-dbhost-2                      2/2   Running   0   5m    10.42.63.197    sdnc-k8s
onap-sdnc   sdnc-dbhost-3                      2/2   Running   0   5m    10.42.199.38    sdnc-k8s
onap-sdnc   sdnc-dbhost-4                      2/2   Running   0   4m    10.42.148.85    sdnc-k8s
onap-sdnc   sdnc-dgbuilder-6ff8d94857-hl92x    1/1   Running   0   13h   10.42.255.132   sdnc-k8s
onap-sdnc   sdnc-portal-0                      1/1   Running   0   13h   10.42.141.70    sdnc-k8s
onap-sdnc   sdnc-portal-1                      1/1   Running   0   13h   10.42.60.71     sdnc-k8s
onap-sdnc   sdnc-portal-2
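To confirm that the scaling operations have settled, the statefulsets themselves can be checked as well (a simple verification step, not part of the original procedure):

Code Block
# DESIRED and CURRENT should both show the new replica counts
$ kubectl get statefulsets -n onap-sdnc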
If a node's clock has drifted, resync it against your NTP server (the 10.247.5.11 address is environment-specific):

sudo ntpdate -s 10.247.5.11