This wiki describes how to set up a Kubernetes cluster with kubeadm, and then how to deploy APPC within that Kubernetes cluster.

(Chrome is the preferred browser for viewing this page. IE may add an extra "CR LF" to each line, which causes problems.)

Table of Contents

What is OpenStack? What is Kubernetes? What is Docker?

In the OpenStack lab, the controller performs the function of partitioning resources. The compute nodes are the collection of resources (memory, CPUs, hard drive space) to be partitioned. When creating a VM with "X" memory, "Y" CPUs and "Z" hard drive space, OpenStack's controller reviews its pool of available resources, allocates the quota, and then creates the VM on one of the available compute nodes. Many VMs can be created on a single compute node. OpenStack's controller uses many criteria to choose a compute node, but if an application spans multiple VMs, anti-affinity rules can be used to ensure the VMs do not all end up on a single compute node, which would be bad for resilience.
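For example, a Nova server group with an anti-affinity policy can be used to keep the worker VMs on different compute nodes. The sketch below assumes the group name "k8s-workers"; the flavor, image, network and key names are the same placeholders used in the VM creation commands later on this page.

Code Block
languagebash
# Create a server group whose members must be scheduled on different compute nodes
openstack server group create --policy anti-affinity k8s-workers

# Pass the group as a scheduler hint when creating each worker VM
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" \
  --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" \
  --hint group=<server-group-id> "k8s-node1"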

...

Deployment Architecture

The Kubernetes deployment in this tutorial is set up on top of OpenStack VMs; let's call this layer the undercloud. The undercloud can consist of physical boxes or VMs, and the VMs can come from different cloud providers, but in this tutorial we use OpenStack. The following table shows the layers of software that need to be considered when thinking about resilience:

...

Code Block
languagebash
openstack server list;
openstack network list;
openstack flavor list;
openstack keypair list;
openstack image list;
openstack security group list

openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-master"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node1"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node2"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node3"


Configure Each VM 

Repeat the following steps on each VM:

Pre-Configure Each VM

Make sure that on each VM:

  • The packages are up to date
  • The clock is synchronized

...

Question: Did you check the date on all K8s nodes to make sure they are in sync?
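A quick way to check both items on every VM is shown below; the NTP package choice is an assumption, so use whatever time service your lab prescribes.

Code Block
languagebash
# Bring the package index and installed packages up to date
sudo apt-get update && sudo apt-get -y upgrade

# Print the current date/time; the output should match on all K8s nodes
date

# Keep the clock synchronized (assumption: plain NTP)
sudo apt-get install -y ntp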

Install Docker

The ONAP applications are packaged in Docker containers.

...

Code Block
languagebash
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

# Add the Docker repository to "/etc/apt/sources.list", pointing at the latest stable release for the Ubuntu flavour on this machine ("lsb_release -cs")
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce

sudo docker run hello-world


# Verify:
sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
c66d903a0b1f        hello-world         "/hello"            10 seconds ago      Exited (0) 9 seconds ago                       vigorous_bhabha


Install the Kubernetes Packages

Just install the packages; there is no need to configure them yet.
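For reference, a typical install on Ubuntu 16.04 (xenial) looks like the sketch below; the repository line and the unpinned package versions are assumptions, so adjust them to the Kubernetes version you intend to run.

Code Block
languagebash
# Add the Kubernetes apt repository and its signing key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install the packages (no configuration yet)
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni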

...

Note: If you later need to remove the Kubernetes packages, use "apt autoremove kubelet; apt autoremove kubeadm; apt autoremove kubectl; apt autoremove kubernetes-cni".

Configure the Kubernetes Cluster with kubeadm

kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster.  Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.

Configure the Kubernetes Master Node (k8s-master)

The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file as there is information which you will need to refer to afterwards.
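For example (a sketch; the log file path matches the one referenced later in the worker-node section):

Code Block
languagebash
# Run on k8s-master and keep the full output for later reference
sudo kubeadm init | sudo tee /root/kubeadm_init.log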

...

Code Block
languagebash
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.7
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubefed-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.114.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 44.002324 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubefed-1 as master by adding a label and a taint
[markmaster] Master kubefed-1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 2246a6.83b4c7ca38913ce1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a

NOTE: the "kubeadm join .." command shows in the log of kubeadm init, should run in each VMs in the k8s cluster to perform a cluster, use "kubectl get nodes" to make sure all nodes are all joined.


Execute the following snippet (as ubuntu user) to get kubectl to work. 
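The snippet is the same as the one printed at the end of the kubeadm init log:

Code Block
languagebash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config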

...

Code Block
languagebash
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                      1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3       Running   0          44m       10.32.0.2        k8s-master
kube-system   kube-proxy-lnv7r                     1/1       Running   0          44m       10.147.112.140   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   weave-net-b2hkh                      2/2       Running   0          1m        10.147.112.140   k8s-master


#(With Kubernetes version 1.10.1 there will be 2 coredns pods with different IP addresses)

# Verify that the AVAILABLE count for the "kube-dns" or "coredns" deployment changes to 1 (2 with Kubernetes version 1.10.1)
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           1h

Troubleshooting tips (the same commands are collected in the code block below):

  • If any of the weave pods runs into a problem and gets stuck in the "ImagePullBackOff" state, try running sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" again.
  • Sometimes you need to delete a problematic pod so that it terminates and starts fresh. Use "kubectl delete po/<pod-name> -n <name-space>" to delete a pod.
  • To "unjoin" a worker node, use "kubectl delete node <node-name>" (go through the "Undeploy APPC" process at the end if you have an APPC cluster running).

Install "make" ( Learn more about ubuntu-make here : https://wiki.ubuntu.com/ubuntu-make

Code Block
#######################
# Install make (used later to build the OOM Helm charts from the oom/kubernetes directory).
#######################
$ sudo apt install make
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  linux-headers-4.4.0-62 linux-headers-4.4.0-62-generic linux-image-4.4.0-62-generic snap-confine
Use 'sudo apt autoremove' to remove them.
Suggested packages:
  make-doc
The following NEW packages will be installed:
  make
0 upgraded, 1 newly installed, 0 to remove and 72 not upgraded.
Need to get 151 kB of archives.
After this operation, 365 kB of additional disk space will be used.
Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Fetched 151 kB in 0s (208 kB/s)
Selecting previously unselected package make.
(Reading database ... 121778 files and directories currently installed.)
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up make (4.1-6) ...

Install Helm and Tiller on the Kubernetes Master Node (k8s-master)

ONAP uses Helm, a package manager for Kubernetes.
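Before configuring Tiller, the Helm client must be installed on k8s-master. A minimal sketch is shown below; the Helm 2 version is an assumption, so use the release recommended for your OOM branch.

Code Block
languagebash
# Download and install the Helm 2 client (version below is an assumption)
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
tar -zxvf helm-v2.8.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version --client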

...

Code Block
languagebash
# Uninstalls Tiller from a cluster
helm reset --force
 
 
# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding
 
 
kubectl create -f tiller-serviceaccount.yaml
 
#init helm
helm init --service-account tiller --upgrade
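The tiller-serviceaccount.yaml referenced above typically contains a ServiceAccount plus a ClusterRoleBinding. The sketch below matches the resource names used in the cleanup commands; binding Tiller to cluster-admin is an assumption, so restrict it if your cluster requires.

Code Block
languagebash
# Create tiller-serviceaccount.yaml (sketch)
cat > tiller-serviceaccount.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF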

Configure the Kubernetes Worker Nodes (k8s-node<n>)

Setting up the worker nodes is very easy. Just refer back to the "kubeadm init" output log (/root/kubeadm_init.log). The last line of the log contains a "kubeadm join" command with the token and other parameters.
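For example, run the join command (with the token and hash from your own log, not the sample values above) on each of k8s-node1, k8s-node2 and k8s-node3:

Code Block
languagebash
# On each worker node, as root
sudo kubeadm join --token <token> <k8s-master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>

# Back on k8s-master, confirm all nodes have joined and are Ready
kubectl get nodes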

...

Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.

Cluster's Full Picture

You can run " kubectl describe node" on the Master node and get a complete report on nodes (including workers) and thier system resources.

Configure dockerdata-nfs

This is a shared directory which must be mounted on all of the Kubernetes VMs (master and worker nodes), because many of the ONAP pods use this directory to share data.

See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instructions on how to set this up.
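A common setup is an NFS export from the master node that each worker mounts; the sketch below makes that assumption, so follow the linked page for the full, authoritative procedure.

Code Block
languagebash
# On k8s-master: export /dockerdata-nfs
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
echo "/dockerdata-nfs *(rw,sync,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -a
sudo systemctl restart nfs-kernel-server

# On each worker node: mount the shared directory
sudo apt-get install -y nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount <k8s-master-ip>:/dockerdata-nfs /dockerdata-nfs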


Configure ONAP

Clone the OOM project only on the Kubernetes Master Node

As the ubuntu user, clone the oom repository.
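A typical clone, assuming the standard ONAP gerrit repository (add "-b <branch>" to pick a specific release):

Code Block
languagebash
# As the ubuntu user on k8s-master
cd ~
git clone https://gerrit.onap.org/r/oom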

...

Note

You may use any known stable OOM release for the APPC deployment. The URL above downloads the latest OOM.


Customize the oom/kubernetes/onap parent chart, in particular the values.yaml file, to suit your deployment. You can selectively enable or disable ONAP components by changing the subchart **enabled** flags to *true* or *false*.

Code Block
$ vi oom/kubernetes/onap/values.yaml
 Example:
...
robot: # Robot Health Check
  enabled: true
sdc:
  enabled: false
appc:
  enabled: true
so: # Service Orchestrator
  enabled: false

Deploy APPC

To deploy only APPC, customize the parent chart to disable all components except APPC, as shown in the file below. Also set global.persistence.mountPath to a non-mounted directory (by default it is set to the mounted directory /dockerdata-nfs).
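Once values.yaml is customized, the charts are built and the release is installed with Helm. The sketch below assumes the release name "dev" and namespace "onap" (matching the pod names shown further down); the exact make targets can differ between OOM releases.

Code Block
languagebash
cd ~/oom/kubernetes

# Build and package the local Helm charts
make all

# Serve the packaged charts from a local repo and register it with Helm
helm serve &
helm repo add local http://127.0.0.1:8879

# Install the onap parent chart as release "dev" in namespace "onap"
helm install local/onap --name dev --namespace onap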

...

Code Block
ubuntu@k8s-master:~/oom/kubernetes$ kubectl get pods --all-namespaces -o wide -w
NAMESPACE     NAME                                            READY     STATUS            RESTARTS   AGE       IP            NODE
kube-system   etcd-k8s-master                                 1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   kube-apiserver-k8s-master                       1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   kube-controller-manager-k8s-master              1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   kube-dns-86f4d74b45-px44s                       3/3       Running           21         27d       10.32.0.5     k8s-master
kube-system   kube-proxy-25tm5                                1/1       Running           8          27d       10.12.5.171   k8s-master
kube-system   kube-proxy-6dt4z                                1/1       Running           4          27d       10.12.5.174   k8s-node1
kube-system   kube-proxy-jmv67                                1/1       Running           4          27d       10.12.5.193   k8s-node2
kube-system   kube-proxy-l8fks                                1/1       Running           6          27d       10.12.5.194   k8s-node3
kube-system   kube-scheduler-k8s-master                       1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   tiller-deploy-84f4c8bb78-s6bq5                  1/1       Running           0          4d        10.47.0.7     k8s-node2
kube-system   weave-net-bz7wr                                 2/2       Running           20         27d       10.12.5.194   k8s-node3
kube-system   weave-net-c2pxd                                 2/2       Running           13         27d       10.12.5.174   k8s-node1
kube-system   weave-net-jw29c                                 2/2       Running           20         27d       10.12.5.171   k8s-master
kube-system   weave-net-kxxpl                                 2/2       Running           13         27d       10.12.5.193   k8s-node2
onap          dev-appc-0                                      0/2       PodInitializing   0          2m        10.47.0.5     k8s-node2
onap          dev-appc-1                                      0/2       PodInitializing   0          2m        10.36.0.8     k8s-node3
onap          dev-appc-2                                      0/2       PodInitializing   0          2m        10.44.0.7     k8s-node1
onap          dev-appc-cdt-8cbf9d4d9-mhp4b                    1/1       Running           0          2m        10.47.0.1     k8s-node2
onap          dev-appc-db-0                                   2/2       Running           0          2m        10.36.0.5     k8s-node3
onap          dev-appc-dgbuilder-54766c5b87-xw6c6             0/1       PodInitializing   0          2m        10.44.0.2     k8s-node1
onap          dev-robot-785b9bfb45-9s2rs                      0/1       PodInitializing   0          2m        10.36.0.7     k8s-node3


Cleanup deployed ONAP instance

To delete a deployed instance, use the following command:
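With Helm 2 the release is removed as sketched below (release name "dev" and namespace "onap" as above); any leftover persistent volumes and claims are then cleaned up as shown in the code block that follows.

Code Block
languagebash
# Delete the "dev" release and purge its history
helm delete dev --purge

# Optionally remove the namespace once all resources are gone
kubectl delete namespace onap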

...

Code Block
#query existing pv in onap namespace
$ kubectl get pv -n onap
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                               STORAGECLASS       REASON    AGE
dev-appc-data0                 1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-0       dev-appc-data                8m
dev-appc-data1                 1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-2       dev-appc-data                8m
dev-appc-data2                 1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-1       dev-appc-data                8m
dev-appc-db-data               1Gi        RWX            Retain           Bound     onap/dev-appc-db-data               dev-appc-db-data             8m

#delete existing pv
$ kubectl delete pv dev-appc-data0 -n onap
pv "dev-appc-data0" deleted
$ kubectl delete pv dev-appc-data1 -n onap
pv "dev-appc-data0" deleted
$ kubectl delete pv dev-appc-data2 -n onap
pv "dev-appc-data2" deleted
$ kubectl delete pv dev-appc-db-data -n onap
pv "dev-appc-db-data" deleted

#query existing pvc in onap namespace
$ kubectl get pvc -n onap
NAME                           STATUS    VOLUME                         CAPACITY   ACCESS MODES   STORAGECLASS       AGE
dev-appc-data-dev-appc-0       Bound     dev-appc-data0                 1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-1       Bound     dev-appc-data2                 1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-2       Bound     dev-appc-data1                 1Gi        RWO            dev-appc-data      9m
dev-appc-db-data               Bound     dev-appc-db-data               1Gi        RWX            dev-appc-db-data   9m

#delete existing pvc
$ kubectl delete pvc dev-appc-data-dev-appc-0 -n onap
pvc "dev-appc-data-dev-appc-0" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-1 -n onap
pvc "dev-appc-data-dev-appc-1" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-2 -n onap
pvc "dev-appc-data-dev-appc-2" deleted
$ kubectl delete pvc dev-appc-db-data -n onap
pvc "dev-appc-db-data" deleted

Verify APPC Clustering

Refer to Validate the APPC ODL cluster.

Get the details from the Kubernetes Master Node


Access to the RestConf UI is via https://<Kubernetes-Master-Node-IP>:30230/apidoc/explorer/index.html (admin user)

...