This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy SDN-C within that Kubernetes cluster.
(To view the current page, Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems).
What is OpenStack? What is Kubernetes? What is Docker?
In the OpenStack lab, the controller performs the function of partitioning resources. The compute nodes are the collection of resources (memory, CPUs, hard drive space) to be partitioned. When creating a VM with "X" memory, "Y" CPUs and "Z" hard drive space, OpenStack's controller reviews its pool of available resources, allocates the quota, and then creates the VM on one of the available compute nodes. Many VMs can be created on a single compute node. OpenStack's controller uses many criteria to choose a compute node, but if an application spans multiple VMs, affinity rules can be used to ensure the VMs do not all end up on a single compute node, which would be bad for resilience.
Kubernetes is similar to OpenStack in that it manages resources. Instead of scheduling VMs, Kubernetes schedules Pods. In a Kubernetes cluster, there is a single master node and multiple worker nodes. The Kubernetes master node is like the OpenStack controller in that it allocates resources for Pods. Kubernetes worker nodes are the pool of resources to be allocated, similar to OpenStack's compute nodes. Pods, like VMs, can have affinity rules configured to increase an application's resilience.
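As a purely illustrative aside (not part of this tutorial), the sketch below shows what such an affinity rule looks like in Kubernetes: a hypothetical 3-replica Deployment that uses podAntiAffinity to keep its pods on separate worker nodes. The name "resilient-app" and the nginx image are made up, and the apiVersion assumes Kubernetes 1.9 or newer (use apps/v1beta2 on older clusters).

# Hypothetical example: spread 3 replicas across distinct worker nodes
cat > anti-affinity-example.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resilient-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: resilient-app
  template:
    metadata:
      labels:
        app: resilient-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: resilient-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: nginx
EOF
# kubectl apply -f anti-affinity-example.yaml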
If you would like more information on these subjects, please explore these links:
Deployment Architecture
The Kubernetes deployment in this tutorial will be set up on top of OpenStack VMs. Let's call this the undercloud. The undercloud can be physical machines or VMs. The VMs can come from different cloud providers, but in this tutorial we will use OpenStack. The following table shows the layers of software that need to be considered when thinking about resilience:
| Hardware Base OS | OpenStack Software Configured on Base OS | VMs Deployed by OpenStack | Kubernetes Software Configured on VMs | Pods Deployed by Kubernetes | Docker Containers Deployed within a Pod |
|---|---|---|---|---|---|
| Computer 1 | Controller Node | | | | |
| Computer 2 | Compute | VM 1 | k8s-master | | |
| Computer 3 | Compute | VM 2 | k8s-node1 | sdnc-0 | sdnc-controller-container, filebeat-onap |
| | | | | sdnc-dbhost-0 | sdnc-db-container, xtrabackup |
| Computer 4 | Compute | VM 3 | k8s-node2 | sdnc-0 | sdnc-controller-container, filebeat-onap |
| | | | | sdnc-dbhost-0 | sdnc-db-container, xtrabackup |
| Computer 5 | Compute | VM 4 | k8s-node3 | sdnc-0 | sdnc-controller-container, filebeat-onap |
| | | | | nfs-provisioner-xxx | nfs-provisioner |
Setting up an OpenStack lab is out of scope for this tutorial. Assuming you have a lab, you will need to create 1+n VMs: one to be configured as the Kubernetes master node, and "n" to be configured as Kubernetes worker nodes. We will create 3 Kubernetes worker nodes for this tutorial because we want each of our SDN-C replicas to land on a different VM for resiliency.
There have been some changes committed to the OOM repo: two new pods, dmaap and ueb-listener, were added. We observed issues deploying the SDNC pods using one master and three worker nodes, but were able to deploy SDNC with the latest OOM using one master and four worker nodes.
Hence, if you wish to use the latest OOM for SDNC deployment, it is recommended to add another Compute Node (VM 5) as a worker node (k8s-node4).
Create the Undercloud
The examples here use the OpenStackClient; however, the OpenStack Horizon GUI could also be used. Start by creating 4 VMs with the hostnames k8s-master, k8s-node1, k8s-node2, and k8s-node3. Each VM should have internet access and approximately:
- 16384 MB of RAM
- 20 GB of disk
- 4 vCPUs
How many resources are needed?
There was no evaluation of how much quota is actually needed; the above numbers were arbitrarily chosen as being sufficient. A lot more is likely needed if the full ONAP environment is deployed. For just SDN-C, this is more than plenty.
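If no matching flavor exists in your OpenStack lab, one can be created with the OpenStackClient. The flavor name "m1.k8s" below is only an example; nothing else in this tutorial depends on it.

# Hypothetical flavor matching the sizing above (16384 MB RAM, 20 GB disk, 4 vCPUs)
openstack flavor create --ram 16384 --disk 20 --vcpus 4 m1.k8s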
Use the ubuntu 16.04 cloud image to create the VMs. This image can be found at https://cloud-images.ubuntu.com/.
wget https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
openstack image create ubuntu-16.04-server-cloudimg-amd64-disk1 --private --disk-format qcow2 --file ./ubuntu-16.04-server-cloudimg-amd64-disk1.img
Exactly how to create VMs in OpenStack is out of scope for this tutorial. However, here are some examples of OpenStackClient commands that can be used to perform this job:
openstack server list; openstack network list; openstack flavor list; openstack keypair list; openstack image list; openstack security group list
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-master"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node1"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node2"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node3"
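Since the three worker-node commands differ only in the server name, a small loop can be used instead of typing the command three times. The placeholders ("flavor-name", "keypair-name", etc.) are the same assumptions as in the commands above.

# Create the three worker-node VMs in one loop
for n in 1 2 3; do
  openstack server create --flavor "flavor-name" \
    --image "ubuntu-16.04-server-cloudimg-amd64-disk1" \
    --key-name "keypair-name" --nic net-id="net-name" \
    --security-group "security-group-id" "k8s-node$n"
done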
Configure Each VM
Repeat the following steps on each VM:
Pre-Configure Each VM
Make sure that on each VM:
- The packages are up to date
- The clock is synchronized
As the ubuntu user, run the following.
# (Optional) fix vi bug in some versions of Mobaxterm (changes first letter of edited file after opening to "g")
vi ~/.vimrc   ==> repeat for root/ubuntu and any other user which will edit files.
# Add the following 2 lines.
syntax on
set background=dark

# Add the hostnames of the kubernetes nodes (master and workers) to /etc/hosts
sudo vi /etc/hosts
# <IP address> <hostname>

# Turn off the firewall and allow all incoming HTTP connections through IPTABLES
sudo ufw disable
sudo iptables -I INPUT -j ACCEPT

# Fix the server timezone and select your timezone.
sudo dpkg-reconfigure tzdata

# (Optional) create a bash history file as the ubuntu user so that it does not accidentally get created as the root user.
touch ~/.bash_history

# (Optional) turn on ssh password authentication and give the ubuntu user a password if you do not like using ssh keys.
# Set "PasswordAuthentication yes" in the /etc/ssh/sshd_config file and then set the ubuntu password
sudo vi /etc/ssh/sshd_config; sudo systemctl restart sshd; sudo passwd ubuntu;

# Update the VM with the latest core packages
sudo apt clean
sudo apt update
sudo apt -y full-upgrade
sudo reboot

# Set up ntp on your image if needed. It is important that all the VMs' clocks are in sync or it will cause problems joining kubernetes nodes to the kubernetes cluster
sudo apt install ntp
sudo apt install ntpdate

# It is recommended to add the local ntp-hostname or the ntp server's IP address to ntp.conf
# Sync up your VM clock with that of your ntp server. The best choice for the ntp server is one outside the Kubernetes VMs... a solid machine. Make sure you can ping it!
# A service restart is needed for the time to sync up. You can run ntpdate from the command line for an immediate change.
sudo vi /etc/ntp.conf
# Append the following lines to /etc/ntp.conf, to make them permanent.

date
sudo service ntp stop
sudo ntpdate -s <ntp-hostname | ntp server's IP address>   ==> e.g.: sudo ntpdate -s 10.247.5.11
sudo service ntp start
date

# Some of the clustering scripts (switch_voting.sh and sdnc_cluster.sh) require JSON parsing, so install jq on the master(s) only
sudo apt install jq
Question: Did you check the date on all K8s nodes to make sure their clocks are in sync?
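One quick way to answer that question, assuming the hostnames above are resolvable and your ssh key is accepted on every node, is to loop over the nodes from any machine that can reach them:

# Print the date reported by each node; the timestamps should match to within a second or two
for h in k8s-master k8s-node1 k8s-node2 k8s-node3; do
  echo -n "$h: "; ssh -o StrictHostKeyChecking=no ubuntu@"$h" date
done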
Install Docker
The ONAP apps are packaged in Docker containers.
The following snippet was taken from https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce:
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

# Add a docker repository to "/etc/apt/sources.list". It is for the latest stable release for the ubuntu flavour on the machine ("lsb_release -cs")
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce
sudo docker run hello-world

# Verify:
sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
c66d903a0b1f        hello-world         "/hello"            10 seconds ago      Exited (0) 9 seconds ago                       vigorous_bhabha
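Optionally, you may also want to make sure the docker daemon starts on boot and, if you prefer not to prefix every docker command with sudo, add the ubuntu user to the docker group. This is not required by the rest of the tutorial.

# Optional housekeeping after the install
sudo systemctl enable docker        # start docker automatically on reboot
sudo usermod -aG docker ubuntu      # let the ubuntu user run docker without sudo (takes effect on next login)
docker --version                    # confirm the installed version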
Install the Kubernetes Packages
Just install the packages; there is no need to configure them yet.
The following snippet was taken from https://kubernetes.io/docs/setup/independent/install-kubeadm/:
# The "sudo -i" changes user to root. sudo -i apt-get update && apt-get install -y apt-transport-https curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - # Add a kubernetes repository for the latest stable one for the ubuntu falvour on the machine (here:xenial) cat <<EOF >/etc/apt/sources.list.d/kubernetes.list deb http://apt.kubernetes.io/ kubernetes-xenial main EOF apt-get update # As of today (late April 2018) version 1.10.1 of kubernetes packages are available. To install that version, you can run: apt-get install -y kubelet=1.10.1-00 apt-get install -y kubectl=1.10.1-00 apt-get install -y kubeadm # To install latest version of Kubenetes packages. (recommended) apt-get install -y kubelet kubeadm kubectl # To install old version of kubernetes packages, follow the next line. # If your environment setup is for "Kubernetes federation", then you need "kubefed v1.10.1". We recommend all of Kubernetes packages to be of the same version. apt-get install -y kubelet=1.8.6-00 kubernetes-cni=0.5.1-00 apt-get install -y kubectl=1.8.6-00 apt-get install -y kubeadm # Verify version kubectl version kubeadm version kubelet --version exit # Append the following lines to ~/.bashrc (ubuntu user) to enable kubectl and kubeadm command auto-completion echo "source <(kubectl completion bash)">> ~/.bashrc echo "source <(kubeadm completion bash)">> ~/.bashrc
Note: If you intend to remove the kubernetes packages, use "apt autoremove kubelet; apt autoremove kubeadm; apt autoremove kubectl".
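Conversely, if you want to make sure an unattended "apt upgrade" does not move you to a newer kubelet/kubeadm/kubectl than the one you validated, the packages can be pinned; unpin them the same way before a deliberate upgrade.

# Pin the kubernetes packages at their current version
sudo apt-mark hold kubelet kubeadm kubectl
# ...and unpin them later when you actually want to upgrade
sudo apt-mark unhold kubelet kubeadm kubectl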
Configure the Kubernetes Cluster with kubeadm
kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster. Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.
Configure the Kubernetes Master Node (k8s-master)
The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file as there is information which you will need to refer to afterwards.
Note: A new add-on named "kube-dns" will be added to the master node. However, there is a recommended option to replace it with "CoreDNS", by providing the "--feature-gates=CoreDNS=true" parameter to the "kubeadm init" command.
# On the k8s-master vm set up the kubernetes master node.
# The "sudo -i" changes user to root.
sudo -i

# There is no kubernetes app running.
ps -ef | grep -i kube | grep -v grep

# Pick one DNS add-on: either "kube-dns" or "CoreDNS". If your environment setup is for "Kubernetes federation" or "SDN-C Geographic Redundancy", then use the "CoreDNS" add-on.
# Note that kubeadm version 1.8.x does not have support for the coredns feature gate.
# Upgrade kubeadm to the latest version before running the command below:

# With the "CoreDNS" add-on (recommended)
kubeadm init --feature-gates=CoreDNS=true | tee ~/kubeadm_init.log

# With the kube-dns add-on
kubeadm init | tee ~/kubeadm_init.log

# Verify that many kubernetes apps are running (kubelet, kube-scheduler, etcd, kube-apiserver, kube-proxy, kube-controller-manager)
ps -ef | grep -i kube | grep -v grep

# The "exit" reverts user back to ubuntu.
exit
The output of "kubeadm init" (with kube-dns addon) will look like below:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.7
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubefed-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.114.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 44.002324 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubefed-1 as master by adding a label and a taint
[markmaster] Master kubefed-1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 2246a6.83b4c7ca38913ce1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a
Execute the following snippet (as the ubuntu user) to get kubectl to work:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify that a set of pods has been created. The coredns (or kube-dns) pod will be in the Pending state.
# If you installed the coredns add-on
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   coredns-65dcdb4cf-8dr7w              0/1       Pending   0          10m       <none>          <none>
kube-system   coredns-65dcdb4cf-8ez2s              0/1       Pending   0          10m       <none>          <none>
kube-system   etcd-k8s-master                      1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-proxy-jztl4                     1/1       Running   0          10m       10.147.99.149   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          9m        10.147.99.149   k8s-master
# (There will be 2 coredns pods with kubernetes version 1.10.1 and higher)

# If you did not install the coredns add-on, a kube-dns pod will be created
sudo kubectl get pods --all-namespaces -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-k8s-s1-master                      1/1       Running   0          23d       10.147.99.131   k8s-s1-master
kube-apiserver-k8s-s1-master            1/1       Running   0          23d       10.147.99.131   k8s-s1-master
kube-controller-manager-k8s-s1-master   1/1       Running   0          23d       10.147.99.131   k8s-s1-master
kube-dns-6f4fd4bdf-czn68                3/3       Pending   0          23d       <none>          <none>
kube-proxy-ljt2h                        1/1       Running   0          23d       10.147.99.148   k8s-s1-node0
kube-scheduler-k8s-s1-master            1/1       Running   0          23d       10.147.99.131   k8s-s1-master

# (Optional) run the following commands if you are curious.
sudo kubectl get node
sudo kubectl get secret
sudo kubectl config view
sudo kubectl config current-context
sudo kubectl get componentstatus
sudo kubectl get clusterrolebinding --all-namespaces
sudo kubectl get serviceaccounts --all-namespaces
sudo kubectl get pods --all-namespaces -o wide
sudo kubectl get services --all-namespaces -o wide
sudo kubectl cluster-info
A "Pod network" must be deployed to use the cluster. This will let pods to communicate with eachother.
There are many different pod networks to choose from. See https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network for choices. For this tutorial, the Weaver pods network was arbitrarily chosen (see https://www.weave.works/docs/net/latest/kubernetes/kube-addon/ for more information).
The following snippet will install the Weaver Pod network:
sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Sample output:
serviceaccount "weave-net" configured
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
role "weave-net" created
rolebinding "weave-net" created
daemonset "weave-net" created
Pay attention to the new pod (and serviceaccount) for "weave-net". This pod provides pod-to-pod connectivity.
Verify the status of the pods. After a short while, the "Pending" status of "coredns" or "kube-dns" will change to "Running".
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                      1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3       Running   0          44m       10.32.0.2        k8s-master
kube-system   kube-proxy-lnv7r                     1/1       Running   0          44m       10.147.112.140   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   weave-net-b2hkh                      2/2       Running   0          1m        10.147.112.140   k8s-master
# (There will be 2 coredns pods with different IP addresses, with kubernetes version 1.10.1)

# Verify that the AVAILABLE flag for the "kube-dns" or "coredns" deployment has changed to 1 (2 with kubernetes version 1.10.1).
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           1h
Troubleshooting tip:
- If any of the weave pods runs into a problem and gets stuck in the "ImagePullBackOff" state, you can try running the "sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"" command again.
- Sometimes you need to delete the problematic pod and let it terminate and start fresh. Use "kubectl delete po/<pod-name> -n <name-space>" to delete a pod.
- To "Unjoin" a worker node "kubectl delete node <node-name> (go through the "Undeploy SDNC" process at the end if you have an SDNC cluster running)
- If for any reason you need to re-create the kubernetes cluster, first remove /etc/kubernetes/, /var/lib/etcd and /etc/systemd/system/kubelet.service.d/, then run the kubeadm init command again.
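A minimal sketch of that re-create procedure on the master node might look like the following; "kubeadm reset" cleans up most of the state, and removing the directories listed above afterwards is belt-and-braces:

# Run on the node being rebuilt
sudo kubeadm reset
sudo rm -rf /etc/kubernetes/ /var/lib/etcd /etc/systemd/system/kubelet.service.d/
# Then re-initialize the master (or re-join a worker with "kubeadm join")
sudo kubeadm init --feature-gates=CoreDNS=true | tee ~/kubeadm_init.log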
Install Helm and Tiller on the Kubernetes Master Node (k8s-master)
ONAP uses Helm, a package manager for kubernetes.
Install helm (client side). The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:
Note: You may need to install an older version of helm; if so, follow the "Downgrade helm" section (scroll down).
# As a root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
Install Tiller (server side of helm)
Tiller manages the installation of helm packages (charts). Tiller requires a ServiceAccount to be set up in Kubernetes before being initialized. The following snippet will do that for you:
(Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems).
# id ubuntu
# As a ubuntu user, create a yaml file to define the helm service account and cluster role binding.
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF

# Create a ServiceAccount and ClusterRoleBinding based on the created file.
sudo kubectl create -f tiller-serviceaccount.yaml

# Verify
which helm
helm version
# Only Client version is shown. Expect delay in getting prompt back. CTRL+C to get the prompt back!
Initialize helm. This command installs Tiller. It also discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and uses the default context.
helm init --service-account tiller --upgrade

# A new pod is created, but will be in pending status.
kubectl get pods --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5   0/1   Pending   0   7m   <none>   <none>

# A new service is created
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>   44134/TCP   47m   app=helm,name=tiller

# A new deployment is created, but the AVAILABLE flag is set to "0".
kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m
Downgrade helm
The helm installation procedure will put the latest version on your master node. The Tiller (helm server) version follows the helm (helm client) version, so Tiller will also be the latest.
If the helm/tiller version on your K8s master node is not what the ONAP installation expects, you will get a "Chart incompatible with Tiller v2.9.1" error. See below:
ubuntu@kanatamaster:~/oominstall/kubernetes$ helm install local/onap --name dev --namespace onap
Error: Chart incompatible with Tiller v2.9.1
ubuntu@kanatamaster:~/oominstall/kubernetes$
A temporary fix for this is often to downgrade helm/tiller. Here is the procedure:
Step 1) downgrade helm client (helm)
- Download the desired version (tar.gz file) from the kubernetes website, for example: https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz. You can change the version number in the file name to get a different version.
(curl https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz --output helm-v2.8.1-linux-amd64.tar.gz --silent)
- Unzip and untar the file. It will create a "linux-amd64" directory.
- Copy the helm binary from the linux-amd64 directory to /usr/local/bin/ (kill the helm process if it is blocking the copy).
- Run "helm version"
Step 2) downgrade helm server (Tiller)
Use helm reset. Follow the steps below:
# Uninstall Tiller from the cluster
helm reset --force

# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding

# Run the below command to get the matching tiller version for helm
kubectl create -f tiller-serviceaccount.yaml

# Then run helm init
helm init --service-account tiller --upgrade

# Verify
helm version
Configure the Kubernetes Worker Nodes (k8s-node<n>)
Setting up the cluster worker nodes is very easy. Just refer back to the "kubeadm init" output logs (/root/kubeadm_init.log). The last line of the logs contains a "kubeadm join" command with the token and other parameters.
Capture those parameters and then execute it as root on each of the Kubernetes worker nodes: k8s-node1, k8s-node2, and k8s-node3.
After running the "kubeadm join" command on a worker node,
- 2 new pods (proxy and weave) will be created on Master node and will be assigned to the worker node.
- The tiller pod status will change to "running" .
- The AVAILABLE flag for tiller-deploy deployment will be changed to "1".
- The worker node will join the cluster.
The command looks like the following snippet (find the command at the bottom of /root/kubeadm_init.log):
# Should change to root user on the worker node.
kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a

# Make sure the output contains "This node has joined the cluster:".
Verify the results from master node:
kubectl get pods --all-namespaces -o wide
kubectl get nodes
# Sample Output:
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    2h        v1.8.6
k8s-node1    Ready     <none>    53s       v1.8.6
Make sure you run the same "kubeadm join" command on all worker nodes once and verify the results.
Return to the Kubernetes master node VM and execute the "kubectl get nodes" command to see all the Kubernetes nodes. It might take some time, but eventually each node will have a status of "Ready":
kubectl get nodes
# Sample Output:
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.8.5
k8s-node1    Ready     <none>    1d        v1.8.5
k8s-node2    Ready     <none>    1d        v1.8.5
k8s-node3    Ready     <none>    1d        v1.8.5
Make sure that the tiller pod is running. Execute the following command (from master node) and look for a po/tiller-deploy-xxxx with a “Running” status. For example:
(If you are using coredns instead of kube-dns, you will notice that the pod has only one container.)
kubectl get pods --all-namespaces -o wide
# Sample output:
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                      1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3       Running   0          2h        10.32.0.2        k8s-master
kube-system   kube-proxy-4zztj                     1/1       Running   0          2m        10.147.112.150   k8s-node2
kube-system   kube-proxy-lnv7r                     1/1       Running   0          2h        10.147.112.140   k8s-master
kube-system   kube-proxy-t492g                     1/1       Running   0          20m       10.147.112.164   k8s-node1
kube-system   kube-proxy-xx8df                     1/1       Running   0          2m        10.147.112.169   k8s-node3
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   tiller-deploy-b6bf9f4cc-vbrc5        1/1       Running   0          42m       10.44.0.1        k8s-node1
kube-system   weave-net-b2hkh                      2/2       Running   0          1h        10.147.112.140   k8s-master
kube-system   weave-net-s7l27                      2/2       Running   1          2m        10.147.112.169   k8s-node3
kube-system   weave-net-vmlrq                      2/2       Running   0          20m       10.147.112.164   k8s-node1
kube-system   weave-net-xxgnq                      2/2       Running   1          2m        10.147.112.150   k8s-node2
Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.
Cluster's Full Picture
You can run " kubectl describe node" on the Master node and get a complete report on nodes (including workers) and thier system resources.
Configure dockerdata-nfs
This is a shared directory which must be mounted on all of the Kubernetes VMs (master and worker nodes), because many of the ONAP pods use this directory to share data.
See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instructions on how to set this up.
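As a rough sketch of what that linked page sets up (assuming the NFS export lives on the master node; the IP below is a placeholder), each node ends up with something like:

# On every node: install the NFS client and mount the shared folder
sudo apt install nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount <k8s-master-ip>:/dockerdata-nfs /dockerdata-nfs
# (optionally add a matching line to /etc/fstab so the mount survives a reboot)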
Configuring SDN-C ONAP
Clone the OOM project only on the Kubernetes Master Node
As the ubuntu user, clone the oom repository.
git clone https://gerrit.onap.org/r/oom
You may use any specific known stable OOM release for SDNC deployment. The above URL downloads the latest OOM.
We identified some issues with the latest OOM deployment after the namespace change. The details and resolutions for these issues are provided below.
There are a few things missing after the namespace change:
- The PV is not getting created, although the PVC is; we therefore need to provide a PV explicitly so it is available for the PVC to claim. Refer to the attached files pv-volume-1.yaml and pv-volume-2.yaml. To create them, use: kubectl create -f <filename>.yaml
# Verify PVC is "Bound" ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pvc --all-namespaces NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE onap sdnc-data-sdnc-dbhost-0 Bound nfs-volume4 11Gi RWO,RWX onap-sdnc-data 1h onap sdnc-data-sdnc-dbhost-1 Bound nfs-volume5 11Gi RWO,RWX onap-sdnc-data 43m # Verify PV is "Bound" ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pv --all-namespaces NAMESPACE NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE nfs-volume4 11Gi RWO,RWX Retain Bound onap/sdnc-data-sdnc-dbhost-0 onap-sdnc-data 1h nfs-volume5 11Gi RWO,RWX Retain Bound onap/sdnc-data-sdnc-dbhost-1 onap-sdnc-data 2m
- The ServiceAccount "default" in the new namespace (onap) is not bound to the cluster-admin role. As a result, it gives this error:
E0319 15:40:32.717436 1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:onap:default" cannot list storageclasses.storage.k8s.io at the cluster scope
Resolution: Create a cluster role binding for this service account explicitly. Refer to the attached file binding.yaml (a minimal sketch is also shown after this list). To create it, use: kubectl create -f <filename>.yaml
- The secret used by the docker registry to pull images is not getting created. As a result, it gives this error:
Warning FailedSync <invalid> (x3 over <invalid>) kubelet, k8s-s1-node3 Error syncing pod
Normal BackOff <invalid> kubelet, k8s-s1-node3 Back-off pulling image "nexus3.onap.org:10001/onap/sdnc-image:v1.2.1"
Normal Pulling <invalid> (x3 over <invalid>) kubelet, k8s-s1-node3 pulling image "nexus3.onap.org:10001/onap/sdnc-image:v1.2.1"
Warning Failed <invalid> (x3 over <invalid>) kubelet, k8s-s1-node3 Failed to pull image "nexus3.onap.org:10001/onap/sdnc-image:v1.2.1": rpc error: code = Unknown desc = Error response from daemon: Get https://nexus3.onap.org:10001/v2/onap/sdnc-image/manifests/v1.2.1: no basic auth credentials
Resolution: Create the secret explicitly using the command: kubectl --namespace onap create secret docker-registry onap-docker-registry-key --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=docker@nexus3.onap.org
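Since the attached binding.yaml is not reproduced on this page, here is a minimal sketch of what such a binding could look like. It assumes the namespace is "onap" and binds the "default" service account to cluster-admin, which is broader than strictly necessary but matches the error above.

# Hypothetical equivalent of the attached binding.yaml
cat > binding.yaml << EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: onap-default-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: default
  namespace: onap
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f binding.yaml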
We were able to deploy the latest OOM (after the namespace change) successfully once these resolutions were applied:
root@k8s-s1-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE
kube-system   coredns-65dcdb4cf-2vmwp                   1/1       Running   0          4d
kube-system   etcd-k8s-s1-master                        1/1       Running   0          4d
kube-system   kube-apiserver-k8s-s1-master              1/1       Running   0          4d
kube-system   kube-controller-manager-k8s-s1-master     1/1       Running   0          4d
kube-system   kube-proxy-pjtgh                          1/1       Running   0          4d
kube-system   kube-proxy-pmzmw                          1/1       Running   0          4d
kube-system   kube-proxy-zbbjp                          1/1       Running   0          4d
kube-system   kube-proxy-zrhd2                          1/1       Running   0          4d
kube-system   kube-proxy-zrn7d                          1/1       Running   0          4d
kube-system   kube-scheduler-k8s-s1-master              1/1       Running   0          4d
kube-system   tiller-deploy-7bf964fff8-g5rnm            1/1       Running   0          4d
kube-system   weave-net-8hdkl                           2/2       Running   0          4d
kube-system   weave-net-bq5rx                           2/2       Running   0          4d
kube-system   weave-net-jxdb8                           2/2       Running   0          4d
kube-system   weave-net-nb8sw                           2/2       Running   0          4d
kube-system   weave-net-wnrbw                           2/2       Running   0          4d
onap          sdnc-0                                    2/2       Running   0          1d
onap          sdnc-1                                    2/2       Running   0          1d
onap          sdnc-2                                    2/2       Running   0          1d
onap          sdnc-dbhost-0                             2/2       Running   0          1d
onap          sdnc-dbhost-1                             2/2       Running   1          1d
onap          sdnc-dgbuilder-65444884c7-k2h67           1/1       Running   0          1d
onap          sdnc-dmaap-listener-567c7b744b-xrld2      1/1       Running   0          1d
onap          sdnc-nfs-provisioner-6db9648675-25bnb     1/1       Running   0          1d
onap          sdnc-portal-5f74449bb5-rffzt              1/1       Running   0          1d
onap          sdnc-ueb-listener-5bb66785c8-6xv7m        1/1       Running   0          1d
Get the following 2 gerrit changes from Configure SDN-C Cluster Deployment.
- Get New startODL.sh Script From Gerrit Topic SDNC-163 (download startODL_new.sh.zip script and copy into /dockerdata-nfs/cluster/script/startODL.sh)
- Get SDN-C Cluster Templates From Gerrit Topic SDNC-163 (skip steps 1 and 2; the gerrit change 25467 has already been merged. Just update sdnc-statefulset.yaml and values.yaml)
Local Nexus
Optional: if you have a local nexus3 for your docker repo, you can use the following snippet to update OOM to pull from your local repository. This will speed up deployment time.
# Update nexus3 to your local repository
find ~/oom -type f -exec \
  sed -i 's/nexus3\.onap\.org:10001/yournexus:port/g' {} +
Configure ONAP
As the ubuntu user,
cd ~/oom/kubernetes/oneclick/
source setenv.bash

cd ~/oom/kubernetes/config/
# Dummy values can be used as we will not be deploying a VM
cp onap-parameters-sample.yaml onap-parameters.yaml
./createConfig.sh -n onap
Wait for the ONAP config pod to change state from ContainerCreating to Running and finally to Completed. It should end with a "Completed" status:
kubectl -n onap get pods --show-all
# Sample output:
NAME      READY     STATUS      RESTARTS   AGE
config    0/1       Completed   0          9m

$ kubectl get pod --all-namespaces -a | grep onap
onap      config    0/1       Completed   0          9m

$ kubectl get namespaces
# Sample output:
NAME          STATUS    AGE
default       Active    50m
kube-public   Active    50m
kube-system   Active    50m
onap          Active    8m
Deploy the SDN-C Pods
cd ~/oom/kubernetes/oneclick/
source setenv.bash
./createAll.bash -n onap -a sdnc

# Verify: repeat the following command until the installation is completed and all pods are running.
kubectl -n onap-sdnc get pod -o wide
3 SDNC pods will be created, each assigned to a separate Kubernetes worker node.
2 DB host pods will be created, each assigned to a separate Kubernetes worker node.
Verify SDNC Clustering
Refer to Validate the SDN-C ODL cluster.
Undeploy SDNC
$ cd ~/oom/kubernetes/oneclick/
$ source setenv.bash
$ ./deleteAll.bash -n onap
$ ./deleteAll.bash -n onap -a sdnc
$ sudo rm -rf /dockerdata-nfs
Get the details from the Kubernetes Master Node
Access to the RestConf UI is via https://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html (admin user).
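A quick reachability check from any machine that can reach the cluster (replace the placeholders with your master node IP and the ODL admin password, which is not shown here): an HTTP 200, or 401 if the credentials are wrong, means the service is up; a connection error means it is not.

# Print only the HTTP status code returned by the RestConf UI
curl -k -s -o /dev/null -w '%{http_code}\n' -u admin:<admin-password> "https://<k8s-master-ip>:30202/apidoc/explorer/index.html"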
Run the following command to make sure the installation is error free.
$ kubectl cluster-info
Kubernetes master is running at https://10.147.112.158:6443
KubeDNS is running at https://10.147.112.158:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl -n onap-sdnc get all
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nfs-provisioner   1         1         1            1           5h
deploy/sdnc-dgbuilder    1         1         1            1           5h
deploy/sdnc-portal       1         1         1            1           5h

NAME                            DESIRED   CURRENT   READY     AGE
rs/nfs-provisioner-6cb95b597d   1         1         1         5h
rs/sdnc-dgbuilder-557b6879cd    1         1         1         5h
rs/sdnc-portal-7bb789ccd6       1         1         1         5h

NAME                       DESIRED   CURRENT   AGE
statefulsets/sdnc          3         3         5h
statefulsets/sdnc-dbhost   2         2         5h

NAME                                  READY     STATUS    RESTARTS   AGE
po/nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          5h
po/sdnc-0                             2/2       Running   0          5h
po/sdnc-1                             2/2       Running   0          5h
po/sdnc-2                             2/2       Running   0          5h
po/sdnc-dbhost-0                      2/2       Running   0          5h
po/sdnc-dbhost-1                      2/2       Running   0          5h
po/sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          5h
po/sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          5h

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
svc/dbhost            ClusterIP   None             <none>        3306/TCP                                       5h
svc/dbhost-read       ClusterIP   10.109.106.173   <none>        3306/TCP                                       5h
svc/nfs-provisioner   ClusterIP   10.101.77.92     <none>        2049/TCP,20048/TCP,111/TCP,111/UDP             5h
svc/sdnc-dgbuilder    NodePort    10.111.87.223    <none>        3000:30203/TCP                                 5h
svc/sdnc-portal       NodePort    10.102.21.163    <none>        8843:30201/TCP                                 5h
svc/sdnctldb01        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnctldb02        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnhost           NodePort    10.108.100.101   <none>        8282:30202/TCP,8201:30208/TCP,8280:30246/TCP   5h
svc/sdnhost-cluster   ClusterIP   None             <none>        2550/TCP                                       5h
$ kubectl -n onap-sdnc get pod
NAME                               READY     STATUS              RESTARTS
nfs-provisioner-6cb95b597d-jjhv5   0/1       ContainerCreating   0
sdnc-0                             0/2       Init:0/1            0
sdnc-1                             0/2       Init:0/1            0
sdnc-2                             0/2       Init:0/1            0
sdnc-dbhost-0                      0/2       Init:0/2            0
sdnc-dgbuilder-557b6879cd-9nkv4    0/1       Init:0/1            0
sdnc-portal-7bb789ccd6-5z9w4       0/1       Init:0/1            0

# Wait a few minutes
$ kubectl -n onap-sdnc get pod
NAME                               READY     STATUS    RESTARTS   AGE
nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          23m
sdnc-0                             2/2       Running   0          23m
sdnc-1                             2/2       Running   0          23m
sdnc-2                             2/2       Running   0          23m
sdnc-dbhost-0                      2/2       Running   0          23m
sdnc-dbhost-1                      2/2       Running   0          21m
sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          23m
sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          23m
$ kubectl get pod --all-namespaces -a
NAMESPACE     NAME                                    READY     STATUS      RESTARTS   AGE
kube-system   etcd-k8s-s2-master                      1/1       Running     0          13h
kube-system   kube-apiserver-k8s-s2-master            1/1       Running     0          13h
kube-system   kube-controller-manager-k8s-s2-master   1/1       Running     0          13h
kube-system   kube-dns-6f4fd4bdf-n8rgs                3/3       Running     0          14h
kube-system   kube-proxy-l8gsk                        1/1       Running     0          12h
kube-system   kube-proxy-pdz6h                        1/1       Running     0          12h
kube-system   kube-proxy-q7zz2                        1/1       Running     0          12h
kube-system   kube-proxy-r76g9                        1/1       Running     0          14h
kube-system   kube-scheduler-k8s-s2-master            1/1       Running     0          13h
kube-system   tiller-deploy-6657cd6b8d-f6p9h          1/1       Running     0          12h
kube-system   weave-net-mwdjd                         2/2       Running     1          12h
kube-system   weave-net-sl7gg                         2/2       Running     2          12h
kube-system   weave-net-t6nmx                         2/2       Running     1          13h
kube-system   weave-net-zmqcf                         2/2       Running     2          12h
onap-sdnc     nfs-provisioner-6cb95b597d-jjhv5        1/1       Running     0          5h
onap-sdnc     sdnc-0                                  2/2       Running     0          5h
onap-sdnc     sdnc-1                                  2/2       Running     0          5h
onap-sdnc     sdnc-2                                  2/2       Running     0          5h
onap-sdnc     sdnc-dbhost-0                           2/2       Running     0          5h
onap-sdnc     sdnc-dbhost-1                           2/2       Running     0          5h
onap-sdnc     sdnc-dgbuilder-557b6879cd-9nkv4         1/1       Running     0          5h
onap-sdnc     sdnc-portal-7bb789ccd6-5z9w4            1/1       Running     0          5h
onap          config                                  0/1       Completed   0          5h
$ kubectl -n onap-sdnc get pod -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP          NODE
nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          5h        10.36.0.1   k8s-s2-node0
sdnc-0                             2/2       Running   0          5h        10.36.0.2   k8s-s2-node0
sdnc-1                             2/2       Running   0          5h        10.42.0.1   k8s-s2-node1
sdnc-2                             2/2       Running   0          5h        10.44.0.3   k8s-s2-node2
sdnc-dbhost-0                      2/2       Running   0          5h        10.44.0.4   k8s-s2-node2
sdnc-dbhost-1                      2/2       Running   0          5h        10.42.0.3   k8s-s2-node1
sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          5h        10.44.0.2   k8s-s2-node2
sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          5h        10.42.0.2   k8s-s2-node1
$ kubectl get services --all-namespaces -o wide
NAMESPACE     NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE       SELECTOR
default       kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP                                        14h       <none>
kube-system   kube-dns          ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP                                  14h       k8s-app=kube-dns
kube-system   tiller-deploy     ClusterIP   10.96.194.1      <none>        44134/TCP                                      12h       app=helm,name=tiller
onap-sdnc     dbhost            ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     dbhost-read       ClusterIP   10.109.106.173   <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     nfs-provisioner   ClusterIP   10.101.77.92     <none>        2049/TCP,20048/TCP,111/TCP,111/UDP             5h        app=nfs-provisioner
onap-sdnc     sdnc-dgbuilder    NodePort    10.111.87.223    <none>        3000:30203/TCP                                 5h        app=sdnc-dgbuilder
onap-sdnc     sdnc-portal       NodePort    10.102.21.163    <none>        8843:30201/TCP                                 5h        app=sdnc-portal
onap-sdnc     sdnctldb01        ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     sdnctldb02        ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     sdnhost           NodePort    10.108.100.101   <none>        8282:30202/TCP,8201:30208/TCP,8280:30246/TCP   5h        app=sdnc
onap-sdnc     sdnhost-cluster   ClusterIP   None             <none>        2550/TCP                                       5h        app=sdnc
Get more detail about a single pod by using "describe" with the resource name. The resource name is shown with the get all command used above.
$ kubectl -n onap-sdnc describe po/sdnc-0
Get logs of containers inside each pod:
# Add -v=n (n: 1..10) to "kubectl logs" to get verbose logs.
$ kubectl describe pod sdnc-0 -n onap-sdnc
$ kubectl logs sdnc-0 sdnc-readiness -n onap-sdnc
$ kubectl logs sdnc-0 sdnc-controller-container -n onap-sdnc
$ kubectl logs sdnc-0 filebeat-onap -n onap-sdnc

$ kubectl describe pod sdnc-dbhost-0 -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 sdnc-db-container -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 init-mysql -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 clone-mysql -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 xtrabackup -n onap-sdnc
List of Persistent Volumes
Each DB pod has a persistent volume claim (PVC) linked to a PV. The PVC capacity must be less than or equal to the PV capacity, and their status must be "Bound".
$ kubectl get pv -n onap-sdnc
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                               STORAGECLASS     REASON    AGE
pvc-75411a66-f640-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Bound      onap-sdnc/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             23h
pvc-824cb3cc-f620-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Released   onap-sdnc/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             1d
pvc-cb380eda-f640-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Bound      onap-sdnc/sdnc-data-sdnc-dbhost-1   onap-sdnc-data             23h

$ kubectl get pvc -n onap-sdnc
NAME                      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
sdnc-data-sdnc-dbhost-0   Bound     pvc-75411a66-f640-11e7-9949-fa163ee2b421   1Gi        RWX            onap-sdnc-data   23h
sdnc-data-sdnc-dbhost-1   Bound     pvc-cb380eda-f640-11e7-9949-fa163ee2b421   1Gi        RWX            onap-sdnc-data   23h
$ kubectl get serviceaccounts --all-namespaces
$ kubectl get clusterrolebinding --all-namespaces

$ kubectl get deployment --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1d
kube-system   tiller-deploy   1         1         1            1           1d
Scale the SDNC or DB pods up or down
# Decrease the sdnc pods to 1
$ kubectl scale statefulset sdnc -n onap-sdnc --replicas=1
statefulset "sdnc" scaled

# Verify: 2 sdnc pods will terminate
$ kubectl get pods --all-namespaces -a | grep sdnc
onap-sdnc   nfs-provisioner-5fb9fcb48f-cj8hm   1/1       Running       0          21h
onap-sdnc   sdnc-0                             2/2       Running       0          2h
onap-sdnc   sdnc-1                             0/2       Terminating   0          40m
onap-sdnc   sdnc-2                             0/2       Terminating   0          15m

# Increase the sdnc pods to 5
$ kubectl scale statefulset sdnc -n onap-sdnc --replicas=5
statefulset "sdnc" scaled

# Increase the db pods to 5
$ kubectl scale statefulset sdnc-dbhost -n onap-sdnc --replicas=5
statefulset "sdnc-dbhost" scaled

$ kubectl get pods --all-namespaces -o wide | grep onap-sdnc
onap-sdnc   nfs-provisioner-7fd7b4c6b7-d6k5t   1/1       Running   0          13h       10.42.0.149     sdnc-k8s
onap-sdnc   sdnc-0                             2/2       Running   0          13h       10.42.134.186   sdnc-k8s
onap-sdnc   sdnc-1                             2/2       Running   0          13h       10.42.186.72    sdnc-k8s
onap-sdnc   sdnc-2                             2/2       Running   0          13h       10.42.51.86     sdnc-k8s
onap-sdnc   sdnc-dbhost-0                      2/2       Running   0          13h       10.42.190.88    sdnc-k8s
onap-sdnc   sdnc-dbhost-1                      2/2       Running   0          12h       10.42.213.221   sdnc-k8s
onap-sdnc   sdnc-dbhost-2                      2/2       Running   0          5m        10.42.63.197    sdnc-k8s
onap-sdnc   sdnc-dbhost-3                      2/2       Running   0          5m        10.42.199.38    sdnc-k8s
onap-sdnc   sdnc-dbhost-4                      2/2       Running   0          4m        10.42.148.85    sdnc-k8s
onap-sdnc   sdnc-dgbuilder-6ff8d94857-hl92x    1/1       Running   0          13h       10.42.255.132   sdnc-k8s
onap-sdnc   sdnc-portal-0                      1/1       Running   0          13h       10.42.141.70    sdnc-k8s
onap-sdnc   sdnc-portal-1                      1/1       Running   0          13h       10.42.60.71     sdnc-k8s
onap-sdnc   sdnc-portal-2