
This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy APPC within that Kubernetes cluster.

(Chrome is the preferred browser for viewing this page. IE may add an extra "CR LF" to each line when copying commands, which causes problems.)

What is OpenStack? What is Kubernetes? What is Docker?

In the OpenStack lab, the controller node partitions resources. The compute nodes are the pool of resources (memory, CPUs, hard drive space) to be partitioned. When creating a VM with "X" memory, "Y" CPUs and "Z" hard drive space, OpenStack's controller reviews its pool of available resources, allocates the quota, and then creates the VM on one of the available compute nodes. Many VMs can be created on a single compute node. OpenStack's controller uses many criteria to choose a compute node, but if an application spans multiple VMs, affinity rules can be used to ensure the VMs do not all congregate on a single compute node, which would be bad for resilience.

Kubernetes is similar to OpenStack in that it manages resources. Instead of scheduling VMs, Kubernetes schedules Pods. In a Kubernetes cluster, there is a single master node and multiple worker nodes. The Kubernetes master node is like the OpenStack controller in that it allocates resources for Pods. The Kubernetes worker nodes are the pool of resources to be allocated, similar to OpenStack's compute nodes. Pods, like VMs, can have affinity rules configured to increase application resilience.
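For illustration only, here is a minimal sketch of a Pod manifest with a podAntiAffinity rule (the pod name, label, and image are hypothetical) that asks the scheduler to place replicas carrying the label app=example on different worker nodes:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - example
        topologyKey: kubernetes.io/hostname
  containers:
  - name: example
    image: nginx

The topologyKey of kubernetes.io/hostname makes "a different node" the unit of spreading.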

If you would like more information on these subjects, please explore these links:

Deployment Architecture

The Kubernetes deployment in this tutorial will be set up on top of OpenStack VMs. Let's call this the undercloud. The undercloud can be physical boxes or VMs. The VMs can come from different cloud providers, but in this tutorial we will use OpenStack. The following table shows the layers of software that need to be considered when thinking about resilience:

Hardware + Base OS | OpenStack Software Configured on Base OS | VMs Deployed by OpenStack | Kubernetes Software Configured on VMs | Pods Deployed by Kubernetes | Docker Containers Deployed within a Pod
Computer 1         | Controller Node                          |                           |                                       |                             |
Computer 2         | Compute                                  | VM 1                      | k8s-master                            |                             |
Computer 3         | Compute                                  | VM 2                      | k8s-node1                             | appc-0                      | appc-controller-container, filebeat-onap
                   |                                          |                           |                                       | appc-db-0                   | appc-db-container, xtrabackup
Computer 4         | Compute                                  | VM 3                      | k8s-node2                             | appc-1                      | appc-controller-container, filebeat-onap
Computer 5         | Compute                                  | VM 4                      | k8s-node3                             | appc-2                      | appc-controller-container, filebeat-onap


Setting up an OpenStack lab is out of scope for this tutorial. Assuming that you have a lab, you will need to create 1+n VMs: one to be configured as the Kubernetes master node, and "n" to be configured as Kubernetes worker nodes. We will create 3 Kubernetes worker nodes for this tutorial because we want each of our APPC replicas to run on a different VM for resiliency.


Create the Undercloud

The examples here use the OpenStackClient; however, the OpenStack Horizon GUI could be used instead. Start by creating 4 VMs with the hostnames k8s-master, k8s-node1, k8s-node2, and k8s-node3. Each VM should have internet access and approximately:

  • 16384 MB of RAM
  • 20 GB of disk space
  • 4 vCPUs

How many resources are needed?

There was no evaluation of how much quota is actually needed; the above numbers were chosen arbitrarily as being sufficient. A lot more is likely needed if the full ONAP environment is deployed. For just APPC, this is more than plenty.
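If a flavor with this sizing does not already exist in your OpenStack lab, one could be created along the following lines (the flavor name "m1.k8s" is just an example):

openstack flavor create --ram 16384 --disk 20 --vcpus 4 "m1.k8s"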


Use the Ubuntu 16.04 cloud image to create the VMs. The image can be found at https://cloud-images.ubuntu.com/.

wget https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img

openstack image create ubuntu-16.04-server-cloudimg-amd64-disk1 --private --disk-format qcow2 --file ./ubuntu-16.04-server-cloudimg-amd64-disk1.img


Exactly how to create VMs in OpenStack is out of scope for this tutorial. However, here are some examples of OpenStackClient commands that can be used to perform this job:

openstack server list;
openstack network list;
openstack flavor list;
openstack keypair list;
openstack image list;
openstack security group list

openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-master"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node1"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node2"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node3"


Configure Each VM 

Repeat the following steps on each VM:

Pre-Configure Each VM

Make sure the VMs are:

  • Up to date
  • Synchronized to a common clock (NTP)

As the ubuntu user, run the following.

# (Optional) fix a vi bug in some versions of MobaXterm (it changes the first letter of an edited file to "g" after opening)
vi ~/.vimrc  ==> repeat for root, ubuntu, and any other user that will edit files.
# Add the following 2 lines.
syntax on
set background=dark



# Add the hostnames of the kubernetes nodes (master and workers) to /etc/hosts
sudo vi /etc/hosts
# <IP address> <hostname>
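# Example entries (these addresses come from the sample outputs later in this page; use your own VM IPs):
# 10.147.112.140 k8s-master
# 10.147.112.164 k8s-node1
# 10.147.112.150 k8s-node2
# 10.147.112.169 k8s-node3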

# Turn off firewall and allow all incoming HTTP connections through IPTABLES
sudo ufw disable
sudo iptables -I INPUT -j ACCEPT

# Fix server timezone and select your timezone.
sudo dpkg-reconfigure tzdata


# (Optional) create a bash history file as the ubuntu user so that it does not accidentally get created as the root user.
touch ~/.bash_history  

# (Optional) turn on ssh password authentication and give the ubuntu user a password if you do not like using ssh keys.
# Set "PasswordAuthentication yes" in the /etc/ssh/sshd_config file and then set the ubuntu password
sudo vi /etc/ssh/sshd_config;sudo systemctl restart sshd;sudo passwd ubuntu;

# Update the VM with the latest core packages
sudo apt clean
sudo apt update
sudo apt -y full-upgrade
sudo reboot

# Set up NTP on your image if needed. It is important that all the VMs' clocks are in sync, or there will be problems joining the kubernetes nodes to the kubernetes cluster
sudo apt install ntp
sudo apt install ntpdate 

# It is recommended to add your local ntp-hostname or NTP server's IP address to ntp.conf
# Sync your VM's clock with that of your NTP server. The best choice for the NTP server is a solid machine that is not one of the Kubernetes VMs. Make sure you can ping it!
# A service restart is needed for the time to sync up. You can also run the commands below for an immediate change.


sudo vi /etc/ntp.conf
# Append the following lines to /etc/ntp.conf to make the NTP server setting permanent.
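# For example (standard ntp.conf syntax; replace with your own NTP server):
# server <ntp-hostname | ntp server's IP address> iburst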

date 
sudo service ntp stop
sudo ntpdate -s <ntp-hostname | ntp server's IP address>  ==>e.g.: sudo ntpdate -s 10.247.5.11
sudo service ntp start
date


# Some of the clustering scripts (switch_voting.sh and sdnc_cluster.sh) require JSON parsing, so install jq on the master only
sudo apt install jq

Question: Did you check the date on all K8s nodes to make sure they are in sync?
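One quick way to check is to read the date on every node from one place (a sketch, assuming ssh key access as the ubuntu user and the hostnames used in this tutorial):

for h in k8s-master k8s-node1 k8s-node2 k8s-node3; do echo -n "$h: "; ssh ubuntu@$h date; done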

Install Docker

The ONAP applications are packaged in Docker containers.

The following snippet was taken from https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce:

sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

# Add the docker repository to "/etc/apt/sources.list". This uses the latest stable release for the Ubuntu flavour on the machine ("lsb_release -cs")
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce

sudo docker run hello-world


# Verify:
sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
c66d903a0b1f        hello-world         "/hello"            10 seconds ago      Exited (0) 9 seconds ago                       vigorous_bhabha


Install the Kubernetes Packages

Just install the packages; there is no need to configure them yet.

The following snippet was taken from https://kubernetes.io/docs/setup/independent/install-kubeadm/:

# The "sudo -i" changes user to root.
sudo -i
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Add the kubernetes repository for the latest stable release for the Ubuntu flavour on the machine (here: xenial)
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update

# As of today (late April 2018), version 1.10.1 of the kubernetes packages is available.
# To install that latest version, you can run "apt-get install -y kubectl=1.10.1-00; apt-get install -y kubelet=1.10.1-00; apt-get install -y kubeadm"

# To install an older version of the kubernetes packages, follow the next lines.
# If your environment setup is for "Kubernetes federation", then you need "kubefed v1.10.1". We recommend that all Kubernetes packages be of the same version.
apt-get install -y kubelet=1.8.6-00 kubernetes-cni=0.5.1-00
apt-get install -y kubectl=1.8.6-00
apt-get install -y kubeadm



# Option to install the latest version of the Kubernetes packages.
apt-get install -y kubelet kubeadm kubectl

# Verify version 
kubectl version
kubeadm version
kubelet --version

exit
# Append the following lines to ~/.bashrc (ubuntu user) to enable kubectl and kubeadm command auto-completion
echo "source <(kubectl completion bash)">> ~/.bashrc
echo "source <(kubeadm completion bash)">> ~/.bashrc

Note: If you intend to remove the kubernetes packages, use "apt autoremove kubelet; apt autoremove kubeadm; apt autoremove kubectl".

Configure the Kubernetes Cluster with kubeadm

kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster.  Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.

Configure the Kubernetes Master Node (k8s-master)

The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file as there is information which you will need to refer to afterwards.

Note: A new add-on named "kube-dns" will be added to the master node. However, there is a recommended option to replace it with "CoreDNS" by providing the "--feature-gates=CoreDNS=true" parameter to the "kubeadm init" command.

# On the k8s-master VM, set up the kubernetes master node.
# The "sudo -i" changes user to root.
sudo -i


# Pick one command: either with the "kube-dns" addon or with the "CoreDNS" addon
# with kube-dns addon
kubeadm init | tee ~/kubeadm_init.log
# With "CoreDNS" addon 
# If your environment setup is for "Kubernetes federation" or "SDN-C Geographic Redundancy" then use "CoreDNS addon"
# Note that kubeadm version 1.8.x does not have support for coredns feature gate. 
# Upgrade kubeadm to the latest version before running the command below
kubeadm init --feature-gates=CoreDNS=true | tee ~/kubeadm_init.log 

# The "exit" reverts user back to ubuntu.
exit

The output of "kubeadm init" (with kube-dns addon) will look like below:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.7
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubefed-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.114.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 44.002324 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubefed-1 as master by adding a label and a taint
[markmaster] Master kubefed-1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 2246a6.83b4c7ca38913ce1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a

Execute the following snippet (as the ubuntu user) to get kubectl to work.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Verify that a set of pods has been created. The coredns (or kube-dns) pod will be in the Pending state.

# If you installed coredns addon
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   coredns-65dcdb4cf-8dr7w              0/1       Pending   0          10m       <none>          <none>
kube-system   etcd-k8s-master                      1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-proxy-jztl4                     1/1       Running   0          10m       10.147.99.149   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          9m        10.147.99.149   k8s-master

#(There will be 2 coredns pods with kubernetes version 1.10.1)


# If you did not install the coredns addon, a kube-dns pod will be created instead
sudo kubectl get pods --all-namespaces -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-k8s-s1-master                      1/1       Running   0          23d       10.147.99.131   k8s-s1-master
kube-apiserver-k8s-s1-master            1/1       Running   0          23d       10.147.99.131   k8s-s1-master
kube-controller-manager-k8s-s1-master   1/1       Running   0          23d       10.147.99.131   k8s-s1-master
kube-dns-6f4fd4bdf-czn68                3/3       Pending   0          23d        <none>          <none>    
kube-proxy-ljt2h                        1/1       Running   0          23d       10.147.99.148   k8s-s1-node0
kube-scheduler-k8s-s1-master            1/1       Running   0          23d       10.147.99.131   k8s-s1-master


# (Optional) run the following commands if you are curious.
sudo kubectl get node
sudo kubectl get secret
sudo kubectl config view
sudo kubectl config current-context
sudo kubectl get componentstatus
sudo kubectl get clusterrolebinding --all-namespaces
sudo kubectl get serviceaccounts --all-namespaces
sudo kubectl get pods --all-namespaces -o wide
sudo kubectl get services --all-namespaces -o wide
sudo kubectl cluster-info


A "Pod network" must be deployed to use the cluster. This will let pods to communicate with eachother.

There are many different pod networks to choose from. See https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network for the choices. For this tutorial, the Weave pod network was arbitrarily chosen (see https://www.weave.works/docs/net/latest/kubernetes/kube-addon/ for more information).

The following snippet will install the Weave pod network:

sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Sample output:
serviceaccount "weave-net" configured
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
role "weave-net" created
rolebinding "weave-net" created
daemonset "weave-net" created

Pay attention to the new pod (and serviceaccount) for "weave-net". This pod provides pod-to-pod connectivity.

Verify the status of the pods. After a short while, the "Pending" status of "coredns" or "kube-dns" will change to "Running".

sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                      1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3       Running   0          44m       10.32.0.2        k8s-master
kube-system   kube-proxy-lnv7r                     1/1       Running   0          44m       10.147.112.140   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   weave-net-b2hkh                      2/2       Running   0          1m        10.147.112.140   k8s-master


#(There will be 2 coredns pods with different IP addresses, with kubernetes version 1.10.1)

# Verify that the AVAILABLE flag for the "kube-dns" or "coredns" deployment has changed to 1 (2 with kubernetes version 1.10.1)
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           1h

Troubleshooting tips:

  • If any of the weave pods run into a problem and get stuck in the "ImagePullBackOff" state, try running the "sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"" command again.
  • Sometimes you need to delete the problematic pod so that it terminates and starts fresh. Use "kubectl delete po/<pod-name> -n <name-space>" to delete a pod.
  • To "unjoin" a worker node, run "kubectl delete node <node-name>" (see the sketch below; go through the "Undeploy SDNC" process at the end if you have an SDNC cluster running).


Install Helm and Tiller on the Kubernetes Master Node (k8s-master)

ONAP uses Helm, a package manager for Kubernetes.

Install Helm (the client side). The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:


# As a root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh


Install Tiller (the server side of Helm)

Tiller manages the installation of helm packages (charts). Tiller requires a ServiceAccount to be set up in Kubernetes before being initialized. The following snippet will do that for you:

(Chrome is the preferred browser. IE may add extra "CR LF" to each line, which causes problems.)

# id
ubuntu


# As the ubuntu user, create a yaml file to define the helm service account and cluster role binding.
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF


# Create a ServiceAccount and ClusterRoleBinding based on the created file. 
sudo kubectl create -f tiller-serviceaccount.yaml

# Verify 
which helm
helm version


Initialize Helm. This command installs Tiller. It also discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and using the default context.

helm init --service-account tiller --upgrade


# A new pod is created, but will be in pending status.
kubectl get pods --all-namespaces -o wide  | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5           0/1       Pending   0          7m        <none>           <none>


# A new service is created 
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>        44134/TCP       47m       app=helm,name=tiller

# A new deployment is created, but the AVAILABLE flag is set to "0".

kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m


If you need to reset Helm, follow the steps below:

# Uninstalls Tiller from a cluster
helm reset --force
 
 
# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding
 
 
kubectl create -f tiller-serviceaccount.yaml
 
#init helm
helm init --service-account tiller --upgrade

Configure the Kubernetes Worker Nodes (k8s-node<n>)

Setting up the cluster worker nodes is very easy. Just refer back to the "kubeadm init" output log (/root/kubeadm_init.log). The last line of the log contains a "kubeadm join" command with the token information and other parameters.

Capture those parameters and then execute the command as root on each of the Kubernetes worker nodes: k8s-node1, k8s-node2, and k8s-node3.

After running the "kubeadm join" command on a worker node:

  • 2 new pods (kube-proxy and weave-net) will be created and assigned to the worker node.
  • The tiller pod status will change to "Running".
  • The AVAILABLE flag for the tiller-deploy deployment will change to "1".
  • The worker node will join the cluster.


The command looks like the following snippet (find the exact command at the bottom of /root/kubeadm_init.log):

# Change to the root user on the worker node before running this.
kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a


# Make sure the output contains "This node has joined the cluster:".

Verify the results from the master node:

kubectl get pods --all-namespaces -o wide  

kubectl get nodes
# Sample Output:
NAME            STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    2h        v1.8.6
k8s-node1    Ready     <none>    53s       v1.8.6

Make sure you run the same "kubeadm join" command once on each worker node, then verify the results.


Return to the Kubernetes master node VM and execute the "kubectl get nodes" command to see all of the Kubernetes nodes. It might take some time, but eventually each node will have a status of "Ready":

kubectl get nodes

# Sample Output:
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.8.5
k8s-node1    Ready     <none>    1d        v1.8.5
k8s-node2    Ready     <none>    1d        v1.8.5
k8s-node3    Ready     <none>    1d        v1.8.5


Make sure that the tiller pod is running. Execute the following command (from the master node) and look for a po/tiller-deploy-xxxx with a "Running" status. For example:

(If you are using coredns instead of kube-dns, you will notice that the DNS pod has only one container.)

kubectl get pods --all-namespaces -o wide
# Sample output:
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                         1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master               1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master      1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm               3/3       Running   0          2h        10.32.0.2        k8s-master
kube-system   kube-proxy-4zztj                        1/1       Running   0          2m        10.147.112.150   k8s-node2
kube-system   kube-proxy-lnv7r                        1/1       Running   0          2h        10.147.112.140   k8s-master
kube-system   kube-proxy-t492g                        1/1       Running   0          20m       10.147.112.164   k8s-node1
kube-system   kube-proxy-xx8df                        1/1       Running   0          2m        10.147.112.169   k8s-node3
kube-system   kube-scheduler-k8s-master               1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   tiller-deploy-b6bf9f4cc-vbrc5           1/1       Running   0          42m       10.44.0.1        k8s-node1
kube-system   weave-net-b2hkh                         2/2       Running   0          1h        10.147.112.140   k8s-master
kube-system   weave-net-s7l27                         2/2       Running   1          2m        10.147.112.169   k8s-node3
kube-system   weave-net-vmlrq                         2/2       Running   0          20m       10.147.112.164   k8s-node1
kube-system   weave-net-xxgnq                         2/2       Running   1          2m        10.147.112.150   k8s-node2

Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.

Cluster's Full Picture

You can run "kubectl describe node" on the master node to get a complete report on the nodes (including workers) and their system resources.
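For example:

# Run on the master node; add a node name to limit the report to a single node
kubectl describe node
kubectl describe node k8s-node1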

Configure dockerdata-nfs

This is a shared directory which must be mounted on all of the Kubernetes VMs (master and worker nodes), because many of the ONAP pods use this directory to share data.

See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instructions on how to set this up.
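The linked page has the full instructions; as a rough sketch of the idea (assuming the master node exports the directory over NFS; package names are the standard Ubuntu ones):

# On the NFS server (e.g. k8s-master): export /dockerdata-nfs
sudo apt install nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl restart nfs-kernel-server

# On each worker node: mount the exported share (replace <master-ip> with the master's IP address)
sudo apt install nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount <master-ip>:/dockerdata-nfs /dockerdata-nfs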


Configure ONAP

Clone the OOM project (only on the Kubernetes master node)

As the ubuntu user, clone the oom repository.

git clone https://gerrit.onap.org/r/oom
cd oom/kubernetes

You may use any specific known stable OOM release for the APPC deployment. The URL above downloads the latest OOM.
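If you want a specific release rather than the latest, check out the corresponding branch or tag after cloning (the branch name below is a placeholder):

# (Optional) switch the cloned repo to a specific release branch or tag
git checkout <release-branch-or-tag>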


Customize the oom/kubernetes/onap parent chart, such as the values.yaml file, to suit your deployment. You may want to selectively enable or disable ONAP components by changing the subchart **enabled** flags to *true* or *false*.

ubuntu@k8s-s1-master:/home/ubuntu/# vi oom/kubernetes/onap/values.yaml
 Example:
...
robot: # Robot Health Check
  enabled: true
sdc:
  enabled: false
appc:
  enabled: true
so: # Service Orchestrator
  enabled: false

Deploy APPC

To deploy only APPC, customize the parent chart to disable all components except APPC, as shown in the file below. Also set global.persistence.mountPath to a non-mounted directory (by default, it is set to the mounted directory /dockerdata-nfs).

# Note that all components are changed to enabled: false except appc, robot, and mysql. Here we set the number of APPC replicas to 3.
$ cat ~/oom/kubernetes/onap/values.yaml
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302
  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositoryCred:
    user: docker
    password: docker
  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co
  # image pull policy
  pullPolicy: Always
  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /dockerdata-nfs
  # flag to enable debugging - application support required
  debugEnabled: false
# Repository for creation of nexus3.onap.org secret
repository: nexus3.onap.org:10001

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: true
  replicaCount: 3
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: admin
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
dcaegen2:
  enabled: false
dmaap:
  enabled: false
esr:
  enabled: false
log:
  enabled: false
sniro-emulator:
  enabled: false
oof:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
policy:
  enabled: false
portal:
  enabled: false
robot:
  enabled: true
sdc:
  enabled: false
sdnc:
  enabled: false
  replicaCount: 1
  config:
    enableClustering: false
  mysql:
    disableNfsProvisioner: true
    replicaCount: 1
so:
  enabled: false
  replicaCount: 1
  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: true
  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
  # configure embedded mariadb
  mariadb:
    config:
      mariadbRootPassword: password
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false




Note: If you set the number of appc replicas in onap/values.yaml, it overrides the setting you are about to do in the next step.
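For reference, the same value can also be overridden on the command line at install time instead of editing values.yaml (a sketch using Helm 2 syntax and the release name used later on this page):

helm install local/onap --name dev --namespace onap --set appc.replicaCount=3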


Run the command below to set up a local Helm repository that serves up the local ONAP charts:

#Press "Enter" after running the command to get the prompt back
$ nohup helm serve &
[1] 2316
$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879

# Verify
$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879


If you don't find the local repo, add it manually.

Note the IP (localhost) and port number listed in the above response (8879 here) and use them in the "helm repo add" command as follows:

$ helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories


Install "make" ( Learn more about ubuntu-make here : https://wiki.ubuntu.com/ubuntu-make)  and build a local Helm repository (from the kubernetes directory):

#######################
# Install make from kubernetes directory. 
#######################
$ sudo apt install make
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  linux-headers-4.4.0-62 linux-headers-4.4.0-62-generic linux-image-4.4.0-62-generic snap-confine
Use 'sudo apt autoremove' to remove them.
Suggested packages:
  make-doc
The following NEW packages will be installed:
  make
0 upgraded, 1 newly installed, 0 to remove and 72 not upgraded.
Need to get 151 kB of archives.
After this operation, 365 kB of additional disk space will be used.
Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Fetched 151 kB in 0s (208 kB/s)
Selecting previously unselected package make.
(Reading database ... 121778 files and directories currently installed.)
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up make (4.1-6) ...

#######################
# Build local helm repo
#######################
$ make all

[common]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
make[2]: Entering directory '/home/ubuntu/oom/kubernetes/common'

[common]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
==> Linting common
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/common-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'

[dgbuilder]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting dgbuilder
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dgbuilder-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'

[postgres]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting postgres
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/postgres-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'

[mysql]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting mysql
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mysql-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
make[2]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[vid]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting vid
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vid-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[so]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting so
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/so-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[cli]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting cli
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/cli-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[aaf]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting aaf
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/aaf-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[log]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting log
[INFO] Chart.yaml: icon is recommended
[WARNING] templates/: directory not found

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/log-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[esr]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting esr
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/esr-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[mock]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting mock
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mock-0.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[multicloud]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting multicloud
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/multicloud-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[mso]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting mso
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mso-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[dcaegen2]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting dcaegen2
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dcaegen2-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[vnfsdk]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting vnfsdk
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vnfsdk-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[policy]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting policy
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/policy-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[consul]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting consul
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/consul-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[clamp]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting clamp
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/clamp-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[appc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting appc
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/appc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[sdc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting sdc
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[portal]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting portal
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/portal-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[aai]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting aai
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/aai-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[robot]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting robot
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/robot-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[msb]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting msb
[INFO] Chart.yaml: icon is recommended
[WARNING] templates/: directory not found

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/msb-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[vfc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting vfc
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vfc-0.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[message-router]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting message-router
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/message-router-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[uui]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting uui
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/uui-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[sdnc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting sdnc
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdnc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[onap]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 24 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading message-router from repo http://127.0.0.1:8879
Downloading mock from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/onap-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

Setting up this Helm repository is a one-time activity. If you make changes to your deployment charts or values, make sure to run the **make** command again to update your local Helm repository.


Once the repo is set up, installation of ONAP can be done with a single command:

Example:
$ helm install local/onap --name <Release-name> --namespace onap

# we choose "dev" as our release name here
Execute:
$ helm install local/onap --name dev --namespace onap
NAME:   dev
LAST DEPLOYED: Tue May 15 11:31:44 2018
NAMESPACE: onap
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME                       TYPE                     DATA  AGE
dev-appc-dgbuilder         Opaque                   1     1s
dev-appc-db                Opaque                   1     1s
dev-appc                   Opaque                   1     1s
onap-docker-registry-key   kubernetes.io/dockercfg  1     1s
==> v1/PersistentVolumeClaim
NAME                          STATUS  VOLUME                        CAPACITY  ACCESS MODES  STORAGECLASS      AGE
dev-appc-db-data              Bound   dev-appc-db-data              1Gi       RWX           dev-appc-db-data  1s
==> v1/Service
NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                        AGE
appc-cdt                  NodePort   10.107.253.179  <none>       80:30289/TCP                   1s
appc-dgbuilder            NodePort   10.102.138.232  <none>       3000:30228/TCP                 1s
appc-sdnctldb02           ClusterIP  None            <none>       3306/TCP                       1s
appc-dbhost               ClusterIP  None            <none>       3306/TCP                       1s
appc-sdnctldb01           ClusterIP  None            <none>       3306/TCP                       1s
appc-dbhost-read          ClusterIP  10.101.117.102  <none>       3306/TCP                       1s
appc                      NodePort   10.107.234.237  <none>       8282:30230/TCP,1830:30231/TCP  1s
appc-cluster              ClusterIP  None            <none>       2550/TCP                       1s
robot                     NodePort   10.110.229.236  <none>       88:30209/TCP                   0s
==> v1beta1/StatefulSet
NAME         DESIRED  CURRENT  AGE
dev-appc-db  1        1        0s
dev-appc     3        3        0s
==> v1/ConfigMap
NAME                                         DATA  AGE
dev-appc-dgbuilder-scripts                   2     1s
dev-appc-dgbuilder-config                    1     1s
dev-appc-db-db-configmap                     2     1s
dev-appc-onap-appc-data-properties           4     1s
dev-appc-onap-sdnc-svclogic-config           1     1s
dev-appc-onap-appc-svclogic-bin              1     1s
dev-appc-onap-sdnc-svclogic-bin              1     1s
dev-appc-onap-sdnc-bin                       2     1s
dev-appc-filebeat                            1     1s
dev-appc-logging-cfg                         1     1s
dev-appc-onap-sdnc-data-properties           3     1s
dev-appc-onap-appc-svclogic-config           1     1s
dev-appc-onap-appc-bin                       2     1s
dev-robot-eteshare-configmap                 4     1s
dev-robot-resources-configmap                3     1s
dev-robot-lighttpd-authorization-configmap   1     1s
==> v1/PersistentVolume
NAME                          CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS     CLAIM                              STORAGECLASS      REASON  AGE
dev-appc-db-data              1Gi       RWX           Retain          Bound      onap/dev-appc-db-data              dev-appc-db-data  1s
dev-appc-data0                1Gi       RWO           Retain          Bound      onap/dev-appc-data-dev-appc-0      dev-appc-data     1s
dev-appc-data2                1Gi       RWO           Retain          Bound      onap/dev-appc-data-dev-appc-1      dev-appc-data     1s
dev-appc-data1                1Gi       RWO           Retain          Bound      onap/dev-appc-data-dev-appc-2      dev-appc-data     1s
==> v1beta1/ClusterRoleBinding
NAME          AGE
onap-binding  1s
==> v1beta1/Deployment
NAME                          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
dev-appc-cdt                  1        1        1           0          0s
dev-appc-dgbuilder            1        1        1           0          0s
dev-robot                     1        0        0           0          0s
==> v1/Pod(related)
NAME                                           READY  STATUS             RESTARTS  AGE
dev-appc-cdt-8cbf9d4d9-mhp4b                   0/1    ContainerCreating  0         0s
dev-appc-dgbuilder-54766c5b87-xw6c6            0/1    Init:0/1           0         0s
dev-appc-db-0                                  0/2    Init:0/2           0         0s
dev-appc-0                                     0/2    Pending            0         0s
dev-appc-1                                     0/2    Pending            0         0s
dev-appc-2                                     0/2    Pending            0         0s

The **--namespace onap** flag is currently required while the ONAP Helm charts are being migrated to version 2.0. Once that migration is complete, the namespace will be optional.
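Before monitoring the pods, you can optionally confirm that Helm recorded the release; a quick check using the Helm 2 syntax used above:

# list releases in the onap namespace and show the status of the "dev" release
$ helm ls --namespace onap
$ helm status dev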


Use the following to monitor your deployment and determine when ONAP is ready for use:

ubuntu@k8s-master:~/oom/kubernetes$ kubectl get pods --all-namespaces -o wide -w
NAMESPACE     NAME                                            READY     STATUS            RESTARTS   AGE       IP            NODE
kube-system   etcd-k8s-master                                 1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   kube-apiserver-k8s-master                       1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   kube-controller-manager-k8s-master              1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   kube-dns-86f4d74b45-px44s                       3/3       Running           21         27d       10.32.0.5     k8s-master
kube-system   kube-proxy-25tm5                                1/1       Running           8          27d       10.12.5.171   k8s-master
kube-system   kube-proxy-6dt4z                                1/1       Running           4          27d       10.12.5.174   k8s-appc1
kube-system   kube-proxy-jmv67                                1/1       Running           4          27d       10.12.5.193   k8s-appc2
kube-system   kube-proxy-l8fks                                1/1       Running           6          27d       10.12.5.194   k8s-appc3
kube-system   kube-scheduler-k8s-master                       1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   tiller-deploy-84f4c8bb78-s6bq5                  1/1       Running           0          4d        10.47.0.7     k8s-appc2
kube-system   weave-net-bz7wr                                 2/2       Running           20         27d       10.12.5.194   k8s-appc3
kube-system   weave-net-c2pxd                                 2/2       Running           13         27d       10.12.5.174   k8s-appc1
kube-system   weave-net-jw29c                                 2/2       Running           20         27d       10.12.5.171   k8s-master
kube-system   weave-net-kxxpl                                 2/2       Running           13         27d       10.12.5.193   k8s-appc2
onap          dev-appc-0                                      0/2       PodInitializing   0          2m        10.47.0.5     k8s-appc2
onap          dev-appc-1                                      0/2       PodInitializing   0          2m        10.36.0.8     k8s-appc3
onap          dev-appc-2                                      0/2       PodInitializing   0          2m        10.44.0.7     k8s-appc1
onap          dev-appc-cdt-8cbf9d4d9-mhp4b                    1/1       Running           0          2m        10.47.0.1     k8s-appc2
onap          dev-appc-db-0                                   2/2       Running           0          2m        10.36.0.5     k8s-appc3
onap          dev-appc-dgbuilder-54766c5b87-xw6c6             0/1       PodInitializing   0          2m        10.44.0.2     k8s-appc1
onap          dev-robot-785b9bfb45-9s2rs                      0/1       PodInitializing   0          2m        10.36.0.7     k8s-appc3

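If you only care about the ONAP pods, the watch can be scoped to the onap namespace instead of all namespaces. The robot health suite below is a hedged suggestion based on the robot chart deployed above; the exact script arguments may differ by OOM release:

# watch only the onap namespace
$ kubectl get pods -n onap -o wide -w

# once all pods are Running and Ready, the robot health suite can be used as a smoke test
$ ~/oom/kubernetes/robot/ete-k8s.sh onap health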

Clean up a deployed ONAP instance

To delete a deployed instance, use the following command:

Example:
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm del --purge <Release-name> 

# we chose "dev" as our release name
Execute:
$ helm del --purge dev
release "dev" deleted


Also, delete the existing persistent volumes (PVs) and persistent volume claims (PVCs) in the "onap" namespace:

#query existing pv in onap namespace
$ kubectl get pv -n onap
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                               STORAGECLASS       REASON    AGE
dev-appc-data0                 1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-0       dev-appc-data                8m
dev-appc-data1                 1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-2       dev-appc-data                8m
dev-appc-data2                 1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-1       dev-appc-data                8m
dev-appc-db-data               1Gi        RWX            Retain           Bound     onap/dev-appc-db-data               dev-appc-db-data             8m

#Example commands:

#delete existing pv
$ kubectl delete pv dev-appc-data0 -n onap
pv "dev-appc-data0" deleted
$ kubectl delete pv dev-appc-data1 -n onap
pv "dev-appc-data0" deleted
$ kubectl delete pv dev-appc-data2 -n onap
pv "dev-appc-data2" deleted
$ kubectl delete pv dev-appc-db-data -n onap
pv "dev-appc-db-data" deleted

#query existing pvc in onap namespace
$ kubectl get pvc -n onap
NAME                           STATUS    VOLUME                         CAPACITY   ACCESS MODES   STORAGECLASS       AGE
dev-appc-data-dev-appc-0       Bound     dev-appc-data0                 1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-1       Bound     dev-appc-data2                 1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-2       Bound     dev-appc-data1                 1Gi        RWO            dev-appc-data      9m
dev-appc-db-data               Bound     dev-appc-db-data               1Gi        RWX            dev-appc-db-data   9m

#delete existing pvc
$ kubectl delete pvc dev-appc-data-dev-appc-0 -n onap
pvc "dev-appc-data-dev-appc-0" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-1 -n onap
pvc "dev-appc-data-dev-appc-1" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-2 -n onap
pvc "dev-appc-data-dev-appc-2" deleted
$ kubectl delete pvc dev-appc-db-data -n onap
pvc "dev-appc-db-data" deleted

Verify APPC Clustering

Refer to Validate the APPC ODL cluster.
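As a quick sanity check from inside one of the APPC pods, the ODL shard manager can be queried over Jolokia. This is only a sketch: it assumes curl is available in the appc container, that Jolokia is enabled in the ODL instance on port 8181, and that the default admin/admin credentials apply.

# query the config datastore shard manager from inside dev-appc-0
$ kubectl -n onap exec dev-appc-0 -c appc -- \
    curl -s -u admin:admin \
    http://localhost:8181/jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=ShardManager,name=shard-manager-config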

Get the details from the Kubernetes master node


The RestConf UI is accessible at https://<Kubernetes-Master-Node-IP>:30230/apidoc/explorer/index.html (admin user).
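To verify the endpoint responds without opening a browser, a simple probe can be used; a sketch, assuming the admin password configured for your deployment and a self-signed certificate (hence -k):

# print only the HTTP status code returned by the RestConf apidoc page
$ curl -k -u admin:<admin-password> -o /dev/null -w '%{http_code}\n' https://<Kubernetes-Master-Node-IP>:30230/apidoc/explorer/index.html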

Run the following commands to make sure the installation is error-free.

$ kubectl cluster-info
Kubernetes master is running at https://10.12.5.171:6443
KubeDNS is running at https://10.12.5.171:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl -n onap get all
NAME                                  AGE
deploy/dev-appc-cdt                   23m
deploy/dev-appc-dgbuilder             23m
deploy/dev-robot                      23m
NAME                                         AGE
rs/dev-appc-cdt-8cbf9d4d9                    23m
rs/dev-appc-dgbuilder-54766c5b87             23m
rs/dev-robot-785b9bfb45                      23m
NAME                       AGE
statefulsets/dev-appc      23m
statefulsets/dev-appc-db   23m
NAME                                               READY     STATUS    RESTARTS   AGE
po/dev-appc-0                                      2/2       Running   0          23m
po/dev-appc-1                                      2/2       Running   0          23m
po/dev-appc-2                                      2/2       Running   0          23m
po/dev-appc-cdt-8cbf9d4d9-mhp4b                    1/1       Running   0          23m
po/dev-appc-db-0                                   2/2       Running   0          23m
po/dev-appc-dgbuilder-54766c5b87-xw6c6             1/1       Running   0          23m
po/dev-robot-785b9bfb45-9s2rs                      1/1       Running   0          23m
NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
svc/appc                       NodePort    10.107.234.237   <none>        8282:30230/TCP,1830:30231/TCP   23m
svc/appc-cdt                   NodePort    10.107.253.179   <none>        80:30289/TCP                    23m
svc/appc-cluster               ClusterIP   None             <none>        2550/TCP                        23m
svc/appc-dbhost                ClusterIP   None             <none>        3306/TCP                        23m
svc/appc-dbhost-read           ClusterIP   10.101.117.102   <none>        3306/TCP                        23m
svc/appc-dgbuilder             NodePort    10.102.138.232   <none>        3000:30228/TCP                  23m
svc/appc-sdnctldb01            ClusterIP   None             <none>        3306/TCP                        23m
svc/appc-sdnctldb02            ClusterIP   None             <none>        3306/TCP                        23m
svc/robot                      NodePort    10.110.229.236   <none>        88:30209/TCP                    23m
$ kubectl -n onap get pod
NAME                                            READY     STATUS    RESTARTS   AGE
dev-appc-0                                      2/2       Running   0          22m
dev-appc-1                                      2/2       Running   0          22m
dev-appc-2                                      2/2       Running   0          22m
dev-appc-cdt-8cbf9d4d9-mhp4b                    1/1       Running   0          22m
dev-appc-db-0                                   2/2       Running   0          22m
dev-appc-dgbuilder-54766c5b87-xw6c6             1/1       Running   0          22m
dev-robot-785b9bfb45-9s2rs                      1/1       Running   0          22m
$ kubectl get pod --all-namespaces -a
NAMESPACE     NAME                                            READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s-master                                 1/1       Running   5          14d
kube-system   kube-apiserver-k8s-master                       1/1       Running   5          14d
kube-system   kube-controller-manager-k8s-master              1/1       Running   5          14d
kube-system   kube-dns-86f4d74b45-px44s                       3/3       Running   21         27d
kube-system   kube-proxy-25tm5                                1/1       Running   8          27d
kube-system   kube-proxy-6dt4z                                1/1       Running   4          27d
kube-system   kube-proxy-jmv67                                1/1       Running   4          27d
kube-system   kube-proxy-l8fks                                1/1       Running   6          27d
kube-system   kube-scheduler-k8s-master                       1/1       Running   5          14d
kube-system   tiller-deploy-84f4c8bb78-s6bq5                  1/1       Running   0          4d
kube-system   weave-net-bz7wr                                 2/2       Running   20         27d
kube-system   weave-net-c2pxd                                 2/2       Running   13         27d
kube-system   weave-net-jw29c                                 2/2       Running   20         27d
kube-system   weave-net-kxxpl                                 2/2       Running   13         27d
onap          dev-appc-0                                      2/2       Running   0          25m
onap          dev-appc-1                                      2/2       Running   0          25m
onap          dev-appc-2                                      2/2       Running   0          25m
onap          dev-appc-cdt-8cbf9d4d9-mhp4b                    1/1       Running   0          25m
onap          dev-appc-db-0                                   2/2       Running   0          25m
onap          dev-appc-dgbuilder-54766c5b87-xw6c6             1/1       Running   0          25m
onap          dev-robot-785b9bfb45-9s2rs                      1/1       Running   0          25m

$ kubectl -n onap get pod -o wide
NAME                                            READY     STATUS    RESTARTS   AGE       IP          NODE
dev-appc-0                                      2/2       Running   0          26m       10.47.0.5   k8s-appc2
dev-appc-1                                      2/2       Running   0          26m       10.36.0.8   k8s-appc3
dev-appc-2                                      2/2       Running   0          26m       10.44.0.7   k8s-appc1
dev-appc-cdt-8cbf9d4d9-mhp4b                    1/1       Running   0          26m       10.47.0.1   k8s-appc2
dev-appc-db-0                                   2/2       Running   0          26m       10.36.0.5   k8s-appc3
dev-appc-dgbuilder-54766c5b87-xw6c6             1/1       Running   0          26m       10.44.0.2   k8s-appc1
dev-robot-785b9bfb45-9s2rs                      1/1       Running   0          26m       10.36.0.7   k8s-appc3

$ kubectl get services --all-namespaces -o wide
NAMESPACE     NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE       SELECTOR
default       kubernetes                 ClusterIP   10.96.0.1        <none>        443/TCP                         27d       <none>
kube-system   kube-dns                   ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP                   27d       k8s-app=kube-dns
kube-system   tiller-deploy              ClusterIP   10.108.155.106   <none>        44134/TCP                       14d       app=helm,name=tiller
onap          appc                       NodePort    10.107.234.237   <none>        8282:30230/TCP,1830:30231/TCP   27m       app=appc,release=dev
onap          appc-cdt                   NodePort    10.107.253.179   <none>        80:30289/TCP                    27m       app=appc-cdt,release=dev
onap          appc-cluster               ClusterIP   None             <none>        2550/TCP                        27m       app=appc,release=dev
onap          appc-dbhost                ClusterIP   None             <none>        3306/TCP                        27m       app=appc-db,release=dev
onap          appc-dbhost-read           ClusterIP   10.101.117.102   <none>        3306/TCP                        27m       app=appc-db,release=dev
onap          appc-dgbuilder             NodePort    10.102.138.232   <none>        3000:30228/TCP                  27m       app=appc-dgbuilder,release=dev
onap          appc-sdnctldb01            ClusterIP   None             <none>        3306/TCP                        27m       app=appc-db,release=dev
onap          appc-sdnctldb02            ClusterIP   None             <none>        3306/TCP                        27m       app=appc-db,release=dev
onap          robot                      NodePort    10.110.229.236   <none>        88:30209/TCP                    27m       app=robot,release=dev


Get more detail about a single pod by using "describe" with its resource name; the resource names are shown in the "get all" output above.

$ kubectl -n onap describe po/dev-appc-0
Name:           dev-appc-0
Namespace:      onap
Node:           k8s-appc2/10.12.5.193
Start Time:     Tue, 15 May 2018 11:31:47 -0400
Labels:         app=appc
                controller-revision-hash=dev-appc-7d976dd9b9
                release=dev
                statefulset.kubernetes.io/pod-name=dev-appc-0
Annotations:    <none>
Status:         Running
IP:             10.47.0.5
Controlled By:  StatefulSet/dev-appc
Init Containers:
  appc-readiness:
    Container ID:  docker://fdbf3011e7911b181a25c868f7d342951ced2832ed63c481253bb06447a0c04f
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      appc-db
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 15 May 2018 11:32:00 -0400
      Finished:     Tue, 15 May 2018 11:32:16 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
Containers:
  appc:
    Container ID:  docker://2b921a54a6cc19f9b7cdd3c8e7904ae3426019224d247fc31a74f92ec6f05ba0
    Image:         nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest
    Image ID:      docker-pullable://nexus3.onap.org:10001/onap/appc-image@sha256:ee8b64bd578f42169a86951cd45b1f2349192e67d38a7a350af729d1bf33069c
    Ports:         8181/TCP, 1830/TCP
    Command:
      /opt/appc/bin/startODL.sh
    State:          Running
      Started:      Tue, 15 May 2018 11:40:13 -0400
    Ready:          True
    Restart Count:  0
    Readiness:      tcp-socket :8181 delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'db-root-password' in secret 'dev-appc'>  Optional: false
      SDNC_CONFIG_DIR:      /opt/onap/appc/data/properties
      APPC_CONFIG_DIR:      /opt/onap/appc/data/properties
      DMAAP_TOPIC_ENV:      SUCCESS
      ENABLE_ODL_CLUSTER:   true
      APPC_REPLICAS:        3
    Mounts:
      /etc/localtime from localtime (ro)
      /opt/onap/appc/bin/installAppcDb.sh from onap-appc-bin (rw)
      /opt/onap/appc/bin/startODL.sh from onap-appc-bin (rw)
      /opt/onap/appc/data/properties/aaiclient.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/data/properties/appc.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/data/properties/dblib.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/data/properties/svclogic.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/svclogic/bin/showActiveGraphs.sh from onap-appc-svclogic-bin (rw)
      /opt/onap/appc/svclogic/config/svclogic.properties from onap-appc-svclogic-config (rw)
      /opt/onap/ccsdk/bin/installSdncDb.sh from onap-sdnc-bin (rw)
      /opt/onap/ccsdk/bin/startODL.sh from onap-sdnc-bin (rw)
      /opt/onap/ccsdk/data/properties/aaiclient.properties from onap-sdnc-data-properties (rw)
      /opt/onap/ccsdk/data/properties/dblib.properties from onap-sdnc-data-properties (rw)
      /opt/onap/ccsdk/data/properties/svclogic.properties from onap-sdnc-data-properties (rw)
      /opt/onap/ccsdk/svclogic/bin/showActiveGraphs.sh from onap-sdnc-svclogic-bin (rw)
      /opt/onap/ccsdk/svclogic/config/svclogic.properties from onap-sdnc-svclogic-config (rw)
      /opt/opendaylight/current/daexim from dev-appc-data (rw)
      /opt/opendaylight/current/etc/org.ops4j.pax.logging.cfg from log-config (rw)
      /var/log/onap from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
  filebeat-onap:
    Container ID:   docker://b9143c9898a4a071d1d781359e190bdd297e31a2bd04223225a55ff8b1990b32
    Image:          docker.elastic.co/beats/filebeat:5.5.0
    Image ID:       docker-pullable://docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942
    Port:           <none>
    State:          Running
      Started:      Tue, 15 May 2018 11:40:14 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/filebeat/data from data-filebeat (rw)
      /usr/share/filebeat/filebeat.yml from filebeat-conf (rw)
      /var/log/onap from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  dev-appc-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dev-appc-data-dev-appc-0
    ReadOnly:   false
  localtime:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/localtime
  filebeat-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-filebeat
    Optional:  false
  log-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-logging-cfg
    Optional:  false
  logs:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  data-filebeat:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  onap-appc-data-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-data-properties
    Optional:  false
  onap-appc-svclogic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-svclogic-config
    Optional:  false
  onap-appc-svclogic-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-svclogic-bin
    Optional:  false
  onap-appc-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-bin
    Optional:  false
  onap-sdnc-data-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-data-properties
    Optional:  false
  onap-sdnc-svclogic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-svclogic-config
    Optional:  false
  onap-sdnc-svclogic-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-svclogic-bin
    Optional:  false
  onap-sdnc-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-bin
    Optional:  false
  default-token-v9mnv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-v9mnv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                Message
  ----     ------                 ----               ----                -------
  Warning  FailedScheduling       29m (x2 over 29m)  default-scheduler   pod has unbound PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled              29m                default-scheduler   Successfully assigned dev-appc-0 to k8s-appc2
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "data-filebeat"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "localtime"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "logs"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "dev-appc-data0"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "onap-sdnc-svclogic-bin"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "onap-sdnc-bin"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "onap-appc-data-properties"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "onap-sdnc-data-properties"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "filebeat-conf"
  Normal   SuccessfulMountVolume  29m (x6 over 29m)  kubelet, k8s-appc2  (combined from similar events): MountVolume.SetUp succeeded for volume "default-token-v9mnv"
  Normal   Pulling                29m                kubelet, k8s-appc2  pulling image "oomk8s/readiness-check:2.0.0"
  Normal   Pulled                 29m                kubelet, k8s-appc2  Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal   Created                29m                kubelet, k8s-appc2  Created container
  Normal   Started                29m                kubelet, k8s-appc2  Started container
  Normal   Pulling                29m                kubelet, k8s-appc2  pulling image "nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest"
  Normal   Pulled                 21m                kubelet, k8s-appc2  Successfully pulled image "nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest"
  Normal   Created                21m                kubelet, k8s-appc2  Created container
  Normal   Started                21m                kubelet, k8s-appc2  Started container
  Normal   Pulling                21m                kubelet, k8s-appc2  pulling image "docker.elastic.co/beats/filebeat:5.5.0"
  Normal   Pulled                 21m                kubelet, k8s-appc2  Successfully pulled image "docker.elastic.co/beats/filebeat:5.5.0"
  Normal   Created                21m                kubelet, k8s-appc2  Created container
  Warning  Unhealthy              5m (x16 over 21m)  kubelet, k8s-appc2  Readiness probe failed: dial tcp 10.47.0.5:8181: getsockopt: connection refused


Get logs of containers inside each pod:

$ kubectl describe pod dev-appc-0 -n onap
$ kubectl logs dev-appc-0 appc-readiness -n onap  # add -v=n (n between 1 and 10) for more verbose output
2018-05-15 15:32:00,749 - INFO - Checking if appc-db  is ready
2018-05-15 15:32:00,821 - INFO - appc-db is not ready.
2018-05-15 15:32:05,826 - INFO - Checking if appc-db  is ready
2018-05-15 15:32:05,877 - INFO - appc-db is not ready.
2018-05-15 15:32:10,883 - INFO - Checking if appc-db  is ready
2018-05-15 15:32:10,958 - INFO - appc-db is not ready.
2018-05-15 15:32:15,963 - INFO - Checking if appc-db  is ready
2018-05-15 15:32:16,022 - INFO - appc-db is ready!
$ kubectl logs dev-appc-0 appc -n onap
$ kubectl logs dev-appc-0 filebeat-onap -n onap

$ kubectl describe pod dev-appc-db-0 -n onap
$ kubectl logs dev-appc-db-0 appc-db -n onap
$ kubectl logs dev-appc-db-0 init-mysql -n onap
$ kubectl logs dev-appc-db-0 clone-mysql -n onap
$ kubectl logs dev-appc-db-0 xtrabackup -n onap
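The same logs can be streamed or trimmed with standard kubectl flags; for example:

# follow the APPC container log in real time
$ kubectl logs -f dev-appc-0 appc -n onap
# show only the last 100 lines
$ kubectl logs dev-appc-0 appc -n onap --tail=100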


List of Persistent Volumes

Each DB pod has a persistent volume claim (PVC) linked to a PV. The PVC capacity must be less than or equal to that of the PV, and both must have a status of "Bound".

$ kubectl get pv -n onap
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                               STORAGECLASS       REASON    AGE
dev-appc-data0                 1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-0       dev-appc-data                41m
dev-appc-data1                 1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-2       dev-appc-data                41m
dev-appc-data2                 1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-1       dev-appc-data                41m
dev-appc-db-data               1Gi        RWX            Retain           Bound     onap/dev-appc-db-data               dev-appc-db-data             41m

$ kubectl get pvc -n onap
NAME                           STATUS    VOLUME                         CAPACITY   ACCESS MODES   STORAGECLASS       AGE
dev-appc-data-dev-appc-0       Bound     dev-appc-data0                 1Gi        RWO            dev-appc-data      42m
dev-appc-data-dev-appc-1       Bound     dev-appc-data2                 1Gi        RWO            dev-appc-data      42m
dev-appc-data-dev-appc-2       Bound     dev-appc-data1                 1Gi        RWO            dev-appc-data      42m
dev-appc-db-data               Bound     dev-appc-db-data               1Gi        RWX            dev-appc-db-data   42m
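To check programmatically that every claim is Bound, jsonpath output can be used; a small convenience, not required by the procedure:

# print each PVC name and its phase
$ kubectl get pvc -n onap -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'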
$ kubectl get serviceaccounts --all-namespaces
$ kubectl get clusterrolebinding --all-namespaces


$ kubectl get deployment --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1d
kube-system   tiller-deploy   1         1         1            1           1d


Scale up or down APPC pods

# decrease APPC pods to 1
$ kubectl scale statefulset dev-appc -n onap --replicas=1
statefulset "dev-appc" scaled

# verify that two APPC pods terminate with one APPC pod running
$ kubectl get pods --all-namespaces -a | grep dev-appc
onap          dev-appc-0                                      2/2       Running       0          43m
onap          dev-appc-1                                      2/2       Terminating   0          43m
onap          dev-appc-2                                      2/2       Terminating   0          43m
onap          dev-appc-cdt-8cbf9d4d9-mhp4b                    1/1       Running       0          43m
onap          dev-appc-db-0                                   2/2       Running       0          43m
onap          dev-appc-dgbuilder-54766c5b87-xw6c6             1/1       Running       0          43m


# increase APPC pods to 3
$ kubectl scale statefulset dev-appc -n onap --replicas=3
statefulset "dev-appc" scaled

# verify that three APPC pods are running
$ kubectl get pods --all-namespaces -o wide | grep dev-appc
onap          dev-appc-0                                      2/2       Running   0          49m       10.47.0.5     k8s-appc2
onap          dev-appc-1                                      2/2       Running   0          3m        10.36.0.8     k8s-appc3
onap          dev-appc-2                                      2/2       Running   0          3m        10.44.0.7     k8s-appc1
onap          dev-appc-cdt-8cbf9d4d9-mhp4b                    1/1       Running   0          49m       10.47.0.1     k8s-appc2
onap          dev-appc-db-0                                   2/2       Running   0          49m       10.36.0.5     k8s-appc3
onap          dev-appc-dgbuilder-54766c5b87-xw6c6             1/1       Running   0          49m       10.44.0.2     k8s-appc1
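Instead of grepping pod lists, the StatefulSet itself can be checked to confirm that the desired and current replica counts match:

$ kubectl get statefulset dev-appc -n onap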


