This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy APPC within that Kubernetes cluster.
(To view the current page, Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems.)
What is OpenStack? What is Kubernetes? What is Docker?
In the OpenStack lab, the controller performs the function of partitioning resources. The compute nodes are the collection of resources (memory, CPUs, hard drive space) to be partitioned. When creating a VM with "X" memory, "Y" CPUs and "Z" hard drive space, OpenStack's controller reviews its pool of available resources, allocates the quota, and then creates the VM on one of the available compute nodes. Many VMs can be created on a single compute node. OpenStack's controller uses many criteria to choose a compute node, but if an application spans multiple VMs, affinity rules can be used to ensure the VMs do not congregate on a single compute node, which would be bad for resilience.
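For illustration, OpenStack can enforce this spreading at creation time with an anti-affinity server group. The sketch below wraps the commands in a helper function; the group name, flavor, and image values are placeholders, not from this lab:

```shell
# Sketch: spread VMs across compute nodes with an anti-affinity server group.
# "appc-anti-affinity" and the flavor/image names are placeholders.
create_spread_vms() {
  # All members of an anti-affinity server group are scheduled on different compute nodes.
  openstack server group create --policy anti-affinity appc-anti-affinity
  for node in k8s-node1 k8s-node2 k8s-node3; do
    openstack server create --flavor "flavor-name" \
      --image "ubuntu-16.04-server-cloudimg-amd64-disk1" \
      --hint group="<server-group-uuid>" "$node"
  done
}
```

The `--hint group=<uuid>` scheduler hint ties each VM to the group created in the first command; substitute the UUID printed by `openstack server group create`.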
...
Deployment Architecture
The Kubernetes deployment in this tutorial will be set up on top of OpenStack VMs. Let's call this the undercloud. The undercloud can be physical boxes or VMs. The VMs can come from different cloud providers, but in this tutorial we will use OpenStack. The following table shows the layers of software that need to be considered when thinking about resilience:
Hardware + Base OS | OpenStack Software Configured on Base OS | VMs Deployed by OpenStack | Kubernetes Software Configured on VMs | Pods Deployed by Kubernetes | Docker Containers Deployed within a Pod
---|---|---|---|---|---
Computer 1 | Controller Node | | | |
Computer 2 | Compute | VM 1 | k8s-master | |
Computer 3 | Compute | VM 2 | k8s-node1 | appc-0 | appc-controller-container, filebeat-onap
| | | | appc-db-0 | appc-db-container, xtrabackup
Computer 4 | Compute | VM 3 | k8s-node2 | appc-1 | appc-controller-container, filebeat-onap
Computer 5 | Compute | VM 4 | k8s-node3 | appc-2 | appc-controller-container, filebeat-onap
Setting up an OpenStack lab is out of scope for this tutorial. Assuming that you have a lab, you will need to create 1+n VMs: one to be configured as the Kubernetes master node, and "n" to be configured as Kubernetes worker nodes. We will create 3 Kubernetes worker nodes for this tutorial because we want each of our APPC replicas to run on a different VM for resiliency.
...
```shell
openstack server list; openstack network list; openstack flavor list; openstack keypair list; openstack image list; openstack security group list

openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-master"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node1"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node2"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node3"
```
Configure Each VM
Repeat the following steps on each VM:
Pre-Configure Each VM
Make sure the VMs are:
- Up to date
- Clock-synchronized
...
```shell
# (Optional) Fix the vi bug in some versions of MobaXterm (it changes the first
# letter of an edited file to "g" after opening). Repeat for root, ubuntu, and
# any other user that will edit files. Add the following 2 lines to ~/.vimrc:
#   syntax on
#   set background=dark
vi ~/.vimrc

# Add the hostnames of the Kubernetes nodes (master and workers) to /etc/hosts
sudo vi /etc/hosts
# <IP address> <hostname>

# Turn off the firewall and allow all incoming HTTP connections through iptables
sudo ufw disable
sudo iptables -I INPUT -j ACCEPT

# Fix the server timezone; select your timezone.
sudo dpkg-reconfigure tzdata

# (Optional) Create a bash history file as the ubuntu user so that it does not
# accidentally get created as the root user.
touch ~/.bash_history

# (Optional) Turn on ssh password authentication and give the ubuntu user a
# password if you do not like using ssh keys: set "PasswordAuthentication yes"
# in /etc/ssh/sshd_config, then set the ubuntu password.
sudo vi /etc/ssh/sshd_config; sudo systemctl restart sshd; sudo passwd ubuntu

# Update the VM with the latest core packages
sudo apt clean
sudo apt update
sudo apt -y full-upgrade
sudo reboot

# Set up NTP on your image if needed. It is important that all the VMs' clocks
# are in sync, or it will cause problems joining Kubernetes nodes to the cluster.
sudo apt install ntp
sudo apt install ntpdate

# It is recommended to add your local NTP hostname or NTP server's IP address
# to ntp.conf. The best choice for the NTP server is a solid machine different
# from the Kubernetes VMs. Make sure you can ping it!
# A service restart is needed to sync the time; run the commands below for an
# immediate change. Append the server lines to /etc/ntp.conf to make them permanent.
sudo vi /etc/ntp.conf

date
sudo service ntp stop
sudo ntpdate -s <ntp-hostname | ntp server's IP address>   # e.g.: sudo ntpdate -s 10.247.5.11
sudo service ntp start
date

# Some of the clustering scripts (switch_voting.sh and appc_cluster.sh) require
# JSON parsing, so install jq on the master only
sudo apt install jq
```
Question: Did you check the date on all K8s nodes to make sure they are in sync?
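One hedged way to answer that question: collect each node's epoch seconds and compare. The helper below is our own sketch (not from this tutorial), assuming ssh access to the nodes:

```shell
# Sketch: "max_clock_skew" is our own helper. It reads "<host> <epoch-seconds>"
# lines on stdin and prints the worst-case skew in seconds across all hosts.
max_clock_skew() {
  awk 'NR==1 { min=$2; max=$2 }
       { if ($2 < min) min=$2; if ($2 > max) max=$2 }
       END { print max - min }'
}
# Usage (assumes ssh access to the hosts added to /etc/hosts above):
#   for h in k8s-master k8s-node1 k8s-node2 k8s-node3; do
#     echo "$h $(ssh "$h" date +%s)"
#   done | max_clock_skew
```

A result of more than a second or two suggests NTP is not yet converged on some node.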
Install Docker
The ONAP applications are packaged as Docker containers.
...
```shell
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

# Add a docker repository to "/etc/apt/sources.list" for the latest stable
# release for the Ubuntu flavour on the machine ("lsb_release -cs")
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce

sudo docker run hello-world

# Verify:
sudo docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                     PORTS   NAMES
c66d903a0b1f   hello-world   "/hello"   10 seconds ago   Exited (0) 9 seconds ago           vigorous_bhabha
```
Install the Kubernetes Packages
Just install the packages; there is no need to configure them yet.
...
```shell
# The "sudo -i" changes user to root.
sudo -i
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Add a Kubernetes repository for the latest stable release for the Ubuntu
# flavour on the machine (here: xenial)
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update

# As of today (late April 2018), version 1.10.1 of the Kubernetes packages is
# available. To install that version, you can run:
#   apt-get install -y kubectl=1.10.1-00; apt-get install -y kubelet=1.10.1-00; apt-get install -y kubeadm
# If your environment is set up for "Kubernetes federation", then you need
# kubefed v1.10.1. We recommend that all Kubernetes packages be of the same version.

# To install an older version of the Kubernetes packages:
apt-get install -y kubelet=1.8.10-00 kubernetes-cni=0.5.1-00
apt-get install -y kubectl=1.8.10-00
apt-get install -y kubeadm

# Option to install the latest version of the Kubernetes packages:
apt-get install -y kubelet kubeadm kubectl

# Verify versions
kubectl version
kubeadm version
kubelet --version
exit

# Append the following lines to ~/.bashrc (ubuntu user) to enable kubectl and
# kubeadm command auto-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
echo "source <(kubeadm completion bash)" >> ~/.bashrc
```
Note: If you intend to remove the Kubernetes packages, use "apt autoremove kubelet; apt autoremove kubeadm; apt autoremove kubectl; apt autoremove kubernetes-cni".
Configure the Kubernetes Cluster with kubeadm
kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster. Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.
Configure the Kubernetes Master Node (k8s-master)
The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file as there is information which you will need to refer to afterwards.
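The invocation itself is not shown in the captured output below; a minimal sketch consistent with the log path this tutorial refers to later (the helper name is our own, not a kubeadm command):

```shell
# Sketch: run kubeadm init as root and keep the output, since the final
# "kubeadm join ..." line is needed later when joining worker nodes.
init_master() {
  # $1: log file path (this tutorial later reads /root/kubeadm_init.log)
  kubeadm init | tee "$1"
}
# e.g., as root on k8s-master:
#   init_master /root/kubeadm_init.log
```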
...
```shell
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.7
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubefed-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.114.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 44.002324 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubefed-1 as master by adding a label and a taint
[markmaster] Master kubefed-1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 2246a6.83b4c7ca38913ce1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a
```
NOTE: the "kubeadm join ..." command shown in the kubeadm init log must be run on each worker VM to join it to the cluster. Afterwards, use "kubectl get nodes" to make sure all nodes have joined.
Execute the following snippet (as the ubuntu user) to get kubectl to work.
...
```shell
# If you installed the coredns addon
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE   IP              NODE
kube-system   coredns-65dcdb4cf-8dr7w              0/1     Pending   0          10m   <none>          <none>
kube-system   etcd-k8s-master                      1/1     Running   0          9m    10.147.99.149   k8s-master
kube-system   kube-apiserver-k8s-master            1/1     Running   0          9m    10.147.99.149   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          9m    10.147.99.149   k8s-master
kube-system   kube-proxy-jztl4                     1/1     Running   0          10m   10.147.99.149   k8s-master
kube-system   kube-scheduler-k8s-master            1/1     Running   0          9m    10.147.99.149   k8s-master
# (There will be 2 coredns pods with kubernetes version 1.10.1)

# If you did not install the coredns addon, a kube-dns pod will be created instead
sudo kubectl get pods --all-namespaces -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP              NODE
etcd-k8s-s1-master                      1/1     Running   0          23d   10.147.99.131   k8s-s1-master
kube-apiserver-k8s-s1-master            1/1     Running   0          23d   10.147.99.131   k8s-s1-master
kube-controller-manager-k8s-s1-master   1/1     Running   0          23d   10.147.99.131   k8s-s1-master
kube-dns-6f4fd4bdf-czn68                3/3     Pending   0          23d   <none>          <none>
kube-proxy-ljt2h                        1/1     Running   0          23d   10.147.99.148   k8s-s1-node0
kube-scheduler-k8s-s1-master            1/1     Running   0          23d   10.147.99.131   k8s-s1-master

# (Optional) Run the following commands if you are curious.
sudo kubectl get node
sudo kubectl get secret
sudo kubectl config view
sudo kubectl config current-context
sudo kubectl get componentstatus
sudo kubectl get clusterrolebinding --all-namespaces
sudo kubectl get serviceaccounts --all-namespaces
sudo kubectl get pods --all-namespaces -o wide
sudo kubectl get services --all-namespaces -o wide
sudo kubectl cluster-info
```
...
```shell
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE
kube-system   etcd-k8s-master                      1/1     Running   0          1m    10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1     Running   0          1m    10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          1m    10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3     Running   0          44m   10.32.0.2        k8s-master
kube-system   kube-proxy-lnv7r                     1/1     Running   0          44m   10.147.112.140   k8s-master
kube-system   kube-scheduler-k8s-master            1/1     Running   0          1m    10.147.112.140   k8s-master
kube-system   weave-net-b2hkh                      2/2     Running   0          1m    10.147.112.140   k8s-master
# (There will be 2 coredns pods with different IP addresses, with kubernetes version 1.10.1)

# Verify that the AVAILABLE flag for the "kube-dns" or "coredns" deployment has
# changed to 1 (2 with kubernetes version 1.10.1)
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           1h
```
Troubleshooting tips:
- If any of the weave pods gets stuck in the "ImagePullBackOff" state, you can try running "sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"" again.
- Sometimes you need to delete a problematic pod so that it terminates and starts fresh. Use "kubectl delete po/<pod-name> -n <name-space>" to delete a pod.
- To "unjoin" a worker node, use "kubectl delete node <node-name>" (go through the "Undeploy APPC" process at the end if you have an APPC cluster running).
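Spotting stuck pods in a long "kubectl get pods --all-namespaces" listing can be scripted; the filter below is our own sketch, not part of kubectl:

```shell
# Sketch: "stuck_pods" is our own helper. It reads "kubectl get pods
# --all-namespaces" output on stdin and prints "<namespace> <pod>" pairs
# whose STATUS column (4th field) is ImagePullBackOff.
stuck_pods() {
  awk '$4 == "ImagePullBackOff" { print $1, $2 }'
}
# Usage:
#   sudo kubectl get pods --all-namespaces | stuck_pods
```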
Install Helm and Tiller on the Kubernetes Master Node (k8s-master)
ONAP uses Helm, a package manager for kubernetes.
Install helm (client side). The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:
If you are using Casablanca code then use helm v2.9.1
```shell
# As a root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh -v v2.8.2
```
Install Tiller (the server side of helm)
Tiller manages installation of helm packages (charts). Tiller requires ServiceAccount setup in Kubernetes before being initialized. The following snippet will do that for you:
(Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems.)
```shell
# id
ubuntu
# As a ubuntu user, create a yaml file to define the helm service account and cluster role binding.
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: ""
EOF
# Create a ServiceAccount and ClusterRoleBinding based on the created file.
sudo kubectl create -f tiller-serviceaccount.yaml
# Verify
which helm
helm version
```
Initialize helm. This command installs Tiller. It also discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and using the default context.

```shell
helm init --service-account tiller --upgrade

# A new pod is created, but will be in Pending status until a worker node joins.
kubectl get pods --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5   0/1   Pending   0   7m   <none>   <none>

# A new service is created
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>   44134/TCP   47m   app=helm,name=tiller

# A new deployment is created, but the AVAILABLE flag is set to "0"
kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m
```
If you need to reset Helm, follow the steps below:

```shell
# Uninstall Tiller from the cluster
helm reset --force

# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding

# Re-create the service account and re-initialize helm
kubectl create -f tiller-serviceaccount.yaml
helm init --service-account tiller --upgrade
```
Configure the Kubernetes Worker Nodes (k8s-node<n>)
Setting up the cluster nodes is very easy. Just refer back to the "kubeadm init" output log (/root/kubeadm_init.log). The last line of the log contains a "kubeadm join" command with the token and other parameters.
Capture those parameters and then execute it as root on each of the Kubernetes worker nodes: k8s-node1, k8s-node2, and k8s-node3.
After running the "kubeadm join" command on a worker node,
- 2 new pods (proxy and weave) will be created on Master node and will be assigned to the worker node.
- The tiller pod status will change to "Running".
- The AVAILABLE flag for tiller-deploy deployment will be changed to "1".
- The worker node will join the cluster.
The command looks like the following snippet (find the command at the bottom of /root/kubeadm_init.log):
```shell
# Change to the root user on the worker node, then run the captured join command:
kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a

# Make sure the output contains "This node has joined the cluster:".
```
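If the init log is still on the master, the join line can be recovered mechanically rather than copied by hand; a small sketch (the function name is our own):

```shell
# Sketch: "extract_join_command" is our own helper name; the log path is the
# one this tutorial captures during kubeadm init.
extract_join_command() {
  # print the last "kubeadm join ..." line found in the given log file
  grep -o 'kubeadm join .*' "$1" | tail -1
}
# Usage: extract_join_command /root/kubeadm_init.log
```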
Verify the results from the master node:

```shell
kubectl get pods --all-namespaces -o wide
kubectl get nodes

# Sample Output:
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    2h        v1.8.6
k8s-node1    Ready     <none>    53s       v1.8.6
```
Make sure you run the same "kubeadm join" command once on each worker node, and verify the results.
Return to the Kubernetes master node VM and execute the "kubectl get nodes" command to see all the Kubernetes nodes. It might take some time, but eventually each node will have a status of "Ready":
```shell
kubectl get nodes

# Sample Output:
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.8.5
k8s-node1    Ready     <none>    1d        v1.8.5
k8s-node2    Ready     <none>    1d        v1.8.5
k8s-node3    Ready     <none>    1d        v1.8.5
```
...
Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.
Cluster's Full Picture
You can run "kubectl describe node" on the master node to get a complete report on the nodes (including workers) and their system resources.
Configure dockerdata-nfs
This is a shared directory which must be mounted on all of the Kubernetes VMs (master node and worker nodes), because many of the ONAP pods use this directory to share data.
See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instructions on how to set this up.
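The linked page has the full procedure; as a rough sketch (the server placement and mount options below are assumptions, not from this tutorial), the share reduces to an NFS export on one node and an fstab entry on the others:

```
# /etc/exports on the NFS server (e.g. the master), then restart nfs-kernel-server:
/dockerdata-nfs *(rw,sync,no_root_squash,no_subtree_check)

# /etc/fstab entry on each worker node; <master-ip> is a placeholder:
<master-ip>:/dockerdata-nfs /dockerdata-nfs nfs auto,nofail 0 0
```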
Configure ONAP
Clone OOM Project (only on the Kubernetes Master Node)
As the ubuntu user, clone the oom repository.
...
Note: You may use any specific known stable OOM release for APPC deployment. The above URL downloads the latest OOM.
Customize the oom/kubernetes/onap parent chart, such as the values.yaml file, to suit your deployment. You may want to selectively enable or disable ONAP components by changing the subchart **enabled** flags to *true* or *false*.
```shell
ubuntu@k8s-s1-master:/home/ubuntu/# vi oom/kubernetes/onap/values.yaml
```

Example:
```yaml
...
robot: # Robot Health Check
  enabled: true
sdc:
  enabled: false
appc:
  enabled: true
so: # Service Orchestrator
  enabled: false
```
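To double-check which subcharts you left enabled, the flags can be scanned mechanically; a sketch (the helper name and the awk logic are our own, not part of OOM):

```shell
# Sketch: "enabled_components" is our own helper. It reads a values.yaml-style
# file on stdin and prints each top-level component whose first "enabled:"
# flag is true (nested enabled flags, e.g. under so.liveness, are ignored).
enabled_components() {
  awk '/^[A-Za-z0-9-]+:/ { comp=$1; sub(/:$/, "", comp); seen=0 }
       /^[[:space:]]+enabled:/ && !seen { seen=1; if ($2 == "true") print comp }'
}
# Usage: enabled_components < oom/kubernetes/onap/values.yaml
```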
Deploy APPC
To deploy only APPC, customize the parent chart to disable all components except APPC, as shown in the file below. Also set global.persistence.mountPath to a non-mounted directory (by default, it is set to the mounted directory /dockerdata-nfs).
```shell
# Note that all components are changed to enabled:false except appc, robot, and mysql.
# Here we set the number of APPC replicas to 3.
$ cat ~/oom/kubernetes/onap/values.yaml
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302

  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositoryCred:
    user: docker
    password: docker

  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co

  # image pull policy
  pullPolicy: Always

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /dockerdata-nfs

  # flag to enable debugging - application support required
  debugEnabled: false

  # Repository for creation of nexus3.onap.org secret
  repository: nexus3.onap.org:10001

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: true
  replicaCount: 3
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: admin
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
dcaegen2:
  enabled: false
dmaap:
  enabled: false
esr:
  enabled: false
log:
  enabled: false
sniro-emulator:
  enabled: false
oof:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
policy:
  enabled: false
portal:
  enabled: false
robot:
  enabled: true
sdc:
  enabled: false
sdnc:
  enabled: false
  replicaCount: 1
  config:
    enableClustering: false
  mysql:
    disableNfsProvisioner: true
    replicaCount: 1
so:
  enabled: false
  replicaCount: 1
  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: true
  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
  # configure embedded mariadb
  mariadb:
    config:
      mariadbRootPassword: password
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
```
Note: If you set the number of APPC replicas in onap/values.yaml, it overrides the setting you are about to make in the next step.
Run the command below to set up a local Helm repository that serves the local ONAP charts:

```shell
# Press "Enter" after running the command to get the prompt back
$ nohup helm serve &
[1] 2316
$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879

# Verify
$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879
```
If you don't find the local repo, add it manually.
Note the IP (localhost) and port number listed in the response above (8879 here) and use them in the "helm repo add" command as follows:
```shell
$ helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
```
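The verification can be scripted as well; a sketch (the helper name and the check are our own, not part of helm):

```shell
# Sketch: "has_local_repo" is our own helper. It scans "helm repo list" output
# on stdin and succeeds only if a repo named "local" points at 127.0.0.1:8879.
has_local_repo() {
  awk '$1 == "local" && $2 ~ /127\.0\.0\.1:8879/ { found=1 } END { exit !found }'
}
# Usage: helm repo list | has_local_repo && echo "local repo is registered"
```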
Install "make" (learn more about ubuntu-make here: https://wiki.ubuntu.com/ubuntu-make) and build a local Helm repository (from the kubernetes directory):
```shell
#######################
# Install make from the kubernetes directory.
#######################
$ sudo apt install make
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  linux-headers-4.4.0-62 linux-headers-4.4.0-62-generic linux-image-4.4.0-62-generic snap-confine
Use 'sudo apt autoremove' to remove them.
Suggested packages:
  make-doc
The following NEW packages will be installed:
  make
0 upgraded, 1 newly installed, 0 to remove and 72 not upgraded.
Need to get 151 kB of archives.
After this operation, 365 kB of additional disk space will be used.
Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Fetched 151 kB in 0s (208 kB/s)
Selecting previously unselected package make.
(Reading database ... 121778 files and directories currently installed.)
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up make (4.1-6) ...

#######################
# Build local helm repo
#######################
$ make all
[common]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
make[2]: Entering directory '/home/ubuntu/oom/kubernetes/common'
[common]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
==> Linting common
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/common-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
[dgbuilder]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. Happy Helming!
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting dgbuilder
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to:
```
Note: If you set the number of appc replicas in onap/values.yaml, it overrides the setting you are about to make in the next step.
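To see that override concretely, here is a minimal shell sketch. The YAML text below is a hypothetical excerpt of onap/values.yaml (illustration only, not the real file); the awk line simply pulls out the replica count Helm would apply to the appc sub-chart:

```shell
# Hypothetical excerpt of onap/values.yaml (illustration only):
values='
appc:
  enabled: true
  replicaCount: 3'

# A replicaCount set here in the parent chart is what Helm applies to appc,
# overriding the per-chart default.
replicas=$(printf '%s\n' "$values" | awk '/replicaCount:/ {print $2}')
echo "appc replicas: $replicas"   # -> appc replicas: 3
```

If the parent chart sets no replicaCount, the appc chart's own default is used instead.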
Run the command below to set up a local Helm repository that serves up the local ONAP charts:
Code Block |
---|
# Press "Enter" after running the command to get the prompt back
$ nohup helm serve &
[1] 2316
$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879

# Verify
$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879
|
If you don't find the local repo in that list, add it manually.
Note the IP (localhost) and port number listed in the response above (8879 here) and use them in the "helm repo add" command as follows:
Code Block |
---|
$ helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
|
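This check can also be scripted, for example as a fail-fast step in a setup script. The sketch below runs against the sample `helm repo list` output captured above, so the parsing can be tested without a live Helm; with Helm available you would pipe the real command output instead:

```shell
# Sample `helm repo list` output (captured above); with live Helm, use:
#   helm repo list | awk ...
repo_list='NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879'

# Succeed only if the "local" repo points at the expected URL.
if printf '%s\n' "$repo_list" | awk '$1 == "local" && $2 == "http://127.0.0.1:8879"' | grep -q .; then
  echo "local repo OK"
else
  echo "local repo missing -- run: helm repo add local http://127.0.0.1:8879" >&2
fi
```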
Build the local Helm repository (from the kubernetes directory):
Code Block |
---|
$ make all
[common]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
make[2]: Entering directory '/home/ubuntu/oom/kubernetes/common'
[common]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
==> Linting common
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/common-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
[dgbuilder]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. Happy Helming!
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting dgbuilder
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dgbuilder-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'

... (the same lint/package cycle repeats for every chart: postgres, mysql,
vid, so, cli, aaf, log, esr, mock, multicloud, mso, dcaegen2, vnfsdk,
policy, consul, clamp, appc, sdc, portal, aai, robot, msb, vfc,
message-router, uui, sdnc) ...

[appc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. Happy Helming!
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting appc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/appc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'

[onap]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. Happy Helming!
Saving 24 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading message-router from repo http://127.0.0.1:8879
Downloading mock from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/onap-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
|
Note |
---|
Setup of this Helm repository is a one-time activity. If you make changes to your deployment charts or values, make sure to run the **make** command again to update your local Helm repository. |
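After rebuilding, it is worth confirming that the chart you changed was actually repackaged under dist/packages. The sketch below only checks the filesystem; the real directory would be ~/oom/kubernetes/dist/packages, but a temporary stand-in directory is used here so the check itself can be run anywhere (a per-chart target such as `make appc` is an assumption about the OOM Makefile, so fall back to `make all` if it is not available):

```shell
# Stand-in for oom/kubernetes/dist/packages so the check runs anywhere.
pkgdir=$(mktemp -d)
touch "$pkgdir/appc-2.0.0.tgz"   # stand-in for the real packaged chart

# Fail loudly if the expected package is missing after a rebuild.
if ls "$pkgdir"/appc-*.tgz >/dev/null 2>&1; then
  echo "appc chart packaged"
else
  echo "appc chart missing -- re-run make" >&2
fi
rm -rf "$pkgdir"
```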
Once the repo is set up, ONAP can be installed with a single command:
Code Block |
---|
Example:
$ helm install local/onap --name <Release-name> --namespace onap

# we choose "dev" as our release name here
Execute:
$ helm install local/onap --name dev --namespace onap

NAME:   dev
LAST DEPLOYED: Tue May 15 11:31:44 2018
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                      TYPE                     DATA  AGE
dev-appc-dgbuilder        Opaque                   1     1s
dev-appc-db               Opaque                   1     1s
dev-appc                  Opaque                   1     1s
onap-docker-registry-key  kubernetes.io/dockercfg  1     1s

==> v1/PersistentVolumeClaim
NAME              STATUS  VOLUME            CAPACITY  ACCESS MODES  STORAGECLASS      AGE
dev-appc-db-data  Bound   dev-appc-db-data  1Gi       RWX           dev-appc-db-data  1s

==> v1/Service
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                        AGE
appc-cdt          NodePort   10.107.253.179  <none>       80:30289/TCP                   1s
appc-dgbuilder    NodePort   10.102.138.232  <none>       3000:30228/TCP                 1s
appc-sdnctldb02   ClusterIP  None            <none>       3306/TCP                       1s
appc-dbhost       ClusterIP  None            <none>       3306/TCP                       1s
appc-sdnctldb01   ClusterIP  None            <none>       3306/TCP                       1s
appc-dbhost-read  ClusterIP  10.101.117.102  <none>       3306/TCP                       1s
appc              NodePort   10.107.234.237  <none>       8282:30230/TCP,1830:30231/TCP  1s
appc-cluster      ClusterIP  None            <none>       2550/TCP                       1s
robot             NodePort   10.110.229.236  <none>       88:30209/TCP                   0s

==> v1beta1/StatefulSet
NAME         DESIRED  CURRENT  AGE
dev-appc-db  1        1        0s
dev-appc     3        3        0s

==> v1/ConfigMap
NAME                                        DATA  AGE
dev-appc-dgbuilder-scripts                  2     1s
dev-appc-dgbuilder-config                   1     1s
dev-appc-db-db-configmap                    2     1s
dev-appc-onap-appc-data-properties          4     1s
dev-appc-onap-sdnc-svclogic-config          1     1s
dev-appc-onap-appc-svclogic-bin             1     1s
dev-appc-onap-sdnc-svclogic-bin             1     1s
dev-appc-onap-sdnc-bin                      2     1s
dev-appc-filebeat                           1     1s
dev-appc-logging-cfg                        1     1s
dev-appc-onap-sdnc-data-properties          3     1s
dev-appc-onap-appc-svclogic-config          1     1s
dev-appc-onap-appc-bin                      2     1s
dev-robot-eteshare-configmap                4     1s
dev-robot-resources-configmap               3     1s
dev-robot-lighttpd-authorization-configmap  1     1s

==> v1/PersistentVolume
NAME              CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                          STORAGECLASS      REASON  AGE
dev-appc-db-data  1Gi       RWX           Retain          Bound   onap/dev-appc-db-data          dev-appc-db-data          1s
dev-appc-data0    1Gi       RWO           Retain          Bound   onap/dev-appc-data-dev-appc-0  dev-appc-data             1s
dev-appc-data2    1Gi       RWO           Retain          Bound   onap/dev-appc-data-dev-appc-1  dev-appc-data             1s
dev-appc-data1    1Gi       RWO           Retain          Bound   onap/dev-appc-data-dev-appc-2  dev-appc-data             1s

==> v1beta1/ClusterRoleBinding
NAME          AGE
onap-binding  1s

==> v1beta1/Deployment
NAME                DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
dev-appc-cdt        1        1        1           0          0s
dev-appc-dgbuilder  1        1        1           0          0s
dev-robot           1        0        0           0          0s

==> v1/Pod(related)
NAME                                 READY  STATUS             RESTARTS  AGE
dev-appc-cdt-8cbf9d4d9-mhp4b         0/1    ContainerCreating  0         0s
dev-appc-dgbuilder-54766c5b87-xw6c6  0/1    Init:0/1           0         0s
dev-appc-db-0                        0/2    Init:0/2           0         0s
dev-appc-0                           0/2    Pending            0         0s
dev-appc-1                           0/2    Pending            0         0s
dev-appc-2                           0/2    Pending            0         0s
|
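In a script, the DEPLOYED state can be pulled out of the install/status header rather than read by eye. The sketch below parses a captured sample of that header, so it can be tested without a cluster; with live Helm you would feed it `helm status dev` output instead:

```shell
# Sample header as printed by `helm install` / `helm status` (captured above).
status_out='NAME:   dev
LAST DEPLOYED: Tue May 15 11:31:44 2018
NAMESPACE: onap
STATUS: DEPLOYED'

# Extract the release state from the STATUS line.
state=$(printf '%s\n' "$status_out" | awk -F': ' '/^STATUS:/ {print $2}')
echo "$state"   # -> DEPLOYED
```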
Note |
---|
The **--namespace onap** flag is currently required while the onap helm charts are being migrated to version 2.0. Once this migration is complete, the namespace flag will be optional. |
Use the following to monitor your deployment and determine when ONAP is ready for use:
Code Block |
---|
ubuntu@k8s-master:~/oom/kubernetes$ kubectl get pods --all-namespaces -o wide -w
NAMESPACE    NAME                                 READY  STATUS           RESTARTS  AGE  IP           NODE
kube-system  etcd-k8s-master                      1/1    Running          5         14d  10.12.5.171  k8s-master
kube-system  kube-apiserver-k8s-master            1/1    Running          5         14d  10.12.5.171  k8s-master
kube-system  kube-controller-manager-k8s-master   1/1    Running          5         14d  10.12.5.171  k8s-master
kube-system  kube-dns-86f4d74b45-px44s            3/3    Running          21        27d  10.32.0.5    k8s-master
kube-system  kube-proxy-25tm5                     1/1    Running          8         27d  10.12.5.171  k8s-master
kube-system  kube-proxy-6dt4z                     1/1    Running          4         27d  10.12.5.174  k8s-node1
kube-system  kube-proxy-jmv67                     1/1    Running          4         27d  10.12.5.193  k8s-node2
kube-system  kube-proxy-l8fks                     1/1    Running          6         27d  10.12.5.194  k8s-node3
kube-system  kube-scheduler-k8s-master            1/1    Running          5         14d  10.12.5.171  k8s-master
kube-system  tiller-deploy-84f4c8bb78-s6bq5       1/1    Running          0         4d   10.47.0.7    k8s-node2
kube-system  weave-net-bz7wr                      2/2    Running          20        27d  10.12.5.194  k8s-node3
kube-system  weave-net-c2pxd                      2/2    Running          13        27d  10.12.5.174  k8s-node1
kube-system  weave-net-jw29c                      2/2    Running          20        27d  10.12.5.171  k8s-master
kube-system  weave-net-kxxpl                      2/2    Running          13        27d  10.12.5.193  k8s-node2
onap         dev-appc-0                           0/2    PodInitializing  0         2m   10.47.0.5    k8s-node2
onap         dev-appc-1                           0/2    PodInitializing  0         2m   10.36.0.8    k8s-node3
onap         dev-appc-2                           0/2    PodInitializing  0         2m   10.44.0.7    k8s-node1
onap         dev-appc-cdt-8cbf9d4d9-mhp4b         1/1    Running          0         2m   10.47.0.1    k8s-node2
onap         dev-appc-db-0                        2/2    Running          0         2m   10.36.0.5    k8s-node3
onap         dev-appc-dgbuilder-54766c5b87-xw6c6  0/1    PodInitializing  0         2m   10.44.0.2    k8s-node1
onap         dev-robot-785b9bfb45-9s2rs           0/1    PodInitializing  0         2m   10.36.0.7    k8s-node3
|
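Watching this listing by eye gets tedious for a large deployment, so readiness can be scripted. The helper below counts pods whose STATUS column is not yet Running or Completed; it is demonstrated on a captured sample so the logic is testable offline, and with a live cluster you would pipe `kubectl get pods -n onap --no-headers` into it (in a loop, waiting for the count to reach 0):

```shell
# Count pods whose STATUS column ($3 in namespace-scoped output) is not
# yet Running/Completed.
not_ready_count() {
  awk '$3 != "Running" && $3 != "Completed" {n++} END {print n+0}'
}

# Sample `kubectl get pods -n onap --no-headers` lines (captured above).
sample='dev-appc-0                           0/2  PodInitializing  0  2m
dev-appc-db-0                        2/2  Running          0  2m
dev-robot-785b9bfb45-9s2rs           0/1  PodInitializing  0  2m'

printf '%s\n' "$sample" | not_ready_count   # -> 2
```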
Cleanup deployed ONAP instance
To delete a deployed instance, use the following command:
Code Block |
---|
Example:
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm del --purge <Release-name>
# we chose "dev" as our release name
Execute:
$ helm del --purge dev
release "dev" deleted
|
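If the release name is not known, it can be listed first; a small sketch assuming the Helm v2 client used throughout this guide:

```shell
# List deployed releases to confirm the name before purging
helm ls
# Purge the chosen release and its history (Helm v2 syntax)
helm del --purge dev
```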
Also, delete the existing persistent volumes and persistent volume claims in the "onap" namespace:
Code Block |
---|
#query existing pv in onap namespace
$ kubectl get pv -n onap
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS       REASON   AGE
dev-appc-data0     1Gi        RWO            Retain           Bound    onap/dev-appc-data-dev-appc-0   dev-appc-data               8m
dev-appc-data1     1Gi        RWO            Retain           Bound    onap/dev-appc-data-dev-appc-2   dev-appc-data               8m
dev-appc-data2     1Gi        RWO            Retain           Bound    onap/dev-appc-data-dev-appc-1   dev-appc-data               8m
dev-appc-db-data   1Gi        RWX            Retain           Bound    onap/dev-appc-db-data           dev-appc-db-data            8m

#delete existing pv
$ kubectl delete pv dev-appc-data0 -n onap
pv "dev-appc-data0" deleted
$ kubectl delete pv dev-appc-data1 -n onap
pv "dev-appc-data1" deleted
$ kubectl delete pv dev-appc-data2 -n onap
pv "dev-appc-data2" deleted
$ kubectl delete pv dev-appc-db-data -n onap
pv "dev-appc-db-data" deleted

#query existing pvc in onap namespace
$ kubectl get pvc -n onap
NAME                       STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS       AGE
dev-appc-data-dev-appc-0   Bound    dev-appc-data0     1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-1   Bound    dev-appc-data2     1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-2   Bound    dev-appc-data1     1Gi        RWO            dev-appc-data      9m
dev-appc-db-data           Bound    dev-appc-db-data   1Gi        RWX            dev-appc-db-data   9m

#delete existing pvc
$ kubectl delete pvc dev-appc-data-dev-appc-0 -n onap
pvc "dev-appc-data-dev-appc-0" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-1 -n onap
pvc "dev-appc-data-dev-appc-1" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-2 -n onap
pvc "dev-appc-data-dev-appc-2" deleted
$ kubectl delete pvc dev-appc-db-data -n onap
pvc "dev-appc-db-data" deleted
|
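Deleting each object by name is explicit but tedious. As an alternative (a hedged sketch, not from the original guide), the PVCs can be removed in bulk and the matching PVs selected by their name prefix:

```shell
# Delete every PVC in the onap namespace in one pass
kubectl -n onap delete pvc --all
# PVs are cluster-scoped: select only those created for this release
# (the "dev-appc" prefix matches the release-specific naming used above)
kubectl get pv --no-headers | awk '/^dev-appc/ {print $1}' | xargs -r kubectl delete pv
```

Bulk deletion is convenient on a lab cluster, but on a shared cluster the per-name commands above are safer because they cannot touch volumes belonging to other deployments.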
Verify APPC Clustering
Refer to Validate the APPC ODL cluster.
Get the details from the Kubernetes Master Node
Access the RestConf UI via https://<Kubernetes-Master-Node-IP>:30230/apidoc/explorer/index.html (admin user)
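Besides the apidoc explorer, the ODL cluster state can also be queried from the command line through OpenDaylight's Jolokia endpoint. The path and the default admin/admin credentials below are assumptions about a standard OpenDaylight install, not details taken from this guide:

```shell
# Ask the config-datastore ShardManager for cluster member status
# (assumed default credentials; -k skips certificate verification)
curl -sk -u admin:admin \
  "https://<Kubernetes-Master-Node-IP>:30230/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore"
```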
Run the following commands to make sure the installation is error-free.
Code Block | ||
---|---|---|
| ||
$ kubectl cluster-info
Kubernetes master is running at https://10.12.5.171:6443
KubeDNS is running at https://10.12.5.171:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
|
Code Block | ||
---|---|---|
| ||
$ kubectl -n onap get all
NAME                        AGE
deploy/dev-appc-cdt         23m
deploy/dev-appc-dgbuilder   23m
deploy/dev-robot            23m

NAME                               AGE
rs/dev-appc-cdt-8cbf9d4d9          23m
rs/dev-appc-dgbuilder-54766c5b87   23m
rs/dev-robot-785b9bfb45            23m

NAME                        AGE
statefulsets/dev-appc       23m
statefulsets/dev-appc-db    23m

NAME                                     READY   STATUS    RESTARTS   AGE
po/dev-appc-0                            2/2     Running   0          23m
po/dev-appc-1                            2/2     Running   0          23m
po/dev-appc-2                            2/2     Running   0          23m
po/dev-appc-cdt-8cbf9d4d9-mhp4b          1/1     Running   0          23m
po/dev-appc-db-0                         2/2     Running   0          23m
po/dev-appc-dgbuilder-54766c5b87-xw6c6   1/1     Running   0          23m
po/dev-robot-785b9bfb45-9s2rs            1/1     Running   0          23m

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
svc/appc               NodePort    10.107.234.237   <none>        8282:30230/TCP,1830:30231/TCP   23m
svc/appc-cdt           NodePort    10.107.253.179   <none>        80:30289/TCP                    23m
svc/appc-cluster       ClusterIP   None             <none>        2550/TCP                        23m
svc/appc-dbhost        ClusterIP   None             <none>        3306/TCP                        23m
svc/appc-dbhost-read   ClusterIP   10.101.117.102   <none>        3306/TCP                        23m
svc/appc-dgbuilder     NodePort    10.102.138.232   <none>        3000:30228/TCP                  23m
svc/appc-sdnctldb01    ClusterIP   None             <none>        3306/TCP                        23m
svc/appc-sdnctldb02    ClusterIP   None             <none>        3306/TCP                        23m
svc/robot              NodePort    10.110.229.236   <none>        88:30209/TCP                    23m
|
Code Block | ||
| ||
$ kubectl -n onap get pod
NAME                                  READY   STATUS    RESTARTS   AGE
dev-appc-0                            2/2     Running   0          22m
dev-appc-1                            2/2     Running   0          22m
dev-appc-2                            2/2     Running   0          22m
dev-appc-cdt-8cbf9d4d9-mhp4b          1/1     Running   0          22m
dev-appc-db-0                         2/2     Running   0          22m
dev-appc-dgbuilder-54766c5b87-xw6c6   1/1     Running   0          22m
dev-robot-785b9bfb45-9s2rs            1/1     Running   0          22m
|
Code Block | ||
| ||
$ kubectl get pod --all-namespaces -a
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   etcd-k8s-master                       1/1     Running   5          14d
kube-system   kube-apiserver-k8s-master             1/1     Running   5          14d
kube-system   kube-controller-manager-k8s-master    1/1     Running   5          14d
kube-system   kube-dns-86f4d74b45-px44s             3/3     Running   21         27d
kube-system   kube-proxy-25tm5                      1/1     Running   8          27d
kube-system   kube-proxy-6dt4z                      1/1     Running   4          27d
kube-system   kube-proxy-jmv67                      1/1     Running   4          27d
kube-system   kube-proxy-l8fks                      1/1     Running   6          27d
kube-system   kube-scheduler-k8s-master             1/1     Running   5          14d
kube-system   tiller-deploy-84f4c8bb78-s6bq5        1/1     Running   0          4d
kube-system   weave-net-bz7wr                       2/2     Running   20         27d
kube-system   weave-net-c2pxd                       2/2     Running   13         27d
kube-system   weave-net-jw29c                       2/2     Running   20         27d
kube-system   weave-net-kxxpl                       2/2     Running   13         27d
onap          dev-appc-0                            2/2     Running   0          25m
onap          dev-appc-1                            2/2     Running   0          25m
onap          dev-appc-2                            2/2     Running   0          25m
onap          dev-appc-cdt-8cbf9d4d9-mhp4b          1/1     Running   0          25m
onap          dev-appc-db-0                         2/2     Running   0          25m
onap          dev-appc-dgbuilder-54766c5b87-xw6c6   1/1     Running   0          25m
onap          dev-robot-785b9bfb45-9s2rs            1/1     Running   0          25m
|
Code Block | ||
| ||
$ kubectl -n onap get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP          NODE
dev-appc-0                            2/2     Running   0          26m   10.47.0.5   k8s-appc2
dev-appc-1                            2/2     Running   0          26m   10.36.0.8   k8s-appc3
dev-appc-2                            2/2     Running   0          26m   10.44.0.7   k8s-appc1
dev-appc-cdt-8cbf9d4d9-mhp4b          1/1     Running   0          26m   10.47.0.1   k8s-appc2
dev-appc-db-0                         2/2     Running   0          26m   10.36.0.5   k8s-appc3
dev-appc-dgbuilder-54766c5b87-xw6c6   1/1     Running   0          26m   10.44.0.2   k8s-appc1
dev-robot-785b9bfb45-9s2rs            1/1     Running   0          26m   10.36.0.7   k8s-appc3
|
Code Block | ||
---|---|---|
| ||
$ kubectl get services --all-namespaces -o wide
NAMESPACE     NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE   SELECTOR
default       kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP                         27d   <none>
kube-system   kube-dns           ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP                   27d   k8s-app=kube-dns
kube-system   tiller-deploy      ClusterIP   10.108.155.106   <none>        44134/TCP                       14d   app=helm,name=tiller
onap          appc               NodePort    10.107.234.237   <none>        8282:30230/TCP,1830:30231/TCP   27m   app=appc,release=dev
onap          appc-cdt           NodePort    10.107.253.179   <none>        80:30289/TCP                    27m   app=appc-cdt,release=dev
onap          appc-cluster       ClusterIP   None             <none>        2550/TCP                        27m   app=appc,release=dev
onap          appc-dbhost        ClusterIP   None             <none>        3306/TCP                        27m   app=appc-db,release=dev
onap          appc-dbhost-read   ClusterIP   10.101.117.102   <none>        3306/TCP                        27m   app=appc-db,release=dev
onap          appc-dgbuilder     NodePort    10.102.138.232   <none>        3000:30228/TCP                  27m   app=appc-dgbuilder,release=dev
onap          appc-sdnctldb01    ClusterIP   None             <none>        3306/TCP                        27m   app=appc-db,release=dev
onap          appc-sdnctldb02    ClusterIP   None             <none>        3306/TCP                        27m   app=appc-db,release=dev
onap          robot              NodePort    10.110.229.236   <none>        88:30209/TCP                    27m   app=robot,release=dev
|
...
Get more detail about a single pod by using "describe" with the resource name. The resource name is shown with the get all command used above.
Code Block | ||
---|---|---|
| ||
$ kubectl -n onap describe po/dev-appc-0
Name:           dev-appc-0
Namespace:      onap
Node:           k8s-appc2/10.12.5.193
Start Time:     Tue, 15 May 2018 11:31:47 -0400
Labels:         app=appc
                controller-revision-hash=dev-appc-7d976dd9b9
                release=dev
                statefulset.kubernetes.io/pod-name=dev-appc-0
Annotations:    <none>
Status:         Running
IP:             10.47.0.5
Controlled By:  StatefulSet/dev-appc
Init Containers:
  appc-readiness:
    Container ID:  docker://fdbf3011e7911b181a25c868f7d342951ced2832ed63c481253bb06447a0c04f
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      appc-db
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 15 May 2018 11:32:00 -0400
      Finished:     Tue, 15 May 2018 11:32:16 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
Containers:
  appc:
    Container ID:  docker://2b921a54a6cc19f9b7cdd3c8e7904ae3426019224d247fc31a74f92ec6f05ba0
    Image:         nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest
    Image ID:      docker-pullable://nexus3.onap.org:10001/onap/appc-image@sha256:ee8b64bd578f42169a86951cd45b1f2349192e67d38a7a350af729d1bf33069c
    Ports:         8181/TCP, 1830/TCP
    Command:
      /opt/appc/bin/startODL.sh
    State:          Running
      Started:      Tue, 15 May 2018 11:40:13 -0400
    Ready:          True
    Restart Count:  0
    Readiness:      tcp-socket :8181 delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'db-root-password' in secret 'dev-appc'>  Optional: false
      SDNC_CONFIG_DIR:      /opt/onap/appc/data/properties
      APPC_CONFIG_DIR:      /opt/onap/appc/data/properties
      DMAAP_TOPIC_ENV:      SUCCESS
      ENABLE_ODL_CLUSTER:   true
      APPC_REPLICAS:        3
    Mounts:
      /etc/localtime from localtime (ro)
      /opt/onap/appc/bin/installAppcDb.sh from onap-appc-bin (rw)
      /opt/onap/appc/bin/startODL.sh from onap-appc-bin (rw)
      /opt/onap/appc/data/properties/aaiclient.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/data/properties/appc.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/data/properties/dblib.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/data/properties/svclogic.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/svclogic/bin/showActiveGraphs.sh from onap-appc-svclogic-bin (rw)
      /opt/onap/appc/svclogic/config/svclogic.properties from onap-appc-svclogic-config (rw)
      /opt/onap/ccsdk/bin/installSdncDb.sh from onap-sdnc-bin (rw)
      /opt/onap/ccsdk/bin/startODL.sh from onap-sdnc-bin (rw)
      /opt/onap/ccsdk/data/properties/aaiclient.properties from onap-sdnc-data-properties (rw)
      /opt/onap/ccsdk/data/properties/dblib.properties from onap-sdnc-data-properties (rw)
      /opt/onap/ccsdk/data/properties/svclogic.properties from onap-sdnc-data-properties (rw)
      /opt/onap/ccsdk/svclogic/bin/showActiveGraphs.sh from onap-sdnc-svclogic-bin (rw)
      /opt/onap/ccsdk/svclogic/config/svclogic.properties from onap-sdnc-svclogic-config (rw)
      /opt/opendaylight/current/daexim from dev-appc-data (rw)
      /opt/opendaylight/current/etc/org.ops4j.pax.logging.cfg from log-config (rw)
      /var/log/onap from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
  filebeat-onap:
    Container ID:   docker://b9143c9898a4a071d1d781359e190bdd297e31a2bd04223225a55ff8b1990b32
    Image:          docker.elastic.co/beats/filebeat:5.5.0
    Image ID:       docker-pullable://docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942
    Port:           <none>
    State:          Running
      Started:      Tue, 15 May 2018 11:40:14 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/filebeat/data from data-filebeat (rw)
      /usr/share/filebeat/filebeat.yml from filebeat-conf (rw)
      /var/log/onap from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  dev-appc-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dev-appc-data-dev-appc-0
    ReadOnly:   false
  localtime:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/localtime
  filebeat-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-filebeat
    Optional:  false
  log-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-logging-cfg
    Optional:  false
  logs:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
  data-filebeat:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
  onap-appc-data-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-data-properties
    Optional:  false
  onap-appc-svclogic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-svclogic-config
    Optional:  false
  onap-appc-svclogic-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-svclogic-bin
    Optional:  false
  onap-appc-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-bin
    Optional:  false
  onap-sdnc-data-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-data-properties
    Optional:  false
  onap-sdnc-svclogic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-svclogic-config
    Optional:  false
  onap-sdnc-svclogic-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-svclogic-bin
    Optional:  false
  onap-sdnc-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-bin
    Optional:  false
  default-token-v9mnv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-v9mnv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                Message
  ----     ------                 ----               ----                -------
  Warning  FailedScheduling       29m (x2 over 29m)  default-scheduler   pod has unbound PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled              29m                default-scheduler   Successfully assigned dev-appc-0 to k8s-appc2
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "data-filebeat"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "localtime"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "logs"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "dev-appc-data0"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "onap-sdnc-svclogic-bin"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "onap-sdnc-bin"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "onap-appc-data-properties"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "onap-sdnc-data-properties"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-appc2  MountVolume.SetUp succeeded for volume "filebeat-conf"
  Normal   SuccessfulMountVolume  29m (x6 over 29m)  kubelet, k8s-appc2  (combined from similar events): MountVolume.SetUp succeeded for volume "default-token-v9mnv"
  Normal   Pulling                29m                kubelet, k8s-appc2  pulling image "oomk8s/readiness-check:2.0.0"
  Normal   Pulled                 29m                kubelet, k8s-appc2  Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal   Created                29m                kubelet, k8s-appc2  Created container
  Normal   Started                29m                kubelet, k8s-appc2  Started container
  Normal   Pulling                29m                kubelet, k8s-appc2  pulling image "nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest"
  Normal   Pulled                 21m                kubelet, k8s-appc2  Successfully pulled image "nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest"
  Normal   Created                21m                kubelet, k8s-appc2  Created container
  Normal   Started                21m                kubelet, k8s-appc2  Started container
  Normal   Pulling                21m                kubelet, k8s-appc2  pulling image "docker.elastic.co/beats/filebeat:5.5.0"
  Normal   Pulled                 21m                kubelet, k8s-appc2  Successfully pulled image "docker.elastic.co/beats/filebeat:5.5.0"
  Warning  Unhealthy              5m (x16 over 21m)  kubelet, k8s-appc2  Readiness probe failed: dial tcp 10.47.0.5:8181: getsockopt: connection refused
|
Get logs of containers inside each pod:
...
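As a sketch (container names taken from the describe output above), logs are fetched per container with `kubectl logs`:

```shell
# Tail the APPC controller container inside pod dev-appc-0
kubectl -n onap logs -f dev-appc-0 -c appc
# Logs of the filebeat sidecar in the same pod
kubectl -n onap logs dev-appc-0 -c filebeat-onap
```

The `-c` flag is required whenever a pod runs more than one container, as the APPC pods here do.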
Code Block | ||
---|---|---|
| ||
# decrease APPC pods to 1
$ kubectl scale statefulset dev-appc -n onap --replicas=1
statefulset "dev-appc" scaled

# verify that two APPC pods terminate, with one APPC pod running
$ kubectl get pods --all-namespaces -a | grep dev-appc
onap   dev-appc-0                            2/2   Running       0   43m
onap   dev-appc-1                            2/2   Terminating   0   43m
onap   dev-appc-2                            2/2   Terminating   0   43m
onap   dev-appc-cdt-8cbf9d4d9-mhp4b          1/1   Running       0   43m
onap   dev-appc-db-0                         2/2   Running       0   43m
onap   dev-appc-dgbuilder-54766c5b87-xw6c6   1/1   Running       0   43m

# increase APPC pods to 3
$ kubectl scale statefulset dev-appc -n onap --replicas=3
statefulset "dev-appc" scaled

# verify that three APPC pods are running
$ kubectl get pods --all-namespaces -o wide | grep dev-appc
onap   dev-appc-0                            2/2   Running   0   49m   10.47.0.5   k8s-appc2
onap   dev-appc-1                            2/2   Running   0   3m    10.36.0.8   k8s-appc3
onap   dev-appc-2                            2/2   Running   0   3m    10.44.0.7   k8s-appc1
onap   dev-appc-cdt-8cbf9d4d9-mhp4b          1/1   Running   0   49m   10.47.0.1   k8s-appc2
onap   dev-appc-db-0                         2/2   Running   0   49m   10.36.0.5   k8s-appc3
onap   dev-appc-dgbuilder-54766c5b87-xw6c6   1/1   Running   0   49m   10.44.0.2   k8s-appc1
|
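The desired versus ready replica counts can also be read straight from the StatefulSet object; a small sketch using a JSONPath output template:

```shell
# Prints "<desired>/<ready>", e.g. 3/3 once scaling has completed
kubectl -n onap get statefulset dev-appc \
  -o jsonpath='{.spec.replicas}/{.status.readyReplicas}'
```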