This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy APPC within that Kubernetes cluster.
(To view the current page, Chrome is the preferred browser. IE may add extra "CR LF" to each line, which causes problems.)
What is OpenStack? What is Kubernetes? What is Docker?
In the OpenStack lab, the controller executes the function of partitioning resources. The compute nodes are the collection of resources (memory, CPUs, hard drive space) to be partitioned. When creating a VM with "X" memory, "Y" CPUs and "Z" hard drive space, OpenStack's controller reviews its pool of available resources, allocates the quota, and then creates the VM on one of the available compute nodes. Many VMs can be created on a single compute node. OpenStack's controller uses many criteria to choose a compute node, but if an application spans multiple VMs, affinity rules can be used to ensure the VMs do not all congregate on a single compute node, which would be bad for resilience.
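As a hedged example (the group and server names below are illustrative, not taken from this lab), such anti-affinity placement can be expressed with the OpenStack CLI by creating a server group and passing it as a scheduler hint:
Code Block |
---|
# Create a server group whose members must land on different compute nodes
openstack server group create --policy anti-affinity k8s-anti-affinity

# Pass the group as a scheduler hint when creating each VM
openstack server create --flavor "flavor-name" --image "image-name" \
  --key-name "keypair-name" --nic net-id="net-name" \
  --hint group=<server-group-uuid> "k8s-node1" |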
...
Deployment Architecture
The Kubernetes deployment in this tutorial will be set up on top of OpenStack VMs. Let's call this the undercloud. The undercloud can be physical boxes or VMs. The VMs can come from different cloud providers, but in this tutorial we will use OpenStack. The following table shows the layers of software that need to be considered when thinking about resilience:
...
Code Block | ||
---|---|---|
| ||
openstack server list; openstack network list; openstack flavor list; openstack keypair list; openstack image list; openstack security group list

openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-master"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node1"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node2"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node3" |
Configure Each VM
Repeat the following steps on each VM:
Pre-Configure Each VM
Make sure that on each VM:
- The packages are up to date
- The clocks are synchronized (see the sketch below)
...
Question: Did you check the date on all K8S nodes to make sure they are in sync?
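A minimal sketch of this preparation on Ubuntu 16.04 (the choice of ntp below is an assumption; use whatever time-sync mechanism your lab prefers):
Code Block |
---|
# Bring the package index and installed packages up to date
sudo apt-get update && sudo apt-get -y upgrade

# Check the date/time on every node
date

# One option for keeping the clocks in sync is NTP
sudo apt-get install -y ntp
timedatectl status |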
Install Docker
The ONAP applications are packaged in Docker containers.
...
Code Block | ||
---|---|---|
| ||
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

# Add a docker repository to "/etc/apt/sources.list". It is the latest stable one for the ubuntu flavour on the machine ("lsb_release -cs")
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce

sudo docker run hello-world

# Verify:
sudo docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                     PORTS     NAMES
c66d903a0b1f   hello-world   "/hello"   10 seconds ago   Exited (0) 9 seconds ago             vigorous_bhabha |
Install the Kubernetes Packages
Just install the packages; there is no need to configure them yet.
...
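The exact install commands are elided here; a commonly used sketch for Ubuntu 16.04 (the upstream apt repository shown is the standard one for this Kubernetes generation; pin package versions as needed for your deployment) is:
Code Block |
---|
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo add-apt-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni |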
Note: If you intend to remove the Kubernetes packages, use "apt autoremove kubelet; apt autoremove kubeadm; apt autoremove kubectl; apt autoremove kubernetes-cni".
Configure the Kubernetes Cluster with kubeadm
kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster. Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.
Configure the Kubernetes Master Node (k8s-master)
The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file as there is information which you will need to refer to afterwards.
...
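The exact invocation is elided here; a minimal sketch that also captures the output (the log file /root/kubeadm_init.log is referenced later on this page) might be:
Code Block |
---|
# As root on k8s-master
sudo -i
kubeadm init | tee /root/kubeadm_init.log |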
Code Block | ||
---|---|---|
| ||
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.7
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubefed-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.114.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 44.002324 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubefed-1 as master by adding a label and a taint
[markmaster] Master kubefed-1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 2246a6.83b4c7ca38913ce1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a |
NOTE: the "kubeadm join .." command shows in the log of kubeadm init, should run in each VMs in the k8s cluster to perform a cluster, use "kubectl get nodes" to make sure all nodes are all joined.
Execute the following snippet (as ubuntu user) to get kubectl to work.
...
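The snippet itself is elided above; a sketch based on the commands printed by kubeadm init (plus the weave apply command referenced in the troubleshooting tips below) is:
Code Block |
---|
# As the ubuntu user, copy the admin kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on (the sample output below shows weave)
sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" |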
Code Block | ||
---|---|---|
| ||
sudo kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                       1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master             1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master    1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm             3/3       Running   0          44m       10.32.0.2        k8s-master
kube-system   kube-proxy-lnv7r                      1/1       Running   0          44m       10.147.112.140   k8s-master
kube-system   kube-scheduler-k8s-master             1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   weave-net-b2hkh                       2/2       Running   0          1m        10.147.112.140   k8s-master

# (There will be 2 coredns pods with different IP addresses, with kubernetes version 1.10.1)

# Verify that the AVAILABLE flag for the deployment "kube-dns" or "coredns" has changed to 1. (2 with kubernetes version 1.10.1)
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           1h |
Troubleshooting tips:
- If any of the weave pods has a problem and gets stuck in the "ImagePullBackOff" state, you can try running the "sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"" command again.
- Sometimes you need to delete a problematic pod to let it terminate and start fresh. Use "kubectl delete po/<pod-name> -n <name-space>" to delete a pod.
- To "unjoin" a worker node, run "kubectl delete node <node-name>" (go through the "Undeploy APPC" process at the end if you have an APPC cluster running). A consolidated sketch of these commands follows.
Install Helm and Tiller on the Kubernetes Master Node (k8s-master)
ONAP uses Helm, a package manager for Kubernetes.
Install helm (client side). The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:
If you are using Casablanca code, use Helm v2.9.1.
Code Block | ||
---|---|---|
| ||
# As a root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh -v v2.8.2
|
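If you are on Casablanca, the same installer script can be pointed at v2.9.1 instead (this assumes the script's -v flag behaves as in the command above):
Code Block |
---|
./get_helm.sh -v v2.9.1 |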
Install Tiller (server side of Helm)
Tiller manages installation of helm packages (charts). Tiller requires a ServiceAccount setup in Kubernetes before being initialized. The following snippet will do that for you:
(Chrome is the preferred browser. IE may add extra "CR LF" to each line, which causes problems.)
Code Block | ||
---|---|---|
| ||
# id
ubuntu
# As a ubuntu user, create a yaml file to define the helm service account and cluster role binding.
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF
# Create a ServiceAccount and ClusterRoleBinding based on the created file.
sudo kubectl create -f tiller-serviceaccount.yaml
# Verify
which helm
helm version |
Initialize Helm. This command installs Tiller. It also discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and using the default context.
Code Block | ||
---|---|---|
| ||
helm init --service-account tiller --upgrade

# A new pod is created, but will be in pending status.
kubectl get pods --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5   0/1   Pending   0   7m   <none>   <none>

# A new service is created
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>   44134/TCP   47m   app=helm,name=tiller

# A new deployment is created, but the AVAILABLE flag is set to "0".
kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m |
If you need to reset Helm, follow the steps below:
Code Block | ||
---|---|---|
| ||
# Uninstalls Tiller from a cluster
helm reset --force

# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding

kubectl create -f tiller-serviceaccount.yaml

# init helm
helm init --service-account tiller --upgrade |
Configure the Kubernetes Worker Nodes (k8s-node<n>)
Setting up cluster nodes is very easy. Just refer back to the "kubeadm init" output log (/root/kubeadm_init.log). In the last line of the log, there is a "kubeadm join" command with token information and other parameters.
Capture those parameters and then execute it as root on each of the Kubernetes worker nodes: k8s-node1, k8s-node2, and k8s-node3.
After running the "kubeadm join" command on a worker node,
- 2 new pods (proxy and weave) will be created on the Master node and will be assigned to the worker node.
- The tiller pod status will change to "running".
- The AVAILABLE flag for tiller-deploy deployment will be changed to "1".
- The worker node will join the cluster.
The command looks like the following snippet (find the command at the bottom of /root/kubeadm_init.log):
Code Block | ||
---|---|---|
| ||
# Should change to root user on the worker node.
kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a
# Make sure in the output, you see "This node has joined the cluster:". |
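Note that the bootstrap token created by kubeadm init expires after 24 hours (see the init log above). If it has expired, a new one can be generated on the master node; a hedged sketch (the --print-join-command flag only exists in newer kubeadm releases):
Code Block |
---|
# List existing bootstrap tokens
kubeadm token list

# Create a new token
kubeadm token create
# Newer kubeadm versions can print the full join command directly:
# kubeadm token create --print-join-command |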
Verify the results from the master node:
Code Block | ||
---|---|---|
| ||
kubectl get pods --all-namespaces -o wide
kubectl get nodes
# Sample Output:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 2h v1.8.6
k8s-node1 Ready <none> 53s v1.8.6 |
Make sure you run the "kubeadm join" command once on each worker node and verify the results.
Then return to the Kubernetes master node VM and execute the "kubectl get nodes" command to see all the Kubernetes nodes. It might take some time, but eventually each node will have a status of "Ready":
Code Block | ||
---|---|---|
| ||
kubectl get nodes
# Sample Output:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 1d v1.8.5
k8s-node1 Ready <none> 1d v1.8.5
k8s-node2 Ready <none> 1d v1.8.5
k8s-node3 Ready <none> 1d v1.8.5
|
Make sure that the tiller pod is running. Execute the following command (from the master node) and look for a po/tiller-deploy-xxxx with a "Running" status. For example:
(If coredns is used instead of kube-dns, you will notice that it has only one container.)
Code Block | ||
---|---|---|
| ||
kubectl get pods --all-namespaces -o wide

# Sample output:
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                       1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master             1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master    1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm             3/3       Running   0          2h        10.32.0.2        k8s-master
kube-system   kube-proxy-4zztj                      1/1       Running   0          2m        10.147.112.150   k8s-node2
kube-system   kube-proxy-lnv7r                      1/1       Running   0          2h        10.147.112.140   k8s-master
kube-system   kube-proxy-t492g                      1/1       Running   0          20m       10.147.112.164   k8s-node1
kube-system   kube-proxy-xx8df                      1/1       Running   0          2m        10.147.112.169   k8s-node3
kube-system   kube-scheduler-k8s-master             1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   tiller-deploy-b6bf9f4cc-vbrc5         1/1       Running   0          42m       10.44.0.1        k8s-node1
kube-system   weave-net-b2hkh                       2/2       Running   0          1h        10.147.112.140   k8s-master
kube-system   weave-net-s7l27                       2/2       Running   1          2m        10.147.112.169   k8s-node3
kube-system   weave-net-vmlrq                       2/2       Running   0          20m       10.147.112.164   k8s-node1
kube-system   weave-net-xxgnq                       2/2       Running   1          2m        10.147.112.150   k8s-node2 |
Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.
Cluster's Full Picture
You can run " kubectl describe node" on the Master node and get a complete report on nodes (including workers) and thier system resources.
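For example:
Code Block |
---|
# Describe a single node, or omit the name to report on every node
kubectl describe node k8s-node1
kubectl describe node |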
Configure dockerdata-nfs
This is a shared directory which must be mounted on all of the Kubernetes VMs (master node and worker nodes), because many of the ONAP pods use this directory to share data.
See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instructions on how to set this up.
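The linked page has the full instructions; as a rough sketch, the setup amounts to exporting the folder from one node over NFS and mounting it on all the others (the package names and export options below are assumptions, not taken from that page):
Code Block |
---|
# On the node that hosts the share (e.g. k8s-master)
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl restart nfs-kernel-server

# On every other Kubernetes VM
sudo apt-get install -y nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount <master-ip>:/dockerdata-nfs /dockerdata-nfs |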
Configure ONAP
Clone OOM project only on the Kubernetes Master Node
As the ubuntu user, clone the oom repository.
Code Block | ||
---|---|---|
| ||
git clone https://gerrit.onap.org/r/oom
cd oom/kubernetes
|
Note |
---|
You may use any specific known stable OOM release for APPC deployment. The above URL downloads the latest OOM. |
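For example, to clone a specific release branch instead of master (the branch name "casablanca" here is illustrative; pick whichever release you are targeting):
Code Block |
---|
git clone -b casablanca https://gerrit.onap.org/r/oom
cd oom/kubernetes |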
Customize the oom/kubernetes/onap parent chart, such as the values.yaml file, to suit your deployment. You may want to selectively enable or disable ONAP components by changing the subchart **enabled** flags to *true* or *false*.
Code Block |
---|
$ vi oom/kubernetes/onap/values.yaml
Example:
...
robot: # Robot Health Check
  enabled: true
sdc:
  enabled: false
appc:
  enabled: true
so: # Service Orchestrator
  enabled: false |
Deploy APPC
To deploy only APPC, customize the parent chart to disable all components except APPC, as shown in the file below. Also set global.persistence.mountPath to some non-mounted directory (by default, it is set to the mounted directory /dockerdata-nfs).
Code Block |
---|
#Note that all components are changed to enabled:false except appc, robot, and mysql. Here we set number of APPC replicas to 3.
$ cat ~/oom/kubernetes/onap/values.yaml
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302

  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositoryCred:
    user: docker
    password: docker

  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co

  # image pull policy
  pullPolicy: Always

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /dockerdata-nfs

  # flag to enable debugging - application support required
  debugEnabled: false

  # Repository for creation of nexus3.onap.org secret
  repository: nexus3.onap.org:10001

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: true
  replicaCount: 3
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: admin
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
dcaegen2:
  enabled: false
dmaap:
  enabled: false
esr:
  enabled: false
log:
  enabled: false
sniro-emulator:
  enabled: false
oof:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
policy:
  enabled: false
portal:
  enabled: false
robot:
  enabled: true
sdc:
  enabled: false
sdnc:
  enabled: false
  replicaCount: 1
  config:
    enableClustering: false
  mysql:
    disableNfsProvisioner: true
    replicaCount: 1
so:
  enabled: false
  replicaCount: 1
  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: true
  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
  # configure embedded mariadb
  mariadb:
    config:
      mariadbRootPassword: password
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
|
Note: If you set the number of appc replicas in onap/values.yaml, it will override the setting you are about to do in the next step.
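As an alternative to editing files, the same value can also be supplied later at install time with Helm's --set flag (using the appc.replicaCount key shown in the values.yaml above), for example:
Code Block |
---|
helm install local/onap --name dev --namespace onap --set appc.replicaCount=3 |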
Run the command below to set up a local Helm repository to serve up the local ONAP charts:
Code Block |
---|
#Press "Enter" after running the command to get the prompt back
$ nohup helm serve &
[1] 2316
$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
# Verify
$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879
|
If you don't find the local repo, add it manually.
Note the IP (localhost) and port number listed in the above response (8879 here) and use them in the "helm repo add" command as follows:
Code Block |
---|
$ helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
|
Install "make" ( Learn more about ubuntu-make here : https://wiki.ubuntu.com/ubuntu-make) and build a local Helm repository (from the kubernetes directory):
Code Block |
---|
#######################
# Install make from kubernetes directory.
#######################
$ sudo apt install make
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  linux-headers-4.4.0-62 linux-headers-4.4.0-62-generic linux-image-4.4.0-62-generic snap-confine
Use 'sudo apt autoremove' to remove them.
Suggested packages:
  make-doc
The following NEW packages will be installed:
  make
0 upgraded, 1 newly installed, 0 to remove and 72 not upgraded.
Need to get 151 kB of archives.
After this operation, 365 kB of additional disk space will be used.
Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Fetched 151 kB in 0s (208 kB/s)
Selecting previously unselected package make.
(Reading database ... 121778 files and directories currently installed.)
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up make (4.1-6) ...

#######################
# Build local helm repo
#######################
$ make all
[common]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
make[2]: Entering directory '/home/ubuntu/oom/kubernetes/common'
[common]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
==> Linting common
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/common-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
[dgbuilder]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting dgbuilder
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dgbuilder-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'

# ... similar lint/package output follows for the remaining charts (postgres, mysql, vid, so, cli, appc, robot, sdnc, and the other components) ...

[onap]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 24 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
...
Downloading vnfsdk from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/onap-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' |
Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.
Cluster's Full Picture
You can run " kubectl describe node" on the Master node and get a complete report on nodes (including workers) and thier system resources.
Configure dockerdata-nfs
This is a shared directory which must be mounted on all of the Kuberenetes VMs(master node and worker nodes). Because many of the ONAP pods use this directory to share data.
See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instruction on how to set this up.
Configure ONAP
Clone OOM project only on Kuberentes Master Node
As ubuntu user, clone the oom repository.
Code Block | ||
---|---|---|
| ||
git clone https://gerrit.onap.org/r/oom
cd oom/kubernetes
|
Note |
---|
You may use any specific known stable OOM release for APPC deployment. The above URL downloads latest OOM. |
Customize the oom/kubernetes/onap parent chart, like the values.yaml file, to suit your deployment. You may want to selectively enable or disable ONAP components by changing the subchart **enabled** flags to *true* or *false*.
Code Block |
---|
$ vi oom/kubernetes/onap/values.yaml
Example:
...
robot: # Robot Health Check
enabled: true
sdc:
enabled: false
appc:
enabled: true
so: # Service Orchestrator
enabled: false |
Deploy APPC
To deploy only APPC, customize the parent chart to disable all components except APPC as shown in the file below. Also set the global.persistence.mountPath to some non-mounted directory (by default, it is set to mounted directory /dockerdata-nfs).
Code Block |
---|
#Note that all components are changed to enabled:false except appc, robot, and mysql. Here we set number of APPC replicas to 3.
$ cat ~/oom/kubernetes/onap/values.yaml
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
# Change to an unused port prefix range to prevent port conflicts
# with other instances running within the same k8s cluster
nodePortPrefix: 302
# ONAP Repository
# Uncomment the following to enable the use of a single docker
# repository but ONLY if your repository mirrors all ONAP
# docker images. This includes all images from dockerhub and
# any other repository that hosts images for ONAP components.
#repository: nexus3.onap.org:10001
repositoryCred:
user: docker
password: docker
# readiness check - temporary repo until images migrated to nexus3
readinessRepository: oomk8s
# logging agent - temporary repo until images migrated to nexus3
loggingRepository: docker.elastic.co
# image pull policy
pullPolicy: Always
# default mount path root directory referenced
# by persistent volumes and log files
persistence:
mountPath: /dockerdata-nfs
# flag to enable debugging - application support required
debugEnabled: false
# Repository for creation of nexus3.onap.org secret
repository: nexus3.onap.org:10001
#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
enabled: false
aai:
enabled: false
appc:
enabled: true
replicaCount: 3
config:
openStackType: OpenStackProvider
openStackName: OpenStack
openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
openStackServiceTenantName: default
openStackDomain: default
openStackUserName: admin
openStackEncryptedPassword: admin
clamp:
enabled: false
cli:
enabled: false
consul:
enabled: false
dcaegen2:
enabled: false
dmaap:
enabled: false
esr:
enabled: false
log:
enabled: false
sniro-emulator:
enabled: false
oof:
enabled: false
msb:
enabled: false
multicloud:
enabled: false
policy:
enabled: false
portal:
enabled: false
robot:
enabled: true
sdc:
enabled: false
sdnc:
enabled: false
replicaCount: 1
config:
enableClustering: false
mysql:
disableNfsProvisioner: true
replicaCount: 1
so:
enabled: false
replicaCount: 1
liveness:
# necessary to disable liveness probe when setting breakpoints
# in debugger so K8s doesn't restart unresponsive container
enabled: true
# so server configuration
config:
# message router configuration
dmaapTopic: "AUTO"
# openstack configuration
openStackUserName: "vnf_user"
openStackRegion: "RegionOne"
openStackKeyStoneUrl: "http://1.2.3.4:5000"
openStackServiceTenantName: "service"
openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
# configure embedded mariadb
mariadb:
config:
mariadbRootPassword: password
uui:
enabled: false
vfc:
enabled: false
vid:
enabled: false
vnfsdk:
enabled: false
|
Note: If you set number of appc replicas in onap/values.yaml, it will overrides the setting you are about to do in the next step.
Run below command to setup a local Helm repository to serve up the local ONAP charts:
Code Block |
---|
#Press "Enter" after running the command to get the prompt back
$ nohup helm serve &
[1] 2316
$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
# Verify
$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879
|
If you don't find the local repo, add it manually.
Note the IP(localhost) and port number that is listed in above response (8879 here) and use it in "helm repo add" command as follows:
Code Block |
---|
$ helm repo add local http://127.0.0.1:8879 Deleting"local" outdatedhas chartsbeen ==>added Lintingto cliyour [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/cli-2.0.0.tgzrepositories |
build a local Helm repository (from the kubernetes directory):
Code Block |
---|
$ make all [common] make[1]: LeavingEntering directory '/home/ubuntu/oom/kubernetes' [aaf] make[12]: Entering directory '/home/ubuntu/oom/kubernetes/common' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts [common] make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common' ==> Linting aafcommon [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/aafcommon-2.0.0.tgz make[13]: Leaving directory '/home/ubuntu/oom/kubernetes/common' [logdgbuilder] make[13]: Entering directory '/home/ubuntu/oom/kubernetes/common' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting logdgbuilder [INFO] Chart.yaml: icon is recommended [WARNING] templates/: directory not found 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/logdgbuilder-2.0.0.tgz make[13]: Leaving directory '/home/ubuntu/oom/kubernetes/common' [esrpostgres] make[13]: Entering directory '/home/ubuntu/oom/kubernetes/common' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting esr [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/esr-2.0.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [mock] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting mockpostgres [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mockpostgres-2.0.1.0.tgz make[13]: Leaving directory '/home/ubuntu/oom/kubernetes/common' [multicloudmysql] make[13]: Entering directory '/home/ubuntu/oom/kubernetes' ==> Linting multicloud [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/multicloud-1.1.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [mso] make[1]: Entering directory '/home/ubuntu/oom/kubernetes'/common' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? 
Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting msomysql [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/msomysql-12.10.0.tgz make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common' make[2]: Leaving directory '/home/ubuntu/oom/kubernetes/common' make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [dcaegen2vid] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting dcaegen2vid [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dcaegen2vid-12.10.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [vnfsdkso] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting vnfsdkso [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vnfsdkso-12.10.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [policycli] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting policycli [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/policycli-2.0.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [consulaaf] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting consulaaf [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/consulaaf-2.0.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [clamplog] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? 
Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting clamplog [INFO] Chart.yaml: icon is recommended [WARNING] templates/: directory not found 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/clamplog-2.0.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [appcesr] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 3 charts Downloading common from repo http://127.0.0.1:8879 Downloading mysql from repo http://127.0.0.1:8879 Downloading dgbuilder Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting appcesr [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/appcesr-2.0.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [sdcmock] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' ==> HangLinting tightmock while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mock-0.1.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [multicloud] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' ==> Linting multicloud [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/multicloud-1.1.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [mso] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' ==> Linting sdcmso [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdcmso-21.01.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [portaldcaegen2] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' ==> HangLinting tightdcaegen2 while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? 
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dcaegen2-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[appc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting appc
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/appc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[robot]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting robot
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/robot-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[sdnc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting sdnc
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdnc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[onap]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ?Happy Helming!?
Saving 24 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading message-router from repo http://127.0.0.1:8879
Downloading mock from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/onap-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' |
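To sanity-check the result, you can list the packaged charts on disk and query the local Helm repository that the make target serves on port 8879. This is an optional check, not part of the original procedure; the exact chart names and versions depend on the OOM branch you checked out.
Code Block |
---|
# packaged charts produced by "make"
ls ~/oom/kubernetes/dist/packages

# the same charts should be visible in the "local" Helm repo
helm search local/ | grep -E 'appc|onap' |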
Note |
---|
Setting up this Helm repository is a one-time activity. If you change your deployment charts or values, be sure to run the **make** command again to update your local Helm repository. |
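For reference, a minimal sketch of that rebuild step, assuming the oom repository is cloned under ~/oom as shown in the output above:
Code Block |
---|
cd ~/oom/kubernetes
make all |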
Once the repository is set up, ONAP can be installed with a single command:
Code Block |
---|
Example:
$ helm install local/onap --name <Release-name> --namespace onap

# we choose "dev" as our release name here
Execute:
$ helm install local/onap --name dev --namespace onap

NAME:   dev
LAST DEPLOYED: Tue May 15 11:31:44 2018
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                      TYPE                     DATA  AGE
dev-appc-dgbuilder        Opaque                   1     1s
dev-appc-db               Opaque                   1     1s
dev-appc                  Opaque                   1     1s
onap-docker-registry-key  kubernetes.io/dockercfg  1     1s

==> v1/PersistentVolumeClaim
NAME              STATUS  VOLUME            CAPACITY  ACCESS MODES  STORAGECLASS      AGE
dev-appc-db-data  Bound   dev-appc-db-data  1Gi       RWX           dev-appc-db-data  1s

==> v1/Service
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                        AGE
appc-cdt          NodePort   10.107.253.179  <none>       80:30289/TCP                   1s
appc-dgbuilder    NodePort   10.102.138.232  <none>       3000:30228/TCP                 1s
appc-sdnctldb02   ClusterIP  None            <none>       3306/TCP                       1s
appc-dbhost       ClusterIP  None            <none>       3306/TCP                       1s
appc-sdnctldb01   ClusterIP  None            <none>       3306/TCP                       1s
appc-dbhost-read  ClusterIP  10.101.117.102  <none>       3306/TCP                       1s
appc              NodePort   10.107.234.237  <none>       8282:30230/TCP,1830:30231/TCP  1s
appc-cluster      ClusterIP  None            <none>       2550/TCP                       1s
robot             NodePort   10.110.229.236  <none>       88:30209/TCP                   0s

==> v1beta1/StatefulSet
NAME         DESIRED  CURRENT  AGE
dev-appc-db  1        1        0s
dev-appc     3        3        0s

==> v1/ConfigMap
NAME                                        DATA  AGE
dev-appc-dgbuilder-scripts                  2     1s
dev-appc-dgbuilder-config                   1     1s
dev-appc-db-db-configmap                    2     1s
dev-appc-onap-appc-data-properties          4     1s
dev-appc-onap-sdnc-svclogic-config          1     1s
dev-appc-onap-appc-svclogic-bin             1     1s
dev-appc-onap-sdnc-svclogic-bin             1     1s
dev-appc-onap-sdnc-bin                      2     1s
dev-appc-filebeat                           1     1s
dev-appc-logging-cfg                        1     1s
dev-appc-onap-sdnc-data-properties          3     1s
dev-appc-onap-appc-svclogic-config          1     1s
dev-appc-onap-appc-bin                      2     1s
dev-robot-eteshare-configmap                4     1s
dev-robot-resources-configmap               3     1s
dev-robot-lighttpd-authorization-configmap  1     1s

==> v1/PersistentVolume
NAME              CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                          STORAGECLASS      REASON  AGE
dev-appc-db-data  1Gi       RWX           Retain          Bound   onap/dev-appc-db-data          dev-appc-db-data          1s
dev-appc-data0    1Gi       RWO           Retain          Bound   onap/dev-appc-data-dev-appc-0  dev-appc-data             1s
dev-appc-data2    1Gi       RWO           Retain          Bound   onap/dev-appc-data-dev-appc-1  dev-appc-data             1s
dev-appc-data1    1Gi       RWO           Retain          Bound   onap/dev-appc-data-dev-appc-2  dev-appc-data             1s

==> v1beta1/ClusterRoleBinding
NAME          AGE
onap-binding  1s

==> v1beta1/Deployment
NAME                DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
dev-appc-cdt        1        1        1           0          0s
dev-appc-dgbuilder  1        1        1           0          0s
dev-robot           1        0        0           0          0s

==> v1/Pod(related)
NAME                                 READY  STATUS             RESTARTS  AGE
dev-appc-cdt-8cbf9d4d9-mhp4b         0/1    ContainerCreating  0         0s
dev-appc-dgbuilder-54766c5b87-xw6c6  0/1    Init:0/1           0         0s
dev-appc-db-0                        0/2    Init:0/2           0         0s
dev-appc-0                           0/2    Pending            0         0s
dev-appc-1                           0/2    Pending            0         0s
dev-appc-2                           0/2    Pending            0         0s |
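If you only need a subset of ONAP for an APPC-focused deployment, the OOM umbrella chart exposes an enabled flag per component in onap/values.yaml. The sketch below is an assumption to be checked against your checkout, not part of the original instructions:
Code Block |
---|
# sketch only: enable/disable individual components at install time
helm install local/onap --name dev --namespace onap \
  --set appc.enabled=true \
  --set robot.enabled=true \
  --set so.enabled=false |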
Note |
---|
The **--namespace onap** flag is currently required while all ONAP Helm charts are being migrated to version 2.0. Once that migration is complete, the namespace will be optional. |
Use the following to monitor your deployment and determine when ONAP is ready for use:
Code Block |
---|
ubuntu@k8s-master:~/oom/kubernetes$ kubectl get pods --all-namespaces -o wide -w
NAMESPACE     NAME                                  READY  STATUS            RESTARTS  AGE  IP           NODE
kube-system   etcd-k8s-master                       1/1    Running           5         14d  10.12.5.171  k8s-master
kube-system   kube-apiserver-k8s-master             1/1    Running           5         14d  10.12.5.171  k8s-master
kube-system   kube-controller-manager-k8s-master    1/1    Running           5         14d  10.12.5.171  k8s-master
kube-system   kube-dns-86f4d74b45-px44s             3/3    Running           21        27d  10.32.0.5    k8s-master
kube-system   kube-proxy-25tm5                      1/1    Running           8         27d  10.12.5.171  k8s-master
kube-system   kube-proxy-6dt4z                      1/1    Running           4         27d  10.12.5.174  k8s-node1
kube-system   kube-proxy-jmv67                      1/1    Running           4         27d  10.12.5.193  k8s-node2
kube-system   kube-proxy-l8fks                      1/1    Running           6         27d  10.12.5.194  k8s-node3
kube-system   kube-scheduler-k8s-master             1/1    Running           5         14d  10.12.5.171  k8s-master
kube-system   tiller-deploy-84f4c8bb78-s6bq5        1/1    Running           0         4d   10.47.0.7    k8s-node2
kube-system   weave-net-bz7wr                       2/2    Running           20        27d  10.12.5.194  k8s-node3
kube-system   weave-net-c2pxd                       2/2    Running           13        27d  10.12.5.174  k8s-node1
kube-system   weave-net-jw29c                       2/2    Running           20        27d  10.12.5.171  k8s-master
kube-system   weave-net-kxxpl                       2/2    Running           13        27d  10.12.5.193  k8s-node2
onap          dev-appc-0                            0/2    PodInitializing   0         2m   10.47.0.5    k8s-node2
onap          dev-appc-1                            0/2    PodInitializing   0         2m   10.36.0.8    k8s-node3
onap          dev-appc-2                            0/2    PodInitializing   0         2m   10.44.0.7    k8s-node1
onap          dev-appc-cdt-8cbf9d4d9-mhp4b          1/1    Running           0         2m   10.47.0.1    k8s-node2
onap          dev-appc-db-0                         2/2    Running           0         2m   10.36.0.5    k8s-node3
onap          dev-appc-dgbuilder-54766c5b87-xw6c6   0/1    PodInitializing   0         2m   10.44.0.2    k8s-node1
onap          dev-robot-785b9bfb45-9s2rs            0/1    PodInitializing   0         2m   10.36.0.7    k8s-node3 |
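As a rough readiness check you can also filter out everything that is already healthy. The commands below are a sketch that assumes the "dev" release name used in this guide:
Code Block |
---|
# pods that are not yet fully up
kubectl -n onap get pods | grep -Ev 'Running|Completed'

# the APPC statefulset should eventually report 3 ready replicas
kubectl -n onap get statefulset dev-appc |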
Cleanup deployed ONAP instance
To delete a deployed instance, use the following command:
Code Block |
---|
Example:
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm del --purge <Release-name>
# we chose "dev" as our release name
Execute:
$ helm del --purge dev
release "dev" deleted
|
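Before and after the purge you can confirm what Tiller still knows about. This is an optional sanity check, not part of the original procedure:
Code Block |
---|
# list releases, including deleted ones still tracked by Tiller
helm ls --all

# detailed status of a single release
helm status dev |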
Also, delete the existing persistent volumes and persistent volume claims in the "onap" namespace:
Code Block |
---|
#query existing pv in onap namespace
$ kubectl get pv -n onap
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                           STORAGECLASS       REASON    AGE
dev-appc-data0     1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-0   dev-appc-data                8m
dev-appc-data1     1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-2   dev-appc-data                8m
dev-appc-data2     1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-1   dev-appc-data                8m
dev-appc-db-data   1Gi        RWX            Retain           Bound     onap/dev-appc-db-data           dev-appc-db-data             8m

#Example commands are found here:

#delete existing pv
$ kubectl delete pv dev-appc-data0 -n onap
pv "dev-appc-data0" deleted
$ kubectl delete pv dev-appc-data1 -n onap
pv "dev-appc-data1" deleted
$ kubectl delete pv dev-appc-data2 -n onap
pv "dev-appc-data2" deleted
$ kubectl delete pv dev-appc-db-data -n onap
pv "dev-appc-db-data" deleted

#query existing pvc in onap namespace
$ kubectl get pvc -n onap
NAME                        STATUS    VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS       AGE
dev-appc-data-dev-appc-0    Bound     dev-appc-data0     1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-1    Bound     dev-appc-data2     1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-2    Bound     dev-appc-data1     1Gi        RWO            dev-appc-data      9m
dev-appc-db-data            Bound     dev-appc-db-data   1Gi        RWX            dev-appc-db-data   9m

#delete existing pvc
$ kubectl delete pvc dev-appc-data-dev-appc-0 -n onap
pvc "dev-appc-data-dev-appc-0" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-1 -n onap
pvc "dev-appc-data-dev-appc-1" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-2 -n onap
pvc "dev-appc-data-dev-appc-2" deleted
$ kubectl delete pvc dev-appc-db-data -n onap
pvc "dev-appc-db-data" deleted |
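If you prefer to clean everything up in one pass, the following sketch deletes all PVCs in the namespace and then the release's PVs. It assumes every APPC volume of the "dev" release is prefixed with "dev-appc":
Code Block |
---|
kubectl -n onap delete pvc --all
kubectl delete pv $(kubectl get pv --no-headers | awk '/^dev-appc/ {print $1}') |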
Verify APPC Clustering
Refer to Validate the APPC ODL cluster.
Get the Details from the Kubernetes Master Node
The RestConf UI is reachable at https://<Kubernetes-Master-Node-IP>:30230/apidoc/explorer/index.html (admin user).
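A quick reachability check from the command line; the master IP 10.12.5.171 is the one used throughout this guide, and admin/admin is the assumed default ODL credential, so adjust both as needed:
Code Block |
---|
curl -k -u admin:admin -o /dev/null -w '%{http_code}\n' \
  https://10.12.5.171:30230/apidoc/explorer/index.html |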
Run the following command to make sure the installation is error-free.
Code Block | ||
---|---|---|
| ||
$ kubectl cluster-info
Kubernetes master is running at https://10.12.5.171:6443
KubeDNS is running at https://10.12.5.171:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
|
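It is also worth confirming that all worker nodes joined the cluster and are in the Ready state:
Code Block |
---|
kubectl get nodes -o wide |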
Code Block | ||
---|---|---|
| ||
$ kubectl -n onap get all
NAME                        AGE
deploy/dev-appc-cdt         23m
deploy/dev-appc-dgbuilder   23m
deploy/dev-robot            23m

NAME                               AGE
rs/dev-appc-cdt-8cbf9d4d9          23m
rs/dev-appc-dgbuilder-54766c5b87   23m
rs/dev-robot-785b9bfb45            23m

NAME                       AGE
statefulsets/dev-appc      23m
statefulsets/dev-appc-db   23m

NAME                                      READY   STATUS    RESTARTS   AGE
po/dev-appc-0                             2/2     Running   0          23m
po/dev-appc-1                             2/2     Running   0          23m
po/dev-appc-2                             2/2     Running   0          23m
po/dev-appc-cdt-8cbf9d4d9-mhp4b           1/1     Running   0          23m
po/dev-appc-db-0                          2/2     Running   0          23m
po/dev-appc-dgbuilder-54766c5b87-xw6c6    1/1     Running   0          23m
po/dev-robot-785b9bfb45-9s2rs             1/1     Running   0          23m

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
svc/appc               NodePort    10.107.234.237   <none>        8282:30230/TCP,1830:30231/TCP   23m
svc/appc-cdt           NodePort    10.107.253.179   <none>        80:30289/TCP                    23m
svc/appc-cluster       ClusterIP   None             <none>        2550/TCP                        23m
svc/appc-dbhost        ClusterIP   None             <none>        3306/TCP                        23m
svc/appc-dbhost-read   ClusterIP   10.101.117.102   <none>        3306/TCP                        23m
svc/appc-dgbuilder     NodePort    10.102.138.232   <none>        3000:30228/TCP                  23m
svc/appc-sdnctldb01    ClusterIP   None             <none>        3306/TCP                        23m
svc/appc-sdnctldb02    ClusterIP   None             <none>        3306/TCP                        23m
svc/robot              NodePort    10.110.229.236   <none>        88:30209/TCP                    23m |
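The APPC pods carry the label app=appc (visible in the describe output further below), so you can narrow the view to just the ODL cluster members:
Code Block |
---|
kubectl -n onap get pods -l app=appc -o wide |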
Code Block | ||
---|---|---|
| ||
$ kubectl -n onap get pod
NAME                                   READY   STATUS    RESTARTS   AGE
dev-appc-0                             2/2     Running   0          22m
dev-appc-1                             2/2     Running   0          22m
dev-appc-2                             2/2     Running   0          22m
dev-appc-cdt-8cbf9d4d9-mhp4b           1/1     Running   0          22m
dev-appc-db-0                          2/2     Running   0          22m
dev-appc-dgbuilder-54766c5b87-xw6c6    1/1     Running   0          22m
dev-robot-785b9bfb45-9s2rs             1/1     Running   0          22m |
Code Block | ||
---|---|---|
| ||
$ kubectl get pod --all-namespaces -a
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   etcd-k8s-master                        1/1     Running   5          14d
kube-system   kube-apiserver-k8s-master              1/1     Running   5          14d
kube-system   kube-controller-manager-k8s-master     1/1     Running   5          14d
kube-system   kube-dns-86f4d74b45-px44s              3/3     Running   21         27d
kube-system   kube-proxy-25tm5                       1/1     Running   8          27d
kube-system   kube-proxy-6dt4z                       1/1     Running   4          27d
kube-system   kube-proxy-jmv67                       1/1     Running   4          27d
kube-system   kube-proxy-l8fks                       1/1     Running   6          27d
kube-system   kube-scheduler-k8s-master              1/1     Running   5          14d
kube-system   tiller-deploy-84f4c8bb78-s6bq5         1/1     Running   0          4d
kube-system   weave-net-bz7wr                        2/2     Running   20         27d
kube-system   weave-net-c2pxd                        2/2     Running   13         27d
kube-system   weave-net-jw29c                        2/2     Running   20         27d
kube-system   weave-net-kxxpl                        2/2     Running   13         27d
onap          dev-appc-0                             2/2     Running   0          25m
onap          dev-appc-1                             2/2     Running   0          25m
onap          dev-appc-2                             2/2     Running   0          25m
onap          dev-appc-cdt-8cbf9d4d9-mhp4b           1/1     Running   0          25m
onap          dev-appc-db-0                          2/2     Running   0          25m
onap          dev-appc-dgbuilder-54766c5b87-xw6c6    1/1     Running   0          25m
onap          dev-robot-785b9bfb45-9s2rs             1/1     Running   0          25m |
Code Block | ||
---|---|---|
| ||
$ kubectl -n onap get pod -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP          NODE
dev-appc-0                             2/2     Running   0          26m   10.47.0.5   k8s-node2
dev-appc-1                             2/2     Running   0          26m   10.36.0.8   k8s-node3
dev-appc-2                             2/2     Running   0          26m   10.44.0.7   k8s-node1
dev-appc-cdt-8cbf9d4d9-mhp4b           1/1     Running   0          26m   10.47.0.1   k8s-node2
dev-appc-db-0                          2/2     Running   0          26m   10.36.0.5   k8s-node3
dev-appc-dgbuilder-54766c5b87-xw6c6    1/1     Running   0          26m   10.44.0.2   k8s-node1
dev-robot-785b9bfb45-9s2rs             1/1     Running   0          26m   10.36.0.7   k8s-node3 |
Code Block | ||
---|---|---|
| ||
$ kubectl get services --all-namespaces -o wide
NAMESPACE     NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE   SELECTOR
default       kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP                         27d   <none>
kube-system   kube-dns           ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP                   27d   k8s-app=kube-dns
kube-system   tiller-deploy      ClusterIP   10.108.155.106   <none>        44134/TCP                       14d   app=helm,name=tiller
onap          appc               NodePort    10.107.234.237   <none>        8282:30230/TCP,1830:30231/TCP   27m   app=appc,release=dev
onap          appc-cdt           NodePort    10.107.253.179   <none>        80:30289/TCP                    27m   app=appc-cdt,release=dev
onap          appc-cluster       ClusterIP   None             <none>        2550/TCP                        27m   app=appc,release=dev
onap          appc-dbhost        ClusterIP   None             <none>        3306/TCP                        27m   app=appc-db,release=dev
onap          appc-dbhost-read   ClusterIP   10.101.117.102   <none>        3306/TCP                        27m   app=appc-db,release=dev
onap          appc-dgbuilder     NodePort    10.102.138.232   <none>        3000:30228/TCP                  27m   app=appc-dgbuilder,release=dev
onap          appc-sdnctldb01    ClusterIP   None             <none>        3306/TCP                        27m   app=appc-db,release=dev
onap          appc-sdnctldb02    ClusterIP   None             <none>        3306/TCP                        27m   app=appc-db,release=dev
onap          robot              NodePort    10.110.229.236   <none>        88:30209/TCP                    27m   app=robot,release=dev |
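To script against the service, the NodePort that fronts APPC's 8282 port can be extracted directly. This sketch assumes it is listed first in the service spec, as in the output above:
Code Block |
---|
kubectl -n onap get svc appc -o jsonpath='{.spec.ports[0].nodePort}' |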
...
Get more detail about a single pod by using "describe" with the resource name shown in the get all output above.
Code Block | ||
---|---|---|
| ||
$ kubectl -n onap describe po/dev-appc-0
Name:           dev-appc-0
Namespace:      onap
Node:           k8s-node2/10.12.5.193
Start Time:     Tue, 15 May 2018 11:31:47 -0400
Labels:         app=appc
                controller-revision-hash=dev-appc-7d976dd9b9
                release=dev
                statefulset.kubernetes.io/pod-name=dev-appc-0
Annotations:    <none>
Status:         Running
IP:             10.47.0.5
Controlled By:  StatefulSet/dev-appc
Init Containers:
  appc-readiness:
    Container ID:  docker://fdbf3011e7911b181a25c868f7d342951ced2832ed63c481253bb06447a0c04f
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      appc-db
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 15 May 2018 11:32:00 -0400
      Finished:     Tue, 15 May 2018 11:32:16 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
Containers:
  appc:
    Container ID:  docker://2b921a54a6cc19f9b7cdd3c8e7904ae3426019224d247fc31a74f92ec6f05ba0
    Image:         nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest
    Image ID:      docker-pullable://nexus3.onap.org:10001/onap/appc-image@sha256:ee8b64bd578f42169a86951cd45b1f2349192e67d38a7a350af729d1bf33069c
    Ports:         8181/TCP, 1830/TCP
    Command:
      /opt/appc/bin/startODL.sh
    State:          Running
      Started:      Tue, 15 May 2018 11:40:13 -0400
    Ready:          True
    Restart Count:  0
    Readiness:      tcp-socket :8181 delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'db-root-password' in secret 'dev-appc'>  Optional: false
      SDNC_CONFIG_DIR:      /opt/onap/appc/data/properties
      APPC_CONFIG_DIR:      /opt/onap/appc/data/properties
      DMAAP_TOPIC_ENV:      SUCCESS
      ENABLE_ODL_CLUSTER:   true
      APPC_REPLICAS:        3
    Mounts:
      /etc/localtime from localtime (ro)
      /opt/onap/appc/bin/installAppcDb.sh from onap-appc-bin (rw)
      /opt/onap/appc/bin/startODL.sh from onap-appc-bin (rw)
      /opt/onap/appc/data/properties/aaiclient.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/data/properties/appc.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/data/properties/dblib.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/data/properties/svclogic.properties from onap-appc-data-properties (rw)
      /opt/onap/appc/svclogic/bin/showActiveGraphs.sh from onap-appc-svclogic-bin (rw)
      /opt/onap/appc/svclogic/config/svclogic.properties from onap-appc-svclogic-config (rw)
      /opt/onap/ccsdk/bin/installSdncDb.sh from onap-sdnc-bin (rw)
      /opt/onap/ccsdk/bin/startODL.sh from onap-sdnc-bin (rw)
      /opt/onap/ccsdk/data/properties/aaiclient.properties from onap-sdnc-data-properties (rw)
      /opt/onap/ccsdk/data/properties/dblib.properties from onap-sdnc-data-properties (rw)
      /opt/onap/ccsdk/data/properties/svclogic.properties from onap-sdnc-data-properties (rw)
      /opt/onap/ccsdk/svclogic/bin/showActiveGraphs.sh from onap-sdnc-svclogic-bin (rw)
      /opt/onap/ccsdk/svclogic/config/svclogic.properties from onap-sdnc-svclogic-config (rw)
      /opt/opendaylight/current/daexim from dev-appc-data (rw)
      /opt/opendaylight/current/etc/org.ops4j.pax.logging.cfg from log-config (rw)
      /var/log/onap from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
  filebeat-onap:
    Container ID:   docker://b9143c9898a4a071d1d781359e190bdd297e31a2bd04223225a55ff8b1990b32
    Image:          docker.elastic.co/beats/filebeat:5.5.0
    Image ID:       docker-pullable://docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942
    Port:           <none>
    State:          Running
      Started:      Tue, 15 May 2018 11:40:14 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/filebeat/data from data-filebeat (rw)
      /usr/share/filebeat/filebeat.yml from filebeat-conf (rw)
      /var/log/onap from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  dev-appc-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dev-appc-data-dev-appc-0
    ReadOnly:   false
  localtime:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/localtime
  filebeat-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-filebeat
    Optional:  false
  log-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-logging-cfg
    Optional:  false
  logs:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  data-filebeat:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  onap-appc-data-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-data-properties
    Optional:  false
  onap-appc-svclogic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-svclogic-config
    Optional:  false
  onap-appc-svclogic-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-svclogic-bin
    Optional:  false
  onap-appc-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-appc-bin
    Optional:  false
  onap-sdnc-data-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-data-properties
    Optional:  false
  onap-sdnc-svclogic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-svclogic-config
    Optional:  false
  onap-sdnc-svclogic-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-svclogic-bin
    Optional:  false
  onap-sdnc-bin:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-appc-onap-sdnc-bin
    Optional:  false
  default-token-v9mnv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-v9mnv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                Message
  ----     ------                 ----               ----                -------
  Warning  FailedScheduling       29m (x2 over 29m)  default-scheduler   pod has unbound PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled              29m                default-scheduler   Successfully assigned dev-appc-0 to k8s-node2
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "data-filebeat"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "localtime"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "logs"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "dev-appc-data0"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "onap-sdnc-svclogic-bin"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "onap-sdnc-bin"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "onap-appc-data-properties"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "onap-sdnc-data-properties"
  Normal   SuccessfulMountVolume  29m                kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "filebeat-conf"
  Normal   SuccessfulMountVolume  29m (x6 over 29m)  kubelet, k8s-node2  (combined from similar events): MountVolume.SetUp succeeded for volume "default-token-v9mnv"
  Normal   Pulling                29m                kubelet, k8s-node2  pulling image "oomk8s/readiness-check:2.0.0"
  Normal   Pulled                 29m                kubelet, k8s-node2  Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal   Created                29m                kubelet, k8s-node2  Created container
  Normal   Started                29m                kubelet, k8s-node2  Started container
  Normal   Pulling                29m                kubelet, k8s-node2  pulling image "nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest"
  Normal   Pulled                 21m                kubelet, k8s-node2  Successfully pulled image "nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest"
  Normal   Created                21m                kubelet, k8s-node2  Created container
  Normal   Started                21m                kubelet, k8s-node2  Started container
  Normal   Pulling                21m                kubelet, k8s-node2  pulling image "docker.elastic.co/beats/filebeat:5.5.0"
  Normal   Pulled                 21m                kubelet, k8s-node2  Successfully pulled image "docker.elastic.co/beats/filebeat:5.5.0"
  Normal   Created                21m                kubelet, k8s-node2  Created container
  Normal   Started                21m                kubelet, k8s-node2  Started container
  Warning  Unhealthy              5m (x16 over 21m)  kubelet, k8s-node2  Readiness probe failed: dial tcp 10.47.0.5:8181: getsockopt: connection refused |
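If a pod is stuck, the namespace events often explain why faster than the full describe dump. A small sketch, filtering on the first APPC replica:
Code Block |
---|
kubectl -n onap get events --sort-by=.metadata.creationTimestamp | grep dev-appc-0 |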
Get logs of containers inside each pod:
...
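For example, a minimal sketch for pulling logs from the APPC pod's containers; the container names appc, filebeat-onap and appc-readiness come from the describe output above:
Code Block |
---|
kubectl -n onap logs dev-appc-0 -c appc
kubectl -n onap logs dev-appc-0 -c filebeat-onap
# init container logs are fetched the same way
kubectl -n onap logs dev-appc-0 -c appc-readiness |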