This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy APPC within that Kubernetes cluster.
(To view the current page, Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems.)
What is OpenStack? What is Kubernetes? What is Docker?
In the OpenStack lab, the controller performs the function of partitioning resources. The compute nodes are the collection of resources (memory, CPUs, hard drive space) to be partitioned. When creating a VM with "X" memory, "Y" CPUs and "Z" hard drive space, OpenStack's controller reviews its pool of available resources, allocates the quota, and then creates the VM on one of the available compute nodes. Many VMs can be created on a single compute node. OpenStack's controller uses many criteria to choose a compute node, but if an application spans multiple VMs, anti-affinity rules can be used to ensure the VMs don't congregate on a single compute node, which would be bad for resilience.
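For example, OpenStack can enforce this with an anti-affinity server group passed as a scheduler hint when the VMs are created (a sketch; the group name and the UUID placeholder are illustrative, not taken from this lab):
Code Block |
---|
# Create a server group whose members must be scheduled on different compute nodes
openstack server group create --policy anti-affinity k8s-anti-affinity

# Pass the group's UUID as a scheduler hint on each "openstack server create"
openstack server create --flavor "flavor-name" --image "image-name" --hint group=<server-group-uuid> "k8s-node1" |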
...
Deployment Architecture
The Kubernetes deployment in this tutorial will be set up on top of OpenStack VMs. Let's call this the undercloud. The undercloud can be physical boxes or VMs. The VMs can come from different cloud providers, but in this tutorial we will use OpenStack. The following table shows the layers of software that need to be considered when thinking about resilience:
...
Code Block |
---|
openstack server list; openstack network list; openstack flavor list; openstack keypair list; openstack image list; openstack security group list

openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-master"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node1"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node2"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node3" |
Configure Each VM
Repeat the following steps on each VM:
Pre-Configure Each VM
Make sure that on each VM:
- The packages are up to date
- The clocks are synchronized
...
Question: Did you check the date on all K8s nodes to make sure they are in sync?
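To check and fix clock synchronization (a minimal sketch; Ubuntu 16.04 images typically use systemd-timesyncd, but your image may differ):
Code Block |
---|
# Compare the current UTC time across all nodes
date -u
timedatectl status

# Enable NTP-based synchronization if the clocks drift
sudo timedatectl set-ntp true |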
Install Docker
The ONAP apps are packaged in Docker containers.
...
Code Block |
---|
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

# Add a docker repository to "/etc/apt/sources.list". It is for the latest stable one for the ubuntu flavour on the machine ("lsb_release -cs")
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce

sudo docker run hello-world

# Verify:
sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
c66d903a0b1f        hello-world         "/hello"            10 seconds ago      Exited (0) 9 seconds ago                        vigorous_bhabha |
Install the Kubernetes Packages
Just install the packages; there is no need to configure them yet.
...
Note: If you intend to remove the Kubernetes packages later, use "apt autoremove kubelet; apt autoremove kubeadm; apt autoremove kubectl; apt autoremove kubernetes-cni".
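For reference, a typical install sequence on Ubuntu 16.04 is sketched below. It follows the standard upstream instructions of that era; the repository URL and key location are assumptions, not taken from this page:
Code Block |
---|
# Add the upstream Kubernetes apt repository (assumed; not shown on this page)
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update

# Install the packages; configuration happens later with kubeadm
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni |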
Configure the Kubernetes Cluster with kubeadm
kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster. Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.
Configure the Kubernetes Master Node (k8s-master)
The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file as there is information which you will need to refer to afterwards.
...
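The command itself is elided above. Based on the log file referenced later on this page (/root/kubeadm_init.log), a typical invocation as root would be the following sketch:
Code Block |
---|
# Run on k8s-master as root; tee the output to a log so the "kubeadm join" command can be retrieved later
kubeadm init | tee /root/kubeadm_init.log |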
Code Block |
---|
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.7
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubefed-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.114.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 44.002324 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubefed-1 as master by adding a label and a taint
[markmaster] Master kubefed-1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 2246a6.83b4c7ca38913ce1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a |
NOTE: the "kubeadm join .." command shows in the log of kubeadm init, should run in each VMs in the k8s cluster to perform a cluster, use "kubectl get nodes" to make sure all nodes are all joined.
Execute the following snippet (as the ubuntu user) to get kubectl to work:
...
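For reference, the snippet is the one printed by kubeadm init (see the log above):
Code Block |
---|
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config |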
Code Block |
---|
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                      1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3       Running   0          44m       10.32.0.2        k8s-master
kube-system   kube-proxy-lnv7r                     1/1       Running   0          44m       10.147.112.140   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   weave-net-b2hkh                      2/2       Running   0          1m        10.147.112.140   k8s-master
# (There will be 2 coredns pods with different IP addresses, with kubernetes version 1.10.1)

# Verify that the AVAILABLE flag for the deployment "kube-dns" or "coredns" has changed to 1. (2 with kubernetes version 1.10.1)
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           1h |
Troubleshooting tips:
- If any of the weave pods runs into a problem and gets stuck in the "ImagePullBackOff" state, you can try running the "sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"" command again.
- Sometimes you need to delete a problematic pod so that it terminates and starts fresh. Use "kubectl delete po/<pod-name> -n <name-space>" to delete a pod.
- To "unjoin" a worker node, run "kubectl delete node <node-name>" as shown in the example below (go through the "Undeploy APPC" process at the end first if you have an APPC cluster running).
Install Helm and Tiller on the Kubernetes Master Node (k8s-master)
ONAP uses Helm, a package manager for kubernetes.
Install helm (client side). The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:
If you are using the Casablanca code, use Helm v2.9.1.
Code Block |
---|
# As a root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh -v v2.8.2
|
Install Tiller (the server side of Helm)
Tiller manages the installation of Helm packages (charts). Tiller requires a ServiceAccount to be set up in Kubernetes before being initialized. The following snippet will do that for you:
(Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems.)
Code Block |
---|
# id
ubuntu
# As the ubuntu user, create a yaml file to define the helm service account and cluster role binding.
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: ""
EOF
# Create a ServiceAccount and ClusterRoleBinding based on the created file.
sudo kubectl create -f tiller-serviceaccount.yaml
# Verify
which helm
helm version |
Initialize Helm. This command installs Tiller. It also discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and using the default context.
Code Block |
---|
helm init --service-account tiller --upgrade

# A new pod is created, but will be in pending status.
kubectl get pods --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5   0/1   Pending   0   7m    <none>          <none>

# A new service is created.
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>   44134/TCP   47m   app=helm,name=tiller

# A new deployment is created, but the AVAILABLE flag is set to "0".
kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m |
If you need to reset Helm, follow the steps below:
Code Block |
---|
# Uninstalls Tiller from a cluster
helm reset --force
# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding
kubectl create -f tiller-serviceaccount.yaml
#init helm
helm init --service-account tiller --upgrade
|
Configure the Kubernetes Worker Nodes (k8s-node<n>)
Setting up the worker nodes is very easy. Just refer back to the "kubeadm init" output log (/root/kubeadm_init.log). The last line of the log contains the "kubeadm join" command with the token information and other parameters.
...
Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.
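You can confirm this from the master with "kubectl get nodes" (the output below is illustrative of a healthy 4-node cluster, not captured from this deployment):
Code Block |
---|
kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1h        v1.8.7
k8s-node1    Ready     <none>    1h        v1.8.7
k8s-node2    Ready     <none>    1h        v1.8.7
k8s-node3    Ready     <none>    1h        v1.8.7 |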
Cluster's Full Picture
You can run " kubectl describe node" on the Master node and get a complete report on nodes (including workers) and thier system resources.
Configure dockerdata-nfs
This is a shared directory which must be mounted on all of the Kubernetes VMs (master and worker nodes), because many of the ONAP pods use this directory to share data.
See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instructions on how to set this up.
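The linked page is the authoritative guide; as a minimal sketch, an NFS-based setup (assuming the master exports the folder; hostnames and export options are illustrative) looks like:
Code Block |
---|
# On k8s-master (the NFS server in this sketch)
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl restart nfs-kernel-server

# On each worker node
sudo apt-get install -y nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount <k8s-master-ip>:/dockerdata-nfs /dockerdata-nfs |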
Configure ONAP
Clone OOM project only on Kubernetes Master Node
As the ubuntu user, clone the oom repository.
...
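The clone command is elided above; a typical invocation against the ONAP Gerrit (assumed here, since this page does not show the URL) is:
Code Block |
---|
# Optionally add "-b <stable-branch>" to pin a specific OOM release
git clone http://gerrit.onap.org/r/oom |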
Note |
---|
You may use any specific known stable OOM release for APPC deployment. The above URL downloads the latest OOM. |
Customize the oom/kubernetes/onap parent chart, such as the values.yaml file, to suit your deployment. You may want to selectively enable or disable ONAP components by changing the subchart **enabled** flags to *true* or *false*.
Code Block |
---|
$ vi oom/kubernetes/onap/values.yaml
Example:
...
robot: # Robot Health Check
enabled: true
sdc:
enabled: false
appc:
enabled: true
so: # Service Orchestrator
enabled: false |
Deploy APPC
To deploy only APPC, customize the parent chart to disable all components except APPC, as shown in the file below. Also set global.persistence.mountPath to some non-mounted directory (by default, it is set to the mounted directory /dockerdata-nfs).
Code Block |
---|
#Note that all components are changed to enabled:false except appc, robot, and mysql. Here we set the number of APPC replicas to 3.
$ cat ~/oom/kubernetes/onap/values.yaml
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
# Change to an unused port prefix range to prevent port conflicts
# with other instances running within the same k8s cluster
nodePortPrefix: 302
# ONAP Repository
# Uncomment the following to enable the use of a single docker
# repository but ONLY if your repository mirrors all ONAP
# docker images. This includes all images from dockerhub and
# any other repository that hosts images for ONAP components.
#repository: nexus3.onap.org:10001
repositoryCred:
user: docker
password: docker
# readiness check - temporary repo until images migrated to nexus3
readinessRepository: oomk8s
# logging agent - temporary repo until images migrated to nexus3
loggingRepository: docker.elastic.co
# image pull policy
pullPolicy: Always
# default mount path root directory referenced
# by persistent volumes and log files
persistence:
mountPath: /dockerdata-nfs
# flag to enable debugging - application support required
debugEnabled: false
# Repository for creation of nexus3.onap.org secret
repository: nexus3.onap.org:10001
#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
enabled: false
aai:
enabled: false
appc:
enabled: true
replicaCount: 3
config:
openStackType: OpenStackProvider
openStackName: OpenStack
openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
openStackServiceTenantName: default
openStackDomain: default
openStackUserName: admin
openStackEncryptedPassword: admin
clamp:
enabled: false
cli:
enabled: false
consul:
enabled: false
dcaegen2:
enabled: false
dmaap:
enabled: false
esr:
enabled: false
log:
enabled: false
sniro-emulator:
enabled: false
oof:
enabled: false
msb:
enabled: false
multicloud:
enabled: false
policy:
enabled: false
portal:
enabled: false
robot:
enabled: true
sdc:
enabled: false
sdnc:
enabled: false
replicaCount: 1
config:
enableClustering: false
mysql:
disableNfsProvisioner: true
replicaCount: 1
so:
enabled: false
replicaCount: 1
liveness:
# necessary to disable liveness probe when setting breakpoints
# in debugger so K8s doesn't restart unresponsive container
enabled: true
# so server configuration
config:
# message router configuration
dmaapTopic: "AUTO"
# openstack configuration
openStackUserName: "vnf_user"
openStackRegion: "RegionOne"
openStackKeyStoneUrl: "http://1.2.3.4:5000"
openStackServiceTenantName: "service"
openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
# configure embedded mariadb
mariadb:
config:
mariadbRootPassword: password
uui:
enabled: false
vfc:
enabled: false
vid:
enabled: false
vnfsdk:
enabled: false
|
Note: If you set the number of APPC replicas in onap/values.yaml, it will override the setting you are about to do in the next step.
Run the below command to set up a local Helm repository to serve up the local ONAP charts:
Code Block |
---|
#Press "Enter" after running the command to get the prompt back
$ nohup helm serve &
[1] 2316
$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
# Verify
$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879
|
If you don't find the local repo, add it manually.
Note the IP (localhost) and port number listed in the above response (8879 here) and use them in the "helm repo add" command as follows:
Code Block |
---|
$ helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
|
Install "make" ( Learn more about ubuntu-make here : https://wiki.ubuntu.com/ubuntu-make) and build a local Helm repository (from the kubernetes directory):
Code Block |
---|
#######################
# Install make from kubernetes directory.
#######################
$ sudo apt install make
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  linux-headers-4.4.0-62 linux-headers-4.4.0-62-generic linux-image-4.4.0-62-generic snap-confine
Use 'sudo apt autoremove' to remove them.
Suggested packages:
  make-doc
The following NEW packages will be installed:
  make
0 upgraded, 1 newly installed, 0 to remove and 72 not upgraded.
Need to get 151 kB of archives.
After this operation, 365 kB of additional disk space will be used.
Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Fetched 151 kB in 0s (208 kB/s)
Selecting previously unselected package make.
(Reading database ... 121778 files and directories currently installed.)
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up make (4.1-6) ... |
Build a local Helm repository (from the kubernetes directory):
Code Block |
---|
$ make all

[common]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
make[2]: Entering directory '/home/ubuntu/oom/kubernetes/common'
[common]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
==> Linting common
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/common-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
[dgbuilder]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting dgbuilder
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dgbuilder-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
[postgres]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting postgres
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/postgres-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
[mysql]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting mysql
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mysql-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
make[2]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[vid]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting vid
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vid-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[so]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting so
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/so-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[cli]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting cli
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/cli-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[aaf]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting aaf
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/aaf-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[log]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting log
[INFO] Chart.yaml: icon is recommended
[WARNING] templates/: directory not found
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/log-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[esr]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting esr
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/esr-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[mock]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting mock
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mock-0.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[multicloud]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting multicloud
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/multicloud-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[mso]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting mso
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mso-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[dcaegen2]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting dcaegen2
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dcaegen2-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[vnfsdk]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting vnfsdk
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vnfsdk-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[policy]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting policy
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/policy-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[consul]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting consul
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/consul-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[clamp]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting clamp
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/clamp-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[appc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting appc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/appc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[sdc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting sdc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[portal]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting portal
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/portal-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[aai]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting aai
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/aai-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[robot]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting robot
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/robot-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[msb]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting msb
[INFO] Chart.yaml: icon is recommended
[WARNING] templates/: directory not found
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/msb-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[vfc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting vfc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vfc-0.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[message-router]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting message-router
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/message-router-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[uui]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting uui
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/uui-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[sdnc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting sdnc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdnc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[onap]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 24 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading message-router from repo http://127.0.0.1:8879
Downloading mock from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/onap-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' |
...
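The deploy step itself is elided above. An invocation consistent with the release name ("dev") and namespace ("onap") visible in the pod names below would be the following sketch (Helm v2 syntax; not taken verbatim from this page):
Code Block |
---|
# From the oom/kubernetes directory, deploy the local onap chart
helm install local/onap --name dev --namespace onap |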
Code Block |
---|
ubuntu@k8s-master:~/oom/kubernetes$ kubectl get pods --all-namespaces -o wide -w
NAMESPACE     NAME                                  READY     STATUS            RESTARTS   AGE       IP            NODE
kube-system   etcd-k8s-master                       1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   kube-apiserver-k8s-master             1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   kube-controller-manager-k8s-master    1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   kube-dns-86f4d74b45-px44s             3/3       Running           21         27d       10.32.0.5     k8s-master
kube-system   kube-proxy-25tm5                      1/1       Running           8          27d       10.12.5.171   k8s-master
kube-system   kube-proxy-6dt4z                      1/1       Running           4          27d       10.12.5.174   k8s-node1
kube-system   kube-proxy-jmv67                      1/1       Running           4          27d       10.12.5.193   k8s-node2
kube-system   kube-proxy-l8fks                      1/1       Running           6          27d       10.12.5.194   k8s-node3
kube-system   kube-scheduler-k8s-master             1/1       Running           5          14d       10.12.5.171   k8s-master
kube-system   tiller-deploy-84f4c8bb78-s6bq5        1/1       Running           0          4d        10.47.0.7     k8s-node2
kube-system   weave-net-bz7wr                       2/2       Running           20         27d       10.12.5.194   k8s-node3
kube-system   weave-net-c2pxd                       2/2       Running           13         27d       10.12.5.174   k8s-node1
kube-system   weave-net-jw29c                       2/2       Running           20         27d       10.12.5.171   k8s-master
kube-system   weave-net-kxxpl                       2/2       Running           13         27d       10.12.5.193   k8s-node2
onap          dev-appc-0                            0/2       PodInitializing   0          2m        10.47.0.5     k8s-node2
onap          dev-appc-1                            0/2       PodInitializing   0          2m        10.36.0.8     k8s-node3
onap          dev-appc-2                            0/2       PodInitializing   0          2m        10.44.0.7     k8s-node1
onap          dev-appc-cdt-8cbf9d4d9-mhp4b          1/1       Running           0          2m        10.47.0.1     k8s-node2
onap          dev-appc-db-0                         2/2       Running           0          2m        10.36.0.5     k8s-node3
onap          dev-appc-dgbuilder-54766c5b87-xw6c6   0/1       PodInitializing   0          2m        10.44.0.2     k8s-node1
onap          dev-robot-785b9bfb45-9s2rs            0/1       PodInitializing   0          2m        10.36.0.7     k8s-node3 |
Cleanup deployed ONAP instance
To delete a deployed instance, use the following command:
...
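With Helm v2 and the "dev" release used throughout this page, the delete command would be the following sketch ("--purge" also removes the release history):
Code Block |
---|
helm delete dev --purge |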
Code Block |
---|
# Query existing pv in onap namespace
$ kubectl get pv -n onap
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                           STORAGECLASS       REASON    AGE
dev-appc-data0     1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-0   dev-appc-data                8m
dev-appc-data1     1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-2   dev-appc-data                8m
dev-appc-data2     1Gi        RWO            Retain           Bound     onap/dev-appc-data-dev-appc-1   dev-appc-data                8m
dev-appc-db-data   1Gi        RWX            Retain           Bound     onap/dev-appc-db-data           dev-appc-db-data             8m

# Delete existing pv
$ kubectl delete pv dev-appc-data0 -n onap
pv "dev-appc-data0" deleted
$ kubectl delete pv dev-appc-data1 -n onap
pv "dev-appc-data1" deleted
$ kubectl delete pv dev-appc-data2 -n onap
pv "dev-appc-data2" deleted
$ kubectl delete pv dev-appc-db-data -n onap
pv "dev-appc-db-data" deleted

# Query existing pvc in onap namespace
$ kubectl get pvc -n onap
NAME                       STATUS    VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS       AGE
dev-appc-data-dev-appc-0   Bound     dev-appc-data0     1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-1   Bound     dev-appc-data2     1Gi        RWO            dev-appc-data      9m
dev-appc-data-dev-appc-2   Bound     dev-appc-data1     1Gi        RWO            dev-appc-data      9m
dev-appc-db-data           Bound     dev-appc-db-data   1Gi        RWX            dev-appc-db-data   9m

# Delete existing pvc
$ kubectl delete pvc dev-appc-data-dev-appc-0 -n onap
pvc "dev-appc-data-dev-appc-0" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-1 -n onap
pvc "dev-appc-data-dev-appc-1" deleted
$ kubectl delete pvc dev-appc-data-dev-appc-2 -n onap
pvc "dev-appc-data-dev-appc-2" deleted
$ kubectl delete pvc dev-appc-db-data -n onap
pvc "dev-appc-db-data" deleted |
Verify APPC Clustering
Refer to Validate the APPC ODL cluster.
Get the Details from the Kubernetes Master Node
Access to the RestConf UI is via https://<Kubernetes-Master-Node-IP>:30230/apidoc/explorer/index.html (admin user)
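A quick reachability check from outside the cluster (a sketch; the <admin-password> placeholder is hypothetical and deployment-specific):
Code Block |
---|
curl -k -u admin:<admin-password> https://<k8s-master-ip>:30230/apidoc/explorer/index.html |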
...