This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy APPC within that Kubernetes cluster.
...
- If any of the weave pods gets stuck in the "ImagePullBackOff" state, try running the "sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"" command again.
- Sometimes you need to delete the problematic pod so that it terminates and starts fresh. Use "kubectl delete po/<pod-name> -n <namespace>" to delete a pod.
- To "unjoin" a worker node, run "kubectl delete node <node-name>" (go through the "Undeploy APPC" process at the end if you have an APPC cluster running).
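The ImagePullBackOff check above can be scripted. Below is a minimal sketch ("find_stuck_pods" is a made-up helper name, and it only prints the delete commands rather than running them); it reads "kubectl get pods --all-namespaces" output on stdin, so it can be dry-run without touching a live cluster:

```shell
# find_stuck_pods: print a "kubectl delete" command for every pod that
# is stuck in the ImagePullBackOff state.
# Usage: kubectl get pods --all-namespaces | find_stuck_pods
find_stuck_pods() {
    awk 'NR > 1 && $4 == "ImagePullBackOff" {
        # Column 1 is the namespace, column 2 the pod name.
        printf "kubectl delete po/%s -n %s\n", $2, $1
    }'
}
```

Review the printed commands before running them; deleting a pod lets its controller recreate it fresh.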
Install Helm and Tiller on the Kubernetes Master Node (k8s-master)
ONAP uses Helm, a package manager for Kubernetes.
Install helm (client side). The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:
If you are using Casablanca code, use helm v2.9.1.
Code Block
# As a root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh -v v2.8.2
Install Tiller (server side of helm)
Tiller manages the installation of helm packages (charts). Tiller requires a ServiceAccount to be set up in Kubernetes before being initialized. The following snippet will do that for you:
(Chrome is the preferred browser. IE may add extra "CR LF" characters to each line, which causes problems.)
Code Block
# id
ubuntu
# As a ubuntu user, create a yaml file to define the helm service account and cluster role binding.
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF
# Create a ServiceAccount and ClusterRoleBinding based on the created file.
sudo kubectl create -f tiller-serviceaccount.yaml
# Verify
which helm
helm version
Initialize helm. This command installs Tiller. It also discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and using the default context.
Code Block
helm init --service-account tiller --upgrade
# A new pod is created, but will be in pending status.
kubectl get pods --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5   0/1   Pending   0   7m   <none>   <none>
# A new service is created
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>   44134/TCP   47m   app=helm,name=tiller
# A new deployment is created, but the AVAILABLE flag is set to "0".
kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m
If you need to reset Helm, follow the steps below:
Code Block
# Uninstalls Tiller from a cluster
helm reset --force
# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding
kubectl create -f tiller-serviceaccount.yaml
# init helm
helm init --service-account tiller --upgrade
Configure the Kubernetes Worker Nodes (k8s-node<n>)
Setting up the cluster nodes is easy. Refer back to the "kubeadm init" output log (/root/kubeadm_init.log). The last line of the log contains a "kubeadm join" command with the token information and other parameters.
Capture those parameters and then execute it as root on each of the Kubernetes worker nodes: k8s-node1, k8s-node2, and k8s-node3.
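Capturing those parameters can be scripted as well. A small sketch ("print_join_cmd" is a made-up helper name; the log path is the one mentioned above):

```shell
# print_join_cmd: extract the last "kubeadm join ..." line from a
# kubeadm init log so it can be copied to each worker node.
# Usage: print_join_cmd /root/kubeadm_init.log
print_join_cmd() {
    # The join command is on the last matching line of the log;
    # strip leading whitespace so it can be pasted directly.
    grep 'kubeadm join' "$1" | tail -n 1 | sed 's/^[[:space:]]*//'
}
```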
After running the "kubeadm join" command on a worker node,
- 2 new pods (proxy and weave) will be created on the Master node and assigned to the worker node.
- The tiller pod status will change to "running".
- The AVAILABLE flag for tiller-deploy deployment will be changed to "1".
- The worker node will join the cluster.
The command looks like the following snippet (find the command at the bottom of /root/kubeadm_init.log):
Code Block
# Should change to root user on the worker node.
kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a
# Make sure in the output, you see "This node has joined the cluster:".
Verify the results from master node:
Code Block
kubectl get pods --all-namespaces -o wide
kubectl get nodes
# Sample Output:
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    2h        v1.8.6
k8s-node1    Ready     <none>    53s       v1.8.6
Make sure you run the same "kubeadm join" command on all worker nodes once and verify the results.
Return to the Kubernetes master node VM and execute the "kubectl get nodes" command to see all the Kubernetes nodes. It might take some time, but eventually each node will have a status of "Ready":
Code Block
kubectl get nodes
# Sample Output:
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.8.5
k8s-node1    Ready     <none>    1d        v1.8.5
k8s-node2    Ready     <none>    1d        v1.8.5
k8s-node3    Ready     <none>    1d        v1.8.5
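Rather than re-running "kubectl get nodes" by hand, the Ready check can be automated. A minimal sketch ("not_ready_nodes" is a made-up helper name; node names in the test are illustrative):

```shell
# not_ready_nodes: read "kubectl get nodes" output on stdin and print
# the name of every node whose STATUS column is not "Ready".
# Prints nothing once the whole cluster is up.
# Usage: kubectl get nodes | not_ready_nodes
not_ready_nodes() {
    awk 'NR > 1 && $2 != "Ready" { print $1 }'
}
```

Re-run it until it prints nothing.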
Make sure that the tiller pod is running. Execute the following command (from the master node) and look for a po/tiller-deploy-xxxx entry with a "Running" status. For example:
(If you are using coredns instead of kube-dns, you will notice it has only one container.)
Code Block
kubectl get pods --all-namespaces -o wide
# Sample output:
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                      1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3       Running   0          2h        10.32.0.2        k8s-master
kube-system   kube-proxy-4zztj                     1/1       Running   0          2m        10.147.112.150   k8s-node2
kube-system   kube-proxy-lnv7r                     1/1       Running   0          2h        10.147.112.140   k8s-master
kube-system   kube-proxy-t492g                     1/1       Running   0          20m       10.147.112.164   k8s-node1
kube-system   kube-proxy-xx8df                     1/1       Running   0          2m        10.147.112.169   k8s-node3
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   tiller-deploy-b6bf9f4cc-vbrc5        1/1       Running   0          42m       10.44.0.1        k8s-node1
kube-system   weave-net-b2hkh                      2/2       Running   0          1h        10.147.112.140   k8s-master
kube-system   weave-net-s7l27                      2/2       Running   1          2m        10.147.112.169   k8s-node3
kube-system   weave-net-vmlrq                      2/2       Running   0          20m       10.147.112.164   k8s-node1
kube-system   weave-net-xxgnq                      2/2       Running   1          2m        10.147.112.150   k8s-node2
Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.
Cluster's Full Picture
You can run "kubectl describe node" on the Master node to get a complete report on all nodes (including workers) and their system resources.
Configure dockerdata-nfs
This is a shared directory which must be mounted on all of the Kubernetes VMs (master node and worker nodes), because many of the ONAP pods use this directory to share data.
See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instruction on how to set this up.
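For orientation only, the end state usually amounts to an NFS export of /dockerdata-nfs on one node and a matching mount on every other node. The sketch below is a config fragment, not the actual procedure; the server IP 10.147.112.140 and the export options are illustrative assumptions, so follow the linked instructions for the real setup.

```shell
# Sketch of the shared-directory layout (illustrative values only):
#
# On the NFS server (e.g. the master node), /etc/exports:
#   /dockerdata-nfs  *(rw,no_root_squash,no_subtree_check)
# ...then reload the export table with: sudo exportfs -ra
#
# On every other node, /etc/fstab (10.147.112.140 = assumed server IP):
#   10.147.112.140:/dockerdata-nfs  /dockerdata-nfs  nfs  auto  0  0
# ...then mount it with: sudo mount -a
```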
Configure ONAP
Clone the OOM project (only on the Kubernetes Master Node)
As the ubuntu user, clone the oom repository.
Code Block
git clone https://gerrit.onap.org/r/oom
cd oom/kubernetes
Note
You may use any specific known stable OOM release for APPC deployment. The above URL downloads the latest OOM.
Customize the oom/kubernetes/onap parent chart, such as the values.yaml file, to suit your deployment. You may want to selectively enable or disable ONAP components by changing the subchart **enabled** flags to *true* or *false*.
Code Block
$ vi oom/kubernetes/onap/values.yaml
Example:
...
robot: # Robot Health Check
  enabled: true
sdc:
  enabled: false
appc:
  enabled: true
so: # Service Orchestrator
  enabled: false
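If you script your deployments, the same toggling can be done non-interactively. A sketch, assuming the chart's "enabled:" flag sits on the indented line immediately after the chart name as in the example above ("set_chart_enabled" is a made-up helper name; try it on a throwaway copy of values.yaml first):

```shell
# set_chart_enabled: flip the "enabled:" flag of one subchart in a
# values.yaml-style file. Assumes the flag is the first "enabled:"
# line after the chart name.
# Usage: set_chart_enabled <file> <chart> <true|false>
set_chart_enabled() {
    file=$1; chart=$2; value=$3
    # Address range: from the chart's top-level key to its first
    # "enabled:" line; rewrite that flag in place.
    sed -i "/^${chart}:/,/enabled:/ s/enabled:.*/enabled: ${value}/" "$file"
}
```

For example, "set_chart_enabled oom/kubernetes/onap/values.yaml sdc false" would disable the sdc subchart.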
Deploy APPC
To deploy only APPC, customize the parent chart to disable all components except APPC, as shown in the file below. Also set the global.persistence.mountPath to some non-mounted directory (by default, it is set to the mounted directory /dockerdata-nfs).
Code Block
# Note that all components are set to enabled:false except appc, robot, and mysql. Here we set the number of APPC replicas to 3.
$ cat ~/oom/kubernetes/onap/values.yaml
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302
  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositoryCred:
    user: docker
    password: docker
  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co
  # image pull policy
  pullPolicy: Always
  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /dockerdata-nfs
  # flag to enable debugging - application support required
  debugEnabled: false
  # Repository for creation of nexus3.onap.org secret
  repository: nexus3.onap.org:10001
#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: true
  replicaCount: 3
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: admin
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
dcaegen2:
  enabled: false
dmaap:
  enabled: false
esr:
  enabled: false
log:
  enabled: false
sniro-emulator:
  enabled: false
oof:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
policy:
  enabled: false
portal:
  enabled: false
robot:
  enabled: true
sdc:
  enabled: false
sdnc:
  enabled: false
  replicaCount: 1
  config:
    enableClustering: false
  mysql:
    disableNfsProvisioner: true
    replicaCount: 1
so:
  enabled: false
  replicaCount: 1
  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: true
  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
  # configure embedded mariadb
  mariadb:
    config:
      mariadbRootPassword: password
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
Note: If you set the number of appc replicas in onap/values.yaml, it will override the setting you are about to do in the next step.
Run the command below to set up a local Helm repository to serve up the local ONAP charts:
Code Block
#Press "Enter" after running the command to get the prompt back
$ nohup helm serve &
[1] 2316
$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
# Verify
$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879
If you don't find the local repo, add it manually.
Note the IP (localhost) and port number listed in the above response (8879 here) and use them in the "helm repo add" command as follows:
Code Block
$ helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
Install "make" (learn more about ubuntu-make here: https://wiki.ubuntu.com/ubuntu-make) and build a local Helm repository (from the kubernetes directory):
Code Block
#######################
# Install make from kubernetes directory.
#######################
$ sudo apt install make
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  linux-headers-4.4.0-62 linux-headers-4.4.0-62-generic linux-image-4.4.0-62-generic snap-confine
Use 'sudo apt autoremove' to remove them.
Suggested packages:
  make-doc
The following NEW packages will be installed:
  make
0 upgraded, 1 newly installed, 0 to remove and 72 not upgraded.
Need to get 151 kB of archives.
After this operation, 365 kB of additional disk space will be used.
Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Fetched 151 kB in 0s (208 kB/s)
Selecting previously unselected package make.
(Reading database ... 121778 files and directories currently installed.)
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up make (4.1-6) ...
#######################
# Build local helm repo
#######################
Code Block
$ make all [common] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' make[2]: Entering directory '/home/ubuntu/oom/kubernetes/common' [common] make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common' ==> Linting common [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/common-2.0.0.tgz make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common' [dgbuilder] make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting dgbuilder [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dgbuilder-2.0.0.tgz make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common' [postgres] make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting postgres [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/postgres-2.0.0.tgz make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common' [mysql] make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common' Hang tight while we grab the latest from your chart repositories... 
...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting mysql [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mysql-2.0.0.tgz make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common' make[2]: Leaving directory '/home/ubuntu/oom/kubernetes/common' make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [vid] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? Saving 1 charts Downloading common from repo http://127.0.0.1:8879 Deleting outdated charts ==> Linting vid [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, no failures Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vid-2.0.0.tgz make[1]: Leaving directory '/home/ubuntu/oom/kubernetes' [so] make[1]: Entering directory '/home/ubuntu/oom/kubernetes' Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "local" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ?Happy Helming!? 
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting so
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/so-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[cli]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting cli
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/cli-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[aaf]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting aaf
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/aaf-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[log]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting log
[INFO] Chart.yaml: icon is recommended
[WARNING] templates/: directory not found
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/log-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[esr]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting esr
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/esr-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[mock]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting mock
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mock-0.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[multicloud]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting multicloud
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/multicloud-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[mso]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting mso
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mso-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[dcaegen2]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting dcaegen2
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dcaegen2-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[vnfsdk]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting vnfsdk
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vnfsdk-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[policy]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting policy
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/policy-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[consul]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting consul
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/consul-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[clamp]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting clamp
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/clamp-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[appc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting appc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/appc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[sdc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting sdc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[portal]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting portal
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/portal-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[aai]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting aai
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/aai-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[robot]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting robot
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/robot-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[msb]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting msb
[INFO] Chart.yaml: icon is recommended
[WARNING] templates/: directory not found
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/msb-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[vfc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting vfc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vfc-0.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[message-router]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting message-router
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/message-router-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[uui]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting uui
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/uui-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[sdnc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting sdnc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdnc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[onap]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 24 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading message-router from repo http://127.0.0.1:8879
Downloading mock from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/onap-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
|
...