Deploying SDN-C using helm chart
This wiki describes how to deploy SDN-C on a Kubernetes cluster using the latest SDN-C Helm chart.
Prerequisites
This page assumes you have a configured Kubernetes cluster. For more information, see Deploying Kubernetes Cluster with kubeadm.
Follow only the steps before "Configuring SDNC-ONAP".
Configure ONAP
Clone the OOM project (on the Kubernetes master node only)
As the ubuntu user, clone the oom repository.
ubuntu@k8s-s1-master:/home/ubuntu/# git clone https://gerrit.onap.org/r/oom
ubuntu@k8s-s1-master:/home/ubuntu/# cd oom/kubernetes
Customize the oom/kubernetes/onap parent chart, particularly the values.yaml file, to suit your deployment. You can selectively enable or disable ONAP components by setting each subchart's **enabled** flag to *true* or *false*.
ubuntu@k8s-s1-master:/home/ubuntu/# vi oom/kubernetes/onap/values.yaml
Example:
...
robot: # Robot Health Check
  enabled: true
sdc:
  enabled: false
sdnc:
  enabled: false
so: # Service Orchestrator
  enabled: true
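A quick way to check which subcharts you have enabled is a grep over the parent values.yaml. The sketch below runs against a small inline sample so it is self-contained; in a real deployment, point the grep at oom/kubernetes/onap/values.yaml instead:

```shell
# Sample standing in for oom/kubernetes/onap/values.yaml (illustration only)
cat > /tmp/values-sample.yaml << 'EOF'
robot:
  enabled: true
sdc:
  enabled: false
sdnc:
  enabled: false
so:
  enabled: true
EOF

# Print each enabled component name together with its flag;
# grep -B1 pulls in the component line above each "enabled: true",
# and the second grep drops the "--" group separators
grep -B1 'enabled: true' /tmp/values-sample.yaml | grep -v '^--$'
```

Here only robot and so would be listed, matching the example above.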
Deploy SDNC
To deploy only SDNC, customize the parent chart to disable all components except SDNC, as shown in the file below. Also set global.persistence.mountPath to a non-mounted directory (by default, it points to the mounted directory /dockerdata-nfs).
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# sudo mkdir /onapDev
#Note that all components are set to enabled: false except sdnc (and its underlying mysql). Here we set the number of SDNC/MySQL replicas to 3/2.
#Note that global.persistence.mountPath is set to the non-mounted directory /onapDev (this is required because we keep nfs-provisioner enabled in the SDN-C configuration)
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# cat ~/oom/kubernetes/onap/values.yaml
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302

  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositorySecret: eyJuZXh1czMub25hcC5vcmc6MTAwMDEiOnsidXNlcm5hbWUiOiJkb2NrZXIiLCJwYXNzd29yZCI6ImRvY2tlciIsImVtYWlsIjoiQCIsImF1dGgiOiJaRzlqYTJWeU9tUnZZMnRsY2c9PSJ9fQ==

  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co

  # image pull policy
  pullPolicy: Always

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /onapDev

  # flag to enable debugging - application support required
  debugEnabled: false

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: false
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
dcaegen2:
  enabled: false
esr:
  enabled: false
log:
  enabled: false
message-router:
  enabled: false
mock:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
policy:
  enabled: false
portal:
  enabled: false
robot:
  enabled: false
sdc:
  enabled: false
sdnc:
  enabled: true
  replicaCount: 3
  config:
    enableClustering: false
  mysql:
    disableNfsProvisioner: true
    replicaCount: 2
so:
  enabled: false
  replicaCount: 1
  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: true
  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
  # configure embedded mariadb
  mariadb:
    config:
      mariadbRootPassword: password
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
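The long repositorySecret value above is not opaque magic: it is a base64-encoded dockercfg JSON blob that Kubernetes uses as an image pull secret. You can inspect it locally (base64 -d is the GNU coreutils decode flag; on macOS use -D):

```shell
# Decode the repositorySecret from onap/values.yaml to see the dockercfg JSON
echo 'eyJuZXh1czMub25hcC5vcmc6MTAwMDEiOnsidXNlcm5hbWUiOiJkb2NrZXIiLCJwYXNzd29yZCI6ImRvY2tlciIsImVtYWlsIjoiQCIsImF1dGgiOiJaRzlqYTJWeU9tUnZZMnRsY2c9PSJ9fQ==' | base64 -d

# The inner "auth" field is itself base64 for "username:password"
echo 'ZG9ja2VyOmRvY2tlcg==' | base64 -d   # → docker:docker
```

This is only an inspection aid; you never need to decode the value to deploy.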
Note: If you set the number of SDNC/MySQL replicas in onap/values.yaml, it overrides the settings you are about to make in the next step.
Customize the oom/kubernetes/sdnc chart, particularly the values.yaml file, to configure the number of replicas for the SDN-C and MySQL services as your deployment requires.
a) To configure MySQL replicas, edit mysql.replicaCount
b) To configure SDNC replicas, edit replicaCount
Example: The configuration below sets 1 MySQL replica and 1 SDNC replica.
ubuntu@k8s-s1-master:/home/ubuntu# vi oom/kubernetes/sdnc/values.yaml
...
...
mysql:
  nameOverride: sdnc-db
  service:
    name: sdnc-dbhost
  ...
  ...
  replicaCount: 1
...
...
# default number of instances
replicaCount: 1
...
ubuntu@k8s-s1-master:/home/ubuntu#
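Note that the two replicaCount keys live at different indentation levels: the top-level key controls the SDNC pods, while the one nested under mysql controls the DB pods. The sketch below shows how to tell them apart, using an inline sample that mirrors the structure of oom/kubernetes/sdnc/values.yaml:

```shell
# Sample mirroring the relevant keys of oom/kubernetes/sdnc/values.yaml
cat > /tmp/sdnc-values-sample.yaml << 'EOF'
mysql:
  nameOverride: sdnc-db
  replicaCount: 1
replicaCount: 1
EOF

# SDNC replicas: replicaCount at column 1 (top level)
grep -c '^replicaCount:' /tmp/sdnc-values-sample.yaml
# MySQL replicas: replicaCount indented two spaces under mysql
grep -c '^  replicaCount:' /tmp/sdnc-values-sample.yaml
```

When editing the real file, make sure you change the key at the intended indentation level.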
Run the command below to set up a local Helm repository that serves the local ONAP charts:
#Press "Enter" after running the command to get the prompt back
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# nohup sudo helm serve >/dev/null 2>&1 &
[1] 2316
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
# Verify
$ ps -ef | grep helm
root 7323 18581 0 20:52 pts/8 00:00:00 sudo helm serve
root 7324 7323 0 20:52 pts/8 00:00:00 helm serve
ubuntu 7445 18581 0 20:52 pts/8 00:00:00 grep --color=auto helm
$
# Verify
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
If you don't see the local repo, add it manually.
Note the IP (localhost) and port number listed in the response above (8879 here) and use them in the "helm repo add" command as follows:
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
Install GNU make and build a local Helm repository (from the kubernetes directory):
#######################
# Install make from kubernetes directory.
#######################
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# sudo apt install make
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
linux-headers-4.4.0-62 linux-headers-4.4.0-62-generic linux-image-4.4.0-62-generic snap-confine
Use 'sudo apt autoremove' to remove them.
Suggested packages:
make-doc
The following NEW packages will be installed:
make
0 upgraded, 1 newly installed, 0 to remove and 72 not upgraded.
Need to get 151 kB of archives.
After this operation, 365 kB of additional disk space will be used.
Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Fetched 151 kB in 0s (208 kB/s)
Selecting previously unselected package make.
(Reading database ... 121778 files and directories currently installed.)
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up make (4.1-6) ...
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
#######################
# Build local helm repo
#######################
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# sudo make all
[common]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
make[2]: Entering directory '/home/ubuntu/oom/kubernetes/common'
[common]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
==> Linting common
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/common-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
[dgbuilder]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting dgbuilder
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dgbuilder-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
[postgres]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting postgres
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/postgres-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
[mysql]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting mysql
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mysql-2.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
make[2]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[vid]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting vid
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vid-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[so]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting so
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/so-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[cli]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting cli
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/cli-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[aaf]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting aaf
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/aaf-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[log]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting log
[INFO] Chart.yaml: icon is recommended
[WARNING] templates/: directory not found
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/log-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[esr]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting esr
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/esr-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[mock]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting mock
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mock-0.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[multicloud]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting multicloud
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/multicloud-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[mso]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting mso
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/mso-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[dcaegen2]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting dcaegen2
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/dcaegen2-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[vnfsdk]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting vnfsdk
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vnfsdk-1.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[policy]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting policy
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/policy-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[consul]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting consul
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/consul-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[clamp]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting clamp
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/clamp-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[appc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting appc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/appc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[sdc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting sdc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[portal]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting portal
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/portal-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[aai]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting aai
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/aai-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[robot]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting robot
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/robot-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[msb]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting msb
[INFO] Chart.yaml: icon is recommended
[WARNING] templates/: directory not found
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/msb-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[vfc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
==> Linting vfc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/vfc-0.1.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[message-router]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting message-router
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/message-router-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[uui]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 1 charts
Downloading common from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting uui
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/uui-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[sdnc]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 3 charts
Downloading common from repo http://127.0.0.1:8879
Downloading mysql from repo http://127.0.0.1:8879
Downloading dgbuilder from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting sdnc
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/sdnc-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
[onap]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Saving 24 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading message-router from repo http://127.0.0.1:8879
Downloading mock from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/onap-2.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
Setting up this Helm repository is a one-time activity. If you later change your deployment charts or values, run the **make** command again to update your local Helm repository.
If Change 41597 has not been merged, create persistent volumes so they are available to be claimed by the persistent volume claims created during MySQL pod deployment. (As of late April 2018, the change has been merged and you do not need to create PVs with the following step.)
#use spec.storageClassName as {Release-name}-sdnc-db-data (in our example, the release name is "dev")
#use spec.hostPath.path as {global.persistence.mountPath}/{Release-name}/sdnc/data (in our example, global.persistence.mountPath is the unmounted directory /onapDev and the release name is "dev")
#the yaml file below will create two persistent volumes to support two MySQL DB replicas
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# cd /home/ubuntu
ubuntu@k8s-s1-master:/home/ubuntu# cat > pv.yaml << EOF
kind: PersistentVolume
apiVersion: v1
metadata:
  annotations:
    volume.alpha.kubernetes.io/storage-class: "yes"
  name: nfs-vol1
  namespace: onap
  labels:
    app: nfs-provisioner
    type: local
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  storageClassName: dev-sdnc-db-data
  hostPath:
    path: "/onapDev/dev/sdnc/data"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  annotations:
    volume.alpha.kubernetes.io/storage-class: "yes"
  name: nfs-vol2
  namespace: onap
  labels:
    app: nfs-provisioner
    type: local
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  storageClassName: dev-sdnc-db-data
  hostPath:
    path: "/onapDev/dev/sdnc/data"
EOF
ubuntu@k8s-s1-master:/home/ubuntu# kubectl create -f pv.yaml
ubuntu@k8s-s1-master:/home/ubuntu# cd oom/kubernetes
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
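Since the two PersistentVolume manifests above differ only in their metadata.name, the same file can also be generated with a short loop. This is just a convenience sketch producing equivalent content (written to /tmp/pv.yaml here; apply it afterwards with kubectl create -f as above):

```shell
# Generate one PersistentVolume manifest per MySQL replica (nfs-vol1, nfs-vol2)
for i in 1 2; do
cat << EOF
kind: PersistentVolume
apiVersion: v1
metadata:
  annotations:
    volume.alpha.kubernetes.io/storage-class: "yes"
  name: nfs-vol${i}
  namespace: onap
  labels:
    app: nfs-provisioner
    type: local
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  storageClassName: dev-sdnc-db-data
  hostPath:
    path: "/onapDev/dev/sdnc/data"
---
EOF
done > /tmp/pv.yaml

# Sanity check: both volume names should appear
grep 'name: nfs-vol' /tmp/pv.yaml
```

If you scale MySQL beyond two replicas, extend the loop range accordingly so each replica has a volume to claim.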
Once the repo is set up, installation of ONAP can be done with a single command:
Example:
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm install local/onap --name <Release-name> --namespace onap
Execute:
# we choose "dev" as our release name here
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm install local/onap --name dev --namespace onap
NAME: dev
LAST DEPLOYED: Thu Apr 5 15:29:43 2018
NAMESPACE: onap
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/ClusterRoleBinding
NAME AGE
onap-binding <invalid>
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dev-aaf-cs ClusterIP None <none> 7000/TCP,7001/TCP,9042/TCP,9160/TCP <invalid>
dev-aaf NodePort 10.102.199.114 <none> 8101:30299/TCP <invalid>
dev-sdnc-dgbuilder NodePort 10.98.198.119 <none> 3000:30203/TCP <invalid>
dev-sdnc-dmaap-listener ClusterIP None <none> <none> <invalid>
sdnc-dbhost-read ClusterIP 10.97.164.247 <none> 3306/TCP <invalid>
dev-sdnc-nfs-provisioner ClusterIP 10.96.108.12 <none> 2049/TCP,20048/TCP,111/TCP,111/UDP <invalid>
dev-sdnc-dbhost ClusterIP None <none> 3306/TCP <invalid>
sdnc-sdnctldb02 ClusterIP None <none> 3306/TCP <invalid>
sdnc-sdnctldb01 ClusterIP None <none> 3306/TCP <invalid>
dev-sdnc-portal NodePort 10.98.82.180 <none> 8443:30201/TCP <invalid>
dev-sdnc-ueb-listener ClusterIP None <none> <none> <invalid>
sdnc-cluster ClusterIP None <none> 2550/TCP <invalid>
dev-sdnc NodePort 10.109.177.114 <none> 8282:30202/TCP,8202:30208/TCP,8280:30246/TCP <invalid>
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
dev-aaf-cs 1 1 1 0 <invalid>
dev-aaf 1 1 1 0 <invalid>
dev-sdnc-dgbuilder 1 1 1 0 <invalid>
dev-sdnc-dmaap-listener 1 1 1 0 <invalid>
dev-sdnc-nfs-provisioner 1 1 1 0 <invalid>
dev-sdnc-portal 1 0 0 0 <invalid>
dev-sdnc-ueb-listener 1 0 0 0 <invalid>
==> v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
dev-sdnc-db 2 1 <invalid>
dev-sdnc 3 3 <invalid>
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
dev-aaf-cs-7c5b64d884-msh74 0/1 ContainerCreating 0 <invalid>
dev-aaf-775bdc6b48-cf7fr 0/1 Init:0/1 0 <invalid>
dev-sdnc-dgbuilder-6fdfb498f-wf7bt 0/1 Init:0/1 0 <invalid>
dev-sdnc-dmaap-listener-5998b5774c-wz24j 0/1 Init:0/1 0 <invalid>
dev-sdnc-nfs-provisioner-75dcd8c86c-qz2qh 0/1 ContainerCreating 0 <invalid>
dev-sdnc-portal-5cd7598547-46gn4 0/1 Init:0/1 0 <invalid>
dev-sdnc-ueb-listener-598c68f8d8-frbfz 0/1 Init:0/1 0 <invalid>
dev-sdnc-db-0 0/2 Init:0/3 0 <invalid>
dev-sdnc-0 0/2 Init:0/1 0 <invalid>
dev-sdnc-1 0/2 Init:0/1 0 <invalid>
dev-sdnc-2 0/2 Init:0/1 0 <invalid>
==> v1/Secret
NAME TYPE DATA AGE
dev-aaf-cs Opaque 0 <invalid>
dev-sdnc-dgbuilder Opaque 1 <invalid>
dev-sdnc-db Opaque 1 <invalid>
dev-sdnc-portal Opaque 1 <invalid>
dev-sdnc Opaque 1 <invalid>
onap-docker-registry-key kubernetes.io/dockercfg 1 <invalid>
==> v1/ConfigMap
NAME DATA AGE
dev-aaf 0 <invalid>
dev-sdnc-dgbuilder-config 1 <invalid>
dev-sdnc-dgbuilder-scripts 2 <invalid>
sdnc-dmaap-configmap 1 <invalid>
dev-sdnc-db-db-configmap 2 <invalid>
sdnc-portal-configmap 1 <invalid>
sdnc-ueb-configmap 1 <invalid>
dev-sdnc-installsdncdb 1 <invalid>
dev-sdnc-dblib-properties 1 <invalid>
dev-sdnc-aaiclient-properties 1 <invalid>
dev-sdnc-startodl 1 <invalid>
dev-sdnc-onap-sdnc-svclogic-config 1 <invalid>
dev-sdnc-svclogic-config 1 <invalid>
dev-sdnc-filebeat-configmap 1 <invalid>
dev-sdnc-log-configmap 1 <invalid>
==> v1/StorageClass
NAME PROVISIONER AGE
dev-sdnc-db-data dev-sdnc-db/nfs <invalid>
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes#
Downgrade helm
The helm installation procedure puts the latest version of the helm client on your master node. The Tiller (helm server) version then follows the helm client version, so Tiller will also be the latest.
If the helm/tiller version on your K8s master node is not the one the ONAP installation expects, you will get "Chart incompatible with Tiller v2.9.1". See below:
ubuntu@kanatamaster:~/oominstall/kubernetes$ helm install local/onap --name dev --namespace onap
Error: Chart incompatible with Tiller v2.9.1
ubuntu@kanatamaster:~/oominstall/kubernetes$
A common temporary fix is to downgrade helm/tiller. Here is the procedure:
Step 1) Downgrade the helm client (helm)
Download the desired version (tar.gz file) from the kubernetes website. Example: https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz . You can change the version number in the file name to fetch a different release.
(curl https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz --output helm-v2.8.1-linux-amd64.tar.gz --silent) (try 2.8.2 if you get the same error with 2.8.1). Unzip and untar the file; it will create a "linux-amd64" directory.
Copy the helm binary from the linux-amd64 directory to /usr/local/bin/ (kill the helm process if it is blocking the copy).
Run "helm version" to verify.
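Step 1 above can be sketched as a single script. This is a minimal sketch, assuming a Linux amd64 master node; the HELM_VERSION value is an example and should be adjusted to whatever version your ONAP charts require:

```shell
#!/bin/sh
# Pick the helm version to downgrade to (try v2.8.2 if v2.8.1 still errors).
HELM_VERSION=v2.8.1
HELM_TARBALL="helm-${HELM_VERSION}-linux-amd64.tar.gz"

# Download and unpack; this creates a linux-amd64/ directory.
curl --silent --output "${HELM_TARBALL}" \
  "https://kubernetes-helm.storage.googleapis.com/${HELM_TARBALL}"
tar -zxf "${HELM_TARBALL}"

# Install the client binary and confirm the client version.
sudo cp linux-amd64/helm /usr/local/bin/helm
helm version --client
```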
Step 2) Downgrade the helm server (Tiller)
Use helm reset, then follow the steps below:
# Uninstalls Tiller from a cluster
helm reset --force
# Clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: ""
EOF
# Run the below command to get the Tiller version matching the helm client
kubectl create -f tiller-serviceaccount.yaml
# Then run init helm
helm init --service-account tiller --upgrade
# Verify
helm version
#Note: Don't forget to start the local helm repository server
nohup sudo helm serve >/dev/null 2>&1 &
The **--namespace onap** flag is currently required until all ONAP helm charts are migrated to version 2.0. After this activity is complete, namespaces will be optional.
Use the following to monitor your deployment and determine when ONAP is ready for use:
ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-855fbf769d-fpd68 1/1 Running 1 43d
kube-system coredns-65dcdb4cf-rhqnd 1/1 Running 1 52d
kube-system etcd-k8s-s2-master 1/1 Running 1 52d
kube-system kube-apiserver-k8s-s2-master 1/1 Running 1 10d
kube-system kube-controller-manager-k8s-s2-master 1/1 Running 1 10d
kube-system kube-proxy-cbbff 1/1 Running 1 52d
kube-system kube-proxy-ds2kq 1/1 Running 1 52d
kube-system kube-proxy-jjhwl 1/1 Running 1 52d
kube-system kube-proxy-wm4sz 1/1 Running 1 52d
kube-system kube-proxy-x5b8b 1/1 Running 1 52d
kube-system kube-scheduler-k8s-s2-master 1/1 Running 1 52d
kube-system tiller-deploy-5b48764ff7-44fd5 1/1 Running 1 52d
kube-system weave-net-5jz5c 2/2 Running 11 52d
kube-system weave-net-cfpbm 2/2 Running 10 52d
kube-system weave-net-htjq6 2/2 Running 11 52d
kube-system weave-net-nc69d 2/2 Running 10 52d
kube-system weave-net-thqmc 2/2 Running 9 52d
onap dev-sdnc-0 2/2 Running 0 18h
onap dev-sdnc-db-0 2/2 Running 0 18h
onap dev-sdnc-dgbuilder-6fdfb498f-5xrfh 1/1 Running 0 18h
onap dev-sdnc-dmaap-listener-5998b5774c-tc4vd 1/1 Running 0 18h
onap dev-sdnc-nfs-provisioner-75dcd8c86c-44gpk 1/1 Running 0 18h
onap dev-sdnc-portal-5cd7598547-d5l48 0/1 CrashLoopBackOff 319 18h
onap dev-sdnc-ueb-listener-598c68f8d8-9z7pn 1/1 Running 0 18h
ubuntu@k8s-s1-master:/home/ubuntu#
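Instead of re-running kubectl get pods by hand, the monitoring step can be sketched as a small polling loop. This is a hypothetical helper, assuming kubectl is configured for the cluster; the 30-second interval is arbitrary, and a pod can briefly report Running before all of its containers are ready, so treat this as a rough readiness signal:

```shell
#!/bin/sh
# Poll until every pod in the onap namespace reports Running or Completed.
while kubectl get pods -n onap --no-headers 2>/dev/null \
      | grep -qv -e Running -e Completed; do
  echo "Waiting for ONAP pods to become ready..."
  sleep 30
done
echo "All ONAP pods are Running."
```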
Cleanup deployed ONAP instance
To delete a deployed instance, use the following command:
Example:
ubuntu@k8s-s1-master:/home/ubuntu/oom/kubernetes# helm del --purge <Release-name>
Execute:
# we choose "dev" as our release name here
ubuntu@k8s-s1-master:/home/ubuntu# helm del --purge dev
release "dev" deleted
ubuntu@k8s-s1-master:/home/ubuntu#
Also, delete the existing persistent volumes and persistent volume claims in the "onap" namespace:
#query existing pv in onap namespace
ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pv -n onap
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-vol1 11Gi RWO,RWX Retain Bound dev-sdnc-db-data 38s
nfs-vol2 11Gi RWO,RWX Retain Bound dev-sdnc-db-data 34s
#Example commands are found here:
# delete all pvc under onap
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pvc -n onap --all
#query existing pvc in onap namespace
ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pvc -n onap
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
dev-sdnc-db-data-dev-sdnc-db-0 Bound nfs-vol1 11Gi RWO,RWX dev-sdnc-db-data 21h
dev-sdnc-db-data-dev-sdnc-db-1 Bound nfs-vol2 11Gi RWO,RWX dev-sdnc-db-data 21h
#delete existing pv
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pv nfs-vol1 -n onap
pv "nfs-vol1" deleted
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pv nfs-vol2 -n onap
pv "nfs-vol2" deleted
#delete existing pvc
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pvc dev-sdnc-db-data-dev-sdnc-db-0 -n onap
pvc "dev-sdnc-db-data-dev-sdnc-db-0" deleted
ubuntu@k8s-s1-master:/home/ubuntu# kubectl delete pvc dev-sdnc-db-data-dev-sdnc-db-1 -n onap
pvc "dev-sdnc-db-data-dev-sdnc-db-1" deleted
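The per-volume deletions above can also be done in bulk. A sketch, assuming kubectl is configured for the cluster; the volume names nfs-vol1/nfs-vol2 are from this example deployment and should be replaced with yours. Note that PVs are cluster-scoped, so the -n flag is effectively ignored when deleting them:

```shell
#!/bin/sh
# Delete every PVC in the onap namespace in one command.
kubectl delete pvc -n onap --all

# PVs are cluster-scoped; delete the ones that backed the SDN-C DB by name.
for pv in nfs-vol1 nfs-vol2; do
  kubectl delete pv "${pv}"
done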
Delete everything inside the global.persistence.mountPath chosen during SDN-C deployment:
#delete everything inside /onapDev, our chosen global.persistence.mountPath
ubuntu@k8s-s1-master:/home/ubuntu# cd /onapDev
ubuntu@k8s-s1-master:/onapDev# sudo rm -rf *
ubuntu@k8s-s1-master:/onapDev#