OOM ONAP offline installer
Installation Guide
(Community Edition)
1. Introduction for CE1 delivery
2. Environment
3. Preparation (before installation)
4. Installation
4.1 Deploy infrastructure
4.2 ONAP deployment
Appendix 1: Troubleshooting
Appendix 2: Release Manifest
Appendix 3: ONAP values.yaml configuration
Robot values.yaml configuration
Version Control
Version | Date | Modified by | Comment
0.1 | 13.10.2018 | Michal Ptacek | First draft
Contributors
Name | Email
Michal Ptáček | m.ptacek@partner.samsung.com
Samuli Silvius | s.silvius@partner.samsung.com
Timo Puha | t.puha@partner.samsung.com
Petr Ospaly | p.ospaly@partner.samsung.com
Pawel Mentel | p.mentel@samsung.com
Witold Kopel | w.kopel@samsung.com
1. Introduction for CE1 delivery
This installation guide covers instructions on how to deploy ONAP using the Samsung offline installer. The precondition is a successfully built SI (Self-Installer) package, which is addressed in the previous guide. All artifacts needed for this deployment were collected from an online OOM ONAP Beijing deployment (Beijing branch). The release was verified on RHEL 7.4 deployments (RHEL cloud image). If different RHEL 7.4 images are used, there might be problems related to package clashes. The image was downloaded from the official Red Hat site:
https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.4/x86_64/product-software
Red Hat Enterprise Linux 7.4 KVM Guest Image
Last modified: 2018-03-23
SHA-256 Checksum: b9fd65e22e8d3eb82eecf78b36471109fa42ee46fc12fd2ba2fa02e663fd21ef
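The downloaded image can be verified against the checksum above; a minimal sketch (the exact file name depends on how the image was saved):
# verify the downloaded image against the published SHA-256 checksum
sha256sum rhel-server-7.4-x86_64-kvm.qcow2
# the printed hash should match:
# b9fd65e22e8d3eb82eecf78b36471109fa42ee46fc12fd2ba2fa02e663fd21ef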
Later on it might be possible to use an image other than the RHEL 7.4 cloud image, but this decision must be made at PO level and aligned with the business.
Current limitations are:
- Tested on the RHEL 7.4 cloud image (on OpenStack VMs only)
- Verified by the vFWCL demo (in an OpenStack environment only, inside the same tenant where ONAP is deployed).
2. Environment
The install_server is in some contexts also referred to as the infrastructure node; it is also the node hosting the rancher server container.
HW footprint:
- install_server: (nexus, nginx, dns, rancher_server)
  - Red Hat Enterprise Linux 7.4 KVM Guest Image
  - 16G+ RAM
  - 200G+ disk space (minimum 160GB)
  - 10G+ swap
  - 8+ vCPU
- kubernetes_node(s): (rancher_agent, ONAP OOM node)
  - Red Hat Enterprise Linux 7.4 KVM Guest Image
  - 64G+ RAM
  - 120G+ disk space
  - 10G+ swap
  - 16+ vCPU
3. Preparation (before installation)
- (Step 1) Ensure passwordless root login
From install_server to kubernetes_nodes.
As we are using the cloud RHEL 7.4 image, root access is disabled by default. It is possible to log in to all VMs as cloud-user, using the known key inserted during spawning by cloud-init.
The installation scripts must be executed as the root user (a non-root user with sudoers rights is not sufficient; this will be addressed later as a hardening topic) and they require passwordless login to the other k8s nodes.
In the general case, to achieve a passwordless connection one can simply create a key using ssh-keygen and distribute the public key (for example /root/.ssh/id_rsa.pub) to the /root/.ssh/authorized_keys file on all k8s nodes; see the sketch below. On OpenStack-created instances, the cloud-init text prohibiting root login should be removed as well (described in the steps below).
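A minimal sketch of that generic approach (assuming no key exists yet on the install_server and that root login is already permitted on the nodes):
# on the install_server, as root: generate a key pair without a passphrase
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# copy the public key to every kubernetes node (repeat per node)
ssh-copy-id root@<node_ip1>
ssh-copy-id root@<node_ip2>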
If the VMs were spawned in OpenStack, the following procedure can be used to allow ssh as the root user (with a private key):
[root@rc3l-install-server ~]# ssh rc3-install-compute2
The authenticity of host 'r2-install-compute2 (10.6.6.13)' can't be established.
ECDSA key fingerprint is SHA256:645LdswQyZtoxHBv3+6hvC62liAdwEkbr8w6sN392YI.
ECDSA key fingerprint is MD5:32:a6:70:26:0a:ae:56:c1:e3:2a:b6:fa:b7:40:5a:d6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rc3-install-compute2,10.6.6.13' (ECDSA) to the list of known hosts.
Please login as the user "cloud-user" rather than the user "root".
One needs to log in to those servers as cloud-user (using the correct OpenStack key):
[root@rc3l-install-server ~]# ssh -i ~/correct_key cloud-user@rc3-install-compute2
Then switch to the root user:
[cloud-user@rc3-install-compute2 ~]$ sudo su -
And adapt it by removing the cloud-init prefix that prohibits root login (everything before "ssh-rsa" on the key line):
[root@rc3-install-compute2 ~]# vi /root/.ssh/authorized_keys
# The following ssh key was injected by Nova
no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"cloud-user\" rather than the user \"root\".';echo;sleep 10" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPwF2bYm2QuqZpjuAcZDJTcFdUkKv4Hbd/3qqbxf6g5ZgfQarCi+mYnKe9G9Px3CgFLPdgkBBnMSYaAzMjdIYOEdPKFTMQ9lIF0+i5KsrXvszWraGKwHjAflECfpTAWkPq2UJUvwkV/g7NS5lJN3fKa9LaqlXdtdQyeSBZAUJ6QeCE5vFUplk3X6QFbMXOHbZh2ziqu8mMtP+cWjHNBB47zHQ3RmNl81Rjv+QemD5zpdbK/h6AahDncOY3cfN88/HPWrENiSSxLC020sgZNYgERqfw+1YhHrclhf3jrSwCpZikjl7rqKroua2LBI/yeWEta3amTVvUnR2Y7gM8kHyh Generated-by-Nova
In some environments root login might be prohibited completely; it can be enabled by setting PermitRootLogin yes in /etc/ssh/sshd_config and reloading sshd (service sshd reload).
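A hedged example of that change, run on the affected node (the sed invocation is only illustrative; the file can equally be edited by hand):
# allow root login over ssh and reload the daemon
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
service sshd reload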
After having done this on all kubernetes nodes, check access as root to verify that passwordless root login works; it should be possible to reach all k8s nodes without any password prompt.
e.g.:
root@oom-beijing-rc3-install:~# ssh oom-beijing-RC3-node1
root@oom-beijing-rc3-install:~# ssh oom-beijing-RC3-node2
- (Step 2) Create an installation directory on the install_server (e.g. /root/installer) and move the self-contained archive (installation script) into it.
Be sure there is enough space (more than 160G).
- mkdir /root/installer
Note: this is the place where the archive will be extracted. The original file can be removed after deployment.
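For example (the archive name is taken from the installation step below; the exact file name may differ in your delivery):
mkdir -p /root/installer
mv selfinstall_onap_beijing_RC3.sh /root/installer/
df -h /root/installer        # confirm more than 160G of free space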
- (Step 3) Create a new file local_repo.conf in the installation directory, with the following content:
LOCAL_IP=<install_server_ip>
NODES_IPS='<node_ip1> <node_ip2> … <node_ipn>'
E.g.
LOCAL_IP=10.8.8.7
NODES_IPS='10.8.8.10 10.8.8.11'
This ensures that the infrastructure deployment, together with setting up kubernetes, will be done non-interactively.
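For convenience the file can also be created non-interactively, e.g. with a heredoc (using the example IPs above):
cd /root/installer
cat > local_repo.conf <<EOF
LOCAL_IP=10.8.8.7
NODES_IPS='10.8.8.10 10.8.8.11'
EOF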
We should now be ready to proceed with the installation part.
4. Installation
4.1 Deploy infrastructure
In this part the infrastructure will be deployed. More specifically, a local nexus, dns, rancher & docker will be deployed on the install server. The kubernetes nodes will get the rancher agent running and will form the kubernetes cluster.
- (Step 1) To execute the script, simply run (from the installation directory):
- cd /root/installer
- /root/installer/selfinstall_onap_beijing_RC3.sh
- (Step 2) Answer the questions asked by the script (if needed)
Note: questions will be asked only when the script cannot find the config file (local_repo.conf) in the current folder; otherwise the script will use the existing config file.
Wait until the script finishes execution.
- (Step 3) Verify that the k8s cluster is ready and operational
One can verify that the infrastructure deployment was successful in the following way:
- the following should display a healthy etcd-0 component
kubectl get cs
- the following should display 2 kubernetes nodes in the "Ready" state (an optional wait loop is sketched below).
kubectl get nodes
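If you prefer to wait for the cluster non-interactively, a small sketch (it assumes exactly 2 kubernetes nodes, as in the HW footprint above):
# poll until both kubernetes nodes report Ready, then show component status
while [ "$(kubectl get nodes --no-headers 2>/dev/null | grep -cw Ready)" -lt 2 ]; do
  echo "waiting for kubernetes nodes..."; sleep 10
done
kubectl get cs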
4.2 ONAP deployment
Before ONAP is deployed, ./oom/kubernetes/onap/values.yaml in OOM should be configured with the correct VIM (OpenStack) credentials. The set of deployed ONAP components can also be modified there. If ONAP is also going to be tested by reproducing the vFWCL demo, ./oom/kubernetes/robot/values.yaml should be configured before ONAP is deployed in OOM.
Configuration of the onap & robot values.yaml files is described in Appendix 3 and in the "Robot values.yaml configuration" paragraph.
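Before triggering the deployment it may be worth a quick sanity check that the OpenStack fields were actually filled in (paths assume the installation directory from step 2 of the preparation):
grep -n 'openStack' /root/installer/oom/kubernetes/onap/values.yaml
grep -n 'openStack' /root/installer/oom/kubernetes/robot/values.yaml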
- (Step 1) Trigger the deployment of ONAP
To execute the ONAP installation, run (from the installation directory):
- ./deploy_onap.sh
This script finishes quite quickly; it just launches the ONAP deployment.
- (Step 2) Check the progress of the deployment
Progress of the actual deployment can be followed by monitoring the number of "not running" pods; the deployment usually takes around ~1 hr.
The deployment is done when all components are up!
E.g. the following command can be used to track the progress of the deployment:
$ while true; do date; kubectl get pods -n onap -o=wide | grep -vE 'Runn|NAME' | wc -l; sleep 30; done
The only pod not running when using the signed-off Beijing images (2.0.0 branch) is:
dev-aai-champ-9889557bb-5fzvl 0/1 Running 0 2h
(Step 3) Verify its functionality
All ONAP health-checks should pass. Launch the robot health checks from inside the oom/kubernetes/robot folder:
root@oom-beijing-rc3-master:oom/kubernetes/robot# ./ete-k8s.sh onap health
Starting Xvfb on display :88 with res 1280x1024x24
Executing robot tests at log level TRACE
==============================================================================
OpenECOMP ETE
==============================================================================
OpenECOMP ETE.Robot
==============================================================================
OpenECOMP ETE.Robot.Testsuites
==============================================================================
OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are...
==============================================================================
Basic A&AI Health Check | PASS |
------------------------------------------------------------------------------
Basic AAF Health Check | PASS |
------------------------------------------------------------------------------
Basic AAF SMS Health Check | PASS |
------------------------------------------------------------------------------
Basic APPC Health Check | PASS |
------------------------------------------------------------------------------
Basic CLI Health Check | PASS |
------------------------------------------------------------------------------
Basic CLAMP Health Check | PASS |
------------------------------------------------------------------------------
Basic DCAE Health Check | PASS |
------------------------------------------------------------------------------
Basic DMAAP Message Router Health Check | PASS |
------------------------------------------------------------------------------
Basic External API NBI Health Check | PASS |
------------------------------------------------------------------------------
Basic Log Elasticsearch Health Check | PASS |
------------------------------------------------------------------------------
Basic Log Kibana Health Check | PASS |
------------------------------------------------------------------------------
Basic Log Logstash Health Check | PASS |
------------------------------------------------------------------------------
Basic Microservice Bus Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-ocata API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-titanium_cloud API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-vio API Health Check | PASS |
------------------------------------------------------------------------------
Basic OOF-Homing Health Check | PASS |
------------------------------------------------------------------------------
Basic OOF-SNIRO Health Check | PASS |
------------------------------------------------------------------------------
Basic Policy Health Check | PASS |
------------------------------------------------------------------------------
Basic Portal Health Check | PASS |
------------------------------------------------------------------------------
Basic SDC Health Check | PASS |
------------------------------------------------------------------------------
Basic SDNC Health Check | PASS |
------------------------------------------------------------------------------
Basic SO Health Check | PASS |
------------------------------------------------------------------------------
Basic UseCaseUI API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC catalog API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC emsdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC gvnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC huaweivnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC jujuvnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC multivimproxy API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC nokiavnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC nokiav2driver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC nslcm API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC resmgr API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC vnflcm API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC vnfmgr API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC vnfres API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC workflow API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC ztesdncdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC ztevnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VID Health Check | PASS |
------------------------------------------------------------------------------
Basic VNFSDK Health Check | PASS |
------------------------------------------------------------------------------
OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp compo... | PASS |
43 critical tests, 43 passed, 0 failed
43 tests total, 43 passed, 0 failed
==============================================================================
OpenECOMP ETE.Robot.Testsuites | PASS |
43 critical tests, 43 passed, 0 failed
43 tests total, 43 passed, 0 failed
==============================================================================
OpenECOMP ETE.Robot | PASS |
43 critical tests, 43 passed, 0 failed
43 tests total, 43 passed, 0 failed
==============================================================================
OpenECOMP ETE | PASS |
43 critical tests, 43 passed, 0 failed
43 tests total, 43 passed, 0 failed
==============================================================================
Output: /share/logs/ETE_0001_health/output.xml
Log: /share/logs/ETE_0001_health/log.html
Report: /share/logs/ETE_0001_health/report.html
Appendix 1: Troubleshooting
During our deployments, some issues occasionally pop up. For example, the sdc-be pod did not initialize properly (its readiness probe kept reporting problems) and therefore dependent pods were not coming up; we believe this is not an offline-deployment-specific problem.
The solution was to delete that pod; a new one was started automatically once the old one terminated. Deleting hanging pods seems to be a quite safe way to get unblocked.
e.g.
kubectl delete pod dev-sdc-be-6447776995-psn8f -n onap
For some environments with limited computation resources, ONAP's container liveness/readiness time configuration is too small (10 sec). This means that a container will be restarting all the time, because it is not able to start within the expected time interval. It is usually visible with the following containers: uui-server, clamp-dash-es, clamp-dash-kibana.
This can be fixed by increasing the liveness/readiness times in the container's values.yaml and applying this change to the container.
Container configuration file example:
[root@rc3-install uui-server]# pwd
Apply these changes using the following commands from /root/installer/oom/kubernetes:
[root@rc3-install kubernetes]# pwd
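The exact apply commands depend on the local helm setup; a hedged sketch of one way to push the change, assuming the release is named dev and the charts are served from the local helm repository as in the default OOM workflow (both names are assumptions):
cd /root/installer/oom/kubernetes
make uui                     # repackage the chart whose values.yaml was edited (target name assumed)
helm upgrade dev local/onap  # roll the change into the running 'dev' release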
Containers that were updated should be recreated automatically and started correctly with the new configuration values.
[root@rc3-install kubernetes]# kubectl get pods --all-namespaces | grep -E 'uui|clamp'
Otherwise, recreate them manually:
[root@rc3-install kubernetes]# kubectl delete pod <pod_name_1> <pod_name_n> -n onap
Appendix 2: Release Manifest
All used docker, npm and other artifact names, including versions, should be stored in the git repo under the following directory:
/root/installer/bash/tools/data_list
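For a quick check of what a given component was built from, the lists can be inspected directly (this assumes they are plain-text lists, one artifact per line):
ls /root/installer/bash/tools/data_list
grep -ri 'aai' /root/installer/bash/tools/data_list | head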
Appendix 3: ONAP values.yaml configuration
Before ONAP is deployed, ./oom/kubernetes/onap/values.yaml in OOM should be configured with the correct VIM (OpenStack) credentials. The set of deployed ONAP components can also be modified there.
- openStackKeyStoneUrl: OpenStack keystone URL (OS_AUTH_URL; note: don't include the API version).
- Example: http://<OpenStack_ip>:5000
- openStackServiceTenantName: OpenStack services tenant name.
- openStackDomain: OpenStack domain name (OS_USER_DOMAIN_NAME)
- openStackUserName: OpenStack username (OS_USERNAME)
- openStackEncryptedPassword: OpenStack password, encrypted with the following command:
- echo -n <OS_PASSWORD> | openssl aes-128-ecb -e -K aa3871669d893c7fb8abbcda31b88b4f -nosalt | xxd -c 256 -p
- openStackRegion: OpenStack region.
- Example (some values should be filled in):
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302

  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositoryCred:
    user: docker
    password: docker

  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co

  # image pull policy
  #pullPolicy: Always
  pullPolicy: IfNotPresent

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /dockerdata-nfs

  # flag to enable debugging - application support required
  debugEnabled: false

  # Repository for creation of nexus3.onap.org secret
  repository: nexus3.onap.org:10001

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: true
aai:
  enabled: true
appc:
  enabled: true
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://<OpenStack_ip>:5000
    openStackServiceTenantName: services
    openStackDomain: Default
    openStackUserName: onap
    openStackEncryptedPassword: f7920677e15e2678b0f33736189e8965
clamp:
  enabled: true
cli:
  enabled: true
consul:
  enabled: true
dcaegen2:
  enabled: true
dmaap:
  enabled: true
esr:
  enabled: true
log:
  enabled: true
sniro-emulator:
  enabled: true
oof:
  enabled: true
msb:
  enabled: true
multicloud:
  enabled: true
nbi:
  enabled: true
  config:
    # openstack configuration
    openStackUserName: "onap"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://<OpenStack_ip>:5000"
    openStackServiceTenantName: "services"
    openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"
policy:
  enabled: true
portal:
  enabled: true
robot:
  enabled: true
sdc:
  enabled: true
sdnc:
  enabled: true
  replicaCount: 1
  config:
    enableClustering: false
  mysql:
    disableNfsProvisioner: true
    replicaCount: 1
so:
  enabled: true
  replicaCount: 1
  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: true
  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "onap"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://<OpenStack_ip>:5000"
    openStackServiceTenantName: "services"
    openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"
  # configure embedded mariadb
  mariadb:
    config:
      mariadbRootPassword: password
uui:
  enabled: true
vfc:
  enabled: true
vid:
  enabled: true
vnfsdk:
  enabled: true
Robot values.yaml configuration
If ONAP is also going to be tested, e.g. by the vFWCL demo, ./oom/kubernetes/robot/values.yaml should be configured before ONAP is deployed in OOM.
- lightHttpdUsername/lightHttpdPassword: credentials to access the robot portal (to be able to watch the logs in a browser)
- openStackFlavourMedium: OpenStack flavour name, corresponding to m1.medium size.
- openStackKeyStoneUrl: OpenStack keystone URL (OS_AUTH_URL; note: don't include the API version, see example).
- Example: http://<OpenStack_ip>:5000
- openStackPublicNetId (tenant network): OpenStack network id from which instances will be able to access ONAP (not necessarily public, e.g. if ONAP is running in the same VIM as vFWCL; should have DHCP enabled).
- openStackPassword: OpenStack password in plain text (not encrypted)
- openStackRegion: OpenStack region.
- openStackTenantId: OpenStack tenant (in which VNFs will be created)
- openStackUserName: OpenStack username (OS_USERNAME)
- ubuntu14Image: OpenStack image name of Ubuntu 14.04 (trusty)
- ubuntu16Image: OpenStack image name of Ubuntu 16.04 (xenial)
- openStackPrivateNetId (ONAP network): OpenStack private network to which instances will be connected so that they can reach each other (should start with 10.0; should have DHCP enabled).
- openStackPrivateSubnetId: OpenStack subnet id for the private network.
- openStackPrivateNetCidr: CIDR notation for the OpenStack private network where VNFs will be spawned.
- vnfPubKey: public key used to access the VNFs (instances); see the key-generation sketch after this list.
- dcaeCollectorIp: at this step the parameter is unknown; leave it empty, it won't be used during ONAP deployment. (We will set this parameter during the Close Loop Demo "Preload" step.)
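For vnfPubKey, any ssh key pair can be used; a minimal sketch generating a dedicated one (the key path is an arbitrary choice):
ssh-keygen -t rsa -N "" -f ~/.ssh/onap_vnf_key
cat ~/.ssh/onap_vnf_key.pub    # paste this value into vnfPubKey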
- Example (some values should be filled in):
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#################################################################
# Global configuration defaults.
#################################################################
global: # global defaults
  nodePortPrefix: 302
  ubuntuInitRepository: registry.hub.docker.com
  persistence: {}

# application image
repository: nexus3.onap.org:10001
image: onap/testsuite:1.2.1
pullPolicy: Always
ubuntuInitImage: oomk8s/ubuntu-init:2.0.0

# flag to enable debugging - application support required
debugEnabled: false

#################################################################
# Application configuration defaults.
#################################################################
config:
  # Username of the lighthttpd server. Used for HTML auth for webpage access
  lightHttpdUsername: robot
  # Password of the lighthttpd server. Used for HTML auth for webpage access
  lightHttpdPassword: robot
  # gerrit branch where the latest heat code is checked in
  gerritBranch: master
  # gerrit project where the latest heat code is checked in
  gerritProject: http://gerrit.onap.org/r/demo.git

  # Demo configuration
  # Nexus demo artifact version. Maps to GLOBAL_INJECTED_ARTIFACTS_VERSION
  demoArtifactsVersion: "1.3.0"
  # Openstack medium sized flavour name. Maps to GLOBAL_INJECTED_VM_FLAVOR
  openStackFlavourMedium: "m1.medium"
  # Openstack keystone URL. Maps to GLOBAL_INJECTED_KEYSTONE
  openStackKeyStoneUrl: "http://<OpenStack_ip>:5000"
  # UUID of the Openstack network that can assign floating ips. Maps to GLOBAL_INJECTED_PUBLIC_NET_ID
  openStackPublicNetId: "57948215-0ca0-496f-bc7d-9fab66bc91aa"
  # password for Openstack tenant where VNFs will be spawned. Maps to GLOBAL_INJECTED_OPENSTACK_PASSWORD
  openStackPassword: "OpenStackOpenPassword"
  # Openstack region. Maps to GLOBAL_INJECTED_REGION
  openStackRegion: "RegionOne"
  # Openstack tenant UUID where VNFs will be spawned. Maps to GLOBAL_INJECTED_OPENSTACK_TENANT_ID
  openStackTenantId: "b1ce7742d956463999923ceaed71786e"
  # username for Openstack tenant where VNFs will be spawned. Maps to GLOBAL_INJECTED_OPENSTACK_USERNAME
  openStackUserName: "onap"
  # Openstack glance image name for Ubuntu 14. Maps to GLOBAL_INJECTED_UBUNTU_1404_IMAGE
  ubuntu14Image: "ubuntu-14.04-server-cloudimg-amd64"
  # Openstack glance image name for Ubuntu 16. Maps to GLOBAL_INJECTED_UBUNTU_1604_IMAGE
  ubuntu16Image: "ubuntu-16.04-server-cloudimg-amd64"
  # GLOBAL_INJECTED_SCRIPT_VERSION. Maps to GLOBAL_INJECTED_SCRIPT_VERSION
  scriptVersion: "1.2.1"
  # Openstack network to which VNFs will bind their primary (first) interface. Maps to GLOBAL_INJECTED_NETWORK
  openStackPrivateNetId: "b5f175c4-733c-4734-a878-290a35fb495d"

  # SDNC Preload configuration
  # Openstack subnet UUID for the network defined by openStackPrivateNetId. Maps to onap_private_subnet_id
  openStackPrivateSubnetId: "cfe28d43-cc80-4b9a-8aac-d0fe29327c52"
  # CIDR notation for the Openstack private network where VNFs will be spawned. Maps to onap_private_net_cidr
  openStackPrivateNetCidr: "10.0.50.0/24"
  # The first 2 octets of the private Openstack subnet where VNFs will be spawned.
  # Needed because sdnc preload templates hardcode things like this 10.0.${ecompnet}.X
  openStackOamNetworkCidrPrefix: "10.0"
  # Override with Pub Key for access to VNF
  vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPwF2bYm2QuqZpjuAcZDJTcFdUkKv4Hbd/3qqbxf6g5ZgfQarCi+mYnKe9G9Px3CgFLPdgkBBnMSYaAzMjdIYOEdPKFTMQ9lIF0+i5KsrXvszWraGKwHjAflECfpTAWkPq2UJUvwkV/g7NS5lJN3fKa9LaqlXdtdQyeSBZAUJ6QeCE5vFUplk3X6QFbMXOHbZh2ziqu8mMtP+cWjHNBB47zHQ3RmNl81Rjv+QemD5zpdbK/h6AahDncOY3cfN88/HPWrENiSSxLC020sgZNYgERqfw+1YhHrclhf3jrSwCpZikjl7rqKroua2LBI/yeWEta3amTVvUnR2Y7gM8kHyh Generated-by-Nova"
  # Override with DCAE VES Collector external IP
  dcaeCollectorIp: ""

# default number of instances
replicaCount: 1

nodeSelector: {}

affinity: {}

# probe configuration parameters
liveness:
  initialDelaySeconds: 10
  periodSeconds: 10
  # necessary to disable liveness probe when setting breakpoints
  # in debugger so K8s doesn't restart unresponsive container
  enabled: true

readiness:
  initialDelaySeconds: 10
  periodSeconds: 10

service:
  name: robot
  type: NodePort
  portName: httpd
  externalPort: 88
  internalPort: 88
  nodePort: "09"

ingress:
  enabled: false

resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
#
# Example:
# Configure resource requests and limits
# ref: http://kubernetes.io/docs/user-guide/compute-resources/
# Minimum memory for development is 2 CPU cores and 4GB memory
# Minimum memory for production is 4 CPU cores and 8GB memory
#resources:
#  limits:
#    cpu: 2
#    memory: 4Gi
#  requests:
#    cpu: 2
#    memory: 4Gi

# Persist data to a persistent volume
persistence:
  enabled: true

  # A manually managed Persistent Volume and Claim
  # Requires persistence.enabled: true
  # If defined, PVC must be created manually before volume will be bound
  # existingClaim:
  volumeReclaimPolicy: Retain

  # database data Persistent Volume Storage Class
  # If defined, storageClassName: <storageClass>
  # If set to "-", storageClassName: "", which disables dynamic provisioning
  # If undefined (the default) or set to null, no storageClassName spec is
  #   set, choosing the default provisioner. (gp2 on AWS, standard on
  #   GKE, AWS & OpenStack)
  # storageClass: "-"
  accessMode: ReadWriteMany
  size: 2Gi
  mountPath: /dockerdata-nfs
  mountSubPath: robot/logs