Beijing_ONAP_Offline_installer

OOM ONAP offline installer
Installation Guide
(Community Edition)





1. Introduction for CE1 delivery
2. Environment
3. Preparation (before installation)
4. Installation
4.1 Deploy infrastructure
4.2 ONAP deployment


Version Control

Version | Date       | Modified by   | Comment
0.1     | 13.10.2018 | Michal Ptacek | First draft

Contributors

Name           | Mail
Michal Ptáček  | m.ptacek@partner.samsung.com
Samuli Silvius | s.silvius@partner.samsung.com
Timo Puha      | t.puha@partner.samsung.com
Petr Ospalý    | p.ospaly@partner.samsung.com
Pawel Mentel   | p.mentel@samsung.com
Witold Kopeld  | w.kopel@samsung.com

 

1. Introduction for CE1 delivery


This installation guide covers instructions on how to deploy ONAP using the Samsung offline installer. The precondition is a successfully built SI (Self-Installer) package, which is addressed in the previous guide. All artifacts needed for this deployment were collected from an online OOM ONAP deployment built from the Beijing branch. The release was verified on RHEL 7.4 deployments (RHEL cloud image). If different RHEL 7.4 images are used, there might be problems related to package clashes. The image was downloaded from the official Red Hat site.


https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.4/x86_64/product-software
Red Hat Enterprise Linux 7.4 KVM Guest Image
Last modified: 2018-03-23
SHA-256 Checksum: b9fd65e22e8d3eb82eecf78b36471109fa42ee46fc12fd2ba2fa02e663fd21ef
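To make sure the downloaded image is intact, its checksum can be compared against the value above (the file name below is illustrative; use the name of the image you actually downloaded):

[root@rc3l-install-server ~]# sha256sum rhel-server-7.4-x86_64-kvm.qcow2
b9fd65e22e8d3eb82eecf78b36471109fa42ee46fc12fd2ba2fa02e663fd21ef  rhel-server-7.4-x86_64-kvm.qcow2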


Later on it might be possible to use an image other than the RHEL 7.4 cloud image; currently, support for additional platforms is not planned.



Current limitations are:

  • Tested on rhel74 cloud image (on openstack VMs only)

  • Verified by the vFWCL demo (in an OpenStack environment only, inside the same tenant where ONAP is deployed).

2. Environment

The install_server is in some contexts also referred to as the infrastructure node; it is also the node hosting the Rancher server container.



HW footprint:

  • install_server: (nexus, nginx, dns, rancher_server)

      • Red Hat Enterprise Linux 7.4 KVM Guest Image

      • 16G+ RAM

      • 200G+ disk space (minimum 160GB)

      • 10G+ swap

      • 8+ vCPU

  • kubernetes_node(s): (rancher_agent, ONAP OOM node)

      • Red Hat Enterprise Linux 7.4 KVM Guest Image

      • 64G+ RAM

      • 120G+ disk space

      • 10G+ swap

      • 16+ vCPU

 

3. Preparation (before installation)

 

  • (Step 1) Ensure passwordless root login from the install_server to the kubernetes_nodes


As we are using the cloud RHEL 7.4 image, root access is disabled by default. It is possible to log in to all VMs only as cloud-user, using the known key inserted during spawning by cloud-init.

The installation scripts must be executed as the root user (a non-root user with sudo rights is not sufficient; this will be addressed later as a hardening topic), and they require passwordless login to the other k8s nodes.

In general, to achieve a passwordless connection, one can create a key pair using ssh-keygen and distribute the public key (for example /root/.ssh/id_rsa.pub) to the /root/.ssh/authorized_keys file on all k8s nodes, as shown below.
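A minimal sketch of that key distribution (it assumes the default id_rsa key location, the OpenStack key used below, and that cloud-user may run sudo without a password, which is the usual cloud image setup; repeat the second command for every kubernetes node):

[root@rc3l-install-server ~]# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
[root@rc3l-install-server ~]# cat /root/.ssh/id_rsa.pub | ssh -i ~/correct_key cloud-user@10.8.8.10 'sudo tee -a /root/.ssh/authorized_keys'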

On instances created in OpenStack, the cloud-related text prohibiting root login should be removed as well (described in the steps below).

 

If the VMs were spawned in OpenStack, the following procedure can be used to allow ssh as the root user (with a private key):

 

[root@rc3l-install-server ~]# ssh rc3-install-compute2
The authenticity of host 'r2-install-compute2 (10.6.6.13)' can't be established.
ECDSA key fingerprint is SHA256:645LdswQyZtoxHBv3+6hvC62liAdwEkbr8w6sN392YI
ECDSA key fingerprint is MD5:32:a6:70:26:0a:ae:56:c1:e3:2a:b6:fa:b7:40:5a:d6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rc3-install-compute2,10.6.6.13' (ECDSA) to the list of known hosts.
Please login as the user "cloud-user" rather than the user "root".

One needs to log in to those servers as cloud-user (using the correct OpenStack key):

[root@rc3l-install-server ~]# ssh -i ~/correct_key cloud-user@rc3-install-compute2

Switch to root user:

 

[cloud-user@rc3-install-compute2 ~]$ sudo su -

Then adapt /root/.ssh/authorized_keys on each node by removing the cloud-related text prohibiting root login, i.e. the prefix in front of the actual key entry (everything before "ssh-rsa"). One possible way to do this is shown below.
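A hedged one-liner for that cleanup (it assumes the restrictive options and the key sit on a single line, as cloud-init writes them):

[root@rc3-install-compute2 ~]# sed -i 's/^.*ssh-rsa/ssh-rsa/' /root/.ssh/authorized_keys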

 

In some environments root login might be prohibited completely in sshd; it can be enabled by setting PermitRootLogin yes in /etc/ssh/sshd_config and reloading the sshd service. A minimal sketch of that change follows.
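For example (assuming the stock RHEL 7.4 sshd_config):

[root@rc3-install-compute2 ~]# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
[root@rc3-install-compute2 ~]# systemctl reload sshd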

 

After having done this on all kubernetes nodes, one should verify from the install_server that passwordless root login works; it should be possible to access all k8s nodes without any password prompt, e.g.:
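For example (host names are illustrative and follow the environment used above):

[root@rc3l-install-server ~]# ssh root@rc3-install-compute2 hostname
rc3-install-compute2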

 

 

  • (Step 2) Create an installation directory on the install_server (e.g. /root/installer) and move the self-contained archive (installation script) into it.

Be sure there is enough space (more than 160G)


Note: this is the place where the archive will be extracted. The original file can be removed after deployment. A minimal example of this preparation is shown below.
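For illustration only (the archive file name depends on your SI package build and is shown here as a placeholder):

[root@rc3l-install-server ~]# mkdir -p /root/installer
[root@rc3l-install-server ~]# mv /tmp/<self-installer-archive>.sh /root/installer/
[root@rc3l-install-server ~]# df -h /root/installer    # verify that more than 160G is free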

  • (Step 3) Create a new file local_repo.conf in the installation directory with the following information: LOCAL_IP (the IP address of the install_server) and NODES_IPS (space-separated IP addresses of the kubernetes nodes).

 

 

E.g.

LOCAL_IP=10.8.8.7

NODES_IPS='10.8.8.10 10.8.8.11'

This will ensure that the infrastructure deployment, together with the kubernetes setup, will be done non-interactively.

We should now be ready to proceed with the installation part.

4. Installation

4.1 Deploy infrastructure

In this part the infrastructure will be deployed. More specifically, a local nexus, dns, rancher & docker will be deployed on the install server. The kubernetes nodes will get the rancher agent running and will form a kubernetes cluster.

  • (Step 1) To execute the script, simply run it from the installation directory:


 

  • (Step 2) Answer the questions asked by the script (if needed)


Note: questions will be asked only when the script cannot find the config file (local_repo.conf) in the current folder; otherwise the script will use the existing config file.

Then wait until the script finishes execution.

  • (Step 3) Verify that the k8s cluster is ready and operational


One can verify that the infrastructure deployment was successful in the following way (see the example after this list):

  1. the following should display a healthy etcd-0 component

  2. the following should display 2 kubernetes nodes in the "Ready" state.
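A minimal check, assuming kubectl is installed and configured on the install_server (these commands are not part of the original guide; they use the standard Kubernetes CLI):

[root@rc3l-install-server ~]# kubectl get cs      # etcd-0 should report Healthy
[root@rc3l-install-server ~]# kubectl get nodes   # both kubernetes nodes should be in Ready state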

4.2 ONAP deployment

Before ONAP is deployed, ./oom/kubernetes/onap/values.yaml in OOM should be configured to contain the correct VIM (OpenStack) credentials. The set of deployed ONAP components can also be modified there. If ONAP is also going to be tested by reproducing the vFWCL demo, ./oom/kubernetes/robot/values.yaml should be configured before ONAP is deployed via OOM. The configuration of the onap & robot values.yaml files is described in the "Appendix 4" paragraph; an illustrative fragment follows.
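For orientation only, a hedged fragment of the per-component switches typically found in ./oom/kubernetes/onap/values.yaml (exact keys and defaults may differ in your OOM checkout; Appendix 4 remains the authoritative reference):

# ./oom/kubernetes/onap/values.yaml (illustrative fragment)
robot:
  enabled: true
so:
  enabled: true
clamp:
  enabled: false    # components not needed for the given use case can be disabled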

  • (Step 1) Trigger the deployment of ONAP


To execute the ONAP installation, run (from the installation directory):

 

 

This script will finish quite quickly, as it only launches the ONAP deployment.

  • (Step 2) Check the progress of the deployment

 



E.g. the following command can be used to track the progress of the deployment:
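One possible way to watch the rollout (not from the original text; it assumes a working kubectl configuration on the install_server):

[root@rc3l-install-server ~]# watch -n 30 'kubectl get pods --all-namespaces | grep -v Running'

The command lists pods that are not yet in the Running state; this list should shrink to (nearly) empty as the deployment progresses.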

 

  • (Step 3) Verify its functionality

All ONAP health checks should pass. Launch the robot health checks from inside the oom/kubernetes/robot folder, for example as shown below.
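A hedged example, assuming the ete-k8s.sh helper script shipped with OOM and the default "onap" namespace:

[root@rc3l-install-server ~]# cd ./oom/kubernetes/robot
[root@rc3l-install-server robot]# ./ete-k8s.sh onap health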