Rancher Installation

The following instructions describe how to create an OpenStack VM running the Rancher server.

Launch new VM instance to host the Rancher Server

Select Ubuntu 16.04 as the base image

Select "No" on "Create New Volume"

Select Flavor

Known issues exist if the flavor is too small for Rancher. Select a flavor with at least 4 vCPUs and 8 GB of RAM.

Networking 

Security Groups

Key Pair

Use an existing key pair (e.g. onap_key), import one, or create a new one to assign.

Apply customization script for the Rancher VM

Attachment: openstack-rancher.txt

This customization script will:

  • set up root access to the VM (comment this out if you wish to disable root login and restrict access to SSH only)
  • install docker *
  • install rancher *
  • install kubectl *
  • install helm *
  • install nfs server

* installs the version supported by the ONAP release
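
The attached script is the authoritative version. As a rough sketch only, the steps above amount to something like the following (package sources and version numbers here are assumptions, not taken from the attachment):

#!/bin/bash
# Illustrative sketch only - openstack-rancher.txt is the authoritative script.
# Install a Docker version supported by Rancher (version is an assumption)
curl https://releases.rancher.com/install-docker/17.03.sh | sh
# Start the Rancher server container (version is an assumption)
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:v1.6.14
# Install kubectl (version is an assumption)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.10/bin/linux/amd64/kubectl
chmod +x kubectl && mv kubectl /usr/local/bin/kubectl
# Install helm (version is an assumption)
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
tar -zxvf helm-v2.8.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
# Install the NFS server
apt-get update && apt-get install -y nfs-kernel-server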

Launch Instance

Assign Floating IP for external access

Kubernetes Installation

Launch new VM instance(s) to create a Kubernetes single host or cluster

To create a cluster:

  1. do not append a '-1' suffix to the instance name (e.g. sb4-k8s)
  2. increase the count to the number of Kubernetes worker nodes you want (e.g. 3), as shown in the CLI sketch below
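
For reference, the equivalent launch from the OpenStack CLI looks roughly like this (the image, flavor, and network names are placeholders for your environment):

openstack server create --image "ubuntu-16.04" --flavor m1.xlarge \
  --key-name onap_key --nic net-id=<network-id> \
  --user-data ./openstack-k8s-node.txt \
  --min 3 --max 3 sb4-k8s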

Select Ubuntu 16.04 as the base image

Select "No" on "Create New Volume"

Select Flavor

The size of a Kubernetes host depends on the size of the ONAP deployment that will be installed.

As of the Beijing release a minimum of 3 x 32GB hosts will be needed to run a full ONAP deployment (all components).

If a small subset of ONAP components are being deployed for testing purposes, then a single 16GB or 32GB host should suffice.

Networking 

Security Group 

Key Pair

Use an existing key pair (e.g. onap_key), import one, or create a new one to assign.

Apply customization script for Kubernetes VM(s)

Attachment: openstack-k8s-node.txt

This customization script will:

  • set up root access to the VM (comment this out if you wish to disable root login and restrict access to SSH only)
  • install docker *
  • install kubectl *
  • install helm *
  • install nfs-common (see the NFS share configuration section below)

* installs the version supported by the ONAP release
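
Again, the attachment is authoritative. The sketch differs from the Rancher VM script above mainly in that no Rancher server container is started and the NFS client is installed instead of the NFS server, roughly:

# Illustrative sketch only - openstack-k8s-node.txt is the authoritative script.
curl https://releases.rancher.com/install-docker/17.03.sh | sh
# kubectl and helm are installed as on the Rancher VM (versions are assumptions)
apt-get update && apt-get install -y nfs-common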

Launch Instance

Assign Floating IP for external access

Setting up an NFS share for Multinode Kubernetes Clusters

The figure below illustrates a possible topology of a multinode Kubernetes cluster.

[Figure: possible topology of a multinode Kubernetes cluster]

One node, the Master Node, runs Rancher and Helm clients and connects to all the Kubernetes nodes in the cluster. Kubernetes nodes, in turn, run Rancher, Kubernetes and Tiller (Helm) agents, which receive, execute, and respond to commands issued by the Master Node (e.g. kubectl or helm operations). Note that the Master Node can be either a remote machine that the user can log in to or a local machine (e.g. laptop, desktop) that has access to the Kubernetes cluster.

Deploying applications to a Kubernetes cluster requires Kubernetes nodes to share a common, distributed filesystem. One node in the cluster plays the role of NFS Master (not to be confused with the Master Node that runs the Rancher and Helm clients, which is located outside the cluster), while all the other cluster nodes play the role of NFS Slaves. In the figure above, the left-most cluster node plays the role of NFS Master (indicated by the crown symbol). To properly set up an NFS share on the Master and Slave nodes, the user can run the scripts below.

Attachments: master_nfs_node.sh, slave_nfs_node.sh

The master_nfs_node.sh script runs on the NFS Master node and takes the list of NFS Slave node IPs as input, e.g.:

sudo ./master_nfs_node.sh node1_ip node2_ip ... nodeN_ip

The slave_nfs_node.sh script runs on each NFS Slave node and takes the IP of the NFS Master node as input, e.g.:

sudo ./slave_nfs_node.sh master_node_ip
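
The attached scripts are authoritative; conceptually they do something along the following lines, assuming the ONAP convention of sharing /dockerdata-nfs (the paths and mount options here are illustrative):

# On the NFS Master (master_nfs_node.sh, roughly):
apt-get install -y nfs-kernel-server
mkdir -p /dockerdata-nfs
# one /etc/exports entry per slave IP passed on the command line
echo "/dockerdata-nfs node1_ip(rw,sync,no_root_squash)" >> /etc/exports
exportfs -a

# On each NFS Slave (slave_nfs_node.sh, roughly):
apt-get install -y nfs-common
mkdir -p /dockerdata-nfs
echo "master_node_ip:/dockerdata-nfs /dockerdata-nfs nfs auto 0 0" >> /etc/fstab
mount /dockerdata-nfs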

Configuration (Rancher and Kubernetes)

Access Rancher server via web browser

(e.g.  http://10.12.6.16:8080/env/1a5/apps/stacks)

Add Kubernetes Environment to Rancher

1. Select “Manage Environments”

2. Select “Add Environment”

3. Add a unique name for your new Rancher environment

4. Select the Kubernetes template

5. Click "Create"

6. Select the newly named environment (e.g. SB4) from the dropdown list (top left).

Rancher is now waiting for a Kubernetes Host to be added.

Add Kubernetes Host

1. If this is the first (or only) host being added, click on the "Add a host" link and click on "Save" (accept the defaults). Otherwise, select INFRASTRUCTURE → Hosts and click on "Add Host".

2. Enter the management IP for the k8s VM (e.g. 10.0.0.4) that was just created.

3. Click on “Copy to Clipboard” button

4. Click on “Close” button

Configure Kubernetes Host

1. Log in to the new Kubernetes Host

2. Paste Clipboard content and hit enter to install Rancher Agent

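The pasted command is a docker run invocation of the Rancher agent, roughly of the following form (the agent version and registration token are generated by your Rancher server; the values here are placeholders):

sudo docker run --rm --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.9 http://<rancher_ip>:8080/v1/scripts/<registration_token>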

Return to the Rancher environment (e.g. SB4) and wait for the services to come up (~10-15 minutes).

Configure kubectl and helm

Note that in this example we are configuring the kubectl and helm clients that have been installed (as a convenience) onto the Rancher and Kubernetes hosts.

Typically you would install them both on your PC and connect to the cluster remotely; the procedure would remain the same.

1. Click on CLI and then click on “Generate Config”

2. Click on “Copy to Clipboard”

3. Create a .kube directory in your home directory (if one does not exist)

4. Paste the contents of the clipboard into a file called "config" and save the file

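On the host where kubectl will run, steps 3 and 4 amount to:

mkdir -p ~/.kube
vi ~/.kube/config    # paste the generated config here and save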

5. Validate that kubectl is able to connect to the Kubernetes cluster and can show the running pods.

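For example:

kubectl cluster-info
kubectl get pods --all-namespaces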

6. Validate that helm is running at the right version.

If the client and server (Tiller) versions do not match, helm will report a version mismatch error.

7. Upgrade the server-side component of helm (Tiller) via 'helm init --upgrade'

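For example:

helm version          # client and server (Tiller) versions should match
helm init --upgrade   # upgrades Tiller to the version of the local helm client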

ONAP Deployment via OOM

Now that Kubernetes and Helm are installed and configured, you can prepare to deploy ONAP.

Until an LF-hosted public ONAP repository is available (coming soon!), please clone the OOM repo (https://gerrit.onap.org/r/gitweb?p=oom.git;a=summary).
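
For example, a typical anonymous clone from the ONAP Gerrit:

git clone https://gerrit.onap.org/r/oom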

Follow the instructions in oom/kubernetes/README.md or look at the official documentation to get started:

http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_quickstart_guide.html?highlight=oom%20quick%20start

http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_user_guide.html