This page captures the information and steps to deploy ONAP using OOM on VIO 4.0 with Kubernetes. This page assumes that VIO 4.0 is deployed successfully with Kubernetes and that 'OpenStack' was selected as the Cloud Provider during configuration.


VMware VIO 4.0 Kubernetes Architecture


[Image: VMware VIO 4.0 Kubernetes architecture diagram]


Prerequisites

  1. Mgmt IP address, username, and password to create the Kubernetes cluster.
  2. K8S Master and K8S Node instances should have an OpenStack flavor attached per the requirement below (a hedged flavor-creation sketch follows the table).


vCPU: 48
RAM: 96 GB
Storage: 256 GB
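
If a flavor with this sizing does not already exist in the OpenStack project, one can be created from the OpenStack CLI. This is a minimal sketch, assuming CLI access; the flavor name onap-k8s is an illustrative assumption, not a name from this page (96 GB RAM = 98304 MB):

# illustrative: flavor sized per the table above (RAM is given in MB)
openstack flavor create --vcpus 48 --ram 98304 --disk 256 onap-k8s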



Steps to create a Kubernetes cluster

Follow steps 1 - 4 below to create a Kubernetes cluster.

Step 5 onwards covers getting the Kubernetes host IP so that the user can log in to the Kubernetes host and deploy ONAP using OOM.

TODO 20171205 - steps that need to be done prior to git clone of OOM.

Step-1 - https://MGMT_IP_ADDRESS/LOGIN




Step-2 Create the Cloud Provider

Cloud Provider creation is a prerequisite to Kubernetes cluster creation. VIO with Kubernetes uses the cloud provider to create the infrastructure required to deploy all your Kubernetes clusters. VMware currently supports two options for the infrastructure provider: VMware SDDC (vSphere + NSX + vSAN) or VIO OpenStack (i.e. VMware Integrated OpenStack). When choosing the type of provider to create, consider the following:

  • With an existing VIO deployment, you can create an OpenStack provider.
  • Without an existing VIO deployment, you can create an SDDC provider.



Step-3 Create the Kubernetes cluster

3.1 - Click '+NEW' to create a Kubernetes cluster.





3.2 - Click NEXT



3.3 - Select an Infrastructure Provider for creating the Kubernetes cluster

Before you deploy a Kubernetes cluster, you must create the cloud provider (see Step-2). Cloud providers can be SDDC or OpenStack; select the option as appropriate.

Here, with VIO with Kubernetes, we select 'OpenStack' as the cloud provider.




3.4 - Select a Node Profile. The default Node Profile can be modified to use the desired flavor for the Kubernetes cluster (see Prerequisites). If you have more than one node profile, uncheck the box "Use default node profile" to see the list.




3.5 - Provide the input for the cluster as indicated in the Example Data below.

Node Types - A Kubernetes cluster comprises two types of nodes. Each node in VMware Integrated OpenStack with Kubernetes is a VM.

  • Master Nodes - A master node provides the Kubernetes API service, scheduler, replicator, and so on. It manages the worker nodes. A cluster with a single master node is valid but has no redundancy.
  • Worker Nodes - A worker node hosts your containers. A cluster with a single worker node is valid but has no redundancy.

Cluster Types - VMware Integrated OpenStack with Kubernetes supports two types of clusters.

  • Exclusive Cluster - In an exclusive cluster, multi-tenancy is not supported. Any authorized Kubernetes user using the Kubernetes CLI or APIs has namespace management privileges. The exclusive cluster provides a familiar environment for developers who deploy Kubernetes themselves.
  • Shared Cluster - In a shared cluster, multi-tenancy is supported and enforced by the Kubernetes namespace. Only a VMware Integrated OpenStack with Kubernetes administrator using the VMware Integrated OpenStack with Kubernetes interface or CLI has namespace management privileges. The shared cluster is an environment where the administrator can manage resource isolation among users.


Example Data:

Cluster Name: testCluster
Number of Master Nodes: 1
Number of Worker Nodes: 1
DNS Servers: 10.112.64.1
Cluster Type: Exclusive Cluster




3.6 - Add Users and Groups for this cluster

        Once a Kubernetes cluster is created, you can authorize users or groups for the cluster. The users and groups belong to the SDDC or OpenStack provider where the cluster was created.

        In the 'Configure user and group for cluster' dialog box, check the boxes for users or groups that you want to authorize for access to the cluster, or uncheck the boxes for users or groups that you no longer want to authorize.


3.7 - Click on FINISH and wait a few minutes for the Kubernetes cluster to be created.





Step-4 Verifying the VIO Kubernetes Cluster

If steps 3.1 to 3.7 have been done successfully, the Summary information for the cluster will be filled in as in the example below.

[Image: example cluster Summary]


Step-5 How to get the Kubernetes host IP address and log in to the Kubernetes host

      5.1 - Log in via a console window to BLUESHIFT_MGMT_IP_ADDRESS. The username and password are the same as used in Step-1.

      5.2 - Once logged in to the BLUESHIFT_MGMT_IP_ADDRESS session, use the command "vkube login --insecure". Use the same username and password as in Step-1.

      5.3 - Get the list of clusters using the command "vkube cluster list --insecure". Make a note of the cluster Id in the output of the command.

      5.4 - Get the cluster node details using the command "vkube cluster show <cluster Id> --insecure". Make a note of the worker IP address; this worker IP address is KUBERNETES_HOST_IP.

      5.5 - Log in to the app-api docker container using "docker exec -it app-api bash".

      5.6 - Once inside the app-api container, go to /var/lib/vrc/terraform/<clusterId>.

      5.7 - ssh to KUBERNETES_HOST_IP_ADDRESS using the below command:

                  ssh -i private.key -F ssh.bastion.conf ubuntu@KUBERNETES_HOST_IP_ADDRESS

      5.8 - Once inside KUBERNETES_HOST_IP_ADDRESS, proceed to installing kubectl (Step-6). Steps 5.2 to 5.7 are consolidated in the sketch below.
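
For reference, steps 5.2 to 5.7 condensed into one session sketch; <cluster Id>, <clusterId>, and KUBERNETES_HOST_IP_ADDRESS must be substituted with the values observed in your own output:

vkube login --insecure                        # credentials from Step-1
vkube cluster list --insecure                 # note the cluster Id
vkube cluster show <cluster Id> --insecure    # note the worker IP address
docker exec -it app-api bash                  # enter the app-api container
cd /var/lib/vrc/terraform/<clusterId>
ssh -i private.key -F ssh.bastion.conf ubuntu@KUBERNETES_HOST_IP_ADDRESS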


Step-6 Installing kubectl to manage the Kubernetes cluster (TBC 20171207)

6.1 - Download kubectl using the below curl command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl

6.2 - Make the kubectl binary executable:

chmod +x ./kubectl

6.3 - Move kubectl into your PATH:

sudo mv ./kubectl /usr/local/bin/kubectl
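
As a quick sanity check that the binary itself works before any cluster config exists, kubectl's client-only version flag can be used:

kubectl version --client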


Step-7 Verifying that the kubectl config is good

7.1 - On the Kubernetes cluster:

            root@localhost:~# kubectl cluster-info

            Kubernetes master is running at https://10.110.208.207:443/
            dnsmasq is running at https://10.110.208.207:443//api/v1/namespaces/kube-system/services/dnsmasq/proxy
            kubedns is running at https://10.110.208.207:443//api/v1/namespaces/kube-system/services/kubedns/proxy



7.2 - On a client from where the Kubernetes cluster can be managed remotely:

root@localhost:~# kubectl version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.7-rancher1", GitCommit:"a1ea37c6f6d21f315a07631b17b9537881e1986a", GitTreeState:"clean", BuildDate:"2017-10-02T21:33:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}


Step-8 Verifying that kube config is good 

8.1 - On the Kubernetes cluster:

root@localhost:~# cat ~/.kube/config

apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    insecure-skip-tls-verify: true
    server: "https://SERVER_IP_ADDRESS:8080/r/projects/CLUSTER_NAME/kubernetes:SERVER_PORT_NUMBER"
  name: "(CLUSTER_NAME)"
contexts:
- context:
    cluster: "(CLUSTER_NAME)"
    user: "(CLUSTER_NAME)"
  name: "(CLUSTER_NAME)"
current-context: "(CLUSTER_NAME)"
users:
- name: "(CLUSTER_NAME/USER_NAME)"
  user:
    token: "<SECURITY_TOKEN>"

8.2 - On a client from where the Kubernetes cluster can be managed remotely:

root@localhost:~# cat ~/.kube/config

current-context: default-context
apiVersion: v1
clusters:
- cluster:
    api-version: v1
    server: https://SERVER_IP_ADDRESS:SERVER_PORT_NUMBER/
    insecure-skip-tls-verify: true
  name: CLUSTER_NAME
contexts:
- context:
    cluster: CLUSTER_NAME
    namespace: default
    user: user1
  name: default-context
users:
- name: user1
  user:
    username: "<USERNAME>"
    password: "<PASSWORD>"
kind: Config
preferences:
  colors: true
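
As an alternative to editing ~/.kube/config by hand, an equivalent client config can be generated with kubectl's built-in config subcommands. A minimal sketch using the placeholder values from 8.2 (these exact invocations are an illustration, not commands taken from this page):

kubectl config set-cluster CLUSTER_NAME --server=https://SERVER_IP_ADDRESS:SERVER_PORT_NUMBER/ --insecure-skip-tls-verify=true
kubectl config set-credentials user1 --username=<USERNAME> --password=<PASSWORD>
kubectl config set-context default-context --cluster=CLUSTER_NAME --namespace=default --user=user1
kubectl config use-context default-context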

Step-9 Installing Helm

9.1 - Download helm using the below command:

wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz

9.2 - Untar the file:

tar -zxvf helm-v2.3.0-linux-amd64.tar.gz

9.3 - Move helm to /usr/local/bin:

sudo mv linux-amd64/helm /usr/local/bin/helm
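
Note that Helm 2.x also has a server-side component, Tiller, which must already be running in the cluster for the Server line in Step-10 to appear. This page assumes Tiller is present; if it is not, the stock Helm 2.x way to install it into the current kubectl context is:

helm init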

  

Step-10 Verifying Helm  

10.1 - Type the below command 

helm help

10.2 - Check the helm version with "helm version"; the output should look like:

Client: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
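
With kubectl and helm verified, the cluster is ready for the OOM-based ONAP deployment this page targets (see the TODO above about steps needed before cloning OOM). As a hedged pointer only, the ONAP OOM repository is usually fetched with git; the amsterdam branch name is an assumption for this 2017-era setup:

git clone -b amsterdam http://gerrit.onap.org/r/oom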