
This page captures all information and steps needed to deploy ONAP via OOM on VIO 4.0 (VMware Integrated OpenStack) Kubernetes.

Assumption: This page assumes that the user has VIO 4.0 with Kubernetes successfully deployed, with OpenStack as the cloud provider.

VMware Kubernetes Architecture

a. Prerequisites

     a.1 The user should have the BlueShift management IP address, user name, and password ready in order to create the Kubernetes cluster.

     a.2 The K8S master and K8S node instances should have a flavor attached that meets the requirements below (a CLI sketch for creating such a flavor follows the table).


vCPU: 48
RAM: 96 GB
Storage: 256 GB
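
If the backing VIO/OpenStack deployment does not already have a flavor matching these requirements, one can be created with the standard OpenStack CLI. This is a minimal sketch; the flavor name onap.xlarge is a placeholder, and note that the OpenStack CLI takes RAM in MB (96 GB = 98304 MB).

             openstack flavor create --vcpus 48 --ram 98304 --disk 256 onap.xlarge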


 Steps 1 to 3 are needed to create a Kubernetes cluster.

TODO 20171205 - Steps 4 onward are needed to obtain the Kubernetes host IP so that the user can log in to the Kubernetes host and deploy ONAP using OOM.

TODO 20171205 - document the steps that need to be done prior to the git clone of OOM.

    1. Log in at https://BLUESHIFT_MGMT_IP_ADDRESS/LOGIN using the user name and password from the prerequisites.




2. Create the Kubernetes cluster by clicking + NEW.





 3.1 Click NEXT.




 3.2 Select an infrastructure provider for creating the Kubernetes cluster.

        Before you deploy a Kubernetes cluster, you must have created the cloud provider. VMware Integrated OpenStack with Kubernetes uses the cloud provider to create the infrastructure required to deploy all your Kubernetes clusters. VMware currently supports two options for the infrastructure provider: VMware SDDC (vSphere + NSX + VSAN) or VIO (VMware Integrated OpenStack). When choosing the type of provider to create, consider the following:

  •   With an existing VMware Integrated OpenStack deployment, you can create an OpenStack provider.
  •   Without an existing VMware Integrated OpenStack deployment, you can create an SDDC provider.

    The provider name can be a custom name given by the user, whereas the provider type has to be openstack or sddc.


3.3 Select a node profile. The default node profile can be modified to have the desired flavor for the Kubernetes cluster.



3.4 Provide the input for the cluster as indicated below.


 Node Types: A Kubernetes cluster is composed of two types of nodes. Each node in VMware Integrated OpenStack with Kubernetes is a VM.

Master Nodes: A master node provides the Kubernetes API service, scheduler, replicator, and so on. It manages the worker nodes. A cluster with a single master node is valid but has no redundancy.

Worker Nodes: A worker node hosts your containers. A cluster with a single worker node is valid but has no redundancy.

Cluster Types: VMware Integrated OpenStack with Kubernetes supports two types of clusters.

       Exclusive Cluster: In an exclusive cluster, multi-tenancy is not supported. Any authorized Kubernetes user using the Kubernetes CLI or APIs has namespace management privileges. The exclusive cluster provides a familiar environment for developers who deploy Kubernetes themselves (see the kubectl example after this section).

       Shared Cluster: In a shared cluster, multi-tenancy is supported and enforced by the Kubernetes namespace. Only a VMware Integrated OpenStack with Kubernetes administrator using the VMware Integrated OpenStack with Kubernetes interface or CLI has namespace management privileges. The shared cluster is an environment where the administrator can manage resource isolation among users.
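
For example, in an exclusive cluster any authorized user can manage namespaces directly with kubectl (the namespace name below is only an illustration); in a shared cluster the same operation is reserved for the VMware Integrated OpenStack with Kubernetes administrator.

             kubectl create namespace onap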

     Example Data Fill:

      Cluster Name: testCluster

      Number of Master Nodes: 1

      Number of Worker Nodes: 1

      DNS Servers: 10.112.64.1

      Cluster Type: Exclusive Cluster


 3.5 Create the user and group.

 3.6 Click FINISH.


4. Verifying the VIO Kubernetes cluster. If everything in steps 3.1 to 3.6 has been done successfully, the summary information for the cluster will be filled in as shown in the example below.



   


 5. How to get the Kubernetes host IP address and log in to the Kubernetes host (the full command sequence is summarized after this list).

      5.1    Log in via a console window to BLUESHIFT_MGMT_IP_ADDRESS. The user name and password are the same as used in step 1.

      5.2    Once logged in to the BLUESHIFT_MGMT_IP_ADDRESS session, use the command "vkube login --insecure". Use the same user name and password as in step 1.

      5.3    Get the list of clusters using the command "vkube cluster list --insecure". Make a note of the cluster Id in the output of the command.

      5.4    Get the cluster node details using the command "vkube cluster show <clusterId> --insecure". Make a note of the worker IP address; it is referred to below as KUBERNETES_HOST_IP_ADDRESS.

      5.5    Log in to the app-api docker container using "docker exec -it app-api bash".

      5.6    Once inside the app-api container, go to /var/lib/vrc/terraform/<clusterId>.

      5.7    ssh to KUBERNETES_HOST_IP_ADDRESS using the command below:

                  ssh -i private.key -F ssh.bastion.conf ubuntu@KUBERNETES_HOST_IP_ADDRESS

      5.8    Once inside KUBERNETES_HOST_IP_ADDRESS, continue with step 6.
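
      Putting steps 5.2 through 5.7 together, a typical session on the BLUESHIFT_MGMT_IP_ADDRESS VM looks like the sketch below; <clusterId> stands for the cluster Id noted in step 5.3.

             vkube login --insecure
             vkube cluster list --insecure
             vkube cluster show <clusterId> --insecure     # note the worker IP address (KUBERNETES_HOST_IP_ADDRESS)
             docker exec -it app-api bash
             cd /var/lib/vrc/terraform/<clusterId>
             ssh -i private.key -F ssh.bastion.conf ubuntu@KUBERNETES_HOST_IP_ADDRESS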


 6. Installing kubectl to manage the Kubernetes cluster.


  TBC 20171207   6.1    Download kubectl using the command below on the Kubernetes host:

             curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl

       6.2    Make the kubectl binary executable:

                chmod +x ./kubectl

       6.3    Move the kubectl binary into your PATH:

                         sudo mv ./kubectl /usr/local/bin/kubectl
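
       As an optional sanity check (an addition, not part of the original procedure), verify that the binary is on the PATH and executable:

                         kubectl version --client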

       

 7. Verifying that the Kubernetes configuration is good.

            root@localhost:~# kubectl cluster-info

            Kubernetes master is running at https://10.110.208.207:443/
            dnsmasq is running at https://10.110.208.207:443//api/v1/namespaces/kube-system/services/dnsmasq/proxy
            kubedns is running at https://10.110.208.207:443//api/v1/namespaces/kube-system/services/kubedns/proxy
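
            In addition to cluster-info, the nodes and the kube-system pods can be listed as an optional extra check; both are standard kubectl commands:

            kubectl get nodes
            kubectl get pods --namespace kube-system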


8. Installing Helm 

       8.1    Download Helm using the command below:

               wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz  

       8.2    Untar the file:

               tar -zxvf helm-v2.3.0-linux-amd64.tar.gz

       8.3    Move the helm binary to /usr/local/bin:

                sudo mv linux-amd64/helm /usr/local/bin/helm
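
       With Helm 2.x the client alone is not sufficient for deploying charts such as OOM; the server-side component (Tiller) must also be initialized in the cluster. The following step reflects standard Helm 2 usage and is an addition to the original page:

                helm init
                helm version     # should report both the client and the server (Tiller) version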

  
