
VMware VIO 4.0 Kubernetes Architecture






Prerequisites

  1. The BlueShift management IP address, username, and password are needed to create the Kubernetes cluster.

  2. The K8S Master and K8S Node instances should have an OpenStack flavor attached that meets the requirements below (see the CLI sketch after the table).


vCPU       48
RAM        96 GB
Storage    256 GB
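
For reference, a minimal OpenStack CLI sketch for creating a matching flavor; the flavor name onap-k8s-xlarge is an assumption, and the RAM value is 96 GB expressed in MB:

# Create a flavor with 48 vCPUs, 96 GB RAM (98304 MB) and a 256 GB disk (flavor name assumed)
openstack flavor create --vcpus 48 --ram 98304 --disk 256 onap-k8s-xlarge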

...

Steps to create a Kubernetes cluster

Follow steps 1 - 4 below to create a Kubernetes cluster. Step 5 onwards is needed to get the Kubernetes host IP address so that the user can log in to the Kubernetes host and deploy ONAP using OOM.

...

Step-1  Log in at https://MGMT_IP_ADDRESS/LOGIN





Step-2  Create the Cloud Provider before creating a Kubernetes cluster

Cloud Provider creation is a prerequisite to Kubernetes cluster creation. VMware Integrated OpenStack (VIO) with Kubernetes uses the cloud provider to create the infrastructure required to deploy all your Kubernetes clusters. VMware currently supports two options for the infrastructure provider: VMware SDDC (vSphere + NSX + vSAN) or VIO (VMware Integrated OpenStack). When choosing the type of provider to create, consider the following:

  •   With an existing VMware Integrated OpenStack (VIO) deployment, you can create an OpenStack provider.
  •   Without an existing VMware Integrated OpenStack (VIO) deployment, you can create an SDDC provider.



Step-3 Create the Kubernetes cluster

3.1 - Click '+NEW' to create a Kubernetes cluster.




3.2 - Click NEXT


3.3 - Select an Infrastructure Provider for creating the Kubernetes cluster

Before you deploy a Kubernetes cluster, you must have created the cloud provider (see Step-2). VMware Integrated OpenStack with Kubernetes uses the cloud provider to create the infrastructure required to deploy all your Kubernetes clusters; the provider can be VMware SDDC (vSphere + NSX + vSAN) or VIO (VMware Integrated OpenStack).

...

Cloud providers can be SDDC or OpenStack. Select the appropriate option.

Here, with VIO with Kubernetes, we select 'OpenStack' as the cloud provider.



3.4 - Select a Node Profile. If you have more than one node profile, uncheck the "Use default node profile" box to see the list.

...

Step-5  How to get the Kubernetes host IP address and log in to the Kubernetes host

5.1 - Log in via the console window to BLUESHIFT_MGMT_IP_ADDRESS. The username and password are the same as used in Step-1.

...

5.8 - Once inside KUBERNETES_HOST_IP_ADDRESS (to be reviewed)
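
The intermediate sub-steps (5.2 - 5.7) are not shown above. As a rough sketch only, assuming the cluster nodes appear as OpenStack instances and are reachable over SSH (the name filter and login user are assumptions, adjust to your environment):

# List instances to find the Kubernetes master/node IP addresses (name filter assumed)
openstack server list --name k8s
# SSH to the Kubernetes host (login user assumed; depends on the node image)
ssh ubuntu@KUBERNETES_HOST_IP_ADDRESS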


Step-6  Installing kubectl to manage the Kubernetes cluster

6.1 - Download kubectl using the command below

...

sudo mv ./kubectl /usr/local/bin/kubectl 
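
The download command itself is not shown above. As a sketch, a common approach for this generation of Kubernetes would be the upstream release URL below; the version v1.7.0 is an assumption based on the client version reported in Step-7:

# Fetch the kubectl binary (version assumed from the v1.7.0 client shown in Step-7)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl
# Make it executable, then move it into the PATH as shown above
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl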


Step-7  Verifying that the kubectl config is good

7.1 - On the Kubernetes cluster

        root@localhost:~# kubectl cluster-info

    Kubernetes master is running at ....
    Heapster is running at....
    KubeDNS is running at ....
    kubernetes-dashboard is running at ...
    monitoring-grafana is running at ....
    monitoring-influxdb is running at ...
    tiller-deploy is running at ....
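
Beyond cluster-info, a couple of standard kubectl checks (not part of the original steps, offered only as a sanity check) can confirm the cluster is healthy:

# Nodes should report a Ready status
kubectl get nodes
# System pods, including tiller-deploy, should be Running
kubectl get pods --all-namespaces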

7.2 - On a client from where the Kubernetes cluster can be managed remotely

root@localhost:~# kubectl  version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3",

Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.7-rancher1", GitCommit:"a1ea37c6f6d21f315a07631b17b9537881e1986a", GitTreeState:"clean", BuildDate:"2017-10-02T21:33:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}


Step-8 Verifying that kube config is good 

8.1 - On the Kubernetes cluster

root@localhost:~# cat  ~/.kube/config 

apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    insecure-skip-tls-verify: true
    server: "<SERVER_IP_ADDRESS:8080/r/projects/CLUSTER_NAME/kubernetes:SERVER_PORT_NUMBER"
  name: "(CLUSTER_NAME)"
contexts:
- context:
    cluster: "(CLUSTER_NAME)"
    user: "(CLUSTER_NAME)"
  name: "(CLUSTER_NAME)"
current-context: "(CLUSTER_NAME)"
users:
- name: "(CLUSTER_NAME/USER_NAME)"
  user:
    token: "<SECURITY_TOKEN>"
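
For reference, the same server URL and token from this config can also be passed to kubectl directly; a minimal sketch using simplified placeholders:

# Query the cluster with the server and bearer token taken from the config above
kubectl --server="https://SERVER_IP_ADDRESS:SERVER_PORT_NUMBER" --token="<SECURITY_TOKEN>" --insecure-skip-tls-verify=true get nodes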

8.2 - On a client from where the Kubernetes cluster can be managed remotely

root@localhost:~# cat  ~/.kube/config 

current-context: default-context
apiVersion: v1
clusters:
- cluster:
    api-version: v1
    server: https://SERVER_IP_ADDRESS:SERVER_PORT_NUMBER/
    insecure-skip-tls-verify: true
  name: CLUSTER_NAME
contexts:
- context:
    cluster: CLUSTER_NAME
    namespace: default
    user: user1
  name: default-context
users:
- name: user1
  user:
    username: "<USERNAME>"
    password: "<PASSWORD>"
kind: Config
preferences:
  colors: true
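
Once this file is in place on the client, the context can be selected and verified with standard kubectl commands:

# Switch to the context defined above and confirm connectivity to the cluster
kubectl config use-context default-context
kubectl get nodes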

Step-9 Installing Helm 

9.1 - Download Helm using the command below

wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz

9.2 - Untar the file

tar -zxvf helm-v2.3.0-linux-amd64.tar.gz

9.3 - Move helm to /usr/local/bin

sudo mv linux-amd64/helm /usr/local/bin/helm
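
Note that the cluster-info output in Step-7 already shows tiller-deploy running, so the server side of Helm appears to be pre-installed on the cluster. Assuming that is the case, only the local client needs initializing:

# Initialize the local Helm client only; skip installing Tiller, which is already running in the cluster
helm init --client-only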

  

Step-10 Verifying Helm  

10.1 - Type the command below

helm help

10.2 - Run 'helm version'; the output should be similar to the following:

              Client: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
              Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
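
If both client and server versions are reported as above, the Helm client can reach Tiller and the cluster is ready for chart deployments such as ONAP via OOM, as mentioned at the top of this page. As a final sanity check (a suggestion, not part of the original steps):

# Lists releases known to Tiller; an empty list with no error confirms client/server connectivity
helm list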