You need the management IP address, username, and password to create the Kubernetes cluster.
K8S Master and K8S Node instances should have an OpenStack flavor attached that meets the requirements below:
vCPU    | 48     |
RAM     | 96 GB  |
Storage | 256 GB |
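If a matching flavor does not already exist, it can be created from the OpenStack CLI. A minimal sketch, assuming admin credentials are already sourced; the flavor name "k8s-large" is only illustrative, RAM is given in MB (96 GB = 98304 MB) and disk in GB:
openstack flavor create k8s-large --vcpus 48 --ram 98304 --disk 256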
Cloud Provider creation is a prerequisite to Kubernetes cluster creation. VIO with Kubernetes uses the cloud provider to create the infrastructure required to deploy all your Kubernetes clusters. VMware currently supports two options for the infrastructure provider: VMware SDDC (vSphere + NSX + vSAN) or OpenStack (i.e. VMware Integrated OpenStack). Choose the provider type that matches your environment.
3.1 - Click '+NEW' to create a Kubernetes cluster
3.2 - Click NEXT
3.3 - Select an Infrastructure Provider for creating the Kubernetes cluster
Before you deploy a Kubernetes cluster, you must create the cloud provider. Cloud providers can be SDDC or OpenStack. Select the option as appropriate.
Here, with VIO with Kubernetes, we select 'OpenStack' as the cloud provider.
3.4 - Select a Node Profile. If you have more than one node profile, uncheck the "Use default node profile" box to see the list.
3.5 - Provide the input for the cluster as indicated in the Example Data below
Node Types - A Kubernetes cluster is composed of two types of nodes: master and worker. Each node in VMware Integrated OpenStack with Kubernetes is a VM.
Cluster Types - VMware Integrated OpenStack with Kubernetes supports two types of clusters: exclusive and shared.
Example Data:
Cluster Name: testCluster
Number of Master Nodes: 1
Number of Worker Nodes: 1
DNS Servers: 10.112.64.1
Cluster Type: Exclusive Cluster
3.6 - Add Users and Groups for this cluster
Once a Kubernetes cluster is created, you can authorize users or groups for the cluster. The users and groups belong to the SDDC or OpenStack provider where the cluster was created.
In the Configure user and group for cluster dialog box, check the boxes for users or groups that you want to authorize for access to the cluster.
Or uncheck the boxes for users or groups that you no longer want to authorize for access to the cluster.
3.7 - Click FINISH and wait a few minutes for the Kubernetes cluster to be created.
If steps 3.1 through 3.6 have completed successfully, the Summary information for the cluster will be populated as shown in the example below.
Step-5 How to get the Kubernetes host IP address and log in to the Kubernetes host
5.1 - Log in via a console window to BLUESHIFT_MGMT_IP_ADDRESS. The username and password are the same as used in Step 1.
5.2 - Once logged in to the BLUESHIFT_MGMT_IP_ADDRESS session, use the command "vkube login --insecure". Use the same username and password as in Step 1.
5.3 - Get the list of clusters using the command "vkube cluster list --insecure". Make a note of the cluster ID in the output of the command.
5.4 - Get the cluster node details using the command "vkube cluster show <cluster Id> --insecure". Make a note of the worker IP address; this is the KUBERNETES_HOST_IP used below.
5.5 - Log in to the app-api container using "docker exec -it app-api bash"
5.6 - Once inside the app-api container, go to /var/lib/vrc/terraform/<clusterId>
5.7 - SSH to KUBERNETES_HOST_IP_ADDRESS using the command below (the full sequence is consolidated in the sketch after step 5.8)
ssh -i private.key -F ssh.bastion.conf ubuntu@KUBERNETES_HOST_IP_ADDRESS
5.8 - Once inside KUBERNETES_HOST_IP_ADDRESS (to be reviewed)
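Putting steps 5.5 through 5.7 together, the hop from the management VM into a worker node looks roughly like the sketch below; <clusterId> and KUBERNETES_HOST_IP_ADDRESS are the values noted in steps 5.3 and 5.4:
docker exec -it app-api bash
cd /var/lib/vrc/terraform/<clusterId>
ssh -i private.key -F ssh.bastion.conf ubuntu@KUBERNETES_HOST_IP_ADDRESS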
Step 6 Installing kubectl to manage the Kubernetes cluster
6.1 - Download kubectl using the command below
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl
6.2 - Make the kubectl binary executable.
chmod +x ./kubectl
6.3 - Move kubectl to a directory on your PATH
sudo mv ./kubectl /usr/local/bin/kubectl
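The URL in step 6.1 is pinned to v1.7.0 to match the cluster used in this guide. If a newer client is acceptable, the latest stable release can be fetched instead (an optional variation on the same download, not a required step):
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl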
Step 7 Verifying that kubectl config is good
7.1 - On the Kubernetes cluster
root@localhost:~# kubectl cluster-info
Kubernetes master is running at ...
Heapster is running at ...
KubeDNS is running at ...
kubernetes-dashboard is running at ...
monitoring-grafana is running at ...
monitoring-influxdb is running at ...
tiller-deploy is running at ...
7.2 - On a client from where the Kubernetes cluster can be managed remotely
root@localhost:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.7-rancher1", GitCommit:"a1ea37c6f6d21f315a07631b17b9537881e1986a", GitTreeState:"clean", BuildDate:"2017-10-02T21:33:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
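As an additional sanity check that kubectl can reach the API server, listing the nodes should show the master and worker created in Step 3 (node names will differ per deployment):
kubectl get nodes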
Step 8 Verifying that kube config is good
8.1 - On the Kubernetes cluster
root@localhost:~# cat ~/.kube/config
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    insecure-skip-tls-verify: true
    server: "SERVER_IP_ADDRESS:8080/r/projects/CLUSTER_NAME/kubernetes:SERVER_PORT_NUMBER"
  name: "CLUSTER_NAME"
contexts:
- context:
    cluster: "CLUSTER_NAME"
    user: "CLUSTER_NAME"
  name: "CLUSTER_NAME"
current-context: "CLUSTER_NAME"
users:
- name: "CLUSTER_NAME/USER_NAME"
  user:
    token: "<SECURITY_TOKEN>"
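To confirm which configuration kubectl is actually using (for example when the KUBECONFIG environment variable is set), the merged configuration can be printed with:
kubectl config view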
8.2 - On a client from where the Kubernetes cluster can be managed remotely
root@localhost:~# cat ~/.kube/config
current-context: default-context
apiVersion: v1
clusters:
- cluster:
    api-version: v1
    server: https://SERVER_IP_ADDRESS:SERVER_PORT_NUMBER/
    insecure-skip-tls-verify: true
  name: CLUSTER_NAME
contexts:
- context:
    cluster: CLUSTER_NAME
    namespace: default
    user: user1
  name: default-context
users:
- name: user1
  user:
    username: "<USERNAME>"
    password: "<PASSWORD>"
kind: Config
preferences:
  colors: true
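Instead of editing ~/.kube/config by hand, the same remote-client configuration can be generated with kubectl itself. A sketch, assuming the placeholder values above (SERVER_IP_ADDRESS, SERVER_PORT_NUMBER, USERNAME, PASSWORD) are substituted with real values:
kubectl config set-cluster CLUSTER_NAME --server=https://SERVER_IP_ADDRESS:SERVER_PORT_NUMBER/ --insecure-skip-tls-verify=true
kubectl config set-credentials user1 --username=<USERNAME> --password=<PASSWORD>
kubectl config set-context default-context --cluster=CLUSTER_NAME --namespace=default --user=user1
kubectl config use-context default-context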
Step 9 Installing Helm
9.1 - Download Helm using the command below
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
9.2 - Extract the Helm archive using the command below
tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
9.3 - Move the helm binary to /usr/local/bin
sudo mv linux-amd64/helm /usr/local/bin/helm
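With Helm v2, the server-side component (Tiller) must be running in the cluster before "helm version" reports a Server line. The cluster-info output in step 7.1 already shows tiller-deploy running in this environment; if Tiller has not been deployed yet, it can be installed with:
helm init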
Step 10 Verifying Helm
10.1 - Type the command below
helm help
10.2 - Run "helm version". The output should look like the following:
Client: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}