
...

Code Block
ubuntu@k8s-s5-master:~/certs$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.108.52.94    <none>        80/TCP    57s
ubuntu@k8s-s5-master:~/certs$ 

ubuntu@k8s-s1-master:~$ kubectl -n kube-system edit service kubernetes-dashboard
# Change the value of spec.type from "ClusterIP" to "NodePort", then save the file (:wq)

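Alternatively, the same change can be applied non-interactively with "kubectl patch" (a minimal sketch of an equivalent command, using the namespace and service name from the steps above):

Code Block
# Switch the service type from ClusterIP to NodePort without opening an editor
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'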

4) Check the port on which the Dashboard was exposed

Code Block
ubuntu@k8s-s1-master:~$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.108.52.94   <none>        80:30830/TCP   2h
ubuntu@k8s-s1-master:~$


# Here the exposed NodePort is 30830
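The allocated NodePort can also be read directly from the service spec with a jsonpath query (a small optional check against the same service):

Code Block
# Print only the NodePort assigned to the Dashboard service (30830 in this example)
kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'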

Web-based Interface

5) Navigate to the UI via a browser

Use the master node IP address and the exposed port: http://<master-node-ip-address>:<exposed-port>

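As a quick reachability check before opening a browser, the endpoint can be probed with curl (a hedged example; substitute the real master node IP address and the NodePort found in step 4):

Code Block
# Any HTTP status line in the response indicates the NodePort is reachable
curl -I http://<master-node-ip-address>:<exposed-port>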

6) Grant full admin privileges to the Dashboard Service Account

The browser does not ask for credentials to log in. The default user is "system:serviceaccount:kube-system:kubernetes-dashboard", which does not have access to the default namespace.

To fix this, create a new "ClusterRoleBinding" that grants privileges to the Dashboard Service Account.

...

Code Block
dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system




Code Block
~$ kubectl create -f dashboard-admin.yaml
clusterrolebinding "kubernetes-dashboard" created
~$

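Optionally, the new binding can be verified before returning to the browser (the resource name matches the metadata.name used in dashboard-admin.yaml):

Code Block
# Show the role and subjects attached to the new ClusterRoleBinding
kubectl describe clusterrolebinding kubernetes-dashboard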

7) Navigate to the UI via a browser

You can now access the Dashboard UI without any credentials.

...

If an application inside a pod, such as ODL, dies or is stopped, the pod itself will be recreated, so the outage will appear the same way as a pod outage.
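This recovery can be observed by watching the pod list for the affected component (a sketch; substitute the namespace used by your deployment):

Code Block
# Watch pod status; a replacement pod appears once the failed one is recreated
kubectl get pods -n <namespace> -w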

The operator of the site can use this information to help determine when a manual failover to the remote site is required. Normally, a failover would be desired when there is a lack of redundancy available for a component, such as when only one database is available or when only one ODL is available. The operator would first want to determine whether the site that has experienced the outage(s) is the 'active' site by running the '/optdockerdata-nfs/sdnccluster/georscript/sdnc.cluster' script.