...
```
ubuntu@k8s-s5-master:~/certs$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.108.52.94   <none>        80/TCP    57s
ubuntu@k8s-s5-master:~/certs$

ubuntu@k8s-s1-master:~$ kubectl -n kube-system edit service kubernetes-dashboard
# Change spec.type from ClusterIP to NodePort and save.
```
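If you would rather not open an interactive editor, the same change can be made non-interactively with `kubectl patch` (an equivalent alternative, not part of the original steps; service name and namespace are the ones used above):

```
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
```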
4) Check port on which Dashboard was exposed
```
ubuntu@k8s-s1-master:~$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.108.52.94   <none>        80:30830/TCP   2h
ubuntu@k8s-s1-master:~$   # here it is 30830
```
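For scripting, the NodePort can be read directly with a `jsonpath` query instead of parsing the table output (a minimal sketch, assuming the service has a single port as shown above):

```
kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
# Prints the NodePort, e.g. 30830 in this example.
```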
Web-based Interface
5) Navigate to UI via a browser
Use the master node IP address and the exposed port: http://<master-node-ip-address>:<exposed-port>
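Before switching to the browser, you can confirm that the port answers, for example with curl (a quick sanity check, not part of the original steps; 30830 is the example NodePort from step 4, substitute your own):

```
curl -I http://<master-node-ip-address>:30830/
```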
6) Grant full admin privileges to Dashboard Service Account
The browser does not ask for credentials to log in. The default user is "system:serviceaccount:kube-system:kubernetes-dashboard", which does not have access to the default namespace.
To fix this, create a new "ClusterRoleBinding" that grants privileges to the Dashboard Service Account.
...
dashboard-admin.yaml:

```
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```

```
~$ kubectl create -f dashboard-admin.yaml
clusterrolebinding "kubernetes-dashboard" created
~$
```
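To verify the binding took effect, `kubectl auth can-i` can impersonate the service account (a quick check added here for convenience; it should answer "yes" once the binding exists):

```
~$ kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:kube-system:kubernetes-dashboard
yes
```

Keep in mind that binding cluster-admin gives the Dashboard full control of the cluster; that is convenient for a lab setup like this one, but too broad for production use.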
7) Navigate to UI via a browser
You can now access the Dashboard in the browser without being asked for credentials.
...
To see the status of each pod in the site, select the 'Pods' pane (under the 'Workloads' heading). You can also use the following URL: http://server:31497/#!/pod?namespace=onap-sdnc.
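The same status is available from the command line, which is useful when the UI itself is unreachable (onap-sdnc is the namespace from the URL above):

```
kubectl -n onap-sdnc get pods
```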
When a pod fails, the GUI will show that fact.
e.g. An ODL outage:
e.g. A database outage:
Selecting the 'Overview' pane will allow a less specific view of the failures:
If an application inside a pod, such as the ODL, dies, the pod itself will be recreated, so the outage will be shown in the same way as a pod outage.
The operator of the site can use this information to help determine when a manual failover to the remote site is required. Normally, a failover would be desired when there is a lack of redundancy available for a component, such as when only one database is available or when only one ODL is available. The operator would first want to determine whether the site that has experienced the outage(s) is the 'active' site by running the '/opt/sdnc/geor/sdnc.cluster' script.
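As an illustration, the check might look like this (a sketch: the script path comes from this guide, but its output format is deployment-specific, and the field selector is just one way to spot non-running pods):

```
# Is this site the 'active' one? (script referenced above)
/opt/sdnc/geor/sdnc.cluster

# List pods that are not in the Running phase, as a quick redundancy check
kubectl -n onap-sdnc get pods --field-selector=status.phase!=Running
```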
...