...

...

...

...

...

...

...

...

...

...

...

...

...

...

Installation

Execute the following steps on the master node.

...

Code Block
ubuntu@k8s-s5-master:~/certs$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.108.52.94    <none>        80/TCP    57s
ubuntu@k8s-s5-master:~/certs$ 

ubuntu@k8s-s1-master:~$ kubectl -n kube-system edit service kubernetes-dashboard
# Change spec.type from ClusterIP to NodePort and save.
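
Alternatively, the same change can be made non-interactively with kubectl patch (a sketch equivalent to the edit above):

Code Block
# Non-interactive equivalent of editing spec.type: switch the service to NodePort
ubuntu@k8s-s1-master:~$ kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'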


4) Check the port on which the Dashboard was exposed

Code Block
ubuntu@k8s-s1-master:~$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.108.52.94   <none>        80:30830/TCP   2h
ubuntu@k8s-s1-master:~$


# Here the exposed port is 30830.
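
If you prefer to read the NodePort directly rather than scanning the table, a jsonpath query can print just the port (a sketch; it assumes the dashboard service exposes a single port):

Code Block
# Print only the assigned NodePort (assumes a single service port)
ubuntu@k8s-s1-master:~$ kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'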

Web-based Interface

5) Navigate to the UI via a browser

Use the master node IP address and the exposed port: http://<master-node-ip-address>:<exposed-port>
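
For example, with the NodePort shown above (30830), a quick reachability check from the command line could look like this (a sketch; the address is the same placeholder as in the URL pattern above):

Code Block
# HEAD request to verify the dashboard responds on the exposed NodePort
ubuntu@k8s-s1-master:~$ curl -I http://<master-node-ip-address>:30830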


6) Grant full admin privileges to the Dashboard Service Account

The browser does not ask for credentials to log in. The default user is "system:serviceaccount:kube-system:kubernetes-dashboard", which does not have access to the default namespace.

To fix this, create a new ClusterRoleBinding that grants privileges to the Dashboard Service Account.

...

Code Block
dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system




Code Block
~$ kubectl create -f dashboard-admin.yaml
clusterrolebinding "kubernetes-dashboard" created
~$


7) Navigate to the UI via a browser

You can now access the dashboard in the browser without any credentials.

Monitoring SDN-C Site Health

The Kubernetes dashboard GUI can be used to monitor the health of components of the SDN-C site by changing the Namespace to 'onap-sdnc'.

In order to see the status of each pod in the site, select the 'Pods' pane (under the 'Workloads' heading). You can also use the following URL: http://server:31497/#!/pod?namespace=onap-sdnc
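
The same status information can be retrieved from the command line when the GUI is not convenient (a sketch; pod names and counts vary by deployment):

Code Block
# List all SDN-C pods in the onap-sdnc namespace with their status and node placement
ubuntu@k8s-s1-master:~$ kubectl -n onap-sdnc get pods -o wide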

[Screenshot: 'Pods' list for the onap-sdnc namespace]

When a pod fails, the GUI will show that fact:

[Screenshot: 'Pods' list showing a failed pod]

Selecting the 'Overview' pane gives a higher-level, less detailed view of the failure:

[Screenshot: 'Overview' pane showing the failure]

The operator of the site can use this information to help determine when a manual failover to the remote site is required. Normally, a failover is desirable when a component has lost redundancy, for example when only one database instance or only one ODL instance remains available. The operator should first determine whether the site that has experienced the outage(s) is the 'active' site by running the '/opt/sdnc/geor/sdnc.cluster' script.
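
For reference, that check could be run directly on the master node as shown below (a sketch; the script's exact output format depends on the SDN-C release):

Code Block
# Reports whether this site is currently the active member of the geo-redundant pair
ubuntu@k8s-s1-master:~$ /opt/sdnc/geor/sdnc.cluster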

Limitations