Rancher access:
http://10.12.6.190:8080/admin/access/
the IP changes according to the environment where the setup is created; use the Rancher UI to identify the node where Rancher is installed and access its external IP.
K8s access:
http://10.12.6.190:8080/r/projects/1a7/kubernetes-dashboard:9090/#!/service?namespace=default
as above, the IP changes according to the environment where the setup is created; use the Rancher UI to identify the correct external IP.
SDC Docker dependency structure and troubleshooting info:
useful commands:
retrieve all pods in the onap namespace, or use --all-namespaces to show pods from all namespaces:
root@rancher:~# kubectl get pods -n onap
NAME                                     READY   STATUS    RESTARTS   AGE
dev-sdc-be-7dfd76f8b7-hc26c              2/2     Running   0          2d
dev-sdc-cs-7d7787b7f5-6ch55              1/1     Running   0          2d
dev-sdc-es-9477ccd7c-8jt5p               1/1     Running   0          2d
dev-sdc-fe-59dbb59656-v7kwn              2/2     Running   0          2d
dev-sdc-kb-7cfbd85c7b-pgpnj              1/1     Running   0          2d
dev-sdc-onboarding-be-745c794884-6c9tf   2/2     Running   0          2d
dev-sdc-wfd-6f7c9d778b-hbzlf             1/1     Running   0          2d
for more info on the pods:
root@rancher:~# kubectl get pods -n onap -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP              NODE
dev-sdc-be-7dfd76f8b7-hc26c              2/2     Running   0          2d    10.42.202.150   k8s-3
dev-sdc-cs-7d7787b7f5-6ch55              1/1     Running   0          2d    10.42.100.30    k8s-3
dev-sdc-es-9477ccd7c-8jt5p               1/1     Running   0          2d    10.42.74.202    k8s-4
dev-sdc-fe-59dbb59656-v7kwn              2/2     Running   0          2d    10.42.73.184    k8s-2
dev-sdc-kb-7cfbd85c7b-pgpnj              1/1     Running   0          2d    10.42.44.238    k8s-9
dev-sdc-onboarding-be-745c794884-6c9tf   2/2     Running   0          2d    10.42.7.132     k8s-4
dev-sdc-wfd-6f7c9d778b-hbzlf             1/1     Running   0          2d    10.42.113.182   k8s-8
retrieve all the services defined in the system:
root@rancher:~# kubectl get services -n onap
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
sdc-be              NodePort    10.43.1.23      <none>        8443:30204/TCP,8080:30205/TCP   2d
sdc-cs              ClusterIP   10.43.204.60    <none>        9160/TCP,9042/TCP               2d
sdc-es              ClusterIP   10.43.48.202    <none>        9200/TCP,9300/TCP               2d
sdc-fe              NodePort    10.43.43.115    <none>        8181:30206/TCP,9443:30207/TCP   2d
sdc-kb              ClusterIP   10.43.160.51    <none>        5601/TCP                        2d
sdc-onboarding-be   ClusterIP   10.43.191.47    <none>        8445/TCP,8081/TCP               2d
sdc-wfd             NodePort    10.43.220.184   <none>        8080:30256/TCP                  2d
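the NodePort services above are reachable on the mapped port of any cluster node; for example, the sdc-fe HTTP port 8181 maps to node port 30206 (the node IP below is a placeholder):
curl http://<k8s-node-external-ip>:30206/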
view the k8s context:
root@rancher:~# kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
*         oom    oom       oom
view the helm chart releases and their state:
root@k8s:~# helm list
NAME   REVISION   UPDATED                    STATUS     CHART        NAMESPACE
dev    2          Mon Apr 16 23:01:06 2018   FAILED     onap-2.0.0   onap
dev    9          Tue Apr 17 12:59:25 2018   DEPLOYED   onap-2.0.0   onap
view the repositories from which charts are retrieved:
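root@k8s:~# helm repo list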
NAME     URL
stable   https://kubernetes-charts.storage.googleapis.com
local    http://127.0.0.1:8879
a config upgrade should use the local/onap syntax so that Helm resolves the sub-charts from the parent chart (local/onap):
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml
to update a container, disable the pods, wait for the pods to stop, and then start them again:
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set sdc.enabled=false
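while waiting, the pod termination can be watched with, for example (the grep filter is just a convenience):
kubectl get pods -n onap | grep sdc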
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set sdc.enabled=true
in case you want to update the env JSON for SDC:
update the file under oom/kubernetes/sdc/resources/config/environments/AUTO.json
then stop the pods, rebuild the charts, and start the pods:
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set sdc.enabled=false
cd /root/oom/kubernetes
make sdc
make onap
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set sdc.enabled=true
to check the status of a pod:
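for example, using one of the pod names from the listing above (pod names will differ per deployment):
kubectl -n onap describe pod dev-sdc-be-7dfd76f8b7-hc26c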
log into the Docker container in a pod:
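for example (again using a pod name from above):
kubectl -n onap exec -it dev-sdc-be-7dfd76f8b7-hc26c -- sh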
log into the Docker container in a pod; in case there are a number of containers in the same pod, use -c to select one:
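for example (the container name sdc-be is an assumption; the actual container names can be read from kubectl describe pod):
kubectl -n onap exec -it dev-sdc-be-7dfd76f8b7-hc26c -c sdc-be -- sh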
show config maps:
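kubectl -n onap get configmaps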
view a config map as YAML:
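for example (the config map name is a placeholder; take a real one from the previous command's output):
kubectl -n onap get configmap <configmap-name> -o yaml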
delete the full deployment and then check that all pods are stopped;
look for all Terminating pods to be gone and wait till they are:
helm del dev --purge
kubectl get pods --all-namespaces -o=wide
look for persistent volume claims that have not been removed:
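kubectl -n onap get pvc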
delete them if they do not go down:
kubectl -n onap delete pvc dev-sdnc-db-data-dev-sdnc-db-0
look for persistent volumes that have not been removed:
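list them with (persistent volumes are cluster-scoped, so no namespace is needed):
kubectl get pv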
remove them:
kubectl -n onap delete pv pvc-c0180abd-4251-11e8-b07c-02ee3a27e357
to delete a pod stuck in Terminating:
kubectl delete pod dev-sms-857f6dbd87-6lh9k -n onap
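if the pod still does not go away, the deletion can be forced (use with care, as this skips the graceful shutdown):
kubectl delete pod dev-sms-857f6dbd87-6lh9k -n onap --grace-period=0 --force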
full install:
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml
configuring access to SDC using the portal
the portal application is the only one with an exposed external port.
look for the external IP of the node that has the portal IP 10.0.0.4 in the UI listing of the instances.
define the IPs in your local machine's hosts file so that the browser can resolve the host names to the correct IP:
10.12.6.37 portal.api.simpledemo.onap.org
10.12.6.37 sdc.api.fe.simpledemo.onap.org
to access the portal, use the link below after the name resolution has been updated.