SDC on OOM





Rancher access

The IP changes according to the environment where the setup is created. Use the UI of the environment to identify the node where Rancher is installed and access its external IP.



K8s access

The IP changes according to the environment where the setup is created. Use the UI of the environment to identify the node where Kubernetes is installed and access its external IP.



SDC docker dependency structure and troubleshooting info:



Useful commands

Retrieve all the pods in the ONAP namespace, or use --all-namespaces to show pods from all namespaces (see the example after the output below).



root@rancher:~# kubectl get pods -n onap
NAME                                     READY   STATUS    RESTARTS   AGE
dev-sdc-be-7dfd76f8b7-hc26c              2/2     Running   0          2d
dev-sdc-cs-7d7787b7f5-6ch55              1/1     Running   0          2d
dev-sdc-es-9477ccd7c-8jt5p               1/1     Running   0          2d
dev-sdc-fe-59dbb59656-v7kwn              2/2     Running   0          2d
dev-sdc-kb-7cfbd85c7b-pgpnj              1/1     Running   0          2d
dev-sdc-onboarding-be-745c794884-6c9tf   2/2     Running   0          2d
dev-sdc-wfd-6f7c9d778b-hbzlf             1/1     Running   0          2d
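
The --all-namespaces form mentioned above lists the pods from every namespace:

root@rancher:~# kubectl get pods --all-namespaces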





For more info on the pods



root@rancher:~# kubectl get pods -n onap -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP              NODE
dev-sdc-be-7dfd76f8b7-hc26c              2/2     Running   0          2d    10.42.202.150   k8s-3
dev-sdc-cs-7d7787b7f5-6ch55              1/1     Running   0          2d    10.42.100.30    k8s-3
dev-sdc-es-9477ccd7c-8jt5p               1/1     Running   0          2d    10.42.74.202    k8s-4
dev-sdc-fe-59dbb59656-v7kwn              2/2     Running   0          2d    10.42.73.184    k8s-2
dev-sdc-kb-7cfbd85c7b-pgpnj              1/1     Running   0          2d    10.42.44.238    k8s-9
dev-sdc-onboarding-be-745c794884-6c9tf   2/2     Running   0          2d    10.42.7.132     k8s-4
dev-sdc-wfd-6f7c9d778b-hbzlf             1/1     Running   0          2d    10.42.113.182   k8s-8





Retrieve all the services defined in the system

root@rancher:~# kubectl get services -n onap
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
sdc-be              NodePort    10.43.1.23      <none>        8443:30204/TCP,8080:30205/TCP   2d
sdc-cs              ClusterIP   10.43.204.60    <none>        9160/TCP,9042/TCP               2d
sdc-es              ClusterIP   10.43.48.202    <none>        9200/TCP,9300/TCP               2d
sdc-fe              NodePort    10.43.43.115    <none>        8181:30206/TCP,9443:30207/TCP   2d
sdc-kb              ClusterIP   10.43.160.51    <none>        5601/TCP                        2d
sdc-onboarding-be   ClusterIP   10.43.191.47    <none>        8445/TCP,8081/TCP               2d
sdc-wfd             NodePort    10.43.220.184   <none>        8080:30256/TCP                  2d
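
For NodePort services the PORT(S) column maps the internal port to a port exposed on every cluster node. For example, the sdc-fe mapping 8181:30206 above means that FE port 8181 is reachable on port 30206 of any node's external IP. A quick reachability check (the node IP is a placeholder):

curl -s -o /dev/null -w "%{http_code}\n" http://<k8s-node-external-ip>:30206/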



View the K8s context

root@rancher:~# kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
*         oom    oom       oom
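
If kubectl is not pointed at this context, it can be selected by name (oom, taken from the output above):

kubectl config use-context oom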



View the Helm chart releases and their state:

root@k8s:~# helm list
NAME   REVISION   UPDATED                    STATUS     CHART        NAMESPACE
dev    2          Mon Apr 16 23:01:06 2018   FAILED     onap-2.0.0   onap
dev    9          Tue Apr 17 12:59:25 2018   DEPLOYED   onap-2.0.0   onap
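
To see the full revision history of a release (here the dev release shown above):

root@k8s:~# helm history dev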



View the repositories from which charts are retrieved

root@rancher:~# helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879
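
If the local repository is missing, it can be re-added; a minimal sketch, assuming the standard Helm 2 / OOM setup where the charts are served locally by helm serve on the default port:

root@rancher:~# helm serve &
root@rancher:~# helm repo add local http://127.0.0.1:8879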



Config upgrade

You should use the local/onap syntax so that the release is resolved from the parent chart (local/onap).

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml
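
To verify that the upgrade went through, you can check the release status afterwards, for example:

helm status dev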



Update container

Disable the pods, wait for the pods to stop (see the wait example below), and then enable them again.

Beijing

cd ~

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set sdc.enabled=false

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set sdc.enabled=true
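
Between the disable and the enable step, a simple way to wait until the SDC pods have terminated (assuming watch is available on the host):

watch "kubectl get pods -n onap | grep sdc"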

Casablanca

helm list -a

..

dev-sdc         2               Wed Oct 17 18:06:08 2018        DEPLOYED        sdc-3.0.0               onap

..

helm del dev-sdc --purge



helm list -a // to confirm it is gone



kubectl get pods -n onap | grep sdc // check that the pods are gone



helm deploy dev local/onap -f /root/integration-override.yaml --namespace onap
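
After the redeploy, confirm that the SDC pods come back up with all containers ready, for example:

kubectl get pods -n onap | grep sdc
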

Update config map

In case you want to update the environment JSON for SDC:

Update the file under oom/kubernetes/sdc/resources/config/environments/AUTO.json.

Stop the pods, rebuild the charts, and then start the pods again.

helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set sdc.enabled=false
cd /root/oom/kubernetes
make sdc

make onap

helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set sdc.enabled=true
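
To confirm that the new environment values were picked up, inspect the regenerated config map (see "View config map as yaml" below):

kubectl get configmaps -n onap dev-sdc-environments-configmap -o yaml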



To check the status of a pod

kubectl -n onap describe pod dev-robot-5cfddf87fb-65zvv

Log into the Docker container in a pod

kubectl -n onap exec -it dev-sdc-onboarding-be-745c794884-ss8w4 bash



Log into a specific container in a pod

In case there are a number of containers in the same pod, use -c:

kubectl -n onap exec -it dev-sdc-onboarding-be-745c794884-ss8w4 -c <container name as it is defined in the deployment yaml for the pod> bash



View container logs

In case there are a number of containers in the same pod, use -c:



kubectl -n onap logs -f dev-sdc-onboarding-be-745c794884-ss8w4 -c <container name as it is defined in the deployment yaml for the pod>
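
If a container has crashed and restarted, the logs of the previous instance can be retrieved with --previous (the pod and container names here are just the examples used above):

kubectl -n onap logs dev-sdc-onboarding-be-745c794884-ss8w4 -c <container name> --previous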





Show config maps

kubectl get configmaps -n onap

View config map as yaml

kubectl get configmaps -n onap dev-sdc-environments-configmap -o yaml



Delete full deploy

Delete the release and then check that all pods are stopped; look for pods in Terminating state and, if there are any, wait until they are gone.

helm del dev --purge

kubectl get pods --all-namespaces -o=wide



Look for persistent volume claims that have not been removed.

kubectl -n onap get pvc



Delete them if they do not go down.

kubectl -n onap delete pvc  dev-sdnc-db-data-dev-sdnc-db-0



Look for persistent volumes that have not been removed.

kubectl -n onap get pv


Remove the persistent volumes.

kubectl -n onap delete  pv  pvc-c0180abd-4251-11e8-b07c-02ee3a27e357


To delete a pod stuck in Terminating

kubectl delete  pod dev-sms-857f6dbd87-6lh9k -n onap
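
If the pod still does not go away, it can usually be force-deleted (use with care; the pod name is the example from above):

kubectl delete pod dev-sms-857f6dbd87-6lh9k -n onap --grace-period=0 --force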



Full install

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml

 

Configuring access to SDC using the Portal

The Portal application is the only one with an exposed external port.

Look for the external IP of the node that has the Portal (IP 10.0.0.4) in the UI for the instances.

kubectl get service -o wide -n onap

Map the host names below to that IP on your local machine so that the browser can resolve them correctly:

10.12.6.37          portal.api.simpledemo.onap.org

10.12.6.37          sdc.api.fe.simpledemo.onap.org
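
On Linux or macOS these entries go into /etc/hosts (on Windows, C:\Windows\System32\drivers\etc\hosts); for example, assuming 10.12.6.37 is the external IP found above:

echo "10.12.6.37 portal.api.simpledemo.onap.org sdc.api.fe.simpledemo.onap.org" | sudo tee -a /etc/hosts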



To access the Portal, open the portal.api.simpledemo.onap.org address in the browser after the name resolution has been updated.
