Issues, Solutions and Workarounds



Each entry below lists the issue and the corresponding solution or workaround.


Issue: Can't log in to the ONAP Portal GUI



Solution/Workaround: Check the portal log; if MUSIC reports errors accessing the database and the Cassandra container log shows a disk-full message, check the Rancher node filesystem with "df -h" and look at the /dockerdata-nfs entry to see whether it is at 100%. If so, clear some files under /dockerdata-nfs/dev-log/, then restart the portal-cassandra and portal-db containers by deleting their pods and letting Kubernetes start new ones.
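A minimal sketch of that cleanup (the portal pod names carry generated suffixes, so take the exact names from the get pods output):

root@onap-oom-rancher:~# df -h | grep dockerdata-nfs                 # check whether /dockerdata-nfs is at 100%
root@onap-oom-rancher:~# rm /dockerdata-nfs/dev-log/<old-log-files>  # free up space; be selective about what you remove
root@onap-oom-rancher:~# kubectl -n onap get pods | grep -E 'portal-cassandra|portal-db'
root@onap-oom-rancher:~# kubectl -n onap delete pod <portal-cassandra-pod> <portal-db-pod>   # k8s recreates them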

Issue: demo-k8s.sh onap init fails to populate customer data in A&AI



Solution/Workaround: GLOBAL_AAI_USERNAME and GLOBAL_AAI_PASSWORD are not correct in the robot chart. Update the chart and redeploy.
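A sketch of the fix, assuming the credentials are set in the robot chart under ~/oom/kubernetes (the exact file, e.g. robot/values.yaml, depends on the OOM release) and using the same helm deploy plugin and override file as elsewhere on this page:

root@onap-oom-rancher:~/oom/kubernetes# vi robot/values.yaml        # set correct GLOBAL_AAI_USERNAME / GLOBAL_AAI_PASSWORD (file location may differ by release)
root@onap-oom-rancher:~/oom/kubernetes# make robot                  # rebuild the robot chart
root@onap-oom-rancher:~/oom/kubernetes# helm deploy dev local/onap -f /root/integration-override.yaml --namespace onap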

Issue: SDC model distribution fails, with no "downloaded" or "notified" status shown



Solution/Workaround: Restart the dmaap project by deleting the dmaap release and making sure no pv, pvc, or secrets are left lingering. Then deploy dmaap again:

root@onap-oom-rancher:~# helm delete dev-dmaap
release "dev-dmaap" deleted

root@onap-oom-rancher:~# kubectl -n onap delete pvc dev-dmaap-dbc-pg-data-dev-dmaap-dbc-pg-0
persistentvolumeclaim "dev-dmaap-dbc-pg-data-dev-dmaap-dbc-pg-0" deleted
root@onap-oom-rancher:~# kubectl -n onap get pvc|grep dmaap
dev-dmaap-dbc-pg-data-dev-dmaap-dbc-pg-1 Bound dev-dmaap-dbc-pg-data1 1Gi RWO dev-dmaap-dbc-pg-data 2d
root@onap-oom-rancher:~# kubectl -n onap delete pvc dev-dmaap-dbc-pg-data-dev-dmaap-dbc-pg-1
persistentvolumeclaim "dev-dmaap-dbc-pg-data-dev-dmaap-dbc-pg-1" deleted
root@onap-oom-rancher:~# kubectl -n onap get secret|grep dmaap
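If any dmaap persistent volumes or secrets are still listed, delete those as well before redeploying (a sketch; substitute the names actually returned, and note that pv objects are cluster-scoped):

root@onap-oom-rancher:~# kubectl get pv | grep dmaap
root@onap-oom-rancher:~# kubectl delete pv <dmaap-pv-name>
root@onap-oom-rancher:~# kubectl -n onap delete secret <dmaap-secret-name>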

root@onap-oom-rancher:~/oom/kubernetes# helm deploy dev local/onap -f /root/integration-override.yaml --namespace onap

Issue: Can't enter the SDC GUI from the Portal; the error message says the SDC FE cannot be connected



Solution/Workaround: Check your local /etc/hosts to see whether the IP addresses are up to date.
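One way to check (a sketch; the simpledemo host names below are the usual demo entries, and the service names may differ by release):

root@onap-oom-rancher:~# kubectl -n onap get svc | grep -E 'portal-app|sdc-fe'   # find the exposed node ports / IPs
# /etc/hosts on the machine running the browser should map the demo host names to a current node IP, e.g.:
# <k8s-node-ip>  portal.api.simpledemo.onap.org
# <k8s-node-ip>  sdc.api.fe.simpledemo.onap.org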

Issue: SDC model distribution doesn't show SDNC, although the rest of the components receive the model distribution



Solution/Workaround: Check the dev-sdnc-sdnc-ueb-listener container log /opt/onap/sdnc/ueb-listener/logs/ueb-listener.out to see whether it contains an error saying it failed to authenticate with SDC. If so, restart the pod by deleting it and letting K8S start a new one.
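For example (a sketch; the ueb-listener pod name carries a generated suffix, so take it from the get pods output):

root@onap-oom-rancher:~# kubectl -n onap get pods | grep ueb-listener
root@onap-oom-rancher:~# kubectl -n onap exec <ueb-listener-pod> -- tail -n 100 /opt/onap/sdnc/ueb-listener/logs/ueb-listener.out   # look for SDC authentication failures
root@onap-oom-rancher:~# kubectl -n onap delete pod <ueb-listener-pod>   # k8s starts a replacement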

Issue: SDNC doesn't write logs to karaf.log



Solution/Workaround: The OpenDaylight log is not configured properly.

Delete these lines from the statefulset definition by running `kubectl edit statefulset onap-sdnc-sdnc`:

- mountPath: /opt/opendaylight/current/etc/org.ops4j.pax.logging.cfg
  name: sdnc-logging-cfg-config
  subPath: org.ops4j.pax.logging.cfg


Then you need to kill the sdnc pod with `kubectl delete pod onap-sdnc-sdnc-0` and wait for k8s to start the new one (it may take some time to download the new image).
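Put together, the sequence looks roughly like this (a sketch; it assumes the sdnc statefulset and pod live in the onap namespace as in the other examples on this page):

root@onap-oom-rancher:~# kubectl -n onap edit statefulset onap-sdnc-sdnc   # remove the volumeMount entry shown above
root@onap-oom-rancher:~# kubectl -n onap delete pod onap-sdnc-sdnc-0
root@onap-oom-rancher:~# kubectl -n onap get pods | grep sdnc              # repeat until the new pod is Running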

Issue: On the Rancher node, kubectl doesn't connect to the server on port 8080



Solution/Workaround: Reboot the Rancher VM, then repeat the ONAP Helm setup steps:

root@sb00-rancher:~/oom/kubernetes# helm serve &
root@sb00-rancher:~/oom/kubernetes# sleep 10
root@sb00-rancher:~/oom/kubernetes# helm repo add local http://127.0.0.1:8879
root@sb00-rancher:~/oom/kubernetes# helm repo list
root@sb00-rancher:~/oom/kubernetes# make all
root@sb00-rancher:~/oom/kubernetes# rsync -avt ~/oom/kubernetes/helm/plugins ~/.helm/
root@sb00-rancher:~/oom/kubernetes# helm search -l | grep local
root@sb00-rancher:~/oom/kubernetes# helm deploy dev local/onap -f ~/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f ~/integration-override.yaml --namespace onap | tee ~/helm-deploy.log
root@sb00-rancher:~/oom/kubernetes# helm list

Note: you will lose all data after the VM restart if you use a RAM disk.
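After the reboot, standard kubectl checks such as these confirm the connection to the API server is back (assuming kubectl on the Rancher node is already configured for this cluster):

root@sb00-rancher:~# kubectl cluster-info
root@sb00-rancher:~# kubectl get nodes
root@sb00-rancher:~# kubectl -n onap get pods | head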

Issue: SDNC and APPC fail the healthcheck with a 404 error after ONAP installation. After reducing the SDNC replica count from 3 to 1, the problem remains. All the pods look OK.



Solution/Workaround: The SDNC healthcheck passed after bouncing the dev-sdnc-sdnc-0 pod; the same applies to dev-appc-appc-0 for APPC.

Reason unknown.
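A sketch of the workaround, assuming the robot scripts sit under ~/oom/kubernetes/robot as in a standard OOM checkout:

root@onap-oom-rancher:~# kubectl -n onap delete pod dev-sdnc-sdnc-0
root@onap-oom-rancher:~# kubectl -n onap delete pod dev-appc-appc-0
root@onap-oom-rancher:~/oom/kubernetes/robot# ./ete-k8s.sh onap health   # re-run the healthcheck once both pods are Running again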