...
...
...
...
...
...
...
...
Non-Clustered Environment – Decommissioned
VM Name | External IP | appc-multicloud-integration IP | oam_onap_LH2Z IP | Note |
---|---|---|---|---|
onap-aai-inst1 | 10.12.5.114 | 10.10.5.15 | 10.0.1.1 | mainly followed the setup: https://lf-onap.atlassian.net/wiki/display/DW/How+to+Docker+setup+on+Single+VM+HEAT+Deployment |
onap-aai-inst2 | 10.12.5.201 | 10.10.5.24 | 10.0.0.7 | |
onap-dns-server | 10.12.5.59 | 10.10.5.16 | 10.0.100.1 | |
onap-appc | 10.12.5.43 | 10.10.5.10 | 10.0.2.1 | |
Stability-Test-VM3 | 10.12.6.130 | 10.10.5.18 | 10.0.0.18 | |
CDT: https://10.12.5.43:8080/index.html#/home (you need to add certificate exception on https://10.12.5.43:9090 first)
APIDOC: http://10.12.5.43:8282/apidoc/explorer/index.html
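The health_check.sh script used below exercises APPC's RESTCONF health check, which can also be run by hand. A minimal sketch; the admin credentials and the use of the 8282 port for RESTCONF are assumptions, so adjust to your deployment:

```bash
# Hypothetical manual health check against the non-clustered APPC instance.
# Replace <admin-password> with the ODL admin password configured in your lab.
curl -s -u admin:<admin-password> -X POST \
  -H "Content-Type: application/json" \
  http://10.12.5.43:8282/restconf/operations/SLI-API:healthcheck
```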
...
- Log in to the onap-appc VM from the table above as the ubuntu user, using its External IP (10.12.5.43), after connecting to the WindRiver VPN.
- sudo su - root
- cd /opt
- ./dc.sh - removes all existing docker resources (containers, images, networks, etc.)
- check/update the docker image versions in /opt/config/docker_version.txt, /opt/config/ansible_version.txt, and /opt/config/dgbuilder_version.txt
- cd /opt/deployment; git pull (make sure this checkout is at the head of the appc/deployment repo)
- docker ps -a
- ./appc_vm_init.sh - installs the dockers using docker compose
- ./health_check.sh - checks APPC health
- ./bundle_query.sh - checks the APPC bundles
- ./db_query.sh - checks the database
- cd csit; ./run-robot-appc.sh - runs the CSIT health check (a consolidated shell sketch of this sequence follows)
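The same sequence as one script; a sketch that assumes the appc/deployment scripts live in /opt as described in the list above:

```bash
# Consolidated sketch of the redeploy sequence above (run as root on 10.12.5.43).
cd /opt
./dc.sh                                   # tear down existing docker resources
cat /opt/config/docker_version.txt \
    /opt/config/ansible_version.txt \
    /opt/config/dgbuilder_version.txt     # confirm the versions to be pulled
(cd /opt/deployment && git pull)          # sync to the head of appc/deployment
./appc_vm_init.sh                         # bring the containers up via docker compose
./health_check.sh                         # APPC health
./bundle_query.sh                         # bundle states
./db_query.sh                             # database sanity
(cd csit && ./run-robot-appc.sh)          # CSIT health check
```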
Note for CDT development in your local environment:
...
- bring up the WindRiver lab VPN in your local environment
- git clone CDT source code from appc/cdt repo
- cd src/environments/
- cp environment.ts environment.ts.org
- change environment.ts as below:

```js
getDesigns: 'https://10.12.5.43:9090/cdtService/getDesigns',
validateTemplate: 'https://10.12.5.43:9090/cdtService/validateTemplate',
testVnf: 'https://10.12.5.43:9090/cdtService/testVnf',
checkTestStatus: 'https://10.12.5.43:9090/cdtService/checkTestStatus'
```
- cd ../..
- npm start → brings CDT up locally; the URL is https://localhost:8080/index.html#/home
To tail the APPC runtime karaf log:
- ssh to 10.12.5.43
- sudo -i
- docker exec -it appc_controller_container bash
- cd /opt/opendaylight/current/data/log
- tail -f karaf.log
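If you only need the log and not an interactive shell inside the container, the same tail can be done in one hop; a sketch, assuming the ubuntu user has passwordless sudo on the VM:

```bash
# One-shot variant: ssh in, exec into the controller container, and follow
# the karaf log without opening an interactive container shell.
ssh -t ubuntu@10.12.5.43 \
  "sudo docker exec appc_controller_container \
     tail -f /opt/opendaylight/current/data/log/karaf.log"
```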
...
To see the robot logs: http://10.12.5.171:30209/logs/
VM Name | External IP | oam_onap_LH2Z IP |
---|---|---|
k8s-master | 10.12.5.171 | 10.0.0.14 |
k8s-appc1 | 10.12.5.174 | 10.0.0.17 |
k8s-appc2 | 10.12.5.193 | 10.0.0.16 |
k8s-appc3 | 10.12.5.194 | 10.0.0.8 |
k8s-appc4 | 10.12.6.73 | 10.0.0.19 |
k8s-appc5 | 10.12.6.100 | 10.0.0.5 |
k8s-master-elalto | 10.12.6.68 | 10.0.0.15 |
k8s-appc1-elalto | 10.12.6.113 | 10.0.0.23 |
k8s-appc2-elalto | 10.12.6.92 | 10.0.0.20 |
k8s-appc3-elalto | 10.12.6.117 | 10.0.0.21 |
k8s-appc4-elalto | 10.12.6.138 | 10.0.0.24 |
k8s-appc5-elalto | 10.12.5.3 | 10.0.0.46 |
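Since several rows in this table have been edited over time, it is worth cross-checking them against the live cluster; a sketch, run from the k8s-master (or k8s-master-elalto) node:

```bash
# Cross-check the VM/IP table against what Kubernetes actually reports.
kubectl get nodes -o wide                       # node names and internal (oam) IPs
kubectl get pods -n onap -o wide | grep appc    # which node each APPC pod landed on
```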
To perform an update on any of the VMs that are under Kubernetes, you may execute the following steps:
- Log in to the k8s-master VM as the ubuntu user, using the External IP address in the table above.
- cd /root/oom/kubernetes
- change values.yaml (etc.) as needed; the override file is /root/oom/kubernetes/onap/values.yaml
- make all; make onap
- "helm deploy dev ./onap --namespace onap", or if you only changed the APPC chart, "helm deploy dev-appc ./onap --namespace onap"
- kubectl get pods --all-namespaces -o wide -w => use this to check deployment status (a consolidated sketch of this sequence follows)
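The same flow as a single script; a sketch assuming the oom checkout at /root/oom as used elsewhere on this page:

```bash
# Consolidated sketch of the chart-update flow above (run as root on k8s-master).
cd /root/oom/kubernetes
vi onap/values.yaml                            # edit the override values as needed
make all; make onap                            # rebuild and package the charts
helm deploy dev ./onap --namespace onap        # full redeploy
# ...or, if only the APPC chart changed:
# helm deploy dev-appc ./onap --namespace onap
kubectl get pods --all-namespaces -o wide -w   # watch the rollout
```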
NOTE: to sync the image versions from the manifest file into values.yaml, execute the commands below (a spot-check sketch follows):
- cd /root/integration
- git pull, then git reset --hard origin/master (master in this case)
- cd /root/oom
- git pull, then git reset --hard origin/master (master in this case)
- cd /root/integration/version-manifest/src/main/scripts/
- ./update-oom-image-versions.sh ../resources/docker-manifest-staging.csv /root/oom/
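After the script runs, it is worth confirming the versions actually changed; a sketch, where the values file path and image name are assumptions based on the standard OOM layout:

```bash
# Spot-check (hypothetical path): confirm the APPC image tags in the OOM
# values files picked up the versions from the staging manifest.
grep -R "onap/appc" /root/oom/kubernetes/appc/values.yaml
```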
To undeploy the dev-appc release (a consolidated sketch follows this list):
- helm undeploy dev-appc --purge
- kubectl get pods --all-namespaces -o wide -w => use this to check undeployment status
- helm del dev-appc --purge
- cd /home/ubuntu
- ./cleanup_appc.sh → deletes the APPC PVCs and PVs
- sudo -i
- rm -rf /dockerdata-nfs/dev*
- exit
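The same cleanup as one script; a sketch assuming cleanup_appc.sh is the helper already present in /home/ubuntu on k8s-master:

```bash
# Consolidated sketch of the undeploy/cleanup sequence above.
helm undeploy dev-appc --purge
kubectl get pods --all-namespaces -o wide -w   # wait for the pods to terminate
helm del dev-appc --purge
(cd /home/ubuntu && ./cleanup_appc.sh)         # removes the APPC PVCs and PVs
sudo rm -rf /dockerdata-nfs/dev*               # clear the shared NFS data
```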
k8s-master APIDOC/CDT Access: (note: use "kubectl get pods --all-namespaces -o wide" to get the VM name where the pod is running)
CDT:
dev-appc-appc-cdt-xxx - k8s-appc1 http://10.12.5.174:30289/index.html#/home
...
dev-appc-appc-0 - k8s-appc5 https://10.12.6.100:30230/apidoc/explorer/index.html
dev-appc-appc-1 - k8s-appc2 https://10.12.5.193:30230/apidoc/explorer/index.html
dev-appc-appc-2 - k8s-appc4 https://10.12.6.73:30230/apidoc/explorer/index.html
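The host part of each URL above is just the External IP of whichever VM the pod landed on, so the URLs must be rebuilt whenever pods move; a sketch of how to re-derive them (30230 is the apidoc NodePort and 30289 the CDT NodePort in this deployment):

```bash
# Map each APPC pod to its node, then pair that node's External IP from the
# table above with the service's NodePort.
kubectl get pods -n onap -o wide | grep dev-appc
kubectl get svc -n onap | grep appc            # shows the NodePort mappings
```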
...
k8s-master-elalto APIDOC/CDT Access: (note: use "kubectl get pods --all-namespaces -o wide" to get the VM name where the pod is running)
CDT:
k8s-appc1-elalto https://10.12.6.113:30289/index.html#/home (you need to add certificate exception on https://10.12.6.113:30211/ first)
APIDOC:
dev-appc-appc-0 - k8s-appc1-elalto https://10.12.6.113:30230/apidoc/explorer/index.html
dev-appc-appc-1 - k8s-appc5-elalto https://10.12.5.3:30230/apidoc/explorer/index.html
dev-appc-appc-2 - k8s-appc3-elalto https://10.12.6.117:30230/apidoc/explorer/index.html
Robot:
k8s-appc5-elalto http://10.12.5.3:30209/
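The logs served at the :30209/logs/ URLs above come from the OOM robot pod; a sketch of triggering a health-check run that writes there, assuming the standard robot helper script in the oom checkout:

```bash
# Run the ONAP robot health check from the oom checkout on the master node;
# the results appear under the robot pod's log directory served at :30209/logs/.
cd /root/oom/kubernetes/robot
./ete-k8s.sh onap health
```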