Preparation
Install ONAP
Make sure that you've installed ONAP R2 release. For installation instructions, please refer ONAP Installation in Vanilla OpenStack.
Make sure that all components pass health check when you do the following:
- ssh to the robot vm, run '/opt/ete.sh health'
You will need to update your /etc/hosts so that you can access the ONAP Portal in your browser. You may also want to add IP addresses of so, sdnc, aai, etc so that you can easily ssh to those VMs. Below is a sample just for your reference:
Code Block:
10.12.5.159 aai-inst2
10.12.5.162 portal
10.12.5.162 portal.api.simpledemo.onap.org
10.12.5.173 dns-server
10.12.5.178 aai
10.12.5.178 aai.api.simpledemo.onap.org
10.12.5.178 aai1
10.12.5.183 dcaecdap00
10.12.5.184 multi-service
10.12.5.189 sdc
10.12.5.189 sdc.api.simpledemo.onap.org
10.12.5.194 robot
10.12.5.2 so
10.12.5.204 dmaap
10.12.5.207 appc
10.12.5.208 dcae-bootstrap
10.12.5.211 dcaeorcl00
10.12.5.214 sdnc
10.12.5.219 dcaecdap02
10.12.5.224 dcaecnsl02
10.12.5.225 dcaecnsl00
10.12.5.227 dcaedokp00
10.12.5.229 dcaecnsl01
10.12.5.238 dcaepgvm00
10.12.5.239 dcaedoks00
10.12.5.241 dcaecdap03
10.12.5.247 dcaecdap04
10.12.5.248 dcaecdap05
10.12.5.249 dcaecdap06
10.12.5.38 policy
10.12.5.38 policy.api.simpledemo.onap.org
10.12.5.48 vid
10.12.5.48 vid.api.simpledemo.onap.org
10.12.5.51 clamp
10.12.5.62 dcaecdap01
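The mapping above lends itself to scripting. A minimal sketch, assuming the plain "IP name" hosts format shown, that turns such text into a name-to-IP dictionary for use in ssh helper scripts:

```python
# Sketch: parse /etc/hosts-style text into {hostname: ip} so scripts can
# look up the ONAP VMs by name. Assumes the simple "IP name [alias...]"
# format shown above.

def parse_hosts(text):
    """Return a dict mapping each hostname/alias to its IP address."""
    mapping = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            mapping[name] = ip
    return mapping

sample = """\
10.12.5.194 robot
10.12.5.2   so
10.12.5.214 sdnc
"""
print(parse_hosts(sample)['sdnc'])   # -> 10.12.5.214
```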
...
Install python-pip and other Python modules (see the comment section):
apt install python-pip
pip install ipaddress
pip install pyyaml
pip install mysql-connector-python
pip install progressbar2
pip install python-novaclient
pip install python-openstackclient
pip install kubernetes
Run automation program to deploy services
Sign into SDC as designer and download the five csar files for infra, vbng, vgmux, vbrg, and rescust. Copy all the csar files into the csar directory.
If Robot has done the model onboarding for you, the CSARs may also be found inside the Robot container in the /tmp/csar directory.
Now you can simply run 'vcpe.py' to see the instructions.
...
service_instance_id: Take it from the var/svc_instance_uuid file. Copy the value for gmux, without the letter 'V'.
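Pulling that value out can be scripted. A hedged sketch only: the exact layout of var/svc_instance_uuid may differ in your release, so the "name: uuid" line format assumed here (and the vbrg uuid in the sample) is illustrative; the gmux uuid matches the heatbridge example below.

```python
# Sketch: extract the gmux service instance UUID from var/svc_instance_uuid.
# ASSUMPTION: the file holds simple "name: uuid" lines -- adjust the parsing
# if your release writes a different format.

def get_service_instance_id(text, key='gmux'):
    """Return the uuid on the line whose name contains `key`."""
    for line in text.splitlines():
        name, sep, value = line.partition(':')
        if sep and key in name.strip().lower():
            return value.strip()
    return None

sample = ("vbrg: 5be927f9-8acc-4d48-b8d6-30e1a81ede51\n"    # illustrative
          "vgmux: d8914ef3-3fdb-4401-adfe-823ee75dc604\n")
print(get_service_instance_id(sample))  # -> d8914ef3-3fdb-4401-adfe-823ee75dc604
```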
Code Block:
#demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>
root@oom-rancher:~/integration/test/vcpe# ~/oom/kubernetes/robot/demo-k8s.sh onap heatbridge vcpe_vfmodule_e2744f48729e4072b20b_201811262136 d8914ef3-3fdb-4401-adfe-823ee75dc604 vCPEvGMUX 10.0.101.21
...
Checklist for Casablanca Release
Assuming you run the vcpe script from the Rancher node, the steps above are summarized here; see the tutorial above for the details of each step.
0. Enable the dev-sdnc-sdnc-0 docker karaf log by editing StatefulSet/dev-sdnc-sdnc (remove the log mount), then delete pod dev-sdnc-sdnc-0 to restart it. Note that the pod may move to a different cluster node after the restart; write down the cluster node IP.
1. Model distribution by `demo-k8s.sh onap init`. This will onboard the VNFs and 4 services, i.e. infrastructure, brg, bng and gmux.
2. Log in to the Portal as the Demo user, then go to the SDC portal to add a BRG subcategory to AllottedResource. The SDC FE API is not working yet:
POST http://sdc.api.fe.simpledemo.onap.org:30206/sdc1/feProxy/rest/v1/category/resources/resourceNewCategory.allotted%20resource/subCategory
Body: {"name":"BRG"}
3. (No longer needed for the Casablanca maintenance release) Update the SO catalogdb tables temp_network_heat_template_lookup and network_resource by setting aic_version_max=3.0 (SO-1184)
4. Update the SO catalogdb table heat_template to set the Generic NeutronNet entry's BODY field to correctly formatted yaml:
Code Block:
mysql -uroot -ppassword -e 'update catalogdb.heat_template set body="
heat_template_version: 2013-05-23

description: A simple Neutron network

parameters:
  network_name:
    type: string
    description: Name of the Neutron Network
    default: ONAP-NW1
  shared:
    type: boolean
    description: Shared amongst tenants
    default: False

outputs:
  network_id:
    description: Openstack network identifier
    value: { get_resource: network }

resources:
  network:
    type: OS::Neutron::Net
    properties:
      name: { get_param: network_name }
      shared: { get_param: shared }" where name="Generic NeutronNet"'
5. Manually create and distribute customer service according to the steps in tutorial
Note: in the Casablanca maintenance release, this step is automated in Robot by running `ete-k8s.sh onap distributevCPEResCust`
5.1 Create a csar directory under vcpe, and copy the following 5 csar files from the Robot docker's /tmp/csar/:
Code Block:
root@oom-rancher:~/integration/test/vcpe# ls -l csar
total 440
-rw-r--r-- 1 root root 105767 Jan 28 18:21 service-Demovcpeinfra-csar.csar
-rw-r--r-- 1 root root  68772 Jan 28 18:21 service-Demovcpevbng-csar.csar
-rw-r--r-- 1 root root  61744 Jan 28 18:22 service-Demovcpevbrgemu-csar.csar
-rw-r--r-- 1 root root  66512 Jan 28 18:22 service-Demovcpevgmux-csar.csar
-rw-r--r-- 1 root root  70943 Jan 28 18:23 service-Vcperescust2019012820190128180325894-csar.csar
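Copying the files out of the pod can be scripted. A sketch that builds the `kubectl cp` commands; the pod name and csar file names here are taken from listings elsewhere on this page (the rescust csar name embeds a timestamp) and will differ in your deployment, so treat them as placeholders:

```python
# Sketch: build one `kubectl cp` command per csar file in the robot pod's
# /tmp/csar. ASSUMPTION: pod name and file names are illustrative -- find
# yours with `kubectl -n onap get pods` and `kubectl exec ... ls /tmp/csar`.

def csar_copy_commands(pod, files, namespace='onap', dest='csar'):
    """Return the kubectl cp command strings for the given csar files."""
    return ['kubectl -n {0} cp {1}:/tmp/csar/{2} {3}/{2}'.format(
                namespace, pod, f, dest) for f in files]

files = ['service-Demovcpeinfra-csar.csar',
         'service-Demovcpevbng-csar.csar',
         'service-Demovcpevbrgemu-csar.csar',
         'service-Demovcpevgmux-csar.csar',
         'service-Vcperescust2019012820190128180325894-csar.csar']
for cmd in csar_copy_commands('dev-robot-robot-66c9dbc759-8j7lr', files):
    print(cmd)
```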
...
Code Block:
--os-tenant-id
--os-project-domain-name
oam_onap_net
oam_onap_subnet
self.vgw_VfModuleModelInvariantUuid
9.1 Run `vcpe.py init`. You may see some sql command failures; they are ok to ignore.
10. Run `vcpe.py infra`
11. Make sure sniro configuration is run as part of the above step.
12. Install curl command inside sdnc-sdnc-0 container
13. Run `healthcheck-k8s.py onap` to check connectivity from sdnc to brg and gmux. If healthcheck-k8s.py fails, check /opt/config/sdnc_ip.txt to verify it has the correct SDNC host IP. If you need to change the SDNC host IP, you need to clean up and rerun `vcpe.py infra`. Also verify that tap interfaces tap-0 and tap-1 are up by running vppctl with the show int command. If the tap interfaces are not up, delete them with vppctl tap delete tap-0 and tap-1, then run `/opt/bind_nic.sh` followed by `/opt/set_nat.sh`.
If you have changed the SDNC_IP after instantiation of the vBNG and vBRGEMU:
- you also need to update /opt/sdnc_ip in the vBNG and run v_bng_install.sh to get the vBNG route tables updated.
- you need to change sdnc_ip.txt and ip.txt on the vBRGEMU
Code Block:
root@zdcpe1cpe01brgemu01-201812261515:~# vppctl tap delete tap-0
Deleted.
root@zdcpe1cpe01brgemu01-201812261515:~# vppctl tap delete tap-1
Deleted.
[WAIT A FEW SECONDS BEFORE DOING THE NEXT STEPS or you may get an error since vppctl lstack returns an error.]
root@zdcpe1cpe01brgemu01-201812261515:~# /opt/bind_nic.sh
root@zdcpe1cpe01brgemu01-201812261515:~# /opt/set_nat.sh
root@zdcpe1cpe01brgemu01-201812261515:~# vppctl show int
              Name               Idx   State   Counter      Count
GigabitEthernet0/4/0              1     up     tx packets      12
                                               tx bytes      3912
local0                            0     down
tap-0                             2     up     rx packets       5
                                               rx bytes       410
                                               drops            7
                                               ip6              1
tap-1                             3     up     rx packets       1
                                               rx bytes        70
                                               drops            7
                                               ip6              1
14. Run `vcpe.py customer`
15. Verify tunnelxconn and brg vxlan tunnels are set up correctly
16. Set up the vgw and brg dhcp and route, and ping from brg to vgw. Note: the vgw public IP shown in Openstack Horizon may be wrong; use the vgw OAM IP to log in.
Code Block:
1. ssh to vGW
2. Restart DHCP: systemctl restart isc-dhcp-server
3. ssh to vBRG
4. Get IP from vGW: dhclient lstack
5. Add route to Internet: ip route add 10.2.0.0/24 via 192.168.1.254 dev lstack
6. ping the web server: ping 10.2.0.10
7. wget http://10.2.0.10
17. Add identity-url property in RegionOne with Postman
18. Add new DG in APPC for closed loop. See APPC release note for steps. CCSDK-741
19. Update the gmux libevel.so. See Eric's comments on the vCPE test status wiki
20. Run the heatbridge Robot script
21. Push the closed loop policy to PAP.
22. Run `vcpe.py loop` and verify vgmux is restarted
Code Block:
VES_MEASUREMENT_OUTPUT event from VES collector to DCAE:
{
"event": {
"commonEventHeader": {
"startEpochMicrosec": 1548802103113302,
"sourceId": "3dcbc028-45f0-4899-82a5-bb9cc7f14b32",
"eventId": "Generic_traffic",
"reportingEntityId": "No UUID available",
"internalHeaderFields": {
"collectorTimeStamp": "Tue, 01 29 2019 10:48:33 UTC"
},
"eventType": "HTTP request rate",
"priority": "Normal",
"version": 1.2,
"reportingEntityName": "zdcpe1cpe01mux01-201901291531",
"sequence": 17,
"domain": "measurementsForVfScaling",
"lastEpochMicrosec": 1548802113113302,
"eventName": "Measurement_vGMUX",
"sourceName": "vcpe_vnf_9ab915ef-f44f-4fe5-a6ce_201901291531"
},
"measurementsForVfScalingFields": {
"cpuUsageArray": [
{
"percentUsage": 0,
"cpuIdentifier": "cpu1",
"cpuIdle": 47.1,
"cpuUsageSystem": 0,
"cpuUsageUser": 5.9
}
],
"measurementInterval": 10,
"requestRate": 540,
"vNicUsageArray": [
{
"transmittedOctetsDelta": 0,
"receivedTotalPacketsDelta": 0,
"vNicIdentifier": "eth0",
"valuesAreSuspect": "true",
"transmittedTotalPacketsDelta": 0,
"receivedOctetsDelta": 0
}
],
"measurementsForVfScalingVersion": 2.1,
"additionalMeasurements": [
{
"name": "ONAP-DCAE",
"arrayOfFields": [
{
"name": "Packet-Loss-Rate",
"value": "0.0"
}
]
}
]
}
}
}
DCAE_CL_OUTPUT event from DCAE to Policy:
{
"closedLoopEventClient": "DCAE_INSTANCE_ID.dcae-tca",
"policyVersion": "v0.0.1",
"policyName": "DCAE.Config_tca-hi-lo",
"policyScope": "DCAE",
"target_type": "VNF",
"AAI": {
"generic-vnf.resource-version": "1548788326279",
"generic-vnf.nf-role": "",
"generic-vnf.prov-status": "ACTIVE",
"generic-vnf.orchestration-status": "Active",
"generic-vnf.is-closed-loop-disabled": false,
"generic-vnf.service-id": "f9457e8c-4afd-45da-9389-46acd9bf5116",
"generic-vnf.in-maint": false,
"generic-vnf.nf-type": "",
"generic-vnf.nf-naming-code": "",
"generic-vnf.vnf-name": "vcpe_vnf_9ab915ef-f44f-4fe5-a6ce_201901291531",
"generic-vnf.model-version-id": "7dc4c0d8-e536-4b4e-92e6-492ae6b8d79a",
"generic-vnf.model-customization-id": "a1ca6c01-8c6c-4743-9039-e34038d74a4d",
"generic-vnf.nf-function": "",
"generic-vnf.vnf-type": "demoVCPEvGMUX/9ab915ef-f44f-4fe5-a6ce 0",
"generic-vnf.model-invariant-id": "637a6f52-6955-414d-a50f-0bfdbd76dac8",
"generic-vnf.vnf-id": "3dcbc028-45f0-4899-82a5-bb9cc7f14b32"
},
"closedLoopAlarmStart": 1548803088140708,
"closedLoopEventStatus": "ONSET",
"closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
"version": "1.0.2",
"target": "generic-vnf.vnf-name",
"requestID": "0e74d6df-627d-4a97-a679-be85ddad6758",
"from": "DCAE"
}
APPC-LCM-READ event from Policy to APPC:
{
"body": {
"input": {
"common-header": {
"timestamp": "2019-01-29T23:05:42.121Z",
"api-ver": "2.00",
"originator-id": "923ac972-6ec1-4e34-b6e1-76dc7481d5af",
"request-id": "923ac972-6ec1-4e34-b6e1-76dc7481d5af",
"sub-request-id": "1",
"flags": {}
},
"action": "Restart",
"action-identifiers": {
"vnf-id": "3dcbc028-45f0-4899-82a5-bb9cc7f14b32"
}
}
},
"version": "2.0",
"rpc-name": "restart",
"correlation-id": "923ac972-6ec1-4e34-b6e1-76dc7481d5af-1",
"type": "request"
}
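The closed-loop events above can also be checked programmatically. A sketch that pulls Packet-Loss-Rate out of a VES measurement event shaped like the first one shown (only the fields actually read are included in the sample):

```python
# Sketch: extract Packet-Loss-Rate from a measurementsForVfScaling VES
# event, following the JSON structure of the event shown above.

def packet_loss_rate(event):
    """Return Packet-Loss-Rate as a float, or None if absent."""
    fields = event['event']['measurementsForVfScalingFields']
    for group in fields.get('additionalMeasurements', []):
        for field in group.get('arrayOfFields', []):
            if field['name'] == 'Packet-Loss-Rate':
                return float(field['value'])
    return None

sample = {'event': {'measurementsForVfScalingFields': {'additionalMeasurements': [
    {'name': 'ONAP-DCAE', 'arrayOfFields': [
        {'name': 'Packet-Loss-Rate', 'value': '22.0'}]}]}}}
print(packet_loss_rate(sample))  # -> 22.0
```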
23. To repeat the create infra step, delete the infra vf-module stacks first and then the network stacks from the Openstack Horizon Orchestration->Stack page, then clean up the record in the sdnc DHCP_MAC table before rerunning `vcpe.py infra`
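The DHCP_MAC cleanup can be done with mysql-connector-python (installed during preparation). A sketch only: the database name and column used here (sdnctl, mac_addr) are assumptions, so check your SDNC schema before executing anything; the table name comes from the step above.

```python
# Sketch: parameterized DELETE for the stale BRG MAC record before a rerun
# of `vcpe.py infra`. ASSUMPTION: database/column names (sdnctl, mac_addr)
# are hypothetical -- verify against your SDNC schema first.

def dhcp_mac_cleanup(mac):
    """Return (statement, params) for a parameterized DELETE."""
    return ('DELETE FROM sdnctl.DHCP_MAC WHERE mac_addr = %s', (mac,))

stmt, params = dhcp_mac_cleanup('fa:16:3e:00:11:22')   # illustrative MAC
print(stmt)
# With mysql-connector-python this would be executed as:
#   cursor.execute(stmt, params); connection.commit()
```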
24. To repeat the create customer step, delete the customer stack first, then clean up the tunnels by running `cleanGMUX.py gmux_public_ip` and `cleanGMUX.py brg_public_ip`. After that you can rerun the create customer command
25. If SDNC needs to be redeployed, you need to again distribute the service model from the SDC UI, create the ip pool, install curl, and set the SDNC VM cluster node routing table. You should then reinstantiate the infra VNFs; otherwise you would need to change the sdnc ip address in the VNFs for the snat config.
Checklist for Dublin and El Alto Releases
- Model distribution by `demo-k8s.sh onap init`. This will onboard the VNFs and 4 services, i.e. infrastructure, brg, bng and gmux
- Run Robot `ete-k8s.sh onap distributevCPEResCust`. This step assumes step 1 successfully distributed the 4 models
- Add customer SDN-ETHERNET-INTERNET (this needs to be put into vcpe init)
- Add identity-url to RegionOne
- Add route on sdnc cluster node `ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3`
- Initialize SDNC ip pool by running from Rancher node `kubectl -n onap exec -it dev-sdnc-sdnc-0 -- /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250`
- Install python and other python libraries
- In El Alto this can be done via ~integration/test/vcpe/bin/setup.sh
- Change the openstack env parameters and the customer service related parameter in vcpecommon.py
- Make sure to change vgw_VfModuleModelInvariantUuid in vcpecommon.py based on the CSAR - it changes with every CSAR
- Run `vcpe.py init`
- Insert the custom service workflow entry in SO catalogdb
Code Block:
root@sb04-rancher:~# kubectl exec dev-mariadb-galera-mariadb-galera-0 -- mysql -uroot -psecretpassword -e "INSERT INTO catalogdb.service_recipe (ACTION, VERSION_STR, DESCRIPTION, ORCHESTRATION_URI, SERVICE_PARAM_XSD, RECIPE_TIMEOUT, SERVICE_TIMEOUT_INTERIM, CREATION_TIMESTAMP, SERVICE_MODEL_UUID) VALUES ('createInstance','1','vCPEResCust 2019-06-03 _04ba','/mso/async/services/CreateVcpeResCustService',NULL,181,NULL, NOW(),'6c4a469d-ca2c-4b02-8cf1-bd02e9c5a7ce')"
10. Run `vcpe.py infra`
11. Install curl command inside sdnc-sdnc-0 container
12. From Rancher node run `healthcheck-k8s.py onap` to check connectivity from sdnc to brg and gmux
13. Update libevel.so in vGMUX
14. Run heatbridge
15. Push the new Policy. Follow Jorge's steps in the referenced Jira ticket.
Code Block:
root@dev-robot-robot-66c9dbc759-8j7lr:/# curl -k --silent --user 'healthcheck:zb!XztG34' -X POST "https://policy-api:6969/policy/api/v1/policytypes/onap.policies.controlloop.Operational/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @operational.vcpe.json.txt
{"policy-id":"operational.vcpe","policy-version":"1","content":"controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e%0D%0A++trigger_policy%3A+unique-policy-id-1-restart%0D%0A++timeout%3A+3600%0D%0A++abatement%3A+true%0D%0A+%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-restart%0D%0A++++name%3A+Restart+the+VM%0D%0A++++description%3A%0D%0A++++actor%3A+APPC%0D%0A++++recipe%3A+Restart%0D%0A++++target%3A%0D%0A++++++type%3A+VM%0D%0A++++retry%3A+3%0D%0A++++timeout%3A+1200%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard"}
root@dev-robot-robot-66c9dbc759-8j7lr:/# curl --silent -k --user 'healthcheck:zb!XztG34' -X POST "https://policy-pap:6969/policy/pap/v1/pdps/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @operational.vcpe.pap.json.txt
{
"policies": [
{
"policy-id": "operational.vcpe",
"policy-version": 1
}
]
}
16. Start the closed loop with `./vcpe.py loop` to trigger the packet-drop VES event. You may need to run the command twice if the first run fails.
Note: you may need to comment out the set_closed_loop call at vcpe.py line 165 if the related Jira issue still affects your release:
#vcpecommon.set_closed_loop_policy(policy_template_file)
Code Block:
[2019-06-04 11:03:49,822][INFO ][pool-5-thread-20][org.onap.dcae.common.EventProcessor] - QueueSize:0 EventProcessor Removing element: {"VESversion":"v5","VESuniqueId":"88f3548c-1a93-4f1d-8a2a-001f8d4a2aea","event":{"commonEventHeader":{"startEpochMicrosec":1559646219672586,"sourceId":"d92444f5-1985-4e15-807e-b8de2d96e489","eventId":"Generic_traffic","reportingEntityId":"No UUID available","eventType":"HTTP request rate","priority":"Normal","version":1.2,"reportingEntityName":"zdcpe1cpe01mux01-201906032354","sequence":9,"domain":"measurementsForVfScaling","lastEpochMicrosec":1559646229672586,"eventName":"Measurement_vGMUX","sourceName":"vcpe_vnf_vcpe_vgmux_201906032354"},"measurementsForVfScalingFields":{"cpuUsageArray":[{"percentUsage":0,"cpuIdentifier":"cpu1","cpuIdle":100,"cpuUsageSystem":0,"cpuUsageUser":0}],"measurementInterval":10,"requestRate":492,"vNicUsageArray":[{"transmittedOctetsDelta":0,"receivedTotalPacketsDelta":0,"vNicIdentifier":"eth0","valuesAreSuspect":"true","transmittedTotalPacketsDelta":0,"receivedOctetsDelta":0}],"measurementsForVfScalingVersion":2.1,"additionalMeasurements":[{"name":"ONAP-DCAE","arrayOfFields":[{"name":"Packet-Loss-Rate","value":"22.0"}]}]}}}
17. Stop the closed loop for testing with `./vcpe.py noloss`
Frankfurt vCPE.py log for creating networks: attached file vcpe.20200625.log
Typical Errors and Solutions
...
It is most likely due to an error in the vnf-topology-assign DG. This happens in R2 and should have been fixed in R3 (refer to the related Jira ticket).
Enter the SDNC docker:
1. Make a copy of GENERIC-RESOURCE-API_vnf-topology-operation-assign.xml in the sdnc_controller_container under /opt/sdnc/svclogic/graphs/generic-resource-api
2. Edit GENERIC-RESOURCE-API_vnf-topology-operation-assign.xml to replace "<break> </break>" with "<break/>" or "<break></break>"
   a. Optionally you can change the version to something like 1.3.3-SNAPSHOT-FIX and update graph.versions to match, but that is not needed if the xml failed to load.
3. Run /opt/sdnc/svclogic/bin/install.sh
   This will install the edited DG and make it active, as long as the version in the xml and the version in graph.versions match
4. Re-run /opt/sdnc/svclogic/bin/showActiveGraphs.sh and you should see the active DG
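Step 2's edit is a single text substitution and can be scripted instead of done by hand. A sketch; the surrounding XML in the demo value is illustrative, not taken from the actual DG:

```python
# Sketch: the substitution step 2 performs on the DG XML -- replace the
# malformed "<break> </break>" with a self-closing "<break/>".

def fix_break_tags(xml_text):
    """Replace '<break> </break>' with '<break/>'."""
    return xml_text.replace('<break> </break>', '<break/>')

before = '<outcome value="success"><break> </break></outcome>'  # illustrative
print(fix_break_tags(before))  # -> <outcome value="success"><break/></outcome>
```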
DHCP server doesn't work
- ssh to the dhcp server
- systemctl status kea-dhcp4-server.service
- If the service is not installed, install it with 'apt install kea-dhcp4-server'
- If the service is installed, most likely /usr/local/lib/kea-sdnc-notify.so is missing. Download this file from the following link and put it in /usr/local/lib. Link: kea-sdnc-notify.so
- systemctl restart kea-dhcp4-server.service
...
Unable to change subnet name
When running the "vcpe.py infra" command, if you see an error message saying the subnet can't be found, it may be because your python-openstackclient is not the latest version and doesn't support the "openstack subnet set --name" command option. Upgrade the module with "pip install --upgrade python-openstackclient".
Unable to generate VM name error from SDNC
Received error from SDN-C: Unable to generate VM name: naming-policy-generate-name: input.policy-instance-name is not set and input.policy is ASSIGN.
To resolve this: check whether the vgw_VfModuleModelInvariantUuid parameter in the vcpecommon.py script has been updated with your ResCust_svc VF_ModuleModelInvariantUuid. Don't forget to update it for every new customer.
Add this to CheckList:
# CHANGEME: vgw_VfModuleModelInvariantUuid is in the rescust service csar; look in service-VcpesvcRescust1118-template.yml for the groups vgw module metadata. TODO: read this value automatically
self.vgw_VfModuleModelInvariantUuid = '26d6a718-17b2-4ba8-8691-c44343b2ecd2'
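Toward the TODO above, a sketch that digs the value out of the csar automatically. Assumptions: the csar is a zip archive, and the service template yml carries the key spelled vfModuleModelInvariantUUID in the vgw module metadata; verify both against your csar before relying on this.

```python
# Sketch: read vgw_VfModuleModelInvariantUuid out of the rescust service
# csar. ASSUMPTION: key spelling and template layout may vary by release;
# treat this as a starting point, not the definitive extractor.
import re
import zipfile

def vgw_invariant_uuid(csar):
    """Scan yml entries in the csar (zip path or file object) for the
    vgw module's vfModuleModelInvariantUUID."""
    uuid_re = re.compile(r'vfModuleModelInvariantUUID:\s*([0-9a-fA-F-]{36})')
    with zipfile.ZipFile(csar) as z:
        for name in z.namelist():
            if not name.endswith(('.yml', '.yaml')):
                continue
            text = z.read(name).decode('utf-8', errors='replace')
            if 'vgw' in text.lower():          # only templates mentioning vgw
                match = uuid_re.search(text)
                if match:
                    return match.group(1)
    return None
```

Usage would be `vgw_invariant_uuid('csar/service-Vcperescust...-csar.csar')`, pasting the result into vcpecommon.py.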