Table of Contents

Preparation

Install ONAP

Make sure that you've installed the ONAP R2 release. For installation instructions, please refer to ONAP Installation in Vanilla OpenStack.

...

service_instance_id: take it from the var/svc_instance_uuid file. Copy the value for gmux without the leading letter 'V'.
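Stripping the leading 'V' can be scripted; a minimal sketch (the exact layout of var/svc_instance_uuid is an assumption — copy the gmux value manually if it differs):

```shell
# Strip a leading 'V'/'v' from a service instance UUID so it can be
# passed to heatbridge. The file format is an assumption; verify it.
strip_v() { printf '%s\n' "$1" | sed 's/^[Vv]//'; }
strip_v "Vd8914ef3-3fdb-4401-adfe-823ee75dc604"   # prints d8914ef3-3fdb-4401-adfe-823ee75dc604
```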


Code Block
# demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>
root@oom-rancher:~/integration/test/vcpe# ~/oom/kubernetes/robot/demo-k8s.sh onap heatbridge vcpe_vfmodule_e2744f48729e4072b20b_201811262136 d8914ef3-3fdb-4401-adfe-823ee75dc604 vCPEvGMUX 10.0.101.21

...

Checklist for Casablanca Release 

Assuming you run the vCPE scripts from the Rancher node, the steps above are summarized here; see the tutorial above for the details of each step.

0. Enable the dev-sdnc-sdnc-0 docker karaf log by editing StatefulSet/dev-sdnc-sdnc (remove the log mount), then deleting the pod dev-sdnc-sdnc-0 to restart it. Note that the pod may move to a different cluster node after the restart; write down the cluster node IP.

1. Model distribution by running `demo-k8s.sh onap init`. This will onboard the VNFs and 4 services, i.e. infrastructure, brg, bng and gmux.
2. Log in to the Portal as the Demo user, then go to the SDC portal to add the BRG subcategory to AllottedResource. The SDC FE API for this is not working yet:
POST http://sdc.api.fe.simpledemo.onap.org:30206/sdc1/feProxy/rest/v1/category/resources/resourceNewCategory.allotted%20resource/subCategory
Body: {"name":"BRG"}
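For reference, the request above can be expressed as a curl command. The sketch below only prints the command so you can review it first; the USER_ID value (cs0008, the default SDC designer user) is an assumption:

```shell
# Print (not execute) the SDC FE subcategory request as a curl command.
# The USER_ID header value cs0008 is an assumption; substitute yours.
SDC_FE='http://sdc.api.fe.simpledemo.onap.org:30206'
SUBCAT='/sdc1/feProxy/rest/v1/category/resources/resourceNewCategory.allotted%20resource/subCategory'
BODY='{"name":"BRG"}'
printf "curl -X POST -H 'Content-Type: application/json' -H 'USER_ID: cs0008' -d '%s' '%s%s'\n" "$BODY" "$SDC_FE" "$SUBCAT"
```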
3. (No longer needed for Casablanca MR) Update the SO catalogdb tables temp_network_heat_template_lookup and network_resource by setting aic_version_max=3.0 (SO-1184)
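The step-3 update can be sketched as SQL. This block only prints the statements for review; the table and column names come from SO-1184, but verify them against your catalogdb schema before running them with the same mysql client used in the next step:

```shell
# Print the SQL for the (pre-Casablanca-MR) aic_version_max fix.
# Review it, then run it against catalogdb with your mysql client.
SQL="UPDATE catalogdb.temp_network_heat_template_lookup SET aic_version_max='3.0';
UPDATE catalogdb.network_resource SET aic_version_max='3.0';"
printf '%s\n' "$SQL"
```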
4. Update the SO catalogdb table heat_template to set the Generic NeutronNet entry's BODY field to the correct YAML format:

Code Block
titleneutron.sh
collapsetrue
mysql -uroot -ppassword -e 'update catalogdb.heat_template set body="
heat_template_version: 2013-05-23
description: A simple Neutron network
parameters:
  network_name:
    type: string
    description: Name of the Neutron Network
    default: ONAP-NW1
  shared:
    type: boolean
    description: Shared amongst tenants
    default: False
outputs:
  network_id:
    description: Openstack network identifier
    value: { get_resource: network }
resources:
  network:
    type: OS::Neutron::Net
    properties:
      name: { get_param: network_name }
      shared: { get_param: shared }" where name="Generic NeutronNet"'

5. Manually create and distribute the customer service according to the steps in the tutorial

Note: in Casablanca maintenance, this step is automated in Robot by running `ete-k8s.sh onap distributevCPEResCust`

5.1 Create a csar directory under vcpe, and copy the following 5 csar files from the Robot docker container's /tmp/csar/ directory

Code Block
titleCopy 5 service csars from robot container
collapsetrue
root@oom-rancher:~/integration/test/vcpe# ls -l csar
total 440
-rw-r--r-- 1 root root 105767 Jan 28 18:21 service-Demovcpeinfra-csar.csar
-rw-r--r-- 1 root root 68772 Jan 28 18:21 service-Demovcpevbng-csar.csar
-rw-r--r-- 1 root root 61744 Jan 28 18:22 service-Demovcpevbrgemu-csar.csar
-rw-r--r-- 1 root root 66512 Jan 28 18:22 service-Demovcpevgmux-csar.csar
-rw-r--r-- 1 root root 70943 Jan 28 18:23 service-Vcperescust2019012820190128180325894-csar.csar
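One way to do the copy is with kubectl cp. The sketch below creates the directory and prints the command for review; the Robot pod name is a placeholder — look it up with `kubectl -n onap get pod | grep robot`:

```shell
# Create the csar directory and print the kubectl cp command for review.
# <robot-pod> is a placeholder for the actual Robot pod name.
mkdir -p csar
ROBOT_POD='<robot-pod>'
printf 'kubectl -n onap cp %s:/tmp/csar ./csar\n' "$ROBOT_POD"
```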

...

Code Block
titleSet SDNC cluster node route
collapsetrue
root@release-rancher:~# kubectl -n onap get pod -o wide | grep sdnc-0
dev-sdnc-sdnc-0 2/2 Running 0 5h38m 10.42.3.22 release-k8s-11 <none> <none>
root@release-rancher:~# source ~/integration/deployment/heat/onap-rke/env/windriver/Integration-SB-04-openrc (source your openstack env file)
root@release-rancher:~# openstack server show -f json release-k8s-11 | jq .addresses
"oam_network_nzbD=10.0.0.10, 10.12.6.36"
root@release-rancher:~# ssh -i ~/.ssh/onap_dev ubuntu@10.12.6.36 -- sudo ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3

...

10. Run `vcpe.py infra`
11. Make sure the SNIRO configuration is run as part of the above step.
12. Install the curl command inside the sdnc-sdnc-0 container
13. Run `healthcheck-k8s.py onap` to check connectivity from SDNC to the BRG and GMUX. If healthcheck-k8s.py fails, check /opt/config/sdnc_ip.txt to verify that it contains the correct SDNC host IP. If you need to change the SDNC host IP, you need to clean up and rerun `vcpe.py infra`. Also verify that the tap interfaces tap-0 and tap-1 are up by running `vppctl show int`. If the tap interfaces are not up, delete them with `vppctl tap delete tap-0` and `vppctl tap delete tap-1`, then run `/opt/bind_nic.sh` followed by `/opt/set_nat.sh`.

Code Block
titlevppctl tap command
collapsetrue
root@zdcpe1cpe01brgemu01-201812261515:~# vppctl tap  delete tap-0
Deleted.
root@zdcpe1cpe01brgemu01-201812261515:~# vppctl tap  delete tap-1
Deleted.
root@zdcpe1cpe01brgemu01-201812261515:~# /opt/bind_nic.sh
root@zdcpe1cpe01brgemu01-201812261515:~# /opt/set_nat.sh
root@zdcpe1cpe01brgemu01-201812261515:~# vppctl show int
              Name               Idx       State          Counter          Count
GigabitEthernet0/4/0              1         up       tx packets                    12
                                                     tx bytes                    3912
local0                            0        down
tap-0                             2         up       rx packets                     5
                                                     rx bytes                     410
                                                     drops                          7
                                                     ip6                            1
tap-1                             3         up       rx packets                     1
                                                     rx bytes                      70
                                                     drops                          7
                                                     ip6                            1


14. Run `vcpe.py customer`
15. Verify that the tunnelxconn and brg VxLAN tunnels are set up correctly
16. Set up the vGW and vBRG DHCP and routes, and ping from the vBRG to the vGW. Note that the vGW public IP shown on OpenStack Horizon may be wrong; use the vGW OAM IP to log in.

Code Block
titleTest data plane
collapsetrue
 1. ssh to vGW
 2. Restart DHCP: systemctl restart isc-dhcp-server
 3. ssh to vBRG
 4. Get IP from vGW: dhclient lstack
 5. Add route to Internet: ip route add 10.2.0.0/24 via 192.168.1.254 dev lstack
 6. ping the web server: ping 10.2.0.10
 7. wget http://10.2.0.10

17. Add the identity-url property to RegionOne with Postman
18. Add the new DG in APPC for the closed loop. See the APPC release notes for the steps (CCSDK-741)
19. Update the vGMUX libevel.so. See Eric's comments on the vCPE test status wiki

20. Run the heatbridge Robot script

21. Push the closed-loop policy on PAP.
22. Run `vcpe.py loop` and verify that the vGMUX is restarted
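Step 17 above can also be done without Postman. The sketch below only prints a candidate curl command; the AAI endpoint, schema version (v11), credentials (AAI:AAI), and the fact that a PUT on cloud-region must carry the full existing object (fetch it first, add identity-url, include resource-version) are all assumptions — check your AAI setup before using it:

```shell
# Print (not execute) a candidate AAI call for adding identity-url to
# RegionOne. Endpoint, path version, and credentials are assumptions.
AAI='https://aai.api.sparky.simpledemo.onap.org:30233'
REGION='/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne'
printf "curl -sk -u AAI:AAI -X PUT -H 'Content-Type: application/json' -H 'X-FromAppId: vcpe' -H 'X-TransactionId: 1' '%s%s'\n" "$AAI" "$REGION"
```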


Code Block
titleClosed loop event messages
collapsetrue
VES_MEASUREMENT_OUTPUT event from VES collector to DCAE:
{
	"event": {
		"commonEventHeader": {
			"startEpochMicrosec": 1548802103113302,
			"sourceId": "3dcbc028-45f0-4899-82a5-bb9cc7f14b32",
			"eventId": "Generic_traffic",
			"reportingEntityId": "No UUID available",
			"internalHeaderFields": {
				"collectorTimeStamp": "Tue, 01 29 2019 10:48:33 UTC"
			},
			"eventType": "HTTP request rate",
			"priority": "Normal",
			"version": 1.2,
			"reportingEntityName": "zdcpe1cpe01mux01-201901291531",
			"sequence": 17,
			"domain": "measurementsForVfScaling",
			"lastEpochMicrosec": 1548802113113302,
			"eventName": "Measurement_vGMUX",
			"sourceName": "vcpe_vnf_9ab915ef-f44f-4fe5-a6ce_201901291531"
		},
		"measurementsForVfScalingFields": {
			"cpuUsageArray": [
				{
					"percentUsage": 0,
					"cpuIdentifier": "cpu1",
					"cpuIdle": 47.1,
					"cpuUsageSystem": 0,
					"cpuUsageUser": 5.9
				}
			],
			"measurementInterval": 10,
			"requestRate": 540,
			"vNicUsageArray": [
				{
					"transmittedOctetsDelta": 0,
					"receivedTotalPacketsDelta": 0,
					"vNicIdentifier": "eth0",
					"valuesAreSuspect": "true",
					"transmittedTotalPacketsDelta": 0,
					"receivedOctetsDelta": 0
				}
			],
			"measurementsForVfScalingVersion": 2.1,
			"additionalMeasurements": [
				{
					"name": "ONAP-DCAE",
					"arrayOfFields": [
						{
							"name": "Packet-Loss-Rate",
							"value": "0.0"
						}
					]
				}
			]
		}
	}
}






DCAE_CL_OUTPUT event from DCAE to Policy:
{
	"closedLoopEventClient": "DCAE_INSTANCE_ID.dcae-tca",
	"policyVersion": "v0.0.1",
	"policyName": "DCAE.Config_tca-hi-lo",
	"policyScope": "DCAE",
	"target_type": "VNF",
	"AAI": {
		"generic-vnf.resource-version": "1548788326279",
		"generic-vnf.nf-role": "",
		"generic-vnf.prov-status": "ACTIVE",
		"generic-vnf.orchestration-status": "Active",
		"generic-vnf.is-closed-loop-disabled": false,
		"generic-vnf.service-id": "f9457e8c-4afd-45da-9389-46acd9bf5116",
		"generic-vnf.in-maint": false,
		"generic-vnf.nf-type": "",
		"generic-vnf.nf-naming-code": "",
		"generic-vnf.vnf-name": "vcpe_vnf_9ab915ef-f44f-4fe5-a6ce_201901291531",
		"generic-vnf.model-version-id": "7dc4c0d8-e536-4b4e-92e6-492ae6b8d79a",
		"generic-vnf.model-customization-id": "a1ca6c01-8c6c-4743-9039-e34038d74a4d",
		"generic-vnf.nf-function": "",
		"generic-vnf.vnf-type": "demoVCPEvGMUX/9ab915ef-f44f-4fe5-a6ce 0",
		"generic-vnf.model-invariant-id": "637a6f52-6955-414d-a50f-0bfdbd76dac8",
		"generic-vnf.vnf-id": "3dcbc028-45f0-4899-82a5-bb9cc7f14b32"
	},
	"closedLoopAlarmStart": 1548803088140708,
	"closedLoopEventStatus": "ONSET",
	"closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
	"version": "1.0.2",
	"target": "generic-vnf.vnf-name",
	"requestID": "0e74d6df-627d-4a97-a679-be85ddad6758",
	"from": "DCAE"
}


APPC-LCM-READ event from Policy to APPC:
{
  "body": {
    "input": {
      "common-header": {
        "timestamp": "2019-01-29T23:05:42.121Z",
        "api-ver": "2.00",
        "originator-id": "923ac972-6ec1-4e34-b6e1-76dc7481d5af",
        "request-id": "923ac972-6ec1-4e34-b6e1-76dc7481d5af",
        "sub-request-id": "1",
        "flags": {}
      },
      "action": "Restart",
      "action-identifiers": {
        "vnf-id": "3dcbc028-45f0-4899-82a5-bb9cc7f14b32"
      }
    }
  },
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "923ac972-6ec1-4e34-b6e1-76dc7481d5af-1",
  "type": "request"
}

...


23. To repeat the create infra step, delete the infra vf-module stacks first and then the network stacks from the OpenStack Horizon Orchestration->Stack page, then clean up the record in the SDNC DHCP_MAC table before rerunning `vcpe.py infra`
24. To repeat the create customer step, delete the customer stack, then clean up the tunnels by running `cleanGMUX.py gmux_public_ip` and `cleanGMUX.py brg_public_ip`. After that you can rerun the create customer command
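The two cleanup steps above can be collected into a checklist of commands. This sketch only prints them for review; the stack names, the SDNC DB pod and password, and the sdnctl database name are placeholders/assumptions:

```shell
# Print the cleanup commands for steps 23-24. All <...> values are
# placeholders; the sdnctl database name is an assumption.
CLEANUP=$(cat <<'EOF'
# Step 23: before rerunning 'vcpe.py infra'
openstack stack delete <infra-vfmodule-stack> <network-stack>
kubectl -n onap exec <sdnc-db-pod> -- mysql -uroot -p<password> -e 'DELETE FROM sdnctl.DHCP_MAC;'
# Step 24: before rerunning 'vcpe.py customer'
openstack stack delete <customer-stack>
./cleanGMUX.py <gmux_public_ip>
./cleanGMUX.py <brg_public_ip>
EOF
)
printf '%s\n' "$CLEANUP"
```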

Checklist for Dublin Release

  1. Model distribution by `demo-k8s.sh onap init`. This will onboard the VNFs and 4 services, i.e. infrastructure, brg, bng and gmux
  2. Run Robot `ete-k8s.sh onap distributevCPEResCust`. This step assumes step 1 successfully distributed the 4 models
  3. Add customer SDN-ETHERNET-INTERNET (need to put into vcpe init)
  4. Add route on sdnc cluster node `ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3`
  5. Initialize SDNC ip pool by running from Rancher node `kubectl -n onap exec -it dev-sdnc-sdnc-0 -- /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250`
  6. Install python and other python libraries
  7. Change the openstack env parameters and the customer service related parameter in vcpecommon.py
  8. Run `vcpe.py init`
  9. Insert a service workflow entry in SO catalogdb
Code Block
titleInsert customer workflow into SO service table
collapsetrue
root@sb04-rancher:~# kubectl exec dev-mariadb-galera-mariadb-galera-0 -- mysql -uroot -psecretpassword -e "INSERT INTO catalogdb.service_recipe (ACTION, VERSION_STR, DESCRIPTION, ORCHESTRATION_URI, SERVICE_PARAM_XSD, RECIPE_TIMEOUT, SERVICE_TIMEOUT_INTERIM, CREATION_TIMESTAMP, SERVICE_MODEL_UUID) VALUES ('createInstance','1','vCPEResCust 2019-06-03 _04ba','/mso/async/services/CreateVcpeResCustService',NULL,181,NULL, NOW(),'6c4a469d-ca2c-4b02-8cf1-bd02e9c5a7ce')"


Typical Errors and Solutions

...

It is most likely due to an error in the vnf-topology-assign DG. This happens in R2 and should have been fixed in R3 (refer to SDNC-351). The solution:

  1. Enter the SDNC docker container.

  2. Make a copy of GENERIC-RESOURCE-API_vnf-topology-operation-assign.xml in the sdnc_controller_container under /opt/sdnc/svclogic/graphs/generic-resource-api.

  3. Edit GENERIC-RESOURCE-API_vnf-topology-operation-assign.xml to replace "<break> </break>" with "<break/>" or "<break></break>". Optionally you can change the version to something like 1.3.3-SNAPSHOT-FIX and update graph.versions to match, but that is not needed if the xml failed to load.

  4. Run /opt/sdnc/svclogic/bin/install.sh. This will install the edited DG and make it active as long as the version in the xml and the version in graph.versions match.

  5. Re-run /opt/sdnc/svclogic/bin/showActiveGraphs.sh and you should see the active DG.

DHCP server doesn't work

  1. ssh to the dhcp server
  2. systemctl status kea-dhcp4-server.service
  3. If the service is not installed, run 'apt install kea-dhcp4-server'
  4. If the service is installed, most likely /usr/local/lib/kea-sdnc-notify.so is missing. Download this file from the following link and put it in /usr/local/lib. Link: kea-sdnc-notify.so
  5. systemctl restart kea-dhcp4-server.service

...