vCPE Use Case - Customer Service Instantiation - 171103.pdf
11/15/2017
Regression test in SB01
Fix the SDNC DB with the following:
update ALLOTTED_RESOURCE_MODEL set ecomp_generated_naming='Y',type='TunnelXConnect',allotted_resource_type='TunnelXConnect' where customization_uuid='f3ef75e8-5cb5-4c1b-9a5a-5ddcefb70b57';
update ALLOTTED_RESOURCE_MODEL set ecomp_generated_naming='Y',type='BRG',allotted_resource_type='TunnelXConnect' where customization_uuid='4c3f8585-d8a8-4fd9-bad8-87296529c4d0';
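A hedged sketch of applying the fix from the host: the container name and the DB credentials/schema are assumptions (sdnctl/gamma are the common demo defaults, but verify yours) — only the two SQL statements come from the notes.

```shell
# Sketch: apply the two ALLOTTED_RESOURCE_MODEL updates from the host.
# Container name and credentials below are assumptions -- check your deployment.
SQL_FIX=$(cat <<'EOF'
update ALLOTTED_RESOURCE_MODEL set ecomp_generated_naming='Y',type='TunnelXConnect',allotted_resource_type='TunnelXConnect' where customization_uuid='f3ef75e8-5cb5-4c1b-9a5a-5ddcefb70b57';
update ALLOTTED_RESOURCE_MODEL set ecomp_generated_naming='Y',type='BRG',allotted_resource_type='TunnelXConnect' where customization_uuid='4c3f8585-d8a8-4fd9-bad8-87296529c4d0';
EOF
)
# echo "$SQL_FIX" | docker exec -i sdnc_db_container mysql -u sdnctl -pgamma sdnctl
```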
An error occurred in SDNC for the tunnelxconn assign due to the previous null values in the AAI query:
2017-11-15T12:16:59.863Z, Method : GET
2017-11-15 12:16:59,864 | INFO | qtp79019442-4883 | AAIService | 300 - org.openecomp.sdnc.sli.aai - 0.1.0 | Request URL : https://aai.api.simpledemo.openecomp.org:8443/aai/v11/business/customers/customer/null/service-subscriptions/service-subscription/null/service-instances/service-instance/null/allotted-resources/allotted-resource/67986ea9-e932-4ae5-9f77-26693e103d1d
2017-11-15 12:16:59,864 | INFO | qtp79019442-4883 | AAIService | 300 - org.openecomp.sdnc.sli.aai - 0.1.0 | Missing requestID. Assigned af5b154f-fc6d-477b-bcca-6fd29cb57cf2
2017-11-15 12:16:59,920 | INFO | qtp79019442-4883 | metric | 294 - org.onap.ccsdk.sli.core.sli-common - 0.1.2 |
2017-11-15 12:16:59,920 | INFO | qtp79019442-4883 | AAIService | 300 - org.openecomp.sdnc.sli.aai - 0.1.0 | Response code : 404, Not Found
2017-11-15 12:16:59,920 | INFO | qtp79019442-4883 | AAIService | 300 - org.openecomp.sdnc.sli.aai - 0.1.0 | Response data : Entry does not exist.
The request from SO looks good:
2017-11-15T12:17:00.075Z|3b6d089c-4ac5-4d61-9d02-173616322088|MSO-RA-5212I Sending request to SDNC:RequestTunables [reqId=3b6d089c-4ac5-4d61-9d02-173616322088, msoAction=, operation=tunnelxconn-topology-operation, action=assign, reqMethod=POST, sdncUrl=http://c1.vm1.sdnc.simpledemo.openecomp.org:8282/restconf/operations/GENERIC-RESOURCE-API:tunnelxconn-topology-operation, timeout=270000, headerName=sdnc-request-header, sdncaNotificationUrl=http://c1.vm1.mso.simpledemo.openecomp.org:8080/adapters/rest/SDNCNotify, namespace=org:onap:sdnc:northbound:generic-resource]
2017-11-15T12:17:00.075Z|3b6d089c-4ac5-4d61-9d02-173616322088|SDNC Request Body:
<?xml version="1.0" encoding="UTF-8"?><input xmlns="org:onap:sdnc:northbound:generic-resource"><sdnc-request-header><svc-request-id>3b6d089c-4ac5-4d61-9d02-173616322088</svc-request-id><svc-action>assign</svc-action><svc-notification-url>http://c1.vm1.mso.simpledemo.openecomp.org:8080/adapters/rest/SDNCNotify</svc-notification-url></sdnc-request-header><request-information ><request-id>2240cf26-1e38-4b0d-9d24-a27cf32c4098</request-id><request-action>CreateTunnelXConnInstance</request-action><source>MSO</source><notification-url/><order-number/><order-version/> </request-information><service-information ><service-id/><subscription-service-type>vCPE</subscription-service-type><onap-model-information/><service-instance-id>33b22c7c-aade-4c28-8f81-4ee9c223c388</service-instance-id><subscriber-name/><global-customer-id>SDN-ETHERNET-INTERNET</global-customer-id> </service-information><allotted-resource-information ><allotted-resource-id>67986ea9-e932-4ae5-9f77-26693e103d1d</allotted-resource-id><allotted-resource-type>tunnelxconn</allotted-resource-type><parent-service-instance-id>0eea37ae-c25e-4f57-93f9-e87e6b3b69ed</parent-service-instance-id><onap-model-information><model-invariant-uuid>09ebcb84-c683-48c4-8120-4318489a56d0</model-invariant-uuid><model-uuid>d0a16427-34ec-4dec-9b83-c2ec04f60525</model-uuid><model-customization-uuid>f3ef75e8-5cb5-4c1b-9a5a-5ddcefb70b57</model-customization-uuid><model-version>1.0</model-version><model-name>tunnelxconn111301</model-name> </onap-model-information> </allotted-resource-information><tunnelxconn-request-input ><brg-wan-mac-address>fa:16:3e:19:65:96</brg-wan-mac-address> </tunnelxconn-request-input></input>
To configure vGMUX VES event including packet loss rate and vnfid, follow the instructions from Eric:
There is a vgmux image in the ONAP-vCPE project space called vgmux2-base-ubuntu-16-04. This one has the ability to configure the sourceName in the VES event to something different than the default value (which is the vnf-id present in the VM's openstack metadata). Some documentation:
Configuring the VES mode via REST. This will set the 'demo' mode and packet loss to 40%, but does not change the sourceName:
curl -i -H "Content-Type:application/json" --data '{"mode":{"working-mode":"demo","base-packet-loss":40,"source-name":""}}' -X POST -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent
Delete the config in order to change it via REST:
curl -i -H "Content-Type:application/json" -X DELETE -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent/mode
curl -i -H "Content-Type:application/json" --data '{"mode":{"working-mode":"demo","base-packet-loss":88,"source-name":"testing-123-ABC"}}' -X POST -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent
Configuring the VES mode via CLI. Query:
# vppctl show ves mode
Mode    Base Packet Loss Rate    Source Name
Demo    88.0%                    testing-123-ABC
Set:
vppctl set ves mode demo base 77 source hello-there
This sets the sourceName to "hello-there". Leave off the 'source <name>' arguments to set back to the default (i.e., vnf-id from openstack metadata).
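Eric's delete-then-set REST sequence can be wrapped into one small helper. The endpoint, credentials, and JSON shape come from the notes above; the function name and parameterization are ours, as a sketch only.

```shell
# Illustrative helper around the VES-mode REST calls from the notes.
# set_ves_mode is our own name; endpoint and admin:admin creds are from the notes.
VES_URL=http://127.0.0.1:8183/restconf/config/vesagent:vesagent

set_ves_mode() {
  # $1 = base packet loss %, $2 = optional sourceName override (empty keeps default)
  PAYLOAD="{\"mode\":{\"working-mode\":\"demo\",\"base-packet-loss\":$1,\"source-name\":\"${2:-}\"}}"
  # Existing config must be deleted before it can be changed via REST:
  # curl -i -H "Content-Type:application/json" -X DELETE -u admin:admin "$VES_URL/mode"
  # curl -i -H "Content-Type:application/json" --data "$PAYLOAD" -X POST -u admin:admin "$VES_URL"
}

set_ves_mode 40 testing-123-ABC
```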
11/14/2017
- Regression test in SB01
- SO API handler dropped source from the VID request, causing an error in BPMN. Fixed. See SO-340.
- SO cannot pass AAI authentication. The problem is that AAI replaced the old cert (expiring 11/30/2017) with a new one (expiring Dec. 2018). The solution is to install the CA in SO.
- In the SO docker container, find aai.crt and replace its content with ca_bundle_for_openecomp.txt (which includes both the root and intermediate CA).
- run "update-ca-certificates -f"
- restart SO docker.
SO failed to query the AAI cloud region; the problem is the default config in /etc/mso/config.d/mso.bpmn.urn.properties. The correct lines are below. See SO-343.
- Preloading works!
- Successfully created infrastructure!
11/11/2017
- Regression test in SB01
An ONAP instance based on the latest build was installed on 11/12. Go to the appc_vm and change /opt/config/docker_version.txt from 1.1-STAGING-latest to 1.2-STAGING-latest.
- Onboarding, service design, service distribution completed in SB01.
- SO does not add recipe automatically. Manual insertion was done.
- SDNC does not insert correct data into its DB. Tracked by SDNC-194.
- In ALLOTTED_RESOURCE_MODEL, BRG and TunnelXConn do not have 'Y' for ecomp_generated_naming, their types are "VF", and their allotted_resource_type fields are null.
- VF_MODEL is good.
- SERVICE_MODEL is good.
- VF_MODULE_MODEL is good.
- VID has a problem creating services. Tracked by VID-91.
- vCpeResCust custom workflow:
Jim fixed the XML parsing problem for service delete, and now the flow can pick up the vgw, tunnelxconn AR, and vbrg AR from the service info returned by AAI. An error was captured when SO requested SDNC for brg deactivate. The request to SDNC is below.
The AAI request from SDNC is below. Note that a few values are 'null'
2017-11-13 17:07:01,916 | INFO | SvcLogicGraph [module=GENERIC-RESOURCE-API, rpc=brg-topology-operation-deactivate, mode=sync, version=1.2.0-SNAPSHOT] | Request URL : https://aai.api.simpledemo.openecomp.org:8443/aai/v11/business/customers/customer/null/service-subscriptions/service-subscription/null/service-instances/service-instance/null/allotted-resources/allotted-resource/99dc7978-3efe-4074-805c-2ae5dc785c88
11/10/2017
- vCpeResCust custom workflow:
- Brian made changes on the SDNC side. Now SDNC can pass a list of parameters to SO for vG assign call. Then SO passes those parameters to HEAT to instantiate vG.
- Jim fixed the DoDeleteVfModule flow to use generic-resource-api and construct the corresponding request body when performing SDNC deactivate call. Note that the flow checks the configuration variable sdncversion to determine what request body to construct. This is something not fully understood by the team.
- Delete of vG succeeded.
- Jim continues to work on SO-325.
Eric has modified vGMUX to add a workaround, see below:
11/9/2017
- vCpeResCust custom workflow:
- Closed loop: manually send VES event to DCAE, APPC successfully restarted VNF via two options: directly talking to Openstack, calling MultiCloud API.
- To restart through Openstack, we need the following vserver info in AAI. This is added by heat bridge. Note that if identity-url is missing then APPC will use the default one from its properties.
"vserver-selflink": "http://10.12.25.2:8774/v2.1/466979b815b5415ba14ada713e6e1846/servers/9e17027d-c09d-45c4-9a7a-84f50a2246fd",
In the appc container, /opt/openecomp/appc/data/properties/appc.properties should have either of the following lines.
provider1.identity=http://10.12.25.2:5000/v3   # this is for openstack
provider1.identity=http://10.0.14.1:9005/api/multicloud-titanium_cloud/v0/pod25_RegionOne/identity/v2.0   # this is for multicloud
To restart through MultiCloud, the following ESR entry should be added. Note that the identity-url and the vserver selflink point to MultiCloud (the selflink has a different IP and port). APPC picks up the username and password from ESR and uses them to access the MultiCloud API.
- Tested DeleteVcpeResCust flow.
11/8/2017
- vCpeResCust custom workflow:
- Successfully passed custom workflow test: created tunnelxconn-allotted-resource, vG, brg-allotted-resource, configured vGMUX and vBRG. Note that currently the VxLAN on vG is manually configured.
- Task: DG change to configure vG VxLAN: SDNC-182.
Task: DG change to return the following vG heat parameters to SO. This is in vfmodule assign.
- Brian will fix the HEAT bridge in robot, which will be used to add vserver info to AAI.
- Brian will add a DHCP server in the Webserver.
11/7/2017
- vCpeResCust custom workflow:
- SO configuration update needed: SO-315, SO-278.
- Distribution of updated service from SDC to SO problem identified: SO-316.
- vG heat template updated with two more parameters added in env and yml. Image also updated.
- vGMux heat updated.
- Error in SDNC on brg-allotted-resource activate. Will fix by Wednesday.
- Closed loop:
- vServer data is missing from AAI. May need to add manually.
- VES data reporting from vGMUX had an incorrect sourceName; needs to be updated.
11/6/2017
- vCpeResCust custom workflow:
- Test almost finished. Successfully created tunnel-x-connect, vG, and brg-allotted-resource. When SO finally queries SDNC with the parent service instance id of the brg-allotted-resource (which is the service instance id of the vbrg service), SDNC fails to find it in its catalog. This will be fixed. Once this query passes, the SO service flow will finish successfully.
- SO calling SDNC to create a vG instance failed. SDNC queried AAI to find the availability zone but such a zone did not exist. Brian fixed this by manually adding such a zone and a complex in AAI. Jerry will add this in demo.sh init.
Add an availability zone
Add a complex
Add a relationship from CloudOwner to the above complex
- SO calling SDNC during vG instantiation used an incorrect parameter format for vG: the Yang model uses dashes (e.g., vg-ip), while SO used underscores (vg_ip). SO fixed this onsite.
- vG instantiation completed successfully. Currently the IP address facing vGMUX uses the one from HEAT; the SO workflow should take the one allocated by SDNC.
- SO: preProcessSDNCAssign workflow didn't include modelCustomizationUuid. This was fixed onsite.
A routing entry needs to be added to the SDNC VM (not docker) to allow reaching BRG through BNG:
ip route add 10.3.0.0/24 via 10.0.101.10 dev eth0
- SDNC assign of brg-allotted-resource failed. Note that unassign, activate, and deactivate may all have similar problems.
- The original DG had an error and was not loaded to SDNC. Fixed.
- The assign DG didn't set global-customer-id. Fixed.
- The assign DG didn't set service instance uuid. Fixed.
- The latest config files for SO are mso.sdnc.properties and mso.bpmn.urn.properties. Note that these config files are dynamically generated when the SO docker container is started/restarted. After the latest fixes, there should be no need to use these files anymore.
11/4/2017
vCpeResCust custom workflow:
Postman body to invoke SO updated
- Made much progress by modifying multiple DGs with temp fixes. Marcus is working to fix them the right way and commit to the repo. Passed TunnelXConn steps: assign, create. Waiting to be tested: activate. Some of the problems were:
- Incorrect username/password for authentication;
- Correct username/password in configuration file, but they are referenced incorrectly in DG.
- DG does not obtain globalcustomerID and gmux service ID properly, and thus fails to query AAI.
- Incorrect IP used to configure vGMUX.
- Restconf request to vGMUX has a malformed body.
- DG has an error (uses an incorrect resource) when allocating IP addresses for vG.
- Created a new vCpeResCust110403 service. The difference is that now the vG model is based on the latest heat and env: vgw.zip. The heat and env have been validated using openstack stack create. Note that in order to correctly distribute to SO, all the components were created from scratch, including vG (from onboarding), tunnelXconn VF, vBRG Allotted Resource VF. After distribution, added service recipe in SO DB and allotted resources to SDNC DB. Verified that this service model can be executed through SO API.
11/3/2017
vCpeResCust custom workflow:
Modified the vcperescust service and then redistribution to AAI failed. It looks like SDC has a problem handling updates, but it is not easy to pinpoint. The workaround is to recreate the service from scratch; distribution then passed.
Brian created the infrastructure including BNG BRG vGMUX, which are to be used for vcperescust flow.
Found a bug in the TunnelXConn flow where the content sent to SDNC assign is incorrect. Fixed with a code change onsite. Tracked by SO-302.
Found a few missing configurations and mistakes in mso.sdnc.properties, fixed onsite. The file being used now is mso.sdnc.properties.
Found a problem in SDNC handling TunnelXConn create operation. Dan fixed it onsite.
TunnelXConn flow has a bug: the SDNC assign request is missing some info: SO-304.
The SDNC UEB listener cannot parse the service template when distributed. Reopened this ticket: SDC-564.
Manually inserted the following into SDNC DB tables:
A few other tickets:
11/2/2017
- vCpeResCust custom workflow:
- vcperescust service failed to distribute from SDC to SO for several reasons:
- For the vBRG EMU allotted resource, we first designed it using the "Allotted Resource" subcategory, but SO did not accept it. We then re-designed it using the "Tunnel XConn" subcategory, which SO accepted. It is not clear whether this will cause problems later on.
- Each VF in the service is assigned a customizationUUID. That ID is not changed unless the VF is removed and recreated in SDC composition. SO throws an error if a service being redistributed includes a component with the same customizationUUID. Basically it checks the DB to see if the ID already exists; if so, it reports a duplicate customizationUUID error in /var/log/ecomp/MSO/ASDC.../msodebug.log. If we need to change and redistribute a service, the workaround is to delete each component and add it back in.
- Solutions:
- We recreated the mso container and db container to start with empty DB tables. Note that recreating only the DB does not work, as it will lack the Camunda records, which are created when the mso container is initialized.
- To recreate, delete the mso container, change /opt/test_lab/deploy.sh to enable "--force-recreate mariadb". Remember to turn it back off afterwards.
- After new mso container is created, do the following.
- Run cpjars to copy the updated jar and war files into the docker. This is not needed if docker is updated.
- Restart the container to load the above files.
- Run cpproperties to copy the properties in mso.bpmn.urn.properties and mso.sdnc.properties. Note that these properties get reset each time the container is restarted. This is not needed once the docker image is updated, but to enable debug logs for specific items, some of the flags in mso.bpmn.urn.properties are still needed.
- SNIRO is updated to meet the needs of the Homing query. Now the data to be loaded into SNIRO is this (updated 11/7): sniro.json. Note that some of the contents need to be updated, such as the gmux uuid.
- To clear the sniro content: GET http://robot:8080/__admin/mappings/reset
- To check sniro logs: docker logs -f sniroemulator
- TunnelXConn flow queried AAI using the vCpeResCust service instance uuid but got nothing. This was caused by a race condition: it was too soon to query after the service flow put the same uuid into AAI. The temporary solution is to let the flow sleep for 30 seconds before performing the query. It worked. Note that sleeping for 5 seconds was proven insufficient.
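Instead of a fixed sleep, the flow could poll AAI until the instance appears. The wait_for helper below is purely illustrative (our own name, not part of the SO flow), and the commented example URL/credentials are placeholders.

```shell
# Illustrative retry loop: run a command until it succeeds or attempts run out.
wait_for() {
  # $1 = max attempts, $2 = command string to try, $3 = seconds between tries
  n=0
  until eval "$2"; do
    n=$((n + 1))
    [ "$n" -ge "$1" ] && return 1
    sleep "${3:-5}"
  done
  return 0
}
# e.g. (placeholder URL/creds):
# wait_for 12 'curl -sf -k -u AAI:AAI "https://aai.api.simpledemo.openecomp.org:8443/aai/v11/.../service-instance/<uuid>" >/dev/null'
```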
11/1/2017
- vCpeResCust custom workflow:
Michael Lando found a solution for SDC-564. SDNC should be able to parse the service template properly after the fix is committed.
- Brian manually configured SDNC to bypass service template parsing and checking as a temp hacking so that SDNC could pass "service assign".
- SO flow continued to create TunnelXConn. An exception occurred. It was determined that the cause is homing. The problem is tracked by INT-317.
- The team determined that vCpeResCust needs to include a BRG allotted resource. We were able to modify the service model in SDC but distribution to SO ended in error. Waiting to be solved.
10/31/2017
- vCpeResCust custom workflow:
The service assign request from SO to SDNC has a format error. Fixed by modifying the following line in the mso docker container: /etc/mso/config.d/mso.sdnc.properties. Note that the change is lost once the container is restarted. A JIRA ticket was opened to request a permanent fix: SO-295.
org.openecomp.mso.adapters.sdnc..service-topology-operation.assign=POST|270000|sdncurl8|sdnc-request-header|org:onap:sdnc:northbound:generic-resource
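For reference, the pipe-delimited value maps onto the RequestTunables fields visible in the 11/15 SO log (reqMethod, timeout in ms, sdncUrl key, header name, namespace); a quick way to split it:

```shell
# Split the tunables value into its fields; the field meanings match the
# RequestTunables log entry from 11/15 (reqMethod=POST, timeout=270000,
# sdncUrl key=sdncurl8, headerName=sdnc-request-header, namespace=...).
TUNABLES='POST|270000|sdncurl8|sdnc-request-header|org:onap:sdnc:northbound:generic-resource'
IFS='|' read -r REQ_METHOD TIMEOUT URL_KEY HEADER_NAME NAMESPACE <<EOF
$TUNABLES
EOF
```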
In the SDNC UEB listener docker container, /opt/onap/sdnc/data/properties/ueb-listener.properties did not have the correct SDC address; fixed. But the username and password to authenticate with SDC were also incorrect, so it was unable to get the service model distributed from SDC. This is now fixed; see the file content below. Dan has committed the new config file to the repo.
- Now SDNC is able to pick up the service template distributed from SDC, but it reports an error when Cambria parses the yml template. Waiting to be fixed: SDC-564.
- service-Vcperescust-template.yml
10/30/2017
- vCpeResCust custom workflow:
- Identified a configuration mistake in mso.bpmn.urn.properties: both the hostname and the URL path need to be corrected to provide the correct callback URL to SNIRO. The updated file is here: mso.bpmn.urn.properties
- Identified a bug. When the SO service level flow requests SDNC for service instance assignment, the request does not have the correct format and gets rejected. Dan provided a sample and Yang model in SDNC-153. A new ticket was created: SO-289.
- Eric and team have the following update on VNFs:
- There should be vnf snapshot images in the Integration environment now. Note that the vbng image Brian made today needs to be replaced with the one Matt is in the process of building.
- Per Kang's question below, the last line of each vnf yaml file runs an install script, which is what sets up the interface addresses, per the env file. The snapshot images skip over the need for time-consuming compilation of the vpp/honeycomb code.
- We have tested manually configuring vxlan tunnels, etc. in the VCPE project and verified that the data path is working.
- We would like to remove and rebuild the VNFs in the vcpe project tomorrow morning.
- One item we're in the process of testing is control plane connectivity from SDNC to the vBRG.
- A couple of areas where there may be some unknowns that could come up during integration:
- interaction with the onap vdhcp vnf (different than what was tested in our local lab)
- interaction with the aaa vnf (did not have this in our local lab setup)
10/27/2017
vCpeResCust custom workflow:
- Passed: Get and decompose service template from catalog.
Passed: Query SNIRO emulator to get homing information. The current config files for SO after manual changes are mso.bpmn.urn.properties and mso-docker.json. They are supposed to be updated by SO so that no more manual changes are needed in the future. The callback URL provided by SO is incorrect and needs to be fixed (SO-278). The SNIRO emulator needs to be modified to use the callback URL from the SO request (INT-311). Currently we use the following hack to send the required info to SO.
Passed: SO queries AAI to get the service and other info including globalCustomerID. We preload AAI with the following info. Note that the ASDC_TOSCA_UUID part is questionable; it seems unnecessary. It is tracked by SO-279.
- Passed: SO creates a service instance UUID and put it in AAI.
- Blocked: SO calls SDNC assign service (type=vCpeResCust, UUID); see SDNC-153.
- General Infrastructure
- Brian has successfully instantiated general infrastructure, hahahaha~~~~~~~~~~
- Notes for upcoming test
SO allows only one workflow to execute at a time. To clear the current one:
delete from mso_requests.infra_active_requests;
To manually send event to DMaaP to invoke SDNC to create BRG record in AAI (this emulates the event from DHCP), do the following
http://{{mr}}:3904/events/VCPE-DHCP-EVENT/group1/C1?timeout=5000
[ "{\"msg_name\":\"DHCPACK\",\"macaddr\":\"e2:91:8c:7a:1e:9d\",\"yiaddr\":\"10.3.0.2\"}" ]
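In the standard DMaaP Message Router API, publishing is a POST to /events/&lt;topic&gt;, while the group1/C1?timeout form shown above is the consumer GET. A sketch of both, with the MR host as an assumption:

```shell
# Sketch: publish the emulated DHCPACK event to Message Router, then read it back.
MR_HOST=10.0.11.1   # assumption: replace with your DMaaP/MR address ({{mr}} above)
TOPIC=VCPE-DHCP-EVENT
BODY='[ "{\"msg_name\":\"DHCPACK\",\"macaddr\":\"e2:91:8c:7a:1e:9d\",\"yiaddr\":\"10.3.0.2\"}" ]'
# Publish (POST to the topic):
# curl -s -H "Content-Type: application/json" -X POST -d "$BODY" "http://$MR_HOST:3904/events/$TOPIC"
# Consume (the GET form from the notes, consumer group group1, consumer id C1):
# curl -s "http://$MR_HOST:3904/events/$TOPIC/group1/C1?timeout=5000"
```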
10/26/2017
- vCpeResCust custom workflow:
The config changes for the sniro endpoint should be visible at /shared/mso-docker.json; look at "sniroEndpoint". Also, James Hahn found that the following line needs to be added to the same file. A ticket is opened: SO-274. Important: it seems that the mso container needs to be restarted to pick up changes in this file.
"adaptersWorkflowMessageEndpoint": "http://mso:8080/workflows/messages/Resteasy",
- SNIRO emulator config tested. The related wiki page is here: SNIRO Emulator
Add the following to /etc/mso/config.d/mso.bpmn.urn.properties to enable debug. Important: The changes are lost each time the container is restarted.
- A bug was reported when SO tried to create a SNIRO request. It is tracked by SO-273.
- General Infrastructure:
Brian created the preload as follows
10/25/2017
- vCpeResCust custom workflow:
Re-created the SO maria DB to reflect the latest changes to the DB tables. This was done by adding a line to /opt/test_lab/deploy.sh: "$DOCKER_COMPOSE_CMD up -d --force-recreate mariadb".
The workflow successfully obtained service template from the catalog and decomposed it.
- A problem was observed when the workflow tried to call the homing service. It is tracked by SO-268.
- General Infrastructure:
Brian modified the neutron heat template and parameters in SO DB, pre-loaded SDNC, and successfully created virtual networks. (The scripts and template on 10/24 notes have been updated accordingly.) SDNC preload data for CPE_SIGNAL is below.
- During VNF instantiation, a bug was discovered: the content of DB table vnf_component_recipe is incorrect. It is tracked by SO-267.
- Brian found a few bugs related to SDNC:
10/24/2017
- vCpeResCust custom workflow:
Use the following to add the custom workflow into the recipe table:
Successfully invoked the custom workflow from the SO NBI using curl (note that VID currently only supports a la carte, so it cannot invoke this flow).
- A bug was discovered in the workflow and is being worked on (tracked by SO-262).
- General Infrastructure: With the following manual fix we are able to distribute the service to SO.
The generic neutron network HEAT template is missing from the SO DB. Brian found a way to manually fix it; SO will include this in the repo. It is tracked by SO-265. The script and heat template are given below (updated on 10/25 based on Brian Freeman's comments to SO-265).
- network_resource table model_invariant_uuid was too short (20 chars instead of at least 36). Manually increased to 120. It is tracked by SO-266.
- General Infrastructure: Instantiation
- Need to run "/opt/demo.sh init" in the robot VM first.
- A service was created using VID.
- Tried to add a neutron network to it. SO received an error from SDNC. This is tracked by SDNC-143.
The 404 error in VID on deploy is due to missing zone data in AAI. Preload AAI with the following:
- VNFs:
- Updated doc is available: ONAP vCPE VPP-based VNF Installation and Usage Information
- Closed loop: APPC-MultiCloud:
10/23/2017
- MultiCloud found a bug and fixed it, tracked by MULTICLOUD-118.
- vGMUX: Eric has created a snapshot image for vGMUX and tested instantiation without ONAP. He and his team are working on the other VNFs.
- SDC: David Shadmi and Kang finished TunnelXConn VF design and vcperescust service design and distribution.
- To successfully distribute vcperescust, we need to first distribute the vgmux service due to a dependency.
10/19/2017
- APPC was able to reboot VM using legacy provider API. The team is currently debugging using the LCM framework where there is an issue with CDP PAL (Cloud Delivery Platform-Provider Abstraction Layer) interaction with MultiCloud.
- vGMUX: Eric and the team chose to build an image for each working VNF to avoid build and installation during instantiation. The team will also start to test the other VNFs including vBRG, vBNG, and vG.
10/18/2017
APPC and MultiCloud traced the problem to the API call request prefix. Bin Yang provided the following sample code to call the MultiCloud API to do a VM restart. APPC will fix the problem accordingly.
export MULTICLOUD_PLUGIN_ENDPOINT=http://10.0.14.1:9005/api/multicloud-titanium_cloud/v0/pod25_RegionOne
export TOKEN=$(curl -v -s -H "Content-Type: application/json" -X POST -d '{ }' $MULTICLOUD_PLUGIN_ENDPOINT/identity/v3/auth/tokens 2>&1 | grep X-Subject-Token | sed "s/^.*: //")
export PROJECT_ID=466979b815b5415ba14ada713e6e1846
curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET $MULTICLOUD_PLUGIN_ENDPOINT/compute/v2.1/$PROJECT_ID/servers
curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST -d '{"os-stop":null}' $MULTICLOUD_PLUGIN_ENDPOINT/compute/v2.1/$PROJECT_ID/servers/0a06842a-4ec4-4918-b046-399f6b38f5f9/action
curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST -d '{"os-start":null}' $MULTICLOUD_PLUGIN_ENDPOINT/compute/v2.1/$PROJECT_ID/servers/0a06842a-4ec4-4918-b046-399f6b38f5f9/action
- vGMUX can be instantiated using the HEAT, but the VPP code build failed. Eric was able to build manually using the same script. He is looking into the problem.
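Bin Yang's sequence can be condensed into one illustrative helper. The endpoint, project id, and server id are the sample values from above; the restart_vm function name and the wait-between-actions step are ours, as a sketch only.

```shell
# Illustrative wrapper around the token + os-stop/os-start sequence above.
MULTICLOUD_PLUGIN_ENDPOINT=http://10.0.14.1:9005/api/multicloud-titanium_cloud/v0/pod25_RegionOne
PROJECT_ID=466979b815b5415ba14ada713e6e1846

restart_vm() {
  SERVER_ID=$1
  ACTION_URL="$MULTICLOUD_PLUGIN_ENDPOINT/compute/v2.1/$PROJECT_ID/servers/$SERVER_ID/action"
  # TOKEN=$(curl -v -s -H "Content-Type: application/json" -X POST -d '{ }' \
  #   "$MULTICLOUD_PLUGIN_ENDPOINT/identity/v3/auth/tokens" 2>&1 \
  #   | grep X-Subject-Token | sed "s/^.*: //")
  # curl -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST -d '{"os-stop":null}'  "$ACTION_URL"
  # ...wait for the server to reach SHUTOFF before starting...
  # curl -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST -d '{"os-start":null}' "$ACTION_URL"
}

restart_vm 0a06842a-4ec4-4918-b046-399f6b38f5f9
```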
10/17/2017
- APPC has made good progress to fix a few bugs. The execution has reached MultiCloud API. A call is set up on 10/18 to look into this problem with MultiCloud people.
10/16/2017
- DCAE to Policy: Vijay Kumar has fixed the TCA policy to put in the correct closed loop name. However, multiple problems exist in the TCA output, tracked by DCAEGEN2-164.
- Fixed vCPE policies on the wiki to use VNF as the controlLoopSchemaType.
- Fixed TCA jar to include "version" as is needed by Policy.
- Passed test: VES → DCAE → Policy → APPC. Manually fed VES data to DCAE, observed an event from Policy to APPC.
- Note that currently Policy does not check the time stamp in the event so for testing purpose there is no need to increase the time stamp in the VES data.
- Once Policy sends out a restart event to APPC, it will not send another restart event until the current operation is completed or the timer expires, even if it receives subsequent ONSET events. To retest in case the operation fails in APPC, Policy needs to be restarted.
- Eric Multanen has fixed the event data content problem to include the correct vnf_id passed from the HEAT. See details appended by Eric to the 10/13/2017 notes.
10/13/2017
- DCAE to Policy: Vijay Kumar updated TCA and VES data is now processed by collector and TCA and output from TCA is observed on unauthenticated.DCAE_CL_OUTPUT. Policy does observe that event from TCA. But the closed-loop name is incorrect. Vijay Kumar will load the correct TCA Policy listed on Policy R1 Amsterdam Functional Test Cases and the test will continue next week.
- APPC to MultiCloud: Problems from the previous days have been fixed. Scott Seabolt tested through Swagger UI and identified a new problem caused by prefix spaces in DG. He will fix it soon.
- vGMUX: Eric Multanen has identified how to configure the VES collector: one has to delete the current VES collector setting before setting a new one. This applies to both command-line and API configuration. A question raised: the current VES event uses the VM name and VM ID, while what we use in AAI is actually the VNF ID. Need to figure out how to set that VNF ID in vGMUX.
- From Eric: The "sourceName" and "sourceId" in the event data are generated by 'libevel.so' which comes from source code in demo/vnfs/VES5.0/evel/evel-library/code/evel_library. These elements are currently populated by openstack metadata corresponding to the vm name and vm id. The vG-MUX server has a property vnf_id='vCPE_Infrastructure_vGMUX_demo_app' - so it is presumably feasible to modify the libevel.so code to use this value (the same one for both 'sourceName' and 'sourceId' ?). The question is: should this be a quick and dirty hack for the vG-MUX VNF specifically? or, a more general fix in the library code? I'd be worried about changing the library code in the 'demo' repository and breaking some other use case or demo.
- From Kang:
- A quick hack should be good enough. In this use case, we only need vGMUX to report events so the other VNFs do not need a similar hack.
- Let's give vnf_id to both sourceName and sourceId. In principle only sourceId should be used by DCAE. But my test shows that only sourceName is used instead. Since vnf_id is specified in the HEAT and there is no sourceName at all in the HEAT, the above approach should work well to avoid potential inconsistency.
- Update based on communication with John Choma: VNF UUID is the same as VNF ID used in the HEAT.
From Eric:
Below is output of the updated event data - check the bold items to verify this is what is desired (note: "vCPE_Infrastructure_vGMUX_demo_app" is the value of the "vnf_id" property from the OpenStack metadata of the vG-MUX VM):
{"event": {"commonEventHeader": {"domain": "measurementsForVfScaling", "eventId": "Generic_traffic", "eventName": "Measurement_vGMUX", "lastEpochMicrosec": 1508188493486856, "priority": "Normal", "reportingEntityName": "zdcpe1cpe01mux01", "sequence": 55, "sourceName": "vCPE_Infrastructure_vGMUX_demo_app", "startEpochMicrosec": 1508188483486856, "version": 1.2, "eventType": "HTTP request rate", "reportingEntityId": "No UUID available", "sourceId": "vCPE_Infrastructure_vGMUX_demo_app"}, "measurementsForVfScalingFields": {"measurementInterval": 10, "cpuUsageArray": [{"cpuIdentifier": "cpu1", "cpuIdle": 100.000000, "cpuUsageSystem": 0.000000, "cpuUsageUser": 0.000000, "percentUsage": 0.000000}], "requestRate": 9956, "vNicUsageArray": [{"receivedOctetsDelta": 0.000000, "receivedTotalPacketsDelta": 0.000000, "transmittedOctetsDelta": 0.000000, "transmittedTotalPacketsDelta": 0.000000, "valuesAreSuspect": "true", "vNicIdentifier": "eth0"}], "additionalMeasurements": [{"name": "ONAP-DCAE", "arrayOfFields": [{"name": "Packet-Loss-Rate", "value": "0.0"}]}], "measurementsForVfScalingVersion": 2.1}}}
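A quick way to verify the sourceName on an event like the one above, using only grep/sed (no jq assumed). EVENT here is an abbreviated copy of the sample event; the extraction line works on the full event too.

```shell
# Extract sourceName from a VES event JSON string with sed.
# EVENT is an abbreviated version of the sample event above.
EVENT='{"event": {"commonEventHeader": {"domain": "measurementsForVfScaling", "sourceName": "vCPE_Infrastructure_vGMUX_demo_app", "sourceId": "vCPE_Infrastructure_vGMUX_demo_app"}}}'
SOURCE_NAME=$(echo "$EVENT" | sed 's/.*"sourceName": "\([^"]*\)".*/\1/')
echo "$SOURCE_NAME"
```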
- Meeting recordings:
10/12/2017
- AAI has a problem pre-loading data using the script closed-loop-biny993.zip, tracked by AAI-433.
- ethanlynnl has created a doc showing how to register the VMWare OpenStack Fake Cloud into AAI and how to use it: https://gerrit.onap.org/r/#/c/18493/1/docs/Multicloud-Fake_Cloud-Guide.rst
- ESR installation guide: http://onap.readthedocs.io/en/latest/submodules/aai/esr-server.git/docs/platform/installation.html
- DCAE TCA is tracked by DCAEGEN2-158.
Eric Multanen continues to fix vGMUX and has made good progress. vGMUX in the ONAP-vCPE workspace is now reporting VES data. Some fixes are still needed; this is tracked by INT-275.
10/11/2017
- Test plan for CLAMP (Ron Shacham, Gervais-Martial Ngueko, Xue Gao, Lusheng Ji, Vijay Kumar):
- Integration team will install ONAP in a separate workspace and let CLAMP do the testing. This is tracked in INT-271.
- CLAMP will start by testing with Policy and SDC on the design part. The goal is to configure Policy and create a Blueprint template for DCAE.
- Currently DCAEGEN2 cannot be installed in the open lab due to the lack of OpenStack Designate support. This is expected to be fixed in a few days. In the meantime, CLAMP will do pairwise testing with DCAE in the AT&T internal environment in parallel.
- MultiCloud (Bin Yang, ethanlynnl)
- MultiCloud sending query to AAI to obtain cloud info passed.
- MultiCloud performing VM restart (two commands: stop followed by start) passed. This was done by manually posting to the MultiCloud broker API.
- A VMWare VIM is simulated and passed VM restart test.
- Normally VIM is registered to ESR, and ESR will populate AAI with the cloud info. In this test, Bin Yang used a script to preload AAI with such info. MultiCloud queries AAI using VIM-ID to get the cloud info to do actual cloud operations.
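The AAI lookup MultiCloud performs can be sketched as a URL builder. This is a sketch only: the AAI base URL is the one seen in the SDNC logs, the `cloud-region` path follows the AAI v11 schema, and `CloudOwner` is a hypothetical cloud-owner name used for illustration.

```python
# Sketch: build the AAI v11 cloud-region query URL that MultiCloud would use
# to resolve a VIM ID (cloud-owner + cloud-region-id) into cloud info.
AAI_BASE = "https://aai.api.simpledemo.openecomp.org:8443/aai/v11"

def cloud_region_url(cloud_owner: str, cloud_region_id: str) -> str:
    # depth=all also pulls the nested objects populated by ESR / the preload script
    return (f"{AAI_BASE}/cloud-infrastructure/cloud-regions/"
            f"cloud-region/{cloud_owner}/{cloud_region_id}?depth=all")

print(cloud_region_url("CloudOwner", "pod25_RegionOne"))
```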
- APPC→MultiCloud to restart VNF (Scott Seabolt, Ryan Young, Randa Maher)
- DCAE (Vijay Kumar)
- With the sample VES data provided by Eric Multanen on 10/10, the team has updated the TCA policy accordingly and will load it to TCA later today. This is tracked by DCAEGEN2-135.
- Note that in the final release, this TCA policy will be designed either using CLAMP or Policy GUI and distributed. No manual loading is needed.
- vGMUX (Eric Multanen)
- Eric is debugging vGMUX in his environment. Once it is done, he will see if it is feasible to create a VM image and use the image in the open lab instead.
- VxLAN config using SDNC
- We suggest that SDNC generate the VxLAN config requests and send them to Eric Multanen to verify their correctness. This will help the upcoming pairwise test once the VNFs are ready in the open lab. This is tracked by SDNC-120.
- Customer flow testing in SO
- Update from Saryu Shah and BORALE, SHAILENDRA S <sb8915@att.com>:
The customer flow testing was performed using JUnit.
Running “mvn install” in so/bpmn/MSOInfrastructureBPMN runs all the tests.
The tests are self-contained; all the required data files and simulator code have been checked in. As of now, Jim is in the process of checking in bug fixes.
- Meeting recording: GMT20171011-140734_Kang-Xi-s-_1920x1200.mp4
10/10/2017
- APPC: Scott Seabolt used the Swagger API to invoke VNF restart. APPC passed the AAI named query. The DG in APPC failed to query AAI because it did not point to the correct AAI. This is tracked by APPC-267.
vGMUX: Eric has created sample VES data as shown below. DCAE needs to be adjusted to process the data. This is tracked by DCAEGEN2-147. Eric is also working to fix vGMUX.
vCPE VES data{"event": {"commonEventHeader": {"domain": "measurementsForVfScaling", "eventId": "Generic_traffic", "eventName": "Measurement_vGMUX", "lastEpochMicrosec": 1507676920903343, "priority": "Normal", "reportingEntityName": "vg-mux", "sequence": 1, "sourceName": "Dummy VM name - No Metadata available", "startEpochMicrosec": 1507676910903343, "version": 1.2, "eventType": "HTTP request rate", "reportingEntityId": "No UUID available", "sourceId": "Dummy VM UUID - No Metadata available"}, "measurementsForVfScalingFields": {"measurementInterval": 10, "cpuUsageArray": [{"cpuIdentifier": "cpu1", "cpuIdle": 85.700000, "cpuUsageSystem": 14.300000, "cpuUsageUser": 0.000000, "percentUsage": 0.000000}], "requestRate": 9383, "vNicUsageArray": [{"receivedOctetsDelta": 0.000000, "receivedTotalPacketsDelta": 0.000000, "transmittedOctetsDelta": 0.000000, "transmittedTotalPacketsDelta": 0.000000, "valuesAreSuspect": "true", "vNicIdentifier": "eth0"}], "additionalMeasurements": [{"name": "ONAP-DCAE", "arrayOfFields": [{"name": "Packet-Loss-Rate", "value": "49.0"}]}], "measurementsForVfScalingVersion": 2.1}}}
- Meeting recording: GMT20171010-181134_Kang-Xi-s-_1920x1200.mp4
10/9/2017
Closed loop control test is decomposed into the following items.
Design and load policy rules to the policy engine. Jorge Hernandez created a closed loop for vCPE. Policy is tested by posting a DCAE TCA event on DMaaP. Policy reacted by sending an event to APPC to trigger VNF restart. The message format needs to be fixed, see POLICY-300, and the topics need to be added to the APPC properties, see APPC-265.
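Injecting a TCA event onto DMaaP for this kind of Policy test can be sketched as below. This is a sketch, not the exact test: the host and topic name are placeholder examples (the real topic comes from the Policy/DCAE configuration), and only the request is built, not sent. DMaaP Message Router publishes via POST to /events/{topic} on port 3904.

```python
import json
import urllib.request

def publish_request(mr_host: str, topic: str, event: dict) -> urllib.request.Request:
    """Build (but do not send) a Message Router publish POST to /events/{topic}."""
    return urllib.request.Request(
        f"http://{mr_host}:3904/events/{topic}",
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
        method="POST")

# Placeholder host/topic; the event body would be the TCA ONSET message
req = publish_request("mr.simpledemo.openecomp.org", "EXAMPLE_TCA_TOPIC",
                      {"closedLoopEventStatus": "ONSET"})
print(req.full_url)
```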
- Scripts were created to preload AAI. Scott Seabolt tested them in the integration environment. Both preloading and the named query work.
closed-loop.zip contains the necessary data (JSON) as well as shell scripts to load the Integration vGMUX. Note that the attached files load data specific to the "vCPE_Infrastructure_vGMUX_demo_app" VNF. If you'd like to load a different VNF, each of the JSON files would need to be modified to reflect the new VNF.
- put_closed_loop.sh
- usage: put_closed_loop.sh A&AI-IP
- verify.sh
- usage: verify.sh A&AI-IP
- The VES collector is working. Waiting for sample VES data from Eric Multanen. See DCAEGEN2-135. Once the data is available, the TCA policy on Policy R1 Amsterdam Functional Test Cases may need to be adjusted and then TCA testing can be performed.
- vGMUX cannot be properly instantiated. Eric Multanen, Marco Platania, and Kang Xi are working to solve it.
- Docker info shows that the multi-cloud services are running inside the open-o VM as multiple containers
- The VM is vm1-openo-server with IP 10.12.25.113
- Service ports are listed below. More details of each service are described on Setup MultiCloud Development Env
- broker: 9001
- VIO plugin: 9004
- Ocata plugin: 9006
- WindRiver plugin: 9005
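The service endpoints above can be captured in a small helper for test scripts. A sketch, using the VM IP and port assignments from the notes (the service names here are informal labels, not official identifiers):

```python
# MultiCloud services on the open-o VM (vm1-openo-server), per the notes above
MULTICLOUD_HOST = "10.12.25.113"
SERVICE_PORTS = {
    "broker": 9001,      # MultiCloud broker API
    "vio": 9004,         # VIO (VMWare) plugin
    "windriver": 9005,   # WindRiver plugin
    "ocata": 9006,       # Ocata plugin
}

def service_url(name: str) -> str:
    return f"http://{MULTICLOUD_HOST}:{SERVICE_PORTS[name]}"

print(service_url("broker"))
```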
Meeting recording is GMT20171009-170725_Kang-Xi-s-_1920x1200.mp4
10/6/2017
Closed loop control test is decomposed into the following items.
Design and load policy rules to the policy engine. Ideally this should be done by CLAMP. Currently CLAMP is working on CLAMP-59. The workaround is to use the Policy GUI. Jorge Hernandez will work on this.
Preload AAI with vCPE data. The purpose is to look up the vServer using the vGMUX VNF ID. Scott Seabolt (APPC) will provide sample data to Venkata Harish Kajur (AAI) to show the named query. Venkata Harish Kajur will then create a script to preload such data to AAI for the testing.
vGMUX sends packet loss VES to DCAE VES collector.
Kang Xi created a vGMUX instance in the open lab, but the VPP-based VNF inside the VM is not functioning. According to Eric Multanen, this is most likely caused by network issues. Eric is working on this. Kang will also inform Danny Zhou and lee xun about this.
The following VES data caused an error. The most likely reason is that the data itself is incorrect. Vijay Kumar is looking into that.
VES Data:
curl -H "Accept: application/json" -H "Content-Type: application/json" -d '{"event":{"measurementsForVfScalingFields":{"measurementInterval":10,"measurementsForVfScalingVersion":1.1,"vNicUsageArray":[{"multicastPacketsIn":0,"bytesIn":4300,"unicastPacketsIn":0,"multicastPacketsOut":0,"broadcastPacketsOut":0,"packetsOut":0,"bytesOut":0,"broadcastPacketsIn":0,"packetsIn":101,"unicastPacketsOut":0,"vNicIdentifier":"eth1"}]},"commonEventHeader":{"reportingEntityName":"zdfw1fwl01fwl01","startEpochMicrosec":1500379662497999,"lastEpochMicrosec":1500379672497999,"eventId":"1","sourceName":"zdfw1fwl01fwl01","sequence":1,"priority":"Normal","functionalRole":"vFirewall","domain":"measurementsForVfScaling","reportingEntityId":"No UUID available","sourceId":"75ec15e4-1a9a-4ee3-bb3c-31556903558d","version":1.2}}}' 10.12.25.84:8080/eventListener/v5
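One way to narrow down "the data itself is incorrect" is a field-presence check against the commonEventHeader. This is only a sketch: the REQUIRED list below is an assumption based on the VES 5.x header fields seen elsewhere in these notes (the collector performs the authoritative schema validation). Notably, the rejected payload carries functionalRole but no eventName, which may be the mismatch with the v5 listener.

```python
# Assumed VES 5.x required commonEventHeader fields (sketch, not the official schema)
REQUIRED = ("domain", "eventName", "eventId", "sourceName", "sequence", "priority",
            "startEpochMicrosec", "lastEpochMicrosec", "reportingEntityName", "version")

def missing_fields(event: dict) -> list:
    header = event["event"]["commonEventHeader"]
    return [f for f in REQUIRED if f not in header]

# commonEventHeader of the rejected payload above (measurement fields trimmed)
rejected = {"event": {"commonEventHeader": {
    "reportingEntityName": "zdfw1fwl01fwl01", "startEpochMicrosec": 1500379662497999,
    "lastEpochMicrosec": 1500379672497999, "eventId": "1", "sourceName": "zdfw1fwl01fwl01",
    "sequence": 1, "priority": "Normal", "functionalRole": "vFirewall",
    "domain": "measurementsForVfScaling", "reportingEntityId": "No UUID available",
    "sourceId": "75ec15e4-1a9a-4ee3-bb3c-31556903558d", "version": 1.2}}}
print(missing_fields(rejected))
```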
DCAE TCA sends an event to Policy through DMaaP. This is tested as described in DCAEGEN2-124.
Policy processes the event from DCAE, and then sends a “restart” event to APPC through DMaaP. Jorge Hernandez will work on this.
APPC captures the event from Policy, performs a named query to AAI to get the vServer ID based on the VNF ID, and then restarts the VM through MultiVIM.
In addition, the bottom of Policy R1 Amsterdam Functional Test Cases shows important sample data for vCPE test cases.
Update:
- Scott Seabolt created the data pertaining to the vGMUX in POD25 which needs to be inventoried in POD25 A&AI. Venkata Harish Kajur then created a script to preload AAI with the following instructions: "Unzip this closed-loop.zip and go into closed-loop directory and run the ./put_closed_loop.sh ${AAI_VM1_IP_ADDRESS} And then you can verify that it works by doing a ./verify.sh ${AAI_VM1_IP_ADDRESS} and it should return you that named query response that you sent me."
- Meeting recording
Related Documents:
- VNF Doc: ONAP vCPE VPP-based VNF Installation and Usage Information
- DCAE mS Deployment (Standalone instantiation)
- SNIRO Emulator
export MULTICLOUD_PLUGIN_ENDPOINT=http://10.0.14.1:9005/api/multicloud-titanium_cloud/v0/pod25_RegionOne
export TOKEN=$(curl -v -s -H "Content-Type: application/json" -X POST -d '{ }' $MULTICLOUD_PLUGIN_ENDPOINT/identity/v3/auth/tokens 2>&1 | grep X-Subject-Token | sed "s/^.*: //")
export PROJECT_ID=466979b815b5415ba14ada713e6e1846
curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET $MULTICLOUD_PLUGIN_ENDPOINT/compute/v2.1/$PROJECT_ID/servers
curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST -d '{"os-stop":null}' $MULTICLOUD_PLUGIN_ENDPOINT/compute/v2.1/$PROJECT_ID/servers/0a06842a-4ec4-4918-b046-399f6b38f5f9/action
curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST -d '{"os-start":null}' $MULTICLOUD_PLUGIN_ENDPOINT/compute/v2.1/$PROJECT_ID/servers/0a06842a-4ec4-4918-b046-399f6b38f5f9/action