Pre-requisite
Set up the OOM infrastructure: OOM Infrastructure setup
Install the Helm client: https://github.com/kubernetes/helm/releases
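For reference, a minimal install sketch for a Linux amd64 host (the Helm version shown is only an example; pick the release matching your Kubernetes/Tiller setup):
Code Block
# download and unpack the Helm client, then put the binary on the PATH
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version --client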
Deploy OOM
Video describing all the steps; I've used OOM on Rancher in OpenStack
We will basically follow this guide: http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/OOM%20User%20Guide/oom_user_guide.html?highlight=oom
...
Clone OOM release-1.1.0 branch
Code Block
git clone -b release-1.1.0 https://gerrit.onap.org/r/p/oom.git
...
Edit the onap-parameters.yaml under
Code Block oom/kubernetes/config
To have endpoints register with MSB, add your kubectl config token to the kube2msb config, under kubeMasterAuthToken, located at
Code Block oom/kubernetes/kube2msb/values.yaml
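One way to retrieve that token, assuming your kubeconfig authenticates with a bearer token (which a Rancher-generated kubeconfig typically does):
Code Block
# print the token stored in the current kubeconfig
kubectl config view --raw -o jsonpath='{.users[0].user.token}'
# or simply inspect the file
grep token ~/.kube/config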
...
Let's start by creating the ONAP configuration (assuming the K8S namespace where ONAP will be installed is "onap")
Code Block
cd oom/kubernetes/config
./createConfig.sh -n onap
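Optionally, before deploying, you can check that the configuration step completed (a quick sanity check; the exact pod name can differ slightly between OOM releases):
Code Block
# the onap namespace should exist and the config pod should have run to completion
kubectl get namespaces | grep onap
kubectl -n onap get pods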
Deploy ONAP
Code Block cd oom/kubernetes/oneclick ./createAll.bash -n onap
- Now, time for a break. This will take around 30-40 minutes.
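While you wait, one way to follow progress is to list only the pods that are not yet up (a small sketch; adjust the filter to taste):
Code Block
# refresh every 30s, showing only pods that are not Running/Completed yet
watch -n 30 "kubectl get pods --all-namespaces | grep -Ev 'Running|Completed'"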
After about 45 minutes, everything is ready:
Code Block $ kubectl get pods --all-namespaces
Result:
Code Block collapse true $ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system heapster-4285517626-n5b57 1/1 Running 0 55m kube-system kube-dns-638003847-px0s1 3/3 Running 0 55m kube-system kubernetes-dashboard-716739405-llh0w 1/1 Running 0 55m kube-system monitoring-grafana-2360823841-tn80f 1/1 Running 0 55m kube-system monitoring-influxdb-2323019309-34ml1 1/1 Running 0 55m kube-system tiller-deploy-737598192-k2ttl 1/1 Running 0 55m onap-aaf aaf-1993711932-0xcdt 0/1 Running 0 46m onap-aaf aaf-cs-1310404376-6zjjh 1/1 Running 0 46m onap-aai aai-resources-1412762642-kh8r0 2/2 Running 0 47m onap-aai aai-service-749944520-t87vn 1/1 Running 0 47m onap-aai aai-traversal-3084029645-x29p6 2/2 Running 0 47m onap-aai data-router-3434587794-hj9b3 1/1 Running 0 47m onap-aai elasticsearch-622738319-m85sn 1/1 Running 0 47m onap-aai hbase-1949550546-lncls 1/1 Running 0 47m onap-aai model-loader-service-4144225433-0m8sp 2/2 Running 0 47m onap-aai search-data-service-378072033-sfrnd 2/2 Running 0 47m onap-aai sparky-be-3094577325-902jg 2/2 Running 0 47m onap-appc appc-1828810488-xg5k3 2/2 Running 0 47m onap-appc appc-dbhost-2793739621-ckxrf 1/1 Running 0 47m onap-appc appc-dgbuilder-2298093128-qd4b4 1/1 Running 0 47m onap-clamp clamp-2211988013-qwkvl 1/1 Running 0 46m onap-clamp clamp-mariadb-1812977665-mp89r 1/1 Running 0 46m onap-cli cli-595710742-wj4mg 1/1 Running 0 47m onap-consul consul-agent-3312409084-kv21c 1/1 Running 1 47m onap-consul consul-server-1173049560-966zr 1/1 Running 0 47m onap-consul consul-server-1173049560-d656s 1/1 Running 1 47m onap-consul consul-server-1173049560-k41w3 1/1 Running 0 47m onap-dcaegen2 dcaegen2 1/1 Running 0 47m onap-kube2msb kube2msb-registrator-1359309322-p60lx 1/1 Running 0 46m onap-log elasticsearch-1942187295-mtw6l 1/1 Running 0 47m onap-log kibana-3372627750-k8q6p 1/1 Running 0 47m onap-log logstash-1708188010-2vpd1 1/1 Running 0 47m onap-message-router dmaap-3126594942-vnj5w 1/1 Running 0 47m onap-message-router global-kafka-666408702-1z9c5 1/1 Running 0 47m onap-message-router zookeeper-624700062-kvk1m 1/1 Running 0 47m onap-msb msb-consul-3334785600-nz1zt 1/1 Running 0 47m 
onap-msb msb-discovery-196547432-pqs3g 1/1 Running 0 47m onap-msb msb-eag-1649257109-nl11h 1/1 Running 0 47m onap-msb msb-iag-1033096170-6cx7t 1/1 Running 0 47m onap-mso mariadb-829081257-q90fd 1/1 Running 0 47m onap-mso mso-3784963895-brdxx 2/2 Running 0 47m onap-multicloud framework-2273343137-nnvr5 1/1 Running 0 47m onap-multicloud multicloud-ocata-1517639325-gwkjr 1/1 Running 0 47m onap-multicloud multicloud-vio-4239509896-zxmvx 1/1 Running 0 47m onap-multicloud multicloud-windriver-3629763724-993qk 1/1 Running 0 47m onap-policy brmsgw-1909438199-k2ppk 1/1 Running 0 47m onap-policy drools-2600956298-p9t68 2/2 Running 0 47m onap-policy mariadb-2660273324-lj0ts 1/1 Running 0 47m onap-policy nexus-3663640793-pgf51 1/1 Running 0 47m onap-policy pap-466625067-2hcxb 2/2 Running 0 47m onap-policy pdp-2354817903-65rnb 2/2 Running 0 47m onap-portal portalapps-1783099045-prvmp 2/2 Running 0 47m onap-portal portaldb-3181004999-0t228 2/2 Running 0 47m onap-portal portalwidgets-2060058548-w6hr9 1/1 Running 0 47m onap-portal vnc-portal-3680188324-b22zq 1/1 Running 0 47m onap-robot robot-2551980890-cw3vj 1/1 Running 0 47m onap-sdc sdc-be-2336519847-hcs6h 2/2 Running 0 47m onap-sdc sdc-cs-1151560586-sfkf0 1/1 Running 0 47m onap-sdc sdc-es-2438522492-cw6rj 1/1 Running 0 47m onap-sdc sdc-fe-2862673798-lplcx 2/2 Running 0 47m onap-sdc sdc-kb-1258596734-43lf7 1/1 Running 0 47m onap-sdnc sdnc-1395102659-rd27h 2/2 Running 0 47m onap-sdnc sdnc-dbhost-3029711096-vl2jg 1/1 Running 0 47m onap-sdnc sdnc-dgbuilder-4267203648-bb828 1/1 Running 0 47m onap-sdnc sdnc-portal-2558294154-3nh31 1/1 Running 0 47m onap-uui uui-4267149477-bqt0r 1/1 Running 0 46m onap-uui uui-server-3441797946-dx683 1/1 Running 0 46m onap-vfc vfc-catalog-840807183-lx4d0 1/1 Running 0 46m onap-vfc vfc-emsdriver-2936953408-fb2pf 1/1 Running 0 46m onap-vfc vfc-gvnfmdriver-2866216209-k5t1t 1/1 Running 0 46m onap-vfc vfc-hwvnfmdriver-2588350680-bpglx 1/1 Running 0 46m onap-vfc vfc-jujudriver-406795794-ttp9p 1/1 Running 0 46m onap-vfc vfc-nokiavnfmdriver-1760240499-xm0qk 1/1 Running 0 46m onap-vfc vfc-nslcm-3756650867-1dnr0 1/1 Running 0 46m onap-vfc vfc-resmgr-1409642779-0603z 1/1 Running 0 46m onap-vfc vfc-vnflcm-3340104471-xsk72 1/1 Running 0 46m onap-vfc vfc-vnfmgr-2823857741-r04xj 1/1 Running 0 46m onap-vfc vfc-vnfres-1792029715-ls480 1/1 Running 0 46m onap-vfc vfc-workflow-3450325534-flwtw 1/1 Running 0 46m onap-vfc vfc-workflowengineactiviti-4110617986-mvlgl 1/1 Running 0 46m onap-vfc vfc-ztesdncdriver-1452986549-c59jb 1/1 Running 0 46m onap-vfc vfc-ztevmanagerdriver-2080553526-wdxwq 1/1 Running 0 46m onap-vid vid-mariadb-3318685446-hmf2q 1/1 Running 0 47m onap-vid vid-server-2994633010-x3t74 2/2 Running 0 47m onap-vnfsdk postgres-436836560-cl2dz 1/1 Running 0 46m onap-vnfsdk refrepo-1924147637-wft62 1/1 Running 0 46m
Let's run the health check to see the current status, with the expected failure for DCAE, as it's not deployed.
Code Block cd oom/kubernetes/robot $ ./ete-k8s.sh health
Result:
Code Block collapse true Starting Xvfb on display :88 with res 1280x1024x24 Executing robot tests at log level TRACE ============================================================================== OpenECOMP ETE ============================================================================== OpenECOMP ETE.Robot ============================================================================== OpenECOMP ETE.Robot.Testsuites ============================================================================== OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are... ============================================================================== Basic DCAE Health Check [ WARN ] Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa61dbfa50>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /healthcheck [ WARN ] Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa61dbf650>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /healthcheck [ WARN ] Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa5fe40510>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /healthcheck | FAIL | ConnectionError: HTTPConnectionPool(host='dcae-controller.onap-dcae', port=8080): Max retries exceeded with url: /healthcheck (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa619bf7d0>: Failed to establish a new connection: [Errno -2] Name or service not known',)) ------------------------------------------------------------------------------ Basic SDNGC Health Check | PASS | ------------------------------------------------------------------------------ Basic A&AI Health Check | PASS | ------------------------------------------------------------------------------ Basic Policy Health Check | PASS | ------------------------------------------------------------------------------ Basic MSO Health Check | PASS | ------------------------------------------------------------------------------ Basic ASDC Health Check | PASS | ------------------------------------------------------------------------------ Basic APPC Health Check | PASS | ------------------------------------------------------------------------------ Basic Portal Health Check | PASS | ------------------------------------------------------------------------------ Basic Message Router Health Check | PASS | ------------------------------------------------------------------------------ Basic VID Health Check | PASS | ------------------------------------------------------------------------------ Basic Microservice Bus Health Check | PASS | ------------------------------------------------------------------------------ Basic CLAMP Health Check | PASS | ------------------------------------------------------------------------------ catalog API Health Check | PASS | ------------------------------------------------------------------------------ emsdriver API Health Check | PASS | ------------------------------------------------------------------------------ gvnfmdriver API Health Check | PASS | ------------------------------------------------------------------------------ 
huaweivnfmdriver API Health Check | PASS | ------------------------------------------------------------------------------ multicloud API Health Check | PASS | ------------------------------------------------------------------------------ multicloud-ocata API Health Check | PASS | ------------------------------------------------------------------------------ multicloud-titanium_cloud API Health Check | PASS | ------------------------------------------------------------------------------ multicloud-vio API Health Check | PASS | ------------------------------------------------------------------------------ nokiavnfmdriver API Health Check | PASS | ------------------------------------------------------------------------------ nslcm API Health Check | PASS | ------------------------------------------------------------------------------ resmgr API Health Check | PASS | ------------------------------------------------------------------------------ usecaseui-gui API Health Check | PASS | ------------------------------------------------------------------------------ vnflcm API Health Check | PASS | ------------------------------------------------------------------------------ vnfmgr API Health Check | PASS | ------------------------------------------------------------------------------ vnfres API Health Check | PASS | ------------------------------------------------------------------------------ workflow API Health Check | PASS | ------------------------------------------------------------------------------ ztesdncdriver API Health Check | PASS | ------------------------------------------------------------------------------ ztevmanagerdriver API Health Check | PASS | ------------------------------------------------------------------------------ OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp compo... | FAIL | 30 critical tests, 29 passed, 1 failed 30 tests total, 29 passed, 1 failed ============================================================================== OpenECOMP ETE.Robot.Testsuites | FAIL | 30 critical tests, 29 passed, 1 failed 30 tests total, 29 passed, 1 failed ============================================================================== OpenECOMP ETE.Robot | FAIL | 30 critical tests, 29 passed, 1 failed 30 tests total, 29 passed, 1 failed ============================================================================== OpenECOMP ETE | FAIL | 30 critical tests, 29 passed, 1 failed 30 tests total, 29 passed, 1 failed ============================================================================== Output: /share/logs/ETE_46070/output.xml Log: /share/logs/ETE_46070/log.html Report: /share/logs/ETE_46070/report.html command terminated with exit code 1
Let's run the init_robot script, which will enable us to access the robot logs
Code Block cd oom/kubernetes/robot $ ./demo-k8s.sh init_robot
Result:
Code Block collapse true WEB Site Password for user 'test': Starting Xvfb on display :89 with res 1280x1024x24 Executing robot tests at log level TRACE ============================================================================== OpenECOMP ETE ============================================================================== OpenECOMP ETE.Robot ============================================================================== OpenECOMP ETE.Robot.Testsuites ============================================================================== OpenECOMP ETE.Robot.Testsuites.Update Onap Page :: Initializes ONAP Test We... ============================================================================== Update ONAP Page | PASS | ------------------------------------------------------------------------------ OpenECOMP ETE.Robot.Testsuites.Update Onap Page :: Initializes ONA... | PASS | 1 critical test, 1 passed, 0 failed 1 test total, 1 passed, 0 failed ============================================================================== OpenECOMP ETE.Robot.Testsuites | PASS | 1 critical test, 1 passed, 0 failed 1 test total, 1 passed, 0 failed ============================================================================== OpenECOMP ETE.Robot | PASS | 1 critical test, 1 passed, 0 failed 1 test total, 1 passed, 0 failed ============================================================================== OpenECOMP ETE | PASS | 1 critical test, 1 passed, 0 failed 1 test total, 1 passed, 0 failed ============================================================================== Output: /share/logs/demo/UpdateWebPage/output.xml Log: /share/logs/demo/UpdateWebPage/log.html Report: /share/logs/demo/UpdateWebPage/report.html
Navigate to
Code Block <kubernetes-vm-ip>:30209
and to see the robot logs, go to
Code Block <kubernetes-vm-ip>:30209/logs/
Let's run the init goal
Code Block cd oom/kubernetes/robot $ ./demo-k8s.sh init
Result:
Code Block collapse true Starting Xvfb on display :89 with res 1280x1024x24 Executing robot tests at log level TRACE ============================================================================== OpenECOMP ETE ============================================================================== OpenECOMP ETE.Robot ============================================================================== OpenECOMP ETE.Robot.Testsuites ============================================================================== OpenECOMP ETE.Robot.Testsuites.Demo :: Executes the VNF Orchestration Test ... ============================================================================== Initialize Customer And Models | PASS | ------------------------------------------------------------------------------ OpenECOMP ETE.Robot.Testsuites.Demo :: Executes the VNF Orchestrat... | PASS | 1 critical test, 1 passed, 0 failed 1 test total, 1 passed, 0 failed ============================================================================== OpenECOMP ETE.Robot.Testsuites | PASS | 1 critical test, 1 passed, 0 failed 1 test total, 1 passed, 0 failed ============================================================================== OpenECOMP ETE.Robot | PASS | 1 critical test, 1 passed, 0 failed 1 test total, 1 passed, 0 failed ============================================================================== OpenECOMP ETE | PASS | 1 critical test, 1 passed, 0 failed 1 test total, 1 passed, 0 failed ============================================================================== Output: /share/logs/demo/InitDemo/output.xml Log: /share/logs/demo/InitDemo/log.html Report: /share/logs/demo/InitDemo/report.html
Running vFW demo - Close-loop
Video of onboarding
I had a hiccup at the end because I already had another vFW deployed, so the IP it tried to assign was already in use. To fix this, I removed the existing stack.
Video of instantiation
I had a hiccup for the vFW_PG because I pre-loaded against the wrong instance. After realizing this, all went well.
Log in to the VNC. The password is password
Code Block <kubernetes-vm-ip>:30211
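If you are unsure which Kubernetes service sits behind a given node port (30211 for the VNC, 30209 for robot, and so on), you can list the exposed services (a quick sketch):
Code Block
kubectl get services --all-namespaces | grep -E '30211|30209|30201'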
Open the browser and navigate to the ONAP Portal
Login using the Designer user. cs0008/demo123456!
Code Block http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm
- Virtual Licence Model creation
- Open SDC application, click on the OnBoard tab.
- click Create new VLM (Licence Model)
- Use onap as Vendor Name, and enter a description
- click save
- click Licence Key Group and Add Licence KeyGroup, then fill in the required fields
- click Entitlements Pools and Add Entitlement Pool, then fill in the required fields
- click Feature Groups and Add Feature Group, then fill in the required fields. Also, under the Entitlement Pools tab, drag the created entitlement pool to the left. Same for the License Key Groups
- click Licence Agreements and Add Licence Agreement, then fill in the required fields. Under the tab Features Groups, drag the feature group created previously.
- then check-in and submit
- go back to OnBoard page
- click Create new VLM (Licence Model)
- Open SDC application, click on the OnBoard tab.
- Vendor Software Product onboarding and testing
- click Create a new VSP
- First we create the vFW sinc; give it a name, i.e. vFW_SINC. Select the Vendor (onap) and the Category (Firewall) and give it a description.
- Click on the warning, and add a licence model
- Get the zip package: vfw-sinc.zip
- Click on overview, and import the zip
- Click Proceed to validation then check-in then submit
- click Create a new VSP
- Then we create the vFW packet generator; give it a name, i.e. vFW_PG. Select the Vendor (onap) and the Category (Firewall) and give it a description.
- Click on the warning, and add a licence model
- Get the zip package: vfw_pg.zip
- Click on overview, and import the zip
- Click Proceed to validation then check-in then submit
- Go to SDC home. Click on the top right icon with the orange arrow.
- Import the VSP one by one
- Submit for both testing
- Logout and Login as the tester: jm0007/demo123456!
- Go to the SDC portal
- Test and accept the two VSP
- click Create a new VSP
- Service Creation
- Logout and login as the designer: cs0008/demo123456!
- Go to the SDC home page
- Click Add a Service
- Fill in the required field
- Click Create
- Click on the Composition left tab
- In the search bar, type "vFW" to narrow down the created VSP, and drag them both.
- Then click Submit for Testing
- Service Testing
- Logout and Login as the tester: jm0007/demo123456!
- Go to the SDC portal
- Test and accept the service
- Service Approval
- Logout and Login as the governor: gv0001/demo123456!
- Go to the SDC portal
- Approve the service
- Service Distribution
- Logout and Login as the operator: op0001/demo123456!
- Go to the SDC portal
- Distribute the service
- Click on the left tab monitor and click on arrow to open the distribution status
- Wait until everything is distributed (green tick)
- Service Instance creation:
- Logout and Login as the user: demo/demo123456!
- Go to the VID portal
- Click the Browse SDC Service Models tab
- Click Deploy on the service to deploy
- Fill in the required field, call it vFW_Service for instance. Once done, this will redirect you to a new screen
- Click Add VNF, and select the vFW_SINC VNF first
- Fill in the required field. Call it vFW_SINC_VNF, for instance.
- Click Add VNF, and select the vFW_PG_VNF first
- Fill in the required field. Call it vFW_PG_VNF, for instance.
- SDNC preload:
Then go to the SDNC Admin portal and create an account
Code Block <kubernetes-host-ip>:30201/signup
Login into the SDNC admin portal
Code Block <kubernetes-host-ip>:30201/login
- Click Profiles then Add VNF Profile
- The VNF Type is the string that looks like this: VfwPg..base_vpkg..module-0. It can be copied and pasted from VID when attempting to create the VF-Module
- Enter 100 for the Availability Zone Count
- Enter vFW for Equipment Role
- Repeat the same for the other VNF
Pre-load the vFW SINC. Mind the following values:
service-type: the service instance ID of the service instance created at step 9
vnf-name: the name to give to the VF-Module. The same name will have to be re-used when creating the VF-Module
vnf-type: same as the one used to add the profile in the SDNC admin portal
generic-vnf-name: the name of the created VNF, see step 9f
vfw_name_0: the same as the generic-vnf-name
generic-vnf-type: can be found in VID; please see the video if you cannot find it
dcae_collector_ip: has to be the IP address of the dcaedoks00 VM
Make sure image_name, flavor_name, public_net_id, onap_private_net_id, onap_private_subnet_id, key_name and pub_key reflect your environment
Code Block collapse true
curl -X POST \
  http://<kubernetes-host-ip>:30202/restconf/operations/VNF-API:preload-vnf-topology-operation \
  -H 'accept: application/json' \
  -H 'authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==' \
  -H 'content-type: application/json' \
  -H 'x-fromappid: API client' \
  -d '{
  "input": {
    "vnf-topology-information": {
      "vnf-topology-identifier": {
        "service-type": "34992be5-b38c-46da-96b2-553e60f9c24b",
        "vnf-name": "vFW_SINC_Module",
        "vnf-type": "VfwSinc..base_vfw..module-0",
        "generic-vnf-name": "vFW_SINC_VNF",
        "generic-vnf-type": "vFW_SINC 0"
      },
      "vnf-assignments": {
        "availability-zones": [],
        "vnf-networks": [],
        "vnf-vms": []
      },
      "vnf-parameters": [
        { "vnf-parameter-name": "image_name", "vnf-parameter-value": "trusty" },
        { "vnf-parameter-name": "flavor_name", "vnf-parameter-value": "m1.medium" },
        { "vnf-parameter-name": "public_net_id", "vnf-parameter-value": "d87ff178-3eb7-44df-a57b-84636dbdc817" },
        { "vnf-parameter-name": "unprotected_private_net_id", "vnf-parameter-value": "zdfw1fwl01_unprotected" },
        { "vnf-parameter-name": "unprotected_private_subnet_id", "vnf-parameter-value": "zdfw1fwl01_unprotected_sub" },
        { "vnf-parameter-name": "protected_private_net_id", "vnf-parameter-value": "zdfw1fwl01_protected" },
        { "vnf-parameter-name": "protected_private_subnet_id", "vnf-parameter-value": "zdfw1fwl01_protected_sub" },
        { "vnf-parameter-name": "onap_private_net_id", "vnf-parameter-value": "oam_onap_k0H4" },
        { "vnf-parameter-name": "onap_private_subnet_id", "vnf-parameter-value": "oam_onap_k0H4" },
        { "vnf-parameter-name": "unprotected_private_net_cidr", "vnf-parameter-value": "192.168.10.0/24" },
        { "vnf-parameter-name": "protected_private_net_cidr", "vnf-parameter-value": "192.168.20.0/24" },
        { "vnf-parameter-name": "onap_private_net_cidr", "vnf-parameter-value": "10.0.0.0/16" },
        { "vnf-parameter-name": "vfw_private_ip_0", "vnf-parameter-value": "192.168.10.100" },
        { "vnf-parameter-name": "vfw_private_ip_1", "vnf-parameter-value": "192.168.20.100" },
        { "vnf-parameter-name": "vfw_private_ip_2", "vnf-parameter-value": "10.0.100.5" },
        { "vnf-parameter-name": "vpg_private_ip_0", "vnf-parameter-value": "192.168.10.200" },
        { "vnf-parameter-name": "vsn_private_ip_0", "vnf-parameter-value": "192.168.20.250" },
        { "vnf-parameter-name": "vsn_private_ip_1", "vnf-parameter-value": "10.0.100.4" },
        { "vnf-parameter-name": "vfw_name_0", "vnf-parameter-value": "vFW_SINC_VNF" },
        { "vnf-parameter-name": "vsn_name_0", "vnf-parameter-value": "zdfw1fwl01snk01" },
        { "vnf-parameter-name": "vnf_id", "vnf-parameter-value": "vFirewall_vSink_demo_app" },
        { "vnf-parameter-name": "vf_module_id", "vnf-parameter-value": "vFirewall_vSink" },
        { "vnf-parameter-name": "dcae_collector_ip", "vnf-parameter-value": "10.195.200.38" },
        { "vnf-parameter-name": "dcae_collector_port", "vnf-parameter-value": "8080" },
        { "vnf-parameter-name": "repo_url_blob", "vnf-parameter-value": "https://nexus.onap.org/content/sites/raw" },
        { "vnf-parameter-name": "repo_url_artifacts", "vnf-parameter-value": "https://nexus.onap.org/content/groups/staging" },
        { "vnf-parameter-name": "demo_artifacts_version", "vnf-parameter-value": "1.1.1" },
        { "vnf-parameter-name": "install_script_version", "vnf-parameter-value": "1.1.1" },
        { "vnf-parameter-name": "key_name", "vnf-parameter-value": "onap_key_k0H4" },
        { "vnf-parameter-name": "pub_key", "vnf-parameter-value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmuLf5dnvDS4hiwmXYg2YtgByeAj8ZoH5toGPNENIr9uIhgRclPWb5HSIDzhFLKy9K9Z1ht5XZEkzAcslSIKkodZlVYyucG/QwqLlN8N05EMLVm6TudjUp/j/VDvavSgp/xzIDsdHuhQZ8VHRE88mKzsTA4jPFp4s4Ic8eCes4nrydMrlbxeLjV3/+/xc77StQ7hDMaBlJX8xztgHRodxIQmMBWwb/4YSxjTbO0cwi4XYlRXzFPY7vmO2VDRhfaOVtyv8Pw6a3AaqIP6CR0z6QgbLYjtiFbWmhKQ+0qUfJeb0Kkc7Deok7x58a3mHkhswGS1aJLCaHC/W1b7n6C+lv adetalhouet@bell.corp.bce.ca" },
        { "vnf-parameter-name": "cloud_env", "vnf-parameter-value": "openstack" }
      ]
    },
    "request-information": {
      "request-id": "robot12",
      "order-version": "1",
      "notification-url": "openecomp.org",
      "order-number": "1",
      "request-action": "PreloadVNFRequest"
    },
    "sdnc-request-header": {
      "svc-request-id": "robot12",
      "svc-notification-url": "http://openecomp.org:8080/adapters/rest/SDNCNotify",
      "svc-action": "reserve"
    }
  }
}'
Expected result:
Code Block
{
  "output": {
    "svc-request-id": "robot12",
    "response-code": "200",
    "ack-final-indicator": "Y"
  }
}
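If you want to double-check that the preload was stored, the VNF-API configuration tree can usually be read back from SDNC (a sketch, assuming the same node port and credentials as the preload call; the exact RESTCONF path may differ between releases):
Code Block
# list the preloaded VNF topologies currently known to SDNC
curl -X GET \
  http://<kubernetes-host-ip>:30202/restconf/config/VNF-API:preload-vnfs \
  -H 'accept: application/json' \
  -H 'authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ=='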
Pre-load the vFW PG. Mind the following values:
service-type: the service instance ID of the service instance created at step 9
vnf-name: the name to give to the VF-Module. The same name will have to be re-used when creating the VF-Module
vnf-type: same as the one used to add the profile in the SDNC admin portal
generic-vnf-name: the name of the created VNF, see step 9h
vpg_name_0: the same as the generic-vnf-name
generic-vnf-type: can be found in VID; please see the video if you cannot find it
Make sure image_name, flavor_name, public_net_id, onap_private_net_id, onap_private_subnet_id, key_name and pub_key reflect your environment
Code Block collapse true
curl -X POST \
  http://<kubernetes-host-ip>:30202/restconf/operations/VNF-API:preload-vnf-topology-operation \
  -H 'accept: application/json' \
  -H 'authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==' \
  -H 'content-type: application/json' \
  -H 'x-fromappid: API client' \
  -d '{
  "input": {
    "vnf-topology-information": {
      "vnf-topology-identifier": {
        "service-type": "df6e075a-119a-4790-a470-2474a692e3ce",
        "vnf-name": "vFW_PG_Module",
        "vnf-type": "VfwPg..base_vpkg..module-0",
        "generic-vnf-name": "vFW_PG_VNF",
        "generic-vnf-type": "vFW_PG 0"
      },
      "vnf-assignments": {
        "availability-zones": [],
        "vnf-networks": [],
        "vnf-vms": []
      },
      "vnf-parameters": [
        { "vnf-parameter-name": "image_name", "vnf-parameter-value": "trusty" },
        { "vnf-parameter-name": "flavor_name", "vnf-parameter-value": "m1.medium" },
        { "vnf-parameter-name": "public_net_id", "vnf-parameter-value": "d87ff178-3eb7-44df-a57b-84636dbdc817" },
        { "vnf-parameter-name": "unprotected_private_net_id", "vnf-parameter-value": "zdfw1fwl01_unprotected" },
        { "vnf-parameter-name": "unprotected_private_subnet_id", "vnf-parameter-value": "zdfw1fwl01_unprotected_sub" },
        { "vnf-parameter-name": "onap_private_net_id", "vnf-parameter-value": "oam_onap_k0H4" },
        { "vnf-parameter-name": "onap_private_subnet_id", "vnf-parameter-value": "oam_onap_k0H4" },
        { "vnf-parameter-name": "unprotected_private_net_cidr", "vnf-parameter-value": "192.168.10.0/24" },
        { "vnf-parameter-name": "protected_private_net_cidr", "vnf-parameter-value": "192.168.20.0/24" },
        { "vnf-parameter-name": "onap_private_net_cidr", "vnf-parameter-value": "10.0.0.0/16" },
        { "vnf-parameter-name": "vfw_private_ip_0", "vnf-parameter-value": "192.168.10.100" },
        { "vnf-parameter-name": "vpg_private_ip_0", "vnf-parameter-value": "192.168.10.200" },
        { "vnf-parameter-name": "vpg_private_ip_1", "vnf-parameter-value": "10.0.80.2" },
        { "vnf-parameter-name": "vsn_private_ip_0", "vnf-parameter-value": "192.168.20.250" },
        { "vnf-parameter-name": "vpg_name_0", "vnf-parameter-value": "vFW_PG_VNF" },
        { "vnf-parameter-name": "vnf_id", "vnf-parameter-value": "vPacketGen_demo_app" },
        { "vnf-parameter-name": "vf_module_id", "vnf-parameter-value": "vPacketGen" },
        { "vnf-parameter-name": "repo_url_blob", "vnf-parameter-value": "https://nexus.onap.org/content/sites/raw" },
        { "vnf-parameter-name": "repo_url_artifacts", "vnf-parameter-value": "https://nexus.onap.org/content/groups/staging" },
        { "vnf-parameter-name": "demo_artifacts_version", "vnf-parameter-value": "1.1.1" },
        { "vnf-parameter-name": "install_script_version", "vnf-parameter-value": "1.1.1" },
        { "vnf-parameter-name": "key_name", "vnf-parameter-value": "vfw_key" },
        { "vnf-parameter-name": "pub_key", "vnf-parameter-value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmuLf5dnvDS4hiwmXYg2YtgByeAj8ZoH5toGPNENIr9uIhgRclPWb5HSIDzhFLKy9K9Z1ht5XZEkzAcslSIKkodZlVYyucG/QwqLlN8N05EMLVm6TudjUp/j/VDvavSgp/xzIDsdHuhQZ8VHRE88mKzsTA4jPFp4s4Ic8eCes4nrydMrlbxeLjV3/+/xc77StQ7hDMaBlJX8xztgHRodxIQmMBWwb/4YSxjTbO0cwi4XYlRXzFPY7vmO2VDRhfaOVtyv8Pw6a3AaqIP6CR0z6QgbLYjtiFbWmhKQ+0qUfJeb0Kkc7Deok7x58a3mHkhswGS1aJLCaHC/W1b7n6C+lv adetalhouet@bell.corp.bce.ca" },
        { "vnf-parameter-name": "cloud_env", "vnf-parameter-value": "openstack" }
      ]
    },
    "request-information": {
      "request-id": "robot12",
      "order-version": "1",
      "notification-url": "openecomp.org",
      "order-number": "1",
      "request-action": "PreloadVNFRequest"
    },
    "sdnc-request-header": {
      "svc-request-id": "robot12",
      "svc-notification-url": "http://openecomp.org:8080/adapters/rest/SDNCNotify",
      "svc-action": "reserve"
    }
  }
}'
Expected result:
Code Block { "vnf-parameter-name": "output": {unprotected_private_net_id", "svcvnf-requestparameter-idvalue": "robot12",zdfw1fwl01_unprotected" }, "response-code": "200", { "ack-final-indicator": "Y" } }
- Create the VF-Module for vFW_SINC
- The instance name must be the vnf-name set up in the preload phase.
- After a few minutes, the stack should be created.
- Create the VF-Module for vFW_PG
- The instance name must be the vnf-name set up in the preload phase.
- After a few minutes, the stack should be created.
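Once both VF-Modules are created, you can cross-check the resulting Heat stacks directly in OpenStack (a sketch, assuming the OpenStack CLI is configured against the tenant ONAP deploys into):
Code Block
# the two vFW stacks should show up as CREATE_COMPLETE
openstack stack list
openstack server list | grep -i vfw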
Close loop
Run the heatbridge robot tag to tell AAI about the relationship between the created Heat stack (the SINC one) and the service instance ID.
To run this, you need:
- the heat stack name of the vSINC
- the service instance id
Code Block collapse true
$ ./demo-k8s.sh heatbridge vFW_SINC_Module 82678348-2f42-4ee7-bd29-0ef24b5e4bca vFW
Starting Xvfb on display :89 with res 1280x1024x24
Executing robot tests at log level TRACE
==============================================================================
OpenECOMP ETE
==============================================================================
OpenECOMP ETE.Robot
==============================================================================
OpenECOMP ETE.Robot.Testsuites
==============================================================================
OpenECOMP ETE.Robot.Testsuites.Demo :: Executes the VNF Orchestration Test ...
==============================================================================
Run Heatbridge :: Try to run heatbridge                               | PASS |
------------------------------------------------------------------------------
OpenECOMP ETE.Robot.Testsuites.Demo :: Executes the VNF Orchestrat... | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
OpenECOMP ETE.Robot.Testsuites                                        | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
OpenECOMP ETE.Robot                                                   | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
OpenECOMP ETE                                                         | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
Output: /share/logs/demo/heatbridge/output.xml
Log: /share/logs/demo/heatbridge/log.html
Report: /share/logs/demo/heatbridge/report.html
- Upload operational policy: this is to tell Policy that for this specific instance, we should apply this policy.
Retrieve from the MSO Catalog the modelInvariantUuid for the vFW_PG. Specify in the request below the service-model-name, as defined at step 5.c.
Code Block curl -X GET \ 'http://<kubernetes-host>:30223/ecomp/mso/catalog/v2/serviceVnfs?serviceModelName=<service-model-name>' \ -H 'Accept: application/json' \ -H 'Authorization: Basic SW5mcmFQb3J0YWxDbGllbnQ6cGFzc3dvcmQxJA==' \ -H 'Content-Type: application/json' \ -H 'X-FromAppId: Postman' \ -H 'X-TransactionId: get_service_vnfs'
Based on the payload below, the result would be 86a1bdd8-1f59-4796-bf30-3002108068f6
Code Block collapse true { "serviceVnfs": [ { "modelInfo": { "modelName": "vFW_PG", "modelUuid": "7af8882e-f732-405f-b48b-38b6403654ea", "modelInvariantUuid": "86a1bdd8-1f59-4796-bf30-3002108068f6", "modelVersion": "1.0", "modelCustomizationUuid": "a2521929-d6da-46cc-9a62-ca3b6c3cef9b", "modelInstanceName": "vFW_PG 0" }, "toscaNodeType": "org.openecomp.resource.vf.VfwPg", "nfFunction": "", "nfType": "", "nfRole": "", "nfNamingCode": "", "vfModules": [ { "modelInfo": { "modelName": "VfwPg..base_vpkg..module-0", "modelUuid": "54a98442-52e3-46e8-8b40-193f04e92ff7", "modelInvariantUuid": "9c6c0369-a9c1-4419-94c9-aabf6250fc87", "modelVersion": "1", "modelCustomizationUuid": "35595818-2e09-4ad2-b6ce-2ffc263489af" }, "isBase": true, "vfModuleLabel": "base_vpkg", "initialCount": 1, "hasVolumeGroup": true } ] }, { "modelInfo": { "modelName": "vFW_SINC", "modelUuid": "b8cc7acf-eba8-4ddb-950a-be52a96b28c8", "modelInvariantUuid": "edd473e1-7d08-4cf1-be31-0d705017f644", "modelVersion": "1.0", "modelCustomizationUuid": "c890203f-44a0-4c43-aadb-250d8f6c54b0", "modelInstanceName": "vFW_SINC 0" }, "toscaNodeType": "org.openecomp.resource.vf.VfwSinc", "nfFunction": "", "nfType": "", "nfRole": "", "nfNamingCode": "", "vfModules": [ { "modelInfo": { "modelName": "VfwSinc..base_vfw..module-0", "modelUuid": "605ef192-e190-4043-97be-31a0d64a2f8e", "modelInvariantUuid": "858e065b-7491-4c70-91e6-109a65c6102d", "modelVersion": "1", "modelCustomizationUuid": "acf94576-fe00-43ec-b9f9-0f8748e44c0a" }, "isBase": true, "vfModuleLabel": "base_vfw", "initialCount": 1, "hasVolumeGroup": true } ] } ] }
Under
Code Block oom/kubernetes/policy/script
invoke the script as follows:
Code Block Usage: update-vfw-op-policy.sh <k8s-host> <policy-pdp-node-port> <policy-drools-node-port> <resource-id> ./update-vfw-op-policy.sh 10.195.197.53 30220 30221 86a1bdd8-1f59-4796-bf30-3002108068f
The result can look like the following, with debug enabled (/bin/bash -x):
Code Block collapse true $ ./update-vfw-op-policy.sh 10.195.197.53 30220 30221 86a1bdd8-1f59-4796-bf30-3002108068f + '[' 4 -ne 4 ']' + K8S_HOST=10.195.197.53 + POLICY_PDP_PORT=30220 + POLICY_DROOLS_PORT=30221 + RESOURCE_ID=86a1bdd8-1f59-4796-bf30-3002108068f + echo + echo + echo 'Removing the vFW Policy from PDP..' Removing the vFW Policy from PDP.. + echo + echo + curl -v -X DELETE --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{ "pdpGroup": "default", "policyComponent" : "PDP", "policyName": "com.BRMSParamvFirewall", "policyType": "BRMS_Param" }' http://10.195.197.53:30220/pdp/api/deletePolicy * Trying 10.195.197.53... * TCP_NODELAY set * Connected to 10.195.197.53 (10.195.197.53) port 30220 (#0) > DELETE /pdp/api/deletePolicy HTTP/1.1 > Host: 10.195.197.53:30220 > User-Agent: curl/7.54.0 > Content-Type: application/json > Accept: text/plain > ClientAuth: cHl0aG9uOnRlc3Q= > Authorization: Basic dGVzdHBkcDphbHBoYTEyMw== > Environment: TEST > Content-Length: 128 > * upload completely sent off: 128 out of 128 bytes < HTTP/1.1 200 OK < Server: Apache-Coyote/1.1 < Content-Type: text/plain;charset=ISO-8859-1 < Content-Length: 91 < Date: Wed, 20 Dec 2017 20:17:22 GMT < * Connection #0 to host 10.195.197.53 left intact Transaction ID: af030f0c-0c2b-43a1-b1ec-6abf4ca73799 --The policy was successfully deleted.+ sleep 20 + echo + echo + echo 'Updating vFW Operational Policy ..' Updating vFW Operational Policy .. + echo + curl -v -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{ "policyConfigType": "BRMS_PARAM", "policyName": "com.BRMSParamvFirewall", "policyDescription": "BRMS Param vFirewall policy", "policyScope": "com", "attributes": { "MATCHING": { "controller": "amsterdam" }, "RULE": { "templateName": "ClosedLoopControlName", "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a", "controlLoopYaml": "controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a%0D%0A++trigger_policy%3A+unique-policy-id-1-modifyConfig%0D%0A++timeout%3A+1200%0D%0A++abatement%3A+false%0D%0A+%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-modifyConfig%0D%0A++++name%3A+modify+packet+gen+config%0D%0A++++description%3A%0D%0A++++actor%3A+APPC%0D%0A++++recipe%3A+ModifyConfig%0D%0A++++target%3A%0D%0A++++++%23+TBD+-+Cannot+be+known+until+instantiation+is+done%0D%0A++++++resourceID%3A+86a1bdd8-1f59-4796-bf30-3002108068f%0D%0A++++++type%3A+VNF%0D%0A++++retry%3A+0%0D%0A++++timeout%3A+300%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard" } } }' http://10.195.197.53:30220/pdp/api/updatePolicy * Trying 10.195.197.53... 
* TCP_NODELAY set * Connected to 10.195.197.53 (10.195.197.53) port 30220 (#0) > PUT /pdp/api/updatePolicy HTTP/1.1 > Host: 10.195.197.53:30220 > User-Agent: curl/7.54.0 > Content-Type: application/json > Accept: text/plain > ClientAuth: cHl0aG9uOnRlc3Q= > Authorization: Basic dGVzdHBkcDphbHBoYTEyMw== > Environment: TEST > Content-Length: 1327 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Server: Apache-Coyote/1.1 < Content-Type: text/plain;charset=ISO-8859-1 < Content-Length: 149 < Date: Wed, 20 Dec 2017 20:17:42 GMT < * Connection #0 to host 10.195.197.53 left intact Transaction ID: 20f4e273-d193-466c-8cce-ee643a854f5f --Policy with the name com.Config_BRMS_Param_BRMSParamvFirewall.2.xml was successfully updated. + sleep 5 + echo + echo + echo 'Pushing the vFW Policy ..' Pushing the vFW Policy .. + echo + echo + curl -v --silent -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{ "pdpGroup": "default", "policyName": "com.BRMSParamvFirewall", "policyType": "BRMS_Param" }' http://10.195.197.53:30220/pdp/api/pushPolicy * Trying 10.195.197.53... * TCP_NODELAY set * Connected to 10.195.197.53 (10.195.197.53) port 30220 (#0) > PUT /pdp/api/pushPolicy HTTP/1.1 > Host: 10.195.197.53:30220 > User-Agent: curl/7.54.0 > Content-Type: application/json > Accept: text/plain > ClientAuth: cHl0aG9uOnRlc3Q= > Authorization: Basic dGVzdHBkcDphbHBoYTEyMw== > Environment: TEST > Content-Length: 99 > * upload completely sent off: 99 out of 99 bytes < HTTP/1.1 200 OK < Server: Apache-Coyote/1.1 < Content-Type: text/plain;charset=ISO-8859-1 < Content-Length: 162 < Date: Wed, 20 Dec 2017 20:17:48 GMT < * Connection #0 to host 10.195.197.53 left intact Transaction ID: e8bc4ae1-d0b0-483e-b1ba-871486661240 --Policy 'com.Config_BRMS_Param_BRMSParamvFirewall.2.xml' was successfully pushed to the PDP group 'default'.+ sleep 20 + echo + echo + echo 'Restarting PDP-D ..' Restarting PDP-D .. + echo + echo ++ kubectl --namespace onap-policy get pods ++ sed 's/ .*//' ++ grep drools + POD=drools-870120400-5b5k1 + kubectl --namespace onap-policy exec -it drools-870120400-5b5k1 -- bash -c 'source /opt/app/policy/etc/profile.d/env.sh && policy stop && sleep 5 && policy start' Defaulting container name to drools. Use 'kubectl describe pod/drools-870120400-5b5k1' to see all of the containers in this pod. [drools-pdp-controllers] L []: Stopping Policy Management... Policy Management (pid=5452) is stopping... Policy Management has stopped. [drools-pdp-controllers] L []: Policy Management (pid 5722) is running + sleep 20 + echo + echo + echo 'PDP-D amsterdam maven coordinates ..' PDP-D amsterdam maven coordinates .. + echo + echo + curl -vvv --silent --user @1b3rt:31nst31n -X GET http://10.195.197.53:30221/policy/pdp/engine/controllers/amsterdam/drools + python -m json.tool * Trying 10.195.197.53... 
* TCP_NODELAY set * Connected to 10.195.197.53 (10.195.197.53) port 30221 (#0) * Server auth using Basic with user '@1b3rt' > GET /policy/pdp/engine/controllers/amsterdam/drools HTTP/1.1 > Host: 10.195.197.53:30221 > Authorization: Basic QDFiM3J0OjMxbnN0MzFu > User-Agent: curl/7.54.0 > Accept: */* > < HTTP/1.1 200 OK < Date: Wed, 20 Dec 2017 20:18:49 GMT < Content-Type: application/json < Content-Length: 382 < Server: Jetty(9.3.14.v20161028) < { [382 bytes data] * Connection #0 to host 10.195.197.53 left intact { "alive": true, "artifactId": "policy-amsterdam-rules", "brained": true, "groupId": "org.onap.policy-engine.drools.amsterdam", "locked": false, "modelClassLoaderHash": 665564874, "recentSinkEvents": [], "recentSourceEvents": [], "sessionCoordinates": [ "org.onap.policy-engine.drools.amsterdam:policy-amsterdam-rules:0.6.0:closedloop-amsterdam" ], "sessions": [ "closedloop-amsterdam" ], "version": "0.6.0" } + echo + echo + echo 'PDP-D control loop updated ..' PDP-D control loop updated .. + echo + echo + curl -v --silent --user @1b3rt:31nst31n -X GET http://10.195.197.53:30221/policy/pdp/engine/controllers/amsterdam/drools/facts/closedloop-amsterdam/org.onap.policy.controlloop.Params + python -m json.tool * Trying 10.195.197.53... * TCP_NODELAY set * Connected to 10.195.197.53 (10.195.197.53) port 30221 (#0) * Server auth using Basic with user '@1b3rt' > GET /policy/pdp/engine/controllers/amsterdam/drools/facts/closedloop-amsterdam/org.onap.policy.controlloop.Params HTTP/1.1 > Host: 10.195.197.53:30221 > Authorization: Basic QDFiM3J0OjMxbnN0MzFu > User-Agent: curl/7.54.0 > Accept: */* > < HTTP/1.1 200 OK < Date: Wed, 20 Dec 2017 20:18:50 GMT < Content-Type: application/json < Content-Length: 3565 < Server: Jetty(9.3.14.v20161028) < { [1207 bytes data] * Connection #0 to host 10.195.197.53 left intact [ { "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e", "controlLoopYaml": "controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e%0D%0A++trigger_policy%3A+unique-policy-id-1-restart%0D%0A++timeout%3A+3600%0D%0A++abatement%3A+true%0D%0A+%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-restart%0D%0A++++name%3A+Restart+the+VM%0D%0A++++description%3A%0D%0A++++actor%3A+APPC%0D%0A++++recipe%3A+Restart%0D%0A++++target%3A%0D%0A++++++type%3A+VM%0D%0A++++retry%3A+3%0D%0A++++timeout%3A+1200%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard" }, { "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a", "controlLoopYaml": 
"controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a%0D%0A++trigger_policy%3A+unique-policy-id-1-modifyConfig%0D%0A++timeout%3A+1200%0D%0A++abatement%3A+false%0D%0A+%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-modifyConfig%0D%0A++++name%3A+modify+packet+gen+config%0D%0A++++description%3A%0D%0A++++actor%3A+APPC%0D%0A++++recipe%3A+ModifyConfig%0D%0A++++target%3A%0D%0A++++++%23+TBD+-+Cannot+be+known+until+instantiation+is+done%0D%0A++++++resourceID%3A+86a1bdd8-1f59-4796-bf30-3002108068f%0D%0A++++++type%3A+VNF%0D%0A++++retry%3A+0%0D%0A++++timeout%3A+300%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard" }, { "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3", "controlLoopYaml": "controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3%0D%0A++trigger_policy%3A+unique-policy-id-1-scale-up%0D%0A++timeout%3A+1200%0D%0A++abatement%3A+false%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-scale-up%0D%0A++++name%3A+Create+a+new+VF+Module%0D%0A++++description%3A%0D%0A++++actor%3A+SO%0D%0A++++recipe%3A+VF+Module+Create%0D%0A++++target%3A%0D%0A++++++type%3A+VNF%0D%0A++++retry%3A+0%0D%0A++++timeout%3A+1200%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard" }, { "closedLoopControlName": "ControlLoop-VOLTE-2179b738-fd36-4843-a71a-a8c24c70c55b", "controlLoopYaml": "controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-VOLTE-2179b738-fd36-4843-a71a-a8c24c70c55b%0D%0A++trigger_policy%3A+unique-policy-id-1-restart%0D%0A++timeout%3A+3600%0D%0A++abatement%3A+false%0D%0A+%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-restart%0D%0A++++name%3A+Restart+the+VM%0D%0A++++description%3A%0D%0A++++actor%3A+VFC%0D%0A++++recipe%3A+Restart%0D%0A++++target%3A%0D%0A++++++type%3A+VM%0D%0A++++retry%3A+3%0D%0A++++timeout%3A+1200%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard" } ]
- Mount the Packet Generator VNF in APPC
Get the VNF instance ID, either through VID or through AAI. Below is the AAI request:
Code Block
curl -X GET \
  https://<kubernetes-host>:30233/aai/v8/network/generic-vnfs/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Basic QUFJOkFBSQ==' \
  -H 'Content-Type: application/json' \
  -H 'X-FromAppId: Postman' \
  -H 'X-TransactionId: get_generic_vnf'
In the result, search for vFW_PG_VNF and get its vnf-id. In the payload below, it would be e6fd60b4-f436-4a21-963c-cc9060127633.
Code Block collapse true { "generic-vnf": [ { "vnf-id": "9663a27e-8fbe-4fde-bc33-064ae45caee6", "vnf-name": "vFW_SINC_VNF", "vnf-type": "vFW_Service/vFW_SINC 0", "service-id": "75af21a4-6519-4505-b418-134e9e836023", "prov-status": "PREPROV", "orchestration-status": "Created", "in-maint": false, "is-closed-loop-disabled": false, "resource-version": "1513788953961", "persona-model-id": "edd473e1-7d08-4cf1-be31-0d705017f644", "persona-model-version": "1.0", "relationship-list": { "relationship": [ { "related-to": "service-instance", "related-link": "https://10.195.197.53:30233/aai/v8/business/customers/customer/Demonstration/service-subscriptions/service-subscription/vFWCL/service-instances/service-instance/63b55891-ebc6-40bf-b884-2e2427280a83", "relationship-data": [ { "relationship-key": "customer.global-customer-id", "relationship-value": "Demonstration" }, { "relationship-key": "service-subscription.service-type", "relationship-value": "vFWCL" }, { "relationship-key": "service-instance.service-instance-id", "relationship-value": "63b55891-ebc6-40bf-b884-2e2427280a83" } ], "related-to-property": [ { "property-key": "service-instance.service-instance-name", "property-value": "vFWServiceInstance-20-12" } ] } ] }, "vf-modules": { "vf-module": [ { "vf-module-id": "fc8ba83f-2ebf-4066-bb3a-f581667f77da", "vf-module-name": "vFW_SINC_Module", "heat-stack-id": "vFW_SINC_Module/09b1a25e-4ef0-4490-9b05-d79c00c7d218", "orchestration-status": "active", "is-base-vf-module": true, "resource-version": "1513790007998", "persona-model-id": "858e065b-7491-4c70-91e6-109a65c6102d", "persona-model-version": "1" } ] } }, { "vnf-id": "e6fd60b4-f436-4a21-963c-cc9060127633", "vnf-name": "vFW_PG_VNF", "vnf-type": "vFW_Service/vFW_PG 0", "service-id": "75af21a4-6519-4505-b418-134e9e836023", "prov-status": "PREPROV", "orchestration-status": "Created", "in-maint": false, "is-closed-loop-disabled": false, "resource-version": "1513788903856", "persona-model-id": "86a1bdd8-1f59-4796-bf30-3002108068f6", "persona-model-version": "1.0", "relationship-list": { "relationship": [ { "related-to": "service-instance", "related-link": "https://10.195.197.53:30233/aai/v8/business/customers/customer/Demonstration/service-subscriptions/service-subscription/vFWCL/service-instances/service-instance/63b55891-ebc6-40bf-b884-2e2427280a83", "relationship-data": [ { "relationship-key": "customer.global-customer-id", "relationship-value": "Demonstration" }, { "relationship-key": "service-subscription.service-type", "relationship-value": "vFWCL" }, { "relationship-key": "service-instance.service-instance-id", "relationship-value": "63b55891-ebc6-40bf-b884-2e2427280a83" } ], "related-to-property": [ { "property-key": "service-instance.service-instance-name", "property-value": "vFWServiceInstance-20-12" } ] } ] }, "vf-modules": { "vf-module": [ { "vf-module-id": "c2fed873-263c-46b5-bb95-4dfaf6c02410", "vf-module-name": "vFW_PG_Module", "heat-stack-id": "vFW_PG_Module/850e84a4-6cee-405c-8058-7f3fa25ca42e", "orchestration-status": "active", "is-base-vf-module": true, "resource-version": "1513791913543", "persona-model-id": "9c6c0369-a9c1-4419-94c9-aabf6250fc87", "persona-model-version": "1" } ] } } ] }
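If you have jq installed, you can pull the vnf-id of the Packet Generator straight out of that response instead of scanning it by eye. A minimal sketch, assuming the same AAI endpoint and credentials as above (-k is only there because AAI uses a self-signed certificate):

Code Block
# Extract the vnf-id of the vFW_PG_VNF instance from the AAI response
curl -sk -X GET \
  https://<kubernetes-host>:30233/aai/v8/network/generic-vnfs/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Basic QUFJOkFBSQ==' \
  -H 'X-FromAppId: Postman' \
  -H 'X-TransactionId: get_generic_vnf' \
  | jq -r '."generic-vnf"[] | select(."vnf-name" == "vFW_PG_VNF") | ."vnf-id"'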
- Get the public IP address of the Packet Generator from your deployment.
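How you find that IP depends on your cloud. On an OpenStack deployment like the one used in this guide, a quick way is to list the Packet Generator server; this is only a sketch, and the server name (zdfw1fwl01pgn01) is an assumption based on the standard vFW Heat templates, so adjust the filter to whatever your stack created:

Code Block
# List the packet generator VM and the addresses assigned to it
openstack server list --name zdfw1fwl01pgn01 -c Name -c Networks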
In the curl request below, replace <vnf-id> with the VNF ID retrieved at step 2.a (it needs to be updated in two places), and replace <vnf-ip> with the IP retrieved at step 2.b.
Code Block
curl -X PUT \
  http://<kubernetes-host>:30230/restconf/config/network-topology:network-topology/topology/topology-netconf/node/<vnf-id> \
  -H 'Accept: application/xml' \
  -H 'Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==' \
  -H 'Content-Type: text/xml' \
  -d '<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
        <node-id><vnf-id></node-id>
        <host xmlns="urn:opendaylight:netconf-node-topology"><vnf-ip></host>
        <port xmlns="urn:opendaylight:netconf-node-topology">2831</port>
        <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
        <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
        <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
      </node>'
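For reference, here is the same request filled in with the values used throughout this walkthrough (vnf-id e6fd60b4-f436-4a21-963c-cc9060127633 and Packet Generator IP 10.195.200.32); treat it purely as an illustration, since your IDs and IPs will differ:

Code Block
curl -X PUT \
  http://<kubernetes-host>:30230/restconf/config/network-topology:network-topology/topology/topology-netconf/node/e6fd60b4-f436-4a21-963c-cc9060127633 \
  -H 'Accept: application/xml' \
  -H 'Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==' \
  -H 'Content-Type: text/xml' \
  -d '<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
        <node-id>e6fd60b4-f436-4a21-963c-cc9060127633</node-id>
        <host xmlns="urn:opendaylight:netconf-node-topology">10.195.200.32</host>
        <port xmlns="urn:opendaylight:netconf-node-topology">2831</port>
        <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
        <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
        <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
      </node>'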
If you want to verify that the NETCONF connection has been successfully established, use the following request (replace <vnf-id> with yours):
Code Block
curl -X GET \
  http://<kubernetes-host>:30230/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/<vnf-id> \
  -H 'Accept: application/json' \
  -H 'Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ=='
Result should be:
Using NETCONF, let's get the current streams being active in our Packet GeneratorCode Block collapse true { "node": [ { "node-id": "e6fd60b4-f436-4a21-963c-cc9060127633", "netconf-node-topology:available-capabilities": { "available-capability": [ { "capability-origin": "device-advertised", "capability": "urn:ietf:params:netconf:capability:exi:1.0" }, { "capability-origin": "device-advertised", "capability": "urn:ietf:params:netconf:capability:candidate:1.0" }, { "capability-origin": "device-advertised", "capability": "urn:ietf:params:netconf:base:1.1" }, { "capability-origin": "device-advertised", "capability": "urn:ietf:params:netconf:base:1.0" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-restconf?revision=2013-10-19)ietf-restconf" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:netconf:mdsal:notification?revision=2015-08-03)netconf-mdsal-notification" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring?revision=2010-10-04)ietf-netconf-monitoring" }, { "capability-origin": "device-advertised", "capability": "(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-07-12)network-topology" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-interfaces?revision=2014-05-08)ietf-interfaces" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-access-control-list?revision=2016-07-08)ietf-access-control-list" }, { "capability-origin": "device-advertised", "capability": "(urn:honeycomb:params:xml:ns:yang:eid:mapping:context?revision=2016-08-01)eid-mapping-context" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:rest:connector?revision=2014-07-24)opendaylight-rest-connector" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding?revision=2013-10-28)opendaylight-md-sal-binding" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:netconf:northbound:ssh?revision=2015-01-14)netconf-northbound-ssh" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:core:spi:entity-ownership-service?revision=2015-08-10)opendaylight-entity-ownership-service" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-inet-types?revision=2013-07-15)ietf-inet-types" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:mdsal:core:general-entity?revision=2015-09-30)odl-general-entity" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:protocol:framework?revision=2014-03-13)protocol-framework" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:common?revision=2013-10-28)opendaylight-md-sal-common" }, { "capability-origin": "device-advertised", "capability": "(urn:sal:restconf:event:subscription?revision=2014-07-08)sal-remote-augment" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:netconf:northbound:notification?revision=2015-08-06)netconf-northbound-notification" }, { "capability-origin": "device-advertised", "capability": 
"(urn:ietf:params:xml:ns:yang:ietf-yang-types?revision=2010-09-24)ietf-yang-types" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:inmemory-datastore-provider?revision=2014-06-17)opendaylight-inmemory-datastore-provider" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:netty?revision=2013-11-19)netty" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding:impl?revision=2013-10-28)opendaylight-sal-binding-broker-impl" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:sal:restconf:service?revision=2015-07-08)sal-restconf-service" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:config?revision=2013-04-05)config" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:vpp:classifier?revision=2015-06-03)vpp-classifier" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:v3po?revision=2015-01-05)v3po" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:core:general-entity?revision=2015-08-20)general-entity" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:netconf:notification:1.0?revision=2008-07-14)notifications" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:sample-plugin?revision=2016-09-18)sample-plugin" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:core:spi:config-dom-store?revision=2014-06-17)opendaylight-config-dom-datastore" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:core:spi:operational-dom-store?revision=2014-06-17)opendaylight-operational-dom-datastore" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:config:netconf:northbound:impl?revision=2015-01-12)netconf-northbound-impl" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:vpp:nsh?revision=2016-06-24)vpp-nsh" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:netconf:north:mapper?revision=2015-01-14)netconf-northbound-mapper" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-yang-types?revision=2013-07-15)ietf-yang-types" }, { "capability-origin": "device-advertised", "capability": "(urn:honeycomb:params:xml:ns:yang:naming:context?revision=2016-05-13)naming-context" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:iana-if-type?revision=2014-05-08)iana-if-type" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:vpp:vlan?revision=2015-05-27)vpp-vlan" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:netconf:mdsal:mapper?revision=2015-01-14)netconf-mdsal-mapper" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:vpp:classifier?revision=2016-09-09)vpp-classifier-context" }, { "capability-origin": "device-advertised", "capability": 
"(urn:ietf:params:xml:ns:yang:ietf-ip?revision=2014-06-16)ietf-ip" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:remote?revision=2014-01-14)sal-remote" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:lisp?revision=2016-05-20)lisp" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-inet-types?revision=2010-09-24)ietf-inet-types" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:threadpool?revision=2013-04-09)threadpool" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom?revision=2013-10-28)opendaylight-md-sal-dom" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:netconf:northbound:tcp?revision=2015-04-23)netconf-northbound-tcp" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:v3po:context?revision=2016-09-09)v3po-context" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:netmod:notification?revision=2008-07-14)nc-notifications" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:netconf:base:1.0?revision=2011-06-01)ietf-netconf" }, { "capability-origin": "device-advertised", "capability": "(urn:ieee:params:xml:ns:yang:dot1q-types?revision=2015-06-26)dot1q-types" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:netconf:mdsal:monitoring?revision=2015-02-18)netconf-mdsal-monitoring" }, { "capability-origin": "device-advertised", "capability": "(instance:identifier:patch:module?revision=2015-11-21)instance-identifier-patch-module" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring-extension?revision=2013-12-10)ietf-netconf-monitoring-extension" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:netconf:northbound:notification:impl?revision=2015-08-07)netconf-northbound-notification-impl" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:yang:extension:yang-ext?revision=2013-07-09)yang-ext" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-packet-fields?revision=2016-07-08)ietf-packet-fields" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-lisp-address-types?revision=2015-11-05)ietf-lisp-address-types" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:config:netconf:northbound?revision=2015-01-14)netconf-northbound" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:ietf-netconf-notifications?revision=2012-02-06)ietf-netconf-notifications" }, { "capability-origin": "device-advertised", "capability": "(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)network-topology" }, { "capability-origin": "device-advertised", "capability": "(urn:ietf:params:xml:ns:yang:rpc-context?revision=2013-06-17)rpc-context" }, { "capability-origin": "device-advertised", "capability": "(urn:opendaylight:params:xml:ns:yang:controller:config:netconf:auth?revision=2015-07-15)netconf-auth" }, } ] { }, 
"capability-originnetconf-node-topology:host": "device-advertised10.195.200.32", "netconf-node-topology:unavailable-capabilities": {}, "capability": "(urn:opendaylight:params:xml:ns:yang:controller:config:netconf:auth?revision=2015-07-15)netconf-auth" -node-topology:connection-status": "connected", "netconf-node-topology:port": 2831 } } ] }, "netconf-node-topology:host": "10.195.200.32", "netconf-node-topology:unavailable-capabilities": {}, "netconf-node-topology:connection-status": "connected", "netconf-node-topology:port": 2831 } ] }
] }
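The interesting field in all of that output is netconf-node-topology:connection-status, which should read "connected". A quick way to check just that field (a sketch; it assumes python and grep are available on the machine you run curl from):

Code Block
curl -s -X GET \
  http://<kubernetes-host>:30230/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/<vnf-id> \
  -H 'Accept: application/json' \
  -H 'Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==' \
  | python -m json.tool | grep connection-status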
Using NETCONF, let's get the streams currently active in our Packet Generator. The number of streams will change over time; this is the result of the closed-loop policy. When the traffic goes over a certain threshold, DCAE publishes an event on the unauthenticated.DCAE_CL_OUTPUT topic; Policy picks it up and instructs APPC, which sends a NETCONF request to the Packet Generator to adjust the traffic it is sending.
Code Block
curl -X GET \
  http://10.195.197.53:30230/restconf/config/network-topology:network-topology/topology/topology-netconf/node/e6fd60b4-f436-4a21-963c-cc9060127633/yang-ext:mount/sample-plugin:sample-plugin/pg-streams \
  -H 'Accept: application/json' \
  -H 'Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ=='
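The same mount point also accepts writes, which is exactly what APPC does when the closed loop fires. If you want to see the effect without waiting for Policy, you can push a stream list yourself; this is only a sketch, and the stream ids (fw_udp1 ... fw_udp5) are an assumption based on the standard vFW packet generator image, so check them against the GET result above first:

Code Block
# Enable five traffic streams on the packet generator through the NETCONF mount
curl -X PUT \
  http://10.195.197.53:30230/restconf/config/network-topology:network-topology/topology/topology-netconf/node/e6fd60b4-f436-4a21-963c-cc9060127633/yang-ext:mount/sample-plugin:sample-plugin/pg-streams \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==' \
  -d '{"pg-streams": {"pg-stream": [
        {"id": "fw_udp1", "is-enabled": "true"},
        {"id": "fw_udp2", "is-enabled": "true"},
        {"id": "fw_udp3", "is-enabled": "true"},
        {"id": "fw_udp4", "is-enabled": "true"},
        {"id": "fw_udp5", "is-enabled": "true"}]}}'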
Browse to the sink VM (zdfw1fwl01snk01) on port 667 to see a graph of the traffic being received:
Code Block http://<zdfw1fwl01snk01>:667/
In the graph below (top-right square), the first two fluctuations go from very low to very high; this is when the closed loop isn't running.
Once the closed loop is running, you'll see medium-sized bars instead. Check the events sent by the VNF Event Streaming (VES) collector to the Threshold Crossing Analytics (TCA) app:
Code Block
curl -X GET \
  http://<K8S_IP>:3904/events/unauthenticated.SEC_MEASUREMENT_OUTPUT/group1/C1 \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/cambria'
The VES agent resides in the VNF itself, whereas TCA is an application running on Cask (CDAP), a DCAE component.
Check the events sent by TCA on unauthenticated.DCAE_CL_OUTPUT:
Code Block
curl -X GET \
  http://<K8S_IP>:3904/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1 \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/cambria'
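Note that the group/consumer pair at the end of those URLs (group1/C1) keeps its own read offset on the Message Router, so a second call only returns events published since the previous one; an empty response usually just means nothing new has arrived yet. Instead of polling in a tight loop, you can long-poll; a minimal sketch, assuming the standard Message Router timeout query parameter (in milliseconds):

Code Block
# Wait up to 15 seconds for new events on the closed-loop topic before returning
curl -X GET \
  'http://<K8S_IP>:3904/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1?timeout=15000' \
  -H 'Accept: application/json'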
Those events are the output of the TCA application: TCA has noticed a measurement crossing a given threshold and therefore publishes a message on that topic. Policy then grabs this event and performs the appropriate action, as defined in the policy. In the case of vFWCL, Policy sends an event on the APPC_CL topic, which APPC consumes. This triggers a NETCONF request to the Packet Generator to adjust the traffic.
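If you want to watch that hand-off happen, you can also subscribe to the APPC closed-loop topic on the same Message Router while the packet generator ramps its traffic up. The exact topic name can vary between deployments, so list the topics first and look for the APPC one; a minimal sketch:

Code Block
# Find the exact APPC closed-loop topic name in your deployment
curl -s -X GET http://<K8S_IP>:3904/topics -H 'Accept: application/json' | python -m json.tool | grep -i appc

Once you have the exact name, poll it the same way as the topics above (http://<K8S_IP>:3904/events/<topic>/group1/C1).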
I hope everything worked for you; if not, please leave a comment. Thanks!