Vetted vFirewall Demo - Full draft how-to for F2F and ReadTheDocs

20181220 - update for Casablanca - TODO: review the vFW automation in https://github.com/garyiwu/onap-lab-ci - thanks Yang Xu

This long-winded page name will revert to "Running the ONAP vFirewall Demo...." when we are finished (before 9 Dec) - and the page will be moved out of the wiki root

Please join and post "validated" actions/config/results - but do not move or edit this page until we get a complete vFW run - ideally before the 4 Dec KubeCon conference and, worst case, before the 11 Dec ONAP conference - thank you

Under construction - this page consolidates, over the next 2 weeks, all the details needed to get the vFirewall running, in preparation for anyone who would like to demo it at the F2F in Dec.

ADD content ONLY when it is verified - with evidence (screen cap, JSON output, etc.)

DO paste any questions and unverified config/actions in the comment section at the end - for the team to verify


HEAT Daily meeting at 1200 EDT (noon), Nov 27 to 8 Dec 2017
https://zoom.us/j/7939937123 see schedule at https://lists.onap.org/pipermail/onap-discuss/2017-November/006483.html

OOM Daily meeting at 1100 EDT, Nov 29 to 1 Dec 2017 - https://lists.onap.org/pipermail/onap-discuss/2017-November/006575.html

Statement of Work

Ideally this page serves as the draft that will go into ReadTheDocs.io - at which point this page gets deleted and replaced with a reference.

There are currently 3 or more distinct pages, email threads, presentations, phone calls and meetings holding the details needed to get a running vFirewall up, step by step.

We would like to get back to where we were before Aug 2017: an individual with an OpenStack environment (and now OOM as well) can follow each instruction point (an action with its expected/documented result/output) and end up with our current minimal sanity use case - the vFirewall.

If you have any configuration details for getting the vFirewall up, post them to the comments section and they will be tested and incorporated.

Ideally any action added to this page itself is fully tested, with the resulting output (text/screencap) pasted as a reference.

JIRAs: OOM-459 for OOM and INT-106 for HEAT

Output

1 - This set of instructions below - going from an empty OOM host or OpenStack lab all the way to a running closed loop.

2 - A set of videos - running the vFirewall on already deployed OOM and HEAT environments - see the reference videos from Running the ONAP Demos#ONAPDeploymentVideos and INT-333

3 - Secondary videos on bringing up OOM and HEAT deployments

Running the vFirewall Demo

sync with Running the ONAP Demos#QuickstartInstructions

TODO: check for JIRA on appc demo.robot working : 20171128 (worked in 1.0.0)

20180307 - SDC 503 - see pod reordering in Amsterdam https://lists.onap.org/pipermail/onap-discuss/2018-March/008403.html - need to raise a JIRA

Prerequisites

Artifact | Location | Notes

private key (ssh-add)


obrienbiometrics:onap_public michaelobrien$ ssh-keygen


SHA256:YzLggI8nGXna0Ssx0DMpLvZKSPTGZJ1mXwj2XZ+c8Gg michaelobrien@obrienbiometrics.local

paste onap_public.pub into the pub_key: sections of all the onap_openstack and vFW env files
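
A minimal sketch of the key preparation, assuming the keypair is written to ~/.ssh/onap_public (any file name works as long as the matching .pub content goes into the env files):

# generate the keypair
ssh-keygen -t rsa -f ~/.ssh/onap_public
# load the private key into the ssh agent so later ssh logins to the VMs work
ssh-add ~/.ssh/onap_public
# print the public key - paste this single line into the pub_key: field of the
# onap_openstack and vFW env files (and the pub_key vnf-parameter in the preload)
cat ~/.ssh/onap_public.pub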



openstack yaml and env

https://nexus.onap.org/content/sites/raw/org.onap.demo/heat/ONAP/1.1.0-SNAPSHOT/

demo/heat/onap/onap-openstack.*


vFirewall yaml and env
(2 VNFs)

unverified

We will use the split vFWCL (vFW closed loop) in demo/heat/vFWCL


demo/heat/vFWCL/vFWPKG/base_vpkg.env

demo/heat/vFWCL/vFWSNK/base_vfw.env

  image_name: ubuntu-14-04-cloud-amd64

  flavor_name: m1.medium

  public_net_id: 971040b2-7059-49dc-b220-4fab50cb2ad4

cloud_env: openstack

  onap_private_net_id: oam_onap_6Gve

  onap_private_subnet_id: oam_onap_6Gve

Note: the network must be the one that shows on the instances page - or the only non-shared one in the network list
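
A hedged way to confirm which network/subnet to use from the OpenStack CLI (oam_onap_6Gve is just the example name from this lab - substitute your own):

# list networks - the ONAP private (oam) network is the non-shared one created by the ONAP stack
openstack network list
# confirm the candidate network is not shared and note its exact name
openstack network show oam_onap_6Gve -c name -c shared
# the matching subnet name for onap_private_subnet_id
openstack subnet list --network oam_onap_6Gve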


Do not use the older

https://nexus.onap.org/content/sites/raw/org.onap.demo/heat/vFW/1.1.0-SNAPSHOT/

or the deprecated https://nexus.onap.org/content/sites/raw/org.openecomp.demo/heat/vFW/1.1.0-SNAPSHOT/













vFirewall Tasks

Ideally we have an automated one-click vFW deployment - this is in the works.

sync with Running the ONAP Demos#QuickstartInstructions

T# | Task | Action (REST URL + JSON payload, UI screencap or console cmd) | Result (JSON / text / screencap) | Artifacts (link or attached file) | Env (OOM, HEAT or both) | Verified | Last run | Notes


./demo-k8s.sh onap init_robot

./demo-k8s.sh init

start with a full DCAE deploy (amsterdam) via OOM


ubuntu@a-onap-devopscd:~/oom/kubernetes/robot$ ./demo-k8s.sh onap init_robot

Number of parameters:

2

KEY:

init_robot

WEB Site Password for user 'test': ++ ETEHOME=/var/opt/OpenECOMP_ETE

++ VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'

+++ kubectl --namespace onap get pods

+++ sed 's/ .*//'

+++ grep robot

No resources found.

++ POD=

++ kubectl --namespace onap exec -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v WEB_PASSWORD:test -d /share/logs/demo/UpdateWebPage -i UpdateWebPage --display 89







optional: Before robot init (init_customer and distribute)






optional: cloud region PUT to AAI

from Postman:

PUT /aai/v11/cloud-infrastructure/cloud-regions/cloud-region/Openstack/RegionOne HTTP/1.1
Host: 34.232.186.178:30233
Accept: application/json
Content-Type: application/json
X-FromAppId: AAI
X-TransactionId: get_aai_subscr
Authorization: Basic QUFJOkFBSQ==
Cache-Control: no-cache
Postman-Token: d5de805a-3053-9fa3-55ba-256a60182458

{
"cloud-owner": "Openstack",
"cloud-region-id": "RegionOne",
"cloud-region-version": "v1",
"cloud-type": "SharedNode",
"cloud-zone": "CloudZone",
"owner-defined-type": "OwnerType",
"tenants": {
"tenant": [{
"tenant-id": "1035021",
"tenant-name": "ecomp-dev"
}]
}
}


201 created


OOM

GET /aai/v11/cloud-infrastructure/cloud-regions/cloud-region/Openstack/RegionOne HTTP/1.1
Host: 34.232.186.178:30233
Accept: application/json
Content-Type: application/json
X-FromAppId: AAI
X-TransactionId: get_aai_subscr
Authorization: Basic QUFJOkFBSQ==
Cache-Control: no-cache
Postman-Token: fe212362-58dc-99d8-c09a-c5de08995dbb

200 OK

{
"cloud-owner": "Openstack",
"cloud-region-id": "RegionOne",
"cloud-type": "SharedNode",
"owner-defined-type": "OwnerType",
"cloud-region-version": "v1",
"cloud-zone": "CloudZone",
"sriov-automation": false,
"resource-version": "1511745669015"
}
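
For reference, the same cloud-region calls can be made with curl instead of Postman - a minimal sketch assuming the same AAI endpoint and the default AAI:AAI basic-auth credentials shown above (adjust http/https and the host:port to your deployment):

# PUT the cloud-region (save the JSON body above as cloud-region.json)
curl -k -u AAI:AAI -X PUT \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -H "X-FromAppId: AAI" -H "X-TransactionId: put_cloud_region" \
  -d @cloud-region.json \
  http://34.232.186.178:30233/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/Openstack/RegionOne

# verify with a GET - expect 200 OK and the cloud-region body
curl -k -u AAI:AAI \
  -H "Accept: application/json" -H "X-FromAppId: AAI" -H "X-TransactionId: get_cloud_region" \
  http://34.232.186.178:30233/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/Openstack/RegionOne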


20171126

1

optional

TBD - cloud region PUT to AAI

Verify: the cloud-region is not set by robot ./demo.sh init (only the customer is) - we need to run the REST call for the cloud region ourselves

watch intermittent issues bringing up aai1 containers in AAI-513


HEAT
TBD 201711xx

SDC Distribution

(manual)


HEAT: http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm

OOM: http://<host>:30211

License Model

as cs0008 on SDC onboard | new license model | license key groups (network wide / Universal) |

Entitlement pools (network wide / absolute 100 / CPU / 000001 / Other tbd / Month) |

Feature Groups (123456) manuf ref # | Available Entitlement Pools (push right) |

License Agreements | Add license agreement (unlimited) - push right / save / check-in / submit | Onboard breadcrumb 

VF

Onboard | new Vendor (not Virtual) Software Product (FWL App L4+) - select network package not manual checkbox |

select LA (version 1, LA, then FG) | save | upload zip | proceed to validation | check in | submit

Onboard home | drop vendor software prod repo | select, import vsp | create | icon | submit for testing

Distributing

as jm0007 | start testing | accept 

as cs0008 | sdc home | see firewall | add service | cat=l4, 123456 create | icon | composition, expand left app L4 - drag | submit for testing 

as jm0007 | start testing | accept 

as gv0001 | approve 

as op0001 | distribute







TBD Customer creation


Note: robot ./demo.sh

oom: oom/kubernetes/robot/demo-k8s.sh







SDC Model Distribution

If you are at this step - switch over to Alexis de Talhouët's page on vFWCL instantiation, testing, and debugging







TBD VID Service creation







TBD VID Service Instance deployment







TBD VID Create VNF







VNF preload

OK (REST)


http://{{sdnc_ip}}:8282/restconf/operations/VNF-API:preload-vnf-topology-operation

note the service-type change - see the GUI top right

POST /restconf/operations/VNF-API:preload-vnf-topology-operation HTTP/1.1
Host: 10.12.5.92:8282
Accept: application/json
Content-Type: application/json
X-TransactionId: 0a3f6713-ba96-4971-a6f8-c2da85a3176e
X-FromAppId: API client
Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==
Cache-Control: no-cache
Postman-Token: e1c8d1ec-4cd9-5744-3ac9-f83f0d3c71d4

{
    "input": {
        "vnf-topology-information": {
            "vnf-topology-identifier": {
                "service-type": "11819dd6-6332-42bc-952c-1a19f8246663",
                "vnf-name": "DemoModule2",
                "vnf-type": "Vsp..base_vfw..module-0",
                "generic-vnf-name": "vFWDemoVNF",
                "generic-vnf-type": "vsp 0"
            },
            "vnf-assignments": {
                "availability-zones": [],
                "vnf-networks": [],
                "vnf-vms": []
            },
      "vnf-parameters":
      [
{
"vnf-parameter-name": "image_name",
"vnf-parameter-value": "ubuntu-14-04-cloud-amd64"
},
{
"vnf-parameter-name": "flavor_name",
"vnf-parameter-value": "m1.medium"
},
{
"vnf-parameter-name": "public_net_id",
"vnf-parameter-value": "971040b2-7059-49dc-b220-4fab50cb2ad4"
},
{
"vnf-parameter-name": "unprotected_private_net_id",
"vnf-parameter-value": "zdfw1fwl01_unprotected"
},
{
"vnf-parameter-name": "unprotected_private_subnet_id",
"vnf-parameter-value": "zdfw1fwl01_unprotected_sub"
},
{
"vnf-parameter-name": "protected_private_net_id",
"vnf-parameter-value": "zdfw1fwl01_protected"
},
{
"vnf-parameter-name": "protected_private_subnet_id",
"vnf-parameter-value": "zdfw1fwl01_protected_sub"
},
{
"vnf-parameter-name": "onap_private_net_id",
"vnf-parameter-value": "oam_onap_Ze9k"
},
{
"vnf-parameter-name": "onap_private_subnet_id",
"vnf-parameter-value": "oam_onap_Ze9k"
},
{
"vnf-parameter-name": "unprotected_private_net_cidr",
"vnf-parameter-value": "192.168.10.0/24"
},
{
"vnf-parameter-name": "protected_private_net_cidr",
"vnf-parameter-value": "192.168.20.0/24"
},
{
"vnf-parameter-name": "onap_private_net_cidr",
"vnf-parameter-value": "10.0.0.0/16"
},
{
"vnf-parameter-name": "vfw_private_ip_0",
"vnf-parameter-value": "192.168.10.100"
},
{
"vnf-parameter-name": "vfw_private_ip_1",
"vnf-parameter-value": "192.168.20.100"
},
{
"vnf-parameter-name": "vfw_private_ip_2",
"vnf-parameter-value": "10.0.100.5"
},
{
"vnf-parameter-name": "vpg_private_ip_0",
"vnf-parameter-value": "192.168.10.200"
},
{
"vnf-parameter-name": "vsn_private_ip_0",
"vnf-parameter-value": "192.168.20.250"
},
{
"vnf-parameter-name": "vsn_private_ip_1",
"vnf-parameter-value": "10.0.100.4"
},
{
"vnf-parameter-name": "vfw_name_0",
"vnf-parameter-value": "vFWDemoVNF"
},
{
"vnf-parameter-name": "vsn_name_0",
"vnf-parameter-value": "zdfw1fwl01snk01"
},
{
"vnf-parameter-name": "vnf_id",
"vnf-parameter-value": "vFirewall_vSink_demo_app"
},
{
"vnf-parameter-name": "vf_module_id",
"vnf-parameter-value": "vFirewall_vSink"
},
{
"vnf-parameter-name": "dcae_collector_ip",
"vnf-parameter-value": "127.0.0.1"
},
{
"vnf-parameter-name": "dcae_collector_port",
"vnf-parameter-value": "8080"
},
{
"vnf-parameter-name": "repo_url_blob",
"vnf-parameter-value": "https://nexus.onap.org/content/sites/raw"
},
{
"vnf-parameter-name": "repo_url_artifacts",
"vnf-parameter-value": "https://nexus.onap.org/content/groups/staging"
},
{
"vnf-parameter-name": "demo_artifacts_version",
"vnf-parameter-value": "1.1.0"
},
{
"vnf-parameter-name": "install_script_version",
"vnf-parameter-value": "1.1.0-SNAPSHOT"
},
{
"vnf-parameter-name": "key_name",
"vnf-parameter-value": "onapkey"
},
{
"vnf-parameter-name": "pub_key",
"vnf-parameter-value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDlc+Lkkd6qK4yrhwgyEXmDuseZihbdYk3Dd90p4/TTDCenGVdfdPU9r4KuCrn8nhjjhVvOx8s1hSi03NI9qHQasLcNCVavzse04kq/RlrkmEvSnqI0/HYNOMYASBQAxgF/pocbANnERcfzXrWiymK5Aqm3U8P25EkeKp9tQmSiijki8ywA5iXuBDWiPQxE5gtxotGMUH5EhElHXlQ2lWRc3IlHghfoh8sI3auz7Bimma3vEUd64e6uuZR5oxCdv3ybZBkYnOcgiGaeP7sWDpjggpI40bfoQ/PbZh4u9maLPmY8vm1HKebZgfwkgEXSi0B4QgUHlRcVWV7lNo+418Tt michaelobrien@obrienbiometrics"
},
{
"vnf-parameter-name": "cloud_env",
"vnf-parameter-value": "openstack"
}
 
 
      ]
       },
        "request-information": {
            "request-id": "robot12",
            "order-version": "1",
            "notification-url": "openecomp.org",
            "order-number": "1",
            "request-action": "PreloadVNFRequest"
        },
        "sdnc-request-header": {
            "svc-request-id": "robot12",
            "svc-notification-url": "http:\/\/openecomp.org:8080\/adapters\/rest\/SDNCNotify",
            "svc-action": "reserve"
        }
    }   
}




Result 200

{
    "output": {
        "svc-request-id": "robot12",
        "response-code": "200",
        "ack-final-indicator": "Y"
    }
}
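
The same preload can be posted with curl instead of Postman - a minimal sketch assuming the payload above is saved as preload.json and the default SDNC VNF-API admin credentials (the password is the one decoded from the Authorization header above / shared in the meeting chat below):

# POST the preload to the SDNC VNF-API restconf endpoint (same host:port as above)
curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U \
  -X POST -H "Content-Type: application/json" -H "Accept: application/json" \
  -d @preload.json \
  http://10.12.5.92:8282/restconf/operations/VNF-API:preload-vnf-topology-operation
# expect {"output":{"response-code":"200","ack-final-indicator":"Y",...}}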








VNF preload

(alternative, no postman)

(hope I got it right)

references to video are like

"X-mm:ss some text"

where X is 0..5 and the video is 20171128_1200_X_of_5_daily_session.mp4

  • Step 1: Prepare the JSON. You need the JSON payload from above
  • You need to be very careful with the wording - it is extremely confusing

  • Press the little “I” next to the service instance
  • The next dialog shows a ‘Service Instance ID:’
  • Copy the value into "service-type“ field of JSON payload
  • Close the dialog
  • (2-20:15 get service instance in Video)

  • press little "i" in vnf
  • Look for VNF Type, take the part after the slash and copy value into “generic-vnf-type” of JSON payload
  • Look for VNF Name and copy the value into “generic-vnf-name” of JSON payload
  • Look for the vnf-parameter-name “vfw_name_0”
  • Put the same value in the associated “vnf-parameter-value” field
  • Close Dialog
  • (2-21:25 in the video)

  • Press the green add VNF Module Button
  • Select desired module (depends whether you have already added both for the demo)
  • Look for Model Name and copy value to vnf-type of JSON payload
  • Cancel(!) from dialog


  • Fill remaining Parameters
  • Select a proper module name and put it in the vnf-name field of JSON payload
  • Get the name of the onap-private network and put it in the onap_private_net_id and onap_private_subnet_id fields of vnf-parameters of JSON payload
  • Double check the public net id
  • Make sure the correct ssh key is configured under vnf-parameters


  • Scroll down to ‘POST /operations/VNF-API:preload-vnf-topology-operation’. Careful, there are similar entries there too
  • Copy your JSON into the field for the request body

  • Scroll down to “Try It” and try it
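
To confirm the preload was actually stored, a hedged check against the VNF-API config tree (same SDNC host:port and credentials as in the REST example above):

# list stored preloads - the vnf-name / vnf-type pair entered above should appear
curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U \
  -H "Accept: application/json" \
  http://10.12.5.92:8282/restconf/config/VNF-API:preload-vnfs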







SDNC VNF Preload

(Integration-Jenkins lab)


(from Marco 20171128)







TBD VID Create VF-Module (vSNK)


Need to delete the previous failure first - raise a JIRA on the error

for now, add a postfix to the name and recreate







TBD VID Create VF-Module (vPG)







TBD Robot Heatbridge







TBD APPC mountpoint (Robot or REST)







APPC mountpoint for vFW closed-loop

(Integration-Jenkins lab)







Verifying the vFirewall

Original/Ongoing Doc References

Running the ONAP Demos

running vFW Demo on ONAP Amsterdam Release

Clearwater vIMS Onboarding and Instantiation

UCA-20 OSS JAX-RS 2 Client

Vetted vFirewall Demo - Full draft how-to for F2F and ReadTheDocs

Integration Use Case Test Cases - could not find vFW content here

ONAP master branch Stabilization

OOM-1

INT-106

INT-284

List of ONAP Implementations under Test by Environment

Please add yourself to the list so we can target EPIC work based on environment affinity 

Environment | Branch | Deployer | Contacts | vFW status | Notes
Intel Openlab | master | HEAT | none

cloud: http://10.12.25.2/auth/login/?next=/project/instances/

servers

Starting up (20171123) - not ready yet

Intel Openlab | master | OOM Kubernetes | none

cloud: http://10.12.25.2/auth/login/?next=/project/instances/

server: 10.12.25.117

key: openlab_oom_key (pass by mail)

(non-DCAE ONAP components only) partial - 16g only until the quota is increased or we cluster 4 VMs

OOM-461

Intel Openlab | release-1.1.0 | OOM Kubernetes | none

cloud: http://10.12.25.2/auth/login/?next=/project/instances/

server: 10.12.25.119

key: openlab_oom_key (pass by mail)

watch INT-344

Rackspace | master | OOM Kubernetes | none

(non-DCAE ONAP components only) DCAEGEN2 not tested yet for R1

Amazon AWS EC2 | master | OOM Kubernetes | none | (non-DCAE ONAP components only) - spot node terminated
Amazon AWS ECS | OOM Kubernetes | pending test | n/a | (non-DCAE ONAP components only) - node terminated
Google GCE | master | OOM Kubernetes | (non-DCAE ONAP components only) - node closed
Google GCE CaaS | OOM Kubernetes | pending test | n/a | (non-DCAE ONAP components only)
Rackspace | HEAT | not supported yet | n/a
Alibaba VM | OOM Kubernetes | none

not tested yet

Continuous Deployment References

Tech | Servers | Details

HEAT

Kubernetes

Jobs (AWS): jenkins.onap.info

Analytics (AWS): kibana.onap.info

CD servers (AWS): dev.onap.info

OOM R2 Master (Beijing): http://jenkins.onap.info/job/oom-cd-release-110-branch/

OOM R1 (Amsterdam): http://jenkins.onap.info/job/oom-cd/

Formal Recordings

Put all daily and ongoing vFW formal-run videos here - in the lead-up to the 2 conferences.

Recording details | Recording embedded (currently limited to 30 min due to the 100 MB limit) or link

ONAP installation of

OOM from clean VM to Healthcheck

ONAP R1 OOM from clean AWS VM to deployed ONAP

3 videos - reuse for OOM-395


20171208 : GUI only for SDC onboarding in OOM 20171208 release-1.1.0 - no devops screens in this one so it can be used for demos



OOM vFirewall SDC distribution to VF-Module creation | See Alexis' vFWCL instantiation, testing, and debugging

ONAP installation of

HEAT from empty OPENSTACK to Healthcheck


HEAT vFirewall SDC distribution to VF-Module creation | see Alexis' vFWCL instantiation, testing, and debugging


Daily Working Recordings

Date | Video | Notes / TODO

20171127

HEAT: get back to the vnf preload - continue to the 3 vFW VMs coming up

todo: use the split template (abandon the single VNF)

todo: stop using robot for all except customer creation - essentially everything is REST and VID

todo: fix DNS of the onap env file

OOM: go over master status, get a 1.1.0 branch up separately



CHAT:

From Brian to Everyone: (12:06)
Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
From gaurav gupta to Everyone: (12:07)
VNF-API
From Geora Barsky to Everyone: (12:17)
dns_list: ["10.12.25.5", "8.8.8.8"]
external_dns: 8.8.8.8
dns_forwarder: 10.12.25.5
oam_network_cidr: 10.0.0.0/16
From Kedar Ambekar to Everyone: (12:20)
10.12.25.5 is the IP of DNS server that would get spawned in ONAP stack ?
From Josef Reisinger to Everyone: (12:25)
this gives a hosts-like list of servers and IP's
#. ~/admin-openrc
openstack server list -f value -c Name -c Networks|sed -n 's#\([^ ][^ ]*\) oam.*, \(.*\)#\2 \1#p'
From Josef Reisinger to Everyone: (13:09)
Sorry.. issues with Mic...

20171128


HEAT: error on vf-module creation (MSO Heat issue)

12:23:15 From Eric Debeau : The API for licence model creation are not documented in R1
12:28:49 From Alexis de Talhouët : handy command: find . -name ".DS_Store" | xargs rm -r
12:29:10 From Marco Platania : zip vFW.zip *
12:29:57 From Alexis de Talhouët : https://stackoverflow.com/questions/10924236/mac-zip-compress-without-macosx-folder/23372210#23372210
12:30:05 From Alexis de Talhouët : zip -r -X Archive.zip *
12:33:44 From Alexis de Talhouët : Actually, it’s not only the .DS_Store, it’s also that osx adds a __MACOSX empty folder in the zip file
12:51:43 From Josef Reisinger : +1 for the enhancement on robot:88 :-)
12:51:56 From Alexis de Talhouët : Yeah, good stuff!!
12:53:01 From Eric Debeau : echo "onap:onap" > /etc/lighttpd/authorization
13:07:15 From Eric Debeau : It is cool to use a REST API for the preload instead using the Robot. We should document it.
13:26:52 From mryan : Thanks Michael, informative session! I need to jump on another call
13:27:47 From Eric Debeau : Thanks for this meeting.
13:29:12 From ramki : Thank you so much Michael for arranging this!
13:29:30 From Josef Reisinger : thanks for the walk-through. tty tomorrow
13:30:05 From ramki to Michael O'Brien (Privately) : Michael - do you have a few minutes for OOM?
13:32:28 From Gaurav Gupta (VMware) : Any one trying vLB/vDNS on amsterdam
13:33:26 From Gaurav Gupta (VMware) to Michael O'Brien (Privately) : Any one trying vLB/vDNS
13:33:52 From Michael O'Brien to Gaurav Gupta (VMware) (Privately) : not yet - but in future as with vCPE/vVolte


=================================================================

Time markers in the videos to the left. The "Part"-number represents part 0..4 in the file name

Part Marker comment
0 14:30 Demo flow explained
0 16:35 statement: no automated flow
0 cloud region create discussion
1 22:40 distribution: monitor progress
2 3:01 prepare robot to have html page
2 9:01 check customer in aai
2 13:34 start VID
2 18:30 Discussion about order of vnf & vnf module creation (vFW/vSNK)
2 19:57 preload vFW
2 20:15 get service instance (little "i" in circle in the Service Instance Line in VID)
2 21:10 use service instance ID as Service type(!) in JSON payload
2 21:25 vnf type (press the little "i" in the vnf): vf-type is whatever is after the slash
2 22:05 replace generic-vnf-type with that value
2 22:10 vnf name
2 22:20 back to vid screen where you got generic-vnf-type and take "VNF Name"
2 22:29 place it as generic-vnf-name in JSON
2 22:30 vnf name and vnf type; get vnf name by pressing green add VF Module
2 22:53 select vnf name from the dark field or "Model Name" from the dialog
2 23:08 Put value in vnf-type
2 23:13 vnf-name: select and remember as in the previous demo for robot preload
2 24:06 vm host name has to match generic vnf name
2 25:09 make sure parameter name matches vnf-generic-name
2 25:46 check public key to match private key used
2 29:05 the important thing is to preserve the order; from here it is like the previous demo
2 29:30 Click SDNC preload(!)
2 29:49 Module vs VM discussion
3 00:28 Create vfModule for vFW
3 02:37 Poll timeout

20171129 OOM

chat

minimal OOM/HEAT deployment for vFW

11:04:28 From Michael O'Brien : ./createAll.bash -n onap -a mso
./createAll.bash -n onap -a message-router
./createAll.bash -n onap -a sdnc
./createAll.bash -n onap -a vid
./createAll.bash -n onap -a robot
./createAll.bash -n onap -a portal
./createAll.bash -n onap -a policy
./createAll.bash -n onap -a appc
./createAll.bash -n onap -a aai
./createAll.bash -n onap -a sdc
11:04:35 From Michael O'Brien : ./createAll.bash -n onap -a multicloud
11:04:42 From Michael O'Brien : ./createAll.bash -n onap -a msb
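
For convenience, the same minimal component set can be deployed in one loop - a sketch assuming an Amsterdam oom clone where createAll.bash lives in kubernetes/oneclick:

# deploy the minimal ONAP component set for the vFW flow, in the order listed in the chat above
cd ~/oom/kubernetes/oneclick
for app in mso message-router sdnc vid robot portal policy appc aai sdc multicloud msb; do
  ./createAll.bash -n onap -a $app
done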

20171129 HEAT

chat


20171130

OOM

chat

11:06:25 From Alexis de Talhouët : /dockerdata-nfs/onap/robot/eteshare/config
11:06:30 From Alexis de Talhouët : vm_properties.py
11:41:38 From Michael O'Brien : sorry for who is on - i am in a meeting for 15 min - bringing up 1.1 for 1200
11:52:28 From Michael O'Brien : back
12:30:04 From Brian : { "global-customer-id": "SDN-ETHERNET-INTERNET", "subscriber-name": "SDN-ETHERNET-INTERNET", "subscriber-type": "INFRA" }
12:37:39 From Alexis de Talhouët : https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/vid/templates/vid-server-deployment.yaml;h=e8c7f555230535a9105f721c62a45d0e6a474e55;hb=refs/heads/release-1.1.0
12:40:08 From Alexis de Talhouët : VID_MSO_PASS=OBF:1ih71i271vny1yf41ymf1ylz1yf21vn41hzj1icz
13:00:16 From Brian : { "service": [ { "service-id": "07a3fc26-6a00-479f-93a3-41fa498d6ab9", "service-description": "vFW", "resource-version": "1511299109970" }, { "service-id": "844dbaa8-399a-4809-b7a8-f69fa7851b13", "service-description": "vLB", "resource-version": "1511299110162" }, { "service-id": "085806ee-3c48-49d1-8403-77b1713fccdd", "service-description": "vCPE", "resource-version": "1511299110345" }, { "service-id": "6ba8a6a0-4673-4f91-8d31-cf90b0778b4b", "service-description": "vIMS", "resource-version": "1511299110535" } ] }
13:02:37 From Brian : { "global-customer-id": "SDN-ETHERNET-INTERNET", "subscriber-name": "SDN-ETHERNET-INTERNET", "subscriber-type": "INFRA", "resource-version": "1510614325211", "service-subscriptions": { "service-subscription": [ { "service-type": "vCPE", "relationship-list": { "relationship": [ { "related-to": "tenant", "related-link": "/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/087050388b204c73a3e418dd2c1fe30b", "relationship-data": [ { "relationship-key": "cloud-region.cloud-owner", "relationship-value": "CloudOwner" }, { "relationship-key": "cloud-region.cloud-re
13:06:44 From Alexis de Talhouët : https://lf-onap.atlassian.net/wiki/display/DW/Development+Guides?preview=%2F1015874%2F1017418%2FUsing_openecomp_MSO.docx
13:08:17 From Alexis de Talhouët : http://10.195.197.142:30223/mso/logging/debug

20171201

OOM

Agenda

Pull master and release-1.1.0 patches (merged) fixed yesterday by Alexis de T.

https://gerrit.onap.org/r/#/q/status:merged+project:+oom

Servers

amsterdam.onap.info = 1.1.0 oom

cd.onap.info = master

onap-parameters.yaml points to my personal Rackspace in case we get to VF-Module creation

The 2 vFWVL zips require a network predefined on Rackspace


Results: robot init passed; later Alexis tested the extra SDNC call from Marco's video, got all the way to VF-Module creation for the first vFW template, and saw the 2 VMs up in OpenStack - a very big thank you to Alexis for all the work over the last 4 days, the 15+ commits and the new config docker image - retrofitting the details over the weekend


Also, our friends at VMware under Ramki are running OK under OOM release-1.1.0 in prep for their demo of ONAP Amsterdam R1 OOM at KubeCon on Tuesday morning - one week before our ONAP F2F in Santa Clara on the 11th.


Generated JIRAs

OOM-461

AAI-513

INT-346

OOM-475

SDNC-208

VID-96

SDC-716

OOM-478

OOM-482

OOM-483

OOM-484



Fixes to Pull and Test

https://gerrit.onap.org/r/#/c/25287/1/kubernetes/config/docker/init/src/config/aai/data-router/dynamic/conf/entity-event-policy.xml

https://gerrit.onap.org/r/#/c/25277/

https://gerrit.onap.org/r/#/c/25257/

https://gerrit.onap.org/r/#/c/25263/

https://gerrit.onap.org/r/#/c/25279/1

https://gerrit.onap.org/r/#/c/25283/

https://gerrit.onap.org/r/#/c/25289/1
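
To test one of these changes before it merges, a hedged example of pulling a Gerrit patchset into a local oom clone (the refs/changes path follows the standard Gerrit <last-two-digits>/<change-number>/<patchset> pattern - confirm the exact ref from the change's Download tab):

# example for change 25287, patchset 1
cd ~/oom
git fetch https://gerrit.onap.org/r/oom refs/changes/87/25287/1
git cherry-pick FETCH_HEAD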

Access and Deployment Configuration

OOM Deployment

Follow instructions at ONAP on Kubernetes#AutomatedInstallation

Openlab VNC and CLI

The following is missing some sections and is a bit out of date (v2 deprecated in favor of v3) - Integration Testing Schedule, 10-09-2017



Get an openlab account - Integration / Developer Lab Access

Stephen Gooch provides excellent/fast service - raise a JIRA like the following

OPENLABS-75

Install openVPN - Using Lab POD-ONAP-01 Environment

For OSX both Viscosity and TunnelBlick work fine

Login to Openstack

Install the openstack command line tools - Tutorial: Configuring and Starting Up the Base ONAP Stack#InstallPythonvirtualenvTools (optional, but recommended)
get your v3 rc file

verify your openstack cli access (or just use the jumpbox)
obrienbiometrics:aws michaelobrien$ source logging-openrc.sh 
obrienbiometrics:aws michaelobrien$ openstack server list
+--------------------------------------+---------+--------+-------------------------------+------------+
| ID                                   | Name    | Status | Networks                      | Image Name |
+--------------------------------------+---------+--------+-------------------------------+------------+
| 1ed28213-62dd-4ef6-bdde-6307e0b42c8c | jenkins | ACTIVE | admin-private-mgmt=10.10.2.34 |            |
+--------------------------------------+---------+--------+-------------------------------+------------+
get 15 floating (elastic) IPs

You may need to release unused IPs from other tenants - as we have 4 pools of 50
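
A hedged way to check and free floating IPs from the CLI (delete only addresses you know are unused):

# list floating IPs in the current project - unattached ones show an empty Fixed IP Address
openstack floating ip list
# release an unused address by ID
openstack floating ip delete <floating-ip-id>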

fill in your stack env parameters

onap_openstack.env

  public_net_id: 971040b2-7059-49dc-b220-4fab50cb2ad4

  public_net_name: external

  ubuntu_1404_image: ubuntu-14-04-cloud-amd64

  ubuntu_1604_image: ubuntu-16-04-cloud-amd64

  flavor_small: m1.small

  flavor_medium: m1.medium

  flavor_large: m1.large

  flavor_xlarge: m1.xlarge

  flavor_xxlarge: m1.xxlarge

  vm_base_name: onap

  key_name: onap_key

  pub_key: ssh-rsa AAAAobrienbiometrics

  nexus_repo: https://nexus.onap.org/content/sites/raw

  nexus_docker_repo: nexus3.onap.org:10001

  nexus_username: docker

  nexus_password: docker

  dmaap_topic: AUTO

  artifacts_version: 1.1.0-SNAPSHOT

  openstack_tenant_id: a85a07a5f34d4yyyyyyy

  openstack_tenant_name: Logyyyyyyy

  openstack_username: michaelyyyyyy

  openstack_api_key: Wyyyyyyy

  openstack_auth_method: password

  openstack_region: RegionOne

  horizon_url: http://10.12.25.2:5000/v3

  keystone_url: http://10.12.25.2:5000

  dns_list: ["10.12.25.5", "8.8.8.8"]

  external_dns: 8.8.8.8

  dns_forwarder: 10.12.25.5

  oam_network_cidr: 10.0.0.0/16

follow

http://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/installation_heat.html

  dnsaas_config_enabled: true  

dnsaas_region: RegionOne  

dnsaas_keystone_url: http://10.12.25.5:5000/v3  

dnsaas_tenant_name: Logging  

dnsaas_username: demo  

dnsaas_password: onapdemo  

dcae_keystone_url: http://10.12.25.5:5000/v2  

dcae_centos_7_image: CentOS-7  

dcae_domain: dcaeg2.onap.org


  dcae_public_key: PUT THE PUBLIC KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS

  dcae_private_key: PUT THE SECRET KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS

Run the HEAT stack
obrienbiometrics:openlab michaelobrien$ openstack stack create -t onap_openstack.yaml -e onap_openstack.env ONAP1125_6
| id                  | 9b026354-c071-4e31-8611-11fef2f408f5     |
| stack_name          | ONAP1125_6                               |
| description         | Heat template to install ONAP components |
| creation_time       | 2017-11-26T02:16:57Z                     |
| updated_time        | 2017-11-26T02:16:57Z                     |
| stack_status        | CREATE_IN_PROGRESS                       |
| stack_status_reason | Stack CREATE started                     |
obrienbiometrics:openlab michaelobrien$ openstack stack list
| 9b026354-c071-4e31-8611-11fef2f408f5 | ONAP1125_6 | CREATE_IN_PROGRESS | 2017-11-26T02:16:57Z | 2017-11-26T02:16:57Z 
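
While waiting, progress can be followed from the CLI - a small sketch using standard Heat commands against the stack created above:

# overall stack status
openstack stack show ONAP1125_6 -c stack_status -c stack_status_reason
# per-resource events - useful to see which VM or port failed if the stack goes CREATE_FAILED
openstack stack event list ONAP1125_6
openstack stack resource list ONAP1125_6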


Wait for deployment

DCAE and several multi-service VMs are down

obrienbiometrics:openlab michaelobrien$ openstack server list

| db5388c0-9fa5-4359-ad21-689dd0ce8955 | onap-multi-service  | ERROR  |                                             | ubuntu-16-04-cloud-amd64 |
| d712dce1-d39d-4c6e-8d21-d9da9aa40ea1 | onap-dcae-bootstrap | ACTIVE | oam_onap_awsf=10.0.4.1, 10.12.5.197         | ubuntu-16-04-cloud-amd64 |
| 4724fa8e-e10b-46cb-a81d-e7a9a7df041e | onap-aai-inst1      | ACTIVE | oam_onap_awsf=10.0.1.1, 10.12.5.118         | ubuntu-14-04-cloud-amd64 |
| bc4ef1f3-422d-4e66-a21b-8c5a3d206938 | onap-portal         | ACTIVE | oam_onap_awsf=10.0.9.1, 10.12.5.241         | ubuntu-14-04-cloud-amd64 |
| 0f9edb8e-a379-4ab1-a6b1-c24763b69ecd | onap-policy         | ACTIVE | oam_onap_awsf=10.0.6.1, 10.12.5.17          | ubuntu-14-04-cloud-amd64 |
| bd1f29e3-e05e-4570-9f41-94af83aec7d6 | onap-aai-inst2      | ACTIVE | oam_onap_awsf=10.0.1.2, 10.12.5.252         | ubuntu-14-04-cloud-amd64 |
| 57e90b08-d69e-4770-a298-97f64387e60d | onap-dns-server     | ACTIVE | oam_onap_awsf=10.0.100.1, 10.12.5.237       | ubuntu-14-04-cloud-amd64 |
| e9dd8800-0f77-4658-90b0-db98f4689485 | onap-message-router | ACTIVE | oam_onap_awsf=10.0.11.1, 10.12.5.234        | ubuntu-14-04-cloud-amd64 |
| af6120d8-419a-45f9-ae32-b077b9ace407 | onap-sdnc           | ACTIVE | oam_onap_awsf=10.0.7.1, 10.12.5.226         | ubuntu-14-04-cloud-amd64 |
| b6daf774-dc6a-4c9b-aaa3-ca8fc5734ac3 | onap-clamp          | ACTIVE | oam_onap_awsf=10.0.12.1, 10.12.5.128        | ubuntu-16-04-cloud-amd64 |
| 31524fcb-d1b2-427b-b0bf-29e8fc65fded | onap-sdc            | ACTIVE | oam_onap_awsf=10.0.3.1, 10.12.5.92          | ubuntu-16-04-cloud-amd64 |
| 31f8c1e7-a7e7-417d-a9df-cc5d65d7777c | onap-vid            | ACTIVE | oam_onap_awsf=10.0.8.1, 10.12.5.218         | ubuntu-14-04-cloud-amd64 |
| 482befc8-2a6a-4da7-8e05-8f8b294f80d2 | onap-robot          | ACTIVE | oam_onap_awsf=10.0.10.1, 10.12.6.21         | ubuntu-16-04-cloud-amd64 |
| 8ea76387-aadf-46da-8257-5e9c2f80fa48 | onap-appc           | ACTIVE | oam_onap_awsf=10.0.2.1, 10.12.5.222         | ubuntu-14-04-cloud-amd64 |
| 43b90061-885f-454b-8830-9da3338fca56 | onap-so             | ACTIVE | oam_onap_awsf=10.0.5.1, 10.12.5.230         | ubuntu-16-04-cloud-amd64 |

configure local

vi /etc/hosts

Enable the robot webserver to see error logs and get /etc/hosts values

HEAT

root@onap-robot:/opt# ./demo.sh init_robot

OOM

oom/kubernetes/robot/demo-k8s.sh init_robot

http://10.12.5.129:88/


10.12.5.214 policy.api.simpledemo.onap.org 

10.12.5.118 portal.api.simpledemo.onap.org 

10.12.5.141 sdc.api.simpledemo.onap.org 

10.12.5.92  vid.api.simpledemo.onap.org 

Verify AAI_VM1 DNS

Intermittently, AAI1 does not fully initialize: docker gets installed and the test-config dir gets pulled, but the 6 docker containers in the compose file will not be up.

Log in to the AAI VM immediately after stack startup and add the following to /etc/hosts before running test-config

root@onap-aai-inst1:~# cat /etc/hosts
10.0.1.2 aai.hbase.simpledemo.openecomp.org
10.12.5.213 aai.hbase.simpledemo.openecomp.org
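
After adding the hosts entry, a hedged spot check that the AAI containers actually came up on onap-aai-inst1 (the exact container names depend on the Amsterdam test-config compose files):

# expect the 6 compose containers (hbase, model-loader, resources, traversal, etc.) to show as Up
docker ps
# if they are missing, re-run the test-config install step for AAI and check again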

Enable robot webserver


Spot check containers

| 1fe78720-e418-47f7-bcfd-b6b93c791448 | oom-cd-obrien-cd0   | ACTIVE | admin-private-mgmt=10.10.2.15, 10.12.25.117

check robot health

Core components are PASS, so let's continue with the vFW

Thanks Alexis for the 20171130 changes

http://jenkins.onap.info/job/oom-cd/528/console

15:39:15 Basic SDNGC Health Check | PASS |

15:39:15 Basic A&AI Health Check | PASS |

15:39:15 Basic Policy Heal