Vetted vFirewall Demo - Full draft how-to for F2F and ReadTheDocs

20181220 - update for Casablanca - TODO: review the vFW automation in https://github.com/garyiwu/onap-lab-ci - thanks Yang Xu

This long-winded page name will revert to "Running the ONAP vFirewall Demo..." when we are finished (before 9 Dec) - and the page will be moved out of the wiki root

Please join and post "validated" actions/config/results - but do not move or edit this page until we get a complete vFW run - ideally before the 4 Dec KubeCon conference and, worst case, before the 11 Dec ONAP Conference - thank you

Under construction - this page consolidates all the details for getting the vFirewall running over the next 2 weeks, in preparation for anyone who would like to demo it at the F2F in Dec.

ADD content ONLY when verified - with evidence (screen-cap, JSON output, etc.)

DO paste any questions and unverified config/actions in the comment section at the end - for the team to verify


HEAT Daily meeting at 1200 EDT (noon), 27 Nov to 8 Dec 2017
https://zoom.us/j/7939937123 see schedule at https://lists.onap.org/pipermail/onap-discuss/2017-November/006483.html

OOM Daily meeting at 1100 EDT, 29 Nov to 1 Dec 2017 - https://lists.onap.org/pipermail/onap-discuss/2017-November/006575.html

Statement of Work

Ideally this page serves as the draft that will go into ReadTheDocs.io - at which point this page gets deleted and replaced with a reference.

There are currently 3 or more distinct pages, email threads, presentations, phone calls, and meetings holding the details needed to get a running vFirewall up "step by step".

We would like to get back to the point we were at before Aug 2017, where an individual with an OpenStack environment (and now OOM as well) could follow each instruction (action, with expected/documented result/output) and end up with our current minimal sanity use case - the vFirewall.

If you have any details on configuring and bringing up the vFirewall, post them to the comments section and they will be tested and incorporated.

Ideally any action added to this page is fully tested, with the resulting output (text/screencap) pasted as a reference.

JIRAs:  OOM-459 - Getting issue details... STATUS  for OOM and  INT-106 - Getting issue details... STATUS  for HEAT

Output

1 - This set of instructions below - to go from an empty OOM host or OpenStack lab all the way to closed loop running.

2 - A set of videos - the vFirewall from already-deployed OOM and HEAT environments - see the reference videos at Running the ONAP Demos#ONAPDeploymentVideos and  INT-333 - Getting issue details... STATUS

3 - Secondary videos on bringing up OOM and HEAT deployments

Running the vFirewall Demo

sync with Running the ONAP Demos#QuickstartInstructions

TODO: check for JIRA on appc demo.robot working : 20171128 (worked in 1.0.0)

20180307 - SDC 503 - see pod reordering in amsterdam https://lists.onap.org/pipermail/onap-discuss/2018-March/008403.html - need to raise jira

Prerequisites

Artifact | Location | Notes

private key (ssh-add)


obrienbiometrics:onap_public michaelobrien$ ssh-keygen


SHA256:YzLggI8nGXna0Ssx0DMpLvZKSPTGZJ1mXwj2XZ+c8Gg michaelobrien@obrienbiometrics.local

paste onap_public.pub into the pub_key: sections of all the onap_openstack and vFW env files
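The key generation above can be scripted end to end. A minimal sketch - the file name `onap_key` and the comment are examples, not the names from the transcript:

```shell
# Generate a throwaway demo keypair non-interactively (file name is an example);
# the single-line .pub contents go into the pub_key: sections of the env files
ssh-keygen -t rsa -b 4096 -N "" -f ./onap_key -C "onap-demo" -q
cat ./onap_key.pub
# ssh-add ./onap_key   # optional: load it into a running ssh-agent for later VM logins
```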



openstack yaml and env

https://nexus.onap.org/content/sites/raw/org.onap.demo/heat/ONAP/1.1.0-SNAPSHOT/

demo/heat/onap/onap-openstack.*


vFirewall yaml and env
(2 VNFs)

unverified

We will use the split vFWCL (vFW closed loop) in demo/heat/vFWCL


demo/heat/vFWCL/vFWPKG/base_vpkg.env

demo/heat/vFWCL/vFWSNK/base_vfw.env

  image_name: ubuntu-14-04-cloud-amd64

  flavor_name: m1.medium

  public_net_id: 971040b2-7059-49dc-b220-4fab50cb2ad4

cloud_env: openstack

  onap_private_net_id: oam_onap_6Gve

  onap_private_subnet_id: oam_onap_6Gve

Note: the network must be the one that shows on the instances page - or the only non-shared one in the network list
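The lab-specific values above can be stamped into the env file with sed. A sketch - the env body below is a stub (use the real demo/heat/vFWCL/vFWSNK/base_vfw.env), and the UUID/network names are the examples shown above; substitute your own:

```shell
# Stub env file for illustration -- in practice start from the real base_vfw.env
ENV=base_vfw.env
cat > "$ENV" <<'EOF'
parameters:
  image_name: ubuntu-14-04-cloud-amd64
  flavor_name: m1.medium
  public_net_id: PUBLIC_NET_UUID
  onap_private_net_id: OAM_NET_NAME
  onap_private_subnet_id: OAM_SUBNET_NAME
  cloud_env: openstack
EOF
# stamp in the lab values (example UUID and network name from this page)
sed -i \
  -e 's|PUBLIC_NET_UUID|971040b2-7059-49dc-b220-4fab50cb2ad4|' \
  -e 's|OAM_NET_NAME|oam_onap_6Gve|' \
  -e 's|OAM_SUBNET_NAME|oam_onap_6Gve|' "$ENV"
grep onap_private "$ENV"
```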


not the older

https://nexus.onap.org/content/sites/raw/org.onap.demo/heat/vFW/1.1.0-SNAPSHOT/

or the deprecated https://nexus.onap.org/content/sites/raw/org.openecomp.demo/heat/vFW/1.1.0-SNAPSHOT/






demo/heat/vFWCL/vFWPKG/base_vpkg.env







vFirewall Tasks

Ideally we have an automated one-click vFW deployment - this is in the works.

sync with Running the ONAP Demos#QuickstartInstructions

# | Task | Action | Result | Artifacts | Env | Verified / last run | Notes

Action: REST URL + JSON payload, UI screencap, or console cmd
Result: JSON / text / screencap
Artifacts: link or attached file
Env: OOM, HEAT, or both


./demo-k8s.sh onap init_robot

./demo-k8s.sh init

start with a full DCAE deploy (amsterdam) via OOM


ubuntu@a-onap-devopscd:~/oom/kubernetes/robot$ ./demo-k8s.sh onap init_robot

Number of parameters:

2

KEY:

init_robot

WEB Site Password for user 'test': ++ ETEHOME=/var/opt/OpenECOMP_ETE

++ VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'

+++ kubectl --namespace onap get pods

+++ sed 's/ .*//'

+++ grep robot

No resources found.

++ POD=

++ kubectl --namespace onap exec -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v WEB_PASSWORD:test -d /share/logs/demo/UpdateWebPage -i UpdateWebPage --display 89







optional | Before robot init (init_customer and distribute)






optional | cloud region PUT to AAI

from Postman:

PUT /aai/v11/cloud-infrastructure/cloud-regions/cloud-region/Openstack/RegionOne HTTP/1.1
Host: 34.232.186.178:30233
Accept: application/json
Content-Type: application/json
X-FromAppId: AAI
X-TransactionId: get_aai_subscr
Authorization: Basic QUFJOkFBSQ==
Cache-Control: no-cache
Postman-Token: d5de805a-3053-9fa3-55ba-256a60182458

{
"cloud-owner": "Openstack",
"cloud-region-id": "RegionOne",
"cloud-region-version": "v1",
"cloud-type": "SharedNode",
"cloud-zone": "CloudZone",
"owner-defined-type": "OwnerType",
"tenants": {
"tenant": [{
"tenant-id": "1035021",
"tenant-name": "ecomp-dev"
}]
}
}


201 created


OOM

GET /aai/v11/cloud-infrastructure/cloud-regions/cloud-region/Openstack/RegionOne HTTP/1.1
Host: 34.232.186.178:30233
Accept: application/json
Content-Type: application/json
X-FromAppId: AAI
X-TransactionId: get_aai_subscr
Authorization: Basic QUFJOkFBSQ==
Cache-Control: no-cache
Postman-Token: fe212362-58dc-99d8-c09a-c5de08995dbb

200 OK

{
"cloud-owner": "Openstack",
"cloud-region-id": "RegionOne",
"cloud-type": "SharedNode",
"owner-defined-type": "OwnerType",
"cloud-region-version": "v1",
"cloud-zone": "CloudZone",
"sriov-automation": false,
"resource-version": "1511745669015"
}
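The Postman PUT above can also be sent with curl. A sketch - AAI_HOST defaults to the example NodePort from above (substitute your own); the network call is left commented, and the payload is written and syntax-checked locally first:

```shell
# AAI_HOST is the example host:NodePort from this page -- replace with yours
AAI_HOST=${AAI_HOST:-34.232.186.178:30233}
cat > cloud-region.json <<'EOF'
{
  "cloud-owner": "Openstack",
  "cloud-region-id": "RegionOne",
  "cloud-region-version": "v1",
  "cloud-type": "SharedNode",
  "cloud-zone": "CloudZone",
  "owner-defined-type": "OwnerType",
  "tenants": {"tenant": [{"tenant-id": "1035021", "tenant-name": "ecomp-dev"}]}
}
EOF
# syntax-check the payload before sending it anywhere
python3 -m json.tool cloud-region.json > /dev/null && echo "payload OK"
# curl -s -X PUT "http://$AAI_HOST/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/Openstack/RegionOne" \
#   -u AAI:AAI -H 'Content-Type: application/json' -H 'Accept: application/json' \
#   -H 'X-FromAppId: AAI' -H 'X-TransactionId: get_aai_subscr' --data @cloud-region.json
```

(The `Authorization: Basic QUFJOkFBSQ==` header in the Postman capture is the Base64 of `AAI:AAI`, which is what `-u AAI:AAI` sends.)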


20171126

1

optional

TBD - cloud region PUT to AAI

Verify: cloud-region is not set by robot ./demo.sh init (only the customer is) - we need to run the REST call for cloud region ourselves

watch intermittent issues bringing up aai1 containers in AAI-513 - Getting issue details... STATUS


HEAT
TBD 201711xx

SDC Distribution

(manual)


HEAT http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm

OOM: http://<host>:30211

License Model

as cs0008 on SDC onboard | new license model | license key groups (network wide / Universal) |

Entitlement pools (network wide / absolute 100 / CPU / 000001 / Other tbd / Month) |

Feature Groups (123456) manuf ref # | Available Entitlement Pools (push right) |

License Agreements | Add license agreement (unlimited) - push right / save / check-in / submit | Onboard breadcrumb 

VF

Onboard | new Vendor (not Virtual) Software Product (FWL App L4+) - select network package not manual checkbox |

select LA (version 1, LA, then FG) save | upload zip | proceed to validation | checkin | submit

Onboard home | drop vendor software prod repo | select, import vsp | create | icon | submit for testing

Distributing

as jm0007 | start testing | accept 

as cs0008 | sdc home | see firewall | add service | cat=l4, 123456 create | icon | composition, expand left app L4 - drag | submit for testing 

as jm0007 | start testing | accept 

as gv0001 | approve 

as op0001 | distribute







TBD Customer creation


Note: robot ./demo.sh

oom: oom/kubernetes/robot/demo-k8s.sh







SDC Model Distribution

If you are at this step - switch over to Alexis de Talhouët's page on vFWCL instantiation, testing, and debugging







TBD VID Service creation







TBD VID Service Instance deployment







TBD VID Create VNF







VNF preload

OK (REST)


http://{{sdnc_ip}}:8282/restconf/operations/VNF-API:preload-vnf-topology-operation

note the service-type change - see gui top right

POST /restconf/operations/VNF-API:preload-vnf-topology-operation HTTP/1.1
Host: 10.12.5.92:8282
Accept: application/json
Content-Type: application/json
X-TransactionId: 0a3f6713-ba96-4971-a6f8-c2da85a3176e
X-FromAppId: API client
Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==
Cache-Control: no-cache
Postman-Token: e1c8d1ec-4cd9-5744-3ac9-f83f0d3c71d4

{
    "input": {
        "vnf-topology-information": {
            "vnf-topology-identifier": {
                "service-type": "11819dd6-6332-42bc-952c-1a19f8246663",
                "vnf-name": "DemoModule2",
                "vnf-type": "Vsp..base_vfw..module-0",
                "generic-vnf-name": "vFWDemoVNF",
                "generic-vnf-type": "vsp 0"
            },
            "vnf-assignments": {
                "availability-zones": [],
                "vnf-networks": [],
                "vnf-vms": []
            },
      "vnf-parameters":
      [
{
"vnf-parameter-name": "image_name",
"vnf-parameter-value": "ubuntu-14-04-cloud-amd64"
},
{
"vnf-parameter-name": "flavor_name",
"vnf-parameter-value": "m1.medium"
},
{
"vnf-parameter-name": "public_net_id",
"vnf-parameter-value": "971040b2-7059-49dc-b220-4fab50cb2ad4"
},
{
"vnf-parameter-name": "unprotected_private_net_id",
"vnf-parameter-value": "zdfw1fwl01_unprotected"
},
{
"vnf-parameter-name": "unprotected_private_subnet_id",
"vnf-parameter-value": "zdfw1fwl01_unprotected_sub"
},
{
"vnf-parameter-name": "protected_private_net_id",
"vnf-parameter-value": "zdfw1fwl01_protected"
},
{
"vnf-parameter-name": "protected_private_subnet_id",
"vnf-parameter-value": "zdfw1fwl01_protected_sub"
},
{
"vnf-parameter-name": "onap_private_net_id",
"vnf-parameter-value": "oam_onap_Ze9k"
},
{
"vnf-parameter-name": "onap_private_subnet_id",
"vnf-parameter-value": "oam_onap_Ze9k"
},
{
"vnf-parameter-name": "unprotected_private_net_cidr",
"vnf-parameter-value": "192.168.10.0/24"
},
{
"vnf-parameter-name": "protected_private_net_cidr",
"vnf-parameter-value": "192.168.20.0/24"
},
{
"vnf-parameter-name": "onap_private_net_cidr",
"vnf-parameter-value": "10.0.0.0/16"
},
{
"vnf-parameter-name": "vfw_private_ip_0",
"vnf-parameter-value": "192.168.10.100"
},
{
"vnf-parameter-name": "vfw_private_ip_1",
"vnf-parameter-value": "192.168.20.100"
},
{
"vnf-parameter-name": "vfw_private_ip_2",
"vnf-parameter-value": "10.0.100.5"
},
{
"vnf-parameter-name": "vpg_private_ip_0",
"vnf-parameter-value": "192.168.10.200"
},
{
"vnf-parameter-name": "vsn_private_ip_0",
"vnf-parameter-value": "192.168.20.250"
},
{
"vnf-parameter-name": "vsn_private_ip_1",
"vnf-parameter-value": "10.0.100.4"
},
{
"vnf-parameter-name": "vfw_name_0",
"vnf-parameter-value": "vFWDemoVNF"
},
{
"vnf-parameter-name": "vsn_name_0",
"vnf-parameter-value": "zdfw1fwl01snk01"
},
{
"vnf-parameter-name": "vnf_id",
"vnf-parameter-value": "vFirewall_vSink_demo_app"
},
{
"vnf-parameter-name": "vf_module_id",
"vnf-parameter-value": "vFirewall_vSink"
},
{
"vnf-parameter-name": "dcae_collector_ip",
"vnf-parameter-value": "127.0.0.1"
},
{
"vnf-parameter-name": "dcae_collector_port",
"vnf-parameter-value": "8080"
},
{
"vnf-parameter-name": "repo_url_blob",
"vnf-parameter-value": "https://nexus.onap.org/content/sites/raw"
},
{
"vnf-parameter-name": "repo_url_artifacts",
"vnf-parameter-value": "https://nexus.onap.org/content/groups/staging"
},
{
"vnf-parameter-name": "demo_artifacts_version",
"vnf-parameter-value": "1.1.0"
},
{
"vnf-parameter-name": "install_script_version",
"vnf-parameter-value": "1.1.0-SNAPSHOT"
},
{
"vnf-parameter-name": "key_name",
"vnf-parameter-value": "onapkey"
},
{
"vnf-parameter-name": "pub_key",
"vnf-parameter-value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDlc+Lkkd6qK4yrhwgyEXmDuseZihbdYk3Dd90p4/TTDCenGVdfdPU9r4KuCrn8nhjjhVvOx8s1hSi03NI9qHQasLcNCVavzse04kq/RlrkmEvSnqI0/HYNOMYASBQAxgF/pocbANnERcfzXrWiymK5Aqm3U8P25EkeKp9tQmSiijki8ywA5iXuBDWiPQxE5gtxotGMUH5EhElHXlQ2lWRc3IlHghfoh8sI3auz7Bimma3vEUd64e6uuZR5oxCdv3ybZBkYnOcgiGaeP7sWDpjggpI40bfoQ/PbZh4u9maLPmY8vm1HKebZgfwkgEXSi0B4QgUHlRcVWV7lNo+418Tt michaelobrien@obrienbiometrics"
},
{
"vnf-parameter-name": "cloud_env",
"vnf-parameter-value": "openstack"
}
 
 
      ]
       },
        "request-information": {
            "request-id": "robot12",
            "order-version": "1",
            "notification-url": "openecomp.org",
            "order-number": "1",
            "request-action": "PreloadVNFRequest"
        },
        "sdnc-request-header": {
            "svc-request-id": "robot12",
            "svc-notification-url": "http:\/\/openecomp.org:8080\/adapters\/rest\/SDNCNotify",
            "svc-action": "reserve"
        }
    }   
}




Result 200

{
    "output": {
        "svc-request-id": "robot12",
        "response-code": "200",
        "ack-final-indicator": "Y"
    }
}
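After the 200/ack above, the preload can be read back from SDNC's config datastore. A sketch - the restconf path follows ODL conventions, and SDNC_IP plus the admin password (taken from the 20171127 meeting chat below) are lab-specific assumptions; the network call is left commented:

```shell
# SDNC_IP is the example lab IP from the preload capture above -- replace with yours
SDNC_IP=${SDNC_IP:-10.12.5.92}
PRELOAD_URL="http://$SDNC_IP:8282/restconf/config/VNF-API:preload-vnfs"
echo "GET $PRELOAD_URL" | tee preload-check.txt
# curl -s -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U \
#   "$PRELOAD_URL" | python3 -m json.tool
```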








VNF preload

(alternative, no postman)

(hope I got it right)

references to video are like

"X-mm:ss some text"

where X is 0..5 and the video is 20171128_1200_X_of_5_daily_session.mp4

  • Step 1: Prepare JSON. You need: JSON payload from above
  • You need to be very careful with the wording - it is extremely confusing

  • Press the little “I” next to the service instance
  • The next dialog shows a ‘Service Instance ID:’
  • Copy the value into "service-type“ field of JSON payload
  • Close the dialog
  • (2-20:15 get service instance in Video)

  • press little "i" in vnf
  • Look for VNF Type, take the part after the slash and copy value into “generic-vnf-type” of JSON payload
  • Look for VNF Name and copy the value into “generic-vnf-name” of JSON payload
  • Look for the vnf-parameter-name "vfw_name_0"
  • Put the same value in the associated “vnf-parameter-value” field
  • Close Dialog
  • (2-21:25 in the video)

  • Press the green add VNF Module Button
  • Select the desired module (depends on whether you have already added both for the demo)
  • Look for Model Name and copy value to vnf-type of JSON payload
  • Cancel(!) from dialog


  • Fill remaining Parameters
  • Select a proper module name and put it in the vnf-name field of JSON payload
  • Get the name of the onap-private network and put it in the onap_private_net_id and onap_private_subnet_id fields of vnf-parameters of JSON payload
  • Double check the public net id
  • Make sure the correct ssh key is configured under vnf-parameters


  • Scroll down to ‘POST /operations/VNF-API:preload-vnf-topology-operation’. Careful, there are similar entries there too
  • Copy your JSON into the field for the request body

  • Scroll down to “Try It” and try it
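Step 1 (splicing the Service Instance ID into the payload) can be scripted instead of hand-edited. A sketch - `preload.json` below is a stub (use the full payload from the REST section above), and the ID value is the one you copied from the VID "i" dialog:

```shell
# The real ID comes from the VID service instance dialog -- this is a placeholder
SERVICE_INSTANCE_ID="replace-with-your-service-instance-id"
# stub payload for illustration; in practice save the full preload payload here
cat > preload.json <<'EOF'
{"input": {"vnf-topology-information": {"vnf-topology-identifier": {"service-type": "PLACEHOLDER"}}}}
EOF
# splice the ID into the service-type field in place
python3 - "$SERVICE_INSTANCE_ID" <<'PY'
import json, sys
with open('preload.json') as f:
    p = json.load(f)
p['input']['vnf-topology-information']['vnf-topology-identifier']['service-type'] = sys.argv[1]
with open('preload.json', 'w') as f:
    json.dump(p, f, indent=2)
print('service-type set to', sys.argv[1])
PY
```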







SDNC VNF Preload

(Integration-Jenkins lab)


(from Marco 20171128)







TBD VID Create VF-Module (vSNK)


Need to delete the previous failure first - raise JIRA on error

for now, rename (add a suffix) and recreate







TBD VID Create VF-Module (vPG)







TBD Robot Heatbridge







TBD APPC mountpoint (Robot or REST)







APPC mountpoint for vFW closed-loop

(Integration-Jenkins lab)







Verifying the vFirewall

Original/Ongoing Doc References

Running the ONAP Demos

running vFW Demo on ONAP Amsterdam Release

Clearwater vIMS Onboarding and Instantiation

UCA-20 OSS JAX-RS 2 Client

Vetted vFirewall Demo - Full draft how-to for F2F and ReadTheDocs

Integration Use Case Test Cases - could not find vFW content here

ONAP master branch Stabilization

OOM-1 - Getting issue details... STATUS

INT-106 - Getting issue details... STATUS

INT-284 - Getting issue details... STATUS

List of ONAP Implementations under Test by Environment

Please add yourself to the list so we can target EPIC work based on environment affinity 

Environment | Branch | Deployer | Contacts | vFW status | Notes
Intel Openlab | master | HEAT | none

cloud: http://10.12.25.2/auth/login/?next=/project/instances/

servers

Starting up (20171123) - not ready yet

Intel Openlab | master | OOM Kubernetes | none

cloud: http://10.12.25.2/auth/login/?next=/project/instances/

server: 10.12.25.117

key: openlab_oom_key (pass by mail)

(non-DCAE ONAP components only) partial - 16 GB only until the quota is increased or we cluster 4 hosts

OOM-461 - Getting issue details... STATUS

Intel Openlab | release-1.1.0 | OOM Kubernetes | none

cloud: http://10.12.25.2/auth/login/?next=/project/instances/

server: 10.12.25.119

key: openlab_oom_key (pass by mail)

watch INT-344 - Getting issue details... STATUS

Rackspace | master | OOM Kubernetes | none

(non-DCAE ONAP components only) DCAEGEN2 not tested yet for R1

Amazon AWS EC2 | master | OOM Kubernetes | none | (non-DCAE ONAP components only) - spot node terminated
Amazon AWS ECS | - | OOM Kubernetes | pending test | n/a | (non-DCAE ONAP components only) - node terminated
Google GCE | master | OOM Kubernetes | - | (non-DCAE ONAP components only) - node closed
Google GCE CaaS | - | OOM Kubernetes | pending test | n/a | (non-DCAE ONAP components only)
Rackspace | - | HEAT | not supported yet | n/a
Alibaba VM | - | OOM Kubernetes | none

not tested yet

Continuous Deployment References

Tech | Servers | Details
HEAT

Kubernetes

Jobs (AWS)

jenkins.onap.info

Analytics (AWS)

kibana.onap.info

CD servers (AWS)

dev.onap.info

OOM R2 Master (Beijing)

http://jenkins.onap.info/job/oom-cd-release-110-branch/

OOM R1 (Amsterdam)

http://jenkins.onap.info/job/oom-cd/

Formal Recordings

put all daily and ongoing vFW formal run videos here - in the leadup to the 2 conferences.

Recording details | Recording embedded (currently limited to 30 min due to the 100 MB limit) or link

ONAP installation of

OOM from clean VM to Healthcheck

ONAP R1 OOM from clean AWS VM to deployed ONAP

3 videos - reuse for OOM-395 - Getting issue details... STATUS


20171208 : GUI only for SDC onboarding in OOM 20171208 release-1.1.0 - no devops screens in this one so it can be used for demos



OOM vFirewall SDC distribution to VF-Module creation | See Alexis' vFWCL instantiation, testing, and debugging

ONAP installation of

HEAT from empty OPENSTACK to Healthcheck


HEAT vFirewall SDC distribution to VF-Module creation | See Alexis' vFWCL instantiation, testing, and debugging


Daily Working Recordings

Date | Video | Notes / TODO

2017

1127

HEAT: get back to the vnf preload - continue to the 3 vFW VMs coming up

todo: use the split template (abandon the single VNF)

todo: stop using robot for all except customer creation - essentially everything is REST and VID

todo: fix DNS of the onap env file

OOM: go over master status, get a 1.1.0 branch up separately



CHAT:

From Brian to Everyone: (12:06)
Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
From gaurav gupta to Everyone: (12:07)
VNF-API
From Geora Barsky to Everyone: (12:17)
dns_list: ["10.12.25.5", "8.8.8.8"]
external_dns: 8.8.8.8
dns_forwarder: 10.12.25.5
oam_network_cidr: 10.0.0.0/16
From Kedar Ambekar to Everyone: (12:20)
10.12.25.5 is the IP of DNS server that would get spawned in ONAP stack ?
From Josef Reisinger to Everyone: (12:25)
this gives a hosts-like list of servers and IP's
#. ~/admin-openrc
openstack server list -f value -c Name -c Networks|sed -n 's#\([^ ][^ ]*\) oam.*, \(.*\)#\2 \1#p'
From Josef Reisinger to Everyone: (13:09)
Sorry.. issues with Mic...

20171128


HEAT: error on vf-module creation (MSO Heat issue)

12:23:15 From Eric Debeau : The API for licence model creation are not documented in R1
12:28:49 From Alexis de Talhouët : handy command: find . -name ".DS_Store" | xargs rm -r
12:29:10 From Marco Platania : zip vFW.zip *
12:29:57 From Alexis de Talhouët : https://stackoverflow.com/questions/10924236/mac-zip-compress-without-macosx-folder/23372210#23372210
12:30:05 From Alexis de Talhouët : zip -r -X Archive.zip *
12:33:44 From Alexis de Talhouët : Actually, it’s not only the .DS_Store, it’s also that osx adds a __MACOSX empty folder in the zip file
12:51:43 From Josef Reisinger : +1 for the enhancement on robot:88 :-)
12:51:56 From Alexis de Talhouët : Yeah, good stuff!!
12:53:01 From Eric Debeau : echo "onap:onap" > /etc/lighttpd/authorization
13:07:15 From Eric Debeau : It is cool to use a REST API for the preload instead using the Robot. We should document it.
13:26:52 From mryan : Thanks Michael, informative session! I need to jump on another call
13:27:47 From Eric Debeau : Thanks for this meeting.
13:29:12 From ramki : Thank you so much Michael for arranging this!
13:29:30 From Josef Reisinger : thanks for the walk-through. tty tomorrow
13:30:05 From ramki to Michael O'Brien (Privately) : Michael - do you have a few minutes for OOM?
13:32:28 From Gaurav Gupta (VMware) : Any one trying vLB/vDNS on amsterdam
13:33:26 From Gaurav Gupta (VMware) to Michael O'Brien (Privately) : Any one trying vLB/vDNS
13:33:52 From Michael O'Brien to Gaurav Gupta (VMware) (Privately) : not yet - but in future as with vCPE/vVolte


=================================================================

Time markers in the videos to the left. The "Part"-number represents part 0..4 in the file name

Part Marker comment
0 14:30 Demo flow explained
0 16:35 statement: no automated flow
0 cloud region create discussion
1 22:40 distribution: monitor progress
2 3:01 prepare robot to have html page
2 9:01 check customer in aai
2 13:34 start VID
2 18:30 Discussion about order of vnf & vnf module creation (vFW/vSNK)
2 19:57 preload vFW
2 20:15 get service instance (little "i" in circle in the Service Instance Line in VID)
2 21:10 use service instance ID as Service type(!) in JSON payload
2 21:25 vnf type (press little "i" in vnf): vf-type, whatever is after the slash
2 22:05 replace generic-vnf-type with that value
2 22:10 vnf name
2 22:20 back to vid screen where you got generic-vnf-type and take "VNF Name"
2 22:29 place it as generic-vnf-name in JSON
2 22:30 vnf name and vnf type; get vnf name by pressing green add VF Module
2 22:53 select vnf name from the dark field or "Model Name" from the dialog
2 23:08 Put value in vnf-type
2 23:13 vnf-name: select and remember as in previous demo for robot preload
2 24:06 vm host name has to match generic vnf name
2 25:09 make sure the parameter name matches generic-vnf-name
2 25:46 check that the public key matches the private key used
2 29:05 the important thing is to preserve the order; from here like previous
2 29:30 Click SDNC preload(!)
2 29:49 Module vs VM discussion
3 00:28 Create vfModule for vFW
3 02:37 Poll timeout

20171129 OOM

chat

minimal OOM/HEAT deployment for vFW

11:04:28 From Michael O'Brien : ./createAll.bash -n onap -a mso
./createAll.bash -n onap -a message-router
./createAll.bash -n onap -a sdnc
./createAll.bash -n onap -a vid
./createAll.bash -n onap -a robot
./createAll.bash -n onap -a portal
./createAll.bash -n onap -a policy
./createAll.bash -n onap -a appc
./createAll.bash -n onap -a aai
./createAll.bash -n onap -a sdc
11:04:35 From Michael O'Brien : ./createAll.bash -n onap -a multicloud
11:04:42 From Michael O'Brien : ./createAll.bash -n onap -a msb
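The per-component createAll.bash calls from the chat above can be driven from a single list. A sketch - `echo` stands in for the real invocation so the sequence is visible without a live OOM host:

```shell
# Components in the same order as the 20171129 chat; replace echo with the
# real ./createAll.bash call on an OOM host
APPS="mso message-router sdnc vid robot portal policy appc aai sdc multicloud msb"
for app in $APPS; do
  echo "./createAll.bash -n onap -a $app"
done | tee createall-sequence.txt
```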

20171129 HEAT

chat


20171130

OOM

chat

11:06:25 From Alexis de Talhouët : /dockerdata-nfs/onap/robot/eteshare/config
11:06:30 From Alexis de Talhouët : vm_properties.py
11:41:38 From Michael O'Brien : sorry for who is on - i am in a meeting for 15 min - bringing up 1.1 for 1200
11:52:28 From Michael O'Brien : back
12:30:04 From Brian : { "global-customer-id": "SDN-ETHERNET-INTERNET", "subscriber-name": "SDN-ETHERNET-INTERNET", "subscriber-type": "INFRA" }
12:37:39 From Alexis de Talhouët : https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/vid/templates/vid-server-deployment.yaml;h=e8c7f555230535a9105f721c62a45d0e6a474e55;hb=refs/heads/release-1.1.0
12:40:08 From Alexis de Talhouët : VID_MSO_PASS=OBF:1ih71i271vny1yf41ymf1ylz1yf21vn41hzj1icz
13:00:16 From Brian : { "service": [ { "service-id": "07a3fc26-6a00-479f-93a3-41fa498d6ab9", "service-description": "vFW", "resource-version": "1511299109970" }, { "service-id": "844dbaa8-399a-4809-b7a8-f69fa7851b13", "service-description": "vLB", "resource-version": "1511299110162" }, { "service-id": "085806ee-3c48-49d1-8403-77b1713fccdd", "service-description": "vCPE", "resource-version": "1511299110345" }, { "service-id": "6ba8a6a0-4673-4f91-8d31-cf90b0778b4b", "service-description": "vIMS", "resource-version": "1511299110535" } ] }
13:02:37 From Brian : { "global-customer-id": "SDN-ETHERNET-INTERNET", "subscriber-name": "SDN-ETHERNET-INTERNET", "subscriber-type": "INFRA", "resource-version": "1510614325211", "service-subscriptions": { "service-subscription": [ { "service-type": "vCPE", "relationship-list": { "relationship": [ { "related-to": "tenant", "related-link": "/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/087050388b204c73a3e418dd2c1fe30b", "relationship-data": [ { "relationship-key": "cloud-region.cloud-owner", "relationship-value": "CloudOwner" }, { "relationship-key": "cloud-region.cloud-re
13:06:44 From Alexis de Talhouët : https://lf-onap.atlassian.net/wiki/display/DW/Development+Guides?preview=%2F1015874%2F1017418%2FUsing_openecomp_MSO.docx
13:08:17 From Alexis de Talhouët : http://10.195.197.142:30223/mso/logging/debug

20171201

OOM

Agenda

Pull master and release-1.1.0 patches (merged) fixed yesterday by Alexis de T.

https://gerrit.onap.org/r/#/q/status:merged+project:+oom

Servers

amsterdam.onap.info = 1.1.0 oom

cd.onap.info = master

onap-parameters.yaml points to my personal Rackspace in case we get to VF-Module creation

The 2 vFWCL zips require a network predefined on Rackspace


Results: robot init passed. Later, Alexis tested the extra SDNC call from Marco's video, got all the way to VF-module creation for the first vFW template, and saw the 2 VMs up in OpenStack - a very big thank you to Alexis for all the work in the last 4 days: the 15+ commits, the new config docker image, and retrofitting details over the weekend.


Also our friends at VMware under Ramki are running OK under OOM release-1.1.0, in preparation for their demo of ONAP Amsterdam R1 OOM at KubeCon on Tuesday morning - one week before our ONAP F2F in Santa Clara on the 11th.


Generated JIRAs

OOM-461 - Getting issue details... STATUS

AAI-513 - Getting issue details... STATUS

INT-346 - Getting issue details... STATUS

OOM-475 - Getting issue details... STATUS

SDNC-208 - Getting issue details... STATUS

VID-96 - Getting issue details... STATUS

SDC-716 - Getting issue details... STATUS

OOM-478 - Getting issue details... STATUS

OOM-482 - Getting issue details... STATUS

OOM-483 - Getting issue details... STATUS

OOM-484 - Getting issue details... STATUS



Fixes to Pull and Test

https://gerrit.onap.org/r/#/c/25287/1/kubernetes/config/docker/init/src/config/aai/data-router/dynamic/conf/entity-event-policy.xml

https://gerrit.onap.org/r/#/c/25277/

https://gerrit.onap.org/r/#/c/25257/

https://gerrit.onap.org/r/#/c/25263/

https://gerrit.onap.org/r/#/c/25279/1

https://gerrit.onap.org/r/#/c/25283/

https://gerrit.onap.org/r/#/c/25289/1

Access and Deployment Configuration

OOM Deployment

Follow instructions at ONAP on Kubernetes#AutomatedInstallation

Openlab VNC and CLI

The following is missing some sections and is a bit out of date (v2 deprecated in favor of v3) - Integration Testing Schedule, 10-09-2017



Get an openlab account - Integration / Developer Lab Access

Stephen Gooch provides excellent/fast service - raise a JIRA like the following

OPENLABS-75 - Getting issue details... STATUS

Install openVPN - Using Lab POD-ONAP-01 Environment

For OSX both Viscosity and TunnelBlick work fine

Login to Openstack

Install openstack command line tools - Tutorial: Configuring and Starting Up the Base ONAP Stack#InstallPythonvirtualenvTools (optional, but recommended)
get your v3 rc file

verify your openstack cli access (or just use the jumpbox)
obrienbiometrics:aws michaelobrien$ source logging-openrc.sh 
obrienbiometrics:aws michaelobrien$ openstack server list
+--------------------------------------+---------+--------+-------------------------------+------------+
| ID                                   | Name    | Status | Networks                      | Image Name |
+--------------------------------------+---------+--------+-------------------------------+------------+
| 1ed28213-62dd-4ef6-bdde-6307e0b42c8c | jenkins | ACTIVE | admin-private-mgmt=10.10.2.34 |            |
+--------------------------------------+---------+--------+-------------------------------+------------+
get 15 elastic IPs

You may need to release unused IPs from other tenants - as we have 4 pools of 50

fill in your stack env parameters

onap_openstack.env

  public_net_id: 971040b2-7059-49dc-b220-4fab50cb2ad4

  public_net_name: external

  ubuntu_1404_image: ubuntu-14-04-cloud-amd64

  ubuntu_1604_image: ubuntu-16-04-cloud-amd64

  flavor_small: m1.small

  flavor_medium: m1.medium

  flavor_large: m1.large

  flavor_xlarge: m1.xlarge

  flavor_xxlarge: m1.xxlarge

  vm_base_name: onap

  key_name: onap_key

  pub_key: ssh-rsa AAAAobrienbiometrics

  nexus_repo: https://nexus.onap.org/content/sites/raw

  nexus_docker_repo: nexus3.onap.org:10001

  nexus_username: docker

  nexus_password: docker

  dmaap_topic: AUTO

  artifacts_version: 1.1.0-SNAPSHOT

  openstack_tenant_id: a85a07a5f34d4yyyyyyy

  openstack_tenant_name: Logyyyyyyy

  openstack_username: michaelyyyyyy

  openstack_api_key: Wyyyyyyy

  openstack_auth_method: password

  openstack_region: RegionOne

  horizon_url: http://10.12.25.2:5000/v3

  keystone_url: http://10.12.25.2:5000

  dns_list: ["10.12.25.5", "8.8.8.8"]

  external_dns: 8.8.8.8

  dns_forwarder: 10.12.25.5

  oam_network_cidr: 10.0.0.0/16

follow

http://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/installation_heat.html

  dnsaas_config_enabled: true  

dnsaas_region: RegionOne  

dnsaas_keystone_url: http://10.12.25.5:5000/v3  

dnsaas_tenant_name: Logging  

dnsaas_username: demo  

dnsaas_password: onapdemo  

dcae_keystone_url: http://10.12.25.5:5000/v2  

dcae_centos_7_image: CentOS-7  

dcae_domain: dcaeg2.onap.org


  dcae_public_key: PUT THE PUBLIC KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS

  dcae_private_key: PUT THE SECRET KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS
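Before running the stack, it is worth checking that no "PUT THE ... KEY" placeholders remain in the env file. A sketch - the env body written here is a one-line stub for illustration only; point `ENV` at your real onap_openstack.env:

```shell
ENV=onap_openstack.env
# stub env with an unfilled placeholder, for illustration -- use your real file
printf 'dcae_public_key: PUT THE PUBLIC KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS\n' > "$ENV"
if grep -q 'PUT THE' "$ENV"; then
  echo "placeholders remain - fill in $ENV before stack create"
else
  echo "env looks filled in"
fi
```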

Run the HEAT stack
obrienbiometrics:openlab michaelobrien$ openstack stack create -t onap_openstack.yaml -e onap_openstack.env ONAP1125_6
| id                  | 9b026354-c071-4e31-8611-11fef2f408f5     |
| stack_name          | ONAP1125_6                               |
| description         | Heat template to install ONAP components |
| creation_time       | 2017-11-26T02:16:57Z                     |
| updated_time        | 2017-11-26T02:16:57Z                     |
| stack_status        | CREATE_IN_PROGRESS                       |
| stack_status_reason | Stack CREATE started                     |
obrienbiometrics:openlab michaelobrien$ openstack stack list
| 9b026354-c071-4e31-8611-11fef2f408f5 | ONAP1125_6 | CREATE_IN_PROGRESS | 2017-11-26T02:16:57Z | 2017-11-26T02:16:57Z 


Wait for deployment

DCAE and the multi-service VM are down in this run:

obrienbiometrics:openlab michaelobrien$ openstack server list

| db5388c0-9fa5-4359-ad21-689dd0ce8955 | onap-multi-service  | ERROR  |                                             | ubuntu-16-04-cloud-amd64 |
| d712dce1-d39d-4c6e-8d21-d9da9aa40ea1 | onap-dcae-bootstrap | ACTIVE | oam_onap_awsf=10.0.4.1, 10.12.5.197         | ubuntu-16-04-cloud-amd64 |
| 4724fa8e-e10b-46cb-a81d-e7a9a7df041e | onap-aai-inst1      | ACTIVE | oam_onap_awsf=10.0.1.1, 10.12.5.118         | ubuntu-14-04-cloud-amd64 |
| bc4ef1f3-422d-4e66-a21b-8c5a3d206938 | onap-portal         | ACTIVE | oam_onap_awsf=10.0.9.1, 10.12.5.241         | ubuntu-14-04-cloud-amd64 |
| 0f9edb8e-a379-4ab1-a6b1-c24763b69ecd | onap-policy         | ACTIVE | oam_onap_awsf=10.0.6.1, 10.12.5.17          | ubuntu-14-04-cloud-amd64 |
| bd1f29e3-e05e-4570-9f41-94af83aec7d6 | onap-aai-inst2      | ACTIVE | oam_onap_awsf=10.0.1.2, 10.12.5.252         | ubuntu-14-04-cloud-amd64 |
| 57e90b08-d69e-4770-a298-97f64387e60d | onap-dns-server     | ACTIVE | oam_onap_awsf=10.0.100.1, 10.12.5.237       | ubuntu-14-04-cloud-amd64 |
| e9dd8800-0f77-4658-90b0-db98f4689485 | onap-message-router | ACTIVE | oam_onap_awsf=10.0.11.1, 10.12.5.234        | ubuntu-14-04-cloud-amd64 |
| af6120d8-419a-45f9-ae32-b077b9ace407 | onap-sdnc           | ACTIVE | oam_onap_awsf=10.0.7.1, 10.12.5.226         | ubuntu-14-04-cloud-amd64 |
| b6daf774-dc6a-4c9b-aaa3-ca8fc5734ac3 | onap-clamp          | ACTIVE | oam_onap_awsf=10.0.12.1, 10.12.5.128        | ubuntu-16-04-cloud-amd64 |
| 31524fcb-d1b2-427b-b0bf-29e8fc65fded | onap-sdc            | ACTIVE | oam_onap_awsf=10.0.3.1, 10.12.5.92          | ubuntu-16-04-cloud-amd64 |
| 31f8c1e7-a7e7-417d-a9df-cc5d65d7777c | onap-vid            | ACTIVE | oam_onap_awsf=10.0.8.1, 10.12.5.218         | ubuntu-14-04-cloud-amd64 |
| 482befc8-2a6a-4da7-8e05-8f8b294f80d2 | onap-robot          | ACTIVE | oam_onap_awsf=10.0.10.1, 10.12.6.21         | ubuntu-16-04-cloud-amd64 |
| 8ea76387-aadf-46da-8257-5e9c2f80fa48 | onap-appc           | ACTIVE | oam_onap_awsf=10.0.2.1, 10.12.5.222         | ubuntu-14-04-cloud-amd64 |
| 43b90061-885f-454b-8830-9da3338fca56 | onap-so             | ACTIVE | oam_onap_awsf=10.0.5.1, 10.12.5.230         | ubuntu-16-04-cloud-amd64 |

Configure your local /etc/hosts

vi /etc/hosts

Enable the robot webserver to see error logs and get /etc/hosts values

HEAT

root@onap-robot:/opt# ./demo.sh init_robot

OOM

oom/kubernetes/robot/demo-k8s.sh init_robot

http://10.12.5.129:88/


10.12.5.214 policy.api.simpledemo.onap.org 

10.12.5.118 portal.api.simpledemo.onap.org 

10.12.5.141 sdc.api.simpledemo.onap.org 

10.12.5.92  vid.api.simpledemo.onap.org 
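The hosts entries above can be staged in a file and reviewed before appending as root (the IPs shown are from this particular lab run; the staging path is an assumption):

```shell
# Stage the ONAP UI hostname entries, then append as root when satisfied:
#   sudo sh -c 'cat /tmp/onap-hosts.txt >> /etc/hosts'
cat > /tmp/onap-hosts.txt <<'EOF'
10.12.5.214 policy.api.simpledemo.onap.org
10.12.5.118 portal.api.simpledemo.onap.org
10.12.5.141 sdc.api.simpledemo.onap.org
10.12.5.92  vid.api.simpledemo.onap.org
EOF
wc -l /tmp/onap-hosts.txt
```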

Verify AAI_VM1 DNS

Intermittently, AAI1 does not fully initialize: Docker gets installed and the test-config directory is pulled, but the 6 docker containers in the compose file do not come up.

Log in to AAI immediately after stack startup and add the following before running test-config:

root@onap-aai-inst1:~# cat /etc/hosts
10.0.1.2 aai.hbase.simpledemo.openecomp.org
10.12.5.213 aai.hbase.simpledemo.openecomp.org

Enable robot webserver


Spot check containers

| 1fe78720-e418-47f7-bcfd-b6b93c791448 | oom-cd-obrien-cd0   | ACTIVE | admin-private-mgmt=10.10.2.15, 10.12.25.117

check robot health

Core components are PASS, so let's continue with the vFW

Thanks Alexis for the 20171130 changes

http://jenkins.onap.info/job/oom-cd/528/console

15:39:15 Basic SDNGC Health Check | PASS |

15:39:15 Basic A&AI Health Check | PASS |

15:39:15 Basic Policy Health Check | PASS |

15:39:15 Basic MSO Health Check | PASS |

15:39:15 Basic ASDC Health Check | PASS |

15:39:15 Basic APPC Health Check | PASS |

15:39:15 Basic Portal Health Check | PASS |

15:39:15 Basic Message Router Health Check | PASS |

15:39:15 Basic VID Health Check | PASS |

15:39:16 Basic Microservice Bus Health Check | PASS |

15:39:16 Basic CLAMP Health Check | PASS |

15:39:16 catalog API Health Check | PASS |

15:39:16 emsdriver API Health Check | PASS |

15:39:16 gvnfmdriver API Health Check | PASS |

15:39:16 huaweivnfmdriver API Health Check | PASS |

15:39:16 multicloud API Health Check | PASS |

15:39:16 multicloud-ocata API Health Check | PASS |

15:39:16 multicloud-titanium_cloud API Health Check | PASS |

15:39:16 multicloud-vio API Health Check | PASS |

15:39:16 nokiavnfmdriver API Health Check | PASS |

15:39:16 nslcm API Health Check | PASS |

15:39:16 resmgr API Health Check | PASS |

15:39:16 usecaseui-gui API Health Check | PASS |

15:39:16 vnflcm API Health Check | PASS |

15:39:16 vnfmgr API Health Check | PASS |

15:39:16 vnfres API Health Check | PASS |

15:39:16 workflow API Health Check | PASS |

15:39:16 ztesdncdriver API Health Check | PASS |

15:39:16 ztevmanagerdriver API Health Check | PASS |

15:39:16 OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp compo... | FAIL |

15:39:16 30 critical tests, 29 passed, 1 failed

15:39:16 30 tests total, 29 passed, 1 failed
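When scanning console output like the above, a small helper can summarize the PASS/FAIL counts from a saved log (the function name and log path are assumptions; any captured console log works):

```shell
# Summarize a saved robot health-check console log.
health_summary() {
  # $1: path to the saved console log
  printf 'passed=%s failed=%s\n' \
    "$(grep -c '| PASS |' "$1")" \
    "$(grep -c '| FAIL |' "$1")"
}
```

For example, `health_summary healthcheck.log` would report passed=29 failed=1 for the run above (the overall ETE suite line counts as the FAIL).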

Design/Runtime Issues

20171122: Do we run the older robot preload or do we do the SDNC rest PUT manually

Older Tutorial: Creating a Service Instance from a Design Model#RunRobotdemo.shpreloadofDemoModule

20171122: Do we use the older June vFW zip (yaml + env) or must we use a new split template

investigate Brian's comment on running vFW Demo on ONAP Amsterdam Release - "If you want to do closed loop for vFW there is a new two VNF service for Amsterdam (vFWCL - it is in the demo repo) that separates the traffic generator into a second VNF/Heat stack so that Policy can associate the event on the LB with the VNF to be controlled (the traffic generator) through APPC. Contact Pam and Marco for details."

INT-342 - Getting issue details... STATUS

20171128: we are using the split vFWCL version

20171122: Do we run the older robot appc mountpoint or do we do the APPC rest PUT manually


20171125: Do we need R1 components to run the vFirewall like MultiVIM

There was a question about this from several developers - specifically is MSO wrapped now - or can we run with a minimal set of VM's to run the vFW.

INT-346 - Getting issue details... STATUS

20171125: Workaround for intermittent AAI-vm1 failure in HEAT

https://lists.onap.org/pipermail/onap-discuss/2017-November/006508.html

AAI-513 - Getting issue details... STATUS

For now my internal DNS was not working (AAI1 did not see AAI2) - thanks Venkata - hardcoded the following in the aai1 /etc/hosts:

root@onap-aai-inst1:~# cat /etc/hosts
10.0.1.2 aai.hbase.simpledemo.openecomp.org
10.12.5.213 aai.hbase.simpledemo.openecomp.org


root@onap-aai-inst1:/opt/test-config# docker ps
CONTAINER ID        IMAGE                                            COMMAND                  CREATED              STATUS              PORTS                              NAMES
603e85af586f        nexus3.onap.org:10001/onap/model-loader          "/opt/app/model-lo..."   About a minute ago   Up About a minute                                      testconfig_model-loader_1
9826995b7ad5        nexus3.onap.org:10001/onap/data-router           "/opt/app/data-rou..."   About a minute ago   Up About a minute   0.0.0.0:9502->9502/tcp             testconfig_datarouter_1
19dd8614b767        nexus3.onap.org:10001/onap/search-data-service   "/opt/app/search-d..."   About a minute ago   Up About a minute   0.0.0.0:9509->9509/tcp             testconfig_aai.searchservice.simpledemo.openecomp.org_1
89b93577733f        nexus3.onap.org:10001/onap/sparky-be             "/bin/sh -c /opt/a..."   About a minute ago   Up About a minute   8000/tcp, 0.0.0.0:9517->9517/tcp   testconfig_sparky-be_1
c13e604e1fdc        aaionap/haproxy:1.1.0                            "/docker-entrypoin..."   About a minute ago   Up About a minute   0.0.0.0:8443->8443/tcp             testconfig_aai.api.simpledemo.openecomp.org_1
00aa79860bd5        nexus3.onap.org:10001/openecomp/aai-traversal    "/bin/bash /opt/ap..."   4 minutes ago        Up 4 minutes        0.0.0.0:8446->8446/tcp, 8447/tcp   testconfig_aai-traversal.api.simpledemo.openecomp.org_1
54747c3594fc        nexus3.onap.org:10001/openecomp/aai-resources    "/bin/bash /opt/ap..."   7 minutes ago        Up 7 minutes        0.0.0.0:8447->8447/tcp             testconfig_aai-resources.api.simpledemo.openecomp.org_1
root@onap-aai-inst1:~# docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@onap-aai-inst1:/opt# ./aai_vm_init.sh 
Waiting for 'testconfig_aai-resources.api.simpledemo.openecomp.org_1' deployment to finish ...
Waiting for 'testconfig_aai-resources.api.simpledemo.openecomp.org_1' deployment to finish ...
ERROR: testconfig_aai-resources.api.simpledemo.openecomp.org_1 deployment failed

root@onap-aai-inst1:/opt# docker ps -a
CONTAINER ID        IMAGE                                           COMMAND                  CREATED             STATUS              PORTS                    NAMES
1f1476cbd6f5        nexus3.onap.org:10001/openecomp/aai-resources   "/bin/bash /opt/ap..."   14 minutes ago      Up 14 minutes       0.0.0.0:8447->8447/tcp   testconfig_aai-resources.api.simpledemo.openecomp.org_1


root@onap-aai-inst1:/opt# docker logs -f testconfig_aai-resources.api.simpledemo.openecomp.org_1 
aai.hbase.simpledemo.openecomp.org: forward host lookup failed: Unknown host
Waiting for hbase to be up


FIX: reboot, then add the /etc/hosts entry immediately after startup - before or after aai_install.sh, but before test-config/deploy_vm1.sh runs:

root@onap-aai-inst1:~# cat /etc/hosts
10.0.1.2 aai.hbase.simpledemo.openecomp.org
10.12.5.213 aai.hbase.simpledemo.openecomp.org
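Since this workaround may be reapplied across reboots, an idempotent helper avoids duplicate lines in /etc/hosts (the function name is an assumption; the second argument defaults to /etc/hosts):

```shell
# Append an "IP hostname" line to a hosts file only if not already present.
add_host_entry() {
  # $1: entry line, $2: hosts file (defaults to /etc/hosts)
  local file=${2:-/etc/hosts}
  grep -qxF "$1" "$file" || echo "$1" >> "$file"
}
# Example for the workaround above:
#   add_host_entry "10.0.1.2 aai.hbase.simpledemo.openecomp.org"
```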
root@onap-aai-inst2:~# docker ps -a
CONTAINER ID        IMAGE                 COMMAND                   CREATED             STATUS              PORTS                                                                                                                                                                                               NAMES
1aaf137c4532        elasticsearch:2.4.1   "/docker-entrypoin..."    About an hour ago   Up About an hour    0.0.0.0:9200->9200/tcp, 9300/tcp                                                                                                                                                                    elasticsearch
e0846300cac5        aaionap/hbase:1.2.0   "/bin/sh -c \"/entr..."   About an hour ago   Up About an hour    0.0.0.0:2181->2181/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8085->8085/tcp, 0.0.0.0:9090->9090/tcp, 0.0.0.0:16000->16000/tcp, 0.0.0.0:16010->16010/tcp, 9095/tcp, 0.0.0.0:16201->16201/tcp, 16301/tcp   testconfig_aai.hbase.simpledemo.openecomp.org_1


root@onap-aai-inst1:~# docker ps -a
CONTAINER ID        IMAGE                                            COMMAND                  CREATED             STATUS              PORTS                              NAMES

90dc5a9791a9        nexus3.onap.org:10001/onap/model-loader          "/opt/app/model-lo..."   36 minutes ago      Up 35 minutes                                          testconfig_model-loader_1

eee85b22d3f1        nexus3.onap.org:10001/onap/search-data-service   "/opt/app/search-d..."   36 minutes ago      Up 35 minutes       0.0.0.0:9509->9509/tcp             testconfig_aai.searchservice.simpledemo.openecomp.org_1

ef4f1e9ab30e        nexus3.onap.org:10001/onap/data-router           "/opt/app/data-rou..."   36 minutes ago      Up 35 minutes       0.0.0.0:9502->9502/tcp             testconfig_datarouter_1

b19a8628ac43        nexus3.onap.org:10001/onap/sparky-be             "/bin/sh -c /opt/a..."   36 minutes ago      Up 36 minutes       8000/tcp, 0.0.0.0:9517->9517/tcp   testconfig_sparky-be_1

d5caad8eaded        aaionap/haproxy:1.1.0                            "/docker-entrypoin..."   36 minutes ago      Up 36 minutes       0.0.0.0:8443->8443/tcp             testconfig_aai.api.simpledemo.openecomp.org_1

f2b36df952d6        nexus3.onap.org:10001/openecomp/aai-traversal    "/bin/bash /opt/ap..."   38 minutes ago      Up 38 minutes       0.0.0.0:8446->8446/tcp, 8447/tcp   testconfig_aai-traversal.api.simpledemo.openecomp.org_1

663d0d3a3d82        nexus3.onap.org:10001/openecomp/aai-resources    "/bin/bash /opt/ap..."   40 minutes ago      Up 40 minutes       0.0.0.0:8447->8447/tcp             testconfig_aai-resources.api.simpledemo.openecomp.org_1


Still need to verify the DNS setting for the other VMs


20171127: Running Heatbridge from robot

20171127: key management in the single/split vFW

POLICY-409 - Getting issue details... STATUS

20171127 Which template is supported vFW old/new-split or both

Use the newer split one in vFWCL as documented in  POLICY-409 - Getting issue details... STATUS  since 4th Nov.


20171128: VMware VIO Requirements for vFW Deployment

TODO: expand on requirement of MultiCloud for VF-Module creation on VMware VIO.

At the final step of VF-Module creation, SO can use VIO in 2 modes:

(a) SO ↔ VIO

In this mode, certificate challenges were seen in the SO logs and resolved by the following steps:

  a.1 Pick up the VIO certificate from the load-balancer VM, path: /usr/local/share/ca-certificates
  a.2 Copy the certificate to /usr/local/share/ca-certificates inside the MSO_TestLab container.
  a.3 Run update-ca-certificates as root inside the mso_testlab docker container.

(b) SO ↔ MultiCloud ↔ VIO

  b.1 Update the identity url in cloud-config.json (present in the SO test lab container) to use the MultiCloud endpoint.
  b.2 MultiCloud needs to register the VIM to ESR.

20171128: SDNC VM HD fills up - controller container shuts down 24h after a failed VNF preload

see 

SDNC-204 - Getting issue details... STATUS

SDNC-156 - Getting issue details... STATUS

vFW status: 20171129: (Note CL videos from Marco are on the main demo page)

[12:50] 
oom = SDC onboarding OK (master) - will do robot init tomorrow in 1.1

[12:51] 
heat = reworked the VNF preload with the right network id, but the SDNC VM filled to 100% HD after 3 days, bringing down the controller (will raise a JIRA). We need a log rotation strategy; refreshing the VM or the stack for tomorrow at 12.

root@onap-sdnc:/opt# docker ps -a
CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS                        PORTS                     NAMES
e42e528c33e0        onap/admportal-sdnc-image:latest        "/bin/bash -c 'cd ..."   28 hours ago        Up 28 hours                   0.0.0.0:8843->8843/tcp    sdnc_portal_container
7e1d85c9f522        onap/ccsdk-dgbuilder-image:latest       "/bin/bash -c 'cd ..."   28 hours ago        Up 28 hours                   0.0.0.0:3000->3100/tcp    sdnc_dgbuilder_container
71e349d0562a        onap/sdnc-ueb-listener-image:latest     "/opt/onap/sdnc/ue..."   28 hours ago        Up 28 hours                                             sdnc_ueblistener_container
94df5b79169e        onap/sdnc-dmaap-listener-image:latest   "/opt/onap/sdnc/dm..."   28 hours ago        Up 3 seconds                                            sdnc_dmaaplistener_container
6f86aa285822        onap/sdnc-image:latest                  "/opt/onap/sdnc/bi..."   28 hours ago        Exited (139) 11 minutes ago                             sdnc_controller_container
762333e94308        mysql/mysql-server:5.6                  "/entrypoint.sh my..."   28 hours ago        Up 28 hours (healthy)         0.0.0.0:32768->3306/tcp   sdnc_db_container
root@onap-sdnc:/opt# df
/dev/vda1       82536112 82519684         0 100% /
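The 100% root filesystem above is what killed the controller container. A small helper makes the check scriptable (a sketch; the 90% threshold is an arbitrary assumption):

```shell
# Print the use% (number only) of the filesystem holding a given path.
disk_pct() {
  df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}
# Example check against the 100% condition seen on the SDNC VM:
if [ "$(disk_pct /)" -ge 90 ]; then
  echo "WARNING: / is $(disk_pct /)% full - containers may start exiting"
fi
```

Running this periodically (e.g. from cron) would flag the SDNC VM before the controller container exits.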











Horizon instance summary for onap-sdnc: image ubuntu-14-04-cloud-amd64, network oam_onap_Ze9k (10.0.7.1, floating 10.12.5.92), flavor m1.large, key onap_key_Ze9k, status Active, Running 1 day, 5 hours.

Fix: reboot the instance to get disk usage back to 8%


root@onap-aai-inst1:~# docker ps

CONTAINER ID        IMAGE                                            COMMAND                  CREATED             STATUS              PORTS                              NAMES

90dc5a9791a9        nexus3.onap.org:10001/onap/model-loader          "/opt/app/model-lo..."   37 hours ago        Up 37 hours                                            testconfig_model-loader_1

eee85b22d3f1        nexus3.onap.org:10001/onap/search-data-service   "/opt/app/search-d..."   37 hours ago        Up 37 hours         0.0.0.0:9509->9509/tcp             testconfig_aai.searchservice.simpledemo.openecomp.org_1

ef4f1e9ab30e        nexus3.onap.org:10001/onap/data-router           "/opt/app/data-rou..."   37 hours ago        Up 37 hours         0.0.0.0:9502->9502/tcp             testconfig_datarouter_1

b19a8628ac43        nexus3.onap.org:10001/onap/sparky-be             "/bin/sh -c /opt/a..."   37 hours ago        Up 37 hours         8000/tcp, 0.0.0.0:9517->9517/tcp   testconfig_sparky-be_1

d5caad8eaded        aaionap/haproxy:1.1.0                            "/docker-entrypoin..."   37 hours ago        Up 37 hours         0.0.0.0:8443->8443/tcp             testconfig_aai.api.simpledemo.openecomp.org_1

f2b36df952d6        nexus3.onap.org:10001/openecomp/aai-traversal    "/bin/bash /opt/ap..."   37 hours ago        Up 37 hours         0.0.0.0:8446->8446/tcp, 8447/tcp   testconfig_aai-traversal.api.simpledemo.openecomp.org_1

663d0d3a3d82        nexus3.onap.org:10001/openecomp/aai-resources    "/bin/bash /opt/ap..."   37 hours ago        Up 37 hours         0.0.0.0:8447->8447/tcp             testconfig_aai-resources.api.simpledemo.openecomp.org_1

root@onap-aai-inst1:~# df

Filesystem     1K-blocks    Used Available Use% Mounted on

/dev/vda1       82536112 6153472  72996520   8% /


Test Deployments

20171125:2100: HEAT

Ran out of RAM for onap-multi-service:

No valid host was found. There are not enough hosts available. compute-08: (RamFilter) Insufficient usable RAM: req:16384, avail:3297.0 MB, compute-09: (RamFilter) Insufficient usable RAM: req:16384, avail:13537.0 MB, compute-06: (RamFilter) Insufficient

Robot VM docker containers down

root@onap-robot:/opt# ./robot_vm_init.sh 

Already up-to-date.

Already up-to-date.

Login Succeeded

1.1-STAGING-latest: Pulling from openecomp/testsuite

Digest: sha256:5f48706ba91a4bb805bff39e67bb52b26011d59f690e53dfa1d803745939c76a

Status: Image is up to date for nexus3.onap.org:10001/openecomp/testsuite:1.1-STAGING-latest

Error response from daemon: No such container: openecompete_container


fix: wait for the containers to come up

CONTAINER ID        IMAGE                                                          COMMAND                  CREATED             STATUS              PORTS                                   NAMES

77c6eba8c641        nexus3.onap.org:10001/onap/sniroemulator:latest                "/docker-entrypoin..."   30 seconds ago      Up 29 seconds       8080-8081/tcp, 0.0.0.0:8080->9999/tcp   sniroemulator

903964fc8fe1        nexus3.onap.org:10001/openecomp/testsuite:1.1-STAGING-latest   "lighttpd -D -f /e..."   30 seconds ago      Up 29 seconds       0.0.0.0:88->88/tcp                      openecompete_container
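"Wait for them" can be automated with a polling helper; a hedged sketch (assumes the docker CLI; the function name and timeout are illustrative):

```shell
# Block until a container with the given name is running, or time out.
wait_for_container() {
  # $1: container name, $2: timeout in seconds (default 300)
  local waited=0 timeout=${2:-300}
  until docker ps --format '{{.Names}}' | grep -qx "$1"; do
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 5
    waited=$((waited + 5))
  done
}
# e.g. wait_for_container openecompete_container 600
```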

Wait for deployment

AAI-513 - Getting issue details... STATUS


20171201: OOM release-1.1.0



Filling in over the weekend

See daily videos and Alexis Videos on the vFWCL and his expanded wiki

vFWCL instantiation, testing, and debugging