...
- Question:
The output of SDC will include a variety of files (CSAR, XML, YANG, etc.) distributed to multiple modules (SO, AAI, Policy, etc.). Can we create a document specifying the details, including package structure, format, and usage? There are NSD and VNFD specs on the VF-C page (refer to https://lf-onap.atlassian.net/wiki/display/DW/VF-C+R1+Deliverables). Does ONAP use them as a generic spec?
Answer:
On-boarding of ONAP VNF artifacts is done through SDC and may align in the future with standards-based approaches such as those described in ETSI NFV (IFA011, IFA014, SOL004). The information on the VF-C page is an excerpt of these. The ONAP community can certainly create a document specifying the details of VNF packages, aligned with ETSI NFV SOL004; this specification would then be consumed by SDC.
AI: SDC needs to clarify this. A sample would be very helpful.
- Question:
For VNF onboarding, do we use HEAT as the input for the VPP-based VNFs? Does SDC still convert them to TOSCA internally and output TOSCA-based VNF packages?
Answer:
For vCPE, the input is HEAT. Internally, SDC converts HEAT to TOSCA. The output of onboarding is TOSCA-based. The service description is TOSCA but the attachment includes HEAT artifacts, which will be distributed to SO. SO will pass HEAT to Multi-VIM.
For VoLTE, the input is TOSCA.
According to Zahi Kapeluto at the 7/25 virtual event, currently SDC can only import HEAT templates, convert them to TOSCA, and distribute TOSCA.
According to Eden Rozin at the 7/26 virtual event, SDC can import TOSCA VNFs. The limitation is that not all TOSCA syntax is supported.
- Question:
What is the complete list of artifacts needed to complete a service design? Workflow recipes for SO, data model for AAI, Yang and DGs for SDNC and APPC, policies, data analytic programs. Anything else?
Answer:
- Workflows for SO.
- Yang models and DGs
- Policies
- AAI data models: generic VNF model (existing) and service-specific models (to be created).
- Data analytics: to reuse the existing TCA
- Robot framework to emulate BSS
- Blueprint template and policies created using CLAMP
- Question:
What specific workflow recipes are needed for vCPE? What are the tools to create such recipes? How are the workflow recipes associated with the service, packaged, and distributed?
Answer:
There will be two sets of workflows.
- The first is to instantiate the general infrastructure, including vBNG, vG_Mux, vDHCP, and vAAA. They are explained on the use case wiki page.
- The second is to instantiate the per-customer service, including service-level operations and resource-level operations (tunnel cross connect, vG, and vBRG). They are explained in these slides.
- Question:
Are we supposed to create a specific set of workflow recipes for each use case? E.g., one set for vCPE and one set for VoLTE.
Answer:
There are generic workflows that can be reused. Specific workflows are also needed for each use case.
- Question:
Do we manually create AAI data models or use tools? How are the models packaged with the service and distributed? What data models are needed for vCPE?
Answer:
Based on meeting discussions:
--------------------------------------------
We need a consumer broadband service model. The existing generic VNF model will be reused. Additional parameters may be needed on top of the generic VNF model to create specific use case VNF models. There are tools to do this.
Comments from James Forsyth:
https://lf-onap.atlassian.net/wiki/download/attachments/101584916220188/aai_schema_v11.xsd?api=v2 contains the entire schema for A&AI, including generic-vnf.
-------------------------------------------
A&AI watches for new models in SDC and then the model loader puts those model definitions into the A&AI backend. The models needed for vCPE use case would be defined in SDC.
Comments from FREEMAN, BRIAN D:
---------------------------------------------
How would A&AI model the service path from the premises to the internet through the vCPE use case?
I think the generic VNF model is part of it.
I think the vlan/tunnel information might be missing or perhaps you could explain how the linked list of Layer 2 data could be used for the path ?
It's the physical data path piece that I am worried about, not the VNFs themselves.
Comments from James Forsyth:
-----------------------------------------------
This looks like a model-based query which would allow us to define the service path and then pull the whole thing out of the graph with a single call, starting at a given vertex like service-subscription. We've done similar use cases with A&AI in the past, and I think we have the appropriate VNF structures, networks, and interfaces defined in the schema to be able to define the service path, possibly using the model from SDC and maybe building some custom network models to support it as well. A&AI architecture will look closely at the use case and will perhaps provide a suggested model of how this service might look in A&AI.
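As a rough illustration of the model-based query described above, the sketch below builds a custom-query request that starts at a service-subscription vertex and asks A&AI to return the related service path in one call. The host, the stored query name (`query/service-path`), and the ID values are illustrative assumptions, not confirmed by this page.

```python
def build_service_path_query(global_customer_id: str, service_type: str) -> dict:
    """Build a custom-query body rooted at a service-subscription vertex."""
    return {
        "start": [
            f"business/customers/customer/{global_customer_id}"
            f"/service-subscriptions/service-subscription/{service_type}"
        ],
        # Hypothetical stored query that would walk service-instance ->
        # generic-vnf -> l-interface/network edges to assemble the path.
        "query": "query/service-path",
    }

def service_path_request(base_url: str, body: dict) -> dict:
    """Describe the HTTP call: PUT {base}/query?format=resource with the body."""
    return {
        "method": "PUT",
        "url": f"{base_url}/query",
        "params": {"format": "resource"},
        "json": body,
    }

req = service_path_request(
    "https://aai.example.com:8443/aai/v11",
    build_service_path_query("cust-001", "vCPE"),
)
```

Returning the whole sub-graph from one stored query is what avoids chaining many per-object GETs against A&AI.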
- Question:
Who creates the Yang files to define the SDNC/APPC NBI APIs and data models? Does SDC use the Yang files?
Answer:
For vCPE, Yang files are needed for SDNC to define the NBI API and the service data model for configuration. The APPC will just do stop/start. APPC will also configure the VNFs to enable VES data reporting to DCAE.
The SDNC team will define the Yang files for the SDNC NBI APIs. The Yang models will be based on the Yang models provided by individual VNFs.
In R1, DG Builder will work separately, outside of SDC, so SDC does not use the Yang files. The long-term goal is to integrate DG Builder into SDC.
- Question:
In the general case, ONAP may need to create a new SDNC/APPC dedicated for a new service when the service is instantiated. Is it designed by the ONAP operational team outside of SDC or included as a SDC function? What is the process of creating a new SDNC/APPC? Who does it and in what way, manual or automatic?
Answer:
In R1, the DGs are packaged with SDNC/APPC and are loaded into their DBs during instantiation. SDC does not use such DGs. The SDNC and APPC instances are created in advance with the required DGs loaded. The creation of SDNC and APPC is not part of SDC; it belongs to OOM.
- Question:
What policies are needed for vCPE? Are they created manually or using tools (Drools?) How are they integrated into SDC or CLAMP?
Answer:
The policies needed for the vCPE use case are being determined and defined. For R1, the expectation is that they will be created manually, without integration with SDC or other tools.
- Question:
Message bus topics need to be defined and used on DMaaP to enable communication among different modules. When are such topics defined and how are they configured? E.g., Policy will send event on a TOPIC_VM_RESTART to invoke VM restart, APPC will subscribe to TOPIC_VM_RESTART and execute the restart DG. When and where to define this topic? Who configures Policy and APPC to publish/subscribe to the topic, and when?
Answer:
For R1 these will be configured statically.
- Question:
Are we going to let VNFs actively post VES data to the DCAE REST API for data collection? If yes, we will need to configure the VNFs with the DCAE collector's URI. Is this configuration performed by APPC? Does APPC get the URI from AAI?
Answer:
Yes, VNFs will post VES data to the DCAE collector. Each VNF will need the IP and port of the DCAE collector. There are two options for this:
- Use APPC: the VNF provides a NETCONF API, and APPC configures the VNF after it is instantiated.
- Use HEAT: the IP and port are put in the HEAT template.
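The APPC option amounts to pushing the collector address into the VNF's configuration datastore over NETCONF. A minimal sketch of the payload such an edit-config might carry is below; the `ves-agent` module, namespace, and leaf names are hypothetical placeholders for whatever the VNF's actual YANG model defines.

```python
def build_collector_config(ip: str, port: int) -> str:
    """Build an edit-config payload that sets the VES collector address.

    The ves-agent container and leaf names below are hypothetical; the
    real names come from the YANG model the VNF exposes.
    """
    return (
        "<config>"
        '<ves-agent xmlns="urn:example:ves-agent">'
        f"<collector-ip>{ip}</collector-ip>"
        f"<collector-port>{port}</collector-port>"
        "</ves-agent>"
        "</config>"
    )

payload = build_collector_config("10.0.0.5", 8080)
```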
Based on discussions, it is preferred to use APPC for the configuration: in practice the collector may change, for reasons such as failover, and APPC can be used to modify the collector address at run time.
- Question:
Who will develop the data analytics program? Is it required to re-build the DCAE containers to include the analytics program?
Answer:
Kang: To confirm with DCAE: will modify TCA, will need to rebuild.
- Question:
Are we going to build the analytics program as a CDAP application or a Docker container?
Answer:
TCA will be used for the vCPE use case.
- Question:
Are there any KPI scripts that need to be created in SDC?
Answer:
The control loop is designed by CLAMP, which includes KPIs as parameters and policies.
Action Item: Discussions are needed to confirm the interface between SDC and CLAMP.
- Question:
What Robot testings need to be created during design time? How is the process integrated into SDC?
Answer:
We need the Robot framework to emulate BSS to send in customer orders. We also need the Robot framework to load data into DHCP and AAA.
- Question:
Is there a standard format for Robot testing reports? How are they presented in ONAP?
Answer:
AI: Discuss with Daniel Rose and Jerry Flood.
- Question:
Are we going to use the generic VID or create a vCPE flavor VID to instantiate vCPE?
Answer:
The expectation is that the Robot framework will be used to emulate requests to instantiate the subscriber-specific vCPE VNFs. VID may be used to trigger orchestration/instantiation of the supporting (DHCP, DNS, AAA) and edge/metro (BNG, MUX) VNFs.
- Question:
What kind of monitoring dashboard is required for vCPE?
Answer:
AI: Discuss with the Use Case UI project.
- Question:
For vCPE, what is the data collection/reporting mechanism between VNF and DCAE?
Answer:
We would prefer to use a connect approach to report generic statistics (e.g., per-port packets in/out, packet drop rate) and a VES agent approach for VNF-specific statistics (e.g., per-flow and per-subscriber data).
- Question:
For vCPE, list all the required SDNC control/configuration actions.
Answer:
-
- Question:
For vCPE, list all the required APPC control/configuration actions.
Answer:
- Question:
What are the complete list of tools/UIs to perform design and operation? For each tool, is it available for testing purpose? If not, what main functions are to be developed for R1?
Answer:
- Function: VNF packaging. Tools: VNF SDK.
- Function: VNF certification. Tools: ICE.
- Function: VNF onboarding. Tools: SDC. Input: VNF template, environment, reference to images. Output: VNF package. Notes: Every HEAT template received by SDC should be previously certified by ICE; ICE is not integrated with SDC so far. Images are not stored in SDC; they are either in OpenStack directly or pulled from the Internet. The images should be pulled from Multi-VIM, since different VIMs have different image formats.
- Function: Service template creation. Tools: SDC. Input: VNF packages, etc. Output: service template in TOSCA.
- Function: Closed loop design. Tools: CLAMP, SDC. Input: closed loop TOSCA template, VES/TCA templates, VES onboarding yaml file. Output: policies and a blueprint template.
- Function: Policy (out of closed loop). Tools: Policy GUI. Available for testing: yes, via the Portal Dashboard. Notes: The Policy GUI has a tab which allows deploying a policy into a PDP. Integration of the Policy GUI into SDC is not planned for R1.
- Function: Workflow. Tools: Camunda Modeler. Output: BPMN files.
- Function: A&AI data model. Notes: check with AAI.
- Function: Yang model for SDNC/APPC. Tools: text editor. Output: yang files. Available for testing: yes.
- Function: DG. Tools: DG Builder. Input: Yang model. Output: json/xml/compiled DG. Available for testing: yes.
- Function: Data analytics application. Tools: Java.
- Function: Data collector. Tools: VES docker container.
- Function: Service instantiation. Tools: VID.
- Function: Monitoring dashboard. Tools: Use Case UI.
- Function: Robot to emulate BSS.
- Function: Robot to invoke packet loss.
- Question:
How does vGMUX in vCPE restore to working state after being restarted?
Answer:
Approach proposed by Danny Zhou and Johnson Li:
Step 1: SDN-C invokes an agent in the vG MUX to configure the VES collector's IP and port; that information will be saved to a VES agent configuration file. Note: if VPP has already started when this call is invoked, the agent will update the in-memory variables as well.
Step 2: When VPP starts, the VES agent library will load the VES collector's IP and port into memory, to be used to construct the URI for interfacing with the VES collector embedded in DCAE.
Step 3: Robot configures the packet loss rate statistics in VPP to emulate the high packet loss scenario
Step 4: The VES Agent periodically reads the packet loss rate statistics from the VPP
Step 5: The VES Agent reports the statistics to DCAE
Step 6: The policy engine matches the vG MUX restart policy with the packet loss rate statistics, and triggers the APP-C to restart vG MUX VNF
Step 7: APP-C restarts vG MUX via multi-VIM
Step 8: VPP is signaled by the OS to save the current configuration to a configuration DB; those configurations will be consumed when VPP restarts. Note: in R1, all statistics are cleared to zero when VPP restarts. We could save them to the DB as well, but that requires a more complicated HA framework to support, so it might be an R2 feature.
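Steps 1 through 5 above can be sketched as follows. The configuration file format, field names, event name, and measurement fields are illustrative assumptions, not the actual VES agent implementation.

```python
import json

def load_collector_uri(config_text: str) -> str:
    """Step 2: read the collector IP/port saved by SDN-C and build the URI."""
    cfg = json.loads(config_text)
    return f"http://{cfg['collector_ip']}:{cfg['collector_port']}/eventListener/v5"

def build_measurement_event(packet_loss_pct: float, seq: int) -> dict:
    """Steps 4-5: wrap a VPP packet-loss reading in a minimal VES-style event."""
    return {
        "event": {
            "commonEventHeader": {
                "domain": "measurementsForVfScaling",
                "eventName": "vGMUX_packetLoss",  # hypothetical event name
                "sourceName": "vG_MUX",
                "sequence": seq,
            },
            "measurementsForVfScalingFields": {
                "vNicUsageArray": [{"packetLossPercent": packet_loss_pct}],
            },
        }
    }

# Step 1 writes a file like this; step 2 loads it when VPP starts.
uri = load_collector_uri('{"collector_ip": "10.0.0.5", "collector_port": 8080}')
event = build_measurement_event(25.0, 1)
```

The agent would then POST each event to the constructed URI on every reporting interval, which is where the policy engine's packet-loss threshold (step 6) gets its input.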
Oliver Spatscheck supports the above approach for R1 and points out the following shortcoming that should be fixed in R2:
If we use VPP this way we can't use authorization in the VES collection (this would require a call into DCAE to configure the VES collector access per VNF). What you describe below is similar to the setup we have for vDNS/vFW right now (except we "hardcode" the VES collector IP), but it's not the way we would run it in a production setting.
...