- Created by Alexis de Talhouët, last modified by JC on Jan 17, 2020
This guide provides information on how to build a CBA (Controller Blueprint Archive).
Installation
ONAP is meant to be deployed within a Kubernetes environment. Hence, the de-facto way to deploy CDS is through Kubernetes.
ONAP also packages its Kubernetes manifests as charts, using Helm.
Prerequisite
https://docs.onap.org/en/latest/guides/onap-developer/settingup/index.html
Set up local Helm
helm init --history-max 200 # To install tiller to target Kubernetes if not yet installed
helm serve &
helm repo add local http://127.0.0.1:8879
Get the chart
Make sure to check out the release to use, by replacing $release-tag in the below command.
git clone https://gerrit.onap.org/r/oom
cd oom
git checkout tags/$release-tag
cd kubernetes
make common
make cds
Install CDS
helm install --name cds cds
Result
$ kubectl get all --selector=release=cds
NAME                                             READY   STATUS    RESTARTS   AGE
pod/cds-blueprints-processor-54f758d69f-p98c2    0/1     Running   1          2m
pod/cds-cds-6bd674dc77-4gtdf                     1/1     Running   0          2m
pod/cds-cds-db-0                                 1/1     Running   0          2m
pod/cds-controller-blueprints-545bbf98cf-zwjfc   1/1     Running   0          2m

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/blueprints-processor    ClusterIP   10.43.139.9     <none>        8080/TCP,9111/TCP   2m
service/cds                     NodePort    10.43.254.69    <none>        3000:30397/TCP      2m
service/cds-db                  ClusterIP   None            <none>        3306/TCP            2m
service/controller-blueprints   ClusterIP   10.43.207.152   <none>        8080/TCP            2m

NAME                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cds-blueprints-processor    1         1         1            0           2m
deployment.apps/cds-cds                     1         1         1            1           2m
deployment.apps/cds-controller-blueprints   1         1         1            1           2m

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/cds-blueprints-processor-54f758d69f    1         1         0       2m
replicaset.apps/cds-cds-6bd674dc77                     1         1         1       2m
replicaset.apps/cds-controller-blueprints-545bbf98cf   1         1         1       2m

NAME                          DESIRED   CURRENT   AGE
statefulset.apps/cds-cds-db   1         1         2m
See also the Running CDS in minikube instructions for installing CDS.
Swagger
It can be found at http://$ip:$runtimePort/swagger-ui.html#/
NOTE: Swagger is currently disabled (since El Alto). The CDS APIs are not documented anywhere else yet.
The Designer API offers endpoints to perform CRUD operations on Blueprints, Model-Types and Data-Dictionaries.
This documentation applies to the Frankfurt release.
All endpoints require a Basic Auth header.
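For example, a call to the Designer API could look like the following; the ccsdkapps credentials and node port 30499 are the ones used elsewhere on this page, while the node IP is environment-specific:

curl -sS -X GET \
  -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
  http://<kube-node-ip>:30499/api/v1/dictionary/input-source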
Method: GET
Path: /api/v1/dictionary/{name}

Example success response:

{
  "name": "input-source",
  "dataType": "string",
  "entrySchema": null,
  "resourceDictionaryGroup": "default",
  "definition": {
    "tags": "input-source",
    "name": "input-source",
    "property": {
      "description": "name of the ", "required": null, "type": "string",
      "status": null, "constraints": null, "metadata": null, "value": null,
      "default": null, "entry_schema": null, "external-schema": null
    },
    "group": "default",
    "updated-by": "user@onap.com",
    "sources": {
      "input": {
        "description": null, "type": "source-input", "metadata": null,
        "directives": null, "properties": {}, "attributes": null,
        "capabilities": null, "requirements": null, "interfaces": null,
        "artifacts": null, "copy": null, "node_filter": null
      }
    }
  },
  "description": "name of the ",
  "tags": "input-source",
  "creationDate": "2020-01-09T20:38:53.000Z",
  "updatedBy": "user@onap.com"
}
Method: POST
Path: /api/v1/dictionary/definition
Note: this operation will either insert or update.

Example request:

{
  "name": "input-source",
  "group": "default",
  "property": {
    "description": "name of the ",
    "type": "string"
  },
  "updated-by": "user@onap.com",
  "tags": "input-source",
  "sources": {
    "input": {
      "type": "source-input",
      "properties": {}
    }
  }
}

Example success response:

{
  "tags": "input-source",
  "name": "input-source",
  "property": {
    "description": "name of the ", "required": null, "type": "string",
    "status": null, "constraints": null, "metadata": null, "value": null,
    "default": null, "entry_schema": null, "external-schema": null
  },
  "group": "default",
  "updated-by": "user@onap.com",
  "sources": {
    "input": {
      "description": null, "type": "source-input", "metadata": null,
      "directives": null, "properties": {}, "attributes": null,
      "capabilities": null, "requirements": null, "interfaces": null,
      "artifacts": null, "copy": null, "node_filter": null
    }
  }
}
Method: DELETE
Path: /api/v1/dictionary/{name}
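As a sketch, deleting the data dictionary created above would then be (same assumed credentials and port as above):

curl -sS -X DELETE \
  -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
  http://<kube-node-ip>:30499/api/v1/dictionary/input-source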
CDS Design time
Below are the requirements to enable automation for a service within ONAP.
For instantiation, the goal is to be able to automatically resolve all the HEAT/Helm variables, called cloud parameters.
For post-instantiation, the goal is to apply an initial configuration to the VNF.
As part of SDC design time, when defining the topology, for a resource of type VF or PNF, you need to specify the sdnc_model_name, sdnc_model_version and sdnc_artifact_name properties (see the SDC integration section at the end of this guide).
Helper scripts / tools are provided to help with the design time activities.
Here's a helper script to facilitate the deployment of data-types and data dictionaries, CBA enrichment and CBA upload.
Make sure to update the following parameter in the below script:
- NODE_IP: IP of one of the K8S cluster nodes
The script assumes the following folder structure is in place; update the script according to your environment:
└── service
    ├── cba
    ├── tmp
    │   ├── cba.zip (temporary file)
    │   └── cba-enriched.zip (temporary file)
    ├── data-dictionary
    └── data-type
Note that before CDS can be used to enrich and load models, initial Data Types and Data Dictionaries need to be loaded.
CDS ships with its own built-in default Data Types and Data Dictionaries, but those are not loaded at startup. The CDS default types and dictionaries can be loaded through the CDS GUI or with the following API call:
curl -X POST \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
  -d '{ "loadModelType" : true, "loadResourceDictionary" : true, "loadCBA" : false }' \
  http://<kube node ip>:30499/api/v1/blueprint-model/bootstrap
This will load the default Data Types and the default Data Dictionaries, but not the demo CBA models (as loadCBA is set to false).
#!/bin/sh

IP=NODE_IP
BLUEPRINT_PROCESSOR_PORT=30499
BLUEPRINT_PROCESSOR_URI=http://${IP}:${BLUEPRINT_PROCESSOR_PORT}

URL_ENRICH=${BLUEPRINT_PROCESSOR_URI}/api/v1/blueprint-model/enrich
URL_PUBLISH=${BLUEPRINT_PROCESSOR_URI}/api/v1/blueprint-model/publish
URL_DD=${BLUEPRINT_PROCESSOR_URI}/api/v1/dictionary
URL_DT=${BLUEPRINT_PROCESSOR_URI}/api/v1/model-type

CBA_ZIP=/service/tmp/cba.zip
CBA_ZIP_ENRICHED=~/service/tmp/cba_enriched.zip
CBA_PATH=/service/cba
DD_PATH=/service/data-dictionary
DT_PATH=/service/data-type

for f in $DT_PATH/*.json; do
  echo "Pushing model-type '$f'"
  curl -sS -X POST $URL_DT -H 'Content-Type: application/json' -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' -d "@$f"
  echo " "
done

for f in $DD_PATH/*.json; do
  echo "Pushing data dictionary '$f'"
  curl -sS -X POST $URL_DD -H 'Content-Type: application/json' -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' -d "@$f"
  echo " "
done

[ -f "$CBA_ZIP" ] && rm "$CBA_ZIP"
[ -f "$CBA_ZIP_ENRICHED" ] && rm "$CBA_ZIP_ENRICHED"

pushd $CBA_PATH
zip -uqr $CBA_ZIP . --exclude=*.git*
popd

echo "Doing enrichment..."
curl -sS -X POST $URL_ENRICH -H 'content-type: multipart/form-data' -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' -F file=@$CBA_ZIP -o $CBA_ZIP_ENRICHED

echo "Publishing..."
curl -X POST $URL_PUBLISH -H 'content-type: multipart/form-data' -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' -F file=@$CBA_ZIP_ENRICHED

# rm $CBA_ZIP $CBA_ZIP_ENRICHED
With this helper script, a user can load their own CBA model, as well as their own Data Types (API: /api/v1/model-type) and Data Dictionaries (API: /api/v1/dictionary), into CDS.
NOTE: the CDS default Data Type and Data Dictionary files cannot be loaded into CDS through these APIs; only CDS itself can load files in that format.
The file format supported by the /api/v1/model-type and /api/v1/dictionary APIs is not documented anywhere yet.
Prerequisite
- CDS runtime needs to be loaded with Data Types and Data Dictionary before starting modeling (enrichment). This can be done in CDS GUI or using API /api/v1/blueprint-model/bootstrap.
Gather the parameters:
For instantiation:
- Have the HEAT template along with the HEAT environment file, or
- Have the Helm chart along with the Values.yaml file (integration between Multicloud and CDS TBD)

For configuration:
- Have the configuration template to apply on the VNF:
  - XML for NETCONF
  - JSON / XML for RESTCONF
  - JSON for Ansible
  - CLI
  - ...
- Identify which template parameters are static and dynamic
Create and fill in a table for all the dynamic values.
While doing so, identify the resources that are resolved using the same process; for instance, if two IPs have to be resolved through the same IPAM, the process to resolve each IP is the same.
Here is the information to capture for each dynamic cloud parameter:
Parameter Name: either the cloud parameter's name or the placeholder given for the dynamic property.

Data Dictionary Resource source:
- Input: value will be given as input in the request.
- Default: value will be defaulted in the model.
- REST: value will be resolved by sending a query to the REST system. Ingredients:
  - Auth: supported auth types:
    - Token: use token-based authentication (token)
    - Basic: use basic authentication (username, password)
    - SSL: use SSL basic authentication (keystore type, truststore, truststore password, keystore, keystore password)
  - URL: http(s)://<host>:<port>
  - URI: /xyz
  - Payload: JSON formatted payload
  - VERB: HTTP method
- SQL: value will be resolved by sending a SQL statement to the DB system. Ingredients:
  - Type: only maria-db supported for now
  - URL
  - Username
  - Password
  - Query: SQL statement
- Capability: value will be resolved through the execution of a script.

Data Dictionary Ingredients for resolution: these are all the required parameters to process the resolution of that particular resource.
- REST: list of placeholders used for the URI and the payload.
- DB: list of placeholders used for the SQL statement.

Output of resolution: this is the expected result from the system, and you should know which value out of the response is of interest to you. If it's a JSON payload, think about the JSON path to access the value of interest.
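For instance, given the IPAM response shown further below, the value of interest sits at the path /address. A quick way to experiment locally with jq (jq notation, not the CDS path property itself):

echo '{ "id": 4, "address": "192.168.10.2/32" }' | jq -r '.address'
# prints: 192.168.10.2/32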
Data dictionary
For each uniquely identified dynamic resource, along with all its ingredients, we need to create a data dictionary.
Here is the modeling guideline: Modeling Concepts#resourceDefinition-modeling
Below are examples of data dictionaries.
Value will be passed as input.
{ "tags": "unit-number", "name": "unit-number", "property": { "description": "unit-number", "type": "string" }, "updated-by": "adetalhouet", "sources": { "input": { "type": "source-input" } } }
Value will be defaulted.
{ "tags": "prefix-id", "name": "prefix-id", "property" :{ "description": "prefix-id", "type": "integer" }, "updated-by": "adetalhouet", "sources": { "default": { "type": "source-default" } } }
Value will be resolved through REST.
Modeling reference: Modeling Concepts#rest
In this example, we're making a POST request to an IPAM system with no payload.
Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependency.
Please refer to the modeling guideline for a more in-depth understanding.
As part of this request, the expected response will be as below. What is of interest is the address field, as this is what we're trying to resolve.
{ "id": 4, "address": "192.168.10.2/32", "vrf": null, "tenant": null, "status": 1, "role": null, "interface": null, "description": "", "nat_inside": null, "created": "2018-08-30", "last_updated": "2018-08-30T14:59:05.277820Z" }
To tell the resolution framework what is of interest in the response, the path property can be used; it uses JSON path notation to get the value.
{ "tags" : "oam-local-ipv4-address", "name" : "create_netbox_ip", "property" : { "description" : "netbox ip", "type" : "string" }, "updated-by" : "adetalhouet", "sources" : { "sdnc" : { "type" : "source-rest", "properties" : { "type" : "JSON", "verb" : "POST", "endpoint-selector" : "ipam-1", "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/", "path" : "/address", "input-key-mapping" : { "prefixId" : "prefix-id" }, "output-key-mapping" : { "address" : "address" }, "key-dependencies" : [ "prefix-id" ] } } } }
Value will be resolved through a database.
Modeling reference: Modeling Concepts#sql
In this example, we're issuing a SQL query to the primary database.
Some ingredients are required to perform the query, in this case $vfmoduleid. Hence it is provided as an input-key-mapping and defined as a key-dependency.
Please refer to the modeling guideline for a more in-depth understanding.
As part of this request, the expected response is the content of the value column. In the output-key-mapping section, that value will be mapped to the expected resource name to resolve.
{ "name": "vf-module-type", "tags": "vf-module-type", "property": { "description": "vf-module-type", "type": "string" }, "updated-by": "adetalhouet", "sources": { "primary-db": { "type": "source-db", "properties": { "type": "SQL", "query": "select sdnctl.demo.value as value from sdnctl.demo where sdnctl.demo.id=:vfmoduleid", "input-key-mapping": { "vfmoduleid": "vf-module-number" }, "output-key-mapping": { "vf-module-type": "value" }, "key-dependencies": [ "vf-module-number" ] } } } }
Value will be resolved through the execution of a script.
Modeling reference: Modeling Concepts#Capability
In this example, we're making use of a Python script.
Some ingredients are required to perform the resolution, in this case $vf-module-type. Hence it is provided as a key-dependency.
Please refer to the modeling guideline for a more in-depth understanding.
As part of this request, the expected response will be set within the script itself.
{ "tags": "interface-description", "name": "interface-description", "property": { "description": "interface-description", "type": "string" }, "updated-by": "adetalhouet", "sources": { "capability": { "type": "source-capability", "properties": { "script-type": "jython", "script-class-reference": "Scripts/python/DescriptionExample.py", "key-dependencies": [ "vf-module-type" ] } } } }
The script itself is as below.
The key is to have the script class derive from the framework standards. In the case of resource resolution, the class to derive from is AbstractRAProcessor. It provides the required methods to implement, process and recover, along with some utility functions, such as set_resource_data_value or addError. These functions come either from the AbstractRAProcessor class or from the class it derives from.
If the resolution fails, the recover method will get called with the exception as a parameter.
# Copyright (c) 2019 Bell Canada.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from abstract_ra_processor import AbstractRAProcessor
from blueprint_constants import *
from java.lang import Exception as JavaException


class DescriptionExample(AbstractRAProcessor):

    def process(self, resource_assignment):
        try:
            # get key-dependencies value
            value = self.raRuntimeService.getStringFromResolutionStore("vf-module-type")
            # logic based on key-dependency outcome
            result = ""
            if value == "vfw":
                result = "This is the Virtual Firewall entity"
            elif value == "vsn":
                result = "This is the Virtual Sink entity"
            elif value == "vpg":
                result = "This is the Virtual Packet Generator"
            # set the value of the resource currently getting resolved
            self.set_resource_data_value(resource_assignment, result)
        except JavaException, err:
            log.error("Java Exception in the script {}", err)
        except Exception, err:
            log.error("Python Exception in the script {}", err)
        return None

    def recover(self, runtime_exception, resource_assignment):
        print self.addError(runtime_exception.getMessage())
        return None
Value will be resolved through REST, and the output will be a complex type.
Modeling reference: Modeling Concepts#rest
In this example, we're making a POST request to an IPAM system with no payload.
Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependency.
Please refer to the modeling guideline for a more in-depth understanding.
As part of this request, the expected response will be as below.
{ "id": 4, "address": "192.168.10.2/32", "vrf": null, "tenant": null, "status": 1, "role": null, "interface": null, "description": "", "nat_inside": null, "created": "2018-08-30", "last_updated": "2018-08-30T14:59:05.277820Z" }
What is of interest are the address and id fields. For the process to return these two values, we need to create a custom data-type, as below:
{ "version": "1.0.0", "description": "This is Netbox IP Data Type", "properties": { "address": { "required": true, "type": "string" }, "id": { "required": true, "type": "integer" } }, "derived_from": "tosca.datatypes.Root" }
The type of the data dictionary will be dt-netbox-ip.
To tell the resolution framework what is of interest in the response, the output-key-mapping section is used. The process will map the output-key-mapping to the defined data-type.
{ "tags" : "oam-local-ipv4-address", "name" : "create_netbox_ip", "property" : { "description" : "netbox ip", "type" : "dt-netbox-ip" }, "updated-by" : "adetalhouet", "sources" : { "sdnc" : { "type" : "source-rest", "properties" : { "type" : "JSON", "verb" : "POST", "endpoint-selector" : "ipam-1", "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/", "path" : "", "input-key-mapping" : { "prefixId" : "prefix-id" }, "output-key-mapping" : { "address" : "address", "id" : "id" }, "key-dependencies" : [ "prefix-id" ] } } } }
CBA scaffolding
The overall purpose of these files is to constitute a CBA; see Modeling Concepts#ControllerBlueprintArchive for an understanding of what a CBA is.
Now is the time to create the scaffolding for your CBA.
What you will need is the following base directory/file structure:
├── Definitions
│   └── blueprint.json          Overall TOSCA service template (workflow + node_template)
├── Environments                Contains *.properties files as required by the service
├── Plans                       Contains Directed Graphs
├── Scripts                     Contains scripts
│   ├── python                  Python scripts
│   └── kotlin                  Kotlin scripts
├── TOSCA-Metadata
│   └── TOSCA.meta              Meta-data of overall package
└── Templates                   Contains combinations of mapping and template
The TOSCA.meta should have this information:
TOSCA-Meta-File-Version: 1.0.0
CSAR-Version: 1.0
Created-By: Alexis de Talhouët (adetalhouet89@gmail.com)
Entry-Definitions: Definitions/blueprint.json   <- Path reference to the blueprint.json file. If the file name is changed, change it here accordingly.
Template-Tags: ONAP, CBA, Test
Content-Type: application/vnd.oasis.bpmn
Template-Name: <Name of the template>
Template-Version: <Version>
The blueprint.json should have the following metadata
{ "metadata": { "template_author": "Alexis de Talhouët", "author-email": "adetalhouet89@gmail.com", "user-groups": "ADMIN, OPERATION", "template_name": "golden", <- This is the overall CBA name, will be refer later to sdnc_blueprint_name "template_version": "1.0.0", <- This is the overall CBA version, will be refer later to sdnc_blueprint_version "template_tags": "ONAP, CBA, Test" } . . .
ONAP Specific Workflows
The following workflows are contracts established between SO, SDNC and CDS to cover the instantiation and the post-instantiation use cases.
User -> SO (Macro Service Create)
SO -> AssignBB (service, vnf, vf-module) - instantiation -> SDNC GR-API -> CDS (resource-assignment workflow)
SO -> ConfigAssignBB - day0 config assign -> CDS (config-assign workflow)
SO -> CreateBB (VF-Module) -> OpenStack adapter / Multi-Cloud
SO -> ConfigDeployBB - day0 config push -> CDS (config-deploy workflow)
Please refer to the modeling guide to understand workflow concept: Modeling Concepts#workflow
The workflow definition will be added within the blueprint.json file, see 58233401.
resource-assignment
This action is meant to assign resources needed to instantiate the service, e.g. to resolve all the cloud parameters.
Also, this action has the ability to perform a dry-run, meaning that the result of the resolution will be made visible to the user.
Context
This action is triggered by the Generic-Resource-API (GR-API) within SDNC as part of the AssignBB orchestrated by SO.
It will be triggered for each VNF and VF-Module (referred to as entity below).
See SO Building blocks Assignment.
Templates
Understand resource accumulator templates
These templates are specific to the instantiation scenario, and rely on GR-API within SDNC.
The resource accumulator template is composed of the following sections:
resource-accumulator-resolved-data
Defines all the resources that can be resolved directly from the context. It expresses a direct mapping between the name of the resource and its value.
"resource-accumulator-resolved-data": [ { "param-name": "service-instance-id", "param-value": "${service-instance-id}" }, { "param-name": "vnf_id", "param-value": "${vnf-id}" } ]
capability-data
Defines the logic to use to create a specific resource, along with the ingredients required to invoke the capability and the output mapping. Think of the ingredients as function parameters, and of the output mapping as the returned value.
The logic to resolve the resource is a DG, hence DG development is required to support a new capability.
Currently the following capabilities exist:
Netbox:
netbox-ip-assign
Example:

{
  "capability-name": "netbox-ip-assign",
  "key-mapping": [
    {
      "payload": [
        { "param-name": "service-instance-id", "param-value": "${service-instance-id}" },
        { "param-name": "prefix-id", "param-value": "${private-prefix-id}" },
        { "param-name": "vf-module-id", "param-value": "${vf-module-id}" },
        { "param-name": "external_key", "param-value": "${vf-module-id}-vpg_private_ip_1" }
      ],
      "output-key-mapping": [
        { "resource-name": "vpg_private_ip_1", "resource-value": "${vpg_private_ip_1}" }
      ]
    }
  ]
}
Name generation:
generate-name
Example:

{
  "capability-name": "generate-name",
  "key-mapping": [
    {
      "payload": [
        { "param-name": "resource-name", "param-value": "vnf_name" },
        { "param-name": "resource-value", "param-value": "${vnf_name}" },
        { "param-name": "external-key", "param-value": "${vnf-id}_vnf_name" },
        { "param-name": "policy-instance-name", "param-value": "${vf-naming-policy}" },
        { "param-name": "nf-role", "param-value": "${nf-role}" },
        { "param-name": "naming-type", "param-value": "VNF" },
        { "param-name": "AIC_CLOUD_REGION", "param-value": "${aic-cloud-region}" }
      ],
      "output-key-mapping": [
        { "resource-name": "vnf_name", "resource-value": "${vnf_name}" }
      ]
    }
  ]
}
Required templates
See Modeling Concepts#template
The names of the templates are very important and can't be random. Below are the requirements.
VNF
The VNF resource accumulator template prefix name can be anything, but it is very important that, when integrating with SDC, the sdnc_artifact_name property of the VF or PNF is the same; see 58233401.
VF-Modules
Each vf-module will have its own resource accumulator template, and its prefix name must be the vf-module-label, which is nothing but the name of the HEAT file defining the OS::Nova::Server.
Example:
If the file is named vfw.yaml, the vf-module-label will be vfw.
For instance, with the vFW service HEAT definition, you will see the following screen in the VSP within SDC, showing you the label of each vf-module.
In this case, we will have 4 resource accumulator templates, following the template convention, hence ending with -template (a sketch of one such template follows the list):
- base_template-template.vtl
- vfw-template.vtl
- vsn-template.vtl
- vpg-template.vtl
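As a sketch, one such resource accumulator template (e.g. Templates/vfw-template.vtl) combines the sections described earlier; the parameter names below are illustrative only:

{
  "resource-accumulator-resolved-data": [
    { "param-name": "vnf_id", "param-value": "${vnf-id}" },
    { "param-name": "vfw_private_ip_0", "param-value": "${vfw_private_ip_0}" }
  ],
  "capability-data": []
}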
Mapping
Each template requires its associated mapping file, see Modeling Concepts#ArtifactMappingResource
Example:
Taking the same vFW example, we would have 4 mapping files following the convention, hence ending with -mapping (a sketch follows the list):
- base_template-mapping.json
- vfw-mapping.json
- vsn-mapping.json
- vpg-mapping.json
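Each mapping file ties a template placeholder to its data dictionary entry. A minimal sketch of a Templates/vfw-mapping.json, assuming the resource assignment structure (the field names below are an assumption, not taken from this page):

[
  {
    "name": "vfw_private_ip_0",
    "property": { "type": "string" },
    "dictionary-name": "vfw_private_ip_0",
    "dictionary-source": "sdnc",
    "dependencies": [ "prefix-id" ]
  }
]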
Required Inputs
Property | Description | Definition |
---|---|---|
template-prefix | SDNC will populate this input with the name of the template to execute. If doing VNF Assign, it will use sdnc_artifact_name as template-prefix. If doing VF-Module Assign, it will use the vf-module-label as template-prefix. | "template-prefix" : { "required" : true, "type" : "list", "entry_schema" : { "type" : "string" } } |
Output
It is necessary to provide the resolved template as output. To do so, we will use the Modeling Concepts#getAttribute expression.
Also, as mentioned in Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.
Finally, the name of the output has to be meshed-template so SDNC GR-API knows how to properly parse the response.
Component
This action requires a node_template of type component-resource-resolution.
The name of the node_template is important, as it will be used within the workflow definition (see the step.target property, Modeling Concepts#workflowProperties).
Finally, you can see the component has a list of artifacts, being the templates/mappings defined before.
Example:
Taking the same vFW example, we have a node_template named resource-assignment:
"node_templates": { "resource-assignment" : { "type" : "component-resource-resolution", "interfaces" : { "ResourceResolutionComponent" : { "operations" : { "process" : { "inputs" : { "artifact-prefix-names" : { "get_input" : "template-prefix" } } } } } }, "artifacts": { "base-template": { "type": "artifact-template-velocity", "file": "Templates/base-template.vtl" }, "base-mapping": { "type": "artifact-mapping-resource", "file": "Templates/base-mapping.json" }, "vfw-template": { "type": "artifact-template-velocity", "file": "Templates/vfw-template.vtl" }, "vfw-mapping": { "type": "artifact-mapping-resource", "file": "Templates/vfw-mapping.json" }, "vfw-vnf-template": { "type": "artifact-template-velocity", "file": "Templates/vfw-vnf-template.vtl" }, "vfw-vnf-mapping": { "type": "artifact-mapping-resource", "file": "Templates/vfw-vnf-mapping.json" }, "vpg-template": { "type": "artifact-template-velocity", "file": "Templates/vpg-template.vtl" }, "vpg-mapping": { "type": "artifact-mapping-resource", "file": "Templates/vpg-mapping.json" }, "vsn-template": { "type": "artifact-template-velocity", "file": "Templates/vsn-template.vtl" }, "vsn-mapping": { "type": "artifact-mapping-resource", "file": "Templates/vsn-mapping.json" } } } } }
Overall workflow example w/ component and artifact
{ "metadata": { "template_author": "Alexis de Talhouët", "author-email": "adetalhouet89@gmail.com", "user-groups": "ADMIN, OPERATION", "template_name": "vFW_spinup", "template_version": "1.0.0", "template_tags": "vFW" }, "topology_template": { "workflows": { "resource-assignment": { "steps": { "resource-assignment": { "description": "Resource Assign Workflow", "target": "resource-assignment" } }, "inputs" : { "template-prefix" : { "required" : true, "type" : "list", "entry_schema" : { "type" : "string" } } }, "outputs": { "meshed-template": { "type": "json", "value": { "get_attribute": [ "resource-assignment", "assignment-params" ] } } } } }, "node_templates": { "resource-assignment" : { "type" : "component-resource-resolution", "interfaces" : { "ResourceResolutionComponent" : { "operations" : { "process" : { "inputs" : { "artifact-prefix-names" : { "get_input" : "template-prefix" } } } } } }, "artifacts": { "base-template": { "type": "artifact-template-velocity", "file": "Templates/base-template.vtl" }, "base-mapping": { "type": "artifact-mapping-resource", "file": "Templates/base-mapping.json" }, "vfw-template": { "type": "artifact-template-velocity", "file": "Templates/vfw-template.vtl" }, "vfw-mapping": { "type": "artifact-mapping-resource", "file": "Templates/vfw-mapping.json" }, "vfw-vnf-template": { "type": "artifact-template-velocity", "file": "Templates/vfw-vnf-template.vtl" }, "vfw-vnf-mapping": { "type": "artifact-mapping-resource", "file": "Templates/vfw-vnf-mapping.json" }, "vpg-template": { "type": "artifact-template-velocity", "file": "Templates/vpg-template.vtl" }, "vpg-mapping": { "type": "artifact-mapping-resource", "file": "Templates/vpg-mapping.json" }, "vsn-template": { "type": "artifact-template-velocity", "file": "Templates/vsn-template.vtl" }, "vsn-mapping": { "type": "artifact-mapping-resource", "file": "Templates/vsn-mapping.json" } } } } } }
Add a new capability
When adding a capability, consider whether it should be available both at VNF and VF-Module level. This is important for its implementation.
You need to do the following:
- Create the DG that will handle the logic to resolve the resource
- Load the DG within SDNC
Example of script to automate deployment of a DG:

#!/bin/sh
# This script takes care of loading the DG into the runtime of SDNC.
# The DG file name has to follow this pattern:
# GENERIC-RESOURCE-API_{rpc_name}_{version}

usage() {
  echo "./load-dg.sh <dg>"
  exit
}

if [[ -z $1 ]]
then
  usage
fi

rpc_name=`echo "$1" | cut -d'_' -f2 | cut -d'.' -f1`
version=`echo "$1" | cut -d'_' -f3`
content=`cat $1`
ip=$2

data="$(curl -s -o /dev/null -w %{url_effective} --get --data-urlencode "$content" "")"
dg_xml_escaped="${data##/?}"
echo -e "module=GENERIC-RESOURCE-API&rpc=$rpc_name&flowXml=$dg_xml_escaped" > payload

echo -e " Installing $rpc_name version ${version%.*}"
curl -X POST \
  http://$ip:$SDNC_NODE_PORT/uploadxml \
  -H 'Authorization: Basic ZGd1c2VyOnRlc3QxMjM=' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d @payload
rm payload

echo -e " Activating $rpc_name version ${version%.*}"
activate_uri="activateDG?module=GENERIC-RESOURCE-API&rpc=$rpc_name&mode=sync&version=${version%.*}&displayOnlyCurrent=true"
curl -X GET \
  -H 'Accept: application/json' \
  -H 'Authorization: Basic ZGd1c2VyOnRlc3QxMjM=' \
  -H 'Content-Type: application/json' \
  http://$ip:$SDNC_NODE_PORT/$activate_uri
- Add the capability in the self-serve-vnf-assign and/or self-serve-vf-module-assign DG, in the node named set ss.capability.execution-order[], then upload the updated version of this DG. When doing so, make sure to increment the last parameter, ss.capability.execution-order_length.
Understand overall SDNC DG flow logic
The logic for vnf and vf-module assignment is pretty much the same.
This is the general DG logic of the VNF assign flow and sub-flows:
- call vnf-topology-operation
  - call vnf-topology-operation-assign
    - call self-serve-vnf-assign
      - set capability.execution-order
      - call self-serve-vnf-ra-assignment
        - execute REST call to CDS blueprint processor
        - put resource-accumulator-resolved-data in MDSAL GR-API/services/service/$serviceInstanceId/vnfs/vnf/$vnfId
      - call self-serve-<capability-name>
      - put vnf information in AAI (including the selflink)
      - call naming-policy-generate-name
      - put generic-vnf relationship in AAI
This is the general logic of the vf-module assign flow and sub-flows:
- call vf-module-topology-operation
  - call vf-module-topology-operation-assign
    - set service-data based on SO request (userParams / cloudParams)
    - call self-serve-vf-module-assign
      - set capability.execution-order
      - call self-serve-vfmodule-ra-assignment
        - execute REST call to CDS blueprint processor
        - put resource-accumulator-resolved-data in MDSAL GR-API/services/service/$serviceInstanceId/vnfs/vnf/$vnfId/vf-modules/vf-module
      - call self-serve-<capability-name>
      - put vf-module information in AAI
      - put vnfc information in AAI
config-assign
This action is meant to assign all the resources and generate the configuration to apply post-instantiation (day0 config).
Context
This action is triggered by SO after the AssignBB has been executed for Service, VNF and VF-Module. It corresponds to the ConfigAssignVnfBB.
See SO Building blocks Assignment.
Templates
For this action, you can define as many templates as needed. Make sure each template follows the convention and comes with its mapping file, as follows (a sketch is shown after the list):
- xyz-template.vtl
- xyz-mapping.json
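As a sketch, a day0 template is typically a config payload with Velocity placeholders that the matching mapping file binds to data dictionary entries; the content below (Templates/xyz-template.vtl) is illustrative only:

{
  "stream-count": ${stream-count}
}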
Required Input
Property | Description | Definitions |
---|---|---|
template-prefix | Name of the template-prefix to resolve. | "template-prefix" : { "required" : true, "type" : "list", "entry_schema" : { "type" : "string" } } |
resolution-key | The functionality requires the ability to retrieve, at a later point in time in the process (during the config-deploy action), the resolution that has been made. | "resolution-key" : { "required" : true, "type" : "string" } |
Output
In order to perform a dry-run, it is necessary to provide the meshed resolved template as output. To do so, the use of the Modeling Concepts#getAttribute expression is required.
Also, as mentioned in Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.
Component
This action requires a node_template of type component-resource-resolution.
The name of the node_template is important, as it will be used within the workflow definition (see the step.target property, Modeling Concepts#workflowProperties).
Finally, you can see the component has a list of artifacts, being the templates/mappings defined before.
Example:
Taking the vDNS example, we have a node_template named config-assign:
"config-assign" : { "type" : "component-resource-resolution", "interfaces" : { "ResourceResolutionComponent" : { "operations" : { "process" : { "inputs" : { "resolution-key" : { "get_input" : "resolution-key" }, "store-result" : true, "artifact-prefix-names" : [ "baseconfig", "incremental-config" ] } } } } }, "artifacts" : { "baseconfig-template" : { "type" : "artifact-template-velocity", "file" : "Templates/baseconfig-template.vtl" }, "baseconfig-mapping" : { "type" : "artifact-mapping-resource", "file" : "Templates/baseconfig-mapping.json" }, "incremental-config-template" : { "type" : "artifact-template-velocity", "file" : "Templates/incremental-config-template.vtl" }, "incremental-config-mapping" : { "type" : "artifact-mapping-resource", "file" : "Templates/incremental-config-mapping.json" } } },
Overall workflow example w/ component and artifact
Here is an example of the config-assign workflow:
{ "tosca_definitions_version": "controller_blueprint_1_0_0", "metadata": { "template_author": "Abdelmuhaimen Seaudi", "author-email": "abdelmuhaimen.seaudi@orange.com", "user-groups": "ADMIN, OPERATION", "template_name": "test", "template_version": "1.0.0", "template_tags": "test, vDNS-CDS, SCALE-OUT, MARCO" }, "topology_template": { "workflows": { "config-assign": { "steps": { "config-assign": { "description": "Config Assign Workflow", "target": "config-assign" } }, "inputs": { "resolution-key": { "required": true, "type": "string" }, "config-assign-properties": { "description": "Dynamic PropertyDefinition for workflow(config-assign).", "required": true, "type": "dt-config-assign-properties" } }, "outputs": { "dry-run": { "type": "json", "value": { "get_attribuxte": [ "config-assign", "assignment-params" ] } } } } }, "node_templates": { "config-assign": { "type": "component-resource-resolution", "interfaces": { "ResourceResolutionComponent": { "operations": { "process": { "inputs": { "resolution-key": { "get_input": "resolution-key" }, "store-result": true, "artifact-prefix-names": [ "baseconfig", "incremental-config" ] } } } } }, "artifacts": { "baseconfig-template": { "type": "artifact-template-velocity", "file": "Templates/baseconfig-template.vtl" }, "baseconfig-mapping": { "type": "artifact-mapping-resource", "file": "Templates/baseconfig-mapping.json" }, "incremental-config-template": { "type": "artifact-template-velocity", "file": "Templates/incremental-config-template.vtl" }, "incremental-config-mapping": { "type": "artifact-mapping-resource", "file": "Templates/incremental-config-mapping.json" } } } } } }
config-deploy
This action is meant to push the configuration templates defined during the config-assign step, for post-instantiation.
This action is triggered by SO after the CreateBB has been executed for all the VF-Modules.
Context
This action is triggered by SO after the CreateVnfBB has been executed. It corresponds to the ConfigDeployBB.
See SO Building blocks Assignment.
Templates
If need be, some templates can be defined. They can either be resolved through a node_template of type component-resource-resolution, which will then have to be combined with another node_template in order to push the config to the network or to a third-party system; in this case, you will want to leverage the multi-action workflow.
Otherwise, the template can be resolved directly through a node_template of type component-script-executor, using the helper functions provided.
Required Inputs
Property | Description |
---|---|
resolution-key | Needed to retrieve the resolution that was made at an earlier point in time in the process. The combination of the artifact-name and the resolution-key will be used to uniquely identify the result. |
Output
SUCCESS or FAILURE
Component
If you want to have a multi-action workflow, then the action will refer to a node_template of type dg-generic.
If you want to have a single-action workflow, then you should use one of the following node types: component-script-executor, component-remote-script-executor, component-remote-ansible-executor.
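A minimal sketch of such a single-action alternative, assuming the ComponentScriptExecutor interface follows the same pattern as the ComponentNetconfExecutor shown further below (the interface and input names here are an assumption, not taken from this page):

"config-deploy": {
  "type": "component-script-executor",
  "interfaces": {
    "ComponentScriptExecutor": {
      "operations": {
        "process": {
          "inputs": {
            "script-type": "jython",
            "script-class-reference": "Scripts/python/ConfigDeploy.py",
            "dynamic-properties": "*config-deploy-properties"
          }
        }
      }
    }
  }
}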
The name of the node_template is important, as it will be used within the workflow definition (see the step.target property, Modeling Concepts#workflowProperties).
Finally, you can see the component(s) might have a list of artifacts, being the templates/mappings defined before.
Example:
Taking the vDNS example, we have a node_template named config-deploy-process, which is of type dg-generic; hence we also have the dependent node_templates.
"config-deploy-process" : { "type" : "dg-generic", "properties" : { "content" : { "get_artifact" : [ "SELF", "dg-config-deploy-process" ] }, "dependency-node-templates" : [ "nf-account-collection", "execute" ] }, "artifacts" : { "dg-config-deploy-process" : { "type" : "artifact-directed-graph", "file" : "Plans/CONFIG_ConfigDeploy.xml" } } }, "nf-account-collection" : { "type" : "component-resource-resolution", "interfaces" : { "ResourceResolutionComponent" : { "operations" : { "process" : { "inputs" : { "artifact-prefix-names" : [ "nf-params" ] } } } } }, "artifacts" : { "nf-params-template" : { "type" : "artifact-template-velocity", "file" : "Templates/nf-params-template.vtl" }, "nf-params-mapping" : { "type" : "artifact-mapping-resource", "file" : "Templates/nf-params-mapping.json" } } }, "execute" : { "type" : "component-netconf-executor", "requirements" : { "netconf-connection" : { "capability" : "netconf", "node" : "netconf-device", "relationship" : "tosca.relationships.ConnectsTo" } }, "interfaces" : { "ComponentNetconfExecutor" : { "operations" : { "process" : { "inputs" : { "script-type" : "jython", "script-class-reference" : "Scripts/python/ConfigDeploy.py", "instance-dependencies" : [ ], "dynamic-properties" : "*config-deploy-properties" } } } } }, "artifacts" : { "baseconfig-template" : { "type" : "artifact-template-velocity", "file" : "Templates/baseconfig-template.vtl" }, "baseconfig-mapping" : { "type" : "artifact-mapping-resource", "file" : "Templates/baseconfig-mapping.json" }, "incremental-config-template" : { "type" : "artifact-template-velocity", "file" : "Templates/incremental-config-template.vtl" }, "incremental-config-mapping" : { "type" : "artifact-mapping-resource", "file" : "Templates/incremental-config-mapping.json" } } } } }
Overall workflow example w/ component and artifact
Here is an example of the config-deploy workflow:
{ "tosca_definitions_version": "controller_blueprint_1_0_0", "metadata": { "template_author": "Abdelmuhaimen Seaudi", "author-email": "abdelmuhaimen.seaudi@orange.com", "user-groups": "ADMIN, OPERATION", "template_name": "test", "template_version": "1.0.0", "template_tags": "test, vDNS-CDS, SCALE-OUT, MARCO" }, "topology_template": { "workflows": { "config-deploy": { "steps": { "config-deploy": { "description": "Resource Assign and Python Netconf Activation Workflow", "target": "config-deploy-process", "activities": [ { "call_operation": "" } ] } }, "inputs": { "resolution-key": { "required": false, "type": "string" }, "service-instance-id": { "required": false, "type": "string" }, "config-deploy-properties": { "description": "Dynamic PropertyDefinition for workflow(config-deploy).", "required": true, "type": "dt-config-deploy-properties" } } } }, "node_templates": { "config-deploy-process": { "type": "dg-generic", "properties": { "content": { "get_artifact": [ "SELF", "dg-config-deploy-process" ] }, "dependency-node-templates": [ "nf-account-collection", "execute" ] }, "artifacts": { "dg-config-deploy-process": { "type": "artifact-directed-graph", "file": "Plans/CONFIG_ConfigDeploy.xml" } } }, "nf-account-collection": { "type": "component-resource-resolution", "interfaces": { "ResourceResolutionComponent": { "operations": { "process": { "inputs": { "artifact-prefix-names": [ "nf-params" ] } } } } }, "artifacts": { "nf-params-template": { "type": "artifact-template-velocity", "file": "Templates/nf-params-template.vtl" }, "nf-params-mapping": { "type": "artifact-mapping-resource", "file": "Templates/nf-params-mapping.json" } } }, "execute": { "type": "component-netconf-executor", "requirements": { "netconf-connection": { "capability": "netconf", "node": "netconf-device", "relationship": "tosca.relationships.ConnectsTo" } }, "interfaces": { "ComponentNetconfExecutor": { "operations": { "process": { "inputs": { "script-type": "jython", "script-class-reference": "Scripts/python/ConfigDeploy.py", "instance-dependencies": [ ], "dynamic-properties": "*config-deploy-properties" } } } } }, "artifacts": { "baseconfig-template": { "type": "artifact-template-velocity", "file": "Templates/baseconfig-template.vtl" }, "baseconfig-mapping": { "type": "artifact-mapping-resource", "file": "Templates/baseconfig-mapping.json" }, "incremental-config-template": { "type": "artifact-template-velocity", "file": "Templates/incremental-config-template.vtl" }, "incremental-config-mapping": { "type": "artifact-mapping-resource", "file": "Templates/incremental-config-mapping.json" } } } } } }
TBD
Introduction
The purpose of this section is to describe the integration of CDS within SDC.
What's new
At the VF and PNF level, a new artifact type, CONTROLLER_BLUEPRINT_ARCHIVE, allows the designer to load the previously designed CBA as part of the resource.
How to add the CBA in SDC VF resource (similar for PNF)
Create the VF resource
Click on Deployment Artifact, then Add other artifacts, and select your CBA.
Check the artifact is uploaded OK, and click on Certify.
Create a new service model, and add the newly created VF (including CBA artifact) to the new service model. Click on "Add Service"
Click on "Composition", and drag the VF we created from the palette on the left onto the canvas in the middle.
Then, click on "Submit for Testing".
Click on Properties Assignments, then click on the service name, e.g. "CDS-VNF-TEST" from the right bar.
Type "sdnc" in the filter box, and add the sdnc_model_name, sdnc_model_version, and sdnc_artifact_version, and click "Save".
sdnc_model_name - This is the name of the blueprint (e.g. CBA name)
sdnc_model_version - This is the version of the blueprint
sdnc_artifact_name - This is the name of the VNF resource accumulator template
Type "skip" in the filter box, and set "skip post instantiation" to FALSE, then click "Save".
Login as Tester (jm0007/demo123456!) and accept the new service.
Login as Governor (gv0001/demo123456!) and approve for distribution.
Login as Operator (op0001/demo123456!) and click on "Distribute".
Click on "Monitor" to check the progress of the distribution, and check that all ONAP components were notified, and downloaded the artifacts, and deployed OK.