Types of Users and Usage Instructions:
Short Video Series available at: https://www.youtube.com/playlist?list=PLj-oRfbkqkfnN_2vnfhivCesJ118SA_zG
Demo day demonstration recording and slides available at: https://lf-onap.atlassian.net/wiki/display/DW/2020-04-16+DCAE+Demo
Sr. No | User | Usage Instructions |
1. | Developers who are looking to onboard their microservice (mS) | · Access the Nifi Web UI URL provided to you. · Follow steps 2.b to 2.d. · You should be able to see your microservice in the Nifi Web UI by clicking and dragging 'Processor' onto the canvas and searching for the name of the microservice/component/processor. |
2. | Designers who are building flows through the UI and triggering distribution | · Access the Nifi Web UI URL provided to you. · Follow step 3 to the end of the document. |
3. | Infrastructure/Admins who want to stand up DCAE MOD and validate it | · Follow the document from start to end. |
1. Deployment of DCAE MOD components via Helm charts
The DCAE MOD components are deployed using the standard ONAP OOM deployment process. When deploying ONAP using the helm deploy command, DCAE MOD components are deployed when the dcaemod.enabled flag is set to true, either via a --set option on the command line or by an entry in an overrides file. In this respect, DCAE MOD is no different from any other ONAP subsystem.
The default DCAE MOD deployment relies on an nginx ingress controller being available in the Kubernetes cluster where DCAE MOD is being deployed. The Rancher RKE installation process sets up a suitable ingress controller. In order to enable the use of the ingress controller, it is necessary to override the OOM default global settings for ingress configuration. Specifically, the installation needs to set the following configuration in an override file:
#Global ingress configuration
ingress:
  enabled: true
  virtualhost:
    baseurl: "simpledemo.onap.org"
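For example, a deployment that enables DCAE MOD and applies the ingress override might look like the following. This is a sketch only; the release name, namespace, and override file name (dcae-mod-overrides.yaml) are illustrative, not mandated.

# Deploy (or upgrade) ONAP with DCAE MOD enabled and the ingress override applied.
helm deploy onap local/onap --namespace onap \
  --set dcaemod.enabled=true \
  -f dcae-mod-overrides.yaml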
When DCAE MOD is deployed with an ingress controller, several endpoints are exposed outside the cluster at the ingress controller's external IP address and port. (In the case of a Rancher RKE installation, there is an ingress controller on every worker node, listening at the standard HTTP port (80).) These exposed endpoints are needed by users using machines outside the Kubernetes cluster.
Endpoint | Routes to (cluster internal address) | Description |
/nifi | http://dcaemod-designtool:8080/nifi | Design tool Web UI |
/nifi-api | http://dcaemod-designtool:8080/nifi-api | Design tool API |
/nifi-jars | http://dcaemod-nifi-registry:18080/nifi-jars | Flow registry listing of JAR files built from component specs |
/onboarding | http://dcaemod-onboarding-api:8080/onboarding | Onboarding API |
/distributor | http://dcaemod-distributor-api:8080/distributor | Distributor API |
To access the design Web UI, for example, a user would use the URL http://ingress_controller_address:ingress_controller_port/nifi, where
ingress_controller_address is the IP address or DNS FQDN of the ingress controller and
ingress_controller_port is the port on which the ingress controller is listening for HTTP requests. (If the port is 80, the HTTP default, then there is no need to specify a port.)
There are two additional internal endpoints that users need to know, in order to configure a registry client and a distribution target in the design tool's controller settings.
Configuration Item | Endpoint URL |
Registry client | http://dcaemod-nifi-registry:18080 |
Distribution target | http://dcaemod-runtime-api:9090 |
Because the OOM ingress template was updated in the Guilin release to enable virtual hosts, MOD API and UI access via ingress should use dcaemod.simpledemo.onap.org.
Add an entry for dcaemod.simpledemo.onap.org to /etc/hosts with the correct IP address (any of the Kubernetes node IPs can be specified).
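For example (a sketch; 10.12.7.116 is the demo node IP used later in this document and is only a placeholder here):

# Append the ingress hostname to /etc/hosts on the machine running the browser or curl.
echo "10.12.7.116 dcaemod.simpledemo.onap.org" | sudo tee -a /etc/hosts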
Using DCAE MOD without an Ingress Controller
Not currently supported.
2. Configuring DCAE MOD
Our demo is hosted on 10.12.7.116, so the IP address used for the purpose of this demo is 10.12.7.116. In other deployments, use the IP address, or the DNS FQDN if there is one, of one of the Kubernetes nodes.
The Nifi Registry UI (http://<hostname>/nifi-registry) should now load, provided the Nifi Registry container shows no errors in its logs (in a docker-compose setup, check with docker logs modstandup_nifi-registry_1) and it is able to find the PostgreSQL database (which runs as a separate process).
In earlier setups, at this point you would create a new bucket using the wrench tool in the UI (named, say, 'new-bucket'). Creating a bucket is no longer needed: the Helm installation automatically creates a bucket.
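If you want to confirm that the bucket exists, one option is to query the NiFi Registry REST API from a machine with kubectl access (a sketch; the onap namespace is an assumption, and the service name comes from the table above):

# Port-forward the internal NiFi Registry service and list its buckets.
kubectl -n onap port-forward svc/dcaemod-nifi-registry 18080:18080 &
curl -s http://localhost:18080/nifi-registry-api/buckets | jq .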
Now let's access the Nifi (DCAE designer) UI at http://<hostname>/nifi, which in our deployment is http://dcaemod.simpledemo.onap.org/nifi/.
a) Get the artifacts to test and onboard.
Sample Component : model-b1
{
"spec": {
"parameters": [],
"artifacts": [
{
"type": "docker image",
"uri": "tlab-nexus.research.att.com:18444/model-b1:1"
}
],
"self": {
"version": "1.0.0",
"name": "model-b1",
"component_type": "docker",
"description": "Automatically generated from Acumos model"
},
"streams": {
"publishes": [
{
"type": "message_router",
"version": "1.0.0",
"config_key": "predict_publisher",
"format": "OutputFormat"
}
],
"subscribes": [
{
"type": "message_router",
"version": "1.0.0",
"config_key": "predict_subscriber",
"format": "PredictIn"
}
]
},
"auxilary": {
"healthcheck": {
"endpoint": "/healthcheck",
"type": "http"
}
},
"services": {
"provides": [],
"calls": []
}
},
"owner": "aoadapter"
}
Sample data format:
...
Let's fetch the artifacts / spec files:
A sample Component DCAE-VES-Collector : https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/spec/vescollector-componentspec.json
A sample Data Format : https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/data-formats/VES-5.28.4-dataformat.json
For the purpose of onboarding, a Sample Request body should be of the type -
{ "owner": "<some value>", "spec": <some json object> }
where the json object inside the spec field can be a component spec json.
Request bodies of this type will be used in the onboarding requests you make using curl or the onboarding swagger interface.
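For example, if you have saved a component spec locally, one way to produce such a request body is the following sketch (it assumes jq is installed; the owner value and file names are illustrative):

# Wrap a locally saved component spec into an onboarding request body.
jq -n --slurpfile spec vescollector-componentspec.json \
   '{owner: "demo_user", spec: $spec[0]}' > component-request-ves.json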
The prepared sample request body for the dcae-ves-collector component wraps the component spec linked above in this envelope, with the owner field filled in.
The prepared sample request body for a data format wraps the data format JSON in the same way.
b) To onboard a data format and a component
Each component has a description that tells what it does.
These requests would be of the type-
curl -X POST http://<onboardingapi host>/onboarding/dataformats -H "Content-Type: application/json" -d @<filepath to data format request>
curl -X POST http://<onboardingapi host>/onboarding/components -H "Content-Type: application/json" -d @<filepath to component request>
In our case,
curl -X POST http://dcaemod.simpledemo.onap.org/onboarding/dataformats -H "Content-Type: application/json" -d @<filepath to data format request>
curl -X POST http://dcaemod.simpledemo.onap.org/onboarding/components -H "Content-Type: application/json" -d @<filepath to component request>
In a Helm deployment the onboarding API host is not localhost; it is the ingress hostname (dcaemod.simpledemo.onap.org) or the IP address (or DNS FQDN, if there is one) of one of the Kubernetes nodes.
You can download the Components and Data Formats for the demo from:
Components:
https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/spec/vescollector-componentspec.json
https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/tcagen2_spec.json
Corresponding Data Formats:
https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/data-formats
https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/
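To download the raw files rather than the HTML views linked above, you can use cgit's plain URLs (a sketch; substitute /tree/ with /plain/ in each link and adjust paths as needed):

# Fetch raw spec files directly from the ONAP git (cgit serves raw content under /plain/).
wget https://git.onap.org/dcaegen2/collectors/ves/plain/dpo/spec/vescollector-componentspec.json
wget https://git.onap.org/dcaegen2/analytics/tca-gen2/plain/dcae-analytics/dpo/tcagen2_spec.json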
c) Verify the resources were created using
curl -X GET http://dcaemod.simpledemo.onap.org/onboarding/dataformats
curl -X GET http://dcaemod.simpledemo.onap.org/onboarding/components
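If jq is installed, the same checks are easier to read when pretty-printed, for example:

# Pretty-print the list of onboarded components.
curl -s http://dcaemod.simpledemo.onap.org/onboarding/components | jq . | less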
d) Verify that the genprocessor (which periodically polls the onboarding API to convert component specs into Nifi processors) converted the component
Open http://dcaemod.simpledemo.onap.org/nifi-jars in a browser.
These jars should now be available for you to use in the Nifi UI as processors.
In a Helm deployment, the host is the IP address (and port, if it's not the default port 80) of a Kubernetes node, or the ingress hostname dcaemod.simpledemo.onap.org.
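The same listing can also be fetched from the command line, for example:

# List the generated processor JARs served by the nifi-jars endpoint.
curl -s http://dcaemod.simpledemo.onap.org/nifi-jars/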
3. Design & Distribution Flow
a) Configure Nifi Registry url
Next check Nifi settings by selecting the Hamburger button in the Nifi UI. It should lead you to the Nifi Settings screen
Add a registry client. The Registry client url will be http://dcaemod-nifi-registry:18080
b) Add distribution target which will be the runtime api url
Set the distribution target in the controller settings
Distribution target URL will be http://dcaemod-runtime-api:9090
c) To start creating flows, we need to create a process group first. The name of the process group will be the name of the flow.
To create the process group, drag and drop the 'Process Group' icon from the DCAE Designer bar at the top onto the canvas.
Now enter the process group by double clicking it, and drag and drop the components we need into it.
You can drag and drop the 'Processor' icon from the top DCAE Designer bar onto the canvas. You can search for a particular component in the search box that appears when you drag the 'Processor' icon to the canvas.
If the Nifi registry linking worked, you should see the "Import" button when you try to add a Processor or Process Group to the Nifi canvas, like so-
By clicking the Import button, we can import already created, saved, and version-controlled flows from the Nifi registry, if they are present.
We can save created flows by version controlling them, starting with a right click anywhere on the canvas-
Ideally you would give the flow and the process group the same name, because functionally they are similar.
When the flow is checked in, the bar at the bottom shows a green checkmark.
Note: Even moving a component around on the canvas, so that its position changes, is recognized as a change, and the flow will have to be recommitted.
d) Adding components and building the flow
You can add additional components in your flow and connect them.
In this design, DcaeVesCollector connects to DockerTcagen2.
Along the way, you also need to provide topic names in the Settings section of each connection. These can be arbitrary names.
To recap, see how DcaeVesCollector connects to DockerTcagen2. Look at the connection relationships. Currently there is no way to validate these relationships. Notice how it is required to name the topics by going to Settings.
The complete flow after joining our components looks like so
e) Submit/Distribute the flow:
Once your flow is complete and saved in the Nifi registry, you can choose to submit it for distribution.
If the flow was submitted successfully to the runtime API, you should get a pop-up success message like so -
At this step, the design is packaged and sent to the Runtime API.
The runtime API then generates a blueprint from the packaged design/flow and pushes it to the DCAE inventory and the DCAE Dashboard.
f) Checking the components in the DCAE inventory
In a Helm deployment, the DCAE inventory is not exposed outside the cluster; use the DCAE Dashboard instead, which allows viewing the blueprints stored in inventory.
You should see the generated artifact/blueprint in the DCAE Dashboard, available in our deployment at https://<k8s node IP>:30418/ccsdk-app/login_external.htm.
If you want to view all versions of the components
curl -X GET -k "https://135.207.216.182:32346/dcae-service-types?onlyLatest=false" | jq . | less
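To list just the service type names, you can filter the response with jq. This is a sketch and assumes the inventory response wraps results in an items array whose entries have a typeName field; adjust the filter if your inventory version differs:

# Show only the names of the blueprints/service types registered in inventory.
curl -sk "https://135.207.216.182:32346/dcae-service-types?onlyLatest=false" | jq '.items[].typeName'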
The name of each generated blueprint is the flow name, followed by an underscore, followed by the component's name.
These blueprints can also be seen in the DCAE Dashboard.
The credentials to access the DCAE Dashboard are
Login: su1234
Password: fusion
The generated Blueprint can be viewed.
Finally, the generated Blueprint can be deployed.
You can use/import the attached input configuration files to deploy. Drag and drop these sample JSON files to fill in the configuration values.
NOTE 1: Increase memory limit to 512Mi
NOTE 2: Verify image URL