Demo recording and slides are available at: 2022-02-22 DCAE Meeting Notes
Helm Flow Pre-requisite
- An accessible ChartMuseum registry (internal or external)
- As the provided registry is used both to pull required dependencies and to push newly generated charts, all common charts used by DCAE components must be available in this registry.
ONAP deployments (gating) will include Chartmuseum installation within ONAP cluster (charts hosted here - https://github.com/onap/oom/tree/master/kubernetes/platform/components/chartmuseum).
Dependent charts such as dcaegen2-services-common, readinessCheck, common, repositoryGenerator, postgres, mongo, serviceAccount, and certInitializer should be preloaded into this registry, as MOD retrieves them while creating and linting the new MS Helm charts. To support the registry initialization, the following scripts have been introduced.
- https://github.com/onap/oom/blob/master/kubernetes/contrib/tools/registry-initialize.sh
- https://github.com/onap/oom/blob/master/kubernetes/robot/demo-k8s.sh
Note: Chartmuseum is a platform component; it must be enabled on demand and is not part of a generic ONAP installation. To set up Chartmuseum and pre-load the required charts, follow the commands listed below on this page.
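Once the registry is up, the pre-load can be sanity-checked with a small loop over the dependency list above. This is a sketch; the REGISTRY URL (the in-cluster chart-museum service) is an assumption to adjust for your environment:

```shell
# Sketch: verify each required dependency chart is visible in ChartMuseum.
# REGISTRY is an assumption -- point it at your ChartMuseum service.
REGISTRY="${REGISTRY:-http://chart-museum:80}"
CHARTS="dcaegen2-services-common readinessCheck common repositoryGenerator postgres mongo serviceAccount certInitializer"
for c in $CHARTS; do
  echo "would check: ${REGISTRY}/api/charts/${c}"
  # Uncomment to query for real:
  # curl -sf "${REGISTRY}/api/charts/${c}" >/dev/null || echo "MISSING: ${c}"
done
```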
MOD Updates
To support Helm chart generation, the following changes were introduced for MOD in the Jakarta release.
Specification Schema Change
A new V3 version of the component spec schema was introduced - https://github.com/onap/dcaegen2-platform/blob/master/mod/component-json-schemas/component-specification/dcae-cli-v3/component-spec-schema.json
- Added new “helm” object under “auxilary_docker” properties
- Includes “applicationEnv”
- Includes “service” definition
- Readiness Configuration support
- docker_healthcheck_http
- Added HTTP/HTTPS for supported protocol enum list
- Added “port”
- Added “initialDelaySeconds”
- docker_healthcheck_script
- Added “initialDelaySeconds”
MOD/RuntimeAPI
Build Updates
A new Java module, Helmgenerator-core, was introduced for Helm chart generation. MOD/Runtime has been enhanced to include this new dependency (in addition to Bp-generator, which supports the Cloudify blueprint flows).
The relevant dependency declarations can be seen in - https://github.com/onap/dcaegen2-platform/blob/master/mod/runtimeapi/runtime-core/pom.xml
Chart Updates
The MOD/Runtime charts have been modified to include, under resources, the common base templates, Chart.yaml, add-on templates, and a values.yaml with placeholders.
The Helmgenerator-core module uses these templates to pull the required dependencies and generate a new chart for the onboarded MS. The parameters in the component spec provided during onboarding are used to generate the final values.yaml file.
Deployment
The MOD/RuntimeAPI introduces a new configuration property to identify the distribution mechanism. The supported artifactType values are BLUEPRINT and HELM.
Blueprint – Distribution to Inventory/Dashboard
Helm – Distribution to ChartMuseum
For the Jakarta release, the chart configuration is set to HELM distribution by default and is configured for the ONAP-internal Chartmuseum registry. See the RuntimeAPI chart updates in https://github.com/onap/oom/blob/master/kubernetes/dcaemod/components/dcaemod-runtime-api/values.yaml
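As an illustration of the idea only (the key names below are hypothetical; the authoritative names are in the linked dcaemod-runtime-api values.yaml), the distribution configuration amounts to selecting an artifactType and a target registry:

```yaml
# Hypothetical sketch -- consult the linked values.yaml for the real key names.
artifactType: HELM                           # BLUEPRINT or HELM
helmTargetRegistry: http://chart-museum:80   # hypothetical key; the ONAP-internal Chartmuseum
```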
Installation
Below is a summary of the steps involved.
- Chartmuseum Installation
- Chartmuseum initialization (pre-load required dependencies)
- Deploy MOD and define registry/target on UI
- Load v3 specs via OnboardingAPI
- Create flow on MOD Designer tool using VES and TCAgen2
- Distribution to Runtime
- Validation and Deployment
1. Chartmuseum Installation
Clone the OOM repository
cd ~/oom/kubernetes/platform/components/chartmuseum
helm install dev-chartmuseum . -n onap --set global.masterPassword=test1 --set global.pullPolicy=IfNotPresent
For easier validation of the charts in the registry, you may enable a NodePort for the chartmuseum service via kubectl (kubectl edit svc -n onap chart-museum) and provide a nodePort:
  ports:
  - name: http
    nodePort: 30192
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/instance: chartmuseum
    app.kubernetes.io/name: chartmuseum
  sessionAffinity: None
  type: NodePort
Once enabled, you can view the registry in a browser - http://<k8snodeip>:30192/api/charts
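The same listing can be fetched from the command line. A sketch, using the example node IP that appears elsewhere on this page:

```shell
# Sketch: build the NodePort URL for listing charts in the registry.
NODE_IP="${NODE_IP:-10.12.5.9}"   # example IP; use one of your K8S node IPs
NODEPORT=30192                    # the nodePort configured above
URL="http://${NODE_IP}:${NODEPORT}/api/charts"
echo "listing charts at: ${URL}"
# Uncomment to query for real (credentials come from the chartmuseum secret):
# curl -s "${URL}"
```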
2. Chartmuseum initialization
As noted earlier, two scripts are available for the pre-load. registry-initialize.sh retrieves the Chartmuseum credentials from a secret and loads charts individually based on its parameters (with no parameters, it loads all DCAE service charts and their dependencies). demo-k8s.sh is a wrapper script used in gating, which invokes registry-initialize.sh with the required parameters.
cd ~/oom/kubernetes/robot
./demo-k8s.sh onap registrySynch
OR
cd ~/oom/kubernetes/contrib/tools
./registry-initialize.sh -d ../../dcaegen2-services/charts/ -n onap -r dev-chartmuseum
./registry-initialize.sh -d ../../dcaegen2-services/charts/ -n onap -r dev-chartmuseum -p common
./registry-initialize.sh -h repositoryGenerator -n onap -r dev-chartmuseum
./registry-initialize.sh -h readinessCheck -n onap -r dev-chartmuseum
./registry-initialize.sh -h dcaegen2-services-common -n onap -r dev-chartmuseum
./registry-initialize.sh -h postgres -n onap -r dev-chartmuseum
./registry-initialize.sh -h serviceAccount -n onap -r dev-chartmuseum
./registry-initialize.sh -h certInitializer -n onap -r dev-chartmuseum
./registry-initialize.sh -h mongo -n onap -r dev-chartmuseum
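Conceptually, each of these invocations packages a chart and POSTs the resulting .tgz to ChartMuseum's /api/charts endpoint. A minimal sketch (the chart name, version, and registry URL are illustrative):

```shell
# Sketch of the push step behind registry-initialize.sh.
# REGISTRY is an assumption; chart name/version are illustrative only.
REGISTRY="${REGISTRY:-http://chart-museum:80}"
CHART="common"
VERSION="13.0.0"                  # illustrative version
PKG="${CHART}-${VERSION}.tgz"     # output of: helm package <chart dir>
# The actual upload would be:
echo "curl --data-binary @${PKG} ${REGISTRY}/api/charts"
```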
3. MOD Deployments and Configuration
The deployment of MOD has not changed from the previous release (the steps listed in DCAE MOD User Guide#1.DeploymentofDCAEMODcomponentsviaHelmcharts still apply).
Example below using generic override
helm install dev-dcaemod local/dcaemod --namespace onap -f ~/onap-override.yaml --set global.masterPassword=test1 --set global.pullPolicy=IfNotPresent
When DCAE MOD is deployed with an ingress controller, several endpoints are exposed outside the cluster at the ingress controller's external IP address and port. (In the case of a Rancher RKE installation, there is an ingress controller on every worker node, listening at the standard HTTP port (80).) These exposed endpoints are needed by users working on machines outside the Kubernetes cluster.
| Endpoint | Routes to (cluster internal address) | Description |
|---|---|---|
| /nifi | http://dcaemod-designtool:8080/nifi | Design tool Web UI |
| /nifi-api | http://dcaemod-designtool:8080/nifi-api | Design tool API |
| /nifi-jars | http://dcaemod-nifi-registry:18080/nifi-jars | Flow registry listing of JAR files built from component specs |
| /onboarding | http://dcaemod-onboarding-api:8080/onboarding | Onboarding API |
| /distributor | http://dcaemod-distributor-api:8080/distributor | Distributor API |
To access the design tool Web UI, for example, a user would use the URL http://ingress_controller_address:ingress_controller_port/nifi, where:
- ingress_controller_address is the IP address or DNS FQDN of the ingress controller, and
- ingress_controller_port is the port on which the ingress controller is listening for HTTP requests. (If the port is 80, the HTTP default, there is no need to specify a port.)
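The URL construction above can be sketched as follows; the default address is an assumption matching the virtual host used later on this page:

```shell
# Sketch: assemble the design tool UI URL from the ingress address and port.
INGRESS_ADDR="${INGRESS_ADDR:-dcaemod.simpledemo.onap.org}"  # assumption: your ingress address
INGRESS_PORT="${INGRESS_PORT:-80}"
if [ "$INGRESS_PORT" = "80" ]; then
  BASE="http://${INGRESS_ADDR}"            # port 80 (HTTP default) can be omitted
else
  BASE="http://${INGRESS_ADDR}:${INGRESS_PORT}"
fi
echo "Design tool UI: ${BASE}/nifi"
```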
There are two additional internal endpoints that users need to know, in order to configure a registry client and a distribution target in the design tool's controller settings.
| Configuration Item | Endpoint URL |
|---|---|
| Registry client | http://dcaemod-nifi-registry:18080 |
| Distribution target | http://dcaemod-runtime-api:9090 |
As the OOM ingress template was updated in the Guilin release to enable virtual hosts, MOD API and UI access via ingress should use dcaemod.api.simpledemo.onap.org
Add an entry for dcaemod.simpledemo.onap.org in /etc/hosts with the correct IP (any K8S node IP can be specified).
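For example, using 10.12.5.9 (the example node IP that appears later on this page), the /etc/hosts entry would look like:

```
# /etc/hosts -- replace 10.12.5.9 with one of your K8S node IPs
10.12.5.9   dcaemod.simpledemo.onap.org
```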
Configuring DCAE MOD
Note: The IP should be changed to one of your K8S Node ip or the DNS FQDN
Now let’s access the Nifi (DCAE designer) UI - http://dcaemod.simpledemo.onap.org/nifi/
a) Configure Nifi Registry url
Next, check the Nifi settings by selecting the hamburger button in the Nifi UI. It should lead you to the Nifi Settings screen.
Add a registry client. The Registry client url will be http://dcaemod-nifi-registry:18080
b) Add a distribution target, which will be the runtime API URL
Set the distribution target in the controller settings
Distribution target URL will be http://dcaemod-runtime-api:9090
4. Load V3 specs (and data-formats) via Onboarding API
VESCollector
VES specification - https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/spec/vescollector-componentspec-v3.json
Data Formats - https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/data-formats
For the purpose of onboarding, a sample request body should be of the form -

{
  "owner": "<some value>",
  "spec": <some json object>
}

where the JSON object inside the spec field can be a component spec JSON.
Request bodies of this type will be used in the onboarding requests you make using curl or the onboarding swagger interface.
The prepared sample request body for the component dcae-ves-collector looks like this -
The prepared sample request body for a sample data format looks like this -
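Since the attached sample bodies are not reproduced here, the following is an illustrative (entirely made-up) data-format request body following the { owner, spec } shape described above; all names and versions are hypothetical:

```json
{
  "owner": "demo-user",
  "spec": {
    "self": {
      "name": "sample-dataformat",
      "version": "1.0.0",
      "description": "Illustrative data format (hypothetical)"
    },
    "dataformatversion": "1.0.1",
    "jsonschema": {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "type": "object",
      "properties": { "message": { "type": "string" } }
    }
  }
}
```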
TCAGen2
TCA specification - https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/tcagen2-componentspec-v3.json
Data Formats - https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/dcaeCLOutput.json, https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/dmaap.json, https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/aai.json
For the purpose of onboarding, a sample request body should be of the form -

{
  "owner": "<some value>",
  "spec": <some json object>
}

where the JSON object inside the spec field can be a component spec JSON.
Request bodies of this type will be used in the onboarding requests you make using curl or the onboarding swagger interface.
The prepared sample request body for the component dcae-tcagen2 looks like this -
The prepared sample request body for a sample data format looks like this -
Onboard a data format and a component
Each component has a description that tells what it does.
These requests would be of the type-
curl -X POST http://<onboardingapi host>/onboarding/dataformats -H "Content-Type: application/json" -d @<filepath to request>
curl -X POST http://<onboardingapi host>/onboarding/components -H "Content-Type: application/json" -d @<filepath to request>
In our case,
curl -X POST http://dcaemod.simpledemo.onap.org/onboarding/dataformats -H "Content-Type: application/json" -d @<filepath to request>
curl -X POST http://dcaemod.simpledemo.onap.org/onboarding/components -H "Content-Type: application/json" -d @<filepath to request>
HOST=dcaemod.simpledemo.onap.org
curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @ves-4.27.2-df.json
curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @ves-5.28.4-df.json
curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @ves-response-df.json
curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @VES-7.30.2_ONAP-dataformat_onboard.json
curl -X POST http://$HOST/onboarding/components -H "Content-Type: application/json" -d @vescollector-componentspec-v3-mod.json
curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @dcaeCLOutput-resp.json
curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @aai-resp.json
curl -X POST http://$HOST/onboarding/components -H "Content-Type: application/json" -d @tcagen2-componentspec-v3-mod.json
You can download the Components and Data Formats for the demo here - demo.zip
Verify the resources were created using
curl -X GET http://dcaemod.simpledemo.onap.org/onboarding/dataformats
curl -X GET http://dcaemod.simpledemo.onap.org/onboarding/components
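The returned JSON can be checked quickly without extra tooling; the response below is illustrative only, not the real API output:

```shell
# Illustrative response body; the real /onboarding/components output will differ.
RESPONSE='{"components":[{"name":"dcae-ves-collector"},{"name":"dcae-tcagen2"}]}'
# Extract component names without jq:
NAMES=$(echo "$RESPONSE" | grep -o '"name":"[^"]*"' | cut -d'"' -f4)
echo "$NAMES"
```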
Verify that the genprocessor (which polls onboarding periodically to convert component specs to Nifi processors) has converted the component.
Open http://dcaemod.simpledemo.onap.org/nifi-jars/ in a browser.
These jars should now be available for you to use in the nifi UI as processors.
5. Create a flow design using VES and TCAgen2
This step is the same as captured here - DCAE MOD User Guide#3.Design&DistributionFlow
a) To start creating flows, we first need to create a process group. The name of the process group will be the name of the flow. Drag and drop the 'Process Group' icon from the DCAE Designer bar at the top onto the canvas.
Now enter the process group by double-clicking it.
You can now drag and drop the 'Processor' icon from the top DCAE Designer tab onto the canvas. You can search for a particular component in the search box that appears when you drag the 'Processor' icon to the canvas.
If the Nifi registry linking worked, you should see the “Import” button when you try to add a Processor or Process group to the Nifi canvas, like so-
By clicking the Import button, we can import previously created, saved, and version-controlled flows from the Nifi registry, if they are present.
We can save created flows by version-controlling them, starting with a right click anywhere on the canvas -
Ideally you would name the flow and process group the same, because functionally they are similar.
When the flow is checked in, the bar at the bottom shows a green checkmark
Note: Moving a component around on the canvas changes its position, which is recognized as a change, so the flow will have to be recommitted.
b) Adding components and building the flow
You can add additional components in your flow and connect them.
DcaeVesCollector connects to DockerTcagen2.
Along the way, you also need to provide topic names in the Settings section. These can be arbitrary names.
To recap, see how DcaeVesCollector connects to DockerTcagen2, and look at the connection relationships. Currently there is no way to validate these relationships. Note that you must name the topics by going to Settings.
The complete flow after joining our components looks like so
6. Distribute the flow to RuntimeAPI
Once your flow is complete and saved in the Nifi registry, you can choose to submit it for distribution.
If the flow was submitted successfully to the runtime API, you should get a pop-up success message like this -
At this step, the design is packaged and sent to the Runtime API.
The runtime then generates the Helm charts for the components involved in the flow and pushes them to the configured registry. The RuntimeAPI logs should look like the following for a successful distribution.
7. Validation and Deployment
Charts distributed by MOD/Runtime can be verified in the Chartmuseum registry (http://<K8SNodeIp>:30192/api/charts).
For demo purposes, the charts are pulled from this registry using the commands below, followed by deployment:
curl -X GET http://10.12.5.9:30192/charts/dcae-ves-collector-1.10.1.tgz -u onapinitializer:demo123456! -o dcae-ves-collector-1.10.1.tgz
curl -X GET http://10.12.5.9:30192/charts/dcae-tcagen2-1.3.1.tgz -u onapinitializer:demo123456! -o dcae-tcagen2-1.3.1.tgz
helm install dev-dcaegen2-services dcae-tcagen2-1.3.1.tgz -n onap --set global.masterPassword=test1 --set global.pullPolicy=Always --set mongo.enabled=true
8. Environment Cleanup
# To remove the Chartmuseum setup completely
helm delete -n onap dev-chartmuseum
# To remove the TCAGen2 services
helm delete -n onap dev-dcaegen2-services
# To undeploy DCAEMOD
helm delete -n onap dev-dcaemod
# Use the DELETE method on Chartmuseum to remove any specific chart package - example below
curl -X DELETE http://10.12.5.9:30192/api/charts/dcae-ves-collector/1.10.1 -u onapinitializer:demo123456!
curl -X DELETE http://10.12.5.9:30192/api/charts/dcae-tcagen2/1.3.1 -u onapinitializer:demo123456!
Also remove any persistence directories under /dockerdata-nfs/onap/ associated with chartmuseum and dcaemod.