Overview
DataLake is a software component of ONAP that can systematically persist the events in DMaaP into supported Big Data storage systems. It has an Admin UI, where a system administrator configures which Topics are to be monitored and which storage systems the data should be written to; the UI is also used to manage the settings of the storage and the associated data analytics tools. The second part is the Feeder, which does the data transfer work and is horizontally scalable. In the next release, R7, we will add the third component, the Data Exposure Service (DES), which will expose the data in the storage systems via REST APIs for other ONAP components and external systems to consume. Each data exposure only requires simple configuration.
Architecture Diagram
Data Exposure Service will be available in R7.
Artifacts
Blueprint (deployment artifact):
Input file (deployment input):
Docker image: nexus3.onap.org:10001/onap/<>
Deployment Prerequisite/dependencies
In R6, the following storage systems are supported:
MongoDB
Couchbase
Elasticsearch and Kibana
HDFS
To use DataLake, at least one of these systems must be ready. Once DataLake is deployed, you can configure Topics and storage systems in the DataLake Admin UI.
Deployment Steps
Deployment of dl-handler can be done via the Dashboard UI, the Cloudify UI, or the CLI. The steps below use the CLI.
- Transfer the blueprint component file to the DCAE bootstrap POD, under the /blueprints directory
- Transfer the blueprint component inputs file to the DCAE bootstrap POD, under the / directory
- Log in to the DCAE bootstrap POD's main container
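The transfer and login steps above can be sketched as a small script. The bootstrap pod name below is a placeholder (look up the real one with kubectl get pods -n onap), and DRY_RUN=1 (the default here) prints the commands instead of executing them against a cluster:

```shell
#!/bin/sh
# Sketch of the transfer/login steps, assuming kubectl access to the cluster.
# The pod name is a placeholder; find the real one with:
#   kubectl get pods -n onap | grep dcae-bootstrap
# DRY_RUN=1 (default) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

NAMESPACE=onap
POD=dev-dcae-bootstrap-0   # hypothetical pod name

# Copy the blueprint into /blueprints and the inputs file into /
run kubectl cp k8s-dl-handler.yaml "$NAMESPACE/$POD:/blueprints/"
run kubectl cp k8s-dl-handler-inputs.yaml "$NAMESPACE/$POD:/"
# Open a shell in the bootstrap POD's main container
run kubectl exec -it -n "$NAMESPACE" "$POD" -- /bin/bash
```

Set DRY_RUN=0 to actually execute the commands once the pod name has been filled in.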
Validate the blueprint:
cfy blueprints validate /blueprints/k8s-dl-handler.yaml
Verify that the plugin versions in the target Cloudify instance match the blueprint imports:
cfy plugins list
If the plugin versions differ, update the blueprint imports to match.
Deploy Service
Upload and deploy the blueprint:
cfy install -b dl-handler -d dl-handler -i /k8s-dl-handler-inputs.yaml /blueprints/k8s-dl-handler.yaml
To un-deploy
Uninstall the running component and delete the deployment:
cfy uninstall dl-handler
Delete the blueprint:
cfy blueprints delete dl-handler
Initial Validation
After deployment, verify that the dl-handler and MongoDB pods are running correctly:
root@k8s-rancher:~# kubectl get pods -n onap | egrep "dl-handler"
Then check the logs to verify that the service connects to DMaaP and polls for events.
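One way to sketch the log check (the pod name is a placeholder taken from the listing above, and DRY_RUN=1, the default here, prints the command instead of running it):

```shell
#!/bin/sh
# Sketch of a log check, assuming kubectl access to the cluster.
# DRY_RUN=1 (default) prints the command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

# Placeholder pod name; list the real one with:
#   kubectl get pods -n onap | egrep "dl-handler"
POD=dep-xxxx-dcae-dl-handler-xxxx

run kubectl logs -n onap "$POD" --tail=200
# When run for real, search the output for DMaaP connection / polling lines;
# the keywords are assumptions about the log content, not exact strings:
#   kubectl logs -n onap "$POD" | grep -i -e dmaap -e poll
```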
Functional tests
The following default configuration is loaded into dl-handler (set in the blueprint configuration):
<Add below steps to configure DL-Handler to subscribe and feed into external DL with step-by-step procedure>
Dynamic Configuration Update
Since the dl-handler service periodically polls the Consul KV store through the Config Binding Service APIs, its runtime configuration can be updated dynamically without redeploying or restarting the service. Configuration updates can be triggered from Policy (or CLAMP) or made directly in Consul.
Locate the service name by exec'ing into the dl-handler service pod and reading the HOSTNAME environment variable:
root@k8s-rancher:~# kubectl exec -it -n onap dep-s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-servithzx2 /bin/bash
Defaulting container name to s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-service.
Use 'kubectl describe pod/dep-s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-servithzx2 -n onap' to see all of the containers in this pod.
misshtbt@s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-service:~/bin$ env | grep HOSTNAME
HOSTNAME=s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-service
Change the service's configuration in the KV store through the Consul UI and verify that the updates are picked up:
http://<k8snodeip>:30270/ui/#/dc1/kv/
Consul Snapshot <>
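Besides the UI, the configuration can be read and updated through Consul's KV HTTP API, which is served on the same port as the UI above. A sketch, where the node IP is a placeholder and the service name is the HOSTNAME value found earlier:

```shell
#!/bin/sh
# Sketch of reading/updating the dl-handler configuration via Consul's
# KV HTTP API. The node IP is a placeholder; the service name is the
# HOSTNAME value obtained from the pod in the step above.
K8S_NODE_IP=10.12.5.2   # placeholder; substitute your k8s node IP
SERVICE_NAME=s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-service

KV_URL="http://${K8S_NODE_IP}:30270/v1/kv/${SERVICE_NAME}"
echo "$KV_URL"

# Read the current configuration (Consul returns the value base64-encoded):
#   curl -s "$KV_URL"
# Write an updated configuration; dl-handler picks it up on its next poll:
#   curl -s -X PUT --data-binary @updated-config.json "$KV_URL"
```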