...
Data Exposure Service will be available in R7.
...
Artefacts
Blueprint (deployment artifact): k8s-datalake-feeder.yaml, k8s-datalake-admin-ui.yaml
Input file (deployment input): k8s-datalake-helm-input.yaml
Docker image: nexus3.onap.org:10001/onap/<>
Deployment Prerequisites/Dependencies
In R6, the following storage systems are supported:
MongoDB
Couchbase
Elasticsearch and Kibana
HDFS
To use DataLake, you need to have at least one of these systems ready. Once DataLake is deployed, you can configure topics and storage in the DataLake Admin UI.
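If none of these systems is available yet, one quick way to stand one up for testing is shown below. This is a minimal sketch assuming Helm 3 and the public Bitnami MongoDB chart; neither is part of the official ONAP deployment.

```bash
# Minimal sketch: bring up a standalone MongoDB that DataLake can write to.
# Assumes Helm 3 and the public Bitnami chart repository.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install dl-mongodb bitnami/mongodb --namespace onap
```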
Deployment Steps
Build helm repository
...
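As a generic sketch of this step (the chart directory, repository host, and port below are placeholder assumptions, not ONAP-specific values), building and serving a Helm chart repository can look like this:

```bash
# Generic sketch of building and serving a Helm chart repository.
# <chart directory>, <repo-host>, and port 8879 are placeholder assumptions.
helm package <chart directory>        # produces <chart>-<version>.tgz
helm repo index . --url http://<repo-host>:8879/charts
python3 -m http.server 8879           # serve index.yaml and the chart packages
```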
DataLake can persist messages from DMaaP to several different external databases, such as Elasticsearch, Couchbase, MongoDB, and relational databases. Once DataLake is successfully deployed, you can deploy the external databases and configure the sink databases through the admin UI. The following sections guide you through deploying the DataLake microservice, including Cloudify blueprint upload, deployment, and un-deployment.
Deployment Steps
DL-handler consists of two pods: the feeder and the admin UI. It can be deployed using Cloudify.
...
Transfer the blueprint component input file to the DCAE bootstrap POD under the /blueprint directory
Next, the Cloudify input file for DataLake needs to be placed in the bootstrap pod. The input file can be found in the ONAP git repository. Once you have cloned the repository, copy the input file to the DCAE bootstrap pod with the following command.
```bash
kubectl cp <source directory>/components/datalake-handler/dpo/blueprint/k8s-datalake-helm-input.yaml <DCAE bootstrap pod>:/blueprint -n onap
```
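To confirm that the file landed in the pod, you can list the target directory; the path follows the copy command above.

```bash
# Optional check: the input file should now be visible inside the bootstrap pod.
kubectl exec -n onap <DCAE bootstrap pod> -- ls /blueprint
```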
DataLake can be easily deployed through the DCAE Cloudify manager. The following steps guide you through launching DataLake via the Cloudify manager.
Log in to the DCAE bootstrap POD's main container
Log in to the DCAE bootstrap pod with the following command.
```bash
kubectl exec -it <DCAE bootstrap pod> -n onap -- /bin/bash
```
...
```bash
cfy blueprint upload -b datalake-feeder /blueprints/k8s-datalake-feeder.yaml
cfy blueprint upload -b datalake-admin-ui /blueprints/k8s-datalake-admin-ui.yaml
```
Verify Blueprint Upload
```bash
cfy blueprint list
```
The returned list shows that the blueprints have been uploaded correctly.
Verify that the plugin versions in the target Cloudify instance match the blueprint imports
If the version of the plugin used is different, update the blueprint import to match.
```bash
cfy plugins list
```
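As an example of the comparison, list the installed plugins and inspect the imports section of the uploaded blueprint; the grep patterns below are illustrative assumptions, not fixed names.

```bash
# Illustrative check: compare installed plugin versions with the versions
# referenced in the blueprint imports. Patterns are assumptions.
cfy plugins list | grep -i k8s
grep -n "plugin" /blueprints/k8s-datalake-feeder.yaml
```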
Customization of Blueprint Input File
...
Create Deployment
Here we create deployments for both the feeder and the admin UI, using the input values from the customized input file shown below.
```yaml
tiller-server-ip: <YOUR_CLUSTER_IP>
tiller-server-port: <TILLER_EXPOSED_PORT>
namespace: onap
chart-repo-url: <YOUR_HELM_REPO>
stable-repo-url: <YOUR_STABLE_HELM_REPO>
chart-version: 1.0.0
component-name: dcae-datalake
```
Deploy Service
```bash
cfy deployments create -b datalake-feeder feeder-deploy
cfy deployments create -b datalake-admin-ui admin-ui-deploy
```
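If the customized input values are not baked into the blueprint defaults, they can also be passed explicitly when creating the deployments via the standard -i flag; the input-file path assumes the copy step earlier in this guide.

```bash
# Variant: supply the customized inputs file at deployment-creation time.
cfy deployments create -b datalake-feeder -i /blueprints/k8s-datalake-helm-input.yaml feeder-deploy
cfy deployments create -b datalake-admin-ui -i /blueprints/k8s-datalake-helm-input.yaml admin-ui-deploy
```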
Launch Service
Next, we launch DataLake by running the install workflow on both deployments.
```bash
cfy executions start -d feeder-deploy install
cfy executions start -d admin-ui-deploy install
```
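To confirm that both install workflows completed successfully, you can list the executions for each deployment.

```bash
# Optional check: verify the install executions completed without errors.
cfy executions list -d feeder-deploy
cfy executions list -d admin-ui-deploy
```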
To Un-deploy
Uninstall the running components and delete the deployments
```bash
cfy uninstall feeder-deploy
cfy uninstall admin-ui-deploy
```
Delete blueprint
```bash
cfy blueprints delete datalake-feeder
cfy blueprints delete datalake-admin-ui
```
Initial Validation
After deployment, verify that the dl-handler and MongoDB pods are running correctly
```bash
root@k8s-rancher:~# kubectl get pods -n onap | egrep "dl-handler"
```
Then check the logs to see whether the service can connect to DMaaP and poll for events.
...
(Code block: Verify Logs for Dmaap poll)
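A minimal way to follow the feeder logs and look for DMaaP activity is sketched below; the "dl-handler" pod-name pattern and the "dmaap" log keyword are assumptions about the installation.

```bash
# Sketch: read the dl-handler feeder logs and filter for DMaaP-related lines.
# The pod-name pattern and the "dmaap" keyword are assumptions.
FEEDER_POD=$(kubectl get pods -n onap | grep dl-handler | awk '{print $1}' | head -1)
kubectl logs -n onap "$FEEDER_POD" | grep -i dmaap
```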
Functional tests
The following default configuration is loaded into dl-handler (set in the blueprint configuration):
...
<Add below steps to configure DL-Handler to subscribe and feed into external DL with step-by-step procedure>
Dynamic Configuration Update
Because the dl-handler service periodically polls the Consul KV store through the Config Binding Service APIs, its runtime configuration can be updated dynamically without redeploying or restarting the service. Configuration updates can be triggered either from Policy (or CLAMP) or made directly in Consul.
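For example, the current configuration can be read directly from Consul's KV HTTP API; the node IP and port below follow the Consul UI URL used later in this section, and the service name is obtained in the next step.

```bash
# Read the service's current configuration from the Consul KV store.
# <k8snodeip> and <service name> are environment-specific placeholders.
curl "http://<k8snodeip>:30270/v1/kv/<service name>?raw"
```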
Locate the service name by exec-ing into the dl-handler service pod and reading the HOSTNAME environment variable
```bash
root@k8s-rancher:~# kubectl exec -it -n onap dep-s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-servithzx2 /bin/bash
Defaulting container name to s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-service.
Use 'kubectl describe pod/dep-s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-servithzx2 -n onap' to see all of the containers in this pod.
misshtbt@s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-service:~/bin$ env | grep HOSTNAME
HOSTNAME=s78f36f2daf0843518f2e25184769eb8b-dcae-dl-handler-service
```
Change the configuration for the service in the KV store through the Consul UI and verify that the updates are picked up
```
http://<k8snodeip>:30270/ui/#/dc1/kv/
```
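As an alternative to the UI, the same key can be written through Consul's KV HTTP API; the JSON file name below is a placeholder.

```bash
# Update the configuration key; the service picks up the change on its next
# Config Binding Service poll.
curl -X PUT --data @updated-config.json "http://<k8snodeip>:30270/v1/kv/<service name>"
```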
...