This page uses 2 VMs in the China Mobile lab as an example to demonstrate how to set up a DataLake development environment.

We have 2 VMs:

VM1: 172.30.1.74

VM2: 172.30.1.75

We install most components on VM1. Due to port conflicts, we install Druid and Superset on VM2.

  1. Setup host names on both VMs and your local PC
    On both VMs and your local PC, sudo vi /etc/hosts and add these lines. (On Windows, the file is C:\Windows\System32\drivers\etc\hosts.)

    172.30.1.74 message-router-zookeeper message-router-kafka dl-couchbase dl-mariadb dl-mongodb dl-es dl-hdfs dl-feeder dl-adminui
    172.30.1.75 dl-druid dl-superset
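    To confirm that the names resolve from each machine, a quick sanity check (a sketch; any of the host names above will do) is:

    ping -c 1 dl-feeder
    ping -c 1 dl-druid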

  2. Install JDK 8 and Docker on both VMs and local
    sudo apt install openjdk-8-jdk-headless
    Docker install document: https://docs.docker.com/install/linux/docker-ce/ubuntu/
    I install Docker on a Linux VM running in my local Windows.

    Install Docker Compose: https://docs.docker.com/compose/install/ 
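    To verify the installs, the version checks below should all succeed:

    java -version
    docker --version
    docker-compose --version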

  3. Setup ONAP development environment
    (Ref Setting Up Your Development Environment)
    On your local PC,

    cd ~/.m2 (On Windows, it is C:\Users\your_name\.m2)
    mv settings.xml settings.xml-old
    wget https://raw.githubusercontent.com/onap/oparent/master/settings.xml

  4. Check out source code
    On both VMs and your local PC, check out the DataLake source code from https://gerrit.onap.org/r/#/admin/projects/dcaegen2/services to C:\git\onap\dcaegen2\services2 or ~/git/onap/dcaegen2/services2. Currently DataLake Feeder is hosted in the ONAP repo as a DCAE component handler.
    If you already checked out the source code before, you may want to sync to the latest again.

  5. Setup MariaDB
    (Ref https://mariadb.com/kb/en/library/installing-and-using-mariadb-via-docker/)
    On VM1,
    sudo docker run -p 3306:3306 --name mariadb -e MYSQL_ROOT_PASSWORD=mypass -d mariadb/server:10.3


    Connect to the database as root with the password above, then run

    GRANT ALL PRIVILEGES ON *.* TO dl@"%" IDENTIFIED BY 'dl1234' WITH GRANT OPTION;

    and the scripts in these files:
    C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\assembly\scripts\init_db.sql
    C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\assembly\scripts\init_db_data.sql
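    One possible way to run the grant and the scripts, assuming the code is also checked out under ~/git on VM1 and using the mysql client bundled in the container:

    sudo docker exec -i mariadb mysql -uroot -pmypass -e "GRANT ALL PRIVILEGES ON *.* TO dl@'%' IDENTIFIED BY 'dl1234' WITH GRANT OPTION;"
    sudo docker exec -i mariadb mysql -uroot -pmypass < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/assembly/scripts/init_db.sql
    sudo docker exec -i mariadb mysql -uroot -pmypass < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/assembly/scripts/init_db_data.sql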

  6. Setup Kafka
    (Ref https://kafka.apache.org/quickstart)
    This and the following 2 steps describe setting up and using your own Kafka for development and testing. For using ONAP DMaaP, see step "Use DMaaP as data source".
    On VM1,

    mkdir ~/kafka
    cd ~/kafka
    wget http://archive.apache.org/dist/kafka/2.0.0/kafka_2.11-2.0.0.tgz
    tar -xzf kafka_2.11-2.0.0.tgz
    cd ~/kafka/kafka_2.11-2.0.0

    vi config/server.properties 
    change
    #listeners=PLAINTEXT://:9092
    to

    listeners=PLAINTEXT://172.30.1.74:9092

    To start Zookeeper and Kafka:
    cd ~/kafka/kafka_2.11-2.0.0
    nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zk.log &
    nohup bin/kafka-server-start.sh config/server.properties > kf.log &

    Btw, here are the commands to stop them:
    bin/zookeeper-server-stop.sh
    bin/kafka-server-stop.sh


  7. Create test Kafka topics 
    On VM1,

    cd ~/kafka/kafka_2.11-2.0.0

    ./bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic AAI-EVENT
    ./bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic unauthenticated.DCAE_CL_OUTPUT
    ./bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic unauthenticated.SEC_FAULT_OUTPUT
    ./bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic msgrtr.apinode.metrics.dmaap

    In case you want to reset the topics, here are the scripts to delete them:

    ./bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic AAI-EVENT
    ./bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic unauthenticated.DCAE_CL_OUTPUT
    ./bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic unauthenticated.SEC_FAULT_OUTPUT
    ./bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic msgrtr.apinode.metrics.dmaap
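    To verify which topics currently exist, you can list them:

    ./bin/kafka-topics.sh --list --zookeeper message-router-zookeeper:2181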

  8. Load test data to Kafka
    The test data files were checked out from the source repo in the previous step "Check out source code".
    On VM1,

    cd ~/kafka/kafka_2.11-2.0.0

    ./bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic AAI-EVENT < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/AAI-EVENT-100.json
    ./bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic unauthenticated.DCAE_CL_OUTPUT < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/DCAE_CL_OUTPUT-100.json
    ./bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic unauthenticated.SEC_FAULT_OUTPUT < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/SEC_FAULT_OUTPUT-100.json
    ./bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic msgrtr.apinode.metrics.dmaap < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/msgrtr.apinode.metrics.dmaap-100.json


    To check if the data is successfully loaded, one can read the data: 

    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic AAI-EVENT --from-beginning
    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic unauthenticated.DCAE_CL_OUTPUT --from-beginning
    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic unauthenticated.SEC_FAULT_OUTPUT  --from-beginning 
    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic msgrtr.apinode.metrics.dmaap --from-beginning

  9. Setup MongoDB
    On VM1,
    sudo docker run -d -p 27017:27017 --name mongodb mongo
    or to start a stopped one 
    sudo docker start mongodb
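    To verify MongoDB is up, a quick check with the mongo shell inside the container (a sketch):

    sudo docker exec -it mongodb mongo --eval "db.adminCommand('ping')"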


  10. Setup Couchbase 
    On VM1,
    • Start docker
      sudo docker run -d --name couchbase -p 18091:8091 -p 8092-8094:8092-8094 -p 11210:11210 couchbase/server-sandbox:6.0.0
      Note that we map the container's port 8091 to the host's 18091 because 8091 will be used by Druid. Or, to start a stopped one:
      sudo docker start couchbase
    • Create user and bucket

      Access http://dl-couchbase:18091/ and use the login "Administrator/password".

      Create bucket "datalake", with memory quota 200MB.
      Create user dl/dl1234 , with “Application Access” and "Views Admin" roles to bucket "datalake".
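      To verify the bucket was created, the Couchbase REST API can be queried (a sketch, using the admin login above):

      curl -u Administrator:password http://dl-couchbase:18091/pools/default/buckets/datalake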

  11. Setup ElasticSearch & Kibana 
    (Ref https://docs.swiftybeaver.com/article/33-install-elasticsearch-kibana-via-docker)
    On VM1,

    sudo docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name elastic docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    sudo docker run -d --link elastic:dl-es -e "ELASTICSEARCH_HOSTS=http://dl-es:9200" -p 5601:5601 --name kibana docker.elastic.co/kibana/kibana:7.1.1

    or to start the stopped ones
    sudo docker start elastic
    sudo docker start kibana
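    To confirm Elasticsearch and Kibana are reachable:

    curl "http://dl-es:9200/_cluster/health?pretty"
    curl -I "http://dl-es:5601/"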

  12. Create test Indices in ElasticSearch 
    Indices should be auto created by DataLake Feeder.
    To access Kibana: http://dl-es:5601/ .
    In case you want to reset the Indices, here are the scripts to delete them:

    curl -X DELETE "dl_-es:9200/aai-event?pretty"

    curl -X DELETE "dl_-es:9200/unauthenticated.dcae_cl_output?pretty"

    curl -X DELETE "dl_-es:9200/unauthenticated.sec_fault_output?pretty"

    curl -X DELETE "dl_-es:9200/msgrtr.apinode.metrics.dmaap?pretty"


  13. Setup Druid
    (Ref http://druid.io/docs/latest/tutorials/index.html)
    We install Druid and Superset on VM2, because: 1. Druid uses port 8091, which is also used by Couchbase; 2. Druid uses its own Zookeeper, and we already installed one on VM1. (The second conflict could be resolved by modifying Druid configs, though.)
    On VM2,

    mkdir ~/druid
    cd ~/druid
    wget https://www-us.apache.org/dist/incubator/druid/0.14.2-incubating/apache-druid-0.14.2-incubating-bin.tar.gz
    tar -xzf apache-druid-0.14.2-incubating-bin.tar.gz
    cd ~/druid/apache-druid-0.14.2-incubating

    vi ~/druid/apache-druid-0.14.2-incubating/quickstart/tutorial/conf/druid/middleManager/runtime.properties, update:
    druid.host=dl-druid
    druid.worker.capacity=30

    vi ~/druid/apache-druid-0.14.2-incubating/quickstart/tutorial/conf/druid/middleManager/jvm.config, update:
    -Xmx640m

    Install Zookeeper, which the Druid quickstart cluster expects under 'zk':
    curl https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz -o zookeeper-3.4.11.tar.gz
    tar -xzf zookeeper-3.4.11.tar.gz
    mv zookeeper-3.4.11 zk

  14. Run Druid
    cd ~/druid/apache-druid-0.14.2-incubating/
    nohup bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf > log.txt &

  15. Submit Druid Kafka indexing service supervisors
    (Ref http://druid.io/docs/latest/tutorials/tutorial-kafka.html)
    We use the Druid Kafka indexing service to load data from Kafka. For each topic, we need to submit a supervisor spec to Druid:

    cd ~/

    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/AAI-EVENT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/DCAE_CL_OUTPUT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/SEC_FAULT_OUTPUT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/msgrtr.apinode.metrics.dmaap-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor

    Windows' version:
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\AAI-EVENT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\DCAE_CL_OUTPUT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\SEC_FAULT_OUTPUT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\msgrtr.apinode.metrics.dmaap-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor

    Druid tasks: http://dl-druid:8090
    Druid datasource: http://dl-druid:8081/#/datasources
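    To confirm the supervisors were accepted, you can list them with a GET on the same endpoint:

    curl http://dl-druid:8090/druid/indexer/v1/supervisor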

  16. Setup Superset
    (Ref https://superset.incubator.apache.org/installation.html#start-with-docker)
    On VM2,
    mkdir ~/superset

    cd ~/superset
    git clone https://github.com/apache/incubator-superset/
    cd ~/superset/incubator-superset/contrib/docker

    vi docker-compose.yml, add the external host dl-druid to service 'superset':

    extra_hosts:
    - "dl-druid:172.30.1.75"

    vi docker-init.sh, change:
    flask fab create-admin --app superset → fabmanager create-admin --app superset

    superset load_examples →  superset load-examples


    Then
    sudo docker-compose run superset ./docker-init.sh
    (This will take a while. You will be asked to provide a new username and password.)

  17. Run Superset

    cd ~/superset/incubator-superset/contrib/docker
    sudo docker-compose up -d

    Setup Druid as a data source
    Open http://dl-superset:8088/ , using the login created in step 'Setup Superset', go to Sources → Druid Clusters → Add a new record (the '+' sign), and set:
    Verbose Name=DataLake druid
    Broker Host=172.30.1.75
    Cluster=dl-druid

  18. Setup Hadoop/HDFS
    If you already have a Hadoop cluster, set 'dl-hdfs' to its NameNode IP in /etc/hosts. Otherwise, install a Cloudera QuickStart VM in Docker or other VM formats on VM1.
    Download image from http://www.cloudera.com/content/support/en/downloads/quickstart_vms.html.

    For Docker, (Ref. https://www.cloudera.com/documentation/enterprise/5-13-x/topics/quickstart_docker_container.html)
    gunzip cloudera-quickstart-vm-5.13.0-0-beta-docker.tar.gz
    tar -xvf cloudera-quickstart-vm-5.13.0-0-beta-docker.tar
    cd cloudera-quickstart-vm-5.13.0-0-beta-docker
    sudo docker import cloudera-quickstart-vm-5.13.0-0-beta-docker.tar
    sudo docker images
    sudo docker run --name=hadoop --hostname=quickstart.cloudera --privileged=true -t -i -p 7180:7180 -p 8020:8020 -p 50075:50075 -p 50010:50010 5d3a901291ef_replace_with_yours /usr/bin/docker-quickstart
    /home/cloudera/cloudera-manager --express

    Access Cloudera Manager via http://dl-hdfs:7180 , using login 'cloudera/cloudera', and start the cluster.

    On the QuickStart VM, create the HDFS folder '/datalake', where the data will be stored, and assign it to user 'dl':
    sudo -u hdfs hadoop fs -mkdir /datalake
    sudo -u hdfs hadoop fs -chown dl /datalake
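    To confirm the folder exists with the right owner:

    sudo -u hdfs hadoop fs -ls /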

  19. Run DataLake Feeder in IDE

    The Feeder is a Spring Boot application. The entry point is org.onap.datalake.feeder.Application. Run the project in Eclipse as a "Spring Boot App". Once started, the app reads the topic list from Zookeeper, then pulls data from these Kafka topics and inserts the data into MongoDB, Couchbase, Elasticsearch and HDFS. The data loaded to Kafka in step 'Load test data to Kafka' should appear in all the databases/stores, and you should be able to use the UI tools installed above to view it.

    The REST APIs provided by controllers are documented on the Swagger page: http://localhost:1680/datalake/v1/swagger-ui.html .
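    As a quick sanity check, the Feeder status endpoint (also used later by the Admin UI) should respond:

    curl http://localhost:1680/datalake/v1/feeder/status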

  20. Create Docker image for deployment
    To create a Docker image in your local development environment, you need to have Docker installed locally.
    cd ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder
    mvn clean package -DskipTests
    sudo docker build -t moguobiao/datalake-feeder -f src/assembly/Dockerfile . (Replace 'moguobiao' with your name)

    Push docker image to dockerhub

    sudo docker login -u moguobiao -p password
    sudo docker push moguobiao/datalake-feeder

  21. Deploy Docker image 
    On VM1,
    sudo docker pull moguobiao/datalake-feeder
    sudo docker run -d -p 1680:1680 --name dl-feeder --add-host=message-router-kafka:172.30.1.74 --add-host=message-router-zookeeper:172.30.1.74 --add-host=dl-couchbase:172.30.1.74 --add-host=dl-mariadb:172.30.1.74 --add-host=dl-mongodb:172.30.1.74 --add-host=dl-hdfs:172.30.1.74 --add-host=dl-es:172.30.1.74 moguobiao/datalake-feeder
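    To check the deployed container:

    sudo docker logs dl-feeder
    curl http://dl-feeder:1680/datalake/v1/feeder/status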

  22. Deploy AdminUI
    On VM1,
    1. Development mode
      1. Environment setup
        Install nodejs >= 10.9.0 and Angular CLI >= 7
        Please follow https://angular.io/guide/quickstart to set up the development environment.
        # cd ~/git/onap/dcaegen2/services2/components/datalake-handler/admin/src
        # npm install
      2. Mockup API server setup (optional)
        # npm run mockup
        # curl http://dl-adminui:1680/datalake/v1/feeder/status
        A 200 response means the mockup server is working.
      3. Run application
        # vim proxy.conf.json, modify the feeder IP address, line:3 "target": "http://dl-adminui:1680"
        If you don't enable the mockup server, use "target": "http://dl-feeder:1680" instead.
        # npm start (In Windows, ng serve --proxy-config proxy.conf.json)
        Access Admin UI page http://dl-adminui:4200
    2. Production mode with Docker
      # cd ~/git/onap/dcaegen2/services2/components/datalake-handler/admin
      # docker build -t datalake-adminui . --no-cache
      # docker run -d -p 80:80 --name dl-adminui --add-host=dl-feeder:172.30.1.74 datalake-adminui
      Access Admin UI page http://dl-adminui



  23. Use DMaaP (Release C) as data source

    Add VM1 to the Kubernetes cluster
    ONAP at the China Mobile lab is deployed as a Kubernetes cluster. For the DataLake Feeder to connect to DMaaP's Kafka and Zookeeper, we need to add VM1 to the cluster. This is done by installing Rancher containers on the VM.

    Find Zookeeper and Kafka hosts
    kubectl -n onap get pod -o wide | grep dmaap-message-router
    In our instance, it returns
    dev-dmaap-message-router-58cb7f9644-v5qvq 1/1 Running 0 53d 10.42.97.241 mr01-node3 <none>
    dev-dmaap-message-router-kafka-6685877dc4-xkvrk 1/1 Running 0 53d 10.42.243.183 mr01-node2 <none>
    dev-dmaap-message-router-zookeeper-bc76c44f4-6sfbx 1/1 Running 0 53d 10.42.13.227 mr01-node1 <none>

    So we update /etc/hosts on VM1 with
    10.42.13.227 message-router-zookeeper 
    10.42.243.183 message-router-kafka

    Run Feeder
    We are not able to run the Docker container as in step “Deploy Docker image”, because even though VM1 is within the Kubernetes cluster, the Feeder container is not. One way to solve this is to deploy the image into the Kubernetes cluster, which is illustrated in step “Deploy Docker image to Kubernetes cluster”. There is a simpler way to run the Feeder for development and testing:

    • Copy the jar file C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\target\feeder-1.0.0-SNAPSHOT.jar to VM1 (e.g. via scp, as sketched after this list). This jar file was created in step “Create Docker image for deployment”, when running the Maven command.

    • Then run
      nohup java -jar feeder-1.0.0-SNAPSHOT.jar > feeder.log &
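    For example, one possible way to copy and start the Feeder from your local machine (a sketch; the 'ubuntu' login is an assumption, adjust the user and paths to your setup):

    scp ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/target/feeder-1.0.0-SNAPSHOT.jar ubuntu@172.30.1.74:~/
    ssh ubuntu@172.30.1.74
    nohup java -jar feeder-1.0.0-SNAPSHOT.jar > feeder.log &
    tail -f feeder.log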

  24. Deploy Docker image to Kubernetes cluster
    TODO
  25. ...