
To add a new ONAP component to Heat, it is necessary to complete the following three steps:

  1. Add the definition of a new component VM to the Heat template
  2. Make sure that all the component-specific configuration is in the Gerrit repository
  3. Prepare installation scripts that install software dependencies and docker containers


Add the definition of a new component VM to the Heat template

The Heat template contains the definition of the ONAP components and of the Operation and Management (OAM) private network that those components use to communicate with each other. Each component has a fixed private IP address on the OAM network, in the 10.0.0.0/16 address space.

The Heat stack contains a DNS server that resolves the Fully Qualified Domain Names (FQDNs) to IP addresses. The DNS configuration has an entry for each component VM, for example:

vm1.aai.simpledemo.openecomp.org.       IN      A       aai1_ip_addr

Then, all the services that run in that VM are associated with its FQDN:

aai.api.simpledemo.openecomp.org.            IN      CNAME   vm1.aai.simpledemo.openecomp.org.
aai.ui.simpledemo.openecomp.org.             IN      CNAME   vm1.aai.simpledemo.openecomp.org.
aai.searchservice.simpledemo.openecomp.org.  IN      CNAME   vm1.aai.simpledemo.openecomp.org.


Adding a new ONAP component requires adding a definition of the host VM in terms of operating system, flavor (number of vCPUs, RAM, disk), ports, etc. The VM definition also contains a "user data" section that is used to implement custom operations. In ONAP, the "user data" section is used to save environment-specific parameters in the VM and make them available to the installation scripts (see the next sections).

As an example, the definition of the SO (MSO) VM is shown below:

  # MSO instantiation
  mso_private_port:
    type: OS::Neutron::Port
    properties:
      network: { get_resource: oam_onap }
      fixed_ips: [{"subnet": { get_resource: oam_onap_subnet }, "ip_address": { get_param: mso_ip_addr }}]

  mso_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: { get_param: public_net_id }
      port_id: { get_resource: mso_private_port }

  mso_vm:
    type: OS::Nova::Server
    properties:
      image: { get_param: ubuntu_1604_image }
      flavor: { get_param: flavor_large }
      name:
        str_replace:
          template: base-mso
          params:
            base: { get_param: vm_base_name }
      key_name: { get_resource: vm_key }
      networks:
        - port: { get_resource: mso_private_port }
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            __nexus_repo__: { get_param: nexus_repo }
            __nexus_docker_repo__: { get_param: nexus_docker_repo }
            __nexus_username__: { get_param: nexus_username }
            __nexus_password__: { get_param: nexus_password }
            __openstack_username__: { get_param: openstack_username }
            __openstack_tenant_id__: { get_param: openstack_tenant_id }
            __openstack_api_key__: { get_param: openstack_api_key }
            __openstack_region__: { get_param: openstack_region }
            __keystone_url__: { get_param: keystone_url }
            __dmaap_topic__: { get_param: dmaap_topic }
            __artifacts_version__: { get_param: artifacts_version }
            __dns_ip_addr__: { get_param: dns_ip_addr }
            __docker_version__: { get_param: docker_version }
            __gerrit_branch__: { get_param: gerrit_branch }
            __cloud_env__: { get_param: cloud_env }
            __external_dns__: { get_param: external_dns }
            __mso_repo__: { get_param: mso_repo }
          template: |
            #!/bin/bash

            # Create configuration files
            mkdir -p /opt/config
            echo "__nexus_repo__" > /opt/config/nexus_repo.txt
            echo "__nexus_docker_repo__" > /opt/config/nexus_docker_repo.txt
            echo "__nexus_username__" > /opt/config/nexus_username.txt
            echo "__nexus_password__" > /opt/config/nexus_password.txt
            echo "__artifacts_version__" > /opt/config/artifacts_version.txt
            echo "__dns_ip_addr__" > /opt/config/dns_ip_addr.txt
            echo "__dmaap_topic__" > /opt/config/dmaap_topic.txt
            echo "__openstack_username__" > /opt/config/openstack_username.txt
            echo "__openstack_tenant_id__" > /opt/config/tenant_id.txt
            echo "__openstack_api_key__" > /opt/config/openstack_api_key.txt
            echo "__openstack_region__" > /opt/config/openstack_region.txt
            echo "__keystone_url__" > /opt/config/keystone.txt
            echo "__docker_version__" > /opt/config/docker_version.txt
            echo "__gerrit_branch__" > /opt/config/gerrit_branch.txt
            echo "__cloud_env__" > /opt/config/cloud_env.txt
            echo "__external_dns__" > /opt/config/external_dns.txt
            echo "__mso_repo__" > /opt/config/remote_repo.txt

            # Download and run install script
            curl -k __nexus_repo__/org.onap.demo/boot/__artifacts_version__/mso_install.sh -o /opt/mso_install.sh
            cd /opt
            chmod +x mso_install.sh
            ./mso_install.sh


The get_param function retrieves parameter values defined in the Heat environment file, for example:

public_net_id: 03bd2691-2660-4f85-8913-65ef9c9b02df
ubuntu_1404_image: ubuntu-14-04-cloud-amd64
ubuntu_1604_image: ubuntu-16-04-cloud-amd64
flavor_small: m1.small
flavor_medium: m1.medium
flavor_large: m1.large
flavor_xlarge: m1.xlarge
vm_base_name: vm1
key_name: onap_key
nexus_repo: https://nexus.onap.org/content/sites/raw
nexus_docker_repo: nexus3.onap.org:10001
nexus_username: docker
nexus_password: docker
dmaap_topic: AUTO
artifacts_version: 1.1.0-SNAPSHOT
docker_version: 1.1-STAGING-latest
gerrit_branch: master


These parameters mainly refer to the OpenStack environment, the docker registry URL and credentials, the Gerrit URL, the VM private IP addresses, etc. Component-specific parameters, instead, should be kept in Gerrit, so that the repository can be cloned and the specific configuration made available to the installation scripts.

For each VM, the last instruction in the user data section of the Heat template runs <component_name>_install.sh, which installs software dependencies such as docker, Java, make, gcc, git, etc. This script also downloads and runs another script, called <component_name>_vm_init.sh, which is in charge of downloading and running the docker containers.
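
As a rough illustration, a minimal <component_name>_install.sh could follow the sketch below. This is not an official ONAP script: the component name "foo" and the package list are placeholder assumptions, but the overall flow (read the parameters written by the user data into /opt/config, install dependencies, then fetch and run the vm_init script) mirrors what the existing install scripts do.

#!/bin/bash
# Hypothetical foo_install.sh - a sketch, not an official ONAP script

# Read the parameters that the Heat user data wrote into /opt/config
NEXUS_REPO=$(cat /opt/config/nexus_repo.txt)
ARTIFACTS_VERSION=$(cat /opt/config/artifacts_version.txt)

# Install software dependencies (package names are illustrative)
apt-get update
apt-get install -y git make gcc curl openjdk-8-jdk docker.io

# Download the script that pulls and runs the docker containers, then run it
curl -k $NEXUS_REPO/org.onap.demo/boot/$ARTIFACTS_VERSION/foo_vm_init.sh -o /opt/foo_vm_init.sh
chmod +x /opt/foo_vm_init.sh
/opt/foo_vm_init.sh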

For detailed information about Heat templates and their installation, please refer to ONAP Installation in Vanilla OpenStack.

What should ONAP teams do to onboard a new component? Just provide the VM specs that you want; we will create the VM definition accordingly. We will also create <component_name>_install.sh.


Make sure that all the component-specific configuration is in the Gerrit repository

<component_name>_install.sh will take care of cloning the component's Gerrit repository, if needed, for example:


# Clone Gerrit repository and run docker containers
cd /opt
git clone -b $GERRIT_BRANCH --single-branch $CODE_REPO


This is required if the component has some specific configuration to use during the installation process.
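
The $GERRIT_BRANCH and $CODE_REPO variables used above are typically read from the configuration files that the user data section created in /opt/config. The snippet below is only a sketch based on the file names shown earlier for the SO VM ("foo" is a placeholder clone directory):

# Read the clone parameters written by the Heat user data
GERRIT_BRANCH=$(cat /opt/config/gerrit_branch.txt)
CODE_REPO=$(cat /opt/config/remote_repo.txt)

# Clone the component repository into /opt/foo
cd /opt
git clone -b $GERRIT_BRANCH --single-branch $CODE_REPO foo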

What should ONAP teams do? Please let us know if your component needs this step. If so, please make sure that the configuration files in your Gerrit repository are up to date.


Prepare installation scripts that install software dependencies and docker containers

<component_name>_vm_init.sh downloads and runs the docker containers. This script may vary from component to component, depending on the strategy each team adopts. For example:

  • DCAE GEN 1, AAI, MSO, and MESSAGE ROUTER have custom scripts that download and run their docker images. Hence, the only thing <component_name>_vm_init.sh does is call the custom script. The following example shows the content of dcae_vm_init.sh:

#!/bin/bash
export MTU=$(/sbin/ifconfig | grep MTU | sed 's/.*MTU://' | sed 's/ .*//' | sort -n | head -1)
cd /opt/dcae-startup-vm-controller
git pull
bash init.sh
make up

  • Some teams use docker compose to run docker containers, while other teams prefer docker run. Here is the content of vid_vm_init.sh, which uses docker run:

#!/bin/bash
NEXUS_USERNAME=$(cat /opt/config/nexus_username.txt)
NEXUS_PASSWD=$(cat /opt/config/nexus_password.txt)
NEXUS_DOCKER_REPO=$(cat /opt/config/nexus_docker_repo.txt)
DOCKER_IMAGE_VERSION=$(cat /opt/config/docker_version.txt)

cd /opt/vid
git pull
cd /opt

docker login -u $NEXUS_USERNAME -p $NEXUS_PASSWD $NEXUS_DOCKER_REPO
docker pull $NEXUS_DOCKER_REPO/openecomp/vid:$DOCKER_IMAGE_VERSION
docker pull $NEXUS_DOCKER_REPO/library/mariadb:10

docker rm -f vid-mariadb
docker rm -f vid-server

docker run --name vid-mariadb -e MYSQL_DATABASE=vid_openecomp_epsdk -e MYSQL_USER=vidadmin -e MYSQL_PASSWORD=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U -e MYSQL_ROOT_PASSWORD=LF+tp_1WqgSY -v /opt/vid/lf_config/vid-my.cnf:/etc/mysql/my.cnf -v /opt/vid/lf_config/vid-pre-init.sql:/docker-entrypoint-initdb.d/vid-pre-init.sql -v /var/lib/mysql -d mariadb:10

docker run -e VID_MYSQL_DBNAME=vid_openecomp_epsdk -e VID_MYSQL_PASS=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U --name vid-server -p 8080:8080 --link vid-mariadb:vid-mariadb-docker-instance -d $NEXUS_DOCKER_REPO/openecomp/vid:$DOCKER_IMAGE_VERSION


The variables in capital letters hold parameter values that were passed to the VM via the Heat template and stored under /opt/config. The script then logs into the Nexus docker registry and pulls the required docker images. Finally, docker run is used to launch the containers.
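
For teams that prefer docker compose, the structure of <component_name>_vm_init.sh is similar: read the registry parameters from /opt/config, log in, then pull and start the containers. The sketch below is only an illustration; it assumes a placeholder component "foo" whose repository, cloned under /opt/foo, contains a docker-compose.yml that references the exported variables.

#!/bin/bash
# Hypothetical foo_vm_init.sh based on docker-compose - a sketch, not an official ONAP script
NEXUS_USERNAME=$(cat /opt/config/nexus_username.txt)
NEXUS_PASSWD=$(cat /opt/config/nexus_password.txt)
NEXUS_DOCKER_REPO=$(cat /opt/config/nexus_docker_repo.txt)
DOCKER_IMAGE_VERSION=$(cat /opt/config/docker_version.txt)

# Refresh the component repository cloned by foo_install.sh
cd /opt/foo
git pull

# Log into the Nexus docker registry, then pull and (re)start the containers defined in docker-compose.yml
docker login -u $NEXUS_USERNAME -p $NEXUS_PASSWD $NEXUS_DOCKER_REPO
export NEXUS_DOCKER_REPO DOCKER_IMAGE_VERSION
docker-compose pull
docker-compose up -d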

What should ONAP teams do? Please help us build the <component_name>_vm_init.sh script, which should contain the logic that pulls and runs your docker containers. Feel free to choose your preferred strategy.
