Introduction

This document explains how to run the ONAP demos on Azure using the Beijing release of ONAP.

The Beijing release has certain limitations, for which fixes/workarounds have been provided to execute the demos. This document contains the details of those fixes/workarounds and the steps to deploy them.

TODO: reference workarounds, scripts and ONAP content from this wiki - do not duplicate inside Amdocs


...

Current Limitations of Beijing Release and Workarounds


S.No. | Component | Issue detail | Current Status | Further Actions
1 | SDC | The VNFs are onboarded using TOSCA. However, SDC does not support the 'Group' construct (aka VFModule), which is required when the TOSCA is ingested in SO after distribution. | A partial fix has been provided to overcome this issue. | A proposal to include the fix and enhance SDC to support the TOSCA 'Groups' construct has been submitted to the SDC PTL. Expected to be discussed and included in the Casablanca release.
2 | VID | VID is unable to show the 'VFModule' of a TOSCA service definition. This is linked to the above issue in SDC. | Fix has been provided. | The fix will be submitted along with the SDC Group fix for the Casablanca release.
3 | MultiVim Broker | The MultiVim broker does not support HTTP requests with content-type 'multipart/form-data'. | Fixed in the Beijing release. | NA
4 | SDC | The SDC UI shows that the service distribution was not successful even though the TOSCA was successfully deployed in SO. | Fixed in the Beijing release. | NA
5 | SO | VFModule not available in TOSCA definitions, due to which SO does not consume the service properly. | Fix has been provided in the 'ASDC Controller' module so that the VFModule tables in the Catalog schema can be populated. | A proper solution depends on the SDC fix explained above: the changes to support 'Groups'/'VFModule' are part of the SDC client library provided by SDC, and the ASDC controller would also need fixes to handle the SDC client library changes. Expected to be fixed in the Casablanca release.

High level Solution Architecture

...

S.No. | Component | Gap | Current approach | Further actions
1 | SO | Custom workflow to call the MultiVIM adapter. | A downstream image of SO is placed on GitHub which contains the custom workflow BPMN along with the MultiVIM adapter. | A base version of code that supports the SO-Multicloud interaction has been pushed to Gerrit, but it does not support multipart data (CSAR artifacts to pass to the plugin); this needs to be upstreamed.
2 | MultiCloud plugin | The current Azure plugin on ONAP Gerrit does not support the vFW and vDNS use-cases. | Using the downstream image from GitHub; a custom chart was developed in OOM (downstream) to deploy it as part of the multicloud component set. | Need to upstream the azure-plugin code to support the vFW and vDNS use-cases.




The High level solution architecture can be found here

Note
Not all ONAP components are shown in the high-level solution; only the new components/modules introduced in the solution are shown. Everything else remains the same.

Deploying ONAP on Azure using Beijing Release

ONAP needs to be deployed with the dockers containing the fixes/workarounds provided for the limitations in the Beijing release. Some fixes have already been merged into the Beijing release, and those dockers will be used.

Note

This section explains deploying the Beijing ONAP release on Azure for executing the demo scenarios.

If you want to deploy other releases of ONAP, or the Beijing release without these fixes, please refer here.

The OOM deployment values charts have also been modified to deploy the dockers with the fixes.

The detailed list of changes is given below:

 

S.No. | Project Name | Docker Image (pull from the Docker Hub repo) | Remarks
1 | OOM | elhaydox/oom | Contains the latest values.yaml files, which point to the downstream images of SO and the multicloud-azure plugin, along with fixes for: 1) the SDC POL:5000 error during distribution, 2) the Firefox browser crash.
2 | OOM Config | elhaydox/oomconfig | Contains the configuration files.
3 | SO | elhaydox/mso:1.2.2_azure-1.1.0 | Contains the VFModule fix along with the newly developed BPMN and MultiVIM adapter.
4 | multicloud-azure | elhaydox/multicloud-azure | Aria plugin to interface with Azure and instantiate VNFs.
5 | SDC | elhaydox/sdc-backend | Contains the partial fix to support the Group construct.
6 | VID | elhaydox/vid | Contains the partial fix to support the Group construct so that the VF-Module can be instantiated from VID.

Deploying ONAP on Azure

Consists of two steps:

Creation of Kubernetes cluster on Azure

  • Log in to Azure

    Code Block
    az login --service-principal -u <client_id> -p <client_secret> --tenant <tenant_id/document_id>

     

  •  Create a resource group  

    Code Block
     az group create --name <resource_group_name> --location <location_name>
  • Get the deployment templates from ONAP gerrit

    Code Block
     git clone -o gerrit https://gerrit.onap.org/r/integration
     cd integration/deployment/Azure_ARM_Template
  • Change arm_cluster_deploy_parameters.json file data (if required)
  • Run the deployment template  

    Code Block
        az group deployment create --resource-group deploy_onap --template-file arm_cluster_deploy_beijing.json --parameters @arm_cluster_deploy_parameters.json

      change the parameters file accordingly
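Taken together, the cluster-creation steps above can be sketched as one script. This is only a sketch, not part of the official ONAP tooling: the resource group name and location are the example values from this page, and the DRY_RUN flag (enabled by default here) is a hypothetical safety switch that prints each command instead of running it.

```shell
#!/usr/bin/env bash
# Sketch of the cluster-creation steps above (hypothetical wrapper, not an
# official ONAP script). With DRY_RUN=1 (the default) commands are only printed.
set -euo pipefail

RESOURCE_GROUP="${RESOURCE_GROUP:-deploy_onap}"
LOCATION="${LOCATION:-eastus}"   # assumption: pick your own Azure region
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"                  # print the command instead of executing it
  else
    "$@"
  fi
}

run az group create --name "$RESOURCE_GROUP" --location "$LOCATION"
run git clone -o gerrit https://gerrit.onap.org/r/integration
run cd integration/deployment/Azure_ARM_Template
run az group deployment create \
  --resource-group "$RESOURCE_GROUP" \
  --template-file arm_cluster_deploy_beijing.json \
  --parameters @arm_cluster_deploy_parameters.json
```

Run it with DRY_RUN=0 once you have verified the printed commands and edited the parameters file.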

             Files attached: Running ONAP Demos on Azure

Note
The original OOM templates are here - https://jira.onap.org/browse/LOG-321. However, those templates require the fixes described above to be merged in ONAP. Till such time, use the attached files above.

Deploying ONAP on VM

  • Download the following script on the VM created in step 1 above - Running ONAP Demos on Azure

    The deployment process takes around 30 minutes to complete. A cluster of 12 VMs is created on Azure (as per the parameters). The VM whose name has the index "0" runs the Rancher server; the remaining VMs form the Kubernetes cluster.

Deploying ONAP

  • SSH as the root user to the VM where the Rancher server is installed (the VM with index "0", as mentioned above).

    Note
    titleHelm upgrade

    When you log in to the Rancher server VM for the first time, run "helm ls" to make sure the Helm client and server are compatible. If it gives the error "Error: incompatible versions client[v2.9.1] server[v2.8.2]", then execute: helm init --upgrade
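The version check above can be scripted. The helper below is hypothetical (not part of any ONAP script): it parses the client/server versions out of Helm's error text and reports whether "helm init --upgrade" is needed. It is demonstrated against the sample error string from the note, so it runs without a live cluster.

```shell
# Hypothetical helper: detect the client/server version mismatch described
# above from helm's error text.
needs_upgrade() {
  # $1: error output from "helm ls"
  local client server
  client=$(printf '%s' "$1" | sed -n 's/.*client\[\(v[0-9.]*\)\].*/\1/p')
  server=$(printf '%s' "$1" | sed -n 's/.*server\[\(v[0-9.]*\)\].*/\1/p')
  # Mismatch only if both versions were found and they differ
  [ -n "$client" ] && [ -n "$server" ] && [ "$client" != "$server" ]
}

# Sample error string from the note above:
err='Error: incompatible versions client[v2.9.1] server[v2.8.2]'
if needs_upgrade "$err"; then
  echo "client/server mismatch: run 'helm init --upgrade'"
fi
```

In real usage you would feed it the captured output of "helm ls" and run the upgrade only when the function succeeds.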

  • Download the deployment script on the VM:

    Code Block
    titleGet install script on Azure VM
    wget https://raw.githubusercontent.com/onapdemo/onap-scripts/master/entrypoint/deploy_onap.sh
    chmod 777 deploy_onap.sh

    The deploy_onap.sh script is a wrapper/utility script that does the following:

      1. Executes the ONAP script to install rancher - oom_rancher_setup.sh
      2. Clones OOM from GitHub (git clone -b beijing --single-branch https://github.com/onapdemo/oom.git). This downstream repo contains the modified image references in the Values charts; the docker images contain the fixes explained above and are available on Docker Hub instead of the ONAP nexus.
      3. Installs ONAP based on the modified value charts in oom.

    Note
    This script should be used only for this Beijing-with-fixes deployment on Azure. The script is a wrapper for OOM that installs the required images from Docker Hub. Once the fixes are merged in ONAP, which could happen in the Casablanca release, it will no longer be required; the original OOM scripts will be sufficient to install ONAP.
    Please refer to the original ONAP script in https://git.onap.org/logging-analytics/tree/deploy/cd.sh for installing other releases.
  • Execute the below command to deploy ONAP
Code Block
		./deploy_onap.sh -e onap -t single -r true -n $dns_name

                   -r : give input as true to deploy rancher and kubernetes on VM
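Since the deployment takes around 30 minutes, it helps to poll for readiness rather than checking by hand. The helper below is a generic sketch (not an ONAP script); the readiness check is passed in as a command, and the kubectl line shown in the comment is only the assumed real-world usage on the Rancher VM.

```shell
# Generic polling helper (sketch): retries a check command until it succeeds
# or the given number of attempts is exhausted. SLEEP_SECS controls the
# delay between attempts (0 by default so the sketch runs instantly).
wait_until_ready() {
  local tries=$1; shift
  local i
  for i in $(seq 1 "$tries"); do
    if "$@"; then
      echo "ready after $i check(s)"
      return 0
    fi
    sleep "${SLEEP_SECS:-0}"
  done
  echo "timed out after $tries checks" >&2
  return 1
}

# Assumed real usage on the Rancher VM (kubectl configured for the cluster):
#   SLEEP_SECS=30 wait_until_ready 60 sh -c \
#     '! kubectl get pods -n onap --no-headers | grep -v Running | grep -q .'
```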

  • To delete a previously deployed ONAP and deploy a new one, execute the command

...

  • Execute the below commands in sequence to install ONAP

    Code Block
    titleInstall ONAP using Helm
    cd oom/kubernetes
    make all # This will create and store the helm charts in the local repo.
    helm install local/onap --name dev --namespace onap

    Note
    Due to network glitches on the public cloud, the installation sometimes fails with the error: "Error: release dev failed: client: etcd member http://etcd.kubernetes.rancher.internal:2379 has no leader". If you face this during deployment, re-install ONAP:

    helm del --purge dev
    rm -rf /dockerdata-nfs/* # wait for a few minutes
    helm install local/onap --name dev --namespace onap
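Because the etcd "has no leader" failure is transient, the clean-up-and-retry sequence from the note can be wrapped in a loop. The helper below is a hypothetical sketch: the retry logic is generic so it can be shown without a live cluster, and the real clean-up commands from the note are left as comments where they would go.

```shell
# Hypothetical retry wrapper for the transient etcd "has no leader" failure:
# runs the given install command, cleaning up between failed attempts.
retry_install() {
  local attempts=$1; shift
  local n
  for n in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0
    fi
    echo "attempt $n failed, cleaning up before retrying" >&2
    # On the real cluster the clean-up from the note above would go here:
    #   helm del --purge dev
    #   rm -rf /dockerdata-nfs/*   # then wait a few minutes
  done
  return 1
}

# Assumed real usage:
#   retry_install 3 helm install local/onap --name dev --namespace onap
```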

Running ONAP use-cases

Refer to the pages below to run the ONAP use-cases:

  1. Running ONAP Demos vFW on Azure
  2. Running ONAP Demos vDNS on Azure


Building the Source Code with fixes

If you want to take a look at the fixes and build the dockers for individual components, the source code for the fixes is available here: Source Code access