Table of Contents |
---|
This document explains how to run the ONAP demos on Azure using the Beijing release of ONAP.
The Beijing release has certain limitations, for which fixes/workarounds have been provided to execute the demos. This document details those fixes/workarounds and the steps to deploy them.
TODO: reference workarounds, scripts and ONAP content from this wiki - do not duplicate inside Amdocs
...
Current Limitations of Beijing Release and Workarounds
S.No. | Component | Issue detail | Current Status | Further Actions |
---|---|---|---|---|
1 | SDC | The VNFs are onboarded using TOSCA. However, SDC does not support the 'Group' construct (aka VFModule), which is required when the TOSCA is ingested in SO after distribution. | A partial fix has been provided to overcome this issue. | A proposal to include the fix and enhance SDC to support the TOSCA 'Groups' construct has been submitted to the SDC PTL. It is expected to be discussed and included in the Casablanca release. |
2 | VID | VID is unable to show the 'VFModule' of a TOSCA service definition. This is linked to the above issue in SDC. | Fix has been provided | The fix will be submitted along with the SDC Group fix for Casablanca release |
3 | Multivim Broker | The MultiVIM broker does not support HTTP requests that have the Content-Type 'multipart/form-data'. | Fixed in Beijing release | NA |
4 | SDC | The SDC UI displays that the service distribution was not successful even though the TOSCA was successfully deployed in SO. | Fixed in Beijing release | NA |
5 | SO | The VFModule is not available in the TOSCA definitions, due to which SO does not consume the service properly. | A fix has been provided in the 'ASDC Controller' module so that the VFModule tables in the Catalog schema can be populated. | A proper solution depends on the SDC fix explained above. The changes to support 'Groups'/'VFModule' are part of the SDC client library provided by SDC. In addition, the ASDC controller would also need fixes to handle the SDC client library changes. Expected to be fixed in the Casablanca release. |
High level Solution Architecture
...
S.No. | Component | Detail | Current Status | Further Actions |
---|---|---|---|---|
1 | SO | Custom workflow to call the MultiVIM adapter | A downstream image of SO is placed on GitHub which contains the custom workflow BPMN along with the MultiVIM adapter. | A base version of code that supports the SO-Multicloud interaction has been pushed to Gerrit. However, it does not support multipart data (CSAR artifacts to pass to the plugin). This needs to be upstreamed. |
2 | MultiCloud plugin | The current Azure plugin on ONAP Gerrit does not support the vFW and vDNS use-cases. | Using the downstream image from GitHub; a custom chart was developed in OOM (downstream) to deploy it as part of the multicloud component set. | Need to upstream the azure-plugin code to support the vFW and vDNS use-cases. |
High level Solution Architecture
The High level solution architecture can be found here
Note |
---|
Not all ONAP components are shown in the high-level solution; only the new components/modules introduced in the solution are shown. Everything else remains the same. |
Deploying ONAP on Azure using Beijing Release
ONAP needs to be deployed with the dockers containing the fixes/workarounds provided for the limitations in the Beijing release. Some fixes have already been merged into the Beijing release, and those dockers will be used.
Note |
---|
This section explains deploying the Beijing ONAP release on Azure for executing the demo scenarios. If you want to deploy other releases of ONAP, or to deploy without these fixes, please refer here |
The OOM deployment values charts have also been modified to deploy the dockers with the fixes.
The detailed list of changes is given below:
S.No | Project Name | Docker Image (pull from the Docker Hub repo) | Remarks |
---|---|---|---|
1 | OOM | elhaydox/oom | Contains the latest values.yaml files, along with certain fixes needed in the Beijing release, which point to the downstream images listed below. |
2 | OOM Config | elhaydox/oomconfig | Contains the configuration files. |
3 | SO | elhaydox/mso:1.2.2_azure-1.1.0 | Contains the VFModule fix along with the newly developed BPMN and MultiVIM adapter. |
4 | multicloud-azure | elhaydox/multicloud-azure | Aria plugin to interface with Azure and instantiate VNFs. |
5 | SDC | elhaydox/sdc-backend | Contains the partial fix to support the Group construct. |
6 | VID | elhaydox/vid | Contains the partial fix to support the Group construct so that the VF-Module can be instantiated from VID. |
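The downstream images in the table above can be pre-pulled on the cluster hosts so the OOM deployment does not stall on image downloads. Below is a minimal sketch; the function name is ours, and the untagged images defaulting to :latest is an assumption.

```shell
#!/bin/sh
# Hedged sketch: pre-pull the downstream images named in the table above.
# Untagged images default to :latest, which is an assumption on our part.
prepull_onap_images() {
  for img in elhaydox/oomconfig \
             elhaydox/mso:1.2.2_azure-1.1.0 \
             elhaydox/multicloud-azure \
             elhaydox/sdc-backend \
             elhaydox/vid; do
    # Abort on the first image that cannot be pulled.
    docker pull "$img" || { echo "failed to pull $img" >&2; return 1; }
  done
}
```

Run `prepull_onap_images` on each Kubernetes node before `helm install` to shorten the first deployment.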
Deploying ONAP on Azure
...
Consists of two steps:
...
Creation of Kubernetes cluster on Azure
Log in to Azure
Code Block
az login --service-principal -u <client_id> -p <client_secret> --tenant <tenant_id/directory_id>
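Before running the login command, it helps to confirm the service-principal credentials are actually set. A minimal wrapper is sketched below; the variable names CLIENT_ID/CLIENT_SECRET/TENANT_ID are our assumptions, not part of the original instructions.

```shell
#!/bin/sh
# Hedged sketch: wrap the documented 'az login' call with a pre-flight
# check that the service-principal credentials are present in the
# environment before the Azure CLI is invoked.
azure_login() {
  for v in CLIENT_ID CLIENT_SECRET TENANT_ID; do
    eval "val=\${$v}"
    if [ -z "$val" ]; then
      echo "azure_login: $v is not set" >&2
      return 1
    fi
  done
  az login --service-principal -u "$CLIENT_ID" -p "$CLIENT_SECRET" \
           --tenant "$TENANT_ID"
}
```

Export the three variables (for example from a credentials file kept out of version control) and then call `azure_login`.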
Create a resource group
Code Block
az group create --name <resource_group_name> --location <location_name>
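If the deployment is re-run, creating the group again is harmless but noisy; `az group exists` can gate the create. A sketch, with illustrative argument names:

```shell
#!/bin/sh
# Hedged sketch: create the resource group only when it does not already
# exist, so this step is safe to re-run. 'az group exists' prints the
# literal strings "true" or "false".
ensure_group() {  # usage: ensure_group <resource_group_name> <location>
  if [ "$(az group exists --name "$1")" = "false" ]; then
    az group create --name "$1" --location "$2"
  fi
}
```

For example, `ensure_group onap_rg eastus` (both values illustrative).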
Get the deployment templates from ONAP gerrit
Code Block
git clone -o gerrit https://gerrit.onap.org/r/integration
cd integration/deployment/Azure_ARM_Template
- Change arm_cluster_deploy_parameters.json file data (if required)
Run the deployment template
Code Block
az group deployment create --resource-group deploy_onap --template-file arm_cluster_deploy_beijing.json --parameters @arm_cluster_deploy_parameters.json
Change the parameters file accordingly.
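The ARM deployment runs asynchronously, so it is useful to poll its provisioning state before moving on. A sketch using the same `az group deployment` command family is below; the deployment name argument is an assumption (ARM derives it from the template file name unless `--name` was given).

```shell
#!/bin/sh
# Hedged sketch: report and wait on the provisioning state of the ARM
# deployment started above.
deployment_state() {  # usage: deployment_state <resource_group> <deployment>
  az group deployment show --resource-group "$1" --name "$2" \
     --query properties.provisioningState -o tsv
}

wait_for_deployment() {  # blocks until the state becomes Succeeded
  until [ "$(deployment_state "$1" "$2")" = "Succeeded" ]; do
    sleep 60
  done
}
```

A failed deployment never reaches Succeeded, so in practice you would also check for the Failed state or put a bound on the loop.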
Files attached:
Note |
---|
The original OOM templates are here - https://jira.onap.org/browse/LOG-321. However, these files will require the Beijing fixes to be merged in ONAP. Until then, use the attached files above. |
Deploying ONAP on VM
- Download the following script on the VM created in step 1 above - Running ONAP Demos on Azure
The deployment process takes around 30 minutes to complete. You will have a cluster of 12 VMs created on Azure (as per the parameters). The VM whose name has the post-index "0" runs the Rancher server, and the remaining VMs form a Kubernetes cluster.
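The naming convention above can be reconstructed from the VM name prefix set in arm_cluster_deploy_parameters.json. A sketch (the prefix value is an assumption):

```shell
#!/bin/sh
# Hedged sketch: derive the expected VM names from the ARM parameters'
# name prefix. Post-index 0 runs the Rancher server; 1..11 form the
# Kubernetes cluster.
PREFIX=onap-vm
RANCHER_VM="${PREFIX}0"
K8S_VMS=""
i=1
while [ "$i" -le 11 ]; do
  K8S_VMS="$K8S_VMS ${PREFIX}${i}"
  i=$((i + 1))
done
echo "Rancher server: $RANCHER_VM"
echo "Kubernetes nodes:$K8S_VMS"
```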
Deploying ONAP
SSH to the VM where the Rancher server is installed, using the root user (the VM with post-index "0", as mentioned before).
Note title: Helm upgrade
When you log in to the Rancher server VM for the first time, run "helm ls" to make sure the client and server are compatible. If it gives the error "Error: incompatible versions client[v2.9.1] server[v2.8.2]", then execute:
helm init --upgrade
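The version check in the note above can be scripted so Tiller is upgraded only when the versions actually differ. A sketch, assuming the Helm v2 `--client`/`--server`/`--short` flags:

```shell
#!/bin/sh
# Hedged sketch: compare the Helm client and Tiller (server) versions and
# upgrade Tiller when they differ, as the note above advises.
helm_versions_match() {
  c=$(helm version --client --short 2>/dev/null | grep -o 'v[0-9][0-9.]*' | head -1)
  s=$(helm version --server --short 2>/dev/null | grep -o 'v[0-9][0-9.]*' | head -1)
  [ -n "$c" ] && [ "$c" = "$s" ]
}

ensure_tiller_current() {
  helm_versions_match || helm init --upgrade
}
```

Call `ensure_tiller_current` once after the first login; it is a no-op when client and server already match.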
Download the OOM repo from GitHub (because of the downstream images). This:
- Executes the ONAP script to install Rancher - oom_rancher_setup.sh
- Clones OOM from GitHub
- Execute the below command to deploy ONAP
Code Block
wget https://raw.githubusercontent.com/onapdemo/onap-scripts/master/entrypoint/deploy_onap.sh
chmod 777 deploy_onap.sh
git clone -b beijing --single-branch https://github.com/onapdemo/oom.git
Note |
---|
This script should be used only for the Beijing release on Azure. The script is a wrapper for OOM that installs the required images from Docker Hub. Once the fixes are merged in ONAP, which could happen in the Casablanca release, it may no longer be required and the original OOM scripts will suffice to install ONAP. Please refer to the original ONAP script at https://git.onap.org/logging-analytics/tree/deploy/cd.sh for installing other releases. |
Code Block
./deploy_onap.sh -e onap -t single -r true -n $dns_name
-r : give input as true to deploy rancher and kubernetes on VM
- To delete a previously deployed ONAP and deploy a new one, execute the command
...
Execute the below commands in sequence to install ONAP
Code Block title: Get install script on Azure VM
cd oom/kubernetes
make all    # This will create and store the helm charts in the local repo.
helm install local/onap --name dev --namespace onap
Note
Due to network glitches on the public cloud, the installation sometimes fails with the error "Error: release dev failed: client: etcd member http://etcd.kubernetes.rancher.internal:2379 has no leader". If you face this during deployment, re-install ONAP:
helm del --purge dev
rm -rf /dockerdata-nfs/*
# wait for a few minutes
helm install local/onap --name dev --namespace onap
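After a (re-)install, the deployment can be considered healthy once every pod in the onap namespace reports Running. A polling sketch is below; the retry count and interval are illustrative parameters, not values from the original document.

```shell
#!/bin/sh
# Hedged sketch: poll 'kubectl get pods' until every pod in the onap
# namespace reports Running, or give up after a bounded number of tries.
wait_for_onap() {  # usage: wait_for_onap <max_tries> <interval_seconds>
  tries=0
  while [ "$tries" -lt "${1:-60}" ]; do
    # Count pods whose status column is anything other than Running.
    pending=$(kubectl get pods -n onap --no-headers | grep -cv ' Running ' || true)
    if [ "$pending" -eq 0 ]; then
      return 0
    fi
    tries=$((tries + 1))
    sleep "${2:-30}"
  done
  echo "pods still not Running after ${1:-60} checks" >&2
  return 1
}
```

For example, `wait_for_onap 60 30` waits up to 30 minutes.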
Running ONAP use-cases
Refer to the below pages to run the ONAP use-cases
Building the Source Code with fixes
If you want to take a look at the fixes and create the dockers for the individual components, the source code for the fixes is available here: Source Code access