Introduction
The DCAE Platform in the Dublin release supports a new feature to deploy components via Helm charts. This is enabled by integrating the Cloudify Helm plugin into the Cloudify Manager instance that the DCAE Platform uses to deploy other required services. The Cloudify Helm plugin itself is maintained under the CCSDK project and was delivered as part of Casablanca. For Dublin, this plugin has been integrated into the DCAE ONAP deployment. Any chart available under the chart repo-url specified as a configuration input can be deployed.
Dublin Scope
The Helm plugin is intended to support the deployment scenario of a stand-alone application, similar to the capability offered under OOM. With this plugin integration, any chart packaged under ONAP OOM can be deployed through the DCAE platform in ONAP. This gives operators the option of using a single orchestration through Cloudify for both Helm and TOSCA workflows if required.
As all DCAE microservices are currently TOSCA workflow based, the Helm plugin is not used for DCAE component deployment.
Artifacts
Repository Path: https://gerrit.onap.org/r/gitweb?p=ccsdk/platform/plugins.git;a=tree;f=helm;h=945eb3159f61071a348791b1f00d1cf4c3c97e7d;hb=HEAD
Blueprint Template:
# ============LICENSE_START==========================================
# ===================================================================
# Copyright (c) 2019 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============LICENSE_END============================================

tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/4.3.1/types.yaml
  - "https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/type_files/helm/4.0.0/helm-type.yaml"

inputs:
  tiller-server-ip:
    description: IP address of Kubernetes master node
  tiller-server-port:
    description: Nodeport of tiller server
  namespace:
    description: Target namespace to be installed under (requires to be new)
  chart-repo-url:
    default: https://nexus.onap.org/content/sites/oom-helm-staging
  chart-version:
    description: Chart version for identified component-name
  stable-repo-url:
    description: URL for stable repository
    type: string
    default: 'https://kubernetes-charts.storage.googleapis.com'
  config-url:
    default: ''
  config-format:
    default: 'yaml'
  component-name:
    description: onap component name

node_templates:
  dcaecomponent:
    type: onap.nodes.component
    properties:
      tiller-server-ip: { get_input: tiller-server-ip }
      tiller-server-port: { get_input: tiller-server-port }
      component-name: { get_input: component-name }
      chart-repo-url: { get_input: chart-repo-url }
      chart-version: { get_input: chart-version }
      namespace: { get_input: namespace }
      stable-repo-url: { get_input: stable-repo-url }
      config-url: { get_input: config-url }
      config-format: { get_input: config-format }

outputs:
  dcaecomponent_install_status:
    value: { get_attribute: [ dcaecomponent, install-status ] }
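The blueprint expects deployment inputs matching the declarations above. The snippet below is a minimal, illustrative inputs file; the address, nodeport, namespace, chart version and component name are placeholder values (not from an actual deployment) and must be replaced with values from the target cluster.

# Hypothetical inputs file for k8s-helm.yaml (all values are placeholders)
tiller-server-ip: 10.12.5.2      # IP address of the Kubernetes master node
tiller-server-port: 32764        # NodePort assigned to the tiller-deploy service
namespace: dcae-helm-demo        # New namespace the chart will be installed into
chart-version: 4.0.0             # Version of the chart in the chart repository
component-name: robot            # Name of the OOM chart/component to deploy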
There is also an option to override the chart defaults (the equivalent of supplying a values.yaml) using the k8s-helm-override.yaml blueprint template.
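As a sketch of how such an override could be wired through the inputs already declared in the blueprint (config-url and config-format), the config-url input would point to a reachable values file. The URL, file name, and keys below are purely illustrative assumptions, not content from the delivered templates.

# Hypothetical values override file, published at a reachable URL
# (e.g. http://<some-web-server>/overrides/demo-values.yaml)
replicaCount: 2
image:
  pullPolicy: Always

# Corresponding entries in the deployment inputs file
config-url: http://<some-web-server>/overrides/demo-values.yaml
config-format: yaml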
Pre-Configuration Steps
1. Helm needs to be installed on the Cloudify Manager (CM) pod
kubectl exec -it -n onap <Cloudify Manager pod> /bin/bash
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar -zxvf helm-v2.9.1-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
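After moving the binary into place, a quick sanity check run inside the CM pod confirms the client is installed; this check is a suggested verification, not part of the original procedure.

# Verify the helm client binary is on the PATH (expects v2.9.1 in this example)
helm version --client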
Note: If wget is not found, install it on the CM pod using "sudo yum install wget".
2. The Tiller service should be updated to expose a nodeport
kubectl edit svc -n kube-system tiller-deploy -o yaml
# Assign an unused nodeport available in the cluster
# After the update, the K8S svc definition should reflect the assigned node port

# Verify the node port assignment
kubectl get svc --all-namespaces | grep tiller
kube-system   tiller-deploy   ClusterIP   10.43.218.97   <none>   44134/TCP   5d
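For reference, the edit typically changes the service type to NodePort and adds a nodePort entry. The fragment below is an illustrative sketch (the port value 32764 is an arbitrary example), not output captured from an ONAP deployment.

# Relevant fragment of the tiller-deploy service after editing (illustrative)
spec:
  type: NodePort
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
    nodePort: 32764   # example value; pick an unused port in the cluster's NodePort range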
Installation
- Modify the blueprint templates
kubectl exec -it -n onap <dcae-bootstrap pod> /bin/bash
cd blueprints
ls
k8s-helm.yaml  k8s-helm-override.yaml
# Helm blueprint templates are available under this directory
# Verify and update the blueprint parameters if required
# Create corresponding input files
Note: The parameters are explained on the CCSDK wiki page: Introduction of Helm Plugin.
- Validate and upload the blueprint into CM
cfy blueprints validate /blueprints/k8s-helm.yaml
cfy blueprints upload -b k8s-helm-test /blueprints/k8s-helm.yaml
- Deploy the blueprint
cfy deployments create -b k8s-helm-test k8s-helm-test
cfy executions start -d k8s-helm-test install
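If an inputs file was created for the blueprint (see the inputs example earlier), it can be supplied when the deployment is created; the file name below is a hypothetical example.

# Pass deployment inputs from a file (file name is illustrative)
cfy deployments create -b k8s-helm-test k8s-helm-test -i /blueprints/k8s-helm-inputs.yaml
cfy executions start -d k8s-helm-test install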
- Validation
# Verify that the new namespace identified in the blueprint configuration was created
kubectl get ns

# Verify that the required component was deployed
kubectl get pods -n <ns specified>
Any deployment error will be reported on the console. Additional logs can also be found on the Cloudify Manager pod (under /var/log/cloudify/mg*work/logs).
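To inspect those logs directly, one can exec into the Cloudify Manager pod; the commands below are a suggested way to reach the log directory mentioned above.

# Open a shell on the Cloudify Manager pod and list the worker logs
kubectl exec -it -n onap <Cloudify Manager pod> /bin/bash
ls /var/log/cloudify/mg*work/logs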
Future Enhancement
- Support Tiller ClusterIP/port as an option, instead of nodeport alone.
- Support deployment into existing namespaces.
- Logging enhancements (capture deployment errors, if any, as well).