...
- We modified the Cloudify node type definitions for Kubernetes components to include a new node property, called location_id, that specifies the name of the site where the component should be deployed. (A blueprint sketch illustrating this property appears after this list.)
- We modified the DCAE Kubernetes plugin for Cloudify to read the location_id for a component from the blueprint and to use the location_id to find the target Kubernetes cluster and to deploy the component into that cluster.
- We adopted the Kubernetes "kubeconfig" file format for storing information about the Kubernetes clusters available as deployment targets. During the initial deployment of DCAE using Helm, we create a Kubernetes ConfigMap to hold the cluster information and automatically populate it with the data for the central site. In the Dublin release, the ConfigMap must be edited manually to add clusters. (As noted above, we believe there should be an ONAP-wide store for this data, and we hope that when we have such a store, the process of adding data for a cluster can be automated.)
- We allow components deployed into remote sites to access central site services through proxies (using nginx as the server). We created a Helm chart to deploy and configure the proxy.
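As a concrete illustration of the first two changes, the fragment below sketches how a component blueprint might set location_id. This is a minimal sketch: the node type, component name, and image are illustrative placeholders rather than values from an actual DCAE blueprint; only the location_id property itself comes from the description above.

```yaml
node_templates:
  sample-collector:
    # Illustrative node type; real DCAE blueprints use the types defined
    # in the imported k8splugin_types.yaml.
    type: dcae.nodes.ContainerizedServiceComponent
    properties:
      service_component_type: sample-collector       # illustrative name
      image: example.org/sample-collector:1.0.0      # illustrative image
      # New node property for multi-site support: names the site
      # (Kubernetes cluster) where this component should be deployed.
      location_id: remote-site-1
```

The plugin looks up the location_id against the cluster information in the ConfigMap to select the target cluster; a component without a remote location_id is deployed to the central site as before.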
There is more information about these changes in this presentation.
The remaining sections of this document describe how to add information to the cluster ConfigMap and how to use the Helm chart to deploy the proxy into remote sites.
Changes for Frankfurt Release (R6)
The proxy server for remote sites relies on having access from the remote site to the config-binding-service server at the central site. Prior to R6, we accomplished this by configuring a NodePort service on the central site that exposed the config-binding-service http port (10000) and https port (10443). In R6, by default, we configure a ClusterIP service for config-binding-service. This prevents the http port from being exposed outside the central site Kubernetes cluster.
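For reference, this is a minimal sketch of the kind of NodePort service used prior to R6 to expose config-binding-service outside the central cluster. The selector label is an assumption; only the ports 10000 (http) and 10443 (https) come from the text above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: config-binding-service
spec:
  type: NodePort                  # pre-R6 behavior; R6 defaults to ClusterIP
  selector:
    app: config-binding-service   # selector label is an assumption
  ports:
    - name: http
      port: 10000
      targetPort: 10000
    - name: https
      port: 10443
      targetPort: 10443
```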
In addition, R6 changed how components get certificates for TLS. In prior releases, components that needed a certificate (a server certificate, or just a CA certificate to use for validating servers) got it from an init container (org.onap.dcaegen2.deployments.tls-init-container, version 1.0.3) that has the certificates "baked in" to the container image. In R6, the init container (org.onap.dcaegen2.deployments.tls-init-container, version 2.1.0) executes code that pulls a certificate from AAF. This will not work from a remote site, because the necessary AAF services are not exposed there.
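The sketch below shows roughly how a component pod uses the TLS init container, assuming a shared emptyDir volume between the init container and the main container. The container names, volume name, and mount path are illustrative assumptions; only the init container image names come from the text above.

```yaml
spec:
  initContainers:
    - name: tls-init
      # The R6 default image (2.1.0) pulls a certificate from AAF; the 1.0.3
      # image below has the certificates baked in and works from remote sites.
      image: onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.3
      volumeMounts:
        - name: tls-info
          mountPath: /opt/app/certs   # mount path is illustrative
  containers:
    - name: sample-component
      image: example.org/sample-component:1.0.0
      volumeMounts:
        - name: tls-info
          mountPath: /opt/app/certs
  volumes:
    - name: tls-info
      emptyDir: {}
```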
Assumptions
Pre-requisite
Artifacts
Deployment/Installation steps
...
We expect that work will be done for R7 to remedy this.
In the meantime, to use a remote site, it will be necessary to deploy DCAE at the central site with these changes (a values override sketch follows the list):
- Override dcaegen2.dcae-config-binding-service.service.type, setting it to "NodePort" instead of the default "ClusterIP".
- Override global.tlsImage. Set it to "onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.3". This will use the container with "baked in" certificates.
- Make sure all blueprints import "https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/R6/k8splugin/1.7.2/k8splugin_types.yaml", i.e., they need to use version 1.7.2 of the k8s plugin. (The blueprints loaded into inventory at deployment time currently meet this requirement.)
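The first two overrides above can be collected in a Helm values override file. A minimal sketch, assuming the key nesting follows the dotted override names given in the list (the file name is illustrative):

```yaml
# values-remote.yaml -- key paths follow the override names listed above
dcaegen2:
  dcae-config-binding-service:
    service:
      type: NodePort
global:
  tlsImage: onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.3
```

Such a file could be supplied to helm with the -f flag when deploying or upgrading DCAE at the central site.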
We expect significant changes to multi-site support in R7.
Note that as of this update (2020-03-09), there has been no testing of multi-site support in R6.
Additional References
- Multisite init-container: https://git.onap.org/dcaegen2/deployments/tree/multisite-init-container/README.md
- DCAE remote site setup charts: https://git.onap.org/dcaegen2/deployments/tree/dcae-remote-site