For information about the functionality provided by the Acumos adapter and about the adapter's architecture, see this overview.
Dependencies
The Acumos adapter relies on the following external components:
- An Acumos instance that provides access to models through the Acumos E5 interface. (See this design documentation for a discussion of the E5 interface.)
- A running instance of ONAP DCAE, with at least the minimal set of other ONAP components needed by DCAE. (See the DCAE Installation Guide for more details.)
- A running instance of ONAP DCAE MOD, connected to the ONAP DCAE instance. (See the DCAE MOD User Guide for more information.)
- A Docker registry where the adapter can store the Docker images that it creates.
...
- Clone the ONAP dcaegen2/platform repository.
- Enter the Acumos Helm chart subdirectory (adapter/acumos-deployment) in the cloned repository.
- Populate the dependencies for the chart by executing:
helm dep up
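Put together, the preceding steps might look like the following shell session. This is a sketch only; the clone URL is an assumption and may differ in your environment.
# Sketch of the steps above; the Gerrit clone URL is an assumption.
git clone "https://gerrit.onap.org/r/dcaegen2/platform"
cd platform/adapter/acumos-deployment
# Pull in the chart's dependencies before installing the chart.
helm dep up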
Create a YAML file containing information about the Docker registry and the Acumos instance that the adapter will use. The table below shows the properties that must be in this file.
| Property Name | Description |
|---------------|-------------|
| dockerUser | User name to be used by the adapter to push images to the Docker registry |
| dockerPass | Password to be used by the adapter to push images to the Docker registry |
| dockerTargetRegistry | Address of the Docker registry where the adapter will push images, in the format host_name:port |
| http_proxy | HTTP address of the proxy server when behind a proxy. To be left blank as http_proxy: " " when no proxy is used |
| https_proxy | HTTPS address of the proxy server when behind a proxy. To be left blank as https_proxy: " " when no proxy is used |
| no_proxy | Addresses that do not require a proxy for connecting to the adapter, such as the cluster nodes and the Docker host |
| acumosCert | The certificate information needed for the adapter to authenticate itself to the Acumos instance, in PEM format. The information contains the following elements, in this order: the unencrypted private key associated with the certificate; the client certificate; and, if the certificate has been signed by one or more intermediate certificate authorities, the intermediate CA certificates. Note that this property is a multi-line string in YAML |
Here is an example of the file, with some sensitive information truncated or omitted.
dockerUser: example-user
dockerPass: example-pass
dockerTargetRegistry: nexus.example.com:18448
http_proxy: xx.xx.xx.xx
https_proxy: xx.xx.xx.xx
no_proxy: xx.xx.xx.xx
acumosCert: |
  -----BEGIN PRIVATE KEY-----
  MII...
  (remainder of private key)
  -----END PRIVATE KEY-----
  -----BEGIN CERTIFICATE-----
  MII...
  (remainder of client certificate)
  -----END CERTIFICATE-----
  -----BEGIN CERTIFICATE-----
  MII...
  (remainder of intermediate CA certificate)
  -----END CERTIFICATE-----
- Deploy the Acumos adapter using helm:
helm install -n helm_release_name --namespace namespace_of_running_onap_instance -f /path/to/yaml_file /path/to/acumos_adapter_chart_directory
For example, with:
- Helm release name: testadapt
- Namespace of running ONAP instance: onap
- YAML file with Docker and certificate info: ~/acumos-adapter-demo/overrides.yaml
execute the following in the directory holding the Acumos adapter Helm chart:
helm install -n testadapt --namespace onap -f ~/acumos-adapter-demo/overrides.yaml .
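As a quick sanity check, you can watch for the adapter pod to come up. This is a sketch; the pod name containing "acumos" is an assumption about what the chart produces, and the namespace is taken from the example above.
# Hypothetical check: look for the adapter pod in the ONAP namespace.
kubectl -n onap get pods | grep -i acumos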
- If the Docker registry requires authentication for pulling an image, some additional configuration is needed. After you have imported a model and used DCAE MOD to create a flow, you will want to deploy the flow using the blueprints generated by MOD. The Cloudify Manager component executes the deployment operation, using a Kubernetes plugin (k8splugin). The plugin needs the Docker pull credentials to pass to Kubernetes, so that Kubernetes can pull the Docker images created by the adapter. There are two steps:
- Create a Kubernetes image pull secret. (See the Kubernetes documentation for details.) The secret must be created in the namespace where the ONAP instance is running. The command has the form:
kubectl -n onap_namespace create secret docker-registry secret_name --docker-server=docker_registry_server --docker-username=docker_user --docker-password=docker_password
For example, using the Docker information from the example in step 3 above and the namespace from the example in step 4, and choosing the name testadapt-adapter-pull-secret for the secret, the command would be:
kubectl -n onap create secret docker-registry testadapt-adapter-pull-secret --docker-server=nexus.example.com:18448 --docker-username=example-user --docker-password=example-pass
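To confirm that the secret was stored, a quick check such as the following may help (a sketch using the example names above):
# Verify that the image pull secret exists in the ONAP namespace.
kubectl -n onap get secret testadapt-adapter-pull-secret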
- Make the secret available to the Kubernetes plugin. The plugin's configuration, which is stored in the ONAP Consul key-value store, includes a list of image pull secrets to pass to Kubernetes. You can use the Consul graphical interface to update the configuration.
- To make the Consul graphical interface accessible to your machine, set up Kubernetes port forwarding, using:
kubectl -n onap_namespace port-forward svc/consul-server-ui 8500:8500
where onap_namespace is the namespace where the ONAP instance is running.
This will make the Consul UI service available on your machine at port 8500.
- Using a Web browser on your machine, navigate to http://localhost:8500/ui/#/dc1/kv/k8s-plugin/edit. This will take you to a page where you can edit the Kubernetes plugin configuration. The current configuration is presented in a text box that you can edit. Change the line that reads:
"image_pull_secrets" : ["onap-docker-registry-key"],
to
"image_pull_secrets" : ["onap-docker-registry-key", "secret_name"],
where secret_name is the name you gave to the image pull secret you created in step a.
Once you have made the change, press the "Update" button under the text box.
The image below shows the update using the secret name from the example in step a. If you prefer the command line, a sketch using the Consul HTTP API follows this list.
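As an alternative to the graphical interface, the same key can be read and updated through Consul's HTTP key-value API. The following is a sketch only: it assumes the port-forward from the previous step is still active, that the k8s-plugin key holds a JSON object with an image_pull_secrets list (as shown above), and that jq is available for editing the JSON.
# Fetch the current k8s-plugin configuration (raw JSON) from the Consul KV store.
curl -s http://localhost:8500/v1/kv/k8s-plugin?raw > k8s-plugin.json
# Append the new pull secret; "testadapt-adapter-pull-secret" is the example name from step a.
jq '.image_pull_secrets += ["testadapt-adapter-pull-secret"]' k8s-plugin.json > k8s-plugin-updated.json
# Write the updated configuration back to Consul.
curl -s -X PUT --data-binary @k8s-plugin-updated.json http://localhost:8500/v1/kv/k8s-plugin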
...
This verifies that the component has been onboarded and is available for use in a DCAE service design. You can also check your Docker repository to verify that the Docker image has been pushed there.
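One way to check the registry, sketched here under the assumption that it exposes the standard Docker Registry v2 API and accepts the credentials from the earlier example, is to list its repositories:
# List the repositories in the registry; the model's image should appear among them.
curl -u example-user:example-pass https://nexus.example.com:18448/v2/_catalog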
Helm Deployment
From Jakarta release onwards:
The component spec changes introduced in the Istanbul release provide the specifications used by the helm generator to create Helm charts for DCAE service components, making Helm deployment possible in place of the traditional Cloudify Manager-based deployment. The component spec changes for an Acumos model can be inspected after a successful model onboarding, which can be verified with the steps above. To deploy the model as a DCAE microservice, follow the steps below.
- Create the message router topics for the model's subscriber and publisher streams. Such topics can be created with a curl command of the form:
curl -v -k -H 'Content-Type:application/json' -u dmaap-bc-mm-prov@dmaap-bc-mm-prov.onap.org:demo123456! -d@/home/User/Documents/DCAE/mrtopic.txt https://xx.xx.xx.xx:30226/topics/create
using a file such as the following sample of mrtopic.txt:
{
"topicName": "unauthenticated.X_TestOut",
"description": "the model will be publishing prediction output values in this topic",
"partitionCount": "1",
"replicationCount": "3",
"transactionEnabled": "true"
}
Replace xx.xx.xx.xx with the IP address of the control node or the node where the message router is running. The topicName should have the format "unauthenticated.$MODELNAME_In" for the subscriber stream and "unauthenticated.$MODELNAME_Out" for the publisher stream, where $MODELNAME is replaced with the model name. The generated component spec at "http://dcaemod.simpledemo.onap.org/onboarding/components" can also be consulted for the topic names of a given model. On a successful distribution, the DCAE MOD runtime API should contain the ChartMuseum information.
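As a sketch, the topics can be listed afterwards to confirm they were created. The host and port of the Message Router API below are assumptions; adjust them to your deployment, and replace X_Test with your model name.
# List the topics known to the DMaaP Message Router and filter for the model's topics.
curl -k https://xx.xx.xx.xx:3905/topics | grep X_Test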
These charts can be downloaded from the ChartMuseum pod and then deployed, or deployed directly using ChartMuseum's address. The command below can be used to helm deploy the Acumos model using ChartMuseum's address:
$ helm install -n onap http://xx.xx.xx.xx:PORT/charts/Model.tgz --name-template devdddosnewcomp --username onapinitializer --password demo123456!
In case the model image was onboarded to a private registry, the registry details can be overridden by adding "--set global.repository=$yourregistryname:port" to the above helm command.