For information about the functionality provided by the Acumos adapter and about the adapter's architecture, see this overview.

Dependencies

The Acumos adapter relies on the following external components:

  • An Acumos instance that provides access to models through the Acumos E5 interface. (See this design documentation for a discussion of the E5 interface.)
  • A running instance of ONAP DCAE, with at least the minimal set of other ONAP components needed by DCAE. (See the DCAE Installation Guide for more details.)
  • A running instance of ONAP DCAE MOD, connected to the ONAP DCAE instance. (See the DCAE MOD User Guide for more information.)
  • A Docker registry where the adapter can store the Docker images that it creates.

Installation Prerequisites

Set up access to an Acumos instance

Before installing the Acumos adapter, identify an Acumos instance that exposes the E5 interface and arrange with the administrator to set up access.  Since E5 authenticates clients using a TLS certificate, you will need to obtain a certificate and register the subject information with the Acumos instance. 

After setting up access, you should have:

  • The URL for accessing the E5 endpoint
  • A TLS certificate private key
  • A TLS client certificate signed by a certificate authority acceptable to the Acumos instance
  • Any intermediate CA certs required to validate the client certificate
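
If you want to confirm that the private key and client certificate actually belong together before configuring the adapter, you can compare their public keys with openssl. This is an optional sanity check; the file names acumos-client.key and acumos-client.crt are placeholders for your own files.

    openssl pkey -in acumos-client.key -pubout | openssl md5
    openssl x509 -in acumos-client.crt -pubkey -noout | openssl md5

The two digests should match. Note also that the installation procedure below expects the private key in unencrypted form.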

Set up access to a Docker registry

The Acumos adapter will create Docker images that it needs to push to a registry.  Typically registries allow pushing only to users who have authenticated with a user name and password.  Some registries also require authentication from clients attempting to pull images.  Before installing the Acumos adapter, you will need to identify (or deploy) a Docker registry and set up the necessary user name(s) and password(s). 

After setting up access, you should have:

  • The URL for the registry.
  • The user name and password for Docker image push operations.
  • The user name and password for Docker image pull operations, if the registry requires authentication for pull operations.

WARNING: Using a registry that requires authentication for pull operations requires an extra (and somewhat complicated) configuration step; see step 6 of the Installation Procedure below.
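
Before going further, it is worth confirming that the push credentials actually work. One way, sketched below using a Docker client on your workstation, is to log in, tag a small test image for the registry, and push it; the registry address, user name, password, and test image name are placeholder values (the registry and credentials match the example used later in the Installation Procedure).

    docker login -u example-user -p example-pass nexus.example.com:18448
    docker pull busybox:latest
    docker tag busybox:latest nexus.example.com:18448/adapter-push-test:latest
    docker push nexus.example.com:18448/adapter-push-test:latest

If the push succeeds, the same credentials can be used in the adapter's override file described in the Installation Procedure.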

Set up access to an ONAP instance with DCAE and DCAE MOD

These instructions will deploy an instance of the Acumos adapter into the same Kubernetes cluster and namespace as the ONAP instance where DCAE and DCAE MOD are running.  Therefore you will need to be able to direct Kubernetes and Helm commands to the ONAP instance.  The simplest way to do this is to obtain a "kubeconfig" file that contains the Kubernetes API address and credentials for the ONAP instance.

After setting up access, you should have:

  • A "kubeconfig" file that you can use to access the ONAP instance where the adapter will be installed.

These instructions assume that you are familiar with Helm and Kubernetes commands and can install the kubectl and helm executables on whatever machine you will be using for driving the installation process.
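
A quick way to confirm that the kubeconfig file and your tools are set up correctly is to point kubectl and helm at the ONAP instance and look for the existing DCAE components. The kubeconfig path below is a placeholder; the onap namespace matches the examples used later on this page.

    export KUBECONFIG=/path/to/onap-kubeconfig
    kubectl -n onap get pods | grep dcae
    helm list | grep dcae

You should see the DCAE and DCAE MOD pods and Helm releases that are already deployed.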

Build the OOM common chart

If you have used OOM to deploy DCAE and DCAE MOD, you've already done this, and the common chart is available in your local Helm repository.  If not, you need to download the ONAP OOM repository (https://gerrit.onap.org/r/admin/repos/oom), set up a local Helm repository instance, and build (at a minimum) the common chart.  See the OOM Quick Start Guide for more detailed instructions.
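
As a rough sketch of that process (the exact commands depend on the OOM release and Helm version you are using, so follow the OOM Quick Start Guide for the authoritative steps):

    git clone https://gerrit.onap.org/r/oom
    cd oom/kubernetes
    # start a local Helm repository first (older OOM releases use "helm serve &", newer ones a chartmuseum container)
    make common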

Set up appropriate networking arrangements

The Acumos adapter, running inside the ONAP Kubernetes cluster, will need access to the Acumos instance and the Docker registry.   Depending on the exact network configurations, this may require setting up firewall rules at various points in the network(s) involved.   Similarly, the machine you are using to drive the installation will need access to the ONAP instance.
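
One simple way to check reachability from inside the cluster, sketched below, is to start a throwaway pod in the ONAP namespace and test the TCP connections from there. The names acumos_host, e5_port, docker_registry_host, and docker_registry_port are placeholders for your own values.

    kubectl -n onap_namespace run net-test --rm -it --image=busybox --restart=Never -- sh
    # then, inside the pod:
    nc -zv acumos_host e5_port
    nc -zv docker_registry_host docker_registry_port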

Installation Procedure

The Acumos adapter is installed using Helm, with a Helm chart that's stored in the ONAP dcaegen2/platform repository, in the adapter/acumos-deployment subdirectory. 

  1. Clone the ONAP dcaegen2/platform repository.
  2. Enter the Acumos Helm chart subdirectory (adapter/acumos-deployment) in the cloned repository.
  3. Populate the dependencies for the chart by executing:
    helm dep up
  4. Create a YAML file containing information about the Docker registry and the Acumos instance that the adapter will use.  The table below shows the properties that must be in this file.

    Property Name          Description
    dockerUser             User name the adapter uses to push images to the Docker registry
    dockerPass             Password the adapter uses to push images to the Docker registry
    dockerTargetRegistry   Address of the Docker registry where the adapter will push images, in the format host_name:port
    http_proxy             Address of the HTTP proxy server, if the adapter must connect through a proxy. Leave it blank (http_proxy: " ") when no proxy is used.
    https_proxy            Address of the HTTPS proxy server, if the adapter must connect through a proxy. Leave it blank (https_proxy: " ") when no proxy is used.
    no_proxy               Addresses that do not require the proxy when connecting to the adapter, such as the cluster nodes and the Docker host.
    acumosCert             The certificate information needed for the adapter to authenticate itself to the Acumos instance, in PEM format.  The value contains the following elements, in this order:
    • The unencrypted private key associated with the certificate
    • The client certificate
    • If the certificate has been signed by one or more intermediate certificate authorities, the intermediate certificate authority certificates

    Note that acumosCert is a multi-line string in YAML.  One way to assemble its value is sketched after the example below.

    Here is an example of the file, with some sensitive information truncated or omitted.

    dockerUser: example-user
    dockerPass: example-pass
    dockerTargetRegistry: nexus.example.com:18448
    http_proxy: xx.xx.xx.xx
    https_proxy: xx.xx.xx.xx
    no_proxy: xx.xx.xx.xx

    acumosCert: |
      -----BEGIN PRIVATE KEY-----
      MII...
      (remainder of private key)
      -----END PRIVATE KEY-----
      -----BEGIN CERTIFICATE-----
      MII...
      (remainder of client certificate)
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MII...
      (remainder of intermediate CA certificate)
      -----END CERTIFICATE-----
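
    One way to assemble the acumosCert value and append it to the overrides file is to concatenate the PEM files and indent them to match the YAML block scalar. The file names here (acumos-client.key, acumos-client.crt, intermediate-ca.crt) are placeholders for your own files.

    ( echo 'acumosCert: |' ; cat acumos-client.key acumos-client.crt intermediate-ca.crt | sed 's/^/  /' ) >> overrides.yaml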

  5. Deploy the Acumos adapter using helm:
    helm install -n helm_release_name --namespace namespace_of_running_onap_instance -f /path/to/yaml_file /path/to/acumos_adapter_chart_directory
    For example:
        Helm release name: testadapt
        Namespace of running ONAP instance: onap   
        YAML file with docker & cert info: ~/acumos-adapter-demo/overrides.yaml
        Executing in the directory holding the Acumos adapter Helm chart
    helm install -n testadapt --namespace onap -f ~/acumos-adapter-demo/overrides.yaml  .

  6. If the Docker registry requires authentication for pulling an image, some additional configuration will be needed.  After you've imported a model and used DCAE MOD to create a flow, you will want to deploy the flow using the blueprints generated by MOD.   The Cloudify Manager component executes the deployment operation, using a Kubernetes plugin (k8splugin). The plugin needs the Docker pull credentials to pass to Kubernetes, so that Kubernetes can pull the Docker images created by the adapter.  There are two steps:
    1. Create a Kubernetes image pull secret. (See the Kubernetes documentation for details.)  The secret must be created in the namespace where the ONAP instance is running.  The command has the form:
      kubectl -n onap_namespace create secret docker-registry secret_name --docker-server=docker_registry_server --docker-username=docker_user --docker-password=docker_password
      For example, using the Docker information from the example in step 4 above and the namespace from the example in step 5, and choosing the name testadapt-adapter-pull-secret for the secret, the command would be:
      kubectl -n onap create secret docker-registry testadapt-adapter-pull-secret --docker-server=nexus.example.com:18448 --docker-username=example-user --docker-password=example-pass
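      To confirm the secret was created (using the example name and namespace from above):
      kubectl -n onap get secret testadapt-adapter-pull-secret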
    2. Make the secret available to the Kubernetes plugin.  The plugin's configuration, which is stored in the ONAP Consul key-value store, includes a list of image pull secrets to pass to Kubernetes.  You can use the Consul graphical interface to update the configuration.
      1. To make the Consul graphical interface accessible to your machine, set up Kubernetes port forwarding, using:
        kubectl -n onap_namespace port-forward svc/consul-server-ui 8500:8500
        where onap_namespace is the namespace where the ONAP instance is running.
        This will make the Consul UI service available on your machine at port 8500.
      2. Using a Web browser on your machine, navigate to http://localhost:8500/ui/#/dc1/kv/k8s-plugin/edit. This will take you to a page where you can edit the Kubernetes plugin configuration.  The current configuration is presented in a text box that you edit.  Change the line that reads:
        "image_pull_secrets" : ["onap-docker-registry-key"],
        to
        "image_pull_secrets" : ["onap-docker-registry-key", "secret_name"],
        where secret_name is the name you gave to the image pull secret you created in step a.
        Once you have made the change, press the "Update" button under the text box.  (A command-line alternative using Consul's HTTP KV API is sketched below.)

        The image below shows the update using the secret name from the example in step a.
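
        As an alternative to the graphical interface, the same configuration key can be read and written through Consul's HTTP KV API. The sketch below assumes the port-forward from sub-step 1 above is still active; you edit the downloaded JSON by hand to add the secret name before putting it back.

        curl -s 'http://localhost:8500/v1/kv/k8s-plugin?raw' > k8s-plugin.json
        # edit k8s-plugin.json so that image_pull_secrets also contains the new secret name
        curl -X PUT --data-binary @k8s-plugin.json http://localhost:8500/v1/kv/k8s-plugin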

Verification

Verify that Kubernetes resources are created

Deploying the Acumos adapter will create a set of Kubernetes resources:   a Deployment, a Pod, an Ingress, a Service, a ConfigMap, and two Secrets.  Use the standard kubectl get commands to verify that they were created.  The names of all of the resources except the Service will be prefixed with the Helm release name.  The Service name will always be  dcae-acumos-adapter.

The examples below assume that the adapter was deployed into the onap namespace, using a Helm release name of testadapt.

$ kubectl -n onap get deployments | grep testadapt-
testadapt-dcae-acumos-adapter 1/1 1 1 3m26s
Note that depending on how far the adapter is in the startup process, the "1/1" (indicating the number of ready instances/number of expected instances) might show "0/1".

$ kubectl -n onap get pods | grep testadapt-
testadapt-dcae-acumos-adapter-6544d59f7f-pnv6z 2/2 Running 0 4m23s
Note that depending on how far the adapter is in the startup process, the "2/2" (indicating the number of ready containers/number of containers) might show "0/2" or "1/2".

$ kubectl -n onap get ingress | grep testadapt-
testadapt-dcae-acumos-adapter-ingress * cardigan.proto.research.att.com 80 5m2s

$ kubectl -n onap get service | grep acumos
dcae-acumos-adapter ClusterIP 10.43.199.110 <none> 9000/TCP 6m16s

$ kubectl -n onap get configmap | grep testadapt-
testadapt-dcae-acumos-adapter-configmap 1 7m38s

$ kubectl -n onap get secret | grep testadapt-
testadapt-dcae-acumos-adapter-certs Opaque 1 8m12s
testadapt-dcae-acumos-adapter-docker Opaque 1 8m12s

Verify that the Acumos adapter has become ready

The Helm chart for the adapter specifies a Kubernetes readiness probe that checks whether the adapter container is accepting TCP connections on the port used by its Web interface.  To check the current state, use the kubectl get pods command.
For example:

$ kubectl -n onap get pods | grep testadapt-
testadapt-dcae-acumos-adapter-6544d59f7f-pnv6z 2/2 Running 0 4m23s

The output should show "2/2" (2 containers ready out of a total of 2 containers in the pod).  The adapter may take several minutes to reach this state.
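
If the pod stays below "2/2" for an extended time, kubectl describe pod on the adapter pod will show the readiness probe results.  Once the pod is ready, you can also confirm that the adapter's web port answers by port-forwarding the service (run the port-forward in a separate terminal).  The check below, using the port 9000 shown in the service listing above and the example pod name, only verifies that the TCP port accepts connections, not any particular HTTP response.

$ kubectl -n onap describe pod testadapt-dcae-acumos-adapter-6544d59f7f-pnv6z
$ kubectl -n onap port-forward svc/dcae-acumos-adapter 9000:9000
$ curl -v http://localhost:9000/ -o /dev/null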

Verify adapter functionality

This step verifies that the adapter can connect to the Acumos instance and access the catalog(s) of models on the instance.

Open the DCAE MOD design tool (this normally is found at ingress_controller_address/nifi/, where ingress_controller_address is the FQDN or IP address of a Kubernetes node where an ingress controller is running).

From the "hamburger menu" on the upper right side of the design tool, select the Import item:

Select the Import Models... item.  A dialog box will appear:

Enter the URL for your Acumos instance and press the Lookup button.  This will cause the Acumos adapter to attempt to connect to the Acumos instance and retrieve any catalogs available on the instance.  If it can connect, a drop-down box listing the available catalogs will appear in the dialog, along with a button labeled Onboard.

Drop down the catalog listing and select a catalog.  A list of "solutions" (models) will appear.   The exact content of the catalog list will depend on what catalogs your Acumos instance makes available.  Similarly, the set of solutions will depend on your Acumos instance.  Select a catalog, select a solution, and select a revision.   Here's an example (your Acumos instance probably will not have this exact content):

Press the Onboard button.  The process may take a few minutes, as the adapter will need to create a Docker image and push it to your configured registry.  Eventually, you will see a pop-up message announcing success.

You can then drag a process element onto the screen and find the component that you just onboarded:

This verifies that the component has been onboarded and is available for use in a DCAE service design.  You can also check your Docker registry to verify that the Docker image has been pushed there.
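
If the registry implements the Docker Registry v2 API (Nexus-hosted Docker registries and most others do), one way to check is to list its repositories with curl, using the pull or push credentials; the registry address and credentials below are the placeholder values from the installation example (add -k if the registry uses a self-signed certificate).

$ curl -u example-user:example-pass https://nexus.example.com:18448/v2/_catalog

Look for an image corresponding to the solution and revision you onboarded.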
