Kubernetes participant sample use case:

PMSH deployment from DCAE chart server using k8s-participant (docker):

Prerequisites:

  • CL runtime, DMaaP, and the k8s-participant deployed and running.
  • DCAE up and running, so that DCAEGEN2 services such as PMSH can be deployed.
  • DCAE chart server (ChartMuseum) installed and initialized with the DCAE helm charts.

In Istanbul, the following step for configuring the k8s-participant with a Kubernetes cluster is manual:

Configure K8s-participant with an existing kubernetes cluster:


Log in to the k8s-participant container, create a folder for the Kubernetes config file under the home directory, and copy in the cluster config:

Configure k8s cluster
docker exec -it <k8s-participant container id> sh

mkdir ~/.kube
(create a file named "config" under ~/.kube containing the cluster config data)

chmod 600 ~/.kube/config


Note: We plan to automate this manual step using the HTTP participant in Jakarta, and possibly also in Istanbul as a fix if time permits.

Verification:

The helm CLI inside the k8s-participant should now point to the configured cluster.

Ex: helm ls → lists the deployments in the configured cluster.

Note: The IP address of the external cluster must be correctly configured in the config file.


The k8s-participant can now be used in the CLAMP CL workflow to deploy microservices such as PMSH into the configured cluster.

Commission Control loop to CL Runtime:

Commission Control loop TOSCA definitions to Runtime.

Commissioning Endpoint
https://<CL Runtime IP>:<Port>/onap/controlloop/v2/commission

The CL definitions are commissioned to the CL runtime, which populates the CL runtime database. The following sample TOSCA template, which contains the PMSH CL definitions under its node templates, is passed to this endpoint as the request body (please refer to the node_templates section). The helm chart properties, along with override parameters, are supplied under the node template of the CL element.

Request body:

Commissioning
tosca_definitions_version: tosca_simple_yaml_1_3
data_types:
  onap.datatypes.ToscaConceptIdentifier:
    derived_from: tosca.datatypes.Root
    properties:
      name:
        type: string
        required: true
      version:
        type: string
        required: true
node_types:
  org.onap.policy.clamp.controlloop.Participant:
    version: 1.0.1
    derived_from: tosca.nodetypes.Root
    properties:
      provider:
        type: string
        required: false
  org.onap.policy.clamp.controlloop.ControlLoopElement:
    version: 1.0.1
    derived_from: tosca.nodetypes.Root
    properties:
      provider:
        type: string
        required: false
      participant_id:
        type: onap.datatypes.ToscaConceptIdentifier
        required: true
  org.onap.policy.clamp.controlloop.ControlLoop:
    version: 1.0.1
    derived_from: tosca.nodetypes.Root
    properties:
      provider:
        type: string
        required: false
      elements:
        type: list
        required: true
        entry_schema:
          type: onap.datatypes.ToscaConceptIdentifier
  org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement:
    version: 1.0.1
    derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
    properties:
      chart:
        type: string
        required: true
      configs:
        type: list
        required: false
      requirements:
        type: string
        required: false
      templates:
        type: list
        required: false
        entry_schema:
      values:
        type: string
        required: true
topology_template:
  node_templates:
    org.onap.k8s.controlloop.K8SControlLoopParticipant:
      version: 2.3.4
      type: org.onap.policy.clamp.controlloop.Participant
      type_version: 1.0.1
      description: Participant for K8S
      properties:
        provider: ONAP   
    org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement:   
      version: 1.2.3
      type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
      type_version: 1.0.1
      description: Control loop element for the K8S microservice PMSH
      properties:
        provider: ONAP
        participant_id:
          name: org.onap.k8s.controlloop.K8SControlLoopParticipant
          version: 2.3.4
        chart:          
          chartId: 
            name: dcae-pmsh         
            version: 8.0.0
          namespace: onap 
          releaseName: pmshms
          overrideParams:
            global.masterPassword: test    
          repository:
             repoName: chartmuseum
             address: 172.125.16.1
             port: 8080
             protocol: http
             username: onapinitializer
             password: demo123456!                               
    org.onap.domain.sample.GenericK8s_ControlLoopDefinition:
      version: 1.2.3
      type: org.onap.policy.clamp.controlloop.ControlLoop
      type_version: 1.0.1
      description: Control loop for Hello World
      properties:
        provider: ONAP
        elements:      
        - name: org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement
          version: 1.2.3  
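
The template above can be posted to the commissioning endpoint with a plain HTTP request; a minimal curl sketch, assuming the template is saved locally as a YAML file and that the runtime accepts basic auth and a YAML content type (file name and credentials are placeholders, not from this page):

```shell
# Commission the PMSH CL definitions to the CL runtime.
# Host, port, credentials and the file name are placeholders/assumptions.
curl -k -u <user>:<password> \
  -X POST "https://<CL Runtime IP>:<Port>/onap/controlloop/v2/commission" \
  -H "Content-Type: application/yaml" \
  --data-binary @pmsh-control-loop.yaml
```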


Instantiate Control loop:

Instantiation dialogues are used to create, set parameters on, instantiate, update, and remove Control Loop instances. Assuming a suitable Control Loop Definition exists in the Commissioned Control Loop Inventory, a Control Loop instance is brought up by instantiating it and then updating its state. The following sample JSON represents the request body for instantiating the CL elements under the control loop in the UNINITIALISED state.

Instantiation Endpoint
https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation

Request body:
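
The sample request body was not captured on this page. As a rough, hypothetical sketch (the exact field names and the instance name "PMSHInstance0" are assumptions and should be checked against the CL runtime's API documentation), an instance referencing the commissioned definition above might look like:

```json
{
  "definition": {
    "name": "org.onap.domain.sample.GenericK8s_ControlLoopDefinition",
    "version": "1.2.3"
  },
  "name": "PMSHInstance0",
  "version": "1.0.1",
  "description": "PMSH control loop instance"
}
```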


Update Control loop to PASSIVE state:

When the Control loop is updated to the "PASSIVE" state, the Kubernetes participant fetches the node templates of all control loop elements and deploys the helm chart of each CL element into the cluster. The following sample JSON input is passed in the request body.

Control loop Update
https://<CL Runtime IP>:<Port>/onap/controlloop/v2/command

Request body:
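
The sample request body was not captured on this page. As a hypothetical sketch (the field names are assumptions to verify against the runtime's API, and the instance name is whatever was chosen at instantiation):

```json
{
  "orderedState": "PASSIVE",
  "controlLoopIdentifierList": [
    {
      "name": "PMSHInstance0",
      "version": "1.0.1"
    }
  ]
}
```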


Control loops can be "UNINITIALISED" after deployment:

In the UNINITIALISED state, all the helm deployments under a control loop are uninstalled from the cluster.

Control loop Update
https://<CL Runtime IP>:<Port>/onap/controlloop/v2/command

Request body:
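
The body follows the same shape as the PASSIVE command above, with the ordered state set to UNINITIALISED; a hypothetical sketch (field names and the instance name are assumptions):

```json
{
  "orderedState": "UNINITIALISED",
  "controlLoopIdentifierList": [
    { "name": "PMSHInstance0", "version": "1.0.1" }
  ]
}
```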



K8s-participant as a standalone application:

The participant can be deployed as a standalone application in docker by configuring the Kubernetes cluster manually, as described in the first step.

It supports REST endpoints for onboarding, installing, uninstalling, and retrieving helm charts from its local chart repository. The following REST endpoints are exposed for the various helm operations. The participant maintains helm charts in its local chart repository, with provision to add and delete charts.


POST: Onboard a chart to the k8s-participant’s local file system:

Helm charts can be onboarded, along with their overrides yaml file, to the participant's local chart repository. The endpoint accepts the helm chart .tgz file and the overrides.yaml file as form-data, together with a JSON input describing the chart info. Sample input below:

Onboard a chart
https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/onboard/chart

Request body:
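
As a sketch of the multipart request (the part names "chart", "values", and "info" are assumptions; check the participant's API documentation for the exact names):

```shell
# Onboard the dcae-pmsh chart with its overrides file and chart info.
# Host, port and the multipart part names are placeholders/assumptions.
curl -k -X POST "https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/onboard/chart" \
  -F "chart=@dcae-pmsh-8.0.0.tgz" \
  -F "values=@overrides.yaml" \
  -F 'info={"chartId":{"name":"dcae-pmsh","version":"8.0.0"},"namespace":"onap","releaseName":"pmshms"};type=application/json'
```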



POST: Install a pre-onboarded chart from the local chart repository:

An onboarded helm chart can be installed to the Kubernetes cluster. The endpoint accepts a JSON input specifying the chart name and version to install. The deployment information is taken from the chart info provided at onboarding time. Sample JSON input:

Install chart
https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/install

Request body:
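
The sample body was not captured on this page; since the endpoint takes a chart name and version, a plausible, hypothetical sketch (field names are assumptions) is:

```json
{
  "name": "dcae-pmsh",
  "version": "8.0.0"
}
```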

GET: Retrieve all the available local charts:

This API retrieves all the charts, along with their versions, from the local chart repository.

Retrieve charts
https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/charts
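
A minimal curl sketch (host and port are placeholders):

```shell
# List all charts held in the participant's local chart repository
curl -k "https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/charts"
```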


DELETE: Delete a helm chart from local repository:

Deletes a helm chart from the local chart repository of the Kubernetes participant.

Delete a chart
https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/chart/{name}/{version}
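
For example, to delete the onboarded dcae-pmsh chart (a sketch; host and port are placeholders):

```shell
# Remove dcae-pmsh version 8.0.0 from the local chart repository
curl -k -X DELETE "https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/chart/dcae-pmsh/8.0.0"
```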


DELETE: Uninstall a helm chart from Kubernetes cluster:

Any installed helm chart can be uninstalled from the cluster.

Uninstall a chart
https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/uninstall/{name}/{version}
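
For example, to uninstall the dcae-pmsh chart deployed earlier (a sketch; host and port are placeholders):

```shell
# Uninstall the dcae-pmsh 8.0.0 release from the Kubernetes cluster
curl -k -X DELETE "https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/uninstall/dcae-pmsh/8.0.0"
```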


Add a remote repository to the k8s-participant:

Remote helm repositories can be added via both TOSCA and the REST API. Once a remote helm repository is configured on the participant, any helm chart from that repository can be installed by the k8s-participant. Sample JSON body for adding a remote repository:

Add a repository
https://<K8s-participant ip>:<port>/onap/k8sparticipant/helm/repo

Request body:
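
The sample body was not captured on this page; mirroring the repository block used in the TOSCA template above (the REST field names are an assumption to verify against the participant's API documentation), it might look like:

```json
{
  "repoName": "chartmuseum",
  "address": "172.125.16.1",
  "port": 8080,
  "protocol": "http",
  "username": "onapinitializer",
  "password": "demo123456!"
}
```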

