
Contributors:
Isaku Yamahata <isaku.yamahata@intel.com> <isaku.yamahata@gmail.com>
Bin Hu <bh526r@att.com>
Munish Agarwal <munish.agarwal@ericsson.com>



https://gerrit.onap.org/r/#/c/30027/

The discussion continues in the spec document above. Please review and comment there.


1. Intro

This API design document discusses the northbound API of the MultiCloud Kubernetes (K8S) plugin for the Beijing release.

2. Scope for Beijing release (R-2)

2.1. Basic principle

  1. First baby step to support containers in a Kubernetes cluster via a MultiCloud SBI / K8S plugin

  2. Minimal implementation with zero impact on the MVP of the MultiCloud Beijing work

3. Use Cases

  1. Sample VNFs (vFW and vDNS)

(the vCPE use case is post-Beijing)


3.1. Integration scenario

  1. Register/unregister a k8s cluster instance that is already deployed (dynamic deployment of k8s is out of scope)

  2. Onboard VNFDs/NSDs that use containers

  3. Instantiate/de-instantiate containerized VNFs in a K8S cluster through the K8S plugin

  4. VNF configuration with the sample VNFs (vFW, vDNS)


4. Northbound API design

4.1. REST API impact and base URL

As with the other cases, the k8s plugin has its own URL prefix so that it does not affect the rest of the multicloud northbound API.


4.2. Metadata

  • PATH: swagger.json

Metadata for kubernetes API definitions

  • METHOD: GET
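
A minimal sketch of fetching the metadata, assuming a hypothetical base URL for the plugin (the real prefix is whatever the plugin registers under, per 4.1):

import requests

# Hypothetical plugin base URL; the actual prefix is deployment-specific.
BASE = "http://msb.onap.example/api/multicloud-k8s/v0"

resp = requests.get(BASE + "/swagger.json")
resp.raise_for_status()
print(resp.json()["info"])  # standard swagger metadata such as title/version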



4.3. Register/unregister kubernetes cluster instance

  • PATH: clusters

  • METHOD: POST

    • Register kubernetes cluster instance

    • Returns cloud-id

    • K8s instance tracking (locations etc.)

  • METHOD: DELETE, GET, PUT


NOTE:

HPA (kubernetes cluster features/capabilities) is out of scope for Beijing

Assumption

  • The k8s cluster instance is already pre-built/deployed

  • Dynamic instantiation is out of scope (for Beijing)


Attribute | Type | Req | CRUD | Comment
----------|------|-----|------|--------
TBD       |      |     |      |
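
A hedged sketch of registering a pre-deployed cluster. The request attributes are illustrative only (the attribute table above is still TBD); only the path, method, and returned cloud-id come from the API description above, and the base URL is a placeholder.

import requests

BASE = "http://msb.onap.example/api/multicloud-k8s/v0"  # hypothetical prefix

# Illustrative attributes for a pre-deployed cluster (actual schema is TBD).
cluster = {
    "name": "edge-cluster-1",
    "api-server": "https://10.0.0.10:6443",
    "location": "lab-1",
}

resp = requests.post(BASE + "/clusters", json=cluster)
resp.raise_for_status()
cloud_id = resp.json()["cloud-id"]  # registration returns a cloud-id
print(cloud_id)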

5. Kubernetes proxy API

  • PATH: clusters/<cloud-id>/proxy/<resources>

  • METHOD: All methods

A proxy (or pass-through) API to the kubernetes API: the request is forwarded to the kubernetes API server at {kubernetes api prefix}/<resources>, with the authorization adjusted, and without any changes to the HTTP request body.

For details of the kubernetes API, please refer to https://kubernetes.io/docs/reference/api-overview/

Note: kubernetes has no concept of region or tenant (at this point), so region and tenant_id are not in the path.


Attribute | Type | Req | CRUD | Comment
----------|------|-----|------|------------------------------
          |      |     |      | Passthrough to kubernetes API
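
For example, listing pods through the proxy might look like the following sketch (base URL and cloud-id are placeholders); the plugin forwards the request to the cluster's API server unchanged, apart from the authorization adjustment.

import requests

BASE = "http://msb.onap.example/api/multicloud-k8s/v0"  # hypothetical prefix
cloud_id = "example-cloud-id"

# GET clusters/<cloud-id>/proxy/api/v1/pods is forwarded to
# {kubernetes api prefix}/api/v1/pods on the cluster's API server.
resp = requests.get(BASE + "/clusters/" + cloud_id + "/proxy/api/v1/pods")
resp.raise_for_status()
for pod in resp.json().get("items", []):
    print(pod["metadata"]["name"])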

5.1. Kubernetes yaml

  • PATH: clusters/<cloud-id>/yaml

  • METHOD: POST

    • Same as kubectl create -f xxx.yaml.

Maybe this isn't necessary, as the caller can easily convert k8s yaml into k8s API calls.

It is a shortcut for POSTing multiple k8s resources.

Attribute | Type | Req | CRUD | Comment
----------|------|-----|------|------------------------------
resources |      |     |      | List of kubernetes yaml files
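
A sketch of the shortcut, assuming the request body carries the yaml documents in the resources attribute from the table above (base URL and file name are placeholders):

import requests

BASE = "http://msb.onap.example/api/multicloud-k8s/v0"  # hypothetical prefix
cloud_id = "example-cloud-id"

# The same yaml that would be fed to kubectl create -f.
with open("vfw-deployment.yaml") as f:
    deployment_yaml = f.read()

# "resources" is the list of kubernetes yaml files (see table above).
resp = requests.post(BASE + "/clusters/" + cloud_id + "/yaml",
                     json={"resources": [deployment_yaml]})
resp.raise_for_status()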

5.2. Kubernetes: Helm

TBD: need discussion with Munish.

  • PATH: clusters/<cloud id>/helm/<helm URL: grpc>

  • METHOD: all methods

  • Pass-through to the helm tiller API server with authorization adjustment


Attribute | Type | Req | CRUD | Comment
----------|------|-----|------|--------
TBD       |      |     |      |

5.3. Kubernetes: CSAR

Temporary workaround: this API will be removed after the Beijing PoC, once the SO adaptor issue is resolved.

  • PATH: clusters/<cloud id>/csar

  • METHOD: POST

Extract the k8s yaml files from the CSAR and create the corresponding k8s resources.

Attribute | Type | Req | CRUD | Comment
----------|------|-----|------|--------
TBD       |      |     |      |
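
A rough sketch of what the plugin could do behind this endpoint. A CSAR is a zip archive, so the k8s yaml files can be pulled out and each document then created through the kubernetes API; the helper name and the assumption that yaml files may sit anywhere in the archive are illustrative only.

import io
import zipfile

import yaml  # PyYAML


def extract_k8s_yaml(csar_bytes):
    # Collect every yaml document found in the CSAR (zip) archive.
    docs = []
    with zipfile.ZipFile(io.BytesIO(csar_bytes)) as csar:
        for name in csar.namelist():
            if name.endswith((".yaml", ".yml")):
                docs.extend(yaml.safe_load_all(csar.read(name)))
    return [d for d in docs if d]

Each extracted document would then be created in the cluster, e.g. through the proxy API above.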

6. On-boarding/packaging/instantiation

We shouldn’t change the current way.

  • Short term: use additional node types/capability types etc. (VDU)

  • Longer term: work with the TOSCA community to add node types to express k8s.

6.1. Packaging and on-boarding


Reuse CSAR so that the existing workflow doesn't need to change. For Beijing, CSAR is used with its own TOSCA node definitions. (In the longer term, once the multicloud project has a model-driven API, that will be used.)

6.2. TOSCA nodes definitions

Introduce new node types to wrap the k8s ingredients (k8s yaml, helm etc.). This is a short-term solution until the model-driven API is defined/implemented.

  • onap.multicloud.nodes.kubernetes.proxy

  • onap.multicloud.nodes.kubernetes.helm

These wrap a kubernetes yaml file or helm chart as necessary. cloudify.nodes.Kubernetes isn't reused, in order to avoid definition conflicts.

6.3. Instantiation

The SO ARIA adaptor can be used (with a twist: SO talks to the multicloud k8s plugin instead of ARIA). See: Instantiation and SO.

7. OOF: TBD

  • Policy matching is done by OOF.

  • For Beijing, enhancement to policy is a stretch goal.

  • Decomposing the service design (NSD, VNFD) from the VNF package is done by SO with OOF (homing)


8. Kubernetes cluster authentication

Note: https://kubernetes.io/docs/admin/authentication

Because Kubernetes cluster installation is not covered here, we should treat all users as normal users when authenticating to the Kubernetes VIM. There are several ways to authenticate to a Kubernetes cluster:

8.0.1. Using kubeconfig file

Users provide each Kubernetes VIM's information as a cluster, user, or context in kubeconfig files. For example:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://1.2.3.4
  name: development
- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: scratch
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch
current-context: ""
kind: Config
preferences: {}
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file

In this scenario, when a user wants to deploy a VNF, the user should provide:

  • Kubeconfig file path: Path to the kubeconfig file to use for CLI requests

  • Cluster: The name of the kubeconfig cluster to use

  • Context: The name of the kubeconfig context to use

  • User: The name of the kubeconfig user to use

These files are stored in the file system of the host where the multicloud k8s plugin is installed. Because all tenant VIM information is saved as files, this may not be a good way to manage Kubernetes clusters, and it complicates management of the Kubernetes VIMs.

For details on configuring access to multiple clusters, please refer to https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters
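
A minimal sketch of consuming those four inputs with the kubernetes Python client (the file path is a placeholder; the context and namespace come from the example kubeconfig above):

from kubernetes import client, config

# Load the named context from the user-supplied kubeconfig file;
# the context selects the cluster and user entries.
config.load_kube_config(config_file="/path/to/kubeconfig",
                        context="dev-frontend")

v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod("frontend").items:
    print(pod.metadata.name)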

8.0.2. Using Bearer token

Similar to the above approach, but we store only the parameters necessary to validate a user with a Bearer token. When registering a Kubernetes VIM, the user should fill in the following information (a sketch follows the list):

  • Kubernetes API address: The address and port of the Kubernetes API server (e.g. 192.168.1.2:6443)

  • Bearer token: Bearer token for authentication to the API server

  • Client certificate file: Path to a client certificate file for TLS (optional)
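
A sketch with the kubernetes Python client, using only the parameters listed above (all values are placeholders):

from kubernetes import client

configuration = client.Configuration()
configuration.host = "https://192.168.1.2:6443"              # Kubernetes API address
configuration.api_key = {"authorization": "Bearer <token>"}  # Bearer token
configuration.cert_file = "/path/to/client.crt"              # optional client cert for TLS
# configuration.ssl_ca_cert can point at the cluster CA bundle if needed.

v1 = client.CoreV1Api(client.ApiClient(configuration))
for ns in v1.list_namespace().items:
    print(ns.metadata.name)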

8.0.3. Using basic authentication

A different way: username and password are used for authentication (see the sketch after this list).

  • Kubernetes API address: The address and port of the Kubernetes API server (e.g. 192.168.1.2:6443)

  • Username: Username for basic authentication to the API server

  • Password: Password for basic authentication to the API server

  • Client certificate file: Path to a client certificate file for TLS (optional)
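
A plain HTTP sketch of basic authentication against the API server (addresses, credentials, and paths are placeholders):

import requests

API = "https://192.168.1.2:6443"  # Kubernetes API address

# Username/password are sent in the standard HTTP Basic Authorization header.
resp = requests.get(API + "/api/v1/namespaces",
                    auth=("admin", "secret"),
                    verify="/path/to/ca.crt")  # a client cert could be passed via cert=
resp.raise_for_status()
for ns in resp.json()["items"]:
    print(ns["metadata"]["name"])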


Note:

  • Using a bearer token or basic authentication (username and password) has some benefits: users provide the authentication information of the Kubernetes VIM where their VNFs will be deployed.

  • It is similar to OpenStack: users can provide their Kubernetes VIM information for registering.

  • It works with the Kubernetes Java client and kubectl.

9. Links
