Kubernetes clusters managed through the ONAP Multicloud K8s Plugin project in R4/R5 do not report hardware features to A&AI. Consequently, during the CNF/VNF life cycle, CNFs/VNFs that require or recommend specific hardware cannot, at instantiation time, be dynamically placed on the cluster and node that provide the needed hardware capabilities.
This Frankfurt Release epic (MULTICLOUD-729) adds the ability to discover, report, and use hardware features during the CNF/VNF life cycle in order to accelerate use cases running on ONAP.
Flow Diagram for HPA for K8s
Discovery and Labeling of Node Features
Discovery of HPA features will be done using the Node Feature Discovery (NFD) add-on for Kubernetes (https://github.com/kubernetes-sigs/node-feature-discovery). NFD will be deployed as a DaemonSet, with a pod on each node, and will discover the hardware features of each Kubernetes node and mark that node with feature labels (https://github.com/kubernetes-sigs/node-feature-discovery#feature-labels).
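For illustration, this is roughly what a node object looks like once NFD has labeled it (the node name and label set below are examples only; the actual labels depend on the hardware of each node):
apiVersion: v1
kind: Node
metadata:
  name: k8s-worker-01                                   # example node name
  labels:
    feature.node.kubernetes.io/cpu-cpuid.AVX2: "true"
    feature.node.kubernetes.io/memory-numa: "true"
    feature.node.kubernetes.io/network-sriov.capable: "true"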
Registering of K8s Clusters
Registration of a K8s cluster will happen through the ESR UI (see: https://onap.readthedocs.io/en/latest/guides/onap-user/cloud_site/openstack/index.html; use k8s as the cloud type). ESR will then send a POST request to
http://{{MSB_IP}}:{{MSB_PORT}}/api/v1/{{cloud-owner}}/{{cloud-region-id}}/registry
The Multicloud K8s Plugin registrationHandler will receive this POST and register the new Kubernetes cluster with the K8s Plugin using the connectivity code/API.
ESR Update?
We need to check what is sent from ESR to the registry URL. Currently the K8s Plugin uses a kubeconfig file to access different K8s clusters. We will either need to generate a kubeconfig file (does ESR send enough information to do this? If not, we may need to extend ESR) or access the cluster using another form of authentication (check the K8s code to see whether this is possible; an auth token, perhaps).
- It appears that we can use an auth token and a CA certificate to talk to the Kubernetes API. This may allow us to keep the current ESR flow for adding cloud regions, with additional code in the K8s Plugin to accept either a kubeconfig or a token (see the kubeconfig sketch after this list).
- Another route, tracked in AAI-2640, is to extend ESR directly so that a kubeconfig can be entered as a parameter and sent to the Multicloud registry API.
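If the token route is taken, the K8s Plugin could build (or accept) a kubeconfig along the following lines. This is only a sketch with placeholder values; whether ESR can supply all of these fields still has to be confirmed.
apiVersion: v1
kind: Config
clusters:
- name: k8s-cluster-01                               # placeholder cluster name
  cluster:
    server: https://k8s-api.example.com:6443         # placeholder API server URL
    certificate-authority-data: <base64-encoded-CA-certificate>
users:
- name: multicloud-plugin                            # placeholder user entry
  user:
    token: <service-account-bearer-token>
contexts:
- name: k8s-cluster-01-context
  context:
    cluster: k8s-cluster-01
    user: multicloud-plugin
current-context: k8s-cluster-01-context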
The registrationHandler will then query the K8s cluster under registration, using the Node App or Node Plugin code, to get the list of nodes and the labels on each node.
- registrationHandler - onap/multicloud-k8s/src/k8splugin/api/registrationhandler.go
- Registers the K8s cloud with the K8s Plugin using onap/multicloud-k8s/src/k8splugin/internal/connection/connection.go (check whether we need a kubeconfig or an ESR change)
- Requires update of onap/multicloud-k8s/src/k8splugin/api/api.go
- Invokes AAI Client to K8s AAI Module to update AAI with Cloud/Tenant/Flavor information for the cluster
Node App or Node Plugin
- Node App - onap/multicloud-k8s/src/k8splugin/internal/app/node or onap/multicloud-k8s/src/k8splugin/internal/plugin/node.go
- Requests labels from nodes in K8s cluster
- Requires an update of onap/multicloud-k8s/src/k8splugin/internal/app/client.go to extend it to the Node resource (helm.KubernetesResource does not have this resource type)
Reporting of Cluster Info to A&AI
Reporting of HPA features to A&AI is done by the registrationHandler. The node data will be mapped to the cloud-region/tenant/flavors schema for that cluster and reported to A&AI. The registrationHandler plans to use the A&AI module under development for Frankfurt to do this reporting: we will send a request to the A&AI module instructing it to update the cloud-region, create a tenant, and create flavors that match the node feature labels in that cloud-region.
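As an illustration of the intended mapping (all field names and values below are placeholders in the style of the OpenStack HPA model in A&AI; the exact schema used for K8s clusters is still to be settled):
cloud-region:
  cloud-owner: k8s-cloud-owner                # placeholder
  cloud-region-id: k8s-region-one             # placeholder
  tenant:
    tenant-id: default-tenant                 # placeholder tenant created for the cluster
    flavors:
    - flavor-name: flavor-numa-sriov          # placeholder, derived from node feature labels
      hpa-capabilities:
      - hpa-feature: numa                     # from feature.node.kubernetes.io/memory-numa=true
      - hpa-feature: sriovNICNetwork          # from feature.node.kubernetes.io/network-sriov.capable=true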
Update of CNF Artifacts to Include nodeSelector
CNF and VNF helm charts will be updated with nodeSelector requirements and recommendations that line up with the node labels created by NFD during feature discovery (see the nodeSelector sketch after the example labels below).
Example Node Features:
# Hardware Feature Labels:
feature.node.kubernetes.io/cpu-cpuid.AESNI=true
feature.node.kubernetes.io/cpu-cpuid.AVX2=true
feature.node.kubernetes.io/cpu-cpuid.AVX=true
feature.node.kubernetes.io/cpu-cpuid.FMA3=true
feature.node.kubernetes.io/cpu-cpuid.IBPB=true
feature.node.kubernetes.io/cpu-cpuid.STIBP=true
feature.node.kubernetes.io/cpu-hardware_multithreading=true
feature.node.kubernetes.io/cpu-pstate.turbo=true
feature.node.kubernetes.io/cpu-rdt.RDTCMT=true
feature.node.kubernetes.io/cpu-rdt.RDTMON=true
feature.node.kubernetes.io/memory-numa=true
feature.node.kubernetes.io/network-sriov.capable=true
feature.node.kubernetes.io/pci-0300_102b.present=true
feature.node.kubernetes.io/storage-nonrotationaldisk=true
# Software Feature Labels:
feature.node.kubernetes.io/kernel-config.NO_HZ=true
feature.node.kubernetes.io/kernel-config.NO_HZ_FULL=true
feature.node.kubernetes.io/kernel-version.full=3.10.0-957.el7.x86_64
feature.node.kubernetes.io/kernel-version.major=3
feature.node.kubernetes.io/kernel-version.minor=10
feature.node.kubernetes.io/kernel-version.revision=0
feature.node.kubernetes.io/system-os_release.ID=centos
feature.node.kubernetes.io/system-os_release.VERSION_ID.major=7
feature.node.kubernetes.io/system-os_release.VERSION_ID.minor=
feature.node.kubernetes.io/system-os_release.VERSION_ID=7
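For example, a chart's pod/deployment template could pin a workload to nodes with AES-NI and SR-IOV support via a nodeSelector keyed on the NFD labels above (a sketch only; the labels actually used will depend on each workload's requirements):
# excerpt from a Deployment (or other pod-template) spec in a chart
spec:
  template:
    spec:
      nodeSelector:
        feature.node.kubernetes.io/cpu-cpuid.AESNI: "true"
        feature.node.kubernetes.io/network-sriov.capable: "true"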
Where needed, add resource limits and requests to the Resource Bundle charts, e.g.:
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
Charts to Update
https://github.com/onap/multicloud-k8s/tree/master/kud/demo/firewall
https://github.com/onap/multicloud-k8s/tree/master/kud/tests/vnfs/edgex/helm/edgex
For CPU pinning and similar features, investigate CMK for Kubernetes (see links at the bottom of the page) and specifying a nodeSelector of "cmk.intel.com/cmk-node": "true", as sketched below.
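A minimal sketch of the CMK case, assuming the node label documented by the CPU-Manager-for-Kubernetes project:
nodeSelector:
  "cmk.intel.com/cmk-node": "true"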
For HPA features not labeled by NFD, investigate the creation of feature detector hooks (user-specific features).
Instantiation of K8s HPA
See "Deploying vFw and EdgeXFoundry Services on Kubernetes Cluster with ONAP" for the instantiation flow for K8s. See the vFW deployment with HPA (https://docs.onap.org/en/latest/submodules/integration.git/docs/docs_vfwHPA.html?highlight=hpa) for the HPA use case.
If we follow the vFW-for-K8s deployment on ONAP, the only difference in the process for HPA would be the addition of homing (see the vFW deployment with HPA docs above) to the SO instantiation request. This enables a call to OOF, which uses A&AI and Policy to select the correct cluster.
Policy and OOF will use the same path to discover the capabilities of K8s nodes as is used for OpenStack, since we reuse the same A&AI data model (OpenStack cloud-region/tenant/flavors). OOF will use A&AI information to home particular CNF workloads to cloud regions that contain the requested features. When the K8s Plugin instantiates those workloads, Kubernetes will read the nodeSelector preferences and place each workload on nodes that have the features needed to accelerate its function.
High Level Task Overview
- MULTICLOUD-729
- Ensure NFD (Node Feature Discovery, the Kubernetes SIGs project) deployed with Multicloud + K8s Plugin is working and labeling Kubernetes nodes (MULTICLOUD-741)
- Add registrationHandler to the K8s Plugin and expand the K8s Plugin API to work with ESR VIM registration of K8s clusters (MULTICLOUD-740)
- Use POST/DELETE/GET of http://{{MSB_IP}}:{{MSB_PORT}}/api/multicloud-k8s/v1/{{cloud-owner}}/{{cloud-region-id}}/registry
- Ensure the registrationHandler can get K8s feature labels (the labels discovered and placed by NFD to represent hardware features)
- Ensure the registrationHandler can populate A&AI cloud-region/tenant/compute flavors based on the cluster under registration
- Register the K8s cluster internally with the K8s Plugin (MULTICLOUD-739)
- Add Node App in K8s Plugin to retrieve node labels from a cluster
- Identify and address any gaps in Policy and OOF for matchmaking of CNFs/VNFs using the K8s Plugin
- Update CNF packages (Resource Bundles, CSAR) to include nodeSelector and resource requests and limits
- Update docs for HPA, K8s, Multicloud, and ESR to reflect any changes (MULTICLOUD-742)
Identify & Address Gaps in AAI/ESR Cloud Registration
- AAI-2640
Identify & Address Gaps to Policy and OOF for K8s HPA Homing
As of now there are no known gaps in OOF/Policy that would prevent this from working.
Background Links:
Deploying vFw and EdgeXFoundry Services on Kubernetes Cluster with ONAP
vFW K8S examples mapping to AAI
Setting up Closed Loop for K8S vFW - initial pass
https://github.com/intel/CPU-Manager-for-Kubernetes
https://github.com/kubernetes-sigs/node-feature-discovery
https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/
https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/
https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/
https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/