The purpose of this page is to measure the load that a complete ONAP platform puts on a multi-node Kubernetes cluster orchestrated and deployed by a Rancher instance. Because an OOM-based ONAP platform is deployed through HELM, it is useful to have a standard set of initial resource requirements, so that enough capacity is allocated across the Kubernetes nodes up front and the cluster is not driven into a failure state by resource exhaustion.

Rancher’s default Kubernetes template provides three components that were used here to measure metrics. All of them run as pods within the Kubernetes cluster, so the cluster itself can manage them:

  • Heapster: This is the component that collects metrics and events from all the Kubernetes nodes. It is designed to be modular, so that multiple backends are supported, such as InfluxDB (used here), Kafka, ElasticSearch, Google Cloud Monitoring, etc.

  • InfluxDB: This is the backend component that Heapster uses to store the collected metric data (i.e. Heapster’s Sink). It is a time-series database that can be queried by many visualization GUIs, including Grafana and Kibana.

  • Grafana: This is the frontend component that queries InfluxDB to obtain the metrics collected by Heapster. It can contain many dashboards, broken up into panels, that can be customized to show all types of data provided by the backend data source. A sketch of querying this pipeline directly follows this list.
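
To sanity-check this pipeline, the sink can also be queried directly, without going through Grafana. Below is a minimal sketch using the influxdb Python client; the service host, the default Heapster database name "k8s", the "cpu/usage_rate" measurement, and the "type"/"nodename" tags are assumptions based on Heapster's usual InfluxDB sink layout, so verify them against your own deployment.

from influxdb import InfluxDBClient  # pip install influxdb (InfluxDB 1.x client)

# Host and database are assumptions: Heapster's InfluxDB sink defaults to a
# database named "k8s", and the addon usually exposes a "monitoring-influxdb"
# service in the kube-system namespace.
client = InfluxDBClient(host='monitoring-influxdb.kube-system',
                        port=8086, database='k8s')

# Mean per-node CPU usage (millicores) over the last hour, grouped by node.
query = ('SELECT mean("value") FROM "cpu/usage_rate" '
         "WHERE \"type\" = 'node' AND time > now() - 1h "
         'GROUP BY "nodename"')

for (measurement, tags), points in client.query(query).items():
    print(tags, list(points))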


In order to run these measurements, the following setup was used:

  • An 11-node Kubernetes cluster (as currently being tested by the Integration team), which is the suggested size for deploying a complete ONAP platform
    • Docker version 17.03.2
    • Kubernetes version 1.8.10

  • A single Rancher instance (v1.6.14)
    • Docker version 17.03.2

  • The VM flavor for each of these VMs is the OpenStack standard flavor “m1.xlarge”, which is equivalent to 8 vCPUs, 16 GB of RAM, and a 160 GB HDD

  • ONAP "Beijing" Release (complete, with all components enabled as part of the HELM chart deployment), from the Gerrit branch "BEIJING"
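
Taken together, the 11 Kubernetes nodes therefore provide 11 × 8 = 88 vCPUs, 11 × 16 GB = 176 GB of RAM, and 11 × 160 GB = 1,760 GB of disk (the Rancher VM is additional); this is the capacity envelope against which the measurements below should be read.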


By using these components to measure resource usage (Heapster), store it in a backend (InfluxDB), and visualize the data in a frontend (Grafana), the following relevant metrics were collected.


RESOURCE USAGE CHART

These metrics cover Kubernetes resources that keep running over time; resources that stop after a while (i.e. Kubernetes Jobs, whose containers exit as soon as the job request is completed) are excluded, so the focus is on Kubernetes Pods.


NOTE: All the numbers below assume that no additional actions are being performed on the ONAP applications, i.e. the resources are running at an idle state.


  • For the complete ONAP instance usage (the blue row): this measures the mean overall Kubernetes cluster usage, obtained by summing the resource usage reported by every Kubernetes node. It takes into account a complete ONAP instance with all of its HELM charts enabled and deployed.
  • For the aggregated pod usage (the remaining rows): each row sums the usage of all the pods deployed by that component's HELM chart (a query sketch of this aggregation follows this list).
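
As a concrete illustration of that per-chart aggregation, the sketch below computes Min / Max / Avg memory usage across one chart's pods. The "memory/usage" measurement, the "type"/"namespace_name"/"pod_name" tags, the "onap" namespace, and the chart-name pod prefix (e.g. "aaf") are assumptions about the Heapster schema and OOM pod naming, not values taken from this page.

from influxdb import InfluxDBClient

client = InfluxDBClient(host='monitoring-influxdb.kube-system',
                        port=8086, database='k8s')

def chart_memory_stats(chart_prefix, namespace='onap'):
    # Min / Max / Avg memory usage (bytes) across every pod whose name
    # starts with the chart prefix; type='pod' keeps the focus on Pods,
    # matching the note above about excluding short-lived Jobs.
    query = (
        'SELECT min("value"), max("value"), mean("value") '
        'FROM "memory/usage" '
        "WHERE \"type\" = 'pod' "
        "AND \"namespace_name\" = '{ns}' "
        'AND "pod_name" =~ /^{prefix}/ '
        'AND time > now() - 24h'
    ).format(ns=namespace, prefix=chart_prefix)
    return list(client.query(query).get_points())

print(chart_memory_stats('aaf'))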

Each row reports vCPU, filesystem, memory (RAM), and network usage, each as Min / Max / Avg values.

Complete ONAP Instance (All 25 HELM Chart Components - Beijing Release)

    vCPUs:        Min: 8 mcores | Max: 23.79 | Avg: 8.03
    Filesystem:   Min: 2.25 GB | Max: 21.03 GB | Avg: 15.87 GB
    Memory (RAM): Min: 89 MB | Max: 122.1 GB | Avg: 114.4 GB
    Network:      Min (Tx / Rx): 31 kBps / 69 kBps | Max (Tx / Rx): 2.08 MBps / 11.79 MBps | Avg (Tx / Rx): 386 kBps / 368 kBps

The per-chart rows below follow the same layout (vCPUs in mcores, filesystem, memory (RAM), and network Tx / Rx, each as Min / Max / Avg); their measurements are left blank in this chart:

  • HELM AAF - Overall Pod Usage
  • HELM AAI - Overall Pod Usage
  • HELM APPC - Overall Pod Usage
  • HELM CLAMP - Overall Pod Usage
  • HELM CLI - Overall Pod Usage
  • HELM CONSUL - Overall Pod Usage
  • HELM DCAEGEN2 - Overall Pod Usage
  • HELM DMAAP - Overall Pod Usage
  • HELM ESR - Overall Pod Usage
  • HELM LOG - Overall Pod Usage
  • HELM MSB - Overall Pod Usage
  • HELM MULTICLOUD - Overall Pod Usage
  • HELM NBI - Overall Pod Usage
  • HELM OOF - Overall Pod Usage
  • HELM POLICY - Overall Pod Usage
  • HELM PORTAL - Overall Pod Usage
  • HELM ROBOT - Overall Pod Usage
  • HELM SDC - Overall Pod Usage
  • HELM SDNC - Overall Pod Usage
  • HELM SNIRO-EMULATOR - Overall Pod Usage
  • HELM SO - Overall Pod Usage
  • HELM UUI - Overall Pod Usage
  • HELM VFC - Overall Pod Usage
  • HELM VID - Overall Pod Usage
  • HELM VNFSDK - Overall Pod Usage

OVERALL CLUSTER - CPU USAGE


OVERALL CLUSTER - FILESYSTEM USAGE


OVERALL CLUSTER - MEMORY (RAM) USAGE


OVERALL CLUSTER - NETWORK USAGE

