
Background

FPGAs and GPUs are becoming commonplace as a way to take load off CPU cores. Some of the use cases for FPGAs and GPUs include:

  • Machine Learning and Deep Learning inference/training offloads
  • Crypto and compression offloads
  • Protocol (such as IPsec and PDCP) offloads

Though fixed-function accelerators do provide better performance, multiple accelerators are required in a given compute node if there are workloads of different types.

Being programmable, the acceleration function in an FPGA/GPU can be changed dynamically based on need; hence FPGAs and GPUs are considered capable of supporting various kinds of workloads.

OpenStack and Kubernetes orchestrators are adding support for FPGAs and GPUs.

ONAP, being an end-to-end service orchestrator, makes decisions on placing VNFs/workloads; hence awareness of FPGA/GPU is required in ONAP, like any other HPA feature.

Since OpenStack is at an advanced stage (in the Rocky release) of providing FPGA support (via the Cyborg project), initial support in ONAP would target OpenStack-based cloud regions.
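As an illustration of how an OpenStack cloud region can expose an accelerator to workloads, the sketch below tags a Nova flavor with a Cyborg device profile via the `accel:device_profile` extra spec and boots a server with it. This is a hedged sketch: the profile name, flavor name, and image are placeholders, and the exact extra-spec wiring between Nova and Cyborg has evolved across OpenStack releases, so consult the release-specific documentation for the cloud region in use.

```shell
# Assumption: a Cyborg device profile named "fpga-af-profile" already exists
# in this cloud, describing the desired FPGA/acceleration function.

# Create a flavor and point it at the device profile (extra-spec name per
# Nova's Cyborg integration; placeholder sizes).
openstack flavor create --vcpus 4 --ram 8192 --disk 40 fpga.medium
openstack flavor set fpga.medium --property accel:device_profile=fpga-af-profile

# Any server booted with this flavor is scheduled to a compute node with a
# matching accelerator, which Cyborg prepares before the instance starts.
openstack server create --flavor fpga.medium --image my-vnf-image my-fpga-vnf
```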

FPGAs can be used in the following ways:

  1. Pre-programmed: the acceleration function is loaded ahead of time.
  2. Orchestrator (Cyborg) programmed: supports dynamic programming — based on the workload being brought up, OpenStack programs the acceleration function on the compute node where the workload is about to be brought up.
  3. Workload programmed: the workload itself programs the FPGA.

From the ONAP perspective, there is not much difference between pre-programmed and orchestrator-programmed; these two are commonly known as AFaaS (Acceleration Function as a Service). The workload-programmed case is called FPGAaaS (FPGA as a Service).
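The classification above can be sketched as a small lookup, mapping each FPGA usage mode to the service model ONAP would treat it as (the enum and function names are illustrative, not from any ONAP codebase):

```python
from enum import Enum


class FpgaUsageMode(Enum):
    PRE_PROGRAMMED = "pre-programmed"                    # function loaded ahead of time
    ORCHESTRATOR_PROGRAMMED = "orchestrator-programmed"  # Cyborg loads it on demand
    WORKLOAD_PROGRAMMED = "workload-programmed"          # workload loads the bitstream itself


def service_model(mode: FpgaUsageMode) -> str:
    """Return the service model for a usage mode, per the distinction above:
    pre-programmed and orchestrator-programmed both look like AFaaS to ONAP,
    while workload-programmed is FPGAaaS."""
    if mode is FpgaUsageMode.WORKLOAD_PROGRAMMED:
        return "FPGAaaS"
    return "AFaaS"
```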

Note on OpenStack FPGA support


Design

