Project Name:
- Proposed name for the project:
ONAP Optimization Framework (ONAP-OF), formerly SNIRO
- Proposed name for the repository: optf
Subrepositories:
optf/has -- homing and allocation service
optf/cmso -- change management scheduling service
optf/osdf -- optimization service design framework
Project description:
The Optimization Framework project, formerly known as SNIRO (Service, Network, Infrastructure, and Resource Optimization), aims to provide a platform for addressing different ONAP optimization needs, including core platform optimization services such as VNF placement and resource allocation (optimization as a service). Legacy applications typically support specific business or network application needs and are developed independently. When a carrier or service provider has a wide range of business and network applications, legacy approaches often result in siloed tools and duplicated efforts, with the associated development and operational overhead.
When using the Optimization Framework, service designers and operators create policies. The framework gathers information from these policies and from data sources related to a problem, translates that into constraints for optimization problems, and solves them using reusable optimization capabilities. This policy- and model-driven approach promotes efficient reuse of optimization functions. Thus, one of the main objectives of the framework is to provide a unified approach that eliminates code redundancy and reduces the overhead associated with managing different optimization applications.
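To make the policy-to-constraint translation idea more concrete, the following is a minimal sketch; the policy fields, candidate attributes, and function names are invented for illustration and do not reflect the actual OF data model:

# Hypothetical sketch of translating declarative policies into solver constraints.
# The policy format and candidate attributes are illustrative, not the real OF model.

def policy_to_constraints(policies):
    """Translate simple policy dicts into callable constraints on a candidate."""
    constraints = []
    for policy in policies:
        if policy["type"] == "max_latency_ms":
            limit = policy["value"]
            constraints.append(lambda cand, limit=limit: cand["latency_ms"] <= limit)
        elif policy["type"] == "allowed_regions":
            regions = set(policy["value"])
            constraints.append(lambda cand, regions=regions: cand["region"] in regions)
    return constraints

# Example: filter candidate cloud sites against designer-specified policies.
policies = [
    {"type": "max_latency_ms", "value": 20},
    {"type": "allowed_regions", "value": ["us-east", "us-west"]},
]
candidates = [
    {"site": "dc1", "region": "us-east", "latency_ms": 12},
    {"site": "dc2", "region": "eu-west", "latency_ms": 8},
    {"site": "dc3", "region": "us-west", "latency_ms": 35},
]
constraints = policy_to_constraints(policies)
feasible = [c for c in candidates if all(check(c) for check in constraints)]
print(feasible)  # only dc1 satisfies both policies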
Initial use cases for the Optimization Framework:
- Placement of VNFs (homing).
- Change management scheduling (providing schedule of changes for upgrading of VNFs under given constraints).
- Effective allocation of licenses from a pool of licenses in a geographically varied context.
Examples of other intended use cases:
- Network routing optimization
- Self-Organizing Network (SON)
- Energy-optimized networks
Project Scope:
The main goals of the Optimization Framework are to provide:
- a set of reusable micro-services (e.g., API, data access) that allows new optimizers to be implemented more easily;
- a standardized interface for optimizers to communicate with other optimizers (e.g., homing optimizer interacting with license optimizer to check a solution for any license related constraints);
- scalability and high-availability for the optimization micro-services and supporting micro-services;
- a unified toolkit for developing optimization applications via extensible APIs. This facilitates developing new optimization applications independent of how the underlying optimization modules are implemented;
- a library of optimization engines/solvers. This will include an API for plugging other custom entities (custom data sources; proprietary or open source optimization engines/solvers; etc.);
- a library for the translation of policies into constraints for an optimization engine;
- (in the longer term) a mechanism for interacting with the ONAP Operations Manager (OOM) to take actions based on the optimization solution.
The term optimization is used here in the context of providing a solution (or set of solutions) for a problem specified in terms of the state (available resources, topology, objective, etc.) and additional constraints specified as a set of policies. It must be noted that this is different from the use of optimization in other contexts such as "performance optimization", "platform stability/reliability", "scalability", etc. While such services may need information from optimization solutions (e.g., "when should one take an action", "how much additional capacity is needed to ensure meeting a specific SLA"), they can be considered as applications that can utilize the optimization framework.
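As a small illustration of a problem expressed this way (state as decision variables, policies as constraints, plus an objective), the following sketch uses OR-Tools, one of the pluggable open-source solvers listed below; the variables and numbers are purely illustrative:

# Minimal sketch: an optimization problem as variables (state), constraints, and
# an objective, solved with OR-Tools CP-SAT (one of the pluggable solvers).
from ortools.sat.python import cp_model

model = cp_model.CpModel()

# State: how many instances of a VNF to place at each of two candidate sites.
site_a = model.NewIntVar(0, 10, "instances_site_a")
site_b = model.NewIntVar(0, 10, "instances_site_b")

# Constraints derived from policies: total demand must be met; site A capacity is limited.
model.Add(site_a + site_b == 6)   # demand for 6 instances
model.Add(site_a <= 4)            # capacity limit at site A

# Objective: minimize cost (site B is assumed to be more expensive).
model.Minimize(3 * site_a + 5 * site_b)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("site A:", solver.Value(site_a), "site B:", solver.Value(site_b))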
Architecture Alignment:
- How does this project fit into the rest of the ONAP Architecture?
- Provides optimization (e.g. homing, scheduling, network, license allocation, and capacity planning) as a service to other ONAP subsystems
- Provides adapters to other ONAP systems (e.g. policy, A&AI, SDC, etc.) for optimization application developers
- Uses REST and Data Bus interfaces in a service agnostic manner
- Models and artifacts are specified in SDC format, while rules/constraints are specified in Policy Service
- The homing and license allocation applications are expected to support the Multi-VIM Project
- The scheduling application (change management scheduling) is expected to support the Network Change Management Project
- What other ONAP projects does this project depend on?
- A&AI (e.g. network topology, cloud sites, service instances, scheduling/ticketing data)
- DCAE (e.g. cloud-level resource utilization)
- SDN-C (e.g. network utilization, available capacity in a VNF instance)
- SDC (e.g. available license artifacts)
- Policy Service (e.g. rules/constraints)
- Policy-driven VNF Orchestration
- How does this align with external standards/specifications?
- Are there dependencies with other open source projects?
- Open source optimization solvers (e.g., GLPK/CBC, OR-Tools; these are pluggable)
- Python eco-system (modules for schema validation, database adapters, etc.)
- While the code is primarily written in Python, we anticipate that it will be a mixture of Java and Python
Use case Description:
ONAP requires platform-level optimization services, such as placement of VNFs (homing) and change management scheduling, that function in any multi-site, multi-VIM, and multi-service environment. It could also benefit from a framework which promotes the reuse of software tools and algorithms, allowing users to construct new optimization services and to extend/enhance existing platform optimization services.
This project currently provides the following two core platform optimization services, which are built to be service independent, policy driven, and extensible, along with an optimization framework for enhancing these services or creating new ones.
a) HAS (Homing and Allocation Service): a policy-driven service placement and resource allocation service that allows deployment of services and VNFs on a multi-site, multi-VIM infrastructure. This service performs a function similar to classic OS schedulers or the OpenStack scheduler. The role of HAS is to select which clouds and sites the elements of a service should be placed in, while respecting service constraints (latency, availability of specific platform features) as well as platform needs (cost).
b) CMSO (Change Management Scheduling Optimizer): a policy-driven workflow schedule optimizer for change management planning. CMSO helps schedule workflows in time to maximize parallel change management activities, while respecting dependencies between the workflows.
This will be delivered as three modules: one for HAS, one for CMSO, and one for the service design framework. HAS and CMSO can execute both as services on DCAE and as independent processes.
The set of platform optimization services will grow over time as ONAP platform needs arise, and the optimization framework is envisioned to handle this as effectively as possible, with little or no new code development needed to create new services. The optimization service design framework (OSDF) can be used to build new optimization applications for users of ONAP, as well as to build new platform optimization services or to extend the existing platform services through plugins. To demonstrate its capabilities, OSDF has been used to entirely build the change management scheduling optimizer (CMSO), as well as to build VNF license optimization and connectivity optimization plugins for the homing and allocation service (HAS). OSDF is intended to enable future applications such as energy optimization in networks, optimal route selection, and radio access network (RAN) runtime performance optimization.
We describe the current platform optimization services and the optimization framework, along with their architectural fit, one by one.
HAS: policy-driven service homing and resource allocation on a multi-site, multi-VIM infrastructure. Homing/placement and allocation of resources is one of the fundamental requirements of provisioning a service over cloud (or even non-cloud) infrastructure. HAS allows designers of services/VNFs to specify service-specific placement requirements using policy constraints (e.g., geo-redundancy requirements for disaster recovery) and objective functions (e.g., minimize latency) linked to the service model, either in the Policy Service or directly in the TOSCA service model. Then, at service deployment time, HAS collects information from AAI, DCAE, and other sources to determine a placement solution that meets the service constraints while considering both the service objective function and the service provider's preferences (e.g., cost) and constraints (e.g., available capacity). Once a placement decision is made, a resource allocation (reservation) decision can be registered in AAI or with the resource manager for the resource, if necessary.
HAS can home a request either to a cloud site where new virtual resources are to be created or to an existing service instance. As the deployed services become more complex (e.g., multiple VNFs with different constraints for individual VNFs and for combinations of VNFs) and the cloud infrastructure grows large (e.g., dozens or more possible sites), such a capability becomes essential for managing the services and the infrastructure.
HAS will be designed to be used as a building block for both initial deployment and runtime redeployment due to failures or runtime capacity increases (scale-out). It will be designed to be usable for all platform placement functions, including placement of VMs, containers (e.g., for DCAE micro-services), or VNF-specific resources. A plugin model will be provided to allow placement of additional resource types such as licenses and VNF resources. Plugin models will also allow extension by adding new constraint types, optimizer types, and objective functions.
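As a rough, self-contained illustration of the kind of decision HAS makes, the following toy brute-force sketch homes two VNFs onto candidate sites under latency constraints while minimizing cost; the site names, latencies, costs, and constraint values are invented, and this is not the actual HAS algorithm or data model:

# Toy homing sketch: choose a site for each VNF of a service so that per-VNF
# latency constraints are met and total cost is minimized. This brute-force
# enumeration only illustrates the decision HAS makes; it is not the HAS code.
from itertools import product

sites = {
    "dc-east": {"latency_ms": 10, "cost": 5},
    "dc-west": {"latency_ms": 30, "cost": 3},
    "dc-edge": {"latency_ms": 5,  "cost": 8},
}
# Per-VNF constraint: maximum tolerable latency (hypothetical service policy).
vnfs = {"vFW": 15, "vDNS": 40}

best = None
for assignment in product(sites, repeat=len(vnfs)):
    placement = dict(zip(vnfs, assignment))
    if any(sites[site]["latency_ms"] > vnfs[vnf] for vnf, site in placement.items()):
        continue  # violates a latency constraint
    cost = sum(sites[site]["cost"] for site in placement.values())
    if best is None or cost < best[0]:
        best = (cost, placement)

print(best)  # (8, {'vFW': 'dc-east', 'vDNS': 'dc-west'})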
Architecture alignment:
- SO invokes HAS to get a placement and license allocation decision when deploying a new service, or when it is called to redeploy a service upon site failure, or upon increasing the capacity of an already existing service. This is particularly useful in multi-site or multi-VIM environments.
- VF-C/App-C may need to invoke HAS to get a placement decision if an existing VNF must be rebuilt due to failure or increase in capacity.
- OOM may need to invoke HAS to get a placement decision when deploying ONAP components e.g., to get a DCAE micro-service to be placed in proximity to the VNF it is monitoring.
- Policy may need to support HAS by storing placement policies and associating them with service models.
- HAS uses information stored in AAI (e.g., available inventory), DCAE (e.g., performance, utilization), SDN-C and other ONAP components to make placement decisions.
- Multi-VIM: HAS allows placement constraints to be specified that drive workloads to different cloud providers when appropriate (e.g., a VNF requires some specific cloud platform) or desired (e.g., a VNF requires a level of reliability or performance that only some cloud providers can meet).
CMSO: VNF change management scheduling optimizer. The Change Management (CM) application is responsible for managing and enforcing changes (e.g., device upgrade, configuration change) in the cloud and network infrastructure. Currently, a major part of CM scheduling is performed manually, which is time consuming, inefficient, and prone to service-impacting errors. CMSO provides recommended schedules of changes for upgrading VNFs under given constraints, taking into account the current state of schedules and the relationships among network elements. The primary challenge is deciding when to schedule changes such that service disruption is minimized. OF offers the CMSO service to the CM application, which can be invoked before any changes are scheduled. A service designer designs a change request in SDC and configures the schedule requirements through policies. Prior to scheduling changes via the Service Orchestrator (SO), the designer makes a call to CMSO from SDC. CMSO collects the existing scheduling information from the available ticketing system and vertical dependency information from AAI, and calculates a solution to the scheduling problem. Finally, the recommended schedule is returned to SDC, where it is verified by the designer before the schedules are committed to SO.
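The following is a very small sketch of the underlying scheduling idea (grouping changes into windows that can run in parallel while respecting dependencies); the change names and dependencies are hypothetical, and this is not the actual CMSO optimizer:

# Toy change-scheduling sketch: group VNF changes into windows so that changes
# within a window can run in parallel, while a change never runs before the
# changes it depends on. Purely illustrative; not the actual CMSO optimizer.

def schedule_in_windows(changes, depends_on):
    """Return a list of windows; each window is a set of changes that can run in parallel."""
    remaining = set(changes)
    done, windows = set(), []
    while remaining:
        # A change is ready once all of its dependencies are already done.
        ready = {c for c in remaining if depends_on.get(c, set()) <= done}
        if not ready:
            raise ValueError("cyclic dependency among changes")
        windows.append(ready)
        done |= ready
        remaining -= ready
    return windows

changes = ["upgrade-vFW-1", "upgrade-vFW-2", "upgrade-vDNS", "upgrade-vRouter"]
depends_on = {
    "upgrade-vDNS": {"upgrade-vFW-1"},                      # vDNS change must follow vFW-1
    "upgrade-vRouter": {"upgrade-vFW-1", "upgrade-vFW-2"},
}
for i, window in enumerate(schedule_in_windows(changes, depends_on), 1):
    print(f"window {i}: {sorted(window)}")
# window 1: ['upgrade-vFW-1', 'upgrade-vFW-2']
# window 2: ['upgrade-vDNS', 'upgrade-vRouter']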
Architecture alignment:
- AAI (e.g. network topology, cloud sites, service instances, scheduling/ticketing data)
- DCAE (e.g. cloud-level resource utilization)
- SDC (e.g. available VNF license artifacts)
- Policy Service (e.g. rules/constraints)
Optimization Service Design Framework (OSDF): a set of design-time optimization libraries, tools, and microservices (MS) that facilitate and simplify the creation of new runtime optimization functionality. The goal of this framework is to avoid siloed optimization tools and the associated duplicated efforts and overheads. Indeed, the current platform services HAS and CMSO use the framework extensively in their own development. Other potential optimization services that can be built using this framework include energy optimization in networks, optimal route selection for various network services, and radio access network (RAN) performance optimization.
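To make the plugin idea concrete, the following is a minimal hypothetical sketch of how a framework like this could expose a solver-plugin interface; the class, function, and registry names are invented for illustration and do not reflect the actual OSDF APIs:

# Hypothetical sketch of a solver-plugin interface for an optimization framework.
# Class names and signatures are illustrative only, not the real OSDF API.
from abc import ABC, abstractmethod

class SolverPlugin(ABC):
    """A pluggable optimization engine: takes a problem description, returns a solution."""
    @abstractmethod
    def solve(self, problem: dict) -> dict:
        ...

_REGISTRY = {}

def register_solver(name):
    """Decorator that registers a solver plugin under a given name."""
    def wrapper(cls):
        _REGISTRY[name] = cls()
        return cls
    return wrapper

@register_solver("greedy-placement")
class GreedyPlacement(SolverPlugin):
    def solve(self, problem):
        # Pick the cheapest candidate for every demand (placeholder logic).
        cheapest = min(problem["candidates"], key=lambda c: c["cost"])
        return {"placement": {d: cheapest["site"] for d in problem["demands"]}}

# An application built on the framework just names the solver it wants.
problem = {"demands": ["vFW"], "candidates": [{"site": "dc1", "cost": 4}, {"site": "dc2", "cost": 2}]}
print(_REGISTRY["greedy-placement"].solve(problem))  # {'placement': {'vFW': 'dc2'}}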
Architecture alignment:
• How does this framework fit into the rest of the ONAP Architecture?
- Offers a set of microservices (MS) that provide reusable optimization functionality, which can optionally be used by other ONAP components if required by a use case
- Provides a framework for building optimization services as a part of the ONAP ecosystem
- Provides adapters to other ONAP systems (e.g. policy, AAI, SDC, etc.) for optimization application developers
- Uses REST and Data Bus interfaces in a service agnostic manner
- Models and artifacts are specified in SDC format, while rules/constraints are specified in Policy Service
• What other ONAP projects does this project depend on?
- AAI (e.g. network topology, cloud sites, service instances, scheduling/ticketing data)
- DCAE (e.g. cloud-level resource utilization)
- SDN-C (e.g. network utilization, available capacity in a VNF instance)
- SDC (e.g. available VNF license artifacts)
- Policy Service (e.g. rules/constraints)
• How does this align with external standards/specifications?
N/A
• Are there dependencies with other open source projects?
- Open source optimization solvers (e.g. GLPK/CBC, OR-Tools, optaplanner – these are pluggable)
- Python eco-system (modules for schema validation, database adapters, etc.)
- While the code is primarily written in Python, we anticipate it to be a mixture of Java and Python
Resources:
- Primary Contact Person
- Sarat Puthenpura - AT&T
- Names, gerrit IDs, and company affiliations of the committers
- Sastry Isukapalli - AT&T
- Sarat Puthenpura - AT&T
- Shankaranarayanan Puzhavakath Narayanan - AT&T
- Maopeng Zhang - ZTE
- Yoram Zini - Cisco
- Names and affiliations of any other contributor
- Ankit Patel - AT&T
- Avteet Chayal - AT&T
- Matti Hiltunen - AT&T
- Joe D'Andrea - AT&T
- Rúben Borralho - Celfinet
- Mark Volovic - Amdocs
- Manoj K Nair - Netcracker manoj.k.nair@netcracker.com
- Alexander Vul - Intel
- Rakesh Sinha - AT&T
- Max Zhang - AT&T
- Carlos De Andrade - AT&T
- Kevin Smokowski - AT&T
- Ramki Krishnan - VMware
- Gil Hellmann - Wind River
- Ikram Ikramullah - AT&T
- Dileep Ranganathan - Intel
- Gueyoung Jung - AT&T
- Project Roles (include RACI chart, if applicable)
Other Information:
- link to seed code (if applicable)
- Vendor Neutral
- if the proposal is coming from an existing proprietary codebase, have you ensured that all proprietary trademarks, logos, product names, etc., have been removed?
- Meets Board policy (including IPR)
Use the above information to create a key project facts section on your project page
Key Project Facts
Project Name:
- JIRA project name: Optimization Framework
- JIRA project prefix: OPTFRA
Repo name: optf
Lifecycle State:
Primary Contact: Sarat Puthenpura (sarat@research.att.com)
Project Lead: Sarat Puthenpura
mailing list tag [Should match Jira Project Prefix]:
Committers:
Sastry Isukapalli sastry@research.att.com AT&T
Sarat Puthenpura sarat@research.att.com AT&T
Matti Hiltunen hiltunen@att.com AT&T
Shankaranarayanan Puzhavakath Narayanan snarayanan@research.att.com AT&T
Maopeng Zhang zhang.maopeng1@zte.com.cn ZTE
Alexander Vul alex.vul@intel.com Intel
Yoram Zini (yzini) yzini@cisco.com Cisco
...
Vladimir Yanover vyanover@cisco.com Cisco
*Link to TSC approval:
Link to approval of additional submitters:
...