Updated Proposal of Multi VIM/Cloud (Jan-17-2019)
Project Name:
Proposed name for the project: Multi VIM/Cloud for Infrastructure Providers
Proposed name for the repository: multicloud
Project description:
Motivation
ONAP needs underlying physical and virtualized infrastructure to deploy, run, and manage network services and VNFs.
Service providers look for flexibility and choice when selecting virtual and cloud infrastructure implementations, for example on-premises private cloud, public cloud, or hybrid cloud deployments, and the related network backends.
ONAP needs to maintain platform backward compatibility with every new release.
Goal
The Multi-VIM/Cloud project aims to enable ONAP to deploy and run on multiple infrastructure environments, for example OpenStack and its different distributions (e.g., vanilla OpenStack, Wind River), public and private clouds (e.g., VMware, Azure), and microservice containers.
The project also enables ONAP to request and make use of composed physical resources whenever possible.
The project will provide a Cloud Mediation Layer supporting multiple infrastructures and network backends, thereby effectively preventing vendor lock-in.
The project decouples the evolution of the ONAP platform from that of the underlying cloud infrastructure, minimizing the impact on a deployed ONAP when the underlying cloud infrastructure is upgraded independently.
Scope:
The scope of the Multi-VIM/Cloud project is a pluggable and extensible framework that
provides a Multi-VIM/Cloud Mediation Layer, which includes the following functional modules:
Provider Registry to register infrastructure sites/locations/regions and their attributes and capabilities in A&AI
Infra Resource to manage resource requests (compute, storage, and memory) from SO, DCAE, or other ONAP components, so that VMs are created and VNFs instantiated on the right infrastructure
SDN Overlay to configure overlay networks via local SDN controllers for the corresponding cloud infrastructure
VNF Resource LCM to perform VM lifecycle management as requested by the VNFM (APP-C or VF-C)
FCAPS to report infrastructure resource metrics (utilization, availability, health, performance) to DCAE Collectors for Closed Loop Remediation
provides a common northbound interface (NBI) / Multi-Cloud APIs exposing the functional modules to be consumed by SO, SDN-C, APP-C, VF-C, DCAE, etc. (an illustrative sketch follows this list)
provides a common abstraction model
provides the ability to
handle differences in models
generate or extend the NBI based on the functional model of the underlying infrastructure
implement adapters for different providers.
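To make the NBI concrete, the following sketch shows how a consumer such as SO might register a cloud region and request compute resources through the mediation layer. The base URL, endpoint paths, and payload fields are assumptions made for illustration only; they do not represent a finalized Multi-Cloud API.

```python
# Illustrative sketch only: the base URL, endpoint paths, and payload fields below
# are assumptions for this proposal, not a finalized Multi-Cloud API.
import requests

MULTICLOUD_NBI = "http://multicloud-broker:9001/api/multicloud/v0"  # hypothetical base URL

def register_cloud_region(cloud_owner, cloud_region, capabilities):
    """Register an infrastructure site/region and its capabilities (hypothetical call)."""
    payload = {
        "cloud-owner": cloud_owner,
        "cloud-region-id": cloud_region,
        "capabilities": capabilities,  # e.g. {"reliability": 0.999, "sriov": True}
    }
    resp = requests.post(
        f"{MULTICLOUD_NBI}/{cloud_owner}_{cloud_region}/registry",
        json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

def request_compute(cloud_owner, cloud_region, vcpus, memory_mb, disk_gb):
    """Ask the Infra Resource module for compute capacity (hypothetical call)."""
    payload = {"vcpus": vcpus, "memory": memory_mb, "disk": disk_gb}
    resp = requests.post(
        f"{MULTICLOUD_NBI}/{cloud_owner}_{cloud_region}/resources/compute",
        json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()
```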
Across the project, any differentiated functionality will be implemented in a way that lets ONAP users decide whether or not to use it.
The Multi-VIM/Cloud project will align with the Common Controller Framework to enable reuse by different ONAP elements.
Deliverables of Release One:
In R1, we target the following:
Maintain the OpenStack APIs (Nova, Neutron, etc.) as the primary interface to mitigate risk and impact on other projects (see the sketch after this list)
As of this date, we expect to support vanilla OpenStack based on Ocata and commercial OpenStack distributions based on Mitaka (see below)
Other OpenStack distributions should work in theory, but supporting them within the scope of R1 requires the respective cloud providers (Red Hat, Mirantis, Canonical, etc.) to commit resources
Provide support for four cloud providers, aligned with the R1 use case
Vanilla OpenStack, VMware Integrated OpenStack, Wind River Titanium Server, and Microsoft Azure (Azure to provide a Heat-to-ARM translator)
Minimal goal: any single cloud provider from the above across multiple sites (TIC edge and TIC core)
including implementation of the adapters for the above clouds
Stretch goal: mix and match of different cloud providers across multiple sites
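Because R1 keeps the OpenStack APIs as the primary interface, existing ONAP components can continue to issue OpenStack-style requests while Multi-VIM/Cloud proxies them to the selected provider. The minimal sketch below assumes a hypothetical proxy URL layout; it is illustrative only, not the project's finalized API.

```python
# Illustrative sketch only: the proxy URL layout is an assumption, not a finalized API.
import requests

def list_servers_via_multicloud(base_url, cloud_owner, cloud_region, token):
    """List Nova servers in a given cloud region through a hypothetical Multi-Cloud proxy,
    reusing the request/response shape of the native OpenStack compute API."""
    url = f"{base_url}/{cloud_owner}_{cloud_region}/compute/v2.1/servers"
    resp = requests.get(url, headers={"X-Auth-Token": token}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("servers", [])
```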
Architecture Alignment:
How does this project fit into the rest of the ONAP Architecture?
The proposed Multi-VIM/Cloud Mediation Layer consists of five functional modules, which interact with SO, SDN-C, APP-C, VF-C, DCAE, and A&AI respectively. It will act as the single access point called by these components to access the cloud and virtual infrastructure. Furthermore, it will interact with the SDN-C component to configure overlay networks via local SDN controllers for both intra-DC and inter-DC connectivity of the corresponding cloud infrastructure; it is thus also the single access point through which SDN-C works with other local SDN controllers. Applications/VNFs can be homed to different cloud providers through the standard ONAP methods. For automated homing (SNIRO), cloud providers can register attributes that differentiate their cloud platforms (e.g., reliability, latency, other capabilities) in A&AI, and application placement policies/constraints can request these specific properties (e.g., reliability > 0.999).
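As a concrete illustration of such a placement constraint, a policy fragment might look like the sketch below; the field names and structure are assumptions made for illustration and do not reflect an actual SNIRO policy or A&AI schema.

```python
# Hypothetical homing constraint: field names are illustrative and do not
# reflect an actual SNIRO policy or A&AI schema.
placement_policy = {
    "policy-name": "vnf-reliability-placement",
    "constraints": [
        {
            "attribute": "reliability",  # registered by the cloud provider in A&AI
            "operator": ">",
            "value": 0.999,
        }
    ],
}
```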
Is there any overlap with other ONAP components or other Open Source projects?
To the best of our knowledge, there is no overlap, intentional or unintentional, with other ONAP components or other open source projects.
What other ONAP projects does this project depend on?
Consumers of Multi-VIM/Cloud – SO, SDN-C, APP-C, VF-C, DCAE
Producers for Multi-VIM/Cloud – DCAE, A&AI
Dependencies – Modeling
Alignment of Reusable APIs – Common Controller Framework
Indirect Impact and Collaboration – SNIRO, SDC
How does this align with external standards/specifications?
Supports existing functions
Information/data models defined by the ONAP Modeling project
Compliant with the ETSI NFV architectural framework
VIM, NFVI, Vi-Vnfm, and Or-Vi
Are there dependencies with other open source projects?
Cassandra, OpenStack Java SDK, AWS Java SDK, Azure, and bare metal.
Resources: Resources and Repositories (Deprecated)#MultiVIM/Cloud
PTL
Bin Yang, biny993, bin.yang@windriver.com, Wind River
Names, Gerrit IDs, emails, and company affiliations of the committers
Anbing Zhang, zhanganbing@chinamobile.com, China Mobile
Bin Hu, bh526r, bh526r@att.com, AT&T
Bin Yang, biny993, bin.yang@windriver.com, Wind River
Ethan Lynn, ethanlynnl@vmware.com, VMware
Haibin Huang, haibin, Intel
Sudhakar Reddy, sudhakarreddy, Amdocs
Xinhui Li, xinhuili, lxinhui@vmware.com, VMware
Victor Morales, victor.morales@intel.com, Intel
Names and affiliations of any other contributors
Alex Vul, alex.vul@intel.com, Intel
Alon Strikovsky, alon.Strikovsky@amdocs.com, Amdocs
Anil Vishnoi, vishnoianil@gmail.com,
Andrew Philip, aphilip@microsoft.com, Microsoft
Arash Hekmat, arash.hekmat@amdocs.com, Amdocs
Bin Sun, bins@vmware.com, VMware
Claude Noshpitz, cn5542@att.com, AT&T
Dominic Lunanuova, dgl@research.att.com, AT&T
Gautam S, GAUTAMS@amdocs.com, Amdocs
Gil Hellmann, gil.hellmann@windriver.com, Wind River
Haibin Huang, haibin.huang@intel.com, Intel
Hong Hui Xiao, honghui_xiao@yeah.net,
Huang Zhuoyao, haunt.zhuoyao@zte.com.cn, ZTE
Isaku Yamahata, isaku.yamahata@intel.com, Intel
Jinhua Fu, fu.jinhua@zte.com.cn, ZTE
John Murray, jm2932@att.com, AT&T
Kanagaraj Manickam, mkr1481, kanagaraj.manickam@huawei.com, Huawei
Liang Ke, lokyse@163.com,
Madhu Nunna, mnunna@mirantis.com, Mirantis
Manoj K Nair, manoj.k.nair@netcracker.com, NetCracker Technology
Maopeng Zhang, zhang.maopeng1@zte.com.cn, ZTE
Marcin Bednarz, mbednarz@mirantis.com, Mirantis
Matti Hiltunen, hiltunen@att.com, AT&T
Michael O'Brien, frank.obrien@amdocs.com, Amdocs
Piyush Garg, piyush.garg1@amdocs.com, Amdocs
Ram Koya, rk541m@att.com, AT&T
Ramesh Tammana, ramesht@vmware.com, VMware
Ramki Krishnan, ramkik@vmware.com, VMware
Ramu N, rams.nsm@gmail.com,
Robert Tao, roberttao@huawei.com, Huawei
Sandeep Shah, ss00473517@techmahindra.com, Tech Mahindra
Satish Addagadda, sa482b@att.com, AT&T
Shimon Seretensky, Amdocs
Srinivasa Addepalli, srinivasa.r.addepalli@intel.com, Intel
Sumit Verdi, sverdi@vmware.com, VMware
Tapan Majhi, Tapan.Majhi@amdocs.com, Amdocs
Tom Tofigh, ttofigh@gmail.com,
Tyler Smith, ts4124@att.com, AT&T
Varun Gudisena, vg411h@att.com, AT&T
Victor Gao, victor.gao@huawei.com, Huawei
Virginie Dotta, vdotta@fr.ibm.com, IBM
Yang Xu, yang.xu3@huawei.com, Huawei
Project Roles (include RACI chart, if applicable)
Other Information:
link to seed code (if applicable)
OPEN-O
seed code for Multi VIM/Cloud framework: https://gerrit.open-o.org/r/multivimdriver-broker
seed code for OpenStack: https://gerrit.open-o.org/r/multivimdriver-openstack
seed code for VMware: https://gerrit.open-o.org/r/multivimdriver-vmware-vio
ECOMP
seed code for Multi VIM/Cloud framework: https://github.com/att/AJSC/tree/master/cdp-pal/cdp-pal-common
seed code for OpenStack: https://github.com/att/AJSC/tree/master/cdp-pal/cdp-pal-openstack
Vendor Neutral
if the proposal is coming from an existing proprietary codebase, have you ensured that all proprietary trademarks, logos, product names, etc., have been removed?
Meets Board policy (including IPR)
The above seed code has been incorporated since the Amsterdam release.
Key Project Facts
Project Name:
JIRA project name: multicloud
JIRA project prefix: multicloud
Repo name: Resources and Repositories (Deprecated)#MultiVIM/Cloud
org.onap.multicloud/framework
org.onap.multicloud/openstack
org.onap.multicloud/openstack/windriver
org.onap.multicloud/openstack/vmware
org.onap.multicloud/azure
org.onap.multicloud/k8s
IRC: http://webchat.freenode.net/?channels=onap-multicloud
Lifecycle State: incubation
PTL: Bin Yang, Wind River
mailing list tag: multicloud
Committers:
Anbing Zhang, zhanganbing@chinamobile.com, China Mobile
Bin Hu, bh526r, bh526r@att.com, AT&T
Bin Yang, biny993, bin.yang@windriver.com, Wind River
Ethan Lynn, ethanlynnl@vmware.com, VMware
Haibin Huang, haibin, Intel
Sudhakar Reddy, sudhakarreddy, Amdocs
Xinhui Li, xinhuili, lxinhui@vmware.com, VMware
Victor Morales, victor.morales@intel.com, Intel
Contributors
Link to TSC approval: https://lists.onap.org/g/onap-tsc-vote/message/834
Link to approval of additional submitters: