TLAB - ONAP Daily Deployment Tests

Description

This epic encompasses the work to be done by the ONAP Integration Team in deploying full ONAP instances to run end-to-end (ETE) tests. This means that ONAP instances will be created and torn down periodically.

  • There are currently 3 tenants created under this effort.

Activity

Former user June 19, 2019 at 9:22 PM

I am closing this ticket since TLAB is no longer available.

Former user January 2, 2019 at 4:12 PM

Hi, this ETE testing runs on the master branch, so it runs irrespective of any specific ONAP release version. I removed the tag fixing it to the Beijing release only and left it blank. Is this OK? Thank you

Former user December 30, 2018 at 11:02 AM

Good morning. The current fixVersion is set to "Beijing Release". Can you please review the status of this ticket and update it accordingly? Thank you

Former user February 5, 2018 at 11:41 PM

Deployment mode: OOM
Cinder volumes: these will depend on each component's Kubernetes PV requirements. For now, ONAP (outside of DCAE) can run on one big NFS or Cinder share of 1 TB.
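
As an illustration, a minimal openstacksdk sketch for carving out the single 1 TB Cinder share described above might look like the following; the cloud name and volume name are assumptions, not values from this ticket:

    import openstack

    # Connect using a cloud entry from clouds.yaml; "tlab" is a hypothetical name.
    conn = openstack.connect(cloud="tlab")

    # One big 1 TB share to back the ONAP Kubernetes PVs outside of DCAE.
    volume = conn.create_volume(size=1024, name="onap-oom-shared-pv", wait=True)
    print(volume.id, volume.status)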

These figures are an evolving work in progress, but the following will be in place until DCAE and the Cloudify manager are containerized.

Remember that only HD and RAM are invariant. For vCPUs, the entire ONAP can run on 8 vCores but peaks at 55 vCores during orchestration. If you are running services and closed-loop analytics in DCAE, then depending on CDAP cluster saturation you would need up to 7 x 16 more vCores to start.
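
As a quick back-of-the-envelope check, the vCPU figures above add up as follows (the numbers are from this comment; treating the peaks as additive is an illustrative worst case, not a sizing rule from the ticket):

    # vCPU sizing taken from the comment above.
    BASELINE_VCORES = 8        # entire ONAP at steady state
    ORCHESTRATION_PEAK = 55    # transient peak during orchestration

    # DCAE closed-loop analytics: up to 7 CDAP nodes at 16 vCores each.
    CDAP_NODES = 7
    CDAP_VCORES_PER_NODE = 16
    dcae_extra = CDAP_NODES * CDAP_VCORES_PER_NODE  # 112 vCores

    print(f"ONAP orchestration peak: {ORCHESTRATION_PEAK} vCores")
    print(f"With saturated DCAE/CDAP: up to {ORCHESTRATION_PEAK + dcae_extra} vCores")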

Actually, the HEAT part orchestrated by OOM is a lot smaller; for example, the Cloudify VM is no longer 64 GB.
The subset is formally documented here:
https://wiki.onap.org/display/DW/ONAP+on+Kubernetes+on+Rancher+in+OpenStack#ONAPonKubernetesonRancherinOpenStack-Overallrequiredresources:

156 GB RAM, 54 vCPUs, 17 VMs, 1 TB HD - this includes the 64 GB VM for ONAP (minus DCAE/Cloudify)

For OOM you have several options. The preferred is separation:

  • 4 GB RAM / 4 vCPU / 40 GB disk for the server
  • 64 GB RAM / 8 vCPU / 120 GB disk for the host (the one running all the pods)

But you can run fine colocated on the same 64 GB VM. The issue is spinning up Rancher/Kubernetes/Helm on the server: if colocated, you would have to install the undercloud every time, whereas if you split the OOM VMs into two at 4 GB/64 GB, then you can delete/create only the pods on the 64 GB VM (which could be stateless).
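
A hedged sketch of what booting the preferred two-VM split could look like with openstacksdk; every image, flavor, and network name here is an assumption for illustration, not a value from this ticket:

    import openstack

    conn = openstack.connect(cloud="tlab")  # hypothetical clouds.yaml entry

    # Rancher/Kubernetes/Helm management server: 4 GB RAM / 4 vCPU / 40 GB disk.
    server = conn.create_server(
        name="oom-rancher-server",
        image="ubuntu-16.04",        # assumed image name
        flavor="m1.4c4g40g",         # assumed flavor matching the spec above
        network="oam-network",       # assumed tenant network
        wait=True,
    )

    # Kubernetes host running all the pods: 64 GB RAM / 8 vCPU / 120 GB disk.
    host = conn.create_server(
        name="oom-k8s-host",
        image="ubuntu-16.04",
        flavor="m1.8c64g120g",
        network="oam-network",
        wait=True,
    )

    print(server.status, host.status)

Keeping the 4 GB management VM long-lived while recreating only the 64 GB pod host matches the delete/create pattern described above.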

For future federated installs, you will want to separate Kubernetes management (which can be clustered for HA and geo-redundancy) from Kubernetes hosts (which can be clustered for HA).

For HEAT installs, this is not my specialty anymore.

Former user February 5, 2018 at 3:49 PM

Could you please point out where the hybrid sizing is located? There are many references (some outdated, some updated) that may not be accurate, so I just want to make sure what the final version is (at least for the integration team in TLAB). Could you also please point out the other specs besides RAM, if possible (vCPUs, HDD, Cinder volume size), and which deployment mode this applies to (HEAT or OOM)? Thank you.

Done

Details

Created February 2, 2018 at 3:35 PM
Updated July 10, 2019 at 7:16 PM
Resolved June 19, 2019 at 9:22 PM