ONAP Deployment Specification for Finance and Operations

This WIP page details the financial and operational impact of various ONAP deployments.

NOTE: this is not finished - a detailed incoming/outgoing port analysis and some more prototyping are still needed - it is a draft for now, intended mostly as a page to point to when questions come up about cost and security during deployments.

The intended audience is approving IT and finance personnel.

Up-to-date content is on Cloud Native Deployment.

Public Cloud Deployment

Executive Summary

ONAP as of 20180131 (Beijing master - pre M2) will deploy (minus DCAEGEN2) and run on a single 64G VM provisioned with a minimal 120G HD and at least 8 vCores.  The monthly cost of this deployment is approximately:

$246 or $64 US per month (reserved/spot) on AWS EC2

$365 or $212 US per month (on-demand/reserved) on Microsoft Azure

The security profile for deployment requires outbound access to nexus3.onap.org:10001/10003, as well as ssh/http/https access to git.onap.org for the leftover in-container chef pulls and curls in some components (being fixed).  The VNF deployment profile currently requires additional Openstack/Rackspace infrastructure, including Keystone and CLI/HEAT outbound access, for VNF orchestration (in the future, cloud-native VNF deployment will be supported).

Marketplace/services utilization - no use of marketplace services is required on either AWS or Azure.  The ONAP deployment brings its own open-source software stack to a bare Ubuntu VM.  At this time Kubernetes as a Service is also not required; all resiliency/scaling/replication/load-balancing/federation behaviour for HA and geo-redundancy is handled natively by the Kubernetes framework.

Beijing release

Corporate Allocation

The following table details an example of what is required to run a continuous delivery system around ONAP Beijing, plus some developer profiles, when working directly on a cloud provider.

Beijing requires the following to run.  Note there is a default 110-pod limit per VM and we currently deploy 175+ pods - hence 2+ VMs for a full deployment:

https://lf-onap.atlassian.net/wiki/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-Changemax-podsfromdefault110podlimit
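The 110-pod default is a kubelet setting; a minimal sketch of raising it via a KubeletConfiguration file (the value 200 here is illustrative only - see the linked page for the values actually used):

```yaml
# KubeletConfiguration fragment - maxPods raises the default 110-pod-per-node limit.
# The value below is illustrative; size it for your node and ONAP footprint.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 200
```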


  • Full production/staging

    • Minimum: 2 VMs, 12 vCores total, 96G total RAM, 100G HD per VM

    • Recommended: 3-9 VMs, 24-64 vCores, 128G+ total RAM, 160G HD per VM + 1 8G/100G Kubernetes master VM

  • Developer:

    • Minimum: 1 VM at 4+ vCores, 16-64G RAM, 100G HD - collocated Kubernetes master, host and jumpbox (deploys a subset of ONAP)

    • Recommended: 3 VMs - 1 x 8G Kubernetes master and 2 x 64G hosts

Amazon AWS

Minimum Deployment costs 

20190305: Dublin state

Cost is ~$10/day or $300/month per 128G deployment. (I happen to have 128G worth of VMs running on the 75%-off spot market, including cost-effective R4 instances, EBS storage and EFS NFS.) Two deployments plus dev cost will run under $1k US.  Based on the level of funding we will go with anything from pure IaaS VMs up to the EKS PaaS for Kubernetes, but the latter runs over $20/day.  Ideally we should be running 160G+ clusters, but 128G is OK for specific VNFs like the vFW - https://lf-onap.atlassian.net/wiki/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-AmazonAWS



For details on ONAP deployment via OOM on Kubernetes see Cloud Native Deployment#AmazonAWS

Color coded costs are Required, Maximum, Medium, Minimum

| Total US $/month (0*) | EBS cost (3*) | spot vm/hr (75% off) | # | vCore | RAM | HD (min) | Flavor | Use | Description |
|---|---|---|---|---|---|---|---|---|---|
| $66 | $0.06 | $0.032 | 1 | 2 | 15 | 100g | R4.large | DevOps | Jump box - cloudformation |
| $66 | $0.06 | $0.032 | 1 | 2 | 15 | 100g | R4.large | DevOps | Jenkins server |
| $66 | $0.06 | $0.032 | 1 | 2 | 15 | 100g | R4.large | DevOps | Kibana (ELK) server |
| $89 x 4 = $354 | $0.06 | $0.063 | 4 | 4 | 31 | 100g | R4.xlarge | DevOps | Minimum CD cluster (1*) |
| $66 | $0.06 | $0.032 | 1 | 2 | 15 | 100g | R4.large | DevOps | Minimum CD master |
| $238 x 4 = $950 | $0.06 | $0.27 | 4 | 16 | 122 | 100g | R4.4xlarge | DevOps | Production CD cluster (1*) |
| $66 | $0.06 | $0.032 | 1 | 2 | 15 | 100g | R4.large | DevOps | Production kubernetes master |
| $138 x 4 = $552 | $0.06 | $0.13 | 4 | 8 | 61 | 100g | R4.2xlarge | DevOps | Staging cluster (2*) |
| $66 | $0.06 | $0.032 | 1 | 2 | 15 | 100g | R4.large | | Staging kubernetes master |
| $89 x 4 = $354 | $0.12 | $0.063 | 4 | 4 | 31 | 320g | R4.xlarge | DevOps | Long duration cluster |
| $110 | $0.12 | $0.032 | 1 | 2 | 15 | 320g | R4.large | | Long duration cluster k8s master |
| $138 x 2 = $276 | $0.06 | $0.13 | 2 | 8 | 61 | 100g | R4.2xlarge | Dev | Developer cluster |
| $66 | $0.06 | $0.032 | 1 | 2 | 15 | 100g | R4.large | | Developer cluster kubernetes master |
| $138 | $0.06 | $0.13 | 1 | 8 | 61 | 100g | R4.2xlarge | Dev | Developer onap subset collocated VM |
| $0 | $0 | $0 | | | | | | Route53 DNS/EIP | $2/month per unused EIP |
| $0 | $0 | $0 | | | | | | VPC, NG, SG, IAM | (network costs) |

Total cost / month - SPOT (max CD prod cluster - max dev cluster x 1): $66 + $66 + $66 + $950 + $66 + $354 + $110 + $276 + $66 = $2020/month = $24.2k/year US (4*)

Total cost / month - SPOT (medium CD prod cluster - min dev cluster x 1): $66 + $66 + $66 + $552 + $66 + $354 + $110 + $138 = $1418/month = $17k/year US (4*)

Total cost / month - SPOT (minimum CD cluster - min dev cluster - collocated host): $66 + $66 + $66 + $354 + $66 + $276 = $894/month = $11K US

Total cost / month - reserved:

Notes:

0 - assumes us-east-1 region (us-east-2 (Ohio) is cheaper but its spot market is more unstable) - if you use Ohio, cut spot costs by about 40% - ie: r4.2xlarge is $0.13 in us-east-1 but $0.07 in Ohio

1 - ONAP is CPU bound - it will peak at over 55 vCores during startup - more than 8 vCPUs are required for a stable deployment - we could use C4/C5 compute-optimized images, but they are more expensive and get timed out of the spot market more often - it is the same price and more stable to run an R4.2x/4x instance with twice the RAM but the same vCores

2 - a cluster running the 8-core R4.2xlarge VMs will be OK but will be CPU throttled during startup and during any container-under-test or rogue-container episode

3 - EBS cost is usually around 45% of the EC2 cost for an R4.large with the average 100G HD

4 - some of these costs can be reduced by using AMIs and CloudFormation/CLI templates to raise/lower systems (ie: off on the weekend; for CD systems, raise for a 2-hour test then terminate for 2 hours, on 4-hour cycles)
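The per-row monthly figures in the table above can be reproduced from the hourly rates; a minimal sketch, assuming ~730 billed hours per month and the spot + EBS hourly rates from the table (the table's own figures round slightly differently):

```python
# Reproduce the table's monthly cost figures from hourly spot + EBS rates.
# Assumes ~730 billed hours/month (24 * 365 / 12); rates are from the table above.
HOURS_PER_MONTH = 730

def monthly_cost(spot_per_hr: float, ebs_per_hr: float, count: int = 1) -> float:
    """Monthly USD cost for `count` instances at the given hourly rates."""
    return (spot_per_hr + ebs_per_hr) * HOURS_PER_MONTH * count

# R4.large jumpbox: (0.032 + 0.06) * 730 ~= $67/month (the table rounds to $66)
jumpbox = monthly_cost(0.032, 0.06)

# Production CD cluster: 4 x R4.4xlarge at $0.27/hr spot + $0.06/hr EBS
prod_cluster = monthly_cost(0.27, 0.06, count=4)

print(f"jumpbox: ${jumpbox:.0f}/month, prod cluster: ${prod_cluster:.0f}/month")
```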

Amsterdam release

Deployment Use Cases

There are several deployment scenarios, involving both VMs and containers, for ONAP itself and for the VNFs it manages.  Container deployments are further segregated into managed Kubernetes and Kubernetes-as-a-Service types.  We assume that all ONAP components run as Docker containers, whether they are managed per VM (HEAT) or in a Kubernetes cluster namespace (KaaS or managed).



| Type | ONAP (VMs or Containers) | VNF (VMs or Containers) |
|---|---|---|
| | Kubernetes containers on VMs | VM Rackspace/Openstack |


Deployment Example: Full Kubernetes on a VM cluster

This is the RI (reference implementation) of the ONAP Beijing release - it consists of all 90+ ONAP containers deployed to a particular (dev/stg/prod) namespace ecosystem running on Kubernetes.  The Kubernetes implementation runs under a management layer - here Rancher - and not on a KaaS.  The Kubernetes cluster undercloud can run on 1 or more VMs - in this example we collocate the server and single host on a single VM, which currently fits in 55G.

Note: DCAEGEN2 is currently being fully containerized and should arrive as a native set of Kubernetes containers by the Beijing R2 release. Currently DCAE runs in a 64G VM on a specifically configured Openstack system.  A reverse proxy mechanism already joins DCAE to the rest of ONAP running in Kubernetes.  DCAE is required for VNF closed-loop operations, but not for VNF orchestration.  When DCAE is fully refactored for Kubernetes, the memory requirement will jump past the 64G baseline to 96-128G, depending on the size of the CDAP Hadoop cluster, which is 3-7 containers.

Security Profile

ONAP requires certain ports, defined in a security group, open by CIDR to several static domain names in order to deploy.  At runtime the list is reduced.

Ideally these are all inside a private network.

We will need a standard public/private network locked down behind a combined ACL/SG for an AWS VPC, or an NSG for Azure, where we expose only what we need outside the private network.

A full list of ports is still being worked out, but none of them should need to be exposed if we use a bastion/jumpbox + NAT combo inside the network.
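A sketch of the deploy-time outbound rules in boto3; the group ID and the wide-open CIDR are placeholders, and in practice the CIDRs would be locked down to the resolved addresses of nexus3.onap.org, nexus.onap.org and git.onap.org:

```python
# Sketch of the deploy-time outbound (egress) security-group rules for ONAP.
# The group id and 0.0.0.0/0 CIDR are placeholders - lock the CIDRs down to
# the resolved addresses of nexus3.onap.org / git.onap.org in a real deployment.

DEPLOY_EGRESS = [
    (10001, "nexus3.onap.org docker registry (release)"),
    (10003, "nexus3.onap.org docker registry (staging)"),
    (22, "git.onap.org ssh"),
    (80, "git.onap.org http"),
    (443, "git.onap.org https"),
]

def egress_permissions(cidr: str = "0.0.0.0/0") -> list:
    """Build the IpPermissions payload for authorize_security_group_egress."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr, "Description": desc}],
        }
        for port, desc in DEPLOY_EGRESS
    ]

def apply_egress(group_id: str) -> None:
    """Apply the rules to an existing security group (needs AWS credentials)."""
    import boto3  # imported lazily so the rule builder works without AWS deps

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_egress(
        GroupId=group_id, IpPermissions=egress_permissions()
    )
```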

Known Security Vulnerabilities

https://medium.com/handy-tech/analysis-of-a-kubernetes-hack-backdooring-through-kubelet-823be5c3d67c

https://github.com/kubernetes/kubernetes/pull/59666 (fixed in Kubernetes 1.10)

ONAP Port Profile

ONAP on deployment will require the following incoming and outgoing ports.  Note: within ONAP, REST calls between components are handled inside the Kubernetes namespace by the DNS server running as part of Kubernetes.

| port | protocol | incoming/outgoing | application | source | destination | Notes |
|---|---|---|---|---|---|---|
| 22 | ssh | | ssh | developer vm | host | |
| 443 | | | tiller | client | host | |
| 8880 | http | | rancher | client | host | |
| 9090 | http | | kubernetes | | host | |
| 10001 | https | | nexus3 | | nexus3.onap.org | |
| 10003 | https | | nexus3 | | nexus3.onap.org | |
| | https | | nexus | | nexus.onap.org | |
| | https, ssh | | git | | git.onap.org | |
| 30200-30399 | http/https | | REST api | developer vm | host | |
| 5005 | tcp | | | | | |
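The outgoing endpoints in the table can be sanity-checked from a candidate host before deployment; a minimal sketch using only the standard library (the endpoint list mirrors the table's external destinations; in-cluster ports and the incomplete 5005 row are excluded):

```python
# Check that the external endpoints required at ONAP deploy time are reachable.
# Endpoint list mirrors the port table above (in-cluster ports are excluded).
import socket

EXTERNAL_ENDPOINTS = [
    ("nexus3.onap.org", 10001),
    ("nexus3.onap.org", 10003),
    ("nexus.onap.org", 443),
    ("git.onap.org", 443),
    ("git.onap.org", 22),
]

def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def report() -> None:
    """Print open/BLOCKED status for each required external endpoint."""
    for host, port in EXTERNAL_ENDPOINTS:
        status = "open" if reachable(host, port) else "BLOCKED"
        print(f"{host}:{port} {status}")
```

Run `report()` from inside the security-group/NSG boundary; any BLOCKED line indicates an egress rule that still needs to be opened.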