...

This WIP page details the financial and operational impact of various ONAP deployments.

NOTE: this is not finished - a detailed incoming/outgoing port analysis and some more prototyping remain - it is a draft for now, mostly a page to point to when asked about cost and security during deployments.

The intended audience is the approving I.T. and Finance personnel.

Up-to-date content is on Cloud Native Deployment.

Public Cloud Deployment

Executive Summary

ONAP as of 20180131 (Beijing master, pre-M2) will deploy (minus DCAEGEN2) and run on a single 64G VM provisioned with a minimal 120G HD and at least 8 vCores.  The monthly cost of this deployment starts at a minimum of $117 ...

$246 or $64 US per month (reserved/spot) on AWS and $x EC2

$365 or $212 US per month (on-demand/reserved) on Microsoft Azure.

The security profile for deployment requires outbound access to nexus3.onap.org:10001/10003, as well as ssh/http/https access to git.onap.org for the leftover chef pulls and curls that some components still do in-container (being fixed).  The VNF deployment profile currently requires an additional Openstack/Rackspace infrastructure, including Keystone and outbound CLI/HEAT access, in order to orchestrate VMs (in the future cloud-native VNF deployment will be supported).
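A quick way to validate this egress profile from a candidate VM before deploying is a port probe - a minimal sketch, assuming only the hosts/ports named above and the netcat shipped with Ubuntu:

Code Block
#!/bin/bash
# Preflight: verify outbound access to the ONAP deploy-time endpoints
for target in nexus3.onap.org:10001 nexus3.onap.org:10003 git.onap.org:443 git.onap.org:22; do
  host="${target%%:*}"; port="${target##*:}"
  if nc -z -w 5 "$host" "$port"; then
    echo "OK   $target"
  else
    echo "FAIL $target"
  fi
done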

Marketplace/services utilization - no use of marketplace services is required in either AWS or Azure.  The ONAP deployment brings its own open source software stack to a bare Ubuntu VM.  At this time Kubernetes as a Service is also not required; all resiliency/scaling/replication/load-balancing/federation behaviour for HA and geo-redundancy is handled natively by the Kubernetes framework.

Deployment Use Cases

There are several deployment scenarios that include VMs and containers, both for ONAP itself and for the VNFs that are managed.  Container deployments are further segregated into managed Kubernetes and Kubernetes-as-a-Service (KaaS) types.  We assume that all ONAP components run as Docker containers, whether they are managed per VM (HEAT) or in a Kubernetes cluster namespace (KaaS or managed).

...

Deployment Example: Full Kubernetes on a VM cluster

This is the RI (Reference Implementation) of the ONAP Beijing release - it consists of all 90+ ONAP containers deployed to a particular (dev/stg/prod) namespace ecosystem running on Kubernetes.  The Kubernetes implementation runs under a management layer - here Rancher - and not on a KaaS.  The Kubernetes cluster undercloud can run on 1 or more VMs - in this example we colocate the server and single host on a single VM, which currently fits in 55G.

Note: DCAEGEN2 is currently being fully containerized and should arrive as a native set of Kubernetes containers by the Beijing R2 release.  Currently DCAE runs in a 64G VM on a specifically configured Openstack system.  A reverse proxy mechanism already joins DCAE to the rest of ONAP running in Kubernetes.  DCAE is required for VNF closed-loop operations, but not for VNF orchestration.  When DCAE is fully refactored for Kubernetes, the memory requirement will jump over the 64G baseline to 96-128G, depending on the size of the running CDAP Hadoop cluster (3-7 containers).

Security Profile

In order to deploy, ONAP requires certain ports - defined in a security group - to be open by CIDR to several static domain names.  At runtime the list is reduced.

ONAP Port Profile

ONAP on deployment will require the following incoming and outgoing ports.  Note: within ONAP, REST calls between components are handled inside the Kubernetes namespace by the DNS server running as part of K8S (a quick resolution check is sketched below).
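That in-namespace resolution can be spot-checked from a throwaway pod - a minimal sketch, where the onap namespace and the aai-service name are illustrative (substitute any name from kubectl get svc):

Code Block
# list the cluster-internal service names components call each other by
kubectl -n onap get svc
# resolve one of them through the in-cluster DNS
kubectl -n onap run dns-test --rm -it --image=busybox --restart=Never -- nslookup aai-service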

...

Rancher 1.6.14

Helm 2.8.0

Kubernetes 1.8.6

Docker 17.03.2

Ubuntu 16.04

The rest of the software versions are specific to the 90+ Docker containers themselves (for example MariaDB, Jetty, etc.).  All of the software is open source and encapsulated in the containers, which implement a REST-based microservices architecture.

Microsoft Azure

Monthly Cost

...

VM running Ubuntu 16.04

E8s V3

...

(64G/8 vCores)

128G SSD
30G SSD

...


...

VM: Minimum EC2 instance of size 61G (ideally 128G) with a 120+GB EBS volume (ideally 1024GB) and at least 8 vCores (ideally 64 vCores), network 1Gbps (ideally 10Gbps).

We can implement proper public/private VPC peering, but for this lab environment security will rest on the Kubernetes admin/client token only.

Cost

The R4.2xlarge instance type has been the lowest-cost instance that can run all of ONAP (except DCAE); it has been demonstrated on the CD system since Amsterdam.  The cost on the spot market is 72 to 89% off the reserved cost: around $0.14/hour in us-east-1 (N. Virginia DC) and $0.07/hour in us-east-2 (Ohio DC).  A CLI query to re-check current prices is sketched below.
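Spot prices move daily, so the figures above are snapshots; they can be re-checked with the AWS CLI - a sketch, assuming configured credentials:

Code Block
# current r4.2xlarge Linux spot prices per AZ in us-east-2 (Ohio)
aws ec2 describe-spot-price-history \
  --region us-east-2 \
  --instance-types r4.2xlarge \
  --product-descriptions "Linux/UNIX" \
  --start-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --query 'SpotPriceHistory[*].[AvailabilityZone,SpotPrice]' --output table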

Monthly Cost

...

EC2 spot VM running Ubuntu 16.04

R4.2xlarge

...


Artifacts

Amazon


Spot template

security group

IAM profile

ssh key

EIP

public VPC

Network Interface

EBS Volume

Auto scaling group

Amazon Cloudformation Template

Code Block
SPOT only
{
  "IamFleetRole": "arn:aws:iam::453279094200:role/aws-ec2-spot-fleet-tagging-role",
  "AllocationStrategy": "lowestPrice",
  "TargetCapacity": 1,
  "SpotPrice": "0.532",
  "ValidFrom": "2018-01-31T20:31:16Z",
  "ValidUntil": "2019-01-31T20:31:16Z",
  "TerminateInstancesWithExpiration": true,
  "LaunchSpecifications": [
    {
      "ImageId": "ami-aa2ea6d0",
      "InstanceType": "r4.2xlarge",
      "KeyName": "obrien_systems_aws_20141115",
      "SpotPrice": "0.532",
      "BlockDeviceMappings": [
        {
          "DeviceName": "/dev/sda1",
          "Ebs": {
            "DeleteOnTermination": true,
            "VolumeType": "gp2",
            "VolumeSize": 120,
            "SnapshotId": "snap-0dcc947e7c10bed94"
          }
        }
      ],
      "SecurityGroups": [
        {
          "GroupId": "sg-de2185a9"
        }
      ]
    }
  ],
  "Type": "request"
}

Microsoft

Azure Resource Manager Template

VM details

...

Beijing release

Corporate Allocation

The following table details an example of what is required to run a continuous delivery system around ONAP Beijing, plus some developer profiles, when working directly on a cloud provider.

Beijing requires the following to run.  Note: there is a 110-pod limit per VM and we currently deploy 175+ pods - hence 2+ VMs for a full deployment, or a raised limit (see the sketch after the sizing list below).

https://lf-onap.atlassian.net/wiki/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-Changemax-podsfromdefault110podlimit

  • Full production/staging
    • Minimum 2 VMs, 12 vCores total, 96G total RAM, 100G HD per VM
    • Recommended 3-9 VMs, 24-64 vCores, 128G+ total RAM, 160G HD per VM + 1 8G/100G Kubernetes master VM
  • Developer:
    • Minimum 1 VM at 4+ vCores, 16-64G RAM, 100G HD - collocated Kubernetes master, host and jumpbox (deploys a subset of ONAP)
    • Recommended 3 VMs: 1 x 8G Kubernetes master and 2 x 64G hosts
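For the 110-pod limit above, a minimal sketch of the kubelet flag involved - in Rancher 1.6 it is set through the Kubernetes environment template's additional kubelet arguments, and 300 is an illustrative value (see the wiki link above for the actual procedure):

Code Block
# kubelet defaults to --max-pods=110; a full ONAP deploy runs 175+ pods
kubelet --max-pods=300 <existing kubelet arguments>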

Amazon AWS

Minimum Deployment costs 

20190305: Dublin state

Cost is ~$10/day or $300/month per 128G deployment (I happen to have 128G worth of VMs running on the 75% off spot market, including cost-effective R4 instances, EBS store and EFS NFS); 2 deployments plus dev cost will run under $1k US.  Based on the level of funding we will go with anything from pure IaaS VMs up to the EKS PaaS for K8s, but the latter runs over $20/day.  Ideally we should be running 160G+ clusters, but 128G is OK for specific VNFs like the vFW - https://lf-onap.atlassian.net/wiki/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-AmazonAWS


For details on ONAP deployment via OOM on Kubernetes see Cloud Native Deployment#AmazonAWS

Color coded costs are Required, Maximum, Medium, Minimum

Total US $/month (0*) | EBS $/hr (3*) | Spot $/hr (75% off) | # VMs | vCores | RAM | HD (min) | Flavor | Use | Description
$66 | $0.06 | $0.032 | 1 | 2 | 15G | 100g | R4.large | DevOps | Jump box - cloudformation
$66 | $0.06 | $0.032 | 1 | 2 | 15G | 100g | R4.large | DevOps | Jenkins server
$66 | $0.06 | $0.032 | 1 | 2 | 15G | 100g | R4.large | DevOps | Kibana (ELK) server
$89 x 4 = $354 | $0.06 | $0.063 | 4 | 4 | 31G | 100g | R4.xlarge | DevOps | Minimum CD cluster (1*)
$66 | $0.06 | $0.032 | 1 | 2 | 15G | 100g | R4.large | DevOps | Minimum CD master
$238 x 4 = $950 | $0.06 | $0.27 | 4 | 16 | 122G | 100g | R4.4xlarge | DevOps | Production CD cluster (1*)
$66 | $0.06 | $0.032 | 1 | 2 | 15G | 100g | R4.large | DevOps | Production Kubernetes master
$138 x 4 = $552 | $0.06 | $0.13 | 4 | 8 | 61G | 100g | R4.2xlarge | DevOps | Staging cluster (2*)
$66 | $0.06 | $0.032 | 1 | 2 | 15G | 100g | R4.large | | Staging Kubernetes master
$89 x 4 = $354 | $0.12 | $0.063 | 4 | 4 | 31G | 320g | R4.xlarge | DevOps | Long duration cluster
$110 | $0.12 | $0.032 | 1 | 2 | 15G | 320g | R4.large | | Long duration cluster k8s master
$138 x 2 = $276 | $0.06 | $0.13 | 2 | 8 | 61G | 100g | R4.2xlarge | Dev | Developer cluster
$66 | $0.06 | $0.032 | 1 | 2 | 15G | 100g | R4.large | | Developer cluster Kubernetes master
$138 | $0.06 | $0.13 | 1 | 8 | 61G | 100g | R4.2xlarge | Dev | Developer ONAP subset collocated VM
$0 | $0 | $0 | | | | | | | Route53 DNS/EIP ($2/month per unused EIP)
$0 | $0 | $0 | | | | | | | VPC, NG, SG, IAM (network costs)

Total cost / month - SPOT (max CD prod cluster + max dev cluster x 1): $66 + $66 + $66 + $950 + $66 + $354 + $110 + $276 + $66 = $2020/month = $24.2k/year US (4*)

Total cost / month - SPOT (medium CD prod cluster + min dev cluster x 1): $66 + $66 + $66 + $552 + $66 + $354 + $110 + $138 = $1418/month = $17k/year US (4*)

Total cost / month - SPOT (minimum CD cluster + min dev cluster, collocated host): $66 + $66 + $66 + $354 + $66 + $276 = $894/month = $11k/year US
Total cost / month - reserved

Notes:

0 - assumes the us-east-1 region (us-east-2 (Ohio) is cheaper, but the spot market there is more unstable) - if you use Ohio, cut spot costs by about 40% (ie: r4.2xlarge is $0.13 in us-east-1 but $0.07 in Ohio)

1 - ONAP is CPU bound - it will peak at over 55 vCores during startup - more than 8 vCPUs are required for a stable deployment - we could use C4/C5 compute-optimized images, but they are more expensive and get timed out of the spot market more often - it is the same price and more stable to run an R4.2x/4x instance with twice the RAM but the same vCores.

2 - a cluster running the 8-core R4.2xlarge VMs will be OK, but will be CPU throttled during startup and during any container-under-test or rogue-container episode.

3 - EBS cost is usually around 45% of the EC2 cost for an R4.large with the average 100G HD

4 - some of these costs are reduced if we use AMIs and cloudformation/cli templates to raise/lower systems (ie: off on the weekend; for CD systems, raise for a 2-hour test and terminate for 2 hours on 4-hour cycles) - a crontab sketch follows
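A sketch of note 4* as crontab entries - the stack name and template path are illustrative, and it assumes the cluster is captured in a CloudFormation template with the AWS CLI configured:

Code Block
# weekdays: tear the CD cluster down at 22:00, bring it back at 06:00
0 22 * * 1-5  aws cloudformation delete-stack --stack-name onap-cd
0 6  * * 1-5  aws cloudformation create-stack --stack-name onap-cd \
  --template-body file://onap-cd.json --capabilities CAPABILITY_IAM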

Amsterdam release

Deployment Use Cases

There are several deployment scenarios that include VMs and containers, both for ONAP itself and for the VNFs that are managed.  Container deployments are further segregated into managed Kubernetes and Kubernetes-as-a-Service (KaaS) types.  We assume that all ONAP components run as Docker containers, whether they are managed per VM (HEAT) or in a Kubernetes cluster namespace (KaaS or managed).


Type | ONAP (VMs or Containers) | VNF (VMs or Containers)
Kubernetes | containers on VMs | VM Rackspace/Openstack

Deployment Example: Full Kubernetes on a VM cluster

This is the RI (Reference Implementation) of the ONAP Beijing release - it consists of all 90+ ONAP containers deployed to a particular (dev/stg/prod) namespace ecosystem running on Kubernetes.  The Kubernetes implementation runs under a management layer - here Rancher - and not on a KaaS.  The Kubernetes cluster undercloud can run on 1 or more VMs - in this example we colocate the server and single host on a single VM, which currently fits in 55G.

Note: DCAEGEN2 is currently being fully containerized and should arrive as a native set of Kubernetes containers by the Beijing R2 release.  Currently DCAE runs in a 64G VM on a specifically configured Openstack system.  A reverse proxy mechanism already joins DCAE to the rest of ONAP running in Kubernetes.  DCAE is required for VNF closed-loop operations, but not for VNF orchestration.  When DCAE is fully refactored for Kubernetes, the memory requirement will jump over the 64G baseline to 96-128G, depending on the size of the running CDAP Hadoop cluster (3-7 containers).

Security Profile

In order to deploy, ONAP requires certain ports - defined in a security group - to be open by CIDR to several static domain names.  At runtime the list is reduced.

Ideally these are all inside a private network.

It looks like we will need a standard public/private network locked down behind a combined ACL/SG for an AWS VPC, or an NSG for Azure, where we only expose what we need outside the private network.

Still working on a list of ports, but we should not need any of these exposed if we use a bastion/jumpbox + NAT combo inside the network (see the ssh sketch below).
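With that topology, admin traffic goes through the jumpbox instead of exposed service ports - a minimal sketch with illustrative hostnames/IPs (Ubuntu 16.04's OpenSSH 7.2 needs ProxyCommand; 7.3+ clients can use -J instead):

Code Block
# reach a private-subnet host via the bastion; only the bastion's port 22 is public
ssh -o ProxyCommand="ssh -W %h:%p ubuntu@bastion.example.com" ubuntu@10.0.0.10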

Known Security Vulnerabilities

https://medium.com/handy-tech/analysis-of-a-kubernetes-hack-backdooring-through-kubelet-823be5c3d67c

https://github.com/kubernetes/kubernetes/pull/59666 fixed in Kubernetes 1.10

ONAP Port Profile

ONAP on deployment will require the following incoming and outgoing ports.  Note: within ONAP, REST calls between components are handled inside the Kubernetes namespace by the DNS server running as part of K8S.

port | protocol | incoming/outgoing | application | source | destination | notes
22 | ssh | | sshd | developer vm | host |
443 | https | | tiller | client | host |
8880 | http | | rancher | client | host |
9090 | http | | kubernetes | | host |
10001 | https | | nexus3 | | nexus3.onap.org |
10003 | https | | nexus3 | | nexus3.onap.org |
 | https | | nexus | | nexus.onap.org |
 | https, ssh | | git | | git.onap.org |
30200-30399 | http/https | | REST api | developer vm | host |
5005 | tcp | | java debug port | developer vm | host |

Lockdown ports:

8080 | outgoing |
10249-10255 | in/out | Lock these down via VPC or a source CIDR that equals only the server/client IP list

https://medium.com/handy-tech/analysis-of-a-kubernetes-hack-backdooring-through-kubelet-823be5c3d67c
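A sketch of that lockdown with the AWS CLI - the group id and admin CIDR are illustrative, and the same rule can be expressed in a CloudFormation/VPC security group instead:

Code Block
# allow the kubelet port range only from a single known CIDR instead of 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 10249-10255 --cidr 203.0.113.7/32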



Software Profile

Rancher 1.6.14

Helm 2.8.0

Kubernetes 1.8.6

Docker 17.03.2

Ubuntu 16.04

The rest of the software versions are specific to the 90+ Docker containers themselves (for example MariaDB, Jetty, etc.).  All of the software is open source and encapsulated in the containers, which implement a REST-based microservices architecture.

Hardware Profile

ONAP on Kubernetes#HardwareRequirements

Microsoft Azure

Monthly Cost

Cost $US | Artifact | Details
$365/m at $0.65 (CAN)/h; reduce by 42% for 1-year reserved instances = $212/month | VM running Ubuntu 16.04 | E8s V3 (64G/8 vCores), 128G SSD + 30G SSD, in CA central DC
$0 TBD | no extra volume | (only 16G VMs have 30G disks)
 | IP |
$0 | image snapshot |
$0 | Cloud Services |
Total | | $212/m




Amazon AWS

VM: Minimum EC2 instance of size 61G (ideally 128G) with a 120+GB EBS volume (ideally 1024GB) and at least 8 vCores (ideally 64 vCores), network 1Gbps (ideally 10Gbps).

We can implement proper public/private VPC peering, but for this lab environment security will rest on the Kubernetes admin/client token only (a kubeconfig sketch follows).
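A minimal kubeconfig sketch of that token-only access model - the server URL and token are illustrative placeholders, not the real endpoint format:

Code Block
kubectl config set-cluster onap-lab --server=https://<kubernetes-api-endpoint>
kubectl config set-credentials onap-admin --token=<admin-client-token>
kubectl config set-context onap-lab --cluster=onap-lab --user=onap-admin
kubectl config use-context onap-lab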

Example of 2 or 3 network VPC peering setup http://files.meetup.com/18216364/aws_vpc_beanstalk_20150224_post.pdf


Cost

The R4.2xlarge instance type has been the lowest-cost instance that can run all of ONAP (except DCAE); it has been demonstrated on the CD system since Amsterdam.  The cost on the spot market is 72 to 89% off the reserved cost: around $0.14/hour in us-east-1 (N. Virginia DC) and $0.07/hour in us-east-2 (Ohio DC).

Monthly Cost
Cost $US | Artifact | Details
$52/m at $0.07/h | EC2 spot VM running Ubuntu 16.04 | R4.2xlarge (64G/8 vCores), in the Ohio DC
$12.0/m at $0.1/m/GB | EBS volume | 120GB
$2.0/m | Elastic EIP |
 | AMI image snapshot |
$0 | Cloud Services |
Total | | $64/m



Artifacts

Amazon


Spot template

security group

IAM profile

ssh key

EIP

public VPC

Network Interface

EBS Volume

Auto scaling group

Amazon Cloudformation Template
Code Block
SPOT only
{
  "IamFleetRole": "arn:aws:iam::4.....:role/aws-ec2-spot-fleet-tagging-role",
  "AllocationStrategy": "lowestPrice",
  "TargetCapacity": 1,
  "SpotPrice": "0.532",
  "ValidFrom": "2018-01-31T20:31:16Z",
  "ValidUntil": "2019-01-31T20:31:16Z",
  "TerminateInstancesWithExpiration": true,
  "LaunchSpecifications": [
    {
      "ImageId": "ami-aa2ea6d0",
      "InstanceType": "r4.2xlarge",
      "KeyName": "obr...15",
      "SpotPrice": "0.532",
      "BlockDeviceMappings": [
        {
          "DeviceName": "/dev/sda1",
          "Ebs": {
            "DeleteOnTermination": true,
            "VolumeType": "gp2",
            "VolumeSize": 120,
            "SnapshotId": "snap-0dcc947e7c10bed94"
          }
        }
      ],
      "SecurityGroups": [
        {
          "GroupId": "sg-de2185a9"
        }
      ]
    }
  ],
  "Type": "request"
}
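To submit the request above, save the JSON as spot.json and use the AWS CLI - a sketch, assuming configured credentials:

Code Block
aws ec2 request-spot-fleet --spot-fleet-request-config file://spot.json
# poll fulfilment state
aws ec2 describe-spot-fleet-requests \
  --query 'SpotFleetRequestConfigs[*].[SpotFleetRequestId,SpotFleetRequestState]' --output table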


Microsoft

Azure Resource Manager Template

VM details

Code Block
ubuntu@onap:~$ free
              total        used        free      shared  buff/cache   available
Mem:       65949220      278884    65223112        8812      447224    64947380
Swap:             0           0           0
ubuntu@onap:~$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
udev            32963872       0  32963872   0% /dev
tmpfs            6594924    8788   6586136   1% /run
/dev/sda1       30428648 1328420  29083844   5% /
tmpfs           32974608       0  32974608   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs           32974608       0  32974608   0% /sys/fs/cgroup
/dev/sdb1      131979684   60988 125191480   1% /mnt
tmpfs            6594924       0   6594924   0% /run/user/1000


Sizing

Kubernetes

Openstack


20180202 openlab  Integration-SB-01

VMs: 

We only need the DCAE and Cloudify-manager VMs when using OOM - the following is the whole set; we will truncate the heat template to bring up half of this.

VMs | Flavor | RAM
2 | m1.small | 2G
4 | m1.medium | 4G
5 | m1.large | 8G
17 | m1.xlarge | 16G
1 | m1.xxlarge (cloudify-manager) | 64G
29 | total | 396G

see also discussion on OPENLABS-160, OPENLABS-161 and DOC-244 (System Jira)

https://onap.readthedocs.io/en/latest/guides/onap-developer/settingup/fullonap.html#requirements
The cns (consul) nodes, for example, are two sizes larger now at 16G each instead of the documented 4G - I remember mails in the past about memory saturation that caused the bump from m1.medium to m1.xlarge (although Consul is already in OOM, so we should not need these).


All instances below are Active/Running on nova with key onap_key_c4Uw on network oam_onap_c4Uw (nokia_jumphost uses key nokia_jumphost on the external network).

Instance | Image | OAM IP / Floating IP | Flavor | Age
dcaepgvm00 | ubuntu-16-04-cloud-amd64 | 10.0.0.26 / 10.12.6.25 | m1.xlarge | 2 weeks, 2 days
dcaecdap05 | ubuntu-16-04-cloud-amd64 | 10.0.0.15 / 10.12.6.22 | m1.xlarge | 2 weeks, 2 days
dcaecdap03 | ubuntu-16-04-cloud-amd64 | 10.0.0.21 / 10.12.6.1 | m1.xlarge | 2 weeks, 2 days
dcaecdap02 | ubuntu-16-04-cloud-amd64 | 10.0.0.11 / 10.12.5.227 | m1.xlarge | 2 weeks, 2 days
dcaecdap06 | ubuntu-16-04-cloud-amd64 | 10.0.0.18 / 10.12.6.8 | m1.xlarge | 2 weeks, 2 days
dcaecdap01 | ubuntu-16-04-cloud-amd64 | 10.0.0.13 / 10.12.5.248 | m1.xlarge | 2 weeks, 2 days
dcaecdap04 | ubuntu-16-04-cloud-amd64 | 10.0.0.3 / 10.12.6.12 | m1.xlarge | 2 weeks, 2 days
dcaecdap00 | ubuntu-16-04-cloud-amd64 | 10.0.0.16 / 10.12.5.92 | m1.xlarge | 2 weeks, 2 days
dcaedoks00 | ubuntu-16-04-cloud-amd64 | 10.0.0.6 / 10.12.5.246 | m1.xlarge | 2 weeks, 2 days
dcaedokp00 | ubuntu-16-04-cloud-amd64 | 10.0.0.17 / 10.12.5.247 | m1.xlarge | 2 weeks, 2 days
dcaecnsl00 | ubuntu-16-04-cloud-amd64 | 10.0.0.12 / 10.12.5.184 | m1.xlarge | 2 weeks, 2 days
dcaecnsl01 | ubuntu-16-04-cloud-amd64 | 10.0.0.14 / 10.12.6.0 | m1.xlarge | 2 weeks, 2 days
dcaecnsl02 | ubuntu-16-04-cloud-amd64 | 10.0.0.8 / 10.12.5.232 | m1.xlarge | 2 weeks, 2 days
dcaeorcl00 | CentOS-7 | 10.0.0.9 / 10.12.5.142 | m1.xlarge | 2 weeks, 2 days
vm1-dcae-bootstrap | ubuntu-16-04-cloud-amd64 | 10.0.4.1 / 10.12.5.51 | m1.small | 2 weeks, 3 days
vm1-policy | ubuntu-14-04-cloud-amd64 | 10.0.6.1 / 10.12.5.178 | m1.xlarge | 2 weeks, 3 days
vm1-aai-inst1 | ubuntu-14-04-cloud-amd64 | 10.0.1.1 / 10.12.5.153 | m1.xlarge | 2 weeks, 3 days
vm1-portal | ubuntu-14-04-cloud-amd64 | 10.0.9.1 / 10.12.5.159 | m1.large | 2 weeks, 3 days
vm1-message-router | ubuntu-14-04-cloud-amd64 | 10.0.11.1 / 10.12.5.171 | m1.large | 2 weeks, 3 days
vm1-aai-inst2 | ubuntu-14-04-cloud-amd64 | 10.0.1.2 / 10.12.5.163 | m1.xlarge | 2 weeks, 3 days
vm1-appc | ubuntu-14-04-cloud-amd64 | 10.0.2.1 / 10.12.5.156 | m1.large | 2 weeks, 3 days
vm1-robot | ubuntu-16-04-cloud-amd64 | 10.0.10.1 / 10.12.5.148 | m1.medium | 2 weeks, 3 days
vm1-sdc | ubuntu-16-04-cloud-amd64 | 10.0.3.1 / 10.12.5.2 | m1.xlarge | 2 weeks, 3 days
vm1-vid | ubuntu-14-04-cloud-amd64 | 10.0.8.1 / 10.12.5.116 | m1.medium | 2 weeks, 3 days
vm1-so | ubuntu-16-04-cloud-amd64 | 10.0.5.1 / 10.12.5.38 | m1.large | 2 weeks, 3 days
vm1-clamp | ubuntu-16-04-cloud-amd64 | 10.0.12.1 / 10.12.5.114 | m1.medium | 2 weeks, 3 days
vm1-dns-server | ubuntu-14-04-cloud-amd64 | 10.0.100.1 / 10.12.5.62 | m1.small | 2 weeks, 3 days
vm1-sdnc | ubuntu-14-04-cloud-amd64 | 10.0.7.1 / 10.12.5.173 | m1.large | 2 weeks, 3 days
vm1-multi-service | ubuntu-16-04-cloud-amd64 | 10.0.14.1 / 10.12.5.128 | m1.xxlarge | 2 weeks, 3 days
nokia_jumphost | (not found) | external: 10.12.5.134 | m1.medium | 1 month, 3 weeks