
Working draft: to be finalized once the use cases are settled.

This section provides information on hardware and network configuration. ONAP Open Labs are collections of dedicated hardware, generally partitioned into pods of servers. Pods can be used for different kinds of testing, such as development, CI/CD, ONAP platform testing, or E2E testing. The minimal requirements for each pod are defined below.

Hardware Summary

A lab compliant pod provides:

  • 2-8 controller/compute nodes, depending on the use case (refer to the Server Pod section below)
  • A configured network topology providing IPMI, Admin (PXE), Public, Private, and Storage networks (illustrated in the sketch after this list)
  • Remote access through VPN
  • Security through a firewall
  • Internet access to install and update software online
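
To make the network plan concrete, the following Python sketch models the five networks as non-overlapping subnets. All subnet ranges here are illustrative assumptions, not lab-assigned values.

    # A minimal sketch of the five-network pod layout; subnets are placeholders.
    from ipaddress import ip_network

    POD_NETWORKS = {
        "ipmi":    ip_network("10.0.0.0/24"),   # lights-out management (BMC)
        "admin":   ip_network("10.0.1.0/24"),   # PXE provisioning
        "public":  ip_network("192.0.2.0/24"),  # externally reachable
        "private": ip_network("10.0.2.0/24"),   # internal data/control traffic
        "storage": ip_network("10.0.3.0/24"),   # storage backend traffic
    }

    def overlapping(networks):
        """Return pairs of network names whose subnets overlap (a config error)."""
        nets = list(networks.items())
        return [(a, b) for i, (a, na) in enumerate(nets)
                for b, nb in nets[i + 1:] if na.overlaps(nb)]

    assert not overlapping(POD_NETWORKS), "pod networks must not overlap"
    for name, net in POD_NETWORKS.items():
        print(f"{name:8s} {net}  ({net.num_addresses - 2} usable hosts)")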

Server Pod

In the following table, we define three types of pods based on resource usage assumptions. Note that in both lab and real deployment scenarios, resources can be oversubscribed depending on workload. We also assume that the ONAP platform will be deployed in a separate pod from the VNFs. A sizing check sketch follows the table.

Type of Pod | Total Memory (GB) | Total vCPU | Total Storage | Number of Control Nodes | Number of Compute Nodes
Large       | 600               | 120        | 4TB           | >=2                     | 3
Medium      | 200               | 80         | 2TB           | >=2                     | 3
Small       | 40                | 24         | 1TB           | >=1                     | 1
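
As a sanity check against the table, the sketch below verifies whether a set of servers meets a pod type's minimums. The per-node figures in the example are taken from the recommended node configuration below; the helper itself is an assumption, not part of any lab tooling.

    # Pod minimums from the table: (memory GB, vCPU, storage TB, control, compute).
    POD_MINIMUMS = {
        "large":  (600, 120, 4, 2, 3),
        "medium": (200, 80, 2, 2, 3),
        "small":  (40, 24, 1, 1, 1),
    }

    def fits(pod_type, nodes):
        """nodes: list of (memory_gb, vcpu, storage_tb) tuples, one per server."""
        mem, vcpu, storage, control, compute = POD_MINIMUMS[pod_type]
        totals = [sum(dim) for dim in zip(*nodes)]
        return (totals[0] >= mem and totals[1] >= vcpu and totals[2] >= storage
                and len(nodes) >= control + compute)

    # Five recommended nodes (256GB RAM, 24 hyper-threads, 2TB) fill a medium pod.
    print(fits("medium", [(256, 24, 2)] * 5))  # True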

In addition, you may need a provisioning server to help install and provide access to a server pod.

A recommended node (server) configuration is as follows:

  • Memory: 256GB RAM
  • CPU: Intel Xeon E5-2658v3 Series or newer, with 12 cores and 24 hyper-threads
  • Firmware: BIOS/EFI compatible for x86-family servers
  • Local Storage: 2 x 1TB HDD; virtual ISO boot capability or a separate PXE boot server (DHCP/TFTP or Cobbler)

Networking

Network Hardware

    • 48-Port TOR Switch
    • NICs - Combination of 1GE and 10GE based on network topology options
    • Connectivity for each data/control network is through a separate NIC port or a shared port. A separate port simplifies switch management but requires more NICs on the server and more switch ports
    • BMC (Baseboard Management Controller) for lights-out management over IPMI (Intelligent Platform Management Interface); see the example after this list
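
For illustration, a lights-out power check over the IPMI network might look like the following, shelling out to the standard ipmitool client. The BMC addresses and credentials are placeholders.

    # A minimal sketch of querying chassis power state through each node's BMC.
    import subprocess

    BMC_HOSTS = ["10.0.0.11", "10.0.0.12"]  # placeholder BMC addresses

    def power_status(bmc, user="admin", password="changeme"):
        out = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", bmc, "-U", user, "-P", password,
             "chassis", "power", "status"],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()  # e.g. "Chassis Power is on"

    for bmc in BMC_HOSTS:
        print(bmc, "->", power_status(bmc))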

Remote Management

    • Developers can access deploy/test environments at an aggregate 100 Mbps upload and download speed

Basic requirements

    • SSH sessions can be established (initially via the jump server); see the sketch after this list
    • Packages can be installed on a system by pulling from an external repo.
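
As a sketch of both requirements together, the snippet below tunnels an SSH session through an assumed jump server (OpenSSH's -J/ProxyJump option) and dry-runs a package install against an external repo. Hostnames and accounts are placeholders.

    # A minimal sketch; assumes key-based SSH access to the jump server and node.
    import subprocess

    JUMP = "developer@jump.lab.example.org"  # placeholder jump server

    def run_via_jump(node, command):
        """Run a command on a pod node, tunneling the session through JUMP."""
        result = subprocess.run(["ssh", "-J", JUMP, node, command],
                                capture_output=True, text=True, check=True)
        return result.stdout

    # Simulate a package install to confirm the external repo is reachable.
    print(run_via_jump("ubuntu@10.0.1.21", "apt-get -s install curl"))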

Firewall rules accommodate

    • SSH sessions

Internet access

    • Internet access is available for online software installation and updates

Requirements for the 3 use cases supported by ONAP Release 1 (to be finalized), not including ONAP, are listed below; a worked footprint total follows the table.

ONAP itself will need a medium-sized server pod.


Use Cases | VNFs | Deployment Topology | Server Pod Number | Network Hardware | Software
Development or vFW/vDNS demo apps | Open sourced vFW/vDNS | — | 1 (Small) | TOR | Cloud OS
vCPE | vCPE | — | 2 (Medium) | WAN/SPTN Router (2), DC Gateway (2), TOR (n), ThinCPE (1) | Cloud OS (for Edge and Core), WAN/SPTN Controller, DC Controller, Specific VNFM & EMS
VoLTE | vIMS/vEPC | — | 2 (Large) | WAN/SPTN Router (2), DC Gateway (2), TOR (n), Wireless Access Point (2), VoLTE Terminal Devices (2) | Cloud OS (for Edge and Core), WAN/SPTN Controller, DC Controller, Specific VNFM & EMS
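
Putting the two tables together, the sketch below totals the lab footprint for each use case, adding the medium pod that ONAP itself requires. The totals are simple sums of the per-pod figures from the Server Pod table.

    # Per-pod figures from the Server Pod table: (memory GB, vCPU, storage TB).
    POD_SPECS = {"large": (600, 120, 4), "medium": (200, 80, 2), "small": (40, 24, 1)}

    USE_CASE_PODS = {
        "vFW/vDNS demo": ["small"],
        "vCPE":          ["medium", "medium"],
        "VoLTE":         ["large", "large"],
    }

    def footprint(use_case):
        pods = USE_CASE_PODS[use_case] + ["medium"]  # plus one medium pod for ONAP
        mem, vcpu, tb = map(sum, zip(*(POD_SPECS[p] for p in pods)))
        return f"{use_case}: {len(pods)} pods, {mem}GB RAM, {vcpu} vCPU, {tb}TB storage"

    for uc in USE_CASE_PODS:
        print(footprint(uc))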
