We use lab infrastructure similar to that recommended by the OPNFV project.
ONAP Lab Specification
ONAP Open Labs are collections of dedicated hardware, generally partitioned into PODs of servers. PODs can be used for different kinds of testing: development, CI/CD, ONAP platform testing, E2E use case integration testing, community demos, or interoperability testing with third-party products. Each lab can have 1 to 4 PODs depending on test cases.
This lab specification section describes the recommended hardware and network configuration.
Hardware
An ONAP-lab-compliant test-bed provides:
- One CentOS 7 jump server on which the installer runs
- One or more PODs, depending on the usage scenarios
- A configured network topology allowing for LOM (Lights-Out Management), Admin, Public, Private, and/or Storage networks as needed
- Remote access through VPN or other approach provided by individual labs
- Security through a firewall
Servers
CPU:
- Intel Xeon E5-2658v3 Series or newer, with 12 cores and 24 hyper-threads
Firmware:
- BIOS/EFI compatible for x86-family servers
Local Storage:
The following describes the minimum specification, which is designed to provide enough capacity for a reasonably functional environment. Additional and/or faster disks are nice to have.
- Disks: 1 x 1TB HDD
- The first HDD should be used for the OS and additional software/tool installation
- Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler); a minimal sketch of the TFTP side follows below
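For illustration only, here is a minimal sketch of the TFTP side of such a PXE boot server, assuming the third-party `tftpy` library and a hypothetical `/var/lib/tftpboot` directory; DHCP (the next-server/filename options) is assumed to be handled separately, e.g. by dnsmasq or Cobbler.

```python
# Minimal TFTP server for PXE boot files, using the third-party
# `tftpy` library (pip install tftpy). DHCP is assumed to be handled
# elsewhere, e.g. by dnsmasq or Cobbler as mentioned above.
import tftpy

TFTP_ROOT = "/var/lib/tftpboot"  # assumed path holding pxelinux.0 etc.

server = tftpy.TftpServer(TFTP_ROOT)
# Port 69 is privileged; run as root or redirect a high port to it.
server.listen("0.0.0.0", 69)
```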
Memory:
128GB RAM minimum (the example servers below use 256GB)
Power Supply
A single power supply is acceptable (redundant power is not required, but nice to have)
Networking
Network Hardware
- 48-port TOR switch
- NICs: a combination of 1GE and 10GE, based on the network topology options
- Connectivity for each data/control network is through a separate NIC port or a shared port. A separate port simplifies switch management, but requires more NICs on the server and more switch ports
- BMC (Baseboard Management Controller) for the lights-out management network, using IPMI (Intelligent Platform Management Interface)
Network Options
- Option I: 4x1G Control, 2x10G Data, 48 Port Switch
- 1 x 1G for IPMI Management
- 1 x 1G for Admin/PXE boot
- 1 x 1G for control-plane connectivity
- 1 x 1G for storage
- 2 x 10G for data network (redundancy, NIC bonding)
- Option III: 2x1G Control, 1x10G Data (redundancy nice to have), 1x10G Control or Storage (if needed; redundancy nice to have), 48 Port Switch
- Data NIC used for VNF traffic
- 1 x 1G for IPMI management
- 1 x 1G for Admin/PXE boot
- 1 x 10G for control-plane connectivity/storage (if needed; control plane and storage segmented through VLANs)
- 1 x 10G for data network
Documented configuration to include:
- Subnet, VLANs
- IPs
- Types of network: LOM, public, private, admin, storage
- Default gateways
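As a hedged illustration, these documented values can be kept in a small machine-checkable form using only the Python standard library. The subnets and VLAN below are example values taken from the POD description later in this document; the exact pairing of subnets to networks and the default gateways are hypothetical placeholders.

```python
# Sketch of a machine-readable network plan for one POD. Subnet/VLAN
# figures are examples from the POD description below; the gateways
# and network-name pairings are illustrative assumptions.
import ipaddress

networks = {
    # name: (subnet, vlan, default_gateway)
    "admin":   ("192.168.1.0/24", None, "192.168.1.1"),
    "private": ("192.168.0.0/24", None, "192.168.0.1"),
    "public":  ("172.30.10.0/24", None, "172.30.10.1"),
    "lom":     ("172.30.8.64/26", 300,  "172.30.8.65"),
}

for name, (subnet, vlan, gw) in networks.items():
    net = ipaddress.ip_network(subnet)
    assert ipaddress.ip_address(gw) in net, f"{name}: gateway outside subnet"
    print(f"{name:8s} {net} vlan={vlan} gw={gw} "
          f"usable_hosts={net.num_addresses - 2}")
```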
Remote Management
Remote access is required for …
- Developers to access deploy/test environments (credentials to be issued per POD / user) at 100Mbps upload and download speed
OpenVPN is generally used for remote access; however, community-hosted labs may vary due to company security rules. Please refer to the individual lab documentation/wiki page, as each company may have different access rules and policies.
Basic requirements:
- SSH sessions to be established (initially on the jump server)
- Packages to be installed on a system by pulling from an external repo.
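A minimal sketch of checking both requirements from a developer workstation follows, assuming the third-party `paramiko` SSH library; the hostname, user, and key path are hypothetical placeholders issued per POD / user.

```python
# Sketch: verify the two basic remote-access requirements against a
# jump server, using the third-party `paramiko` SSH library
# (pip install paramiko). Host and credentials are placeholders.
import os
import paramiko

JUMP_HOST = "jumpserver.lab.example.org"   # hypothetical jump server
USER = "onap"                              # per-POD credential
KEYFILE = os.path.expanduser("~/.ssh/id_rsa")

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(JUMP_HOST, username=USER, key_filename=KEYFILE)

# Requirement: packages can be installed by pulling from an external
# repo (the jump server runs CentOS 7, so yum is assumed).
_, stdout, stderr = client.exec_command("yum -q list available git")
print(stdout.read().decode() or stderr.read().decode())
client.close()
```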
Firewall rules accommodate:
- SSH sessions
Lights-out management network requirements:
- Out-of-band management for power on/off/reset and bare-metal provisioning
- Access to server is through a lights-out-management tool and/or a serial console
- Refer to the applicable lights-out management information from the server manufacturer, such as ...
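As a sketch of what out-of-band management looks like in practice, the following shells out to the standard `ipmitool` CLI over the lights-out management network; the BMC address and credentials are hypothetical placeholders to be replaced per server.

```python
# Sketch: out-of-band power control over the lights-out management
# network, shelling out to the standard `ipmitool` CLI. The BMC
# address and credentials below are placeholders.
import subprocess

BMC_HOST = "172.30.8.66"              # hypothetical BMC IP on the LOM network
BMC_USER, BMC_PASS = "admin", "changeme"

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
# ipmi("chassis", "power", "reset")         # reset for bare-metal provisioning
# ipmi("chassis", "bootdev", "pxe")         # boot next from PXE
```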
POD
In the following table, we define three types of POD based on assumed resource usage; the recommended configuration for each is described below. Please note that in lab and real deployment scenarios, resources can be oversubscribed depending on the workload (see the sketch after the table). We also assume that the ONAP platform will be deployed in a separate POD from the VNFs.
Type of POD | Total Memory (GB) for Compute Nodes | Total vCPUs for Compute Nodes | Total Storage (TB) for Compute Nodes | Number of Control Nodes | Number of Compute Nodes |
---|---|---|---|---|---|
Large | 600 | 120 | 4 | 3 | >=2 |
Medium | 200 | 80 | 2 | 3 | >=2 |
Small | 40 | 24 | 1 | 1 | >=1 |
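As a quick worked check of these numbers, here is a minimal sketch, assuming uniform compute servers and an illustrative vCPU oversubscription ratio, that tests whether a set of nodes satisfies a POD type:

```python
# Sketch: does a homogeneous set of compute nodes satisfy a POD type
# from the table above? The vCPU oversubscription ratio is an
# illustrative assumption, not a value mandated by this specification.
POD_TYPES = {
    # type: (memory_gb, vcpus, storage_tb, min_compute_nodes)
    "Large":  (600, 120, 4, 2),
    "Medium": (200,  80, 2, 2),
    "Small":  ( 40,  24, 1, 1),
}

def fits(pod: str, nodes: int, mem_gb: int, threads: int,
         disk_tb: float, vcpu_ratio: float = 1.0) -> bool:
    """True if `nodes` identical servers meet the compute totals for `pod`."""
    need_mem, need_vcpu, need_tb, min_nodes = POD_TYPES[pod]
    return (nodes >= min_nodes
            and nodes * mem_gb >= need_mem
            and nodes * threads * vcpu_ratio >= need_vcpu
            and nodes * disk_tb >= need_tb)

# The hypothetical POD below: 5 compute nodes, each an E5-2658A v3
# (24 hyper-threads) with 256GB RAM and 2TB disk.
print(fits("Large", nodes=5, mem_gb=256, threads=24, disk_tb=2))  # True
```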
As an example, taking the above Large POD as the requirement, we can build a hypothetical POD with the servers listed in the following table:
Hostname | CPU | Memory | Local Storage | IPMI | Admin/PXE | Private | Public | Storage | 10GbE: NIC#, IP, MAC, VLAN, Network |
---|---|---|---|---|---|---|---|---|---|
jumpserver | Intel(R) Xeon(R) CPU E5-2658A v3 @ 2.20GHz | 256G | 2T | MAC, IP, username/passwd | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | p4p1: MAC, IP; p4p2: MAC, IP |
Host1 | Intel(R) Xeon(R) CPU E5-2658A v3 @ 2.20GHz | 256G | 2T | MAC, IP, username/passwd | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | eth1: MAC, IP; eth2: MAC, IP |
Host2 | Intel(R) Xeon(R) CPU E5-2658A v3 @ 2.20GHz | 256G | 2T | MAC, IP, username/passwd | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | eth1: MAC, IP; eth2: MAC, IP |
Host3 | Intel(R) Xeon(R) CPU E5-2658A v3 @ 2.20GHz | 256G | 2T | MAC, IP, username/passwd | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | eth1: MAC, IP; eth2: MAC, IP |
Host4 | Intel(R) Xeon(R) CPU E5-2658A v3 @ 2.20GHz | 256G | 2T | MAC, IP, username/passwd | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | eth1: MAC, IP; eth2: MAC, IP |
Host5 | Intel(R) Xeon(R) CPU E5-2658A v3 @ 2.20GHz | 256G | 2T | MAC, IP, username/passwd | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | eth1: MAC, IP; eth2: MAC, IP |
Host6 | Intel(R) Xeon(R) CPU E5-2658A v3 @ 2.20GHz | 256G | 2T | MAC, IP, username/passwd | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | eth1: MAC, IP; eth2: MAC, IP |
Host7 | Intel(R) Xeon(R) CPU E5-2658A v3 @ 2.20GHz | 256G | 2T | MAC, IP, username/passwd | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | eth1: MAC, IP; eth2: MAC, IP |
Host8 | Intel(R) Xeon(R) CPU E5-2658A v3 @ 2.20GHz | 256G | 2T | MAC, IP, username/passwd | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | Port, MAC, IP | eth1: MAC, IP; eth2: MAC, IP |
The network layout for the above POD, summarized from the original ASCII diagram, is:
- Five networks connect the jump server, 3 controller nodes, and 5 compute nodes (the latter for deploying VoLTE VNFs): IPMI/Lights-out management, Admin (PXE, VLAN 300), Private, Public, and Storage
- Example subnets from the diagram: 172.30.8.64/26, 192.168.1.0/24 (Admin), 192.168.0.0/24 (Private), and 172.30.10.0/24 (Public)
- The jump server (CentOS 7) attaches via enp6 (192.168.1.66), enp7 (192.168.0.66), enp8 (172.30.10.72), and enp9 (storage), with IPMI credentials (user/pass) on the lights-out network