We use lab infrastructure similar to that recommended by the OPNFV project.

ONAP Lab Specification

ONAP Open Labs are collections of dedicated hardware, generally partitioned into PODs with servers. PODs can be used for different kinds of testing, such as development, CI/CD, ONAP platform testing, E2E use case integration testing, community demos, or interoperability testing with third-party products. Each lab can have one to four PODs, depending on its test cases.

The lab specification section provides information on the recommended hardware and network configuration.

...

Overview

An ONAP lab compliant test-bed provides:

  • One CentOS 7 jump server on which the installer runs
  • More than one POD may be delivered, depending on the usage scenarios.
  • A configured network topology allowing for LOM (Lights-out Management), Admin, Public, Private, and/or Storage networks if needed.
  • Remote access through VPN or other approach provided by individual labs
  • Security through a firewall

Hardware

...

Servers 

CPU:

    • Intel Xeon E5-2658v3 Series or newer, with 12 cores and 24 hyper-threads

...

A single power supply is acceptable (redundant power is not required but nice to have)

Networking

Network Hardware

  • TOR Switch
  • Router
  • Others

Networking

    • 48 Port TOR Switch

    • NICs - Combination of 1GE and 10GE based on network topology options

    • Connectivity for each data/control network is through a separate NIC port or a shared port. A separate port simplifies switch management but requires more NICs on the server and more switch ports

    • BMC (Baseboard Management Controller) for lights-out management network using IPMI (Intelligent Platform Management Interface)

Network Options

    • Option I: 4x1G Control, 2x10G Data, 48 Port Switch
      • 1 x 1G for IPMI Management
      • 1 x 1G for Admin/PXE boot
      • 1 x 1G for control-plane connectivity
      • 1 x 1G for storage
      • 2 x 10G for data network (redundancy, NIC bonding)
    • Option III: 2x1G Control, 1x10G Data (redundancy nice to have), 1x10G Control or Storage (if needed, redundancy nice to have), 48 Port Switch
      • Data NIC used for VNF traffic
      • 1 x 1G for IPMI management
      • 1 x 1G for Admin/PXE boot
      • 1 x 10G for control-plane connectivity/storage (if needed, control plane and storage segmented through VLANs)
      • 1 x 10G for data network

Documented configuration to include (an illustrative example follows this list):

    • Subnets, VLANs
    • IPs
    • Network types - LOM, public, private, admin, storage
    • Default gateways
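
As an illustrative sketch only (not part of the specification), the documented configuration for one POD could be captured as a small, machine-readable record; all subnets, VLAN IDs, and gateways below are hypothetical placeholders chosen for the example.

    # Hypothetical documented network configuration for a single POD (Python).
    # Subnets, VLAN IDs, and gateways are illustrative placeholders only.
    pod1_networks = {
        "lom":     {"vlan": 10, "subnet": "10.10.10.0/24", "gateway": "10.10.10.1"},  # IPMI / lights-out
        "admin":   {"vlan": 20, "subnet": "10.10.20.0/24", "gateway": "10.10.20.1"},  # PXE boot / provisioning
        "private": {"vlan": 30, "subnet": "10.10.30.0/24", "gateway": None},          # control plane, not routed
        "storage": {"vlan": 40, "subnet": "10.10.40.0/24", "gateway": None},          # storage traffic
        "public":  {"vlan": 50, "subnet": "192.0.2.0/24",  "gateway": "192.0.2.1"},   # external access
    }

    # Example lookup: which VLAN carries storage traffic in this POD?
    print(pod1_networks["storage"]["vlan"])  # -> 40

Keeping such a record per POD makes it straightforward to verify that switch VLANs, server NIC assignments, and default gateways match what the lab actually wired.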

Sample Network Drawings

Remote Management

Remote access is required for …

...

Lights-out management network requirements (a scripted example follows this list):

    • Out-of-band management for power on/off/reset and bare-metal provisioning
    • Access to servers is through a lights-out management tool and/or a serial console
    • Refer to the applicable lights-out management information from the server manufacturer, such as ...
      • Intel lights-out RMM
      • HP lights-out ILO
      • CISCO lights-out UCS
      • Dell iDRAC
      • Huawei iBMC
      • ZTE IPMI
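
As a non-authoritative sketch, the snippet below shows one way a lab could script basic out-of-band power control through the BMC by wrapping the standard ipmitool client; the BMC address and credentials are placeholders, and labs may equally use the vendor-specific tools listed above.

    # Minimal sketch: out-of-band power control via IPMI using ipmitool (Python wrapper).
    # The BMC address, username, and password are hypothetical placeholders.
    import subprocess

    def ipmi_power(bmc_host, action, user="admin", password="changeme"):
        """Run an ipmitool chassis power command (status/on/off/cycle) against a BMC."""
        cmd = [
            "ipmitool", "-I", "lanplus",  # IPMI-over-LAN (RMCP+) interface
            "-H", bmc_host,               # BMC address on the lights-out management network
            "-U", user, "-P", password,
            "chassis", "power", action,
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Example: query the power state of one server over the LOM network.
    # print(ipmi_power("10.10.10.21", "status"))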


POD


In the following table, we define three types of POD based on resource usage assumptions; the recommended configuration for each POD is described below. We also assume that the ONAP platform will be deployed in a POD separate from the VNFs. Please note that in lab and real deployment scenarios, resources can be oversubscribed depending on the workload.
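
As a rough illustration of oversubscription (the ratio is an assumed example, not a recommendation of this specification), a 4:1 vCPU-to-thread ratio on the CPU class listed above works out as follows.

    # Illustrative oversubscription arithmetic; the 4:1 ratio is an assumption for the example.
    hyper_threads = 24            # per CPU, e.g. the 12-core / 24-thread Xeon E5-2658v3 class above
    oversubscription_ratio = 4    # assumed vCPU : hyper-thread ratio for lab workloads
    vcpus = hyper_threads * oversubscription_ratio
    print(vcpus)                  # -> 96 vCPUs schedulable per CPU at this ratio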

...