...

The lab specification section describes the recommended hardware and network configuration.

Each lab can have one to four pods, depending on the test cases; the recommended configuration for each pod is described below.

Hardware

A lab compliant pod provides:

  • One CentOS 7 jump server on which the installer runs
  • A variety of deployment toolchains that deploy from the jump server
  • 5-8 compute/controller nodes, depending on the use case
  • A configured network topology allowing for LOM/IPMI, Admin (PXE), Public, Private, and Storage networks
  • Remote access through VPN
  • Security through a firewall

Servers

CPU:

    • Intel Xeon E5-2658v3 Series or newer, with 12 cores and 24 hyper-threads

Firmware:

    • BIOS/EFI compatible for x86-family servers

...

The following describes the minimum specification, which is designed to provide enough capacity for a reasonably functional environment; a quick way to verify the disk layout is sketched after the list. Additional and/or faster disks are nice to have.

    • Disks: 2 x 1TB HDD + 1 x 100GB SSD (or greater capacity)
    • The first HDD should be used for OS & additional software/tool installation
    • The second HDD is configured for CEPH object storage
    • The SSD should be used as the CEPH journal
    • Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
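
A minimal sketch of how this disk layout could be verified on a node, assuming the Linux sysfs layout; the device names sda/sdb (HDDs) and sdc (SSD) are hypothetical placeholders to adjust per host:

    # disk_check.py - verify a node's disks against the minimum spec.
    # Assumes Linux sysfs; sda/sdb (HDDs) and sdc (SSD) are hypothetical
    # device names -- adjust to match the actual host.
    MIN_HDD_BYTES = 1 * 10**12   # 1 TB per HDD
    MIN_SSD_BYTES = 100 * 10**9  # 100 GB SSD

    def size_bytes(dev):
        # /sys/block/<dev>/size counts 512-byte sectors
        with open("/sys/block/%s/size" % dev) as f:
            return int(f.read()) * 512

    def is_rotational(dev):
        # "1" = spinning HDD, "0" = SSD
        with open("/sys/block/%s/queue/rotational" % dev) as f:
            return f.read().strip() == "1"

    def check(dev, min_bytes, want_hdd):
        ok = size_bytes(dev) >= min_bytes and is_rotational(dev) == want_hdd
        print("%s: %s" % (dev, "ok" if ok else "below spec or wrong type"))
        return ok

    if __name__ == "__main__":
        all([check("sda", MIN_HDD_BYTES, True),    # OS + software/tools
             check("sdb", MIN_HDD_BYTES, True),    # CEPH object storage
             check("sdc", MIN_SSD_BYTES, False)])  # CEPH journal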

...

    • 48 Port TOR Switch
    • NICs - Combination of 1GE and 10GE based on network topology options (per server can be on-board or use PCI-e)
    • Connectivity for each data/control network is through a separate NIC port or a shared port. A separate port simplifies switch management, but requires more NICs on the server and more switch ports
    • BMC (Baseboard Management Controller) for lights-out management network using IPMI (Intelligent Platform Management Interface)

...


    • Option I: 4x1G Control, 2x10G Data, 48 Port Switch
      • 1 x 1G for IPMI Management
      • 1 x 1G for Admin/PXE boot
      • 1 x 1G for control-plane connectivity
      • 1 x 1G for storage
      • 2 x 10G for data network (redundancy, NIC bonding, high bandwidth testing)
    • Option II: 1x1G Control, 2x10G Data, 24 Port Switch
      • Connectivity to networks is through VLANs on the Control NIC
      • Data NIC used for VNF traffic and storage traffic segmented through VLANs
    • Option III: 2x1G Control, 2x10G Data, 2x10G Storage, 48 Port Switch
      • Data NIC used for VNF traffic
      • Storage NIC used for control-plane and storage traffic, segmented through VLANs (separates host traffic from VNF)
      • 1 x 1G for IPMI management
      • 1 x 1G for Admin/PXE boot
      • 2 x 10G for control-plane connectivity/storage
      • 2 x 10G for data network


Documented configuration to include (an example record follows the list):

    • Subnet, VLANs (may be constrained by existing lab setups or rules)
    • IPs
    • Types of networks - IPMI, public, private, admin, storage
    • Default gateways
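
As a minimal sketch, one way to capture this record is a small Python structure per pod; every subnet, VLAN ID, and gateway below is a hypothetical placeholder, not a prescribed addressing plan:

    # pod_networks.py - example of a documented pod network configuration.
    # All VLANs, subnets, and gateways are hypothetical placeholders.
    POD1_NETWORKS = {
        "ipmi":    {"vlan": 101, "subnet": "192.168.10.0/24", "gateway": "192.168.10.1"},
        "admin":   {"vlan": 102, "subnet": "192.168.11.0/24", "gateway": "192.168.11.1"},  # PXE
        "public":  {"vlan": 103, "subnet": "10.10.50.0/24",   "gateway": "10.10.50.1"},
        "private": {"vlan": 104, "subnet": "192.168.12.0/24", "gateway": None},  # no external route
        "storage": {"vlan": 105, "subnet": "192.168.13.0/24", "gateway": None},
    }

    for name, net in POD1_NETWORKS.items():
        print(name, net["vlan"], net["subnet"], net["gateway"])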

...

    • Developers to access deploy/test environments (credentials to be issued per pod/user) at 100Mbps upload and download speed
    • Connection of each environment to Jenkins master hosted by Linux Foundation for automated deployment and test

OpenVPN is generally used for remote access, although community-hosted labs may vary due to company security rules. For POD access rules and restrictions, please refer to the individual lab documentation, as each company may have different access rules and acceptable usage policies.

Basic requirements:

    • SSH sessions to be established (initially on the jump server)
    • Packages to be installed on a system (tools or applications) by pulling from an external repo (a verification sketch follows the list)
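
A minimal sketch of verifying both requirements from a developer machine, assuming the paramiko SSH library; the jump server name, user, and key file are hypothetical placeholders issued by the lab owner:

    # jump_access.py - check SSH access to the jump server and that packages
    # can be pulled from an external repo. Host, user, and key file are
    # hypothetical placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("pod1-jump.example.org", username="opnfv",
                   key_filename="/home/dev/.ssh/id_rsa")           # requirement 1: SSH session
    _, stdout, _ = client.exec_command("sudo yum install -y git")  # requirement 2: external repo
    status = stdout.channel.recv_exit_status()
    client.close()
    print("repo access ok" if status == 0 else "check VPN/firewall/proxy")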

...

    • Out-of-band management for power on/off/reset and bare-metal provisioning (a scripted example follows this list)
    • Access to the server is through a lights-out-management tool and/or a serial console
    • Refer to the applicable lights-out management information from the server manufacturer, such as:
      • Intel lights-out RMM
      • HP lights-out ILO
      • CISCO lights-out UCS
      • Dell iDRAC
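
As a minimal sketch, the standard ipmitool CLI can drive these operations from the jump server; the BMC address and credentials below are hypothetical placeholders:

    # oob_power.py - thin wrapper around the standard ipmitool CLI for
    # out-of-band power control and PXE provisioning. BMC address and
    # credentials are hypothetical placeholders.
    import subprocess

    def ipmi(*args):
        cmd = ["ipmitool", "-I", "lanplus", "-H", "192.168.10.11",
               "-U", "admin", "-P", "changeme"] + list(args)
        return subprocess.check_output(cmd).decode()

    print(ipmi("power", "status"))      # query current power state
    ipmi("chassis", "bootdev", "pxe")   # boot next from PXE (bare-metal provisioning)
    ipmi("power", "reset")              # power-cycle the node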


A pod example for the VoLTE use case:

...