Introduction
The Swisscom virtual BNG and Edge SDN M&C for the ONAP BBS use-case is a demo system. It is a functional prototype and is not meant for production use. The whole system runs in a single OpenStack VM and can therefore only support a small number of subscribers. The forwarding dataplane is implemented entirely inside the VM's networking stack and is therefore not designed for high data rates. The system interacts with ONAP according to the BBS use-case definition.
Below is a diagram of the whole end-to-end architecture; the Edge SDN M&C + vBNG VM is highlighted:
As shown in the diagram above, the DHCP and dataplane traffic are brought to the VM by VxLAN tunneling. Since it is just routed L3 traffic, DHCP traffic does not strictly require VxLAN; for the sake of not mixing two tunnel technologies, VxLAN tunneling is used for both DHCP and dataplane traffic. The remaining question is whether the OLT supports native VxLAN tunneling. In our case it does not, so a VxLAN tunnel encapsulation device is required (see the transport middle boxes above). Those boxes are very simple to set up; the very end of this document describes how to build such a middle box.
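For illustration only, such a middle box can be built from plain Linux with iproute2: a VxLAN interface towards the vBNG, bridged to the physical port facing the OLT. The interface names, VNI and remote IP below are placeholder assumptions; the actual recipe is given at the end of this document.

```
# Minimal sketch of a Linux VxLAN encapsulation middle box (all values are placeholders).
# eth0 carries the VxLAN/UDP traffic towards the vBNG VM, eth1 faces the OLT.
ip link add vxlan100 type vxlan id 100 remote 192.0.2.10 dstport 4789 dev eth0
ip link add br-olt type bridge
ip link set vxlan100 master br-olt
ip link set eth1 master br-olt
ip link set vxlan100 up
ip link set eth1 up
ip link set br-olt up
```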
Installation by Heat Template
The Swisscom virtual BNG and Edge SDN M&C is installed and initially set up by a single Heat Orchestration Template, so the OpenStack cloud you would like to use for testing needs to support Orchestration with Heat. The complete initial vBNG + Edge SDN M&C configuration is provided as Heat stack parameters. The Heat stack creates all the required OpenStack infrastructure, e.g. router, network, security group, port and the vBNG instance.
Once the stack is created, the vBNG configuration file is deployed to the instance at '$HOME/vbng.conf' and the latest vBNG code is pulled from the specified upstream git repository (by default the Swisscom repository) by the cloud-init user-data script. The stack output shows the initial vBNG configuration, the floating IP and how to connect to the instance by SSH. To create a stack in OpenStack Heat you first need the Heat template from here: https://git.swisscom.com/projects/ZTXGSPON/repos/opnfv/browse/heat/vbng.yaml (drop a note to Michail Salichos, David Perez Caparros or dbalsige in case you do not have access).
Option A) Upload template in Horizon
The stack can be created directly in OpenStack Horizon by:
- Navigating to 'Orchestration -> Stacks' in the sidebar
- Pressing the 'Launch Stack' button
- In 'Template Source' select to upload the template file 'vbng.yaml' and press 'Next'.
- Heat now asks for the stack input parameters; they can be set by entering the desired values in the Horizon form.
The template defines a hopefully useful default for each parameter, so not much has to be changed for the Swisscom Lab installation. However, a few things such as image, flavor and key have to be selected in the drop-down menus. All of the initial configuration can also be changed there. For a full list of the supported stack parameters see the appendix below.
Option B) OpenStack Commandline Client
Source the openrc.sh file of your OpenStack tenant and create the heat stack the following way:
source Downloads/vbng-openrc.sh
Please enter your OpenStack Password for project vBNG as user bng:
openstack stack create -t Downloads/vbng.yaml vbngstack
In case you would like to override default parameters with your custom values, add '--parameter <key=value>', e.g.:
openstack stack create -t Downloads/vbng.yaml vbngstack --parameter key='your_key' --parameter flavor='your_flavor' --parameter image='your_image'
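Once the stack is created, its status and the outputs mentioned above (initial vBNG configuration, floating IP, SSH hint) can be read back with the standard OpenStack client, for example:

```
# Check that stack creation completed successfully
openstack stack show vbngstack -c stack_status

# Show all stack outputs (initial vBNG configuration, floating IP, SSH connection hint)
openstack stack output show vbngstack --all
```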
A full list of the supported stack parameters is shown in the following table:
Appendix: Stack Parameters
Key | Default Value | Description | Notes |
---|---|---|---|
OpenStack Settings | |||
key | vbng | Name of the SSH keypair for logging in to the instance | constraint: nova.keypair |
image | "CentOS 7 x86_64 GenericCloud 1901" | Name of the Glance image. Supported are the upstream cloud images for Ubuntu 16.04 / Ubuntu 18.04 / CentOS 7 | constraint: glance.image |
flavor | a1.tiny | Flavor to use for the instance. A small one is sufficient (1 vCPU / 4 GB RAM / 10 GB disk) | constraint: nova.flavor |
extnet | external | Name of external network | This is the existing OpenStack external network containing the floating IPs |
int_cidr | 192.168.1.0/24 | Internal Network IPv4 Addressing in CIDR notation | Can be anything in the private IP space if your OpenStack supports overlapping IP tenant ranges. |
dns1 | 8.8.8.8 | DNS server 1 for internal network | DNS server 1 the OpenStack VMs will use |
dns2 | 8.8.4.4 | DNS server 2 for internal network | DNS server 2 the OpenStack VMs will use |
vBNG Git Repository Settings | |||
git_repo | ssh://git@git.swisscom.com:7999/ztxgspon/vbng.git | Virtual BNG Git Repository URL (ssh://) | This repository holds the vbng code and is cloned by cloud-init |
git_sshkey | NOT SHOWN HERE | SSH Private Key for Git Repository (Read-Only Access) | For cloud-init read-only access |
git_hostkey | NOT SHOWN HERE | SSH Host Key for Git Host (git.swisscom.com) | |
vBNG Settings | |||
cust_cidr | 10.66.0.0/16 | Customer IPv4 Network in CIDR notation | The network for your subscribers |
cust_gw | 10.66.0.1 | Customer IPv4 Network Gateway | The IPv4 gateway your subscribers will use |
cust_dns | 8.8.8.8 | Customer DNS Server | The DNS server your subscribers will use |
cust_start | 10.66.1.1 | Customer IPv4 Range Start Address | Subscriber IP range for DHCP |
cust_end | 10.66.1.254 | Customer IPv4 Range End Address | Subscriber IP range for DHCP |
dhcp_cidr | 172.24.24.0/24 | DHCP Server / Relay Network in CIDR notation | The network between the DHCP server and the DHCP L3 relay on the OLT. |
dhcp_ip | 172.24.24.1 | DHCP Server IPv4 Address | The DHCP server binds/listens on this address |
in_tun_port | 4789 | UDP Port for incoming VxLAN Tunnels | For incoming VxLAN UDP packets. Used to configure OpenStack Security Groups |
onap_dcae_ves_collector_url | http://172.30.0.126:30235/eventListener/v7 | ONAP DCAE VES Collector URL | The URL the VES agent is streaming VES to |
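As a purely illustrative example, the vBNG-specific defaults from the table above can be overridden at stack creation time just like the OpenStack settings; the addresses and collector IP below are made up:

```
openstack stack create -t Downloads/vbng.yaml vbngstack \
  --parameter cust_cidr='10.77.0.0/16' \
  --parameter cust_gw='10.77.0.1' \
  --parameter cust_start='10.77.1.1' \
  --parameter cust_end='10.77.1.254' \
  --parameter onap_dcae_ves_collector_url='http://<dcae-collector-ip>:30235/eventListener/v7'
```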
vBNG Initial Configuration by cloud-init
Once the stack has been created by Heat, the cloud-init user-data script checks out the vbng git repository and runs the scripts 00-installdeps.sh, 01-setupdatapath.sh and 02-setupcontainers.sh contained in the repository. The parameters passed to them are kept in $HOME/vbng.conf. Once cloud-init has finished its job, it creates the file $HOME/vbng_provisioning_done on your instance. Logs are kept in /var/log/cloud-init-output.log. You may re-run those scripts as many times as you wish; work will only be done once. For example, you have to re-run these 3 scripts on instance reboot. Keep in mind that a reboot may be required in case kernel updates were installed.
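For example, to check the provisioning status on the instance and to re-run the scripts after a reboot (a minimal sketch; it assumes the scripts pick up their parameters from $HOME/vbng.conf without further arguments):

```
# Follow the cloud-init provisioning log and check for the completion marker
tail -f /var/log/cloud-init-output.log
ls -l $HOME/vbng_provisioning_done

# Re-run the provisioning scripts, e.g. after an instance reboot (completed work is skipped)
vbng/00-installdeps.sh
vbng/01-setupdatapath.sh
vbng/02-setupcontainers.sh
```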
vbng/00-installdeps.sh
Update the system, install the required dependencies, install and set up Docker.
vbng/01-setupdatapath.sh
Set up the datapath part, including shaping, routing and NAT.
vbng/02-setupcontainers.sh
Create docker images and start all containers: Database, Message Queue, Restconf Server, VES Agent and DHCP Server.
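To verify that all containers came up, standard Docker commands can be used. Apart from bbs-edge-dhcp-server and bbs-edge-restconf-server the container names are not listed in this document, so treat the name below as an example:

```
# List the running containers (database, message queue, restconf server, VES agent, DHCP server)
docker ps --format 'table {{.Names}}\t{{.Status}}'

# Follow the logs of a single container, e.g. the DHCP server
docker logs -f bbs-edge-dhcp-server
```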
OLT Onboarding Configuration
OLT onboarding configuration is not done by cloud-init, since the OLT parameters are normally not known at stack creation time. For OLT onboarding, the two tunnels for datapath and DHCP transport as well as the DHCP L3 relay on the OLT have to be configured. Therefore another script is used once the vbng instance has been provisioned initially:
vbng/03-setupolt.sh
The script accepts exactly 8 parameters to specify the tunnel and DHCP relay options. Already configured OLTs are kept in $HOME/oltmap.txt.
The parameters are:
vxlan_data_ip: The IP Address of the VxLAN remote tunnel endpoint for OLT datapath
vxlan_data_port: The UDP Port of the VxLAN remote tunnel endpoint for OLT datapath
vxlan_data_vni: The VNI of the VxLAN remote tunnel endpoint for OLT datapath
vxlan_dhcp_ip: The IP Address of the VxLAN remote tunnel endpoint for DHCP server / relay traffic
vxlan_dhcp_port: The UDP Port of the VxLAN remote tunnel endpoint for DHCP server / relay traffic
vxlan_dhcp_vni: The VNI of the VxLAN remote tunnel endpoint for DHCP server / relay traffic
relay_north_ip: The Northbound IP of the L3 DHCP relay on the OLT. (Where the DHCP server routes its replies to)
relay_south_ip: The Southbound IP of the L3 DHCP relay on the OLT. (Where the DHCP replies are injected into datapath)
[centos@vbng ~]$ vbng/03-setupolt.sh
Usage: vbng/03-setupolt.sh [vxlan_data_ip] [vxlan_data_port] [vxlan_data_vni] \
                           [vxlan_dhcp_ip] [vxlan_dhcp_port] [vxlan_dhcp_vni] \
                           [relay_north_ip] [relay_south_ip]
[centos@vbng ~]$ vbng/03-setupolt.sh 172.30.0.252 4789 88888 172.30.0.253 4789 100 172.24.24.2 10.66.0.2
Setting up VxLAN tunnel interface olt0 (172.30.0.252:4789 VNI=88888)
Setting up VxLAN tunnel interface dhcp0 (172.30.0.253:4789 VNI=100)
Adding port dhcp0 to bride dhcp...
Adding relay route to 10.66.0.2 over 172.24.24.2 inside bbs-edge-dhcp-server container...
[centos@vbng ~]$
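The tunnel interfaces created by the script and the OLT bookkeeping file can be inspected afterwards:

```
# Show the VxLAN tunnel details of the interfaces created above
ip -d link show olt0
ip -d link show dhcp0

# List the OLTs that have already been onboarded
cat $HOME/oltmap.txt
```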
ONT/Subscriber Configuration
Subscribers are usually configured by calls to bbs-edge-restconf-server directly from ONAP. In case you would like to test this functionality you can of course trigger this directly with curl to the floating IP, TCP port 5000 of the vbng instance:
curl -H "Content-Type: application/json" -X POST -d '{"remote_id":"AC9.000.990.001","ont_sn":"serial","service_type":"Internet","mac":"00:00:00:00:00:00","service_id":"1","up_speed":"100","down_speed":"100","s_vlan":10,"c_vlan":333}' 172.30.0.134:5000/CreateInternetProfileInstance
Important parameters are: "remote_id":"AC9.000.990.001", "s_vlan":10, "c_vlan":333. Of course the configured values must match what the OLT/ONT in the lab sends. DHCP authentication is done solely on a correct remote_id value. Once the subscriber is successfully authenticated and given a lease by DHCP, the dataplane configuration is delegated to a host process by publishing a message to the queue. The host process consumes the message from the queue and configures the subscriber's dataplane with the help of these two scripts:
- vbng/04-setupcustomer.sh
- Enable a particular customer
- Usage: vbng/04-setupcustomer.sh [olt_id] [s-vlan] [c-vlan] [customer_ip] [traffic_profile_id]
- vbng/05-removecustomer.sh
- Remove a particular customer
- Usage: vbng/05-removecustomer.sh [customer_ip]
Currently only 4 subscriber profiles are supported (1/2/3/4): 2 * 100 Mbit/s symmetrical and 2 * 20 Mbit/s symmetrical, respectively. This should be enough to run all test cases of the BBS use-case.
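For manual testing outside of the message-queue path, the two scripts can also be invoked directly. The values below are purely illustrative; the olt_id value in particular is an assumption, as its format is not specified in this document:

```
# Illustrative only: enable a subscriber on S-VLAN 10 / C-VLAN 333 with traffic profile 1
vbng/04-setupcustomer.sh <olt_id> 10 333 10.66.1.23 1

# Remove the same subscriber again
vbng/05-removecustomer.sh 10.66.1.23
```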
ONAP Configuration
The installation and initial configuration of Edge SDN M&C + vBNG is done by a Heat stack template, see above. The parameters which must be modified in ONAP are the following:
- The IP of Edge SDN M&C, needed so it can be accessed from SDN-C, is currently hardcoded in the DG -> GENERIC-RESOURCE-API_bbs-internet-profile-network-topology-operation-common-huawei.json (<parameter name='prop.sdncRestApi.thirdpartySdnc.url' value='http://172.30.0.121:5000' />). The Edge SDN M&C external controller is not registered in ESR for this release. Note: The IP above is provided by the Heat stack output; it is the floating IP of the vBNG instance in Swisscom's lab.
- To update the IP of Edge SDN M&C in the corresponding DG, one must export the relevant DG mentioned earlier, update the IP, import it back and finally enable the DG.
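A minimal sketch of the IP update step on the exported DG file, assuming it has been exported to the local working directory (export, import and enabling are still done with the usual DG tooling):

```
# Replace the hardcoded Edge SDN M&C URL with the floating IP from the Heat stack output
sed -i 's|http://172.30.0.121:5000|http://<vbng-floating-ip>:5000|' \
  GENERIC-RESOURCE-API_bbs-internet-profile-network-topology-operation-common-huawei.json
```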