Background
The main purpose of this feature is to place VNFCs in sites that have SRIOV-NIC enabled compute nodes.
If there are no sites with available SRIOV-NIC compute nodes, OOF can choose the next-best compute node flavors. The next-best compute nodes may not have SRIOV-NICs; in that case the VNFCs are assumed to have been tested by vendors both with and without SRIOV-NICs.
The parameters used to instantiate VNFs differ depending on whether the NFVI uses SRIOV-NIC based switching or normal vSwitch based switching.
In Openstack based cloud-regions, a port for SRIOV-NIC switching is expected to be created explicitly with binding:vnic_type set to 'direct'. For vSwitch based switching there is no need to create the port explicitly, but if one is created its binding:vnic_type should be 'normal'. So, based on the selected flavor, the appropriate value is expected to be passed when talking to Openstack.
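Below is a minimal sketch of that choice using the openstacksdk (not actual ONAP code; the cloud name, the network name and the selection flag are assumptions for illustration):

import openstack

# Hypothetical flag: whether OOF homed the VNFC onto an SRIOV-NIC capable flavor.
sriov_flavor_selected = True
vnic_type = "direct" if sriov_flavor_selected else "normal"

conn = openstack.connect(cloud="site1")              # assumed clouds.yaml entry
net = conn.network.find_network("vfw_private_net")   # assumed network name

# openstacksdk exposes binding:vnic_type as binding_vnic_type
port = conn.network.create_port(network_id=net.id, binding_vnic_type=vnic_type)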
Some additional requirements also need to be considered:
- A given VNFC might require multiple SRIOV VFs assigned to it from different PCIe NIC cards.
- A given VNFC might require VFs from different types of PCIe NIC cards (for example Intel, Mellanox, etc.).
- A given VNFC might require these VFs coming from different provider networks.
These requirements can be satisfied only if there are compute nodes with that kind of NIC hardware, and only if ONAP can discover this from the various cloud sites.
Overall design
(Gliffy diagram of the overall design.)
1. Scenario
Let us say a site has three kinds of compute nodes with respect to SRIOV-NICs:
The 1st set contains, say, SRIOV-NIC cards of type XYZ (PCIe vendor ID: 1234, device ID: 5678) and YUI (PCIe vendor ID: 2345, device ID: 6789).
The 2nd set contains, say, SRIOV-NIC cards of type ABC (PCIe vendor ID: 4321, device ID: 8765).
The 3rd set does not contain any SRIOV-NIC cards.
1.1 Openstack Config SRIOV
Openstack configuration:
1. For NIC configuration, refer to https://docs.openstack.org/neutron/pike/admin/config-sriov.html
2. An example of a site having three types of compute nodes: the 1st set of compute nodes has two SRIOV-NIC cards with vendor/device IDs 1234/5678 and 2345/6789; the 2nd set of compute nodes has two SRIOV-NICs of the same type, 4321/8765; and the 3rd set of compute nodes has no SRIOV-NIC cards. Hence the Openstack administrator at the site creates three flavors to reflect the hardware the site has. As this example shows, the host aggregate naming convention is expected to be followed; the value is of the form "sriov-nic-<vendor name>-<vendor ID>-<device ID>-<provider network>:<count>".
Flavor1
$ openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-1234-5678-physnet1:1 aggr11
$ openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-2345-6789-physnet2:1 aggr12
$ openstack flavor create flavor1 --id auto --ram 512 --disk 40 --vcpus 4
$ openstack flavor set flavor1 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-1234-5678-physnet1:1
$ openstack flavor set flavor1 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-2345-6789-physnet2:1
Flavor2
$ openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-4321-8765-physnet3:1 aggr21
$ openstack flavor create flavor2 --id auto --ram 512 --disk 40 --vcpus 4
$ openstack flavor set flavor2 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-4321-8765-physnet3:1
Flavor3
$ openstack flavor create flavor3 --id auto --ram 512 --disk 40 --vcpus 4
1.2 Multi-cloud discovery
When Multi-Cloud reads the flavor information from an Openstack site, if the value of the sriov_nic extra spec starts with "sriov-nic", it assumes the flavor provides an SRIOV NIC.
The following fields give the vendor name, the PCI vendor ID and the PCI device ID.
If a further field is present after the device ID, it is assumed to be the provider network.
As part of discovery, Multi-Cloud populates A&AI with two PCIe features for flavor1:
hpa-feature = "pciePassthrough",
architecture = "{hw_arch}",
version = "v1"
Hpa-attribute-key | Hpa-attribute-value |
---|---|
pciVendorId | 1234 |
pciDeviceId | 5678 |
pciCount | 1 |
directive | [ {"attribute_name": "vnic-type", "attribute_value": "direct"}, {"attribute_name": "physical-network", "attribute_value": "physnet1"} ] |
hpa-feature = "pciePassthrough",
architecture = "{hw_arch}",
version = "v1"
Hpa-attribute-key | Hpa-attribute-value |
---|---|
pciVendorId | 2345 |
pciDeviceId | 6789 |
pciCount | 1 |
directive | [ {"attribute_name": "vnic-type", "attribute_value": "direct"}, {"attribute_name": "physical-network", "attribute_value": "physnet2"} ] |
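Below is a minimal sketch of the parsing described above (a hypothetical helper, not the actual Multi-Cloud code), turning the sriov_nic extra-spec value from section 1.1 into the pciePassthrough attributes that are populated into A&AI:

def parse_sriov_nic_extra_spec(value):
    # Parse e.g. "sriov-nic-intel-1234-5678-physnet1:1".
    spec, _, count = value.partition(":")
    fields = spec.split("-")
    if fields[:2] != ["sriov", "nic"]:
        raise ValueError("not an SRIOV-NIC extra spec")
    _, _, _vendor_name, vendor_id, device_id, *rest = fields
    physnet = rest[0] if rest else None
    attributes = [
        {"hpa-attribute-key": "pciVendorId", "hpa-attribute-value": vendor_id},
        {"hpa-attribute-key": "pciDeviceId", "hpa-attribute-value": device_id},
        {"hpa-attribute-key": "pciCount", "hpa-attribute-value": count or "1"},
    ]
    if physnet:
        attributes.append({
            "hpa-attribute-key": "directive",
            "hpa-attribute-value": [
                {"attribute_name": "vnic-type", "attribute_value": "direct"},
                {"attribute_name": "physical-network", "attribute_value": physnet},
            ],
        })
    return {"hpa-feature": "pciePassthrough", "version": "v1",
            "hpa-feature-attributes": attributes}

print(parse_sriov_nic_extra_spec("sriov-nic-intel-1234-5678-physnet1:1"))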
1.3 SO call OOF
SO will get pciVendorId, pciDeviceId and interfaceType from the CSAR file and then call OOF. OOF returns the homing information to SO; SO does not interpret it and simply passes it through to Multi-Cloud.
HOT template parameters are filled in based on the OOF output (see the example parameter under section 1.4).
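As an illustration of this pass-through, a minimal sketch (hypothetical helper, not actual SO code) of flattening the OOF vnfc directives into the HOT parameters map:

def directives_to_hot_parameters(vnfc_directives):
    # Each attribute_name is expected to match a HOT parameter name defined by the admin,
    # so SO can substitute attribute_value without understanding what the value means.
    params = {}
    for vnfc in vnfc_directives:
        for directive in vnfc["directives"]:
            for attr in directive["attributes"]:
                params[attr["attribute_name"]] = attr["attribute_value"]
    return params

# Illustrative input in the format of the oofDirectives sample in section 1.5;
# the attribute names follow the policy example in section 1.6.
example = [{"vnfc_id": "", "directives": [
    {"directive_name": "vnic-info1", "attributes": [
        {"attribute_name": "oof_returned_vnic_type_for_firewall_protected", "attribute_value": "direct"},
        {"attribute_name": "oof_returned_provider_network_for_firewall_protected", "attribute_value": "physnet1"}]}]}]
print(directives_to_hot_parameters(example))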
1.4 VF-C call OOF
VF-C will get pciVendorId, pciDeviceId and interfaceType from the CSAR file and then call OOF. OOF returns the homing information to VF-C.
HOT template parameter that is filled in based on the OOF output (example):
parameters:
  oof_returned_vnic_type_for_firewall:
    type: string
    description: This parameter value is determined by OOF. If OOF selects the region and flavor that support SRIOV-NICs, then OOF returns 'direct'. If not, it returns 'normal'.
...
When VF-C gets the OOF response, it creates the SRIOV-NIC network using the returned physical network. Then it creates the port on that network using the returned interface type.
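A minimal sketch of that flow with the openstacksdk (the cloud name, network name, segmentation type and CIDR are assumptions; this is not the actual VF-C code):

import openstack

# Values as they would come from the OOF vnic-info directives (see section 1.5).
provider_network = "physnet1"
vnic_type = "direct"          # OOF converts interfaceType SRIOV-NIC to 'direct'

conn = openstack.connect(cloud="site1")   # assumed clouds.yaml entry

# Create the SRIOV provider network on the physical network returned by OOF.
net = conn.network.create_network(
    name="sriov_net_physnet1",
    provider_network_type="vlan",               # assumed segmentation type
    provider_physical_network=provider_network,
)
conn.network.create_subnet(network_id=net.id, ip_version=4,
                           cidr="192.168.10.0/24")   # assumed CIDR

# Create the port with the vnic-type returned by OOF; 'direct' attaches an SRIOV VF.
port = conn.network.create_port(network_id=net.id, binding_vnic_type=vnic_type)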
1.5 OOF Response
OOF checks /cloud-infrastructure/cloud-regions/cloud-region/{cloud-owner}/{cloud-region-id}/flavors/flavor/{flavor-id}/hpa-capabilities in A&AI.
OOF matches the SRIOV information against the constraints provided by Policy and adds extra attributes inside the assignmentInfo data block when returning the response to SO or VF-C.
A sample looks like the following:
"assignmentInfo": [
{ "key":"locationId",
"value":"DLLSTX1A" },
{ "key":"locationType",
"value":"openstack-cloud" },
{ "key":"vimId",
"value":"rackspace_DLLSTX1A" },
{ "key":"oofDirectives",
"value":{
"directives":[
For the newly added oofDirectives, we only return the vnfc part. For example:
"flavorLabel": "flavor_label_1",
"sriovNICLabel": "oof_returned_vnic_type_1
"flavorProperties":[
{
"hpa-feature" : "pciePassthrough",
"mandatory" : "True",
"architecture": "generic",
"hpa-feature-attributes": [
{"hpa-attribute-key":"pciVendorId", "hpa-attribute-value": "1234","operator": "=", "unit": ""},
{"hpa-attribute-key":"pciDeviceId", "hpa-attribute-value": "5678","operator": "=", "unit": ""},
{"hpa-attribute-key":"pciCount", "hpa-attribute-value": "1","operator": "=", "unit": ""},
{"hpa-attribute-key":"cardType "hpa-attribute-value": "sriov-nic","operator": "=", "unit": ""},
{"hpa-attribute-key":"providerNetwork "hpa-attribute-value": "physnet1","operator": "=", "unit": ""},
]
},
{
"hpa-feature" : "pciePassthrough",
"mandatory" : "True",
"architecture": "generic",
"hpa-feature-attributes": [
{"hpa-attribute-key":"pciVendorId", "hpa-attribute-value": "2345","operator": "=", "unit": ""},
{"hpa-attribute-key":"pciDeviceId", "hpa-attribute-value": "6789","operator": "=", "unit": ""},
{"hpa-attribute-key":"pciCount", "hpa-attribute-value": "1","operator": "=", "unit": ""},
{"hpa-attribute-key":"cardType "hpa-attribute-value": "sriov-nic","operator": "=", "unit": ""},
{"hpa-attribute-key":"providerNetwork "hpa-attribute-value": "physnet2","operator": "=", "unit": ""},
]
},
...
"vnfc_directives": [ { "vnfc_id":"", "directives":[ { "directive_name": "flavor_directive", "attributes": [ { "vnfc_directives":[ }, { "directive_name": "<Name of directive,example vnic-info>info2", ] } ] |
For the newly added oofDirectives, we only return the vnfc part. For example:
...
"vnfc_directives": [
{ "vnfc_id":"",
"directives":[
{ "directive_name": "flavor_directive",
"attributes": [
{"attribute_name": "flavor_label_1", "attribute_value":"HPA.flavor"}
]
},
{ "directive_name": "vnic-info1",
"attributes": [
{"attribute_name": "vnic-type", "attribute_value":"direct"},
{"attribute_name": "provider_network", "attribute_value":"physnet1"}
]
},
{ "directive_name": "vnic-info2",
"attributes": [
{"attribute_name": "vnic-type", "attribute_value":"direct"},
{"attribute_name": "provider_network", "attribute_value":"physnet2"}
]
}
]
}
]
It is worth noting that the vnic-type is converted from the interfaceType by OOF.
If interfaceType is SRIOV-NIC, OOF returns 'vnic-type' as 'direct'; if interfaceType is not SRIOV-NIC, OOF returns 'vnic-type' as 'normal'.
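The conversion rule, as a one-line sketch:

def to_vnic_type(interface_type):
    # OOF maps the requested interfaceType onto the Openstack binding:vnic_type value.
    return "direct" if interface_type == "SRIOV-NIC" else "normal"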
1.6 Policy Data
#
#Example 1: vFW, Pcie Passthrough
#one VNFC(VFC) with two Pcie Passthrough requirements
#
{
"service": "hpaPolicy",
"policyName": "oofCasablanca.hpaPolicy_vFW",
"description": "HPA policy for vFW",
"templateVersion": "0.0.1",
"version": "1.0",
"priority": "3",
"riskType": "test",
"riskLevel": "2",
"guard": "False",
"content": {
"resources": "vFW",
"identity": "hpaPolicy_vFW",
"policyScope": ["vFW", "US", "INTERNATIONAL", "ip", "vFW"],
"policyType": "hpaPolicy",
"flavorFeatures": [
{
"id" : "<vdu.Name>",
"type":"vnfc/tocsa.nodes.nfv.Vdu.Compute",
"directives":[
{
"directive_name":"flavor",
"attributes":[
{
"attribute_name":" oof_returned_flavor_for_firewall ", //Admin needs to ensure that this value is same as flavor parameter in HOT
"attribute_value": "<Blank>"
}
]
}
],
"flavorProperties": [
{
"hpa-feature": "pciePassthrough",
"mandatory": "True",
"architecture": "generic",
"directives" : [
{
"directive_name": "pciePassthrough_directive",
"attributes": [
{ "attribute_name": "oof_returned_vnic_type_for_firewall_protected",
"attribute_value": "direct"
},
{ "attribute_name": "oof_returned_provider_network_for_firewall_protected",
"attribute_value": "physnet1"
}
]
}
],
"hpa-feature-attributes": [
{ "hpa-attribute-key": "pciVendorId", "hpa-attribute-value": "1234", "operator": "=", "unit": "" },
{ "hpa-attribute-key": "pciDeviceId", "hpa-attribute-value": "5678", "operator": "=", "unit": "" },
{ "hpa-attribute-key": "pciCount", "hpa-attribute-value": "1", "operator": ">=", "unit": "" }
]
},
{
"hpa-feature": "pciePassthrough",
"mandatory": "True",
"architecture": "generic",
"directives" : [
{
"directive_name": "pciePassthrough_directive",
"attributes": [
{ "attribute_name": "oof_returned_vnic_type_for_firewall_unprotected",
"attribute_value": "direct"
},
{ "attribute_name": "oof_returned_provider_for_firewall_unprotected",
"attribute_value": "physnet2"
}
]
}
],
"hpa-feature-attributes": [
{ "hpa-attribute-key": "pciVendorId", "hpa-attribute-value": "3333", "operator": "=", "unit": "" },
{ "hpa-attribute-key": "pciDeviceId", "hpa-attribute-value": "7777", "operator": "=", "unit": "" },
{ "hpa-attribute-key": "pciCount", "hpa-attribute-value": "1", "operator": ">=", "unit": "" }
]
}
]
}
]
}
}
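Given a policy like the one above and the flavor hpa-capabilities stored in A&AI (section 1.2), the check OOF performs can be sketched as follows (hypothetical helpers, not the actual OOF/HAS code; only the '=' and '>=' operators used in the example are handled):

def attribute_matches(policy_attr, flavor_attrs):
    # Compare one policy hpa-feature-attribute against the flavor's A&AI attributes.
    key = policy_attr["hpa-attribute-key"]
    wanted = policy_attr["hpa-attribute-value"]
    op = policy_attr.get("operator", "=")
    for attr in flavor_attrs:
        if attr["hpa-attribute-key"] != key:
            continue
        have = attr["hpa-attribute-value"]
        if op == "=":
            return str(have) == str(wanted)
        if op == ">=":
            return int(have) >= int(wanted)
    return False

def flavor_satisfies(policy_feature, flavor_feature):
    # A flavor hpa-capability satisfies a policy flavorProperty when every
    # attribute constraint in the policy is met by the flavor.
    return all(attribute_matches(a, flavor_feature["hpa-feature-attributes"])
               for a in policy_feature["hpa-feature-attributes"])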
2. ONAP Module Modify
Module Name | Modification | Status | Owner | Comments |
---|---|---|---|---|
SDC | Add SR-IOV NIC attributes. | Completed | Alex Lianhao | |
Policy | Add SR-IOV NIC attributes. | In Progress | Libo | |
VF-C | Add create port process. | In Progress | Haibin | |
SO | Add create port process. | In Progress | Marcus | |
OOF | Add the process for cloud region HPA capabilities | In Progress | Ruoyu | |
AAI | Nothing, we just add one hpa-attribute-key and hpa-attribute-value | Completed | - | |
ESR | Add SR-IOV NIC info to cloud extra info. | In Progress | Haibin | |
Multi-cloud | Register SR-IOV info to AAI. | In Progress | Haibin | |
VIM | Config SR-IOV NIC and create network with SR-IOV NIC. | In Progress | Haibin |
3. SR-IOV NIC related Capability in Data model
This refers to Supported HPA Capability Requirements(DRAFT)#LogicalNodei/ORequirements.
Logical Node I/O Requirements
Capability Name | Capability Value | Description |
---|---|---|
pciVendorId | | PCI-SIG vendor ID for the device |
pciDeviceId | | PCI-SIG device ID for the device |
pciNumDevices | | Number of PCI devices required. |
pciAddress | | Geographic location of the PCI device via the standard PCI-SIG addressing model of Domain:Bus:device:function |
pciDeviceLocalToNumaNode | required, notRequired | Determines if I/O device affinity is required. |
Network Interface Requirements
Capability Name | Capability Value | Description |
---|---|---|
nicFeature | LSO, LRO, RSS, RDMA | Long list of NIC related items such as LSO, LRO, RSS, RDMA, etc. |
dataProcessingAccelerationLibrary | Dpdk_Version | Name and version of the data processing acceleration library required. Orchestration can match any NIC that is known to be compatible with the specified library. |
interfaceType | Virtio, PCI-Passthrough, SR-IOV, E1000, RTL8139, PCNET | Network interface type |
vendorSpecificNicFeature | TBA | List of vendor specific NIC related items. |
...