Configuring Single Root I/O Virtualization (SR-IOV)
Overview: Configuring SR-IOV
Contrail Networking supports single root I/O virtualization (SR-IOV) on both Ubuntu and Red Hat Enterprise Linux (RHEL) operating systems.
SR-IOV is an interface extension of the PCI Express (PCIe) specification. SR-IOV allows a device, such as a network adapter, to separate access to its resources among multiple PCIe hardware functions.
For example, the Data Plane Development Kit (DPDK) library provides user-space drivers for several network interface cards (NICs). However, if the application runs inside a virtual machine (VM), it cannot see the physical NIC unless SR-IOV is enabled on that NIC.
This topic shows how to configure SR-IOV with your Contrail Networking system.
Enabling ASPM in BIOS
To use SR-IOV, the system must have Active State Power Management (ASPM) enabled for PCI Express (PCIe) devices. Enable ASPM in the system BIOS.
The BIOS of your system might need to be upgraded to a version that can enable ASPM.
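ASPM itself is enabled or disabled in the BIOS, but you can check what the PCIe devices currently report from Linux, for example:

sudo lspci -vv | grep -i aspm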
Configuring SR-IOV Using the Ansible Deployer
You must perform the following tasks to enable SR-IOV on a system.
Enable the Intel Input/Output Memory Management Unit (IOMMU) on Linux (a manual sketch of this task and the next follows this list).
Enable the required number of Virtual Functions (VFs) on the selected NIC.
Configure the names of the physical networks whose VMs can interface with the VFs.
Restart the Nova compute service.
service nova-compute restart
Configure a Nova Scheduler filter based on the new PCI configuration, as in the following example:
/etc/nova/nova.conf:

[DEFAULT]
scheduler_default_filters = PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
Restart Nova Scheduler.
service nova-scheduler restart
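For reference, the following is a minimal manual sketch of the first two tasks on an Ubuntu host, assuming a placeholder interface name eno1 whose driver supports the sriov_numvfs sysfs interface:

# Enable the Intel IOMMU: append intel_iommu=on (and typically iommu=pt) to
# GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate GRUB and reboot.
sudo update-grub
sudo reboot

# Create the required number of VFs on the selected NIC (3 in this example).
echo 3 | sudo tee /sys/class/net/eno1/device/sriov_numvfs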
The above tasks are handled by the Ansible Deployer playbook. The cluster members and their configuration parameters are specified in the instances.yaml file located in the config directory within the ansible-deployer repository.
Compute instances that are to run in SR-IOV mode must include an SR-IOV configuration. The instances.yaml snippet below shows a sample instance definition.
instances:
  bms1:
    provider: bms
    ip: ip-address
    roles:
      openstack:
  bms2:
    provider: bms
    ip: ip-address
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms3:
    provider: bms
    ip: ip-address
    roles:
      openstack_compute:
      vrouter:
        SRIOV: true
        SRIOV_VF: 3
        SRIOV_PHYSICAL_INTERFACE: eno1
        SRIOV_PHYS_NET: physnet1
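After the playbooks complete, you can confirm on the compute node that the VFs were created, for example (assuming the eno1 interface from the snippet above):

cat /sys/class/net/eno1/device/sriov_numvfs
ip link show eno1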
Configuring SR-IOV Using Helm
You must perform the following tasks to enable SR-IOV on a system.
Enable the Intel Input/Output Memory Management Unit (IOMMU) on Linux.
Enable the required number of Virtual Functions (VFs) on the selected NIC.
Configure the names of the physical networks whose VMs can interface with the VFs.
Restart the Nova compute service.
service nova-compute restart
Configure a Nova Scheduler filter based on the new PCI configuration, as in the following example:
/etc/nova/nova.conf:

[DEFAULT]
scheduler_default_filters = PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
Restart Nova Scheduler.
service nova-scheduler restart
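You can confirm that the first task (IOMMU enablement) took effect on a compute host with, for example:

dmesg | grep -i -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/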
The above tasks are handled by the Helm charts. The cluster members and their configuration parameters are specified in the multinode-inventory.yaml file located in the config directory within the openstack-helm-infra repository.
For Helm, the configuration and SR-IOV environment-specific parameters must be updated in three different places:
The compute instance must be set as contrail-vrouter-sriov.
For example, the following is a snippet from the tools/gate/devel/multinode-inventory.yaml file in the openstack-helm-infra repository.
all:
  children:
    primary:
      hosts:
        node1:
          ansible_port: 22
          ansible_host: host-ip-address
          ansible_user: ubuntu
          ansible_ssh_private_key_file: /home/ubuntu/.ssh/insecure.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    nodes:
      children:
        openstack-compute:
          children:
            contrail-vrouter-sriov:   # compute instance set to contrail-vrouter-sriov
              hosts:
                node7:
                  ansible_port: 22
                  ansible_host: host-ip-address
                  ansible_user: ubuntu
                  ansible_ssh_private_key_file: /home/ubuntu/.ssh/insecure.pem
                  ansible_ssh_extra_args: -o StrictHostKeyChecking=no
The contrail-vrouter-sriov node must be labeled appropriately. (A command to verify the label appears after the values.yaml example below.)
For example, the following is a snippet from the tools/gate/devel/multinode-vars.yaml in the openstack-helm-infra repository.
nodes:
  labels:
    primary:
      - name: openstack-helm-node-class
        value: primary
    all:
      - name: openstack-helm-node-class
        value: general
    contrail-controller:
      - name: opencontrail.org/controller
        value: enabled
    openstack-compute:
      - name: openstack-compute-node
        value: enabled
    contrail-vrouter-dpdk:
      - name: opencontrail.org/vrouter-dpdk
        value: enabled
    contrail-vrouter-sriov:   # label as contrail-vrouter-sriov
      - name: vrouter-sriov
        value: enabled
SR-IOV config parameters must be updated in the contrail-vrouter/values.yaml file.
For example, the following is a snippet from the contrail-vrouter/values.yaml file in the contrail-helm-deployer repository.
contrail_env_vrouter_kernel:
  AGENT_MODE: kernel
contrail_env_vrouter_sriov:
  SRIOV: true
  per_compute_info:
    - node_name: k8snode1
      SRIOV_VF: 10
      SRIOV_PHYSICAL_INTERFACE: enp129s0f1
      SRIOV_PHYS_NET: physnet1
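After the charts apply the node labels shown above, you can confirm that the SR-IOV compute node carries the expected label, for example:

kubectl get nodes --show-labels | grep vrouter-sriov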
Launching SR-IOV Virtual Machines
After ensuring that SR-IOV features are enabled on your system, use one of the following procedures to create a virtual network from which to launch an SR-IOV VM, using either the Contrail Web UI or the CLI.
- Using the Contrail Web UI to Enable and Launch an SR-IOV Virtual Machine
- Using the CLI to Enable and Launch SR-IOV Virtual Machines
Using the Contrail Web UI to Enable and Launch an SR-IOV Virtual Machine
To use the Contrail Web UI to enable and launch an SR-IOV VM:
At Configure > Networking > Networks, create a virtual network with SR-IOV enabled. Ensure the virtual network is created with a subnet attached. In the Advanced section, select the Provider Network check box, and specify the physical network already enabled for SR-IOV (in testbed.py or nova.conf) and its VLAN ID. See Figure 1.

Figure 1: Edit Network

On the virtual network, create a Neutron port (Configure > Networking > Ports), and in the Port Binding section, define a Key value of SR-IOV and a Value of direct. See Figure 2.

Figure 2: Create Port

Using the UUID of the Neutron port you created, use the nova boot command to launch the VM from that port.

nova boot --flavor m1.large --image <image name> --nic port-id=<uuid of above port> <vm name>
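The port UUID needed for the nova boot command is displayed in the Ports list in the web UI; it can also be retrieved with the Neutron CLI, for example:

neutron port-list | grep <name of port>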
Using the CLI to Enable and Launch SR-IOV Virtual Machines
To use the CLI to enable and launch an SR-IOV VM:
Create a virtual network with SR-IOV enabled. Specify the physical network already enabled for SR-IOV (in testbed.py or nova.conf) and its VLAN ID.

The following example creates vn1 with a VLAN ID of 100 as part of physnet1:

neutron net-create --provider:physical_network=physnet1 --provider:segmentation_id=100 vn1
Create a subnet in vn1.
neutron subnet-create vn1 a.b.c.0/24
On the virtual network, create a Neutron port on the subnet, with a binding type of direct.
neutron port-create --fixed-ip subnet_id=<subnet uuid>,ip_address=<IP address from above subnet> --name <name of port> <vn uuid> --binding:vnic_type direct
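If your deployment uses the unified OpenStack client instead of the neutron command, an equivalent port can be created as follows (a sketch using the standard client options; substitute your own UUIDs and names):

openstack port create --network <vn uuid> --fixed-ip subnet=<subnet uuid>,ip-address=<IP address from above subnet> --vnic-type direct <name of port>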
Using the UUID of the Neutron port created, use the nova boot command to launch the VM from that port.

nova boot --flavor m1.large --image <image name> --nic port-id=<uuid of above port> <vm name>
Log in to the VM and verify that the Ethernet controller is a VF by using the lspci command to list the PCI devices. The VF that is configured with the VLAN can be observed by using the ip link command.
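For example, assuming the physical interface on the compute node is eno1 (a placeholder name):

lspci | grep -i ethernet    # inside the VM; the controller should be reported as a Virtual Function
ip link show eno1           # on the compute node; lists the VFs and their VLAN assignments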