Configuring Single Root I/O Virtualization (SR-IOV)

Overview: Configuring SR-IOV

Contrail Networking supports single root I/O virtualization (SR-IOV) on Ubuntu and on Red Hat Enterprise Linux (RHEL) operating systems.

SR-IOV is an extension of the PCI Express (PCIe) specification. It allows a device, such as a network adapter, to separate access to its resources among various hardware functions.

As an example, the Data Plane Development Kit (DPDK) library has drivers that run in user space for several network interface cards (NICs). However, if the application runs inside a virtual machine (VM), it does not see the physical NIC unless SR-IOV is enabled on the NIC.

This topic shows how to configure SR-IOV with your Contrail Networking system.

Enabling ASPM in BIOS

To use SR-IOV, the system must have Active State Power Management (ASPM) enabled for PCI Express (PCIe) devices. Enable ASPM in the system BIOS.

Note:

The BIOS of your system might need to be upgraded to a version that can enable ASPM.

Configuring SR-IOV Using the Ansible Deployer

You must perform the following tasks to enable SR-IOV on a system.

  1. Enable the Intel Input/Output Memory Management Unit (IOMMU) on Linux.

  2. Enable the required number of Virtual Functions (VFs) on the selected NIC.

  3. Configure the names of the physical networks whose VMs can interface with the VFs.

  4. Reboot Nova compute.

  5. Configure a Nova Scheduler filter based on the new PCI configuration, as in the sketch that follows this list.

  6. Restart Nova Scheduler.
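The Nova Scheduler filter referenced in step 5 is set in nova.conf. The following is a minimal sketch, assuming a NIC named enp129s0f1 and a physical network named physnet1 (both placeholder values); the exact filter list and option names depend on your OpenStack release.

  # nova.conf on the node running nova-scheduler: add PciPassthroughFilter to the enabled filters
  [filter_scheduler]
  enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter

  # nova.conf on the compute node: whitelist the SR-IOV NIC so its VFs can be scheduled
  [pci]
  passthrough_whitelist = { "devname": "enp129s0f1", "physical_network": "physnet1" }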

These tasks are handled by the Ansible deployer playbook. The cluster members and their configuration parameters are specified in the instances.yaml file located in the config directory within the ansible-deployer repository.

Compute instances that run in SR-IOV mode must include an SR-IOV configuration in their instance definition, as in the sample below.
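The sample is a minimal sketch: the host name, IP address, NIC name, and VF count are placeholder values, and the exact role keys (SRIOV, SRIOV_VF, SRIOV_PHYSICAL_INTERFACE, SRIOV_PHYS_NET) should be confirmed against your release of the deployer.

  instances:
    compute1:                                    # placeholder host name
      provider: bms
      ip: 10.1.1.11                              # placeholder management address
      roles:
        openstack_compute:
        vrouter:
          SRIOV: true                            # run this vRouter compute in SR-IOV mode
          SRIOV_VF: 7                            # number of VFs to create on the NIC
          SRIOV_PHYSICAL_INTERFACE: enp129s0f1   # placeholder NIC name
          SRIOV_PHYS_NET: physnet1               # physical network exposed to Neutron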

Configuring SR-IOV Using Helm

You must perform the following tasks to enable SR-IOV on a system.

  1. Enable the Intel Input/Output Memory Management Unit (IOMMU) on Linux.

  2. Enable the required number of Virtual Functions (VFs) on the selected NIC.

  3. Configure the names of the physical networks whose VMs can interface with the VFs.

  4. Reboot Nova compute.

  5. Configure a Nova Scheduler filter based on the new PCI configuration, as in the nova.conf sketch shown in the previous section.

  6. Restart Nova Scheduler.

These tasks are handled by the Helm charts. The cluster members and their configuration parameters are specified in the multinode-inventory file located in the config directory within the openstack-helm-infra repository.

For Helm, the SR-IOV configuration and environment-specific parameters must be updated in three places:

  • The compute instance must be set as contrail-vrouter-sriov.

    For example, this is configured in the tools/gate/devel/multinode-inventory.yaml file in the openstack-helm-infra repository.
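    The following is a minimal sketch of that inventory, assuming the SR-IOV compute hosts are grouped under contrail-vrouter-sriov; the group layout, host names, and ports are illustrative and should be checked against your copy of the file.

      all:
        children:
          primary:
            hosts:
              node_one:                    # placeholder control host
                ansible_port: 22
          nodes:
            children:
              contrail-vrouter-sriov:      # SR-IOV compute hosts belong to this group
                hosts:
                  node_three:              # placeholder SR-IOV compute host
                    ansible_port: 22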

  • The contrail-vrouter-sriov nodes must be labeled appropriately.

    For example, the labels are defined in the tools/gate/devel/multinode-vars.yaml file in the openstack-helm-infra repository.
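    The following is a hypothetical sketch of such a label assignment; the label key (opencontrail.org/vrouter-sriov) and the surrounding layout are assumptions, so refer to your copy of the file for the authoritative names.

      nodes:
        labels:
          contrail-vrouter-sriov:
            - opencontrail.org/vrouter-sriov=enabled   # assumed label selecting SR-IOV computes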

  • SR-IOV config parameters must be updated in the contrail-vrouter/values.yaml file.

    For example, these parameters are set in the contrail-vrouter/values.yaml file in the contrail-helm-deployer repository.
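    The following is a minimal sketch with assumed key names and placeholder values; refer to the comments in your copy of contrail-vrouter/values.yaml for the authoritative parameter names.

      global:
        contrail_env_vrouter_sriov:
          per_compute_info:
            - node_name: compute1                      # placeholder host name
              sriov_vf: 7                              # number of VFs to enable
              sriov_physical_interface: enp129s0f1     # placeholder NIC name
              sriov_physnets: physnet1                 # physical network exposed to Neutron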

Launching SR-IOV Virtual Machines

After ensuring that SR-IOV features are enabled on your system, use one of the following procedures to create a virtual network and launch an SR-IOV VM, using either the Contrail Web UI or the CLI.

Using the Contrail Web UI to Enable and Launch an SR-IOV Virtual Machine

To use the Contrail Web UI to enable and launch an SR-IOV VM:

  1. At Configure > Networking > Networks, create a virtual network with SR-IOV enabled. Ensure the virtual network is created with a subnet attached. In the Advanced section, select the Provider Network check box, and specify the physical network already enabled for SR-IOV (in testbed.py or nova.conf) and its VLAN ID. See Figure 1.

    Figure 1: Edit Network
  2. On the virtual network, create a Neutron port (Configure > Networking > Ports), and in the Port Binding section, define a Key value of SR-IOV and a Value of direct. See Figure 2.

    Figure 2: Create Port
  3. Using the UUID of the Neutron port you created, use the nova boot command to launch the VM from that port.

    nova boot --flavor m1.large --image <image name> --nic port-id=<uuid of above port> <vm name>

Using the CLI to Enable and Launch SR-IOV Virtual Machines

To use the CLI to enable and launch an SR-IOV VM:

  1. Create a virtual network with SR-IOV enabled. Specify the physical network already enabled for SR-IOV (in testbed.py or nova.conf) and its VLAN ID.

    The following example creates vn1 as part of physnet1 with a VLAN ID of 100:

    neutron net-create --provider:physical_network=physnet1 --provider:segmentation_id=100 vn1

  2. Create a subnet in vn1.

    neutron subnet-create vn1 a.b.c.0/24

  3. On the virtual network, create a Neutron port on the subnet, with a binding type of direct.

    neutron port-create --fixed-ip subnet_id=<subnet uuid>,ip_address=<IP address from above subnet> --name <name of port> <vn uuid> --binding:vnic_type direct

  4. Using the UUID of the Neutron port created, use the nova boot command to launch the VM from that port.

    nova boot --flavor m1.large --image <image name> --nic port-id=<uuid of above port> <vm name>

  5. Log in to the VM and verify that the Ethernet controller is a Virtual Function (VF) by using the lspci command to list the PCI devices.

    The VF that is configured with the VLAN can be observed by using the ip link command.
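    A minimal sketch of these checks, assuming the physical interface on the compute node is named enp129s0f1 (a placeholder value):

      # Inside the VM: the passthrough interface appears as an Ethernet controller Virtual Function
      lspci | grep -i ethernet

      # On the compute node: each VF, its MAC address, and its VLAN are listed under the physical interface
      ip link show enp129s0f1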