
Configuring SR-IOV on KVM

 

This section includes the following topics on SR-IOV and PCI passthrough for a vSRX instance deployed on KVM:

SR-IOV and PCI Passthrough Overview

vSRX on KVM supports single-root I/O virtualization (SR-IOV) interface types. SR-IOV is a standard that allows a single physical NIC to present itself as multiple vNICs, or virtual functions (VFs), that a virtual machine (VM) can attach to. SR-IOV combines with other virtualization technologies, such as Intel VT-d, to improve the I/O performance of the VM. SR-IOV allows each VM to have direct access to packets queued up for the VFs attached to the VM. You use SR-IOV when you need I/O performance that approaches that of the physical bare metal interfaces.

Note

SR-IOV in KVM does not remap interface numbers. The interface sequence in the vSRX VM XML file matches the interface sequence shown in the Junos OS CLI on the vSRX instance.

vSRX instances with 9 or 17 vCPUs deployed on KVM support the Peripheral Component Interconnect (PCI) passthrough virtualization technique on the Intel XL710 NIC.

SR-IOV uses two PCI functions:

  • Physical Functions (PFs)—Full PCIe devices that include SR-IOV capabilities. Physical Functions are discovered, managed, and configured as normal PCI devices. Physical Functions configure and manage the SR-IOV functionality by assigning Virtual Functions. When SR-IOV is disabled, the host creates a single PF on one physical NIC.

  • Virtual Functions (VFs)—Simple PCIe functions that only process I/O. Each Virtual Function is derived from a Physical Function. The number of Virtual Functions a device may have is limited by the device hardware. A single Ethernet port, the Physical Device, may map to many Virtual Functions that can be shared to guests. When SR-IOV is enabled, the host creates a single PF and multiple VFs on one physical NIC. The number of VFs depends on the configuration and driver support.
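
On a Linux KVM host, the PF exposes its SR-IOV capability through sysfs. The following is a quick check you can run on a typical host (the interface name eno2 is only an example; substitute the name of your PF) to view how many VFs the NIC supports and how many are currently enabled:

    root@kvmsrv:~# cat /sys/class/net/eno2/device/sriov_totalvfs
    root@kvmsrv:~# cat /sys/class/net/eno2/device/sriov_numvfs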

SR-IOV HA Support with Trust Mode Disabled (KVM only)

Understanding SR-IOV HA Support with Trust Mode Disabled (KVM only)

A Redundant Ethernet Interface (RETH) is a virtual interface consisting of an equal number of member interfaces from each participating node of an SRX cluster. All logical configuration, such as IP addresses, QoS, zones, and VPNs, is bound to this interface, while physical properties are applied to the member (child) interfaces. A RETH interface has a virtual MAC address that is calculated from the cluster ID. RETH is implemented as an aggregated interface (LAG) in Junos OS. For a LAG, the parent (logical) IFD's MAC address is copied to each of the child interfaces. When you configure a child interface under the RETH interface, the RETH interface's virtual MAC address overwrites the current MAC address field of the child physical interface. This also requires the virtual MAC address to be programmed on the corresponding NIC.

In vSRX, Junos OS runs as a VM. Junos OS does not have direct access to the NIC; it has only the virtual NIC access provided by the hypervisor, which might be shared with other VMs running on the same host machine. This virtual access comes with certain restrictions, such as a special mode called trust mode, which is required to program a virtual MAC address on the NIC. In some deployments, granting trust mode access might not be feasible because of possible security issues. To enable the RETH model to work in such environments, the MAC rewrite behavior is modified: instead of copying the parent's virtual MAC address to the children, the children's physical MAC addresses are kept intact, and the physical MAC address of the child belonging to the active node of the cluster is copied to the current MAC address of the RETH interface. This way, MAC rewrite access is not required when trust mode is disabled.

In the case of vSRX, the DPDK reads the physical MAC address provided by the hypervisor and shares it with the Junos OS control plane. In standalone mode, this physical MAC address is programmed on the physical IFDs. In cluster mode, however, this is not supported, so the MAC address for the physical interface is taken from the Juniper reserved MAC pool. In an environment where trust mode is not enabled, a MAC address other than the one provided by the hypervisor cannot be programmed on the NIC.

To overcome this problem, we have added support to use the hypervisor provided physical MAC address instead of allocating it from the reserved MAC pool. See Configuring SR-IOV support with Trust Mode Disabled (KVM only).

Configuring SR-IOV support with Trust Mode Disabled (KVM only)

Figure 1: Copying MAC address from active child interface to parent RETH

Starting in Junos OS Release 19.4R1, SR-IOV HA is supported with trust mode disabled. You can enable this mode by configuring the use-active-child-mac-on-reth and use-actual-mac-on-physical-interfaces configuration statements at the [edit chassis cluster] hierarchy level. When you configure these statements in a cluster, the hypervisor-assigned MAC address is used on each child physical interface, and the parent RETH interface's MAC address is overwritten by the active child physical interface's MAC address.

You need to reboot the vSRX instance to enable this mode. Both nodes in the cluster must be rebooted for the configuration statements to take effect.

You must configure the use-active-child-mac-on-reth and use-actual-mac-on-physical-interfaces statements together to enable this feature.
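
The following is a minimal configuration sketch; the two statements are committed together on each node, and both nodes are then rebooted:

    vsrx# set chassis cluster use-active-child-mac-on-reth
    vsrx# set chassis cluster use-actual-mac-on-physical-interfaces
    vsrx# commit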

Limitations

SR-IOV HA support with trust mode disabled is supported only on KVM-based systems.

Configuring an SR-IOV Interface on KVM

If you have a physical NIC that supports SR-IOV, you can attach SR-IOV-enabled vNICs, or virtual functions (VFs), to the vSRX instance to improve performance. If you use SR-IOV, we recommend that you configure all revenue ports as SR-IOV interfaces.

Note the following about SR-IOV support for vSRX on KVM:

  • Starting in Junos OS Release 15.1X49-D90 and Junos OS Release 17.3R1, a vSRX instance deployed on KVM supports SR-IOV on an Intel X710/XL710 NIC in addition to Intel 82599 or X520/540.

  • Starting in Junos OS Release 18.1R1, a vSRX instance deployed on KVM supports SR-IOV on the Mellanox ConnectX-3 and ConnectX-4 Family Adapters.

Note

See the vSRX Performance Scale Up discussion in Understanding vSRX with KVM for information about vSRX scale-up performance when deployed on KVM, based on the vNIC type and the number of vCPUs and amount of vRAM applied to a vSRX VM.

Before you can attach an SR-IOV enabled VF to the vSRX instance, you must complete the following tasks:

  • Insert an SR-IOV-capable physical network adapter in the host server.

  • Enable the Intel VT-d CPU virtualization extensions in the BIOS on your host server. The Intel VT-d extensions provide hardware support for directly assigning physical devices to a guest. Verify the procedure with your vendor because different systems have different methods to enable VT-d.

  • Ensure that SR-IOV is enabled at the system/server BIOS level by going into the BIOS settings during the host server boot-up sequence to confirm the SR-IOV setting. Different server manufacturers have different naming conventions for the BIOS parameter used to enable SR-IOV at the BIOS level. For example, for a Dell server ensure that the SR-IOV Global Enable option is set to Enabled.
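
After you enable VT-d and SR-IOV in the BIOS, you can confirm on the host that the IOMMU is active before assigning VFs. The following commands are a quick check on a typical Intel-based Linux host (the exact kernel messages vary by distribution and kernel version):

    root@kvmsrv:~# dmesg | grep -i -e DMAR -e IOMMU
    root@kvmsrv:~# grep intel_iommu /proc/cmdline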

Note

We recommend that you use virt-manager to configure SR-IOV interfaces. See the virsh attach-device command documentation if you want to learn how to add a PCI host device to a VM with the virsh CLI commands.
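
If you prefer the virsh CLI, the following sketch shows one way to attach a VF as a PCI host device; the domain name vsrx, the file name vf.xml, and the PCI address are placeholders that you replace with values from your own host:

    root@kvmsrv:~# cat vf.xml
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
      </source>
    </hostdev>
    root@kvmsrv:~# virsh attach-device vsrx vf.xml --config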

To add an SR-IOV VF to a vSRX VM using the virt-manager graphical interface:

  1. In the Junos OS CLI, shut down the vSRX VM if it is running.
    vsrx> request system power-off

  2. In virt-manager, double-click the vSRX VM and select View>Details. The vSRX Virtual Machine details dialog box appears.
  3. Select the Hardware tab, then click Add Hardware. The Add Hardware dialog box appears.
  4. Select PCI Host Device from the Hardware list on the left.
  5. Select the SR-IOV VF for this new virtual interface from the host device list.
  6. Click Finish to add the new device. The setup is complete and the vSRX VM now has direct access to the device.
  7. From the virt-manager icon bar at the upper-left side of the window, click the Power On arrow. The vSRX VM starts. Once the vSRX is powered on, the Running status is displayed in the window.

    You can connect to the management console to watch the boot-up sequence.

    Note

    After the boot starts, you need to select View>Text Consoles>Serial 1 in virt-manager to connect to the vSRX console.

To add an SR-IOV VF to a vSRX VM using virsh CLI commands:

  1. Define four virtual functions for the eno2 interface by updating the sriov_numvfs file with the value 4.
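
    The following is a sketch of this step; it assumes that the PF is eno2 and that the NIC driver supports SR-IOV (if VFs are already allocated, write 0 to the file first):

    root@kvmsrv:~# echo 4 > /sys/class/net/eno2/device/sriov_numvfs
    root@kvmsrv:~# cat /sys/class/net/eno2/device/sriov_numvfs
    4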
  2. Identify the device.

    Identify the PCI device designated for device assignment to the virtual machine. Use the lspci command to list the available PCI devices. You can refine the output of lspci with grep.

    Use the lspci command to check the VF number according to the VF ID.

    root@kvmsrv:~# lspci | grep Ether
  3. Add SR-IOV device assignment from a vSRX XML profile on KVM and review device information.

    The driver can be either vfio or kvm, depending on the KVM server OS/kernel version and the drivers available for virtualization support. The address type references the unique PCI slot number for each SR-IOV virtual function (VF).

    Information on the domain, bus, and function is available from the output of the virsh nodedev-dumpxml command.
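
    The following is a sketch of what the added device definition might look like in the vSRX domain XML; the PCI address values are placeholders taken from the virsh nodedev-dumpxml output for your VF, and the driver name (vfio or kvm) depends on your host as noted above:

    <interface type='hostdev' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
      </source>
    </interface>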

  4. Add the PCI device in the edit settings and select the VF according to the VF number.

    Note

    This operation should be done when the VM is powered off. Also, do not clone VMs with attached PCI devices, because doing so might lead to VF or MAC address conflicts.

  5. Start the VM using the virsh start name-of-virtual-machine command.
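
    For example, assuming the virtual machine is named vsrx:

    root@kvmsrv:~# virsh start vsrx
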
Release History Table

Release: 18.1R1
Description: Starting in Junos OS Release 18.1R1, a vSRX instance deployed on KVM supports SR-IOV on the Mellanox ConnectX-3 and ConnectX-4 Family Adapters.

Release: 15.1X49-D90
Description: Starting in Junos OS Release 15.1X49-D90 and Junos OS Release 17.3R1, a vSRX instance deployed on KVM supports SR-IOV on an Intel X710/XL710 NIC in addition to Intel 82599 or X520/540.