Prepare vMX Installation on Contrail

If you are running vRouter in DPDK mode, keep the following points in mind when preparing the OpenStack environment:

  • For Contrail 3.0, make sure you configure the testbed.py file for provisioning a Contrail node to use DPDK. See Preparing the testbed.py File for Provisioning a Contrail Cluster Node with DPDK.

    For Contrail 4.0, make sure you provision a Contrail node with DPDK. See Preparing the server.json File for Provisioning a Contrail 4.0 Cluster Node with DPDK.

  • Enable iommu=pt in /etc/default/grub on the compute node.

  • If you are using the contrail-3.0.3.0-69 release, manually upgrade the compute nodes to the libvirt 1.2.16 packages and restart the nova-compute and libvirt-bin services (see the sketch after this list).

  • Enable Huge Pages for all VMs to allow them to transmit traffic.

  • Verify that DPDK is enabled on the compute nodes with the contrail-status command; the status of contrail-vrouter-dpdk should be active.

  • Make sure that the NIC cards used for the contrail-vhost network (which connects the compute and controller nodes) and the SR-IOV network (which is provided to the VFD process) are different network cards. Use the ethtool -i command to verify that the bus-info field values belong to different cards.
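
The following commands are a minimal sketch of the checks described in this list, assuming an Ubuntu compute node and the eth2/eth4 interface names used later in this topic; the libvirt 1.2.16 packages themselves come from your distribution or Contrail repository.

    # After manually upgrading libvirt to 1.2.16 (package names depend on your repository),
    # restart the affected services:
    service libvirt-bin restart
    service nova-compute restart

    # Verify that the DPDK vRouter is active on the compute node:
    contrail-status | grep contrail-vrouter-dpdk

    # Verify that the vhost and SR-IOV NICs are on different cards by comparing the bus-info fields:
    ethtool -i eth2 | grep bus-info
    ethtool -i eth4 | grep bus-info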

To prepare the OpenStack environment to install vMX, perform these tasks:

Preparing the Controller Node for vMX

To prepare the controller node:

  1. Configure the controller node to enable Huge Pages and CPU affinity by editing the scheduler_default_filters parameter in the /etc/nova/nova.conf file. Make sure the required filters are present; an illustrative filter list appears in the sketch after this procedure.
    Note:

    We recommend this configuration whether CPU pinning is on or off; Huge Pages are always enabled.

    Restart the scheduler service with the service nova-scheduler restart command.

  2. Update the default quotas; illustrative commands appear in the sketch after this procedure.
    Note:

    We recommend updating the default values, but you can use any values that are appropriate for your environment. Make sure the default quotas have enough allocated resources.

    Verify the changes with the nova quota-defaults command.

  3. (For Contrail 3.0 only) To enable SR-IOV interfaces on the controller node, apply the patch by downloading the vMX KVM software package from the vMX Download Software page.

    Uncompress the package, change directory to root, and apply the patches from the openstack/kilo/openstack_vfd_patches directory.

  4. (For Contrail 4.0, starting with Junos OS Release 17.4R1) To enable SR-IOV interfaces on the controller node, verify that the Contrail patch described in https://bugs.launchpad.net/opencontrail/+bug/1709822 has been applied. Otherwise, you might see the Unknown Neutron Exception error message.
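
The filter list and quota values referenced in this procedure are summarized in the following sketch. The filter names are a typical OpenStack Kilo-era list and the quota values are illustrative only; the patch files themselves come from the vMX KVM package.

    # /etc/nova/nova.conf -- scheduler filters for Huge Pages and CPU affinity (illustrative list):
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

    service nova-scheduler restart

    # Update the default quotas (values are examples only) and verify the changes:
    nova quota-class-update --cores 64 default
    nova quota-class-update --ram 288000 default
    nova quota-class-update --instances 100 default
    nova quota-defaults

    # (Contrail 3.0 only) Apply the patches shipped in openstack/kilo/openstack_vfd_patches
    # from the uncompressed vMX package; the patch files and target paths come from the package.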

Preparing the Compute Nodes

Preparing the Compute Node for vMX

To prepare the compute node:

  1. Configure each compute node to enable IOMMU (intel_iommu=on) and to support Huge Pages at boot time, and then reboot. Add the configuration to /etc/default/grub under the GRUB_CMDLINE_LINUX_DEFAULT parameter (see the sketch after this procedure).
    Note:

    If you are running vRouter in DPDK mode or you are using VLAN provider OVS networks for virtio, you must also include iommu=pt.

    Run the update-grub command followed by the reboot command.

  2. (Optional) Ensure that VMs with Huge Pages enabled are successfully deployed.

    If you see the unable to create backing store for hugepages: Permission denied message in the /var/log/nova/nova-compute.log file on compute nodes, perform these tasks:

    1. Create a new directory for libvirt to mount or use Huge Pages.

      mkdir -p /run/hugepages/kvm/

    2. Mount hugetlbfs on this directory.

      mount -t hugetlbfs hugetlbfs-kvm /run/hugepages/kvm/

    3. Specify this directory explicitly in /etc/libvirt/qemu.conf as the mount location for hugetlbfs.
    4. Add KVM_HUGEPAGES=1 in /etc/default/qemu-kvm.
    5. Reboot the compute node.

    After the reboot, verify that Huge Pages are allocated with the cat /proc/meminfo | grep Huge command.
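
The following is a minimal sketch of the boot-time configuration and hugetlbfs setup described in this procedure. The Huge Pages size and count are illustrative and must be sized for your VMs, and hugetlbfs_mount is the qemu.conf parameter assumed for the mount location in step 2.

    # /etc/default/grub -- enable IOMMU and Huge Pages
    # (iommu=pt is required only for DPDK mode or when using VLAN provider OVS networks for virtio):
    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=32"

    update-grub
    reboot

    # If libvirt cannot create the backing store for Huge Pages:
    mkdir -p /run/hugepages/kvm/
    mount -t hugetlbfs hugetlbfs-kvm /run/hugepages/kvm/

    # /etc/libvirt/qemu.conf
    hugetlbfs_mount = "/run/hugepages/kvm"

    # /etc/default/qemu-kvm
    KVM_HUGEPAGES=1

    # After rebooting, confirm the allocation:
    cat /proc/meminfo | grep Huge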

Configuring the Compute Node for SR-IOV Interfaces for Contrail 4.0

Starting in Junos OS Release 17.4R1, the following procedure applies only to Contrail 4.0.

(For Contrail 4.0) To configure the SR-IOV interfaces:

  1. Enable VT-d in BIOS. (We recommend that you verify the process with the vendor because different systems have different methods to enable VT-d.)
  2. Enable ASPM in BIOS.

    To verify that ASPM is enabled, use the lspci -vv | grep ASPM | grep Enabled command.

  3. (Optional) If you are using Intel Ivy Bridge processors, add options kvm-intel enable_apicv=N to the /etc/modprobe.d/kvm-intel.conf file and reboot.
  4. Load the modified IXGBE driver.

    Before compiling the driver, make sure gcc and make are installed.

    Unload the default IXGBE driver, compile the modified Juniper Networks driver, and load the modified IXGBE driver.

    Create the virtual function (VF) on the physical device. vMX needs only one VF for the SR-IOV traffic (for example, on eth4). The VF configuration specifies no VFs for eth2 (the first IXGBE NIC) and one VF for eth4 (the second IXGBE NIC); see the sketch after this procedure.

    Specify these values according to your configuration.

    Verify the driver version (3.19.1) on the eth2 and eth4 interfaces.

  5. To complete the configuration, make sure that the interfaces are up and that SR-IOV traffic can pass through them (see the sketch after this procedure).

    Confirm the status of VF 0; the example output displays only VF 0.

  6. Edit the /etc/nova/nova.conf file to add the PCI passthrough allowlist entry for the SR-IOV NIC card.

    For example, the entry in the sketch after this procedure adds the SR-IOV interface card (eth4) for the physical network physnet2.

    Restart the compute node service with the service nova-compute restart command.
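
The driver build, VF creation, and nova.conf entries referenced in this procedure are summarized in the following sketch. The interface names (eth2, eth4), the physical network name (physnet2), and the driver source path are assumptions taken from the examples in this topic; the modified IXGBE driver itself ships in the vMX package.

    # Unload the default IXGBE driver, then build and load the modified driver from the vMX package
    # (the source path is illustrative -- use the directory inside the uncompressed package):
    rmmod ixgbe
    cd /path/to/vmx-package/drivers/ixgbe-3.19.1/src
    make && make install

    # Create the VFs: none on eth2 (first IXGBE NIC), one on eth4 (second IXGBE NIC):
    modprobe ixgbe max_vfs=0,1

    # Verify the driver version (3.19.1):
    ethtool -i eth2 | grep version
    ethtool -i eth4 | grep version

    # Bring the interfaces up and confirm the status of VF 0 on the SR-IOV NIC:
    ip link set eth2 up
    ip link set eth4 up
    ip link show eth4

    # /etc/nova/nova.conf -- PCI passthrough whitelist entry for eth4 on physnet2:
    pci_passthrough_whitelist = {"devname": "eth4", "physical_network": "physnet2"}

    service nova-compute restart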

Configuring the Compute Node for SR-IOV Interfaces for Contrail 3.0

Note:

The following procedure is only for Contrail 3.0 and the VFD agent.

(For Contrail 3.0 only) To configure the SR-IOV interfaces:

  1. Enable VT-d in BIOS. (We recommend that you verify the process with the vendor because different systems have different methods to enable VT-d.)
  2. Enable ASPM in BIOS.

    To verify that ASPM is enabled, use the lspci -vv | grep ASPM | grep Enabled command.

  3. (Optional) If you are using Intel Ivy Bridge processors, add options kvm-intel enable_apicv=N to the /etc/modprobe.d/kvm-intel.conf file and reboot.
  4. Apply the patch by downloading the vMX KVM software package from the vMX Download Software page.

    Uncompress the package, change directory to root, and apply the patches from the openstack/kilo/openstack_vfd_patches directory.

  5. Unload the IXGBEVF driver.

    Load the vfio-pci module.

    Create the virtual function (VF) on the physical device. vMX needs only one VF for the SR-IOV traffic (for example, on eth4). Note that 32 VFs are created even though only one is used; see the sketch after this procedure.

    To verify that the VFs were created, confirm that the output of the lspci -nn | grep Ether | grep Virtual | wc -l command displays 32.

  6. To complete the configuration, make sure that the interfaces are up and that SR-IOV traffic can pass through them (see the sketch after this procedure).

    Confirm the status of VF 0; the example output displays only VF 0, but all 32 VFs appear in the output.

  7. Edit the /etc/nova/nova.conf file to add the PCI passthrough allowlist entry for the SR-IOV NIC card.

    For example, the entry in the sketch after this procedure adds the SR-IOV interface card (eth4) for the physical network physnet2.

    Add the VF-agent entry to nova.conf.

    Restart the compute node service with the service nova-compute restart command.

  8. Install the VFD agent. Use the VFD 1.14 Debian package version.

    Copy vfd.cfg from the sample file.

    Edit the default_mtu value and the pciids id values in the vfd.cfg file.

    For example, the edited values include the default MTU and the PCI ID of the physical function (PF).

    Start the VFD process with the service vfd start command.

    Verify that the process has started successfully with the service vfd status command.

    Note:

    While the VFD process is running, the interfaces attached to the VFD process are not displayed by the ifconfig or ip link commands. Use the iplex show all command to verify that the link for the PF is up.

  9. Ensure that the correct permissions are provided for VFIO. Add the "/dev/vfio/vfio" entry to /etc/libvirt/qemu.conf under the cgroup_device_acl parameter.

    Restart libvirtd with the service libvirt-bin restart command.
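
The commands and configuration entries referenced in this procedure are summarized in the following sketch. The interface names, the physical network name, and the method used to create the VFs are assumptions for illustration; the patches, the VF-agent nova.conf entry, and the exact vfd.cfg layout come from the vMX package and its sample file.

    # Unload the IXGBEVF driver and load the vfio-pci module:
    rmmod ixgbevf
    modprobe vfio-pci

    # Create 32 VFs on the SR-IOV NIC (eth4); only one is used by vMX.
    # (One common method is shown -- your IXGBE driver may use the max_vfs module parameter instead.)
    echo 32 > /sys/class/net/eth4/device/sriov_numvfs

    # Verify that the 32 VFs were created:
    lspci -nn | grep Ether | grep Virtual | wc -l

    # Bring the interfaces up and confirm the status of VF 0:
    ip link set eth2 up
    ip link set eth4 up
    ip link show eth4

    # /etc/nova/nova.conf -- PCI passthrough whitelist entry for eth4 on physnet2:
    pci_passthrough_whitelist = {"devname": "eth4", "physical_network": "physnet2"}

    service nova-compute restart

    # After editing default_mtu and the pciids entries in vfd.cfg (layout comes from the sample file),
    # start and verify the VFD agent:
    service vfd start
    service vfd status
    iplex show all    # verify that the link for the PF is up

    # /etc/libvirt/qemu.conf -- add "/dev/vfio/vfio" under cgroup_device_acl
    # (the other entries shown are the typical defaults from the file):
    cgroup_device_acl = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
        "/dev/rtc", "/dev/hpet", "/dev/vfio/vfio"
    ]

    service libvirt-bin restart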

Release History Table
Release   Description
17.4R1    Starting in Junos OS Release 17.4R1, the procedure to configure SR-IOV interfaces on the compute node applies only to Contrail 4.0.