Installing vMX on KVM

Read this topic to understand how to install the virtual MX router in the KVM environment.

Preparing the Ubuntu Host to Install vMX

To prepare the Ubuntu host system for installing vMX (supported starting in Junos OS Release 15.1F6):

  1. Meet the minimum software and OS requirements described in Minimum Hardware and Software Requirements.

  2. See the Upgrading the Kernel and Upgrading to libvirt 1.2.19 sections below.

  3. If you are using Intel XL710 PCI-Express family cards, make sure you update the drivers. See Updating Drivers for the XL710 NIC.

  4. Enable Intel VT-d in BIOS. (We recommend that you verify the process with the vendor because different systems have different methods to enable VT-d.)

    See the procedure for enabling VT-d on the Intel website.

  5. Disable KSM by setting KSM_ENABLED=0 in /etc/default/qemu-kvm.

  6. Disable APIC virtualization by editing the /etc/modprobe.d/qemu-system-x86.conf file and adding enable_apicv=0 to the line containing options kvm_intel.

    options kvm_intel nested=1 enable_apicv=0

  7. Restart the host to disable KSM and APIC virtualization.

  8. If you are using SR-IOV, you must perform this step.

    Note:

    You must remove any previous installation with an external bridge in /etc/network/interfaces and revert to using the original management interface. Make sure that the ifconfig -a command does not show external bridges before you proceed with the installation.

    To check for an external bridge, use the ifconfig command to view the management interface, and then use the brctl show command to see whether the management interface is listed under an external bridge.

    Enable SR-IOV capability by turning on intel_iommu=on in the /etc/default/grub file.

    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

    Append the intel_iommu=on string to any existing text for the GRUB_CMDLINE_LINUX_DEFAULT parameter.

    Run the update-grub command followed by the reboot command.

  9. For optimal performance, we recommend that you configure the size of Huge Pages to be 1G on the host and make sure that the NUMA node for the VFP has at least sixteen 1G Huge Pages. To configure the size of Huge Pages, add the following line in /etc/default/grub:

    GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=number-of-huge-pages"

    The number of Huge Pages must be at least (16 * number-of-numa-sockets). After editing the file, run the update-grub command and reboot the host.

  10. Run the modprobe kvm-intel command before you install vMX.

Note:

Starting in Junos OS Release 18.2, Ubuntu 16.04.5 LTS and Linux kernel 4.4.0-62-generic are supported.

To meet the minimum software and OS requirements, you might need to perform these tasks:

Upgrading the Kernel

Note:

You do not need to upgrade the Linux kernel on Ubuntu 16.04.

Note:

Ubuntu 14.04 comes with a lower kernel version (Linux 3.13.0-24-generic) than the recommended version (Linux 3.19.0-80-generic). If your Ubuntu 14.04.1 LTS host is already running 3.19.0-80-generic, you can skip this step.

To upgrade the kernel:

  1. Determine your version of the kernel.

  2. If your version differs from the recommended version, upgrade the kernel (a sample command sequence follows this list).

  3. Restart the system.
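The following is a minimal sketch of this sequence on Ubuntu 14.04; the kernel package names shown are illustrative and must match the recommended kernel version:

    uname -r                              # step 1: display the running kernel version
    sudo apt-get update
    sudo apt-get install linux-image-3.19.0-80-generic linux-headers-3.19.0-80-generic   # step 2: install the recommended kernel
    sudo reboot                           # step 3: restart the system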

Upgrading to libvirt 1.2.19

Note:

Ubuntu 16.04.5 supports libvirt version 1.3.1. Upgrading libvirt on Ubuntu 16.04 is not required.

Ubuntu 14.04 supports libvirt 1.2.2 (which works for VFP lite mode). If you are using the VFP performance mode or deploying multiple vMX instances using the VFP lite mode, you must upgrade to libvirt 1.2.19.

To upgrade libvirt:

  1. Make sure that you install all the packages listed in Minimum Hardware and Software Requirements.

  2. Navigate to the /tmp directory using the cd /tmp command.

  3. Get the libvirt-1.2.19 source code by using the command wget http://libvirt.org/sources/libvirt-1.2.19.tar.gz.

  4. Uncompress and untar the file using the tar xzvf libvirt-1.2.19.tar.gz command.

  5. Navigate to the libvirt-1.2.19 directory using the cd libvirt-1.2.19 command.

  6. Stop libvirtd with the service libvirt-bin stop command.

  7. Run the ./configure --prefix=/usr --localstatedir=/ --with-numactl command.

  8. Run the make command. The system displays the code compilation log.

  9. Run the make install command.

  10. Make sure that the libvirtd daemon is running. (Use the service libvirt-bin start command to start it again. If it does not start, use the /usr/sbin/libvirtd -d command.)

  11. Verify that the versions of libvirtd and virsh are 1.2.19.
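    For example:

    /usr/sbin/libvirtd --version
    virsh --version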

Note:

If you cannot deploy vMX after upgrading libvirt, bring down the virbr0 bridge with the ifconfig virbr0 down command and delete the bridge with the brctl delbr virbr0 command.

Updating Drivers for the XL710 NIC

If you are using Intel XL710 PCI-Express family NICs, make sure you update the drivers before you install vMX.

To update the drivers:

  1. Download the vMX software package as root and uncompress the package.
  2. Install the i40e driver from the installation directory.
  3. Install the latest i40evf driver from Intel.

    For example, the command sequence after this list downloads and installs Version 1.4.15.

  4. Update initrd with the drivers.
  5. Activate the new driver.
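The following is a minimal sketch of steps 3 through 5; the download URL is a placeholder (the actual location of the i40evf package on the Intel site may differ):

    # step 3: download and build the i40evf driver (placeholder URL)
    wget https://downloads.intel.example/i40evf-1.4.15.tar.gz
    tar xzvf i40evf-1.4.15.tar.gz
    cd i40evf-1.4.15/src
    sudo make install
    # step 4: update initrd with the new driver
    sudo update-initramfs -u
    # step 5: activate the new driver
    sudo rmmod i40evf; sudo modprobe i40evf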

Installing the Other Required Packages

Use the following command to install the python-netifaces package on Ubuntu.
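For example:

    sudo apt-get install python-netifaces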

Preparing the Red Hat Enterprise Linux Host to Install vMX

To prepare the host system running Red Hat Enterprise Linux for installing vMX, perform the task for your version:

Preparing the Red Hat Enterprise Linux 7.3 Host to Install vMX

To prepare the host system running Red Hat Enterprise Linux 7.3 for installing vMX:

  1. Meet the minimum software and OS requirements described in Minimum Hardware and Software Requirements.
  2. Enable hyperthreading and VT-d in BIOS.

    If you are using SR-IOV, enable SR-IOV in BIOS.

    We recommend that you verify the process with the vendor because different systems have different methods to access and change BIOS settings.

  3. During the OS installation, select the Virtualization Host and Virtualization Platform software collections.

    If you did not select these software collections during the GUI installation, use the following commands to install them:
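    A minimal sketch, assuming the standard yum group names for these collections:

    sudo yum groupinstall "Virtualization Host"
    sudo yum groupinstall "Virtualization Platform"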

  4. Register your host using your Red Hat account credentials. Enable the appropriate repositories.

    To install the Extra Packages for Enterprise Linux 7 (epel) repository:
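    For example, the epel repository is commonly installed from the Fedora project mirror:

    sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm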

  5. Update currently installed packages.
  6. For optimal performance, we recommend that you configure the size of Huge Pages to be 1G on the host and make sure that the NUMA node for the VFP has at least sixteen 1G Huge Pages. To configure the size of Huge Pages:

    For Red Hat: Add the Huge Pages configuration.

    Use the mount | grep boot command to determine the boot device name.

    The number of Huge Pages must be at least (16 * number-of-numa-sockets).
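    A minimal sketch for a BIOS-based host whose GRUB configuration lives under /boot/grub2 (confirm the location with mount | grep boot; UEFI hosts typically use /boot/efi/EFI/redhat/grub.cfg):

    # append to GRUB_CMDLINE_LINUX in /etc/default/grub:
    #   default_hugepagesz=1G hugepagesz=1G hugepages=16
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    sudo reboot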

  7. Install the required packages.
  8. (Optional) If you are using SR-IOV, you must install these packages and enable SR-IOV capability.

    Reboot and log in again.

  9. Link the qemu-kvm binary to the qemu-system-x86_64 file.
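    A minimal sketch, assuming qemu-kvm is installed in the default /usr/libexec location:

    sudo ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64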
  10. Set up the path for the correct Python release and install the PyYAML library.
  11. If you have installed any Red Hat OpenStack libraries, you must change script/templates/red_{vPFE, vRE}-ref.xml to use <type arch='x86_64' machine='pc-0.13'>hvm</type> as the machine type.
  12. Disable KSM.

    To verify that KSM is disabled, run the following command.
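    A minimal sketch on a systemd-based host:

    sudo systemctl stop ksm ksmtuned       # stop the KSM services
    sudo systemctl disable ksm ksmtuned    # keep KSM disabled across reboots
    cat /sys/kernel/mm/ksm/run             # verify that KSM is disabled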

    The value 0 in the output indicates that KSM is disabled.

  13. Disable APIC virtualization by editing the /etc/modprobe.d/kvm.conf file and adding enable_apicv=n to the line containing options kvm_intel.

    You can use enable_apicv=0 also.

    Restart the host to disable KSM and APIC virtualization.

  14. Stop and disable Network Manager.
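    For example, on a systemd-based host:

    sudo systemctl stop NetworkManager
    sudo systemctl disable NetworkManager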

    If you cannot stop Network Manager, you can prevent resolv.conf from being overwritten with the chattr +i /etc/resolv.conf command.

  15. Ensure that the build directory is readable by the QEMU user.

    As an alternative, you can configure QEMU to run as the root user by setting the /etc/libvirt/qemu.conf file to user="root".

You can now install vMX.

Note:

When you install vMX with the sh vmx.sh -lv --install command, you might see a kernel version mismatch warning. You can ignore this warning.

Preparing the Red Hat Enterprise Linux 7.2 Host to Install vMX

To prepare the host system running Red Hat Enterprise Linux 7.2 for installing vMX:

  1. Meet the minimum software and OS requirements described in Minimum Hardware and Software Requirements.
  2. Enable hyperthreading and VT-d in BIOS.

    If you are using SR-IOV, enable SR-IOV in BIOS.

    We recommend that you verify the process with the vendor because different systems have different methods to access and change BIOS settings.

  3. During the OS installation, select the Virtualization Host and Virtualization Platform software collections.

    If you did not select these software collections during the GUI installation, use the following commands to install them:

  4. Register your host using your Red Hat account credentials. Enable the appropriate repositories.
  5. Update currently installed packages.
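    For example:

    sudo yum update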
  6. Install the required packages.
  7. For optimal performance, we recommend that you configure the size of Huge Pages to be 1G on the host and make sure that the NUMA node for the VFP has at least sixteen 1G Huge Pages. To configure the size of Huge Pages:

    For Red Hat: Add the Huge Pages configuration.

    Use the mount | grep boot command to determine the boot device name.

    The number of Huge Pages must be at least (16 * number-of-numa-sockets).

  8. (Optional) If you are using SR-IOV, you must install these packages and enable SR-IOV capability.

    Reboot and log in again.

  9. Link the qemu-kvm binary to the qemu-system-x86_64 file.
  10. Set up the path for the correct Python release and install the PyYAML library.
  11. If you have installed any Red Hat OpenStack libraries, you must change script/templates/red_{vPFE, vRE}-ref.xml to use <type arch='x86_64' machine='pc-0.13'>hvm</type> as the machine type.
  12. Disable KSM.

    To verify that KSM is disabled, run the following command.

    The value 0 in the output indicates that KSM is disabled.

  13. Disable APIC virtualization by editing the /etc/modprobe.d/kvm.conf file and adding enable_apicv=n to the line containing options kvm_intel.

    You can use enable_apicv=0 also.

    Restart the host to disable KSM and APIC virtualization.

  14. Stop and disable Network Manager.

    If you cannot stop Network Manager, you can prevent resolv.conf from being overwritten with the chattr +i /etc/resolv.conf command.

  15. Ensure that the build directory is readable by the QEMU user.

    As an alternative, you can configure QEMU to run as the root user by setting the /etc/libvirt/qemu.conf file to user="root".

You can now install vMX.

Note:

When you install vMX with the sh vmx.sh -lv --install command, you might see a kernel version mismatch warning. You can ignore this warning.

Preparing the CentOS Host to Install vMX

To prepare the host system running CentOS for installing vMX:

  1. Meet the minimum software and OS requirements described in Minimum Hardware and Software Requirements.
  2. Enable hyperthreading and VT-d in BIOS.

    If you are using SR-IOV, enable SR-IOV in BIOS.

    We recommend that you verify the process with the vendor because different systems have different methods to access and change BIOS settings.

  3. During the OS installation, select the Virtualization Host and Virtualization Platform software collections.

    If you did not select these software collections during the GUI installation, use the following commands to install them:

  4. Enable the appropriate repositories.
  5. Update currently installed packages.
  6. Install the required packages.
  7. (Optional) If you are using SR-IOV, you must install these packages and enable SR-IOV capability.

    Reboot and log in again.

  8. Link the qemu-kvm binary to the qemu-system-x86_64 file.
  9. Set up the path for the correct Python release and install the PyYAML library.
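    A minimal sketch, assuming pip is available for the Python release in your path:

    pip install pyyaml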
    Note:

    If an error occurs during the installation, use the following workaround:

  10. Disable KSM.

    To verify that KSM is disabled, run the following command.

    The value 0 in the output indicates that KSM is disabled.

  11. Disable APIC virtualization by editing the /etc/modprobe.d/kvm.conf file and adding enable_apicv=0 to the line containing options kvm_intel.

    Restart the host to disable KSM and APIC virtualization.

  12. Stop and disable Network Manager.

    If you cannot stop Network Manager, you can prevent resolv.conf from being overwritten with the chattr +i /etc/resolv.conf command.

  13. Ensure that the build directory is readable by the QEMU user.

    As an alternative, you can configure QEMU to run as the root user by setting the /etc/libvirt/qemu.conf file to user="root".

  14. Add this line to the end of the /etc/profile file.

You can now install vMX.

Note:

When you install vMX with the sh vmx.sh -lv --install command, you might see a kernel version mismatch warning. You can ignore this warning.

Installing vMX for Different Use Cases

The vMX installation differs by use case. The following tables list the sample configuration requirements for common vMX use cases.

Table 1: Sample Configurations for Use Cases (supported in Junos OS Release 18.3 to 18.4)

| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
| --- | --- | --- | --- |
| Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 5 GB (1 GB for VCP, 4 GB for VFP) | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 10 (1 for VCP, 9 for VFP) | 20 GB (4 GB for VCP, 16 GB for VFP) | virtio |
| High-bandwidth applications or performance testing (3 Gbps and beyond) | 10 (1 for VCP, 9 for VFP) | 20 GB (4 GB for VCP, 16 GB for VFP) | SR-IOV |
| Dual virtual Routing Engines | Double the VCP vCPUs for your use case (both VCP instances are deployed) | Double the VCP memory for your use case (both VCP instances are deployed) | virtio or SR-IOV |

Note:

When deploying on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

Table 2: Sample Configurations for Use Cases (supported in Junos OS Release 18.1 to 18.2)

| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
| --- | --- | --- | --- |
| Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 5 GB (1 GB for VCP, 4 GB for VFP) | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | virtio |
| High-bandwidth applications or performance testing (3 Gbps and beyond) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | SR-IOV |
| Dual virtual Routing Engines | Double the VCP vCPUs for your use case (both VCP instances are deployed) | Double the VCP memory for your use case (both VCP instances are deployed) | virtio or SR-IOV |

Note:

When deploying on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

Table 3: Sample Configurations for Use Cases (supported in Junos OS Release 17.4)

| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
| --- | --- | --- | --- |
| Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 5 GB (1 GB for VCP, 4 GB for VFP) | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | virtio |
| High-bandwidth applications or performance testing (3 Gbps and beyond) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | SR-IOV |

Table 4: Sample Configurations for Use Cases (supported in Junos OS Release 15.1F6 to 17.3)

| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
| --- | --- | --- | --- |
| Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 5 GB (1 GB for VCP, 4 GB for VFP) | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | virtio |
| High-bandwidth applications or performance testing (3 Gbps and beyond) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | SR-IOV |

Table 5: Sample Configurations for Use Cases (supported in Junos OS Release 15.1F3 to 15.1F4)

| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
| --- | --- | --- | --- |
| Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 10 GB (2 GB for VCP, 8 GB for VFP) | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 4 (1 for VCP, 3 for VFP) | 10 GB (2 GB for VCP, 8 GB for VFP) | virtio or SR-IOV |
| High-bandwidth applications or performance testing (3 Gbps and beyond, with a minimum of two 10Gb Ethernet ports; up to 80 Gbps of raw performance) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | SR-IOV |

Table 6: Sample Configurations for Use Cases (supported in Junos OS Release 14.1)

| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
| --- | --- | --- | --- |
| Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 8 GB (2 GB for VCP, 6 GB for VFP) | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 4 (1 for VCP, 3 for VFP) | 8 GB (2 GB for VCP, 6 GB for VFP) | virtio or SR-IOV |
| High-bandwidth applications or performance testing (3 Gbps and beyond, with a minimum of two 10Gb Ethernet ports; up to 80 Gbps of raw performance) | 5 (1 for VCP, 4 for VFP) | 8 GB (2 GB for VCP, 6 GB for VFP) | SR-IOV |

Note:

Starting in Junos OS Release 18.4R1 (Ubuntu host) and Junos OS Release 19.1R1 (Red Hat host), you can set the use_native_drivers value to true in the vMX configuration file to use the latest unmodified drivers for your network interface cards in vMX installations.

To install vMX for a particular use case, perform one of the following tasks:

Installing vMX for Lab Simulation

Starting in Junos OS Release 14.1, the use case for lab simulation uses the virtio NIC.

To install vMX for the lab simulation (less than 100 Mbps) application use case:

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure a single vMX instance.

    Ensure the following parameter is set properly in the vMX configuration file:

    device-type : virtio

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable lite mode for the VFP.
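    For example, from configuration mode in the Junos OS CLI on the VCP:

    user@vmx# set chassis fpc 0 lite-mode
    user@vmx# commit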

Here is a sample vMX startup configuration file using the virtio device type for lab simulation:

Installing vMX for Low-Bandwidth Applications

Starting in Junos OS Release 14.1, the use case for low-bandwidth applications uses virtio or SR-IOV NICs.

To install vMX for the low-bandwidth (up to 3 Gbps) application use case:

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure a single vMX instance.

    Ensure the following parameter is set properly in the vMX configuration file:

    device-type: virtio or device-type: sriov

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable performance mode for the VFP.
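    For example, from configuration mode in the Junos OS CLI on the VCP:

    user@vmx# set chassis fpc 0 performance-mode
    user@vmx# commit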

Here is a sample vMX startup configuration file using the virtio device type for low-bandwidth applications:

Installing vMX for High-Bandwidth Applications

Starting in Junos OS Release 14.1, the use case for high-bandwidth applications uses the SR-IOV NICs.

To install vMX for the high-bandwidth (above 3 Gbps) application use case:

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure a single vMX instance.

    Ensure the following parameter is set properly in the vMX configuration file:

    device-type: sriov

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable performance mode for the VFP.

Here is a sample vMX startup configuration file using the SR-IOV device type:

For more information, see Example: Enabling SR-IOV on vMX Instances on KVM.

Installing vMX with Dual Routing Engines

You can set up redundant Routing Engines on the vMX server by creating the primary Routing Engine (re0) and backup Routing Engine (re1) in the CONTROL_PLANE section of the vMX startup configuration file (default file is config/vmx.conf).

Note:

When deploying the Routing Engines on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

Starting in Junos OS Release 18.1, to install vMX for the dual Routing Engines use case:

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure the vMX instance.

    The default CONTROL_PLANE section resembles the following with one interface entry:

    To set up the redundant Routing Engines:

    1. Navigate to CONTROL_PLANE and specify the proper number of vCPUs (vcpus) and amount of memory (memory-mb).
    2. Starting with Junos OS Release 18.1R1, add the deploy parameter to designate the Routing Engine instance deployed on this host. If you do not specify this parameter, all instances (0,1) are deployed on the host.

      When deploying the Routing Engines on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

    3. Modify the interfaces entry to add instance : 0 after the type parameter to set up re0.

      Specify the ipaddr and macaddr parameters. This address is the management IP address for the VCP VM (fxp0).

    4. Add another entry, but specify instance : 1 to set up re1 and specify the console_port parameter for re1 after the instance : 1 parameter.

      Specify the ipaddr and macaddr parameters. This address is the management IP address for the VCP VM (fxp0).

    The revised CONTROL_PLANE section that deploys re0 on the host resembles the following example with two interface entries:

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable performance mode for the VFP.
  6. When deploying the Routing Engines on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

    For example, to set up a connection (such as br-int-vmx1) between the two hosts over an interface (such as eth1), run the following command on both hosts:
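    A minimal sketch using brctl, with the bridge name br-int-vmx1 and interface eth1 from the example (your deployment may use different tooling):

    brctl addbr br-int-vmx1        # create the inter-host bridge
    brctl addif br-int-vmx1 eth1   # attach the interface that connects the hosts
    ifconfig br-int-vmx1 up        # bring the bridge up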

Here is a sample vMX startup configuration file that is deploying the first Routing Engine instance on this host:

Installing vMX with Mixed WAN Interfaces

Starting in Junos OS Release 17.2, the use case for mixed WAN interfaces uses the virtio and SR-IOV interfaces. Sample configuration requirements are the same as for using SR-IOV device type.

To install vMX with mixed interfaces:

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure a single vMX instance.

    Ensure the following parameter is set properly in the vMX configuration file:

    device-type: mixed

    When configuring the interfaces, make sure the virtio interfaces are specified before the SR-IOV interfaces. The type parameter specifies the interface type.

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable performance mode for the VFP.

Here is a sample vMX startup configuration file using mixed interfaces: