
Installing Nested vMX VMs

A nested virtual machine is a virtual machine contained within another VM. Read this topic to understand how to launch the nested vMX VM on KVM.

Overview of the Nested VM Model

The nested vMX virtual machine (VM) model has the virtual control plane (VCP) running as a VM within the virtual forwarding plane (VFP) VM. The VFP VM runs the virtual Trio forwarding plane software and the VCP VM runs Junos OS. The VCP VM and VFP VM require Layer 2 connectivity to communicate with each other. An internal bridge that is local to the server for each vMX instance enables this communication. The VCP VM and VFP VM also require Layer 2 connectivity to communicate with the Ethernet management port on the server. You must specify virtual Ethernet interfaces with unique IP addresses and MAC addresses for both the VFP and VCP to set up an external bridge for a vMX instance. Ethernet management traffic for all vMX instances enters the server through the Ethernet management port.

The nested vMX VM supports virtio and SR-IOV interfaces for forwarding ports. The first interface is used for management and must be a virtio interface connected to the br-ext bridge (external bridge). Subsequent interfaces are WAN interfaces and can be virtio or SR-IOV interfaces. You must create the bridges for all the virtio interfaces. You must have at least one WAN interface for forwarding.

Nested VM with Virtio Interfaces

In virtio mode, the server interfaces must not have any virtual functions (VFs) configured. You can unload the IXGBE driver, along with any VF configuration on its interfaces (for example, eth1), using the rmmod ixgbe command, and then reload the default IXGBE driver for the server interfaces using the modprobe ixgbe command.
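
For example, a minimal sketch of resetting the driver to its default state with no VFs; the interface name eth1 is an example:

  rmmod ixgbe           # unload the IXGBE driver and any VF configuration
  modprobe ixgbe        # reload the default IXGBE driver (no VFs)
  ip link show eth1     # confirm that no VFs are listed for eth1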

Figure 1 illustrates the nested vMX VM model with virtio interfaces.

Figure 1: Nested VM with virtio Interfaces

Nested VM with SR-IOV Interfaces

In SR-IOV mode, the vMX interfaces are associated with physical server interfaces. For example, the ge-0/0/0 interface is associated with eth1 by the following entry in the vMX .conf file: interface: ge-0/0/0, nic: eth1.

When you run in SR-IOV mode, a virtual function (VF) is added to the IXGBE driver for the associated server interface (eth1 in this example). You can verify that the VF is present by using the ip link show eth1 command.
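
For example, a sketch of checking the server interface on the host; the interface name eth1 is illustrative:

  ip link show eth1     # lists a vf 0 line when the VF is present
  ethtool -i eth1       # shows the IXGBE driver and version in use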

Figure 2 illustrates the nested vMX VM model with SR-IOV interfaces.

Figure 2: Nested VM with SR-IOV Interfaces

For SR-IOV interfaces, you must load the modified IXGBE driver before launching the nested vMX VM.

The way network traffic passes from the physical NIC to the virtual NIC depends on the virtualization technique that you configure.

System Requirements for Nested VM Model

vMX can be configured to run in two modes depending on the use case:

  • Lite mode—Requires fewer vCPUs and less memory and runs at lower bandwidth.

  • Performance mode—Requires more vCPUs and memory and runs at higher bandwidth.

    Note:

    Performance mode is the default mode.

vMX Limitations with the Nested VM Model

vMX does not support the following features with the nested VM model:

  • Attachment or detachment of interfaces while a vMX instance is running

  • Upgrade of Junos OS release

Hardware and Software Requirements for Nested vMX VMs

Table 1 lists the hardware requirements.

Table 1: Minimum Hardware Requirements for the Nested vMX VM

Sample system configuration

  • For virtio: Any x86 processor (Intel or AMD) with VT-d capability.

  • For SR-IOV: Intel 82599-based PCI-Express cards (10 Gbps) and Ivy Bridge processors.

Number of cores

Note:

Performance mode is the default mode and the minimum value is based on one port.

  • For lite mode: Minimum of 4 vCPUs

    Note:

    If you want to use lite mode when you are running with more than 4 vCPUs for the VFP, you must explicitly configure lite mode.

  • For performance mode: Minimum of 8 vCPUs

    Note:

    To calculate the recommended number of vCPUs needed by the VFP for performance mode:

    (4 * number-of-forwarding-ports) + 4

    For example, with two forwarding ports this is (4 * 2) + 4 = 12 vCPUs.

Memory

  • For lite mode: Minimum of 3 GB

  • For performance mode:

    • Minimum of 5 GB

    • Recommended: 16 GB

Table 2 lists the software requirements.

Table 2: Software Requirements for Ubuntu

Operating system

  Ubuntu 14.04.1 LTS

  Linux 3.19.0-80-generic

Virtualization

  QEMU-KVM 2.0.0+dfsg-2ubuntu1.11

Required packages

  Note:

  Other additional packages might be required to satisfy all dependencies.

  bridge-utils qemu-kvm libvirt-bin virtinst

  Note:

  libvirt 1.2.19

Installing and Launching the Nested vMX VM on KVM

To launch the nested vMX VM on KVM, perform these tasks.

Preparing the Ubuntu Host to Install the Nested vMX VM

To prepare the Ubuntu host system for installing vMX:

  1. Meet the software and OS requirements described in Hardware and Software Requirements for Nested vMX VMs.
  2. Enable Intel VT-d in BIOS. (We recommend that you verify the process with the vendor because different systems have different methods to enable VT-d.)

    Refer to the procedure for enabling VT-d that is available on the Intel website.

  3. Disable KSM by setting KSM_ENABLED=0 in /etc/default/qemu-kvm.
  4. Disable APIC virtualization by editing the /etc/modprobe.d/qemu-system-x86.conf file and adding enable_apicv=0 to the line containing options kvm_intel.

    options kvm_intel nested=1 enable_apicv=0

  5. Restart the host to disable KSM and APIC virtualization.
  6. If you are using SR-IOV, you must perform this step.
    Note:

    You must remove any previous installation with an external bridge in /etc/network/interfaces and revert to using the original management interface. Make sure that the ifconfig -a command does not show external bridges before you proceed with the installation.

    To check for an external bridge, use the ifconfig command to view the management interface, and then use the brctl show command to see whether the management interface is still attached to an external bridge.

    Enable SR-IOV capability by turning on intel_iommu=on in the /etc/default/grub file.

    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

    Append the intel_iommu=on string to any existing text for the GRUB_CMDLINE_LINUX_DEFAULT parameter.

  7. For optimal performance, we recommend you configure the size of Huge Pages to be 1G on the host and make sure the NUMA node for the VFP has at least 16 1G Huge Pages. To configure the size of Huge Pages, add the following line in /etc/default/grub:

    GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=number-of-huge-pages"

    The number of 1G Huge Pages must be at least (16 * number-of-numa-sockets).

  8. Run the update-grub command followed by the reboot command.
  9. Run the modprobe kvm-intel command before you install vMX.
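
After the host restarts, you can optionally verify these settings before installing vMX. This is a sketch using standard Linux files and commands; it is not part of the vMX package:

  cat /sys/module/kvm_intel/parameters/nested        # Y indicates nested KVM is enabled
  cat /sys/module/kvm_intel/parameters/enable_apicv  # N indicates APIC virtualization is disabled
  grep KSM_ENABLED /etc/default/qemu-kvm             # should show KSM_ENABLED=0
  grep -i hugepages /proc/meminfo                    # confirm the 1G Huge Pages are allocated
  grep intel_iommu /proc/cmdline                     # SR-IOV only: confirm intel_iommu=on is active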

Loading the Modified IXGBE Driver

If you are using SR-IOV interfaces, you must load the modified IXGBE driver before launching the nested vMX VM. To load the modified IXGBE driver:

  1. Download the vMX KVM software package and uncompress the package.
  2. Before compiling the driver, make sure gcc and make are installed.
  3. Unload the default IXGBE driver, compile the modified Juniper Networks driver, and load the modified IXGBE driver.
  4. Verify the driver version (3.19.1) on the SR-IOV interfaces.
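
A minimal sketch of these steps; the source directory is a placeholder that depends on the vMX package contents, and eth1 is an example SR-IOV interface:

  rmmod ixgbe                       # unload the default IXGBE driver
  cd modified-ixgbe-driver/src      # source directory from the vMX package (placeholder path)
  make                              # compile the modified Juniper Networks driver (requires gcc and make)
  insmod ./ixgbe.ko                 # load the modified IXGBE driver
  ethtool -i eth1                   # verify that the driver version is 3.19.1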

Launching a Nested vMX Instance

To launch the nested vMX instance:

  1. Download the vMX Nested software package.
  2. Convert the vmdk image to qcow2 format.
  3. Create the bridges for the virtio interfaces. (A bridge-creation sketch follows this procedure.)
    Note:

    When you create a bridge using the brctl addbr <bridge-name> command, the server might lose its network connection. Alternatively, you can spawn vMX in unnested mode (either SR-IOV or virtio), which creates the bridges, and then use the virsh destroy vcp-name and virsh destroy vfp-name commands to shut down the VCP and VFP VMs while retaining the bridges.

    Note:

    You must create the bridges for the virtio interfaces before you launch the nested vMX instance.

  4. Launch the nested vMX VM instance with the virt-install command (illustrative command sketches follow this procedure), where:

    • --vcpus—Specifies the number of vCPUs.

      For lite mode, minimum of 4 vCPUs. For performance mode, minimum of [(4 * number-of-forwarding-ports) + 4] vCPUs.

    • -r—Specifies the amount of memory the VM uses in MB. Minimum of 16 GB.

    • --serial—Specifies the serial port for the VFP.

    • -w—Specifies the virtio interface. The first interface is used for management and is connected to the br-ext bridge. Subsequent interfaces are WAN interfaces and are connected to the bridges on the host.

    • --host-device—Specifies the SR-IOV interface as the PCI ID of the virtual function (VF0).

      To determine the PCI ID:

      1. Use the ip link command to obtain the interface names for which you create VFs that are bound to the vMX instance.

      2. Use the ethtool -i interface-name utility to determine the PCI bus information.

      3. Use the virsh nodedev-list command to obtain the VF PCI ID.

    • -n—Specifies the name of the vMX VM.

    • --disk—Specifies the path to the qcow2 file (vmx-nested-release.qcow2).
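
As noted in step 3, the bridges for the virtio interfaces must exist before you launch the instance. A minimal sketch of creating the external bridge and two WAN bridges; the bridge names (br-ext, vnet0, vnet1) and the host management interface name (em1) are examples:

  brctl addbr br-ext        # external bridge for the management (first virtio) interface
  brctl addif br-ext em1    # adding the host management interface can interrupt connectivity; see the note in step 3
  brctl addbr vnet0         # WAN bridge for the first forwarding interface
  brctl addbr vnet1         # WAN bridge for the second forwarding interface
  ip link set br-ext up
  ip link set vnet0 up
  ip link set vnet1 up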

For example, this command launches a vMX instance in performance mode with two virtio interfaces connected to the vnet0 and vnet1 bridges:
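
This is an illustrative sketch only; the instance name, image path, serial port, and vCPU and memory values are placeholders that you must adapt to your environment:

  virt-install --import -n vmx-nested -r 16384 --vcpus 12 --cpu host --noautoconsole \
      --disk path=/path-to-images/vmx-nested-release.qcow2,format=qcow2 \
      --serial tcp,host=:8896,mode=bind,protocol=telnet \
      -w bridge=br-ext,model=virtio \
      -w bridge=vnet0,model=virtio \
      -w bridge=vnet1,model=virtio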

For example, this command launches a vMX instance in performance mode with two SR-IOV interfaces:
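
This is also a sketch; the VF PCI IDs are placeholders that you obtain with the virsh nodedev-list command as described above:

  virt-install --import -n vmx-nested -r 16384 --vcpus 12 --cpu host --noautoconsole \
      --disk path=/path-to-images/vmx-nested-release.qcow2,format=qcow2 \
      --serial tcp,host=:8896,mode=bind,protocol=telnet \
      -w bridge=br-ext,model=virtio \
      --host-device=pci_0000_02_10_0 \
      --host-device=pci_0000_02_10_1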

Connecting to the VFP Console Port

After launching the vMX instance with the virt-install command, you can connect to the console port of the VFP from the host with the telnet localhost serial-port command, where serial-port is the TCP port that you specified with the --serial parameter.

For example:
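
  telnet localhost 8896     # 8896 is the example serial port used in the sketches above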

Log in with the default username jnpr and password jnpr123. Become root using the sudo -i command.

The br-ext interface tries to fetch an IP address using DHCP. Use the ifconfig br-ext command to display the assigned IP address. If DHCP is unavailable or if you prefer a static IP address, assign an IP address to br-ext. You can now connect to the VFP using the SSH protocol and this assigned IP address.
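
For example, a sketch of assigning a static address; the address and mask are placeholders for your management subnet:

  ifconfig br-ext 192.0.2.10 netmask 255.255.255.0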

Connecting to the VCP

When the VCP VM is launched, you can connect to the VCP console port at TCP port 8601 from the VFP VM using this command:
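
  telnet localhost 8601     # run from the VFP shell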

From the console port, you can log in with username root and no password.

At a minimum, you must perform these initial Junos OS configuration tasks after logging in to the VCP:

  1. Start the CLI.
  2. Enter configuration mode.
  3. Configure the root password.
  4. Configure the IP address and prefix length for the router’s management Ethernet interface.
  5. Commit the configuration.
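
A minimal sketch of these steps from the VCP console; the management address and prefix length are placeholders, and fxp0 is the VCP management Ethernet interface:

  root@% cli
  root> configure
  root# set system root-authentication plain-text-password
  root# set interfaces fxp0 unit 0 family inet address 192.0.2.20/24
  root# commit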