Installing Nested vMX VMs
A nested virtual machine is a virtual machine contained within another VM. Read this topic to understand how to launch the nested vMX VM on KVM.
Overview of the Nested VM Model
The nested vMX virtual machine (VM) model has the virtual control plane (VCP) running as a VM within the virtual forwarding plane (VFP) VM. The VFP VM runs the virtual Trio forwarding plane software and the VCP VM runs Junos OS. The VCP VM and VFP VM require Layer 2 connectivity to communicate with each other. An internal bridge that is local to the server for each vMX instance enables this communication. The VCP VM and VFP VM also require Layer 2 connectivity to communicate with the Ethernet management port on the server. You must specify virtual Ethernet interfaces with unique IP addresses and MAC addresses for both the VFP and VCP to set up an external bridge for a vMX instance. Ethernet management traffic for all vMX instances enters the server through the Ethernet management port.
The nested vMX VM supports virtio and SR-IOV interfaces for forwarding ports. The first interface is used for management and must be a virtio interface connected to the br-ext bridge (external bridge). Subsequent interfaces are WAN interfaces and can be virtio or SR-IOV interfaces. You must create the bridges for all the virtio interfaces. You must have at least one WAN interface for forwarding.
Nested VM with Virtio Interfaces
In virtio mode, do not configure VFs on the server interfaces. You can return an interface (for example, eth1) to its default state by unloading the driver with the rmmod ixgbe command, and then reload the IXGBE driver with its default settings using the modprobe ixgbe command.
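The driver reset described above can be sketched as a short shell sequence. The interface name eth1 and the DRY_RUN guard are assumptions for illustration; set DRY_RUN=0 and run as root on the host to execute the commands for real.

```shell
#!/bin/sh
# Sketch: return a server interface to the stock ixgbe driver for virtio mode.
# DRY_RUN=1 (the default here) only prints each command; set DRY_RUN=0 on the
# host, as root, to actually run them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$*"        # preview only
    else
        "$@"             # execute for real
    fi
}

run rmmod ixgbe          # unload the driver (this removes any configured VFs)
run modprobe ixgbe       # reload the driver with default settings (no VFs)
run ip link show eth1    # confirm eth1 is back and lists no vf entries
```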
Figure 1 illustrates the nested vMX VM model with virtio interfaces.
Nested VM with SR-IOV Interfaces
In SR-IOV mode, each vMX interface is associated with a server interface. For example, the ge-0/0/0 interface is associated with eth1, as defined in the .conf file: interface: ge-0/0/0, nic: eth1.
A VF is added to the IXGBE driver of the server interface (eth1). While running in SR-IOV mode, you can verify that the VF is associated with the interface by using the ip link show eth1 command.
Figure 2 illustrates the nested vMX VM model with SR-IOV interfaces.
For SR-IOV interfaces, you must load the modified IXGBE driver before launching the nested vMX VM.
The way network traffic passes from the physical NIC to the virtual NIC depends on the virtualization technique that you configure.
System Requirements for Nested VM Model
vMX can be configured to run in two modes depending on the use case:
Lite mode—Requires fewer CPU and memory resources and runs at lower bandwidth.
Performance mode—Requires more CPU and memory resources and runs at higher bandwidth.
Performance mode is the default mode.
vMX Limitations with the Nested VM Model
vMX does not support the following features with the nested VM model:
Attachment or detachment of interfaces while a vMX instance is running
Upgrade of Junos OS release
Hardware and Software Requirements for Nested vMX VMs
Table 1 lists the hardware requirements.
Table 1: Minimum Hardware Requirements for the Nested vMX VM

Sample system configuration:
- For virtio: Any x86 processor (Intel or AMD) with VT-d capability.
- For SR-IOV: Intel 82599-based PCI-Express cards (10 Gbps) and Ivy Bridge processors.

Number of cores:
Note: Performance mode is the default mode and the minimum value is based on one port.
Table 2 lists the software requirements.
Table 2: Software Requirements for Ubuntu

Operating system: Ubuntu 14.04.1 LTS
Required packages: bridge-utils, qemu-kvm, libvirt-bin, virtinst
Note: Other additional packages might be required to satisfy all dependencies.
Note: libvirt 1.2.19
Installing and Launching the Nested vMX VM on KVM
To launch the nested vMX VM on KVM, perform these tasks.
Preparing the Ubuntu Host to Install the Nested vMX VM
To prepare the Ubuntu host system for installing vMX:
- Meet the software and OS requirements described in Hardware and Software Requirements for Nested vMX VMs.
- Enable Intel VT-d in BIOS. (We recommend that you verify the process with the vendor because different systems use different methods to enable VT-d.) Refer to the procedure to enable VT-d available on the Intel website.
- Disable KSM by setting KSM_ENABLED=0 in
- Disable APIC virtualization by editing the /etc/modprobe.d/qemu-system-x86.conf file and adding enable_apicv=0 to the line containing options kvm_intel:
options kvm_intel nested=1 enable_apicv=0
- Restart the host to disable KSM and APIC virtualization.
- If you are using SR-IOV, you must perform this step. Remove any previous installation that uses an external bridge in /etc/network/interfaces and revert to using the original management interface. Make sure that the ifconfig -a command does not show external bridges before you proceed with the installation.
To determine whether an external bridge is displayed, use the ifconfig command to see the management interface. To confirm that this interface is used for an external bridge group, use the brctl show command to see whether the management interface is listed as an external bridge.
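The check above can be scripted. The sample brctl show output below is illustrative (the bridge ID and interface names are assumptions), but the parsing is the same you would apply to real output on the host:

```shell
#!/bin/sh
# Sketch: detect whether an external bridge (br-ext) is still configured.
# check_bridge OUTPUT NAME succeeds if NAME appears in the bridge-name column.
check_bridge() {
    printf '%s\n' "$1" | awk 'NR > 1 { print $1 }' | grep -qx "$2"
}

# Illustrative `brctl show` output (values are assumptions):
sample='bridge name     bridge id               STP enabled     interfaces
br-ext          8000.0cc47a7d3a2c       no              eth0'

if check_bridge "$sample" br-ext; then
    echo "br-ext still present: remove it before installing"
fi
```

On the host you would feed the real output, for example `check_bridge "$(brctl show)" br-ext`.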
Enable SR-IOV capability by turning on intel_iommu=on in the GRUB configuration. Append the intel_iommu=on string to any existing text for the GRUB_CMDLINE_LINUX_DEFAULT parameter.
- For optimal performance, we recommend that you configure the size of Huge Pages to be 1G on the host and make sure that the NUMA node for the VFP has at least 16 1G Huge Pages. To configure the size of Huge Pages, add the following line to the GRUB configuration:
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=number-of-huge-pages"
The number of Huge Pages must be at least (16G * number-of-numa-sockets).
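The sizing rule above (at least 16G of 1G Huge Pages per NUMA socket) can be made explicit with a small helper; the two-socket figure is only an example:

```shell
#!/bin/sh
# Sketch: minimum number of 1G Huge Pages for the GRUB line, following the
# rule: at least 16G * number-of-numa-sockets.
min_hugepages() {
    sockets=$1
    echo $((16 * sockets))
}

# Example: a two-socket host needs at least 32 1G pages.
min_hugepages 2
```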
- Run the update-grub command followed by the reboot command.
- Run the modprobe kvm-intel command before you install vMX.
Loading the Modified IXGBE Driver
If you are using SR-IOV interfaces, you must load the modified IXGBE driver before launching the nested vMX VM. To load the modified IXGBE driver:
- Download the vMX KVM software package and uncompress the package:
tar xvf package-name
- Before compiling the driver, make sure gcc and make are installed:
sudo apt-get update
sudo apt-get install make gcc
- Unload the default IXGBE driver, compile the modified Juniper Networks driver, and load the modified IXGBE driver:
cd package-location/drivers/ixgbe-3.19.1/src
make
sudo rmmod ixgbe
sudo insmod ./ixgbe.ko max_vfs=1,1
sudo make install
- Verify the driver version (3.19.1) on the SR-IOV interfaces.
Launching a Nested vMX Instance
To launch the nested vMX instance:
- Download the vMX Nested software package.
- Convert the vmdk image to qcow2 format:
qemu-img convert -f vmdk -O qcow2 vmdk-filename qcow2-filename
- Create the bridges for the virtio interfaces:
brctl addbr bridge-name
When you create a bridge using the brctl addbr bridge-name command, the server might lose its network connection. Alternatively, you can spawn the vMX in unnested mode (either SR-IOV or virtio) and use the virsh destroy vcp-name and virsh destroy vfp-name commands so that the bridges are created and retained.
You must create the bridges for the virtio interfaces before you launch the nested vMX instance.
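Bridge creation can be scripted; the bridge names (vnet0, vnet1) and the DRY_RUN guard are assumptions for illustration, matching the examples later in this topic:

```shell
#!/bin/sh
# Sketch: create the bridges that the virtio WAN interfaces attach to.
# DRY_RUN=1 (default) prints the commands; set DRY_RUN=0 as root to run them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

for bridge in vnet0 vnet1; do        # bridge names are illustrative
    run brctl addbr "$bridge"        # create the bridge
    run ip link set "$bridge" up     # bring it up so the VM can attach
done
```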
- Launch the nested vMX VM instance with the virt-install command. For example:
sudo virt-install --hvm --vcpus=number-vcpus -r memory \
--serial tcp,host=:console-port,mode=bind,protocol=telnet \
--nographics --import --noautoconsole \
--cpu SandyBridge,+erms,+smep,+fsgsbase,+pdpe1gb,+rdrand,+f16c,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme \
-w bridge=br-ext,model=virtio \
-w bridge=bridge-name,model=virtio \
--host-device=pci-id \
-n name --disk disk-image,format=qcow2
--vcpus—Specifies the number of vCPUs.
For lite mode, minimum of 4 vCPUs. For performance mode, minimum of [(4 * number-of-forwarding-ports) + 4] vCPUs.
-r—Specifies the amount of memory the VM uses in MB. Minimum of 16 GB.
--serial—Specifies the serial port for the VFP.
-w—Specifies the virtio interface. The first interface is used for management and is connected to the br-ext bridge. Subsequent interfaces are WAN interfaces and are connected to the bridges on the host.
--host-device—Specifies the SR-IOV interface as the PCI ID of the virtual function (VF0).
To determine the PCI ID:
- Use the ip link command to obtain the interface names for which you create VFs that are bound to the vMX instance.
- Use the ethtool -i interface-name utility to determine the PCI bus information.
driver: ixgbe
version: 3.19.1
firmware-version: 0x61bd0001
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
- Use the virsh nodedev-list command to obtain the VF PCI ID.
pci_0000_81_00_0
pci_0000_81_00_1
pci_0000_81_10_0
pci_0000_81_10_1
-n—Specifies the name of the vMX VM.
--disk—Specifies the path to the qcow2 file.
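The bus-info value that ethtool reports (for example, 0000:81:10.0) maps to the name that virsh nodedev-list prints by replacing : and . with _ and prefixing pci_. A helper sketch:

```shell
#!/bin/sh
# Sketch: convert an ethtool bus-info string (DDDD:BB:SS.F) into the
# virsh nodedev name (pci_DDDD_BB_SS_F).
to_nodedev() {
    echo "pci_$(echo "$1" | tr ':.' '__')"
}

to_nodedev 0000:81:10.0   # prints pci_0000_81_10_0
```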
For example, this command launches a vMX instance in performance mode with two virtio interfaces connected to the vnet0 and vnet1 bridges:
sudo virt-install --hvm --vcpus=12 -r 16384 \
--serial tcp,host=:4001,mode=bind,protocol=telnet \
--nographics --import --noautoconsole \
--cpu SandyBridge,+erms,+smep,+fsgsbase,+pdpe1gb,+rdrand,+f16c,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme \
-w bridge=br-ext,model=virtio \
-w bridge=vnet0,model=virtio \
-w bridge=vnet1,model=virtio \
-n vmx1 --disk vmx-nested-17.2R1.13-4.qcow2,format=qcow2
For example, this command launches a vMX instance in performance mode with two SR-IOV interfaces:
sudo virt-install --hvm --vcpus=12 -r 16384 \
--serial tcp,host=:4001,mode=bind,protocol=telnet \
--nographics --import --noautoconsole \
--cpu SandyBridge,+erms,+smep,+fsgsbase,+pdpe1gb,+rdrand,+f16c,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme \
-w bridge=br-ext,model=virtio \
--host-device=pci_0000_81_10_0 \
--host-device=pci_0000_81_10_1 \
-n vmx2 --disk vmx-nested-17.2R1.13-4.qcow2,format=qcow2
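The --vcpus=12 value in both examples follows the performance-mode formula [(4 * number-of-forwarding-ports) + 4] given earlier; a small helper makes the arithmetic explicit:

```shell
#!/bin/sh
# Sketch: minimum vCPUs for performance mode, per the formula
# (4 * number-of-forwarding-ports) + 4. Lite mode needs a minimum of 4.
perf_vcpus() {
    ports=$1
    echo $((4 * ports + 4))
}

perf_vcpus 2   # two forwarding ports -> 12, matching the examples above
```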
Connecting to the VFP Console Port
After launching the vMX instance with the virt-install command, you can connect to the console port of the VFP from the host with the telnet localhost serial-port command, where serial-port is the port you specified as host with the --serial parameter.
$ telnet localhost 4001
Log in with the default username jnpr and password jnpr123. Become root using the sudo -i command.
The br-ext interface tries to fetch an IP address using DHCP. Use the ifconfig br-ext command to display the assigned IP address. If DHCP is unavailable or if you prefer a static IP address, assign an IP address to br-ext. You can now connect to the VFP using the SSH protocol and this assigned IP address.
Connecting to the VCP
When the VCP VM is launched, you can connect to the VCP console port at TCP port 8601 from the VFP VM using this command:
$ telnet localhost 8601
From the console port, you can log in with username root and no password.
At a minimum, you must perform these initial Junos OS configuration tasks after logging in to the VCP:
- Start the CLI.
- Enter configuration mode.
- Configure the root password.
root@# set system root-authentication plain-text-password
New password: password
Retype new password: password
- Configure the IP address and prefix length for the router’s management Ethernet interface.
root@# set interfaces fxp0 unit 0 family inet address address/prefix-length
- Commit the configuration.