System Requirements

System Requirements by Environment

The topics below provide detailed system requirements for each supported environment.

Note

For certain vSRX deployments (for example, on KVM, VMware, or Contrail), you can scale up the performance and capacity of a vSRX instance by increasing the number of vCPUs or the amount of vRAM allocated to it. You cannot, however, scale an existing vSRX instance down to a smaller setting.

Hardware Recommendations

Table 2 lists the hardware specifications for the host machine that runs the vSRX virtual machine (VM). For hardware guidance specific to a software environment, see the System Requirements topics listed in the previous section.

Table 2: Hardware Specifications for the Host Machine

Component: Host memory size
Specification: 4 GB, 8 GB, or 16 GB.
Note: Starting in Junos OS Release 15.1X49-D90 and Junos OS Release 17.3R1, the 16-GB host memory size is supported for vSRX on KVM.

Component: Host processor type
Specification: x86_64 multicore CPU.
Note: DPDK requires Intel Virtualization VT-x/VT-d support in the CPU (see the verification sketch after this table). See About Intel Virtualization Technology.

Component: Physical NIC
Specification:

  • Intel X710/XL710, X520/X540, or 82599 physical NICs for SR-IOV on vSRX

  • Intel XL710 physical NICs for PCI passthrough support on vSRX

Starting in Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1, you can use Intel 82599 physical NICs in passthrough mode to scale the multicore vSRX.

Starting in Junos OS Release 15.1X49-D90 and Junos OS Release 17.3R1, in a KVM deployment you can use SR-IOV (X710/XL710) physical NICs to scale the multicore vSRX. In addition, PCI passthrough (Intel XL710) support is available for vSRX on KVM.
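As a quick check of the DPDK prerequisite in Table 2, you can confirm on a Linux host that the CPU exposes VT-x and that the IOMMU (VT-d) is active. The following is a minimal sketch; exact output varies by distribution, and VT-d typically also requires the intel_iommu=on kernel boot parameter:

    hostOS# grep -c vmx /proc/cpuinfo
    hostOS# dmesg | grep -e DMAR -e IOMMU

A nonzero count from the first command indicates VT-x support; DMAR/IOMMU messages in the kernel log indicate that VT-d is enabled.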

Note

To determine the Junos OS features supported on vSRX, use the Juniper Networks Feature Explorer, a Web-based application that helps you explore and compare Junos OS feature information to find the right software release and hardware platform for your network. Find Feature Explorer here:

Feature Explorer: vSRX

Best Practices Recommendations

vSRX deployments can be complex, and their specifics vary widely. Depending on your circumstances, the following recommendations can improve the performance and function of your deployment.

NUMA Nodes

The x86 server architecture consists of multiple sockets, each with multiple cores. Each socket also has memory that is used to store packets during I/O transfers from the NIC to the host. To read packets from memory efficiently, guest applications and associated peripherals (such as the NIC) should reside within a single socket. Memory accesses that span CPU sockets incur a penalty and can result in nondeterministic performance. For optimal performance, we recommend that all vCPUs for the vSRX VM reside in the same physical non-uniform memory access (NUMA) node.
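You can inspect the host's NUMA topology before placing the VM. The following is a minimal sketch for a Linux host (the numactl package is an assumption and might need to be installed first):

    hostOS# lscpu | grep NUMA
    hostOS# numactl --hardware

The output lists each NUMA node along with the CPUs and memory that belong to it.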

Caution

The Packet Forwarding Engine (PFE) on the vSRX becomes unresponsive if the hypervisor's NUMA topology spreads the instance's vCPUs across multiple host NUMA nodes. Ensure that all vCPUs for the vSRX instance reside on the same NUMA node.

We recommend that you bind the vSRX instance with a specific NUMA node by setting NUMA node affinity. NUMA node affinity constrains the vSRX VM resource scheduling to only the specified NUMA node.
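On a KVM host, for example, you can set the affinity with virsh. The following is a minimal sketch that assumes a domain named vsrx with four vCPUs, pinned to NUMA node 0, whose physical CPUs are 0 through 3; adjust the domain name and CPU lists to your topology:

    hostOS# virsh numatune vsrx --mode strict --nodeset 0 --config
    hostOS# virsh vcpupin vsrx 0 0 --config
    hostOS# virsh vcpupin vsrx 1 1 --config
    hostOS# virsh vcpupin vsrx 2 2 --config
    hostOS# virsh vcpupin vsrx 3 3 --config

The --config option applies the settings on the next start of the VM. You can verify the result with the virsh vcpuinfo vsrx and virsh numatune vsrx commands.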

PCI NIC-to-VM Mapping

If the NUMA node on which vSRX runs is different from the node to which the Intel PCI NIC is connected, packets must traverse an additional hop over the QPI link, which reduces overall throughput. On a Linux host OS, install the hwloc package and use the lstopo command to display the relative physical NIC locations. On a VMware ESX server, use the esxtop command to view this information. On servers where this information is unavailable or unsupported, refer to the hardware documentation for the slot-to-NUMA-node topology.
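For example, on a Linux host you can query the NUMA node of a specific NIC directly from sysfs. This is a minimal sketch; the interface name eth0 is an assumption for illustration:

    hostOS# lstopo --of console
    hostOS# cat /sys/class/net/eth0/device/numa_node

A value of 0 or higher identifies the NUMA node to which the NIC is attached; -1 means the platform does not report this information.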

Mapping Virtual Interfaces to a vSRX VM

To determine which virtual interfaces on your Linux host OS map to a vSRX VM:

  1. Use the virsh list command on your Linux host OS to list the running VMs.
    hostOS# virsh list
  2. Use the virsh domiflist vsrx-name command to list the virtual interfaces on that vSRX VM (sample output follows these steps).
    hostOS# virsh domiflist vsrx
    Note

    The first virtual interface maps to the fxp0 interface in Junos OS.
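
The output resembles the following. The values shown are illustrative; interface names, source bridges, and MAC addresses vary by deployment:

    hostOS# virsh domiflist vsrx
    Interface  Type     Source  Model   MAC
    -------------------------------------------------------
    vnet0      bridge   br0     virtio  52:54:00:xx:xx:xx
    vnet1      bridge   br1     virtio  52:54:00:xx:xx:xx

In this example, vnet0 is the first virtual interface and therefore maps to the fxp0 interface in Junos OS.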