System Requirements
System Requirements by Environment
The topics below provide detailed system requirements for each supported environment.
For certain vSRX deployments (for example, KVM, VMware, or Contrail), you can scale the performance and capacity of a vSRX instance by increasing the number of vCPUs or the amount of vRAM allocated to it. You cannot, however, scale an existing vSRX instance down to a smaller setting.
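If you are scaling up on KVM, for example, one approach is to shut down the VM and raise its allocation with virsh. This is a sketch only: the vsrx domain name and the vCPU and memory values are illustrative, and you should use a vCPU/vRAM combination supported by your release.

hostOS# virsh shutdown vsrx
hostOS# virsh setmaxmem vsrx 8388608 --config     # raise the memory ceiling first (8 GB, in KiB)
hostOS# virsh setmem vsrx 8388608 --config        # then the current allocation
hostOS# virsh setvcpus vsrx 5 --maximum --config  # raise the vCPU ceiling
hostOS# virsh setvcpus vsrx 5 --config            # then the active vCPU count
hostOS# virsh start vsrx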
Hardware Recommendations
Table 2 lists the hardware specifications for the host machine that runs the vSRX virtual machine (VM). For additional hardware guidance with respect to a specific software environment, see the System Requirements topics listed in the previous section.
Table 2: Hardware Specifications for the Host Machine
| Component | Specification |
| --- | --- |
| Host memory size | 4 GB, 8 GB, 16 GB, or 32 GB. Note: Starting in Junos OS Release 15.1X49-D90 for vSRX, the 16-GB host memory size is supported for vSRX on KVM. Starting in Junos OS Release 15.1X49-D100 for vSRX, the 32-GB host memory size is supported for vSRX on KVM. |
| Host processor type | x86_64 multicore CPU. Note: DPDK requires Intel Virtualization VT-x/VT-d support in the CPU. See About Intel Virtualization Technology. |
| Physical NIC | Starting in Junos OS Release 15.1X49-D70 for vSRX, use Intel 82599 physical NICs in pass-through mode to scale the multicore vSRX. Starting in Junos OS Release 15.1X49-D90 for vSRX on KVM, use SR-IOV (X710/XL710) physical NICs to scale the multicore vSRX. In addition, PCI passthrough (Intel XL710) support is available for vSRX on KVM. |
For VMware, you can check for CPU and other hardware compatibility here: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=cpu
For KVM, we recommend that you enable hardware-based virtualization on the host machine. You can verify CPU compatibility here: http://www.linux-kvm.org/page/Processor_support
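For example, on a Linux host you can quickly confirm that the CPU advertises hardware virtualization extensions (vmx for Intel VT-x, svm for AMD-V) and that the KVM modules are loaded. A nonzero count from the first command indicates support; the output shown here is illustrative.

hostOS# egrep -c '(vmx|svm)' /proc/cpuinfo
8
hostOS# lsmod | grep kvm
kvm_intel             245760  0
kvm                   745472  1 kvm_intel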
To determine the Junos OS features supported on vSRX, use the Juniper Networks Feature Explorer, a Web-based application that helps you explore and compare Junos OS feature information to find the right software release and hardware platform for your network. The Feature Explorer is available on the Juniper Networks website.
Best Practices Recommendations
vSRX deployments can be complex, and their specifics vary widely. Depending on your circumstances, the following recommendations can improve the performance and function of your deployment.
NUMA Nodes
The x86 server architecture consists of multiple sockets with multiple cores per socket. Each socket also has memory that is used to store packets during I/O transfers from the NIC to the host. To read packets from memory efficiently, guest applications and associated peripherals (such as the NIC) should reside within a single socket; memory accesses that span CPU sockets incur a penalty and can result in nondeterministic performance. For optimal vSRX performance, we recommend that all vCPUs for the vSRX VM be in the same physical non-uniform memory access (NUMA) node.
The Packet Forwarding Engine (PFE) on the vSRX becomes unresponsive if the hypervisor's NUMA topology spreads the instance's vCPUs across multiple host NUMA nodes. Ensure that all vCPUs reside on the same NUMA node.
We recommend that you bind the vSRX instance with a specific NUMA node by setting NUMA node affinity. NUMA node affinity constrains the vSRX VM resource scheduling to only the specified NUMA node.
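As an illustration on a KVM host, you might inspect the host topology with numactl and then constrain the vSRX VM to a single node with virsh. The vsrx domain name, node number, and CPU numbers below are placeholders for your own topology.

hostOS# numactl --hardware                                       # list NUMA nodes and their CPUs/memory
hostOS# virsh vcpupin vsrx 0 2                                   # pin vCPU 0 to host CPU 2 (NUMA node 0)
hostOS# virsh vcpupin vsrx 1 4                                   # pin vCPU 1 to host CPU 4 (NUMA node 0)
hostOS# virsh numatune vsrx --mode strict --nodeset 0 --config   # bind VM memory to node 0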
PCI NIC-to-VM Mapping
If the node on which vSRX is running differs from the node to which the Intel PCI NIC is connected, packets must traverse an additional hop across the QPI link, which reduces overall throughput. On a Linux host OS, install the hwloc package and use the lstopo command to view the relative physical NIC locations. On a VMware ESX Server, use the esxtop command to view this information. On servers where these tools are not available or not supported, refer to the hardware documentation for the slot-to-NUMA node topology.
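For example, on a Linux host the sequence might look like the following. The output is an illustrative sketch (the PCI device 8086:10fb shown is an Intel 82599 NIC), and package names vary by distribution.

hostOS# yum install hwloc            # or: apt-get install hwloc
hostOS# lstopo-no-graphics
Machine (64GB total)
  NUMANode L#0 (P#0 32GB) + Package L#0
    PCI 8086:10fb
      Net "eth2"
  NUMANode L#1 (P#1 32GB) + Package L#1
    PCI 8086:10fb
      Net "eth3"

Here eth2 hangs off NUMA node 0, so a vSRX using that NIC should be pinned to node 0.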
Mapping Virtual Interfaces to a vSRX VM
To determine which virtual interfaces on your Linux host OS map to a vSRX VM:
- Use the virsh list command on your Linux host OS to list the running VMs.
hostOS# virsh list
 Id    Name        State
----------------------------------------------------
 9     centos1     running
 15    centos2     running
 16    centos3     running
 48    vsrx        running
 50    1117-2      running
 51    1117-3      running
- Use the virsh domiflist vsrx-name command to list the virtual interfaces on that vSRX VM.
hostOS# virsh domiflist vsrx
Interface  Type    Source     Model   MAC
-------------------------------------------------------
vnet1      bridge  brem2      virtio  52:54:00:8f:75:a5
vnet2      bridge  br1        virtio  52:54:00:12:37:62
vnet3      bridge  brconnect  virtio  52:54:00:b2:cd:f4
Note: The first virtual interface maps to the fxp0 management interface in Junos OS.