
    System Requirements for vSRX on VMware

    Software Requirements

    Table 1 lists the system software requirements for the vSRX VMware environment, along with the Junos OS release in which each specification was introduced.

    To determine the Junos OS features supported on vSRX, use the Juniper Networks Feature Explorer, a Web-based application that helps you to explore and compare Junos OS feature information to find the right software release and hardware platform for your network. Find Feature Explorer here:

    Feature Explorer: vSRX

    Table 1: Specifications for vSRX

    Component            Specification                  Release Introduced
    -------------------  -----------------------------  ---------------------------------------------------------
    Hypervisor support   VMware ESXi 5.1, 5.5, or 6.0   Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
    Memory               4 GB                           Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
                         8 GB                           Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1
    Disk space           16 GB (IDE or SCSI drives)     Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
    vCPUs                2 vCPUs                        Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
                         5 vCPUs                        Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1
    vNICs                Up to 10; SR-IOV or VMXNET3    Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1

    Note (SR-IOV): We recommend the Intel X520/X540 physical NICs for SR-IOV support on vSRX. For SR-IOV limitations, see the Known Behavior section of the vSRX Release Notes.

    Note (VMXNET3): The Intel DPDK drivers use polling mode for all vNICs, so the NAPI and interrupt mode features in VMXNET3 are not currently supported.
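    For illustration, these minimums map directly onto the VM's configuration (.vmx) parameters. The excerpt below is a minimal sketch, assuming the baseline 2-vCPU/4-GB footprint introduced in Junos OS Release 15.1X49-D15; the disk file name is hypothetical, and this is not a complete .vmx file.

        # Hypothetical .vmx excerpt matching the baseline vSRX footprint
        memsize = "4096"                      # 4 GB of VM memory
        numvcpus = "2"                        # 2 vCPUs
        scsi0:0.fileName = "vsrx-disk.vmdk"   # 16 GB system disk (file name assumed)
        ethernet0.virtualDev = "vmxnet3"      # VMXNET3 vNIC type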

    Hardware Recommendations

    Table 2 lists the hardware specifications for the host machine that runs the vSRX VM.

    Table 2: Hardware Specifications for the Host Machine

    Component                Specification
    -----------------------  -------------------------------------
    Host memory size         4 GB (minimum)
    Host processor type      x86_64 multicore CPU
    Virtual network adapter  VMXNET3 device or VMware virtual NIC

    Note (processor): DPDK requires Intel Virtualization VT-x/VT-d support in the CPU. See About Intel Virtualization Technology.

    Note (network adapter): The Virtual Machine Communication Interface (VMCI) communication channel is internal to the ESXi hypervisor and the vSRX VM.

    Best Practices Recommendations

    Review the following practices to improve vSRX performance.

    NUMA Nodes

    The x86 server architecture consists of multiple sockets, with multiple cores within each socket. Each socket also has memory that is used to store packets during I/O transfers from the NIC to the host. To read packets from memory efficiently, guest applications and associated peripherals (such as the NIC) should reside within a single socket. Memory accesses that span CPU sockets incur a penalty, which can result in nondeterministic performance. For optimal performance, we recommend that all vCPUs for the vSRX VM be in the same physical non-uniform memory access (NUMA) node.

    Caution: The Packet Forwarding Engine (PFE) on the vSRX becomes unresponsive if the NUMA node topology configured in the hypervisor spreads the instance's vCPUs across multiple host NUMA nodes. Ensure that all vCPUs reside on the same NUMA node.

    We recommend that you bind the vSRX instance to a specific NUMA node by setting NUMA node affinity. NUMA node affinity constrains vSRX VM resource scheduling to the specified node only, as sketched below.
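    On ESXi, one way to set this affinity is through the VM's advanced configuration (.vmx) parameters. The following is a minimal sketch; the node number 0 is an assumption and must match the host NUMA node that you intend the vSRX VM to run on.

        # Hypothetical .vmx excerpt: constrain vSRX vCPU and memory scheduling
        # to host NUMA node 0 (replace 0 with the correct node for your topology)
        numa.nodeAffinity = "0"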

    PCI NIC-to-VM Mapping

    If vSRX runs on a NUMA node different from the one to which the Intel PCI NIC is attached, packets must traverse an additional hop across the QPI link, which reduces overall throughput. Use the esxtop command to view information about relative physical NIC locations. On some servers where this information is not available, refer to the hardware documentation for the slot-to-NUMA node topology.
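    As a sketch of how to check this from the ESXi shell, the command below lists all PCI devices; for each physical NIC (VMkernel Name vmnicN), look for a NUMA node field in the per-device output. Treat the exact field name and its availability as an assumption, since the output format varies by ESXi version. Within the interactive esxtop tool, press n to switch to the network panel.

        # List all PCI devices; note the NUMA node reported for each vmnic
        esxcli hardware pci list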

    Modified: 2017-08-11