Requirements for vSRX on VMware

Software Requirements

Table 1 lists the system software requirements for deploying vSRX on VMware, along with the Junos OS release in which each specification was introduced. You must download a specific Junos OS release to take advantage of certain features.

Table 1: Specifications for vSRX on VMware

Component             Specification                    Junos OS Release Introduced

Hypervisor support    VMware ESXi 5.1, 5.5, or 6.0     Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
                      VMware ESXi 6.5                  Junos OS Release 18.1R1

Memory                4 GB                             Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
                      8 GB                             Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1

Disk space            16 GB (IDE or SCSI drives)       Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1

vCPUs                 2 vCPUs                          Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
                      5 vCPUs                          Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1

vNICs                 Up to 10 (SR-IOV or VMXNET3)     Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1

Note: We recommend the Intel X520/X540 physical NICs for SR-IOV support on vSRX. For SR-IOV limitations, see the Known Behavior section of the vSRX Release Notes.

Note: The Intel DPDK drivers use polling mode for all vNICs, so the NAPI and interrupt mode features in VMXNET3 are not currently supported.
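
If you plan to use SR-IOV, the virtual functions (VFs) must be enabled on the ESXi host before they can be passed through to the vSRX VM. The following ESXi shell sketch assumes the Intel X520/X540 NIC uses the ixgbe driver and that four VFs are wanted on each of two ports; the module name and VF counts are assumptions to adapt to your hardware:

    # Confirm which driver module the physical NICs use (expecting ixgbe for X520/X540).
    esxcli network nic list

    # Enable four VFs on each of the first two ixgbe ports (illustrative values), then reboot the host.
    esxcli system module parameters set -m ixgbe -p "max_vfs=4,4"

    # After the reboot, verify that the SR-IOV-capable NICs are visible.
    esxcli network sriovnic list

Refer to the vSRX Release Notes and your NIC documentation for the supported VF counts and SR-IOV limitations.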

Hardware Recommendations

Table 2 lists the hardware specifications for the host machine that runs the vSRX VM.

Table 2: Hardware Specifications for the Host Machine

Component                  Specification

Host memory size           4 GB (minimum)

Host processor type        Intel x86_64 multicore CPU

Virtual network adapter    VMXNET3 device or VMware Virtual NIC

Note: DPDK requires Intel Virtualization VT-x/VT-d support in the host CPU. See About Intel Virtualization Technology.

Note: The Virtual Machine Communication Interface (VMCI) communication channel is internal to the ESXi hypervisor and the vSRX VM.
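
Unless you are using SR-IOV passthrough, each vSRX interface should use the VMXNET3 adapter type in the VM settings. A minimal sketch of the corresponding entries in the VM configuration (.vmx) file follows; the device index ethernet1 and the port group name vSRX-trust are illustrative assumptions, not values from this guide:

    ethernet1.present = "TRUE"
    ethernet1.virtualDev = "vmxnet3"
    ethernet1.networkName = "vSRX-trust"

The same adapter type can be selected in the vSphere Web Client when you add a network adapter to the VM.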

Best Practices for Improving vSRX Performance

Review the following practices to improve vSRX performance.

NUMA Nodes

The x86 server architecture consists of multiple sockets, each with multiple cores. Each socket also has memory that is used to store packets during I/O transfers from the NIC to the host. To read packets from memory efficiently, guest applications and associated peripherals (such as the NIC) should reside within a single socket. Memory accesses that span CPU sockets incur a penalty, which can result in nondeterministic performance. For optimal performance, we recommend that all vCPUs for the vSRX VM reside in the same physical non-uniform memory access (NUMA) node.

Caution

The Packet Forwarding Engine (PFE) on the vSRX becomes unresponsive if the NUMA node topology is configured in the hypervisor to spread the instance's vCPUs across multiple host NUMA nodes. Ensure that all vCPUs for the vSRX VM reside on the same NUMA node.

We recommend that you bind the vSRX instance with a specific NUMA node by setting NUMA node affinity. NUMA node affinity constrains the vSRX VM resource scheduling to only the specified NUMA node.
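
How you pin the VM to a NUMA node depends on your vSphere release; it is typically done through the VM's advanced configuration parameters. A minimal sketch, assuming the vSRX VM should be constrained to host NUMA node 0:

    numa.nodeAffinity = "0"

The same parameter can be added in the vSphere Web Client under VM Options > Advanced > Configuration Parameters. Node 0 is an assumption; choose the NUMA node that also hosts the physical NIC used by the vSRX (see PCI NIC-to-VM Mapping below).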

PCI NIC-to-VM Mapping

If the NUMA node on which the vSRX VM runs is different from the node to which the Intel PCI NIC is connected, packets must traverse an additional hop across the QPI link, which reduces overall throughput. Use the esxtop command to view information about relative physical NIC locations. On some servers where this information is not available, refer to the hardware documentation for the slot-to-NUMA node topology.
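
If esxtop does not show NIC locality on your server, the PCI inventory usually does. A sketch from the ESXi shell follows; the vmnic name to look for is whichever vmnic you plan to attach to the vSRX:

    # List all PCI devices; on most ESXi releases each entry includes a NUMA Node field.
    # Find the entry whose VMkernel name matches your NIC (for example, vmnic4) and note its NUMA node.
    esxcli hardware pci list

Compare the reported NUMA node with the node to which the vSRX vCPUs are bound.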

Interface Mapping for vSRX on VMware

Each network adapter defined for a vSRX is mapped to a specific interface, depending on whether the vSRX instance is a standalone VM or one of a cluster pair for high availability. The interface names and mappings in vSRX are shown in Table 3 and Table 4.

Note the following:

  • In standalone mode:

    • fxp0 is the out-of-band management interface.

    • ge-0/0/0 is the first traffic (revenue) interface.

  • In cluster mode:

    • fxp0 is the out-of-band management interface.

    • em0 is the cluster control link for both nodes.

    • Any of the traffic interfaces can be specified as the fabric links, such as ge-0/0/0 for fab0 on node 0 and ge-7/0/0 for fab1 on node 1 (see the configuration sketch after this list).
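
For reference, the fabric links in the example above would be configured in Junos OS roughly as follows; the member interfaces are the examples from this list:

    set interfaces fab0 fabric-options member-interfaces ge-0/0/0
    set interfaces fab1 fabric-options member-interfaces ge-7/0/0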

Table 3 shows the interface names and mappings for a standalone vSRX VM.

Table 3: Interface Names for a Standalone vSRX VM

Network Adapter    Interface Name in Junos OS

1                  fxp0
2                  ge-0/0/0
3                  ge-0/0/1
4                  ge-0/0/2
5                  ge-0/0/3
6                  ge-0/0/4
7                  ge-0/0/5
8                  ge-0/0/6

Table 4 shows the interface names and mappings for a pair of vSRX VMs in a cluster (node 0 and node 1).

Table 4: Interface Names for a vSRX Cluster Pair

Network Adapter    Interface Name in Junos OS

1                  fxp0 (node 0 and 1)
2                  em0 (node 0 and 1)
3                  ge-0/0/0 (node 0), ge-7/0/0 (node 1)
4                  ge-0/0/1 (node 0), ge-7/0/1 (node 1)
5                  ge-0/0/2 (node 0), ge-7/0/2 (node 1)
6                  ge-0/0/3 (node 0), ge-7/0/3 (node 1)
7                  ge-0/0/4 (node 0), ge-7/0/4 (node 1)
8                  ge-0/0/5 (node 0), ge-7/0/5 (node 1)

vSRX Default Settings on VMware

Table 5 lists the factory default settings for the vSRX security policies.

Table 5: Factory Default Settings for Security Policies

Source Zone    Destination Zone    Policy Action

trust          untrust             permit
trust          trust               permit
untrust        trust               deny
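
For orientation, the defaults in Table 5 correspond to security policies along the lines of the following Junos OS configuration. This is a sketch: the policy names (default-permit, default-deny) and exact statements can differ between releases, so treat it as illustrative rather than the literal factory configuration.

    set security policies from-zone trust to-zone untrust policy default-permit match source-address any destination-address any application any
    set security policies from-zone trust to-zone untrust policy default-permit then permit
    set security policies from-zone trust to-zone trust policy default-permit match source-address any destination-address any application any
    set security policies from-zone trust to-zone trust policy default-permit then permit
    set security policies from-zone untrust to-zone trust policy default-deny match source-address any destination-address any application any
    set security policies from-zone untrust to-zone trust policy default-deny then deny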