
Requirements for vSRX on VMware

 

Software Specifications

Table 1 lists the system software requirements for deploying vSRX on VMware, along with the Junos OS release in which support for each specification was introduced. You might need to download a specific Junos OS release to take advantage of certain features.

Table 1: Specifications for vSRX and vSRX 3.0 on VMware

Hypervisor support

  • VMware ESXi 5.1, 5.5, or 6.0 (introduced in Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1)

  • VMware ESXi 5.5, 6.0, or 6.5 (introduced in Junos OS Release 17.4R1, 18.1R1, 18.2R1, and 18.3R1)

  • VMware ESXi 6.5 (introduced in Junos OS Release 18.4R1)

  • VMware ESXi 6.5, vSRX 3.0 only (introduced in Junos OS Release 19.3R1)

Memory

  • 4 GB (introduced in Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1)

  • 8 GB (introduced in Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1)

  • 16 GB (introduced in Junos OS Release 18.4R1)

  • 32 GB (introduced in Junos OS Release 18.4R1)

Disk space

  • 16 GB, IDE or SCSI drives (introduced in Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1)

vCPUs

  • 2 vCPUs (introduced in Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1)

  • 5 vCPUs (introduced in Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1)

  • 9 vCPUs (introduced in Junos OS Release 18.4R1)

  • 17 vCPUs (introduced in Junos OS Release 18.4R1)

vNICs

  • Up to 10 vNICs (introduced in Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1):

    • SR-IOV

      Note: We recommend the Intel X520/X540 physical NICs for SR-IOV support on vSRX. For SR-IOV limitations, see the Known Behavior section of the vSRX Release Notes.

    • VMXNET3

      Note: The Intel DPDK drivers use polling mode for all vNICs, so the NAPI and interrupt mode features in VMXNET3 are not currently supported.

      Note: Starting in Junos OS Release 15.1X49-D20, in vSRX deployments using VMware ESX, changing the default speed (1000 Mbps) or the default link mode (full duplex) is not supported on VMXNET3 vNICs.

  • Starting in Junos OS Release 18.4R1:

    • SR-IOV (Mellanox ConnectX-3/ConnectX-3 Pro and Mellanox ConnectX-4 EN/ConnectX-4 Lx EN) is required if you intend to scale the performance and capacity of a vSRX VM to 9 or 17 vCPUs and 16 or 32 GB vRAM.

      Note: Mellanox NIC (any ConnectX) cards are not supported on VMware.

    • The DPDK version has been upgraded from 17.02 to 17.11.2 to support the Mellanox Family Adapters.

  • Starting in Junos OS Release 19.4R1, DPDK version 18.11 is supported on vSRX. With this feature, the Mellanox ConnectX Network Interface Card (NIC) on vSRX now supports OSPF multicast and VLANs.

Table 2 lists the specifications for the vSRX 3.0 virtual machine (VM).

Table 2: Specifications for vSRX 3.0 on VMware

vCPUs | vRAM | DPDK  | Hugepages | vNICs   | vDisk | Junos OS Release Introduced
2     | 4 GB | 17.05 | 2 GB      | 2 to 10 | 20 GB | Junos OS Release 18.2R1
5     | 8 GB | 17.05 | 6 GB      | 2 to 10 | 20 GB | Junos OS Release 18.4R1

Note: vSRX on VMware supports VMXNET3 (through DPDK and PMD) and SR-IOV (82599) vNICs. A maximum of eight interfaces is supported. DPDK uses hugepages for improved performance.

Hardware Specifications

Table 3 lists the hardware specifications for the host machine that runs the vSRX VM.

Table 3: Hardware Specifications for the Host Machine

Host processor type

  • Intel x86_64 multicore CPU

    Note: DPDK requires Intel Virtualization VT-x/VT-d support in the CPU. See About Intel Virtualization Technology.

Virtual network adapter

  • VMXNET3 device or VMware Virtual NIC

    Note: The Virtual Machine Communication Interface (VMCI) communication channel is internal to the ESXi hypervisor and the vSRX VM.

Physical NIC support on vSRX 3.0

  • SR-IOV is supported on X710/XL710 NICs.

  • vSRX 3.0 SR-IOV HA on I40E NICs (X710, X740, X722, and so on) is not supported on VMware.

  • Mellanox NIC (any ConnectX) cards are not supported on VMware.

Best Practices for Improving vSRX Performance

Review the following practices to improve vSRX performance.

NUMA Nodes

The x86 server architecture consists of multiple sockets and multiple cores within a socket. Each socket also has memory that is used to store packets during I/O transfers from the NIC to the host. To efficiently read packets from memory, guest applications and associated peripherals (such as the NIC) should reside within a single socket. A penalty is associated with spanning CPU sockets for memory accesses, which might result in nondeterministic performance. For vSRX, we recommend that all vCPUs for the vSRX VM are in the same physical non-uniform memory access (NUMA) node for optimal performance.

Caution

The Packet Forwarding Engine (PFE) on the vSRX becomes unresponsive if the hypervisor's NUMA topology spreads the instance's vCPUs across multiple host NUMA nodes. Ensure that all vCPUs for the vSRX VM reside on the same NUMA node.

We recommend that you bind the vSRX instance to a specific NUMA node by setting NUMA node affinity. NUMA node affinity constrains vSRX VM resource scheduling to only the specified NUMA node.
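One way to express this constraint is through the VM's advanced configuration. The following is a minimal sketch, assuming the vSRX VM should be confined to host NUMA node 0; the exact option name, the node numbering, and whether an additional memory-affinity setting is needed depend on your ESXi version, so verify them against the VMware documentation:

    numa.nodeAffinity = "0"

Add the line to the VM's .vmx file (or set it as an advanced configuration parameter in the vSphere Client) while the VM is powered off, then confirm after startup, for example with esxtop, that all vCPUs and memory are scheduled on the intended node.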

PCI NIC-to-VM Mapping

If the NUMA node on which vSRX runs is different from the node to which the Intel PCI NIC is attached, packets must traverse an additional hop over the QPI link, which reduces overall throughput. Use the esxtop command to view information about relative physical NIC locations. On some servers where this information is not available, refer to the hardware documentation for the slot-to-NUMA node topology.
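As a quick check from the ESXi shell (a sketch only; field names and availability vary by ESXi release), the following commands can help correlate physical NICs with NUMA nodes and with the vSRX VM:

    # List PCI devices; on many ESXi releases the output includes the
    # NUMA node to which each device (including the Intel NIC) is attached
    esxcli hardware pci list

    # Interactive performance monitor; press n to switch to the network
    # panel and see which physical NIC each vSRX vNIC is mapped to
    esxtop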

Interface Mapping for vSRX on VMware

Each network adapter defined for a vSRX is mapped to a specific interface, depending on whether the vSRX instance is a standalone VM or one of a cluster pair for high availability. The interface names and mappings in vSRX are shown in Table 4 and Table 5.

Note the following:

  • In standalone mode:

    • fxp0 is the out-of-band management interface.

    • ge-0/0/0 is the first traffic (revenue) interface.

  • In cluster mode:

    • fxp0 is the out-of-band management interface.

    • em0 is the cluster control link for both nodes.

    • Any of the traffic interfaces can be specified as the fabric links, such as ge-0/0/0 for fab0 on node 0 and ge-7/0/0 for fab1 on node 1 (a sample fabric configuration follows this list).
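As an illustration of that example mapping (a sketch only; choose whichever traffic interfaces you want to dedicate to the fabric), the fabric links for the cluster would be configured with commands such as:

    set interfaces fab0 fabric-options member-interfaces ge-0/0/0
    set interfaces fab1 fabric-options member-interfaces ge-7/0/0

fab0 is the fabric link for node 0 and fab1 is the fabric link for node 1; commit the configuration on the primary node so that it is synchronized to both nodes.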

Table 4 shows the interface names and mappings for a standalone vSRX VM.

Table 4: Interface Names for a Standalone vSRX VM

Network Adapter | Interface Name in Junos OS
1               | fxp0
2               | ge-0/0/0
3               | ge-0/0/1
4               | ge-0/0/2
5               | ge-0/0/3
6               | ge-0/0/4
7               | ge-0/0/5
8               | ge-0/0/6

Table 5 shows the interface names and mappings for a pair of vSRX VMs in a cluster (node 0 and node 1).

Table 5: Interface Names for a vSRX Cluster Pair

Network Adapter | Interface Name in Junos OS
1               | fxp0 (node 0 and node 1)
2               | em0 (node 0 and node 1)
3               | ge-0/0/0 (node 0), ge-7/0/0 (node 1)
4               | ge-0/0/1 (node 0), ge-7/0/1 (node 1)
5               | ge-0/0/2 (node 0), ge-7/0/2 (node 1)
6               | ge-0/0/3 (node 0), ge-7/0/3 (node 1)
7               | ge-0/0/4 (node 0), ge-7/0/4 (node 1)
8               | ge-0/0/5 (node 0), ge-7/0/5 (node 1)

vSRX Default Settings on VMware

vSRX requires the following basic configuration settings (a sample configuration follows the list):

  • Interfaces must be assigned IP addresses.

  • Interfaces must be bound to zones.

  • Policies must be configured between zones to permit or deny traffic.
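The following minimal sketch shows one way to satisfy these three requirements from the Junos OS CLI. The interface names, addresses, zone names, and the policy name allow-outbound are illustrative placeholders rather than factory defaults; adapt them to your deployment:

    set interfaces ge-0/0/0 unit 0 family inet address 192.0.2.1/24
    set interfaces ge-0/0/1 unit 0 family inet address 198.51.100.1/24
    set security zones security-zone trust interfaces ge-0/0/0.0
    set security zones security-zone untrust interfaces ge-0/0/1.0
    set security policies from-zone trust to-zone untrust policy allow-outbound match source-address any destination-address any application any
    set security policies from-zone trust to-zone untrust policy allow-outbound then permit
    commit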

Note

For the management interface, fxp0, VMware uses the VMXNET3 vNIC and requires promiscuous mode on the vSwitch.
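A sketch of enabling this from the ESXi shell, assuming a standard vSwitch named vSwitch0 (the vSwitch name is a placeholder; the same security policy can also be applied per port group or through the vSphere Client):

    # Allow promiscuous mode on the vSwitch that carries the fxp0 port group
    esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true

    # Verify the effective security policy
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch0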

Table 6 lists the factory default settings for the vSRX security policies.

Table 6: Factory Default Settings for Security Policies

Source Zone | Destination Zone | Policy Action
trust       | untrust          | permit
trust       | trust            | permit
untrust     | trust            | deny