Requirements for vSRX on KVM

This section presents an overview of the requirements for deploying a vSRX instance on KVM.

Software Specifications

Table 1 lists the system software requirements for deploying vSRX in a KVM environment. The table notes the Junos OS release in which support for a particular software specification was introduced. You need to download a specific Junos OS release to take advantage of certain features.

CAUTION:

A Page Modification Logging (PML) issue related to the KVM host kernel might prevent the vSRX from successfully booting. If you experience this behavior with the vSRX, we recommend that you disable the PML at the host kernel level. See Prepare Your Server for vSRX Installation for details about disabling the PML as part of enabling nested virtualization.
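For example, on many Intel hosts you can disable PML by reloading the kvm_intel kernel module with its pml parameter turned off. This is a sketch; confirm the exact procedure for your distribution, and stop all running VMs before unloading the module:

  # Reload the KVM Intel module with Page Modification Logging disabled
  rmmod kvm_intel
  modprobe kvm_intel pml=0

  # Persist the setting across reboots
  echo "options kvm_intel pml=0" > /etc/modprobe.d/kvm-intel-pml.conf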

Table 1: Specifications for vSRX

Component | Specification | Release Introduced
Linux KVM Hypervisor support | Ubuntu 14.04.5, 16.04, and 16.10 | Junos OS Release 18.4R1
Linux KVM Hypervisor support | Ubuntu 18.04 and 20.04 | Junos OS Release 20.4R1
Linux KVM Hypervisor support | Red Hat Enterprise Linux (RHEL) 7.3 and CentOS 7.2 | Junos OS Release 18.4R1
Linux KVM Hypervisor support | RHEL 7.6 and 7.7, and CentOS 7.6 and 7.7 | Junos OS Release 19.2R1
Linux KVM Hypervisor support | RHEL 8.2 | Junos OS Release 20.4R1
Memory | 4 GB | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
Memory | 8 GB | Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1
Memory | 16 GB | Junos OS Release 15.1X49-D90 and Junos OS Release 17.3R1
Memory | 32 GB | Junos OS Release 15.1X49-D100 and Junos OS Release 17.4R1
Disk space | 16 GB IDE drive | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
vCPUs | 2 vCPUs | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1
vCPUs | 5 vCPUs | Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1
vCPUs | 9 vCPUs | Junos OS Release 15.1X49-D90 and Junos OS Release 17.3R1
vCPUs | 17 vCPUs | Junos OS Release 15.1X49-D100 and Junos OS Release 17.4R1

Table 2: vNIC Support in vSRX2.0 and vSRX3.0

vNICs | vSRX2.0 | vSRX3.0
Virtio SA and HA | Yes | Yes
SR-IOV SA and HA over Intel 82599/X520 series | Yes | Yes
SR-IOV SA and HA over Intel X710/XL710/XXV710 series | Yes | Yes
SR-IOV SA over Intel E810 series | Yes | Yes
SR-IOV HA over Intel E810 series | No | No
SR-IOV SA and HA over Mellanox ConnectX-3 | No | No
SR-IOV SA and HA over Mellanox ConnectX-4/5/6 (MLX5 driver only) | Yes | Yes (Junos OS Release 21.2R1 onwards)
PCI passthrough over Intel 82599/X520 series | No | No
PCI passthrough over Intel X710/XL710 series | Yes | No
Data Plane Development Kit (DPDK) version 18.11 | Yes | Yes
Data Plane Development Kit (DPDK) version 20.11 | Yes | Yes

Note: Starting in Junos OS Release 19.4R1, DPDK version 18.11 is supported on vSRX. With this feature, the Mellanox ConnectX network interface card (NIC) on vSRX supports OSPF multicast and VLANs.

Note: Starting in Junos OS Release 21.2R1, DPDK is upgraded from version 18.11 to version 20.11. The new version supports the ICE Poll Mode Driver (PMD), which enables support for the physical Intel E810 series 100G NIC on vSRX 3.0.
Note:

A vSRX on KVM deployment requires you to enable hardware-based virtualization on a host OS with an Intel Virtualization Technology (VT) capable processor. You can verify CPU compatibility at http://www.linux-kvm.org/page/Processor_support.
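As a quick check, you can confirm that the host CPU exposes hardware virtualization extensions and that the KVM modules are loaded:

  # A nonzero count indicates VT-x (vmx) or AMD-V (svm) support
  egrep -c '(vmx|svm)' /proc/cpuinfo

  # Confirm that the KVM kernel modules are loaded
  lsmod | grep kvm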

Table 3 lists feature support and system requirements for the vSRX VM.

Table 3: Feature Support in vSRX2.0 and vSRX3.0

Features | vSRX2.0 | vSRX3.0
2 vCPU / 4 GB RAM | Yes | Yes
5 vCPU / 8 GB RAM | Yes | Yes
9 vCPU / 16 GB RAM | Yes | Yes (Junos OS Release 19.1R1 onwards)
17 vCPU / 32 GB RAM | Yes | Yes (Junos OS Release 19.1R1 onwards)
Flexible flow session capacity scaling with additional vRAM | Yes (Junos OS Release 19.1R1 onwards) | Yes (Junos OS Release 19.2R1 onwards)
Multicore scaling support (software RSS) | No | Yes (Junos OS Release 19.3R1 onwards)
Reserve additional vCPU cores for the Routing Engine | Yes | Yes
Virtio (virtio-net, vhost-net) | Yes | Yes

Supported Hypervisors
KVM on Ubuntu 16.04, CentOS 7.1, Red Hat 7.2 | Yes | Yes

Other Features
Cloud-init | Yes | Yes
PowerMode IPsec (PMI) | Yes | Yes
Chassis cluster | Yes | Yes
GTP TEID-based session distribution using software RSS | No | Yes (Junos OS Release 19.3R1 onwards)
On-device antivirus scan engine (Avira) | No | Yes (Junos OS Release 19.4R1 onwards)
LLDP | Yes | Yes (Junos OS Release 21.1R1 onwards)
Junos Telemetry Interface | Yes | Yes (Junos OS Release 20.3R1 onwards)

System Requirements
Hardware acceleration (VMX CPU flag) enabled in the hypervisor | Yes | No
Disk space | 16 GB | 18 GB

Starting in Junos OS Release 19.1R1, the vSRX instance supports a guest OS with 9 or 17 vCPUs using single-root I/O virtualization (SR-IOV) over Intel X710/XL710 on the Linux KVM hypervisor for improved scalability and performance.

KVM Kernel Recommendations for vSRX

Table 4 lists the recommended Linux kernel version for your Linux host OS when deploying vSRX on KVM. The table outlines the Junos OS release in which support for a particular Linux kernel version was introduced.

Table 4: Kernel Recommendations for KVM

Linux Distribution | Linux Kernel Version | Supported Junos OS Release
CentOS | 3.10.0.229 (upgrade the Linux kernel to capture the recommended version) | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 or later
Ubuntu | 3.16 | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 or later
Ubuntu | 4.4 | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 or later
Ubuntu | 18.04 (distribution release) | Junos OS Release 20.4R1 or later
Ubuntu | 20.04 (distribution release) | Junos OS Release 20.4R1 or later
RHEL | 3.10 | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 or later
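To compare a host against these recommendations, check the running kernel version and, on Ubuntu, the distribution release:

  # Show the running kernel version
  uname -r

  # On Ubuntu, show the distribution release
  lsb_release -a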

Additional Linux Packages for vSRX on KVM

Table 5 lists the additional packages you need on your Linux host OS to run vSRX on KVM. See your host OS documentation for how to install these packages if they are not present on your server.

Table 5: Additional Linux Packages for KVM

Package | Version | Download Link
libvirt | 0.10.0 | libvirt download
virt-manager (recommended) | 0.10.0 | virt-manager download
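As an illustrative sketch (package names vary by distribution and release; see your host OS documentation), you might install these packages as follows:

  # Ubuntu/Debian hosts
  sudo apt-get install libvirt-bin virt-manager

  # RHEL/CentOS hosts
  sudo yum install libvirt virt-manager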

Hardware Specifications

Table 6 lists the hardware specifications for the host machine that runs the vSRX VM.

Table 6: Hardware Specifications for the Host Machine

Component | Specification

Host processor type | Intel x86_64 multicore CPU

Note: DPDK requires Intel Virtualization VT-x/VT-d support in the CPU. See About Intel Virtualization Technology.

Physical NIC support for vSRX and vSRX 3.0 | Virtio; SR-IOV (Intel X710/XL710, X520/540, and 82599); SR-IOV (Mellanox ConnectX-3/ConnectX-3 Pro and Mellanox ConnectX-4 EN/ConnectX-4 Lx EN)

Note: If you use SR-IOV with Mellanox ConnectX-3 or ConnectX-4 family adapters, install the latest MLNX_OFED Linux driver on the Linux host if it is not already present. See Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED).

Note: You must enable the Intel VT-d extensions to provide hardware support for directly assigning physical devices to guests. See Configure SR-IOV and PCI on KVM.

Physical NIC support for vSRX 3.0 | SR-IOV on Intel X710/XL710/XXV710 and Intel E810
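To satisfy the VT-d note above, the IOMMU typically must also be enabled on the host kernel command line. A minimal sketch for a GRUB-based Intel host (file paths and commands depend on your distribution):

  # /etc/default/grub: append the IOMMU options to the kernel command line
  GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"

  # Regenerate the GRUB configuration, then reboot
  sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # RHEL/CentOS
  sudo update-grub                              # Ubuntu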

Best Practices for Improving vSRX Performance

Review the following practices to improve vSRX performance.

NUMA Nodes

The x86 server architecture consists of multiple sockets and multiple cores within a socket. Each socket has memory that is used to store packets during I/O transfers from the NIC to the host. To efficiently read packets from memory, guest applications and associated peripherals (such as the NIC) should reside within a single socket. A penalty is associated with spanning CPU sockets for memory accesses, which might result in nondeterministic performance. For optimal performance, we recommend that all vCPUs for the vSRX VM reside in the same physical non-uniform memory access (NUMA) node.

CAUTION:

The Packet Forwarding Engine (PFE) on the vSRX becomes unresponsive if the NUMA node topology in the hypervisor spreads the instance's vCPUs across multiple host NUMA nodes. Ensure that all vCPUs reside on the same NUMA node.

We recommend that you bind the vSRX instance to a specific NUMA node by setting NUMA node affinity. NUMA node affinity constrains vSRX VM resource scheduling to the specified NUMA node.
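As an illustrative sketch (the domain name vsrx and the host core IDs are placeholders; map them to your own topology), you can inspect the host and pin the instance with virsh:

  # Inspect the host NUMA topology
  virsh nodeinfo
  numactl --hardware

  # Constrain guest memory allocation to NUMA node 0
  virsh numatune vsrx --nodeset 0 --config

  # Pin each vCPU to a host core on the same NUMA node (example core IDs)
  virsh vcpupin vsrx 0 2 --config
  virsh vcpupin vsrx 1 3 --config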

Mapping Virtual Interfaces to a vSRX VM

To determine which virtual interfaces on your Linux host OS map to a vSRX VM:

  1. Use the virsh list command on your Linux host OS to list the running VMs.

  2. Use the virsh domiflist vsrx-name command to list the virtual interfaces on that vSRX VM, as shown in the example after these steps.

    Note:

    The first virtual interface maps to the fxp0 interface in Junos OS.
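A minimal sketch, assuming a domain named vsrx (substitute your own VM name):

  # List the running VMs to find the vSRX domain name
  virsh list

  # List the virtual interfaces attached to that domain
  virsh domiflist vsrx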

Interface Mapping for vSRX on KVM

Each network adapter defined for a vSRX is mapped to a specific interface, depending on whether the vSRX instance is a standalone VM or one of a cluster pair for high availability. The interface names and mappings in vSRX are shown in Table 7 and Table 8.

Note the following:

  • In standalone mode:

    • fxp0 is the out-of-band management interface.

    • ge-0/0/0 is the first traffic (revenue) interface.

  • In cluster mode:

    • fxp0 is the out-of-band management interface.

    • em0 is the cluster control link for both nodes.

    • Any of the traffic interfaces can be specified as the fabric links, such as ge-0/0/0 for fab0 on node 0 and ge-7/0/0 for fab1 on node 1.

Table 7 shows the interface names and mappings for a standalone vSRX VM.

Table 7: Interface Names for a Standalone vSRX VM

Network Adapter | Interface Name in Junos OS for vSRX
1 | fxp0
2 | ge-0/0/0
3 | ge-0/0/1
4 | ge-0/0/2
5 | ge-0/0/3
6 | ge-0/0/4
7 | ge-0/0/5
8 | ge-0/0/6

Table 8 shows the interface names and mappings for a pair of vSRX VMs in a cluster (node 0 and node 1).

Table 8: Interface Names for a vSRX Cluster Pair

Network Adapter | Interface Name in Junos OS for vSRX
1 | fxp0 (nodes 0 and 1)
2 | em0 (nodes 0 and 1)
3 | ge-0/0/0 (node 0); ge-7/0/0 (node 1)
4 | ge-0/0/1 (node 0); ge-7/0/1 (node 1)
5 | ge-0/0/2 (node 0); ge-7/0/2 (node 1)
6 | ge-0/0/3 (node 0); ge-7/0/3 (node 1)
7 | ge-0/0/4 (node 0); ge-7/0/4 (node 1)
8 | ge-0/0/5 (node 0); ge-7/0/5 (node 1)

vSRX Default Settings on KVM

vSRX requires the following basic configuration settings:

  • Interfaces must be assigned IP addresses.

  • Interfaces must be bound to zones.

  • Policies must be configured between zones to permit or deny traffic.
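A minimal sketch of these three settings in the Junos OS CLI (the interface, address, zone, and policy names are illustrative):

Assign an IP address to the first revenue interface:

  set interfaces ge-0/0/0 unit 0 family inet address 192.0.2.1/24

Bind the interface to a security zone:

  set security zones security-zone trust interfaces ge-0/0/0.0

Permit traffic from the trust zone to the untrust zone:

  set security policies from-zone trust to-zone untrust policy allow-outbound match source-address any
  set security policies from-zone trust to-zone untrust policy allow-outbound match destination-address any
  set security policies from-zone trust to-zone untrust policy allow-outbound match application any
  set security policies from-zone trust to-zone untrust policy allow-outbound then permit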

Table 9 lists the factory-default settings for security policies on the vSRX.

Table 9: Factory Default Settings for Security Policies

Source Zone | Destination Zone | Policy Action
trust | untrust | permit
trust | trust | permit
untrust | trust | deny