
Requirements for vSRX on KVM

 

This section presents an overview of the requirements for deploying a vSRX instance on KVM.

Software Specifications

Table 1 lists the system software requirements for deploying vSRX in a KVM environment. The table also shows the Junos OS release in which support for each specification was introduced. You must download the corresponding Junos OS release to take advantage of certain features.

Caution

A Page Modification Logging (PML) issue in the KVM host kernel might prevent vSRX from booting successfully. If you encounter this behavior, we recommend that you disable PML at the host kernel level. See Preparing Your Server for vSRX Installation for details about disabling PML as part of enabling nested virtualization.
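As a minimal sketch, disabling PML on an Intel host involves setting the pml option on the kvm-intel kernel module; the exact procedure is documented in Preparing Your Server for vSRX Installation:

    # Check whether PML is currently enabled on the kvm_intel module
    cat /sys/module/kvm_intel/parameters/pml

    # Persistently disable PML (and enable nested virtualization, if needed),
    # then reload the module with no VMs running
    echo "options kvm-intel pml=0 nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf
    sudo modprobe -r kvm_intel
    sudo modprobe kvm_intel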

Table 1: Specifications for vSRX

| Component | Specification | Release Introduced |
|---|---|---|
| Linux KVM hypervisor support | Ubuntu 14.04.5, 16.04, 16.10, and 18.04 | Junos OS Release 18.4R1 |
| | Red Hat Enterprise Linux (RHEL) 7.3 | |
| | CentOS 7.2 | |
| Memory | 4 GB | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 |
| | 8 GB | Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1 |
| | 16 GB | Junos OS Release 15.1X49-D90 and Junos OS Release 17.3R1 |
| | 32 GB | Junos OS Release 15.1X49-D100 and Junos OS Release 17.4R1 |
| Disk space | 16 GB IDE drive | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 |
| vCPUs | 2 vCPUs | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 |
| | 5 vCPUs | Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1 |
| | 9 vCPUs | Junos OS Release 15.1X49-D90 and Junos OS Release 17.3R1 |
| | 17 vCPUs | Junos OS Release 15.1X49-D100 and Junos OS Release 17.4R1 |
| vNICs | 2 to 8 vNICs, using Virtio or SR-IOV (Intel 82599, X520/X540). For SR-IOV limitations, see the Known Behavior section of the vSRX Release Notes. | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 |
| | SR-IOV (Intel X710/XL710) | Junos OS Release 15.1X49-D90 |
| | SR-IOV (Mellanox ConnectX-3/ConnectX-3 Pro and Mellanox ConnectX-4 EN/ConnectX-4 Lx EN) | Junos OS Release 18.1R1 |
| | DPDK version 18.11. With this feature, the Mellanox ConnectX NICs on vSRX also support OSPF multicast and VLANs. | Junos OS Release 19.4R1 |

Note

A vSRX on KVM deployment requires you to enable hardware-based virtualization on a host OS that contains an Intel Virtualization Technology (VT) capable processor. You can verify CPU compatibility here: http://www.linux-kvm.org/page/Processor_support
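To confirm that the host CPU supports hardware virtualization, you can check for the vmx flag; the kvm-ok helper (from the Ubuntu cpu-checker package) performs a similar check:

    # A nonzero count indicates Intel VT-x support
    grep -c vmx /proc/cpuinfo

    # On Ubuntu hosts with cpu-checker installed
    kvm-ok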

Table 2 lists the specifications of the vSRX 3.0 VM.

Table 2: Specifications for vSRX 3.0

| vCPU | DPDK | Hugepage | vRAM | vDisk | vNIC | Supported Junos OS Release |
|---|---|---|---|---|---|---|
| 2 | 17.05 | 2 GB | 4 GB | 20 GB | 2-8 | Junos OS Release 18.2R1 |
| 5 | 17.05 | 6 GB | 8 GB | 20 GB | 2-8 | Junos OS Release 18.4R1 |
| 9 | 17.05.02 | 12 GB | 16 GB | 20 GB | 2-8 | Junos OS Release 19.1R1 |
| 17 | 17.05.02 | 24 GB | 32 GB | 20 GB | 2-8 | Junos OS Release 19.1R1 |

Note: vSRX supports Virtio on the KVM hypervisor.
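The Hugepage column indicates how much host memory is backed by hugepages for each flavor. As an illustrative sketch (the page count is an assumption; size the pool to the row you deploy), reserving 6 GB of 2-MB hugepages for the 5-vCPU flavor might look like this:

    # Reserve 3072 x 2-MB hugepages (6 GB) for the vSRX guest
    echo 3072 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

    # Confirm the hugepage pool
    grep Huge /proc/meminfo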

Starting in Junos OS Release 19.1R1, the vSRX instance supports guest OS using 9 or 17 vCPUs with single-root I/O virtualization over Intel X710/XL710 on Linux KVM hypervisor for improved scalability and performance.

KVM Kernel Recommendations for vSRX

Table 3 lists the recommended Linux kernel version for your Linux host OS when deploying vSRX on KVM. The table outlines the Junos OS release in which support for a particular Linux kernel version was introduced.

Table 3: Kernel Recommendations for KVM

| Linux Distribution | Linux Kernel Version | Supported Junos OS Release |
|---|---|---|
| CentOS | 3.10.0.229 | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 or later |
| Ubuntu | 3.16 or 4.4 | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 or later |
| RHEL | 3.10 | Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1 or later |

Note: Upgrade the Linux kernel if necessary to match the recommended version.
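You can check the kernel version running on the host before deploying vSRX:

    # Display the running kernel version
    uname -r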

Additional Linux Packages for vSRX on KVM

Table 4 lists the additional packages you need on your Linux host OS to run vSRX on KVM. See your host OS documentation for how to install these packages if they are not present on your server.

Table 4: Additional Linux Packages for KVM

| Package | Version | Download Link |
|---|---|---|
| libvirt | 0.10.0 | libvirt download |
| virt-manager (recommended) | 0.10.0 | virt-manager download |
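If the packages are not already present, they are typically available from the distribution repositories. A sketch of the install commands follows (package names vary by distribution and release; on Ubuntu 18.04, for example, libvirt-bin is replaced by libvirt-daemon-system and libvirt-clients):

    # Ubuntu
    sudo apt-get install qemu-kvm libvirt-bin virt-manager

    # RHEL/CentOS
    sudo yum install qemu-kvm libvirt virt-manager

    # Verify that the libvirt daemon responds
    virsh version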

Hardware Specifications

Table 5 lists the hardware specifications for the host machine that runs the vSRX VM.

Table 5: Hardware Specifications for the Host Machine

| Component | Specification |
|---|---|
| Host processor type | Intel x86_64 multicore CPU. Note: DPDK requires Intel VT-x/VT-d support in the CPU. See About Intel Virtualization Technology. |
| Physical NIC support for vSRX | Virtio; SR-IOV (Intel X710/XL710, X520/X540, 82599); SR-IOV (Mellanox ConnectX-3/ConnectX-3 Pro and Mellanox ConnectX-4 EN/ConnectX-4 Lx EN) |
| Physical NIC support for vSRX 3.0 | SR-IOV on Intel X710/XL710 |

Note: If you use SR-IOV with the Mellanox ConnectX-3 or ConnectX-4 family adapters, install the latest MLNX_OFED Linux driver on the Linux host if necessary. See Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED).

Note: You must enable the Intel VT-d extensions to provide hardware support for directly assigning physical devices to guests. See Configuring SR-IOV on KVM.
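As a sketch of enabling VT-d at the host kernel level (assuming a GRUB-based boot and that VT-d is already enabled in the BIOS/UEFI; see Configuring SR-IOV on KVM for the full procedure):

    # /etc/default/grub: append intel_iommu=on to the kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"

    # Regenerate the GRUB configuration and reboot
    sudo update-grub                                # Ubuntu
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # RHEL/CentOS

    # After the reboot, confirm that the IOMMU is active
    dmesg | grep -i -e DMAR -e IOMMU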

Best Practices for Improving vSRX Performance

Review the following practices to improve vSRX performance.

NUMA Nodes

The x86 server architecture consists of multiple sockets, each with multiple cores. Each socket also has memory that is used to store packets during I/O transfers from the NIC to the host. To read packets from memory efficiently, guest applications and associated peripherals (such as the NIC) should reside within a single socket, because memory accesses that span CPU sockets incur a penalty and can result in nondeterministic performance. For optimal vSRX performance, we recommend that all vCPUs for the vSRX VM reside in the same physical non-uniform memory access (NUMA) node.

Caution

The Packet Forwarding Engine (PFE) on the vSRX becomes unresponsive if the NUMA topology configured in the hypervisor spreads the instance's vCPUs across multiple host NUMA nodes. Ensure that all vCPUs for the vSRX instance reside on the same NUMA node.

We recommend that you bind the vSRX instance with a specific NUMA node by setting NUMA node affinity. NUMA node affinity constrains the vSRX VM resource scheduling to only the specified NUMA node.
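A minimal sketch of this pinning with virsh, assuming the VM is named vsrx, has 5 vCPUs, and host cores 0 through 4 sit on NUMA node 0 (all of these are assumptions; inspect your topology first):

    # Inspect the host CPU/NUMA topology
    virsh nodeinfo
    numactl --hardware

    # Pin each vSRX vCPU to a core on NUMA node 0
    virsh vcpupin vsrx 0 0
    virsh vcpupin vsrx 1 1
    virsh vcpupin vsrx 2 2
    virsh vcpupin vsrx 3 3
    virsh vcpupin vsrx 4 4

    # Constrain the VM's memory allocation to the same node
    virsh numatune vsrx --mode strict --nodeset 0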

Mapping Virtual Interfaces to a vSRX VM

To determine which virtual interfaces on your Linux host OS map to a vSRX VM:

  1. Use the virsh list command on your Linux host OS to list the running VMs.
    hostOS# virsh list
  2. Use the virsh domiflist vsrx-name command to list the virtual interfaces on that vSRX VM.
    hostOS# virsh domiflist vsrx
    Note

    The first virtual interface maps to the fxp0 interface in Junos OS.
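The output of virsh domiflist resembles the following (interface names, sources, and MAC addresses are illustrative and will differ on your host):

    hostOS# virsh domiflist vsrx
    Interface  Type     Source   Model    MAC
    -------------------------------------------------------
    vnet0      network  default  virtio   52:54:00:aa:bb:01
    vnet1      bridge   br0      virtio   52:54:00:aa:bb:02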

Interface Mapping for vSRX on KVM

Each network adapter defined for a vSRX is mapped to a specific interface, depending on whether the vSRX instance is a standalone VM or one of a cluster pair for high availability. The interface names and mappings in vSRX are shown in Table 6 and Table 7.

Note the following:

  • In standalone mode:

    • fxp0 is the out-of-band management interface.

    • ge-0/0/0 is the first traffic (revenue) interface.

  • In cluster mode:

    • fxp0 is the out-of-band management interface.

    • em0 is the cluster control link for both nodes.

    • Any of the traffic interfaces can be specified as the fabric links, such as ge-0/0/0 for fab0 on node 0 and ge-7/0/0 for fab1 on node 1.

Table 6 shows the interface names and mappings for a standalone vSRX VM.

Table 6: Interface Names for a Standalone vSRX VM

| Network Adapter | Interface Name in Junos OS for vSRX |
|---|---|
| 1 | fxp0 |
| 2 | ge-0/0/0 |
| 3 | ge-0/0/1 |
| 4 | ge-0/0/2 |
| 5 | ge-0/0/3 |
| 6 | ge-0/0/4 |
| 7 | ge-0/0/5 |
| 8 | ge-0/0/6 |

Table 7 shows the interface names and mappings for a pair of vSRX VMs in a cluster (node 0 and node 1).

Table 7: Interface Names for a vSRX Cluster Pair

| Network Adapter | Interface Name in Junos OS for vSRX |
|---|---|
| 1 | fxp0 (node 0 and node 1) |
| 2 | em0 (node 0 and node 1) |
| 3 | ge-0/0/0 (node 0), ge-7/0/0 (node 1) |
| 4 | ge-0/0/1 (node 0), ge-7/0/1 (node 1) |
| 5 | ge-0/0/2 (node 0), ge-7/0/2 (node 1) |
| 6 | ge-0/0/3 (node 0), ge-7/0/3 (node 1) |
| 7 | ge-0/0/4 (node 0), ge-7/0/4 (node 1) |
| 8 | ge-0/0/5 (node 0), ge-7/0/5 (node 1) |

vSRX Default Settings on KVM

vSRX requires the following basic configuration settings (a minimal configuration sketch follows the list):

  • Interfaces must be assigned IP addresses.

  • Interfaces must be bound to zones.

  • Policies must be configured between zones to permit or deny traffic.
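A minimal sketch of these three steps in Junos OS set syntax (the interface assignments, addresses, and policy name are illustrative):

    set interfaces ge-0/0/0 unit 0 family inet address 203.0.113.1/24
    set interfaces ge-0/0/1 unit 0 family inet address 192.168.1.1/24
    set security zones security-zone untrust interfaces ge-0/0/0.0
    set security zones security-zone trust interfaces ge-0/0/1.0
    set security policies from-zone trust to-zone untrust policy allow-out match source-address any destination-address any application any
    set security policies from-zone trust to-zone untrust policy allow-out then permit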

Table 8 lists the factory-default settings for security policies on the vSRX.

Table 8: Factory Default Settings for Security Policies

| Source Zone | Destination Zone | Policy Action |
|---|---|---|
| trust | untrust | permit |
| trust | trust | permit |
| untrust | trust | deny |