This section contains the known behaviors, system maximums, and limitations in hardware and software in Junos OS Release 18.1R2 for vSRX.
Chassis Cluster/High Availability
In vSRX deployments, HA is not supported on AWS and Microsoft Azure.
In KVM deployments using Virtio, when vSRX is operating in HA and sessions are established and closed at very high rates, some sessions might not get closed on the backup node. This issue occurs because of a Virtio driver limitation.
Workaround: Reduce the session establishment rate to less than 300 connections per second (cps).
In KVM deployments using Virtio, when vSRX is operating in HA, packet loss is observed during an RG0 failover. This occurs because a driver limitation prevents the HA mechanism from updating the MAC entry at the bridge layer; packets remain in the queue until they expire.
Interfaces and Routing
In vSRX deployments, source MAC filtering is supported on Fast Ethernet and Gigabit Ethernet interfaces in Layer 3 standalone mode and redundant Ethernet interfaces in HA mode. However, support is not available on Aggregated Ethernet (AE), Fabric Ethernet, or Gigabit Ethernet interfaces in Layer 2 standalone mode.
In vSRX deployments, the following configuration options are not supported: services unified-access-control and protocols l2-learning global-mode switching.
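For reference, these correspond to the following configuration hierarchies; this is an illustrative sketch (do not commit these statements on vSRX):

```
# Not supported on vSRX -- do not configure:
set services unified-access-control
set protocols l2-learning global-mode switching
```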
In vSRX deployments, configuring XAuth with AutoVPN secure tunnel (st0) interfaces in point-to-multipoint mode and dynamic IKE gateways is not supported. However, XAuth is supported with shared IKE IDs.
In vSRX deployments using VMware ESX, changing the default speed (1000 Mbps) or the default link mode (full duplex) is not supported on VMXNET3 vNICs.
Platform and Infrastructure
VRRP is not supported on VMware hypervisors because of a VMware support limitation for virtual MAC addresses.
In VMware deployments, a serial console port on the vSRX platform cannot be used through the network to redirect console messages to a telnet session because of an underlying infrastructure limitation. The console port can be configured; however, it is not usable.
In a vSRX deployment in VMware ESXi 5.5 using VMXNET3 vNICs, a performance degradation (8 percent) is observed when more vNICs (approximately eight) are configured, compared with fewer vNICs (approximately three) across a single instance.
DPDK does not provide an outgoing multicast traffic count on its interface. As a result, outgoing multicast packets are counted as incoming packets on the egress interface.
In vSRX deployments, the vSRX VM does not support the use of Live Migration or vMotion as a means to move virtual machines from one host to another.
When vSRX runs as a virtual network function (VNF) on the NFX250 Network Services Platform, creating a new vSRX instance increases the boot time by approximately four minutes compared with the Junos OS 15.1X49-D78.4 version of vSRX. When the same vSRX instance is deleted and redeployed on the device using the same qcow image, the boot time is approximately one minute longer than with the Junos OS 15.1X49-D78.4 version of vSRX.
SR-IOV interfaces have both physical functions (PFs) and multiple virtual functions (VFs). When configuration parameters are modified on the VF, the PF driver has the option to accept or reject the change. As a security precaution, the generic PF driver that is part of standard hypervisors (both VMware and Linux) does not allow certain parameters to be configured. Parameters that cannot be changed include enabling promiscuous mode, enabling multicast, and allowing Jumbo frames. Because of this driver limitation, the following vSRX features are not supported in deployments that use SR-IOV interfaces:
High availability (HA)
Layer 2 support
Multicast with other features such as OSPF and IPv6
These limitations apply in deployments where the PF drivers cannot be updated or controlled. The limitations do not apply when vSRX is deployed on supported Juniper Networks devices.
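On Linux hosts, the PF-level controls behind these restrictions are visible through the iproute2 per-VF options. The commands below are an illustrative sketch only (the interface name eth0 and VF index 0 are placeholders); they show the kinds of per-VF parameters that are decided at the PF, which is why a guest such as vSRX cannot change them from inside the VF:

```
# Show the PF and its VFs, including each VF's MAC and spoof-check setting
ip link show eth0

# Per-VF parameters set at the PF level; the generic PF driver in standard
# hypervisors refuses equivalent changes requested from the guest side:
ip link set dev eth0 vf 0 spoofchk off   # disable MAC/VLAN anti-spoofing for VF 0
ip link set dev eth0 vf 0 trust on       # allow VF 0 to enable promiscuous/multicast mode

# Jumbo frames are likewise governed by the PF MTU, not the VF:
ip link set dev eth0 mtu 9000
```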
SR-IOV does not support all VMware features (see your VMware documentation).
SR-IOV is not supported in Microsoft Azure or Microsoft Hyper-V deployments.
Cloning vSRX VMs with SR-IOV interfaces is not supported. Instead of cloning a VM, instantiate a new vSRX VM from the .ova image (VMware hypervisors) or from the .qcow2 image (KVM hypervisors).
In deployments using SR-IOV interfaces, Address Resolution Protocol (ARP) does not work when Jumbo frames are used on a physical NIC.
In deployments using SR-IOV interfaces, packets are dropped when a MAC address is assigned to a vSRX Junos OS interface. This issue occurs because SR-IOV does not allow MAC address changes in either the PF or the VF.
In KVM deployments using SR-IOV interfaces with a DPDK driver, the PF interface might go down and then come back up. In this case, the vSRX ge- interface might stay down even after the PF is back up, because it does not receive an updated link-state message from the VF interface.
Workaround: Reboot the vSRX instance.
In KVM deployments operating in SR-IOV mode with an Intel X710/XL710 NIC, VLANs are not supported on the vSRX interfaces because of a limitation of the Intel X710 and XL710 cards.
vSRX Limitations in Junos Space Security Director Integration with vSRX
The following vSRX features are not supported in Security Director:
Application QoS (AppQoS)
Layer 2 transparent mode
The following Security Director limitations apply to the Application Firewall (AppFW), IDP, and UTM features:
UTM database updates are not supported.
Application ID (AppID) custom signatures are not supported.
The following limitation applies to IPsec and routing features in Junos Space Security Director:
Certificates for AutoVPN must be generated from the CLI.
All other IPsec settings can be configured using Junos Space Security Director.
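As an illustrative sketch of CLI-based certificate enrollment (the certificate ID autovpn-cert, CA profile autovpn-ca, and subject values are placeholders, and available options vary by release), AutoVPN certificate generation from the Junos CLI typically resembles the following:

```
request security pki generate-key-pair certificate-id autovpn-cert size 2048
request security pki ca-certificate enroll ca-profile autovpn-ca
request security pki local-certificate enroll ca-profile autovpn-ca certificate-id autovpn-cert subject CN=hub.example.net challenge-password <password>
```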