Known Issues

This section lists the known issues in Junos OS Release 17.3R1 for vSRX.

For the most complete and latest information about known Junos OS defects, use the Juniper Networks online Junos Problem Report Search application.

Chassis Clustering

  • In HA deployments, when the Routing Engine is busy and an RG0 manual failover is initiated, a control link failure occurs. A failed control link causes both control link detection methods (TCP keepalive and control link heartbeat) to fail; it also results in an RG1+ failover. This situation might eventually lead to an RG1+ split-brain condition. PR1085987

  • In a cluster environment, when the primary node is shut down on VMware ESXi by the vSphere client, the remaining node transitions from the Secondary state to the Ineligible state before changing to the Primary state. This extra state transition can lengthen the failover delay. PR1216447

  • The vSRX HA control link might go down under high-traffic conditions, which disables the secondary node. PR1229172

  • With vSRX instances running in a chassis cluster, when you reboot the primary node for redundancy group 1+, traffic forwarding might stop for approximately a minute. PR1258502

    Workaround: Manually fail over redundancy group 1+ before rebooting a cluster node.
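
    For example, with node 0 currently primary for redundancy group 1 (group and node numbers depend on your cluster), the failover can be forced and verified as follows:

      user@vsrx> request chassis cluster failover redundancy-group 1 node 1
      user@vsrx> show chassis cluster status redundancy-group 1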

  • A high-availability cold-sync failure might occur when PCI passthrough is used for the fabric (FAB) link. When this issue occurs, the vSRX might become unresponsive. PR1263056

    Workaround: Perform a manual failover for redundancy group 1+ before rebooting a cluster node. If this does not resolve the issue, use a virtio vNIC for the FAB link.
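
    As a sketch, if the hypervisor presents a virtio vNIC that maps to ge-0/0/2 on node 0 and ge-7/0/2 on node 1 (assumed interface names; substitute your own), the fabric links could be defined as:

      user@vsrx# set interfaces fab0 fabric-options member-interfaces ge-0/0/2
      user@vsrx# set interfaces fab1 fabric-options member-interfaces ge-7/0/2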

Class of Service (CoS)

  • On vSRX instances with classifiers, schedulers, and shapers configured, the queue counters on the interfaces where these schedulers are applied do not match the expected packet counts. PR1083463

DHCP

  • In vSRX deployments, when you exclude an assigned address from the DHCP pool on a DHCP server, the DHCP client still gets the excluded address when you use the request dhcp client renew all command. This issue occurs because the CLIENT_EVENT_RECONFIGURE event, sent to the client when the request dhcp client renew command is issued, is handled by the client in the bound state. This issue applies only to DHCPv4 clients.

    Workaround: Clear the binding on the DHCP client by using the clear dhcp client binding all command, and then run the request dhcp client renew all command to get a new IP address (see the sample sequence below).

    PR1094252, PR1094257
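
    A minimal recovery sequence on the DHCP client looks like this:

      user@vsrx> clear dhcp client binding all
      user@vsrx> request dhcp client renew all
      user@vsrx> show dhcp client binding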

Flow and Processing

  • When vSRX FTP self-traffic crosses a virtual router, the FTP session might fail. PR1079190

  • In vSRX deployments, traffic is dropped when a loopback address (lo0.0) and a generic routing encapsulation (GRE) physical interface are configured in different zones. PR1081171

    Workaround: Configure lo0.0 and GRE in the same zone, or use the IP address of the physical interface as the source IP address of the GRE interface.
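
    For example, assuming a zone named trust and a physical interface address of 10.0.0.1 (both hypothetical values), either of the following resolves the issue:

      user@vsrx# set security zones security-zone trust interfaces lo0.0
      user@vsrx# set security zones security-zone trust interfaces gr-0/0/0.0

    or

      user@vsrx# set interfaces gr-0/0/0 unit 0 tunnel source 10.0.0.1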

  • Because all DPDK vhost-user vNICs on OVS are bound by default to the CPUs on NUMA node 0, only the OVS poll mode driver (PMD) threads running on node 0 can poll packets on the vhost-user vNICs. If the performance test must run on NUMA node 1, you can add CPUs on NUMA node 0 to the PMD CPU mask to poll packets from the DPDK vhost-user vNICs, but doing so seriously impacts performance because traffic crosses NUMA nodes. PR1241975

    Workaround: The solution to this issue consists of two steps:

    1. Compile DPDK with CONFIG_RTE_LIBRTE_VHOST_NUMA enabled (in config/common_base: CONFIG_RTE_LIBRTE_VHOST_NUMA=y).
    2. Set the QEMU process to run on NUMA node 1 by adding emulatorpin elements to the XML file, as sketched below.
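
    As a sketch of step 2, assuming host CPUs 8-15 reside on NUMA node 1 (adjust the cpuset to your topology), the libvirt XML can pin the QEMU emulator threads as follows:

      <cputune>
        <emulatorpin cpuset='8-15'/>
      </cputune>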

General Routing

  • On vSRX platforms, when an interface is configured as a DHCP client using the dhcpd process, the DHCP discover message cannot be sent out and the interface does not fetch an IP address. This occurs when the hostname is not configured. PR1073443
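
    Configuring a hostname avoids this condition; for example (vsrx1 is an arbitrary name):

      user@vsrx# set system host-name vsrx1
      user@vsrx# commit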

Interfaces and Routing

  • RSVP neighbors are not established on a VMware ESXi host if NSX components are installed on that host. PR1092514

  • On a VMware ESXi host, VLAN-tagged packets do not cross ESXi hosts through a Virtual Extensible LAN (VXLAN) port group when NSX components are installed. PR1092517

  • When running VMware ESXi 5.5.0U3, the show chassis fpc detail command output shows the current status of fpc0 as cluster mode. Normally, the status is displayed as Online. PR1141998

    Workaround: Use VMware ESXi 5.5.0U2 or upgrade to VMware ESXi 6.0.

  • When you operate the vSRX in transparent mode with VMware ESXi 5.1 as the host, some packet corruption might occur at the VMXNET3 driver level if TCP segmentation offload (TSO) is enabled on the host. PR1200051

    Note

    This issue does not occur with VMware ESXi 5.5 and later.

    Workaround: Disable TSO in the data path on the VMware ESXi 5.1 host.
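
    As a sketch, on many ESXi releases hardware TSO can be disabled and verified with the following advanced settings commands (confirm the option name against the VMware documentation for your 5.1 build):

      # esxcli system settings advanced set -o /Net/UseHwTSO -i 0
      # esxcli system settings advanced list -o /Net/UseHwTSO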

  • The monitor traffic CLI command cannot be used to capture plain ping traffic sent to vSRX revenue ports. All plain ping packets transmitted to revenue ports are handled on the srxpfe side, so the Routing Engine cannot see inbound revenue-port traffic with this command. However, traffic sent out from revenue ports can be seen by the Routing Engine. Revenue ports refer to all ports except fxp0 and em0. PR1234321

  • On vSRX, 10-Gigabit Ethernet interfaces are displayed as 1-Gigabit Ethernet interfaces. PR1236912

    Note

    This is a display issue and will be addressed in a future version of Junos OS.

  • When performing a rapid disable interface/enable interface sequence on a vSRX (for example, when using a script), this action might trigger an Intel i40e-based NIC limitation where the NIC becomes unresponsive and is unable to receive packets. PR1253659

    Workaround: If possible, avoid using a script to perform a rapid disable interface/enable interface sequence on the vSRX. If you encounter this issue, log in to the host and reload the Intel i40e driver to recover the NIC.
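
    A typical driver reload on a Linux host looks like this (requires root access and briefly takes the NIC ports offline):

      hostOS# modprobe -r i40e
      hostOS# modprobe i40e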

  • In some cases, when you issue the show interfaces gr-0/0/0 statistics detail command, the output under Physical interface does not properly reflect the input and output packets or bytes in the Traffic statistics. PR1292261

Microsoft Azure

  • Nested vmx (hardware virtualization support) is not supported for a vSRX that is deployed on Microsoft Azure. This has no impact on vSRX functionality, but it can slightly increase the bootup time and configuration commit time. PR1231270

Microsoft Hyper-V

  • When you deploy a vSRX virtual security appliance on Windows Hyper-V Server 2012 (vSRX support for the Hyper-V hypervisor), if the bidirectional traffic on each port exceeds the capability of the vSRX, one vSRX port might hang and become unable to receive packets. PR1250285

    Workaround: Upgrade Windows Hyper-V Server 2012 to Windows Hyper-V Server 2012 R2.

Platform and Infrastructure

  • In a KVM-based hypervisor, an attempt to save the vSRX and restore it through the Virtual Machine Manager GUI causes the virtual Routing Engine (vRE) to crash. The crash leaves the vRE in db mode. PR1087096

    Workaround: Use either the virsh destroy and virsh start commands or the nova stop, nova start, and nova reboot commands, but not the Virtual Machine Manager GUI.
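
    For example, assuming the VM or instance is named vsrx (substitute your own name):

      hostOS# virsh destroy vsrx
      hostOS# virsh start vsrx

    or, in an OpenStack environment:

      hostOS$ nova stop vsrx
      hostOS$ nova start vsrx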

  • In KVM deployments, the virsh reset command does not work. PR1087112

  • The AWS snapshot feature cannot be used to clone vSRX instances. You can use the AWS snapshot feature to preserve the state of the VM so that you can return to the state that existed when the snapshot was created. PR1160582

  • vSRX uses DPDK to increase packet performance by caching packets to send in burst mode. Latency-sensitive applications must account for this burst operation. PR1087887

  • APIC virtualization (APICv) does not work well with nested VMs such as those used with KVM. On Intel CPUs that support APICv (typically v2 models, for example E5 v2 and E7 v2), you must disable APICv on the host server before deploying vSRX. PR1111582

    Workaround: Disable APICv before deploying vSRX.

    Use the following commands to disable APICv on your host and verify that it is disabled:
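
    The exact commands vary by host operating system; a typical sequence on a Linux KVM host (with no VMs running) is to reload the kvm_intel module with APICv disabled and then check the module parameter:

      hostOS# modprobe -r kvm_intel
      hostOS# modprobe kvm_intel enable_apicv=n
      hostOS# cat /sys/module/kvm_intel/parameters/enable_apicv
      N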

  • In a KVM-based hypervisor deployment, you might encounter one or more of the following issues: PR1263056

    • The vSRX might become unresponsive when Page Modification Logging (PML) is enabled in the host operating system (CentOS or Ubuntu) on hosts using the Intel Xeon processor E5 or E7 v4 family. This PML issue prevents the vSRX from successfully booting.

    • Traffic to the vSRX might drop or stop due to Intel XL710 driver-specific limitations. This behavior can be due to issues with the vSRX VM configuration (such as a MAC-VLAN or MAC-NUM limitation).

    Workaround: Perform the appropriate workaround to resolve the issues listed above:

    • If the vSRX becomes unresponsive due to a PML issue, we recommend that you disable PML at the host kernel level. Depending on your host operating system, open the modprobe .conf file in your default editor and add the following line to the file: options kvm-intel nested=y enable_apicv=n pml=n (see the sketch after this list).

    • If the vSRX experiences loss of traffic due to Intel XL710 driver limitations, follow the recommended Intel XL710 guidelines to change the VM configuration to avoid these limitations. See Intel Ethernet Controller X710 and XL710 Family Documentation for the recommended guidelines.
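
    As a sketch of the PML workaround, assuming the module options belong in /etc/modprobe.d/kvm.conf (the file name varies by distribution), add the options line and reload the module:

      hostOS# echo "options kvm-intel nested=y enable_apicv=n pml=n" >> /etc/modprobe.d/kvm.conf
      hostOS# modprobe -r kvm_intel
      hostOS# modprobe kvm_intel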

  • When deploying a vSRX instance in a KVM or Contrail environment with the vhost_net NIC driver, the vSRX might process and forward all unicast packets that were flooded to the port, regardless of the destination MAC address. PR1344700

    Workaround: For a vSRX on KVM deployment, insert <driver name='qemu'/> below <model type='virtio'/> in the VM XML definition file (see the example below). For a vSRX on Contrail deployment, no workaround is available; to prevent packets from looping back out of the same interface, do not permit intrazone traffic forwarding in the security policy.
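
    For example, in the libvirt XML definition file the relevant interface stanza looks similar to this (the interface type and source bridge are placeholders for your environment):

      <interface type='bridge'>
        <source bridge='br0'/>
        <model type='virtio'/>
        <driver name='qemu'/>
      </interface>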

Routing Protocols

  • When the Bidirectional Forwarding Detection (BFD) protocol is configured over an IPv6 static route, the route remains in the routing table even after a session failure occurs. PR1109727

UTM

  • In vSRX deployments configured with Sophos Antivirus, some files that are larger than the configured max-content-size might not go into fallback mode, and after several retransmissions they might pass with a clean or an infected result. This issue is specific to a few protocols that do not send the content size before attempting to transmit files. PR1093984

VPN

  • An error message might occur when you run show or clear commands if IPsec VPN is configured with more than 1000 tunnels. PR1093872

    Workaround: Retry the commands.

  • IPv6 firewall filters cannot be applied to virtual channels. PR1182367

  • When IPsec is used with PKI authentication, the vSRX might unnecessarily send the entire certificate chain to the remote peer, potentially causing fragmentation of IKE messages. PR1251837

    Workaround: If possible, configure the remote peer to send the CERTREQ (certificate request) payload as part of the IKE exchange. The vSRX will examine the CERTREQ payload from the remote peer to determine what CAs the peer trusts and to compare them with the CAs trusted locally. This examination helps avoid sending the entire certificate chain to the peer.

  • When you configure a manual route-based IPsec VPN, enabling VPN monitoring can cause the st0.* interface to go down, which results in VPN traffic being dropped. PR1259422

    Workaround: Enter the restart ipsec-key-management CLI command to restart the kmd process and restore the VPN service.

    Note

    When the kmd process is restarted, all existing phase 1 and phase 2 SAs on the device are cleared.

  • With the tcp-encap-profile command configured in an environment with a virtual routing instance, there might be packet drops on a port 500-based IPsec tunnel. No issues are observed with Pathfinder (port 443) based IPsec tunnels. PR1263518

  • In certain cases, when performing multiple high-availability failovers with a Pathfinder session, the vSRX might enter an unresponsive state and send a reset connection to the NCP client, which terminates the connection. PR1263678