Known Issues

This section lists the known issues in Junos OS Release 18.1R2 for vSRX.

For the most complete and latest information about known Junos OS defects, use the Juniper Networks online Junos Problem Report Search application.

Chassis Clustering

  • In HA deployments, when the Routing Engine is busy and an RG0 manual failover is initiated, a control link failure occurs. A failed control link causes both control link detection methods (TCP keepalive and control link heartbeat) to fail, and it also results in an RG1+ failover. This situation might eventually lead to an RG1+ split-brain condition. PR1085987

  • In a cluster environment, when the primary node is shut down on VMware ESXi by the vSphere client, the remaining node transitions from the Secondary state to the Ineligible state before changing to the Primary state. This additional state transition can lengthen the failover delay. PR1216447

  • The vSRX HA control link might go down under high-traffic conditions, which disables the secondary node. PR1229172

  • With vSRX instances running in a chassis cluster, traffic forwarding might stop for approximately a minute when the primary node for redundancy-group 1+ is rebooted. PR1258502

    Workaround: Manually fail over redundancy-group 1+ before rebooting a cluster node.

  • A high-availability cold-sync failure might occur when PCI passthrough is used for the fabric (FAB) link. When this issue occurs, the vSRX might become unresponsive. PR1263056

    Workaround: Perform a manual failover of redundancy-group 1+ before rebooting a cluster node. If this does not resolve the issue, use Virtio for the FAB link.

Class of Service (CoS)

  • On vSRX instances, when classifiers, schedulers, and shapers are configured, the queue counters on the interfaces where these schedulers are applied do not match the expected number of packets. PR1083463

Cloud-init in AWS

  • If you use cloud-init in AWS to automate the initialization of vSRX instances, you might find that it takes several minutes to launch the vSRX instance after you click View Instances to display the Instances list in the EC2 Dashboard. During the initial boot, the vSRX instance might show an error of “1/2 checks passed” until it initializes, and then finally display “2/2 checks passed.” PR1296704

  • If you use cloud-init in AWS to automate the initialization of vSRX instances, you might encounter an SSH connection failure if the configuration or keywords in the user-data file are incorrect. The configuration must be validated and must include details for the fxp0 interface, login, and authentication, as well as a default route for traffic on fxp0. This information must match the details of the AWS VPC and subnet into which the instance is launched. If any of this information is missing or incorrect, the instance is inaccessible and you must launch a new one. In addition, ensure that either DHCP or static IP addressing (along with its default gateway) and root authentication are specified in the user-data file on AWS. PR1297086

    Note

    The user-data file cannot exceed 16 KB. If your user-data file exceeds this limit, you must compress the file using gzip and use the compressed file. For example, the gzip junos.conf command results in the junos.conf.gz file.
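
    For example, on a Linux workstation (a minimal sketch; the file name junos.conf is taken from the example above):

        host$ ls -l junos.conf       # confirm the uncompressed file exceeds 16 KB
        host$ gzip junos.conf        # produces junos.conf.gz in the same directory
        host$ ls -l junos.conf.gz    # verify the compressed size is within the 16-KB limit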

DHCP

  • In vSRX deployments, when you exclude an assigned address from the DHCP pool on a DHCP server, the DHCP client receives the excluded address when you use the request dhcp client renew all command. This issue occurs because the CLIENT_EVENT_RECONFIGURE event, sent to the client when the request dhcp client renew command is issued, is handled by the client in the bound state. This issue applies only to DHCPv4 clients.

    Workaround: Clear the binding on the DHCP client by using the clear dhcp client binding all command, and then run the request dhcp client renew all command to get a new IP address.
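
    For example, from operational mode on the vSRX (the prompt shown is illustrative):

        user@vsrx> clear dhcp client binding all
        user@vsrx> request dhcp client renew all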

    PR1094252

    PR1094257

Flow and Processing

  • When vSRX FTP self-traffic crosses a virtual router, the FTP session might fail. PR1079190

  • In vSRX deployments, traffic is dropped when a loopback address (lo0.0) and a generic routing encapsulation (GRE) physical interface are configured in different zones. PR1081171

    Workaround: Configure lo0.0 and GRE in the same zone, or use the IP address of the physical interface as the source IP address of the GRE interface.

  • Because all DPDK vhost-user vNICs on OVS are bound by default to the CPUs on NUMA node 0, only the OVS poll mode driver (PMD) threads running on node 0 can poll packets on the vhost-user vNICs. To run a performance test on node 1, you can add a CPU mask so that CPUs on NUMA node 0 still poll the packets from the DPDK vhost-user vNICs, but this can seriously degrade performance because traffic crosses NUMA nodes. PR1241975

    Workaround: A solution to this issue consists of two steps:

    1. Compile DPDK with CONFIG_RTE_LIBRTE_VHOST_NUMA enabled; in config/common_base, set CONFIG_RTE_LIBRTE_VHOST_NUMA=y.
    2. Set the QEMU process to run on NUMA node 1 by adding emulatorpin elements to the VM XML definition file, as in the sketch after this list.
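
    The following is a minimal sketch of an emulatorpin element in the libvirt domain XML; the cpuset value is an assumption for the CPUs on NUMA node 1 and must match your host topology:

        <cputune>
          <emulatorpin cpuset='16-23'/>    <!-- pin the QEMU emulator threads to NUMA node 1 CPUs (illustrative range) -->
        </cputune>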

Interfaces and Routing

  • RSVP neighbors are not established on a VMware ESXi host if NSX components are installed on that host. PR1092514

  • On a VMware ESXi host, packets with VLAN do not cross over ESXi hosts when NSX components are installed through a Virtual Extensible LAN (VXLAN) port group. PR1092517

  • When running VMware ESXi 5.5.0U3, the show chassis fpc detail output shows the current status of fpc0 as being in cluster mode; normally, the status is displayed as online. PR1141998

    Workaround: Use VMware ESXi 5.5.0U2 or upgrade to VMware ESXi 6.0.

  • When you operate the vSRX in transparent mode with VMware ESXi 5.1 as the host, some packet corruption might occur at the VMXNET3 driver level if TCP segmentation offload (TSO) is enabled on the host. PR1200051

    Note

    This issue does not occur with VMware ESXi 5.5 and later.

    Workaround: Disable TSO in the data path on the VMware ESXi 5.1 host.

  • The monitor traffic CLI command cannot be used to capture plain ping traffic sent to the vSRX on revenue ports. All plain ping packets transmitted to revenue ports are handled by srxpfe, so the Routing Engine (RE) does not see revenue-port traffic through this command. However, traffic leaving the revenue ports can be seen by the RE. Revenue ports are all ports except fxp0 and em0. PR1234321

  • On vSRX, 10-Gigabit Ethernet interfaces are being displayed as 1-Gigabit Ethernet interfaces. PR1236912

    Note

    This is a display issue and will be addressed in a future version of Junos OS.

  • Performing a rapid disable interface/enable interface sequence on a vSRX (for example, from a script) might trigger an Intel i40e NIC limitation in which the NIC becomes unresponsive and is unable to receive packets. PR1253659

    Workaround: If possible, avoid using a script to perform a rapid disable interface/enable interface sequence on the vSRX. If you encounter this issue, log in to the host and reload the Intel i40e driver to recover the NIC.
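
    For example, on a Linux host (a sketch; reloading the driver briefly interrupts all interfaces served by it):

        host# modprobe -r i40e    # unload the i40e driver (all i40e NICs go down momentarily)
        host# modprobe i40e       # reload the driver to recover the NIC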

  • In some cases, when you specify the show interfaces gr-0/0/0 statistics detail command, the show command output under Physical interface does not properly reflect the input and output packets or bytes in the Traffic statistics. PR1292261

  • In some cases, the DHCP RELEASE packet might not be sent when you specify the clear dhcp client bindings command. PR1338001

J-Web

  • Adding a block of 2,000 or more global addresses at a time to an SSL proxy profile exempted-address list might cause the J-Web interface to become unresponsive. PR1278087

    Workaround: Add no more than 500 global addresses at a time.

  • You might encounter issues when you attempt to view custom log files created for event logging in the J-Web interface. Only event logs captured in a policy-session log file can be viewed in the J-Web interface (Monitor > Events and Alarms > View Events); event logs captured in other files are not displayed. PR1280857

    Workaround: If this issue occurs, download the custom log file from Administration > Files > Log Files so that you can view it properly.

  • The Applications, Threat Map, and Firewall: Top Denies Dashboard widgets might display No Data Available when the device receives a very large amount of data. PR1282666

    Workaround: If this issue occurs, individually refresh each of the Dashboard widgets.

  • You cannot view the Java applet in the Google Chrome web browser when attempting to use the J-Web CLI terminal. The J-Web CLI terminal does not work with Google Chrome versions later than 42.0. PR1283216

    Workaround: To use J-Web CLI terminal, use one of the following recommended web browsers and versions:

    • Google Chrome, version 42.0 or earlier.

    • Microsoft Internet Explorer, version 11 or 10.

    • Firefox, version 46 or later.

  • In some cases, when using the Google Chrome web browser, the Time Range slider does not function properly for events. PR1283536

    Workaround: If you encounter this behavior, use the Microsoft Internet Explorer version 11 web browser.

  • Uploading a certificate using the Browse button stores the certificate in the SRX Series device or vSRX instance at the /jail/var/tmp/uploads/ location. The certificate will be deleted when you execute the request system storage cleanup command. PR1312529

    Workaround: If this issue occurs, perform one of the following actions:

    • Refrain from deleting the certificate while executing the request system storage cleanup command. If the certificate is deleted, replace it immediately or the connection to the JIMS server will go down.

    • Save the certificate manually on the SRX Series device or vSRX instance in a location other than the /tmp/ folder. Use the J-Web option Specify path of the file on device and specify the correct path.

  • The values of address and address-range are not displayed in the Inline address-set creation pop-up window of the JIMS server. PR1312900

    Workaround: To view the global address, address-set, and address-range values, navigate to Configure > Security > Objects and open Global addresses.

Microsoft Azure

  • Nested VMX (hardware virtualization support) is not supported for a vSRX deployed on Microsoft Azure. This has no impact on vSRX functionality, but it can slightly increase the boot time and the configuration commit time. PR1231270

Microsoft Hyper-V

  • When you deploy a vSRX virtual security appliance on Windows Hyper-V Server 2012 (vSRX support for the Hyper-V hypervisor), if the bidirectional traffic of each port exceeds the capability of the vSRX, you might find that one vSRX port hangs and becomes unable to receive packets. PR1250285

    Workaround: Upgrade Windows Hyper-V Server 2012 to Windows Hyper-V Server 2012 R2.

Platform and Infrastructure

  • In a KVM-based hypervisor, an attempt to save the vSRX and restore it through the Virtual Machine Manager GUI causes the virtual Routing Engine (vRE) to crash. The crash causes the vRE to go into DB mode. PR1087096

    Workaround: Use either the virsh destroy and virsh start commands or the nova stop, nova start, and nova reboot commands to manage the VM, but not the Virtual Machine Manager GUI.
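
    For example, using virsh (the domain name vsrx-vm is illustrative):

        host$ virsh destroy vsrx-vm    # hard-stop the VM
        host$ virsh start vsrx-vm      # start the VM again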

  • In KVM deployments, the virsh reset command does not work. PR1087112

  • The AWS snapshot feature cannot be used to clone vSRX instances. You can use the AWS snapshot feature to preserve the state of the VM so that you can return to the state it was in when the snapshot was created. PR1160582

  • vSRX uses DPDK to increase packet performance by caching packets to send in burst mode. Latency-sensitive applications must account for this burst operation. PR1087887

  • APIC virtualization (APICv) does not work well with nested VMs such as those used with KVM. On Intel CPUs that support APICv (typically v2 models, for example E5 v2 and E7 v2), you must disable APICv on the host server before deploying vSRX. PR1111582

    Workaround: Disable APICv before deploying vSRX.

    Use the following commands to disable APICv on your host and verify that it is disabled:
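
    A minimal sketch, assuming a Linux KVM host where the kvm_intel module exposes the enable_apicv parameter (the modprobe.d file name is illustrative); make sure no VMs are running before reloading the module:

        hostOS# modprobe -r kvm_intel
        hostOS# echo "options kvm-intel enable_apicv=n" >> /etc/modprobe.d/kvm-intel.conf
        hostOS# modprobe kvm_intel
        hostOS# cat /sys/module/kvm_intel/parameters/enable_apicv    # N indicates that APICv is disabled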

  • In a KVM-based hypervisor deployment, you might encounter one or more of the following issues: PR1263056

    • The vSRX might become unresponsive when Page Modification Logging (PML) is enabled in the host operating system (CentOS or Ubuntu) and the host uses the Intel Xeon processor E5 or E7 v4 family. This PML issue prevents the vSRX from booting successfully.

    • Traffic to the vSRX might drop or stop due to Intel XL710 driver-specific limitations. This behavior can be due to issues with the vSRX VM configuration (such as a MAC-VLAN or MAC-NUM limitation).

    Workaround: Perform the appropriate workaround to resolve the issues listed above:

    • If the vSRX becomes unresponsive because of a PML issue, we recommend that you disable PML at the host kernel level. Depending on your host operating system, open the .conf file in your default editor and add the following line to the file: options kvm-intel nested=y enable_apicv=n pml=n (see the sketch after this list).

    • If the vSRX experiences loss of traffic due to Intel XL710 driver limitations, follow the recommended Intel XL710 guidelines to change the VM configuration to avoid these limitations. See Intel Ethernet Controller X710 and XL710 Family Documentation for the recommended guidelines.
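
    The following is a sketch of the PML change described in the first item above, on a Linux host; the /etc/modprobe.d/kvm-intel.conf file name is an assumption, and the kvm_intel module must be reloaded (or the host rebooted) for the change to take effect:

        hostOS# echo "options kvm-intel nested=y enable_apicv=n pml=n" >> /etc/modprobe.d/kvm-intel.conf
        hostOS# modprobe -r kvm_intel && modprobe kvm_intel    # or reboot the host
        hostOS# cat /sys/module/kvm_intel/parameters/pml       # N indicates that PML is disabled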

  • When deploying a vSRX instance in a KVM or Contrail environment with the vhost_net NIC driver, the vSRX might process and forward all unicast packets that were flooded to the port, regardless of the destination MAC address. PR1344700

    Workaround: For a vSRX on KVM deployment, insert <driver name='qemu'/> below <model type='virtio'/> in the VM XML definition file, as in the sketch below. For a vSRX on Contrail deployment, no workaround is available. To prevent packets from looping back out of the same interface, do not permit intra-zone traffic forwarding in the security policy.
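
    A minimal sketch of the resulting interface definition in the libvirt domain XML (the MAC address and source bridge are placeholders):

        <interface type='bridge'>
          <mac address='52:54:00:00:00:01'/>    <!-- placeholder MAC address -->
          <source bridge='br0'/>                <!-- placeholder bridge name -->
          <model type='virtio'/>
          <driver name='qemu'/>                 <!-- use the QEMU back end instead of vhost_net -->
        </interface>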

Routing Protocols

  • When the Bidirectional Forwarding Detection (BFD) protocol is configured over an IPv6 static route, the route remains in the routing table even after a session failure occurs. PR1109727

UTM

  • In vSRX deployments configured with Sophos Antivirus, some files that are larger than the configured max-content-size might not go into fallback mode, and, after they are retransmitted several times, they might pass with a clean or an infected result. This issue is specific to a few protocols that do not send the content size before attempting to transmit files. PR1093984

  • In some instances, validation is not performed when the UTM policy is detached from the firewall policy rule after an SSL proxy profile is selected. PR1285543

    Workaround: Do not detach the UTM policy after an SSL proxy profile is selected.

  • In a configuration where multiple traffic selectors are configured for a peer with Internet Key Exchange version 2 (IKEv2) reauthentication, only one traffic selector is rekeyed at the time of the IKEv2 reauthentication. The VPN tunnels of the remaining traffic selectors are cleared without immediately performing the rekey process. A new negotiation of those traffic selectors is triggered through other mechanisms, for example, by traffic or by the peer. PR1287168

VPN

  • An error message might appear for show or clear commands if IPsec VPN is configured with more than 1,000 tunnels. PR1093872

    Workaround: Retry the commands.

  • IPv6 firewall filters cannot be applied to virtual channels. PR1182367

  • When IPsec is used with PKI authentication, the vSRX might unnecessarily send the entire certificate chain to the remote peer, potentially causing fragmentation of IKE messages. PR1251837

    Workaround: If possible, configure the remote peer to send the CERTREQ (certificate request) payload as part of the IKE exchange. The vSRX will examine the CERTREQ payload from the remote peer to determine what CAs the peer trusts and to compare them with the CAs trusted locally. This examination helps avoid sending the entire certificate chain to the peer.