
Known Behavior

Application Layer Gateways (ALGs)

  • On all SRX Series devices, you can define the Sun RPC and MS-RPC mapping entry ageout value using the set security alg sunrpc map-entry-timeout value and set security alg msrpc map-entry-timeout value commands. The ageout value ranges from 8 hours to 72 hours, and the default value is 32 hours.

    If either the Sun RPC ALG or the MS-RPC ALG service does not trigger the control negotiation even after 72 hours, the maximum RPC ALG mapping entry value times out and the new data connection to the service fails.
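
    For example, a sketch of setting both mapping-entry ageout values to 48 hours (the value is illustrative; the allowed range is 8 through 72 hours):

    ```
    set security alg sunrpc map-entry-timeout 48
    set security alg msrpc map-entry-timeout 48
    ```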

  • The maximum size of the jbuf is 9 KB. If the message buffer size exceeds 9 KB, the entire message cannot be transferred to the ALG packet handler. This causes subsequent packets in the session to bypass ALG handling, resulting in a transaction failure.

The limitations for SCCP ALGs are as follows:

  • SCCP is a Cisco proprietary protocol, so any changes to the protocol by Cisco can break the SCCP ALG implementation. However, workarounds are provided to bypass strict decoding and handle protocol changes gracefully.
  • The SCCP ALG validates protocol data units (PDUs) with message IDs in the ranges [0x0 - 0x12], [0x20 - 0x49], and [0x81 - 0x14A]. By default, all other message IDs are treated as unknown messages and are dropped by the SCCP ALG.
  • Any changes to the policies will drop the sessions and impact already established SCCP calls.
  • The SCCP ALG opens pinholes that are collapsed during traffic or media inactivity. This means that during a temporary loss of connectivity, media sessions are not re-established.
  • CallManager (CM) version 6.x and later does not support TCP probe packets in chassis cluster mode. As a result, the existing SCCP sessions will break when there is a failover. You can still create new SCCP sessions during failover.

The PPTP ALG with IPv6 support has the following limitation:

  • Because PPP packets are compressed with the Microsoft Point-to-Point Encryption (MPPE) protocol after the tunnel is set up, the IP header in the PPP payload cannot be translated. To ensure that the PPTP connection works correctly, the PPTP client must be able to operate in dual-stack mode, so that an IPv6 PPTP client can accept an IPv4 address for the PPP tunnel interface and communicate with the IPv4 PPTP server without IP address translation of the PPP packets.

The RTSP ALG with IPv6 support has the following limitations:

  • Real-Time Streaming Protocol (RTSP) is an Application Layer protocol for controlling the delivery of data with real-time properties. The RTSP ALG supports a peer client, and the server transmits real-time media; it does not support third-party endpoints involved in the transaction.
  • In case of destination NAT or NAT64 for IP address translation, if the RTSP message (including the Session Description Protocol (SDP) application content) length exceeds 2500 bytes, then the RTSP ALG processes only the first 2500 bytes of the message and ignores the rest of the message. In this scenario, the IP address in the RTSP message is not translated if the IP address does not appear in the first 2500 bytes.

The SIP ALG with IPv6 support has the following limitation:

  • When NAT64 with persistent NAT is implemented, the SIP ALG adds the NAT translation to the persistent NAT binding table if NAT is configured on the Address of Record (AOR). Because persistent NAT cannot duplicate the address configured, coexistence of NAT66 and NAT64 configured on the same address is not supported.

    Only one binding is created for the same source IP address.


AppSecure

  • J-Web pages for AppSecure are preliminary.
  • Custom application signatures and custom nested application signatures are not currently supported by J-Web.
  • When ALG is enabled, application identification includes the ALG result to identify the application of the control sessions. Application firewall permits ALG data sessions whenever control sessions are permitted. If the control session is denied, there are no data sessions.

    When ALG is disabled, application identification relies on its signatures to identify the application of the control and data sessions. If a signature match is not found, the application is considered unknown. Application firewall handles applications based on the application identification result.

Chassis Cluster

  • On high-end SRX Series devices in a chassis cluster, an ISSU from Junos OS Release 12.1X46-D40 to Junos OS Release 12.1X47-D10 or later requires an interim upgrade step (for example, through Junos OS Release 12.1X46-D45) if NAT is configured.

    For more details, refer to TSB16905.

  • If you are adding next-generation SRX5K-SPC-4-15-320 SPCs on SRX5600 and SRX5800 devices that are part of a chassis cluster, you must install the new SPCs so that a next-generation SRX5K-SPC-4-15-320 SPC is the SPC in the original lowest-numbered slot. For example, if the chassis already has two first-generation SRX5K-SPC-2-10-40 SPCs installed in slots 2 and 3, you cannot install SRX5K-SPC-4-15-320 SPCs in slot 0 or 1. You will need to make sure that an SRX5K-SPC-4-15-320 SPC is installed in the slot that provides central point functionality (in this case, slot 2). This ensures that the central point functionality is performed by an SRX5K-SPC-4-15-320 SPC.
  • On all high-end SRX Series devices, IPsec VPN is not supported in active/active chassis cluster configuration (that is, when there are multiple RG1+ redundancy groups).

The following list describes the limitations for inserting an SPC on SRX1400, SRX3400, SRX3600, SRX5600, and SRX5800 devices in chassis cluster mode:

  • The chassis cluster must be in active/passive mode before and during the SPC insert procedure.
  • A different number of SPCs cannot be inserted in two different nodes.
  • A new SPC must be inserted in a slot that is higher than the central point slot.

    Note: The existing combo central point cannot be changed to a full central point after the new SPC is inserted.

  • During an SPC insert procedure, the IKE and IPsec configurations cannot be modified.
  • Users cannot specify the SPU and the IKE instance to anchor a tunnel.
  • After a new SPC is inserted, existing tunnels cannot be redistributed to the new SPC to use its processing power.
  • Dynamic tunnels cannot load-balance across different SPCs.
  • The manual VPN name and the site-to-site gateway name cannot be the same.
  • In a chassis cluster scaling environment, the heartbeat-threshold must always be set to 8.
  • An APN or an IMSI filter must be limited to 600 for each GTP profile. The number of filters is directly proportional to the number of IMSI prefix entries. For example, if one APN is configured with two IMSI prefix entries, then the number of filters is two.
  • Eight QoS queues are supported per ae interface.
  • The first recommended unified ISSU from release is Junos OS Release 10.4R4. If you intend to upgrade from a release earlier than Junos OS Release 10.4R4, see the release notes for the release that you are upgrading from for information about limitations and issues related to upgrading.
  • Unified ISSU does not support UTM.
  • For the latest unified ISSU support status, go to the Juniper Networks Knowledge Base and search for KB17946.
  • Unified ISSU does not support version downgrading.
  • In large chassis cluster configurations on SRX1400, SRX3400, or SRX3600 devices, you need to increase the wait time before triggering failover. In a full-capacity implementation, we recommend increasing the wait time to 8 seconds by modifying the heartbeat-threshold and heartbeat-interval values in the [edit chassis cluster] hierarchy.

    The product of the heartbeat-threshold and heartbeat-interval values defines the time before failover. The default values (heartbeat-threshold of 3 beats and heartbeat-interval of 1000 milliseconds) produce a wait time of 3 seconds.

    To change the wait time, modify the option values so that the product equals the desired setting. For example, setting the heartbeat-threshold to 8 and maintaining the default value for the heartbeat-interval (1000 milliseconds) yields a wait time of 8 seconds. Likewise, setting the heartbeat-threshold to 4 and the heartbeat-interval to 2000 milliseconds also yields a wait time of 8 seconds.
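
    The 8-second setting described above can be sketched as follows (the product of the two values determines the failover wait time):

    ```
    set chassis cluster heartbeat-threshold 8
    set chassis cluster heartbeat-interval 1000
    ```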

  • Packet-based forwarding for MPLS and ISO protocol families is not supported.
  • On SRX5600 and SRX5800 devices, only two of the 10 ports on each PIC of 40-port 1-Gigabit Ethernet I/O cards (IOCs) can simultaneously enable IP address monitoring. Because there are four PICs per IOC, this permits a total of eight ports per IOC to be monitored. If more than two ports per PIC on 40-port 1-Gigabit Ethernet IOCs are configured for IP address monitoring, the commit will succeed but a log entry will be generated, and the accuracy and stability of IP address monitoring cannot be ensured. This limitation does not apply to any other IOCs or devices.
  • IP address monitoring is not supported on reth interface link aggregation groups (LAGs) or on child interfaces of reth interface LAGs.
  • Screen statistics data can be gathered on the primary device only.
  • Only reth interfaces are supported for IKE external interface configuration in IPsec VPN. Other interface types can be configured, but IPsec VPN might not work.

Dynamic Host Configuration Protocol (DHCP)

  • On all high-end SRX Series devices, DHCPv6 client authentication is not supported.
  • On all high-end SRX Series devices, DHCP client and server functionality is not supported in a chassis cluster.
  • On all high-end SRX Series devices, DHCP relay is unable to update the binding status based on DHCP_RENEW and DHCP_RELEASE messages.

Flow and Processing

  • On all high-end SRX Series devices, when packet-logging functionality is configured with an increased pre-attack configuration parameter value, resource usage increases proportionally and might affect performance.
  • On all high-end SRX Series devices, the default authentication table capacity is 45,000; the administrator can increase the capacity to a maximum of 50,000.

    On SRX1400 devices, the default authentication table capacity is 10,000; the administrator can increase the capacity to a maximum of 15,000.

  • On all high-end SRX Series devices, when devices are operating in flow mode, the Routing Engine side cannot detect the path MTU of an IPv6 multicast address (with a large size packet).
  • On all high-end SRX Series devices, you cannot configure route policies and route patterns in the same dial plan.
  • On all high-end SRX Series devices, high CPU utilization triggered for reasons such as CPU intensive commands and SNMP walks causes the BFD protocol to flap while processing large BGP updates.
  • On all high-end SRX Series devices, downgrading is not supported in low-impact unified ISSU chassis cluster upgrades (LICU).
  • On SRX5800 devices, network processing bundling is not supported in Layer 2 transparent mode.
  • On all high-end SRX Series devices, the maximum number of concurrent sessions is 250 for SSH and Telnet, and 1024 for the Web.

General Packet Radio Service (GPRS)

The following Gateway GPRS Support Node (GGSN) and Packet Data Network Gateway (PGW) limitations are applicable for all high-end SRX Series devices.

  • GGSN and PGW traffic must pass through the GPRS tunneling protocol (GTP) framework; otherwise, the tunnel status is updated incorrectly.
  • The central point distributes all GTP packets to SPUs according to upstream endpoints for GGSN or PGW (one GGSN or PGW is the upstream endpoint of the GTP tunnels). Information is checked on the upstream endpoint IP and GTP packets in the GGSN pool network in the following way:
    • If the source IP address in the Create-PDP-Context-Response or Create-Session-Response message differs from the IP address of the upstream endpoint, the tunnel is created on one SPU. Based on the IP address of the upstream endpoint for the GGSN or PGW, an incoming GTP tunnel message can be distributed to a second SPU, where the GTP packets are dropped because no tunnel is found.

    Note: In the GGSN pool scenario, GGSN can reply with a Create-PDP-Context-Response or Create-Session-Response message using a different source IP address than the one where the request was sent to. Therefore the request and the response can run on two different flow sessions, and these two flow sessions can be distributed to different SPUs.

The following GTP firewall limitations are applicable on all high-end SRX Series devices.

  • GTP firewall does not support hot-insertable and hot-removable hardware.
  • The GTP firewall needs to learn the network’s GSN table and install the table for the central point and the SPU. Otherwise, some GTP traffic is blocked when the firewall is inserted in the network.
  • On all high-end SRX Series devices, the GPRS tunneling protocol (GTP) module competes with other modules for memory allocation during runtime because it has dynamic memory allocation for tunnel management.
  • On all high-end SRX Series devices, GTP-U inspection has the following limitations:
    • When GTP-U inspection is enabled, GTP-U throughput drops.
    • GTP-U inspection only affects the new flow sessions that are created after enabling the GTP-U inspection.

      Note: When GTP-U inspection is disabled, the GTP module ignores the traffic on which the corresponding flow sessions were created. When GTP-U inspection is reenabled, the GTP module continues to ignore the traffic during the lifetime of the flow sessions that were created before the GTP-U inspection was reenabled.

    • The ramp-up rate of GTP tunnel management messages decreases slightly (the decrease rate is less than 10 percent) when the GTP control (GTP-C) tunnel and GTP-U tunnel are created on different SPUs, whether GTP-U inspection is enabled or not.
  • On all high-end SRX Series devices, NAT for GTP packets has the following limitations:
    • Only static NAT is supported; port NAT is not supported.
    • During a packet data protocol (PDP) context negotiation and update, the packet sent from the customer’s GSNs must carry the public IP in the GTP payload.
    • Source IP and destination IP addresses cannot be translated simultaneously for a packet.
    • NAT for GTP only works in default logical systems.
    • IPv6 is not supported.

The following SCTP limitations are applicable on all high-end SRX Series devices:

  • Dynamic policy is not supported for SCTP. You must configure all policies for needed SCTP sessions.
  • SCTP modules only inspect IPv4 traffic. IPv6 traffic will be passed or dropped by flow-based or policy-based processing directly, and no SCTP module inspection will occur.
  • Only the first chunk in each SCTP packet is checked.
  • For static NAT to work, incoming packets from one side (client or server) must arrive on interfaces that belong to the same zone.
  • For multihome cases, only IPv4 Address Parameter (5) in INIT or INI-ACK is supported.
  • Only static NAT is supported for SCTP.
  • SCTP is enabled or disabled based on whether an SCTP profile is configured. When you disable the SCTP feature, all associations are deleted, and subsequent SCTP packets are passed or dropped according to policy.

    If you want to enable SCTP again, all the running SCTP communications will be dropped, because no associations exist. New SCTP communications can establish an association and perform the inspections.

    Clear old SCTP sessions when SCTP is reenabled; doing this will avoid any impact caused by the old SCTP sessions on the new SCTP communications.
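
    A minimal sketch of clearing the old SCTP sessions described above, assuming the standard flow-session clear command (filter options may vary by release):

    ```
    clear security flow session protocol sctp
    ```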

  • Only established SCTP associations will be synchronized to peer node.
  • A maximum of eight source IP addresses and eight destination IP addresses are allowed in an SCTP communication.
  • One SPU supports a maximum of 5000 associations and a maximum of 320,000 SCTP sessions.
  • The 4-way handshake process should be done in one node of a cluster. If the SCTP 4-way handshake process is handled on two nodes (for example, two sessions on two nodes in active/active mode) or the cluster fails over before the 4-way handshake is finished, the association cannot be established successfully.
  • If you configure different policies for each session belonging to one association, there will be multiple policies related to one association. The SCTP packet management (drop, rate limit, and so on) will use the profile attached to the handling SCTP session's policy.

    The association's timeout will only use the profile attached to its INIT packet’s policy. If the INIT packet’s policy changes the attached profile, the old profile is deleted, and the association will refresh the timeout configuration. However, if the INIT packet’s policy changes its attached profile without deleting the old profile, the association will not refresh the timeout configuration.

  • In some cases, the associations might not be distributed to SPUs very evenly because the port’s hash result on the central point is uneven. For example, this can occur when only two pairs of ports are used: one pair has 100 associations, but the other pair has only one association. In this case, the associations cannot be distributed evenly on a firewall with more than one SPU.
  • SCTP sessions will not be deleted with associations, and the sessions will time out in 30 minutes, which is the default value. If you need the session to time out soon, you can preconfigure the SCTP application timeout value.
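
    A hedged sketch of preconfiguring the SCTP application timeout mentioned above; the application name my-sctp is hypothetical, and the custom-application syntax is an assumption:

    ```
    set applications application my-sctp protocol sctp
    set applications application my-sctp inactivity-timeout 300
    ```
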
  • M3UA or SCCP message parsing is checked, but M3UA or SCCP stateful inspection is not checked.
  • Only ITU-T Rec. Q.711-Q.714 (07 or 96) standard is supported. ANSI, ETSI, China, and other standards are not supported.
  • Only RFC 4960 is supported.

On all high-end SRX Series devices, SCTP payload protocol blocking has the following limitations:

  • The supported protocol decimal value is from 0 to 63. This value includes 48 IANA assigned protocols and 16 unassigned protocols.
  • When SCTP data traffic is running during a unified ISSU, the SCTP data packets are dropped on Junos OS Release 12.1X46. Only after the unified ISSU is finished can you configure permit on Junos OS Release 12.1X46-D10 and pass the SCTP data traffic.
  • Only the first data chunk is inspected, so protocol blocking only works for the first data chunk.

On all high-end SRX Series devices, the SCTP rate limiting function has the following limitations:

  • The supported protocol decimal value is from 0 to 63. This value includes 48 IANA assigned protocols and 16 unassigned protocols.
  • Only the first data chunk is inspected, so the rate limiting function only works for the first data chunk.
  • A maximum of 80 addresses are rate limited in one profile.
  • A maximum of 10 protocols are rate limited for one address in one profile.
  • The supported rate limit value is from 1 to 12000.


  • SRX5800 devices do not support a redundant SCB card (third SCB) if an SRX5K SPC II (FRU model number: SRX5K-SPC-4-15-320) is installed on the device. If you have installed an SRX5K SPC II on an SRX5800 device with a redundant SCB card, make sure to remove the redundant SCB card.

Interfaces and Routing

This section covers filter and policing limitations.

  • On SRX1400, SRX3400, and SRX3600 devices, the following feature is not supported by a simple filter:
    • Forwarding class as match condition
  • On all high-end SRX Series devices, PIM does not support upstream and downstream interfaces across different virtual routers in flow mode.
  • On SRX1400, SRX3400, and SRX3600 devices, the following features are not supported by a policer or a three-color policer:
    • Color-aware mode of a three-color-policer
    • Filter-specific policer
    • Forwarding class as action of a policer
    • Logical interface policer
    • Logical interface three-color policer
    • Logical interface bandwidth policer
    • Packet loss priority as action of a policer
    • Packet loss priority as action of a three-color-policer
  • On all high-end SRX Series devices, the following features are not supported by a firewall filter:
    • Egress filter-based forwarding (FBF)
    • Forwarding table filter (FTF)
  • SRX3400 and SRX3600 devices have the following simple filter limitations:
    • Forwarding class as a match condition is not supported.
    • In the packet processor on an IOC, up to 400 logical interfaces can be applied with simple filters.
    • In the packet processor on an IOC, the maximum number of terms of all simple filters is 2000.
    • In the packet processor on an IOC, the maximum number of policers is 2000.
    • In the packet processor on an IOC, the maximum number of three-color-policers is 2000.
    • The maximum burst size of a policer or three-color-policer is 16 MB.
  • On all high-end SRX Series devices, the flow monitoring version 9 has the following limitations:
    • Routing Engine based flow monitoring V5 or V8 mode is mutually exclusive with inline flow monitoring V9.
    • High-end SRX Series devices do not support multiple collectors like branch SRX Series devices. Only one V9 collector per IPv4 or IPv6 is supported.
    • Flow aggregation for V9 export is not supported.
    • Only UDP over IPv4 or IPv6 protocol can be used as the transport protocol.
    • Only the standard IPv4 or IPv6 template is supported for exporting flow monitoring records.
    • User-defined or special templates are not supported for exporting flow monitoring records.
    • Chassis cluster is supported without flow monitoring session synchronization.
  • On SRX3400 and SRX3600 devices, when you use the monitor traffic command to monitor fxp0 interface traffic, interface bounce occurs. Use the monitor traffic interface fxp0 no-promiscuous command to avoid this issue.
  • On all high-end SRX Series devices, the lo0 logical interface cannot be configured with RG0 if used as an IKE gateway external interface.
  • On all high-end SRX Series devices, the set protocols bgp family inet flow and set routing-options flow CLI statements are no longer available, because BGP flow spec functionality is not supported on these devices.
  • On all high-end SRX Series devices, the LACP is not supported on Layer 2 interfaces.
  • On all high-end SRX Series devices, BGP-based virtual private LAN service (VPLS) works on child ports and physical interfaces, but not over ae interfaces.
  • When using SRX Series devices in chassis cluster mode, we recommend that you do not configure any local interfaces (or combination of local interfaces) along with redundant Ethernet interfaces.

    For example:

    The following configuration of chassis cluster redundant Ethernet interfaces, in which interfaces are configured as local interfaces, is not recommended:

    ge-2/0/2 {
        unit 0 {
            family inet {
                address;
            }
        }
    }

    The following configuration of chassis cluster redundant Ethernet interfaces, in which interfaces are configured as part of redundant Ethernet interfaces, is recommended:

    interfaces {
        ge-2/0/2 {
            gigether-options {
                redundant-parent reth2;
            }
        }
        reth2 {
            redundant-ether-options {
                redundancy-group 1;
            }
            unit 0 {
                family inet {
                    address;
                }
            }
        }
    }

Intrusion Detection and Prevention (IDP)

  • On all high-end SRX Series devices, from Junos OS Release 11.2 and later, the IDP security package is based on the Berkeley database. Hence, when the Junos OS image is upgraded from Junos OS Release 11.1 or earlier to Junos OS Release 11.2 or later, a migration of IDP security package files needs to be performed. This is done automatically on upgrade when the IDP process comes up. Similarly, when the image is downgraded, a migration (secDb install) is automatically performed when the IDP process comes up, and previously installed database files are deleted.

    However, migration is dependent on the XML files for the installed database present on the device. For first-time installation, completely updated XML files are required. If the last update on the device was an incremental update, migration might fail. In such a case, you have to manually download and install the IDP security package using the download or install CLI commands before using the IDP configuration with predefined attacks or groups.

    As a workaround, use the following CLI commands to manually download the individual components of the security package from the Juniper Security Engineering portal and install the full update:

    • request security idp security-package download full-update
    • request security idp security-package install
  • On all high-end SRX Series devices, the IDP policies for each user logical system are compiled together and stored on the data plane memory. To estimate adequate data plane memory for a configuration, consider these two factors:
    • IDP policies applied to each user logical system are considered unique instances because the ID and zones for each user logical system are different. Estimates need to consider the combined memory requirements for all user logical systems.
    • As the application database grows, compiled policies require more memory. Memory usage should be kept below the available data plane memory to allow for database growth.
  • On all high-end SRX Series devices, when the ingress interface is ge-0/0/2 and the egress interface is ge-0/0/2.100, the flow output shows both the source and the destination interface as ge-0/0/2.100.
  • IDP does not allow header checks for nonpacket contexts.
  • On all high-end SRX Series devices, application-level distributed denial-of-service (application-level DDoS) detection does not work if two rules with different application-level DDoS applications process traffic going to a single destination application server. When setting up application-level DDoS rules, make sure that you do not configure rulebase-ddos rules that have two different application-ddos objects when the traffic destined to one application server can process more than one rule. Essentially, for each protected application server, you have to configure the application-level DDoS rules so that traffic destined for one protected server processes only one application-level DDoS rule.

    Note: Application-level DDoS rules are terminal, which means that once traffic is processed by one rule, it will not be processed by other rules.

    The following configuration options can be committed, but they will not work properly:

    [Table elided: example rule configurations per application server]

  • On all high-end SRX Series devices, application-level DDoS rule base (rulebase-ddos) does not support port mapping. If you configure an application other than default, and if the application is from either predefined Junos OS applications or a custom application that maps an application service to a nonstandard port, application-level DDoS detection will not work.

    When you configure the application setting as default, IDP uses application identification to detect applications running on standard and nonstandard ports; thus, the application-level DDoS detection would work properly.

  • On all high-end SRX Series devices, all IDP policy templates are supported except All Attacks. There is a 100-MB policy size limit for integrated mode and a 150-MB policy size limit for dedicated mode. The current IDP policy templates supported are dynamic, based on the attack signatures being added. Therefore, be aware that supported templates might eventually grow past the policy size limit.

    On all high-end SRX Series devices, the following IDP policies are supported:

    • DMZ_Services
    • DNS_Service
    • File_Server
    • Getting_Started
    • IDP_Default
    • Recommended
    • Web_Server
  • IDP deployed in both active/active and active/passive chassis clusters has the following limitations:
    • No inspection of sessions that fail over or fail back.
    • The IP action table is not synchronized across nodes.
    • The Routing Engine on the secondary node might not be able to reach networks that are reachable only through a Packet Forwarding Engine.
    • The SSL session ID cache is not synchronized across nodes. If an SSL session reuses a session ID and it happens to be processed on a node other than the one on which the session ID is cached, the SSL session cannot be decrypted and will be bypassed for IDP inspection.
  • IDP deployed in active/active chassis clusters has a limitation that for time-binding scope source traffic, if attacks from a source (with more than one destination) have active sessions distributed across nodes, then the attack might not be detected because time-binding counting has a local-node-only view. Detecting this sort of attack requires an RTO synchronization of the time-binding state that is not currently supported.

IP Monitoring

  • When IP monitoring is enabled on a subnet different from the reth IP address subnet, you must configure the proxy-arp unrestricted option on the upstream router.
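
    On the upstream router, assuming it is also a Junos device (the interface name is illustrative), the option described above might be configured as:

    ```
    set interfaces ge-0/0/1 unit 0 proxy-arp unrestricted
    ```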


IPv6 Support

  • Devices with IPv6 addressing do not perform fragmentation. IPv6 hosts should either perform path MTU discovery or send packets smaller than the IPv6 minimum MTU size of 1280 bytes.
  • Because IPv6 addresses are 128 bits long, compared to 32 bits for IPv4 addresses, IPv6 IPsec packet processing requires more resources. Therefore, a small performance degradation is observed.
  • IPv6 uses more memory to set up the IPsec tunnel. Therefore, the IPsec IPv4 tunnel scalability numbers might drop.
  • The addition of IPv6 capability might cause a drop in the IPsec IPv4-in-IPv4 tunnel throughput performance.
  • The IPv6 IPsec VPN does not support the following functions:
    • 4in6 and 6in4 policy-based site-to-site VPN, IKE
    • 4in6 and 6in4 route-based site-to-site VPN, IKE
    • 4in6 and 6in4 policy-based site-to-site VPN, Manual Key
    • 4in6 and 6in4 route-based site-to-site VPN, Manual Key
    • 4in4, 6in6, 4in6, and 6in4 policy-based dial-up VPN, IKE
    • 4in4, 6in6, 4in6, and 6in4 policy-based dial-up VPN, Manual Key
    • Remote Access—XAuth, config mode, and shared IKE identity with mandatory XAuth
    • IKE authentication—PKI or DSA
    • IKE peer type—dynamic IP
    • Chassis cluster for basic VPN features
    • IKE authentication—PKI or RSA
    • NAT-T
    • VPN monitoring
    • Hub-and-spoke VPNs
    • NHTB
    • DPD
    • Packet reordering for IPv6 fragments over tunnels
    • Chassis cluster for advanced VPN features
    • IPv6 link-local address
  • Network and Security Manager (NSM)—Consult the NSM release notes for version compatibility, required schema updates, platform limitations, and other specific details regarding NSM support for IPv6 addressing on all high-end SRX Series devices.
  • Security policy—Only IDP for IPv6 sessions is supported on all high-end SRX Series devices. UTM for IPv6 sessions is not supported. If your current security policy uses rules with the IP address wildcard any, and UTM features are enabled, you will encounter configuration commit errors because UTM features do not yet support IPv6 addresses. To resolve the errors, modify the rule returning the error so that the any-ipv4 wildcard is used, and create separate rules for IPv6 traffic that do not include UTM features.
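
    A hedged sketch of the rewrite described above; the zone and policy names are hypothetical, and only the match addresses are shown:

    ```
    set security policies from-zone trust to-zone untrust policy utm-v4 match source-address any-ipv4
    set security policies from-zone trust to-zone untrust policy utm-v4 match destination-address any-ipv4
    set security policies from-zone trust to-zone untrust policy idp-v6 match source-address any-ipv6
    set security policies from-zone trust to-zone untrust policy idp-v6 match destination-address any-ipv6
    ```

    UTM features would be applied only to the utm-v4 policy; the idp-v6 policy carries no UTM configuration.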


J-Web

  • The following table indicates browser compatibility:

    Table 15: Browser Compatibility on High-End SRX Series Devices

    Devices: SRX1400, SRX3400, SRX3600, SRX5400, SRX5600, SRX5800

    Supported browsers:
    • Mozilla Firefox version 3.6 or later
    • Microsoft Internet Explorer version 7.0

    Recommended browser: Mozilla Firefox version 3.6 or later

  • To use the Chassis View, a recent version of Adobe Flash that supports ActionScript and AJAX (Version 9) must be installed. Also note that the Chassis View is displayed by default on the Dashboard page. You can enable or disable the Chassis View using options in the dashboard Preference dialog box, but clearing cookies in Microsoft Internet Explorer also causes the Chassis View to be displayed.
  • On all high-end SRX Series devices, users cannot differentiate between Active and Inactive configurations on the System Identity, Management Access, User Management, and Date & Time pages.

Layer 2 Features

  • Layer 2 Bridging and Transparent Mode—On all high-end SRX Series devices, bridging and transparent mode are not supported on Mini-Physical Interface Modules (Mini-PIMs).

Logical Systems

  • The master logical system must not be bound to a security profile that is configured with a 0 percent reserved CPU quota because traffic loss could occur. When upgrading all high-end SRX Series devices from Junos OS Release 11.2, make sure that the reserved CPU quota in the security profile that is bound to the master logical system is configured for 1 percent or more. After upgrading from Junos OS Release 11.2, the reserved CPU quota is added to the default security profile with a value of 1 percent.
  • On all high-end SRX Series devices, quality-of-service (QoS) classification across interconnected logical systems does not work.
  • On all high-end SRX Series devices, the number of logical system security profiles you can create is constrained by an internal limit on security profile IDs. The security profile ID range is from 1 through 32, with ID 0 reserved for the internally configured default security profile. When the maximum number of security profiles is reached, if you want to add a new security profile, you must first delete one or more existing security profiles, commit the configuration, and then create the new security profile and commit it. You cannot add a new security profile and remove an existing one within a single configuration commit.

    If you want to add more than one new security profile, the same rule is true. You must first delete the equivalent number of existing security profiles, commit the configuration, and then create the new security profiles and commit them.
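    The replacement sequence described above can be sketched as follows; the profile names, quota values, and logical system name are illustrative assumptions:

    ```
    # First, delete an existing security profile to free a profile ID, and commit
    delete system security-profile profile-old
    commit
    # Then create the new security profile and commit in a separate step
    set system security-profile profile-new policy maximum 200
    set system security-profile profile-new policy reserved 100
    set system security-profile profile-new logical-system ls-design
    commit
    ```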

  • User and administrator configuration for logical systems—User configuration for all logical systems, and configuration of all user logical system administrators, must be done at the root level by the master administrator. A user logical system administrator cannot create other user logical system administrators or user accounts for their logical systems.
  • Name-space separation—The same name cannot be used in two logical systems. For example, if logical-system1 includes the username “Bob” then other logical systems on the device cannot include the username “Bob”.
  • Commit rollback—Commit rollback is supported at the root level only.
  • Trace and debug—Trace and debug are supported at the root level only.
  • Class of service—You cannot configure class of service on logical tunnel (lt-0/0/0) interfaces.
  • ALGs—The master administrator can configure ALGs at the root level. The configuration is inherited by all user logical systems. It cannot be configured discretely for user logical systems.

Network Address Translation (NAT)

  • Single IP address in a source NAT pool without PAT—The number of hosts that a source NAT pool without PAT can support is limited to the number of addresses in the pool. When you have a pool with a single IP address, only one host can be supported, and traffic from other hosts is blocked because there are no resources available.

    If a single IP address is configured for a source NAT pool without PAT when NAT resource assignment is not in active-backup mode in a chassis cluster, traffic through node 1 will be blocked.

  • For all ALG traffic, except FTP, we recommend that you not use the static NAT rule options source-address or source-port. Data session creation can fail if these options are used, because the IP address and the source port value, which is a random value, might not match the static NAT rule. For the same reason, we also recommend that you not use the source NAT rule option source-port for ALG traffic.

    For FTP ALG traffic, the source-address option can be used because an IP address can be provided to match the source address of a static NAT rule.

    Additionally, because static NAT rules do not support overlapping addresses and ports, they should not be used to map one external IP address to multiple internal IP addresses for ALG traffic. For example, if different sites want to access two different FTP servers, the internal FTP servers should be mapped to two different external IP addresses.
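    For example, two internal FTP servers could be mapped to two distinct external addresses with separate static NAT rules. A minimal sketch follows; the rule-set name, rule names, zone, and all addresses are illustrative assumptions:

    ```
    set security nat static rule-set ftp-nat from zone untrust
    # First FTP server: external 203.0.113.10 maps to internal 192.168.1.10
    set security nat static rule-set ftp-nat rule ftp1 match destination-address 203.0.113.10/32
    set security nat static rule-set ftp-nat rule ftp1 then static-nat prefix 192.168.1.10/32
    # Second FTP server: a different external address, 203.0.113.11
    set security nat static rule-set ftp-nat rule ftp2 match destination-address 203.0.113.11/32
    set security nat static rule-set ftp-nat rule ftp2 then static-nat prefix 192.168.1.11/32
    ```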

  • On all high-end SRX Series devices, in the case of SSL proxy, sessions are whitelisted based on the actual IP address, not on the translated IP address. Therefore, in the whitelist configuration of the SSL proxy profile, specify the actual IP address, not the translated IP address.


    Consider a destination NAT rule that translates a destination IP address using the following commands:

    • set security nat destination pool d1 address
    • set security nat destination rule-set dst-nat rule r1 match destination-address
    • set security nat destination rule-set dst-nat rule r1 then destination-nat pool d1

    In the above scenario, to exempt a session from SSL proxy inspection, the following IP address should be added to the whitelist:

    • set security address-book global address ssl-proxy-exempted-addr
    • set services ssl proxy profile ssl-inspect-profile whitelist ssl-proxy-exempted-addr
  • Maximum capacities for source pools and IP addresses have been extended on all high-end SRX Series devices as follows:

    Pool/PAT Maximum Address Capacity: the extended limits apply to source NAT pools, IP addresses supporting port translation, and the PAT port number.
    Increasing the capacity of source NAT pools consumes memory needed for port allocation. When source NAT pool and IP address limits are reached, port ranges should be reassigned. That is, the number of ports for each IP address should be decreased when the number of IP addresses and source NAT pools is increased, so that NAT does not consume too much memory. Use the port-range statement in configuration mode in the CLI to assign a new port range, or the pool-default-port-range statement to override the specified default.

    Configuring port overloading should also be done carefully when source NAT pools are increased.

    For a source pool with PAT in the port range 63,488 through 65,535, two ports are allocated at a time for RTP or RTCP applications such as SIP, H.323, and RTSP. In these scenarios, each IP address supporting PAT reserves 2048 ports (63,488 through 65,535) for ALG module use. On SRX5600 and SRX5800 devices, if all 12,288 source pools are configured, a port allocation of 2M is reserved for twin port use.
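    The port-range reassignment described above can be sketched as follows; the pool name, address range, and port range are illustrative assumptions:

    ```
    # Source NAT pool with PAT over a block of public addresses
    set security nat source pool large-pool address 203.0.113.1 to 203.0.113.254
    # Reduce the per-address port range to limit port-allocation memory use
    set security nat source pool large-pool port range 10000 to 20000
    # Alternatively, override the default port range for all pools
    set security nat source pool-default-port-range 10000 to 20000
    ```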

  • NAT rule capacity change—To support the use of large-scale NAT at the edge of the carrier network, the devicewide NAT rule capacity has been changed.

    The number of destination, static, and source NAT rules has been increased as shown in Table 16. The limits on the number of destination rule sets and static rule sets have also been increased.

    Table 16 provides the requirements per device to increase the configuration limitation as well as to scale the capacity for each device.

    Table 16: Number of Rules on All High-End SRX Series Devices

    NAT rule types covered: source NAT rules, destination NAT rules, and static NAT rules.

    The restriction on the number of rules per rule set has been increased so that there is only a devicewide limitation on how many rules a device can support. This restriction is provided to help you better plan and configure the NAT rules for the device.

    Because of memory consumption, there is no guarantee that these maximums (maximum source rules or rule sets + maximum destination rules or rule sets + maximum static rules or rule sets) can be supported at the same time on SRX1400, SRX3400, SRX3600, SRX5600, and SRX5800 devices.

    The suggested totals are specified devicewide: the total number of NAT rule sets per system and the total number of NAT rules per rule set.



Security Policies

  • On all high-end SRX Series devices, the current SSL proxy implementation has the following connectivity limitations:
    • The SSLv2 protocol is not supported. SSL sessions using SSLv2 are dropped.
    • SSL sessions where client certificate authentication is mandatory are dropped.
    • SSL sessions where renegotiation is requested are dropped.
  • On all high-end SRX Series devices, for a particular session, the SSL proxy is only enabled if a relevant feature related to SSL traffic is also enabled. Features that are related to SSL traffic are IDP, application identification, application firewall, and application tracking. If none of the above listed features are active on a session, the SSL proxy bypasses the session and logs are not generated in this scenario.
  • On all high-end SRX Series devices, you cannot configure the following IP addresses as negated addresses in a policy:
    • Wildcard addresses
    • IPv6 addresses
    • Addresses such as any, any-ipv4, and any-ipv6
  • When a range of addresses or a single address is negated, it can be divided into multiple addresses. These negated addresses are shown as prefixes and lengths, which require more memory for storage on a Packet Forwarding Engine.
  • Each platform has a limited number of policies with negated addresses. A policy can contain 10 source or destination addresses. The capacity of the policy depends on the maximum number of policies that the platform supports.

Services Offloading

  • Services offloading has the following limitations:
    • Transparent mode is not supported. If transparent mode is configured, a normal session is installed.
    • LAG is not supported. If a LAG is configured, a normal session is installed.
    • Only multicast sessions with one fan-out are supported. If a multicast session with more than one fan-out exists, a normal session is installed.
    • Only active/passive chassis cluster configuration is supported. Active/active chassis cluster configuration is not supported.
    • Fragmented packets are not supported. If fragmented packets exist, a normal session is installed.
    • IPv6 is not supported. If IPv6 is configured, a normal session is installed.

    Note: A normal session forwards packets from the network processor to the SPU for fast-path processing. A services-offload session processes fast-path packets in the network processor and the packets exit out of the network processor itself.

  • When services offloading is enabled, the performance impact is as follows:
    • Non-services-offload (normal) sessions: performance can drop by approximately 20 percent for connections per second (CPS) and 15 percent for packets per second (pps) when compared with non-services-offload mode.
    • Services-offload (fast-forward) sessions: performance can drop by approximately 13 percent for connections per second (CPS).

Simple Network Management Protocol (SNMP)

  • On all high-end SRX Series devices, the show snmp mib CLI command does not display output for security-related MIBs. We recommend that you use an SNMP client and prefix logical-system-name@ to the community name. For example, if the community is public, use default@public for the default root logical system.

Unified Access Control

  • During SRX Series device communication with the Infranet Controller (IC), the connection remains in the attempt-next state, preventing successful communication. This happens when the outgoing interface used to connect to the IC is part of a routing instance.

Unified Threat Management (UTM)

  • On SRX5400 devices configured with Sophos Antivirus, files larger than the max-content-size might not trigger fallback behavior, unlike with other antivirus engines, and might instead be detected as clean for protocols that do not pre-declare the content size.

Virtual Private Networks (VPNs)

  • On SRX Series devices, if an IPsec VPN tunnel is established using IKEv2, a small number of packet drops might be observed during CHILD_SA rekey as a result of "bad SPI" being logged.

    This occurs only when the SRX Series device is the responder for this rekey and the peer is a non-Juniper Networks device, and the latency between the peers is low and the packet rate is high.

    To avoid this issue, ensure that the SRX Series device always initiates the rekeys by setting its IPsec lifetime to a lower value than that of the peer.
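    For example, the SRX Series device can be made the rekey initiator by giving it a shorter IPsec SA lifetime than the peer's; the proposal name and lifetime value here are illustrative assumptions (the peer would be configured with a longer lifetime):

    ```
    # Shorter lifetime on the SRX Series device so it initiates CHILD_SA rekeys
    set security ipsec proposal ipsec-prop lifetime-seconds 3000
    ```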

  • IKEv2 does not support the following features:

    • Policy-based VPN.
    • Dialup tunnels.
    • VPN monitoring.
    • EAP.
    • Multiple child SAs for the same traffic selectors for each QoS value.
    • IP Payload Compression Protocol (IPComp).
    • Traffic selectors.
  • On all high-end SRX Series devices, configuring XAuth with AutoVPN secure tunnel (st0) interfaces in point-to-multipoint mode and dynamic IKE gateways is not supported.
  • VPN monitoring and Suite B cryptographic configuration options ecdsa-signatures-384 (for IKE authentication) and Diffie-Hellman group20 consume considerable CPU resources. If VPN monitoring and the ecdsa-signatures-384 and group20 options are used on an SRX Series device with a large number of tunnels configured, the device must have the next-generation SPC installed.
  • On all high-end SRX Series devices, for auto VPN, the tunnel setup rate decreases with an increase in the number of SPCs in the device.
  • On SRX Series devices, configuring RIP demand circuits over VPN interfaces is not supported.
  • On a high-end SRX Series device, VPN monitoring of an externally connected device (such as a PC) is not supported. The destination IP address for VPN monitoring must be a local interface on the high-end SRX Series device.
  • IPv6 policy-based VPN is not supported.
  • On all high-end SRX Series devices, DH-group 14 is not supported for dynamic VPN.
  • On all high-end SRX Series devices, when you enable VPN, overlapping of the IP addresses across virtual routers is supported with the following limitations:
    • An IKE external interface address cannot overlap with any other virtual router.
    • An internal or trust interface address can overlap across any other virtual router.
    • An st0 interface address cannot overlap in route-based VPN in point-to-multipoint tunnels such as NHTB.
    • An st0 interface address can overlap in route-based VPN in point-to-point tunnels.
  • On all high-end SRX Series devices, the DF-bit configuration for VPN works only if the original packet size is smaller than the st0 interface MTU and larger than the external interface MTU minus the IPsec overhead.
  • RIP is not supported in point-to-multipoint (P2MP) VPN scenarios including AutoVPN deployments. We recommend OSPF or IBGP for dynamic routing when using P2MP VPN tunnels.
  • On all high-end SRX Series devices, the IPsec NAT-T tunnel scaling and sustaining issues are as follows:
    • For a given private IP address, the NAT device should translate both private ports 500 and 4500 to the same public IP address.
    • The total number of tunnels from a given public translated IP cannot exceed 1000 tunnels.


Modified: 2017-04-24