Known Limitations

Learn about limitations in this release for the PTX10001-36MR, PTX10003, PTX10004, and PTX10008.

For the most complete and latest information about known Junos OS Evolved defects, use the Juniper Networks online Junos Problem Report Search application.


EVPN

  • If a packet with an unknown inner ether-type is received at the device over an EVPN-MPLS tunnel, the packet is dropped. PR1564431

General Routing

  • Excess-rate configuration in port schedulers might not be completely honored in certain scenarios. In such scenarios, even with an explicit excess-rate configuration, the actual excess rate achieved might still be proportional to the configured transmit-rate. PR1528124
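For context, a port scheduler with an explicit excess-rate might be configured as in the following sketch. The scheduler, scheduler-map, and interface names are illustrative, not taken from the PR:

```
# Hypothetical scheduler: 20% guaranteed rate, 60% share of excess bandwidth
set class-of-service schedulers be-sched transmit-rate percent 20
set class-of-service schedulers be-sched excess-rate percent 60
# Map the scheduler to a forwarding class and bind the map to a port
set class-of-service scheduler-maps port-map forwarding-class best-effort scheduler be-sched
set class-of-service interfaces et-0/0/0 scheduler-map port-map
```

Under this limitation, the excess bandwidth actually received might track the 20% transmit-rate rather than the configured 60% excess-rate.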

  • Double-fault scenarios are not handled by the link auto-heal feature; fabric links remain down if a Routing Engine switchover is attempted while auto-heal recovery is in progress. PR1529599

  • When a scheduler-map binding is removed from an interface, the default scheduler-map is bound to the interface. If the default scheduler-map oversubscribes that interface, the map is not applied and all interface queue counters for the interface show statistics of 0. PR1539052
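The trigger described above corresponds to deleting a scheduler-map binding, as in this sketch (the interface and map names are illustrative):

```
# Bind a custom scheduler map to the interface
set class-of-service interfaces et-0/0/1 scheduler-map custom-map
# Deleting the binding reverts the interface to the default scheduler-map,
# which is where the zeroed queue counters described above can appear
delete class-of-service interfaces et-0/0/1 scheduler-map
```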

  • PTX10008: By default, IPv6 addressing is configured with a /64 subnet irrespective of the subnet configured on the DHCP server side. PR1539839

  • On all Junos OS and Junos OS Evolved platforms, when a next-hop is added or changed in the Packet Forwarding Engine and that next-hop is also the forwarding next-hop of an indirect route, packet loss can result if the ingress Packet Forwarding Engine is fast and the egress Packet Forwarding Engine is slow: the faster ingress Packet Forwarding Engine sees both the new forwarding next-hop and the indirect change, while the slower egress Packet Forwarding Engine has not yet consumed the indirect change. PR1547432

  • On Junos OS Evolved PTX10008 platforms, if multiple SIBs are in the offline state and GRES is performed immediately, the SIBs might remain stuck in the offline state for some time. PR1554423

  • UDP-encapsulated MPLS packets with an explicit null label received on an FTI tunnel are dropped after UDP decapsulation. After the UDP tunnel header is decapsulated, an MPLS payload with an explicit null label cannot be forwarded, because doing so requires popping the MPLS explicit null label and performing a lookup on the inner MPLS payload, which BT ASIC-based products do not support without looping the payload back for an additional lookup. Only scenarios in which the tunnel header is decapsulated and packets are forwarded based on the exposed MPLS label are supported. PR1580641
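The affected path involves a UDP-encapsulated flexible tunnel interface (FTI). A minimal sketch of such a tunnel follows; the addresses are illustrative and the exact statement hierarchy should be verified against your release:

```
# Hypothetical UDP FTI tunnel endpoints (addresses are placeholders)
set interfaces fti0 unit 0 tunnel encapsulation udp source address 192.0.2.1
set interfaces fti0 unit 0 tunnel encapsulation udp destination address 198.51.100.1
set interfaces fti0 unit 0 family mpls
```

MPLS payloads arriving on such a tunnel with an explicit null label are subject to the drop described above.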

  • On the PTX10003, interface queue and VOQ statistics do not report drops when a low-priority queue is oversubscribed. PR1581490

Interfaces and Chassis

  • On the PTX10003-80C or PTX10003-160C, when there is oversubscribed traffic across Packet Forwarding Engines and one of the fabric ASICs (ZF) fails, the software automatically recovers the system by taking the SIB offline and then bringing it back online. However, if the egress traffic is at line rate, the traffic takes time to converge, during which there are fabric drops. PR1580376

  • For 25-Gbps channelization on the PTX10003, due to an ASIC limitation, no two neighboring channels (1-2, 3-4) can be configured with different FEC modes; otherwise, the link remains down. PR1580717
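To stay within this limitation, neighboring 25-Gbps channels should carry the same FEC mode, as in this sketch (the port, channel numbers, and FEC value are illustrative; verify the exact hierarchy for your platform):

```
# Neighboring channels :0 and :1 must use the same FEC mode
set interfaces et-0/0/1:0 gigether-options fec fec74
set interfaces et-0/0/1:1 gigether-options fec fec74
```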


MPLS

  • If all the Routing Engines are not rebooted after a network service configuration change (for example, changing the range of MPLS labels), the rpd process might crash. PR1461468
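A label-range change of the kind referenced above might look like the following sketch; the range values are illustrative and the statement names should be verified against your release:

```
# Hypothetical change to the dynamic MPLS label range
set protocols mpls label-range dynamic-label-range 300000 800000
```

After committing such a change, all Routing Engines should be rebooted (for example, with the request system reboot operational command) to avoid the rpd crash described above.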

Routing Protocols

  • On all platforms with dual Routing Engines running Junos OS or Junos OS Evolved, BGP nonstop active routing (NSR) replication might get stuck in a rare timing case. BGP sessions on the primary Routing Engine are stuck in the "SoWait" state, and BGP sessions on the backup Routing Engine cannot sync with the primary Routing Engine. From the BGP peer side, the BGP sessions break after the hold-time expires (90 seconds by default).

    This defect can occur after the following series of events:

    1. BGP NSR replication starts while the primary Routing Engine (BGP session) is busy reading packets (that is, protocol data units).
    2. The primary Routing Engine (BGP session) requests to stop reading at a PDU boundary.
    3. While the BGP session on the primary Routing Engine waits to read the complete packet (remaining bytes), the TCP sync connection (between the primary and backup BGP) flaps (that is, the PDU boundary is not read before the flap). PR1581578
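    The replication path described above applies when nonstop active routing is enabled, which on dual-Routing-Engine systems is typically configured as:

    ```
    # Prerequisites for NSR: graceful RE switchover and synchronized commits
    set chassis redundancy graceful-switchover
    set system commit synchronize
    set routing-options nonstop-routing
    ```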

System Management

  • Results from the show ethernet-switching statistics command are limited. Only the Current MAC count statistic is displayed. PR1564962