
Known Behavior

This section lists the limitations in Junos OS Release 14.1X53 for the QFX Series.

High Availability

  • On a Virtual Chassis Fabric that has more than two Routing Engines configured, a nonstop software upgrade (NSSU) might not succeed. PR1081786
  • On a QFX5100 switch, when you perform an in-service software upgrade (ISSU), interfaces configured with the Link Aggregation Control Protocol (LACP) and with the speed set to fast flap, causing all protocols on those interfaces to flap. As a workaround, before you perform an ISSU, set the LACP speed to slow on the switch and its peers. PR1106510
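    As a sketch of the workaround, on an aggregated Ethernet interface (the ae0 name is illustrative):

    [edit]
    user@switch# set interfaces ae0 aggregated-ether-options lacp periodic slow

    Apply the same setting on the peer devices before starting the ISSU.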
  • On an EX4300 or a QFX5100 Virtual Chassis, when you perform an NSSU, there might be more than five seconds of traffic loss for multicast traffic. PR1125155
  • On a QFX Series Virtual Chassis or Virtual Chassis Fabric, an NSSU to Release 14.1X53-D35 might cause a traffic loss of a few seconds for BUM traffic. PR1128208
  • On a QFX5100 Virtual Chassis, when you perform a nonstop software upgrade (NSSU) from Junos OS Release 14.1X53-D30.6 to Junos OS Release 14.1X53-D32, there might be traffic loss for up to one second. PR1154635
  • During a QFabric NSSU from Junos OS Release 12.2X50 to Release 14.1X53, multicast traffic might be impacted (loss or duplication) for up to 60 seconds while upgrading ICs. The variation in loss duration depends on the number of front/back cards on the IC, number of distribution trees passing through those ICs, and so on, because all forwarding paths need to be set up afresh after an upgrade. PR1225870
  • On QFabric systems, when the RSNG backup has a lower MAC/SYSID at reboot, you might observe an abrupt mastership switchover.

    As a workaround for the RSNG backup reboot:

    1. Log in to the RSNG master using SSH.
    2. Access the RSNG backup by issuing the request session member fpc-number backup command.
    3. On the RSNG backup, enter CLI configuration mode and issue the following commands: set interfaces me5 disable; set interfaces me6 disable.
    4. Commit the configuration. When prompted to confirm, enter yes and press Enter. The RSNG backup then reboots automatically.

    PR1240951

Infrastructure

  • On QFX5100 switches, Ethernet VPN (EVPN) routes from compute nodes can be withdrawn when no change has taken place on either the compute node or the QFX5100 switch. PR1106510
  • On a QFX5100 switch that has performed a pseudowire switchover, traffic might drop for 10 seconds immediately after the switchover. PR1049606

Interfaces and Chassis

  • On a QFX5100 switch, high ICMP delays are experienced when pinging directly connected integrated routing and bridging (IRB) interfaces. PR966905
  • On a QFX5100 switch, if you configure MC-LAG, IRB mac sync, and LACP force up, the number of packets received (rx) might be twice the amount sent (tx) from the customer edge to the core. PR1015655
  • If a transceiver is removed from a port on a QFX5100, QFX3600, or EX4300 switch within 30 seconds of converting the port into a Virtual Chassis port (VCP), the port might not get initialized as a VCP. PR1029829
  • QFX5100-48T 10GBASE-T copper ports support 100m speed in both autonegotiation and forced speed mode. To use 100m speed in autonegotiation mode, you must configure 100m speed on the peer. To use 100m speed in forced speed mode, delete the auto-negotiation statement by issuing the delete interfaces interface-name ether-options auto-negotiation configuration command. Deleting this statement puts the port into forced speed mode, in which 10GBASE-T copper ports support only 100m speed; the port speed is therefore set to 100m automatically, and you do not need to configure it explicitly. PR1044860

    Note: In Junos OS Release 14.1X53-D15, the ether-options statement is not available in the [edit interfaces interface-name] hierarchy.
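    For example, to put a hypothetical port xe-0/0/40 into forced speed mode on a release where the ether-options statement is available:

    [edit]
    user@switch# delete interfaces xe-0/0/40 ether-options auto-negotiation
    user@switch# commit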

  • When an EX4600 or a QFX5100 switch is downgraded from Junos OS Release 14.1X53-D15 or later to Junos OS Release 14.1X53-D10 or earlier, the 40-Gbps Ethernet interfaces on QSFP+ transceivers might not return to the UP state. As a workaround, power cycle the switch after the Junos OS upgrade. PR1061213
  • On 40-Gigabit Ethernet links between EX4300 and QFX5100 switches, you must disable autonegotiation on both ends of the link for the interfaces to remain up. On each switch, issue the set interfaces et-x/y/z ether-options no-auto-negotiation command. Because autonegotiation is disabled, you must also explicitly configure the link-mode and speed options on those interfaces. PR1118318
  • In a Q-in-Q tunneling configuration on a QFX5100 switch, if you configure a VLAN ID list on an ingress customer-VLAN (C-VLAN) interface but only configure a VLAN ID without vlan-id-list on the egress C-VLAN interface, packets that are sent to the egress C-VLAN interface might be dropped. As a workaround, configure the same VLAN ID list on both the ingress and the egress C-VLAN interfaces. PR1216732
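    For example, assuming illustrative interface names, unit number, and VLAN range, configure the same list on both C-VLAN interfaces:

    [edit]
    user@switch# set interfaces xe-0/0/1 unit 100 vlan-id-list 100-200
    user@switch# set interfaces xe-0/0/2 unit 100 vlan-id-list 100-200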
  • In a Q-in-Q tunneling configuration on a QFX5100 switch that is running under Junos OS Release 14.1X53-D40, if you configure a VLAN ID on the egress UNI interface that is the same as the SVLAN ID, and if the vlan-id-list statement is not configured on the logical interface on that UNI interface, Q-in-Q packets might be forwarded out with dual tags after they exit from the UNI interface. We recommend that you always include vlan-id-list in the Q-in-Q configuration.
  • On QFX5100 switches, if the loopback interface does not have an accept term for the management IP address out of the box, all traffic on the management port will be dropped. You must configure an explicit accept term on the loopback filter to "allow all traffic" to pass through the management interface. PR1215384
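    A minimal sketch of such a filter (the filter name protect-lo is illustrative):

    [edit]
    user@switch# set firewall family inet filter protect-lo term allow-all then accept
    user@switch# set interfaces lo0 unit 0 family inet filter input protect-lo

    In practice, you would typically add more specific terms for the management address ahead of the final accept term.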
  • Configuring link aggregation group (LAG) hashing with the [edit forwarding-options enhanced-hash-key] inet vlan-id statement uses the VLAN ID in the hashing algorithm calculation. On some switching platforms, when this option is configured for a LAG that spans FPCs, such as in a Virtual Chassis or Virtual Chassis Fabric (VCF), packets are dropped due to an issue with using an incorrect VLAN ID in the hashing algorithm. As a result, the vlan-id hashing option is not supported in a Virtual Chassis or VCF containing any of the following switches as members: EX4300, EX4600, QFX3500, QFX3600, QFX5100, or QFX5110 switches. Under these conditions, use any of the other supported enhanced-hash-key hashing configuration options instead. PR1293920

Layer 2 Features

  • On a QFabric system, system log messages might be flooded during the mapping of interfaces to VLANs. You can ignore these system log messages. PR1200853

Layer 3 Protocols

  • On QFX5100 switches, if you use the next-table statement in the configuration of a static route that is part of a virtual routing instance, the switch does not forward ICMP packets destined to a route that is present in the inet.0 routing table. PR970895

Layer 3 VPNs

  • In a Layer 3 VPN, if IRB is used between the penultimate hop and the PE node, you cannot check VRF connectivity using PE to PE ping. Pinging to the PE loopback address or interface IP address from the remote PE does not work. As a workaround, use CE to CE ping to verify VRF connectivity. PR1211462

MPLS

  • In an MPLS scenario on EX4600 or QFX Series switches with an aggregated Ethernet (AE) interface configured, after the IGP metric is changed and the AE interface is disabled, an fxpc process crash might be observed when a child next hop of a UNILIST points to NULL. PR1168150
  • If a link failure occurs when multiple LSPs are using a link-protected, fast-rerouted link, the convergence time is proportional to the number of LSPs sharing the protected link. PR1015806
  • In a scaled configuration for MPLS FRR and L2 circuit, the convergence time for FRR might increase. For L2 circuit, there might be packet drops. PR1016146
  • A pseudowire is a port-based Layer 2 circuit that emulates a service over a packet switched network (PSN). You can emulate any circuit end to end using a pseudowire. In the event of a link failure on a transit router that hosts a Layer 2 circuit over an RSVP tunnel, the traffic convergence time is approximately 350 milliseconds for a single pseudowire. PR1016992
  • On a QFX5100 switch, if an MPLS link is in hot standby mode and a pseudowire switchover is triggered by the event remote site local interface signaled down, traffic flowing through the pseudowire is dropped. PR1027755
  • On a QFX5100 switch that is using the Ethernet tagged mode of operation on a pseudowire, L2 control protocols can fail to come up between customer edge devices (CEs) across the pseudowire. This issue is not seen when the pseudowire mode of operation is Ethernet raw mode. PR1028537
  • For an L2 circuit on QFX5100 switches, when IS-IS is used as an IGP between CEs connected to an L2 circuit, the CEs fail to form an IS-IS adjacency over the pseudowire. As a workaround, consider using an alternative IGP protocol, such as OSPF. PR1032007
  • On a QFX5100 switch, the enhanced hash key does not work for MPLS-IP packets. PR1095136
  • On QFX5100 switches, MPLS ECMP with penultimate hop popping (PHP) does not work with single labels. PR1212113

Multicast Protocols

  • When an IGMP leave is sent from a host to a QFX5100 switch, one packet per multicast group is dropped during route programming. PR995331
  • On a QFabric system, when you use the set interfaces vlan disable command to disable VLAN interfaces that are running multicast at Layer 3, wait up to 4 minutes to enable the VLAN interfaces. Waiting to reenable the interfaces provides the time necessary to clear Layer 3 multicast groups, preventing the loss of multicast traffic. PR1194388
  • On a QFabric system, rate-limiting is enabled by default for multicast traffic. PR1198892

Network Management and Monitoring

  • This issue applies to the Cloud Analytics Engine feature. The Compute Agent Web API does not provide the option to configure the VXLAN destination port. PR1036372
  • This issue applies to the Cloud Analytics Engine feature. Compute Agent CPU utilization goes up when 50 flows are sent. As a workaround, use staggered probe initiation or lower the number of probes per second. PR1041190
  • This issue applies to the Cloud Analytics Engine feature. The Cloud Analytics Engine Compute Agent process (cagent) does not start after the server is rebooted. As a workaround, restart the cagent process manually or with customer automation framework scripts. PR1041931
  • On a QFX5100 switch, the DHCP relay bindings of clients bound via secondary addresses may get cleared when the primary address on the gateway interface configured as DHCP Smart Relay is modified or deactivated. PR1084911
  • On a QFabric system, an SNMP query might not complete because it is querying a nonexistent node-group. This node-group is present in the QFabric configuration but the corresponding node-device has been removed or powered off.

    1. From the shell, issue mysql -uroot -ppassword sfcdb -hdb -e "Call GetSNMPFabricChassisINEs();" to list devices in the database.
    2. Determine if there are devices listed that are not supposed to be there.
    3. Delete these devices’ configurations from the CLI.
    4. From the shell, confirm removal of the devices using the command in Step 1.

    PR1235326

  • After upgrading from Junos OS Release 12.2 to Junos OS Release 14.1X53-D40 in a QFabric system, queries for jnxFabricOperatingTable entries in the enterprise-specific SNMP Fabric Chassis MIB might not return results for the two Director groups DG0 and DG1. You can follow these steps to work around this issue:
    1. Remove the file /opt/dcf/config/.add_director from Director groups DG0 and DG1.
    2. Run the script /opt/dcf/scripts/move_cores.sh on Director group DG0 and on Director group DG1. This script is available by default on switches running Junos OS and is used at QFabric system setup time to add devices in the system to the MIB database; running it after a Junos OS upgrade adds DG0 and DG1 if they were not already present.
    3. Confirm the Director group entries are present using the show snmp mib walk ... mib-object CLI command or other SNMP MIB query tools to retrieve jnxFabricOperatingTable entries.

    PR1214737

OVSDB

  • On QFX5100 switches, the amount of time that it takes for other Juniper Networks devices that function as hardware virtual tunnel endpoints (VTEPs) to learn a new MAC address after the first packet is sent from this MAC address is a maximum of 4.5 seconds. (The amount of time depends upon the server configuration on which VMware NSX is running.) During this time, traffic destined for this MAC address is flooded into the VXLAN. PR962945
  • After the connections with NSX controllers are disabled on a Juniper Networks device, interfaces that were configured to be managed by OVSDB continue passing traffic. PR980577
  • QFX5100 switches do not support multiple service nodes for the handling of Layer 2 broadcast, unknown unicast, and multicast (BUM) traffic within an OVSDB-managed VXLAN. PR985872
  • If an entity with a particular MAC address is moved so that its traffic is handled by a different Juniper Networks device that functions as a hardware virtual tunnel endpoint (VTEP), this MAC address is not learned by entities served by the new hardware VTEP until the hardware VTEP that previously handled its traffic ages out the MAC address. During this transitional period, traffic destined for this MAC address is dropped. PR988270
  • On QFX5100 switches, an NSX controller occasionally overrides an existing local MAC with a remote MAC of the same address. If the Junos OS hardware VTEP detects such a condition (that is, it receives a remote MAC from the NSX controller that conflicts (matches) with an existing local MAC), the hardware VTEP in a Junos OS network accepts the remote MAC and stops publishing the local MAC to the NSX controller. PR991553
  • On QFX5100 switches, an active path in the OVSDB overlay, which you can view by using the show ovsdb mac operational command, does not always match the active path in the Layer 3 network underlay, which you can view by using the show route operational command. PR1015998
  • On QFX5100 switches, in NSX Manager, when a logical switch is deleted, the corresponding VXLAN on a QFX5100 switch might not be automatically deleted and might still appear in the output of the show vlans command. PR1024169
  • On a QFX5100 switch on which OVSDB-managed interfaces are automatically configured, if you delete the configuration of one or more of the interfaces from the switch using the delete vlans interfaces command, the interfaces will be automatically reconfigured per the logical switch, gateway service, and logical switch port configurations that still reside in NSX. Despite the automatic reconfiguration of the OVSDB-managed interfaces, an 8- to 12-second loss of traffic might occur. This loss is because local MAC addresses learned by the interfaces and port-to-logical switch bindings were cleared when the interfaces were deleted and must be re-learned after the interfaces are up and running again. PR1069889
  • If an NSX or Contrail controller pushes a large logical-system configuration to a QFX5100 switch, the existing Bidirectional Forwarding Detection (BFD) sessions with aggressive timers might flap. As a workaround, configure the BFD timer to be at least 1 second. PR1084780
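    As a sketch, BFD timers are configured per client protocol; for example, under OSPF (the interface name is illustrative), a 1-second minimum interval is:

    [edit]
    user@switch# set protocols ospf area 0.0.0.0 interface xe-0/0/0.0 bfd-liveness-detection minimum-interval 1000

    The minimum-interval value is in milliseconds.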
  • On QFX5100 switches, during a GRES, an FPC reboot, or an OVSDB server restart, the OVSDB session flaps between the TOR and the TOR agent, causing approximately 30 seconds of BUM traffic loss. PR1254188
  • We recommend that you do not use the clear ethernet switching command on an OVSDB-enabled switch because doing so might cause a delay in relearning MAC addresses, and depending on the scale of VNs, this delay can increase. To recover the switch once the logical interfaces are down because of a MAC move limit shutdown, use the clear ethernet-switching recovery-timeout command. PR1275025
  • Based on the current design, Contrail Release 2.20 (and later) supports TOR agent redundancy through the use of HAProxy. The TOR switch establishes a connection through HAProxy to one of the available TOR agents, based on current load. The TOR must reestablish the connection to a TOR services node (TSN) after the OVSDB session flaps, which is triggered by a GRES switchover. If the TOR establishes the connection to a TSN different from the one to which it was previously connected, there might be up to 2.5 minutes of BUM traffic loss. This behavior is expected per the current design.

    In a production network, load is distributed equally across all the TSNs. Under these conditions, when the OVSDB session flaps because of a GRES on the TOR, the TOR will most likely reconnect to the same TSN, so the issue is less likely to occur. PR1257494

  • The TSN receives BUM packets from the originating TOR and replicates them to the other TORs or software VTEPs. During a GRES on a QFX5100 Virtual Chassis at a scale of 60,000 MAC addresses and 2,000 remote VTEPs, up to 2 minutes of BUM traffic loss might occur. PR1268529

Platform and Infrastructure

  • On a QFX5100 switch, Bidirectional Forwarding Detection (BFD) sessions might continuously switch from on to off for several minutes after an in-service software upgrade (ISSU). PR980476

Port Security

  • Framing errors might be seen on a MACsec-enabled interface when an association number (AN) is refreshing. We recommend that you enable flow control on MACsec-enabled interfaces to reduce the number of framing errors. PR1261567
  • The following log is generated when a MAC move occurs (which is correct behavior):
    Mar 10 16:06:16  vdc-vcf-s1 l2ald[1830]: L2ALD_MAC_MOVE_EXCEEDED_BD: Limit on MAC moves exceeded at VLAN Contrail-c0065ee4-f22a-4b38-aba9-6f5d30a6fe2d for MAC 00:11:94:00:00:02 moved from interface et-1/0/53.3501 to interface et-0/0/16.3501; Mac move limit is 1. DROPPING THE PACKET

    However, when mac-move-limit is configured with a value greater than 5, the drop log might not be generated:

    set vlans vlan-name switch-options mac-move-limit number
    set vlans vlan-name switch-options mac-move-limit packet-action drop-and-log

    The recommended range for mac-move-limit values is 2 through 5. When configuring mac-move-limit, use values in this range. PR1261593
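    A minimal example with a recommended value, following the statement form above (the VLAN name v100 is illustrative; the exact hierarchy may vary by release):

    [edit]
    user@switch# set vlans v100 switch-options mac-move-limit 5
    user@switch# set vlans v100 switch-options mac-move-limit packet-action drop-and-log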

  • On QFX5100 switches, the MAC move limit feature can be used to detect and remove loops by taking actions like drop or shutdown when many MAC moves are detected. If the MAC move limit is configured with a packet action of drop on a VXLAN VLAN, it can take more than 4 minutes for traffic to resume after the loop is removed. PR1274359

QFabric Systems

  • On a QFabric system, if the fabric control Routing Engines are not load balanced, when you request a "component all" style software upgrade, the upgrade fails. PR892310
  • On a QFabric system, when you configure an alias for a Node device or an Interconnect device, use that alias when you configure a flow group. PR1032693
  • In a QFabric system, traffic loss is expected if the master of an RSNG is rebooted. Issue a switchover to make the device the backup of the RSNG before rebooting the device. PR1229949
  • On QFabric systems, during the reboot of the CPE primary device, there could be packet loss of control packets for a few seconds. The control packets have provisioning for retransmission, so protocols may not be impacted as long as the protocol dead-interval or hold-interval is more than 10 seconds during the CPE primary reboot. The data traffic is not impacted by the CPE primary reboot. PR1252908
  • SNMP traps sent out from a QFabric system have an additional object jnxQFabricEventSource (that is, 1.3.6.1.4.1.2636.3.42.1.1.1) to identify the component from which the SNMP trap was generated such as node-group, interconnect device name, and so on. This object is not part of the MIB definitions. As a workaround, program NMS Tools to handle this additional object and correctly identify the source of the trap. PR1269928

Routing Policy and Firewall Filters

  • On QFX Series Virtual Chassis, packets that are generated in the CPU and exit from a non-master FPC port might be subjected to an egress port-based firewall filter (PACL) and be egress filtered, while packets that exit from a master FPC port might not be egress filtered. PR923659
  • QFX5100 switches do not perform a VRF match for loopback filters configured in different routing instances. Per-routing-instance loopback filters (such as on lo0.100, lo0.103, or lo0.105) are not supported and can cause unpredictable behavior. We recommend that you apply the loopback filter only to lo0.0 (the master routing instance).

Routing Protocols

  • The device does not properly process a Neighbor Solicitation sent to its Subnet-Router Anycast address. PR693235

Security

  • The following control packets share the same policer (burst and bandwidth) in hardware, so changing one in the DDoS protection CLI also changes the DDoS parameter for other protocols:
    • STP, PVSTP, and LLDP share DDoS parameters
    • l3mtu-fail, TTL, and ip-opt share DDoS parameters
    • RSVP, LDP, and BGP share DDoS parameters
    • unknown-l2mc, RIP, and OSPF share DDoS parameters

    PR1211911

Software Installation and Upgrade

  • On a QFX5100 switch, system logs might not be retained after a unified in-service software upgrade (ISSU), due to the data disk being reformatted during the ISSU. PR964950
  • On QFX5100 switches, if a port mirroring analyzer is configured with a VLAN input and you perform an ISSU, the analyzer state is restored after the upgrade. If you later delete the analyzer configuration, mirroring stops, but there might be harmless stale entries in the hardware. PR970011
  • On QFX3500 and QFX5100 switches, the amount of time that it takes for Zero Touch Provisioning to complete might be lengthy because TFTP might take a long time to fetch required data. PR980530
  • On a QFabric system, during an NSSU upgrade from Junos OS Release 13.2X52 to 14.1X53-D40, traffic loss may be observed during RSNG upgrade. PR1207804
  • On QFX5100 switches, an ISSU from Junos OS Release 14.1X53-D30 to 14.1X53-D40 is not supported. As a workaround, perform a standard software upgrade by downloading the new software version and rebooting the switch during a maintenance window. PR12209272

System Management

  • On the QFX Series, do not mix the set license keys key configuration statement and the request system license add terminal operational command when installing and deleting licenses. For example, if you install a license by using set license keys key, delete it by using the delete license keys command. Likewise, if you install a license by using the request system license add commands, delete it by using the request system license delete commands. PR1023672
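    For example, to install and delete a license with the matching configuration-mode pair (the key string is a placeholder):

    [edit]
    user@switch# set system license keys key "license-key-string"
    user@switch# delete system license keys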

Storage and Fibre Channel

  • Each Fibre Channel fabric on an FCoE-FC gateway supports a maximum of four Fibre Channel over Ethernet (FCoE) VLAN interfaces.
  • The maximum number of logins for each FCoE node (ENode) is in the range of 32 through 2500. (Each ENode can log in to a particular fabric up to the maximum number of configured times. The maximum number of logins is per fabric, so an ENode can log in to more than one fabric and have its configured maximum number of logins on each fabric.)
  • The maximum number of FCoE sessions for the switch, which equals the total number of fabric login (FLOGI) sessions plus the total number of fabric discovery (FDISC) sessions, is 2500.
  • The maximum number of FIP snooping sessions per QFX3500 switch is 2500.
  • When you configure FIP snooping filters, if the filters consume more space than is available in the ternary content-addressable memory (TCAM), the configuration commit operation succeeds even though the filters are not actually implemented in the configuration. Because the commit operation checks syntax but does not check available resources, it appears as if the FIP snooping filters are configured, but they are not. The only indication of this issue is that the switch generates a system log message that the TCAM is full. You must check the system log to find out if a TCAM full message has been logged if you suspect that the filters have not been implemented.
  • You cannot use a fixed classifier to map FCoE traffic to an Ethernet interface. The FCoE application type, length, and value (TLV) carries the FCoE priority-based flow control (PFC) information when you use an explicit IEEE 802.1p classifier to map FCoE traffic to an Ethernet interface. You cannot use a fixed classifier to map FCoE traffic to an Ethernet interface because untagged traffic is classified in the FCoE forwarding class, but FCoE traffic must have a priority tag (FCoE traffic cannot be untagged).

    For example, the following behavior aggregate classifier configuration is supported:

    [edit class-of-service]
    user@switch# set congestion-notification-profile fcoe-cnp input ieee-802.1 code-point 011 pfc
    user@switch# set interfaces xe-0/0/24 unit 0 classifiers ieee-802.1 fcoe

    For example, the following fixed classifier configuration is not supported:

    [edit class-of-service]
    user@switch# set interfaces xe-0/0/24 unit 0 forwarding-class fcoe

  • On a QFX Series device, a DCBX interoperability issue between 10-Gigabit Ethernet interfaces on QFX Series devices and 10-Gigabit Ethernet interfaces on another vendor’s devices can prevent the two interfaces from performing DCBX negotiation successfully in the following scenario:
    1. On a QFX Series 10-Gigabit Ethernet interface, LLDP is running, but DCBX is disabled.
    2. On the other vendor's 10-Gigabit Ethernet interface, both LLDP and DCBX are running, but the interface is administratively down.
    3. When you bring the other vendor's 10-Gigabit Ethernet interface up by issuing the no shutdown command, that device sends DCBX 1.01 (CEE) TLVs but receives no acknowledgment (ACK) messages from the QFX Series device, because DCBX is not enabled on the QFX Series device. After a few tries, the other vendor's device sends DCBX 1.00 (CIN) TLVs and again receives no ACK messages from the QFX Series device.
    4. Enable DCBX on the QFX Series 10-Gigabit Ethernet interface. The interface sends DCBX 1.01 (CEE) TLVs, but the other vendor's device ignores them and replies with DCBX 1.00 (CIN) TLVs. The other vendor's device does not attempt to send or acknowledge DCBX 1.01 TLVs, only DCBX 1.00 TLVs.

    In this case, the QFX Series device ignores the DCBX 1.00 (CIN) TLVs because the QFX Series does not support DCBX 1.00 (the QFX Series supports DCBX 1.01 and IEEE DCBX). The result is that the DCBX capabilities negotiation between the two interfaces fails.

Traffic Management

  • On a QFX5100 switch, running tcpdump on the console might cause system instability or cause protocols such as STP or LACP to fail. PR932592
  • CoS on Virtual Chassis access interfaces is the same as CoS on QFX Series access interfaces with the exception of shared buffer settings. All of the documentation for QFX Series CoS on access interfaces applies to Virtual Chassis access interfaces.

    Virtual Chassis access interfaces support the following CoS features:

    • Forwarding classes—The default forwarding classes, queue mapping, and packet drop attributes are the same as on QFX Series access interfaces:

      Default Forwarding Class    Default Queue Mapping    Default Packet Drop Attribute
      best-effort (be)            0                        drop
      fcoe                        3                        no-loss
      no-loss                     4                        no-loss
      network-control (nc)        7                        drop
      mcast                       8                        drop

    • Packet classification—Classifier default settings and configuration are the same as on QFX Series access interfaces. Support for behavior aggregate, multifield, multidestination, and fixed classifiers is the same as on QFX Series access interfaces.
    • Enhanced transmission selection (ETS)—This data center bridging (DCB) feature that supports hierarchical scheduling has the same defaults and user configuration as on QFX Series access interfaces, including forwarding class set (priority group) and traffic control profile configuration.
    • Priority-based flow control (PFC)—This DCB feature that supports lossless transport has the same defaults and user configuration as on QFX Series access interfaces, including support for six lossless priorities (forwarding classes).
    • Ethernet PAUSE—Same defaults and configuration as on QFX Series access interfaces.
    • Queue scheduling—Same defaults, configuration, and scheduler-to-forwarding-class mapping as on QFX Series access interfaces. Queue scheduling is a subset of hierarchical scheduling.
    • Priority group (forwarding class set) scheduling—Same defaults and configuration as on QFX Series access interfaces. Priority group scheduling is a subset of hierarchical scheduling.
    • Tail-drop profiles—Same defaults and configuration as on QFX Series access interfaces.
    • Code-point aliases—Same defaults and configuration as on QFX Series access interfaces.
    • Rewrite rules—As on the QFX Series access interfaces, there are no default rewrite rules applied to egress traffic.
    • Host outbound traffic—Same defaults and configuration as on QFX Series access interfaces.

    The default shared buffer settings and shared buffer configuration are also the same as on QFX Series access interfaces, except that the shared buffer configuration is global and applies to all access ports on all members of the Virtual Chassis. You cannot configure different shared buffer settings for different Virtual Chassis members.

  • Similarities in CoS support on VCP interfaces and QFabric system Node device fabric interfaces—VCP interfaces support full hierarchical scheduling (ETS). ETS includes:
    • Creating forwarding class sets (priority groups) and mapping forwarding classes to forwarding class sets.
    • Scheduling for individual output queues. The scheduler defaults and configuration are the same as the scheduler on access interfaces.
    • Scheduling for priority groups (forwarding class sets) using a traffic control profile. The defaults and configuration are the same as on access interfaces.

    No other CoS features are supported on VCP interfaces.

    Note: You cannot attach classifiers, congestion notification profiles, or rewrite rules to VCP interfaces. Also, you cannot configure buffer settings on VCP interfaces. Similar to QFabric system Node device fabric interfaces, you can only attach forwarding class sets and traffic control profiles to VCP interfaces.

    The behavior of lossless traffic across 40-Gigabit VCP interfaces is the same as the behavior of lossless traffic across QFabric system Node device fabric ports. Flow control for lossless forwarding classes (priorities) is enabled automatically. The system dynamically calculates buffer headroom that is allocated from the global lossless headroom buffer for the lossless forwarding classes on each 40-Gigabit VCP interface. If there is not enough global headroom buffer space to support the number of lossless flows on a 40-Gigabit VCP interface, the system generates a syslog message.

    Note: After you configure lossless transport on a Virtual Chassis, check the syslog messages to ensure that there is sufficient buffer space to support the configuration.

    Note: If you break out a 40-Gigabit VCP interface into 10-Gigabit VCP interfaces, lossless transport is not supported on the 10-Gigabit VCP interfaces. Lossless transport is supported only on 40-Gigabit VCP interfaces.

  • Differences in CoS support on VCP interfaces and QFabric system Node device fabric interfaces—Although most of the CoS behavior on VCP interfaces is similar to CoS behavior on QFabric system Node device fabric ports, there are some important differences:

    • Hierarchical scheduling (queue and priority group scheduling)—On QFabric system Node device fabric interfaces, you can apply a different hierarchical scheduler (traffic control profile) to different priority groups (forwarding class sets) on different interfaces. However, on VCP interfaces, the schedulers you apply to priority groups are global to all VCP interfaces. One hierarchical scheduler controls scheduling for a priority group on all VCP interfaces.

      You attach a scheduler to VCP interfaces using the global identifier (vcp-*) for VCP interfaces. For example, if you want to apply a traffic control profile (which contains both queue and priority group scheduling configuration) named vcp-fcoe-tcp to a forwarding class set named vcp-fcoe-fcset, you include the following statement in the configuration:

      [edit]
      user@switch# set class-of-service interfaces vcp-* forwarding-class-set vcp-fcoe-fcset output-traffic-control-profile vcp-fcoe-tcp

      The system applies the hierarchical scheduler vcp-fcoe-tcp to the traffic mapped to the priority group vcp-fcoe-fcset on all VCP interfaces.

    • You cannot attach classifiers, congestion notification profiles, or rewrite rules to VCP interfaces. Also, you cannot configure buffer settings on VCP interfaces. Similar to QFabric system Node device fabric interfaces, you can only attach forwarding class sets and traffic control profiles to VCP interfaces.
    • Lossless transport is supported only on 40-Gigabit VCP interfaces. If you break out a 40-Gigabit VCP interface into 10-Gigabit VCP interfaces, lossless transport is not supported on the 10-Gigabit VCP interfaces.
  • On a QFX5100 switch, CPU-generated host outbound traffic is forwarded on the network-control forwarding class, which is mapped to queue 7. If you use the default scheduler, the network-control queue receives a guaranteed minimum bandwidth (transmit rate) of 5 percent of port bandwidth. The guaranteed minimum bandwidth is more than sufficient to ensure lossless transport of host outbound traffic.

    However, if you configure a scheduler, you must ensure that the network-control forwarding class (or whatever forwarding class you configure for host outbound traffic) receives sufficient guaranteed bandwidth to prevent packet loss.

    If you configure a scheduler, we recommend that you configure the network-control queue (or the queue you configure for host outbound traffic if it is not the network-control queue) as a strict-high priority queue. Strict-high priority queues receive the bandwidth required to transmit their entire queues before other queues are served.

    Note: As with all strict-high priority traffic, if you configure the network-control queue (or any other queue) as a strict-high priority queue, you must also create a separate forwarding class set (priority group) that contains only strict-high priority traffic, and apply the strict-high priority forwarding class set and its traffic control profile (hierarchical scheduler) to the relevant interfaces.
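
    For example, a minimal sketch of such a configuration, assuming hypothetical names (nc-sched, host-smap, nc-fcset, nc-tcp) and an example interface xe-0/0/0:

    [edit class-of-service]
    user@switch# set schedulers nc-sched priority strict-high
    user@switch# set scheduler-maps host-smap forwarding-class network-control scheduler nc-sched
    user@switch# set forwarding-class-sets nc-fcset class network-control
    user@switch# set traffic-control-profiles nc-tcp scheduler-map host-smap
    user@switch# set interfaces xe-0/0/0 forwarding-class-set nc-fcset output-traffic-control-profile nc-tcp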

  • You cannot apply classifiers and rewrite rules to IRB interfaces because the members of an IRB interface are VLANs, not interfaces. You can apply classifiers and rewrite rules to Layer 2 logical interfaces and Layer 3 physical interfaces that are members of VLANs that belong to IRB interfaces.
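
    For example, to attach an IEEE 802.1p classifier and rewrite rule to a Layer 2 logical interface that is a member of a VLAN belonging to an IRB interface (the classifier and rewrite-rule names are hypothetical):

    [edit class-of-service]
    user@switch# set interfaces xe-0/0/1 unit 0 classifiers ieee-802.1 dot1p-classifier
    user@switch# set interfaces xe-0/0/1 unit 0 rewrite-rules ieee-802.1 dot1p-rewrite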

Virtual Chassis and Virtual Chassis Fabric

  • On a mixed-mode Virtual Chassis Fabric, during a Routing Engine switchover, the system might experience a 200- to 300-millisecond loss of traffic. PR964987
  • On a mixed Virtual Chassis Fabric (VCF), control plane packets, including control packets for OSPF or PIM, are not mirrored by the native analyzer when the output port belongs to another member switch. PR969542
  • If a VCF is connected to a Juniper Networks router with a flexible PIC concentrator (FPC) and an xSTP bridge protocol data unit is distributed to the FPC, there might be traffic loss when the FPC is rebooted. PR990247
  • When a Virtual Chassis port (VCP) is added between two QFX5100 member switches that are already interconnected using a VCP, a VCP link aggregation group (LAG) is formed and some multicast packets between the two member switches might be duplicated. PR1007204
  • On a mixed-mode VCF, if you perform a nonstop software upgrade (NSSU) and a MAC address is present on the ingress or egress Packet Forwarding Engine, in some cases known Layer 2 unicast traffic might still be flooded over the VLAN. PR1013416
  • On QFX5100 mixed-mode Virtual Chassis or Virtual Chassis Fabric (VCF) systems that include QFX3500 or QFX3600 switches, a MACsec configuration cannot be committed because MACsec is not supported on QFX3500 and QFX3600. PR1024921
  • On a mixed Virtual Chassis Fabric (VCF), a VCP link between two members disappears after you perform a nonstop software upgrade. The show virtual-chassis protocol adjacency member command output shows the state of the VCP link as Initializing. PR1031296
  • In a mixed-mode Virtual Chassis with QFX3500 switches, if multicast packets are sent to the Routing Engine at a very high rate, the Virtual Chassis might become unresponsive. PR1117133
  • In a large-scale Virtual Chassis Fabric (VCF), you might see timeout errors when running the command request system reboot all-members at now to immediately reboot all members of the VCF. As a workaround, omit the at now option and run request system reboot all-members instead. PR1215130
  • If a QFX5100 switch running Junos OS Release 14.1X53-D40 or later is in the same Virtual Chassis or Virtual Chassis Fabric (VCF) as a Juniper Networks device that does not support Virtual Extensible LAN (VXLAN), for example, an EX4300 switch, the Junos OS CLI of the EX4300 switch supersedes the Junos OS CLI of the QFX5100. As a result, the vxlan configuration statement at the [edit vlans vlan-name] hierarchy level is not present. PR1176054

VXLAN

  • On a QFX5100 switch with a VXLAN configured, (S,G) interface entries downstream from a VXLAN interface might be missing from the multicast routing table but be present in the kernel and Packet Forwarding Engine. In this circumstance, traffic is forwarded as expected. PR1027119
  • If VXLANs with VLAN IDs 1 and 2 are configured on a QFX5100 switch, the replicated packets for these VXLANs should include the VLAN tag 1 or 2, respectively. Instead, the replicated packets for these VXLANs are untagged, which might result in the packets being dropped by a Juniper Networks device that receives them. To avoid this situation, when configuring a VXLAN on a QFX5100 switch, we recommend using a VLAN ID of 3 or higher. PR1072090
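
    For example, a sketch of a VXLAN configuration that follows this recommendation (the VLAN name, VLAN ID, and VNI shown are hypothetical):

    [edit]
    user@switch# set vlans v100 vlan-id 100
    user@switch# set vlans v100 vxlan vni 100100
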
  • QFX5100 switches do not support ingress VLAN firewall flood filters. If you configure such a filter by issuing the set vlans forwarding-options flood input command on a QFX5100 switch, the filter is implemented on egress traffic instead of on ingress traffic, which causes unexpected results. The unexpected results especially impact packets in which a VLAN header is added or removed in egress traffic, for example, IRB traffic and VXLAN traffic. As a workaround for these types of traffic, we recommend applying a filter policy on the ingress VLAN traffic and not using the flood keyword in the command that you issue. PR1166200
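
    For example, a sketch of the workaround, assuming a hypothetical VLAN named v100 and a hypothetical firewall filter named v100-ingress applied to ingress VLAN traffic without the flood keyword:

    [edit]
    user@switch# set firewall family ethernet-switching filter v100-ingress term t1 then accept
    user@switch# set vlans v100 forwarding-options filter input v100-ingress
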
  • QFX5100 switches do not support ingress VLAN firewall flood filters. If you configure such a filter by issuing the set vlans forwarding-options flood input command on a QFX5100 switch, the filter is implemented on egress traffic instead of on ingress traffic, which causes unexpected results. Further, if the filter includes policer as the action, the rate at which traffic is flooded to egress interfaces is reduced by a factor of the number of egress interfaces with respect to the committed information rate (CIR). For example, if the CIR is 4g and the number of egress interfaces in the VLAN is 2, the amount of traffic flooded to each interface is reduced to approximately half of the CIR traffic (4g/2 = 2g). Similarly, if the CIR is 4g and the number of egress interfaces in the VLAN is 4, the amount of traffic flooded to each interface is reduced to approximately a quarter of the CIR traffic (4g/4 = 1g). PR1166439
  • QFX5100 switches do not support ingress VLAN firewall flood filters. If you configure such a filter by issuing the set vlans forwarding-options flood input command and specify policer as the action on a QFX5100 switch, the filter is implemented on egress traffic instead of on ingress traffic, which causes unexpected results, especially for integrated routing and bridging (IRB) traffic or VXLAN traffic. For example, in the case of Layer 2 traffic intended for VLAN 101 and temporarily encapsulated with a VLAN header (VLAN 100), such a filter applied to VLAN 100 might result in the ingress interfaces in VLAN 101 being flooded by traffic intended for VLAN 100. Further, in the case of routing traffic between VLANs, traffic intended for VLAN 101 might be routed to the IRB interface associated with VLAN 100, or in the case of VXLAN traffic, to a virtual tunnel endpoint (VTEP) on which VLAN 100 is configured. PR1168777
  • While setting up multihoming active-active mode on a link aggregation group (LAG) interface on a QFX5100 standalone switch or QFX5100 Virtual Chassis in an EVPN-VXLAN topology, check to see if the LAG interface on which you are configuring an Ethernet segment identifier (ESI) is already configured for flexible VLAN tagging. If so, all logical subinterfaces associated with this interface must be activated. If one logical subinterface is deactivated, unexpected behavior related to the ESI and designated forwarder (DF) election might occur. PR1189830
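
    For example, if a hypothetical LAG interface ae0 has a deactivated logical unit 100, you can reactivate it in configuration mode:

    [edit]
    user@switch# activate interfaces ae0 unit 100
    user@switch# commit
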
  • When the mac-move-limit configuration statement is configured with packet-action drop or drop-and-log in OVSDB-managed VXLAN bridge domains on QFX5100, a loop might not completely cease when the MAC address move limit is exceeded, in a case where traffic circulates across VTEPs in one direction only. A continuous loop might be seen across VTEPs. As a workaround, use packet-action shutdown for better loop detection and prevention, in conjunction with storm control. PR1266446
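
    For example, a sketch of the recommended workaround, assuming a CLI-managed VLAN named v100, a hypothetical storm control profile named sc-default, and an example access interface xe-0/0/2:

    [edit]
    user@switch# set vlans v100 switch-options mac-move-limit 5 packet-action shutdown
    user@switch# set forwarding-options storm-control-profiles sc-default all
    user@switch# set interfaces xe-0/0/2 unit 0 family ethernet-switching storm-control sc-default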

Modified: 2017-11-29