Configuring Fate Sharing Mitigation Across the Interconnect Device by Remapping Traffic Flows (Forwarding Classes)

On a QFabric system, traffic flows that belong to the same forwarding class are mapped to the same output queue and share the output queue resources. If congestion occurs on one of these flows, the congestion can affect the uncongested flows in the forwarding class when the flows use the same ingress interface.

For example, if a congested flow is paused to prevent packet loss, uncongested flows that use the same ingress interface are also paused because they share the same forwarding class and output queue. When a congested flow affects an uncongested flow, the flows share the same fate—this is known as fate sharing.

Fate sharing happens because pausing traffic is based on forwarding class. When a flow experiences congestion, the output queue sends a pause message to the input queue on which the flow arrived. On that input queue, the pause message affects all traffic in the forwarding class that is mapped to the congested output queue. So all traffic in that forwarding class is paused on the input queue, not just the flow that is experiencing the congestion. This is how uncongested flows can share fate with a congested flow.

Traffic from many QFabric system Node devices crosses the Interconnect device, so flows within a given forwarding class are aggregated on the Interconnect device. The aggregated flows use the same output queue on the Interconnect device and are subject to fate sharing if the flows also use the same ingress interface.

In addition to the external physical interfaces that connect the Interconnect device to Node devices, the Interconnect device has internal Clos interfaces. The Interconnect device automatically selects the best path through its internal Clos interfaces; this path selection is not configurable. On Node devices, you control which traffic arrives on an ingress interface, but on the Interconnect device you cannot control which flows use a particular internal ingress Clos interface, so fate sharing can occur on the Interconnect device.

However, you can use firewall filters to separate the traffic assigned to one forwarding class and split it into different forwarding classes for the journey across the Interconnect device. Remapping the flows into different forwarding classes means the flows use different output queues on the Interconnect device. If the flows use the same ingress interface on the Interconnect device, they do not experience fate sharing because only the flows mapped to the congested queue are paused, while the flows remapped to other forwarding classes are not paused.

This topic shows you how to configure firewall filters to remap traffic across the Interconnect device and mitigate fate sharing.

To change the forwarding class (and therefore the output queue) that traffic uses on the Interconnect device, you need to map traffic into a new forwarding class before it enters the Interconnect device, then map the traffic back into the original forwarding class after it exits the Interconnect device. Traffic needs to be mapped back into its original forwarding class before it leaves the QFabric system because the original forwarding class contains similar traffic, and is configured to support the CoS that the traffic type requires and the destination device expects. For example, FCoE traffic destined for different targets in the same Fibre Channel storage area network must be in the same forwarding class (and therefore have the same IEEE 802.1p priority), or the traffic is not handled properly.

The firewall filter has to remap traffic in both directions of flow. For example, if a flow transports traffic between a server and a target device, remapping needs to occur when traffic flows from the server to the target device, and also when traffic flows from the target device to the server. Firewall filter terms contain match conditions (from statement) to identify traffic, and actions (then statement) to tell the system what to do with the identified traffic.

You configure a firewall filter for fate sharing mitigation in the firewall family ethernet-switching hierarchy. You cannot configure firewall filters to mitigate fate sharing in the inet (IPv4) or inet6 (IPv6) firewall family hierarchies.

To mitigate fate sharing across the Interconnect device, you need to configure a firewall filter that:

  1. Identifies and remaps traffic flowing from a source to a destination before it enters the Interconnect device. (This separates flows for crossing the Interconnect device.)

  2. Identifies and remaps traffic flowing from a source to a destination after it exits the Interconnect device. (This brings flows back into their original forwarding class before traffic is forwarded toward its destination.)

    Steps 1 and 2 combine to remap flows across the Interconnect device as traffic travels from a source to a destination.

  3. Identifies and remaps traffic flowing back from a destination to a source before it enters the Interconnect device. (This separates flows for crossing the Interconnect device in the other direction.)

  4. Identifies and remaps traffic flowing back from a destination to a source after it exits the Interconnect device. (This brings flows back into their original forwarding class in the other direction.)

    Steps 3 and 4 combine to remap flows across the Interconnect device on the return path, as traffic flows from the destination device back to the original source device.

  5. Accepts other traffic. Because firewall filters have an implicit default discard terminating action, include a final accept term so that traffic that does not match the filter is not dropped (unless you want to drop traffic that does not match the filter).

You can use the following match conditions in the filter term from statement to identify (select) traffic that you want to remap as it crosses the Interconnect device:

  • Client-side MAC address (for example, an FCF MAC address for FCoE traffic) (destination-mac-address mac-address) or (source-mac-address mac-address)

  • Server-side MAC address (for example, an ENode MAC address for FCoE traffic) (destination-mac-address mac-address) or (source-mac-address mac-address)

  • EtherType (ether-type value)

    Note:

    If you remap an FCoE flow using EtherType as a match condition, you need to include two terms in the filter in each direction of flow to identify the traffic, one term to identify FCoE traffic (EtherType 0x8906), and one term to identify FIP traffic (EtherType 0x8914).

  • VLAN (vlan (vlan-name | vlan-id))

  • .1q user priority (dot1q-user-priority value)

Match conditions enable you to identify traffic in VLANs that carry a mix of traffic types—for example, you can identify a flow within a VLAN based on EtherType or .1q value. For more information about match conditions, see Firewall Filter Match Conditions and Actions (QFX5100, QFX5110, QFX5120, QFX5200, EX4600, EX4650).
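For example, as the note above describes, identifying an FCoE flow by EtherType requires two terms in each direction of flow, one for FCoE frames and one for FIP frames. A sketch of the two match terms follows (the filter and term names are illustrative placeholders; substitute your own names):

```
[edit firewall family ethernet-switching]
user@switch# set filter filter-name term fcoe-flow from ether-type 0x8906
user@switch# set filter filter-name term fip-flow from ether-type 0x8914
```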

Best Practice:

For FCoE traffic, we recommend that you use the FCF MAC address (instead of the ENode MAC address) as the source or destination address when you configure a firewall filter, because an ENode might be able to reach more than one FCF. Using the FCF MAC is the most specific way to identify the correct path for the traffic.

Note:

You cannot match on multicast addresses based on prefix. You must use a specific multicast address as the source or destination address.

In the same filter term from statement, you specify a match condition to determine whether you are identifying traffic that is flowing from a Node device into the Interconnect device, or traffic that is flowing from the Interconnect device to a Node device:

  • to-fabric <except>—This condition matches traffic that flows from a Node device to an Interconnect device (traffic that is exiting a Node device and entering the Interconnect device). Traffic that matches the to-fabric condition is remapped before it exits the ingress Node device and enters the Interconnect device.

    The except option remaps forwarding classes for traffic that is locally switched. For example, if a target device is directly connected to a Node device, the traffic destined for the directly connected target is remapped to the new forwarding class. When you specify the except option, traffic that is remotely switched is not remapped to a new forwarding class before it crosses the Interconnect device.

  • from-fabric—This condition matches traffic that flows from the Interconnect device to a Node device (traffic that is exiting the Interconnect device and entering the egress Node device). Traffic that matches the from-fabric condition is mapped back to its original forwarding class after it exits the Interconnect device, when it enters the egress Node device.

Best Practice:

In a firewall filter configuration, if you use a to-fabric except match condition, place it before the from-fabric term in the sequence of terms in the filter.

In general, we recommend that in a filter, you configure the to-fabric terms first, then configure the from-fabric terms.

After you configure match conditions in a filter term, you configure an action to take on the identified (matched) traffic in the same term. Because the goal is to remap traffic in one forwarding class into a different forwarding class, the action is usually to place the matched traffic into a forwarding class.

Use the following actions (then statement) in a given term to control which forwarding class the matched traffic is remapped into:

  • forwarding-class forwarding-class-name—Specify a default or a user-defined forwarding class into which matching traffic is mapped.

  • loss-priority level—If you specify a forwarding class for matching traffic, you must also specify the packet loss priority (PLP) level for the forwarding class. The PLP level can be low, medium-high, or high.

  • count counter-name—Optionally, you can configure an action to count the number of packets affected by each term.

    Note:

    You can use the match conditions to identify a traffic flow, and then count the packets without remapping the forwarding class. To do that, in the then statement, do not include the forwarding-class and loss-priority actions; include only the count action.

After you configure a firewall filter that remaps traffic across the Interconnect device in both directions of flow, you bind (apply) the filter to an ingress (input) VLAN. The filter only affects traffic in that VLAN.
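For example, assuming a filter named filter-name and a VLAN named vlan-name (both placeholders), a binding of the following general form applies the filter as an input filter on the VLAN; verify the exact statement against your platform's VLAN filter configuration:

```
[edit]
user@switch# set vlans vlan-name filter input filter-name
```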

The following procedure shows how to configure a firewall filter that mitigates fate sharing on the Interconnect device using the CLI. Steps 1-4 configure forwarding class remapping for traffic leaving an ingress Node device and entering the Interconnect device (to-fabric), in both directions of flow. Steps 5-8 configure forwarding class remapping for traffic leaving the Interconnect device and entering the egress Node device (from-fabric), in both directions of flow.

  1. Name the firewall filter and the first term of the filter, and then define match conditions for traffic flowing from the ingress Node device to the Interconnect device in the server-to-target direction (this filter term identifies the traffic to map into a different forwarding class):
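    A statement of the following general form configures this term (the filter name and the term name server-to-target-to-fabric are illustrative placeholders; substitute your own names and match conditions):

    ```
    [edit firewall family ethernet-switching]
    user@switch# set filter filter-name term server-to-target-to-fabric from flow-match-conditions to-fabric
    ```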

    The flow-match-conditions specify the traffic that you want to remap to a different forwarding class on the ingress Node device for transport across the Interconnect device. The to-fabric condition matches only traffic that is going from the ingress Node device to the Interconnect device.

  2. In the same firewall filter and term, configure the action to take on traffic on the ingress Node device that matches the conditions in the server-to-target direction (the action is to map the traffic into a different forwarding class on the ingress Node device, before the traffic enters the Interconnect device):
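    A statement of the following general form configures the action in the same term (the names are illustrative placeholders; the count action is optional):

    ```
    [edit firewall family ethernet-switching]
    user@switch# set filter filter-name term server-to-target-to-fabric then forwarding-class new-forwarding-class-name loss-priority level count counter-name
    ```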

    The new-forwarding-class-name specifies the forwarding class that the matching traffic is mapped to for transport across the Interconnect device. The packet counter action is optional, but is included here and in later steps because many administrators like to have this type of information available to analyze traffic patterns.

  3. In the same firewall filter, configure a second term to define match conditions for traffic flowing from the ingress Node device to the Interconnect device in the target-to-server direction (this filter term identifies the traffic to map into a different forwarding class):
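    A statement of the following general form configures the second term (the term name target-to-server-to-fabric is an illustrative placeholder):

    ```
    [edit firewall family ethernet-switching]
    user@switch# set filter filter-name term target-to-server-to-fabric from flow-match-conditions to-fabric
    ```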

    The flow-match-conditions specify the traffic that you want to remap to a different forwarding class on the ingress Node device for transport across the Interconnect device. The to-fabric condition matches only traffic that is going from the ingress Node device to the Interconnect device.

  4. In the second term in the same firewall filter, configure the action to take on traffic on the ingress Node device that matches the conditions in the target-to-server direction (the action is to map the traffic into a different forwarding class on the ingress Node device, before the traffic enters the Interconnect device):
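    A statement of the following general form configures the action in the second term (names are illustrative placeholders):

    ```
    [edit firewall family ethernet-switching]
    user@switch# set filter filter-name term target-to-server-to-fabric then forwarding-class new-forwarding-class-name loss-priority level count counter-name
    ```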

    The first four steps of this process configure match conditions to identify Interconnect device ingress traffic in both directions of flow, and the forwarding class remapping action to take on the matched traffic. The next four steps map the traffic back into its original forwarding class after the traffic exits the Interconnect device, in both directions of flow.

  5. In the same firewall filter, configure a third term to define match conditions for traffic flowing from the Interconnect device to the egress Node device in the server-to-target direction (this term identifies traffic to map back into the original forwarding class after it crosses the Interconnect device):
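    A statement of the following general form configures the third term (the term name server-to-target-from-fabric is an illustrative placeholder):

    ```
    [edit firewall family ethernet-switching]
    user@switch# set filter filter-name term server-to-target-from-fabric from flow-match-conditions from-fabric
    ```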

    The flow-match-conditions specify the traffic that you want to map back into the original forwarding class on the egress Node device, after the traffic crosses the Interconnect device. The from-fabric condition matches only traffic that is coming from the Interconnect device into the egress Node device.

  6. In the third term in the same firewall filter, configure the action to take on traffic when it enters the egress Node device from the Interconnect device in the server-to-target direction (the action is to map the traffic back into its original forwarding class on the egress Node device, after the traffic crosses the Interconnect device):
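    A statement of the following general form configures the action in the third term (names are illustrative placeholders):

    ```
    [edit firewall family ethernet-switching]
    user@switch# set filter filter-name term server-to-target-from-fabric then forwarding-class original-forwarding-class-name loss-priority level count counter-name
    ```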

    The original-forwarding-class-name specifies the original forwarding class name (the forwarding class the traffic was first classified into when it entered the QFabric system). Traffic that matches the conditions in Step 5 is mapped back into its original forwarding class when it enters the egress Node device, after the traffic crosses the Interconnect device.

  7. In the same firewall filter, configure a fourth term to define match conditions for traffic flowing from the Interconnect device to the egress Node device in the target-to-server direction (this term identifies traffic to map back into the original forwarding class after it crosses the Interconnect device):
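    A statement of the following general form configures the fourth term (the term name target-to-server-from-fabric is an illustrative placeholder):

    ```
    [edit firewall family ethernet-switching]
    user@switch# set filter filter-name term target-to-server-from-fabric from flow-match-conditions from-fabric
    ```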

  8. In the fourth term in the same firewall filter, configure the action to take on traffic when it enters the egress Node device from the Interconnect device in the target-to-server direction (the action is to map the traffic back into its original forwarding class on the egress Node device, after the traffic crosses the Interconnect device):
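    A statement of the following general form configures the action in the fourth term (names are illustrative placeholders):

    ```
    [edit firewall family ethernet-switching]
    user@switch# set filter filter-name term target-to-server-from-fabric then forwarding-class original-forwarding-class-name loss-priority level count counter-name
    ```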

    The first eight steps remap traffic in both directions of flow on the Interconnect device, and ensure that the traffic is mapped to its original forwarding class on the Node devices and as the traffic exits the QFabric system.

  9. In the same firewall filter, add a final fifth term to define the default handling (action) of traffic that does not match the filter conditions. Firewall filters have an implicit default discard action, but in most cases the intention is not to drop traffic that is not remapped to a different forwarding class, so the action should be to accept the rest of the traffic:
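    A statement of the following general form configures the final term (the term name accept-rest is an illustrative placeholder):

    ```
    [edit firewall family ethernet-switching]
    user@switch# set filter filter-name term accept-rest then accept
    ```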

    If you wish, you can also configure a final counter in this term to count the total number of packets affected by the filter.

Note:

If you configure a new forwarding class for the remapped traffic on a Node device, you must also configure scheduling for the new forwarding class on the Node device. On the Interconnect device, you must map the forwarding class to a fabric forwarding class set (fabric fc-set; see Understanding CoS Fabric Forwarding Class Sets for more information), and if the fabric fc-set is not one of the default fabric fc-sets, you must configure scheduling for the fabric fc-set (see Example: Configuring CoS Scheduling Across the QFabric System for more information).