
Multicast Forwarding at Layer 2 in a Junos Fusion Data Center with EVPN

 

In a Junos Fusion Data Center with EVPN, extended ports on the satellite devices are multihomed to all aggregation devices. Each extended port is modeled as an EVPN Ethernet Segment (ES) and assigned an ES ID (ESI). To support multicast traffic replication and forwarding in the multihoming environment, the infrastructure employs a combination of BGP EVPN multicast signaling and IEEE 802.1BR tagging, IGMP state synchronization among the aggregation devices, local bias or designated forwarder (DF) replication and forwarding, and custom optimizations to better support the large number of extended ports in this architecture.

When forwarding multicast traffic in a Junos Fusion Data Center with EVPN, aggregation devices default to using ingress replication to extended ports on satellite devices, where the aggregation device replicates and sends copies of the traffic to every destination extended port individually. You can alternatively enable egress (local) replication to offload some of the traffic replication and forwarding responsibility from the aggregation devices to the satellite devices for their local extended ports. See Understanding Multicast Replication in a Junos Fusion for limitations on enabling local replication.

Junos Fusion Data Center with EVPN uses the same methods for multicast forwarding described here to also manage other multi-destination traffic for VLAN flooding (forwarding unknown unicast traffic to all extended ports in a VLAN) and broadcast traffic (flooding traffic to all extended ports in a broadcast domain).

See Monitoring Layer 2 Multicast Forwarding in a Junos Fusion Data Center with EVPN for a summary of the CLI commands you can use to view multicast replication and forwarding information in a Junos Fusion Data Center with EVPN.

Multicast Infrastructure in a Junos Fusion Data Center with EVPN

In a Junos Fusion Data Center with EVPN, the central EVPN infrastructure, referred to as the EVPN core, encompasses one EVPN instance (EVI). This section describes how the EVPN core manages multicast traffic forwarding.

Multicast Route Signaling and Source Traffic Forwarding in the EVPN Core

Whenever a VLAN is configured on an aggregation device in the EVI, the aggregation device signals a BGP EVPN Type 3 (Inclusive Multicast Ethernet Tag [IMET]) route for the VLAN. Because the configuration is synchronized among all aggregation devices in the EVPN core, all of the aggregation devices effectively join a core multicast replication tree, built from those routes, for each configured VLAN.

An aggregation device that initially receives multicast source traffic for a VLAN and multicast group is referred to as the ingress aggregation device. Junos Fusion Data Center with EVPN uses only ingress replication tunnel mode in the EVPN core, in which the ingress aggregation device replicates the packets and floods them on each EVPN tunnel to all of the other aggregation devices or any external edge routers in the EVI that might need to forward the traffic.

Designated Forwarder Election

In an EVPN fabric with multihoming, a designated forwarder (DF) is assigned to each ES so only one device forwards traffic to that ES, which eliminates duplicate traffic and prevents traffic loops. The devices in the provider edge (PE) role in the EVPN network generate Ethernet Segment (ES) routes for each connected extended port, and use BGP to advertise these routes to the other multihoming PE devices. BGP convergence determines a DF for a particular ES and VLAN. See EVPN Multihoming Overview for details. When local replication is not enabled in a Junos Fusion Data Center with EVPN, the aggregation devices use this process to elect the DF for each extended port ES.

When local replication is enabled, Junos Fusion Data Center with EVPN uses a different DF election model that is optimized for the large number of extended ports usually present in this architecture. The infrastructure already maintains connectivity information between aggregation devices and the extended ports on satellite devices using the IEEE 802.1BR protocol, so it does not require BGP signaling to build the ES topology, and does not need to rely on BGP route convergence when reassigning DFs.

In the optimized DF election model for local replication, a satellite device is responsible for electing its DF. Because all of the extended ports connected to a satellite device share its multihoming properties to the aggregation devices, it is sufficient to elect a DF at the granularity of the satellite device rather than for each extended port. As a result, the DF for an extended port is derived from the extended port’s satellite device. This model allows local replication at the satellite devices for any broadcast, unknown unicast, and multicast (BUM) traffic. With local replication enabled, the forwarding aggregation device only needs to send one copy of the traffic to each satellite device with destination extended ports, and the satellite devices then replicate and forward the traffic to their extended ports.

Satellite devices elect a DF as follows:

  • Each satellite device maintains a list of all the aggregation devices to which it has connectivity, ordered by IP address.

  • The satellite device performs a modulo hash of the satellite device ID with the number of connected aggregation devices, and uses the hash value as the index to choose the aggregation device to be its DF.

For example, if a satellite device with ID 101 is connected to four aggregation devices listed using indices 0 through 3, the satellite device calculates 101 modulo 4 = 1, so the second aggregation device in the list at index 1 is selected as the DF for that satellite device.
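The election step above can be sketched as follows; the function name and the IP addresses are illustrative assumptions for this sketch, not Junos internals.

```python
def elect_df(satellite_id, aggregation_device_ips):
    """Pick the DF for a satellite device: order the connected
    aggregation devices by IP address, then use the satellite ID
    modulo the device count as the index into that list."""
    ordered = sorted(aggregation_device_ips)
    return ordered[satellite_id % len(ordered)]

# Satellite device 101 connected to four aggregation devices:
ads = ["192.0.2.4", "192.0.2.1", "192.0.2.3", "192.0.2.2"]
df = elect_df(101, ads)  # 101 modulo 4 = 1, the second device in sorted order
```

Because every satellite device runs the same deterministic calculation over the same ordered list, no negotiation between aggregation devices is needed to agree on the result.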

The satellite device notifies the designated aggregation device that it is the DF for that satellite device, and notifies all the other connected aggregation devices that they are not the DF for that satellite device.

Upon receiving this notification, the aggregation devices record the DF or non-DF status on that satellite device’s virtual interface. A satellite device elects a new DF when it detects a link failure to the current DF. For reliable convergence, satellite devices dampen re-election when a link is flapping, keeping the current DF assigned and delaying reassignment until that link stabilizes (up or down).

IGMP Proxy and IGMP Report Synchronization in Aggregation Devices

For multicast group management in a multihomed environment, to avoid flooding IGMP reports in the EVPN core, the aggregation devices provide an IGMP proxy mechanism using BGP EVPN Type 6 (Selective Multicast Ethernet Tag [SMET]) routes. The aggregation device elected as the DF for an extended port signals a Type 6 route for each VLAN and multicast group combination ([VLAN, group]) in the EVI for which there is at least one receiver for that [VLAN, group]. These Type 6 routes summarize the IGMP state of the system.

Because the satellite devices balance their traffic among the available aggregation devices, IGMP membership reports (IGMP join messages) from an extended port might be sent to an aggregation device that is not that extended port’s DF. An IGMP leave message might also not be sent to the extended port’s DF, nor to the same aggregation device as the one that received the corresponding join message. As a result, the IGMP state must be synchronized from the aggregation device receiving IGMP reports for an extended port to that extended port’s DF aggregation device. For simplicity and faster convergence when a DF must be re-elected, the aggregation devices simply synchronize the IGMP state for each extended port among all of the aggregation devices connected to that extended port.

To achieve IGMP report synchronization for extended ports, Junos Fusion Data Center with EVPN uses the BGP EVPN control plane.

When an aggregation device learns the snooped IGMP state with group membership status for an extended port, the aggregation device originates and advertises a BGP EVPN Type 7 (IGMP Join Sync) route, which includes the VLAN and multicast group address, the extended port’s ES ID, and the EVI. The Type 7 route uses the same ES-Import route target extended community as the ES route for that extended port, so the route is only picked up by the aggregation devices that are connected to that extended port.
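As a rough illustration, the state carried in such a Type 7 route can be modeled like this; the field names are assumptions for the sketch, not actual Junos or BGP data structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IgmpJoinSync:
    """Illustrative model of BGP EVPN Type 7 (IGMP Join Sync) route contents."""
    evi: str           # EVPN instance
    esi: str           # extended port Ethernet Segment ID
    vlan: int
    group: str         # multicast group address
    route_target: str  # ES-Import RT scopes the route to ADs on this ES

route = IgmpJoinSync(
    evi="EVI-1",
    esi="00:11:22:33:44:55:66:77:88:99",
    vlan=100,
    group="233.252.0.1",
    route_target="es-import:00:11:22:33:44:55",
)
```

The `route_target` field reflects why only aggregation devices attached to the same extended port import the route: they are the only devices configured with that ES-Import route target.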

Synchronizing IGMP leave message status is more involved because in BGP, only the BGP device that advertises a route owns and can withdraw that route. The aggregation devices use BGP EVPN Type 8 (IGMP Leave Sync) routes to prompt the advertiser of the IGMP join state to withdraw the route, as follows:

  • An aggregation device receives an IGMP leave message from an extended port, and starts a maximum response timer. The aggregation device might or might not be the extended port’s DF.

  • If the aggregation device receiving the IGMP leave message does not own that route, the aggregation device advertises a BGP EVPN Type 8 route to all aggregation devices connected to that extended port. Like the BGP EVPN Type 7 route, the BGP EVPN Type 8 route includes the VLAN, multicast group address, extended port ESI and EVI, and also carries the maximum response time. The advertising scope is limited to the ES-Import route target for that extended port.

  • The multicast router device in the network acting as the IGMP querier sends out group queries.

  • Any aggregation devices receiving the BGP EVPN Type 8 route start a leave timer with the maximum response time from the advertised route.

  • If the DF for the extended port that sent the IGMP leave message no longer has any local join state for that multicast group, the DF withdraws the join state for that extended port ES.

  • Finally, on the aggregation device that advertised the BGP EVPN Type 8 route, after the maximum response timer expires, the aggregation device withdraws the BGP EVPN Type 8 route.

ECIDs for Forwarding Multicast Traffic Between Aggregation Devices and Satellite Devices

When a multihomed satellite device in a Junos Fusion Data Center with EVPN sends multicast source traffic to one of the aggregation devices, the satellite device includes the source (ingress) extended port unicast E-channel ID (ECID) in the 802.1BR header.

When forwarding multicast traffic to destination extended ports, aggregation devices send a multicast destination ECID in the 802.1BR header so the receiving satellite devices can direct the traffic to the right extended ports. The ingress aggregation device also includes the source extended port ECID in the 802.1BR header when forwarding the traffic back to the source satellite device. The ingress ECID is required for the satellite device to make split-horizon decisions when forwarding to its local extended ports, and avoid forwarding the traffic back to the source port.
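A minimal sketch of that split-horizon decision, assuming a simple port-to-ECID map (the port names and ECID values are illustrative):

```python
def split_horizon_forward(local_dest_ports, ingress_ecid, ecid_of):
    """Return the local extended ports that get a copy, skipping the
    source port identified by the ingress ECID in the 802.1BR header."""
    return [ep for ep in local_dest_ports if ecid_of[ep] != ingress_ecid]

ecid_of = {"EP1": 0x101, "EP2": 0x102, "EP3": 0x103}
# Source traffic from EP1 comes back with ingress ECID 0x101, so the
# satellite device sends copies to EP2 and EP3 only.
out = split_horizon_forward(["EP1", "EP2", "EP3"], 0x101, ecid_of)
```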

Extended ports in a Junos Fusion Data Center with EVPN can be part of a link aggregation group (LAG) on one satellite device or spanning satellite devices. Like standalone extended ports, LAGs of extended ports are represented as an ES and are assigned a special LAG ESI. LAGs of extended ports are also assigned a LAG ECID for 802.1BR communication to the satellite devices. See Understanding EVPN in a Junos Fusion Data Center for an overview of extended port LAGs. If an ingress aggregation device receives source traffic that originates on a LAG of extended ports, it also includes the 802.1BR header in the VXLAN packets when flooding the source traffic in the EVPN core, so any DF aggregation device can also send the ingress LAG ECID to its designated satellite device for split-horizon considerations. This is the only case in which 802.1BR ECIDs are included in traffic forwarded over the VXLAN tunnels.

See Understanding Multicast Replication in a Junos Fusion for more information on how Junos Fusion architectures use 802.1BR E-channel IDs (ECIDs) for managing multicast traffic flow between aggregation devices and satellite devices.

Multicast Replication and Forwarding in a Junos Fusion Data Center with EVPN

This section explains the Junos Fusion Data Center with EVPN multicast replication and forwarding model with and without local replication enabled, and illustrates several different forwarding scenarios.

Ingress Replication in the EVPN Core and Local Bias Forwarding to Multicast Destination Extended Ports

To forward the traffic to destination extended ports in the VLAN and multicast group, the ingress aggregation device uses a local bias forwarding model. With local bias forwarding, the ingress aggregation device forwards the traffic to any multicast destination extended ports (ES) that it can reach directly, regardless of whether it is the DF for a particular multicast destination ES.

To ensure the multicast traffic reaches other listeners in the EVI, the ingress aggregation device then replicates the packets and floods them on each EVPN tunnel in the EVI, sending the traffic to the other aggregation devices and edge routers in the EVI that share BGP EVPN Type 3 IMET routes for the VLAN (see Multicast Route Signaling and Source Traffic Forwarding in the EVPN Core). This behavior is referred to as ingress replication tunnel mode in the EVPN core.

For example, see Figure 1.

Figure 1: Junos Fusion Data Center with EVPN—Ingress Replication in EVPN Core and Local Bias Forwarding

In Figure 1, Aggregation Device 2 is the ingress aggregation device that receives multicast source traffic for VLAN 100 and multicast group address 233.252.0.1 from multihomed extended port EP1.

Based on the IGMP snooping state synchronized to all aggregation devices, extended ports EP2 on Satellite Device 1 (multicast ECIDx) and EP3 and EP4 on Satellite Device 2 (multicast ECIDy) have multicast listeners for that group in VLAN 100.

Note

Local replication is enabled in this scenario, so the aggregation device uses multicast ECIDs to offload most of the replication to the satellite devices. Local Bias Forwarding to Extended Ports with Local Replication at the Satellite Devices explains more about local replication for this example.

Aggregation Device 2 forwards the traffic as follows:

  • Using the local bias model, Aggregation Device 2 forwards the traffic to Satellite Device 1 using ECIDx and Satellite Device 2 using ECIDy to reach the multicast listeners on EP2, EP3, and EP4 even though it is not the DF for these destination extended ports.

  • Using ingress replication in the EVPN core, Aggregation Device 2 also floods the traffic to the three other aggregation devices in the EVI to reach other destination extended ports to which it is not directly connected.

The aggregation device that is the DF for a particular destination extended port ES would normally handle replication and forwarding to that ES upon receiving the multicast source traffic. To avoid duplicating traffic that the ingress aggregation device already forwarded according to the local bias model, each DF aggregation device performs a local bias check using the following information to decide whether to forward the traffic to its ESs:

  • The packets forwarded in the EVPN core carry the ingress aggregation device’s IP address in the VXLAN header (the outer IP address), so the receiving aggregation devices know which other aggregation device was the source of the traffic.

  • Any aggregation device can check the shared topology information maintained in the EVPN core for all connected extended ports to determine whether the source aggregation device has a direct connection to a given extended port.

The DF does not forward the traffic to its designated ESs for which the source aggregation device has connectivity and would have already forwarded the traffic. The DF does forward the traffic to its designated ESs that fail the local bias check.
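The check can be sketched as follows, assuming an illustrative map from each aggregation device's VTEP IP address to the extended ports it reaches directly (the names and addresses are not Junos structures):

```python
def df_local_bias_check(designated_eps, source_ad_ip, reachable_eps_by_ad):
    """Return the designated EPs this DF must forward to: only those the
    ingress AD (identified by the outer VXLAN source IP) cannot reach
    directly and therefore has not already served with local bias."""
    already_served = reachable_eps_by_ad.get(source_ad_ip, set())
    return [ep for ep in designated_eps if ep not in already_served]

# The ingress AD at 192.0.2.2 reaches EP1-EP4 directly, so a DF for
# EP3 and EP4 forwards nothing; a DF for EP5 (unreachable from the
# ingress AD) still forwards.
reach = {"192.0.2.2": {"EP1", "EP2", "EP3", "EP4"}}
skip = df_local_bias_check(["EP3", "EP4"], "192.0.2.2", reach)
send = df_local_bias_check(["EP5"], "192.0.2.2", reach)
```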

In Figure 1, Aggregation Device 1 is the elected DF for EP1 on Satellite Device 1, but does not forward the traffic to EP1 after performing a local bias check and determining that the source aggregation device, Aggregation Device 2, has connectivity to EP1 and would have already forwarded the traffic to EP1 using local bias forwarding. Similarly, Aggregation Device 3, the elected DF for EP3 and EP4, performs the local bias check and does not forward the traffic to Satellite Device 2.

Local Bias Forwarding to Extended Ports with Ingress Replication to the Satellite Devices (No Local Replication)

By default, any aggregation device forwarding multicast traffic to destination extended ports uses ingress multicast replication towards the destination extended ports. This replication method applies whether the forwarding aggregation device is the ingress aggregation device using local bias forwarding or is a DF forwarding traffic for destination extended ports that failed the local bias check.

With ingress replication, the forwarding aggregation device creates a copy of the traffic for every destination extended port, and uses each port's individual unicast ECID to send the copies to the satellite device connected to the destination extended ports. The satellite device then forwards each copy out of the local destination extended port that its ECID addresses.

Figure 2 illustrates local bias forwarding with ingress replication towards the extended ports.

Figure 2: Junos Fusion Data Center with EVPN—Local Bias with Local Replication Not Enabled

Aggregation Device 2 is the ingress aggregation device that receives multicast source traffic for VLAN 100 and multicast group address 233.252.0.1 from multihomed extended port EP1. Based on the IGMP state synchronized across the aggregation devices, extended ports EP3 on Satellite Device 1 and EP6 and EP7 on Satellite Device 2 have multicast listeners for that group in VLAN 100. Aggregation Device 2 is not the DF for any of these extended ports, but forwards the traffic to Satellite Device 1 and Satellite Device 2 for those extended ports according to the local bias model.

Aggregation Device 2 uses ingress replication towards the three destination extended ports as follows:

  • Creates and forwards one copy for EP3 to Satellite Device 1 using ECID[EP3]

  • Creates and forwards two copies to Satellite Device 2, one for EP6 (using ECID[EP6]) and one for EP7 (using ECID[EP7])

The satellite devices forward the traffic to each of their destination extended ports. Aggregation Device 2 also floods the traffic to the three other aggregation devices in the EVI in case there are other destinations to which it is not directly connected. The other aggregation devices that are DFs for extended ports on Satellite Device 1 and Satellite Device 2 receive the traffic, perform a local bias check, and do not forward the traffic to EP3, EP6, or EP7.
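The copy count in this example follows directly from per-EP ingress replication; a small sketch (port names and ECID values are assumed for illustration):

```python
def ingress_replication_copies(dest_eps, satellite_of, unicast_ecid_of):
    """One copy per destination extended port, addressed to its
    satellite device with that port's unicast ECID."""
    return [(satellite_of[ep], unicast_ecid_of[ep]) for ep in dest_eps]

satellite_of = {"EP3": "SD1", "EP6": "SD2", "EP7": "SD2"}
unicast_ecid_of = {"EP3": 0x203, "EP6": 0x206, "EP7": 0x207}
copies = ingress_replication_copies(["EP3", "EP6", "EP7"],
                                    satellite_of, unicast_ecid_of)
# Three copies total: one to SD1 for EP3, two to SD2 for EP6 and EP7.
```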

See Understanding Multicast Replication in a Junos Fusion and Ingress Multicast Replication at the Aggregation Device for more details on how Junos Fusion architectures use 802.1BR E-channel IDs (ECIDs) and ingress replication for managing multicast traffic flow between aggregation devices and extended ports by way of their satellite devices.

Local Bias Forwarding to Extended Ports with Local Replication at the Satellite Devices

You can enable egress replication, also called local replication, in a Junos Fusion Data Center with EVPN to offload most of the replication and forwarding work to the satellite devices connected to the destination extended ports. (See Configuring Egress Replication on a Junos Fusion.) With local replication enabled, the forwarding aggregation device creates and forwards only one copy of the traffic to each satellite device that has one or more destination extended ports, and each satellite device replicates the packets and forwards copies to each of its local destination extended ports.
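Contrast this with ingress replication: the forwarding aggregation device's load drops to one copy per satellite device rather than one per extended port. A sketch under the same illustrative naming assumptions as before:

```python
def local_replication_copies(dest_eps, satellite_of, multicast_ecid_of):
    """One copy per satellite device with listeners, addressed with that
    satellite's multicast ECID; the satellite fans out locally."""
    satellites = {satellite_of[ep] for ep in dest_eps}
    return sorted((sd, multicast_ecid_of[sd]) for sd in satellites)

satellite_of = {"EP2": "SD1", "EP3": "SD2", "EP4": "SD2"}
multicast_ecid_of = {"SD1": 0x301, "SD2": 0x302}  # "ECIDx" and "ECIDy"
copies = local_replication_copies(["EP2", "EP3", "EP4"],
                                  satellite_of, multicast_ecid_of)
# Two copies total instead of three: one to SD1, one to SD2.
```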

Figure 3 illustrates local bias forwarding with local replication towards the extended ports.

Figure 3: Junos Fusion Data Center with EVPN—Local Bias Forwarding with Local Replication Enabled

In Figure 3, incoming traffic from a multicast source on EP1, hashed among the aggregation devices to which EP1 is multihomed, arrives on Aggregation Device 2 for VLAN 100 and multicast group address 233.252.0.1. Based on the IGMP snooping state synchronized to all aggregation devices, extended ports EP2 on Satellite Device 1 (multicast ECIDx) and EP3 and EP4 on Satellite Device 2 (multicast ECIDy) have multicast listeners for that group in VLAN 100. Aggregation Device 2 is not the DF for these destination extended ports, but according to the local bias model, Aggregation Device 2 forwards the traffic to multicast listeners on EP2, EP3, and EP4. Replication and forwarding proceeds as follows with local replication enabled:

  • Aggregation Device 2 creates and forwards one copy for EP2 to Satellite Device 1 using multicast ECIDx.

    Note

    The 802.1BR header also carries the ingress ECID (ECID[EP1]), so Satellite Device 1 would not replicate and forward the traffic back out of the source extended port (EP1).

  • Aggregation Device 2 creates and forwards one copy to Satellite Device 2 for EP3 and EP4 using multicast ECIDy.

  • Satellite Device 1 forwards the traffic to EP2.

  • Satellite Device 2 replicates the traffic and sends one copy to EP3 and one copy to EP4.

Aggregation Device 2 also floods the traffic to the three other aggregation devices in the EVI in case there are other destinations to which it is not directly connected. In Figure 3, other aggregation devices that are DFs for extended ports on Satellite Device 1 and Satellite Device 2 receive the traffic, perform a local bias check, and do not forward the traffic to their assigned extended ports.

See Understanding Multicast Replication in a Junos Fusion and Egress Multicast Replication on the Satellite Devices for full details on how Junos Fusion architectures use 802.1BR E-channel IDs (ECIDs) and local replication for managing multicast traffic flow between aggregation devices and extended ports by way of their satellite devices.

Designated Forwarder Traffic Forwarding with Local Replication at the Satellite Devices

In a Junos Fusion Data Center with EVPN topology, if an ingress aggregation device loses direct connectivity to a satellite device, it cannot perform local bias forwarding to destination extended ports on that satellite device. In that case, the DF for each destination extended port is responsible for forwarding the traffic. Upon receiving multicast source traffic from the ingress aggregation device, the DF for a given destination extended port performs the usual local bias check. If the local bias check fails, the DF becomes the forwarding aggregation device for its destination extended ports.

With local replication enabled, the DF creates and forwards only one copy of the traffic to each satellite device that has one or more of its destination extended port ESs, and each satellite device replicates and forwards copies to each of its local destination extended ports.

Figure 4 illustrates DF forwarding with local replication towards the extended ports.

Figure 4: Junos Fusion Data Center with EVPN—Designated Forwarder Using Local Replication

In Figure 4, incoming traffic from a multicast source on EP1, hashed among the aggregation devices to which EP1 is multihomed, arrives on Aggregation Device 2 for VLAN 100 and multicast group address 233.252.0.1. Based on the IGMP snooping state synchronized to all aggregation devices, extended ports EP2 on Satellite Device 1 (multicast ECIDx) and EP3 and EP4 on Satellite Device 2 (multicast ECIDy) have multicast listeners for that group in VLAN 100. Aggregation Device 2 is not the DF for these destination extended ports. It has direct connectivity to EP2 by way of Satellite Device 1, but not to EP3 and EP4 because the cascade link to Satellite Device 2 has failed. As a result, Aggregation Device 3, the DF for EP3 and EP4, is responsible for forwarding the traffic to those extended ports.

Replication and forwarding to the multicast listeners on EP2, EP3, and EP4 proceeds as follows with local replication enabled:

  • Aggregation Device 2 creates and forwards one copy for EP2 to Satellite Device 1 using multicast ECIDx.

    Note

    The 802.1BR header also carries the ingress ECID (ECID[EP1]), so Satellite Device 1 would not replicate and forward the traffic back out of the source extended port (EP1).

  • Aggregation Device 2 floods the traffic into the EVPN core to reach other destinations to which it is not directly connected (which include EP3 and EP4).

  • Aggregation Device 3 receives the traffic from Aggregation Device 2 and determines it must forward the traffic to EP3 and EP4 because it is the DF for those extended ports, and a local bias check for Aggregation Device 2 connectivity to these destinations fails.

  • Aggregation Device 3 creates and forwards one copy to Satellite Device 2 for both EP3 and EP4 using multicast ECIDy.

  • Satellite Device 1 forwards the traffic to EP2.

  • Satellite Device 2 replicates the traffic and sends one copy to EP3 and one copy to EP4.

  • Aggregation Device 1 receives the traffic from the EVPN core, performs a local bias check, and does not forward the traffic to EP2.

Handling Split-horizon on a LAG of Ingress Extended Ports Across Satellite Devices

Extended ports in a Junos Fusion Data Center with EVPN can be part of a link aggregation group (LAG) that spans satellite devices. Extended port LAGs are assigned special LAG ECIDs.

Forwarding multicast traffic from a source on an extended port LAG is similar to forwarding traffic from a single source extended port, but special handling in the EVPN core is required when the LAG spans satellite devices.

A Junos Fusion Data Center with EVPN uses these forwarding actions whether the source is a LAG of extended ports on one satellite device or a LAG of extended ports across satellite devices:

  • A satellite device receiving multicast source traffic on an extended port LAG includes the source (ingress) extended port LAG ECID in the 802.1BR header when forwarding the traffic to the aggregation devices.

  • When the ingress aggregation device forwards traffic towards destination extended ports, it sends the multicast destination ECID in the 802.1BR header so the receiving satellite devices can direct the traffic to the right extended ports.

  • The ingress aggregation device also includes the source extended port LAG ECID from the 802.1BR header when forwarding the traffic back to the source satellite device. The source satellite device requires this information to apply split-horizon forwarding to its extended ports, and does not forward the traffic back out of the source extended port LAG.

  • Upon receiving the source traffic from the ingress aggregation device, if a DF determines from the local bias check that it needs to forward the traffic to its ESs, the DF includes the multicast destination ECID in the 802.1BR header when it forwards the traffic to the satellite devices connected to its ESs.

Additional actions are required when the source LAG spans satellite devices. Due to the multihoming environment and local bias or DF forwarding methods, other aggregation devices besides the ingress aggregation device might be responsible for forwarding the traffic to satellite devices with extended ports in the source LAG. These DF aggregation devices also need to forward the source extended port LAG ECID to their satellite devices so those satellite devices can apply split-horizon loop prevention when forwarding the traffic.

As a result, the aggregation devices also perform these actions only when the source LAG spans satellite devices:

  • When the ingress aggregation device floods source traffic into the EVPN core to the other aggregation devices, the ingress aggregation device preserves and inserts the 802.1BR header (containing the source LAG ECID) with the multicast payload in the VXLAN tunnel encapsulation.

  • Each DF receives the source LAG ECID and can include that in the 802.1BR header with the traffic forwarded to the satellite devices for its destination ESs, and the satellite devices can avoid forwarding the data to other extended ports in the source LAG.
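The LAG-aware split-horizon filter at a satellite device can be sketched as follows (port names and ECID values are assumptions for the sketch):

```python
def lag_split_horizon(local_dest_ports, source_lag_ecid, lag_ecid_of):
    """Skip any local destination port that is a member of the source
    LAG, identified by the source LAG ECID in the 802.1BR header."""
    return [ep for ep in local_dest_ports
            if lag_ecid_of.get(ep) != source_lag_ecid]

# EP4 and EP5 are members of the source extended port LAG (ECID 0x400).
lag_ecid_of = {"EP4": 0x400, "EP5": 0x400}
out = lag_split_horizon(["EP5", "EP6", "EP7"], 0x400, lag_ecid_of)
# EP5 is excluded as a source LAG member; EP6 and EP7 receive copies.
```

The check works the same whether the filtering satellite device hosts the actual source port or only another member of the same LAG, which is why propagating the LAG ECID across the EVPN core is sufficient for loop prevention.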

Figure 5 shows a simplified example of a Junos Fusion Data Center with EVPN topology in which the traffic source ingresses on a LAG of extended ports that spans satellite devices. The link between Aggregation Device 2 and Satellite Device 2 has failed. (Local replication is enabled in this example, but the replication method does not have any bearing on the requirement to share the source LAG ECID across the aggregation devices in the EVPN core.)

Figure 5: Junos Fusion Data Center with EVPN—Multicast Source LAG of Extended Ports Spanning Satellite Devices

In Figure 5, both Satellite Device 1 and Satellite Device 2 have extended ports (EP4 and EP5) in the LAG, and the source traffic ingresses on EP4 on Satellite Device 1.

  • Satellite Device 1 sends the traffic to one of the aggregation devices to which it is multihomed, in this case Aggregation Device 2, and includes the ECID of the LAG as the source or ingress ECID.

  • Aggregation Device 2 recognizes that the source is part of a LAG, and inserts the source LAG ECID into the packets it floods into the EVPN core to the other aggregation devices.

  • Aggregation Device 3 is the DF for EP6 and EP7 on Satellite Device 2, so it forwards the traffic to its ESs because the local bias check for Satellite Device 2 failed. When forwarding the traffic, Aggregation Device 3 also includes the source LAG ECID in the 802.1BR header to Satellite Device 2 so the satellite device can ensure the traffic is not sent out on destination extended ports that are members of the source LAG.

  • Aggregation Device 2 has connectivity to Satellite Device 1 and can perform local bias forwarding to reach the receiver at EP1. Aggregation Device 2 also includes the source LAG ECID so Satellite Device 1 can ensure the traffic is not sent out of EP4, the source extended port and a member of the source extended port LAG.

  • Aggregation Device 1, the DF for EP1, does not forward the traffic to Satellite Device 1 because the local bias check shows Aggregation Device 2 has connectivity and would have already forwarded the traffic.