Optimized Intersubnet Multicast (OISM) with Assisted Replication (AR) for Edge-Routed Bridging Overlays

This chapter describes how to configure the optimized intersubnet multicast (OISM) feature in a large EVPN-VXLAN edge-routed bridging (ERB) overlay fabric. OISM can also interoperate with the assisted replication (AR) feature (see Assisted Replication Multicast Optimization in EVPN Networks) to offload and load-balance multicast replication in the fabric to the devices better equipped to handle the load.

In EVPN ERB overlay fabric designs, the leaf devices route traffic between tenant VLANs as well as forward traffic within tenant VLANs. To support efficient multicast traffic flow in a scaled ERB overlay fabric with both internal and external multicast sources and receivers, we provide a multicast configuration model based on the IETF draft specification draft-ietf-bess-evpn-irb-mcast, EVPN Optimized Inter-Subnet Multicast (OISM) Forwarding. OISM combines the best aspects of ERB and centrally routed bridging (CRB) overlay designs for multicast traffic to provide the most efficient multicast traffic flow in ERB overlay fabrics.

See Optimized Intersubnet Multicast for Edge-Routed Bridging Overlay Networks earlier in this guide for a short summary of how OISM works with AR.

OISM enables ERB overlay fabrics to:

  • Support multicast traffic with sources and receivers both inside and outside the fabric.

  • Minimize multicast control and data traffic flow in the EVPN core to optimize performance in scaled environments.

Figure 1 shows the ERB overlay reference architecture in which we validated OISM and AR on supported devices in this example.

Figure 1: Edge-Routed Bridging Overlay Fabric with OISM and AR

Here is a summary of OISM components, configuration elements, and operation in this environment. For full details on how OISM works in different scenarios and available OISM support on different platforms, see Optimized Intersubnet Multicast in EVPN Networks.

  • In this example, the OISM devices take one of these device roles:

    • Server leaf (SL)—Leaf devices that link to the access side (internal) top-of-rack (TOR) devices that host the multicast servers and receivers inside the fabric. The SL devices can act as AR leaf devices.

    • Border Leaf (BL)—Leaf devices that link to an external PIM domain to manage multicast flow to and from external multicast sources and receivers. The BL devices can also act as AR leaf devices.

    • AR Replicator Spine (S-ARR)—IP fabric transit devices that serve as route reflectors in the ERB overlay fabric and also as the AR replicator devices working with OISM. When the spine devices in an ERB overlay act as AR replicators, they must run EVPN-VXLAN and no longer function simply as lean spines.

  • In this example, you configure OISM with a MAC-VRF EVPN instance with the VLAN-aware service type (supports multiple VLANs in the MAC-VRF instance) on all SL, BL, and S-ARR devices. You don’t need to configure an EVPN instance on the external PIM router.

  • We support OISM with a symmetric bridge domains model. With this model, you configure all tenant VLANs (also called revenue bridge domains or revenue VLANs) and virtual routing and forwarding (VRF) instances in the fabric on all OISM leaf devices. If you configure OISM with AR, you also configure these elements on the spine devices that act as AR replicators.

  • OISM leaf devices do intrasubnet bridging, and use a local routing model for intersubnet (Layer 3 [L3]) multicast traffic to conserve bandwidth and avoid hairpinning in the EVPN core. See Local Routing on OISM Devices for details.

    • SL devices forward multicast source traffic into the EVPN core only on the source VLAN.

    • BL devices forward traffic from external multicast sources into the EVPN core toward internal receivers only on a supplemental bridge domain called the SBD. The SBD design enables the local routing model and solves other issues with externally sourced traffic. For each tenant VRF instance, you assign a VLAN and a corresponding IRB interface for the SBD.

    • OISM SL devices receive multicast traffic from internal sources on the source VLAN, or from external sources through the BL devices on the SBD. For internally sourced traffic, the SL devices locally bridge the traffic to receivers on the source VLAN, and use IRB interfaces to locally route the traffic to receivers on other VLANs. Upon receiving traffic from outside the fabric, the SL devices use IRB interfaces to locally route the traffic from the SBD to the tenant VLANs and then to their locally attached receivers.

  • We support OISM with IGMPv2 (any-source multicast [ASM] reports only) or IGMPv3 (source-specific multicast [SSM] reports only). OISM requires that you enable IGMP snooping with either IGMP version. We use Protocol Independent Multicast (PIM) in sparse mode for multicast routing with different options on SL and BL devices according to their functions.

    Note:

    To support both IGMPv2 and IGMPv3 receivers on the same device, you must:

    • Use different tenant VRF instances to support the receivers for each IGMP version.

    • Configure different VLANs and corresponding IRB interfaces that support the receivers for each IGMP version.

    • Associate the IRB interfaces for each version with the corresponding tenant VRF instance.

    See Considerations for OISM Configurations for details on the required configuration considerations. The configuration we tested here accommodates receivers for both versions on the same device.

  • With IGMP snooping, OISM also optimizes multicast traffic using EVPN Type 6 routes for selective multicast Ethernet tag (SMET) forwarding. With SMET, OISM devices only forward traffic for a multicast group to other devices in the fabric with receivers that show interest in receiving that traffic. (Multicast receivers send IGMP join messages to request traffic for a multicast group.)

    In the OISM symmetric bridge domains model used here, OISM devices advertise EVPN Type 6 routes only on the SBD.

  • OISM supports EVPN multihoming with multicast traffic. The fabric can include receivers behind TOR devices that are multihomed in an Ethernet segment (ES) to more than one OISM leaf device. You configure an ES identifier (ESI) for the links in the ES.

    OISM devices use EVPN Type 7 (Join Sync) and Type 8 (Leave Sync) routes to synchronize the multicast state among the multihoming peer devices that serve an ES.

In this environment, we validate OISM and AR together at scale with the AR replicator role on the spine devices. Configure AR Replicator Role on OISM Spine Devices and AR Leaf Role on OISM Leaf Devices explains more about how AR works in this example. When the AR replicator role is not collocated with an OISM border leaf role on the same device, as in this example, we say the AR replicator operates in standalone AR replicator mode. The OISM SL and BL devices act as AR leaf devices.

We call devices that don’t support AR regular network virtualization edge (RNVE) devices. The test environment includes an SL device (see SL-3 in Figure 1) on which we don’t configure the AR leaf role to simulate an RNVE device. With RNVE devices in the fabric:

  • The RNVE devices use ingress replication to forward multicast traffic to other leaf devices in the fabric.

  • The AR replicators use ingress replication instead of AR to forward multicast source data to the RNVE devices.

In this chapter, we show configuration and verification for a small subset of the scaled environment in which we validate OISM and AR together. Although the scaled test environment includes more devices, configured elements, multicast sources, and subscribed receivers, in this example we show configuration and verification output for the following elements:

  • One EVPN instance, MACVRF-1, which is a MAC-VRF instance with VLAN-aware service type and VXLAN encapsulation.

  • Multicast stream use cases that encompass:

    • IGMPv2 or IGMPv3 traffic.

    • Internal or external multicast sources.

  • Two tenant VRF instances, one for IGMPv3 receivers and one for IGMPv2 receivers.

    For each tenant VRF instance, we define:

    • Four tenant VLANs with VXLAN tunnel network identifier (VNI) mappings, and corresponding IRB interfaces in the tenant VRF instance.

      In the OISM design, we refer to the tenant VLANs as revenue bridge domains or revenue VLANs.

    • One SBD VLAN mapped to a VNI, and a corresponding IRB interface in the tenant VRF instance.

  • One multicast source inside the data center, and one multicast source outside the data center in the external PIM domain.

    You configure the BL devices to act as PIM EVPN gateway (PEG) devices for the EVPN fabric. In this example, we connect PEG devices through classic L3 interfaces to an external PIM router and PIM rendezvous point (RP). The L3 interfaces on each of the BL PEG devices link to the external PIM router on different subnets.

  • Multicast receivers that subscribe to one or more multicast groups.

    Note:

    Each multicast stream has multiple receivers subscribed to traffic from each source. The multicast traffic verification commands in this example focus on the first receiver device in the Receivers column in Table 1.

See Table 1 for a summary of these elements and their values. Figure 1 illustrates the device roles and the first two corresponding IRB interfaces, VLANs, and VNI mappings for each of the tenant VRFs in the table.

Table 1: OISM Streams and Elements in this Example

Stream 1: Internal source, internal receivers with IGMPv3 (SSM reports only)

  • Tenant VRF: VRF-1

  • VLANs, IRB interfaces, and VNI mappings: VLAN-1 (irb.1, VNI 110001), VLAN-2 (irb.2, VNI 110002), VLAN-3 (irb.3, VNI 110003), VLAN-4 (irb.4, VNI 110004), and the SBD, VLAN-2001 (irb.2001, VNI 992001)

  • Source: TOR-1 on VLAN-1 (multihomed to SL-1 and SL-2)

  • Receivers: TOR-4 (multihomed to SL-4 and SL-5); other receivers: TOR-2 (single-homed to SL-3), TOR-3 (multihomed to SL-4 and SL-5), and TOR-5 (single-homed to SL-6)

  • Multicast groups: 233.252.0.21 through 233.252.0.23, and 233.252.0.121 through 233.252.0.123

Stream 2: External source, internal receivers with IGMPv2 (ASM reports only)

  • Tenant VRF: VRF-101

  • VLANs, IRB interfaces, and VNI mappings: VLAN-401 (irb.401, VNI 110401), VLAN-402 (irb.402, VNI 110402), VLAN-403 (irb.403, VNI 110403), VLAN-404 (irb.404, VNI 110404), and the SBD, VLAN-2101 (irb.2101, VNI 992101)

  • Source: External source (in the external PIM domain)

  • Receivers: TOR-1 (multihomed to SL-1 and SL-2); other receivers: TOR-2 (single-homed to SL-3), TOR-3 (multihomed to SL-4 and SL-5), TOR-4 (multihomed to SL-4 and SL-5), and TOR-5 (single-homed to SL-6)

  • Multicast groups: 233.252.0.1 through 233.252.0.3, and 233.252.0.101 through 233.252.0.103

See Table 2 for a summary of the BL device and external PIM router L3 connection parameters. In this example, the BL devices both use aggregated Ethernet (AE) interface ae3 for the external L3 connection, with different subnets per BL device. In the scaled-out test environment, the configuration uses a range of logical units on the ae3 interface with corresponding VLANs per tenant VRF, starting with unit 0 and VLAN-3001 for VRF-1. We focus on tenant VRF instances VRF-1 and VRF-101 in this example.

Table 2: External Multicast L3 Interface Parameters

BL-1, tenant VRF VRF-1:

  • External L3 interface logical unit: ae3.0 (unit 0), associated VLAN-3001

  • BL L3 logical interface IP address: 172.30.0.1

  • PIM router logical interface and IP address: ae1.0, 172.30.0.0

  • PIM RP logical unit and IP address: lo0.1, 172.22.2.1

BL-1, tenant VRF VRF-101:

  • External L3 interface logical unit: ae3.100 (unit 100), associated VLAN-3101

  • BL L3 logical interface IP address: 172.30.100.1

  • PIM router logical interface and IP address: ae1.100, 172.30.100.0

  • PIM RP logical unit and IP address: lo0.101, 172.22.102.1

BL-2, tenant VRF VRF-1:

  • External L3 interface logical unit: ae3.0 (unit 0), associated VLAN-3001

  • BL L3 logical interface IP address: 172.31.0.1

  • PIM router logical interface and IP address: ae2.0, 172.31.0.0

  • PIM RP logical unit and IP address: lo0.1, 172.22.2.1

BL-2, tenant VRF VRF-101:

  • External L3 interface logical unit: ae3.100 (unit 100), associated VLAN-3101

  • BL L3 logical interface IP address: 172.31.100.1

  • PIM router logical interface and IP address: ae2.100, 172.31.100.0

  • PIM RP logical unit and IP address: lo0.101, 172.22.102.1

You configure these parameters in Configure the Border Leaf Devices for External Multicast Connectivity, PIM EVPN Gateway Role, and PIM Options.

We divide the configuration into several sections.

Configure the Underlay and the Overlay

We use eBGP for the underlay and iBGP for the overlay following the reference architectures in IP Fabric Underlay Network Design and Implementation and Configure IBGP for the Overlay.

This example uses AE interfaces, each with one or two member links, for all of the device connections to provide redundancy.

  1. Configure the number of aggregated Ethernet (AE) interfaces that each device supports in the configuration.

    For example:

    All SL and BL Devices:
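
    A minimal sketch; the device count shown is an assumed value, so size it to the number of AE bundles you actually configure:

    set chassis aggregated-devices ethernet device-count 5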

    S-ARR Devices:
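
    Again a sketch with an assumed count, sized for the spine AE links:

    set chassis aggregated-devices ethernet device-count 10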

  2. Follow the procedure in Configuring the Aggregated Ethernet Interfaces Connecting Spine Devices to Leaf Devices to configure the AE interfaces for the underlay on all the BL, SL, and S-ARR devices.

    See Figure 2 and Figure 3 for the device IP addresses, AS numbers, and the AE interface names and IP addresses in this example.

  3. Configure the eBGP underlay on the BL and SL devices. On each BL and SL device, set S-ARR-1 (AS: 4200000021) and S-ARR-2 (AS: 4200000022) as BGP neighbors. Similarly, on each S-ARR device, set each of the BL and SL devices as BGP neighbors. See IP Fabric Underlay Network Design and Implementation for details.

    See Figure 2 and Figure 3 for the device IP addresses, AS numbers, and the AE interface names and IP addresses in this example.

    For example:

    SL-1 (device AS: 4200000011):
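
    A minimal sketch of the underlay eBGP group on SL-1. The group name, export policy, and neighbor addresses (the point-to-point addresses of the AE links toward the spines) are assumptions; take the actual values from Figure 2 and Figure 3:

    set policy-options policy-statement underlay-export term loopback from interface lo0.0
    set policy-options policy-statement underlay-export term loopback then accept
    set protocols bgp group underlay type external
    set protocols bgp group underlay export underlay-export
    set protocols bgp group underlay local-as 4200000011
    set protocols bgp group underlay multipath multiple-as
    set protocols bgp group underlay neighbor 172.16.1.0 peer-as 4200000021
    set protocols bgp group underlay neighbor 172.16.2.0 peer-as 4200000022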

  4. Configure the iBGP overlay with an overlay AS number and EVPN signaling on the SL, BL, and S-ARR devices. See Configure IBGP for the Overlay for details.

    On each SL and BL device, set the spine devices S-ARR-1 (lo0: 192.168.2.1) and S-ARR-2 (lo0: 192.168.2.2) in the overlay AS as BGP neighbors. For example:

    SL-1 (device lo0: 192.168.0.1):
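
    A minimal sketch for SL-1, assuming an overlay AS of 4210000001 (the overlay AS value is an assumption; use the AS number you chose for the overlay):

    set routing-options autonomous-system 4210000001
    set protocols bgp group overlay type internal
    set protocols bgp group overlay local-address 192.168.0.1
    set protocols bgp group overlay family evpn signaling
    set protocols bgp group overlay neighbor 192.168.2.1
    set protocols bgp group overlay neighbor 192.168.2.2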

    Repeat this step for each SL and BL device, substituting that device loopback IP address for the local-address option. See Figure 2 and Figure 3.

    On each S-ARR device, set all of the SL and BL devices in the overlay AS as BGP neighbors. For example:

    S-ARR-1 (device lo0: 192.168.2.1):
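
    A sketch for S-ARR-1 acting as an overlay route reflector. The overlay AS, cluster ID, and the full neighbor list are assumptions based on the device loopback addressing in Figure 2 and Figure 3:

    set routing-options autonomous-system 4210000001
    set protocols bgp group overlay type internal
    set protocols bgp group overlay local-address 192.168.2.1
    set protocols bgp group overlay family evpn signaling
    set protocols bgp group overlay cluster 192.168.2.1
    set protocols bgp group overlay neighbor 192.168.0.1
    set protocols bgp group overlay neighbor 192.168.0.2
    set protocols bgp group overlay neighbor 192.168.0.3
    set protocols bgp group overlay neighbor 192.168.0.4
    set protocols bgp group overlay neighbor 192.168.0.5
    set protocols bgp group overlay neighbor 192.168.0.6
    set protocols bgp group overlay neighbor 192.168.5.1
    set protocols bgp group overlay neighbor 192.168.5.2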

  5. Set MAC aging, Address Resolution Protocol (ARP), and Neighbor Discovery Protocol (NDP) timer values to support MAC address learning and IPv4 and IPv6 route learning. These parameters are common settings in EVPN-VXLAN fabrics and are not specific to the OISM or AR feature configurations.

    Set these values on all SL and BL devices.

    Set the NDP stale timer on all IRB interfaces for IPv6 neighbor reachability confirmation on all SL and BL devices for IPv6 unicast traffic.

    Note:

    You apply this setting to all IRB interfaces that might handle IPv6 unicast traffic, so you usually add this statement to a configuration group of common statements and apply the group to all interfaces.
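
    For example, representative settings look like the following sketch (the timer values are assumptions, and you apply the nd6-stale-time statement to each IRB unit, typically through a configuration group):

    set protocols l2-learning global-mac-table-aging-time 1800
    set system arp aging-timer 20
    set interfaces irb unit 1 family inet6 nd6-stale-time 1200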

Configure an OISM-Enabled EVPN MAC-VRF Instance

The scaled test environment includes multiple MAC-VRF EVPN instances. We show one instance called MACVRF-1 here that we use for OISM and AR traffic.

Note:

We require that you enable the shared tunnels feature on the QFX5000 line of switches running Junos OS with a MAC-VRF instance configuration. This feature prevents problems with VTEP scaling on the device when the configuration uses multiple MAC-VRF instances. When you configure shared tunnels, the device minimizes the number of next-hop entries to reach remote VTEPs. You globally enable shared VXLAN tunnels on the device using the shared-tunnels statement at the [edit forwarding-options evpn-vxlan] hierarchy level. You must reboot the device for this setting to take effect.

This statement is optional on the QFX10000 line of switches running Junos OS, which can handle higher VTEP scaling than the QFX5000 line of switches. On devices running Junos OS Evolved in EVPN-VXLAN fabrics, shared tunnels are enabled by default.

Configure the elements in these steps on all SL, BL, and S-ARR devices.

Note:

This example includes the AR multicast optimization with OISM. The spine devices (S-ARR-1 and S-ARR-2) in the fabric serve as standalone AR replicator devices. For AR to work with the OISM symmetric bridge domains model, you must also configure all the common OISM SL and BL elements on the standalone AR replicator devices, such as the MAC-VRF instance, VLANs, tenant VRFs, and IRB interfaces.

If the spine devices don’t run as AR replicators, you don’t need to configure these elements on the spine devices.

  1. Configure MACVRF-1, an EVPN MAC-VRF instance with the VLAN-aware service type. Specify the VXLAN tunnel endpoint (VTEP) source interface as the device loopback interface. Set a route distinguisher and route target (with automatic route target derivation) for the instance. Also, enable all VNIs in the instance to extend into the EVPN BGP domain.

    For example:

    SL-1:
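
    A minimal sketch for SL-1. The route distinguisher and route target values shown are assumptions; make the route distinguisher unique per device as described below:

    set routing-instances MACVRF-1 instance-type mac-vrf
    set routing-instances MACVRF-1 service-type vlan-aware
    set routing-instances MACVRF-1 vtep-source-interface lo0.0
    set routing-instances MACVRF-1 route-distinguisher 192.168.0.1:100
    set routing-instances MACVRF-1 vrf-target target:65000:1
    set routing-instances MACVRF-1 vrf-target auto
    set routing-instances MACVRF-1 protocols evpn encapsulation vxlan
    set routing-instances MACVRF-1 protocols evpn extended-vni-list all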

    Repeat this step on the remaining SL devices, BL-1, BL-2, S-ARR-1, and S-ARR-2. In the configuration on each device, substitute that device’s IP address as part of the route-distinguisher statement value so these values are unique across the devices.

  2. Configure platform-specific settings required in scaled EVPN-VXLAN environments. You configure these options on the devices in the fabric that operate in any OISM or AR role.
    1. On Junos OS devices in the QFX5000 line of switches, include the global shared tunnels configuration statement. This setting supports VXLAN tunneling with MAC-VRF instances on those devices in scaled environments:
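
      This is the statement named in the note earlier in this section:

      set forwarding-options evpn-vxlan shared-tunnels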

      Note:

      If you need to configure the shared tunnels setting here, you must reboot the device after you commit the configuration for the setting to take effect.

      The shared tunnels feature is set by default on devices that run Junos OS Evolved.

    2. On devices running Junos OS Evolved, configure the following to support scaled EVPN-VXLAN environments:

    3. On QFX5110 and QFX5120 switches, configure next-hop settings for scaled environments to ensure that the next-hop table size is proportional to the device capacity, as follows:

      QFX5110 switches:

      QFX5120 switches:

  3. Configure the OISM revenue VLANs and SBD VLANs in the EVPN MACVRF-1 instance. Map each VLAN to a VXLAN VNI value. Also, create IRB interfaces for each VLAN. See Table 1 for the VLANs and VNI values we use in this example. You configure one SBD and corresponding IRB for each VRF.

    Configure these elements on all SL, BL, and S-ARR devices.
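
    A sketch for the first revenue VLAN and the SBD in VRF-1 (the VLAN IDs and the IRB subnet addressing are assumptions; repeat the pattern for the remaining VLANs and SBDs in Table 1):

    set routing-instances MACVRF-1 vlans VLAN-1 vlan-id 1
    set routing-instances MACVRF-1 vlans VLAN-1 l3-interface irb.1
    set routing-instances MACVRF-1 vlans VLAN-1 vxlan vni 110001
    set routing-instances MACVRF-1 vlans VLAN-2001 vlan-id 2001
    set routing-instances MACVRF-1 vlans VLAN-2001 l3-interface irb.2001
    set routing-instances MACVRF-1 vlans VLAN-2001 vxlan vni 992001
    set interfaces irb unit 1 family inet address 10.0.1.254/24
    set interfaces irb unit 2001 family inet address 10.20.1.254/24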

  4. Enable OISM globally on all SL, BL, and S-ARR devices.
    Note:

    The S-ARR spine devices in this example serve as standalone AR replicator devices, so you must enable OISM on them too. If the spine devices don’t run as AR replicators, you don’t need to enable OISM on those devices.
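
    For example, a sketch of the global OISM setting (hierarchy as assumed here):

    set forwarding-options multicast-replication evpn irb oism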

  5. Enable IGMP snooping in the MAC-VRF instance for all OISM tenant (revenue) VLANs and SBD VLANs on all SL, BL, and S-ARR devices.
    Note:

    You enable IGMP snooping on the S-ARR devices because they act as AR replicator devices. AR replicators use IGMP snooping to optimize traffic forwarding.

    In EVPN-VXLAN fabrics, we support IGMPv2 traffic with ASM reports only. We support IGMPv3 traffic with SSM reports only. When you enable IGMP snooping for IGMPv3 traffic, include the SSM-specific evpn-ssm-reports-only configuration option as shown below. See Supported IGMP or MLD Versions and Group Membership Report Modes for more on ASM and SSM support with EVPN-VXLAN.

    IGMP snooping with IGMPv2 on all VLANs:
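
    A sketch showing the VRF-101 VLANs and their SBD; the full configuration repeats this statement for every VLAN in the instance, including the SBD VLANs:

    set routing-instances MACVRF-1 protocols igmp-snooping vlan VLAN-401
    set routing-instances MACVRF-1 protocols igmp-snooping vlan VLAN-402
    set routing-instances MACVRF-1 protocols igmp-snooping vlan VLAN-403
    set routing-instances MACVRF-1 protocols igmp-snooping vlan VLAN-404
    set routing-instances MACVRF-1 protocols igmp-snooping vlan VLAN-2101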

    IGMP snooping on VLANs with IGMPv3 receivers (VLAN-1 through VLAN-4, according to Table 1):
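
    A sketch using the evpn-ssm-reports-only option named above:

    set routing-instances MACVRF-1 protocols igmp-snooping vlan VLAN-1 evpn-ssm-reports-only
    set routing-instances MACVRF-1 protocols igmp-snooping vlan VLAN-2 evpn-ssm-reports-only
    set routing-instances MACVRF-1 protocols igmp-snooping vlan VLAN-3 evpn-ssm-reports-only
    set routing-instances MACVRF-1 protocols igmp-snooping vlan VLAN-4 evpn-ssm-reports-only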

  6. Enable IGMP on the IRB interfaces that handle multicast traffic on the SL, BL, and S-ARR devices.

    IGMPv2 is enabled by default when you configure PIM, but you need to explicitly enable IGMPv3 on the IRB interfaces that handle IGMPv3 traffic. According to Table 1, the IGMPv3 IRB interfaces are irb.1, irb.2, irb.3, and irb.4, which are associated with VRF-1:
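
    A sketch, assuming IGMP for these IRB interfaces is configured under the tenant VRF instance that owns them:

    set routing-instances VRF-1 protocols igmp interface irb.1 version 3
    set routing-instances VRF-1 protocols igmp interface irb.2 version 3
    set routing-instances VRF-1 protocols igmp interface irb.3 version 3
    set routing-instances VRF-1 protocols igmp interface irb.4 version 3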

  7. (Required on any devices configured as AR replicators with OISM. Also required on the QFX10000 line of switches and the PTX10000 line of routers in any OISM role, but only in releases prior to Junos OS or Junos OS Evolved Release 23.4R1.) To avoid traffic loss at the onset of multicast flows, set the install-star-g-routes option at the [edit routing-instances name multicast-snooping-options oism] hierarchy. You set this parameter in the MAC-VRF instance. With this option, the Routing Engine installs (*,G) multicast routes on the Packet Forwarding Engine for all of the OISM revenue VLANs in the routing instance immediately upon learning about any interested receiver. See Latency and Scaling Trade-Offs for Installing Multicast Routes with OISM (install-star-g-routes Option) for more on using this option.

    For example, in this test environment, S-ARR-1 and S-ARR-2 are AR replicators, so we configure this statement in the MAC-VRF routing instance on those devices. Also, SL-4 and SL-5 are switches in the QFX10000 line, so if those devices are running releases earlier than Junos OS or Junos OS Evolved Release 23.4R1, you would also configure this statement on those devices.
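
    The statement at the hierarchy named above looks like this:

    set routing-instances MACVRF-1 multicast-snooping-options oism install-star-g-routes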

Configure the Tenant VRF Instances for IGMPv2 and IGMPv3 Multicast Receivers

The scaled test environment includes many tenant L3 VRF instances. We show the VRF instances for the two multicast use cases in Table 1:

  • VRF-1 for IGMPv3 traffic pulled from an internal source.

  • VRF-101 for IGMPv2 traffic pulled from an external source.

Configure the elements in these steps in those VRF instances on all SL, BL, and S-ARR devices.

Note:

The S-ARR spine devices in this example also serve as standalone AR replicator devices, so you must configure all the tenant VRF settings on them too. If the spine devices don’t run as AR replicators, you don’t need to include these steps on those devices.

You also configure different PIM options in the tenant VRF instances for SL devices compared to BL devices. See Configure PIM on the Server Leaf Devices and Configure the Border Leaf Devices for External Multicast Connectivity, PIM EVPN Gateway Role, and PIM Options for those configuration steps. You don’t need to configure PIM on the S-ARR devices.

  1. Configure the tenant VRF instances.

    Include the IRB interfaces associated with each tenant VRF instance. Set a route distinguisher based on the device IP address, and a route target for the instance.

    For example:

    SL-1:
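
    A minimal sketch for SL-1. The vrf-target values are assumptions, the route distinguishers follow the pattern described below, and the IRB lists follow Table 1:

    set routing-instances VRF-1 instance-type vrf
    set routing-instances VRF-1 interface irb.1
    set routing-instances VRF-1 interface irb.2
    set routing-instances VRF-1 interface irb.3
    set routing-instances VRF-1 interface irb.4
    set routing-instances VRF-1 interface irb.2001
    set routing-instances VRF-1 route-distinguisher 192.168.0.1:1
    set routing-instances VRF-1 vrf-target target:65000:101
    set routing-instances VRF-101 instance-type vrf
    set routing-instances VRF-101 interface irb.401
    set routing-instances VRF-101 interface irb.402
    set routing-instances VRF-101 interface irb.403
    set routing-instances VRF-101 interface irb.404
    set routing-instances VRF-101 interface irb.2101
    set routing-instances VRF-101 route-distinguisher 192.168.0.1:101
    set routing-instances VRF-101 vrf-target target:65000:201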

    Repeat this step on the remaining SL devices, BL-1, BL-2, S-ARR-1, and S-ARR-2. On each device, configure the route-distinguisher to be unique across the devices and tenant VRFs. Substitute the device IP address from Figure 2 or Figure 3, and the VRF instance number for each tenant VRF, as follows:

    • On SL-2, use route-distinguisher 192.168.0.2:1 for VRF-1, and 192.168.0.2:101 for VRF-101.

      On SL-3, use route-distinguisher 192.168.0.3:1 for VRF-1, and 192.168.0.3:101 for VRF-101.

      And so on for the remaining SL devices.

    • On BL-1, use route-distinguisher 192.168.5.1:1 for VRF-1, and 192.168.5.1:101 for VRF-101.

      On BL-2, use route-distinguisher 192.168.5.2:1 for VRF-1, and 192.168.5.2:101 for VRF-101.

    • On S-ARR-1, use route-distinguisher 192.168.2.1:1 for VRF-1, and 192.168.2.1:101 for VRF-101.

      On S-ARR-2, use route-distinguisher 192.168.2.2:1 for VRF-1, and 192.168.2.2:101 for VRF-101.

  2. In each VRF instance, include the corresponding IRB interfaces for the OISM revenue VLANs and the SBD associated with that instance, according to Table 1:

    Repeat the same configuration on all SL, BL, and S-ARR devices.

  3. Specify the IRB interface for the OISM SBD associated with each tenant VRF instance, as outlined in Table 1:

    Repeat the same configuration on all SL, BL, and S-ARR devices.

  4. On any SL, BL, and S-ARR devices that have a single Routing Engine, configure the graceful restart feature for each tenant VRF:
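
    A sketch for the two tenant VRFs in this example:

    set routing-instances VRF-1 routing-options graceful-restart
    set routing-instances VRF-101 routing-options graceful-restart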

Configure Server Leaf to TOR Interfaces and Ethernet Segment Identifiers (ESIs) for EVPN Multihoming

The TOR devices host the multicast sources and receivers inside the fabric. These devices have single-homed or multihomed connections to the SL devices in the EVPN core. See Figure 3 for the topology in this example. TOR-1, TOR-3, and TOR-4 are each multihomed to two SL devices, and TOR-2 and TOR-5 are single-homed. Figure 3 shows that:

  • The SL devices all use interface ae3 to connect to the TOR devices.

  • SL-4 and SL-5 use interfaces ae3 and ae5 for redundant connections to TOR-3 and TOR-4, respectively.

  • Each multihomed TOR device uses interfaces ae1 and ae2 to connect to its peer SL devices.

  • For consistency in the configuration, the single-homed TORs (TOR-2 and TOR-5) also use interfaces ae1 and ae2 but as redundant links to a single SL device.

Also, starting in Junos OS and Junos OS Evolved Release 23.2R2, you can configure the network isolation feature on the multihomed TOR-facing interfaces on the SL devices to help mitigate traffic loss during core isolation events on those interfaces. In this example, Step 4 shows how to configure the network isolation feature on the interfaces from SL-1 and SL-2 to multihomed device TOR-1.

Note:

Although this example doesn't show any multihomed server-facing or TOR-facing interfaces on the BL devices, you can similarly configure the network isolation feature for any such interfaces on BL devices.

  1. Configure the server leaf to TOR interfaces and Ethernet segment identifiers (ESIs) for EVPN multihoming.

    On multihoming peer SL devices, configure the same ESI on the AE interface links to the multihomed TOR device. Also enable LACP and LACP hold timers on the interfaces. See Multihoming an Ethernet-Connected End System Design and Implementation for more details on configuring these interfaces and the corresponding Ethernet segment identifiers (ESIs).

    Note:

    See Step 4 in this section to configure the device to detect and take action upon network isolation events on these interfaces. If you include that configuration, make sure the "up" hold timer configured here in this step is greater than the "up" hold timer in the network isolation group profile in that step.

    For example:

    On SL-1 for the multihoming interfaces to TOR-1:
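
    A sketch for SL-1 interface ae3. The ESI value follows the convention described in Step 2 below, the LACP system ID and VLAN membership are assumptions (the LACP system ID must match on SL-2), and the "up" hold time matches the 300000 ms value referenced in Step 4:

    set interfaces ae3 esi 00:00:00:ff:00:01:00:01:00:03
    set interfaces ae3 esi all-active
    set interfaces ae3 aggregated-ether-options lacp active
    set interfaces ae3 aggregated-ether-options lacp periodic fast
    set interfaces ae3 aggregated-ether-options lacp system-id 00:01:00:01:00:03
    set interfaces ae3 hold-time up 300000 down 0
    set interfaces ae3 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae3 unit 0 family ethernet-switching vlan members all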

    Repeat this configuration on SL-2, the multihoming peer to SL-1. Use the corresponding interface for the link on SL-2 to TOR-1, and the same ESI.

    On SL-2 for the multihoming interfaces to TOR-1:

  2. Repeat the configuration in Step 1 above on the other SL devices that serve multihomed TOR devices.

    Use a different ESI for each corresponding multihoming peer SL device pair. See Figure 3. In this example, all SL devices use ae3 to link to the TOR devices. SL-4 and SL-5 use ae3 to link to TOR-3, and additionally use ae5 to link to TOR-4. As a result, you set one ESI for the ae3 links and a different ESI for the ae5 links on both of those SL devices.

    Here we assign ESI values with the following conventions:

    • The 6th segment from the right in the ESI value matches the number of the multihomed TOR.

    • The last segment of the ESI value matches the AE interface number.

    For example:

    • On SL-4 and SL-5 for the interfaces to multihomed TOR-3 (link is ae3 on both SL devices):

    • On SL-4 and SL-5 for the interfaces to multihomed TOR-4 (link is ae5 on both SL devices):
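
      Following that convention, the ESI statements for these two ESs look like this sketch (the ae5 value matches the ESI referenced later in this example; the ae3 value is inferred from the convention above):

      set interfaces ae3 esi 00:00:00:ff:00:03:00:01:00:03
      set interfaces ae3 esi all-active
      set interfaces ae5 esi 00:00:00:ff:00:04:00:01:00:05
      set interfaces ae5 esi all-active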

    TOR-2 is single-homed to SL-3, so you don’t configure an ESI on the ae3 interface on SL-3.

    Similarly, TOR-5 is single-homed to SL-6, so you don’t configure an ESI on the ae3 interface on SL-6.

    Note:

    The test environment also includes separate AE interfaces (ae4 on each SL device) and ESIs (for the multihomed TOR devices) that the fabric uses for unicast traffic. We only show configuring the multicast ESIs here.

  3. Include the interfaces to the TOR devices in the MAC-VRF instance on each SL device—ae3 on all SL devices, and also ae5 on SL-4 and SL-5.

    For example:

    On SL-1 for the interface to TOR-1 (and repeat this configuration on SL-2 for the interface to TOR-1, SL-3 for the interface to TOR-2, and SL-6 for the interface to TOR-5):
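
    A sketch, assuming the TOR-facing logical unit is unit 0:

    set routing-instances MACVRF-1 interface ae3.0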

    On SL-4 and SL-5 for the interfaces to TOR-3 and TOR-4:

  4. (Optional in releases starting with Junos OS and Junos OS Evolved Release 23.2R2 with this OISM configuration) Configure the network isolation service-tracking feature on SL-1 and SL-2 for interface ae3 to TOR-1. With this feature, the device changes the interface status upon detecting and recovering from a core isolation event. See Layer 2 Interface Status Tracking and Shutdown Actions for EVPN Core Isolation Conditions, network-isolation, and network-isolation-profile for details on how this feature works. To enable this feature, we:
    • Set up a network isolation group net-isolation-grp-1 with core isolation service tracking using the network-isolation statement.

    • Include an "up" hold time (in ms) that the device will wait upon detecting recovery from network isolation before acting to bring the interface up again.

      Note:

      The "up" hold time you configure here for core isolation event detection should be less than the general "up" hold timer on the interfaces for the ESI in Step 1. For example, in that step, the interface "up" hold timer is 300000 ms; here, we set the "up" hold timer for the network isolation group to 100000 ms.

    • Configure the device to set either lacp-out-of-sync or link-down status on the interface as the core isolation action. For example, here we set the lacp-out-of-sync action.

    • Apply the network isolation group to the interface using the network-isolation-profile statement.

    • Because core isolation service tracking improves the convergence time for the ESI routes, we don't need to include a delay in the overlay BGP route advertisements. As a result, on devices where we configure the network isolation feature on TOR-facing interfaces, we remove the delay-route-advertisements settings from Step 4 in Configure the Underlay and the Overlay.

    For example:

    On SL-1 for the multihoming interface(s) to TOR-1:

    Repeat this configuration on SL-2 for ae3 with the same network isolation group profile parameters.

    You can set up the same or similar network isolation group profiles on the other multihoming peer SL devices, SL-4 and SL-5, which use ae3 for the ESI to TOR-3 and ae5 for the ESI to TOR-4.

    Also, when we enable the network isolation feature on the SL device interfaces, with that configuration we remove the overlay route advertisement delay setting on the device. We still need to retain the overlay delay-route-advertisements minimum-delay routing-uptime setting on S-ARR-1 and S-ARR-2 from Step 4 in Configure the Underlay and the Overlay. However, with the network isolation feature configured in the network, we observed better results testing with a higher setting on the S-ARR devices than the setting in that step. As a result, in this case we recommend you also change the following:

    On S-ARR-1 and S-ARR-2 from Step 4 in Configure the Underlay and the Overlay:

    change that configuration to:

Configure PIM on the Server Leaf Devices

In this procedure, you configure the OISM elements specific to server leaf functions, such as PIM, in the tenant VRF instances in this example—VRF-1 and VRF-101. Configure these steps on all SL devices.

  1. Configure PIM in passive mode on the SL devices for all interfaces in each of the VRF routing instances.
  2. Set the PIM accept-remote-source option to enable the SL devices to accept multicast traffic from the SBD IRB interface as the source interface. With the symmetric bridge domains OISM model, any multicast traffic coming from external sources arrives at the SL devices on the SBD.
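
    A minimal sketch for both tenant VRFs; the placement of the passive and accept-remote-source statements is shown as we assume it here:

    set routing-instances VRF-1 protocols pim passive
    set routing-instances VRF-1 protocols pim interface irb.2001 accept-remote-source
    set routing-instances VRF-101 protocols pim passive
    set routing-instances VRF-101 protocols pim interface irb.2101 accept-remote-source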

Configure the Border Leaf Devices for External Multicast Connectivity, PIM EVPN Gateway Role, and PIM Options

In this procedure, you configure the OISM elements specific to border leaf functions, including the steps to connect to the external PIM domain. Configure statements in this procedure on each BL device.

Note:

This example connects to the external PIM router using classic L3 interface links. OISM supports additional methods to connect to the external domain depending on the platform of the BL device. See External Multicast Connection Methods in the EVPN User Guide for a list of supported external multicast methods per platform.

  1. Configure the L3 interfaces that connect to the external PIM router.

    In this example, both BL devices use interface ae3 for this purpose with a different subnet for each BL device connection per tenant VRF. The physical interfaces in the AE interface bundle might differ across the BL devices. Figure 2 shows the BL device and link information for this example.

    The external L3 multicast connection uses the ae3 interface with VLAN tagging enabled. In the scaled environment, we configure logical units on the ae3 interface and corresponding VLANs per tenant VRF starting with unit 0 and VLAN-3001 for VRF-1. In this example we show configuring ae3.0 in VRF-1 and ae3.100 in VRF-101 on both BL devices. See Table 2 for a summary of the external L3 interface connection configuration parameters.

    BL-1:
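
    A sketch for BL-1; the /31 prefix length is inferred from the addressing in Table 2:

    set interfaces ae3 vlan-tagging
    set interfaces ae3 unit 0 vlan-id 3001
    set interfaces ae3 unit 0 family inet address 172.30.0.1/31
    set interfaces ae3 unit 100 vlan-id 3101
    set interfaces ae3 unit 100 family inet address 172.30.100.1/31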

    BL-2:
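
    And the corresponding sketch for BL-2:

    set interfaces ae3 vlan-tagging
    set interfaces ae3 unit 0 vlan-id 3001
    set interfaces ae3 unit 0 family inet address 172.31.0.1/31
    set interfaces ae3 unit 100 vlan-id 3101
    set interfaces ae3 unit 100 family inet address 172.31.100.1/31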

  2. Include the logical L3 interfaces in the tenant VRF instances. Both BL devices use ae3 for the external multicast connection, so use the same configuration on both devices.

    BL-1 and BL-2:
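
    A sketch:

    set routing-instances VRF-1 interface ae3.0
    set routing-instances VRF-101 interface ae3.100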

  3. Configure each of the BL devices with the OISM PEG role in each tenant VRF.

    BL-1 and BL-2:

  4. Configure PIM in the tenant VRF instances, including the following for each tenant VRF in this example:
    • In this environment, set a static PIM RP address corresponding to the external PIM router and RP router (one for each logical unit and associated tenant VRF).

    • Configure PIM on the OISM revenue VLAN IRB interfaces and include the PIM distributed-dr option.

    • Configure classic PIM (no distributed-dr option) on the SBD IRB interface, the L3 logical unit interface, and the logical loopback interface.

      With PIM on the SBD IRB interface, include the accept-remote-source option to enable the BL devices to accept multicast traffic from the SBD IRB interface as the source interface. This option handles situations where the BL devices might send source traffic to each other on the SBD. See Multicast Traffic from an External Source to Receivers Inside the EVPN Data Center—L3 Interface Method or Non-EVPN IRB Method in the EVPN User Guide for more information on those situations.

    • With PIM on the SBD IRB interface, enable Bidirectional Forwarding Detection (BFD) and the stickydr option. The BFD settings improve convergence time with interface issues to help avoid traffic loss. The stickydr option eliminates designated router switchover convergence delay during reboot events.

    Include the same configuration on BL-1 and BL-2.

    For VRF-1:
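
    A sketch for VRF-1, using the RP address from Table 2 (172.22.2.1). The BFD interval is an assumed value, and the exact placement of the accept-remote-source statement is an assumption:

    set routing-instances VRF-1 protocols pim rp static address 172.22.2.1
    set routing-instances VRF-1 protocols pim interface irb.1 distributed-dr
    set routing-instances VRF-1 protocols pim interface irb.2 distributed-dr
    set routing-instances VRF-1 protocols pim interface irb.3 distributed-dr
    set routing-instances VRF-1 protocols pim interface irb.4 distributed-dr
    set routing-instances VRF-1 protocols pim interface irb.2001 accept-remote-source
    set routing-instances VRF-1 protocols pim interface irb.2001 stickydr
    set routing-instances VRF-1 protocols pim interface irb.2001 family inet bfd-liveness-detection minimum-interval 1000
    set routing-instances VRF-1 protocols pim interface ae3.0
    set routing-instances VRF-1 protocols pim interface lo0.1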

    For VRF-101:
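
    And the parallel sketch for VRF-101, using RP address 172.22.102.1 from Table 2:

    set routing-instances VRF-101 protocols pim rp static address 172.22.102.1
    set routing-instances VRF-101 protocols pim interface irb.401 distributed-dr
    set routing-instances VRF-101 protocols pim interface irb.402 distributed-dr
    set routing-instances VRF-101 protocols pim interface irb.403 distributed-dr
    set routing-instances VRF-101 protocols pim interface irb.404 distributed-dr
    set routing-instances VRF-101 protocols pim interface irb.2101 accept-remote-source
    set routing-instances VRF-101 protocols pim interface irb.2101 stickydr
    set routing-instances VRF-101 protocols pim interface irb.2101 family inet bfd-liveness-detection minimum-interval 1000
    set routing-instances VRF-101 protocols pim interface ae3.100
    set routing-instances VRF-101 protocols pim interface lo0.101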

    See OISM Components in the EVPN User Guide for details on why we configure these PIM options on the BL devices.

  5. Configure an OSPF area for each tenant VRF instance that includes the logical L3 interface, logical loopback interface, and SBD IRB interface. This step establishes an OSPF routing domain with these interfaces as neighbors in the tenant VRF to support routing among them.

    You also include the export export-direct policy option, which exports the routes for directly connected IRB interfaces on a particular BL device into the OSPF routing protocol.

    The configuration is the same on both BL devices.

    BL-1 and BL-2:
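
    A sketch for both tenant VRFs; the area number and the body of the export-direct policy are assumptions:

    set policy-options policy-statement export-direct term 1 from protocol direct
    set policy-options policy-statement export-direct term 1 then accept
    set routing-instances VRF-1 protocols ospf export export-direct
    set routing-instances VRF-1 protocols ospf area 0.0.0.0 interface ae3.0
    set routing-instances VRF-1 protocols ospf area 0.0.0.0 interface lo0.1
    set routing-instances VRF-1 protocols ospf area 0.0.0.0 interface irb.2001
    set routing-instances VRF-101 protocols ospf export export-direct
    set routing-instances VRF-101 protocols ospf area 0.0.0.0 interface ae3.100
    set routing-instances VRF-101 protocols ospf area 0.0.0.0 interface lo0.101
    set routing-instances VRF-101 protocols ospf area 0.0.0.0 interface irb.2101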

Configure External Multicast PIM Router and PIM RP Router

In this example, an MX Series router acts as the external PIM domain router and PIM RP device. In this procedure, we include the configuration on this device that matches the BL device configuration in Configure the Border Leaf Devices for External Multicast Connectivity, PIM EVPN Gateway Role, and PIM Options. This information helps you interpret show command output to verify the setup, connectivity, and group memberships established for the OISM and AR devices in the fabric.

The PIM router and RP router configuration includes:

  • Connections to BL-1 on interface ae1 and to BL-2 on interface ae2, with VLAN-tagging enabled.

  • Routing instances of type virtual-router (PIM-GW-VR-n) that correspond to each tenant VRF-n in the OISM configuration on the BL devices.

  • Logical units on ae1 and ae2 with corresponding VLANs per virtual router VRF, starting with unit 0 and VLAN-3001 for VRF-1.

  • A PIM RP IP address for each virtual router VRF instance.

This procedure shows configuring the following on the PIM router and RP router, as listed in Table 2:

  • PIM-GW-VR-1 (corresponding to VRF-1) and VLAN 3001 with:

    • Interface ae1.0 to BL-1.

    • Interface ae2.0 to BL-2.

  • PIM-GW-VR-101 (corresponding to VRF-101) and VLAN-3101 with:

    • Interface ae1.100 to BL-1.

    • Interface ae2.100 to BL-2.

  1. Configure the L3 interfaces on the PIM router that connect to the BL devices (ae1 to BL-1 and ae2 to BL-2).
  2. Configure virtual router routing instances corresponding to the OISM tenant VRFs and VLANs in this example.

    Include the L3 interfaces to BL-1 and BL-2 in the routing instances, as well as the device logical loopback interface and logical interfaces that connect to the external source for each routing instance.

  3. In each of the routing instances, configure PIM, a static IP address for the PIM RP, and an OSPF area for PIM for the interfaces.

    See Table 2 for the PIM RP static addresses we use in this example.
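
    For example, a consolidated sketch of these steps for PIM-GW-VR-1 on the external router. The VLAN IDs, addressing, and RP address follow Table 2; the OSPF area and the handling of the external source interface are assumptions:

    set interfaces ae1 vlan-tagging
    set interfaces ae1 unit 0 vlan-id 3001
    set interfaces ae1 unit 0 family inet address 172.30.0.0/31
    set interfaces ae2 vlan-tagging
    set interfaces ae2 unit 0 vlan-id 3001
    set interfaces ae2 unit 0 family inet address 172.31.0.0/31
    set interfaces lo0 unit 1 family inet address 172.22.2.1/32
    set routing-instances PIM-GW-VR-1 instance-type virtual-router
    set routing-instances PIM-GW-VR-1 interface ae1.0
    set routing-instances PIM-GW-VR-1 interface ae2.0
    set routing-instances PIM-GW-VR-1 interface lo0.1
    set routing-instances PIM-GW-VR-1 protocols pim rp local address 172.22.2.1
    set routing-instances PIM-GW-VR-1 protocols pim interface all mode sparse
    set routing-instances PIM-GW-VR-1 protocols ospf area 0.0.0.0 interface ae1.0
    set routing-instances PIM-GW-VR-1 protocols ospf area 0.0.0.0 interface ae2.0
    set routing-instances PIM-GW-VR-1 protocols ospf area 0.0.0.0 interface lo0.1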

Configure AR Replicator Role on OISM Spine Devices and AR Leaf Role on OISM Leaf Devices

In an ERB overlay fabric, you can enable OISM with AR. You can assign the AR replicator role to one or more spine devices in the fabric. When a spine device runs as an AR replicator, the AR replicator operates in standalone AR replicator mode. This means the AR replicator role is not collocated with the OISM border leaf role on the device.

When an ingress AR leaf device needs to forward multicast traffic to other AR leaf devices, it uses an AR overlay VXLAN tunnel to send only one copy of the traffic to an available AR replicator device instead. Then, also using AR overlay VXLAN tunnels, the AR replicator device replicates and forwards the traffic to the other AR leaf devices with receivers that subscribed to the multicast stream. AR replicators use ingress replication instead of AR to forward multicast traffic directly to leaf devices that don’t support AR (what we call RNVE devices).

AR leaf devices balance the load of AR replicator requests among the available AR replicator devices using one of two methods, depending on the leaf device platform:

  • QFX5000 line of switches (models that run either Junos OS or Junos OS Evolved)—These devices designate a particular AR replicator device for traffic associated with each VLAN or VNI. In this case, the show evpn multicast-snooping assisted-replication next-hops CLI command output shows the designated AR replicator for each VNI as the (Designated Node).

  • QFX10000 line of switches—These devices actively load-balance among the AR replicators based on traffic flow levels within a VNI. The device doesn’t designate a particular AR replicator for each VNI.

In this example, we have multicast flows from an internal source and an external source. The ERB overlay fabric spine devices (S-ARR-1 and S-ARR-2) act as AR replicator devices. The OISM SL and BL devices act as AR leaf devices, except SL-3, which simulates an RNVE device (we don’t enable the AR leaf role on that device). Figure 4 shows how AR works if we consider the multicast streams and corresponding fabric parameters in this example from Table 1.

Figure 4: AR with OISM Internal and External Multicast Sources
  • In the internal source use case:

    1. SL-1 is the ingress device for an internal multicast stream from multihomed TOR-1 on source VLAN VLAN-1 for traffic to receivers in tenant VRF VRF-1.

    2. SL-1 (a QFX5120 switch) forwards the traffic to its designated AR replicator for VLAN-1 (VNI 110001). The designated AR replicator is S-ARR-1 in this case.

    3. S-ARR-1 replicates and forwards the stream on the source VLAN to the AR leaf devices that host TOR devices with subscribed receivers.

    4. The destination SL devices forward or locally route the traffic to their subscribed receivers.

  • In the external source use case:

    1. BL-1 is the ingress device for an external multicast stream from the external PIM domain for traffic to receivers in tenant VRF VRF-101.

    2. BL-1 (a QFX5130 switch) forwards the traffic to its designated AR replicator for the SBD VLAN, VLAN-2101 (VNI 992101). The designated AR replicator is S-ARR-2 in this case.

    3. S-ARR-2 replicates and forwards the stream on the SBD VLAN using AR tunnels to the AR leaf devices that host TORs with subscribed receivers.

    4. S-ARR-2 also replicates the stream and uses an ingress replication (IR) tunnel to forward the stream to SL-3, an RNVE leaf device that hosts a TOR device with a subscribed receiver.

    5. The destination SL devices forward or locally route the traffic toward their subscribed receivers.

For more details on AR device roles, how AR works, and other use cases besides the ones in this example, see Assisted Replication Multicast Optimization in EVPN Networks.

To configure AR in this example:

  1. Configure the AR replicator role on the S-ARR devices.
    1. Configure the device loopback interface lo0 with a secondary IP address specifically for AR functions. The AR replicator advertises this IP address to the network in EVPN Type 3 AR tunnel routes.

      We include the primary loopback address configuration statements here as well so you can more easily identify each S-ARR device.

      S-ARR-1:
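
      For example (the secondary AR address 192.168.2.101 is a hypothetical value chosen for illustration):

      set interfaces lo0 unit 0 family inet address 192.168.2.1/32 primary
      set interfaces lo0 unit 0 family inet address 192.168.2.101/32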

      S-ARR-2:

    2. Configure the AR replicator role on S-ARR-1 and S-ARR-2 in the MAC-VRF instance, using the secondary AR loopback interface you configured in the previous step.

      S-ARR-1:
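
      A sketch, using the hypothetical AR address from the previous step:

      set routing-instances MACVRF-1 protocols evpn assisted-replication replicator inet 192.168.2.101
      set routing-instances MACVRF-1 protocols evpn assisted-replication replicator vxlan-encapsulation-source-ip ingress-replication-ip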

      S-ARR-2:

    3. On AR replicator devices in standalone mode, you must also configure the common OISM elements that you configure on the OISM SL and BL devices. You configure those elements in the earlier procedures in this example.

      You don't need to configure any of the PIM or external multicast elements specific to OISM BL or SL devices.

  2. Configure the AR leaf role in the MAC-VRF instance on the OISM BL devices and all SL devices except the RNVE device in this example, SL-3.

    Include the replicator-activation-delay option as shown. By default, AR leaf devices delay 10 seconds after receiving an AR replicator advertisement before starting to send traffic to that AR replicator device. In a scaled environment, we recommend you make the delay longer to ensure the AR replicator devices have fully learned the current EVPN state from the network. The delay also helps in cases when an AR replicator goes down and comes up again.

    BL-1, BL-2, SL-1, SL-2, SL-4, SL-5, and SL-6:
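
    A sketch; the 600-second delay is an assumed value (the default is 10 seconds, as noted above):

    set routing-instances MACVRF-1 protocols evpn assisted-replication leaf replicator-activation-delay 600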

    We skip this configuration on SL-3, which acts as a device that doesn’t support AR.

Verify OISM and AR Configuration and Operation

You can use the show commands in the following steps to verify OISM and AR configuration and operation.

  1. Verify the underlay and overlay configurations and confirm the fabric has established the BGP state and traffic paths among the devices.

    SL-1 (device lo0: 192.168.0.1):
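
    A typical check here is the BGP peer summary, where all underlay and overlay sessions should show the Established state:

    show bgp summary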

    Run this command on each SL, BL, and S-ARR device in the fabric.

  2. Verify the configured VTEPs in the MAC-VRF EVPN instance on the SL, BL, and S-ARR devices.

    You can use the show ethernet-switching vxlan-tunnel-end-point remote command, or its alias, the show mac-vrf forwarding vxlan-tunnel-end-point remote command (where supported).

    For example:

    SL-1:

    Note:

    You also see the AR role of the device on each remote VTEP in the RVTEP-Mode column of the output from this command, as follows:

    • The primary loopback IP address on an AR replicator device is the ingress replication (IR) IP address, which the device uses to forward traffic to RNVE devices. That’s why you see “RNVE” as the role corresponding to the S-ARR device primary loopback addresses in the RVTEP-IP column.

    • The secondary loopback IP address you assign for the AR replicator role is the AR IP address. You see “Replicator” as the role for those RVTEP-IP addresses in this output.

    • The AR leaf devices and the RNVE device only use IR tunnels, so this command displays “Leaf” or “RNVE” role corresponding to the primary loopback IP address for those devices in the RVTEP-IP column.

    Run this command on each SL, BL, and S-ARR device in the fabric.

  3. Verify the SL device to TOR device links. These links are ae3 on each SL device. Also, SL-4 and SL-5 are multihoming peers for both TOR-3 and TOR-4, and use ae5 for those additional TOR device links. (See Figure 3.)

    For example:

    SL-1 to TOR-1:

    SL-2 to TOR-1:

    Repeat this command on each SL device for interface ae3, and on SL-4 and SL-5, repeat this command for ae5.

  4. Verify external L3 interface OSPF neighbor reachability on the BL devices for each tenant VRF instance.

    BL-1:

    BL-2:

  5. Verify the PIM router and RP device connections from the external PIM router and RP device to the BL devices.
  6. Verify AR leaf overlay tunnel load balancing to the available AR replicator devices.

    The AR leaf devices detect the advertised AR replicator devices and load-balance among them using different methods based on the leaf device platform. (See AR Leaf Device Load Balancing with Multiple Replicators for details.)

    In this example, SL-1 is a QFX5120 switch, so as an AR leaf device, SL-1 load balances by assigning an AR replicator device to each VLAN or VNI.

    Run the show evpn multicast-snooping assisted-replication next-hops instance mac-vrf-instance command on the AR leaf devices to see the overlay tunnels and load-balancing next hops to the available AR replicators. On SL devices that designate AR replicators by VNI, the output of this command tags the AR replicator as the (Designated Node). The output doesn’t include this tag on AR leaf devices that load balance based on active traffic flow.

    For example, the output here for SL-1 shows that the device assigned:

    • S-ARR-1 as the designated replicator for configured VNIs 110001 and 110003 (which correspond to VLAN-1 and VLAN-3, respectively)

    • S-ARR-2 as the designated replicator for configured VNIs 110002 and 110004 (which correspond to VLAN-2 and VLAN-4, respectively)

    SL-1:

    Run this command on the SL and BL devices in the fabric.

  7. Verify PIM join and multicast group status for a multicast stream from a source inside the fabric behind TOR-1. TOR-1 is multihomed to SL-1 and SL-2 (see Figure 1). Receivers on the TOR devices connected to the other SL devices subscribe to multicast groups hosted by this source as listed in Table 1. The stream we verify here is intra-VLAN and inter-VLAN traffic with source VLAN VLAN-1 and receiver VLANs VLAN-2, VLAN-3, and VLAN-4.

    With AR enabled, the ingress SL device forwards the multicast source traffic to the designated AR replicator. See Figure 4. The AR replicator forwards copies to the SL devices with subscribed receivers on the source VLAN, VLAN-1. Then the SL devices forward the traffic toward the receivers on VLAN-1.

    In this step, you run the commands only for VRF-1, which is the tenant VRF that hosts the internal multicast source traffic in this example. Also, this stream is an IGMPv3 stream with SSM reports, so you see only (S,G) multicast routes. In this case, the output shows the source behind TOR-1 has source IP address 10.0.1.12.

    In this step, we show running verification commands for:

    • PIM join status on SL-1 and SL-2 for the source on multihomed device TOR-1.

      The show pim join summary output shows that the SL devices serving TOR-1 saw joins for a total of 6 multicast groups.

    • IGMP snooping multicast group membership status for the receiver behind device TOR-4, which is multihomed to SL-4 and SL-5.

      The show igmp snooping membership output shows the multicast group joins from the receiver behind TOR-4. TOR-4 hashes the join messages to either of the multihoming peer SL devices. The number of joins on both devices together (3 on each device) equals the total number of joins in the show pim join summary output (6).

    • PIM join status summary and details on SL-4 and SL-5 for the receiver behind multihomed device TOR-4.

      When the show pim join extensive output on SL-4 and SL-5 shows the same upstream and downstream IRB interface, the device is bridging the multicast stream within the same VLAN. When the downstream IRB interfaces are different from the upstream IRB interface, the device is routing the multicast stream between VLANs.

    • The designated forwarder among multihoming peers SL-4 and SL-5 that the device elected to forward the traffic to the receiver behind multihomed TOR-4.

      We run the show evpn instance MACVRF-1 designated-forwarder esi command for the ESI you configured on the ae5 interfaces from SL-4 and SL-5 to TOR-4.

    SL-1: Internal source—PIM join status for VRF-1:

    SL-2: Internal source—PIM join status for VRF-1:

    SL-4: Receiver—Multicast group membership status on source VLAN VLAN-1 and receiver VLANs VLAN-1 through VLAN-4:

    SL-5: Receiver—Multicast group membership status on source VLAN VLAN-1 and receiver VLANs VLAN-1 through VLAN-4:

    SL-4: Receiver—PIM join status for VRF-1:

    SL-5: Receiver—PIM join status for VRF-1:

    SL-4: Check the designated forwarder to the receiver behind TOR-4:

    Recall that in the SL device configuration, we assign ae5 as the link from both SL-4 and SL-5 to TOR-4. We set ESI 00:00:00:ff:00:04:00:01:00:05 on those links. The following output shows that SL-4 is not the designated forwarder for this ESI.

    SL-5: Check the designated forwarder to the receiver behind TOR-4:

    The following output confirms that SL-5 (lo0.0 192.168.0.5) is the designated forwarder for ESI 00:00:00:ff:00:04:00:01:00:05.

  8. Verify PIM join and multicast group status for a multicast stream from a source outside the fabric in the external PIM domain. See Figure 1. Receivers behind the TOR devices connected to the SL devices subscribe to multicast groups hosted by this source as listed in Table 1. The ingress BL device routes the external source traffic from the L3 interface connection to the SBD VLAN (VLAN-2101 in this case).

    With AR enabled, the BL device forwards the traffic on the SBD VLAN to an AR replicator (either the designated AR replicator or an AR replicator based on traffic load-balancing, depending on the BL device platform). See Figure 4. The AR replicator forwards copies on the SBD to the SL devices with subscribed receivers. Then the SL devices forward or locally route the traffic toward the receivers on the tenant VLANs.

    In this step, you run the commands only for VRF-101, which is the tenant VRF that hosts external multicast source traffic in this example. Also, this stream is an IGMPv2 stream with ASM reports, so you see only (*,G) multicast routes.

    In this step, you run verification commands for:

    • PIM join status on BL-1 and BL-2 as the ingress devices for the external multicast source.

    • IGMP snooping multicast group membership status for a receiver behind device TOR-1, which is multihomed to SL-1 and SL-2.

    • PIM join status on SL-1 and SL-2 for the receiver on multihomed device TOR-1.

    BL-1: Ingress BL device for external source—PIM join status for VRF-101:

    BL-2: Ingress BL device for external source—PIM join status for VRF-101:

    SL-1: Receiver—Multicast group membership status for VLANs associated with VRF-101 (VLAN-401 through VLAN-404):

    SL-2: Receiver—Multicast group membership status for VLANs associated with VRF-101 (VLAN-401 through VLAN-404):

    SL-1: Receiver—PIM join status for VRF-101:

    SL-2: Receiver—PIM join status for VRF-101:

  9. Verify the OISM devices use EVPN Type 6 (SMET) routes to optimize multicast traffic flow in the EVPN core. View the Type 6 routes in the EVPN routing tables (bgp.evpn.0 and MACVRF-1.evpn.0) on OISM leaf devices and the spine devices that act as AR replicators. Type 6 routes are advertised only on the SBD VLAN.

    For example, here we view Type 6 routes for interested receivers behind TOR-4, which is multihomed to peer SL devices SL-4 and SL-5. We show results for parameters of the featured multicast stream in Table 1 for tenant VRF-1, with IGMPv3 traffic between an internal source and internal receivers:

    • VLANs: VLAN-1 through VLAN-4, which map to VNIs 110001 through 110004

    • SBD VLAN: VLAN-2001, which maps to VNI 992001

    • Internal source: Behind SL-1 and SL-2 on TOR-1, with internal IP address 10.0.1.12

    • Multicast groups: 233.252.0.121 through 233.252.0.123

    These commands show:

    • S-ARR-1 and S-ARR-2 received Type 6 routes from SL-4 and SL-5 on the SBD (VLAN-2001, VNI 992001).

    • SL-4 (lo0: 192.168.0.4) and SL-5 (lo0: 192.168.0.5) received joins from a receiver behind multihomed TOR-4 for multicast groups 233.252.0.121 through 233.252.0.123.

    • Source (10.0.1.12) and Group information, because the receivers send IGMPv3 joins.

    For S-ARR-1 — Type 6 routes from SL-4 and SL-5:

    S-ARR-1 links to SL-4 on ae4 (172.16.7.0/31).

    S-ARR-1 links to SL-5 on ae5 (172.16.8.0/31):

    Run the same commands on S-ARR-2 to see similar output on that device. S-ARR-2 links to SL-4 on ae4 (172.16.9.0/31) and SL-5 on ae5 (172.16.10.0/31).

    SL-4 — Locally generated Type 6 routes, and Type 6 routes from other SL devices by way of S-ARR-1 and S-ARR-2:

    You see similar Type 6 routes as remote routes from the other SL devices, which also serve receivers for the same groups in the same tenant VRF for the internal source on TOR-1 (multihomed to SL-1 and SL-2).

    Run these commands on SL-5 to see the Type 6 routes on that device. Match on prefix 6:192.168.0.5 to see the locally generated Type 6 routes for SL-5. Match on other device prefixes (such as 6:192.168.0.4 for SL-4) to see the remotely generated Type 6 routes.

  10. Verify the peer devices for multihoming ESs use EVPN Type 7 routes to synchronize multicast join states.

    Use the show route table __default_evpn__.evpn.0 command to see Type 7 route prefixes.

    For example, here we show Type 7 routes generated by peer SL devices SL-4 and SL-5 for the receiver behind TOR-4, with an internal source and IGMPv3 joins (see the parameters in Table 1 for VRF-1). TOR-4 hashes join messages from its receivers to either SL-4 or SL-5. The devices each advertise Type 7 routes to their multihoming peers for the joins they receive, to synchronize the join status among them.

    These commands show:

    • SL-5 locally generated Type 7 routes to advertise to SL-4

    • SL-5 received Type 7 route advertisements from SL-4 for joins that TOR-4 hashed to SL-4

    • OISM devices advertise Type 7 and Type 8 routes on the OISM revenue VLANs, not the SBD.

      In this case, receivers joined groups on VLAN-2 (VNI 110002, hashed to SL-5) and VLAN-3 (VNI 110003, hashed to SL-4).

    Note that the ESI we configured for the SL-4 and SL-5 links to TOR-4 is 00:00:00:ff:00:04:00:01:00:05, which you see in the Type 7 route entry in the routing table.

    SL-5 — Locally generated Type 7 routes to advertise to SL-4:

    This output shows that SL-5 locally generated three Type 7 routes for joins on VLAN-2 (VNI 110002) for multicast groups 233.252.0.121 through 233.252.0.123.

    SL-5 — Type 7 routes from SL-4:

    This output shows that SL-5 received three Type 7 route advertisements from SL-4 for joins on VLAN-3 (VNI 110003) for multicast groups 233.252.0.121 through 233.252.0.123.

    Run these commands on SL-4 to see Type 7 routes on that device. Match on prefix 7:192.168.0.5 to see the Type 7 routes from its multihoming peer, SL-5. Match on prefix 7:192.168.0.4 to see the locally generated Type 7 routes that SL-4 advertises to SL-5.

    You can use these commands to see Type 7 routes on the other multihoming peer SL device pairs.