
How to Configure IGMP Snooping for EVPN-VXLAN

 
Summary

IGMP snooping for EVPN-VXLAN helps to constrain multicast traffic to interested receivers in a broadcast domain.

Overview of Multicast Forwarding with IGMP Snooping in an EVPN-VXLAN Environment

Internet Group Management Protocol (IGMP) snooping constrains multicast traffic in a broadcast domain to interested receivers and multicast devices. In an environment with a significant volume of multicast traffic, using IGMP snooping preserves bandwidth because multicast traffic is forwarded only on those interfaces where there are IGMP listeners.

Starting with Junos OS Release 17.2R1, QFX10000 switches support IGMP snooping in an Ethernet VPN (EVPN)-Virtual Extensible LAN (VXLAN) edge-routed bridging overlay (EVPN-VXLAN topology with a collapsed IP fabric).

Starting with Junos OS Release 17.3R1, QFX10000 switches support the exchange of traffic between multicast sources and receivers in an EVPN-VXLAN edge-routed bridging overlay, which uses IGMP, and sources and receivers in an external Protocol Independent Multicast (PIM) domain. A Layer 2 multicast VLAN (MVLAN) and associated IRB interfaces enable the exchange of multicast traffic between these two domains.

Starting with Junos OS Release 18.1R1, QFX5110 switches support IGMP snooping in an EVPN-VXLAN centrally-routed bridging overlay (EVPN-VXLAN topology with a two-layer IP fabric).

Starting with Junos OS Release 19.3R1, EX9200 switches, MX Series routers, and vMX virtual routers support IGMP version 2 (IGMPv2) and IGMP version 3 (IGMPv3), IGMP snooping, selective multicast forwarding, external PIM gateways, and external multicast routers with an EVPN-VXLAN centrally-routed bridging overlay.

Note

Unless called out explicitly, the information in this topic applies to IGMPv2 and IGMPv3 and the following IP fabric architectures:

  • EVPN-VXLAN edge-routed bridging overlay

  • EVPN-VXLAN centrally-routed bridging overlay

Note

On a Juniper Networks switching device, for example, a QFX10000 switch, you can configure a VLAN. On a Juniper Networks routing device, for example, an MX480 router, you can configure the same entity, which is called a bridge domain. To keep things simple, this topic uses the term VLAN when referring to the same entity configured on both Juniper Networks switching and routing devices.

Benefits of Multicast Forwarding with IGMP Snooping in an EVPN-VXLAN Environment

  • In an environment with a significant volume of multicast traffic, using IGMP snooping constrains multicast traffic in a VLAN to interested receivers and multicast devices, which conserves network bandwidth.

  • Synchronizing the IGMP state among all EVPN devices for multihomed receivers ensures that all subscribed listeners receive multicast traffic, even in cases such as the following:

    • IGMP membership reports for a multicast group might arrive on an EVPN device that is not the Ethernet segment’s designated forwarder (DF).

    • An IGMP message to leave a multicast group arrives at a different EVPN device than the EVPN device where the corresponding join message for the group was received.

  • Selective multicast forwarding conserves bandwidth usage in the EVPN core and reduces the load on egress EVPN devices that do not have listeners.

  • The support of external PIM gateways enables the exchange of multicast traffic between sources and listeners in an EVPN-VXLAN network and sources and listeners in an external PIM domain. Without this support, the sources and listeners in these two domains would not be able to communicate.

Supported IGMP Versions and Group Membership Report Modes

Table 1 outlines the supported IGMP versions and the membership report modes supported for each version.

Table 1: Supported IGMP Versions and Group Membership Report Modes

IGMP Version    Any-Source Multicast (ASM) (*,G) Only    Single-Source Multicast (SSM) (S,G) Only    ASM (*,G) + SSM (S,G)

IGMPv2          Yes (default)                            No                                          No

IGMPv3          Yes (default)                            Yes                                         No

To explicitly configure EVPN devices to process only (S,G) SSM membership reports for IGMPv3, enter the evpn-ssm-reports-only configuration statement at the [edit protocols igmp-snooping] hierarchy level.

You can enable SSM-only processing for one or more VLANs in an EVPN routing instance (EVI). When you enable this option for a routing instance of type virtual-switch, the behavior applies to all VLANs in the virtual switch instance. When you enable this option, ASM reports are not processed and are dropped.

If you don’t include the evpn-ssm-reports-only configuration statement at the [edit protocols igmp-snooping] hierarchy level, and the EVPN devices receive IGMPv3 reports, the devices drop the reports.
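For example, the following set command shows the statement as described above. This is a minimal sketch only; depending on your platform and release, you might instead apply the statement for a specific VLAN or within an EVPN routing instance.

set protocols igmp-snooping evpn-ssm-reports-only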

Summary of Multicast Traffic Forwarding and Routing Use Cases

Table 2 provides a summary of the multicast traffic forwarding and routing use cases that we support in EVPN-VXLAN networks and our recommendation for when you should apply a use case to your EVPN-VXLAN network.

Table 2: Supported Multicast Traffic Forwarding and Routing Use Cases and Recommended Usage

Use case 1: Intra-VLAN multicast traffic forwarding

  Summary: Forwarding of multicast traffic to hosts within the same VLAN.

  Recommended usage: We recommend implementing this basic use case in all EVPN-VXLAN networks.

Use case 2: Inter-VLAN multicast routing and forwarding—IRB interfaces with PIM

  Summary: IRB interfaces using PIM on Layer 3 EVPN devices. These interfaces route multicast traffic between source and receiver VLANs.

  Recommended usage: We recommend implementing this basic use case in all EVPN-VXLAN networks except when you prefer to use an external multicast router to handle inter-VLAN routing (see use case 5).

Use case 3: Inter-VLAN multicast routing and forwarding—PIM gateway with Layer 2 connectivity

  Summary: A Layer 2 mechanism for a data center, which uses IGMP and PIM, to exchange multicast traffic with an external PIM domain.

  Recommended usage: We recommend this use case in either EVPN-VXLAN edge-routed bridging overlays or EVPN-VXLAN centrally-routed bridging overlays.

Use case 4: Inter-VLAN multicast routing and forwarding—PIM gateway with Layer 3 connectivity

  Summary: A Layer 3 mechanism for a data center, which uses IGMP and PIM, to exchange multicast traffic with an external PIM domain.

  Recommended usage: We recommend this use case in EVPN-VXLAN centrally-routed bridging overlays only.

Use case 5: Inter-VLAN multicast routing and forwarding—external multicast router

  Summary: Instead of IRB interfaces on Layer 3 EVPN devices, an external multicast router handles inter-VLAN routing.

  Recommended usage: We recommend this use case when you prefer to use an external multicast router instead of IRB interfaces on Layer 3 EVPN devices to handle inter-VLAN routing.

For example, in a typical EVPN-VXLAN edge-routed bridging overlay, you can implement use case 1 for intra-VLAN forwarding and use case 2 for inter-VLAN routing and forwarding. Or, if you want an external multicast router to handle inter-VLAN routing in your EVPN-VXLAN network instead of EVPN devices with IRB interfaces running PIM, you can implement use case 5 instead of use case 2. If there are hosts in an existing external PIM domain that you want hosts in your EVPN-VXLAN network to communicate with, you can also implement use case 3.

When implementing any of the use cases in an EVPN-VXLAN centrally-routed bridging overlay, you can use a mix of spine devices—for example, MX Series routers, EX9200 switches, and QFX10000 switches. However, if you do this, keep in mind that the functionality available across the spine devices is constrained by the limitations of the least capable spine device. For example, QFX10000 switches support a single routing instance of type virtual-switch. Although MX Series routers and EX9200 switches support multiple routing instances of type evpn or virtual-switch, on each of these devices, you would have to configure a single routing instance of type virtual-switch to interoperate with the QFX10000 switches.

Use Case 1: Intra-VLAN Multicast Traffic Forwarding

We recommend this basic use case for all EVPN-VXLAN networks.

This use case supports the forwarding of multicast traffic to hosts within the same VLAN and includes the following key features:

  • Hosts that are single-homed to an EVPN device or multihomed to more than one EVPN device in all-active mode.

  • Routing instances:

    • (QFX Series switches) A single routing instance of type virtual-switch.

    • (MX Series routers, vMX virtual routers, and EX9200 switches) Multiple routing instances of type evpn or virtual-switch.

      • EVI route target extended community attributes associated with multihomed EVIs. BGP EVPN Type 7 (Join Sync Route) and Type 8 (Leave Sync Route) routes carry these attributes to enable the simultaneous support of multiple EVPN routing instances.

        For information about another supported extended community, see the “EVPN Multicast Flags Extended Community” section.

  • IGMPv2 and IGMPv3. For information about the membership report modes supported for each IGMP version, see Table 1. For information about IGMP route synchronization between multihomed EVPN devices, see Overview of Multicast Forwarding with IGMP or MLD Snooping in an EVPN-MPLS Environment.

  • IGMP snooping. Hosts in a network send IGMP reports expressing interest in particular multicast groups from multicast sources. EVPN devices with IGMP snooping enabled listen to the IGMP reports, and use the snooped information on the access side to establish multicast routes that only forward traffic for a multicast group to interested receivers.

    IGMP snooping supports multicast senders and receivers in the same or different sites. A site can have either receivers only, sources only, or both senders and receivers attached to it.

  • Selective multicast forwarding (advertising EVPN Type 6 Selective Multicast Ethernet Tag (SMET) routes for forwarding only to interested receivers). This feature enables EVPN devices to selectively forward multicast traffic to only the devices in the EVPN core that have expressed interest in that multicast group.

    Note

    We support selective multicast forwarding to devices in the EVPN core only in EVPN-VXLAN centrally-routed bridging overlays.

    When you enable IGMP snooping, selective multicast forwarding is enabled by default (see the sample commands after this list).

  • EVPN devices that do not support IGMP snooping and selective multicast forwarding.
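As a minimal illustration of the IGMP snooping portion of this use case, the following set commands create a VLAN, map it to a VXLAN network identifier (VNI), and enable IGMP snooping on the VLAN. The VLAN name, VLAN ID, and VNI are placeholders, and the rest of the EVPN-VXLAN configuration (interfaces, BGP, EVPN, and so on) is not shown. With IGMP snooping enabled, selective multicast forwarding applies by default.

set vlans VLAN1 vlan-id 100
set vlans VLAN1 vxlan vni 100
set protocols igmp-snooping vlan VLAN1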

Although you can implement this use case in an EVPN single-homed environment, this use case is particularly effective in an EVPN multihomed environment with a high volume of multicast traffic.

All multihomed interfaces must have the same configuration, and all multihomed peer EVPN devices must be in active mode (not standby or passive mode).

An EVPN device that initially receives traffic from a multicast source is known as the ingress device. The ingress device handles the forwarding of intra-VLAN multicast traffic as follows:

  • With IGMP snooping and selective multicast forwarding enabled:

    • As shown in Figure 1, the ingress device (leaf 1) selectively forwards the traffic to other EVPN devices with access interfaces where there are interested receivers for the same multicast group.

    • The traffic is then selectively forwarded to egress devices in the EVPN core that have advertised the EVPN Type 6 SMET routes.

  • If any EVPN devices do not support IGMP snooping or the ability to originate EVPN Type 6 SMET routes, the ingress device floods multicast traffic to these devices.

  • If a host is multihomed to more than one EVPN device, the EVPN devices exchange EVPN Type 7 and Type 8 routes as shown in Figure 1. This exchange synchronizes IGMP membership reports received on multihomed interfaces in case one of the devices fails.

Figure 1: Intra-VLAN Multicast Traffic Flow with IGMP Snooping and Selective Multicast Forwarding

If you have configured IRB interfaces with PIM on one or more of the Layer 3 devices in your EVPN-VXLAN network (use case 2), note that the ingress device forwards the multicast traffic to the Layer 3 devices. The ingress device takes this action to register itself with the Layer 3 device that acts as the PIM rendezvous point (RP).

Use Case 2: Inter-VLAN Multicast Routing and Forwarding—IRB Interfaces with PIM

We recommend this basic use case for all EVPN-VXLAN networks except when you prefer to use an external multicast router to handle inter-VLAN routing (see Use Case 5: Inter-VLAN Multicast Routing and Forwarding—External Multicast Router).

For this use case, IRB interfaces using Protocol Independent Multicast (PIM) route multicast traffic between source and receiver VLANs. The EVPN devices on which the IRB interfaces reside then forward the routed traffic using these key features:

  • Inclusive multicast forwarding with ingress replication

  • IGMP snooping

  • Selective multicast forwarding

The default behavior of inclusive multicast forwarding is to replicate multicast traffic and flood the traffic to all devices. For this use case, however, we support inclusive multicast forwarding coupled with IGMP snooping and selective multicast forwarding. As a result, the multicast traffic is replicated but selectively forwarded to access interfaces and devices in the EVPN core that have interested receivers.

For information about the EVPN multicast flags extended community, which Juniper Networks devices that support EVPN and IGMP snooping include in EVPN Type 3 (Inclusive Multicast Ethernet Tag) routes, see the “EVPN Multicast Flags Extended Community” section.

In an EVPN-VXLAN centrally-routed bridging overlay, you can configure the spine devices so that some of them perform inter-VLAN routing and forwarding of multicast traffic and some do not. At a minimum, we recommend that you configure two spine devices to perform inter-VLAN routing and forwarding.

When there are multiple devices that can perform the inter-VLAN routing and forwarding of multicast traffic, one device is elected as the designated router (DR) for each VLAN.

In the sample EVPN-VXLAN centrally-routed bridging overlay shown in Figure 2, assume that multicast traffic needs to be routed from source VLAN 100 to receiver VLAN 101. Receiver VLAN 101 is configured on spine 1, which is designated as the DR for that VLAN.

Figure 2: Inter-VLAN Multicast Traffic Flow with IRB Interface and PIM

After the inter-VLAN routing occurs, the EVPN device forwards the routed traffic to:

  • Access interfaces that have multicast listeners (IGMP snooping).

  • Egress devices in the EVPN core that have sent EVPN Type 6 SMET routes for the multicast group members in the receiver VLAN (selective multicast forwarding).

To understand how IGMP snooping and selective multicast forwarding reduce the impact of the replicating and flooding behavior of inclusive multicast forwarding, assume that an EVPN-VXLAN centrally-routed bridging overlay includes the following elements:

  • 100 IRB interfaces using PIM starting with irb.1 and going up to irb.100

  • 100 VLANs

  • 20 EVPN devices

For the sample EVPN-VXLAN centrally-routed bridging overlay, m represents the number of VLANs, and n represents the number of EVPN devices. Assuming that IGMP snooping and selective multicast forwarding are disabled, when multicast traffic arrives on irb.1, the EVPN device replicates each packet m * n times, or 100 * 20 = 2,000 times. If the incoming traffic rate for a particular multicast group is 100 packets per second (pps), the EVPN device would have to replicate 200,000 pps for that multicast group.

If IGMP snooping and selective multicast forwarding are enabled in the sample EVPN-VXLAN centrally-routed bridging overlay, assume that there are interested receivers for a particular multicast group on only 4 VLANs and 3 EVPN devices. In this case, the EVPN device replicates each packet only 4 * 3 = 12 times, so the 100-pps stream results in 1,200 pps of replicated traffic. Note the significant reduction in the replication rate and the amount of traffic that must be forwarded.

When implementing this use case, keep in mind that there are important differences between EVPN-VXLAN centrally-routed bridging overlays and EVPN-VXLAN edge-routed bridging overlays. Table 3 outlines these differences.

Table 3: Use Case 2: Important Differences for EVPN-VXLAN Edge-routed and Centrally-routed Bridging Overlays

EVPN-VXLAN edge-routed bridging overlay:

  • Support for a mix of Juniper Networks devices: No. We support only QFX10000 switches for all EVPN devices.

  • All EVPN devices required to host all VLANs in the EVPN-VXLAN network: Yes.

  • All EVPN devices required to host all VLANs that include multicast listeners: Yes.

  • Required PIM configuration: Configure PIM distributed designated router (DDR) functionality on the IRB interfaces of the EVPN devices (see the configuration sketch after this table).

EVPN-VXLAN centrally-routed bridging overlay:

  • Support for a mix of Juniper Networks devices: Yes. For spine devices, we support a mix of MX Series routers, EX9200 switches, and QFX10000 switches. For leaf devices, we support a mix of MX Series routers and QFX5110 switches. Note: If you deploy a mix of spine devices, keep in mind that the functionality available across the spine devices is constrained by the limitations of the least capable spine device. For example, QFX10000 switches support a single routing instance of type virtual-switch. Although MX Series routers and EX9200 switches support multiple routing instances of type evpn or virtual-switch, on each of these devices, you would have to configure a single routing instance of type virtual-switch to interoperate with the QFX10000 switches.

  • All EVPN devices required to host all VLANs in the EVPN-VXLAN network: No.

  • All EVPN devices required to host all VLANs that include multicast listeners: No. However, you must configure all VLANs that include multicast listeners on each spine device that performs inter-VLAN routing. You don't need to configure all VLANs that include multicast listeners on each leaf device.

  • Required PIM configuration: Do not configure DDR functionality on the IRB interfaces of the spine devices. When you do not enable DDR on an IRB interface, PIM remains in its default mode on the interface, which means that the interface acts as the designated router for the VLANs (see the configuration sketch after this table).
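The following minimal set-command sketch illustrates the PIM difference summarized in Table 3. Only the PIM portion of the configuration is shown, and the IRB unit number (irb.101) and RP address (192.0.2.10) are placeholders for your own values.

For an EVPN device in an edge-routed bridging overlay, enable DDR on each IRB interface:

set protocols pim rp static address 192.0.2.10
set protocols pim interface irb.101 distributed-dr

For a spine device in a centrally-routed bridging overlay that performs inter-VLAN routing, configure PIM on the IRB interface without the distributed-dr statement:

set protocols pim rp static address 192.0.2.10
set protocols pim interface irb.101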

In addition to the differences described in Table 3, a hair pinning issue exists with an EVPN-VXLAN centrally-routed bridging overlay. Multicast traffic typically flows from a source host to a leaf device to a spine device, which handles the inter-VLAN routing. The spine device then replicates and forwards the traffic to VLANs and EVPN devices with multicast listeners. When forwarding the traffic in this type of EVPN-VXLAN overlay, be aware that the spine device returns the traffic to the leaf device from which the traffic originated (hair-pinning). This issue is inherent with the design of the EVPN-VXLAN centrally-routed bridging overlay. When designing your EVPN-VXLAN overlay, keep this issue in mind especially if you expect the volume of multicast traffic in your overlay to be high and the replication rate of traffic (m * n times) to be large.

Use Case 3: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer 2 Connectivity

We recommend the PIM gateway with Layer 2 connectivity use case for both EVPN-VXLAN edge-routed bridging overlays and EVPN-VXLAN centrally-routed bridging overlays.

For this use case, we assume the following:

  • You have deployed an EVPN-VXLAN network to support a data center.

  • In this network, you have already set up:

    • Intra-VLAN multicast traffic forwarding as described in use case 1.

    • Inter-VLAN multicast traffic routing and forwarding as described in use case 2.

  • There are multicast sources and receivers within the data center that you want to communicate with multicast sources and receivers in an external PIM domain.

Note

We support this use case with both EVPN-VXLAN edge-routed bridging overlays and EVPN-VXLAN centrally-routed bridging overlays.

The use case provides a mechanism for the data center, which uses IGMP and PIM, to exchange multicast traffic with the external PIM domain. Using a Layer 2 multicast VLAN (MVLAN) and associated IRB interfaces on the EVPN devices in the data center to connect to the PIM domain, you can enable the forwarding of multicast traffic from:

  • An external multicast source to internal multicast destinations

  • An internal multicast source to external multicast destinations

    Note

    In this section, external refers to components in the PIM domain. Internal refers to components in your EVPN-VXLAN network that supports a data center.

Figure 3 shows the required key components for this use case in a sample EVPN-VXLAN centrally-routed bridging overlay.

Figure 3: Use Case 3: PIM Gateway with Layer 2 Connectivity—Key Components

  • Components in the PIM domain:

    • A PIM gateway that acts as an interface between an existing PIM domain and the EVPN-VXLAN network. The PIM gateway is a Juniper Networks or third-party Layer 3 device on which PIM and a routing protocol such as OSPF are configured. The PIM gateway does not run EVPN. You can connect the PIM gateway to one, some, or all EVPN devices.

    • A PIM rendezvous point (RP) is a Juniper Networks or third-party Layer 3 device on which PIM and a routing protocol such as OSPF are configured. You must also configure the PIM RP to translate PIM join or prune messages into corresponding IGMP report or leave messages then forward the reports and messages to the PIM gateway.

  • Components in the EVPN-VXLAN network:

    Note

    These components are in addition to the components already configured for use cases 1 and 2.

    • EVPN devices. For redundancy, we recommend multihoming the EVPN devices to the PIM gateway through an aggregated Ethernet interface on which you configure an Ethernet segment identifier (ESI). On each EVPN device, you must also configure the following for this use case:

      • A Layer 2 multicast VLAN (MVLAN). The MVLAN is a VLAN that is used to connect the PIM gateway. In the MVLAN, PIM is enabled.

      • An MVLAN IRB interface on which you configure PIM, IGMP snooping, and a routing protocol such as OSPF. To reach the PIM gateway, the EVPN device forwards multicast traffic out of this interface.

      • To enable the EVPN devices to forward multicast traffic to the external PIM domain, configure the following (a consolidated command sketch follows this list):

        • PIM-to-IGMP translation:

          • For EVPN-VXLAN edge-routed bridging overlays, configure PIM-to-IGMP translation by including the pim-to-igmp-proxy upstream-interface irb-interface-name configuration statements at the [edit routing-options multicast] hierarchy level. For the IRB interface, specify the MVLAN IRB interface.

          • For EVPN-VXLAN centrally-routed bridging overlays, you do not need to include the pim-to-igmp-proxy upstream-interface irb-interface-name configuration statements. In this type of overlay, the PIM protocol handles the routing of multicast traffic from the PIM domain to the EVPN-VXLAN network and vice versa.

        • Multicast router interface. Configure the multicast router interface by including the multicast-router-interface configuration statement at the [edit routing-instances routing-instance-name bridge-domains bridge-domain-name protocols igmp-snooping interface interface-name] hierarchy level. For the interface name, specify the MVLAN IRB interface.

    • PIM passive mode. For EVPN-VXLAN edge-routed bridging overlays only, you must ensure that the PIM gateway views the data center as only a Layer 2 multicast domain. To do so, include the passive configuration statement at the [edit protocols pim] hierarchy level.
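The following set commands sketch these statements for an EVPN device in an edge-routed bridging overlay. The routing instance name (EVI-1), bridge domain name (MVLAN-BD), and MVLAN IRB interface (irb.500) are placeholders. In a centrally-routed bridging overlay, omit the pim-to-igmp-proxy and passive statements, as described earlier in this list.

set routing-options multicast pim-to-igmp-proxy upstream-interface irb.500
set routing-instances EVI-1 bridge-domains MVLAN-BD protocols igmp-snooping interface irb.500 multicast-router-interface
set protocols pim passive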

Use Case 4: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer 3 Connectivity

We recommend the PIM gateway with Layer 3 connectivity use case for EVPN-VXLAN centrally-routed bridging overlays only.

For this use case, we assume the following:

  • You have deployed an EVPN-VXLAN network to support a data center.

  • In this network, you have already set up:

    • Intra-VLAN multicast traffic forwarding as described in use case 1.

    • Inter-VLAN multicast traffic routing and forwarding as described in use case 2.

  • There are multicast sources and receivers within the data center that you want to communicate with multicast sources and receivers in an external PIM domain.

Note

We support this use case with EVPN-VXLAN centrally-routed bridging overlays only.

This use case provides a mechanism for the data center, which uses IGMP and PIM, to exchange multicast traffic with the external PIM domain. Using Layer 3 interfaces on the EVPN devices in the data center to connect to the PIM domain, you can enable the forwarding of multicast traffic from:

  • An external multicast source to internal multicast destinations

  • An internal multicast source to external multicast destinations

    Note

    In this section, external refers to components in the PIM domain. Internal refers to components in your EVPN-VXLAN network that supports a data center.

Figure 4 shows the required key components for this use case in a sample EVPN-VXLAN centrally-routed bridging overlay.

Figure 4: Use Case 4: PIM Gateway with Layer 3 Connectivity—Key Components

  • Components in the PIM domain:

    • A PIM gateway that acts as an interface between an existing PIM domain and the EVPN-VXLAN network. The PIM gateway is a Juniper Networks or third-party Layer 3 device on which PIM and a routing protocol such as OSPF are configured. The PIM gateway does not run EVPN. You can connect the PIM gateway to one, some, or all EVPN devices.

    • A PIM rendezvous point (RP) is a Juniper Networks or third-party Layer 3 device on which PIM and a routing protocol such as OSPF are configured. You must also configure the PIM RP to translate PIM join or prune messages into corresponding IGMP report or leave messages then forward the reports and messages to the PIM gateway.

  • Components in the EVPN-VXLAN network:

    Note

    These components are in addition to the components already configured for use cases 1 and 2.

    • EVPN devices. You can connect one, some, or all EVPN devices to a PIM gateway. You must make each connection through a Layer 3 interface on which PIM is configured. Other than the Layer 3 interface with PIM, this use case does not require additional configuration on the EVPN devices.
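As a minimal sketch, the following set commands configure a Layer 3 interface toward the PIM gateway and enable PIM on it. The interface name and IP address are placeholders.

set interfaces xe-0/0/10 unit 0 family inet address 192.0.2.1/30
set protocols pim interface xe-0/0/10.0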

Use Case 5: Inter-VLAN Multicast Routing and Forwarding—External Multicast Router

Starting with Junos OS Release 17.3R1, you can configure an EVPN device to perform inter-VLAN forwarding of multicast traffic without having to configure IRB interfaces on the EVPN device. In such a scenario, an external multicast router is used to send IGMP queries to solicit reports and to forward VLAN traffic through a Layer 3 multicast protocol such as PIM. IRB interfaces are not supported with the use of an external multicast router.

For this use case, you must include the igmp-snooping proxy configuration statements at the [edit routing-instances routing-instance-name protocols] hierarchy level.
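For example, assuming a routing instance named EVI-1 (a placeholder), the configuration looks similar to the following:

set routing-instances EVI-1 protocols igmp-snooping proxy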

EVPN Multicast Flags Extended Community

Juniper Networks devices that support EVPN-VXLAN and IGMP snooping also support the EVPN multicast flags extended community. When you have enabled IGMP snooping on one of these devices, the device adds the community to EVPN Type 3 (Inclusive Multicast Ethernet Tag) routes.

The absence of this community in an EVPN Type 3 route can indicate the following about the device that advertises the route:

  • The device does not support IGMP snooping.

  • The device does not have IGMP snooping enabled on it.

  • The device is running a Junos OS software release that doesn’t support the community.

  • The device does not support the advertising of EVPN Type 6 SMET routes.

  • The device has IGMP snooping and a Layer 3 interface with PIM enabled on it. Although the Layer 3 interface with PIM performs snooping on the access side and selective multicast forwarding on the EVPN core, the device needs to attract all traffic to perform source registration to the PIM RP and inter-VLAN routing.

Figure 5 shows the EVPN multicast flag extended community, which has the following characteristics:

  • The community is encoded as an 8-octet value.

  • The Type field has a value of 6.

  • The IGMP Proxy Support flag is set to 1, which means that the device supports IGMP proxy.

Figure 5: EVPN Multicast Flag Extended Community

Example: Configuring IGMP Snooping in an EVPN-VXLAN Environment

This example shows how to configure IGMP snooping on provider edge (PE) devices in an Ethernet VPN (EVPN)-Virtual Extensible LAN (VXLAN) environment. When multicast traffic arrives at the VXLAN core, a PE device configured with EVPN forwards traffic only to the local access interfaces where there are IGMP listeners.

Requirements

This example uses the following hardware and software components:

  • Two QFX10000 switches configured as multihomed PE devices connected to the CE device, one QFX10000 switch configured as a PE device connected to the multihomed PEs, and one QFX5110 switch configured as the CE device.

  • Junos OS Release 17.2R1 or later running on all devices.

Overview

IGMP snooping is used to constrain multicast traffic in a broadcast domain to interested receivers and multicast devices. In an environment with significant multicast traffic, IGMP snooping preserves bandwidth because multicast traffic is forwarded only on those interfaces where there are IGMP listeners. IGMP is enabled to manage multicast group membership.

When you enable IGMP snooping on each VLAN, the device advertises EVPN Type 7 and Type 8 routes (Join and Leave Sync Routes) to synchronize IGMP join and leave states among multihoming peer devices in the EVPN instance. On the access side, devices only forward multicast traffic to subscribed listeners. However, multicast traffic is still flooded in the EVPN core even when there are no remote receivers.

Configuring IGMP snooping in an EVPN-VXLAN environment requires the following:

  • Multihoming peer PE devices in all-active mode.

  • IGMP version 2 only. (IGMP versions 1 and 3 are not supported.)

  • IGMP snooping configured in proxy mode for the PE to become the IGMP querier for the local access interfaces.

Note

On QFX5110 switches, to enable IGMP snooping in an EVPN-VXLAN multihoming environment, you must configure IGMP snooping on all VLANs associated with any configured VXLANs. You cannot selectively enable IGMP snooping only on those VLANs that might have interested listeners, because all the VXLANs share VXLAN tunnel endpoints (VTEPs) between the same multihoming peers and must have the same settings.

This feature supports both intra-VLAN and inter-VLAN multicast forwarding. You can configure a PE device to perform either or both. In this example, to enable inter-VLAN forwarding, each PE device is configured as a statically defined Protocol Independent Multicast (PIM) rendezvous point (RP) to enable multicast forwarding. You also configure the distributed-dr statement at the [edit protocols pim interface interface-name] hierarchy level for each IRB interface. This mode enables PIM to forward multicast traffic more efficiently by disabling PIM features that are not required in this scenario. When you configure this statement, PIM ignores the designated router (DR) status of the interface when processing IGMP reports received on the interface. When the interface receives the IGMP report, the PE device sends PIM upstream join messages to pull the multicast stream and forward it to the interface—regardless of the DR status of the interface.

Topology

Figure 6 illustrates an EVPN-VXLAN environment where two PE devices (PE1 and PE2) are connected to the customer edge (CE) device. These PEs are dual-homed in active-active mode to provide redundancy. A third PE device forwards traffic to the PE devices that face the CE. IGMP is enabled on the integrated routing and bridging (IRB) interfaces. The CE device hosts five VLANs; IGMP snooping is enabled on all of the VLANs. Because this implementation does not support the use of a multicast router, each VLAN in the PE is enabled as an IGMP Layer 2 querier. The multihomed PE devices forward traffic toward the CE only on those interfaces where there are IGMP listeners.

Figure 6: IGMP Snooping in an EVPN-VXLAN Environment

Configuration

To configure IGMP Snooping in an EVPN-VXLAN environment, perform these tasks:

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Device PE1

Device PE2

CE

PE3

Configuring PE1

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.

To configure device PE1:

  1. Specify the number of aggregated Ethernet logical interfaces.
  2. Configure the interfaces.
  3. Configure active-active multihoming and enable the Link Aggregation Control Protocol (LACP) on each aggregated Ethernet interface.
  4. Configure each aggregated Ethernet interface as a trunk port.
  5. Configure IRB interfaces and virtual-gateway addresses.
  6. Configure the autonomous system.
  7. Configure OSPF.
  8. Configure BGP internal peering.
  9. Configure the VLANs.
  10. Enable EVPN.
  11. Configure an export routing policy to load balance EVPN traffic.
  12. Configure the source interface for the VXLAN tunnel.
  13. Enable IGMP on the IRB interfaces associated with the VLANs.
  14. Enable IGMP snooping on the VLANs.
  15. Configure PIM by defining a static rendezvous point and enabling it on the IRB interfaces associated with the VLANs.

    Note

    This step is required only if you want to configure inter-VLAN forwarding. If your PE device is performing only intra-VLAN forwarding, omit this step.
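As a minimal sketch of steps 13 through 15 only, the following set commands enable IGMP on an IRB interface, enable IGMP snooping in proxy mode on the corresponding VLAN, and configure PIM with a static rendezvous point and distributed DR mode on the IRB interface. The interface, VLAN name, and RP address are placeholders; repeat the interface and VLAN statements for each VLAN and IRB interface in your topology.

set protocols igmp interface irb.1
set protocols igmp-snooping vlan VLAN1 proxy
set protocols pim rp static address 10.255.255.1
set protocols pim interface irb.1 distributed-dr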

Results

From configuration mode, confirm your configuration by entering the show chassis, show interfaces, show routing-options, show protocols, show vlans, show policy-options, and show switch-options commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Configuring PE2

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.

To configure device PE2:

  1. Specify the number of aggregated Ethernet logical interfaces.
  2. Configure the interfaces.
  3. Configure active-active multihoming and enable the Link Aggregation Control Protocol (LACP) on each aggregated Ethernet interface.
  4. Configure each aggregated Ethernet interface as a trunk port.
  5. Configure IRB interfaces and virtual-gateway addresses.
  6. Configure the autonomous system.
  7. Configure OSPF.
  8. Configure BGP internal peering.
  9. Configure the VLANs.
  10. Enable EVPN.
  11. Configure an export routing policy to load balance EVPN traffic and apply it to the forwarding-table.
  12. Configure the source interface for the VXLAN tunnel.
  13. Enable IGMP on the IRB interfaces.
  14. Enable IGMP snooping on the VLANs.
  15. Configure PIM by defining a static rendezvous point and enabling it on the IRB interfaces.

    Note

    This step is required only if you want to configure inter-VLAN forwarding. If your PE device is performing only intra-VLAN forwarding, omit this step.

Results

From configuration mode, confirm your configuration by entering the show chassis, show interfaces, show routing-options, show protocols, show vlans, show policy-options, and show switch-options commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Configuring CE Device

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.

To configure device CE:

  1. Specify the number of aggregated Ethernet logical interfaces.
  2. Configure the interfaces and enable LACP on the aggregated Ethernet interfaces.
  3. Create the Layer 2 customer bridge domains and the VLANs associated with the domains.
  4. Configure each interface included in the CE domain as a trunk port for accepting packets tagged with the specified VLAN identifiers.

Results

From configuration mode, confirm your configuration by entering the show chassis and show interfaces commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Configuring PE3

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.

To configure device PE3:

  1. Configure the interfaces.
  2. Configure each logical Ethernet interface as a trunk port for accepting packets tagged with the specified VLAN identifiers.
  3. Configure IRB interfaces and virtual-gateway addresses.
  4. Configure the autonomous system.
  5. Configure OSPF.
  6. Configure BGP internal peering with PE1 and PE2.
  7. Configure the VLANs.
  8. Enable EVPN.
  9. Configure an export routing policy to load balance EVPN traffic.
  10. Configure the source interface for the VXLAN tunnel.
  11. Enable IGMP on the IRB interfaces.
  12. Enable IGMP snooping on the VLANs.
  13. Configure PIM by defining the local rendezvous point and enabling it on the IRB interfaces.

    Note

    This step is required only if you want to configure inter-VLAN forwarding. If your PE device is performing only intra-VLAN forwarding, omit this step.

Results

From configuration mode, confirm your configuration by entering the show chassis, show interfaces, show routing-options, show protocols, show vlans, show policy-options, and show switch-options commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Verification

Confirm that the configuration is working properly.

Verifying IGMP Messages are Synced

Purpose

Verify on each PE that IGMP join and leave messages are synced.

Action

From operational mode, run the show evpn instance extensive command.

user@PE1> show evpn instance extensive

Meaning

The SG Sync field is Enabled, and the IM Core next-hop field displays a valid route.

Verifying Source Addresses Are Learned and Multicast Traffic Is Being Forwarded

Purpose

Verify on each PE that multicast receivers have learned the source interface for the VXLAN tunnel.

Action

From operational mode, enter the show evpn igmp-snooping database extensive l2-domain-id 1 command and the show igmp snooping evpn database vlan VLAN1 commands.

These commands display output for VLAN1. You can use them to display output for each configured VLAN.

From operational mode, enter the show evpn multicast-snooping next-hops command to verify that the downstream interface has been learned.

user@PE1> show evpn igmp-snooping database extensive l2-domain-id 1
user@PE1> show igmp snooping evpn database vlan VLAN1
user@PE1> show evpn multicast-snooping next-hops

WHAT'S NEXT

For more information on configuring EVPN-VXLAN, see the EVPN Feature Guide.

Release History Table

  • 19.3R1: Starting with Junos OS Release 19.3R1, EX9200 switches, MX Series routers, and vMX virtual routers support IGMP version 2 (IGMPv2) and IGMP version 3 (IGMPv3), IGMP snooping, selective multicast forwarding, external PIM gateways, and external multicast routers with an EVPN-VXLAN centrally-routed bridging overlay.

  • 18.1R1: Starting with Junos OS Release 18.1R1, QFX5110 switches support IGMP snooping in an EVPN-VXLAN centrally-routed bridging overlay (EVPN-VXLAN topology with a two-layer IP fabric).

  • 17.3R1: Starting with Junos OS Release 17.3R1, QFX10000 switches support the exchange of traffic between multicast sources and receivers in an EVPN-VXLAN edge-routed bridging overlay, which uses IGMP, and sources and receivers in an external Protocol Independent Multicast (PIM) domain. A Layer 2 multicast VLAN (MVLAN) and associated IRB interfaces enable the exchange of multicast traffic between these two domains.

  • 17.3R1: Starting with Junos OS Release 17.3R1, you can configure an EVPN device to perform inter-VLAN forwarding of multicast traffic without having to configure IRB interfaces on the EVPN device.

  • 17.2R1: Starting with Junos OS Release 17.2R1, QFX10000 switches support IGMP snooping in an Ethernet VPN (EVPN)-Virtual Extensible LAN (VXLAN) edge-routed bridging overlay (EVPN-VXLAN topology with a collapsed IP fabric).