Understanding Distributed IGMP
By default, Internet Group Management Protocol (IGMP) processing takes place on the Routing Engine of MX Series routers. This centralized architecture can reduce performance in scaled environments, or when the Routing Engine is busy with CLI changes or route updates. You can improve IGMP processing performance by enabling distributed IGMP, which uses the Packet Forwarding Engine to sustain a higher system-wide processing rate for join and leave events.
Distributed IGMP Overview
Distributed IGMP works by moving IGMP processing from the Routing Engine to the Packet Forwarding Engine. When distributed IGMP is not enabled, IGMP processing is centralized on the routing protocol process (rpd) running on the Routing Engine. When you enable distributed IGMP, join and leave events are processed across Modular Port Concentrators (MPCs) on the Packet Forwarding Engine. Because join and leave processing is distributed across multiple MPCs instead of being processed through a centralized rpd on the Routing Engine, performance improves and join and leave latency decreases.
When you enable distributed IGMP, each Packet Forwarding Engine:
- Processes reports and generates queries.
- Maintains a local group-membership-to-interface mapping table and updates the forwarding state based on this table.
- Runs distributed IGMP independently.
- Implements the group-policy and ssm-map-policy IGMP interface options.
Information from group-policy and ssm-map-policy IGMP interface options passes from the Routing Engine to the Packet Forwarding Engine.
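As a sketch, these two policy options are configured on the IGMP interface in the usual way; the interface and policy names below are placeholders:

```
set protocols igmp interface ge-1/0/0.0 group-policy ALLOWED-GROUPS
set protocols igmp interface ge-1/0/0.0 ssm-map-policy SSM-MAP
```

The Routing Engine pushes the resulting policy information down to each Packet Forwarding Engine, which then enforces it locally.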
When you enable distributed IGMP, the rpd on the Routing Engine:
- Synchronizes all IGMP configuration (including global and interface-level configuration) to each Packet Forwarding Engine.
- Runs passive IGMP on distributed interfaces.
- Notifies Protocol Independent Multicast (PIM) of all group memberships per distributed IGMP interface.
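A minimal configuration sketch for enabling distributed IGMP on one interface might look as follows (the interface name is a placeholder; enhanced IP network services is a prerequisite, as described in the guidelines):

```
set protocols igmp interface ge-1/0/0.0 distributed
```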
Guidelines for Configuring Distributed IGMP
Consider the following guidelines when you configure distributed IGMP on an MX Series router with MPCs:
Distributed IGMP increases network performance by reducing maximum join and leave latency and by increasing the rate at which join and leave events are processed.
Join and leave latency may increase if multicast traffic is not preprovisioned, that is, if traffic for the requested group is not already arriving at the MX Series router when a join or leave event is received from a client interface.
Distributed IGMP is supported for Ethernet interfaces. It does not improve performance on PIM interfaces.
Starting in Junos OS Release 18.2, distributed IGMP is supported on aggregated Ethernet interfaces and for enhanced subscriber management; IGMP processing for subscriber flows moves from the Routing Engine to the Packet Forwarding Engine of supported line cards. A multicast group can comprise mixed receivers, that is, some using centralized IGMP and some using distributed IGMP.
You can reduce initial join delays by configuring Protocol Independent Multicast (PIM) static joins or IGMP static joins. You can reduce initial delays even further by preprovisioning multicast traffic, so that MPCs hosting distributed IGMP interfaces already receive the multicast streams when a join arrives.
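For example, an IGMP static join could be preconfigured on a distributed interface as follows (the interface name and group address are placeholders):

```
set protocols igmp interface ge-1/0/0.0 static group 233.252.0.1
```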
For distributed IGMP to function properly, you must enable enhanced IP network services on a single-chassis MX Series router. Virtual Chassis is not supported.
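Assuming a single-chassis MX Series router, this prerequisite can be met with:

```
set chassis network-services enhanced-ip
```

Depending on the mode the chassis is currently running, changing the network services mode may require a system reboot to take effect.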
When you enable distributed IGMP, the following interface options are not supported on the Packet Forwarding Engine: oif-map, group-limit, ssm-map, and static. The traceoptions and accounting statements can only be enabled for IGMP operations still performed on the Routing Engine; they are not supported on the Packet Forwarding Engine. The clear igmp membership command is not supported when distributed IGMP is enabled.