Technical Overview

 

Multicast delivers market data feeds from application sources to multiple receivers without burdening the source or the receivers, while using a minimum of network bandwidth.

In this configuration example for multicast deployment, the QFX5100 devices serve as the last-hop router (LHR) and first-hop router (FHR).

Figure 1: Multicast Architecture Used in This NCE
  • Multicast source: Each multicast source sends a data feed to a multicast group address.

  • FHR: The QFX5100 device to which the multicast sources connect is the FHR. The FHR forwards the multicast group ID and source to the next-hop multicast router toward the predefined rendezvous point (RP).

  • LHR: The QFX5100 device to which the multicast receivers connect serves as an LHR. The LHR forwards data feeds to the multicast receiver.

  • Rendezvous point (RP): The RP serves as the information exchange point for the other routers. All routers in a PIM domain must provide mapping to an RP. Only the RP must be aware of the active multicast sources.

  • Multicast receiver: The multicast receiver requests data feeds from the multicast source by sending an IGMP join message to the LHR. IGMP snooping is enabled on the QFX5100 devices to monitor Internet Group Management Protocol (IGMP) messages from the hosts and the multicast source. IGMP snooping conserves bandwidth by enabling the switch to send multicast data feeds only to the interfaces connected to devices that need to receive the multicast traffic.
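
    As a sketch, IGMP snooping might be enabled on the QFX5100 devices with configuration along these lines (the VLAN scope shown is illustrative; a deployment may instead snoop only specific VLANs):

```
# Enable IGMP snooping on all VLANs (illustrative scope)
set protocols igmp-snooping vlan all
```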

  • Market data feed: Market data feeds are typically many-to-many applications in which several multicast sources send data to multicast groups. Market data is delivered through dual multicast streams (primary and backup feeds). In most cases, even if a single data packet is lost on one feed, it can be recovered from the other feed.

    Multicast data feeds are replicated by routers enabled with Protocol Independent Multicast (PIM) and other supporting multicast protocols. The replication occurs in the network at the point where the primary and backup feeds diverge, which results in the most efficient delivery of market data to multiple receivers.

  • High availability cluster: Chassis clustering provides network node redundancy by grouping a pair of supported SRX Series devices of the same type into a cluster. In this example, SRX5600 Services Gateways form the cluster. The multicast feeds go through the SRX chassis cluster, which is configured to work in active/active mode for redundancy and efficiency. A chassis cluster in active/active mode has transit data feeds passing through both nodes of the cluster at all times. Even if one of the nodes goes down, impacting the corresponding feed, the other node and its feed remain active.

    The configuration uses four redundant Ethernet (reth) interfaces. Each reth has ports from both nodes, and each reth connects to a QFX5100. Each reth is assigned a unique subnet, which helps to avoid PIM asserts.
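
    A minimal sketch of one such reth, assuming illustrative member interface names, redundancy-group number, and addressing:

```
# Define four redundant Ethernet interfaces on the chassis cluster
set chassis cluster reth-count 4

# reth0 takes one member port from each node (interface names are illustrative)
set interfaces xe-1/0/1 gigether-options redundant-parent reth0
set interfaces xe-13/0/1 gigether-options redundant-parent reth0

# Bind reth0 to a redundancy group and give it a unique subnet to avoid PIM asserts
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 192.168.10.1/24
```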

  • Security: Firewall security policies enable authentication of the PIM neighbors. QFX devices also support distributed denial-of-service (DDoS) protection for policing control-plane traffic. For more information on DDoS protection, see Understanding Distributed Denial-of-Service Protection on QFX Series Switches. SRX Series devices are also used to create security policies that allow traffic between zones. A statically configured anycast RP provides the greatest level of protection against malicious or misconfigured devices.

Table 1 describes the network type, platforms, technologies, and the protocols used in this configuration.

Table 1: Network Elements Used in Multicast Configuration

  • Network type: Multicast source and receiver LANs
    Platforms: QFX5100
    Technologies: 1-Gigabit, 10-Gigabit, and 40-Gigabit Ethernet interfaces; SRX chassis cluster
    Protocols: PIM-SM, MSDP, OSPF, IBGP, EBGP, BFD, RTG

  • Network type: Chassis cluster
    Platforms: SRX5600
    Technologies: 1-Gigabit, 10-Gigabit, and 40-Gigabit Ethernet interfaces; SRX chassis cluster; firewall security policies
    Protocols: PIM-SM, EBGP, BFD

The multicast deployment configured with the protocols in Table 2 provides the financial trading environment with an edge to optimize its market data delivery.

Table 2: Supported Protocols

Protocols

Description

  • PIM sparse mode (PIM-SM)

PIM-SM, the multicast delivery protocol used here, works well for both one-to-many and many-to-many distribution of data over a LAN, WAN, or the Internet. The PIM-SM protocol is also widely deployed and well understood. For more information on PIM-SM, see PIM-SM.
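
As a sketch, enabling PIM sparse mode with a statically configured RP might look like this (the RP address is illustrative):

```
# Run PIM sparse mode on all interfaces
set protocols pim interface all mode sparse

# Point at the (anycast) RP address; 10.1.1.1 is illustrative
set protocols pim rp static address 10.1.1.1
```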

  • Anycast RP and MSDP

Anycast RP and MSDP share the load across RPs and provide redundancy. When an RP fails, sources and receivers are directed to a new RP by means of unicast routing. When you configure anycast RP, you bypass the restriction of having one active RP per multicast group, and instead deploy multiple RPs for the same group range. The RP routers share one unicast IP address. Sources known to one RP are advertised to the other RPs through the Multicast Source Discovery Protocol (MSDP). Sources and receivers use the closest RP, as determined by the interior gateway protocol (IGP). MSDP interconnects multiple IPv4 PIM-SM domains, which gives PIM-SM both RP redundancy and interdomain multicast. For more information, see Anycast RP with or without MSDP.
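
On each RP router, anycast RP with MSDP might be sketched as follows (all addresses are illustrative): the loopback carries both a unique router address and the shared anycast RP address, and MSDP peers over the unique addresses.

```
# Unique router address plus the shared anycast RP address on lo0
set interfaces lo0 unit 0 family inet address 10.255.0.1/32
set interfaces lo0 unit 0 family inet address 10.1.1.1/32

# Act as RP using the shared anycast address
set protocols pim rp local address 10.1.1.1

# Peer with the other RP over the unique loopback addresses
set protocols msdp peer 10.255.0.2 local-address 10.255.0.1
```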

  • Open Shortest Path First (OSPF)

OSPF detects changes in the topology, such as link failures, and converges on a new loop-free routing structure within seconds. OSPF computes the shortest path tree for each route using a method based on a shortest-path-first algorithm. OSPF is used within an autonomous system (AS). For more information, see OSPF.

  • Border Gateway Protocol (BGP)

BGP is an exterior gateway protocol (EGP) that is used to exchange routing information among devices in different ASs. For more information, see BGP.

  • Bidirectional Forwarding Detection (BFD)

BFD is used to detect link failures and reroute traffic quickly. For more information, see BFD.
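
BFD is attached to the routing protocols rather than configured standalone; a sketch with assumed interface names, group names, and timers:

```
# BFD on an OSPF interface (names and timers are illustrative)
set protocols ospf area 0.0.0.0 interface xe-0/0/1.0 bfd-liveness-detection minimum-interval 300

# BFD on an EBGP peer group (group name is illustrative)
set protocols bgp group ebgp-peers bfd-liveness-detection minimum-interval 300
```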

  • Redundant Trunk Group (RTG)

RTG is enabled on LHR and FHR devices to enable quick failover of traffic during link failures. For more information, see RTG.
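
A sketch of an RTG on a QFX5100, assuming illustrative group and interface names; the primary link carries traffic while the secondary takes over on failure:

```
# Primary and secondary links of the redundant trunk group (names are illustrative)
set switching-options redundant-trunk-group group rtg1 interface xe-0/0/10 primary
set switching-options redundant-trunk-group group rtg1 interface xe-0/0/11
```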

Design Considerations

  • PIM-SM is known not to work well with intermittent multicast sources. If there are known intermittent multicast sources, use PIM source-specific multicast (SSM) to avoid initial multicast packet loss.

  • PIM exhibits more complicated behavior in multiaccess topologies than in simpler point-to-point topologies.