Best Practices for Managing a Chassis Cluster

The following are some best practices for managing chassis clusters on SRX Series devices.

Using Dual Control Links

With dual control links, two pairs of control link interfaces are connected between the devices in a cluster. Dual control links are supported on the SRX5000 and SRX3000 lines. Having two control links helps to avoid a possible single point of failure. For the SRX5000 line, this functionality requires a second Routing Engine, as well as a second Switch Control Board (SCB) to house it, to be installed on each device in the cluster. The second Routing Engine does not provide backup functionality; its only purpose is to initialize the switch on the SCB. For the SRX3000 line, this functionality requires an SRX Clustering Module (SCM) to be installed on each device in the cluster. Although the SCM fits in the Routing Engine slot, it is not a Routing Engine; SRX3000 line devices do not support a second Routing Engine. The purpose of the SCM is to initialize the second control link. SRX Series branch devices do not support dual control links.
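On the SRX5000 line, each control link must also be declared in the configuration by FPC slot and port before the nodes are rebooted into cluster mode. The following is a minimal sketch, assuming a hypothetical SRX5800 cluster with Services Processing Cards in slots 1 and 2 of each chassis (node 1 FPC numbering continues from node 0, so its slots appear as 13 and 14); the actual slot numbers depend on your hardware layout:

  # Control link 0: port 0 on one SPC in each node
  set chassis cluster control-ports fpc 1 port 0
  set chassis cluster control-ports fpc 13 port 0
  # Control link 1: port 1 on a different SPC in each node
  set chassis cluster control-ports fpc 2 port 1
  set chassis cluster control-ports fpc 14 port 1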

Using Dual Data Links

You can connect two fabric links between the devices in a cluster to provide a redundant fabric link between the cluster members. Having two fabric links helps to avoid a possible single point of failure. When you use dual fabric links, the runtime objects (RTOs) and probes are sent on one link, and the fabric-forwarded and flow-forwarded packets are sent on the other link. If one fabric link fails, the remaining link handles the RTOs and probes as well as data forwarding. The system selects the physical interface with the lowest slot, PIC, or port number on each node for the RTOs and probes.
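Dual fabric links are defined by adding a second member interface to each node's fabric interface (fab0 for node 0, fab1 for node 1). A minimal sketch, using hypothetical revenue ports:

  # Node 0 fabric interface with two members
  set interfaces fab0 fabric-options member-interfaces ge-0/0/2
  set interfaces fab0 fabric-options member-interfaces ge-0/0/3
  # Node 1 fabric interface with two members
  set interfaces fab1 fabric-options member-interfaces ge-7/0/2
  set interfaces fab1 fabric-options member-interfaces ge-7/0/3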

Using BFD

The Bidirectional Forwarding Detection (BFD) protocol is a simple hello mechanism that detects failures in a network. Hello packets are sent at a specified, regular interval. A neighbor failure is detected when the router stops receiving a reply after a specified interval. BFD works with a wide variety of network environments and topologies, and its failure detection times are shorter than RIP detection times, providing faster reaction to failures in the network. The timers are also adaptive: for example, a timer can adapt to a higher value if the adjacency fails, or a neighbor can negotiate a higher value for a timer than the one configured. In a chassis cluster, BFD liveness detection can be configured between the two nodes using the local interfaces, not the fxp0 IP addresses, on each node, so that BFD continuously monitors the status of the path between the nodes. When a network issue arises between the nodes, BFD session-down SNMP traps are sent, indicating a problem between the nodes.
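As one way to apply this, BFD can be attached to a static route whose next hop is the other node's local (non-fxp0) address, so the BFD session rides the data path between the nodes. A sketch, assuming a hypothetical peer address of 10.10.10.2 on a directly connected transit interface:

  # Static route to the peer node, validated by BFD
  set routing-options static route 10.10.10.2/32 next-hop 10.10.10.2
  # Probe every 500 ms; declare the peer down after 3 missed replies
  set routing-options static route 10.10.10.2/32 bfd-liveness-detection minimum-interval 500
  set routing-options static route 10.10.10.2/32 bfd-liveness-detection multiplier 3

When the session drops, the BFD session-down SNMP trap described above is generated.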

Using IP Monitoring

IP monitoring is implemented as an automation script on the SRX Series platforms. It allows for path and next-hop validation through the existing network infrastructure using the Internet Control Message Protocol (ICMP). Upon detection of a failure, the script executes a failover to the other node in an attempt to prevent downtime.
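In Junos OS releases that support IP monitoring natively, it is configured directly under the redundancy group. A sketch, assuming a hypothetical upstream gateway 10.1.1.254 probed through reth0.0, with 10.1.1.101 as the secondary (backup-node) source address:

  set chassis cluster redundancy-group 1 ip-monitoring global-weight 255
  set chassis cluster redundancy-group 1 ip-monitoring global-threshold 100
  set chassis cluster redundancy-group 1 ip-monitoring family inet 10.1.1.254 weight 150
  set chassis cluster redundancy-group 1 ip-monitoring family inet 10.1.1.254 interface reth0.0 secondary-ip-address 10.1.1.101

In this sketch the monitored address's weight (150) exceeds the global threshold (100), so losing reachability to the gateway alone deducts the global weight (255) from the redundancy group's failover threshold and triggers failover.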

Using Interface Monitoring

Another SRX Series chassis cluster failover feature is interface monitoring. For a redundancy group to fail over to the other node automatically, its interfaces must be monitored. When you configure a redundancy group, you can specify a set of interfaces that the redundancy group monitors to determine whether each interface is up or down. A monitored interface can be a child interface of any of the redundancy group's redundant Ethernet (reth) interfaces. When you configure an interface for a redundancy group to monitor, you assign it a weight. Every redundancy group has a threshold tolerance value initially set to 255. When a monitored interface becomes unavailable, its weight is subtracted from the redundancy group's threshold. When the threshold reaches 0, the redundancy group fails over to the other node. For example, if redundancy group 1 is primary on node 0, then on the threshold-crossing event, redundancy group 1 becomes primary on node 1, and all the child interfaces of redundancy group 1's reth interfaces on node 1 begin handling traffic. A failover occurs because the cumulative weight of the redundancy group's failed monitored interfaces has brought its threshold value to 0. If the monitored interfaces of a redundancy group on both nodes reach their thresholds at the same time, the redundancy group becomes primary on the node with the lower node ID, in this case node 0.

Note:

Interface monitoring is not recommended for redundancy group 0.
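A minimal sketch of interface monitoring, assuming hypothetical child interfaces ge-0/0/3 (node 0) and ge-7/0/3 (node 1) that belong to a reth interface in redundancy group 1; with a weight of 255, the loss of either interface alone exhausts the group's 255 threshold and triggers failover:

  set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255
  set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3 weight 255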

Using Graceful Restart

With routing protocols, any service interruption requires that an affected router recalculate adjacencies with neighboring routers, restore routing table entries, and update other protocol-specific information. An unprotected restart of a router can result in forwarding delays, route flapping, wait times stemming from protocol reconvergence, and even dropped packets. The main benefits of graceful restart are uninterrupted packet forwarding and temporary suppression of all routing protocol updates. Graceful restart enables a router to pass through intermediate convergence states that are hidden from the rest of the network.

Three main types of graceful restart are available on Juniper Networks routing platforms:

  • Graceful restart for aggregate and static routes and for routing protocols—Provides protection for aggregate and static routes and for BGP, End System-to-Intermediate System (ES-IS), IS-IS, OSPF, RIP, next-generation RIP (RIPng), and Protocol Independent Multicast (PIM) sparse mode routing protocols.

  • Graceful restart for MPLS-related protocols—Provides protection for LDP, RSVP, circuit cross-connect (CCC), and translational cross-connect (TCC).

  • Graceful restart for virtual private networks (VPNs)—Provides protection for Layer 2 and Layer 3 VPNs.
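Graceful restart for the routing protocols is enabled globally under routing-options on each node and can then be disabled or tuned per protocol. A minimal sketch (the restart-duration value is hypothetical):

  # Globally enable graceful restart for supported protocols
  set routing-options graceful-restart
  # Optionally cap the restart window, in seconds
  set routing-options graceful-restart restart-duration 300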