SRX Series Chassis Cluster Configuration Overview

Following are the prerequisites for configuring a chassis cluster:

  • On SRX300, SRX320, SRX340, SRX345, and SRX380 devices, remove any existing configuration associated with the interfaces that are repurposed as the fxp0 management port and the control port. For more information, see Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical Interface Naming.

  • Confirm that both devices are the same model and are running the same version of Junos OS (see the verification commands after this list).

  • Confirm that license keys are the same on both devices.

  • For SRX300, SRX320, SRX340, SRX345, and SRX380 chassis clusters, the placement and type of GPIMs, XGPIMs, XPIMs, and Mini-PIMs (as applicable) must match in the two devices.

  • For SRX5000 line chassis clusters, the placement and type of SPCs must match in the two devices.
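
A quick way to confirm that the two devices match is to run the following operational mode commands on both devices and compare the output. This is a minimal sketch; the exact output fields vary by platform and Junos OS release.

    show version            (Junos OS version must match on both devices)
    show chassis hardware   (model and installed cards must match on both devices)
    show system license     (license keys must match on both devices)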

Figure 1 shows a chassis cluster flow diagram for SRX300, SRX320, SRX340, SRX345, SRX380, SRX1500, SRX1600, SRX2300, SRX4100, SRX4200, SRX4300, and SRX4600 devices. Figure 2 shows a chassis cluster flow diagram for SRX5400, SRX5600, and SRX5800 devices.

Figure 1: Chassis Cluster Flow Diagram (SRX300, SRX320, SRX340, SRX345, SRX380, SRX1500, SRX1600, SRX2300, SRX4100, SRX4200, SRX4300, and SRX4600 Devices)
Figure 2: Chassis Cluster Flow Diagram (SRX5400, SRX5600, and SRX5800 Devices)

This section provides an overview of the basic steps to create an SRX Series chassis cluster. To create an SRX Series chassis cluster:

  1. Prepare the SRX Series Firewalls to be used in the chassis cluster. For more information, see Preparing Your Equipment for Chassis Cluster Formation.
  2. Physically connect a pair of the same kind of supported SRX Series Firewalls together. For more information, see Connecting SRX Series Devices to Create a Chassis Cluster.
    1. Create the fabric link between two nodes in a cluster by connecting any pair of Ethernet interfaces. For most SRX Series Firewalls, the only requirement is that both interfaces be Gigabit Ethernet interfaces (or 10-Gigabit Ethernet interfaces).

      When using dual fabric link functionality, connect the two pairs of Ethernet interfaces that you will use on each device. See Understanding Chassis Cluster Dual Fabric Links.

    2. Configure the control ports (SRX5000 line only). See Example: Configuring Chassis Cluster Control Ports.

  3. Connect to the console port on the first device to be initialized in the cluster. This node (node 0) forms the cluster. Use CLI operational mode commands to enable clustering (see the CLI sketch that follows this procedure):
    1. Identify the cluster by giving it the cluster ID.

    2. Identify the node by giving it its own node ID and then reboot the system.

    See Example: Setting the Node ID and Cluster ID for Security Devices in a Chassis Cluster. For connection instructions, see the Getting Started Guide for your device.

  4. Connect to the console port on the other device (node 1) and use CLI operational mode commands to enable clustering:
    1. Identify the cluster that the device is joining by setting the same cluster ID you set on the first node.

    2. Identify the node by giving it its own node ID and then reboot the system.

  5. Configure the management interfaces on the cluster. See Example: Configuring the Chassis Cluster Management Interface.
  6. Configure the cluster with the CLI. See the following topics:
  7. (Optional) Initiate manual failover. See Initiating a Chassis Cluster Manual Redundancy Group Failover.
  8. (Optional) Configure conditional route advertisement over redundant Ethernet interfaces. See Understanding Conditional Route Advertising in a Chassis Cluster.
  9. Verify the configuration. See Viewing a Chassis Cluster Configuration.
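
The following is a minimal CLI sketch of the procedure above, assuming cluster ID 1. The slot numbers, interface names, host names, and addresses are placeholders only; substitute values appropriate for your platform, and note that the control-port statements apply to the SRX5000 line only.

    Enable clustering (operational mode; each node reboots):

      On node 0:  set chassis cluster cluster-id 1 node 0 reboot
      On node 1:  set chassis cluster cluster-id 1 node 1 reboot

    After the reboots, configure the cluster from the primary node (configuration mode):

      Control ports (SRX5000 line only; example slots for an SRX5800 with an offset of 12):
        set chassis cluster control-ports fpc 3 port 0
        set chassis cluster control-ports fpc 15 port 0

      Fabric links (example member interfaces):
        set interfaces fab0 fabric-options member-interfaces ge-0/0/2
        set interfaces fab1 fabric-options member-interfaces ge-7/0/2

      Management interfaces, applied per node through configuration groups:
        set groups node0 system host-name srx-node0
        set groups node0 interfaces fxp0 unit 0 family inet address 192.0.2.1/24
        set groups node1 system host-name srx-node1
        set groups node1 interfaces fxp0 unit 0 family inet address 192.0.2.2/24
        set apply-groups "${node}"

    Verify:

      show chassis cluster status

An optional manual redundancy group failover (step 7) can be initiated in operational mode with request chassis cluster failover redundancy-group 1 node 1 and cleared with request chassis cluster failover reset redundancy-group 1; the redundancy group and node numbers shown are examples.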

When two nodes are connected in a cluster, one node is elected as the primary, and its Routing Engine runs as the primary Routing Engine; the Routing Engine on the secondary node runs as a client. All FPCs in the cluster, whether on the primary node or the secondary node, connect to the primary Routing Engine. The FPCs on the secondary node connect to the primary Routing Engine through the HA control link. If the cluster has two primaries, an IOC receives messages from a different primary and reboots itself to recover from this error state.

To prevent the IOCs from rebooting, power off the secondary node before connecting it to the cluster.

To preserve traffic on the primary node while connecting the secondary node to the cluster, configure cluster mode on node 1 and power it down before connecting it, so that the primary node is not affected. The control networks of an HA cluster and a standalone system are different; when the control ports are connected, the two nodes join the same control network and begin exchanging messages.
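
To confirm which node currently holds the primary role and that the FPCs on both nodes have connected to the primary Routing Engine, the following operational mode commands can be used. This is a minimal sketch; the output format varies by release.

    show chassis cluster status
    show chassis fpc pic-status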

This section provides an overview of the basic steps to restore the backup node after a failure when there is a running primary node:

  1. Connect to the console port on the other device (node 1) and use CLI operational mode commands to enable clustering:

    1. Identify the cluster that the device is joining by setting the same cluster ID you set on the first node.

    2. Identify the node by giving it its own node ID and then reboot the system.

    See Example: Setting the Node ID and Cluster ID for Security Devices in a Chassis Cluster. For connection instructions, see the Getting Started Guide for your device.

  2. Power off the secondary node.

  3. Connect the HA control ports between two nodes.

  4. Power on the secondary node.

  5. The cluster is re-formed and sessions are synchronized to the secondary node (see the CLI sketch following this procedure).
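
A minimal CLI sketch of this restore sequence on node 1, assuming cluster ID 1 (cabling the control ports and powering the node back on are physical actions and are not shown):

    set chassis cluster cluster-id 1 node 1 reboot    (operational mode; node 1 reboots into cluster mode)
    request system power-off                          (power off node 1 before connecting the HA control ports)

After node 1 is cabled and powered on, verify from the primary node that the cluster has re-formed:

    show chassis cluster status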

  • When using dual fabric link functionality, connect the two pairs of Ethernet interfaces that you will use on each device. See Understanding Chassis Cluster Dual Fabric Links.

  • When using dual control link functionality (SRX5600 and SRX5800 devices only), connect the two pairs of control ports that you will use on each device.

    See Dual Control Link Connections for SRX Series Firewalls in a Chassis Cluster.

    For SRX5600 and SRX5800 devices, control ports must be on corresponding slots in the two devices. Table 1 shows the slot numbering offsets (a configuration sketch illustrating these offsets follows this list):

    Table 1: Slot Numbering Offsets

    Device     Offset
    SRX5800    12 (for example, fpc3 and fpc15)
    SRX5600    6 (for example, fpc3 and fpc9)
    SRX5400    3 (for example, fpc3 and fpc6)
    SRX4600    7 (for example, fpc1 and fpc8)

  • On SRX3400 and SRX3600 devices, the control ports are dedicated Gigabit Ethernet ports.

  • On SRX4600 devices, the control ports and fabric ports are dedicated 10-Gigabit Ethernet ports.
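
As an illustration of the slot numbering offsets in Table 1, the following sketch shows control-port configuration for an SRX5600 cluster with SPCs assumed in slot 3 on node 0 and slot 9 on node 1 (offset 6), and, for dual control links, a second SPC pair assumed in slots 6 and 12. The slot numbers are examples only.

    set chassis cluster control-ports fpc 3 port 0
    set chassis cluster control-ports fpc 9 port 0

    Dual control links (second SPC pair, example slots):
      set chassis cluster control-ports fpc 6 port 1
      set chassis cluster control-ports fpc 12 port 1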