Chassis Cluster Support on SRX100, SRX210, SRX220, SRX240, SRX650, SRX1400, SRX3400, and SRX3600 Devices

This topic describes chassis cluster support on SRX100, SRX210, SRX220, SRX240, SRX650, SRX1400, SRX3400, and SRX3600 devices.

SRX Series Chassis Cluster Configuration Overview

The following sections describe the prerequisites and platform-specific considerations for configuring a chassis cluster.

Flow and Processing

Flowd monitoring is supported on SRX100, SRX210, SRX240, and SRX650 devices.

Chassis Cluster Slot Numbering and Physical Port and Logical Interface Naming for SRX100, SRX210, SRX220, SRX240, and SRX650

Information about chassis cluster slot numbering for these devices is provided in Figure 1 through Figure 5.

Figure 1: Chassis Cluster Slot Numbering for SRX100 Devices
Figure 2: Chassis Cluster Slot Numbering for SRX210 Devices
Figure 3: Chassis Cluster Slot Numbering for SRX220 Devices
Figure 4: Chassis Cluster Slot Numbering for SRX240 Devices
Figure 5: Chassis Cluster Slot Numbering for SRX650 Devices

Layer 2 switching must not be enabled on an SRX Series device when chassis clustering is enabled. If you have enabled Layer 2 switching, make sure you disable it before enabling chassis clustering.

The factory default configuration for SRX100, SRX210, and SRX220 devices automatically enables Layer 2 Ethernet switching. Because Layer 2 Ethernet switching is not supported in chassis cluster mode, if you use the factory default configuration for these devices, you must delete the Ethernet switching configuration before you enable chassis clustering. See Disabling Switching on SRX100, SRX210, and SRX220 Devices Before Enabling Chassis Clustering.
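For example, on a device with the factory default configuration, configuration-mode statements similar to the following remove the Ethernet switching configuration before clustering is enabled. This is a minimal sketch; the vlan-trust VLAN and interfaces-trust interface range shown here are assumptions based on a typical factory default and vary by release:

  delete vlans vlan-trust
  delete interfaces interface-range interfaces-trust
  commit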

In chassis cluster mode, the interfaces on the secondary node are renumbered internally. For example, the management interface port on the front panel of each SRX210 device is still labeled fe-0/0/6, but internally, the node 1 port is referred to as fe-2/0/6.

For SRX100, SRX210, and SRX220 devices, after you enable chassis clustering and reboot the system, the built-in interface named fe-0/0/6 is repurposed as the management interface and is automatically renamed fxp0.

For SRX100, SRX210, and SRX220 devices, after you enable chassis clustering and reboot the system, the built-in interface named fe-0/0/7 is repurposed as the control interface and is automatically renamed fxp1.

For SRX240, SRX550, and SRX650 devices, control interfaces are dedicated Gigabit Ethernet ports.
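After the control and fabric links are cabled, you enable clustering from operational mode by assigning the same cluster ID and a unique node ID to each device and rebooting. A minimal sketch, using an illustrative cluster ID of 1:

  user@host> set chassis cluster cluster-id 1 node 0 reboot    (on the first device)
  user@host> set chassis cluster cluster-id 1 node 1 reboot    (on the second device)

Both devices must use the same cluster ID; the node IDs distinguish node 0 from node 1.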

Note:

For SRX210 Services Gateways, the base and enhanced versions of a model can be used to form a cluster. For example:

  • SRX210B and SRX210BE

  • SRX210H and SRX210HE

However, the following combinations cannot be used to form a cluster:

  • SRX210B and SRX210H

  • SRX210B and SRX210HE

  • SRX210BE and SRX210H

  • SRX210BE and SRX210HE

Figure 6 through Figure 11 show pairs of SRX Series devices with the fabric links and control links connected.

Figure 6: Connecting SRX100 Devices in a Chassis Cluster
Figure 7: Connecting SRX110 Devices in a Chassis Cluster
Figure 8: Connecting SRX210 Devices in a Chassis Cluster
Figure 9: Connecting SRX220 Devices in a Chassis Cluster
Figure 10: Connecting SRX240 Devices in a Chassis Cluster
Figure 11: Connecting SRX650 Devices in a Chassis Cluster

For SRX100 and SRX210 devices, the fabric link connection must be a pair of either Fast Ethernet or Gigabit Ethernet interfaces. For the other SRX Series devices, the fabric link connection can be any pair of either Gigabit Ethernet or 10-Gigabit Ethernet interfaces.
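For example, the following configuration-mode statements define a fabric link on a pair of SRX210 devices; the port choices are illustrative:

  set interfaces fab0 fabric-options member-interfaces ge-0/0/1
  set interfaces fab1 fabric-options member-interfaces ge-2/0/1

Here fab0 is the fabric interface on node 0 and fab1 is its counterpart on node 1; the node 1 member interface uses the renumbered slot (ge-2/0/1) described earlier.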

Some SRX Series devices, such as the SRX100 and the SRX200 line devices, do not have a dedicated port for fxp0. On these devices, the fxp0 interface is repurposed from a built-in interface.

Chassis Cluster Slot Numbering and Physical Port and Logical Interface Naming for SRX3600, SRX3400, and SRX1400

Table 1 shows the slot numbering, as well as the physical port and logical interface numbering, for both of the SRX Series devices that become node 0 and node 1 of the chassis cluster after the cluster is formed.

Table 1: Chassis Cluster Slot Numbering, and Physical Port and Logical Interface Naming for SRX1400, SRX3400, and SRX3600

SRX3600 (13 CFM slots per node)
  Node 0: slots 0 through 12
  Node 1: slots 13 through 25
  Management: dedicated Gigabit Ethernet port (fxp0 on both nodes)
  Control: dedicated Gigabit Ethernet port (em0 on both nodes)
  Fabric: any Ethernet port (fab0 on node 0, fab1 on node 1)

SRX3400 (8 CFM slots per node)
  Node 0: slots 0 through 7
  Node 1: slots 8 through 15
  Management: dedicated Gigabit Ethernet port (fxp0 on both nodes)
  Control: dedicated Gigabit Ethernet port (em0 on both nodes)
  Fabric: any Ethernet port (fab0 on node 0, fab1 on node 1)

SRX1400 (4 FPC slots per node)
  Node 0: slots 0 through 3
  Node 1: slots 4 through 7
  Management: dedicated Gigabit Ethernet port (fxp0 on both nodes)
  Control: dedicated Gigabit Ethernet port (em0 on both nodes)
  Fabric: any Ethernet port (fab0 on node 0, fab1 on node 1)

Information about chassis cluster slot numbering is also provided in Figure 12, Figure 13, and Figure 14.

Figure 12: Chassis Cluster Slot Numbering for SRX3600 Devices
Figure 13: Chassis Cluster Slot Numbering for SRX3400 Devices
Figure 14: Chassis Cluster Slot Numbering for SRX1400 Devices

You can connect two control links (SRX1400, SRX4600, SRX3000 line, and SRX5000 line devices only) and two fabric links between the two devices in the cluster to reduce the chance of control link and fabric link failure. See Understanding Chassis Cluster Dual Control Links and Understanding Chassis Cluster Dual Fabric Links.
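As a sketch of dual fabric links, a second member interface is added to each fabric interface; the SRX3600 port numbers below are illustrative (node 1 slots begin at 13):

  set interfaces fab0 fabric-options member-interfaces ge-0/0/2
  set interfaces fab0 fabric-options member-interfaces ge-0/0/3
  set interfaces fab1 fabric-options member-interfaces ge-13/0/2
  set interfaces fab1 fabric-options member-interfaces ge-13/0/3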

Figure 15, Figure 16, and Figure 17 show pairs of SRX Series devices with the fabric links and control links connected.

Figure 15: Connecting SRX3600 Devices in a Chassis Cluster
Figure 16: Connecting SRX3400 Devices in a Chassis Cluster

For dual control links on SRX3000 line devices, the Routing Engine must be in slot 0 and the SRX Clustering Module (SCM) in slot 1. The opposite configuration (SCM in slot 0 and Routing Engine in slot 1) is not supported.

Figure 17: Connecting SRX1400 Devices in a Chassis Cluster

Supported Fabric Interface Types for SRX Series Devices (SRX100, SRX210, SRX220, SRX240, and SRX650 Devices)

For SRX210 devices, the fabric link can be any pair of Fast Ethernet or Gigabit Ethernet interfaces, but the paired interfaces must be of the same type. All interfaces on SRX100 devices are Fast Ethernet interfaces.

For SRX550 devices, the fabric link can be any pair of Gigabit Ethernet interfaces or Fast Ethernet interfaces (as applicable).

For other SRX Series chassis clusters, the fabric link can be any pair of Ethernet interfaces spanning the cluster.

Table 2 shows the fabric interface types that are supported for SRX Series devices.

Table 2: Supported Fabric Interface Types for SRX Series Devices

Device    Fast Ethernet    Gigabit Ethernet
SRX550    Yes              Yes
SRX650    Yes              Yes
SRX240    Yes              Yes
SRX220    —                Yes
SRX100    Yes              —
SRX210    Yes              Yes

Redundant Ethernet Interfaces

Table 3: Maximum Number of Redundant Ethernet Interfaces Allowed (SRX100, SRX210, SRX220, SRX240, and SRX650)

Device    Maximum Number of reth Interfaces
SRX100    8
SRX210    8
SRX220    8
SRX240    24
SRX650    68

  • Point-to-Point Protocol over Ethernet (PPPoE) over redundant Ethernet (reth) interfaces is supported on SRX100, SRX210, SRX220, SRX240, and SRX650 devices in chassis cluster mode. This feature allows an existing PPPoE session to continue without starting a new PPPoE session in the event of a failover; a minimal configuration sketch follows.
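The sketch below shows PPPoE running over a reth interface; the interface numbers and negotiated addressing are illustrative, not a definitive configuration:

  set interfaces reth1 unit 0 encapsulation ppp-over-ether
  set interfaces pp0 unit 0 pppoe-options underlying-interface reth1.0
  set interfaces pp0 unit 0 family inet negotiate-address

The pp0 logical interface anchors the PPPoE session, so the session can survive a redundancy-group failover of the underlying reth1 interface.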

For SRX100, SRX210, SRX220, and SRX240 devices, the total number of logical interfaces that you can configure across all the redundant Ethernet (reth) interfaces in a chassis cluster deployment is 1024.
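For example, logical interfaces can be added to a reth interface as VLAN-tagged units; the reth number, VLAN ID, and address below are illustrative:

  set chassis cluster reth-count 2
  set interfaces reth0 redundant-ether-options redundancy-group 1
  set interfaces reth0 vlan-tagging
  set interfaces reth0 unit 100 vlan-id 100 family inet address 192.0.2.1/24

Each such unit counts against the 1024 logical-interface limit.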

IP address monitoring cannot be used on a chassis cluster running in transparent mode. The maximum number of monitoring IP addresses that can be configured per cluster is 32 for the SRX1400 device and the SRX3000 line of devices.
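Where IP address monitoring is supported, it is configured per redundancy group. A minimal sketch, with illustrative addresses, weights, and thresholds:

  set chassis cluster redundancy-group 1 ip-monitoring global-weight 255
  set chassis cluster redundancy-group 1 ip-monitoring global-threshold 100
  set chassis cluster redundancy-group 1 ip-monitoring family inet 192.0.2.254 weight 100 interface reth0.0 secondary-ip-address 192.0.2.101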

Control Links

  • For SRX100, SRX210, and SRX220 devices, the control link uses the fe-0/0/7 interface.

  • For SRX240 and SRX650 devices, the control link uses the ge-0/0/1 interface.
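After the cluster forms, the state of the control link (and the fabric link) can be verified from operational mode:

  user@host> show chassis cluster interfaces
  user@host> show chassis cluster control-plane statistics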

Table 4: SRX Series Services Gateways Interface Settings (SRX100, SRX210, SRX220, SRX240)

set interfaces fab0 fabric-options member-interfaces
  SRX100: fe-0/0/1
  SRX210: ge-0/0/1
  SRX220: ge-0/0/0 to ge-0/0/5
  SRX240: ge-0/0/2

set interfaces fab1 fabric-options member-interfaces
  SRX100: fe-1/0/1
  SRX210: ge-2/0/1
  SRX220: ge-3/0/0 to ge-3/0/5
  SRX240: ge-5/0/2

set chassis cluster redundancy-group 1 interface-monitor
  SRX100: fe-0/0/0 weight 255
  SRX210: fe-0/0/3 weight 255
  SRX220: ge-0/0/0 weight 255
  SRX240: ge-0/0/5 weight 255

set chassis cluster redundancy-group 1 interface-monitor
  SRX100: fe-0/0/2 weight 255
  SRX210: fe-0/0/2 weight 255
  SRX220: ge-3/0/0 weight 255
  SRX240: ge-5/0/5 weight 255

set chassis cluster redundancy-group 1 interface-monitor
  SRX100: fe-1/0/0 weight 255
  SRX210: fe-2/0/3 weight 255
  SRX220: ge-0/0/1 weight 255
  SRX240: ge-0/0/6 weight 255

set chassis cluster redundancy-group 1 interface-monitor
  SRX100: fe-1/0/2 weight 255
  SRX210: fe-2/0/2 weight 255
  SRX220: ge-3/0/1 weight 255
  SRX240: ge-5/0/6 weight 255

set interfaces
  SRX100: fe-0/0/2 fastether-options redundant-parent reth1
  SRX210: fe-0/0/2 fastether-options redundant-parent reth1
  SRX220: ge-0/0/2 gigether-options redundant-parent reth0
  SRX240: ge-0/0/5 gigether-options redundant-parent reth1

set interfaces
  SRX100: fe-1/0/2 fastether-options redundant-parent reth1
  SRX210: fe-2/0/2 fastether-options redundant-parent reth1
  SRX220: ge-0/0/3 gigether-options redundant-parent reth1
  SRX240: ge-5/0/5 gigether-options redundant-parent reth1

set interfaces
  SRX100: fe-0/0/0 fastether-options redundant-parent reth0
  SRX210: fe-0/0/3 fastether-options redundant-parent reth0
  SRX220: ge-3/0/2 gigether-options redundant-parent reth0
  SRX240: ge-0/0/6 gigether-options redundant-parent reth0

set interfaces
  SRX100: fe-1/0/0 fastether-options redundant-parent reth0
  SRX210: fe-2/0/3 fastether-options redundant-parent reth0
  SRX220: ge-3/0/3 gigether-options redundant-parent reth1
  SRX240: ge-5/0/6 gigether-options redundant-parent reth0
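After committing settings such as those in Table 4, the health of the redundancy groups and monitored interfaces can be checked from operational mode:

  user@host> show chassis cluster status
  user@host> show chassis cluster status redundancy-group 1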

ISSU System Requirements for SRX1400, SRX3400, and SRX3600

To perform an ISSU, your device must be running a Junos OS release that supports ISSU for the specific platform. See Table 5 for platform support.

Table 5: ISSU Platform Support for SRX1400, SRX3400, and SRX3600

Device     Junos OS Release
SRX1400    12.1X47-D10
SRX3400    12.1X47-D10
SRX3600    12.1X47-D10
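On a supported release, the ISSU is started from operational mode on the primary node; the package path below is a placeholder, not an actual package name:

  user@host> request system software in-service-upgrade /var/tmp/<junos-package>.tgz reboot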