
Chassis Cluster Support on SRX100, SRX210, SRX220, SRX240, SRX550M, SRX650, SRX1400, SRX3400, and SRX3600 Devices

This topic describes chassis cluster support on SRX100, SRX210, SRX220, SRX240, SRX550M, SRX650, SRX1400, SRX3400, and SRX3600 devices.

SRX Series Chassis Cluster Configuration Overview

The following sections describe platform-specific considerations for configuring a chassis cluster.

Flow and Processing

Flowd monitoring is supported on SRX100, SRX210, SRX220, SRX240, SRX550M, and SRX650 devices.

Monitoring

The maximum number of monitoring IPs that can be configured per cluster is 64 for SRX550M devices. On SRX550M devices, logs cannot be sent to NSM when logging is configured in stream mode.

Installation and Upgrade

For SRX550M devices, the reboot parameter is not available, because the devices in a cluster are automatically rebooted following an in-band cluster upgrade (ICU).

ICU is available with the no-sync option only for SRX550M devices.

For SRX550M devices, the devices in a chassis cluster can be upgraded with a minimal service disruption of approximately 30 seconds using ICU with the no-sync option.
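A minimal sketch of the ICU invocation from operational mode follows; the package name and path are examples, not required values:

user@host> request system software in-service-upgrade /var/tmp/junos-srxsme-package.tgz no-sync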

Chassis Cluster Slot Numbering and Physical Port and Logical Interface Naming for SRX100, SRX210, SRX220, SRX240, SRX550M, and SRX650

Following are the prerequisites for configuring a chassis cluster:

  • On SRX550M devices, any existing configuration associated with the interfaces that are transformed into the fxp0 management port and the control port must be removed.

  • For SRX550M chassis clusters, the placement and type of GPIMs, XGPIMs, XPIMs, and Mini-PIMs (as applicable) must match in the two devices.

For SRX550M devices, control interfaces are dedicated Gigabit Ethernet ports.

Information about chassis cluster slot numbering is also provided in Figure 1, Figure 2, Figure 3, Figure 4, and Figure 5.

Figure 1: Chassis Cluster Slot Numbering for SRX100 Devices
Figure 2: Chassis Cluster Slot Numbering for SRX210 Devices
Figure 3: Chassis Cluster Slot Numbering for SRX220 Devices
Figure 4: Chassis Cluster Slot Numbering for SRX240 Devices
Figure 5: Chassis Cluster Slot Numbering for SRX650 Devices

Layer 2 switching must not be enabled on an SRX Series Firewall when chassis clustering is enabled. If you have enabled Layer 2 switching, make sure you disable it before enabling chassis clustering.

The factory default configuration for SRX100, SRX210, and SRX220 devices automatically enables Layer 2 Ethernet switching. Because Layer 2 Ethernet switching is not supported in chassis cluster mode, if you use the factory default configuration for these devices, you must delete the Ethernet switching configuration before you enable chassis clustering. See Disabling Switching on SRX100, SRX210, and SRX220 Devices Before Enabling Chassis Clustering.
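As an illustration, on a device with the factory-default configuration, the deletion might look like the following; vlan-trust and interfaces-trust are the factory-default names, so confirm the names on your device before deleting:

delete interfaces interface-range interfaces-trust
delete interfaces vlan
delete vlans vlan-trust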

In chassis cluster mode, the interfaces on the secondary node are renumbered internally. For example, the management interface port on the front panel of each SRX210 device is still labeled fe-0/0/6, but internally, the node 1 port is referred to as fe-2/0/6.

For SRX100 and SRX210 devices, after you enable chassis clustering and reboot the system, the built-in interface named fe-0/0/6 is repurposed as the management interface and is automatically renamed fxp0, and the built-in interface named fe-0/0/7 is repurposed as the control interface and is automatically renamed fxp1. On SRX220 devices, whose built-in ports are Gigabit Ethernet, the corresponding interfaces are ge-0/0/6 and ge-0/0/7.

For SRX240, SRX550M, and SRX650 devices, control interfaces are dedicated Gigabit Ethernet ports.
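After the fxp0 interface becomes available, per-node management settings are typically applied through node-specific configuration groups. The following is a sketch with hypothetical host names and addresses:

set groups node0 system host-name srx-node0
set groups node0 interfaces fxp0 unit 0 family inet address 192.0.2.110/24
set groups node1 system host-name srx-node1
set groups node1 interfaces fxp0 unit 0 family inet address 192.0.2.111/24
set apply-groups "${node}"

The "${node}" apply-group causes each node to pick up only its own group at commit time.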

Note:

For SRX210 Services Gateways, the base and enhanced versions of a model can be used to form a cluster. For example:

  • SRX210B and SRX210BE

  • SRX210H and SRX210HE

However, the following combinations cannot be used to form a cluster:

  • SRX210B and SRX210H

  • SRX210B and SRX210HE

  • SRX210BE and SRX210H

  • SRX210BE and SRX210HE

Figure 6 through Figure 11 show pairs of SRX Series Firewalls with the fabric links and control links connected.

Figure 6: Connecting SRX100 Devices in a Chassis Cluster
Figure 7: Connecting SRX110 Devices in a Chassis Cluster
Figure 8: Connecting SRX210 Devices in a Chassis Cluster
Figure 9: Connecting SRX220 Devices in a Chassis Cluster
Figure 10: Connecting SRX240 Devices in a Chassis Cluster
Figure 11: Connecting SRX650 Devices in a Chassis Cluster

The fabric link connection for the SRX100 and SRX210 must be a pair of either Fast Ethernet or Gigabit Ethernet interfaces. On all other SRX Series Firewalls, the fabric link connection can be any pair of Gigabit Ethernet or 10-Gigabit Ethernet interfaces.

Some SRX Series Firewalls, such as the SRX100 and the SRX200 line devices, do not have a dedicated port for fxp0. On SRX100 and SRX210 devices, the fxp0 interface is repurposed from a built-in interface.

Table 1: SRX Devices Interface Renumbering

SRX Series Services Gateway | Renumbering Constant | Node 0 Interface Name | Node 1 Interface Name
SRX550M | 9 | ge-0/0/0 | ge-9/0/0

Chassis Cluster Slot Numbering and Physical Port and Logical Interface Naming for SRX550M, SRX1400, SRX3400, and SRX3600

Table 2 shows the slot numbering, as well as the physical port and logical interface numbering, for both of the SRX Series Firewalls that become node 0 and node 1 of the chassis cluster after the cluster is formed.

Table 2: Chassis Cluster Slot Numbering, and Physical Port and Logical Interface Naming for SRX550M, SRX1400, SRX3400, and SRX3600

Model | Chassis | Maximum Slots Per Node | Slot Numbering in a Cluster | Management Physical Port/Logical Interface | Control Physical Port/Logical Interface | Fabric Physical Port/Logical Interface
SRX550M | Node 0 | 9 (PIM slots) | 0-8 | ge-0/0/0 / fxp0 | ge-0/0/1 / fxp1 | Any Ethernet port / fab0
SRX550M | Node 1 | 9 (PIM slots) | 9-17 | ge-9/0/0 / fxp0 | ge-9/0/1 / fxp1 | Any Ethernet port / fab1
SRX3600 | Node 0 | 13 (CFM slots) | 0-12 | Dedicated Gigabit Ethernet port / fxp0 | Dedicated Gigabit Ethernet port / em0 | Any Ethernet port / fab0
SRX3600 | Node 1 | 13 (CFM slots) | 13-25 | Dedicated Gigabit Ethernet port / fxp0 | Dedicated Gigabit Ethernet port / em0 | Any Ethernet port / fab1
SRX3400 | Node 0 | 8 (CFM slots) | 0-7 | Dedicated Gigabit Ethernet port / fxp0 | Dedicated Gigabit Ethernet port / em0 | Any Ethernet port / fab0
SRX3400 | Node 1 | 8 (CFM slots) | 8-15 | Dedicated Gigabit Ethernet port / fxp0 | Dedicated Gigabit Ethernet port / em0 | Any Ethernet port / fab1
SRX1400 | Node 0 | 4 (FPC slots) | 0-3 | Dedicated Gigabit Ethernet port / fxp0 | Dedicated Gigabit Ethernet port / em0 | Any Ethernet port / fab0
SRX1400 | Node 1 | 4 (FPC slots) | 4-7 | Dedicated Gigabit Ethernet port / fxp0 | Dedicated Gigabit Ethernet port / em0 | Any Ethernet port / fab1

Information about chassis cluster slot numbering is also provided in Figure 12, Figure 13, Figure 14, and Figure 15.

Figure 12: Chassis Cluster Slot Numbering for SRX3600 Devices
Figure 13: Chassis Cluster Slot Numbering for SRX3400 Devices
Figure 14: Chassis Cluster Slot Numbering for SRX1400 Devices
Figure 15: Slot Numbering for SRX550M Devices

In a large chassis cluster configuration on an SRX3400 or SRX3600 device, we recommend increasing the heartbeat timers so that the total wait time before failover is 8 seconds.
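The wait time is the product of the heartbeat interval and the heartbeat threshold. A minimal sketch of one way to reach 8 seconds follows; the specific values shown are illustrative:

set chassis cluster heartbeat-interval 2000
set chassis cluster heartbeat-threshold 4

Here 2000 milliseconds times 4 missed heartbeats yields an 8-second wait before a failure is declared.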

For SRX550M devices, connect the ge-0/0/1 on node 0 to the ge-9/0/1 on node 1.

You can connect two control links (SRX1400 and SRX3000 lines only) and two fabric links between the two devices in the cluster to reduce the chance of control link and fabric link failure. See Understanding Chassis Cluster Dual Control Links and Understanding Chassis Cluster Dual Fabric Links.

Figure 16 through Figure 19 show pairs of SRX Series Firewalls with the fabric links and control links connected.

Figure 16: Connecting SRX550M Devices in a Chassis Cluster
Figure 17: Connecting SRX3600 Devices in a Chassis Cluster
Figure 18: Connecting SRX3400 Devices in a Chassis Cluster

For dual control links on SRX3000 line devices, the Routing Engine must be in slot 0 and the SRX Clustering Module (SCM) in slot 1. The opposite configuration (SCM in slot 0 and Routing Engine in slot 1) is not supported.

Figure 19: Connecting SRX1400 Devices in a Chassis Cluster

Supported Fabric Interface Types for SRX Series Firewalls (SRX210, SRX240, SRX220, SRX100, and SRX650 Devices)

For SRX210 devices, the fabric link can be any pair of Gigabit Ethernet or Fast Ethernet interfaces (as applicable); the paired interfaces must be of the same type. All interfaces on SRX100 devices are Fast Ethernet interfaces.

For SRX550M devices, the fabric link can be any pair of Gigabit Ethernet interfaces or Fast Ethernet interfaces (as applicable).

For SRX Series chassis clusters made up of SRX550M devices, SFP interfaces on Mini-PIMs cannot be used as the fabric link.

For SRX550M devices, the total number of logical interfaces that you can configure across all the redundant Ethernet (reth) interfaces in a chassis cluster deployment is 1024.

For SRX Series chassis clusters, the fabric link can be any pair of Ethernet interfaces spanning the cluster, including any pair of Gigabit Ethernet interfaces.
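For example, on an SRX550M cluster the fabric links could be defined with the member interfaces listed in Table 6 (ge-0/0/2 and ge-9/0/2 are the sample ports used in that table):

set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-9/0/2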

Table 3 shows the fabric interface types that are supported for SRX Series Firewalls.

Table 3: Supported Fabric Interface Types for SRX Series Firewalls

SRX550 | SRX650 | SRX240 | SRX220 | SRX100 | SRX210
Fast Ethernet | Fast Ethernet | Fast Ethernet | - | Fast Ethernet | Fast Ethernet
Gigabit Ethernet | Gigabit Ethernet | Gigabit Ethernet | Gigabit Ethernet | - | Gigabit Ethernet

Redundant Ethernet Interfaces

Table 4: Maximum Number of Redundant Ethernet Interfaces Allowed (SRX100, SRX210, SRX220, SRX240, SRX550M, and SRX650)

Device | Maximum Number of reth Interfaces
SRX100 | 8
SRX210 | 8
SRX220 | 8
SRX240 | 24
SRX550M | 58
SRX650 | 68

  • Point-to-Point Protocol over Ethernet (PPPoE) over redundant Ethernet (reth) interfaces is supported on SRX100, SRX210, SRX220, SRX240, SRX550M, and SRX650 devices in chassis cluster mode. This feature allows an existing PPPoE session to continue without starting a new PPPoE session in the event of a failover.

  • On SRX550M devices, the number of child interfaces is restricted to 16 on the reth interface (eight per node).

For SRX100, SRX220, and SRX240 devices, the total number of logical interfaces that you can configure across all the redundant Ethernet (reth) interfaces in a chassis cluster deployment is 1024.

On SRX550M devices, the speed mode and link mode configuration is available for member interfaces of a reth interface.
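As a sketch of a reth configuration on an SRX550M cluster, using the sample member interfaces from Table 6 (the redundancy-group priorities and the IP address are illustrative):

set chassis cluster reth-count 2
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
set interfaces ge-1/0/0 gigether-options redundant-parent reth1
set interfaces ge-10/0/0 gigether-options redundant-parent reth1
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet address 192.0.2.1/24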

IP address monitoring cannot be used on a chassis cluster running in transparent mode. The maximum number of monitoring IP addresses that can be configured per cluster is 32 for the SRX1400 device and the SRX3000 line of devices.
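A sketch of IP address monitoring for a redundancy group follows; the monitored address, weights, and secondary address are illustrative:

set chassis cluster redundancy-group 1 ip-monitoring global-weight 255
set chassis cluster redundancy-group 1 ip-monitoring global-threshold 100
set chassis cluster redundancy-group 1 ip-monitoring family inet 203.0.113.1 weight 100 interface reth1.0 secondary-ip-address 203.0.113.3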

Control Links

  • For SRX100 and SRX210 devices, the control link uses the fe-0/0/7 interface; for SRX220 devices, it uses the ge-0/0/7 interface.

  • For SRX210 devices, the total number of logical interfaces that you can configure across all the redundant Ethernet (reth) interfaces in a chassis cluster deployment is 1024.

  • For SRX240, SRX550M, and SRX650 devices, the control link uses the ge-0/0/1 interface.
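The control port assignments take effect when the devices are rebooted into cluster mode. A sketch of enabling clustering from operational mode (cluster ID 1 is an example value):

user@host> set chassis cluster cluster-id 1 node 0 reboot    (on the first device)
user@host> set chassis cluster cluster-id 1 node 1 reboot    (on the second device)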

Table 5: fxp0 and fxp1 Ports on SRX550M Devices

Device | Management (fxp0) | HA Control (fxp1) | Fabric (fab0 and fab1, must be configured)
SRX550M | ge-0/0/0 | ge-0/0/1 | Any ge or xe interface

Table 6: SRX Series Firewalls Interface Settings (SRX100, SRX210, SRX220, SRX240, SRX550M)

Command | SRX100 | SRX210 | SRX220 | SRX240 | SRX550M
set interfaces fab0 fabric-options member-interfaces | fe-0/0/1 | ge-0/0/1 | ge-0/0/0 to ge-0/0/5 | ge-0/0/2 | ge-0/0/2
set interfaces fab1 fabric-options member-interfaces | fe-1/0/1 | ge-2/0/1 | ge-3/0/0 to ge-3/0/5 | ge-5/0/2 | ge-9/0/2
set chassis cluster redundancy-group 1 interface-monitor | fe-0/0/0 weight 255 | fe-0/0/3 weight 255 | ge-0/0/0 weight 255 | ge-0/0/5 weight 255 | ge-1/0/0 weight 255
set chassis cluster redundancy-group 1 interface-monitor | fe-0/0/2 weight 255 | fe-0/0/2 weight 255 | ge-3/0/0 weight 255 | ge-5/0/5 weight 255 | ge-10/0/0 weight 255
set chassis cluster redundancy-group 1 interface-monitor | fe-1/0/0 weight 255 | fe-2/0/3 weight 255 | ge-0/0/1 weight 255 | ge-0/0/6 weight 255 | ge-1/0/1 weight 255
set chassis cluster redundancy-group 1 interface-monitor | fe-1/0/2 weight 255 | fe-2/0/2 weight 255 | ge-3/0/1 weight 255 | ge-5/0/6 weight 255 | ge-10/0/1 weight 255
set interfaces | fe-0/0/2 fastether-options redundant-parent reth1 | fe-0/0/2 fastether-options redundant-parent reth1 | ge-0/0/2 gigether-options redundant-parent reth0 | ge-0/0/5 gigether-options redundant-parent reth1 | ge-1/0/0 gigether-options redundant-parent reth1
set interfaces | fe-1/0/2 fastether-options redundant-parent reth1 | fe-2/0/2 fastether-options redundant-parent reth1 | ge-0/0/3 gigether-options redundant-parent reth1 | ge-5/0/5 gigether-options redundant-parent reth1 | ge-10/0/0 gigether-options redundant-parent reth1
set interfaces | fe-0/0/0 fastether-options redundant-parent reth0 | fe-0/0/3 fastether-options redundant-parent reth0 | ge-3/0/2 gigether-options redundant-parent reth0 | ge-0/0/6 gigether-options redundant-parent reth0 | ge-1/0/1 gigether-options redundant-parent reth0
set interfaces | fe-1/0/0 fastether-options redundant-parent reth0 | fe-2/0/3 fastether-options redundant-parent reth0 | ge-3/0/3 gigether-options redundant-parent reth1 | ge-5/0/6 gigether-options redundant-parent reth0 | ge-10/0/1 gigether-options redundant-parent reth0

ISSU System Requirements for SRX1400, SRX3400, and SRX3600

To perform an ISSU, your device must be running a Junos OS release that supports ISSU for the specific platform. See Table 7 for platform support.

Table 7: ISSU Platform Support for SRX1400, SRX3400, and SRX3600

Device | Junos OS Release
SRX1400 | 12.1X47-D10
SRX3400 | 12.1X47-D10
SRX3600 | 12.1X47-D10
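A sketch of starting an ISSU from operational mode on the primary node follows; the package path and name are examples:

user@host> request system software in-service-upgrade /var/tmp/junos-srx3000-package.tgz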

Example: Configure IRB and VLAN with Members Across Two Nodes on a Security Device using Tagged Traffic

Note:

Our content testing team has validated and updated this example.

Requirements

This example uses the following hardware and software components:

Overview

This example shows the configuration of a VLAN with members across node 0 and node 1.

Topology

Figure 20 shows the Layer 2 Ethernet switching across chassis cluster nodes using tagged traffic.

Figure 20: Layer 2 Ethernet Switching Across Chassis Cluster using Tagged Traffic

Configuration

Procedure

CLI Quick Configuration

To quickly configure this section of the example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Step-by-Step Procedure

To configure IRB and a VLAN:

  1. Configure security zones.

  2. Configure Ethernet switching on the node0 interfaces.

  3. Define the interfaces used for the fab connection (data plane links for RTO sync) by using physical ports from each node. These interfaces must be connected back-to-back, or through a Layer 2 infrastructure.

  4. Configure a switching fabric interface on both nodes to enable Ethernet switching-related features on the nodes.

  5. Configure the irb interface.

  6. Create and associate a VLAN interface with the VLAN.

  7. If you are done configuring the device, commit the configuration.
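Because the example's full command listing is not reproduced here, the following is a minimal sketch of steps 1 through 7. All interface names, the VLAN name vlan100, and the address 192.0.2.1/24 are assumptions for illustration; trunk mode carries the tagged traffic:

set security zones security-zone trust interfaces irb.100
set interfaces ge-0/0/3 unit 0 family ethernet-switching interface-mode trunk vlan members vlan100
set interfaces ge-7/0/3 unit 0 family ethernet-switching interface-mode trunk vlan members vlan100
set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-7/0/2
set interfaces swfab0 fabric-options member-interfaces ge-0/0/4
set interfaces swfab1 fabric-options member-interfaces ge-7/0/4
set interfaces irb unit 100 family inet address 192.0.2.1/24
set vlans vlan100 vlan-id 100
set vlans vlan100 l3-interface irb.100
commit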

Results

From configuration mode, confirm your configuration by entering the show security, show interfaces, and show vlans commands. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct the configuration.

Verification

Verifying Tagged VLAN With IRB
Purpose

Verify that the configuration for tagged VLAN with IRB is working properly.

Action

From operational mode, enter the show chassis cluster interfaces command.

From operational mode, enter the show ethernet-switching table command.

From operational mode, enter the show arp command.

From operational mode, enter the show ethernet-switching interface command to view the information about Ethernet switching interfaces.

Meaning

The output shows that the VLANs are configured and working as expected.

Example: Configure IRB and VLAN with Members Across Two Nodes on a Security Device using Untagged Traffic

Note:

Our content testing team has validated and updated this example.

Requirements

This example uses the following hardware and software components:

Overview

This example shows the configuration of a VLAN with members across node 0 and node 1.

Topology

Figure 21 shows the Layer 2 Ethernet switching across chassis cluster nodes using untagged traffic.

Figure 21: Layer 2 Ethernet Switching Across Chassis Cluster Nodes using Untagged Traffic

Configuration

Procedure

CLI Quick Configuration

To quickly configure this section of the example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Step-by-Step Procedure

To configure IRB and a VLAN:

  1. Configure security zones.

  2. Configure Ethernet switching on the node0 interfaces.

  3. Define the interfaces used for the fab connections (data plane links for RTO sync) by using physical ports from each node. These interfaces must be connected back-to-back, or through a Layer 2 infrastructure.

  4. Configure a switching fabric interface on both nodes to enable Ethernet switching-related features on the nodes.

  5. Configure the irb interface.

  6. Create and associate a VLAN interface with the VLAN.

  7. If you are done configuring the device, commit the configuration.
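As with the tagged example, the following minimal sketch covers steps 1 through 7, using access mode for untagged traffic. All interface names, the VLAN name vlan100, and the address 192.0.2.1/24 are assumptions for illustration:

set security zones security-zone trust interfaces irb.100
set interfaces ge-0/0/3 unit 0 family ethernet-switching interface-mode access vlan members vlan100
set interfaces ge-7/0/3 unit 0 family ethernet-switching interface-mode access vlan members vlan100
set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-7/0/2
set interfaces swfab0 fabric-options member-interfaces ge-0/0/4
set interfaces swfab1 fabric-options member-interfaces ge-7/0/4
set interfaces irb unit 100 family inet address 192.0.2.1/24
set vlans vlan100 vlan-id 100
set vlans vlan100 l3-interface irb.100
commit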

Results

From configuration mode, confirm your configuration by entering the show security, show interfaces, and show vlans commands. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct the configuration.

Verification

Verifying Untagged VLAN With IRB
Purpose

Verify that the configuration of untagged VLAN with IRB is working properly.

Action

From operational mode, enter the show chassis cluster interfaces command.

From operational mode, enter the show ethernet-switching table command.

From operational mode, enter the show arp command.

From operational mode, enter the show ethernet-switching interface command to view the information about Ethernet switching interfaces.

Meaning

The output shows that the VLANs are configured and working as expected.

Example: Configuring VLAN with Members Across Two Nodes on a Security Device

Requirements

This example uses the following hardware and software components:

Overview

This example shows the configuration of a VLAN with members across node 0 and node 1.

Configuration

Procedure

CLI Quick Configuration

To quickly configure this section of the example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Step-by-Step Procedure

To configure VLAN:

  1. Configure Ethernet switching on the node0 interface.

  2. Configure Ethernet switching on the node1 interface.

  3. Create VLAN vlan100 with vlan-id 100.

  4. Add interfaces from both nodes to the VLAN.

  5. Create a VLAN interface.

  6. Associate a VLAN interface with the VLAN.

  7. If you are done configuring the device, commit the configuration.
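The verification commands below reference ge-0/0/3 and ge-0/0/4 on node 0 and ge-7/0/5 on node 1, so a minimal sketch of steps 1 through 7 might look like the following; the vlan.100 address is an assumption:

set interfaces ge-0/0/3 unit 0 family ethernet-switching
set interfaces ge-0/0/4 unit 0 family ethernet-switching
set interfaces ge-7/0/5 unit 0 family ethernet-switching
set vlans vlan100 vlan-id 100
set vlans vlan100 interface ge-0/0/3.0
set vlans vlan100 interface ge-0/0/4.0
set vlans vlan100 interface ge-7/0/5.0
set interfaces vlan unit 100 family inet address 192.0.2.1/24
set vlans vlan100 l3-interface vlan.100
commit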

Results

From configuration mode, confirm your configuration by entering the show vlans and show interfaces commands. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct the configuration.

Verification

Verifying VLAN

Purpose

Verify that the configuration of VLAN is working properly.

Action

From operational mode, enter the show interfaces terse ge-0/0/3 command to view the node 0 interface.

From operational mode, enter the show interfaces terse ge-0/0/4 command to view the node 0 interface.

From operational mode, enter the show interfaces terse ge-7/0/5 command to view the node 1 interface.

From operational mode, enter the show vlans command to view the VLAN interface.

From operational mode, enter the show ethernet-switching interface command to view the information about Ethernet switching interfaces.

Meaning

The output shows that the VLANs are configured and working as expected.