Chassis Cluster Dual Control Links
Dual control links provide a redundant link for control traffic. For more information, see the following topics:
Understanding Chassis Cluster Dual Control Links
The control link connects the two SRX Series devices in a cluster and carries high-availability control traffic between them, including heartbeats and configuration synchronization. If this link goes down, the secondary node is disabled from the cluster. With dual control links, two pairs of control link interfaces are connected between the devices in a cluster. Having two control links helps to avoid a possible single point of failure: dual control links provide a redundant path for control traffic. Unlike dual fabric links, only one control link is used at any one time.
Dual control links are supported on the SRX4600, SRX5600, and SRX5800 Services Gateways.
For the SRX5400 Services Gateways, dual control link functionality is not supported due to the limited number of slots.
Dual control link functionality is not supported on SRX4100 and SRX4200 devices.
Starting in Junos OS Release 20.4R1, you can enable or disable the control links on SRX1500 devices using the operational mode and configuration mode CLI commands described below.
Previously, to disable the control and fabric links, you had to physically unplug the control link and fabric link cables, which was inconvenient.
This feature lets you control the status of the cluster nodes during a cluster upgrade, protecting against version mismatch during the upgrade procedure and minimizing failovers.
From configuration mode, run the set chassis cluster control-interface <node0/node1> disable command on node 0 or node 1 to disable the control link. To re-enable the control link, run the delete chassis cluster control-interface <node0/node1> disable command on both nodes. If you disable the links using the configuration command, the links remain disabled even after a system reboot.
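For example, a minimal sketch of disabling and then re-enabling the control link from configuration mode (the hostname and the choice of node1 are illustrative):
{primary:node0}[edit]
user@host# set chassis cluster control-interface node1 disable
user@host# commit
{primary:node0}[edit]
user@host# delete chassis cluster control-interface node1 disable
user@host# commit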
From operational mode, run the request chassis cluster control-interface <node0/node1> disable or the request chassis cluster control-interface <node0/node1> enable command to disable or enable the control link from the local node. If you disable the control link using the operational mode commands, the links are enabled again after a system reboot.
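For example, a sketch of toggling the control link from operational mode on the local node (again, node1 is illustrative):
{primary:node0}
user@host> request chassis cluster control-interface node1 disable
{primary:node0}
user@host> request chassis cluster control-interface node1 enable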
Benefits of Dual Control Links
Provides a redundant link for control traffic. With link-level redundancy, if one link fails, the other takes over and restores the traffic that was previously carried over the failed link.
Prevents a single point of failure.
Dual Control Links Functionality Requirements
For the SRX5600 and SRX5800 Services Gateways, dual control link functionality requires a second Routing Engine, as well as a second Switch Control Board (SCB) to house the Routing Engine, to be installed on each device in the cluster. The purpose of the second Routing Engine is only to initialize the switch on the SCB.
For the SRX5000 line, the second Routing Engine must be running Junos OS Release 10.0 or later.
The second Routing Engine, to be installed on SRX5000 line devices only, does not provide backup functionality. It does not need to be upgraded, even when there is a software upgrade of the primary Routing Engine on the same node. Note the following conditions:
You cannot run the CLI or enter configuration mode on the second Routing Engine.
You do not need to set the chassis ID and cluster ID on the second Routing Engine.
You need a console connection to the second Routing Engine only to verify that it booted up or to upgrade its software image; no other connection is required.
You cannot log in to the second Routing Engine from the primary Routing Engine.
As long as the first Routing Engine is installed (even if it is rebooting or failing), the second Routing Engine cannot take over the chassis primary role; that is, it cannot control all the hardware on the chassis.
Be cautious and judicious in your use of redundancy group 0 manual failovers. A redundancy group 0 failover implies a Routing Engine (RE) failover, in which case all processes running on the primary node are killed and then spawned on the new primary RE. This failover could result in loss of state, such as routing state, and degrade performance by introducing system churn.
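For reference, a sketch of triggering and then clearing a manual redundancy group 0 failover from operational mode (the target node is illustrative); use these commands sparingly for the reasons above:
{primary:node0}
user@host> request chassis cluster failover redundancy-group 0 node 1
{primary:node0}
user@host> request chassis cluster failover reset redundancy-group 0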
For the SRX3000 line, dual control link functionality requires an SRX Clustering Module (SCM) to be installed on each device in the cluster. Although the SCM fits in the Routing Engine slot, it is not a Routing Engine. SRX3000 line devices do not support a second Routing Engine. The purpose of the SCM is to initialize the second control link.
Connecting Dual Control Links for SRX Series Devices in a Chassis Cluster
For SRX5600 and SRX5800 devices, you can connect two control links between the two devices, effectively reducing the chance of control link failure.
Dual control links are not supported on SRX5400 due to the limited number of slots.
For SRX5600 and SRX5800 devices, connect two pairs of the same type of Ethernet ports. For each device, you can use ports on the same Services Processing Card (SPC), but we recommend that they be on two different SPCs to provide high availability. Figure 1 shows a pair of SRX5800 devices with dual control links connected. In this example, control port 0 and control port 1 are connected on different SPCs.
Figure 1: Dual Control Links Connected Between Two SRX5800 Devices
For SRX5600 and SRX5800 devices, you must connect control port 0 on one node to control port 0 on the other node and, likewise, control port 1 to control port 1. If you connect control port 0 to control port 1, the nodes cannot receive heartbeat packets across the control links.
Upgrading the Second Routing Engine When Using Chassis Cluster Dual Control Links on SRX5600 and SRX5800 Devices
For SRX5600 and SRX5800 devices, a second Routing Engine is required for each device in a cluster if you are using dual control links. The second Routing Engine does not provide backup functionality; its purpose is only to initialize the switch on the Switch Control Board (SCB). The second Routing Engine must be running Junos OS Release 12.1X47-D35, 12.3X48-D30, 15.1X49-D40, or later. For more information, see knowledge base article KB30371.
On SRX5600 and SRX5800 devices, starting in Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1, you can use the show chassis hardware command to see the serial number and hardware version details of the second Routing Engine. To use this functionality, ensure that the second Routing Engine is running Junos OS Release 15.1X49-D70 or later, or Junos OS Release 17.3R1 or later.
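For example, assuming a cluster running one of these releases, you can filter the hardware inventory for the Routing Engine entries (output omitted here; the exact fields vary by platform and release):
{primary:node0}
user@host> show chassis hardware | match "Routing Engine"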
Because you cannot run the CLI or enter configuration mode on the second Routing Engine, you cannot upgrade the Junos OS image with the usual upgrade commands. Instead, use the primary Routing Engine to create a bootable USB storage device, which you can then use to install a software image on the second Routing Engine.
To upgrade the software image on the second Routing Engine, create the bootable USB storage device from the primary Routing Engine and then use it to install the image on the second Routing Engine; see knowledge base article KB30371 for the detailed procedure.
Example: Configuring Chassis Cluster Control Ports for Dual Control Links
This example shows how to configure chassis cluster control ports for use as dual control links on SRX5600 and SRX5800 devices. You need to configure the control ports that you will use on each device to set up the control links.
Dual control links are not supported on an SRX5400 device due to the limited number of slots.
Requirements
Before you begin:
Understand chassis cluster control links. See Understanding Chassis Cluster Control Plane and Control Links.
Physically connect the control ports on the devices. See Connecting SRX Series Devices to Create a Chassis Cluster.
Overview
By default, all control ports on SRX5600 and SRX5800 devices are disabled. The control links come up after you connect the control ports, configure them, and establish the chassis cluster.
This example configures control ports with the following FPCs and ports as the dual control links:
FPC 4, port 0
FPC 10, port 0
FPC 6, port 1
FPC 12, port 1
Configuration
Procedure
CLI Quick Configuration
To quickly configure this section of the example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
{primary:node0}[edit]
set chassis cluster control-ports fpc 4 port 0
set chassis cluster control-ports fpc 10 port 0
set chassis cluster control-ports fpc 6 port 1
set chassis cluster control-ports fpc 12 port 1
Step-by-Step Procedure
To configure control ports for use as dual control links for the chassis cluster:
Specify the control ports.
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 4 port 0
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 10 port 0
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 6 port 1
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 12 port 1
Results
From configuration mode, confirm your configuration by entering the show chassis cluster command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant to this example. Any other configuration on the system has been replaced with ellipses (...).
{primary:node0}[edit]
user@host# show chassis cluster
...
control-ports {
    fpc 4 port 0;
    fpc 6 port 1;
    fpc 10 port 0;
    fpc 12 port 1;
}
...
If you are done configuring the device, enter commit from configuration mode.
Verification
Verifying the Chassis Cluster Status
Purpose
Verify the chassis cluster status.
Action
From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node                  Priority     Status     Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0             100          primary    no       no
    node1             1            secondary  no       no

Redundancy group: 1 , Failover count: 1
    node0             0            primary    no       no
    node1             0            secondary  no       no
Meaning
Use the show chassis cluster status command to confirm that the devices in the chassis cluster are communicating with each other. The chassis cluster is functioning properly when one device reports as the primary node and the other as the secondary node.
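To verify the state of the control links themselves (not covered in this example), you can also enter the show chassis cluster interfaces command from operational mode; it lists the control interfaces and reports the status of each control link:
{primary:node0}
user@host> show chassis cluster interfaces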