Maintaining the SRX5800 Host Subsystem
Maintaining the SRX5800 Services Gateway Host Subsystem and SCBs
Purpose
For optimum services gateway performance, verify the condition of the host subsystem and any additional SCBs. The host subsystem comprises an SCB and a Routing Engine installed into a slot in the SCB.
Action
On a regular basis:
Check the LEDs on the craft interface to view information about the status of the Routing Engines.
Check the LEDs on the SCB faceplate.
Check the LEDs on the Routing Engine faceplate.
To check the status of the Routing Engine, issue the show chassis routing-engine command. The output is similar to the following:
user@host> show chassis routing-engine
Routing Engine status:
  Slot 0:
    Current state                  Master
    Election priority              Master (default)
    Temperature                 36 degrees C / 96 degrees F
    CPU temperature             33 degrees C / 91 degrees F
    DRAM                      2048 MB
    Memory utilization          12 percent
    CPU utilization:
      User                       1 percent
      Background                 0 percent
      Kernel                     4 percent
      Interrupt                  0 percent
      Idle                      94 percent
    Model                          RE-S-1300
    Serial ID                      1000697084
    Start time                     2008-07-11 08:31:44 PDT
    Uptime                         3 hours, 27 minutes, 27 seconds
    Load averages:                 1 minute   5 minute  15 minute
                                       0.44       0.16       0.06
To check the status of the SCB, issue the show chassis environment cb command. The output is similar to the following:
user@host> show chassis environment cb
CB 0 status:
  State                      Online Master
  Temperature                40 degrees C / 104 degrees F
  Power 1
    1.2 V                      1208 mV
    1.5 V                      1521 mV
    1.8 V                      1807 mV
    2.5 V                      2507 mV
    3.3 V                      3319 mV
    5.0 V                      5033 mV
    12.0 V                    12142 mV
    1.25 V                     1243 mV
    3.3 V SM3                  3312 mV
    5 V RE                     5059 mV
    12 V RE                   11968 mV
  Power 2
    11.3 V bias PEM           11253 mV
    4.6 V bias MidPlane        4814 mV
    11.3 V bias FPD           11234 mV
    11.3 V bias POE 0         11176 mV
    11.3 V bias POE 1         11292 mV
  Bus Revision               42
  FPGA Revision              1
To check the status of a specific SCB, issue the show chassis environment cb node slot command, for example, show chassis environment cb node 0.
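For example, to check the SCB in slot 0:
user@host> show chassis environment cb node 0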
For more information about using the CLI, see the CLI Explorer.
Taking the SRX5800 Services Gateway Host Subsystem Offline
The host subsystem is composed of an SCB with a Routing Engine installed in it. You take the host subsystem offline and bring it online as a unit. Before you replace an SCB or a Routing Engine, you must take the host subsystem offline. Taking the host subsystem offline causes the device to shut down.
To take the host subsystem offline:
- On the console or other management device connected to the Routing Engine that is paired with the SCB you are removing, enter CLI operational mode and issue the following command. The command shuts down the Routing Engine cleanly, so its state information is preserved (a sample console sequence follows this procedure):
user@host> request system halt
- Wait until a message appears on the console confirming that the operating system has halted.
For more information about the command, see Junos OS System Basics and Services Command Reference at www.juniper.net/documentation/.
Note The SCB might continue forwarding traffic for approximately 5 minutes after the request system halt command has been issued.
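For reference, the console sequence for the halt command is similar to the following; the exact prompts and messages vary by Junos OS release:
user@host> request system halt
Halt the system ? [yes,no] (no) yes
...
The operating system has halted.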
Operating and Positioning the SRX5800 Services Gateway SCB Ejectors
When removing or inserting an SCB, ensure that the SCBs or blank panels in adjacent slots are fully inserted so that the ejector handles do not strike them; hitting an adjacent component with an ejector handle can damage it.
The ejector handles rotate about a center point and must be stored toward the center of the board. Ensure that the long ends of the ejectors, located at both the top and the bottom of the board, are vertical and pressed as far as possible toward the center of the board. After you install an SCB, place the ejector handles in their proper position, vertical and toward the center of the board. To avoid blocking the visibility of the LEDs, position the ejectors over the PARK icon.
To insert or remove the SCB, slide an ejector horizontally across the SCB, rotate it a quarter turn, and slide it again; repeat as necessary. Use the indexing feature of the ejectors to maximize leverage and to avoid hitting adjacent components.
Operate both ejector handles simultaneously. The insertion force on an SCB is too great for one ejector.
Replacing an SRX5800 Services Gateway SCB
Before replacing an SCB, read the guidelines in Operating and Positioning the SRX5800 Services Gateway SCB Ejectors. To replace an SCB, perform the following procedures:
The procedure to replace an SCB applies to the SRX5K-SCB, SRX5K-SCBE, SRX5K-SCB3, and SRX5K-SCB4.
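To confirm which SCB model is installed before you begin, check the chassis inventory. A representative example follows; the revision, part number, serial number, and description shown here are placeholders, and the Description column identifies the installed SCB model:
user@host> show chassis hardware | match CB
CB 0             REV 01   750-xxxxxx   <serial number>   <SCB model>
CB 1             REV 01   750-xxxxxx   <serial number>   <SCB model>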
Removing an SRX5800 Services Gateway SCB
Before you begin to remove an SCB:
Ensure you understand how to prevent electrostatic discharge (ESD) damage. See Prevention of Electrostatic Discharge Damage.
Ensure that you have the following available:
ESD grounding strap
Replacement SCB or blank panel
Antistatic mat
To remove an SCB (see Figure 1):
The SCB and Routing Engine are removed as a unit. You can also remove the Routing Engine separately.
Before removing an SCB, ensure that you know how to operate the ejector handles properly to avoid damage to the equipment.
- If you are removing an SCB from a chassis cluster, deactivate the fabric interfaces from either node. (A verification example follows this procedure.)
Note The fabric interfaces should be deactivated to avoid failures in the chassis cluster.
user@host# deactivate interfaces fab0
user@host# deactivate interfaces fab1
user@host# commit
- Power off the services gateway using the request system power-off command.
user@host> request system power-off
Note Wait until a message appears on the console confirming that the services have stopped.
- Physically turn off the power and remove the power cables from the chassis.
- Place an electrostatic bag or antistatic mat on a flat, stable surface.
- Attach an ESD grounding strap to your bare wrist, and connect the strap to one of the ESD points on the chassis.
- Rotate the ejector handles simultaneously counterclockwise to unseat the SCB.
- Grasp the ejector handles and slide the SCB about halfway out of the chassis.
- Place one hand underneath the SCB to support it and slide it completely out of the chassis.
- Place the SCB on the antistatic mat.
- If you are not replacing the SCB now, install a blank panel over the empty slot.
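In a chassis cluster, after you deactivate the fabric interfaces and commit, and before you power off the node, you can optionally confirm that the fabric link is down. A minimal check from operational mode; the excerpt below assumes a two-node cluster and shows only the relevant field:
user@host> show chassis cluster interfaces
Fabric link status: Down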

Installing an SRX5800 Services Gateway SCB
Before you begin to install an SCB:
Ensure you understand how to prevent electrostatic discharge (ESD) damage. See Prevention of Electrostatic Discharge Damage.
Ensure that you have the following available:
ESD grounding strap
To install an SCB (see Figure 2):
- Attach an ESD grounding strap to your bare wrist, and connect the strap to one of the ESD points on the chassis.
- Power off the services gateway using the request system power-off command.
user@host> request system power-off
Note Wait until a message appears on the console confirming that the services have stopped.
- Physically turn off the power and remove the power cables from the chassis.
- Carefully align the sides of the SCB with the guides inside the chassis.
- Slide the SCB into the chassis until you feel resistance, carefully ensuring that it is correctly aligned.
- Grasp both ejector handles and rotate them simultaneously clockwise until the SCB is fully seated.
- Place the ejector handles in the proper position, vertically and toward the center of the board.
- Connect the power cables to the chassis and power on the services gateway.
- To verify that the SCB is functioning normally, check the LEDs on its faceplate. The green OK/FAIL LED should light steadily a few minutes after the SCB is installed. If the OK/FAIL LED is red, remove and install the SCB again. If the OK/FAIL LED is still red, the SCB is not functioning properly. Contact your customer support representative. To check the status of the SCB:
user@host> show chassis environment cb
- If you installed an SCB into a chassis cluster, use the console of the newly installed SCB to put the node back into the cluster and reboot.
user@host> set chassis cluster cluster-id X node Y reboot
where X is the cluster ID and Y is the node ID.
- Activate the disabled fabric interfaces.
user@host# activate interfaces fab0
user@host# activate interfaces fab1
user@host# commit
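After the commit, you can confirm from either node that the node has rejoined the cluster and that the fabric interfaces are up again; for example:
user@host> show chassis cluster status
user@host> show chassis cluster interfaces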

Replacing the SRX5800 Services Gateway Routing Engine
To replace the Routing Engine, perform the following procedures:
The procedure to replace a Routing Engine applies to SRX5K-RE-13-20, SRX5K-RE-1800X4, and SRX5K-RE-128G.
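To confirm which Routing Engine model is installed, check the chassis inventory. A representative example follows; the revision, part number, serial number, and description are placeholders, and the Description column identifies the Routing Engine model:
user@host> show chassis hardware | match "Routing Engine"
Routing Engine 0 REV 01   740-xxxxxx   <serial number>   <Routing Engine model>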
Removing the SRX5800 Services Gateway Routing Engine
Before you begin to remove a Routing Engine:
Ensure you understand how to prevent electrostatic discharge (ESD) damage. See Prevention of Electrostatic Discharge Damage.
Ensure that you have the following available:
ESD grounding strap
Replacement Routing Engine or blank panel
Antistatic mat
Phillips (+) number 2 screwdriver
Before you replace a Routing Engine, you must take the host subsystem offline.
To remove the Routing Engine (see Figure 3):
- Take the host subsystem offline as described in Taking the SRX5800 Services Gateway Host Subsystem Offline.
- Place an electrostatic bag or antistatic mat on a flat, stable surface.
- Attach an ESD grounding strap to your bare wrist, and connect the strap to one of the ESD points on the chassis.
- Using the Phillips (+) number 2 screwdriver, loosen the captive screws at each end of the Routing Engine faceplate.
- Flip the ejector handles outward to unseat the Routing Engine.
- Grasp the Routing Engine by the ejector handles and slide it about halfway out of the chassis.
- Place one hand underneath the Routing Engine to support it and slide it completely out of the chassis.
- Place the Routing Engine on the antistatic mat.

Installing the SRX5800 Services Gateway Routing Engine
Before you begin to install a Routing Engine:
Ensure you understand how to prevent electrostatic discharge (ESD) damage. See Prevention of Electrostatic Discharge Damage.
Ensure that you have the following available:
ESD grounding strap
Phillips (+) number 2 screwdriver
To install a Routing Engine into an SCB (see Figure 4):
If you install only one Routing Engine in the services gateway, you must install it in SCB slot 0 of the services gateway chassis.
- If you have not already done so, take the host subsystem offline. See Taking the SRX5800 Services Gateway Host Subsystem Offline.
- Attach an ESD grounding strap to your bare wrist, and connect the strap to one of the ESD points on the chassis.
- Ensure that the ejector handles are not in the locked position. If necessary, flip the ejector handles outward.
- Place one hand underneath the Routing Engine to support it.
- Carefully align the sides of the Routing Engine with the guides inside the opening on the SCB.
- Slide the Routing Engine into the SCB until you feel resistance, and then press the Routing Engine's faceplate until it engages the connectors.
- Press both of the ejector handles inward to seat the Routing Engine.
- Using the Phillips (+) number 2 screwdriver, tighten the captive screws on the top and bottom of the Routing Engine faceplate.
- Power on the services gateway.
The Routing Engine might require several minutes to boot.
After the Routing Engine boots, verify that it is installed correctly by checking the RE0 and RE1 LEDs on the craft interface. If the services gateway is operational and the Routing Engine is functioning properly, the green ONLINE LED lights steadily. If the red FAIL LED lights steadily instead, remove and install the Routing Engine again. If the red FAIL LED still lights steadily, the Routing Engine is not functioning properly. Contact your customer support representative. To check the status of the Routing Engine, use the CLI command:
user@host> show chassis routing-engine
Routing Engine status:
  Slot 0:
    Current state                  Master
...
For more information about using the CLI, see the CLI Explorer.
Figure 4: Installing the Routing Engine
- If the Routing Engine was replaced on one of the nodes in a chassis cluster, you need to copy certificates and key pairs from the other node in the cluster:
Start the shell interface as a root user on both nodes of the cluster.
Verify files in the /var/db/certs/common/key-pair folder of the source node (the other node in the cluster) and the destination node (the node on which the Routing Engine was replaced) by using the following command:
ls -la /var/db/certs/common/key-pair/
If the same files exist on both nodes, back up the files on the destination node to a different location. For example:
root@SRX-B% pwd
/var/db/certs/common/key-pair
root@SRX-B% ls -la
total 8
drwx------ 2 root wheel 512 Jan 22 15:09
drwx------ 7 root wheel 512 Mar 26 2009
-rw-r--r-- 1 root wheel 0 Jan 22 15:09 test
root@SRX-B% mv test test.old
root@SRX-B% ls -la
total 8
drwx------ 2 root wheel 512 Jan 22 15:10
drwx------ 7 root wheel 512 Mar 26 2009
-rw-r--r-- 1 root wheel 0 Jan 22 15:09 test.old
root@SRX-B%
Copy the files from the /var/db/certs/common/key-pair folder of the source node to the same folder on the destination node.
Note Ensure that you use the correct node number for the destination node.
On the destination node, use the ls -la command to verify that all files from the /var/db/certs/common/key-pair folder of the source node are copied.
Repeat Step b through Step e for the /var/db/certs/common/local and /var/db/certs/common/certification-authority folders.
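One way to copy the files is with scp from the shell of the destination node over the management network. A minimal sketch, assuming the source node's management (fxp0) address is 10.0.0.1 and that root login over SSH is permitted; adjust the address, user, and folder for your setup:
root@SRX-B% scp root@10.0.0.1:/var/db/certs/common/key-pair/* /var/db/certs/common/key-pair/
root@SRX-B% ls -la /var/db/certs/common/key-pair/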
Low Impact Hardware Upgrade for SCB3 and IOC3
If your device is part of a chassis cluster, you can upgrade SRX5K-SCBE (SCB2) to SRX5K-SCB3 (SCB3) and SRX5K-MPC (IOC2) to IOC3 (SRX5K-MPC3-100G10G or SRX5K-MPC3-40G10G) using the low-impact hardware upgrade (LICU) procedure, with minimum downtime. You can also follow this procedure to upgrade SCB1 to SCB2, and RE1 to RE2.
Before you begin the LICU procedure, verify that both services gateways in the cluster are running the same Junos OS release.
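You can confirm the running release from either node; in a chassis cluster, the output of show version lists both nodes:
admin@cluster> show version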
You can perform the hardware upgrade using the LICU process only.
You must perform the hardware upgrade at the same time as the software upgrade from Junos OS Release 12.3X48-D10 to 15.1X49-D10.
In the chassis cluster, the primary device is referred to as node 0 and the secondary device as node 1.
Follow these steps to perform the LICU.
- Ensure that the secondary node does not have an impact on network traffic by isolating it from the network when LICU is in progress. For this, disable the physical interfaces (RETH child interfaces) on the secondary node.
For SRX5400 Services Gateways:
admin@cluster# set interfaces xe-5/0/0 disable
admin@cluster# set interfaces xe-5/1/0 disable
For SRX5600 Services Gateways:
admin@cluster# set interfaces xe-9/0/0 disable
admin@cluster# set interfaces xe-9/0/4 disable
For SRX5800 Services Gateways:
admin@cluster# set interfaces xe-13/0/0 disable
admin@cluster# set interfaces xe-13/1/0 disable
- Disable SYN bit and TCP sequence number checking so that the secondary node can take over.
admin@cluster# set security flow tcp-session no-syn-check
admin@cluster# set security flow tcp-session no-sequence-check
- Commit the configuration.
root@# commit
- Disconnect the control and fabric links between the devices in the chassis cluster so that nodes running different Junos OS releases are disconnected. To do this, change the control ports and fabric ports to erroneous values: set the control ports to any non-SPC port and the fabric ports to any non-IOC port. Issue the following commands:
admin@cluster# delete chassis cluster control-ports
admin@cluster# set chassis cluster control-ports fpc 10 port 0    <<<<<<< non-SPC port
admin@cluster# set chassis cluster control-ports fpc 22 port 0    <<<<<<< non-SPC port
admin@cluster# delete interfaces fab0
admin@cluster# delete interfaces fab1
admin@cluster# set interfaces fab0 fabric-options member-interfaces xe-4/0/5    <<<<<<< non-IOC port
admin@cluster# set interfaces fab1 fabric-options member-interfaces xe-10/0/5   <<<<<<< non-IOC port
- Commit the configuration.
root@# commit
Note After you commit the configuration, the following error message appears:
Connection to node1 has been broken
error: remote unlock-configuration failed on node1 due to control plane communication break
Ignore the error message.
- Upgrade the Junos OS release on the secondary node from 12.3X48-D10 to 15.1X49-D10.
admin@cluster# request system software add <location of package/junos filename> no-validate no-copy
- Power on the secondary node.
admin@cluster# request system reboot
- Perform the hardware upgrade on the secondary node by replacing SCB2 with SCB3, IOC2 with IOC3, and the existing midplane with the enhanced midplane.
Follow these steps while upgrading the SCB:
To upgrade the Routing Engine on the secondary node:
- Before powering off the secondary node, copy the configuration information to a USB device.
- Replace RE1 with RE2 and upgrade the Junos OS on RE2.
- Upload the configuration to RE2 from the USB device.
For more information about mounting the USB drive on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.
Perform this step when you upgrade the MPC.
- Configure the control port, fabric port, and RETH child ports on the secondary node.
[edit]
root@clustert# show | display set | grep delete
delete groups global interfaces fab1
delete groups global interfaces fab0
delete interfaces reth0
delete interfaces reth1
delete interfaces xe-3/0/5 gigether-options redundant-parent reth0
delete interfaces xe-9/0/5 gigether-options redundant-parent reth0
delete interfaces xe-3/0/9 gigether-options redundant-parent reth0
delete interfaces xe-9/0/9 gigether-options redundant-parent reth0
[edit]
root@clustert# show | display set | grep fab
set groups global interfaces fab1 fabric-options member-interfaces xe-9/0/2
set groups global interfaces fab0 fabric-options member-interfaces xe-3/0/2
[edit]
root@clustert# show | display set | grep reth0
set chassis cluster redundancy-group 1 ip-monitoring family inet 44.44.44.2 interface reth0.0 secondary-ip-address 44.44.44.3
set interfaces xe-3/0/0 gigether-options redundant-parent reth0
set interfaces xe-9/0/0 gigether-options redundant-parent reth0
set interfaces reth0 vlan-tagging
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 vlan-id 20
set interfaces reth0 unit 0 family inet address 44.44.44.1/8
[edit]
root@clustert# show | display set | grep reth1
set interfaces xe-3/0/4 gigether-options redundant-parent reth1
set interfaces xe-9/0/4 gigether-options redundant-parent reth1
set interfaces reth1 vlan-tagging
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 vlan-id 30
set interfaces reth1 unit 0 family inet address 55.55.55.1/8
- Verify that the secondary node is running the upgraded Junos OS release.
root@cluster> show version node1
Hostname: <displays the hostname>
Model: <displays the model number>
Junos: 15.1X49-D10
JUNOS Software Release [15.1X49-D10]

root@cluster> show chassis cluster status
Monitor Failure codes:
    CS  Cold Sync monitoring        FL  Fabric Connection monitoring
    GR  GRES monitoring             HW  Hardware monitoring
    IF  Interface monitoring        IP  IP monitoring
    LB  Loopback monitoring         MB  Mbuf monitoring
    NH  Nexthop monitoring          NP  NPC monitoring
    SP  SPU monitoring              SM  Schedule monitoring
    CF  Config Sync monitoring

Cluster ID: 1
Node   Priority Status    Preempt Manual   Monitor-failures

Redundancy group: 0 , Failover count: 1
node0  0        lost      n/a     n/a      n/a
node1  100      primary   no      no       None

Redundancy group: 1 , Failover count: 3
node0  0        lost      n/a     n/a      n/a
node1  150      primary   no      no       None

root@cluster> show chassis fpc pic-status node1
Slot 1   Online       SRX5k IOC II
  PIC 0  Online       1x 100GE CFP
  PIC 2  Online       2x 40GE QSFP+
Slot 2   Online       SRX5k SPC II
  PIC 0  Online       SPU Cp
  PIC 1  Online       SPU Flow
  PIC 2  Online       SPU Flow
  PIC 3  Online       SPU Flow
Slot 3   Online       SRX5k IOC II
  PIC 0  Online       10x 10GE SFP+
  PIC 2  Online       2x 40GE QSFP+
Slot 4   Online       SRX5k SPC II
  PIC 0  Online       SPU Flow
  PIC 1  Online       SPU Flow
  PIC 2  Online       SPU Flow
  PIC 3  Online       SPU Flow
Slot 5   Online       SRX5k IOC II
  PIC 0  Online       10x 10GE SFP+
  PIC 2  Online       2x 40GE QSFP+
- Verify configuration changes by disabling interfaces on the primary node and enabling interfaces on the secondary node.
For SRX5400 Services Gateways:
admin@cluster# set interfaces xe-2/0/0 disable
admin@cluster# set interfaces xe-2/1/0 disable
admin@cluster# delete interfaces xe-5/0/0 disable
admin@cluster# delete interfaces xe-5/1/0 disable
For SRX5600 Services Gateways:
admin@cluster# set interfaces xe-2/0/0 disable
admin@cluster# set interfaces xe-2/0/4 disable
admin@cluster# delete interfaces xe-9/0/0 disable
admin@cluster# delete interfaces xe-9/0/4 disable
For SRX5800 Services Gateways:
admin@cluster# set interfaces xe-1/0/0 disable
admin@cluster# set interfaces xe-1/1/0 disable
admin@cluster# delete interfaces xe-13/0/0 disable
admin@cluster# delete interfaces xe-13/1/0 disable
- Check the configuration changes.
root@# commit check
- After verifying, commit the configuration.
root@# commit
Network traffic fails over to the secondary node.
- Verify that the failover was successful by checking the session tables and network traffic on the secondary node.
admin@cluster# show security flow session summary
admin@cluster# monitor interface traffic
- Upgrade the Junos OS release on the primary node from 12.3X48-D10 to 15.1X49-D10.
admin@cluster# request system software add <location of package/junos filename> no-validate no-copy
Ignore error messages pertaining to the disconnected cluster.
- Power on the primary node.
admin@cluster# request system reboot
- Perform the hardware upgrade on the primary node by replacing SCB2 with SCB3, IOC2 with IOC3, and the existing midplane with the enhanced midplane.
Perform the following steps while upgrading the SCB.
To upgrade the Routing Engine on the primary node:
- Before powering off the primary node, copy the configuration information to a USB device.
- Replace RE1 with RE2 and upgrade the Junos OS on RE2.
- Upload the configuration to RE2 from the USB device.
For more information about mounting the USB drive on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.
Perform this step when you upgrade the MPC.
- Configure the control port, fabric port, and RETH child ports on the primary node.
[edit]
root@clustert# show | display set | grep delete
delete groups global interfaces fab1
delete groups global interfaces fab0
delete interfaces reth0
delete interfaces reth1
delete interfaces xe-3/0/5 gigether-options redundant-parent reth0
delete interfaces xe-9/0/5 gigether-options redundant-parent reth0
delete interfaces xe-3/0/9 gigether-options redundant-parent reth0
delete interfaces xe-9/0/9 gigether-options redundant-parent reth0
[edit]
root@clustert# show | display set | grep fab
set groups global interfaces fab1 fabric-options member-interfaces xe-9/0/2
set groups global interfaces fab0 fabric-options member-interfaces xe-3/0/2
[edit]
root@clustert# show | display set | grep reth0
set chassis cluster redundancy-group 1 ip-monitoring family inet 44.44.44.2 interface reth0.0 secondary-ip-address 44.44.44.3
set interfaces xe-3/0/0 gigether-options redundant-parent reth0
set interfaces xe-9/0/0 gigether-options redundant-parent reth0
set interfaces reth0 vlan-tagging
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 vlan-id 20
set interfaces reth0 unit 0 family inet address 44.44.44.1/8
[edit]
root@clustert# show | display set | grep reth1
set interfaces xe-3/0/4 gigether-options redundant-parent reth1
set interfaces xe-9/0/4 gigether-options redundant-parent reth1
set interfaces reth1 vlan-tagging
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 vlan-id 30
set interfaces reth1 unit 0 family inet address 55.55.55.1/8
- Verify that the primary node is running the upgraded Junos OS release, and that the primary node is available to take over network traffic.
root@cluster> show version node1
Hostname: <displays the hostname>
Model: <displays the model number>
Junos: 15.1X49-D10
JUNOS Software Release [15.1X49-D10]

root@cluster> show chassis cluster status
Monitor Failure codes:
    CS  Cold Sync monitoring        FL  Fabric Connection monitoring
    GR  GRES monitoring             HW  Hardware monitoring
    IF  Interface monitoring        IP  IP monitoring
    LB  Loopback monitoring         MB  Mbuf monitoring
    NH  Nexthop monitoring          NP  NPC monitoring
    SP  SPU monitoring              SM  Schedule monitoring
    CF  Config Sync monitoring

Cluster ID: 1
Node   Priority Status    Preempt Manual   Monitor-failures

Redundancy group: 0 , Failover count: 1
node0  0        lost      n/a     n/a      n/a
node1  100      primary   no      no       None

Redundancy group: 1 , Failover count: 3
node0  0        lost      n/a     n/a      n/a
node1  150      primary   no      no       None

root@cluster> show chassis fpc pic-status node1
Slot 1   Online       SRX5k IOC II
  PIC 0  Online       1x 100GE CFP
  PIC 2  Online       2x 40GE QSFP+
Slot 2   Online       SRX5k SPC II
  PIC 0  Online       SPU Cp
  PIC 1  Online       SPU Flow
  PIC 2  Online       SPU Flow
  PIC 3  Online       SPU Flow
Slot 3   Online       SRX5k IOC II
  PIC 0  Online       10x 10GE SFP+
  PIC 2  Online       2x 40GE QSFP+
Slot 4   Online       SRX5k SPC II
  PIC 0  Online       SPU Flow
  PIC 1  Online       SPU Flow
  PIC 2  Online       SPU Flow
  PIC 3  Online       SPU Flow
Slot 5   Online       SRX5k IOC II
  PIC 0  Online       10x 10GE SFP+
  PIC 2  Online       2x 40GE QSFP+
- Check the configuration changes.
root@# commit check
- After verifying, commit the configuration.
root@# commit
- Verify configuration changes by disabling interfaces on the secondary node and enabling interfaces on the primary node.
For SRX5400 Services Gateways:
admin@cluster# set interfaces xe-5/0/0 disable
admin@cluster# set interfaces xe-5/1/0 disable
admin@cluster# delete interfaces xe-2/0/0 disable
admin@cluster# delete interfaces xe-2/1/0 disable
For SRX5600 Services Gateways:
admin@cluster# set interfaces xe-9/0/0 disable
admin@cluster# set interfaces xe-9/0/4 disable
admin@cluster# delete interfaces xe-2/0/0 disable
admin@cluster# delete interfaces xe-2/0/4 disable
For SRX5800 Services Gateways:
admin@cluster# set interfaces xe-13/0/0 disable
admin@cluster# set interfaces xe-13/1/0 disable
admin@cluster# delete interfaces xe-1/0/0 disable
admin@cluster# delete interfaces xe-1/1/0 disable
Network traffic fails over to the primary node.
- To synchronize the devices within the cluster, reconfigure the control ports and fabric ports with the correct port values on the secondary node.
admin@cluster# delete chassis cluster control-ports
admin@cluster# set chassis cluster control-ports fpc 1 port 0
admin@cluster# set chassis cluster control-ports fpc 13 port 0
admin@cluster# delete interfaces fab0
admin@cluster# delete interfaces fab1
admin@cluster# set interfaces fab0 fabric-options member-interfaces xe-3/0/2
admin@cluster# set interfaces fab1 fabric-options member-interfaces xe-9/0/2
- Commit the configuration.
root@# commit
- Power on the secondary node.
admin@cluster# request system reboot
- When you power on the secondary node, enable the control ports and fabric ports on the primary node, and reconfigure them with the correct port values.
admin@cluster# delete chassis cluster control-ports
admin@cluster# set chassis cluster control-ports fpc 1 port 0
admin@cluster# set chassis cluster control-ports fpc 13 port 0
admin@cluster# delete interfaces fab0
admin@cluster# delete interfaces fab1
admin@cluster# set interfaces fab0 fabric-options member-interfaces xe-3/0/2
admin@cluster# set interfaces fab1 fabric-options member-interfaces xe-9/0/2
- Commit the configuration.
root@# commit
- After the secondary node is up, verify that it synchronizes with the primary node.
admin@cluster# delete interfaces xe-4/0/5 disable
admin@cluster# delete interfaces xe-10/0/5 disable
- Enable SYN bit and TCP sequence number checking for the secondary node.
admin@cluster# delete security flow tcp-session no-syn-check
admin@cluster# delete security flow tcp-session no-sequence-check
- Commit the configuration.
root@# commit
- Verify the Redundancy Group (RG) states and their priority.
root@cluster> show version
node0:
--------------------------------------------------------------------------
Hostname: <displays the hostname>
Model: <displays the model number>
Junos: 15.1X49-D10
JUNOS Software Release [15.1X49-D10]

node1:
--------------------------------------------------------------------------
Hostname: <displays the hostname>
Model: <displays the model>
Junos: 15.1X49-D10
JUNOS Software Release [15.1X49-D10]
After the secondary node is powered on, issue the following command:
root@cluster> show chassis fpc pic-status
node0:
--------------------------------------------------------------------------
Slot 1   Online       SRX5k IOC II
  PIC 0  Online       1x 100GE CFP
  PIC 2  Online       2x 40GE QSFP+
Slot 2   Online       SRX5k SPC II
  PIC 0  Online       SPU Cp
  PIC 1  Online       SPU Flow
  PIC 2  Online       SPU Flow
  PIC 3  Online       SPU Flow
Slot 3   Online       SRX5k IOC3 24XGE+6XLG
  PIC 0  Online       12x 10GE SFP+
  PIC 1  Online       12x 10GE SFP+
  PIC 2  Offline      3x 40GE QSFP+
  PIC 3  Offline      3x 40GE QSFP+
Slot 4   Online       SRX5k SPC II
  PIC 0  Online       SPU Flow
  PIC 1  Online       SPU Flow
  PIC 2  Online       SPU Flow
  PIC 3  Online       SPU Flow
Slot 5   Online       SRX5k IOC II
  PIC 0  Online       10x 10GE SFP+
  PIC 2  Online       10x 10GE SFP+

node1:
--------------------------------------------------------------------------
Slot 1   Online       SRX5k IOC II
  PIC 0  Online       1x 100GE CFP
  PIC 2  Online       2x 40GE QSFP+
Slot 2   Online       SRX5k SPC II
  PIC 0  Online       SPU Cp
  PIC 1  Online       SPU Flow
  PIC 2  Online       SPU Flow
  PIC 3  Online       SPU Flow
Slot 3   Online       SRX5k IOC3 24XGE+6XLG
  PIC 0  Online       12x 10GE SFP+
  PIC 1  Online       12x 10GE SFP+
  PIC 2  Offline      3x 40GE QSFP+
  PIC 3  Offline      3x 40GE QSFP+
Slot 4   Online       SRX5k SPC II
  PIC 0  Online       SPU Flow
  PIC 1  Online       SPU Flow
  PIC 2  Online       SPU Flow
  PIC 3  Online       SPU Flow
Slot 5   Online       SRX5k IOC II
  PIC 0  Online       10x 10GE SFP+
  PIC 2  Online       2x 40GE QSFP+
root@cluster> show chassis cluster status
Monitor Failure codes:
    CS  Cold Sync monitoring        FL  Fabric Connection monitoring
    GR  GRES monitoring             HW  Hardware monitoring
    IF  Interface monitoring        IP  IP monitoring
    LB  Loopback monitoring         MB  Mbuf monitoring
    NH  Nexthop monitoring          NP  NPC monitoring
    SP  SPU monitoring              SM  Schedule monitoring
    CF  Config Sync monitoring

Cluster ID: 1
Node   Priority Status     Preempt Manual   Monitor-failures

Redundancy group: 0 , Failover count: 0
node0  250      primary    no      no       None
node1  100      secondary  no      no       None

Redundancy group: 1 , Failover count: 0
node0  254      primary    no      no       None
node1  150      secondary  no      no       None
root@cluster> show security monitoring
node0:
--------------------------------------------------------------------------
                          Flow session   Flow session   CP session   CP session
FPC  PIC  CPU  Mem        current        maximum        current      maximum
---------------------------------------------------------------------------
  2    0    0   11        0              0              1999999      104857600
  2    1    2    5        289065         4194304        0            0
  2    2    2    5        289062         4194304        0            0
  2    3    2    5        289060         4194304        0            0
  4    0    2    5        289061         4194304        0            0
  4    1    2    5        281249         4194304        0            0
  4    2    2    5        281251         4194304        0            0
  4    3    2    5        281251         4194304        0            0

node1:
--------------------------------------------------------------------------
                          Flow session   Flow session   CP session   CP session
FPC  PIC  CPU  Mem        current        maximum        current      maximum
---------------------------------------------------------------------------
  2    0    0   11        0              0              1999999      104857600
  2    1    0    5        289065         4194304        0            0
  2    2    0    5        289062         4194304        0            0
  2    3    0    5        289060         4194304        0            0
  4    0    0    5        289061         4194304        0            0
  4    1    0    5        281249         4194304        0            0
  4    2    0    5        281251         4194304        0            0
  4    3    0    5        281251         4194304        0            0
Enable the traffic interfaces on the secondary node.
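For example, on an SRX5800 cluster this reverses the disable commands applied to the secondary node earlier in this procedure; adjust the interface names for your platform and configuration:
admin@cluster# delete interfaces xe-13/0/0 disable
admin@cluster# delete interfaces xe-13/1/0 disable
admin@cluster# commit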
root@cluster> show interfaces terse | grep reth0
xe-3/0/0.0 up up aenet --> reth0.0
xe-3/0/0.32767 up up aenet --> reth0.32767
xe-9/0/0.0 up up aenet --> reth0.0
xe-9/0/0.32767 up up aenet --> reth0.32767
reth0 up up
reth0.0 up up inet 44.44.44.1/8
reth0.32767 up up multiservice
root@cluster> show interfaces terse | grep reth1
xe-3/0/4.0 up up aenet --> reth1.0
xe-3/0/4.32767 up up aenet --> reth1.32767
xe-9/0/4.0 up up aenet --> reth1.0
xe-9/0/4.32767 up up aenet --> reth1.32767
reth1 up up
reth1.0 up up inet 55.55.55.1/8
reth1.32767 up up multiservice
For more information about LICU, refer to KB article KB17947 from the Knowledge Base.
In-Service Hardware Upgrade for SRX5K-RE-1800X4 and SRX5K-SCBE or SRX5K-RE-1800X4 and SRX5K-SCB3 in a Chassis Cluster
If your device is part of a chassis cluster, you can use the in-service hardware upgrade (ISHU) procedure to upgrade:
SRX5K-SCB with SRX5K-RE-13-20 to SRX5K-SCBE with SRX5K-RE-1800X4
Note Both services gateways must be running the same Junos OS version, 12.3X48.
SRX5K-SCBE with SRX5K-RE-1800X4 to SRX5K-SCB3 with SRX5K-RE-1800X4
Note You cannot upgrade SRX5K-SCB with SRX5K-RE-13-20 directly to SRX5K-SCB3 with SRX5K-RE-1800X4.
We strongly recommend that you perform the ISHU during a maintenance window, or during the lowest possible traffic, because the secondary node is not available at this time.
Ensure that you upgrade the SCB and Routing Engine at the same time, because only the following combinations are supported:
SRX5K-RE-13-20 and SRX5K-SCB
SRX5K-RE-1800X4 and SRX5K-SCBE
SRX5K-RE-1800X4 and SRX5K-SCB3
While performing the ISHU, in the SRX5800 services gateway, the second SCB can contain a Routing Engine but the third SCB must not contain a Routing Engine. In the SRX5600 services gateway, the second SCB can contain a Routing Engine.
Ensure that the following prerequisites are completed before you begin the ISHU procedure:
Replace all interface cards such as IOCs and Flex IOCs as specified in Table 1.
Table 1: List of Interface Cards for Upgrade
Cards to Replace      Replacement Cards for Upgrade
SRX5K-40GE-SFP        SRX5K-MPC and MICs
SRX5K-4XGE-XFP        SRX5K-MPC and MICs
SRX5K-FPC-IOC         SRX5K-MPC and MICs
SRX5K-RE-13-20        SRX5K-RE-1800X4
SRX5K-SCB             SRX5K-SCBE
SRX5K-SCBE            SRX5K-SCB3
Verify that both services gateways in the cluster are running the same Junos OS version: Release 12.1X47-D15 or later for SRX5K-SCBE with SRX5K-RE-1800X4, and Release 15.1X49-D10 or later for SRX5K-SCB3 with SRX5K-RE-1800X4. For more information about the cards supported on the services gateways, see Cards Supported on SRX5400, SRX5600, and SRX5800 Services Gateways.
For more information about unified in-service software upgrade (unified ISSU), see Upgrading Both Devices in a Chassis Cluster Using an ISSU.
To perform an ISHU:
- Export the configuration information from the secondary node to a USB or an external storage device.
For more information about mounting the USB on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.
- Power off the secondary node.
See Powering Off the SRX5400 Services Gateway, Powering Off the SRX5600 Services Gateway, or Powering Off the SRX5800 Services Gateway.
- Disconnect all the interface cards from the chassis backplane by pulling them out 6 to 8 inches (leaving the cables in place).
- Replace the SRX5K-SCBs with SRX5K-SCBEs, or SRX5K-SCBEs with SRX5K-SCB3s and SRX5K-RE-13-20s with SRX5K-RE-1800X4s based on the chassis specifications.
- Power on the secondary node.
- After the secondary node reboots as a standalone node, configure the same cluster ID as on the primary node.
root@> set chassis cluster cluster-id 1 node 1
- Install the same Junos OS software image on the secondary node as on the primary node and reboot.
Note Ensure that the Junos OS version installed is release 12.1X47-D15 or later for SRX5K-RE-1800X4 & SRX5K-SCBE and 15.1X49-D10 or later for SRX5K-RE-1800X4 & SRX5K-SCB3.
- After the secondary node reboots, import all the configuration settings from the USB to the node.
For more information about mounting the USB on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.
- Power off the secondary node.
See Powering Off the SRX5400 Services Gateway, Powering Off the SRX5600 Services Gateway, or Powering Off the SRX5800 Services Gateway.
- Re-insert all the interface cards into the chassis backplane.
Note Ensure the cards are inserted in the same order as in the primary node, and maintain connectivity between the control link and fabric link.
- Power on the node and issue this command to ensure all the cards are online:
user@host> show chassis fpc pic-status
After the node boots, it must join the cluster as a secondary node. To verify, issue the following command:
admin@cluster> show chassis cluster status
Note The command output must indicate that the node priority is set to a non-zero value, and that the cluster contains a primary node and a secondary node.
- Manually initiate Redundancy Group (RG) failover to the upgraded node so that it is assigned to all RGs as a primary node.
For RG0, issue the following command:
admin@cluster> request chassis cluster failover redundancy-group 0 node 1
For RG1, issue the following command:
admin@cluster> request chassis cluster failover redundancy-group 1 node 1
Verify that all RGs are failed over by issuing the following command:
admin@cluster> show chassis cluster status
- Verify the operations of the upgraded secondary node by performing the following:
To ensure all FPCs are online, issue the following command:
admin@cluster> show chassis fpc pic-status
To ensure all RGs are upgraded and the node priority is set to a non-zero value, issue the following command:
admin@cluster> show chassis cluster status
To ensure that the upgraded primary node receives and transmits data, issue the following command:
admin@cluster> monitor interface traffic
To ensure sessions are created and deleted on the upgraded node, issue the following command:
admin@cluster> show security monitoring
- Repeat Step 1 through 12 for the primary node.
- To ensure that the ISHU process is completed successfully, check the status of the cluster by issuing the following command:
admin@cluster> show chassis cluster status
For detailed information about chassis cluster, see the Chassis Cluster User Guide for SRX Series Devices at www.juniper.net/documentation/.