Maintaining the SRX5400 Host Subsystem

 

Maintaining the SRX5400 Services Gateway Host Subsystem

Purpose

For optimum services gateway performance, verify the condition of the host subsystem. The host subsystem consists of an SCB with a Routing Engine installed in the slot in the SCB.

Action

On a regular basis:

  • Check the LEDs on the craft interface to view information about the status of the Routing Engines.

  • Check the LEDs on the SCB faceplate.

  • Check the LEDs on the Routing Engine faceplate.

  • To check the status of the Routing Engine, issue the show chassis routing-engine command:

    user@host> show chassis routing-engine
  • To check the status of the SCB, issue the show chassis environment cb command:

    user@host> show chassis environment cb

To check the status of a specific SCB, issue the show chassis environment cb node slot command, for example, show chassis environment cb node 0.

For more information about using the CLI, see the CLI Explorer.

Taking the SRX5400 Services Gateway Host Subsystem Offline

The host subsystem is composed of an SCB with a Routing Engine installed in it. You take the host subsystem offline and bring it online as a unit. Before you replace an SCB or a Routing Engine, you must take the host subsystem offline. Taking the host subsystem offline causes the device to shut down.

To take the host subsystem offline:

  1. On the console or other management device connected to the Routing Engine that is paired with the SCB you are removing, enter CLI operational mode and issue the following command. The command shuts down the Routing Engine cleanly, so its state information is preserved:
    user@host> request system halt
  2. Wait until a message appears on the console confirming that the operating system has halted.

    For more information about the command, see Junos OS System Basics and Services Command Reference at www.juniper.net/documentation/.

    Note

    The SCB might continue forwarding traffic for approximately 5 minutes after the request system halt command has been issued.

Operating and Positioning the SRX5400 Services Gateway SCB Ejectors

  • When removing or inserting the SCB, ensure that the cards or blank panels in adjacent slots are fully inserted, so that the ejector handles do not strike them and damage the equipment.

  • The ejector handles must be stored toward the center of the board. Ensure the long ends of the ejectors located at both the right and left ends of the board are horizontal and pressed as far as possible toward the center of the board.

  • To insert or remove the SCB, slide an ejector horizontally across the SCB, rotate it a quarter turn, and slide it again; repeat as necessary. Use the indexing feature to maximize leverage and to avoid hitting adjacent components.

  • Operate both ejector handles simultaneously. The insertion force on the SCB is too great for one ejector.

Replacing the SRX5400 Services Gateway SCB

Before replacing the SCB, read the guidelines in Operating and Positioning the SRX5400 Services Gateway SCB Ejectors. To replace the SCB, perform the following procedures:

Note

The procedure to replace an SCB applies to the SRX5K-SCB, SRX5K-SCBE, and SRX5K-SCB3.

  1. Removing the SRX5400 Services Gateway SCB

  2. Installing an SRX5400 Services Gateway SCB

Removing the SRX5400 Services Gateway SCB

To remove the SCB (see Figure 1):

Note

The SCB and Routing Engine are removed as a unit. You can also remove the Routing Engine separately.

Caution

Before removing the SCB, ensure that you know how to operate the ejector handles properly to avoid damage to the equipment.

  1. If you are removing an SCB from a chassis cluster, deactivate the fabric interfaces from either node.

    Note

    Deactivate the fabric interfaces to avoid failures in the chassis cluster.

    user@host# deactivate interfaces fab0
    user@host# deactivate interfaces fab1
    user@host# commit
  2. Power off the services gateway by issuing the request system power-off command.
    user@host> request system power-off
    Note

    Wait until a message appears on the console confirming that the services have stopped.

  3. Physically turn off the power and remove the power cables from the chassis.
  4. Place an electrostatic bag or antistatic mat on a flat, stable surface.
  5. Wrap and fasten one end of the ESD grounding strap around your bare wrist, and connect the other end of the strap to an ESD point.
  6. Rotate the ejector handles simultaneously counterclockwise to unseat the SCB.
  7. Grasp the ejector handles and slide the SCB about halfway out of the chassis.
  8. Place one hand underneath the SCB to support it and slide it completely out of the chassis.
  9. Place the SCB on the antistatic mat.
  10. If you are not replacing the SCB now, install a blank panel over the empty slot.
Figure 1: Removing the SCB
Removing the SCB


Installing an SRX5400 Services Gateway SCB

To install the SCB (see Figure 2):

  1. Wrap and fasten one end of the ESD grounding strap around your bare wrist, and connect the other end of the strap to an ESD point.
  2. Power off the services gateway by issuing the request system power-off command.
    user@host> request system power-off
    Note

    Wait until a message appears on the console confirming that the services have stopped.

  3. Physically turn off the power and remove the power cables from the chassis.
  4. Carefully align the sides of the SCB with the guides inside the chassis.
  5. Slide the SCB into the chassis until you feel resistance, carefully ensuring that it is correctly aligned.
    Figure 2: Installing the SCB
    Installing the SCB
  6. Grasp both ejector handles and rotate them simultaneously clockwise until the SCB is fully seated.
  7. Place the ejector handles in the proper position, horizontally and toward the center of the board.
  8. Connect the power cables to the chassis and power on the services gateway. The OK LED on the power supply faceplate should blink, then light steadily.
  9. To verify that the SCB is functioning normally, check the LEDs on its faceplate. The green OK/FAIL LED should light steadily a few minutes after the SCB is installed. If the OK/FAIL LED is red, remove and reinstall the SCB. If the OK/FAIL LED is still red, the SCB is not functioning properly. Contact your customer support representative.

    To check the status of the SCB:

    user@host> show chassis environment cb
  10. If you installed the SCB into a chassis cluster, use the console of the newly installed SCB to put the node back into the cluster and reboot.
    user@host> set chassis cluster cluster-id X node Y reboot

    where X is the cluster ID and Y is the node ID.

  11. Activate the disabled fabric interfaces.
    user@host# activate interfaces fab0
    user@host# activate interfaces fab1
    user@host# commit

Replacing the SRX5400 Services Gateway Routing Engine

To replace the Routing Engine, perform the following procedures:

Note

The procedure to replace a Routing Engine applies to the SRX5K-RE-13-20, SRX5K-RE-1800X4, and SRX5K-RE-128G.

  1. Removing the SRX5400 Services Gateway Routing Engine

  2. Installing the SRX5400 Services Gateway Routing Engine

Removing the SRX5400 Services Gateway Routing Engine

Caution

Before you replace the Routing Engine, you must take the host subsystem offline.

To remove the Routing Engine (see Figure 3):

  1. Take the host subsystem offline as described in Taking the SRX5400 Services Gateway Host Subsystem Offline.
  2. Place an electrostatic bag or antistatic mat on a flat, stable surface.
  3. Wrap and fasten one end of the ESD grounding strap around your bare wrist, and connect the other end of the strap to an ESD point.
  4. Flip the ejector handles outward to unseat the Routing Engine.
  5. Grasp the Routing Engine by the ejector handles and slide it about halfway out of the chassis.
  6. Place one hand underneath the Routing Engine to support it and slide it completely out of the chassis.
    Figure 3: Removing the Routing Engine
    Removing the Routing Engine
  7. Place the Routing Engine on the antistatic mat.

Installing the SRX5400 Services Gateway Routing Engine

To install the Routing Engine into the SCB (see Figure 4):

Note

If you install only one Routing Engine in the services gateway, you must install it in the SCB in slot 0 of the services gateway chassis.

  1. If you have not already done so, take the host subsystem offline. See Taking the SRX5400 Services Gateway Host Subsystem Offline.
  2. Wrap and fasten one end of the ESD grounding strap around your bare wrist, and connect the other end of the strap to an ESD point.
  3. Ensure that the ejector handles are not in the locked position. If necessary, flip the ejector handles outward.
  4. Place one hand underneath the Routing Engine to support it.
  5. Carefully align the sides of the Routing Engine with the guides inside the opening on the SCB.
  6. Slide the Routing Engine into the SCB until you feel resistance, and then press the Routing Engine's faceplate until it engages the connectors.
    Figure 4: Installing the Routing Engine
    Installing the Routing Engine
  7. Press both of the ejector handles inward to seat the Routing Engine.
  8. Tighten the captive screws on the right and left ends of the Routing Engine faceplate.
  9. Power on the services gateway. The OK LED on the power supply faceplate should blink, then light steadily.

    The Routing Engine might require several minutes to boot.

    After the Routing Engine boots, verify that it is installed correctly by checking the RE0 and RE1 LEDs on the craft interface. If the services gateway is operational and the Routing Engine is functioning properly, the green ONLINE LED lights steadily. If the red FAIL LED lights steadily instead, remove and install the Routing Engine again. If the red FAIL LED still lights steadily, the Routing Engine is not functioning properly. Contact your customer support representative.

    To check the status of the Routing Engine, use the CLI command:

    user@host> show chassis routing-engine

    For more information about using the CLI, see the CLI Explorer.

  10. If the Routing Engine was replaced on one of the nodes in a chassis cluster, then you need to copy certificates and key pairs from the other node in the cluster:
    1. Start the shell interface as a root user on both nodes of the cluster.

    2. Verify the files in the /var/db/certs/common/key-pair folder on the source node (the other node in the cluster) and on the destination node (the node on which the Routing Engine was replaced) by using the following command:

      ls -la /var/db/certs/common/key-pair/

    3. If the same files exist on both nodes, back up the files on the destination node to a different location. For example:

      root@SRX-B% pwd
      /var/db/certs/common/key-pair
      root@SRX-B% ls -la
      total 8
      drwx------ 2 root wheel 512 Jan 22 15:09 .
      drwx------ 7 root wheel 512 Mar 26 2009 ..
      -rw-r--r-- 1 root wheel 0 Jan 22 15:09 test
      root@SRX-B% mv test test.old
      root@SRX-B% ls -la
      total 8
      drwx------ 2 root wheel 512 Jan 22 15:10 .
      drwx------ 7 root wheel 512 Mar 26 2009 ..
      -rw-r--r-- 1 root wheel 0 Jan 22 15:09 test.old
      root@SRX-B%

    4. Copy the files from the /var/db/certs/common/key-pair folder of the source node to the same folder on the destination node.

      Note

      Ensure that you use the correct node number for the destination node.

    5. On the destination node, use the ls -la command to verify that all files from the /var/db/certs/common/key-pair folder of the source node have been copied.

    6. Repeat Step 2 through Step 5 for the /var/db/certs/common/local and /var/db/certs/common/certification-authority folders.

Low Impact Hardware Upgrade for SCB3 and IOC3

If your device is part of a chassis cluster, you can upgrade SRX5K-SCBE (SCB2) to SRX5K-SCB3 (SCB3) and SRX5K-MPC (IOC2) to IOC3 (SRX5K-MPC3-100G10G or SRX5K-MPC3-40G10G) using the low-impact hardware upgrade (LICU) procedure, with minimum downtime. You can also follow this procedure to upgrade SCB1 to SCB2, and RE1 to RE2.

Before you begin the LICU procedure, verify that both services gateways in the cluster are running the same Junos OS release.

Note

You can perform this hardware upgrade only by using the LICU process, and you must perform it at the same time as the software upgrade from Junos OS Release 12.3X48-D10 to Release 15.1X49-D10.

In the chassis cluster, the primary device is referred to as node 0 and the secondary device as node 1.

Follow these steps to perform the LICU.

  1. Ensure that the secondary node does not affect network traffic by isolating it from the network while the LICU is in progress. To do this, disable the physical interfaces (the RETH child interfaces) on the secondary node.
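    For example, assuming the secondary node's RETH child interfaces are xe-9/0/0 and xe-9/0/4 (interface names taken from the sample configuration later in this procedure; yours may differ), disable them in configuration mode:

    user@host# set interfaces xe-9/0/0 disable
    user@host# set interfaces xe-9/0/4 disable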
  2. Disable SYN bit and TCP sequence number checking so that the secondary node can take over.
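    These checks are disabled with the following configuration statements:

    user@host# set security flow tcp-session no-syn-check
    user@host# set security flow tcp-session no-sequence-check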
  3. Commit the configuration.
  4. Disconnect the control and fabric links between the devices in the chassis cluster so that the nodes running different Junos OS releases are disconnected. To do this, change the control port and fabric port configuration to erroneous values: set the fabric ports to any unused FPC and the control ports to any non-IOC port.
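    For example, the following statements point the fabric links at unused FPC ports and the control ports at a non-IOC slot. The slot and port numbers shown are placeholders; choose values that do not correspond to your installed IOCs:

    user@host# set groups global interfaces fab0 fabric-options member-interfaces xe-2/0/0
    user@host# set groups global interfaces fab1 fabric-options member-interfaces xe-8/0/0
    user@host# set chassis cluster control-ports fpc 2 port 0
    user@host# set chassis cluster control-ports fpc 8 port 1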
  5. Commit the configuration.
    Note

    After you commit the configuration, the following error message appears: Connection to node1 has been broken error:remote unlock-configuration failed on node1 due to control plane communication break.

    Ignore the error message.

  6. Upgrade the Junos OS release on the secondary node from 12.3X48-D10 to 15.1X49-D10.
  7. Power on the secondary node.


  8. Perform the hardware upgrade on the secondary node by replacing SCB2 with SCB3, IOC2 with IOC3, and the existing midplane with the enhanced midplane.

    Perform the following steps while upgrading the SCB.

    To upgrade the Routing Engine on the secondary node:

    1. Before powering off the secondary node, copy the configuration information to a USB device.
    2. Replace RE1 with RE2 and upgrade the Junos OS on RE2.
    3. Upload the configuration to RE2 from the USB device.

      For more information about mounting the USB drive on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.

    Perform this step when you upgrade the MPC.

    1. Configure the control port, fabric port, and RETH child ports on the secondary node.
      [edit]
      root@clustert# show | display set | grep delete
      delete groups global interfaces fab1
      delete groups global interfaces fab0
      delete interfaces reth0
      delete interfaces reth1
      delete interfaces xe-3/0/5 gigether-options redundant-parent reth0
      delete interfaces xe-9/0/5 gigether-options redundant-parent reth0
      delete interfaces xe-3/0/9 gigether-options redundant-parent reth0
      delete interfaces xe-9/0/9 gigether-options redundant-parent reth0
      [edit]
      root@clustert# show | display set | grep fab
      set groups global interfaces fab1 fabric-options member-interfaces xe-9/0/2
      set groups global interfaces fab0 fabric-options member-interfaces xe-3/0/2

      [edit]
      root@clustert# show | display set | grep reth0
      set chassis cluster redundancy-group 1 ip-monitoring family inet 44.44.44.2 interface reth0.0 secondary-ip-address 44.44.44.3
      set interfaces xe-3/0/0 gigether-options redundant-parent reth0
      set interfaces xe-9/0/0 gigether-options redundant-parent reth0
      set interfaces reth0 vlan-tagging
      set interfaces reth0 redundant-ether-options redundancy-group 1
      set interfaces reth0 unit 0 vlan-id 20
      set interfaces reth0 unit 0 family inet address 44.44.44.1/8

      [edit]
      root@clustert# show | display set | grep reth1
      set interfaces xe-3/0/4 gigether-options redundant-parent reth1
      set interfaces xe-9/0/4 gigether-options redundant-parent reth1
      set interfaces reth1 vlan-tagging
      set interfaces reth1 redundant-ether-options redundancy-group 1
      set interfaces reth1 unit 0 vlan-id 30
      set interfaces reth1 unit 0 family inet address 55.55.55.1/8
  9. Verify that the secondary node is running the upgraded Junos OS release.
    root@cluster> show version node1
    root@cluster> show chassis cluster status
    root@cluster> show chassis fpc pic-status node1
  10. Verify configuration changes by disabling interfaces on the primary node and enabling interfaces on the secondary.
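    For example, using the interface names from the sample configuration in this procedure (yours may differ), disable the RETH child interfaces on the primary node and re-enable them on the secondary node:

    user@host# set interfaces xe-3/0/0 disable
    user@host# set interfaces xe-3/0/4 disable
    user@host# delete interfaces xe-9/0/0 disable
    user@host# delete interfaces xe-9/0/4 disable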
  11. Check the configuration changes.
  12. After verifying, commit the configuration.

    Network traffic fails over to the secondary node.

  13. Verify that the failover was successful by checking the session tables and network traffic on the secondary node.
  14. Upgrade the Junos OS release on the primary node from 12.3X48-D10 to 15.1X49-D10.

    Ignore error messages pertaining to the disconnected cluster.

  15. Power on the primary node.


  16. Perform the hardware upgrade on the primary node by replacing SCB2 with SCB3, IOC2 with IOC3, and the existing midplane with the enhanced midplane.

    Perform the following steps while upgrading the SCB.

    To upgrade the Routing Engine on the primary node:

    1. Before powering off the primary node, copy the configuration information to a USB device.
    2. Replace RE1 with RE2 and upgrade the Junos OS on RE2.
    3. Upload the configuration to RE2 from the USB device.

      For more information about mounting the USB drive on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.

    Perform this step when you upgrade the MPC.

    1. Configure the control port, fabric port, and RETH child ports on the primary node.
      [edit]
      root@clustert# show | display set | grep delete
      delete groups global interfaces fab1
      delete groups global interfaces fab0
      delete interfaces reth0
      delete interfaces reth1
      delete interfaces xe-3/0/5 gigether-options redundant-parent reth0
      delete interfaces xe-9/0/5 gigether-options redundant-parent reth0
      delete interfaces xe-3/0/9 gigether-options redundant-parent reth0
      delete interfaces xe-9/0/9 gigether-options redundant-parent reth0

      [edit]
      root@clustert# show | display set | grep fab
      set groups global interfaces fab1 fabric-options member-interfaces xe-9/0/2
      set groups global interfaces fab0 fabric-options member-interfaces xe-3/0/2

      [edit]
      root@clustert# show | display set | grep reth0
      set chassis cluster redundancy-group 1 ip-monitoring family inet 44.44.44.2 interface reth0.0 secondary-ip-address 44.44.44.3
      set interfaces xe-3/0/0 gigether-options redundant-parent reth0
      set interfaces xe-9/0/0 gigether-options redundant-parent reth0
      set interfaces reth0 vlan-tagging
      set interfaces reth0 redundant-ether-options redundancy-group 1
      set interfaces reth0 unit 0 vlan-id 20
      set interfaces reth0 unit 0 family inet address 44.44.44.1/8

      [edit]
      root@clustert# show | display set | grep reth1
      set interfaces xe-3/0/4 gigether-options redundant-parent reth1
      set interfaces xe-9/0/4 gigether-options redundant-parent reth1
      set interfaces reth1 vlan-tagging
      set interfaces reth1 redundant-ether-options redundancy-group 1
      set interfaces reth1 unit 0 vlan-id 30
      set interfaces reth1 unit 0 family inet address 55.55.55.1/8
  17. Verify that the primary node is running the upgraded Junos OS release, and that the primary node is available to take over network traffic.
    root@cluster> show version node1
    root@cluster> show chassis cluster status
    root@cluster> show chassis fpc pic-status node1
  18. Check the configuration changes.
  19. After verifying, commit the configuration.
  20. Verify configuration changes by disabling interfaces on the secondary node and enabling interfaces on the primary.

    Network traffic fails over to the primary node.

  21. To synchronize the devices within the cluster, reconfigure the control ports and fabric ports with the correct port values on the secondary node.
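    For example, restore the fabric port values used in the sample configuration in this procedure, and set the control ports back to their original values (the fabric member interfaces and control-port FPC numbers shown are assumptions; use the values from your own network):

    user@host# set groups global interfaces fab0 fabric-options member-interfaces xe-3/0/2
    user@host# set groups global interfaces fab1 fabric-options member-interfaces xe-9/0/2
    user@host# set chassis cluster control-ports fpc 0 port 0
    user@host# set chassis cluster control-ports fpc 6 port 0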
  22. Commit the configuration.
  23. Power on the secondary node.


    1. When you power on the secondary node, enable the control ports and fabric ports on the primary node, and reconfigure them with the correct port values.
  24. Commit the configuration.
  25. After the secondary node is up, verify that it synchronizes with the primary node.
  26. Enable SYN bit and TCP sequence number checking for the secondary node.
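    These checks are re-enabled by deleting the statements that disabled them:

    user@host# delete security flow tcp-session no-syn-check
    user@host# delete security flow tcp-session no-sequence-check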
  27. Commit the configuration.
  28. Verify the Redundancy Group (RG) states and their priority.
    root@cluster> show version

    After the secondary node is powered on, issue the following commands:

    root@cluster> show chassis fpc pic-status
    root@cluster> show chassis cluster status
    root@cluster> show security monitoring

    Enable the traffic interfaces on the secondary node.

    root@cluster> show interfaces terse | grep reth0
    xe-3/0/0.0 up up aenet --> reth0.0
    xe-3/0/0.32767 up up aenet --> reth0.32767
    xe-9/0/0.0 up up aenet --> reth0.0
    xe-9/0/0.32767 up up aenet --> reth0.32767
    reth0 up up
    reth0.0 up up inet 44.44.44.1/8
    reth0.32767 up up multiservice
    root@cluster> show interfaces terse | grep reth1
    xe-3/0/4.0 up up aenet --> reth1.0
    xe-3/0/4.32767 up up aenet --> reth1.32767
    xe-9/0/4.0 up up aenet --> reth1.0
    xe-9/0/4.32767 up up aenet --> reth1.32767
    reth1 up up
    reth1.0 up up inet 55.55.55.1/8
    reth1.32767 up up multiservice

For more information about LICU, refer to KB article KB17947 from the Knowledge Base.

In-Service Hardware Upgrade for SRX5K-RE-1800X4 and SRX5K-SCBE or SRX5K-RE-1800X4 and SRX5K-SCB3 in a Chassis Cluster

If your device is part of a chassis cluster, you can use the in-service hardware upgrade (ISHU) procedure to upgrade:

  • SRX5K-SCB with SRX5K-RE-13-20 to SRX5K-SCBE with SRX5K-RE-1800X4

    Note

    Both services gateways must be running the same Junos OS version, 12.3X48.

  • SRX5K-SCBE with SRX5K-RE-1800X4 to SRX5K-SCB3 with SRX5K-RE-1800X4

    Note

    You cannot upgrade SRX5K-SCB with SRX5K-RE-13-20 directly to SRX5K-SCB3 with SRX5K-RE-1800X4.

Note

We strongly recommend that you perform the ISHU during a maintenance window, or during a period of the lowest possible traffic, because the secondary node is not available at that time.

Upgrade the SCB and the Routing Engine at the same time, because only the following combinations are supported:

  • SRX5K-RE-13-20 and SRX5K-SCB

  • SRX5K-RE-1800X4 and SRX5K-SCBE

  • SRX5K-RE-1800X4 and SRX5K-SCB3

Note

While performing the ISHU, in the SRX5800 services gateway, the second SCB can contain a Routing Engine, but the third SCB must not contain a Routing Engine. In the SRX5600 services gateway, the second SCB can contain a Routing Engine.

Ensure that the following prerequisites are completed before you begin the ISHU procedure:

  • Replace all interface cards such as IOCs and Flex IOCs as specified in Table 1.

    Table 1: List of Interface Cards for Upgrade

    Cards to Replace    Replacement Cards for Upgrade
    SRX5K-40GE-SFP      SRX5K-MPC and MICs
    SRX5K-4XGE-XFP      SRX5K-MPC and MICs
    SRX5K-FPC-IOC       SRX5K-MPC and MICs
    SRX5K-RE-13-20      SRX5K-RE-1800X4
    SRX5K-SCB           SRX5K-SCBE
    SRX5K-SCBE          SRX5K-SCB3

  • Verify that both services gateways in the cluster are running the same Junos OS version: Release 12.1X47-D15 or later for SRX5K-SCBE with SRX5K-RE-1800X4, or Release 15.1X49-D10 or later for SRX5K-SCB3 with SRX5K-RE-1800X4. For more information about the cards supported on the services gateways, see Cards Supported on SRX5400, SRX5600, and SRX5800 Services Gateways.

    For more information about unified in-service software upgrade (unified ISSU), see Upgrading Both Devices in a Chassis Cluster Using an ISSU.

To perform an ISHU:

  1. Export the configuration information from the secondary node to a USB or an external storage device.

    For more information about mounting the USB on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.

  2. Power off the secondary node.

    See Powering Off the SRX5400 Services Gateway, Powering Off the SRX5600 Services Gateway, or Powering Off the SRX5800 Services Gateway.

  3. Disconnect all the interface cards from the chassis backplane by pulling them out 6 to 8 inches (leaving the cables in place).
  4. Replace the SRX5K-SCBs with SRX5K-SCBEs, or SRX5K-SCBEs with SRX5K-SCB3s and SRX5K-RE-13-20s with SRX5K-RE-1800X4s based on the chassis specifications.
  5. Power on the secondary node.


  6. After the secondary node reboots as a standalone node, configure the same cluster ID as in the primary node.
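    For example, if the cluster ID configured on the primary node is 1 (the cluster ID shown is an assumption; use your own), issue the following operational command on the secondary node:

    user@host> set chassis cluster cluster-id 1 node 1 reboot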
  7. Install the same Junos OS software image on the secondary node as on the primary node, and reboot.

    Note

    Ensure that the installed Junos OS version is Release 12.1X47-D15 or later for SRX5K-RE-1800X4 and SRX5K-SCBE, or Release 15.1X49-D10 or later for SRX5K-RE-1800X4 and SRX5K-SCB3.

  8. After the secondary node reboots, import all the configuration settings from the USB to the node.

    For more information about mounting the USB on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.

  9. Power off the secondary node.

    See Powering Off the SRX5400 Services Gateway, Powering Off the SRX5600 Services Gateway, or Powering Off the SRX5800 Services Gateway.

  10. Reinsert all the interface cards into the chassis backplane.

    Note

    Ensure that the cards are inserted in the same order as in the primary node, and maintain connectivity between the control link and the fabric link.

  11. Power on the node and issue this command to ensure all the cards are online:
    user@host> show chassis fpc pic-status

    After the node boots, it must join the cluster as a secondary node. To verify, issue the following command:

    admin@cluster> show chassis cluster status
    Note

    The command output must indicate that the node priority is set to a non-zero value, and that the cluster contains a primary node and a secondary node.

  12. Manually initiate redundancy group (RG) failover to the upgraded node so that it becomes the primary node for all RGs.

    Issue the failover command for RG0 and then for RG1, and verify that all the RGs have failed over.
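    Assuming node 1 is the upgraded node, the failover and verification commands are:

    user@host> request chassis cluster failover redundancy-group 0 node 1
    user@host> request chassis cluster failover redundancy-group 1 node 1
    user@host> show chassis cluster status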

  13. Verify the operation of the upgraded secondary node by performing the following checks:
    • Ensure that all FPCs are online.

    • Ensure that all RGs are upgraded and that the node priority is set to a non-zero value.

    • Ensure that the upgraded node receives and transmits data.

    • Ensure that sessions are created and deleted on the upgraded node.
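    These checks map to the following operational commands (the session and traffic checks shown are common choices; adapt them to your own monitoring practice):

    user@host> show chassis fpc pic-status
    user@host> show chassis cluster status
    user@host> show interfaces terse
    user@host> show security flow session summary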

  14. Repeat Steps 1 through 12 for the primary node.
  15. To ensure that the ISHU process is completed successfully, check the status of the cluster by issuing the following command:
    admin@cluster> show chassis cluster status

For detailed information about chassis cluster, see the Chassis Cluster User Guide for SRX Series Devices at www.juniper.net/documentation/.