    Replacing SPCs in an Operating SRX5600 or SRX5800 Services Gateway Chassis Cluster

    If your SRX5600 or SRX5800 Services Gateway is part of an operating chassis cluster, you can replace SPCs with minimal downtime on your network. You can also use this procedure to replace first-generation SPCs with next-generation SRX5K-SPC-4-15-320 SPCs.

    To replace SPCs in a services gateway that is part of a chassis cluster, your installation must meet the following conditions:

    • Each services gateway must have at least one SPC installed. Additional SPCs might be required if the expected number of sessions exceeds the session limit of a single SPC.
    • If the chassis cluster is operating in active-active mode, you must transition it to active-passive mode before using this procedure. You transition the cluster to active-passive mode by making one node primary for all redundancy groups; see the example following this list.
    • To replace first-generation SRX5K-SPC-2-10-40 SPCs, both of the services gateways in the cluster must be running Junos OS Release 11.4R2S1, 12.1R2, or later.
    • To replace next-generation SRX5K-SPC-4-15-320 SPCs, both of the services gateways in the cluster must be running Junos OS Release 12.1X44-D10, or later.
    • You must install SPCs of the same type and in the same slots in both of the services gateways in the cluster. Both services gateways in the cluster must have the same physical configuration of SPCs.
    • If you are replacing an existing first-generation SRX5K-SPC-2-10-40 SPC with a next-generation SRX5K-SPC-4-15-320 SPC, you must install the new SPC so that the next-generation SRX5K-SPC-4-15-320 SPC is the SPC in the lowest-numbered slot. For example, if the chassis has SPCs installed in slots 2 and 3, you must replace the SPC in slot 2 first. This ensures that the central point (CP) functionality is performed by an SRX5K-SPC-4-15-320 SPC.
    • If you are adding next-generation SRX5K-SPC-4-15-320 SPCs to the chassis, you must install the new SPCs so that a next-generation SRX5K-SPC-4-15-320 SPC occupies the original lowest-numbered SPC slot. For example, if the chassis already has two first-generation SPCs installed in slots 2 and 3, you cannot install SRX5K-SPC-4-15-320 SPCs in slots 0 or 1. You must make sure that an SRX5K-SPC-4-15-320 SPC is installed in the slot providing central point (CP) functionality (in this case, slot 2). This ensures that the CP functionality is performed by an SRX5K-SPC-4-15-320 SPC.
    • If you are installing next-generation SRX5K-SPC-4-15-320 SPCs in the services gateways, both services gateways must already be equipped with high-capacity power supplies and fan trays. See Upgrading an SRX5600 Services Gateway from Standard-Capacity to High-Capacity Power Supplies or Upgrading an SRX5800 Services Gateway from Standard-Capacity to High-Capacity Power Supplies for more information.
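
    For example, the following commands make node 0 primary for all redundancy groups, which transitions the cluster to active-passive mode. The redundancy group numbers shown here are illustrative; issue the command once for each redundancy group configured on your cluster:

      admin@cluster> request chassis cluster failover redundancy-group 0 node 0
      admin@cluster> request chassis cluster failover redundancy-group 1 node 0
      admin@cluster> request chassis cluster failover redundancy-group 2 node 0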

    If your installation does not meet these criteria, use the procedure in Installing an SRX5600 Services Gateway SPC or Installing an SRX5800 Services Gateway SPC to install SPCs in your services gateway.

    Note: During this installation procedure, you must shut down both devices, one at a time. During the period when one device is shut down, the remaining device operates without a backup. If that remaining device fails for any reason, you incur network downtime until you restart at least one of the devices.

    To replace SPCs in an SRX5600 or SRX5800 Services Gateway chassis cluster with minimal network downtime:

    1. Use the console port on the Routing Engine to establish a CLI session with one of the devices in the cluster.
    2. Use the show chassis cluster status command to determine which services gateway is currently primary, and which services gateway is secondary, within the cluster.

      In the example below, all redundancy groups are primary on node 0, and secondary on node 1:

      admin@cluster> show chassis cluster status 
      Cluster ID: 1 
      Node                  Priority          Status    Preempt  Manual failover
      
      Redundancy group: 0 , Failover count: 5
          node0                   1           primary        no       no  
          node1                   100         secondary      no       no  
      
      Redundancy group: 1 , Failover count: 1
          node0                   200         primary        no       no  
          node1                   100         secondary      no       no  
      
      Redundancy group: 2 , Failover count: 1
          node0                   200         primary        no       no  
          node1                   100         secondary      no       no  
      
      Redundancy group: 3 , Failover count: 1
          node0                   100         primary        no       no  
          node1                   200         secondary      no       no  
      
      Redundancy group: 4 , Failover count: 1
          node0                   200         primary        no       no  
          node1                   100         secondary      no       no  
    3. If the device with which you established the CLI session in Step 1 is not the secondary node in the cluster, use the console port on the device that is the secondary node to establish a CLI session.
    4. In the CLI session for the secondary services gateway, use the request system power-off command to shut down the services gateway.
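
      For example, to power off the secondary node from its CLI session (the hostname in the prompt is illustrative):

      admin@cluster> request system power-off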
    5. Wait for the secondary services gateway to completely shut down.
    6. If you are replacing first-generation SPCs with SRX5K-SPC-4-15-320 SPCs, use the procedure in Removing an SRX5600 Services Gateway SPC or Removing an SRX5800 Services Gateway SPC to remove the SPCs you are replacing from the powered-off services gateway.
    7. Install the new SPC or SPCs in the powered-off services gateway using the procedure in Installing an SRX5600 Services Gateway SPC or Installing an SRX5800 Services Gateway SPC.
    8. Power on the secondary services gateway and wait for it to finish starting.
    9. Reestablish the CLI session with the secondary node device.
    10. Use the show chassis fpc pic-status command to make sure that all of the cards in the secondary node chassis are back online.

      In the example below, the second column shows that all of the cards are online. This example is for an SRX5800 Services Gateway; for other devices the output is similar.

      admin@cluster> show chassis fpc pic-status 
      node0:
      --------------------------------------------------------------------------
      Slot 0   Online       SRX5k IOC II
        PIC 0  Online       2x 40GE QSFP+
      Slot 1   Online       SRX5k SPC II
        PIC 0  Online       SPU Cp
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 2   Online       SRX5k IOC II
        PIC 0  Online       2x 40GE QSFP+
        PIC 2  Online       10x 10GE SFP+
      Slot 3   Online       SRX5k IOC II
        PIC 0  Online       1x 100GE CFP
        PIC 2  Online       10x 10GE SFP+
      Slot 4   Online       SRX5k SPC II
        PIC 0  Online       SPU Flow
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 5   Online       SRX5k SPC II
        PIC 0  Online       SPU Flow
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 7   Online       SRX5k SPC II
        PIC 0  Online       SPU Flow
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 8   Online       SRX5k SPC II
        PIC 0  Online       SPU Flow
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 9   Online       SRX5k IOC3 2CGE+4XGE
        PIC 0  Online       2x 10GE SFP+
        PIC 1  Online       1x 100GE CFP2
        PIC 2  Online       2x 10GE SFP+
        PIC 3  Online       1x 100GE CFP2
      
      node1:
      --------------------------------------------------------------------------
      Slot 0   Online       SRX5k IOC II
        PIC 0  Online       2x 40GE QSFP+
        PIC 2  Online       10x 10GE SFP+
      Slot 1   Online       SRX5k SPC II
        PIC 0  Online       SPU Cp
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 2   Online       SRX5k IOC II
        PIC 0  Online       2x 40GE QSFP+
        PIC 2  Online       10x 10GE SFP+
      Slot 3   Online       SRX5k IOC II
        PIC 0  Online       1x 100GE CFP
        PIC 2  Online       10x 10GE SFP+
      Slot 4   Online       SRX5k SPC II
        PIC 0  Online       SPU Flow
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 5   Online       SRX5k SPC II
        PIC 0  Online       SPU Flow
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 7   Online       SRX5k SPC II
        PIC 0  Online       SPU Flow
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 8   Online       SRX5k SPC II
        PIC 0  Online       SPU Flow
        PIC 1  Online       SPU Flow
        PIC 2  Online       SPU Flow
        PIC 3  Online       SPU Flow
      Slot 9   Online       SRX5k IOC3 2CGE+4XGE
        PIC 0  Online       2x 10GE SFP+
        PIC 1  Online       1x 100GE CFP2
        PIC 2  Online       2x 10GE SFP+
        PIC 3  Online       1x 100GE CFP2
      
    11. Use the show chassis cluster status command to make sure that the priority for all redundancy groups is greater than zero.
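
      For example (output abbreviated here; the priorities shown follow the example in Step 2 and are illustrative, but every priority should be nonzero):

      admin@cluster> show chassis cluster status
      Cluster ID: 1
      Node                  Priority          Status    Preempt  Manual failover

      Redundancy group: 0 , Failover count: 5
          node0                   1           primary        no       no
          node1                   100         secondary      no       no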
    12. Use the console port on the device that is the primary node to establish a CLI session.
    13. In the CLI session for the primary node device, use the request chassis cluster failover command to fail over all the redundancy groups.

      For example:

      admin@cluster> request chassis cluster failover redundancy-group 0 node 1
      admin@cluster> request chassis cluster failover redundancy-group 1 node 1
      admin@cluster> request chassis cluster failover redundancy-group 2 node 1
      admin@cluster> request chassis cluster failover redundancy-group 3 node 1
      admin@cluster> request chassis cluster failover redundancy-group 4 node 1
    14. In the CLI session for the device that was the primary node (the device you connected to in Step 12), use the request system power-off command to shut it down. The other services gateway, which is now primary for all redundancy groups, carries the traffic while this device is powered off.
    15. Repeat Step 6 and Step 7 to replace or install SPCs in the powered-off services gateway.
    16. Power on the services gateway and wait for it to finish starting.
    17. Use the show chassis fpc pic-status command on each node to confirm that all cards are online and both services gateways are operating correctly.
    18. Use the show chassis cluster status command to make sure that the priority for all redundancy groups is greater than zero.

    Modified: 2016-06-05