Installing SPCs in an Operating SRX3400 Services Gateway Chassis Cluster

 

If your SRX3400 Services Gateway is part of a chassis cluster, you can install additional SPCs in the services gateways in the cluster without incurring downtime on your network. This process is sometimes called an in-service hardware upgrade (ISHU).

To perform this type of installation, the following conditions must be met:

  • If the chassis cluster is operating in active-active mode, you must transition it to active-passive mode before using this procedure. You transition the cluster to active-passive mode by making one node primary for all redundancy groups, as shown in the example following this list.

  • Both of the services gateways in the cluster must be running Junos OS Release 11.4R2S1, 12.1R2, or later.

  • You must install SPCs of the same type in both of the services gateways in the cluster.

  • You must install the SPCs in the same slots in each chassis.

  • You must install the new SPCs so that none of them occupies a lower-numbered slot than the SPCs already in the chassis. For example, if the chassis already has SPCs installed in slots 2 and 3, you cannot use this procedure to install additional SPCs in slots 0 or 1.
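For example, to transition a cluster to active-passive mode with node 0 as the primary node, you can fail each redundancy group over to node 0 from the CLI of either node. The redundancy group number below is illustrative; repeat the command for each redundancy group configured on the cluster:

    admin@device> request chassis cluster failover redundancy-group 1 node 0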

If your installation does not meet these criteria, use the procedure in Installing CFM Cards in the SRX3400 Services Gateway to install SPCs in your services gateway.

Note

During this installation procedure, you must shut down both devices, one at a time. During the period when one device is shut down, the remaining device operates without a backup. If that remaining device fails for any reason, you incur network downtime until you restart at least one of the devices.

Note

If the services gateway has only one SPC installed, you must make sure that the full-cp-key license that enables large central point mode is not installed. If the license is installed when you upgrade from a single SPC to multiple SPCs, it changes the mode of the SPC in the lowest-numbered slot from combo mode to large central point mode. After that change, the runtime objects (RTOs) do not synchronize properly between the two devices in the cluster, and you incur network downtime while the devices discard their existing RTOs and rebuild their RTO tables.

To install SPCs in an SRX3400 Services Gateway cluster without incurring downtime:

  1. Use the console port on the Routing Engine to establish a command-line interface (CLI) session with one of the devices in the cluster.
  2. Use the show chassis cluster status command to determine which services gateway is currently primary, and which services gateway is secondary, within the cluster.

    In the representative output below, all redundancy groups are primary on node 0 and secondary on node 1 (the cluster ID, priority values, and number of redundancy groups depend on your configuration):
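    admin@device> show chassis cluster status
    Cluster ID: 1
    Node                  Priority          Status    Preempt  Manual failover

    Redundancy group: 0 , Failover count: 0
        node0                   200         primary        no       no
        node1                   100         secondary      no       no

    Redundancy group: 1 , Failover count: 0
        node0                   200         primary        no       no
        node1                   100         secondary      no       no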

  3. If the device with which you established the CLI session in Step 1 is not the secondary node in the cluster, use the console port on the device that is the secondary node to establish a CLI session.
  4. If the services gateway has only one SPC installed in it, in the CLI session for the secondary services gateway, use the show system license command to make sure the device does not have the full-cp-key license installed:
    admin@node1_device> show system license
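    If the license is present, the output includes a license entry similar to the following. The license identifier shown here is a placeholder; record the identifier that appears in your own output:

      Licenses installed:
        License identifier: JUNOS123456
        License version: 2
        Features:
          full-cp-key - Large central point mode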
  5. If the services gateway has only one SPC installed and the full-cp-key license is also installed, record the identifier of the full-cp-key license from the command output of the previous step. Then use the following command to remove the license:
    admin@node1_device> request system license delete identifier
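    For example, if the identifier recorded in the previous step were JUNOS123456 (a placeholder; substitute the identifier from your own output):

    admin@node1_device> request system license delete JUNOS123456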
  6. In the CLI session for the secondary services gateway, use the request system power-off command to shut down the services gateway.
  7. Wait for the secondary services gateway to completely shut down.
  8. Install the new SPC or SPCs in the powered-off services gateway using the procedure in Installing CFM Cards in the SRX3400 Services Gateway.
  9. Power on the secondary services gateway and wait for it to finish starting.
  10. Reestablish the CLI session with the secondary node device.
  11. Use the show chassis fpc pic-status command to make sure that all of the cards in the secondary node chassis are back online.

    In the representative output below, the second column shows that all of the cards are online; the card names and slot numbers depend on your hardware configuration. This example is for an SRX3400 Services Gateway; for other devices the output is similar.
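    admin@node1_device> show chassis fpc pic-status
    node1:
    --------------------------------------------------------------------------
    Slot 0   Online       SRX3k SFB 12GE
      PIC 0  Online       8x 1GE-TX 4x 1GE-SFP
    Slot 2   Online       SRX3k SPC
      PIC 0  Online       SPU Cp-Flow
    Slot 3   Online       SRX3k SPC
      PIC 0  Online       SPU Flow
    Slot 4   Online       SRX3k SPC
      PIC 0  Online       SPU Flow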

  12. Use the show chassis cluster status command to make sure that the priority for all redundancy groups is greater than zero.
  13. Use the console port on the device that is the primary node to establish a CLI session.
  14. In the CLI session for the primary node device, use the request chassis cluster failover command to fail over each redundancy group that has an ID number greater than zero to the other node (the services gateway you just upgraded).

    For example, assuming that node 1 is the services gateway you just upgraded and that redundancy group 1 is configured on the cluster (the hostname and group number are illustrative; repeat the command for each redundancy group with an ID greater than zero):
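    admin@node0_device> request chassis cluster failover redundancy-group 1 node 1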

  15. If the services gateway has only one SPC in it and the full-cp-key license is installed, remove the full-cp-key license as described in Step 5.
  16. In the CLI session for the primary node device, use the request system power-off command to shut down the services gateway. This action causes redundancy group 0 to fail over onto the other services gateway, making it the active node in the cluster.
  17. Install the new SPC or SPCs in the powered-off services gateway using the procedure in Installing CFM Cards in the SRX3400 Services Gateway.
  18. Power on the services gateway and wait for it to finish starting.
  19. Use the show chassis fpc pic-status command on each node to confirm that all cards are online and both services gateways are operating correctly.
  20. Use the show chassis cluster status command to make sure that the priority for all redundancy groups is greater than zero.