Overview of the Rolling Restart Upgrade Method

This section describes how to perform a rolling restart upgrade of your SSR cluster to release 8.6.0. By following this procedure, you should not experience any downtime in your cluster.

Note

Although you should not experience any service interruption, we recommend that you schedule a maintenance window for this procedure. To further mitigate the risk of downtime in your cluster, we recommend using a transition server. For instructions on creating and using a transition server, see Using a Transition Server to Mitigate Downtime While Upgrading Your Cluster.

Note

The example procedures in this section assume your cluster consists of one SM node, one S node, two M nodes, and two D nodes, as follows:

  • First M node, bng-mars.englab.juniper.net (10.212.10.68)

  • Second M node, bng-sbr-perf1 (10.212.10.66)

  • SM node, sbr1.englab.juniper.net (10.212.10.213)

  • First D node, bng-sbr-perfm3000-3 (10.212.10.188)

  • Second D node, bng-sbr-perf2 (10.212.10.67)

  • S node, bng-sbrha-1 (10.212.10.65)

Summary of the Rolling Restart Upgrade Method

During this upgrade procedure, you will need to:

  1. Stop the SSR management process on the M node that you are planning to upgrade, upgrade the M node (install and configure the new software), and restart the SSR management process on the M node. Repeat this process on each M node, one at a time.

  2. Stop the SSR management process and RADIUS process on the SM node, upgrade the SM node (install and configure the new software), and restart the SSR management process and RADIUS process on the SM node. Repeat this process on each SM node, one at a time.

  3. Stop the RADIUS process on the S node, upgrade the S node (install and configure the new software), and restart the RADIUS process on the S node.

  4. Stop the SSR data process on the D node, install the new software on the D node, configure the new software, and restart the SSR data process on the D node. Repeat this process on each D node, one at a time.
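The per-node cycle repeated in the steps above can be sketched as a shell dry run. The sbrd subcommands, the SBR_DIR path, and the install_new_software.sh placeholder are assumptions based on a typical SBR Carrier layout, not a verified procedure; check them against your installation guide before executing anything. The sketch only echoes the planned commands unless RUN is cleared.

```shell
#!/bin/sh
# Hedged sketch of the rolling-restart cycle for one M node (step 1).
# SBR_DIR, the sbrd subcommands, and install_new_software.sh are
# assumptions; verify them against your own SBR Carrier installation.
SBR_DIR=${SBR_DIR:-/opt/JNPRsbr/radius}
RUN=${RUN:-echo}   # dry run by default; set RUN= (empty) to execute

upgrade_m_node() {
  $RUN "$SBR_DIR/sbrd" stop ssr    # stop the SSR management process
  $RUN ./install_new_software.sh   # hypothetical install/configure step
  $RUN "$SBR_DIR/sbrd" start ssr   # restart the SSR management process
  $RUN "$SBR_DIR/sbrd" status      # confirm the node rejoined the cluster
}

upgrade_m_node
```

On an SM node the same cycle would also stop and restart the RADIUS process, on an S node it would operate on the RADIUS process, and on a D node it would operate on the SSR data process instead, one node at a time in every case.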

Introduction and Requirements

Using these instructions, you can upgrade your working SSR cluster from any of the specified software releases without interruption of service. However, because the cluster momentarily runs with one node fewer than normally required, we advise scheduling the maintenance window outside busy hours.

Note

Skipping versions when upgrading the cluster using the rolling restart method is not supported. Because SBR Carrier 8.0.0 uses MySQL 5.5.37 while releases 8.4.0, 8.4.1, and 8.5.0 use MySQL 5.7.18 on Linux, we strongly recommend that you not use the rolling restart method to upgrade a Linux cluster directly from release 8.0.0 to 8.4.x or later. Similarly, on Solaris, because SBR Carrier 8.0.0 uses MySQL 5.5.37 and 8.6.0 uses MySQL 5.7.25, we strongly recommend that you not use the rolling restart method to upgrade directly from release 8.0.0 to 8.6.0 or later. Instead, use the backup, destroy, and re-create method to upgrade, or perform a clean install.

MySQL Version

  • 5.5.37

  • 5.6.22

  • 5.6.28

  • 5.6.29

  • Linux: 5.7.18; Solaris: 5.6.36

  • 5.7.25

  • 8.0.22
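Before choosing a method, you can check which MySQL version the cluster is currently running. A minimal sketch, assuming a typical /opt/JNPRmysql install path (an assumption, adjust for your site); the check_method heuristic simply flags the 5.5 series called out in the note above:

```shell
#!/bin/sh
# check_method VERSION: advise on an upgrade method for a cluster whose
# bundled MySQL is VERSION; heuristic derived from the note above.
check_method() {
  series=$(echo "$1" | cut -d. -f1-2)
  if [ "$series" = "5.5" ]; then
    echo "backup-destroy-recreate"     # 5.5.x to 5.7.x: no rolling restart
  else
    echo "rolling-restart-candidate"   # still verify against the release notes
  fi
}

# The mysql path below is an assumption from a typical SSR layout.
ver=$(/opt/JNPRmysql/install/mysql/bin/mysql --version 2>/dev/null \
      | sed 's/.*Distrib \([0-9.]*\).*/\1/')
check_method "${ver:-5.5.37}"   # falls back to an example value off-cluster
```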

Note

Upgrades from SBR Carrier releases earlier than 7.5.0 have not been tested by Juniper Networks.

Note

The entire upgrade process takes approximately three hours to complete. The current number of concurrent sessions in SSR and the current load on the data nodes contribute to the time it takes to complete this upgrade.

To perform the rolling restart upgrade, you will need the following:

  • The original cluster configuration files from the previous SBR Carrier installation (the contents of /opt/JNPRshare).

  • The SBR Carrier cluster distribution files, for example: sbr-cl-8.6.0.R-1.sparc.tgz.

  • At least 10 GB of disk space on each machine running an SBRC front-end (S or SM node).

  • At least 3 GB of disk space on each machine running a data node (D node).
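The disk-space requirements above can be verified with a small pre-flight check on each machine. A sketch assuming the software lands under /opt (an assumption; adjust the path and threshold for each node type):

```shell
#!/bin/sh
# min_gb_free DIR GB: succeed when DIR has at least GB gigabytes free.
min_gb_free() {
  avail_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
  [ "${avail_kb:-0}" -ge $(($2 * 1024 * 1024)) ]
}

# Front-end (S or SM) nodes need 10 GB; data (D) nodes need 3 GB.
min_gb_free /opt 10 || echo "front-end node: less than 10 GB free under /opt"
min_gb_free /opt 3  || echo "data node: less than 3 GB free under /opt"
```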