Adding a Data Expansion Kit to an Existing Cluster

 

Adding the two new data nodes in the Data Expansion Kit to an existing cluster requires deleting and re-creating the session database for the cluster after all the data nodes are up and running.

Because the process of updating the existing cluster topology and re-creating the session database may result in a longer downtime than desired, there are two approaches you can take to minimize the downtime:

  • Use a transition server (temporary cluster)—One of the sm nodes is borrowed from the existing cluster and converted to a transition server operating a temporary cluster. All traffic is routed to the transition server while the existing cluster is updated to include the new data nodes. After the updated cluster is up and running with the new data nodes, traffic is switched back to it, and the transition server is unconfigured as a temporary cluster, reconfigured as an sm node, and reincorporated into the updated cluster. We use this method in this example procedure. See Using a Transition Server When Adding Data Nodes to an Existing Cluster.

  • Non-transition server—This approach results in longer downtime: the session database is destroyed, and the entire cluster is updated and reconfigured with the new topology before the session database is re-created. See Non-Transition Server Method—Terminating Connections.

Note

Although both of these methods minimize downtime as much as possible, they both require the cluster to be reinitialized, which necessitates destroying and re-creating the session database. The difference between the two approaches is that using a transition server allows SBR Carrier traffic to be processed while the rest of the cluster is updated; this is not possible with the non-transition server method. See Non-Transition Server Method—Terminating Connections.

Requirements for Selecting a Transition Server in Your Environment

Use the following criteria to select a transition server:

  • The server must meet all the Release 8.6.0 hardware and software requirements listed in Before You Install Software.

  • If the server is part of an existing cluster:

    • We recommend using the most powerful server available (the one with the most RAM and the greatest number of processors) because it will be processing a heavier-than-normal load during the transition.

    • We recommend using an SBR or management node, rather than a data node, both to reduce front-end processing on the existing cluster and to maintain data redundancy.

  • If you intend to use the server as the transition server and then reincorporate it into the updated cluster, it must be a combined SBR/management node host.

  • If you use Centralized Configuration Management to replicate SBR Carrier node configurations among a group of like nodes, the transition server cannot take on the role of primary CCM server in the updated cluster because it will not be the first SBR node to be configured.

Using a Transition Server When Adding Data Nodes to an Existing Cluster

In general, to use a transition server to add a Data Expansion Kit to an existing cluster:

  1. Create the transition server and switch all traffic to it.

    See Creating the Transition Server.

  2. Create the updated cluster definition files that include the two new data nodes.

    See Creating the Updated Cluster Definition Files.

  3. Install the SBR Carrier software on the host machines for the new data nodes.

    See Installing the SBR Carrier Software on the Two New Data Node Host Machines.

  4. Distribute the new cluster definition files to the existing cluster nodes and the new data nodes.

    See Distributing the Updated Cluster Definition Files to the Existing Nodes.

  5. Destroy the session database on the existing cluster.

    See Destroying the Session Database on the Original Cluster.

  6. Configure each node in the expanded cluster with the updated cluster definition files.

    See Configuring the Nodes in the Expanded Cluster with the Updated Cluster Definition Files.

  7. Create the session database and IP pools for the expanded cluster.

    See Creating the Session Database and IP Pools on the Expanded Cluster.

  8. Switch the traffic back to the updated, expanded cluster.

    See Removing the Transition Server from Service.

  9. Unconfigure the transition server, rebuild it, and reincorporate it into the expanded cluster.

    See Unconfiguring and Rebuilding the Transition Server.

Existing Cluster Configuration for This Example Procedure

The following procedure adds one Data Expansion Kit to an existing cluster.

The following designations are used throughout the examples in this section:

sm = Hardware has SBR node and Management node.

s = Hardware has only SBR node.

m = Hardware has only Management node.

d = Hardware has Data node.

2sm, 2d = Two SBR/Management nodes and two Data nodes.

2s, 2sm, 2d = Two SBR nodes, two SBR/Management nodes, and two Data nodes.

Display the existing cluster:

The existing cluster includes two full SBR Carrier licenses and a license for the Starter Kit, resulting in a configuration with two sm nodes and two d nodes. For the purposes of this procedure, the two existing sm nodes are identified as sm1 and sm2, as follows:
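
The exact output of ./sbrd status varies by installation. A simplified, illustrative listing for this {0s,2sm,0m,2d} cluster might look like the following; the format follows the MySQL Cluster-style status lines quoted later in this chapter, the data node IDs and addresses are placeholder assumptions, the sm1 and sm2 addresses match the lines quoted later in this procedure, and API/SQL node entries are omitted:

    [ndbd(NDB)]     2 node(s)
    id=10   @172.28.84.163  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0, *)    <-- d1 (address assumed)
    id=11   @172.28.84.164  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0)       <-- d2 (address assumed)

    [ndb_mgmd(MGM)] 2 node(s)
    id=1    @172.28.84.36   (mysql-5.7.25 ndb-7.6.9)                     <-- sm1
    id=2    @172.28.84.166  (mysql-5.7.25 ndb-7.6.9)                     <-- sm2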

In this example, we borrow the sm2 node and convert it to a transition server operating as a temporary cluster.

Creating the Transition Server

In this example, we borrow the sm2 node and convert it to a transition server operating as a temporary cluster. To set up the transition server to temporarily take the place of the existing cluster, you need to prepare the server, install software, and configure the database.

The SBRC temporary cluster, also termed the transition server, is an exceptional node in the sense that it runs all processes on one machine. The transition server is assigned the node type smdt. As in a regular cluster, you must run CreateDB.sh and configure the IP address pool(s) manually on the transition server.
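
At a high level, the conversion performed in the next two sections boils down to the following command sequence, shown here only as a summary of the steps that follow (paths are the chapter defaults):

    # As root on the borrowed sm node (sm2 in this example):
    cd /opt/JNPRsbr/radius
    ./sbrd stop radius        # stop the RADIUS process
    ./sbrd stop ssr           # stop the SSR process
    ./sbrd status             # confirm the node is no longer connected to the cluster
    ./configure               # choose option 5, Create Temporary Cluster (node type smdt)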

Stopping the Processes on the Target Transition Server

  1. Log in to the server that you are reconfiguring to act as the transition server (in this example sm2) as root.

  2. Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).

    Example: cd /opt/JNPRsbr/radius

  3. Stop the RADIUS and SSR processes on the node.

    Execute:

    ./sbrd stop radius

    ./sbrd stop ssr

  4. Check the status of the node and confirm it is not connected to the cluster:

    ./sbrd status

    In this example, notice that the sm2 node is not connected, as indicated by the line: id=2 (not connected, accepting connect from 172.28.84.166).

Configuring the Software on the Transition Server as a Temporary Cluster

Now that the processes are stopped on the machine we are reconfiguring as the transition server, we need to reconfigure it as a temporary cluster. At this point, you are still logged in to the target machine as root (in this case the original sm2 node).

  1. Execute the configure script to reconfigure the machine as a temporary cluster:

    Execute:

    ./configure

  2. From the menu of configuration tasks, enter 5 to specify Create Temporary Cluster.

  3. Enter the exact name of the existing cluster. In this example: cambridge.

  4. Enter the SSR Starter Kit license number, the license number for one SBR node, and, if you are using one of the optional SBR Carrier modules, the license number for it.

    While migrating to the updated cluster, you can use the same licenses for the transition server as for the updated cluster.

  5. Enter passwords for two internal accounts. The password input is not echoed to the screen; the fields appear to be blank.

    The system generates the required configuration files and prompts you to view, accept, or reject them.

  6. Enter a to accept them and continue or v to view them.

    Caution

    We recommend that you enter r to reject them only if you made a serious error when providing information. We recommend that you not edit these files.

    You are prompted with a warning asking whether you want to apply the changes.

  7. Enter y to continue.

  8. For the remainder of the prompts, simply press Enter to configure the transition server with the existing configuration.

  9. Enter q to quit.

  10. Notice the server configuration in the line:

    (smdt) indicates the machine is configured as an s,m,d temporary cluster.
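
    The configuration line itself is not reproduced here; based on the analogous line shown later in this chapter for a configured data node, it looks similar to the following, where the hostname is a placeholder:

    node <transition-server-hostname>(smdt) is configured and processes are down, may be reconfigured if desired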

Configuring and Starting the Transition Server

Now that the software is configured, you need to create the session database and the IP pools and ranges on the transition server. All cluster traffic will ultimately be switched to this single transition server temporarily, while you take the other nodes in the existing cluster down and upgrade and reconfigure them. So, you need to configure the temporary transition server to match the existing cluster configuration.

  1. Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default) and start the SSR process on the transition server.

    Example: cd /opt/JNPRsbr/radius

  2. As root, execute:

    ./sbrd start ssr

    Status messages are displayed as the programs start:

  3. Verify the process started without error:

    As root, execute:

    ./sbrd status

  4. Create the session database.

    If you need to customize the sessions database to match your existing cluster session database, see Customizing the SSR Database Current Sessions Table. Any customization must be done prior to running the CreateDB.sh script.

    1. Log in as hadm.

    2. Navigate to the hadm user's home directory, /opt/JNPRhadm/ by default.

      Execute:

      ./CreateDB.sh

  5. As hadm, set up IP address pools and ranges using the SSR Administration Scripts. The IP address range should be separate from the in-use pools on the existing and upgraded cluster to avoid overlaps. If the old and transitional pools overlap, then during the transition the two clusters may give the same IP address to two different users. See the section Session State Register Administration in the SBR Carrier Administration and Configuration Guide for more information.

  6. Start the RADIUS process:

    As root, execute:

    ./sbrd start radius

    Status messages are displayed as the programs start:

  7. Verify the process started without error:

    As root, execute:

    ./sbrd status

  8. Finish configuring the transition server using the Web GUI. Follow the steps outlined in Basic SBR Carrier Node Configuration. For complete details, see the SBR Carrier Administration and Configuration Guide.
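
To recap this section, bringing the temporary cluster into service looks roughly like the following sketch. The pool name and address ranges are invented for illustration only; the point is that the transition pool must not overlap any range in use by the existing or expanded cluster:

    # As root in /opt/JNPRsbr/radius on the transition server:
    ./sbrd start ssr
    ./sbrd status                 # verify the SSR process started without error

    # As hadm in /opt/JNPRhadm:
    ./CreateDB.sh                 # create the session database
    # Then create IP address pools and ranges with the SSR administration scripts, for
    # example a pool "transition-pool" covering 10.30.4.1-10.30.4.254 while the existing
    # cluster keeps using 10.30.0.1-10.30.3.254 (both ranges are assumptions).

    # As root in /opt/JNPRsbr/radius:
    ./sbrd start radius
    ./sbrd status                 # verify the RADIUS process started without error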

Switching Traffic to the Transition Server

After the transition server is set up and tested, and a working database created, reconfigure the site’s routers to gradually direct traffic to the transition server instead of to the existing cluster’s SBR servers.

Creating the Updated Cluster Definition Files

The next phase of the process is to create the new cluster definition files to include the two new data nodes from the Data Expansion Kit. At this point in the process, the existing cluster status shows that the sm2 node processes are not running and the node is not connected, as indicated by id=2 (not connected, accepting connect from 172.28.84.166):

Start by creating the updated cluster definition files on the sm1 node:

  1. As root, on the sm1 node, navigate to the radius/install subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).

    Example: cd /opt/JNPRsbr/radius/install

  2. Run the configure script:

    Execute:

    ./configure

    Example:

    root@sbrha-4:/opt/JNPRsbr/radius/install> ./configure

  3. From the menu of configuration tasks, enter 2 to specify Generate Cluster Definition.

    You are prompted to enter the name of the cluster.

  4. Press Enter to use the current cluster name.

    You are prompted either to create a new cluster or update an existing cluster definition.

  5. Enter u to update the existing cluster definition.

  6. Because we are not adding a Management Expansion Kit, press Enter to skip adding the license.

  7. Enter the license number for the Data Expansion Kit and press Enter.

  8. When prompted to enter the number of SBR nodes, press Enter to keep the existing configuration.

  9. Notice the updated cluster configuration includes four data nodes as indicated by: Updating cluster cambridge{0s,2sm,0m,4d}.

    Enter y to continue.

  10. When prompted, enter the node names and IP addresses for the two new data nodes.

    Press Enter when prompted to Enter node type (d) [d]: and when prompted to Enter DATA node ID.

    The system generates the updated cluster definition files.

  11. Verify the proper configuration by examining the line: Generated configuration is {0s,2sm,0m,4d} of {0s,2sm,0m,4d} showing the four data nodes.

    When prompted to, enter a to accept the updated configuration.

  12. When the main configuration menu is displayed, enter q to quit.
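
Condensed, the interactive session on sm1 resembles the following transcript. The responses shown in angle brackets, and the example data node names and addresses, are placeholders:

    cd /opt/JNPRsbr/radius/install
    ./configure
      2                            # Generate Cluster Definition
      <Enter>                      # keep the current cluster name (cambridge)
      u                            # update the existing cluster definition
      <Enter>                      # skip the Management Expansion Kit license
      <Data Expansion Kit license>
      <Enter>                      # keep the existing number of SBR nodes
      y                            # accept "Updating cluster cambridge{0s,2sm,0m,4d}"
      <name and IP address of the first new data node, for example sbrha-5 / 172.28.84.167>
      <name and IP address of the second new data node, for example sbrha-6 / 172.28.84.168>
      a                            # accept "Generated configuration is {0s,2sm,0m,4d} of {0s,2sm,0m,4d}"
      q                            # quit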

Installing the SBR Carrier Software on the Two New Data Node Host Machines

At this point in the process, the updated cluster definition files have been generated and reside on the sm1 node only. Next you need to install the SBR Carrier software on each of the machines that you want to host the two new data nodes. After the SBR Carrier software is installed on these machines, you distribute the updated cluster definition files to all the other nodes in the original cluster.

This procedure describes how to unpack and install the SBR Carrier software on the host machines for the new data nodes.

  1. Log in to the machine as root.

  2. Copy the Steel-Belted Radius Carrier installation files from their download location to the machine. Make sure to copy them to a local or remote hard disk partition that is readable by root.

    This example copies the files from a download directory to the /tmp/sbr directory.

    Execute:

    mkdir -p /opt/tmp

    cp -pR /tmp/sbr/solaris/* /opt/tmp/

  3. Extract the SBR Carrier installation package.

    For 64-bit Solaris, execute:

    cd /tmp/sbr

    ls -ltr

    Execute:

    gunzip sbr-cl-8.6.0.R-1.sparcv9.tgz

    tar xf sbr-cl-8.6.0.R-1.sparcv9.tar

  4. Verify that the extraction worked and confirm the name of the package file.

    For 64-bit Solaris, execute:

    ls -ltr

  5. Install the package.

    Execute:

    pkgadd -d /tmp/sbr

  6. Type all and press Enter.

    The script resumes.

  7. Confirm the installation directory.

    Depending on the system configuration, you are prompted whether to create the /opt/JNPRsbr directory if it does not exist, whether to overwrite an already extracted package, or to answer any of several other questions.

  8. Answer the question appropriately (or change the extraction path if necessary) so that the script can proceed.

    To accept the default directory as a target, enter y.

    The script resumes.

  9. Enter y to confirm that you want to continue to install the package.

  10. Repeat this process on the second new data node.
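
As a one-screen recap of this section, the commands run as root on each new data node host (the 64-bit Solaris example and paths from the steps above) are:

    cd /tmp/sbr
    gunzip sbr-cl-8.6.0.R-1.sparcv9.tgz
    tar xf sbr-cl-8.6.0.R-1.sparcv9.tar
    ls -ltr                       # confirm the package was extracted
    pkgadd -d /tmp/sbr            # enter "all", then accept /opt/JNPRsbr as the target directory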

Distributing the Updated Cluster Definition Files to the Existing Nodes

Now that the two machines hosting the new data nodes have the SBR Carrier software installed, you can distribute the updated cluster definition files to the new nodes and the other nodes in the original cluster.

On both the existing nodes in the original cluster and the new data nodes, create a copy of the new cluster definition files. This process does not invoke the updated cluster definition files, but makes them available to the configure script later in the workflow.

To distribute the new cluster definition files:

  1. Log in to each node (existing and new) as hadm.

  2. Change directories to the install directory.

    (On new nodes, the entire path may not exist because the <cluster name> portion of the path was not created when you prepared the new machine, so you may need to create it.) See Creating Share Directories.

    Execute:

    cd /opt/JNPRshare/install/<cluster_name>

    For example:

    cd /opt/JNPRshare/install/cambridge

  3. Use FTP binary mode to connect to the node host (in this example, sm1) where you created the new cluster definition files.

  4. Execute the get command to transfer the configure.<cluster_name>.tar file.

    For example:

    bin

    get /opt/JNPRsbr/radius/install/configure.cambridge.tar

  5. In a terminal window, extract the new cluster definition files from the archive.

    Execute:

    tar xvf configure.<cluster_name>.tar

    Output similar to this example is displayed:

  6. Repeat these steps until every node in the cluster has a copy of the new cluster definition files.
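
Put together, fetching and unpacking the definition files on one node looks roughly like the following; the FTP account you log in with depends on your environment:

    # As hadm on the node receiving the files:
    cd /opt/JNPRshare/install/cambridge
    ftp sm1                       # connect to the node holding the updated definition files
    ftp> bin
    ftp> get /opt/JNPRsbr/radius/install/configure.cambridge.tar
    ftp> bye
    tar xvf configure.cambridge.tar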

Destroying the Session Database on the Original Cluster

You now log in to the sm1 node, destroy the session database from the original cluster, and stop the original cluster.

  1. Log in to sm1 as hadm.

  2. Navigate to the hadm user's home directory, /opt/JNPRhadm by default.

  3. Execute:

    ./DestroyDB.sh

  4. Each time you are prompted as to whether you really want to destroy the database, enter yes.

    The system responds with:

  5. Stop the original cluster by executing:

    ./sbrd stop cluster

  6. Each time you are prompted as to whether you really want to stop the entire cluster, enter y.

    The software stops the RADIUS processes first and then the SSR processes.

  7. On each remaining node of the original cluster, execute ./sbrd stop cluster and verify that the processes are stopped.

    Perform this step on the remaining nodes in this order: s nodes, sm nodes, m nodes, d nodes.

    1. Log in to each remaining node in the existing cluster as root.

    2. Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).

      Example: cd /opt/JNPRsbr/radius

    3. Execute:

      ./sbrd stop cluster

    4. Execute:

      ./sbrd status

    5. Examine each line to ensure it says not connected.
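
In summary, shutting down the original cluster in this section uses the following commands, run from the directories given in the steps above:

    # On sm1, as hadm in /opt/JNPRhadm:
    ./DestroyDB.sh                # answer "yes" at each prompt
    # On sm1, as root in /opt/JNPRsbr/radius:
    ./sbrd stop cluster           # answer "y" at each prompt; RADIUS stops first, then SSR
    # On each remaining node, in the order s, sm, m, d, as root in /opt/JNPRsbr/radius:
    ./sbrd stop cluster
    ./sbrd status                 # every line should report "not connected"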

Configuring the Nodes in the Expanded Cluster with the Updated Cluster Definition Files

To configure the nodes in the expanded cluster with the updated cluster definition files, you run the configure script on each node. First you run the script on the two new data nodes, then run it on the original nodes in the cluster (except for the sm2 node, which is still operating as the transition server).

Configuring the SBR Carrier Software on the New Data Nodes

Configure the software on each new data node:

  1. As root, navigate to the directory where you installed the Steel-Belted Radius Carrier package in Installing the SBR Carrier Software on the Two New Data Node Host Machines.

    Then, navigate to the radius/install subdirectory.

    Example: cd /opt/JNPRsbr/radius/install

  2. Run the configure script.

    Execute:

    ./configure

  3. The End User License Agreement is displayed. Review the Steel-Belted Radius Carrier license agreement.

    Press the spacebar to move from one page to the next.

  4. When you are prompted to accept the terms of the license agreement, enter y.

    Do you accept the terms in the license agreement? [n] y

  5. From the menu of configuration tasks, enter 3 to specify Configure Cluster Node.

  6. Enter the exact name of the cluster and press Enter.

  7. Enter a to accept the configuration.

  8. Enter y to continue.

  9. Enter q to quit.

  10. Notice the line: node sbrha-2.spgma.juniper.net(d) is configured and processes are down, may be reconfigured if desired, which indicates the node name you assigned and that the node was configured without error. The processes remain down for now.

  11. Log in to the next new data node and repeat this procedure.

Running the Configure Script on Each Node from the Original Cluster

At this point in the process, all nodes in the cluster have the new cluster definition files loaded. However, only the new data nodes have been configured with the new files.

In this step, you run the configure script on each node from the original cluster. This includes the sm1, d1, and d2 nodes. Running this script applies the updated cluster definition files to the nodes.

You do not run the script on the sm2 node, which is still operating as the transition server (temporary cluster).

  1. Log in to the first existing node (in this example, sm1) as root.

  2. Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed.

    Example: cd /opt/JNPRsbr/radius/

  3. Check the status of the node by executing:

    ./sbrd status

  4. Examine the line for the node you are about to configure, and verify that it is not connected. In this example, the node ID for sm1 indicates: id=1 (not connected, accepting connect from 172.28.84.36), indicating the sm1 node is stopped.

  5. Navigate to the radius/install subdirectory of the directory where the JNPRsbr package was installed.

    Example: cd /opt/JNPRsbr/radius/install

  6. Run the configure script to apply the updated cluster definition files:

    Execute:

    ./configure

  7. Enter 3 to specify Configure Cluster Node and press Enter.

  8. Press Enter to accept the cluster name and continue.

    You are prompted either to create a new or update an existing node configuration.

  9. Enter u to update the node with the updated cluster definition files.

  10. Enter a to accept the updated configuration.

  11. Enter y to continue.

    Notice the applied configuration includes the four data nodes as indicated by the line: SBR 8.60.50006 cluster cambridge{0s,2sm,0m,4d}.

  12. Enter q to quit.

  13. Log in to the remaining nodes from the original cluster (d1 and d2) and repeat this procedure.

Creating the Session Database and IP Pools on the Expanded Cluster

At this point in the process, all nodes in the expanded cluster have been configured with the updated cluster definition files. All of these nodes are currently down. You now create the session database and IP pools and ranges for the expanded cluster. Before creating the new session database, we recommend that you run the clean command on the nodes from the original cluster (in this case, sm1, d1, and d2).

The sm2 node is still operating as the transition server (temporary cluster). Do not disrupt it in any way.

The following procedure describes how to run the clean command on sm1, d1, and d2, start the SSR process, and create the session database and IP pools.

Cleaning the Original Nodes from the Cluster

Perform the following procedure on sm1, d1, and d2 only:

  1. Log in to the first existing node (in this example, sm1) as root.

  2. Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed.

    Example: cd /opt/JNPRsbr/radius

  3. Execute:

    ./sbrd clean

  4. Repeat this procedure on the d1 and d2 nodes.

Creating the Session Database and IP Pools

In this procedure, you create the session database and IP address pools for the expanded cluster. For details on performing these tasks, see the section on Session State Register Administration in the SBR Carrier Administration and Configuration Guide.

First you start the SSR process. The proper order for starting the SSR process is (sm) nodes, (m) nodes, and (d) nodes. We do not have any (m) nodes in this example, so start the SSR process in the following order: sm1, d1, d2, d3, and d4. Start the SSR process on each node in the expanded cluster one at a time, starting with the sm1 node and then on each data node. For complete details on the proper order of starting and stopping nodes, see When and How to Restart Session State Register Nodes, Hosts, and Clusters.

Starting the SSR Processes on the Nodes in the Expanded Cluster

  1. Log in to the first sm node (in this example, sm1) as root.

  2. Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).

    Example: cd /opt/JNPRsbr/radius

  3. Start the SSR process:

    ./sbrd start ssr

  4. Before moving on to the next node, verify the SSR process started without error by executing:

    ./sbrd status

  5. Examine the status and ensure there are no errors.

  6. Repeat this procedure on the d1, d2, d3, and d4 nodes.

When you finish starting the SSR process on sm1, d1, d2, d3, and d4, the cluster configuration is as follows:

The lines for node IDs 10, 11, 12, and 13 indicate the SSR processes started without error on the four data nodes.

The line id=1 @172.28.84.36 (mysql-5.7.25 ndb-7.6.9) indicates the SSR process started properly on the sm1 node.

Notice that the sm2 node still says it is not connected as indicated by the line: id=2 (not connected, accepting connect from 172.28.84.166). The sm2 node is still operating as the transition server.
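
A composite, illustrative status listing for the expanded cluster at this point, built from the lines quoted above (the data node addresses and node group assignments are assumptions):

    [ndbd(NDB)]     4 node(s)
    id=10   @172.28.84.163  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0, *)
    id=11   @172.28.84.164  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0)
    id=12   @172.28.84.167  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 1)
    id=13   @172.28.84.168  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 1)

    [ndb_mgmd(MGM)] 2 node(s)
    id=1    @172.28.84.36   (mysql-5.7.25 ndb-7.6.9)
    id=2    (not connected, accepting connect from 172.28.84.166)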

Creating the Session Database and IP Address Pools

Now create the session database and IP pools and ranges on the sm1 node.

  1. Log back in to the sm1 node as hadm.

  2. Navigate to the hadm user's home directory, /opt/JNPRhadm by default.

  3. Create the session database on the sm1 node.

    If you need to customize the sessions database, see Customizing the SSR Database Current Sessions Table. Any customization must be done before running the CreateDB.sh script.

    1. Log in as hadm.

    2. Navigate to the hadm user's home directory, /opt/JNPRhadm/ by default.

    3. Execute:

      ./CreateDB.sh

  4. As hadm, add the IP address pools and ranges.

    For details on performing these tasks, see the section on Session State Register Administration in the SBR Carrier Administration and Configuration Guide.

Starting the RADIUS Process

Start the RADIUS process on the sm1 node.

Execute:

./sbrd start radius

./sbrd status

When the RADIUS process has started on the sm1 node, the cluster configuration is as follows:

The RADIUS process for the sm1 node has started properly as indicated by the line:

Notice that the sm2 node is the only node that still is not connected, as indicated by the lines:

Now that the expanded cluster nodes sm1, d1, d2, d3, and d4 are all started and running without error, you can switch traffic back to the expanded cluster.

Removing the Transition Server from Service

After you bring the expanded cluster online, configure it, and test it, begin transferring live traffic to it and away from the transition server. When all traffic has been shifted to the new expanded cluster and the number of ongoing sessions managed by the transition server has reached a suitably low level, take the transition server offline. Some sessions are terminated, but they reconnect through the new cluster.

Unconfiguring and Rebuilding the Transition Server

To free the licenses used by the transition server (in this case, sm2) and clean up the installed software, uninstall the SBR Carrier software. See Uninstalling Steel-Belted Radius Carrier Software.

Unconfiguring the Transition Server

  1. Log in to the sm2 node as root.

  2. Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).

    Example: cd /opt/JNPRsbr/radius

  3. Stop the RADIUS processes.

    Execute:

    ./sbrd stop radius

  4. Stop the SSR processes:

    Execute:

    ./sbrd stop ssr

  5. Check the status on the sm2 node to ensure the processes are stopped.

    Execute:

    ./sbrd status

  6. Navigate to the radius/install subdirectory of the directory in which the SBR Carrier package was installed (/opt/JNPRsbr/radius/install by default).

  7. Run the unconfigure script:

    Execute:

    ./unconfigure

  8. At the warning message enter y to continue.

  9. Press Enter to indicate that you do not want to remove the shared directory.

  10. Press Enter to indicate that you do not want to remove the OS user account.

Retrieving the Updated Cluster Definition Files from SM1 Node

To distribute the new cluster definition files:

  1. Log in to the sm2 node as hadm.

  2. Change directories to the install directory.

    (On new nodes, the entire path may not exist because the <cluster name> portion of the path was not created when you prepared the new machine, so you may need to create it.) See Creating Share Directories.

    Execute:

    cd /opt/JNPRshare/install/<cluster_name>

    For example:

    cd /opt/JNPRshare/install/cambridge

  3. Use FTP binary mode to connect to the node host (in this example, sm1) where you created the new cluster definition files.

  4. Execute the get command to transfer the configure.<cluster_name>.tar file.

    For example:

    bin

    get /opt/JNPRsbr/radius/install/configure.cambridge.tar

  5. In a terminal window, extract the new cluster definition files from the archive.

    Execute:

    tar xvf configure.<cluster_name>.tar

    Output similar to this example is displayed:

Running the Configure Script on the SM2 Node

  1. Log in to the sm2 node as root.

  2. Navigate to the radius/install subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).

    Example: cd /opt/JNPRsbr/radius/install

  3. Run the configure script to apply the updated cluster definition files:

    Execute:

    ./configure

  4. Review and accept the Steel-Belted Radius Carrier license agreement.

    Press the spacebar to move from one page to the next. When you are prompted to accept the terms of the license agreement, enter y.

    Do you accept the terms in the license agreement? [n] y

  5. From the menu of configuration tasks, enter 3 to specify Configure Cluster Node.

  6. Specify the exact name of the cluster.

  7. Enter a to accept the modified configuration files and continue or v to view them.

    Caution

    We recommend that you enter r to reject them only if you made a serious error when providing information. We recommend that you not edit these files.

  8. The configure script displays a warning and asks whether to apply the cluster definition to this node. Enter y to proceed.

    Note

    The Expect package must be installed on RHEL 7.3 or later (see Linux for supported RHEL versions). SBR Carrier has been tested only with Expect 5.45. The Expect package is installed by default on Solaris 11.3.36.10.0 or later (see Solaris for supported Solaris versions).

  9. Configure the node.

    For information about configuring the node in the following prompts, see Configuring the Host Software on the First Server in the Cluster.

  10. Enter q to quit.

  11. Start the SSR process on the newly configured sm2 node:

    1. Execute:

      ./sbrd start ssr

    2. Execute:

      ./sbrd status

    3. Examine each line and ensure the SSR process is running without error.

  12. Run CreateDB.sh script on sm2.

    The purpose of running the CreateDB.sh script is to create certain files that are required to run the administrative shell scripts used to administer the session database in the cluster.

    1. Log in to sm2 as hadm.

    2. Navigate to the hadm user's home directory, /opt/JNPRhadm/ by default.

    3. Execute:

      ./CreateDB.sh

  13. Start the RADIUS process on sm2:

    1. Log in to sm2 as root.

    2. Execute:

      ./sbrd start radius

    3. Execute:

      ./sbrd status

      The final cluster configuration looks as follows:

    4. Examine each line and ensure the cluster is running with no errors.
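
As a recap, returning the former transition server to the cluster as the sm2 node uses this sequence, run from the directories given in the steps above:

    # As root in /opt/JNPRsbr/radius:
    ./sbrd stop radius
    ./sbrd stop ssr
    ./sbrd status                     # confirm the processes are stopped
    cd install
    ./unconfigure                     # keep the shared directory and the OS user account
    # As hadm: retrieve and extract configure.cambridge.tar as shown above
    # As root in /opt/JNPRsbr/radius/install:
    ./configure                       # option 3, Configure Cluster Node; accept and apply
    # As root in /opt/JNPRsbr/radius:
    ./sbrd start ssr
    ./sbrd status
    # As hadm in /opt/JNPRhadm:
    ./CreateDB.sh
    # As root in /opt/JNPRsbr/radius:
    ./sbrd start radius
    ./sbrd status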

Non-Transition Server Method—Terminating Connections

If you can tolerate some downtime while the existing data nodes are stopped, the new configuration is imposed, and all nodes are restarted, this is the quickest and easiest method of incorporating the new data nodes. However, sessions are disconnected, and reconnection is not possible until all nodes come back online.

Caution

This procedure stops the entire cluster. You will not be able to process any requests from users.

To estimate how long this process takes, note the amount of time it takes to reconfigure one or two nodes.

Assuming the same basic configuration as in the previous examples, {0s,2sm,0m,2d}, the following procedure describes the high-level tasks involved in this method. See the previous procedures in this chapter for information about performing each task; a condensed command sketch follows the list.

  1. Stop the RADIUS processes on the sm1 and sm2 nodes.

  2. Call DestroyDB.sh as user hadm on either the sm1 or sm2 node.

  3. Stop the cluster on sm1.

  4. Stop the SSR process on sm2.

  5. Verify that the SSR processes are stopped on the two existing data nodes.

  6. Install the SBR Carrier software on the two new data nodes in the expansion kit.

  7. Run the configure script on sm1 using option 2 to update the cluster definition files.

  8. Distribute the updated cluster definition files to all nodes including the two new data nodes.

  9. Run the clean command on all four of the existing nodes (sm1, sm2, d1, and d2).

  10. Start the SSR process on each node one at a time.

  11. Run CreateDB.sh on the sm1 node.

  12. After CreateDB.sh has finished running on sm1, repeat it on sm2 as user hadm.

  13. Add the IP address pools and ranges using the administrative scripts.

  14. Start the RADIUS processes on sm1 and sm2 one at a time.
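
For orientation only, the numbered tasks above map onto the commands used earlier in this chapter roughly as follows; run each command from the directories described in the corresponding procedure:

    # 1.  On sm1 and sm2, as root:        ./sbrd stop radius
    # 2.  On sm1 (or sm2), as hadm:       ./DestroyDB.sh
    # 3.  On sm1, as root:                ./sbrd stop cluster
    # 4.  On sm2, as root:                ./sbrd stop ssr
    # 5.  On d1 and d2, as root:          ./sbrd status        (verify SSR is stopped)
    # 6.  On each new data node host:     pkgadd -d /tmp/sbr
    # 7.  On sm1, as root:                ./configure          (option 2, update the cluster definition)
    # 8.  On every node, as hadm:         get and extract configure.<cluster_name>.tar
    # 9.  On sm1, sm2, d1, and d2:        ./sbrd clean
    # 10. On each node, one at a time:    ./sbrd start ssr
    # 11. On sm1, as hadm:                ./CreateDB.sh
    # 12. Then on sm2, as hadm:           ./CreateDB.sh
    # 13. As hadm:                        add IP address pools and ranges
    # 14. On sm1, then sm2, as root:      ./sbrd start radius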