Adding a Management Node Expansion Kit to an Existing Cluster

This section describes how to add a management node to an existing cluster using the Management Node Expansion Kit. The Management Node Expansion Kit provides software and a license for a third management node. This third management node, which acts as the final arbiter, is set up on a separate host machine as an (m) node; it does not share a machine with an SBR node (sm node). You must place the third management node in a location whose connectivity to the database is similar to that of the bulk of your NAS devices. You must set ArbitrationRank to 1 (ArbitrationRank=1) for the third management node and to 2 (ArbitrationRank=2) for the other two management nodes. With this configuration, during a NOC outage the third management node decides which half of the cluster survives. You may also need to set up a VLAN or a special firewall connection between this management node and the red zone on which your data nodes are networked.
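
The configure script generates these arbitration settings for you from the cluster definition files, so you do not edit them by hand. The following is only an illustrative sketch, in MySQL Cluster config.ini notation, of the intended ranking; the host names and node IDs shown here are placeholders.

  # Illustrative sketch only; the actual file is generated from the cluster
  # definition files and should not be hand-edited.

  # Existing management nodes (sm1 and sm2): lower arbitration priority
  [ndb_mgmd]
  NodeId=1
  HostName=sm1.example.com
  ArbitrationRank=2

  [ndb_mgmd]
  NodeId=2
  HostName=sm2.example.com
  ArbitrationRank=2

  # Third management node (m): preferred arbiter
  [ndb_mgmd]
  NodeId=3
  HostName=m1.example.com
  ArbitrationRank=1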

Adding the third management node increases the resiliency of the cluster by providing an additional arbiter in case of a node failure.

To add a new management node to an existing cluster, you perform the following high-level tasks:

  1. Update the existing cluster definition files to include the new management node.

    See Updating the Existing Cluster Definition Files for the New Management Node.

  2. Distribute the updated cluster definition files to the existing nodes in the cluster.

    See Distributing the Updated Cluster Definition Files to the Existing Nodes.

  3. Install the SBR Carrier software on the new management node.

    See Installing the SBR Carrier Software on the New Management Node Host Machine.

  4. Configure the SBR Carrier software on the new management node.

    See Configuring the SBR Carrier Software on the New Management Node.

  5. One by one, stop the process on each existing node, configure it with the new cluster definition file, and restart the process.

    See Configuring Each Existing Node in the Cluster with the New Cluster Definition Files.

  6. Start the SSR process on the new management node.

    See Starting the New Management Node.

  7. Run CreateDB.sh on the new management node.

    See Running CreateDB.sh on the New Management Node.

The following procedure adds a single management node to an existing cluster.

The following designations are used throughout the examples in this section:

sm = Hardware has SBR node and Management node.

s = Hardware has only SBR node.

m = Hardware has only Management node.

d = Hardware has Data node.

2sm, 2d = Two SBR/Management nodes and two Data nodes.

2s, 2sm, 2d = Two SBR nodes, two SBR/Management nodes, and two Data nodes.

Display the existing cluster:

The existing cluster includes two full SBR Carrier licenses and a license for the Starter Kit, resulting in a configuration that includes two sm nodes and two d nodes.

For the purposes of this procedure, the existing two sm nodes are identified as sm1 and sm2.
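
One way to display the current state of the cluster is to run the status command as root from the radius directory on one of the sm nodes; the exact output depends on your deployment.

  cd /opt/JNPRsbr/radius
  ./sbrd status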

Updating the Existing Cluster Definition Files for the New Management Node

In this first part of the procedure, you update the existing cluster definition files on the sm1 node to reflect the new configuration of {0s,2sm,1m,2d}.

Before proceeding, make sure the machine that you want to host the new management node meets all system requirements. See Before You Install Software.

The following steps create a new set of cluster definition files in /opt/JNPRshare/install/<cluster_name> and in configure.<cluster_name>.tar. You may want to make a backup copy of the existing configure.<cluster_name>.tar file before creating the new files, in case you need to restore the existing configuration.
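
For example, a backup copy might be made as follows. This sketch assumes the example cluster name cambridge and the default installation path; adjust both for your environment.

  cp -p /opt/JNPRsbr/radius/install/configure.cambridge.tar /opt/JNPRsbr/radius/install/configure.cambridge.tar.bak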

To generate the updated cluster definition files:

  1. As root, on the sm1 node, navigate to the radius/install subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).

    Example: cd /opt/JNPRsbr/radius/install

  2. Run the configure script.

    Execute:

    ./configure

  3. From the menu of configuration tasks, enter 2 to specify Generate Cluster Definition.

    You are prompted to enter the name of the cluster.

  4. Press Enter to use the current cluster name.

    You are prompted either to create a new cluster or update an existing cluster definition.

  5. Enter u to update the existing cluster definition.

  6. Enter the license number for the new management node.

  7. You are prompted for a license if you are adding a Data Expansion Kit. Because this procedure does not add a Data Expansion Kit, press Enter to skip the prompt.

  8. You are prompted for a license if you are adding an SBR node. Because this procedure does not add an SBR node, press Enter to skip the prompt.

  9. Verify the proper configuration of {0s,2sm,1m,2d} for the cluster named cambridge and enter y to continue.

  10. Press Enter for the management node.

  11. Press Enter to accept the management node ID.

  12. Enter the IP address for the new management node and press Enter.

    The system generates the required cluster definition files and prompts you to view, accept, or reject them.

  13. Enter a to accept them and continue or v to view them.

    Caution

    Enter r to reject the files only if you made a serious error when providing the information. We recommend that you not edit these files.

    In this example, notice that the new configuration displays as Generated configuration is {0s,2sm,1m,2d}, confirming that the new management node is included in the cluster definition.

    Enter a to accept the new definition files.

    The software writes the new cluster definition files to this node and returns you to the main configuration menu.

  14. Press q to quit.
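
At a glance, the interaction in steps 1 through 14 looks like the following outline. This is a summary, not verbatim script output; values in angle brackets are placeholders.

  cd /opt/JNPRsbr/radius/install
  ./configure
  2                                  (Generate Cluster Definition)
  <Enter>                            (keep the current cluster name)
  u                                  (update the existing cluster definition)
  <management node license number>
  <Enter>                            (no Data Expansion Kit license)
  <Enter>                            (no additional SBR node license)
  y                                  (confirm the {0s,2sm,1m,2d} configuration)
  <Enter>                            (management node prompt)
  <Enter>                            (accept the management node ID)
  <new management node IP address>
  a                                  (accept the generated definition files)
  q                                  (quit)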

Distributing the Updated Cluster Definition Files to the Existing Nodes

At this point, the updated cluster definition files (*.tar file) have been created on the sm1 node only. Now you need to distribute the new definition files to the other nodes in the cluster, including the new management node.

On both the existing nodes and the new management node in the expanded cluster, create a copy of the new cluster definition files. Doing this does not invoke the new files, but makes them available to the configure script later in the workflow.

To distribute the new cluster definition files:

  1. Log in to each node (existing and new) as hadm.

  2. Change directories to the installation directory.

    (On the new management node, the entire path may not exist because the <cluster_name> portion of the path was not created when you prepared the new machine, so you may need to create it.) See Creating Share Directories. A consolidated example of these steps appears after this procedure.

    Execute:

    cd /opt/JNPRshare/install/<cluster_name>

    For example:

    cd /opt/JNPRshare/install/cambridge

  3. Use FTP binary mode to connect to the node host (in this example, sm1) where you created the new cluster definition files.

  4. Execute the get command to transfer the configure.<cluster_name>.tar file.

    For example:

    bin

    get /opt/JNPRsbr/radius/install/configure.cambridge.tar

  5. In a terminal window, extract the new cluster definition files from the archive.

    Execute:

    tar xvf configure.<cluster_name>.tar

    The tar command lists the cluster definition files as it extracts them.

  6. Repeat these steps until every node in the cluster has a copy of the new cluster definition files.
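
Taken together, steps 2 through 5 might look like the following on one node. This sketch uses the example cluster name cambridge and transfers the file from the sm1 host; adjust the names and paths for your environment.

  mkdir -p /opt/JNPRshare/install/cambridge
  cd /opt/JNPRshare/install/cambridge
  ftp sm1
  (at the ftp prompt)
  bin
  get /opt/JNPRsbr/radius/install/configure.cambridge.tar
  bye
  tar xvf configure.cambridge.tar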

Installing the SBR Carrier Software on the New Management Node Host Machine

This procedure describes how to unpack and install the SBR Carrier software on the host machine for the new management node.

  1. Log in to the host machine for the new management node as root.

  2. Copy the Steel-Belted Radius Carrier installation files from their download location to the machine. Make sure to copy them to a local or remote hard disk partition that is readable by root.

    This example assumes the files were downloaded to the /tmp/sbr directory and copies them to the /opt/tmp directory.

    Execute:

    mkdir -p /opt/tmp

    cp -pR /tmp/sbr/solaris/* /opt/tmp/

  3. Extract the SBR Carrier installation package.

    For 64-bit Solaris, execute:

    cd /tmp/sbr

    ls -ltr

    Execute:

    gunzip sbr-cl-8.6.0.R-1.sparcv9.tgz

    tar xf sbr-cl-8.6.0.R-1.sparcv9.tar

  4. Verify that the extraction worked and confirm the name of the package file.

    For 64-bit Solaris, execute:

    ls -ltr

  5. Install the package.

    Execute:

    pkgadd -d /tmp/sbr

  6. Type all and press Enter.

    The script resumes.

  7. Confirm the installation directory.

    Depending on the system configuration, the script might prompt you to create the /opt/JNPRsbr directory if it does not exist, to overwrite an already extracted package, or to answer several other questions.

  8. Answer the question appropriately (or change the extraction path if necessary) so that the script can proceed.

    To accept the default directory as a target, enter y.

    The script resumes.

  9. Enter y to confirm that you want to continue to install the package.

Configuring the SBR Carrier Software on the New Management Node

Before starting this procedure, review Before You Install Software. In particular, review requirements for Setting Up External Database Connectivity (Optional) and Installing the SIGTRAN Interface (Optional), because steps in this procedure require the server to be preconfigured for these capabilities.

To configure the software on the new management node:

  1. As root, navigate to the radius/install subdirectory of the directory where you installed the Steel-Belted Radius Carrier package in Installing the SBR Carrier Software on the New Management Node Host Machine.

    Example: cd /opt/JNPRsbr/radius/install

  2. Run the configure script.

    Execute:

    ./configure

  3. Review and accept the Steel-Belted Radius Carrier license agreement.

    Press the spacebar to move from one page to the next. When you are prompted to accept the terms of the license agreement, enter y.

    Do you accept the terms in the license agreement? [n] y

  4. From the menu of configuration tasks, enter 3 to specify Configure Cluster Node.

  5. Specify the name of the cluster.

  6. Enter a to accept the modified cluster definition files and continue or v to view them.

    Caution

    Enter r to reject the files only if you made a serious error when providing the information. We recommend that you not edit these files.

  7. The configure script displays a warning and asks whether to apply the cluster definition to this node. Enter y to proceed.

    Note

    The Expect package must be installed on RHEL 7.3 or later (see Linux for supported RHEL versions). SBR Carrier has been tested only with Expect version 5.45. On Solaris 11.3.36.10.0 or later (see Solaris for supported Solaris versions), the Expect package is installed by default. A quick way to check the installed version is shown after this procedure.

  8. Enter q to quit.
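
Before running the configure script, you can check whether Expect is present. This is a sketch that assumes standard RHEL and Solaris 11 package tooling; the package name on your system may differ.

  expect -v                     (prints the installed Expect version)
  rpm -q expect                 (RHEL: confirms the package is installed)
  pkg list | grep -i expect     (Solaris 11: confirms the package is installed)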

Configuring Each Existing Node in the Cluster with the New Cluster Definition Files

At this point in the process, all nodes in the cluster have the new cluster definition files loaded. However, only the new management node has been configured with the new files. The existing nodes are still running with the old cluster definition files.

In this procedure, you log in to each existing node, stop the processes, run the configure script, and restart the processes. You must complete these steps on every existing node in the cluster. This example starts with the sm1 node.

Caution

In this procedure, you need to stop and restart each existing node one by one to apply the new cluster definition to each of the original cluster nodes. Do not operate on multiple nodes at the same time because that creates multiple faults that can stop the entire cluster.

Always review the recommended start and stop order and procedures, and plan the order in which to perform the equivalent steps in your cluster. See When and How to Restart Session State Register Nodes, Hosts, and Clusters.

  1. Log in to the first existing node (in this example, sm1) as root.

  2. Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).

    Example: cd /opt/JNPRsbr/radius

  3. Stop the RADIUS process on the node you are configuring (required on each (s) and (sm) node). Execute:

    1. ./sbrd stop radius

    2. ./sbrd status

  4. Stop the ssr processes on the node you are configuring (required on each (sm), (m) and (d) node):

    ./sbrd stop ssr

  5. Check the status of the node:

    ./sbrd status

  6. Verify that the node you are about to configure is not connected. In this example, the entry for sm1 reads id=1 (not connected, accepting connect from 172.28.84.36), which indicates that the sm1 node is stopped.

  7. Run the configure script:

    Execute:

    ./configure

  8. From the menu of configuration tasks, enter 3 to specify Configure Cluster Node.

  9. Press Enter to accept the existing cluster name (in the example: cambridge).

  10. Enter u to update the existing node configuration.

  11. Enter a to accept the new cluster definition files.

    The configure script displays a warning and asks whether to apply the new cluster definition to the node.

  12. Enter y to continue.

  13. Enter q to quit.

  14. Notice that the first line in the applied configuration is: SBR 8.60.50006 cluster cambridge{0s,2sm,1m,2d}, indicating that the new configuration has been applied to the sm1 node.

  15. Restart the SSR process on the newly configured sm1 node:

    1. Execute:

      ./sbrd start ssr

    2. Execute:

      ./sbrd status

    3. Examine each line and ensure the SSR process is running without error.

  16. Restart the RADIUS process on sm1:

    1. Execute:

      ./sbrd start radius

    2. Execute:

      ./sbrd status

    3. Examine each line and ensure the RADIUS process is running without error.

  17. Repeat steps 1 through 16 on the sm2 node and then on each data node, one at a time. For the data nodes, you do not need to stop and restart the RADIUS process because data nodes only run the SSR process.

    Do not operate on multiple nodes at once. Doing so creates multiple faults that can stop the entire cluster. Complete the procedure on each node one at a time until the node is operating without error.
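
For reference, the stop, reconfigure, and restart sequence on an existing sm node is summarized below. All commands come from the steps above and are run as root from the /opt/JNPRsbr/radius directory; on data nodes, omit the radius commands.

  ./sbrd stop radius          (s and sm nodes only)
  ./sbrd status
  ./sbrd stop ssr
  ./sbrd status               (verify the node shows as not connected)
  ./configure                 (option 3, then u, a, y, q)
  ./sbrd start ssr
  ./sbrd status               (verify the SSR process is running without error)
  ./sbrd start radius         (s and sm nodes only)
  ./sbrd status               (verify the RADIUS process is running without error)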

Starting the New Management Node

At this point in the process, the original nodes in the cluster (sm1, sm2, d1, and d2) are up and running with the new cluster definition files. The new management node is configured with the proper configuration, but is not yet running in the cluster. The following procedure starts the new management node in the cluster.

  1. Start the ssr process on the new management node:

    1. Log in as root to the management (m) node.

    2. Change directories to /opt/JNPRsbr/radius/.

    3. Execute:

      ./sbrd start ssr

    4. Execute:

      ./sbrd status

    5. Examine each line of the final cluster configuration and ensure it is running without error.

  2. Now that the new management node is started and running in the cluster, configure it using the Web GUI. See Basic SBR Carrier Node Configuration. For complete details, see the SBR Carrier Administration and Configuration Guide.

Running CreateDB.sh on the New Management Node

At this point, the new management node is up and running in the cluster. You run the CreateDB.sh script to create certain files that are required to run the administrative shell scripts used to administer the session database in the cluster.

Run the CreateDB.sh script on the new management node.

  1. Log in as hadm.

  2. Navigate to the hadm user's home directory, /opt/JNPRhadm/ by default.

  3. Execute:

    CreateDB.sh
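
Taken together, and assuming the default hadm home directory, the sequence might look like the following from a root shell. If the hadm home directory is not on the PATH, prefix the script name with ./ .

  su - hadm
  cd /opt/JNPRhadm
  CreateDB.sh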

The addition of the new management node is complete.