Adding a Management Node Expansion Kit to an Existing Cluster
This section describes how to add a management node to an existing cluster using the Management Node Expansion Kit. The Management Node Expansion Kit provides software and a license for a third management node. This third management node, which acts as the final arbiter, is set up on a separate host machine as an (m) node; it does not share a machine with an SBR node (sm node). Place the third management node in a location whose connectivity to the database is similar to that of the bulk of your NAS devices. Set the ArbitrationRank for the third management node to 1 (ArbitrationRank=1) and for the other two management nodes to 2 (ArbitrationRank=2). With this configuration, during a NOC outage the third management node decides which half of the cluster survives. You may also need to set up a VLAN or a special firewall connection between this management node and the red zone on which your data (d) nodes are networked.
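To make the arbitration setup concrete, the following is a hypothetical sketch of the [ndb_mgmd] sections that the configure script might generate in config.ini for this arrangement. The node IDs and IP addresses are illustrative (taken from this section's examples); the configure script generates the real file, which you should never hand-edit.

```ini
# Illustrative sketch only -- the configure script generates the real config.ini.
[ndb_mgmd]                  # management node on sm1
NodeId=1
HostName=172.28.84.36
ArbitrationRank=2

[ndb_mgmd]                  # management node on sm2
NodeId=2
HostName=172.28.84.166
ArbitrationRank=2

[ndb_mgmd]                  # third (m) node, the final arbiter
NodeId=3
HostName=172.28.84.178
ArbitrationRank=1
```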
Adding the third management node increases the resiliency of the cluster by providing an additional arbiter in case of a node failure.
To add a new management node to an existing cluster, you perform the following high-level tasks:
Update the existing cluster definition files to include the new management node.
See Updating the Existing Cluster Definition Files for the New Management Node.
Distribute the updated cluster definition files to the existing nodes in the cluster.
See Distributing the Updated Cluster Definition Files to the Existing Nodes.
Install the SBR Carrier software on the new management node.
See Installing the SBR Carrier Software on the New Management Node Host Machine.
Configure the SBR Carrier software on the new management node.
See Configuring the SBR Carrier Software on the New Management Node.
One by one, stop the process on each existing node, configure it with the new cluster definition file, and restart the process.
See Configuring Each Existing Node in the Cluster with the New Cluster Definition Files.
Start the SSR process on the new management node.
Run CreateDB.sh on the new management node.
The following procedure adds a single management node to an existing cluster.
The following designations are used throughout the examples in this section:
sm = Hardware has both an SBR node and a Management node.
s = Hardware has only an SBR node.
m = Hardware has only a Management node.
d = Hardware has a Data node.
2sm, 2d = Two SBR/Management nodes and two Data nodes.
2s, 2sm, 2d = Two SBR nodes, two SBR/Management nodes, and two Data nodes.
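The topology shorthand can be unpacked mechanically. As a small illustrative sketch (the count_machines helper is not part of the product), this shell function counts the host machines implied by a topology string:

```shell
# Hypothetical helper: count the host machines implied by an SBR cluster
# topology string such as {0s,2sm,1m,2d}. Each comma-separated entry is a
# count followed by a node-type suffix; the machine total is the sum of
# the counts.
count_machines() {
  echo "$1" | tr -d '{}' | tr ',' '\n' |
    awk '{ n = $0; sub(/[a-z]+$/, "", n); total += n } END { print total }'
}

count_machines '{0s,2sm,1m,2d}'   # the expanded example cluster: 5 machines
```

For instance, the maximal cluster {18s,2sm,1m,4d} described later in this section works out to 25 machines.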
Display the existing cluster:
hadm@wrx07:~> ndb_mgm -e show
Connected to Management Server at: 172.28.84.166:5235
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=10   @172.28.84.163  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0, Master)
id=11   @172.28.84.113  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1    @172.28.84.36   (mysql-5.7.25 ndb-7.6.9)
id=2    @172.28.84.166  (mysql-5.7.25 ndb-7.6.9)

[mysqld(API)]   4 node(s)
id=6    @172.28.84.36   (mysql-5.7.25 ndb-7.6.9)
id=7    @172.28.84.166  (mysql-5.7.25 ndb-7.6.9)
id=58   @172.28.84.166  (mysql-5.7.25 ndb-7.6.9)
id=59   @172.28.84.36   (mysql-5.7.25 ndb-7.6.9)
The existing cluster includes two full SBR Carrier licenses and a license for the Starter Kit, resulting in a configuration of two sm nodes and two d nodes.
For the purposes of this procedure, the existing two sm nodes are identified as sm1 and sm2 as follows:
id=1 @172.28.84.36 = sm1
id=2 @172.28.84.166 = sm2
Updating the Existing Cluster Definition Files for the New Management Node
In this first part of the procedure, you update the existing cluster definition files on the sm1 node to reflect the new configuration of {0s,2sm,1m,2d}.
Before proceeding, make sure the machine that you want to host the new management node meets all system requirements. See Before You Install Software.
The following steps create a new set of cluster definition files in /opt/JNPRshare/install/<cluster_name> and in configure.<cluster_name>.tar. You may want to make a backup copy of the existing configure.<cluster_name>.tar file before creating the new files, in case you need to restore the existing configuration.
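That backup can be sketched as follows. The backup_tar helper is hypothetical (not part of the product) and is parameterized on the archive path so the same commands work for any cluster name; on sm1 the archive lives in /opt/JNPRsbr/radius/install by default.

```shell
# Hypothetical helper: keep a dated copy of a cluster definition archive
# before regenerating it, e.g.
#   backup_tar /opt/JNPRsbr/radius/install/configure.cambridge.tar
backup_tar() {
  tarfile="$1"
  # -p preserves ownership and timestamps on the copy
  cp -p "$tarfile" "$tarfile.$(date +%Y%m%d)"
}
```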
To generate the updated cluster definition files:
As root, on the sm1 node, navigate to the radius/install subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).
Example: cd /opt/JNPRsbr/radius/install
Run the configure script.
Execute:
./configure
Example:
root@wrx07:/opt/JNPRsbr/radius/install> ./configure
Configuring SBR Software
---------------------------------------------------------------------------
SBR 8.60.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09
node wrx07(sm) is CONFIGURED and processes are UP, may be stopped if reconfigured
---------------------------------------------------------------------------
1. Unconfigure Cluster Node
   Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition
   Creates new or updates existing cluster definitions.
   Modifies the shared directory but does not modify this node.
3. Configure Cluster Node
   To be preceded by 'Generate Cluster Definition' on one node.
   Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server
   Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster
   Used to approximate a cluster using only this one machine.
   Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License
   Used to upgrade from restricted cluster to regular cluster.
   Removes database restriction on the number of concurrent sessions
   and enables the addition of an expansion kit license.
Enter the number of the desired configuration task or quit (4,q):
From the menu of configuration tasks, enter 2 to specify Generate Cluster Definition.
---------------------------------------------------------------------------
SBR 8.60.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09
node wrx07(sm) is CONFIGURED and processes are UP, may be stopped if reconfigured
---------------------------------------------------------------------------
Generating Cluster Definition...
Enter SBR cluster name [cambridge]:
You are prompted to enter the name of the cluster.
Press Enter to use the current cluster name.
You are prompted either to create a new cluster or update an existing cluster definition.
Create (c) new or update (u) existing cluster definition? [u]:
Enter u to update the existing cluster definition.
The SBR Cluster Starter Kit license allows you to create a minimal cluster of 2 SBR nodes, 2 management nodes, and 2 data nodes. When each node is installed on a separate machine the cluster topology is denoted as {2s,2m,2d}. When SBR nodes are paired with management nodes on the same machines the cluster topology is denoted as {2sm,2d}.
An optional SBR Cluster Management Expansion Kit allows you to add a third management node for {2sm,1m,2d} and an optional Data Expansion Kit allows you to add 2 more data nodes for {2sm,1m,4d} clusters. Additional SBR licenses allow you to add up to 18 more SBR nodes to obtain a maximal cluster {18s,2sm,1m,4d} and/or enable extra features.
While it is not difficult to add management and/or SBR nodes to an existing cluster, adding data nodes is more difficult and may require you to shutdown the entire cluster as opposed to a rolling restart.
Another license is required if you wish to add a third management node.
Adding a third management node will require a rolling restart later.
Enter Management Expansion Kit license, if any: 1770 0002 0112 0100 1145 3801
Enter the license number for the new management node.
Another license is required if you wish to add more data nodes.
Adding data nodes may require you to shutdown the entire cluster.
Enter Data Expansion Kit license, if any:
You are prompted for a Data Expansion Kit license. Press Enter because we are not adding a Data Expansion Kit.
This cluster presently contains 2 of 20 possible SBR nodes.
Adding more SBR nodes will require a rolling restart later.
Enter number of SBR nodes to be added [0]:
You are prompted for the number of SBR nodes to be added. Press Enter to accept the default of 0 because we are not adding an SBR node.
Updating cluster cambridge{0s,2sm,1m,2d} will require 1 new machines.
Do you wish to continue? [y]:
Verify the proper configuration of {0s,2sm,1m,2d} for the cluster named cambridge and enter y to continue.
Information will now be gathered for each new machine to be added. You will have a chance to review all information at least once before any machines are modified.
---------------------------------------------------------------------------
SBR 8.60.50006 cluster cambridge{0s,2sm,1m,2d} on SunOS 5.10 Generic_141444-09
node wrx07(sm) Partial configuration at present is {0s,2sm,0m,2d} of {0s,2sm,1m,2d}
---------------------------------------------------------------------------
IMPORTANT: node names must be entered as reported by 'uname -n'.
Enter node name [cambridge-6]: sbrha-8.carrier.spgma.juniper.net
Enter node type (m) [m]:
Press Enter to accept the management (m) node type.
Enter MGMT node ID (1-3) [3]:
Press Enter to accept the management node ID.
Enter MGMT node IP address by which it is known to other nodes.
Enter MGMT node IP address: 172.28.84.178
Enter the IP address for the new management node and press Enter.
---------------------------------------------------------------------------
SBR 8.60.50006 cluster cambridge{0s,2sm,1m,2d} on SunOS 5.10 Generic_141444-09
node wrx07(sm) Generated configuration is {0s,2sm,1m,2d} of {0s,2sm,1m,2d}
---------------------------------------------------------------------------
Generating configuration files
Reviewing configuration files
/opt/JNPRsbr/radius/install/tmp/config.ini
/opt/JNPRsbr/radius/install/tmp/my.cnf
/opt/JNPRsbr/radius/install/tmp/dbclusterndb.gen
View (v), accept (a), or reject (r) configuration files:
The system generates the required cluster definition files and prompts you to view, accept, or reject them.
Enter a to accept them and continue or v to view them.
Caution: Enter r to reject the files only if you made a serious error when providing information. We recommend that you not edit these files.
In this example, notice that the new configuration displays as Generated configuration is {0s,2sm,1m,2d}, confirming that the new management node is included in the cluster definition.
Enter a to accept the new definition files.
Writing shared configuration to /opt/JNPRshare/install/cambridge
---------------------------------------------------------------------------
SBR 8.60.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09
node wrx07(sm) is CONFIGURED and processes are UP, may be stopped if reconfigured
---------------------------------------------------------------------------
1. Unconfigure Cluster Node
   Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition
   Creates new or updates existing cluster definitions.
   Modifies the shared directory but does not modify this node.
3. Configure Cluster Node
   To be preceded by 'Generate Cluster Definition' on one node.
   Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server
   Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster
   Used to approximate a cluster using only this one machine.
   Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License
   Used to upgrade from restricted cluster to regular cluster.
   Removes database restriction on the number of concurrent sessions
   and enables the addition of an expansion kit license.
READY: last operation succeeded, generated cluster definition.
Enter the number of the desired configuration task or quit (4,q):
The software writes the new cluster definition files to this node and returns you to the main configuration menu.
Press q to quit.
Distributing the Updated Cluster Definition Files to the Existing Nodes
At this point, the updated cluster definition files (*.tar file) have been created on the sm1 node only. Now you need to distribute the new definition files to the other nodes in the cluster, including the new management node.
On both the existing nodes and the new management node in the expanded cluster, create a copy of the new cluster definition files. Doing this does not invoke the new files, but makes them available to the configure script later in the workflow.
To distribute the new cluster definition files:
Log in to each node (existing and new) as hadm.
Change directories to the installation directory.
On new nodes, the full path may not exist because the <cluster_name> portion of the path was not created when you prepared the new machine; create it if it is missing. See Creating Share Directories.
Execute:
cd /opt/JNPRshare/install/<cluster_name>
For example:
cd /opt/JNPRshare/install/cambridge
Use FTP binary mode to connect to the node host (in this example, sm1) where you created the new cluster definition files.
Execute the get command to transfer the configure.<cluster_name>.tar file.
For example:
bin
get /opt/JNPRsbr/radius/install/configure.cambridge.tar
In a terminal window, extract the new cluster definition files from the archive.
Execute:
tar xvf configure.<cluster_name>.tar
Output similar to this example is displayed:
$ tar xvf configure.MyCluster.tar
x dbcluster.rc, 1925 bytes, 4 tape blocks
x config.ini, 2435 bytes, 5 tape blocks
x my.cnf, 1017 bytes, 2 tape blocks
x dbclusterndb.gen, 33474 bytes, 66 tape blocks
x dbcluster.dat, 921 bytes, 2 tape blocks
Repeat these steps until every node in the cluster has a copy of the new cluster definition files.
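The per-node fetch-and-extract sequence above can be sketched as a single helper. This is an illustration only: the fetch_and_extract name is hypothetical, and it uses a local cp where a real deployment would use the FTP session shown above (or scp) to pull the archive from sm1.

```shell
# Hypothetical helper: copy the cluster definition archive into a node's
# share directory and unpack it there. On a real node, replace the cp with
# an FTP binary-mode get or scp from the sm1 host.
fetch_and_extract() {
  src="$1"    # path to configure.<cluster_name>.tar (here: a local copy)
  dest="$2"   # e.g. /opt/JNPRshare/install/cambridge
  mkdir -p "$dest"                              # create the path if missing
  cp "$src" "$dest/"                            # real node: transfer from sm1
  ( cd "$dest" && tar xf "$(basename "$src")" ) # unpack the definition files
}
```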
Installing the SBR Carrier Software on the New Management Node Host Machine
This procedure describes how to unpack and install the SBR Carrier software on the host machine for the new management node.
Log in to the host machine for the new management node as root.
Copy the Steel-Belted Radius Carrier installation files from their download location to the machine. Make sure to copy them to a local or remote hard disk partition that is readable by root.
This example copies the files from the /tmp/sbr download directory to the /opt/tmp directory.
Execute:
mkdir -p /opt/tmp
cp -pR /tmp/sbr/solaris/* /opt/tmp/
Extract the SBR Carrier installation package.
For 64-bit Solaris, execute:
cd /tmp/sbr
ls -ltr
total 216240
-rw-r--r--   1 root     root     110712276 Aug 25 09:44 sbr-cl-8.6.0.R-1.sparcv9.tgz
Execute:
gunzip sbr-cl-8.6.0.R-1.sparcv9.tgz
tar xf sbr-cl-8.6.0.R-1.sparcv9.tar
Verify that the extraction worked and confirm the name of the package file.
For 64-bit Solaris, execute:
ls -ltr
total 216256
drwxr-xr-x   4 Xtreece  other          370 Aug 24 17:01 JNPRsbr.pkg
-rw-r--r--   1 root     root     110712276 Aug 25 09:44 sbr-cl-8.6.0.R-1.sparcv9.tar
Install the package.
Execute:
pkgadd -d /tmp/sbr
The following packages are available:
  1  JNPRsbr.pkg     JNPRsbr - Juniper Networks Steel-Belted Radius (Carrier Cluster Edition)
                     (sparc) 8.60.50006
Select package(s) you wish to process (or 'all' to process all packages). (default: all) [?,??,q]: all
Type all and press Enter.
The script resumes.
Processing package instance <JNPRsbr.pkg> from </tmp>
Confirm the installation directory.
Depending on the system configuration, the script prompts you to create the /opt/JNPRsbr directory if it does not exist, to overwrite an already extracted package, or with any of several other questions.
The selected base directory </opt/JNPRsbr> must exist before installation is attempted.
Do you want this directory created now [y,n,?,q]
Answer the question appropriately (or change the extraction path if necessary) so that the script can proceed.
To accept the default directory as a target, enter y.
The script resumes.
Using </opt/JNPRsbr> as the package base directory.
## Processing package information.
## Processing system information.
   48 package pathnames are already properly installed.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user permission during the process of installing this package.
Do you want to continue with the installation of <JNPRsbr> [y,n,?]
Enter y to confirm that you want to continue to install the package.
Installing JNPRsbr - Juniper Networks Steel-Belted Radius (Carrier Cluster Edition) as <JNPRsbr>
## Executing preinstall script.
## Installing part 1 of 1.
. . .
[ verifying class <none> ]
## Executing postinstall script.
Newly installed server directory will be backed up as:
/opt/JNPRsbr/radius/install/backups/2009:03:31-00:34:06
Installation of <JNPRsbr> was successful.
Configuring the SBR Carrier Software on the New Management Node
Before starting this procedure, review Before You Install Software. In particular, review requirements for Setting Up External Database Connectivity (Optional) and Installing the SIGTRAN Interface (Optional), because steps in this procedure require the server to be preconfigured for these capabilities.
To configure the software on the new management node:
As root, navigate to the radius/install subdirectory of the directory where you installed the Steel-Belted Radius Carrier package in Installing the SBR Carrier Software on the New Management Node Host Machine.
Example: cd /opt/JNPRsbr/radius/install
Run the configure script.
Execute:
./configure
Review and accept the Steel-Belted Radius Carrier license agreement.
Press the spacebar to move from one page to the next. When you are prompted to accept the terms of the license agreement, enter y.
Do you accept the terms in the license agreement? [n] y
From the menu of configuration tasks, enter 3 to specify Configure Cluster Node.
---------------------------------------------------------------------------
SBR 8.60.50006 cluster on SunOS 5.10 Generic_141444-09
node sbrha-8.carrier.spgma.juniper.net is not configured and processes are down,
needs to be configured
---------------------------------------------------------------------------
1. Unconfigure Cluster Node
   Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition
   Creates new or updates existing cluster definitions.
   Modifies the shared directory but does not modify this node.
3. Configure Cluster Node
   To be preceded by 'Generate Cluster Definition' on one node.
   Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server
   Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster
   Used to approximate a cluster using only this one machine.
   Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License
   Used to upgrade from restricted cluster to regular cluster.
   Removes database restriction on the number of concurrent sessions
   and enables the addition of an expansion kit license.
Enter the number of the desired configuration task or quit (2,q): 3
Specify the name of the cluster.
---------------------------------------------------------------------------
SBR 8.60.50006 cluster on SunOS 5.10 Generic_141444-09
node sbrha-8.carrier.spgma.juniper.net is not configured and processes are down,
needs to be configured
---------------------------------------------------------------------------
Configuring Cluster Node...
Enter SBR cluster name [sbrha]: cambridge
Reading shared configuration from /opt/JNPRshare/install/cambridge
Generating configuration files
Reviewing configuration files
/opt/JNPRsbr/radius/install/tmp/dbclusterndb.gen
View (v), accept (a), or reject (r) configuration files: a
Enter a to accept the modified cluster definition files and continue or v to view them.
Caution: Enter r to reject the files only if you made a serious error when providing information. We recommend that you not edit these files.
The configure script warns you that you are about to make irreversible changes and asks whether to apply the cluster definition to this node. Enter y to proceed.
WARNING: You are about to make irreversible changes to this node. Are you sure that you wish to continue? (y,n): y
Cleaning directories /opt/JNPRhadm
Applying configuration
Initializing Session State Register, please wait a few minutes...
Configuration complete
---------------------------------------------------------------------------
SBR 8.60.50006 cluster cambridge{0s,2sm,1m,2d} on SunOS 5.10 Generic_141444-09
node sbrha-8.carrier.spgma.juniper.net(s) is configured and processes are down,
may be reconfigured if desired
---------------------------------------------------------------------------
1. Unconfigure Cluster Node
   Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition
   Creates new or updates existing cluster definitions.
   Modifies the shared directory but does not modify this node.
3. Configure Cluster Node
   To be preceded by 'Generate Cluster Definition' on one node.
   Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server
   Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster
   Used to approximate a cluster using only this one machine.
   Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License
   Used to upgrade from restricted cluster to regular cluster.
   Removes database restriction on the number of concurrent sessions
   and enables the addition of an expansion kit license.
READY: last operation succeeded, node configured.
Enter the number of the desired configuration task or quit (4,q):
Note: The Expect package must be installed on RHEL 7.3 or later (see Linux for supported RHEL versions). SBR Carrier has been tested only with Expect-5.45. On Solaris 11.3.36.10.0 or later, the Expect package is installed by default (see Solaris for supported Solaris versions).
Enter q to quit.
Configuring Each Existing Node in the Cluster with the New Cluster Definition Files
At this point in the process, all nodes in the cluster have the new cluster definition files loaded. However, only the new management node has been configured with the new files. The existing nodes are still running with the old cluster definition files.
In this procedure, you log in to each existing node, stop the processes, run the configure script, and restart the processes. You must complete these steps on every existing node in the cluster. This example starts with the sm1 node.
In this procedure, you need to stop and restart each existing node one by one to apply the new cluster definition to each of the original cluster nodes. Do not operate on multiple nodes at the same time because that creates multiple faults that can stop the entire cluster.
Always review the recommended start and stop order and processes and plan out the order in which to perform the equivalent steps in your cluster. See When and How to Restart Session State Register Nodes, Hosts, and Clusters.
Log in to the first existing node (in this example, sm1) as root.
Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).
Example: cd /opt/JNPRsbr/radius
Stop the RADIUS process on the node you are configuring (required on each (s) and (sm) node). Execute:
./sbrd stop radius
./sbrd status
Stop the SSR process on the node you are configuring (required on each (sm), (m), and (d) node):
./sbrd stop ssr
Check the status of the node:
./sbrd status
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=10   @172.28.84.163  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0, Master)
id=11   @172.28.84.113  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1 (not connected, accepting connect from 172.28.84.36)
id=2    @172.28.84.166  (mysql-5.7.25 ndb-7.6.9)

[mysqld(API)]   4 node(s)
id=6 (not connected, accepting connect from 172.28.84.36)
id=7    @172.28.84.166  (mysql-5.7.25 ndb-7.6.9)
id=58   @172.28.84.166  (mysql-5.7.25 ndb-7.6.9)
id=59 (not connected, accepting connect from 172.28.84.36)
hadm@wrx07:~>
Verify that the node you are about to configure is not connected. In this example, the line for sm1 reads id=1 (not connected, accepting connect from 172.28.84.36), indicating that the sm1 node is stopped.
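That check can be scripted when reconfiguring many nodes. As an illustrative sketch (node_is_stopped is not an SBR command), this helper greps saved ndb_mgm output for a disconnected node ID:

```shell
# Hypothetical helper: return success only if the given node ID appears as
# "not connected" in a saved copy of `ndb_mgm -e show` output, e.g.
#   ndb_mgm -e show > /tmp/status.txt && node_is_stopped 1 /tmp/status.txt
node_is_stopped() {
  id="$1"          # numeric node ID as shown by ndb_mgm
  statusfile="$2"  # file holding the captured status output
  grep -q "id=$id (not connected" "$statusfile"
}
```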
Run the configure script.
Execute:
./configure
Configuring SBR Software
---------------------------------------------------------------------------
SBR 8.60.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09
node sbrha-4(sm) is configured and processes are down, may be reconfigured if desired
---------------------------------------------------------------------------
1. Unconfigure Cluster Node
   Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition
   Creates new or updates existing cluster definitions.
   Modifies the shared directory but does not modify this node.
3. Configure Cluster Node
   To be preceded by 'Generate Cluster Definition' on one node.
   Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server
   Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster
   Used to approximate a cluster using only this one machine.
   Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License
   Used to upgrade from restricted cluster to regular cluster.
   Removes database restriction on the number of concurrent sessions
   and enables the addition of an expansion kit license.
Enter the number of the desired configuration task or quit (4,q):
From the menu of configuration tasks, enter 3 to specify Configure Cluster Node.
---------------------------------------------------------------------------
SBR 8.60.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09
node sbrha-4(sm) is configured and processes are down, may be reconfigured if desired
---------------------------------------------------------------------------
Configuring Cluster Node...
Enter SBR cluster name [cambridge]:
Press Enter to accept the existing cluster name (in the example: cambridge).
Create (c) new or update (u) existing node configuration? [u]:
Enter u to update the existing node configuration.
Reading shared configuration from /opt/JNPRshare/install/cambridge
Generating configuration files
Reviewing configuration files
/opt/JNPRsbr/radius/install/tmp/config.ini
/opt/JNPRsbr/radius/install/tmp/my.cnf
/opt/JNPRsbr/radius/install/tmp/dbclusterndb.gen
View (v), accept (a), or reject (r) configuration files:
Enter a to accept the new cluster definition files.
WARNING: You are about to make irreversible changes to this node. Are you sure that you wish to continue? (y,n): y
The configure script warns you that you are about to make irreversible changes and asks whether to apply the new cluster definition to the node.
Enter y to continue.
Applying configuration
---------------------------------------------------------------------------
SBR 8.60.50006 cluster cambridge{0s,2sm,1m,2d} on SunOS 5.10 Generic_141444-09
node sbrha-4(sm) is configured and processes are down, may be reconfigured if desired
---------------------------------------------------------------------------
1. Unconfigure Cluster Node
   Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition
   Creates new or updates existing cluster definitions.
   Modifies the shared directory but does not modify this node.
3. Configure Cluster Node
   To be preceded by 'Generate Cluster Definition' on one node.
   Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server
   Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster
   Used to approximate a cluster using only this one machine.
   Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License
   Used to upgrade from restricted cluster to regular cluster.
   Removes database restriction on the number of concurrent sessions
   and enables the addition of an expansion kit license.
READY: last operation succeeded, node configured.
Enter the number of the desired configuration task or quit (4,q):
Enter q to quit.
root@sbrha-4:/opt/JNPRsbr/radius/install>
Notice that the first line in the applied configuration is: SBR 8.60.50006 cluster cambridge{0s,2sm,1m,2d}, indicating that the new configuration has been applied to the sm1 node.
Restart the SSR process on the newly configured sm1 node:
Execute:
./sbrd start ssr
./sbrd status
Examine each line and ensure that the SSR process is running without error.
Restart the RADIUS process on sm1:
Execute:
./sbrd start radius
./sbrd status
Examine each line and ensure that the RADIUS process is running without error.
Repeat this procedure on the sm2 node and then on each data node, one at a time. For the data nodes, you do not need to stop and restart the RADIUS process because data nodes run only the SSR process.
Do not operate on multiple nodes at once. Doing so creates multiple faults that can stop the entire cluster. Complete the procedure on each node one at a time until the node is operating without error.
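The per-node rolling order described above can be summarized as a dry-run sketch. The rolling_sequence helper is hypothetical and only echoes the commands so the sequencing can be inspected; node types follow this section's designations (sm, s, m, d), and only (s) and (sm) nodes get the RADIUS stop/start steps.

```shell
# Hypothetical dry-run: print the per-node rolling sequence without
# executing anything. Usage: rolling_sequence <node_name> <node_type>
rolling_sequence() {
  node="$1"; type="$2"
  # RADIUS runs only on (s) and (sm) nodes
  case "$type" in s|sm) echo "$node: ./sbrd stop radius" ;; esac
  echo "$node: ./sbrd stop ssr"
  echo "$node: ./configure   # option 3, Configure Cluster Node"
  echo "$node: ./sbrd start ssr"
  case "$type" in s|sm) echo "$node: ./sbrd start radius" ;; esac
}

rolling_sequence sm1 sm   # five steps, RADIUS stopped first and started last
rolling_sequence d1 d     # three steps, SSR only
```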
Starting the New Management Node
At this point in the process, the original nodes in the cluster (sm1, sm2, d1, and d2) are up and running with the new cluster definition files. The new management node is configured with the proper configuration, but is not yet running in the cluster. The following procedure starts the new management node in the cluster.
Start the ssr process on the new management node:
Log in as root to the management (m) node.
Change directories to /opt/JNPRsbr/radius/.
Execute:
./sbrd start ssr
./sbrd status
Examine each line of the final cluster configuration and ensure that it is running without error.
Now that the new management node is started and running in the cluster, configure it using Web GUI. See Basic SBR Carrier Node Configuration. For complete details, see the SBR Carrier Administration and Configuration Guide.
Running CreateDB.sh on the New Management Node
At this point, the new management node is up and running in the cluster. You run the CreateDB.sh script to create certain files that are required to run the administrative shell scripts used to administer the session database in the cluster.
Run the CreateDB.sh script on the new management node.
Log in as hadm.
Navigate to the hadm user's home directory, /opt/JNPRhadm/ by default.
Execute:
CreateDB.sh
The addition of the new management node is complete.