Adding a Data Expansion Kit to an Existing Cluster
Adding the two new data nodes in the Data Expansion Kit to an existing cluster requires deleting and re-creating the session database for the cluster after all the data nodes are up and running.
Because the process of updating the existing cluster topology and re-creating the session database may result in a longer downtime than desired, there are two approaches you can take to minimize the downtime:
Use a transition server (temporary cluster)—One of the sm nodes is borrowed from the existing cluster and converted to a transition server operating as a temporary cluster. All traffic is routed to the transition server while the existing cluster is updated to include the new data nodes. After the updated cluster is up and running with the new data nodes, traffic is switched back to it, and the transition server is unconfigured as a temporary cluster, reconfigured as an sm node, and re-incorporated into the updated cluster. We use this method in this example procedure. See Using a Transition Server When Adding Data Nodes to an Existing Cluster.
Non-transition server—This approach results in longer downtime: the session database is destroyed, and the entire cluster is updated and reconfigured with the new topology before the database is re-created. See Non-Transition Server Method—Terminating Connections.
Although both of these methods minimize downtime as much as possible, they both require the cluster to be reinitialized, which necessitates destroying and re-creating the session database. The difference between the two approaches is that using a transition server allows SBR Carrier traffic to be processed while the remaining cluster is updated; this is not possible with the non-transition server method.
Requirements for Selecting a Transition Server in Your Environment
Use the following criteria to select the transition (temporary) server:
The server must meet all the Release 8.5.0 hardware and software requirements listed in Before You Install Software.
If the server is part of an existing cluster:
We recommend using the most powerful server available (the most RAM and the greatest number of processors) because it will be processing a heavier-than-normal load during the transition.
We recommend using an SBR or management node, rather than a data node, both to reduce front-end processing on the existing cluster and to maintain data redundancy.
If you intend the server to act as the transition server and then rejoin the updated cluster when it is reconfigured, it must be a combined SBR/management node host.
If you use Centralized Configuration Management to replicate SBR Carrier node configurations among a group of like nodes, the transition server cannot take on the role of primary CCM server in the updated cluster because it will not be the first SBR node to be configured.
Using a Transition Server When Adding Data Nodes to an Existing Cluster
In general, to use a transition server to add a Data Expansion Kit to an existing cluster, you perform the following tasks (a condensed command-level sketch follows this list):
Create the transition server and switch all traffic to it.
Create the updated cluster definition files that include the two new data nodes.
Install the SBR Carrier software on the host machines for the new data nodes.
See Installing the SBR Carrier Software on the Two New Data Node Host Machines.
Distribute the new cluster definition files to the existing cluster nodes and the new data nodes.
See Distributing the Updated Cluster Definition Files to the Existing Nodes.
Destroy the session database on the existing cluster.
See Destroying the Session Database on the Original Cluster.
Configure each node in the expanded cluster with the updated cluster definition files.
See Configuring the Nodes in the Expanded Cluster with the Updated Cluster Definition Files.
Create the session database and IP pools for the expanded cluster.
See Creating the Session Database and IP Pools on the Expanded Cluster.
Switch the traffic back to the updated, expanded cluster.
Unconfigure the transition server, rebuild it, and reincorporate it into the expanded cluster.
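The following condensed, command-level sketch summarizes the workflow. It is an outline only, not a substitute for the detailed procedures below; it assumes the default /opt/JNPRsbr install path and uses the sbrd and configure commands shown later in this chapter.
# All commands run as root from /opt/JNPRsbr/radius unless noted otherwise.
# 1. On the node being borrowed as the transition server (sm2):
./sbrd stop radius
./sbrd stop ssr
( cd install && ./configure )   # menu option 5: Create Temporary Cluster
./sbrd start ssr                # then, as hadm: ./CreateDB.sh and add IP pools
./sbrd start radius             # route all RADIUS traffic to this node
# 2. On sm1: generate the updated cluster definition and distribute it:
( cd install && ./configure )   # menu option 2: Generate Cluster Definition
#    copy configure.<cluster>.tar to every other node, then, as hadm:
#    ./DestroyDB.sh
./sbrd stop cluster
# 3. On every node of the expanded cluster except the transition server:
( cd install && ./configure )   # menu option 3: Configure Cluster Node
./sbrd clean                    # original nodes only
./sbrd start ssr                # sm node first, then each data node
#    then, as hadm on sm1: ./CreateDB.sh and re-create the IP pools
./sbrd start radius             # on sm1; switch traffic back, then rebuild sm2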
Existing Cluster Configuration for This Example Procedure
The following procedure adds one Data Expansion Kit to an existing cluster.
The following designations are used throughout the examples in this section:
sm = Hardware has both an SBR node and a management node.
s = Hardware has only an SBR node.
m = Hardware has only a management node.
d = Hardware has a data node.
2sm, 2d = Two SBR/management nodes and two data nodes.
2s, 2sm, 2d = Two SBR nodes, two SBR/management nodes, and two data nodes.
Display the existing cluster:
Cluster Configuration --------------------- [ndbd(NDB)] 2 node(s) id=10 @172.28.84.163 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0, Master) id=11 @172.28.84.113 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s) id=1 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=2 @172.28.84.166 (mysql-5.7.18 ndb-7.5.6)
[mysqld(API)] 5 node(s) id=6 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=7 @172.28.84.166 (mysql-5.7.18 ndb-7.5.6) id=58 @172.28.84.166 (mysql-5.7.18 ndb-7.5.6) id=59 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6)
The existing cluster includes two full SBR Carrier licenses and a license for the Starter Kit, resulting in a configuration that includes two sm nodes and two d nodes. For the purposes of this procedure, the two existing sm nodes are identified as sm1 and sm2 as follows:
id=1 @172.28.84.36 = sm1
id=2 @172.28.84.166 = sm2
In this example, we borrow the sm2 node, and convert it to a transition server operating as a temporary cluster.
Creating the Transition Server
To set up the transition server to temporarily take the place of the existing cluster, you need to prepare the server, install the software, and configure the database.
The SBR Carrier temporary cluster, also termed the transition server, is an exceptional node in the sense that it runs all cluster processes on one machine. The transition server is assigned the node type smdt. As on a full cluster, CreateDB.sh must be run and the IP pool(s) must be configured manually on the transition server.
Stopping the Processes on the Target Transition Server
Log in to the server that you are reconfiguring to act as the transition server (in this example sm2) as root.
Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).
Example: cd /opt/JNPRsbr/radius
Stop the RADIUS and SSR processes on the node.
Execute:
./sbrd stop radius
./sbrd stop ssr
Check the status of the node and confirm that it is not connected to the cluster:
./sbrd status
Cluster Configuration --------------------- [ndbd(NDB)] 2 node(s) id=10 @172.28.84.163 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0, Master) id=11 @172.28.84.113 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s) id=1 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=2 (not connected, accepting connect from 172.28.84.166)
[mysqld(API)] 4 node(s) id=6 (not connected, accepting connect from 172.28.84.36) id=7 @172.28.84.166 (mysql-5.7.18 ndb-7.5.6) id=58 @172.28.84.166 (mysql-5.7.18 ndb-7.5.6) id=59 (not connected, accepting connect from 172.28.84.36)
hadm@wrx07:~>
In this example, notice that the sm2 node is not connected as indicated by
id=2 (not connected, accepting connect from 172.28.84.166).
Configuring the Software on the Transition Server as a Temporary Cluster
Now that the processes are stopped on the machine we are reconfiguring as the transition server, we need to reconfigure it as a temporary cluster. At this point, you are still logged in to the target machine as root (in this case the original sm2 node).
Execute the configure script to reconfigure the machine as a temporary cluster:
Execute:
./configure
Example:
root@wrx07:/opt/JNPRsbr/radius/install> ./configure
Configuring SBR Software
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09 node wrx07(sm) is not configured and processes are down, needs to be configured ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by ’Generate Cluster Definition’ on any node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
Enter the number of the desired configuration task or quit (2,q):
From the menu of configuration tasks, enter 5 to specify Create Temporary Cluster.
Creating Temporary Cluster...
Enter SBR cluster name [wrx07]:
Enter the exact name of the existing cluster. In this example: cambridge.
In order to avoid service outages when performing certain major cluster maintenance tasks, you are allowed to reuse each of your licenses in order to create a temporary cluster that consists of 1 SBR node, 1 management node, and 1 data node all installed on the same machine. Note that this is not a true cluster since it is vulnerable to single points of failure.
Enter the SSR Starter Kit license number, the license number for one SBR node, and, if you are using one of the optional SBR Carrier modules, the license number for it.
While migrating to the updated cluster, you can use the same licenses for the transition server as for the updated cluster.
Enter Starter Kit license: 1770 0004 0112 0202 2747 5761 Enter SBR licenses meant only for this particular SBR node. Enter one license per line and an empty line when finished. Enter SBR full license: 1750 0006 0012 0001 0050 0167 8140 Enter SBR feature license:
Enter passwords for two internal accounts. The password input is not echoed to the screen; the fields appear to be blank.
All cluster nodes will share the same Session State Register (SSR). Setting password for SSR admin account hadmsql Password: Again: Setting password for SSR software account hadmsbr Password: Again:
Generating configuration files
Reviewing configuration files /opt/JNPRsbr/radius/install/tmp/config.ini /opt/JNPRsbr/radius/install/tmp/my.cnf /opt/JNPRsbr/radius/install/tmp/dbclusterndb.gen View (v), accept (a), or reject (r) configuration files:
The system generates the required configuration files and prompts you to view, accept, or reject them.
Enter a to accept them and continue or v to view them.
Caution We recommend that you enter an r to reject them only if a serious error was made when you provided information. We recommend that you not edit these files.
WARNING: You are about to make irreversible changes to this node. Are you sure that you wish to continue? (y,n):
You are prompted with a warning whether or not to apply the changes.
Enter y to continue.
For the remainder of the prompts, simply press Enter to configure the transition server with the existing configuration.
Cleaning directories /opt/JNPRhadm /opt/JNPRmysql /opt/JNPRmysqld /opt/JNPRndb_mgmd /opt/JNPRndbd
Applying configuration
Initializing Session State Register, please wait a few minutes...
Configuring for use with generic database Do you want to configure Java Runtime Environment for JDBC Feature [n] : Do you want to enable "Radius WatchDog" Process? [n]: Do you want to enable LCI? [n]: Do you want to configure for use with Oracle? [n]: Removing oracle references from startup script Do you want to configure for use with SIGTRAN? [n]: Removing SIGTRAN references from startup script Do you want to configure SNMP? [n]: Configuring Admin GUI Webserver Compatible Java version found : Do you want to install custom SSL certificate for Admin WebServer? [n]: Enable (e), disable (d), or preserve (p) autoboot scripts [e]:
The SBR Admin Web GUI can be launched using the following URL: https://<servername>:2909
Configuration complete
--------------------------------------------------------------------------- SBR 8.50.50006 temporary cluster cambridge on SunOS 5.10 Generic_141444-09 node wrx07(smdt) is configured and processes are down, may be reconfigured if desired ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by 'Generate Cluster Definition' on one node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
READY: last operation succeeded, created temporary cluster. Enter the number of the desired configuration task or quit (4,q):
Enter q to quit.
Notice the server configuration in the line:
SBR 8.50.50006 temporary cluster cambridge on SunOS 5.10 Generic_141444-09 node wrx07(smdt)
(smdt) indicates the machine is configured as an s,m,d temporary cluster.
Configuring and Starting the Transition Server
Now that the software is configured, you need to create the session database and the IP pools and ranges on the transition server. All cluster traffic will ultimately be switched to this single transition server temporarily, while you take the other nodes in the existing cluster down and upgrade and reconfigure them. So, you need to configure the temporary transition server to match the existing cluster configuration.
Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default) and start the SSR process on the transition server.
Example: cd /opt/JNPRsbr/radius
As root, execute:
./sbrd start ssr
Status messages are displayed as the programs start:
Starting ssr management processes Starting ssr auxiliary processes Starting ssr data processes
Verify the process started without error:
As root, execute:
./sbrd status
Create the session database.
If you need to customize the sessions database to match your existing cluster session database, see Customizing the SSR Database Current Sessions Table. Any customization must be done prior to running the CreateDB.sh script.
Log in as hadm.
Navigate to the hadm user's home directory, /opt/JNPRhadm/ by default.
Execute:
./CreateDB.sh
As hadm, set up IP address pools and ranges using the SSR Administration Scripts. The IP address range should be separate from the in-use pools on the existing and upgraded cluster to avoid overlaps. If the old and transitional pools overlap, then during the transition the two clusters may give the same IP address to two different users. See the section Session State Register Administration in the SBR Carrier Administration and Configuration Guide for more information.
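For example, the range planning might look like the following sketch, run as hadm on the transition server. The addresses are illustrative only, and no pool-administration script invocation is shown because the script names vary by release; use the SSR administration scripts in the hadm home directory as described in the Session State Register Administration section.
su - hadm
cd /opt/JNPRhadm
ls *.sh    # the SSR administration scripts live in the hadm home directory
# The transition server's pool must not overlap the production pools.
# Example of disjoint ranges (addresses are illustrative only):
#   existing cluster pool   : 10.10.0.1 - 10.10.255.254
#   transition server pool  : 10.20.0.1 - 10.20.0.254
# Add the transition pool and range with the administration scripts for
# your release, then retire it when traffic is switched back.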
Start the RADIUS process:
As root, execute:
./sbrd start radius
Status messages are displayed as the programs start:
Starting radius server processes RADIUS: Process ID of daemon is 13224 RADIUS: Starting DCF system RADIUS: Configuration checksum: 2D D6 38 1D RADIUS started . . . RADIUS: DCF system started
Verify the process started without error:
As root, execute:
./sbrd status
Finish configuring the transition server using the Web GUI. Follow the steps outlined in Basic SBR Carrier Node Configuration. For complete details, see the SBR Carrier Administration and Configuration Guide.
Switching Traffic to the Transition Server
After the transition server is set up and tested, and a working database created, reconfigure the site’s routers to gradually direct traffic to the transition server instead of to the existing cluster’s SBR servers.
Creating the Updated Cluster Definition Files
The next phase of the process is to create the new cluster definition files to include the two new data nodes from the Data Expansion Kit. At this point in the process the existing cluster configuration shows the sm2 node processes are not running and not connected, as indicated by id=2 (not connected, accepting connect from 172.28.84.166):
Cluster Configuration --------------------- [ndbd(NDB)] 2 node(s) id=10 @172.28.84.163 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0, Master) id=11 @172.28.84.113 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s) id=1 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=2 (not connected, accepting connect from 172.28.84.166)
[mysqld(API)] 5 node(s) id=6 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=7 (not connected, accepting connect from 172.28.84.166) id=58 (not connected, accepting connect from 172.28.84.166) id=59 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6)
Start by creating the updated cluster definition files on the sm1 node:
As root, on the sm1 node, navigate to the radius/install subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).
Example: cd /opt/JNPRsbr/radius/install
Run the configure script:
Execute:
./configure
Example:
root@sbrha-4:/opt/JNPRsbr/radius/install> ./configure
Configuring SBR Software
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09 node sbrha-4(sm) is CONFIGURED and processes are UP, may be stopped if reconfigured ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by 'Generate Cluster Definition' on one node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
Enter the number of the desired configuration task or quit (4,q):
From the menu of configuration tasks, enter 2 to specify Generate Cluster Definition.
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09 node sbrha-4(sm) is CONFIGURED and processes are UP, may be stopped if reconfigured ---------------------------------------------------------------------------
Generating Cluster Definition...
Enter SBR cluster name [cambridge]:
You are prompted to enter the name of the cluster.
Press Enter to use the current cluster name.
You are prompted either to create a new cluster or update an existing cluster definition.
Create (c) new or update (u) existing cluster definition? [u]:
Enter u to update the existing cluster definition.
The SBR Cluster Starter Kit license allows you to create a minimal cluster of 2 SBR nodes, 2 management nodes, and 2 data nodes. When each node is installed on a separate machine the cluster topology is denoted as {2s,2m,2d}. When SBR nodes are paired with management nodes on the same machines the cluster topology is denoted as {2sm,2d}.
An optional SBR Cluster Management Expansion Kit allows you to add a third management node for {2sm,1m,2d} and an optional Data Expansion Kit allows you to add 2 more data nodes for {2sm,1m,4d} clusters. Additional SBR licenses allow you to add up to 18 more SBR nodes to obtain a maximal cluster {18s,2sm,1m,4d} and/or enable extra features.
While it is not difficult to add management and/or SBR nodes to an existing cluster, adding data nodes is more difficult and may require you to shutdown the entire cluster as opposed to a rolling restart.
Another license is required if you wish to add a third management node. Adding a third management node will require a rolling restart later. Enter Management Expansion Kit license, if any:
Because we are not adding a Management Expansion Kit, press Enter to skip adding the license.
Another license is required if you wish to add more data nodes. Adding data nodes may require you to shutdown the entire cluster. Enter Data Expansion Kit license, if any: 1770 0002 0112 0002 4439 9250
Enter the license number for the Data Expansion Kit and press Enter.
This cluster presently contains 2 of 20 possible SBR nodes. Adding more SBR nodes will require a rolling restart later. Enter number of SBR nodes to be added [0]:
When prompted to enter the number of SBR nodes, press Enter to keep the existing configuration.
Updating cluster cambridge{0s,2sm,0m,4d} will require 2 new machines. Do you wish to continue? [y]:
Notice the updated cluster configuration includes four data nodes as indicated by: Updating cluster cambridge{0s,2sm,0m,4d}.
Enter y to continue.
When prompted, enter the node names and IP addresses for the two new data nodes.
Press Enter when prompted to Enter node type (d) [d]: and when prompted to Enter DATA node ID.
Information will now be gathered for each new machine to be added. You will have a chance to review all information at least once before any machines are modified.
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,4d} on SunOS 5.10 Generic_141444-09 node sbrha-4(sm) Partial configuration at present is {0s,2sm,0m,2d} of {0s,2sm,0m,4d} --------------------------------------------------------------------------- IMPORTANT: node names must be entered as reported by 'uname -n'.
Enter node name [cambridge-6]: sbrha-8.carrier.spgma.juniper.net Enter node type (d) [d]: Enter DATA node ID (10-29) [12]: Enter DATA node IP address by which it is known to management nodes. Enter DATA node IP address: 172.28.84.178 ---------------------------------------------------------------------------
SBR 8.50.50006 cluster cambridge{0s,2sm,0m,4d} on SunOS 5.10 Generic_141444-09 node sbrha-4(sm) Partial configuration at present is {0s,2sm,0m,3d} of {0s,2sm,0m,4d} --------------------------------------------------------------------------- IMPORTANT: node names must be entered as reported by 'uname -n'. Enter node name [cambridge-7]: sbrha-2.spgma.juniper.net Enter node type (d) [d]: Enter DATA node ID (10-29) [13]: Enter DATA node IP address by which it is known to management nodes. Enter DATA node IP address: 172.28.84.104
The system generates the updated cluster definition files.
Verify the proper configuration by examining the line: Generated configuration is {0s,2sm,0m,4d} of {0s,2sm,0m,4d} showing the four data nodes.
When prompted, enter a to accept the updated configuration.
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,4d} on SunOS 5.10 Generic_141444-09 node sbrha-4(sm) Generated configuration is {0s,2sm,0m,4d} of {0s,2sm,0m,4d} ---------------------------------------------------------------------------
Generating configuration files
Reviewing configuration files /opt/JNPRsbr/radius/install/tmp/config.ini /opt/JNPRsbr/radius/install/tmp/my.cnf /opt/JNPRsbr/radius/install/tmp/dbclusterndb.gen View (v), accept (a), or reject (r) configuration files: a Writing shared configuration to /opt/JNPRshare/install/cambridge
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09 node sbrha-4(sm) is CONFIGURED and processes are UP, may be stopped if reconfigured ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by 'Generate Cluster Definition' on one node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
READY: last operation succeeded, generated cluster definition. Enter the number of the desired configuration task or quit (4,q):
When the main configuration menu is displayed, enter q to quit.
root@sbrha-4:/opt/JNPRsbr/radius/install>
Installing the SBR Carrier Software on the Two New Data Node Host Machines
At this point in the process, the updated cluster definition files have been generated and reside on the sm1 node only. Next you need to install the SBR Carrier software on each of the machines that you want to host the two new data nodes. After the SBR Carrier software is installed on these machines, you distribute the updated cluster definition files to all the other nodes in the original cluster.
This procedure describes how to unpack and install the SBR Carrier software on the host machine for the new data nodes.
Log in to the machine as root.
Copy the Steel-Belted Radius Carrier installation files from their download location to the machine. Make sure to copy them to a local or remote hard disk partition that is readable by root.
This example copies the files from a download directory to the /tmp/sbr directory.
Execute:
mkdir -p /opt/tmp
cp -pR /tmp/sbr/solaris/* /opt/tmp/
Extract the SBR Carrier installation package.
For 64-bit Solaris, execute:
cd /tmp/sbr
ls -ltr
total 216240 -rw-r--r-- 1 root root 110712276 Aug 25 09:44 sbr-cl-8.5.0.R-1.sparcv9.tgz
Execute:
gunzip sbr-cl-8.5.0.R-1.sparcv9.tgz
tar xf sbr-cl-8.5.0.R-1.sparcv9.tar
Verify that the extraction worked and confirm the name of the package file.
For 64-bit Solaris, execute:
ls -ltr
total 216256 drwxr-xr-x 4 Xtreece other 370 Aug 24 17:01 JNPRsbr.pkg -rw-r--r-- 1 root root 110712276 Aug 25 09:44 sbr-cl-8.5.0.R-1.sparcv9.tar
Install the package.
Execute:
pkgadd -d /tmp/sbr
The following packages are available: 1 JNPRsbr.pkg JNPRsbr - Juniper Networks Steel-Belted Radius (Carrier Cluster Edition) (sparc) 8.50.50006
Select package(s) you wish to process (or 'all' to process all packages).
(default: all) [?,??,q]:
Type all and press Enter.
The script resumes.
Processing package instance <JNPRsbr.pkg> from </tmp>
Confirm the installation directory.
Depending on the system configuration, you are prompted whether to create the /opt/JNPRsbr directory if it does not exist, to over-write an already extracted package, or any of several other questions.
The selected base directory </opt/JNPRsbr> must exist before installation is attempted.
Do you want this directory created now [y,n,?,q]
Answer the question appropriately (or change the extraction path if necessary) so that the script can proceed.
To accept the default directory as a target, enter y.
The script resumes.
Using </opt/JNPRsbr> as the package base directory. #Processing package information. #Processing system information. 48 package pathnames are already properly installed. #Verifying disk space requirements. #Checking for conflicts with packages already installed. #Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user permission during the process of installing this package.
Do you want to continue with the installation of <JNPRsbr> [y,n,?]
Enter y to confirm that you want to continue to install the package.
Installing JNPRsbr - Juniper Networks Steel-Belted Radius (Carrier Cluster Edition) as <JNPRsbr>
## Executing preinstall script. ## Installing part 1 of 1. . . . [ verifying class <none> ] ## Executing postinstall script. Newly installed server directory will be backed up as: /opt/JNPRsbr/radius/install/backups/2009:03:31-00:34:06
Installation of <JNPRsbr> was successful.
Repeat this process on the second new data node.
Distributing the Updated Cluster Definition Files to the Existing Nodes
Now that the two machines hosting the new data nodes have the SBR Carrier software installed, you can distribute the updated cluster definition files to the new nodes and the other nodes in the original cluster.
On both the existing nodes and the new data nodes in the original cluster, create a copy of the new cluster definition files. This process does not invoke the updated cluster definition files, but makes them available to the configure script later in the workflow.
To distribute the new cluster definition files (a consolidated example follows these steps):
Log in to each node (existing and new) as hadm.
Change directories to the install directory.
(On new nodes, the entire path may not exist because the <cluster name> portion of the path was not created when you prepared the new machine, so you may need to create it.) See Creating Share Directories.
Execute:
cd /opt/JNPRshare/install/<cluster_name>
For example:
cd /opt/JNPRshare/install/cambridge
Use FTP binary mode to connect to the node host (in this example, sm1) where you created the new cluster definition files.
Execute the get command to transfer the configure.<cluster_name>.tar file.
For example:
bin
get /opt/JNPRsbr/radius/install/configure.cambridge.tar
In a terminal window, extract the new cluster definition files from the archive.
Execute:
tar xvf configure.<cluster_name>.tar
Output similar to this example is displayed:
$ tar xvf configure.MyCluster.tar x dbcluster.rc, 1925 bytes, 4 tape blocks x config.ini, 2435 bytes, 5 tape blocks x my.cnf, 1017 bytes, 2 tape blocks x dbclusterndb.gen, 33474 bytes, 66 tape blocks x dbcluster.dat, 921 bytes, 2 tape blocks
Repeat these steps until every node in the cluster has a copy of the new cluster definition files.
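For example, assuming the cluster name cambridge and that the sm1 host is reachable by the host name sbrha-4 (substitute your own host name), the transfer and extraction on one node might look like this; the mkdir is needed only if the cluster directory does not already exist:
# Run as hadm on the node receiving the files.
mkdir -p /opt/JNPRshare/install/cambridge   # only if the directory is missing
cd /opt/JNPRshare/install/cambridge
ftp sbrha-4                                 # sm1 host name is an example
ftp> bin
ftp> get /opt/JNPRsbr/radius/install/configure.cambridge.tar
ftp> bye
tar xvf configure.cambridge.tar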
Destroying the Session Database on the Original Cluster
You now log in to the sm1 node, destroy the session database from the original cluster, and stop the original cluster.
Log in to sm1 as hadm.
Navigate to the hadm user's home directory, /opt/JNPRhadm by default.
Execute:
./DestroyDB.sh
hadm@sbrha-4:~> ./DestroyDB.sh SBRs must be offline; OK? <yes|no> yes This will destroy the "SteelBeltedRadius" database; OK? <yes|no> yes Really? <yes|no> yes
Each time you are prompted as to whether you really want to destroy the database, enter yes.
The system responds with:
Database "SteelBeltedRadius" destroyed.
Stop the original cluster by executing:
./sbrd stop cluster
hadm@sbrha-4:~> su Password: # bash root@sbrha-4:~> root@sbrha-4:~> /etc/init.d/sbrd stop cluster WARNING: This function is capable of stopping multiple nodes. Do not use this function if you intend to stop only one node. Do you intend to stop the entire cluster? (y,n): y Are you sure? (y,n): y Really? (y,n): y
Each time you are prompted as to whether you really want to stop the entire cluster, enter y.
The software stops the RADIUS processes first and then the SSR processes.
Stopping radius server processes waiting for radius 10 seconds elapsed, still waiting radius stopped Stopping ssr auxiliary processes Stopping ssr management processes Connected to Management Server at: 172.28.84.36:5235 Shutdown of NDB Cluster node(s) failed. * 1006: Illegal reply from server * root@sbrha-4:~>
On each remaining node of the original cluster, execute ./sbrd stop cluster and verify that the processes are stopped.
Perform this step on the remaining nodes in this order: s nodes, sm nodes, m nodes, d nodes.
Log in to each remaining node in the existing cluster as root.
Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).
Example: cd /opt/JNPRsbr/radius
Execute:
./sbrd stop cluster
Execute:
./sbrd status
Examine each line to ensure it says not connected.
Configuring the Nodes in the Expanded Cluster with the Updated Cluster Definition Files
To configure the nodes in the expanded cluster with the updated cluster definition files, you run the configure script on each node. First you run the script on the two new data nodes, then run it on the original nodes in the cluster (except for the sm2 node, which is still operating as the transition server).
Configuring the SBR Carrier Software on the New Data Nodes
Configure the software on each new data node:
As root, navigate to the directory where you installed the Steel-Belted Radius Carrier package in Installing the SBR Carrier Software on the Two New Data Node Host Machines.
Then, navigate to the radius/install subdirectory.
Example: cd /opt/JNPRsbr/radius/install
Run the configure script.
Execute:
./configure
# ./configure
Configuring SBR Software
The End User License Agreement is displayed. Review the Steel-Belted Radius Carrier license agreement.
Press the spacebar to move from one page to the next.
When you are prompted to accept the terms of the license agreement, enter y.
Do you accept the terms in the license agreement? [n] y
From the menu of configuration tasks, enter 3 to specify Configure Cluster Node.
--------------------------------------------------------------------------- SBR 8.50.50006 cluster on SunOS 5.10 Generic_141444-09 node sbrha-2.spgma.juniper.net is not configured and processes are down, needs to be configured ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by 'Generate Cluster Definition' on one node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
Enter the number of the desired configuration task or quit (2,q): 3
--------------------------------------------------------------------------- SBR 8.50.50006 cluster on SunOS 5.10 Generic_141444-09 node sbrha-2.spgma.juniper.net is not configured and processes are down, needs to be configured ---------------------------------------------------------------------------
Configuring Cluster Node...
Enter SBR cluster name [sbrha]: cambridge
Enter the exact name of the cluster and press Enter.
Reading shared configuration from /opt/JNPRshare/install/cambridge
Generating configuration files
Reviewing configuration files /opt/JNPRsbr/radius/install/tmp/config.ini /opt/JNPRsbr/radius/install/tmp/my.cnf View (v), accept (a), or reject (r) configuration files:
Enter a to accept the configuration.
WARNING: You are about to make irreversible changes to this node. Are you sure that you wish to continue? (y,n):
Enter y to continue.
Cleaning directories /opt/JNPRhadm
Applying configuration
Initializing Session State Register, please wait a few minutes...
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,4d} on SunOS 5.10 Generic_141444-09 node sbrha-2.spgma.juniper.net(d) is configured and processes are down, may be reconfigured if desired ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by 'Generate Cluster Definition' on one node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
READY: last operation succeeded, node configured. Enter the number of the desired configuration task or quit (2,q):
Enter q to quit.
Notice the line: node sbrha-2.spgma.juniper.net(d) is configured and processes are down, may be reconfigured if desired. This indicates the node name you assigned and that the node was configured without error. The processes remain down for now.
Log in to the next new data node and repeat this procedure.
Running the Configure Script on Each Node from the Original Cluster
At this point in the process, all nodes in the cluster have the new cluster definition files loaded. However, only the new data nodes have been configured with the new files.
In this step, you run the configure script on each node from the original cluster. This includes the sm1, d1, and d2 nodes. Running this script applies the updated cluster definition files to the nodes.
You do not run the script on the sm2 node, which is still operating as the transition server (temporary cluster).
Log in to the first existing node (in this example, sm1) as root.
Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed.
Example: cd /opt/JNPRsbr/radius/
Check the status of the node by executing:
./sbrd status
Examine the line for the node you are about to configure, and verify that it is not connected. In this example, the line for sm1 reads id=1 (not connected, accepting connect from 172.28.84.36), indicating that the sm1 node is stopped.
Navigate to the radius/install subdirectory of the directory where the JNPRsbr package was installed.
Example: cd /opt/JNPRsbr/radius/install
Run the configure script to apply the updated cluster definition files:
Execute:
./configure
root@sbrha-4:/opt/JNPRsbr/radius/install> ./configure
Configuring SBR Software
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09 node sbrha-4(sm) is configured and processes are down, may be reconfigured if desired ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by 'Generate Cluster Definition' on one node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
Enter the number of the desired configuration task or quit (4,q):
Enter 3 to specify Configure Cluster Node and press Enter.
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09 node sbrha-4(sm) is configured and processes are down, may be reconfigured if desired ---------------------------------------------------------------------------
Configuring Cluster Node...
Enter SBR cluster name [cambridge]:
Press Enter to accept the cluster name and continue.
You are prompted either to create a new or update an existing node configuration.
Create (c) new or update (u) existing node configuration? [u]:
Enter u to update the node with the updated cluster definition files.
Reading shared configuration from /opt/JNPRshare/install/cambridge
Generating configuration files
Reviewing configuration files /opt/JNPRsbr/radius/install/tmp/config.ini /opt/JNPRsbr/radius/install/tmp/my.cnf /opt/JNPRsbr/radius/install/tmp/dbclusterndb.gen View (v), accept (a), or reject (r) configuration files:
Enter a to accept the updated configuration.
WARNING: You are about to make irreversible changes to this node. Are you sure that you wish to continue? (y,n):
Enter y to continue.
Applying configuration
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,4d} on SunOS 5.10 Generic_141444-09 node sbrha-4(sm) is configured and processes are down, may be reconfigured if desired ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by 'Generate Cluster Definition' on one node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
READY: last operation succeeded, node configured. Enter the number of the desired configuration task or quit (4,q):
Notice the applied configuration includes the four data nodes as indicated by the line: SBR 8.50.50006 cluster cambridge{0s,2sm,0m,4d}.
Enter q to quit.
Log in to the remaining nodes from the original cluster (d1 and d2) and repeat this procedure.
Creating the Session Database and IP Pools on the Expanded Cluster
At this point in the process, all nodes in the expanded cluster have been configured with the updated cluster definition files, and all of these nodes are currently down. You now create the session database and IP pools and ranges for the expanded cluster. Before creating the new session database, we recommend that you run the clean command on the nodes from the original cluster (in this case, sm1, d1, and d2).
The sm2 node is still operating as the transition server (temporary cluster). Do not disrupt it in any way.
The following procedure describes how to run the clean command on sm1, d1, and d2, start the SSR process and create the session database and IP pools.
Cleaning the Original Nodes from the Cluster
Perform the following procedure on sm1, d1, and d2 only:
Log in to the first existing node (in this example, sm1) as root.
Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed.
Example: cd /opt/JNPRsbr/radius
Execute:
./sbrd clean
WARNING: Cleaning the SSR lock on this node may be destructive. Do not use this function unless you are attempting to start the entire cluster for the first time, or for recovery purposes. Clean the SSR lock on this node? (y,n): y Are you sure? (y,n): y Really? (y,n): y Cleaning SSR lock
Repeat this procedure on the d1 and d2 nodes.
Creating the Session Database and IP Pools
In this procedure, you create the session database and IP address pools for the expanded cluster. For details on performing these tasks, see the section on Session State Register Administration in the SBR Carrier Administration and Configuration Guide.
First you start the SSR process on each node in the expanded cluster, one at a time. The proper order is sm nodes, then m nodes, then d nodes. Because there are no m nodes in this example, start the SSR process in the following order: sm1, d1, d2, d3, and d4. For complete details on the proper order of starting and stopping nodes, see When and How to Restart Session State Register Nodes, Hosts, and Clusters.
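A minimal sketch of the start order, using the commands shown in this chapter (node labels are the ones used in this example):
# As root, from /opt/JNPRsbr/radius, on each node in this order:
# sm1 first, then each data node (d1, d2, d3, d4) one at a time.
./sbrd start ssr
./sbrd status      # confirm the node reports no errors before moving on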
Starting the SSR Processes on the Nodes in the Expanded Cluster
Log in to the first sm node (in this example, sm1) as root.
Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).
Example: cd /opt/JNPRsbr/radius
Start the SSR process:
./sbrd start ssr
Before moving on to the next node, verify the SSR process started without error by executing:
./sbrd status
Examine the status and ensure there are no errors.
Repeat this procedure on each of the data nodes (d1, d2, d3, and d4).
When you finish starting the SSR process on sm1, d1, d2, d3, and d4, the cluster configuration is as follows:
Cluster Configuration --------------------- [ndbd(NDB)] 4 node(s) id=10 @172.28.84.163 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0, Master) id=11 @172.28.84.113 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0) id=12 @172.28.84.178 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 1) id=13 @172.28.84.104 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 1)
[ndb_mgmd(MGM)] 2 node(s) id=1 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=2 (not connected, accepting connect from 172.28.84.166)
[mysqld(API)] 4 node(s) id=6 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=7 (not connected, accepting connect from 172.28.84.166) id=58 (not connected, accepting connect from 172.28.84.166) id=59 (not connected, accepting connect from 172.28.84.36)
The lines for node IDs 10, 11, 12, and 13 indicate the SSR processes started without error on the four data nodes.
The line id=1 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) indicates the SSR process started properly on the sm1 node.
Notice that the sm2 node still says it is not connected as indicated by the line: id=2 (not connected, accepting connect from 172.28.84.166). The sm2 node is still operating as the transition server.
Creating the Session Database and IP Address Pools
Now create the session database and IP pools and ranges on the sm1 node.
Log back in to the sm1 node as hadm.
Navigate to the hadm user's home directory, /opt/JNPRhadm/ by default.
Create the session database on the sm1 node.
If you need to customize the sessions database, see Customizing the SSR Database Current Sessions Table. Any customization must be done before running the CreateDB.sh script.
Execute:
./CreateDB.sh
As hadm, add the IP address pools and ranges.
For details on performing these tasks, see the section on Session State Register Administration in the SBR Carrier Administration and Configuration Guide.
Starting the RADIUS Process
Start the RADIUS process on the sm1 node.
Execute:
./sbrd start radius
Verify the process started without error:
./sbrd status
After the RADIUS process starts on the sm1 node, the cluster configuration is as follows:
Cluster Configuration --------------------- [ndbd(NDB)] 4 node(s) id=10 @172.28.84.163 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0, Master) id=11 @172.28.84.113 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0) id=12 @172.28.84.178 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 1) id=13 @172.28.84.104 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 1)
[ndb_mgmd(MGM)] 2 node(s) id=1 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=2 (not connected, accepting connect from 172.28.84.166)
[mysqld(API)] 4 node(s) id=6 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=7 (not connected, accepting connect from 172.28.84.166) id=58 (not connected, accepting connect from 172.28.84.166) id=59 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6)
The RADIUS process for the sm1 node has started properly as indicated by the line:
id=59 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6)
Notice that the sm2 node is the only node that still is not connected, as indicated by the lines:
id=2 (not connected, accepting connect from 172.28.84.166) id=7 (not connected, accepting connect from 172.28.84.166) id=58 (not connected, accepting connect from 172.28.84.166)
Now that the expanded cluster nodes sm1, d1, d2, d3, and d4 are all started and running without error, you can switch traffic back to the expanded cluster.
Removing the Transition Server from Service
After you bring the expanded cluster online, configure it, and test it, begin transferring live traffic to it and away from the transition server. When all traffic has been shifted to the new expanded cluster and the number of ongoing sessions managed by the transition server has reached a suitably low level, take the transition server offline. Some sessions are terminated, but they reconnect through the new cluster.
Unconfiguring and Rebuilding the Transition Server
To free the licenses used by the transition server (in this case, sm2), and clean up installed software, uninstall the SBR Carrier software. See Uninstalling Steel-Belted Radius Carrier Software.
Unconfiguring the Transition Server
Log in to the sm2 node as root.
Navigate to the radius subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).
Example: cd /opt/JNPRsbr/radius
Stop the RADIUS processes.
Execute:
./sbrd stop radius
Stop the SSR processes:
Execute:
./sbrd stop ssr
Check the status on the sm2 node to ensure the processes are stopped.
Execute:
./sbrd status
Navigate to the radius/install subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr/radius/install by default).
Run the unconfigure script:
Execute:
./unconfigure
root@wrx07:/opt/JNPRsbr/radius/install> ./unconfigure
Unconfiguring SBR Software
--------------------------------------------------------------------------- SBR 8.50.50006 temporary cluster cambridge on SunOS 5.10 Generic_141444-09 node wrx07(smdt) is configured and processes are down, may be reconfigured if desired ---------------------------------------------------------------------------
Unconfiguring Cluster Node...
WARNING: You are about to unconfigure this node. Are you sure that you wish to continue? (y,n):
At the warning message enter y to continue.
Cleaning directories /opt/JNPRhadm /opt/JNPRmysql /opt/JNPRmysqld /opt/JNPRndb_mgmd /opt/JNPRndbd
Locating shared directory... drwxrwxr-x 2 hadm hadmg 512 Apr 15 20:19 /opt/JNPRshare/install/cambridge
WARNING: If you remove the shared directory for this cluster, you will either have to recover the data from another cluster node or reconfigure the entire cluster again. This is neither necessary nor recommended if you are updating an existing configuration. Remove the shared directory for this cluster? [n]:
Press Enter to indicate that you do not want to remove the shared directory.
Locating OS user account and home directory... hadm:x:16663:65536::/opt/JNPRhadm:/bin/bash hadmg::65536: drwxrwx--- 2 hadm hadmg 1536 Apr 16 00:03 /opt/JNPRhadm
WARNING: If you remove the OS user account hadm you will have to recreate it, the associated OS group account hadmg, and the associated home directory /opt/JNPRhadm This is neither necessary nor recommended if you are updating an existing configuration. Remove the OS user account? [n]:
Press Enter to indicate that you do not want to remove the OS user account.
Unconfigured
root@wrx07:/opt/JNPRsbr/radius/install>
Retrieving the Updated Cluster Definition Files from SM1 Node
To retrieve the updated cluster definition files from the sm1 node:
Log in to the sm2 node as hadm.
Change directories to the install directory.
(On new nodes, the entire path may not exist because the <cluster name> portion of the path was not created when you prepared the new machine, so you may need to create it.) See Creating Share Directories.
Execute:
cd /opt/JNPRshare/install/<cluster_name>
For example:
cd /opt/JNPRshare/install/cambridge
Use FTP binary mode to connect to the node host (in this example, sm1) where you created the new cluster definition files.
Execute the get command to transfer the configure.<cluster_name>.tar file.
For example:
bin
get /opt/JNPRsbr/radius/install/configure.cambridge.tar
In a terminal window, extract the new cluster definition files from the archive.
Execute:
tar xvf configure.<cluster_name>.tar
Output similar to this example is displayed:
$ tar xvf configure.MyCluster.tar x dbcluster.rc, 1925 bytes, 4 tape blocks x config.ini, 2435 bytes, 5 tape blocks x my.cnf, 1017 bytes, 2 tape blocks x dbclusterndb.gen, 33474 bytes, 66 tape blocks x dbcluster.dat, 921 bytes, 2 tape blocks
Running the Configure Script on the SM2 Node
Log in to the sm2 node as root.
Navigate to the radius/install subdirectory of the directory in which the JNPRsbr package was installed (/opt/JNPRsbr by default).
Example: cd /opt/JNPRsbr/radius/install
Run the configure script to apply the updated cluster definition files:
Execute:
./configure
Review and accept the Steel-Belted Radius Carrier license agreement.
Press the spacebar to move from one page to the next. When you are prompted to accept the terms of the license agreement, enter y.
Do you accept the terms in the license agreement? [n] y
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,2d} on SunOS 5.10 Generic_141444-09 node wrx07(sm) is not configured and processes are down, needs to be configured ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by 'Generate Cluster Definition' on one node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
Enter the number of the desired configuration task or quit (2,q):
From the menu of configuration tasks, enter 3 to specify Configure Cluster Node.
--------------------------------------------------------------------------- SBR 8.50.50006 cluster on SunOS 5.10 Generic_141444-09 node wrx07(sm) is not configured and processes are down, needs to be configured ---------------------------------------------------------------------------
Configuring Cluster Node...
Enter SBR cluster name []:cambridge
Specify the exact name of the cluster.
Reading shared configuration from /opt/JNPRshare/install/cambridge
Generating configuration files
Reviewing configuration files /opt/JNPRsbr/radius/install/tmp/dbclusterndb.gen View (v), accept (a), or reject (r) configuration files:
Enter a to accept the modified configuration files and continue or v to view them.
Caution We recommend that you enter an r to reject them only if a serious error was made when you provided information. We recommend that you not edit these files.
The configure script prompts you with a warning whether or not to apply the cluster definition to this node. Enter y to proceed.
WARNING: You are about to make irreversible changes to this node. Are you sure that you wish to continue? (y,n): y
Cleaning directories /opt/JNPRhadm
Applying configuration
Initializing Session State Register, please wait a few minutes...
Configure the node.
For information about configuring the node in the following prompts, see Configuring the Host Software on the First Server in the Cluster.
Do you want to configure Java Runtime Environment for JDBC Feature [n] : Please enter backup or radius directory from which to migrate. Enter n for new configuration, s to search, or q to quit [n]:
Enter initial admin user (UNIX account must have a valid password) [root]: Enable Centralized Configuration Management (CCM) for this SBR node? [n]: Configuring for use with generic database Do you want to enable "Radius WatchDog" Process? [n]: Do you want to enable LCI? [n]: Do you want to configure for use with Oracle? [n]: Removing oracle references from startup script Do you want to configure for use with SIGTRAN? [n]: Removing SIGTRAN references from startup script Do you want to configure SNMP? [n]: Configuring Admin GUI Webserver Compatible Java version found : Do you want to install custom SSL certificate for Admin WebServer? [n]: Enable (e), disable (d), or preserve (p) autoboot scripts [e]:
The SBR Admin Web GUI can be launched using the following URL: https://<servername>:2909
Configuration complete
--------------------------------------------------------------------------- SBR 8.50.50006 cluster cambridge{0s,2sm,0m,4d} on SunOS 5.10 Generic_141444-09 node wrx07(sm) is configured and processes are down, may be reconfigured if desired ---------------------------------------------------------------------------
1. Unconfigure Cluster Node Not used when merely updating existing cluster definitions.
2. Generate Cluster Definition Creates new or updates existing cluster definitions. Modifies the shared directory but does not modify this node.
3. Configure Cluster Node To be preceded by 'Generate Cluster Definition' on one node. Must be invoked on each and every node of the cluster.
4. Reconfigure RADIUS Server Only on SBR nodes, updates the existing SBR configuration.
5. Create Temporary Cluster Used to approximate a cluster using only this one machine. Intended for migration and demonstration purposes only.
6. Upgrade From Restricted Cluster License Used to upgrade from restricted cluster to regular cluster. Removes database restriction on the number of concurrent sessions and enables the addition of an expansion kit license
READY: last operation succeeded, node configured. Enter the number of the desired configuration task or quit (4,q):
Enter q to quit.
Start the SSR process on the newly configured sm2 node:
Execute:
./sbrd start ssr
Execute:
./sbrd status
Examine each line and ensure the SSR process is running without error.
Run the CreateDB.sh script on sm2.
The purpose of running the CreateDB.sh script is to create certain files that are required to run the administrative shell scripts used to administer the session database in the cluster.
Log in to sm2 as hadm.
Navigate to the hadm user's home directory, /opt/JNPRhadm/ by default.
Execute:
./CreateDB.sh
Start the RADIUS process on sm2:
Log in to sm2 as root.
Execute:
./sbrd start radius
Execute:
./sbrd status
The final cluster configuration looks as follows:
Cluster Configuration --------------------- [ndbd(NDB)] 4 node(s) id=10 @172.28.84.163 (mysql-5.7.18 ndb-7.5.6, Nodegroup: 0, Master) id=11 @172.28.84.113 (mysql-5.7.18 ndb-7.5.6, starting, Nodegroup: 0) id=12 @172.28.84.178 (mysql-5.7.18 ndb-7.5.6, starting, Nodegroup: 1) id=13 @172.28.84.104 (mysql-5.7.18 ndb-7.5.6, starting, Nodegroup: 1)
[ndb_mgmd(MGM)] 2 node(s) id=1 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=2 @172.28.84.166 (mysql-5.7.18 ndb-7.5.6)
[mysqld(API)] 4 node(s) id=6 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6) id=7 @172.28.84.166 (mysql-5.7.18 ndb-7.5.6) id=58 @172.28.84.166 (mysql-5.7.18 ndb-7.5.6) id=59 @172.28.84.36 (mysql-5.7.18 ndb-7.5.6)
Examine each line and ensure the cluster is running with no errors.
Non-Transition Server Method—Terminating Connections
If you can tolerate some downtime while the existing data nodes are stopped, the new configuration imposed, and all nodes restarted, that is the quickest and easiest method to incorporate the new data nodes. However, sessions are disconnected, and reconnection is not possible until all nodes come back online.
This procedure stops the entire cluster. You will not be able to process any requests from users.
To estimate how long this process takes, note the amount of time it takes to reconfigure one or two nodes.
Assuming the same basic configuration as in the previous examples ({0s,2sm,0m,2d}), the following list describes the high-level tasks involved in this method; a condensed command-level sketch follows the list. Refer to the previous procedures in this chapter for information about performing each task.
Stop the RADIUS processes on the sm1 and sm2 nodes.
Call DestroyDB.sh as user hadm on either the sm1 or sm2 node.
Stop the cluster on sm1.
Stop the SSR process on sm2.
Verify that the SSR processes are stopped on the two existing data nodes.
Install the SBR Carrier software on the two new data nodes in the expansion kit.
Run the configure script on sm1 using option 2 to update the cluster definition files.
Distribute the updated cluster definition files to all nodes including the two new data nodes.
Run the clean command on all four of the existing nodes (sm1, sm2, d1, and d2).
Start the SSR process on each node one at a time.
Run CreateDB.sh on the sm1 node.
After CreateDB.sh has finished running on sm1, repeat it on sm2 as user hadm.
Add the IP address pools and ranges using the administrative scripts.
Start the RADIUS processes on sm1 and sm2 one at a time.
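The following condensed command sketch summarizes the non-transition method, assuming the default /opt/JNPRsbr install path and the commands shown earlier in this chapter (outline only; adapt node names to your cluster):
# 1. On sm1 and sm2 (as root, from /opt/JNPRsbr/radius):
./sbrd stop radius
# 2. On sm1 or sm2 (as hadm, from /opt/JNPRhadm):
./DestroyDB.sh
# 3. Stop the cluster from sm1 and the SSR process on sm2, then verify
#    that the SSR processes are stopped on d1 and d2:
./sbrd stop cluster    # on sm1
./sbrd stop ssr        # on sm2
./sbrd status          # on d1 and d2
# 4. Install the SBR Carrier package (pkgadd) on the two new data nodes,
#    run install/configure on sm1 (option 2: Generate Cluster Definition),
#    and copy configure.<cluster>.tar to every node. Then apply it with
#    install/configure (option 3: Configure Cluster Node) on each node,
#    as in the transition-server procedure.
# 5. On sm1, sm2, d1, and d2:
./sbrd clean
# 6. Start SSR one node at a time (sm nodes first, then data nodes):
./sbrd start ssr
# 7. As hadm on sm1, then on sm2:
./CreateDB.sh          # then add the IP address pools and ranges
# 8. On sm1 and sm2, one at a time:
./sbrd start radius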