
Setting Up Data Node Hosts Included with the Starter Kit

 

Use this procedure for any data node host installation. The examples in this section install and set up each of the two data nodes in a Starter Kit, using the MyCluster cluster example, but the procedure is the same for expansion kit installations.

Note

For performance reasons, the SSR ndbmtd processes on data (D) nodes are configured to execute under the UNIX root account by default, rather than under the UNIX hadm account. Running as root allows the ndbmtd processes to lock data in physical memory, which is faster than allowing the OS to page data out to swap space on disk. The UNIX root account privilege is required in order to lock data in physical memory.

  • The relevant configuration item is the #sbrd-ndbd-run-as-root = true parameter in the [ndbd] section of the /opt/JNPRhadm/my.cnf file. The leading # character is required: it distinguishes this parameter as an sbrd script parameter, so the line is not a comment and is always active. When the value of this parameter is true, the ndbmtd processes execute under the UNIX root account. When the value is false (or the parameter is missing entirely), the ndbmtd processes execute under the UNIX hadm account. The value of this parameter can be changed only immediately after configuring a data (D) node; it cannot be changed after the SSR processes are running.

  • We recommend, though it is not required, that the parameter be configured identically on all data (D) nodes. To change the value of this parameter at a later time, you must unconfigure the data (D) node and then reconfigure it.

  • When the ndbmtd processes execute under the UNIX root account, it is extremely important that the DataMemory and IndexMemory parameters in the [ndbd default] section of the /opt/JNPRhadm/config.ini file be configured properly with respect to the amount of physical memory actually available on the data (D) node (see the sample excerpts after this list). If the data (D) node does not have enough physical memory available, the ndbmtd processes can starve the entire machine, including the OS itself, of memory. By default, SBRC is configured under the assumption that at least 8 GB of memory is available solely for ndbmtd processes; in practice, the machine requires more than 8 GB in total to also support the OS and other applications.
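For reference, the following excerpts show where these parameters reside in the two files. The memory values shown are hypothetical placeholders, not recommended settings; size them to the physical memory actually installed on each data (D) node.

    /opt/JNPRhadm/my.cnf:

    [ndbd]
    #sbrd-ndbd-run-as-root = true

    /opt/JNPRhadm/config.ini:

    [ndbd default]
    DataMemory=6144M
    IndexMemory=1024M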

Striping Data Nodes

For performance reasons, the data stored in the Session State Register (SSR) should be striped. If you choose not to enable striping, the SBR Carrier software operates in demonstration mode without enforcing minimum memory requirements. When operating in demonstration mode, the SBR Carrier software makes a best-effort attempt to operate in spite of deficiencies that would normally prevent operation because of poor performance.

You must decide whether or not to stripe when the ./configure script (typically found in the /opt/JNPRsbr/radius/install directory) is executed to create a new cluster definition. When you execute the ./configure script and select option 2, "Generate Cluster Definition", you are prompted to choose whether to enable striping.

If the prompts related to striping are not answered correctly (for example, striping is enabled but one or more data nodes has less than 8 GB of memory), you will not be able to configure all of the data nodes. In this case, when you execute the ./configure script, select option 3, "Configure Cluster Node", and then select the (c) Create option, the script warns you about the misconfiguration.
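Before enabling striping, you can confirm that each data node actually has at least 8 GB of physical memory. For example, on a Linux host:

    grep MemTotal /proc/meminfo

On a Solaris host:

    prtconf | grep -i 'Memory size'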

The number of stripes is presently a fixed parameter: it is always set to 1 (striping disabled, for demonstration mode), 4 (striping enabled, for a cluster), or 8 (striping enabled, for a standalone server or transition server). After the number of stripes is configured, it cannot be changed without destroying and then re-creating the entire SBR Carrier cluster. Because striping is a global parameter with respect to cluster geometry, all data nodes must always have the same number of stripes.

Each stripe is implemented by a separate SSR data process requiring its own unique node ID. Thus, eight node IDs are required for each data node in a transition server when striping is enabled. However, the ./configure script prompts for only one base node ID per data node, regardless of whether striping is enabled, because higher-order node IDs are determined by an algorithm based on the number of data nodes and the number of stripes. Also, the ./sbrd script (typically found in /opt/JNPRsbr/radius) operates on all of the SSR data processes on a particular node as if they were one entity.
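To illustrate the node ID arithmetic (the exact assignment algorithm is internal to the ./configure script): a Starter Kit cluster with two data nodes and 4 stripes consumes 2 × 4 = 8 data node IDs in total, even though ./configure prompts for only two base node IDs. A transition server data node with 8 stripes consumes eight node IDs by itself.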

If any SSR data processes diverge from the group, the ./sbrd script may detect this and warn you if you attempt to restart them. (You are not likely to encounter this unless you are having trouble starting the software in the first place.)

If you see this warning, use the ./sbrd status command to verify whether any data processes have failed. If any data processes have failed while other data processes still persist, then execute ./sbrd stop ssr followed by ./sbrd start ssr and finally ./sbrd status again to verify that the problem has been resolved.
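Run from the directory containing the ./sbrd script (typically /opt/JNPRsbr/radius), the complete recovery sequence described above is:

    ./sbrd status
    ./sbrd stop ssr
    ./sbrd start ssr
    ./sbrd status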

When ./sbrd status is executed as either root or hadm on a running M or SM node, or on a cluster that is striped, you should observe four times as many [ndbd(NDB)] nodes as there are actual data nodes (because four is the number of stripes). When ./sbrd status is executed on a running data node, you should observe twice as many ndbmtd processes (the SSR data processes) as stripes because each working ndbmtd process is paired with a watchdog instance of itself to guard against failure.
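For example, in a Starter Kit cluster with two data nodes and 4 stripes, ./sbrd status on a running M or SM node reports 2 × 4 = 8 [ndbd(NDB)] nodes, and each running data node hosts 4 × 2 = 8 ndbmtd processes (one worker and one watchdog per stripe). On a data node, you can also observe the process pairs with a standard process listing:

    ps -ef | grep ndbmtd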

Populating the JNPRShare Directory

Before running the configure script, make a local copy of the configuration files that were created during installation on the first combined SBR/management (SM) node.

To copy the cluster’s base configuration files to this target machine:

  1. Log in as hadm.

  2. Change directories to the working directory on the local server.

    Execute:

    cd /opt/JNPRshare/install/<cluster_name>

    For example:

    cd /opt/JNPRshare/install/MyCluster

  3. Use FTP binary mode to connect to the first server that was set up, and navigate to the radius/install subdirectory of the directory in which the JNPRsbr package was installed on the source server (/opt/JNPRsbr/radius/install by default).

  4. Execute a get command to transfer the configure.<cluster_name>.tar file to the local directory.

    For example:

    bin

    get configure.MyCluster.tar

  5. Extract the configuration files from the archive.

    For example:

    tar xvf configure.MyCluster.tar

    The output display lists the five extracted configuration files. A consolidated example of the entire transfer session follows this procedure.
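Putting steps 2 through 5 together, a complete transfer session looks similar to the following. The hostname sbr-sm1 is a hypothetical name for the first SM node; substitute the actual hostname of your source server.

    cd /opt/JNPRshare/install/MyCluster
    ftp sbr-sm1
    ftp> cd /opt/JNPRsbr/radius/install
    ftp> bin
    ftp> get configure.MyCluster.tar
    ftp> quit
    tar xvf configure.MyCluster.tar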

Configuring the Host Software on the Data Nodes

To configure the software on a data node in a Starter Kit cluster:

Note

You must repeat this procedure on every data node in the cluster.

  1. As root, navigate to the directory where you installed the Steel-Belted Radius Carrier package. For information about the directory in which the Steel-Belted Radius Carrier package is installed, see Unpacking Session State Register Software.

    Navigate to the radius/install subdirectory. Execute:

    cd /opt/JNPRsbr/radius/install/

  2. Execute the configure script to install the Steel-Belted Radius Carrier server software:

    Execute:

    ./configure

  3. Review and accept the Steel-Belted Radius Carrier license agreement.

    Press the spacebar to move from one page to the next. When you are prompted to accept the terms of the license agreement, enter y.

    Do you accept the terms in the license agreement? [n] y

  4. From the menu of configuration tasks, enter 3 to specify Configure Cluster Node.

  5. Specify the name of the cluster.

    Enter the name exactly as you specified it in Table 9.

    Enter SBR cluster name [MyCluster]: MyCluster

    Unless the script detects unusual installation conditions (a pre-existing directory, for example), you are prompted to verify that you want to proceed. In some cases, you may be prompted to resolve or ignore such conditions.

  6. The system reads the configuration files that you copied to the server and prompts you to change some settings to adapt them to this server. Enter y to proceed.

  7. Enter a to accept the modified configuration files and continue or v to view them.

    Caution

    Enter r to reject the files only if a serious error was made when you provided information. We recommend that you not edit these files.

  8. Specify that you want to configure the data node host to autoboot (restart automatically when the operating system is restarted).

    Enable (e), disable (d), or preserve (p) autoboot scripts [e]: e

    A local /radiusdir/radius/sbrd script is always created, and /opt/JNPRhadm/sbrd is always a symbolic link to this local copy.

    • If you enter e (enable), the configure script copies the local sbrd script to /etc/init.d, where it is automatically invoked by the OS whenever the OS is stopped or started.

    • If you enter d (disable), the configure script removes all copies of the sbrd script from /etc/init.d, thus disabling autoboot for all versions of Steel-Belted Radius Carrier.

    • If you enter p (preserve), the configure script does nothing, thereby leaving your previous autoboot scripts unchanged.

  9. Repeat this procedure on each data node in the cluster.

  10. Now that the two SBR/management (SM) nodes and two data (D) nodes are configured, start the cluster by following the procedure described in Initial Cluster Startup Procedure.
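If you enabled autoboot in Step 8, one way to spot-check the result on each data node is to verify that the sbrd script was copied to /etc/init.d and that the symbolic link is in place:

    ls -l /etc/init.d/sbrd
    ls -l /opt/JNPRhadm/sbrd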