Initial Cluster Startup Procedure

After all four nodes in the Starter Kit have been installed and configured, we recommend starting the cluster to verify that everything is working properly before you fully configure the nodes.

A specific sequence of steps is required to gracefully bring up and shut down the cluster. Do not proceed to the next node until you are sure the process has started properly and the node is connected. The following procedure provides the general steps for starting the cluster in this example. For complete details, see When and How to Restart Session State Register Nodes, Hosts, and Clusters.

  1. On the SBR/management node that you installed first (MyNode_1 in the example), start the ssr process:
    1. Log in as root.
    2. Change directories to /opt/JNPRsbr/radius/.
    3. Execute:
      ./sbrd start ssr
    4. Execute:
      ./sbrd status
    5. Examine each line and ensure the SSR process is running without error.
  2. Repeat the sequence of commands on the second combined SBR/management node (MyNode_2 in the example) to start the ssr process:
    1. Log in as root.
    2. Change directories to /opt/JNPRsbr/radius/.
    3. Execute:
      ./sbrd start ssr
    4. Execute:
      ./sbrd status
    5. Examine each line and ensure the SSR process is running without error.
  3. Repeat the sequence of commands on the first data node that you installed (MyNode_3 in the example) to start the ssr process:
    1. Log in as root.
    2. Change directories to /opt/JNPRsbr/radius/.
    3. Execute:
      ./sbrd start ssr
    4. Execute:
      ./sbrd status
    5. Examine each line and ensure the SSR process is running without error.
  4. Repeat the sequence of commands on the second data node that you installed (MyNode_4 in the example):
    1. Log in as root.
    2. Change directories to /opt/JNPRsbr/radius/.
    3. Execute:
      ./sbrd start ssr
    4. Execute:
      ./sbrd status
    5. Examine each line and ensure the SSR process is running without error.

    In the preceding steps, each time the sbrd status command is executed, results similar to this example should be displayed:

    hadmUser$>/opt/JNPRsbr/radius/sbrd status 
    [ndbd(NDB)]     2 node(s)
    id=1   @172.28.84.163  (mysql-5.6.28  ndb-7.4.10, Nodegroup: 0, Master)
    id=2   @172.28.84.113  (mysql-5.6.28  ndb-7.4.10, Nodegroup: 0)
    [ndb_mgmd(MGM)] 2 node(s)
    id=51    @172.28.84.36  (mysql-5.6.28  ndb-7.4.10)
    id=52    @172.28.84.166  (mysql-5.6.28  ndb-7.4.10)
    [mysqld(API)]   4 node(s)
    id=61   (not connected, accepting connect from 172.28.84.36) 
    id=62   (not connected, accepting connect from 172.28.84.166)
    id=100  (not connected, accepting connect from 172.28.84.36)
    id=101  (not connected, accepting connect from 172.28.84.166)

    Examine the lines starting with id=, and verify that there are no references to starting, connecting, or not connected. Any of these references indicates that the process has not finished starting or that the node is not connected properly. You may need to execute the sbrd status command more than once because it shows only a snapshot of activity; the display does not refresh automatically.
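
    The "not connected" check described above can also be scripted against captured sbrd status output. This is a minimal sketch, not part of the product: the check_ssr_status function name is hypothetical, and the output format is assumed to match the example shown above.

```shell
# Hypothetical helper: scan captured `sbrd status` output and fail if any
# id= line is still starting, connecting, or not connected.
check_ssr_status() {
  if echo "$1" | grep '^id=' | grep -Eq 'starting|connecting|not connected'; then
    echo "SSR not ready; run ./sbrd status again and wait"
    return 1
  fi
  echo "all listed nodes connected"
}

# Example against a trimmed copy of the sample output above:
sample='id=1   @172.28.84.163  (mysql-5.6.28  ndb-7.4.10, Nodegroup: 0, Master)
id=61   (not connected, accepting connect from 172.28.84.36)'
check_ssr_status "$sample"
```

    Because id=61 is not connected in the sample, the check reports that SSR is not ready; once all id= lines show an address and version, it reports all nodes connected.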

  5. Create the database on every management node. Management nodes include both normal management (m) nodes and combination SBR/management (sm) nodes.

    This step creates a basic database on each management node in the cluster. Alternatively, you can create a sample database, or customize the database for your particular environment.

    If you choose one of these two options instead of performing this step, be sure to return to this procedure and complete the remaining steps.

    Note: Except when migrating from a temporary cluster, all SSR processes must be up on all SSR nodes [sm, m, d] and all SBR processes must be down on all SBR nodes [s, sm] in order to execute CreateDB.sh.

    On each and every management node:

    1. Log in as hadm.
    2. Change directories to /opt/JNPRhadm/.
    3. Execute:
      CreateDB.sh
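
    The per-node sequence above can be expressed as a loop over the management nodes. This is a sketch under assumptions, not a supported tool: MGMT_NODES lists this example's sm nodes, and RUN defaults to echo (a dry run that only prints the commands); set RUN=ssh, with key-based login for the hadm account, to execute for real.

```shell
# Hypothetical loop for step 5. MGMT_NODES lists this example's management
# nodes; replace with your own m and sm hosts.
MGMT_NODES="MyNode_1 MyNode_2"
RUN="${RUN:-echo}"   # dry run by default; set RUN=ssh to actually run CreateDB.sh

for node in $MGMT_NODES; do
  "$RUN" hadm@"$node" 'cd /opt/JNPRhadm && CreateDB.sh'
done
```

    The dry run prints one command line per node so you can confirm the host list before executing anything against the cluster.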
  6. Configure all server configuration files for your environment.

    Complete the configuration of all server initialization (.ini) files, authentication (.aut) files, and accounting (.acc) files, as well as any proxy setup you may require.

    Carefully review the SBR Carrier Reference Guide and configure all files for your environment before you start the RADIUS process. Also review the SBR Carrier Administration and Configuration Guide, and plan the configuration steps for your particular environment.

    You cannot connect to the servers in the cluster with Web GUI until the RADIUS process is started; however, we recommend you plan out the administration of the server before starting the RADIUS process.

    See Recommendations before Configuring the Cluster for general configuration recommendations.

    After you have completed the configuration of the various configuration files described in the SBR Carrier Reference Guide, remember to return to this procedure and complete the remaining steps for bringing up the cluster.

  7. Configure at least one IP address pool and one range using the SSR Administration Scripts. See Testing the Installation with DemoSetup.sh. Also see the section on Session State Register Administration in the SBR Carrier Administration and Configuration Guide.
  8. Start the RADIUS process on each and every SBR node, one at a time.

    SBR nodes include both SBR (s) nodes and SBR/management (sm) nodes.

    1. Log in as root to each SBR (s) node and each SBR/management (sm) node.
    2. Change directories to /opt/JNPRsbr/radius/.
    3. Execute:
      ./sbrd start radius
    4. Execute:
      ./sbrd status
      [ndbd(NDB)]     2 node(s)
      id=1    @172.28.84.163  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0, *)
      id=2    @172.28.84.113  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0)
      
      [ndb_mgmd(MGM)] 2 node(s)
      id=51   @172.28.84.36  (mysql-5.6.28 ndb-7.4.10)
      id=52   @172.28.84.166  (mysql-5.6.28 ndb-7.4.10)
      
      [mysqld(API)]   4 node(s)
      id=61   @172.28.84.36  (mysql-5.6.28 ndb-7.4.10)
      id=62   @172.28.84.166  (mysql-5.6.28 ndb-7.4.10)
      id=100  @172.28.84.36  (mysql-5.6.28 ndb-7.4.10)
      id=101  @172.28.84.166  (mysql-5.6.28 ndb-7.4.10)
      
      ---------------------------------------------------------------------------
      Current state of network interfaces:
      
      tcp   0      0 0.0.0.0:1812               0.0.0.0:*        LISTEN      
      tcp   0      0 0.0.0.0:1813               0.0.0.0:*        LISTEN      
      udp   0      0 172.28.84.36:1645          0.0.0.0:*        
      udp   0      0 172.28.84.36:1646          0.0.0.0:*        
      udp   0      0 172.28.84.36:1812          0.0.0.0:*        
      udp   0      0 172.28.84.36:1813          0.0.0.0:*        
      ---------------------------------------------------------------------------
      
      hadm     16788 ndb_mgmd --config-cache=0 --configdir=/opt/JNPRhadm
      hadm     16849 /bin/sh /opt/JNPRmysql/install/bin/mysqld_safe
      hadm     17194 /opt/JNPRmysql/install/bin/mysqld --basedir=/opt/JNPRmysql/install --datadir=/opt/JNPRmysqld/data --plugin-dir=/opt/JNPRmysql/install/lib/plugin --log-error=/opt/JNPRmysqld/mysqld_safe.err --pid-file=/opt/JNPRmysqld/mysqld.pid --socket=/opt/JNPRhadm/.mysql.sock --port=3001
      root     17683 radius sbr.xml
      root     17723 webserver
    5. Examine each line and ensure the RADIUS process is running without error.
    6. Repeat this process until the RADIUS process is started and running without error on each and every SBR node.
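
    As a final sanity check on each SBR node, the network-interface portion of the sbrd status output can be verified programmatically. This is a minimal sketch assuming the netstat-style format shown above; radius_ports_bound is a hypothetical name, and 1812/1813 are the standard RADIUS authentication and accounting ports that appear in the example output.

```shell
# Hypothetical check: confirm that the RADIUS ports appear in captured
# `sbrd status` network-interface output.
radius_ports_bound() {
  for port in 1812 1813; do
    if ! echo "$1" | grep -Eq ":$port([^0-9]|$)"; then
      echo "port $port not bound"
      return 1
    fi
  done
  echo "RADIUS ports 1812 and 1813 bound"
}

# Example against a trimmed copy of the status output above:
netstat_lines='tcp   0      0 0.0.0.0:1812   0.0.0.0:*   LISTEN
tcp   0      0 0.0.0.0:1813   0.0.0.0:*   LISTEN'
radius_ports_bound "$netstat_lines"
```

    If either port is missing from the output, the RADIUS process has not finished starting on that node; re-run ./sbrd status before moving on to the next node.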
  9. Complete the configuration of the cluster nodes using Web GUI. See Basic SBR Carrier Node Configuration. For complete details, see the SBR Carrier Administration and Configuration Guide.

Modified: 2017-03-07