Installing the Cluster

With your existing cluster servers offline, you can proceed with a normal cluster installation:

  1. Confirm that the cluster topology plan is complete. See Planning Your Session State Register Cluster.

    Four-Server Strategy Only

    If you are using the four-server strategy and plan to incorporate the transition server into the new Starter Kit cluster, the transition server must be the second SBR/management (SM) node host.

    Note: During this procedure, the new cluster is configured as if the second SBR/management node host were present. Because that host is currently functioning as the transition server, you defer configuring and starting it until after the new cluster is operational.

  2. Make sure that each server in the new cluster conforms to the requirements in Before You Install Software.

    Caution: Do not skip this step; the server requirements for Session State Register have changed significantly since SBR/HA Release 5.x.

  3. Install and configure the software on all cluster nodes.

    Follow the procedures in Installing Session State Register Nodes.

    Four-Server Strategy Only

    If you are using the four-server strategy and plan to incorporate the transition node as the second SBR/management node in the new cluster, skip Setting Up the Second SBR/Management Node in a Starter Kit entirely as you work through the node installations.

  4. When you begin configuring the cluster nodes, if you edited the CurrentSessions.sql file on the transition server during the procedure for Configuring the Transition Server, you can copy that CurrentSessions.sql file to the first management node that you set up, as sketched below. See Customizing the SSR Database Current Sessions Table.
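
    One way to copy the file is with scp. The following is a sketch only; it assumes CurrentSessions.sql is in the hadm home directory (/opt/JNPRhadm/) on both hosts, and sm-node-1 is a placeholder hostname:

      # On the transition server, as hadm (path and hostname are assumptions):
      scp /opt/JNPRhadm/CurrentSessions.sql hadm@sm-node-1:/opt/JNPRhadm/
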
  5. Start the cluster.
    • If you are using the five-server strategy, use the Initial Cluster Startup Procedure.
    • In the following procedure, each time the sbrd status command is executed, results similar to this example are displayed:
      hadmUser$>/opt/JNPRsbr/radius/sbrd status 
      [ndbd(NDB)]     2 node(s)
      id=10   @172.28.84.163  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0, Master)
      id=11   @172.28.84.113  (mysql-5.6.28 ndb-7.4.10, Nodegroup: 0)
      [ndb_mgmd(MGM)] 2 node(s)
      id=1    @172.28.84.36  (mysql-5.6.28 ndb-7.4.10)
      id=2    @172.28.84.166  (mysql-5.6.28 ndb-7.4.10)
      [mysqld(API)]   4 node(s)
      id=21   @172.28.84.36  (mysql-5.6.28 ndb-7.4.10)
      id=22   @172.28.84.166  (mysql-5.6.28 ndb-7.4.10)
      id=30   @172.28.84.36  (mysql-5.6.28 ndb-7.4.10)
      id=31   @172.28.84.166  (mysql-5.6.28 ndb-7.4.10)

      Examine each line starting with id=, and verify that there are no references to starting, connecting, or not connected. Any of these references indicates that the process has not finished starting or that the node is not connected properly. You may need to execute the sbrd status command more than once because it shows only a snapshot of activity; the display does not refresh automatically. Do not proceed to the next node until you are sure the process has started properly and the node is connected.
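
      Instead of scanning the output by eye, you can filter the snapshot for those strings. The grep filter below is a convenience sketch only, not part of the product:

        # Re-run until nothing matches; empty output means no node is still
        # starting, connecting, or not connected.
        /opt/JNPRsbr/radius/sbrd status | grep -Ei 'starting|connecting|not connected'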

      Four-Server Strategy Only

    • If you are using the four-server strategy, start the new cluster with a non-standard sequence of commands because the fourth server, which hosts the second SBR/management node, is not yet part of the cluster. Use this sequence:
      1. Log in to the SBR/management node as root.
      2. Change directories to /opt/JNPRsbr/radius/.
      3. Execute:
        ./sbrd start ssr
      4. Execute:
        ./sbrd status
      5. Examine each line and ensure the SSR process is running without error.
      6. Log in to a data node as root.
      7. Change directories to /opt/JNPRsbr/radius/.
      8. Execute:
        ./sbrd start ssr
      9. Execute:
        ./sbrd status
      10. Examine each line and ensure the SSR process is running without error.
      11. Log in to the second data node as root.
      12. Change directories to /opt/JNPRsbr/radius/.
      13. Execute:
        ./sbrd start ssr
      14. Execute:
        ./sbrd status
      15. Examine each line and ensure the SSR process is running without error.
      16. Go back to the SBR/management node, still logged in as root.
      17. Change directories to /opt/JNPRhadm/.
      18. Log in as hadm.
      19. Execute:
        ./CreateDB.sh
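
      Taken together, the four-server startup sequence looks like the following console sketch. The prompts are illustrative, and su hadm is shown as one way to log in as hadm:

        # Run on the SBR/management node, then on each of the two data nodes:
        root# cd /opt/JNPRsbr/radius/
        root# ./sbrd start ssr
        root# ./sbrd status      # confirm the SSR process is running without error

        # Back on the SBR/management node:
        root# cd /opt/JNPRhadm/
        root# su hadm
        hadm$ ./CreateDB.sh
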
  6. Run CreateDB.sh on each SBR/management node and each management node in the cluster.

    If you need to customize the sessions database, see Customizing the SSR Database Current Sessions Table.
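
    On each of those nodes, the invocation follows the same pattern as in the startup sequence above; a minimal sketch (prompt illustrative):

      # As the hadm user on the node:
      hadm$ cd /opt/JNPRhadm/
      hadm$ ./CreateDB.sh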

  7. Configure at least one IP address pool and one range using the SSR Administration Scripts. See Testing the Installation with DemoSetup.sh, and see the SBR Carrier Administration and Configuration Guide for details on configuring IP address pools and ranges.

    Note: We recommend you consult with the Juniper Networks Technical Assistance Center (JTAC) if you are using IP address pools and creating a transition server.

  8. Start the RADIUS process on the SBR/management node.
    1. Log in as root to the SBR/management (SM) node.
    2. Change directories to /opt/JNPRsbr/radius/.
    3. Execute:
      ./sbrd start radius
    4. Execute:
      ./sbrd status
    5. Examine each line and ensure the RADIUS process is running without error.
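
    Collapsed into a console sketch (prompt illustrative), the sequence is:

      root# cd /opt/JNPRsbr/radius/
      root# ./sbrd start radius
      root# ./sbrd status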

Now that the RADIUS process is running, you can complete the configuration using the Web GUI. See Basic SBR Carrier Node Configuration. For complete details, see the SBR Carrier Administration and Configuration Guide.

Modified: 2017-03-07