
    Configuring the Linux Operating System for IP/MPLSView High Availability

    Assigning a Password to the Ricci Daemon

    The ricci daemon is a cluster management and configuration process that dispatches incoming messages to the underlying management modules. When ricci is run with no options, it runs as a daemon and listens on the default port (11111). You must run ricci as the root user.

    To assign a password to the ricci daemon:

    1. As the root user on both cluster servers, assign a password to the ricci user:
      [root@node1 ~]# passwd ricci
      Changing password for user ricci.
      New password:
      BAD PASSWORD: it is based on a dictionary word
      BAD PASSWORD: is too simple
      Retype new password:
      passwd: all authentication tokens updated successfully.
      Restart the ricci service for the changes to take effect:
      [root@node1 ~]# /etc/init.d/ricci start
      Starting oddjobd:                                [ OK ]
      generating SSL certificates... done
      Generating NSS database... done
      Starting ricci:                                  [ OK ]
    2. Enable the ricci service so that it starts automatically after a reboot:
      [root@node1 ~]# chkconfig ricci on
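
    To verify that the ricci service is enabled at boot and is listening on its default port, you can optionally run the following commands on each node (11111 is the default port mentioned above):
      [root@node1 ~]# chkconfig --list ricci
      [root@node1 ~]# netstat -tlnp | grep 11111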

    Starting Conga Services

    The ricci daemon works in conjunction with luci, which is the cluster management process that oversees and manages all of the ricci nodes. The ricci and luci daemons are collectively referred to as Conga, which is the GUI application you use to configure the services and cluster nodes. The Conga application provides centralized configuration and management for the RHEL High Availability Add-On.

    You must start Conga services on the management server, which is node3 in this example.

    To start Conga services (luci) on the management server:

    1. As the root user, start the luci services.
      [root@node3 ~]# /etc/init.d/luci start
      Adding following auto-detected host IDs (IP addresses/domain names),
      corresponding to `node3.example' address, to the configuration of
      self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change
      them by editing `/var/lib/luci/etc/cacert.config', removing the generated
      certificate `/var/lib/luci/certs/host.pem' and restarting luci): (none
      suitable found, you can still do it manually as mentioned above)
      
      Generating a 2048 bit RSA private key
      writing new private key to '/var/lib/luci/certs/host.pem'
      Starting saslauthd:                                      [ OK ]
      Start luci...                                            [ OK ]
    2. Enable the luci service so that it starts automatically after a reboot:
      [root@node3 ~]# chkconfig luci on
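
    To verify that luci is enabled at boot and accepting connections on its default port (8084), you can optionally run the following commands on the management server:
      [root@node3 ~]# chkconfig --list luci
      [root@node3 ~]# netstat -tlnp | grep 8084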

    Accessing the Luci Console

    To access the luci console:

    1. In your Web browser, enter https://ip-address:8084, where ip-address is the IP address of your management server.

      Figure 1: High Availability Luci Console

    2. In the Username field, enter root.
    3. In the Password field, enter the root password.
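
    If the login page does not load, you can check from the command line that the luci HTTPS service is reachable; for example, with curl (substitute ip-address with the IP address of your management server; the -k option accepts the self-signed certificate that luci generates):
      [root@node3 ~]# curl -k -I https://ip-address:8084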

    Creating the High Availability Cluster

    To create a new high availability cluster:

    1. In the Homebase window, select Manage Clusters.

      Figure 2: High Availability Manage Clusters Luci Console

    2. Click Create to display the Create New Cluster window.

      Figure 3: High Availability Manage Clusters Actions

    3. In the Create New Cluster window, provide the properties for the new cluster.

      Figure 4: Create New Cluster Window

      1. In the Cluster Name field, enter the name of the new cluster (application in this example).
      2. In the Node Name fields, enter the name, password, ricci hostname, and default ricci port for each node participating in the cluster.

        In this example, the node names and ricci hostnames are node1 and node2, the password is the value specified in Assigning a Password to the Ricci Daemon, and the default ricci port is 11111. Make sure each node you specify is reachable.

    4. Select Use Locally Installed Packages.
    5. Select Enable Shared Storage Support to specify that GFS2 is being used to share data among the nodes in the cluster.
    6. Click Create Cluster.

      The nodes are added to the high availability cluster.

      Figure 5: Create New Cluster Add Nodes Window

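      After the cluster is created, you can optionally confirm from either node that both members have joined and that the cluster is quorate:
      [root@node1 ~]# clustat
      [root@node1 ~]# cman_tool status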

    Configuring the Quorum Disk

    To configure the quorum disk:

    1. Select the Configure tab.
    2. Select the QDisk tab. The Quorum Disk Configuration window is displayed.

      Figure 6: Quorum Disk Configuration Window

    3. In the By Device Label field, enter the label name of the quorum disk specified in Creating the Quorum Disk.
    4. In the Heuristics fields, enter the command you want the software to use to check the quorum status among all nodes in the cluster, and the interval at which you want to run the command.

      The Heuristics fields specify where your heartbeat connection is configured (for example, the eth1 interface); an example heuristic is shown after this procedure.

    5. Click Apply.
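
    The exact heuristic depends on your network design. As an illustration only, a common heuristic for a heartbeat network is a short ping of a gateway that is reachable through that interface, run every few seconds (gateway-ip is a placeholder for an address on your heartbeat network):
      ping -c 1 -w 1 gateway-ip
    You can also confirm the label of the quorum disk from either node with the mkqdisk command:
      [root@node1 ~]# mkqdisk -L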

    Configuring Fence Devices

    To configure the fence devices:

    1. Select the Fence Devices tab.
    2. Select the Fence Daemon tab and click Add.
    3. In the Add Fence Device (Instance) window, specify information for each fence device you want to add.
    4. Click Submit to add the specified fence device to the cluster.

      Figure 7: Add Node1 and Node2 Fence Device (Instance) Windows

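      If your fence devices are IPMI-based, you can optionally test each one from the command line before relying on it. The address and credentials below are placeholders for your IPMI interface:
      [root@node1 ~]# fence_ipmilan -a ipmi-address -l ipmi-user -p ipmi-password -o status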

    Configuring Nodes to Use Fence Devices

    To configure nodes to use fence devices:

    1. Select Homebase and click on the cluster name.
    2. Select one of the hosts.
    3. In the Fence Device pane, select Add Fence Method and enter IPMI Lan.

      Figure 8: Add Fence Device Window Fence Method

    4. Click Add Fence Instance and select an appropriate IPMI device.
    5. Repeat Steps 1 to 4 for the other nodes in the cluster.

      Figure 9: Add Fence Device Window Nodes List


    Configuring the Failover Domain

    To configure the failover domain:

    1. Select the Failover Domains tab.
    2. Click Add.
    3. Enter a name for the failover domain and provide other required information.
    4. Click Create to create the new failover domain.

      Figure 10: Add Failover Domain to Cluster Dialog Box

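      The failover domain you create is written to the cluster configuration file. If you want to review it outside the luci console, you can view the relevant section of /etc/cluster/cluster.conf on either node, for example:
      [root@node1 ~]# grep -A 4 failoverdomains /etc/cluster/cluster.conf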

    Creating Service Groups

    After you configure the GFS2, IP Address, and Script resources for the high availability cluster, you must configure a service group and add those resources to it. With IP/MPLSView, you can use multiple failover domains and multiple service groups instead of multiple clusters.

    To create a service group and add resources to it:

    1. Select the Service Groups tab and click Add.

      The Add Service Group to Cluster window is displayed.

      Figure 11: Add Service Group to Cluster Window

    2. Provide the following information for the new service group:
      1. In the Service Name field, enter the name of the new service group (application-SG in this example).
      2. Select Automatically Start This Service.
      3. In the Failover Domain field, select the name of the failover domain configured in Configuring the Failover Domain (application-FD in this example).
      4. In the Recovery Policy field, select Relocate.
    3. Click Add Resource.

      The Add Resource to Service window is displayed.

      Figure 12: Add Resource to Service Window

    4. Add the GFS2 resource to the service group.
      1. Select Select a Resource Type > GFS2.

        The Add Resource to Cluster window for GFS2 is displayed, as shown in Creating Resources.

      2. Click Submit to add the GFS2 resource to the service group.
    5. Add the IP Address resource to the service group.
      1. Select Select a Resource Type > IP Address.

        The Add Resource to Cluster window for IP Address is displayed, as shown in Creating Resources.

      2. Click Submit to add the IP Address resource to the service group.
    6. Add the Script resource to the service group.
      1. Select Select a Resource Type > Script.

        The Add Resource to Cluster window for Script is displayed, as shown in Creating Resources.

      2. Click Submit to add the Script resource to the service group.
    7. Refresh the Web console to verify that the GFS2, IP Address, and Script resources are running on one of the nodes in your cluster.

      Figure 13: Service Groups Edit Service

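      You can also check the state of the service group and, if you want to test failover, relocate it to another node from the command line. The commands below use the service group name from this example (application-SG):
      [root@node1 ~]# clustat
      [root@node1 ~]# clusvcadm -r application-SG -m node2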

    Automating SSH Login from All Servers

    To ensure that you can perform an automatic SSH login from the database servers to the application servers, you must automate the SSH login process on both database servers.

    You perform this procedure only once. You do not need to repeat the procedure when you upgrade the software unless you change the password or IP address of the server as part of the upgrade.

    To automate SSH login on all database servers:

    1. On the primary database server, log in as the wandl user and change the current directory to the wandl home directory (/home/wandl in this example).
    2. Generate a pair of authentication keys without specifying a passphrase.
      /home/wandl> ssh-keygen -t rsa
      Generating public/private rsa key pair.
      Enter file in which to save the key (/home/wandl/.ssh/id_rsa):
      Created directory '/home/wandl/.ssh'.
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /home/wandl/.ssh/id_rsa.
      Your public key has been saved in /home/wandl/.ssh/id_rsa.pub.
      The key fingerprint is:
      94:7c:a6:d2:b6:80:19:a4:b9:f4:7d:7f:09:d4:f2:52 wandl@lexu
    3. Use SSH to create the .ssh directory on the primary application server.

      Substitute remotehostip with the IP address of the primary application server. When prompted, enter the wandl password of the remote host.

      /home/wandl> ssh wandl@remotehostip mkdir -p .ssh
      The authenticity of host '<remotehostip> (<remotehostip>)' can't be
      established.
      
      RSA key fingerprint is 8a:d9:a9:c5:91:6a:e6:23:8c:2f:ad:4f:ea:48:78:0b.
      
      Are you sure you want to continue connecting (yes/no)? yes
      
      Warning: Permanently added '<remotehostip>' (RSA) to the list of known
      hosts.
      
      Password:
    4. Append the local host’s new public key to the primary application server’s authorized keys, and enter the wandl password for the primary application server.
      /home/wandl> cat .ssh/id_rsa.pub | ssh wandl@remotehostip 'cat >> .ssh/authorized_keys'
      Password:
    5. From the database servers, log in to the application servers to confirm that automatic SSH login is enabled.

      If automatic SSH login is working properly, you should be able to directly log in to the application servers from the database servers without specifying a password.
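
      For example, running a simple remote command from the database server should now complete without a password prompt (remotehostip is the IP address of the application server, as in the previous steps):
      /home/wandl> ssh wandl@remotehostip hostname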
