
    Installing the Linux Operating System for IP/MPLSView High Availability

    This topic describes how to install and configure the CentOS 6.6 64-bit operating system on the application servers, database servers, and management nodes in your network. In addition, it provides guidelines for partitioning the SAN storage disk so that each partition is accessible to a CentOS cluster group.

    Installing Linux OS on Your Servers

    Before you install CentOS 6.6 64-bit OS on your network servers:

    • Make sure your system has static IP addresses configured for both public and private interfaces.
    • In the /etc/hosts file, include an entry for each node participating in the CentOS cluster.
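
    For example, using the sample addresses shown in the installation procedure that follows, the /etc/hosts file on each node might include entries such as the following (the hostnames here are hypothetical):

      172.25.152.19   node1
      172.25.152.20   node2
      10.10.10.1      node1-priv
      10.10.10.2      node2-priv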

    To install the CentOS 6.6 64-bit OS package on all servers in your network:

    1. Use the minimal desktop installation option according to your local policies.
    2. Update each server with the CentOS 6.6 64-bit OS installation package using the following command:
      yum -y update package-name
    3. Install the nonstandard telnet and ksh packages.
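
      For example, you can install both packages by using yum (telnet and ksh are the standard CentOS package names; adjust them to match your repository):

      yum -y install telnet ksh
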
    4. Assign private IP addresses (such as 10.10.10.0/24) between the eth1 interfaces.

      For example, using the sample high availability hardware setup, you might assign these addresses as follows:

      Node1:eth0:172.25.152.19 (public network)
      Node1:eth1:10.10.10.1/24 (storage network)
      Node1:IPMI:172.25.152.116 (fencing network)
      Node2:eth0:172.25.152.20 (public network)
      Node2:eth1:10.10.10.2/24 (storage network)
      Node2:IPMI:172.25.152.117 (fencing network)
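
      As a minimal sketch, the corresponding static configuration for the Node1 storage interface, assuming the standard CentOS 6 network scripts, might look like this in /etc/sysconfig/network-scripts/ifcfg-eth1:

      DEVICE=eth1
      BOOTPROTO=static
      IPADDR=10.10.10.1
      NETMASK=255.255.255.0
      ONBOOT=yes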
    5. As the root user, disable the IPTables, IP6Tables, and NetworkManager services by issuing the following commands:
      chkconfig iptables off
      chkconfig ip6tables off
      chkconfig NetworkManager off
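
      The chkconfig commands prevent these services from starting at boot. If you want the services stopped before the reboot in Step 7, you can also stop them immediately:

      service iptables stop
      service ip6tables stop
      service NetworkManager stop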
    6. Disable SELinux by editing the /etc/sysconfig/selinux file and changing the SELINUX entry from enforcing to disabled.
      vi /etc/sysconfig/selinux

      Disabling SELinux prevents it from blocking or interfering with some of the ports that must be opened for high availability and the application.
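
      For example, after the change, the relevant line in the /etc/sysconfig/selinux file reads:

      SELINUX=disabled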

    7. Reboot the device.

    Installing Linux OS on the Management Nodes

    Before you install the CentOS 6.6 64-bit OS on the management nodes:

    • Make sure each management node in your network meets the following minimum requirements:
      • 2.0 GHz CPU
      • 4 GB memory
      • 20 GB hard disk drive
      • 1 Ethernet interface
    • Make sure your system has static IP addresses configured for both public and private interfaces.
    • In the /etc/hosts file, include an entry for each node in your network.

    To install the CentOS 6.6 64-bit OS package on all management nodes in your network:

    1. Use the minimal desktop installation option according to your local policies.
    2. Update each management node with the CentOS 6.6 64-bit OS installation package.
      yum -y update package-name

    Guidelines for Partitioning the SAN Storage Disk

    Best Practice: Active application and database servers save both the data collected from the router network and the processed data to the shared disk. The data saved by the application servers must be segregated from the data saved by the database servers. As a result, you should create two partitions on the storage disk so that each partition is accessible to exactly one cluster group.

    The partition accessible to the application cluster group should not be accessible to the database cluster group. Conversely, the partition accessible to the database cluster group should not be accessible to the application cluster group.

    You can create these shared storage devices by using the Internet Small Computer System Interface (iSCSI) standard, or by directly attaching Fibre Channel Arbitrated Loop (FC-AL) host bus adapters (HBAs).

    For information about installing required drivers, creating logical unit numbers (LUNs), and identifying the LUNs to the CentOS servers, see the documentation provided by your SAN vendor.

    After the CentOS software can detect and identify the LUNs, install the Linux OS high availability software. If you are running the database and the application on the same server, you can allow both servers to access both partitions, regardless of each server’s role in the cluster.
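
    For example, you can confirm that each server detects the LUNs by listing the disks as the root user (a minimal check; the device names depend on your SAN setup):

      [root@node1 ~]# fdisk -l | grep '^Disk /dev/sd'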

    Installing the Red Hat Enterprise Linux High Availability Add-On

    To install the RHEL High Availability Add-On:

    • Install the required processes (daemons) and services for high availability clustering by issuing the following commands as the root user on each node:
      [root@node1 ~]# yum groupinstall "High Availability" "Resilient Storage"
      [root@node2 ~]# yum groupinstall "High Availability" "Resilient Storage"
      [root@node3 ~]# yum groupinstall "High Availability Management" "High Availability"

      In this example, node3 is the management node.
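
      After the installation completes, you typically set the cluster configuration agent (ricci) on the cluster nodes, and the management interface (luci) on the management node, to start at boot. This is a sketch assuming the standard RHEL 6 cluster services; adjust it to your deployment:

      [root@node1 ~]# chkconfig ricci on
      [root@node1 ~]# service ricci start
      [root@node3 ~]# chkconfig luci on
      [root@node3 ~]# service luci start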

    Creating the Quorum Disk

    A quorum disk, also known as a QDisk, used in combination with fencing in a CentOS high availability cluster, detects available nodes in the cluster, sends notifications about them, and shuts off any nodes in the cluster that are unavailable. Fencing is the process of separating an unavailable or malfunctioning cluster node from the resources it manages, without the cooperation of the node being fenced. When used in combination with a quorum disk, fencing can prevent resources from being improperly used in a high availability cluster.

    This section describes a typical procedure for configuring the quorum disk. Depending on your SAN setup, your procedure might vary. Consult the documentation provided by your SAN vendor for the instructions you should follow.

    For more information about configuring and using a quorum disk in a CentOS high availability cluster, including options for sizing the quorum disk, see Frequently Asked Questions: Cluster Administration.

    To create the quorum disk:

    1. Confirm that the partition in which you are configuring the quorum disk is not in use by other users.
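
      For example, you can list any processes that are currently using the partition (assuming /dev/sdd is the quorum partition, as in the next step); no output means the partition is not in use:

      [root@node1 ~]# fuser -v /dev/sdd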
    2. Create the quorum disk on each node (node1 and node2 in this example) by issuing the following command as the root user:

      Note: Using the following mkqdisk command to configure the quorum disk destroys all data in the specified partition.

      [root@node1 ~]# mkqdisk -c /dev/sdd -l quorum
      mkqdisk v3.0.12.1

      Writing new quorum disk label 'quorum' to /dev/sdd.
      WARNING: About to destroy all data on /dev/sdd; proceed [N/y] ? y
      Warning: Initializing previously initialized partition
      Initializing status block for node 1...
      Initializing status block for node 2...

      When you set up the QDisk, use the label quorum, as specified in the mkqdisk command.
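
      You can verify that each node detects the new quorum disk by scanning for quorum disk labels with the -L option:

      [root@node2 ~]# mkqdisk -L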

    Setting Up the Global File System 2 Partition

    Global File System 2 (GFS2) is a clustered file system in which data is shared among GFS2 nodes with a single, consistent, and coherent view of the file system name space. Processes on different nodes work with GFS2 files in the same way that processes on one node can share files in a local file system.

    To set up GFS2 services on the cluster:

    1. Format the SAN target on which the cluster nodes are mapped (/dev/sdc in this example), using the following parameters:
      • File system: GFS2
      • Locking protocol: lock_dlm
      • Cluster name: cluster1
      • File system name: GFS
      • Journals: 2
      • Partition: /dev/sdc
    2. Create the GFS2 file system from any one of the nodes mapped to the partition by issuing the following command as the root user.

      This command takes effect on all other nodes participating in the same partition (/dev/sdc in this example) and creates a lock table based on the specified cluster name.

      [root@node1 ~]# mkfs.gfs2 -p lock_dlm -t application:GFS -j 2 /dev/sdc
      This will destroy any data on /dev/sdc.
      It appears to contain: Linux GFS2 Filesystem (blocksize 4096, lockproto lock_dlm)

      Are you sure you want to proceed? [y/n] y
      Device:                   /dev/sdc
      Blocksize:                4096
      Device Size:              49.34 GB (2711552 blocks)
      Filesystem Size:          49.34 GB (2711552 blocks)
      Journals:                 2
      Resource Groups:          42
      Locking Protocol:         "lock_dlm"
      Lock Table:               "application:GFS"
      UUID:                     2ff81375-31f9-c57d-59d1-7573cdfaff42
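
      To verify the new file system, you can mount it manually on one node, assuming the cluster services are already running and using a hypothetical mount point of /mnt/gfs. In production, the cluster software typically manages this mount:

      [root@node1 ~]# mkdir -p /mnt/gfs
      [root@node1 ~]# mount -t gfs2 /dev/sdc /mnt/gfs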
