
    Configuring VMware Clusters, High Availability, and Distributed Resource Scheduler

    VMware clusters enable multiple host systems to be managed as a single logical entity, combining standalone hosts into one pool of resources with higher availability. A cluster aggregates the hardware resources of the individual ESXi hosts but manages those resources as if they resided on a single host. When you power on a virtual machine, it can be given resources from anywhere in the cluster rather than from a specific physical ESXi host.
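    Although the solution configures clusters through the vSphere Client, the same operation is available through the vSphere API. The following is a minimal sketch using the open-source pyVmomi Python bindings (not part of the original solution); the vCenter hostname, credentials, and cluster name are placeholder values, and the script assumes the first datacenter in the inventory.

        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        # Connect to vCenter Server (hostname and credentials are placeholders)
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="password",
                          sslContext=ssl._create_unverified_context())

        # Assume the first datacenter in the vCenter inventory
        datacenter = si.RetrieveContent().rootFolder.childEntity[0]

        # Create an empty cluster; HA and DRS can be enabled through the spec
        cluster = datacenter.hostFolder.CreateClusterEx(
            name="POD1", spec=vim.cluster.ConfigSpecEx())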

    VMware high availability (HA) allows virtual machines running on a failed host to be restarted automatically on other hosts in the cluster. VMware HA continuously monitors all ESXi hosts in a cluster and detects failures. The VMware HA agent on each host maintains a heartbeat with the other hosts in the cluster, sending heartbeats at 5-second intervals. If a host misses three consecutive heartbeat intervals, VMware HA restarts all of its virtual machines on other hosts. VMware HA also monitors the cluster to verify that sufficient resources are available at all times to restart virtual machines on different physical hosts in the event of host failure. Safe restart of virtual machines is made possible by locking technology in the ESXi storage stack, which allows multiple hosts to have simultaneous access to the same virtual machine files.
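    Continuing the pyVmomi sketch above, HA corresponds to the dasConfig portion of the cluster specification. The failover level of 1 (reserve capacity for one host failure) is an illustrative value, not a setting taken from the solution test bed.

        from pyVim.task import WaitForTask

        spec = vim.cluster.ConfigSpecEx()
        das = vim.cluster.DasConfigInfo()
        das.enabled = True                  # enable VMware HA on the cluster
        das.hostMonitoring = vim.cluster.DasConfigInfo.ServiceState.enabled
        das.admissionControlEnabled = True  # keep spare capacity for failover
        das.admissionControlPolicy = vim.cluster.FailoverLevelAdmissionControlPolicy(
            failoverLevel=1)                # tolerate one host failure (illustrative)
        spec.dasConfig = das

        # modify=True merges this change into the existing cluster configuration
        WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))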

    VMware Distributed Resource Scheduler (DRS) provides initial virtual machine placement and makes automatic resource relocation and optimization decisions as hosts are added to or removed from the cluster. DRS also optimizes based on virtual machine load, rebalancing resources as the load on individual virtual machines rises or falls. DRS additionally makes cluster-wide resource pools possible.
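    In the same sketch, DRS is the drsConfig portion of the specification; the fullyAutomated behavior matches the automatic placement and relocation described above, and the migration threshold shown is the vCenter default rather than a value from the test bed.

        drs = vim.cluster.DrsConfigInfo()
        drs.enabled = True                  # enable DRS on the cluster
        drs.defaultVmBehavior = vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated
        drs.vmotionRate = 3                 # migration threshold; 3 is the default

        spec = vim.cluster.ConfigSpecEx(drsConfig=drs)
        WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))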

    For more information about configuring VMware HA clusters, see:

    VMware vSphere 5.1 HA Documentation

    The MetaFabric 1.0 solution uses VMware clusters in both POD1 and POD2. The following screenshots provide an overview of the clusters in the solution.

    The MetaFabric 1.0 solution test bed contains three clusters: Infra (Figure 1), POD1 (Figure 2), and POD2 (Figure 3). All clusters are configured with HA and DRS.

    Figure 1: Infra Cluster Hosts Detail

    Figure 2: POD1 Cluster Hosts Detail

    Figure 3: POD2 Cluster Hosts Detail

    The Infra cluster (Figure 4) runs all VMs required to support the data center infrastructure. The Infra cluster is hosted on two standalone IBM System x3750 M4 servers. The VMs hosted on the Infra cluster are:

    • Windows 2K8 Server with vCenter Server VM
    • Windows 2K8 domain controller VM
    • Windows 2K8 SQL database server VM
    • Junos Space Network Director
    • Remote Secure Access (SA)
    • Firefly Host Management (also referred to as vGW Management)
    • Firefly Host SVM – Hosts (also referred to as vGW SVM – Hosts)
    • Windows 7 VM for the NOC (jump station)

    Figure 4: INFRA Cluster VMs

    The POD1 cluster (Figure 5) hosts the VMs that run the enterprise business-critical applications in the test bed. POD1 is hosted on one IBM Flex pass-thru chassis and one 40-Gb CNA module chassis. POD1 contains the following applications/VMs:

    • Windows Server 2012 domain controller
    • Exchange Server 2013 CAS
    • Exchange Server 2013 CAS
    • Exchange Server 2013 CAS
    • Exchange Mailbox server
    • Exchange Mailbox server
    • Exchange Mailbox server
    • MediaWiki Server
    • vGW SVM – All compute nodes

    Figure 5: POD1 Cluster

    The POD2 cluster (Figure 6) also hosts VMs that run enterprise business-critical applications in the test bed. POD2 has one IBM Flex pass-thru chassis and one 10-Gb CNA module chassis. POD2 contains the following applications/VMs:

    • Windows Server 2012 secondary domain controller
    • SharePoint Server Web front end (six VMs)
    • SharePoint application server (two VMs)
    • SharePoint database server
    • vGW SVM – All compute nodes

    Figure 6: POD2 Cluster

    Published: 2015-04-20