
    Configuring VMware Enhanced vMotion Compatibility

    VMware Enhanced vMotion Compatibility (EVC) configures a cluster and its hosts to maximize vMotion compatibility. Once EVC is enabled, only hosts that are compatible with those already in the cluster can be added to it. This solution uses the Intel Sandy Bridge Generation option, which sets the baseline CPU feature set for the cluster.
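
    The EVC baseline can also be applied programmatically through the vSphere API. The following is a minimal sketch using pyVmomi (the vSphere Python SDK); the vCenter address, credentials, cluster name, and the EVC mode key intel-sandybridge (assumed here to correspond to the Intel Sandy Bridge Generation option) are illustrative placeholders, not values from the MetaFabric configuration.

        # Hedged pyVmomi sketch: enable the Intel Sandy Bridge EVC baseline on
        # an existing cluster. Host name, credentials, and cluster name are
        # placeholders.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.com", user="administrator",
                          pwd="password",
                          sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()

        # Locate the cluster object by name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        cluster = next(c for c in view.view if c.name == "POD1-Cluster")

        # Apply the Sandy Bridge baseline; "intel-sandybridge" is the EVC mode
        # key assumed here for the Intel Sandy Bridge Generation option.
        task = cluster.EvcManager().ConfigureEvcMode_Task("intel-sandybridge")
        # Wait for task.info.state to reach "success" before adding hosts.

        Disconnect(si)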

    To configure a vSphere distributed switch on the vCenter Server, perform the following steps (see the sketch following this list):

    • Add a vSphere distributed switch
    • Add hosts to a vSphere distributed switch
    • Add a distributed port group (dvPG) configuration
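
    These steps can also be expressed against the vSphere API. The sketch below uses pyVmomi and assumes a network folder object and ESXi host objects have already been retrieved from vCenter; the switch name dvSwitch-MetaFabric and the vmnic device names are placeholder assumptions rather than values from the MetaFabric configuration.

        # Hedged pyVmomi sketch of steps 1 and 2: create a distributed switch
        # and join an ESXi host's physical NICs to it. Step 3 (adding a dvPG)
        # is sketched later in this section.
        from pyVmomi import vim

        def create_dvs(network_folder, name="dvSwitch-MetaFabric"):
            """Step 1: add a vSphere distributed switch."""
            cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(name=name)
            spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=cfg)
            return network_folder.CreateDVS_Task(spec)

        def add_host(dvs, host, pnics=("vmnic0", "vmnic1", "vmnic2", "vmnic3")):
            """Step 2: add an ESXi host and its uplinks to the switch."""
            backing = vim.dvs.HostMember.PnicBacking(
                pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice=p)
                          for p in pnics])
            member = vim.dvs.HostMember.ConfigSpec(
                operation="add", host=host, backing=backing)
            cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
                configVersion=dvs.config.configVersion, host=[member])
            return dvs.ReconfigureDvs_Task(cfg)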

    For more details on configuring VMware EVC, see:

    VMware vSphere 5.1 Documentation - Enable EVC on an Existing Cluster

    In the MetaFabric 1.0 solution, EVC is configured as directed in the link provided. A short overview of the configuration follows.

    Each ESXi host in the POD runs multiple VMs, and each VM belongs to a different port group. VMs running on the PODs include Microsoft Exchange, MediaWiki, Microsoft SharePoint, MySQL database, and Firefly Host (VM security). Because traffic flows to and from many different VMs, multiple port groups are defined on the distributed switch:

    • Infra = PG-INFRA-101
    • SharePoint = PG-SP-102
    • MediaWiki = PG-WM-103
    • Exchange = PG-XCHG-104
    • MySQL Database for SharePoint = PG-SQL-105
    • vMotion = PG-vMotion-106
    • Fault Tolerance = PG-Fault Tolerance-107
    • Exchange Cluster = PG-Exchange-Cluster-109
    • iSCSI POD1 = PG-STORAGE-108
    • iSCSI POD2 = PG-STORAGE-208
    • Network MGMT = PG-MGMT-800
    • Security (vGW) = PG-Security-801
    • Remote Access = PG-Remote-Access-810

    These port groups are configured as shown in Figure 1. A port group naming convention is used to ease identification and to map a VM and its function (for example, Exchange or SharePoint) to a VLAN ID. For instance, one VM connected to PG-XCHG-104 runs an Exchange application, while another VM on the same ESXi host connected to PG-WM-103 runs a MediaWiki application. The trailing number in each port group name is also the VLAN ID used on the network; PG-XCHG-104, for example, uses VLAN ID 104. The use of different port groups and VLANs enables the use of vMotion, which in turn enables fault tolerance in the data center.

    Figure 1: Port Groups
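
    The naming convention maps directly onto the distributed port group configuration. The following pyVmomi sketch shows how port groups like those in Figure 1 could be created with their VLAN IDs; the dvs object, the port count, and the subset of names shown are illustrative assumptions.

        # Hedged pyVmomi sketch (step 3): create distributed port groups whose
        # trailing name component is also the VLAN ID, per the convention above.
        from pyVmomi import vim

        PORT_GROUPS = {
            "PG-INFRA-101": 101,
            "PG-SP-102": 102,
            "PG-XCHG-104": 104,
            "PG-vMotion-106": 106,
            # ...remaining port groups follow the same name-to-VLAN pattern
        }

        def add_port_groups(dvs, groups=PORT_GROUPS):
            specs = []
            for name, vlan_id in groups.items():
                vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
                    inherited=False, vlanId=vlan_id)
                port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
                    vlan=vlan)
                specs.append(vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
                    name=name, type="earlyBinding", numPorts=32,
                    defaultPortConfig=port_cfg))
            # A single task creates all the port groups on the switch.
            return dvs.AddDVPortgroup_Task(specs)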

    NIC teaming is also deployed in the solution. NIC teaming is a configuration of multiple uplink adapters that connect to a single switch to form a team. A NIC team can either share the traffic load between physical and virtual networks among some or all of its members, or provide passive failover in the event of a hardware failure or a network outage. All port groups (PGs) except the iSCSI storage port groups are configured with a NIC teaming policy for failover and redundancy. All the compute nodes have four active adapters as dvUplinks in the NIC teaming policy, which enables load balancing and resiliency. The IBM PureFlex System with a 10-Gb CNA card has two network adapters on each ESXi host, so that system has only two dvUplink adapters per ESXi host. Figure 2 is an example of one port group configuration. Other port groups are configured similarly (with the exception of the storage port groups).

    Figure 2: Port Group and NIC Teaming Example
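
    A teaming and failover policy like the one shown in Figure 2 can also be expressed through the API. The sketch below builds a teaming policy with four active dvUplinks and applies it to a distributed port group; the uplink names, the load-balancing value loadbalance_srcid (route based on originating virtual port), and the port group object are assumptions for illustration.

        # Hedged pyVmomi sketch: a four-uplink teaming policy for a non-iSCSI
        # port group, applied via a port group reconfiguration.
        from pyVmomi import vim

        def teaming_policy(active=("dvUplink1", "dvUplink2",
                                   "dvUplink3", "dvUplink4")):
            order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
                inherited=False, activeUplinkPort=list(active),
                standbyUplinkPort=[])
            return vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
                inherited=False,
                policy=vim.StringPolicy(inherited=False,
                                        value="loadbalance_srcid"),
                uplinkPortOrder=order)

        def apply_teaming(dv_portgroup, policy):
            port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
                uplinkTeamingPolicy=policy)
            cfg = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
                configVersion=dv_portgroup.config.configVersion,
                defaultPortConfig=port_cfg)
            return dv_portgroup.ReconfigureDVPortgroup_Task(cfg)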

    Figure 3: Configure Teaming and Failover

    Note: The iSCSI port groups are an exception to the use of NIC teaming. The iSCSI protocol does not support multichannel or link aggregation (LAG) bundling. When deploying iSCSI, a single dvUplink should be used instead of four active dvUplinks. In this solution, QFX3000-M QFabric POD1 uses one port group (PG-storage-108) and QFX3000-M QFabric POD2 uses another port group (PG-storage-208). These port groups are connected to the storage array using the iSCSI protocol. Figure 3 shows the iSCSI port group (PG-storage-108). Port group PG-storage-208 is configured in the same way.
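
    In API terms, the storage port groups simply carry a different teaming policy. The sketch below keeps a single active dvUplink for an iSCSI port group; the uplink name and the explicit-failover policy value are assumptions, and the resulting policy could be applied with the same port group reconfiguration call shown in the previous sketch.

        # Hedged pyVmomi sketch of the iSCSI exception: one active dvUplink for
        # PG-storage-108 / PG-storage-208 instead of the four-uplink team.
        from pyVmomi import vim

        def iscsi_teaming_policy(active_uplink="dvUplink1"):
            # Uplinks not listed as active or standby are left unused, which
            # keeps the iSCSI path on a single dvUplink.
            order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
                inherited=False, activeUplinkPort=[active_uplink],
                standbyUplinkPort=[])
            return vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
                inherited=False,
                policy=vim.StringPolicy(inherited=False,
                                        value="failover_explicit"),
                uplinkPortOrder=order)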

    The VMkernel TCP/IP networking stack supports iSCSI, NFS, vMotion, and fault tolerance logging. The VMkernel port enables these services on the ESXi host. Virtual machines run their own TCP/IP stacks and connect to the VMkernel at the Ethernet level through standard and distributed switches. In ESXi, the VMkernel networking interface provides network connectivity for the ESXi host and handles vMotion and IP storage. Moving a virtual machine from one host to another is called migration; VMware vMotion enables the migration of active virtual machines with no downtime.

    iSCSI, vMotion, and fault tolerance are enabled by creating four VMkernel adapters, each bound to its respective distributed port group. For more information on creating VMkernel adapters and binding them to distributed port groups, see:

    http://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.networking.doc/GUID-59DFD949-A860-4605-A668-F63054204654.html
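
    As a rough illustration of that binding, the sketch below creates one VMkernel adapter on a distributed port group (for example, PG-vMotion-106) and tags it for a service such as vMotion; the host, switch, port group, and IP values are placeholders, and the procedure in the linked documentation remains the reference.

        # Hedged pyVmomi sketch: add a VMkernel adapter bound to a distributed
        # port group and tag it for a service (vmotion, faultToleranceLogging,
        # etc.). All object and address values are placeholders.
        from pyVmomi import vim

        def add_vmkernel_adapter(host, dvs, dv_portgroup, ip, netmask,
                                 nic_type="vmotion"):
            spec = vim.host.VirtualNic.Specification(
                ip=vim.host.IpConfig(dhcp=False, ipAddress=ip,
                                     subnetMask=netmask),
                distributedVirtualPort=vim.dvs.PortConnection(
                    switchUuid=dvs.uuid, portgroupKey=dv_portgroup.key))
            # An empty standard port group name binds the vmk to the dvPG.
            vmk = host.configManager.networkSystem.AddVirtualNic("", spec)
            # Tag the new adapter for its VMkernel service.
            host.configManager.virtualNicManager.SelectVnicForNicType(nic_type,
                                                                      vmk)
            return vmk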

    Published: 2015-04-20