
    Microsoft Exchange Implementation

    This section describes the design, planning, and deployment of a highly available Microsoft Exchange Server cluster, covering both the client access and mailbox database server roles, using VMware high availability (HA). It provides configuration guidelines for the VMware vSphere HA cluster parameters and best-practice recommendations. This guide does not cover a full installation of Microsoft Exchange Server. This section covers the following topics:

    • Installation Checklist and Scope
    • Deploying Network for Exchange VM
    • Configuring Storage for Exchange VM

    Installation Checklist and Scope

    This section contains detailed instructions on configuring the network and storage. We assume that the following elements are already installed:

    • ESXi 5.1 hypervisor on the IBM Flex chassis compute node.
    • vCenter Server to manage ESXi 5.1 hosts.

    This deployment example assumes that VMware HA is configured on the ESXi hosts using vCenter Server. Virtual machines that are running on an ESXi host at the time of a complete host failure are automatically restarted on the surviving hosts in the cluster.

    VMware vSphere HA requirements:

    • All hosts in a vSphere HA-enabled cluster must have access to the same shared storage locations used by the VMs in the cluster. This includes any Fibre Channel, FCoE, iSCSI, and NFS datastores used by the VMs. In this solution, we are using iSCSI and NFS datastores.
    • All hosts in a vSphere HA cluster should have an identical virtual networking configuration. In this solution, all hosts participate in the same vSphere Distributed Switch (vDS).
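
    To spot-check the shared-storage requirement, you can list the datastores visible to each host from the ESXi shell; every host in the HA cluster should report the same iSCSI and NFS datastores. The following is a minimal sketch, not part of the original procedure:

      ~ # esxcli storage filesystem list   # VMFS datastores mounted on this host
      ~ # esxcli storage nfs list          # NFS mounts on this host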

    Deploying Network for Exchange VM

    Microsoft Exchange is a two-tier application that includes a Client Access Server (CAS) tier and a mailbox database server tier. Exchange was deployed in POD1 during testing. The CAS is the front-end server for user mailboxes; the mailbox database server is the back end, from which all user mailboxes are accessed. The back-end servers are clustered in an Exchange database availability group (DAG). For testing, 10,000 users were created in Active Directory on an LDAP server (the Windows domain controller) using the DATACENTER domain. All of the tested Microsoft enterprise applications (including Microsoft Exchange) were integrated with the Windows domain controller (DATACENTER). In this solution, each application uses a separate VLAN to ensure traffic isolation.

    Deployment of Microsoft Exchange requires the following steps:

    • Define the network prefix.
    • Define VLANs.

    Note: The Windows domain controller (DATACENTER) is considered an infrastructure prefix and is assigned to a separate network. This design and implementation assume that this VLAN (and its prefix/gateway) has already been defined on the switches.

    The Windows domain controller subnet is 172.16.1.0/24, and the default gateway is 172.16.1.254.

    Two Windows domain controllers were installed and configured, one in POD1 and the other in POD2 for redundancy. The assigned IP addresses for the two domain controllers are 172.16.1.11 and 172.16.1.10, respectively.

    1. Define network subnets for assignment to Exchange server elements.
      1. The Exchange server subnet is 172.16.4.0/24, and the default gateway is 172.16.4.254. The Exchange deployment is scaled to serve 10,000 users, and three client access servers (CAS) have been installed and configured. Server IP addresses are 172.16.4.10, 172.16.4.11, 172.16.4.12, and 172.16.4.13.
      2. The Exchange DAG cluster subnet is 172.16.9.0/24, and the default gateway is 172.16.9.254. The cluster is configured to host three mailbox database servers. A user's primary mailbox database is active on one server, with backup copies of the mailbox database available on the other two servers. The mailbox database servers communicate over VLAN-109 (the VLAN over which the DAG cluster is configured).
    2. Define VLANs for Exchange elements (enables traffic isolation).
      1. Exchange server = VLAN Exchange, vlan-id 104.
      2. Exchange mailbox cluster = VLAN Exchange-Cluster, vlan-id 109.
    3. Configure the VLAN and gateway address on the QFabric switch (POD1), as this application is located only in POD1.
      [edit]
      set vlans Exchange vlan-id 104
      set vlans Exchange l3-interface vlan.104
      set interfaces vlan unit 104 family inet address 172.16.4.254/24
      set protocols ospf area 0.0.0.10 interface vlan.104 passive
      set vlans Exchange-Cluster vlan-id 109
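
      To confirm that the VLAN and its gateway are active on the QFabric switch, a quick operational-mode check such as the following can be used (a sketch; output depends on your platform):

      show vlans Exchange
      show interfaces vlan.104 terse
      show ospf interface
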
    4. Configure VLANs on all IBM Pure Flex system CNA modules.
      vlan 104
       enable
       name "EXCHANGE"
       member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7

      vlan 109
       enable
       name "Exchange DAG"
       member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7

      The following configuration shows the 10-Gb CNA I/O Modules 1 and 2:

      vlan 104
       enable
       name "Exchange"
       member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16

      vlan 109
       enable
       name "Exchange DAG"
       member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16

      Note: This configuration is not required if you are using IBM Flex System pass-thru modules. The first example above shows the 40-Gb CNA I/O Modules (modules 1 and 2); the second shows the 10-Gb CNA I/O Modules.

    5. Configure VLANs on LAVC switches.
      [edit]
      set interfaces ae1 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae2 description "IBM Standalone server"
      set interfaces ae2 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae2 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces ae3 description IBM-FLEX-10-CNA-VLAG-BNT
      set interfaces ae3 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces ae3 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae4 description IBM-FLEX-2-CN-1-Passthrough
      set interfaces ae4 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces ae4 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae5 description IBM-FLEX-2-CN-2
      set interfaces ae5 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces ae5 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae6 description IBM-FLEX-2-CN-3
      set interfaces ae6 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces ae6 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae7 description IBM-FLEX-2-CN-4
      set interfaces ae7 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces ae7 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae8 description IBM-FLEX-2-CN5
      set interfaces ae8 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae8 unit 0 family ethernet-switching vlan members Exchange-cluster
      set vlans Exchange vlan-id 104
      set vlans Exchange-cluster vlan-id 109
    6. Allow the same VLANs and configure a Layer 3 gateway for Exchange-Cluster on both core switches, Core1 and Core2.
      [edit]
      set interfaces ae1 description "MC-LAG to vdc-pod1-sw1-nng-ae1"
      set interfaces ae1 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae2 description "MC-LAG to vdc-pod1-sw1-nng-ae2"
      set interfaces ae2 unit 0 family ethernet-switching vlan members Exchange-Cluster
      set interfaces ae4 description "MC-LAG to vdc-pod2-sw1-ae0"
      set interfaces ae4 unit 0 family ethernet-switching vlan members Exchange-Cluster
      set interfaces ae5 description "MC-LAG to vdc-pod2-sw1-ae1"
      set interfaces ae5 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae9 unit 0 description "ICL Link for all VLANS"
      set interfaces ae9 unit 0 family ethernet-switching vlan members Exchange
      set interfaces ae9 unit 0 family ethernet-switching vlan members Exchange-Cluster
      set interfaces ae10 description Layer2-internal-link-MC-LAG-core-sw-to-LB2-standby
      set interfaces ae10 unit 0 family ethernet-switching vlan members Exchange
      set interfaces irb unit 109 description Exchange-Cluster
      set interfaces irb unit 109 family inet address 172.16.9.252/24 arp 172.16.9.253 l2-interface ae9.0
      set interfaces irb unit 109 family inet address 172.16.9.252/24 arp 172.16.9.253 mac 4c:96:14:68:83:f0
      set interfaces irb unit 109 family inet address 172.16.9.252/24 arp 172.16.9.253 publish
      set interfaces irb unit 109 family inet address 172.16.9.252/24 vrrp-group 1 virtual-address 172.16.9.254
      set interfaces irb unit 109 family inet address 172.16.9.252/24 vrrp-group 1 priority 125
      set interfaces irb unit 109 family inet address 172.16.9.252/24 vrrp-group 1 preempt
      set interfaces irb unit 109 family inet address 172.16.9.252/24 vrrp-group 1 accept-data
      set interfaces irb unit 109 family inet address 172.16.9.252/24 vrrp-group 1 authentication-type md5
      set interfaces irb unit 109 family inet address 172.16.9.252/24 vrrp-group 1 authentication-key "$9$Asx6uRSKvLN-weK4aUDkq"
      set protocols ospf area 0.0.0.0 interface irb.109 passive
      set vlans Exchange vlan-id 104
      set vlans Exchange-Cluster vlan-id 109
      set vlans Exchange-Cluster l3-interface irb.109
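
      Assuming both core switches are configured, the VRRP gateway for the Exchange-Cluster VLAN can be verified from operational mode; one core switch should report master and the other backup for group 1 (a sketch, not from the original procedure):

      show vrrp summary
      show interfaces irb.109 terse
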
    7. As Exchange is being deployed in a virtual environment, you next need to create Exchange and Exchange-Cluster port groups on the vSphere Distributed Switch (vDS) using vCenter Server. Log in to vCenter Server (10.94.63.29) using the vSphere Client. To create the Exchange and Exchange-Cluster port groups, navigate to Home > Inventory > Networking.

      Figure 1: Home > Inventory > Networking

    8. Right-click dvSwitch, then click New Port Group.

      Figure 2: Create New Port Group

    9. Click Next, then Finish. Once the Exchange port group is created, you can edit the port group by right-clicking it and then modifying the teaming policy.

      Figure 3: Modify Teaming Policy

    10. Repeat Steps 7 through 9 to create the port groups for Exchange-Cluster, Storage-108, and Storage-208. An example of the PG-Storage-108 port group follows.

      Figure 4: PG-STORAGE-108 Settings


      An example of PG-Storage-208 follows.

      Figure 5: PG-STORAGE-208 Settings


      Note: The storage port group using the iSCSI protocol doesn’t support port bonding (LAG). In the case of iSCSI, there is only one active uplink.
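
      Because only one uplink is active for the iSCSI port groups, it is worth verifying (once the software iSCSI adapter is configured later in this guide) that each storage VMkernel port is bound to a compliant uplink. A hedged example from the ESXi shell, assuming the software adapter enumerates as vmhba33:

      ~ # esxcli iscsi networkportal list --adapter=vmhba33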

    Configuring Storage for Exchange VM

    The Exchange VM connects to storage via the iSCSI protocol. This section details the creation of the storage for the Exchange VM.

    To create storage via iSCSI protocol for connection to the Exchange VM, follow these steps:

    1. Log in with the EMC Unisphere tool to access EMC storage.
    2. Select a storage system.
    3. Navigate to Storage > Storage Configuration > Storage Pools. In the Pools tab, click Create.

      Figure 6: EMC Unisphere Tool

    4. Provide a name for the storage pool (in this example, Pool 1 – Exchange-DB).

      Figure 7: Create Storage Pool Name

    5. Ensure that FAST Cache is enabled (under the Advanced tab).

      Figure 8: FAST Cache enabled

    6. Create and allocate a LUN to the storage pool. Select the VNX system using the Unisphere tool.
    7. Navigate to Storage > LUNs.
    8. In the Create LUN dialog, under Storage Pool Properties:
      1. Select Pool.
      2. Select a RAID type for the LUN: For Pool LUNs, only RAID 6, RAID 5, and RAID 1/0 are valid. RAID 5 is the default RAID type.
      3. If pools with the specified RAID type are available, the software populates the storage pool list for the new LUN with those pools, or displays the name of the selected pool. The Capacity section displays information about the selected pool. If there are no pools with the specified RAID type, click New to create one.
    9. In LUN Properties, select the Thin checkbox if you are creating a thin LUN.
    10. Assign a User Capacity and ID to the LUN you want to create.
    11. To create more than one LUN, select a number in Number of LUNs to create. For multiple LUNs, the software assigns sequential IDs to the LUNs as they are available. For example, to create five LUNs starting with LUN ID 11, the LUN IDs might be 11, 12, 15, 17, and 18.
    12. In LUN Name, either specify a name or select automatically assign LUN IDs as LUN Names.
    13. Choose one of the following options:
      1. Click Apply to create the LUN with the default advanced properties, or
      2. Click the Advanced tab to assign the properties yourself.
    14. Assign optional advanced properties for the LUN:
      1. Select a default owner (SP A or SP B) for the new LUN or accept the default value of Auto.
      2. Set the FAST tiering policy option.
    15. Click Apply to create the LUN, and then click Cancel to close the dialog box. An icon for the LUN is added to the LUN view window. Below is an example of the Exchange LUN that was created.

      Figure 9: Exchange-DB LUN

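    The same pool LUN can also be created from the command line with the VNX Block CLI (naviseccli). The following is a sketch only; the SP address, capacity, and LUN ID are placeholder assumptions, not values from this deployment, and the pool name should match the pool created in Step 4:

      naviseccli -h <SP-A-IP> lun -create -type Thin -capacity 500 -sq gb -poolName "Pool 1 - Exchange-DB" -l 11 -name Exchange-DB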

    Enabling Storage Groups with Unisphere

    You must enable storage groups using Unisphere if only one server is connected to the system and you want to connect additional servers to the system. The Storage Groups option lets you place LUNs into groups that are known as storage groups. These LUNs are accessible only to the host that is connected to the storage group. To enable storage groups with Unisphere:

    1. Select All Systems > VNX System.
    2. Select Hosts > Storage Group. (Once you enable Storage Groups for a storage system, any host currently connected to the storage system will no longer be able to access data on the storage system. To the host, it will appear as if the LUNs were removed. In order for the host to access the storage data, you must add LUNs to the Storage Group and then connect the host to the Storage Group.)
    3. Click OK to save changes and close the dialog box, or click Apply to save changes without closing the dialog box. Figure 10 shows the storage group that was created. Any new LUNs added will be added to this storage group.

      Figure 10: Storage Group Created


      Figure 11 shows the LUNs tab of the storage group properties. You can see all LUNs that have been added to the storage group.

      Figure 11: Storage Group Properties - LUNs Tab

    4. From the Hosts tab (Storage Group Properties), you can select hosts to add to the storage group (that is, which hosts are able to access the group).

      Figure 12: Hosts Allowed to Access the Storage Group


      Once the storage group is created, LUNs can be added to it directly from the Storage > LUNs screen.

      Figure 13: Add LUN to Storage Group

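    For reference, the equivalent storage-group operations can also be sketched with naviseccli; the SP address, group name, host name, and LUN numbers below are placeholders, not values from this deployment:

      naviseccli -h <SP-A-IP> storagegroup -create -gname ESX-Hosts
      naviseccli -h <SP-A-IP> storagegroup -addhlu -gname ESX-Hosts -hlu 0 -alu 11
      naviseccli -h <SP-A-IP> storagegroup -connecthost -host esx-host-1 -gname ESX-Hosts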

    Provisioning LUNs to ESXi Hosts

    The LUNs created in the previous steps now need to be added and mounted as datastores on the appropriate ESXi hosts. To do this, you must first configure the VMware kernel network. ESXi uses VMkernel ports for system management and IP storage. VMkernel IP storage interfaces provide access to one or more EMC VNX iSCSI network portals or NFS servers.

    To configure VMkernel:

    1. Log in to vCenter Server using the VMware vSphere Client.
    2. Navigate to Home > Inventory > Hosts and Clusters.
    3. Select a host and, on the right side, click the Configuration tab.
    4. Under Networking, select vSphere Distributed Switch and click Manage Virtual Adapters.

      Figure 14: Manage Virtual Adapters

    5. Click Add to create a new VMkernel port, then click Next.

      Figure 15: Add New VMkernel Port

    6. Select a virtual adapter type (VMkernel) and click Next.

      Figure 16: Select VMkernel as Adapter Type

    7. Select the port group (created in previous steps for POD1). Click Next.

      Figure 17: Select Port Group

    8. Configure IP address settings for the VMkernel virtual adapter, click Next, and then click Finish.

      Figure 18: VMkernel IP Settings

    9. Before configuring the VM, make sure that the EMC VNX storage is reachable. You can do this from the ESXi server shell using the ping (or vmkping) command.
      ~ # esxcfg-vmknic -l
      Interface  Port Group/DVPort   IP Family IP Address                              Netmask         Broadcast       MAC Address       MTU     TSO MSS   Enabled Type                
      vmk0       17                  IPv4      10.94.47.131                            255.255.255.0   10.94.47.255    00:90:fa:1c:8a:04 9000    65535     true    STATIC              
      vmk0       17                  IPv6      fe80::290:faff:fe1c:8a04                64                              00:90:fa:1c:8a:04 9000    65535     true    STATIC, PREFERRED   
      vmk1       130                 IPv4      172.16.8.27                             255.255.255.0   172.16.8.255    00:50:56:60:0a:0b 9000    65535     true    STATIC              
      vmk1       130                 IPv6      fe80::250:56ff:fe60:a0b                 64                              00:50:56:60:0a:0b 9000    65535     true    STATIC, PREFERRED   
      vmk3       1498                IPv4      172.16.6.31                             255.255.255.0   172.16.6.255    00:50:56:67:9b:5c 9000    65535     true    STATIC              
      vmk3       1498                IPv6      fe80::250:56ff:fe67:9b5c                64                              00:50:56:67:9b:5c 9000    65535     true    STATIC, PREFERRED   
      vmk4       1370                IPv4      172.16.7.31                             255.255.255.0   172.16.7.255    00:50:56:68:53:ee 9000    65535     true    STATIC              
      vmk4       1370                IPv6      fe80::250:56ff:fe68:53ee                64                              00:50:56:68:53:ee 9000    65535     true    STATIC, PREFERRED
      ~ # ping 172.16.8.1
      PING 172.16.8.1 (172.16.8.1): 56 data bytes
      64 bytes from 172.16.8.1: icmp_seq=0 ttl=128 time=0.383 ms
      64 bytes from 172.16.8.1: icmp_seq=1 ttl=128 time=0.215 ms
      64 bytes from 172.16.8.1: icmp_seq=2 ttl=128 time=0.231 ms
      
      --- 172.16.8.1 ping statistics ---
      3 packets transmitted, 3 packets received, 0% packet loss
      round-trip min/avg/max = 0.215/0.276/0.383 ms
      
      ~ # ping 172.16.8.2
      PING 172.16.8.2 (172.16.8.2): 56 data bytes
      64 bytes from 172.16.8.2: icmp_seq=0 ttl=128 time=0.451 ms
      64 bytes from 172.16.8.2: icmp_seq=1 ttl=128 time=0.243 ms
      64 bytes from 172.16.8.2: icmp_seq=2 ttl=128 time=0.224 ms
      
      --- 172.16.8.2 ping statistics ---
      3 packets transmitted, 3 packets received, 0% packet loss
      round-trip min/avg/max = 0.224/0.306/0.451 ms
      ~ #
      
    10. From vCenter, click Storage Adapters. If the iSCSI software adapter is not installed, click Add and install the adapter.

      Figure 19: Install iSCSI Software Adapter

    11. Once installed, right-click the iSCSI software adapter and select Properties. You should see that the software is enabled.

      Figure 20: iSCSI Initiator Is Enabled

    12. Click the Network Configuration tab to verify that the storage is configured and connected.

      Figure 21: iSCSI Initiator Network Configuration

    13. Click the Dynamic Discovery tab and click Add. Enter the IP address and port of your EMC VNX storage.

      Figure 22: Add iSCSI Server Location in Dynamic Discovery

    14. Click OK and Close. When prompted to rescan the HBA, click Yes. You should see a LUN presented on the server.

      Figure 23: LUN Present on the Server

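      If you prefer the ESXi shell, Steps 10 through 14 can be approximated with esxcli. This is a sketch: the adapter name vmhba33 is an assumption (check esxcli iscsi adapter list), and 172.16.8.1 is one of the VNX portal addresses verified earlier:

      ~ # esxcli iscsi software set --enabled=true
      ~ # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=172.16.8.1:3260
      ~ # esxcli storage core adapter rescan --adapter=vmhba33
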
    15. From the vSphere Client, select the Exchange-Logs server and click Add Storage.

      Figure 24: Add Storage from vSphere Client

    16. Select Disk/LUN as the storage type. Click Next.

      Figure 25: Select Disk/LUN for Storage Type

    17. Select the Disk/LUN you want to mount. Verify that you are mounting the proper LUN using the LUN ID, capacity, and path ID. Click Next.

      Figure 26: Select LUN to Mount

    18. Select VMFS-5, which is supported in ESXi 5.1 and also supports volumes larger than 2 TB. Click Next.

      Figure 27: Select VMFS-5 as a File System

    19. Notice that the hard disk is blank under Current Disk Layout. Click Next.
    20. Enter a name for the datastore and click Next.

      Figure 28: Name the Datastore

    21. Select the maximum capacity for this datastore. (The maximum capacity is the default option.) Click Next and then Finish. Click Properties on the newly created datastore to see output similar to the following:

      Figure 29: Datastore Creation Complete

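    You can also confirm the new datastore from the ESXi shell; the datastore name below matches the POD1-VMFS-LUN3 example used later in this guide (a sketch, not part of the original procedure):

      ~ # vmkfstools -Ph /vmfs/volumes/POD1-VMFS-LUN3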

    Configuring vMotion Support

    This implementation supports vMotion. You must make sure, however, that in EMC Unisphere all ESXi hosts have been given access to all LUNs. Once this access is configured, the other hosts will mount the same datastore when you initiate a rescan on them. Similarly, all LUNs must be mounted on all appropriate ESXi hosts, as this is a requirement for vMotion; network and storage must be provisioned to all ESXi hosts. The next section shows how to add a new VM or modify an existing VM for application scaling. This guide does not cover the installation or configuration of the Exchange applications.
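
    A hedged way to bring the datastore up on the remaining hosts is to rescan each one from its ESXi shell and confirm that the VMFS extents match across hosts:

      ~ # esxcli storage core adapter rescan --all
      ~ # esxcli storage vmfs extent list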

    To configure vMotion support for an ESXi host, follow these steps:

    1. Log in to vCenter Server using the vSphere Client.
    2. Select the cluster in which to create the new VM, and click Create a new virtual machine.

      Figure 30: Create New VM

    3. Select Typical and then click Next.

      Figure 31: VM Type

    4. Enter a name for your virtual machine. Click Next.

      Figure 32: Give the VM a Name

    5. To mount storage to the VM, select the datastore POD1-VMFS-LUN3 created earlier, and click Next.

      Figure 33: Select Storage

    6. Select an operating system (Windows 2012 64-bit was used in this scenario). Click Next.

      Figure 34: Select Operating System

    7. The Exchange CAS requires only one NIC; the Exchange mailbox server requires two NICs. You can add the second NIC here or wait until the VM is created. For now, leave the default and click Next.

      Figure 35: Configure Network

    8. Select the virtual disk size for the operating system. (This will be the C:/ drive in the OS.) Click Finish.

      Figure 36: Select Virtual Disk Size

    9. This example creates a new VM that can be modified based on your requirements. For instance, an Exchange mailbox server requires additional disks and an additional network adapter for use in the Exchange cluster. An example of a modified VM is shown below.

      Figure 37: Virtual Machine with Additional Disks and Network Adapters

    10. Once you have provisioned all of the VM resources, you can start the installation by mounting the installation ISO as a CD. In this case, you would first install and update Microsoft Windows Server 2012. Once the operating system is installed, Exchange (and all of its dependencies, such as AD integration) can be installed.

    Published: 2015-04-20