    Load Balancing

    Overview

    This section provides the implementation details of the F5 load balancer deployed in the MetaFabric 1.0 solution test lab. It covers the following topics:

    • Load-balancer topology
    • Redundancy
    • Link and network configuration
    • VIP and server pool configuration
    • Traffic flow

    Topology

    The topology used to test the F5 load-balancing element of the MetaFabric 1.0 solution is shown in Figure 1. The MetaFabric 1.0 data center solution uses F5 to load-balance traffic between servers. The solution testing featured two VIPRION C4480 hardware chassis, each configured with one B4300 blade for 10-Gigabit connectivity. The VIPRION chassis run the BIG-IP 10.2.4 Build 591.0 Hotfix HF2 software image.

    Figure 1: Load Balancing Topology

    In this solution, two VIPRION systems are connected to the core switches using LAG. The F5 systems are configured with virtual IPs (VIPs) and server pools to provide load-balancing services for SharePoint, Wikimedia, and Exchange traffic. The SharePoint, Wikimedia, and Exchange servers are connected to POD switches on VLANs 102, 103, and 104, respectively. DSR mode is configured on the F5 systems for all VIPs so that return traffic from the servers bypasses the load balancer.

    Configuring Redundancy

    Each VIPRION system is configured as a cluster in this topology, although they can also be configured as single devices. In this solution, the two VIPRION systems are configured as two clusters (one cluster per chassis), deployed in active/standby mode for redundancy. This means that one cluster is active and processing network traffic, while the other cluster is up and available to process traffic, but is in a standby state. If the active cluster becomes unavailable, the standby cluster automatically becomes active, and begins processing network traffic.

    For redundancy, a dedicated failover link is configured between the two VIPRION systems as a LAG interface. Interfaces 1/1.3 and 1/1.4 are configured as members of the failover LAG on both systems. The following steps are required to configure redundancy (failover); a conceptual sketch of the resulting active/standby behavior follows the list:

    1. Create an interface trunk dedicated for failover.
    2. Create a dedicated failover VLAN.
    3. Create a self IP address and associate the self IP with the failover VLAN.
    4. Define a unicast entry specifying the local and remote self IP addresses.
    5. Define a multicast entry using the management interface for each VIPRION system.

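    The failover behavior that these steps enable is implemented entirely by BIG-IP. The following minimal Python sketch only illustrates the general active/standby pattern: each unit sends heartbeats over the dedicated failover VLAN to its peer's self IP address (the unicast entry in step 4), and the standby promotes itself to active when the heartbeats stop. The addresses, port, and timers below are illustrative assumptions, not the BIG-IP failover protocol.

      # Illustrative active/standby heartbeat over a dedicated failover link.
      # This is NOT the BIG-IP failover protocol; the addresses, port, and
      # timers are assumptions chosen to mirror the unicast entry in step 4.
      import socket
      import time

      LOCAL_SELF_IP = "10.255.255.1"   # local self IP on the failover VLAN (assumed)
      PEER_SELF_IP = "10.255.255.2"    # peer self IP on the failover VLAN (assumed)
      PORT = 1026                      # example port for heartbeat traffic
      HEARTBEAT_INTERVAL = 1.0         # seconds between heartbeats
      FAILOVER_TIMEOUT = 3.0           # silence threshold before taking over

      def run(role: str) -> None:
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind((LOCAL_SELF_IP, PORT))
          sock.settimeout(HEARTBEAT_INTERVAL)
          last_seen = time.monotonic()
          while True:
              # Advertise our state to the peer over the failover VLAN.
              sock.sendto(role.encode(), (PEER_SELF_IP, PORT))
              try:
                  data, _ = sock.recvfrom(64)
                  if data == b"active":
                      last_seen = time.monotonic()
              except socket.timeout:
                  pass
              # The standby promotes itself if the active peer goes silent.
              if role == "standby" and time.monotonic() - last_seen > FAILOVER_TIMEOUT:
                  role = "active"
                  print("Peer lost -- taking over as the active unit")

      if __name__ == "__main__":
          run("standby")
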
    For more information on redundancy and failover configuration of the F5 Load Balancer, see:

    http://support.f5.com/kb/en-us/solutions/public/11000/900/sol11939.html

    For more information on redundant cluster configuration, see:

    http://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/VIPRION_configuration_guide_961/clustered_systems_redundant.html

    Configuring the Link and Network

    The F5 load-balancing topology deployed in this solution features two LAG interfaces. One LAG is configured for external connectivity (it receives service requests from the Internet destined for the VIPs configured on the F5) and one LAG is configured for internal connectivity to the servers connected to the PODs. Both F5 systems are configured with external and internal connectivity, and these interfaces are in the UP state on both systems. Only the active F5 system processes traffic; the standby takes over only if the active system fails.

    Each LAG on the F5 system has two member links. One member link connects to Core Switch 1 and the other connects to Core Switch 2. MC-LAG is configured on the core switches, so to the F5 the LAG appears to connect to a single system. LACP is used as the control protocol for forming the LAG between the F5 and the core switches.

    The following configurations are performed on both F5 systems to enable external connectivity:

    • Create a LAG named External on both F5 systems, and assign interfaces 1/1.5 and 1/1.6 as members of that LAG.
    • Create VLAN 15, name it External on both F5 systems, and assign the VLAN to the External LAG.
    • Configure a self IP address of 192.168.15.3 on the active F5 system, and 192.168.15.4 on the standby F5 system for VLAN 15.
    • Create a floating IP address 192.168.15.5 on the active F5 system. This floating IP address is active on the active F5 cluster. If the active load balancer fails, this floating IP address is used on the new active load balancer.

    Configure internal connectivity with the following steps:

    • Create a LAG named core-sw on both F5 systems, and assign interfaces 1/1.1 and 1/1.2 as members of that LAG.
    • Create VLANs 102, 103, and 104, named Core-Access, Wikimedia-Access, and Exchange-Access, respectively, on both F5 systems, and assign the VLANs to the core-sw LAG. The SharePoint, Wikimedia, and Exchange servers are located in VLANs 102, 103, and 104, respectively.
    • Create self IP addresses of 172.16.2.25, 172.16.3.25, and 172.16.4.25 for VLANs 102, 103, and 104, respectively.

    Internal connections to the servers are configured as a Layer 2 connection through the POD switches (that are connected to the core switches).

    Note: For external connections, static routes to the VIPs configured on the F5 are advertised from the core switch so that clients on the Internet can send requests to the VIP for specific services such as Exchange, Wikimedia, and SharePoint. For these static routes, the floating IP address created for the “external” VLAN (192.168.15.5) is configured as the next hop to reach the VIPs. When the active cluster fails, the new active cluster uses the configured floating IP address, sends a gratuitous ARP for this floating IP address, and begins receiving traffic.
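
    The behavior described in the note above can be visualized with a short sketch. BIG-IP sends the gratuitous ARP automatically when a failover occurs; this Scapy-based snippet (Scapy is a third-party packet library and must run with root privileges) only illustrates the mechanism. The interface name is an assumption; the floating IP address is the one used in this solution.

      # Conceptual sketch of the gratuitous ARP that the new active unit sends
      # for the floating IP address after a failover. BIG-IP does this
      # automatically; this snippet only illustrates the mechanism.
      from scapy.all import ARP, Ether, get_if_hwaddr, sendp

      FLOATING_IP = "192.168.15.5"   # floating self IP on the external VLAN
      IFACE = "eth1"                 # interface carrying VLAN 15 (assumed name)

      def send_gratuitous_arp() -> None:
          mac = get_if_hwaddr(IFACE)
          # In a gratuitous ARP the sender and target protocol addresses are
          # both the floating IP, so upstream devices (here, the core switches)
          # refresh their ARP caches with the new active unit's MAC address.
          garp = Ether(dst="ff:ff:ff:ff:ff:ff", src=mac) / ARP(
              op=2,                        # ARP reply ("is-at")
              hwsrc=mac,
              psrc=FLOATING_IP,
              hwdst="ff:ff:ff:ff:ff:ff",
              pdst=FLOATING_IP,
          )
          sendp(garp, iface=IFACE, verbose=False)

      if __name__ == "__main__":
          send_gratuitous_arp()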

    Configuring VIP and Server Pool

    A virtual server IP address (VIP) is a traffic-management object on the F5 system that is represented by an IP address and a service. Clients on an external network can send application traffic to a virtual server, which then directs the traffic according to the VIP configuration. The main purpose of a virtual server is to balance traffic load across a pool of servers on an internal network. Virtual servers increase the availability of resources for processing client requests by monitoring server load and distributing the load across all available servers for a particular service.

    Note: In this solution testing, nPath routing (DSR, or Direct Server Return) is used to bypass the F5 for return path traffic from servers, routing traffic directly to the destination from the application servers.
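
    A minimal Python model of a VIP in nPath (DSR) mode is sketched below. It only illustrates the idea that the load balancer picks a pool member without rewriting the client source or VIP destination address, so the chosen server can reply to the client directly. The server addresses and the round-robin method are assumptions for illustration; only the Wikimedia VIP and port come from this solution.

      # Minimal model of a VIP in nPath/DSR mode: the load balancer only picks
      # a pool member; it never rewrites the client source or VIP destination
      # address. Server IPs and round-robin selection are assumptions.
      import itertools
      from dataclasses import dataclass

      @dataclass
      class Packet:
          src: str     # client IP address
          dst: str     # VIP address
          dport: int   # destination port

      class DsrVirtualServer:
          def __init__(self, vip: str, port: int, pool: list[str]):
              self.vip, self.port = vip, port
              self._members = itertools.cycle(pool)   # simple round-robin

          def forward(self, pkt: Packet) -> str:
              assert (pkt.dst, pkt.dport) == (self.vip, self.port)
              server = next(self._members)
              # DSR: the packet is forwarded as is; the chosen server owns the
              # VIP on its loopback and replies straight to pkt.src.
              print(f"{pkt.src} -> {pkt.dst}:{pkt.dport} dispatched to {server} (no SNAT)")
              return server

      wiki = DsrVirtualServer("10.94.127.182", 80, ["172.16.3.11", "172.16.3.12"])
      wiki.forward(Packet(src="198.51.100.10", dst="10.94.127.182", dport=80))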

    It is recommended to use the nPath template in the F5 configuration GUI (Template and wizards window) to configure VIPs and server pools in DSR mode. The following link provides more detail on configuring nPath in F5 systems:

    http://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_implementations_guide_10_1/sol_npath.html

    In this solution, three VIP addresses (10.94.127.180, .181, and .182) are configured and server pools are assigned to these VIPs using the nPath template to serve the SharePoint, Exchange, and Wikimedia services.

    The following tasks need to be completed in order to configure the BIG-IP system to use nPath routing:

    • Create a custom Fast L4 profile.
    • Create a pool that contains the content servers.
    • Define a virtual server with port and address translation disabled and assign the custom Fast L4 profile to it.
    • Configure the virtual server address on each server loopback interface (a verification sketch follows this list).
    • Set the default route on your servers to the router’s internal IP address.
      • Ensure that the BIG-IP configuration key connection.autolasthop is enabled. Alternatively, on each content server, you can add a return route to the client.
    • SharePoint VIP services are mapped to 10.94.127.180:8080 (TCP port 8080).
    • MediaWiki services are mapped to 10.94.127.182:80.
    • Exchange services are mapped to:
      • 10.94.127.181:993 (IMAP4)
      • 10.94.127.181:443 (Outlook Web Access)
      • 10.94.127.181:0 (any port for RPC)
      • 10.94.127.181:995 (POP3)
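
    On the server side, the nPath requirement that the virtual server address be configured on each server loopback interface (noted in the task list above) can be sanity-checked with a few lines of Python. The sketch below uses the third-party psutil package and assumes a Linux host whose loopback interface is named lo; adjust both for your environment.

      # Server-side sanity check for nPath/DSR: confirm the VIP is configured
      # on the loopback interface so the server accepts traffic addressed to
      # the VIP. Requires the third-party psutil package; "lo" assumes Linux.
      import psutil

      VIP = "10.94.127.181"   # Exchange VIP from this solution

      def vip_on_loopback(vip: str = VIP, iface: str = "lo") -> bool:
          addrs = psutil.net_if_addrs().get(iface, [])
          return any(addr.address == vip for addr in addrs)

      if __name__ == "__main__":
          if vip_on_loopback():
              print(f"{VIP} is configured on the loopback interface")
          else:
              print(f"{VIP} is missing from the loopback interface")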

    The following illustrations and steps explain the creation of nPath routing for IMAP4 for Exchange. These examples can be used to guide the creation of nPath routing for other services by substituting the VIP address and port number.

    To configure nPath routing for IMAP4 for Exchange, follow these steps:

    1. Using the nPath template, create a VIP, server pool, and monitor. This template creates a Fast L4 profile and assigns it to the VIP address.

      Figure 2: Configure nPath
    2. The window shown in Figure 2 configures nPath routing using the configuration template.
      1. Assign a unique prefix name (my_nPath_IMAP) for the F5 system to name the server pool, monitor, and other objects.
      2. To create the IMAP4 VIP as part of the Exchange service, specify the VIP address 10.94.127.181, TCP port 993.
      3. This template gives a choice of creating a new server pool or using an existing pool. By default, it creates a new server pool using the prefix name shown (my_nPath_IMAP_pool). Add servers one at a time with the IP address and port number, as shown.
      4. This template gives a choice of creating a new monitor or using an existing monitor. By default, it creates a new monitor using the prefix name shown (my_nPath_IMAP_TCP_pool) with TCP monitoring; the user can change the monitoring type. The default health-check interval is 30 seconds, and the timeout value is 91 seconds.
      5. Click Finished to create nPath routing for an IMAP service with VIP 10.94.127.181:993.

      Note: The default TCP monitor, with no Send string or Receive string configured, tests a service by establishing a TCP connection with the pool member on the configured service port and then immediately closing the connection without sending any data. This causes some services, such as telnet and ssh, to log a connection error and fills the server logs with unnecessary errors. To eliminate the extraneous logging, you can configure the TCP monitor to send just enough data to the service, or use the tcp_half_open monitor instead. Depending on your monitoring requirements, you might also be able to monitor a service that expects empty connections, such as tcp_echo (by using the default tcp_echo monitor). A minimal sketch of this probe behavior appears after this procedure.

      Note: Each server has four 10-Gb NIC ports connected to the QFX3000-M QFabric PODs as data ports for all VM traffic. Each system is connected to each POD for redundancy purposes. The IBM System 3750 is connected to POD1 using 4 x 10-Gigabit Ethernet. A second IBM System 3750 connects to POD2 using 4 x 10-Gigabit Ethernet. The use of a LAG provides switching redundancy in case of a POD failure.

    3. Create or verify the other objects (pool, profile, and monitor) created by the template along with the VIP. As shown in Figure 3, the nPath template created the required Fast L4 profile.

      Figure 3: Verify Objects during nPath Configuration
    4. Verify the VIP by selecting Virtual Servers under the Local Traffic tab. The VIP in this example is named my_nPath_IMAP_virtual_server with an assigned IP address of 10.94.127.181 and TCP port 993. As required for nPath, the Performance (Layer 4) profile, also known as the Fast L4 profile, is assigned to this VIP.

      Figure 4: Configure and Verify VIP

      Note that the SNAT pool is disabled for this VIP as per the nPath requirement. When traffic enters the F5 system for this VIP, the F5 does not perform SNAT; it forwards the traffic to the server as is, without modifying the source or destination address. The VIP performs only load balancing and sends the traffic to the selected server with the client IP address as the source address and the VIP address (10.94.127.181) as the destination.

      Note: Servers should be configured to process these packets with the VIP address (10.94.127.181) as the destination address. A loopback interface adapter should be installed and configured with the VIP address on the server to enable processing of application traffic. A proper IP routing entry should also be present in the servers to route these packets directly to the client destination address.
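
    The default TCP monitor probe described in the note in step 2 (open a TCP connection to the pool member on the service port, close it immediately, repeat every 30 seconds, and mark the member down after 91 seconds without a successful probe) can be approximated with the following Python sketch. It mirrors the documented defaults only; it is not the BIG-IP monitor implementation, and the member address is an assumption.

      # Approximation of the default TCP monitor behavior described above:
      # full TCP handshake, immediate close with no payload (the source of the
      # "connection error" noise in server logs), every 30 seconds, with the
      # member marked down after 91 seconds without a successful probe.
      import socket
      import time

      INTERVAL = 30   # seconds between health checks (documented default)
      TIMEOUT = 91    # seconds without success before marking down (default)

      def monitor(member_ip: str, port: int = 993) -> None:
          last_success = time.monotonic()
          while True:
              try:
                  with socket.create_connection((member_ip, port), timeout=5):
                      pass                      # connect, then close immediately
                  last_success = time.monotonic()
                  state = "up"
              except OSError:
                  state = "up" if time.monotonic() - last_success < TIMEOUT else "down"
              print(f"{member_ip}:{port} is {state}")
              time.sleep(INTERVAL)

      if __name__ == "__main__":
          monitor("172.16.4.11")   # assumed Exchange pool member address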

    Load-Balanced Traffic Flow

    This section explains the packet flow from a client on the Internet to the server through the F5 system, and from the server back to the client bypassing the F5 system (Figure 5).

    Figure 5: Load-Balancing Traffic Flow

    Traffic flows to and from the load balancers in the following manner (a condensed address trace follows the list):

    1. As shown in Figure 5, three VIPs are created in the F5 system for SharePoint, Wikimedia, and Exchange. Assume that the client sends a request to the Wikimedia service. The client sends the request to the VIP address 10.94.127.182, with the client’s IP address as the source IP address, the VIP address (10.94.127.182) as the destination IP address, and 80 as the destination port. As described in the previous section, the core switch advertises the VIP address in the network, so the edge router knows the route to reach the VIP address 10.94.127.182.
    2. This packet arrives on the active F5 system via the external LAG.
    3. Because of the nPath configuration, the Wikimedia VIP load-balances the traffic and sends it to one of the servers as is, without modifying the source or the destination address. The F5 system reaches the Wikimedia servers by way of a Layer 2 connection on VLAN 103. The internal LAG connection is a trunk port carrying VLANs 102, 103, and 104 to reach all the servers.
    4. The Wikimedia server receives the traffic on the loopback address (configured with VIP IP 10.94.127.182) and processes it.
    5. The Wikimedia server sends the reply back to the client by way of the router, bypassing the F5 system.
    6. The return packet reaches the client.
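
    The addressing at each hop of this flow can be condensed into the short trace below. The key points are that the F5 system never rewrites the source or destination address and never sees the return traffic. The client and server identifiers are illustrative; only the VIP and port come from this solution.

      # Hop-by-hop (source, destination) addressing for the Wikimedia DSR flow
      # described above. Client and server identifiers are illustrative; the F5
      # never rewrites addresses and never sees the return traffic.
      CLIENT = "198.51.100.10"        # illustrative client address
      VIP = "10.94.127.182:80"        # Wikimedia VIP and port
      SERVER = "wikimedia-server"     # selected pool member (VIP on loopback)

      FLOW = [
          ("client -> edge router -> core switch -> active F5", CLIENT, VIP),
          (f"active F5 -> {SERVER} over VLAN 103 (no SNAT)", CLIENT, VIP),
          (f"{SERVER} -> router -> client (bypasses the F5)", VIP, CLIENT),
      ]

      for hop, src, dst in FLOW:
          print(f"{hop}: src={src} dst={dst}")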

    Published: 2015-04-20