
Traffic Load Balancer Overview

 

Traffic Load Balancer Application Description

Traffic Load Balancer (TLB) is supported on MX Series routers with Multiservices Modular Port Concentrator (MS-MPC) and Modular Port Concentrator (MPC) line cards, as well as the Services Processing Card (MX-SPC3) when running Next Gen Services on MX Series routers (MX240, MX480 and MX960). TLB enables you to distribute traffic among multiple servers.

TLB leverages the MPC’s inline functionality, based on an enhanced version of equal-cost multipath (ECMP). Enhanced ECMP facilitates the distribution of sessions across multiple next-hop servers. Enhancements to native ECMP ensure that when servers fail, only flows associated with those servers are impacted, minimizing the overall network churn on services and sessions.

TLB uses the services PIC capabilities of the DPC to provide application-based health monitoring for up to 255 next-hop servers per group, enabling intelligent traffic steering based on the server availability information maintained in a next-hop server distribution table. The TLB solution uses a session distribution next-hop API to update the server distribution table and retrieve statistics.

TLB applies its session distribution processing to ingress traffic. Use firewall filters when necessary to select traffic from the ingress interface. Traffic is processed unchanged as it is moved from the ingress interface to the next-hop server. Network Address Translation (NAT) and packet modification are not applied.

Note

You cannot run Deterministic NAT and TLB simultaneously.

TLB Topology

TLB topology is shown in Figure 1.

Figure 1: TLB Topology

TLB Key Characteristics

The following are key characteristics of TLB.

  • TLB only distributes the requests for any flow; the response is expected to return directly to the client/source.

  • TLB supports hash-based load balancing based on source IP, destination IP, and protocol.

  • TLB enables you to configure servers offline to prevent the performance impact caused by rehashing all existing flows. You can add a server in the administratively down state and later bring it into traffic distribution by removing the admin-down setting. This prevents traffic impact to other servers.

  • When health checking determines a server to be down, only the affected flows are rehashed.

  • When a previously down server is returned to service, all flows that hash to that server return to it, impacting performance for those flows. For this reason, you can disable the automatic rejoining of a server to an active group. Servers are then returned to service only by issuing the request services traffic-load-balance real-service rejoin operational command (a usage sketch follows this list).

  • The health check monitoring application runs on an MS-DPC/NPU. TLB traffic is not forwarded to the MS-DPC/NPU.

  • NAT is not applied to the distributed sessions.

  • High availability is accomplished by stateless failover between two MX3D routers. The routers can seamlessly back up one another because they use the same hash algorithm, which results in the same server being selected for the same flow.
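
When automatic rejoin is disabled, a previously down server is brought back into service with the operational command named above. The following is a hedged usage sketch; the instance, group, and server names (tlb1, web-servers, server-1) are placeholders, and the exact argument order may vary by release.

  user@mx> request services traffic-load-balance real-service rejoin instance tlb1 group web-servers real-service server-1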

TLB Application Components

Servers and Server Groups

TLB enables you to configure groups of up to 255 servers (referred to in the configuration as real services) as next-hop destinations for stateless session distribution. You can configure up to 1024 servers associated with one services PIC used for health checking. All servers used in server groups must be configured individually before they are assigned to groups. The session distribution hashing algorithm uses key-selectable hashing for session distribution. Distribution information is maintained in a server distribution table. You can add servers to and delete servers from the TLB server distribution table, and you can change the administrative status of a server.

Note

The TLB solution uses the session distribution next-hop API to update the server distribution table and retrieve statistics. Applications do not have direct control over server distribution table management; they can influence changes only indirectly, through the add and delete services of the TLB API.
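
As a minimal configuration sketch, servers (real services) and a group that collects them might be defined as follows. The [edit services traffic-load-balance instance ...] hierarchy, the address and real-services statements, and the names tlb1, web-servers, server-1, and server-2 are illustrative assumptions rather than a definitive configuration; network-monitoring-profile is the statement described later in this overview for attaching health check profiles to the group.

  [edit services traffic-load-balance]
  instance tlb1 {
      real-service server-1 {
          address 192.0.2.11;                        # each server is configured individually
      }
      real-service server-2 {
          address 192.0.2.12;
      }
      group web-servers {
          real-services [ server-1 server-2 ];       # up to 255 servers per group
          network-monitoring-profile [ icmp-check ]; # health check profile (see next section)
      }
  }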

Server Health Monitoring — Single Health Check and Dual Health Check

TLB supports the ICMP, TCP, and HTTP health check protocols to monitor the health of servers in a group. You can use a single probe type for a server group, or a dual health check (TLB-DHC) configuration, which includes two probe types. The configurable health monitoring function resides on a services PIC. By default, probe requests are sent every 5 seconds. Also by default, a real server is declared down only after five consecutive probe failures, and declared up only after five consecutive probe successes.

TLB provides application stickiness, meaning that server failures or changes do not affect traffic flows to other active servers. Changing a server's administrative state from down to up, or from up to down, does not impact any active flows to the remaining servers in the server distribution table. Adding a server to a group or deleting a server from a group has some traffic impact for 5 to 10 seconds.

TLB provides two levels of server health monitoring:

  • Single Health Check—One probe type is attached to a server group by means of the network-monitoring-profile configuration statement.

  • TLB Dual Health Check (TLB-DHC)—Two probe types are associated with a server group by means of the network-monitoring-profile configuration statement. A server's status is declared based on the results of both health check probes. You can configure up to two health check profiles per server group in traffic-dird-12.1X43-1-A2.2 and subsequent releases. If a server group is configured for DHC, a real service is declared UP only when both health check probes are simultaneously UP; otherwise, it is declared DOWN.
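
As a hedged illustration, the sketch below defines an ICMP probe profile and a TCP probe profile and attaches both to a group to form a dual health check pair; attaching a single profile instead gives single health check. The [edit services network-monitoring] hierarchy and the option names shown (probe-interval, failure-retries, recovery-retries, port) are assumptions modeled on the defaults described above, not a definitive configuration.

  [edit services network-monitoring]
  profile icmp-check {
      icmp;
      probe-interval 5;       # probe every 5 seconds (default)
      failure-retries 5;      # five consecutive failures declare the server down
      recovery-retries 5;     # five consecutive successes declare the server up
  }
  profile tcp-check {
      tcp {
          port 80;            # TCP probe to the application port
      }
  }

  [edit services traffic-load-balance instance tlb1 group web-servers]
  network-monitoring-profile [ icmp-check tcp-check ];   # two profiles = dual health check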

Virtual Services

The virtual service provides an address that is associated with the group of servers to which traffic is directed, as determined by hash-based session distribution and server health monitoring.

The virtual service configuration identifies:

  • The group of servers to which sessions are distributed

  • The session distribution hashing method

Note

TLB doesn't require a specific virtual IP. VIPs 0.0.0.0 or 0::0 are acceptable.
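
A minimal virtual service sketch tying these points together follows. The statement names under virtual-service (address, group, load-balance-method, hash-key) and the names vs1 and web-servers are assumptions for illustration; the all-zeros address reflects the note above that a specific VIP is not required.

  [edit services traffic-load-balance instance tlb1]
  virtual-service vs1 {
      address 0.0.0.0;                   # a specific VIP is not required
      group web-servers;                 # the group of servers receiving distributed sessions
      load-balance-method {
          hash {
              hash-key {                 # hash on source IP, destination IP, and protocol
                  source-ip;
                  destination-ip;
                  protocol;
              }
          }
      }
  }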

TLB Configuration Limits

Table 1: TLB Configuration Limits

Maximum servers per group: 255
Maximum virtual services per services PIC: 32
Maximum real servers per services PIC: 1024
Maximum groups per virtual service: 1
Maximum network monitoring profiles per group: 2
Maximum TLB instances per service interface unit: 1
Maximum VIPs per virtual service: 1
Supported health checking protocols: ICMP, TCP, HTTP