Traffic Load Balancer Overview

Traffic Load Balancing Support Summary

Table 1 provides a summary of the traffic load balancing support on the MS-MPC and MS-MIC cards for Adaptive Services versus support on the MX-SPC3 security services card for Next Gen Services.

Table 1: Traffic Load Balancing Support Summary



Junos Release

< 16.1R6 & 18.2R1

≥ 16.1R6 & 18.2R1


Max # of Instances per Chassis


2,000 (32 in Layer 2 DSR mode)


Max # of Virtual Services per Instance




Max # of Virtual IP Addresses per Virtual Service



Max # of Groups per Instance




Max # of Real-Services (Servers) per Group


255


Max # of Groups per Virtual Service



Max # of Network Monitoring Profiles per Group


2

Max # of Health Checks per Services PIC/NPU in a 5-Second Interval


1,250 – 19.3R2

10,000 – 20.1R1

Supported Health Check Protocols


ICMP, TCP, UDP, HTTP, SSL, TLS Hello, Custom

Traffic Load Balancer Application Description

Traffic Load Balancer (TLB) is supported on MX Series routers with the Multiservices Modular Port Concentrator (MS-MPC), the Multiservices Modular Interface Card (MS-MIC), or the MX Security Services Processing Card (MX-SPC3), in conjunction with the Modular Port Concentrator (MPC) line cards supported on MX Series routers, as described in Table 2.


You cannot run Deterministic NAT and TLB simultaneously.

Table 2: TLB MX Series Router Platform Support Summary

TLB Mode

MX Platform Coverage

Multiservices Modular Port Concentrator (MS-MPC)

MX240, MX480, MX960, MX2008, MX2010, MX2020

MX Security Services Processing Card (MX-SPC3)

MX240, MX480, MX960

  • TLB enables you to distribute traffic among multiple servers.

  • TLB employs an MS-MPC-based control plane and a data plane using the MX Series router forwarding engine.

  • TLB uses an enhanced version of equal-cost multipath (ECMP). Enhanced ECMP facilitates the distribution of flows across groups of servers. Enhancements to native ECMP ensure that when servers fail, only flows associated with those servers are impacted, minimizing the overall network churn on services and sessions.

  • TLB provides application-based health monitoring for up to 255 servers per group, enabling intelligent traffic steering based on server availability. You can configure an aggregated multiservices (AMS) interface to provide one-to-one redundancy for the MS-MPCs or the Next Gen Services MX-SPC3 card used for server health monitoring.

  • TLB applies its flow distribution processing to ingress traffic.

  • TLB supports multiple virtual routing instances to provide improved support for large scale load balancing requirements.

  • TLB supports static virtual-IP-address-to-real-IP-address translation, and static destination port translation during load balancing.
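The capabilities above come together under the services traffic-load-balance configuration hierarchy. The following is a minimal sketch of a single TLB instance; all names and addresses are illustrative, and statement placement can vary by Junos OS release, so verify against the CLI on your platform:

```
services {
    traffic-load-balance {
        instance tlb-demo {                        # illustrative instance name
            real-service rs1 {
                address 192.0.2.11;                # real server (example address)
            }
            real-service rs2 {
                address 192.0.2.12;
            }
            group web-group {
                real-services [ rs1 rs2 ];
                network-monitoring-profile [ probe-tcp ];   # health-check profile
            }
            virtual-service vs1 {
                address 198.51.100.1;              # virtual IP (VIP)
                mode direct-server-return;
                group web-group;
                load-balance-method {
                    hash {
                        hash-key {
                            source-ip;
                        }
                    }
                }
            }
        }
    }
}
```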

Traffic Load Balancer Modes of Operation

Traffic Load Balancer provides three modes of operation for the distribution of outgoing traffic and for handling the processing of return traffic.

Table 3 summarizes the TLB modes of operation and the security services cards on which each mode is supported.

Table 3: TLB Versus Security Service Cards Summary

Security Service Card






Transparent Layer 3 Direct Server Return



Transparent Layer 2 Direct Server Return


Not Supported

Transparent Mode Layer 2 Direct Server Return

When you use transparent mode Layer 2 direct server return (DSR):

  • The PFE processes data.

  • Load balancing works by rewriting the Layer 2 MAC address of packets.

  • An MS-MPC performs the network-monitoring probes.

  • Real servers must be directly (Layer 2) reachable from the MX Series router.

  • TLB installs a route, and all traffic over that route is load-balanced.

  • TLB never modifies Layer 3 and higher level headers.
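Using the mode names that appear elsewhere in this document (direct-server-return, translated, layer2-direct-server-return), a transparent Layer 2 DSR virtual service would be configured along these lines. This fragment belongs under a traffic-load-balance instance; the names and address are illustrative:

```
virtual-service vs-l2dsr {
    address 198.51.100.2;              /* VIP; Layer 3 and higher headers are never modified */
    mode layer2-direct-server-return;  /* distribute by rewriting the Layer 2 MAC */
    group web-group;                   /* servers must be Layer 2 reachable */
}
```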

Figure 1 shows the TLB topology for transparent mode Layer 2 DSR.

Figure 1: TLB Topology for Transparent Mode

Translated Mode

Translated mode provides greater flexibility than transparent mode Layer 2 DSR. When you choose translated mode:

  • An MS-MPC performs the network-monitoring probes.

  • The PFE performs stateless load balancing:

    • Data traffic directed to a virtual IP address undergoes translation of the virtual IP address to a real server IP address and translates the virtual port to a server listening port. Return traffic undergoes the reverse translation.

    • Client to virtual IP traffic is translated; the traffic is routed to reach its destination.

    • Server-to-client traffic is captured using implicit filters and directed to an appropriate load-balancing next hop for reverse processing. After translation, traffic is routed back to the client.

    • Two load-balancing methods are available: random and hash. The random method is only for UDP traffic and provides quasi-random distribution. While not literally random, this mode provides a fair distribution of traffic across the available set of servers. The hash method computes a hash key from any combination of the source IP address, destination IP address, and protocol.


      Translated mode processing is only available for IPv4-to-IPv4 and IPv6-to-IPv6 traffic.
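As a sketch, a translated-mode virtual service for UDP traffic using the random method might look like the following (a fragment under a traffic-load-balance instance; the names and address are illustrative, and the exact statement names should be checked against your release):

```
virtual-service vs-udp {
    address 198.51.100.3;      /* VIP translated to a real server address */
    mode translated;
    group web-group;
    load-balance-method {
        random;                /* UDP only; quasi-random distribution */
    }
}
```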

Figure 2 shows the TLB topology for translated mode.

Figure 2: TLB Topology for Translated Mode

Transparent Mode Layer 3 Direct Server Return

Transparent mode Layer 3 DSR load balancing distributes sessions to servers that can be a Layer 3 hop away. Traffic is returned directly to the client from the real server.

Traffic Load Balancer Functions

TLB provides the following functions:

  • TLB always distributes the requests for any flow. When you specify DSR mode, the response returns directly to the source. When you specify translated mode, reverse traffic is steered through implicit filters on server-facing interfaces.

  • TLB supports hash-based load balancing or random load balancing.

  • TLB enables you to configure servers offline, which prevents the performance impact that rehashing all existing flows would cause. You can add a server in the administratively down state and later bring it into service for traffic distribution by removing that state. Configuring servers offline helps prevent traffic impact to other servers.

  • When health checking determines a server to be down, only the affected flows are rehashed.

  • When a previously down server is returned to service, all flows belonging to that server based on hashing return to it, impacting performance for the returned flows. For this reason, you can disable the automatic rejoining of a server to an active group. You can return servers to service by issuing the request services traffic-load-balance real-service rejoin operational command.


    NAT is not applied to the distributed flows.

  • The health check monitoring application runs on a network processing unit (NPU) of the MS-MPC; this NPU is not used for handling data traffic.

  • TLB supports static virtual-IP-address-to-real-IP-address translation and static destination port translation during load balancing.

  • TLB provides multiple VRF support.
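For the rejoin behavior described above, the operational command identifies the instance, group, and server to return to service. The names here are illustrative, and the exact argument keywords may differ by Junos OS release:

```
user@mx> request services traffic-load-balance real-service rejoin instance tlb-demo group web-group real-service rs1
```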

Traffic Load Balancer Application Components

Servers and Server Groups

TLB enables configuration of groups of up to 255 servers (referred to in configuration statements as real services) for use as alternate destinations for stateless session distribution. All servers used in server groups must be individually configured before assignment to groups. Load balancing uses hashing or randomization for session distribution. You can add servers to and delete them from the TLB server distribution table, and you can change the administrative status of a server.


TLB uses the session distribution next-hop API to update the server distribution table and retrieve statistics. Applications do not have direct control over server distribution table management; they can influence changes only indirectly through the add and delete services of the TLB API.

Server Health Monitoring — Single Health Check and Dual Health Check

TLB supports TCP, HTTP, SSL Hello, TLS Hello, and custom health check probes to monitor the health of servers in a group. You can use a single probe type for a server group, or a dual health check configuration that includes two probe types. The configurable health monitoring function resides on either an MX-SPC3 or an MS-MPC. By default, probe requests are sent every 5 seconds. Also by default, a real server is declared down only after five consecutive probe failures and declared up only after five consecutive probe successes.

Use a custom health check probe to specify the following:

  • Expected string in the probe response

  • String that is sent with the probe

  • Server status to assign when the probe times out (up or down)

  • Server status to assign when the expected response to the probe is received (up or down)

  • Protocol — UDP or TCP

TLB provides application stickiness, meaning that server failures or changes do not affect traffic flows to other active servers. Changing a server’s administrative state from up to down does not impact any active flows to remaining servers in the server distribution table. Adding a server or deleting a server from a group has some traffic impact for a length of time that depends on your configuration of the interval and retry parameters in the monitoring profile.
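The defaults noted above (probes every 5 seconds, five consecutive failures or successes) correspond to interval and retry settings in the monitoring profile. The following is a sketch with an illustrative profile name; statement names such as probe-interval, failure-retries, and recovery-retries are assumptions to verify against your Junos OS release:

```
services {
    network-monitoring {
        profile probe-icmp {
            icmp;                  /* probe type */
            probe-interval 5;      /* seconds between probes (default 5) */
            failure-retries 5;     /* consecutive failures before DOWN (default 5) */
            recovery-retries 5;    /* consecutive successes before UP (default 5) */
        }
    }
}
```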

TLB provides two levels of server health monitoring:

  • Single Health Check—One probe type is attached to a server group by means of the network-monitoring-profile configuration statement.

  • TLB Dual Health Check (TLB-DHC)—Two probe types are associated with a server group by means of the network-monitoring-profile configuration statement. A server’s status is declared based on the result of two health check probes. Users can configure up to two health check profiles per server group. If a server group is configured for dual health check, a real-service is declared to be UP only when both health-check probes are simultaneously UP; otherwise, a real-service is declared to be DOWN.
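Dual health check is expressed by listing two profiles on the group's network-monitoring-profile statement. A sketch with illustrative server and profile names follows; the real-service is reported UP only when both probes succeed:

```
group web-group {
    real-services [ rs1 rs2 ];
    network-monitoring-profile [ probe-icmp probe-http ];   /* TLB-DHC: both must be UP */
}
```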


The following restrictions apply to AMS interfaces used for server health monitoring:

  • An AMS interface configured under a TLB instance uses its configured member interfaces exclusively for health checking of configured multiple real servers.

  • The member interfaces use unit 0 for single VRF cases, but can use units other than 1 for multiple VRF cases.

  • TLB uses the IP address that is configured for AMS member interfaces as the source IP address for health checks.

  • The member interfaces must be in the same routing instance as the interface used to reach real servers. This is mandatory for TLB server health-check procedures.

Virtual Services

The virtual service provides a virtual IP address (VIP) that is associated with the group of servers to which traffic is directed, as determined by hash-based or random session distribution and by server health monitoring. In the case of Layer 2 DSR and Layer 3 DSR, the special address causes all traffic flowing to the forwarding instance to be load-balanced.

The virtual service configuration includes:

  • Mode—indicating how traffic is handled (translated or transparent).

  • The group of servers to which sessions are distributed.

  • The load balancing method.

  • Routing instance and route metric.

Best Practice:

Although you can configure a virtual address that relies on default routing, we recommend using a virtual address that can be assigned to a routing instance set up specifically for TLB.

Traffic Load Balancer Configuration Limits

Traffic Load Balancer configuration limits are described in Table 4.

Table 4: TLB Configuration Limits

Configuration Component

Configuration Limit

Maximum number of instances.

Starting in Junos OS Release 16.1R6 and Junos OS Release 18.2R1, the TLB application supports 2000 TLB instances for virtual services that use the direct-server-return or the translated mode. In earlier releases, the maximum number of instances is 32.

If multiple virtual services are using the same server group, then all of those virtual services must use the same load balancing method to support 2000 TLB instances.

For virtual services that use the layer2-direct-server-return mode, TLB supports only 32 TLB instances. To perform the same function as the layer2-direct-server-return mode and have support for 2000 TLB instances, you can use the direct-server-return mode and use a service filter with the skip action.

Maximum number of servers per group


Maximum number of virtual services per services PIC


Maximum number of health checks per services PIC in a 5-second interval

For MS-MPC services cards: 2000

For Next Gen Services mode and the MX-SPC3 services cards: 1250 in Junos OS Release 19.3R2; 10,000 starting in Junos OS Release 20.1R1

Maximum number of groups per virtual service


Maximum number of virtual IP addresses per virtual service


Supported health checking protocols

ICMP, TCP, HTTP, SSL, TLS-Hello, Custom


ICMP health checking is supported only on MS-MPC services cards.

Starting in Junos OS Release 22.4R1, TLB supports the TLS-Hello health check type. For TLS-Hello over TCP, both TLS v1.2 and TLS v1.3 health checks are supported.
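For the layer2-direct-server-return workaround described in Table 4 (using the direct-server-return mode together with a service filter that has the skip action), a service filter sketch follows. The filter name, term names, and prefix are illustrative:

```
firewall {
    family inet {
        service-filter tlb-skip {
            term bypass {
                from {
                    source-address {
                        192.0.2.0/28;    /* traffic to exempt from service processing (example) */
                    }
                }
                then skip;               /* bypass the service */
            }
            term default {
                then service;            /* all other traffic goes to the service */
            }
        }
    }
}
```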

Release History Table
Starting in Junos OS Release 16.1R6 and Junos OS Release 18.2R1, the TLB application supports 2000 TLB instances for virtual services that use the direct-server-return or the translated mode.