
Load Balancing for Aggregated Ethernet Interfaces

 

Load balancing at Layer 2 distributes traffic across the member links of a bundle, reducing congestion while maintaining redundancy. The following topics provide an overview of load balancing, describe how to configure load balancing based on MAC addresses and on a LAG link, and explain how resilient hashing keeps flow-to-link mappings consistent.

You can create a link aggregation group (LAG) for a group of Ethernet ports. Layer 2 bridging traffic is load balanced across the member links of this group, making the configuration attractive for congestion concerns as well as for redundancy. You can configure up to 128 LAG bundles on M Series and T Series routers, and up to 480 LAG bundles on MX Series routers and EX9200 switches. Each LAG bundle contains up to 16 links. (Platform support depends on the Junos OS release in your installation.)

By default, the hash key mechanism to load-balance frames across LAG interfaces is based on Layer 2 fields (such as frame source and destination address) as well as the input logical interface (unit). The default LAG algorithm is optimized for Layer 2 switching. Starting with Junos OS Release 10.1, you can also configure the load balancing hash key for Layer 2 traffic to use fields in the Layer 3 and Layer 4 headers using the payload statement. However, note that the load-balancing behavior is platform-specific and based on appropriate hash-key configurations.

For more information, see Configuring Load Balancing on a LAG Link. Without load balancing, one link in a Layer 2 switch can be overutilized while other links remain underutilized.


Configuring Load Balancing Based on MAC Addresses

The hash key mechanism for load balancing uses Layer 2 media access control (MAC) information such as the frame source and destination addresses. To load-balance traffic based on Layer 2 MAC information, include the multiservice statement at the [edit forwarding-options hash-key] or [edit chassis fpc slot-number pic pic-number hash-key] hierarchy level:

To include the destination-address MAC information in the hash key, include the destination-mac option. To include the source-address MAC information in the hash key, include the source-mac option.
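For example, a minimal sketch that hashes on both MAC addresses at the [edit forwarding-options hash-key] hierarchy level (verify option support against your Junos OS release):

set forwarding-options hash-key family multiservice source-mac
set forwarding-options hash-key family multiservice destination-mac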

Note

Any packets that have the same source and destination address will be sent over the same path.

Note

You can configure per-packet load balancing to optimize EVPN traffic flows across multiple paths.

Note

Aggregated Ethernet member links use the physical MAC address as the source MAC address in 802.3ah OAM packets.


You can configure the load-balancing hash key for Layer 2 traffic to use fields in the Layer 3 and Layer 4 headers inside the frame payload by including the payload statement. You can configure the statement to use layer-3 fields (optionally restricted with the source-ip-only or destination-ip-only packet header fields) or layer-4 fields. You configure this statement at the [edit forwarding-options hash-key family multiservice] hierarchy level.

You can configure Layer 3 or Layer 4 options, or both. The source-ip-only or destination-ip-only options are mutually exclusive. The layer-3-only statement is not available on MX Series routers.
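A minimal sketch of the payload statement at that hierarchy level (verify option support against your Junos OS release):

set forwarding-options hash-key family multiservice payload ip layer-3
set forwarding-options hash-key family multiservice payload ip layer-4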

By default, the Junos OS implementation of 802.3ad balances traffic across the member links within an aggregated Ethernet bundle based on the Layer 3 information carried in the packet.

For more information about link aggregation group (LAG) configuration, see the Junos OS Network Interfaces Library for Routing Devices.

This example configures the load-balancing hash key to use the source Layer 3 IP address option and Layer 4 header fields as well as the source and destination MAC addresses for load balancing on a link aggregation group (LAG) link:
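A sketch of such a configuration (statement names as documented for the [edit forwarding-options hash-key] hierarchy; verify against your release):

set forwarding-options hash-key family multiservice source-mac
set forwarding-options hash-key family multiservice destination-mac
set forwarding-options hash-key family multiservice payload ip layer-3 source-ip-only
set forwarding-options hash-key family multiservice payload ip layer-4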

Note

Any change in the hash key configuration requires a reboot of the FPC for the changes to take effect.

Understanding Consistent Load Balancing Through Resilient Hashing on ECMP Groups

You can use consistent load balancing to minimize flow remapping in an equal-cost multipath (ECMP) group.

By default, when there are multiple equal-cost paths to the same destination for the active route, Junos OS uses a hash algorithm to choose one of the next-hop addresses to install in the forwarding table. Whenever the set of next hops for a destination changes in any way, Junos OS rechooses the next-hop address by using the hash algorithm.

You can configure consistent load balancing on the switch to prevent the reordering of all flows to active paths in an ECMP group when one or more next-hop paths fail. Only flows for paths that are inactive are redirected to another active next-hop path. Flows mapped to servers that remain active are maintained.

This feature applies only to external BGP peers.

Configuring Consistent Load Balancing for ECMP Groups

Per-packet load balancing allows you to spread traffic across multiple equal-cost paths. By default, when a failure occurs in one or more paths, the hashing algorithm recalculates the next hop for all paths, typically resulting in the redistribution of all flows. Consistent load balancing enables you to override this behavior so that only flows for links that are inactive are redirected. All existing active flows are maintained without disruption. In a data center environment, the redistribution of all flows when a link fails potentially results in significant traffic loss or a loss of service to servers whose links remain active. Consistent load balancing maintains all active links and instead remaps only those flows affected by one or more link failures. This feature ensures that flows connected to links that remain active continue uninterrupted.

This feature applies to topologies where members of an equal-cost multipath (ECMP) group are external BGP neighbors in a single-hop BGP session. Consistent load balancing does not apply when you add a new ECMP path or modify an existing path in any way. To add a new path with minimal disruption, define a new ECMP group without modifying the existing paths. In this way, clients can be moved to the new group gradually without terminating existing connections.

  • (On MX Series) Only Modular Port Concentrators (MPCs) are supported.

  • Both IPv4 and IPv6 paths are supported.

  • ECMP groups that are part of a virtual routing and forwarding (VRF) instance or other routing instance are also supported.

  • Multicast traffic is not supported.

  • Aggregated interfaces are supported, but consistent load balancing is not supported among members of a link aggregation group (LAG) bundle. When one or more member links fail, flows are rehashed, and traffic from active members of the LAG bundle might be moved to another active member.

  • We strongly recommend that you apply consistent load balancing to no more than 1,000 IP prefixes per router or switch.

  • Layer 3 adjacency over integrated routing and bridging (IRB) interfaces is supported.

You can configure the BGP add-path feature to replace a failed path with a new active path when one or more paths in the ECMP group fail. Configuring replacement of failed paths ensures that only traffic on the failed paths is redirected; traffic on active paths remains unaltered.

Note
  • When you configure consistent load balancing on generic routing encapsulation (GRE) tunnel interfaces, you must specify the inet address of the far-end GRE interface so that the Layer 3 adjacencies over the GRE tunnel interfaces are installed correctly in the forwarding table. However, ECMP fast reroute (FRR) over GRE tunnel interfaces is not supported during consistent load balancing. You can specify the destination address on the router configured with consistent load balancing at the [edit interfaces interface-name unit unit-name family inet address address] hierarchy level. For example:
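    (The interface name and addresses below are hypothetical placeholders.)

    [edit interfaces]
    user@host# set gr-1/0/0 unit 0 family inet address 192.0.2.1/30 destination 192.0.2.2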

    For more information on generic routing encapsulation, see Configuring Generic Routing Encapsulation Tunneling.

  • Consistent load balancing does not support BGP multihop for EBGP neighbors. Therefore, do not enable the multihop option on devices configured with consistent load balancing.

To configure consistent load balancing for ECMP groups:

  1. Configure BGP and enable the BGP group of external peers to use multiple paths.
  2. Create a routing policy to match incoming routes to one or more destination prefixes.
  3. Apply consistent load balancing to the routing policy so that only traffic flows to one or more destination prefixes that experience a link failure are redirected to an active link.
  4. Create a separate routing policy and enable per-packet load balancing.

    Note

    You must configure and apply a per-packet load-balancing policy to install all routes in the forwarding table.

  5. Apply the routing policy for consistent load balancing to the BGP group of external peers.

    Note

    Consistent load balancing can be applied only to BGP external peers. This policy cannot be applied globally.

  6. (Optional) Enable bidirectional forwarding detection (BFD) for each external BGP neighbor.
    Note

    This step shows the minimum BFD configuration required. You can configure additional options for BFD.

  7. Apply the per-prefix load-balancing policy globally to install all next-hop routes in the forwarding table.
  8. (Optional) Enable fast reroute for ECMP routes.
  9. Verify the status of one or more ECMP routes for which you enabled consistent load balancing.

    The output of the command displays the following flag when consistent load balancing is enabled:

    State: <Active Ext LoadBalConsistentHash>
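The steps above can be sketched as follows. The group, policy, and prefix names (ext, clb, pplb, 203.0.113.0/24) are hypothetical placeholders; verify statement support against your Junos OS release:

set protocols bgp group ext type external
set protocols bgp group ext multipath
set policy-options policy-statement clb term t1 from route-filter 203.0.113.0/24 exact
set policy-options policy-statement clb term t1 then load-balance consistent-hash
set protocols bgp group ext import clb
set policy-options policy-statement pplb then load-balance per-packet
set routing-options forwarding-table export pplb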

Streaming video technology was introduced in 1997. Multicast protocols were subsequently developed to reduce data replication and network overloads. With multicasting, servers can send a single stream to a group of recipients instead of sending multiple unicast streams. While the use of streaming video technology was previously limited to occasional company presentations, multicasting has provided a boost to the technology resulting in a constant stream of movies, real-time data, news clips, and amateur videos flowing nonstop to computers, TVs, tablets, and phones. However, all of these streams quickly overwhelmed the capacity of network hardware and increased bandwidth demands leading to unacceptable blips and stutters in transmission.

To satisfy the growing bandwidth demands, multiple links were virtually aggregated to form bigger logical point-to-point link channels for the flow of data. These virtual link combinations are called multicast interfaces, also known as link aggregation groups (LAGs).

Multicast load balancing involves managing the individual links in each LAG to ensure that each link is used efficiently. Hashing algorithms continually evaluate the data stream, adjusting stream distribution over the links in the LAG, so that no link is underutilized or overutilized. Multicast load balancing is enabled by default on Juniper Networks EX8200 Ethernet Switches.

This topic includes:

Create LAGs for Multicasting in Increments of 10 Gigabits

The maximum link size on an EX8200 switch is 10 gigabits. If you need a larger link on an EX8200 switch, you can combine up to twelve 10-gigabit links. In the sample topology shown in Figure 1, four 10-gigabit links have been aggregated to form each 40-gigabit link.

Figure 1: 40-Gigabit LAGs on EX8200 Switches
40-Gigabit LAGs on EX8200 Switches

When Should I Use Multicast Load Balancing?

Use a LAG with multicast load balancing when you need a downstream link greater than 10 gigabits. This need frequently arises when you act as a service provider or when you multicast video to a large audience.

To use multicast load balancing, you need the following:

How Does Multicast Load Balancing Work?

Juniper Networks Junos operating system (Junos OS) supports the Link Aggregation Control Protocol (LACP), which is a subcomponent of IEEE 802.3ad. LACP provides additional functionality for LAGs and is supported only on Layer 3 interfaces. When traffic can use multiple member links, traffic that is part of the same stream must always be on the same link.

Multicast load balancing uses one of seven available hashing algorithms and a technique called queue shuffling (alternating between two queues) to distribute and balance the data, directing streams over all available aggregated links. You can select one of the seven algorithms when you configure multicast load balancing, or you can use the default algorithm, crc-sgip, which uses a cyclic redundancy check (CRC) algorithm on the multicast packets’ group IP address. We recommend that you start with the crc-sgip default and try other options if this algorithm does not evenly distribute the Layer 3 routed multicast traffic. Six of the algorithms are based on the hashed value of IP addresses (IPv4 or IPv6) and will produce the same result each time they are used. Only the balanced mode option produces results that vary depending on the order in which streams are added. See Table 1 for more information.

Table 1: Hashing Algorithms Used by Multicast Load Balancing

crc-sgip
  Based on: Cyclic redundancy check of multicast packets’ source and group IP addresses
  Best use: Default—high-performance management of IP traffic on a 10-Gigabit Ethernet network. Predictable assignment to the same link each time. This mode is complex but yields a well-distributed hash.

crc-gip
  Based on: Cyclic redundancy check of multicast packets’ group IP address
  Best use: Predictable assignment to the same link each time. Try this mode when crc-sgip does not evenly distribute the Layer 3 routed multicast traffic and the group IP addresses vary.

crc-sip
  Based on: Cyclic redundancy check of multicast packets’ source IP address
  Best use: Predictable assignment to the same link each time. Try this mode when crc-sgip does not evenly distribute the Layer 3 routed multicast traffic and the stream sources vary.

simple-sgip
  Based on: XOR calculation of multicast packets’ source and group IP addresses
  Best use: Predictable assignment to the same link each time. This simple hashing method might not yield as even a distribution as crc-sgip does. Try this mode when crc-sgip does not evenly distribute the Layer 3 routed multicast traffic.

simple-gip
  Based on: XOR calculation of multicast packets’ group IP address
  Best use: Predictable assignment to the same link each time. This simple hashing method might not yield as even a distribution as crc-gip does. Try this mode when crc-gip does not evenly distribute the Layer 3 routed multicast traffic and the group IP addresses vary.

simple-sip
  Based on: XOR calculation of multicast packets’ source IP address
  Best use: Predictable assignment to the same link each time. This simple hashing method might not yield as even a distribution as crc-sip does. Try this mode when crc-sip does not evenly distribute the Layer 3 routed multicast traffic and the stream sources vary.

balanced
  Based on: Round-robin calculation method used to identify the multicast link with the least amount of traffic
  Best use: Best balance is achieved, but you cannot predict which link will be consistently used, because that depends on the order in which streams come online. Use when consistent assignment is not needed after every reboot.

How Do I Implement Multicast Load Balancing on an EX8200 Switch?

To implement multicast load balancing with an optimized level of throughput on an EX8200 switch, follow these recommendations:

  • Allow 25 percent unused bandwidth in the aggregated link to accommodate any dynamic imbalances due to link changes caused by sharing multicast interfaces.

  • For downstream links, use multicast interfaces of the same size whenever possible. Also, for downstream aggregated links, throughput is optimized when members of the aggregated link belong to the same devices.

  • For upstream aggregated links, use a Layer 3 link whenever possible. Also, for upstream aggregated links, throughput is optimized when the members of the aggregated link belong to different devices.

Example: Configuring Multicast Load Balancing for Use with Aggregated 10-Gigabit Ethernet Interfaces on EX8200 Switches

EX8200 switches support multicast load balancing on link aggregation groups (LAGs). Multicast load balancing evenly distributes Layer 3 routed multicast traffic over the LAGs. You can aggregate up to twelve 10-gigabit Ethernet links to form a 120-gigabit virtual link, or LAG. The MAC client can treat this virtual link as if it were a single link to increase bandwidth, provide graceful degradation as link failures occur, and increase availability. On EX8200 switches, multicast load balancing is enabled by default. However, if it is explicitly disabled, you can reenable it.

Note

An interface with an already configured IP address cannot form part of the LAG.

Note

Only EX8200 standalone switches with 10-gigabit links support multicast load balancing. Virtual Chassis does not support multicast load balancing.

This example shows how to configure a LAG and reenable multicast load balancing:

Requirements

This example uses the following hardware and software components:

  • Two EX8200 switches, one used as the access switch and one used as the distribution switch

  • Junos OS Release 12.2 or later for EX Series switches

Before you begin:

Overview and Topology

Multicast load balancing uses one of seven hashing algorithms to balance traffic between the individual 10-gigabit links in the LAG. For a description of the hashing algorithms, see multicast-loadbalance. The default hashing algorithm is crc-sgip. You can experiment with the different hashing algorithms until you determine the one that best balances your Layer 3 routed multicast traffic.

When a link larger than 10 gigabits is needed on an EX8200 switch, you can combine up to twelve 10-gigabit links to create more bandwidth. This example uses the link aggregation feature to combine four 10-gigabit links into a 40-gigabit link on the distribution switch. In addition, multicast load balancing is enabled to ensure even distribution of Layer 3 routed multicast traffic on the 40-gigabit link. In the sample topology illustrated in Figure 2, an EX8200 switch in the distribution layer is connected to an EX8200 switch in the access layer.

Note

Link speed is automatically determined based on the size of the LAG configured. For example, if a LAG is composed of four 10-gigabit links, the link speed is 40 gigabits per second.

Note

The default hashing algorithm, crc-sgip, involves a cyclic redundancy check of both the multicast packet source and group IP addresses.

You will configure a LAG on each switch and reenable multicast load balancing. When reenabled, multicast load balancing will automatically take effect on the LAG, and the speed is set to 10 gigabits per second for each link in the LAG. Link speed for the 40-gigabit LAG is automatically set to 40 gigabits per second.

Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.

set chassis aggregated-devices ethernet device-count 1
set interfaces ae0 aggregated-ether-options minimum-links 1
set interfaces xe-0/1/0 ether-options 802.3ad ae0
set interfaces xe-1/1/0 ether-options 802.3ad ae0
set interfaces xe-2/1/0 ether-options 802.3ad ae0
set interfaces xe-3/1/0 ether-options 802.3ad ae0
set chassis multicast-loadbalance hash-mode crc-gip

Step-by-Step Procedure

To configure a LAG and reenable multicast load balancing:

  1. Specify the number of aggregated Ethernet interfaces to be created:
    [edit chassis]

    user@switch# set aggregated-devices ethernet device-count 1
  2. Specify the minimum number of links for the aggregated Ethernet interface (aex), that is, the LAG, to be labeled up:

    Note

    By default, only one link needs to be up for the LAG to be labeled up.

    [edit interfaces]

    user@switch# set ae0 aggregated-ether-options minimum-links 1
  3. Specify the four members to be included within the LAG:
    [edit interfaces]

    user@switch# set xe-0/1/0 ether-options 802.3ad ae0

    user@switch# set xe-1/1/0 ether-options 802.3ad ae0

    user@switch# set xe-2/1/0 ether-options 802.3ad ae0

    user@switch# set xe-3/1/0 ether-options 802.3ad ae0
  4. Reenable multicast load balancing:
    [edit chassis]
    user@switch# set multicast-loadbalance
    Note

    You do not need to set link speed the way you do for LAGs that do not use multicast load balancing. Link speed is automatically set to 40 gigabits per second on a 40-gigabit LAG.

  5. You can optionally change the value of the hash-mode option in the multicast-loadbalance statement to try different algorithms until you find the one that best distributes your Layer 3 routed multicast traffic.

    If you change the hashing algorithm when multicast load balancing is disabled, the new algorithm takes effect after you reenable multicast load balancing.
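For example, to select the crc-gip algorithm, as in the CLI quick configuration:

    [edit chassis]
    user@switch# set multicast-loadbalance hash-mode crc-gip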

Results

Check the results of the configuration:

Verification

To confirm that the configuration is working properly, perform these tasks:

Verifying the Status of a LAG Interface

Purpose

Verify that a link aggregation group (LAG) (ae0) has been created on the switch.

Action

Verify that the ae0 LAG has been created:

user@switch> show interfaces ae0 terse

Meaning

The interface name aex indicates that this is a LAG. A stands for aggregated, and E stands for Ethernet. The number differentiates the various LAGs.

Verifying Multicast Load Balancing

Purpose

Check that traffic is load-balanced equally across paths.

Action

Verify load balancing across the four interfaces:

Meaning

The interfaces should be carrying approximately the same amount of traffic.

Dynamic Load Balancing

Load balancing ensures that network traffic is distributed as evenly as possible across the members of a given ECMP (equal-cost multipath) group or LAG (link aggregation group). In general, load balancing is classified as either static or dynamic. Static load balancing (SLB) computes hashing based solely on the packet contents (for example, source IP, destination IP, and so on). The biggest advantage of SLB is that packet ordering is guaranteed because all packets of a given flow take the same path. However, because the SLB mechanism does not consider the path or link load, the network often experiences the following problems:

  • Poor link bandwidth utilization

  • An elephant flow on a single link completely dropping the mice flows on that link

Dynamic load balancing (DLB) is an improvement on top of SLB.

For ECMP, you can configure DLB globally, whereas for LAG, you configure it for each aggregated Ethernet interface. You can apply DLB on selected ether-type (IPv4, IPv6, and MPLS) based on configuration. If you don't configure any ether-type, then DLB is applied to all EtherTypes. Note that you must explicitly configure the DLB mode because there is no default mode.

Note
  • Starting in Junos OS Release 19.4R1, QFX5120-32C and QFX5120-48Y switches support dynamic load balancing for both ECMP and LAG. For LAG, DLB must be configured on a per-aggregated-Ethernet-interface basis.

  • Starting in Junos OS Evolved Release 19.4R2, QFX5220 switches support dynamic load balancing (DLB) for ECMP. For ECMP, DLB must be configured globally.

  • You cannot configure both DLB and resilient hashing at the same time. Otherwise, a commit error will be thrown.

  • DLB is applicable only for unicast traffic.

  • DLB is not supported when the LAG is one of the egress ECMP members.

  • DLB is not supported for remote LAG members.

  • DLB is not supported on Virtual Chassis and Virtual Chassis Fabric (VCF).

  • DLB on LAG and HiGig-trunk are not supported at the same time.

  • QFX5220 switches do not support DLB on LAG.

Table 2: Platforms That Support Dynamic Load Balancing for ECMP/LAG

Platform       DLB Support for ECMP    DLB Support for LAG
QFX5120-32C    Yes                     Yes
QFX5120-48Y    Yes                     Yes
QFX5220        Yes                     No

You can use the following DLB modes to load-balance traffic:

  • Per packet mode

    In this mode, DLB is initiated for each packet in the flow. This mode ensures that each packet is always assigned to the best-quality member port. However, in this mode DLB may experience packet-reordering problems that can arise due to latency skews.

  • Flowlet mode

    This mode relies on assigning links based on flowlets instead of flows. Real-world application traffic relies on flow control mechanisms of upper-layer transport protocols such as TCP, which throttle the transmission rate. As a result, flowlets are created. You can consider flowlets as multiple bursts of the same flow separated by a period of inactivity between these bursts—this period of inactivity is referred to as the inactivity interval. The inactivity interval serves as the demarcation criteria for identifying new flowlets and is offered as a user-configurable statement under the DLB configuration. In this mode, DLB is initiated per flowlet—that is, for the new flow as well as for the existing flow that has been inactive for a sufficiently long period of time (configured inactivity-interval). The reordering problem of per packet mode is addressed in this mode as all the packets in a flowlet take the same link. If the inactivity-interval value is configured to be higher than the maximum latency skew across all ECMP paths, then you can avoid packet reordering across flowlets while increasing link utilization of all available ECMP links.

  • Assigned flow mode

    You can use assigned flow mode to selectively disable rebalancing for a period of time to isolate problem sources. You cannot use this mode for real-time DLB or predict the egress ports that will be selected using this mode because assigned flow mode does not consider port load and queue size.

Note

Here are some of the important behaviors of DLB:

  • DLB is applicable for incoming EtherTypes only.

  • From a DLB perspective, both Layer 2 and Layer 3 link aggregation group (LAG) bundles are considered the same.

  • Link utilization is not optimal if you use dynamic load balancing on asymmetric bundles, that is, on ECMP links with different member capacities.

  • With DLB, no reassignment of flows happens when a new link is added in per-packet and assigned flow modes. This can cause suboptimal usage in link-flap scenarios: a previously utilized link may remain unused after it undergoes a flap if no new flows or flowlets arrive after the flap.

Benefits

  • DLB considers member bandwidth utilization along with packet content when selecting a member link, which results in better link utilization based on real-time link loads.

  • DLB ensures that links hogged by elephant flows are not used by mice flows, which avoids the hash-collision drops that occur with SLB. Because DLB spreads flows across links, collisions and the consequent packet drops are avoided.

Configuring Dynamic Load Balancing

This topic describes how to configure dynamic load balancing (DLB) in flowlet mode.

Starting in Junos OS Release 19.4R1, QFX5120-32C and QFX5120-48Y switches support dynamic load balancing for both ECMP and LAG. For LAG, DLB must be configured on a per-aggregated-Ethernet-interface basis.

Starting in Junos OS Evolved Release 19.4R2, QFX5220 switches support dynamic load balancing (DLB) for ECMP. For ECMP, DLB must be configured globally.

Configuring DLB for ECMP (Flowlet mode)

To configure dynamic load balancing for ECMP with flowlet mode (QFX5120-32C, QFX5120-48Y, and QFX5220 switches):

  1. Enable dynamic load balancing with flowlet mode:
  2. (Optional) Configure the inactivity-interval value, the minimum inactivity interval (in microseconds) for link reassignment:
  3. (Optional) Configure dynamic load balancing with an ether-type:
  4. (Optional) View the options configured for dynamic load balancing on ECMP by using the show forwarding-options enhanced-hash-key command.

Similarly, you can configure DLB for ECMP with Per packet or Assigned flow mode.
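A minimal sketch of the flowlet-mode steps for ECMP. The inactivity-interval value of 64 microseconds is an arbitrary example, and the statement names should be verified against your Junos OS release:

set forwarding-options enhanced-hash-key ecmp-dlb flowlet inactivity-interval 64
set forwarding-options enhanced-hash-key ecmp-dlb ether-type ipv4

user@switch> show forwarding-options enhanced-hash-key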

Configuring DLB for LAG (Flowlet mode)

Before you begin, create an aggregated Ethernet (AE) bundle by configuring a set of router interfaces as aggregated Ethernet with a specific AE group identifier.

To configure dynamic load balancing for LAG with flowlet mode (QFX5120-32C and QFX5120-48Y):

  1. Enable dynamic load balancing with flowlet mode:
  2. (Optional) Configure the inactivity-interval value, the minimum inactivity interval (in microseconds) for link reassignment:
  3. (Optional) Configure dynamic load balancing with an ether-type:
  4. (Optional) View the options configured for dynamic load balancing on LAG by using the show forwarding-options enhanced-hash-key command.

Similarly, you can configure DLB for LAG with Per packet or Assigned flow mode.
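A minimal sketch of the flowlet-mode steps for LAG, assuming an existing ae0 bundle. The inactivity-interval value of 64 microseconds is an arbitrary example, and the statement names should be verified against your Junos OS release:

set interfaces ae0 aggregated-ether-options dlb flowlet inactivity-interval 64
set interfaces ae0 aggregated-ether-options dlb ether-type ipv4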

Example: Configure Dynamic Load Balancing

This example shows how to configure dynamic load balancing.

Requirements

This example uses the following hardware and software components:

  • Two QFX5120-32C or QFX5120-48Y switches

  • Junos OS Release 19.4R1 or later running on all devices

Overview

Dynamic load balancing (DLB) is an improvement on top of SLB.

For ECMP, you can configure DLB globally, whereas for LAG, you configure it for each aggregated Ethernet interface. You can apply DLB on selected ether-type such as IPv4, IPv6, and MPLS based on configuration. If you don't configure any ether-type, then DLB is applied to all EtherTypes. Note that you must explicitly configure the DLB mode because there is no default mode.

Note
  • Starting in Junos OS Release 19.4R1, QFX5120-32C and QFX5120-48Y switches support dynamic load balancing on both ECMP and LAG.

  • You cannot configure both DLB and resilient hashing at the same time. Otherwise, a commit error will be thrown.

Topology

In this topology, R0 and R1 are directly connected.

Figure 3: Dynamic Load Balancing
Note

This example shows static configuration. You can also add configuration with dynamic protocols.

Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.

R0

R1

Configure Dynamic Load Balancing for LAG (QFX5120-32C and QFX5120-48Y)

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode.

To configure the R0 router:

Note

Repeat this procedure for the other routers, after modifying the appropriate interface names, addresses, and any other parameters for each router.

  1. Configure Link Aggregation Group (LAG).

    After configuring the LAG, execute the steps in the Verify Traffic Load Before Configuring Dynamic Load Balancing Feature on LAG section to check the configuration and the traffic load before configuring DLB.

  2. Configure Dynamic Load Balancing with per-packet mode for LAG.

    After configuring DLB, execute the steps in the Verify Traffic Load After Configuring Dynamic Load Balancing Feature on LAG section to check the configuration and the traffic load after configuring DLB.
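The two steps can be sketched as follows. The member interfaces xe-0/0/0 and xe-0/0/10 match those checked in the verification sections; the ae0 address is a hypothetical placeholder, and the dlb statement names should be verified against your release:

set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/0/0 ether-options 802.3ad ae0
set interfaces xe-0/0/10 ether-options 802.3ad ae0
set interfaces ae0 unit 0 family inet address 10.1.1.1/24
set interfaces ae0 aggregated-ether-options dlb per-packet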

Verification

Confirm that the configuration is working properly.

Verify Traffic Load Before Configuring Dynamic Load Balancing Feature on LAG

Purpose

Verify before the DLB feature is configured on the Link Aggregation Group.

Action

From operational mode, run the show interfaces interface-name | match pps command.

user@R0> show interfaces xe-0/0/0 | match pps
user@R0> show interfaces xe-0/0/10 | match pps

Verify Traffic Load After Configuring Dynamic Load Balancing Feature on LAG

Purpose

Verify that packets received on the R0 are load-balanced.

Action

From operational mode, run the show interfaces interface-name command.

user@R0> show interfaces xe-0/0/0 | match pps
user@R0> show interfaces xe-0/0/10 | match pps

Meaning

Dynamic load balancing with per-packet mode is working successfully. After you apply the dynamic load balancing feature on the LAG, the load is shared equally in the network.

Configure Dynamic Load Balancing for ECMP (QFX5120-32C, QFX5120-48Y, and QFX5220 switches)

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode.

To configure the R0 router:

Note

Repeat this procedure for the other routers, after modifying the appropriate interface names, addresses, and any other parameters for each router.

  1. Configure the Gigabit Ethernet interface link connecting from R0 to R1.
  2. Create the static routes:
  3. Apply the load-balancing policy. The dynamic load balancing feature requires the multiple ECMP next hops to be present in the forwarding table.
  4. Configure Dynamic Load Balancing with per-packet mode for ECMP.
  5. On R1, configure the Gigabit Ethernet interface link.
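The steps above can be sketched as follows for R0. The interface addresses, next hops, and the policy name pplb are hypothetical placeholders; the destination prefix 20.0.1.0/24 matches the verification section. Verify statement names against your release:

set interfaces xe-0/0/0 unit 0 family inet address 10.1.1.1/24
set interfaces xe-0/0/10 unit 0 family inet address 10.1.2.1/24
set routing-options static route 20.0.1.0/24 next-hop 10.1.1.2
set routing-options static route 20.0.1.0/24 next-hop 10.1.2.2
set policy-options policy-statement pplb then load-balance per-packet
set routing-options forwarding-table export pplb
set forwarding-options enhanced-hash-key ecmp-dlb per-packet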

Verification

Confirm that the configuration is working properly at R0.

Verify Dynamic Load Balancing on R0

Purpose

Verify that packets received on the R0 are load-balanced.

Action

From operational mode, run the show route forwarding-table destination destination-address command.

user@R0> show route forwarding-table destination 20.0.1.0/24
user@R0> show route 20.0.1.0/24

Meaning

Verify Load Balancing on R1

Purpose

Confirm that the configuration is working properly at R1.

Action

From operational mode, run the show route command.

user@R1> show route 20.0.1.25

Meaning

Dynamic load balancing with per-packet mode is working successfully. After you apply the dynamic load balancing feature on ECMP, the load is shared equally in the network.

Release History Table

Release     Description
19.4R2-EVO  Starting in Junos OS Evolved Release 19.4R2, QFX5220 switches support dynamic load balancing (DLB) for ECMP. For ECMP, DLB must be configured globally.
19.4R1      Starting in Junos OS Release 19.4R1, QFX5120-32C and QFX5120-48Y switches support dynamic load balancing for both ECMP and LAG. For LAG, DLB must be configured on a per-aggregated-Ethernet-interface basis.
10.1        Starting with Junos OS Release 10.1, you can also configure the load balancing hash key for Layer 2 traffic to use fields in the Layer 3 and Layer 4 headers using the payload statement.