Load Balancing for Aggregated Ethernet Interfaces

Load balancing distributes Layer 2 traffic across the member links of an aggregated Ethernet interface, which reduces congestion on any single link while maintaining redundancy. The following topics provide an overview of load balancing and describe how to configure load balancing based on MAC addresses, how to configure multicast load balancing on a LAG, and how dynamic load balancing works and is configured.

Configuring Load Balancing Based on MAC Addresses

The hash-key mechanism for load balancing uses Layer 2 media access control (MAC) information such as the frame source and destination addresses. To load-balance traffic based on Layer 2 MAC information, include the multiservice statement at the [edit forwarding-options hash-key] or [edit chassis fpc slot-number pic pic-number hash-key] hierarchy level.

To include the destination-address MAC information in the hash key, include the destination-mac option. To include the source-address MAC information in the hash key, include the source-mac option.
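For reference, a minimal configuration sketch of the MAC-based hash key described above might look like the following. This is an illustration of the statements named in this topic, not a verbatim listing from a specific platform; the chassis-level hierarchy takes the same family multiservice options.

    [edit forwarding-options]
    hash-key {
        family multiservice {
            source-mac;
            destination-mac;
        }
    }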

Note:

Any packets that have the same source and destination address will be sent over the same path.

Note:

You can configure per-packet load balancing to optimize EVPN traffic flows across multiple paths.

Note:

Aggregated Ethernet member links will now use the physical MAC address as the source MAC address in 802.3ah OAM packets.

Example: Configuring Multicast Load Balancing for Use with Aggregated 10-Gigabit Ethernet Interfaces on EX8200 Switches

EX8200 switches support multicast load balancing on link aggregation groups (LAGs). Multicast load balancing evenly distributes Layer 3 routed multicast traffic over the LAG member links. You can aggregate up to twelve 10-gigabit Ethernet links to form a 120-gigabit virtual link, or LAG. The MAC client can treat this virtual link as if it were a single link to increase bandwidth, provide graceful degradation as link failures occur, and increase availability. On EX8200 switches, multicast load balancing is enabled by default. However, if it has been explicitly disabled, you can reenable it.

Note:

An interface with an already configured IP address cannot form part of the LAG.

Note:

Only EX8200 standalone switches with 10-gigabit links support multicast load balancing. Virtual Chassis does not support multicast load balancing.

This example shows how to configure a LAG and reenable multicast load balancing:

Requirements

This example uses the following hardware and software components:

  • Two EX8200 switches, one used as the access switch and one used as the distribution switch

  • Junos OS Release 12.2 or later for EX Series switches

Before you begin:

Overview and Topology

Multicast load balancing uses one of seven hashing algorithms to balance traffic between the individual 10-gigabit links in the LAG. For a description of the hashing algorithms, see multicast-loadbalance. The default hashing algorithm is crc-sgip. You can experiment with the different hashing algorithms until you determine the one that best balances your Layer 3 routed multicast traffic.

When a link larger than 10 gigabits is needed on an EX8200 switch, you can combine up to twelve 10-gigabit links to create more bandwidth. This example uses the link aggregation feature to combine four 10-gigabit links into a 40-gigabit link on the distribution switch. In addition, multicast load balancing is enabled to ensure even distribution of Layer 3 routed multicast traffic on the 40-gigabit link. In the sample topology illustrated in Figure 2, an EX8200 switch in the distribution layer is connected to an EX8200 switch in the access layer.

Note:

Link speed is automatically determined based on the size of the configured LAG. For example, if a LAG is composed of four 10-gigabit links, the link speed is 40 gigabits per second.

Note:

The default hashing algorithm, crc-sgip, involves a cyclic redundancy check of both the multicast packet source and group IP addresses.

Figure 2: 40-Gigabit LAG Composed of Four 10-Gigabit Links

You will configure a LAG on each switch and reenable multicast load balancing. When reenabled, multicast load balancing will automatically take effect on the LAG, and the speed is set to 10 gigabits per second for each link in the LAG. Link speed for the 40-gigabit LAG is automatically set to 40 gigabits per second.

Configuration

Procedure

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.
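The original command listing is not reproduced here. As an illustration only, a set of commands along the following lines configures the LAG and the multicast load-balancing hash mode; the interface names and the device count are assumptions, and if multicast load balancing was explicitly disabled on your switch, you also remove that disable statement to reenable it.

    set chassis aggregated-devices ethernet device-count 1
    set interfaces ae0 aggregated-ether-options minimum-links 1
    set interfaces xe-0/1/0 ether-options 802.3ad ae0
    set interfaces xe-0/1/1 ether-options 802.3ad ae0
    set interfaces xe-0/1/2 ether-options 802.3ad ae0
    set interfaces xe-0/1/3 ether-options 802.3ad ae0
    set chassis multicast-loadbalance hash-mode crc-sgip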

Step-by-Step Procedure

To configure a LAG and reenable multicast load balancing:

  1. Specify the number of aggregated Ethernet interfaces to be created:

  2. Specify the minimum number of links that must be up for the aggregated Ethernet interface (aex), that is, the LAG, to be labeled up:

    Note:

    By default, only one link needs to be up for the LAG to be labeled up.

  3. Specify the four members to be included within the LAG:

  4. Reenable multicast load balancing:

    Note:

    You do not need to set link speed the way you do for LAGs that do not use multicast load balancing. Link speed is automatically set to 40 gigabits per second on a 40-gigabit LAG.

  5. You can optionally change the value of the hash-mode option in the multicast-loadbalance statement to try different algorithms until you find the one that best distributes your Layer 3 routed multicast traffic.

    If you change the hashing algorithm when multicast load balancing is disabled, the new algorithm takes effect after you reenable multicast load balancing.

Results

Check the results of the configuration:
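The original output is not reproduced here. Assuming the interface names used in the sketch above, the relevant parts of the resulting configuration would look roughly like this:

    [edit]
    user@switch# show chassis
    aggregated-devices {
        ethernet {
            device-count 1;
        }
    }
    multicast-loadbalance {
        hash-mode crc-sgip;
    }

    [edit]
    user@switch# show interfaces ae0
    aggregated-ether-options {
        minimum-links 1;
    }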

Verification

To confirm that the configuration is working properly, perform these tasks:

Verifying the Status of a LAG Interface

Purpose

Verify that a link aggregation group (LAG) (ae0) has been created on the switch.

Action

Verify that the ae0 LAG has been created:
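For example, a terse listing of the aggregated interface confirms that the bundle exists and is up; the output below is illustrative only:

    user@switch> show interfaces ae0 terse
    Interface               Admin Link Proto    Local                 Remote
    ae0                     up    up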

Meaning

The interface name aex indicates that this is a LAG. A stands for aggregated, and E stands for Ethernet. The number differentiates the various LAGs.

Verifying Multicast Load Balancing

Purpose

Check that traffic is load-balanced equally across paths.

Action

Verify load balancing across the four interfaces:
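One way to compare traffic across the member links (interface names assumed from the earlier sketch) is to check the packet rate on each member and confirm that the rates are similar, for example:

    user@switch> show interfaces xe-0/1/0 | match pps
    user@switch> show interfaces xe-0/1/1 | match pps
    user@switch> show interfaces xe-0/1/2 | match pps
    user@switch> show interfaces xe-0/1/3 | match pps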

Meaning

The interfaces should be carrying approximately the same amount of traffic.

Dynamic Load Balancing

Load balancing is used to ensure that network traffic is distributed as evenly as possible across the members of a given ECMP (equal-cost multipath) group or LAG (link aggregation group). In general, load balancing is classified as either static or dynamic. Static load balancing (SLB) computes the hash solely from the packet contents (for example, the source and destination IP addresses). The biggest advantage of SLB is that packet ordering is guaranteed, because all packets of a given flow take the same path. However, because the SLB mechanism does not consider the path or link load, the network often experiences the following problems:

  • Poor link bandwidth utilization

  • An elephant flow on a single link causing the mice flows on that link to be dropped

Dynamic load balancing (DLB) is an improvement on top of SLB.

For ECMP, you can configure DLB globally, whereas for LAG, you configure it for each aggregated Ethernet interface. You can apply DLB to selected EtherTypes (IPv4, IPv6, and MPLS) by configuring the ether-type statement. If you don't configure any EtherType, DLB is applied to all EtherTypes. Note that you must explicitly configure the DLB mode because there is no default mode.

Note:
  • Starting in Junos OS Release 22.3R1-EVO, QFX5130-32CD switches support dynamic load balancing for both ECMP and LAG.

  • Starting in Junos OS Release 19.4R1, QFX5120-32C and QFX5120-48Y switches support dynamic load balancing for both ECMP and LAG. For LAG, DLB must be configured on a per aggregated Ethernet interface basis.

  • Starting in Junos OS Evolved Release 19.4R2, QFX5220 switches support dynamic load balancing (DLB) for ECMP. For ECMP, DLB must be configured globally.

  • You cannot configure both DLB and resilient hashing at the same time. Otherwise, a commit error will be thrown.

  • DLB is applicable only for unicast traffic.

  • DLB is not supported when the LAG is one of the egress ECMP members.

  • DLB is not supported for remote LAG members.

  • DLB is not supported on Virtual Chassis and Virtual Chassis Fabric (VCF).

  • DLB on LAG and HiGig-trunk are not supported at the same time.

  • QFX5220, QFX5230-64CD, and QFX5240 switches do not support DLB on LAG.

Table 2: Platforms That Support Dynamic Load Balancing for ECMP/LAG

Platform          DLB Support for ECMP    DLB Support for LAG
QFX5120-32C       Yes                     Yes
QFX5120-48Y       Yes                     Yes
QFX5220           Yes                     No
QFX5230-64CD      Yes                     No
QFX5240           Yes                     No

You can use the following DLB modes to load-balance traffic:

  • Per packet mode

    In this mode, DLB is initiated for each packet in the flow. This mode makes sure that the packet always gets assigned to the best-quality member port. However, in this mode, DLB may experience packet reordering problems that can arise due to latency skews.

  • Flowlet mode

    This mode relies on assigning links based on flowlets instead of flows. Real-world application traffic relies on flow control mechanisms of upper-layer transport protocols such as TCP, which throttle the transmission rate. As a result, flowlets are created. You can consider flowlets as multiple bursts of the same flow separated by a period of inactivity between these bursts—this period of inactivity is referred to as the inactivity interval. The inactivity interval serves as the demarcation criteria for identifying new flowlets and is offered as a user-configurable statement under the DLB configuration. In this mode, DLB is initiated per flowlet—that is, for the new flow as well as for the existing flow that has been inactive for a sufficiently long period of time (configured inactivity-interval). The reordering problem of per packet mode is addressed in this mode as all the packets in a flowlet take the same link. If the inactivity-interval value is configured to be higher than the maximum latency skew across all ECMP paths, then you can avoid packet reordering across flowlets while increasing link utilization of all available ECMP links.

  • Assigned flow mode

    You can use assigned flow mode to selectively disable rebalancing for a period of time to isolate problem sources. You cannot use this mode for real-time DLB or predict the egress ports that will be selected using this mode because assigned flow mode does not consider port load and queue size.

Note:

Here are some of the important behaviors of DLB:

  • DLB is applicable for incoming EtherTypes only.

  • From a DLB perspective, both Layer 2 and Layer 3 link aggregation group (LAG) bundles are considered the same.

  • The link utilization will not be optimal if you use dynamic load balancing on asymmetric bundles, that is, on ECMP links with different member capacities.

  • With DLB, flows are not reassigned when a new link is added in per-packet and assigned-flow modes. This can cause suboptimal usage after a link flap: a link that was carrying traffic before the flap may remain unused afterward if no new flows or flowlets arrive.

Benefits

  • DLB considers member bandwidth utilization along with packet content for member selection. As a result, it achieves better link utilization based on real-time link loads.

  • DLB ensures that links hogged by elephant flows are not used by mice flows. Because traffic is spread across member links according to load, DLB avoids the hash-collision drops that can occur with SLB.

Configuring Dynamic Load Balancing

This topic describes how to configure dynamic load balancing (DLB) in flowlet mode.

Starting in Junos OS Release 19.4R1, QFX5120-32C and QFX5120-48Y switches support dynamic load balancing for both ECMP and LAG. For LAG, DLB must be configured on a per aggregated Ethernet interface basis.

Starting in Junos OS Evolved Release 19.4R2, QFX5220 switches support dynamic load balancing (DLB) for ECMP. For ECMP, DLB must be configured globally.

Configuring DLB for ECMP (Flowlet mode)

To configure dynamic load balancing for ECMP with flowlet mode (QFX5120-32C, QFX5120-48Y, and QFX5220 switches):

  1. Enable dynamic load balancing with flowlet mode:
  2. (Optional) Configure the inactivity-interval value, the minimum inactivity interval (in microseconds) for link reassignment:
  3. (Optional) Configure dynamic load balancing for specific EtherTypes:
  4. (Optional) View the options configured for dynamic load balancing on ECMP by using the show forwarding-options enhanced-hash-key command. A configuration sketch follows this list.
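A minimal sketch of steps 1 through 3, assuming an inactivity interval of 16 microseconds and IPv4 traffic (the values and the availability of each option can vary by platform and release):

    set forwarding-options enhanced-hash-key ecmp-dlb flowlet
    set forwarding-options enhanced-hash-key ecmp-dlb flowlet inactivity-interval 16
    set forwarding-options enhanced-hash-key ecmp-dlb ether-type ipv4

The show forwarding-options enhanced-hash-key command in step 4 then displays the configured DLB options.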

Similarly, you can configure DLB for ECMP with per-packet or assigned-flow mode.

Configuring DLB for LAG (Flowlet mode)

Before you begin, create an aggregated Ethernet (AE) bundle by configuring a set of interfaces as aggregated Ethernet with a specific AE group identifier.

To configure dynamic load balancing for LAG with flowlet mode (QFX5120-32C and QFX5120-48Y):

  1. Enable dynamic load balancing with flowlet mode:

  2. (Optional) Configure the inactivity-interval value, the minimum inactivity interval (in microseconds) for link reassignment:

  3. (Optional) Configure dynamic load balancing for specific EtherTypes:

  4. (Optional) View the options configured for dynamic load balancing on LAG by using the show forwarding-options enhanced-hash-key command. A configuration sketch follows this list.
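A minimal sketch of steps 1 through 3 for a bundle named ae0; the exact dlb options available under aggregated-ether-options, including the ether-type statement, depend on the platform and release, so treat this as an approximation:

    set interfaces ae0 aggregated-ether-options dlb flowlet
    set interfaces ae0 aggregated-ether-options dlb flowlet inactivity-interval 16
    set interfaces ae0 aggregated-ether-options dlb ether-type ipv4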

Similarly, you can configure DLB for LAG with per-packet or assigned-flow mode.

Example: Configure Dynamic Load Balancing

This example shows how to configure dynamic load balancing.

Requirements

This example uses the following hardware and software components:

  • Two QFX5120-32C or QFX5120-48Y switches

  • Junos OS Release 19.4R1 or later running on all devices

Overview

Dynamic load balancing (DLB) is an improvement on top of SLB.

For ECMP, you can configure DLB globally, whereas for LAG, you configure it for each aggregated Ethernet interface. You can apply DLB to selected EtherTypes, such as IPv4, IPv6, and MPLS, by configuring the ether-type statement. If you don't configure any EtherType, DLB is applied to all EtherTypes. Note that you must explicitly configure the DLB mode because there is no default mode.

Note:
  • Starting in Junos OS Release 19.4R1, QFX5120-32C and QFX5120-48Y switches support dynamic load balancing on both ECMP and LAG.

  • You cannot configure both DLB and resilient hashing at the same time. Otherwise, a commit error is thrown.

Topology

In this topology, R0 and R1 are connected to each other.

Figure 3: Dynamic Load Balancing
Note:

This example uses static route configuration. You can also use dynamic routing protocols instead.

Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.

R0

R1
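The original R0 and R1 listings are not reproduced here. The following sketch shows the kind of statements involved on R0; all interface names, addresses, and the policy name pfe-ecmp are assumptions, and R1 mirrors this configuration with its own values:

    set chassis aggregated-devices ethernet device-count 1
    set interfaces xe-0/0/0 ether-options 802.3ad ae0
    set interfaces xe-0/0/1 ether-options 802.3ad ae0
    set interfaces ae0 aggregated-ether-options dlb per-packet
    set interfaces ae0 unit 0 family inet address 10.1.1.1/30
    set interfaces xe-0/0/2 unit 0 family inet address 10.1.2.1/30
    set interfaces xe-0/0/3 unit 0 family inet address 10.1.3.1/30
    set routing-options static route 203.0.113.0/24 next-hop 10.1.2.2
    set routing-options static route 203.0.113.0/24 next-hop 10.1.3.2
    set policy-options policy-statement pfe-ecmp then load-balance per-packet
    set routing-options forwarding-table export pfe-ecmp
    set forwarding-options enhanced-hash-key ecmp-dlb per-packet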

Configure Dynamic Load Balancing for LAG (QFX5120-32C and QFX5120-48Y)

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode.

To configure the R0 router:

Note:

Repeat this procedure for the other routers, after modifying the appropriate interface names, addresses, and any other parameters for each router.

  1. Configure Link Aggregation Group (LAG).

    After configuring the LAG, execute the steps in the Verify Traffic Load Before Configuring Dynamic Load Balancing Feature on LAG verification section to check the configuration and the traffic load before DLB is configured.

  2. Configure Dynamic Load Balancing with per-packet mode for LAG.

    After configuring DLB, execute the steps in the Verify Traffic Load After Configuring Dynamic Load Balancing Feature on LAG verification section to check the configuration and the traffic load after DLB is configured. An illustrative configuration sketch for both steps follows this procedure.
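A sketch of the two steps above, assuming member interfaces xe-0/0/0 and xe-0/0/1 and bundle ae0 (the same statements appear in the quick-configuration sketch earlier); the dlb option under aggregated-ether-options is an approximation of the per-LAG DLB configuration:

    Step 1 (LAG):
    set chassis aggregated-devices ethernet device-count 1
    set interfaces xe-0/0/0 ether-options 802.3ad ae0
    set interfaces xe-0/0/1 ether-options 802.3ad ae0
    set interfaces ae0 unit 0 family inet address 10.1.1.1/30

    Step 2 (DLB with per-packet mode on the LAG):
    set interfaces ae0 aggregated-ether-options dlb per-packet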

Configure Dynamic Load Balancing for ECMP (QFX5120-32C, QFX5120-48Y, and QFX5220 switches)

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode.

To configure the R0 router:

Note:

Repeat this procedure for the other routers, after modifying the appropriate interface names, addresses, and any other parameters for each router.

  1. Configure the Gigabit Ethernet interface link connecting from R0 to R1.

  2. Create the static routes:

  3. Apply the load-balancing policy. The dynamic load balancing feature requires multiple ECMP next hops to be present in the forwarding table.

  4. Configure dynamic load balancing with per-packet mode for ECMP, as shown in the sketch after this procedure.

  5. On R1, configure the Gigabit Ethernet interface link.
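A sketch of steps 1 through 4 on R0 (step 5 mirrors step 1 on R1); interface names, addresses, and the policy name pfe-ecmp are assumptions, and these lines repeat the relevant part of the quick-configuration sketch above:

    Step 1 (interfaces toward R1):
    set interfaces xe-0/0/2 unit 0 family inet address 10.1.2.1/30
    set interfaces xe-0/0/3 unit 0 family inet address 10.1.3.1/30

    Step 2 (static routes with two next hops):
    set routing-options static route 203.0.113.0/24 next-hop 10.1.2.2
    set routing-options static route 203.0.113.0/24 next-hop 10.1.3.2

    Step 3 (per-packet load-balancing policy exported to the forwarding table):
    set policy-options policy-statement pfe-ecmp then load-balance per-packet
    set routing-options forwarding-table export pfe-ecmp

    Step 4 (DLB with per-packet mode for ECMP):
    set forwarding-options enhanced-hash-key ecmp-dlb per-packet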

Verification

Confirm that the configuration is working properly.

Verify Traffic Load Before Configuring Dynamic Load Balancing Feature on LAG
Purpose

Verify the traffic load before the DLB feature is configured on the link aggregation group.

Action

From operational mode, run the show interfaces interface-name | match pps command.

Verify Traffic Load After Configuring Dynamic Load Balancing Feature on LAG
Purpose

Verify that packets received on R0 are load-balanced.

Action

From operational mode, run the show interfaces interface-name command.

Meaning

Dynamic load balancing with per-packet mode is working successfully. After you apply the dynamic load balancing feature on the LAG, the load is shared equally across the member links.

Verification

Confirm that the configuration is working properly at R0.

Verify Dynamic Load Balancing on R0

Purpose

Verify that packets received on R0 are load-balanced.

Action

From operational mode, run the show route forwarding-table destination destination-address command.

Meaning

The forwarding table shows multiple ECMP next hops for the destination, which confirms that traffic to that destination can be load-balanced across the available paths.

Verify Load Balancing on R1

Purpose

Confirm that the configuration is working properly at R1.

Action

From operational mode, run the show route command.

Meaning

Dynamic load balancing with per-packet mode is working successfully. After you apply the dynamic load balancing feature on ECMP, the load is shared equally across the available paths.

Change History Table

Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.

Release
Description
19.4R2-EVO
Starting in Junos OS Evolved Release 19.4R2, QFX5220 switches support dynamic load balancing (DLB) for ECMP. For ECMP, DLB must be configured globally.
19.4R1
Starting in Junos OS Release 19.4R1, QFX5120-32C and QFX5120-48Y switches support dynamic load balancing for both ECMP and LAG. For LAG, DLB must be configured on a per aggregated Ethernet interface basis.
10.1
Starting with Junos OS Release 10.1, you can also configure the load balancing hash key for Layer 2 traffic to use fields in the Layer 3 and Layer 4 headers using the payload statement.