
Aggregated Ethernet Interfaces

 
Summary

Learn about aggregated Ethernet interfaces (or Ethernet link aggregation), how to configure an aggregated Ethernet interface, LACP, and other supported features.

What Are Aggregated Ethernet Interfaces?

You can group or bundle multiple Ethernet interfaces together to form a single link layer interface known as the aggregated Ethernet interface (aex) or a link aggregation group (LAG). The IEEE 802.3ad standard defines link aggregation of Ethernet interfaces and provides a method by which you can group or bundle multiple Ethernet interfaces. Bundling multiple interfaces together enables you to increase the supported bandwidth. The device treats the aggregated Ethernet interface or LAG as a single link instead of a combination of multiple links.

Benefits

  • Increased bandwidth and cost effectiveness—The aggregated link provides higher bandwidth than the bandwidth provided by each individual link without requiring new equipment.

  • Increased resiliency and availability—If any of the physical links goes down, the traffic is reassigned to another member link.

  • Load balancing—The aggregated Ethernet bundle balances traffic across its member links, and redistributes traffic to the remaining links if a link fails.

Configuration Guidelines for Aggregated Ethernet Interfaces

Consider the following guidelines as you configure an aggregated Ethernet interface.

  • For Junos OS Evolved, if you add a new member interface to the aggregated Ethernet bundle, a link flap event is generated. The physical interface is deleted as a regular interface and then added back as a member. During this time, the details of the physical interface are lost.

  • You must not configure aggregated Ethernet for subscriber management by using the ether-options statement. If you do so, subscriber management does not work properly—there are issues with subscriber accounting and statistics. Use the gigether-options statement to configure aggregated Ethernet interfaces on the member link interfaces.

  • You cannot configure simple filters on member link interfaces in an aggregated Ethernet bundle.

  • You cannot configure any IQ-specific capabilities such as MAC accounting, VLAN rewrites, or VLAN queuing on member link interfaces in an aggregated Ethernet bundle.

Platform Support for LAG

Table 1 lists the MX Series routers and the maximum number of interfaces per LAG and the maximum number of LAG groups they support. MX Series routers can support up to 64 interfaces per LAG.

Table 1: Maximum Interfaces per LAG and Maximum LAGs per MX Router

MX Series Routers | Maximum Interfaces per LAG | Maximum LAG Groups
MX5, MX10, MX40, MX80, and MX104 | 16 | Limited by the interface capacity; 80 on MX104
MX150 | 10 | 10
MX240, MX480, MX960, MX10003, MX10008, MX10016, MX2010, and MX2020 | 64 | 128 (before 14.2R1); 1000 (14.2R1 and later)

Table 2 lists the PTX Series routers and the maximum number of interfaces per LAG and the maximum number of LAG groups they support. PTX Series routers can support up to 128 LAGs.

Table 2: Maximum Interfaces per LAG and Maximum LAGs per PTX Router

PTX Series Routers | Maximum Interfaces per LAG | Maximum LAG Groups
PTX1000, PTX10002, PTX10003, and PTX10008 | 64 | 128
PTX3000 and PTX5000 | 64 | 128
PTX10008 (Junos OS Evolved) | 64 | 1152

Configure Aggregated Ethernet Interfaces

Table 3 describes the steps to configure aggregated Ethernet interfaces on your routing device.

Table 3: Aggregated Ethernet Interfaces Configuration

Configuration Step

Command

Step 1: Specify the number of aggregated Ethernet bundles you want on your device. If you specify the device-count value as 2, you can configure two aggregated bundles.

[edit chassis aggregated-devices ethernet]
user@host# set device-count number

Step 2: Specify the members you want to include within the aggregated Ethernet bundle and add them individually. Aggregated interfaces are numbered from ae0 through ae4092.

[edit interfaces]
user@host# set interface-name gigether-options 802.3ad aex

Step 3: Specify the link speed for the aggregated Ethernet links. When you specify the speed, all the interfaces that make up the aggregated Ethernet bundle have the same speed. You can also configure the member links of an aggregated Ethernet bundle with a combination of rates—that is, mixed rates—for efficient bandwidth utilization.

[edit interfaces]
user@host# set aex aggregated-ether-options link-speed speed

Step 4: Specify the minimum number of links for the aggregated Ethernet interface (aex), that is, the defined bundle, to be labeled up. By default, only one link must be up for the bundle to be labeled up.

You cannot configure the minimum number of links and the minimum bandwidth at the same time. They are mutually exclusive.

[edit interfaces]
user@host# set aex aggregated-ether-options minimum-links number

Step 5: (Optional) Specify the minimum bandwidth for the aggregated Ethernet links.

You cannot configure link protection with minimum bandwidth.

You cannot configure the minimum number of links and the minimum bandwidth at the same time. They are mutually exclusive.

[edit interfaces]
user@host# set aex aggregated-ether-options minimum-bandwidth

Step 6: Specify an interface family and the IP address for the aggregated Ethernet bundle. Aggregated Ethernet interfaces can be VLAN-tagged or untagged.

Packet tagging provides a logical way to differentiate traffic on ports that support multiple virtual LANs (VLANs). You can configure aggregated Ethernet interfaces to receive tagged traffic, untagged traffic, or both.

Tagged Interface

[edit interfaces]
user@host# set aex vlan-tagging unit 0 vlan-id vlan-id

Untagged Interface

[edit interfaces]
user@host# set aex unit 0 family inet address ip-address

Step 7: (Optional) Configure your device to collect multicast statistics for the aggregated Ethernet interface.

To view the multicast statistics, use the show interfaces statistics detail command. If you have not configured collection of multicast statistics, you cannot view the multicast statistics.

[edit interfaces]
user@host# set aex multicast-statistics

Step 8: Verify and commit the configuration.

[edit interfaces]
user@host# run show configuration
user@host# commit

Step 9: (Optional) Delete an aggregated Ethernet Interface.

[edit]
user@host# delete interfaces aex

OR

[edit]
user@host# delete chassis aggregated-devices ethernet device-count
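The steps in Table 3 can be combined into a single configuration sequence. The following is a minimal sketch; the interface names, link speed, and address are placeholders to adapt to your device:

```
[edit]
user@host# set chassis aggregated-devices ethernet device-count 2
user@host# set interfaces ge-0/0/1 gigether-options 802.3ad ae0
user@host# set interfaces ge-0/0/2 gigether-options 802.3ad ae0
user@host# set interfaces ae0 aggregated-ether-options link-speed 10g
user@host# set interfaces ae0 aggregated-ether-options minimum-links 2
user@host# set interfaces ae0 unit 0 family inet address 192.0.2.1/24
user@host# commit
```

With minimum-links 2, the ae0 bundle is labeled up only while at least two member links are up.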

Mixed-Mode and Mixed-Rate Aggregated Ethernet Interfaces

On Juniper Networks devices, you can configure the member links of an aggregated Ethernet bundle to operate at different link speeds (also known as rates). Such a bundle is known as a mixed-rate aggregated Ethernet bundle. When you configure 10-Gigabit Ethernet member links of an aggregated Ethernet bundle in both LAN mode and WAN mode, the configuration is known as a mixed-mode configuration.

Benefits

  • Efficient bandwidth utilization—When you configure the member links with different link speeds, the available bandwidth is used efficiently and completely.

  • Load balancing—Traffic is balanced across the member links within an aggregated Ethernet bundle and is redistributed to the remaining links if a link fails.

Platform Support for Mixed Aggregated Ethernet Bundles

Table 4 lists the platforms and corresponding MPCs that support mixed-rate aggregated Ethernet bundles on MX Series routers.

Table 4: Platform Support Matrix for Mixed-Rate Aggregated Ethernet Bundles on MX Series Routers

Supported MPCs | Supported Platform | Initial Release
16x10GE (MPC-3D-16XGE-SFPP) | MX240, MX480, MX960, MX2010, and MX2020 | 14.2R1
MPC1E (MX-MPC1-3D; MX-MPC1E-3D; MX-MPC-1-3D-Q; MX-MPC1E-3D-Q) | MX240, MX480, MX960, MX2010, and MX2020 | 14.2R1
MPC2E (MX-MPC2-3D; MX-MPC2E-3D; MX-MPC2-3D-Q; MX-MPC2E-3D-Q; MX-MPC2-3D-EQ; MX-MPC2E-3D-EQ; MX-MPC2-3D-P) | MX240, MX480, MX960, MX2010, and MX2020 | 14.2R1
MPC3E (MX-MPC3E-3D) | MX240, MX480, MX960, MX2010, and MX2020 | 14.2R1
MPC4E (MPC4E-3D-32XGE-SFPP and MPC4E-3D-2CGE-8XGE) | MX240, MX480, MX960, MX2010, and MX2020 | 14.2R1
MPC5E (6x40GE+24x10GE; 6x40GE+24x10GEQ; 2x100GE+4x10GE; 2x100GE+4x10GEQ) | MX240, MX480, MX960, MX2010, and MX2020 | 14.2R1
MPC6E (MX2K-MPC6E) | MX2010 and MX2020 | 14.2R1
MPC7E (Multi-Rate) (MPC7E-MRATE) | MX240, MX480, MX960, MX2010, and MX2020 | 15.1F4
MPC7E 10G (MPC7E-10G) | MX240, MX480, MX960, MX2010, and MX2020 | 15.1F5
MPC8E (MX2K-MPC8E) | MX2010 and MX2020 | 15.1F5
MPC9E (MX2K-MPC9E) | MX2010 and MX2020 | 15.1F5
MPC10E (MPC10E-15C-MRATE) | MX240, MX480, and MX960 | 19.1R1

Table 5 lists the platforms and corresponding hardware components that support mixed aggregated Ethernet bundles.

Table 5: Platform Support Matrix for Mixed Aggregated Ethernet Bundles on T Series

Rate and Mode: 10-Gigabit Ethernet LAN and WAN (WAN rate: OC192)
Supported Platform: T640, T1600, T4000, and TX Matrix Plus routers
Supported FPCs and PICs:

  • T4000 FPC5 (T4000-FPC5-3D)

    • 10-Gigabit Ethernet LAN/WAN PIC with Oversubscription and SFP+ (PF-24XGE-SFPP)

    • 10-Gigabit Ethernet LAN/WAN PIC with SFP+ (PF-12XGE-SFPP)

  • Enhanced Scaling FPC3 (T640-FPC3-ES)

    • 10-Gigabit Ethernet PIC with XENPAK (PC-1XGE-XENPAK)

  • Enhanced Scaling FPC4 (T640-FPC4-ES), Enhanced Scaling FPC4-1P (T640-FPC4-1P-ES), and T1600 Enhanced Scaling FPC4 (T1600-FPC4-ES)

    • 10-Gigabit Ethernet LAN/WAN PIC with SFP+ (PD-5-10XGE-SFPP)

    • 10-Gigabit Ethernet LAN/WAN PIC with XFP (PD-4XGE-XFP)

Rate and Mode: 40-Gigabit Ethernet, 100-Gigabit Ethernet
Supported Platform: T4000 and TX Matrix Plus routers
Supported FPCs and PICs:

  • T4000 FPC5 (T4000-FPC5-3D)

    • 100-Gigabit Ethernet PIC with CFP (PF-1CGE-CFP)

Rate and Mode: 40-Gigabit Ethernet, 100-Gigabit Ethernet
Supported Platform: T640, T1600, T4000, and TX Matrix Plus routers
Supported FPCs and PICs:

  • Enhanced Scaling FPC4 (T640-FPC4-ES), Enhanced Scaling FPC4-1P (T640-FPC4-1P-ES), and T1600 Enhanced Scaling FPC4 (T1600-FPC4-ES)

    • 100-Gigabit Ethernet PIC with CFP (PD-1CE-CFP-FPC4)

      Note: This PIC is available packaged only in an assembly with the T1600-FPC4-ES FPC.

    • 40-Gigabit Ethernet PIC with CFP (PD-1XLE-CFP)

Consider the following guidelines as you configure a mixed-rate aggregated Ethernet bundle:

  • You can configure a maximum of 64 member links to form a mixed aggregated Ethernet bundle.

  • When you mix a 10-Gigabit Ethernet interface in LAN mode and a 10-Gigabit Ethernet interface in WAN mode in the same aggregated bundle on MX Series, it is not considered a mixed-rate aggregate. To mix the interfaces having the same speed but different framing options, you need not use the mixed statement at the [edit interfaces interface-name aggregated-ether-options link-speed] hierarchy level.

  • Mixed-rate aggregated Ethernet links can interoperate with non-Juniper Networks aggregated Ethernet member links provided that mixed-rate aggregated Ethernet load balancing is configured at egress.

  • After you configure a mixed-rate aggregated Ethernet link on a 100-Gigabit Ethernet PIC with CFP, changing aggregated Ethernet link protection or LACP link protection configurations results in aggregated Ethernet link flapping. Also, changing the configuration of a mixed aggregated Ethernet link can result in aggregated Ethernet link flapping.

  • Packets are dropped when the total throughput of the hash flow exiting a member link (or the throughput of multiple hash flows exiting a single member link) exceeds the link speed of the member link. This can happen when the egress member link changes because of a link failure and the hash flow switches to a member link of speed that is less than the total throughput of the hash flow.

  • Mixed-rate aggregated Ethernet links do not support rate-based CoS components such as scheduler, shaper, and policer. However, the default CoS settings are supported on the mixed-rate aggregated Ethernet links.

  • Load balancing of the egress traffic across the member links of a mixed-rate aggregated Ethernet link is proportional to the rates of the member links. Egress multicast load balancing is not supported on mixed aggregated Ethernet interfaces.

  • Mixed-rate aggregated Ethernet interfaces do not support aggregated Ethernet link protection, link protection on a 1:1 model, or LACP link protection.

Configure Mixed-Rate Aggregated Ethernet Interfaces

Table 6 describes the steps to configure mixed-rate aggregated Ethernet bundle on your device.

Table 6: Mixed-Rate Aggregated Ethernet Configuration

Configuration Step

Command

Step 1: Specify the number of aggregated Ethernet bundles you want on your device. If you specify the device-count value as 2, you can configure two aggregated bundles.

[edit chassis aggregated-devices ethernet]
user@host# set device-count number

Step 2: Specify the members you want to include within the aggregated Ethernet bundle. Aggregated interfaces are numbered from ae0 through ae4092.

[edit interfaces]
user@host# set interface-name gigether-options 802.3ad aex

Step 3: Specify the link speed for the aggregated Ethernet links. When you specify the speed as mixed, you can configure the member links of an aggregated Ethernet bundle with a combination of rates—that is, mixed rates—for efficient bandwidth utilization.

You cannot configure the minimum number of links for the aggregated Ethernet bundle to be labeled up when you configure the link speed as mixed.

[edit interfaces]
user@host# set aex aggregated-ether-options link-speed mixed

Step 4: Specify the minimum bandwidth for the aggregated Ethernet links.

You cannot configure link protection with the minimum bandwidth.

[edit interfaces]
user@host# set aex aggregated-ether-options minimum-bandwidth

Step 5: Verify and commit the configuration.

[edit interfaces]
user@host# run show configuration
user@host# commit
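The steps in Table 6 can likewise be combined into one sequence. This is a sketch with placeholder member interfaces; bw-value stands for the minimum bandwidth you choose:

```
[edit]
user@host# set chassis aggregated-devices ethernet device-count 1
user@host# set interfaces xe-0/0/0 gigether-options 802.3ad ae0
user@host# set interfaces et-0/0/1 gigether-options 802.3ad ae0
user@host# set interfaces ae0 aggregated-ether-options link-speed mixed
user@host# set interfaces ae0 aggregated-ether-options minimum-bandwidth bw-value
user@host# commit
```

Because the link speed is mixed, do not also configure minimum-links on this bundle; the two statements are mutually exclusive.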

Link Aggregation Control Protocol

Link Aggregation Control Protocol (LACP), defined in IEEE 802.3ad, is a monitoring protocol that detects link-layer failures within a network. You can use LACP to monitor the local and remote ends of member links in a LAG.

By default, LACP is not configured on aggregated Ethernet interfaces. Ethernet links do not exchange information about the state of the link. When you configure LACP, the transmitting link (also known as actor) initiates transmission of LACP packets to the receiving link (also known as partner). The actor is the local interface in an LACP exchange. The partner is the remote interface in an LACP exchange.

When you configure LACP, you must select one of the following transmission modes for each end of the LAG:

  • Active—To initiate transmission of LACP packets and responses to LACP packets, configure LACP in active mode. If either the actor or the partner is active, they exchange LACP packets.

  • Passive—The interface does not initiate transmission of LACP packets but responds to LACP packets it receives. If both ends of the LAG are passive, no LACP packets are exchanged. This is the default transmission mode.

Benefits

  • Link monitoring—LACP detects invalid configurations on the local end as well as the remote end of the link.

  • Link resiliency and redundancy—If a link fails, LACP ensures that traffic continues to flow on the remaining links.

Configuration Guidelines for LACP

Consider the following guidelines when you configure LACP:

  • When you configure LACP on multiple different physical interfaces, only features that are supported across all of the linked devices are supported in the resulting link aggregation group (LAG) bundle. For example, different PICs can support a different number of forwarding classes. If you use link aggregation to link together the ports of a PIC that supports up to 16 forwarding classes with a PIC that supports up to 8 forwarding classes, the resulting LAG bundle supports up to 8 forwarding classes. Similarly, linking together a PIC that supports weighted random early detection (WRED) with a PIC that does not support it results in a LAG bundle that does not support WRED.

  • If you configure the LACP system identifier (by using the system-id system-id statement) to be all zeros (00:00:00:00:00:00), the commit operation fails with an error.

  • When you use the accept-data statement to enable a device to process packets received on a member link irrespective of the LACP state (as long as the aggregated Ethernet bundle is up), the device does not behave as defined in the IEEE 802.3ax standard. The standard requires that such packets be dropped; with accept-data configured, they are processed instead.

Configure LACP

Table 7 describes the steps to configure LACP on an aggregated Ethernet interface.

Table 7: LACP Configuration

Configuration Step

Command

Step 1: Specify the LACP transmission mode: active or passive.

[edit interfaces interface-name aggregated-ether-options]
user@host# set lacp active
user@host# set lacp passive

Step 2: Specify the interval at which the interfaces send LACP packets.

When you configure different intervals for the active and passive interfaces, the actor transmits the packets at the rate configured on the partner’s interface.

[edit interfaces interface-name aggregated-ether-options lacp]
user@host# set periodic interval

Step 3: Configure the LACP system identifier.

The user-defined system identifier in LACP enables two ports from two different devices to act as though they were part of the same aggregate group.

The system identifier is a 48-bit (6-byte) globally unique field. It is used in combination with a 16-bit system-priority value, which results in a unique LACP system identifier.

[edit interfaces interface-name aggregated-ether-options lacp]
user@host# set system-id system-id

Step 4: Configure the LACP system priority at the Aggregated Ethernet interface level.

This system priority takes precedence over the priority value configured at the global [edit chassis] level. The device with the numerically lower value (that is, the higher priority) becomes the controlling device. If both devices have the same LACP system priority value, the device MAC address determines which device is in control.

[edit interfaces interface-name aggregated-ether-options lacp]
user@host# set system-priority system-priority

Step 5: (Optional) Configure the LACP administrative key.

You must configure MC-LAG to configure this option. For more information on MC-LAG, see Understanding Multichassis Link Aggregation Groups.

[edit interfaces interface-name aggregated-ether-options lacp]
user@host# set admin-key number

Step 6: Specify the time period, in seconds, for which LACP maintains the state of a member link as expired. To prevent excessive flapping of a LAG member link, you can configure LACP to prevent the transition of an interface from down to up for a specified interval.

[edit interfaces interface-name aggregated-ether-options lacp]
user@host# set hold-time timer-value

Step 7: Configure the device to process packets received on a member link irrespective of the LACP state if the aggregated interface status is up.

[edit interfaces interface-name aggregated-ether-options lacp]
user@host# set accept-data

Step 8: Verify and commit the configuration.

[edit interfaces interface-name aggregated-ether-options lacp]
user@host# run show configuration
user@host# commit
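As a compact sketch of Table 7, the following enables LACP in active mode on one bundle; ae0, the fast interval, and the priority value are placeholders:

```
[edit]
user@host# set interfaces ae0 aggregated-ether-options lacp active
user@host# set interfaces ae0 aggregated-ether-options lacp periodic fast
user@host# set interfaces ae0 aggregated-ether-options lacp system-priority 100
user@host# commit
```

Configure at least one end of the LAG as active; if both ends are passive, no LACP packets are exchanged.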

Targeted Distribution

By default, aggregated Ethernet bundles use a hash-based algorithm to distribute traffic over multiple links. Traffic destined through a logical interface of a bundle can exit through any of the member links based on the hashing algorithm. Egress policy is distributed between individual member interface schedulers or policers instantiated in each Packet Forwarding Engine hosting a member link. Distributed egress policy enforcement relies on traffic load balancing and so is not always accurate.

Targeted distribution provides a mechanism to direct traffic through specified links of an aggregated Ethernet bundle. You can also use targeted distribution to assign roles to member links to handle link failure scenarios. Because policy enforcement for a given logical interface is no longer distributed across Packet Forwarding Engines, targeted distribution ensures accurate policy enforcement. Targeted distribution is applicable to both Layer 2 and Layer 3 interfaces, irrespective of the family configured for the logical interface. The outbound traffic of a Layer 3 host is distributed among all the member links of an aggregated Ethernet bundle. Targeted distribution is implemented only for transit traffic.

You can form distribution lists consisting of member links of the aggregated Ethernet interfaces and you can assign roles to these lists, as follows:

  • Primary distribution list: You can configure the member links that will be part of the primary distribution list. Traffic is load-balanced among all the member links in the primary list. If all links within the primary list are up, traffic is forwarded on those links. If some of the links within a primary list fail, the remaining links carry traffic.

  • Backup distribution list: You can configure the member links that will be part of the backup distribution list. If all links within the primary list go down, only then do the links in the backup list start carrying traffic. If some of the links within the backup list fail, the remaining links in the backup list carry traffic.

  • Standby distribution list: All remaining links are added to the defined standby list. If all the links within the primary list and the backup list go down, only then the links in the standby list start carrying traffic. When the links in the primary distribution list come back online, they resume carrying traffic.

Benefits

  • Accurate policy enforcement—Policy enforcement is not distributed and is, therefore, accurate.

  • Load balancing—With targeted distribution, you can load-balance the traffic between the aggregated Ethernet bundle member links.

Example: Configure Targeted Distribution for Accurate Policy Enforcement on Logical Interfaces Across Aggregated Ethernet Member Links

This example shows how to configure primary and backup targeted distribution lists for aggregated Ethernet member links. Member links are assigned membership to the distribution lists. Logical interfaces of the aggregated Ethernet bundle are then assigned membership to the primary list and the backup list.

Requirements

This example uses the following software and hardware components:

  • Junos OS Release 16.1 and later releases

  • One MX Series 5G Universal Routing Platform

Overview

Targeted distribution provides a mechanism to direct traffic through specified links of an aggregated Ethernet bundle, and also assigns roles to member links to handle link failure scenarios. You can configure targeted distribution to load-balance the traffic between the aggregated Ethernet bundle member links. You can map a logical interface to a single link only for the outgoing traffic.

This example uses the apply-groups configuration to specify the distribution lists for the logical interfaces of the aggregated Ethernet member links. You can use the apply-groups statement to inherit Junos OS configuration statements from a configuration group. In this example, the apply-groups statement assigns the odd-numbered logical interfaces of the aggregated Ethernet bundle the primary list dl2 and the even-numbered logical interfaces the primary list dl1.

The aggregated Ethernet interface used in this example is ae10 with units 101, 102, 103, and 104. The physical interface ge-0/0/3 is specified as distribution list dl1 and ge-0/0/4 as dl2. The logical interface unit numbers of the aggregated Ethernet bundle ending in an odd number are assigned the distribution list dl2 as the primary list, and those ending in an even number are assigned the distribution list dl1 as the primary list.

To configure targeted distribution, you must:

  1. Create a global apply group.

  2. Assign each member of the aggregated Ethernet interface to a different distribution list.

  3. Attach the apply group to the aggregated Ethernet interface.

  4. Create the logical interfaces. The apply group automatically assigns the distribution lists to each member of the aggregated Ethernet bundle as required.

Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

To configure targeted distribution:

  1. Create a global apply group and specify the primary list and the backup list.
  2. Assign each member of the aggregated Ethernet bundle to a different distribution list.
  3. Attach the defined apply group to the aggregated Ethernet interface.
  4. Create the logical interfaces and configure their parameters.

Results

From configuration mode, confirm your configuration by using the show command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.

Verification

Verify Targeted Distribution of Logical Interfaces

Purpose

Verify that the logical interfaces are assigned to the distribution lists.

Action

To verify that the logical interfaces are assigned to the distribution lists, enter the show interfaces detail or extensive command.

The show interfaces detail or extensive command output shows the logical interfaces ending in an odd number assigned to the distribution list dl2 (ge-0/0/4) and those ending in an even number assigned to the distribution list dl1 (ge-0/0/3) by default. If either of those interfaces fails, the logical interfaces switch to the interfaces in the backup list or continue to use the active member interface. For example, on the aggregated Ethernet bundle ae10.101, the primary interface shown is ge-0/0/4; on ae10.102, the primary interface is ge-0/0/3; and similarly for the other logical interfaces.

user@host# run show interfaces extensive ae10

Independent Micro-BFD Sessions for LAG

The Bidirectional Forwarding Detection (BFD) protocol is a simple detection protocol that quickly detects failures in the forwarding paths. To enable failure detection for aggregated Ethernet interfaces in a LAG, you can configure an independent, asynchronous-mode BFD session on every LAG member link in a LAG bundle. Instead of a single BFD session monitoring the status of the entire bundle, independent micro-BFD sessions monitor the status of individual member links.

When you configure micro-BFD sessions on every member link in a LAG bundle, each individual session determines the Layer 2 and Layer 3 connectivity of each member link in a LAG.

After the individual session is established on a particular link, member links are attached to the LAG and then load-balanced by one of the following:

  • Static configuration—The device control process acts as the client to the micro-BFD session.

  • Link Aggregation Control Protocol (LACP)—LACP acts as the client to the micro-BFD session.

When the micro-BFD session is up, a LAG link is established and data is transmitted over that LAG link. If the micro-BFD session on a member link is down, that particular member link is removed from the load balancer, and the LAG managers stop directing traffic to that link. These micro-BFD sessions are independent of each other despite having a single client that manages the LAG interface.

Micro-BFD sessions run in the following modes:

  • Distribution mode—In this mode, the Packet Forwarding Engine (PFE) sends and receives the packets at Layer 3. By default, micro-BFD sessions are distributed at Layer 3.

  • Non-distribution mode—In this mode, the Routing Engine sends and receives the packets at Layer 2. You can configure the BFD session to run in this mode by including the no-delegate-processing statement under periodic packet management (PPM).

A pair of routing devices in a LAG exchange BFD packets at a specified, regular interval. The routing device detects a neighbor failure when it stops receiving a reply after a specified interval. This allows the quick verification of member link connectivity with or without LACP. A UDP port distinguishes BFD over LAG packets from BFD over single-hop IP packets. The Internet Assigned Numbers Authority (IANA) has allocated 6784 as the UDP destination port for micro-BFD.

Benefits

  • Failure detection for LAG—Enables failure detection between devices that are in point-to-point connections.

  • Multiple BFD sessions—Enables you to configure multiple micro-BFD sessions for each member link instead of a single BFD session for the entire bundle.

Configuration Guidelines for Micro-BFD Sessions

Consider the following guidelines as you configure individual micro-BFD sessions on an aggregated Ethernet bundle.

  • This feature works only when both devices support BFD. If BFD is configured at only one end of the LAG, the feature does not work.

  • Starting with Junos OS Release 13.3, IANA has allocated 01-00-5E-90-00-01 as the dedicated MAC address for micro-BFD. Dedicated MAC mode is used by default for micro-BFD sessions.

  • In Junos OS, micro-BFD control packets are always untagged by default. For Layer 2 aggregated interfaces, the configuration must include the vlan-tagging or flexible-vlan-tagging statement when you configure aggregated Ethernet with BFD. Otherwise, the commit operation fails with an error.

  • When you enable micro-BFD on an aggregated Ethernet interface, the aggregated interface can receive micro-BFD packets. In Junos OS Release 19.3 and later, for MPC10E and MPC11E MPCs, you cannot apply firewall filters on the micro-BFD packets received on the aggregated Ethernet interface. For MPC1E through MPC9E, you can apply firewall filters on the micro-BFD packets received on the aggregated Ethernet interface only if the aggregated Ethernet interface is configured as an untagged interface.

  • Starting with Junos OS Release 14.1, you must specify the neighbor in a micro-BFD session. In releases before Junos OS Release 16.1, you must configure the loopback address of the remote destination as the neighbor address. Beginning with Junos OS Release 16.1, on MX Series routers you can also configure the aggregated Ethernet interface address of the remote destination as the neighbor address.

  • Beginning with Release 16.1R2, Junos OS checks and validates the configured micro-BFD local-address against the interface or loopback IP address before the configuration commit. Junos OS performs this check on both IPv4 and IPv6 micro-BFD address configurations, and if they do not match, the commit fails.

  • For the IPv6 address family, disable duplicate address detection before configuring this feature with aggregated Ethernet interface addresses. To disable duplicate address detection, include the dad-disable statement at the [edit interfaces aex unit y family inet6] hierarchy level.

Caution

Deactivate bfd-liveness-detection at the [edit interfaces aex aggregated-ether-options] hierarchy level, or deactivate the aggregated Ethernet interface, before changing the neighbor address from the loopback IP address to the aggregated Ethernet interface IP address. Modifying the local and neighbor addresses without first deactivating bfd-liveness-detection or the aggregated Ethernet interface might cause micro-BFD session failures.
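Based on the [edit interfaces aex aggregated-ether-options] hierarchy named in the caution above, the following is a hedged sketch of enabling micro-BFD on a bundle; the addresses and interval are placeholders, and the available options can vary by release:

```
[edit interfaces ae0 aggregated-ether-options]
user@host# set bfd-liveness-detection minimum-interval 300
user@host# set bfd-liveness-detection local-address 10.1.1.1
user@host# set bfd-liveness-detection neighbor 10.1.1.2
user@host# commit
```

In releases before Junos OS Release 16.1, the neighbor address must be the loopback address of the remote destination.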

Example: Configure Independent Micro-BFD Sessions for LAG

This example shows how to configure an independent micro-BFD session for aggregated Ethernet interfaces.

Requirements

This example uses the following hardware and software components:

  • MX Series routers with Junos MPCs

  • T Series routers with Type 4 FPC or Type 5 FPC

    BFD for LAG is supported on the following PIC types on T-Series:

    • PC-1XGE-XENPAK (Type 3 FPC)

    • PD-4XGE-XFP (Type 4 FPC)

    • PD-5-10XGE-SFPP (Type 4 FPC)

    • 24x10GE (LAN/WAN) SFPP, 12x10GE (LAN/WAN) SFPP, and 1x100GE Type 5 PICs

  • PTX Series routers with 24x10GE (LAN/WAN) SFPP PICs

  • Junos OS Release 13.3 or later running on all devices

Overview

The example includes two routers that are directly connected. You configure two aggregated Ethernet interfaces: ae0 for IPv4 connectivity and ae1 for IPv6 connectivity. You configure a micro-BFD session on the ae0 bundle using IPv4 addresses as the local and neighbor endpoints on both routers, and a micro-BFD session on the ae1 bundle using IPv6 addresses as the local and neighbor endpoints on both routers. The verification section confirms that the independent micro-BFD sessions are active.

Topology

Figure 1 shows the sample topology.

Figure 1: Configuring an Independent Micro-BFD Session for LAG

Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.

Router R0
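The original command listing is not reproduced in this extract. As a rough sketch only — the member links, addresses, and minimum interval below are illustrative assumptions, not the original values — the Router R0 quick configuration for the ae0 bundle might look like the following. Because this example runs on releases before Junos OS Release 16.1, the micro-BFD local and neighbor addresses are the loopback addresses of the local and remote routers, with a static route providing reachability to the remote loopback:

    set chassis aggregated-devices ethernet device-count 2
    set interfaces xe-0/0/0 gigether-options 802.3ad ae0
    set interfaces xe-0/0/1 gigether-options 802.3ad ae0
    set interfaces lo0 unit 0 family inet address 192.168.0.1/32
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options bfd-liveness-detection minimum-interval 100
    set interfaces ae0 aggregated-ether-options bfd-liveness-detection local-address 192.168.0.1
    set interfaces ae0 aggregated-ether-options bfd-liveness-detection neighbor 192.168.0.2
    set interfaces ae0 unit 0 family inet address 10.1.1.1/30
    set routing-options static route 192.168.0.2/32 next-hop 10.1.1.2

The ae1 (IPv6) configuration follows the same pattern with family inet6 addresses.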

Router R1

Configure a Micro-BFD Session for Aggregated Ethernet Interfaces

Step-by-Step Procedure

The following example requires that you navigate various levels in the configuration hierarchy. For information about navigating the CLI, see “Using the CLI Editor in Configuration Mode” in the CLI User Guide.

Note

Repeat this procedure for Router R1, modifying the appropriate interface names, addresses, and any other parameters for each router.

To configure a micro-BFD session for aggregated Ethernet interfaces on Router R0:

  1. Configure the physical interfaces.
  2. Configure the loopback interface.
  3. Configure an IP address on the aggregated Ethernet interface ae0 with either IPv4 or IPv6 addresses, according to your network requirements.
  4. Set the routing options, create a static route, and set the next-hop address.

    Note

    You can configure either an IPv4 or IPv6 static route, depending on your network requirements.

  5. Configure Link Aggregation Control Protocol (LACP).
  6. Configure BFD for the aggregated Ethernet interface ae0, and specify the minimum interval, local IP address, and the neighbor IP address.
  7. Configure an IP address on the aggregated Ethernet interface ae1.

    You can assign either IPv4 or IPv6 addresses as per your network requirements.

  8. Configure BFD for the aggregated Ethernet interface ae1.
    Note

    Starting with Junos OS Release 16.1, you can also configure this feature with the aggregated Ethernet interface address as the local address in a micro-BFD session.

    Starting with Release 16.1R2, Junos OS checks and validates the configured micro-BFD local-address against the interface or loopback IP address before the configuration commit. Junos OS performs this check on both IPv4 and IPv6 micro-BFD address configurations, and if they do not match, the commit fails.

  9. Configure tracing options for BFD for troubleshooting.
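Taken together, the micro-BFD portion of the steps above (steps 5 and 6 for ae0) corresponds to configuration of roughly the following shape; the addresses and minimum interval are illustrative assumptions:

    interfaces {
        ae0 {
            aggregated-ether-options {
                lacp {
                    active;
                }
                bfd-liveness-detection {
                    minimum-interval 100;
                    local-address 192.168.0.1;
                    neighbor 192.168.0.2;
                }
            }
            unit 0 {
                family inet {
                    address 10.1.1.1/30;
                }
            }
        }
    }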

Results

From operational mode, enter the show interfaces, show protocols, and show routing-options commands and confirm your configuration. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

If you are done configuring the device, commit the configuration.

Verification

Confirm that the configuration is working properly.

Verify That the Independent BFD Sessions Are Up

Purpose

Verify that the micro-BFD sessions are up, and view details about the BFD sessions.

Action

From operational mode, enter the show bfd session extensive command.

user@R0> show bfd session extensive

Meaning

The Session Type field identifies the independent micro-BFD sessions running on the links in a LAG. The TX interval and RX interval values in the output represent the setting configured with the minimum-interval statement. All other output represents the default settings for BFD. To modify the default settings, include the optional statements under the bfd-liveness-detection statement.
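For example, to change the transmit and receive interval and the detection multiplier from their defaults (the values shown are illustrative):

    [edit interfaces ae0 aggregated-ether-options]
    user@R0# set bfd-liveness-detection minimum-interval 300
    user@R0# set bfd-liveness-detection multiplier 4
    user@R0# commit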

View Detailed BFD Events

Purpose

View the contents of the BFD trace file to assist in troubleshooting, if required.

Action

From operational mode, enter the file show /var/log/bfd command.

user@R0> file show /var/log/bfd

Meaning

BFD messages are being written to the specified trace file.
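A trace configuration along these lines causes BFD events to be written to /var/log/bfd; the file size, file count, and flag are illustrative choices:

    [edit]
    user@R0# set protocols bfd traceoptions file bfd size 1m files 3
    user@R0# set protocols bfd traceoptions flag all
    user@R0# commit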

MAC Address Accounting for Dynamically Learned Addresses on Aggregated Ethernet Interfaces

You can configure source MAC address and destination MAC address-based accounting for MAC addresses that are dynamically learned on aggregated Ethernet interfaces.

By default, dynamic learning of source and destination MAC addresses on aggregated Ethernet interfaces is disabled. When you enable this feature, you can configure source and destination MAC address-based accounting on the routed interfaces on MX Series routers with DPCs and MPCs. Also, when you enable dynamic learning of MAC addresses, the MAC-filter settings for each member link of the aggregated Ethernet bundle are updated. The limit on the maximum number of MAC addresses that can be learned from an interface does not apply to this dynamic MAC address learning functionality.

Destination MAC-based accounting is supported only for MAC addresses dynamically learned at the ingress interface, including each individual child or member link of the aggregated Ethernet bundle. MPCs do not support destination MAC address learning. Dynamic learning of MAC addresses can be enabled either on the aggregated Ethernet interface or on selected individual member links. MAC learning support on the bundle depends on the capability of the individual member links: if a link in the bundle does not support MAC learning or accounting, the feature is disabled on the aggregated Ethernet bundle.

The MAC data for the aggregated bundle is displayed after data is collected from the individual child links. On DPCs, these packets are accounted for in the egress direction (Output Packet/Byte count); on MPCs, they are not accounted for, because destination MAC learning is not supported. The same difference in behavior occurs between child links on DPCs and MPCs. Because dynamic learning collects MAC database statistics from the child links when you issue the CLI command, the time it takes to display the data on the console depends on the size of the MAC database and the number of child links spread across different FPCs.

Benefits

  • Compute statistics—Enables you to compute MAC address statistics for dynamically learned MAC addresses.

What Is Enhanced LAG?

When you associate a physical interface with an aggregated Ethernet interface, the physical child links are also associated with the parent aggregated Ethernet interface to form a LAG. So, one child next hop is created for each member link of an aggregated Ethernet interface for each VLAN interface. For example, an aggregate next hop for an aggregated Ethernet interface with 16 member links leads to the creation of 17 next hops per VLAN.

When you configure enhanced LAG, child next hops are not created for member links and, as a result, a higher number of next hops can be supported. To configure enhanced LAG, you must configure the device’s network services mode as enhanced-ip; the feature is then enabled by default. This feature is not supported if the device’s network services mode is set to operate in the enhanced-ethernet mode.
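For example, to place the device in the enhanced-ip network services mode (note that on many MX Series platforms, changing the network services mode requires a system reboot to take effect):

    [edit]
    user@host# set chassis network-services enhanced-ip
    user@host# commit

You can then confirm the active mode from operational mode with the show chassis network-services command.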

Benefits

  • Reduction in memory and CPU usage to support aggregated Ethernet interfaces.

  • Improvement in system performance and scaling numbers.

Release History Table

Release 19.3: In Junos OS Release 19.3 and later, for MPC10E and MPC11E MPCs, you cannot apply firewall filters on the micro-BFD packets received on the aggregated Ethernet interface. For MPC1E through MPC9E, you can apply firewall filters on the micro-BFD packets received on the aggregated Ethernet interface only if the aggregated Ethernet interface is configured as an untagged interface.

Release 16.1: Beginning with Junos OS Release 16.1, you can also configure this feature on MX Series routers with the aggregated Ethernet interface address of the remote destination as the neighbor address.

Release 16.1R2: Beginning with Release 16.1R2, Junos OS checks and validates the configured micro-BFD local-address against the interface or loopback IP address before the configuration commit.

Release 14.1: Starting with Junos OS Release 14.1, you specify the neighbor in a BFD session. In releases before Junos OS Release 16.1, you must configure the loopback address of the remote destination as the neighbor address.

Release 13.3: Starting with Junos OS Release 13.3, IANA has allocated 01-00-5E-90-00-01 as the dedicated MAC address for micro-BFD.