Multihoming an Ethernet-Connected End System Design and Implementation

 

For an overview of multihoming an Ethernet-connected end system in this reference design, see the Multihoming Support for Ethernet-Connected End Systems section in Data Center Fabric Blueprint Architecture Components.

Figure 1 illustrates the multihomed Ethernet-connected end system in this procedure:

Figure 1: Ethernet-Connected Multihoming Example Overview

Configuring a Multihomed Ethernet-Connected End System using EVPN Multihoming with VLAN Trunking

EVPN multihoming is used in this building block to connect an Ethernet-connected end system into the overlay network. EVPN multihoming works by treating two or more physical multihomed links as a single Ethernet segment identified by an EVPN Ethernet Segment ID (ESI). The set of physical links belonging to the same Ethernet segment are treated as one aggregated Ethernet interface. The member links—much like member links in a traditional aggregated Ethernet interface—provide redundant paths to and from the end system while also ensuring overlay network traffic is load-balanced across the multiple paths.

LACP with the fast timer mode is used to speed the detection and disabling of impaired member links of an Ethernet segment. Micro BFD may further improve fault isolation, but it may not scale to cover all end-system-facing ports, and the end system must also support micro BFD.

In the reference design, an Ethernet-connected server was tested both when connected to a single leaf device and when multihomed to two or three leaf devices, to verify that traffic is handled properly in multihomed setups with more than two leaf devices. In practice, an Ethernet-connected server can be multihomed to a large number of leaf devices.

To configure a multihomed Ethernet-connected server:

  1. (Aggregated Ethernet interfaces only) Create the aggregated Ethernet interfaces to connect each leaf device to the server. Enable LACP with a fast period interval for each aggregated Ethernet interface.

    Leaf 10:

    Leaf 11:

    Leaf 12:

    Note

    The three leaf devices in this step use the same aggregated Ethernet interface name—ae11—and the same member link interfaces—et-0/0/13 and et-0/0/14—to simplify network administration.

    Avoid using a different aggregated Ethernet interface name on each VTEP for the same ESI; doing so requires you to configure the LACP admin-key so that the end system can identify the multihomed links as part of the same LAG.
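    The configuration statements for this step are not reproduced above. The following is a representative sketch for Leaf 10 only, assuming the interface names given in the note (ae11 with member links et-0/0/13 and et-0/0/14); Leaf 11 and Leaf 12 use equivalent statements:

    ```
    set interfaces et-0/0/13 ether-options 802.3ad ae11
    set interfaces et-0/0/14 ether-options 802.3ad ae11
    set interfaces ae11 aggregated-ether-options lacp active
    set interfaces ae11 aggregated-ether-options lacp periodic fast
    ```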

  2. Configure each interface as a trunk interface, and assign VLANs to each trunk interface.

    Note

    If you are connecting your end system to the leaf device with a single link, replace the interface name—for example, ae11—with a physical interface name—for example, et-0/0/13—for the remainder of this procedure.

    Leaf 10:

    Leaf 11:

    Leaf 12:
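    As an illustration for Leaf 10—the VLAN names VNI_10000 and VNI_20000 are placeholders, not values taken from the reference design—the trunk configuration might look like:

    ```
    set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae11 unit 0 family ethernet-switching vlan members [ VNI_10000 VNI_20000 ]
    ```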

  3. Configure the multihomed links with an ESI and specify an LACP system identifier for each link.

    Assign each multihomed interface to the Ethernet segment—identified by the Ethernet segment identifier (ESI)—that hosts the Ethernet-connected server. Ensure that traffic passes over all multihomed links by configuring each link as all-active.

    The ESI values must match on all multihomed interfaces.

    Leaf 10:

    Leaf 11:

    Leaf 12:
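    A representative sketch for Leaf 10 follows; the ESI value shown is a placeholder, and whatever value is used must be repeated on Leaf 11 and Leaf 12:

    ```
    set interfaces ae11 esi 00:01:01:01:01:01:01:01:01:01
    set interfaces ae11 esi all-active
    ```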

  4. Enable LACP and configure a system identifier.

    The LACP system identifier must match on all multihomed interfaces.

    Leaf 10:

    Leaf 11:

    Leaf 12:
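    For example, on Leaf 10—the system identifier shown is a placeholder, and the identical value must be configured on Leaf 11 and Leaf 12:

    ```
    set interfaces ae11 aggregated-ether-options lacp system-id 00:00:00:00:00:01
    ```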

  5. After committing the configuration, verify that the links on each leaf switch are in the Up state.

    Example:

  6. Verify that LACP is operational on the multihomed links.
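Operational-mode commands along the following lines can be used for the verification in steps 5 and 6; the interface name ae11 is assumed from the note in step 1, and the expected output is not reproduced here:

```
show interfaces terse ae11
show lacp interfaces ae11
```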

Enabling Storm Control

Storm control can be enabled as part of this building block. Storm control is used to prevent BUM traffic storms by monitoring BUM traffic levels and taking a specified action to limit BUM traffic forwarding when a specified traffic level—called the storm control level—is exceeded. See Understanding Storm Control for additional information on the feature.

In this reference design, storm control is enabled on server-facing aggregated Ethernet interfaces to rate limit broadcast, unknown unicast, and multicast (BUM) traffic. If the amount of BUM traffic exceeds 1% of the available bandwidth on the aggregated Ethernet interface, storm control drops BUM traffic to prevent broadcast storms.

To enable storm control:

  1. Create the storm control profile that will be used to enable the feature. The interfaces that are configured using the storm control profile are specified in this step.

    Leaf Device:
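    A representative sketch follows; the profile name STORM-CONTROL and the interface ae11 are assumptions chosen for illustration:

    ```
    set forwarding-options storm-control-profiles STORM-CONTROL all
    set interfaces ae11 unit 0 family ethernet-switching storm-control STORM-CONTROL
    ```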

  2. Set the storm control configuration within the profile.

    In this reference design, storm control is configured to drop BUM traffic when it exceeds 1% of all available interface bandwidth.

    Note

    Dropping BUM traffic is the only supported storm control action in the Cloud Data Center architecture.

    Note

    The storm control settings in this version of the reference design drop multicast traffic that exceeds the configured storm control threshold. If your network supports multicast-based applications, consider using a storm control configuration—such as the no-multicast option in the storm-control-profiles statement—that is not represented in this reference design.

    Storm control settings in support of multicast-based applications will be included in a future version of this reference design.
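    For example, using the placeholder profile name STORM-CONTROL, a 1% storm control level could be set as follows:

    ```
    set forwarding-options storm-control-profiles STORM-CONTROL all bandwidth-percentage 1
    ```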

  3. To verify storm control activity, filter system log messages related to storm control by entering the show log messages | match storm command.

Multihoming an Ethernet-Connected End System—Release History

Table 1 provides a history of all of the features in this section and their support within this reference design.

Table 1: Release History

Release      Description

17.3R1-S1    The EVPN multihoming, VLAN trunking, and storm control features documented in this section are supported on all devices within the reference design running Junos OS Release 17.3R1-S1 or later.
