
Bridged Overlay Design and Implementation

 

A bridged overlay provides Ethernet bridging between leaf devices in an EVPN network, as shown in Figure 1. This overlay type simply extends VLANs between the leaf devices across VXLAN tunnels. Bridged overlays provide an entry-level overlay style for data center networks that require Ethernet connectivity but do not need routing services between the VLANs.

In this example, loopback interfaces on the leaf devices act as VXLAN tunnel endpoints (VTEPs). The tunnels enable the leaf devices to send VLAN traffic to other leaf devices and Ethernet-connected end systems in the data center. The spine devices only provide basic EBGP underlay and IBGP overlay connectivity for these leaf-to-leaf VXLAN tunnels.

Figure 1: Bridged Overlay
Note

If inter-VLAN routing is required for a bridged overlay, you can use an MX Series router or SRX Series security device that is external to the EVPN/VXLAN fabric. Otherwise, you can select one of the other overlay types that incorporate routing (such as an edge-routed bridging overlay, a centrally-routed bridging overlay, or a routed overlay) discussed in this Cloud Data Center Architecture Guide.

The following sections describe how to configure a bridged overlay:

Configuring a Bridged Overlay

Bridged overlays are supported on all platforms included in this reference design. To configure a bridged overlay, you configure VNIs, VLANs, and VTEPs on the leaf devices, and BGP on the spine devices.

When you implement this style of overlay on a spine device, the focus is on providing overlay transport services between the leaf devices. Consequently, you configure an IP fabric underlay and an IBGP overlay. There are no VTEPs or IRB interfaces needed, because the spine device does not provide any routing functionality or EVPN/VXLAN capabilities in a bridged overlay.

When you implement this style of overlay on a leaf device, you enable EVPN with VXLAN encapsulation to connect to other leaf devices, configure VTEPs, establish route targets and route distinguishers, configure Ethernet Segment Identifier (ESI) settings, and map VLANs to VNIs. Again, you do not include IRB interfaces or routing on the leaf devices for this overlay method.

The following sections describe how to configure and verify the bridged overlay:

Configuring a Bridged Overlay on the Spine Device

To configure a bridged overlay on a spine device, perform the following:

Note

The following example shows the configuration for Spine 1, as shown in Figure 2.

Figure 2: Bridged Overlay – Spine Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on a spine device, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your spine devices, see Configuring IBGP for the Overlay.

Verifying a Bridged Overlay on the Spine Device

Issue the following commands to verify that the overlay is working properly on your spine devices:

  1. Verify that the spine device has reachability to the leaf devices. This output shows the possible routes to Leaf 1.
    user@spine-1> show route 192.168.1.1
  2. Verify that IBGP is functional on the spine devices acting as a route reflector cluster. You should see peer relationships with all spine device loopback interfaces (192.168.0.1 through 192.168.0.4) and all leaf device loopback interfaces (192.168.1.1 through 192.168.1.96).
    user@spine-1> show bgp summary

Configuring a Bridged Overlay on the Leaf Device

To configure a bridged overlay on a leaf device, perform the following:

Note
The following example shows the configuration for Leaf 1, as shown in Figure 3.

Figure 3: Bridged Overlay – Leaf Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on a leaf device, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your leaf device, see Configuring IBGP for the Overlay.
  3. Configure the EVPN protocol with VXLAN encapsulation, and specify the VTEP source interface (in this case, the loopback interface of the leaf device).

    Leaf 1:
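
    The statements below sketch this step for Leaf 1. The loopback unit number (lo0.0) and the use of the extended-vni-list all statement are assumptions based on the surrounding text, not the guide's exact configuration:

    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    set switch-options vtep-source-interface lo0.0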

  4. Define an EVPN route target and route distinguisher, and use the auto option to derive route targets automatically. Setting these parameters specifies how the routes are imported and exported. The import and export of routes from a bridging table is the basis for dynamic overlays. In this case, members of the global BGP community with a route target of target:64512:1111 participate in the exchange of EVPN/VXLAN information.

    Leaf 1:
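
    A sketch of this step for Leaf 1 follows. The route target target:64512:1111 and the auto option come from the text above; the route distinguisher value (derived here from the Leaf 1 loopback address 192.168.1.1) is an assumption for illustration:

    set switch-options route-distinguisher 192.168.1.1:1
    set switch-options vrf-target target:64512:1111
    set switch-options vrf-target auto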

    Note

    A specific route target processes EVPN Type 1 routes, while an automatic route target processes Type 2 routes. This reference design requires both route targets.

  5. Configure ESI settings. Because the end systems in this reference design are multihomed to three leaf devices per device-type cluster (such as QFX5100), you must configure the same ESI identifier and LACP system identifier on all three leaf devices for each unique end system. Unlike other topologies, where you would configure a different LACP system identifier per leaf device and have VXLAN select a single designated forwarder, use the same LACP system identifier so that the three leaf devices appear as a single LAG to a multihomed end system. In addition, use the same aggregated Ethernet interface number for all ports included in the ESI.

    The configuration for Leaf 1 is shown below, but you must replicate this configuration on both Leaf 2 and Leaf 3 per the topology shown in Figure 4.

    Tip

    When you create an ESI number, always set the high order octet to 00 to indicate the ESI is manually created. The other 9 octets can be any hexadecimal value from 00 to FF.

    Figure 4: ESI Topology for Leaf 1, Leaf 2, and Leaf 3

    Leaf 1:
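
    The statements below sketch the ESI and LACP settings for Leaf 1. The ESI value matches the one shown in the verification section (00:00:00:00:00:00:51:00:00:01), and interfaces xe-0/0/10 and xe-0/0/11 appear in the verification steps; the LACP system ID is an assumption for illustration:

    set interfaces xe-0/0/10 ether-options 802.3ad ae11
    set interfaces xe-0/0/11 ether-options 802.3ad ae11
    set interfaces ae11 esi 00:00:00:00:00:00:51:00:00:01
    set interfaces ae11 esi all-active
    set interfaces ae11 aggregated-ether-options lacp active
    set interfaces ae11 aggregated-ether-options lacp system-id 00:00:51:00:00:01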

  6. Configure VLANs and map them to VNIs. This step enables the VLANs to participate in VNIs across the EVPN/VXLAN domain.

    Leaf 1:
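
    A sketch of the VLAN-to-VNI mapping for Leaf 1 follows. VLAN IDs 10 and 30 and VNI 1000 appear in the verification steps; the VLAN names and the VNI assigned to VLAN 30 are assumptions for illustration:

    set vlans VNI_1000 vlan-id 10
    set vlans VNI_1000 vxlan vni 1000
    set vlans VNI_3000 vlan-id 30
    set vlans VNI_3000 vxlan vni 3000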

Verifying the Bridged Overlay on the Leaf Device

Issue the following commands to verify that the overlay is working properly on your leaf devices:

  1. Verify that the interfaces are operational. Interfaces xe-0/0/10 and xe-0/0/11 are dual-homed to the Ethernet-connected end system through interface ae11, while interfaces et-0/0/48 through et-0/0/51 are uplinks to the four spine devices.
    user@leaf-1> show interfaces terse | match ae.*
    user@leaf-1> show lacp interfaces
    user@leaf-1> show ethernet-switching interface ae11
  2. Verify on Leaf 1 and Leaf 3 that the Ethernet switching table has installed both the local MAC addresses and the remote MAC addresses learned through the overlay.

    Note

    To identify end systems learned remotely from the EVPN overlay, look for the MAC address, ESI logical interface, and ESI number. For example, Leaf 1 learns about an end system with the MAC address of 02:0c:10:03:02:02 through esi.1885. This end system has an ESI number of 00:00:00:00:00:00:51:10:00:01, which matches the ESI number configured on Leaf 4, Leaf 5, and Leaf 6 (QFX5110 switches), so we know that this end system is multihomed to these three leaf devices.

    user@leaf-1> show ethernet-switching table vlan-id 30
  3. Verify the remote EVPN routes coming from VNI 1000 and MAC address 02:0c:10:01:02:02. In this case, they are coming from Leaf 4 (192.168.1.4) by way of Spine 1 (192.168.0.1).

    Note

    The format of the EVPN routes is EVPN-route-type:route-distinguisher:vni:mac-address.

    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 1000 evpn-mac-address 02:0c:10:01:02:02
    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 1000 evpn-mac-address 02:0c:10:01:02:02 detail
  4. Verify the source and destination address of each VTEP interface and view their status.

    Note

    There are 96 leaf devices, so there are 96 VTEP interfaces in this reference design: one VTEP interface per leaf device.

    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point source
    user@leaf-1> show interfaces terse vtep
    user@leaf-1> show interfaces vtep
  5. Verify that each VNI maps to the associated VXLAN tunnel.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote
  6. Verify that MAC addresses are learned through the VXLAN tunnels.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote mac-table
  7. Verify multihoming information of the gateway and the aggregated Ethernet interfaces.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point esi
  8. Verify that the VXLAN tunnel from one leaf to another leaf is load balanced with equal cost multipathing (ECMP) over the underlay.
    user@leaf-1> show route forwarding-table table default-switch extensive | find vtep.32770
  9. Verify that remote MAC addresses are reachable through ECMP.
    user@leaf-1> show route forwarding-table table default-switch extensive destination 02:0c:10:01:02:03/48
    Note

    Though the MAC address is reachable over multiple VTEP interfaces, QFX5100, QFX5110, and QFX5200 switches do not support ECMP across the overlay because of a merchant ASIC limitation. Only the QFX10000 line of switches contain a custom Juniper Networks ASIC that supports ECMP across both the overlay and the underlay.

    user@leaf-1> show ethernet-switching table vlan-id 10 | match 02:0c:10:01:02:03
    user@leaf-1> show route forwarding-table table default-switch extensive destination 02:0c:10:01:02:03/48
  10. Verify which device is the Designated Forwarder (DF) for broadcast, unknown unicast, and multicast (BUM) traffic coming from the VTEP tunnel.

    Note

    Because the DF IP address is listed as 192.168.1.2, Leaf 2 is the DF.

    user@leaf-1> show evpn instance esi 00:00:00:00:00:00:51:00:00:01 designated-forwarder

Bridged Overlay — Release History

Table 1 provides a history of all of the features in this section and their support within this reference design.

Table 1: Bridged Overlay in the Cloud Data Center Reference Design – Release History

Release       Description

17.3R3-S2     All features documented in this section are supported on all devices within the reference design running Junos OS Release 17.3R3-S2 or later.
