
Centrally-Routed Bridging Overlay Design and Implementation

 

A centrally-routed bridging overlay performs routing at a central location in the EVPN network, as shown in Figure 1. In this example, IRB interfaces are configured in the overlay at each spine device to route traffic between the VLANs that originate at the leaf devices and end systems. For an overview of centrally-routed bridging overlays, see the Centrally-Routed Bridging Overlay section in Data Center Fabric Blueprint Architecture Components.

Figure 1: Centrally-Routed Bridging Overlay

The following sections provide the detailed steps of how to implement a centrally-routed bridging overlay:

Configuring a VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance

This basic form of overlay is supported on all platforms included in this reference design. It uses the simplest VLAN-aware method to enable a single, default switching instance that supports up to 4094 VLANs.

As shown in Figure 2, you configure VLANs at the leaf devices, and IRB interfaces for routing at the spine devices. Such configuration is placed in the default switching instance at the [edit vlans], [edit interfaces], [edit protocols evpn], and [edit switch-options] hierarchy levels. Routing instances are not required for this overlay style, but can be implemented as an option depending on the needs of your network.

Figure 2: VLAN-Aware Centrally-Routed Bridging Overlay

When you implement this style of overlay on a spine device, you configure IRB interfaces to route traffic between Ethernet virtual network instances, set virtual gateway addresses, add VXLAN features to optimize traffic paths, configure EVPN with VXLAN encapsulation in the default switching instance or in a routing instance, set the loopback interface as the VTEP, configure route distinguishers and route targets to direct traffic to peers, and map VLANs to VNIs.

When you implement this style of overlay on a leaf device, you configure Ethernet Segment Identifier (ESI) settings, enable EVPN with VXLAN encapsulation in the default switching instance, establish route targets and route distinguishers, and map VLANs to VNIs. In this reference design, you also have an option to configure the leaf device as a Virtual Chassis to expand the number of end system-facing ports.

For an overview of VLAN-aware centrally-routed bridging overlays, see the Centrally-Routed Bridging Overlay section in Data Center Fabric Blueprint Architecture Components. If you need to implement more than 4094 VLANs, see Configuring a VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches.

The following sections provide the detailed steps of how to configure and verify the VLAN-aware centrally-routed bridging overlay in the default switching instance:

Configuring a VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance on the Spine Device

To configure a VLAN-aware centrally-routed bridging overlay in the default switching instance on a spine device, perform the following:

Note

The following example shows the configuration for Spine 1, as shown in Figure 3.

Figure 3: VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance – Spine Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on a spine device, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your spine devices, see Configuring IBGP for the Overlay.
  3. Configure the VTEP tunnel endpoint as the loopback address, and add a route distinguisher and a route target (target:64512:1111). Also, keep your configuration simple by using the auto route target option, which uses one target for both import and export.

    Spine 1:
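
    The following set commands are an illustrative sketch of this step; the route distinguisher value shown is an assumption, and the route target matches the target:64512:1111 value described above.

    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.0.1:1
    set switch-options vrf-target target:64512:1111
    set switch-options vrf-target auto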

  4. Configure IRB interfaces for each VNI and the corresponding virtual gateway address (which uses .254 in the 4th octet for each prefix). Include VXLAN features, such as proxy-macip-advertisement and virtual-gateway-accept-data, to improve performance and manageability.
    Note
    • The proxy-macip-advertisement statement allows MAC address plus IP address information (ARP entries) learned locally for a subnet to be sent by one central gateway (spine device) to the other central gateways. This is referred to as ARP synchronization. This feature improves convergence times and traffic handling in the EVPN/VXLAN network.

    • You must configure both the virtual-gateway-accept-data statement and the preferred IPv4 and IPv6 addresses to use the ping operation and verify connectivity to the virtual gateway IP address from the end system.

    Spine 1:
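
    The following is an illustrative sketch for one IRB interface (irb.100 for VNI 10000); the IP prefixes are assumptions, and the remaining IRB interfaces follow the same pattern with .254 in the 4th octet as the virtual gateway address.

    set interfaces irb unit 100 proxy-macip-advertisement
    set interfaces irb unit 100 virtual-gateway-accept-data
    set interfaces irb unit 100 family inet address 10.1.0.1/24 preferred
    set interfaces irb unit 100 family inet address 10.1.0.1/24 virtual-gateway-address 10.1.0.254
    set interfaces irb unit 100 family inet6 address 2001:db8:10:1::1/64 preferred
    set interfaces irb unit 100 family inet6 address 2001:db8:10:1::1/64 virtual-gateway-address 2001:db8:10:1::254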

  5. Configure a secondary logical unit on the loopback interface for the default switching instance.

    Spine 1:
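
    For example (the logical unit number and address are illustrative assumptions):

    set interfaces lo0 unit 1 family inet address 192.168.100.1/32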

  6. Configure EVPN with VXLAN encapsulation. Include the no-gateway-community option to advertise the virtual gateway and IRB MAC addresses to the EVPN peer devices so that Ethernet-only PE devices can learn these MAC addresses.

    Spine 1:
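
    A minimal sketch of the EVPN stanza follows; extended-vni-list all is an assumption, and you can list specific VNIs instead.

    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    set protocols evpn default-gateway no-gateway-community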

  7. Configure mapping between VLANs and VXLAN VNIs.

    Spine 1:
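
    An illustrative sketch using the VLAN IDs and VNIs referenced in this design follows; the VLAN names are assumptions. On the spine devices, each VLAN also references its IRB interface.

    set vlans VNI_10000 vlan-id 100
    set vlans VNI_10000 l3-interface irb.100
    set vlans VNI_10000 vxlan vni 10000
    set vlans VNI_20000 vlan-id 200
    set vlans VNI_20000 l3-interface irb.200
    set vlans VNI_20000 vxlan vni 20000
    set vlans VNI_30000 vlan-id 300
    set vlans VNI_30000 l3-interface irb.300
    set vlans VNI_30000 vxlan vni 30000
    set vlans VNI_40000 vlan-id 400
    set vlans VNI_40000 l3-interface irb.400
    set vlans VNI_40000 vxlan vni 40000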

  8. Configure a routing instance named VRF_1, and map IRB interfaces irb.100 (VNI 10000) and irb.200 (VNI 20000) to this instance.
    Note

    Because the irb.300 (VNI 30000) and irb.400 (VNI 40000) interfaces are not configured inside a routing instance, they are part of the default switching instance for the spine devices. The end result of your configuration should match the diagram shown in Figure 3.

    Spine 1:
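
    An illustrative sketch of the routing instance follows; the route distinguisher and route target values are assumptions.

    set routing-instances VRF_1 instance-type vrf
    set routing-instances VRF_1 interface irb.100
    set routing-instances VRF_1 interface irb.200
    set routing-instances VRF_1 route-distinguisher 192.168.0.1:100
    set routing-instances VRF_1 vrf-target target:64512:100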

Verifying the VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance for the Spine Device

Issue the following commands to verify that the overlay is working properly on your spine devices:

  1. Verify the IRB interfaces are operational for both IPv4 and IPv6.
    user@spine-1> show interfaces terse irb
  2. Verify that the VTEP interfaces are up.
    user@spine-1> show interfaces terse vtep
    user@spine-1> show interfaces terse vtep | match eth-switch | count
  3. Verify the endpoint destination IP address for the VTEP interfaces. The spine devices display their VTEPs as loopback addresses in the range of 192.168.0.x (1 - 4) and the leaf devices display their VTEPs as loopback addresses in the range of 192.168.1.x (1 - 96).
    user@spine-1> show interfaces vtep
  4. Verify that the spine device has all the routes to the leaf devices.
    user@spine-2> show route 192.168.1.1
  5. Verify that each end system resolves the virtual gateway MAC address for a subnet using the gateway IRB address on the central gateways (spine devices).
    user@spine-1> show arp no-resolve vpn VRF_1
    user@spine-1> show ipv6 neighbors
  6. Verify the switching table for VNI 10000 to see entries for end systems and the other spine devices.
    user@spine-1> show ethernet-switching table vlan-id 100
  7. Verify MAC address and ARP information learned from the leaf devices over the control plane.
    user@spine-1> show evpn database mac-address 02:0c:10:01:02:01 extensive
  8. Verify the remote VXLAN tunnel end points.
    user@spine-1> show ethernet-switching vxlan-tunnel-end-point remote
  9. Verify that MAC addresses are learned through the VXLAN tunnel.
    user@spine-1> show ethernet-switching vxlan-tunnel-end-point remote mac-table

Configuring a VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance on the Leaf Device

To configure a VLAN-aware centrally-routed bridging overlay in the default switching instance on a leaf device, perform the following:

Note

The following example shows the configuration for Leaf 1, as shown in Figure 4.
Figure 4: VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance – Leaf Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on a leaf device, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your leaf device, see Configuring IBGP for the Overlay.
  3. Configure the EVPN protocol with VXLAN encapsulation, and specify the VTEP source interface (in this case, the loopback interface of the leaf device).

    Leaf 1:
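
    A minimal sketch of this step follows; extended-vni-list all is an assumption, and you can list specific VNIs instead.

    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    set switch-options vtep-source-interface lo0.0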

  4. Define an EVPN route target and route distinguisher, and use the auto option to derive route targets automatically. Setting these parameters specifies how the routes are imported and exported. The import and export of routes from a routing or bridging table is the basis for dynamic overlays. In this case, members of the global BGP community with a route target of target:64512:1111 participate in the exchange of EVPN/VXLAN information.

    Leaf 1:
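
    An illustrative sketch follows; the route distinguisher uses the Leaf 1 loopback address from this design, and its :1 suffix is an assumption.

    set switch-options route-distinguisher 192.168.1.1:1
    set switch-options vrf-target target:64512:1111
    set switch-options vrf-target auto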

  5. Configure ESI settings on all similar leaf devices. Because the end systems in this reference design are multihomed to three leaf devices per device type cluster (such as QFX5100), you must configure the same ESI identifier and LACP system identifier on all three leaf devices for each unique end system. Unlike other topologies where you would configure a different LACP system identifier per leaf device and have VXLAN select a single designated forwarder, use the same LACP system identifier to allow the 3 leaf devices to appear as a single LAG to a multihomed end system. In addition, use the same aggregated Ethernet interface number for all ports included in the ESI.

    The configuration for Leaf 1 is shown below, but you must replicate this configuration on both Leaf 2 and Leaf 3 per the topology shown in Figure 5.

    Tip

    When you create an ESI number, always set the high order octet to 00 to indicate the ESI is manually created. The other 9 octets can be any hexadecimal value from 00 to FF.

    Figure 5: ESI Topology for Leaf 1, Leaf 2, and Leaf 3

    Leaf 1:
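
    An illustrative sketch for Leaf 1 follows; the ESI matches the value verified later in this section, while the member interface, aggregated Ethernet interface number, LACP system identifier, and device count are assumptions. The ESI, LACP system identifier, and aggregated Ethernet interface number must be the same on Leaf 1, Leaf 2, and Leaf 3.

    # Aggregated Ethernet interface and member link (illustrative)
    set chassis aggregated-devices ethernet device-count 12
    set interfaces et-0/0/10 ether-options 802.3ad ae11
    # ESI and LACP settings shared by Leaf 1, Leaf 2, and Leaf 3
    set interfaces ae11 esi 00:00:00:00:00:00:51:00:00:01
    set interfaces ae11 esi all-active
    set interfaces ae11 aggregated-ether-options lacp active
    set interfaces ae11 aggregated-ether-options lacp system-id 00:00:51:00:00:01
    # Trunk the end-system-facing VLANs
    set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae11 unit 0 family ethernet-switching vlan members [ 100 200 300 400 ]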

  6. Configure VLANs and map them to VNIs. This step enables the VLANs to participate in VNIs across the EVPN/VXLAN domain.

    Leaf 1:
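
    An illustrative sketch of the VLAN-to-VNI mapping follows; the VLAN names are assumptions.

    set vlans VNI_10000 vlan-id 100
    set vlans VNI_10000 vxlan vni 10000
    set vlans VNI_20000 vlan-id 200
    set vlans VNI_20000 vxlan vni 20000
    set vlans VNI_30000 vlan-id 300
    set vlans VNI_30000 vxlan vni 30000
    set vlans VNI_40000 vlan-id 400
    set vlans VNI_40000 vxlan vni 40000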

Verifying the VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance for the Leaf Device

Issue the following commands to verify that the overlay is working properly on your leaf devices:

  1. Verify the interfaces are operational. The output for Leaf 1 indicates the configured Virtual Chassis with interfaces et-0/x/y (FPC 0) and et-1/x/y (FPC 1).
    user@leaf-1> show interfaces terse | match ae.*
    user@leaf-1> show lacp interfaces
  2. Verify that the EVPN routes are being learned through the overlay.
    Note
    • Only selected excerpts of this output are displayed.

    • The format of the EVPN routes is EVPN-route-type:route-distinguisher:vni:mac-address.

    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 10000
  3. Verify on Leaf 1 and Leaf 3 that the Ethernet switching table has installed both the local MAC addresses and the remote MAC addresses learned through the overlay.
    Note

    To identify end systems learned remotely from the EVPN overlay, look for the MAC address, ESI logical interface, and ESI number. For example, Leaf 1 learns about an end system with the MAC address of 02:0c:10:03:02:02 through esi.1885. This end system has an ESI number of 00:00:00:00:00:00:51:10:00:01. Consequently, this matches the ESI number configured for Leaf 4, 5, and 6 (QFX5110 switches), so we know that this end system is multihomed to these three leaf devices.

    user@leaf-1> show ethernet-switching table vlan-id 300
    user@leaf-3> show ethernet-switching table vlan-id 100
  4. Verify on Leaf 1 that the virtual gateway ESI (esi.1679) is reachable by all the spine devices.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point esi | find esi.1679
  5. Verify the remote EVPN routes coming from VNI 10000 and MAC address 02:0c:10:01:02:02. In this case, they are coming from Leaf 4 (192.168.1.4) by way of Spine 1 (192.168.0.1).
    Note

    The format of the EVPN routes is EVPN-route-type:route-distinguisher:vni:mac-address.

    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 10000 evpn-mac-address 02:0c:10:01:02:02
    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 10000 evpn-mac-address 02:0c:10:01:02:02 detail
  6. Verify the source and destination address of each VTEP interface and view their status.
    Note

    There are 96 leaf devices and four spine devices, so there are 100 VTEP interfaces in this reference design - one VTEP interface per device.

    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point source
    user@leaf-1> show interfaces terse vtep
    user@leaf-1> show interfaces vtep
  7. Verify that each VNI maps to the associated VXLAN tunnel.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote
  8. Verify that MAC addresses are learned through the VXLAN tunnels.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote mac-table
  9. Verify multihoming information of the gateway and the aggregated Ethernet interfaces.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point esi
  10. Verify that the VXLAN tunnel from one leaf to another leaf is load balanced with equal cost multipathing (ECMP) over the underlay.
    user@leaf-1> show route forwarding-table table default-switch extensive | find vtep.32770
  11. Verify that remote MAC addresses are reachable through ECMP.
    user@leaf-1> show route forwarding-table table default-switch extensive destination 02:0c:10:01:02:03/48
    Note

    Though the MAC address is reachable over multiple VTEP interfaces, QFX5100, QFX5110, and QFX5200 switches do not support ECMP across the overlay because of a merchant ASIC limitation. Only the QFX10000 line of switches contains a custom Juniper Networks ASIC that supports ECMP across both the overlay and the underlay.

    user@leaf-1> show ethernet-switching table vlan-id 100 | match 02:0c:10:01:02:03
    user@leaf-1> show route forwarding-table table default-switch extensive destination 02:0c:10:01:02:03/48
  12. Verify which device is the Designated Forwarder (DF) for broadcast, unknown, and multicast (BUM) traffic coming from the VTEP tunnel.
    Note

    Because the DF IP address is listed as 192.168.1.2, Leaf 2 is the DF.

    user@leaf-1> show evpn instance esi 00:00:00:00:00:00:51:00:00:01 designated-forwarder

Configuring a VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches

The second VLAN-aware centrally-routed bridging overlay model uses virtual switches, which enable you to configure multiple switching instances, each of which supports up to 4094 VLANs.

The configuration method for VLANs (at the leaf devices) and IRB interfaces (at the spine devices) is similar to the default instance method for VLAN-aware centrally-routed bridging overlays. The main difference is that these elements are now configured inside virtual switching instances, as shown in Figure 6.

Figure 6: VLAN-Aware Centrally-Routed Bridging Overlay — Virtual Switch Instance

When you implement this style of overlay on a spine device, you configure virtual gateways, virtual MAC addresses, and a virtual switch instance with the loopback interface as the VTEP, VXLAN encapsulation, VLAN to VNI mapping, and IRB interfaces (to provide routing between VLANs).

To implement this overlay style on a leaf device, you must use one of the QFX10000 line of switches. Preparing a leaf device to participate in this overlay type requires end system-facing elements (ESI, flexible VLAN tagging, extended VLAN bridge encapsulation, LACP settings, and VLAN IDs) and a virtual switch configuration (setting the loopback interface as the VTEP, configuring route distinguishers and targets, EVPN/VXLAN, and VLAN to VNI mapping).

For an overview of VLAN-aware centrally-routed bridging overlays, see the Centrally-Routed Bridging Overlay section in Data Center Fabric Blueprint Architecture Components.


The following sections provide the detailed steps of how to configure and verify the VLAN-aware centrally-routed bridging overlay with virtual switches:

Configuring the VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches on a Spine Device

To configure a VLAN-aware style of centrally-routed bridging overlay on a spine device, perform the following:

Note

The following example shows the configuration for Spine 1, as shown in Figure 7.

Figure 7: VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches – Spine Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on spine devices, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your spine devices, see Configuring IBGP for the Overlay.
  3. Configure a virtual switch instance for the VLAN-aware method. Include VTEP information, VXLAN encapsulation, VLAN to VNI mapping, IRB interfaces, and other instance details as part of the configuration.

    Spine 1:
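
    An illustrative sketch of the virtual switch instance on Spine 1 follows; the route distinguisher and route target values are assumptions.

    set routing-instances VS1 instance-type virtual-switch
    set routing-instances VS1 vtep-source-interface lo0.0
    set routing-instances VS1 route-distinguisher 192.168.0.1:900
    set routing-instances VS1 vrf-target target:64512:900
    set routing-instances VS1 protocols evpn encapsulation vxlan
    set routing-instances VS1 protocols evpn extended-vni-list 90000
    set routing-instances VS1 protocols evpn extended-vni-list 100000
    # VLAN-to-VNI mapping with IRB interfaces for central routing
    set routing-instances VS1 vlans VNI_90000 vlan-id 900
    set routing-instances VS1 vlans VNI_90000 l3-interface irb.900
    set routing-instances VS1 vlans VNI_90000 vxlan vni 90000
    set routing-instances VS1 vlans VNI_100000 vlan-id 1000
    set routing-instances VS1 vlans VNI_100000 l3-interface irb.1000
    set routing-instances VS1 vlans VNI_100000 vxlan vni 100000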

  4. Configure spine devices for the VLAN-aware method. Include settings for the IPv4 and IPv6 virtual gateways and virtual MAC addresses. This example shows the configuration for Spine 1.

    Spine 1:
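
    An illustrative sketch for the two IRB interfaces follows; the virtual gateway MAC addresses are the gateway MACs referenced in the verification steps, while the IP and IPv6 prefixes are assumptions.

    set interfaces irb unit 900 family inet address 10.9.0.1/24 virtual-gateway-address 10.9.0.254
    set interfaces irb unit 900 family inet6 address 2001:db8:9::1/64 virtual-gateway-address 2001:db8:9::254
    set interfaces irb unit 900 virtual-gateway-v4-mac 00:00:5e:90:00:00
    set interfaces irb unit 1000 family inet address 10.10.0.1/24 virtual-gateway-address 10.10.0.254
    set interfaces irb unit 1000 family inet6 address 2001:db8:10::1/64 virtual-gateway-address 2001:db8:10::254
    set interfaces irb unit 1000 virtual-gateway-v4-mac 00:00:5e:a0:00:00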

Verifying the VLAN-Aware Model for a Centrally-Routed Bridging Overlay with Virtual Switches on a Spine Device

To verify this style of overlay on a spine device, perform the following:

  1. Verify the IRB interfaces for VNIs 90000 and 100000 are operational for both IPv4 and IPv6.
    user@spine-1> show interfaces terse irb | find irb\.900
  2. Verify switching details about the EVPN routing instance. This output includes information about the route distinguisher (192.168.1.10:900), VXLAN encapsulation, ESI (00:00:00:00:00:01:00:00:00:02), verification of the VXLAN tunnels for VLANs 900 and 1000, EVPN neighbors (Spine 2 - 4, and Leaf 10 - 12), and the source VTEP IP address (192.168.0.1).
    user@spine-1> show evpn instance VS1 extensive
  3. Verify the MAC address table on the spine device.
    Note
    • 00:00:5e:90:00:00 and 00:00:5e:a0:00:00 are the IP subnet gateways on the spine device.

    • 02:0c:10:09:02:01 and 02:0c:10:08:02:01 are end systems connected through the leaf device.

    user@spine-1> show ethernet-switching table instance VS1
  4. Verify the end system MAC address is reachable from all three leaf devices.
    user@spine-1> show ethernet-switching vxlan-tunnel-end-point esi | find esi.2467
  5. Verify the end system is reachable through the forwarding table.
    user@spine-1> show route forwarding-table table VS1 destination 02:0c:10:09:02:01/48 extensive
  6. Verify end system information (MAC address, IP address, etc.) has been added to the IPv4 ARP table and IPv6 neighbor table.
    user@spine-1> show arp no-resolve expiration-time | match "irb.900|irb.1000"
    user@spine-1> show ipv6 neighbors | match "irb.900|irb.1000"
  7. Verify that the EVPN database contains the MAC address (02:0c:10:08:02:01) and ARP information learned from an end system connected to the leaf device.
    user@spine-1> show evpn database mac-address 02:0c:10:08:02:01 extensive

Configuring the VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches on a Leaf Device

To configure a VLAN-aware centrally-routed bridging overlay in a virtual switch on a leaf device, perform the following:

Note

The following example shows the configuration for Leaf 10, as shown in Figure 8.

Figure 8: VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches – Leaf Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on leaf devices, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your leaf devices, see Configuring IBGP for the Overlay.
  3. Configure a virtual switch instance VS1 to enable EVPN/VXLAN and map VLANs 900 and 1000 to VNIs 90000 and 100000.

    Leaf 10:
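
    An illustrative sketch for Leaf 10 follows; the route distinguisher matches the 192.168.1.10:900 value shown in the verification output, while the route target is an assumption. The ae12 logical interfaces referenced here are configured in the next step.

    set routing-instances VS1 instance-type virtual-switch
    set routing-instances VS1 vtep-source-interface lo0.0
    set routing-instances VS1 interface ae12.900
    set routing-instances VS1 interface ae12.1000
    set routing-instances VS1 route-distinguisher 192.168.1.10:900
    set routing-instances VS1 vrf-target target:64512:900
    set routing-instances VS1 protocols evpn encapsulation vxlan
    set routing-instances VS1 protocols evpn extended-vni-list 90000
    set routing-instances VS1 protocols evpn extended-vni-list 100000
    # Map VLANs 900 and 1000 to VNIs 90000 and 100000
    set routing-instances VS1 vlans VNI_90000 vlan-id 900
    set routing-instances VS1 vlans VNI_90000 interface ae12.900
    set routing-instances VS1 vlans VNI_90000 vxlan vni 90000
    set routing-instances VS1 vlans VNI_100000 vlan-id 1000
    set routing-instances VS1 vlans VNI_100000 interface ae12.1000
    set routing-instances VS1 vlans VNI_100000 vxlan vni 100000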

  4. Configure the leaf device to communicate with the end system. In this example, configure Leaf 10 with LACP options, an all active ESI, and VLANs 900 and 1000 (which are reserved in this example for the VLAN-aware virtual switch method). An illustration of the topology is shown in Figure 9.
    Figure 9: ESI Topology for Leaf 10, Leaf 11, and Leaf 12

    Leaf 10:
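
    An illustrative sketch follows; the ESI matches the value shown in the verification output, while the member interface and LACP system identifier are assumptions that must match on Leaf 10, Leaf 11, and Leaf 12.

    # Member link and end-system-facing aggregated Ethernet interface (illustrative)
    set interfaces et-0/0/20 ether-options 802.3ad ae12
    set interfaces ae12 flexible-vlan-tagging
    set interfaces ae12 encapsulation extended-vlan-bridge
    set interfaces ae12 esi 00:00:00:00:00:01:00:00:00:02
    set interfaces ae12 esi all-active
    set interfaces ae12 aggregated-ether-options lacp active
    set interfaces ae12 aggregated-ether-options lacp system-id 00:01:00:00:00:02
    # Logical units for VLANs 900 and 1000
    set interfaces ae12 unit 900 vlan-id 900
    set interfaces ae12 unit 1000 vlan-id 1000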

Verifying the VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches on a Leaf Device

To verify this style of overlay on a leaf device, perform the following:

  1. Verify that the aggregated Ethernet interface is operational on the leaf device.
    user@leaf-10> show interfaces terse ae12
  2. Verify switching details about the EVPN routing instance. This output includes information about the route distinguisher (192.168.1.10:900), VXLAN encapsulation, ESI (00:00:00:00:00:01:00:00:00:02), verification of the VXLAN tunnels for VLANs 900 and 1000, EVPN neighbors (Spine 1 - 4, and Leaf 11 and 12), and the source VTEP IP address (192.168.1.10).
    user@leaf-10> show evpn instance VS1 extensive
  3. View the MAC address table on the leaf device to confirm that spine device and end system MAC addresses appear in the table.
    Note
    • 00:00:5e:90:00:00 and 00:00:5e:a0:00:00 are the IP subnet gateways on the spine device.

    • 02:0c:10:09:02:01 and 02:0c:10:08:02:01 are end systems connected through the leaf device.

    user@leaf-10> show ethernet-switching table instance VS1
  4. Verify that the IP subnet gateway ESIs discovered in Step 3 (esi.2144 for VNI 90000 and esi.2139 for VNI 100000) are reachable from all four spine devices.
    user@leaf-10> show ethernet-switching vxlan-tunnel-end-point esi | find esi.2144
    user@leaf-10> show ethernet-switching vxlan-tunnel-end-point esi | find esi.2139
  5. Verify the IP subnet gateway on the spine device (00:00:5e:a0:00:00) is reachable through the forwarding table.
    user@leaf-10> show route forwarding-table table VS1 destination 00:00:5e:a0:00:00/48 extensive

Centrally-Routed Bridging Overlay — Release History

Table 1 provides a history of all of the features in this section and their support within this reference design.

Table 1: Centrally-Routed Bridging Overlay in the Data Center Fabric Reference Design – Release History

Release 17.3R3-S2: Adds support for Contrail Enterprise Multicloud, where you can configure centrally-routed bridging overlays from the Contrail Command GUI.

Release 17.3R1-S1: All features documented in this section are supported on all devices within the reference design running Junos OS Release 17.3R1-S1 or later.
