Centrally-Routed Bridging Overlay Design and Implementation


A centrally-routed bridging overlay performs routing at a central location in the EVPN network, as shown in Figure 1. In this example, IRB interfaces are configured in the overlay at each spine device to route traffic between the VLANs that originate at the leaf devices and end systems. For an overview of centrally-routed bridging overlays, see the Centrally-Routed Bridging Overlay section in Data Center Fabric Blueprint Architecture Components.

Figure 1: Centrally-Routed Bridging Overlay

The following sections describe in detail how to implement a centrally-routed bridging overlay:

Configuring a VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance

This basic form of overlay is supported on all platforms included in this reference design. It uses the simplest VLAN-aware method to enable a single, default switching instance that supports up to 4094 VLANs.

As shown in Figure 2, you configure VLANs at the leaf devices, and IRB interfaces for routing at the spine devices. Such configuration is placed in the default switching instance at the [edit vlans], [edit interfaces], [edit protocols evpn], and [edit switch-options] hierarchy levels. Routing instances are not required for this overlay style, but can be implemented as an option depending on the needs of your network.

Figure 2: VLAN-Aware Centrally-Routed Bridging Overlay

When you implement this style of overlay on a spine device, you:

  • Configure IRB interfaces to route traffic between Ethernet virtual network instances.

  • Set virtual gateway addresses.

  • Add VXLAN features to optimize traffic paths.

  • Configure EVPN with VXLAN encapsulation in the default switching instance or in a routing instance.

  • Set the loopback interface as the VTEP source interface.

  • Configure route distinguishers and route targets to direct traffic to peers.

  • Map VLANs to VNIs.

When you implement this style of overlay on a leaf device, you:

  • Configure Ethernet Segment Identifier (ESI) settings.

  • Enable EVPN with VXLAN encapsulation in the default switching instance.

  • Establish route targets and route distinguishers.

  • Map VLANs to VNIs.

For an overview of VLAN-aware centrally-routed bridging overlays, see the Centrally-Routed Bridging Overlay section in Data Center Fabric Blueprint Architecture Components.

If you need to implement more than 4094 VLANs, you can use a centrally-routed bridging overlay with virtual switches (available on switches in the QFX10000 line) or MAC-VRF instances. See Configuring a VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches or MAC-VRF Instances. With MAC-VRF instances, you can either isolate traffic between tenant systems or enable routing and forwarding between tenant systems.

The following sections describe in detail how to configure and verify the VLAN-aware centrally-routed bridging overlay in the default switching instance:

Configuring a VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance on the Spine Device

To configure a VLAN-aware centrally-routed bridging overlay in the default switching instance on a spine device, perform the following:

Note

The following example shows the configuration for Spine 1, as shown in Figure 3.

Figure 3: VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance – Spine Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on a spine device, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your spine devices, see Configuring IBGP for the Overlay.
  3. Configure the VTEP tunnel endpoint as the loopback address, and add a route distinguisher and a route target (target:64512:1111). Also, keep your configuration simple by using the auto route target option, which uses one target for both import and export.

    Spine 1:
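    The original configuration sample is not reproduced in this version of the document. A minimal sketch of this step in Junos set-command form, assuming Spine 1 uses loopback address 192.168.0.1 (the route distinguisher value is illustrative):

```
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.168.0.1:1
set switch-options vrf-target target:64512:1111
set switch-options vrf-target auto
```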

  4. Configure IRB interfaces for each VNI and the corresponding virtual gateway address (which uses .254 in the fourth octet of each prefix). Include VXLAN features, such as proxy-macip-advertisement and virtual-gateway-accept-data, to improve performance and manageability.

    Note
    • The proxy-macip-advertisement statement allows MAC address plus IP address information (ARP entries) learned locally for a subnet to be sent by one central gateway (spine device) to the other central gateways. This is referred to as ARP synchronization. We recommend you enable this feature in a centrally-routed bridging overlay fabric when any of the leaf devices in the fabric advertise only the MAC address of the connected hosts in their EVPN Type 2 route advertisements. If all of the leaf devices in the fabric can advertise both MAC and IP addresses of hosts in Type 2 advertisements, this setting is optional. This feature improves convergence times and traffic handling in the EVPN/VXLAN network.

    • You must configure both the virtual-gateway-accept-data statement and the preferred IPv4 and IPv6 addresses to use the ping operation and verify connectivity to the virtual gateway IP address from the end system.

    Spine 1:
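    The original configuration sample is not reproduced in this version of the document. A sketch of one IRB interface for VLAN 100 (VNI 10000), using hypothetical subnets; the same pattern repeats for irb.200, irb.300, and irb.400:

```
set interfaces irb unit 100 proxy-macip-advertisement
set interfaces irb unit 100 virtual-gateway-accept-data
set interfaces irb unit 100 family inet address 10.1.0.1/24 preferred
set interfaces irb unit 100 family inet address 10.1.0.1/24 virtual-gateway-address 10.1.0.254
set interfaces irb unit 100 family inet6 address 2001:db8:10:1::1/64 preferred
set interfaces irb unit 100 family inet6 address 2001:db8:10:1::1/64 virtual-gateway-address 2001:db8:10:1::254
```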

  5. Configure a secondary logical unit on the loopback interface for the default switching instance.

    Spine 1:
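    The original configuration sample is not reproduced in this version of the document. This step amounts to adding a second logical unit on lo0; the address shown here is hypothetical:

```
set interfaces lo0 unit 1 family inet address 192.168.100.1/32
```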

  6. Configure EVPN with VXLAN encapsulation. Include the no-gateway-community option to advertise the virtual gateway and IRB MAC addresses to the EVPN peer devices so that Ethernet-only PE devices can learn these MAC addresses.

    Spine 1:
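    The original configuration sample is not reproduced in this version of the document. A sketch of this step, listing the four VNIs used in this example:

```
set protocols evpn encapsulation vxlan
set protocols evpn default-gateway no-gateway-community
set protocols evpn extended-vni-list 10000
set protocols evpn extended-vni-list 20000
set protocols evpn extended-vni-list 30000
set protocols evpn extended-vni-list 40000
```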

  7. Configure mapping between VLANs and VXLAN VNIs.

    Spine 1:
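    The original configuration sample is not reproduced in this version of the document. A sketch of the VLAN-to-VNI mapping with the spine-side IRB associations (the VLAN names are illustrative; the pattern repeats for VNIs 30000 and 40000):

```
set vlans VNI_10000 vlan-id 100
set vlans VNI_10000 l3-interface irb.100
set vlans VNI_10000 vxlan vni 10000
set vlans VNI_20000 vlan-id 200
set vlans VNI_20000 l3-interface irb.200
set vlans VNI_20000 vxlan vni 20000
```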

  8. Configure a routing instance named VRF_1, and map the IRB interfaces irb.100 (VNI 10000) and irb.200 (VNI 20000) to this instance.

    Note

    Because the irb.300 (VNI 30000) and irb.400 (VNI 40000) interfaces are not configured inside a routing instance, they are part of the default switching instance for the spine devices. The end result of your configuration should match the diagram shown in Figure 3.

    Spine 1:
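    The original configuration sample is not reproduced in this version of the document. A sketch of the routing instance; the route distinguisher and route target values are illustrative:

```
set routing-instances VRF_1 instance-type vrf
set routing-instances VRF_1 interface irb.100
set routing-instances VRF_1 interface irb.200
set routing-instances VRF_1 interface lo0.1
set routing-instances VRF_1 route-distinguisher 192.168.0.1:10
set routing-instances VRF_1 vrf-target target:64512:10
```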

Verifying the VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance for the Spine Device

Issue the following commands to verify that the overlay is working properly on your spine devices:

  1. Verify the IRB interfaces are operational for both IPv4 and IPv6.
    user@spine-1> show interfaces terse irb
  2. Verify that the VTEP interfaces are up.
    user@spine-1> show interfaces terse vtep
    user@spine-1> show interfaces terse vtep | match eth-switch | count
  3. Verify the endpoint destination IP address for the VTEP interfaces. The spine devices display their VTEPs as loopback addresses in the range 192.168.0.1 through 192.168.0.4, and the leaf devices display their VTEPs as loopback addresses in the range 192.168.1.1 through 192.168.1.96.
    user@spine-1> show interfaces vtep
  4. Verify that the spine device has all the routes to the leaf devices.
    user@spine-2> show route 192.168.1.1
  5. Verify that each end system resolves the virtual gateway MAC address for a subnet using the gateway IRB address on the central gateways (spine devices).
    user@spine-1> show arp no-resolve vpn VRF_1
    user@spine-1> show ipv6 neighbors
  6. Verify the switching table for VNI 10000 to see entries for end systems and the other spine devices.
    user@spine-1> show ethernet-switching table vlan-id 100
  7. Verify MAC address and ARP information learned from the leaf devices over the control plane.
    user@spine-1> show evpn database mac-address 02:0c:10:01:02:01 extensive
  8. Verify the remote VXLAN tunnel end points.
    user@spine-1> show ethernet-switching vxlan-tunnel-end-point remote
  9. Verify that MAC addresses are learned through the VXLAN tunnel.
    user@spine-1> show ethernet-switching vxlan-tunnel-end-point remote mac-table

Configuring a VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance on the Leaf Device

To configure a VLAN-aware centrally-routed bridging overlay in the default switching instance on a leaf device, perform the following:

Note
  • The following example shows the configuration for Leaf 1, as shown in Figure 4.

Figure 4: VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance – Leaf Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on a leaf device, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your leaf device, see Configuring IBGP for the Overlay.
  3. Configure the EVPN protocol with VXLAN encapsulation, and specify the VTEP source interface (in this case, the loopback interface of the leaf device).

    Leaf 1:
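    The original configuration sample is not reproduced in this version of the document. A sketch of this step on Leaf 1 (extended-vni-list all is one option; you can instead list the individual VNIs):

```
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set switch-options vtep-source-interface lo0.0
```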

  4. Define an EVPN route target and route distinguisher, and use the auto option to derive route targets automatically.

    Setting these parameters specifies how the routes are imported and exported. The import and export of routes from a routing or bridging table is the basis for dynamic overlays. In this case, members of the global BGP community with a route target of target:64512:1111 participate in the exchange of EVPN/VXLAN information.

    Leaf 1:
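    The original configuration sample is not reproduced in this version of the document. A sketch, assuming Leaf 1 uses loopback address 192.168.1.1 (the route distinguisher value is illustrative):

```
set switch-options route-distinguisher 192.168.1.1:1
set switch-options vrf-target target:64512:1111
set switch-options vrf-target auto
```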

  5. Configure ESI settings on all similar leaf devices. Because the end systems in this reference design are multihomed to three leaf devices per device type cluster (such as QFX5100), you must configure the same ESI identifier and LACP system identifier on all three leaf devices for each unique end system. Unlike other topologies where you would configure a different LACP system identifier per leaf device and have VXLAN select a single designated forwarder, use the same LACP system identifier so that the three leaf devices appear as a single LAG to a multihomed end system. In addition, use the same aggregated Ethernet interface number for all ports included in the ESI.

    The configuration for Leaf 1 is shown below, but you must replicate this configuration on both Leaf 2 and Leaf 3 per the topology shown in Figure 5.

    Tip

    When you create an ESI number, always set the high-order octet to 00 to indicate that the ESI is manually created. The other nine octets can be any hexadecimal value from 00 to FF.

    Figure 5: ESI Topology for Leaf 1, Leaf 2, and Leaf 3

    Leaf 1:
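    The original configuration sample is not reproduced in this version of the document. A sketch of the ESI and LACP settings; the ESI value follows the 00 high-order octet convention from the tip above, while the aggregated Ethernet interface number, member interface, and LACP system identifier shown are illustrative:

```
set interfaces xe-0/0/10 ether-options 802.3ad ae11
set interfaces ae11 esi 00:00:00:00:00:00:51:00:00:01
set interfaces ae11 esi all-active
set interfaces ae11 aggregated-ether-options lacp active
set interfaces ae11 aggregated-ether-options lacp system-id 00:00:51:00:00:01
set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae11 unit 0 family ethernet-switching vlan members [ 100 200 300 400 ]
```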

  6. Configure VLANs and map them to VNIs. This step enables the VLANs to participate in VNIs across the EVPN/VXLAN domain.

    Leaf 1:
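    The original configuration sample is not reproduced in this version of the document. A sketch of the VLAN-to-VNI mapping on the leaf; note there are no l3-interface statements here, because routing happens on the spine devices:

```
set vlans VNI_10000 vlan-id 100
set vlans VNI_10000 vxlan vni 10000
set vlans VNI_20000 vlan-id 200
set vlans VNI_20000 vxlan vni 20000
set vlans VNI_30000 vlan-id 300
set vlans VNI_30000 vxlan vni 30000
set vlans VNI_40000 vlan-id 400
set vlans VNI_40000 vxlan vni 40000
```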

Verifying the VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance for the Leaf Device

Issue the following commands to verify that the overlay is working properly on your leaf devices:

  1. Verify the interfaces are operational.
    user@leaf-1> show interfaces terse | match ae.*
    user@leaf-1> show lacp interfaces
  2. Verify that the EVPN routes are being learned through the overlay.

    Note
    • Only selected excerpts of this output are displayed.

    • The format of the EVPN routes is EVPN-route-type:route-distinguisher:vni:mac-address.

    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 10000
  3. Verify on Leaf 1 and Leaf 3 that the Ethernet switching table has installed both the local MAC addresses and the remote MAC addresses learned through the overlay.

    Note

    To identify end systems learned remotely from the EVPN overlay, look for the MAC address, ESI logical interface, and ESI number. For example, Leaf 1 learns about an end system with the MAC address 02:0c:10:03:02:02 through esi.1885. This end system has an ESI number of 00:00:00:00:00:00:51:10:00:01, which matches the ESI number configured for Leaf 4, 5, and 6 (QFX5110 switches), so we know that this end system is multihomed to those three leaf devices.

    user@leaf-1> show ethernet-switching table vlan-id 300
    user@leaf-3> show ethernet-switching table vlan-id 100
  4. Verify on Leaf 1 that the virtual gateway ESI (esi.1679) is reachable by all the spine devices.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point esi | find esi.1679
  5. Verify the remote EVPN routes coming from VNI 10000 and MAC address 02:0c:10:01:02:02. In this case, they are coming from Leaf 4 (192.168.1.4) by way of Spine 1 (192.168.0.1).

    Note

    The format of the EVPN routes is EVPN-route-type:route-distinguisher:vni:mac-address.

    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 10000 evpn-mac-address 02:0c:10:01:02:02
    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 10000 evpn-mac-address 02:0c:10:01:02:02 detail
  6. Verify the source and destination address of each VTEP interface and view their status.

    Note

    There are 96 leaf devices and four spine devices, so there are 100 VTEP interfaces in this reference design: one VTEP interface per device.

    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point source
    user@leaf-1> show interfaces terse vtep
    user@leaf-1> show interfaces vtep
  7. Verify that each VNI maps to the associated VXLAN tunnel.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote
  8. Verify that MAC addresses are learned through the VXLAN tunnels.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote mac-table
  9. Verify multihoming information of the gateway and the aggregated Ethernet interfaces.
    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point esi
  10. Verify that the VXLAN tunnel from one leaf to another leaf is load balanced with equal cost multipathing (ECMP) over the underlay.
    user@leaf-1> show route forwarding-table table default-switch extensive | find vtep.32770
  11. Verify that remote MAC addresses are reachable through ECMP.
    user@leaf-1> show route forwarding-table table default-switch extensive destination 02:0c:10:01:02:03/48
    Note

    Though the MAC address is reachable over multiple VTEP interfaces, QFX5100, QFX5110, QFX5120-32C, and QFX5200 switches do not support ECMP across the overlay because of a merchant ASIC limitation. Only the QFX10000 line of switches contain a custom Juniper Networks ASIC that supports ECMP across both the overlay and the underlay.

    user@leaf-1> show ethernet-switching table vlan-id 100 | match 02:0c:10:01:02:03
    user@leaf-1> show route forwarding-table table default-switch extensive destination 02:0c:10:01:02:03/48
  12. Verify which device is the Designated Forwarder (DF) for broadcast, unknown unicast, and multicast (BUM) traffic coming from the VTEP tunnel.

    Note

    Because the DF IP address is listed as 192.168.1.2, Leaf 2 is the DF.

    user@leaf-1> show evpn instance esi 00:00:00:00:00:00:51:00:00:01 designated-forwarder

Configuring a VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches or MAC-VRF Instances

You can configure a VLAN-aware centrally-routed bridging overlay model using virtual switches or MAC-VRF instances. With either of these models, you can configure multiple switching instances where each switching instance can support up to 4094 VLANs per instance.

The configuration method for VLANs (at the leaf devices) and IRB interfaces (at the spine devices) is similar to the default instance method for VLAN-aware centrally-routed bridging overlays. The main difference is that you configure certain elements inside the virtual switching instances or MAC-VRF instances. See Figure 6.

Figure 6: VLAN-Aware Centrally-Routed Bridging Overlay — Virtual Switch Instance or MAC-VRF Instance

When you implement this style of overlay on a spine device, you:

  • Configure a virtual switch or MAC-VRF instance with:

    • The loopback interface as the VTEP source interface.

    • Route distinguishers and route targets.

    • EVPN with VXLAN encapsulation.

    • VLAN to VNI mappings and Layer 3 IRB interface associations.

  • Configure virtual gateways, virtual MAC addresses, and corresponding IRB interfaces (to provide routing between VLANs).

To implement this overlay style on a leaf device:

  • Configure a virtual switch or a MAC-VRF instance with:

    • The loopback interface as the VTEP source interface.

    • Route distinguishers and route targets.

    • EVPN with VXLAN encapsulation.

    • VLAN to VNI mappings.

  • Set the following end system-facing elements:

    • An Ethernet segment ID (ESI).

    • Flexible VLAN tagging and extended VLAN bridge encapsulation.

    • LACP settings.

    • VLAN IDs.

For an overview of VLAN-aware centrally-routed bridging overlays, see the Centrally-Routed Bridging Overlay section in Data Center Fabric Blueprint Architecture Components.

For information on MAC-VRF instances, see MAC-VRF Instances for Multitenancy in Network Virtualization Overlays and MAC-VRF Routing Instance Type Overview.

The following sections describe in detail how to configure and verify the VLAN-aware centrally-routed bridging overlay with virtual switches or MAC-VRF instances:

Configuring the VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches or MAC-VRF Instances on a Spine Device

To configure a VLAN-aware style of centrally-routed bridging overlay on a spine device, perform the following:

Note

The following example shows the configuration for Spine 1, as shown in Figure 7.

Figure 7: VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches or a MAC-VRF Instance – Spine Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on spine devices, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your spine devices, see Configuring IBGP for the Overlay.
  3. Configure a virtual switch instance (VS1) or a MAC-VRF instance (MAC-VRF-1) for a VLAN-aware service. With the VLAN-aware service type, you can configure the instance with one or more VLANs. Include VTEP information, VXLAN encapsulation, VLAN to VNI mapping, associated IRB interfaces, and other instance details (such as a route distinguisher and a route target) as part of the configuration.

    For a virtual switch instance, use instance-type virtual-switch. Using the VLAN-aware model, configure VLANs VNI_90000 and VNI_100000 in the virtual switch instance with the associated IRB interfaces.

    Spine 1 (Virtual Switch Instance):
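    The original configuration sample is not reproduced in this version of the document. A sketch of the virtual switch instance on Spine 1; the route distinguisher and route target values are illustrative:

```
set routing-instances VS1 instance-type virtual-switch
set routing-instances VS1 vtep-source-interface lo0.0
set routing-instances VS1 route-distinguisher 192.168.0.1:900
set routing-instances VS1 vrf-target target:64512:900
set routing-instances VS1 protocols evpn encapsulation vxlan
set routing-instances VS1 protocols evpn extended-vni-list all
set routing-instances VS1 vlans VNI_90000 vlan-id 900
set routing-instances VS1 vlans VNI_90000 l3-interface irb.900
set routing-instances VS1 vlans VNI_90000 vxlan vni 90000
set routing-instances VS1 vlans VNI_100000 vlan-id 1000
set routing-instances VS1 vlans VNI_100000 l3-interface irb.1000
set routing-instances VS1 vlans VNI_100000 vxlan vni 100000
```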

    With MAC-VRF instances, use instance-type mac-vrf. You also configure the service type when you create the MAC-VRF instance. Here we configure service-type vlan-aware with the two VLANs VNI_90000 and VNI_100000 and their associated IRB interfaces in the MAC-VRF instance.

    Spine 1 (MAC-VRF Instance):
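    The original configuration sample is not reproduced in this version of the document. A sketch of the equivalent MAC-VRF instance; only the instance type and service type differ from the virtual switch case (route distinguisher and route target values are illustrative):

```
set routing-instances MAC-VRF-1 instance-type mac-vrf
set routing-instances MAC-VRF-1 service-type vlan-aware
set routing-instances MAC-VRF-1 vtep-source-interface lo0.0
set routing-instances MAC-VRF-1 route-distinguisher 192.168.0.1:900
set routing-instances MAC-VRF-1 vrf-target target:64512:900
set routing-instances MAC-VRF-1 protocols evpn encapsulation vxlan
set routing-instances MAC-VRF-1 protocols evpn extended-vni-list all
set routing-instances MAC-VRF-1 vlans VNI_90000 vlan-id 900
set routing-instances MAC-VRF-1 vlans VNI_90000 l3-interface irb.900
set routing-instances MAC-VRF-1 vlans VNI_90000 vxlan vni 90000
set routing-instances MAC-VRF-1 vlans VNI_100000 vlan-id 1000
set routing-instances MAC-VRF-1 vlans VNI_100000 l3-interface irb.1000
set routing-instances MAC-VRF-1 vlans VNI_100000 vxlan vni 100000
```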

  4. (MAC-VRF instances only) Enable shared tunnels on the device.

    A device can have problems with VTEP scaling when the configuration uses multiple MAC-VRF instances. To avoid this problem, you must enable the shared tunnels feature on the QFX5000 line of switches with a MAC-VRF instance configuration. When you configure the shared-tunnels option, the device minimizes the number of next-hop entries required to reach remote VTEPs. The following statement globally enables shared VXLAN tunnels on the device:
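    The statement itself is not reproduced in this version of the document; assuming the standard Junos hierarchy for this option, it is:

```
set forwarding-options evpn-vxlan shared-tunnels
```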

    This statement is optional on the QFX10000 line of switches, which can handle higher VTEP scaling than QFX5000 switches.

    Note

    This setting requires you to reboot the device.

  5. Configure spine devices with one or more VLANs for the VLAN-aware method. Include settings for the IPv4 and IPv6 virtual gateways and virtual MAC addresses. This example shows the configuration for Spine 1 with IRB interfaces and virtual gateways for VLANs VNI_90000 and VNI_100000.

    Spine 1:
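    The original configuration sample is not reproduced in this version of the document. A sketch of the IRB interfaces and virtual gateways for VLANs 900 and 1000; the subnets are hypothetical, and the virtual gateway MAC addresses match the ones referenced in the verification steps:

```
set interfaces irb unit 900 virtual-gateway-accept-data
set interfaces irb unit 900 family inet address 10.9.0.1/24 preferred
set interfaces irb unit 900 family inet address 10.9.0.1/24 virtual-gateway-address 10.9.0.254
set interfaces irb unit 900 virtual-gateway-v4-mac 00:00:5e:90:00:00
set interfaces irb unit 1000 virtual-gateway-accept-data
set interfaces irb unit 1000 family inet address 10.10.0.1/24 preferred
set interfaces irb unit 1000 family inet address 10.10.0.1/24 virtual-gateway-address 10.10.0.254
set interfaces irb unit 1000 virtual-gateway-v4-mac 00:00:5e:a0:00:00
```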

Verifying the VLAN-Aware Model for a Centrally-Routed Bridging Overlay with Virtual Switches or MAC-VRF Instances on a Spine Device

To verify this style of overlay on a spine device, run the commands in this section.

Most commands here show output for a virtual switch instance configuration. With a MAC-VRF instance configuration, you can alternatively use:

  • show mac-vrf forwarding commands that are aliases for the show ethernet-switching commands in this section.

  • The show mac-vrf routing database command, which is an alias for the show evpn database command in this section.

  • The show mac-vrf routing instance command, which is an alias for the show evpn instance command in this section.

See MAC-VRF Routing Instance Type Overview for tables of show mac-vrf forwarding and show ethernet-switching command mappings, and show mac-vrf routing command aliases for show evpn commands.

Otherwise, you can use the commands in this section for either virtual switch instances or MAC-VRF instances.

The output with a MAC-VRF instance configuration displays similar information for MAC-VRF routing instances as this section shows for virtual switch instances. One main difference you might see is in the output with MAC-VRF instances on devices where you enable the shared tunnels feature. With shared tunnels enabled, you see VTEP interfaces in the following format:

vtep-index.shared-tunnel-unit

where:

  • index is the index associated with the MAC-VRF routing instance.

  • shared-tunnel-unit is the unit number associated with the shared tunnel remote VTEP logical interface.

For example, if a device has a MAC-VRF instance with index 26 and the instance connects to two remote VTEPs, the shared tunnel VTEP logical interfaces might look like this:

  1. Verify the IRB interfaces for VNIs 90000 and 100000 are operational for both IPv4 and IPv6.
    user@spine-1> show interfaces terse irb | find irb\.900
  2. (MAC-VRF instances only) Verify the VLANs you configured as part of the MAC-VRF instance.
    user@spine-1> show mac-vrf forwarding instance MAC-VRF-1
    user@spine-1> show vlans VNI-90000
  3. Verify switching details about the EVPN routing instance. This output includes information about the route distinguisher (192.168.1.10:900), VXLAN encapsulation, ESI (00:00:00:00:00:01:00:00:00:02), verification of the VXLAN tunnels for VLANs 900 and 1000, EVPN neighbors (Spine 2 - 4, and Leaf 10 - 12), and the source VTEP IP address (192.168.0.1).
    user@spine-1> show evpn instance VS1 extensive
  4. Verify the MAC address table on the spine device.

    Note
    • 00:00:5e:90:00:00 and 00:00:5e:a0:00:00 are the IP subnet gateways on the spine device.

    • 02:0c:10:09:02:01 and 02:0c:10:08:02:01 are end systems connected through the leaf device.

    user@spine-1> show ethernet-switching table instance VS1
  5. Verify the end system MAC address is reachable from all three leaf devices.
    user@spine-1> show ethernet-switching vxlan-tunnel-end-point esi | find esi.2467
  6. Verify the end system is reachable through the forwarding table.
    user@spine-1> show route forwarding-table table VS1 destination 02:0c:10:09:02:01/48 extensive
  7. Verify end system information (MAC address, IP address, etc.) has been added to the IPv4 ARP table and IPv6 neighbor table.
    user@spine-1> show arp no-resolve expiration-time | match "irb.900|irb.1000"
    user@spine-1> show ipv6 neighbors | match "irb.900|irb.1000"
  8. Verify that the EVPN database contains the MAC address (02:0c:10:08:02:01) and ARP information learned from an end system connected to the leaf device.
    user@spine-1> show evpn database mac-address 02:0c:10:08:02:01 extensive

Configuring the VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches or MAC-VRF Instances on a Leaf Device

To configure a VLAN-aware centrally-routed bridging overlay in a virtual switch or a MAC-VRF instance on a leaf device, perform the following:

Note

The following example shows the configuration for Leaf 10, as shown in Figure 8.

Figure 8: VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches or MAC-VRF Instances – Leaf Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on leaf devices, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your leaf devices, see Configuring IBGP for the Overlay.
  3. Configure a virtual switch instance (VS1) or a MAC-VRF instance (MAC-VRF-1) to enable EVPN/VXLAN. Also, map VLANs 900 and 1000 to VNIs 90000 and 100000 in the instance.

    For a virtual switch instance, use instance-type virtual-switch.

    Leaf 10 (Virtual Switch Instance):
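    The original configuration sample is not reproduced in this version of the document. A sketch of the virtual switch instance on Leaf 10, using the route distinguisher noted in the verification steps (192.168.1.10:900); the route target and the ae12 logical interface names are illustrative:

```
set routing-instances VS1 instance-type virtual-switch
set routing-instances VS1 vtep-source-interface lo0.0
set routing-instances VS1 route-distinguisher 192.168.1.10:900
set routing-instances VS1 vrf-target target:64512:900
set routing-instances VS1 protocols evpn encapsulation vxlan
set routing-instances VS1 protocols evpn extended-vni-list all
set routing-instances VS1 interface ae12.900
set routing-instances VS1 interface ae12.1000
set routing-instances VS1 vlans VNI_90000 vlan-id 900
set routing-instances VS1 vlans VNI_90000 interface ae12.900
set routing-instances VS1 vlans VNI_90000 vxlan vni 90000
set routing-instances VS1 vlans VNI_100000 vlan-id 1000
set routing-instances VS1 vlans VNI_100000 interface ae12.1000
set routing-instances VS1 vlans VNI_100000 vxlan vni 100000
```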

    With MAC-VRF instances, use instance-type mac-vrf. You also configure the service type when you create the MAC-VRF instance. Here we configure service-type vlan-aware with the two VLANs VNI_90000 and VNI_100000, and their VNI mappings.

    Leaf 10 (MAC-VRF Instance):
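    The original configuration sample is not reproduced in this version of the document. A sketch of the equivalent MAC-VRF instance on Leaf 10; only the instance type and service type differ from the virtual switch case (route target and ae12 logical interface names are illustrative):

```
set routing-instances MAC-VRF-1 instance-type mac-vrf
set routing-instances MAC-VRF-1 service-type vlan-aware
set routing-instances MAC-VRF-1 vtep-source-interface lo0.0
set routing-instances MAC-VRF-1 route-distinguisher 192.168.1.10:900
set routing-instances MAC-VRF-1 vrf-target target:64512:900
set routing-instances MAC-VRF-1 protocols evpn encapsulation vxlan
set routing-instances MAC-VRF-1 protocols evpn extended-vni-list all
set routing-instances MAC-VRF-1 interface ae12.900
set routing-instances MAC-VRF-1 interface ae12.1000
set routing-instances MAC-VRF-1 vlans VNI_90000 vlan-id 900
set routing-instances MAC-VRF-1 vlans VNI_90000 interface ae12.900
set routing-instances MAC-VRF-1 vlans VNI_90000 vxlan vni 90000
set routing-instances MAC-VRF-1 vlans VNI_100000 vlan-id 1000
set routing-instances MAC-VRF-1 vlans VNI_100000 interface ae12.1000
set routing-instances MAC-VRF-1 vlans VNI_100000 vxlan vni 100000
```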

  4. (MAC-VRF instances only) Enable shared tunnels on the device.

    A device can have problems with VTEP scaling when the configuration uses multiple MAC-VRF instances. To avoid this problem, you must enable the shared tunnels feature on the QFX5000 line of switches with a MAC-VRF instance configuration. When you configure the shared-tunnels option, the device minimizes the number of next-hop entries required to reach remote VTEPs. The following statement globally enables shared VXLAN tunnels on the device:
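    The statement itself is not reproduced in this version of the document; assuming the standard Junos hierarchy for this option, it is:

```
set forwarding-options evpn-vxlan shared-tunnels
```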

    This statement is optional on the QFX10000 line of switches, which can handle higher VTEP scaling than QFX5000 switches.

    Note

    This setting requires you to reboot the device.

  5. Configure the leaf device to communicate with the end system. In this example, configure an aggregated Ethernet interface on Leaf 10—in this case, ae12 with two member interfaces. With the interface definition, include LACP options, an ESI in all-active mode, and VLANs 900 and 1000 (which this example uses for the VLAN-aware service type). Figure 9 illustrates the topology.
    Figure 9: ESI Topology for Leaf 10, Leaf 11, and Leaf 12

    Leaf 10:
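    The original configuration sample is not reproduced in this version of the document. A sketch of the end-system-facing interface in the service provider style, with flexible VLAN tagging, extended VLAN bridge encapsulation, LACP, and an ESI in all-active mode; the member interfaces, ESI value, and LACP system identifier shown are illustrative:

```
set interfaces xe-0/0/12 ether-options 802.3ad ae12
set interfaces xe-0/0/13 ether-options 802.3ad ae12
set interfaces ae12 flexible-vlan-tagging
set interfaces ae12 encapsulation extended-vlan-bridge
set interfaces ae12 esi 00:00:00:00:00:01:00:00:00:05
set interfaces ae12 esi all-active
set interfaces ae12 aggregated-ether-options lacp active
set interfaces ae12 aggregated-ether-options lacp system-id 00:00:00:01:00:05
set interfaces ae12 unit 900 vlan-id 900
set interfaces ae12 unit 1000 vlan-id 1000
```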

    Note that in this example, you configure the aggregated Ethernet interface to support the service provider configuration style. See Flexible Ethernet Service Encapsulation for more information on the service provider style configuration for switch interfaces.

Verifying the VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches or MAC-VRF Instances on a Leaf Device

To verify this style of overlay on a leaf device, run the commands in this section.

Most commands here show output for a virtual switch instance configuration. With a MAC-VRF instance configuration, you can alternatively use:

  • show mac-vrf forwarding commands that are aliases for the show ethernet-switching commands in this section.

  • The show mac-vrf routing instance command, which is an alias for the show evpn instance command in this section.

See MAC-VRF Routing Instance Type Overview for tables of show mac-vrf forwarding and show ethernet-switching command mappings, and show mac-vrf routing command aliases for show evpn commands.

Otherwise, you can use the commands in this section for either virtual switch instances or MAC-VRF instances.

The output with a MAC-VRF instance configuration displays similar information for MAC-VRF routing instances as this section shows for virtual switch instances. One main difference you might see is in the output with MAC-VRF instances on devices where you enable the shared tunnels feature. With shared tunnels enabled, you see VTEP interfaces in the following format:

vtep-index.shared-tunnel-unit

where:

  • index is the index associated with the MAC-VRF routing instance.

  • shared-tunnel-unit is the unit number associated with the shared tunnel remote VTEP logical interface.

For example, if a device has a MAC-VRF instance with index 26 and the instance connects to two remote VTEPs, the shared tunnel VTEP logical interfaces might look like this:

  1. Verify that the aggregated Ethernet interface is operational on the leaf device.
    user@leaf-10> show interfaces terse ae12
  2. (MAC-VRF instances only) Verify the VLANs you configured as part of the MAC-VRF instance.
    user@leaf-10> show mac-vrf forwarding instance MAC-VRF-1
    user@leaf-10> show vlans VNI-90000
  3. Verify switching details about the EVPN routing instance. This output includes information about the route distinguisher (192.168.1.10:900), VXLAN encapsulation, ESI (00:00:00:00:00:01:00:00:00:02), verification of the VXLAN tunnels for VLANs 900 and 1000, EVPN neighbors (Spine 1 - 4, and Leaf 11 and 12), and the source VTEP IP address (192.168.1.10).
    user@leaf-10> show evpn instance VS1 extensive
  4. View the MAC address table on the leaf device to confirm that spine device and end system MAC addresses appear in the table.

    Note
    • 00:00:5e:90:00:00 and 00:00:5e:a0:00:00 are the IP subnet gateways on the spine device.

    • 02:0c:10:09:02:01 and 02:0c:10:08:02:01 are end systems connected through the leaf device.

    user@leaf-10> show ethernet-switching table instance VS1
  5. Verify that the IP subnet gateway ESIs discovered in Step 3 (esi.2144 for VNI 90000 and esi.2139 for VNI 100000) are reachable from all four spine devices.
    user@leaf-10> show ethernet-switching vxlan-tunnel-end-point esi | find esi.2144
    user@leaf-10> show ethernet-switching vxlan-tunnel-end-point esi | find esi.2139
  6. Verify the IP subnet gateway on the spine device (00:00:5e:a0:00:00) is reachable through the forwarding table.
    user@leaf-10> show route forwarding-table table VS1 destination 00:00:5e:a0:00:00/48 extensive

Centrally-Routed Bridging Overlay — Release History

Table 1 provides a history of all of the features in this section and their support within this reference design.

Table 1: Centrally-Routed Bridging Overlay in the Data Center Fabric Reference Design – Release History

Release: 19.1R2
Description: QFX10002-60C and QFX5120-32C switches running Junos OS Release 19.1R2 and later releases in the same release train support all features documented in this section.

Release: 17.3R3-S2
Description: Adds support for Contrail Enterprise Multicloud, where you can configure centrally-routed bridging overlays from the Contrail Command GUI.

Release: 17.3R3-S1
Description: All devices in the reference design that support Junos OS Release 17.3R3-S1 and later releases in the same release train also support all features documented in this section.
