Centrally-Routed Bridging Overlay Design and Implementation

A centrally-routed bridging (CRB) overlay performs routing at a central location in the EVPN network, as shown in Figure 1. In this example, IRB interfaces are configured in the overlay at each spine device to route traffic between the VLANs that originate at the leaf devices and end systems. For an overview of CRB overlays, see the Centrally-Routed Bridging Overlay section in Data Center Fabric Blueprint Architecture Components.

Figure 1: CRB Overlay

The following sections provide detailed steps for implementing a CRB overlay:

Configuring a VLAN-Aware CRB Overlay in the Default Instance

The VLAN-aware CRB overlay is a basic overlay supported on all platforms included in this reference design. It uses the simplest VLAN-aware method: a single default switching instance that supports up to 4094 VLANs.

As shown in Figure 2, you configure VLANs at the leaf devices, and IRB interfaces for routing at the spine devices. Such configuration is placed in the default switching instance at the [edit vlans], [edit interfaces], [edit protocols evpn], and [edit switch-options] hierarchy levels. Routing instances are not required for this overlay style, but can be implemented as an option depending on the needs of your network.

Figure 2: VLAN-Aware CRB Overlay

When you implement this style of overlay on a spine device, you:

  • Configure IRB interfaces to route traffic between Ethernet virtual network instances.

  • Set virtual gateway addresses.

  • Add VXLAN features to optimize traffic paths.

  • Configure EVPN with VXLAN encapsulation in the default switching instance or in a routing instance.

  • Set the loopback interface as the VTEP source interface.

  • Configure route distinguishers and route targets to direct traffic to peers.

  • Map VLANs to VNIs.

When you implement this style of overlay on a leaf device, you:

  • Configure Ethernet Segment Identifier (ESI) settings.

  • Enable EVPN with VXLAN encapsulation in the default switching instance.

  • Establish route targets and route distinguishers.

  • Map VLANs to VNIs.

For an overview of VLAN-aware CRB overlays, see the Centrally-Routed Bridging Overlay section in Data Center Fabric Blueprint Architecture Components.

If you need to implement more than 4094 VLANs, you can use a CRB overlay with virtual switches (available on switches in the QFX10000 line) or MAC-VRF instances. See Configuring a VLAN-Aware CRB Overlay with Virtual Switches or MAC-VRF Instances. With MAC-VRF instances, you expand your options either to isolate traffic between tenant systems or to enable routing and forwarding between tenant systems.

The following sections provide detailed steps to configure and verify the VLAN-aware CRB overlay in the default switching instance:

Configuring a VLAN-Aware CRB Overlay in the Default Instance on the Spine Device

To configure a VLAN-aware CRB overlay in the default switching instance on a spine device, perform the following:

Note:

The following example shows the configuration for Spine 1, as shown in Figure 3.

Figure 3: VLAN-Aware CRB Overlay in the Default Instance – Spine Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on a spine device, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your spine devices, see Configure IBGP for the Overlay.
  3. Configure the loopback interface as the VTEP tunnel endpoint, and add a route distinguisher and a route target (target:64512:1111). Also, keep your configuration simple by using the auto route target option, which uses one target for both import and export.

    Spine 1:
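
    A minimal sketch of this step (the loopback address 192.168.0.1 and the route distinguisher value are assumptions for illustration):

      set switch-options vtep-source-interface lo0.0
      set switch-options route-distinguisher 192.168.0.1:1
      set switch-options vrf-target target:64512:1111
      set switch-options vrf-target auto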

  4. Configure IRB interfaces for each VNI and the corresponding virtual gateway address (which uses .254 in the fourth octet of each prefix). Include VXLAN features, such as proxy-macip-advertisement and virtual-gateway-accept-data, to improve performance and manageability.
    Note:
    • We strongly recommend you set the proxy-macip-advertisement option on the spine devices in a CRB fabric. This option enables one central gateway (the spine device) to send both MAC address and IP address information (ARP entries) that it learns locally to the other central gateways. This operation is called ARP synchronization. Setting this option ensures that ARP synchronization happens efficiently if any leaf devices in the fabric advertise only the MAC addresses in their EVPN Type 2 route advertisements for their connected hosts. This setting improves convergence times and traffic handling in the fabric.

    • To use the ping operation to verify connectivity to the virtual gateway IP address from the end system, you must configure both the virtual-gateway-accept-data statement and the preferred IPv4 and IPv6 addresses.

    Spine 1:
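
    A minimal sketch for one IRB interface (the unit number and addresses are assumptions for illustration; repeat the pattern for each VNI, and configure family inet6 the same way):

      set interfaces irb unit 100 proxy-macip-advertisement
      set interfaces irb unit 100 virtual-gateway-accept-data
      set interfaces irb unit 100 family inet address 10.1.0.1/24 preferred
      set interfaces irb unit 100 family inet address 10.1.0.1/24 virtual-gateway-address 10.1.0.254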

  5. Configure a secondary logical unit on the loopback interface for the default switching instance.

    Spine 1:
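
    A minimal sketch (the unit number and address are assumptions for illustration):

      set interfaces lo0 unit 1 family inet address 192.168.0.101/32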

  6. Configure EVPN with VXLAN encapsulation. Include the no-gateway-community option to advertise the virtual gateway and IRB MAC addresses to the EVPN peer devices so that Ethernet-only PE devices can learn these MAC addresses.

    Spine 1:
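
    A minimal sketch (advertising all configured VNIs with extended-vni-list all is an assumption for illustration):

      set protocols evpn encapsulation vxlan
      set protocols evpn default-gateway no-gateway-community
      set protocols evpn extended-vni-list all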

  7. Configure mapping between VLANs and VXLAN VNIs.

    Spine 1:
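
    A minimal sketch for two of the VLANs (the VLAN names and IDs are assumptions for illustration; repeat for VNI_30000 and VNI_40000):

      set vlans VNI_10000 vlan-id 100
      set vlans VNI_10000 l3-interface irb.100
      set vlans VNI_10000 vxlan vni 10000
      set vlans VNI_20000 vlan-id 200
      set vlans VNI_20000 l3-interface irb.200
      set vlans VNI_20000 vxlan vni 20000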

  8. Configure a routing instance named VRF 1, and map IRB interfaces irb.100 (VNI 10000) and irb.200 (VNI 20000) to this instance.
    Note:

    Because the irb.300 (VNI 30000) and irb.400 (VNI 40000) interfaces are not configured inside a routing instance, they are part of the default switching instance for the spine devices. The end result of your configuration should match the diagram shown in Figure 3.

    Spine 1:
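
    A minimal sketch (the instance name VRF_1, the loopback unit, and the route distinguisher and route target values are assumptions for illustration):

      set routing-instances VRF_1 instance-type vrf
      set routing-instances VRF_1 interface irb.100
      set routing-instances VRF_1 interface irb.200
      set routing-instances VRF_1 interface lo0.1
      set routing-instances VRF_1 route-distinguisher 192.168.0.1:10
      set routing-instances VRF_1 vrf-target target:64512:10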

Verifying the VLAN-Aware CRB Overlay in the Default Instance for the Spine Device

Issue the following commands to verify that the overlay is working properly on your spine devices (a consolidated sketch of the corresponding CLI commands follows the list):

  1. Verify the IRB interfaces are operational for both IPv4 and IPv6.
  2. Verify that the VTEP interfaces are up.
  3. Verify the endpoint destination IP address for the VTEP interfaces. The spine devices display their VTEPs as loopback addresses in the range 192.168.0.1 through 192.168.0.4, and the leaf devices display their VTEPs as loopback addresses in the range 192.168.1.1 through 192.168.1.96.
  4. Verify that the spine device has all the routes to the leaf devices.
  5. Verify that each end system resolves the virtual gateway MAC address for a subnet using the gateway IRB address on the central gateways (spine devices).
  6. Verify the switching table for VNI 10000 to see entries for end systems and the other spine devices.
  7. Verify MAC address and ARP information learned from the leaf devices over the control plane.
  8. Verify the remote VXLAN tunnel end points.
  9. Verify that MAC addresses are learned through the VXLAN tunnel.
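
One way to run these checks is with standard Junos operational commands. The following is a sketch of likely commands, not the exact command set (step 5 is checked from the end system itself):

  show interfaces terse irb                               # Step 1: IRB interface status (IPv4 and IPv6)
  show interfaces terse vtep                              # Step 2: VTEP interface status
  show ethernet-switching vxlan-tunnel-end-point remote   # Steps 3 and 8: remote VTEP endpoint addresses
  show route table bgp.evpn.0                             # Step 4: EVPN routes to the leaf devices
  show ethernet-switching table                           # Steps 6 and 9: MAC entries, including VXLAN-learned addresses
  show evpn database                                      # Step 7: MAC and ARP information learned over the control plane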

Configuring a VLAN-Aware CRB Overlay in the Default Instance on the Leaf Device

To configure a VLAN-aware CRB overlay in the default switching instance on a leaf device, perform the following:

Note:
  • The following example shows the configuration for Leaf 1, as shown in Figure 4.

Figure 4: VLAN-Aware CRB Overlay in the Default Instance – Leaf Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on a leaf device, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your leaf device, see Configure IBGP for the Overlay.
  3. Configure the EVPN protocol with VXLAN encapsulation, and specify the VTEP source interface (in this case, the loopback interface of the leaf device).

    Leaf 1:
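
    A minimal sketch (advertising all configured VNIs with extended-vni-list all is an assumption for illustration):

      set protocols evpn encapsulation vxlan
      set protocols evpn extended-vni-list all
      set switch-options vtep-source-interface lo0.0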

  4. Define an EVPN route target and route distinguisher, and use the auto option to derive route targets automatically. Setting these parameters specifies how the routes are imported and exported. The import and export of routes from a routing or bridging table is the basis for dynamic overlays. In this case, members of the global BGP community with a route target of target:64512:1111 participate in the exchange of EVPN-VXLAN information.

    Leaf 1:
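
    A minimal sketch (the loopback address 192.168.1.1 and the route distinguisher value are assumptions for illustration):

      set switch-options route-distinguisher 192.168.1.1:1
      set switch-options vrf-target target:64512:1111
      set switch-options vrf-target auto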

  5. Configure ESI settings on all similar leaf devices. Because the end systems in this reference design are multihomed to three leaf devices per device type cluster (such as QFX5100), you must configure the same ESI identifier and LACP system identifier on all three leaf devices for each unique end system. Unlike other topologies where you would configure a different LACP system identifier per leaf device and have VXLAN select a single designated forwarder, use the same LACP system identifier so that the three leaf devices appear as a single LAG to a multihomed end system. In addition, use the same aggregated Ethernet interface number for all ports included in the ESI.

    The configuration for Leaf 1 is shown below, but you must replicate this configuration on both Leaf 2 and Leaf 3 per the topology shown in Figure 5.

    Tip:

    When you create an ESI number, always set the high-order octet to 00 to indicate that the ESI is manually created. The other nine octets can be any hexadecimal value from 00 to FF.

    Figure 5: ESI Topology for Leaf 1, Leaf 2, and Leaf 3

    Leaf 1:
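
    A minimal sketch (the ESI value, LACP system identifier, aggregated Ethernet interface number, member interface, and VLAN names are assumptions for illustration; note the high-order octet of the ESI is 00):

      set interfaces xe-0/0/10 ether-options 802.3ad ae11
      set interfaces ae11 esi 00:00:00:00:00:00:51:00:00:01
      set interfaces ae11 esi all-active
      set interfaces ae11 aggregated-ether-options lacp active
      set interfaces ae11 aggregated-ether-options lacp system-id 00:00:51:00:00:01
      set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
      set interfaces ae11 unit 0 family ethernet-switching vlan members [ VNI_10000 VNI_20000 VNI_30000 VNI_40000 ]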

  6. Configure VLANs and map them to VNIs. This step enables the VLANs to participate in VNIs across the EVPN-VXLAN domain.

    Leaf 1:
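
    A minimal sketch (the VLAN names and IDs are assumptions for illustration):

      set vlans VNI_10000 vlan-id 100
      set vlans VNI_10000 vxlan vni 10000
      set vlans VNI_20000 vlan-id 200
      set vlans VNI_20000 vxlan vni 20000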

Verifying the VLAN-Aware CRB Overlay in the Default Instance for the Leaf Device

Issue the following commands to verify that the overlay is working properly on your leaf devices (a consolidated sketch of the corresponding CLI commands follows the list):

  1. Verify the interfaces are operational.
  2. Verify that the EVPN routes are being learned through the overlay.
    Note:
    • Only selected excerpts of this output are displayed.

    • The format of the EVPN routes is EVPN-route-type:route-distinguisher:vni:mac-address.

  3. Verify on Leaf 1 and Leaf 3 that the Ethernet switching table has installed both the local MAC addresses and the remote MAC addresses learned through the overlay.
    Note:

    To identify end systems learned remotely from the EVPN overlay, look for the MAC address, ESI logical interface, and ESI number. For example, Leaf 1 learns about an end system with the MAC address 02:0c:10:03:02:02 through esi.1885. This end system has an ESI number of 00:00:00:00:00:00:51:10:00:01, which matches the ESI number configured for Leaf 4, 5, and 6 (QFX5110 switches), so we know that this end system is multihomed to these three leaf devices.

  4. Verify on Leaf 1 that the virtual gateway ESI (esi.1679) is reachable by all the spine devices.
  5. Verify the remote EVPN routes coming from VNI 10000 and MAC address 02:0c:10:01:02:02. In this case, they are coming from Leaf 4 (192.168.1.4) by way of Spine 1 (192.168.0.1).
    Note:

    The format of the EVPN routes is EVPN-route-type:route-distinguisher:vni:mac-address.

  6. Verify the source and destination address of each VTEP interface and view their status.
    Note:

    There are 96 leaf devices and four spine devices, so there are 100 VTEP interfaces in this reference design: one VTEP interface per device.

  7. Verify that each VNI maps to the associated VXLAN tunnel.
  8. Verify that MAC addresses are learned through the VXLAN tunnels.
  9. Verify multihoming information of the gateway and the aggregated Ethernet interfaces.
  10. Verify that the VXLAN tunnel from one leaf to another leaf is load balanced with equal cost multipathing (ECMP) over the underlay.
  11. Verify that remote MAC addresses are reachable through ECMP.
    Note:

    Though the MAC address is reachable over multiple VTEP interfaces, QFX5100, QFX5110, QFX5120-32C, and QFX5200 switches do not support ECMP across the overlay because of a merchant ASIC limitation. Only the QFX10000 line of switches contain a custom Juniper Networks ASIC that supports ECMP across both the overlay and the underlay.

  12. Verify which device is the Designated Forwarder (DF) for broadcast, unknown, and multicast (BUM) traffic coming from the VTEP tunnel.
    Note:

    Because the DF IP address is listed as 192.168.1.2, Leaf 2 is the DF.
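
One way to run these checks is with standard Junos operational commands. The following is a sketch of likely commands, not the exact command set (command options such as VNI or MAC address filters depend on your configuration):

  show interfaces terse                                   # Step 1: interface status
  show route table bgp.evpn.0                             # Steps 2 and 5: EVPN routes learned through the overlay
  show ethernet-switching table                           # Step 3: local and overlay-learned MAC addresses
  show ethernet-switching vxlan-tunnel-end-point esi      # Steps 4, 9, and 11: ESI reachability and multihoming
  show ethernet-switching vxlan-tunnel-end-point remote   # Steps 6 through 8: VTEP endpoints, VNI mappings, learned MACs
  show route forwarding-table                             # Step 10: ECMP next hops over the underlay
  show evpn instance extensive                            # Step 12: designated forwarder (DF) election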

Configuring a VLAN-Aware CRB Overlay with Virtual Switches or MAC-VRF Instances

You can configure a VLAN-aware CRB overlay model using virtual switches or MAC-VRF instances. With either of these models, you can configure multiple switching instances, and each instance supports up to 4094 VLANs.

The configuration method for VLANs (at the leaf devices) and IRB interfaces (at the spine devices) is similar to the default instance method for VLAN-aware CRB overlays. The main difference is that you configure certain elements inside the virtual switching instances or MAC-VRF instances. See Figure 6.

Figure 6: VLAN-Aware CRB Overlay — Virtual Switch Instance or MAC-VRF Instance

When you implement this style of overlay on a spine device, you:

  • Configure a virtual switch or MAC-VRF instance with:

    • The loopback interface as the VTEP source interface.

    • Route distinguishers and route targets.

    • EVPN with VXLAN encapsulation.

    • VLAN to VNI mappings and Layer 3 IRB interface associations.

  • Configure virtual gateways, virtual MAC addresses, and corresponding IRB interfaces (to provide routing between VLANs).

To implement this overlay style on a leaf device:

  • Configure a virtual switch or a MAC-VRF instance with:

    • The loopback interface as the VTEP source interface.

    • Route distinguishers and route targets.

    • EVPN with VXLAN encapsulation.

    • VLAN to VNI mappings.

  • Set the following end system-facing elements:

    • An Ethernet segment ID (ESI).

    • Flexible VLAN tagging and extended VLAN bridge encapsulation.

    • LACP settings.

    • VLAN IDs.

For an overview of VLAN-aware CRB overlays, see the Centrally-Routed Bridging Overlay section in Data Center Fabric Blueprint Architecture Components.

For information on MAC-VRF instances, see MAC-VRF Instances for Multitenancy in Network Virtualization Overlays and MAC-VRF Routing Instance Type Overview.

The following sections provide detailed steps to configure and verify the VLAN-aware CRB overlay with virtual switches or MAC-VRF instances.

Configuring the VLAN-Aware CRB Overlay with Virtual Switches or MAC-VRF Instances on a Spine Device

To configure a VLAN-aware style of CRB overlay on a spine device, perform the following:

Note:

The following example shows the configuration for Spine 1, as shown in Figure 7.

Figure 7: VLAN-Aware CRB Overlay with Virtual Switches or a MAC-VRF Instance – Spine Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on spine devices, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your spine devices, see Configure IBGP for the Overlay.
  3. Configure a virtual switch instance (VS1) or a MAC-VRF instance (MAC-VRF-1) for a VLAN-aware service. With the VLAN-aware service type, you can configure the instance with one or more VLANs. Include VTEP information, VXLAN encapsulation, VLAN to VNI mapping, associated IRB interfaces, and other instance details (such as a route distinguisher and a route target) as part of the configuration.

    For a virtual switch instance, use instance-type virtual-switch. Using the VLAN-aware model, configure VLANs VNI_90000 and VNI_100000 in the virtual switch instance with the associated IRB interfaces.

    Spine 1 (Virtual Switch Instance):
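
    A minimal sketch (the route distinguisher and route target values, VLAN IDs, and IRB unit numbers are assumptions for illustration):

      set routing-instances VS1 instance-type virtual-switch
      set routing-instances VS1 vtep-source-interface lo0.0
      set routing-instances VS1 route-distinguisher 192.168.0.1:900
      set routing-instances VS1 vrf-target target:64512:900
      set routing-instances VS1 protocols evpn encapsulation vxlan
      set routing-instances VS1 protocols evpn extended-vni-list all
      set routing-instances VS1 vlans VNI_90000 vlan-id 900
      set routing-instances VS1 vlans VNI_90000 l3-interface irb.900
      set routing-instances VS1 vlans VNI_90000 vxlan vni 90000
      set routing-instances VS1 vlans VNI_100000 vlan-id 1000
      set routing-instances VS1 vlans VNI_100000 l3-interface irb.1000
      set routing-instances VS1 vlans VNI_100000 vxlan vni 100000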

    With MAC-VRF instances, use instance-type mac-vrf. You also configure the service type when you create the MAC-VRF instance. Here we configure service-type vlan-aware with the two VLANs VNI_90000 and VNI_100000 and their associated IRB interfaces in the MAC-VRF instance.

    Spine 1 (MAC-VRF Instance):
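
    A minimal sketch (the route distinguisher and route target values, VLAN IDs, and IRB unit numbers are assumptions for illustration):

      set routing-instances MAC-VRF-1 instance-type mac-vrf
      set routing-instances MAC-VRF-1 service-type vlan-aware
      set routing-instances MAC-VRF-1 vtep-source-interface lo0.0
      set routing-instances MAC-VRF-1 route-distinguisher 192.168.0.1:900
      set routing-instances MAC-VRF-1 vrf-target target:64512:900
      set routing-instances MAC-VRF-1 protocols evpn encapsulation vxlan
      set routing-instances MAC-VRF-1 protocols evpn extended-vni-list all
      set routing-instances MAC-VRF-1 vlans VNI_90000 vlan-id 900
      set routing-instances MAC-VRF-1 vlans VNI_90000 l3-interface irb.900
      set routing-instances MAC-VRF-1 vlans VNI_90000 vxlan vni 90000
      set routing-instances MAC-VRF-1 vlans VNI_100000 vlan-id 1000
      set routing-instances MAC-VRF-1 vlans VNI_100000 l3-interface irb.1000
      set routing-instances MAC-VRF-1 vlans VNI_100000 vxlan vni 100000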

  4. (MAC-VRF instances only) Enable shared tunnels on the device.

    A device can have problems with VTEP scaling when the configuration uses multiple MAC-VRF instances. To avoid this problem, we require that you enable the shared tunnels feature on the QFX5000 line of switches with a MAC-VRF instance configuration. When you configure the shared-tunnels option, the device minimizes the number of next-hop entries to reach remote VTEPs. The following statement globally enables shared VXLAN tunnels on the device:
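
      set forwarding-options evpn-vxlan shared-tunnels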

    This statement is optional on the QFX10000 line of switches, which can handle higher VTEP scaling than QFX5000 switches.

    Note:

    This setting requires you to reboot the device.

  5. Configure spine devices with one or more VLANs for the VLAN-aware method. Include settings for the IPv4 and IPv6 virtual gateways and virtual MAC addresses. This example shows the configuration for Spine 1 with IRB interfaces and virtual gateways for VLANs VNI_90000 and VNI_100000.

    Spine 1:
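
    A minimal sketch for the two IRB interfaces (the IPv4 addresses are assumptions for illustration; the virtual MAC addresses match the gateway MAC addresses shown in the verification section, and IPv6 settings follow the same pattern):

      set interfaces irb unit 900 family inet address 10.9.0.1/24 virtual-gateway-address 10.9.0.254
      set interfaces irb unit 900 virtual-gateway-v4-mac 00:00:5e:90:00:00
      set interfaces irb unit 1000 family inet address 10.10.0.1/24 virtual-gateway-address 10.10.0.254
      set interfaces irb unit 1000 virtual-gateway-v4-mac 00:00:5e:a0:00:00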

Verifying the VLAN-Aware Model for a CRB Overlay with Virtual Switches or MAC-VRF Instances on a Spine Device

To verify this style of overlay on a spine device, run the commands in this section. A consolidated sketch of the corresponding CLI commands follows the numbered steps.

Most commands here show output for a virtual switch instance configuration. With a MAC-VRF instance configuration, you can alternatively use:

  • show mac-vrf forwarding commands that are aliases for the show ethernet-switching commands in this section.

  • The show mac-vrf routing database command, which is an alias for the show evpn database command in this section.

  • The show mac-vrf routing instance command, which is an alias for the show evpn instance command in this section.

See MAC-VRF Routing Instance Type Overview for tables that map show mac-vrf forwarding commands to the corresponding show ethernet-switching commands, and show mac-vrf routing command aliases to show evpn commands.

Otherwise, you can use the commands in this section for either virtual switch instances or MAC-VRF instances.

The output with a MAC-VRF instance configuration displays information for MAC-VRF routing instances similar to what this section shows for virtual switch instances. One main difference you might see is in the output with MAC-VRF instances on devices where you enable the shared tunnels feature. With shared tunnels enabled, you see VTEP interfaces in the following format:
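
vtep-index.shared-tunnel-unit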

where:

  • index is the index associated with the MAC-VRF routing instance.

  • shared-tunnel-unit is the unit number associated with the shared tunnel remote VTEP logical interface.

For example, if a device has a MAC-VRF instance with index 26 and the instance connects to two remote VTEPs, the shared tunnel VTEP logical interfaces might look like this:
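
vtep-26.32771
vtep-26.32772

(The index 26 comes from the example above; the shared tunnel unit numbers shown here are illustrative.)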

  1. Verify the IRB interfaces for VNIs 90000 and 100000 are operational for both IPv4 and IPv6.
  2. (MAC-VRF instances only) Verify the VLANs you configured as part of the MAC-VRF instance.
  3. Verify switching details about the EVPN routing instance. This output includes information about the route distinguisher (192.168.1.10:900), VXLAN encapsulation, ESI (00:00:00:00:00:01:00:00:00:02), verification of the VXLAN tunnels for VLANs 900 and 1000, EVPN neighbors (Spine 2 - 4, and Leaf 10 - 12), and the source VTEP IP address (192.168.0.1).
  4. Verify the MAC address table on the leaf device.
    Note:
    • 00:00:5e:90:00:00 and 00:00:5e:a0:00:00 are the IP subnet gateways on the spine device.

    • 02:0c:10:09:02:01 and 02:0c:10:08:02:01 are end systems connected through the leaf device.

  5. Verify the end system MAC address is reachable from all three leaf devices.
  6. Verify the end system is reachable through the forwarding table.
  7. Verify end system information (MAC address, IP address, etc.) has been added to the IPv4 ARP table and IPv6 neighbor table.
  8. Verify that the EVPN database contains the MAC address (02:0c:10:08:02:01) and ARP information learned from an end system connected to the leaf device.
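
One way to run these checks from the Junos CLI is sketched below (a sketch of likely commands; with a MAC-VRF configuration, you can substitute the show mac-vrf aliases described above):

  show interfaces terse irb                             # Step 1: IRB status for VNIs 90000 and 100000
  show vlans                                            # Step 2: VLANs configured in the instance
  show evpn instance extensive                          # Step 3: route distinguisher, ESI, neighbors, source VTEP
  show ethernet-switching table                         # Step 4: MAC address table
  show ethernet-switching vxlan-tunnel-end-point esi    # Step 5: end system reachability through the ESI
  show route forwarding-table                           # Step 6: forwarding-table entry for the end system
  show arp no-resolve                                   # Step 7: IPv4 ARP table
  show ipv6 neighbors                                   # Step 7: IPv6 neighbor table
  show evpn database                                    # Step 8: MAC and ARP entries in the EVPN database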

Configuring the VLAN-Aware CRB Overlay with Virtual Switches or MAC-VRF Instances on a Leaf Device

To configure a VLAN-aware CRB overlay in a virtual switch or a MAC-VRF instance on a leaf device, perform the following:

Note:

The following example shows the configuration for Leaf 10, as shown in Figure 8.

Figure 8: VLAN-Aware CRB Overlay with Virtual Switches or MAC-VRF Instances – Leaf Device
  1. Ensure the IP fabric underlay is in place. To configure an IP fabric on leaf devices, see IP Fabric Underlay Network Design and Implementation.
  2. Confirm that your IBGP overlay is up and running. To configure an IBGP overlay on your leaf devices, see Configure IBGP for the Overlay.
  3. Configure a virtual switch instance (VS1) or a MAC-VRF instance (MAC-VRF-1) to enable EVPN-VXLAN. Also, map VLANs 900 and 1000 to VNIs 90000 and 100000 in the instance.

    For a virtual switch instance, use instance-type virtual-switch.

    Leaf 10 (Virtual Switch Instance):
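
    A minimal sketch (the route target value is an assumption for illustration; the route distinguisher and ae12 logical interfaces come from the end system-facing configuration in step 5):

      set routing-instances VS1 instance-type virtual-switch
      set routing-instances VS1 vtep-source-interface lo0.0
      set routing-instances VS1 interface ae12.900
      set routing-instances VS1 interface ae12.1000
      set routing-instances VS1 route-distinguisher 192.168.1.10:900
      set routing-instances VS1 vrf-target target:64512:900
      set routing-instances VS1 protocols evpn encapsulation vxlan
      set routing-instances VS1 protocols evpn extended-vni-list all
      set routing-instances VS1 vlans VNI_90000 vlan-id 900
      set routing-instances VS1 vlans VNI_90000 interface ae12.900
      set routing-instances VS1 vlans VNI_90000 vxlan vni 90000
      set routing-instances VS1 vlans VNI_100000 vlan-id 1000
      set routing-instances VS1 vlans VNI_100000 interface ae12.1000
      set routing-instances VS1 vlans VNI_100000 vxlan vni 100000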

    With MAC-VRF instances, use instance-type mac-vrf. You also configure the service type when you create the MAC-VRF instance. Here we configure service-type vlan-aware with the two VLANs VNI_90000 and VNI_100000, and their VNI mappings.

    Leaf 10 (MAC-VRF Instance):
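
    A minimal sketch (the route target value is an assumption for illustration; the route distinguisher and ae12 logical interfaces come from the end system-facing configuration in step 5):

      set routing-instances MAC-VRF-1 instance-type mac-vrf
      set routing-instances MAC-VRF-1 service-type vlan-aware
      set routing-instances MAC-VRF-1 vtep-source-interface lo0.0
      set routing-instances MAC-VRF-1 interface ae12.900
      set routing-instances MAC-VRF-1 interface ae12.1000
      set routing-instances MAC-VRF-1 route-distinguisher 192.168.1.10:900
      set routing-instances MAC-VRF-1 vrf-target target:64512:900
      set routing-instances MAC-VRF-1 protocols evpn encapsulation vxlan
      set routing-instances MAC-VRF-1 protocols evpn extended-vni-list all
      set routing-instances MAC-VRF-1 vlans VNI_90000 vlan-id 900
      set routing-instances MAC-VRF-1 vlans VNI_90000 interface ae12.900
      set routing-instances MAC-VRF-1 vlans VNI_90000 vxlan vni 90000
      set routing-instances MAC-VRF-1 vlans VNI_100000 vlan-id 1000
      set routing-instances MAC-VRF-1 vlans VNI_100000 interface ae12.1000
      set routing-instances MAC-VRF-1 vlans VNI_100000 vxlan vni 100000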

  4. (MAC-VRF instances only) Enable shared tunnels on the device.

    A device can have problems with VTEP scaling when the configuration uses multiple MAC-VRF instances. To avoid this problem, we require that you enable the shared tunnels feature on the QFX5000 line of switches with a MAC-VRF instance configuration. When you configure the shared-tunnels option, the device minimizes the number of next-hop entries to reach remote VTEPs. The following statement globally enables shared VXLAN tunnels on the device:
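
      set forwarding-options evpn-vxlan shared-tunnels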

    This statement is optional on the QFX10000 line of switches, which can handle higher VTEP scaling than QFX5000 switches.

    Note:

    This setting requires you to reboot the device.

  5. Configure the leaf device to communicate with the end system. In this example, configure an aggregated Ethernet interface (ae12, with two member interfaces) on Leaf 10. With the interface definition, include LACP options, an ESI in all-active mode, and VLANs 900 and 1000 (which this example uses for the VLAN-aware service type). Figure 9 illustrates the topology.
    Figure 9: ESI Topology for Leaf 10, Leaf 11, and Leaf 12

    Leaf 10:
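
    A minimal sketch in the service provider style (the member interfaces and LACP system identifier are assumptions for illustration; the ESI value matches the one shown in the verification section):

      set interfaces xe-0/0/10 ether-options 802.3ad ae12
      set interfaces xe-0/0/11 ether-options 802.3ad ae12
      set interfaces ae12 flexible-vlan-tagging
      set interfaces ae12 encapsulation extended-vlan-bridge
      set interfaces ae12 esi 00:00:00:00:00:01:00:00:00:02
      set interfaces ae12 esi all-active
      set interfaces ae12 aggregated-ether-options lacp active
      set interfaces ae12 aggregated-ether-options lacp system-id 00:00:00:01:00:02
      set interfaces ae12 unit 900 vlan-id 900
      set interfaces ae12 unit 1000 vlan-id 1000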

    Note that in this example, you configure the aggregated Ethernet interface to support the service provider configuration style. See Flexible Ethernet Service Encapsulation for more information on the service provider style configuration for switch interfaces.

Verifying the VLAN-Aware CRB Overlay with Virtual Switches or MAC-VRF Instances on a Leaf Device

To verify this style of overlay on a leaf device, run the commands in this section. A consolidated sketch of the corresponding CLI commands follows the numbered steps.

Most commands here show output for a virtual switch instance configuration. With a MAC-VRF instance configuration, you can alternatively use:

  • show mac-vrf forwarding commands that are aliases for the show ethernet-switching commands in this section.

  • The show mac-vrf routing instance command, which is an alias for the show evpn instance command in this section.

See MAC-VRF Routing Instance Type Overview for tables that map show mac-vrf forwarding commands to the corresponding show ethernet-switching commands, and show mac-vrf routing command aliases to show evpn commands.

Otherwise, you can use the commands in this section for either virtual switch instances or MAC-VRF instances.

The output with a MAC-VRF instance configuration displays information for MAC-VRF routing instances similar to what this section shows for virtual switch instances. One main difference you might see is in the output with MAC-VRF instances on devices where you enable the shared tunnels feature. With shared tunnels enabled, you see VTEP interfaces in the following format:
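
vtep-index.shared-tunnel-unit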

where:

  • index is the index associated with the MAC-VRF routing instance.

  • shared-tunnel-unit is the unit number associated with the shared tunnel remote VTEP logical interface.

For example, if a device has a MAC-VRF instance with index 26 and the instance connects to two remote VTEPs, the shared tunnel VTEP logical interfaces might look like this:
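
vtep-26.32771
vtep-26.32772

(The index 26 comes from the example above; the shared tunnel unit numbers shown here are illustrative.)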

  1. Verify that the aggregated Ethernet interface is operational on the leaf device.
  2. (MAC-VRF instances only) Verify the VLANs you configured as part of the MAC-VRF instance.
  3. Verify switching details about the EVPN routing instance. This output includes information about the route distinguisher (192.168.1.10:900), VXLAN encapsulation, ESI (00:00:00:00:00:01:00:00:00:02), verification of the VXLAN tunnels for VLANs 900 and 1000, EVPN neighbors (Spine 1 - 4, and Leaf 11 and 12), and the source VTEP IP address (192.168.1.10).
  4. View the MAC address table on the leaf device to confirm that spine device and end system MAC addresses appear in the table.
    Note:
    • 00:00:5e:90:00:00 and 00:00:5e:a0:00:00 are the IP subnet gateways on the spine device.

    • 02:0c:10:09:02:01 and 02:0c:10:08:02:01 are end systems connected through the leaf device.

  5. Verify that the IP subnet gateway ESIs discovered in Step 3 (esi.2144 for VNI 90000 and esi.2139 for VNI 100000) are reachable from all four spine devices.
  6. Verify the IP subnet gateway on the spine device (00:00:5e:a0:00:00) is reachable through the forwarding table.
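
One way to run these checks from the Junos CLI is sketched below (a sketch of likely commands; with a MAC-VRF configuration, you can substitute the show mac-vrf aliases described above):

  show interfaces terse                                 # Step 1: aggregated Ethernet interface status
  show vlans                                            # Step 2: VLANs configured in the instance
  show evpn instance extensive                          # Step 3: route distinguisher, ESI, neighbors, source VTEP
  show ethernet-switching table                         # Step 4: gateway and end system MAC addresses
  show ethernet-switching vxlan-tunnel-end-point esi    # Step 5: gateway ESI reachability via the spine devices
  show route forwarding-table                           # Step 6: forwarding-table entry for the gateway MAC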

Centrally-Routed Bridging Overlay — Release History

Table 1 provides a history of all of the features in this section and their support within this reference design.

Table 1: CRB Overlay in the Data Center Fabric Reference Design – Release History

Release: 19.1R2

Description: QFX10002-60C and QFX5120-32C switches running Junos OS Release 19.1R2 and later releases in the same release train support all features documented in this section.

Release: 17.3R3-S2

Description: Adds support for Contrail Enterprise Multicloud, where you can configure CRB overlays from the Contrail Command GUI.

Release: 17.3R3-S1

Description: All devices in the reference design that support Junos OS Release 17.3R3-S1 and later releases in the same release train also support all features documented in this section.