Configure VXLAN Stitching for Layer 2 Data Center Interconnect

This document describes the configuration and validation steps for implementing Data Center Interconnect (DCI) using VXLAN stitching in a gateway device. The VXLAN stitching feature enables you to stitch together specific VXLAN Virtual Network Identifiers (VNIs) to provide Layer 2 stretch between DCs on a granular basis.

Juniper Networks' switching and routing devices support a number of different DCI options. For example, you can use Over the Top (OTT) DCI to extend the overlay between PODs. See OTT DCI for details. One drawback to the OTT method is that it extends all VLANs between the PODs, at either Layer 2 or Layer 3. Also, OTT DCI requires end-to-end VXLAN VNI significance. This can be an issue when you merge two DCs/PODs that don't have overlapping VLAN to VNI assignments.

In some cases you want more granular control over which VLANs are extended between PODs. The Junos VXLAN stitching feature allows you to perform DCI at the VNI level to extend Layer 2 connectivity on a per-VLAN basis. You might also need to translate VNIs to accommodate cases where the same VLAN maps to different VNIs in each POD. For example, take the case where VLAN 1 is assigned VNI 10001 in POD 1, while in POD 2 the same VLAN is assigned VNI 20002. In this case you either reconfigure one of the PODs to achieve a global (overlapping) mapping of VLANs to VNIs, or you employ translational stitching to map each local POD VNI value to the VNI used over the WAN.

Juniper Networks supports VXLAN stitching for both 3-stage and 5-stage IP fabrics. In addition, VXLAN stitching is supported for centrally-routed bridging (CRB) overlay, edge-routed bridging (ERB) overlay, and bridged overlay architectures. This use case assumes that your EVPN-VXLAN POD fabrics are already configured with leaves and spines using one or a combination of the supported architectures shown in Table 1.

To enable VXLAN stitched connectivity between the two PODs, you add a tier of WAN routers to extend the underlay. The extended underlay in turn lets you extend the overlay between the PODs. Then you configure VXLAN stitching on the gateway devices to extend the desired VLANs (now represented as VXLAN VNIs) between the PODs.

Note:

We use the term “WAN Routers” in this document. This doesn’t imply that you have an actual WAN network between the PODs. The WAN routers might be local to both PODs, as is the case in this example. You can also use VXLAN stitching over an extended WAN network when the PODs are geographically remote.

Figure 1 provides a high level diagram showing the POD/DC fabric types we validated in this reference design.

Figure 1: VXLAN Stitching Reference Architectures

In Figure 1, each WAN router connects to each gateway device in both PODs. These connections and the related BGP peer sessions extend the underlay between the two PODs. Specifically, the devices advertise the loopback addresses of the gateway devices between the PODs. This loopback reachability supports the EBGP-based peering sessions that extend the overlay between the gateway devices in both PODs.

POD 1 represents a 3-stage CRB architecture with the gateway function collapsed into the spine devices. Thus, in POD 1 the terms spine and gateway both apply. In general, we use the term gateway when describing the spine devices because the focus here is on their gateway functionality.

POD 2, in contrast, is a 5-stage ERB architecture with lean spines and discrete gateway devices. The gateway devices in POD 2 can also be called super-spine or border leaf devices. In the context of this example, they perform the VXLAN stitching functionality and so we refer to them as gateway devices.

Table 1 outlines the POD architectures we validated as part of this reference design.

Table 1: Supported POD Architectures for VXLAN Stitching

POD 1                          POD 2
Edge-Routed Bridging           Edge-Routed Bridging
Centrally-Routed Bridging      Edge-Routed Bridging
Centrally-Routed Bridging      Centrally-Routed Bridging
Bridged Overlay                Bridged Overlay
3-stage or 5-stage fabric      3-stage or 5-stage fabric

Other items to note when using VXLAN stitching include:

  • You can combine the role of spine and gateway into a collapsed design as shown for POD 1.

  • The stitched VNI can have the same value (global stitching) when the PODs have overlapping VLAN to VNI assignments, or can be translated between the two PODs. The latter capability is useful when merging PODs (DCs) that don’t have overlapping VNI to VLAN assignments.

  • We support VXLAN stitching in the default-switch EVPN instance (EVI) and in MAC-VRF routing instances.

  • We support Layer 2 stitching for unicast and BUM traffic only. With BUM traffic, the designated forwarder (DF) for the local gateway ESI LAG performs ingress replication and forwards a copy of the BUM traffic to each remote gateway. At the remote gateway devices, the DF for the remote ESI LAG performs ingress replication and sends a copy of the BUM traffic to all leaf nodes in the local POD.

  • The gateway device must be a switch in the QFX10000 line running Junos Software Release 20.4R3 or higher.

  • We recommend that you configure the IRB interfaces on the spine devices in a CRB fabric with the proxy-macip-advertisement configuration statement, as shown in the example below. This option ensures correct ARP operation over a CRB EVPN-VXLAN fabric and is part of the CRB reference architecture. See proxy-macip-advertisement for more information on this option.
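
The following is a minimal sketch of that recommendation. The IRB unit number is illustrative; apply the statement to each IRB unit that serves a stretched VLAN on the CRB spine devices:

  set interfaces irb unit 1 proxy-macip-advertisement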

Note the following about the EVPN-VXLAN fabric reference design:

  • This example assumes that the tiers of spine and leaf devices in the two PODs already exist and are up and running. As a result, this topic provides the configuration for the gateway to WAN router EBGP underlay peering, the inter-POD EBGP overlay peering, and the configuration needed for VXLAN stitching.

    For information about configuring the spine and leaf devices in the two PODs, see the following:

  • This example integrates the WAN routers into an existing two POD EVPN-VXLAN fabric. To keep the focus on VXLAN stitching, both PODs in the example use the same 3-stage Clos fabric based on a CRB architecture. In addition to their role as Layer 3 VXLAN gateways, the spines also perform the VXLAN stitching function. The result is an example of a collapsed gateway architecture.

    Figure 2 shows the collapsed gateway CRB based VXLAN stitching example topology.

    Figure 2: VXLAN Stitching Example Topology

    In this example, you add the gateway functionality to a pre-existing CRB spine configuration. As noted above, we also support 5-stage architectures with the super-spine layer performing the gateway peering and stitching functions. We recommend using a discrete gateway device for maximum scaling and performance. With a 3-stage or 5-stage ERB architecture, you add the gateway configuration to the lean spine or super spine devices, respectively.

  • When configuring the overlay BGP peering between the PODs, you can use either IBGP or EBGP. Typically, you use IBGP if your data centers (PODs) use the same autonomous system (AS) number and EBGP if your PODs use different AS numbers. Our example uses different AS numbers in each POD, therefore EBGP peering is used to extend the overlay between the PODs.

  • After you integrate the WAN routers to extend the underlay and overlay between the two PODs, you configure translational VXLAN stitching to extend a given VLAN between the PODs. Translational VXLAN stitching translates the VNI value used locally in each POD to a common VNI value used across the WAN segment. Note that in our example, we assign VLAN 1 a different (non-overlapping) VNI value in each POD. This is why we use translational stitching in this case. You normally use global mode stitching when the same VNI value is mapped to the same VLAN in both PODs.

Configure Gateway Devices to Extend the Underlay Over the WAN

This section shows you how to configure the collapsed gateway devices (a CRB spine with added VXLAN stitching gateway functionality) so they can communicate with the WAN devices. Recall that each POD already has a fully functional underlay and CRB overlay based on the reference implementation for a 3-stage CRB architecture. See Centrally-Routed Bridging Overlay Design and Implementation for details.

You configure the spine/gateway devices to peer with the WAN routers to extend the underlay between the two PODs. This involves configuring EBGP peering and policy to tag and advertise the loopback routes from each gateway. These routes establish the inter-POD EBGP peering sessions that extend the fabric overlay in the next section.

Note:

Configuring the WAN routers is outside the scope of this document. They simply need to support aggregate Ethernet interfaces and EBGP peering to the gateway devices. In this example the WAN routers must re-advertise all routes received from one POD to the other. In the case of a Junos device, this is the default policy for the EBGP underlay peering in this example.

Figure 3 provides the details regarding interfaces, IP addressing, and AS numbering for the DCI portion of the POD networks.

Figure 3: Details for Underlay and Overlay Extension Across the WAN

The configuration on all the gateway devices is similar. We'll walk you through configuring the gateway 1 device and then provide the full configuration delta for the other three gateways.

Gateway 1

  1. Configure the interfaces that connect the gateway 1 device to the two WAN routers.

    Here, we create an aggregated Ethernet (AE) interface that includes a single member. With this approach you can easily add additional member links to increase the WAN throughput or resiliency.

  2. Create a BGP peer group named underlay-bgp-wan, and configure it as an EBGP group.
  3. Configure the EBGP Underlay AS number.

    In this reference design, you assign a unique AS number to each device in the underlay network. See Figure 3 for the AS numbers of the gateway and WAN devices.

    You configure the AS number for EBGP in the underlay network at the BGP peer group level using the local-as statement because the system AS number setting at the [edit routing-options autonomous-system] hierarchy is used for MP-IBGP overlay peering in the local fabric, and for the EBGP peering used to extend the overlay between the PODs.

  4. Configure EBGP peering with WAN devices 1 and 2.

    You configure each WAN device as an EBGP neighbor by specifying the WAN device's IP address and AS number. See Figure 3 for the IP addresses and AS numbers of the WAN devices.

  5. Configure an import routing policy that subtracts 10 from the local preference value of routes received from the WAN when they are tagged with a specific community. This policy ensures that the gateway device always prefers a local underlay route, even when the same gateway loopback address is also learned over the WAN peering.

    Recall that we use EBGP for gateway to WAN router peering. By default, EBGP readvertises all routes received to all other EBGP (and IBGP) neighbors. This means that when gateway 1 advertises its loopback route to WAN router 1, the WAN router readvertises that route to gateway 2. The result is that each gateway has both an intra-fabric route and an inter-fabric route to reach the other gateway in its local POD.

    We want to ensure that the gateway always prefers the intra-fabric path. We do this by adjusting the local preference value for routes received from the WAN (to make them less preferred regardless of AS path length). The policy also blocks the readvertisement of gateway loopback routes learned over the WAN peering into the local fabric. The result is the leaf devices see only the intra-fabric gateway loopback route while the gateway devices always prefer the intra-fabric gateway route.

    You define the referenced community in the next step.

    Note:

    This example assumes you are starting from a baseline reference architecture in both PODs. Part of the pre-existing reference baseline is the fabric underlay and overlay related BGP peering and policy. This example is based on the spine and gateway being collapsed into a single device. Now that you have added underlay and overlay extension via the WAN routers, you should modify your existing underlay policy on the gateway (in our case the spine/gateway device) to block readvertisement of routes tagged with the wan_underlay_comm community toward the other fabric devices.

    We show an example of this modification here. The newly added from_wan term suppresses advertisement of routes with the matching community into the fabric underlay.

  6. Configure an export routing policy to advertise the gateway loopback interface address to the WAN devices. This policy rejects all other advertisements. You now define the wan_underlay_comm community used to tag these routes.
  7. Configure multipath with the multiple-as option to enable load balancing between EBGP peers in different ASs.

    By default, BGP selects one best path for each prefix and installs that route in the forwarding table. Configuring BGP multipath causes all equal-cost paths to a given destination to be installed into the routing table.

  8. Enable Bidirectional Forwarding Detection (BFD) for the WAN EBGP sessions. BFD enables rapid detection of failures and thereby fast reconvergence. A consolidated sketch of the gateway 1 underlay extension configuration follows these steps.
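
The following consolidated sketch shows one way to express steps 1 through 8 on the gateway 1 device. The interface names, member links, IP addresses, AS numbers, community value, and policy names are placeholders (take the real values from Figure 3 and your own underlay design); the lines beginning with # are annotations, not CLI input:

  # Step 1 - aggregated Ethernet interfaces toward WAN routers 1 and 2 (one member link each)
  set interfaces et-0/0/10 ether-options 802.3ad ae3
  set interfaces ae3 unit 0 family inet address 172.16.101.1/31
  set interfaces et-0/0/11 ether-options 802.3ad ae4
  set interfaces ae4 unit 0 family inet address 172.16.102.1/31

  # Steps 2 through 4 - EBGP underlay group, local AS, and the two WAN router neighbors
  set protocols bgp group underlay-bgp-wan type external
  set protocols bgp group underlay-bgp-wan local-as 4200000021
  set protocols bgp group underlay-bgp-wan neighbor 172.16.101.0 peer-as 4200000031
  set protocols bgp group underlay-bgp-wan neighbor 172.16.102.0 peer-as 4200000032

  # Step 5 - import policy: lower the local preference of routes tagged with wan_underlay_comm
  set policy-options policy-statement wan_underlay_import term wan_routes from community wan_underlay_comm
  set policy-options policy-statement wan_underlay_import term wan_routes then local-preference subtract 10
  set policy-options policy-statement wan_underlay_import term wan_routes then accept
  set protocols bgp group underlay-bgp-wan import wan_underlay_import

  # Step 5 (note) - add a from_wan term to the existing fabric underlay export policy so routes
  # tagged with wan_underlay_comm are not readvertised toward the local fabric devices
  set policy-options policy-statement underlay-clos-export term from_wan from community wan_underlay_comm
  set policy-options policy-statement underlay-clos-export term from_wan then reject
  insert policy-options policy-statement underlay-clos-export term from_wan before term loopback

  # Step 6 - define the community and export only the gateway loopback, tagged with it
  set policy-options community wan_underlay_comm members 65000:999
  set policy-options policy-statement wan_underlay_export term loopback from interface lo0.0
  set policy-options policy-statement wan_underlay_export term loopback then community add wan_underlay_comm
  set policy-options policy-statement wan_underlay_export term loopback then accept
  set policy-options policy-statement wan_underlay_export term reject-all then reject
  set protocols bgp group underlay-bgp-wan export wan_underlay_export

  # Step 7 - load balance across EBGP peers in different ASs
  set protocols bgp group underlay-bgp-wan multipath multiple-as

  # Step 8 - BFD with a 1-second interval for fast failure detection
  set protocols bgp group underlay-bgp-wan bfd-liveness-detection minimum-interval 1000
  set protocols bgp group underlay-bgp-wan bfd-liveness-detection multiplier 3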

Configure Gateway Devices to Extend the Overlay Over the WAN

This section shows how to extend the EVPN overlay between the two PODs using EBGP. Recall that in this example the two PODs have unique AS numbers, so EBGP is used.

As is typical for a 3-stage CRB fabric, our spine devices (gateways) function as route reflectors in the overlay for the leaf devices in their respective PODs. In this section you define a new EBGP peering group that extends the overlay between the PODs. See Figure 3 for details about the AS numbering and spine device loopback addresses.

The configuration on all the gateway devices is similar. Once again, we'll walk you through configuring the gateway 1 device, and provide the full configuration delta for the other three gateways.

Gateway 1

  1. Configure the EBGP group to extend the EVPN overlay to the remote gateway devices.

    Normally, we use IBGP for an EVPN overlay. We use EBGP here because we assigned the PODS different AS numbers. Note here that you must enable the multihop option. By default, EBGP expects a directly connected peer. In this example, the peer is remotely attached to the far side of the WAN. Also, you must configure the no-nexthop-change option. This option alters the default EBGP behavior of updating the BGP next hop to a local value when re-advertising routes. With this option, you tell the gateway device to leave the BGP protocol next hop for the overlay route unchanged. This is important because the gateway IP address may not be a VXLAN VTEP address, for example, in an ERB fabric where the next hop for an EVPN type 2 route must identify that leaf device. Not overwriting the next hop ensures that the correct VTEPs are used for VXLAN tunnels.

    You configure the EBGP peering between the gateway device loopback addresses.

  2. As with the underlay peering, we add BFD for rapid failure detection in the overlay extension. Note that here we specify a longer interval for the overlay peering. In the underlay extension peering, we used a 1-second interval. Here we configure a 4-second interval to help ensure the overlay sessions remain up in the event of an underlay failure that requires reconvergence.
  3. Be sure to commit the changes on all gateway devices after these steps. A sketch of the gateway 1 overlay extension configuration follows.
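
A sketch of the overlay extension configuration for gateway 1 follows. The group name, loopback addresses, and peer AS number are placeholders (take the real values from Figure 3). The global AS number configured at the [edit routing-options autonomous-system] hierarchy is reused for this peering, so no local-as statement is needed here:

  # EBGP overlay peering to the two remote gateways, sourced from the loopback address
  set protocols bgp group overlay-bgp-wan type external
  set protocols bgp group overlay-bgp-wan multihop no-nexthop-change
  set protocols bgp group overlay-bgp-wan local-address 192.168.0.1
  set protocols bgp group overlay-bgp-wan family evpn signaling
  set protocols bgp group overlay-bgp-wan neighbor 192.168.0.3 peer-as 4210000002
  set protocols bgp group overlay-bgp-wan neighbor 192.168.0.4 peer-as 4210000002

  # BFD with a longer 4-second interval so the overlay sessions ride out underlay reconvergence
  set protocols bgp group overlay-bgp-wan bfd-liveness-detection minimum-interval 4000
  set protocols bgp group overlay-bgp-wan bfd-liveness-detection multiplier 3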

Gateway Device Configurations for Underlay and Overlay Extension

This section provides the configuration delta for all four gateway devices. You add this delta to the initial CRB baseline to extend the POD underlay and overlay over the WAN.

Note:

The final two statements modify the existing fabric underlay policy to block readvertisement of routes tagged with the wan_underlay_comm community toward the other fabric devices in the local POD.

Gateway 1 (POD 1)

Gateway 2 (POD 1)

Gateway 3 (POD 2)

Gateway 4 (POD 2)

Verify Underlay and Overlay Extension Over the WAN

This section shows how you verify the gateway devices are properly integrated into the WAN to extend the underlay and overlay networks between the two PODs.

  1. Verify that the aggregated Ethernet interfaces are operational. Proper BGP session establishment is a good sign the interface can pass traffic. If in doubt, ping the remote end of the AE link.

    The output confirms that the aggregated Ethernet interface ae4 is operational on gateway 1. The traffic counters also confirm the interface sends and receives packets.

  2. Verify that the underlay EBGP sessions to the WAN devices are established.

    The output shows that both EBGP peering sessions on the gateway 1 device are established to both WAN routers.

  3. Verify that the overlay EBGP sessions are established between the gateway devices across the WAN.

    The output confirms that both the overlay EBGP sessions from the gateway 1 device are established to both remote gateways in POD 2.

    With underlay and overlay extension verified, you are ready to move on to configuring VXLAN stitching for Layer 2 DCI between the PODs. The show commands behind these verification steps are summarized below.
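
The checks above rely on operational commands along the following lines on gateway 1. The group names match the sketches earlier in this document; substitute your own interface and group names:

  show interfaces ae4 extensive               # step 1 - AE link state and traffic counters
  show bgp summary group underlay-bgp-wan     # step 2 - underlay EBGP sessions to the WAN routers
  show bgp summary group overlay-bgp-wan      # step 3 - overlay EBGP sessions to the remote gateways
  show bfd session                            # optional - confirm the BFD sessions are Up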

Configure Translational VXLAN Stitching DCI in the Default Switch Instance

In this section you configure VXLAN stitching in the gateway devices to provide Layer 2 stretch between the two PODs using the default switch instance. We support VXLAN stitching in the default switch instance and in MAC-VRF instances. We begin with the default switch instance, and later show the delta for the MAC-VRF instance case.

VXLAN stitching supports both a global mode and a translational mode. In the global mode, the VNI remains the same end-to-end, that is, across both the PODs and the WAN network. You use global mode when the VLAN and VNI assignments overlap between the PODs. In translational mode, you map a local POD VNI value to a VNI used across the WAN.
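
As a quick illustration of the difference, using the VLAN 1 values from Table 2 (the VLAN name is illustrative):

  # Global mode - VLAN 1 uses the same VNI locally and over the WAN
  set vlans VLAN-1 vxlan vni 910001
  set protocols evpn interconnect interconnected-vni-list 910001

  # Translational mode - local VNI 100001 is stitched to WAN VNI 910001
  set vlans VLAN-1 vxlan vni 100001
  set vlans VLAN-1 vxlan translation-vni 910001
  set protocols evpn interconnect interconnected-vni-list 910001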

You configure VXLAN stitching only on the gateway devices. The leaf devices don’t require any changes. In ERB fabrics, the lean spine devices also don’t require any changes if you have a super-spine layer performing the gateway function.

Table 2 outlines the POD VLAN and VNI assignments. In this example, the PODs use a different VNI for the same VLAN. This is why you configure translational stitching in this case. With translational stitching, the VNI can be unique to each POD and still be stitched to a shared VNI assignment over the WAN.

Table 2: VLAN to VNI Mappings

           POD 1           POD 2           WAN DCI
VLAN 1     VNI: 100001     VNI: 110001     VNI: 910001
VLAN 2     VNI: 100002     VNI: 110002     VNI: 910002

Figure 4 provides a high-level view of the VXLAN stitching plan for VLAN 1 in our example.

Figure 4: Translational VXLAN Stitching Summary for VLAN 1

Figure 4 shows that VLAN 1 in POD 1 uses VNI 100001, while the same VLAN in POD 2 maps to VNI 110001. You stitch both to a common VNI, 910001, for transport over the WAN. When traffic is received from the WAN, the gateway translates the stitched VNI back to the VNI used locally within its POD.

Once again, the configuration on the gateway devices is similar. We walk you through the steps needed on the gateway 1 device, and provide the configuration delta for the other gateway nodes.

Perform these steps to configure translational VXLAN stitching on gateway 1.

Gateway 1

  1. Configure the default switch instance EVPN parameters for route exchange between the two PODs. This configuration includes support for an all-active ESI LAG between the gateways. Setting up an ESI LAG over the WAN ensures that all WAN links are actively used to forward traffic without the risk of packet loops. You must use the same ESI value for all gateways within a given POD, and each POD must use a unique ESI value. Therefore, in this example you configure two unique ESIs, one for each pair of gateways in each POD.

    The route target controls route imports. You configure the same route target on all gateway devices to ensure that all routes advertised by one POD are imported by the other. You set the route distinguisher to reflect each gateway device’s loopback address.

  2. Configure VXLAN stitching for VLANs 1 and 2. You specify the VNIs that are stitched over the WAN at the [edit protocols evpn interconnect interconnected-vni-list] hierarchy. The gateway devices in both PODs must use the same VNI across the WAN for each stitched VNI.
  3. Configure translational VXLAN stitching for VLAN 1 by linking the local VLAN/VNI to a translational VNI. Note that the translational VNI value matches the VNI you configured at the protocols evpn interconnect interconnected-vni-list hierarchy in the previous step. Thus, with the following commands you map a local VNI to a WAN VNI.

    For global VXLAN stitching, you simply omit the translation-vni statement and configure the user VLAN to use the same VNI value that you configure at the [edit protocols evpn interconnect interconnected-vni-list] hierarchy.

    Recall that the leaf and spine devices in each pod are already configured for the CRB reference architecture. As part of the pre-existing configuration, VLANs are defined on both the spine and leaf devices. The VLAN definition on all devices includes a VLAN ID to VXLAN VNI mapping. The spine’s VLAN configuration differs from the leaf, however, in that it includes the Layer 3 IRB interface, again making this an example of CRB. The existing configuration for VLAN 1 is shown at the gateway 1 (spine 1) device for reference:

    You now modify the configuration for VLAN 1 on the gateway 1 device to invoke translational VXLAN stitching. The VNI you specify matches one of the VNI values you configured at the [edit protocols evpn interconnect interconnected-vni-list] hierarchy in a previous step. The result is that the device translates VNI 100001 (used locally in POD 1 for VLAN 1) to VNI 910001 when sending traffic over the WAN. In the remote POD, a similar configuration maps from the WAN VNI back to the local VNI associated with the same VLAN in the remote POD. In configuration mode, enter the following command:

  4. Configure translational VXLAN stitching for VLAN 2.

    You modify the configuration for VLAN 2 to invoke translational VXLAN stitching from VNI 100002 (used locally in POD 1 for VLAN 2) to VNI 910002 over the WAN.

  5. Confirm the change for VLAN 1. We omit VLAN 2 for brevity. The following command displays the change to VLAN 1 in configuration mode:
  6. Be sure to commit your changes on all gateway devices when done. A consolidated sketch of the gateway 1 stitching configuration follows these steps.
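
The following consolidated sketch shows one way to express these steps on the gateway 1 device. The route target and route distinguisher values are placeholders; the ESI shown for the POD 1 gateway pair matches the value that appears in the verification section, and the gateways in POD 2 would use a different ESI:

  # Step 1 - EVPN interconnect parameters: shared route target, per-device RD, all-active ESI LAG
  set protocols evpn interconnect vrf-target target:65000:9999
  set protocols evpn interconnect route-distinguisher 192.168.0.1:9999
  set protocols evpn interconnect esi 00:00:ff:ff:00:11:00:00:00:01
  set protocols evpn interconnect esi all-active

  # Step 2 - the VNIs stitched over the WAN
  set protocols evpn interconnect interconnected-vni-list 910001
  set protocols evpn interconnect interconnected-vni-list 910002

  # Steps 3 and 4 - translational stitching: map the local POD 1 VNIs to the WAN VNIs
  # (VLAN names are illustrative; the vni statements already exist in the CRB baseline)
  set vlans VLAN-1 vxlan translation-vni 910001
  set vlans VLAN-2 vxlan translation-vni 910002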

Gateway Device Configurations for Translational VXLAN Stitching in Default Switch Instance

This section provides the configuration delta for all four gateway devices. You add this delta to the CRB baseline that you have modified for DCI over the WAN. Once you have extended the underlay and overlay, the following configurations perform translational VXLAN stitching between the local POD’s VNI and the VNI on the WAN.

Gateway 1 (POD 1)

Gateway 2 (POD 1)

Gateway 3 (POD 2)

Gateway 4 (POD 2)

Verify Translational VXLAN Stitching in Default Switch Instance

  1. Confirm the ESI LAG between the gateway devices is operational and in active-active mode.

    The output shows that ESI 00:00:ff:ff:00:11:00:00:00:01 is operational. The output also shows active-active forwarding (Mode column shows all-active) and both Designated forwarder and Backup forwarder device addresses.

  2. View remote VXLAN VTEPs to confirm the remote gateway devices are listed as WAN VTEPs.

    The output correctly shows both remote gateways as Wan-VTEP.

  3. View the EVPN database on the gateway 1 device for VXLAN VNI 100001. Recall that in our example this is the VNI you assigned to VLAN 1 on the CRB leaves and spines in POD 1.

    The output confirms the VNI value 100001 associated with VLAN 1 is advertised and used in the local POD.

  4. View the EVPN database on the gateway 1 device for VXLAN VNI 910001. Recall that this is the VNI associated with VLAN 1 for translational VXLAN stitching over the WAN.

    The output confirms the VNI value 910001 associated with VLAN 1 is advertised to the remote POD. This confirms that VNI 910001 is used over the WAN. Given that the local VNI differs from the VNI used on the WAN, this confirms translational VXLAN stitching for the default switch instance use case. The show commands behind these verification steps are summarized below.
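
The verification steps above are based on operational commands along these lines on the gateway 1 device (outputs omitted):

  show ethernet-switching vxlan-tunnel-end-point esi       # step 1 - ESI LAG state between the gateways
  show ethernet-switching vxlan-tunnel-end-point remote    # step 2 - remote gateways listed as Wan-VTEP
  show evpn database l2-domain-id 100001                   # step 3 - local POD 1 VNI for VLAN 1
  show evpn database l2-domain-id 910001                   # step 4 - stitched WAN VNI for VLAN 1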

VXLAN Stitching in a MAC-VRF Routing Instance

We support both global and translational VXLAN stitching in MAC-VRF routing instances. Because we demonstrated translational stitching for the previous default switch instance, for the MAC-VRF case we show global mode VXLAN stitching.

Coverage of MAC-VRF routing instances is beyond the scope of this document. Once again, we assume you have a working CRB fabric with MAC-VRF instances configured as per the reference baseline. For details on configuring MAC-VRF, see MAC-VRF Routing Instance Type Overview and a sample use case at EVPN-VXLAN DC IP Fabric MAC VRF L2 services.

To keep the focus on the VXLAN stitching feature, we call out the delta for adding VXLAN stitching to an existing MAC-VRF. As with the default switch instance, we apply the stitching configuration only to the gateway devices. In the case of MAC-VRF, however, you configure the VLAN to VNI mapping in the MAC-VRF instance, rather than at the [edit vlans] hierarchy. Another difference in the MAC-VRF case is that you configure the interconnected-vni-list statement in the routing instance instead of at the [edit protocols evpn interconnect interconnected-vni-list] hierarchy.

The goal in this example is to perform global VXLAN stitching for VLANs 1201 and 1202, which map to VXLAN VNIs 401201 and 401202, respectively. You configure the same VLAN to VNI mapping in both PODs. You can use global mode stitching because the VLAN to VNI assignments overlap in both PODs.

You add the following commands to the gateway devices for the MAC-VRF instance that will perform stitching. The configuration defines the ESI LAG used between the local gateways and specifies the list of interconnected VNIs.

You need a similar configuration on all gateway devices. As before, we walk through the configuration details for the gateway 1 device and then provide the complete configuration delta for the other gateways.

In the example below, you configure VNIs 401201 and 401202 for VXLAN stitching over the WAN segment.
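
A sketch of that delta follows, shown for gateway 1. The instance name MACVRF-mac-vrf-ep-t2-stchd-1 is taken from the snippet later in this section; the route target, route distinguisher, and ESI values are placeholders patterned on the default switch instance case:

  # EVPN interconnect parameters inside the MAC-VRF instance
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 protocols evpn interconnect vrf-target target:65000:8888
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 protocols evpn interconnect route-distinguisher 192.168.0.1:8888
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 protocols evpn interconnect esi 00:00:ff:ff:00:11:00:00:00:01
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 protocols evpn interconnect esi all-active

  # VNIs stitched over the WAN (global mode - the same values are used inside each POD)
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 protocols evpn interconnect interconnected-vni-list 401201
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 protocols evpn interconnect interconnected-vni-list 401202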

Note:

When configuring VXLAN stitching in a MAC-VRF context, you must include the set forwarding-options evpn-vxlan shared-tunnels option on all leaf nodes in the QFX5000 line of switches running Junos OS. After adding this statement, you must reboot the switch. We don’t recommend using the shared tunnels statement on gateway nodes in the QFX10000 line of switches running Junos OS with VXLAN stitching in MAC-VRF routing instances.

Shared tunnels are enabled by default on devices running Junos OS Evolved (which supports EVPN-VXLAN only with MAC-VRF configurations).

As noted, a complete MAC-VRF routing instance configuration is beyond our scope. The configuration block below uses a pre-existing MAC-VRF instance based on the MAC-VRF reference design. We show this configuration snippet to better illustrate why this is an example of global mode VXLAN stitching (for a MAC-VRF instance). The sample is from the CRB spine 1 device, which is also a gateway in our collapsed gateway example topology. For brevity, we show only the configuration for VLAN 1201.
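
Since the original snippet is not reproduced here, the following sketch shows the shape of that pre-existing VLAN definition on the spine 1 (gateway 1) device. The VLAN name is illustrative; the VNI value and the irb.1201 interface follow the values discussed in this section:

  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 vlans bd-1201 vlan-id 1201
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 vlans bd-1201 l3-interface irb.1201
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 vlans bd-1201 vxlan vni 401201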

In the above, the MAC-VRF definition for VLAN 1201 specifies the same VNI (401201) listed at the [edit routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 protocols evpn interconnect interconnected-vni-list] hierarchy. This results in end-to-end (global) significance for that VNI.

As with the default switch instance, it is trivial to invoke translational VXLAN stitching in the MAC-VRF context.

For example, to translate from local VNI 300801 for VLAN 801 to a WAN VNI of 920001, you simply modify the VLAN definition in the related MAC-VRF instance to include the translation-vni 920001 statement.

By adding the translation-vni 920001 statement to the MAC-VRF VLAN configuration, you tell the gateway device to translate from local VNI 300801 to VNI 920001 when sending over the WAN.
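
A sketch of that modification, again using the instance name from this section and an illustrative VLAN name, follows. The vni statement is part of the pre-existing VLAN definition; only the translation-vni statement is new, and VNI 920001 must also appear in the instance's interconnected-vni-list:

  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 vlans bd-801 vxlan vni 300801
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 vlans bd-801 vxlan translation-vni 920001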

Gateway Device Configurations for Global VXLAN Stitching With MAC-VRF

This section provides the configuration delta for all four gateway devices to support global mode VXLAN stitching in a MAC-VRF context. You add this delta to the CRB baseline you modified for DCI over the WAN. After you extend the underlay and overlay, the configurations below perform global VXLAN stitching for VNIs 401201 and 401202. Because this is a global mode example, you don't include the translation-vni statement. The VLAN and interconnect VNI values are the same.

Gateway 1 (POD 1)

Gateway 2 (POD 1)

Gateway 3 (POD 2)

Gateway 4 (POD 2)

Note:

When configuring VXLAN stitching in a MAC-VRF context, you must include the set forwarding-options evpn-vxlan shared-tunnels option on all leaf nodes in the QFX5000 line of switches. After adding this statement you must reboot the switch. We don’t recommend configuring the shared tunnel statement on gateway nodes in the QFX10000 line of switches running Junos OS with VXLAN stitching in MAC-VRF routing instances.

Shared tunnels are enabled by default on devices running Junos OS Evolved (which supports EVPN-VXLAN only with MAC-VRF configurations).

Verify Global VXLAN Stitching in a MAC-VRF Instance

  1. Confirm the ESI LAG used between the gateway devices is operational and in active-active mode for the MAC-VRF case.

    The output shows that ESI 00:00:ff:ff:00:11:00:00:00:01 is operational. Active-Active forwarding is verified by the all-active mode and the presence of both a designated and backup forwarder.

  2. View remote VXLAN VTEPs to confirm the remote gateway devices are listed as WAN VTEPs.

    The output correctly shows both remote gateways as a Wan-VTEP.

  3. View the EVPN database on the gateway 1 device for VXLAN VNI 401201 for advertisements to the WAN. In our example, this is the VNI assigned to VLAN 1201 in both PODs. As this is a CRB example, you defined VLAN 1201 on the spines and on the leaf devices. Only the spine devices include the Layer 3 IRB interfaces in their VLAN configurations, however.

    The output confirms VNI value 401201, which is associated with VLAN 1201 and the irb.1201 interface, is advertised to the remote POD. This confirms that VNI 401201 is used over the WAN for VLAN 1201.

  4. View the EVPN database on the gateway 1 device for VXLAN VNI 401201 for advertisements to the local POD. Recall this is the VNI associated with VLAN 1201 in both PODs. This is the same VNI you used in the previous command to display advertisements to the remote gateways.

    The output shows that VNI 401201 is advertised to the local POD. This confirms that VNI 401201 is used locally. Given the same VNI is used locally, and across the WAN, this confirms global VXLAN stitching in a MAC-VRF case.

Virtual Machine Traffic Optimization (VMTO) with VXLAN Stitching

In some environments you may want to install /32 or /128 host routes to optimize traffic to a specific VM. When you use VXLAN stitching, configure the following on all gateway nodes to enable installation of host routes.

The first command adds host route support to the default switch instance. The second adds host route support for a specific MAC-VRF instance. You must configure both if you are using a mix of instance types.
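
A sketch of those two statements follows. The remote-ip-host-routes statement is the Junos knob for installing EVPN host routes; the MAC-VRF instance name is a placeholder, and you should confirm the hierarchy against your Junos release:

  # Host route (VMTO) support in the default switch instance
  set protocols evpn remote-ip-host-routes
  # Host route (VMTO) support in a specific MAC-VRF instance
  set routing-instances MACVRF-mac-vrf-ep-t2-stchd-1 protocols evpn remote-ip-host-routes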

Verify Host Route Support

Purpose

Confirm that /32 host routes are imported into a Layer 3 VRF table when using the default switch instance or a MAC-VRF table when using MAC-VRF.

Action

Display the related routing instance's route table and look for routes with a /32 (or /128) prefix. We begin with the Layer 3 VRF table used with VXLAN stitching in the default switch instance:

Next, we display a MAC-VRF instance route table.
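
The displays come from commands along these lines; the instance names are placeholders for this example:

  show route table VRF_1.inet.0                           # Layer 3 VRF used with the default switch instance
  show route table MACVRF-mac-vrf-ep-t2-stchd-1.evpn.0    # table for the MAC-VRF instance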