How to Configure OTT DCI in an EVPN Network

Requirements

This configuration example uses the following devices and software versions:

  • Spine devices consisting of QFX5120-32C switches running Junos OS Release 19.1R3-S2 in both DCs.

  • Leaf devices consisting of QFX5110-48S switches running Junos OS Release 18.4R2-S5 in DC1.

  • Leaf devices consisting of QFX5120-48Y switches running Junos OS Release 18.4R2-S5 in DC2.

Overview

In this deployment, we use an architecture with two data center fabric networks that have been configured in an edge-routed bridging (ERB) overlay model. The data centers are interconnected with a Layer 3 IP underlay network. Figure 1 shows the WAN connectivity that provides the underlay transport between the two data centers. We use an over-the-top EVPN-VXLAN DCI architecture to provide both Layer 2 and Layer 3 connectivity for endpoints and virtual machines in the two data centers.

Figure 1: OTT DCI Architecture

For this example, some VLANs are configured to exist only within a specific data center while other VLANs span across the two data centers. Figure 2 shows the VLANs that are configured in the two data centers. We configure the VLANs as follows:

  • VLANs 10, 11, and 12 are local to Data Center 1. The endpoints on these VLANs use Layer 3 routing to reach other VLANs in Data Center 2.

  • VLANs 170, 171, and 172 are local to Data Center 2. The endpoints on these VLANs use Layer 3 routing to reach other VLANs in Data Center 1.

  • VLANs 202 and 203 are common to Data Center 1 and Data Center 2. These VLANs stretch across all leaf devices in both data centers. The endpoints on these VLANs use Layer 2 bridging to reach the endpoints in the other data center.

Figure 2: VLANs in Data Centers

You can use any routing protocol in the underlay network. The underlay routing protocol enables devices to exchange loopback IP addresses with each other. In this example, eBGP is used as the underlay routing protocol within each data center fabric, and eBGP is also used to extend the underlay network across the WAN connection between the two data centers. You can use either iBGP or eBGP as the overlay routing protocol between the DCs; the choice depends on whether you use the same or different autonomous system numbers for your data centers. In this example, we use iBGP as the overlay protocol within each fabric and eBGP between the two data centers. Figure 3 shows the autonomous system numbers and IP addresses used in this example.

Figure 3: Details for WAN Router Peering

Configure the Border Spine Devices for WAN Connectivity

Border Spine WAN Underlay Configuration

Note:

The configurations shown on this page assume a pre-existing EVPN fabric with an operational underlay and overlay network. In other words, this section shows only the additional configuration that is needed to extend the DC underlay to a WAN cloud. In contrast, the detailed configurations show the full configuration of each DC device.

This section shows how to configure the WAN underlay on the border spine devices. For brevity, we only show the steps for configuring one spine device in each data center. You can configure the other border spine devices by applying similar configuration steps.

Note:

Repeat these procedures for all border spine devices in each data center.

Refer to the Detailed Configurations for the EVPN-VXLAN Network for the Data Centers for the complete configurations used in this example.

Border Spine WAN Underlay

Procedure

Step-by-Step Procedure
  1. Add the WAN peering session to the existing underlay group on Data Center 1 Spine 1 (DC1-Spine1). The border spine devices use the WAN underlay network to exchange loopback address information between the data centers. This in turn allows the spine devices to form an eBGP overlay connection across the WAN cloud.

    In this example the WAN router peering is added to the pre-existing UNDERLAY BGP peer group. Because this group is already configured as part of the DC’s underlay, you only need to add the WAN router as a neighbor to the existing UNDERLAY group (see the configuration sketch after these steps).

  2. Add the WAN peering to the existing UNDERLAY group on Data Center 2 Spine 1 (DC2-Spine1).

  3. Configure the common WAN underlay policy used on the spine and leaf devices in both data centers. We reserve the subnet 10.80.224.128/25 for loopback addresses in Data Center 1 and the subnet 10.0.0.0/24 for loopback addresses in Data Center 2. By specifying a reserved range of loopback addresses in our policy, we do not have to update the policy when a device is added, as long as the device is configured with a reserved loopback address.

    In this example the underlay exchanges only loopback routes, both within and between the DCs. The import and export policies for the underlay are written in such a way that they can be used by all devices in both DCs. For example, the UNDERLAY-EXPORT policy is configured to match both the local and remote DC loopback addresses. When applied to a device in DC1, only the 10.80.224.128/25 orlonger route-filter term matches. Having the extra route-filter term for the loopback addresses assigned to DC2 causes no harm, and it allows the same policy to be applied to the underlay in both DCs.
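
The following is a minimal sketch of what these underlay additions might look like on DC1-Spine1. The UNDERLAY group name, the UNDERLAY-EXPORT policy name, and the loopback subnets come from this example; the WAN router neighbor address (172.16.1.1), its AS number (65100), and the policy term names are illustrative placeholders only.

    set protocols bgp group UNDERLAY neighbor 172.16.1.1 description WAN-Router
    set protocols bgp group UNDERLAY neighbor 172.16.1.1 peer-as 65100
    set policy-options policy-statement UNDERLAY-EXPORT term DC1-LOOPBACKS from route-filter 10.80.224.128/25 orlonger
    set policy-options policy-statement UNDERLAY-EXPORT term DC1-LOOPBACKS then accept
    set policy-options policy-statement UNDERLAY-EXPORT term DC2-LOOPBACKS from route-filter 10.0.0.0/24 orlonger
    set policy-options policy-statement UNDERLAY-EXPORT term DC2-LOOPBACKS then accept
    set policy-options policy-statement UNDERLAY-EXPORT term REJECT-ALL then reject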

Border Spine WAN Overlay

Step-by-Step Procedure

For the fabric overlay, you configure a full mesh of eBGP peering among all border spine devices in both data centers. The spine devices function as overlay route reflectors within their respective DCs. These overlay peering sessions extend the EVPN overlay between the DCs.

Because the inter-DC overlay session runs over a WAN cloud, Bidirectional Forwarding Detection (BFD) is not enabled for this peer group. BFD remains enabled within each DC, in both the underlay and the overlay.

Note:

By default, eBGP changes the IP address in the protocol next-hop field to its own IP address when re-advertising routes. In this case, we want the protocol next-hop field to retain the VTEP IP address of the leaf device that originated the route. To prevent BGP from changing the next hops of overlay routes, we enable the no-nexthop-change option in the OVERLAY_INTERDC peer group (see the configuration sketch after the steps below).

  1. Configure the WAN overlay network on DC1-Spine1.

  2. Configure the WAN overlay network on DC2-Spine1.
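
A minimal sketch of the inter-DC overlay group as it might appear on DC1-Spine1 is shown below. The OVERLAY_INTERDC group name and the no-nexthop-change behavior come from this example, and the peer AS number 64830 matches the DC2 overlay AS referenced elsewhere in this example; the local and neighbor addresses are illustrative placeholders for the spine loopback addresses, and the DC1 overlay AS is assumed to be configured already.

    set protocols bgp group OVERLAY_INTERDC type external
    set protocols bgp group OVERLAY_INTERDC multihop no-nexthop-change
    set protocols bgp group OVERLAY_INTERDC local-address 10.80.224.130
    set protocols bgp group OVERLAY_INTERDC family evpn signaling
    set protocols bgp group OVERLAY_INTERDC neighbor 10.0.0.11 peer-as 64830
    set protocols bgp group OVERLAY_INTERDC neighbor 10.0.0.12 peer-as 64830

Note that BFD is deliberately not configured in this group, as described above.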

Configure the Leaf Devices to Support Layer 2 DCI

To extend Layer 2 (VLAN) connectivity across the two DCs, we must configure the network to extend the Layer 2 broadcast domain for all endpoints in the shared VLANs. In this example, the endpoints on VLANs 202 and 203 belong to the same Layer 2 domain regardless of whether the endpoints are in the local or remote data center. The goal is to extend, or stretch, these VLANs between the two DCs.

To extend Layer 2 connectivity for these shared VLANs between the DCs, you must configure the following:

  • Configure unique VXLAN network identifier (VNI) based route targets, under the protocols evpn hierarchy, for the EVPN type 2 and EVPN type 3 routes that are associated with the stretched VLANs.

  • Apply an import policy, using the vrf-import statement at the switch-options hierarchy, to accept the unique route targets that are bound to the stretched VLANs. These route targets are carried on the EVPN type 2 and type 3 routes that the remote data center advertises for the stretched VLANs.

Note:

Repeat these procedures for all leaf devices in the relevant data center. The commands shown below are added to the existing switch-options and protocols evpn hierarchies to stretch the shared VLANs between the two DCs.

Procedure

Step-by-Step Procedure

  1. Configure VNI specific targets for the stretched VLANs at Data Center 1 Leaf 1 (DC1-Leaf1).

  2. Configure a VRF import policy to match the VNI targets for the stretched VLANs at Data Center 1 Leaf 1 (DC1-Leaf1). The policy also matches the default EVPN type 1 routes used for Auto-Discovery (AD). The policy itself is defined in a later step.

  3. Configure VNI specific targets for the stretched VLANs at Data Center 2 Leaf 1 (DC2-Leaf1).

  4. Specify a VRF import policy to match on the VNI targets for the stretched VLANs at Data Center 2 Leaf 1 (DC2-Leaf1).

  5. Define the VRF import policy that you referenced with the vrf-import statement at the switch-options hierarchy in the previous steps. In this example, we use unique route targets for each data center and for each Layer 2 domain that is extended between the DCs. The policy is set to import the EVPN type 1 AD routes that are associated with each DC/POD, as well as the EVPN type 2 and EVPN type 3 routes that are used by the stretched VLANs in both data centers.

    This policy should be configured on all leaf devices in both DCs.

    Note:

    In your deployment you can use a common route target for the leaf devices in your data centers. When you use a common route target, the routes from the data centers will be automatically imported as part of the implicit import policy.

    The values used for the comm_pod1 and comm_pod2 communities match the values specified with the vrf-target statement under the switch-options hierarchy of the leaf devices in DC1 and DC2, respectively. This is the route target that is added to all EVPN type 1 routes, which are used for auto-discovery. This target is also attached to all other EVPN routes that do not have a VNI-specific target specified at the protocols evpn hierarchy.

    In this example the stretched VLANs are configured to use VNI-specific targets. Therefore, the import policy is written to match on all three of the targets used by each DC. Once again, this approach allows a common policy to be used across all leaf devices in both DCs. A consolidated configuration sketch follows these steps.
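
The following consolidated sketch shows what these statements might look like on DC1-Leaf1. The comm_pod1 and comm_pod2 community names come from this example, and target:64830:999 matches the DC2 pod target shown later in the verification section; the DC1 pod target, the VNI-specific route-target values, the VNI number for VLAN 203, the policy name, and the remaining community names are illustrative assumptions.

    set protocols evpn vni-options vni 1202 vrf-target target:64730:1202
    set protocols evpn vni-options vni 1203 vrf-target target:64730:1203
    set switch-options vrf-import EVPN_VRF_IMPORT

    set policy-options community comm_pod1 members target:64730:999
    set policy-options community comm_pod2 members target:64830:999
    set policy-options community comm_vni1202_dc1 members target:64730:1202
    set policy-options community comm_vni1203_dc1 members target:64730:1203
    set policy-options community comm_vni1202_dc2 members target:64830:1202
    set policy-options community comm_vni1203_dc2 members target:64830:1203
    set policy-options policy-statement EVPN_VRF_IMPORT term IMPORT_POD_TARGETS from community [ comm_pod1 comm_pod2 ]
    set policy-options policy-statement EVPN_VRF_IMPORT term IMPORT_POD_TARGETS then accept
    set policy-options policy-statement EVPN_VRF_IMPORT term IMPORT_STRETCHED_VLANS from community [ comm_vni1202_dc1 comm_vni1203_dc1 comm_vni1202_dc2 comm_vni1203_dc2 ]
    set policy-options policy-statement EVPN_VRF_IMPORT term IMPORT_STRETCHED_VLANS then accept
    set policy-options policy-statement EVPN_VRF_IMPORT term REJECT_ALL then reject

On the DC2 leaf devices, the vni-options targets would use the DC2 values (for example, target:64830:1202), while the same import policy can be reused unchanged because it matches the targets from both DCs.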

Configure the Leaf Devices to Support Layer 3 DCI

EVPN type 5 routes, also called IP prefix routes, are used for Layer 3 connectivity between DCs for VLANs that are not stretched. In other words, a type 5 route is used when the Layer 2 broadcast domain is confined within a single data center. EVPN type 5 routes enable Layer 3 connectivity by advertising the IP prefixes of the IRB interfaces associated with the non-extended VLANs. In this example, VLANs 10, 11, and 12 are confined to Data Center 1, while VLANs 170, 171, and 172 are confined to Data Center 2. We establish Layer 3 inter-data center connectivity between the members of these VLANs by using IP prefix routes.

Procedure

Step-by-Step Procedure

  1. Configure Layer 3 DCI on DC1-Leaf1 by defining a Layer 3 VRF with support for EVPN type 5 routes.

  2. Configure Layer 3 DCI on DC2-Leaf1 by defining a Layer 3 VRF with support for EVPN type 5 routes.

  3. Configure the Layer 3 DCI policy for all leaf devices in DC1.

  4. Configure the Layer 3 DCI policy for all leaf devices in DC2. A configuration sketch follows these steps.
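
A minimal sketch of this Layer 3 DCI configuration for DC1-Leaf1 follows. The TENANT_1_VRF instance name, the T5_EXPORT policy name, and VNI 9999 come from this example; the IRB unit numbers, the route-distinguisher, the vrf-target value, and the policy term names are illustrative assumptions.

    set routing-instances TENANT_1_VRF instance-type vrf
    set routing-instances TENANT_1_VRF interface irb.10
    set routing-instances TENANT_1_VRF interface irb.11
    set routing-instances TENANT_1_VRF interface irb.12
    set routing-instances TENANT_1_VRF interface irb.202
    set routing-instances TENANT_1_VRF interface irb.203
    set routing-instances TENANT_1_VRF route-distinguisher 10.80.224.140:9999
    set routing-instances TENANT_1_VRF vrf-target target:65000:9999
    set routing-instances TENANT_1_VRF protocols evpn ip-prefix-routes advertise direct-nexthop
    set routing-instances TENANT_1_VRF protocols evpn ip-prefix-routes encapsulation vxlan
    set routing-instances TENANT_1_VRF protocols evpn ip-prefix-routes vni 9999
    set routing-instances TENANT_1_VRF protocols evpn ip-prefix-routes export T5_EXPORT

    set policy-options policy-statement T5_EXPORT term DIRECT_ROUTES from protocol direct
    set policy-options policy-statement T5_EXPORT term DIRECT_ROUTES then accept
    set policy-options policy-statement T5_EXPORT term HOST_ROUTES from protocol evpn
    set policy-options policy-statement T5_EXPORT term HOST_ROUTES from route-filter 0.0.0.0/0 prefix-length-range /32-/32
    set policy-options policy-statement T5_EXPORT term HOST_ROUTES then accept

In this sketch, the first policy term advertises the IRB subnets as type 5 prefix routes, while the second term exports /32 EVPN host routes, which supports the symmetric routing behavior discussed in the verification section.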

Alternate Policy Configuration for Importing Route Targets

For a simpler policy configuration, you can automatically derive route targets by including the vrf-target statement with the auto option at the switch-options hierarchy. When you enable vrf-target auto, Junos automatically creates the route targets used for EVPN type 2 and type 3 routes. For EVPN-VXLAN, the route target is automatically derived from the VXLAN network identifier (VNI) and is attached to the associated EVPN type 2 and type 3 routes. The vrf-target auto statement also creates an implicit import policy that matches on and imports the same route target values for these VNIs.

The auto-derived route targets include the overlay AS number as part of their identifier, so you end up with different route targets for the same VNI in Data Center 1 and Data Center 2. In this example, the auto-derived EVPN type 2 and type 3 route targets for VNI 1202 are target:64730:268436658 in Data Center 1 and target:64830:268436658 in Data Center 2. Because the route targets differ, the EVPN type 2 and type 3 routes for VLANs 202 and 203 are not automatically imported from the remote data center. To import EVPN type 2 and type 3 routes from a data center with a different autonomous system number, include the vrf-target auto import-as statement.

Here is the configuration example for Leaf 1 in Data Center 1.
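
A minimal sketch of what this could look like on DC1-Leaf1 is shown below, assuming the DC2 overlay AS number 64830 referenced earlier and VNIs 1202 and 1203 for the stretched VLANs; the base vrf-target value is illustrative, and the vni-list form used with import-as is an assumption.

    set switch-options vrf-target target:64730:999
    set switch-options vrf-target auto
    set switch-options vrf-target auto import-as 64830 vni-list 1202
    set switch-options vrf-target auto import-as 64830 vni-list 1203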

Verification

Verify BGP Peering

Purpose

Verify the underlay, overlay, and inter-DC BGP Peering sessions.

Action

Verify that all underlay and overlay BGP peering sessions are established. This includes the eBGP overlay peering between DCs formed across the WAN cloud.

The following output is taken from the DC2-Spine1 device in DC2. All BGP peering sessions should be established on all leaf and spine devices in both DCs.
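
This check uses the standard BGP summary command from operational mode:

    show bgp summary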

Meaning

The output on DC2-Spine1 confirms that all of its BGP sessions are established. There is an underlay and an overlay session per local leaf, plus the WAN peering session and the two overlay peering sessions to the spine devices in the remote DC. This brings the total number of sessions to nine. The ability to establish the overlay peering to the remote DC confirms proper underlay route exchange over the WAN cloud.

Verify VTEPs in a Leaf Device

Purpose

Verify the Layer 2 connection between two leaf devices in different data centers.

Action

Verify that VTEP interfaces are up on the leaf devices in both the local and remote data centers.

The following is a snippet of the VTEP status output from a leaf in data center 1.
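
This check can be performed with commands such as the following, which list the local VTEP logical interfaces and the remote VTEPs learned through EVPN:

    show interfaces vtep
    show ethernet-switching vxlan-tunnel-end-point remote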

Meaning

The output on DC1-Leaf1 shows that Layer 2 VXLAN tunnels are created between this leaf device and the other leaf devices in the local data center fabric (the VTEP address of leaf 2 in DC1 is shown in the snippet). The output also shows VTEPs are learned between this leaf device and the other leaf devices in the remote data center fabric (the VTEP for leaf 2 in DC2 is shown in the snippet). The input and output packet counters verify successful data transmission between endpoints on these VTEPs.

Verify EVPN Type 1 Routes

Purpose

Verify EVPN type 1 routes are being added to the routing table.

Action

Verify that the EVPN type 1 (AD/ESI) routes have been received and installed.

Display the EVPN routing instance information. The snippet below is taken from Leaf 1 in DC1.
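
Commands such as the following display the EVPN routing instance details and the received EVPN routes, including the type 1 (AD/ESI) routes:

    show evpn instance extensive
    show route table bgp.evpn.0 extensive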

Meaning

These outputs show that DC1-Leaf1 has received EVPN type 1 (AD/ESI) routes from the remote data center. You can use the output of the show route extensive command to confirm that these routes have a route target of target:64830:999 and that these routes were successfully installed. The ESIs 02:02:02:02:01 and 02:02:02:02:02 identify Ethernet segments in the remote data center. This output also shows that the remote PEs with IP addresses 10.0.0.18 and 10.0.0.19 are part of these Ethernet segments.

Verify EVPN Type 2 Routes

Purpose

Verify EVPN type 2 routes are being added to the routing table.

Action

Verify that an EVPN type 2 route has been received and installed in the route table on a leaf device in data center 1.

Use show route to find an endpoint in VLAN 203 that is in the remote data center.

Use the detail option of show route to see more information.

Use the show vlans command to find the information on VLAN 203.

The above output shows that VLAN 203 is configured on vtep.32769 (10.80.224.141, DC1-Leaf2), vtep.32772 (10.0.0.18, DC2-Leaf2), and vtep.32770 (10.0.0.19, DC2-Leaf1). The output also indicates that there are endpoints connected to VLAN v203 on these VTEPs. You can use the show interfaces vtep command to get the VTEP IP address for these VTEPs. Use the show evpn database and show ethernet-switching table commands to find the MAC address and IP address for these endpoints.

Use the show evpn database command to find entries in the EVPN database.

Use the show ethernet-switching table command to display information for a particular MAC address.
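
The commands used in this check are summarized below, using 10.1.203.52 (the IP address of the Data Center 2 endpoint referenced later in this example) as the lookup target:

    show route 10.1.203.52
    show route 10.1.203.52 detail
    show vlans v203
    show evpn database
    show ethernet-switching table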

Meaning

The output shows that the MAC address of a device on VLAN 203 behind the remote PE has been installed in the EVPN database and is present in the Ethernet switching table.

Verify the Layer 3 DCI

Purpose

Verify Layer 3 routes are installed in the routing table.

Action

Verify that an EVPN type 5 route has been received and installed in the route table on a leaf device in data center 1.

Verify that you are receiving EVPN type 5 routes for 10.1.170.0/24, which is the subnet for VLAN 170. This VLAN is local to Data Center 2, so reaching this subnet from Data Center 1 requires Layer 3 routing.

You can view more details about the EVPN type 5 route for the 10.1.170.0/24 network. Traffic sent from this leaf device toward this subnet is encapsulated in a VXLAN tunnel with VNI 9999 and sent to the remote leaf 10.0.0.18 in Data Center 2.

The output confirms that routes for the 10.1.170.0/24 subnet are present in this leaf device. Since we are exporting and importing host routes, you see the specific EVPN type 5 host routes.

The output shows routes for the 10.1.203.0/24 subnet on this leaf device. In this case, VLAN 203 stretches across both data centers. When you limit the EVPN type 5 routes to subnet routes only, you will have asymmetric inter-VNI routing for Layer 2 stretched VLANs. If you prefer to deploy a symmetric model for inter-VNI routing for Layer 2 stretched VLANs, you must export and import EVPN type 5 host routes.

This example applies the T5_EXPORT policy to the TENANT_1_VRF as an export policy for the EVPN protocol to advertise /32 host routes. As a result, this example demonstrates symmetric inter-VLAN routing when those VLANs are stretched between DCs.
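
Commands such as the following can be used to inspect these type 5 routes; the table name assumes the TENANT_1_VRF routing instance defined earlier:

    show route table TENANT_1_VRF.inet.0 10.1.170.0/24 extensive
    show route table TENANT_1_VRF.inet.0 10.1.203.52/32 extensive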

Meaning

This output shows that both the 10.1.203.0/24 subnet and the specific host route for 10.1.203.52/32 (the IP address of an endpoint in Data Center 2) have been installed. The EVPN type 5 host route is preferred over the EVPN type 2 route for 10.1.203.52/32. This results in symmetric inter-VNI routing over Layer 2 stretched VLANs.