Data Center Interconnect Design and Implementation Using Type 5 Routes

Data Center Interconnect Using EVPN Type 5 Routes

EVPN Type 5 routes, also known as IP prefix routes, are used in a DCI context to pass traffic between data centers that are using different IP address subnetting schemes.

In this reference architecture, EVPN Type 5 routes are exchanged between spine devices in different data centers to allow for the passing of traffic between data centers.

Physical connectivity between the data centers is required before EVPN Type 5 messages can be sent across data centers. This physical connectivity is provided by backbone devices in a WAN cloud. A backbone device is connected to each spine device in a single data center and participates in the overlay IBGP and underlay EBGP sessions. EBGP also runs in a separate BGP group to connect the backbone devices to each other; EVPN signaling is enabled in this BGP group.

Figure 1 shows two data centers using EVPN Type 5 routes for DCI.

Figure 1: DCI Using EVPN Type 5 Routes Topology Overview

For additional information on EVPN Type 5 routes, see EVPN Type-5 Route with VXLAN encapsulation for EVPN-VXLAN.

All procedures in this section assume that EVPN Type 2 routes are successfully being passed in the data centers. See Centrally-Routed Bridging Overlay Design and Implementation for setup instructions.

This section covers the processes for configuring DCI using EVPN Type 5 routes, and includes the following procedures:

  • Configuring Backbone Device Interfaces

  • Enabling EBGP as the Underlay Network Routing Protocol Between the Spine Devices and the Backbone Devices

  • Enabling IBGP for the Overlay Network on the Backbone Device

  • Enabling EBGP as the Routing Protocol Between the Backbone Devices

  • Configuring DCI Using EVPN Type 5 Routes

  • Verifying That DCI Using EVPN Type 5 Routes is Operating

Configuring Backbone Device Interfaces

The backbone devices in this architecture are part of the WAN cloud and must provide connectivity both to the spine devices in each data center and to the other backbone device. This connectivity must be established before EVPN Type 5 routes can be exchanged between spine devices in different data centers.

Figure 2 provides an overview of the IP addresses that are configured in these steps.

Figure 2: IP Address Summary for Backbone and Spine Devices

To configure the spine device and backbone device interfaces:

Set up the interfaces and assign IP addresses:
  • (Aggregated Ethernet interfaces) Configure the aggregated Ethernet interfaces on the spine device switches in Data Centers 1 and 2 and on the backbone devices.

    This step shows only the assignment of IP addresses to the aggregated Ethernet interfaces. For complete step-by-step instructions on creating aggregated Ethernet interfaces, see Configuring Link Aggregation. An illustrative example with placeholder values follows this list.

    Spine Device 1 in Data Center 1:

    Spine Device 2 in Data Center 1:

    Spine Device 3 in Data Center 1:

    Spine Device 4 in Data Center 1:

    Spine Device 5 in Data Center 2:

    Spine Device 6 in Data Center 2:

    Backbone Device 1:

    Backbone Device 2:

  • (Standalone interfaces that are not included in aggregated Ethernet interfaces) See Configuring the Interface Address.
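
The device-specific addressing for each spine and backbone device is not reproduced here. As a general illustration only, the following sketch shows the form that an aggregated Ethernet interface assignment typically takes on these devices; the member interfaces, the ae10 bundle number, and the /31 address are hypothetical placeholders rather than values from the validated design.

  set chassis aggregated-devices ethernet device-count 10
  set interfaces xe-0/0/10 ether-options 802.3ad ae10
  set interfaces xe-0/0/11 ether-options 802.3ad ae10
  set interfaces ae10 aggregated-ether-options lacp active
  set interfaces ae10 unit 0 family inet address 172.16.200.1/31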

Enabling EBGP as the Underlay Network Routing Protocol Between the Spine Devices and the Backbone Devices

EBGP is used as the routing protocol of the underlay network in this reference design. The backbone devices must participate in EBGP with the spine devices to support underlay connectivity.

The process for enabling EBGP on the spine and leaf devices is covered in the IP Fabric Underlay Network Design and Implementation section of this guide. This procedure assumes EBGP has already been enabled on the spine and leaf devices, although some EBGP configuration on the spine devices needs to be updated to support backbone devices and is therefore included in these steps.

EBGP works in this reference design by assigning each leaf, spine, and backbone device into its own unique 32-bit autonomous system (AS) number.

Figure 3 shows an overview of the EBGP topology for the spine and backbone devices when backbone devices are included in the reference design.

Figure 3: EBGP Topology with Backbone Devices

Figure 4 illustrates the EBGP protocol parameters that are configured in this procedure. Repeat this process for the other devices in the topology to enable EBGP on the remaining devices.

Figure 4: EBGP Configuration in a Backbone Topology

To enable EBGP to support the underlay network in this reference design (a consolidated configuration sketch with placeholder values follows this procedure):

  1. Create and name the BGP peer group. EBGP is enabled as part of this step.

    All Spine and Backbone Devices:

  2. Configure the ASN for each device in the underlay.

    In this reference design, every device is assigned a unique ASN in the underlay network. The ASN for EBGP in the underlay network is configured at the BGP peer group level using the local-as statement because the system ASN setting is used for MP-IBGP signaling in the overlay network.

    Spine Device 2 in Data Center 1 Example:

    Spine Device 5 in Data Center 2 Example:

    Backbone Device 1:

    Backbone Device 2:

  3. Configure BGP peers by specifying the ASN of each BGP peer in the underlay network on each spine and backbone device.

    In this reference design, the backbone devices peer with every spine device in the connected data center and the other backbone device.

    The spine devices peer with the backbone device that connects them into the WAN cloud.

    Spine Device 2 in Data Center 1 Example:

    Spine Device 5 in Data Center 2 Example:

    Backbone Device 1:

    Backbone Device 2:

  4. Create a routing policy that includes the loopback interface in EBGP routing table updates, and apply it as an export policy.

    This export routing policy advertises loopback interface reachability to all devices in the IP Fabric, which is required to support the overlay network.

    Each Spine Device and Backbone Device:

  5. Enable multipath to ensure all routes are installed and shared in the forwarding table.

    Each Spine Device and Backbone Device:
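
Taken together, the steps above produce an underlay configuration along the following lines on each spine and backbone device. This is a minimal sketch: the group name UNDERLAY-BGP is the one used in this guide, while the local and peer AS numbers, the neighbor address, and the policy name EXPORT-LO0 are hypothetical placeholders.

  set protocols bgp group UNDERLAY-BGP type external
  set protocols bgp group UNDERLAY-BGP local-as 4200000002
  set protocols bgp group UNDERLAY-BGP neighbor 172.16.10.0 peer-as 4200000101
  set protocols bgp group UNDERLAY-BGP export EXPORT-LO0
  set protocols bgp group UNDERLAY-BGP multipath multiple-as
  set policy-options policy-statement EXPORT-LO0 term loopback from interface lo0.0
  set policy-options policy-statement EXPORT-LO0 term loopback then accept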

Enabling IBGP for the Overlay Network on the Backbone Device

The backbone devices must run IBGP to have overlay network connectivity and be able to support DCI using EVPN Type 5 routes.

Figure 5 shows the IBGP configuration of the validated reference design when backbone devices are included in the topology. In the validated reference design, all spine and leaf devices in the same data center are assigned into the same autonomous system. The backbone devices are assigned into the same autonomous system as the spine and leaf devices of the data center that is using the backbone device as the entry point into the WAN cloud.

Figure 5: IBGP Overview with Backbone Devices

Figure 6 illustrates the route reflector configuration in the validated reference design. One route reflector cluster—cluster ID 192.168.2.10—includes backbone device 1 as the route reflector and all spine devices in data center 1 as route reflector clients. Another route reflector cluster—cluster ID 192.168.2.11—includes backbone device 2 as the route reflector and all spine devices in data center 2 as route reflector clients.

Figure 6: IBGP Route Reflector Topology

The validated reference design supports hierarchical route reflection, where one cluster includes backbone devices acting as route reflectors for the spine device clients and another cluster includes spine devices acting as route reflectors for the leaf device clients. For the steps to configure the spine device route reflectors, see Configure IBGP for the Overlay.

Figure 7 shows the full hierarchical route reflector topology when two data centers are connected:

Figure 7: Hierarchical IBGP Route Reflector Topology

For more information on BGP route reflectors, see Understanding BGP Route Reflectors.

This procedure assumes IBGP has been enabled for the spine and leaf devices as detailed in Configure IBGP for the Overlay. The spine device configurations are included in this procedure to illustrate their relationships to the backbone devices.

To set up IBGP connectivity for the backbone devices (a consolidated configuration sketch with placeholder values follows this procedure):

  1. Configure an AS number for overlay IBGP. All leaf and spine devices in the same data center are configured into the same AS. The backbone devices are configured into the same AS as the spine and leaf devices of the data center that uses the backbone device as the entry point into the WAN cloud.

    Backbone Device 1 and All Spine and Leaf Devices in Data Center 1:

    Backbone Device 2 and All Spine and Leaf Devices in Data Center 2:

  2. Configure IBGP using EVPN signaling on the backbone devices. Form the route reflector clusters (cluster IDs 192.168.2.10 and 192.168.2.11) and configure BGP multipath and MTU Discovery.

    Backbone Device 1:

    Backbone Device 2:

  3. Configure IBGP using EVPN signaling on the spine devices. Enable BGP multipath and MTU Discovery.

    Spine Device 2 in Data Center 1 Example:

    Spine Device 5 in Data Center 2 Example:
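
As a consolidated sketch of the steps above, the overlay configuration on a backbone device takes roughly the following shape. The cluster ID 192.168.2.10 comes from Figure 6; using it as the local address assumes that the cluster ID matches the backbone device loopback. The AS number, the group name OVERLAY-BGP, and the spine loopback addresses are hypothetical placeholders.

  set routing-options autonomous-system 4210000001
  set protocols bgp group OVERLAY-BGP type internal
  set protocols bgp group OVERLAY-BGP local-address 192.168.2.10
  set protocols bgp group OVERLAY-BGP family evpn signaling
  set protocols bgp group OVERLAY-BGP cluster 192.168.2.10
  set protocols bgp group OVERLAY-BGP multipath
  set protocols bgp group OVERLAY-BGP mtu-discovery
  set protocols bgp group OVERLAY-BGP neighbor 192.168.0.1
  set protocols bgp group OVERLAY-BGP neighbor 192.168.0.2

A spine device uses a similar group with EVPN signaling, multipath, and MTU discovery, but without the cluster statement.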

Enabling EBGP as the Routing Protocol Between the Backbone Devices

EBGP is also used as the routing protocol between the backbone devices in this reference design. The backbone devices are connected over IP and must be configured as EBGP peers.

A second EBGP group—BACKBONE-BGP—is created in these steps to enable EBGP between the backbone devices. Each backbone device is assigned a unique 32-bit AS number within the new EBGP group. The backbone devices, therefore, are part of two EBGP groups—UNDERLAY-BGP and BACKBONE-BGP—and have a unique AS number within each group. EVPN signaling, which is required to support EVPN between the backbone devices, is also configured within this EBGP group during this procedure.

Figure 8 illustrates the attributes needed to enable EBGP between the backbone devices.

Figure 8: EBGP Topology for Backbone Device Connection

To enable EBGP as the routing protocol between the backbone devices (a configuration sketch with placeholder values follows this procedure):

  1. Create and name the BGP peer group. EBGP is enabled as part of this step.

    Both Backbone Devices:

  2. Configure the ASN for each backbone device.

    Backbone Device 1:

    Backbone Device 2:

  3. Configure the backbone devices as BGP peers.

    Backbone Device 1:

    Backbone Device 2:

  4. Enable EVPN signaling between the backbone devices:

    Both Backbone Devices:
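
A minimal sketch of the resulting backbone-to-backbone configuration follows. The group name BACKBONE-BGP is taken from this section; the 32-bit AS numbers and the peer address are hypothetical placeholders.

  set protocols bgp group BACKBONE-BGP type external
  set protocols bgp group BACKBONE-BGP local-as 4200000201
  set protocols bgp group BACKBONE-BGP neighbor 172.16.250.1 peer-as 4200000202
  set protocols bgp group BACKBONE-BGP family evpn signaling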

Configuring DCI Using EVPN Type 5 Routes

EVPN Type 5 messages are exchanged between IRB interfaces on spine devices in different data centers when EVPN Type 5 routes are used for DCI. These IRB interfaces are configured in a routing instance.

Each data center uses a unique virtual network identifier in this configuration (VNI 102001 and VNI 202001), but both VNIs are mapped to the same VLAN (VLAN 2001) in the same routing instance (VRF 501).

See Figure 9 for an illustration of the routing instance.

Figure 9: DCI Using EVPN Type 5 Routes

To enable DCI using EVPN Type 5 routes (a configuration sketch with placeholder values follows this procedure):

Note:

This procedure assumes that the routing instances, IRB interfaces, and VLANs created earlier in this guide are operational. See Centrally-Routed Bridging Overlay Design and Implementation.

When implementing border leaf functionality on an MX router, keep in mind that the router supports virtual switch instances only. MX routers do not support default instances.

  1. Configure the preferred addresses of the IRB interfaces.

    Spine Device 2 in Data Center 1:

    Spine Device 5 in Data Center 2:

  2. Configure mapping between VLANs and the IRB interfaces.

    Spine Device 2 in Data Center 1:

    Spine Device 5 in Data Center 2:

  3. Configure a routing instance, and map the IRB interface to this instance.

    Spine Device 2 in Data Center 1:

    Spine Device 5 in Data Center 2:

  4. Configure the VRF instance to generate EVPN Type 5 routes.
    Note:

    The VNI of the local or remote data center—VNI 100501 or 200501 in this reference architecture—must be entered as the VNI in the set routing-instances VRF-501 protocols evpn ip-prefix-routes vni command.

    Spine Device 2 in Data Center 1:

    Spine Device 5 in Data Center 2:

  5. On QFX5xxx switches that function as spine devices, enable the chained composite next hop feature. With this feature enabled, the switches can more efficiently process large numbers of EVPN Type 5 routes by directing routes that share the same destination to a common forwarding next hop.
    Note:

    On QFX10000 switches, this feature is enabled by default.

    Spine Device 2 in Data Center 1 and Spine Device 5 in Data Center 2:
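
Pulling the steps above together, the configuration on a Data Center 1 spine device takes roughly the following form. The VLAN ID (2001) and routing instance name (VRF-501) come from this section; mapping VNI 102001 and Type 5 VNI 100501 to Data Center 1 follows the numbering pattern used here and is an assumption, and the IRB unit number, IP address, route distinguisher, and route target are hypothetical placeholders. The final statement applies to QFX5xxx spine devices; the feature is enabled by default on QFX10000 switches.

  set interfaces irb unit 2001 family inet address 10.1.4.1/24 preferred
  set vlans VLAN-2001 vlan-id 2001
  set vlans VLAN-2001 l3-interface irb.2001
  set vlans VLAN-2001 vxlan vni 102001
  set routing-instances VRF-501 instance-type vrf
  set routing-instances VRF-501 interface irb.2001
  set routing-instances VRF-501 route-distinguisher 192.168.0.2:501
  set routing-instances VRF-501 vrf-target target:65000:501
  set routing-instances VRF-501 protocols evpn ip-prefix-routes advertise direct-nexthop
  set routing-instances VRF-501 protocols evpn ip-prefix-routes encapsulation vxlan
  set routing-instances VRF-501 protocols evpn ip-prefix-routes vni 100501
  set routing-options forwarding-table chained-composite-next-hop ingress evpn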

Verifying That DCI Using EVPN Type 5 Routes is Operating

Enter the following commands to verify that traffic can be sent between data centers using EVPN Type 5 routes (example command forms follow this procedure):

  1. Verify that an EVPN Type 5 route has been received from the spine device in the other data center by entering the show route table command. Enter the VRF instance number and the route distinguisher in the command line to filter the results.

    Spine Device 2 in Data Center 1:

    Spine Device 5 in Data Center 2:

  2. Verify that EVPN Type 5 routes are exported and imported in the VRF instance by entering the show evpn ip-prefix-database l3-context command and specifying the VRF instance.

    Spine Device 2 in Data Center 1:

    Spine Device 5 in Data Center 2:

  3. Verify the EVPN Type 5 route encapsulation details by entering the show route table command with the extensive option.

    Spine Device 2 in Data Center 1:

    Spine Device 5 in Data Center 2:
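
Assuming the routing instance name VRF-501 used earlier in this section, the verification commands take roughly the following form; filter the output of the first command on the relevant route distinguisher as described in step 1.

  show route table VRF-501.evpn.0
  show evpn ip-prefix-database l3-context VRF-501
  show route table VRF-501.evpn.0 extensive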

DCI Using Type 5 Routes — Release History

Table 1 provides a history of all of the features in this section and their support within this reference design.

Table 1: DCI Using Type 5 Routes Release History

Release: 19.1R2

Description: QFX10002-60C and QFX5120-32C switches running Junos OS Release 19.1R2 and later releases in the same release train support all features documented in this section.

Release: 18.4R2-S2

Description: QFX5110 and QFX5120-48Y switches, and MX routers running Junos OS Release 18.4R2-S2 and later releases in the same release train support all features documented in this section.