
Data Center Interconnect

Contrail Networking supports the automation of a data center interconnect (DCI) between two different data centers.

These topics provide information on data center interconnect deployment topologies and how you can create a data center interconnect.

Understanding Data Center Interconnect

You can automate a data center interconnect (DCI) between two different data centers. Multiple tenants connected to a logical router in one data center can exchange routes with tenants connected to a logical router in another data center. All BGP routers in a data center should peer with the local route reflectors, not with BGP routers on another fabric. Contrail Networking Release 5.1 supports Layer 3 interconnect of data centers that exist in different fabrics; starting in Contrail Networking Release 2011, Layer 2 DCI functionality is also supported. Contrail Networking defines the elements (spine switches and leaf switches) that belong to a data center.

A single Contrail Networking cluster can manage multiple data center pods, each composed of a two-tier IP fabric. These data center pods are provisioned with overlay Layer 2 and Layer 3 networking services in the form of virtual networks and logical routers.

Contrail Networking automates the interconnection of logical routers (Layer 3 VRFs) in each pod. A DCI object represents the extension of a logical router from one data center pod to another by using EVPN VXLAN Type 5 routes. The logical routers are extended to devices in each fabric that are assigned the DCI-Gateway role. Routing policies are configured in both pods to ensure that EVPN Type 5 routes are exchanged across the data centers.
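Under the hood, each extended logical router corresponds to an EVPN Type 5 VRF on the gateway devices. The following Junos-style sketch is illustrative only — the instance name, route distinguisher, route target, interface, and VNI are hypothetical placeholders, not values generated by Contrail Networking:

```
# Hypothetical Type 5 VRF for an extended logical router; all values are placeholders.
set routing-instances LR-RED instance-type vrf
set routing-instances LR-RED interface irb.100
set routing-instances LR-RED route-distinguisher 10.1.1.1:9100
set routing-instances LR-RED vrf-target target:64512:9100
set routing-instances LR-RED protocols evpn ip-prefix-routes advertise direct-nexthop
set routing-instances LR-RED protocols evpn ip-prefix-routes encapsulation vxlan
set routing-instances LR-RED protocols evpn ip-prefix-routes vni 9100
```

Contrail Networking generates and pushes the equivalent configuration automatically; the sketch is only meant to show what the Type 5 extension amounts to on the device.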

Note:

The gateway devices must support the DCI-Gateway routing-bridging role.

Starting in Contrail Networking Release 2005, you can configure the DCI-Gateway routing-bridging role on MX240, MX480, MX960, and MX10003 devices.

Data Center Interconnect Deployment Topologies

Contrail Networking supports the following data center interconnect (DCI) deployment topologies.

DCI using EBGP

Figure 1: DCI using EBGP Connection

In this topology, an EBGP session is established between the two data centers. The data centers are configured with two different autonomous system (AS) numbers, as depicted in Figure 1.
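As a rough sketch, the DC1 gateway in this topology would carry EBGP configuration along these lines (the AS numbers and loopback addresses are illustrative, not taken from this document):

```
# DC1 side of the EBGP DCI session; AS numbers and addresses are placeholders.
set routing-options autonomous-system 64512
set protocols bgp group DCI type external
set protocols bgp group DCI multihop ttl 10
set protocols bgp group DCI local-address 10.1.1.1
set protocols bgp group DCI family evpn signaling
set protocols bgp group DCI peer-as 64513
set protocols bgp group DCI neighbor 10.2.2.2
```

The DC2 gateway would mirror this with its own AS number (64513 here) and the local and peer addresses reversed.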

DCI using IBGP

Figure 2: DCI using IBGP Connection

In this topology, an IBGP session is established between the two data centers. The data centers are configured with the same autonomous system (AS) number, as depicted in Figure 2.
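The IBGP variant differs mainly in the session type and the shared AS number; a hedged sketch with placeholder values:

```
# DC1 side of the IBGP DCI session; both data centers use AS 64512 in this sketch.
set routing-options autonomous-system 64512
set protocols bgp group DCI type internal
set protocols bgp group DCI local-address 10.1.1.1
set protocols bgp group DCI family evpn signaling
set protocols bgp group DCI neighbor 10.2.2.2
```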

Creating Data Center Interconnect

These topics provide step-by-step instructions to create data center interconnect.

Prerequisites

Before you start creating a data center interconnect, ensure that:

  • Junos OS Release 18.1 or later is installed on the devices.

  • The data center pods that Contrail Networking automates have IP reachability to each other.

  • Logical routers and client virtual networks are connected.

  • The logical routers are extended to devices in each fabric that are assigned the DCI-Gateway role.

  • BGP sessions can be established between the loopback addresses of the gateway devices.

  • Underlay connectivity is enabled.

  • Each data center has a route reflector that Contrail Networking peers with.
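You can spot-check several of these prerequisites from the Junos CLI of a gateway device. The addresses below are placeholders for the local and remote loopbacks:

```
# Loopback-to-loopback reachability toward the remote gateway (placeholder addresses).
ping 10.2.2.2 source 10.1.1.1 count 3

# Junos OS version (18.1 or later) and the state of existing BGP sessions.
show version
show bgp summary
```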

Follow these steps to create a data center interconnect.

Onboard Brownfield Devices

Follow the steps provided in the Onboard Brownfield Devices topic to onboard devices and assign roles to devices.

See Table 1 for an example of how you can assign roles to devices. When you configure a QFX Series device as a data center gateway, ensure that you assign the DC-Gateway role to the spine and leaf devices.

Table 1: Assign Roles to Devices

Device          Physical Role    Routing-Bridging Role
Spine devices   spine            CRB-Gateway, Route-Reflector, CRB-MCAST-Gateway, DCI-Gateway
Leaf devices    leaf             CRB-Access, DCI-Gateway

Create Virtual Network

Follow the steps provided in the Create Virtual Network topic to create virtual networks.

Before you begin, ensure that you:

  • Do not add a network policy while creating the virtual network.

    You can create the network policy and add it to the virtual network after you create the virtual network. For more information on creating a network policy, see Create Network Policy.

  • Have created a Network IPAM. For more information on creating a network IPAM, see Create Network IPAM.

After you have created the virtual network and the network policy, follow these steps to attach the network policy to the virtual network.

  1. Navigate to Overlay > Virtual Networks.

    The All networks page is displayed.

  2. To select the virtual network you want to add the policy to, select the check box next to the name of the virtual network. Then click the Edit icon at the end of the row.

    The Edit Virtual Network page is displayed.

  3. Select the network policy from the Network Policies list and click Save.

    The policy is now added and the All networks page is displayed.

Create Logical Routers

Follow the steps provided in the Create Logical Routers topic to configure logical routers.

While creating a logical router, ensure that you:

  • Select VXLAN Routing as the Logical Router Type.

  • Select the virtual network(s) from the Connected Networks list.

  • Select the physical router (Spine device) to which you want to extend the logical router.

Create DCI

Follow these steps to create a DCI between two different data centers by using the Contrail Command user interface (UI).

  1. Navigate to Overlay > Interconnects.

    The Data Center Interconnect page is displayed.

  2. Click Create.

    The Create Data Center Interconnect page is displayed.

  3. Enter a name for the DCI in the DCI name field.
  4. Select a DCI mode.

    You can select L2 or L3 DCI mode.

    If you have selected L2 as the DCI mode, the Fabric field, Available Virtual Networks table, and Selected Virtual Networks table are displayed. See Figure 3.

    Contrail Networking Release 2011 supports layer 2 DCI functionality.

    Figure 3: L2 DCI Mode

    Enter the following information.

    1. From the Fabric list, select the fabrics that the data centers are a part of.

      The available virtual networks that are part of Contrail are listed in the Available Virtual Networks table.

    2. From the Available Virtual Networks table, select the virtual networks you want to include in the DCI by clicking the arrow next to each listed virtual network.

      The virtual networks that you selected are displayed in the Selected Virtual Networks table.

    3. Click Create to create the L2 DCI.

    If you have selected L3 as the DCI mode, the Connections section is displayed. See Figure 4.

    Figure 4: L3 DCI Mode

    Enter the following information.

    1. Select a logical router from the Select logical router list.
    2. Select a fabric from the Select fabric list.
    3. From the Extend to Physical Router (RB Role = DCI-Gateway) list, select the physical router to which you want to extend the logical router.
    4. Repeat steps 4.a through 4.c to create the next connection.
    5. Click Create to create the L3 DCI.

The DCI is now created and is listed on the Data Center Interconnect page.
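Once the DCI is created, you can confirm from a gateway device that EVPN Type 5 (IP prefix) routes from the remote pod are being received; for example, from the Junos CLI:

```
# List EVPN Type 5 routes and the IP prefix database; output varies by platform.
show route table bgp.evpn.0 match-prefix "5:*"
show evpn ip-prefix-database
```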

Change History Table

Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.

Release   Description

2011      Layer 2 DCI functionality is supported.

2005      You can configure the DCI-Gateway routing-bridging role on MX240, MX480, MX960, and MX10003 devices.