Configure Data Center Interconnect (DCI)

Data Center Interconnect Overview

You can use CEM to interconnect multiple data centers over a WAN, such as the Internet or an enterprise network. CEM supports DCI based on EVPN/VXLAN; it does not support DCI based on Layer 3 VPN or EVPN/MPLS.

Multiple tenants connected to a logical router (VRF routing instance) in one data center can exchange routes with tenants connected to a logical router in another data center.

The implementation described in this section uses EBGP peering between the data centers.

Data Center Interconnect Configuration Overview

In this example (Figure 1), we configure DCI between Data Center 1 (DC1) and Data Center 2 (DC2). Physical connectivity between the data centers is provided by backbone devices in a WAN cloud. In DC1, we connect to the WAN cloud from the border leafs. In DC2, we connect to the WAN cloud from the border spines. We use EBGP as the routing protocol between the border devices and the devices in the WAN cloud.

Figure 1: Data Center Interconnect Between DC1 and DC2

[Figure: Two data centers with spine-leaf architecture, interconnected over the WAN using EBGP. DC1 contains spines, server leafs, bare-metal servers, and border leafs. DC2 contains border spines, leafs, and bare-metal servers. Logical routers provide routing between the data centers.]

DCI Configuration Overview

With CEM, you can automate data center interconnect (DCI) of two data centers. You can use the same CEM cluster to configure multiple data centers in distinct fabrics.

To configure DCI between Data Center 1 and Data Center 2:

  1. Assign device roles to the spines and border leafs used for DCI.

  2. Configure EBGP peering on the underlay.

  3. Create virtual networks.

  4. Create logical routers.

  5. Create Data Center Interconnect.

  6. Configure BGP peers on the WAN cloud device.

Assign Device Roles for Border Spine and Border Leaf Devices

In this procedure, we assign roles to the border leaf and border spine devices used for DCI.

To assign roles:

  1. On the Fabric Devices summary screen, select Action > Reconfigure Roles.
    [Screenshot: The Fabric Devices page under the Infrastructure tab, listing devices with status, management IP, loopback IP, vendor, product name, role, and routing details, with an Action drop-down for managing devices.]
  2. Next to the spine devices, select Assign Roles.
    [Screenshot: The assign-role dialog for a device, with drop-downs for Physical Role and Routing Bridging Roles and buttons for Cancel and Assign.]
  3. Be sure that the following roles are assigned.

    In DC1, set the roles as follows:

    • Border leaf—CRB Access, CRB Gateway, DCI Gateway

    • Spine—CRB Gateway, Route Reflector

    • Server leaf—CRB Access

    In DC2, set the roles as follows:

    • Border spine—CRB Gateway, DCI Gateway, Route Reflector

    • Leaf—CRB Access

    For a description of roles, see Device Roles.

Manually Configure BGP Peering

When you assign the CRB Gateway or DCI Gateway role to a device, CEM autoconfigures IBGP overlay peering between the fabrics. In our implementation, CEM creates BGP peering between the spine and border leaf devices in DC1 and the border spine devices in DC2.

CEM cannot configure the underlay automatically when the data centers are not directly connected to each other. In this case, you must manually configure loopback-to-loopback reachability between the devices that have the DCI Gateway role in the two data centers.
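Loopback-to-loopback reachability is typically achieved by advertising each device's loopback address into the underlay EBGP session. The following Junos sketch shows the general shape of such a policy; the policy and BGP group names are hypothetical, not values that CEM generates.

```
# Sketch only: advertise the local loopback into the underlay EBGP session.
# The policy name (EXPORT-LOOPBACK) and group name (WAN-UNDERLAY) are hypothetical.
set policy-options policy-statement EXPORT-LOOPBACK term lo0 from interface lo0.0
set policy-options policy-statement EXPORT-LOOPBACK term lo0 then accept
set policy-options policy-statement EXPORT-LOOPBACK term other then reject
set protocols bgp group WAN-UNDERLAY export EXPORT-LOOPBACK
```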

We are using an MX Series router as the WAN cloud device. On the cloud device, configure the border leaf and border spine devices as BGP peers.

  1. On the cloud device, configure each border leaf and border spine device as a BGP peer.
  2. On DC1 border leaf 1, configure the MX device as a BGP peer.
  3. On DC1 border leaf 2, configure the MX device as a BGP peer.
  4. On DC2 border spine 1, configure the MX device as a BGP peer.
  5. On DC2 border spine 2, configure the MX device as a BGP peer.
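The steps above correspond to ordinary Junos EBGP configuration. The following sketch shows one plausible shape for the cloud device and one border leaf; all AS numbers, group names, and IP addresses here are hypothetical.

```
# Sketch only: hypothetical AS numbers, group names, and addresses.
# On the MX cloud device: peer with the border devices in both data centers.
set routing-options autonomous-system 65000
set protocols bgp group DCI-UNDERLAY type external
set protocols bgp group DCI-UNDERLAY family inet unicast
set protocols bgp group DCI-UNDERLAY neighbor 172.16.1.1 peer-as 65101   # DC1 border leaf 1
set protocols bgp group DCI-UNDERLAY neighbor 172.16.1.5 peer-as 65101   # DC1 border leaf 2
set protocols bgp group DCI-UNDERLAY neighbor 172.16.2.1 peer-as 65201   # DC2 border spine 1
set protocols bgp group DCI-UNDERLAY neighbor 172.16.2.5 peer-as 65201   # DC2 border spine 2

# On DC1 border leaf 1: the mirror-image session toward the MX.
set routing-options autonomous-system 65101
set protocols bgp group WAN-UNDERLAY type external
set protocols bgp group WAN-UNDERLAY family inet unicast
set protocols bgp group WAN-UNDERLAY neighbor 172.16.1.0 peer-as 65000
```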

Configure Virtual Networks

We are creating a virtual network in each data center. A virtual network lets hosts in the same network communicate with each other. This is like assigning a VLAN to each host.

To create a virtual network:

  1. Navigate to Overlay > Virtual Networks and click Create.

    The Virtual Networks screen appears.

    [Screenshot: The Create Virtual Network screen, with fields for name, network policies, allocation mode, VXLAN ID, subnets, and host routes, and Create and Cancel buttons.]
  2. Create two virtual networks as follows:

    Field                    VN3-A Configuration                         VN3-B Configuration
    Name                     VN3-A                                       VN3-B
    Allocation Mode          User defined subnet only                    User defined subnet only
    Subnets: Network IPAM    default-domain:default-project:default...   default-domain:default-project:default...
    Subnets: CIDR            10.10.1.0/24                                10.10.2.0/24
    Subnets: Gateway         10.10.1.1                                   10.10.2.1

Create Virtual Port Groups

You configure VPGs to add interfaces to your virtual networks. To create a VPG:

  1. Navigate to Overlay > Virtual Port Group and click Create.

    The Create Virtual Port Group screen appears.

    [Screenshot: The Virtual Port Group screen, showing Virtual Port Group Name BMS5, Fabric Name DC1, and VLAN ID 111.]
  2. Create two VPGs with the values shown in the following table.

    To assign a physical interface, find the interface under Available Physical Interface. There can be multiple pages of interfaces. To move an interface to the Assigned Physical Interface, click the > next to the interface.

    Field                          BMS5 Configuration                BMS6 Configuration
    Name                           BMS5                              BMS6
    Assigned Physical Interface    xe-0/0/3:0 (on DC1-Server-Leaf1)  xe-0/0/3:0 (on DC1-Server-Leaf2)
    Network (Virtual Network)      VN3-A                             VN3-B
    VLAN ID                        111                               112
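For reference, the BMS5 VPG above is roughly equivalent to the following hand-written Junos access configuration on DC1-Server-Leaf1; the VLAN name and VXLAN VNI shown are hypothetical, and the configuration CEM actually generates may differ.

```
# Sketch only: hand-written equivalent of the BMS5 VPG on DC1-Server-Leaf1.
# The VLAN name (bd-111) and VNI (5111) are hypothetical.
set interfaces xe-0/0/3:0 unit 0 family ethernet-switching interface-mode trunk
set interfaces xe-0/0/3:0 unit 0 family ethernet-switching vlan members bd-111
set vlans bd-111 vlan-id 111
set vlans bd-111 vxlan vni 5111
```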

Create Logical Routers

For each logical router (LR) that you create, CEM creates a virtual routing and forwarding (VRF) routing instance, with IRB interfaces, on the border spine or border leaf devices.

  1. Navigate to Overlay > Logical Routers and click Create.

    The Logical Router screen appears:

    [Screenshot: The Create Logical Router screen for a logical router named DC1-LR1, with VXLAN Routing selected.]
  2. On the Logical Router screen, create a logical router:

    Field                        DC1-LR1 Configuration
    Name                         DC1-LR1
    Extend to Physical Router    DC1-Border-Leaf1, DC1-Border-Leaf2
    Logical Router Type          VXLAN Routing
    Connected Networks           VN3-A, VN3-B
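On the border leaf devices, the logical router above results in a VRF of roughly the following shape; the IRB units, route distinguisher, and route target shown here are hypothetical examples, not the exact values CEM generates.

```
# Sketch only: the kind of VRF a logical router produces on a border leaf.
# IRB units, route distinguisher, and route target are hypothetical.
set routing-instances DC1-LR1 instance-type vrf
set routing-instances DC1-LR1 interface irb.111    # gateway for VN3-A
set routing-instances DC1-LR1 interface irb.112    # gateway for VN3-B
set routing-instances DC1-LR1 route-distinguisher 10.0.0.11:100
set routing-instances DC1-LR1 vrf-target target:65101:100
```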

Create Data Center Interconnect

The DCI configuration sets up the connection between the two data centers. Once you add DCI, CEM adds the EVPN address family to the BGP peering between the border leaf devices in DC1 and the border spine devices in DC2.
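In Junos terms, adding the EVPN address family to a loopback-to-loopback BGP session looks roughly like the following; the group name and addresses are hypothetical.

```
# Sketch only: EVPN signaling on the DCI peering between gateway loopbacks.
# Group name and addresses are hypothetical.
set protocols bgp group DCI-OVERLAY multihop ttl 10
set protocols bgp group DCI-OVERLAY local-address 10.0.0.11    # local DCI gateway loopback
set protocols bgp group DCI-OVERLAY family evpn signaling
set protocols bgp group DCI-OVERLAY neighbor 10.0.0.21         # remote DCI gateway loopback
```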

  1. Click Overlay > DCI Interconnect.

    The Edit DCI screen appears.

  2. Fill in the screen as shown:
    [Screenshot: The Data Center Interconnect screen, showing DC1-to-DC2-2 in L3 mode with the logical routers and DCI gateways selected, and Create and Cancel buttons.]

Verify Data Center Interconnect

To verify that DCI is working, we will ping from a server on a virtual network in one data center to a server on a virtual network in the other data center.

  1. Run ping from BMS6 (DC1 Server Leaf 2) to BMS3 (DC2 Leaf 3).
  2. Run ping from BMS6 (DC1 Server Leaf 2) to BMS1 (DC2 Leaf 1).
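From the server, the check is an ordinary ping to a host in the other data center; the target addresses below are hypothetical host addresses in the remote virtual networks, chosen only to illustrate the command.

```
# Sketch only: run from BMS6; the target addresses are hypothetical.
ping -c 3 10.10.1.10    # hypothetical address for BMS1 in DC2
ping -c 3 10.10.1.20    # hypothetical address for BMS3 in DC2
```

If the pings succeed, traffic is crossing the DCI between the logical routers in the two data centers.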