
Data Center Interconnect Design and Implementation Using IPVPN


This section describes how to configure Data Center Interconnect (DCI) using IPVPN, which carries traffic between data centers.

In this reference architecture, IPVPN routes are exchanged between spine devices in different data centers so that traffic can pass between those data centers.

Physical connectivity between the data centers is required before IPVPN routes can be exchanged across them. The backbone devices in a WAN cloud provide this physical connectivity. A backbone device connects to each spine device in a single data center and participates in the overlay IBGP and underlay EBGP sessions. EBGP also runs in a separate BGP group that connects the backbone devices to each other; EVPN signaling and IPVPN (inet-vpn) are enabled in this BGP group.
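As a sketch, the backbone-to-backbone EBGP group with both EVPN signaling and IPVPN (inet-vpn) enabled might look like the following in Junos set-command form. The group name, addresses, and AS numbers are illustrative assumptions, not values from the reference design.

```
# Illustrative sketch only: group name, addresses, and AS numbers are assumed.
set protocols bgp group BACKBONE-DCI type external
set protocols bgp group BACKBONE-DCI local-address 172.16.100.1
set protocols bgp group BACKBONE-DCI peer-as 4200000102
set protocols bgp group BACKBONE-DCI neighbor 172.16.100.2
# Enable EVPN signaling and IPVPN (inet-vpn) routes in this BGP group.
set protocols bgp group BACKBONE-DCI family evpn signaling
set protocols bgp group BACKBONE-DCI family inet-vpn unicast
```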

Figure 1 shows two data centers using IPVPN for DCI.

Figure 1: Data Center Interconnect using IPVPN

Configuring Data Center Interconnect Using IPVPN

Configuring DCI for IPVPN is similar to configuring DCI for EVPN Type 5 routes with the exceptions shown in this section.

This example shows the IPVPN configuration on Spine 1.

  1. Configure the underlay link from the spine to the backbone.
  2. On the backbone device, configure IBGP and MPLS for the overlay network. IPVPN requires that you use MPLS.
  3. On the spine, configure a routing instance to support DCI using IPVPN routes. This routing instance accepts L3VPN routes and also advertises data center routes as L3VPN routes to other IPVPN provider edge routers.
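The three steps above can be sketched in Junos set-command form as follows. The VRF name VRF-601 and the interface irb.2401 come from the verification commands later in this section; all other interface names, addresses, AS numbers, and route targets are illustrative assumptions, not the reference design's actual values.

```
# Step 1 (spine): underlay link toward the backbone device.
# Interface and addresses are assumed for illustration.
set interfaces et-0/0/60 unit 0 family inet address 172.16.1.1/31
set interfaces et-0/0/60 unit 0 family mpls
set protocols bgp group UNDERLAY-BACKBONE type external
set protocols bgp group UNDERLAY-BACKBONE peer-as 4200000101
set protocols bgp group UNDERLAY-BACKBONE neighbor 172.16.1.0

# Step 2 (backbone device): IBGP for the overlay, plus MPLS,
# which IPVPN requires.
set protocols bgp group OVERLAY type internal
set protocols bgp group OVERLAY local-address 192.168.2.10
set protocols bgp group OVERLAY family evpn signaling
set protocols bgp group OVERLAY family inet-vpn unicast
set protocols bgp group OVERLAY neighbor 192.168.2.1
set protocols mpls interface et-0/0/60.0
set protocols ldp interface et-0/0/60.0

# Step 3 (spine): routing instance that accepts L3VPN routes and
# advertises data center routes as L3VPN routes to other IPVPN PEs.
set routing-instances VRF-601 instance-type vrf
set routing-instances VRF-601 interface irb.2401
set routing-instances VRF-601 route-distinguisher 192.168.2.1:601
set routing-instances VRF-601 vrf-target target:65000:601
set routing-instances VRF-601 vrf-table-label
```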

Verifying Data Center Interconnect Using IPVPN

  1. Verify that data center routes are advertised as IPVPN routes to remote data centers.
    host@SPINE-1> show interfaces terse irb.2401
    host@SPINE-1> show route advertising-protocol bgp 192.168.2.1 table VRF-601.inet.0 match-prefix 30.1.145.0 extensive
  2. On Spine 4, verify that the remote data center accepts the routes as IPVPN routes.
    host@SPINE-4> show route table VRF-601.inet.0 match-prefix 30.1.145.0

Data Center Interconnect—Release History

Table 1 provides a history of the features in this section and their support within this reference design.

Table 1: DCI Using IPVPN Release History

  19.1R2: QFX10002-60C switches running Junos OS Release 19.1R2 and later releases in the same release train support DCI using IPVPN.

  18.4R2-S2: MX routers running Junos OS Release 18.4R2-S2 and later releases in the same release train also support DCI using IPVPN.

  18.1R3-S5: All devices in the reference design that support Junos OS Release 18.1R3-S5 and later releases in the same release train also support DCI using IPVPN.