
Five-Stage IP Fabric Design and Implementation

To enable you to scale your existing EVPN-VXLAN network in a data center, Juniper Networks supports a 5-stage IP fabric. Although a 5-stage IP fabric actually consists of three tiers of networking devices, the term 5-stage refers to the number of network devices that traffic sent from one host to another must traverse to reach its destination.

Juniper Networks supports a 5-stage IP fabric in an inter-point of delivery (POD) connectivity use case within a data center. This use case assumes that your EVPN-VXLAN network already includes tiers of spine and leaf devices in two PODs. To enable connectivity between the two PODs, you add a tier of super spine devices. To determine which Juniper Networks devices you can use as a super spine device, see the Data Center EVPN-VXLAN Fabric Reference Designs—Supported Hardware Summary table.

Figure 1 shows the 5-stage IP fabric that we use in this reference design.

Figure 1: Sample 5-Stage IP Fabric

As shown in Figure 1, each super spine device is connected to each spine device in each POD.

We support the following network overlay type combinations in each POD:

  • The EVPN-VXLAN fabric in both PODs has a centrally routed bridging overlay.

  • The EVPN-VXLAN fabric in both PODs has an edge-routed bridging overlay.

  • The EVPN-VXLAN fabric in one POD has a centrally routed bridging overlay, and the fabric in the other POD has an edge-routed bridging overlay.

Juniper Networks' 5-stage IP fabric supports RFC 7938, Use of BGP for Routing in Large-Scale Data Centers. However, where appropriate, we use terminology that more effectively describes our use case.


How to Integrate the Super Spine Devices into the IP Fabric Underlay Network

This section shows you how to configure the super spine devices so that they can communicate with the spine devices, which are already configured as part of an existing IP fabric underlay network.

For details about the interfaces and autonomous systems (ASs) in the IP fabric underlay network, see Figure 2.

Figure 2: Integrating Super Spine Devices into an Existing IP Fabric Underlay Network
  1. Configure the interfaces that connect the super spine devices to Spines 1 through 4.

    For the connection to each spine device, we create an aggregated Ethernet interface that initially includes a single link. With this approach, you can add member links later if you need to increase the throughput to each spine device.

    For interface details for the super spine devices, see Figure 2.

    Super Spines 1 and 2
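    For example, here is a minimal sketch for Super Spine 1, assuming hypothetical member interfaces and /31 point-to-point addressing; the actual interface names and addresses come from Figure 2. Super Spine 2 uses the same structure with its own values.

      # Hypothetical member interface and addressing; see Figure 2 for actual values.
      set chassis aggregated-devices ethernet device-count 4
      set interfaces et-0/0/1 ether-options 802.3ad ae1
      set interfaces ae1 aggregated-ether-options minimum-links 1
      set interfaces ae1 unit 0 family inet address 172.16.101.0/31
      # Repeat for ae2 through ae4 toward Spines 2 through 4.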

  2. Specify an IP address for loopback interface lo0.0.

    We use the loopback address for each super spine device when setting up an export routing policy later in this procedure.

    Super Spines 1 and 2
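    For example, assuming hypothetical loopback addresses of 192.168.2.1 for Super Spine 1 and 192.168.2.2 for Super Spine 2 (see Figure 2 for the actual values):

      # Super Spine 1 (hypothetical address)
      set interfaces lo0 unit 0 family inet address 192.168.2.1/32
      # Super Spine 2 (hypothetical address)
      set interfaces lo0 unit 0 family inet address 192.168.2.2/32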

  3. Configure the router ID.

    We use the router ID for each super spine device when setting up the route reflector cluster in the EVPN overlay network.

    Super Spines 1 and 2
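    For example, reusing the hypothetical loopback addresses above as the router IDs:

      # Super Spine 1
      set routing-options router-id 192.168.2.1
      # Super Spine 2
      set routing-options router-id 192.168.2.2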

  4. Create a BGP peer group named underlay-bgp, and enable EBGP as the routing protocol in the underlay network.

    Super Spines 1 and 2
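    For example:

      set protocols bgp group underlay-bgp type external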

  5. Configure the AS number.

    In this reference design, each device is assigned a unique AS number in the underlay network. For the AS numbers of the super spine devices, see Figure 2.

    The AS number for EBGP in the underlay network is configured at the BGP peer group level using the local-as statement because the system AS number setting is used for MP-IBGP signaling in the EVPN overlay network.

    Super Spines 1 and 2
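    For example, assuming hypothetical AS numbers of 4200000021 for Super Spine 1 and 4200000022 for Super Spine 2 (see Figure 2 for the actual values):

      # Super Spine 1 (hypothetical AS number)
      set protocols bgp group underlay-bgp local-as 4200000021
      # Super Spine 2 (hypothetical AS number)
      set protocols bgp group underlay-bgp local-as 4200000022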

  6. Set up a BGP peer relationship with Spines 1 through 4.

    To establish the peer relationship, on each super spine device, configure each spine device as a neighbor by specifying the spine device’s IP address and AS number. For the IP addresses and AS numbers of the spine devices, see Figure 2.

    Super Spines 1 and 2
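    For example, here is a sketch for Super Spine 1, assuming hypothetical link addresses and hypothetical spine AS numbers 4200000001 through 4200000004. Super Spine 2 uses the addresses of its own links to the spine devices.

      # Hypothetical neighbor addresses and AS numbers; see Figure 2 for actual values.
      set protocols bgp group underlay-bgp neighbor 172.16.101.1 peer-as 4200000001
      set protocols bgp group underlay-bgp neighbor 172.16.102.1 peer-as 4200000002
      set protocols bgp group underlay-bgp neighbor 172.16.103.1 peer-as 4200000003
      set protocols bgp group underlay-bgp neighbor 172.16.104.1 peer-as 4200000004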

  7. Configure an export routing policy that advertises the IP address of loopback interface lo0.0 on the super spine devices to the EBGP peering devices (Spines 1 through 4). This policy rejects all other advertisements.

    Super Spines 1 and 2
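    For example, a sketch that uses a hypothetical policy name (underlay-clos-export):

      set policy-options policy-statement underlay-clos-export term loopback from interface lo0.0
      set policy-options policy-statement underlay-clos-export term loopback then accept
      set policy-options policy-statement underlay-clos-export term reject-all then reject
      set protocols bgp group underlay-bgp export underlay-clos-export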

  8. Enable multipath with the multiple-as option, which allows load balancing between EBGP peers in different ASs.

    EBGP, by default, selects one best path for each prefix and installs that route in the forwarding table. When BGP multipath is enabled, all equal-cost paths to a given destination are installed into the forwarding table.

    Super Spines 1 and 2
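    For example:

      set protocols bgp group underlay-bgp multipath multiple-as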

  9. Enable Bidirectional Forwarding Detection (BFD) for all BGP sessions to allow rapid detection of failures and reconvergence.

    Super Spines 1 and 2
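    For example, assuming a detection interval of 1 second with a multiplier of 3 (adjust these timer values to your requirements):

      set protocols bgp group underlay-bgp bfd-liveness-detection minimum-interval 1000
      set protocols bgp group underlay-bgp bfd-liveness-detection multiplier 3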

How to Integrate the Super Spine Devices into the EVPN Overlay Network

This section explains how to integrate the super spine devices into the EVPN overlay network. In this control-plane-driven overlay, we establish a signaling path between all devices within a single AS using IBGP with Multiprotocol BGP (MP-IBGP).

In this IBGP overlay, the super spine devices act as a route reflector cluster, and the spine devices are route reflector clients. For details about the route reflector cluster ID and BGP neighbor IP addresses in the EVPN overlay network, see Figure 3.

Figure 3: Integrating Super Spine Devices into an Existing EVPN Overlay Network
  1. Configure an AS number for the IBGP overlay.

    All devices participating in this overlay (Super Spines 1 and 2, Spines 1 through 4, Leafs 1 through 4) must use the same AS number. In this example, the AS number is private AS 4210000001.

    Super Spines 1 and 2
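    For example:

      set routing-options autonomous-system 4210000001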

  2. Configure IBGP using EVPN signaling to peer with Spines 1 through 4. Also, form the route reflector cluster (cluster ID 192.168.2.10), and configure equal cost multipath (ECMP) for BGP. Enable path maximum transmission unit (MTU) discovery to dynamically determine the MTU size on the network path between the source and the destination, with the goal of avoiding IP fragmentation.

    For details about the route reflector cluster ID and BGP neighbor IP addresses for super spine and spine devices, see Figure 3.

    Super Spines 1 and 2
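    For example, here is a sketch for Super Spine 1, assuming a hypothetical group name (overlay-bgp) and the hypothetical loopback addresses used earlier. Super Spine 2 is the same except that it uses its own loopback address as the local address.

      # Hypothetical group name and addresses; see Figure 3 for actual values.
      set protocols bgp group overlay-bgp type internal
      set protocols bgp group overlay-bgp local-address 192.168.2.1
      set protocols bgp group overlay-bgp family evpn signaling
      set protocols bgp group overlay-bgp cluster 192.168.2.10
      set protocols bgp group overlay-bgp multipath
      set protocols bgp group overlay-bgp mtu-discovery
      set protocols bgp group overlay-bgp neighbor 192.168.0.1
      set protocols bgp group overlay-bgp neighbor 192.168.0.2
      set protocols bgp group overlay-bgp neighbor 192.168.0.3
      set protocols bgp group overlay-bgp neighbor 192.168.0.4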

    Note:

    This reference design does not include the configuration of BGP peering between Super Spines 1 and 2. However, if you want to set up this peering to complete the full mesh peering topology, you can optionally do so by creating another BGP group and specifying the configuration in that group. For example:

    Super Spines 1 and 2
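    The following sketch uses a hypothetical group name (overlay-bgp-mesh) and the hypothetical loopback addresses from earlier:

      # Super Spine 1
      set protocols bgp group overlay-bgp-mesh type internal
      set protocols bgp group overlay-bgp-mesh local-address 192.168.2.1
      set protocols bgp group overlay-bgp-mesh family evpn signaling
      set protocols bgp group overlay-bgp-mesh neighbor 192.168.2.2
      # Super Spine 2 (same type and family statements, with the addresses reversed)
      set protocols bgp group overlay-bgp-mesh local-address 192.168.2.2
      set protocols bgp group overlay-bgp-mesh neighbor 192.168.2.1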

  3. Enable BFD for all BGP sessions to allow rapid detection of failures and reconvergence.

    Super Spines 1 and 2
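    For example, using the same hypothetical timer values as in the underlay:

      set protocols bgp group overlay-bgp bfd-liveness-detection minimum-interval 1000
      set protocols bgp group overlay-bgp bfd-liveness-detection multiplier 3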

How to Verify That the Super Spine Devices Are Integrated Into the Underlay and Overlay Networks

This section explains how you can verify that the super spine devices are properly integrated into the IP fabric underlay and EVPN overlay networks.

After you successfully complete this verification, the super spine devices will handle communication between PODs 1 and 2 by advertising EVPN type-2 routes. This method will work if your PODs use the same IP address subnet scheme. However, if each POD uses a different IP address subnet scheme, you must additionally configure the devices that handle inter-subnet routing in the PODs to advertise EVPN type-5 routes. For more information, see How to Enable the Advertisement of EVPN Type-5 Routes on the Routing Devices in the PODs later in this topic.

  1. Verify that the aggregated Ethernet interfaces are enabled, that the physical links are up, and that packets are being transmitted if traffic has been sent.

    You can perform this verification for aggregated Ethernet interface ae1 on Super Spine 1 as shown below.
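    For example, run the following operational command on Super Spine 1 (the hostname shown is hypothetical):

      user@super-spine-1> show interfaces ae1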

  2. Verify that BGP is up and running.

    Verify that the EBGP and IBGP peer relationships with Spines 1 through 4 are established and that traffic paths are active, as shown below.
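    For example:

      user@super-spine-1> show bgp summary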

  3. Verify that BFD is working.

    Verify that the BGP sessions between Super Spine 1 and Spines 1 through 4 are established and in the Up state, as shown below.
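    For example:

      user@super-spine-1> show bfd session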

How to Enable the Advertisement of EVPN Type-5 Routes on the Routing Devices in the PODs

After you complete the tasks in the preceding sections, the super spine devices will handle communication between PODs 1 and 2 by advertising EVPN type-2 routes.

If servers connected to the leaf devices in both PODs are in the same subnet, you can skip the task in this section. However, if servers in each POD are in different subnets, you must further configure the devices that handle inter-subnet routing in the PODs to advertise EVPN type-5 routes as described in this section. This type of route is also known as an IP prefix route.

In this EVPN type-5 reference design, the EVPN-VXLAN fabric in both PODs has a centrally routed bridging overlay. In this type of overlay, the spine devices handle inter-subnet routing. Therefore, this section explains how to enable the advertisement of EVPN type-5 routes on the spine devices in PODs 1 and 2.

To enable the advertisement of EVPN type-5 routes, you set up a tenant routing instance named VRF-1 on each spine device. In the routing instance, you specify which host IP addresses and prefixes you want a spine device to advertise as EVPN type-5 routes with a VXLAN network identifier (VNI) of 500001. A spine device advertises the EVPN type-5 routes to the other spine and leaf devices within the same POD. The spine device also advertises the EVPN type-5 routes to the super spine devices, which in turn advertise the routes to the spine devices in the other POD. All spine devices on which you have configured VRF-1 import the EVPN type-5 routes into their VRF-1 routing table.

After you enable the advertisement of EVPN type-5 routes, the super spine devices will handle communication between PODs 1 and 2 by advertising EVPN type-5 routes.

Figure 4 shows the EVPN type-5 configuration details for the inter-POD use case.

Figure 4: Advertisement of EVPN Type-5 Routes Between PODs 1 and 2

Table 1 outlines the VLAN ID to IRB interface mappings for this reference design.

Table 1: VLAN ID to IRB Interface Mappings

    Devices                    VLAN Name    VLAN ID    IRB Interface
    Spines 1 and 2 in POD 1    VLAN BD-1    1          irb.1
                               VLAN BD-2    2          irb.2
    Spines 3 and 4 in POD 2    VLAN BD-3    3          irb.3
                               VLAN BD-4    4          irb.4

To set up the advertisement of EVPN type-5 routes:

  1. Create loopback interface lo0.1, and specify that it is in the IPv4 address family.

    For example:

    Spine 1
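    A minimal sketch, assuming a hypothetical address:

      # Hypothetical address
      set interfaces lo0 unit 1 family inet address 10.100.0.1/32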

  2. Configure a routing instance of type vrf named VRF-1. In this routing instance, include loopback interface lo0.1 so that the spine device, which acts as a VXLAN gateway, can resolve ARP requests, and include the IRB interfaces that correspond to each spine device (see Table 1). Set a route distinguisher and VRF targets for the routing instance. Configure load balancing for EVPN type-5 routes with the multipath ECMP option.

    For example:

    Spines 1 through 4
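    Here is a sketch for Spine 1, assuming a hypothetical route distinguisher and VRF target. On Spine 2, use its own route distinguisher; on Spines 3 and 4, use their own route distinguishers and include irb.3 and irb.4 instead (see Table 1).

      # Spine 1; route distinguisher and VRF target values are hypothetical.
      set routing-instances VRF-1 instance-type vrf
      set routing-instances VRF-1 interface irb.1
      set routing-instances VRF-1 interface irb.2
      set routing-instances VRF-1 interface lo0.1
      set routing-instances VRF-1 route-distinguisher 192.168.0.1:1
      set routing-instances VRF-1 vrf-target target:100:1
      set routing-instances VRF-1 routing-options multipath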

  3. Enable EVPN to advertise direct next hops, specify VXLAN encapsulation, and assign VNI 500001 to the EVPN type-5 routes.

    Spines 1 through 4 all use VNI 500001 in this configuration.

    For example:

    Spine 1
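    For example:

      set routing-instances VRF-1 protocols evpn ip-prefix-routes advertise direct-nexthop
      set routing-instances VRF-1 protocols evpn ip-prefix-routes encapsulation vxlan
      set routing-instances VRF-1 protocols evpn ip-prefix-routes vni 500001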

  4. Define an EVPN type-5 export policy named ExportHostRoutes for tenant routing instance VRF-1.

    For example, the following configuration establishes that VRF-1 advertises all host IPv4 and IPv6 addresses and prefixes learned by EVPN and from networks directly connected to Spine 1.

    Spine 1
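    A sketch of such a policy (the term names are arbitrary):

      set policy-options policy-statement ExportHostRoutes term adv-v4-hosts from protocol evpn
      set policy-options policy-statement ExportHostRoutes term adv-v4-hosts from route-filter 0.0.0.0/0 prefix-length-range /32-/32
      set policy-options policy-statement ExportHostRoutes term adv-v4-hosts then accept
      set policy-options policy-statement ExportHostRoutes term adv-v6-hosts from family inet6
      set policy-options policy-statement ExportHostRoutes term adv-v6-hosts from protocol evpn
      set policy-options policy-statement ExportHostRoutes term adv-v6-hosts from route-filter 0::0/0 prefix-length-range /128-/128
      set policy-options policy-statement ExportHostRoutes term adv-v6-hosts then accept
      set policy-options policy-statement ExportHostRoutes term adv-direct from protocol direct
      set policy-options policy-statement ExportHostRoutes term adv-direct then accept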

  5. Apply the export policy named ExportHostRoutes to VRF-1.

    For example:

    Spine 1
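    For example:

      set routing-instances VRF-1 protocols evpn ip-prefix-routes export ExportHostRoutes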

  6. In this reference design, QFX5120-32C switches act as spine devices. For these switches and all other QFX5XXX switches that act as spine devices in a centrally routed bridging overlay, you must perform the following additional configuration to properly implement EVPN pure type-5 routing.

    Spines 1 through 4
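    The exact statements depend on the platform and Junos OS release. The following sketch shows the vxlan-routing settings commonly used for pure type-5 routing on QFX5XXX switches, with example scaling values that you should verify against your platform documentation:

      # Example values; verify against your platform documentation.
      set forwarding-options vxlan-routing next-hop 32768
      set forwarding-options vxlan-routing interface-num 8192
      set forwarding-options vxlan-routing overlay-ecmp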

How to Verify the Advertisement of EVPN Type-5 Routes on the Routing Devices in the PODs

To verify that the spine devices in this reference design are properly advertising EVPN type-5 routes:

  1. View the VRF route table to verify that the end system routes and spine device routes are being exchanged.

    The following example checks IPv4 routes only.

    Spines 1 and 3
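    For example (the hostname shown is hypothetical):

      user@spine-1> show route table VRF-1.inet.0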

  2. Verify that EVPN type-5 IPv4 and IPv6 routes are exported and imported into the VRF-1 routing instance.

    Spines 1 and 3
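    For example:

      user@spine-1> show evpn ip-prefix-database l3-context VRF-1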

  3. Verify the EVPN type-5 route encapsulation details for specific prefixes, as shown below.

    Spines 1 and 2
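    For example, where <prefix> is a host route or prefix learned in VRF-1:

      user@spine-1> show route table VRF-1.inet.0 <prefix> extensive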