IP Fabric Underlay Network Design and Implementation

For an overview of the supported IP fabric underlay models and components used in these designs, see the IP Fabric Underlay Network section in Data Center Fabric Blueprint Architecture Components.

This section explains how to configure spine and leaf devices in 3-stage and 5-stage IPv4 fabric underlays. For information about how to configure the additional tier of super spine devices in a 5-stage IP fabric underlay, see Five-Stage IP Fabric Design and Implementation. For the steps to configure an IPv6 Fabric design in reference architectures that support that configuration, see IPv6 Fabric Underlay and Overlay Network Design and Implementation with EBGP instead.

The IP underlay network building block is arranged in a Clos-based fabric topology. The underlay network uses EBGP as the routing protocol in place of a traditional IGP like OSPF. You can use other routing protocols in the underlay network of your data center; the configuration of those protocols is beyond the scope of this document.

Aggregated Ethernet interfaces with MicroBFD are also used in this building block. MicroBFD improves fault detection in an aggregated Ethernet interface by running BFD on individual links of the aggregated Ethernet interface.

Figure 1 and Figure 2 provide high-level illustrations of 3-stage and 5-stage IP fabric underlay networks, respectively.

Figure 1: Three-Stage IP Fabric Underlay Network
Figure 2: Five-Stage IP Fabric Underlay Network

Configuring the Aggregated Ethernet Interfaces Connecting Spine Devices to Leaf Devices

In this design, each spine device is connected to each leaf device with either a single link or a two-member aggregated Ethernet interface. The decision to use a single link or an aggregated Ethernet interface largely depends on the needs of your network; see Data Center Fabric Reference Design Overview and Validated Topology for more information on interface requirements.

The majority of IP Fabric topologies do not use aggregated Ethernet interfaces to interconnect spine and leaf devices. You can skip this section if you are connecting your spine and leaf devices using single links.

Use the following instructions to configure the interfaces that interconnect spine and leaf devices as aggregated Ethernet interfaces with two member links. An IPv4 address is assigned to each aggregated Ethernet interface. LACP with a fast periodic interval is also enabled.

Figure 3 shows the spine device interfaces that are configured in this procedure:

Figure 3: Spine 1 Interfaces

Figure 4 shows the leaf device interfaces that are configured in this procedure:

Figure 4: Leaf 1 Interfaces

To configure aggregated Ethernet interfaces with fast LACP:

  1. Set the maximum number of aggregated Ethernet interfaces permitted on the device.

    We recommend setting this number to the exact number of aggregated Ethernet interfaces on your device, including aggregated Ethernet interfaces that are not used for spine to leaf device connections.

    In this example, the aggregated Ethernet device count value is set at 10 for a leaf device and 100 for a spine device.
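
    The commands below show this setting; adjust the device count to match your own devices.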

    Leaf Device:
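      set chassis aggregated-devices ethernet device-count 10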

    Spine Device:
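      set chassis aggregated-devices ethernet device-count 100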

  2. Create and name the aggregated Ethernet interfaces, and optionally assign a description to each interface.

    This step shows how to create three aggregated Ethernet interfaces on Spine 1 and four aggregated Ethernet interfaces on Leaf 1.

    Repeat this procedure for every aggregated Ethernet interface connecting a spine device to a leaf device.
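
    The examples below are representative only; the aggregated Ethernet interface numbers and descriptions are placeholders that you should adapt to your topology.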

    Spine 1:
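      set interfaces ae1 description "to Leaf 1"
      set interfaces ae2 description "to Leaf 2"
      set interfaces ae3 description "to Leaf 3"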

    Leaf 1:
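      set interfaces ae1 description "to Spine 1"
      set interfaces ae2 description "to Spine 2"
      set interfaces ae3 description "to Spine 3"
      set interfaces ae4 description "to Spine 4"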

  3. Assign interfaces to each aggregated Ethernet interface on your device.
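
    In the following sketch, the et- member interface names are placeholders; substitute the physical interfaces that connect each pair of devices in your fabric.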

    Spine 1:
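      set interfaces et-0/0/1 ether-options 802.3ad ae1
      set interfaces et-0/0/2 ether-options 802.3ad ae1
      set interfaces et-0/0/3 ether-options 802.3ad ae2
      set interfaces et-0/0/4 ether-options 802.3ad ae2
      set interfaces et-0/0/5 ether-options 802.3ad ae3
      set interfaces et-0/0/6 ether-options 802.3ad ae3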

    Leaf 1:
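      set interfaces et-0/0/1 ether-options 802.3ad ae1
      set interfaces et-0/0/2 ether-options 802.3ad ae1
      set interfaces et-0/0/3 ether-options 802.3ad ae2
      set interfaces et-0/0/4 ether-options 802.3ad ae2
      set interfaces et-0/0/5 ether-options 802.3ad ae3
      set interfaces et-0/0/6 ether-options 802.3ad ae3
      set interfaces et-0/0/7 ether-options 802.3ad ae4
      set interfaces et-0/0/8 ether-options 802.3ad ae4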

  4. Assign an IP address to each aggregated Ethernet interface on the device.
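
    The /31 point-to-point addresses shown below are placeholders for the addressing plan in your fabric. Each spine-to-leaf link uses its own subnet, with one address assigned to the spine end and the other to the leaf end.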

    Spine 1:
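      set interfaces ae1 unit 0 family inet address 172.16.1.0/31
      set interfaces ae2 unit 0 family inet address 172.16.2.0/31
      set interfaces ae3 unit 0 family inet address 172.16.3.0/31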

    Leaf 1:
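      set interfaces ae1 unit 0 family inet address 172.16.1.1/31
      set interfaces ae2 unit 0 family inet address 172.16.4.1/31
      set interfaces ae3 unit 0 family inet address 172.16.5.1/31
      set interfaces ae4 unit 0 family inet address 172.16.6.1/31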

  5. Enable fast LACP on every aggregated Ethernet interface on the device.

    LACP is enabled using the fast periodic interval, which configures LACP to send a packet every second.
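
    The sketch below enables LACP in active mode with the fast periodic interval on the aggregated Ethernet interfaces from the earlier examples. (Active mode is an assumption here; at least one end of each bundle must run LACP in active mode.)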

    Spine 1:
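      set interfaces ae1 aggregated-ether-options lacp active
      set interfaces ae1 aggregated-ether-options lacp periodic fast
      set interfaces ae2 aggregated-ether-options lacp active
      set interfaces ae2 aggregated-ether-options lacp periodic fast
      set interfaces ae3 aggregated-ether-options lacp active
      set interfaces ae3 aggregated-ether-options lacp periodic fast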

    Leaf 1:
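      set interfaces ae1 aggregated-ether-options lacp active
      set interfaces ae1 aggregated-ether-options lacp periodic fast
      set interfaces ae2 aggregated-ether-options lacp active
      set interfaces ae2 aggregated-ether-options lacp periodic fast
      set interfaces ae3 aggregated-ether-options lacp active
      set interfaces ae3 aggregated-ether-options lacp periodic fast
      set interfaces ae4 aggregated-ether-options lacp active
      set interfaces ae4 aggregated-ether-options lacp periodic fast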

  6. After the configuration is committed, confirm that the aggregated Ethernet interfaces are enabled, that the physical links are up, and that packets are being transmitted if traffic has been sent.

    To view this confirmation information for ae1 on Spine 1, issue the show interfaces ae1 command.

  7. Confirm the LACP receive state is Current and that the transmit state is Fast for each link in each aggregated Ethernet interface bundle.

    To view the LACP status for each member link of interface ae1, issue the show lacp interfaces ae1 command.

  8. Repeat this procedure for every device in your topology.

    Other sections of this guide presume that spine and leaf devices are interconnected by two-member aggregated Ethernet interfaces. When you are configuring or monitoring a single link instead of an aggregated Ethernet link, substitute the physical interface name of the single link in place of the aggregated Ethernet interface name.

Enabling EBGP as the Routing Protocol in the Underlay Network

In this design, EBGP is the routing protocol of the underlay network and each device in the IP fabric is assigned a unique 32-bit autonomous system number (ASN). The underlay routing configuration ensures that all devices in the underlay IP fabric are reliably reachable from one another. Reachability between VTEPs across the underlay IP fabric is also required to support overlay networking with VXLAN.

Figure 5 shows the EBGP configuration of the underlay network.

Figure 5: EBGP Underlay Network Overview

To enable EBGP as the routing protocol for the underlay network on a device:

  1. Create and name the BGP peer group. EBGP is enabled as part of this step.

    The underlay EBGP group is named UNDERLAY in this design.
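
    The following command creates the group and sets its type to external, which enables EBGP for the group.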

    Spine or Leaf Device:
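      set protocols bgp group UNDERLAY type external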

  2. Configure the ASN for each device in the underlay.

    Recall that in this design, every device is assigned a unique ASN in the underlay network. The ASN for EBGP in the underlay network is configured at the BGP peer group level using the local-as statement because the system ASN setting is used for MP-IBGP signaling in the overlay network.

    The examples below show how to configure the ASN for EBGP for Spine 1 and Leaf 1.
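
    The 4-byte ASN values shown below are placeholders from the private ASN range; substitute the unique ASNs from your own numbering plan.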

    Spine 1:
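      set protocols bgp group UNDERLAY local-as 4200000001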

    Leaf 1:
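      set protocols bgp group UNDERLAY local-as 4200000011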

  3. Configure BGP peers by specifying the ASN of each BGP peer in the underlay network on each spine and leaf device.

    In this design, for a spine device, every leaf device is a BGP peer, and for a leaf device, every spine device is a BGP peer.

    The example below demonstrates how to configure the peer ASN in this design.
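
    In the following sketch, the neighbor addresses are the aggregated Ethernet interface addresses of the peer devices from the earlier examples, and the peer ASNs continue the placeholder numbering used above.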

    Spine 1:
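      set protocols bgp group UNDERLAY neighbor 172.16.1.1 peer-as 4200000011
      set protocols bgp group UNDERLAY neighbor 172.16.2.1 peer-as 4200000012
      set protocols bgp group UNDERLAY neighbor 172.16.3.1 peer-as 4200000013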

    Leaf 1:
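      set protocols bgp group UNDERLAY neighbor 172.16.1.0 peer-as 4200000001
      set protocols bgp group UNDERLAY neighbor 172.16.4.0 peer-as 4200000002
      set protocols bgp group UNDERLAY neighbor 172.16.5.0 peer-as 4200000003
      set protocols bgp group UNDERLAY neighbor 172.16.6.0 peer-as 4200000004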

  4. Set the BGP hold time. The BGP hold time is the length of time, in seconds, that a peer waits for a BGP message—typically a keepalive, update, or notification message—before closing a BGP connection.

    A shorter BGP hold time guards against sessions staying open for unnecessarily long periods when problems occur, such as keepalives not being sent. A longer BGP hold time helps keep BGP sessions up through transient problems.

    The BGP hold time is set to 10 seconds for every device in this design.
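
    The hold time is applied to the UNDERLAY peer group.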

    Spine or Leaf Device:
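      set protocols bgp group UNDERLAY hold-time 10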

  5. Configure EBGP to signal the unicast address family for the underlay BGP peer group.
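
    The IPv4 unicast family is signaled at the peer group level.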

    Spine or Leaf Device:
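      set protocols bgp group UNDERLAY family inet unicast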

  6. Configure an export routing policy that advertises the IP address of the loopback interface to EBGP peering devices.

    This export routing policy is used to make the IP address of the loopback interface reachable from all devices in the IP Fabric. Loopback IP address reachability is required to allow leaf and spine device peering using MP-IBGP in the overlay network. IBGP peering in the overlay network must be established to allow devices in the fabric to share EVPN routes. See Configure IBGP for the Overlay.

    The route filter IP address in this step—192.168.1.10—is the loopback address of the leaf device.
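
    In the sketch below, the policy and term names are placeholders; the route filter matches the leaf device loopback address, and the policy is applied as an export policy to the UNDERLAY peer group.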

    Leaf 1:
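      set policy-options policy-statement underlay-clos-export term loopback from route-filter 192.168.1.10/32 exact
      set policy-options policy-statement underlay-clos-export term loopback then accept
      set protocols bgp group UNDERLAY export underlay-clos-export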

  7. After committing the configuration, enter the show bgp summary command on each device to confirm that the BGP state is established and that traffic paths are active.

    Issue the show bgp summary command on Spine 1 to verify EBGP status.

  8. Repeat this procedure for every spine and leaf device in your topology.

Enabling Load Balancing

ECMP load balancing allows traffic to be sent to the same destination over multiple equal cost paths. Load balancing must be enabled on all spine and leaf devices to ensure that traffic is sent over all available paths provided by the IP Fabric.

Traffic is load balanced per Layer 4 flow on Junos devices. The ECMP algorithm hashes each traffic flow to one of the available equal-cost paths, and all traffic for that flow is transmitted over the selected link.

To enable ECMP-based load balancing on a device:

  1. Enable multipath with the multiple AS option in BGP on all devices in the IP Fabric.

    EBGP, by default, selects one best path for each prefix and installs that route in the forwarding table. When BGP multipath is enabled, all equal-cost paths to a given destination are installed into the forwarding table. The multiple-as option enables load balancing between EBGP neighbors in different autonomous systems.
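
    Multipath with the multiple-as option is applied to the UNDERLAY peer group.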

    All Spine and Leaf Devices:
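      set protocols bgp group UNDERLAY multipath multiple-as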

  2. Create a policy statement that enables per-packet load balancing.
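
    The policy name PFE-LB used below is a placeholder; any name can be used as long as the same policy is exported to the forwarding table in the next step.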

    All Spine and Leaf Devices:
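      set policy-options policy-statement PFE-LB then load-balance per-packet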

  3. Export the policy statement to the forwarding table.
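
    Reference the same load-balancing policy when exporting it to the forwarding table.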

    All Spine and Leaf Devices:
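      set routing-options forwarding-table export PFE-LB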

IP Fabric Underlay Network — Release History

Table 1 provides a history of all of the features in this section and their support within this reference design.

Table 1: IP Fabric Underlay Network Release History

Release 19.1R2

QFX10002-60C and QFX5120-32C switches running Junos OS Release 19.1R2 and later releases in the same release train also support all features documented in this section except the following:

  • MicroBFD, which is supported on QFX10002-36Q/72Q, QFX10008, and QFX10016 switches only.

Release 18.4R2

QFX5120-48Y switches running Junos OS Release 18.4R2 and later releases in the same release train support all features documented in this section except MicroBFD.

Release 18.1R3-S3

QFX5110 switches running Junos OS Release 18.1R3-S3 and later releases in the same release train support all features documented in this section except MicroBFD.

Release 17.3R3-S1

All devices in the reference design that support Junos OS Release 17.3R3-S1 and later releases in the same release train also support all features documented in this section. The following is an exception:

  • MicroBFD, which is supported on QFX10002-36Q/72Q, QFX10008, and QFX10016 switches only.