Configure IBGP for the Overlay

For a control-plane-driven overlay, there must be a signaling path between the VXLAN virtual tunnel endpoint (VTEP) devices. In this reference design with an IPv4 Fabric underlay, all overlay types use IBGP with Multiprotocol BGP (MP-IBGP) to maintain the signaling path between the VTEPs within an autonomous system. The spine devices act as a route reflector cluster, and the leaf devices are route reflector clients, as shown in Figure 1.

Figure 1: IBGP Route Reflector Cluster

To configure an EVPN-VXLAN data center fabric architecture with an IPv6 Fabric, see IPv6 Fabric Underlay and Overlay Network Design and Implementation with EBGP instead of this procedure. In an IPv6 Fabric configuration, we use EBGP and IPv6 for underlay connectivity, and likewise use EBGP and IPv6 for peering and EVPN signaling in the overlay. With an IPv6 Fabric, the VTEPs encapsulate the VXLAN packets with an IPv6 outer header and tunnel the packets using IPv6. You can use either an IPv4 Fabric or an IPv6 Fabric in your data center architecture. You can’t mix IPv4 Fabric and IPv6 Fabric elements in the same architecture.

To configure IBGP for the overlay peering in an IPv4 Fabric, perform the following:

  1. Configure an AS number for overlay IBGP. All leaf and spine devices participating in the overlay use the same AS number. In this example, the AS number is private AS 4210000001.

    Spine and Leaf Devices:
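    The statement below is a representative sketch of this step. It uses the standard Junos routing-options syntax and the example AS number from this design, and is applied identically on every spine and leaf device.

      set routing-options autonomous-system 4210000001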

  2. Configure IBGP using EVPN signaling on each spine device to peer with every leaf device (Leaf 1 through Leaf 96). Also, form the route reflector cluster (cluster ID 192.168.0.10) and configure equal-cost multipath (ECMP) for BGP. The configuration included here is for Spine 1, as shown in Figure 2.
    Figure 2: IBGP – Spine Device
    Tip:

    By default, BGP selects only one best path when there are multiple, equal-cost BGP paths to a destination. When you enable BGP multipath by including the multipath statement at the [edit protocols bgp group group-name] hierarchy level, the device installs all of the equal-cost BGP paths into the forwarding table. This feature helps load balance the traffic across multiple paths.

    Spine 1:
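    The following is a minimal sketch of the spine-side overlay group. The group name overlay-rr-clients, the Spine 1 loopback address 192.168.0.1, and the leaf loopback addresses 192.168.1.x are placeholders for illustration; the cluster ID and AS number come from this example. Repeat the neighbor statement for each leaf device (Leaf 1 through Leaf 96).

      # Group name and loopback addresses are placeholders; substitute your own
      set protocols bgp group overlay-rr-clients type internal
      set protocols bgp group overlay-rr-clients local-address 192.168.0.1
      set protocols bgp group overlay-rr-clients family evpn signaling
      set protocols bgp group overlay-rr-clients cluster 192.168.0.10
      set protocols bgp group overlay-rr-clients multipath
      set protocols bgp group overlay-rr-clients neighbor 192.168.1.1
      set protocols bgp group overlay-rr-clients neighbor 192.168.1.2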

  3. Configure IBGP on the spine devices to peer with all the other spine devices acting as route reflectors. This step completes the full mesh peering topology required to form a route reflector cluster.

    Spine 1:
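    A minimal sketch of the spine-to-spine peering, assuming four spine devices, placeholder spine loopback addresses 192.168.0.1 through 192.168.0.4, and the placeholder group name overlay-rr-mesh. Adjust the neighbor list to match the number of spine devices in your fabric; the cluster statement is not applied to this group because the spine devices peer with each other as non-clients.

      # Spine loopback addresses and group name are placeholders
      set protocols bgp group overlay-rr-mesh type internal
      set protocols bgp group overlay-rr-mesh local-address 192.168.0.1
      set protocols bgp group overlay-rr-mesh family evpn signaling
      set protocols bgp group overlay-rr-mesh multipath
      set protocols bgp group overlay-rr-mesh neighbor 192.168.0.2
      set protocols bgp group overlay-rr-mesh neighbor 192.168.0.3
      set protocols bgp group overlay-rr-mesh neighbor 192.168.0.4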

  4. Configure BFD on all BGP groups on the spine devices to enable rapid detection of failures and reconvergence.

    Spine 1:
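    A sketch that applies BFD to both of the placeholder BGP groups defined in the previous steps. The 350 ms minimum interval matches the value referenced in the note in step 6, and the multiplier of 3 is an assumed, commonly used value.

      # Timer values are illustrative; tune them for your environment
      set protocols bgp group overlay-rr-clients bfd-liveness-detection minimum-interval 350
      set protocols bgp group overlay-rr-clients bfd-liveness-detection multiplier 3
      set protocols bgp group overlay-rr-clients bfd-liveness-detection session-mode automatic
      set protocols bgp group overlay-rr-mesh bfd-liveness-detection minimum-interval 350
      set protocols bgp group overlay-rr-mesh bfd-liveness-detection multiplier 3
      set protocols bgp group overlay-rr-mesh bfd-liveness-detection session-mode automatic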

  5. Configure IBGP with EVPN signaling from each leaf device (route reflector client) to each spine device in the route reflector cluster. The configuration included here is for Leaf 1, as shown in Figure 3.
    Figure 3: IBGP – Leaf Device

    Leaf 1:
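    A minimal sketch for Leaf 1, assuming the placeholder group name overlay-bgp, a Leaf 1 loopback address of 192.168.1.1, and four spine loopback addresses 192.168.0.1 through 192.168.0.4. No cluster statement is configured because the leaf device is a route reflector client.

      # Group name and loopback addresses are placeholders; substitute your own
      set protocols bgp group overlay-bgp type internal
      set protocols bgp group overlay-bgp local-address 192.168.1.1
      set protocols bgp group overlay-bgp family evpn signaling
      set protocols bgp group overlay-bgp multipath
      set protocols bgp group overlay-bgp neighbor 192.168.0.1
      set protocols bgp group overlay-bgp neighbor 192.168.0.2
      set protocols bgp group overlay-bgp neighbor 192.168.0.3
      set protocols bgp group overlay-bgp neighbor 192.168.0.4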

  6. Configure BFD on the leaf devices to enable rapid detection of failures and reconvergence.
    Note:

    QFX5100 switches support BFD liveness detection minimum intervals of 1 second or longer only. The configuration here uses a minimum interval of 350 ms, which is supported on devices other than QFX5100 switches.

    Leaf 1:
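    A sketch using the placeholder group name from step 5. Per the note above, use a minimum interval of 1000 ms or longer on QFX5100 switches.

      # On QFX5100 switches, set minimum-interval to 1000 or higher
      set protocols bgp group overlay-bgp bfd-liveness-detection minimum-interval 350
      set protocols bgp group overlay-bgp bfd-liveness-detection multiplier 3
      set protocols bgp group overlay-bgp bfd-liveness-detection session-mode automatic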

  7. Verify that IBGP is operational on the spine devices (example verification commands follow this list).
  8. Verify that BFD is operational on the spine devices.
  9. Verify that IBGP is operational on the leaf devices.
  10. Verify that BFD is operational on the leaf devices.
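
For steps 7 through 10, the standard Junos operational commands below can be used on both the spine and leaf devices. Confirm that each overlay BGP peering is established and that each BFD session is reported as Up; the exact output varies by platform and fabric scale.

      show bgp summary
      show bfd session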