Example: Configuring an EVPN-VXLAN Edge-Routed Bridging Fabric with an Anycast Gateway

Ethernet VPN (EVPN) is a BGP-based control plane technology that enables hosts (physical servers and virtual machines) to be placed anywhere in a network and remain connected to the same logical Layer 2 (L2) overlay network. Virtual Extensible LAN (VXLAN) is a tunneling protocol that creates the data plane for the L2 overlay network.

The physical underlay network over which EVPN-VXLAN is commonly deployed is a two-layer IP fabric that includes spine and leaf devices, as shown in Figure 1. A two-layer spine-and-leaf fabric is also referred to as a 3-stage Clos fabric.

This example details how to deploy an edge-routed bridging (ERB) architecture using a 3-stage Clos fabric. In this design, the spine devices (such as QFX10000 switches) provide only IP connectivity between the leaf devices. In this capacity we call the spine devices lean spines, as they require no VXLAN functionality. The leaf devices (such as QFX5100 switches) provide connectivity to attached workloads. In the ERB case, the leaf devices provide L2 and Layer 3 (L3) VXLAN functionality in the overlay network. L2 gateways provide bridging within the same VLAN. An L3 gateway handles traffic between VLANs (inter-VLAN), using integrated routing and bridging (IRB) interfaces.

In this example, the IRB interfaces are configured with an anycast IP address. For an ERB example that uses a virtual gateway address (VGA) instead, see Example: Configuring an EVPN-VXLAN Edge-Routed Bridging Fabric With a Virtual Gateway.

Note:

We also call the ERB architecture a "collapsed" fabric because, compared with a centrally-routed bridging (CRB) design, the L2 and L3 VXLAN gateway functions collapse into a single layer of the fabric (the leaf devices).

For background on EVPN-VXLAN technology and supported architectures, see EVPN Primer.

For an example of how to configure an EVPN-VXLAN CRB overlay, see Example: Configure an EVPN-VXLAN Centrally-Routed Bridging Fabric.

Figure 1: A 3-Stage (Leaf-and-Spine) Edge-Routed Bridging Architecture

This example describes how to configure an EVPN-VXLAN ERB overlay. In this design, you configure routing instances and IRB interfaces on the leaf devices only.

Requirements

This example uses the following hardware and software components:

  • Two devices that function as transit spine devices.

  • Four devices running Junos OS Release 15.1X53-D60 or later software that serve as leaf devices and provide both L2 and L3 gateway functionality.

    • Updated and re-validated using QFX10002 switches running Junos OS Release 21.3R1.

  • See the hardware summary for a list of supported platforms.

Overview and Topology

The ERB overlay shown in Figure 2 includes two transit spine devices and four leaf devices that function as both L2 and L3 gateways. Four servers are attached to the leaf devices. Server A is connected to Leaf1 and Leaf2 through a link aggregation group (LAG) interface. On both leaf devices, the interface is assigned the same Ethernet segment identifier (ESI) and set to multihoming all-active mode.

Figure 2: ERB Overlay within a Data Center

In this topology, Server A and Server C are in VLAN 101, Server B is in VLAN 102, and Server D is in VLAN 103. For communication between VLANs to occur, you must configure IRB interfaces for each VLAN on all leaf devices.

The most significant difference between ERB and CRB is the configuration and location of the L3 gateway. Therefore, this example focuses on the EVPN-VXLAN configuration of the leaf devices, and in particular on their L3 gateway configuration.

For an ERB overlay, you can configure the IRB interfaces within an EVPN instance (EVI) using one of the following methods:

  • Method 1—This method uses a unique IP address but the same MAC address for each IRB interface. With this method, a single gateway MAC entry is installed on both the leaf devices and the servers. For each IRB interface on a particular leaf device, for example, Leaf1, you specify the following:

    • A unique IP address for each IRB interface.

    • The same MAC address for each IRB interface.

    For example:

    Table 1: Unique IP Address with Same MAC per IRB Interface

    IRB Interface    IP Address         MAC Address
    irb.101          10.1.101.254/24    00:00:5e:00:53:01
    irb.102          10.1.102.254/24    00:00:5e:00:53:01
    irb.103          10.1.103.254/24    00:00:5e:00:53:01

  • Method 2—This method uses a unique IP address and a unique MAC address for each IRB interface. With this method, a MAC entry is installed for each IRB address on the leaf devices, but only a single MAC entry is installed on the servers. For each IRB interface on Leaf1, you specify the following:

    • A unique IP address for each IRB interface.

    • A unique MAC address for each IRB interface.

    For example:

    Table 2: Unique IP Address and MAC per IRB Interface

    IRB Interface    IP Address         MAC Address
    irb.101          10.1.101.254/24    00:00:5e:00:53:01
    irb.102          10.1.102.254/24    00:00:5e:00:53:02
    irb.103          10.1.103.254/24    00:00:5e:00:53:03

  • Method 3—This method uses a unique IP address and a VGA for each IRB interface. With this method, a MAC entry is installed for each IRB address and for the VGA on both the leaf devices and the servers. For each IRB interface on Leaf1, you specify the following:

    • A unique IP address for each IRB interface.

    • A unique VGA for each IRB interface.

    For example:

    Table 3: Unique IP Address and Virtual Gateway Address per IRB Interface

    IRB Interface    IP Address        VGA
    irb.101          10.1.101.1/24     10.1.101.254
    irb.102          10.1.102.1/24     10.1.102.254
    irb.103          10.1.103.1/24     10.1.103.254

For methods 1 and 2, you apply the same IRB interface configuration across all leaf devices. For method 3, you apply a unique IRB interface address on each leaf device and the same VGA across all leaf devices. This example uses method 1 to configure the IRB interfaces.

This example (with method 1) configures the same MAC address for each IRB interface on each leaf device. Each host uses the same MAC address when sending inter-VLAN traffic regardless of where the host is located or which leaf device receives the traffic. For example, in the topology shown in Figure 2, multi-homed Server A in VLAN 101 sends a packet to Server B in VLAN 102. If Leaf1 is down, Leaf2 continues to forward the inter-VLAN traffic even without the configuration of a redundant default gateway MAC address.

Note:

The IRB interface configuration in this example doesn't include a virtual gateway address (VGA) and a corresponding virtual MAC (V-MAC) address, which would otherwise establish the redundant default gateway functionality mentioned above. Because you configure the same MAC address for each IRB interface on each leaf device, hosts use the local leaf device configured with the common MAC address as the default L3 gateway.

This approach eliminates the need to advertise a redundant default gateway and to dynamically synchronize the MAC addresses of the redundant default gateway throughout the EVPN control plane. As a result, when configuring each leaf device, you must disable the advertisement of the redundant default gateway by including the default-gateway do-not-advertise configuration statement at the [edit protocols evpn] hierarchy level.
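
In set-command form, that statement looks like this on each leaf device:

    set protocols evpn default-gateway do-not-advertise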

Also, although the IRB interface configuration in this example does not include a VGA, you can configure one if your ERB overlay requires it. If you configure a VGA for each IRB interface, you specify the same VGA on each leaf device instead of configuring the same MAC address for each IRB interface on each leaf device, as this example shows.

For the replication of broadcast, unknown unicast, and multicast (BUM) traffic, note that the configuration on Leaf1 includes the set protocols evpn multicast-mode ingress-replication command. This command causes Leaf1, which is a hardware VTEP, to replicate and send BUM traffic itself instead of relying on a multicast-enabled underlay.

Configuration For Leaf1

CLI Quick Configuration

To quickly configure this example, copy the following commands and paste them into a text file. Remove any line breaks, and change any details necessary to match your network configuration. Then copy and paste the commands into the CLI at the [edit] hierarchy level.

Leaf1

Configuring EVPN-VXLAN on Leaf1

Step-by-Step Procedure

  1. Configure the underlay. In this example, we use EBGP as the underlay routing protocol.
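
    The full underlay configuration is not reproduced here; the following is a minimal sketch of its shape on Leaf1. The interface names, addresses, and AS numbers are placeholders rather than values taken from this example's topology:

    set interfaces xe-0/0/0 unit 0 family inet address 172.16.1.1/30
    set interfaces xe-0/0/1 unit 0 family inet address 172.16.1.5/30
    set interfaces lo0 unit 0 family inet address 192.168.1.1/32
    set routing-options router-id 192.168.1.1
    set policy-options policy-statement send-direct term 1 from protocol direct
    set policy-options policy-statement send-direct term 1 then accept
    set protocols bgp group underlay type external
    set protocols bgp group underlay export send-direct
    set protocols bgp group underlay local-as 65001
    set protocols bgp group underlay neighbor 172.16.1.2 peer-as 65201
    set protocols bgp group underlay neighbor 172.16.1.6 peer-as 65202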

  2. Multihome Server A to Leaf1 and Leaf2 by configuring an aggregated Ethernet (AE) interface, specifying an ESI for the interface, and setting the mode so that the connections to both leaf devices are active. We show the applied VLAN configuration in a later step.

    Note:

    When configuring the AE interface on Leaf2, you must specify the same ESI (00:01:01:01:01:01:01:01:01:01) as the ESI for the same interface on Leaf1.
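
    The following is a sketch of the Leaf1 side using the ESI above; the member interface name and LACP system ID are placeholders:

    set chassis aggregated-devices ethernet device-count 1
    set interfaces xe-0/0/10 ether-options 802.3ad ae0
    set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
    set interfaces ae0 esi all-active
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:00:00:01
    set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk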

  3. Configure the IRB interfaces, each with a unique IP address and the same MAC address.

    Note:

    Each leaf device should have the same IRB interface configuration.
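
    Using the method 1 addressing from Table 1, the IRB configuration on each leaf device looks like this:

    set interfaces irb unit 101 family inet address 10.1.101.254/24
    set interfaces irb unit 101 mac 00:00:5e:00:53:01
    set interfaces irb unit 102 family inet address 10.1.102.254/24
    set interfaces irb unit 102 mac 00:00:5e:00:53:01
    set interfaces irb unit 103 family inet address 10.1.103.254/24
    set interfaces irb unit 103 mac 00:00:5e:00:53:01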

  4. Set up the EBGP-based overlay configuration. Make sure to include the multihop configuration option because we use loopback peering.

    Note:

    Some IP fabrics use an IBGP-based EVPN-VXLAN overlay. For an example of an IP fabric that uses IBGP for the overlay, see Example: Configure an EVPN-VXLAN Centrally-Routed Bridging Fabric. Choosing EBGP or IBGP for the overlay does not affect the fabric architecture; both CRB and ERB designs support either type of overlay.
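
    A sketch of the loopback-peered EBGP overlay on Leaf1; the loopback addresses and AS numbers are placeholders:

    set protocols bgp group overlay type external
    set protocols bgp group overlay multihop ttl 2
    set protocols bgp group overlay multihop no-nexthop-change
    set protocols bgp group overlay local-address 192.168.1.1
    set protocols bgp group overlay family evpn signaling
    set protocols bgp group overlay local-as 65001
    set protocols bgp group overlay neighbor 192.168.2.1 peer-as 65201
    set protocols bgp group overlay neighbor 192.168.2.2 peer-as 65202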

  5. Set up the EVPN-VXLAN domain. This entails specifying which VNIs are included in the domain, configuring Leaf1 (a hardware VTEP) to handle the replication and sending of BUM traffic, disabling the advertisement of the redundant default gateway throughout the EVPN control plane, and specifying a route target for each VNI.
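
    A sketch, assuming the VNIs simply match the VLAN IDs (101 through 103) and using placeholder route targets:

    set protocols evpn encapsulation vxlan
    set protocols evpn multicast-mode ingress-replication
    set protocols evpn default-gateway do-not-advertise
    set protocols evpn extended-vni-list 101
    set protocols evpn extended-vni-list 102
    set protocols evpn extended-vni-list 103
    set protocols evpn vni-options vni 101 vrf-target target:65000:101
    set protocols evpn vni-options vni 102 vrf-target target:65000:102
    set protocols evpn vni-options vni 103 vrf-target target:65000:103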

  6. Set up an EVPN routing instance.
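
    The instance configuration is not reproduced here. One plausible shape is a tenant L3 VRF that holds the IRB interfaces; the instance name, route distinguisher, and route target below are placeholders:

    set routing-instances VRF-1 instance-type vrf
    set routing-instances VRF-1 interface irb.101
    set routing-instances VRF-1 interface irb.102
    set routing-instances VRF-1 interface irb.103
    set routing-instances VRF-1 route-distinguisher 192.168.1.1:100
    set routing-instances VRF-1 vrf-target target:65000:100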

  7. Configure the switch options to use loopback interface lo0.0 as the source interface of the VTEP, set a route distinguisher, and set the vrf target.
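
    For example, with a placeholder route distinguisher and route target:

    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.1.1:1
    set switch-options vrf-target target:65000:1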

  8. Configure VLANs associated with IRB interfaces and VXLAN VNIs.
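
    A sketch that maps each VLAN to its IRB interface and VNI, and adds the VLAN membership for the server-facing AE interface from step 2. The VLAN names are placeholders, and the VNIs again assume a one-to-one mapping to the VLAN IDs:

    set vlans v101 vlan-id 101
    set vlans v101 l3-interface irb.101
    set vlans v101 vxlan vni 101
    set vlans v102 vlan-id 102
    set vlans v102 l3-interface irb.102
    set vlans v102 vxlan vni 102
    set vlans v103 vlan-id 103
    set vlans v103 l3-interface irb.103
    set vlans v103 vxlan vni 103
    set interfaces ae0 unit 0 family ethernet-switching vlan members v101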

Verification

This section describes the following verifications for this example:

Verifying BGP

Purpose

Verify that the spine devices have established BGP session connectivity.

Action

Display the BGP summary:
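
For example, from the Junos CLI on Leaf1 (the output itself is not reproduced here):

    show bgp summary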

Meaning

Both underlay and overlay BGP sessions are established with the spine devices.

Verifying the ESI

Purpose

Verify the status of the ESI.

Action

Display the status of the ESI:
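
For example, the following command displays EVPN instance details, including ESI status and the designated forwarder election (output not reproduced here):

    show evpn instance extensive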

Meaning

The ESI is up; Leaf2 is the remote provider edge (PE) device and the designated forwarder.

Verifying the EVPN Database

Purpose

Verify the MAC addresses in the EVPN database.

Action

Verify the MAC addresses in the EVPN database for VLAN 101.
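
For example, assuming VNI 101 maps to VLAN 101 as in the configuration sketches above, you can filter the EVPN database by L2 domain (output not reproduced here):

    show evpn database l2-domain-id 101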

Meaning

The MAC and IP addresses for Server A are shown with an active source of the ESI, and the MAC and IP addresses for Server C are shown with an active source from Leaf3.

Verifying Connectivity

Purpose

Verify that the servers can ping each other.

Action

Ping from Server A to the other servers.
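
For example, from a Linux shell on Server A, using hypothetical server addresses in the VLAN subnets shown in Figure 2:

    ping -c 5 10.1.102.10    # Server B (VLAN 102)
    ping -c 5 10.1.101.20    # Server C (VLAN 101)
    ping -c 5 10.1.103.10    # Server D (VLAN 103)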

Meaning

End-to-end connectivity is working.

Quick Configuration For All Devices

CLI Quick Configuration

To quickly configure this example, copy the following commands and paste them into a text file. Remove any line breaks, and change any details necessary to match your network configuration. Then copy and paste the commands into the CLI at the [edit] hierarchy level.

Leaf2

Leaf3

Leaf4

Spine 1

Spine 2