Example: Configuring an EVPN-VXLAN Edge-Routed Bridging Fabric With a Virtual Gateway

Ethernet VPN (EVPN) is a control plane technology that enables hosts (physical [bare-metal] servers and virtual machines [VMs]) to be placed anywhere in a network and remain connected to the same logical Layer 2 (L2) overlay network. Virtual Extensible LAN (VXLAN) is a tunneling protocol that creates the data plane for the L2 overlay network.

The physical underlay network over which EVPN-VXLAN is commonly deployed is a two-layer IP fabric, which includes spine and leaf devices as shown in Figure 1. A two-layer spine-and-leaf fabric is referred to as a 3-stage Clos fabric.

This example details how to deploy an edge-routed bridging (ERB) architecture using a 3-stage Clos fabric. In this design the spine devices—for example, QFX10000 switches—provide only IP connectivity between the leaf devices. In this capacity they are referred to as lean spines, as they require no VXLAN functionality. The leaf devices—for example, QFX5100 switches—provide connectivity to attached workloads, and in the ERB case, provide L2 and Layer 3 (L3) VXLAN functionality in the overlay network. L2 gateways provide bridging within the same VLAN, while an L3 gateway handles traffic between VLANs (inter-VLAN) through the use of integrated routing and bridging (IRB) interfaces.

In this example, we configure the IRB interfaces with a virtual gateway address (VGA). For an ERB example that uses an anycast IP address on the IRBs, see Example: Configuring an EVPN-VXLAN Edge-Routed Bridging Fabric with an Anycast Gateway.

Note:

The ERB architecture is sometimes called a "collapsed" fabric. This is because, when compared to a CRB design, the L2 and L3 VXLAN gateway functionality is collapsed into a single layer of the fabric (the leaves).

For background information on EVPN-VXLAN technology and supported architectures, see EVPN Primer.

For an example of how to configure an EVPN-VXLAN centrally-routed bridging (CRB) overlay see Example: Configure an EVPN-VXLAN Centrally-Routed Bridging Fabric.

Figure 1: A 3-Stage (Leaf and Spine) Edge-Routed Bridging Architecture

Starting with Junos OS Release 17.3R1, the QFX5110 switch can function as a leaf device that acts as both an L2 and an L3 VXLAN gateway in an EVPN-VXLAN ERB overlay.

This topic provides a sample configuration of a QFX device that functions as a leaf in an ERB overlay.

Requirements

This example uses the following hardware and software components:

  • Two devices that function as transit spine devices.

  • Four devices running Junos OS Release 17.3R1 or later that serve as leaf devices and provide both L2 and L3 VXLAN gateway functionality.

    • Updated and re-validated using QFX10002 switches running Junos OS Release 21.3R1.

  • See the hardware summary for a list of supported platforms.

Overview and Topology

In this example, a service provider supports ABC Corporation, which has multiple servers. Server A and Server C communicate with each other using VLAN 101. Server B and Server D communicate using the L3 gateway. To enable this communication in the ERB overlay shown in Figure 2, you configure the key software entities in Table 1 on the switches that function as L2 and L3 VXLAN gateways, or leaf devices.

Figure 2: Sample Edge-Routed Bridging Overlay
Table 1: Layer 3 Inter-VLAN Routing Entities Configured on Leaf1, Leaf2, Leaf3, and Leaf4

  Entities         Configuration on Leaf1, Leaf2, Leaf3, and Leaf4

  VLANs            v101, v102, v103

  VRF instances    vrf101, vrf102_103

  IRB interfaces   irb.101: 10.1.101.1/24 (IRB IP address), 10.1.101.254 (virtual gateway address)
                   irb.102: 10.1.102.1/24 (IRB IP address), 10.1.102.254 (virtual gateway address)
                   irb.103: 10.1.103.1/24 (IRB IP address), 10.1.103.254 (virtual gateway address)

As outlined in Table 1, you configure VLAN v101 for Server A and Server C, VLAN v102 for Server B, and VLAN v103 for Server D on each leaf device. To segregate the L3 routes for VLANs v101, v102, and v103, you create VPN routing and forwarding (VRF) instances vrf101 and vrf102_103 on each leaf device. To route traffic between the VLANs, you configure IRB interfaces irb.101, irb.102, and irb.103. You also associate VRF instance vrf101 with IRB interface irb.101, and VRF instance vrf102_103 with IRB interfaces irb.102 and irb.103.

You configure IRB interfaces irb.101, irb.102, and irb.103 to function as default L3 gateways that handle server inter-VLAN traffic. To that end, in each IRB interface configuration, you also include a virtual gateway address (VGA), which configures an IRB interface as a default L3 gateway. In addition, this example assumes that each server is configured to use a particular default gateway. For more information about default gateways and how inter-VLAN traffic flows between a physical server to another physical server or VM in another VLAN in an ERB overlay, see Using a Default Layer 3 Gateway to Route Traffic in an EVPN-VXLAN Overlay Network.

Note:

When configuring a VGA for an IRB interface, keep in mind that the IRB IP address and VGA must be different.

As outlined in Table 1, you configure a separate VRF routing instance for VLAN v101 and VLANs v102 and v103. To enable the communication between hosts in VLANs v101, v102, and v103, this example shows how to export unicast routes from the routing table for vrf101 and import the routes into the routing table for vrf102_103 and vice versa. This feature is also known as route leaking.

Quick Configuration

CLI Quick Configuration

To quickly configure Leaf1, copy the following commands and paste them into a text file. Remove any line breaks and change any details necessary to match your network configuration. Then copy and paste the commands into the CLI at the [edit] hierarchy level.

Underlay Network Configuration

CLI Quick Configuration

To quickly configure an underlay network on Leaf1, copy the following commands and paste them into a text file. Remove any line breaks and change any details necessary to match your network configuration. Then copy and paste the commands into the CLI at the [edit] hierarchy level.

Configuring the Underlay Network

Step-by-Step Procedure

To configure an underlay network on Leaf1:

  1. Configure the interfaces connected to the spine devices and the loopback interface on Leaf1.

  2. Configure the router ID, autonomous system number, and apply the load balancing policy for Leaf1. We show the policy configuration in a later step.

  3. Configure an EBGP group that peers with both spine devices. We show the policy to advertise the loopback address for Leaf1 in a later step.

  4. Configure policies for load balancing and to advertise the loopback address of Leaf1.
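The steps above can be sketched in Junos set-command form. This is a minimal sketch only: the spine-facing interface names (et-0/0/48, et-0/0/49), the point-to-point and loopback addresses, and the AS numbers are all assumptions; substitute the values from your fabric plan.

```
# Step 1: spine-facing interfaces and loopback (addresses are assumptions)
set interfaces et-0/0/48 unit 0 family inet address 172.16.1.1/31
set interfaces et-0/0/49 unit 0 family inet address 172.16.1.3/31
set interfaces lo0 unit 0 family inet address 192.168.0.1/32

# Step 2: router ID, AS number, and load-balancing policy applied to the forwarding table
set routing-options router-id 192.168.0.1
set routing-options autonomous-system 65001
set routing-options forwarding-table export load-balance

# Step 3: EBGP underlay group peering with both spines (peer AS numbers are assumptions)
set protocols bgp group underlay type external
set protocols bgp group underlay export send-loopback
set protocols bgp group underlay multipath multiple-as
set protocols bgp group underlay neighbor 172.16.1.0 peer-as 65101
set protocols bgp group underlay neighbor 172.16.1.2 peer-as 65102

# Step 4: policies for per-flow load balancing and loopback advertisement
set policy-options policy-statement load-balance term 1 then load-balance per-packet
set policy-options policy-statement send-loopback term 1 from protocol direct
set policy-options policy-statement send-loopback term 1 from route-filter 192.168.0.1/32 exact
set policy-options policy-statement send-loopback term 1 then accept
```

The policy and group names (load-balance, send-loopback, underlay) are illustrative; any names work as long as the policy references match.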

EVPN-VXLAN Overlay Network Configuration

CLI Quick Configuration

To quickly configure an overlay network, copy the following commands and paste them into a text file. Remove any line breaks and change any details necessary to match your network configuration. Then copy and paste the commands into the CLI at the [edit] hierarchy level.

Configuring an EVPN-VXLAN Overlay Network

Step-by-Step Procedure

To configure a basic EVPN-VXLAN overlay network on Leaf1:

  1. Configure an EBGP-based overlay between Leaf1 and the spine devices, specify a local IP address for Leaf1, and include the EVPN signaling Network Layer Reachability Information (NLRI) in the BGP group.

    Note:

    Some IP fabrics use an IBGP-based EVPN-VXLAN overlay. For an example of an IP fabric that uses IBGP for the overlay, see Example: Configure an EVPN-VXLAN Centrally-Routed Bridging Fabric. Note that choosing either EBGP or IBGP for the overlay does not impact the fabric architecture. Both CRB and ERB designs support either type of overlay.

  2. Configure VXLAN encapsulation for the data packets exchanged between the EVPN neighbors. Specify that all VXLAN network identifiers (VNIs) are part of the virtual routing and forwarding (VRF) instance. Also, specify that the MAC address of the IRB interface and the MAC address of the corresponding default gateway are advertised without the extended community option default-gateway.

  3. Configure switch options to set a route distinguisher and VRF target for the VRF routing instance, and associate interface lo0 with the virtual tunnel endpoint (VTEP).
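A minimal Leaf1 sketch of the three overlay steps, assuming the same loopback address (192.168.0.1), AS numbers, and spine loopbacks (192.168.0.101, 192.168.0.102) used in the underlay sketch; all of those values are assumptions:

```
# Step 1: EBGP overlay group peering loopback-to-loopback with EVPN signaling
set protocols bgp group overlay type external
set protocols bgp group overlay multihop no-nexthop-change
set protocols bgp group overlay local-address 192.168.0.1
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay neighbor 192.168.0.101 peer-as 65101
set protocols bgp group overlay neighbor 192.168.0.102 peer-as 65102

# Step 2: VXLAN encapsulation, all VNIs in the instance, and no default-gateway
# extended community on advertised IRB/gateway MAC addresses
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set protocols evpn default-gateway no-gateway-community

# Step 3: route distinguisher, VRF target, and lo0 as the VTEP source
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.168.0.1:1
set switch-options vrf-target target:65000:1
```

The route distinguisher and route target values are illustrative; each leaf needs a unique route distinguisher, while the fabric-wide vrf-target must match across leaves.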

Customer Profile Configuration

CLI Quick Configuration

To quickly configure a basic customer profile for Server A, Server B, Server C, and Server D, copy the following commands and paste them into a text file. Remove any line breaks and change any details necessary to match your network configuration. Then copy and paste the commands into the CLI at the [edit] hierarchy level.

Configuring a Customer Profile

Step-by-Step Procedure

To configure a basic customer profile on Leaf1:

  1. Enable Server A to be multihomed to Leaf1 and Leaf2 by configuring an aggregated Ethernet interface, specifying an ESI for the interface, and setting the mode so that the connections to both leaf devices are active.

    Note:

    When configuring the ae0 interface on Leaf2, you must specify the same ESI (00:01:01:01:01:01:01:01:01:01) that is specified for the same interface on Leaf1.

  2. Configure IRB interfaces and associated VGAs (default L3 virtual gateways), which enable the communication between servers in different VLANs. Add the optional configuration virtual-gateway-accept-data to allow the VGA to respond to ping packets.

    Note:

    When configuring a VGA for an IRB interface, keep in mind that the IRB IP address and VGA must be different.

  3. Configure a VRF routing instance for VLAN v101 and another VRF routing instance for VLAN v102 and v103. In each routing instance, associate the IRB interfaces, a route-distinguisher, and a vrf-target.

  4. Configure VLANs v101, v102, and v103 and associate an IRB interface and VNI with each VLAN.
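The four steps above can be sketched as follows for Leaf1. The VLAN names, IRB addresses, VGAs, and the ESI come from Table 1 and the step text; the server-facing interface (et-0/0/10), the LACP system ID, the VNI numbers, and the route distinguisher/target values are assumptions:

```
# Step 1: ae0 multihoming Server A to Leaf1 and Leaf2 (same ESI on both leaves)
set chassis aggregated-devices ethernet device-count 1
set interfaces et-0/0/10 ether-options 802.3ad ae0
set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
set interfaces ae0 esi all-active
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:01:01:01
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members v101

# Step 2: IRB interfaces with VGAs; the IRB IP and VGA must differ
set interfaces irb unit 101 family inet address 10.1.101.1/24 virtual-gateway-address 10.1.101.254
set interfaces irb unit 101 virtual-gateway-accept-data
set interfaces irb unit 102 family inet address 10.1.102.1/24 virtual-gateway-address 10.1.102.254
set interfaces irb unit 102 virtual-gateway-accept-data
set interfaces irb unit 103 family inet address 10.1.103.1/24 virtual-gateway-address 10.1.103.254
set interfaces irb unit 103 virtual-gateway-accept-data

# Step 3: VRF routing instances
set routing-instances vrf101 instance-type vrf
set routing-instances vrf101 interface irb.101
set routing-instances vrf101 route-distinguisher 192.168.0.1:101
set routing-instances vrf101 vrf-target target:65000:101
set routing-instances vrf102_103 instance-type vrf
set routing-instances vrf102_103 interface irb.102
set routing-instances vrf102_103 interface irb.103
set routing-instances vrf102_103 route-distinguisher 192.168.0.1:102
set routing-instances vrf102_103 vrf-target target:65000:102

# Step 4: VLANs with their IRB interfaces and VNI mappings (VNIs assumed)
set vlans v101 vlan-id 101
set vlans v101 l3-interface irb.101
set vlans v101 vxlan vni 101
set vlans v102 vlan-id 102
set vlans v102 l3-interface irb.102
set vlans v102 vxlan vni 102
set vlans v103 vlan-id 103
set vlans v103 l3-interface irb.103
set vlans v103 vxlan vni 103
```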

Route Leaking Configuration

At this point, based on the configuration, Server A and Server C should be able to reach each other, and Server B and Server D should be able to reach each other. To enable all servers to reach each other, we leak routes between the routing instances.

CLI Quick Configuration

To quickly configure route leaking, copy the following commands and paste them into a text file. Remove any line breaks and change any details necessary to match your network configuration. Then copy and paste the commands into the CLI at the [edit] hierarchy level.

Configuring Route Leaking

Step-by-Step Procedure

To configure route leaking on Leaf1:

  1. Configure communities that match the targets you configured for the routing instances. Then configure, for each routing instance, a policy that matches on the vrf-target community of the other routing instance.

  2. In the VRF routing instances, apply the routing policies configured in the previous step. This establishes a common route target between the routing instances.

  3. Configure the auto-export option, which allows sharing of routes between instances with common route targets.
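A minimal route-leaking sketch for Leaf1, assuming the illustrative route targets target:65000:101 (vrf101) and target:65000:102 (vrf102_103) from the customer profile sketch; the community and policy names are assumptions:

```
# Step 1: communities matching each instance's vrf-target, and import
# policies that accept both the local and the other instance's target
set policy-options community vrf101_comm members target:65000:101
set policy-options community vrf102_103_comm members target:65000:102
set policy-options policy-statement vrf101-import term 1 from community vrf101_comm
set policy-options policy-statement vrf101-import term 1 from community vrf102_103_comm
set policy-options policy-statement vrf101-import term 1 then accept
set policy-options policy-statement vrf102_103-import term 1 from community vrf101_comm
set policy-options policy-statement vrf102_103-import term 1 from community vrf102_103_comm
set policy-options policy-statement vrf102_103-import term 1 then accept

# Step 2: apply the policies under the VRF routing instances
set routing-instances vrf101 vrf-import vrf101-import
set routing-instances vrf102_103 vrf-import vrf102_103-import

# Step 3: auto-export leaks routes between local instances with matching targets
set routing-instances vrf101 routing-options auto-export
set routing-instances vrf102_103 routing-options auto-export
```

Note that a vrf-import policy replaces the implicit import created by vrf-target, which is why each policy also accepts the instance's own community.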

Verification

This section describes the following verifications for this example:

Verifying BGP

Purpose

Verify that the BGP sessions with the spine devices are established.

Action

Display the BGP summary:
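For example, from operational mode on Leaf1:

```
user@leaf1> show bgp summary
```

An established session shows received prefix counts in the State column rather than Active or Idle; with the sketch above you would expect four sessions, one underlay and one overlay session per spine.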

Meaning

Both underlay and overlay BGP sessions are established with the spine devices.

Verifying the ESI

Purpose

Verify the status of the ESI.

Action

Display the status of the ESI:
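One way to view the ESI state, including the designated forwarder election for ESI 00:01:01:01:01:01:01:01:01:01, is (exact output fields vary by platform and release):

```
user@leaf1> show interfaces ae0 terse
user@leaf1> show evpn instance extensive
```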

Meaning

The ESI is up and Leaf2 is the remote provider edge (PE) device and the designated forwarder.

Verifying the EVPN Database

Purpose

Verify the MAC addresses in the EVPN database.

Action

Verify the MAC addresses in the EVPN database for VLAN 101.
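For example, from operational mode on Leaf1 (the output can be narrowed with filters such as a MAC address if your release supports them):

```
user@leaf1> show evpn database
```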

Meaning

The MAC and IP addresses for Server A are shown with an active source of the ESI, and the MAC and IP addresses for Server C are shown with an active source from Leaf3. Also, the MAC for the VGA and each leaf IRB interface are shown.

Verifying Connectivity

Purpose

Verify ping works between servers.

Action

Ping from Server A to the other servers.
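A quick check from Server A's shell; the server addresses shown are assumptions (the example does not specify them), so substitute your hosts' actual addresses:

```
# From Server A (assumed 10.1.101.10):
ping 10.1.101.12    # Server C: same VLAN, L2 bridged over VXLAN
ping 10.1.102.10    # Server B: routed via the IRB virtual gateway and leaked routes
ping 10.1.103.10    # Server D: routed via the IRB virtual gateway and leaked routes
```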

Meaning

End-to-end connectivity is working.

Quick Configuration For All Devices

CLI Quick Configuration

To quickly configure this example, copy the following commands and paste them into a text file. Remove any line breaks and change any details necessary to match your network configuration. Then copy and paste the commands into the CLI at the [edit] hierarchy level.

Leaf2

Leaf3

Leaf4

Spine 1

Spine 2

Release History Table

  Release   Description

  17.3R1    Starting with Junos OS Release 17.3R1, the QFX5110 switch can function as a leaf device that acts as both an L2 and an L3 VXLAN gateway in an EVPN-VXLAN ERB overlay.