Example: Configure an EVPN-VXLAN Centrally-Routed Bridging Fabric

Modern data centers rely on an IP fabric. An IP fabric uses BGP-based Ethernet VPN (EVPN) signaling in the control plane and Virtual Extensible LAN (VXLAN) encapsulation in the data plane. This technology provides a standards-based, high-performance solution for Layer 2 (L2) bridging within a VLAN and for routing between VLANs.

In most cases a one-to-one relationship exists between a user VLAN and a VXLAN network identifier (VNI). As a result, the terms VLAN and VXLAN are often used interchangeably. By default, any ingress VLAN tag is stripped from a frame received on an access port, and the rest of the Ethernet frame is encapsulated in VXLAN for transport across the fabric. At the egress point, the VXLAN encapsulation is removed and the VLAN tag (if any) is reinserted before the frame is sent to the attached device.

This is an example of an EVPN-VXLAN IP fabric based on a centrally routed bridging (CRB) architecture. Integrated routing and bridging (IRB) interfaces provide Layer 3 (L3) connectivity to servers and VMs that belong to different VLANs and networks. These IRB interfaces serve as the default gateway for inter-VLAN traffic within the fabric and for traffic to destinations that are remote to the fabric, such as in a Data Center Interconnect (DCI) scenario. In a CRB design you define the IRB interfaces on the spine devices only. Such a design is therefore referred to as centrally routed, because all routing occurs on the spines.

For an example of an edge-routed bridging (ERB) design, see Example: Configuring an EVPN-VXLAN Edge-Routed Bridging Fabric with an Anycast Gateway.

For background information about EVPN-VXLAN technology and supported architectures, see EVPN Primer.

Requirements

The original example used the following hardware and software components:

  • Two QFX10002 switches (Spine 1 and Spine 2) running Junos OS Release 15.1X53-D30 software.

  • Four QFX5100 switches (Leaf 1 through Leaf 4) running Junos OS Release 14.1X53-D30 software.

    • Updated and re-validated using Junos OS Release 20.4R1.

    • See the hardware summary for a list of supported platforms.

Overview

In this example, physical servers that support three groups of users (meaning three VLANs) require the following connectivity:

  1. Servers A and C should be able to communicate at L2. These servers must share a subnet, and therefore, a VLAN.
  2. Servers B and D must be on separate VLANs to isolate their broadcast domains. These servers must be able to communicate at L3.
  3. Servers A and C should not be able to communicate with Servers B and D.

To meet these connectivity requirements, this example uses the following protocols and technologies:

  • EVPN establishes an L2 virtual bridge to connect servers A and C and places servers B and D into their respective VLANs.

  • Within the EVPN topology, BGP exchanges route information.

  • VXLAN tunnels the L2 frames through the underlying L3 fabric. The use of VXLAN encapsulation preserves the L3 fabric for use by routing protocols.

  • IRB interfaces route IP packets between the VLANs.

Again, the IRB interfaces are configured on the spine devices only for centrally routed bridging (CRB). In this design the spine devices function as L3 gateways for the various servers, VMs, or containerized workloads connected to the access ports on the leaf switches. When the workloads exchange data within their own VLAN, the leaf bridges the traffic. The resulting VXLAN-encapsulated traffic is then sent through the spines as underlay (IP) traffic. For intra-VLAN traffic, the VXLAN virtual tunnel endpoint (VTEP) functionality of the spine is not used; the traffic is tunneled directly between the VTEPs on the source and destination leaf devices.

In contrast, inter-VLAN (and inter-fabric) traffic must be routed. This traffic is encapsulated into VXLAN and bridged by the leaf to the spine. The leaves know to send this traffic to the spine because a source that needs routing addresses its frames to the MAC address of the VLAN's default gateway. In other words, frames sent to the IRB's MAC address are forwarded at L2 to the spine device.

At the spine, the L2 encapsulation is removed to accommodate an L3 route lookup in the routing instance associated with the VLAN/IRB. For inter-VLAN traffic the spine determines the destination VLAN and the corresponding VXLAN VNI from its route lookup. The spine then re-encapsulates the traffic and sends it through the underlay to the target leaf or leaves.

Topology

The simple IP Clos topology shown in Figure 1 includes two spine switches, four leaf switches, and four servers. Each leaf switch has a connection to each of the spine switches for redundancy.

The server networks are segmented into three VLANs, each of which is mapped to a VXLAN network identifier (VNI). VLAN v101 supports servers A and C, and VLANs v102 and v103 support servers B and D, respectively. See Table 1 for configuration parameters.

Figure 1: EVPN-VXLAN Topology
Figure 2: EVPN-VXLAN Logical Topology

The logical topology shows the expected connectivity. In this example one routing instance connects servers A and C using VLAN 101, and another routing instance connects servers B and D using VLANs 102 and 103. By default, the servers can communicate only with other servers in the same routing instance.

Because servers A and C share the same VLAN, the servers communicate at L2. Therefore, the IRB interface is not needed for servers A and C to communicate. We define an IRB interface in the routing instance as a best practice to enable future L3 connectivity. In contrast, Servers B and D require L3 connectivity through their respective IRB interfaces to communicate, given that these servers are in different VLANs running on unique IP subnets.


Table 1 provides key parameters, including the IRB interfaces, configured for each network. Each VLAN has a supporting IRB interface that routes data packets between that VLAN and the other VLANs.

Table 1: Key VLAN and VXLAN Parameters

Parameters        Servers A and C     Servers B and D
VLAN              v101                v102, v103
VXLAN VNI         101                 102, 103
VLAN ID           101                 102, 103
IRB interface     irb.101             irb.102, irb.103

Keep the following in mind when you configure the parameters in Table 1. You must:

  • Associate each VLAN with a unique IP subnet, and therefore a unique IRB interface.

  • Assign each VLAN a unique VXLAN network identifier (VNI).

  • Specify each IRB interface as part of an L3 virtual routing and forwarding (VRF) instance, or configure the interfaces together in the default switch instance. This example uses VRF instances to enforce separation between the user communities (VLANs).

  • Include in the configuration of each IRB interface a default gateway address, which you specify with the virtual-gateway-address configuration statement at the [interfaces irb unit logical-unit-number family inet address ip-address] hierarchy level. Configuring a virtual gateway sets up a redundant default gateway for each IRB interface.
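A minimal sketch of what the IRB gateway configuration might look like on one spine is shown below. The subnets and the spine's own host addresses are assumed values for illustration only; only the placement of virtual-gateway-address reflects the hierarchy described above.

# Hedged sketch; the subnets and the .2 host addresses are example values.
set interfaces irb unit 101 family inet address 10.1.101.2/24 virtual-gateway-address 10.1.101.1
set interfaces irb unit 102 family inet address 10.1.102.2/24 virtual-gateway-address 10.1.102.1
set interfaces irb unit 103 family inet address 10.1.103.2/24 virtual-gateway-address 10.1.103.1

The second spine would use the same virtual-gateway-address on each subnet with a different host address (for example, .3), giving each VLAN a redundant default gateway.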

Spine 1: Underlay Network Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Spine 1: Configure the Underlay Network

Step-by-Step Procedure

To configure the underlay network on Spine 1:

  1. Configure the L3 fabric interfaces.

  2. Specify an IP address for the loopback interface. This IP address serves as the source IP address in the outer header of VXLAN-encapsulated packets.

  3. Configure the routing options. The configuration includes a reference to a load-balancing policy to enable the use of equal cost multiple path (ECMP) routing through the underlay.

  4. Configure a BGP group for the external BGP (EBGP)-based underlay. Note that multipath is included in the BGP configuration to allow the use of multiple equal-cost paths. Normally, BGP uses a tie-breaking algorithm that selects a single best path.

  5. Configure the per-packet load-balancing policy.

  6. Configure a policy to advertise direct interface routes into the underlay. At a minimum you must advertise the loopback interface (lo0) routes into the underlay.
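As a rough guide, the underlay steps above might translate into set commands along the following lines. The interface names, IP addresses, and AS numbers are assumptions for illustration only and are not the values from the original example.

# Hedged sketch of the Spine 1 underlay; all names, addresses, and AS numbers are example values.
# L3 fabric interfaces toward the four leaves (step 1)
set interfaces xe-0/0/0 unit 0 family inet address 172.16.1.0/31
set interfaces xe-0/0/1 unit 0 family inet address 172.16.1.2/31
set interfaces xe-0/0/2 unit 0 family inet address 172.16.1.4/31
set interfaces xe-0/0/3 unit 0 family inet address 172.16.1.6/31
# Loopback address used as the VXLAN tunnel source (step 2)
set interfaces lo0 unit 0 family inet address 10.1.255.1/32
# Routing options, including the ECMP load-balancing policy (step 3)
set routing-options router-id 10.1.255.1
set routing-options autonomous-system 65000
set routing-options forwarding-table export load-balance
# EBGP underlay group with multipath (step 4)
set protocols bgp group underlay type external
set protocols bgp group underlay local-as 65001
set protocols bgp group underlay multipath multiple-as
set protocols bgp group underlay export send-direct
set protocols bgp group underlay neighbor 172.16.1.1 peer-as 65011
set protocols bgp group underlay neighbor 172.16.1.3 peer-as 65012
set protocols bgp group underlay neighbor 172.16.1.5 peer-as 65013
set protocols bgp group underlay neighbor 172.16.1.7 peer-as 65014
# Per-packet (per-flow) load-balancing policy (step 5)
set policy-options policy-statement load-balance term 1 then load-balance per-packet
# Advertise direct routes, including the loopback, into the underlay (step 6)
set policy-options policy-statement send-direct term 1 from protocol direct
set policy-options policy-statement send-direct term 1 then accept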

Spine 1: Overlay Network Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste the configuration into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Configuring the Overlay Network on Spine 1

Step-by-Step Procedure

To configure the overlay network on Spine 1:

  1. Configure an internal BGP (IBGP)-based EVPN-VXLAN overlay. Note that the EVPN address family is configured to support advertisement of EVPN routes. In this case we define an overlay peering to Spine 2 for spine-to-spine connectivity. As with the underlay, we also enable BGP multipath in the overlay.

    Note:

    Some IP fabrics use an EBGP-based EVPN-VXLAN overlay. For an example of an IP fabric that uses EBGP for both the underlay and overlay, see Example: Configuring an EVPN-VXLAN Edge-Routed Bridging Fabric with an Anycast Gateway. Note that the choice of EBGP or IBGP for the overlay does not negatively affect the fabric architecture. Both centrally routed bridging (CRB) and edge-routed bridging (ERB) designs support either overlay type.

  2. Configure VXLAN encapsulation for the L2 frames exchanged between the L2 VXLAN VTEPs.

  3. Configure the no-gateway-community default gateway option under the EVPN protocol configuration.

    Note:

    When a virtual-gateway-address is configured, the VRRP-based MAC address 00:00:5e:00:01:01 is used on both spines, so MAC synchronization is not needed. See default gateway for more information.

  4. Specify how multicast traffic is replicated in the fabric.

  5. Configure default routing instance options (virtual switch type).
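A hedged sketch of how these overlay steps might look on Spine 1 follows. The loopback addresses, overlay AS, route distinguisher, and route target are assumed values, and the route-reflector cluster statement is one common way to let the leaves learn each other's EVPN routes over IBGP.

# Hedged sketch of the Spine 1 overlay; addresses and community values are example values.
# IBGP EVPN overlay peerings to Spine 2 and the four leaves (step 1)
set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 10.1.255.1
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay cluster 10.1.255.1
set protocols bgp group overlay multipath
set protocols bgp group overlay neighbor 10.1.255.2
set protocols bgp group overlay neighbor 10.1.255.11
set protocols bgp group overlay neighbor 10.1.255.12
set protocols bgp group overlay neighbor 10.1.255.13
set protocols bgp group overlay neighbor 10.1.255.14
# VXLAN encapsulation for the EVPN-signaled L2 frames (step 2)
set protocols evpn encapsulation vxlan
# Do not advertise the virtual gateway MAC with the default-gateway community (step 3)
set protocols evpn default-gateway no-gateway-community
# Ingress replication for multicast (BUM) traffic (step 4)
set protocols evpn multicast-mode ingress-replication
# Default (virtual switch) instance options (step 5)
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 10.1.255.1:1
set switch-options vrf-target target:65000:1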

Spine 1: Access Profile Configuration

CLI Quick Configuration

The access profile or access port configuration involves the settings needed to attach server workloads, such as bare-metal servers (BMSs) or VMs, to the access (leaf) switches. This step defines the device's VLAN along with the routing instance and IRB configuration that provide user isolation and L3 routing, respectively.

Because this is an example of a centrally routed bridging (CRB) fabric, the routing instances and integrated routing and bridging (IRB) interfaces are defined on the spine devices only. The leaf devices in a CRB fabric have only L2 VXLAN functionality.

To quickly configure this example, copy the following commands, paste the commands into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Note:

When proxy-macip-advertisement is enabled, the L3 gateway advertises MAC and IP routes (MAC+IP type 2 routes) on behalf of L2 VXLAN gateways in EVPN-VXLAN networks. This behavior is not supported on EVPN-MPLS. Starting in Junos OS Release 20.2R2, the following warning message appears when you enable proxy-macip-advertisement:

WARNING: Only EVPN VXLAN supports proxy-macip-advertisement configuration,

The message appears when you change your configuration, save your configuration, or use the show command to display your configuration.
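The proxy-macip-advertisement statement is configured per IRB unit. A minimal, hedged sketch follows; the unit number matches this example, but whether you enable the option depends on your design.

set interfaces irb unit 101 proxy-macip-advertisement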

Configuring the Access Profiles for Spine 1

Step-by-Step Procedure

To configure profiles for the server networks:

  1. Configure the IRB interfaces that support routing among VLANs 101, 102, and 103.

  2. Specify which virtual network identifiers (VNIs) are included in the EVPN-VXLAN domain.

  3. Configure a route target for each VNI.

    Note:

    In the original configuration, the spine devices run Junos OS Release 15.1X53-D30, and the leaf devices run 14.1X53-D30. In these software releases, when you include the vrf-target configuration statement at the [edit protocols evpn vni-options vni] hierarchy level, you must also include the export option. Note that later Junos OS releases do not require this option. As a result, the configurations in this updated example omit the export option.

  4. Configure the routing instance for servers A and C.

  5. Configure the routing instance for servers B and D.

  6. Configure VLANs v101, v102, and v103, and associate the corresponding VNIs and IRB interfaces with each VLAN.
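Put together, the access profile steps above might look something like the following sketch on Spine 1. The IRB subnets, route distinguishers, and route targets are assumed example values; the instance names serversAC and serversBD, the VLAN names, and the VNIs come from this example.

# Hedged sketch of the Spine 1 access profile; RD and RT values are examples only.
# IRB gateway interfaces for VLANs 101 through 103 (step 1)
set interfaces irb unit 101 family inet address 10.1.101.2/24 virtual-gateway-address 10.1.101.1
set interfaces irb unit 102 family inet address 10.1.102.2/24 virtual-gateway-address 10.1.102.1
set interfaces irb unit 103 family inet address 10.1.103.2/24 virtual-gateway-address 10.1.103.1
# VNIs that belong to the EVPN-VXLAN domain (step 2)
set protocols evpn extended-vni-list 101
set protocols evpn extended-vni-list 102
set protocols evpn extended-vni-list 103
# One route target per VNI (step 3)
set protocols evpn vni-options vni 101 vrf-target target:65000:101
set protocols evpn vni-options vni 102 vrf-target target:65000:102
set protocols evpn vni-options vni 103 vrf-target target:65000:103
# L3 VRF for servers A and C (step 4)
set routing-instances serversAC instance-type vrf
set routing-instances serversAC interface irb.101
set routing-instances serversAC route-distinguisher 10.1.255.1:101
set routing-instances serversAC vrf-target target:65000:201
# L3 VRF for servers B and D (step 5)
set routing-instances serversBD instance-type vrf
set routing-instances serversBD interface irb.102
set routing-instances serversBD interface irb.103
set routing-instances serversBD route-distinguisher 10.1.255.1:102
set routing-instances serversBD vrf-target target:65000:202
# VLAN-to-VNI mappings with the IRB as each VLAN's L3 interface (step 6)
set vlans v101 vlan-id 101
set vlans v101 l3-interface irb.101
set vlans v101 vxlan vni 101
set vlans v102 vlan-id 102
set vlans v102 l3-interface irb.102
set vlans v102 vxlan vni 102
set vlans v103 vlan-id 103
set vlans v103 l3-interface irb.103
set vlans v103 vxlan vni 103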

Spine 2: Full Configuration

CLI Quick Configuration

The Spine 2 configuration is similar to that of Spine 1, so we provide the full configuration instead of a step-by-step configuration. To quickly configure this example, copy the following commands, paste the commands into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Leaf 1: Underlay Network Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste the commands into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Configuring the Underlay Network for Leaf 1

Step-by-Step Procedure

To configure the underlay network for Leaf 1:

  1. Configure the L3 interfaces.

  2. Specify an IP address for the loopback interface. This IP address serves as the source IP address in the outer header of any VXLAN encapsulated packets.

  3. Set the routing options.

  4. Configure an external BGP (EBGP) group that includes the spines as peers to handle underlay routing.

  5. Configure a policy that spreads traffic across multiple paths between the Juniper Networks switches.

  6. Configure a policy to advertise direct interface routes. At a minimum the underlay must have full reachability to the device loopback addresses.
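The equivalent sketch for the Leaf 1 underlay is shorter because the leaf has only two fabric uplinks. Again, the interface names, addresses, and AS numbers are assumptions for illustration only.

# Hedged sketch of the Leaf 1 underlay; all values are examples only.
set interfaces xe-0/0/10 unit 0 family inet address 172.16.1.1/31
set interfaces xe-0/0/11 unit 0 family inet address 172.16.2.1/31
set interfaces lo0 unit 0 family inet address 10.1.255.11/32
set routing-options router-id 10.1.255.11
set routing-options autonomous-system 65000
set routing-options forwarding-table export load-balance
set protocols bgp group underlay type external
set protocols bgp group underlay local-as 65011
set protocols bgp group underlay multipath multiple-as
set protocols bgp group underlay export send-direct
set protocols bgp group underlay neighbor 172.16.1.0 peer-as 65001
set protocols bgp group underlay neighbor 172.16.2.0 peer-as 65002
set policy-options policy-statement load-balance term 1 then load-balance per-packet
set policy-options policy-statement send-direct term 1 from protocol direct
set policy-options policy-statement send-direct term 1 then accept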

Leaf 1: Overlay Network Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste the commands into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Configuring the Overlay Network for Leaf 1

Step-by-Step Procedure

To configure the overlay network for Leaf 1:

  1. Configure an internal BGP (IBGP) group for the EVPN-VXLAN overlay network.

  2. Configure VXLAN encapsulation for the data packets exchanged between the EVPN neighbors.

  3. Specify how multicast traffic is replicated in the EVPN-VXLAN environment.

  4. Configure the default routing instance options (virtual switch type).
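A hedged sketch of the Leaf 1 overlay follows; the loopback addresses and community values are examples only. Note that the leaf peers with both spine loopbacks and, unlike the spines, needs no route-reflector cluster statement.

# Hedged sketch of the Leaf 1 overlay; addresses and community values are examples only.
set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 10.1.255.11
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay multipath
set protocols bgp group overlay neighbor 10.1.255.1
set protocols bgp group overlay neighbor 10.1.255.2
set protocols evpn encapsulation vxlan
set protocols evpn multicast-mode ingress-replication
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 10.1.255.11:1
set switch-options vrf-target target:65000:1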

Leaf 1: Access Profile Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste the commands into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Configuring the Access Profile for Leaf 1

Step-by-Step Procedure

To configure the profile for the server network:

  1. Configure an L2 Ethernet interface for the connection with the physical server. This interface is associated with VLAN 101. In this example the access interface is configured as a trunk to support VLAN tagging. Untagged access interfaces are also supported.

  2. Configure a route target for the virtual network identifier (VNI).

    Note:

    In the original configuration, the spine devices run Junos OS Release 15.1X53-D30, and the leaf devices run 14.1X53-D30. In these software releases, when you include the vrf-target configuration statement at the [edit protocols evpn vni-options vni] hierarchy level, you must also include the export option. Note that later Junos OS releases do not require this option, as reflected in the updated configurations used in this example.

  3. Specify which VNIs are included in the EVPN-VXLAN domain.

  4. Configure VLAN v101. The VLAN is mapped to the same VXLAN VNI that you configured on the spine devices. Note that the L3 integrated routing and bridging (IRB) interface is not specified on the leaf devices. This is because in centrally routed bridging (CRB) the leaves perform L2 bridging only.
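For reference, a hedged sketch of the Leaf 1 access profile is shown below. The access interface name is an assumed value; the VLAN name, VLAN ID, VNI, and per-VNI route target mirror the values used on the spines.

# Hedged sketch of the Leaf 1 access profile; the interface name is an example only.
# Trunk access interface toward the server (step 1)
set interfaces xe-0/0/20 unit 0 family ethernet-switching interface-mode trunk
set interfaces xe-0/0/20 unit 0 family ethernet-switching vlan members v101
# Route target for the VNI (step 2)
set protocols evpn vni-options vni 101 vrf-target target:65000:101
# VNI membership in the EVPN-VXLAN domain (step 3)
set protocols evpn extended-vni-list 101
# VLAN-to-VNI mapping with no L3 interface on the leaf (step 4)
set vlans v101 vlan-id 101
set vlans v101 vxlan vni 101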

Leaf 2: Full Configuration

CLI Quick Configuration

The Leaf 2 configuration is similar to that of Leaf 1, so we provide the full configuration instead of a step-by-step configuration. To quickly configure this example, copy the following commands, paste the commands into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Leaf 3: Full Configuration

CLI Quick Configuration

The Leaf 3 configuration is similar to that of Leaf 1, so we provide the full configuration instead of a step-by-step configuration. To quickly configure this example, copy the following commands, paste the commands into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Leaf 4: Full Configuration

CLI Quick Configuration

The Leaf 4 configuration is similar to that of Leaf 1, so we provide the full configuration instead of step-by-step configuration. To quickly configure this example, copy the following commands, paste the commands into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Verification

Confirm that the integrated routing and bridging (IRB) interfaces are working properly:

Verify the IRB Interfaces

Purpose

Verify the configuration of the IRB interfaces on Spine 1 and Spine 2.

Action

From operational mode, enter the show interfaces irb command.

Meaning

The sample output from Spine 1 verifies the following:

  • IRB interfaces irb.101, irb.102, and irb.103 are configured.

  • The physical interface upon which the IRB interfaces are configured is up and running.

  • Each IRB interface is properly mapped to its respective VLAN.

  • The configuration of each IRB interface correctly reflects the IP address and destination (virtual gateway address) assigned to it.

Verifying the Routing Instances

Purpose

Verify that the routing instances for servers A and C, and for servers B and D, are properly configured on Spine 1 and Spine 2.

Action

From operational mode, enter the show route instance routing-instance-name extensive command for routing instances serversAC and serversBD.

Meaning

In the sample output from Spine 1, the routing instances for servers A and C and for servers B and D show the loopback interface and IRB interfaces that are associated with each group. The output also shows the actual route distinguisher, virtual routing and forwarding (VRF) import, and VRF export policy configuration.

Verifying Dynamic MAC Address Learning

Purpose

Verify that for VLANs v101, v102, and v103, a dynamic MAC address is installed in the Ethernet switching tables on all leaves.

Action

From operational mode, enter the show ethernet-switching table command.

Meaning

The sample output from Leaf 1 indicates that it has learned the MAC address 00:00:5e:00:01:01 for its virtual gateway (IRB). This is the MAC address that the attached servers use to reach their default gateway. Because the same virtual IP/MAC is configured on both spines, the virtual IP is treated as an ESI LAG to support active forwarding to both spines without the risk of packet loops. The output also indicates that Leaf 1 learned the IRB MAC addresses for Spine 1 and Spine 2, which function as VTEPs.

Verifying Routes in the Routing Instances

Purpose

Verify that the correct routes are in the routing instances.

Action

From operational mode, enter the show route table routing-instance-name.inet.0 command.

Meaning

The sample output from Spine 1 indicates that the routing instance for servers A and C has the IRB interface routes associated with VLAN 101 and that the routing instance for servers B and D has the IRB interface routes associated with VLANs 102 and 103.

Based on the routes in each table, it is clear that servers A and C in VLAN 101 cannot reach servers B and D in VLANs 102 and 103. The output also shows that the shared table that holds the routes for servers B and D allows L3 communication between them through their IRB interfaces.

Verify Connectivity

Purpose

Verify that servers A and C can ping each other and servers B and D can ping each other.

Action

Run the ping command from the servers.

Meaning

The sample output shows that server A can ping server C, and server B can ping server D. Servers A and C should not be able to ping servers B and D, and servers B and D should not be able to ping servers A and C.

Spine 1 and 2: Route Leaking (Optional)

Referring to Figure 2, recall that you configured three VLANs and two routing instances to provide connectivity for servers A and C in VLAN 101 and for servers B and D in VLANs 102 and 103, respectively. In this section you modify the configuration to leak routes between the two routing instances. Figure 3 shows the resulting logical connectivity after the integrated routing and bridging (IRB) routes are leaked.

Figure 3: EVPN-VXLAN Logical Topology with Route Leaking

With your routing information base (RIB) group modifications, you can expect the servers in VLAN 101 to reach the servers in both VLANs 102 and 103 using L3 connectivity.

CLI Quick Configuration

At this stage you have deployed a CRB-based EVPN fabric and have confirmed expected connectivity. That is, servers A and C can communicate at L2. Servers B and D (on VLANs 102 and 103, respectively) communicate through IRB routing in their shared routing instance. What if you want all servers to be able to ping each other? One option to solve this problem is to leak the routes between the routing instances. See auto-export for more information about leaking routes between virtual routing and forwarding (VRF) instances. To quickly configure this example, copy the following commands, paste the commands into a text file, remove any line breaks, change any details necessary to match your configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
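One possible way to leak the IRB routes between the two VRFs is with RIB groups, as sketched below. The group names are assumptions for illustration, and this sketch is not a reproduction of the original example's commands; auto-export with matching policies is an alternative approach.

# Hedged sketch: copy each VRF's interface (IRB) routes into the other VRF's table.
set routing-options rib-groups AC-to-BD import-rib serversAC.inet.0
set routing-options rib-groups AC-to-BD import-rib serversBD.inet.0
set routing-options rib-groups BD-to-AC import-rib serversBD.inet.0
set routing-options rib-groups BD-to-AC import-rib serversAC.inet.0
set routing-instances serversAC routing-options interface-routes rib-group inet AC-to-BD
set routing-instances serversBD routing-options interface-routes rib-group inet BD-to-AC

Apply the same change on both Spine 1 and Spine 2 so that either spine can route between all three VLANs.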

Verification with Route Leaking (Optional)

Verifying Routes with Route Leaking (Optional)

Purpose

Verify that the correct routes are in the routing instances.

Action

From operational mode, enter the show route table routing-instance-name.inet.0 command.

Meaning

The sample output from Spine 1 indicates that both routing instances now have the integrated routing and bridging (IRB) interface routes associated with all three VLANs. Because you copied routes between the instance tables, the net result is the same as if you configured all three VLANs in a common routing instance. Thus, you can expect full L3 connectivity between servers in all three VLANs.

Verifying Connectivity with Route Leaking (Optional)

Purpose

Verify that servers A and C can ping servers B and D.

Action

Run the ping command from the servers.

Meaning

The sample output shows that server A can ping server B and server D. It also shows that server C can ping server B and server D. This confirms the expected full connectivity among the servers and their VLANs.

Change History Table

Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.

Release
Description
15.1X53-D30
Starting with Junos OS Release 15.1X53-D30, you can use integrated routing and bridging (IRB) interfaces to route between VLANs using Virtual Extensible LAN (VXLAN) encapsulation.