Example: Configuring an EVPN-VXLAN Edge-Routed Bridging Overlay Within a Data Center
Ethernet VPN (EVPN) is a control plane technology that enables hosts (physical servers and virtual machines [VMs]) to be placed anywhere in a network and remain connected to the same logical Layer 2 overlay network. Virtual Extensible LAN (VXLAN) is a tunneling protocol that creates the data plane for the Layer 2 overlay network.
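To make the data-plane side concrete, the following sketch builds and parses the 8-byte VXLAN header defined in RFC 7348, which carries the 24-bit VXLAN network identifier (VNI) used throughout this example. This is an illustration of the encapsulation format only, not something you configure on the switches.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    # Packing the VNI shifted left by 8 places it in bytes 4-6.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Extract the VNI from an 8-byte VXLAN header."""
    flags, vni_field = struct.unpack("!B3xI", header)
    if not flags & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    return vni_field >> 8
```

For example, `vxlan_header(1001)` produces the header a VTEP would prepend (after the outer UDP header) to a frame belonging to VNI 1001 in this example's topology.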
The physical underlay network over which EVPN-VXLAN is commonly deployed is a two-layer IP fabric, which includes spine and leaf devices as shown in Figure 1. The spine devices—for example, QFX10000 switches—provide connectivity between the leaf devices, and the leaf devices—for example, QFX5100 switches—provide connectivity to attached hosts. In the overlay network, the leaf devices function as Layer 2 gateways that handle traffic within a VXLAN, and the spine devices function as Layer 3 gateways that handle traffic between VXLANs through the use of integrated routing and bridging (IRB) interfaces. For more information about configuring an EVPN-VXLAN centrally-routed bridging overlay (an EVPN-VXLAN topology with a two-layer IP fabric), see Example: Configuring IRB Interfaces in an EVPN-VXLAN Environment to Provide Layer 3 Connectivity for Hosts in a Data Center.
You can also deploy EVPN-VXLAN over a physical underlay network in which the IP fabric is collapsed into a single layer of QFX10000 switches that function as leaf devices. In this fabric, which is shown in Figure 2, the leaf devices serve as both Layer 2 and Layer 3 gateways. In this topology, transit spine devices provide Layer 3 routing functionality only.
This example describes how to configure an EVPN-VXLAN edge-routed bridging overlay (EVPN-VXLAN topology with a collapsed IP fabric), in particular, the Layer 3 gateway, on a leaf device.
This example uses the following hardware and software components:
Two routers that function as transit spine devices.
Three QFX10000 switches running Junos OS Release 15.1X53-D60 or later. These switches are leaf devices that provide both Layer 2 and Layer 3 gateway functionality.
Two physical (bare-metal) servers and one server with VMs that are supported by a hypervisor.
This example focuses on the configuration of the overlay network on a leaf device. The transit spine devices used in this example provide Layer 3 functionality only. As a result, this example does not include the configuration of these devices.
Further, this example provides the configuration for leaf 1 only. The configuration for leaf 1 essentially serves as a template for the configuration of the other leaf devices. For the configuration of the other leaf devices, where appropriate, you can replace leaf 1-specific information with the information specific to the device you are configuring, add additional commands, and so on.
Overview and Topology
The edge-routed bridging overlay shown in Figure 3 includes two transit spine devices; an IP fabric with three leaf devices that function as both Layer 2 and Layer 3 gateways; two physical servers; and one virtualized server on which VMs and a hypervisor are installed. Physical server 1 is connected to leaf 1 and leaf 2 through a link aggregation group (LAG) interface. On both leaf devices, the interface is assigned the same Ethernet segment identifier (ESI) and set to multihoming active-active mode.
All leaf devices are in the same autonomous system (65200).
In this topology, an application on physical server 1 needs to communicate with VM 1 on the virtualized server. Physical servers 1 and 2 are included in VLAN 1, and the virtualized server is included in VLAN 2. For communication between VLANs 1 and 2 to occur, two IRB interfaces—irb.1, which is associated with VLAN 1, and irb.2, which is associated with VLAN 2—must be configured on each leaf device.
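The reason IRB interfaces are needed follows from ordinary IP forwarding: a host sends directly (bridged within the VLAN) to destinations in its own subnet, and sends via its default gateway, the IRB interface on the local leaf, for everything else. A minimal sketch of that decision (the host addresses 10.1.1.10, 10.1.2.10, and 10.1.1.20 are hypothetical hosts in this example's subnets):

```python
import ipaddress

def needs_gateway(src_ip: str, src_prefixlen: int, dst_ip: str) -> bool:
    """True if dst is outside src's subnet, so traffic must be routed
    through the subnet's gateway (here, an IRB interface on the leaf)."""
    subnet = ipaddress.ip_network(f"{src_ip}/{src_prefixlen}", strict=False)
    return ipaddress.ip_address(dst_ip) not in subnet

# Physical server 1 (VLAN 1) to VM 1 (VLAN 2): routed via irb.1's address
print(needs_gateway("10.1.1.10", 24, "10.1.2.10"))  # → True
# Host to host within VLAN 1: bridged in the VXLAN, no gateway involved
print(needs_gateway("10.1.1.10", 24, "10.1.1.20"))  # → False
```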
The most significant difference between the configuration of an edge-routed bridging overlay and a centrally-routed bridging overlay is the configuration of the Layer 3 gateway. Therefore, this example focuses on the EVPN-VXLAN configuration, in particular, the Layer 3 gateway configuration on the leaf devices.
For the edge-routed bridging overlay, you can configure the IRB interfaces within an EVPN instance using one of the following methods:
Method 1—For each IRB interface on a particular leaf device, for example, leaf 1, the following is specified:
A unique IP address.
The same MAC address.
For example:
irb.1: IP address 10.1.1.1/24, MAC address 00:00:5e:00:53:01
irb.2: IP address 10.1.2.1/24, MAC address 00:00:5e:00:53:01
Method 2—For each IRB interface on leaf 1, the following is specified:
A unique IP address.
A unique MAC address.
For example:
irb.1: IP address 10.1.1.1/24, MAC address 00:00:5e:00:53:aa
irb.2: IP address 10.1.2.1/24, MAC address 00:00:5e:00:53:bb
Regardless of the method that you use to configure the IRB interfaces on leaf 1, if irb.1 and irb.2 are also configured on other leafs, for example, leafs 2 and 3, you must specify the same configurations that you specified on leaf 1 for those IRB interfaces on leafs 2 and 3. For example, Figure 4 shows the configurations for irb.1 and irb.2 on leafs 1, 2, and 3 for both methods.
In this example, method 1 is used to configure the IRB interfaces.
As shown in this example, with the same MAC address configured for each IRB interface on each leaf device, each host uses the same MAC address when sending inter-VLAN traffic regardless of where the host is located or which leaf device receives the traffic. For example, in the topology shown in Figure 3, multi-homed physical server 1 in VLAN 1 sends a packet to VM 1 in VLAN 2. If leaf 1 is down, leaf 2 continues to forward the inter-VLAN traffic even without the configuration of a redundant default gateway MAC address.
Note that the IRB interface configuration used in this example does not include a virtual gateway address (VGA) and a corresponding MAC address, which would establish the redundant default gateway functionality mentioned above. Because the same MAC address is configured for each IRB interface on each leaf device, hosts use the local leaf device configured with the common MAC address as the default Layer 3 gateway. This eliminates the need to advertise a redundant default gateway and to dynamically synchronize the MAC addresses of the redundant default gateway throughout the EVPN control plane. As a result, when configuring each leaf device, you must disable the advertisement of the redundant default gateway by including the default-gateway do-not-advertise configuration statement at the [edit protocols evpn] hierarchy level in your configuration.
Although the IRB interface configuration used in this example does not include a VGA, you can configure it as needed to make EVPN-VXLAN work properly in your edge-routed bridging overlay. If you configure a VGA for each IRB interface, you specify the same IP address for each VGA on each leaf device instead of configuring the same MAC address for each IRB interface on each leaf device as is shown in this example.
When it comes to handling the replication of broadcast, unknown unicast, and multicast (BUM) traffic, note that the configuration on leaf 1:
Includes the set protocols evpn multicast-mode ingress-replication command. This command causes leaf 1, which is a hardware VTEP, to handle the replication and sending of BUM traffic instead of a multicast client in the EVPN-VXLAN topology.
Retains the QFX10000 switch’s default setting of disabled for ingress node replication for EVPN-VXLAN. With this feature disabled, if a QFX10000 switch that functions as a VTEP receives a BUM packet intended, for example, for a physical server in a VLAN with the VNI of 1001, the VTEP replicates and sends the packet only to VTEPs on which the VNI of 1001 is configured. If this feature is enabled, the VTEP replicates and sends this packet to all VTEPs in its database, including those that do not have VNI 1001 configured. To prevent a VTEP from needlessly flooding BUM traffic throughout an EVPN-VXLAN overlay network, we strongly recommend that if not already disabled, you disable ingress node replication on each of the leaf devices by specifying the delete vlans vlan-name vxlan ingress-node-replication command.
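The difference between the two flooding behaviors described above can be modeled as a flood-list computation. The following sketch uses illustrative VTEP-to-VNI data (not taken from an actual device; the 192.168.0.44 entry is a hypothetical VTEP with no interest in VNI 1001):

```python
# Remote VTEPs known to leaf 1, mapped to the set of VNIs each has configured
# (illustrative data only)
remote_vteps = {
    "192.168.0.22": {1001, 1002},  # leaf 2
    "192.168.0.33": {1001, 1002},  # leaf 3
    "192.168.0.44": {2001},        # a VTEP with no interest in VNI 1001
}

def flood_list(vni: int, vteps: dict, ingress_node_replication: bool = False) -> set:
    """Return the VTEPs that receive a replicated copy of a BUM packet for `vni`.

    With ingress node replication disabled (the recommended setting in this
    example), only VTEPs that have the VNI configured are flooded. With it
    enabled, every known VTEP receives a copy, wasting bandwidth.
    """
    if ingress_node_replication:
        return set(vteps)  # flood to every VTEP in the database
    return {ip for ip, vnis in vteps.items() if vni in vnis}
```

With ingress node replication disabled, a BUM packet for VNI 1001 is replicated only to leaf 2 and leaf 3; with it enabled, the uninterested VTEP would needlessly receive a copy as well.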
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.
Configuring EVPN-VXLAN on Leaf 1
Enable physical server 1 to be multihomed to leaf 1 and leaf 2 by configuring an aggregated Ethernet interface, specifying an ESI for the interface, and setting the mode so that the connections to both leaf devices are active.
When configuring the ae202 interface on leaf 2, you must specify the same ESI (00:11:22:33:44:55:66:77:88:99) that is specified for the same interface on leaf 1.
user@switch# set interfaces et-0/0/53 ether-options 802.3ad ae202
user@switch# set interfaces ae202 esi 00:11:22:33:44:55:66:77:88:99
user@switch# set interfaces ae202 esi all-active
user@switch# set interfaces ae202 aggregated-ether-options lacp active
user@switch# set interfaces ae202 aggregated-ether-options lacp system-id 00:00:00:04:04:04
user@switch# set interfaces ae202 unit 0 family ethernet-switching interface-mode trunk
user@switch# set interfaces ae202 unit 0 family ethernet-switching vlan members 1-2
Configure two IRB interfaces, each with a unique IP address and the same MAC address.
user@switch# set interfaces irb unit 1 family inet address 10.1.1.1/24
user@switch# set interfaces irb unit 1 mac 00:00:5e:00:53:01
user@switch# set interfaces irb unit 2 family inet address 10.1.2.1/24
user@switch# set interfaces irb unit 2 mac 00:00:5e:00:53:01
Configure a loopback interface (lo0.0) for the leaf device and a logical loopback address (lo0.1) for the EVPN routing instance (VRF_1).
user@switch# set interfaces lo0 unit 0 family inet address 192.168.0.11/32
user@switch# set interfaces lo0 unit 1 family inet address 192.168.10.11/32
Set up the IBGP overlay network.
user@switch# set protocols bgp group overlay-evpn type internal
user@switch# set protocols bgp group overlay-evpn local-address 192.168.0.11
user@switch# set protocols bgp group overlay-evpn family evpn signaling
user@switch# set protocols bgp group overlay-evpn local-as 65200
user@switch# set protocols bgp group overlay-evpn multipath
user@switch# set protocols bgp group overlay-evpn neighbor 192.168.0.22
user@switch# set protocols bgp group overlay-evpn neighbor 192.168.0.33
Set up the EVPN-VXLAN domain, which entails determining which VNIs are included in the domain, specifying that leaf 1, which is a hardware VTEP, handles the replication and sending of BUM traffic, disabling the advertisement of the redundant default gateway throughout the EVPN control plane, and specifying a route target for each VNI.
user@switch# set protocols evpn encapsulation vxlan
user@switch# set protocols evpn extended-vni-list 1001
user@switch# set protocols evpn extended-vni-list 1002
user@switch# set protocols evpn multicast-mode ingress-replication
user@switch# set protocols evpn default-gateway do-not-advertise
user@switch# set protocols evpn vni-options vni 1001 vrf-target export target:1:1001
user@switch# set protocols evpn vni-options vni 1002 vrf-target export target:1:1002
Set up communities for the VNIs, and create policies that import and accept the overlay routes.
user@switch# set policy-options community comm-leaf_esi members target:9999:9999
user@switch# set policy-options community com1001 members target:1:1001
user@switch# set policy-options community com1002 members target:1:1002
user@switch# set policy-options policy-statement LEAF-IN term import_leaf_esi from community comm-leaf_esi
user@switch# set policy-options policy-statement LEAF-IN term import_leaf_esi then accept
user@switch# set policy-options policy-statement vrf-1-to-200 term import_vni1001 from community com1001
user@switch# set policy-options policy-statement vrf-1-to-200 term import_vni1001 then accept
user@switch# set policy-options policy-statement vrf-1-to-200 term import_vni1002 from community com1002
user@switch# set policy-options policy-statement vrf-1-to-200 term import_vni1002 then accept
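The import policies above act as community filters: a term accepts a route when the route carries the term's community. A simplified model of that behavior (the route representation here is hypothetical; real Junos policy evaluation has many more match conditions and actions):

```python
def accept_route(route_communities: set, policy_terms: list) -> bool:
    """Return True if any policy term matches the route.

    Each term is modeled as a set of communities; a route is accepted by a
    term when it carries at least one of that term's communities. This is a
    simplification of Junos `from community ... then accept` semantics.
    """
    return any(route_communities & term for term in policy_terms)

# Terms modeled from the policies in this example
LEAF_IN = [{"target:9999:9999"}]                       # comm-leaf_esi
VRF_1_TO_200 = [{"target:1:1001"}, {"target:1:1002"}]  # com1001, com1002

# A hypothetical EVPN route exported with the VNI 1001 route target:
route = {"target:1:1001"}
print(accept_route(route, VRF_1_TO_200))  # → True
print(accept_route(route, LEAF_IN))       # → False
```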
Set up an EVPN routing instance.
user@switch# set routing-instances VRF_1 instance-type vrf
user@switch# set routing-instances VRF_1 interface irb.1
user@switch# set routing-instances VRF_1 interface irb.2
user@switch# set routing-instances VRF_1 interface lo0.1
user@switch# set routing-instances VRF_1 route-distinguisher 192.168.0.11:1
user@switch# set routing-instances VRF_1 vrf-target target:1:1
In the above EVPN routing instance configuration, a unique logical loopback interface (lo0.1) is specified, and an IP address for the interface is specified using the set interfaces lo0 unit logical-unit-number family inet address ip-address/prefix command. All items configured in the above routing instance except for the logical loopback interface are required for EVPN. However, the configuration of a logical loopback interface and associated IP address are required to ensure that VXLAN control packets are properly processed.
Configure the switch options to use loopback interface lo0.0 as the source interface of the VTEP, set a route distinguisher, and import the route targets for the three communities into the EVPN (MAC) table.
user@switch# set switch-options vtep-source-interface lo0.0
user@switch# set switch-options route-distinguisher 192.168.0.11:5000
user@switch# set switch-options vrf-import LEAF-IN
user@switch# set switch-options vrf-import vrf-1-to-200
user@switch# set switch-options vrf-target target:9999:9999
Configure VLANs to which IRB interfaces and VXLAN VNIs are associated.
user@switch# set vlans bd1 vlan-id 1
user@switch# set vlans bd1 l3-interface irb.1
user@switch# set vlans bd1 vxlan vni 1001
user@switch# set vlans bd2 vlan-id 2
user@switch# set vlans bd2 l3-interface irb.2
user@switch# set vlans bd2 vxlan vni 1002
If not already disabled, disable ingress node replication to prevent leaf 1 from needlessly flooding BUM traffic throughout the EVPN-VXLAN overlay network.
user@switch# delete vlans bd1 vxlan ingress-node-replication
user@switch# delete vlans bd2 vxlan ingress-node-replication
This section describes the following verifications for this example:
Verifying the IRB Interfaces
Verify that the IRB interfaces are up and running.
Display the status of the IRB interfaces:
user@leaf1> show interfaces irb terse
Interface    Admin Link Proto Local        Remote
irb          up    up
irb.1        up    up   inet  10.1.1.1/24
irb.2        up    up   inet  10.1.2.1/24
The IRB interfaces are up and running.
Verifying the VTEP Interfaces
Verify the status of the VTEP interfaces.
Display the status of the VTEP interfaces:
user@leaf1> show interfaces vtep terse
Interface    Admin Link Proto       Local Remote
vtep         up    up
vtep.32769   up    up   eth-switch
vtep.32770   up    up   eth-switch
vtep.32771   up    up   eth-switch
The interface for each of the VTEPs is up. Therefore, the VTEP interfaces are functioning normally.
Verifying the EVPN Routing Instance
Verify the routing table for VRF_1.
Verify the routing table for the EVPN routing instance VRF_1.
user@leaf1> show route table VRF_1.inet.0
VRF_1.inet.0: 5 destinations, 5 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.1.1.0/24        *[Direct/0] 00:07:38
                   > via irb.1
10.1.1.1/32        *[Local/0] 00:07:38
                      Local via irb.1
10.1.2.0/24        *[Direct/0] 00:07:38
                   > via irb.2
10.1.2.1/32        *[Local/0] 00:07:38
                      Local via irb.2
192.168.10.11/32   *[Direct/0] 00:07:38
                   > via lo0.1
The EVPN routing instance is functioning correctly.