Example: Configuring an EVPN-VXLAN Deployment Using the Virtual Gateway Address

This example shows how to configure an Ethernet VPN (EVPN)-Virtual Extensible LAN (VXLAN) deployment using the virtual gateway address.

Requirements

This example uses the following hardware and software components:

  • Two MX960 3D Universal Edge Router gateways

  • Two top-of-rack (ToR) QFX5100 switches

  • Three end host devices

  • Junos OS Release 14.2R6 or later (for MX960 routers)

  • Junos OS Release 14.1X53-D30 or later (for QFX5100 switches)

Overview and Topology

Figure 1 shows a topology example for configuring the virtual gateway address in an EVPN-VXLAN deployment. It shows two QFX Series switches (192.168.0.122 and 192.168.0.125) (acting as ToRs, or leaf devices) providing Layer 2 gateway functionality, and two MX Series routers (192.168.0.212 and 192.168.0.210) functioning as spine devices and providing Layer 3 default gateway functionality.

Note:

This topology example assumes that the underlay has already been configured and is not shown in the diagram.

Figure 1: EVPN-VXLAN Virtual Gateway Address Topology Example
Note:

Sending pings to the virtual gateway IP address is currently not supported.

For the two MX Series routers, configure the following information:

  • IRB interfaces, virtual gateway addresses, and loopback logical interfaces.

  • Multiprotocol internal BGP (MP-IBGP) overlays between the spine and leaf devices, using BGP route reflection, and EVPN as the signaling protocol.

  • Routing policies to allow specific routes into the virtual-switch tables.

  • Routing instances (Layer 3 VRFs) for each virtual network, including a unique route distinguisher, and a vrf-target value.

  • Virtual-switch instances (Layer 2 MAC-VRFs) for each virtual network, the VTEP source interface (always lo0.0), route distinguisher, and vrf-import policy.

  • EVPN protocol, encapsulation method, VNI list, and BUM traffic forwarding method for each virtual switch.

  • Bridge domain within each virtual switch that maps VNIDs to VLAN IDs, an IRB (Layer 3) interface, and the BUM forwarding method.
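The IRB and virtual gateway portion of the spine configuration can be sketched in set format as follows. This is a minimal illustration, not a verified configuration: the IRB unit numbers and subnet masks are assumptions, while the addresses follow the unique IRB and anycast values used elsewhere in this example (10.10.0.101/10.10.0.151 for VNI 50 and 10.20.0.201/10.20.0.251 for VNI 51 on MX1).

```
# Hypothetical sketch for MX1 -- unit numbers and masks are assumed.
set interfaces irb unit 50 family inet address 10.10.0.101/24 virtual-gateway-address 10.10.0.151
set interfaces irb unit 51 family inet address 10.20.0.201/24 virtual-gateway-address 10.20.0.251
set interfaces lo0 unit 0 family inet address 192.168.0.212/32
```

The virtual-gateway-address statement causes both spine devices to answer for the shared anycast IP and MAC address, while the unique address on each IRB interface remains available for device-specific traffic such as the pings in the Verification section.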

For the two QFX Series switches (ToRs), configure the following information:

  • Host-facing interfaces with VLANs, VLAN IDs, and loopback logical interfaces.

  • Link Aggregation Control Protocol (LACP)-enabled link aggregation group (LAG), Ethernet Segment ID (ESI), and all-active mode.

  • Multiprotocol internal BGP (MP-IBGP) overlays between the leaf and spine devices, and EVPN as the signaling protocol.

  • EVPN with VXLAN as the encapsulation method, extended-vni-list, multicast mode, and route targets for each VNI.

  • Vrf-imp policy, vtep-source-interface, route-distinguisher, and vrf import and target information.

  • VLANs, with VLAN IDs mapped to globally significant VNIs, and VXLAN ingress node replication.
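The LAG, ESI, and all-active portion of the leaf configuration can be sketched as follows. This is a hypothetical fragment: the member interface name, ESI value, and VLAN name are assumptions for illustration. The same ESI and all-active setting must be configured on both ToR1 and ToR2 so that they form one Ethernet segment toward the multihomed host.

```
# Hypothetical sketch -- interface name, ESI value, and VLAN name are assumed.
set interfaces xe-0/0/25 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
set interfaces ae0 esi all-active
set interfaces ae0 unit 0 family ethernet-switching vlan members v51
```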

Note:

You can set the virtual gateway address as the default IPv4 or IPv6 gateway address for end hosts (virtual machines or servers).
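On an end host, the virtual gateway address (not either spine's unique IRB address) is what you would configure as the default gateway. A minimal Linux sketch, assuming a host in the subnet served by VNI 50 and the anycast gateway address 10.10.0.151 used in this example:

```
# Assumes a host in the 10.10.0.0/24 subnet; 10.10.0.151 is the
# anycast (virtual gateway) address, not a specific spine's IRB address.
ip route add default via 10.10.0.151
```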

Configuration

This section provides step-by-step instructions for configuring a complete EVPN-VXLAN deployment with a virtual gateway address:

Configuring Routing Instances and Bridge Domains for MX1

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Step-by-Step Procedure

The following example requires that you navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.

  1. Configure an integrated routing and bridging (IRB) interface for each of the two virtual networks (VNs), including a virtual gateway address to act as a common MAC address and IP address across both MX Series (spine) devices.

  2. Configure the loopback interface.

  3. Configure a multiprotocol internal BGP (MP-IBGP) overlay between the spine and leaf devices, using BGP route reflection, and set EVPN as the signaling protocol.

  4. Configure a second MP-IBGP overlay to connect the spine devices to each other using EVPN signaling.

  5. Configure routing policies to allow specific routes into the virtual-switch tables. Ensure that the policy includes target 9999:9999 so that the virtual switches import the Type-1 Ethernet Segment ID (ESI) routes from the ToR/Leaf devices.

  6. Configure routing instances (Layer 3 VRFs) for each virtual network. Assign each routing instance a unique route distinguisher, associate the appropriate IRB interface, and assign a vrf-target value.

  7. Configure virtual-switch instances (Layer 2 MAC-VRFs) for each virtual network. Define the VTEP source interface (always lo0.0), route distinguisher (used to identify and advertise EVPN routes), vrf-import policy (defines which route targets to import into the virtual switches’ EVPN tables), and vrf-target (exports and tags all routes for that local VRF using the defined route target). Then for each virtual switch, configure the EVPN protocol, encapsulation method, VNI list, and BUM traffic forwarding method. Finally, configure a bridge domain for each virtual switch that maps VNIDs to VLAN IDs, associate an IRB (Layer 3) interface, and identify the BUM forwarding method.
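Steps 6 and 7 can be sketched in set format as follows for one virtual network. This is a hypothetical fragment, not the complete configuration: the Layer 3 VRF name, route distinguishers, policy name, and target communities are assumptions (VS_VLAN50 matches the instance name used in the Verification section, and 192.168.0.212 is MX1's loopback address).

```
# Hypothetical sketch for VNI/VLAN 50 on MX1 -- route distinguishers,
# policy name, and target values are assumed for illustration.
set routing-instances VRF_50 instance-type vrf
set routing-instances VRF_50 interface irb.50
set routing-instances VRF_50 route-distinguisher 192.168.0.212:50
set routing-instances VRF_50 vrf-target target:65000:50
set routing-instances VS_VLAN50 instance-type virtual-switch
set routing-instances VS_VLAN50 vtep-source-interface lo0.0
set routing-instances VS_VLAN50 route-distinguisher 192.168.0.212:100
set routing-instances VS_VLAN50 vrf-import VS_VLAN50_IMP
set routing-instances VS_VLAN50 vrf-target target:65000:150
set routing-instances VS_VLAN50 protocols evpn encapsulation vxlan
set routing-instances VS_VLAN50 protocols evpn extended-vni-list 50
set routing-instances VS_VLAN50 bridge-domains bd50 vlan-id 50
set routing-instances VS_VLAN50 bridge-domains bd50 routing-interface irb.50
set routing-instances VS_VLAN50 bridge-domains bd50 vxlan vni 50
set routing-instances VS_VLAN50 bridge-domains bd50 vxlan ingress-node-replication
```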

Configuring Routing Instances and Bridge Domains for MX2

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Step-by-Step Procedure

The following example requires that you navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.

  1. Configure an integrated routing and bridging (IRB) interface for each of the two virtual networks (VNs), including a virtual gateway address to act as a common MAC address and IP address across both MX Series (spine) devices.

  2. Configure the loopback interface.

  3. Configure a multiprotocol internal BGP (MP-IBGP) overlay between the spine and leaf devices, using BGP route reflection, and set EVPN as the signaling protocol.

  4. Configure a second MP-IBGP overlay to connect the spine devices to each other using EVPN signaling.

  5. Configure routing policies to allow specific routes into the virtual-switch tables. Ensure that the policy includes target 9999:9999 so that the virtual switches import the Type-1 Ethernet Segment ID (ESI) routes from the ToR/Leaf devices.

  6. Configure routing instances (Layer 3 VRFs) for each virtual network. Assign each routing instance a unique route distinguisher, associate the appropriate IRB interface, and assign a vrf-target value.

  7. Configure virtual-switch instances (Layer 2 MAC-VRFs) for each virtual network. Define the VTEP source interface (always lo0.0), route distinguisher (used to identify and advertise EVPN routes), vrf-import policy (defines which route targets to import into the virtual switches’ EVPN tables), and vrf-target (exports and tags all routes for that local VRF using the defined route target). Then for each virtual switch, configure the EVPN protocol, encapsulation method, VNI list, and BUM traffic forwarding method. Finally, configure a bridge domain for each virtual switch that maps VNIDs to VLAN IDs, associate an IRB (Layer 3) interface, and identify the BUM forwarding method.

Configuring Interfaces and VLANs for ToR1

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Step-by-Step Procedure

The following example requires that you navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.

  1. Create and configure the host-facing interface towards the CE2 end host device, and configure its VLAN information.

  2. Create and configure the host-facing interface towards the CE25 end host device, and configure it as a member of the aggregated Ethernet bundle ae0.

  3. Configure a Link Aggregation Control Protocol (LACP)-enabled link aggregation group (LAG) interface towards the CE25 end host device. The Ethernet Segment ID (ESI) is globally unique across the entire EVPN domain. The all-active configuration enables both ToR1 and ToR2 to forward traffic to and from the CE25 end host device.

  4. Configure the loopback interface.

  5. Configure a multiprotocol internal BGP (MP-IBGP) overlay between the leaf and spine devices and configure EVPN as the signaling protocol.

  6. Configure EVPN using VXLAN as the encapsulation method, configure the extended-vni-list to establish which VNIs are part of the EVPN-VXLAN MP-BGP domain, set the multicast mode to use ingress-replication (instead of using a multicast underlay), and then configure route targets for each VNI under vni-options.

  7. Configure the vrf-imp policy to identify and permit the target communities to be imported into the default-switch.evpn.0 instance from bgp.evpn.0.

  8. Configure the vtep-source-interface (which is always set to lo0.0), the route-distinguisher, and vrf import and target information.

    Note:

    The route-distinguisher must be unique, network-wide, across all switches to ensure all route advertisements within MP-BGP are globally unique. The vrf-target tags outbound routing information for the switch, including (at a minimum) all ESI (Type-1) routes. The vrf-import statement references the vrf-imp policy to allow inbound routing information from remote devices.

  9. Define the VLANs, map locally significant VLAN IDs to globally significant VNIs, and set VXLAN ingress node replication.
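The EVPN, switch-options, and VLAN pieces from steps 6 through 9 can be sketched as follows. This is a hypothetical fragment: the per-VNI route-target values and VLAN names are assumptions (the 9999:9999 community matches the ESI import target referenced in the spine routing policy, and 192.168.0.122 is ToR1's loopback address).

```
# Hypothetical sketch for ToR1 -- target values and VLAN names are assumed.
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list 50
set protocols evpn extended-vni-list 51
set protocols evpn multicast-mode ingress-replication
set protocols evpn vni-options vni 50 vrf-target export target:65000:50
set protocols evpn vni-options vni 51 vrf-target export target:65000:51
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.168.0.122:1
set switch-options vrf-import vrf-imp
set switch-options vrf-target target:9999:9999
set vlans v50 vlan-id 50
set vlans v50 vxlan vni 50
set vlans v50 vxlan ingress-node-replication
set vlans v51 vlan-id 51
set vlans v51 vxlan vni 51
set vlans v51 vxlan ingress-node-replication
```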

Configuring Interfaces and VLANs for ToR2

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Step-by-Step Procedure

The following example requires that you navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.

  1. Create and configure the host-facing interface towards the CE5 end host device, and configure its VLAN information.

  2. Create and configure the host-facing interface towards the CE25 end host device, and configure it as a member of the aggregated Ethernet bundle ae0.

  3. Configure a Link Aggregation Control Protocol (LACP)-enabled link aggregation group (LAG) interface towards the CE25 end host device. The Ethernet Segment ID (ESI) is globally unique across the entire EVPN domain. The all-active configuration enables both ToR1 and ToR2 to forward traffic to and from the CE25 end host device.

  4. Configure the loopback interface.

  5. Configure a multiprotocol internal BGP (MP-IBGP) overlay between the leaf and spine devices and configure EVPN as the signaling protocol.

  6. Configure EVPN using VXLAN as the encapsulation method, configure the extended-vni-list to establish which VNIs are part of the EVPN-VXLAN MP-BGP domain, set the multicast mode to use ingress-replication (instead of using a multicast underlay), and then configure route targets for each VNI under vni-options.

  7. Configure the vrf-imp policy to identify and permit the target communities to be imported into the default-switch.evpn.0 instance from bgp.evpn.0.

  8. Configure the vtep-source-interface (which is always set to lo0.0), the route-distinguisher, and vrf import and target information.

    Note:

    The route-distinguisher must be unique, network-wide, across all switches to ensure all route advertisements within MP-BGP are globally unique. The vrf-target tags outbound routing information for the switch, including (at a minimum) all ESI (Type-1) routes. The vrf-import statement references the vrf-imp policy to allow inbound routing information from remote devices.

  9. Define the VLANs, map locally significant VLAN IDs to globally significant VNIs, and set VXLAN ingress node replication.

Verification

Confirm that the configuration is working properly.

Verifying Connectivity from MX1 to the End Host Devices

Purpose

Verify that the MX1 router gateway can ping the CE2, CE5, and CE25 end host devices.

Action

Enter the run ping 10.10.0.2 routing-instance VS_VLAN50 command to ping the CE2 end host device.

Enter the run ping 10.10.0.5 routing-instance VS_VLAN50 command to ping the CE5 end host device.

Enter the run ping 10.20.0.25 routing-instance VS_VLAN51 command to ping the CE25 end host device.

Meaning

Ping from the MX1 router gateway to the CE2, CE5, and CE25 end host devices is successful.

When sending a ping from the MX Series router gateway, the gateway uses the unique part of the IRB IP address as its source, which enables the ICMP response to be received on that address, resulting in a successful ping. The anycast part of the IRB IP address is used for gateway redundancy.

Verifying Connectivity from MX2 to the End Host Devices

Purpose

Verify that the MX2 router gateway can ping the CE2, CE5, and CE25 end host devices.

Action

Enter the run ping 10.10.0.2 routing-instance VS_VLAN50 command to ping the CE2 end host device.

Enter the run ping 10.10.0.5 routing-instance VS_VLAN50 command to ping the CE5 end host device.

Enter the run ping 10.20.0.25 routing-instance VS_VLAN51 command to ping the CE25 end host device.

Meaning

Ping from the MX2 router gateway to the CE2, CE5, and CE25 end host devices is successful.

When sending a ping from the MX Series router gateway, the gateway uses the unique part of the IRB IP address as its source, which enables the ICMP response to be received on that address, resulting in a successful ping. The anycast part of the IRB IP address is used for gateway redundancy.

Verifying IRB Virtual (Anycast) Gateway Reachability on ToR1

Purpose

Verify that the leaf devices (ToR devices) have reachability to the IRB virtual gateways for VNI 50 and VNI 51, and that ESI information is being received from both MX1 and MX2 devices.

Action

  1. Enter the show route receive-protocol bgp 192.168.0.212 command to display the EVPN routes received from MX1.

  2. Enter the show route table default-switch.evpn.0 evpn-esi-value 05:00:00:ff:78:00:00:06:7d:00 command to display the Type 1 ESI routes for VNI 50 in the default-switch.evpn.0 table.

Meaning

From the sample output for the show route receive-protocol bgp 192.168.0.212 command, ToR1 is receiving Type 1 advertisements for the auto-generated ESIs for the IRB anycast gateways on MX1. It also shows the Type 2 advertisements for the IRB anycast MAC and IP addresses (00:00:5e:00:53:01/10.10.0.151 and 00:00:5e:00:53:01/10.20.0.251), and the IRB physical MAC and IP addresses (00:00:5e:00:53:f0/10.10.0.101 and 00:00:5e:00:53:f0/10.20.0.201).

Note:

ToR1 receives similar route advertisements from MX2.

From the sample output for the show route table default-switch.evpn.0 evpn-esi-value 05:00:00:ff:78:00:00:06:7d:00 command, ToR1 installs the ESI advertisements received from MX1 (192.168.0.212) and MX2 (192.168.0.210) into the default-switch table.

Verifying Virtual Gateway Address VLAN Mappings on ToR1

Purpose

Verify that the IRB virtual gateways for VNI 50 and VNI 51 correctly map to their related VLANs on the leaf (ToR) devices, so that end hosts reach their designated default gateway.

Action

Enter the show ethernet-switching table vlan-id 50 command to display the members of VLAN 50.

Enter the show ethernet-switching table vlan-id 51 command to display the members of VLAN 51.

Meaning

The output shows the MAC addresses and auto-generated ESIs for the IRB anycast gateways. This means the gateways are correctly being mapped to their respective VLANs.

Note:

The Junos OS version used on the ToR (QFX5100) devices in this configuration example load-balances anycast gateways per VNI. For a given VNI, the switch forwards traffic to a single VTEP.

Verifying Intrasubnet and Intersubnet Traffic Connectivity Between End Host Devices

Purpose

Verify that there is intrasubnet and intersubnet traffic connectivity between the end host devices: CE2, CE5, and CE25.

Action

Enter the run ping 10.10.0.2 command to ping from the CE5 end host device to the CE2 end host device to verify intrasubnet traffic.

Enter the run ping 10.20.0.25 command to ping from the CE5 end host device to the CE25 end host device to verify intersubnet traffic.

Meaning

Intrasubnet (from CE5 end host device to CE2 end host device) and intersubnet (from CE5 end host device to CE25 end host device) traffic connectivity is operational.