
Example: Configuring the IaaS: EVPN and VXLAN Solution

 

This example describes how to build, configure, and verify a bare metal server (BMS) network containing a BGP-based IP fabric underlay, supported by an EVPN and VXLAN overlay.

Requirements

Table 1 lists the hardware and software components used in this example.

Table 1: Solution Hardware and Software Requirements

Device            Hardware            Software

Fabric devices    QFX5100-24Q         Junos OS Release 14.1X53-D30.3
Spine devices     QFX10002-72Q        Junos OS Release 15.1X53-D60.4
Leaf devices      QFX5100-48S         Junos OS Release 14.1X53-D35.3
Host emulation    Traffic Generator   -

Overview and Topology

The topology used in this example consists of a series of QFX5100 and QFX10002 switches, as shown in Figure 1.

Figure 1: IaaS: EVPN and VXLAN Solution - Underlay Topology

In this example, the fabric layer has four QFX5100-24Q switches, the spine layer has four QFX10002-72Q switches, and the leaf layer uses four QFX5100-48S switches. Leaf 1, Leaf 2, Spine 1, and Spine 2 are included in a single point of delivery (POD) named POD 1; and Leaf 3, Leaf 4, Spine 3, and Spine 4 are included in POD 2. Both data center PODs connect to the fabric layer, which provides inter-POD connectivity.

Note

This topology simulates conditions for PODs located either in the same data center or in different data centers.

Two hosts are connected to each of Leaf 1, Leaf 2, and Leaf 3. One host is dual-homed to Leaf 3 and Leaf 4 through Switch 5, and one host is single-homed to Leaf 4.

Figure 1 also shows the EBGP underlay for the solution, which assigns each device an individual autonomous system number and a unique loopback address to simplify monitoring and troubleshooting of the network.
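
The underlay configuration is shown later in this example. As a preview, a minimal sketch of the Leaf 1 side of one underlay EBGP session follows, using the ASNs and loopback addresses from Table 4; the uplink interface name and the /31 link addressing are assumptions for illustration.

# Leaf 1 underlay sketch (uplink interface and link addressing are assumed)
set interfaces et-0/0/48 unit 0 family inet address 172.16.0.1/31
set interfaces lo0 unit 0 family inet address 10.0.0.21/32
set policy-options policy-statement UNDERLAY-LOOPBACKS term 1 from route-filter 10.0.0.0/24 orlonger
set policy-options policy-statement UNDERLAY-LOOPBACKS term 1 then accept
set protocols bgp group UNDERLAY type external
set protocols bgp group UNDERLAY export UNDERLAY-LOOPBACKS
set protocols bgp group UNDERLAY local-as 65021
set protocols bgp group UNDERLAY neighbor 172.16.0.0 peer-as 65011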

The topology for the overlay is shown in Figure 2.

Figure 2: IaaS: EVPN and VXLAN Solution - Overlay Topology

A full mesh IBGP configuration connects the spine devices together, and all spine and leaf devices belong to a single autonomous system (65200). A route reflector cluster is assigned to each POD and enables the leaf devices within the POD to have redundant connections to the spine layer.
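
A minimal sketch of the Spine 1 side of this overlay, using the addresses and overlay AS from Table 4, might look like the following; the group name and the use of the loopback address as the cluster ID are assumptions.

# Spine 1 overlay sketch: route reflector for the POD 1 leaf devices
set routing-options autonomous-system 65200
set protocols bgp group OVERLAY type internal
set protocols bgp group OVERLAY local-address 10.0.0.11
set protocols bgp group OVERLAY family evpn signaling
set protocols bgp group OVERLAY cluster 10.0.0.11    # assumed cluster ID
set protocols bgp group OVERLAY neighbor 10.0.0.21
set protocols bgp group OVERLAY neighbor 10.0.0.22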

The example included in this solution explores the use of both Type 2 and Type 5 EVPN routes and contains configuration excerpts to enable you to select either option. In Figure 3, Type 2 routes are distributed within the same VLAN.

Figure 3: IaaS: EVPN and VXLAN Solution - Type 2 Intra-VLAN Traffic

As shown, when traffic flows between hosts that are connected to the same leaf device (1.1), the traffic stays local to the leaf and does not need to be sent to the upper layers.

To reach hosts connected to other leaf devices in the same POD (1.2), traffic travels between the leaf devices and spine devices across the IP fabric. Host traffic is switched using a VXLAN tunnel established between the leaf devices. The ingress leaf device encapsulates the host traffic with a VXLAN header, the traffic is switched using the outer header, and it travels over the spine layer to reach the other leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.

To reach hosts located in another POD (1.3), the traffic must be sent up through the leaf, spine, and fabric layers and then down through the spine and leaf layers in the second POD to reach the destination host. The VXLAN tunnel established between the leaf devices in the different PODs enables traffic to travel from the ingress leaf device, across the spine layer in the first POD, through the fabric layer, to the spine layer in the second POD, and to the egress leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.

Figure 4 shows how Type 2 routes are handled between different VLANs.

Figure 4: IaaS: EVPN and VXLAN Solution - Type 2 Inter-VLAN Traffic

As shown, the process is the same for all three cases of inter-VLAN traffic because they each require Layer 3 routing (1.1, 1.2, and 1.3). Host traffic containing an inner header is encapsulated with a VXLAN header and an outer header that lists the local spine device as the destination. The spine device strips the outer header, de-encapsulates the VXLAN header, performs a route lookup for the inner header, and forwards the traffic across an EVPN routing instance to the respective host using a VXLAN tunnel that references the appropriate leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the desired host. In this example, VLANs 100 to 108 illustrate intra-VLAN and inter-VLAN traffic using EVPN route Type 2.
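
This routing function is provided by IRB interfaces on the spine devices that share the anycast gateway addresses listed in Table 2 and Table 3. A minimal sketch for VLAN 100 on Spine 1 follows; the unicast IRB addresses, the VLAN name, and the VLAN-to-VNI mapping are assumptions for illustration.

# Spine 1 IRB sketch for VLAN 100 (unicast addresses, VLAN name, and VNI are assumed)
set interfaces irb unit 100 proxy-macip-advertisement
set interfaces irb unit 100 family inet address 10.1.100.211/16 virtual-gateway-address 10.1.100.1
set interfaces irb unit 100 family inet6 address 2001:db8:10:1:100::211/80 virtual-gateway-address 2001:db8:10:1:100::1
set vlans VLAN-100 vlan-id 100
set vlans VLAN-100 l3-interface irb.100
set vlans VLAN-100 vxlan vni 1000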

As a final option, Figure 5 shows how Type 5 routes are handled between VLANs.

Figure 5: IaaS: EVPN and VXLAN Solution - Type 2 and Type 5 Inter-VLAN Traffic

For the first two cases (1.1 and 1.2), inter-VLAN traffic is handled the same as shown in Figure 4. However, when sending Type 5 inter-VLAN traffic between different data centers (1.3), the host traffic is encapsulated with a VXLAN header and an outer header that lists the local spine device as the destination. The local spine device de-encapsulates the VXLAN header, performs a route lookup for the inner header, and forwards the traffic across an EVPN routing instance to the remote spine device in the second POD by using a VXLAN header. The remote spine device de-encapsulates the packet and performs a route lookup for the respective routing instance based on the VNI number. The spine device then encapsulates the traffic and sends it across a VXLAN tunnel to the respective leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host. In this example, VLANs 999 (Spine 1 and Spine 2) and 888 (Spine 3 and Spine 4) illustrate inter-VLAN traffic using EVPN route Type 5.

Note

At the time this guide was written, Type 5 routes could be used only for inter-VLAN topologies. To support intra-VLAN topologies, use Type 2 routes.
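
For reference, the spine-side EVPN Type 5 routing instance configured later in this example might be sketched as follows. The instance name TYPE-5 matches the verification commands later in this example, while the route distinguisher, VRF target, IRB unit, and VNI value are assumptions.

# Spine 1 Type 5 routing-instance sketch (RD, VRF target, IRB unit, and VNI are assumed)
set routing-instances TYPE-5 instance-type vrf
set routing-instances TYPE-5 interface irb.999
set routing-instances TYPE-5 route-distinguisher 10.0.0.11:999
set routing-instances TYPE-5 vrf-target target:65200:999
set routing-instances TYPE-5 protocols evpn ip-prefix-routes advertise direct-nexthop
set routing-instances TYPE-5 protocols evpn ip-prefix-routes encapsulation vxlan
set routing-instances TYPE-5 protocols evpn ip-prefix-routes vni 9999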

Table 2 lists the IPv4 addresses used in this example, Table 3 displays the IPv6 addresses used in this example, and Table 4 lists the loopback addresses and autonomous system numbers for the fabric, spine, and leaf devices.

Table 2: IPv4 Addressing

Fabric to spine point-to-point links: 172.16.0.0/24

Spine to leaf point-to-point links: 172.16.0.0/24

Loopback IP addresses (for all devices): 10.0.0.0/24

Anycast IPv4 addresses: a set of nine addresses that increment the third octet and use .1 for the fourth octet:

  • 10.1.100.1/16
  • 10.1.101.1/16
  • 10.1.102.1/16
  • 10.1.103.1/16
  • 10.1.104.1/16
  • 10.1.105.1/16
  • 10.1.106.1/16
  • 10.1.107.1/16
  • 10.1.108.1/16

Server/traffic generator IPv4 host devices: a range of five addresses (0 - 4) per host, with the host number represented in the tens place of the final octet. For example, Host 7 has the following range of addresses: 10.1.100.70/16 - 10.1.100.74/16.

Table 3: IPv6 Addressing

Anycast IPv6 addresses: a set of nine addresses that increment the fifth double-octet and use :1 for the final double-octet:

  • 2001:db8:10:1:100::1/80
  • 2001:db8:10:1:101::1/80
  • 2001:db8:10:1:102::1/80
  • 2001:db8:10:1:103::1/80
  • 2001:db8:10:1:104::1/80
  • 2001:db8:10:1:105::1/80
  • 2001:db8:10:1:106::1/80
  • 2001:db8:10:1:107::1/80
  • 2001:db8:10:1:108::1/80

Server/traffic generator IPv6 host devices: a set of addresses that increment the fifth double-octet and use :<210 + spine-number> for the final double-octet. For example, for Spine 1, 210 + 1 equals 211, so the corresponding IPv6 addresses are as follows:

  • 2001:db8:10:1:100::211/80
  • 2001:db8:10:1:101::211/80
  • 2001:db8:10:1:102::211/80
  • 2001:db8:10:1:103::211/80
  • 2001:db8:10:1:104::211/80
  • 2001:db8:10:1:105::211/80
  • 2001:db8:10:1:106::211/80
  • 2001:db8:10:1:107::211/80
  • 2001:db8:10:1:108::211/80

Table 4: Loopback Addresses and Underlay ASNs for Fabric Devices, Spine Devices, and Leaf Devices

 

Device      Loopback Address    ASN

Fabric 1    10.0.0.1            65001
Fabric 2    10.0.0.2            65002
Fabric 3    10.0.0.3            65003
Fabric 4    10.0.0.4            65004
Spine 1     10.0.0.11           65011 (underlay), 65200 (overlay)
Spine 2     10.0.0.12           65012 (underlay), 65200 (overlay)
Spine 3     10.0.0.13           65013 (underlay), 65200 (overlay)
Spine 4     10.0.0.14           65014 (underlay), 65200 (overlay)
Leaf 1      10.0.0.21           65021 (underlay), 65200 (overlay)
Leaf 2      10.0.0.22           65022 (underlay), 65200 (overlay)
Leaf 3      10.0.0.23           65023 (underlay), 65200 (overlay)
Leaf 4      10.0.0.24           65024 (underlay), 65200 (overlay)

Configuring the IaaS: EVPN and VXLAN Solution

Note

You can use Ansible scripts to generate a large portion of the IP fabric and EVPN VXLAN configurations. For more information, see: Ansible Junos Configuration for EVPN/VXLAN.

This section explains how to build out the leaf, spine, and fabric layers with an EBGP-based IP fabric underlay and an IBGP-based EVPN and VXLAN overlay for the solution. It includes the following sections:

Configuring Leaf Devices for the IaaS: EVPN and VXLAN Solution

CLI Quick Configuration

To quickly configure the leaf devices, enter the following representative configuration statements on each device:

Note

The configuration shown here applies to Leaf 1.

Step-by-Step Procedure

To configure the leaf devices:

  1. Configure Ethernet interfaces to reach the hosts:
  2. Configure the interfaces connecting the leaf device to the spine devices:
  3. Configure the loopback interface with a reachable IPv4 address. This loopback address is the tunnel source address.
  4. Configure the router ID for the leaf device:
  5. Configure an EBGP-based underlay between the leaf and spine devices and enable BFD and LLDP:
  6. Create a routing policy that only advertises and receives loopback addresses from the IP fabric and EBGP underlay:
  7. Configure an IBGP overlay between the leaf and spine devices, enable BFD and BMP, and include the EVPN signaling network layer reachability information (NLRI) in the IBGP group:
  8. Configure load balancing:
  9. Configure a routing policy to reject EVPN Type 1 and Type 2 routes from spine devices in the other POD (this facilitates optimal path selection):
  10. Configure EVPN (steps 10 through 13 are illustrated in the sketch following this procedure):
  11. Configure a routing policy to import EVPN routes into the switching table and establish BGP communities:
  12. Configure switch options to set a route distinguisher and VRF target for the EVPN routing instance, apply the EVPN routing policy, and associate interface lo0 with the VTEP:
  13. Configure VLANs and VXLAN VNIs:
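
The full leaf configuration is not reproduced here. A minimal sketch of steps 10 through 13 for Leaf 1 follows, using the loopback address from Table 4; the import policy name, VRF target value, VLAN name, and VLAN-to-VNI mapping are assumptions.

# Leaf 1 EVPN and VXLAN sketch (policy name, VRF target, VLAN name, and VNI are assumed)
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set protocols evpn multicast-mode ingress-replication
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 10.0.0.21:1
set switch-options vrf-import EVPN-IMPORT
set switch-options vrf-target target:65200:1
set vlans VLAN-100 vlan-id 100
set vlans VLAN-100 vxlan vni 1000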

Configuring Spine Devices for the IaaS: EVPN and VXLAN Solution

CLI Quick Configuration

To quickly configure the spine devices, enter the following representative configuration statements on each device:

Note

The configuration shown here applies to Spine 1.

Step-by-Step Procedure

To configure the spine devices:

  1. Configure interfaces to connect to the leaf and fabric devices:
  2. Configure IRB interfaces with both IPv4 and IPv6 addresses for each VLAN. This dual-stack configuration provides a gateway for both IPv4 and IPv6 hosts:

    Note

    By including the proxy-macip-advertisement statement at the [edit interfaces irb unit logical-unit-number] hierarchy level, the spine device generates an EVPN Type 2 proxy advertisement that contains both the MAC address and the IP route.

  3. Configure a loopback interface for the device (lo0) and logical loopback addresses (lo0.x) for each routing instance:
  4. Configure the router ID for the spine device:
  5. Configure load balancing:
  6. Configure an EBGP-based underlay between the spine and leaf devices, and between the spine and fabric devices, and enable BFD and LLDP:
  7. Create a routing policy that only advertises and receives loopback addresses from the IP fabric and the EBGP underlay:

    Note

    This policy also suppresses advertisements to other spine device loopback interfaces in the same POD and enables optimal routing.

  8. Configure an IBGP overlay between the spine and leaf devices, enable a BGP route reflector cluster and BMP, and include the EVPN signaling network layer reachability information (NLRI) in the IBGP group:
  9. Configure a routing policy that suppresses EVPN Type 5 routes from being advertised to the leaf devices (that are using Type 2 routes instead):
  10. Configure a second IBGP overlay to connect the spine devices to each other and include the EVPN signaling network layer reachability information (NLRI) in the IBGP group:
  11. Configure BFD for all BGP sessions:
  12. Configure EVPN (steps 10 through 12 are illustrated in the sketch following this procedure):

    Note

    By including the no-gateway-community statement at the [edit protocols evpn default-gateway] hierarchy level, the spine device advertises the MAC address of the IRB interface without the default gateway community.

  13. Configure switch options to set a route distinguisher and VRF target for the EVPN routing instance, apply the EVPN routing policy, and associate interface lo0 with the VTEP:
  14. Configure a routing policy to import EVPN routes to the switching table:
  15. Create a policy to export IPv4 and IPv6 network addresses for Type 5 routes:
  16. Configure VLANs and VXLAN VNIs:
  17. Configure three routing instances—one for a tenant that uses EVPN Type 5 within its data center, and two for tenants that use EVPN Type 2:

    EVPN Type 5 Routing Instance

    EVPN Type 2 Routing Instance (VRF Tenant 10)

    EVPN Type 2 Routing Instance (VRF Tenant 20)
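
As with the leaf devices, the full spine configuration is not reproduced here. A minimal sketch of steps 10 through 12 for Spine 1 follows, using the loopback addresses from Table 4; the group name and BFD timers are assumptions.

# Spine 1 inter-spine overlay sketch (group name and BFD timers are assumed)
set protocols bgp group SPINE-MESH type internal
set protocols bgp group SPINE-MESH local-address 10.0.0.11
set protocols bgp group SPINE-MESH family evpn signaling
set protocols bgp group SPINE-MESH neighbor 10.0.0.12
set protocols bgp group SPINE-MESH neighbor 10.0.0.13
set protocols bgp group SPINE-MESH neighbor 10.0.0.14
set protocols bgp group SPINE-MESH bfd-liveness-detection minimum-interval 350
set protocols bgp group SPINE-MESH bfd-liveness-detection multiplier 3
set protocols evpn default-gateway no-gateway-community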

Configuring Fabric Devices for the IaaS: EVPN and VXLAN Solution

CLI Quick Configuration

To quickly configure the fabric devices, enter the following representative configuration statements on each device:

Note

The configuration shown here applies to Fabric 1.

Step-by-Step Procedure

To configure the fabric devices:

  1. Configure interfaces to connect to the spine devices in both PODs:
  2. Configure load balancing:
  3. Configure the router ID for the fabric device:
  4. Complete the underlay by configuring an EBGP session with each spine device, enabling LLDP on all interfaces, and creating a routing policy that accepts the loopback addresses from all devices in the IP fabric and advertises its own loopback address (see the sketch following this procedure):
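
A minimal sketch of this procedure for Fabric 1 follows, using the loopback address and ASN from Table 4; the interface name, link addressing, and policy and group names are assumptions.

# Fabric 1 sketch (interface name, link addressing, and policy/group names are assumed)
set interfaces et-0/0/0 unit 0 family inet address 172.16.0.0/31
set interfaces lo0 unit 0 family inet address 10.0.0.1/32
set routing-options router-id 10.0.0.1
set policy-options policy-statement PFE-LB then load-balance per-packet
set routing-options forwarding-table export PFE-LB
set policy-options policy-statement FABRIC-LOOPBACKS term 1 from route-filter 10.0.0.0/24 orlonger
set policy-options policy-statement FABRIC-LOOPBACKS term 1 then accept
set protocols lldp interface all
set protocols bgp group SPINES type external
set protocols bgp group SPINES export FABRIC-LOOPBACKS
set protocols bgp group SPINES local-as 65001
set protocols bgp group SPINES neighbor 172.16.0.1 peer-as 65011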

Configuring Host Multihoming

Step-by-Step Procedure

Some tenants require their hosts to be multihomed to multiple leaf devices for redundancy and resiliency. To enable Host 8 to be multihomed to Leaf 3 and Leaf 4:

  1. Configure Switch 5 to permit traffic to flow between Host 8, Leaf 3, and Leaf 4. Create an aggregated Ethernet interface that includes a link to Leaf 3 and a link to Leaf 4 in the bundle, establish a trunk port, and enable VLANs 100 through 108.
  2. Configure Leaf 3 so it can connect with Host 8 through Switch 5. Configure an aggregated Ethernet interface to connect to Switch 5, establish a trunk port, set an EVPN Ethernet segment identifier (ESI), and permit VLANs 100 through 108.
  3. Configure Leaf 4 so it can connect with Host 8 through Switch 5. Configure an aggregated Ethernet interface to connect to Switch 5, establish a trunk port, set an ESI, and permit VLANs 100 through 108. A representative sketch for Leaf 3 appears after this procedure.
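
A minimal sketch of step 2 for Leaf 3 follows; the member port, ESI value, and LACP system ID are assumptions, and Leaf 4 must be configured with the same ESI and LACP system ID for the multihomed bundle to form.

# Leaf 3 ESI-LAG sketch (member port, ESI value, and LACP system ID are assumed;
# Leaf 4 must use the same ESI and LACP system ID)
set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/0/20 ether-options 802.3ad ae0
set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
set interfaces ae0 esi all-active
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:01:01:01
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members 100-108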

Configuring Additional Features for the IaaS: EVPN and VXLAN Solution

In this section, you configure BGP Monitoring Protocol (BMP), distributed denial of service (DDoS) protection, storm control, class of service (CoS), and port mirroring on all devices to enhance the capabilities of the network described in this IaaS solution.

Configuring BMP, DDoS Protection, Storm Control, CoS, and Port Mirroring

Step-by-Step Procedure

To configure BMP, DDoS protection, storm control, CoS, and port mirroring:

Note

The following configurations are taken from Leaf 1, so remember to extend this configuration model to the other devices in the IP fabric.

  1. Configure BMP on all devices in the IP fabric (see the sketch following this procedure):
  2. Configure DDoS protection for the Routing Engine of the device:
  3. Configure storm control on all devices in the IP fabric:
  4. Configure CoS on all devices in the IP fabric:
  5. Configure port mirroring on all devices in the IP fabric:
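
A minimal sketch of steps 1 and 3 for Leaf 1 follows; the BMP station name, collector address and port, storm-control profile name and threshold, and host-facing interface are assumptions.

# Leaf 1 BMP and storm-control sketch (station name, collector address/port,
# profile name, threshold, and interface are assumed)
set routing-options bmp station BMP-COLLECTOR station-address 10.0.1.100
set routing-options bmp station BMP-COLLECTOR station-port 11019
set routing-options bmp station BMP-COLLECTOR connection-mode active
set routing-options bmp station BMP-COLLECTOR route-monitoring pre-policy
set forwarding-options storm-control-profiles STORM-CONTROL all bandwidth-percentage 10
set interfaces xe-0/0/12 unit 0 family ethernet-switching storm-control STORM-CONTROL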

Verification

Confirm that the IaaS: EVPN and VXLAN solution configuration is working properly.

Leaf: Verifying Interfaces

Purpose

Verify the state of the server-facing and spine-facing interfaces.

Action

Verify that the server-facing and spine-facing interfaces are up:

user@leaf-1> show interfaces terse

Meaning

The server-facing and spine-facing interfaces are connected and operating correctly.

Leaf: Verifying IPv4 BGP Sessions

Purpose

Verify the state of underlay (EBGP) and overlay (IBGP) sessions between the leaf and spine devices.

Action

Verify that IPv4 EBGP and IBGP sessions are established with Spine 1 and Spine 2:

user@leaf-1> show bgp summary

Meaning

Because there are peer connections to AS 65011 (underlay to Spine 1), AS 65012 (underlay to Spine 2), and AS 65200 (overlay to both spine devices), both the EBGP and IBGP sessions are established and functioning correctly.

Leaf: Verifying BFD

Purpose

Verify that bidirectional forwarding detection is operating correctly between the leaf and spine devices.

Action

Verify that BFD is operating between Leaf 1, Spine 1, and Spine 2:

user@leaf-1> show bfd session

Meaning

BFD is operating correctly between the leaf and spine devices for both the underlay and the overlay.

Leaf: Verifying EVPN Routes

Purpose

Verify that the EVPN routes are being learned through the overlay.

Action

Issue the show route table bgp.evpn.0 command to display the status of learned EVPN routes for VNI 1000:

user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 1000

Meaning

Because the output contains routes for all spine devices (10.0.0.11, 10.0.0.12, 10.0.0.13, and 10.0.0.14) and all leaf devices (10.0.0.21, 10.0.0.22, 10.0.0.23, and 10.0.0.24), EVPN routes are being learned through the overlay.

Leaf: Verifying the EVPN Routes in Detail

Purpose

Verify additional information about the EVPN routes.

Action

Note

When analyzing EVPN operational command output, the address format is as follows:

<route-type>:<route-distinguisher>::<vni>::<mac-address>

The address 2:10.0.0.23:1::1000::de:ad:be:e1:00:30/304 can be broken down as follows:

  • EVPN route type—2

  • Route distinguisher—10.0.0.23:1

  • VNI—1000

  • MAC address—de:ad:be:e1:00:30

  1. Verify the mapping of EVPN routes and MAC addresses:

    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 1000 evpn-mac-address de:ad:be:e1:00:30
  2. Verify detailed information about the mapping of EVPN routes and MAC addresses:

    user@leaf-1> show route table bgp.evpn.0 evpn-ethernet-tag-id 1000 evpn-mac-address de:ad:be:e1:00:30 detail

Meaning

The mapping of EVPN routes and MAC addresses is functioning correctly.

Leaf: Verifying VTEP Interfaces

Purpose

Verify the source and destination address of the VTEP interfaces and their status.

Action

  1. Verify source address information for the VTEP interfaces:

    user@leaf-1> show ethernet-switching vxlan-tunnel-end-point source
  2. Verify the summary status of the VTEP interfaces:

    user@leaf-1> show interfaces terse vtep
    Note

    There are four leaf devices and four spine devices, so there are a total of eight VTEP interfaces (one VTEP per device).

  3. Verify the full status of the VTEP interfaces:

    user@leaf-1> show interfaces vtep*

Meaning

Because the VLAN-to-VNI mappings are correct, all eight VTEP interfaces are up, and each VTEP terminates remotely at one of the leaf and spine devices, the VTEP interfaces are functioning normally.

Leaf: Verifying VNI-to-VXLAN Tunnel Mappings

Purpose

Verify that each VNI maps properly to its VXLAN tunnel, and that the leaf device is properly connected to the remote VTEPs and has the correct reachability.

Action

Verify the mapping of the VNIs to the VXLAN tunnels by displaying the remote VTEP information:

user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote

Meaning

The VNIs are mapped to the correct VXLAN tunnels, and the leaf device is properly connected to the remote VTEPs and has the correct reachability.

Note

VNI 1999 only appears in VXLAN tunnels associated with POD 1 (10.0.0.11, 10.0.0.12, 10.0.0.21, and 10.0.0.22).

Leaf: Verifying MAC Address Learning

Purpose

Verify that the MAC addresses are learned through the VXLAN tunnels.

Action

Display the MAC addresses that are learned through the VXLAN tunnels:

user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote mac-table

Meaning

MAC addresses are being shared across the VXLAN tunnels correctly.

Leaf: Verifying Multihoming

Purpose

Verify that multihoming is working on Leaf 3 and Leaf 4 by reviewing information for the LAG interfaces in POD 2.

Action

Display VTEP ESI information:

user@leaf-1> show ethernet-switching vxlan-tunnel-end-point esi
Note

There are 11 Ethernet segment identifier (ESI) numbers. The first ESI number that starts with 00: belongs to the multihomed LAG interface connecting Switch 5 to Leaf 3 and Leaf 4. The remaining 10 ESI numbers that start with 05: map to the Layer 3 gateway for the 10 VNIs. The Layer 3 gateway of each VNI is reachable from Leaf 1 through either Spine 1 (10.0.0.11) or Spine 2 (10.0.0.12).

Meaning

Multihoming is working on Leaf 3 and Leaf 4 as expected.

Leaf: Verifying ECMP

Purpose

Verify that the VXLAN tunnels prefer to use ECMP-based underlay paths.

Action

Verify that the VXLAN tunnel represented by interface vtep.32775 prefers the ECMP-based underlay paths:

user@leaf-1> show route forwarding-table table default-switch extensive | find vtep.32775

Meaning

The VXLAN tunnel prefers to use ECMP-based underlay paths.

Leaf: Verifying Remote MAC Address Reachability Through ECMP

Purpose

Verify that the remote MAC address is reachable through ECMP.

Action

Display extensive forwarding table information for a selected MAC address to verify its reachability:

user@leaf-1> show route forwarding-table table default-switch extensive destination de:ad:be:e1:00:21/48

Meaning

The remote MAC address is reachable through ECMP.

Leaf: Verifying Local and Remote MAC Address Learning

Purpose

Verify that the switching table learns both local and remote MAC addresses.

Action

Display switching table information for VLAN 100:

user@leaf-1> show ethernet-switching table vlan-id 100

Meaning

The switching table displays MAC addresses that were learned locally (xe-0/0/12 and xe-0/0/13) and remotely (vtep.* and esi.*).

Spine: Verifying Interfaces

Purpose

Verify that the fabric-facing and leaf-facing interfaces are up.

Action

Display the fabric-facing and leaf-facing interfaces:

user@spine-1> show interfaces terse

Meaning

The fabric-facing and leaf-facing interfaces are connected and operating correctly.

Spine: Verifying IPv4 BGP Sessions

Purpose

Verify the state of the underlay (EBGP) and overlay (IBGP) sessions that connect the spine devices to the leaf devices, the fabric devices, and the other spine devices.

Action

Verify that IPv4 EBGP and IBGP sessions are established with the other devices in the IP fabric:

user@spine-1> show bgp summary

Meaning

Because there are peer connections to AS 65001, AS 65002, AS 65003, and AS 65004 (the four fabric devices), AS 65021 and AS 65022 (the underlay for Leaf 1 and Leaf 2), and AS 65200 (overlay to Spine 2, Spine 3, Spine 4, Leaf 1, and Leaf 2), all EBGP and IBGP sessions are established and functioning correctly.

Spine: Verifying BFD

Purpose

Verify that Bidirectional Forwarding Detection (BFD) is operating correctly between the leaf, spine, and fabric devices.

Action

Verify that BFD is operating between the devices in the IP fabric:

user@spine-1> show bfd session

Meaning

BFD is operating correctly between the leaf, spine, and fabric devices for both the underlay and the overlay.

Spine: Verifying the IRB Interfaces

Purpose

Verify that the IRB interfaces are up.

Action

Display the summary status for the IRB interfaces:

user@spine-1> show interfaces irb terse

Meaning

The IRB interfaces are established and functioning correctly.

Spine: Verifying VTEP Interfaces

Purpose

Verify the overall status of the VTEP interfaces.

Action

Display the summary status of the VTEP interfaces:

user@spine-1> show interfaces vtep terse

Meaning

Because all eight VTEP interfaces are up, the VTEP interfaces are functioning normally.

Spine: Verifying VTEP Destination Addresses

Purpose

Verify the full status of the VTEP interfaces.

Action

Display the full status of the VTEP interfaces:

user@spine-1> show interfaces vtep*

Meaning

Because each VTEP interface terminates remotely at one of the leaf and spine devices, the VTEP interfaces are functioning correctly.

Spine: Verifying Inter-Spine ECMP

Purpose

Verify that ECMP is working between the spine devices.

Action

Verify the preferred paths between selected spine devices.

  1. Display the preferred paths between Spine 1 and Spine 2:

    user@spine-1> show route 10.0.0.12
  2. Display the preferred paths between Spine 1 and Spine 3:

    user@spine-1> show route 10.0.0.13

Meaning

Because there are four equal-cost paths to reach the other spine devices, inter-spine ECMP is functioning correctly.

Spine: Verifying the Routing Instances

Purpose

Verify the routing tables for the customer routing instances Tenant 10 and Tenant 20, and the EVPN Type 5 routing instance.

Action

  1. Verify the IPv4 routing table for Tenant 10:

    user@spine-1> show route table VRF_TENANT_10.inet.0
  2. Verify the IPv4 routing table for Tenant 20:

    user@spine-1> show route table VRF_TENANT_20.inet.0
  3. Verify the IPv4 routing table for the EVPN Type 5 instance:

    user@spine-1> show route table TYPE-5.inet.0
  4. Verify the IPv6 routing table for Tenant 10:

    user@spine-1> show route table VRF_TENANT_10.inet6.0
  5. Verify the IPv6 routing table for Tenant 20:

    user@spine-1> show route table VRF_TENANT_20.inet6.0
  6. Verify the IPv6 routing table for the EVPN Type 5 instance:

    user@spine-1> show route table TYPE-5.inet6.0

Meaning

The two customer routing instances and the EVPN Type 5 routing instance for both IPv4 and IPv6 are functioning correctly.

Spine: Verifying the Layer 3 Gateway

Purpose

Verify that each tenant host resolves the gateway MAC address by using the Layer 3 gateway IRB interface on the spine devices.

Action

Display the ARP table to verify that the hosts use the IRB interface as a Layer 3 gateway:

user@spine-1> show arp no-resolve

Meaning

The host MAC addresses are mapped to the corresponding IRB interfaces that are being used as a Layer 3 gateway.

Spine: Verifying the Switching Table

Purpose

Verify that VNI, VTEP, and ESI information appears in the switching table for a corresponding VLAN.

Action

Display switching table information for VLAN 100:

user@spine-1> show ethernet-switching table vlan-id 100

Meaning

Because VNI, VTEP, and ESI information appears in the switching table, the mapping of this information to the VLANs is functioning correctly.

Spine: Verifying the Source of the VXLAN Tunnel

Purpose

Verify source information for the VXLAN tunnel to confirm the correct VLAN-to-VNI mappings and local VTEP configuration.

Action

Display source information for the VXLAN tunnel:

user@spine-1> show ethernet-switching vxlan-tunnel-end-point source

Meaning

Because the VLAN-to-VNI mappings and local VTEP configuration are correct, the VXLAN tunnel source is functioning correctly.

Spine: Verifying VNI-to-VXLAN Tunnel Mapping

Purpose

Verify that each VNI maps properly to its VXLAN tunnel, and that the spine device is properly connected to the remote VTEPs and has the correct reachability.

Action

Verify the mapping of the VNIs to the VXLAN tunnels by displaying the remote VTEP information:

user@spine-1> show ethernet-switching vxlan-tunnel-end-point remote

Meaning

The VNIs are mapped to the correct VXLAN tunnels, and the spine device is properly connected to the remote VTEPs and has the correct reachability.

Spine: Verifying MAC Address Learning

Purpose

Verify that the MAC addresses are learned through the VXLAN tunnels.

Action

Display the MAC addresses that are learned through the VXLAN tunnels:

user@spine-1> show ethernet-switching vxlan-tunnel-end-point remote mac-table

Meaning

MAC addresses are being shared across the VXLAN tunnels correctly.

Fabric: Verifying Interfaces

Purpose

Verify the state of the spine-facing interfaces.

Action

Verify that the spine-facing interfaces are up:

user@fabric-1> show interfaces terse

Meaning

The spine-facing interfaces for the fabric devices are operating correctly.

Fabric: Verifying IPv4 BGP Sessions

Purpose

Verify the state of spine-facing IPv4 BGP sessions.

Action

Verify that the IPv4 BGP sessions are established:

user@fabric-1> show bgp summary

Meaning

The BGP sessions are established and functioning correctly.

Fabric: Verifying BFD

Purpose

Verify that Bidirectional Forwarding Detection (BFD) is operating correctly between the fabric and spine devices.

Action

Verify that BFD is operating between the fabric and spine devices:

user@fabric-1> show bfd session

Meaning

BFD is operating correctly between the fabric and spine devices.

All Devices: Verifying Port Mirroring

Purpose

Verify that port mirroring is operating correctly.

Action

  1. Display the port-mirroring firewall filters:

    user@spine-1> show firewall
  2. Display the port-mirroring statistics:

    user@spine-1> show forwarding-options port-mirroring

Meaning

Port mirroring is operating correctly.