
How to Deploy QFX Fabric for Advanced NSX-T Environments with EVPN Integration

 

This use case shows how to set up VM-to-VM communication, for both intra-VNI and inter-VNI traffic, in an EVPN fabric.

Requirements

NSX-T Requirements:

System requirements for installing NSX-T are available in the VMware NSX-T 3.0 Data Center Installation Guide, which is used as a reference to set up the physical topology below:

  • Management and Edge Cluster—Three NSX managers are deployed on ESXi servers following the installation procedure available in the VMware NSX-T 3.0 Data Center Installation Guide.

  • Compute Clusters—Two servers (running the ESXi host OS) per compute cluster orchestrate the virtual machines used to demonstrate application functionality.

Juniper hardware requirements for installing NSX-T include:

  • Two QFX5110 switches and two QFX5120 switches configured as leaf devices, with ESI-LAG configured on each pair, running Junos OS release 19.1R3-S2.

  • Two QFX5210-64C switches configured as spine devices, running Junos OS release 19.1R3-S2.

Overview

This configuration example uses NSX-T version 3.0.0 with QFX5110, QFX5120, and QFX10008 switches as the physical underlay, which supports communication across virtual and physical environments.

Physical Topology Overview

This section describes the NSX-T and Junos device configuration details for deploying a QFX fabric for advanced NSX-T environments with EVPN integration.

The topology below shows the physical configuration, which uses NSX-T with QFX10008 switches as spine devices and QFX5110-48S and QFX5120-48Y switches as leaf devices.

Figure 1: Physical Topology

NSX-T Configuration

The NSX-T configuration includes the following components, which apply to both the intra-VNI and inter-VNI use cases:

Profile

To configure multihoming between the compute nodes and the top-of-rack (TOR) switches, configure LAG both in the NSX-T GUI and on the Juniper QFX Series switches (a sketch of the QFX side follows the procedure below).

Step-by-Step Procedure

  1. Configure LAG on NSX-T 3.0.0. For instructions, see Create an Uplink Profile.
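
The QFX side of this multihoming is an ESI-LAG: an aggregated Ethernet interface with LACP enabled and the same ESI value configured on both leaf devices in a pair. As a minimal sketch only (the interface name, ESI, LACP system ID, and VLAN name below are placeholders, not values from this example; lines starting with # are annotations), the leaf-side configuration might look like this:

    set chassis aggregated-devices ethernet device-count 2
    set interfaces xe-0/0/10 ether-options 802.3ad ae0
    # The same ESI and LACP system ID must be configured on both leaf devices in the pair
    set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
    set interfaces ae0 esi all-active
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:01:01:01
    set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae0 unit 0 family ethernet-switching vlan members v50

Sharing the LACP system ID lets the ESXi host treat the two leaf devices as a single LACP partner, which is what allows the uplink profile's LAG to form across the pair.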

Transport Nodes

Information about preparing N-VDS transport nodes is provided in How to Deploy QFX Series Switches for basic VMware NSX-T Environment. When you prepare the transport nodes, apply the LAG profile that you created to enable LAG functionality.

Step-by-Step Procedure

  1. Configure the transport node. For instructions, see Create a Standalone Host or Bare Metal Server Transport Node.

Segments

A segment functions as a logical switch that allows communication between attached VMs. Each segment has a virtual network identifier (VNI) used by the Geneve tunnel, and the transport zone defines the reach of the Geneve tunnels. A segment includes multiple segment ports, and you can attach a VM to a segment through vCenter.

Step-by-Step Procedure

  1. Add segments and segment ports. For instructions, see Add a Segment.

Tier-1 Gateway

A tier-1 gateway functions as a logical router for east-west traffic. Three segments (Web50, Database60, and Storage70) are attached to the tier-1 gateway. Enable route advertisement based on your needs.

  1. Add a tier-1 gateway. For instructions, see Add a Tier-1 Gateway.

Junos Configuration

Configure the Junos devices in the environment as shown below.

Underlay (iBGP – IP CLOS)

The underlay network consists of four leaf devices (two QFX5110 and two QFX5120 switches) and two spine devices (QFX10008 switches) configured in a 3-stage CLOS topology using iBGP, with loopback addresses used for the BGP neighbor sessions.
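
As a rough sketch of this underlay model, a leaf device might form its fabric adjacencies and loopback-based iBGP sessions as shown below; this corresponds to step 2 of the procedure that follows. All addresses, interface names, and the AS number are placeholders rather than values from this example.

    # Point-to-point fabric links toward the two spines, plus the loopback
    set interfaces et-0/0/48 unit 0 family inet address 10.1.1.1/31
    set interfaces et-0/0/49 unit 0 family inet address 10.1.1.3/31
    set interfaces lo0 unit 0 family inet address 192.168.0.2/32
    # OSPF provides loopback-to-loopback reachability for the iBGP sessions
    set protocols ospf area 0.0.0.0 interface et-0/0/48.0 interface-type p2p
    set protocols ospf area 0.0.0.0 interface et-0/0/49.0 interface-type p2p
    set protocols ospf area 0.0.0.0 interface lo0.0 passive
    # Single AS; iBGP peers with the spine loopbacks and carries the EVPN family
    set routing-options router-id 192.168.0.2
    set routing-options autonomous-system 65100
    set protocols bgp group overlay type internal
    set protocols bgp group overlay local-address 192.168.0.2
    set protocols bgp group overlay family evpn signaling
    set protocols bgp group overlay neighbor 192.168.0.11
    set protocols bgp group overlay neighbor 192.168.0.12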

Step-by-Step Procedure

  1. Configure Layer 2 access, IRB interfaces, and ESI-LAG.
    1. R0-Config:
    2. R1-Config:
    3. R2-Config:
    4. R3-Config:
    5. R4-Config:
    6. R5-Config:
  2. Configure OSPF, BGP, EVPN, and the AS number.
    1. R0-Config:
    2. R1-Config:
    3. R2-Config:
    4. R3-Config:
    5. R4-Config:
    6. R5-Config:
  3. Configure switch options.
    1. R0-Config:
    2. R1-Config:
    3. R2-Config:
    4. R3-Config:
    5. R4-Config:
    6. R5-Config:
  4. Configure the Layer 2 VLAN-to-VNI mapping.

    You need not configure R0 and R1.

    1. R2-Config:
    2. R3-Config:
    3. R4-Config:
    4. R5-Config:
  5. Configure VRF for EVPN.

    You need not configure R0 and R1.

    1. R2-Config:
    2. R3-Config:
    3. R4-Config:
    4. R5-Config:
  6. Configure policy-options.
    1. R0-Config:
    2. R1-Config:
    3. R2-Config:
    4. R3-Config:
    5. R4-Config:
    6. R5-Config:
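
The R0-Config through R5-Config blocks referenced above are device specific; as a consolidated sketch for one representative leaf (R2, for example), the overlay pieces might look like the following. The VNIs 71680 and 71682 come from this example, but the VLAN IDs, VLAN names, IRB addresses, route distinguishers, route targets, VRF name, and policy are placeholders.

    # Step 4 - map Layer 2 VLANs to VXLAN VNIs (VLAN IDs are placeholders)
    set vlans v50 vlan-id 50
    set vlans v50 vxlan vni 71680
    set vlans v70 vlan-id 70
    set vlans v70 vxlan vni 71682
    # Step 1 - IRB gateway interfaces for the mapped VLANs (addresses are placeholders)
    set interfaces irb unit 50 family inet address 172.16.50.1/24
    set interfaces irb unit 70 family inet address 172.16.70.1/24
    set vlans v50 l3-interface irb.50
    set vlans v70 l3-interface irb.70
    # Step 2 - EVPN with VXLAN encapsulation for the mapped VNIs
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    # Step 3 - switch options: VTEP source interface plus RD and route target (placeholders)
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.0.2:1
    set switch-options vrf-target target:65100:1
    # Step 5 - tenant VRF holding the IRB interfaces (name and values are placeholders)
    set routing-instances VRF-NSX instance-type vrf
    set routing-instances VRF-NSX interface irb.50
    set routing-instances VRF-NSX interface irb.70
    set routing-instances VRF-NSX route-distinguisher 192.168.0.2:100
    set routing-instances VRF-NSX vrf-target target:65100:100
    # Step 6 - example policy, for instance to export loopback routes (placeholder)
    set policy-options policy-statement EXPORT-LO0 term 1 from protocol direct
    set policy-options policy-statement EXPORT-LO0 term 1 from route-filter 192.168.0.0/24 orlonger
    set policy-options policy-statement EXPORT-LO0 term 1 then accept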

Verification

Step-by-Step Procedure

The verification steps focus on verifying that the physical devices and all NSX-T and Junos configuration components are functioning as desired.

  1. Verify the status of the cluster:
  2. Verify the TEP status:

    In the output, check the list of segments that are created.

    In the output, check the TEP information on the logical switch (VNI 71680 = web50).

    In the output, check the MAC addresses on the logical switch (VNI 71680 = web50).

  3. Verify the VTEP status on Compute (ESXi) hosts:
  4. Verify the VTEP status on the QFX Series switches:
  5. Verify the ESI-LAG status (typical show commands for these checks follow this procedure):
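
For steps 4 and 5 on the QFX Series switches, the following operational commands are typical checks; ae0 is a placeholder aggregated Ethernet interface name, and the values in the output depend on your configuration.

    # Step 4 - VTEP interface, remote VTEPs, and MAC addresses learned over them
    show interfaces vtep
    show ethernet-switching vxlan-tunnel-end-point remote
    show ethernet-switching vxlan-tunnel-end-point remote mac-table
    # Step 5 - ESI-LAG health: LACP state and the EVPN instance/ESI view
    show lacp interfaces ae0
    show evpn instance extensive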

Verify Traffic Flow

The VM-to-VM use cases are:

  • Intra-VNI (VNI 71680 <-> VNI 71680)

  • Inter-VNI (VNI 71680 <-> VNI 71682)

Step-by-Step Procedure

Verify VM-to-VM Intra-VNI Traffic Flow (VM-1: 172.16.50.11 <-> VM-2: 172.16.50.12).

  1. Orchestration:

    VM-2 (172.16.50.12) in VNI 71680 is hosted on TEP-2 (ESXi). VM-1 (172.16.50.11), also in VNI 71680, is hosted on TEP-1 (ESXi).

  2. Test case:
    • Demonstrate bidirectional communication between VM-1 and VM-2 on different hosts using interfaces in the same IP subnet.

    • Both transport nodes are ESXi servers that are configured with NVDS1 and reside in the same overlay transport zone. The TEP IP address is installed on the VMK10 VMkernel networking interface. VM-1 (web11) and VM-2 (web12) are connected to the same overlay-based segment, Web50.

    Traffic flow proceeds as follows:

    1. VM-1 on Transport Node A sends IP packets toward TEP-1 (ESXi).
    2. TEP-1 encapsulates the packet with a Geneve header.
    3. Transport Node A sends the packet towards TEP-2 over the Junos devices that function as the IP underlay.
    4. TEP-2 on Transport Node B receives and decapsulates the packet by removing the Geneve header.
    5. TEP-2 sends the packet towards the final destination, VM-2.
  3. Preliminary check:
    1. VM-1 (web11):
    2. VM-2 (web12):
    3. Check interface connectivity (ping from 172.16.50.11 to 172.16.50.12):
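
As a minimal illustration of this connectivity check (assuming Linux guests and the interface name eth0, which are not given in this example), the preliminary check and ping might look like this on VM-1:

    # Confirm VM-1's address in the Web50 subnet, then ping VM-2 in the same subnet
    ip addr show dev eth0
    ping -c 4 172.16.50.12

A successful ping confirms intra-VNI (same-subnet) reachability over the Geneve tunnel between TEP-1 and TEP-2.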

Step-by-Step Procedure

Verify VM-to-VM Inter-VNI Traffic Flow (VM-1: 172.16.50.11 <-> VM-2: 172.16.70.12)

  1. Orchestration:

    VM-2 (172.16.70.12) in VNI 71682 (Storage70) is hosted on TEP-2 (ESXi). VM-1 (172.16.50.11) in VNI 71680 (Web50) is hosted on TEP-1 (ESXi).

  2. Test case:

    Demonstrate bidirectional communication between VM-1 and VM-2 on different transport nodes using interfaces in different IP subnets.

    The distributed router (DR) of the tier-1 gateway is configured on both transport nodes. VM-1 (web11) is connected to segment Web50. VM-2 (storage12) is connected to segment Storage70. Both segments are in the same overlay-based transport zone.

    Traffic flow proceeds as follows:

    1. VM-1 on Transport Node A sends IP packets towards its default gateway, the DR.
    2. The DR checks its routing table and determines that the destination belongs to a different segment, Storage70.
    3. TEP-1 encapsulates the packet with a Geneve header that includes VNI 71682 (Storage70).
    4. Transport Node A sends the packet over the Junos devices (acting as the IP underlay) towards TEP-2.
    5. TEP-2 on Transport Node B receives the packet and removes the Geneve header, which carries VNI 71682.
    6. TEP-2 sends the packet towards the final destination, VM-2.
  3. Preliminary check:
    1. VM-1 (web11):
    2. VM-2 (storage12):
    3. Check interface connectivity (ping from 172.16.50.11 to 172.16.70.12):
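
Similarly, assuming a Linux guest (an assumption, not part of this example), the inter-VNI check from VM-1 could confirm that the default gateway points at the tier-1 DR and then test reachability across subnets:

    # The default route should point at the Web50 gateway address on the tier-1 DR
    ip route show default
    # Ping VM-2 (storage12) in the Storage70 subnet; the DR routes between VNIs 71680 and 71682
    ping -c 4 172.16.70.12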