
How to Add Non-virtualized Nodes to NSX-T Environments with EVPN Integration


This use case shows how to set up VM-to-BMS communication in an EVPN fabric. NSX-T version 3.0.0 introduces EVPN Type-5 capabilities. In this use case, you stitch the Geneve tunnel to EVPN-VXLAN to carry Layer 3 traffic between virtual and physical workloads. The NSX-T Tier-0 gateway connects to the Juniper data center gateway to exchange routes.

Requirements

This configuration example uses NSX-T version 3.0.0 with QFX5110, QFX5120, and QFX10008 switches as the physical underlay, supporting VM-to-BMS communication across the virtual and physical environments.

NSX-T Requirements

System requirements for installing NSX-T are available in the VMware NSX-T 3.0 Data Center Installation Guide, which is used as a reference to set up the components in the physical topology below:

  • Management and Edge Cluster—Three NSX managers are deployed on ESXi servers following the installation procedure available in the VMware NSX-T 3.0 Data Center Installation Guide.

  • Compute Clusters—Two servers (ESXi host OS) per compute cluster orchestrate the virtual machines used to demonstrate application functionality.

  • Bare-Metal Servers—One server (CentOS host OS) acts as a bare-metal server that does not run virtual machines.

Juniper Hardware Requirements

The Juniper hardware used in this example includes:

  • Two QFX5110 switches and two QFX5120 switches configured as leaf devices, running Junos OS release 19.1R3-S2.

  • Two QFX10008 switches configured as spine devices, running Junos OS release 19.1R3-S2.

Overview

Physical Topology

The topology illustration below shows the physical configuration, using NSX-T with QFX10008 switches as spine devices and QFX5110-48S and QFX5120-48Y switches as leaf devices.

Figure 1: Physical Topology

NSX-T Configuration

This section describes NSX-T configuration and Junos device configuration details to deploy a QFX fabric for advanced NSX-T environments with EVPN Integration.

VMware vSphere Distributed Switch (VDS)

The NSX management and edge cluster hosts the NSX managers and edge nodes. NSX Manager is a virtual appliance in the vCenter server environment.

  1. Install NSX Manager. For instructions, see NSX Manager Installation.
  2. Create a VDS and port groups for connectivity between the Edge virtual interfaces and the ESXi physical NICs. For instructions, see NSX Edge Networking Setup.

    The overlay-portGroup port group is pinned to vmnic 3 on uplink 2 for the overlay connection.

    On uplink 1, the VLAN-portGroup port group is pinned to vmnic 2 for the underlay connection. This connection carries the BGP session between the Tier-0 gateway and the Juniper QFX Series devices. Choose Accept from the Promiscuous mode and Forged transmits drop-down menus, and VLAN trunking from the VLAN type drop-down menu.

Edge Transport Nodes

NSX Edge transport nodes provide centralized network services and are responsible for the NSX-T data plane implementation. An edge transport node can belong to multiple transport zones, but to no more than one overlay transport zone.

Step-by-Step Procedure

  1. Create the Edge VM. For instructions, see Create an NSX Edge Transport Node.

    The uplink of the overlay-backed N-VDS connects to the overlay-portGroup port group for the Geneve tunnel. The uplink of the VLAN-backed N-VDS connects to the VLAN-portGroup port group for edge-to-northbound uplink connectivity.

  2. Check the tunnel between the edge and the transport nodes when the edge VM deployment is complete (see the note after this procedure).
  3. Attach nsx-edge-01 to edge-cluster.
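
Note: On the edge node CLI (nsxcli), a command such as the following can list the tunnel ports and their status. This is a hedged example; the available commands vary by NSX-T version.

    nsxt-edge1> get tunnel-port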

Segments

Overlay-backed segments function as logical switches that allow communication between attached VMs. VLAN-backed segments allow the Tier-0 gateway to connect to the upstream network. For traffic flow between the logical and physical infrastructure, apply a VLAN-based transport zone.

  1. Add segments and segment ports. For instructions, see Add a Segment.
  2. Expand the segment details to view the configuration of each segment type.

VNI Pool

You can create a VNI pool to be used when you configure EVPN for a Tier-0 gateway. VNI pools cannot have values that overlap.

  1. Add a VNI Pool. For instructions, see Add a VNI Pool.

Tier-0 Gateway

A Tier-0 gateway performs the functions of a Tier-0 logical router. It processes traffic between the logical and physical networks.

  1. Add a Tier-0 gateway. For instructions, see Add a Tier-0 Gateway.

    Configure the following parameters:

    1. Configure EVPN. For instructions, see Configuring EVPN.

      Under the EVPN Settings section, attach the VNI pool. The loopback interface of the Tier-0 gateway is configured as the EVPN local tunnel endpoint.

    2. Configure the external interface. For instructions, see Add a Tier-0 Gateway.

      Under Interfaces, two interfaces are created. The loopback interface, local-lo0, is used as the source interface of the VTEP. The external interface, ulink1, is configured for northbound connectivity to the Juniper QFX TORs.

    3. Configure static routes under Routing to provide reachability between the loopback address of the Tier-0 gateway and the uplink physical infrastructure. For instructions, see Configure a Static Route.
    4. Configure BGP. After you enable BGP and provide an AS number, fill in the BGP neighbor details: the remote neighbor IP address and the remote AS number configured in the Juniper underlay. Use the loopback addresses on both the Tier-0 gateway and the QFX devices to establish the BGP sessions. For instructions, see Configure BGP.

      Enable the L2VPN EVPN address family under Route Filter. For instructions, see Configure BGP. (A Junos-side sketch of the matching fabric configuration follows this procedure.)

    5. Configure route redistribution to advertise local and connected routes. For instructions, see Enable Route Redistribution on the Tier-0 Logical Router in Manager Mode.
  2. Add VRF gateways on NSX-T and map them to Juniper VRFs that are configured on QFX Series switches. For instructions, see Add a VRF Gateway.

    vrfa is connected to the T0gw Tier-0 gateway and is associated with its edge cluster, edge-cluster.

    Configure the following parameters under VRF Settings for vrfa:

    1. Configure route targets. For instructions, see Add a VRF Gateway.
    2. Under Interfaces, create the interfaces that connect to the VM-attached segments. For instructions, see Add a Tier-0 Gateway.
    3. Keep the default settings for the BGP session.
    4. Configure route redistribution to advertise local and connected routes. For instructions, see Enable Route Redistribution on the Tier-0 Logical Router in Manager Mode.

    vrfb is connected to the T0gw Tier-0 gateway and is associated with its edge cluster, edge-cluster.

    Configure the following parameters under VRF Settings for vrfb:

    1. Configure route targets. For instructions, see Add a VRF Gateway.
    2. Under Interfaces, create the interfaces that connect to the VM-attached segments. For instructions, see Add a Tier-0 Gateway.
    3. Keep the default settings for the BGP session.
    4. Configure route redistribution to advertise local and connected routes. For instructions, see Enable Route Redistribution on the Tier-0 Logical Router in Manager Mode.
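
The fabric side of this peering must enable the same EVPN address family. The following is a minimal Junos-side sketch of a BGP group on a QFX device peering with the Tier-0 gateway loopback; the group name, IP addresses, and AS numbers are illustrative placeholders, not the values used in this example:

    set protocols bgp group nsxt-tier0 type external
    set protocols bgp group nsxt-tier0 multihop ttl 2
    set protocols bgp group nsxt-tier0 local-address 10.0.0.3
    set protocols bgp group nsxt-tier0 family inet unicast
    set protocols bgp group nsxt-tier0 family evpn signaling
    set protocols bgp group nsxt-tier0 peer-as 65100
    set protocols bgp group nsxt-tier0 neighbor 10.10.10.1

Because the session runs between loopback addresses, multihop is enabled and the underlay (or a static route, as configured above) must provide reachability to the Tier-0 loopback.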

Junos Configuration

Configure the Junos devices as shown below.

Underlay (iBGP – IP CLOS)

The underlay network consists of QFX5110 and QFX5120 leaf devices and QFX10008 spine devices configured in a 3-stage Clos topology. iBGP is used in the underlay, with loopback addresses used for the BGP neighbor configurations.
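
As a reference for the peering model, the fragment below is a minimal sketch of the loopback and iBGP overlay configuration on one device; interface names, addresses, and the AS number are illustrative placeholders. An IGP (OSPF, configured in step 2 of the following procedure) distributes the loopback addresses that the iBGP sessions use:

    set interfaces lo0 unit 0 family inet address 10.0.0.1/32
    set routing-options router-id 10.0.0.1
    set routing-options autonomous-system 65000
    set protocols ospf area 0.0.0.0 interface lo0.0 passive
    set protocols ospf area 0.0.0.0 interface xe-0/0/0.0
    set protocols bgp group overlay type internal
    set protocols bgp group overlay local-address 10.0.0.1
    set protocols bgp group overlay family evpn signaling
    set protocols bgp group overlay neighbor 10.0.0.3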

Step-by-Step Procedure

  1. Configure L2 access, IRB interfaces, and ESI-LAG. (A generic leaf-side sketch of these building blocks appears after this procedure.)
    1. R0-Config:
    2. R1-Config:
    3. R2-Config:
    4. R3-Config:
    5. R4-Config:
    6. R5-Config:
    7. R6-Config:
  2. Configure OSPF, BGP, EVPN, and AS number.
    1. R0-Config:
    2. R1-Config:
    3. R2-Config:
    4. R3-Config:
    5. R4-Config:
    6. R5-Config:
    7. R6-Config:
  3. Configure switch-options.
    1. R0-Config:
    2. R1-Config:
    3. R2-Config:
    4. R3-Config:
    5. R4-Config:
    6. R5-Config:
    7. R6-Config:
  4. Configure the L2 VLAN-to-VNI mappings.

    You need not configure R0 and R1.

    1. R2-Config:
    2. R3-Config:
    3. R4-Config:
    4. R5-Config:
    5. R6-Config:
  5. Configure VRF-to-VRF for EVPN Type 5.

    You need not configure R0 and R1.

    1. R2-Config:
    2. R3-Config:
    3. R4-Config:
    4. R5-Config:
    5. R6-Config:
  6. Configure policy-options.
    1. R0-Config:
    2. R1-Config:
    3. R2-Config:
    4. R3-Config:
    5. R4-Config:
    6. R5-Config:
    7. R6-Config:
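
The per-device configurations above are environment specific. As a structural reference only, the fragment below sketches the three leaf-side building blocks that these steps assemble: an ESI-LAG toward a multihomed server, a VLAN-to-VNI mapping with an IRB interface, and an EVPN Type-5 VRF. All names, IDs, addresses, and route targets are illustrative placeholders:

    set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
    set interfaces ae0 esi all-active
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:01:01:01
    set interfaces ae0 unit 0 family ethernet-switching vlan members v150
    set vlans v150 vlan-id 150
    set vlans v150 vxlan vni 5150
    set vlans v150 l3-interface irb.150
    set interfaces irb unit 150 family inet address 172.16.150.1/24
    set routing-instances Type5-vrf instance-type vrf
    set routing-instances Type5-vrf interface irb.150
    set routing-instances Type5-vrf route-distinguisher 10.0.0.3:100
    set routing-instances Type5-vrf vrf-target target:100:100
    set routing-instances Type5-vrf protocols evpn ip-prefix-routes advertise direct-nexthop
    set routing-instances Type5-vrf protocols evpn ip-prefix-routes encapsulation vxlan
    set routing-instances Type5-vrf protocols evpn ip-prefix-routes vni 9001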

Step-by-Step Procedure

To support the additional overhead of Geneve encapsulation, the minimum MTU required on physical interfaces is 1600 bytes. However, we recommend 1700 bytes to accommodate possible future expansion of the Geneve header. Add this configuration under the [edit interfaces] hierarchy on the physical devices (QFX platforms). For simplicity, this configuration uses an MTU of 9000 bytes. For example:

  1. user@QFX10008-3# set interfaces xe-7/0/7:2 mtu 9000
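
To confirm that the new MTU is active, check the interface. For example:

    user@QFX10008-3> show interfaces xe-7/0/7:2 | match MTU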

Verification

The verification steps focus on verifying that the physical devices and all configuration components for NSX-T and the Junos devices are functioning as desired.

Step-by-Step Procedure

  1. Verify the status of the cluster:
  2. Verify the BGP and EVPN status:
    1. user@QFX10008-3> show route table Type5-vrf.inet.0

      The output displays routes that are learned through EVPN type-5 in vrfa.

    2. user@QFX10008-3> show route table Type5-vrf-b.inet.0

      The output displays routes that are learned through EVPN type-5 in vrfb.

    3. user@QFX10008-3> show bgp summary

      The output displays the overview of BGP information.

    4. user@QFX10008-3> show route receive-protocol bgp 11.0.1.2

      The output displays the routes received from BGP neighbor 11.0.1.2.

    5. user@QFX10008-3> show route advertising-protocol bgp 11.0.1.2

      The output displays the routes advertised to BGP neighbor 11.0.1.2.

    6. user@QFX10008-3> show evpn l3-context Type5-vrf extensive

      The output displays extensive Layer 3 context information for vrfa.

    7. user@QFX10008-3> show evpn l3-context Type5-vrf-b extensive

      The output displays extensive Layer 3 context information for vrfb.

    8. user@QFX10008-3> show evpn ip-prefix-database prefix 69.69.69.69 extensive

      The output displays EVPN internal IP prefix database of vrfa.

    9. user@QFX10008-3> show evpn ip-prefix-database prefix 70.70.70.70 extensive

      The output displays EVPN internal IP prefix database of vrfb.

  3. Verify the Tier-0 Gateway status.

    Log in to the NSX Edge VM and access the VRFs of the SR or DR to check the routing information on the edge transport node:

    1. nsxt-edge1> get logical-router

      The output displays a list of logical routers, including the DR and the VRFs of the Tier-0 gateway.

    2. Log in to vrf 2 to check BGP-related information on the Tier-0 gateway service router.

      nsxt-edge1> vrf 2

      nsxt-edge1 (tier0_sr)> get bgp neighbor

      The output shows that a BGP session is established on the NSX-T edge node.

    3. nsxt-edge1 (tier0_sr)> get bgp evpn

      The output displays BGP EVPN routes; in this example, 13 prefixes (13 paths).

    4. Exit vrf 2, log in to vrf 1, and confirm connectivity between the interface connected to segment web50 and the BMS in the same tenant.

      nsxt-edge1> vrf 1

      nsxt-edge1(tier0_vrf_sr)> ping 172.16.150.101 source 172.16.50.1

    5. Exit vrf 1, log in to vrf 6, and confirm connectivity between the interface connected to segment compute80 and the BMS in the same tenant.

      nsxt-edge1> vrf 6

      nsxt-edge1(tier0_vrf_sr)> ping 172.16.160.101 source 172.16.80.1

Verify Traffic Flow

The following VM-to-BMS use cases are illustrated in this section:

  • vrfa - VM: 172.16.50.12 <-> BMS: 172.16.150.101

  • vrfb - VM: 172.16.90.11 <-> BMS: 172.16.160.101

Verify vrfa - VM: 172.16.50.12 <-> BMS: 172.16.150.101 Traffic Flows

Step-by-Step Procedure

Verify vrfa - VM: 172.16.50.12 <-> BMS: 172.16.150.101.

  1. Orchestration:
    • VM-1, connected to segment web50, is attached to vrfa. vrfa is associated with the Tier-0 gateway, which has northbound connectivity to the Juniper EVPN fabric.

    • VM-2, connected to segment app90, is attached to vrfb. vrfb is associated with the same Tier-0 gateway.

    • BMS-1 in tenant a is connected to Juniper QFX Series switches that have the vrfa-related configuration prepared on the device.

    • BMS-2 in tenant b is connected to the same Juniper QFX Series switches, which have the vrfb-related configuration prepared on the device.

  2. Test case:
    • Confirm physical topology connectivity.

    • With BGP and EVPN configured on both NSX-T and the Juniper QFX Series switches, routes are learned through the link between the Tier-0 gateway and the QFX TOR.

    • Check the routing table in the corresponding VRF. VM-1 and VM-2 learn the routes to the destinations.

    • Packets are sent out through the Geneve tunnel to the edge node.

    • The edge node checks its routing table based on the VRF where the VM resides and sends the traffic out through the physical link to the TOR.

    • The Geneve tunnel terminates at the edge node. EVPN configured on the Juniper QFX Series switches handles the remaining forwarding.

    • From the TOR that is connected to the edge node to the designated BMS, EVPN-VXLAN is used for connectivity. The Juniper QFX Series switches act as VTEPs to carry the traffic.

  3. Preliminary check (vrfa):
    1. VM (web12):
    2. BMS-1:
    3. Check interface connectivity (ping from 172.16.50.12 to 172.16.150.101):
  4. Preliminary check (vrfb):
    1. VM (app11):
    2. BMS-2:
    3. Check interface connectivity (ping from 172.16.90.11 to 172.16.160.101):
  5. Packet capture:

    Check that the packets flowing between 172.16.90.11 (VM: app11) and 172.16.160.101 (BMS-2 in vrfb) are double encapsulated with both VXLAN and Geneve headers.
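
    One way to observe the double encapsulation is to capture on the edge uplink or a fabric-facing interface and filter on the two tunnel UDP ports (Geneve uses UDP port 6081; VXLAN uses UDP port 4789). The following is a minimal sketch, assuming a Linux capture host and a placeholder interface name:

    tcpdump -nn -i eth1 'udp port 6081 or udp port 4789'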