How to Deploy QFX Series Switches for Basic VMware NSX-T Environment


This use case deploys NSX-T version 3.0.0 and vCenter 6.7 with QFX5110, QFX5120, and QFX5210-64C switches as the physical underlay to support communication across virtual and physical environments.

Requirements

System requirements for installing NSX-T are available in the VMware NSX-T 3.0 Data Center Installation Guide. The physical topology for this use case includes the following components:

  • Management and Edge Cluster—Three NSX managers are deployed on ESXi servers following the installation procedure in VMware NSX-T 3.0 Data Center Installation Guide.

  • Compute Clusters—Two servers (ESXi host OS) per compute cluster orchestrate virtual machines to demonstrate application functionality.

Juniper hardware requirements for installing NSX-T include:

  • One QFX5110 and one QFX5120 configured as leaf devices, running Junos OS release 19.1R3-S2.

  • Two QFX5210-64C switches configured as spine devices, running Junos OS release 19.1R3-S2.

Overview

This section describes the NSX-T configuration and Junos device configuration details for deploying a QFX fabric in a basic VMware NSX-T environment.

Physical Topology

The illustration below shows the physical topology: NSX-T deployed over a fabric with QFX5210-64C switches as spine devices and QFX5110-32Q and QFX5120-48Y switches as leaf devices.

Figure 1: Physical Topology

NSX-T Configuration

NSX-T configuration involves the following components, which apply to both the intra-VNI and inter-VNI use cases:

NSX Manager and Edge Nodes

The NSX management and edge cluster hosts the NSX managers and edge nodes. NSX Manager is a virtual appliance in the vCenter server environment.

Step-by-Step Procedure

  1. Install NSX Manager. For instructions, see NSX Manager Installation.

    After the first NSX Manager is installed, you can deploy two more NSX Manager nodes to form a three-node cluster.

Compute Managers

NSX Manager and vCenter Server have a one-to-one relationship. For every instance of NSX Manager, there is a vCenter Server, even in a cross-vCenter NSX environment. Only one NSX Manager can be registered with a vCenter Server system. Changing the vCenter registration of a configured NSX Manager is not supported.

Step-by-Step Procedure

  1. Register the NSX Manager with the vCenter server. For instructions, see Add a Compute Manager.

    After the NSX Manager is registered, verify that the Registration Status is Registered and that the Connection Status is Up.

Compute Cluster

Host preparation is the process in which NSX Manager installs kernel modules on ESXi hosts that are members of vCenter clusters and builds the control-plane and management-plane fabric. The NSX kernel modules, packaged in VIB files, run within the hypervisor kernel and provide services such as distributed routing, distributed firewall, and overlay bridging.

Step-by-Step Procedure

To prepare host transport nodes in NSX-T, configure the following parameters to create the N-VDS. (An optional check to confirm that the NSX VIBs are installed on the hosts follows these steps.)

  1. Configure the overlay transport zone, which is used for Geneve tunnels and for carrying encapsulated traffic. For instructions, see Configure a Managed Host Transport Node.
  2. Configure uplink profiles. Uplinks are logical interfaces on the N-VDS; you can use a single uplink or multiple uplinks, depending on the profile you choose. For instructions, see Create an Uplink Profile.
  3. Assign TEP addresses by using DHCP, an IP pool, or a static IP list. For instructions, see Create an IP Pool in Manager Mode.
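
As an optional check that is not part of the original procedure, you can confirm that the NSX kernel modules (VIBs) were pushed to a prepared host by listing the installed VIBs from the ESXi shell. The host prompt is illustrative, the exact VIB names vary by NSX-T release, and the command output is omitted here:

user@host:~# esxcli software vib list | grep -i nsx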

Transport Zone

A transport zone defines the reach of transport nodes and their VMs. Only VMs on physical hosts that belong to the same transport zone can communicate with each other. Under the transport zone configuration, you can choose either an overlay or a VLAN transport zone and provide an uplink teaming policy.

Step-by-Step Procedure

  1. Create transport zones. For instructions, see Create Transport Zones.

IP Address Pool

An IP address pool is used to allocate IP addresses to tunnel endpoints (TEPs). When you add an IP address pool, make sure that its addresses are routable in the network fabric.

Step-by-Step Procedure

  1. Add IP address pools. For instructions, see Create an IP Pool in Manager Mode.

Segments

A segment functions as a logical switch that allows communication between attached VMs. Each segment has a virtual network identifier (VNI) used by the Geneve tunnel, and the transport zone defines the reach of that tunnel. A segment includes multiple segment ports, and you can attach a VM to a segment through vCenter.

Step-by-Step Procedure

  1. Add segments and segment ports. For instructions, see Add a Segment.

Tier-1 Gateway

A Tier-1 gateway functions as a logical router. Its downlinks connect to segments and its uplinks connect to Tier-0 gateways.

Step-by-Step Procedure

  1. Add a Tier-1 gateway. For instructions, see Add a Tier-1 Gateway.

Junos Configuration

Configure the Junos devices as shown below.

Underlay (EBGP – IP CLOS)

The underlay network consists of two leaf devices (QFX5110 and QFX5120) and two spine devices (QFX5210) that are configured in a 3-stage CLOS by using EBGP. A representative configuration sketch follows the steps below.

Step-by-Step Procedure

  1. Configure the QFX5110 leaf device that is attached to ESXi/VMs.
    1. user@QFX5110-01> show configuration protocols bgp
    2. user@QFX5110-01> show configuration policy-options policy-statement send-direct
    3. user@QFX5110-01> show configuration policy-options policy-statement send-ebgp
  2. Configure the QFX5120 leaf device that is attached to ESXi/VMs.
    1. user@QFX5120-01> show configuration protocols bgp
    2. user@QFX5120-01> show configuration policy-options policy-statement send-direct
    3. user@QFX5120-01> show configuration policy-options policy-statement send-ebgp
  3. Configure the QFX5210 spine device on the left side.
    1. user@QFX5210-01> show configuration protocols bgp
    2. user@QFX5210-01> show configuration policy-options policy-statement send-direct
    3. user@QFX5210-01> show configuration policy-options policy-statement send-ebgp
  4. Configure the QFX5210 spine device on the right side.
    1. user@QFX5210-02> show configuration protocols bgp
    2. user@QFX5210-02> show configuration policy-options policy-statement send-direct
    3. user@QFX5210-02> show configuration policy-options policy-statement send-ebgp
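
The configuration listings referenced in the steps above are not reproduced here. As an illustrative sketch only, the relevant hierarchies on a leaf such as QFX5110-01 could look like the following. The interface names, point-to-point addresses, and autonomous system numbers are assumptions and are not taken from the original configuration:

interfaces {
    /* assumed uplink to spine QFX5210-01 */
    et-0/0/48 {
        unit 0 {
            family inet {
                address 10.1.1.1/31;
            }
        }
    }
    /* assumed uplink to spine QFX5210-02 */
    et-0/0/49 {
        unit 0 {
            family inet {
                address 10.1.2.1/31;
            }
        }
    }
}
routing-options {
    /* assumed leaf autonomous system */
    autonomous-system 65001;
}
protocols {
    bgp {
        group underlay {
            type external;
            export [ send-direct send-ebgp ];
            /* assumed spine peer addresses and autonomous systems */
            neighbor 10.1.1.0 peer-as 65101;
            neighbor 10.1.2.0 peer-as 65102;
        }
    }
}
policy-options {
    /* advertise directly connected subnets (for example, the TEP subnet) into EBGP */
    policy-statement send-direct {
        term 1 {
            from protocol direct;
            then accept;
        }
    }
    /* re-advertise EBGP-learned routes to the other peers */
    policy-statement send-ebgp {
        term 1 {
            from protocol bgp;
            then accept;
        }
    }
}

The spine devices would mirror this sketch with their own autonomous system numbers and the opposite ends of the /31 point-to-point links.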

Overlay (Geneve)

On QFX Series devices, interfaces that attach to ESXi servers (VMs) and physical interfaces that act as VTEPs need an MTU of 1600 bytes or higher to accommodate the additional overhead of Geneve encapsulation. We recommend at least 1700 bytes to allow for possible future expansion of the Geneve header. Add this configuration under the [edit interfaces] hierarchy on the QFX5110 and QFX5120 devices. For simplicity, this configuration uses an MTU of 9000 bytes.

For example, increase the MTU to account for the Geneve header:
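
A minimal sketch, assuming et-0/0/10 is a host-facing or VTEP-facing port (the interface name is a placeholder; the 9000-byte MTU matches the value used in this configuration):

interfaces {
    et-0/0/10 {
        /* jumbo MTU to absorb the Geneve encapsulation overhead */
        mtu 9000;
    }
}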

Step-by-Step Procedure

On the leaf devices, manually add the configuration for the TEP connection; an illustrative sketch follows these steps.

  1. Add the configuration on the QFX5110 device:
  2. Add the configuration on the QFX5120 device:
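
The original listings for these two steps are not included here. As one possible sketch, if the leaf acts as the first-hop gateway for the TEP subnet, the configuration could combine a host-facing VLAN with an IRB interface. The VLAN name and ID, interface names, and addresses below are assumptions:

vlans {
    TEP-VLAN {
        vlan-id 100;
        l3-interface irb.100;
    }
}
interfaces {
    /* assumed ESXi-facing port that carries the TEP VLAN */
    et-0/0/10 {
        mtu 9000;
        unit 0 {
            family ethernet-switching {
                interface-mode trunk;
                vlan {
                    members TEP-VLAN;
                }
            }
        }
    }
    irb {
        unit 100 {
            family inet {
                /* assumed TEP subnet gateway; advertised to the underlay by the send-direct policy */
                address 10.20.1.1/24;
            }
        }
    }
}

The QFX5120 device would carry an equivalent configuration for its own TEP subnet.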

Verification

Step-by-Step Procedure

The verification steps confirm that the physical devices and all NSX-T and Junos configuration components are functioning as expected. A few optional additional checks are listed after these steps.

  1. Verify the status of the NSX Manager cluster:
  2. Verify the TEP status:
    1. nsx-t-1> get logical-switch

      In the output, check the list of segments that are created.

    2. nsx-t-1> get logical-switch 3558d7a7-c9da-45ac-9815-66127d047e39 vtep

      In the output, check the TEP information on the logical switch (VNI 67584 = LS-app).

    3. nsx-t-1> get logical-switch 3558d7a7-c9da-45ac-9815-66127d047e39 mac-table

      In the output, check the MAC addresses on the logical switch (VNI 67584 = LS-app).

  3. Verify the VTEP status on Compute (ESXi) hosts:
    1. user@host:~# nsxdp-cli vswitch instance list
    2. user@host:~# esxcfg-vmknic -l
  4. Verify VTEP status on QFX Series switches:
    1. user@QFX5110-01> show ethernet-switching table
    2. user@QFX5110-01> show arp no-resolve
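
In addition to the checks above, the following optional commands (not part of the original procedure) can help confirm the NSX Manager cluster and the EBGP underlay; the output is omitted here:

nsx-t-1> get cluster status
user@QFX5110-01> show bgp summary
user@QFX5110-01> show route protocol bgp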

Verify Traffic Flow

The VM-to-VM use cases are:

  • Intra-VNI (VNI 67586 <-> VNI 67586)

  • Inter-VNI (VNI 67584 <-> VNI 67586)

Verify VM-to-VM Intra-VNI Traffic Flow (VM-1: 172.16.15.11 <-> VM-2: 172.16.15.12)

  1. Orchestration:

    VM-1 (172.16.15.11) is hosted on TEP-1 (ESXi), and VM-2 (172.16.15.12) is hosted on TEP-2 (ESXi). Both VMs are in VNI 67586.

  2. Test case:

    Demonstrate bidirectional communication between VM-1 and VM-2 on different hosts by using interfaces in the same IP subnet.

    Both transport nodes are ESXi servers that are configured with PROD-NVDS and reside in the same overlay transport zone. The TEP IP address is installed on the vmk10 VMkernel networking interface. VM-1 (web-01a) and VM-2 (web-02a) are connected to the same overlay-based segment, LS-web.

    Traffic flow proceeds as follows:

    1. VM-1 on Transport Node A sends IP packets toward TEP-1 (ESXi).
    2. TEP-1 encapsulates the packet with a Geneve header.
    3. Transport Node A sends the packet toward TEP-2 over the Junos devices that function as the IP underlay.
    4. TEP-2 on Transport Node B receives and decapsulates the packet by removing the Geneve header.
    5. TEP-2 sends the packet toward the final destination, VM-2.
  3. Preliminary check:
    1. VM-1 (web-01a):
    2. VM-2 (web-02a):
    3. Check interface connectivity (ping from 172.16.15.11 to 172.16.15.12):

Verify VM-to-VM Inter-VNI Traffic Flow (VM-1: 172.16.15.11 <-> VM-2: 172.16.25.12)

  1. Orchestration:

    VM-1 (172.16.15.11) in VNI 67586 (LS-web) is hosted on TEP-1 (ESXi), and VM-2 (172.16.25.12) in VNI 67584 (LS-app) is hosted on TEP-2 (ESXi).

  2. Test case:

    Demonstrate bidirectional communication between VM-1 and VM-2 on different transport nodes by using interfaces in different IP subnets.

    The distributed router (DR) of the Tier-1 gateway is configured on both transport nodes. VM-1 (web-01a) is connected to segment LS-web. VM-2 (app-02a) is connected to segment LS-app. Both segments are in the same overlay-based transport zone.

    Traffic flow proceeds as follows:

    1. VM-1 on Transport Node A sends IP packets toward its default gateway, the DR.
    2. The DR checks its routing table and determines that the destination belongs to a different segment, LS-app.
    3. TEP-1 encapsulates the packet with a Geneve header that includes the VNI of LS-app (67584).
    4. Transport Node A sends the packet toward TEP-2 over the Junos devices that function as the IP underlay.
    5. TEP-2 on Transport Node B receives the packet and removes the Geneve header (VNI 67584).
    6. TEP-2 sends the packet toward the final destination, VM-2.
  3. Preliminary check:
    1. VM-1 (web-01a):
    2. VM-2 (app-02a):
    3. Check interface connectivity (ping from 172.16.15.11 to 172.16.25.12):