How to Add Bare Metal Servers to Basic VMware NSX-T Environment

 

This use case illustrates and explains Layer 3 communication between a virtual machine (VM) and a bare-metal server (BMS).

Requirements

System requirements for installing NSX-T are available in the VMware NSX-T 3.0 Data Center Installation Guide, which is used as a reference to set up the physical topology below:

  • Management and Edge Cluster—Three NSX managers are deployed on ESXi servers following the installation procedure available in the VMware NSX-T 3.0 Data Center Installation Guide.

  • Compute Clusters—Three servers (ESXi host OS) per compute cluster host virtual machines that demonstrate application functionality.

  • Bare-Metal Servers—One bare-metal server (CentOS host OS) that does not run virtual machines.

Juniper hardware requirements for installing NSX-T include:

  • One QFX5110 switch and one QFX5120 switch configured as leaf devices, running Junos OS release 19.1R3-S2.

  • Two QFX5210-64C switches configured as spine devices, running Junos OS release 19.1R3-S2.

Overview

The topology below shows the physical configuration with NSX-T: QFX5210-64C switches as spine devices, and QFX5110-32Q and QFX5120-48Y switches as leaf devices.

Figure 1: Physical Topology

NSX-T Configuration

NSX-T configuration includes the following components that are applicable to the VM-to-BMS use case:

VMware vSphere Distributed Switch (VDS)

The NSX management and edge cluster hosts the NSX managers and edge nodes. NSX Manager is a virtual appliance in the vCenter server environment.

Step-by-Step Procedure

  1. Install NSX Manager. For instructions, see NSX Manager Installation.
  2. Create a VDS and port groups for connectivity between the Edge virtual interfaces and the ESXi physical NICs. For instructions, see Create a vSphere Distributed Switch.

    The Edge-Uplink-Overlay port group is pinned to vmnic4 on uplink 1 for the overlay connection.

    In the Edge-Uplink-to-TOR port group, the VLAN type is set to VLAN trunking. The port group is pinned to vmnic1 on uplink 2 for the underlay connection, which is used to establish a BGP session between the Tier-0 Gateway and the Juniper QFX Series devices. Ensure that you choose the Accept option from both the Promiscuous mode and Forged transmits drop-down menus.

Edge Transport Nodes

NSX Edge transport nodes provide centralized network services and are responsible for the NSX-T data plane implementation. An Edge transport node can belong to one overlay transport zone or to multiple transport zones.

Step-by-Step Procedure

To create an Edge VM:

  1. Connect the uplink of the overlay-backed N-VDS to the Edge-Uplink-Overlay port group for the Geneve tunnel. For instructions, see Create an NSX Edge Transport Node.
  2. Connect the uplink of the VLAN-backed N-VDS to the Edge-Uplink-to-TOR port group for the Edge node's northbound uplink connectivity. For instructions, see NSX Edge Networking Setup.

    After the Edge VM deployment is complete, the Configuration Status shows Success and the Node Status shows Up.

  3. Attach nsx-edge-01 to Edge-Cluster-01. For instructions, see Create an NSX Edge Cluster.

Segments

A segment functions as a logical switch that allows communication between attached VMs. Each segment has a virtual network identifier (VNI) that the Geneve tunnel uses and a transport zone that defines the reach of the Geneve tunnel. A segment includes multiple segment ports, and you can attach a VM to segments through vCenter.

To enable the traffic flow between logical and physical infrastructure, you must apply a VLAN-based transport zone.

Step-by-Step Procedure

To create segments:

  1. Create the LS-db segment to attach to the VMs. For instructions, see Add a Segment.
  2. Create the Uplink-1 segment for the Tier-0 Gateway to connect to the upstream network. For instructions, see Add a Tier-1 Gateway.

Tier-0 Gateway

A Tier-0 gateway is used to create a downlink connection to the Tier-1 gateway and an uplink connection to the physical fabric.

Step-by-Step Procedure

  1. Establish a BGP connection. For instructions, see Configure BGP.
  2. Attach the Tier-1 Gateway to the Tier-0 Gateway. For instructions, see Add a Tier-1 Gateway.

    The Router Link is auto-generated by NSX-T when you attach the Tier-1 Gateway to the Tier-0 Gateway.

    The External Interface is created with IP address 192.168.100.2 and connected to the VLAN-backed uplink segment for northbound connectivity.

  3. Enable the BGP connection and configure BGP neighbors. Enter the IP address and remote AS number based on the physical configurations. For instructions, see Configure BGP.

    The Route Filter is auto-generated with IPv4. You should manually add the EVPN-related information under Route Filter in the integration section. For the IP Fabric use case, you can use the default configuration.

  4. Configure Route Re-distribution. For instructions, see Enable Route Redistribution on the Tier-0 Logical Router in Manager Mode.

Tier-1 Gateway

A Tier-1 gateway functions as a logical router. It connects the downlink to segments and the uplink to Tier-0 gateways.

Step-by-Step Procedure

  1. Associate the Tier-1 gateway with the Tier-0 gateway and its cluster. For instructions, see Add a Tier-1 Gateway.

Junos Configuration

Configure the Junos devices.

Underlay (EBGP – IP CLOS)

The underlay network consists of two leaf devices (QFX5110 and QFX5120) and two spine devices (QFX5210) that are configured in a 3-stage CLOS topology by using EBGP.

Step-by-Step Procedure

  1. Configure the QFX5110 leaf device that is attached to the BMS.
    1. user@QFX5110-01> show configuration protocols bgp
    2. user@QFX5110-01> show configuration policy-options policy-statement send-direct
    3. user@QFX5110-01> show configuration policy-options policy-statement send-ebgp
  2. Configure the QFX5120 leaf devices that are attached to the ESXi hosts (VMs).
    1. user@QFX5120-02> show configuration protocols bgp
    2. user@QFX5120-02> show configuration policy-options policy-statement send-direct
    3. user@QFX5120-02> show configuration policy-options policy-statement send-ebgp
    4. user@QFX5120-01> show configuration protocols bgp
    5. user@QFX5120-01> show configuration policy-options policy-statement send-direct
    6. user@QFX5120-01> show configuration policy-options policy-statement send-ebgp
  3. Configure the QFX5210 Spine device on the left side.
    1. user@QFX5210-01> show configuration protocols bgp
    2. user@QFX5210-01> show configuration policy-options policy-statement send-direct
    3. user@QFX5210-01> show configuration policy-options policy-statement send-ebgp
  4. Configure the QFX5210 Spine device on the right side.
    1. user@QFX5210-02> show configuration protocols bgp
    2. user@QFX5210-02> show configuration policy-options policy-statement send-direct
    3. user@QFX5210-02> show configuration policy-options policy-statement send-ebgp
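
As a reference point only, here is a minimal sketch of what the underlay BGP and export-policy configuration on a leaf device might look like. The peer addresses and autonomous system numbers are hypothetical placeholders rather than values from this test bed; only the Tier-0 neighbor address (192.168.100.2) is taken from this example, and the nsx-tier0 group applies only to the QFX5120 leaf that peers with the Tier-0 Gateway.

  routing-options {
      /* Hypothetical leaf autonomous system number */
      autonomous-system 65101;
  }
  protocols {
      bgp {
          /* Hypothetical EBGP group toward the spine devices */
          group underlay {
              type external;
              export [ send-direct send-ebgp ];
              neighbor 10.1.1.0 {
                  peer-as 65201;
              }
              neighbor 10.1.2.0 {
                  peer-as 65202;
              }
          }
          /* EBGP peering toward the NSX-T Tier-0 Gateway (QFX5120 leaf only) */
          group nsx-tier0 {
              type external;
              export send-direct;
              neighbor 192.168.100.2 {
                  peer-as 65002;
              }
          }
      }
  }
  policy-options {
      /* Advertise directly connected subnets (for example, the BMS subnet) */
      policy-statement send-direct {
          term 1 {
              from protocol direct;
              then accept;
          }
      }
      /* Re-advertise BGP-learned routes */
      policy-statement send-ebgp {
          term 1 {
              from protocol bgp;
              then accept;
          }
      }
  }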

Overlay (Geneve)

For QFX Series devices with interfaces attached to ESXi servers (VMs), the MTU on the physical interfaces acting as VTEPs must be set to 1600 bytes or higher to support the additional overhead of Geneve encapsulation. We recommend an MTU of 1700 bytes to accommodate possible future expansion of the Geneve header. This configuration is added manually under the [edit interfaces] hierarchy on the QFX5110 and QFX5120 devices. For simplicity, we have applied an MTU of 9000 bytes in this configuration.

For example, increase the MTU to account for the Geneve header:
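
A minimal sketch of such a setting is shown below. The interface name xe-0/0/10 is a hypothetical placeholder; the 9000-byte value matches the MTU applied in this configuration.

  [edit]
  user@QFX5120-01# set interfaces xe-0/0/10 mtu 9000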

Step-by-Step Procedure

On the leaf devices, add the configuration for the TEP connection.

  1. Add the configuration on the QFX5110 device:
  2. Add the configuration on the QFX5120 device:
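
As a rough sketch only, the TEP-facing configuration on a leaf might take the shape below, assuming the ESXi-facing port is a trunk carrying the TEP VLAN and the leaf provides the IRB gateway for the 172.20.12.0/24 TEP subnet; the interface name, VLAN name and ID, IRB unit, and gateway address are hypothetical placeholders.

  interfaces {
      xe-0/0/20 {
          /* Hypothetical ESXi-facing port carrying the TEP VLAN */
          mtu 9000;
          unit 0 {
              family ethernet-switching {
                  interface-mode trunk;
                  vlan {
                      members tep-vlan;
                  }
              }
          }
      }
      irb {
          unit 120 {
              /* Hypothetical gateway for the 172.20.12.0/24 TEP subnet */
              family inet {
                  address 172.20.12.1/24;
              }
          }
      }
  }
  vlans {
      tep-vlan {
          vlan-id 120;
          l3-interface irb.120;
      }
  }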

Verification

Step-by-Step Procedure

The verification steps focus on verifying that the physical devices and all configuration components for NSX-T and the Junos devices are functioning as desired.

  1. Verify the status of the cluster:
  2. Verify the TEP status:

    You can verify the TEP status for all ESXi hosts at the compute hosts, on the NSX-T Manager, and on the Juniper QFX Series devices.

    1. Verify the TEP status on QFX Series switches:
      1. user@QFX5120-01> show ethernet-switching table
      2. user@QFX5120-01> show arp no-resolve

        00:50:56:8d:ab:c4 is the MAC address of the remote BGP neighbor (192.168.100.2) on NSX-T.

        00:50:56:8d:79:1c is the MAC address of the TEP (172.20.13.151) on Edge nodes.

        00:50:56:65:b7:d3 is the MAC address of the TEP (172.20.12.151) on Transport nodes.

      3. user@QFX5120-01> show bgp summary

        A BGP session is established between the QFX5120 switch and the NSX-T Tier-0 Gateway.

    2. Verify the TEP status on the ESXi transport node:
      1. user@host:~# esxcfg-vmknic -l
      2. user@host:~# nsxdp-cli vswitch instance list
  3. Verify the Tier-0 Gateway status:

    After you log in to the NSX Edge VM, you can check the routing information by accessing the VRFs of the Service Router (SR) or the Distributed Router (DR).

    1. Verify the Tier-0 Gateway status on NSX-Manager:

      Segments configured on NSX-T include the segments created for attaching VMs for L2 switching functionality, as well as transit segments created for routing functionality to connect the Tier-1 Gateway, the Tier-0 Gateway, and the physical infrastructure.

    2. Verify the Tier-0 Gateway status on the NSX-T Edge VM:
      1. Verify the status on NSX-T Edge VM.

        Check the list of logical routers in the output; it includes both the DR and the SR for the Tier-0 and Tier-1 gateways.

      2. Log in to vrf 6 to check BGP-related information on the Tier-0 Gateway SR:

        Verify that the BGP session is established on the NSX-T Edge node.

        Underlay IP addresses are learned through the uplink BGP neighbor 192.168.100.1.

        VM IP addresses are learned through the downlink (100.64.176.1) connected to Tier-1 Gateway.

      3. Check BGP status:

Verify Traffic Flow

VM to BMS use cases are:

  • VM to BMS communication (VM: 172.16.35.11 <-> BMS: 192.10.10.2)

Step-by-Step Procedure

Verify VM to BMS traffic flow (VM: 172.16.35.11 <-> BMS: 192.10.10.2).

  1. Orchestration:

    The routing function is configured on the NSX-T Edge nodes for L3 connectivity between the underlay VLAN and the overlay logical switch.

    The steps required for VM-to-BMS L3 communication are as follows:

    1. Confirm physical test bed connectivity.
    2. Create segments to attach VM, Tier-1, and Tier-0 gateways for routing capabilities.
    3. Configure BGP on both the Tier-0 Gateway and the TOR. Ensure that the BGP session is established between the Tier-0 Gateway and the TOR.
  2. Preliminary check:
    1. VM (db-01a):
    2. BMS:
    3. Check interface connectivity (ping from 172.16.35.11 to 192.10.10.2):
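
In addition to the end-host pings, a few Junos operational-mode commands can help confirm the forwarding path on the leaf devices, assuming the BGP sessions and route redistribution described above are in place. For example, check that the BMS-facing leaf has a route toward the VM, that the ESXi-facing leaf has a route toward the BMS, and that the session to the Tier-0 Gateway is established:

  user@QFX5110-01> show route 172.16.35.11
  user@QFX5120-01> show route 192.10.10.2
  user@QFX5120-01> show bgp summary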