Overview of VMware NSX-T
About This Network Configuration Example
This network configuration example (NCE) describes how to configure VMware NSX-T and Juniper QFX5K and QFX10K Series switches to offer a network virtualization solution.
VMware NSX-T Data Center provides an integrated control plane, data plane, and management plane that support a host-based overlay solution for virtual workloads by leveraging the Geneve protocol, while Juniper QFX switches provide the fabric underlay.
The use case shows how to integrate NSX-T with QFX5K and QFX10K Series switches.
Use Case Overview
The use cases in this NCE are about integrating NSX-T with QFX5K and QFX10K Series switches with an IP Fabric underlay. Each use case is suitable in different deployments:
How to Deploy QFX Series Switches for Basic VMware NSX-T Environment
This use case is suitable for Greenfield deployments in which only virtual workload communication is required. It is easy to implement, with minimal configuration needed on NSX-T. A Geneve tunnel is established between the hypervisors, and the Juniper Networks switches provide the underlay for IP connectivity.
For more information, see How to Deploy QFX Series Switches for Basic VMware NSX-T Environment.
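The leaf role in such an IP fabric underlay can be sketched as follows. This is a minimal illustration in Junos set-command form, not configuration from this NCE: the interface names, IP addresses, and AS numbers are assumptions. Note the increased MTU, which leaves headroom for the Geneve encapsulation overhead, and the export policy that advertises the loopback so TEPs in different subnets remain reachable.

```
# Hypothetical leaf underlay: EBGP to the spine, loopback advertised.
set interfaces xe-0/0/0 mtu 9216
set interfaces xe-0/0/0 unit 0 family inet address 172.16.0.1/31
set interfaces lo0 unit 0 family inet address 10.0.0.11/32
set routing-options router-id 10.0.0.11
set routing-options autonomous-system 65011
set protocols bgp group UNDERLAY type external
set protocols bgp group UNDERLAY peer-as 65001
set protocols bgp group UNDERLAY neighbor 172.16.0.0
set protocols bgp group UNDERLAY export ADVERTISE-LOOPBACK
set policy-options policy-statement ADVERTISE-LOOPBACK term 1 from interface lo0.0
set policy-options policy-statement ADVERTISE-LOOPBACK term 1 then accept
```

With per-leaf EBGP AS numbers like this, the fabric gains loop prevention and simple traffic engineering through standard BGP attributes.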
How to Add Bare Metal Servers to Basic VMware NSX-T Environment
For connectivity between virtual and physical workloads, this use case uses Edge nodes and Tier-0 Gateways. BGP sessions between the Edge nodes and the leaf devices exchange routes between the virtual and physical workloads.
For more information, see How to Add Bare Metal Servers to Basic VMware NSX-T Environment.
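On the leaf side, the peering toward the Edge node's Tier-0 uplink might look like the following sketch (Junos set-command form; the addresses, AS numbers, and policy name are illustrative assumptions, not values from this NCE):

```
# Hypothetical EBGP peering from a leaf to an NSX Edge Tier-0 uplink.
set interfaces xe-0/0/10 unit 0 family inet address 192.168.50.1/24
set protocols bgp group NSX-T0 type external
set protocols bgp group NSX-T0 peer-as 65500
set protocols bgp group NSX-T0 neighbor 192.168.50.2
set protocols bgp group NSX-T0 multipath
set protocols bgp group NSX-T0 export ADVERTISE-BMS-ROUTES
set policy-options policy-statement ADVERTISE-BMS-ROUTES term 1 from protocol direct
set policy-options policy-statement ADVERTISE-BMS-ROUTES term 1 then accept
```

The export policy advertises the directly connected bare metal server subnets to the Tier-0 Gateway, while multipath allows ECMP across multiple Edge nodes.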
How to Deploy QFX Fabric for Advanced NSX-T Environments with EVPN Integration
This use case is suitable for Brownfield deployments where you have already implemented EVPN in your production network. The NSX-T configuration is the same as that of the How to Deploy QFX Series Switches for Basic VMware NSX-T Environment use case, except that the EVPN fabric serves as underlay connectivity. You need not change the EVPN configuration to realize virtual workload communication. The Geneve packet is encapsulated with the VXLAN header on the leaf device. ESI-LAG is an option for multi-homing deployments.
For more information, see How to Deploy QFX Fabric for Advanced NSX-T Environments with EVPN Integration.
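On a QFX leaf, the EVPN-VXLAN pieces of such a fabric, including an ESI-LAG toward a multi-homed hypervisor, can be sketched as follows (set-command form; the ESI, LACP system ID, VLAN, VNI, and route-target values are illustrative assumptions):

```
# Hypothetical EVPN-VXLAN leaf with an all-active ESI-LAG to a multi-homed host.
set interfaces ae0 esi 00:11:11:11:11:11:11:11:11:11
set interfaces ae0 esi all-active
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:11:11:11
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members V100
set vlans V100 vlan-id 100
set vlans V100 vxlan vni 10100
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 10.0.0.11:1
set switch-options vrf-target target:65000:1
```

Both leaf devices attached to the host share the same ESI and LACP system ID, so the host sees a single LAG while EVPN handles loop prevention and aliasing.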
How to Add Non-virtualized Nodes to NSX-T Environments with EVPN Integration
This use case is suitable for Brownfield deployments where you have already implemented EVPN in your production network and have an additional requirement for connectivity between virtual and physical workloads. The Edge node is introduced to provide this connectivity and functions similarly to a border router in a typical data center deployment. Virtual workloads use a Geneve tunnel to reach the Edge node for route learning. The Edge node, as an EVPN endpoint, leverages the VXLAN tunnel to send traffic to the destination bare metal server.
For more information, see How to Add Non-virtualized Nodes to NSX-T Environments with EVPN Integration.
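On the leaf that attaches the bare metal server, the server-facing port simply joins a VXLAN-mapped VLAN so that traffic arriving from the Edge node over the VXLAN tunnel reaches the BMS. A minimal sketch, in which the port, VLAN, and VNI values are assumptions:

```
# Hypothetical access port for a bare metal server in a VXLAN-mapped VLAN.
set interfaces xe-0/0/20 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/20 unit 0 family ethernet-switching vlan members V200
set vlans V200 vlan-id 200
set vlans V200 vxlan vni 10200
```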
The concept of network virtualization is similar to that of server virtualization. With server virtualization, the hypervisor abstracts the physical hardware and allows you to create multiple unique virtual machines (VMs) within seconds. Similarly, network virtualization allows you to create, within seconds, network services that consist of switches, routers, firewalls, load balancers, and so on. The virtualized network services can span multiple physical devices that connect virtual machines, using the physical hardware infrastructure as the IP backplane. Because these networks are created in software and are independent of the underlying physical infrastructure, they can be moved along with the VMs they connect.
VMware NSX-T 3.0 is the latest generation of VMware's network virtualization product series. NSX-T is the successor to the NSX-V product. NSX-T supports third-party hypervisors and next-generation overlay encapsulation protocols such as Generic Network Virtualization Encapsulation (Geneve). NSX-T acts as a network hypervisor that allows software abstraction of various network services, including logical switches (segments), logical routers (Tier-0 or Tier-1 Gateways), logical firewalls, logical load balancers, and logical VPNs.
The main components of VMware NSX-T Data Center are:
NSX Manager—The integrated management component of NSX-T; it provides controller, manager, and policy functionality. It is installed as a virtual appliance in the vCenter Server environment.
ESXi (Transport Node)—Servers and Edge nodes that have been prepared for NSX-T in the NSX-T data center.
N-VDS—NSX-managed Virtual Distributed Switch (N-VDS), derived from VMware vSphere Distributed Switch (VDS), de-couples the data plane from the compute manager (vCenter). It is a software abstraction layer present between servers and physical network for network connectivity. It can be created on both host and edge transport nodes and can co-exist with VMware Standard Switch (VSS) and VDS.
Segments—Formerly known as logical switches in NSX-V; similar to VLANs, segments reproduce switching functionality in the NSX-T environment to provide network connections between attached VMs.
NSX Edge—Provides gateway and security services to the virtual network. It enables east-west traffic between the VMs on the same host in different subnets without accessing the physical network. It also enables north-south traffic for VMs to access the public networks.
Service Router—Instantiated on the NSX Edge node and provides gateway functionality and services such as NAT, load balancing, and so on.
Distributed Router—Runs as a kernel module and is embedded in all transport nodes to provide basic routing capabilities, such as east-west routing and local routing inside the hypervisor.
NSX Tier-1 Gateway—Provides east-west connectivity.
NSX Tier-0 Gateway—Provides north-south connectivity. It supports static routing, BGP dynamic routing, and equal-cost multi-path (ECMP) routing. It is required for traffic flow between logical and physical networks.
NSX-T Overlay Transport Zones and VLAN Backed Transport Zones
NSX-T uses Geneve to provide overlay capabilities. To isolate traffic and create multi-tenant broadcast domains in the data center, you use transport zones.
Transport zones define a collection of transport nodes and their VMs that can communicate with each other across physical infrastructure. NSX-T has two types of transport zones:
Overlay transport zone—Used to establish Geneve tunnels between transport nodes and to transmit Geneve-encapsulated traffic.
VLAN-backed transport zone—Used to establish northbound connectivity between the Edge and the physical infrastructure. This implementation uses a VLAN-backed transport zone for VM-to-BMS communication.
Geneve and VXLAN Encapsulation
Geneve is used to encapsulate data plane packets for unicast or multicast traffic that traverses the network between TEPs (Tunnel Endpoints). It is designed to standardize the data format and to combine the functionality of existing network virtualization encapsulation protocols such as Virtual Extensible LAN (VXLAN), NVGRE, and STT. Geneve implements a Type-Length-Value (TLV) mechanism for flexibility and for compatibility across multiple vendors.
Geneve can encapsulate any type of network traffic and increases extensibility by leveraging several option fields. Geneve does not require any changes to the end-user application or its VMs.
Geneve Tunnel Endpoint (TEP)
Similar to VXLAN, Geneve is an overlay technology that provides tunnels to transmit Layer 2 packets over a Layer 3 network. TEPs are created during NSX-T preparation on both host and Edge transport nodes to encapsulate and decapsulate Layer 2 packets. TEPs can be in the same subnet or in different subnets, depending on the design of your network fabric. Geneve traffic is transmitted in standard UDP packets; both IPv4 and IPv6 are supported.
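Geneve uses the IANA-assigned UDP destination port 6081. On the underlay you can, for example, count or monitor TEP traffic with a simple firewall filter; the filter name, counter name, and interface below are illustrative assumptions:

```
# Hypothetical filter that counts Geneve (UDP/6081) traffic on a fabric-facing port.
set firewall family inet filter GENEVE-MON term TEP from protocol udp
set firewall family inet filter GENEVE-MON term TEP from destination-port 6081
set firewall family inet filter GENEVE-MON term TEP then count geneve-packets
set firewall family inet filter GENEVE-MON term TEP then accept
set firewall family inet filter GENEVE-MON term DEFAULT then accept
set interfaces xe-0/0/0 unit 0 family inet filter input GENEVE-MON
```

A filter like this is useful when verifying that tunnel traffic actually takes the expected underlay path between TEP subnets.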