NSX-T Inventory Mapping to Apstra Virtual Infrastructure

Overview

Apstra software can connect to the NSX-T API to gather inventory information about hosts, clusters, VMs, port groups, vDS/N-VDS, and NICs within the NSX-T environment. Apstra integrates with NSX-T to give Apstra admins visibility into the application workloads (VMs) that are running and to alert them about any inconsistencies that would affect workload connectivity. Apstra Virtual Infrastructure visibility provides underlay/overlay correlation and enables IBA analytics for the overlay and underlay.

You cannot view the NSX-T inventory in Apstra until the NSX-T Manager is associated with a blueprint.

As shown in the screenshot above, inventory collection for NSX-T is done via the Apstra extensible telemetry collector.
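
For reference, the sketch below shows how the same inventory data can be pulled directly from the NSX-T Manager REST API with Python. The manager address and credentials are placeholders; the /api/v1/transport-nodes endpoint is part of the standard NSX-T management API, but treat this as a minimal sketch rather than the actual collector implementation.

```python
# Minimal sketch: list NSX-T transport nodes (hypervisors) via the NSX-T Manager REST API.
# NSX_MANAGER and AUTH are placeholders; adjust for your environment.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder address
AUTH = ("admin", "password")                       # placeholder credentials

def get(path):
    """GET a management-plane API path and return the parsed JSON body."""
    resp = requests.get(f"{NSX_MANAGER}{path}", auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# /api/v1/transport-nodes returns the transport node inventory (ESXi hosts, Edge nodes).
for node in get("/api/v1/transport-nodes").get("results", []):
    print(node["id"], node.get("display_name"))
```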

NSX-T Networking Terminology and Correlation

NSX-T uses the following terminology for its control plane and data plane components. The corresponding Apstra concepts are described alongside each term.

Transport Zones

Transport Zones (TZ) define a group of ESXi hosts that can communicate with one another on a physical network.

There are two types of Transport Zones:

  1. Overlay Transport Zone: This transport zone can be used by both transport nodes and NSX Edges. When an ESXi host or NSX-T Edge transport node is added to an overlay transport zone, an N-VDS is installed on the ESXi host or NSX Edge node.
  2. VLAN Transport Zone: This transport zone can be used by NSX Edge and host transport nodes for their VLAN uplinks.

Each hypervisor host can belong to only one transport zone at a given point in time.

A newly created VLAN VN tagged towards an interface in the Apstra fabric corresponds to a VLAN-based transport zone, as per the screenshots below:

Here the tagged VLAN VN is mapped to the respective transport zone in NSX-T with the traffic type set to VLAN.
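
As an illustration, the hedged sketch below reads the transport zones and their traffic types from the NSX-T Manager API. /api/v1/transport-zones is a standard management-plane endpoint; the manager address and credentials are placeholders.

```python
# Minimal sketch: list NSX-T transport zones and their traffic type (OVERLAY or VLAN).
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")                       # placeholder

resp = requests.get(f"{NSX_MANAGER}/api/v1/transport-zones", auth=AUTH, verify=False)
resp.raise_for_status()

for tz in resp.json().get("results", []):
    # transport_type distinguishes overlay transport zones from VLAN transport zones.
    print(tz.get("display_name"), tz.get("transport_type"), tz.get("host_switch_name"))
```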

N-VDS

An NSX-managed virtual distributed switch provides the underlying forwarding and is the data plane of the transport nodes.

A few notable points about N-VDS virtual switches:

  • pNICs are physical ports on the ESXi host
  • pNICs can be bundled to form a link aggregation (LAG)
  • Uplinks are logical interfaces of an N-VDS
  • Uplinks are assigned pNICs or LAGs

Here, TEPs are the Tunnel Endpoints used for NSX overlay networking (Geneve encapsulation/decapsulation). P1/P2 are pNICs mapped to the uplink profile (U1/U2).
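
The pNIC-to-uplink mapping described above can also be read out of each transport node's host switch configuration. The sketch below assumes the StandardHostSwitchSpec layout (host_switch_spec.host_switches[].pnics[] with device_name and uplink_name); older NSX-T releases expose the same data under slightly different keys, so verify against your version's API reference.

```python
# Minimal sketch: print which pNIC is mapped to which uplink on every transport node.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")                       # placeholder

nodes = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json().get("results", [])

for node in nodes:
    spec = node.get("host_switch_spec", {})            # layout is version dependent
    for hs in spec.get("host_switches", []):
        for pnic in hs.get("pnics", []):
            # e.g. vmnic2 -> uplink-1 on the N-VDS named in host_switch_name
            print(node.get("display_name"),
                  hs.get("host_switch_name"),
                  pnic.get("device_name"), "->", pnic.get("uplink_name"))
```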

An N-VDS is instantiated at the hypervisor level and can be thought of as a virtual switch connected to the physical ToR leaf devices, as shown below:

Transport Node

A transport node is a node capable of participating in NSX-T Data Center overlay or VLAN networking.

VMs hosted on different Transport nodes communicate seamlessly across the overlay network. A transport node can belong to:

  • Multiple VLAN transport zones.
  • At most one overlay transport zone with a standard N-VDS.

This can be compared to setting end hosts (servers) in an Apstra blueprint to be part of a VLAN (leaf-local) or VXLAN (inter-leaf) virtual network.

NSX Edge Node

The NSX Edge provides routing services and connectivity to networks that are external to the NSX-T deployment. It is required for establishing external connectivity from the NSX-T domain, through a Tier-0 router via BGP or static routing.

NSX Edge VMs have uplinks towards the ToR leaves that require a separate VLAN transport zone. The Apstra fabric must be configured with the corresponding VLAN virtual network.

Note:

NSX-T Edge bare metal or VM form factors are transport nodes and are discovered as hypervisors in Apstra. However, VM Edge transport nodes can't be correlated to the connected ToR leaf.

NSX Controller Cluster

It provides control plane functions for NSX-T Data Center logical switching and routing components.

NSX Manager

It is a node that hosts the API services, the management plane, and the agent services.

NSX Inventory Model

  • In NSX-T, transport nodes are hypervisor hosts, and they can be correlated to server nodes in a blueprint connected to the ToR leaf devices. In NSX-T Data Center, ESXi hosts are prepared as transport nodes, which allows them to exchange traffic for virtual networks on the Apstra fabric or among networks on the nodes. You must ensure the hypervisor (ESXi) networking stack is sending LLDP packets to aid the correlation of ESXi hosts with server nodes in the blueprint.
  • A pNIC is the actual physical network adapter on the ESXi or hypervisor host. Hypervisor pNICs can be correlated to the server interfaces in the blueprint. LAG or teaming configuration is done on the links mapped to these physical NICs; this can be correlated to the bond configuration done on the ToR leaf devices towards the end servers.
  • The NSX-T integration with Apstra discovers VM virtual networks, which can be correlated to blueprint virtual networks (see the sketch after this list). When VMs need to communicate with each other over tunnels between hypervisors, the VMs are connected to the same logical switch in NSX-T (implemented on the N-VDS). Each logical switch has a virtual network identifier (VNI), similar to a VLAN ID; this corresponds to the VXLAN VNIs in the Apstra fabric physical infrastructure.
  • The NSX-T uplink profile defines the network interface configuration facing the fabric in terms of LAG and LACP configuration on the pNIC interfaces. The uplink profile is mapped in the transport node for the links from the hypervisor/ESXi hosts towards the top-of-rack switches in the Apstra fabric.
  • A VNIC is a virtual interface of a transport node or VM. The N-VDS switch maps physical NICs to such uplink virtual interfaces. These virtual interfaces can be correlated to server interface ports of the Apstra fabric.
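
To see the VM side of this inventory model, the sketch below lists the virtual machines and their virtual interfaces from the NSX-T Manager inventory API. /api/v1/fabric/virtual-machines and /api/v1/fabric/vifs are standard inventory endpoints; the join on external_id/owner_vm_id reflects my reading of the data model and should be confirmed against your NSX-T version.

```python
# Minimal sketch: list VMs discovered by NSX-T together with their vNICs (MAC addresses).
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")                       # placeholder

def get(path):
    resp = requests.get(f"{NSX_MANAGER}{path}", auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json().get("results", [])

# Index the VMs by their external_id so each VIF can be tied back to its owner VM.
vms = {vm["external_id"]: vm for vm in get("/api/v1/fabric/virtual-machines")}

for vif in get("/api/v1/fabric/vifs"):
    vm = vms.get(vif.get("owner_vm_id"), {})
    print(vm.get("display_name", "?"),
          vif.get("device_name"),          # vNIC label
          vif.get("mac_address"))
```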

Model Details and Relationship

Hypervisor

  • Hostname: FQDN attribute of transport node
  • Hypervisor_id: Id attribute of transport node
  • Label: Display name attribute of transport node
  • version: NSX-T version installed on the transport node

To obtain the NSX-T API response for the respective hypervisor hosts and understand the correlation, you can use a graph query. To open the GraphQL Explorer, click the “>_” button.

Then, in the graph explorer, type a graph query on the left using GraphQL, as per the screenshot below:
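
If you prefer to script these queries rather than use the explorer, the sketch below posts a GraphQL query to the blueprint. Both the endpoint path (/api/blueprints/<id>/ql) and the field names in the query are assumptions based on the explorer screenshots, and the authentication handling is simplified; check the API documentation for your Apstra release.

```python
# Hypothetical sketch: run a GraphQL query against an Apstra blueprint.
# The URL path, query fields and auth header are assumptions, not a documented contract.
import requests

APSTRA = "https://apstra.example.com"              # placeholder
BLUEPRINT_ID = "<blueprint-id>"                     # placeholder
TOKEN = "<auth-token>"                              # obtain via the Apstra auth API

query = """
{
  hypervisor_nodes {     # hypothetical field names; copy the real ones from the explorer
    label
    hostname
  }
}
"""

resp = requests.post(f"{APSTRA}/api/blueprints/{BLUEPRINT_ID}/ql",
                     json={"query": query},
                     headers={"AuthToken": TOKEN},   # header name is an assumption
                     verify=False)
print(resp.json())
```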

To check the respective label for the transport nodes, the query below can be used:

Request:

Response:

Hypervisors that act as transport nodes can be visualized in Apstra under the Active tab with the Has Hypervisor = Yes option, as below:

To obtain the respective hostname for the transport nodes, the query below can be used:

Request:

Response:

Hypervisor PNIC

  • MAC address: Physical address attribute of transport node’s interface
  • Switch_id: Switch name attribute of transport node’s transport zone
  • Label: Interface id attribute of transport node’s interface
  • Neighbor_name: System name attribute of transport node’s interface lldp neighbor
  • Neighbor_intf: Name attribute of transport node’s interface lldp neighbor
  • MTU: MTU attribute of transport node’s interface

Physical NICs are selected for the uplink profile dedicated to the overlay network. The NSX-T uplink profile defines the network interface configuration for the pNIC interfaces facing the Apstra fabric in terms of LAG and LACP configuration.

The uplink profile is mapped in the transport node for the links from the NSX-T logical switch of the hypervisor/ESXi hosts; it points towards the top-of-rack switches in the Apstra fabric.

NSX-API Request/Response to check MAC address for the Transport node interfaces.

Request:

Response:

The MAC address shown in the example above is learned on a LAG interface in the Apstra fabric towards the NSX-T transport node. It is the MAC address of the ESXi host pNICs that have a LAG bond towards the ToR leaf devices in the Apstra fabric.

The NSX-API Request/Response below checks the switch name attribute of transport node’s transport zone.

Request:

Response:

The switch ID attribute of the respective transport zone is read by the NSX-T API from NSX Manager, as below:

NSX-API Request/Response to check Transport node’s interface.

Request:

Response:

Transport nodes have the mapping of physical NICs, which are returned as labels in the NSX-T API response above.

Below is the NSX-API Request/Response to check the transport node’s LLDP neighbor system name attribute.

Request:

Response:

Here Leaf1/2 are LLDP neighbors to the Transport nodes.
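
A hedged sketch of how the LLDP neighbor data could be pulled per transport node is shown below. The /api/v1/lldp/transport-nodes/<node-id>/interfaces path and the neighbor field names are assumptions drawn from the NSX-T LLDP API family; verify the exact path and fields for your NSX-T version.

```python
# Sketch: print LLDP neighbors (system name / port) seen on each transport node's pNICs.
# The LLDP endpoint path and response fields are assumptions; check your NSX-T API reference.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")                       # placeholder

def get(path):
    resp = requests.get(f"{NSX_MANAGER}{path}", auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

for node in get("/api/v1/transport-nodes").get("results", []):
    lldp = get(f"/api/v1/lldp/transport-nodes/{node['id']}/interfaces")
    for intf in lldp.get("results", []):
        for neigh in intf.get("neighbors", []):
            # e.g. vmnic2 -> Leaf1 / swp1, which should match the blueprint cabling map
            print(node.get("display_name"), intf.get("interface_name"),
                  "->", neigh.get("system_name"), neigh.get("name"))
```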

To obtain the respective transport node’s LLDP neighbor interface name attribute, the query below can be used:

Request:

Response:

NSX-API Request/Response to check the MTU attribute of Transport node’s interface.

Request:

Response:

An MTU of 1600 or greater is required on any network that carries Geneve overlay traffic. Hence, in the NSX-T reply we see an MTU value of 1600 on the network interfaces towards the transport nodes.

VNIC

  • MAC address: Physical address attribute of transport node’s or VM's Virtual interface
  • Label: VNIC label attribute of transport node
  • Ipv4_addr: IP address attribute of transport node’s virtual interface
  • Traffic_types: It is derived from transport node’s virtual interface type
  • MTU: MTU attribute of transport node’s virtual interface

You can check the VNIC MAC address attribute with the NSX-API Request/Response below. This can be for the transport node’s virtual interface or for the virtual interface of a VM. For transport nodes, under Host Switches, select the virtual NIC that matches the MAC address of the VM NIC attached to the uplink port group.

Request:

Response:

NSX-API Request/Response to check the VNIC label, which signifies the interface ID attribute of the transport node’s virtual interface or the device name attribute of the virtual machine’s virtual interface.

Request:

Response:

Below is the NSX-API Request/Response to check the VNIC IPv4 address, which signifies the IP address attribute of the transport node’s virtual interface or of the logical port’s virtual interface.

Request:

Response:

Here “192.168.1.13” and “192.168.1.12” are the IPv4 addresses of the bridge interface of the host transport nodes, i.e. "nsx-vtep0.0", which acts as the virtual tunnel endpoint (VTEP) of the transport node. Each hypervisor has a virtual tunnel endpoint (VTEP) responsible for encapsulating the VM traffic inside a Geneve header and routing the packet to a destination VTEP for further processing. This can be compared to the VXLAN virtual network anycast gateway VTEP IP.
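
The TEP addresses shown above can also be read from the transport node state API. The sketch below assumes the host_switch_states[].endpoints[].ip layout of GET /api/v1/transport-nodes/<node-id>/state; treat the field names as assumptions to confirm against your NSX-T version.

```python
# Sketch: print the tunnel endpoint (TEP) IPs of each transport node.
# The state-object layout (host_switch_states / endpoints) is an assumption; verify per version.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")                       # placeholder

def get(path):
    resp = requests.get(f"{NSX_MANAGER}{path}", auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

for node in get("/api/v1/transport-nodes").get("results", []):
    state = get(f"/api/v1/transport-nodes/{node['id']}/state")
    for hs in state.get("host_switch_states", []):
        for ep in hs.get("endpoints", []):
            # e.g. 192.168.1.12 / 192.168.1.13 on nsx-vtep0.0
            print(node.get("display_name"), hs.get("host_switch_name"), ep.get("ip"))
```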

NSX-API Request/Response to check traffic types for the transport node’s virtual interface. The traffic type for the transport node can be the overlay type, as per the example below, or the VLAN type. You can add both VLAN and overlay NSX transport zones to the transport nodes.

A VLAN-based transport zone is mainly for uplink traffic. If VMs on different hypervisor hosts need to communicate with each other, the overlay network should be used. It can be compared to a VXLAN virtual network in the Apstra fabric.

Request:

Response:

NSX-API Request/Response to obtain the MTU size for the transport node. The MTU for networks that carry overlay traffic must be 1600 or greater, since they carry Geneve overlay traffic. The N-VDS and the TEP kernel interface should all have the same jumbo-frame MTU (i.e., 1600 or greater).

Request:

Response:

So the virtual interface, i.e. the NSX VTEP, and the vSwitch should have an MTU of 1600, as per the screenshot above.

Port Channel Policy

  • Label: Name attribute of the host switch uplink lag profile
  • Mode: Mode attribute of host switch uplink lag profile
  • Hashing_algorithm: Load balance algorithm attribute of host switch uplink lag profile

An uplink profile is mapped in a Transport node on the NSX-T side with policies for the links from the hypervisor hosts to NSX-T logical switches.

The links from the hypervisor hosts to the NSX-T logical switches can comprise LAG or teaming configuration, which must be tied to physical NICs.
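
The same LAG attributes (name, mode, hashing algorithm) can also be read from the uplink profiles via the management API. The sketch below uses the standard /api/v1/host-switch-profiles endpoint; the lags[].mode and lags[].load_balance_algorithm field names match the UplinkHostSwitchProfile schema as I understand it, so confirm them against your NSX-T API reference.

```python
# Sketch: list uplink (host switch) profiles and their LAG mode / load-balancing algorithm.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")                       # placeholder

resp = requests.get(f"{NSX_MANAGER}/api/v1/host-switch-profiles",
                    auth=AUTH, verify=False)
resp.raise_for_status()

for profile in resp.json().get("results", []):
    # Only uplink profiles carry LAG definitions; other profile types have no "lags" key.
    for lag in profile.get("lags", []):
        print(profile.get("display_name"),
              lag.get("name"),
              lag.get("mode"),                    # LACP mode, e.g. ACTIVE / PASSIVE
              lag.get("load_balance_algorithm"))  # e.g. SRCMAC
```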

NSX-API Request/Response to check the logical switch uplink LAG profile attribute.

Request:

Response:

The uplink profile label can also be matched with the one retrieved from the NSX-T Manager GUI, as below:

Below is NSX-API Request/Response to check the LACP mode attribute for the uplink LAG profile.

Request:

Response:

NSX-API Request/Response to check load balancing algorithm attribute of host switch uplink profile.

Request:

Response:

From the LAG profile screenshot above, it can be validated that it uses the source MAC address based load-balancing algorithm.

Vnet

  • Vn_type: Transport type attribute of transport zone
  • Label: Display name attribute of logical switch
  • switch_label: Switch name attribute of transport zone
  • Vlan: Vlan attribute of logical switch for vlan transport zone
  • Vni: vni attribute of logical switch for overlay transport zone

To obtain the respective transport type attribute of the transport zone, the query below can be used. This mainly signifies the type of traffic for a transport zone, which can be overlay or VLAN.

Request:

Response:

Traffic type can also be identified in NSX-T Manager GUI as below:

NSX-API Request/Response to check the display name of the N-VDS logical switch.

Request:

Response:

Here, as per the API response above, “zz-cvx-nsxt.cvx.2485377892354-2902673742_1000” is the respective logical switch associated with the transport zone.

Below is the NSX-API Request/Response to check VLAN ID attribute of a VLAN based logical switch for the transport zone.

Request:

Response:

Here, in the Apstra fabric, VNI IDs 1000 and 2000 represent such VXLAN virtual networks for east-west L2 stretched traffic. The bridge-backed logical switch on NSX-T should have the same VLAN IDs defined.
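
To cross-check the Vnet attributes above, the sketch below lists the logical switches together with their VLAN or VNI values. /api/v1/logical-switches is a standard management-plane endpoint; as I understand the schema, the vlan field is populated for VLAN-backed switches and vni for overlay-backed switches.

```python
# Sketch: list NSX-T logical switches with their transport zone, VLAN ID and/or VNI.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")                       # placeholder

resp = requests.get(f"{NSX_MANAGER}/api/v1/logical-switches", auth=AUTH, verify=False)
resp.raise_for_status()

for ls in resp.json().get("results", []):
    print(ls.get("display_name"),
          ls.get("transport_zone_id"),
          "vlan:", ls.get("vlan"),   # set for VLAN-backed logical switches
          "vni:", ls.get("vni"))     # set for overlay-backed logical switches
```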

NSX-API Request/Response to check the VNI attribute of the logical switch in NSX-T.

Request:

Response: