
Example: Configuring Virtual Chassis Fabric and VMware NSX for MetaFabric Architecture 2.0

 

The power of SDN enabled through Juniper Networks Virtual Chassis Fabric and VMware NSX allows you to quickly build an enterprise private cloud. You can now build multi-tier applications and deploy them within seconds. Virtual Chassis Fabric provides high performance and an easy-to-use network to support SDN with VMware NSX. There is no need to worry about multicast protocols or spanning tree with Virtual Chassis Fabric, because the entire fabric works like a single, logical switch.

This example shows how to configure a QFX5100-only Virtual Chassis Fabric (VCF) and VMware NSX for MetaFabric Architecture 2.0. For more details on the MetaFabric architecture, see the MetaFabric™ Architecture Virtualized Data Center Design and Implementation Guide and MetaFabric™ Architecture 1.1: Configuring Virtual Chassis Fabric and Network Director 1.6.

Requirements

This example uses the following hardware and software components:

  • Four QFX5100-24Q switches used as the spine layer in the VCF

  • Six QFX5100-48S switches used in the leaf layer in the VCF

  • Junos OS Release 14.1X53-D10 or later for all QFX5100 switches participating in the VCF

  • VMware ESXi 5.5.0.update2-2068190.x86_64

  • VMware vCenter Appliance 5.5.0.20200-2183109_OVF10.ova

  • VMware NSX Manager 6.1.0-2107742.ova

  • VMware Client Integration Plugin 5.5.0.mac64

  • Four servers with Supermicro X9SCM-iiF motherboards, 3.3-GHz Intel Xeon E3-1230 v2 processors, 32 GB of Samsung DDR3-1600 memory, and a 128-GB Crucial M4 SSD

  • A 48-TB Synology RS2414(RP)+ running DSM 5.1 Update 2 as the network-attached storage (NAS) device

Overview and Topology

MetaFabric Architecture 2.0 continues to provide the proper foundation for a virtualized environment that supports virtual machine movement, robust application hosting, and storage in a data center environment. However, this evolving architecture now includes a QFX5100-only VCF and VMware NSX 6.1.0 for virtualization.

The MetaFabric Architecture 2.0 topology used in this example consists of a VCF with 10 members as shown in Figure 1.

Figure 1: MetaFabric Architecture 2.0 Topology

The topology also includes the following components:

  • Two servers for VMware virtualization (ESXi)

  • A separate physical server for VMware vCenter to manage the clusters, virtual machines, and VMware NSX services

  • A physical server to host applications that do not support virtualization

  • A NAS device using the iSCSI protocol, so that each host has adequate storage for VMs and file storage for images and other media

In this example, a QFX5100-only VCF replaces the mixed-mode VCF seen in the MetaFabric Architecture 1.1 solution. As before, the VCF connects directly to servers and storage on the access side (also known as the leaf layer in a VCF), and edge devices on the data center network side (also known as the spine layer in a VCF).

The VCF used in this example is a same-mode fabric that implements four QFX5100-24Q switches in the spine layer and six QFX5100-48S switches in the leaf layer for a total of 10 VCF devices. All server, storage, and network destinations are a maximum of two hops from each other to keep latency to a minimum and application performance to a maximum.

The configuration tasks for MetaFabric Architecture 2.0 integrate the VCF with the VMware NSX software suite. This document assumes that you have already installed your VCF and you are ready to begin configuring it. This document also assumes that you are familiar with VMware vSphere, but still new to the VMware NSX software suite.

To configure the MetaFabric Architecture 2.0 network, perform the following tasks:

  1. Set up your Virtual Chassis Fabric to provide basic IP connectivity.

  2. Configure the VMware NSX Manager and integrate it with the VMware vCenter server.

  3. Configure the VMware NSX components through the VMware vCenter Web client.

  4. Create logical switches inside of VMware NSX to provide the connectivity to the components.

  5. Create and configure a VMware NSX Edge Gateway and a VMware NSX Logical Distributed Router (LDR).

  6. Integrate your Virtual Chassis Fabric with VMware NSX.

Configuring a Virtual Chassis Fabric for MetaFabric Architecture 2.0

The minimal configuration tasks for Virtual Chassis Fabric fall into four areas: VLANs, interfaces, IGMP, and OSPF. One of the benefits of VCF is that you can configure the fabric from the master Routing Engine – a single point of management for all the VCF devices. It is also very easy to configure multicast support in VCF with a single IGMP command. As a result, there is no need to worry about multicast protocols such as Protocol Independent Multicast (PIM).
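
Before you start, you can confirm from the master Routing Engine that all 10 members have joined the fabric. The following operational commands are a minimal sanity check; member numbering and roles depend on how your VCF was provisioned:

user@vcf> show virtual-chassis
user@vcf> show virtual-chassis vc-port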

If you want to provide additional redundancy to the VMware ESXi hosts or physical servers, you can optionally configure link aggregation with LACP (IEEE 802.3ad) between the servers and the VCF, as sketched below. Because VCF works like a single, logical switch, there is no requirement to set up additional protocols, such as multichassis link aggregation (MC-LAG) or Spanning Tree Protocol (STP).
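
The following is a minimal sketch of such an optional LAG, assuming a server is dual-homed to two different leaf members on hypothetical ports xe-6/0/10 and xe-7/0/10; the port names and the ae0 bundle are illustrative only and are not part of the example configuration. The server side must also be configured for LACP.

[edit]
set chassis aggregated-devices ethernet device-count 1
set interfaces xe-6/0/10 ether-options 802.3ad ae0
set interfaces xe-7/0/10 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 mtu 9216
set interfaces ae0 unit 0 family ethernet-switching interface-mode access
set interfaces ae0 unit 0 family ethernet-switching vlan members NSX_UNDERLAY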

This example explains how to configure a VCF to support the MetaFabric Architecture 2.0 solution. It includes the following sections:

Configuring VLANs for the VCF

CLI Quick Configuration

To quickly configure VLANs for the VCF, enter the following configuration statements on the device acting in the master role:

[edit]
set vlans NSX_UNDERLAY vlan-id 15
set vlans NSX_UNDERLAY description "Default VLAN for VMware ESXi hosts and Synology storage"
set vlans NSX_UNDERLAY l3-interface irb.15

Step-by-Step Procedure

To configure VLANs:

  1. Assign VLAN ID 15 to the NSX_UNDERLAY VLAN.
    [edit vlans]
    user@vcf# set NSX_UNDERLAY vlan-id 15
  2. Add a description for the NSX_UNDERLAY VLAN.
    [edit vlans]
    user@vcf# set NSX_UNDERLAY description "Default VLAN for VMware ESXi hosts and Synology storage"
  3. Add interface irb.15 as the Layer 3 IRB interface for the NSX_UNDERLAY VLAN.
    [edit vlans]
    user@vcf# set NSX_UNDERLAY l3-interface irb.15
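
After you commit the configuration, you can optionally confirm the new VLAN from operational mode. This is a minimal check; the listed member interfaces depend on the ports you assign in the next section:

user@vcf> show vlans NSX_UNDERLAY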

Configuring Interfaces for the VCF

CLI Quick Configuration

To quickly configure interfaces for the VCF, enter the following configuration statements on the device acting in the master role:

[edit]
set interfaces irb.15 family inet address 10.0.1.1/24
set interfaces irb.15 mtu 9000
set interfaces lo0.0 family inet address 10.0.0.1/24
set interfaces xe-6/0/4.0 family ethernet-switching interface-mode access
set interfaces xe-6/0/4.0 family ethernet-switching vlan members NSX_UNDERLAY
set interfaces xe-6/0/4.0 mtu 9216
set interfaces xe-7/0/4.0 family ethernet-switching interface-mode access
set interfaces xe-7/0/4.0 family ethernet-switching vlan members NSX_UNDERLAY
set interfaces xe-7/0/4.0 mtu 9216
set interfaces xe-7/0/5.0 family ethernet-switching interface-mode access
set interfaces xe-7/0/5.0 family ethernet-switching vlan members NSX_UNDERLAY

Step-by-Step Procedure

To configure the interfaces:

  1. Configure interface irb.15 as the Layer 3 integrated routing and bridging (IRB) interface for the NSX_UNDERLAY VLAN.

    It acts as the default gateway for all hosts and storage devices.

    [edit interfaces]
    user@vcf# set irb.15 family inet address 10.0.1.1/24
  2. Configure loopback interface lo0.
    [edit interfaces]
    user@vcf# set lo0.0 family inet address 10.0.0.1/24
  3. Configure three interfaces as access ports.
    [edit interfaces]
    user@vcf# set xe-6/0/4.0 family ethernet-switching interface-mode access
    user@vcf# set xe-7/0/4.0 family ethernet-switching interface-mode access
    user@vcf# set xe-7/0/5.0 family ethernet-switching interface-mode access
  4. Assign the three interfaces to the NSX_UNDERLAY VLAN.
    [edit interfaces]
    user@vcf# set xe-6/0/4.0 family ethernet-switching vlan members NSX_UNDERLAY
    user@vcf# set xe-7/0/4.0 family ethernet-switching vlan members NSX_UNDERLAY
    user@vcf# set xe-7/0/5.0 family ethernet-switching vlan members NSX_UNDERLAY
  5. Increase the maximum transmission unit (MTU) beyond the default value of 1,500 bytes.

    Because VXLAN-encapsulated traffic flows between the VMware ESXi servers, you must select a larger MTU to accommodate the outer Ethernet header, outer IP header, UDP header, and VXLAN header (see the overhead breakdown after this procedure). VCF supports jumbo frames, so set the MTU to 9,000 bytes or higher.

    [edit interfaces]
    user@vcf# set irb.15 mtu 9000
    user@vcf# set xe-6/0/4.0 mtu 9216
    user@vcf# set xe-7/0/4.0 mtu 9216
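
For reference, the VXLAN overhead that drives this MTU choice is roughly 50 bytes: a 14-byte outer Ethernet header, a 20-byte outer IPv4 header, an 8-byte UDP header, and an 8-byte VXLAN header. A standard 1,500-byte VM frame therefore needs at least a 1,550-byte underlay MTU, and setting 9,000 bytes or more leaves ample headroom for jumbo frames inside the VMs.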

Configuring IGMP for the VCF

CLI Quick Configuration

VMware NSX uses multicast for flooding broadcast, unknown unicast, and multicast traffic. As a result, you must configure Internet Group Management Protocol (IGMP) when integrating physical servers with the VMware NSX virtual networks, so that the flooding of traffic can extend into the VCF.

To quickly configure IGMP for the VCF, enter the following configuration statements on the device acting in the master role:

[edit]
set protocols igmp interface xe-6/0/4.0
set protocols igmp interface xe-7/0/4.0
set protocols igmp interface irb.15

Step-by-Step Procedure

To configure IGMP:

  1. Configure IGMP on selected interfaces so that the hosts can signal their interest in multicast groups.
    [edit protocols igmp]
    user@vcf# set interface xe-6/0/4.0
    user@vcf# set interface xe-7/0/4.0
    user@vcf# set interface irb.15
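
After you commit this configuration, you can optionally confirm that IGMP is running on these interfaces. This is a minimal check; group memberships appear only after the VMware ESXi hosts begin joining the VXLAN multicast groups:

user@vcf> show igmp interface
user@vcf> show igmp group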

Configuring OSPF for the VCF

CLI Quick Configuration

To quickly configure OSPF for the VCF, enter the following configuration statements on the device acting in the master role:

[edit]
set protocols ospf area 0.0.0.0 interface irb.15
set protocols ospf area 0.0.0.0 interface lo0.0

Step-by-Step Procedure

To configure OSPF:

  1. Configure OSPF on the loopback and IRB interfaces so that the VMs and servers can communicate across the VCF at Layer 3.
    [edit protocols ospf]
    user@vcf# set area 0.0.0.0 interface irb.15
    user@vcf# set area 0.0.0.0 interface lo0.0
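
After you commit this configuration, you can optionally confirm that OSPF is enabled on the irb.15 and lo0.0 interfaces. This is a minimal check; the adjacency with the VMware NSX Edge Gateway forms later, after the gateway Uplink interface is configured:

user@vcf> show ospf interface brief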

Configuring VMware NSX for MetaFabric Architecture 2.0

This portion of the example explains the components required to install and configure VMware NSX to work with the MetaFabric Architecture 2.0 solution. These components include:

  • Integrating the VMware NSX Manager into the VMware vCenter Server. This step provides connectivity so that VMware NSX can be managed through the VMware vCenter Web client.

  • Setting up the basic logical switches, transport zones, and segment IDs for VXLAN.

  • Configuring the VMware NSX Edge Gateway and Logical Distributed Router (LDR) to provide virtual connectivity between the VMware ESXi hosts and the physical network.

This example includes the following sections:

Configuring the ESXi Hosts

Step-by-Step Procedure

Configure the following ESXi hosts:

  1. esxi-01—A Supermicro server that is compatible with VMware software. Configure the VMkernel management IP address for esxi-01 as 10.0.1.140. When you install the VMware NSX components, place the NSX Manager and NSX Edge on this host. When all components have been configured, create an example application on this host with a Web server, an application server, and a database server. All of the servers are deployed in pairs, with one VM per host.
  2. esxi-02—A host that is identical to the esxi-01 host, running on Supermicro hardware. Deploy the VMware NSX Controller and Edge Gateway on this host to balance your network. The other half of the example servers run on this host as well. Configure the VMkernel management IP address for esxi-02 as 10.0.1.141.
  3. vcenter—A separate VMware vCenter server that is used to manage esxi-01 and esxi-02. Although you can run a nested VMware vCenter server on the same hosts that are being managed, it is best to keep them separate to avoid any confusion and reduce troubleshooting in the future. Configure the VMware vCenter server with an IP address of 10.0.1.110.
  4. storage-01—A Synology NAS device. The ESXi hosts esxi-01 and esxi-02 use iSCSI to mount storage remotely on this device. Configure the IP address 10.0.1.40 on this device to provide management and iSCSI connectivity.

Results

In summary, the physical IP address assignments for servers and storage in this example are shown in Table 1.

Table 1: IP Address Assignments

Device       IP Address
esxi-01      10.0.1.140
esxi-02      10.0.1.141
vcenter      10.0.1.110
storage-01   10.0.1.40

A graphical representation of the hosts and appliances is shown in Figure 2.

Figure 2: Virtual Chassis Fabric and VMware NSX IP Addressing

Installing VMware NSX

GUI Step-by-Step Procedure

To install VMware NSX:

  1. Deploy VMware NSX Manager as a new template by logging in to the VMware vCenter Web client, clicking Deploy OVF Template, and specifying the VMware-NSX-Manager-6.1.0-2107742.ova file.

  2. Go through the installation steps to accept the EULA, set a password, and specify a hostname.

  3. For the network settings, configure an IP address of 10.0.1.111 for the VMware NSX Manager.

Integrating VMware NSX Manager

GUI Step-by-Step Procedure

After you deploy the OVF template successfully, the VMware NSX Manager starts automatically. To integrate VMware NSX Manager into your network:

  1. Log in to the Web client at http://10.0.1.111 as shown in Figure 3.

    Figure 3: VMware NSX Manager Web Client Login
  2. Enter the username admin and the same password that you specified when you deployed the OVF template.

  3. Log in to the VMware NSX Manager appliance and integrate it with the VMware vCenter Server.

  4. After you log in, click Manage Application Settings, then select NSX Management Service, and click Configure.

  5. Type the IP address of the VMware vCenter Server, which in this example is 10.0.1.110, click OK, and verify that the status appears as Connected as shown in Figure 4.

    Figure 4: Integrating VMware NSX Manager with VMware vCenter Server

Installing the VMware NSX Controller

GUI Step-by-Step Procedure

To install the VMware NSX Controller:

  1. Log in to the VMware vCenter Web client.

    You should see a new management pane on the left called Networking & Security. This pane is where you provision and manage all VMware NSX tasks.

  2. Install the VMware NSX Controller.

    By default, no controllers are installed as shown in Figure 5.

    Figure 5: VMware NSX Controller Nodes
  3. Install a new VMware NSX Controller by clicking the green + symbol, selecting a cluster and data store for the new VMware NSX Controller appliance, and clicking Next.

  4. Set up an IP address pool to be used for VMware NSX IP address assignments.

    In this case, use the IP range of 10.0.1.200 - 10.0.1.219.

  5. Select the virtual switch that the VMware NSX Controller will use for connectivity.

    This example uses the new distributed virtual switch DPortGroup as shown in Figure 6.

    Figure 6: Adding the VMware NSX Controller
  6. When you have finished entering the resource selection, virtual switch, IP pool, and password, click OK.

    When the VMware NSX Controller is installed correctly, you should see it listed in the NSX Controller nodes section as shown in Figure 7.

    Figure 7: VMware NSX Controller Installed Successfully

Configuring VXLAN Transport

GUI Step-by-Step Procedure

To configure VXLAN transport:

  1. Navigate back to the Networking & Security page, click Installation, click the Host Preparation tab, click the Configure button for the new cluster, and begin the VXLAN transport configuration as shown in Figure 8.

    Figure 8: Host Preparation and Installation
  2. Define which virtual switch the cluster uses for VXLAN networking.

    In this example, select the default distributed virtual switch DSwitch as shown in Figure 9.

    Figure 9: Configure VXLAN Networking

    Set the MTU to at least 1,600 bytes to account for the additional 50 bytes of VXLAN overhead. Use the IP pool that you created earlier for VXLAN networking as well. When you have finished entering these values, click OK.

  3. Add a new transport zone for VXLAN by going back to the Networking & Security page and clicking Logical Network Preparation.

    You should see a tab called Transport Zones.

  4. Click the New Transport Zone button.

    As shown in Figure 10, use the Multicast option for Replication mode so that the VCF can handle the replication and MAC address learning tasks.

    Figure 10: New Transport Zone
    Note

    A transport zone is simply an abstraction that defines how VMware NSX handles MAC address learning. Generally, a single transport zone is sufficient for a small or medium enterprise private cloud. However, if you want to build a scale-out architecture, it is a good idea to create one transport zone per POD.

Configuring a Segment ID

GUI Step-by-Step Procedure

To configure a segment ID:

  1. Add a VXLAN Segment ID and Multicast Address pool.

    As you create new logical switches (VXLANs), the segment ID (VNI) and multicast address are assigned automatically from a pool as shown in Figure 11.

    Figure 11: Segment ID Pool

    In this example, create a segment ID pool in the range of 5000 through 5200. Also, select the check box to enable multicast addressing. The multicast addresses in this example are in the range of 239.1.1.10 through 239.1.1.20.

    Note

    If you plan to implement this feature in a production environment, create a larger multicast address pool than the one shown in this example. When there are more segment IDs than multicast addresses, multiple VNIs share the same multicast group, which widens the scope of flooded traffic.

  2. After you create the segment ID and multicast address pool, you should see a summary as shown in Figure 12.

    Figure 12: VXLAN Segment ID and Multicast Address Allocation

Configuring Logical Switches

GUI Step-by-Step Procedure

Before you create the VMware NSX Edge Gateway and LDR, you need to create the logical switches that the appliances use. You must configure four logical switches as shown in Table 2.

Table 2: Logical Switch Settings

Name                         VNI    Multicast Group   Transport Zone
Uplink Logical Switch        5000   239.1.1.10        Transport Zone 1
Database Logical Switch      5001   239.1.1.11        Transport Zone 1
Application Logical Switch   5002   239.1.1.12        Transport Zone 1
Web Logical Switch           5003   239.1.1.13        Transport Zone 1

These four logical switches enable you to create the logical topology shown in Figure 13. The Uplink Logical Switch is used between the VMware NSX Edge Gateway and the VMware NSX LDR. The database, application, and web logical switches are used by the VMware NSX LDR for the example application, which makes it easy to create a three-tier application with network segmentation.

Figure 13: Logical Topology of Juniper Networks and VMware Components

All of the VMware NSX logical switches are associated with a VNI as shown in Figure 14. Each hypervisor has a VXLAN tunnel endpoint (VTEP), which is responsible for encapsulating VM traffic inside a VXLAN header and routing the packet to the destination VTEP for further processing.

Figure 14: VMware NSX Logical Switches and VTEPs

To configure logical switches:

  1. Navigate back to the Networking & Security page and click Logical Switches as shown in Figure 15.

    Figure 15: Adding New Logical Switches
  2. Add and configure each logical switch as shown in Table 2.

    Do not assign the segment ID or multicast group manually; the segment ID and multicast address pools assign these values automatically for each new logical switch. However, to keep the values the same as shown in Table 2, create the logical switches in the following order:

    1. Uplink Logical Switch

    2. Database Logical Switch

    3. Application Logical Switch

    4. Web Logical Switch

    When you finish this task, you can create the VMware NSX Edge Gateway and LDR using the newly created logical switches.

Configuring the VMware NSX Edge Gateway

GUI Step-by-Step Procedure

Now that the physical topology and addressing are in place, you can begin to implement the logical topology and integrate the VCF with VMware NSX for vSphere. This example requires a logical gateway between the physical network and the logical networks. The gateway acts as a logical edge router that provides routing and security policy between the physical and virtual resources.

The VMware NSX Edge Gateway requires two interfaces. The first interface is an Uplink with an IP address of 10.0.1.112 as shown in Figure 16.

Figure 16: Logical Topology of Juniper Networks and VMware Components

Any traffic that needs to enter or leave the virtual networks created by VMware NSX must transit the VMware NSX Edge Gateway Uplink interface and its security policies. The Uplink interface also runs OSPF so that any virtual networks created by the VMware NSX Logical Distributed Router (LDR) can be advertised to the physical network. For the purposes of this example, use the standard OSPF backbone Area 0 between the irb.15 interface of the VCF and the VMware NSX Edge Gateway Uplink interface.

The second VMware NSX Edge Gateway interface is the Internal interface that connects to the VMware NSX LDR. Configure the Internal interface for OSPF Area 1. Any virtual networks created by the VMware NSX LDR are advertised directly to the Internal interface, and then sent to the VCF.

Table 3 shows the associated values for both the Uplink and Internal interfaces.

Table 3: VMware NSX Edge Gateway Virtual Switches

Interface   Virtual Switch          IP Address      VNI    Multicast Group
Uplink      DPortGroup              10.0.1.112/24   -      -
Internal    Uplink Logical Switch   172.16.1.2/24   5000   239.1.1.10

To configure the VMware NSX Edge Gateway:

  1. Return to the Networking & Security page and click NSX Edges as shown in Figure 17.

    Figure 17: VMware NSX Edges
  2. Click the green + icon to create a new VMware NSX Edge Gateway as shown in Figure 18, give the new appliance a name, and click Next.

    Figure 18: New NSX Edge
  3. Configure the deployment options.

    In this example, use a compact appliance size.

    Note

    Check the VMware NSX documentation to see which appliance size suits your production data center, depending on scale and performance requirements.

  4. Configure the uplink interface, the first of the two VMware NSX Edge Gateway interfaces, by placing it in the DPortGroup as shown in Figure 19.

    The NSX Edge Uplink interface communicates with the VCF.

    Figure 19: Add VMware NSX Edge Interfaces
  5. Click the green + symbol to add a new interface, name the first interface NSX Edge Uplink, and then click the next green + symbol to add a new subnet.

    For this example, you need the uplink interface to use OSPF to connect with the VCF.

  6. To establish base IP connectivity, assign an IP address of 10.0.1.112/24.

  7. Perform the same actions you did in Step 4 to create a second VMware NSX Edge Gateway interface that connects with the south-bound VMware NSX LDR, and call this the Internal interface.

    It must connect to the Uplink Logical Switch that you created earlier, as shown in Figure 20.

    Figure 20: Internal VMware NSX Edge Interface
  8. Click the green + symbol to create a new subnet and configure the IP address as 172.16.1.2/24 (per the VMware NSX logical design in Figure 16).

    This address connects to the VMware NSX LDR, which you will configure in the next procedure.

  9. Deploy the new VMware NSX Edge Gateway.

    After installation, the new VMware NSX Edge Gateway appears as deployed, as shown in Figure 21.

    Figure 21: VMware NSX Edge Deployed
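
At this point, you can optionally confirm from the VCF that the new Uplink interface is reachable. This is a minimal check, using the 10.0.1.112 address assigned earlier:

user@vcf> ping 10.0.1.112 count 3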

Configuring the VMware NSX Logical Distributed Router

GUI Step-by-Step Procedure

To configure the VMware NSX Logical Distributed Router (LDR):

  1. Use the same procedure you used to install the VMware NSX Edge Gateway: return to the Networking & Security page, click NSX Edges, and click the green + symbol to create a new VMware NSX Edge appliance for the VMware NSX LDR.

  2. Add the interfaces according to the information in Table 4 and Table 5.

    Table 4: VMware NSX LDR Virtual Switches

    Interface   Virtual Switch               IP Address        VNI    Multicast Group
    Uplink      Uplink Logical Switch        172.16.1.1/24     5000   239.1.1.10
    vnic10      Database Logical Switch      192.168.10.1/24   5001   239.1.1.11
    vnic11      Application Logical Switch   192.168.20.1/24   5002   239.1.1.12
    vnic12      Web Logical Switch           192.168.30.1/24   5003   239.1.1.13

    Table 5: VMware NSX LDR Interface Settings

    Name                  IP Address     Subnet Prefix Length   Virtual Switch               Type
    LDR1 Uplink           172.16.1.1     24                     Uplink Logical Switch        Uplink
    Database Gateway      192.168.10.1   24                     Database Logical Switch      Internal
    Application Gateway   192.168.20.1   24                     Application Logical Switch   Internal
    Web Gateway           192.168.30.1   24                     Web Logical Switch           Internal

    The database, application, and web gateways are the default gateway addresses for the VMs. The LDR1 Uplink acts as a transit interface to the VMware NSX Edge Gateway for connectivity outside of the VMware NSX environment.

  3. After the interfaces are configured, you should see the interface summary on the Manage tab as shown in Figure 22.

    Figure 22: VMware NSX LDR Interfaces

Configuring Routing Protocols

GUI Step-by-Step Procedure

To configure routing protocols for the VMware NSX network:

  1. Return to the Networking & Security page, click NSX Edges, click each of the VMware NSX Edge appliances, go to Manage > Routing, and set a router ID in the Global Configuration section as shown in Figure 23. Use a unique router ID for each appliance; Figure 23 shows 172.16.1.1 being assigned.

    Figure 23: Setting the Router ID

    This step configures the router ID for the VMware NSX Edge Gateway and VMware NSX LDR.

  2. While at the Manage > Routing section, click OSPF in the navigation bar as shown in Figure 24.

    Figure 24: Configuring OSPF in VMware NSX

    Per the logical design in Figure 16, the OSPF area between the VMware NSX Edge Gateway and the VCF is Area 0 (0.0.0.0). The OSPF area between the VMware NSX Edge Gateway and VMware NSX LDR is Area 1 (0.0.0.1).

  3. For each VMware NSX Edge appliance, click the green + symbol to create an area definition, and assign the appropriate interface to the corresponding OSPF area as shown in Table 6.

    Table 6: VMware NSX OSPF Areas and Interfaces

    VMware NSX Appliance      OSPF Area   OSPF Interface
    VMware NSX Edge Gateway   0.0.0.0     Uplink
    VMware NSX Edge Gateway   0.0.0.1     Internal
    VMware NSX LDR            0.0.0.1     Uplink

Configuring Example Applications

GUI Step-by-Step Procedure

Now that you have configured all the VMware NSX components and the VCF, the final step is to create an example application and integrate it into VMware NSX.

To configure example applications to interact with VMware NSX:

  1. Create six servers and place them into the three logical switches: database, application, and web.

    This example application consists of Debian 7 Linux servers. Simply create new VMs with the settings shown in Table 7.

    Table 7: Example Application VM Settings

    Name     IP Address       Virtual Switch               Host
    db-01    192.168.10.100   Database Logical Switch      esxi-01
    db-02    192.168.10.101   Database Logical Switch      esxi-02
    app-01   192.168.20.100   Application Logical Switch   esxi-01
    app-02   192.168.20.101   Application Logical Switch   esxi-02
    web-01   192.168.30.100   Web Logical Switch           esxi-01
    web-02   192.168.30.101   Web Logical Switch           esxi-02

    Different VMs are placed on different VMware ESXi hosts on purpose. This design ensures that VXLAN works between the VMware ESXi hosts and that multicast MAC address learning occurs on the VCF.

Verification

Confirm that the MetaFabric Architecture 2.0 configuration is working properly.

Verifying Connectivity Between the VMware NSX Edge Gateway and the VMware NSX LDR

Purpose

Confirm that the VMware NSX Edge Gateway and the VMware NSX LDR can reach each other.

Action

After you configure the VMware NSX OSPF settings, test the connectivity by logging in to the console of the VMware NSX Edge Gateway appliance. Use the admin username and the password that you specified during the creation of the appliance. Verify connectivity between the VMware NSX Edge Gateway and the VMware NSX LDR by issuing the ping command as shown in Figure 25.

Figure 25: Test Connectivity Between VMware NSX Edge Appliances

Meaning

If the ping command is successful, connectivity between the VMware NSX Edge Gateway and the VMware NSX LDR is working properly.

Verifying OSPF

Purpose

Confirm that the OSPF configuration is working.

Action

On the VCF, issue the show ospf neighbor command:

user@vcf> show ospf neighbor

On both VMware NSX Edge appliances, issue the show ip ospf neighbor command to verify that the OSPF state is Full/DR.
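
You can also optionally confirm on the VCF that the logical networks advertised by the VMware NSX LDR (192.168.10.0/24, 192.168.20.0/24, and 192.168.30.0/24 in this example) have been learned through OSPF:

user@vcf> show route protocol ospf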

Meaning

If the OSPF state is Full in both the VCF and the VMware NSX Edge appliances, connectivity between the virtual and physical components is working properly.

Verifying Connectivity Between the VCF and the VMware NSX Components

Purpose

Confirm that your VCF and VMware NSX configuration is working.

Action

To verify connectivity between web-01 and db-01, issue the ping command on a client for web-01 as shown in Figure 26.

Figure 26: Ping Between web-01 and db-01

The VMs have full connectivity, but only through the VMware NSX LDR on the local VMware ESXi host. The next step is to verify connectivity through VXLAN and multicast MAC address learning. To verify connectivity between web-01 and db-02, issue the ping command on a client for web-01 as shown in Figure 27.

Figure 27: Ping Between web-01 and db-02

Meaning

When web-01 pings db-02, the traffic is encapsulated in VXLAN and transmitted across the VCF. MAC address learning happens through multicast, and all subsequent unicast traffic is sent directly to the VTEP on the VMware ESXi host esxi-02. Because the pings between web-01 and db-01 were successful, and the pings between web-01 and db-02 were successful, connectivity between the VCF and the VMware NSX components is working properly.
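
As an optional final check on the VCF, you can confirm that the VMware ESXi VTEPs have joined the VXLAN multicast groups assigned from the 239.1.1.10 through 239.1.1.20 pool configured earlier:

user@vcf> show igmp group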