
    Virtualization Overview

    In the MetaFabric 1.0 solution, all compute nodes are installed into a virtual environment running the VMware ESXi 5.1 operating system, the VMware hypervisor architecture that was current when this solution was developed. VMware ESXi provides the foundation for building a reliable data center. ESXi, the vSphere Client, and vCenter Server are the components of the vSphere suite; the ESXi server is the core component, as it is the virtualization server on which all virtual machines (guest operating systems) are installed.

    To install, manage, and access the virtual servers that run on top of an ESXi server, you need another part of the vSphere suite: the vSphere Client or vCenter Server. The vSphere Client runs on a client machine and connects to an ESXi server, allowing administrators to access and manage virtual machines and perform other management tasks.

    The VMware vCenter Server is similar to the vSphere Client, but with substantially more capability. vCenter Server is installed on a Windows or Linux server; in this solution, it is installed on a Windows 2008 server running as a virtual machine (VM). vCenter Server is a centralized management application that lets you manage virtual machines and ESXi hosts from a single point, and the vSphere Client is used to access vCenter Server and, through it, manage the ESXi servers (Figure 1). vCenter Server is required for enterprise features such as vMotion, VMware High Availability, VMware Update Manager, and VMware Distributed Resource Scheduler (DRS). For example, you can easily clone an existing virtual machine by using vCenter Server.

    Figure 1: VMware vSphere Client Manages vCenter Server Which in Turn Manages Virtual Machines in the Data Center

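    Programmatic access follows the same centralized model. The sketch below is illustrative only and is not part of the validated MetaFabric configuration: it uses VMware's pyVmomi Python SDK to connect to a vCenter Server and list the ESXi hosts and virtual machines it manages. The hostname and credentials shown are placeholders.

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        # Placeholder address and credentials -- substitute your own vCenter details.
        context = ssl._create_unverified_context()  # lab use only: skips certificate checks
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="changeme",
                          sslContext=context)
        try:
            content = si.RetrieveContent()
            # A container view walks the vCenter inventory for the requested object types.
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.HostSystem, vim.VirtualMachine], True)
            for obj in view.view:
                kind = "host" if isinstance(obj, vim.HostSystem) else "vm"
                print(kind, obj.name)
            view.DestroyView()
        finally:
            Disconnect(si)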

    In Figure 2, all of the compute nodes are part of a data center, and a VMware HA cluster is configured on the compute nodes. All compute nodes run the ESXi 5.1 OS, which serves as the host operating system for the data center VMs running business-critical applications. With the vSphere Client, you can access the ESXi hosts directly or access the vCenter Server; in this solution, the vSphere Client is used to access the vCenter Server and manage the VMware enterprise features.

    A vSphere Distributed Switch (VDS) functions as a single virtual switch across all associated hosts (Figure 2). This enables you to set network configurations that span all member hosts, allowing virtual machines to maintain a consistent network configuration as they migrate across multiple hosts. Each VDS is a network hub that virtual machines can use: it can forward traffic internally between virtual machines or link to an external network by connecting to physical Ethernet adapters, also known as uplink adapters.

    Each VDS can also have one or more dvPort groups assigned to it. A dvPort group collects multiple ports under a common configuration and provides a stable anchor point for virtual machines connecting to labeled networks. Each dvPort group is identified by a network label, which is unique within the current data center. VLANs, defined by the IEEE 802.1Q standard, enable a single physical LAN segment to be further segmented so that groups of ports are isolated from one another as if they were on physically separate segments. A VLAN ID, which restricts port group traffic to a logical Ethernet segment within the physical network, is optional.
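
    As a concrete illustration of these concepts, the following sketch (pyVmomi again, with hypothetical names) creates a dvPort group on an existing distributed switch and restricts its traffic with a VLAN ID. The dvs variable is assumed to be a vim.DistributedVirtualSwitch object already retrieved from the vCenter inventory, for example with a container view as in the earlier sketch.

        from pyVim.task import WaitForTask
        from pyVmomi import vim

        def add_vlan_portgroup(dvs, name, vlan_id, num_ports=128):
            """Create a dvPort group whose traffic is confined to one VLAN."""
            pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
                name=name,            # the network label, unique per data center
                numPorts=num_ports,
                type="earlyBinding",  # a port is assigned when a VM connects
                defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
                    vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
                        vlanId=vlan_id, inherited=False)))
            WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))

        # Example: isolate application traffic on 802.1Q VLAN 101.
        # add_vlan_portgroup(dvs, "app-tier-vlan101", 101)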

    Figure 2: VMware vSphere Distributed Switch Topology


    VMware vSphere distributed switches can be divided into two logical areas of operation: the data plane and the management plane. The data plane implements packet switching, filtering, and tagging. The management plane is the control structure used by the operator to configure data plane functionality from the vCenter Server. The VDS eases the management burden of configuring per-host virtual switches by treating the network as an aggregated resource: individual host-level virtual switches are abstracted into one large VDS that spans multiple hosts at the data center level. In this design, the data plane remains local to each host, but the management plane is centralized.

    The first step in configuration is to create a vSphere distributed switch on a vCenter Server. After you have created a vSphere distributed switch, you must add hosts, create dvPort groups, and edit vSphere distributed switch properties and policies.
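
    The following sketch outlines these first steps with pyVmomi. It is a simplified example under assumed names (datacenter and host are inventory objects fetched beforehand, and vmnic1 stands for whichever physical uplink NIC is cabled to the access switch), not the validated MetaFabric procedure.

        from pyVim.task import WaitForTask
        from pyVmomi import vim

        def create_dvs(datacenter, host, dvs_name="metafabric-vds"):
            """Create a distributed switch and join one ESXi host to it."""
            # Two named uplink ports per host (dvUplink1/dvUplink2) for the LAG.
            uplinks = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
                uplinkPortName=["dvUplink1", "dvUplink2"])

            # Attach the host, backing the uplinks with one of its physical NICs.
            host_member = vim.dvs.HostMember.ConfigSpec(
                operation="add",
                host=host,
                backing=vim.dvs.HostMember.PnicBacking(
                    pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic1")]))

            config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
                name=dvs_name, uplinkPortPolicy=uplinks, host=[host_member])
            spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=config)

            # The switch is created in the data center's network folder.
            WaitForTask(datacenter.networkFolder.CreateDVS_Task(spec))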

    With the distributed switch feature, VMware vSphere supports provisioning, administering, and monitoring of virtual networking across multiple hosts, including the following functionalities:

    • Central control of the virtual switch port configuration, port group naming, filter settings, and so on.
    • Link Aggregation Control Protocol (LACP) support, which negotiates and automatically configures link aggregation between vSphere hosts and access layer switches.
    • Network health-check capabilities that verify the vSphere network configuration against the physical network configuration.

    Additionally, the distributed switch functionality supports (Figure 2):

    • Distributed port — A port on a vSphere distributed switch that connects to a host’s VMkernel or to a virtual machine’s network adapter.
    • Distributed virtual port groups (dvPortgroups) — Port groups that specify port configuration options for each member port. A dvPortgroup is a set of distributed virtual ports, and configuration is inherited from the distributed switch (dvSwitch) to the dvPortgroup.
    • Distributed virtual uplinks (dvUplinks) — dvUplinks provide a level of abstraction for the physical NICs (vmnics) on each host.
    • Private VLANs (PVLANs) — PVLAN support enables broader compatibility with existing networking environments using the technology.
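
    To see how these objects relate on a live system, the short pyVmomi sketch below (illustrative only; dvs is assumed to be a vim.DistributedVirtualSwitch object from the inventory) prints a distributed switch's dvPort groups, their VLAN IDs where one is set, and its dvUplink names.

        from pyVmomi import vim

        def describe_dvs(dvs):
            """Print the port groups and uplink layout of a distributed switch."""
            print("switch:", dvs.name)
            for pg in dvs.portgroup:  # includes the uplink port group
                vlan = pg.config.defaultPortConfig.vlan
                vlan_id = vlan.vlanId if isinstance(
                    vlan, vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec) else None
                print("  portgroup:", pg.name, "vlan:", vlan_id)
            # dvUplinks abstract the physical vmnics contributed by each member host.
            print("  uplinks:", dvs.config.uplinkPortPolicy.uplinkPortName)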

    Figure 3: VMware vSphere Distributed Switch Topology


    Figure 3 shows two compute nodes running the ESXi 5.1 OS with multiple VMs deployed on the ESXi hosts. Notice that both physical compute nodes are running VMs in this topology and that the vSphere distributed switch (VDS) is virtually extended across all ESXi hosts managed by the vCenter Server. The VDS configuration is centralized on the vCenter Server.

    A LAG bundle is configured between the access switches and the ESXi hosts. As mentioned in the compute node section, a redundant server Node group (RSNG) configuration is required on the QFX3000-M QFabric systems.

    ESXi 5.1 supports LACP for the LAG; LACP can be enabled only through the vSphere Web Client connection to the vCenter Server.

    Note: Link Aggregation Control Protocol (LACP) can only be configured via the vSphere Web Client.

    Published: 2015-04-20