
    Requirements

    The solution requirements that guided testing and validation include:

    • Solution must support lossless Ethernet.
    • Solution must provide redundant network connectivity using all available bandwidth.
    • Solution must support moving a virtual machine (VM) between hosts.
    • Solution must support high availability of a VM.
    • Solution must support virtual network identification using Link Layer Discovery Protocol (LLDP).
    • Solution must provide physical and virtual visibility and reporting of VM movements.
    • Solution must support lossless Ethernet for storage transit.
    • Solution must support booting a server from the storage array.

    The IBM Flex Chassis was selected as the physical compute host. High-level implementation details include:

    • The IBM Flex servers are configured with multiple ESXi hosts that run the VMs hosting the business-critical applications (SharePoint, Exchange, MediaWiki, and WWW).
    • A distributed vSwitch is configured across the physical ESXi hosts running on the IBM Flex servers.

    Topology

    The topology used in the data center compute, virtualization, and storage design is shown in Figure 1.

    Figure 1: Compute and Virtualization as Featured in the MetaFabric 1.0 Solution


    Compute Hardware Overview

    The IBM System x3750 M4 is a 4-socket server with a streamlined design optimized for performance. This solution uses two standalone IBM System x3750 M4 servers as an Infra Cluster, which hosts all infrastructure-related VMs, such as Junos Space, the Firefly Host Security Design VM, and the Virtual Center server. Each IBM System x3750 has two dedicated management ports, which are connected to a management switch as a LAG. Figure 2 shows the IBM System x3750 M4.

    Figure 2: IBM x3750 M4


    Out-of-band (OOB) management must be configured to properly manage the compute hardware featured in this solution; that configuration is covered in this section.

    To configure the IBM System x3750 M4 in the OOB role, follow these steps (a verification sketch follows the procedure):

    1. Configure two LAG interfaces (ae11 and ae12) on the management switch, one for each IBM system.
      [edit]
      set interfaces ge-1/0/44 ether-options 802.3ad ae11
      set interfaces ge-1/0/45 ether-options 802.3ad ae11
      set interfaces ge-1/0/46 ether-options 802.3ad ae12
      set interfaces ge-1/0/47 ether-options 802.3ad ae12
      set interfaces ae11 description "connection to POD1 Standalone server"
      set interfaces ae11 aggregated-ether-options minimum-links 1
      set interfaces ae11 unit 0 family ethernet-switching vlan members Compute-VLAN
      set interfaces ae12 description "connection to POD2 standalone server"
      set interfaces ae12 aggregated-ether-options minimum-links 1
      set interfaces ae12 unit 0 family ethernet-switching vlan members Compute-VLAN
      set vlans Compute-VLAN vlan-id 800
    2. Configure LAG on the IBM system. This configuration step is performed as part of the virtualization configuration section.

      Note: Each server has four 10-Gigabit Ethernet NIC ports connected to the QFX3000-M QFabric system as data ports for all VM traffic. For redundancy, each system connects to a different POD: one IBM System 3750 is connected to POD1 using 4 x 10-Gigabit Ethernet, and the second IBM System 3750 connects to POD2 using 4 x 10-Gigabit Ethernet. The use of LAG provides switching redundancy in case of a POD failure.

    3. Configure POD1 to connect to the IBM System 3750 server. The four data ports are configured as a LAG that carries the VLANs required by the Infra Cluster.
      [edit]
      set interfaces interface-range POD1-Standalone-server member n2:xe-0/0/8
      set interfaces interface-range POD1-Standalone-server member n3:xe-0/0/8
      set interfaces interface-range POD1-Standalone-server member n3:xe-0/0/9
      set interfaces interface-range POD1-Standalone-server member n2:xe-0/0/9
      set interfaces interface-range POD1-Standalone-server ether-options 802.3ad RSNG2:ae0
      set interfaces RSNG2:ae0 description POD1-Standalone-server
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching port-mode trunk
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members MGMT
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Infra
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Exchange
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Wikimedia
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members SQL
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Storage-POD1
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Exchange-Cluster
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members SharePoint
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Tera-VM
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Security-Mgmt
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Vmotion
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members VM-FT
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Remote-Access
    4. Configure POD2 for connection to the second IBM System 3750.
      [edit]
      set interfaces interface-range IBM-Standalone member "n3:xe-0/0/[26-27]"
      set interfaces interface-range IBM-Standalone member "n5:xe-0/0/[26-27]"
      set interfaces interface-range IBM-Standalone ether-options 802.3ad RSNG3:ae1
      set interfaces RSNG3:ae1 description POD2-IBM-Standalone
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching port-mode trunk
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members MGMT
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members Storage-POD2
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members Infra
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members SQL
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members SharePoint
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members Exchange
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members Wikimedia
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members Tera-VM
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members Security-Mgmt
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members Vmotion
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members VM-FT
      set interfaces RSNG3:ae1 unit 0 family ethernet-switching vlan members Remote-Access
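
    After the LAGs are configured, it is worth confirming that both member links of each bundle are up and that the bundles carry the Compute-VLAN. The commands below are a minimal sanity check, not part of the original guide; they are standard Junos operational commands, and because the management LAGs in step 1 are static (no LACP is configured on them), the check is limited to link and switching state:

      show interfaces ae11 terse
      show interfaces ae12 terse
      show ethernet-switching interfaces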

    The MetaFabric 1.0 solution utilizes a second set of compute hardware as well. The IBM Flex System Enterprise Chassis is a 10U next-generation server platform that features integrated chassis management. It is a compact, high-density, high-performance, and scalable rack-mount system. It supports up to 14 one-bay compute nodes that share common resources, such as power, cooling, management, and I/O resources within a single Enterprise chassis. The IBM Flex System can also support up to seven 2-bay compute nodes or three 4-bay compute nodes when the shelves are removed. You can mix and match 1-bay, 2-bay, and 4-bay compute nodes to meet specific hardware needs.

    The major components of the Enterprise Chassis (Figure 3) are:

    • Fourteen 1-bay compute node bays (can also support seven 2-bay or three 4-bay compute nodes with the shelves removed).
    • Six 2500W power modules that provide N+N or N+1 redundant power. Optionally, the chassis can be ordered through the configure-to-order (CTO) process with six 2100W power supplies for N+1 redundant power.
    • Ten fan modules.
    • Four physical I/O modules.
    • A wide variety of networking solutions, including Ethernet, Fibre Channel, FCoE, and InfiniBand.
    • Two IBM Chassis Management Modules (CMMs). The CMM provides single-chassis management support.

    Figure 3: IBM Flex System Enterprise Chassis (Front View)


    The following components can be installed into the rear of the chassis (Figure 4):

    • Up to two CMMs.
    • Up to six 2500W or 2100W power supply modules.
    • Up to six fan modules that consist of four 80-mm fan modules and two 40-mm fan modules.
    • Additional fan modules can be installed for a total of 10 modules.
    • Up to four I/O modules.

    Figure 4: IBM Flex System (Rear View)


    The IBM Flex System includes a Chassis Management Module (CMM). The CMM provides a single point of chassis management as well as the network path for remote keyboard, video, and mouse (KVM) capability for compute nodes within the chassis. The IBM Flex System chassis can accommodate one or two CMMs. The first is installed into CMM Bay 1, the second into CMM bay 2. Installing two CMMs provides control redundancy for the IBM Flex System.

    The CMM provides these functions:

    • Power control
    • Fan management
    • Chassis and compute node initialization
    • Switch management
    • Diagnostics
    • Resource discovery and inventory management
    • Resource alerts and monitoring management
    • Chassis and compute node power management
    • Network management

    The CMM has the following connectors:

    • USB connection: Can be used for insertion of a USB media key for tasks such as firmware updates.
    • 10/100/1000-Mbps RJ45 Ethernet connection: For connection to a management network. The CMM can be managed through this Ethernet port.

    Configuring Compute Switching

    The IBM Flex System also offers modular switching options that enable various levels of switching redundancy, subscription (1:1 versus oversubscribed), and switched or pass-thru modes of operation. The first of these modules used in the solution is the IBM Flex System Fabric CN4093 10Gb/40Gb Converged Scalable Switch, which provides scalability, performance, convergence, and network virtualization, while also delivering capabilities that address a number of networking concerns and help you prepare for the future.

    The switch offers full Layer 2/3 switching and FCoE Full Fabric and Fibre Channel NPV Gateway operations to deliver a converged and integrated solution, and it is installed within the I/O module bays of the IBM Flex System Enterprise Chassis. The switch can help you migrate to a 10-Gb or 40-Gb converged Ethernet infrastructure and offers virtualization features.

    Figure 5: IBM Flex System Fabric CN4093 10Gb/40Gb Converged Scalable Switch


    The CN4093 switch is initially licensed with fourteen 10-GbE internal ports, two external 10-GbE SFP+ ports, and six external Omni Ports enabled.

    The base switch and upgrades are as follows:

    • 00D5823 is the part number for the physical device, which comes with 14 internal 10-GbE ports enabled (one to each node bay), two external 10-GbE SFP+ ports that are enabled to connect to a top-of-rack switch or other devices, and six Omni Ports enabled to connect to either Ethernet or Fibre Channel networking infrastructure, depending on the SFP+ cable or transceiver used.
    • 00D5845 (Upgrade 1) can be applied on the base switch when you need more uplink bandwidth with two 40-GbE QSFP+ ports that can be converted into 4x 10-GbE SFP+ DAC links with the optional break-out cables. This upgrade also enables 14 more internal ports, for a total of 28 ports, to provide more bandwidth to the compute nodes using 4-port expansion cards.
    • 00D5847 (Upgrade 2) can be applied on the base switch when you need more external Omni Ports on the switch or if you want more internal bandwidth to the node bays. The upgrade enables the remaining 6 external Omni Ports, plus 14 more internal 10-Gb ports, for a total of 28 internal ports, to provide more bandwidth to the compute nodes using four-port expansion cards.

    Further ports can be enabled:

    • Fourteen more internal ports and two external 40-GbE QSFP+ uplink ports with Upgrade 1.
    • Fourteen more internal ports and six more external Omni Ports with Upgrade 2.

    Upgrade 1 and Upgrade 2 can be applied to the switch independently of each other, or in combination for full feature capability.

    The CNA module has a management port and a console port. IBM/BNT network devices offer two command-line interface (CLI) modes: IBM NOS (menu-based) mode and ISCLI (Industry Standard CLI) mode. The first time you start the CN4093, it boots into the IBM Networking OS CLI. To switch to the ISCLI, enter the following command and then reset the CN4093:

      Router (config)# boot cli-mode iscli

    The switch retains your CLI selection even when you reset the configuration to factory defaults, because the CLI boot mode is not part of the configuration settings. If you downgrade the switch software to an earlier release, the switch boots into the menu-based CLI; however, it retains the CLI boot mode and restores your CLI choice.

    The second modular switching option deployed as part of the solution is the IBM Flex System EN4091 10 Gb Ethernet Pass-thru Module (Figure 6). The EN4091 10-Gb Ethernet Pass-thru Module offers a one-for-one connection between a single node bay and an I/O module uplink. It has no management interface, and can support both 1-Gbps and 10-Gbps dual-port adapters that are installed in the compute nodes. If quad-port adapters are installed in the compute nodes, only the first two ports have access to the Pass-thru module ports.

    The appropriate 1-GbE or 10-GbE module (SFP, SFP+, or DAC) must also be installed in the external ports of the pass-thru module, matching the speed (1 Gb or 10 Gb) and medium (fiber optic or copper) of the adapter ports on the compute nodes.

    Figure 6: IBM Flex System EN4091 10Gb Ethernet Pass-thru Module


    The EN4091 10Gb Ethernet Pass-thru Module has the following specifications:

    • Internal ports - 14 internal full-duplex Ethernet ports that can operate at 1-Gb or 10-Gb speeds.
    • External ports - 14 ports for 1-Gb or 10-Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC.
    • Unmanaged device that has no internal Ethernet management port; however, it can provide its vital product data (VPD) to the secure management network through the Chassis Management Module.
    • Allows direct connection from the 10-Gb Ethernet adapters that are installed in compute nodes in a chassis to an externally located top-of-rack switch or other external device.

    Note: The EN4091 10-Gb Ethernet Pass-thru Module has only 14 internal ports. As a result, only two ports on each compute node are enabled, one for each of the two modules that are installed in the chassis. If four-port adapters are installed in the compute nodes, ports 3 and 4 on those adapters are not enabled.

    Configuring Compute Nodes

    The Juniper MetaFabric 1.0 solution utilizes the IBM Flex System servers as the primary compute nodes. The lab configuration utilized 5 compute nodes (of a possible 14) in each IBM Pure Flex System. The IBM Flex System portfolio of compute nodes includes Intel Xeon processors and IBM POWER7 processors. Depending on the compute node design, nodes can come in one of these form factors:

    • Half-width node: Occupies one chassis bay, half the width of the chassis (approximately 215 mm or 8.5 in.). An example is the IBM Flex System x220 Compute Node.
    • Full-width node: Occupies two chassis bays side-by-side, the full width of the chassis (approximately 435 mm or 17 in.). An example is the IBM Flex System p460 Compute Node.

    The solution lab utilized the IBM Flex System x220 Compute Node (Figure 7). The x220, machine type 7906, is a next-generation, cost-optimized compute node designed for less demanding workloads and low-density virtualization. A high-availability, scalable platform that balances cost and system features, it offers flexible configuration options and advanced management and is well suited to general business workloads. The x220 is a half-wide compute node and requires the chassis shelf to be installed in the IBM Flex System Enterprise Chassis.

    The x220 features the Intel Xeon E5-2400 series processors, available with 4, 6, or 8 cores per processor and up to 16 threads per socket, and it supports LP DDR3 LRDIMM, RDIMM, and UDIMM memory. The server has two 2.5-inch hot-swap drive bays accessible from the front of the blade server; on standard models, these bays are connected to a ServeRAID C105 onboard SATA controller with software RAID capabilities.

    The applications that are installed on the compute nodes can run natively on a dedicated physical server or they can be virtualized (in a virtual machine that is managed by a hypervisor layer). All the compute nodes are using the VMware ESXi 5.1 operating system as a baseline for virtualization, and all the enterprise applications are running as virtual machines on top of the ESXi 5.1 Operating System.

    Figure 7: IBM Flex System x220 Compute Node


    This solution implementation utilizes two compute PODs. Two IBM Pure Flex Systems are connected to the QFX3000-M QFabric system POD1, and two Flex Systems are connected to POD2 (also utilizing QFX3000-M).

    The POD1 and POD2 topologies use similar hardware to run the virtual servers.

    POD1 includes the following compute hardware:

    • Two IBM Pure Flex Systems with x220 compute nodes
    • IBM Pure Flex Chassis 40Gb CNA Card
    • IBM Pure Flex Chassis 10Gb Pass-thru (P/T) I/O Card

    POD2 includes the following compute hardware:

    • Two IBM Pure Flex Systems with x220 compute nodes
    • IBM Pure Flex Chassis 10Gb CNA Card
    • IBM Pure Flex Chassis 10Gb Pass-thru (P/T) I/O Card

    Before moving on to the virtualization configuration, a short overview of switching operation and configuration in the IBM Flex System is required. Figure 8 shows the POD1 network topology utilizing the IBM Pure Flex System pass-thru (P/T) chassis.

    Figure 8: IBM Pure Flex Pass-thru Chassis


    The IBM Pure Flex pass-thru chassis has four 10-Gb Ethernet I/O cards, each with 14 10-Gb Ethernet ports, one per compute node; each compute node therefore has four physical network adapters. Each module's 14 external network ports are internally linked to the 14 compute nodes through the backplane. For example:

    • Port 1 of I/O module 1 connects to compute node 1
    • Port 1 of I/O module 2 connects to compute node 1
    • Port 1 of I/O module 3 connects to compute node 1
    • Port 1 of I/O module 4 connects to compute node 1

    The 14 compute nodes thus have connectivity to all four I/O modules, and each compute node has four network ports. The four ports are connected to different nodes of a redundant server Node group (RSNG) in the QFX3000-M QFabric system, which provides full redundancy. A LAG is also configured between the servers and the access switches. This configuration utilizes all of the links while providing full redundancy.
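
    For context on the configurations that follow: an RSNG pairs two QFabric node devices so that a LAG can terminate across both of them. The snippet below is a minimal sketch of how such a group might be defined; it is not taken from the guide, the node names are illustrative, and the device count follows the RSNG4 example later in this section:

      [edit]
      set chassis node-group RSNG2 node-device n2
      set chassis node-group RSNG2 node-device n3
      set chassis node-group RSNG2 aggregated-devices ethernet device-count 10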

    The next section shows a sample configuration for connection from the QFX3000-M (POD1) and the QFX3000-M (POD2) to the two pass-thru chassis compute nodes.

    Configuring POD to Pass-thru Chassis Compute Nodes

    To configure the connections between the PODs and the pass-thru chassis compute nodes, follow these steps:

    1. Configure POD 1 (QFabric QFX3000-M). Note that an MTU setting of 9192 is required; enabling jumbo frames in the data center generally improves performance. (The jumbo-frame configuration is discussed in the sketch after step 2.)
      [edit]
      set interfaces interface-range IBM-FLEX-2-CN1-passthrough member "n2:xe-0/0/[30-31]"
      set interfaces interface-range IBM-FLEX-2-CN1-passthrough member "n3:xe-0/0/[30-31]"
      set interfaces interface-range IBM-FLEX-2-CN1-passthrough ether-options 802.3ad RSNG2:ae1
      set interfaces interface-range IBM-FLEX-2-CN2-passthrough member "n3:xe-0/0/[32-33]"
      set interfaces interface-range IBM-FLEX-2-CN2-passthrough member "n2:xe-0/0/[32-33]"
      set interfaces interface-range IBM-FLEX-2-CN2-passthrough ether-options 802.3ad RSNG2:ae2
      set interfaces RSNG2:ae1 description "IBM Flex-2 Passthrough-CN1"
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching port-mode trunk
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members MGMT
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Infra
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Exchange
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Wikimedia
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members SQL
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Storage-POD1
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Exchange-Cluster
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members SharePoint
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Security-Mgmt
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Vmotion
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members VM-FT
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Remote-Access
      set interfaces RSNG2:ae2 description "IBM Flex-2 Passthrough-CN2"
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching port-mode trunk
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members MGMT
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members Infra
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members Exchange
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members Wikimedia
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members SQL
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members Storage-POD1
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members Exchange-Cluster
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members SharePoint
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members Security-Mgmt
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members Vmotion
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members VM-FT
      set interfaces RSNG2:ae2 unit 0 family ethernet-switching vlan members Remote-Access
      set vlans Exchange vlan-id 104
      set vlans Exchange l3-interface vlan.104
      set vlans Exchange-Cluster vlan-id 109
      set vlans Infra vlan-id 101
      set vlans MGMT vlan-id 800
      set vlans Remote-Access vlan-id 810
      set vlans SQL vlan-id 105
      set vlans Security-Mgmt vlan-id 801
      set vlans SharePoint vlan-id 102
      set vlans Storage-POD1 vlan-id 108
      set vlans Storage-POD1 l3-interface vlan.108
      set vlans VM-FT vlan-id 107
      set vlans Vmotion vlan-id 106
      set vlans Wikimedia vlan-id 103
      set vlans Wikimedia l3-interface vlan.103
    2. Configure POD 2 (QFabric QFX3000-M). Note the MTU setting of 9192 applied through the Jumbo-MTU group; enabling jumbo frames in the data center generally improves performance. (A verification sketch follows this step.)
      set groups Jumbo-MTU interfaces <*ae*> mtu 9192
      set interfaces interface-range IBM-FLEX-2-CN1-passthrough member "n3:xe-0/0/[34-35]"
      set interfaces interface-range IBM-FLEX-2-CN1-passthrough member "n5:xe-0/0/[34-35]"
      set interfaces interface-range IBM-FLEX-2-CN1-passthrough ether-options 802.3ad RSNG3:ae0
      set interfaces interface-range IBM-FLEX-2-CN2-passthrough member "n1:xe-0/0/[38-39]"
      set interfaces interface-range IBM-FLEX-2-CN2-passthrough member "n2:xe-0/0/[38-39]"
      set interfaces interface-range IBM-FLEX-2-CN2-passthrough ether-options 802.3ad RSNG2:ae1
      set interfaces RSNG3:ae0 description IBM-FLEX-2-CN-1-Passthrough
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching port-mode trunk
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members MGMT
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members Storage-POD2
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members Infra
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members SQL
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members SharePoint
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members Exchange
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members Wikimedia
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members Security-Mgmt
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members Vmotion
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members VM-FT
      set interfaces RSNG3:ae0 unit 0 family ethernet-switching vlan members Remote-Access
      set interfaces RSNG2:ae1 description IBM-FLEX-2-CN2-passthrough
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching port-mode trunk
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members MGMT
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Storage-POD2
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Infra
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members SQL
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members SharePoint
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Exchange
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Wikimedia
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Security-Mgmt
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Vmotion
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members VM-FT
      set interfaces RSNG2:ae1 unit 0 family ethernet-switching vlan members Remote-Access
      set vlans Exchange vlan-id 104
      set vlans Exchange-cluster vlan-id 109
      set vlans Infra vlan-id 101
      set vlans MGMT vlan-id 800
      set vlans Remote-Access vlan-id 810
      set vlans SQL vlan-id 105
      set vlans SQL l3-interface vlan.105
      set vlans Security-Mgmt vlan-id 801
      set vlans SharePoint vlan-id 102
      set vlans SharePoint l3-interface vlan.102
      set vlans Storage-POD2 vlan-id 208
      set vlans Storage-POD2 l3-interface vlan.208
      set vlans VM-FT vlan-id 107
      set vlans Vmotion vlan-id 106
      set vlans Wikimedia vlan-id 103
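
    Two follow-ups are worth noting here; neither appears in the original procedure. First, the POD1 snippet in step 1 does not repeat the jumbo-frame commands. Applying the same Jumbo-MTU group shown at the top of step 2 would look like the following sketch (the apply-groups step is an assumption based on standard Junos group usage):

      [edit]
      set groups Jumbo-MTU interfaces <*ae*> mtu 9192
      set apply-groups Jumbo-MTU

    Second, the effective MTU on a bundle can be confirmed with a standard operational command, for example:

      show interfaces RSNG3:ae0 | match MTU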

    The MetaFabric 1.0 solution also utilizes the 40-Gb Ethernet CNA I/O module (in POD1). A short overview of the operation and configuration of this module is required. Figure 9 shows the POD1 network topology utilizing the IBM Pure Flex System Chassis with the 40-Gb Ethernet CNA I/O module.

    Figure 9: POD1 Topology with the IBM Pure Flex Chassis + 40Gbps CNA Module


    Figure 9 uses IBM Pure Flex System Compute Node 1 as an example; all compute nodes in an IBM Pure Flex System using the 10-Gb or 40-Gb CNA modules have similar physical connectivity. The I/O module switch is integrated into the IBM Pure Flex System, so when you look at the chassis, only the EXT ports are physically visible; the INT ports connect to the CNA fabric switch I/O module through the backplane. The EXT ports connect externally to the QFX3000-M QFabric system. An Ethernet LAG is also configured between QFX POD1 and the compute nodes in POD1: EXT ports 3 and 7 on each I/O module connect to Nodes 6 and 7 of the QFX3000-M QFabric system, and the RSNG4:ae0 LAG is created between the I/O module switches and the QFX3000-M QFabric system.

    Without a license, the CNA module has one network port (the INTA port) per compute node, which is internally linked to an external (EXT) port through the chassis backplane. As shown in the example, Compute Node 1 sees only one network port (INTA), and that port is visible only to the VMware ESXi hypervisor running on the compute node. The EXT ports connect to external switches, where physical cables run to the next layer of switching. After you install the advanced license for the 40-Gb CNA fabric switch I/O module, an additional internal port is activated, and each I/O module then presents two ports to the compute node.

    For instance, Compute Node 1 then has two ports (INTA1 and INTB1) on each 40-Gb CNA fabric switch I/O module; with two such modules, Compute Node 1 has four internal network ports. The second module uses the same port-naming convention (INTA1 and INTB1 exist on both I/O Module 1 and I/O Module 2). Once the expanded license is installed, external ports EXT3 through EXT6 combine into a single 40-Gb EXT3 port, and EXT7 through EXT10 combine into a single 40-Gb EXT7 port, on each CNA fabric switch I/O module.

    Note: Simply creating an RSNG between the CNA fabric switch and the QFX3000-M QFabric system is not an effective configuration. Because the two I/O modules in the CNA fabric switch operate independently, configuring the LAG/RSNG only on the QFX3000-M QFabric system can cause intermittent packet loss. To resolve this issue, a LAG must also be configured on the CNA fabric switch I/O modules; a QFabric-side check is sketched below.
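
    One way to detect this condition from the QFabric side is to check the LACP state of the bundle members; if the two I/O modules do not present themselves as a single LACP partner, some member links typically fail to reach the collecting/distributing state. The following standard Junos commands (a sketch, not part of the original guide) show the LACP state and counters for the bundle used in this example:

      show lacp interfaces RSNG4:ae0
      show lacp statistics interfaces RSNG4:ae0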

    Configuration of LAG on the CNA fabric switches is covered below. In this solution example, the EXT1 and EXT2 ports of CNA fabric switch I/O Modules 1 and 2 are cross-connected; on the CNA fabric switches, this link is referred to as an ISL. The ISL is the key piece of configuration required for the LAG to work efficiently on both the internal and external sides. LAG is configured on the INT and EXT ports using LACP, as a trunk carrying multiple VLANs of application traffic.

    Configuring the CNA Fabric Switches

    To configure LAG on the CNA Fabric Switches, follow these steps:

    1. Configure Fabric Switch I/O Module 1 on the IBM Pure Flex System 40-Gbps CNA.
      interface port INTA1
          tagging
          exit
      !
      interface port INTA2
          tagging
          exit
      !
      interface port INTA3
          tagging
          exit
      !
      interface port INTA4
          tagging
          exit
      !
      interface port INTA5
          tagging
          exit
      !
      interface port INTB1
          tagging
          exit
      !
      interface port INTB2
          tagging
          exit
      !
      interface port INTB3
          tagging
          exit
      !
      interface port INTB4
          tagging
          exit
      !
      interface port INTB5
          tagging
          exit
      !
      interface port EXT1
          tagging
          exit
      !
      interface port EXT2
          tagging
          exit
      !
      interface port EXT3
          tagging
          exit
      !
      interface port EXT7
          tagging
          exit
      !
      interface port INTA1
          lacp mode active
          lacp key 1001
      !
      interface port INTA2
          lacp mode active
          lacp key 1002
      !
      interface port INTA3
          lacp mode active
          lacp key 1003
      !
      interface port INTA4
          lacp mode active
          lacp key 1004
      !
      interface port INTA5
          lacp mode active
          lacp key 1005
      !
      interface port INTB1
          lacp mode active
          lacp key 1001
      !
      interface port INTB2
          lacp mode active
          lacp key 1002
      !
      interface port INTB3
          lacp mode active
          lacp key 1003
      !
      interface port INTB4
          lacp mode active
          lacp key 1004
      !
      interface port INTB5
          lacp mode active
          lacp key 1005
      !
      interface port EXT1
          lacp mode active
          lacp key 200
      !
      interface port EXT2
          lacp mode active
          lacp key 200
      !
      interface port EXT3
          lacp mode active
          lacp key 1000
      !
      interface port EXT7
          lacp mode active
          lacp key 1000
      !
      vlan 101
          enable
          name "INFRA"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 102
          enable
          name "SharePoint"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 103
          enable
          name "WM"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 104
          enable
          name "EXCHANGE"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 105
          enable
          name "SQL"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 106
          enable
          name "Vmotion"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 107
          enable
          name "VM-FT"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 108
          enable
          name "Storage-iSCSI"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 109
          enable
          name "Exchange DAG"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 800
          enable
          name "VDC Mgmt"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 801
          enable
          name "Security-MGMT"
          member INTA1-INTA5,INTB1-INTB5,EXT1-EXT3,EXT7
      !
      vlan 4094
          enable
          name "VLAN 4094"
          member EXT1-EXT2
      !
      vlag enable
      vlag tier-id 10
      vlag isl vlan 4094
      vlag isl adminkey 200
      vlag adminkey 1000 enable
      vlag adminkey 1005 enable
      vlag adminkey 1003 enable
      vlag adminkey 1001 enable
      vlag adminkey 1002 enable
      vlag adminkey 1004 enable

      Note: A similar configuration is required on CNA fabric switch I/O Module 2, because each I/O module integrated into the single IBM Pure Flex System is configured separately. In this configuration, INTA1 and INTB1 use LACP key 1001 to create a LAG, EXT3 and EXT7 use LACP key 1000, and EXT1 and EXT2 use LACP key 200, for a total of three LAGs. EXT1 and EXT2 act as the ISL and carry traffic for LACP LAGs 1000 and 1001 (both internal and external traffic).

    2. Configure QFX3000-M QFabric System connectivity to the IBM Pure Flex System 40Gb CNA I/O Module.
      [edit]
      set chassis node-group RSNG4 node-device n6 pic 1 xle port-range 4 15
      set chassis node-group RSNG4 node-device n7 pic 1 xle port-range 4 15
      set chassis node-group RSNG4 aggregated-devices ethernet device-count 10
      set interfaces interface-range IBM-FLEX-1-40G-IO-1-2-VLAG member n6:xle-0/1/6
      set interfaces interface-range IBM-FLEX-1-40G-IO-1-2-VLAG member n7:xle-0/1/6
      set interfaces interface-range IBM-FLEX-1-40G-IO-1-2-VLAG member n6:xle-0/1/8
      set interfaces interface-range IBM-FLEX-1-40G-IO-1-2-VLAG member n7:xle-0/1/8
      set interfaces interface-range IBM-FLEX-1-40G-IO-1-2-VLAG ether-options 802.3ad RSNG4:ae0
      set interfaces RSNG4:ae0 description "40G CNA to IBM-FLEX-1-IO-1"
      set interfaces RSNG4:ae0 mtu 9192
      set interfaces RSNG4:ae0 aggregated-ether-options lacp active
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching port-mode trunk
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members MGMT
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members Infra
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members Exchange
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members Wikimedia
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members SQL
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members Storage-POD1
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members Exchange-Cluster
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members SharePoint
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members Security-Mgmt
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members Vmotion
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members VM-FT
      set interfaces RSNG4:ae0 unit 0 family ethernet-switching vlan members Remote-Access

      Note: In this configuration, two node devices (n6 and n7) are part of Node group RSNG4. Four xle (40-Gigabit Ethernet) ports are configured in a LAG as RSNG4:ae0 with LACP active. RSNG4:ae0 is configured as a trunk carrying multiple VLANs.
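
    To confirm the result, a short check along these lines can be used (standard Junos operational commands; not part of the original procedure). All four xle members should be active in the bundle, and the trunk should list every VLAN configured above:

      show interfaces RSNG4:ae0 terse
      show vlans
      show ethernet-switching interfaces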

    The MetaFabric 1.0 solution also utilizes the IBM Pure Flex System Chassis with the 10-Gb Ethernet CNA I/O Module (in POD2). A short overview of the operation and configuration of this module is required. Figure 10 shows the POD2 network topology utilizing the IBM Pure Flex System Chassis with the 10-Gb Ethernet CNA I/O Module.

    Figure 10: POD 2 Topology Using the IBM Pure Flex System Chassis with the 10-Gbps CNA I/O Module


    EXT ports 1 and 2 of I/O Modules 1 and 2 are connected to each other, creating an interswitch link (ISL) between the two I/O modules that lets both act as a single switch. EXT ports 11 through 16 are connected to the QFX3000-M QFabric PODs. POD2 also has an RSNG node group connected to the servers; Figure 10 shows an example of three RSNG node groups in a QFX3000-M QFabric system connected to an IBM Pure Flex System chassis with a 10-Gb CNA I/O module. This configuration was used only for Compute Node 1. Configuration details for the compute node connected to POD2 follow.

    Configuring the 10Gb CNA Module Connections

    To configure the IBM 10Gb CNA module connectivity to POD 2, follow these steps:

    1. Configure the CNA fabric switch I/O module on the IBM Pure Flex System 10-Gb CNA.
      interface port INTA1
          tagging
          exit
      !
      interface port INTA2
          rmon
          tagging
          exit
      !
      interface port INTA3
          tagging
          exit
      !
      interface port INTA4
          tagging
          exit
      !
      interface port INTA5
          tagging
          exit
      !
      interface port EXT1
          tagging
          pvid 4094
          exit
      !
      interface port EXT2
          tagging
          pvid 4094
          exit
      !
      interface port EXT11
          tagging
          exit
      !
      interface port EXT12
          tagging
          exit
      !
      interface port EXT13
          tagging
          exit
      !
      interface port EXT14
          tagging
          exit
      !
      interface port EXT15
          tagging
          exit
      !
      interface port EXT16
          tagging
          exit
      !
      vlan 101
          enable
          name "Infra"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 102
          enable
          name "SharePoint"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 103
          enable
          name "WikiMedia"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 104
          enable
          name "Exchange"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 105
          enable
          name "SQL"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 106
          enable
          name "Vmotion"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 107
          enable
          name "FT"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 108
          enable
          name "Storage-iSCSI"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 800
          enable
          name "VDC Mgmt"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 801
          enable
          name "Security-Mgmt"
          member INTA1-INTA5,EXT1-EXT2,EXT11-EXT16
      !
      vlan 4094
          enable
          name "VLAN 4094"
          member EXT1-EXT2
      !
      interface port INTA1
          lacp mode active
          lacp key 1001
      !
      interface port INTA2
          lacp mode active
          lacp key 1002
      !
      interface port INTA3
          lacp mode active
          lacp key 1003
      !
      interface port INTA4
          lacp mode active
          lacp key 1004
      !
      interface port INTA5
          lacp mode active
          lacp key 1005
      !
      interface port EXT1
          lacp mode active
          lacp key 200
      !
      interface port EXT2
          lacp mode active
          lacp key 200
      !
      interface port EXT11
          lacp mode active
          lacp key 1000
      !
      interface port EXT12
          lacp mode active
          lacp key 1000
      !
      interface port EXT13
          lacp mode active
          lacp key 1000
      !
      interface port EXT14
          lacp mode active
          lacp key 1000
      !
      interface port EXT15
          lacp mode active
          lacp key 1000
      !
      interface port EXT16
          lacp mode active
          lacp key 1000
      !
      vlag enable
      vlag tier-id 10
      vlag isl vlan 4094
      vlag isl adminkey 200
      vlag adminkey 1000 enable
      vlag adminkey 1001 enable
      vlag adminkey 1002 enable
      vlag adminkey 1003 enable
      vlag adminkey 1004 enable
      vlag adminkey 1005 enable
    2. Configure QFX3000-M QFabric System connectivity to the IBM Pure Flex System 10-Gb CNA I/O Module.
      [edit]
      set interfaces interface-range IBM-FLEX-1-10G-CNA-IO-1-2-VLAG member "n1:xe-0/0/[24-27]"
      set interfaces interface-range IBM-FLEX-1-10G-CNA-IO-1-2-VLAG member "n2:xe-0/0/[30-31]"
      set interfaces interface-range IBM-FLEX-1-10G-CNA-IO-1-2-VLAG ether-options 802.3ad RSNG2:ae0
      set interfaces RSNG2:ae0 description IBM-FLEX-1-10G-CNA
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching port-mode trunk
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members MGMT
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Storage-POD2
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Infra
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members SQL
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members SharePoint
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Exchange-cluster
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Exchange
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Wikimedia
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Security-Mgmt
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Vmotion
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members VM-FT
      set interfaces RSNG2:ae0 unit 0 family ethernet-switching vlan members Remote-Access
      set vlans Exchange vlan-id 104
      set vlans Exchange-cluster vlan-id 109
      set vlans Infra vlan-id 101
      set vlans MGMT vlan-id 800
      set vlans Remote-Access vlan-id 810
      set vlans SQL vlan-id 105
      set vlans SQL l3-interface vlan.105
      set vlans Security-Mgmt vlan-id 801
      set vlans SharePoint vlan-id 102
      set vlans SharePoint l3-interface vlan.102
      set vlans Storage-POD2 vlan-id 208
      set vlans Storage-POD2 l3-interface vlan.208
      set vlans VM-FT vlan-id 107
      set vlans Vmotion vlan-id 106
      set vlans Wikimedia vlan-id 103
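
    As a final check (a sketch using standard Junos operational commands, not taken from the guide), verify that the 10-Gb CNA bundle is up and that the Layer 3 VLAN interfaces defined above are present:

      show interfaces RSNG2:ae0 terse
      show vlans Storage-POD2
      show interfaces vlan.208 terse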

    Published: 2015-04-20