
    Solution Design

    This section explains the compute resources, network infrastructure, and storage components required to implement the MetaFabric 1.0 solution. It also discusses the software applications, high availability, class of service, security, and network management components of this solution.

    The purpose of the data center is to host business-critical applications for the enterprise. Each role in the data center is designed and configured to ensure the highest quality user experience possible. All of the functional roles within the data center exist to support the applications in the data center.


    Compute and Virtualization

    In the compute area, you need to select the physical and virtual components that host your business-critical applications, network management, and security services. This selection includes careful evaluation of VMs, servers, hypervisor switches, and blade switches.

    Virtual Machines

    A virtual machine (VM) is a virtual computer made up of a guest operating system and applications. A hypervisor is software that runs on a physical server and emulates physical hardware for VMs. The VM operates on the emulated hardware of the hypervisor and behaves as if it were running on dedicated, physical hardware. This layer of abstraction provides a consistent presentation of hardware to the operating system: regardless of changes to the underlying physical hardware, the operating system sees the same set of logical hardware. This enables operators to make changes to the physical environment without causing issues for the servers hosted in the virtual environment, as seen in Figure 1.

    Figure 1: Virtual Machine Design


    Virtualization also enables flexibility that is not possible on physical servers. Operating systems can be migrated from one set of physical hardware to another with very little effort. Complete environments, including the operating system and installed applications, can be cloned in a virtual environment, enabling complete backups of the environment; in some cases, you can clone or re-create identical servers on different physical hardware for redundancy or mobility purposes. These clones can be activated upon primary VM failure, providing an easy layer of redundancy at the data center application layer. Because new operating systems can be created from clones very quickly, cloning also enables faster service rollouts and faster time to revenue for new services.


    Servers

    The server in the virtualized IT data center is the physical compute resource that hosts the VMs, offering processing power, storage, memory, and I/O services. The hypervisor is installed directly on the server without any host operating system, acting as a bare-metal operating system that provides the framework for virtualization in the data center.

    Because the server hosts the revenue-generating portion of the data center (the VMs and resident applications), redundancy is essential at this layer. A virtualized IT data center server must support full hardware redundancy, management redundancy, in-service software upgrades, hot swapping of power supplies, cooling, and other components, and the ability to combine multiple server or blade chassis into a single, logical management plane.

    The server chassis must be able to provide transport between the physical hardware and virtual components, connect to hosts through 10-Gigabit Ethernet ports, use 10-Gigabit or 40-Gigabit Ethernet interfaces to access the POD, consolidate storage, data, and management functions, provide class of service, reduce the need for physical cables, and provide active/active forwarding.

    Figure 2: Server Design


    As seen in Figure 2, this solution includes 40-Gigabit Ethernet connections between QFabric system redundant server Node groups and IBM Flex servers that host up to 14 blade servers. Other supported connection types include 10-Gigabit Ethernet oversubscribed ports and 10-Gigabit Ethernet pass-through ports. The solution also has two built-in switches per Flex server and uses MC-LAG to keep traffic flowing through the data center.

    Hypervisor Switching

    The hypervisor switch is the first hop from the application servers in the MetaFabric 1.0 architecture. Virtual machines connect to a distributed virtual switch (dvSwitch), which maps a set of physical network interface cards (pNICs) across a set of physical hosts into a single logical switch that can be centrally managed by a virtualization orchestration tool such as VMware vCenter (Figure 3). The dvSwitch enables traffic between VMs in the same switching domain to pass locally, without leaving the blade server or virtual environment. The dvSwitch also acts like a Virtual Chassis: it connects multiple ESXi hosts simultaneously and offers port group functionality (similar to a VLAN) to provide access between VMs.

    Figure 3: VMware Distributed Virtual Switch


    Local VM-to-VM switching poses an interesting security challenge on the hypervisor switch, as traditional, appliance-based firewalls have no visibility into the hypervisor switching environment. In cases where restrictions must be placed on VM-to-VM traffic, security software can be installed on the hypervisor to perform firewall functions between VMs.

    The hypervisor switch is a critical piece of the MetaFabric 1.0 architecture. As such, it should support functions that enable class of service and SLA attainment. Support for IEEE 802.1p is required to support class of service. Support for link aggregation of parallel links (IEEE 802.3ad) is also required to ensure redundant connection of VMs. As in the other switching roles, support for SLA attainment is also a necessity at this layer. The hypervisor switch should support SNMPv3, flow accounting and statistics, remote port mirroring, and centralized management and reporting to ensure that SLAs can be measured and verified.

    To complete the configuration for the hypervisor switch, provide class of service for the IP storage, vMotion, management, fault tolerance, and VM traffic flows. As shown in Figure 4, this solution implements the following allocations for network input/output (I/O) control shares: IP storage (33.3 percent), vMotion (33.3 percent), management (8.3 percent), fault tolerance (8.3 percent), and VM traffic (16.6 percent). These allocations are tuned for the traffic mix at the server level.

    Figure 4: VMware Network I/O Control Design

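    Network I/O Control expresses these allocations as relative share values rather than percentages. The exact share numbers are not given in this guide, but the following sketch shows one assignment (the values 100/100/25/25/50 are an assumption for illustration) that yields the ratios above:

```
Traffic type       Shares   Percentage of total (300 shares)
IP storage          100     100 / 300 ≈ 33.3%
vMotion             100     100 / 300 ≈ 33.3%
Management           25      25 / 300 ≈  8.3%
Fault tolerance      25      25 / 300 ≈  8.3%
Virtual machine      50      50 / 300 ≈ 16.6%
```

    Note that shares determine bandwidth division only during congestion; when the physical uplinks are uncongested, any traffic category can use the available bandwidth.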

    Blade Switching

    The virtualized IT data center features virtual appliances that are often hosted on blade servers, which support multiple interchangeable processing blades and can therefore host large numbers of VMs. The blade server includes power and cooling modules as well as input/output (I/O) modules that enable Ethernet connections into the blade server (Figure 5). Blade switching is performed between the physical Ethernet port on the I/O module and the internal Ethernet port on the blade. Some blade servers use a 1:1 subscription model (one physical port connects to one blade), called pass-through switching, in which each external Ethernet port connects directly to a specific blade through an internal Ethernet port. The pass-through model allows full line-rate bandwidth to each blade without oversubscription. The downside of this approach is often a lack of flexibility in VM mobility and provisioning, because VLAN interfaces must be moved on both the physical switch and the blade switch when a move is required.

    Figure 5: Sample Blade Switch, Rear View


    Another mode of blade switch operation is oversubscription to the blade servers. In this type of blade server, there may be only 4 external ports connecting internally to 12 separate blade servers, resulting in 3:1 oversubscription (three internal ports for every external port). The benefit of this mode is that it minimizes the number of connected interfaces and the access switch cabling per blade server, although the performance of oversubscribed links and their connected VMs can degrade as a result. While this architecture is designed for data centers that utilize blade servers, the design works just as well in data centers that do not use blade servers to host VMs.

    Table 1 shows that both pass-through blade switches and oversubscribed blade switches are acceptable choices for this solution in your data center network. In some cases, you might need the faster speed provided by 40-Gigabit Ethernet connections to support newer equipment; in others, you might prefer the line-rate performance offered by a pass-through switch. As a result, all three blade switch types are supported in this design.

    Table 1: Comparison of Pass-Through Blade Servers and Oversubscribed Blade Servers

    Table 1 compares three blade switch options (pass-through switch, 10-Gigabit Ethernet chassis switch, and 40-Gigabit Ethernet chassis switch) against five solution requirements: 10-Gigabit Ethernet host interfaces, 40-Gigabit Ethernet uplink interfaces, consolidation of storage, data, and management, class of service, and cable reduction. For cable reduction, the 10-Gigabit Ethernet chassis switch achieves a 12:14 reduction and the 40-Gigabit Ethernet chassis switch achieves a 2:14 reduction.


    To provide support for compute and virtualization in the virtualized IT data center, this solution uses:

    • Virtual machines—VMs running Windows and applications, such as Microsoft SharePoint, Microsoft Exchange, and MediaWiki
    • Servers—IBM x3750 and IBM Flex System chassis
      • Configure an IBM Flex System server with multiple ESXi hosts supporting all the VMs running business-critical applications (SharePoint, Exchange, and MediaWiki).
      • Configure a distributed vSwitch between multiple physical ESXi hosts configured on the IBM servers.
    • Hypervisor—VMware vSphere 5.1 and vCenter
    • Blade switches—IBM EN4091 and CN4093

    This design for the compute and virtualization segment of the data center meets the requirements of this solution for workload mobility and migration for VMs, location independence for VMs, VM visibility, high availability, fault tolerance, and centralized virtual switch management.


    Network

    The network is often the main focus of the data center because it is built to pass traffic to, from, and between application servers hosted in the data center. Given the criticality of this architectural role and the various tiers within the data center switching block, the network role is further broken up into access switching, aggregation switching, core switching, edge routing, and WAN connectivity. Each segment within the data center switching role has unique design considerations that relate back to business criticality, SLA requirements, redundancy, and performance. It is within the data center switching architectural roles that the network must be carefully designed to ensure that your data center equipment purchases maximize network scale and performance while minimizing costs.

    Access and Aggregation

    The access layer consists of physical switches that connect to servers and end hosts. Access switching typically focuses on Layer 2 switching, but can include Layer 3 components (such as IRB) to support more robust VM mobility. Access switching should also support high availability. In a multi-chassis or Virtual Chassis environment, where multiple physical switches can be combined to form a single, logical switch, redundancy can be achieved at the access layer. This type of switch architecture is built with control plane redundancy, MC-LAG, and the ability to upgrade individual switches while they are in service. Additionally, the access switching role should support storage traffic, that is, the ability to pass storage data over Ethernet via iSCSI and Fibre Channel over Ethernet (FCoE). Data Center Bridging (DCB) should also be supported by the access switching role to enable full support of storage traffic. Within DCB, priority-based flow control (PFC), enhanced transmission selection (ETS), and Data Center Bridging Capability Exchange (DCBX) should be supported, as these features enable storage traffic to pass properly between all servers and storage devices within a data center segment.

    The aggregation switch acts as a multiplexing point between the access and the core of the data center. The aggregation architectural role serves to combine a large number of smaller interfaces from the access into high bandwidth trunk ports that can be more easily consumed by the core switch. Redundancy should be a priority in the design of the aggregation role as all Layer 2 flows between the data center and the core switch are combined and forwarded by the data center aggregation switch role. At this layer, a switching architecture that supports the combination of multiple switches into a single, logical system with control and forwarding plane redundancy is recommended. This switching architecture enables redundancy features such as MC-LAG, loop-free redundant paths, and in-service software upgrades to enable data center administrators to consistently meet and exceed SLAs.

    One recommendation is to combine the access and aggregation layers of your network by using a QFabric system. Not only does a QFabric system offer a single point of provisioning, management, and troubleshooting for the network operator, it also collapses switching tiers for any-to-any connectivity, provides lower latency, and enables all access devices to be only one hop away from one another, as shown in Figure 6.

    Figure 6: Juniper Networks QFabric Systems Enable a Flat Data Center Network


    To implement the access and aggregation switching portions of the virtualized IT data center, this solution uses the QFX3000-M QFabric system. There are two QFabric systems (POD1 and POD2) in this solution to provide performance and scale. The QFabric PODs support 768 ports per POD and feature low port-to-port latency, a single point of management per POD, and lossless Ethernet to support storage traffic. The use of predefined POD configurations enables the enterprise to more effectively plan data center rollouts by offering predictable growth and scale in the solution architecture. Key configuration steps include:

    • Configure the QFX3000-M QFabric systems with 3 redundant server Node groups (RSNGs) connected to 2 IBM Flex System blade servers to deliver application traffic.
      • The first IBM Flex System server uses a 40-Gigabit Ethernet converged network adapter (CNA) connected to a QFabric system RSNG containing QFX3600 Node devices (RSNG4).
      • The second IBM Flex System server has 10-Gigabit Ethernet pass-through modules connected to RSNG2 and RSNG3 on the second QFabric system.
    • Connect the EMC VNX storage platform to the QFabric systems for storage access using iSCSI and NFS.
    • Connect the QFabric systems with the EX9214 core switch by way of a network Node group containing 2 Node devices which use four 24-port LAGs configured as trunk ports.
    • Configure OSPF in the PODs (within the QFabric system network Node group) toward the EX9214 core switch and place these connections in Area 10 as a totally stubby area.
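    In Junos OS, a totally stubby area is a stub area with summary LSAs suppressed on the area border router. The following sketch shows how the configuration described above might look; the interface names are assumptions for illustration:

```
[edit protocols ospf]
# On the QFabric network Node group (an internal router in the stub area):
area 0.0.0.10 {
    stub;
    interface ae0.0;                  # LAG toward the EX9214 core (name assumed)
}

# On the EX9214 core switch (the ABR), suppress summaries and inject a default route:
area 0.0.0.10 {
    stub default-metric 10 no-summaries;
    interface ae10.0;                 # LAG toward the POD (name assumed)
}
```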

    Core Switching

    The core switch is often configured as a Layer 3 device that handles routing between various Layer 2 domains in the data center. A robust implementation of the core switch in the virtualized IT data center will support both Layer 2 and Layer 3 to enable a full range of interoperability and service provisioning in a multitenant environment. Much like in the edge role, the redundancy of core switching is critical as it too is a traffic congestion point between the customer and the application. A properly designed data center includes a fully redundant core switch layer that supports a wide range of interfaces (1-Gigabit, 10-Gigabit, 40-Gigabit, and 100-Gigabit Ethernet) with high density. The port density in the core switching role is a critical factor as the data center core should be designed to support future expansion without requiring new hardware (beyond line cards and interface adapters). The core switch role should also support a wide array of SLA statistics collection, and should be service-aware to support collection of service-chaining statistics. The general location of the core switching function in this solution is shown in Figure 7.

    Figure 7: Core Switching


    Table 2 shows some of the reasons for choosing an EX9200 switch over an EX8200 switch to provide core switching capabilities in this solution. The EX9200 switch provides a significantly larger number of 10-Gigabit Ethernet ports, support for 40-Gigabit Ethernet ports, ability to host more analyzer sessions, firewall filters, and BFD connections, and critical support for in-service software upgrade (ISSU) and MC-LAG. These reasons make the EX9200 switch the superior choice in this solution.

    Table 2: Core Switch Hardware - Comparison of the EX9200 and EX8200 Switches

    Table 2 compares the EX9200 and EX8200 against solution requirements that include line-rate 10-Gigabit Ethernet ports, 40-Gigabit Ethernet support, analyzer sessions, firewall filters, BFD connections, ISSU, and MC-LAG.


    Table 3 shows some of the reasons for choosing MC-LAG as the forwarding technology over Virtual Chassis in this solution. MC-LAG provides dual control planes, a non-disruptive implementation, support for LACP, state replication across peers, and support for ISSU without requiring dual Routing Engines.

    Table 3: Core Switch Forwarding - Comparison of MC-LAG and Virtual Chassis

    Table 3 compares MC-LAG and Virtual Chassis across six criteria: control planes, centralized management, maximum number of chassis, whether IEEE 802.3ad (LACP) is required, state replication, and whether dual Routing Engines are required.


    To implement the core switching portion of the virtualized IT data center, this solution uses two EX9214 switches with the following capabilities and configuration:

    • Key features—240 Gbps line rate per slot for 10-Gigabit Ethernet, support for 40-Gigabit Ethernet ports, 64 analyzer sessions, scalable to 256,000 firewall filters, and support for bidirectional forwarding detection (BFD), in-service software upgrade (ISSU), and MC-LAG groups.
    • Key configuration steps (Figure 8)
      • Configure Layer 2 MC-LAG active/active on the EX9214 towards the QFabric PODs, the F5 load balancer, and the MX240 edge router (by way of the redundant Ethernet link provided by the SRX3600 edge firewall) to provide path redundancy.
      • Configure IRB and VRRP for all MC-LAG links for high availability.
      • Configure IRB on the EX9214 and the QFabric PODs to terminate the Layer 2/Layer 3 boundary.
      • Configure a static route on the core switches to direct traffic from the Internet to the load balancers.
      • Configure OSPF to advertise a default route to the totally stubby areas in the QFabric PODs. Each QFabric POD has its own OSPF area. Also, configure the EX9214 core switches as area border routers (ABRs) that connect all three OSPF areas, and designate backbone area 0 over aggregated link ae20 between the two core switches.

        Figure 8: Core Switching Design

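    The MC-LAG active/active configuration outlined above combines an aggregated Ethernet interface carrying mc-ae options with an ICCP session between the two core switches. The following sketch shows one side of the pair; the interface names, IDs, and addresses are assumptions for illustration:

```
[edit interfaces]
ae10 {
    aggregated-ether-options {
        lacp {
            active;
            system-id 00:01:02:03:04:05;    # must match on both MC-LAG peers (value assumed)
            admin-key 10;
        }
        mc-ae {
            mc-ae-id 10;                     # same on both peers
            redundancy-group 1;
            chassis-id 0;                    # 1 on the other core switch
            mode active-active;
            status-control active;           # standby on the other core switch
        }
    }
}

[edit protocols iccp]
peer 10.255.0.2 {                            # loopback of the other core switch (assumed)
    local-ip-addr 10.255.0.1;
    liveness-detection {
        minimum-interval 1000;
    }
}
```

    The second core switch carries a mirrored configuration with chassis-id 1 and status-control standby.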

    Edge Routing and WAN

    Edge Routing

    The edge is the point in the network that aggregates all customer and Internet connections into and out of the data center. Although high availability and redundancy are important considerations throughout the data center, it is at the edge that they are the most vital; the edge serves as a choke point for all data center traffic, and a loss at this layer renders the data center out of service. At the edge, full hardware redundancy should be implemented using platforms that support control plane and forwarding plane redundancy, link aggregation, MC-LAG, redundant uplinks, and the ability to upgrade the software and platform while the data center is in service. This architectural role should support a full range of protocols to ensure that the data center can support any interconnect type that may be offered. Edge routers in the data center require support for IPv4 and IPv6, as well as ISO and MPLS protocols. As the data center might be multitenant, the widest array of routing protocols should also be supported, including static routing, RIP, OSPF, OSPF-TE, OSPFv3, IS-IS, and BGP. With large-scale multitenant environments in mind, it is important to support Virtual Private LAN Service (VPLS) through bridge domains, overlapping VLAN IDs, integrated routing and bridging (IRB), and Q-in-Q tunneling (IEEE 802.1ad). The edge should support a complete set of MPLS VPNs, including L3VPN, L2VPN (RFC 4905 and RFC 6624, or the Martini and Kompella drafts, respectively), and VPLS.

    Network Address Translation (NAT) is another factor to consider when designing the data center edge. It is likely that multiple customers serviced by the data center will have overlapping private network address schemes. In environments where direct Internet access to the data center is enabled, NAT is required to translate routable, public IP addresses to the private IP addressing used in the data center. The edge must support Basic NAT 44, NAPT44, NAPT66, Twice NAT44, and NAPT-PT.
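    As a sketch of how such a translation can be expressed in the Junos OS security NAT syntax used on SRX Series devices (the zone names and address ranges below are assumptions for illustration):

```
[edit security nat source]
pool public-pool {
    address 198.51.100.10/32 to 198.51.100.20/32;   # routable public range (assumed)
}
rule-set dc-to-internet {
    from zone trust;
    to zone untrust;
    rule napt44 {
        match {
            source-address 10.0.0.0/8;               # private data center range (assumed)
        }
        then {
            source-nat {
                pool public-pool;                    # NAPT44: many private hosts share the pool
            }
        }
    }
}
```

    MX Series edge routers express the same translations through service sets rather than the security hierarchy shown here.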

    Finally, as the edge is the ingress and egress point of the data center, the implementation should support robust data collection to enable administrators to verify and prove strict service-level agreements (SLAs) with their customers. The edge layer should support collection of average traffic flows and statistics and, at a minimum, should be able to report exact traffic statistics, including the exact number of bytes and packets received, transmitted, queued, lost, or dropped, per application. Figure 9 shows the location of the edge routing function in this solution.

    Figure 9: Edge Routing



    WAN

    The WAN role provides transport between end users, enterprise remote sites, and the data center. Several different WAN topologies can be used, depending on the business requirements of the data center. A data center can connect directly to the Internet, using simple IP-based access to servers in the data center or a secure tunneled approach based on generic routing encapsulation (GRE) or IP Security (IPsec). Many data centers serve a wide base of customers and favor Multiprotocol Label Switching (MPLS) interconnection, allowing customers to connect directly into the data center across the carrier's managed MPLS backbone. Another approach is direct peering between customers and the data center, which lets customers bypass transit peering links by establishing a direct connection (for example, via private leased line) into the data center. The choice of WAN interconnect is thus an early decision that shapes the performance and security of the data center applications. Private peering or an MPLS interconnect offers improved security and performance at a higher cost. In cases where the hosted applications are less sensitive to security and performance, or where application protocols offer built-in security, a simple Internet-connected data center can offer an appropriate level of security and performance at a lower cost.

    To implement the edge routing and WAN portions of the virtualized IT data center, this solution uses MX240 Universal Edge routers. Because the MX240 router offers dual Routing Engines and ISSU at a reasonable price point, it is the preferred option over the smaller MX80 router. The key connection and configuration steps are:

    • Connect the MX240 edge routers to the service provider networks to provide Internet access to the data center.
    • Configure the two edge routers to be EBGP peers with 2 service providers to provide redundant Internet connections.
    • Configure IBGP between the 2 edge routers and apply a next-hop self export policy.
    • Configure BGP local preference on the primary service provider to offer a preferred exit point to the Internet.
    • Export a dynamic, condition-based, default route to the Internet into OSPF on both edge routers toward the edge firewalls and core switches to provide Internet access for the virtualized IT data center devices (Figure 10).
    • Configure both edge routers in Area 1 for OSPF.
    • Enable Network Address Translation (NAT) to convert private IP addresses into public IP addresses.
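    The BGP portion of the steps above could be sketched as follows on one edge router; the AS numbers, neighbor addresses, and policy names are assumptions for illustration:

```
[edit protocols bgp]
group isp-primary {
    type external;
    peer-as 64510;                       # primary service provider AS (assumed)
    neighbor 203.0.113.1;
    import prefer-primary;               # raise local preference on the primary exit
}
group internal {
    type internal;
    local-address 10.255.1.1;            # loopback address (assumed)
    export next-hop-self;
    neighbor 10.255.1.2;                 # the other MX240 edge router
}

[edit policy-options]
policy-statement prefer-primary {
    then {
        local-preference 200;            # higher than the Junos default of 100
    }
}
policy-statement next-hop-self {
    then {
        next-hop self;
    }
}
```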

    Figure 10: Edge Routing Design


    This design for the network segment of the data center meets the requirements of this solution for 1-Gigabit, 10-Gigabit, and 40-Gigabit Ethernet ports, converged data and storage, load balancing, quality of experience, network segmentation, traffic isolation and separation, and time synchronization.


    Storage

    The storage role of the MetaFabric 1.0 architecture is to provide centralized file and block data storage that all hosts inside the data center can access. The data storage can be local to a VM, such as a database that resides within a hosted application, or shared, such as a MySQL database that resides on a storage array and serves multiple different applications. The MetaFabric 1.0 architecture requires shared storage to enable compute virtualization and VM mobility.

    One of the key goals of the virtualized IT data center is to converge both data and storage onto the same network infrastructure to reduce overall cost and simplify operations and troubleshooting. There are several options for converging storage traffic: FCoE, NFS, and iSCSI. A recent trend in building green-field data centers is to use IP storage and intentionally avoid integrating legacy Fibre Channel networks. Because iSCSI has better performance, lower read-write response times, lower cost, and full application support, it is the better block storage choice over NFS. Storage traffic is also very latency and drop sensitive, so the network infrastructure must provide a lossless Ethernet service that correctly prioritizes all storage traffic. As a result, this solution uses both iSCSI and NFS for storage and provides a lossless Ethernet service to guarantee the delivery of storage traffic.

    Table 4 shows a comparison of FCoE, NFS, and iSCSI. Because NFS and iSCSI meet the same requirements provided by FCoE, plus the ability to scale to 10-Gigabit Ethernet and beyond, the NFS and iSCSI storage protocols are the preferred choice for the MetaFabric 1.0 solution.

    Table 4: Comparison of Storage Protocols

    Table 4 compares FCoE, NFS, and iSCSI against four requirements: lossless Ethernet, scaling to 10-Gigabit Ethernet and beyond, converged data and storage, and low end-to-end latency.

    Figure 11 shows the path of storage traffic as it travels through the data center and highlights the benefit of priority queuing to provide lossless Ethernet transport for storage traffic. By configuring Priority Flow Control (PFC), the storage device can monitor storage traffic in the storage VLAN and notify the server when traffic congestion occurs. The server can pause sending additional storage traffic until after the storage device has cleared the congested receive buffers. However, other queues are not affected and uncongested traffic continues flowing without interruption.

    Figure 11: Storage Lossless Ethernet Design

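    On QFX Series platforms, PFC is enabled by attaching a congestion notification profile to the interfaces that carry the storage VLAN. A minimal sketch, with the IEEE 802.1p code point and interface name assumed for illustration:

```
[edit class-of-service]
congestion-notification-profile storage-cnp {
    input {
        ieee-802.1 {
            code-point 011 {          # priority 3, commonly used for storage (assumed here)
                pfc;
            }
        }
    }
}
interfaces {
    xe-0/0/10 {                       # server- or storage-facing port (name assumed)
        congestion-notification-profile storage-cnp;
    }
}
```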

    The packet flow for storage traffic is as follows:

    1. The server transmits storage traffic to the QFabric system.
    2. The QFabric system classifies the traffic.
    3. Traffic is queued according to priority.
    4. The QFabric system transmits the traffic to the storage array.
    5. The storage array receives the traffic.
    6. The storage array transmits traffic back to the QFabric system.
    7. The QFabric system classifies the traffic.
    8. Traffic is queued according to priority.
    9. The QFabric system transmits the traffic to the servers and VMs.
    10. The server receives the traffic.
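    Steps 2-3 and 7-8 (classification and priority queuing) correspond to a behavior aggregate classifier and a dedicated forwarding class on the QFabric system. The class names, queue number, and code point in this sketch are assumptions for illustration:

```
[edit class-of-service]
forwarding-classes {
    class storage-fc queue-num 3;            # lossless queue for iSCSI/NFS traffic
}
classifiers {
    ieee-802.1 storage-classifier {
        forwarding-class storage-fc {
            loss-priority low code-points 011;
        }
    }
}
interfaces {
    xe-0/0/10 {                              # port carrying the storage VLAN (name assumed)
        unit 0 {
            classifiers {
                ieee-802.1 storage-classifier;
            }
        }
    }
}
```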

    To implement the storage portion of the virtualized IT data center, this solution uses EMC VNX5500 unified storage with a single storage array. This storage is connected to the QFabric PODs, which in turn connect to the servers and VMs, as seen in Figure 12. The design assumes that the data center architect wishes to save on cost initially by sharing a single storage array with multiple QFabric PODs. However, the design can evolve to allocating one storage array per one QFabric POD, as usage and demand warrant such expansion.

    Figure 12: Storage Design


    This solution also implements Data Center Bridging (DCB) to enable full support of storage traffic. Within DCB, support for priority-based flow control (PFC), enhanced transmission selection (ETS), and Data Center Bridging Capability Exchange (DCBX) enables storage traffic to pass properly between all servers and storage devices within a data center segment and to deliver a lossless Ethernet environment.

    This design for the storage segment of the virtualized IT data center meets the requirements of this solution for scale, lossless Ethernet, the ability to boot from shared storage, and support for multiple protocol storage.


    Applications

    Applications in the virtualized IT data center are built as virtual machines (VMs) and are hosted on servers, the physical compute resources that reside on the blade server. This design for applications meets the requirements of this solution for business-critical applications and high performance.

    The MetaFabric 1.0 solution supports a complete software stack that covers four major application categories: compute management, network management, network services, and business-critical applications (Figure 13). These applications run on top of IBM servers and VMware vSphere 5.1.

    Figure 13: Virtualized IT Data Center Solution Software Stack


    Compute Management

    VMware vCenter is a virtualization management platform that offers centralized control of and visibility into compute, storage, and networking resources. Data center operators use vCenter, the de facto industry standard, on a daily basis to manage and provision VMs. VMware vCloud Director allows the data center manager to create an in-house cloud service and partition the virtualization environment into segments that can be administered by separate business units or administrative entities. The pool of resources can then be partitioned into virtual data centers, each offering its own independent virtualization services. The use of vCenter and vCloud Director provides the first element of software application support for the MetaFabric 1.0 solution.

    Network Management

    The MetaFabric 1.0 solution uses Junos Space Management Applications to provide network provisioning, orchestration, and inventory management. The applications include Network Director for management of wired and wireless data center networks, and Security Director for security policy administration.

    Network Services

    Network load balancing is a common network service. There are two methods to provide network load balancing: virtual and hardware-based. The virtual load balancer operates in the hypervisor as a VM. One of the benefits of a virtual load balancer is rapid provisioning of additional load-balancing power. Another benefit is that the administration of the virtual load balancer can be delegated to another administrative entity without impacting other applications and traffic.

    However, the drawback of a virtual load balancer is that its performance is limited by the compute resources available to it. Hardware load balancers offer far higher traffic throughput and, with dedicated security hardware, much faster SSL encryption and decryption.

    The MetaFabric 1.0 solution uses the Local Traffic Manager (LTM) from F5 Networks.

    The load balancers provide the following services:

    • Advertise the existence of the application.
    • Distribute the traffic across a set of servers.
    • Leverage features such as SSL acceleration and compression.
    • Provide additional Layer 7 features.

    Business-Critical Applications

    Software applications are made of multiple server tiers; the most common are Web, application, and database servers. Each server has its own discrete set of responsibilities. The Web tier handles the interaction with the users and the application. The application tier handles all of the application logic and programming. The database tier handles all of the data storage and application inventory.

    The following software applications were tested as part of the MetaFabric 1.0 solution:

    • Microsoft SharePoint

      The SharePoint application requires three tiers: Web, application, and database. The Web tier uses Microsoft IIS to handle Web tracking and interaction with end users. The application tier uses Microsoft SharePoint and Active Directory to provide the file sharing and content management software. Finally, the database tier uses Microsoft SQL Server to store and organize the application data.

    • Microsoft Exchange

      The Exchange application requires two tiers: a Web tier, and a second tier that combines the application and the database into a single tier.

    • MediaWiki Application

      The MediaWiki application requires two tiers: a combined Web and application tier, and a database tier. Apache httpd is combined with the hypertext preprocessor (PHP) to render and present the application, while the data is stored on the database tier with MySQL.

    High Availability

    This design meets the high availability requirements of hardware redundancy and software redundancy.

    Hardware Redundancy

    To provide hardware redundancy in the virtualized IT data center, this solution uses:

    • Redundant server hardware—Two IBM 3750 standalone servers and two IBM Pure Flex System Chassis
    • Redundant access and aggregation PODs—Two QFX3000-M QFabric systems
    • Redundant core switches—Two EX9214 switches
    • Redundant edge firewalls—Two SRX3600 Services Gateways
    • Redundant edge routers—Two MX240 Universal Edge routers
    • Redundant storage—Two EMC VNX5500 unified storage arrays
    • Redundant load balancers—Two F5 LTM 4200v load balancers
    • Redundant out-of-band management switches—Four EX4300 switches using Virtual Chassis technology

    Software Redundancy

    To provide software redundancy in the virtualized IT data center, this solution uses:

    • Graceful restart—Helper routers assist restarting devices in restoring routing protocols, state, and convergence.
    • Graceful Routing Engine switchover—Keeps the operating system state synchronized between the master and backup Routing Engines in a Juniper Networks device.
    • In-service software upgrade (for the core switches and edge routers)—Enables the network operating system to be upgraded without downtime.
    • MC-LAG—Enables aggregated Ethernet interface bundles to contain interfaces from more than one device.
    • Nonstop active routing—Keeps the Layer 3 protocol state synchronized between the master and backup Routing Engines.
    • Nonstop bridging—Keeps the Layer 2 protocol state synchronized between the master and backup Routing Engines.
    • Nonstop software upgrade (for the QFX3000-M QFabric system PODs)—Enables the network operating system to be upgraded with minimal impact to forwarding.
    • Virtual Router Redundancy Protocol (VRRP)—Provides a virtual IP address for traffic and forwards the traffic to one of two peer routers, depending on which one is operational.
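    On a device with redundant Routing Engines, the GRES, NSR, and NSB features listed above map to a small set of configuration statements. A hedged sketch (graceful restart helper mode is enabled by default on neighboring devices and needs no explicit configuration here):

```
chassis {
    redundancy {
        graceful-switchover;        # GRES: synchronize kernel state between Routing Engines
    }
}
routing-options {
    nonstop-routing;                # NSR: synchronize Layer 3 protocol state
}
protocols {
    layer2-control {
        nonstop-bridging;           # NSB: synchronize Layer 2 protocol state
    }
}
```

    Note that nonstop active routing requires graceful Routing Engine switchover to be configured first.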

    MC-LAG Design Considerations

    To allow all the links to forward traffic without using Spanning Tree Protocol (STP), you can configure MC-LAG on edge routers and core switches. The edge routers use MC-LAG toward the edge firewalls, and the core switches use MC-LAG toward each QFabric POD, application load balancer (F5), and out-of-band (OOB) management switch.

    Multichassis link aggregation group (MC-LAG) is a feature that supports aggregated Ethernet bundles spread across more than one device. Link Aggregation Control Protocol (LACP) supports MC-LAG and is used for dynamic configuration and monitoring of the member links. The available options for MC-LAG are Active/Standby (where one device is active and the other takes over if the active device fails) and Active/Active (where both devices actively participate in the MC-LAG connection).

    For this solution, MC-LAG Active/Active is preferred because it provides link-level and node-level protection for Layer 2 networks and Layer 2/Layer 3 combined hybrid environments.

    Highlights of MC-LAG Active/Active

    MC-LAG Active/Active has the following characteristics:

    • Both core switches have active aggregated Ethernet member interfaces and forward the traffic. If one of the core switches fails, the other core switch will forward the traffic. Traffic is load balanced by default, so link-level efficiency is 100 percent.
    • The Active/Active method has faster convergence than the Active/Standby method. Fast convergence occurs because information is exchanged between the routers during operations. After a failure, the remaining operational core switch does not need to relearn any routes and continues to forward the traffic.
    • Routing protocols (such as OSPF) can be used over MC-LAG/IRB interfaces for Layer 3 termination.
    • If you configure Layer 3 protocols in the core, you can use an integrated routing and bridging (IRB) interface to offer a hybrid Layer 2 and Layer 3 environment at the core switch.
    • Active/Active also offers maximum utilization of resources and end-to-end load balancing.

    To extend a link aggregation group (LAG) across two devices (MC-LAG):

    • Both devices must synchronize their aggregated Ethernet LACP configuration.
    • Learned MAC address and ARP entries must be synchronized.

    The above MC-LAG requirements are achieved by using the following protocols and mechanisms, as shown in Figure 14:

    1. Interchassis Control Protocol (ICCP)
    2. Interchassis Link Protection Link (ICL-PL)

    Figure 14: MC-LAG – ICCP and ICL Design

    1. ICCP
      • ICCP is a control plane protocol for MC-LAG. It uses TCP as a transport protocol and Bidirectional Forwarding Detection (BFD) for fast convergence. When you configure ICCP, you must also configure BFD.
      • ICCP synchronizes configurations and operational states between the two MC-LAG peers.
      • ICCP also synchronizes MAC address and ARP entries learned from one MC-LAG node and shares them with the other peer.
      • Peering with the ICCP peer's loopback IP address is recommended to avoid dependence on any single direct link between the MC-LAG peers. As long as a logical path between the peers remains up, ICCP stays up.
      • Although you can configure ICCP on either a single link or an aggregated bundle link, an aggregated Ethernet LAG is preferred.
      • You can also configure the ICCP and ICL links on a single aggregated Ethernet bundle under multiple logical interfaces by using flexible VLAN tagging, which is supported on MX Series platforms.
    2. ICL-PL
      • ICL is a special Layer 2 link between the MC-LAG peers that is used only in Active/Active mode.
      • ICL-PL protects MC-LAG connectivity if all core-facing links on one MC-LAG node fail.
      • If a receiver is single-homed to one MC-LAG node (N1), packets that arrive on the MC-LAG interface of the other node (N2) are forwarded to N1 across the ICL.
      • Split horizon is enabled on the ICL to avoid loops.
      • There is no data plane MAC learning over the ICL.
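    The ICCP session and the ICL protection described above could be configured along these lines on each MC-LAG peer (the IP addresses and the ae1 ICL bundle name are illustrative):

```
protocols {
    iccp {
        local-ip-addr 10.255.100.1;       # this peer
        peer 10.255.100.2 {
            redundancy-group-id-list 1;
            liveness-detection {          # BFD is required when ICCP is configured
                minimum-interval 1000;
                multiplier 3;
            }
        }
    }
}
multi-chassis {
    multi-chassis-protection 10.255.100.2 {
        interface ae1;                    # the ICL aggregated Ethernet bundle
    }
}
```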

    MC-LAG Specific Configuration Parameters

    Redundancy group ID—ICCP uses a redundancy group to associate multiple chassis that perform similar redundancy functions, and it establishes a communication channel so that applications on ICCP peers can reach each other. When an application sends a message to a particular redundancy group, ICCP delivers it to all members of that group. A redundancy group ID is similar to a mesh group identifier.

    MC-AE ID—The multichassis aggregated Ethernet (MC-AE) ID identifies a multichassis interface. If one MC-AE bundle is spread across both core switches, configure the same MC-AE ID for that bundle on each switch.

    Service ID—A service ID configured for a bridge domain overrides any global switch-options configuration for that bridge domain. The service ID must be unique across the entire network for a given service to allow correct synchronization. For example, a service ID synchronizes applications such as IGMP, ARP, and MAC address learning for a given service across the core switches. (Note: Both MC-LAG peers must share the same service ID for a given bridge domain.)
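    A hedged sketch showing how these parameters fit together on one MC-LAG peer, using MX-style bridge domains (on an EX9214, the equivalent is a vlans stanza). Interface names, IDs, and addresses are illustrative; the peer uses chassis-id 1 and status-control standby but otherwise identical values:

```
interfaces {
    ae3 {
        aggregated-ether-options {
            lacp {
                active;
                system-id 00:01:02:03:04:05;   # must match on both peers
                admin-key 3;
            }
            mc-ae {
                mc-ae-id 3;                    # same on both peers for this bundle
                redundancy-group 1;            # matches the ICCP redundancy group ID
                chassis-id 0;                  # 1 on the peer
                mode active-active;
                status-control active;         # standby on the peer
            }
        }
    }
}
bridge-domains {
    bd100 {
        vlan-id 100;
        service-id 100;                        # same service ID on both peers
    }
}
```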

    MC-LAG Active/Active Layer 3 Routing Features

    MC-LAG Active/Active is a Layer 2 logical link. IRB interfaces are used to create integrated Layer 2 and Layer 3 links. As a result, you have two design options when assigning IP addresses across MC-LAG peers:

    • Option 1: VRRP—MC-LAG Active/Active with VRRP provides common virtual IP and MAC addresses plus unique physical IP and MAC addresses per peer. Both address types are needed if you configure routing protocols on MC-LAG Active/Active interfaces. The VRRP data forwarding logic in Junos OS is modified when you configure both MC-LAG Active/Active and VRRP: both the master and backup VRRP peers forward traffic and load-balance it between them, as shown in Figure 15.

      Figure 15: VRRP and MC-LAG – Active/Active Option


      Data packets received by the backup VRRP peer on the MC-LAG member link are forwarded to the core link without sending them to the master VRRP peer.

    • Option 2: MAC address synchronization (Figure 16)—Shares a MAC address between the MC-LAG peers, with the same IP address configured on the IRB interface of each peer. Use Option 2 if you do not plan to configure routing protocols on the MC-LAG Active/Active interfaces.

      Figure 16: MC-LAG – MAC Address Synchronization Option

      • You configure the same IP address on the IRB interfaces of both nodes.
      • The lowest MAC address is selected as the gateway MAC address.
      • The peer with the higher IRB MAC address learns the peer's MAC address through ICCP and installs it as its own MAC address.
      • On MX Series platforms, configure mcae-mac-synchronize in the bridge domain configuration.
      • On EX9214 switches, configure mcae-mac-synchronize in a VLAN configuration.

    We recommend Option 1 as the preferred method for the MetaFabric 1.0 solution for the following reasons:

    • The solution requires OSPF as the routing protocol between the QFabric PODs and the core switches on the MC-LAG IRB interfaces and only Option 1 supports routing protocols.
    • Layer 3 extends to the QFabric PODs for some VLANs for hybrid Layer 2/Layer 3 connectivity to the core.
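    With Option 1, each peer keeps a unique IRB address, shares a VRRP virtual address, and runs OSPF on the IRB interface. A hedged sketch for one EX9214 core switch (the VLAN, addresses, and group numbers are illustrative; the peer uses a different physical address and a lower priority):

```
vlans {
    v100 {
        vlan-id 100;
        l3-interface irb.100;
    }
}
interfaces {
    irb {
        unit 100 {
            family inet {
                address 10.10.100.2/24 {          # unique per peer
                    vrrp-group 100 {
                        virtual-address 10.10.100.1;
                        priority 200;             # lower on the peer
                        accept-data;
                    }
                }
            }
        }
    }
}
protocols {
    ospf {
        area 0.0.0.0 {
            interface irb.100;                    # routing protocol on the MC-LAG IRB
        }
    }
}
```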

    MC-LAG Active/Active Traffic Forwarding Rules

    Figure 17: MC-LAG – Traffic Forwarding Rules


    As shown in Figure 17, the following forwarding rules apply to MC-LAG Active/Active:

    • Traffic received on N1 from MCAE1 could be flooded to the ICL link to reach N2. When it reaches N2, it must not be flooded back to MCAE1.
    • Traffic received on SH1 could be flooded to MCAE1 and the ICL by way of N1. When N2 receives the SH1 traffic across the ICL link, it must not flood that traffic back to MCAE1, because the MCAE1 device already receives the SH1 traffic by way of its MC-AE link to N1.
      • When receiving a packet from the ICL link, the MC-LAG peers forward the traffic to all local SH links. If the corresponding MCAE link on the peer is down, the receiving peer also forwards the traffic to its MCAE links.

      Note: ICCP is used to signal MCAE link state between the peers.

    • When N2 receives traffic from the ICL link and the N1 core link is up, the traffic should not be forwarded to the N2 core link.

    MC-LAG Active/Active High Availability Events

    ICCP is down while ICL is up:

    Figure 18: MC-LAG – ICCP Down


    Here are the actions that happen when the ICCP link is down and the ICL link is up:

    • By default, if the ICCP link fails, as shown in Figure 18, each peer reverts to its own local LACP system ID, and the links of only one peer (whichever one negotiates with the customer edge [CE] device first) are attached to the bundle. Until LACP converges with a new system ID, there is minimal traffic impact.
    • One peer stays active, while the other enters standby mode (but this is nondeterministic).
    • The access switch selects a core switch and establishes LACP peering.

    To optimize for this condition, include the prefer-status-control-active statement on the active peer.

    • With the prefer-status-control-active statement configured on the active peer, the peer remains active and retains the same LACP system ID.
    • With the force-icl-down statement, the ICL link shuts down when the ICCP link fails.
    • By configuring these statements, traffic impact is minimized during an ICCP link failure.
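    Both statements are configured under the MC-AE events hierarchy on the peer that should remain active; a hedged sketch (the ae0 bundle name is illustrative):

```
interfaces {
    ae0 {
        aggregated-ether-options {
            mc-ae {
                events {
                    iccp-peer-down {
                        prefer-status-control-active;   # keep the same LACP system ID
                        force-icl-down;                 # shut the ICL when ICCP fails
                    }
                }
            }
        }
    }
}
```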

    ICCP is up and ICL goes down:

    Figure 19: MC-LAG – ICL Down


    Here are the actions that happen when the ICCP link is up and the ICL link is down:

    • If you configure a peer with the prefer-status-control-standby statement, the MC-AE interfaces shared with the peer and connected to the ICL go down.
    • This configuration ensures a loop-free topology because it does not forward duplicate packets in the Layer 2 network.

    Active MC-LAG node down, with ICCP loopback peering and prefer-status-control-active on both peers:

    Figure 20: MC-LAG – Peer Down


    Here are the actions that happen when both MC-LAG peers are configured with the prefer-status-control-active statement and the active peer goes down:

    • When you configure MC-LAG Active/Active between SW1/SW2 and the QFabric POD, SW1 becomes active and SW2 becomes standby. During an ICCP failure event, if SW1 has the prefer-status-control-active statement and SW1 itself fails, SW2 cannot distinguish an SW1 failure from an ICCP link failure. As a result, SW2 reverts its MC-AE interfaces to the default LACP system ID, which causes the MC-LAG link to go down and come back up, and results in long traffic reconvergence times.
    • To avoid this situation, configure the prefer-status-control-active statement on both SW1 and SW2. Also, you should prevent ICCP failures by configuring ICCP on a loopback interface.
    • Configure backup-liveness-detection on both the active and standby peers. BFD helps to detect peer failures and enables subsecond reconvergence.
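    Loopback peering and backup liveness detection can be combined as follows; a hedged sketch (the addresses are illustrative; the backup peer IP should be reachable over the out-of-band management network so it survives in-band failures):

```
protocols {
    iccp {
        local-ip-addr 10.255.100.1;             # lo0.0 address of this peer
        peer 10.255.100.2 {                     # lo0.0 address of the MC-LAG peer
            redundancy-group-id-list 1;
            backup-liveness-detection {
                backup-peer-ip 192.168.10.2;    # peer's management address
            }
            liveness-detection {                # BFD for fast failure detection
                minimum-interval 1000;
                multiplier 3;
            }
        }
    }
}
```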

    The design for high availability in the MetaFabric 1.0 solution meets the requirements for hardware redundancy and software redundancy.

    Class of Service

    Key design elements for class of service in this solution include network control (OSPF, BGP, and BFD), virtualization control (high availability, fault tolerance), storage (iSCSI and NAS), business-critical applications (Exchange, SharePoint, MediaWiki, and vMotion) and best-effort traffic. As seen in Figure 21, incoming packets are sorted, assigned to queues based on traffic type, and transmitted based on the importance of the traffic. For example, iSCSI lossless Ethernet traffic has the largest queue and highest priority, followed by critical traffic (fault tolerance and high availability), business-critical application traffic (including vMotion), and bulk best-effort traffic with the lowest priority.

    Figure 21: Class of Service – Classification and Queuing


    As seen in Figure 22, the following percentages are allocated for class of service in this solution: network control (5 percent), virtualization control (5 percent), storage (60 percent), business-critical applications (25 percent), and best-effort traffic (5 percent). These allocations are tuned for network-level traffic, because the network supports multiple servers and switches. As a result, storage traffic and application traffic are the most critical traffic types in the network, and these allocations have been verified by our testing.

    Figure 22: Class of Service – Buffer and Transmit Design


    To provide class of service in the virtualized IT data center and meet the design requirements, this solution uses:

    • Lossless Ethernet for storage traffic
    • Five allocations to differentiate traffic
    • A queue for business-critical applications
    • Best-effort traffic for data traffic
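    The five allocations above could be expressed in Junos OS class-of-service configuration along these lines (the forwarding-class names, queue numbers, and buffer percentages are illustrative; the transmit-rate percentages follow Figure 22, and the storage queue would additionally be made lossless with priority-based flow control):

```
class-of-service {
    forwarding-classes {
        class network-control queue-num 7;
        class virtualization-control queue-num 4;
        class storage queue-num 3;               # lossless (PFC-enabled) queue
        class business-critical queue-num 2;
        class best-effort queue-num 0;
    }
    schedulers {
        network-control-sched {
            transmit-rate percent 5;
            buffer-size percent 5;
        }
        virtualization-control-sched {
            transmit-rate percent 5;
            buffer-size percent 5;
        }
        storage-sched {
            transmit-rate percent 60;
            buffer-size percent 60;
        }
        business-critical-sched {
            transmit-rate percent 25;
            buffer-size percent 25;
        }
        best-effort-sched {
            transmit-rate percent 5;
            buffer-size percent 5;
        }
    }
    scheduler-maps {
        dc-map {
            forwarding-class network-control scheduler network-control-sched;
            forwarding-class virtualization-control scheduler virtualization-control-sched;
            forwarding-class storage scheduler storage-sched;
            forwarding-class business-critical scheduler business-critical-sched;
            forwarding-class best-effort scheduler best-effort-sched;
        }
    }
}
```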


    Security

    Security is a vital component of any network architecture, and the virtualized IT data center is no exception. There are various areas within the data center where security is essential. At the perimeter, security focuses on protecting the edge of the data center from external threats and on providing a secure gateway to the Internet. Remote access is another area where security is vital in the data center. Operators often require remote access to the data center to perform maintenance or to activate new services; this access must be secured and monitored to ensure that only authorized users are permitted. Robust authentication, authorization, and accounting (AAA) mechanisms should be in place to ensure that only authorized operators are allowed access. Given that the data center is a cost and revenue center that can house the critical data and applications of many different enterprises, multifactor authentication is an absolute necessity to properly secure remote access.

    Software application security in the virtualized IT data center is security that is provided between VMs. A great deal of inter-VM communication occurs in the data center, and controlling this interactivity is a crucial security concern. If a server is supposed to access a database residing on another server, or on a storage array, a virtual security appliance should be configured to limit the communication between those resources to only the protocols that are necessary for operation. Limiting the communication between resources prevents security breaches in the data center and might be mandated by the regulatory requirements of the hosted applications (HIPAA, for instance, can dictate security safeguards that must exist between patient and business data). As discussed in the Virtual Machine section, security in the virtual network, or between VMs, differs from security that can be implemented on a physical network. In a physical network, a hardware firewall can connect to different subnets, security zones, or servers and provide security between those devices (Figure 23). In the virtual network, the physical firewall does not have the ability to see traffic between the VMs. In these cases, a virtual hypervisor security appliance should be installed to enable security between VMs.

    Figure 23: Physical Security Compared to Virtual Network Security


    Application Security

    When securing VMs, you need a comprehensive virtualization security solution that implements hypervisor security with full introspection; includes a high-performance, hypervisor-based stateful firewall; uses an integrated intrusion detection system (IDS); provides virtualization-specific antivirus protection; and offers unrivaled scalability for managing multitenant cloud data center security. The Juniper Networks Firefly Host (formerly vGW) offers all these features and enables the operator to monitor software, patches, and files installed on a VM from a central location. Firefly Host is designed to be centrally managed from a single-pane view, giving administrators a comprehensive view of virtual network security and VM inventory.

    Table 5 shows the relative merits of three application security design options: vSRX, SRX, and Firefly Host. Because other choices lack intrusion detection and prevention, quarantine capabilities, and mission-critical line-rate performance and scalability, Firefly Host is the preferred choice for this solution. Additionally, Firefly Host is integrated into all VMs and provides every endpoint with its own virtual firewall.

    Table 5: Application Security Options

    Capability                             vSRX    SRX    Firefly Host
    Stateful security policies             Yes     Yes    Yes
    Centralized management                 Yes     Yes    Yes
    Intrusion detection and prevention     No      No     Yes
    Quarantine capabilities                No      No     Yes
    10G line-rate performance at scale     No      No     Yes
    To provide application security in the virtualized IT data center, this solution uses the Juniper Networks Firefly Host to provide VM-to-VM application security. Firefly Host integrates with VMware vCenter for comprehensive VM security and management.

    Figure 24: Application Security Design


    In Figure 24, the following sequence occurs for VM-to-VM traffic:

    1. A VM sends traffic to a destination VM.
    2. The Firefly Host appliance inspects the traffic.
    3. The traffic matches the security policy.
    4. The ESXi host transmits the traffic.
    5. The second ESXi host receives the traffic.
    6. Firefly Host inspects the traffic.
    7. The traffic matches the security policy, which permits it to continue to the destination.
    8. The destination VM receives the traffic.

    Perimeter Security

    Edge firewalls handle security functions such as Network Address Translation (NAT), intrusion detection and prevention (IDP), security policy enforcement, and virtual private network (VPN) services. As shown in Figure 25, there are four locations where you could provide security services for the physical devices in your data center:

    1. Firewall filters in the QFabric system PODs
    2. Firewall filters in the core switches
    3. Dedicated, stateful firewalls (like the SRX3600)
    4. Physical firewalls connected to the QFabric system PODs

    Figure 25: Physical Security Design


    This solution implements option 3, which uses a stateful firewall to protect traffic flows traveling between the edge routers and core switches. Anything below the POD level is protected by the Firefly Host application.

    To provide perimeter security in the virtualized IT data center, this solution uses the SRX3600 Services Gateway as an edge firewall. This firewall offers up to 55 Gbps of firewall throughput, which can easily support the VM traffic generated by this solution. The key configuration tasks include:

    • Configure the SRX gateways as an active/backup cluster.
    • Place redundant Ethernet group reth1 (configured toward the edge routers) in the non-trust zone.
    • Place reth0 (configured toward the core switches) in the trust zone.
    • Configure a security policy for traffic coming from the non-trust zone to allow only access to data center applications.
    • Configure source NAT (SNAT) so that the application servers, which use private addresses, can access the Internet.
    • Configure Destination Network Address Translation (DNAT) for remote access to the data center by translating the Pulse gateway internal IP address to an Internet-accessible IP address.
    • Configure the edge firewalls in OSPF area 1.
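    A hedged sketch of the zone, policy, and NAT portions of this configuration (the reth assignments follow the list above; the address-book entry dc-app-servers, the Pulse gateway address, the applications, and all IP addresses are illustrative):

```
security {
    zones {
        security-zone untrust {
            interfaces reth1.0;                     # toward the edge routers
        }
        security-zone trust {
            interfaces reth0.0;                     # toward the core switches
        }
    }
    policies {
        from-zone untrust to-zone trust {
            policy allow-dc-apps {                  # only data center applications
                match {
                    source-address any;
                    destination-address dc-app-servers;
                    application [ junos-http junos-https ];
                }
                then {
                    permit;
                }
            }
        }
    }
    nat {
        source {
            rule-set apps-to-internet {             # SNAT for private app servers
                from zone trust;
                to zone untrust;
                rule snat-apps {
                    match {
                        source-address 10.10.0.0/16;
                    }
                    then {
                        source-nat {
                            interface;
                        }
                    }
                }
            }
        }
        destination {
            pool pulse-gw {
                address 10.10.30.5/32;              # internal Pulse gateway address
            }
            rule-set remote-access {                # DNAT for remote access
                from zone untrust;
                rule dnat-pulse {
                    match {
                        destination-address 198.51.100.10/32;
                    }
                    then {
                        destination-nat {
                            pool {
                                pulse-gw;
                            }
                        }
                    }
                }
            }
        }
    }
}
```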

    Secure Remote Access

    The MetaFabric 1.0 solution requires secure remote access into the data center environment. Such access must provide multifactor authentication, granular security controls, and the user scale that gives multitenant data centers the ability to provide access to administrators and to many thousands of users.

    The secure remote access application must be accessible through the Internet; capable of providing encryption, role-based access control (RBAC), and two-factor authentication services; able to operate in a virtualized environment; and able to scale to 10,000 users.

    Table 6 shows a comparison of the MAG gateway and the Junos Pulse gateway options. For this solution, the Junos Pulse gateway is superior because it offers all the capabilities of the MAG gateway as well as being a virtualized application.

    Table 6: Data Center Remote Access Options

    Capability                   MAG Gateway    Virtual Pulse Gateway
    Internet accessible          Yes            Yes
    Two-factor authentication    Yes            Yes
    Scale to 10,000 users        Yes            Yes
    Virtualized application      No             Yes
    To provide secure remote access to and from the virtualized IT data center, this solution uses Juniper Networks SA Series SSL VPN technology in the form of the virtual Junos Pulse gateway as the remote access system.

    Figure 26: Remote Access Flow


    As shown in Figure 26, the remote access flow in the virtualized IT data center happens as follows:

    1. The user logs in from the Internet.
    2. The user session is routed to the firewall.
    3. Destination NAT is performed on the session.
    4. The authorized user matches the security policy.
    5. The traffic is forwarded to the Junos Pulse gateway.
    6. Traffic arrives on the untrust interface of the Pulse gateway.
    7. The gateway assigns a local (trusted) IP address to the user session.
    8. The user is authenticated and granted access through RBAC.

    This design for security in the MetaFabric 1.0 solution meets the requirements for perimeter security, application security, and secure remote access.

    Network Management

    Network management is often reduced to its basic services: fault, configuration, accounting, performance, and security (FCAPS). In the virtualized IT data center, network management is more than a simple tool that facilitates FCAPS: it is an enabler to growth and innovation that provides end-to-end orchestration of all data center resources. Effective network management provides a single-pane view of the data center. This single-pane view enables visibility and mobility and enables the data center operator to monitor and change the environment across all data center tiers. Network management in the virtualized IT data center can be broken down into seven tiers (Figure 27).

    Figure 27: Seven Tier Model of Network Management


    It is the combination of these tiers that provides complete orchestration in the data center and enables operators to turn up new services quickly, and to change or troubleshoot existing services, using a single-pane view of the data center. The user interface is responsible for interacting with the data center operator; it is the interface from which the data center single-pane view is presented. From the user interface, an operator can view, modify, delete, or add network elements and new services. The user interface acts as a single role-based access control (RBAC) policy enforcement point, allowing an operator seamless access to all authorized devices while protecting other resources from unapproved access. The application programming interface (API) enables single-pane management by providing a common interface and language to other applications, support tools, and devices in the data center network (the REST API is a common example in network management). The API enables the single-pane view by abstracting all support elements and presenting them through a single network management interface, the user interface.

    The network management platform should have the capability to support specialized applications. Applications in the network management space are specifically designed to solve a specific problem in the management of the data center environment. A single application on the network management platform can be responsible for configuring and monitoring the security elements in the data center, while another application is designed to manage the physical and virtual switching components in the data center. Again, the abstraction of all of these applications into a single-pane view is essential to data center operations to ensure simplicity and a common management point in the data center.

    The next tier of data center network management is the global network view. Simply put, this is the tier where complete view of the data center and its resources can be assembled and viewed. This layer should support topology discovery, the automatic discovery of not only devices, but how those devices are interconnected to one another. The global network view should also support path computation (the link distance between network elements as well as the set of established paths between those network elements). The resource virtualization tier of network management enables management of the various endpoints in the data center and acts as an abstraction layer that allows the operator to manage endpoints that require different protocols such as OpenFlow or Device Management Interface (DMI).

    The common data services tier of network management enables the various applications and interfaces on the network management system to share relevant information between the layers. An application that manages a set of endpoints might require network topology details in order to map and potentially push changes to those network devices. This requires that the applications within the network management system share data; this is enabled by the common data services layer.

    Managed devices in the network management role are simply the endpoints that are managed by the network management system. These devices include physical and virtual switches, routers, VMs, blade servers, and security appliances, to name a few. The managed devices, and the orchestration of services between those devices, are the prime purpose of the network management system. Network management should answer the question, "How does a data center operator easily stand up and maintain services within the data center?" The network management system orchestrates the implementation and operation of the managed devices in the data center.

    Finally, integration adapters are required within a complete network management system. Because every device in the data center might not be manageable by a single network management system, other appliances or services might be required to manage the entire data center. The integration and coordination of these various network management tools is the purpose of this layer. Some data center elements, such as virtual machines, might require VMware ESXi server to manage the VMs and hypervisor switch, while another network management appliance monitors environmental and performance conditions on the host server. A third system might be responsible for configuring and monitoring the network connections between the blade servers and the rest of the data center. Integration adapters enable each of these components to talk to one another and, in many cases, allow a single network management system to control the entire network management footprint from a single pane of glass.

    Out-of-Band Management

    The requirements for out-of-band management include:

    • Administer the compute, network, and storage segments of the data center.
    • Separate the control plane from the data plane so the management network remains accessible even when the data plane is impaired.
    • Support 1-Gigabit Ethernet management interfaces.
    • Provide traffic separation across the compute, network, and storage segments.
    • Enable administrator access to the management network.
    • Deny management-to-management traffic.
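    The last two requirements could be expressed with a Junos firewall filter similar to the following sketch. The filter name, terms, and subnets are illustrative assumptions, not values from this design:

    ```
    /* Illustrative only: filter name and subnets are assumed, not from this guide. */
    firewall {
        family inet {
            filter MGMT-ISOLATION {
                /* Permit administrators to reach the management network. */
                term ALLOW-ADMIN {
                    from {
                        source-address {
                            10.10.0.0/24;    /* assumed administrator subnet */
                        }
                    }
                    then accept;
                }
                /* Block managed devices from talking to each other. */
                term DENY-MGMT-TO-MGMT {
                    from {
                        source-address {
                            10.20.0.0/24;    /* assumed managed-device subnet */
                        }
                        destination-address {
                            10.20.0.0/24;
                        }
                    }
                    then discard;
                }
                term ACCEPT-OTHER {
                    then accept;
                }
            }
        }
    }
    ```

    Applied to the Layer 3 interface facing the managed devices, a filter like this allows administrator-to-device traffic while discarding device-to-device traffic within the management subnet.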

    Some of the key elements of this design are seen in Figure 28.

    Figure 28: Out of Band Management Network Design


    To provide out-of-band management in the virtualized IT data center, this solution uses two pairs of EX4300 switches configured as a Virtual Chassis (Figure 29). The key connection and configuration steps include:

    • Connect all OOB network devices to the EX4300 Virtual Chassis (100-Megabit Fast Ethernet and 1-Gigabit Ethernet).
    • Configure the EX4300 Virtual Chassis OOB management system in OSPF area 2.
    • Connect the two IBM 3750 standalone servers that host the management VMs (vCenter, Junos Space, Network Director 1.5, domain controller, and Junos Pulse gateway) to the EX4300 Virtual Chassis.
    • Create four VLANs to separate storage, compute, network, and management traffic from one another.
    • Manage and monitor the VMs on the test bed using VMware vSphere and Network Director 1.5.
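    The VLAN and OSPF steps above might look roughly like the following Junos snippet on the EX4300 Virtual Chassis. The VLAN names, VLAN IDs, and IP addressing are assumptions for illustration only:

    ```
    /* Illustrative only: VLAN names, IDs, and addresses are assumed. */
    vlans {
        oob-storage { vlan-id 101; l3-interface irb.101; }
        oob-compute { vlan-id 102; l3-interface irb.102; }
        oob-network { vlan-id 103; l3-interface irb.103; }
        oob-mgmt    { vlan-id 104; l3-interface irb.104; }
    }
    interfaces {
        irb {
            unit 101 { family inet { address 172.16.101.1/24; } }
            unit 102 { family inet { address 172.16.102.1/24; } }
            unit 103 { family inet { address 172.16.103.1/24; } }
            unit 104 { family inet { address 172.16.104.1/24; } }
        }
    }
    protocols {
        ospf {
            area 0.0.0.2 {    /* OSPF area 2 for the OOB management system */
                interface irb.101;
                interface irb.102;
                interface irb.103;
                interface irb.104;
            }
        }
    }
    ```

    Each of the four traffic types (storage, compute, network, and management) gets its own VLAN and routed IRB interface, and the IRB interfaces are placed in OSPF area 2 so the OOB system is reachable from the rest of the management network.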

    Figure 29: Out of Band Management – Detail


    Network Director

    To provide network configuration and provisioning in the virtualized IT data center, this solution uses Juniper Networks Network Director. Network Director 1.5 manages network configuration, provisioning, and monitoring.

    Security Director

    To provide security policy configuration in the virtualized IT data center, this solution uses Juniper Networks Security Director. Security Director is used to manage security policy configuration and provisioning.

    This design meets the network management requirements of managing both virtual and physical components within the data center and handling the FCAPS (fault, configuration, accounting, performance, and security) considerations.

    Performance and Scale

    • The solution must support 20,000 virtual machines and scale up to 2,000 servers.
    • The solution must support a total of 30,000 users.
      • 10,000 Microsoft Exchange users
      • 10,000 Microsoft SharePoint user transactions
      • 10,000 MediaWiki user transactions
    • The solution must offer less than 3 microseconds (μs) of latency between servers and less than 21 μs of latency between PODs.
    • The solution must provide high availability.
      • Less than one second convergence*
      • No single point of failure

    Published: 2015-04-20