
    Network

    The network is often the main focus of the data center because it carries traffic to, from, and between the application servers hosted there. Given the criticality of this architectural role, and the various tiers within the data center switching block, the network role is further broken up into access switching, aggregation switching, core switching, edge routing, and WAN connectivity. Each segment within the data center switching role has unique design considerations that relate back to business criticality, SLA requirements, redundancy, and performance. These switching roles must be carefully designed so that your data center equipment purchases maximize network scale and performance while minimizing costs.

    Access and Aggregation

    The access layer consists of physical switches that connect to servers and end hosts. Access switching typically focuses on Layer 2 switching, but can include Layer 3 components (such as IRB) to support more robust VM mobility. Access switching should also support high availability. In a multichassis or Virtual Chassis environment, where multiple physical switches are combined to form a single, logical switch, redundancy can be achieved at the access layer. This type of switch architecture provides control plane redundancy, MC-LAG, and the ability to upgrade individual switches while they are in service. Additionally, the access switching role should support converged storage traffic, that is, the ability to carry storage traffic over Ethernet using iSCSI and Fibre Channel over Ethernet (FCoE). The access switching role should also support Data Center Bridging (DCB), including priority-based flow control (PFC), enhanced transmission selection (ETS), and Data Center Bridging Capability Exchange (DCBX), because these features allow storage traffic to pass losslessly between all servers and storage devices within a data center segment.
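    As an illustration of these storage requirements, the following sketch shows a minimal lossless Ethernet (DCB) configuration in the style used on QFX Series access switches. It is not taken from this solution's configuration; the interface name, profile names, and bandwidth percentage are placeholders.

        # PFC for priority 3 (IEEE 802.1p code point 011), the priority typically used for FCoE traffic
        set class-of-service congestion-notification-profile FCOE-CNP input ieee-802.1 code-point 011 pfc
        # ETS: guarantee a share of link bandwidth to the lossless forwarding-class set
        set class-of-service forwarding-classes class fcoe queue-num 3
        set class-of-service forwarding-class-sets FCOE-FC-SET class fcoe
        set class-of-service schedulers FCOE-SCHED transmit-rate percent 100
        set class-of-service scheduler-maps FCOE-MAP forwarding-class fcoe scheduler FCOE-SCHED
        set class-of-service traffic-control-profiles FCOE-TCP scheduler-map FCOE-MAP
        set class-of-service traffic-control-profiles FCOE-TCP guaranteed-rate percent 50
        # Apply PFC and ETS to the server-facing interface
        set class-of-service interfaces xe-0/0/10 congestion-notification-profile FCOE-CNP
        set class-of-service interfaces xe-0/0/10 forwarding-class-set FCOE-FC-SET output-traffic-control-profile FCOE-TCP
        # DCBX (carried over LLDP) advertises the PFC and ETS settings to the server CNA
        set protocols lldp interface xe-0/0/10
        set protocols dcbx interface xe-0/0/10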

    The aggregation switch acts as a multiplexing point between the access layer and the core of the data center. The aggregation architectural role combines a large number of smaller interfaces from the access layer into high-bandwidth trunk ports that can be more easily consumed by the core switch. Redundancy should be a priority in the design of the aggregation role because all Layer 2 flows between the access layer and the core switch are combined and forwarded by the aggregation switches. At this layer, a switching architecture that combines multiple switches into a single, logical system with control plane and forwarding plane redundancy is recommended. This switching architecture provides redundancy features such as MC-LAG, loop-free redundant paths, and in-service software upgrades, enabling data center administrators to consistently meet and exceed SLAs.
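    As a point of reference, a LAG trunk of the kind described here might be configured as in the following sketch. This is a generic illustration rather than the validated configuration of this solution; the member interfaces, LAG number, and VLAN names are placeholders, and pre-ELS switch syntax is assumed.

        # Enable aggregated Ethernet interfaces and bundle two access-facing uplinks (placeholder names)
        set chassis aggregated-devices ethernet device-count 8
        set interfaces xe-0/0/46 ether-options 802.3ad ae0
        set interfaces xe-0/0/47 ether-options 802.3ad ae0
        # Run LACP on the bundle and carry the access VLANs as a trunk toward the core
        set interfaces ae0 aggregated-ether-options lacp active
        set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
        set interfaces ae0 unit 0 family ethernet-switching vlan members v100
        set interfaces ae0 unit 0 family ethernet-switching vlan members v200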

    One recommendation is to combine the access and aggregation layers of your network by using a QFabric system. Not only does a QFabric system offer a single point of provisioning, management, and troubleshooting for the network operator, it also collapses switching tiers for any-to-any connectivity, provides lower latency, and enables all access devices to be only one hop away from one another, as shown in Figure 1.

    Figure 1: Juniper Networks QFabric Systems Enable a Flat Data Center Network


    To implement the access and aggregation switching portions of the virtualized IT data center, this solution uses the QFX3000-M QFabric system. There are two QFabric systems (POD1 and POD2) in this solution to provide performance and scale. The QFabric PODs support 768 ports per POD and feature low port-to-port latency, a single point of management per POD, and lossless Ethernet to support storage traffic. The use of predefined POD configurations enables the enterprise to more effectively plan data center rollouts by offering predictable growth and scale in the solution architecture. Key configuration steps include:

    • Configure the QFX3000-M QFabric systems with 3 redundant server Node groups (RSNGs) connected to 2 IBM Flex System blade servers to deliver application traffic.
      • The first IBM Flex System server uses a 40-Gigabit Ethernet converged network adapter (CNA) connected to a QFabric system RSNG containing QFX3600 Node devices (RSNG4).
      • The second IBM Flex System server has 10-Gigabit Ethernet pass through modules connected to RSNG2 and RSNG3 on the second QFabric system.
    • Connect the EMC VNX storage platform to the QFabric systems for storage access using iSCSI and NFS.
    • Connect the QFabric systems to the EX9214 core switches by way of a network Node group containing two Node devices that use four 24-port LAGs configured as trunk ports.
    • Configure OSPF in the PODs (within the QFabric system network Node group) towards the EX9214 core switch, and place these connections in Area 10 as a totally stubby area (a brief OSPF configuration sketch follows this list).
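    The OSPF portion of these steps can be sketched as follows. Only the area structure reflects the design described above (Area 10 as a totally stubby area with the EX9214 switches acting as ABRs); the interface names and metric are placeholders.

        # On the QFabric network Node group (POD side): place the uplinks to the core in stub Area 10
        set protocols ospf area 0.0.0.10 stub
        set protocols ospf area 0.0.0.10 interface ae10.0

        # On the EX9214 ABR: make Area 10 totally stubby and originate the default route into it
        set protocols ospf area 0.0.0.10 stub default-metric 10
        set protocols ospf area 0.0.0.10 stub no-summaries
        set protocols ospf area 0.0.0.10 interface ae10.0
        set protocols ospf area 0.0.0.0 interface ae20.0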

    Core Switching

    The core switch is often configured as a Layer 3 device that handles routing between various Layer 2 domains in the data center. A robust implementation of the core switch in the virtualized IT data center supports both Layer 2 and Layer 3 to enable a full range of interoperability and service provisioning in a multitenant environment. Much like the edge role, redundancy in the core switching role is critical because the core is another point through which all traffic between the customer and the application must pass. A properly designed data center includes a fully redundant core switch layer that supports a wide range of interfaces (1-Gigabit, 10-Gigabit, 40-Gigabit, and 100-Gigabit Ethernet) at high density. Port density in the core switching role is a critical factor because the data center core should be designed to support future expansion without requiring new hardware (beyond line cards and interface adapters). The core switch role should also support a wide array of SLA statistics collection, and should be service-aware to support collection of service-chaining statistics. The general location of the core switching function in this solution is shown in Figure 2.

    Figure 2: Core Switching


    Table 1 shows some of the reasons for choosing an EX9200 switch over an EX8200 switch to provide core switching capabilities in this solution. The EX9200 switch provides a significantly larger number of 10-Gigabit Ethernet ports, support for 40-Gigabit Ethernet ports, higher limits for analyzer sessions, firewall filters, and BFD sessions, and critical support for in-service software upgrade (ISSU) and MC-LAG. These advantages make the EX9200 switch the better choice for this solution.

    Table 1: Core Switch Hardware - Comparison of the EX9200 and EX8200 Switches

    Solution Requirement    EX8200       EX9200    Delta
    Line-rate 10G           128          240       +88%
    40G                     No           Yes       -
    Analyzer Sessions       7            64        +815%
    ACLs                    54K          256K      +375%
    BFD                     175          900       +415%
    ISSU                    No (NSSU)    Yes       -
    MC-LAG                  No           Yes       -

    Table 2 shows some of the reasons for choosing MC-LAG as the forwarding technology over Virtual Chassis in this solution. MC-LAG provides dual control planes, a non-disruptive implementation, support for LACP, state replication across peers, and support for ISSU without requiring dual Routing Engines.

    Table 2: Core Switch Forwarding - Comparison of MC-LAG and Virtual Chassis

    Attribute                       Virtual Chassis    MC-LAG
    Control Planes                  1                  2
    Centralized Management          Yes                No
    Maximum Chassis                 2                  2
    Implementation                  Disruptive         Non-disruptive
    Require IEEE 802.3ad (LACP)     No                 Yes
    State Replication               Kernel             ICCP
    Require Dual Routing Engines    Yes                No
    ISSU                            No                 Yes

    To implement the core switching portion of the virtualized IT data center, this solution uses two EX9214 switches with the following capabilities and configuration:

    • Key features—240 Gbps line rate per slot for 10-Gigabit Ethernet, support for 40-Gigabit Ethernet ports, 64 analyzer sessions, scalable to 256,000 firewall filters, and support for bidirectional forwarding detection (BFD), in-service software upgrade (ISSU), and MC-LAG groups.
    • Key configuration steps (Figure 3):
      • Configure Layer 2 MC-LAG active/active on the EX9214 towards the QFabric PODs, the F5 load balancer, and the MX240 edge router (by way of the redundant Ethernet link provided by the SRX3600 edge firewall) to provide path redundancy.
      • Configure IRB and VRRP for all MC-LAG links for high availability (a configuration sketch covering the MC-LAG and IRB/VRRP steps follows Figure 3).
      • Configure IRB on the EX9214 and the QFabric PODs to terminate the Layer 2/Layer 3 boundary.
      • Configure a static route on the core switches to direct traffic from the Internet to the load balancers.
      • Configure OSPF to advertise a default route to the totally stubby areas in the QFabric PODs. Each QFabric POD has its own OSPF area. Also, configure the EX9214 core switches as area border routers (ABRs) that connect all three OSPF areas, and place the aggregated link ae20 between the two core switches in backbone area 0.

        Figure 3: Core Switching Design

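    The MC-LAG and IRB/VRRP steps can be sketched as follows for one of the two EX9214 switches. This is an illustrative outline rather than the validated configuration: the ICCP addresses, aggregated Ethernet numbers, VLAN, and IP addresses are placeholders, and the second core switch would mirror this configuration with chassis-id 1 and status-control standby.

        # ICCP session and protection link between the two core switches (placeholder addresses)
        set protocols iccp local-ip-addr 10.0.0.1
        set protocols iccp peer 10.0.0.2 redundancy-group-id-list 1
        set protocols iccp peer 10.0.0.2 liveness-detection minimum-interval 1000
        set multi-chassis multi-chassis-protection 10.0.0.2 interface ae20
        set switch-options service-id 1

        # One MC-LAG bundle toward a QFabric POD, running active/active
        set interfaces ae10 aggregated-ether-options lacp active
        set interfaces ae10 aggregated-ether-options lacp system-id 00:01:02:03:04:05
        set interfaces ae10 aggregated-ether-options lacp admin-key 10
        set interfaces ae10 aggregated-ether-options mc-ae mc-ae-id 10
        set interfaces ae10 aggregated-ether-options mc-ae redundancy-group 1
        set interfaces ae10 aggregated-ether-options mc-ae chassis-id 0
        set interfaces ae10 aggregated-ether-options mc-ae mode active-active
        set interfaces ae10 aggregated-ether-options mc-ae status-control active
        set interfaces ae10 unit 0 family ethernet-switching interface-mode trunk
        set interfaces ae10 unit 0 family ethernet-switching vlan members V100

        # IRB interface with VRRP terminates the Layer 2/Layer 3 boundary for the VLAN
        set vlans V100 vlan-id 100
        set vlans V100 l3-interface irb.100
        set interfaces irb unit 100 family inet address 10.1.100.2/24 vrrp-group 100 virtual-address 10.1.100.1
        set interfaces irb unit 100 family inet address 10.1.100.2/24 vrrp-group 100 priority 200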

    Edge Routing and WAN

    Edge Routing

    The edge is the point in the network that aggregates all customer and Internet connections into and out of the data center. Although high availability and redundancy are important considerations throughout the data center, they are most vital at the edge; the edge serves as a choke point for all data center traffic, and a loss at this layer renders the data center out of service. At the edge, full hardware redundancy should be implemented using platforms that support control plane and forwarding plane redundancy, link aggregation, MC-LAG, redundant uplinks, and the ability to upgrade the software and platform while the data center is in service. This architectural role should support a full range of protocols to ensure that the data center can accommodate any interconnect type that might be offered. Edge routers in the data center require support for IPv4 and IPv6, as well as ISO and MPLS protocols. Because the data center might be multitenant, the widest array of routing protocols should also be supported, including static routing, RIP, OSPF, OSPF-TE, OSPFv3, IS-IS, and BGP. With large-scale multitenant environments in mind, it is important to support Virtual Private LAN Service (VPLS), together with bridge domains, overlapping VLAN IDs, integrated routing and bridging (IRB), and IEEE 802.1ad Q-in-Q VLAN tunneling. The edge should support a complete set of MPLS VPNs, including L3VPN, L2VPN (RFC 4905 and RFC 6624, the Martini and Kompella approaches, respectively), and VPLS.
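    As a small illustration of the bridge domain, overlapping VLAN ID, and IRB requirements, the following MX-style sketch keeps two tenants that both use VLAN ID 100 in separate bridge domains. All names, interfaces, and addresses are hypothetical and are not part of this solution's configuration.

        # Two tenants reuse VLAN ID 100 on different physical ports; separate bridge domains keep them isolated
        set interfaces ge-1/0/0 flexible-vlan-tagging
        set interfaces ge-1/0/0 encapsulation flexible-ethernet-services
        set interfaces ge-1/0/0 unit 100 encapsulation vlan-bridge
        set interfaces ge-1/0/0 unit 100 vlan-id 100
        set interfaces ge-1/0/1 flexible-vlan-tagging
        set interfaces ge-1/0/1 encapsulation flexible-ethernet-services
        set interfaces ge-1/0/1 unit 100 encapsulation vlan-bridge
        set interfaces ge-1/0/1 unit 100 vlan-id 100

        set bridge-domains TENANT-A vlan-id 100
        set bridge-domains TENANT-A interface ge-1/0/0.100
        set bridge-domains TENANT-A routing-interface irb.100
        set bridge-domains TENANT-B vlan-id 100
        set bridge-domains TENANT-B interface ge-1/0/1.100
        set bridge-domains TENANT-B routing-interface irb.200

        # Each tenant gets its own IRB interface as the Layer 3 gateway
        set interfaces irb unit 100 family inet address 10.10.1.1/24
        set interfaces irb unit 200 family inet address 10.20.1.1/24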

    Network Address Translation (NAT) is another factor to consider when designing the data center edge. Multiple customers serviced by the data center are likely to have overlapping private network address schemes. In environments where direct Internet access to the data center is enabled, NAT is required to translate between the routable, public IP addresses seen on the Internet and the private IP addressing used in the data center. The edge must support Basic NAT44, NAPT44, NAPT66, Twice NAT44, and NAPT-PT.
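    A NAPT44 configuration of the kind implied here might look like the following sketch for an MX Series services interface. The pool prefix, internal address range, rule and service-set names, and service interface are placeholders, and the exact hierarchy depends on the services hardware and Junos release in use.

        # Placeholder public pool with port translation (NAPT44)
        set services nat pool PUBLIC-POOL address 203.0.113.0/28
        set services nat pool PUBLIC-POOL port automatic
        # Translate the data center private range as it heads toward the Internet
        set services nat rule PRIVATE-TO-PUBLIC match-direction input
        set services nat rule PRIVATE-TO-PUBLIC term T1 from source-address 10.0.0.0/8
        set services nat rule PRIVATE-TO-PUBLIC term T1 then translated source-pool PUBLIC-POOL
        set services nat rule PRIVATE-TO-PUBLIC term T1 then translated translation-type napt-44
        # Bind the rule to a service set on the services interface
        set services service-set NAT-SS nat-rules PRIVATE-TO-PUBLIC
        set services service-set NAT-SS interface-service service-interface ms-0/0/0.0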

    Finally, because the edge is the ingress and egress point of the data center, the implementation should support robust data collection so that administrators can verify and prove strict service-level agreements (SLAs) with their customers. The edge layer should support collection of traffic flows and statistics and, at a minimum, should be able to report the exact number of bytes and packets that were received, transmitted, queued, lost, or dropped, per application. Figure 4 shows the location of the edge routing function in this solution.

    Figure 4: Edge Routing


    WAN

    The WAN role provides transport between end users, enterprise remote sites, and the data center. Several different WAN topologies can be used, depending on the business requirements of the data center. A data center can connect directly to the Internet, using simple IP-based access to servers in the data center or a secure tunneled approach based on generic routing encapsulation (GRE) or IP Security (IPsec). Many data centers serve a wide base of customers and favor Multiprotocol Label Switching (MPLS) interconnection, allowing customers to connect directly into the data center over the service provider's managed MPLS backbone. Another approach is direct peering between customers and the data center, which bypasses transit peering links by establishing a direct connection (for example, a private leased line) into the data center. Depending on the requirements of the business and the performance needs of the hosted applications, the choice of WAN interconnection is the first factor in determining the performance and security of data center applications. Private peering or an MPLS interconnect offers improved security and performance at a higher cost. Where hosted applications are less sensitive to security and performance, or where the application protocols offer built-in security, a simple Internet-connected data center can provide an appropriate level of security and performance at a lower cost.

    To implement the edge routing and WAN portions of the virtualized IT data center, this solution uses MX240 Universal Edge routers. Because the MX240 router offers dual Routing Engines and ISSU at a reasonable price point, it is the preferred option over the smaller MX80 router. The key connection and configuration steps are:

    • Connect the MX240 edge routers to the service provider networks to provide Internet access to the data center.
    • Configure the two edge routers as EBGP peers with two service providers to provide redundant Internet connections.
    • Configure IBGP between the two edge routers and apply a next-hop-self export policy.
    • Configure BGP local preference for routes learned from the primary service provider to establish a preferred exit point to the Internet.
    • Export a dynamic, condition-based default route to the Internet into OSPF on both edge routers toward the edge firewalls and core switches to provide Internet access for the virtualized IT data center devices (Figure 5; a configuration sketch for these BGP and OSPF steps follows the figure).
    • Configure both edge routers in Area 1 for OSPF.
    • Enable Network Address Translation (NAT) to convert private IP addresses into public IP addresses.

    Figure 5: Edge Routing Design

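    The BGP and default-route steps can be sketched as follows for one edge router. Peer addresses, AS numbers, and policy names are placeholders, and the condition-based default is modeled here as a generated route that exists only while at least one BGP-learned route is present, which is one common way to build such a default; the validated solution may use a different mechanism.

        # EBGP to the two providers; a higher local preference steers outbound traffic to the primary exit
        set protocols bgp group ISP-PRIMARY type external
        set protocols bgp group ISP-PRIMARY peer-as 64501
        set protocols bgp group ISP-PRIMARY neighbor 198.51.100.1
        set protocols bgp group ISP-PRIMARY import PREFER-PRIMARY
        set protocols bgp group ISP-SECONDARY type external
        set protocols bgp group ISP-SECONDARY peer-as 64502
        set protocols bgp group ISP-SECONDARY neighbor 203.0.113.1
        set policy-options policy-statement PREFER-PRIMARY then local-preference 200

        # IBGP between the two edge routers with next-hop self applied on export
        set protocols bgp group IBGP type internal
        set protocols bgp group IBGP local-address 10.255.0.1
        set protocols bgp group IBGP neighbor 10.255.0.2
        set protocols bgp group IBGP export NEXT-HOP-SELF
        set policy-options policy-statement NEXT-HOP-SELF term BGP from protocol bgp
        set policy-options policy-statement NEXT-HOP-SELF term BGP then next-hop self

        # Generate a default route only while BGP routes are present, then export it into OSPF Area 1
        set routing-options generate route 0.0.0.0/0 policy DEFAULT-IF-UPSTREAM
        set policy-options policy-statement DEFAULT-IF-UPSTREAM term T1 from protocol bgp
        set policy-options policy-statement DEFAULT-IF-UPSTREAM term T1 then accept
        set policy-options policy-statement EXPORT-DEFAULT term T1 from protocol aggregate
        set policy-options policy-statement EXPORT-DEFAULT term T1 from route-filter 0.0.0.0/0 exact
        set policy-options policy-statement EXPORT-DEFAULT term T1 then accept
        set protocols ospf export EXPORT-DEFAULT
        set protocols ospf area 0.0.0.1 interface ae0.0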

    This design for the network segment of the data center meets the requirements of this solution for 1-Gigabit, 10-Gigabit, and 40-Gigabit Ethernet ports, converged data and storage, load balancing, quality of experience, network segmentation, traffic isolation and separation, and time synchronization.

    Published: 2015-04-20