Introducing the Data Center Fabric Blueprint Architecture

Data Center Fabric Blueprint Architecture Introduction

This section introduces the Data Center Fabric Blueprint Architecture and includes the following sections.

Evolution of the Data Center Network

For certain problems, complexity can be shifted from one form to another but cannot be eliminated. Managing this complexity has long been a challenge for networks of any notable scale and diversity of purpose, and the data center network has been no exception.

In the 1990s, demands on the data center network were lower. A smaller number of large systems with predetermined roles dominated. Security was lax, so networks were mostly unsegmented. Storage was direct-attached. Large chassis-based network equipment was centrally placed. Constraints on data center power, space, and cooling required placing a system wherever those resources could be best used, while still connecting it to the network switch designated for that end system's business function. Structured cabling became the ticket to getting the most out of these systems.

When storage became disaggregated, the IP network and protocol stack were not ready for its demands, resulting in a parallel storage network built on Fibre Channel technology. This technology was built on the premise that the storage network must be lossless, something the best-effort IP/Ethernet stack could not provide. Structured cabling continued to dominate in this era.

As larger systems gave way to smaller systems, end-of-row network designs appeared. Small fixed-form-factor switches were not ready to take on enterprise data center workloads, so chassis-based switches continued to dominate. The need for both segmentation and freedom of end-system placement led to three-tier, multihop learning bridge network designs that used the brittle Spanning Tree Protocol (STP) to eliminate loops. The multihop learning bridge network gave operators the flexibility to place an end system at any location within the physical span of the bridge network. This reduced the dependence on structured cabling at the expense of network capacity and at the risk of catastrophic network meltdowns, both consequences of the limitations of STP.

In the last act, end systems became compact enough to fit 30 or more in a single rack, constrained mainly by power and cooling capacity. This rack density, combined with the emergence of data center grade 1RU switches, gave rise to the top-of-rack (ToR) design. The ToR switch replaced the passive patch panel, and the heyday of structured cabling came to an end. Yet the brittle three-tier learning bridge network design remained. The lack of control plane redundancy in ToR switches, the desire to use all available links, and the inability of the learning bridge network and STP to scale to the significant increase in network switches led to the addition of multichassis link aggregation (MC-LAG), which reduced the exposure of link failures to STP and gave STP one last act.

Finally, operating a second Fibre Channel network became too expensive as systems disaggregated. Attempts to force-fit Fibre Channel onto Ethernet ensued for a while, resulting in Data Center Bridging (DCB), an attempt to make Ethernet perform lossless forwarding. During this time, a second act for Ethernet came in the form of Transparent Interconnection of Lots of Links (TRILL), an Ethernet-on-Ethernet overlay protocol that uses IS-IS for route distribution, and its derivatives. These were a misguided attempt to perform hop-by-hop routing with Ethernet addresses that simply did not (and could not) go far enough. Both DCB and TRILL were evolutionary dead ends. In this chaos, both Ethernet VPN (EVPN) and software-defined networking (SDN) were born.

For a variety of reasons, these past architectures and technologies were often limited to serving a specific use case. Multiple use cases often required multiple networks connecting different end systems. This lack of agility in compute, storage, and network resources led to cloud technologies such as Kubernetes and EVPN. Here, “cloud” is defined as infrastructure that frees the operator to implement any use case on the same physical infrastructure, on demand, and without any physical changes.

In the present generation of “cloud,” workloads present themselves to the network as virtual machines or containers that are most often free to move between physical computers. Storage is fully IP-based and, in many cases, highly distributed. The endpoint scale and dynamism of the cloud were the straw that broke the back of the learning bridge network. Meanwhile, Fibre Channel is on its deathbed.

The Next Act for Data Center Networks

For truly cloud-native workloads that have no dependency on Ethernet broadcast, multicast, segmentation, multitenancy, or workload mobility, the best solution is typically a simple IP fabric network. When a unique workload instance requires mobility, the current host system can advertise the unique IP address of the workload. This can be performed with EBGP route exchange between the host system and the ToR switch, as illustrated in the sketch below. However, support for broadcast, unknown unicast, and multicast (BUM) traffic and for multitenancy requires more advanced network functions. This is where overlays are added.
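The following sketch shows one way a host could advertise a workload's /32 host route over EBGP to its ToR switch. It assumes ExaBGP (or a similar BGP speaker) runs on the host, peers with the ToR, and reads API commands from this helper process's standard output; the prefix and the arrival and departure events are hypothetical placeholders, not part of the blueprint itself.

  # Minimal sketch of a host-side helper for ExaBGP's process API. ExaBGP
  # (assumed to be installed on the host and peered over EBGP with the ToR)
  # reads API commands from this script's stdout. The prefix and the
  # arrival/departure events below are hypothetical placeholders.
  import sys
  import time

  def announce(prefix):
      # Ask the BGP speaker to advertise the workload's host route to the ToR.
      sys.stdout.write("announce route {} next-hop self\n".format(prefix))
      sys.stdout.flush()

  def withdraw(prefix):
      # Ask the BGP speaker to withdraw the host route when the workload leaves.
      sys.stdout.write("withdraw route {} next-hop self\n".format(prefix))
      sys.stdout.flush()

  if __name__ == "__main__":
      workload = "192.0.2.10/32"   # hypothetical workload address
      announce(workload)           # workload has landed on this host
      time.sleep(3600)             # ... workload runs here ...
      withdraw(workload)           # workload has migrated to another host

When the workload moves, the new host announces the same /32 and the old host withdraws it, so the IP fabric always routes traffic to the workload's current location without any Layer 2 extension.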

Over its evolution, the data center network has been a function of the demands and expectations placed on it. As the nature of workloads changed, the network had to adapt. Each solution simplified a set of problems by trading one form of complexity and cost for another. The cloud network is no different. In the end, bits must be moved from point A to point B reliably, securely, and at the desired throughput. Where operators need a single network to serve more than one purpose (the multiservice cloud network), they can add network-layer segmentation and other functions to share the infrastructure across diverse groups of endpoints and tenants. Operational simplicity is achieved with a centralized controller that implements an intent model consistent with the cloud-scale functions of the network layer. Technical simplicity is achieved with a reduced set of building blocks that are based on open standards and are homogeneous across the end-to-end network.
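As a purely illustrative example of what such an intent model might look like, the following sketch maps a hypothetical tenant onto the kinds of overlay constructs described later in this guide. All names, identifiers, and fields are invented for illustration and do not represent any controller product or API.

  # Toy "intent" records for a multiservice fabric. A real controller would
  # translate intent like this into underlay and overlay configuration; here
  # we only print the mapping to show the shape of the model. All values are
  # hypothetical.
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class SegmentIntent:
      name: str        # human-readable segment name
      vlan_id: int     # Layer 2 segment identifier
      vni: int         # VXLAN network identifier used in the overlay
      subnet: str      # IP prefix served by the segment

  @dataclass
  class TenantIntent:
      name: str                                  # tenant name
      vrf: str                                   # routing instance for the tenant
      segments: List[SegmentIntent] = field(default_factory=list)

  def render(tenant):
      # Print the tenant-to-segment mapping that a controller would push
      # into the fabric as overlay configuration.
      for seg in tenant.segments:
          print("{}/{}: {} vlan {} vni {} subnet {}".format(
              tenant.name, tenant.vrf, seg.name, seg.vlan_id, seg.vni, seg.subnet))

  if __name__ == "__main__":
      payments = TenantIntent("payments", "VRF-PAYMENTS",
                              [SegmentIntent("web-tier", 110, 10110, "10.1.10.0/24")])
      render(payments)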

This guide introduces a building block approach to creating multiservice cloud networks on the foundation of a modern IP fabric. The Juniper Networks solutions team will systematically review the set of building blocks required for an agile network, focus on specific, state-of-the-art, open standards-based technology that enables each function, and add new functionality to the guide as it becomes available in future software releases.

All of the building blocks are designed to work together: you can combine any building block with any other to satisfy a diverse set of use cases simultaneously, which is the hallmark of the cloud. Consider how you can use the building blocks in this guide to achieve the use cases that are relevant to your network.

Building Blocks

The guide organizes the technologies used to build multiservice cloud network architectures into modular building blocks. Each building block includes features that either must be implemented together to build the network, are often implemented together because they provide complementary functionality, or are presented together to provide choices for particular technologies.

This blueprint architecture includes required building blocks and optional building blocks. The optional building blocks can be added or removed to support the needs of a specific multiservice data center fabric network.

This guide walks you through the design and technology choices associated with each building block, and provides information to help you choose the building blocks that best meet the needs of your multiservice data center fabric network. The guide also provides the implementation procedures for each building block.

The currently supported building blocks include:

  • IP Fabric Underlay Network

  • Network Virtualization Overlays

    • Centrally-Routed Bridging Overlay

    • Edge-Routed Bridging Overlay

    • Routed Overlay

  • Multihoming

    • Multihoming of Ethernet-connected End Systems

    • Multihoming of IP-connected End Systems

  • Data Center Interconnect (DCI)

  • Service Chaining

  • Multicast

  • Ingress Virtual Machine Traffic Optimization

  • DHCP Relay

  • Proxy ARP

  • Layer 2 Port Security

Additional building blocks will be added to this guide as support for the technology becomes available and is validated by the Juniper Networks testing team.

Planned building blocks include:

  • Network Virtualization Overlays

    • Optimized Overlay Replication

    • Overlay Border Gateways

  • Network Access Control

  • Differentiated Services

  • Timing Distribution

  • Network Hardening

Each building block is discussed in more detail in Data Center Fabric Blueprint Architecture Components.