
Data Center Fabric Reference Design Overview and Validated Topology

 

This section provides a high-level overview of the Data Center Fabric reference design topology and summarizes the topologies that were tested and validated by the Juniper Networks Test Team.

It includes the following sections:

  • Reference Design Overview

  • Hardware Summary

  • Interfaces Summary

Reference Design Overview

The Data Center Fabric reference design tested by Juniper Networks is based on an IP Fabric underlay in a Clos topology built from spine and leaf devices; Table 1 in the Hardware Summary section lists the hardware that was validated in each role.

Each leaf device connects to each spine device using either an aggregated Ethernet interface with two high-speed Ethernet interfaces (10-Gbps, 40-Gbps, or 100-Gbps) as LAG members or a single high-speed Ethernet interface.
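For example, a two-member aggregated Ethernet uplink from a leaf device to a spine device could look like the following leaf-side snippet. This is a minimal sketch only; the member interface names, the AE number, the device count, and the point-to-point addressing are illustrative and are not taken from the validated configuration.

  # Reserve aggregated Ethernet interfaces on the device (count is illustrative)
  set chassis aggregated-devices ethernet device-count 10
  # Two high-speed member links toward one spine device
  set interfaces et-0/0/48 ether-options 802.3ad ae1
  set interfaces et-0/0/49 ether-options 802.3ad ae1
  # LACP on the bundle and a point-to-point underlay address
  set interfaces ae1 aggregated-ether-options lacp active
  set interfaces ae1 aggregated-ether-options lacp periodic fast
  set interfaces ae1 unit 0 family inet address 172.16.1.1/31

The spine side would mirror this configuration, using the other address in the /31 subnet.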

Figure 1 provides an illustration of the topology used in this reference design:

Figure 1: Data Center Fabric Reference Design - Topology

End systems such as servers connect to the data center network through leaf device interfaces. Each end system was multihomed to three leaf devices using a 3-member aggregated Ethernet interface as shown in Figure 2.

Figure 2: Data Center Fabric Reference Design - Multihoming

End systems are multihomed to three different leaf devices to verify that multihoming an end system to more than two leaf devices is fully supported.
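In an EVPN fabric, multihoming an end system to multiple leaf devices is commonly implemented as an ESI-LAG: each of the three leaf devices configures the same Ethernet segment identifier (ESI) and the same LACP system ID on its server-facing aggregated Ethernet interface, so the end system sees a single LAG. The following leaf-side sketch shows the general shape only; the interface name, AE number, ESI value, LACP system ID, and VLAN are illustrative and are not taken from the validated configuration.

  # Server-facing member link and ESI-LAG (repeated identically on all three leaf devices)
  set interfaces xe-0/0/10 ether-options 802.3ad ae11
  set interfaces ae11 esi 00:11:11:11:11:11:11:11:11:01
  set interfaces ae11 esi all-active
  set interfaces ae11 aggregated-ether-options lacp active
  set interfaces ae11 aggregated-ether-options lacp system-id 00:00:00:11:11:01
  set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
  set interfaces ae11 unit 0 family ethernet-switching vlan members v101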

Hardware Summary

Table 1 summarizes the hardware and associated software that you can use to create this reference design.

Note
  • For this reference design, we support the hardware listed in Table 1 only with the associated Junos OS software releases.

  • To learn about any existing issues and limitations for a hardware device in this reference design, see the release notes for the Junos OS software release with which the device was tested.

Table 1: Data Center Fabric Reference Design Hardware and Software Summary

Device Roles | Hardware | Junos OS Software Releases [1]

Centrally-Routed Bridging Overlay

Spine | QFX10002-36Q/72Q, QFX10008, QFX10016 | 17.3R3-S1
Spine | QFX10002-60C, QFX5120-32C | 19.1R2
Leaf | QFX5100, QFX5110, QFX5200, QFX10002-36Q/72Q | 17.3R3-S1
Leaf | QFX5120-48Y | 18.4R2
Leaf | QFX5120-32C | 19.1R2

Edge-Routed Bridging Overlay

Lean spine | QFX10002-36Q/72Q, QFX10008, QFX10016, QFX5200 | 17.3R3-S1
Lean spine | QFX5110, QFX5210-64C | 18.1R3-S3
Lean spine | QFX10002-60C, QFX5120-32C | 19.1R2
Leaf | QFX10002-36Q/72Q, QFX10008, QFX10016 | 17.3R3-S1
Leaf | QFX10002-60C | 19.1R2
Leaf | QFX5110 | 18.1R3-S3
Leaf | QFX5120-48Y | 18.4R2
Leaf | QFX5120-32C | 19.1R2

Data Center Interconnect (DCI) (using EVPN Type 5 routes and IPVPN), Service Chaining

Border spine | QFX10002-36Q/72Q, QFX10008, QFX10016 | 17.3R3-S1
Border spine | QFX10002-60C [2] | 19.1R2
Border spine | QFX5120-48Y [3] | 18.4R2-S4
Border spine | QFX5120-32C [3] | 19.1R3
Border leaf | QFX10002-36Q/72Q, QFX10008, QFX10016 | 17.3R3-S1
Border leaf | MX204; MX240, MX480, and MX960 with MPC7E; MX10003 [2] | 18.4R2-S2
Border leaf | QFX10002-60C [2] | 19.1R2
Border leaf | QFX5120-48Y [3] | 18.4R2-S4
Border leaf | QFX5120-32C [3] | 19.1R3

[1] This column includes the initial Junos OS release train with which we introduce support for the hardware in the reference design. For each initial Junos OS release train, we also support the hardware with later releases in the same release train.

[2] While functioning in this role, this hardware does not support multicast traffic.

[3] While functioning in this role, this hardware does not support DCI or multicast traffic.

This table does not include backbone devices that connect the data center to a WAN cloud. Backbone devices provide physical connectivity between data centers and are required for DCI. See Data Center Interconnect Design and Implementation.

Interfaces Summary

This section summarizes the interface connections between spine and leaf devices that were validated in this reference design.

It contains the following sections:

  • Interfaces Overview

  • Spine Device Interface Summary

  • Leaf Device Interface Summary

Interfaces Overview

In the validated reference design, spine and leaf devices are interconnected using either an aggregated Ethernet interface that includes two high-speed Ethernet interfaces or a single high-speed Ethernet interface.

The reference design was validated with the following combinations of spine and leaf device interconnections:

  • QFX10002, QFX10008, or QFX10016 switches as spine devices and QFX5100, QFX5110, QFX5200, and QFX10002 switches as leaf devices.

    All 10-Gbps, 40-Gbps, or 100-Gbps interfaces on the supported platforms were used to interconnect a spine and leaf device.

  • Combinations of aggregated Ethernet interfaces containing two 10-Gbps, 40-Gbps, or 100-Gbps member interfaces between the supported platforms were validated.

  • Channelized 10-Gbps, 40-Gbps, or 100-Gbps interfaces used to interconnect spine and leaf devices as single links or as member links in a 2-member aggregated Ethernet bundle were also validated.
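For instance, on many QFX platforms a 40-Gbps port can be channelized into four 10-Gbps interfaces, and the resulting channels can then be used either as single links or as members of a two-member aggregated Ethernet bundle. A minimal sketch follows; the port numbers and addressing are illustrative, and the exact channelization statements vary by platform.

  # Channelize port 0 on FPC 0 / PIC 0 into four 10-Gbps interfaces (xe-0/0/0:0 through xe-0/0/0:3)
  set chassis fpc 0 pic 0 port 0 channel-speed 10g

  # Use two of the resulting channels as members of a 2-member aggregated Ethernet bundle
  set interfaces xe-0/0/0:0 ether-options 802.3ad ae2
  set interfaces xe-0/0/0:1 ether-options 802.3ad ae2
  set interfaces ae2 aggregated-ether-options lacp active
  set interfaces ae2 unit 0 family inet address 172.16.2.1/31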

Spine Device Interface Summary

As previously stated, the validated design includes up to four spine devices, and each leaf device is interconnected with each spine device by one or two high-speed Ethernet interfaces.

QFX10008 and QFX10016 switches were used as they can achieve the port density necessary for this reference design. See QFX10008 Hardware Overview or QFX10016 Hardware Overview for information on supported line cards and the number of high-speed Ethernet interfaces supported on these switches.

QFX10002-36Q/72Q, QFX10002-60C, and QFX5120-32C switches, however, do not have the port density to support this reference design at larger scales but can be deployed as spine devices in smaller-scale environments. See QFX10002 Hardware Overview for information about the number of high-speed Ethernet interfaces supported on QFX10002-36Q/72Q and QFX10002-60C switches, and QFX5120 System Overview for the same information about QFX5120-32C switches.

All channelized spine device interface options are tested and supported in the validated reference design.
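As a quick sanity check on either end of a spine-to-leaf link, operational mode commands such as the following can confirm that channelized interfaces were created and that LACP has bundled the expected members; the interface names here are illustrative.

  show chassis fpc pic-status
  show interfaces terse | match "ae1|xe-0/0/0"
  show lacp interfaces ae1
  show interfaces ae1 extensive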

Leaf Device Interface Summary

Each leaf device in the reference design connects to all four spine devices and has the port density required to fill this role.

The number and types of high-speed Ethernet interfaces used as uplink interfaces to spine devices vary by leaf device switch model.

To see which high-speed interfaces are available with each leaf device switch model, see the following documents: