Data Center Fabric Reference Design Overview and Validated Topology


This section provides a high-level overview of the Data Center Fabric reference design topology and summarizes the topologies that were tested and validated by the Juniper Networks Test Team.

It includes the following sections:

  • Reference Design Overview
  • Hardware and Software Summary
  • Interfaces Summary

Reference Design Overview

The Data Center Fabric reference design tested by Juniper Networks is based on an IP Fabric underlay in a Clos topology that uses up to four QFX10000 series switches as spine devices and up to 96 QFX series switches as leaf devices. Each leaf device connects to each spine device using either an aggregated Ethernet interface with two high-speed Ethernet interfaces (10-Gbps, 40-Gbps, or 100-Gbps) as LAG members or a single high-speed Ethernet interface.
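
For illustration, the following is a minimal Junos configuration sketch of one such leaf-to-spine uplink bundle. The member interfaces (et-0/0/48 and et-0/0/49), bundle name ae1, device count, and /31 point-to-point underlay address are hypothetical values, not taken from the validated design:

  # Allow aggregated Ethernet interfaces to be created (hypothetical count).
  set chassis aggregated-devices ethernet device-count 4
  # Assign two high-speed member interfaces to the uplink bundle ae1.
  set interfaces et-0/0/48 ether-options 802.3ad ae1
  set interfaces et-0/0/49 ether-options 802.3ad ae1
  # Run LACP on the bundle and address it as a point-to-point underlay link.
  set interfaces ae1 aggregated-ether-options lacp active
  set interfaces ae1 unit 0 family inet address 172.16.1.1/31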

Figure 1 provides an illustration of the topology used in this reference design:

Figure 1: Data Center Fabric Reference Design - Topology

End systems such as servers connect to the data center network through leaf device interfaces. Each end system was multihomed to three leaf devices using a 3-member aggregated Ethernet interface as shown in Figure 2.

Figure 2: Data Center Fabric Reference Design - Multihoming

End systems are multihomed to three different leaf devices to verify that multihoming an end system to more than two leaf devices is fully supported.
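
In EVPN-based fabrics, this style of multihoming is commonly configured as an ESI-LAG: the same Ethernet segment identifier (ESI) and LACP system ID are applied to the server-facing bundle on every leaf device the end system attaches to, so the server sees a single LAG partner. The following sketch shows one leaf device, using hypothetical values (bundle ae11, ESI, LACP system ID, and VLAN v100); the same esi and system-id statements would be repeated on the other two leaf devices:

  # Server-facing member link and bundle (hypothetical names).
  set interfaces et-0/0/10 ether-options 802.3ad ae11
  # Shared ESI in all-active mode; identical on all three leaf devices.
  set interfaces ae11 esi 00:11:11:11:11:11:11:11:11:11
  set interfaces ae11 esi all-active
  # Shared LACP system ID so the end system sees one logical LAG.
  set interfaces ae11 aggregated-ether-options lacp active
  set interfaces ae11 aggregated-ether-options lacp system-id 00:00:00:11:11:11
  # Layer 2 service on the bundle (hypothetical VLAN).
  set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
  set interfaces ae11 unit 0 family ethernet-switching vlan members v100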

Hardware and Software Summary

Table 1 summarizes the hardware and software components that can be used to create this reference design.

Table 1: Data Center Fabric Reference Design Hardware and Software Summary

Spine Devices

  Hardware: 4 QFX10002, QFX10008, or QFX10016 switches

  Interfaces: See Spine Device Interface Summary.

  Software: Junos OS Release 17.3R1-S1 or later

Leaf Devices

  Hardware: Up to 96 of the following switches:

    • QFX5100 switches
    • QFX5100 Virtual Chassis
    • QFX5110 switches
    • QFX5200 switches
    • QFX10002 switches

  Interfaces: See Leaf Device Interface Summary.

  Software: Junos OS Release 17.3R1-S1 or later

This table does not include backbone devices that connect the data center to a WAN cloud. Backbone devices provide physical connectivity between data centers and are required for Data Center Interconnect (DCI). See Data Center Interconnect Design and Implementation.

Interfaces Summary

This section summarizes the interface connections between spine and leaf devices that were validated in this reference design.

It contains the following sections:

  • Interfaces Overview
  • Spine Device Interface Summary
  • Leaf Device Interface Summary

Interfaces Overview

In the validated reference design, spine and leaf devices are interconnected using either an aggregated Ethernet interface that includes two high-speed Ethernet interfaces or a single high-speed Ethernet interface.

The reference design was validated with the following combinations of spine and leaf device interconnections:

  • QFX10002, QFX10008, or QFX10016 switches as spine devices and QFX5100, QFX5110, QFX5200, and QFX10002 switches as leaf devices.

    All 10-Gbps, 40-Gbps, or 100-Gbps interfaces on the supported platforms were used to interconnect a spine and leaf device.

  • Combinations of aggregated Ethernet interfaces containing two 10-Gbps, 40-Gbps, or 100-Gbps member interfaces between the supported platforms were validated.

  • Channelized 10-Gbps, 40-Gbps, or 100-Gbps interfaces used to interconnect spine and leaf devices as single links or as member links in a 2-member aggregated Ethernet bundle were also validated.
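
As an example of the channelized option, the sketch below splits a hypothetical 40-Gbps QFX port (FPC 0, PIC 0, port 0) into four 10-Gbps channels and uses one resulting channel as a member of the uplink bundle ae1 from the earlier sketch:

  # Channelize port 0 into four 10-Gbps interfaces, xe-0/0/0:0 through xe-0/0/0:3.
  set chassis fpc 0 pic 0 port 0 channel-speed 10g
  # Use one resulting channel as an aggregated Ethernet member link.
  set interfaces xe-0/0/0:0 ether-options 802.3ad ae1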

Spine Device Interface Summary

As previously stated, the validated design includes up to four spine devices and up to 96 leaf devices that are interconnected by one or two high-speed Ethernet interfaces. To connect to 96 leaf devices using 2-member aggregated Ethernet interfaces, a spine device must support 192 high-speed Ethernet interfaces (96 leaf devices × 2 member links).

QFX10008 and QFX10016 switches were used because they provide the 192 ports required for this reference design at full scale. See QFX10008 Hardware Overview or QFX10016 Hardware Overview for information on supported line cards and the number of high-speed Ethernet interfaces supported on these switches.

QFX10002-36Q and QFX10002-72Q switches, however, do not have the port density to support this reference design at larger scales, but they can be deployed as spine devices in smaller-scale deployments. See QFX10002 Hardware Overview for information on the number of high-speed Ethernet interfaces supported on QFX10002 switches.

All channelized spine device interface options are tested and supported in the validated reference design.

Leaf Device Interface Summary

Each leaf device in the reference design connects to all four spine devices and has the port density to support these uplinks.

The number and types of high-speed Ethernet interfaces used as uplink interfaces to spine devices vary by leaf device switch model.
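
Whichever model is used, the uplink bundles can be checked with standard Junos operational commands; for example, using the hypothetical bundle ae1 from the earlier sketches:

  show interfaces ae1 terse
  show lacp interfaces ae1

The first command confirms that the bundle and its logical unit are up; the second shows the LACP state of each member link.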

To see which high-speed interfaces are available with each leaf device switch model, see the following documents:

  • QFX5100 Hardware Overview
  • QFX5110 Hardware Overview
  • QFX5200 Hardware Overview
  • QFX10002 Hardware Overview