Understanding DCB Features and Requirements on EX Series Switches

Data center bridging (DCB) is a set of enhancements to the IEEE 802.1 bridge specifications. DCB modifies and extends Ethernet behavior to support I/O convergence in the data center. I/O convergence includes but is not limited to the transport of Ethernet LAN traffic and Fibre Channel (FC) storage area network (SAN) traffic on the same physical Ethernet network infrastructure.

A converged architecture saves cost by reducing the number of networks and switches required to support both types of traffic, reducing the number of interfaces required, reducing cable complexity, and reducing administration activities.

You can use DCB features on CEE-enabled switches to transport converged Ethernet and FC traffic while providing the class-of-service (CoS) characteristics and other characteristics FC requires for transmitting storage traffic.

Note:

This topic only applies to DCB features on EX Series switches that do not support the Enhanced Layer 2 Software (ELS) configuration style. EX4500 and EX4550 switches are the only non-ELS EX Series switches that support DCB features.

DCB features on ELS EX Series switches and QFX Series switches are described in Understanding DCB Features and Requirements.


EX Series Switch DCB Features Overview

To accommodate FC traffic, DCB specifications provide:

  • High-bandwidth interfaces

  • A discovery and exchange protocol, called Data Center Bridging Capability Exchange protocol (DCBX), for communicating configuration and capabilities among neighbors to ensure consistent configuration across the network. DCBX is an extension of Link Layer Discovery Protocol (LLDP, described in IEEE 802.1AB).

  • A flow control mechanism called priority-based flow control (PFC, described in IEEE 802.1Qbb) to help provide lossless transport.

Note:

The switches support the DCBX standards and PFC, but do not support enhanced transmission selection (ETS) and quantized congestion notification (QCN).

Physical Interfaces

The switches provide the high-bandwidth interfaces (10-Gigabit Ethernet interfaces) required to support DCB and converged traffic. Depending on its configuration, your switch can have both 1-Gigabit and 10-Gigabit Ethernet interfaces, but DCBX works only on 10-Gigabit Ethernet full-duplex interfaces. However, LLDP and DCBX are enabled by default on all interfaces.
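To confirm that LLDP and DCBX are exchanging information with a peer, you can use operational show commands along these lines (the interface name xe-0/0/20 is a hypothetical example, and exact output varies by Junos OS release):

```
user@switch> show lldp neighbors
user@switch> show dcbx neighbors interface xe-0/0/20
```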

DCBX

DCB devices use DCBX to exchange configuration information with directly connected peers (switches and data center devices such as servers). DCBX is an extension of LLDP. If you attempt to enable DCBX on an interface on which LLDP is disabled, the configuration commit fails. See Understanding Data Center Bridging Capability Exchange Protocol for EX Series Switches for details.
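As a sketch of this dependency, LLDP must remain enabled on any interface on which DCBX runs. A minimal configuration might look like the following (the interface name xe-0/0/20 is a hypothetical example; check your platform documentation for exact syntax):

```
set protocols lldp interface all
set protocols dcbx interface xe-0/0/20
```

If you instead delete or disable LLDP on xe-0/0/20 while DCBX is still configured on it, the commit fails.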

Lossless Transport

FC traffic requires lossless transport (defined as no frames dropped because of congestion). Standard Ethernet does not support lossless transport, but the DCB extensions to Ethernet along with proper buffer management enable an Ethernet network to provide the level of CoS necessary to transport FC frames encapsulated in Ethernet over an Ethernet network.

The following sections describe two factors in creating lossless transport over Ethernet: PFC and buffer management.

PFC

PFC is a link-level flow control mechanism similar to Ethernet PAUSE (described in IEEE 802.3x). Ethernet PAUSE stops all traffic on a link for a specified period of time. PFC pauses only the traffic of a specific priority for a specified period of time, without stopping the traffic assigned to other priorities on the link. You apply PFC to a priority by using a congestion notification profile.

The switches support up to six traffic classes and allow you to associate those classes with six different congestion notification profiles.

PFC enables you to provide lossless transport for traffic assigned to use the PFC congestion notification profile and to use standard Ethernet transport for the rest of the link traffic.
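A congestion notification profile that enables PFC on a given IEEE 802.1p code point might be configured along these lines (the profile name fcoe-cnp, the code point 011, and the interface name xe-0/0/20 are illustrative assumptions, not values from this topic; verify the exact statement hierarchy for your platform and release):

```
set class-of-service congestion-notification-profile fcoe-cnp input ieee-802.1 code-point 011 pfc
set class-of-service interfaces xe-0/0/20 congestion-notification-profile fcoe-cnp
```

Traffic arriving on xe-0/0/20 with priority 011 is then subject to PFC, while traffic with other priorities continues to use standard Ethernet transport.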

Buffer Management

Buffer management is critical to the proper functioning of PFC, because if buffers are allowed to overflow, frames are dropped and transport is not lossless.

For each lossless flow priority, the switch requires sufficient buffer space to:

  • Store frames sent during the time it takes to send the PFC PAUSE across the cable between devices

  • Store frames that are already on the wire when the sender receives the PFC PAUSE

The amount of buffer space needed to prevent frame loss due to congestion depends on the cable length, cable speed, and processing speed.
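For illustration only, a rough per-priority headroom estimate for a 10-Gbps link over a 300-meter cable, assuming a propagation delay of about 5 ns per meter and a 9216-byte jumbo frame (actual reserved buffer sizes are platform-dependent):

```
round-trip cable delay:  2 × 300 m × 5 ns/m  = 3 µs
data in flight:          10 Gbps × 3 µs      = 30,000 bits ≈ 3.75 KB
frames on the wire:      2 × 9216 bytes      ≈ 18.4 KB
required headroom:       3.75 KB + 18.4 KB   ≈ 22 KB per lossless priority
```

The two jumbo frames account for a frame already being transmitted by the sender when the PAUSE arrives and a frame already on the wire when the switch crosses its threshold.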

The switch automatically sets the threshold for sending a PFC PAUSE frame to accommodate the delay introduced by cables as long as 984 feet (300 meters) and to accommodate large frames that might be on the wire when the switch sends the PAUSE. This ensures that the switch sends PAUSE frames early enough for the sender to stop transmitting before the receive buffers on the switch overflow.