Example: Configuring CoS Scheduling Across the QFabric System

If you do not want to use the default class of service (CoS) scheduling of traffic across the QFabric system, then in addition to configuring CoS on Node device access interfaces, you can configure two-tier hierarchical scheduling on the fabric interfaces of the QFabric system. Configuring CoS on the fabric interfaces provides more control over CoS across the QFabric system and helps to ensure predictable bandwidth consumption across the fabric path.

Requirements

This example uses the following hardware and software components:

  • Juniper Networks QFabric System with two Juniper Networks QFX3500 Node devices

  • Junos OS Release 12.3 or later for the QFX Series

Overview

Configuring CoS across the QFabric system enables you to control scheduling resources as traffic passes through each type of interface. You can configure CoS on the following QFabric system interface types:

  • Node device access interfaces (xe interfaces)—Schedule traffic on the output queues of the 10-Gigabit Ethernet access ports, using standard Node device CoS scheduling configuration components, as described elsewhere in the QFX Series documentation. You can configure different scheduling for different ports and queues.

  • Node device fabric interfaces (fte interfaces)—Schedule traffic on the output queues of the 40-Gbps fabric interfaces that connect a Node device to a QFX3008-I or a QFX3600-I Interconnect device using standard Node device CoS scheduling configuration components. You can configure different scheduling for different interfaces and output queues.

  • Interconnect device fabric interfaces (fte interfaces)—Schedule traffic on the output queues of the 40-Gbps fabric interfaces that connect an Interconnect device to a Node device. You can configure different scheduling for different interfaces and fabric forwarding class sets (fabric fc-sets).

  • Interconnect device internal Clos fabric interfaces (bfte interfaces)—Schedule traffic on the internal 40-Gbps Clos fabric interfaces that connect the three stages of the Clos fabric within the Interconnect device. You can configure one Clos fabric interface scheduler, which is applied to all of the internal Clos fabric interfaces. You cannot configure different schedulers for different Clos fabric interfaces.

This example shows you how to configure hierarchical port scheduling across the QFabric system, including the configuration of Node device access interfaces, Node device fabric interfaces, Interconnect device fabric interfaces, and internal Interconnect device Clos fabric interfaces.

Configuring CoS on Interconnect device fabric interfaces differs from configuring CoS on Node device interfaces because the Interconnect device is a shared infrastructure that supports traffic from multiple Node devices and multiple Node device CoS configurations. Take the amounts and types of traffic traversing the Interconnect device into account when you configure CoS on Interconnect device interfaces.

Configuring scheduling across the QFabric system entails configuring interfaces on Node devices and Interconnect devices. You configure some or all of the following CoS components on each interface, depending upon the interface type (access, Node fabric, Interconnect fabric, or Interconnect Clos fabric); a skeletal configuration sketch follows the list:

  • Mapping forwarding classes to priorities (IEEE 802.1p code points) and queues, and configuring lossless forwarding classes

  • Defining fc-sets (priority groups)

  • Defining drop profiles

  • Defining schedulers

  • Mapping forwarding classes to schedulers (scheduler map on Node devices, fabric scheduler map on Interconnect devices)

  • Defining traffic control profiles

  • Configuring a congestion notification profile to enable priority-based flow control (PFC) on lossless forwarding classes (priorities) (Node device access interfaces only)

  • Applying congestion notification profiles to interfaces (Node device access interfaces only)

  • Assigning fc-sets and traffic control profiles to interfaces (Node device interfaces only) or assigning fabric scheduler maps to interfaces (Interconnect device interfaces only)
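
In configuration terms, these components correspond to stanzas under the [edit class-of-service] hierarchy. The following skeleton is a sketch for orientation only, using the statement names that appear in this example (all values elided):

    class-of-service {
        forwarding-classes { ... }               /* forwarding class to queue mapping; no-loss flag */
        forwarding-class-sets { ... }            /* fc-sets (priority groups) */
        drop-profiles { ... }                    /* drop profiles for lossy traffic */
        schedulers { ... }                       /* per-queue bandwidth, priority, drop profiles */
        scheduler-maps { ... }                   /* forwarding class to scheduler mapping */
        traffic-control-profiles { ... }         /* per-fc-set guaranteed and shaping rates */
        congestion-notification-profile { ... }  /* PFC on lossless priorities */
        interfaces { ... }                       /* bind fc-sets, profiles, and maps to interfaces */
    }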

Note:

This example uses the default behavior aggregate classifiers on the Node device access interfaces. Classifiers are not applied to fabric interfaces. Although packet classification is not scheduling, it controls how incoming packets are mapped to forwarding classes (by IEEE 802.1p priority) and to loss priorities when they enter Node device access ports.

When you plan port bandwidth scheduling for priority groups (fc-sets on Node devices and class groups on Interconnect devices) and priorities (forwarding classes on Node devices and fabric fc-sets on Interconnect devices), take into account:

  • The amounts and types of traffic you expect to traverse the Node device interfaces

  • The amounts and types of aggregated traffic from all of the connected Node devices that you expect to traverse the Interconnect device interfaces

  • The mapping of priorities into priority groups. Traffic that requires similar treatment usually belongs in the same priority group. To do this on Node devices, place forwarding classes that require similar bandwidth, loss priority, and other characteristics in the same fc-set. For example, you can map all types of best-effort traffic forwarding classes into one fc-set. On Interconnect devices, the default mapping of fabric fc-sets to class groups defines priority group membership and is not user-configurable.

  • How much of the port bandwidth you want to allocate to each priority group and to each of the priorities in each priority group. The following considerations apply to bandwidth allocation (a worked check using this example's values follows the list):

    • Estimate how much traffic you expect in each priority’s output queue (forwarding class on Node devices and fabric fc-set on Interconnect devices) and how much traffic you expect in each priority group (fc-set on Node devices and class group on Interconnect devices). The priority group traffic is the aggregated amount of traffic in the priorities that belong to the priority group.

    • On Node devices, the combined minimum guaranteed bandwidth of the priorities in a priority group should not exceed the minimum guaranteed bandwidth (guaranteed rate) of the priority group. (On Interconnect devices, class group bandwidth is derived from the bandwidth of the member fabric fc-sets, so the sum of the priority bandwidths cannot exceed the priority group bandwidth.) The transmit rate scheduler parameter defines the minimum guaranteed bandwidth for priorities (forwarding classes and fabric fc-sets). Scheduler maps associate schedulers with forwarding classes (Node devices) and fabric scheduler maps associate schedulers with fabric fc-sets (Interconnect devices).

    • The combined minimum guaranteed bandwidth of all of the priority groups on an interface should not exceed the interface’s total bandwidth.
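
As a concrete check, the bandwidth values used later in this example (Table 1 and Table 2) satisfy these rules. On the Node devices, the scheduler transmit rates within each priority group sum to 100 percent of that group's guaranteed rate, and the group guaranteed rates sum to 100 percent of the port bandwidth:

    be-tcp (guaranteed rate 25%):    be-sched 90% + nc-sched 10%   = 100% of the group bandwidth
    nl-tcp (guaranteed rate 50%):    fcoe-sched 60% + nl-sched 40% = 100% of the group bandwidth
    mcast-tcp (guaranteed rate 25%): mcast-sched 100%              = 100% of the group bandwidth
    Port minimum guarantees:         25% + 50% + 25%               = 100% of the port bandwidth

On the Interconnect device, the fabric fc-set transmit rates sum to 25% + 30% + 25% + 20% = 100 percent, which satisfies the rule that the combined transmit rates must not exceed 100 percent.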

Topology

Figure 1 shows the network topology used in this example.

Figure 1: Network Topology for Scheduling Across the QFabric System

Table 1 and Table 2 describe the scheduling configuration components on the Node device and the Interconnect device.

To simplify Node device configuration, this example uses the same scheduling configuration on the access interfaces and the fabric interfaces of both QFabric Node devices. This is possible because the scheduler (forwarding-class scheduling) and traffic control profile (fc-set scheduling) rates are specified as percentages of bandwidth instead of as absolute values, so the schedulers and traffic control profiles utilize the port bandwidth in the same way regardless of the absolute amount of available bandwidth. If you want to treat traffic differently on different interfaces or on different interface types, you can configure different schedulers and traffic control profiles and apply them to the interfaces.

Table 1 shows the scheduling configuration components for Node device interfaces.

Table 1: Components of the QFabric Node Device Hierarchical Port Scheduling Configuration Topology

Scheduling Component

Settings

Hardware

Two QFX3500 Node devices in a QFabric system

Forwarding classes

This example uses five forwarding classes:

  • best-effort

  • fcoe

  • no-loss

  • network-control

  • mcast

This example uses the default configuration for three forwarding classes (best-effort, network-control, and mcast). Best-effort traffic is classified into low loss priority and high loss priority by IEEE 802.1p classifiers at the Node device ingress interfaces.

The fcoe and no-loss forwarding classes are explicitly configured as lossless:

  • fcoe—Mapped to queue 3 with the no-loss parameter specified

  • no-loss—Mapped to queue 4 with the no-loss parameter specified

Note:

Starting with Junos OS Release 12.3, you must include the no-loss parameter in the forwarding class configuration for forwarding classes that you want to be lossless. In Junos OS Release 12.3, all default forwarding classes, including the fcoe and no-loss forwarding classes, are lossy forwarding classes by default and must be explicitly configured as lossless to receive lossless CoS treatment. This is a change from lossless forwarding class configuration in earlier releases.

Forwarding class sets (priority groups)

  • best-effort-pg—Contains the forwarding classes best-effort and network-control

  • noloss-pg—Contains the forwarding classes fcoe and no-loss

  • multidestination-pg—Contains the forwarding class mcast

Drop profiles

Note:

Lossless traffic (fcoe and no-loss forwarding classes) and multidestination traffic do not use drop profiles.

This example uses the following drop profiles for lossy traffic classes:

  • Best-effort unicast traffic with low packet loss priority:
    Name—dp-be-low
    Drop start point—25%
    Drop end point—50%
    Maximum drop rate—80%

  • Best-effort unicast traffic with high packet loss priority:
    Name—dp-be-high
    Drop start point—10%
    Drop end point—40%
    Maximum drop rate—100%

  • Network-control traffic:
    Name—dp-nc
    Drop start point—75%
    Drop end point—100%
    Maximum drop rate—50%

Queue (forwarding class) schedulers

Schedulers configure the bandwidth characteristics of forwarding classes, which are mapped to output queues and to IEEE 802.1p CoS priorities.

  • Best-effort traffic scheduler:
    Name—be-sched
    Transmit rate (minimum guaranteed bandwidth)—90%
    Shaping rate (maximum bandwidth)—100%
    Priority—low
    Drop profiles—dp-be-low and dp-be-high

  • Network-control traffic scheduler:
    Name—nc-sched
    Transmit rate—10%
    Shaping rate—100%
    Priority—low
    Drop profile—dp-nc

  • FCoE traffic scheduler:
    Name—fcoe-sched
    Transmit rate—60%
    Shaping rate—100%
    Priority—low
    Drop profile—None

  • No-loss traffic scheduler:
    Name—nl-sched
    Transmit rate—40%
    Shaping rate—100%
    Priority—low
    Drop profile—None

  • Multidestination traffic scheduler:
    Name—mcast-sched
    Transmit rate—100%
    Shaping rate—100%
    Priority—low
    Drop profile—None

Note:

If you want to specify absolute values instead of percentages for the transmit rate and the shaping rate, you should create separate schedulers for access and fabric interfaces, because access interfaces are 10-Gigabit Ethernet interfaces and fabric interfaces are 40-Gbps interfaces.

Forwarding class to scheduler mapping

  • Best-effort traffic scheduler map:
    Name—be-map
    Mapping—forwarding class best-effort to scheduler be-sched, forwarding class network-control to scheduler nc-sched

  • Lossless traffic scheduler map:
    Name—nl-map
    Mapping—forwarding class fcoe to scheduler fcoe-sched, forwarding class no-loss to scheduler nl-sched

  • Multidestination traffic scheduler map:
    Name—mcast-map
    Mapping—forwarding class mcast to scheduler mcast-sched

Priority group (fc-set) traffic control profiles

Traffic control profiles configure the bandwidth for fc-sets (priority groups) and control the amount of port bandwidth allocated to the forwarding classes in the fc-sets.

  • Best-effort traffic control profile:
    Name—be-tcp
    Guaranteed rate (minimum guaranteed bandwidth)—25%
    Shaping rate (maximum bandwidth)—100%
    Scheduler map—be-map

  • Lossless traffic control profile:
    Name—nl-tcp
    Guaranteed rate—50%
    Shaping rate—100%
    Scheduler map—nl-map

  • Multidestination traffic control profile:
    Name—mcast-tcp
    Guaranteed rate—25%
    Shaping rate—100%
    Scheduler map—mcast-map

Hierarchical scheduling (fc-sets and traffic control profiles) association with interfaces

Apply the fc-sets and traffic control profiles to the interfaces of both Node devices:

  • Access interfaces—ND1:xe-0/0/20, ND1:xe-0/0/21, ND2:xe-0/0/20, ND2:xe-0/0/21

  • Fabric interfaces—ND1:fte-0/1/0, ND2:fte-0/1/0

PFC (access interfaces only; do not apply PFC to fabric interfaces)

Code points:

  • 011—fcoe forwarding class traffic priority

  • 100—no-loss forwarding class traffic priority

Congestion notification profile name—nl-cnp

Enabled on interfaces: ND1:xe-0/0/20, ND1:xe-0/0/21, ND2:xe-0/0/20, and ND2:xe-0/0/21

To simplify Interconnect device configuration, this example uses the same scheduling configuration on the fabric interfaces and the Clos fabric interfaces. If you want to treat traffic differently on different fabric interfaces or on different fabric interface types, you can configure different fabric schedulers, map them to fabric fc-sets, and apply them to the interfaces. (You can apply different mappings of schedulers to fabric fc-sets on different interfaces.)

Note:

On Interconnect devices, the network-control forwarding class is mapped by default to the strict-high priority fabric fc-set (fabric_fcset_strict_high). The strict-high priority fabric fc-set receives all of the port bandwidth it needs to service strict-high priority traffic. You can configure a scheduler with a shaping rate (maximum bandwidth) and a drop profile to limit the bandwidth available to the strict-high priority fabric fc-set, if desired. The available fabric port bandwidth for all other traffic in all other fabric fc-sets is the bandwidth that remains after the interface services the strict-high priority traffic.

Table 2 shows the scheduling configuration components for Interconnect device interfaces:

Table 2: Components of the QFabric Interconnect Device Hierarchical Port Scheduling Configuration Topology

Fabric Scheduling Component

Settings

Hardware

One QFabric Interconnect device connected to two QFX3500 Node devices in a QFabric system

Forwarding classes

Interconnect devices use the forwarding classes defined on the connected Node devices. The forwarding classes are mapped by default to fabric fc-sets on the Interconnect device.

Note:

If you do not want to use the default forwarding class to fabric fc-set mapping, you can configure the mapping. Forwarding class to fabric fc-set mapping is global and applies to all traffic that crosses the Interconnect device.

Fabric fc-sets

This example uses four of the default fabric fc-sets, with the default mapping of forwarding classes to fabric fc-sets:

  • fabric_fcset_be (includes the best-effort forwarding class)

  • fabric_fcset_noloss1 (includes the fcoe forwarding class)

  • fabric_fcset_noloss2 (includes the no-loss forwarding class)

  • fabric_fcset_mcast1 (includes the mcast forwarding class)

Class groups (priority groups)

The three default class groups and fabric fc-set membership in the class groups are not user-configurable.

Drop profiles

Note:

Lossless traffic (fabric_fcset_noloss1 and fabric_fcset_noloss2) and multidestination traffic do not use drop profiles.

This example uses the following drop profiles for lossy traffic classes:

  • Best-effort unicast traffic with low packet loss priority:
    Name—fab-dp-be-low
    Drop start point—20%
    Drop end point—50%
    Maximum drop rate—80%

  • Best-effort unicast traffic with high packet loss priority:
    Name—fab-dp-be-high
    Drop start point—5%
    Drop end point—35%
    Maximum drop rate—100%

Queue (fabric fc-set) fabric schedulers

Schedulers configure the bandwidth for fabric fc-sets, which are mapped to output queues and to IEEE 802.1p CoS priorities.

The sum of the minimum guaranteed bandwidths (transmit rates) of each fabric fc-set in a class group equals the total minimum guaranteed port bandwidth of the class group. The sum of all of the fabric fc-set transmit rates in all of the class groups equals the percentage of available port bandwidth allocated to the class groups. The sum of all of the fabric fc-set transmit rates must be less than or equal to 100 percent.

  • Best-effort traffic scheduler:
    Name—fab-be-sched
    Transmit rate—25%
    Shaping rate—100%
    Drop profiles—fab-dp-be-low and fab-dp-be-high

  • FCoE traffic scheduler:
    Name—fab-fcoe-sched
    Transmit rate—30%
    Shaping rate—100%
    Drop profile—None

  • No-loss traffic scheduler:
    Name—fab-nl-sched
    Transmit rate—25%
    Shaping rate—100%
    Drop profile—None

  • Multidestination traffic scheduler:
    Name—fab-mcast-sched
    Transmit rate—20%
    Shaping rate—100%
    Drop profile—None

Fabric fc-set to fabric scheduler mapping

  • Best-effort traffic fabric scheduler mapping:
    Name—fab-traffic-map
    Mapping—fabric_fcset_be to scheduler fab-be-sched

  • FCoE traffic fabric scheduler mapping:
    Name—fab-traffic-map
    Mapping—fabric_fcset_noloss1 to scheduler fab-fcoe-sched

  • No-loss traffic fabric scheduler mapping:
    Name—fab-traffic-map
    Mapping—fabric_fcset_noloss2 to scheduler fab-nl-sched

  • Multidestination traffic fabric scheduler mapping:
    Name—fab-traffic-map
    Mapping—fabric_fcset_mcast1 to scheduler fab-mcast-sched

Applying hierarchical scheduling (fabric scheduler map) to interfaces

Fabric interfaces: ICD1:fte-0/0/3, ICD1:fte-1/0/7

Clos fabric interfaces: ICD1:bfte-*/*/*

Configuration

The configuration example is split into two parts, one for Node device scheduling and one for Interconnect device scheduling. Although this example uses the same scheduling on Node device access and fabric interfaces, you can configure different schedulers for different interfaces. This example also uses the same scheduling on the Interconnect device fabric and Clos fabric interfaces; you can configure different schedulers for different fabric (fte) interfaces, but, as noted in the Overview, all of the internal Clos fabric (bfte) interfaces share a single scheduler configuration.

To configure scheduling across a QFabric system, perform these tasks:

CLI Quick Configuration

Node device configuration: To quickly configure scheduling across a QFabric system, copy the following commands, paste them in a text file, remove line breaks, change variables and details to match your network configuration, and then copy and paste the commands into the CLI for Node devices ND1 and ND2 at the [edit] hierarchy level. In this example, we use identical scheduling and interfaces on Node devices ND1 and ND2 to simplify the configuration.
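
The Node device command listing is reconstructed below from the component values in Table 1. Treat it as a hedged sketch rather than verified CLI: the statement paths follow standard QFX Series CoS syntax, so confirm each statement against your Junos OS release before committing.

    set class-of-service forwarding-classes class fcoe queue-num 3 no-loss
    set class-of-service forwarding-classes class no-loss queue-num 4 no-loss
    set class-of-service forwarding-class-sets best-effort-pg class best-effort
    set class-of-service forwarding-class-sets best-effort-pg class network-control
    set class-of-service forwarding-class-sets noloss-pg class fcoe
    set class-of-service forwarding-class-sets noloss-pg class no-loss
    set class-of-service forwarding-class-sets multidestination-pg class mcast
    set class-of-service drop-profiles dp-be-low interpolate fill-level [ 25 50 ] drop-probability [ 0 80 ]
    set class-of-service drop-profiles dp-be-high interpolate fill-level [ 10 40 ] drop-probability [ 0 100 ]
    set class-of-service drop-profiles dp-nc interpolate fill-level [ 75 100 ] drop-probability [ 0 50 ]
    set class-of-service schedulers be-sched transmit-rate percent 90
    set class-of-service schedulers be-sched shaping-rate percent 100
    set class-of-service schedulers be-sched priority low
    set class-of-service schedulers be-sched drop-profile-map loss-priority low protocol any drop-profile dp-be-low
    set class-of-service schedulers be-sched drop-profile-map loss-priority high protocol any drop-profile dp-be-high
    set class-of-service schedulers nc-sched transmit-rate percent 10
    set class-of-service schedulers nc-sched shaping-rate percent 100
    set class-of-service schedulers nc-sched priority low
    set class-of-service schedulers nc-sched drop-profile-map loss-priority low protocol any drop-profile dp-nc
    set class-of-service schedulers fcoe-sched transmit-rate percent 60
    set class-of-service schedulers fcoe-sched shaping-rate percent 100
    set class-of-service schedulers fcoe-sched priority low
    set class-of-service schedulers nl-sched transmit-rate percent 40
    set class-of-service schedulers nl-sched shaping-rate percent 100
    set class-of-service schedulers nl-sched priority low
    set class-of-service schedulers mcast-sched transmit-rate percent 100
    set class-of-service schedulers mcast-sched shaping-rate percent 100
    set class-of-service schedulers mcast-sched priority low
    set class-of-service scheduler-maps be-map forwarding-class best-effort scheduler be-sched
    set class-of-service scheduler-maps be-map forwarding-class network-control scheduler nc-sched
    set class-of-service scheduler-maps nl-map forwarding-class fcoe scheduler fcoe-sched
    set class-of-service scheduler-maps nl-map forwarding-class no-loss scheduler nl-sched
    set class-of-service scheduler-maps mcast-map forwarding-class mcast scheduler mcast-sched
    set class-of-service traffic-control-profiles be-tcp scheduler-map be-map
    set class-of-service traffic-control-profiles be-tcp guaranteed-rate percent 25
    set class-of-service traffic-control-profiles be-tcp shaping-rate percent 100
    set class-of-service traffic-control-profiles nl-tcp scheduler-map nl-map
    set class-of-service traffic-control-profiles nl-tcp guaranteed-rate percent 50
    set class-of-service traffic-control-profiles nl-tcp shaping-rate percent 100
    set class-of-service traffic-control-profiles mcast-tcp scheduler-map mcast-map
    set class-of-service traffic-control-profiles mcast-tcp guaranteed-rate percent 25
    set class-of-service traffic-control-profiles mcast-tcp shaping-rate percent 100
    set class-of-service congestion-notification-profile nl-cnp input ieee-802.1 code-point 011 pfc
    set class-of-service congestion-notification-profile nl-cnp input ieee-802.1 code-point 100 pfc
    set class-of-service interfaces ND1:xe-0/0/20 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
    set class-of-service interfaces ND1:xe-0/0/20 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
    set class-of-service interfaces ND1:xe-0/0/20 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
    set class-of-service interfaces ND1:xe-0/0/20 congestion-notification-profile nl-cnp

Repeat the four ND1:xe-0/0/20 interface statements for ND1:xe-0/0/21, ND2:xe-0/0/20, and ND2:xe-0/0/21, and apply the three forwarding-class-set statements (but not the congestion notification profile) to fabric interfaces ND1:fte-0/1/0 and ND2:fte-0/1/0.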

Interconnect device configuration: To quickly configure scheduling across a QFabric system, copy the following commands, paste them in a text file, remove line breaks, change variables and details to match your network configuration, and then copy and paste the commands into the CLI for Interconnect device ICD1 at the [edit] hierarchy level. In this example, we use identical scheduling on the fabric interfaces and the Clos fabric interfaces to simplify the configuration.

Note:

This configuration uses the default mapping of forwarding classes to fabric fc-sets.
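
The Interconnect device command listing is likewise reconstructed from Table 2 as a hedged sketch. The drop-profile and scheduler statements follow standard CoS syntax; the scheduler-map and interface statements at the end are assumptions patterned on the operational command show class-of-service scheduler-map-forwarding-class-sets used later in this example, so verify the exact statement paths in the CLI before use.

    set class-of-service drop-profiles fab-dp-be-low interpolate fill-level [ 20 50 ] drop-probability [ 0 80 ]
    set class-of-service drop-profiles fab-dp-be-high interpolate fill-level [ 5 35 ] drop-probability [ 0 100 ]
    set class-of-service schedulers fab-be-sched transmit-rate percent 25
    set class-of-service schedulers fab-be-sched shaping-rate percent 100
    set class-of-service schedulers fab-be-sched drop-profile-map loss-priority low protocol any drop-profile fab-dp-be-low
    set class-of-service schedulers fab-be-sched drop-profile-map loss-priority high protocol any drop-profile fab-dp-be-high
    set class-of-service schedulers fab-fcoe-sched transmit-rate percent 30
    set class-of-service schedulers fab-fcoe-sched shaping-rate percent 100
    set class-of-service schedulers fab-nl-sched transmit-rate percent 25
    set class-of-service schedulers fab-nl-sched shaping-rate percent 100
    set class-of-service schedulers fab-mcast-sched transmit-rate percent 20
    set class-of-service schedulers fab-mcast-sched shaping-rate percent 100
    set class-of-service scheduler-maps fab-traffic-map forwarding-class-set fabric_fcset_be scheduler fab-be-sched
    set class-of-service scheduler-maps fab-traffic-map forwarding-class-set fabric_fcset_noloss1 scheduler fab-fcoe-sched
    set class-of-service scheduler-maps fab-traffic-map forwarding-class-set fabric_fcset_noloss2 scheduler fab-nl-sched
    set class-of-service scheduler-maps fab-traffic-map forwarding-class-set fabric_fcset_mcast1 scheduler fab-mcast-sched
    set class-of-service interfaces ICD1:fte-0/0/3 scheduler-map-forwarding-class-sets fab-traffic-map
    set class-of-service interfaces ICD1:fte-1/0/7 scheduler-map-forwarding-class-sets fab-traffic-map
    set class-of-service interfaces ICD1:bfte-* scheduler-map-forwarding-class-sets fab-traffic-map

The bfte-* wildcard stands in for all of the internal Clos fabric interfaces (listed in Table 2 as ICD1:bfte-*/*/*); as noted in the Overview, one scheduler configuration applies to all Clos fabric interfaces.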

Configuring QFX3500 Node Devices ND1 and ND2

Step-by-Step Procedure

To perform a step-by-step configuration of lossless forwarding classes, forwarding class sets, drop profiles for lossy traffic, queue schedulers, traffic control profiles, access and fabric interfaces, and PFC, follow the steps below. The corresponding statements appear in the CLI Quick Configuration sketch above, and a brief example follows the list:

  1. Configure the two lossless forwarding classes (priorities):

  2. Configure fc-sets (priority groups) to group forwarding classes (priorities) that require similar CoS treatment:

  3. Configure the drop profile for the best-effort low loss-priority queue:

  4. Configure the drop profile for the best-effort high loss-priority queue:

  5. Configure the drop profile for the network-control queue:

  6. Configure the scheduler that defines the minimum guaranteed bandwidth, priority, maximum bandwidth, and drop profiles for the best-effort queue:

  7. Configure the scheduler that defines the minimum guaranteed bandwidth, priority, maximum bandwidth, and drop profile for the network-control queue:

  8. Configure the scheduler that defines the minimum guaranteed bandwidth, priority, and maximum bandwidth for the FCoE queue:

  9. Configure the scheduler that defines the minimum guaranteed bandwidth, priority, and maximum bandwidth for the no-loss queue:

  10. Configure the scheduler that defines the minimum guaranteed bandwidth, priority, and maximum bandwidth for the mcast queue:

  11. Map the schedulers to the appropriate forwarding classes:

  12. Define the traffic control profile for the best-effort priority group (queue to scheduler mapping, minimum guaranteed bandwidth, and maximum bandwidth):

  13. Define the traffic control profile for the guaranteed delivery priority group (queue to scheduler mapping, minimum guaranteed bandwidth, and maximum bandwidth):

  14. Define the traffic control profile for the multidestination priority group (queue to scheduler mapping, minimum guaranteed bandwidth, and maximum bandwidth):

  15. Apply the three forwarding class sets and the appropriate traffic control profiles to the Node device ND1 access interfaces and fabric interface:

  16. Apply the three forwarding class sets and the appropriate traffic control profiles to the Node device ND2 access interfaces and fabric interface:

  17. Configure a congestion notification profile to enable PFC on the FCoE and no-loss queue IEEE 802.1 code points:

  18. Apply the PFC configuration to the access interfaces on Node device ND1:

  19. Apply the PFC configuration to the access interfaces on Node device ND2:
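
Each step above corresponds to one group of statements in the CLI Quick Configuration sketch earlier in this topic. For example, under the same hedged reconstruction, step 1 (the lossless forwarding classes) and step 17 (the congestion notification profile) correspond to:

    set class-of-service forwarding-classes class fcoe queue-num 3 no-loss
    set class-of-service forwarding-classes class no-loss queue-num 4 no-loss
    set class-of-service congestion-notification-profile nl-cnp input ieee-802.1 code-point 011 pfc
    set class-of-service congestion-notification-profile nl-cnp input ieee-802.1 code-point 100 pfc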

Configuring Interconnect Device ICD1

Step-by-Step Procedure

To perform a step-by-step configuration of drop profiles for lossy traffic, queue schedulers, and fabric and Clos fabric interfaces, follow the steps below. The corresponding statements appear in the CLI Quick Configuration sketch above, and a brief example follows the list:

  1. Configure the drop profile for the best-effort low loss-priority queue:

  2. Configure the drop profile for the best-effort high loss-priority queue:

  3. Configure the fabric scheduler that defines the minimum guaranteed bandwidth, maximum bandwidth, and drop profiles for the best-effort (fabric_fcset_be) queue:

  4. Configure the fabric scheduler that defines the minimum guaranteed bandwidth and maximum bandwidth for the FCoE (fabric_fcset_noloss1) queue:

  5. Configure the fabric scheduler that defines the minimum guaranteed bandwidth and maximum bandwidth for the no-loss (fabric_fcset_noloss2) queue:

  6. Configure the fabric scheduler that defines the minimum guaranteed bandwidth and maximum bandwidth for the multidestination traffic (fabric_fcset_mcast1) queue:

  7. Map the fabric schedulers to the appropriate fabric fc-sets in the fabric forwarding class scheduler map:

  8. To configure scheduling on the interfaces, apply the scheduler map to the Interconnect device fabric interfaces and Clos fabric interfaces:
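
Each step above corresponds to one group of statements in the Interconnect device portion of the CLI Quick Configuration sketch. For example, step 7 (mapping fabric schedulers to fabric fc-sets) corresponds to the following statements, whose path, as noted earlier, is an assumption to verify in the CLI:

    set class-of-service scheduler-maps fab-traffic-map forwarding-class-set fabric_fcset_be scheduler fab-be-sched
    set class-of-service scheduler-maps fab-traffic-map forwarding-class-set fabric_fcset_noloss1 scheduler fab-fcoe-sched
    set class-of-service scheduler-maps fab-traffic-map forwarding-class-set fabric_fcset_noloss2 scheduler fab-nl-sched
    set class-of-service scheduler-maps fab-traffic-map forwarding-class-set fabric_fcset_mcast1 scheduler fab-mcast-sched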

Results

Display the results of the CoS configuration on QFX3500 Node devices ND1 and ND2. The system shows only the explicitly configured parameters; it does not show default parameters such as the classifier configuration or the default forwarding classes. In this example, the three lossy forwarding classes (best-effort, network-control, and mcast) are not shown because the example uses the default configuration for these forwarding classes. The results on both Node devices are similar, except the interface names are different because the interface names include the Node device name. The results below are for Node device ND1:
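
The configuration listing itself is not reproduced here. As an abridged sketch, assuming the reconstructed commands shown earlier, show configuration class-of-service on ND1 would display stanzas along these lines (ellipses mark omitted detail):

    forwarding-classes {
        class fcoe queue-num 3 no-loss;
        class no-loss queue-num 4 no-loss;
    }
    forwarding-class-sets {
        best-effort-pg {
            class best-effort;
            class network-control;
        }
        ...
    }
    traffic-control-profiles {
        be-tcp {
            scheduler-map be-map;
            shaping-rate percent 100;
            guaranteed-rate percent 25;
        }
        ...
    }
    interfaces {
        ND1:xe-0/0/20 {
            congestion-notification-profile nl-cnp;
            forwarding-class-set {
                best-effort-pg {
                    output-traffic-control-profile be-tcp;
                }
                ...
            }
        }
        ...
    }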

Display the results of the CoS configuration on Interconnect device ICD1. The system shows only the explicitly configured parameters; it does not show default parameters:
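
Again as an abridged, hedged sketch based on the reconstructed commands (the fabric scheduler map and interface stanzas are omitted because their exact hierarchy is assumed rather than confirmed):

    drop-profiles {
        fab-dp-be-low {
            interpolate {
                fill-level [ 20 50 ];
                drop-probability [ 0 80 ];
            }
        }
        ...
    }
    schedulers {
        fab-be-sched {
            transmit-rate percent 25;
            shaping-rate percent 100;
            drop-profile-map loss-priority low protocol any drop-profile fab-dp-be-low;
            drop-profile-map loss-priority high protocol any drop-profile fab-dp-be-high;
        }
        ...
    }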

Verification

To verify that the hierarchical scheduling components have been created and are operating properly, perform these tasks:

Verifying Lossless Forwarding Class Configuration on the Node Devices

Purpose

On Node devices ND1 and ND2, verify that the two lossless forwarding classes (fcoe and no-loss) have been configured. The system shows only the explicitly configured forwarding classes, so the default configuration of the best-effort, network-control, and mcast forwarding classes is not shown.

Action

List the forwarding classes using the operational mode command show configuration class-of-service forwarding-classes:

Meaning

The show configuration class-of-service forwarding-classes command lists each of the configured forwarding classes, the queue to which the forwarding class is mapped, and whether the forwarding class has been configured to be lossless with the no-loss option. The command output shows that:

  • Forwarding class fcoe maps to queue 3 and is configured as a lossless queue with the no-loss option

  • Forwarding class no-loss maps to queue 4 and is configured as a lossless queue with the no-loss option

Verifying Forwarding Class Set Configuration on the Node Devices

Purpose

Verify that the correct forwarding classes belong to the appropriate fc-set.

Action

List the fc-sets on Node devices ND1 and ND2 using the operational mode command show class-of-service forwarding-class-set:

Meaning

The show class-of-service forwarding-class-set command lists all of the configured fc-sets (priority groups), the forwarding classes (priorities) that belong to each fc-set, and the internal index number of each fc-set. The command output shows that:

  • The fc-set best-effort-pg includes the forwarding classes best-effort and network-control.

  • The fc-set noloss-pg includes the forwarding classes fcoe and no-loss.

  • The fc-set multidestination-pg includes the forwarding class mcast.

Verifying Drop Profile Configuration on the Node Devices

Purpose

On Node devices ND1 and ND2, verify that the drop profiles dp-be-low, dp-be-high, and dp-nc are configured with the correct fill levels and drop probabilities.

Action

On Node devices ND1 and ND2, list the drop profiles using the operational mode command show configuration class-of-service drop-profiles:

Meaning

The show configuration class-of-service drop-profiles command lists the drop profiles and their properties. The command output shows that there are three drop profiles configured, dp-be-low, dp-be-high, and dp-nc. The output also shows that:

  • For dp-be-low, the drop start point (the first fill level) is when the queue is 25 percent filled, the drop end point (the second fill level) occurs when the queue is 50 percent filled, and the drop probability at the drop end point is 80 percent.

  • For dp-be-high, the drop start point (the first fill level) is when the queue is 10 percent filled, the drop end point (the second fill level) occurs when the queue is 40 percent filled, and the drop probability at the drop end point is 100 percent.

  • For dp-nc, the drop start point (the first fill level) is when the queue is 75 percent filled, the drop end point (the second fill level) occurs when the queue is 100 percent filled, and the drop probability at the drop end point is 50 percent.

Verifying Drop Profile Configuration on the Interconnect Device

Purpose

On Interconnect device ICD1, verify that drop profiles fab-dp-be-low and fab-dp-be-high are configured with the correct fill levels and drop probabilities.

Action

List the drop profiles using the operational mode command show configuration class-of-service drop-profiles:

Meaning

The show configuration class-of-service drop-profiles command lists the drop profiles and their properties. The command output shows that there are two drop profiles configured, fab-dp-be-low and fab-dp-be-high. The output also shows that:

  • For fab-dp-be-low, the drop start point (the first fill level) is when the queue is 20 percent filled, the drop end point (the second fill level) occurs when the queue is 50 percent filled, and the drop probability at the drop end point is 80 percent.

  • For fab-dp-be-high, the drop start point (the first fill level) is when the queue is 5 percent filled, the drop end point (the second fill level) occurs when the queue is 35 percent filled, and the drop probability at the drop end point is 100 percent.

Verifying Queue Scheduler Configuration and Mapping on the Node Devices

Purpose

Verify that the Node device ND1 and ND2 queue schedulers are configured with the correct bandwidth parameters and priorities, mapped to the correct forwarding classes and queues, and mapped to the correct drop profiles.

Action

List the scheduler maps using the operational mode command show class-of-service scheduler-map:

Meaning

The show class-of-service scheduler-map command lists the three configured scheduler maps. For each scheduler map, the command output includes:

  • The name of the scheduler map (Scheduler map field)

  • The name of the scheduler (Scheduler field)

  • The forwarding classes mapped to the scheduler (Forwarding class field)

  • The minimum guaranteed queue bandwidth (Transmit rate field)

  • The scheduling priority (Priority field)

  • The maximum bandwidth in the priority group that the queue can consume (Shaping rate field)

  • The drop profile loss priority (Loss priority field) for each drop profile name (Name field)

The command output shows that:

  • The scheduler map be-map has been created and has these properties:

    • There are two schedulers, be-sched and nc-sched.

    • The scheduler be-sched has one forwarding class, best-effort.

    • In scheduler be-sched, the best-effort forwarding class has a minimum guaranteed bandwidth of 90 percent, can consume a maximum of 100 percent of the priority group bandwidth, and uses the drop profile dp-be-low for low loss-priority traffic, the default drop profile for medium-high loss-priority traffic, and the drop profile dp-be-high for high loss-priority traffic.

    • The scheduler nc-sched has one forwarding class, network-control.

    • The network-control forwarding class has a minimum guaranteed bandwidth of 10 percent, can consume a maximum of 100 percent of the priority group bandwidth, and uses the drop profile dp-nc for low loss-priority traffic and the default drop profile for medium-high and high loss priority traffic.

  • The scheduler map nl-map has been created and has these properties:

    • There are two schedulers, fcoe-sched and nl-sched.

    • The scheduler fcoe-sched has one forwarding class, fcoe.

    • The fcoe forwarding class has a minimum guaranteed bandwidth of 60 percent, and can consume a maximum of 100 percent of the priority group bandwidth.

    • The scheduler nl-sched has one forwarding class, no-loss.

    • The no-loss forwarding class has a minimum guaranteed bandwidth of 40 percent, and can consume a maximum of 100 percent of the priority group bandwidth.

  • The scheduler map mcast-map has been created and has these properties:

    • There is one scheduler, mcast-sched.

    • The scheduler mcast-sched has one forwarding class, mcast.

    • The mcast forwarding class has a minimum guaranteed bandwidth of 100 percent, and can consume a maximum of 100 percent of the priority group bandwidth.

Verifying Fabric Queue Scheduler Configuration and Mapping on the Interconnect Device

Purpose

Verify that the Interconnect device ICD1 fabric queue schedulers are configured with the correct bandwidth parameters, mapped to the correct fabric fc-sets, and mapped to the correct drop profiles.

Action

List the fabric scheduler maps using the operational mode command show class-of-service scheduler-map-forwarding-class-sets:

Meaning

The show class-of-service scheduler-map-forwarding-class-sets command lists the configured fabric scheduler map. The command output includes:

  • The name of the fabric scheduler map (Scheduler map forwarding class set field)

  • The name of the fabric scheduler (Scheduler field)

  • The fabric fc-sets mapped to the scheduler (Forwarding class set field)

  • The minimum guaranteed queue bandwidth (Transmit rate field)

  • The maximum bandwidth in the priority group that the queue can consume (Shaping rate field)

  • The drop profile loss priority (Loss priority field) for each drop profile name (Name field)

The command output shows that:

  • The fabric scheduler map fab-traffic-map has been created and has these properties:

    • There are four fabric schedulers, fab-be-sched, fab-fcoe-sched, fab-nl-sched, and fab-mcast-sched.

    • The fabric scheduler fab-be-sched has one fabric fc-set, fabric_fcset_be.

      The fabric fc-set fabric_fcset_be has a minimum guaranteed bandwidth of 25 percent, can consume a maximum of 100 percent of the priority group bandwidth, and uses the drop profile fab-dp-be-low for low loss-priority traffic, the default drop profile for medium-high loss-priority traffic, and the drop profile fab-dp-be-high for high loss-priority traffic.

    • The fabric scheduler fab-fcoe-sched has one fabric fc-set, fabric_fcset_noloss1.

      The fabric_fcset_noloss1 fabric fc-set has a minimum guaranteed bandwidth of 30 percent, and can consume a maximum of 100 percent of the priority group bandwidth.

    • The fabric scheduler fab-nl-sched has one fabric fc-set, fabric_fcset_noloss2.

      The fabric_fcset_noloss2 fabric fc-set has a minimum guaranteed bandwidth of 25 percent, and can consume a maximum of 100 percent of the priority group bandwidth.

    • The fabric scheduler fab-mcast-sched has one fabric fc-set, fabric_fcset_mcast1.

      The fabric_fcset_mcast1 fabric fc-set has a minimum guaranteed bandwidth of 20 percent, and can consume a maximum of 100 percent of the priority group bandwidth.

Verifying Traffic Control Profile Configuration on the Node Devices

Purpose

Verify that the traffic control profiles (priority groups) be-tcp, nl-tcp, and mcast-tcp have been created with the correct bandwidth parameters and scheduler mapping.

Action

List the traffic control profiles using the operational mode command show class-of-service traffic-control-profile:

Meaning

The show class-of-service traffic-control-profile command lists all of the configured traffic control profiles. For each traffic control profile, the command output includes:

  • The name of the traffic control profile (Traffic control profile)

  • The maximum port bandwidth the priority group can consume (Shaping rate)

  • The scheduler map associated with the traffic control profile (Scheduler map)

  • The minimum guaranteed priority group port bandwidth (Guaranteed rate)

The command output shows that:

  • The traffic control profile be-tcp can consume a maximum of 100 percent of the port bandwidth, is associated with the scheduler map be-map, and has a minimum guaranteed bandwidth of 25 percent of port bandwidth.

  • The traffic control profile nl-tcp can consume a maximum of 100 percent of the port bandwidth, is associated with the scheduler map nl-map, and has a minimum guaranteed bandwidth of 50 percent.

  • The traffic control profile mcast-tcp can consume a maximum of 100 percent of the port bandwidth, is associated with the scheduler map mcast-map, and has a minimum guaranteed bandwidth of 25 percent.

Verifying That PFC Is Enabled on Lossless Queues on the Node Devices

Purpose

Verify that PFC is enabled on the correct queues (as mapped to IEEE 802.1p priorities in the forwarding class configuration) for lossless transport.

Action

List the congestion notification profiles using the operational mode command show class-of-service congestion-notification:

Meaning

The show class-of-service congestion-notification command lists all of the congestion notification profiles and the IEEE 802.1p code points with PFC enabled. The command output shows that PFC is enabled for code points 011 (fcoe queue) and 100 (no-loss queue) for the nl-cnp congestion notification profile.

Verifying Access and Fabric Interface Scheduling Configuration on the Node Devices

Purpose

Verify that the correct fc-sets, traffic control profiles, and congestion notification profiles are mapped to the correct interfaces on Node devices ND1 and ND2.

Action

List the interfaces on Node devices ND1 and ND2 using the operational mode command show configuration class-of-service interfaces. For example, the output on Node device ND1 shows:

Meaning

The show configuration class-of-service interfaces command shows that the fc-sets and (output) traffic control profiles mapped to the interfaces are:

  • best-effort-pg fc-set with be-tcp traffic control profile

  • noloss-pg fc-set with nl-tcp traffic control profile

  • multidestination-pg fc-set with mcast-tcp traffic control profile

The command also shows that the access interfaces include the congestion notification profile nl-cnp to enable PFC on the IEEE 802.1p code points of lossless traffic.

Verifying Fabric Interface Scheduling Configuration on the Interconnect Device

Purpose

Verify that the correct fabric scheduler maps are associated with the correct fabric and Clos fabric interfaces on Interconnect device ICD1.

Action

List the interfaces using the operational mode command show configuration class-of-service interfaces:

Meaning

The show configuration class-of-service interfaces command shows that the same fabric scheduler map, fab-traffic-map, is applied to all of the fabric and Clos fabric interfaces.