Example: Configuring CoS Scheduling Across the QFabric System
If you do not want to use the default class of service (CoS) scheduling of traffic across the QFabric system, then in addition to configuring CoS on Node device access interfaces, you can configure two-tier hierarchical scheduling on the fabric interfaces of the QFabric system. Configuring CoS on the fabric interfaces provides more control over CoS across the QFabric system and helps to ensure predictable bandwidth consumption across the fabric path.
Requirements
This example uses the following hardware and software components:
Juniper Networks QFabric System with two Juniper Networks QFX3500 Node devices
Junos OS Release 12.3 or later for the QFX Series
Overview
Configuring CoS across the QFabric system enables you to control scheduling resources as traffic passes through each type of interface. You can configure CoS on the following QFabric system interface types:
Node device access interfaces (xe interfaces)—Schedule traffic on the output queues of the 10-Gigabit Ethernet access ports, using standard Node device CoS scheduling configuration components, as described elsewhere in the QFX Series documentation. You can configure different scheduling for different ports and queues.
Node device fabric interfaces (fte interfaces)—Schedule traffic on the output queues of the 40-Gbps fabric interfaces that connect a Node device to a QFX3008-I or a QFX3600-I Interconnect device using standard Node device CoS scheduling configuration components. You can configure different scheduling for different interfaces and output queues.
Interconnect device fabric interfaces (fte interfaces)—Schedule traffic on the output queues of the 40-Gbps fabric interfaces that connect an Interconnect device to a Node device. You can configure different scheduling for different interfaces and fabric forwarding class sets (fabric fc-sets).
Interconnect device internal Clos fabric interfaces (bfte interfaces)—Schedule traffic on the internal 40-Gbps Clos fabric interfaces that connect the three stages of the Clos fabric within the Interconnect device. You can configure one Clos fabric interface scheduler, which is applied to all of the internal Clos fabric interfaces. You cannot configure different schedulers for different Clos fabric interfaces.
This example shows you how to configure hierarchical port scheduling across the QFabric system, including the configuration of Node device access interfaces, Node device fabric interfaces, Interconnect device fabric interfaces, and internal Interconnect device Clos fabric interfaces.
Configuring CoS on Interconnect device fabric interfaces differs from configuring CoS on Node device interfaces because the Interconnect device is a shared infrastructure that supports traffic from multiple Node devices and multiple Node device CoS configurations. Take the amounts and types of traffic traversing the Interconnect device into account when you configure CoS on Interconnect device interfaces.
Configuring scheduling across the QFabric system entails configuring interfaces on Node devices and Interconnect devices. You configure some or all of the following CoS components on each interface, depending upon the interface type (access, Node fabric, Interconnect fabric, or Interconnect Clos fabric):
Mapping forwarding classes to priorities (IEEE 802.1p code points) and queues, and configuring lossless forwarding classes
Defining fc-sets (priority groups)
Defining drop profiles
Defining schedulers
Mapping forwarding classes to schedulers (scheduler map on Node devices, fabric scheduler map on Interconnect devices)
Defining traffic control profiles
Configuring a congestion notification profile to enable priority-based flow control (PFC) on lossless forwarding classes (priorities) (Node device access interfaces only)
Applying congestion notification profiles to interfaces (Node device access interfaces only)
Assigning fc-sets and traffic control profiles to interfaces (Node device interfaces only) or assigning fabric scheduler maps to interfaces (Interconnect device interfaces only)
This example uses the default behavior aggregate classifiers on the Node device access interfaces. Classifiers are not applied to fabric interfaces. Although packet classification is not scheduling, it controls the forwarding class mapping to IEEE 802.1p priorities, and the loss priorities to which packets are mapped when they enter Node device access ports.
When you plan port bandwidth scheduling for priority groups (fc-sets on Node devices and class groups on Interconnect devices) and priorities (forwarding classes on Node devices and fabric fc-sets on Interconnect devices), take into account:
The amounts and types of traffic you expect to traverse the Node device interfaces
The amounts and types of aggregated traffic from all of the connected Node devices that you expect to traverse the Interconnect device interfaces
The mapping of priorities into priority groups. Traffic that requires similar treatment usually belongs in the same priority group. To do this on Node devices, place forwarding classes that require similar bandwidth, loss priority, and other characteristics in the same fc-set. For example, you can map all types of best-effort traffic forwarding classes into one fc-set. On Interconnect devices, the default mapping of fabric fc-sets to class groups defines priority group membership and is not user-configurable.
How much of the port bandwidth you want to allocate to each priority group and to each of the priorities in each priority group. The following considerations apply to bandwidth allocation:
Estimate how much traffic you expect in each priority’s output queue (forwarding class on Node devices and fabric fc-set on Interconnect devices) and how much traffic you expect in each priority group (fc-set on Node devices and class group on Interconnect devices). The priority group traffic is the aggregated amount of traffic in the priorities that belong to the priority group.
On Node devices, the combined minimum guaranteed bandwidth of the priorities in a priority group should not exceed the minimum guaranteed bandwidth (guaranteed rate) of the priority group. (On Interconnect devices, class group bandwidth is derived from the bandwidth of the member fabric fc-sets, so the sum of the priority bandwidths cannot exceed the priority group bandwidth.) The transmit rate scheduler parameter defines the minimum guaranteed bandwidth for priorities (forwarding classes and fabric fc-sets). Scheduler maps associate schedulers with forwarding classes (Node devices) and fabric scheduler maps associate schedulers with fabric fc-sets (Interconnect devices).
The combined minimum guaranteed bandwidth of all of the priority groups on an interface should not exceed the interface’s total bandwidth.
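As a concrete check, the rates configured later in this example satisfy these constraints. The figures below simply restate the example's configured percentages; only the arithmetic is added:

```
Priority groups (traffic control profile guaranteed rates):
  be-tcp 25% + nl-tcp 50% + mcast-tcp 25% = 100%  (does not exceed interface bandwidth)

Priorities within each priority group (scheduler transmit rates):
  best-effort-pg:       be-sched 90% + nc-sched 10%  = 100% of the group bandwidth
  noloss-pg:            fcoe-sched 60% + nl-sched 40% = 100% of the group bandwidth
  multidestination-pg:  mcast-sched 100%
```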
Topology
Figure 1 shows the network topology used in this example.

Table 1 and Table 2 describe the scheduling configuration components on the Node device and the Interconnect device.
To simplify Node device configuration, this example uses the same scheduling configuration on the access interfaces and the fabric interfaces of both QFabric Node devices. This is possible because the scheduler (forwarding-class scheduling) and traffic control profile (fc-set scheduling) rates are specified as percentages of bandwidth instead of as absolute values, so the schedulers and traffic control profiles utilize the port bandwidth in the same way regardless of the absolute amount of available bandwidth. If you want to treat traffic differently on different interfaces or on different interface types, you can configure different schedulers and traffic control profiles and apply them to the interfaces.
Table 1 shows the scheduling configuration components for Node device interfaces.
Hardware
  Two QFX3500 Node devices in a QFabric system.

Forwarding classes
  This example uses five forwarding classes. Three of them (best-effort, network-control, and mcast) use the default configuration. Best-effort traffic is classified into low loss priority and high loss priority by IEEE 802.1p classifiers at the Node device ingress interfaces. The other two forwarding classes (fcoe and no-loss) are configured as lossless forwarding classes.
  Note: Starting with Junos OS Release 12.3, you must include the no-loss option in the forwarding-class configuration to make a forwarding class lossless.

Forwarding class sets (priority groups)
  best-effort-pg: contains the forwarding classes best-effort and network-control
  noloss-pg: contains the forwarding classes fcoe and no-loss
  multidestination-pg: contains the forwarding class mcast

Drop profiles
  This example uses the following drop profiles for lossy traffic classes: dp-be-low, dp-be-high, and dp-nc.
  Note: Lossless traffic (fcoe and no-loss forwarding classes) and multidestination traffic do not use drop profiles.

Queue (forwarding class) schedulers
  Schedulers configure the bandwidth characteristics of forwarding classes, which are mapped to output queues and to IEEE 802.1p CoS priorities. This example uses the schedulers be-sched, nc-sched, fcoe-sched, nl-sched, and mcast-sched.
  Note: If you want to specify absolute values instead of percentages for the transmit rate and the shaping rate, create separate schedulers for access and fabric interfaces, because access interfaces are 10-Gigabit Ethernet interfaces and fabric interfaces are 40-Gbps interfaces.

Forwarding class to scheduler mapping
  be-map: maps best-effort to be-sched and network-control to nc-sched
  nl-map: maps fcoe to fcoe-sched and no-loss to nl-sched
  mcast-map: maps mcast to mcast-sched

Priority group (fc-set) traffic control profiles
  Traffic control profiles configure the bandwidth for fc-sets (priority groups) and control the amount of port bandwidth allocated to the forwarding classes in the fc-sets. This example uses the traffic control profiles be-tcp, nl-tcp, and mcast-tcp.

Hierarchical scheduling (fc-sets and traffic control profiles) association with interfaces
  Apply the fc-sets and traffic control profiles to the interfaces of both Node devices: ND1:xe-0/0/20, ND1:xe-0/0/21, ND1:fte-0/1/0, ND2:xe-0/0/20, ND2:xe-0/0/21, and ND2:fte-0/1/0.

PFC (access interfaces only; do not apply PFC to fabric interfaces)
  Code points: 011 (fcoe forwarding class traffic priority) and 100 (no-loss forwarding class traffic priority)
  Congestion notification profile name: nl-cnp
  Enabled on interfaces: ND1:xe-0/0/20, ND1:xe-0/0/21, ND2:xe-0/0/20, and ND2:xe-0/0/21
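As noted above, specifying absolute rates requires separate schedulers for the 10-Gigabit Ethernet access interfaces and the 40-Gbps fabric interfaces. A minimal sketch of that approach follows; the scheduler names (be-sched-access, be-sched-fabric) and the rate values are hypothetical, chosen only to illustrate rates proportional to the two port speeds:

```
[edit class-of-service]
set schedulers be-sched-access transmit-rate 9g
set schedulers be-sched-fabric transmit-rate 36g
```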
To simplify Interconnect device configuration, this example uses the same scheduling configuration on the fabric interfaces and the Clos fabric interfaces. If you want to treat traffic differently on different fabric interfaces or on different fabric interface types, you can configure different fabric schedulers, map them to fabric fc-sets, and apply them to the interfaces. (You can apply different mappings of schedulers to fabric fc-sets on different interfaces.)
On Interconnect devices, the network-control forwarding class is mapped by default to the strict-high priority fabric fc-set (fabric_fcset_strict_high). The strict-high priority fabric fc-set receives all of the port bandwidth it needs to service strict-high priority traffic. You can configure a scheduler with a shaping rate (maximum bandwidth) and a drop profile to limit the bandwidth available to the strict-high priority fabric fc-set, if desired. The available fabric port bandwidth for all other traffic in all other fabric fc-sets is the bandwidth that remains after the interface services the strict-high priority traffic.
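A sketch of such a limit on the strict-high priority fabric fc-set, assuming a hypothetical scheduler name (fab-nc-sched) and an illustrative 5 percent cap:

```
[edit class-of-service]
set schedulers fab-nc-sched shaping-rate percent 5
set scheduler-map-forwarding-class-sets fab-traffic-map forwarding-class-set fabric_fcset_strict_high scheduler fab-nc-sched
```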
Table 2 shows the scheduling configuration components for Interconnect device interfaces:
Hardware
  One QFabric Interconnect device connected to two QFX3500 Node devices in a QFabric system.

Forwarding classes
  Interconnect devices use the forwarding classes defined on the connected Node devices. The forwarding classes are mapped by default to fabric fc-sets on the Interconnect device.
  Note: If you do not want to use the default forwarding class to fabric fc-set mapping, you can configure the mapping. Forwarding class to fabric fc-set mapping is global and applies to all traffic that crosses the Interconnect device.

Fabric fc-sets
  This example uses four of the default fabric fc-sets, with the default mapping of forwarding classes to fabric fc-sets: fabric_fcset_be, fabric_fcset_noloss1, fabric_fcset_noloss2, and fabric_fcset_mcast1.

Class groups (priority groups)
  The three default class groups and the fabric fc-set membership in the class groups are not user-configurable.

Drop profiles
  This example uses the following drop profiles for lossy traffic classes: fab-dp-be-low and fab-dp-be-high.
  Note: Lossless traffic (fabric_fcset_noloss1 and fabric_fcset_noloss2) and multidestination traffic do not use drop profiles.

Queue (fabric fc-set) fabric schedulers
  Schedulers configure the bandwidth for fabric fc-sets, which are mapped to output queues and to IEEE 802.1p CoS priorities. The sum of the minimum guaranteed bandwidths (transmit rates) of the fabric fc-sets in a class group equals the total minimum guaranteed port bandwidth of the class group. The sum of all of the fabric fc-set transmit rates in all of the class groups equals the percentage of available port bandwidth allocated to the class groups, and must be less than or equal to 100 percent. This example uses the fabric schedulers fab-be-sched, fab-fcoe-sched, fab-nl-sched, and fab-mcast-sched.

Fabric fc-set to scheduler mapping (fabric scheduler map)
  fab-traffic-map: maps fabric_fcset_be to fab-be-sched, fabric_fcset_noloss1 to fab-fcoe-sched, fabric_fcset_noloss2 to fab-nl-sched, and fabric_fcset_mcast1 to fab-mcast-sched

Applying hierarchical scheduling (fabric scheduler map) to interfaces
  Fabric interfaces: ICD1:fte-0/0/3 and ICD1:fte-1/0/7
  Clos fabric interfaces: ICD1:bfte-*/*/*
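The fabric scheduler transmit rates configured in this example illustrate the constraint described above (the values restate the example's configuration; only the arithmetic is added):

```
fab-be-sched 25% + fab-fcoe-sched 30% + fab-nl-sched 25% + fab-mcast-sched 20% = 100%
```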
Configuration
The configuration example is split into two parts, one part for Node device scheduling configuration and one part for Interconnect device scheduling configuration. Although this example uses the same scheduling on Node device access and fabric interfaces, you can configure different schedulers for different interfaces. This example also uses the same scheduling on Interconnect device fabric and Clos fabric interfaces, and you can configure different schedulers for different interfaces.
To configure scheduling across a QFabric system, perform these tasks:
- CLI Quick Configuration
- Configuring QFX3500 Node Devices ND1 and ND2
- Configuring QFX3500 Interconnect Device ICD1
- Results
CLI Quick Configuration
Node device configuration: To quickly configure scheduling across a QFabric system, copy the following commands, paste them into a text file, remove line breaks, change variables and details to match your network configuration, and then copy and paste the commands into the CLI for Node devices ND1 and ND2 at the [edit] hierarchy level. In this example, we use identical scheduling and interfaces on Node devices ND1 and ND2 to simplify the configuration.
[edit class-of-service]
set forwarding-classes class fcoe queue-num 3 no-loss
set forwarding-classes class no-loss queue-num 4 no-loss
set forwarding-class-sets best-effort-pg class best-effort
set forwarding-class-sets best-effort-pg class network-control
set forwarding-class-sets noloss-pg class fcoe
set forwarding-class-sets noloss-pg class no-loss
set forwarding-class-sets multidestination-pg class mcast
set drop-profiles dp-be-low interpolate fill-level 25 fill-level 50 drop-probability 0 drop-probability 80
set drop-profiles dp-be-high interpolate fill-level 10 fill-level 40 drop-probability 0 drop-probability 100
set drop-profiles dp-nc interpolate fill-level 75 fill-level 100 drop-probability 0 drop-probability 50
set schedulers be-sched priority low transmit-rate percent 90
set schedulers be-sched shaping-rate percent 100
set schedulers be-sched drop-profile-map loss-priority low protocol any drop-profile dp-be-low
set schedulers be-sched drop-profile-map loss-priority high protocol any drop-profile dp-be-high
set schedulers nc-sched priority low transmit-rate percent 10
set schedulers nc-sched shaping-rate percent 100
set schedulers nc-sched drop-profile-map loss-priority low protocol any drop-profile dp-nc
set schedulers fcoe-sched priority low transmit-rate percent 60
set schedulers fcoe-sched shaping-rate percent 100
set schedulers nl-sched priority low transmit-rate percent 40
set schedulers nl-sched shaping-rate percent 100
set schedulers mcast-sched priority low transmit-rate percent 100
set schedulers mcast-sched shaping-rate percent 100
set scheduler-maps be-map forwarding-class best-effort scheduler be-sched
set scheduler-maps be-map forwarding-class network-control scheduler nc-sched
set scheduler-maps nl-map forwarding-class fcoe scheduler fcoe-sched
set scheduler-maps nl-map forwarding-class no-loss scheduler nl-sched
set scheduler-maps mcast-map forwarding-class mcast scheduler mcast-sched
set traffic-control-profiles be-tcp scheduler-map be-map guaranteed-rate percent 25
set traffic-control-profiles be-tcp shaping-rate percent 100
set traffic-control-profiles nl-tcp scheduler-map nl-map guaranteed-rate percent 50
set traffic-control-profiles nl-tcp shaping-rate percent 100
set traffic-control-profiles mcast-tcp scheduler-map mcast-map guaranteed-rate percent 25
set traffic-control-profiles mcast-tcp shaping-rate percent 100
set interfaces ND1:xe-0/0/20 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
set interfaces ND1:xe-0/0/20 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
set interfaces ND1:xe-0/0/20 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
set interfaces ND1:xe-0/0/21 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
set interfaces ND1:xe-0/0/21 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
set interfaces ND1:xe-0/0/21 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
set interfaces ND1:fte-0/1/0 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
set interfaces ND1:fte-0/1/0 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
set interfaces ND1:fte-0/1/0 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
set interfaces ND2:xe-0/0/20 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
set interfaces ND2:xe-0/0/20 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
set interfaces ND2:xe-0/0/20 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
set interfaces ND2:xe-0/0/21 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
set interfaces ND2:xe-0/0/21 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
set interfaces ND2:xe-0/0/21 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
set interfaces ND2:fte-0/1/0 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
set interfaces ND2:fte-0/1/0 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
set interfaces ND2:fte-0/1/0 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
set congestion-notification-profile nl-cnp input ieee-802.1 code-point 011 pfc
set congestion-notification-profile nl-cnp input ieee-802.1 code-point 100 pfc
set interfaces ND1:xe-0/0/20 congestion-notification-profile nl-cnp
set interfaces ND1:xe-0/0/21 congestion-notification-profile nl-cnp
set interfaces ND2:xe-0/0/20 congestion-notification-profile nl-cnp
set interfaces ND2:xe-0/0/21 congestion-notification-profile nl-cnp
Interconnect device configuration: To quickly configure scheduling across a QFabric system, copy the following commands, paste them into a text file, remove line breaks, change variables and details to match your network configuration, and then copy and paste the commands into the CLI for Interconnect device ICD1 at the [edit] hierarchy level. In this example, we use identical scheduling on the fabric interfaces and the Clos fabric interfaces to simplify the configuration. This configuration uses the default mapping of forwarding classes to fabric fc-sets.
[edit class-of-service]
set drop-profiles fab-dp-be-low interpolate fill-level 20 fill-level 50 drop-probability 0 drop-probability 80
set drop-profiles fab-dp-be-high interpolate fill-level 5 fill-level 35 drop-probability 0 drop-probability 100
set schedulers fab-be-sched transmit-rate percent 25
set schedulers fab-be-sched shaping-rate percent 100
set schedulers fab-be-sched drop-profile-map loss-priority low protocol any drop-profile fab-dp-be-low
set schedulers fab-be-sched drop-profile-map loss-priority high protocol any drop-profile fab-dp-be-high
set schedulers fab-fcoe-sched transmit-rate percent 30
set schedulers fab-fcoe-sched shaping-rate percent 100
set schedulers fab-nl-sched transmit-rate percent 25
set schedulers fab-nl-sched shaping-rate percent 100
set schedulers fab-mcast-sched transmit-rate percent 20
set schedulers fab-mcast-sched shaping-rate percent 100
set scheduler-map-forwarding-class-sets fab-traffic-map forwarding-class-set fabric_fcset_be scheduler fab-be-sched
set scheduler-map-forwarding-class-sets fab-traffic-map forwarding-class-set fabric_fcset_noloss1 scheduler fab-fcoe-sched
set scheduler-map-forwarding-class-sets fab-traffic-map forwarding-class-set fabric_fcset_noloss2 scheduler fab-nl-sched
set scheduler-map-forwarding-class-sets fab-traffic-map forwarding-class-set fabric_fcset_mcast1 scheduler fab-mcast-sched
set interfaces ICD1:fte-0/0/3 scheduler-map-forwarding-class-sets fab-traffic-map
set interfaces ICD1:fte-1/0/7 scheduler-map-forwarding-class-sets fab-traffic-map
set interfaces ICD1:bfte-*/*/* scheduler-map-forwarding-class-sets fab-traffic-map
Configuring QFX3500 Node Devices ND1 and ND2
Step-by-Step Procedure
To perform a step-by-step configuration of lossless forwarding classes, forwarding class sets, drop profiles for lossy traffic, queue schedulers, traffic control profiles, access and fabric interfaces, and PFC:
Configure the two lossless forwarding classes (priorities):
[edit class-of-service]
user@switch# set forwarding-classes class fcoe queue-num 3 no-loss
user@switch# set forwarding-classes class no-loss queue-num 4 no-loss
Configure fc-sets (priority groups) to group forwarding classes (priorities) that require similar CoS treatment:
[edit class-of-service]
user@switch# set forwarding-class-sets best-effort-pg class best-effort
user@switch# set forwarding-class-sets best-effort-pg class network-control
user@switch# set forwarding-class-sets noloss-pg class fcoe
user@switch# set forwarding-class-sets noloss-pg class no-loss
user@switch# set forwarding-class-sets multidestination-pg class mcast
Configure the drop profile for the best-effort low loss-priority queue:
[edit class-of-service]
user@switch# set drop-profiles dp-be-low interpolate fill-level 25 fill-level 50 drop-probability 0 drop-probability 80
Configure the drop profile for the best-effort high loss-priority queue:
[edit class-of-service]
user@switch# set drop-profiles dp-be-high interpolate fill-level 10 fill-level 40 drop-probability 0 drop-probability 100
Configure the drop profile for the network-control queue:
[edit class-of-service]
user@switch# set drop-profiles dp-nc interpolate fill-level 75 fill-level 100 drop-probability 0 drop-probability 50
Configure the scheduler that defines the minimum guaranteed bandwidth, priority, maximum bandwidth, and drop profiles for the best-effort queue:
[edit class-of-service]
user@switch# set schedulers be-sched priority low transmit-rate percent 90
user@switch# set schedulers be-sched shaping-rate percent 100
user@switch# set schedulers be-sched drop-profile-map loss-priority low protocol any drop-profile dp-be-low
user@switch# set schedulers be-sched drop-profile-map loss-priority high protocol any drop-profile dp-be-high
Configure the scheduler that defines the minimum guaranteed bandwidth, priority, maximum bandwidth, and drop profile for the network-control queue:
[edit class-of-service]
user@switch# set schedulers nc-sched priority low transmit-rate percent 10
user@switch# set schedulers nc-sched shaping-rate percent 100
user@switch# set schedulers nc-sched drop-profile-map loss-priority low protocol any drop-profile dp-nc
Configure the scheduler that defines the minimum guaranteed bandwidth, priority, and maximum bandwidth for the FCoE queue:
[edit class-of-service]
user@switch# set schedulers fcoe-sched priority low transmit-rate percent 60
user@switch# set schedulers fcoe-sched shaping-rate percent 100
Configure the scheduler that defines the minimum guaranteed bandwidth, priority, and maximum bandwidth for the no-loss queue:
[edit class-of-service]
user@switch# set schedulers nl-sched priority low transmit-rate percent 40
user@switch# set schedulers nl-sched shaping-rate percent 100
Configure the scheduler that defines the minimum guaranteed bandwidth, priority, and maximum bandwidth for the mcast queue:
[edit class-of-service]
user@switch# set schedulers mcast-sched priority low transmit-rate percent 100
user@switch# set schedulers mcast-sched shaping-rate percent 100
Map the schedulers to the appropriate forwarding classes:
[edit class-of-service]
user@switch# set scheduler-maps be-map forwarding-class best-effort scheduler be-sched
user@switch# set scheduler-maps be-map forwarding-class network-control scheduler nc-sched
user@switch# set scheduler-maps nl-map forwarding-class fcoe scheduler fcoe-sched
user@switch# set scheduler-maps nl-map forwarding-class no-loss scheduler nl-sched
user@switch# set scheduler-maps mcast-map forwarding-class mcast scheduler mcast-sched
Define the traffic control profile for the best-effort priority group (queue to scheduler mapping, minimum guaranteed bandwidth, and maximum bandwidth):
[edit class-of-service]
user@switch# set traffic-control-profiles be-tcp scheduler-map be-map guaranteed-rate percent 25
user@switch# set traffic-control-profiles be-tcp shaping-rate percent 100
Define the traffic control profile for the guaranteed delivery priority group (queue to scheduler mapping, minimum guaranteed bandwidth, and maximum bandwidth):
[edit class-of-service]
user@switch# set traffic-control-profiles nl-tcp scheduler-map nl-map guaranteed-rate percent 50
user@switch# set traffic-control-profiles nl-tcp shaping-rate percent 100
Define the traffic control profile for the multidestination priority group (queue to scheduler mapping, minimum guaranteed bandwidth, and maximum bandwidth):
[edit class-of-service]
user@switch# set traffic-control-profiles mcast-tcp scheduler-map mcast-map guaranteed-rate percent 25
user@switch# set traffic-control-profiles mcast-tcp shaping-rate percent 100
Apply the three forwarding class sets and the appropriate traffic control profiles to the Node device ND1 access interfaces and fabric interface:
[edit class-of-service]
user@switch# set interfaces ND1:xe-0/0/20 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
user@switch# set interfaces ND1:xe-0/0/20 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
user@switch# set interfaces ND1:xe-0/0/20 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
user@switch# set interfaces ND1:xe-0/0/21 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
user@switch# set interfaces ND1:xe-0/0/21 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
user@switch# set interfaces ND1:xe-0/0/21 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
user@switch# set interfaces ND1:fte-0/1/0 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
user@switch# set interfaces ND1:fte-0/1/0 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
user@switch# set interfaces ND1:fte-0/1/0 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
Apply the three forwarding class sets and the appropriate traffic control profiles to the Node device ND2 access interfaces and fabric interface:
[edit class-of-service]
user@switch# set interfaces ND2:xe-0/0/20 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
user@switch# set interfaces ND2:xe-0/0/20 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
user@switch# set interfaces ND2:xe-0/0/20 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
user@switch# set interfaces ND2:xe-0/0/21 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
user@switch# set interfaces ND2:xe-0/0/21 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
user@switch# set interfaces ND2:xe-0/0/21 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
user@switch# set interfaces ND2:fte-0/1/0 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
user@switch# set interfaces ND2:fte-0/1/0 forwarding-class-set noloss-pg output-traffic-control-profile nl-tcp
user@switch# set interfaces ND2:fte-0/1/0 forwarding-class-set multidestination-pg output-traffic-control-profile mcast-tcp
Configure a congestion notification profile to enable PFC on the FCoE and no-loss queue IEEE 802.1 code points:
[edit class-of-service]
user@switch# set congestion-notification-profile nl-cnp input ieee-802.1 code-point 011 pfc
user@switch# set congestion-notification-profile nl-cnp input ieee-802.1 code-point 100 pfc
Apply the PFC configuration to the access interfaces on Node device ND1:
[edit class-of-service]
user@switch# set interfaces ND1:xe-0/0/20 congestion-notification-profile nl-cnp
user@switch# set interfaces ND1:xe-0/0/21 congestion-notification-profile nl-cnp
Apply the PFC configuration to the access interfaces on Node device ND2:
[edit class-of-service]
user@switch# set interfaces ND2:xe-0/0/20 congestion-notification-profile nl-cnp
user@switch# set interfaces ND2:xe-0/0/21 congestion-notification-profile nl-cnp
Configuring QFX3500 Interconnect Device ICD1
Step-by-Step Procedure
To perform a step-by-step configuration of drop profiles for lossy traffic, queue schedulers, and fabric and Clos fabric interfaces:
Configure the drop profile for the best-effort low loss-priority queue:
[edit class-of-service]
user@switch# set drop-profiles fab-dp-be-low interpolate fill-level 20 fill-level 50 drop-probability 0 drop-probability 80
Configure the drop profile for the best-effort high loss-priority queue:
[edit class-of-service]
user@switch# set drop-profiles fab-dp-be-high interpolate fill-level 5 fill-level 35 drop-probability 0 drop-probability 100
Configure the fabric scheduler that defines the minimum guaranteed bandwidth, maximum bandwidth, and drop profiles for the best-effort (fabric_fcset_be) queue:
[edit class-of-service]
user@switch# set schedulers fab-be-sched transmit-rate percent 25
user@switch# set schedulers fab-be-sched shaping-rate percent 100
user@switch# set schedulers fab-be-sched drop-profile-map loss-priority low protocol any drop-profile fab-dp-be-low
user@switch# set schedulers fab-be-sched drop-profile-map loss-priority high protocol any drop-profile fab-dp-be-high
Configure the fabric scheduler that defines the minimum guaranteed bandwidth and maximum bandwidth for the FCoE (fabric_fcset_noloss1) queue:
[edit class-of-service]
user@switch# set schedulers fab-fcoe-sched transmit-rate percent 30
user@switch# set schedulers fab-fcoe-sched shaping-rate percent 100
Configure the fabric scheduler that defines the minimum guaranteed bandwidth and maximum bandwidth for the no-loss (fabric_fcset_noloss2) queue:
[edit class-of-service]
user@switch# set schedulers fab-nl-sched transmit-rate percent 25
user@switch# set schedulers fab-nl-sched shaping-rate percent 100
Configure the fabric scheduler that defines the minimum guaranteed bandwidth and maximum bandwidth for the multidestination traffic (fabric_fcset_mcast1) queue:
[edit class-of-service]
user@switch# set schedulers fab-mcast-sched transmit-rate percent 20
user@switch# set schedulers fab-mcast-sched shaping-rate percent 100
Map the fabric schedulers to the appropriate fabric fc-sets in the fabric forwarding class scheduler map:
[edit class-of-service]
user@switch# set scheduler-map-forwarding-class-sets fab-traffic-map forwarding-class-set fabric_fcset_be scheduler fab-be-sched
user@switch# set scheduler-map-forwarding-class-sets fab-traffic-map forwarding-class-set fabric_fcset_noloss1 scheduler fab-fcoe-sched
user@switch# set scheduler-map-forwarding-class-sets fab-traffic-map forwarding-class-set fabric_fcset_noloss2 scheduler fab-nl-sched
user@switch# set scheduler-map-forwarding-class-sets fab-traffic-map forwarding-class-set fabric_fcset_mcast1 scheduler fab-mcast-sched
To configure scheduling on the interfaces, apply the scheduler map to the Interconnect device fabric interfaces and Clos fabric interfaces:
[edit class-of-service]
user@switch# set interfaces ICD1:fte-0/0/3 scheduler-map-forwarding-class-sets fab-traffic-map
user@switch# set interfaces ICD1:fte-1/0/7 scheduler-map-forwarding-class-sets fab-traffic-map
user@switch# set interfaces ICD1:bfte-*/*/* scheduler-map-forwarding-class-sets fab-traffic-map
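The four transmit rates configured above act as minimum bandwidth guarantees for the fabric fc-sets, so together they should not oversubscribe the fabric link. The following sketch is illustrative only (it is not a Junos tool); the scheduler names and percentages mirror this example:

```python
# Minimum guaranteed (transmit-rate) and maximum (shaping-rate) bandwidth,
# in percent, for each fabric scheduler configured in this example.
fabric_schedulers = {
    "fab-be-sched":    {"transmit": 25, "shaping": 100},
    "fab-fcoe-sched":  {"transmit": 30, "shaping": 100},
    "fab-nl-sched":    {"transmit": 25, "shaping": 100},
    "fab-mcast-sched": {"transmit": 20, "shaping": 100},
}

def check_allocation(schedulers):
    """Return the total guaranteed bandwidth and whether it fits in the link."""
    total = sum(s["transmit"] for s in schedulers.values())
    return total, total <= 100

total, ok = check_allocation(fabric_schedulers)
print(total, ok)  # 100 True
```

Because every shaping rate is 100 percent, any fc-set can burst to the full fabric link bandwidth when the other fc-sets are idle; the transmit rates only bound what each fc-set is guaranteed under congestion.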
Results
Display the results of the CoS configuration on QFX3500 Node devices ND1 and ND2. The system shows only the explicitly configured parameters; it does not show default parameters such as the classifier configuration or the default forwarding classes. In this example, the three lossy forwarding classes (best-effort, network-control, and mcast) are not shown because the example uses the default configuration for these forwarding classes. The results on both Node devices are identical except for the interface names, which include the Node device name. The results below are for Node device ND1:
user@switch> show configuration class-of-service
drop-profiles {
    dp-be-low {
        interpolate {
            fill-level [ 25 50 ];
            drop-probability [ 0 80 ];
        }
    }
    dp-be-high {
        interpolate {
            fill-level [ 10 40 ];
            drop-probability [ 0 100 ];
        }
    }
    dp-nc {
        interpolate {
            fill-level [ 75 100 ];
            drop-probability [ 0 50 ];
        }
    }
}
forwarding-classes {
    class fcoe queue-num 3 no-loss;
    class no-loss queue-num 4 no-loss;
}
traffic-control-profiles {
    be-tcp {
        scheduler-map be-map;
        shaping-rate percent 100;
        guaranteed-rate percent 25;
    }
    nl-tcp {
        scheduler-map nl-map;
        shaping-rate percent 100;
        guaranteed-rate percent 50;
    }
    mcast-tcp {
        scheduler-map mcast-map;
        shaping-rate percent 100;
        guaranteed-rate percent 25;
    }
}
forwarding-class-sets {
    best-effort-pg {
        class best-effort;
        class network-control;
    }
    noloss-pg {
        class fcoe;
        class no-loss;
    }
    multidestination-pg {
        class mcast;
    }
}
congestion-notification-profile {
    nl-cnp {
        input {
            ieee-802.1 {
                code-point 011 {
                    pfc;
                }
                code-point 100 {
                    pfc;
                }
            }
        }
    }
}
interfaces {
    ND1:xe-0/0/20 {
        congestion-notification-profile nl-cnp;
        forwarding-class-set {
            best-effort-pg {
                output-traffic-control-profile be-tcp;
            }
            noloss-pg {
                output-traffic-control-profile nl-tcp;
            }
            multidestination-pg {
                output-traffic-control-profile mcast-tcp;
            }
        }
    }
    ND1:xe-0/0/21 {
        congestion-notification-profile nl-cnp;
        forwarding-class-set {
            best-effort-pg {
                output-traffic-control-profile be-tcp;
            }
            noloss-pg {
                output-traffic-control-profile nl-tcp;
            }
            multidestination-pg {
                output-traffic-control-profile mcast-tcp;
            }
        }
    }
    ND1:fte-0/1/0 {
        forwarding-class-set {
            best-effort-pg {
                output-traffic-control-profile be-tcp;
            }
            noloss-pg {
                output-traffic-control-profile nl-tcp;
            }
            multidestination-pg {
                output-traffic-control-profile mcast-tcp;
            }
        }
    }
}
scheduler-maps {
    be-map {
        forwarding-class best-effort scheduler be-sched;
        forwarding-class network-control scheduler nc-sched;
    }
    nl-map {
        forwarding-class fcoe scheduler fcoe-sched;
        forwarding-class no-loss scheduler nl-sched;
    }
    mcast-map {
        forwarding-class mcast scheduler mcast-sched;
    }
}
schedulers {
    be-sched {
        transmit-rate percent 90;
        shaping-rate percent 100;
        priority low;
        drop-profile-map loss-priority low protocol any drop-profile dp-be-low;
        drop-profile-map loss-priority high protocol any drop-profile dp-be-high;
    }
    fcoe-sched {
        transmit-rate percent 60;
        shaping-rate percent 100;
        priority low;
    }
    mcast-sched {
        transmit-rate percent 100;
        shaping-rate percent 100;
        priority low;
    }
    nc-sched {
        transmit-rate percent 10;
        shaping-rate percent 100;
        priority low;
        drop-profile-map loss-priority low protocol any drop-profile dp-nc;
    }
    nl-sched {
        transmit-rate percent 40;
        shaping-rate percent 100;
        priority low;
    }
}
Display the results of the CoS configuration on Interconnect device ICD1. The system shows only the explicitly configured parameters; it does not show default parameters:
user@switch> show configuration class-of-service
drop-profiles {
    fab-dp-be-low {
        interpolate {
            fill-level [ 20 50 ];
            drop-probability [ 0 80 ];
        }
    }
    fab-dp-be-high {
        interpolate {
            fill-level [ 5 35 ];
            drop-probability [ 0 100 ];
        }
    }
}
interfaces {
    ICD1:fte-0/0/3 {
        scheduler-map-forwarding-class-sets fab-traffic-map;
    }
    ICD1:fte-1/0/7 {
        scheduler-map-forwarding-class-sets fab-traffic-map;
    }
    ICD1:bfte-*/*/* {
        scheduler-map-forwarding-class-sets fab-traffic-map;
    }
}
scheduler-maps {
    fab-traffic-map {
        forwarding-class-set fabric_fcset_be scheduler fab-be-sched;
        forwarding-class-set fabric_fcset_noloss1 scheduler fab-fcoe-sched;
        forwarding-class-set fabric_fcset_noloss2 scheduler fab-nl-sched;
        forwarding-class-set fabric_fcset_mcast1 scheduler fab-mcast-sched;
    }
}
schedulers {
    fab-be-sched {
        transmit-rate percent 25;
        shaping-rate percent 100;
        drop-profile-map loss-priority low protocol any drop-profile fab-dp-be-low;
        drop-profile-map loss-priority high protocol any drop-profile fab-dp-be-high;
    }
    fab-fcoe-sched {
        transmit-rate percent 30;
        shaping-rate percent 100;
    }
    fab-nl-sched {
        transmit-rate percent 25;
        shaping-rate percent 100;
    }
    fab-mcast-sched {
        transmit-rate percent 20;
        shaping-rate percent 100;
    }
}
Verification
To verify that the hierarchical scheduling components have been created and are operating properly, perform these tasks:
- Verifying Lossless Forwarding Class Configuration on the Node Devices
- Verifying Forwarding Class Set Configuration on the Node Devices
- Verifying Drop Profile Configuration on the Node Devices
- Verifying Drop Profile Configuration on the Interconnect Device
- Verifying Queue Scheduler Configuration and Mapping on the Node Devices
- Verifying Fabric Queue Scheduler Configuration and Mapping on the Interconnect Device
- Verifying Traffic Control Profile Configuration on the Node Devices
- Verifying That PFC Is Enabled on Lossless Queues on the Node Devices
- Verifying Access and Fabric Interface Scheduling Configuration on the Node Devices
- Verifying Fabric Interface Scheduling Configuration on the Interconnect Device
Verifying Lossless Forwarding Class Configuration on the Node Devices
Purpose
On Node devices ND1 and ND2, verify that the two lossless forwarding classes (fcoe and no-loss) have been configured. The system shows only the explicitly configured forwarding classes, so the default configuration of the best-effort, network-control, and mcast forwarding classes is not shown.
Action
List the forwarding classes using the operational mode command show configuration class-of-service forwarding-classes:
user@switch> show configuration class-of-service forwarding-classes
class fcoe queue-num 3 no-loss;
class no-loss queue-num 4 no-loss;
Meaning
The show configuration class-of-service forwarding-classes command lists each of the configured forwarding classes, the queue to which the forwarding class is mapped, and whether the forwarding class has been configured to be lossless with the no-loss option. The command output shows that:
- Forwarding class fcoe maps to queue 3 and is configured as a lossless queue with the no-loss option.
- Forwarding class no-loss maps to queue 4 and is configured as a lossless queue with the no-loss option.
Verifying Forwarding Class Set Configuration on the Node Devices
Purpose
Verify that the correct forwarding classes belong to the appropriate fc-set.
Action
List the fc-sets on Node devices ND1 and ND2 using the operational mode command show class-of-service forwarding-class-set:
user@switch> show class-of-service forwarding-class-set
Forwarding class set: best-effort-pg, Type: normal-type, Forwarding class set index: 19907
  Forwarding class          Index
  best-effort                   0
  network-control               5
Forwarding class set: noloss-pg, Type: normal-type, Forwarding class set index: 43700
  Forwarding class          Index
  fcoe                          2
  no-loss                       3
Forwarding class set: multidestination-pg, Type: normal-type, Forwarding class set index: 60758
  Forwarding class          Index
  mcast                         4
Meaning
The show class-of-service forwarding-class-set command lists all of the configured fc-sets (priority groups), the forwarding classes (priorities) that belong to each fc-set, and the internal index number of each fc-set. The command output shows that:
- The fc-set best-effort-pg includes the forwarding classes best-effort and network-control.
- The fc-set noloss-pg includes the forwarding classes fcoe and no-loss.
- The fc-set multidestination-pg includes the forwarding class mcast.
Verifying Drop Profile Configuration on the Node Devices
Purpose
On Node devices ND1 and ND2, verify that the drop profiles dp-be-low, dp-be-high, and dp-nc are configured with the correct fill levels and drop probabilities.
Action
On Node devices ND1 and ND2, list the drop profiles using the operational mode command show configuration class-of-service drop-profiles:
user@switch> show configuration class-of-service drop-profiles
dp-be-low {
    interpolate {
        fill-level [ 25 50 ];
        drop-probability [ 0 80 ];
    }
}
dp-be-high {
    interpolate {
        fill-level [ 10 40 ];
        drop-probability [ 0 100 ];
    }
}
dp-nc {
    interpolate {
        fill-level [ 75 100 ];
        drop-probability [ 0 50 ];
    }
}
Meaning
The show configuration class-of-service drop-profiles command lists the drop profiles and their properties. The command output shows that three drop profiles are configured: dp-be-low, dp-be-high, and dp-nc. The output also shows that:
- For dp-be-low, the drop start point (the first fill level) is when the queue is 25 percent full, the drop end point (the second fill level) is when the queue is 50 percent full, and the drop probability at the drop end point is 80 percent.
- For dp-be-high, the drop start point (the first fill level) is when the queue is 10 percent full, the drop end point (the second fill level) is when the queue is 40 percent full, and the drop probability at the drop end point is 100 percent.
- For dp-nc, the drop start point (the first fill level) is when the queue is 75 percent full, the drop end point (the second fill level) is when the queue is 100 percent full, and the drop probability at the drop end point is 50 percent.
Verifying Drop Profile Configuration on the Interconnect Device
Purpose
On Interconnect device ICD1, verify that the drop profiles fab-dp-be-low and fab-dp-be-high are configured with the correct fill levels and drop probabilities.
Action
List the drop profiles using the operational mode command show configuration class-of-service drop-profiles:
user@switch> show configuration class-of-service drop-profiles
fab-dp-be-low {
    interpolate {
        fill-level [ 20 50 ];
        drop-probability [ 0 80 ];
    }
}
fab-dp-be-high {
    interpolate {
        fill-level [ 5 35 ];
        drop-probability [ 0 100 ];
    }
}
Meaning
The show configuration class-of-service drop-profiles command lists the drop profiles and their properties. The command output shows that two drop profiles are configured: fab-dp-be-low and fab-dp-be-high. The output also shows that:
- For fab-dp-be-low, the drop start point (the first fill level) is when the queue is 20 percent full, the drop end point (the second fill level) is when the queue is 50 percent full, and the drop probability at the drop end point is 80 percent.
- For fab-dp-be-high, the drop start point (the first fill level) is when the queue is 5 percent full, the drop end point (the second fill level) is when the queue is 35 percent full, and the drop probability at the drop end point is 100 percent.
Verifying Queue Scheduler Configuration and Mapping on the Node Devices
Purpose
Verify that the queue schedulers on Node devices ND1 and ND2 are configured with the correct bandwidth parameters and priorities, mapped to the correct forwarding classes and queues, and mapped to the correct drop profiles.
Action
List the scheduler maps using the operational mode command show class-of-service scheduler-map:
user@switch> show class-of-service scheduler-map
Scheduler map: be-map, Index: 64023

  Scheduler: be-sched, Forwarding class: best-effort, Index: 13005
    Transmit rate: 90 percent, Rate Limit: none, Buffer size: remainder,
    Buffer Limit: none, Priority: low
    Excess Priority: unspecified
    Shaping rate: 100 percent,
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any         55387    dp-be-low
      Medium high     any             1    <default-drop-profile>
      High            any          4369    dp-be-high

  Scheduler: nc-sched, Forwarding class: network-control, Index: 45740
    Transmit rate: 10 percent, Rate Limit: none, Buffer size: remainder,
    Buffer Limit: none, Priority: low
    Excess Priority: unspecified
    Shaping rate: 100 percent,
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any         44207    dp-nc
      Medium high     any             1    <default-drop-profile>
      High            any             1    <default-drop-profile>

Scheduler map: nl-map, Index: 61447

  Scheduler: fcoe-sched, Forwarding class: fcoe, Index: 37289
    Transmit rate: 60 percent, Rate Limit: none, Buffer size: remainder,
    Buffer Limit: none, Priority: low
    Excess Priority: unspecified
    Shaping rate: 100 percent,
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any         44207    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any             1    <default-drop-profile>

  Scheduler: nl-sched, Forwarding class: no-loss, Index: 29359
    Transmit rate: 40 percent, Rate Limit: none, Buffer size: remainder,
    Buffer Limit: none, Priority: low
    Excess Priority: unspecified
    Shaping rate: 100 percent,
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any         44207    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any             1    <default-drop-profile>

Scheduler map: mcast-map, Index: 63239

  Scheduler: mcast-sched, Forwarding class: mcast, Index: 29359
    Transmit rate: 100 percent, Rate Limit: none, Buffer size: remainder,
    Buffer Limit: none, Priority: low
    Excess Priority: unspecified
    Shaping rate: 100 percent,
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any             1    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any             1    <default-drop-profile>
Meaning
The show class-of-service scheduler-map command lists the three configured scheduler maps. For each scheduler map, the command output includes:
- The name of the scheduler map (Scheduler map field)
- The name of the scheduler (Scheduler field)
- The forwarding classes mapped to the scheduler (Forwarding class field)
- The minimum guaranteed queue bandwidth (Transmit rate field)
- The scheduling priority (Priority field)
- The maximum bandwidth in the priority group that the queue can consume (Shaping rate field)
- The drop profile loss priority (Loss priority field) for each drop profile name (Name field)
The command output shows that:
- The scheduler map be-map has been created and has these properties:
  - There are two schedulers, be-sched and nc-sched.
  - The scheduler be-sched has one forwarding class, best-effort.
  - The best-effort forwarding class has a minimum guaranteed bandwidth of 90 percent, can consume a maximum of 100 percent of the priority group bandwidth, and uses the drop profile dp-be-low for low loss-priority traffic, the default drop profile for medium-high loss-priority traffic, and the drop profile dp-be-high for high loss-priority traffic.
  - The scheduler nc-sched has one forwarding class, network-control.
  - The network-control forwarding class has a minimum guaranteed bandwidth of 10 percent, can consume a maximum of 100 percent of the priority group bandwidth, and uses the drop profile dp-nc for low loss-priority traffic and the default drop profile for medium-high and high loss-priority traffic.
- The scheduler map nl-map has been created and has these properties:
  - There are two schedulers, fcoe-sched and nl-sched.
  - The scheduler fcoe-sched has one forwarding class, fcoe.
  - The fcoe forwarding class has a minimum guaranteed bandwidth of 60 percent and can consume a maximum of 100 percent of the priority group bandwidth.
  - The scheduler nl-sched has one forwarding class, no-loss.
  - The no-loss forwarding class has a minimum guaranteed bandwidth of 40 percent and can consume a maximum of 100 percent of the priority group bandwidth.
- The scheduler map mcast-map has been created and has these properties:
  - There is one scheduler, mcast-sched.
  - The scheduler mcast-sched has one forwarding class, mcast.
  - The mcast forwarding class has a minimum guaranteed bandwidth of 100 percent and can consume a maximum of 100 percent of the priority group bandwidth.
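Because the scheduling is hierarchical, a queue's effective minimum guarantee on the port is the product of its traffic control profile's guaranteed rate and its scheduler's transmit rate within the priority group. The sketch below works through this example's values; the arithmetic is illustrative and the dictionary simply mirrors this configuration:

```python
# (guaranteed-rate of the traffic control profile as a percent of port bandwidth,
#  transmit-rate of the queue scheduler as a percent of the priority group)
hierarchy = {
    "best-effort":     (25, 90),   # be-tcp, be-sched
    "network-control": (25, 10),   # be-tcp, nc-sched
    "fcoe":            (50, 60),   # nl-tcp, fcoe-sched
    "no-loss":         (50, 40),   # nl-tcp, nl-sched
    "mcast":           (25, 100),  # mcast-tcp, mcast-sched
}

def effective_guarantee(group_pct, queue_pct):
    """Minimum guaranteed port bandwidth for a queue, in percent."""
    return group_pct * queue_pct / 100

for fc, (group, queue) in hierarchy.items():
    print(fc, effective_guarantee(group, queue))
# best-effort 22.5, network-control 2.5, fcoe 30.0, no-loss 20.0, mcast 25.0
```

The five effective guarantees sum to 100 percent of the port bandwidth, which is why none of the queues is starved under full congestion.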
Verifying Fabric Queue Scheduler Configuration and Mapping on the Interconnect Device
Purpose
Verify that the Interconnect device ICD1 fabric queue schedulers are configured with the correct bandwidth parameters, mapped to the correct fabric fc-sets, and mapped to the correct drop profiles.
Action
List the fabric scheduler maps using the operational mode command show class-of-service scheduler-map-forwarding-class-sets:
user@switch> show class-of-service scheduler-map-forwarding-class-sets
Scheduler map forwarding class set: fab-traffic-map, Index: 2

  Scheduler: fab-be-sched, Forwarding class set: fabric_fcset_be, Index: 21
    Transmit rate: 25 percent, Rate Limit: none, Buffer size: 25 percent,
    Buffer Limit: none, Priority: low
    Excess Priority: unspecified
    Shaping rate: 100 percent,
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any         55387    fab-dp-be-low
      Medium high     any             1    <default-drop-profile>
      High            any          4369    fab-dp-be-high

  Scheduler: fab-fcoe-sched, Forwarding class set: fabric_fcset_noloss1, Index: 23
    Transmit rate: 30 percent, Rate Limit: none, Buffer size: 30 percent,
    Buffer Limit: none, Priority: low
    Excess Priority: unspecified
    Shaping rate: 100 percent,
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any             1    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any             1    <default-drop-profile>

  Scheduler: fab-nl-sched, Forwarding class set: fabric_fcset_noloss2, Index: 27
    Transmit rate: 25 percent, Rate Limit: none, Buffer size: 25 percent,
    Buffer Limit: none, Priority: low
    Excess Priority: unspecified
    Shaping rate: 100 percent,
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any             1    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any             1    <default-drop-profile>

  Scheduler: fab-mcast-sched, Forwarding class set: fabric_fcset_mcast1, Index: 32
    Transmit rate: 20 percent, Rate Limit: none, Buffer size: remainder,
    Buffer Limit: none, Priority: low
    Excess Priority: unspecified
    Shaping rate: 100 percent,
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any             1    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any             1    <default-drop-profile>
Meaning
The show class-of-service scheduler-map-forwarding-class-sets command lists the configured fabric scheduler map. The command output includes:
- The name of the fabric scheduler map (Scheduler map forwarding class set field)
- The name of the fabric scheduler (Scheduler field)
- The fabric fc-sets mapped to the scheduler (Forwarding class set field)
- The minimum guaranteed queue bandwidth (Transmit rate field)
- The maximum bandwidth in the priority group that the queue can consume (Shaping rate field)
- The drop profile loss priority (Loss priority field) for each drop profile name (Name field)
The command output shows that:
- The fabric scheduler map fab-traffic-map has been created and has these properties:
  - There are four fabric schedulers: fab-be-sched, fab-fcoe-sched, fab-nl-sched, and fab-mcast-sched.
  - The fabric scheduler fab-be-sched has one fabric fc-set, fabric_fcset_be.
  - The fabric fc-set fabric_fcset_be has a minimum guaranteed bandwidth of 25 percent, can consume a maximum of 100 percent of the priority group bandwidth, and uses the drop profile fab-dp-be-low for low loss-priority traffic, the default drop profile for medium-high loss-priority traffic, and the drop profile fab-dp-be-high for high loss-priority traffic.
  - The fabric scheduler fab-fcoe-sched has one fabric fc-set, fabric_fcset_noloss1.
  - The fabric fc-set fabric_fcset_noloss1 has a minimum guaranteed bandwidth of 30 percent and can consume a maximum of 100 percent of the priority group bandwidth.
  - The fabric scheduler fab-nl-sched has one fabric fc-set, fabric_fcset_noloss2.
  - The fabric fc-set fabric_fcset_noloss2 has a minimum guaranteed bandwidth of 25 percent and can consume a maximum of 100 percent of the priority group bandwidth.
  - The fabric scheduler fab-mcast-sched has one fabric fc-set, fabric_fcset_mcast1.
  - The fabric fc-set fabric_fcset_mcast1 has a minimum guaranteed bandwidth of 20 percent and can consume a maximum of 100 percent of the priority group bandwidth.
Verifying Traffic Control Profile Configuration on the Node Devices
Purpose
Verify that the traffic control profiles (priority groups) be-tcp, nl-tcp, and mcast-tcp have been created with the correct bandwidth parameters and scheduler mapping.
Action
List the traffic control profiles using the operational mode command show class-of-service traffic-control-profile:
user@switch> show class-of-service traffic-control-profile
Traffic control profile: be-tcp, Index: 40535
  Shaping rate: 100 percent
  Scheduler map: be-map
  Guaranteed rate: 25 percent
Traffic control profile: nl-tcp, Index: 37959
  Shaping rate: 100 percent
  Scheduler map: nl-map
  Guaranteed rate: 50 percent
Traffic control profile: mcast-tcp, Index: 47661
  Shaping rate: 100 percent
  Scheduler map: mcast-map
  Guaranteed rate: 25 percent
Meaning
The show class-of-service traffic-control-profile command lists all of the configured traffic control profiles. For each traffic control profile, the command output includes:
- The name of the traffic control profile (Traffic control profile field)
- The maximum port bandwidth the priority group can consume (Shaping rate field)
- The scheduler map associated with the traffic control profile (Scheduler map field)
- The minimum guaranteed priority group port bandwidth (Guaranteed rate field)
The command output shows that:
- The traffic control profile be-tcp can consume a maximum of 100 percent of the port bandwidth, is associated with the scheduler map be-map, and has a minimum guaranteed bandwidth of 25 percent of the port bandwidth.
- The traffic control profile nl-tcp can consume a maximum of 100 percent of the port bandwidth, is associated with the scheduler map nl-map, and has a minimum guaranteed bandwidth of 50 percent of the port bandwidth.
- The traffic control profile mcast-tcp can consume a maximum of 100 percent of the port bandwidth, is associated with the scheduler map mcast-map, and has a minimum guaranteed bandwidth of 25 percent of the port bandwidth.
Verifying That PFC Is Enabled on Lossless Queues on the Node Devices
Purpose
Verify that PFC is enabled on the correct queues (as mapped to IEEE 802.1p priorities in the forwarding class configuration) for lossless transport.
Action
List the congestion notification profiles using the operational mode command show class-of-service congestion-notification:
user@switch> show class-of-service congestion-notification
Type: Input, Name: nl-cnp, Index: 51687
  Priority   PFC
    000      Disabled
    001      Disabled
    010      Disabled
    011      Enabled
    100      Enabled
    101      Disabled
    110      Disabled
    111      Disabled
Meaning
The show class-of-service congestion-notification command lists all of the congestion notification profiles and the IEEE 802.1p code points with PFC enabled. The command output shows that PFC is enabled for code points 011 (fcoe queue) and 100 (no-loss queue) in the nl-cnp congestion notification profile.
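An IEEE 802.1p code point is a 3-bit priority value, so the PFC-enabled code points 011 and 100 correspond to priorities 3 and 4, which match the queue numbers assigned to the fcoe and no-loss forwarding classes in this example. A small sketch of that correspondence (illustrative only; the names and mappings come from this example):

```python
# PFC-enabled IEEE 802.1p code points from the nl-cnp congestion
# notification profile, and the lossless forwarding classes in this example.
pfc_code_points = ["011", "100"]
lossless_queues = {"fcoe": 3, "no-loss": 4}

# A 3-bit 802.1p code point is simply the binary form of the priority number.
priorities = [int(cp, 2) for cp in pfc_code_points]
print(priorities)                                              # [3, 4]
print(sorted(lossless_queues.values()) == sorted(priorities))  # True
```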
Verifying Access and Fabric Interface Scheduling Configuration on the Node Devices
Purpose
Verify that the correct fc-sets, traffic control profiles, and congestion notification profiles are mapped to the correct interfaces on Node devices ND1 and ND2.
Action
List the interfaces on Node devices ND1 and ND2 using the operational mode command show configuration class-of-service interfaces. For example, the output on Node device ND1 shows:
user@switch> show configuration class-of-service interfaces
ND1:xe-0/0/20 {
    forwarding-class-set {
        best-effort-pg {
            output-traffic-control-profile be-tcp;
        }
        noloss-pg {
            output-traffic-control-profile nl-tcp;
        }
        multidestination-pg {
            output-traffic-control-profile mcast-tcp;
        }
    }
    congestion-notification-profile nl-cnp;
}
ND1:xe-0/0/21 {
    forwarding-class-set {
        best-effort-pg {
            output-traffic-control-profile be-tcp;
        }
        noloss-pg {
            output-traffic-control-profile nl-tcp;
        }
        multidestination-pg {
            output-traffic-control-profile mcast-tcp;
        }
    }
    congestion-notification-profile nl-cnp;
}
ND1:fte-0/1/0 {
    forwarding-class-set {
        best-effort-pg {
            output-traffic-control-profile be-tcp;
        }
        noloss-pg {
            output-traffic-control-profile nl-tcp;
        }
        multidestination-pg {
            output-traffic-control-profile mcast-tcp;
        }
    }
}
Meaning
The show configuration class-of-service interfaces command shows that the fc-sets and (output) traffic control profiles mapped to the interfaces are:
- best-effort-pg fc-set with be-tcp traffic control profile
- noloss-pg fc-set with nl-tcp traffic control profile
- multidestination-pg fc-set with mcast-tcp traffic control profile

The command output also shows that the access interfaces include the congestion notification profile nl-cnp to enable PFC on the IEEE 802.1p code points of lossless traffic.
Verifying Fabric Interface Scheduling Configuration on the Interconnect Device
Purpose
Verify that the correct fabric scheduler maps are associated with the correct fabric and Clos fabric interfaces on Interconnect device ICD1.
Action
List the interfaces using the operational mode command show configuration class-of-service interfaces:
user@switch> show configuration class-of-service interfaces
ICD1:fte-0/0/3 {
    scheduler-map-forwarding-class-sets fab-traffic-map;
}
ICD1:fte-1/0/7 {
    scheduler-map-forwarding-class-sets fab-traffic-map;
}
ICD1:bfte-*/*/* {
    scheduler-map-forwarding-class-sets fab-traffic-map;
}
Meaning
The show configuration class-of-service interfaces command shows that the same fabric forwarding class scheduler map, fab-traffic-map, is applied to all of the fabric and Clos fabric interfaces.