
Example: Configuring CoS Hierarchical Port Scheduling (ETS)

 

Hierarchical port scheduling defines the class-of-service (CoS) properties of output queues, which are mapped to forwarding classes. Traffic is classified into forwarding classes based on code point (priority), so mapping queues to forwarding classes also maps queues to priorities. Hierarchical port scheduling enables you to group priorities that require similar CoS treatment into priority groups. You define the port bandwidth resources for a priority group, and you define the amount of the priority group’s resources that each priority in the group can use.

Hierarchical port scheduling is the Junos OS implementation of enhanced transmission selection (ETS), as described in IEEE 802.1Qaz. One major benefit of hierarchical port scheduling is greater port bandwidth utilization. If a priority group on a port does not use all of its allocated bandwidth, other priority groups on that port can use that bandwidth. Also, if a priority within a priority group does not use its allocated bandwidth, other priorities within that priority group can use that bandwidth.
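The two-level sharing described above can be sketched as a simplified Python model. The `allocate` helper, group names, and numbers here are illustrative only, not switch behavior; real hardware shares unused bandwidth within a priority group before offering it to other groups, while this sketch uses a single work-conserving pass.

```python
# Simplified model of two-level ETS bandwidth sharing: every priority first
# receives the smaller of its guarantee and its demand, then leftover port
# bandwidth is offered to priorities that still have unsatisfied demand.

def allocate(port_bw, groups):
    """groups: {group: {priority: (guarantee, demand)}}, all in Gbps."""
    alloc = {g: {p: min(guar, dem) for p, (guar, dem) in prios.items()}
             for g, prios in groups.items()}
    leftover = port_bw - sum(sum(a.values()) for a in alloc.values())
    for g, prios in groups.items():
        for p, (_, dem) in prios.items():
            extra = min(dem - alloc[g][p], leftover)
            alloc[g][p] += extra
            leftover -= extra
    return alloc

# An idle guaranteed-delivery group lets best-effort exceed its guarantee:
bw = allocate(10, {
    "best-effort-pg": {"best-effort": (3.5, 8.0)},
    "guar-delivery-pg": {"fcoe": (4.5, 0.0)},
    "hpc-pg": {"hpc": (2.0, 2.0)},
})
```

With these demands, best-effort can use 8 Gbps even though its guarantee is only 3.5 Gbps, because the guaranteed-delivery group is idle.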

Configuring hierarchical scheduling is a multistep procedure that includes:

  • Mapping forwarding classes to queues

  • Defining forwarding class sets (priority groups)

  • Defining behavior aggregate classifiers

  • Configuring priority-based flow control (PFC) for lossless priorities (queues)

  • Applying classifiers and PFC configuration to ingress interfaces

  • Defining drop profiles

  • Defining schedulers

  • Mapping forwarding classes to schedulers

  • Defining traffic control profiles

  • Assigning priority groups and traffic control profiles to egress ports

Note

OCX Series switches do not support lossless transport and do not support PFC. Although this example includes configuring lossless transport with PFC, the portions of the example that do not pertain to lossless transport still apply to OCX Series switches. (You can configure hierarchical scheduling on OCX Series switches, but you cannot configure lossless transport or lossless forwarding classes.)

This example describes how to configure hierarchical scheduling.

Requirements

This example uses the following hardware and software components:

  • One switch (this example was tested on a Juniper Networks QFX3500 Switch)

  • Junos OS Release 11.1 or later for the QFX Series or Junos OS Release 14.1X53-D20 or later for the OCX Series

Overview

Keep the following considerations in mind when you plan the port bandwidth allocation for priority groups and for individual priorities:

  • How much traffic and what types of traffic you expect to traverse the system.

  • How you want to divide different types of traffic into priorities (forwarding classes) to apply different CoS treatment to different types of traffic. Dividing traffic into priorities includes:

    • Mapping the code points of ingress traffic to forwarding classes using behavior aggregate (BA) classifiers. This classifies incoming traffic into the appropriate forwarding class based on code point.

    • Mapping forwarding classes to output queues. This defines the output queue for each type of traffic.

    • Attaching the BA classifier to the desired ingress interfaces so that incoming traffic maps to the desired forwarding classes and queues.

  • How you want to organize priorities into priority groups (forwarding class sets).

    Traffic that requires similar treatment usually belongs in the same priority group, so place forwarding classes that require similar bandwidth, loss, and other characteristics in the same forwarding class set. For example, you can map all of the best-effort traffic forwarding classes into one forwarding class set.

  • How much of the port bandwidth you want to allocate to each priority group and to each of the priorities in each priority group. The following considerations apply to bandwidth allocation:

    • Estimate how much traffic you expect in each forwarding class, and how much traffic you expect in each forwarding class set (the amount of traffic you expect in a forwarding class set is the aggregate amount of traffic in the forwarding classes that belong to the forwarding class set).

    • The combined minimum guaranteed bandwidth of the priorities (forwarding classes) in a priority group should not exceed the minimum guaranteed bandwidth of the priority group (forwarding class set). The transmit rate scheduler parameter defines the minimum guaranteed bandwidth for forwarding classes. Scheduler maps associate schedulers with forwarding classes.

    • The combined minimum guaranteed bandwidth of the priority groups (forwarding class sets) on a port should not exceed the port’s total bandwidth. The guaranteed rate parameter in the traffic control profile defines the minimum bandwidth for a forwarding class set. Associating a scheduler map with a traffic control profile sets the scheduling for the individual forwarding classes in the forwarding class set.
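The two planning rules above reduce to simple arithmetic, sketched below with the rates used later in this example. The `check_plan` helper and flat-dictionary layout are illustrative, not a Junos tool.

```python
# Check the two ETS planning rules: the sum of transmit rates inside each
# priority group must not exceed the group's guaranteed rate, and the sum
# of guaranteed rates must not exceed the port bandwidth. Rates in Mbps.

def check_plan(port_bw, plan):
    """plan: {group: (guaranteed_rate, {forwarding_class: transmit_rate})}"""
    errors = []
    for group, (guar, classes) in plan.items():
        if sum(classes.values()) > guar:
            errors.append(f"{group}: transmit rates exceed guaranteed rate")
    if sum(guar for guar, _ in plan.values()) > port_bw:
        errors.append("guaranteed rates exceed port bandwidth")
    return errors

# The rates in this example fit a 10-Gbps port exactly (be2 shares the
# be-sched scheduler with best-effort, so be-sched is counted once):
plan = {
    "best-effort-pg": (3500, {"best-effort": 3000, "network-control": 500}),
    "guar-delivery-pg": (4500, {"fcoe": 2500, "no-loss": 2000}),
    "hpc-pg": (2000, {"hpc": 2000}),
}
errors = check_plan(10_000, plan)
```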

This example creates hierarchical port scheduling by defining priority groups for best effort, guaranteed delivery, and high-performance computing (HPC) traffic. Each priority group includes priorities that need to receive similar CoS treatment. Each priority group and each priority within each priority group receive the CoS resources needed to service their flows. Lossless priorities use PFC to prevent packet loss when the network experiences congestion.

Topology

Table 1 shows the configuration components for this example.

Note

OCX Series switches do not support lossless transport and do not support PFC. If you eliminate the configuration elements for the default lossless fcoe and no-loss forwarding classes (including classifier, forwarding class set, scheduler, and traffic control profile configuration for those forwarding classes) and for PFC, this example works for OCX Series switches. However, because the default fcoe and no-loss forwarding classes do not carry traffic on OCX Series switches, you can apply the bandwidth allocated to those forwarding classes to other forwarding classes. By default, the active forwarding classes (best-effort, network-control, and mcast) share the unused bandwidth assigned to the fcoe and no-loss forwarding classes.

Table 1: Components of the Hierarchical Port Scheduling (ETS) Configuration Topology

Property

Settings

Hardware

QFX3500 switch

Mapping of forwarding classes (priorities) to queues

best-effort to queue 0

be2 to queue 1

fcoe (Fibre Channel over Ethernet) to queue 3

no-loss to queue 4

hpc (high-performance computing) to queue 5

network-control to queue 7

Note: On switches that do not support the ELS CLI, if you are using Junos OS Release 12.2 or later, use the default forwarding-class-to-queue mapping for the lossless fcoe and no-loss forwarding classes. If you explicitly configure the default lossless forwarding classes, the traffic mapped to those forwarding classes is treated as lossy (best-effort) traffic and does not receive lossless treatment.

On switches that do not support the ELS CLI, in Junos OS Release 12.3 and later, you can include the no-loss packet drop attribute in the explicit forwarding class configuration to configure a lossless forwarding class.

Forwarding class sets (priority groups)

best-effort-pg: contains forwarding classes best-effort, be2, and network-control

guar-delivery-pg: contains forwarding classes fcoe and no-loss

hpc-pg: contains forwarding class hpc

Behavior aggregate classifier (maps forwarding classes and loss priorities to incoming packets by IEEE 802.1 code point)

Name—hsclassifier1

Code point mapping:

  • 000 to forwarding class best-effort and loss priority low

  • 001 to forwarding class be2 and loss priority high

  • 011 to forwarding class fcoe and loss priority low

  • 100 to forwarding class no-loss and loss priority low

  • 101 to forwarding class hpc and loss priority low

  • 110 to forwarding class network-control and loss priority low

PFC

Congestion notification profile name—gd-cnp

PFC enabled on code points: 011 (fcoe priority), 100 (no-loss priority)

Drop profiles

Note: The fcoe and no-loss priorities (queues) do not use drop profiles because they are lossless traffic classes.

dp-be-low: drop start point 25, drop end point 50, maximum drop rate 80

dp-be-high: drop start point 10, drop end point 40, maximum drop rate 100

dp-hpc: drop start point 75, drop end point 90, maximum drop rate 75

dp-nc: drop start point 80, drop end point 100, maximum drop rate 100

Queue schedulers

be-sched: minimum bandwidth 3g, maximum bandwidth 100%, priority low, drop profiles dp-be-low and dp-be-high

fcoe-sched: minimum bandwidth 2.5g, maximum bandwidth 100%, priority low

hpc-sched: minimum bandwidth 2g, maximum bandwidth 100%, priority low, drop profile dp-hpc

nc-sched: minimum bandwidth 500m, maximum bandwidth 100%, priority low, drop profile dp-nc

nl-sched: minimum bandwidth 2g, maximum bandwidth 100%, priority low

Forwarding class-to-scheduler mapping

Scheduler map be-map:

Forwarding class best-effort, scheduler be-sched

Forwarding class be2, scheduler be-sched

Forwarding class network-control, scheduler nc-sched

Scheduler map gd-map:

Forwarding class fcoe, scheduler fcoe-sched

Forwarding class no-loss, scheduler nl-sched

Scheduler map hpc-map:

Forwarding class hpc, scheduler hpc-sched

Traffic control profiles

be-tcp: scheduler map be-map, minimum bandwidth 3.5g, maximum bandwidth 100%

gd-tcp: scheduler map gd-map, minimum bandwidth 4.5g, maximum bandwidth 100%

hpc-tcp: scheduler map hpc-map, minimum bandwidth 2g, maximum bandwidth 100%

Interfaces

This example configures hierarchical port scheduling on interfaces xe-0/0/20 and xe-0/0/21. Because traffic is bidirectional, you apply the ingress and egress configuration components to both interfaces:

  • Classifier Name—hsclassifier1

  • Forwarding class sets—best-effort-pg, guar-delivery-pg, hpc-pg

  • Congestion notification profile—gd-cnp
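The drop profiles in Table 1 are interpolated WRED curves: no drops below the drop start point, the maximum drop rate at the drop end point, and a linear ramp in between. The sketch below is a simplified model; beyond the end fill level, real hardware eventually drops all packets as the queue fills completely.

```python
def drop_probability(fill, start, end, max_rate):
    """Linear WRED ramp between (start, 0) and (end, max_rate), in percent."""
    if fill <= start:
        return 0.0
    if fill >= end:
        return float(max_rate)
    return max_rate * (fill - start) / (end - start)

# dp-be-low: drop start point 25, drop end point 50, maximum drop rate 80
assert drop_probability(20, 25, 50, 80) == 0.0     # below start: no drops
assert drop_probability(37.5, 25, 50, 80) == 40.0  # halfway: half of max
assert drop_probability(60, 25, 50, 80) == 80.0    # at or past end: max rate
```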

Figure 1 shows a block diagram of the configuration components and the configuration flow of the CLI statements used in the example. You can perform the configuration steps in a different sequence if you want.

Figure 1: Hierarchical Port Scheduling Components Block Diagram

Figure 2 shows a block diagram of the hierarchical scheduling packet flow from ingress to egress.

Figure 2: Hierarchical Port Scheduling Packet Flow Block Diagram

Configuration

CLI Quick Configuration

To quickly configure hierarchical port scheduling on systems that support lossless transport, copy the following commands, paste them in a text file, remove line breaks, change variables and details to match your network configuration, and then copy and paste the commands into the CLI at the [edit class-of-service] hierarchy level:

[edit class-of-service]


set forwarding-classes class best-effort queue-num 0

set forwarding-classes class be2 queue-num 1

set forwarding-classes class hpc queue-num 5

set forwarding-classes class network-control queue-num 7

set forwarding-class-sets best-effort-pg class best-effort

set forwarding-class-sets best-effort-pg class be2

set forwarding-class-sets best-effort-pg class network-control

set forwarding-class-sets guar-delivery-pg class fcoe

set forwarding-class-sets guar-delivery-pg class no-loss

set forwarding-class-sets hpc-pg class hpc

set classifiers ieee-802.1 hsclassifier1 forwarding-class best-effort loss-priority low code-points 000

set classifiers ieee-802.1 hsclassifier1 forwarding-class be2 loss-priority high code-points 001

set classifiers ieee-802.1 hsclassifier1 forwarding-class fcoe loss-priority low code-points 011

set classifiers ieee-802.1 hsclassifier1 forwarding-class no-loss loss-priority low code-points 100

set classifiers ieee-802.1 hsclassifier1 forwarding-class hpc loss-priority low code-points 101

set classifiers ieee-802.1 hsclassifier1 forwarding-class network-control loss-priority low code-points 110

set congestion-notification-profile gd-cnp input ieee-802.1 code-point 011 pfc

set congestion-notification-profile gd-cnp input ieee-802.1 code-point 100 pfc

set interfaces xe-0/0/20 unit 0 classifiers ieee-802.1 hsclassifier1

set interfaces xe-0/0/21 unit 0 classifiers ieee-802.1 hsclassifier1

set interfaces xe-0/0/20 congestion-notification-profile gd-cnp

set interfaces xe-0/0/21 congestion-notification-profile gd-cnp

set drop-profiles dp-be-low interpolate fill-level 25 fill-level 50 drop-probability 0 drop-probability 80

set drop-profiles dp-be-high interpolate fill-level 10 fill-level 40 drop-probability 0 drop-probability 100

set drop-profiles dp-nc interpolate fill-level 80 fill-level 100 drop-probability 0 drop-probability 100

set drop-profiles dp-hpc interpolate fill-level 75 fill-level 90 drop-probability 0 drop-probability 75

set schedulers be-sched priority low transmit-rate 3g

set schedulers be-sched shaping-rate percent 100

set schedulers be-sched drop-profile-map loss-priority low protocol any drop-profile dp-be-low

set schedulers be-sched drop-profile-map loss-priority high protocol any drop-profile dp-be-high

set schedulers fcoe-sched priority low transmit-rate 2500m

set schedulers fcoe-sched shaping-rate percent 100

set schedulers hpc-sched priority low transmit-rate 2g

set schedulers hpc-sched shaping-rate percent 100

set schedulers hpc-sched drop-profile-map loss-priority low protocol any drop-profile dp-hpc

set schedulers nc-sched priority low transmit-rate 500m

set schedulers nc-sched shaping-rate percent 100

set schedulers nc-sched drop-profile-map loss-priority low protocol any drop-profile dp-nc

set schedulers nl-sched priority low transmit-rate 2g

set schedulers nl-sched shaping-rate percent 100

set scheduler-maps be-map forwarding-class best-effort scheduler be-sched

set scheduler-maps be-map forwarding-class be2 scheduler be-sched

set scheduler-maps be-map forwarding-class network-control scheduler nc-sched

set scheduler-maps gd-map forwarding-class fcoe scheduler fcoe-sched

set scheduler-maps gd-map forwarding-class no-loss scheduler nl-sched

set scheduler-maps hpc-map forwarding-class hpc scheduler hpc-sched

set traffic-control-profiles be-tcp scheduler-map be-map guaranteed-rate 3500m

set traffic-control-profiles be-tcp shaping-rate percent 100

set traffic-control-profiles gd-tcp scheduler-map gd-map guaranteed-rate 4500m

set traffic-control-profiles gd-tcp shaping-rate percent 100

set traffic-control-profiles hpc-tcp scheduler-map hpc-map guaranteed-rate 2g

set traffic-control-profiles hpc-tcp shaping-rate percent 100

set interfaces xe-0/0/20 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp

set interfaces xe-0/0/20 forwarding-class-set guar-delivery-pg output-traffic-control-profile gd-tcp

set interfaces xe-0/0/20 forwarding-class-set hpc-pg output-traffic-control-profile hpc-tcp

set interfaces xe-0/0/21 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp

set interfaces xe-0/0/21 forwarding-class-set guar-delivery-pg output-traffic-control-profile gd-tcp

set interfaces xe-0/0/21 forwarding-class-set hpc-pg output-traffic-control-profile hpc-tcp

OCX Series Switches

Because OCX Series switches do not support lossless transport, the following subset of the configuration eliminates the lossless configuration elements and provides hierarchical port scheduling for the best-effort, be2, hpc, and network-control forwarding classes. In addition, on OCX Series switches, you would probably use DSCP classifiers and code points instead of IEEE classifiers and code points. To quickly configure hierarchical port scheduling on an OCX Series switch, copy the following commands, paste them in a text file, remove line breaks, change variables and details to match your network configuration, and then copy and paste the commands into the CLI at the [edit class-of-service] hierarchy level:

[edit class-of-service]


set forwarding-classes class best-effort queue-num 0

set forwarding-classes class be2 queue-num 1

set forwarding-classes class hpc queue-num 5

set forwarding-classes class network-control queue-num 7

set forwarding-class-sets best-effort-pg class best-effort

set forwarding-class-sets best-effort-pg class be2

set forwarding-class-sets best-effort-pg class network-control



set forwarding-class-sets hpc-pg class hpc

set classifiers ieee-802.1 hsclassifier1 forwarding-class best-effort loss-priority low code-points 000

set classifiers ieee-802.1 hsclassifier1 forwarding-class be2 loss-priority high code-points 001



set classifiers ieee-802.1 hsclassifier1 forwarding-class hpc loss-priority low code-points 101

set classifiers ieee-802.1 hsclassifier1 forwarding-class network-control loss-priority low code-points 110



set interfaces xe-0/0/20 unit 0 classifiers ieee-802.1 hsclassifier1

set interfaces xe-0/0/21 unit 0 classifiers ieee-802.1 hsclassifier1

set drop-profiles dp-be-low interpolate fill-level 25 fill-level 50 drop-probability 0 drop-probability 80

set drop-profiles dp-be-high interpolate fill-level 10 fill-level 40 drop-probability 0 drop-probability 100

set drop-profiles dp-nc interpolate fill-level 80 fill-level 100 drop-probability 0 drop-probability 100

set drop-profiles dp-hpc interpolate fill-level 75 fill-level 90 drop-probability 0 drop-probability 75

set schedulers be-sched priority low transmit-rate 3g

set schedulers be-sched shaping-rate percent 100

set schedulers be-sched drop-profile-map loss-priority low protocol any drop-profile dp-be-low

set schedulers be-sched drop-profile-map loss-priority high protocol any drop-profile dp-be-high

set schedulers hpc-sched priority low transmit-rate 2g

set schedulers hpc-sched shaping-rate percent 100

set schedulers hpc-sched drop-profile-map loss-priority low protocol any drop-profile dp-hpc

set schedulers nc-sched priority low transmit-rate 500m

set schedulers nc-sched shaping-rate percent 100

set schedulers nc-sched drop-profile-map loss-priority low protocol any drop-profile dp-nc

set scheduler-maps be-map forwarding-class best-effort scheduler be-sched

set scheduler-maps be-map forwarding-class be2 scheduler be-sched

set scheduler-maps be-map forwarding-class network-control scheduler nc-sched

set scheduler-maps hpc-map forwarding-class hpc scheduler hpc-sched

set traffic-control-profiles be-tcp scheduler-map be-map guaranteed-rate 3500m

set traffic-control-profiles be-tcp shaping-rate percent 100

set traffic-control-profiles hpc-tcp scheduler-map hpc-map guaranteed-rate 2g

set traffic-control-profiles hpc-tcp shaping-rate percent 100

set interfaces xe-0/0/20 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp

set interfaces xe-0/0/20 forwarding-class-set hpc-pg output-traffic-control-profile hpc-tcp

set interfaces xe-0/0/21 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp

set interfaces xe-0/0/21 forwarding-class-set hpc-pg output-traffic-control-profile hpc-tcp

Step-by-Step Procedure

To perform a step-by-step configuration of the forwarding classes (priorities), forwarding class sets (priority groups), classifiers, queue schedulers, PFC, traffic control profiles, and interfaces to set up hierarchical port scheduling (ETS):

  1. Configure the forwarding classes (priorities) and map them to unicast output queues (do not explicitly map the fcoe and no-loss forwarding classes to output queues; use the default configuration):
    [edit class-of-service]

    user@switch# set forwarding-classes class best-effort queue-num 0

    user@switch# set forwarding-classes class be2 queue-num 1

    user@switch# set forwarding-classes class hpc queue-num 5

    user@switch# set forwarding-classes class network-control queue-num 7



  2. Configure forwarding class sets (priority groups) to group forwarding classes (priorities) that require similar CoS treatment:
    [edit class-of-service]

    user@switch# set forwarding-class-sets best-effort-pg class best-effort

    user@switch# set forwarding-class-sets best-effort-pg class be2

    user@switch# set forwarding-class-sets best-effort-pg class network-control

    user@switch# set forwarding-class-sets guar-delivery-pg class fcoe

    user@switch# set forwarding-class-sets guar-delivery-pg class no-loss

    user@switch# set forwarding-class-sets hpc-pg class hpc
    Note

    On OCX Series switches, you would not configure the guar-delivery-pg forwarding class set for lossless traffic.



  3. Configure a classifier to set the loss priority and IEEE 802.1 code points assigned to each forwarding class at the ingress:
    [edit class-of-service]

    user@switch# set classifiers ieee-802.1 hsclassifier1 forwarding-class best-effort loss-priority low code-points 000

    user@switch# set classifiers ieee-802.1 hsclassifier1 forwarding-class be2 loss-priority high code-points 001

    user@switch# set classifiers ieee-802.1 hsclassifier1 forwarding-class fcoe loss-priority low code-points 011

    user@switch# set classifiers ieee-802.1 hsclassifier1 forwarding-class no-loss loss-priority low code-points 100

    user@switch# set classifiers ieee-802.1 hsclassifier1 forwarding-class hpc loss-priority low code-points 101

    user@switch# set classifiers ieee-802.1 hsclassifier1 forwarding-class network-control loss-priority low code-points 110
    Note

    On OCX Series switches, you would not configure the fcoe and no-loss portions of the classifier.



  4. Configure a congestion notification profile to enable PFC on the FCoE and no-loss queue IEEE 802.1 code points:
    [edit class-of-service]

    user@switch# set congestion-notification-profile gd-cnp input ieee-802.1 code-point 011 pfc

    user@switch# set congestion-notification-profile gd-cnp input ieee-802.1 code-point 100 pfc
    Note

    This step does not apply to OCX Series switches, which do not support PFC.



  5. Assign the classifier to the interfaces:
    [edit class-of-service]

    user@switch# set interfaces xe-0/0/20 unit 0 classifiers ieee-802.1 hsclassifier1

    user@switch# set interfaces xe-0/0/21 unit 0 classifiers ieee-802.1 hsclassifier1



  6. Apply the PFC configuration to the interfaces:
    [edit class-of-service]

    user@switch# set interfaces xe-0/0/20 congestion-notification-profile gd-cnp

    user@switch# set interfaces xe-0/0/21 congestion-notification-profile gd-cnp
    Note

    This step does not apply to OCX Series switches, which do not support PFC.



  7. Configure the drop profile for the best-effort low loss-priority queue:
    [edit class-of-service]

    user@switch# set drop-profiles dp-be-low interpolate fill-level 25 fill-level 50 drop-probability 0 drop-probability 80



  8. Configure the drop profile for the best-effort high loss-priority queue:
    [edit class-of-service]

    user@switch# set drop-profiles dp-be-high interpolate fill-level 10 fill-level 40 drop-probability 0 drop-probability 100



  9. Configure the drop profile for the network-control queue:
    [edit class-of-service]

    user@switch# set drop-profiles dp-nc interpolate fill-level 80 fill-level 100 drop-probability 0 drop-probability 100



  10. Configure the drop profile for the high-performance computing queue:
    [edit class-of-service]

    user@switch# set drop-profiles dp-hpc interpolate fill-level 75 fill-level 90 drop-probability 0 drop-probability 75



  11. Define the minimum guaranteed bandwidth, priority, maximum bandwidth, and drop profiles for the best-effort queue:
    [edit class-of-service]

    user@switch# set schedulers be-sched priority low transmit-rate 3g

    user@switch# set schedulers be-sched shaping-rate percent 100

    user@switch# set schedulers be-sched drop-profile-map loss-priority low protocol any drop-profile dp-be-low

    user@switch# set schedulers be-sched drop-profile-map loss-priority high protocol any drop-profile dp-be-high



  12. Define the minimum guaranteed bandwidth, priority, and maximum bandwidth for the FCoE queue:
    [edit class-of-service]

    user@switch# set schedulers fcoe-sched priority low transmit-rate 2500m

    user@switch# set schedulers fcoe-sched shaping-rate percent 100
    Note

    This step does not apply to OCX Series switches, which do not support lossless transport.



  13. Define the minimum guaranteed bandwidth, priority, maximum bandwidth, and drop profile for the high-performance computing queue:
    [edit class-of-service]

    user@switch# set schedulers hpc-sched priority low transmit-rate 2g

    user@switch# set schedulers hpc-sched shaping-rate percent 100

    user@switch# set schedulers hpc-sched drop-profile-map loss-priority low protocol any drop-profile dp-hpc



  14. Define the minimum guaranteed bandwidth, priority, maximum bandwidth, and drop profile for the network-control queue:
    [edit class-of-service]

    user@switch# set schedulers nc-sched priority low transmit-rate 500m

    user@switch# set schedulers nc-sched shaping-rate percent 100

    user@switch# set schedulers nc-sched drop-profile-map loss-priority low protocol any drop-profile dp-nc



  15. Define the minimum guaranteed bandwidth, priority, and maximum bandwidth for the no-loss queue:
    [edit class-of-service]

    user@switch# set schedulers nl-sched priority low transmit-rate 2g

    user@switch# set schedulers nl-sched shaping-rate percent 100
    Note

    This step does not apply to OCX Series switches, which do not support lossless transport.



  16. Map the schedulers to the appropriate forwarding classes (queues):
    [edit class-of-service]

    user@switch# set scheduler-maps be-map forwarding-class best-effort scheduler be-sched

    user@switch# set scheduler-maps be-map forwarding-class be2 scheduler be-sched

    user@switch# set scheduler-maps be-map forwarding-class network-control scheduler nc-sched

    user@switch# set scheduler-maps gd-map forwarding-class fcoe scheduler fcoe-sched

    user@switch# set scheduler-maps gd-map forwarding-class no-loss scheduler nl-sched

    user@switch# set scheduler-maps hpc-map forwarding-class hpc scheduler hpc-sched
    Note

    On OCX Series switches, because lossless transport is not supported, you would not configure the gd-map scheduler map.



  17. Define the traffic control profile for the best-effort priority group (queue to scheduler mapping, minimum guaranteed bandwidth, and maximum bandwidth):
    [edit class-of-service]

    user@switch# set traffic-control-profiles be-tcp scheduler-map be-map guaranteed-rate 3500m

    user@switch# set traffic-control-profiles be-tcp shaping-rate percent 100



  18. Define the traffic control profile for the guaranteed delivery priority group (queue to scheduler mapping, minimum guaranteed bandwidth, and maximum bandwidth):
    [edit class-of-service]

    user@switch# set traffic-control-profiles gd-tcp scheduler-map gd-map guaranteed-rate 4500m

    user@switch# set traffic-control-profiles gd-tcp shaping-rate percent 100
    Note

    This step does not apply to OCX Series switches, which do not support lossless transport.



  19. Define the traffic control profile for the high-performance computing priority group (queue to scheduler mapping, minimum guaranteed bandwidth, and maximum bandwidth):
    [edit class-of-service]

    user@switch# set traffic-control-profiles hpc-tcp scheduler-map hpc-map guaranteed-rate 2g

    user@switch# set traffic-control-profiles hpc-tcp shaping-rate percent 100



  20. Apply the three priority groups (forwarding class sets) and the appropriate traffic control profiles to the egress ports:
    [edit class-of-service]

    user@switch# set interfaces xe-0/0/20 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp

    user@switch# set interfaces xe-0/0/20 forwarding-class-set guar-delivery-pg output-traffic-control-profile gd-tcp

    user@switch# set interfaces xe-0/0/20 forwarding-class-set hpc-pg output-traffic-control-profile hpc-tcp

    user@switch# set interfaces xe-0/0/21 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp

    user@switch# set interfaces xe-0/0/21 forwarding-class-set guar-delivery-pg output-traffic-control-profile gd-tcp

    user@switch# set interfaces xe-0/0/21 forwarding-class-set hpc-pg output-traffic-control-profile hpc-tcp
    Note

    Because OCX Series switches do not support lossless transport, you would not apply the guar-delivery-pg forwarding class set or the gd-tcp traffic control profile to their interfaces.
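After step 20, the configured objects reference each other in a chain: each forwarding class set on the port is scheduled by a traffic control profile, whose scheduler map assigns a per-queue scheduler to each member forwarding class. The Python dictionary below is an illustrative view of that hierarchy for one egress port, not CLI output.

```python
# How the configured objects fit together on one egress port.
port = {
    "best-effort-pg": {
        "tcp": "be-tcp",          # guaranteed-rate 3500m
        "scheduler-map": "be-map",
        "classes": {"best-effort": "be-sched",
                    "be2": "be-sched",
                    "network-control": "nc-sched"},
    },
    "guar-delivery-pg": {
        "tcp": "gd-tcp",          # guaranteed-rate 4500m
        "scheduler-map": "gd-map",
        "classes": {"fcoe": "fcoe-sched", "no-loss": "nl-sched"},
    },
    "hpc-pg": {
        "tcp": "hpc-tcp",         # guaranteed-rate 2g
        "scheduler-map": "hpc-map",
        "classes": {"hpc": "hpc-sched"},
    },
}

def scheduler_for(fc):
    """Find the scheduler that serves a forwarding class on this port."""
    for group in port.values():
        if fc in group["classes"]:
            return group["classes"][fc]
    return None
```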



Results

Display the results of the configuration (the system shows only the explicitly configured parameters; it does not show default parameters such as the fcoe and no-loss lossless forwarding classes). On OCX Series switches, you would not see the lossless configuration components in the output:





Tip

To quickly configure the interfaces, issue the load merge terminal command, and then copy the hierarchy and paste it into the switch terminal window.

Verification

Note

The verification output is based on the full example configuration. On OCX Series switches, you do not see lossless configuration components in the output. Comments about lossless configuration components do not apply to OCX Series switches.

To verify that you created the hierarchical port scheduling components and they are operating properly, perform these tasks:

Verifying the Forwarding Classes (Priorities)

Purpose

Verify that you created the forwarding classes and mapped them to the correct queues. (The system shows only the explicitly configured forwarding classes. It does not show default forwarding classes such as fcoe and no-loss.)

Action

List the forwarding classes using the operational mode command show class-of-service forwarding-class:

Meaning

The show class-of-service forwarding-class command lists all of the configured forwarding classes, the internal identification number of each forwarding class, the queues that are mapped to the forwarding classes, the policing priority, and whether each forwarding class is lossless (no-loss packet drop attribute enabled) or lossy (no-loss packet drop attribute disabled). The command output shows that:

  • Forwarding class best-effort maps to queue 0 and is lossy

  • Forwarding class be2 maps to queue 1 and is lossy

  • Forwarding class hpc maps to queue 5 and is lossy

  • Forwarding class network-control maps to queue 7 and is lossy

In addition, the command lists the default multicast (multidestination) forwarding class and the default queue to which it is mapped.

Verifying the Forwarding Class Sets (Priority Groups)

Purpose

Verify that you created the priority groups and that the correct priorities (forwarding classes) belong to the appropriate priority group.

Action

List the forwarding class sets using the operational mode command show class-of-service forwarding-class-set:

Meaning

The show class-of-service forwarding-class-set command lists all of the configured forwarding class sets (priority groups), the forwarding classes (priorities) that belong to each priority group, and the internal index number of each priority group. The command output shows that:

  • The forwarding class set best-effort-pg includes the forwarding classes best-effort, be2, and network-control.

  • The forwarding class set guar-delivery-pg includes the forwarding classes fcoe and no-loss.

  • The forwarding class set hpc-pg includes the forwarding class hpc.

Verifying the Classifier

Purpose

Verify that the classifier maps forwarding classes to the correct IEEE 802.1p code points and packet loss priorities.

Action

List the classifier configured for hierarchical port scheduling using the operational mode command show class-of-service classifier name hsclassifier1:

user@switch> show class-of-service classifier name hsclassifier1

Meaning

The show class-of-service classifier name hsclassifier1 command lists all of the IEEE 802.1p code points and the loss priorities mapped to all of the forwarding classes in the classifier. The command output shows that the forwarding classes best-effort, be2, no-loss, fcoe, hpc, and network-control have been created and mapped to IEEE 802.1p code points and loss priorities.
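The exact code-point-to-forwarding-class mappings are not listed in this summary. In a sketch of the classifier configuration, only the fcoe (011) and no-loss (100) code points are confirmed by the congestion notification section below; the other code-point assignments here are illustrative placeholders, not values from the example:

```
set class-of-service classifiers ieee-802.1 hsclassifier1 forwarding-class best-effort loss-priority low code-points 000
set class-of-service classifiers ieee-802.1 hsclassifier1 forwarding-class be2 loss-priority low code-points 001
set class-of-service classifiers ieee-802.1 hsclassifier1 forwarding-class fcoe loss-priority low code-points 011
set class-of-service classifiers ieee-802.1 hsclassifier1 forwarding-class no-loss loss-priority low code-points 100
set class-of-service classifiers ieee-802.1 hsclassifier1 forwarding-class hpc loss-priority low code-points 101
set class-of-service classifiers ieee-802.1 hsclassifier1 forwarding-class network-control loss-priority low code-points 111
```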

Verifying Priority-Based Flow Control

Purpose

Verify that PFC is enabled on the correct priorities for lossless transport.

Action

List the congestion notification profiles using the operational mode command show class-of-service congestion-notification:

Meaning

The show class-of-service congestion-notification command lists all of the congestion notification profiles and the IEEE 802.1p code points with PFC enabled. The command output shows that PFC is enabled for code points 011 (fcoe priority and queue) and 100 (no-loss priority and queue) for the gd-cnp congestion notification profile.

The command also shows the default cable length (100 meters), the default maximum receive unit (2500 bytes), and the default mapping of priorities to output queues because this example does not include configuring these options.
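A sketch of the congestion notification profile that enables PFC on these two code points, using the standard Junos congestion-notification-profile hierarchy, might look like this (the cable length, maximum receive unit, and priority-to-queue mapping are left at their defaults, as the output indicates):

```
set class-of-service congestion-notification-profile gd-cnp input ieee-802.1 code-point 011 pfc
set class-of-service congestion-notification-profile gd-cnp input ieee-802.1 code-point 100 pfc
```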

Verifying the Output Queue Schedulers

Purpose

Verify that you created the output queue schedulers with the correct bandwidth parameters and priorities, and that the schedulers are mapped to the correct queues and drop profiles.

Action

List the scheduler maps using the operational mode command show class-of-service scheduler-map:

Meaning

The show class-of-service scheduler-map command lists all of the configured scheduler maps. For each scheduler map, the command output includes:

  • The name of the scheduler map (scheduler-map field)

  • The name of the scheduler (scheduler field)

  • The forwarding classes mapped to the scheduler (forwarding-class field)

  • The minimum guaranteed queue bandwidth (transmit-rate field)

  • The scheduling priority (priority field)

  • The maximum priority group bandwidth that the queue can consume (shaping-rate field)

  • The drop profile loss priority (loss priority field) for each drop profile name (name field)

The command output shows that:

  • The scheduler map be-map was created and has these properties:

    • There are two schedulers, be-sched and nc-sched.

    • The scheduler be-sched has two forwarding classes, best-effort and be2.

    • On scheduler be-sched, the forwarding classes best-effort and be2 share a minimum guaranteed bandwidth of 3,000,000,000 bps, can consume a maximum of 100 percent of the priority group bandwidth, and use the drop profile dp-be-low for low loss-priority traffic, the default drop profile for medium-high loss-priority traffic, and the drop profile dp-be-high for high loss-priority traffic.

    • The scheduler nc-sched has one forwarding class, network-control.

    • The network-control forwarding class has a minimum guaranteed bandwidth of 500,000,000 bps, can consume a maximum of 100 percent of the priority group bandwidth, and uses the drop profile dp-nc for low loss-priority traffic and the default drop profile for medium-high and high loss-priority traffic.

  • The scheduler map gd-map was created and has these properties:

    • There are two schedulers, fcoe-sched and nl-sched.

    • The scheduler fcoe-sched has one forwarding class, fcoe.

    • The fcoe forwarding class has a minimum guaranteed bandwidth of 2,500,000,000 bps, and can consume a maximum of 100 percent of the priority group bandwidth.

    • The scheduler nl-sched has one forwarding class, no-loss.

    • The no-loss forwarding class has a minimum guaranteed bandwidth of 2,000,000,000 bps, and can consume a maximum of 100 percent of the priority group bandwidth.

  • The scheduler map hpc-map was created and has these properties:

    • There is one scheduler, hpc-sched.

    • The scheduler hpc-sched has one forwarding class, hpc.

    • The hpc forwarding class has a minimum guaranteed bandwidth of 2,000,000,000 bps, can consume a maximum of 100 percent of the priority group bandwidth, and uses the drop profile dp-hpc for low loss-priority traffic and the default drop profile for medium-high and high loss-priority traffic.
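A sketch of the scheduler and scheduler-map configuration consistent with the properties above, assuming the standard Junos class-of-service hierarchy, might look like this. The transmit-rate and shaping-rate values come from the output described above; the scheduling priority values are not stated in this summary, so they are omitted here:

```
set class-of-service schedulers be-sched transmit-rate 3000000000
set class-of-service schedulers be-sched shaping-rate percent 100
set class-of-service schedulers be-sched drop-profile-map loss-priority low protocol any drop-profile dp-be-low
set class-of-service schedulers be-sched drop-profile-map loss-priority high protocol any drop-profile dp-be-high
set class-of-service schedulers nc-sched transmit-rate 500000000
set class-of-service schedulers nc-sched shaping-rate percent 100
set class-of-service schedulers nc-sched drop-profile-map loss-priority low protocol any drop-profile dp-nc
set class-of-service schedulers fcoe-sched transmit-rate 2500000000
set class-of-service schedulers fcoe-sched shaping-rate percent 100
set class-of-service schedulers nl-sched transmit-rate 2000000000
set class-of-service schedulers nl-sched shaping-rate percent 100
set class-of-service schedulers hpc-sched transmit-rate 2000000000
set class-of-service schedulers hpc-sched shaping-rate percent 100
set class-of-service schedulers hpc-sched drop-profile-map loss-priority low protocol any drop-profile dp-hpc
set class-of-service scheduler-maps be-map forwarding-class best-effort scheduler be-sched
set class-of-service scheduler-maps be-map forwarding-class be2 scheduler be-sched
set class-of-service scheduler-maps be-map forwarding-class network-control scheduler nc-sched
set class-of-service scheduler-maps gd-map forwarding-class fcoe scheduler fcoe-sched
set class-of-service scheduler-maps gd-map forwarding-class no-loss scheduler nl-sched
set class-of-service scheduler-maps hpc-map forwarding-class hpc scheduler hpc-sched
```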

Verifying the Drop Profiles

Purpose

Verify that you created the drop profiles dp-be-high, dp-be-low, dp-hpc, and dp-nc with the correct fill levels and drop probabilities.

Action

List the drop profiles using the operational mode command show configuration class-of-service drop-profiles:

user@switch> show configuration class-of-service drop-profiles

Meaning

The show configuration class-of-service drop-profiles command lists the drop profiles and their properties. The command output shows that there are four drop profiles configured, dp-be-high, dp-be-low, dp-hpc, and dp-nc. The output also shows that:

  • For dp-be-low, the drop start point (the first fill level) is when the queue is 25 percent filled, the drop end point (the second fill level) occurs when the queue is 50 percent filled, and the drop probability at the drop end point is 80 percent.

  • For dp-be-high, the drop start point (the first fill level) is when the queue is 10 percent filled, the drop end point (the second fill level) occurs when the queue is 40 percent filled, and the drop probability at the drop end point is 100 percent.

  • For dp-hpc, the drop start point (the first fill level) is when the queue is 75 percent filled, the drop end point (the second fill level) occurs when the queue is 90 percent filled, and the drop probability at the drop end point is 75 percent.

  • For dp-nc, the drop start point (the first fill level) is when the queue is 80 percent filled, the drop end point (the second fill level) occurs when the queue is 100 percent filled, and the drop probability at the drop end point is 100 percent.
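A sketch of the drop-profile configuration consistent with these fill levels and drop probabilities, assuming the Junos interpolate style of drop profile, might look like this (the drop probability at the first fill level is 0 in each case, which is what makes that level the drop start point):

```
set class-of-service drop-profiles dp-be-low interpolate fill-level [ 25 50 ]
set class-of-service drop-profiles dp-be-low interpolate drop-probability [ 0 80 ]
set class-of-service drop-profiles dp-be-high interpolate fill-level [ 10 40 ]
set class-of-service drop-profiles dp-be-high interpolate drop-probability [ 0 100 ]
set class-of-service drop-profiles dp-hpc interpolate fill-level [ 75 90 ]
set class-of-service drop-profiles dp-hpc interpolate drop-probability [ 0 75 ]
set class-of-service drop-profiles dp-nc interpolate fill-level [ 80 100 ]
set class-of-service drop-profiles dp-nc interpolate drop-probability [ 0 100 ]
```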

Verifying the Priority Group Output Schedulers (Traffic Control Profiles)

Purpose

Verify that you created the traffic control profiles be-tcp, gd-tcp, and hpc-tcp with the correct bandwidth parameters and scheduler mapping.

Action

List the traffic control profiles using the operational mode command show class-of-service traffic-control-profile:

Meaning

The show class-of-service traffic-control-profile command lists all of the configured traffic control profiles. For each traffic control profile, the command output includes:

  • The name of the traffic control profile (traffic-control-profile)

  • The maximum port bandwidth the priority group can consume (shaping-rate)

  • The scheduler map associated with the traffic control profile (scheduler-map)

  • The minimum guaranteed priority group port bandwidth (guaranteed-rate)

The command output shows that:

  • The traffic control profile be-tcp can consume a maximum of 100 percent of the port bandwidth, is associated with the scheduler map be-map, and has a minimum guaranteed bandwidth of 3,500,000,000 bps.

  • The traffic control profile gd-tcp can consume a maximum of 100 percent of the port bandwidth, is associated with the scheduler map gd-map, and has a minimum guaranteed bandwidth of 4,500,000,000 bps.

  • The traffic control profile hpc-tcp can consume a maximum of 100 percent of the port bandwidth, is associated with the scheduler map hpc-map, and has a minimum guaranteed bandwidth of 2,000,000,000 bps.
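A sketch of the traffic control profile configuration consistent with these properties, assuming the standard Junos traffic-control-profiles hierarchy, might look like this:

```
set class-of-service traffic-control-profiles be-tcp scheduler-map be-map
set class-of-service traffic-control-profiles be-tcp shaping-rate percent 100
set class-of-service traffic-control-profiles be-tcp guaranteed-rate 3500000000
set class-of-service traffic-control-profiles gd-tcp scheduler-map gd-map
set class-of-service traffic-control-profiles gd-tcp shaping-rate percent 100
set class-of-service traffic-control-profiles gd-tcp guaranteed-rate 4500000000
set class-of-service traffic-control-profiles hpc-tcp scheduler-map hpc-map
set class-of-service traffic-control-profiles hpc-tcp shaping-rate percent 100
set class-of-service traffic-control-profiles hpc-tcp guaranteed-rate 2000000000
```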

Verifying the Interface Configuration

Purpose

Verify that the classifier, the congestion notification profile, and the forwarding class sets are configured on interfaces xe-0/0/20 and xe-0/0/21.

Action

List the interfaces using the operational mode commands show configuration class-of-service interfaces xe-0/0/20 and show configuration class-of-service interfaces xe-0/0/21:

user@switch> show configuration class-of-service interfaces xe-0/0/20
user@switch> show configuration class-of-service interfaces xe-0/0/21

Meaning

The show configuration class-of-service interfaces interface-name command shows that each interface includes the forwarding class sets best-effort-pg, guar-delivery-pg, and hpc-pg, the congestion notification profile gd-cnp, and the IEEE 802.1p classifier hsclassifier1.
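A sketch of the per-interface configuration consistent with this output might look like the following for xe-0/0/20 (xe-0/0/21 would mirror it). The pairing of each forwarding class set with its output traffic control profile (be-tcp with best-effort-pg, gd-tcp with guar-delivery-pg, hpc-tcp with hpc-pg) is inferred from the scheduler maps they reference and is an assumption, not stated directly in this verification output:

```
set class-of-service interfaces xe-0/0/20 forwarding-class-set best-effort-pg output-traffic-control-profile be-tcp
set class-of-service interfaces xe-0/0/20 forwarding-class-set guar-delivery-pg output-traffic-control-profile gd-tcp
set class-of-service interfaces xe-0/0/20 forwarding-class-set hpc-pg output-traffic-control-profile hpc-tcp
set class-of-service interfaces xe-0/0/20 congestion-notification-profile gd-cnp
set class-of-service interfaces xe-0/0/20 unit 0 classifiers ieee-802.1 hsclassifier1
```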