Managing Congestion on the Egress Interface by Configuring the Scheduler Buffer Size

To control congestion at the output stage, you can configure the delay-buffer bandwidth. The delay-buffer bandwidth provides packet buffer space to absorb burst traffic up to the specified duration of delay. Once the specified delay buffer becomes full, packets with 100 percent drop probability are dropped from the head of the buffer.

The default scheduler transmission rates for queues 0 through 7 are 95, 0, 0, 0, 0, 0, 0, and 5 percent of the total available bandwidth.

The default buffer size percentages for queues 0 through 7 are 95, 0, 0, 0, 0, 0, 0, and 5 percent of the total available buffer. The total available buffer per queue differs by PIC type.

To configure the buffer size, include the buffer-size statement at the [edit class-of-service schedulers scheduler-name] hierarchy level:
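(A sketch of the statement syntax, assembled from the options described in this topic; the available options vary by platform.)

[edit class-of-service schedulers scheduler-name]
buffer-size (percent percentage | remainder | shared | temporal microseconds);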

For each scheduler, you can configure the buffer size as one of the following:

  • A percentage of the total buffer. The total buffer per queue is measured in microseconds of delay and differs by routing device type, as shown in Table 1.

  • The remaining buffer available. The remainder is the buffer percentage that is not assigned to other queues. For example, if you assign 40 percent of the delay buffer to queue 0, allow queue 3 to keep the default allotment of 5 percent, and assign the remainder to queue 7, then queue 7 uses approximately 55 percent of the delay buffer.

  • Shared from the interface’s buffer pool. On PTX Series routers, set a queue’s buffer to be up to 100 percent of the interface’s buffer. This option allows the queue’s buffer to grow as large as 100 percent of the interface's buffer if and only if it is the only active queue for the interface.

  • A temporal value, in microseconds. For the temporal setting, the queuing algorithm starts dropping packets when the queue holds more than a computed number of bytes. This maximum is computed by multiplying the transmission rate of the queue by the configured temporal value (see the example following this list). The buffer size temporal value per queue differs by routing device type, as shown in Table 1. The maximums apply to the logical interface, not to each queue.

    Note:

    In general, the default temporal buffer value is inversely related to the speed, or shaping rate, of the interface. As the speed of the interface increases, the interface needs less buffer to hold data because it can transmit the data more quickly.
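For example (illustrative values), a queue with a 2-Mbps transmission rate and a temporal buffer-size of 100,000 microseconds starts dropping packets once it holds more than:

2 Mbps * 100,000 microseconds = 200,000 bits = 25,000 bytes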

Table 1: Buffer Size Temporal Value Ranges by Routing Device Type

With Large Buffer Sizes Not Enabled

  • M320 and T Series router FPCs, Type 1 and Type 2: 1 through 80,000 microseconds
  • M320 and T Series router FPCs, Type 3, and all ES cards (Type 1, 2, 3, and 4): 1 through 50,000 microseconds. For PICs with more than 40 Gbps of total bandwidth, the maximum temporal buffer size that can be configured for a scheduler is 40,000 microseconds instead of 50,000 microseconds.
  • M120 router FEBs, MX Series router nonenhanced queuing DPCs, and EX Series switches: 1 through 100,000 microseconds
  • M5, M7i, M10, and M10i router FPCs: 1 through 100,000 microseconds
  • Other M Series router FPCs: 1 through 200,000 microseconds
  • PTX Series Packet Transport Routers: 1 through 100,000 microseconds
  • IQ PICs on all routers: 1 through 100,000 microseconds

With Large Buffer Sizes Enabled

  • IQ PICs on all routers: 1 through 500,000 microseconds
  • Gigabit Ethernet IQ VLANs:
      With shaping rate up to 10 Mbps: 1 through 400,000 microseconds
      With shaping rate up to 20 Mbps: 1 through 300,000 microseconds
      With shaping rate up to 30 Mbps: 1 through 200,000 microseconds
      With shaping rate up to 40 Mbps: 1 through 150,000 microseconds
      With shaping rate above 40 Mbps: 1 through 100,000 microseconds

For more information about configuring delay buffers, see the following subtopics:

Configuring Large Delay Buffers for Slower Interfaces

By default, T1, E1, and NxDS0 interfaces and DLCIs configured on channelized IQ PICs are limited to 100,000 microseconds of delay buffer. (The default average packet size on the IQ PIC is 40 bytes.) For these interfaces, it might be necessary to configure a larger buffer size to prevent congestion and packet dropping. You can do so on the following PICs:

  • Channelized IQ

  • 4-port E3 IQ

  • Gigabit Ethernet IQ and IQ2

Congestion and packet dropping occur when large bursts of traffic are received by slower interfaces. This happens when faster interfaces pass traffic to slower interfaces, which is often the case when edge devices receive traffic from the core of the network. For example, a 100,000-microsecond T1 delay buffer can absorb only 20 percent of a 5000-microsecond burst of traffic from an upstream OC3 interface. In this case, 80 percent of the burst traffic is dropped.
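(The arithmetic behind this example, using nominal rates of 155 Mbps for OC3 and 1.5 Mbps for T1: a 5000-microsecond burst at OC3 speed is 155 Mbps * 5000 microseconds = 775,000 bits, while a 100,000-microsecond T1 buffer holds 1.5 Mbps * 100,000 microseconds = 150,000 bits, or roughly 20 percent of the burst.)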

Table 2 shows recommended buffer sizes for absorbing typical burst sizes from various upstream interface types.

Table 2: Recommended Delay Buffer Sizes

  Length of Burst     Upstream Interface   Downstream Interface   Recommended Buffer on Downstream Interface
  5000 microseconds   OC3                  E1 or T1               500,000 microseconds
  5000 microseconds   E1 or T1             E1 or T1               100,000 microseconds
  1000 microseconds   T3                   E1 or T1               100,000 microseconds

To ensure that traffic is queued and transmitted properly on E1, T1, and NxDS0 interfaces and DLCIs, you can configure a buffer size larger than the default maximum. To enable larger buffer sizes to be configured:

Include the q-pic-large-buffer (large-scale | small-scale) statement at the [edit chassis fpc slot-number pic pic-number] hierarchy level.

If you specify the large-scale option, the feature supports a larger number of interfaces. If you specify the small-scale option (the default), the feature supports a smaller number of interfaces.
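For example, to enable large-scale buffers on a PIC in FPC slot 0, PIC slot 0 (slot numbers are illustrative):

[edit]
chassis {
    fpc 0 {
        pic 0 {
            q-pic-large-buffer large-scale;
        }
    }
}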

When you include the q-pic-large-buffer statement in the configuration, the larger buffer is transparently available for allocation to scheduler queues. The larger buffer maximum varies by interface type, as shown in Table 3.

Table 3: Maximum Delay Buffer with q-pic-large-buffer Enabled by Interface

With Large Buffer Sizes Not Enabled

  • M320 and T Series router FPCs, Type 1 and Type 2: 80,000 microseconds
  • M320 and T Series router FPCs, Type 3: 50,000 microseconds
  • Other M Series router FPCs: 200,000 microseconds
  • IQ PICs on all routers: 100,000 microseconds

With Large Buffer Sizes Enabled

  • Channelized T3 and channelized OC3 DLCIs (maximum sizes vary by shaping rate):
      With shaping rate from 64,000 through 255,999 bps: 4,000,000 microseconds
      With shaping rate from 256,000 through 511,999 bps: 2,000,000 microseconds
      With shaping rate from 512,000 through 1,023,999 bps: 1,000,000 microseconds
      With shaping rate from 1,024,000 through 2,048,000 bps: 500,000 microseconds
      With shaping rate from 2,048,001 bps through 10 Mbps: 400,000 microseconds
      With shaping rate from 10,000,001 bps through 20 Mbps: 300,000 microseconds
      With shaping rate from 20,000,001 bps through 30 Mbps: 200,000 microseconds
      With shaping rate from 30,000,001 bps through 40 Mbps: 150,000 microseconds
      With shaping rate of 40,000,001 bps and above: 100,000 microseconds
  • NxDS0 IQ interfaces (maximum sizes vary by channel size):
      1xDS0 through 3xDS0: 4,000,000 microseconds
      4xDS0 through 7xDS0: 2,000,000 microseconds
      8xDS0 through 15xDS0: 1,000,000 microseconds
      16xDS0 through 32xDS0: 500,000 microseconds
  • Other IQ interfaces: 500,000 microseconds

If you configure a delay buffer larger than the supported maximum, the candidate configuration can still be committed successfully. However, the packet forwarding component rejects the setting and generates a system log warning message.

For interfaces that support DLCI queuing, the large buffer is supported for DLCIs on which the configured shaping rate is less than or equal to the physical interface bandwidth. For instance, if you configure a Frame Relay DLCI on a Channelized T3 IQ PIC with a shaping rate of 1.5 Mbps, the amount of delay buffer that can be allocated to the DLCI is 500,000 microseconds, which is equivalent to a T1 delay buffer. For more information about DLCI queuing, see Applying Scheduler Maps and Shaping Rate to DLCIs and VLANs.

For NxDS0 interfaces, the larger buffer sizes can be up to 4,000,000 microseconds, depending on the number of DS0 channels in the NxDS0 interface. For slower NxDS0 interfaces with fewer channels, the delay buffer can be relatively larger than for faster NxDS0 interfaces with more channels. This is shown in Table 5.

You can allocate the delay buffer as either a percentage or a temporal value. The resulting delay buffer is calculated differently depending on how you configure the delay buffer, as shown in Table 4.

Table 4: Delay-Buffer Calculations

Percentage

Formula: available interface bandwidth * configured buffer-size percentage * maximum buffer time = queue buffer

Example: If you configure a queue on a T1 interface to use 30 percent of the available delay buffer, the queue receives 28,125 bytes of delay buffer:

sched-expedited {
    transmit-rate percent 30;
    buffer-size percent 30;
}

1.5 Mbps * 0.3 * 500,000 microseconds = 225,000 bits = 28,125 bytes

Temporal

Formula: available interface bandwidth * configured transmit-rate percentage * configured temporal buffer-size = queue buffer

Example: If you configure a queue on a T1 interface to use 500,000 microseconds of delay buffer and a transmission rate of 20 percent, the queue receives 18,750 bytes of delay buffer:

sched-best {
    transmit-rate percent 20;
    buffer-size temporal 500000;
}

1.5 Mbps * 0.2 * 500,000 microseconds = 150,000 bits = 18,750 bytes

Percentage, with buffer size larger than transmit rate

Example: Here the buffer-size percentage (20) is twice the transmit-rate percentage (10). The maximum delay-buffer latency can therefore be up to twice the 500,000-microsecond delay buffer, provided the queue's traffic does not exceed its allocated transmit rate.

sched-extra-buffer {
    transmit-rate percent 10;
    buffer-size percent 20;
}

FRF.16 LSQ bundles

For total bundle bandwidth < T1 bandwidth, the delay-buffer rate is 1 second.

For total bundle bandwidth >= T1 bandwidth, the delay-buffer rate is 200 milliseconds (ms).

Configuring the Maximum Delay Buffer for NxDS0 Interfaces

Because NxDS0 interfaces carry less bandwidth than a T1 or E1 interface, the buffer size on an NxDS0 interface can be relatively larger, depending on the number of DS0 channels combined. The maximum delay buffer size is calculated with the following formula:

interface speed * maximum delay buffer time = maximum delay buffer size

For example, a 1xDS0 interface has a speed of 64 kilobits per second (Kbps). At this rate, the maximum delay buffer time is 4,000,000 microseconds. Therefore, the delay buffer size is 32 kilobytes (KB):

64 Kbps * 4,000,000 microseconds = 256,000 bits = 32 KB

Table 5 shows the delay-buffer calculations for 1xDS0 through 32xDS0 interfaces.

Table 5: NxDS0 Transmission Rates and Delay Buffers

1xDS0 Through 3xDS0: Maximum Delay Buffer Time Is 4,000,000 Microseconds

  • 1xDS0 (64 Kbps): 32 KB
  • 2xDS0 (128 Kbps): 64 KB
  • 3xDS0 (192 Kbps): 96 KB

4xDS0 Through 7xDS0: Maximum Delay Buffer Time Is 2,000,000 Microseconds

  • 4xDS0 (256 Kbps): 64 KB
  • 5xDS0 (320 Kbps): 80 KB
  • 6xDS0 (384 Kbps): 96 KB
  • 7xDS0 (448 Kbps): 112 KB

8xDS0 Through 15xDS0: Maximum Delay Buffer Time Is 1,000,000 Microseconds

  • 8xDS0 (512 Kbps): 64 KB
  • 9xDS0 (576 Kbps): 72 KB
  • 10xDS0 (640 Kbps): 80 KB
  • 11xDS0 (704 Kbps): 88 KB
  • 12xDS0 (768 Kbps): 96 KB
  • 13xDS0 (832 Kbps): 104 KB
  • 14xDS0 (896 Kbps): 112 KB
  • 15xDS0 (960 Kbps): 120 KB

16xDS0 Through 32xDS0: Maximum Delay Buffer Time Is 500,000 Microseconds

  • 16xDS0 (1024 Kbps): 64 KB
  • 17xDS0 (1088 Kbps): 68 KB
  • 18xDS0 (1152 Kbps): 72 KB
  • 19xDS0 (1216 Kbps): 76 KB
  • 20xDS0 (1280 Kbps): 80 KB
  • 21xDS0 (1344 Kbps): 84 KB
  • 22xDS0 (1408 Kbps): 88 KB
  • 23xDS0 (1472 Kbps): 92 KB
  • 24xDS0 (1536 Kbps): 96 KB
  • 25xDS0 (1600 Kbps): 100 KB
  • 26xDS0 (1664 Kbps): 104 KB
  • 27xDS0 (1728 Kbps): 108 KB
  • 28xDS0 (1792 Kbps): 112 KB
  • 29xDS0 (1856 Kbps): 116 KB
  • 30xDS0 (1920 Kbps): 120 KB
  • 31xDS0 (1984 Kbps): 124 KB
  • 32xDS0 (2048 Kbps): 128 KB

Example: Configuring Large Delay Buffers for Slower Interfaces

This example sets large delay buffers on interfaces configured on a Channelized OC12 IQ PIC. The CoS configuration binds a scheduler map to the interface specified in the chassis configuration. For information about the delay-buffer calculations in this example, see Table 4.

To configure a large delay buffer (a configuration sketch follows these steps):

  1. Specify the FPC and PIC for which you want to configure large delay buffers.
  2. Enable large delay buffering.
  3. Specify the maximum number of queues per interface.
  4. Verify the configuration.
  5. Save the configuration.
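A sketch of steps 1 through 3, using FPC slot 0 and PIC slot 0 as in the complete configuration below (the queue count of 8 is illustrative):

[edit]
chassis {
    fpc 0 {
        pic 0 {
            q-pic-large-buffer large-scale;
            max-queues-per-interface 8;
        }
    }
}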

Example: Configuring the Delay Buffer Value for a Scheduler

You can assign a scheduler map, composed of different schedulers (or queues), to a physical or logical interface. The physical interface's large delay buffer can be distributed among the schedulers (or queues) by using the transmit-rate and buffer-size statements at the [edit class-of-service schedulers scheduler-name] hierarchy level.

This example shows two schedulers, sched-best and sched-exped, with the delay buffer size configured as a percentage (20 percent) and as a temporal value (300,000 microseconds), respectively. The sched-best scheduler has a transmit rate of 10 percent. The sched-exped scheduler has a transmit rate of 20 percent.

The sched-best scheduler's buffer-size percentage (20) is twice its transmit-rate percentage (10). Assuming that the sched-best scheduler is assigned to a T1 interface, the scheduler receives 20 percent of the T1 interface's total 500,000-microsecond delay buffer, or 18,750 bytes:

1.5 Mbps * 0.2 * 500,000 microseconds = 150,000 bits = 18,750 bytes

Assuming that the sched-exped scheduler is assigned to a T1 interface, the scheduler receives 300,000 microseconds of the T1 interface's 500,000-microsecond delay buffer at a transmit rate of 20 percent, or 11,250 bytes:

1.5 Mbps * 0.2 * 300,000 microseconds = 90,000 bits = 11,250 bytes

To configure this example (a configuration sketch follows these steps):

  1. Configure the sched-best scheduler.
  2. Specify the transmit-rate of 10 percent.
  3. Specify the buffer size as 20 percent.
  4. Configure the sched-exped scheduler.
  5. Specify the transmit-rate of 20 percent.
  6. Specify the buffer size temporal value (300,000 microseconds).
  7. Verify the configuration.
  8. Save the configuration.
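A sketch of the two schedulers described in this example:

[edit class-of-service]
schedulers {
    sched-best {
        transmit-rate percent 10;
        buffer-size percent 20;
    }
    sched-exped {
        transmit-rate percent 20;
        buffer-size temporal 300000;
    }
}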

Example: Configuring the Physical Interface Shaping Rate

In general, the physical interface speed is the basis for calculating the delay buffer size. However, when you include the shaping-rate statement, the shaping rate becomes the basis for calculating the delay buffer size. For more information, see Table 5.

This example configures the shaping rate on a T1 interface to 200 Kbps, which means that the T1 interface bandwidth is treated as 200 Kbps instead of 1.5 Mbps. Because 200 Kbps is less than 4xDS0 (256 Kbps), this interface receives 4 seconds of delay buffer, which can absorb 200 Kbps * 4 seconds = 800,000 bits (100 KB) of burst traffic. A configuration sketch follows these steps.

  1. Specify the interface on which you want to configure the shaping rate.
  2. Specify the shaping rate.
  3. Verify the configuration.
  4. Save the configuration.
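A sketch of the shaping-rate configuration (the interface name t1-1/0/0:0 is hypothetical):

[edit class-of-service]
interfaces {
    t1-1/0/0:0 {
        shaping-rate 200k;
    }
}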

Complete Configuration

This example shows a Channelized OC12 IQ PIC in FPC slot 0, PIC slot 0, and a channelized T1 interface with Frame Relay encapsulation. It also shows a scheduler map configuration on the physical interface.
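A sketch of the complete configuration, combining the statements discussed in this topic (the interface name t1-0/0/0:1:1, the DLCI number, and the map and scheduler names are hypothetical):

[edit]
chassis {
    fpc 0 {
        pic 0 {
            q-pic-large-buffer large-scale;
        }
    }
}
interfaces {
    t1-0/0/0:1:1 {
        encapsulation frame-relay;
        unit 0 {
            dlci 100;
        }
    }
}
class-of-service {
    interfaces {
        t1-0/0/0:1:1 {
            scheduler-map large-buffer-map;
        }
    }
    scheduler-maps {
        large-buffer-map {
            forwarding-class best-effort scheduler sched-best;
        }
    }
    schedulers {
        sched-best {
            transmit-rate percent 10;
            buffer-size percent 20;
        }
    }
}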

Enabling and Disabling the Memory Allocation Dynamic per Queue

In the Junos OS, the memory allocation dynamic (MAD) is a mechanism that dynamically provisions extra delay buffer when a queue uses more bandwidth than its configured transmit rate allocates. With this extra buffer, queues absorb traffic bursts more easily and avoid packet drops. The MAD mechanism can provision extra delay buffer only when a queue is using surplus transmission bandwidth; if no surplus bandwidth is available, the queue might still drop packets.

For Juniper Networks M320 Multiservice Edge Routers, MX Series 5G Universal Routing Platforms, T Series Core Routers, and EX Series Ethernet Switches only, the MAD mechanism is enabled unless the delay buffer for a given queue is configured with a temporal setting. The MAD mechanism is particularly useful for forwarding classes that carry latency-immune traffic whose primary requirement is maximum bandwidth utilization. In contrast, for latency-sensitive traffic you might want to disable the MAD mechanism, because large delay buffers increase latency.

MAD support depends on the FPC and Packet Forwarding Engine, not on the PIC. All FPCs and Packet Forwarding Engines in M320, MX Series, and T Series routers and EX Series switches support MAD. Modular Port Concentrators (MPCs) and IQ, IQ2, IQ2E, and IQE PICs do not support MAD.

To enable the MAD mechanism on supported hardware:

Include the buffer-size percent statement at the [edit class-of-service schedulers scheduler-name] hierarchy level:
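(A sketch; the scheduler name sched-mad and the percentage are illustrative.)

[edit class-of-service schedulers sched-mad]
buffer-size percent 20;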

The minimum buffer allocated to any queue is 18,432 bytes. If you configure a queue with a buffer size smaller than 18,432 bytes, the queue still receives 18,432 bytes.

If desired, you can configure a buffer size that is greater than the configured transmission rate. The buffer can accommodate packet bursts that exceed the configured transmission rate, if sufficient excess bandwidth is available. For example:
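(A sketch with illustrative values, where the buffer-size percentage exceeds the transmit-rate percentage; the scheduler name sched-burst is hypothetical.)

sched-burst {
    transmit-rate percent 10;
    buffer-size percent 30;
}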

As stated previously, you can use a temporal delay buffer configuration to disable the MAD mechanism on a queue, thus limiting the size of the delay buffer. However, the effective buffer latency for a temporal queue is bounded not only by the buffer-size value but also by the associated drop profile. If a drop profile specifies a drop probability of 100 percent at a fill level of less than 100 percent, the effective maximum buffer latency is smaller than the buffer-size setting, because the drop profile causes the queue to drop packets before the delay buffer is 100 percent full.

Such a configuration might look like the following example:
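(In this sketch, the drop profile early-drop and the scheduler sched-temporal are hypothetical; the drop profile reaches 100 percent drop probability at an 80 percent fill level.)

class-of-service {
    drop-profiles {
        early-drop {
            fill-level 80 drop-probability 100;
        }
    }
    schedulers {
        sched-temporal {
            transmit-rate percent 20;
            buffer-size temporal 300000;
            drop-profile-map loss-priority any protocol any drop-profile early-drop;
        }
    }
}

With this drop profile, the effective maximum buffer latency is roughly 80 percent of the configured 300,000 microseconds, because the queue drops all packets once the buffer is 80 percent full.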