
Configuring Link Services Interfaces

 

Juniper Networks devices support link services on the lsq-0/0/0 link services queuing interface, which includes the multilink services MLPPP and MLFR as well as CRTP. The topics below provide an overview of link services and describe how to configure and verify link services on SRX Series devices.

Link services include the multilink services Multilink Point-to-Point Protocol (MLPPP), Multilink Frame Relay (MLFR), and Compressed Real-Time Transport Protocol (CRTP). Juniper Networks devices support link services on the lsq-0/0/0 link services queuing interface.

You configure the link services queuing interface (lsq-0/0/0) on a Juniper Networks device to support multilink services and CRTP.

The link services queuing interface on SRX Series devices consists of services provided by the following interfaces on the Juniper Networks M Series and T Series routing platforms: multilink services interface (ml-fpc/pic/port), link services interface (ls-fpc/pic/port), and link services intelligent queuing interface (lsq-fpc/pic/port). Although the multilink services, link services, and link services intelligent queuing (IQ) interfaces on M Series and T Series routing platforms are installed on Physical Interface Cards (PICs), the link services queuing interface on SRX Series devices is an internal interface only and is not associated with a physical medium or Physical Interface Module (PIM).

Note

The link services interface (ls-fpc/pic/port) is not supported on SRX Series devices.


Services Available on a Link Services Interface

The link services interface is a logical interface available by default. Table 1 summarizes the services available on the interface.

Table 1: Services Available on a Link Services Interface

Services

Purpose

More Information

Multilink bundles by means of MLPPP and MLFR encapsulation

Aggregates multiple constituent links into one larger logical bundle to provide additional bandwidth, load balancing, and redundancy.

Note: Dynamic call admission control (DCAC) configurations are not supported on Link Services Interfaces.

Link fragmentation and interleaving (LFI)

Reduces delay and jitter on links by breaking up large data packets and interleaving delay-sensitive voice packets with the resulting smaller packets.

Understanding Link Fragmentation and Interleaving Configuration

Compressed Real-Time Transport Protocol (CRTP)

Reduces the overhead caused by Real-Time Transport Protocol (RTP) on voice and video packets.

Compressed Real-Time Transport Protocol Overview

Class-of-service (CoS) classifiers, forwarding classes, schedulers and scheduler maps, and shaping rates

Provides a higher priority to delay-sensitive packets by configuring CoS components such as the following:

  • Classifiers—To classify different types of traffic, such as voice, data, and network control packets.

  • Forwarding classes—To direct different types of traffic to different output queues.

  • Fragmentation map—To define the mapping between forwarding classes and multilink classes, and between forwarding classes and fragment thresholds. A drop timeout can also be configured as part of the forwarding class to multilink class mapping.

  • Schedulers and scheduler maps—To define properties for the output queues such as delay-buffer, transmission rate, and transmission priority.

  • Shaping rate—To limit the amount of bandwidth an interface can use.

Link Services Exceptions

The link and multilink services implementation on SRX Series devices is similar to the implementation on the M Series and T Series routing platforms, with the following exceptions:

  • Link and multilink services are supported on the lsq-0/0/0 interface instead of on the ml-fpc/pic/port, lsq-fpc/pic/port, and ls-fpc/pic/port interfaces.

  • When LFI is enabled, fragmented packets are queued in a round-robin fashion on the constituent links to enable per-packet and per-fragment load balancing. See Queuing with LFI.

  • Per-unit scheduling is supported on all types of constituent links (all interface types).

  • Compressed Real-Time Transport Protocol (CRTP) is supported with both MLPPP and PPP.

Configuring Multiclass MLPPP

For lsq-0/0/0 on a Juniper Networks device with MLPPP encapsulation, you can configure multiclass MLPPP. If you do not configure multiclass MLPPP, fragments from different classes cannot be interleaved. All fragments for a single packet must be sent before the fragments from another packet are sent. Non-fragmented packets can be interleaved between fragments of another packet to reduce the latency seen by non-fragmented packets. In effect, latency-sensitive traffic is encapsulated as regular PPP traffic, and bulk traffic is encapsulated as multilink traffic. This model works as long as there is a single class of latency-sensitive traffic and no high-priority traffic that takes precedence over the latency-sensitive traffic. This approach to LFI, used on the Link Services PIC, supports only two levels of traffic priority, which is not sufficient to carry the four to eight forwarding classes that are supported by M Series and T Series routing platforms.

Multiclass MLPPP makes it possible to have multiple classes of latency-sensitive traffic that are carried over a single multilink bundle with bulk traffic. In effect, multiclass MLPPP allows different classes of traffic to have different latency guarantees. With multiclass MLPPP, you can map each forwarding class into a separate multilink class, thus preserving priority and latency guarantees.

Note

Configuring both LFI and multiclass MLPPP on the same bundle is not necessary, nor is it supported, because multiclass MLPPP represents a superset of functionality. When you configure multiclass MLPPP, LFI is automatically enabled.

The Junos OS PPP implementation does not support the negotiation of the address field compression and protocol field compression PPP LCP options, which means that the software always sends a full 4-byte PPP header.

The Junos OS implementation of multiclass MLPPP does not support compression of common header bytes.

Multiclass MLPPP greatly simplifies packet ordering issues that occur when multiple links are used. Without multiclass MLPPP, all voice traffic belonging to a single flow is hashed to a single link to avoid packet ordering issues. With multiclass MLPPP, you can assign voice traffic to a high-priority class, and you can use multiple links.

To configure multiclass MLPPP on a link services IQ interface, you must specify how many multilink classes should be negotiated when a link joins the bundle, and you must specify the mapping of a forwarding class into a multiclass MLPPP class.

To specify how many multilink classes should be negotiated when a link joins the bundle, include the multilink-max-classes statement:
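
For example, a minimal sketch using a set command (the unit number 0 and the class count 4 are illustrative values only):

set interfaces lsq-0/0/0 unit 0 multilink-max-classes 4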

You can include this statement at the following hierarchy levels:

  • [edit interfaces interface-name unit logical-unit-number]

  • [edit logical-routers logical-router-name interfaces interface-name unit logical-unit-number]

The number of multilink classes can be 1 through 8. The number of multilink classes for each forwarding class must not exceed the number of multilink classes to be negotiated.

To specify the mapping of a forwarding class into a multiclass MLPPP class, include the multilink-class statement at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level:
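
For example, a minimal sketch using a set command (the map name fm1 and the forwarding class expedited-forwarding are illustrative; the class index must be lower than the number of classes negotiated with multilink-max-classes):

set class-of-service fragmentation-maps fm1 forwarding-class expedited-forwarding multilink-class 1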

The multilink class index number can be 0 through 7. The multilink-class statement and the no-fragmentation statement are mutually exclusive.

To view the number of multilink classes negotiated, issue the show interfaces lsq-0/0/0.logical-unit-number detail command.

Queuing with LFI

LFI or non-LFI packets are placed into queues on constituent links based on the queues in which they arrive. No changes in the queue number occur while the fragmented, non-fragmented, or LFI packets are being queued.

For example, assume that queue Q0 is configured with a fragmentation threshold of 128 bytes, Q1 is configured with no fragmentation, and Q2 is configured with a fragmentation threshold of 512 bytes. Q0 receives a stream of 512-byte packets, Q1 receives 64-byte voice traffic, and Q2 receives a stream of 128-byte packets. The stream on Q0 is fragmented and queued to Q0 of a constituent link. The packets on Q2 are smaller than their 512-byte fragmentation threshold, so they are not fragmented, but they are also queued to Q0 of a constituent link. The stream on Q1 is treated as LFI traffic because no fragmentation is configured for it. As a result, all packets from Q0 and Q2 are queued to Q0 of the constituent links, and all packets from Q1 are queued to Q2 of the constituent links.
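
The following sketch shows one way to configure the thresholds described in this example; the map name fm1 and the forwarding-class names fc-q0, fc-q1, and fc-q2 are hypothetical and are assumed to be defined and mapped to queues 0, 1, and 2, respectively:

set class-of-service fragmentation-maps fm1 forwarding-class fc-q0 fragment-threshold 128
set class-of-service fragmentation-maps fm1 forwarding-class fc-q1 no-fragmentation
set class-of-service fragmentation-maps fm1 forwarding-class fc-q2 fragment-threshold 512
set class-of-service interfaces lsq-0/0/0 unit 0 fragmentation-map fm1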

With lsq-0/0/0, CRTP can be applied to both LFI and non-LFI packets. CRTP does not change their queue numbers.

Queuing on Q2s of Constituent Links

When using class of service on a multilink bundle, all Q2 traffic from the multilink bundle is queued to Q2 of constituent links based on a hash computed from the source address, destination address, and the IP protocol of the packet. If the IP payload is TCP or UDP traffic, the hash also includes the source port and destination port. As a result of this hash algorithm, all traffic belonging to one traffic flow is queued to Q2 of one constituent link. This method of traffic delivery to the constituent link is applied at all times, including when the bundle has not been set up with LFI.

Compressed Real-Time Transport Protocol Overview

Real-Time Transport Protocol (RTP) can help achieve interoperability among different implementations of network audio and video applications. However, in some cases, the header, which includes the IP, UDP, and RTP headers, can be too large (around 40 bytes) on networks using low-speed lines such as dial-up modems. Compressed Real-Time Transport Protocol (CRTP) can be configured to reduce network overhead on low-speed links. CRTP replaces the IP, UDP, and RTP headers with a 2-byte context ID (CID), reducing the header overhead considerably.

Figure 1 shows how CRTP compresses the RTP header in a voice packet by reducing a 40-byte header to a 2-byte header.

Figure 1: CRTP

You can configure CRTP with MLPPP or PPP logical interface encapsulation on link services interfaces. See Example: Configuring an MLPPP Bundle.

Real-time and non-real-time data frames are carried together on lower-speed links without causing excessive delays to the real-time traffic. See Understanding Link Fragmentation and Interleaving Configuration.

Configuring Fragmentation by Forwarding Class

For lsq-0/0/0, you can specify fragmentation properties for specific forwarding classes. Traffic on each forwarding class can be either multilink encapsulated (fragmented and sequenced) or non-encapsulated (hashed with no fragmentation). By default, traffic in all forwarding classes is multilink encapsulated.

When you do not configure fragmentation properties for the queues on MLPPP interfaces, the fragmentation threshold you set at the [edit interfaces interface-name unit logical-unit-number fragment-threshold] hierarchy level is the fragmentation threshold for all forwarding classes within the MLPPP interface. For MLFR FRF.16 interfaces, the fragmentation threshold you set at the [edit interfaces interface-name mlfr-uni-nni-bundle-options fragment-threshold] hierarchy level is the fragmentation threshold for all forwarding classes within the MLFR FRF.16 interface.
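
For example (hedged sketches only; the unit number and the FRF.16 channel name lsq-0/0/0:0 are illustrative):

set interfaces lsq-0/0/0 unit 0 fragment-threshold 128
set interfaces lsq-0/0/0:0 mlfr-uni-nni-bundle-options fragment-threshold 128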

If you do not set a maximum fragment size anywhere in the configuration, packets are still fragmented if they exceed the smallest maximum transmission unit (MTU) or maximum received reconstructed unit (MRRU) of all the links in the bundle. A non-encapsulated flow uses only one link. If the flow exceeds a single link, then the forwarding class must be multilink encapsulated, unless the packet size exceeds the MTU/MRRU.

Even if you do not set a maximum fragment size anywhere in the configuration, you can configure the MRRU by including the mrru statement at the [edit interfaces lsq-0/0/0 unit logical-unit-number] or [edit interfaces interface-name mlfr-uni-nni-bundle-options] hierarchy level. The MRRU is similar to the MTU, but is specific to link services interfaces. By default the MRRU size is 1504 bytes, and you can configure it to be from 1500 through 4500 bytes.
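
For example, a minimal sketch (unit 0 and the value 2000 are illustrative; 2000 falls within the permitted 1500 through 4500 byte range):

set interfaces lsq-0/0/0 unit 0 mrru 2000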

To configure fragmentation properties on a queue, include the fragmentation-maps statement at the [edit class-of-service] hierarchy level:
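
The statement syntax, in sketch form (map-name and class-name are placeholders; only one of the three options shown can be applied to a given forwarding class, as described below):

fragmentation-maps {
    map-name {
        forwarding-class class-name {
            fragment-threshold bytes;
            multilink-class number;
            no-fragmentation;
        }
    }
}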

To set a per-forwarding class fragmentation threshold, include the fragment-threshold statement in the fragmentation map. This statement sets the maximum size of each multilink fragment.

To set traffic on a queue to be non-encapsulated rather than multilink encapsulated, include the no-fragmentation statement in the fragmentation map. This statement specifies that an extra fragmentation header is not prepended to the packets received on this queue and that static link load balancing is used to ensure in-order packet delivery.

For a given forwarding class, you can include either the fragment-threshold or no-fragmentation statement; they are mutually exclusive.

You use the multilink-class statement to map a forwarding class into a multiclass MLPPP. For a given forwarding class, you can include either the multilink-class or no-fragmentation statement; they are mutually exclusive.

To associate a fragmentation map with a multilink PPP interface or MLFR FRF.16 DLCI, include the fragmentation-map statement at the [edit class-of-service interfaces interface-name unit logical-unit-number] hierarchy level:
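
For example, a minimal sketch (the map name fm1 and unit 0 are illustrative):

set class-of-service interfaces lsq-0/0/0 unit 0 fragmentation-map fm1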

Configuring Link-Layer Overhead

Link-layer overhead can cause packet drops on constituent links because of bit stuffing on serial links. Bit stuffing is used to prevent data from being interpreted as control information.

By default, 4 percent of the total bundle bandwidth is set aside for link-layer overhead. In most network environments, the average link-layer overhead is 1.6 percent. Therefore, we recommend 4 percent as a safeguard.

For lsq-0/0/0 on a Juniper Networks device, you can configure the percentage of bundle bandwidth to be set aside for link-layer overhead. To do so, include the link-layer-overhead statement:
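
For example, a minimal sketch that keeps the default value of 4 percent (unit 0 is illustrative):

set interfaces lsq-0/0/0 unit 0 link-layer-overhead 4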

You can include this statement at the following hierarchy levels:

  • [edit interfaces interface-name mlfr-uni-nni-bundle-options]

  • [edit interfaces interface-name unit logical-unit-number]

  • [edit logical-routers logical-router-name interfaces interface-name unit logical-unit-number]

You can configure the value to be from 0 percent through 50 percent.

Before you begin:

  • Install device hardware.

  • Establish basic connectivity. See the Getting Started Guide for your device.

  • Have a basic understanding of physical and logical interfaces and Juniper Networks interface conventions. See Understanding Interfaces.

Plan how you are going to use the link services interface on your network. See Link Services Interfaces Overview.

To configure link services on an interface, perform the following tasks:

  1. Configure link fragmentation and interleaving (LFI). See Example: Configuring Link Fragmentation and Interleaving.
  2. Configure classifiers and forwarding classes. See Example: Defining Classifiers and Forwarding Classes.
  3. Configure scheduler maps. See Understanding How to Define and Apply Scheduler Maps.
  4. Configure interface shaping rates. See Example: Configuring Interface Shaping Rates.
  5. Configure an MLPPP bundle. See Example: Configuring an MLPPP Bundle.
  6. To configure MLFR, see Example: Configuring Multilink Frame Relay FRF.15 or Example: Configuring Multilink Frame Relay FRF.16.
  7. To configure CRTP, see Example: Configuring the Compressed Real-Time Transport Protocol.

Confirm that the configuration is working properly.

Purpose

Verify the link services interface statistics.

Action

The sample output provided in this section is based on the configurations provided in Example: Configuring an MLPPP Bundle. To verify that the constituent links are added to the bundle correctly and the packets are fragmented and transmitted correctly, take the following actions:

  1. On device R0 and device R1, the two devices used in this example, configure MLPPP and LFI as described in Example: Configuring an MLPPP Bundle.
  2. From the CLI, enter the ping command to verify that a connection is established between R0 and R1.
  3. Transmit 10 data packets, 200 bytes each, from R0 to R1.
  4. On R0, from the CLI, enter the show interfaces interface-name statistics command.
user@R0> show interfaces lsq-0/0/0 statistics detail

This output shows a summary of interface information. Verify the following information:

  • Physical interface—The physical interface is Enabled. If the interface is shown as Disabled, do either of the following:

    • In the CLI configuration editor, delete the disable statement at the [edit interfaces interface-name] level of the configuration hierarchy.

    • In the J-Web configuration editor, clear the Disable check box on the Interfaces>interface-name page.

  • Physical link—The physical link is Up. A link state of Down indicates a problem with the interface module, interface port, or physical connection (link-layer errors).

  • Last flapped—The Last Flapped time is an expected value. The Last Flapped time indicates the last time the physical interface became unavailable and then available again. Unexpected flapping indicates likely link-layer errors.

  • Traffic statistics—Number and rate of bytes and packets received and transmitted on the interface. Verify that the number of inbound and outbound bytes and packets match the expected throughput for the physical interface. To clear the statistics and see only new changes, use the clear interfaces statistics interface-name command.

  • Queue counters—Name and number of queues are as configured. This sample output shows that 10 data packets were transmitted and no packets were dropped.

  • Logical interface—Name of the multilink bundle you configured—lsq-0/0/0.0.

  • Bundle options—Fragmentation threshold is correctly configured, and fragment interleaving is enabled.

  • Bundle errors—Any packets and fragments dropped by the bundle.

  • Statistics—The fragments and packets are received and transmitted correctly by the device. All references to traffic direction (input or output) are defined with respect to the device. Input fragments received by the device are assembled into input packets. Output packets are segmented into output fragments for transmission out of the device.

    In this example, 10 data packets of 200 bytes were transmitted. Because the fragmentation threshold is set to 128 bytes, all data packets were fragmented into two fragments. The sample output shows that 10 packets and 20 fragments were transmitted correctly.

  • Link—The constituent links are added to this bundle and are receiving and transmitting fragments and packets correctly. The combined number of fragments transmitted on the constituent links must be equal to the number of fragments transmitted from the bundle. This sample output shows that the bundle transmitted 20 fragments and the two constituent links se-1/0/0.0 and se-1/0/1.0 correctly transmitted 10 + 10 = 20 fragments.

  • Destination and Local—IP address of the remote side of the multilink bundle and the local side of the multilink bundle. This sample output shows that the destination address is the address on R1 and the local address is the address on R0.

Verifying Link Services CoS Configuration

Purpose

Verify CoS configurations on the link services interface.

Action

From the CLI, enter the following commands:

  • show class-of-service interface interface-name

  • show class-of-service classifier name classifier-name

  • show class-of-service scheduler-map scheduler-map-name

The sample output provided in this section is based on the configurations provided in Example: Configuring an MLPPP Bundle.

user@R0> show class-of-service interface lsq-0/0/0
user@R0> show class-of-service interface ge-0/0/1
user@R0> show class-of-service classifier name classify_input
user@R0> show class-of-service scheduler-map s_map

These output examples show a summary of configured CoS components. Verify the following information:

  • Logical Interface—Name of the multilink bundle and the CoS components applied to the bundle. The sample output shows that the multilink bundle is lsq-0/0/0.0, and the CoS scheduler-map s_map is applied to it.

  • Classifier—Code points, forwarding classes, and loss priorities assigned to the classifier. The sample output shows that a default classifier, ipprec-compatibility, was applied to the lsq-0/0/0 interface and the classifier classify_input was applied to the ge-0/0/1 interface.

  • Scheduler—Transmit rate, buffer size, priority, and loss priority assigned to each scheduler. The sample output displays the data, voice, and network control schedulers with all the configured values.

Understanding the Internal Interface LSQ-0/0/0 Configuration

The link services interface is an internal interface only. It is not associated with a physical medium or PIM. Within an SRX Series device, packets are routed to this interface for link bundling or compression.

You might need to upgrade your configuration to use the internal interface lsq-0/0/0 as the link services queuing interface instead of ls-0/0/0, which has been deprecated. You can also roll back your modified configuration to use ls-0/0/0.

This example shows how to upgrade from ls-0/0/0 to lsq-0/0/0 (or to reverse the change) for multilink services.

Requirements

This procedure is necessary only if you are still using ls-0/0/0 instead of lsq-0/0/0 or if you need to revert to the old interface.

Overview

In this example, you rename the link services internal interface from ls-0/0/0 to lsq-0/0/0 or vice versa. You rename all occurrences of ls-0/0/0 in the configuration to lsq-0/0/0 and configure the fragmentation map by adding the no-fragmentation statement. You specify no-fragmentation for the forwarding class assigned to queue 2, if queue 2 is configured, or for the assured-forwarding class otherwise. You then attach the fragmentation map configured in the preceding step to lsq-0/0/0, specifying unit 6 as the unit number of the multilink bundle for which interleave-fragments is configured.

Then, to roll back the configuration from lsq-0/0/0 to ls-0/0/0, you rename all occurrences in the configuration from lsq-0/0/0 to ls-0/0/0. You delete the fragmentation map if it is configured under the [edit class-of-service] hierarchy and delete the fragmentation map if it is assigned to lsq-0/0/0. You delete multilink-max-classes if it is configured for lsq-0/0/0 under the [edit interfaces] hierarchy, and you delete link-layer-overhead if it is configured for lsq-0/0/0 under the [edit interfaces] hierarchy.

If the no-fragmentation statement is configured on any forwarding class and the fragmentation map is assigned to lsq-0/0/0, then you configure interleave-fragments for the ls-0/0/0 interface. Finally, you configure the classifier for LFI packets to refer to queue 2. (The ls-0/0/0 interface treats queue 2 as the LFI queue.)

Configuration

CLI Quick Configuration

To quickly upgrade from ls-0/0/0 to lsq-0/0/0 (or reverse the change), copy the following commands and paste them into the CLI:
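
The exact commands depend on where ls-0/0/0 appears in your configuration. The following is a sketch of the upgrade direction only, using the details from the Overview (unit 6 and a no-fragmentation entry for the queue 2 or assured-forwarding class); the map name fm1 is hypothetical, and every remaining reference to ls-0/0/0 (for example, bundle statements on constituent links) must also be renamed:

rename interfaces ls-0/0/0 to lsq-0/0/0
rename class-of-service interfaces ls-0/0/0 to lsq-0/0/0
set class-of-service fragmentation-maps fm1 forwarding-class assured-forwarding no-fragmentation
set class-of-service interfaces lsq-0/0/0 unit 6 fragmentation-map fm1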

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration Mode.

To upgrade from ls-0/0/0 to lsq-0/0/0 or to reverse that change:

  1. Rename all the occurrences of ls-0/0/0 in the configuration.
  2. Configure the fragmentation map.
  3. Specify the unit number of the multilink bundle.
  4. Roll back the configuration for all occurrences in the configuration.
  5. Delete the fragmentation map under the [edit class-of-service] hierarchy.
  6. Delete the fragmentation map if it is assigned to the lsq-0/0/0 interface.
  7. Delete multilink-max-classes if it is configured for lsq-0/0/0.

    Note

    Multilink-max-classes is not supported and is most likely not configured.

  8. Delete link-layer-overhead if it is configured for lsq-0/0/0.
  9. Delete link-layer-overhead if it is configured for lsq-0/0/0:0.
  10. Configure interleave fragments for the ls-0/0/0 interface.

Results

From configuration mode, confirm your configuration by entering the show class-of-service command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.

If you are done configuring the device, enter commit from configuration mode.

Verification

Confirm that the configuration is working properly.

Purpose

Verify that the link services internal interface changed from ls-0/0/0 to lsq-0/0/0.

Action

From operational mode, enter the show class-of-service command.

To solve configuration problems on a link services interface:

Problem

Description: You are configuring a multilink bundle, but you also have traffic without MLPPP encapsulation passing through constituent links of the multilink bundle. Do you apply all CoS components to the constituent links, or is applying them to the multilink bundle enough?

Solution

You can apply a scheduler map to the multilink bundle and its constituent links. Although you can apply several CoS components with the scheduler map, configure only the ones that are required. We recommend that you keep the configuration on the constituent links simple to avoid unnecessary delay in transmission.

Table 2 shows the CoS components to be applied on a multilink bundle and its constituent links.

Table 2: CoS Components Applied on Multilink Bundles and Constituent Links

CoS Component

Multilink Bundle

Constituent Links

Explanation

Classifier

Yes

No

CoS classification takes place on the incoming side of the interface, not on the transmitting side, so no classifiers are needed on constituent links.

Forwarding class

Yes

No

Forwarding class is associated with a queue, and the queue is applied to the interface by a scheduler map. The queue assignment is predetermined on the constituent links. All packets from Q2 of the multilink bundle are assigned to Q2 of the constituent link, and packets from all the other queues are queued to Q0 of the constituent link.

Scheduler map

Yes

Yes

Apply scheduler maps on the multilink bundle and the constituent link as follows:

  • Transmit rate—Make sure that the relative order of the transmit rate configured on Q0 and Q2 is the same on the constituent links as on the multilink bundle.

  • Scheduler priority—Make sure that the relative order of the scheduler priority configured on Q0 and Q2 is the same on the constituent links as on the multilink bundle.

  • Buffer size—Because all non-LFI packets from the multilink bundle transit on Q0 of the constituent links, make sure that the buffer size on Q0 of the constituent links is large enough.

  • RED drop profile—Configure a RED drop profile on the multilink bundle only. Configuring the RED drop profile on the constituent links applies a back pressure mechanism that changes the buffer size and introduces variation. Because this behavior might cause fragment drops on the constituent links, make sure to leave the RED drop profile at the default settings on the constituent links.

Shaping rate for a per-unit scheduler or an interface-level scheduler

No

Yes

Because per-unit scheduling is applied only at the end point, apply this shaping rate to the constituent links only. Any configuration applied earlier is overwritten by the constituent link configuration.

Transmit-rate exact or queue-level shaping

Yes

No

The interface-level shaping applied on the constituent links overrides any shaping on the queue. Thus apply transmit-rate exact shaping on the multilink bundle only.

Rewrite rules

Yes

No

Rewrite bits are copied from the packet into the fragments automatically during fragmentation. Thus what you configure on the multilink bundle is carried on the fragments to the constituent links.

Virtual channel group

Yes

No

Virtual channel groups are identified through firewall filter rules that are applied on packets only before the multilink bundle. Thus you do not need to apply the virtual channel group configuration to the constituent links.

See also

  • See the Junos OS Class of Service Configuration Guide for Security Devices

Problem

Description: To test jitter and latency, you send three streams of IP packets. All packets have the same IP precedence settings. After you configure LFI and CRTP, latency increases even over a noncongested link. How can you reduce jitter and latency?

Solution

To reduce jitter and latency, do the following:

  1. Make sure that you have configured a shaping rate on each constituent link.
  2. Make sure that you have not configured a shaping rate on the link services interface.
  3. Make sure that the configured shaping rate value is equal to the physical interface bandwidth (see the sketch after this list).
  4. If shaping rates are configured correctly, and jitter still persists, contact the Juniper Networks Technical Assistance Center (JTAC).
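
A minimal sketch of the per-link shaping-rate configuration follows, assuming the two serial constituent links from the earlier example (se-1/0/0 and se-1/0/1); the rate shown is illustrative only and should match each link's actual bandwidth:

set class-of-service interfaces se-1/0/0 unit 0 shaping-rate 1536000
set class-of-service interfaces se-1/0/1 unit 0 shaping-rate 1536000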


Determine If LFI and Load Balancing Are Working Correctly

Problem

Description: In this case, you have a single network that supports multiple services. The network transmits data and delay-sensitive voice traffic. After configuring MLPPP and LFI, you want to make sure that voice packets are transmitted across the network with very little delay and jitter. How can you find out whether voice packets are being treated as LFI packets and whether load balancing is performed correctly?

Solution

When LFI is enabled, data (non-LFI) packets are encapsulated with an MLPPP header and fragmented to packets of a specified size. The delay-sensitive, voice (LFI) packets are PPP-encapsulated and interleaved between data packet fragments. Queuing and load balancing are performed differently for LFI and non-LFI packets.

To verify that LFI is performed correctly, determine that packets are fragmented and encapsulated as configured. After you know whether a packet is treated as an LFI packet or a non-LFI packet, you can confirm whether the load balancing is performed correctly.

Solution Scenario—Suppose two Juniper Networks devices, R0 and R1, are connected by a multilink bundle lsq-0/0/0.0 that aggregates two serial links, se-1/0/0 and se-1/0/1. On R0 and R1, MLPPP and LFI are enabled on the link services interface and the fragmentation threshold is set to 128 bytes.

In this example, we used a packet generator to generate voice and data streams. You can use the packet capture feature to capture and analyze the packets on the incoming interface.

The following two data streams were sent on the multilink bundle:

  • 100 data packets of 200 bytes (larger than the fragmentation threshold)

  • 500 data packets of 60 bytes (smaller than the fragmentation threshold)

The following two voice streams were sent on the multilink bundle:

  • 100 voice packets of 200 bytes from source port 100

  • 300 voice packets of 200 bytes from source port 200

To confirm that LFI and load balancing are performed correctly:

Note

Only the significant portions of command output are displayed and described in this example.

  1. Verify packet fragmentation. From operational mode, enter the show interfaces lsq-0/0/0 command to check that large packets are fragmented correctly.
    user@R0> show interfaces lsq-0/0/0

    Meaning—The output shows a summary of packets transiting the device on the multilink bundle. Verify the following information on the multilink bundle:

    • The total number of transiting packets = 1000

    • The total number of transiting fragments = 1100

    • The number of data packets that were fragmented = 100

    The total number of packets sent (600 + 400) on the multilink bundle matches the number of transiting packets (1000), indicating that no packets were dropped.

    The number of transiting fragments exceeds the number of transiting packets by 100, indicating that 100 large data packets were correctly fragmented.

    Corrective Action—If the packets are not fragmented correctly, check your fragmentation threshold configuration. Packets smaller than the specified fragmentation threshold are not fragmented.

  2. Verify packet encapsulation. To find out whether a packet is treated as an LFI or non-LFI packet, determine its encapsulation type. LFI packets are PPP encapsulated, and non-LFI packets are encapsulated with both PPP and MLPPP. PPP and MLPPP encapsulations have different overheads resulting in different-sized packets. You can compare packet sizes to determine the encapsulation type.

    A small unfragmented data packet contains a PPP header and a single MLPPP header. In a large fragmented data packet, the first fragment contains a PPP header and an MLPPP header, but the consecutive fragments contain only an MLPPP header.

    PPP and MLPPP encapsulations add the following number of bytes to a packet:

    • PPP encapsulation adds 7 bytes:

      4 bytes of header + 2 bytes of frame check sequence (FCS) + 1 byte that is idle or contains a flag

    • MLPPP encapsulation adds between 6 and 8 bytes:

      4 bytes of PPP header + 2 to 4 bytes of multilink header

    Figure 2 shows the overhead added to PPP and MLPPP headers.

    Figure 2: PPP and MLPPP Headers

    For CRTP packets, the encapsulation overhead and packet size are even smaller than for an LFI packet. For more information, see Example: Configuring the Compressed Real-Time Transport Protocol.

    Table 3 shows the encapsulation overhead for a data packet and a voice packet of 70 bytes each. After encapsulation, the size of the data packet is larger than the size of the voice packet.

    Table 3: PPP and MLPPP Encapsulation Overhead

    Packet Type

    Encapsulation

    Initial Packet Size

    Encapsulation Overhead

    Packet Size after Encapsulation

    Voice packet (LFI)

    PPP

    70 bytes

    4 + 2 + 1 = 7 bytes

    77 bytes

    Data fragment (non-LFI) with short sequence

    MLPPP

    70 bytes

    4 + 2 + 1 + 4 + 2 = 13 bytes

    83 bytes

    Data fragment (non-LFI) with long sequence

    MLPPP

    70 bytes

    4 + 2 + 1 + 4 + 4 = 15 bytes

    85 bytes

    From operational mode, enter the show interfaces queue command to display the size of the transmitted packets on each queue. Divide the number of bytes transmitted by the number of packets to obtain the packet size and determine the encapsulation type.
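
    For example (hypothetical queue counters, using the 70-byte packets from Table 3): if a queue reports 38,500 transmitted bytes for 500 transmitted packets, the average packet size is 38500 / 500 = 77 bytes, which matches PPP (LFI) encapsulation; an average of 83 or 85 bytes would instead indicate MLPPP-encapsulated fragments.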

  3. Verify load balancing. From operational mode, enter the show interfaces queue command on the multilink bundle and its constituent links to confirm whether load balancing is performed correctly on the packets.
    user@R0> show interfaces queue lsq-0/0/0
    user@R0> show interfaces queue se-1/0/0
    user@R0> show interfaces queue se-1/0/1

    Meaning—The output from these commands shows the packets transmitted and queued on each queue of the link services interface and its constituent links. Table 4 shows a summary of these values. (Because the number of transmitted packets equaled the number of queued packets on all the links, this table shows only the queued packets.)

    Table 4: Number of Packets Transmitted on a Queue

    Packets Queued

    Bundle lsq-0/0/0.0

    Constituent Link se-1/0/0

    Constituent Link se-1/0/1

    Explanation

    Packets on Q0

    600

    350

    350

    The total number of packets transiting the constituent links (350 + 350 = 700) exceeded the number of packets queued (600) on the multilink bundle, because the 100 large data packets were each fragmented into two fragments (500 small packets + 200 fragments = 700).

    Packets on Q2

    400

    100

    300

    The total number of packets transiting the constituent links equaled the number of packets on the bundle.

    Packets on Q3

    0

    19

    18

    The packets transiting Q3 of the constituent links are for keepalive messages exchanged between constituent links. Thus no packets were counted on Q3 of the bundle.

    On the multilink bundle, verify the following:

    • The number of packets queued matches the number transmitted. If the numbers match, no packets were dropped. If more packets were queued than were transmitted, packets were dropped because the buffer was too small. The buffer size on the constituent links controls congestion at the output stage. To correct this problem, increase the buffer size on the constituent links.

    • The number of packets transiting Q0 (600) matches the number of large and small data packets received (100+500) on the multilink bundle. If the numbers match, all data packets correctly transited Q0.

    • The number of packets transiting Q2 on the multilink bundle (400) matches the number of voice packets received on the multilink bundle. If the numbers match, all voice LFI packets correctly transited Q2.

    On the constituent links, verify the following:

    • The total number of packets transiting Q0 (350+350) matches the number of data packets and data fragments (500+200). If the numbers match, all the data packets after fragmentation correctly transited Q0 of the constituent links.

      Packets transited both constituent links, indicating that load balancing was correctly performed on non-LFI packets.

    • The total number of packets transiting Q2 (300+100) on constituent links matches the number of voice packets received (400) on the multilink bundle. If the numbers match, all voice LFI packets correctly transited Q2.

      LFI packets from source port 100 transited se-1/0/0, and LFI packets from source port 200 transited se-1/0/1. Thus all LFI (Q2) packets were hashed based on the source port and correctly transited both constituent links.

    Corrective Action—If the packets transited only one link, take the following steps to resolve the problem:

    1. Determine whether the physical link is up (operational) or down (unavailable). An unavailable link indicates a problem with the PIM, interface port, or physical connection (link-layer errors). If the link is operational, move to the next step.
    2. Verify that the classifiers are correctly defined for non-LFI packets. Make sure that non-LFI packets are not configured to be queued to Q2. All packets queued to Q2 are treated as LFI packets.
    3. Verify that at least one of the following values is different in the LFI packets: source address, destination address, IP protocol, source port, or destination port. If the same values are configured for all LFI packets, the packets are all hashed to the same flow and transit the same link.
  4. Use the results to verify load balancing.

Determine Why Packets Are Dropped on a PVC Between a Juniper Networks Device and a Third-Party Device

Problem

Description: You are configuring a permanent virtual circuit (PVC) between T1, E1, T3, or E3 interfaces on a Juniper Networks device and a third-party device, and packets are being dropped and ping fails.

Solution

If the third-party device does not have the same FRF.12 support as the Juniper Networks device or supports FRF.12 in a different way, the Juniper Networks device interface on the PVC might discard a fragmented packet containing FRF.12 headers and count it as a "Policed Discard."

As a workaround, configure multilink bundles on both peers, and configure fragmentation thresholds on the multilink bundles.