Configuring MLPPP

Understanding MLPPP

Multilink Point-to-Point Protocol (MLPPP) enables you to bundle multiple PPP links into a single multilink bundle. Multilink bundles provide additional bandwidth, load balancing, and redundancy by aggregating low-speed links, such as T1 and E1 links.

You configure multilink bundles as logical units or channels on the link services (lsq) interface, for example, lsq-0/0/0.0 and lsq-0/0/0.1. After creating a multilink bundle, you add constituent links to it. The constituent links are the low-speed physical links that are to be aggregated.

The following rules apply when you add constituent links to a multilink bundle:

  • On each multilink bundle, add only interfaces of the same type. For example, you can add either T1 or E1, but not both.

  • Only interfaces with a PPP encapsulation can be added to an MLPPP bundle.

  • If an interface is a member of an existing bundle and you add it to a new bundle, the interface is automatically deleted from the existing bundle and added to the new bundle.

With MLPPP bundles, you can use PPP Challenge Handshake Authentication Protocol (CHAP) and Password Authentication Protocol (PAP) for secure transmission over the PPP interfaces. For more information, see Configuring the PPP Challenge Handshake Authentication Protocol and Configuring the PPP Password Authentication Protocol.
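As an illustration only, the following is a minimal sketch of enabling CHAP on an MLPPP bundle, assuming CHAP is applied at the bundle's logical unit. The access profile name (chap-profile), client name, local name, bundle name, and secret are hypothetical placeholders, not values from this document:

    access {
        profile chap-profile {
            /* CHAP secret expected from the hypothetical remote peer ce-router */
            client ce-router chap-secret "example-secret";
        }
    }
    interfaces {
        lsq-0/0/0 {
            unit 0 {
                encapsulation multilink-ppp;
                ppp-options {
                    chap {
                        access-profile chap-profile;
                        local-name "pe-router";
                    }
                }
            }
        }
    }

PAP is configured in a similar way with the pap statement under ppp-options.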

MLPPP Support on ACX Series Routers

ACX Series routers support MLPPP encapsulation. MLPPP is supported on ACX1000, ACX2000, and ACX2100 routers, and on ACX4000 routers with the Channelized OC3/STM1 (Multi-Rate) MIC with SFP and the 16-port Channelized E1/T1 Circuit Emulation MIC.

The following table shows the maximum number of multilink bundles you can create on ACX Series routers:

Table 1: Multilink Bundles Supported by ACX Series Routers

ACX Platform                        Maximum Bundles    Maximum Links    Maximum Links Per Bundle
ACX2000, ACX2100                    16                 16               16
ACX4000 (ACX-MIC-16CHE1-T1-CE)      16                 16               16
ACX4000 (ACX-MIC-4COC3-1COC12CE)    50                 336              16
ACX1000                             8                  8                8

Guidelines for Configuring MLPPP With LSQ Interfaces on ACX Series Routers

You can configure MLPPP bundle interfaces with T1/E1 member links. The traffic transmitted over the MLPPP bundle interface is spread over the member links in a round-robin manner. If the packet size exceeds the fragmentation size configured on the MLPPP interface, the packet is fragmented. The fragments are also sent over the member links in a round-robin pattern. PPP control packets received on the interface are terminated on the router. The fragmentation size is configured at the MLPPP bundle level and is applied to all packets on the bundle, regardless of the multilink class.

Multiclass MLPPP segregates the multilink protocol packets into multiple classes. ACX routers support up to four classes, and one queue is associated with each class. Packets can be classified into one of the classes and take the queue associated with that class. Packets inside a queue are served in first-in, first-out (FIFO) order.

Multiclass MLPPP is required to provide preferential treatment to high-priority, delay-sensitive traffic. Delay-sensitive, smaller real-time frames are classified so that they end up in a higher-priority queue. If a higher-priority packet is enqueued while a lower-priority packet is being fragmented, fragmentation of the lower-priority packet is suspended, the higher-priority packet is fragmented and enqueued for transmission, and then fragmentation of the lower-priority packet resumes.

Traditional LSQ interfaces (anchored on PICs) are supported for combining T1/E1 interfaces into an MLPPP bundle interface. Inline services (si-) interfaces and inline LSQ interfaces are not supported in MLPPP bundles. On ACX routers, MLPPP bundling is performed on the TDM MICs, and the traditional LSQ model is the most effective mechanism. You can configure channelized OC interfaces (t1-x/y/z:n:m, e1-x/y/z:n) as members of an MLPPP bundle interface. A maximum of 16 member links per bundle is supported. The MPLS, ISO, and inet address families are supported; the ISO address family is supported only for IS-IS. You can configure MLPPP bundles on the network-to-network interface (NNI) direction of an Ethernet pseudowire. Interleaving using multiclass MLPPP is supported.

Keep the following points in mind when you configure MLPPP bundles on ACX routers:

  • The physical links must be of the same type and bandwidth.

  • Round-robin packet distribution is performed over the member links.

  • To add a T1 or E1 member link to the MLPPP bundle as a link services LSQ interface, include the bundle statement at the [edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp] hierarchy level, as shown in the configuration sketch that follows this list.

  • To configure the link services LSQ interface properties, include the appropriate statements at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level (see the configuration sketch that follows this list).

    You can configure the address family as MPLS for the LSQ interfaces in an MLPPP bundle.

  • PPP control protocol support depends on the processing of the PPP application for MLPPP bundle interfaces. The IPv4 Internet Protocol Control Protocol (IPCP), PPP Challenge Handshake Authentication Protocol (CHAP), and Password Authentication Protocol (PAP) applications are supported for PPP.

  • Drop timeout configuration is not applicable to ACX routers.

  • The member links across MICs cannot be bundled. Only physical interfaces on the same MIC can be bundled.

  • Fractional T1 and E1 interfaces are not supported; full T1 and E1 interfaces must be used, and selective time slots of a T1/E1 cannot be used. CoS is supported only for full T1 and E1 interfaces.

  • Detailed statistics displayed depend on the parameters supported by the hardware. The counters that are supported by the hardware are displayed with appropriate values in the output of the show interfaces lsq-fpc/pic/port detail command.

    In the output of this command, fields displayed with a value of 0 denote parameters that are not supported for computation by ACX routers. In the lsq- interface statistics, non-fragment statistics of the bundle are not counted separately; non-fragments are typically treated as single-fragment frames and counted in the fragment statistics.

  • To modify the frame check sequence (FCS) in the T1 options or E1 options of an MLPPP bundle member link, you must first remove the member link from the bundle by deactivating the link or unconfiguring it as a bundle member, modify the FCS, and then add the link back to the bundle. If you are configuring the FCS for the first time on a member link, specify the value before the link is added to the bundle.
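The following is a minimal configuration sketch for the two points above about adding member links and configuring the LSQ bundle properties. The member links (t1-0/0/1, t1-0/0/2), the bundle (lsq-0/0/0.0), and the address are hypothetical examples:

    interfaces {
        /* Constituent T1 links joined to the bundle */
        t1-0/0/1 {
            unit 0 {
                family mlppp {
                    bundle lsq-0/0/0.0;
                }
            }
        }
        t1-0/0/2 {
            unit 0 {
                family mlppp {
                    bundle lsq-0/0/0.0;
                }
            }
        }
        /* The MLPPP bundle as a link services LSQ logical unit */
        lsq-0/0/0 {
            unit 0 {
                encapsulation multilink-ppp;
                fragment-threshold 128;
                family inet {
                    address 10.1.1.1/30;
                }
            }
        }
    }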

The following MLPPP functionalities are not supported on ACX Series routers:

  • Member links across MICs.

  • Fragmentation per class (only configurable at bundle level).

  • IPv6 address family.

  • Header compression (no address and control field compression [ACFC] or protocol field compression [PFC]).

  • Prefix elision as defined in RFC 2686, The Multi-Class Extension to Multi-Link PPP.

  • Link fragmentation and interleaving (LFI). A functionality that resembles LFI can be achieved using multiclass MLPPP (RFC 2686), which interleaves high-priority packets between lower-priority packets. This methodology ensures that delay-sensitive packets are sent as soon as they arrive. Whereas LFI-classified packets are sent to a specific member link as plain PPP packets, the ACX implementation of interleaving uses multilink PPP (also referred to as PPP Multilink, MLP, and MP) headers, and the fragments are sent on all member links in a round-robin manner.

  • PPP over MLPPP bundle interfaces.

Example: Configuring an MLPPP Bundle on ACX Series

Requirements

This example requires an ACX Series router.

Overview

The following is a sample configuration of an MLPPP bundle on ACX Series routers.

Configuration

CLI Quick Configuration
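The following is a minimal sketch of such a quick configuration in set-command form; the member links (t1-0/0/1, t1-0/0/2), the bundle (lsq-0/0/0.0), and the address are hypothetical:

    set interfaces t1-0/0/1 unit 0 family mlppp bundle lsq-0/0/0.0
    set interfaces t1-0/0/2 unit 0 family mlppp bundle lsq-0/0/0.0
    set interfaces lsq-0/0/0 unit 0 encapsulation multilink-ppp
    set interfaces lsq-0/0/0 unit 0 fragment-threshold 128
    set interfaces lsq-0/0/0 unit 0 family inet address 10.1.1.1/30

After you commit the configuration, you can verify the bundle with the show interfaces lsq-0/0/0.0 detail command.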

Procedure

Step-by-Step Procedure

Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using MLPPP on ACX Series

LSQ interfaces support both T1 and E1 physical interfaces. These instructions apply to T1 interfaces, but the configuration for E1 interfaces is similar.

To configure an NxT1 bundle using MLPPP, you aggregate N different T1 links into a bundle. The NxT1 bundle is called a logical interface because it can represent, for example, a routing adjacency. To aggregate T1 links into an MLPPP bundle, include the bundle statement at the [edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp] hierarchy level:
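For example, a sketch of the bundle statement on two hypothetical constituent T1 links (t1-1/0/0 and t1-1/0/1) joining the bundle lsq-0/0/0.0:

    [edit interfaces]
    t1-1/0/0 {
        unit 0 {
            family mlppp {
                bundle lsq-0/0/0.0;
            }
        }
    }
    t1-1/0/1 {
        unit 0 {
            family mlppp {
                bundle lsq-0/0/0.0;
            }
        }
    }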

To configure the LSQ interface properties, include the following statements at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level:
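A sketch of representative LSQ bundle properties, assuming the hypothetical bundle lsq-0/0/0 unit 0; the specific values are examples only:

    [edit interfaces]
    lsq-0/0/0 {
        unit 0 {
            encapsulation multilink-ppp;
            fragment-threshold 128;
            mrru 2000;
            minimum-links 2;
            family inet {
                address 10.1.1.1/30;
            }
        }
    }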

Note:

ACX Series routers do not support drop-timeout and link-layer-overhead properties.

The logical link services IQ interface represents the MLPPP bundle. For the MLPPP bundle, there are four associated queues on M Series routers and eight associated queues on M320 and T Series routers. A scheduler removes packets from the queues according to a scheduling policy. Typically, you designate one queue to have strict priority, and the remaining queues are serviced in proportion to weights you configure.

For MLPPP, assign a single scheduler map to the link services IQ interface (lsq) and to each constituent link. The default schedulers for M Series and T Series routers, which assign 95, 0, 0, and 5 percent bandwidth for the transmission rate and buffer size of queues 0, 1, 2, and 3, are not adequate when you configure LFI or multiclass traffic. Therefore, for MLPPP, you should configure a single scheduler with nonzero percent transmission rates and buffer sizes for queues 0 through 3, and assign this scheduler to the link services IQ interface (lsq) and to each constituent link.

Note:

For M320 and T Series routers, the default scheduler transmission rate and buffer size percentages for queues 0 through 7 are 95, 0, 0, 5, 0, 0, 0, and 0 percent.

If the bundle has more than one link, you must include the per-unit-scheduler statement at the [edit interfaces lsq-fpc/pic/port] hierarchy level:
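For example, assuming the hypothetical bundle lsq-0/0/0:

    [edit interfaces]
    lsq-0/0/0 {
        per-unit-scheduler;
    }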

To configure and apply the scheduling policy, include the following statements at the [edit class-of-service] hierarchy level:
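The following is a minimal sketch of a scheduling policy; the scheduler names and percentages are hypothetical and are chosen so that queues 0 through 3 (best-effort, expedited-forwarding, assured-forwarding, and network-control by default) all receive nonzero transmission rates and buffer sizes. The scheduler map is applied to the hypothetical bundle lsq-0/0/0 unit 0 and to its constituent links:

    [edit class-of-service]
    schedulers {
        s-be {
            transmit-rate percent 45;
            buffer-size percent 45;
        }
        s-ef {
            transmit-rate percent 30;
            buffer-size percent 30;
            priority high;
        }
        s-af {
            transmit-rate percent 15;
            buffer-size percent 15;
        }
        s-nc {
            transmit-rate percent 10;
            buffer-size percent 10;
        }
    }
    scheduler-maps {
        sm-mlppp {
            forwarding-class best-effort scheduler s-be;
            forwarding-class expedited-forwarding scheduler s-ef;
            forwarding-class assured-forwarding scheduler s-af;
            forwarding-class network-control scheduler s-nc;
        }
    }
    interfaces {
        lsq-0/0/0 {
            unit 0 {
                scheduler-map sm-mlppp;
            }
        }
        t1-1/0/0 {
            scheduler-map sm-mlppp;
        }
        t1-1/0/1 {
            scheduler-map sm-mlppp;
        }
    }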

For link services IQ interfaces, a strict-high-priority queue might starve the other three queues because traffic in a strict-high priority queue is transmitted before any other queue is serviced. This implementation is unlike the standard Junos CoS implementation in which a strict-high-priority queue does round-robin with high-priority queues, as described in the Junos OS Class of Service User Guide for Routing Devices.

After the scheduler removes a packet from a queue, a certain action is taken. The action depends on whether the packet came from a multilink encapsulated queue (fragmented and sequenced) or a nonencapsulated queue (hashed with no fragmentation). Each queue can be designated as either multilink encapsulated or nonencapsulated, independently of the other. By default, traffic in all forwarding classes is multilink encapsulated. To configure packet fragmentation handling on a queue, include the fragmentation-maps statement at the [edit class-of-service] hierarchy level:
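A sketch of a fragmentation map, applied to the hypothetical bundle lsq-0/0/0 unit 0; the map name, class assignments, and thresholds are illustrative only (on ACX Series routers, the per-class fragment threshold is not used, as noted in the ACX guidelines above):

    [edit class-of-service]
    fragmentation-maps {
        fm-mlppp {
            forwarding-class expedited-forwarding {
                fragment-threshold 128;
                multilink-class 0;
            }
            forwarding-class best-effort {
                fragment-threshold 512;
                multilink-class 3;
            }
        }
    }
    interfaces {
        lsq-0/0/0 {
            unit 0 {
                fragmentation-map fm-mlppp;
            }
        }
    }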

For NxT1 bundles using MLPPP, the byte-wise load balancing used in multilink-encapsulated queues is superior to the flow-wise load balancing used in nonencapsulated queues. All other considerations are equal. Therefore, we recommend that you configure all queues to be multilink encapsulated. You do this by including the fragment-threshold statement in the configuration. You use the multilink-class statement to map a forwarding class into a multiclass MLPPP. For more information about fragmentation maps, see Configuring CoS Fragmentation by Forwarding Class on LSQ Interfaces.

When a packet is removed from a multilink-encapsulated queue, the software gives the packet an MLPPP header. The MLPPP header contains a sequence number field, which is filled with the next available sequence number from a counter. The software then places the packet on one of the N different T1 links. The link is chosen on a packet-by-packet basis to balance the load across the various T1 links.

If the packet exceeds the minimum link MTU, or if a queue has a fragment threshold configured at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level, the software splits the packet into two or more fragments, which are assigned consecutive multilink sequence numbers. The outgoing link for each fragment is selected independently of all other fragments.

If you do not include the fragment-threshold statement in the fragmentation map, the fragmentation threshold you set at the [edit interfaces interface-name unit logical-unit-number] hierarchy level is the default for all forwarding classes. If you do not set a maximum fragment size anywhere in the configuration, packets are fragmented if they exceed the smallest MTU of all the links in the bundle.

Even if you do not set a maximum fragment size anywhere in the configuration, you can configure the maximum received reconstructed unit (MRRU) by including the mrru statement at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level. The MRRU is similar to the MTU, but is specific to link services interfaces. By default the MRRU size is 1500 bytes, and you can configure it to be from 1500 through 4500 bytes. For more information, see Configuring MRRU on Multilink and Link Services Logical Interfaces.
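For example, a sketch setting a bundle-level fragmentation threshold and MRRU on the hypothetical bundle lsq-0/0/0 unit 0 (the values are illustrative):

    [edit interfaces]
    lsq-0/0/0 {
        unit 0 {
            fragment-threshold 256;
            mrru 2000;
        }
    }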

When a packet is removed from a nonencapsulated queue, it is transmitted with a plain PPP header. Because there is no MLPPP header, there is no sequence number information. Therefore, the software must take special measures to avoid packet reordering. To avoid packet reordering, the software places the packet on one of the N different T1 links. The link is determined by hashing the values in the header. For IP, the software computes the hash based on source address, destination address, and IP protocol. For MPLS, the software computes the hash based on up to five MPLS labels, or four MPLS labels and the IP header.

For UDP and TCP the software computes the hash based on the source and destination ports, as well as source and destination IP addresses. This guarantees that all packets belonging to the same TCP/UDP flow always pass through the same T1 link, and therefore cannot be reordered. However, it does not guarantee that the load on the various T1 links is balanced. If there are many flows, the load is usually balanced.

The N different T1 interfaces link to another router, which can be from Juniper Networks or another vendor. The router at the far end gathers packets from all the T1 links. If a packet has an MLPPP header, the sequence number field is used to put the packet back into sequence number order. If the packet has a plain PPP header, the software accepts the packet in the order in which it arrives and makes no attempt to reassemble or reorder the packet.

Example: Configuring an LSQ Interface as an NxT1 Bundle Using MLPPP

Understanding Multiclass MLPPP

Multiclass MLPPP makes it possible to have multiple classes of latency-sensitive traffic that are carried over a single multilink bundle with bulk traffic. In effect, multiclass MLPPP allows different classes of traffic to have different latency guarantees. With multiclass MLPPP, you can map each forwarding class into a separate multilink class, thus preserving priority and latency guarantees. Multiclass MLPPP is defined in RFC 2686, The Multi-Class Extension to Multi-Link PPP. You can only configure multiclass MLPPP for link services intelligent queuing (LSQ) interfaces (lsq-) with MLPPP encapsulation.

Multiclass MLPPP greatly simplifies packet ordering issues that occur when multiple links are used. Without multiclass MLPPP, all voice traffic belonging to a single flow is hashed to a single link to avoid packet ordering issues. With multiclass MLPPP, you can assign voice traffic to a high-priority class, and you can use multiple links. For more information about voice services support on LSQ interfaces, see Configuring Services Interfaces for Voice Services.

If you do not configure multiclass MLPPP, fragments from different classes cannot be interleaved. All fragments for a single packet must be sent before the fragments from another packet are sent. Nonfragmented packets can be interleaved between fragments of another packet to reduce latency seen by nonfragmented packets. In effect, latency-sensitive traffic is encapsulated as regular PPP traffic, and bulk traffic is encapsulated as multilink traffic. This model works as long as there is a single class of latency-sensitive traffic, and there is no high-priority traffic that takes precedence over latency-sensitive traffic.

This approach to link fragmentation interleaving (LFI), used on the Link Services PIC, supports only two levels of traffic priority, which is not sufficient to carry the four to eight forwarding classes that are supported by M Series and T Series routers. For more information about the Link Services PIC support of LFI, see Configuring Delay-Sensitive Packet Interleaving on Link Services Logical Interfaces.

Note:

ACX Series routers do not support LFI.

Configuring both LFI and multiclass MLPPP on the same bundle is not necessary, nor is it supported, because multiclass MLPPP represents a superset of functionality. When you configure multiclass MLPPP, LFI is automatically enabled.

Note:

The Junos OS implementation of multiclass MLPPP does not support compression of common header bytes, which is referred to in RFC 2686 as “prefix elision.”

Configuring Multiclass MLPPP on LSQ Interfaces

To configure multiclass MLPPP on an LSQ interface, you must specify how many multilink classes should be negotiated when a link joins the bundle, and you must specify the mapping of a forwarding class into a multiclass MLPPP class.

  1. To specify how many multilink classes should be negotiated when a link joins the bundle, include the multilink-max-classes statement; a combined configuration sketch follows this procedure.

    You can include this statement at the following hierarchy levels:

    • [edit interfaces interface-name unit logical-unit-number]

    • [edit logical-systems logical-system-name interfaces interface-name unit logical-unit-number]

    The number of multilink classes can be 1 through 8. The number of multilink classes for each forwarding class must not exceed the number of multilink classes to be negotiated.

    Note:

    In ACX Series routers, the multilink classes can be 1 through 4.

  2. To specify the mapping of a forwarding class into a multiclass MLPPP class, include the multilink-class statement at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level (see the sketch after this procedure).

    The multilink class index number can be 0 through 7. The multilink-class and no-fragmentation statements are mutually exclusive.

    Note:

    In ACX Series routers, the multilink class index number can be 0 through 3. ACX Series routers do not support the no-fragmentation statement in a fragmentation map.

  3. To view the number of multilink classes negotiated, issue the show interfaces lsq-fpc/pic/port.logical-unit-number detail command.
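The following sketch combines steps 1 and 2. The bundle (lsq-0/0/0 unit 0), fragmentation map name (fm-mcml), and class numbers are hypothetical examples within the ACX limits noted above:

    [edit interfaces]
    lsq-0/0/0 {
        unit 0 {
            encapsulation multilink-ppp;
            multilink-max-classes 4;
        }
    }

    [edit class-of-service]
    fragmentation-maps {
        fm-mcml {
            forwarding-class expedited-forwarding {
                multilink-class 0;
            }
            forwarding-class best-effort {
                multilink-class 3;
            }
        }
    }
    interfaces {
        lsq-0/0/0 {
            unit 0 {
                fragmentation-map fm-mcml;
            }
        }
    }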

Considerations for link services IQ (lsq) interfaces on ACX Series routers:

  • The maximum number of multilink classes that you can specify with the multilink-max-classes statement at the [edit interfaces interface-name unit logical-unit-number] hierarchy level (the number of classes negotiated when a link joins the bundle) is 4.

  • Fragmentation size is not specified under the fragmentation map; instead, the fragmentation size configured on the bundle is used.

  • Compressed Real-Time Transport Protocol (RTP) is not supported.

  • HDLC address and control field compression (ACFC) and PPP protocol field compression (PFC) are not supported.