
Inline Service Interfaces Configuration for PPPoE and LNS Subscribers

 

Enabling Inline Service Interfaces for PPPoE and LNS Subscribers

The inline service (si) interface is a virtual physical interface that resides on a lookup engine. The si interface, also referred to as an anchor interface, enables support for multilink PPP (MLPPP) bundles without requiring a special services PIC. MLPPP over the si interface is supported on MX Series routers.

Four inline service interfaces are configurable per MPC-occupied chassis slot. The following MPCs are supported:

  • MPC2-3D contains two lookup engines, each with two si interfaces.

  • MPC1-3D contains only one lookup engine, which hosts all four si interfaces.

You can configure the following inline service interfaces as anchor interfaces for MLPPP bundles: si-slot/0/0, si-slot/1/0, si-slot/2/0, and si-slot/3/0.

  • For MLPPP over PPPoE subscribers, family mlppp is supported on the pp0 member link logical interface, and the bundle is an si logical interface (a sketch follows this list).

  • For MLPPP over LNS subscribers, family mlppp is supported on the si member link logical interface, and the bundle is an si logical interface.
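To illustrate how a member link joins an si-anchored bundle, the following minimal static sketch applies family mlppp with the bundle statement on a pp0 logical interface. The unit number and the si-1/0/0.1 bundle interface are hypothetical, and subscriber deployments typically apply the equivalent statements through dynamic profiles rather than static configuration:

    [edit interfaces]

    user@host# show pp0
    unit 1 {
        family mlppp {
            bundle si-1/0/0.1;
        }
    }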

You enable inline services for PICs 0 through 3 individually by setting the inline-services statement at the [edit chassis] hierarchy level for the FPCs.

The following example shows how to enable inline services for PIC 0 in MPC slot 1 and PIC 1 in MPC slot 5, and how to set the bandwidth for tunnel traffic to 10 Gbps (10g). As a result, si-1/0/0 and si-5/1/0 are created for the specified PICs.

To enable inline service interfaces:

  1. Access an MPC-occupied slot and the PIC where the interface is to be enabled.

    [edit chassis]

    user@host# edit fpc slot-number pic number

  2. Enable the interface and specify the amount of bandwidth reserved on each lookup engine for tunnel traffic using inline services.

    [edit chassis fpc slot-number pic number]

    user@host# set inline-services bandwidth (1g | 10g)

The following sample output shows the bandwidth configuration for the lookup engine for tunnel traffic:
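The exact output varies by release; assuming the example values above (PIC 0 in slot 1, PIC 1 in slot 5, and 10g of bandwidth), the configuration would resemble this sketch:

    [edit chassis]

    user@host# show
    fpc 1 {
        pic 0 {
            inline-services {
                bandwidth 10g;
            }
        }
    }
    fpc 5 {
        pic 1 {
            inline-services {
                bandwidth 10g;
            }
        }
    }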

Configuring Inline Service Interfaces for PPPoE and LNS Subscribers


For existing Layer 2 and Layer 3 services, unit 0 of the si interface is used to store unilist next-hop information. Because the si interface implements the bundle functionality, you must reserve logical unit 0 and configure family inet on it for both PPPoE and LNS subscribers. The system ignores family inet6 on this unit.

The following example shows how to configure the inline service interfaces si-1/0/0 (PIC 0 in MPC slot 1) and si-5/1/0 (PIC 1 in MPC slot 5), and how to set unit 0 family inet on both.

To configure inline service interfaces:

  1. Access the service interface.

    [edit interfaces]

    user@host# edit si-slot/pic/port

  2. (Optional; for per-session shaping only) Enable the inline service interface for hierarchical schedulers and limit the number of scheduler levels to two.

    [edit interfaces si-slot/pic/port]

    user@host# set hierarchical-scheduler maximum-hierarchy-levels 2

  3. (Optional; for per-session shaping only) Configure services encapsulation for the inline service interface.

    [edit interfaces si-slot/pic/port]

    user@host# set encapsulation generic-services

  4. Configure the IPv4 family (inet) on the reserved unit 0 logical interface for PPPoE and LNS subscribers and for bundle functionality.

    [edit interfaces si-slot/pic/port]

    user@host# set unit 0 family inet

The following sample output shows the configuration for services encapsulation and IPv4 family (inet) for the two interfaces:
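Assuming the per-session shaping options from steps 2 and 3 are applied to both example interfaces, the resulting configuration would resemble the following sketch:

    [edit interfaces]

    user@host# show
    si-1/0/0 {
        hierarchical-scheduler {
            maximum-hierarchy-levels 2;
        }
        encapsulation generic-services;
        unit 0 {
            family inet;
        }
    }
    si-5/1/0 {
        hierarchical-scheduler {
            maximum-hierarchy-levels 2;
        }
        encapsulation generic-services;
        unit 0 {
            family inet;
        }
    }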

Configuring Service Device Pools for Load Balancing PPPoE and LNS Subscribers

With dynamic L2TP network server (LNS) configuration, you can replace the service-interface statement with the service-device-pool statement in the tunnel group to load-balance LNS subscribers. Optionally, you can use the service-device-pool statement to dynamically select the inline service (si) interface for both the bundle (for PPPoE or LNS subscribers) and the LNS member link.

Note

The service-device-pool configuration allows an interface to appear in more than one pool, which can result in overuse of the overlapping interfaces.

Before you begin, enable inline services on all FPC slots and PICs that the pools will reference. See Enabling Inline Service Interfaces for PPPoE and LNS Subscribers.

The following example shows how to configure two service device pools (pool1 and pool2) for inline services for load balancing the bundle and LNS member link.

To configure two service device pools:

  1. Create the tunnel group.

    [edit services l2tp]

    user@host# set tunnel-group name

  2. Define the service device pools and assign si interfaces to them for load balancing.

    [edit services]

    user@host# set service-device-pools pool pool-name interface interface-name

The following sample output shows that all referenced FPC slots and PICs are enabled for inline services:
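The exact output depends on which si interfaces the pools reference. As a sketch, assuming pool1 and pool2 reference the si-1/0/0 and si-5/1/0 interfaces from the earlier examples, the chassis configuration shows inline services enabled on both FPC slots:

    [edit chassis]

    user@host# show
    fpc 1 {
        pic 0 {
            inline-services {
                bandwidth 10g;
            }
        }
    }
    fpc 5 {
        pic 1 {
            inline-services {
                bandwidth 10g;
            }
        }
    }

A matching pool definition, with illustrative pool names and interface assignments, might look like this:

    [edit services]

    user@host# show service-device-pools
    pool pool1 {
        interface si-1/0/0;
    }
    pool pool2 {
        interface si-5/1/0;
    }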