L2TP LNS Inline Service Interfaces

Configuring an L2TP LNS with Inline Service Interfaces

The L2TP LNS feature license must be installed before you begin the configuration. Otherwise, a warning message is displayed when the configuration is committed.

To configure an L2TP LNS with inline service interfaces:

  1. (Optional) Configure a user group profile that defines the PPP configuration for tunnel subscribers.
  2. (Optional) Configure PPP attributes for subscribers on inline service interfaces.
  3. Configure inline IP reassembly.
  4. Configure an L2TP access profile that defines the L2TP parameters for each LNS client (LAC).
  5. (Optional) Configure a AAA access profile to override the access profile configured under the routing instance.
  6. Configure a pool of addresses to be dynamically assigned to tunneled PPP subscribers.
  7. Configure the peer interface to terminate the tunnel and the PPP server-side IPCP address.
  8. Enable inline service interfaces on an MPC.
  9. Configure a service interface.
  10. Configure options for each inline service logical interface.
  11. (Optional) Configure an aggregated inline service interface and 1:1 stateful redundancy.
  12. Configure the L2TP tunnel group.
  13. (Optional) Configure a dynamic profile that dynamically creates L2TP logical interfaces.
  14. (Optional) Configure a service interface pool for dynamic LNS sessions.
  15. (Optional) Specify how many times L2TP retransmits unacknowledged control messages.
  16. (Optional) Specify how long a tunnel can remain idle before being torn down.
  17. (Optional) Specify the L2TP receive window size for the L2TP tunnel. The receive window size specifies the number of packets a peer can send before waiting for an acknowledgment from the router.
  18. (Optional) Specify how long L2TP retains information about terminated dynamic tunnels, sessions, and destinations.
  19. (Optional) Configure the L2TP destination lockout timeout.
  20. (Optional) Configure L2TP tunnel switching.
  21. (Optional) Prevent the creation of new sessions, destinations, or tunnels for L2TP.
  22. (Optional) Configure whether the L2TP failover protocol is negotiated or the silent failover method is used for resynchronization.
  23. (Optional) Enable SNMP statistics counters.
  24. (Optional) Configure trace options for troubleshooting the configuration.

You also need to configure CoS for LNS sessions. For more information, see Configuring Dynamic CoS for an L2TP LNS Inline Service.

Applying PPP Attributes to L2TP LNS Subscribers per Inline Service Interface

You can configure PPP attributes that are applied by the LNS on the inline service (si) interface to the PPP subscribers tunneled from the LAC. Because you are configuring the attributes per interface rather than with a user group profile, the attributes for subscribers can be varied with a finer granularity. This configuration matches that used for terminated PPPoE subscribers.

To configure the PPP attributes for dynamically created si interfaces:

  1. Specify the predefined dynamic interface and logical interface variables in the dynamic profile.
  2. Configure the interval between PPP keepalive messages for the L2TP tunnel terminating on the LNS.
  3. Configure PPP authentication methods that apply to tunneled PPP subscribers at the LNS.
  4. Specify a set of AAA options used to authenticate and authorize tunneled PPP subscribers at the LNS; these subscribers log in by means of the subscriber and AAA contexts specified in the AAA options set.

    The option set is configured with the aaa-options aaa-options-name statement at the [edit access] hierarchy level.

  5. Configure the router to prompt Customer Premises Equipment (CPE) to negotiate both primary and secondary DNS addresses during IPCP negotiation for tunneled PPP subscribers at the LNS.
  6. (Optional) Disable validation of the PPP magic number during LCP negotiation and in LCP keepalive (echo-request/echo-reply) exchanges. This prevents the received magic number from being compared with the internally generated magic number, so that a mismatch does not cause session termination.
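
The following sketch shows roughly how these PPP attributes might appear in a dynamic profile; the profile name dyn-lns-profile2 is reused from the example later in this topic, and the 30-second keepalive interval is an illustrative value:

dynamic-profiles {
    dyn-lns-profile2 {
        interfaces {
            "$junos-interface-ifd-name" {
                unit "$junos-interface-unit" {
                    keepalives interval 30;
                    ppp-options {
                        chap;
                        pap;
                    }
                }
            }
        }
    }
}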

To configure the PPP attributes for statically created si interfaces:

  1. Specify the logical inline service interface.

  2. Configure the interval between PPP keepalive messages for the L2TP tunnel terminating on the LNS.

  3. Configure the number of keepalive packets a destination must fail to receive before the network takes down a link.

    Note:

    The keepalives up-count option is typically not used for subscriber management.

  4. Configure PPP authentication methods that apply to tunneled PPP subscribers at the LNS.

  5. Configure the router to prompt the Customer Premises Equipment (CPE) to negotiate both primary and secondary DNS addresses during IPCP negotiation for tunneled PPP subscribers at the LNS.
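
A minimal sketch of these PPP attributes on a statically created si logical interface; the interface si-5/0/0, unit number, and keepalive values are placeholders rather than values from this topic:

interfaces {
    si-5/0/0 {
        unit 1 {
            keepalives {
                interval 30;
                down-count 3;
            }
            ppp-options {
                chap;
                pap;
            }
        }
    }
}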

Best Practice:

Although all other statements subordinate to ppp-options—including those subordinate to chap and pap—are supported, they are typically not used for subscriber management. We recommend that you leave these other statements at their default values.

Note:

You can also configure PPP attributes with a user group profile that applies the attributes to all subscribers with that profile on a LAC client. See Applying PPP Attributes to L2TP LNS Subscribers with a User Group Profile for more information. When you configure the PPP attributes for L2TP LNS subscribers both on the si interface and in user group profiles, the inline service interface configuration takes precedence over the user group profile configuration.

Note:

When PPP options are configured in both a group profile and a dynamic profile, the dynamic profile configuration takes complete precedence over the group profile when the dynamic profile includes one or more of the PPP options that can be configured in the group profile. Complete precedence means that there is no merging of options between the profiles. The group profile is applied to the subscriber only when the dynamic profile does not include any PPP option available in the group profile.

Applying PPP Attributes to L2TP LNS Subscribers with a User Group Profile

You can configure a user group profile that enables the LNS to apply PPP attributes to the PPP subscribers tunneled from the LAC. The user group profile is associated with clients (LACs) in the L2TP access profile. Consequently all subscribers handled by a given client share the same PPP attributes.

To configure a user group profile:

  1. Create the profile.
  2. Configure the interval between PPP keepalive messages for the L2TP tunnel terminating on the LNS.
    Note:

    Changes to the keepalive interval in a user group profile affect only new L2TP sessions that come up after the change. Existing sessions are not affected.

  3. Configure PPP authentication methods that apply to tunneled PPP subscribers at the LNS.
  4. Specify a set of AAA options used to authenticate and authorize tunneled PPP subscribers at the LNS; these subscribers log in by means of the subscriber and AAA contexts specified in the AAA options set.

    The option set is configured with the aaa-options aaa-options-name statement at the [edit access] hierarchy level.

  5. Configure the router to prompt the Customer Premises Equipment (CPE) to negotiate both primary and secondary DNS addresses during IPCP negotiation for tunneled PPP subscribers at the LNS.
  6. (Optional) Disable the Packet Forwarding Engine from performing a validation check for PPP magic numbers received from a remote peer in LCP keepalive (Echo-Request/Echo-Reply) exchanges. This prevents PPP from terminating the session when the number does not match the value agreed upon during LCP negotiation. This capability is useful when the remote PPP peers include arbitrary magic numbers in the keepalive packets. Configuring this statement has no effect on LCP magic number negotiation or on the exchange of keepalives when the remote peer magic number is the expected negotiated number.
  7. Configure how long the PPP subscriber session can be idle before it is considered to have timed out.
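
For reference, a user group profile with these attributes might look similar to the following sketch, which reuses the ce-l2tp-group-profile name and the 30-second keepalive and 200-second idle-timeout values from the example later in this topic:

access {
    group-profile ce-l2tp-group-profile {
        ppp {
            keepalive 30;
            idle-timeout 200;
            ppp-options {
                chap;
                pap;
            }
        }
    }
}
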
Note:

You can also configure PPP attributes on a per-interface basis. See Applying PPP Attributes to L2TP LNS Subscribers per Inline Service Interface for more information. When you configure the PPP attributes for L2TP LNS subscribers both on the si interface and in user group profiles, the inline service interface configuration takes precedence over the user group profile configuration.

Note:

When PPP options are configured in both a group profile and a dynamic profile, the dynamic profile configuration takes complete precedence over the group profile when the dynamic profile includes one or more of the PPP options that can be configured in the group profile. Complete precedence means that there is no merging of options between the profiles. The group profile is applied to the subscriber only when the dynamic profile does not include any PPP option available in the group profile.

Configuring an L2TP Access Profile on the LNS

Access profiles define how to validate Layer 2 Tunneling Protocol (L2TP) connections and session requests. Within each L2TP access profile, you configure one or more clients (LACs). The client characteristics are used to authenticate LACs with matching passwords, and to establish attributes of the client tunnel and session. You can configure multiple access profiles and multiple clients within each profile.

To configure an L2TP access profile:

  1. Create the access profile.
  2. Configure characteristics for one or more clients (LACs).
    Note:

    Except for the special case of the default client, the LAC client name that you configure in the access profile must match the hostname of the LAC. In the case of a Juniper Networks router acting as the LAC, the hostname is configured in the LAC tunnel profile with the gateway gateway-name statement at the [edit access tunnel-profile profile-name tunnel tunnel-id source-gateway] hierarchy level. Alternatively, the client name can be returned from RADIUS in the Tunnel-Client-Auth-Id attribute [90].

    Note:

    Use default as the client name when you want to define a default tunnel client. The default client enables the authentication of multiple LACs with the same secret and L2TP attributes. This behavior is useful when, for example, many new LACs are added to the network, because it enables the LACs to be used without additional LNS profile configuration.

    Use default only on MX Series routers. The equivalent client name on M Series routers is *.

  3. (Optional) Specify a local access profile that overrides the global access profile and the tunnel group AAA access profile to configure RADIUS server settings for the client.
  4. Configure the LNS to renegotiate the link control protocol (LCP) with the PPP clients tunneled from the client (LAC).
  5. Configure one or more dynamic service profiles to apply services to all subscribers on the LAC. You can optionally pass parameters to the services in the same statement.
  6. Configure the maximum number of sessions allowed in a tunnel from the client (LAC).
  7. Configure the LNS to override result codes 4 and 5 with result code 2 in CDN messages it sends to the LAC when the number of L2TP sessions reaches the configured maximum value. Some third-party LACs cannot fail over to another LNS unless the result code has a value of 2.
  8. Configure the tunnel password used to authenticate the client (LAC).
  9. (Optional) Associate a group profile containing PPP attributes to apply for the PPP sessions being tunneled from this LAC client.
    Note:

    If the user-group-profile statement is modified or deleted, existing LNS subscribers that use this Layer 2 Tunneling Protocol client configuration are brought down.
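
A sketch of an L2TP access profile along these lines, reusing the profile name, client names, session limits, and group profile from the example later in this topic; the interface-id and shared-secret values shown here are placeholders:

access {
    profile ce-l2tp-profile {
        client lac1 {
            l2tp {
                interface-id lns-interface;
                lcp-renegotiation;
                maximum-sessions-per-tunnel 1000;
                shared-secret "lac1-secret";
            }
            user-group-profile ce-l2tp-group-profile;
        }
        client lac2 {
            l2tp {
                interface-id lns-interface;
                lcp-renegotiation;
                maximum-sessions-per-tunnel 4000;
                shared-secret "lac2-secret";
            }
            user-group-profile ce-l2tp-group-profile;
        }
    }
}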

Configuring a AAA Local Access Profile on the LNS

For some LNS tunnels, you might wish to override the access profile configured at the routing instance that hosts the tunnel with a particular RADIUS server configuration. You can configure a local access profile to do so. You can subsequently use the aaa-access-profile statement to apply the local access profile to a tunnel group or LAC client.

A local access profile applied to a client overrides a local access profile applied to a tunnel group, which in turn overrides the access profile for the routing instance.

To configure an AAA local access profile:

  1. Create the access profile.
  2. Configure the order of AAA authentication methods.
  3. Configure the RADIUS server attributes, such as the authentication password.
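
A minimal sketch of such a profile; the aaa-profile name comes from the example later in this topic, while the RADIUS server address and secret are placeholders:

access {
    profile aaa-profile {
        authentication-order radius;
        radius-server {
            192.0.2.10 secret "radius-secret";
        }
    }
}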

Configuring an Address-Assignment Pool for L2TP LNS with Inline Services

You can configure pools of addresses that can be dynamically assigned to the tunneled PPP subscribers. The pools must be local to the routing instance where the subscriber comes up. The configured pools are supplied in the RADIUS Framed-Pool and Framed-IPv6-Pool attributes. Pools are optional when Framed-IP-Address is sent by RADIUS.

To configure an address-assignment pool, you must specify the name of the pool and configure the addresses for the pool.

You can optionally configure multiple named ranges, or subsets, of addresses within an address-assignment pool. During dynamic address assignment, a client can be assigned an address from a specific named range. To create a named range, you specify a name for the range and define the address range.

Note:

Be sure to use the address-assignment pools (address-assignment) statement rather than the address pools (address-pool) statement.

For more information about address assignment pools, see Address-Assignment Pools Overview and Address-Assignment Pool Configuration Overview.

To configure an IPv4 address-assignment pool for L2TP LNS:

  1. Configure the name of the pool and specify the IPv4 family.
  2. Configure the network address and the prefix length of the addresses in the pool.
  3. Configure the name of the range and the lower and upper boundaries of the addresses in the range.

For example, to configure an IPv4 address-assignment pool:
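
(The following is a sketch of the general form; the jnpr_pool name comes from Table 2, and the network and range addresses are placeholders.)

access {
    address-assignment {
        pool jnpr_pool {
            family inet {
                network 192.168.0.0/16;
                range r1 {
                    low 192.168.1.1;
                    high 192.168.1.254;
                }
            }
        }
    }
}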

To configure an IPv6 address-assignment pool for L2TP LNS:

  1. Configure the name of the pool and specify the IPv6 family.

  2. Configure the IPv6 network prefix for the address pool. The prefix specification is required when you configure an IPv6 address-assignment pool.

  3. Configure the name of the range and define the range. You can define the range based on the lower and upper boundaries of the prefixes in the range, or based on the length of the prefixes in the range.

For example, to configure an IPv6 address-assignment pool:
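
(The following is a sketch of the general form; the jnpr_ipv6_pool name comes from Table 2, and the prefix and range values are placeholders.)

access {
    address-assignment {
        pool jnpr_ipv6_pool {
            family inet6 {
                prefix 2001:db8::/32;
                range r1 {
                    low 2001:db8:1::/48;
                    high 2001:db8:ff::/48;
                }
            }
        }
    }
}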

Configuring the L2TP LNS Peer Interface

The peer interface connects the LNS to the network toward the LACs so that IP packets can be exchanged between the tunnel endpoints. MPLS and aggregated Ethernet can also be used to reach the LACs.

Note:

On MX Series routers, you must configure the peer interface on an MPC.

To configure the LNS peer interface:

  1. Specify the interface name.
  2. Enable VLANs.
  3. Specify the logical interface, bind a VLAN tag ID to the interface, and configure the address family and the IP address for the logical interface.
    Note:

    The IPv6 address family is not supported as a tunnel endpoint.
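
A sketch of a possible peer interface configuration; the interface name ge-1/0/0 and the VLAN ID are placeholders, and the address reuses the local gateway address from the example later in this topic:

interfaces {
    ge-1/0/0 {
        vlan-tagging;
        unit 0 {
            vlan-id 100;
            family inet {
                address 203.0.113.2/24;
            }
        }
    }
}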

Enabling Inline Service Interfaces

The inline service interface is a virtual physical interface that resides on the Packet Forwarding Engine. This si interface, referred to as an anchor interface, makes it possible to provide L2TP services without a special services PIC. The inline service interface is supported only by MPCs on MX Series routers. Four inline service interfaces are configurable per MPC-occupied chassis slot.

Note:

On MX80 and MX104 routers, you can configure only four inline services physical interfaces as anchor interfaces for L2TP LNS sessions: si-1/0/0, si-1/1/0, si-1/2/0, and si-1/3/0. You cannot configure si-0/0/0 for this purpose on MX80 and MX104 routers.

Although the range of bandwidth values is 1 Gbps through 400 Gbps, you cannot configure the bandwidth in absolute numbers such as 12,345,878,000 bps. You must use the options available in the CLI statement:

  • 1g

  • 10g through 100g in 10 Gbps increments: 10g, 20g, 30g, 40g, 50g, 60g, 70g, 80g, 90g, 100g

  • 100g through 400g in 100 Gbps increments: 100g, 200g, 300g, 400g

The maximum bandwidth available varies among MPCs, as shown in Table 1. A system log message is generated when you configure a bandwidth higher than is supported on the MPC.

Table 1: Maximum Bandwidth for Inline Services per MPC

MPC                               Maximum Supported Bandwidth
MPC2E NG, MPC2E NG Q              80 Gbps
MPC3E NG, MPC3E NG Q              130 Gbps
100GE and 40GE MPC3 and MICs      40 Gbps
MPC4E                             130 Gbps
MPC5E                             130 Gbps
MPC6E                             130 Gbps
MPC7E                             240 Gbps
MPC8E                             240 Gbps (400 Gbps in 1.6 Tbps upgraded mode)
MPC9E                             400 Gbps

To enable inline service interfaces:

  1. Access an MPC-occupied slot and the PIC where the interface is to be enabled.
  2. Enable the interface and optionally specify the amount of bandwidth reserved on each Packet Forwarding Engine for tunnel traffic using inline services. Starting in Junos OS Release 16.2, you are not required to explicitly specify a bandwidth for L2TP LNS tunnel traffic using inline services. When you do not specify a bandwidth, the maximum bandwidth supported on the PIC is automatically available for the inline services; inline services can use up to this maximum value. In earlier releases, you must specify a bandwidth when you enable inline services with the inline-services statement.
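
For example, the following sketch reserves 10 Gbps on each of two Packet Forwarding Engines of the MPC in slot 5, matching the values used in the example later in this topic:

chassis {
    fpc 5 {
        pic 0 {
            inline-services {
                bandwidth 10g;
            }
        }
        pic 1 {
            inline-services {
                bandwidth 10g;
            }
        }
    }
}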

Configuring an Inline Service Interface for L2TP LNS

The inline service interface is a virtual physical service interface that resides on the Packet Forwarding Engine. This si interface, referred to as an anchor interface, makes it possible to provide L2TP services without a special services PIC. The inline service interface is supported only by MPCs on MX Series routers. Four inline service interfaces are configurable per MPC-occupied chassis slot.

You can maximize the number of sessions that can be shaped in one service interface by setting the maximum number of hierarchy levels to two. In this case, each LNS session consumes one L3 node in the scheduler hierarchy for shaping.

If you do not specify the number of levels (two is the only option), then the number of LNS sessions that can be shaped on the service interface is limited to the number of L2 nodes, or 4096 sessions. Additional sessions still come up, but they are not shaped.

To configure an inline service interface:

  1. Access the service interface.
  2. (Optional; for per-session shaping only) Enable the inline service interface for hierarchical schedulers and limit the number of scheduler levels to two.
  3. (Optional; for per-session shaping only) Configure services encapsulation for inline service interface.
  4. Configure the IPv4 family on the reserved unit 0 logical interface.
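
A sketch of an inline service interface configured for per-session shaping; the interface name si-5/0/0 corresponds to the slot used in the example later in this topic:

interfaces {
    si-5/0/0 {
        hierarchical-scheduler maximum-hierarchy-levels 2;
        encapsulation generic-services;
        unit 0 {
            family inet;
        }
    }
}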

Configuring Options for the LNS Inline Services Logical Interface

You must specify characteristics—dial-options—for each of the inline services logical interfaces that you configure for the LNS. LNS on MX Series routers supports only one session per logical interface, so you must configure it as a dedicated interface; the shared option is not supported. (LNS on M Series routers supports dedicated and shared options.) You also configure an identifying name for the logical interface that matches the name you specify in the access profile.

You must specify the inet address family for each static logical interface or in the dynamic profile for dynamic LNS interfaces. Although the CLI accepts either inet or inet6 for static logical interfaces, the subscriber cannot log in successfully unless the address family inet is configured.

Note:

For dynamic interface configuration, see Configuring a Dynamic Profile for Dynamic LNS Sessions.

To configure the static logical interface options:

  1. Access the inline services logical interface.
  2. Specify an identifier for the logical interface.
  3. Configure the logical interface to be used for only one session at a time.
  4. Configure the address family for each logical interface and enable the local address on the LNS that provides local termination for the L2TP tunnel to be derived from the specified interface name.
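
A sketch of the static logical interface options; the unit number and the l2tp-interface-id value (which must match the interface identifier specified in the access profile) are placeholders:

interfaces {
    si-5/0/0 {
        unit 1 {
            dial-options {
                l2tp-interface-id lns-interface;
                dedicated;
            }
            family inet {
                unnumbered-address lo0.0;
            }
        }
    }
}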

LNS 1:1 Stateful Redundancy Overview

By default, when an inline service (si) anchor interface goes down—for example, when the card hosting the interface fails or restarts—L2TP subscriber traffic is lost. When the PPP keepalive timer for the tunnel subsequently expires, the control plane goes down and the PPP client is disconnected. Consequently, the client must then reconnect.

You can avoid traffic loss in these circumstances by configuring an aggregated inline service interface (asi) bundle to provide 1:1 stateful redundancy, also called hot standby or active-backup redundancy. The bundle consists of a pair of si physical interfaces, the primary (active) member link and the secondary (standby or backup) member link. These interfaces must be configured on different MPCs; redundancy is not achievable if you configure the primary and secondary interface on the same MPC because both member interfaces go down if the card goes down.

When subscribers log in and 1:1 redundancy is configured, the L2TP session is established over an underlying virtual logical interface (asiX.0) on the asiX physical interface. Individual subscriber logical interfaces are created on the underlying interface in the format asiX.logical-unit-number. The session remains up in the event of a failure or a restart on the MPC hosting the primary member link interface. All the data traffic destined for this L2TP session automatically moves over to the secondary member link interface on the other MPC.

Configuring 1:1 LNS Stateful Redundancy on Aggregated Inline Service Interfaces

You can create an aggregated inline service interface (asi) bundle to provide 1:1 LNS stateful redundancy for inline service (si) anchor interfaces. The bundle pairs two interfaces that reside on different MPCs as primary and secondary links. LNS sessions are subsequently established over a virtual logical interface, asiX.logical-unit-number. LNS session failover occurs when either the primary anchor interface goes down or the card is restarted with the request chassis fpc restart command. When this happens, the secondary link—on a different MPC—becomes active and all the LNS data traffic destined for the session automatically moves over to the secondary interface. The subscriber session remains up on the asiX.logical-unit-number virtual interface. No traffic statistics are lost. When this redundancy is not configured, subscriber traffic is lost, the keepalives expire, and the PPP client is disconnected and must reconnect.

Before you begin, you must do the following:

Best Practice:

Follow these guidelines:

  • You must configure unit 0 family inet for each bundle; otherwise, the session fails to come up.

  • The primary (active) and secondary (backup) interfaces must be on different MPCs.

  • The bandwidth configured at the [edit chassis fpc slot pic number inline-services bandwidth] hierarchy level must be the same for both member links.

  • An si interface configured as a member of an aggregated inline service interface bundle cannot be configured as a member of another bundle group.

  • An si interface configured as a member of an aggregated inline service interface bundle cannot also be used for any function that is not related to aggregated services; for example, it cannot be used for inline IP reassembly.

  • When you configure an si interface as a member of an aggregated inline services bundle, you can no longer configure that si interface independently. You can configure only the parent bundle; the bundle’s configuration is applied immediately to all member interfaces.

To configure 1:1 LNS stateful redundancy:

  1. On one MPC, specify the primary (active) inline services member link in the bundle.
  2. Configure the amount of bandwidth reserved on this MPC for tunnel traffic using the primary inline service interface.
  3. On a different MPC, specify the secondary (backup) inline services member link in the bundle.
    Note:

    If you configure the active and backup member links on the same MPC, the subsequent commit of the configuration fails.

  4. Configure the amount of bandwidth reserved on this MPC for tunnel traffic using the secondary inline service interface.
  5. Assign the aggregated inline service interface bundle to an L2TP tunnel group by either of the following methods:
    • Assign a single bundle by specifying the name of the aggregated inline service physical interface.

    • Assign one or more pools of bundles to the tunnel group.

      Note:

      A pool can be mixed; that is, it can include both aggregated inline service interface bundles and individual inline service interfaces. The individual interfaces must not be members of existing bundles.

The following sample configuration creates bundle asi0 with member links on MPCs in slot 1 and slot 2, then assigns the bundle to provide redundancy for L2TP sessions on tunnel group tg1:
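
(The following is a sketch of such a configuration; the member interfaces, bandwidth value, and tunnel group details are representative placeholders.)

chassis {
    fpc 1 {
        pic 0 {
            inline-services {
                bandwidth 10g;
            }
        }
    }
    fpc 2 {
        pic 0 {
            inline-services {
                bandwidth 10g;
            }
        }
    }
}
interfaces {
    asi0 {
        aggregated-inline-services-options {
            primary-interface si-1/0/0;
            secondary-interface si-2/0/0;
        }
        unit 0 {
            family inet;
        }
    }
}
services {
    l2tp {
        tunnel-group tg1 {
            service-interface asi0;
        }
    }
}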

Verifying LNS Aggregated Inline Service Interface 1:1 Redundancy

Purpose

View information about aggregated inline service interface bundles, individual member links, and redundancy status.

Action

  • To view summary information about an aggregated inline service interface bundle:

  • To view detailed information about an aggregated inline service interface bundle:

  • To view information about an individual member interface in an aggregated inline service interface bundle:

  • To view redundancy status for aggregated inline service interface bundles:

    That sample output shows that both aggregated Ethernet and aggregated inline service interfaces are configured for redundancy. To display only one of the aggregated inline service interface bundles:

  • To view detailed information about all configured redundancy interfaces:
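
Operational commands along the following lines can be used for these tasks; the interface names are placeholders and exact command options may vary by release:

user@host> show interfaces asi0 terse
user@host> show interfaces asi0 extensive
user@host> show interfaces si-1/0/0 extensive
user@host> show interfaces redundancy
user@host> show interfaces redundancy asi0
user@host> show interfaces redundancy detail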

L2TP Session Limits and Load Balancing for Service Interfaces

The LNS load balances subscriber sessions across the available service interfaces in a device pool based on the number of sessions currently active on the interfaces. You can configure a maximum limit per service interface (si) and per aggregated service interface (asi). In the case of asi interfaces, you cannot configure a limit for the individual si member interfaces in the bundle.

Session Limits on Service Interfaces

When an L2TP session request is initiated for a service interface, the LNS checks the number of current active sessions on that interface against the maximum number of sessions allowed for the individual service interface or aggregated service interface. The LNS determines whether the current session count (displayed by the show services l2tp summary command) is less than the configured limit. When that is true or when no limit is configured, the check passes and the session can be established. If the current session count is equal to the configured limit, then the LNS rejects the session request. No subsequent requests can be accepted on that interface until the number of active requests drops below the configured maximum. When a session request is rejected for an si or asi interface, the LNS returns a CDN message with the result code set to 2 and the error code set to 4.

For example, suppose a single service interface is configured in the tunnel group. The current L2TP session count is 1500, with a configured limit of 2000 sessions. When a new session is requested, the limit check passes and the session request is accepted.

Interface    Configured Session Limit    Current Session Count    Session Limit Check Result
si-0/0/0     2000                        1500                     Pass

The limit check continues to pass and session requests are accepted until 500 requests have been accepted, making the current session count 2000, which matches the configured maximum. The session limit check fails for all subsequent requests and all requests are rejected until the current session count on the interface drops below 2000, so that the limit check can pass.

Interface    Configured Session Limit    Current Session Count    Session Limit Check Result
si-0/0/0     2000                        2000                     Fail

When the session limit is set to zero for an interface, no session requests can be accepted. If that is the only interface in the tunnel group, then all session requests in the group are rejected until the session limit is increased from zero or another service interface is added to the tunnel group.

When a service interface in a service device pool has reached the maximum configured limit or has a configured limit of zero, the LNS skips that interface when a session request is made and selects another interface in the pool for the session limit check. This continues until an interface passes the check and the session is accepted, or until no other interface remains in the pool to be selected.

Session Load Balancing Across Service Interfaces

The behavior for session load distribution in a service device pool changed in Junos OS Release 16.2. When a service interface has a lower session count than another interface in the pool and both interfaces are below their maximum session limit, subsequent sessions are distributed to the interface with fewer sessions.

In earlier releases, sessions are distributed in a strictly round-robin manner, regardless of session count. The old behavior can result in uneven session distribution when the Packet Forwarding Engine is rebooted or a service interface goes down and comes back up.

For example, consider the following scenario using the old round-robin distribution behavior for a pool with two service interfaces:

  1. Two hundred sessions are evenly distributed across the two service interfaces.

    • si-0/0/0 has 100 sessions.

    • si-1/0/0 has 100 sessions.

  2. The si-1/0/0 interface reboots. When it comes back, initially sessions are up only on si-0/0/0.

    • si-0/0/0 has 100 sessions.

    • si-1/0/0 has 0 sessions.

  3. As the sessions formerly on si-1/0/0 reconnect, they are distributed equally across both service interfaces. When all 100 sessions are back up, the distribution is significantly unbalanced.

    • si-0/0/0 has 150 sessions.

    • si-1/0/0 has 50 sessions.

  4. After 100 new sessions connect, si-0/0/0 reaches its maximum limit. Subsequent sessions are accepted only on si-1/0/0.

    • si-0/0/0 has 200 sessions.

    • si-1/0/0 has 100 sessions.

  5. After 100 more sessions connect, si-1/0/0 reaches its maximum limit. No more sessions can be accepted until the session count drops below 200 for one of the interfaces.

    • si-0/0/0 has 200 sessions.

    • si-1/0/0 has 200 sessions.

Now consider the same scenario using the current load distribution behavior based on the number of attached sessions. The device pool again has two service interfaces each with a configured maximum limit of 200 sessions:

  1. Two hundred sessions are evenly distributed across the two service interfaces.

    • si-0/0/0 has 100 sessions.

    • si-1/0/0 has 100 sessions.

  2. The si-1/0/0 interface reboots. When it comes back up, sessions are up initially only on si-0/0/0.

    • si-0/0/0 has 100 sessions.

    • si-1/0/0 has 0 sessions.

  3. As the sessions formerly on si-1/0/0 reconnect, they are distributed according to the session load on each interface. Because both interfaces are below their maximum limit, and si-1/0/0 has fewer sessions than si-0/0/0, sessions are initially distributed only to si-1/0/0.

    1. After 1 new session:

      • si-0/0/0 has 100 sessions.

      • si-1/0/0 has 1 session.

    2. After 10 new sessions:

      • si-0/0/0 has 100 sessions.

      • si-1/0/0 has 10 sessions.

    3. After 100 new sessions:

      • si-0/0/0 has 100 sessions.

      • si-1/0/0 has 100 sessions.

  4. Because both interfaces now have the same session count, the next session (#101) is distributed randomly between the two interfaces. The next session after that (#102) goes to the interface with the lower session count. That makes the interfaces equal again, so the next session (#103) is randomly distributed. This pattern repeats until both interfaces reach the maximum limit of 200 sessions.

    • si-0/0/0 has 200 sessions.

    • si-1/0/0 has 200 sessions.

    No more sessions can be accepted on either interface until the number of sessions drops below 200 on one of the interfaces.

The load balancing behavior is the same for aggregated service interfaces. An asi interface is selected from a pool based on the current session count for the asi interface. When that count is less than the maximum, the LNS checks the current session count for the active si interface in the asi bundle. When that count is less than the maximum, the session can be established on the asi interface.

In a mixed device pool that has both service interfaces and aggregated service interfaces, sessions are distributed to the interface, either asi or si, that has the lowest session count. When the session count of an interface of either type reaches its limit, it can no longer accept sessions until the count drops below the maximum.

You can use the session limit configuration to achieve a session limit on a particular Packet Forwarding Engine. Suppose you want a limit of 100 sessions on PFE0, which has two service interfaces. You can set the maximum limit on each interface to 50, or use any other combination that adds up to 100, to establish the PFE0 limit.

Example: Configuring an L2TP LNS

This example shows how you can configure an L2TP LNS on an MX Series router to provide tunnel endpoints for an L2TP LAC in your network. This configuration includes a dynamic profile for dual-stack subscribers.

Requirements

This L2TP LNS example requires the following hardware and software:

  • MX Series 5G Universal Routing Platform

  • One or more MPCs

  • Junos OS Release 11.4 or later

No special configuration beyond device initialization is required before you can configure this feature.

You must configure certain standard RADIUS attributes and Juniper Networks VSAs in the attribute return list on the AAA server associated with the LNS for this example to work. Table 2 lists the attributes with their required order setting and values. We recommend that you use the most current Juniper Networks RADIUS dictionary, available in the Downloads box on the Junos OS Subscriber Management page at https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/subscriber-access/index.html.

Table 2: VSA and Standard RADIUS Attribute Names, Order, and Values Required for Example

VSA Name [Number]              Order    Value
CoS-Parameter-Type [26-108]    1        T01 Multiplay
CoS-Parameter-Type [26-108]    2        T02 10m
CoS-Parameter-Type [26-108]    3        T08 -36
CoS-Parameter-Type [26-108]    4        T07 cell-mode
Framed-IPv6-Pool [100]         0        jnpr_ipv6_pool
Framed-Pool [88]               0        jnpr_pool
Egress-Policy-Name [26-11]     0        classify
Ingress-Policy-Name [26-10]    0        classify
Virtual-Router [26-1]          0        default

Overview

The LNS employs user group profiles to apply PPP attributes to the PPP subscribers that are tunneled from the LAC. LACs in the network are clients of the LNS. The clients are associated with user group profiles in the L2TP access profile configured on the LNS. In this example, the user group profile ce-l2tp-group-profile specifies the following PPP attributes:

  • A 30-second interval between PPP keepalive messages for L2TP tunnels from the client LAC terminating on the LNS.

  • A 200-second interval that defines how long the PPP subscriber session can be idle before it is considered to have timed out.

  • Both PAP and CHAP as the PPP authentication methods that apply to tunneled PPP subscribers at the LNS.

The L2TP access profile ce-l2tp-profile defines a set of L2TP parameters for each client LAC. In this example, the user group profile ce-l2tp-group-profile is associated with both clients, lac1 and lac2. Both clients are configured to have the LNS renegotiate the link control protocol (LCP) with the PPP client rather than accepting the pre-negotiated LCP parameters that the LACs pass to the LNS. LCP renegotiation also causes authentication to be renegotiated by the LNS; the authentication method is specified in the user group profile. The maximum number of sessions allowed per tunnel is set to 1000 for lac1 and to 4000 for lac2. A different password is configured for each LAC.

A local AAA access profile, aaa-profile, enables you to override the global AAA access profile, so that you can specify an authentication order, a RADIUS server that you want to use for L2TP, and a password for the server.

In this example, an address pool defines a range of IP addresses that the LNS allocates to the tunneled PPP sessions. This example defines ranges of IPv4 and IPv6 addresses.

Two inline service interfaces are enabled on the MPC located in slot 5 of the router. For each interface, 10 Gbps of bandwidth is reserved for tunnel traffic on the interface’s associated PFE. These anchor interfaces serve as the underlying physical interface. To enable CoS queue support on the individual logical inline service interfaces, you must configure both services encapsulation (generic-services) and hierarchical scheduling support on the anchors. The IPv4 address family is configured for both anchor interfaces. Both anchor interfaces are specified in the lns_p1 service device pool. The LNS can balance traffic loads across the two anchor interfaces when the tunnel group includes the pool.

This example uses the dynamic profile dyn-lns-profile2 to specify characteristics of the L2TP sessions that are created or assigned dynamically when a subscriber is tunneled to the LNS. For many of the characteristics, a predefined variable is set; the variables are dynamically replaced with the appropriate values when a subscriber is tunneled to the LNS.

The interface to which the tunneled PPP client connects ($junos-interface-name) is dynamically created in the routing instance ($junos-routing-instance) assigned to the subscriber. Routing options for access routes include the route’s next hop address ($junos-framed-route-nexthop), metric ($junos-framed-route-cost), and preference ($junos-framed-route-distance). For access-internal routes, a dynamic IP address variable ($junos-subscriber-ip-address) is set.

The logical inline service interfaces are defined by the name of a configured anchor interface ($junos-interface-ifd-name) and a logical unit number ($junos-interface-unit). The profile assigns l2tp-encapsulation as the identifier for the logical interface and specifies that each interface can be used for only a single session at a time.

The IPv4 address is set to a value returned from the AAA server. For IPv4 traffic, an input firewall filter ($junos-input-filter) and an output firewall filter ($junos-output-filter) are attached to the interface. The loopback variable ($junos-loopback-interface) derives an IP address from a loopback interface (lo) configured in the routing instance and uses it in IPCP negotiation as the PPP server address. Because this is a dual-stack configuration, the IPv6 address family is also set, with the addresses provided by the $junos-ipv6-address variable.

The $junos-ipv6-address variable is used because Router Advertisement Protocol is also configured. This variable enables AAA to allocate the first address in the prefix to be reserved as the local address for the interface. The minimal configuration for the Router Advertisement Protocol in the dynamic profile specifies the $junos-interface-name and $junos-ipv6-ndra-prefix variables to dynamically assign a prefix value in IPv6 neighbor discovery router advertisements.

The dynamic profile also includes the class of service configuration that is applied to the tunnel traffic. The traffic-control profile (tc-profile) includes variables for the scheduler map ($junos-cos-scheduler-map), shaping rate ($junos-cos-shaping-rate), overhead accounting ($junos-cos-shaping-mode), and byte adjustment ($junos-cos-byte-adjust). The dynamic profile applies the CoS configuration—including the forwarding class, the output traffic-control profile, and the rewrite rules—to the dynamic service interfaces.

The tg-dynamic tunnel group configuration specifies the access profile ce-l2tp-profile, the local AAA profile aaa-profile, and the dynamic profile dyn-lns-profile2 that are used to dynamically create LNS sessions and define the characteristics of the sessions. The lns_p1 service device pool associates a pool of service interfaces with the group to enable LNS to balance traffic across the interfaces. The local gateway address 203.0.113.2 corresponds to the remote gateway address that is configured on the LAC. The local gateway name ce-lns corresponds to the remote gateway name that is configured on the LAC.

Note:

This example does not show all possible configuration choices.

Configuration

Procedure

CLI Quick Configuration

To quickly configure an L2TP LNS, copy the following commands, paste them in a text file, remove any line breaks, and then copy and paste the commands into the CLI.

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration Mode.

To configure an L2TP LNS with inline service interfaces:

  1. Configure a user group profile that defines the PPP configuration for tunnel subscribers.

  2. Configure an L2TP access profile that defines the L2TP parameters for each client LAC. This includes associating a user group profile with the client and specifying the identifier for the inline services logical interface that represents an L2TP session on the LNS.

    Note:

    If the user-group-profile statement is modified or deleted, existing LNS subscribers that use this Layer 2 Tunneling Protocol client configuration are brought down.

  3. Configure a AAA access profile to override the global access profile for the order of AAA authentication methods and server attributes.

  4. Configure IPv4 and IPv6 address-assignment pools to allocate addresses for the clients (LACs).

  5. Configure the peer interface to terminate the tunnel and the PPP server-side IPCP address (loopback address).

  6. Enable inline service interfaces on an MPC.

  7. Configure the anchor service interfaces with services encapsulation, hierarchical scheduling, and the address family.

  8. Configure a pool of service interfaces for dynamic LNS sessions.

  9. Configure a dynamic profile that dynamically creates L2TP logical interfaces for dual-stack subscribers.

  10. Configure shaping, scheduling, and rewrite rules, and apply them in the dynamic profile to the tunnel traffic.

  11. Configure the L2TP tunnel group to bring up dynamic LNS sessions using the pool of inline service interfaces to enable load-balancing.

Results

From configuration mode, confirm the access profile, group profile, AAA profile, and address-assignment pools configuration by entering the show access command. Confirm the inline services configuration by entering the show chassis command. Confirm the interface configuration by entering the show interfaces command. Confirm the dynamic profile configuration by entering the show dynamic-profiles command. Confirm the tunnel group configuration by entering the show services l2tp command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.

When you are done configuring the device, enter commit from configuration mode.

Configuring an L2TP Tunnel Group for LNS Sessions with Inline Services Interfaces

The L2TP tunnel group specifies attributes that apply to L2TP tunnels and sessions from a group of LAC clients. These attributes include the access profile used to validate L2TP connection requests made to the LNS on the local gateway address, a local access profile that overrides the global access profile, the keepalive timer, and whether the IP ToS value is reflected.

Note:

If you delete a tunnel group, all L2TP sessions in that tunnel group are terminated. If you change the value of the local-gateway-address, service-device-pool, or service-interface statements, all L2TP sessions using those settings are terminated. If you change or delete other statements at the [edit services l2tp tunnel-group name] hierarchy level, new tunnels you establish use the updated values but existing tunnels and sessions are not affected.

To configure the LNS tunnel group:

  1. Create the tunnel group.
    Note:

    You can create up to 256 tunnel groups.

  2. Specify the service anchor interface responsible for L2TP processing on the LNS.

    This service anchor interface is required for static LNS sessions, and for dynamic LNS sessions that do not balance traffic across a pool of anchor interfaces. The interface is configured at the [edit interfaces] hierarchy level.

  3. (Optional; for load-balancing dynamic LNS sessions only) Specify a pool of inline service anchor interfaces to enable load-balancing of L2TP traffic across the interfaces.

    The pool is defined at the [edit services service-device-pools] hierarchy level.

  4. (For dynamic LNS sessions only) Specify the name of the dynamic profile that defines and instantiates inline service interfaces for L2TP tunnels.

    The profile is defined at the [edit dynamic-profiles] hierarchy level.

  5. Specify the access profile that validates all L2TP connection requests to the local gateway address.
  6. Configure the local gateway address on the LNS; this address corresponds to the IP address that LACs use to identify the LNS.
  7. (Optional) Configure the local gateway name on the LNS, returned in the SCCRP message to the LAC. The name must match the remote gateway name configured on the LAC, or the tunnel cannot be created.
  8. (Optional) Configure the interval at which the LNS sends hello messages if it has received no messages from the LAC.
  9. (Optional) Specify a local access profile that overrides the global access profile to configure RADIUS server settings for the tunnel group.

    This local profile is configured at the [edit access profile] hierarchy level.

  10. (Optional) Configure the LNS to reflect the IP ToS value from the inner IP header to the outer IP header (applies to CoS configurations).
  11. (Optional) Specify a dynamic service profile to be applied to the L2TP session at login, along with any parameters to pass to the service.
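
A sketch of a tunnel group for dynamic LNS sessions, reusing the profile names, pool name, and local gateway address from the example earlier in this topic; the hello-interval value is a placeholder:

services {
    l2tp {
        tunnel-group tg-dynamic {
            l2tp-access-profile ce-l2tp-profile;
            aaa-access-profile aaa-profile;
            dynamic-profile dyn-lns-profile2;
            service-device-pool lns_p1;
            local-gateway address 203.0.113.2;
            hello-interval 30;
            tos-reflect;
        }
    }
}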

Applying Services to an L2TP Session Without Using RADIUS

Services are typically activated for L2TP sessions, or modified later, by means of vendor-specific attributes (VSAs) returned from the RADIUS server or carried in RADIUS Change of Authorization (CoA) requests. Starting in Junos OS Release 18.1R1, you can apply services to L2TP sessions by means of dynamic service profiles without involving RADIUS. In multivendor environments, customers might use only standard RADIUS attributes to simplify management by avoiding the use of VSAs from multiple vendors. However, this complicates the application of services to L2TP sessions because VSAs are generally required to apply services. Local dynamic service profile activation enables you to avoid that problem. You can also use local service profile activation to provide default services when RADIUS servers are down.

You can apply services to all subscribers in a tunnel group or to all subscribers using a particular LAC. You can configure a maximum of 12 services per tunnel group or LAC hostname.

After configuring one or more dynamic service profiles that define services, you apply them in the tunnel group or in the access profile configuration for a LAC client by specifying the service profile names. You can list more than one profile to be activated, separated by an ampersand (&). You can also specify parameters to be used by the service profile that might override values configured in the profile itself, such as a downstream shaping rate for a CoS service.

The locally configured list of services (via service profiles) serves as local authorization that is applied by authd during client session activation. This list of services is subject to the same validation and processing as services originating from external authority, such as RADIUS. These services are presented during subscriber login.

You can still use RADIUS VSAs or CoA requests in concert with the service profiles. If services are sourced from an external authority as authorization during authentication or during subscriber session provisioning (activation), the services from the external authority take strict priority over those in the local configuration. If a service applied with RADIUS is the same as a service applied with a service profile in the CLI, but with different parameters, the RADIUS service is applied with a new session ID and takes precedence over the earlier service profile.

You can issue commands to deactivate or reactivate any service you have previously activated for a tunnel group or LAC.

Define the dynamic service profiles that you want to later apply to a tunnel group or LAC.

To apply service profiles to all subscribers in a tunnel group:

  • Specify one or more service profiles and any parameters to be passed to the services.

To apply service profiles to all subscribers for a particular LAC:

  • Specify one or more service profiles and any parameters to be passed to the services.

    Note:

    When service profiles are configured for a LAC client and for a tunnel group that uses that client, only the LAC client service profile is applied. It overrides the tunnel group configuration. For example, in the following configuration, the tunnel group tg-LAC-3 uses the LAC client LAC-3, so the LAC-3 configuration overrides the tunnel group configuration. Consequently, only the cos-A3 service is activated for subscribers in the tunnel group, rather than Cos2 and fw1. The shaping rate passed for the service is 24 Mbps.

You can deactivate any service applied to a subscriber session by issuing the following command:

You can reactivate any service applied to a subscriber session by issuing the following command:

To display the services sessions for all current subscriber sessions, use the show subscribers extensive or show network-access aaa subscribers session-id id-number detail command.

To understand how local service application works, the following examples illustrate the various configuration possibilities. First, consider the following dynamic service profile configurations, cos2 and fw1:

The following statement applies both services to all subscribers in tunnel group tg1; a parameter value of 31 Mbps is passed to the cos2 service:
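
(The following is a sketch of such a statement; the exact quoting of the parameter and the ampersand-separated profile list may vary.)

services {
    l2tp {
        tunnel-group tg1 {
            service-profile "cos2(31m)&fw1";
        }
    }
}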

In the cos2 service profile, the shaping rate is provided by a user-defined variable with a default value of 10m, or 10 Mbps. After the L2TP session is up, cos2 and fw1 are activated with service session IDs of 34 and 35, respectively.

The parameter passed to cos2 is used as the value for $shaping-rate; consequently the shaping rate for the service is adjusted from the default value of 10 Mbps to 31 Mbps, as shown in the following command output. Although the output indicates the adjusting application is RADIUS CoA, the adjustment is a consequence of the parameter passed to the service profile. That operation uses the same internal framework as a CoA and is reported as such.

Now the cos2 service is deactivated from the CLI for subscriber session 27.

The following output shows cos2 is gone, leaving only fw1 as an active service.

The following command reactivates cos2 for subscriber session 27.

The reactivated cos2 service has a new service session ID of 36.

The reactivated cos2 service uses the default shaping rate, 10 Mbps, from the service profile.

Next, a RADIUS CoA request is received, which includes the Activate-Service VSA (26-65). The VSA specifies and activates the service and specifies a change in the shaping rate of cos2 from the default 10 Mbps to 12 Mbps. The cos2 service session 36 still appears in the output, but is superseded by the new service session initiated by the CoA, 49.

When a service is applied by both the CLI configuration and a RADIUS VSA (26-65), but with different parameters, the RADIUS configuration overrides the CLI configuration. In the following example, the CLI configuration applies the cos2 service profile with a value of 31 Mbps for the shaping rate.

The RADIUS Access-Accept message service activation VSA (26-65) applies cos2 with a value of 21 Mbps for the shaping rate.

The CLI configuration activates service session 22 with a shaping rate of 31 Mbps. The RADIUS VSA activates service session 23 with a shaping rate of 21 Mbps.

Configuring a Pool of Inline Services Interfaces for Dynamic LNS Sessions

You can create a pool of inline service interfaces, also known as a service device pool, to enable load-balancing of L2TP traffic across the interfaces. The pool is supported for dynamic LNS configurations, where it provides a set of logical interfaces that can be dynamically created and allocated to L2TP sessions on the LNS. The pool is assigned to an LNS tunnel group. L2TP maintains the state of each inline service interface and uses a round-robin method to evenly distribute the load among available interfaces when new session requests are accepted.

Note:

Load balancing is available only for dynamically created subscriber interfaces.

LNS sessions anchored on an MPC are not affected by a MIC failure as long as some other path to the peer LACs exists. If the MPC hosting the peer interface fails and there is no path to peer LACs, the failure initiates termination and clean-up of all the sessions on the MPC.

If the MPC anchoring the LNS sessions itself fails, the Routing Engine does not relocate sessions to another slot and all sessions are terminated immediately. New sessions can come up on another available interface when the client retries.

To configure the service device pool:

  1. Create the pool.
  2. Specify the inline service interfaces that make up the pool.
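
A sketch of the pool used in the example earlier in this topic, which includes the two anchor interfaces on the MPC in slot 5:

services {
    service-device-pools {
        pool lns_p1 {
            interface si-5/0/0;
            interface si-5/1/0;
        }
    }
}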

Configuring a Dynamic Profile for Dynamic LNS Sessions

You can configure L2TP to dynamically assign inline service interfaces for L2TP tunnels. You must define one or more dynamic profiles and assign a profile to each tunnel group. The LNS supports IPv4-only, IPv6-only, and dual-stack IPv4/IPv6 sessions.

To configure the L2TP dynamic profile:

  1. Create the dynamic profile.
  2. Configure the interface to be dynamically assigned to the routing instance used by the tunneled PPP clients.
  3. Configure the routing options for access routes in the routing instance.
  4. Configure the routing options for access-internal routes in the routing instance.
  5. Define the interfaces used by the dynamic profile. The variable is dynamically replaced by one of the configured inline service interfaces.
  6. Configure the inline services logical interfaces to be dynamically instantiated.
  7. Specify an identifier for the logical interfaces.
  8. Configure each logical interface to be used for only one session at a time.
  9. Configure the address family for the logical interfaces and enable the local address on the LNS that provides local termination for the L2TP tunnel to be derived from the specified interface name.
    Note:

    Dynamic LNS sessions require you to include the dial-options statement in the dynamic profile, which in turn requires you to include the family inet statement. This has the following consequences:

    • You must always configure family inet regardless of whether you configure IPv4-only, IPv6-only, or dual-stack interfaces in the profile.

    • When you configure IPv4-only interfaces, you configure only family inet and you must configure the interface address under family inet.

    • When you configure IPv6-only interfaces, you must also configure family inet6 and you must configure the interface address under family inet6. You do not configure the address under family inet.

    • When you configure dual-stack, IPv4/IPv6 interfaces, you configure both family inet and family inet6 and an interface address under each family.

    For IPv4-only interfaces:

    For IPv6-only interfaces:

    For dual-stack IPv4/IPv6 interfaces:

    Note:

    If Router Advertisement Protocol is configured, then you configure a numbered address rather than an unnumbered address for the IPv6 local address:

    See Broadband Subscriber Sessions User Guide for information about using variables for IPv6-only and dual-stack addressing in dynamic profiles.
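
A minimal IPv4-only sketch of such a dynamic profile, omitting the routing-options, firewall, and CoS portions described in the example earlier in this topic; the l2tp-interface-id value is a placeholder:

dynamic-profiles {
    dyn-lns-profile2 {
        routing-instances {
            "$junos-routing-instance" {
                interface "$junos-interface-name";
            }
        }
        interfaces {
            "$junos-interface-ifd-name" {
                unit "$junos-interface-unit" {
                    dial-options {
                        l2tp-interface-id lns-interface;
                        dedicated;
                    }
                    family inet {
                        unnumbered-address lo0.0;
                    }
                }
            }
        }
    }
}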

Release History Table

Release    Description
18.1R1     Starting in Junos OS Release 18.1R1, you can apply services to L2TP sessions by means of dynamic service profiles without involving RADIUS.
16.2R1     Starting in Junos OS Release 16.2, you are not required to explicitly specify a bandwidth for L2TP LNS tunnel traffic using inline services.