LDP Configuration
Minimum LDP Configuration
To enable LDP with minimal configuration:
Enable family mpls on all relevant interfaces. In the case of directed LDP, the loopback interface must also be enabled with family mpls.
(Optional) Configure the relevant interfaces at the [edit protocols mpls] hierarchy level.
To enable LDP on a single interface, include the ldp statement and specify the interface using the interface statement.
This is the minimum LDP configuration. All other LDP configuration statements are optional.
ldp {
    interface interface-name;
}
To enable LDP on all interfaces, specify all for interface-name.
For a list of hierarchy levels at which you can include these statements, see the statement summary sections.
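For example, a minimal working configuration might look like the following sketch (the interface name ge-0/0/0.0 is illustrative; family mpls must also be enabled on the interface itself):

```
[edit protocols]
ldp {
    interface ge-0/0/0.0;
}
```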
Enabling and Disabling LDP
LDP is routing-instance-aware. To enable LDP on a specific interface, include the following statements:
ldp {
    interface interface-name;
}
For a list of hierarchy levels at which you can include these statements, see the statement summary sections.
To enable LDP on all interfaces, specify all for interface-name.
If you have configured interface properties on a group of interfaces
and want to disable LDP on one of the interfaces, include the interface statement with the disable option:
interface interface-name {
    disable;
}
For a list of hierarchy levels at which you can include this statement, see the statement summary section.
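For example, if LDP is enabled on all interfaces, you might disable it on a single interface as follows (the interface name so-1/0/0.0 is illustrative):

```
[edit protocols]
ldp {
    interface all;
    interface so-1/0/0.0 {
        disable;
    }
}
```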
Configuring the LDP Timer for Hello Messages
LDP hello messages enable LDP nodes to discover one another and to detect the failure of a neighbor or the link to the neighbor. Hello messages are sent periodically on all interfaces where LDP is enabled.
There are two types of LDP hello messages:
Link hello messages—Sent through the LDP interface as UDP packets addressed to the LDP discovery port. Receipt of an LDP link hello message on an interface identifies an adjacency with the LDP peer router.
Targeted hello messages—Sent as UDP packets addressed to the LDP discovery port at a specific address. Targeted hello messages are used to support LDP sessions between routers that are not directly connected. A targeted router determines whether to respond or ignore a targeted hello message. A targeted router that chooses to respond does so by periodically sending targeted hello messages back to the initiating router.
By default, LDP sends hello messages every 5 seconds for link hello messages and every 15 seconds for targeted hello messages. You can configure the LDP timer to alter how often both types of hello messages are sent. However, you cannot configure a time for the LDP timer that is greater than the LDP hold time. For more information, see Configuring the Delay Before LDP Neighbors Are Considered Down.
- Configuring the LDP Timer for Link Hello Messages
- Configuring the LDP Timer for Targeted Hello Messages
Configuring the LDP Timer for Link Hello Messages
To modify how often LDP sends link hello messages, specify a
new link hello message interval for the LDP timer using the hello-interval statement:
hello-interval seconds;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
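For example, the following sketch changes the link hello interval from the 5-second default to 10 seconds (an illustrative value):

```
[edit protocols]
ldp {
    hello-interval 10;
}
```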
Configuring the LDP Timer for Targeted Hello Messages
To modify how often LDP sends targeted hello messages, specify
a new targeted hello message interval for the LDP timer by configuring
the hello-interval statement as an option for the targeted-hello statement:
targeted-hello {
    hello-interval seconds;
}
For a list of hierarchy levels at which you can include these statements, see the statement summary sections for these statements.
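For example, the following sketch changes the targeted hello interval from the 15-second default to 30 seconds (an illustrative value):

```
[edit protocols]
ldp {
    targeted-hello {
        hello-interval 30;
    }
}
```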
Configuring the Delay Before LDP Neighbors Are Considered Down
The hold time determines how long an LDP node should wait for a hello message before declaring a neighbor to be down. This value is sent as part of a hello message so that each LDP node tells its neighbors how long to wait. The values sent by each neighbor do not have to match.
The hold time should normally be at least three times the hello interval. The default is 15 seconds for link hello messages and 45 seconds for targeted hello messages. However, it is possible to configure an LDP hold time that is close to the value for the hello interval.
By configuring an LDP hold time close to the hello interval (less than three times the hello interval), LDP neighbor failures might be detected more quickly. However, this also increases the possibility that the router might declare an LDP neighbor down that is still functioning normally. For more information, see Configuring the LDP Timer for Hello Messages.
The LDP hold time is also negotiated automatically between LDP peers. When two LDP peers advertise different LDP hold times to one another, the smaller value is used. If an LDP peer router advertises a shorter hold time than the value you have configured, the peer router’s advertised hold time is used. This negotiation can affect the LDP keepalive interval as well.
If the local LDP hold time is not shortened during LDP peer negotiation, the user-configured keepalive interval is left unchanged. However, if the local hold time is reduced during peer negotiation, the keepalive interval is recalculated to one-third of the new hold-time value. For example, if the new hold-time value is 45 seconds, the keepalive interval is set to 15 seconds.
This automated keepalive interval calculation can cause different keepalive intervals to be configured on each peer router. This enables the routers to be flexible in how often they send keepalive messages, because the LDP peer negotiation ensures they are sent more frequently than the LDP hold time.
When you reconfigure the hold-time interval, changes do not
take effect until after the session is reset. The hold time is negotiated
when the LDP peering session is initiated and cannot be renegotiated
as long as the session is up (required by RFC 5036, LDP Specification). To manually force the LDP session
to reset, issue the clear ldp session command.
- Configuring the LDP Hold Time for Link Hello Messages
- Configuring the LDP Hold Time for Targeted Hello Messages
Configuring the LDP Hold Time for Link Hello Messages
To modify how long an LDP node should wait for a link hello
message before declaring the neighbor down, specify a new time in
seconds using the hold-time statement:
hold-time seconds;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
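For example, following the guideline that the hold time be at least three times the hello interval, the following sketch pairs a 10-second hello interval with a 30-second hold time (illustrative values):

```
[edit protocols]
ldp {
    hello-interval 10;
    hold-time 30;
}
```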
Configuring the LDP Hold Time for Targeted Hello Messages
To modify how long an LDP node should wait for a targeted hello
message before declaring the neighbor down, specify a new time in
seconds using the hold-time statement as an option for
the targeted-hello statement:
targeted-hello {
    hold-time seconds;
}
For a list of hierarchy levels at which you can include these statements, see the statement summary sections for these statements.
Enabling Strict Targeted Hello Messages for LDP
Use strict targeted hello messages to prevent LDP sessions
from being established with remote neighbors that have not been specifically
configured. If you configure the strict-targeted-hellos statement, an LDP peer does not respond to targeted hello messages
coming from a source that is not one of its configured remote neighbors.
Configured remote neighbors can include:
Endpoints of RSVP tunnels for which LDP tunneling is configured
Layer 2 circuit neighbors
If an unconfigured neighbor sends a hello message, the
LDP peer ignores the message and logs an error (with the error trace flag) indicating the source. For example, if the LDP peer
received a targeted hello from the Internet address 10.0.0.1 and no
neighbor with this address is specifically configured, the following
message is printed to the LDP log file:
LDP: Ignoring targeted hello from 10.0.0.1
To enable strict targeted hello messages, include the strict-targeted-hellos statement:
strict-targeted-hellos;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
Configuring the Interval for LDP Keepalive Messages
The keepalive interval determines how often a message is sent over the session to ensure that the keepalive timeout is not exceeded. If no other LDP traffic is sent over the session in this much time, a keepalive message is sent. The default is 10 seconds. The minimum value is 1 second.
The value configured for the keepalive interval can be altered during LDP session negotiation if the value configured for the LDP hold time on the peer router is lower than the value configured locally. For more information, see Configuring the Delay Before LDP Neighbors Are Considered Down.
To modify the keepalive interval, include the keepalive-interval statement:
keepalive-interval seconds;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
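For example, the following sketch lowers the keepalive interval from the 10-second default to 5 seconds (an illustrative value); the keepalive timeout would normally be set to at least three times this value:

```
[edit protocols]
ldp {
    keepalive-interval 5;
}
```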
Configuring the LDP Keepalive Timeout
After an LDP session is established, messages must be exchanged periodically to ensure that the session is still working. The keepalive timeout defines the amount of time that the neighbor LDP node waits before deciding that the session has failed. This value is usually set to at least three times the keepalive interval. The default is 30 seconds.
To modify the keepalive timeout, include the keepalive-timeout statement:
keepalive-timeout seconds;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
The value configured for the keepalive-timeout statement
is displayed as the hold time when you issue the show ldp session
detail command.
Configuring LDP Route Preferences
When several protocols calculate routes to the same destination, route preferences are used to select which route is installed in the forwarding table. The route with the lowest preference value is selected. The preference value can be a number in the range 0 through 255. By default, LDP routes have a preference value of 9.
To modify the route preferences, include the preference statement:
preference preference;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
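For example, the following sketch raises the LDP route preference from the default of 9 to 11 (an illustrative value), making LDP routes less preferred than routes from protocols with lower preference values:

```
[edit protocols]
ldp {
    preference 11;
}
```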
LDP Graceful Restart
LDP graceful restart enables a router whose LDP control plane is undergoing a restart to continue to forward traffic while recovering its state from neighboring routers. It also enables a router on which helper mode is enabled to assist a neighboring router that is attempting to restart LDP.
During session initialization, a router advertises its ability to perform LDP graceful restart or to take advantage of a neighbor performing LDP graceful restart by sending the graceful restart TLV. This TLV contains two fields relevant to LDP graceful restart: the reconnect time and the recovery time. The values of the reconnect and recovery times indicate the graceful restart capabilities supported by the router.
When a router discovers that a neighboring router is restarting, it waits until the end of the recovery time before attempting to reconnect. The recovery time is the length of time a router waits for LDP to restart gracefully. The recovery time period begins when an initialization message is sent or received. This time period is also typically the length of time that a neighboring router maintains its information about the restarting router, allowing it to continue to forward traffic.
You can configure LDP graceful restart in both the master instance for the LDP protocol and for a specific routing instance. You can disable graceful restart at the global level for all protocols, at the protocol level for LDP only, and on a specific routing instance. LDP graceful restart is disabled by default, because at the global level, graceful restart is disabled by default. However, helper mode (the ability to assist a neighboring router attempting a graceful restart) is enabled by default.
The following are some of the behaviors associated with LDP graceful restart:
Outgoing labels are not maintained in restarts. New outgoing labels are allocated.
When a router is restarting, no label-map messages are sent to neighbors that support graceful restart until the restarting router has stabilized (label-map messages are immediately sent to neighbors that do not support graceful restart). However, all other messages (keepalive, address-message, notification, and release) are sent as usual. Distributing these other messages prevents the router from distributing incomplete information.
Helper mode and graceful restart are independent. You can disable graceful restart in the configuration, but still allow the router to cooperate with a neighbor attempting to restart gracefully.
Configuring the Prefixes Advertised into LDP from the Routing Table
You can control the set of prefixes that are advertised into
LDP and cause the router to be the egress router for those prefixes.
By default, only the loopback address is advertised into LDP. To configure
the set of prefixes from the routing table to be advertised into LDP,
include the egress-policy statement:
egress-policy policy-name;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
If you configure an egress policy for LDP that does not include the loopback address, it is no longer advertised in LDP. To continue to advertise the loopback address, you need to explicitly configure it as a part of the LDP egress policy.
The named policy (configured at the [edit policy-options] or [edit logical-systems logical-system-name policy-options] hierarchy level) is applied to all routes
in the routing table. Those routes that match the policy are advertised
into LDP. You can control the set of neighbors to which those prefixes
are advertised by using the export statement. Only from operators are considered; you can use any valid from operator. For more information, see the Junos OS Routing Protocols Library for Routing Devices.
ACX Series routers do not support the [edit logical-systems] hierarchy level.
Example: Configuring the Prefixes Advertised into LDP
Advertise all connected routes into LDP:
[edit protocols]
ldp {
egress-policy connected-only;
}
policy-options {
policy-statement connected-only {
from {
protocol direct;
}
then accept;
}
}
Configuring FEC Deaggregation
When an LDP egress router advertises multiple prefixes, the prefixes are bound to a single label and aggregated into a single forwarding equivalence class (FEC). By default, LDP maintains this aggregation as the advertisement traverses the network.
Normally, because an LSP is not split across multiple next hops and the prefixes are bound into a single LSP, load-balancing across equal-cost paths does not occur. You can, however, load-balance across equal-cost paths if you configure a load-balancing policy and deaggregate the FECs.
Deaggregating the FECs causes each prefix to be bound to a separate label and become a separate LSP.
To configure deaggregated FECs, include the deaggregate statement:
deaggregate;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
FEC deaggregation can be configured only globally; it applies to all LDP sessions.
Deaggregating a FEC allows the resulting multiple LSPs to be distributed across multiple equal-cost paths. LSPs are distributed across the multiple next hops on the egress segments, but only one next hop is installed per LSP.
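For example, enabling FEC deaggregation globally is a single statement:

```
[edit protocols]
ldp {
    deaggregate;
}
```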
To aggregate FECs, include the no-deaggregate statement:
no-deaggregate;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
FEC aggregation can likewise be configured only globally; it applies to all LDP sessions.
Configuring Policers for LDP FECs
You can configure the Junos OS to track and police traffic for LDP FECs. LDP FEC policers can be used to do any of the following:
Track or police the ingress traffic for an LDP FEC.
Track or police the transit traffic for an LDP FEC.
Track or police LDP FEC traffic originating from a specific forwarding class.
Track or police LDP FEC traffic originating from a specific virtual routing and forwarding (VRF) site.
Discard false traffic bound for a specific LDP FEC.
To police traffic for an LDP FEC, you must first configure a
filter. Specifically, you need to configure either the interface statement or the interface-set statement at the [edit firewall family protocol-family filter filter-name term term-name from] hierarchy level. The interface statement allows you to
match the filter to a single interface. The interface-set statement allows you to match the filter to multiple interfaces.
For more information on how to configure the interface statement, the interface-set statement, and policers
for LDP FECs, see the Routing Policies, Firewall Filters, and Traffic Policers User Guide.
Once you have configured the filters, you need to include them
in the policing statement configuration for LDP. To configure
policers for LDP FECs, include the policing statement:
policing {
    fec fec-address {
        ingress-traffic filter-name;
        transit-traffic filter-name;
    }
}
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
The policing statement includes the following options:
fec—Specify the FEC address for the LDP FEC you want to police.
ingress-traffic—Specify the name of the ingress traffic filter.
transit-traffic—Specify the name of the transit traffic filter.
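For example, the following sketch polices traffic for a single FEC (the FEC address and filter names are illustrative; the filters themselves must be defined separately at the [edit firewall] hierarchy level):

```
[edit protocols]
ldp {
    policing {
        fec 10.255.245.1/32 {
            ingress-traffic police-ingress-ldp;
            transit-traffic police-transit-ldp;
        }
    }
}
```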
Configuring LDP IPv4 FEC Filtering
By default, when a targeted LDP session is established, the Junos OS always exchanges both the IPv4 forwarding equivalence classes (FECs) and the Layer 2 circuit FECs over the targeted LDP session. For an LDP session to an indirectly connected neighbor, you might only want to export Layer 2 circuit FECs to the neighbor if the session was specifically configured to support Layer 2 circuits or VPLS.
In a mixed vendor network where all non-BGP prefixes are advertised into LDP, the LDP database can become large. For this type of environment, it can be useful to prevent the advertisement of IPv4 FECs over LDP sessions formed because of Layer 2 circuit or LDP VPLS configuration. Similarly, it can be useful to filter any IPv4 FECs received in this sort of environment.
If all the LDP neighbors associated with an LDP session are
Layer 2 only, you can configure the Junos OS to advertise only
Layer 2 circuit FECs by configuring the l2-smart-policy statement. This feature also automatically filters out the IPv4
FECs received on this session. Configuring an explicit export or import
policy that is activated for l2-smart-policy disables this
feature in the corresponding direction.
If one of the LDP session’s neighbors is formed because of a discovered adjacency or if the adjacency is formed because of an LDP tunneling configuration on one or more RSVP LSPs, the IPv4 FECs are advertised and received using the default behavior.
To prevent LDP from exporting IPv4 FECs over LDP sessions with
Layer 2 neighbors only and to filter out IPv4 FECs received over
such sessions, include the l2-smart-policy statement:
l2-smart-policy;
For a list of hierarchy levels at which you can configure this statement, see the statement summary for this statement.
Configuring BFD for LDP LSPs
You can configure Bidirectional Forwarding Detection (BFD) for LDP LSPs. The BFD protocol is a simple hello mechanism that detects failures in a network. Hello packets are sent at a specified, regular interval. A neighbor failure is detected when the router stops receiving a reply after a specified interval. BFD works with a wide variety of network environments and topologies. The failure detection timers for BFD have shorter time limits than the failure detection mechanisms of static routes, providing faster detection.
An error is logged whenever a BFD session for a path fails. The following shows how BFD for LDP LSP log messages might appear:
RPD_LDP_BFD_UP: LDP BFD session for FEC 10.255.16.14/32 is up
RPD_LDP_BFD_DOWN: LDP BFD session for FEC 10.255.16.14/32 is down
You can also configure BFD for RSVP LSPs, as described in Configuring BFD for RSVP-Signaled LSPs.
The BFD failure detection timers are adaptive and can be adjusted
to be more or less aggressive. For example, the timers can adapt to
a higher value if the adjacency fails, or a neighbor can negotiate
a higher value for a timer than the configured value. The timers adapt
to a higher value when a BFD session flap occurs more than three times
in a span of 15 seconds. A back-off algorithm multiplies the receive (Rx) interval by two if the local BFD instance is the reason for the session flap. The transmission (Tx) interval is multiplied by two if
the remote BFD instance is the reason for the session flap. You can
use the clear bfd adaptation command to return BFD interval
timers to their configured values. The clear bfd adaptation command is hitless, meaning that the command does not affect traffic
flow on the routing device.
To enable BFD for LDP LSPs, include the oam and bfd-liveness-detection statements:
oam {
    bfd-liveness-detection {
        detection-time threshold milliseconds;
        ecmp;
        failure-action {
            remove-nexthop;
            remove-route;
        }
        holddown-interval seconds;
        ingress-policy ingress-policy-name;
        minimum-interval milliseconds;
        minimum-receive-interval milliseconds;
        minimum-transmit-interval milliseconds;
        multiplier detection-time-multiplier;
        no-adaptation;
        transmit-interval {
            minimum-interval milliseconds;
            threshold milliseconds;
        }
        version (0 | 1 | automatic);
    }
    fec fec-address {
        bfd-liveness-detection {
            detection-time threshold milliseconds;
            ecmp;
            failure-action {
                remove-nexthop;
                remove-route;
            }
            holddown-interval milliseconds;
            ingress-policy ingress-policy-name;
            minimum-interval milliseconds;
            minimum-receive-interval milliseconds;
            minimum-transmit-interval milliseconds;
            multiplier detection-time-multiplier;
            no-adaptation;
            transmit-interval {
                minimum-interval milliseconds;
                threshold milliseconds;
            }
            version (0 | 1 | automatic);
        }
        no-bfd-liveness-detection;
        periodic-traceroute {
            disable;
            exp exp-value;
            fanout fanout-value;
            frequency minutes;
            paths number-of-paths;
            retries retry-attempts;
            source address;
            ttl ttl-value;
            wait seconds;
        }
    }
    lsp-ping-interval seconds;
    periodic-traceroute {
        disable;
        exp exp-value;
        fanout fanout-value;
        frequency minutes;
        paths number-of-paths;
        retries retry-attempts;
        source address;
        ttl ttl-value;
        wait seconds;
    }
}
You can enable BFD for the LDP LSPs associated with a specific
forwarding equivalence class (FEC) by configuring the FEC address
using the fec option at the [edit protocols ldp] hierarchy level. Alternatively, you can configure an Operation Administration
and Management (OAM) ingress policy to enable BFD on a range of FEC
addresses. For more information, see Configuring OAM Ingress Policies for LDP.
You cannot enable BFD for LDP LSPs unless their equivalent FEC addresses are explicitly configured or OAM is enabled on the FECs using an OAM ingress policy. If BFD is not enabled for any FEC addresses, the BFD session does not come up.
You can configure the oam statement at the
following hierarchy levels:
[edit protocols ldp]
[edit logical-systems logical-system-name protocols ldp]
ACX Series routers do not support the [edit logical-systems] hierarchy level.
The oam statement includes the following options:
fec—Specify the FEC address. You must either specify a FEC address or configure an OAM ingress policy to ensure that the BFD session comes up.
lsp-ping-interval—Specify the duration of the LSP ping interval in seconds. To issue a ping on an LDP-signaled LSP, use the ping mpls ldp command. For more information, see the CLI Explorer.
The bfd-liveness-detection statement includes
the following options:
ecmp—Cause LDP to establish BFD sessions for all ECMP paths configured for the specified FEC. If you configure the ecmp option, you must also configure the periodic-traceroute statement for the specified FEC; otherwise, the commit operation fails. You can configure the periodic-traceroute statement at the global hierarchy level ([edit protocols ldp oam]) while configuring the ecmp option only for a specific FEC ([edit protocols ldp oam fec address bfd-liveness-detection]).
holddown-interval—Specify the duration the BFD session should remain up before adding the route or next hop. Specifying a time of 0 seconds causes the route or next hop to be added as soon as the BFD session comes back up.
minimum-interval—Specify the minimum transmit and receive interval. If you configure the minimum-interval option, you do not need to configure the minimum-receive-interval option or the minimum-transmit-interval option.
minimum-receive-interval—Specify the minimum receive interval. The range is 1 through 255,000 milliseconds.
minimum-transmit-interval—Specify the minimum transmit interval. The range is 1 through 255,000 milliseconds.
multiplier—Specify the detection time multiplier. The range is 1 through 255.
version—Specify the BFD version, either version 0 or version 1. By default, the Junos OS attempts to determine the BFD version automatically.
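For example, the following sketch enables BFD for the LDP LSP associated with a single FEC (the FEC address and timer values are illustrative):

```
[edit protocols]
ldp {
    oam {
        fec 10.255.16.14/32 {
            bfd-liveness-detection {
                minimum-interval 300;
                multiplier 3;
            }
        }
    }
}
```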
Configuring ECMP-Aware BFD for LDP LSPs
When you configure BFD for a FEC, a BFD session is established for only one active local next-hop for the router. However, you can configure multiple BFD sessions, one for each FEC associated with a specific equal-cost multipath (ECMP) path. For this to function properly, you also need to configure LDP LSP periodic traceroute. (See Configuring LDP LSP Traceroute.) LDP LSP traceroute is used to discover ECMP paths. A BFD session is initiated for each ECMP path discovered. Whenever a BFD session for one of the ECMP paths fails, an error is logged.
LDP LSP traceroute is run periodically to check the integrity of the ECMP paths. The following might occur when a problem is discovered:
If the latest LDP LSP traceroute for a FEC differs from the previous traceroute, the BFD sessions associated with that FEC (the BFD sessions for address ranges that have changed from previous run) are brought down and new BFD sessions are initiated for the destination addresses in the altered ranges.
If the LDP LSP traceroute returns an error (for example, a timeout), all the BFD sessions associated with that FEC are torn down.
To configure LDP to establish BFD sessions for all ECMP paths
configured for the specified FEC, include the ecmp statement.
ecmp;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
Along with the ecmp statement, you must also include
the periodic-traceroute statement, either in the global
LDP OAM configuration (at the [edit protocols ldp oam] or [edit logical-systems logical-system-name protocols
ldp oam] hierarchy level) or in the configuration for the specified
FEC (at the [edit protocols ldp oam fec address] or [edit logical-systems logical-system-name protocols ldp oam fec address] hierarchy
level). Otherwise, the commit operation fails.
ACX Series routers do not support the [edit logical-systems] hierarchy level.
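For example, the following sketch enables ECMP-aware BFD for a single FEC, with periodic traceroute configured at the global LDP OAM level (the FEC address and timer value are illustrative):

```
[edit protocols]
ldp {
    oam {
        periodic-traceroute;
        fec 10.255.16.14/32 {
            bfd-liveness-detection {
                minimum-interval 300;
                ecmp;
            }
        }
    }
}
```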
Configuring a Failure Action for the BFD Session on an LDP LSP
You can configure route and next-hop properties in the event of a BFD session failure event on an LDP LSP. The failure event could be an existing BFD session that has gone down or could be a BFD session that never came up. LDP adds back the route or next hop when the relevant BFD session comes back up.
You can configure one of the following failure action
options for the failure-action statement in the event of
a BFD session failure on the LDP LSP:
remove-nexthop—Removes the route corresponding to the next hop of the LSP's route at the ingress node when a BFD session failure event is detected.
remove-route—Removes the route corresponding to the LSP from the appropriate routing tables when a BFD session failure event is detected. If the LSP is configured with ECMP and a BFD session corresponding to any path goes down, the route is removed.
To configure a failure action in the event of a BFD session
failure on an LDP LSP, include either the remove-nexthop option or the remove-route option for the failure-action statement:
failure-action {
    remove-nexthop;
    remove-route;
}
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
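For example, the following sketch removes the route for an LSP when its BFD session fails (the FEC address is illustrative):

```
[edit protocols]
ldp {
    oam {
        fec 10.255.16.14/32 {
            bfd-liveness-detection {
                failure-action {
                    remove-route;
                }
            }
        }
    }
}
```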
Configuring the Holddown Interval for the BFD Session
You can specify the duration the BFD session should be up before adding a route or next hop by configuring the holddown-interval statement at either the [edit protocols ldp oam bfd-liveness-detection] hierarchy level or the [edit protocols ldp oam fec address bfd-liveness-detection] hierarchy level. Specifying a time of 0 seconds causes the route or next hop to be added as soon as the BFD session comes back up.
holddown-interval seconds;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
Mapping Client and Server for Segment Routing to LDP Interoperability
Segment routing mapping server and client support enables interoperability between network islands that run LDP and segment routing (SR, also known as SPRING). This interoperability is useful during a migration from LDP to SR. During the transition there can be islands (or domains) with devices that support either only LDP or only segment routing. For these devices to interwork, LDP segment routing mapping server (SRMS) and segment routing mapping client (SRMC) functionality is required. You enable these server and client functions on a device in the segment routing network.
SR mapping server and client functionality is supported with either OSPF or ISIS.
- Overview of Segment Routing to LDP Interoperability
- Segment Routing to LDP Interoperability Using OSPF
- Interoperability of Segment Routing with LDP Using ISIS
Overview of Segment Routing to LDP Interoperability
Figure 1 shows a simple LDP network topology to illustrate how interoperability of segment routing devices with LDP devices works. Keep in mind that both OSPF and ISIS are supported, so for now we'll keep things agnostic with regard to the IGP. The sample topology has six devices, R1 through R6, in a network that is undergoing a migration from LDP to segment routing.
In the topology, devices R1, R2, and R3 are configured for segment routing only. Devices R5 and R6 are part of a legacy LDP domain and do not currently support SR. Device R4 supports both LDP and segment routing. The loopback addresses of all devices are shown. These loopbacks are advertised as egress FECs in the LDP domain and as SR node IDs in the SR domain. Interoperability is based on mapping an LDP FEC into an SR node ID, and vice versa.
For R1 to interwork with R6, both an LDP segment routing mapping server (SRMS) and a segment routing mapping client (SRMC) are needed. It's easier to understand the role of the SRMS and SRMC by looking at the traffic flow in a unidirectional manner. Based on Figure 1, we'll say that traffic flowing from left to right originates in the SR domain and terminates in the LDP domain. In like fashion, traffic that flows from right to left originates in the LDP domain and terminates in the SR domain.
The SRMS provides the information needed to stitch traffic in the left to right direction. The SRMC provides mapping for traffic that flows from right to left.
- Left to right Traffic Flow: The Segment Routing Mapping Server
The SRMS facilitates LSP stitching between the SR and LDP domains. The server maps LDP FECs into SR node IDs. You configure the LDP FECs to be mapped at the [edit routing-options source-packet-routing] hierarchy level. Normally you need to map all LDP node loopback addresses for full connectivity. As shown below, you can map contiguous prefixes in a single range statement. If the LDP node loopbacks are not contiguous, you need to define multiple mapping statements.
You apply the SRMS mapping configuration at the [edit protocols ospf] or [edit protocols isis] hierarchy level, depending on which IGP is being used. Note that both the SR and LDP nodes share a common, single area/level, IGP routing domain.
The SRMS generates an extended prefix list LSA (or LSP in the case of ISIS). The information in this LSA allows the SR nodes to map LDP prefixes (FECs) to SR node IDs. The mapped routes for the LDP prefixes are installed in the inet.3 and mpls.0 routing tables of the SR nodes to facilitate LSP ingress and stitching operations for traffic in the left to right direction.
The extended LSA (or LSP) is flooded throughout the (single) IGP area. This means you are free to place the SRMS configuration on any router in the SR domain. The SRMS node does not have to run LDP.
- Right-to-Left Traffic Flow: The Segment Routing Mapping Client
To interoperate in the right-to-left direction, that is, from the LDP island to the SR island, you simply enable segment routing mapping client functionality on a node that speaks both SR and LDP. In our example that is R4. You activate SRMC functionality with the sr-mapping-client statement at the [edit protocols ldp] hierarchy level.

The SRMC configuration automatically activates an LDP egress policy to advertise the SR domain's node and prefix SIDs as LDP egress FECs. This provides the LDP nodes with LSP reachability to the nodes in the SR domain.

Note: The SRMC function must be configured on a router that attaches to both the SR and LDP domains. If desired, the same node can also function as the SRMS.
Segment Routing to LDP Interoperability Using OSPF
Referring to Figure 1, assume that device R2 (in the segment routing network) is the SRMS.
-
Define the SRMS function:
[edit routing-options source-packet-routing]
user@R2# set mapping-server-entry ospf-mapping-server prefix-segment-range ldp-lo0s start-prefix 192.168.0.5
user@R2# set mapping-server-entry ospf-mapping-server prefix-segment-range ldp-lo0s start-index 1000
user@R2# set mapping-server-entry ospf-mapping-server prefix-segment-range ldp-lo0s size 2

This configuration creates a mapping block for both of the LDP device loopback addresses in the sample topology. The initial segment ID (SID) index mapped to R5's loopback is 1000. Specifying size 2 results in SID index 1001 being mapped to R6's loopback address.

Note: The IP address used as the start-prefix is a loopback address of a device in the LDP network (R5, in this example). For full connectivity you must map all the loopback addresses of the LDP routers into the SR domain. If the loopback addresses are contiguous, you can do this with a single prefix-segment-range statement. Non-contiguous loopbacks require the definition of multiple prefix mapping statements.

Our example uses contiguous loopbacks, so a single prefix-segment-range is shown above. Here's an example of multiple mappings to support the case of two LDP nodes with non-contiguous loopback addressing:

[edit routing-options source-packet-routing]
user@R2# show
mapping-server-entry map-server-name {
    prefix-segment-range lo1 {
        start-prefix 192.168.0.5/32;
        start-index 1000;
        size 1;
    }
    prefix-segment-range lo2 {
        start-prefix 192.168.0.10/32;
        start-index 2000;
        size 1;
    }
}
Next, configure OSPF support for the extended LSA used to flood the mapped prefixes.
[edit protocols]
user@R2# set ospf source-packet-routing mapping-server ospf-mapping-server

Once the mapping server configuration is committed on device R2, the extended prefix range TLV is flooded across the OSPF area. The devices capable of segment routing (R1, R2, and R3) install OSPF segment routing routes for the specified loopback addresses (R5's and R6's in this example), each with a segment ID (SID) index. The SID index is also updated in the mpls.0 routing table by the segment routing devices.
Enable SRMC functionality. For our sample topology you must enable SRMC functionality on R4.
[edit protocols]
user@R4# set ldp sr-mapping-client

Once the mapping client configuration is committed on device R4, the SR node IDs and label blocks are advertised as egress FECs to router R5, which then re-advertises them to R6.
Support for stitching segment routing and LDP next-hops with OSPF began in Junos OS 19.1R1.
Unsupported Features and Functionality for Segment Routing Interoperability with LDP Using OSPF
-
Prefix conflicts are detected only at the SRMS. When there is a prefix range conflict, the prefix SID from the lower router ID prevails. In such cases, a system log error message, RPD_OSPF_PFX_SID_RANGE_CONFLICT, is generated.
IPv6 prefixes are not supported.
-
Flooding of the OSPF Extended Prefix Opaque LSA across AS boundaries (inter-AS) is not supported.
-
Inter-area LDP mapping server functionality is not supported.
-
ABR functionality of Extended Prefix Opaque LSA is not supported.
-
ASBR functionality of Extended Prefix Opaque LSA is not supported.
-
The segment routing mapping server Preference TLV is not supported.
Interoperability of Segment Routing with LDP Using ISIS
Referring to Figure 1, assume that device R2 (in the segment routing network) is the SRMS. The following configuration is added for the mapping function:
-
Define the SRMS function:
[edit routing-options source-packet-routing]
user@R2# set mapping-server-entry isis-mapping-server prefix-segment-range ldp-lo0s start-prefix 192.168.0.5
user@R2# set mapping-server-entry isis-mapping-server prefix-segment-range ldp-lo0s start-index 1000
user@R2# set mapping-server-entry isis-mapping-server prefix-segment-range ldp-lo0s size 2

This configuration creates a mapping block for both of the LDP device loopback addresses in the sample topology. The initial segment ID (SID) index mapped to R5's loopback is 1000. Specifying size 2 results in SID index 1001 being mapped to R6's loopback address.

Note: The IP address used as the start-prefix is a loopback address of a device in the LDP network (R5, in this example). For full connectivity you must map all the loopback addresses of the LDP routers into the SR domain. If the loopback addresses are contiguous, you can do this with a single prefix-segment-range statement. Non-contiguous loopbacks require the definition of multiple mapping statements.

Our example uses contiguous loopbacks, so a single prefix-segment-range is shown above. Here is an example of prefix mappings to handle the case of two LDP routers with non-contiguous loopback addressing:

[edit routing-options source-packet-routing]
user@R2# show
mapping-server-entry map-server-name {
    prefix-segment-range lo1 {
        start-prefix 192.168.0.5/32;
        start-index 1000;
        size 1;
    }
    prefix-segment-range lo2 {
        start-prefix 192.168.0.10/32;
        start-index 2000;
        size 1;
    }
}
Next, configure ISIS support for the extended LSP used to flood the mapped prefixes.
[edit protocols]
user@R2# set isis source-packet-routing mapping-server isis-mapping-server

Once the mapping server configuration is committed on device R2, the extended prefix range TLV is flooded across the ISIS level. The devices capable of segment routing (R1, R2, and R3) install ISIS segment routing routes for the specified loopback addresses (R5's and R6's in this example), each with a segment ID (SID) index. The SID index is also updated in the mpls.0 routing table by the segment routing devices.
Enable SRMC functionality. For our sample topology you must enable SRMC functionality on R4.
[edit protocols]
user@R4# set ldp sr-mapping-client

Once the mapping client configuration is committed on device R4, the SR node IDs and label blocks are advertised as egress FECs to router R5, and from there on to R6.
Support for stitching segment routing and LDP next-hops with ISIS began in Junos OS 17.4R1.
Unsupported Features and Functionality for Interoperability of Segment Routing with LDP Using ISIS
-
Penultimate-hop popping behavior for the label binding TLV is not supported.
-
Advertising a range of prefixes in the label binding TLV is not supported.
-
Segment Routing Conflict Resolution is not supported.
-
LDP traffic statistics do not work.
-
Nonstop active routing (NSR) and graceful Routing Engine switchover (GRES) are not supported.
-
ISIS inter-level routing is not supported.
-
RFC 7794, IS-IS Prefix Attributes for Extended IPv4 and IPv6 Reachability, is not supported.
-
Redistributing an LDP route as a prefix SID at the stitching node is not supported.
Miscellaneous LDP Properties
The following sections describe how to configure a number of miscellaneous LDP properties.
- Configure LDP to Use the IGP Route Metric
- Prevent Addition of Ingress Routes to the inet.0 Routing Table
- Multiple-Instance LDP and Carrier-of-Carriers VPNs
- Configure MPLS and LDP to Pop the Label on the Ultimate-Hop Router
- Enable LDP over RSVP-Established LSPs
- Enable LDP over RSVP-Established LSPs in Heterogeneous Networks
- Configure the TCP MD5 Signature for LDP Sessions
- Configuring LDP Session Protection
- Disabling SNMP Traps for LDP
- Configuring LDP Synchronization with the IGP on LDP Links
- Configuring LDP Synchronization with the IGP on the Router
- Configuring the Label Withdrawal Timer
- Ignoring the LDP Subnet Check
Configure LDP to Use the IGP Route Metric
Use the track-igp-metric statement if you want the interior gateway protocol (IGP) route metric to be used for the LDP routes instead of the default LDP route metric of 1.
To use the IGP route metric, include the
track-igp-metric
statement:
track-igp-metric;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
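As a minimal sketch, the statement can be placed at the [edit protocols ldp] hierarchy level (one of the levels where LDP statements are commonly configured; verify against the statement summary for your release):

```
[edit]
protocols {
    ldp {
        track-igp-metric;
    }
}
```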
Prevent Addition of Ingress Routes to the inet.0 Routing Table
By configuring the no-forwarding statement, you can prevent ingress routes from being added to the inet.0 routing table; the routes are added only to the inet.3 routing table, even if you enabled the traffic-engineering bgp-igp statement at the [edit protocols mpls] or the [edit logical-systems logical-system-name protocols mpls] hierarchy level. By default, the no-forwarding statement is disabled.
ACX Series routers do not support the [edit logical-systems]
hierarchy level.
To omit ingress routes from the inet.0 routing table, include the
no-forwarding
statement:
no-forwarding;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
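For example, a minimal sketch placing the statement at the [edit protocols ldp] hierarchy level (confirm supported levels in the statement summary):

```
[edit]
protocols {
    ldp {
        no-forwarding;
    }
}
```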
Multiple-Instance LDP and Carrier-of-Carriers VPNs
By configuring multiple LDP routing instances, you can use LDP to advertise labels in a carrier-of-carriers VPN from a service provider's provider edge (PE) router to a customer carrier's customer edge (CE) router. This is especially useful when the carrier customer is a basic Internet service provider (ISP) and wants to restrict full Internet routes to its PE routers. By using LDP instead of BGP, the carrier customer shields its other internal routers from the Internet. Multiple-instance LDP is also useful when a carrier customer wants to provide Layer 2 or Layer 3 VPN services to its customers.
For an example of how to configure multiple LDP routing instances for carrier-of-carriers VPNs, see the Multiple Instances for Label Distribution Protocol User Guide.
Configure MPLS and LDP to Pop the Label on the Ultimate-Hop Router
The default advertised label is label 3 (Implicit Null label). If label 3 is advertised, the penultimate-hop router removes the label and sends the packet to the egress router. If ultimate-hop popping is enabled, label 0 (IPv4 Explicit Null label) is advertised. Ultimate-hop popping ensures that any packets traversing an MPLS network include a label.
To configure ultimate-hop popping, include the
explicit-null
statement:
explicit-null;
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
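For example, a minimal sketch enabling ultimate-hop popping at the [edit protocols ldp] hierarchy level (confirm supported levels in the statement summary):

```
[edit]
protocols {
    ldp {
        explicit-null;
    }
}
```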
Juniper Networks routers queue packets based on the incoming label. Routers from other vendors might queue packets differently. Keep this in mind when working with networks containing routers from multiple vendors.
For more information about labels, see MPLS Label Overview and MPLS Label Allocation.
Enable LDP over RSVP-Established LSPs
You can run LDP over LSPs established by RSVP, effectively tunneling the
LDP-established LSP through the one established by RSVP. To do so, enable LDP on the
lo0.0 interface (see Enabling and
Disabling LDP). You must also configure the LSPs over which you want LDP
to operate by including the ldp-tunneling statement at the
[edit protocols mpls
label-switched-path
lsp-name] hierarchy level:
[edit]
protocols {
mpls {
label-switched-path lsp-name {
from source;
to destination;
ldp-tunneling;
}
}
}
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
LDP can be tunneled over an RSVP session that has link protection enabled. Starting in Junos OS Release 21.1R1, command output that displays details about the LDP tunneled route shows both the primary and bypass LSP next hops. In earlier Junos OS releases, the bypass LSP next-hop field displayed the next hop for the primary LSP.
Enable LDP over RSVP-Established LSPs in Heterogeneous Networks
Some other vendors use an OSPF metric of 1 for the loopback address. Juniper Networks routers use an OSPF metric of 0 for the loopback address. This might require that you manually configure the RSVP metric when deploying LDP tunneling over RSVP LSPs in heterogeneous networks.
When a Juniper Networks router is linked to another vendor's router through an RSVP tunnel, and LDP tunneling is also enabled, by default the Juniper Networks router might not use the RSVP tunnel to route traffic to LDP destinations downstream of the other vendor's egress router if the metric of the RSVP path is 1 greater than that of the physical OSPF path.
To ensure that LDP tunneling functions properly in heterogeneous networks, you can
configure OSPF to ignore the RSVP LSP metric by including the
ignore-lsp-metrics
statement:
ignore-lsp-metrics;
You can configure this statement at the following hierarchy levels:
-
[edit protocols ospf traffic-engineering shortcuts] -
[edit logical-systems logical-system-name protocols ospf traffic-engineering shortcuts]
ACX Series routers do not support the [edit logical-systems]
hierarchy level.
To enable LDP over RSVP LSPs, you must also complete the procedure in Enable LDP over RSVP-Established LSPs.
Configure the TCP MD5 Signature for LDP Sessions
You can configure an MD5 signature for an LDP TCP connection to protect against the introduction of spoofed TCP segments into LDP session connection streams. For more information about TCP authentication, see TCP. For how to use TCP Authentication Option (TCP-AO) instead of TCP MD5, see TCP Authentication Option (TCP-AO).
A router using the MD5 signature option is configured with a password for each peer for which authentication is required. The password is stored encrypted.
LDP hello adjacencies can still be created even when peering interfaces are configured with different security signatures. However, the TCP session cannot be authenticated and is never established.
You can configure Hashed Message Authentication Code (HMAC) and MD5 authentication for LDP sessions as a per-session configuration or a subnet match (that is, longest prefix match) configuration. The support for subnet-match authentication provides flexibility in configuring authentication for automatically targeted LDP (TLDP) sessions. This makes the deployment of remote loop-free alternate (LFA) and FEC 129 pseudowires easy.
To configure an MD5 signature for an LDP TCP connection, include the
authentication-key
statement as part of the session group:
[edit protocols ldp]
session-group prefix-length {
authentication-key md5-authentication-key;
}
Use the session-group statement to configure the address for the
remote end of the LDP session.
The md5-authentication-key, or password, in the configuration can be up to 69 characters long and can include any ASCII characters. If you include spaces, enclose all characters in quotation marks.
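As an illustrative sketch (the subnet and password are hypothetical), the following authenticates all LDP sessions whose remote address falls within 10.1.1.0/24:

```
[edit protocols ldp]
session-group 10.1.1.0/24 {
    authentication-key "example md5 secret";
}
```

The key is enclosed in quotation marks here because it contains spaces.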
You can also configure an authentication key update mechanism for the LDP routing protocol. This mechanism allows you to update authentication keys without interrupting associated routing and signaling protocols such as Open Shortest Path First (OSPF) and Resource Reservation Setup Protocol (RSVP).
To configure the authentication key update mechanism, include the
key-chain statement at the [edit security
authentication-key-chains] hierarchy level, and specify the
key option to create a keychain consisting of several
authentication keys.
[edit security authentication-key-chains]
key-chain key-chain-name {
key key {
secret secret-data;
start-time yyyy-mm-dd.hh:mm:ss;
}
}
To configure the authentication key update mechanism for the LDP routing protocol,
include the
authentication-key-chain
statement at the [edit protocols ldp] hierarchy level to
associate the protocol with the [edit security authentication-key-chains] authentication keys. You must also configure the authentication algorithm by including the authentication-algorithm algorithm statement at the [edit protocols ldp] hierarchy level.
[edit protocols ldp]
group group-name {
neighbor address {
authentication-algorithm algorithm;
authentication-key-chain key-chain-name;
}
}
For more information about the authentication key update feature, see Configuring the Authentication Key Update Mechanism for BGP and LDP Routing Protocols.
Configuring LDP Session Protection
An LDP session is normally created between a pair of routers that are connected by one or more links. The routers form one hello adjacency for every link that connects them and associate all the adjacencies with the corresponding LDP session. When the last hello adjacency for an LDP session goes away, the LDP session is terminated. You might want to modify this behavior to prevent an LDP session from being unnecessarily terminated and reestablished.
You can configure the Junos OS to leave the LDP session between two routers up even
if there are no hello adjacencies on the links connecting the two routers by
configuring the session-protection statement. You can optionally
specify a time in seconds using the timeout option. The session
remains up for the duration specified as long as the routers maintain IP network
connectivity.
session-protection {
    timeout seconds;
}
For a list of hierarchy levels at which you can include this statement, see the statement summary section.
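For example, to keep the session up for 300 seconds (an illustrative value) after the last hello adjacency is lost:

```
[edit protocols ldp]
session-protection {
    timeout 300;
}
```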
Disabling SNMP Traps for LDP
Whenever an LDP LSP makes a transition from up to down, or down to up, the router sends an SNMP trap. However, it is possible to disable the LDP SNMP traps on a router, logical system, or routing instance.
For information about the LDP SNMP traps and the proprietary LDP MIB, see the SNMP MIB Explorer.
To disable SNMP traps for LDP, specify the trap disable option for
the log-updown statement:
log-updown {
    trap disable;
}
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
Configuring LDP Synchronization with the IGP on LDP Links
LDP is a protocol for distributing labels in non-traffic-engineered applications. Labels are distributed along the best path determined by the IGP. If synchronization between LDP and the IGP is not maintained, the LSP goes down. When LDP is not fully operational on a given link (a session is not established and labels are not exchanged), the IGP advertises the link with the maximum cost metric. The link is not preferred but remains in the network topology.
LDP synchronization is supported only on active point-to-point interfaces and LAN interfaces configured as point-to-point under the IGP. LDP synchronization is not supported during graceful restart.
To advertise the maximum cost metric until LDP is operational for synchronization,
include the ldp-synchronization statement:
ldp-synchronization {
    disable;
    hold-time seconds;
}
To disable synchronization, include the disable statement. To
configure the time period to advertise the maximum cost metric for a link that is
not fully operational, include the hold-time statement.
For a list of hierarchy levels at which you can configure this statement, see the statement summary section for this statement.
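As a sketch, assuming OSPF as the IGP and a hypothetical interface name, LDP synchronization with a 30-second hold time might be configured under the IGP interface (verify the hierarchy in the statement summary):

```
[edit protocols ospf area 0.0.0.0]
interface ge-0/0/1.0 {
    ldp-synchronization {
        hold-time 30;
    }
}
```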
Configuring LDP Synchronization with the IGP on the Router
You can configure the time LDP waits before informing the IGP that the LDP neighbor and session for an interface are operational. For large networks with numerous FECs, you might need to configure a longer value to allow enough time for the LDP label databases to be exchanged.
To configure the time LDP waits before informing the IGP that the LDP neighbor and session are operational, include the igp-synchronization statement and specify a time in seconds for the holddown-interval option:
igp-synchronization holddown-interval seconds;
For a list of hierarchy levels at which you can configure this statement, see the statement summary section for this statement.
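For example, to have LDP wait 60 seconds (an illustrative value) before declaring the neighbor and session operational to the IGP:

```
[edit protocols ldp]
igp-synchronization holddown-interval 60;
```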
Configuring the Label Withdrawal Timer
The label withdrawal timer delays sending a label withdrawal message for a FEC to a
neighbor. When an IGP link to a neighbor fails, the label associated with the FEC
has to be withdrawn from all the upstream routers if the neighbor is the next hop
for the FEC. After the IGP converges and a label is received from a new next hop,
the label is readvertised to all the upstream routers. This is the typical network
behavior. By delaying label withdrawal by a small amount of time (for example, until the IGP converges and the router receives a new label for the FEC from the downstream next hop), the label withdrawal followed shortly by a new label mapping can be avoided. The label-withdrawal-delay statement allows you to configure this delay time. By default, the delay is 60 seconds.
If the router receives the new label before the timer runs out, the label withdrawal timer is canceled. However, if the timer runs out, the label for the FEC is withdrawn from all of the upstream routers.
By default, LDP waits for 60 seconds before withdrawing labels to avoid resignaling
LSPs multiple times while the IGP is reconverging. To configure the label withdrawal
delay time in seconds, include the label-withdrawal-delay
statement:
label-withdrawal-delay seconds;
For a list of hierarchy levels at which you can configure this statement, see the statement summary section for this statement.
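For example, to shorten the delay to 30 seconds (an illustrative value):

```
[edit protocols ldp]
label-withdrawal-delay 30;
```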
Ignoring the LDP Subnet Check
In Junos OS Release 8.4 and later releases, an LDP source address subnet check is performed during the neighbor establishment procedure. The source address in the LDP link hello packet is matched against the interface address. This causes an interoperability issue with some other vendors’ equipment.
To disable the subnet check, include the allow-subnet-mismatch
statement:
allow-subnet-mismatch;
This statement can be included at the following hierarchy levels:
-
[edit protocols ldp interface interface-name] -
[edit logical-systems logical-system-name protocols ldp interface interface-name]
ACX Series routers do not support the [edit logical-systems] hierarchy level.
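For example, to disable the subnet check on a single LDP interface (the interface name is hypothetical):

```
[edit protocols ldp]
interface ge-0/0/0.0 {
    allow-subnet-mismatch;
}
```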
Configuring LDP LSP Traceroute
You can trace the route followed by an LDP-signaled LSP. LDP LSP traceroute is based on RFC 4379, Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures. This feature allows you to periodically trace all paths in a FEC. The FEC topology information is stored in a database accessible from the CLI.
A topology change does not automatically trigger a trace of an LDP LSP. However, you can manually initiate a traceroute. If the traceroute request is for an FEC that is currently in the database, the contents of the database are updated with the results.
The periodic traceroute feature applies to all FECs specified
by the oam statement configured at the [edit protocols
ldp] hierarchy level. To configure periodic LDP LSP traceroute,
include the periodic-traceroute statement:
periodic-traceroute {
    disable;
    exp exp-value;
    fanout fanout-value;
    frequency minutes;
    paths number-of-paths;
    retries retry-attempts;
    source address;
    ttl ttl-value;
    wait seconds;
}
For a list of hierarchy levels at which you can configure this statement, see the statement summary section for this statement.
You can configure the periodic-traceroute statement
by itself or with any of the following options:
- exp—Specify the class of service to use when sending probes.
- fanout—Specify the maximum number of next hops to search per node.
- frequency—Specify the interval between traceroute attempts.
- paths—Specify the maximum number of paths to search.
- retries—Specify the number of attempts to send a probe to a specific node before giving up.
- source—Specify the IPv4 source address to use when sending probes.
- ttl—Specify the maximum time-to-live value. Nodes that are beyond this value are not traced.
- wait—Specify the wait interval before resending a probe packet.
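A sketch combining a few of these options (the values are illustrative, and placement under the oam statement follows the description above; verify the exact hierarchy in the statement summary):

```
[edit protocols ldp]
oam {
    periodic-traceroute {
        frequency 15;
        retries 3;
        ttl 255;
    }
}
```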
Collecting LDP Statistics
LDP traffic statistics show the volume of traffic that has passed through a particular FEC on a router.
When you configure the traffic-statistics statement
at the [edit protocols ldp] hierarchy level, the LDP traffic
statistics are gathered periodically and written to a file. You can
configure how often statistics are collected (in seconds) by
using the interval option. The default collection interval
is 5 minutes. You must configure an LDP statistics file; otherwise,
LDP traffic statistics are not gathered. If the LSP goes down, the
LDP statistics are reset.
To collect LDP traffic statistics, include the traffic-statistics statement:
traffic-statistics {
    file filename <files number> <size size> <world-readable | no-world-readable>;
    interval interval;
    no-penultimate-hop;
}
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
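For example, to collect statistics every 300 seconds into a hypothetical file name, keeping three 1-MB files:

```
[edit protocols ldp]
traffic-statistics {
    file ldp-stats size 1m files 3;
    interval 300;
}
```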
This section includes the following topics:
- LDP Statistics Output
- Disabling LDP Statistics on the Penultimate-Hop Router
- LDP Statistics Limitations
LDP Statistics Output
The following sample output is from an LDP statistics file:
FEC Type Packets Bytes Shared
10.255.350.448/32 Transit 0 0 No
Ingress 0 0 No
10.255.350.450/32 Transit 0 0 Yes
Ingress 0 0 No
10.255.350.451/32 Transit 0 0 No
Ingress 0 0 No
172.16.220.1/32 Transit 0 0 Yes
Ingress 0 0 No
172.16.220.2/32 Transit 0 0 Yes
Ingress 0 0 No
172.16.220.3/32 Transit 0 0 Yes
Ingress 0 0 No
May 28 15:02:05, read 12 statistics in 00:00:00 seconds
The LDP statistics file includes the following columns of data:
- FEC—FEC for which LDP traffic statistics are collected.
- Type—Type of traffic originating from a router, either Ingress (originating from this router) or Transit (forwarded through this router).
- Packets—Number of packets passed by the FEC since its LSP came up.
- Bytes—Number of bytes of data passed by the FEC since its LSP came up.
- Shared—A Yes value indicates that several prefixes are bound to the same label (for example, when several prefixes are advertised with an egress policy). The LDP traffic statistics for this case apply to all the prefixes and should be treated as such.
- read—This number (which appears next to the date and time) might differ from the actual number of the statistics displayed. Some of the statistics are summarized before being displayed.
Disabling LDP Statistics on the Penultimate-Hop Router
Gathering LDP traffic statistics at the penultimate-hop router can consume excessive system resources, next-hop route entries in particular.
This problem is exacerbated if you have configured the deaggregate statement in addition to the traffic-statistics statement.
For routers reaching their limit of next-hop route usage, we recommend
configuring the no-penultimate-hop option for the traffic-statistics statement:
traffic-statistics {
    no-penultimate-hop;
}
For a list of hierarchy levels at which you can configure the traffic-statistics statement, see the statement summary section
for this statement.
When you configure the no-penultimate-hop option,
no statistics are available for the FECs that are the penultimate
hop for this router.
Whenever you include or remove this option from the configuration, the LDP sessions are taken down and then restarted.
The following sample output is from an LDP statistics file showing
routers on which the no-penultimate-hop option is configured:
FEC Type Packets Bytes Shared
10.255.245.218/32 Transit 0 0 No
Ingress 4 246 No
10.255.245.221/32 Transit statistics disabled
Ingress statistics disabled
192.168.1.0/24 Transit statistics disabled
Ingress statistics disabled
192.168.3.0/24 Transit statistics disabled
Ingress statistics disabled
LDP Statistics Limitations
The following are issues related to collecting LDP statistics
by configuring the traffic-statistics statement:
You cannot clear the LDP statistics.
If you shorten the specified interval, a new LDP statistics request is issued only if the statistics timer expires later than the new interval.
A new LDP statistics collection operation cannot start until the previous one has finished. If the interval is short or if the number of LDP statistics is large, the time gap between the two statistics collections might be longer than the interval.
When an LSP goes down, the LDP statistics are reset.
Tracing LDP Protocol Traffic
The following sections describe how to configure the trace options to examine LDP protocol traffic:
- Tracing LDP Protocol Traffic at the Protocol and Routing Instance Levels
- Tracing LDP Protocol Traffic Within FECs
- Examples: Tracing LDP Protocol Traffic
Tracing LDP Protocol Traffic at the Protocol and Routing Instance Levels
To trace LDP protocol traffic, you can specify options in the
global traceoptions statement at the [edit routing-options] hierarchy level, and you can specify LDP-specific options by including
the traceoptions statement:
traceoptions {
    file filename <files number> <size size> <world-readable | no-world-readable>;
    flag flag <flag-modifier> <disable>;
}
For a list of hierarchy levels at which you can include this statement, see the statement summary section for this statement.
Use the file statement to specify the name of the
file that receives the output of the tracing operation. All files
are placed in the directory /var/log. We recommend that you place
LDP-tracing output in the file ldp-log.
The following trace flags display the operations associated with the sending and receiving of various LDP messages:

- address—Trace the operation of address and address withdrawal messages.
- binding—Trace label-binding operations.
- error—Trace error conditions.
- event—Trace protocol events.
- initialization—Trace the operation of initialization messages.
- label—Trace the operation of label request, label map, label withdrawal, and label release messages.
- notification—Trace the operation of notification messages.
- packets—Trace the operation of address, address withdrawal, initialization, label request, label map, label withdrawal, label release, notification, and periodic messages. This modifier is equivalent to setting the address, initialization, label, notification, and periodic modifiers. You can also configure the filter flag modifier with the match-on address sub-option for the packets flag, which allows you to trace based on the source and destination addresses of the packets.
- path—Trace label-switched-path operations.
- periodic—Trace the operation of hello and keepalive messages.
- route—Trace the operation of route messages.
- state—Trace protocol state transitions.
Tracing LDP Protocol Traffic Within FECs
LDP associates a forwarding equivalence class (FEC) with each LSP it creates. The FEC associated with an LSP specifies which packets are mapped to that LSP. LSPs are extended through a network as each router chooses the label advertised by the next hop for the FEC and splices it to the label it advertises to all other routers.
You can trace LDP protocol traffic within a specific FEC and
filter LDP trace statements based on an FEC. This is useful when you
want to trace or troubleshoot LDP protocol traffic associated with
an FEC. The following trace flags are available for this purpose: route, path, and binding.
The following example illustrates how you might configure the
LDP traceoptions statement to filter LDP trace statements
based on an FEC:
[edit protocols ldp traceoptions]
set flag route filter match-on fec policy "filter-policy-for-ldp-fec"
This feature has the following limitations:
The filtering capability is only available for FECs composed of IP version 4 (IPv4) prefixes.
Layer 2 circuit FECs cannot be filtered.
When you configure both route tracing and filtering, MPLS routes are not displayed (they are blocked by the filter).
Filtering is determined by the policy and the configured value for the match-on option. When configuring the policy, be sure that the default behavior is always reject.

The only match-on option is fec. Consequently, the only type of policy you should include is a route-filter policy.
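The route-filter policy itself is not shown in this section. A hypothetical policy matching a range of FEC prefixes, with the recommended reject default, might look like this:

```
[edit policy-options]
policy-statement filter-policy-for-ldp-fec {
    term match-fecs {
        from {
            route-filter 10.255.245.0/24 orlonger;
        }
        then accept;
    }
    term default {
        then reject;
    }
}
```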
Examples: Tracing LDP Protocol Traffic
Trace LDP path messages in detail:
[edit]
protocols {
ldp {
traceoptions {
file ldp size 10m files 5;
flag path;
}
}
}
Trace all LDP outgoing messages:
[edit]
protocols {
ldp {
traceoptions {
file ldp size 10m files 5;
flag packets;
}
}
}
Trace all LDP error conditions:
[edit]
protocols {
ldp {
traceoptions {
file ldp size 10m files 5;
flag error;
}
}
}
Trace all LDP incoming messages and all label-binding operations:
[edit]
protocols {
ldp {
traceoptions {
file ldp size 10m files 5 world-readable;
flag packets receive;
flag binding;
}
interface all {
}
}
}
Trace LDP protocol traffic for an FEC associated with the LSP:
[edit]
protocols {
ldp {
traceoptions {
flag route filter match-on fec policy filter-policy-for-ldp-fec;
}
}
}