
MPLS Traffic Engineering Configuration

MPLS and Traffic Engineering

Traffic engineering allows you to control the path that data packets follow, bypassing the standard routing model, which uses routing tables. Traffic engineering moves flows from congested links to alternate links that would not be selected by the automatically computed destination-based shortest path. With traffic engineering, you can:

  • Make more efficient use of expensive long-haul fibers.

  • Control how traffic is rerouted in the face of single or multiple failures.

  • Classify critical and regular traffic on a per-path basis.

The core of the traffic engineering design is based on building label-switched paths (LSPs) among routers. An LSP is connection-oriented, like a virtual circuit in Frame Relay or ATM. LSPs are not reliable: Packets entering an LSP do not have delivery guarantees, although preferential treatment is possible. LSPs also are similar to unidirectional tunnels in that packets entering a path are encapsulated in an envelope and switched across the entire path without being touched by intermediate nodes. LSPs provide fine-grained control over how packets are forwarded in a network. To provide reliability, an LSP can use a set of primary and secondary paths.

LSPs can be configured for BGP traffic only (traffic whose destination is outside of an autonomous system [AS]). In this case, traffic within the AS is not affected by the presence of LSPs. LSPs can also be configured for both BGP and interior gateway protocol (IGP) traffic; therefore, both intra-AS and inter-AS traffic is affected by the LSPs.

MPLS Traffic Engineering and Signaling Protocols Overview

Traffic engineering facilitates efficient and reliable network operations while simultaneously optimizing network resources and traffic performance. Traffic engineering provides the ability to move traffic flow away from the shortest path selected by the interior gateway protocol (IGP) to a potentially less congested physical path across a network. To support traffic engineering, the network must be able to do the following in addition to source routing:

  • Compute a path at the source by taking into account all the constraints, such as bandwidth and administrative requirements.

  • Distribute the information about network topology and link attributes throughout the network.

  • Reserve network resources and modify link attributes.

When transit traffic is routed through an IP network, MPLS is often used to engineer its passage. Although the exact path through the transit network is of little importance to either the sender or the receiver of the traffic, network administrators often want to route traffic more efficiently between certain source and destination address pairs. By adding a short label with specific routing instructions to each packet, MPLS switches packets from router to router through the network rather than forwarding packets based on next-hop lookups. The resulting routes are called label-switched paths (LSPs). LSPs control the passage of traffic through the network and speed traffic forwarding.

You can create LSPs manually, or through the use of signaling protocols. Signaling protocols are used within an MPLS environment to establish LSPs for traffic across a transit network. Junos OS supports two signaling protocols—LDP and the Resource Reservation Protocol (RSVP).

MPLS traffic engineering uses the following components:

  • MPLS LSPs for packet forwarding

  • IGP extensions for distributing information about the network topology and link attributes

  • Constrained Shortest Path First (CSPF) for path computation and path selection

  • RSVP extensions to establish the forwarding state along the path and to reserve resources along the path

Junos OS also supports traffic engineering across different OSPF areas.

Traffic Engineering Capabilities

The task of mapping traffic flows onto an existing physical topology is called traffic engineering. Traffic engineering provides the ability to move traffic flow away from the shortest path selected by the interior gateway protocol (IGP) and onto a potentially less congested physical path across a network.

Traffic engineering provides the capabilities to do the following:

  • Route primary paths around known bottlenecks or points of congestion in the network.

  • Provide precise control over how traffic is rerouted when the primary path is faced with single or multiple failures.

  • Provide more efficient use of available aggregate bandwidth and long-haul fiber by ensuring that subsets of the network do not become overutilized while other subsets of the network along potential alternate paths are underutilized.

  • Maximize operational efficiency.

  • Enhance the traffic-oriented performance characteristics of the network by minimizing packet loss, minimizing prolonged periods of congestion, and maximizing throughput.

  • Enhance statistically bound performance characteristics of the network (such as loss ratio, delay variation, and transfer delay) required to support a multiservices Internet.

Components of Traffic Engineering

In the Junos® operating system (OS), traffic engineering is implemented with MPLS and RSVP. Traffic engineering is composed of four functional components:

  • Packet forwarding component

  • Information distribution component

  • Path selection component

  • Signaling component

Configuring Traffic Engineering for LSPs

When you configure an LSP, a host route (a 32-bit mask) is installed in the ingress router toward the egress router; the address of the host route is the destination address of the LSP. The bgp option for the traffic-engineering statement at the [edit protocols mpls] hierarchy level is enabled by default (you can also explicitly configure the bgp option), allowing only BGP to use LSPs in its route calculations. The other traffic-engineering statement options allow you to alter this behavior in the master routing instance. This functionality is not available for specific routing instances. Also, you can enable only one of the traffic-engineering statement options (bgp, bgp-igp, bgp-igp-both-ribs, or mpls-forwarding) at a time.
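Explicitly configuring the default looks like the following minimal sketch; because bgp is already the default, this configuration does not change forwarding behavior:

```
[edit protocols mpls]
traffic-engineering bgp;
```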

Note

Enabling or disabling any of the traffic-engineering statement options causes all the MPLS routes to be removed and then reinserted into the routing tables.

You can configure OSPF and traffic engineering to advertise the LSP metric in summary link-state advertisements (LSAs) as described in the section Advertising the LSP Metric in Summary LSAs.

The following sections describe how to configure traffic engineering for LSPs:

Using LSPs for Both BGP and IGP Traffic Forwarding

You can configure BGP and the IGPs to use LSPs for forwarding traffic destined for egress routers by including the bgp-igp option for the traffic-engineering statement. The bgp-igp option causes all inet.3 routes to be moved to the inet.0 routing table.

On the ingress router, include the bgp-igp option for the traffic-engineering statement:
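A minimal sketch of the statement at the [edit protocols mpls] hierarchy level:

```
[edit protocols mpls]
traffic-engineering bgp-igp;
```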

You can include this statement at the following hierarchy levels:

  • [edit protocols mpls]

  • [edit logical-systems logical-system-name protocols mpls]

    Note

    The bgp-igp option for the traffic-engineering statement cannot be configured for VPNs. VPNs require that routes be in the inet.3 routing table.

Using LSPs for Forwarding in Virtual Private Networks

VPNs require that routes remain in the inet.3 routing table to function properly. For VPNs, configure the bgp-igp-both-ribs option of the traffic-engineering statement to cause BGP and the IGPs to use LSPs for forwarding traffic destined for egress routers. The bgp-igp-both-ribs option installs the ingress routes in both the inet.0 routing table (for IPv4 unicast routes) and the inet.3 routing table (for MPLS path information).

On the ingress router, include the traffic-engineering bgp-igp-both-ribs statement:
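A minimal sketch of the statement at the [edit protocols mpls] hierarchy level:

```
[edit protocols mpls]
traffic-engineering bgp-igp-both-ribs;
```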

You can include this statement at the following hierarchy levels:

  • [edit protocols mpls]

  • [edit logical-systems logical-system-name protocols mpls]

When you use the bgp-igp-both-ribs statement, the routes from the inet.3 table get copied into the inet.0 table. The copied routes are LDP-signaled or RSVP-signaled, and are likely to have a lower preference than other routes in inet.0. Routes with a lower preference are more likely to be chosen as the active routes. This can be a problem because routing policies only act upon active routes. To prevent this problem, use the mpls-forwarding option instead.

Using RSVP and LDP Routes for Forwarding but Not Route Selection

If you configure the bgp-igp or bgp-igp-both-ribs options for the traffic-engineering statement, high-priority LSPs can supersede IGP routes in the inet.0 routing table. IGP routes might no longer be redistributed since they are no longer the active routes.

If you configure the mpls-forwarding option for the traffic-engineering statement, LSPs are used for forwarding but are excluded from route selection. These routes are added to both the inet.0 and inet.3 routing tables. LSPs in the inet.0 routing table are given a low preference when the active route is selected. However, LSPs in the inet.3 routing table are given a normal preference and are therefore used for selecting forwarding next hops.

When you activate the mpls-forwarding option, routes whose state is ForwardingOnly are preferred for forwarding even if their preference is lower than that of the currently active route. To examine the state of a route, execute a show route detail command.

To use LSPs for forwarding but exclude them from route selection, include the mpls-forwarding option for the traffic-engineering statement:
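A minimal sketch of the statement at the [edit protocols mpls] hierarchy level:

```
[edit protocols mpls]
traffic-engineering mpls-forwarding;
```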

You can include this statement at the following hierarchy levels:

  • [edit protocols mpls]

  • [edit logical-systems logical-system-name protocols mpls]

When you configure the mpls-forwarding option, IGP shortcut routes are copied to the inet.0 routing table only.

Unlike the bgp-igp-both-ribs option, the mpls-forwarding option allows you to use the LDP-signaled and RSVP-signaled routes for forwarding, and keep the BGP and IGP routes active for routing purposes so that routing policies can act upon them.

For example, suppose a router is running BGP and it has a BGP route of 10.10.10.1/32 that it needs to send to another BGP speaker. If you use the bgp-igp-both-ribs option, and your router also has a label-switched-path (LSP) to 10.10.10.1, the MPLS route for 10.10.10.1 becomes active in the inet.0 routing table. This prevents your router from advertising the 10.10.10.1 route to the other BGP router. On the other hand, if you use the mpls-forwarding option instead of the bgp-igp-both-ribs option, the 10.10.10.1/32 BGP route is advertised to the other BGP speaker, and the LSP is still used to forward traffic to the 10.10.10.1 destination.

Advertising the LSP Metric in Summary LSAs

You can configure MPLS and OSPF to treat an LSP as a link. This configuration allows other routers in the network to use this LSP. To accomplish this goal, you need to configure MPLS and OSPF traffic engineering to advertise the LSP metric in summary LSAs.

For MPLS, include the traffic-engineering bgp-igp and label-switched-path statements:
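For example, a sketch at the [edit protocols mpls] hierarchy level; the LSP name to-egress and the address 10.0.0.1 are placeholders:

```
[edit protocols mpls]
traffic-engineering bgp-igp;
label-switched-path to-egress {
    to 10.0.0.1;
}
```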

You can include these statements at the following hierarchy levels:

  • [edit protocols mpls]

  • [edit logical-systems logical-system-name protocols mpls]

For OSPF, include the lsp-metric-into-summary statement:
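A sketch of the statement in context at the [edit protocols ospf] hierarchy level:

```
[edit protocols ospf]
traffic-engineering {
    shortcuts {
        lsp-metric-into-summary;
    }
}
```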

You can include this statement at the following hierarchy levels:

  • [edit protocols ospf traffic-engineering shortcuts]

  • [edit logical-systems logical-system-name protocols ospf traffic-engineering shortcuts]

For more information about OSPF traffic engineering, see the Junos OS Routing Protocols Library.

Enabling Interarea Traffic Engineering

The Junos OS can signal a contiguous traffic-engineered LSP across multiple OSPF areas. The LSP signaling must be done using either nesting or contiguous signaling, as described in RFC 4206, Label-Switched Paths (LSP) Hierarchy with Generalized Multi-Protocol Label Switching (GMPLS) Traffic Engineering (TE). However, contiguous signaling support is limited to just basic signaling. Reoptimization is not supported with contiguous signaling.

The following describes some of the interarea traffic engineering features:

  • Interarea traffic engineering can be enabled when the loose-hop area border routers (ABRs) are configured on the ingress router using CSPF for the Explicit Route Object (ERO) calculation within an OSPF area. ERO expansion is completed on the ABRs.

  • Interarea traffic engineering can be enabled when CSPF is enabled, but without ABRs specified in the LSP configuration on the ingress router (ABRs can be automatically designated).

  • Differentiated Services (DiffServ) traffic engineering is supported as long as the class type mappings are uniform across multiple areas.

To enable interarea traffic engineering, include the expand-loose-hop statement in the configuration for each LSP transit router:
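A minimal sketch of the statement at the [edit protocols mpls] hierarchy level:

```
[edit protocols mpls]
expand-loose-hop;
```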

You can include this statement at the following hierarchy levels:

  • [edit protocols mpls]

  • [edit logical-systems logical-system-name protocols mpls]

Enabling Inter-AS Traffic Engineering for LSPs

Generally, traffic engineering is possible for LSPs that meet the following conditions:

  • Both ends of the LSP are in the same OSPF area or at the same IS-IS level.

  • The two ends of the LSP are in different OSPF areas within the same autonomous system (AS). LSPs that end in different IS-IS levels are not supported.

  • The two ends of an explicit-path LSP are in different OSPF ASs and the autonomous system border routers (ASBRs) are configured statically as the loose hops supported on the explicit-path LSP. For more information, see Configuring Explicit-Path LSPs.

Without statically defined ASBRs on LSPs, traffic engineering is not possible between one routing domain, or AS, and another. However, when the ASs are under the control of a single service provider, it is possible in some cases to have traffic engineered LSPs span the ASs and dynamically discover the OSPF ASBRs linking them (IS-IS is not supported with this feature).

Inter-AS traffic engineered LSPs are possible as long as certain network requirements are met, none of the limiting conditions apply, and OSPF passive mode is configured with EBGP. Details are provided in the following sections:

Inter-AS Traffic Engineering Requirements

The proper establishment and functioning of inter-AS traffic engineered LSPs depend on the following network requirements, all of which must be met:

  • All ASs are under control of a single service provider.

  • OSPF is used as the routing protocol within each AS, and EBGP is used as the routing protocol between the ASs.

  • ASBR information is available inside each AS.

  • EBGP routing information is distributed by OSPF, and an IBGP full mesh is in place within each AS.

  • Transit LSPs are not configured on the inter-AS links, but are configured between the entry and exit ASBRs of each AS.

  • The EBGP link between ASBRs in different ASs is a direct link and must be configured as a passive traffic engineering link under OSPF. The remote link address itself, not the loopback or any other link address, is used as the remote node identifier for this passive link. For more information about OSPF passive traffic engineering mode configuration, see Configuring OSPF Passive TE Mode.

In addition, the address used for the remote node of the OSPF passive traffic engineering link must be the same as the address used for the EBGP link. For more information about OSPF and BGP in general, see the Junos OS Routing Protocols Library.

Inter-AS Traffic Engineering Limitations

Only LSP hierarchical, or nested, signaling is supported for inter-AS traffic engineered LSPs. Only point-to-point LSPs are supported (there is no point-to-multipoint support).

In addition, the following limitations apply. Any one of these conditions is sufficient to render inter-AS traffic engineered LSPs impossible, even if the above requirements are met.

  • The use of multihop BGP is not supported.

  • The use of policers or topologies that prevent BGP routes from being known inside the AS is not supported.

  • Multiple ASBRs on a LAN between EBGP peers are not supported. Only one ASBR on a LAN between EBGP peers is supported (other ASBRs can exist on the LAN, but cannot be advertised).

  • Route reflectors or policies that hide ASBR information or prevent ASBR information from being distributed inside the ASs are not supported.

  • Bidirectional LSPs are not supported (LSPs are unidirectional from the traffic engineering perspective).

  • Topologies with both inter-AS and intra-AS paths to the same destination are not supported.

In addition, several features that are routine with all LSPs are not supported with inter-AS traffic engineering:

  • Admin group link colors are not supported.

  • Secondary standby is not supported.

  • Reoptimization is not supported.

  • Crankback on transit routers is not supported.

  • Diverse path calculation is not supported.

  • Graceful restart is not supported.

These lists of limitations or unsupported features with inter-AS traffic engineered LSPs are not exhaustive.

Configuring OSPF Passive TE Mode

Ordinarily, interior routing protocols such as OSPF are not run on links between ASs. However, for inter-AS traffic engineering to function properly, information about the inter-AS link, in particular, the address on the remote interface, must be made available inside the AS. This information is not normally included either in EBGP reachability messages or in OSPF routing advertisements.

To flood this link address information within the AS and make it available for traffic engineering calculations, you must configure OSPF passive mode for traffic engineering on each inter-AS interface. You must also supply the remote address for OSPF to distribute and include in the traffic engineering database.

To configure OSPF passive mode for traffic engineering on an inter-AS interface, include the passive statement for the link at the [edit protocols ospf area area-id interface interface-name] hierarchy level:

OSPF must be properly configured on the router. The following example configures the inter-AS link so-1/1/0 to distribute traffic engineering information with OSPF within the AS. The remote IP address is 192.168.207.2.
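A sketch of that configuration; the area 0.0.0.0 value is an assumption (the example does not state the area), and the remote-node-id statement carries the remote link address, so verify the exact syntax against your Junos release:

```
[edit protocols ospf]
area 0.0.0.0 {
    interface so-1/1/0 {
        passive {
            traffic-engineering {
                remote-node-id 192.168.207.2;
            }
        }
    }
}
```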

Packet Forwarding Component

The packet forwarding component of the Junos traffic engineering architecture is MPLS, which is responsible for directing a flow of IP packets along a predetermined path across a network. This path is called a label-switched path (LSP). LSPs are simplex; that is, the traffic flows in one direction from the head-end (ingress) router to a tail-end (egress) router. Duplex traffic requires two LSPs: one LSP to carry traffic in each direction. An LSP is created by the concatenation of one or more label-switched hops, allowing a packet to be forwarded from one router to another across the MPLS domain.

When an ingress router receives an IP packet, it adds an MPLS header to the packet and forwards it to the next router in the LSP. The labeled packet is forwarded along the LSP by each router until it reaches the tail end of the LSP, the egress router. At this point the MPLS header is removed, and the packet is forwarded based on Layer 3 information such as the IP destination address. The value of this scheme is that the physical path of the LSP is not limited to what the IGP would choose as the shortest path to reach the destination IP address.

Packet Forwarding Based on Label Swapping

The packet forwarding process at each router is based on the concept of label swapping. This concept is similar to what occurs at each Asynchronous Transfer Mode (ATM) switch in a permanent virtual circuit (PVC). Each MPLS packet carries a 4-byte encapsulation header that contains a 20-bit, fixed-length label field. When a packet containing a label arrives at a router, the router examines the label and uses it as an index into its MPLS forwarding table. Each entry in the forwarding table contains an interface-inbound label pair mapped to a set of forwarding information that is applied to all packets arriving on the specific interface with the same inbound label.

How a Packet Traverses an MPLS Backbone

This section describes how an IP packet is processed as it traverses an MPLS backbone network.

At the entry edge of the MPLS backbone, the IP header is examined by the ingress router. Based on this analysis, the packet is classified, assigned a label, encapsulated in an MPLS header, and forwarded toward the next hop in the LSP. MPLS provides a high degree of flexibility in the way that an IP packet can be assigned to an LSP. For example, in the Junos traffic engineering implementation, all packets arriving at the ingress router that are destined to exit the MPLS domain at the same egress router are forwarded along the same LSP.

Once the packet begins to traverse the LSP, each router uses the label to make the forwarding decision. The MPLS forwarding decision is made independently of the original IP header: the incoming interface and label are used as lookup keys into the MPLS forwarding table. The old label is replaced with a new label, and the packet is forwarded to the next hop along the LSP. This process is repeated at each router in the LSP until the packet reaches the egress router.

When the packet arrives at the egress router, the label is removed and the packet exits the MPLS domain. The packet is then forwarded based on the destination IP address contained in the packet’s original IP header according to the traditional shortest path calculated by the IP routing protocol.

Information Distribution Component

Traffic engineering requires detailed knowledge about the network topology as well as dynamic information about network loading. To implement the information distribution component, simple extensions to the IGPs are defined. Link attributes are included as part of each router’s link-state advertisement. IS-IS extensions include the definition of new type length values (TLVs), whereas OSPF extensions are implemented with opaque link-state advertisements (LSAs). The standard flooding algorithm used by the link-state IGPs ensures that link attributes are distributed to all routers in the routing domain. Some of the traffic engineering extensions to be added to the IGP link-state advertisement include maximum link bandwidth, maximum reserved link bandwidth, current bandwidth reservation, and link coloring.

Each router maintains network link attributes and topology information in a specialized traffic engineering database. The traffic engineering database is used exclusively for calculating explicit paths for the placement of LSPs across the physical topology. A separate database is maintained so that the subsequent traffic engineering computation is independent of the IGP and the IGP’s link-state database. Meanwhile, the IGP continues its operation without modification, performing the traditional shortest-path calculation based on information contained in the router’s link-state database.

Path Selection Component

After network link attributes and topology information are flooded by the IGP and placed in the traffic engineering database, each ingress router uses the traffic engineering database to calculate the paths for its own set of LSPs across the routing domain. The path for each LSP can be represented by either a strict or loose explicit route. An explicit route is a preconfigured sequence of routers that should be part of the physical path of the LSP. If the ingress router specifies all the routers in the LSP, the LSP is said to be identified by a strict explicit route. If the ingress router specifies only some of the routers in the LSP, the LSP is described as a loose explicit route. Support for strict and loose explicit routes allows the path selection process to be given broad latitude whenever possible, but to be constrained when necessary.

The ingress router determines the physical path for each LSP by applying a Constrained Shortest Path First (CSPF) algorithm to the information in the traffic engineering database. CSPF is a shortest-path-first algorithm that has been modified to take into account specific restrictions when the shortest path across the network is calculated. Input into the CSPF algorithm includes:

  • Topology link-state information learned from the IGP and maintained in the traffic engineering database

  • Attributes associated with the state of network resources (such as total link bandwidth, reserved link bandwidth, available link bandwidth, and link color) that are carried by IGP extensions and stored in the traffic engineering database

  • Administrative attributes required to support traffic traversing the proposed LSP (such as bandwidth requirements, maximum hop count, and administrative policy requirements) that are obtained from user configuration

As CSPF considers each candidate node and link for a new LSP, it either accepts or rejects a specific path component based on resource availability or whether selecting the component violates user policy constraints. The output of the CSPF calculation is an explicit route consisting of a sequence of router addresses that provides the shortest path through the network that meets the constraints. This explicit route is then passed to the signaling component, which establishes the forwarding state in the routers along the LSP.
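For illustration, a hedged sketch of the user-configured constraints that feed CSPF; the LSP name, egress address, bandwidth, and hop count shown here are hypothetical:

```
[edit protocols mpls]
label-switched-path example-lsp {
    to 10.0.0.1;
    bandwidth 100m;    # bandwidth requirement considered by CSPF
    hop-limit 10;      # maximum hop count constraint
}
```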

Signaling Component

An LSP is not known to be workable until it is actually established by the signaling component. The signaling component, which is responsible for establishing LSP state and distributing labels, relies on a number of extensions to RSVP:

  • The Explicit Route object allows an RSVP path message to traverse an explicit sequence of routers that is independent of conventional shortest-path IP routing. The explicit route can be either strict or loose.

  • The Label Request object permits the RSVP path message to request that intermediate routers provide a label binding for the LSP that it is establishing.

  • The Label object allows RSVP to support the distribution of labels without changing its existing mechanisms. Because the RSVP Resv message follows the reverse path of the RSVP path message, the Label object supports the distribution of labels from downstream nodes to upstream nodes.

Offline Path Planning and Analysis

Despite the reduced management effort resulting from online path calculation, an offline planning and analysis tool is still required to optimize traffic engineering globally. Online calculation takes resource constraints into account and calculates one LSP at a time. The challenge with this approach is that it is not deterministic. The order in which LSPs are calculated plays a critical role in determining each LSP’s physical path across the network. LSPs that are calculated early in the process have more resources available to them than LSPs calculated later in the process because previously calculated LSPs consume network resources. If the order in which the LSPs are calculated is changed, the resulting set of physical paths for the LSPs also can change.

An offline planning and analysis tool simultaneously examines each link’s resource constraints and the requirements of each LSP. Although the offline approach can take several hours to complete, it performs global calculations, compares the results of each calculation, and then selects the best solution for the network as a whole. The output of the offline calculation is a set of LSPs that optimizes utilization of network resources. After the offline calculation is completed, the LSPs can be established in any order because each is installed according to the rules for the globally optimized solution.

Flexible LSP Calculation and Configuration

Traffic engineering involves mapping traffic flow onto a physical topology. You can determine the paths online using constraint-based routing. Regardless of how the physical path is calculated, the forwarding state is installed across the network through RSVP.

The Junos OS supports the following ways to route and configure an LSP:

  • You can calculate the full path for the LSP offline and individually configure each router in the LSP with the necessary static forwarding state. This is analogous to the way some Internet service providers (ISPs) configure their IP-over-ATM cores.

  • You can calculate the full path for the LSP offline and statically configure the ingress router with the full path. The ingress router then uses RSVP as a dynamic signaling protocol to install a forwarding state in each router along the LSP.

  • You can rely on constraint-based routing to perform dynamic online LSP calculation. You configure the constraints for each LSP; then the network itself determines the path that best meets those constraints. Specifically, the ingress router calculates the entire LSP based on the constraints and then initiates signaling across the network.

  • You can calculate a partial path for an LSP offline and statically configure the ingress router with a subset of the routers in the path; then you can permit online calculation to determine the complete path.

    For example, consider a topology that includes two east-west paths across the United States: one in the north through Chicago and one in the south through Dallas. If you want to establish an LSP between a router in New York and one in San Francisco, you can configure the partial path for the LSP to include a single loose-routed hop of a router in Dallas. The result is an LSP routed along the southern path. The ingress router uses CSPF to compute the complete path and RSVP to install the forwarding state along the LSP.

  • You can configure the ingress router with no constraints whatsoever. In this case, normal IGP shortest-path routing is used to determine the path of the LSP. This configuration does not provide any value in terms of traffic engineering. However, it is easy and might be useful in situations when services such as virtual private networks (VPNs) are needed.

In all these cases, you can specify any number of LSPs as backups for the primary LSP, thus allowing you to combine more than one configuration approach. For example, you might explicitly compute the primary path offline, set the secondary path to be constraint-based, and have the tertiary path be unconstrained. If a circuit on which the primary LSP is routed fails, the ingress router notices the outage from error notifications received from a downstream router or by the expiration of RSVP soft-state information. Then the router dynamically forwards traffic to a hot-standby LSP or calls on RSVP to create a forwarding state for a new backup LSP.
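The partial-path approach described above (New York to San Francisco through Dallas) might be sketched as follows; the path name, loose-hop address, and egress address are hypothetical:

```
[edit protocols mpls]
path via-dallas {
    192.168.100.1 loose;    # Dallas router, configured as a loose hop
}
label-switched-path ny-to-sf {
    to 10.0.0.2;
    primary via-dallas;
}
```

CSPF on the ingress router expands the loose hop into a complete path, and RSVP installs the forwarding state along the resulting LSP.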

Role of an Interior Gateway Protocol

An interior gateway protocol (IGP) is a type of protocol used for exchanging routing information between devices within an autonomous system (AS). Based on the method of computing the best path to a destination, the IGPs are divided into two categories:

  • Link-state protocols—Advertise information about the network topology (directly connected links and the state of those links) to all routers using multicast addresses and triggered routing updates until all the routers running the link-state protocol have identical information about the internetwork. The best path to a destination is calculated based on constraints such as maximum delay, minimum available bandwidth, and resource class affinity.

    OSPF and IS-IS are examples of link-state protocols.

  • Distance vector protocols—Advertise complete routing table information to directly connected neighbors using a broadcast address. The best path is calculated based on the number of hops to the destination network.

    RIP is an example of a distance vector protocol.

As the name implies, the role of an IGP is to provide routing connectivity within or internal to a given routing domain. A routing domain is a set of routers under common administrative control that share a common routing protocol. An AS can consist of multiple routing domains, where IGP functions to advertise and learn network prefixes (routes) from neighboring routers to build a route table that ultimately contains entries for all sources advertising reachability for a given prefix. IGP executes a route selection algorithm to select the best path between the local router and each destination, and provides full connectivity among the routers making up a routing domain.

In addition to advertising internal network reachability, IGPs are often used to advertise routing information that is external to that IGP's routing domain through a process known as route redistribution. Route redistribution is the process of exchanging routing information among distinct routing protocols to tie multiple routing domains together when intra-AS connectivity is desired.

Limitations of an Interior Gateway Protocol

While each individual IGP has its own advantages and limitations, the biggest limitations of IGP in general are performance and scalability.

IGPs are designed to acquire and distribute network topology information for traffic engineering purposes. While this model has served well, IGPs have inherent scaling limitations when it comes to distributing large databases. IGPs can autodetect neighbors, from which they acquire intra-area network topology information. However, the link-state database or traffic engineering database has the scope of a single area or AS, which denies applications such as end-to-end traffic engineering the external visibility they need to make better decisions.

For label-switched networks, such as MPLS and Generalized MPLS (GMPLS), most existing traffic engineering solutions work within a single routing domain. These solutions do not work when a route from the ingress node to the egress node leaves the routing area or AS of the ingress node. In such cases, the path computation problem becomes complicated because complete routing information is not available throughout the network: service providers usually choose not to leak routing information beyond the routing area or AS because of scalability constraints and confidentiality concerns.

Need for Spanning Link-State Distribution

One of the limitations of IGP is its inability to span link-state distribution beyond a single area or AS. However, spanning the link-state information acquired by an IGP across multiple areas or ASs is needed for the following:

  • LSP path computation—This information is used to compute the path for MPLS LSPs across multiple routing domains, for example an inter-area TE LSP.

  • External path computing entities—External path computing entities, such as Application Layer Traffic Optimization (ALTO) and Path Computation Elements (PCE), perform path computations based on the network topology and current state of connections within the network, including traffic engineering information. This information is typically distributed by IGPs within the network.

    However, because the external path computing entities cannot extract this information from the IGPs, they perform network monitoring to optimize network services.

Using BGP as a Solution

Overview

To meet the need for spanning link-state distribution across multiple domains, an exterior gateway protocol (EGP) is required to collect link-state and traffic engineering information from an IGP area, share it with external components, and use it for computing paths for interdomain MPLS LSPs.

BGP is a standardized EGP designed to exchange routing and reachability information between autonomous systems (ASs). BGP is a proven protocol that has better scaling properties because it can distribute millions of entries (for example, VPN prefixes) in a scalable fashion. BGP is the only routing protocol in use today that is suited to carry all of the routes in the Internet. This is largely because BGP runs on top of TCP and can make use of TCP flow control. In contrast, the interior gateway protocols (IGPs) do not have flow control. When IGPs have too much route information, they begin to churn. When BGP has a neighboring speaker that is sending information too quickly, BGP can throttle down the neighbor by delaying TCP acknowledgments.

Another benefit of BGP is that it uses type, length, value (TLV) tuples and network layer reachability information (NLRI) that provide seemingly endless extensibility without the need for the underlying protocol to be altered.

The distribution of link-state information across domains is regulated using policies to protect the interests of the service provider, which requires control over topology distribution through policies. BGP, with its established policy framework, serves well for interdomain route distribution. In Junos OS, BGP is completely policy driven: the operator must explicitly configure neighbors to peer with and explicitly accept routes into BGP. Furthermore, routing policy is used to filter and modify routing information. Thus, routing policies provide complete administrative control over the routing tables.

Although both IGP-TE and BGP-TE provide the same set of information within an AS, BGP-TE has better scaling characteristics, inherited from the standard BGP protocol. This makes BGP-TE a more scalable choice for acquiring multi-area and multi-AS topology information.

By using BGP as a solution, the IGP-acquired information is distributed into BGP. ISPs can selectively expose this information to other ISPs, service providers, and content distribution networks (CDNs) through normal BGP peering. This allows aggregation of the IGP-acquired information across multiple areas and ASs, such that an external path computing entity can access the information by passively listening to a route reflector.

Implementation

In Junos OS, the IGPs install topology information into a database called the traffic engineering database, which contains the aggregated topology information. To install IGP topology information into the traffic engineering database, use the igp-topology configuration statement at the [edit protocols isis traffic-engineering] and [edit protocols ospf traffic-engineering] hierarchy levels. The mechanism to distribute link-state information using BGP includes advertising the traffic engineering database into BGP-TE (import) and installing entries from BGP-TE into the traffic engineering database (export).
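
Assuming the hierarchy levels named above, a minimal configuration sketch looks like this (both statements shown; configure whichever IGP you run):

```
set protocols ospf traffic-engineering igp-topology
set protocols isis traffic-engineering igp-topology
```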

Traffic Engineering Database Import

To advertise the traffic engineering database into BGP-TE, the link and node entries in the traffic engineering database are converted into routes. These converted routes are then installed by the traffic engineering database, on behalf of the corresponding IGP, into a user-visible routing table called lsdist.0, subject to routing policies. The procedure of leaking entries from the traffic engineering database into lsdist.0 is called traffic engineering database import, as illustrated in Figure 1.

There are policies to govern the traffic engineering database import process. By default, no entries are leaked from the traffic engineering database into the lsdist.0 table.

Starting in Junos OS Release 17.4R1, the traffic engineering database installs interior gateway protocol (IGP) topology information in addition to RSVP-TE topology information in the lsdist.0 routing table, as illustrated in Figure 1. Prior to Junos OS Release 17.4R1, the traffic engineering database only exported RSVP-TE topology information. Now you can monitor both IGP and traffic engineering topology information. BGP-LS reads IGP entries from lsdist.0 and advertises these entries to the BGP peers. To import IGP topology information into BGP-LS from lsdist.0, use the bgp-ls configuration statement at the [edit protocols mpls traffic-engineering database import igp-topology] hierarchy level.
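
Based on the hierarchy level given above, the statement would be entered along these lines:

```
set protocols mpls traffic-engineering database import igp-topology bgp-ls
```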

Traffic Engineering Database Export

BGP can be configured to export or advertise routes from the lsdist.0 table, subject to policy. This is common for any kind of route origination in BGP. In order to advertise BGP-TE into the traffic engineering database, BGP needs to be configured with the BGP-TE address family, and an export policy that selects routes for redistribution into BGP.

BGP then propagates these routes like any other NLRI. BGP peers that have the BGP-TE family configured and negotiated receive BGP-TE NLRIs. BGP stores the received BGP-TE NLRIs in the form of routes in the lsdist.0 table, which is the same table that stores locally originated BGP-TE routes. The BGP-installed routes in lsdist.0 are then distributed to other peers like any other route. Thus, the standard route selection procedure applies to BGP-TE NLRIs received from multiple speakers.

To achieve interdomain TE, the routes in lsdist.0 are leaked into the traffic engineering database through a policy. This process is called traffic engineering database export as illustrated in Figure 1.

There are policies to govern the traffic engineering database export process. By default, no entries are leaked from the lsdist.0 table into the traffic engineering database.
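
A minimal sketch of the export side described above, combining the BGP-TE address family with an export policy; the policy name nlri2bgp and group name ibgp are illustrative assumptions:

```
set policy-options policy-statement nlri2bgp term 1 from family traffic-engineering
set policy-options policy-statement nlri2bgp term 1 then accept
set protocols bgp group ibgp family traffic-engineering unicast
set protocols bgp group ibgp export nlri2bgp
```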

Note

For SDN applications, such as PCE and ALTO, the BGP-TE advertised information cannot leak into the traffic engineering database of a router. In such cases, an external server that peers with the routers using BGP-TE is used to move topology information up into the orchestration system that spans the network. These external servers can be regarded as BGP-TE consumers: they receive BGP-TE routes, but do not advertise them.

Assigning Credibility Values

Once the entries are installed in the traffic engineering database, the BGP-TE learned information is made available for Constrained Shortest Path First (CSPF) path computation. The traffic engineering database uses a protocol preference scheme that is based on credibility values. A protocol with a higher credibility value is preferred over a protocol with a lower credibility value. BGP-TE can advertise information learned from multiple protocols at the same time, so in addition to the IGP-installed entries in the traffic engineering database, there can be BGP-TE installed entries that correspond to more than one protocol. The traffic engineering database export component creates a traffic engineering database protocol and credibility level for each protocol that BGP-TE supports. These credibility values are configurable in the CLI.

The credibility order for the BGP-TE protocols is as follows:

  • Unknown—80

  • OSPF—81

  • ISIS Level 1—82

  • ISIS Level 2—83

  • Static—84

  • Direct—85

Cross-Credibility Path Computation

After you assign credibility values, each credibility level is treated as an individual plane. The Constrained Shortest Path First algorithm runs from the highest assigned credibility level to the lowest, finding a path within each credibility level.

With BGP-TE, computing paths across credibility levels is essential for inter-AS path computation. For example, a device in area 0 that computes a path through area 1 sees entries with different credibility values, because the area 0 entries are installed by OSPF and the area 1 entries are installed by BGP-TE.

To enable path computation across credibility levels, include the cross-credibility-cspf statement at the [edit protocols mpls], [edit protocols mpls label-switched-path lsp-name], and [edit protocols rsvp] hierarchy levels. At the [edit protocols rsvp] hierarchy level, enabling cross-credibility-cspf impacts bypass LSPs and loose-hop expansion in transit.

Configuring cross-credibility-cspf enables path computation across credibility levels using the Constrained Shortest Path First algorithm, wherein the computation is not performed on a credibility-by-credibility basis, but as a single computation that ignores the assigned credibility values.
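
For example, using the hierarchy levels listed above (the LSP name lsp-to-r3 is an illustrative assumption):

```
set protocols mpls cross-credibility-cspf
set protocols mpls label-switched-path lsp-to-r3 cross-credibility-cspf
set protocols rsvp cross-credibility-cspf
```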

BGP-TE NLRIs and TLVs

Like other BGP routes, BGP-TE NLRIs can also be distributed through a route reflector that speaks the BGP-TE NLRI. Junos OS implements route reflection support for the BGP-TE family.

The following is a list of supported NLRIs:

  • Link NLRI

  • Node NLRI

  • IPv4 Prefix NLRI (receive and propagate)

  • IPv6 Prefix NLRI (receive and propagate)

Note

Junos OS does not support the route-distinguisher form of the above NLRIs.

The following is a list of supported fields in link and node NLRIs:

  • Protocol-ID—NLRI originates with the following protocol values:

    • ISIS-L1

    • ISIS-L2

    • OSPF

  • Identifier—This value is configurable. By default, the identifier value is set to 0.

  • Local/Remote node descriptor—These include:

    • Autonomous system

    • BGP-LS Identifier—This value is configurable. By default, the BGP-LS identifier value is set to 0.

    • Area-ID

    • IGP router-ID

  • Link descriptors (link NLRI only)—These include:

    • Link Local/Remote Identifiers

    • IPv4 interface address

    • IPv4 neighbor address

    • IPv6 neighbor/interface address—The IPv6 neighbor and interface addresses are not originated, but only stored and propagated when received.

    • Multi-topology ID—This value is not originated, but stored and propagated when received.

The following is a list of supported LINK_STATE attribute TLVs:

  • Link attributes:

    • Administrative group

    • Max link bandwidth

    • Max reservable bandwidth

    • Unreserved bandwidth

    • TE default metric

    • SRLG

    • The following TLVs, which are not originated, but only stored and propagated when received:

      • Opaque link attributes

      • MPLS protocol mask

      • Metric

      • Link protection type

      • Link name attribute

  • Node attributes:

    • IPv4 Router-ID

    • Node flag bits—Only the overload bit is set.

    • The following TLVs, which are not originated, but only stored and propagated when received:

      • Multi-topology

      • OSPF-specific node properties

      • Opaque node properties

      • Node name

      • IS-IS area identifier

      • IPv6 Router-ID

    • Prefix attributes—These TLVs are stored and propagated like any other unknown TLVs.

Supported and Unsupported Features

Junos OS supports the following features with link-state distribution using BGP:

  • Advertisement of multiprotocol assured forwarding capability

  • Transmission and reception of node and link-state BGP and BGP-TE NLRIs

  • Nonstop active routing for BGP-TE NLRIs

  • Policies

Junos OS does not support the following functionality for link-state distribution using BGP:

  • Aggregated topologies, links, or nodes

  • Route distinguisher support for BGP-TE NLRIs

  • Multi-topology identifiers

  • Multi-instance identifiers (excluding the default instance ID 0)

  • Advertisement of the link and node area TLV

  • Advertisement of MPLS signaling protocols

  • Importing node and link information with overlapping address

BGP Link-State Extensions for Source Packet Routing in Networking (SPRING)

Starting in Junos OS Release 17.2R1, the BGP link-state address family is extended to distribute the source packet routing in networking (SPRING) topology information to software-defined networking (SDN) controllers. BGP typically learns the link-state information from the IGP and distributes it to BGP peers. Besides BGP, the SDN controller can get link-state information directly from the IGP if the controller is part of an IGP domain. However, BGP link-state distribution provides a scalable mechanism to export the topology information. BGP link-state extensions for SPRING are supported on interdomain networks.

Source Packet Routing in Networking (SPRING)

SPRING is a control-plane architecture that enables an ingress router to steer a packet through a specific set of nodes and links in the network without relying on the intermediate nodes in the network to decide the actual path it must take. SPRING engages IGPs, such as IS-IS and OSPF, for advertising network segments. Network segments can represent any instruction, topological or service-based. Within IGP topologies, IGP segments are advertised by the link-state routing protocols. There are two types of IGP segments:

  • Adjacency segment—A one-hop path over a specific adjacency between two nodes in the IGP

  • Prefix segment—A multi-hop, equal-cost, multipath-aware shortest path to a prefix, as per the state of the IGP topology

When SPRING is enabled in a BGP network, the BGP link-state address family learns the SPRING information from the IGP link-state routing protocols and advertises segments in the form of segment identifiers (SIDs). The BGP link-state address family has been extended to carry SIDs and other SPRING-related information to BGP peers. The route reflector can steer a packet through a desired set of nodes and links by prepending the packet with an appropriate combination of tunnels. This feature allows the BGP link-state address family to also advertise the SPRING information to BGP peers.

Flow of BGP Link-State SPRING Data

Figure 2 depicts the data flow of BGP link-state SPRING data that IS-IS pushes to the traffic engineering database.

Figure 2: BGP Link-State Source Packet Routing in Networking (SPRING)
  • IGP pushes the SPRING attributes to the traffic engineering database.

  • SPRING capabilities and algorithm information are carried forward as node attributes into the traffic engineering database.

  • Adjacent SID and LAN adjacent SID information are carried as link attributes.

  • Prefix SID or node-SID information is carried as prefix attributes.

  • A new set or a change to existing attributes triggers IGP updates to the traffic engineering database with new data.

  • RSVP is a prerequisite for link attributes.

    Caution

    If traffic engineering is disabled at the IGP level, none of the attributes are pushed to the traffic engineering database.

  • All parameters in the BGP traffic engineering NLRI, including the link, node, and prefix descriptors are derived from entries in the traffic engineering database.

  • The traffic engineering database imports route entries into the lsdist.0 routing table from IGP subject to policy.

  • The default BGP policy is to export only routes known to BGP. You must configure an export policy for non-BGP routes in the lsdist.0 routing table. This policy advertises entries learned from the traffic engineering database.
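
A sketch of such an export policy, under the hypothetical name te2bgp, that advertises traffic-engineering family routes from lsdist.0 into BGP:

```
set policy-options policy-statement te2bgp from family traffic-engineering
set policy-options policy-statement te2bgp then accept
set protocols bgp group ibgp export te2bgp
```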

Supported BGP Link-State Attributes and TLVs, and Unsupported Features for BGP Link-State with SPRING

BGP link-state with SPRING supports the following attributes and type, length, and values (TLVs) that are originated, received, and propagated in the network:

Node attributes

  • Segment routing Capabilities

  • Segment routing Algorithm

Link attributes

  • Adjacent-SID

  • LAN Adjacent-SID

Prefix descriptors

  • IP reachability information

Prefix attributes

  • Prefix SID

The following TLVs are not originated, but only received and propagated in the network:

Prefix descriptors

  • Multitopology ID

  • OSPF route type

Prefix attributes

  • Range

  • Binding SID

Junos OS does not support the following features with BGP link-state with SPRING extensions:

  • IPv6 prefix origination

  • Multitopology identifiers

  • Traffic engineering database export for SPRING parameters

  • New TLVs with tcpdump (existing TLVs are also not supported).

  • SPRING over IPv6

Verifying NLRI Node Learned Through BGP with OSPF as IGP

The following is a sample output to verify the NLRI node learned through BGP with OSPF as the IGP:

Purpose

Verify the lsdist.0 routing table entries.

Action

From operational mode, run the show route table lsdist.0 command.

user@host> show route table lsdist.0 te-node-ip 7.7.7.7 extensive

Meaning

The routes are appearing in the lsdist.0 routing table.

Verifying the Prefix NLRI Learned Through BGP with OSPF as IGP

The following is a sample output to verify the prefix NLRI learned through BGP with OSPF as the IGP:

Purpose

Verify the lsdist.0 routing table entries.

Action

From operational mode, run the show route table lsdist.0 command.

user@host> show route table lsdist.0 te-ipv4-prefix-node-ip 7.7.7.7 extensive

Meaning

The routes are appearing in the lsdist.0 routing table.

This example shows how to configure BGP to carry link-state information across multiple domains, which is used for computing paths for MPLS LSPs spanning multiple domains, such as inter-area TE LSP, and providing a scalable and policy-controlled means for external path computing entities, such as ALTO and PCE, to acquire network topology.

Requirements

This example uses the following hardware and software components:

  • Four routers that can be a combination of M Series, MX Series, or T Series routers

  • Junos OS Release 14.2 or later running on all the routers

Before you begin:

  1. Configure the device interfaces.

  2. Configure the autonomous system numbers and router IDs for the devices.

  3. Configure the following protocols:

    • RSVP

    • MPLS

    • BGP

    • IS-IS

    • OSPF

Overview

Starting with Junos OS Release 14.2, a new mechanism to distribute topology information across multiple areas and autonomous systems (ASs) is introduced by extending the BGP protocol to carry link-state information that was initially acquired using an IGP. The IGP protocols have scaling limitations when it comes to distributing large databases. BGP is not only a more scalable vehicle for carrying multi-area and multi-AS topology information, but also provides the policy controls that are useful for multi-AS topology distribution. The BGP link-state topology information is used for computing paths for MPLS label-switched paths (LSPs) spanning multiple domains, such as inter-area TE LSPs, and for providing a scalable and policy-controlled means for external path computing entities, such as ALTO and PCE, to acquire network topology.

Starting with Junos OS Release 17.1R1, link-state distribution using BGP is supported on QFX10000 switches.

Topology

In Figure 3, Routers R0 and R1 and Routers R2 and R3 belong to different autonomous systems. Routers R0 and R1 run OSPF, and Routers R2 and R3 run IS-IS.

Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

R0

R1

R2

R3

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode.

To configure Router R1:

  1. Configure the Router R1 interfaces.
  2. Configure the router ID and autonomous system of Router R1.
  3. Enable RSVP on all the interfaces of Router R1 (excluding the management interface).
  4. Enable MPLS on all the interfaces of Router R1 (excluding the management interface).
  5. Configure the BGP group for Router R1 to peer with Router R0, and assign the local address and neighbor address.
  6. Include the BGP-TE signaling network layer reachability information (NLRI) to the ibgp BGP group.
  7. Enable export of policy nlri2bgp on Router R1.
  8. Configure the BGP group for Router R1 to peer with Router R2, and assign the local address and neighbor autonomous system to the ebgp BGP group.
  9. Include the BGP-TE signaling NLRI to the ebgp BGP group.
  10. Enable passive traffic-engineering on the inter-AS link.
  11. Enable OSPF on the interface connecting Router R1 to Router R0 and on the loopback interface of Router R1, and enable traffic engineering capabilities.
  12. Enable passive traffic-engineering on the inter-AS link.
  13. Configure policies to accept traffic from BGP-TE NLRI.
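
Because the per-step configuration listings are not reproduced here, the following is a hedged sketch of what the Router R1 configuration resulting from the steps above might look like; all interface names, addresses, AS numbers, and the policy name nlri2bgp are illustrative assumptions:

```
set protocols rsvp interface all
set protocols rsvp interface fxp0.0 disable
set protocols mpls interface all
set protocols mpls interface fxp0.0 disable
set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 10.1.1.1
set protocols bgp group ibgp family traffic-engineering unicast
set protocols bgp group ibgp export nlri2bgp
set protocols bgp group ibgp neighbor 10.1.1.2
set protocols bgp group ebgp type external
set protocols bgp group ebgp local-address 192.168.12.1
set protocols bgp group ebgp family traffic-engineering unicast
set protocols bgp group ebgp peer-as 64511
set protocols bgp group ebgp neighbor 192.168.12.2
set protocols ospf traffic-engineering
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
set protocols ospf area 0.0.0.0 interface lo0.0
set protocols ospf area 0.0.0.0 interface ge-0/0/1.0 passive traffic-engineering remote-node-id 192.168.12.2
set policy-options policy-statement nlri2bgp term 1 from family traffic-engineering
set policy-options policy-statement nlri2bgp term 1 then accept
```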

Results

From configuration mode, confirm your configuration by entering the show interfaces, show routing-options, show protocols, and show policy-options commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode.

To configure Router R2:

  1. Configure the Router R2 interfaces.
  2. Configure the router ID and autonomous system of Router R2.
  3. Enable RSVP on all the interfaces of Router R2 (excluding the management interface).
  4. Enable MPLS on all the interfaces of Router R2 (excluding the management interface).
  5. Enable import of traffic engineering database parameters using the ted2nlri policy.
  6. Configure the BGP group for Router R2 to peer with Router R1.
  7. Include the BGP-TE signaling NLRI to the ebgp BGP group.
  8. Assign the local address and neighbor autonomous system to the ebgp BGP group.
  9. Enable export of policy nlri2bgp on Router R2.
  10. Enable IS-IS on the interface connecting Router R2 with Router R3 and the loopback interface of Router R2.
  11. Enable only IS-IS advertising on the interface connecting Router R2 with Router R1.
  12. Configure traffic engineering capability on Router R2.
  13. Enable only OSPF advertisements on the interface connecting Router R2 with Router R1.
  14. Configure policies to accept traffic from the BGP-TE NLRI.

Results

From configuration mode, confirm your configuration by entering the show interfaces, show routing-options, show protocols, and show policy-options commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Verification

Verify that the configuration is working properly.

Verifying the BGP Summary Status

Purpose

Verify that BGP is up and running on Routers R0 and R1.

Action

From operational mode, run the show bgp summary command.

user@R0> show bgp summary

From operational mode, run the show bgp summary command.

user@R1> show bgp summary

Meaning

Router R0 is peered with Router R1.

Verifying the MPLS LSP Status

Purpose

Verify the status of the MPLS LSP on Router R0.

Action

From operational mode, run the show mpls lsp command.

user@R0> show mpls lsp

Meaning

The MPLS LSP from Router R0 to Router R3 is established.

Verifying the lsdist.0 Routing Table Entries

Purpose

Verify the lsdist.0 routing table entries on Routers R0, R1, and R2.

Action

From operational mode, run the show route table lsdist.0 command.

user@R0> show route table lsdist.0

From operational mode, run the show route table lsdist.0 command.

user@R1> show route table lsdist.0

From operational mode, run the show route table lsdist.0 command.

user@R2> show route table lsdist.0

Meaning

The routes are appearing in the lsdist.0 routing table.

Verifying the Traffic Engineering Database Entries

Purpose

Verify the traffic engineering database entries on Router R0.

Action

From operational mode, run the show ted database command.

user@R0> show ted database

Meaning

The routes are appearing in the traffic engineering database.

You can enable distribution of topology information across multiple areas and autonomous systems (ASs) by extending the BGP protocol to carry link-state information, which was initially acquired using IGP. The IGP protocols have scaling limitations when it comes to distributing large databases. BGP is not only a more scalable vehicle for carrying multi-area and multi-AS topology information, but also provides the policy controls that can be useful for multi-AS topology distribution. The BGP link-state topology information is used for computing paths for MPLS LSPs spanning multiple domains, such as inter-area TE LSP, and providing a scalable and policy-controlled means for external path computing entities, such as ALTO and PCE, to acquire network topology.

Before you begin:

  1. Configure the device interfaces.

  2. Configure the router ID and autonomous system number for the device.

  3. Configure the following protocols:

    • RSVP

    • MPLS

    • IS-IS

    • OSPF

To enable link-state distribution using BGP:

  1. Configure an internal BGP group, and assign the local address and neighbor address for the group.
  2. Include the BGP-TE signaling network layer reachability information (NLRI) to the internal BGP group.
  3. Enable export of policy on the device.
  4. Configure an external BGP group, and assign the local address and neighbor autonomous system to the group.
  5. Include the BGP-TE signaling NLRI to the external BGP group.
  6. In configuration mode, go to the following hierarchy level:
  7. Configure policies to accept traffic from the BGP-TE NLRI.
  8. On the remote connecting device, configure policy to accept the OSPF and IS-IS traffic.
  9. Verify and commit the configuration.

    For example:

    R1

    R2

Improving Traffic Engineering Database Accuracy with RSVP PathErr Messages

An essential element of RSVP-based traffic engineering is the traffic engineering database. The traffic engineering database contains a complete list of all network nodes and links participating in traffic engineering, and a set of attributes each of those links can hold. (For more information about the traffic engineering database, see Constrained-Path LSP Computation.) One of the most important link attributes is bandwidth.

Bandwidth availability on links changes quickly as RSVP LSPs are established and terminated. It is likely that the traffic engineering database will develop inconsistencies relative to the real network. These inconsistencies cannot be fixed by increasing the rate of IGP updates.

Link availability can share the same inconsistency problem. A link that becomes unavailable can break all existing RSVP LSPs. However, its unavailability might not readily be known by the network.

When you configure the rsvp-error-hold-time statement, a source node (ingress of an RSVP LSP) learns from the failures of its LSP by monitoring PathErr messages transmitted from downstream nodes. Information from the PathErr messages is incorporated into subsequent LSP computations, which can improve the accuracy and speed of LSP setup. Some PathErr messages are also used to update traffic engineering database bandwidth information, reducing inconsistencies between the traffic engineering database and the network.

You can control the frequency of IGP updates by using the update-threshold statement. See Configuring the RSVP Update Threshold on an Interface.
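
Assuming the statements named above keep their documented hierarchy levels, the two knobs might be configured along these lines (the interface name, hold time in seconds, and threshold percentage are illustrative):

```
set protocols mpls rsvp-error-hold-time 30
set protocols rsvp interface ge-0/0/1.0 update-threshold 10
```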

This section discusses the following topics:

PathErr Messages

PathErr messages report a wide variety of problems by means of different code and subcode numbers. You can find a complete list of these PathErr messages in RFC 2205, Resource Reservation Protocol (RSVP), Version 1, Functional Specification and RFC 3209, RSVP-TE: Extensions to RSVP for LSP Tunnels.

When you configure the rsvp-error-hold-time statement, two categories of PathErr messages, which specifically represent link failures, are examined:

  • Link bandwidth is low for this LSP: Requested bandwidth unavailable—code 1, subcode 2

    This type of PathErr message represents a global problem that affects all LSPs transiting the link. It indicates that the actual link bandwidth is lower than that required by the LSP, and that the bandwidth information in the traffic engineering database is likely an overestimate.

    When this type of error is received, the available link bandwidth is reduced in the local traffic engineering database, affecting all future LSP computations.

  • Link unavailable for this LSP:

    • Admission Control failure—code 1, any subcode except 2

    • Policy Control failures—code 2

    • Service Preempted—code 12

    • Routing problem—no route available toward destination—code 24, subcode 5

    These types of PathErr messages are generally pertinent only to the specified LSP. The failure of this LSP does not necessarily imply that other LSPs will also fail. These errors can indicate maximum transmission unit (MTU) problems, service preemption (either manually initiated by the operator or triggered by another LSP with a higher priority), a next-hop link that is down, a next-hop neighbor that is down, or service rejection because of policy considerations. It is best to route this particular LSP away from the link.

Identifying the Problem Link

Each PathErr message includes the sender’s IP address. This information is propagated unchanged toward the ingress router. A lookup in the traffic engineering database can identify the node that originated the PathErr message.

Each PathErr message carries enough information to identify the RSVP session that triggered the message. A transit router simply forwards the message toward the ingress. The ingress router for the RSVP session has the complete list of all nodes and links the session should traverse; coupled with the originating node information from the PathErr message, the problem link can be uniquely identified.

Configuring the Router to Improve Traffic Engineering Database Accuracy

To improve the accuracy of the traffic engineering database, configure the rsvp-error-hold-time statement. When this statement is configured, a source node (ingress of an RSVP LSP) learns from the failures of its LSP by monitoring PathErr messages transmitted from downstream nodes. Information from the PathErr messages is incorporated into subsequent LSP computations, which can improve the accuracy and speed of LSP setup. Some PathErr messages also are used to update traffic engineering database bandwidth information, reducing inconsistencies between the traffic engineering database and the network.

To configure how long MPLS should remember RSVP PathErr messages and consider them in CSPF computation, include the rsvp-error-hold-time statement:
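A minimal sketch of the statement syntax, where seconds is a placeholder for the hold-time value you choose:

```
rsvp-error-hold-time seconds;
```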

You can include this statement at the following hierarchy levels:

  • [edit protocols mpls]

  • [edit logical-systems logical-system-name protocols mpls]

The time can be a value from 1 to 240 seconds. The default is 25 seconds. Configuring a value of 0 disables the monitoring of PathErr messages.
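For example, a sketch of the statement at the [edit protocols mpls] hierarchy level (the 60-second value is illustrative):

```
[edit]
protocols {
    mpls {
        # Remember PathErr messages for 60 seconds and
        # factor them into CSPF computations
        rsvp-error-hold-time 60;
    }
}
```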


Release History Table

  • 17.4R1: Starting in Junos OS Release 17.4R1, the traffic engineering database installs interior gateway protocol (IGP) topology information in addition to RSVP-TE topology information in the lsdist.0 routing table.

  • 17.2R1: Starting in Junos OS Release 17.2R1, the BGP link-state address family is extended to distribute the source packet routing in networking (SPRING) topology information to software-defined networking (SDN) controllers.

  • 17.1R1: Starting with Junos OS Release 17.1R1, link state distribution using BGP is supported on QFX10000 switches.