
BGP Labeled Unicast Egress Peer Engineering Using cRPD as Ingress

 

About This Network Configuration Example

In this NCE, we show how to use the containerized routing protocol process (cRPD) for BGP Labeled Unicast (LU) Egress Peer Engineering (EPE). A data center network can make traffic engineering decisions at the servers that originate traffic. You can set up your application on a server to steer traffic to the egress peer where the traffic exits the network.

The following document describes BGP LU egress peer engineering on hardware routing devices:

Configuring Egress Peer Traffic Engineering by Using BGP Labeled Unicast and Enabling MPLS Fast Reroute

This NCE shows how to set up BGP LU egress peer engineering using cRPD as the ingress node.

Use Case Overview

This use case uses the following topology.

cRPD allows you to deploy EPE in any data center that runs Linux servers, with a simple API between cRPD and the controller. The ability to make fine-grained EPE decisions at a per-server, per-application, and per-destination-prefix level optimizes the network’s performance and directly reduces the money spent on peering and transit.

The transport tunneling mechanism from cRPD to the ASBRs can be either IP (GRE) or MPLS (an LDP or EBGP-LU fabric). cRPD runs alongside the applications on the server, sharing the same Linux network namespace. cRPD learns the EPE intent from the network operator or controller and installs routes into the Linux routing table to perform the intended traffic engineering.
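
For example, once cRPD has programmed an EPE route, you can inspect it on the server with the iproute2 utilities. The prefix, MPLS label, next-hop address, and interface name below are hypothetical placeholders; this is only a sketch of what such a kernel route looks like:

  root@server:~# ip route show 203.0.113.0/24
  203.0.113.0/24  encap mpls 299824 via 10.1.1.2 dev eth1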

The operator just defines the IPv4 or IPv6 egress peer address as the desired egress nexthop for the route. This simplifies the API between cRPD and the network controller: the controller does not need to track the transport encapsulation (that is, the BGP LU label or the transport tunnel encapsulation attributes) used to reach the egress nexthop and relay it to the server. This keeps the controller-to-server communication simple and stable.
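
For instance, the EPE intent could be expressed as a static route in cRPD whose next hop is simply the egress peer address; the recursive nexthop resolution described below takes care of the rest. The prefix and peer address here are hypothetical placeholders:

  set routing-options static route 203.0.113.0/24 next-hop 192.0.2.11 resolve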

cRPD determines the MPLS encapsulation required to send the traffic to the appropriate exit peer. To do so, cRPD participates in BGP routing in the network by peering with the route reflectors (RRs), and learns the EPE exit peers via BGP LU routes. It also learns the feasible set of exit peers for a specific IPv4/IPv6 service route using BGP add-path.
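
A cRPD BGP configuration along the following lines would support this. It is a minimal sketch; the group name, local and neighbor addresses, and AS number are illustrative placeholders for your network:

  set routing-options autonomous-system 64512
  set protocols bgp group RR type internal
  set protocols bgp group RR local-address 10.0.0.100
  set protocols bgp group RR neighbor 10.0.0.1
  set protocols bgp group RR family inet labeled-unicast rib inet.3
  set protocols bgp group RR family inet unicast add-path receive

Here the labeled-unicast family carries the BGP LU routes for the exit peers (placed in inet.3 for nexthop resolution), and add-path receive lets cRPD learn multiple feasible exit peers for the same service prefix.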

cRPD uses recursive nexthop resolution to determine the appropriate encapsulation to reach the egress nexthop, which lets you keep the API from the controller to cRPD simple. cRPD also automatically falls back to the BGP best path if the EPE nexthop that the controller installed becomes unreachable. This keeps traffic flowing while the controller determines the new optimal exit point.

Technical Overview

To deploy cRPD, you need a Linux server that supports Docker. Launch the cRPD instance and the application in the same network namespace. Think of cRPD as a network helper for those applications.
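
A minimal launch sketch, assuming the applications use the host's default network namespace (so cRPD shares it through --net=host); the container name, hostname, volume names, and image tag are placeholders for your environment:

  docker run --rm --detach --name crpd01 -h crpd01 --privileged --net=host -v crpd01-config:/config -v crpd01-varlog:/var/log -it crpd:latest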

Though cRPD is containerized, the applications can be either containerized applications or standalone Linux applications that use the default namespace. In this NCE, we use the ping utility on the Linux server to simulate the data center application. The Linux server is simulated using a VM and is connected to the Peer routers in the network.
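
For example, the simulated application can simply send traffic toward a service prefix; the destination address below is a hypothetical placeholder:

  root@server:~# ping -c 5 203.0.113.1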