Understanding Containerized RPD

Containerized routing protocol process (cRPD) is Juniper’s routing protocol process (rpd) decoupled from Junos OS and packaged as a Docker container to run in Linux-based environments. rpd runs as a user-space application, learns route state through the various routing protocols, and maintains the complete set in the routing information base (RIB), also known as the routing table. The rpd process is also responsible for downloading routes into the forwarding information base (FIB), also known as the forwarding table, based on local selection criteria. Whereas the Packet Forwarding Engine (PFE) in a Juniper Networks router holds the FIB and performs packet forwarding, for cRPD the host Linux kernel stores the FIB and performs packet forwarding. cRPD can also be deployed to provide control-plane-only services such as BGP route reflection.

Note

The Route Reflection networking service must not depend on the same hardware or controllers that host the application containers that consume the reachability information learned through the Route Reflection service. The cRR service must operate independently.
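
As an illustration, a minimal sketch of launching a cRPD instance in host-networking mode with Docker might look like the following. The image tag (crpd:latest) and container name (crpd01) are placeholders, and the exact options vary by release.

    # Launch a cRPD instance that shares the host's network stack.
    # The image tag and container name below are placeholders.
    docker run --rm --detach --name crpd01 \
        --privileged --net=host \
        -v crpd01-config:/config \
        -v crpd01-varlog:/var/log \
        crpd:latest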

Routing Protocol Process

Within Junos OS, the routing protocol process controls the routing protocols that run on a router. The rpd process starts all configured routing protocols and handles all routing messages. It maintains one or more routing tables, which consolidate the routing information learned from all routing protocols. From this routing information, the routing protocol process determines the active routes to network destinations and installs these routes into the Routing Engine’s forwarding table. Finally, rpd implements a routing policy, which enables you to control the routing information that is transferred between the routing protocols and the routing table. Using the routing policy, you can filter and limit the transfer of information as well as set properties associated with specific routes.
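
As a hedged illustration of routing policy, the following Junos configuration sketch accepts only a given prefix range from a BGP peer; the policy name, group name, and prefix are hypothetical.

    # Hypothetical policy: accept 10.1.0.0/16 and longer prefixes, reject the rest.
    set policy-options policy-statement ACCEPT-CUSTOMER term 1 from route-filter 10.1.0.0/16 orlonger
    set policy-options policy-statement ACCEPT-CUSTOMER term 1 then accept
    set policy-options policy-statement ACCEPT-CUSTOMER term 2 then reject

    # Apply the policy to routes received from a hypothetical BGP group.
    set protocols bgp group CUSTOMER import ACCEPT-CUSTOMER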

Routing Engine Kernel

The Routing Engine software consists of several software processes that control router functionality and a kernel that provides the communication among all the processes.

The Routing Engine kernel provides the link between the routing tables and the Routing Engine’s forwarding table. The kernel is also responsible for all communication with the Packet Forwarding Engine, which includes keeping the Packet Forwarding Engine’s copy of the forwarding table synchronized with the master copy in the Routing Engine.

RPD runs natively on Linux and programs routes into the Linux kernel using Netlink. RPD uses Netlink messages to install the FIB state it generates into the Linux kernel; it interacts with mgd and the CLI for configuration and management, maintains protocol sessions using PPMD, and detects liveness using BFD. RPD learns interface attributes such as name, addresses, MTU settings, and link status from Netlink messages. Netlink acts as the interface to the kernel components.
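
Because cRPD installs its FIB state into the Linux kernel over Netlink, you can observe the result with standard Linux tooling. A small sketch follows; the commands are generic iproute2 utilities run on the host or inside the container.

    # List the kernel FIB (main routing table) that RPD programs via Netlink.
    ip route show table main

    # Watch Netlink route messages in real time as routes are added or removed.
    ip monitor route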

cRPD Overview

cRPD maintains route state information in the RIB and downloads routes into the FIB based on local route-selection criteria. cRPD contains the RPD, PPMD, CLI, MGD, and BFD processes. cRPD defines how routing protocols such as IS-IS, OSPF, and BGP operate on the device, including selecting routes and maintaining forwarding tables.
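
For example, you can attach to the Junos CLI inside a running cRPD container to inspect these processes; the container name crpd01 is a placeholder.

    # Open the Junos CLI in a running cRPD container.
    docker exec -it crpd01 cli

    # Inside the Junos CLI, verify that the software is up, for example:
    show version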

Figure 1 shows the cRPD overview.

Figure 1: cRPD Overview

The network interfaces known to the underlying OS kernel are exposed to RPD running in the Linux container. RPD learns about all the network interfaces and adds route state for them. If additional Docker containers are running on the system, all the containers, as well as applications running directly on the host, can access the same set of network interfaces and state.

When multiple cRPD instances run on a system in bridge mode, the containers are connected to the host network stack through bridges. The host interfaces are connected to the containers through bridges, and multiple containers can connect to the same bridge and communicate with one another. The default Docker bridge enables NAT; NAT bridges are used as a management port into the container. This means that clients cannot initiate connections to the cRR, and the cRR controls which clients it connects to. If a bridge is connected to the host OS network interfaces, external communication is feasible. For routing purposes, you can assign all or a subset of the physical interfaces to a Docker container. This mode is useful for containerized route reflectors and for partitioning the system into multiple routing domains.
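
A minimal sketch of a bridge-mode deployment follows; the network name (crpd-net), the container names, and the image tag are hypothetical.

    # Create a user-defined bridge; containers attached to it can reach one another.
    docker network create --driver bridge crpd-net

    # Launch two cRPD instances on the same bridge (image tag is a placeholder).
    docker run --rm --detach --name crpd01 --privileged --net crpd-net crpd:latest
    docker run --rm --detach --name crpd02 --privileged --net crpd-net crpd:latest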

Figure 2 shows the architecture of cRPD running natively on Linux.

Figure 2: cRPD on Linux Architecture

Benefits

  • The use of containers reduces the time required for service boot up from several minutes to a few seconds, which results in faster deployment.

  • You can run cRPD on any Linux server that supports Docker.

  • With a small footprint and minimum resource reservation requirements, cRPD can easily scale to keep up with customers’ peak demand.

  • Provides significantly higher density than VM-based solutions, without requiring resource reservation on the host.

  • Brings well-proven, stable routing software to Linux.

Docker Overview

Docker is an open-source software platform that simplifies the creation, management, and teardown of virtual containers that can run on any Linux server. Its main benefit is packaging applications in “containers,” making them portable among any system running the Linux operating system (OS). A container provides an OS-level virtualization approach for an application and its associated dependencies, allowing the application to run on a specific platform. Containers are not VMs; rather, they are isolated virtual environments with dedicated CPU, memory, I/O, and networking.

A container image is a lightweight, standalone, executable package of a piece of software. Because containers include all dependencies for an application, multiple containers with conflicting dependencies can run on the same Linux distribution. Containers use host OS Linux kernel features, such as control groups (cgroups) and namespace isolation, to allow multiple containers to run in isolation on the same Linux host OS. An application in a container can have a small memory footprint because the container shares the kernel of its Linux host OS and does not require a guest OS, as VMs do.

Containers spin up quickly, taking much less time to boot than VMs. This enables you to install, run, and upgrade applications quickly and efficiently.
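
As a simple, hedged illustration of spin-up speed using standard Docker commands (any small image, such as alpine, works):

    # Time a full container start and teardown; once the image is cached
    # locally, this typically completes in well under a second.
    docker pull alpine
    time docker run --rm alpine /bin/true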

Figure 3 provides an overview of a typical Docker container environment.

Figure 3: Docker Container Environment

Supported Features on cRPD

cRPD supports the following features; a brief configuration sketch follows the list:

  • BGP Route Reflector in the Linux container

  • BGP add-path, multipath, graceful restart helper mode

  • BGP, OSPF, OSPFv3, IS-IS, and Static

  • BMP, BFD, and Linux-FIB

  • Equal-Cost Multipath (ECMP)

  • JET for Programmable RPD

  • Junos OS CLI

  • Management using open interfaces such as NETCONF and SSH

  • IPv4 and IPv6 routing

  • MPLS routing
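
As a hedged sketch, enabling a few of these features on cRPD might look like the following Junos configuration; all interface names, AS numbers, and addresses are hypothetical.

    # Static route and IS-IS (an ISO NET address is also required; omitted for brevity).
    set routing-options static route 203.0.113.0/24 next-hop 198.51.100.1
    set protocols isis interface eth1

    # External BGP with multipath for ECMP.
    set routing-options autonomous-system 64512
    set protocols bgp group PEERS type external
    set protocols bgp group PEERS neighbor 198.51.100.1 peer-as 64513
    set protocols bgp group PEERS multipath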

Licensing

cRPD software features require a license for activation. To learn more about cRPD licenses, see Supported Features on cRPD, Juniper Agile Licensing Guide, and Managing cRPD Licenses.

Use case: Egress Peer Traffic Engineering using BGP Add-Path

Service providers must meet growing traffic demands while keeping their capital and operating expenditures low. Juniper provides tools and applications to deploy, configure, manage, and maintain networks of this complexity.

Egress peer traffic engineering (TE) allows a central controller to instruct an ingress router in a domain to direct traffic toward a specific egress router, and a specific external interface, to reach a particular destination outside the network, for optimum utilization of the advertised egress routes.
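
A hedged sketch of the BGP add-path piece follows: it lets a route reflector advertise multiple egress paths for the same prefix to ingress routers, rather than only the single best path. The group name and path count are hypothetical.

    # Advertise up to 6 paths per prefix instead of only the best path.
    set protocols bgp group RR-CLIENTS family inet unicast add-path send path-count 6

    # Also accept multiple paths for a prefix from peers.
    set protocols bgp group RR-CLIENTS family inet unicast add-path receive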

The Internet, a public global network of networks, is built as a system of interconnected service provider (SP) infrastructures. These networks are represented as autonomous systems (ASs), each with a globally unique autonomous system number (ASN). A data-plane interconnection link (network-to-network interface, or NNI) and a direct control-plane (eBGP) connection between two ASs allow Internet traffic to travel between them, usually as part of a formal agreement called peering.

An SP typically has peering relationships with multiple other SPs. These peerings are usually geographically distributed, differ in the number and bandwidth of their NNI links, and use various business or cost models.

Figure 4: Peering Among Service Providers

In the context of AS peering, traffic egress assumes that the destination network address is reachable through a certain peer AS. For example, a device in Peer AS#2 can reach a destination IP address in Peer AS#4 through Service Provider AS#1. This reachability information is provided by a peer AS using an eBGP Network Layer Reachability Information (NLRI) advertisement. An AS typically advertises IP addresses that belong to it, but an AS may also advertise addresses learned from another AS. For example, Peer AS#2 can advertise to the SP (AS#1) addresses it has received from Peer AS#3, Peer AS#7, and even Peer AS#8, Peer AS#9, Peer AS#4, and Peer AS#5. It all depends on the BGP routing policies between the individual ASs. Therefore, a given destination IP prefix can be reached through multiple peering ASs and over multiple NNIs. It is the role of the routers and network operators in the SP network to select the “best” exit interface for each destination prefix.

Engineering the way traffic exits the service provider AS is critical to ensuring cost efficiency while providing a good end-user experience. The definition of the “best” exit interface is a combination of cost, latency, and traffic loss.

For more information, see Fundamentals of Egress Peering Engineering and BGP Labeled Unicast Egress Peer Engineering Using cRPD as Ingress.