What Is the Juniper® Cloud-Native Router?


Juniper Cloud-Native Router is a container-based software solution that combines the JCNR-controller (cRPD-based control plane) and the JCNR-vRouter. With the cloud-native router, you can enable Junos OS-based routing or switching control with enhanced forwarding capabilities.

The JCNR-Controller runs on a Kubernetes compute host, provides control-plane management functionality, and uses the routing or forwarding capabilities provided by either the Linux kernel or the JCNR-vRouter.

The Data Plane Development Kit (DPDK) is an open-source set of libraries and drivers for fast packet processing. DPDK enables network interface cards (NICs) to send packets by direct memory access (DMA) straight into an application's address space. Applications then poll for packets, avoiding the overhead of NIC interrupts. Integrating with DPDK allows a vRouter to process more packets per second than is possible when the vRouter runs as a kernel module.
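The polling model described above can be sketched as a toy simulation. This is not DPDK code; the names (`rx_ring`, `rx_burst`) only mirror the shape of DPDK's burst-receive API to show how a busy-poll loop drains a DMA ring without interrupts.

```python
from collections import deque

# Toy simulation of DPDK-style poll-mode receive: the NIC DMA-writes
# packets into a ring in the application's address space, and the
# application drains the ring in a busy loop instead of waiting for
# interrupts. Names here are illustrative, not actual DPDK APIs.
BURST_SIZE = 32

rx_ring = deque(f"pkt-{i}" for i in range(100))  # stand-in for the DMA ring

def rx_burst(ring, max_pkts=BURST_SIZE):
    """Dequeue up to max_pkts packets, loosely mirroring rte_eth_rx_burst()."""
    burst = []
    while ring and len(burst) < max_pkts:
        burst.append(ring.popleft())
    return burst

processed = 0
while rx_ring:                 # poll loop: spin until the ring is empty
    for pkt in rx_burst(rx_ring):
        processed += 1         # forwarding work would happen here

print(processed)               # all 100 packets handled without interrupts
```

The key trade-off the sketch illustrates: the application burns CPU cycles polling, but never pays interrupt or kernel-crossing costs per packet.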

In this integrated solution, the JCNR-Controller communicates with the JCNR-vRouter by exchanging messages over gRPC-based services, creating a fully functional cloud-native router. This close integration enables the cloud-native router to:

  • Learn about fabric and workload interfaces

  • Provision DPDK- or kernel-based interfaces for Kubernetes pods as needed

  • Configure IPv4 and IPv6 address allocation for Pods

  • Install routes into routing tables

  • Run routing protocols such as IS-IS, BGP, and OSPF
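As one concrete illustration of the address-allocation task in the list above, the following sketch hands out IPv4 and IPv6 addresses to pods from configured subnets using Python's standard `ipaddress` module. The subnets and the allocator class are hypothetical, not the actual JCNR-CNI IPAM implementation.

```python
import ipaddress

# Illustrative per-pod IPv4/IPv6 address allocation from configured
# subnets. The subnets and class are assumptions for this sketch.
class PodAddressAllocator:
    def __init__(self, v4_subnet, v6_subnet):
        self.v4_hosts = ipaddress.ip_network(v4_subnet).hosts()
        self.v6_hosts = ipaddress.ip_network(v6_subnet).hosts()
        self.assigned = {}   # pod name -> (IPv4 address, IPv6 address)

    def allocate(self, pod_name):
        """Assign the next free IPv4 and IPv6 address to the pod."""
        self.assigned[pod_name] = (next(self.v4_hosts), next(self.v6_hosts))
        return self.assigned[pod_name]

alloc = PodAddressAllocator("10.244.1.0/24", "fd00:244:1::/120")
v4, v6 = alloc.allocate("du-pod-0")
print(v4, v6)   # 10.244.1.1 fd00:244:1::1
```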


The cloud-native router provides the following key features and benefits:
  • Higher packet forwarding performance with DPDK-based JCNR-vRouter

  • Easy deployment, removal, and upgrade on general purpose compute devices using Helm

  • Full routing, switching, and forwarding stacks in software

  • Basic L2 functionality, such as MAC learning, MAC aging, MAC limiting, and L2 statistics

  • L2 reachability to Radio Units (RU) for management traffic

  • L2 or L3 reachability to physical distributed units (DU) such as 5G millimeter wave DUs or 4G DUs

  • VLAN tagging

  • Bridge domains

  • Trunk and access ports

  • Support for multiple virtual functions (VFs) on Ethernet NICs

  • Support for bonded VF interfaces

  • Configurable L2 access control lists (ACLs)

  • Rate limiting of egress broadcast, unknown unicast, and multicast traffic on fabric interfaces

  • IPv4 and IPv6 routing

  • Out-of-the-box software-based open radio access network (O-RAN) support

  • Quick spin up with containerized deployment

  • Highly scalable solution
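Several of the L2 features listed above (MAC learning, MAC aging, MAC limiting) can be illustrated with a small sketch of a learning table. The class, timers, and limit below are assumptions for illustration, not the JCNR-vRouter implementation.

```python
# Minimal sketch of L2 MAC learning with aging and a MAC limit.
# Parameters and structure are hypothetical, for illustration only.
class MacTable:
    def __init__(self, aging_secs=300.0, mac_limit=4):
        self.aging_secs = aging_secs
        self.mac_limit = mac_limit
        self.entries = {}          # mac -> (port, last_seen)

    def learn(self, mac, port, now):
        """Learn a source MAC; refuse new entries past the MAC limit."""
        self._age_out(now)
        if mac not in self.entries and len(self.entries) >= self.mac_limit:
            return False           # MAC limiting: table is full
        self.entries[mac] = (port, now)
        return True

    def lookup(self, mac, now):
        """Return the learned port, or None (meaning: flood in the domain)."""
        self._age_out(now)
        entry = self.entries.get(mac)
        return entry[0] if entry else None

    def _age_out(self, now):
        self.entries = {m: (p, t) for m, (p, t) in self.entries.items()
                        if now - t < self.aging_secs}

table = MacTable(aging_secs=300, mac_limit=4)
table.learn("00:11:22:33:44:55", "ge-0/0/1", now=0)
print(table.lookup("00:11:22:33:44:55", now=100))   # ge-0/0/1
print(table.lookup("00:11:22:33:44:55", now=400))   # None (aged out)
```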

Figure 1: Components of Juniper Cloud-Native Router

Juniper Networks refers to primary nodes and backup nodes. Kubernetes refers to master nodes and worker nodes. References in this guide to primary and backup correlate with master and worker in the Kubernetes world.

Kubernetes is an orchestration platform for running containerized applications in a clustered computing environment. It provides automatic deployment, scaling, networking, and management of containerized applications.

A Kubernetes pod consists of one or more containers, with each pod representing an instance of the application. A pod is the smallest unit that Kubernetes can manage. All containers in the pod share the same network namespace.
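The shared-namespace point can be made concrete with a minimal pod manifest, expressed here as a Python dict for illustration. The names and images are hypothetical; the structure is the standard Kubernetes pod spec.

```python
# A minimal two-container pod manifest (as a Python dict). Both
# containers run in one pod, so Kubernetes gives them a single shared
# network namespace: one IP address, one set of interfaces, and they
# can reach each other over localhost. Names/images are hypothetical.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "vrouter-example"},
    "spec": {
        "containers": [
            {"name": "vrouter", "image": "example/vrouter:latest"},
            {"name": "vrouter-agent", "image": "example/vrouter-agent:latest"},
        ]
    },
}

print(len(pod["spec"]["containers"]))   # 2 containers, one network namespace
```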

We rely on Kubernetes to orchestrate the infrastructure that the cloud-native router needs to operate. However, we do not supply Kubernetes installation or management instructions in this documentation; see the Kubernetes documentation for those details. Currently, Juniper Cloud-Native Router requires that the Kubernetes cluster be a standalone cluster, meaning that the Kubernetes primary and backup functions both run on a single node.

Juniper Cloud-Native Router Components

Juniper Cloud-Native Router Controller

The JCNR-Controller (cRPD) is the control-plane component of the cloud-native router solution. You use the controller to communicate with the other elements of the cloud-native router. Configuration, policies, and rules that you set on the controller at deploy time are communicated to the other components, primarily the JCNR-vRouter agent and JCNR-vRouter, for implementation.

For example, firewall filters (ACLs) are supported on cRPD to configure L2 access lists with deny rules. cRPD sends the configuration information to the JCNR-vRouter through the JCNR-vRouter agent.
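The deny-rule behavior described above can be sketched as a first-match rule evaluation. The rule format and matching logic here are illustrative assumptions, not the actual cRPD firewall-filter configuration model.

```python
# Sketch of an L2 access list with deny rules, evaluated first-match.
# Rule fields and the catch-all permit are assumptions for illustration.
acl = [
    {"action": "deny", "src_mac": "00:aa:bb:cc:dd:01"},
    {"action": "deny", "vlan": 200},
    {"action": "permit"},            # catch-all at the end of the list
]

def evaluate(frame, rules):
    """Return the action of the first rule whose fields all match the frame."""
    for rule in rules:
        if all(frame.get(k) == v for k, v in rule.items() if k != "action"):
            return rule["action"]
    return "permit"

print(evaluate({"src_mac": "00:aa:bb:cc:dd:01", "vlan": 100}, acl))  # deny
print(evaluate({"src_mac": "00:aa:bb:cc:dd:02", "vlan": 100}, acl))  # permit
```

In the actual solution the evaluation happens in the JCNR-vRouter data plane; the sketch only shows the match-then-act semantics of a deny list.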

Juniper Cloud-Native Router Controller Functionality:

  • Exposes Junos OS compatible CLI configuration and operation commands that are accessible to external automation and orchestration systems using the NETCONF protocol.

  • Supports JCNR-vRouter as the high-speed forwarding plane. This enables applications built using the DPDK framework to send and receive packets directly between the application and the JCNR-vRouter without passing through the kernel.

  • Supports configuration of VLAN-tagged sub-interfaces on physical function (PF), virtual function (VF), virtio, access, and trunk interfaces managed by the DPDK-enabled JCNR-vRouter

  • Supports configuration of bridge domains

  • Advertises DPDK application reachability to the core network using routing protocols, primarily BGP and IS-IS

  • Distributes L3 network reachability information of the pods inside and outside a cluster
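The NETCONF access mentioned in the first item can be illustrated by constructing an `<edit-config>` RPC such as an external orchestration system might send to the controller. The RPC framing below follows the standard NETCONF base namespace; the empty `configuration` subtree is a placeholder, since real payloads come from the Junos OS data models.

```python
import xml.etree.ElementTree as ET

# Sketch of building a NETCONF <edit-config> RPC, as an external
# orchestrator might send to the JCNR-Controller's NETCONF interface.
# The Junos configuration content is omitted; this shows framing only.
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
target = ET.SubElement(edit, f"{{{NC}}}target")
ET.SubElement(target, f"{{{NC}}}candidate")      # edit the candidate datastore
config = ET.SubElement(edit, f"{{{NC}}}config")
ET.SubElement(config, "configuration")           # Junos config subtree goes here

payload = ET.tostring(rpc, encoding="unicode")
print(payload)
```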

Juniper Cloud-Native Router vRouter

JCNR-vRouter is an alternative to the Linux bridge or the Open vSwitch (OVS) module in the Linux kernel. The pod that houses the JCNR-vRouter container also houses the JCNR-vRouter agent container. JCNR-vRouter functions to:

  • Perform L2 forwarding

  • Perform L2 rate-limiting

  • Allow the use of DPDK-based forwarding

  • Enforce L2 access control lists (ACLs)

  • Perform routing with Layer 3 virtual private networks

Juniper Cloud-Native Router-Container Network Interface (JCNR-CNI)

JCNR-CNI is a CNI plugin developed by Juniper to handle Juniper-developed pods such as the JCNR-vRouter agent and JCNR-vRouter agent DPDK, along with DPDK-enabled application pods and the cloud-native router controller. JCNR-CNI is a Kubernetes CNI plugin installed on each node to provision network interfaces for application pods. During pod creation, Kubernetes delegates pod interface creation and configuration to JCNR-CNI. JCNR-CNI interacts with cRPD and the JCNR-vRouter to set up DPDK interfaces. When a pod is removed, JCNR-CNI is invoked to deprovision the pod interface, configuration, and associated state in Kubernetes and the cloud-native router components. JCNR-CNI works with the Multus CNI to add and configure pod interfaces.
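The add/delete lifecycle described above can be sketched as a pair of handlers in the shape of the CNI ADD and DEL commands. The result structure loosely follows the CNI result convention; the interface name, address, and state handling are assumptions, and the real plugin additionally programs cRPD and the vRouter.

```python
import json

# Skeleton of how a CNI plugin such as JCNR-CNI handles the ADD and DEL
# commands that Kubernetes delegates to it. Values are hypothetical.
def cni_add(pod_name):
    # Real plugin: create the interface, obtain addresses from IPAM,
    # push interface configuration to cRPD and the vRouter, then
    # report the result back to the runtime.
    return {
        "cniVersion": "0.4.0",
        "interfaces": [{"name": "net1", "sandbox": f"/var/run/netns/{pod_name}"}],
        "ips": [{"address": "10.244.1.10/24", "interface": 0}],
    }

def cni_del(state, pod_name):
    # Real plugin: tear down the interface and remove the associated
    # state from Kubernetes and the cloud-native router components.
    state.pop(pod_name, None)

state = {}
state["du-pod-0"] = cni_add("du-pod-0")          # pod created: ADD
print(json.dumps(state["du-pod-0"], indent=2))   # CNI-style result
cni_del(state, "du-pod-0")                       # pod removed: DEL
print(len(state))                                # 0, state cleaned up
```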

JCNR-CNI provides the following functionality:

  • Manages the networking tasks in Kubernetes pods such as assigning IP addresses, allocating MAC addresses, and setting up interfaces between the Pod and host in a Kubernetes cluster

  • Applies L2 ACLs by sending the policies to the JCNR-vRouter, which enforces them in the data plane

  • Acts on Pod events such as add and delete

  • Generates cRPD configuration


Juniper Cloud-Native Router uses a syslog-ng pod to gather event logs from cRPD and the vRouter and to transform the logs into JSON-based notifications. The notifications are written to a file, from which they can be accessed.
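The log-to-notification transformation can be sketched as parsing a syslog line into named fields and emitting them as JSON. The line format and field names below are illustrative assumptions, not the actual JCNR notification schema.

```python
import json
import re

# Sketch of turning a syslog line into a JSON-based notification, in
# the spirit of the syslog-ng pod. Line format and field names are
# illustrative, not the actual JCNR schema.
LINE = "Jan 12 10:15:32 jcnr-node-1 rpd[1234]: BGP peer 10.1.1.2 Down"

PATTERN = re.compile(
    r"(?P<timestamp>\w{3} +\d+ [\d:]+) (?P<host>\S+) "
    r"(?P<app>\w+)\[(?P<pid>\d+)\]: (?P<message>.*)"
)

def to_notification(line):
    """Parse one syslog line and render it as a JSON notification."""
    fields = PATTERN.match(line).groupdict()
    return json.dumps(fields)

notification = to_notification(LINE)
print(notification)
```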