Juniper Cloud-Native Router - Overview

The Juniper® Cloud-Native Router (cloud-native router) is a container-based software solution orchestrated by Kubernetes. The cloud-native router combines the containerized routing protocol process (cRPD) with a Data Plane Development Kit (DPDK)-enabled Contrail® Networking™ vRouter (vRouter). With the cloud-native router, you get a full Junos-based control plane together with the enhanced forwarding capabilities of the DPDK-enabled vRouter.

Benefits of Juniper Cloud-Native Router

  • Deployment in either L2 (switch) or L3 (routing) mode

  • Higher packet forwarding performance with DPDK-enabled vRouter

  • Easy deployment on general-purpose compute devices

  • Full routing and forwarding stacks in software

  • Out-of-the-box software-based open radio access network (O-RAN) support

  • IPv4 and IPv6 routing and forwarding

  • Quick spin-up with containerized deployment on Kubernetes

  • Highly scalable solution

Kubernetes

Note:

Juniper Networks documentation refers to primary nodes and backup nodes. Kubernetes documentation refers to master nodes and worker nodes. References in this guide to primary and backup correlate with master and worker, respectively, in the Kubernetes world.

Let's talk a little about Kubernetes in this section. Kubernetes is an orchestration platform for running containerized applications in a clustered computing environment. Kubernetes provides automatic deployment, scaling, networking, and management of containerized applications. Because Juniper Cloud-Native Router is a container-based solution, we've chosen Kubernetes as the orchestration platform. For complete details about Kubernetes, including installation, cluster creation, management, and maintenance, see https://kubernetes.io/.

The major components of a Kubernetes cluster are:

  • Nodes

    Kubernetes uses two types of nodes: primary (control) nodes and compute (worker) nodes. A Kubernetes cluster usually consists of one or more primary nodes (in active/standby mode) and one or more worker nodes. You create a node on a physical computer or a virtual machine (VM).

    Note:

    In Juniper Cloud-Native Router Release 22.X, you must provide a working, single-node Kubernetes cluster. The cloud-native router does not support multinode clusters in which primary and worker nodes run on separate VMs or bare-metal servers (BMS).

  • Pods

    Pods live on nodes and provide a space for containerized applications to run. A Kubernetes pod consists of one or more containers, with each pod representing an instance of the application. A pod is the smallest unit that Kubernetes can manage. All containers in a pod share the same network namespace. (See the example Pod manifest after this list.)

  • Namespaces

    In Kubernetes, pods operate within a namespace to isolate groups of resources within a cluster. All Kubernetes clusters have a kube-system namespace, which is for objects created by the Kubernetes system. Kubernetes also has a default namespace, which holds all objects that don't specify their own namespace. The last two preconfigured Kubernetes namespaces are kube-public and kube-node-lease. The kube-public namespace allows unauthenticated users to read some aspects of the cluster. The kube-node-lease namespace holds node lease objects; node leases allow the kubelet to send heartbeats so that the control plane can detect node failure.

    In Juniper Cloud-Native Router Release 22.X, some of the pods run in the kube-system namespace while others run in their own namespaces.

  • Kubelet

    The kubelet is the primary node agent that runs on each node. In the case of Juniper Cloud-Native Router, only a single kubelet runs in the cluster because we do not support multinode deployments.

  • Containers

    A container is a single package that consists of an entire runtime environment including the application and its:

    • Configuration files

    • Dependencies

    • Libraries

    • Other binaries

    Software that runs in containers can, for the most part, ignore differences in binaries, libraries, and configurations between the container environment and the environment that hosts the container. Common container runtimes are Docker, containerd, and CRI-O (a Container Runtime Interface implementation that uses Open Container Initiative-compatible runtimes).

    For Juniper Cloud-Native Router Release 22.X, Docker is the only supported container runtime.
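
To make the pod, namespace, and container concepts above concrete, here is a minimal Kubernetes Pod manifest. This is a generic illustration rather than part of the cloud-native router itself; the pod name and container image are hypothetical.

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app        # hypothetical pod name
      namespace: default       # pods land in the default namespace unless you specify another
    spec:
      containers:
      - name: app              # a pod can hold one or more containers
        image: nginx:1.25      # hypothetical image; a container bundles the app with its libraries and dependencies

You create the pod with kubectl apply -f pod.yaml. Kubernetes schedules it onto a node, where the kubelet starts its containers.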

Juniper Cloud-Native Router Components

The Juniper Cloud-Native Router solution consists of several components. This section provides a brief overview of the components of Juniper Cloud-Native Router.

Figure 1 shows the components of the Juniper Cloud-Native Router inside a Kubernetes cluster. The green-colored components are specific to the Juniper Cloud-Native Router, while the others are required third-party components.

Figure 1: Cloud-Native Router Components
  • Juniper Cloud-Native Router Controller (JCNR-controller or cRPD)

    The cRPD acts as the control plane for the cloud-native router. It performs management functions and maintains configuration information for the vRouter forwarding plane. cRPD is based on the Junos OS control plane. You can configure the JCNR-controller using:

    • YAML-formatted Helm charts

    • Third-party management platforms that use the NETCONF protocol

    • API calls to the cRPD MGD

    • Direct CLI access to the cRPD Pod
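
    For example, you can open a Junos CLI session directly in the cRPD pod with kubectl exec. This is a minimal sketch; the pod name (jcnr-crpd-0) and namespace (jcnr) are hypothetical and vary by deployment:

      kubectl exec -it jcnr-crpd-0 -n jcnr -- cli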

    The remainder of this section applies only to the L3 version of JCNR.

    You can configure the requisite protocols (IGPs and BGP) on the JCNR-controller, using NETCONF or the CLI, to provide reachability over tunnels. The JCNR-controller establishes adjacencies for the various protocols, learns routes, and programs the forwarding information base (FIB, also known as the forwarding table) into the JCNR-vRouter agent through gRPC services. JCNR-vRouter provides a bidirectional gRPC channel for communication with the JCNR-controller.
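
    As an illustration, the following Junos-style CLI statements enable IS-IS on the vhost0 interface and an EBGP session. This is a minimal sketch; the ISO address, neighbor address, and AS numbers are hypothetical values:

      set interfaces lo0 unit 0 family iso address 49.0001.0102.0304.0506.00
      set protocols isis interface vhost0
      set routing-options autonomous-system 64511
      set protocols bgp group underlay type external
      set protocols bgp group underlay neighbor 10.1.1.2 peer-as 64512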

    A typical routing update for the underlay network follows this process:

    • JCNR-controller learns of a new underlay network route on its vhost0 interface.
    • JCNR-controller sends a gRPC-encoded route message (IPv4 or IPv6) to the vRouter agent over its gRPC interface.
    • The vRouter agent performs an ARP or NDP lookup for the next-hop.
    • The vRouter agent programs the next-hop, along with its encapsulation data, into the vRouter and waits for an ACK message in return.
    • The vRouter agent programs the underlay route into the vRouter and waits for an ACK message in return.

    Once a learned underlay route is no longer valid, JCNR-controller sends a route delete message to the vRouter agent, which signals the vRouter to delete the route and next-hop as needed.
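
    You can verify the result of this exchange from the control plane. A minimal sketch, assuming the hypothetical pod name and namespace from earlier and a hypothetical underlay prefix:

      kubectl exec -it jcnr-crpd-0 -n jcnr -- cli -c "show route 192.168.10.0/24"

    The corresponding forwarding entry can be inspected through the vRouter introspect port (TCP 8085; see Table 1).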

    A typical routing update for the overlay (Pod-to-Pod) network follows this process:

    • JCNR-controller learns a new remote Pod route (Pod-to-Pod) through BGP.
    • JCNR-controller resolves the next-hop over an SR-MPLS tunnel whose label stack is populated by IS-IS. This creates next-hop information for the Pod's IP address that contains the service label and the transport labels associated with the MPLS tunnel.
    • JCNR-controller then sends a gRPC-encoded route message to the vRouter agent that contains the Pod IP, VRF name, next-hop IP, service label, and between 0 and 4 transport labels.
    • The vRouter agent resolves the next-hop IP using NDP or ARP, depending on whether the address is IPv6 or IPv4, respectively.
    • The vRouter agent creates an MPLS tunnel next-hop (if not already present) and programs it into the vRouter.
    • The vRouter creates the MPLS tunnel next-hop, adds it to the next-hop table, and sends an ACK message to the vRouter agent in response.
    • The vRouter agent then programs the Pod route in the Pod VRF along with the next-hop created in the previous step.
    • The vRouter adds the route entry for the Pod in the Pod VRF, along with a pointer to the next-hop, and sends an ACK message to the vRouter agent in response.

    When the overlay route is withdrawn, JCNR-controller sends a route delete message to the vRouter agent, which signals the vRouter to delete the route and next-hop information from the forwarding tables.
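
    To check the overlay side, you can display the Pod VRF on cRPD. A minimal sketch, assuming a hypothetical VRF named blue; the Pod route should resolve over the SR-MPLS tunnel with its service and transport labels:

      kubectl exec -it jcnr-crpd-0 -n jcnr -- cli -c "show route table blue.inet.0"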

    JCNR-controller supports access control lists (ACLs) to configure the networking policy for application pods. The integration with the JCNR-vRouter agent means that these network policies are automatically shared with the JCNR-vRouter.
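
    For reference, Kubernetes networking policy is typically expressed with the standard NetworkPolicy resource; whether the JCNR-controller consumes this exact resource is release-dependent, and all names and labels below are hypothetical. This minimal example restricts ingress to pods labeled app: dataplane so that only pods labeled role: trusted can reach them:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-trusted          # hypothetical policy name
        namespace: default
      spec:
        podSelector:
          matchLabels:
            app: dataplane           # hypothetical label on the protected pods
        policyTypes:
        - Ingress
        ingress:
        - from:
          - podSelector:
              matchLabels:
                role: trusted        # only pods with this label may connect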

  • Juniper Cloud-Native Router vRouter (JCNR-vRouter or vRouter)

    JCNR-vRouter acts as the forwarding, or data, plane for Juniper Cloud-Native Router. It interacts with the JCNR-controller through the vRouter-agent and receives and forwards packets through its various interfaces.

    JCNR-vRouter enables applications built using the DPDK framework to send and receive packets directly between the application and vRouter without passing through the kernel.

    The vRouter receives configuration and management information from JCNR-controller through the JCNR vRouter-agent using the gRPC protocol.

  • Juniper Cloud-Native Router-Container Network Interface (JCNR-CNI)

    JCNR-CNI is a Kubernetes CNI plugin responsible for provisioning network interfaces for application pods. The vRouter acts as the data plane for these application pod interfaces. JCNR-CNI interacts with Kubernetes, JCNR-controller, and JCNR-vRouter. JCNR-CNI manages the vRouter interface lifecycle and the corresponding cRPD configuration. When you remove an application pod, JCNR-CNI removes the corresponding interface configuration from cRPD and the state information from the vRouter-DPDK forwarding plane.
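
    In practice, an application pod typically requests a CNI-managed interface through a NetworkAttachmentDefinition and a pod annotation, following the Multus convention. The sketch below is illustrative only; the network name is hypothetical, the "type" value is an assumption, and the exact CNI configuration keys vary by release:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: blue-net        # hypothetical network name
      spec:
        # The "type" field names the CNI plugin; "jcnr" is an assumption in this sketch.
        config: '{ "cniVersion": "0.4.0", "name": "blue-net", "type": "jcnr" }'

    A pod then references the network in its metadata:

      annotations:
        k8s.v1.cni.cncf.io/networks: blue-net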

Ports Used by Cloud-Native Router

Juniper Cloud-Native Router listens on certain TCP and UDP ports. Table 1 shows the ports, protocols, and a description for each one.

Table 1: Cloud-Native Router Listening Ports

Protocol  Port   Description
TCP       8085   vRouter introspect; used to gain internal statistical information about the vRouter
TCP       8070   Telemetry; used to see telemetry data from the cloud-native router
TCP       9091   vRouter health check; verifies that the contrail-vrouter-dpdk process is running
TCP       50052  gRPC port; JCNR listens on both IPv4 and IPv6
TCP       24     cRPD SSH
TCP       830    cRPD NETCONF
TCP       666    rpd
TCP       1883   Mosquitto MQTT; publish/subscribe messaging utility
TCP       9500   agentd on cRPD
TCP       21883  na-mqttd
TCP       50051  jsd on cRPD
TCP       51051  jsd on cRPD
UDP       50055  Syslog-NG
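
To confirm which of these ports are open on a host running the cloud-native router, you can use a standard Linux socket listing, for example:

  ss -tulnp | grep -E ':8085|:8070|:9091|:50052'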