Install Juniper Cloud-Native Router

SUMMARY The Juniper Cloud-Native Router (JCNR) uses the JCNR-Controller (cRPD-based control plane) and JCNR-CNI to provide control plane capabilities and a container network interface. Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how you can install these components of the Cloud-Native Router.

The JCNR-Controller (cRPD) is an initialization container that provides control plane functionality for the cloud-native router. The control plane is responsible for provisioning of the workload and fabric interfaces used in Juniper Cloud-Native Router. It also manages communication with the vRouter-agent and the vRouter itself over a gRPC connection.

The JCNR-CNI is the container network interface that Juniper Cloud-Native Router uses to communicate with physical interfaces on the server and pod and container network interfaces within the installation.

The Juniper Cloud-Native Router Virtual Router (vRouter) is a container application set that provides advanced forwarding plane functionality. It extends the network from the physical routers and switches into a virtual overlay network hosted in the virtualized servers. The Data Plane Development Kit (DPDK) enables the vRouter to process more packets per second than is possible when the vRouter runs as a kernel module.

The Syslog-NG is a container application that allows Juniper Cloud-Native Router to provide notifications to users about events that occur in the cloud-native router deployment.

Install Juniper Cloud-Native Router Using Helm Chart

Read this section to learn the steps required to load the cloud-native router image components into Docker and install the cloud-native router components using Helm charts.

As mentioned in the System Resource Requirements, the Helm package manager for Kubernetes must be installed prior to installing Juniper Cloud-Native Router components.


We do not provide a specific path into which you must download the package and install the software. Because of this, you can copy the commands shown throughout this document and paste them into the CLI of your server.

The high-level overview of Juniper Cloud-Native Router installation is:

  1. Download the software installation package (tarball).
  2. Expand the tarball.
  3. Change directory to Juniper_Cloud_Native_Router_<release-number>.
  4. Load the image files into Docker.
  5. Enter the root password for your host server and your Juniper Cloud-Native Router license into the secrets/jcnr-secrets.yaml file.
  6. Apply the secrets/jcnr-secrets.yaml to the K8s system.
  7. Edit the values.yaml file to suit the needs of your installation.
  8. Install the Juniper Cloud-Native Router.

Each high-level procedure listed above is detailed below:

  1. Download the tarball, Juniper_Cloud_Native_Router_<release-number>.tgz, to the directory of your choice.
    How you get the tarball into a writeable directory on your server is up to you. You must perform the file transfer in binary mode so the compressed tar file will expand properly.
  2. Expand the file Juniper_Cloud_Native_Router_<release-number>.tgz.
  3. Change directory to Juniper_Cloud_Native_Router_22.3

    All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_22.3.
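    Steps 1 through 3 can be sketched as shell commands. This is a sketch only: the release number 22.3 matches the directory names used in this section, and the extraction is guarded so the commands only run when the tarball is present in the current directory.

    ```shell
    # Expand the tarball (it must have been transferred in binary mode)
    # and change into the release directory; all remaining steps in this
    # procedure run from there.
    if [ -f Juniper_Cloud_Native_Router_22.3.tgz ]; then
      tar zxf Juniper_Cloud_Native_Router_22.3.tgz
      cd Juniper_Cloud_Native_Router_22.3
    fi
    ```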

  4. Load the image files, jcnr-cni-images.tar.gz, jcnr-vrouter-images.tar.gz, and syslog-ng-images.tar.gz, into Docker. The image files are located in the Juniper_Cloud_Native_Router_22.3/images directory relative to where you expanded the tarball in the previous step.
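    Loading the three image archives might look like the following sketch; docker load with -i reads an image archive from a file, and the paths assume your current working directory is the expanded release directory:

    ```shell
    # Load each bundled image archive into the local Docker image store.
    docker load -i images/jcnr-cni-images.tar.gz
    docker load -i images/jcnr-vrouter-images.tar.gz
    docker load -i images/syslog-ng-images.tar.gz
    ```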
  5. Enter the root password for your host server and your Juniper Cloud-Native Router license file into the secrets/jcnr-secrets.yaml file.
    You must enter the password and license in base64-encoded format.
    To encode the password, create a file that contains only the plain-text password on a single line, then base64-encode that file. The output is a single line of base64 text.
    To encode the license file, copy the file onto your host server and base64-encode it. The output is a long single line of base64 text.

    You must obtain your license file from your account team and install it in the secrets/jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license file and root password in that file, the cRPD pod does not enter the Running state; it remains in CrashLoopBackOff state.

    You must copy the base64 outputs and paste them into the secrets/jcnr-secrets.yaml file in the appropriate locations.
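    On a Linux host, the encoding can be done with the standard base64 utility. In this sketch, passwd.txt and license.txt are hypothetical file names; a sample password file is created so the commands are runnable as shown.

    ```shell
    # Create a file containing only the plain-text password on a single
    # line (passwd.txt is a hypothetical name; use your own file).
    printf 'MyRootPassword' > passwd.txt

    # -w 0 disables line wrapping so the output is one base64 line that
    # you can paste into secrets/jcnr-secrets.yaml.
    base64 -w 0 passwd.txt   # prints TXlSb290UGFzc3dvcmQ=

    # Encode the license file the same way (license.txt is hypothetical):
    # base64 -w 0 license.txt
    ```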
  6. Apply the secrets/jcnr-secrets.yaml file to the K8s system.
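    Applying the secrets file is a single kubectl command; the path is relative to the expanded release directory:

    ```shell
    # Apply the base64-encoded password and license to the cluster.
    kubectl apply -f secrets/jcnr-secrets.yaml
    ```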
  7. Edit the helm_charts/jcnr/values.yaml file.

    You must customize the Helm chart for the Juniper Cloud-Native Router installation:

    • Choose fabric interfaces: use interface names from your host system.

    • Create the VLAN ID list for trunk interfaces: use VLAN IDs that fit your network.

    • Choose a fabric workload interface: use an interface name from your host system.

    • Set the VLAN ID for traffic on the workload interface.

    • Set the severity level for JCNR-vRouter logging.

      Leave the log_level set to INFO unless instructed to change it by JTAC.

    • Set the CPU core mask: physical cores, logical cores.

    • Choose the fabric interface: use interface names from your host system.

    • Choose a workload interface: use interface names from your host system.

    • Set a rate limit for broadcast and multicast traffic in bytes per second.

    • Set a writeable directory location for syslog-ng to store notifications.

    • (Optional) If you specify a bond interface as your fabricInterface:, provide slaveInterface names from your system under the bondInterfaceConfigs: section.

    • By default, restoreInterface is set to false. With this setting, if the vRouter pod crashes or is deleted, the interfaces are not restored to the host.


    If you are using the Intel XL710 NIC, you must set ddp=false in the values.yaml file.

    See Sample Configuration Files for a commented example of the default helm_charts/jcnr/values.yaml file.
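    As an illustrative sketch only, the edits might look like the fragment below. The interface names are placeholders, and the key names are modeled on the items listed above and may differ in your release; consult the commented default file for the authoritative keys.

    ```yaml
    # Illustrative sketch only -- replace every value with settings from
    # your own host and network; see helm_charts/jcnr/values.yaml for
    # the full set of keys.
    fabricInterface:
      - ens1f0              # fabric interface name from your host system
    fabricWorkloadInterface:
      - ens1f1              # workload interface name from your host system
    log_level: INFO         # leave at INFO unless JTAC instructs otherwise
    restoreInterface: false # interfaces are not restored to the host on crash/delete
    ddp: false              # required when using the Intel XL710 NIC
    ```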

  8. Deploy the Juniper Cloud-Native Router using Helm.
  9. Confirm the Juniper Cloud-Native Router deployment.
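Steps 8 and 9 might look like the following sketch; the release name jcnr is an assumption, and the chart path is the one from the expanded tarball:

```shell
# Deploy the cloud-native router from the bundled Helm chart.
helm install jcnr helm_charts/jcnr

# Confirm the release and check that all pods reach the Running state.
helm ls
kubectl get pods -A
```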

Verify Operation of Containers

SUMMARY This task allows you to confirm that the Juniper Cloud-Native Router Pods are running.
  1. kubectl get pods -A
    The output of the kubectl command shows all of the pods in the K8s cluster in all namespaces. Successful deployment means that all pods are in the Running state.
  2. kubectl get ds -A
    Use the kubectl get ds -A command to get a list of daemonsets in all namespaces.
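    To narrow the listings to just the cloud-native router components, you can filter the pod list; the name patterns below are assumptions about how the pods are named:

    ```shell
    # Show only pods whose names suggest JCNR components.
    kubectl get pods -A | grep -Ei 'crpd|vrouter|syslog'

    # List all daemonsets across namespaces.
    kubectl get ds -A
    ```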