Install Juniper Cloud-Native Router

SUMMARY The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD-based control plane) and JCNR-CNI to provide control plane capabilities and a container network interface. Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how to install these components of the cloud-native router.

The JCNR-Controller (cRPD) is an initialization container that provides control plane functionality for the cloud-native router. The control plane is responsible for provisioning of the workload and fabric interfaces used in Juniper Cloud-Native Router. It also manages communication with the vRouter-agent and the vRouter itself over a gRPC connection.

The JCNR-CNI is the container network interface that Juniper Cloud-Native Router uses to communicate with physical interfaces on the server and with pod and container network interfaces within the installation.

The Juniper Cloud-Native Router Virtual Router (vRouter) is a container application set that provides advanced forwarding plane functionality. It extends the network from the physical routers and switches into a virtual overlay network hosted in the virtualized servers. The Data Plane Development Kit (DPDK) enables the vRouter to process more packets per second than is possible when the vRouter runs as a kernel module.

The Syslog-NG is a container application that allows Juniper Cloud-Native Router to provide notifications to users about events that occur in the cloud-native router deployment.

Install Juniper Cloud-Native Router Using Helm Chart

Read this section to learn the steps required to load the cloud-native router image components into Docker and install the cloud-native router components using Helm charts.

Note:

In the installation sections of this guide, we generally do not specify version information when referring to file and directory names. When we do specify the version number in a file or directory name, we are referring to the current (latest) release.

Note:

We do not recommend deploying Juniper Cloud-Native Router Release 22.4 if the Kubernetes CPU Manager is enabled in your Kubernetes cluster.

As mentioned in the System Resource Requirements, the Helm package manager for Kubernetes must be installed prior to installing Juniper Cloud-Native Router components.

Note:

We do not require a specific path into which you must download the package and install the software. Because of this, you can copy the commands shown throughout this document and paste them into the CLI of your server.

The high-level overview of Juniper Cloud-Native Router installation is:

  1. Download the software installation package (tarball)
  2. Expand the tarball

  3. Change directory to Juniper_Cloud_Native_Router_<release number>

  4. Load the image files into Docker

  5. Enter the root password for your host server and your Juniper Cloud-Native Router license into the secrets/jcnr-secrets.yaml file

  6. Apply the secrets/jcnr-secrets.yaml to the Kubernetes system

    Note:

    For an L2 deployment: perform step 7 below and skip step 8.

    For an L3 deployment: skip step 7 and perform step 8.

    Perform only one of step 7 or step 8.

  7. Edit the values.yaml file to suit the needs of your installation
  8. Edit the values_L3.yaml file to suit the needs of your installation

  9. Install the Juniper Cloud-Native Router

Each high-level step listed above is detailed below.

  1. Download the tarball, Juniper_Cloud_Native_Router_<release-number>.tgz, to the directory of your choice.
    How you get the tarball into a writeable directory on your server is up to you. You must perform the file transfer in binary mode so the compressed tar file will expand properly.
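    For example, one common option is scp, which copies the file in binary mode (the user, host, and destination directory below are placeholders for your environment):
    scp Juniper_Cloud_Native_Router_<release-number>.tgz user@host-server:/path/of/your/choice/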
  2. Expand the file Juniper_Cloud_Native_Router_<release-number>.tgz.
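    For example, using tar:
    tar xzf Juniper_Cloud_Native_Router_<release-number>.tgz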
  3. Change directory to Juniper_Cloud_Native_Router_<release-number>.
    Note:

    All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_22.4.

  4. Load the image files into Docker from the file jcnr-images.tar.gz, located in the Juniper_Cloud_Native_Router_22.4/images directory that was created when you expanded the tarball.
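    For example, assuming your current working directory is the expanded Juniper_Cloud_Native_Router_22.4 directory from the previous step:
    docker load -i images/jcnr-images.tar.gz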
  5. Enter the root password for your host server and your Juniper Cloud-Native Router license file into the secrets/jcnr-secrets.yaml file.
    You must enter the password and license in base64 encoded format.
    To encode the password, create a file that contains only the plain-text password on a single line, and then encode that file with a base64 command (see the example after this step).
    The output is a single line of random-looking text similar to:
    UGFzc3cwcmQhCg==
    To encode the license file, copy the file onto your host server and encode it in the same way (see the example after this step).
    The output is a long single line of random-looking text similar to:
    VGhpcyBpcyBhIHJlYWxseSBtdWNoIGxvbmdlciB0ZXh0IGZpbGUgdGhhdCBpbmNsdWRlcyBsaWNlbnNlIGluZm9ybWF0aW9uCkFTREZERktERktIQUxHS0hiYW9qa2hkZmFzZGZOS0FTREdOR0FKYWRzZmxodmFibmRzZmdramh2Ym5ramFzZnVxYmF1amgyMDEwdGIydDQweGtqYjR3eTB1dmRxd3J2MGl3aGV0Ymd1YnMwcWRqZmhkc2tqdmJkc2ZramhkdmFkZnNiO2d2a2pzZGI7aWRzamc7ZmFzZGhma2pkc2J2YWlzdWRmZ3dFWUlUR1ZCMzlWRVlCVjM0OVVHQlZHQlFVOUFXR1ZJQkVSV0c5VUJWV0U5Rwo=
    Note:

    You must obtain your license file from your account team and install it in the secrets/jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license file and root password in that file, the cRPD Pod does not enter the Running state, but remains in the CrashLoopBackOff state.

    You must copy the base64 outputs and paste them into the secrets/jcnr-secrets.yaml file in the appropriate locations.
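    For example, assuming GNU coreutils base64 and hypothetical file names passwd.txt (a file containing only the plain-text password) and jcnr-license.txt (the license file):
    base64 -w 0 passwd.txt        # encode the root password
    base64 -w 0 jcnr-license.txt  # encode the license file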
  6. Apply the secrets/jcnr-secrets.yaml file to the Kubernetes system (see the example after the following note).
    Note:

    For an L2 deployment: perform step 7 below and skip step 8.

    For an L3 deployment: skip step 7 below and perform step 8.

    Perform only one of step 7 or step 8, not both.
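
    A minimal example of the step 6 command, assuming your current working directory is still the expanded release directory that contains the secrets folder:

    kubectl apply -f secrets/jcnr-secrets.yaml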

  7. For an L2 deployment, edit the helmchart/values.yaml file.

    You must customize the Helm chart for the Juniper Cloud-Native Router installation in L2 mode (see the illustrative fragment after this list):

    • Choose fabric interfaces–Use interface names from your host system

    • Create the VLAN id list for trunk interfaces–Use VLAN ids that fit in your network

    • Choose a fabric workload interface-Use interface names from your host system

    • Set the VLAN id for traffic on the workload interface

    • Set the severity level for JCNR-vRouter logging

      Note:

      Leave the log_level set to INFO unless instructed to change it by JTAC.

    • Ensure that the mode option is set to "l2"

    • Set the cpu core mask–physical cores, logical cores

    • (Optional) Set a rate limit for broadcast, multicast, and unknown unicast traffic in bytes per second by assigning storm control profiles

    • (Optional) Set a core pattern to determine the generated names for core files. If you leave it blank, then cloud-native router pods do not overwrite the existing core pattern

    • (Intel 810 NIC only) Enable QoS on the NIC by setting true or false (default is false)

    • Set a writeable directory location for syslog-ng to store notifications

    • (Optional) If you specify a bond interface as your fabricInterface:, provide slaveInterface names from your system under the bondInterfaceConfigs: section.

    • By default, restoreInterface is set to false. With this setting, the interfaces are not restored to the host when the vRouter pod crashes or is deleted.

    Note:

    If you are using the Intel XL710 NIC, you must set ddp=false in the values.yaml file.

    See Sample Configuration Files for a commented example of the default helmchart/values.yaml file.
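
    The fragment below is purely illustrative: it uses only option names mentioned above, and the nesting and example interface names are assumptions. Consult the commented helmchart/values.yaml in Sample Configuration Files for the authoritative layout.

    # Illustrative fragment only; see Sample Configuration Files for the real layout.
    mode: l2                  # must be "l2" for an L2 deployment
    log_level: INFO           # leave at INFO unless instructed otherwise by JTAC
    ddp: false                # must be false when using the Intel XL710 NIC
    restoreInterface: false   # interfaces are not restored to the host if the vRouter pod crashes or is deleted
    fabricInterface:
      - bond0                 # hypothetical bond interface; use names from your host system
    bondInterfaceConfigs:
      - name: bond0
        slaveInterfaces:      # key name assumed from "slaveInterface" in the text above
          - ens2f0            # hypothetical slave interface names from your host
          - ens2f1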

  8. For an L3 deployment, edit the helmchart/values_L3.yaml file.

    You must customize the Helm chart for the Juniper Cloud-Native Router installation in L3 mode (see the illustrative fragment after this list):

    • Assign IP addresses to interfaces that you configure in values_L3.yaml

    • Set the severity level for JCNR-vRouter logging

      Note:

      Leave the log_level set to INFO unless instructed to change it by JTAC.

    • Ensure that the mode option is set to "l3"

    • Set the cpu core mask–physical cores, logical cores

    • (Optional) Set a core pattern to determine the generated names for core files. If you leave it blank, then cloud-native router pods do not overwrite the existing core pattern

    • Set a writeable directory location for syslog-ng to store notifications

    See Sample Configuration Files for a commented example of the default helmchart/values_L3.yaml file.
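
    As with the L2 fragment above, the following is purely illustrative; the interface and IP-address stanzas vary, so consult the commented helmchart/values_L3.yaml in Sample Configuration Files for the authoritative layout.

    # Illustrative fragment only; see Sample Configuration Files for the real layout.
    mode: l3          # must be "l3" for an L3 deployment
    log_level: INFO   # leave at INFO unless instructed otherwise by JTAC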

  9. Deploy the Juniper Cloud-Native Router using Helm
    Note:

    Starting with Juniper Cloud-Native Router Release 22.4, you must specify either the values.yaml or values_L3.yaml file when you deploy the cloud-native router. You specify the YAML file in the helm install command as shown in the examples below.

    For an L2 installation, issue the helm install command with the values.yaml file; for an L3 installation, issue it with the values_L3.yaml file (see the example below).
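    A minimal sketch of the deployment commands, assuming your current working directory is the helmchart directory and that you choose jcnr as the Helm release name (both the chart path and the release name are assumptions; adjust them for your environment):
    helm install jcnr . -f values.yaml       # L2 deployment
    helm install jcnr . -f values_L3.yaml    # L3 deployment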
  10. Confirm Juniper Cloud-Native Router Deployment
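    To confirm that the chart deployed, you can list the installed Helm releases; the release name shown depends on the name you supplied to helm install:
    helm ls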

Verify Operation of Containers

SUMMARY This task allows you to confirm that the Juniper Cloud-Native Router Pods are running.
  1. Run the kubectl get pods -A command.
    The output of the kubectl command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all of the pods, including the Juniper Cloud-Native Router pods, are in the Running state.
  2. Run the kubectl get ds -A command.
    The output lists the daemonsets in the Kubernetes cluster in all namespaces.