Install Juniper Cloud-Native Router

SUMMARY The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD-based control plane) and the JCNR-CNI to provide control plane capabilities and a container network interface. It uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how to install these components of the cloud-native router.

The JCNR-Controller (cRPD) is an initialization container that provides control plane functionality for the cloud-native router. The control plane is responsible for provisioning the workload and fabric interfaces used in the Juniper Cloud-Native Router. It also manages communication with the vRouter-agent and the vRouter itself over a gRPC connection.

The JCNR-CNI is the container network interface that the Juniper Cloud-Native Router uses to communicate with the physical interfaces on the server and with the pod and container network interfaces within the installation.

The Juniper Cloud-Native Router Virtual Router (vRouter) is a container application set that provides advanced forwarding plane functionality. It extends the network from the physical routers and switches into a virtual overlay network hosted in the virtualized servers. The Data Plane Development Kit (DPDK) enables the vRouter to process more packets per second than is possible when the vRouter runs as a kernel module.

Syslog-NG is a container application that allows the Juniper Cloud-Native Router to notify users about events that occur in the cloud-native router deployment.

Install Juniper Cloud-Native Router Using Helm Chart

Read this section to learn how to load the cloud-native router images into Docker and install the cloud-native router components using Helm charts.

Note:

In the installation sections of this guide, we generally do not specify version information when referring to file and directory names. When we do specify the version number in a file or directory name, we are referring to the current (latest) release.

Note:

We do not recommend deploying Juniper Cloud-Native Router version 23.1 if the Kubernetes CPU Manager is enabled in your Kubernetes cluster.

As mentioned in the System Resource Requirements, you must install the Helm package manager for Kubernetes before you install the Juniper Cloud-Native Router components.

Note:

We do not require a specific path into which you download the package and install the software. Because of this, you can copy the commands shown throughout this document and paste them into the CLI of your server.

The high-level overview of Juniper Cloud-Native Router installation is:

  1. Download the software installation package (tarball).

  2. Expand the tarball.

  3. Change directory to Juniper_Cloud_Native_Router_<release-number>.

  4. View the contents of the directory.

  5. Load the image files into Docker.

  6. Enter the root password for your host server and your Juniper Cloud-Native Router license into the secrets/jcnr-secrets.yaml file.

  7. Apply the secrets/jcnr-secrets.yaml file to the Kubernetes system.

    Note:

    Juniper Cloud-Native Router can be deployed in either L2 or L3 mode. Perform only one of step 8 or step 9, depending on whether you want to deploy in L2 or L3 mode.

  8. Edit values.yaml to suit the needs of your installation for L2 mode.
  9. Edit values_L3.yaml to suit the needs of your installation for L3 mode.

  10. Install the Juniper Cloud-Native Router.

Each high-level step listed above is detailed below:

  1. Download the tarball, Juniper_Cloud_Native_Router_<release-number>.tgz, to the directory of your choice.

    You must perform the file transfer in binary mode when transferring the file to your server, so that the compressed tar file expands properly.

  2. Expand the file Juniper_Cloud_Native_Router_<release-number>.tgz.
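
    For example (a sketch; run the command from your download directory and substitute the actual release number):

      tar xzvf Juniper_Cloud_Native_Router_<release-number>.tgz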
  3. Change directory to Juniper_Cloud_Native_Router_<release-number>.
    Note:

    All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_<release-number>.

  4. View the contents in the current directory.
  5. Load the JCNR Docker images into your local Docker repository. The images are available in the Juniper_Cloud_Native_Router_<release-number>/images directory.
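
    For example (a sketch; the exact image file names vary by release):

      ls images/
      docker load -i images/<image-tar-file>

    Repeat the docker load -i command for each image file in the images directory.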
  6. Enter the root password for your host server and your Juniper Cloud-Native Router license file into the secrets/jcnr-secrets.yaml file.

    You can view the sample contents of the jcnr-secrets.yaml file below:
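
    The structure resembles the following sketch (illustrative only; use the file shipped in the package as your template, because the field names and the namespace can vary by release):

      apiVersion: v1
      kind: Secret
      metadata:
        name: jcnr-secrets
        namespace: kube-system
      data:
        root-password: <base64-encoded password>
        crpd-license: |
          <base64-encoded license>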

    You must enter the password and license in base64 encoded format.

    To encode the password, create a file that has only the plain text password on a single line. Then issue the command:
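
      base64 -w 0 <password-file>

    (The -w 0 option of GNU coreutils base64 disables line wrapping so that the encoding is emitted as a single line; <password-file> is a placeholder for the file you created.)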

    The output is a single line of random-looking text similar to:

    UGFzc3cwcmQhCg==

    To encode the license file, copy the file onto your host server and issue the command:
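
      base64 -w 0 <license-file>

    (<license-file> is a placeholder for the path of the license file that you copied onto the server.)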

    The output is a long single line of random-looking text similar to:

    VGhpcyBpcyBhIHJlYWxseSBtdWNoIGxvbmdlciB0ZXh0IGZpbGUgdGhhdCBpbmNsdWRlcyBsaWNlbnNlIGluZm9ybWF0aW9uCkFTREZERktERktIQUxHS0hiYW9qa2hkZmFzZGZOS0FTREdOR0FKYWRzZmxodmFibmRzZmdramh2Ym5ramFzZnVxYmF1amgyMDEwdGIydDQweGtqYjR3eTB1dmRxd3J2MGl3aGV0Ymd1YnMwcWRqZmhkc2tqdmJkc2ZramhkdmFkZnNiO2d2a2pzZGI7aWRzamc7ZmFzZGhma2pkc2J2YWlzdWRmZ3dFWUlUR1ZCMzlWRVlCVjM0OVVHQlZHQlFVOUFXR1ZJQkVSV0c5VUJWV0U5Rwo=
    Note:

    You must obtain your license file from your account team and install it in the secrets/jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license file and root password in the jcnr-secrets.yaml file, the cRPD pod does not enter the Running state, but remains in the CrashLoopBackOff state.

    You must copy the base64 outputs and paste them into the secrets/jcnr-secrets.yaml file in the appropriate locations.

  7. Apply the secrets/jcnr-secrets.yaml to the Kubernetes system.
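
    For example, using kubectl:

      kubectl apply -f secrets/jcnr-secrets.yaml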
    Note:

    Juniper Cloud-Native Router can be deployed in either L2 or L3 mode. Perform only one of step 8 or step 9, depending on whether you want to deploy in L2 or L3 mode.

  8. For an L2 deployment, edit the helmchart/values.yaml file.

    You must customize the Helm chart for the Juniper Cloud-Native Router installation in L2 mode (an illustrative values.yaml fragment appears at the end of this step):

    • Choose fabric interfaces–Use interface names from your host system

    • Create the VLAN id list for trunk interfaces–Use VLAN ids that fit in your network

    • Choose a fabric workload interface–Use interface names from your host system

    • Set the VLAN id for traffic on the workload interface

    • Set the severity level for JCNR-vRouter logging

      Note:

      Leave the log_level set to INFO unless instructed to change it by JTAC.

    • Ensure that the mode option is set to "l2"

    • Set the cpu core mask–physical cores, logical cores

    • (Optional) Enable the noLocalSwitching: key by providing the VLAN IDs as values, and set no-local-switching: to true on the trunk interface

    • (Optional) Set the native-vlan-id: key to the VLAN ID on which you want to accept untagged data packets

    • Choose the fabric interface–Use interface names from your host system

    • Choose a workload interface–Use interface names from your host system

    • (Optional) Set a rate limit for broadcast, multicast, and unknown unicast traffic in bytes per second by assigning storm control profiles

    • (Optional) Set a core pattern to determine the generated names for core files. If you leave it blank, then cloud-native router pods do not overwrite the existing core pattern

    • (Intel 810 NIC only) Enable QoS on the NIC by setting true or false (default is false)

    • Set a writeable directory location for syslog-ng to store notifications

    • (Optional) If you specify a bond interface as your fabricInterface:, provide slaveInterface names from your system under the bondInterfaceConfigs: section.

    • By default, restoreInterface: is set to false. With this setting, the interfaces are not restored to the host when the vRouter pod crashes or is deleted.

    Note:

    If you are using the Intel XL710 NIC, you must set ddp=false in the values.yaml file.

    See Sample Configuration Files for a commented example of the default helmchart/values.yaml file.
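
    As an illustration only, a fragment covering a few of the keys discussed above might look like the following (the key names, nesting, and defaults in the shipped sample file are authoritative; the interface name here is a placeholder):

      # Illustrative fragment of helmchart/values.yaml for L2 mode
      fabricInterface:
        - <interface-name>
      restoreInterface: false
      log_level: INFO
      mode: "l2"
      ddp: false   # must be false on the Intel XL710 NIC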

  9. For an L3 deployment, edit the helmchart/values_L3.yaml file.

    You must customize the Helm chart for the Juniper Cloud-Native Router installation in L3 mode (an illustrative values_L3.yaml fragment appears at the end of this step):

    • Assign IP addresses to interfaces that you configure in values_L3.yaml

    • Set the severity level for JCNR-vRouter logging

      Note:

      Leave the log_level set to INFO unless instructed to change it by JTAC.

    • Ensure that the mode option is set to "l3"

    • Set the cpu core mask–physical cores, logical cores

    • (Optional) Set a core pattern to determine the generated names for core files. If you leave it blank, then cloud-native router pods do not overwrite the existing core pattern

    • Set a writeable directory location for syslog-ng to store notifications

    See Sample Configuration Files for a commented example of the default helmchart/values_L3.yaml file.
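
    As an illustration only (again, the commented sample file is authoritative for key names):

      # Illustrative fragment of helmchart/values_L3.yaml for L3 mode
      log_level: INFO
      mode: "l3"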

  10. Deploy the Juniper Cloud-Native Router using Helm.
    For an L2 installation, issue the command:
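
      helm install jcnr .

    (A sketch: run the command from the helmchart directory, in which case the default values.yaml is used; the release name jcnr is illustrative, and you can choose another name.)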

    Sample output:
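
      NAME: jcnr
      LAST DEPLOYED: <date and time>
      NAMESPACE: default
      STATUS: deployed
      REVISION: 1

    (Illustrative; the key indicator of success is STATUS: deployed.)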

    For an L3 installation, issue the command:
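
      helm install jcnr . -f values_L3.yaml

    (A sketch: the -f option overrides the default values file with values_L3.yaml; as above, run the command from the helmchart directory and choose your own release name if you prefer.)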

    Sample output:
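
      NAME: jcnr
      LAST DEPLOYED: <date and time>
      NAMESPACE: default
      STATUS: deployed
      REVISION: 1

    (Illustrative; as with the L2 installation, look for STATUS: deployed.)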

  11. Confirm Juniper Cloud-Native Router deployment.
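
    One way to confirm the deployment is to list the installed Helm releases:

      helm ls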

    Sample output:
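
      NAME   NAMESPACE   REVISION   UPDATED            STATUS     CHART     APP VERSION
      jcnr   default     1          <date and time>    deployed   <chart>   <version>

    (Illustrative; the CHART and APP VERSION columns reflect your release.)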

Verify Operation of Containers

SUMMARY This task allows you to confirm that the Juniper Cloud-Native Router Pods are running.
  1. kubectl get pods -A
    The output of the kubectl command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all pods are in the Running state. The Juniper Cloud-Native Router pods include the cRPD, vRouter, and syslog-ng pods. For example:
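
    An abbreviated, illustrative example (the pod names, namespaces, and counts here are hypothetical and vary by release and cluster):

      NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
      kube-system   kube-crpd-worker-sts-0             1/1     Running   0          5m
      contrail      contrail-vrouter-masters-<hash>    1/1     Running   0          5m
      kube-system   syslog-ng-<hash>                   1/1     Running   0          5m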
  2. kubectl get ds -A
    Use the kubectl get ds -A command to list the daemonsets in all namespaces.