Install and Verify Juniper Cloud-Native Router

The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD) to provide control plane capabilities and the JCNR-CNI to provide a container network interface. It uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how to install these components of the Cloud-Native Router.

Install Juniper Cloud-Native Router Using Helm Chart

Read this section to learn how to load the cloud-native router container images into Docker and install the cloud-native router components using Helm charts.

  1. Review the Before You Install section to ensure the cluster has all the required configuration.
  2. Download the tarball, Juniper_Cloud_Native_Router_<release-number>.tgz, to a directory of your choice. Transfer the file to your server in binary mode so that the compressed tar file expands properly.
  3. Expand the file Juniper_Cloud_Native_Router_<release-number>.tgz.
  4. Change directory to Juniper_Cloud_Native_Router_<release-number>.
    Note:

    All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_<release-number>.

  5. View the contents in the current directory.
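    The download, extract, and inspect steps above can be sketched as follows (the release number 23.2 is a stand-in for the version you actually downloaded):

```shell
# Extract the downloaded package (the transfer must have been in binary mode).
tar -xzf Juniper_Cloud_Native_Router_23.2.tgz

# All remaining steps assume this working directory.
cd Juniper_Cloud_Native_Router_23.2

# Inspect the package contents.
ls
```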
  6. The Cloud-Native Router container images are required for deployment. Choose one of the following options:
    • Configure your cluster to deploy images from the Juniper Networks enterprise-hub.juniper.net repository. See Configure Repository Credentials for instructions on how to configure repository credentials in the deployment Helm chart.

    • Configure your cluster to deploy images from the images tarball included in the downloaded Cloud-Native Router software package. See Deploy Prepackaged Images for instructions on how to import images to the local container runtime.

  7. Enter the root password for your host server into the secrets/jcnr-secrets.yaml file at the following line:
    You must enter the password in base64-encoded format. Encode your password with the base64 command and copy the output into secrets/jcnr-secrets.yaml.
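    For example, you can produce the base64-encoded form of a (hypothetical) root password on the command line:

```shell
# Encode a hypothetical root password. The -n flag suppresses the trailing
# newline so that it is not included in the encoded value.
echo -n 'MyRootPassword' | base64
```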
  8. Enter your Juniper Cloud-Native Router license into the secrets/jcnr-secrets.yaml file at the following line.
    You must enter your license in base64-encoded format. Encode your license with the base64 command and copy the output into secrets/jcnr-secrets.yaml.
    Note:

    You must obtain your license file from your account team and install it in the jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license key and root password in the jcnr-secrets.yaml file, the cRPD Pod does not enter the Running state but remains in the CrashLoopBackOff state.

    Note:

    Starting with Cloud-Native Router Release 23.2, the Cloud-Native Router license format has changed. Request a new license key from the JAL portal before deploying or upgrading to 23.2 or newer releases.
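    For example, you can produce an unwrapped base64 encoding of the license file as follows (the file name jcnr-license.txt and its contents are hypothetical stand-ins for the license supplied by your account team):

```shell
# Stand-in for the license file obtained from your account team.
printf '%s' 'your-license-text' > jcnr-license.txt

# -w 0 (GNU coreutils) disables line wrapping so the key stays on one line.
base64 -w 0 jcnr-license.txt
```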

  9. Apply secrets/jcnr-secrets.yaml to the Kubernetes cluster.
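    The secrets can be applied with kubectl, for example:

```shell
kubectl apply -f secrets/jcnr-secrets.yaml
```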
  10. Customize the helm chart for your deployment using the helmchart/values.yaml file.

    See Customize Cloud-Native Router Helm Chart for descriptions of the helm chart configurations.

  11. If you are installing Cloud-Native Router on Amazon EKS, update the dpdkCommandAdditionalArgs key in the helmchart/charts/jcnr-vrouter/values.yaml file and set the tx and rx descriptors to 256. Otherwise, skip this step.

    For example:
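    A sketch of the kind of setting intended here; the exact DPDK flag names are an assumption and should be confirmed against the values.yaml shipped with your release:

```yaml
dpdkCommandAdditionalArgs: "--dpdk_txd_sz 256 --dpdk_rxd_sz 256"
```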

  12. Optionally, create cRPD pods with custom configuration.
    See Customize Cloud-Native Router Configuration using Node Annotations for creating and applying the cRPD customizations.
  13. Deploy the Juniper Cloud-Native Router using the helm chart.
    Navigate to the helmchart directory and run the following command:
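    For example (the Helm release name jcnr is an assumption; you can choose any release name):

```shell
cd helmchart
helm install jcnr .
```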
  14. Confirm Juniper Cloud-Native Router deployment.

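    The deployment can be confirmed by listing the installed Helm releases, for example:

```shell
helm ls
```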

Verify Installation

Use this section to confirm a successful Cloud-Native Router deployment.
  1. Verify the state of the Cloud-Native Router pods by issuing the kubectl get pods -A command.
    The output of the kubectl command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all pods are in the Running state.
  2. Verify the Cloud-Native Router daemonsets by issuing the kubectl get ds -A command.

    Use the kubectl get ds -A command to get a list of daemonsets in all namespaces, including those created by the Cloud-Native Router.

  3. Verify the Cloud-Native Router statefulsets by issuing the kubectl get statefulsets -A command.

    The command output lists the statefulsets in all namespaces.
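    The three checks above can be run as follows:

```shell
kubectl get pods -A          # all pods should be in the Running state
kubectl get ds -A            # daemonsets, including those created by Cloud-Native Router
kubectl get statefulsets -A  # statefulsets created by the deployment
```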

  4. Verify that cRPD is licensed and has the appropriate configuration.
    1. See the Accessing the Cloud-Native Router Controller (cRPD) CLI section for instructions on accessing the cRPD CLI.
    2. Once you have accessed the cRPD CLI, issue the show system license command in CLI mode to view the system licenses.
    3. Issue the show configuration | display set command in CLI mode to view the cRPD default and custom configuration. The output depends on the custom configuration and the Cloud-Native Router deployment mode.
    4. Type the exit command to exit from the pod shell.
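    A sketch of the cRPD checks above; the pod name and namespace are illustrative, so look up the actual cRPD pod with kubectl get pods -A:

```shell
# Open a CLI session in the cRPD pod (names are hypothetical).
kubectl exec -n jcnr -it kube-crpd-worker-ds-xxxxx -- cli

# From the cRPD CLI:
show system license
show configuration | display set
exit
```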
  5. Verify the vRouter interface configuration.
    1. See the Accessing the vRouter CLI section for instructions on accessing the vRouter CLI.
    2. Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output depends on the Cloud-Native Router deployment mode and configuration; for example, an L3 mode deployment with one fabric interface configured shows that fabric interface in the list.
    3. Type the exit command to exit the pod shell.
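    A sketch of the vRouter checks above; the pod name and namespace are illustrative, so look up the actual vRouter pod with kubectl get pods -A:

```shell
# Open a shell in the vRouter agent pod (names are hypothetical).
kubectl exec -n contrail -it contrail-vrouter-nodes-xxxxx -- bash

# From the pod shell:
vif --list
exit
```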