
Install and Verify Juniper Cloud-Native Router for OpenShift Deployment

SUMMARY The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD) to provide control plane capabilities and JCNR-CNI to provide a container network interface. Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how you can install these components of the Cloud-Native Router on Red Hat OpenShift Container Platform (OCP).

Install Juniper Cloud-Native Router Using Helm Chart

Read this section to learn the steps required to install the cloud-native router components using Helm charts.

  1. Review the System Requirements for OpenShift Deployment to ensure the cluster has all the required configuration.
  2. Download the tarball, Juniper_Cloud_Native_Router_release-number.tgz, to a directory of your choice. Transfer the file to your server in binary mode so that the compressed tar file expands properly.
  3. Expand the file Juniper_Cloud_Native_Router_release-number.tgz.
  4. Change directory to Juniper_Cloud_Native_Router_release-number.
    Note:

    All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_release-number.

  5. View the contents of the current directory.
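
    For example, assuming GNU tar is available on the host (the release number in the file name will differ for your download), steps 3 through 5 might look like this:

      # Expand the tarball, change into the extracted directory, and list its contents
      tar -xzvf Juniper_Cloud_Native_Router_release-number.tgz
      cd Juniper_Cloud_Native_Router_release-number
      ls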
  6. The JCNR container images are required for deployment. You may choose one of the following options:
    1. Download and deploy images from the Juniper repository—enterprise-hub.juniper.net. Review the Configure Repository Credentials topic for instructions on how to configure repository credentials in the deployment helm chart.
    2. Upload the JCNR images to a local registry, as sketched below. The images are available in the Juniper_Cloud_Native_Router_release-number/images directory.
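
    If you use a local registry, the following sketch shows one way to load and push an image with Podman. The image tarball name, image name, tag, and registry URL are placeholders, not values from this release:

      # Load an image tarball shipped in the images directory (file name is a placeholder)
      podman load -i images/<image-tarball>.tar
      # Retag the image for your local registry and push it (registry URL is a placeholder)
      podman tag <image>:<tag> registry.example.com/<image>:<tag>
      podman push registry.example.com/<image>:<tag>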
  7. Enter the root password for your host server and your Juniper Cloud-Native Router license file into the secrets/jcnr-secrets.yaml file. You must enter the password and license in base64-encoded format.

    You can view the sample contents of the jcnr-secrets.yaml file below:
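
    A representative structure is sketched below; check the file shipped in the secrets directory for the exact field names used by your release:

      apiVersion: v1
      kind: Secret
      metadata:
        name: jcnr-secrets
        namespace: jcnr
      data:
        root-password: <base64-encoded root password>
        crpd-license: |
          <base64-encoded license key>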

    To encode the password, create a file containing the plain-text password on a single line, then run the base64 command on that file. To encode the license, copy the license key into a file on your host server and run the base64 command on that file. Copy each base64 output and paste it into the secrets/jcnr-secrets.yaml file in the appropriate location.
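
    For example, on a Linux host (the file names rootPasswordFile and licenseFile are placeholders for files you create):

      # Encode the plain-text root password without line wrapping
      base64 -w 0 rootPasswordFile
      # Encode the license key file
      base64 -w 0 licenseFile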
    Note:

    You must obtain your license file from your account team and install it in the jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license key and root password in the jcnr-secrets.yaml file, the cRPD Pod does not enter Running state, but remains in CrashLoopBackOff state.

    Apply the secrets/jcnr-secrets.yaml file to the Kubernetes cluster.
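
    For example, using kubectl (the oc client works equivalently on OpenShift):

      kubectl apply -f secrets/jcnr-secrets.yaml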

    Note:

    Starting with JCNR Release 23.2, the JCNR license format has changed. Request a new license key from the JAL portal before deploying or upgrading to 23.2 or newer releases.

  8. Customize the helm chart for your deployment using the helmchart/values.yaml file.

    See Customize JCNR Helm Chart for OpenShift Deployment for descriptions of the helm chart configurations.

  9. Optionally, customize JCNR configuration.
    See Customize JCNR Configuration for instructions on creating and applying the cRPD customizations.
  10. Deploy the Juniper Cloud-Native Router using the helm chart.
    Navigate to the helmchart directory and run the following command:
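
    A minimal invocation is sketched below; the release name jcnr is only an example and you can choose a different one:

      cd helmchart
      helm install jcnr .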
  11. Confirm Juniper Cloud-Native Router deployment.

    Check that the Helm release for JCNR is reported as deployed.
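
    For example, list the installed Helm releases (assuming the release name used in the previous step):

      helm ls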

Verify Installation

SUMMARY This section describes how to confirm a successful JCNR deployment.
  1. Verify the state of the JCNR pods by issuing the kubectl get pods -A command.
    The output of the kubectl command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all of the Juniper Cloud-Native Router pods are in the Running state.
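
    For example (the jcnr and contrail namespaces used in the filter are typical, but the namespaces can vary by deployment):

      kubectl get pods -A
      # Optionally narrow the output to the JCNR-related namespaces
      kubectl get pods -A | grep -E 'jcnr|contrail'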
  2. Verify the JCNR daemonsets by issuing the kubectl get ds -A command.

    Use the kubectl get ds -A command to list the daemonsets in all namespaces and confirm that the JCNR daemonsets are present.
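
    For example, check that each JCNR daemonset reports the same number of ready pods as desired pods:

      kubectl get ds -A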

  3. Verify the JCNR statefulsets by issuing the kubectl get statefulsets -A command.

    The command output lists the statefulsets in all namespaces; confirm that the JCNR statefulset is present.
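
    For example, check that the JCNR statefulset reports all of its replicas as ready:

      kubectl get statefulsets -A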

  4. Verify that cRPD is licensed and has the appropriate configuration.
    1. See the Access the cRPD CLI section for instructions on accessing the cRPD CLI.
    2. Once you have accessed the cRPD CLI, issue the show system license command to view the installed licenses.
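
    A sketch of this check follows; the pod name jcnr-0 and the jcnr namespace are typical defaults and may differ in your deployment:

      kubectl exec -it -n jcnr jcnr-0 -- cli
      # At the cRPD CLI prompt:
      show system license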
    3. Issue the show configuration | display set command to view the cRPD default and custom configuration. The output depends on your custom configuration and the JCNR deployment mode.
    4. Type the exit command to exit from the pod shell.
  5. Verify the vRouter interface configuration.
    1. See the Access the vRouter CLI section for instructions on accessing the vRouter CLI.
    2. Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output depends on the JCNR deployment mode and configuration; in an L3 mode deployment with two fabric interfaces configured, for example, each fabric interface appears in the list.
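
    A sketch of this check follows; the vRouter pod name and the contrail namespace are examples and may differ in your deployment:

      # Find the vRouter pod, open a shell in it, and list the interfaces
      kubectl get pods -A | grep vrouter
      kubectl exec -it -n contrail <vrouter-pod-name> -- bash
      vif --list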
    3. Type the exit command to exit the pod shell.