Install and Verify Juniper Cloud-Native Router for Baremetal Servers

SUMMARY The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD) to provide control plane capabilities and the JCNR-CNI to provide a container network interface. Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how you can install these components of the Cloud-Native Router.

Install Juniper Cloud-Native Router Using Helm Chart

Read this section to learn the steps required to load the cloud-native router container images into Docker and install the cloud-native router components using Helm charts.

  1. Review the System Requirements for Baremetal Servers section to ensure the cluster has all the required configuration.
  2. Download the desired JCNR software package to the directory of your choice.
    You have the option of downloading the package to install JCNR only or downloading the package to install JCNR together with Juniper cSRX. See JCNR Software Download Packages for a description of the packages available. If you don't want to install Juniper cSRX now, you can add it to your working JCNR installation later.
  3. Expand the downloaded package.
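    For example, to expand a JCNR-only package (the file name shown is illustrative; use the name of the package you actually downloaded):

      tar xzvf Juniper_Cloud_Native_Router_<release>.tgz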
  4. Change directory to the main installation directory.
    • If you're installing JCNR only, change to the Juniper_Cloud_Native_Router_<release> directory.

      This directory contains the Helm chart for JCNR only.
    • If you're installing JCNR and cSRX at the same time, change to the Juniper_Cloud_Native_Router_CSRX_<release> directory.

      This directory contains the combination Helm chart for JCNR and cSRX.
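    For example, run one of the following, substituting your release number:

      cd Juniper_Cloud_Native_Router_<release>         # JCNR only
      cd Juniper_Cloud_Native_Router_CSRX_<release>    # JCNR and cSRX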
    Note:

    All remaining steps in the installation assume that your current working directory is now either Juniper_Cloud_Native_Router_<release> or Juniper_Cloud_Native_Router_CSRX_<release>.

  5. View the contents in the current directory.
  6. Change to the helmchart directory and expand the Helm chart.
    • For JCNR only:

      The Helm chart is located in the jcnr directory.
    • For the combined JCNR and cSRX:

      The Helm chart is located in the jcnr_csrx directory.
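    For example, assuming the chart tarball names follow a jcnr-<release>.tgz and jcnr_csrx-<release>.tgz pattern (check the actual file names shipped in your helmchart directory):

      cd helmchart
      tar xzvf jcnr-<release>.tgz          # JCNR only
      tar xzvf jcnr_csrx-<release>.tgz     # combined JCNR and cSRX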
  7. The JCNR container images are required for deployment. Choose one of the following options:
    • Configure your cluster to deploy images from the Juniper Networks enterprise-hub.juniper.net repository. See Configure Repository Credentials for instructions on how to configure repository credentials in the deployment Helm chart.

    • Configure your cluster to deploy images from the images tarball included in the downloaded JCNR software package. See Deploy Prepackaged Images for instructions on how to import images to the local container runtime.

  8. Follow the steps in Installing Your License to install your JCNR license.
  9. Enter the root password for your host server into the secrets/jcnr-secrets.yaml file.
    You must enter the password in base64-encoded format. To encode the password, create a file that contains the plain-text password on a single line, base64-encode that file, and copy the output into secrets/jcnr-secrets.yaml.
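    For example, assuming the plain-text password is stored in a file named rootPasswordFile (the file name is arbitrary):

      base64 -w 0 rootPasswordFile

    Paste the resulting string into secrets/jcnr-secrets.yaml as the base64-encoded root password.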
  10. Apply secrets/jcnr-secrets.yaml to the cluster.
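    For example, using kubectl:

      kubectl apply -f secrets/jcnr-secrets.yaml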
  11. If desired, configure how cores are assigned to the vRouter DPDK containers. See Allocate CPUs to the JCNR Forwarding Plane.
  12. Customize the Helm chart for your deployment using the helmchart/jcnr/values.yaml or helmchart/jcnr_csrx/values.yaml file.

    See Customize JCNR Helm Chart for Bare Metal Servers for descriptions of the Helm chart configurations.

  13. Optionally, customize JCNR configuration.
    See Customize JCNR Configuration for creating and applying the cRPD customizations.
  14. If you're installing Juniper cSRX now, then follow the procedure in Apply the cSRX License and Configure cSRX.
  15. Label the nodes where you want JCNR to be installed, based on the nodeAffinity configuration (if defined in values.yaml). For example:
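    The label key and value shown here are placeholders; use the key/value pair that matches the nodeAffinity settings in your values.yaml:

      kubectl label nodes <node-name> <label-key>=<label-value>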
  16. Deploy the Juniper Cloud-Native Router using the Helm chart.
    Navigate to the helmchart/jcnr or the helmchart/jcnr_csrx directory, as appropriate, and run the helm install command shown in the example below.
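    A typical invocation is shown below; the release names jcnr and jcnr-csrx are illustrative, and the trailing dot installs the chart from the current directory:

      helm install jcnr .          # from the helmchart/jcnr directory
      helm install jcnr-csrx .     # from the helmchart/jcnr_csrx directory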
  17. Confirm Juniper Cloud-Native Router deployment.

    The installed Helm release should report a status of deployed.
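    For example, you can list the Helm releases and check the STATUS column:

      helm ls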

Verify Installation

SUMMARY This section enables you to confirm a successful JCNR deployment.
Note:

The output shown in this example procedure is affected by the number of nodes in the cluster. The output you see in your setup may differ in that regard.

  1. Verify the state of the JCNR pods by issuing the kubectl get pods -A command.
    The output of the kubectl get pods -A command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all of the pods, including the Juniper Cloud-Native Router pods, are in the Running state.
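    Because the full pod listing depends on your cluster, you may find it convenient to filter the output; the name patterns below (crpd, vrouter, jcnr, syslog) are typical JCNR component names and can vary by release:

      kubectl get pods -A | grep -E 'crpd|vrouter|jcnr|syslog'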
  2. Verify the JCNR daemonsets by issuing the kubectl get ds -A command.

    The kubectl get ds -A command lists all of the daemonsets in the cluster. Confirm that the JCNR daemonsets are present.

  3. Verify the JCNR statefulsets by issuing the kubectl get statefulsets -A command.

    The command output lists the statefulsets in the cluster. Confirm that the JCNR statefulsets are present.

  4. Verify that cRPD is licensed and has the appropriate configuration.
    1. View the Access cRPD CLI section for instructions to access the cRPD CLI.
    2. Once you have accessed the cRPD CLI, issue the show system license command to view the system licenses.
    3. Issue the show configuration | display set command to view the cRPD default and custom configuration. The output depends on your custom configuration and the JCNR deployment mode.
    4. Type the exit command to exit from the pod shell.
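    A minimal sketch of this sequence is shown below. The pod name comes from your kubectl get pods -A output, and the jcnr namespace is an assumption that may differ in your deployment:

      # Open the cRPD CLI inside the cRPD pod
      kubectl exec -it -n jcnr <crpd-pod-name> -- cli
      # From the cRPD CLI prompt:
      show system license
      show configuration | display set
      exit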
  5. Verify the vRouter interface configuration.
    1. View the Access vRouter CLI section for instructions on accessing the vRouter CLI.
    2. Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output depends on the JCNR deployment mode and configuration, for example, on how many fabric interfaces are configured.
    3. Type the exit command to exit the pod shell.
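    A minimal sketch of this sequence is shown below. The pod name comes from your kubectl get pods -A output, and the contrail namespace is an assumption that may differ in your deployment:

      # Open a shell in the vRouter agent pod
      kubectl exec -it -n contrail <vrouter-agent-pod-name> -- bash
      # List the vRouter interfaces
      vif --list
      exit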