Install and Verify Juniper Cloud-Native Router on Amazon EKS

The Juniper Cloud-Native Router uses the JCNR-Controller (cRPD) to provide control plane capabilities and the JCNR-CNI to provide a container network interface. The Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how to install these components of the Cloud-Native Router.

Install Juniper Cloud-Native Router Using Juniper Support Site Package

Read this section to learn the steps required to install the cloud-native router components using Helm charts.

  1. Review the System Requirements for EKS Deployment to ensure that your setup has all the required configuration.
  2. Download the tarball, Juniper_Cloud_Native_Router_<release-number>.tgz, to a directory of your choice. Transfer the file to your server in binary mode so that the compressed tar file expands properly.
  3. Expand the file Juniper_Cloud_Native_Router_<release-number>.tgz.
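    For example, assuming a Linux host with GNU tar:

      tar xzvf Juniper_Cloud_Native_Router_<release-number>.tgz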
  4. Change directory to Juniper_Cloud_Native_Router_<release-number>.
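    For example:

      cd Juniper_Cloud_Native_Router_<release-number>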
    Note:

    All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_<release-number>.

  5. View the contents in the current directory.
  6. Enter the root password for your host server into the secrets/jcnr-secrets.yaml file at the following line:
    You must enter the password in base64-encoded format. Encode your password as shown below, and copy the output into secrets/jcnr-secrets.yaml.
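    A typical way to produce the encoded value, assuming a Linux host (the password is a placeholder):

      echo -n '<root-password>' | base64 -w0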
  7. Enter your Juniper Cloud-Native Router license into the secrets/jcnr-secrets.yaml file at the following line.
    You must enter your license in base64-encoded format. Encode your license as shown below, and copy the output into secrets/jcnr-secrets.yaml.
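    A typical way to produce the encoded value, assuming GNU coreutils (the file name is a placeholder):

      base64 -w0 <license-file>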
    Note:

    You must obtain your license file from your account team and install it in the jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license key and root password in the jcnr-secrets.yaml file, the cRPD Pod does not enter Running state, but remains in CrashLoopBackOff state.

    Note:

    Starting with Cloud-Native Router Release 23.2, the Cloud-Native Router license format has changed. Request a new license key from the JAL portal before deploying or upgrading to 23.2 or newer releases.

  8. Apply secrets/jcnr-secrets.yaml.
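    For example:

      kubectl apply -f secrets/jcnr-secrets.yaml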
  9. Create the JCNR ConfigMap if you are using the Virtual Router Redundancy Protocol (VRRP) for your Cloud-Native Router cluster. A sample jcnr-aws-config.yaml manifest is provided in the cRPD_examples directory in the installation bundle. Apply the jcnr-aws-config.yaml to the Kubernetes cluster.
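    For example, assuming the sample manifest path from the bundle:

      kubectl apply -f cRPD_examples/jcnr-aws-config.yaml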
  10. Customize the helm chart for your deployment using the helmchart/values.yaml file.

    See Customize JCNR Helm Chart for EKS Deployment for descriptions of the helm chart configurations and a sample helm chart for EKS deployment.

  11. Optionally, customize the Cloud-Native Router configuration.
    See Customize Cloud-Native Router Configuration for information about creating and applying the cRPD customizations.
  12. Install Multus CNI.
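    One common way to install Multus is to apply the upstream daemonset manifest from the k8snetworkplumbingwg project; verify the manifest version appropriate for your cluster before applying it:

      kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml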
  13. Install the Amazon Elastic Block Storage (EBS) Container Storage Interface (CSI) driver.
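    One way to install the driver is as an EKS add-on, assuming you have an IAM role prepared for the driver (cluster and role names are placeholders):

      aws eks create-addon --cluster-name <cluster-name> --addon-name aws-ebs-csi-driver --service-account-role-arn <ebs-csi-role-arn>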
  14. Label the nodes on which the Cloud-Native Router must be installed, based on the nodeAffinity defined in values.yaml. For example:
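    A sketch assuming the default nodeAffinity key key1=jcnr in values.yaml (the node name is a placeholder):

      kubectl label nodes <node-name> key1=jcnr --overwrite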
  15. Deploy the Juniper Cloud-Native Router using the helm chart.
    Navigate to the helmchart directory and run the following command:
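    A typical invocation, assuming the release name jcnr:

      helm install jcnr .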
  16. Confirm Juniper Cloud-Native Router deployment.
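    For example, you can list the Helm releases to check that the chart deployed:

      helm ls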

    Sample output:

Install Juniper Cloud-Native Router Using AWS Marketplace Subscription

Read this section to learn the steps required to install the cloud-native router components using Helm charts.

  1. Review the System Requirements for EKS Deployment to ensure that your setup has all the required configuration.
  2. Configure AWS credentials using the command: aws configure.
  3. Authenticate to the Amazon ECR repo.
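    For example, assuming Helm 3.8 or later with OCI support; the account ID and Region are placeholders for the AWS Marketplace registry values:

      aws ecr get-login-password --region <region> | helm registry login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com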
  4. Download the helm package from the ECR repo.
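    For example, with placeholder registry and repository values; the version matches the package name in the next step:

      helm pull oci://<account-id>.dkr.ecr.<region>.amazonaws.com/<repository>/jcnr --version 23.3.0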
  5. Expand the file jcnr-23.3.0.tgz.
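    For example:

      tar xzvf jcnr-23.3.0.tgz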
  6. Change directory to jcnr.
    Note:

    All remaining steps in the installation assume that your current working directory is now jcnr.

  7. View the contents in the current directory.
  8. Enter the root password for your host server into the secrets/jcnr-secrets.yaml file at the following line:
    You must enter the password in base64-encoded format. Encode your password as shown below, and copy the output into secrets/jcnr-secrets.yaml.
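    For example, on a Linux host (the password is a placeholder):

      echo -n '<root-password>' | base64 -w0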
  9. Enter your Juniper Cloud-Native Router license into the secrets/jcnr-secrets.yaml file at the following line.
    You must enter your license in base64-encoded format. Encode your license as shown below, and copy the output into secrets/jcnr-secrets.yaml.
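    For example (the file name is a placeholder):

      base64 -w0 <license-file>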
    Note:

    You must obtain your license file from your account team and install it in the jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license key and root password in the jcnr-secrets.yaml file, the cRPD Pod does not enter Running state, but remains in CrashLoopBackOff state.

    Note:

    Starting with Cloud-Native Router Release 23.2, the Cloud-Native Router license format has changed. Request a new license key from the JAL portal before deploying or upgrading to 23.2 or newer releases.

  10. Apply secrets/jcnr-secrets.yaml.
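    For example:

      kubectl apply -f secrets/jcnr-secrets.yaml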
  11. Create the JCNR ConfigMap if you are using the Virtual Router Redundancy Protocol (VRRP) for your Cloud-Native Router cluster. Apply the jcnr-aws-config.yaml to the Kubernetes cluster.
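    For example, assuming the manifest is in your current directory (adjust the path as needed):

      kubectl apply -f jcnr-aws-config.yaml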
  12. Customize the helm chart for your deployment using the values.yaml file.

    See Customize JCNR Helm Chart for EKS Deployment for descriptions of the helm chart configurations and a sample helm chart for EKS deployment.

  13. Optionally, customize the Cloud-Native Router configuration.
    See Customize Cloud-Native Router Configuration for information about creating and applying the cRPD customizations.
  14. Install the Amazon EBS CSI driver.
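    For example, as an EKS add-on (cluster and role names are placeholders):

      aws eks create-addon --cluster-name <cluster-name> --addon-name aws-ebs-csi-driver --service-account-role-arn <ebs-csi-role-arn>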
  15. Label the nodes on which the Cloud-Native Router must be installed, based on the nodeAffinity defined in values.yaml. For example:
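    A sketch assuming the default nodeAffinity key key1=jcnr in values.yaml (the node name is a placeholder):

      kubectl label nodes <node-name> key1=jcnr --overwrite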
  16. Deploy the Juniper Cloud-Native Router using the helm chart.
    Run the following command:
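    A typical invocation, assuming the release name jcnr:

      helm install jcnr .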
  17. Confirm Juniper Cloud-Native Router deployment.
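    For example, you can list the Helm releases to check that the chart deployed:

      helm ls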

    Sample output:

Verify Cloud-Native Router Installation on Amazon EKS

  1. Verify the state of the Cloud-Native Router pods by issuing the kubectl get pods -A command. The output shows all the pods in the Kubernetes cluster, in all namespaces. A successful deployment means that all pods are in the Running state. In the following example, the Juniper Cloud-Native Router pods are marked in bold:
  2. Verify the Cloud-Native Router daemonsets by issuing the kubectl get ds -A command. The command lists the daemonsets in all namespaces; the Cloud-Native Router daemonsets are highlighted in bold text.
  3. Verify the Cloud-Native Router statefulsets by issuing the kubectl get statefulsets -A command. The command output lists the statefulsets.
  4. Verify that cRPD is licensed and has the appropriate configuration.
    1. See the Access the cRPD CLI section for instructions on how to access the cRPD CLI.
    2. Once you have accessed the cRPD CLI, issue the show system license command in CLI mode to view the system licenses. For example:
    3. Issue the show configuration | display set command in CLI mode to view the cRPD default and custom configuration. The output depends on your custom configuration and on the Cloud-Native Router deployment mode.
    4. Type the exit command to exit from the pod shell.
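    A minimal sketch of this sequence, assuming the cRPD pod runs in the jcnr namespace (the pod name is a placeholder):

      kubectl exec -it <crpd-pod-name> -n jcnr -- bash   # open a shell in the cRPD pod
      cli                  # enter the cRPD CLI
      show system license  # display the installed licenses
      exit                 # leave the CLI; type exit again to leave the pod shell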
  5. Verify the vRouter interfaces configuration.
    1. See the Access the vRouter CLI section for instructions on how to access the vRouter CLI.
    2. Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output depends on the Cloud-Native Router deployment mode and configuration. An example for an L3 mode deployment, with one fabric interface configured, is provided below:
    3. Type exit to exit from the pod shell.
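    A minimal sketch of this verification, assuming the vRouter agent pod runs in the contrail namespace (the pod name is a placeholder):

      kubectl exec -it <vrouter-agent-pod-name> -n contrail -- bash   # open a shell in the vRouter pod
      vif --list   # list the vRouter interfaces
      exit         # leave the pod shell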