Deploying Cloud-Native Router as a Single Pod in Pod Network Namespace

Cloud-Native Router can be deployed as a single pod in the pod network namespace for increased security, simplified deployment, and enhanced portability. Read this topic to understand the deployment prerequisites and processes.

For pure CNF deployments, you now have the option to deploy Cloud-Native Router as a single pod in the pod network namespace, which offers the following advantages:

  • Consolidate all Cloud-Native Router containers into a single pod—Simplifies deployment and lifecycle management.
  • Deploy Cloud-Native Router in the pod network namespace—Eliminates the requirement for host-level networking and improves security.

  • Minimize host-level dependencies—Minimizes shared volume mounts and privileged/root file system access, enhancing portability and reducing security risks.

In addition, the solution also implements the following changes:

  • Enable Physical Function (PF) and Virtual Function (VF) provisioning using Kubernetes resource requests—The interfaces must be available within the Cloud-Native Router's namespace so that the JCNR pod can bind them to DPDK (see the sketch after the notes below):
    • VFs are derived from a PF and allocated to the Cloud-Native Router pod using the SR-IOV CNI plugin.

    • PFs are allocated to the Cloud-Native Router pod using the host-device CNI plugin.

  • Support for Nokia CPU Pooler—Manages dedicated and shared CPU resources for containers.

  • Custom Namespace Support—Ability to deploy Cloud-Native Router in a user-defined namespace.

    Note: JCNR-CNI is not supported when deploying Cloud-Native Router as a single pod in pod network namespace.
    Note: You can configure application pods that use SR-IOV VF to send traffic to Cloud-Native Router using SR-IOV CNI. The application pods must be configured with the Cloud-Native Router VF interface as the gateway. The pod VF and Cloud-Native Router VF must be on the same PF.
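To make the resource-request model concrete, the following illustrative pod-spec fragment attaches two SR-IOV VF networks. The network names match the NetworkAttachmentDefinitions described later in this topic, and the resource name is a placeholder set by your SR-IOV device plugin configuration:

    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: ens1f0v1,ens1f0v2
    spec:
      containers:
        - name: jcnr
          resources:
            requests:
              intel.com/intel_sriov_netdevice: "2"   # resource name comes from the device plugin configMap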

Install Cloud-Native Router as a Single Pod in Pod Network Namespace

Read this section to learn the steps required to install the Cloud-Native Router.

Key points about single pod deployment:

  • Supported on open-source Kubernetes deployed on Rocky Linux or Ubuntu OS.
  • Supported for Cloud-Native Router CNF deployments only.

  • Service chaining with cSRX is not supported.

  • The jcnrDeploymentMode must be set to singlePod in the deployment helm chart.
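    For example, a minimal values.yaml fragment selecting single-pod mode:

      jcnrDeploymentMode: singlePod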

  1. Ensure you have configured a server based on the System Requirements.

  2. Download the Cloud-Native Router software package Juniper_Cloud_Native_Router_release.tar.gz to the directory of your choice.

  3. Expand the downloaded package.

  4. Change directory to the main installation directory.

  5. View the contents in the current directory.

  6. Change to the helmchart directory and untar the jcnr-release.tgz.
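    A typical command sequence for steps 2 through 6 looks like the following, assuming the package expands into a directory of the same name (the release string is a placeholder for your downloaded version):

      tar xzvf Juniper_Cloud_Native_Router_release.tar.gz
      cd Juniper_Cloud_Native_Router_release
      ls
      cd helmchart
      tar xzvf jcnr-release.tgz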

  7. The Cloud-Native Router container images are required for deployment. Choose one of the following options:

    • Configure your cluster to deploy images from the Juniper Networks enterprise-hub.juniper.net repository. See Configure Repository Credentials for instructions on how to configure repository credentials in the deployment Helm chart.

    • Configure your cluster to deploy images from the images tarball included in the downloaded Cloud-Native Router software package. See Deploy Prepackaged Images for instructions on how to import images to the local container runtime.

  8. Configure the namespace, root password, and license for the Cloud-Native Router installation:

    1. Modify the namespace in secrets/jcnr-secrets.yaml to the user-defined namespace in which you want to install the Cloud-Native Router pod:
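      For example, to install into a hypothetical user-defined namespace jcnr-custom, the Secret metadata would resemble the following (the exact contents of the shipped file can vary by release):

        apiVersion: v1
        kind: Secret
        metadata:
          name: jcnr-secrets
          namespace: jcnr-custom   # your user-defined namespace
        type: Opaque
        data:
          root-password: ""        # filled in below
          crpd-license: ""         # filled in below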

    2. Enter the root password for your host server into the secrets/jcnr-secrets.yaml file.

      1. You must enter the password in base64-encoded format. Encode your password as follows:
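        One way on Linux (printf avoids encoding a trailing newline):

          printf '%s' '<your-root-password>' | base64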

      2. Copy the output of this command into secrets/jcnr-secrets.yaml.

    3. Enter the Cloud-Native Router license in base64 encoded format.

      1. Encode your license in base64. The licenseFile is the license file that you obtained from Juniper Networks.
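        For example:

          base64 -w 0 licenseFile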

      2. Copy and paste your base64-encoded license into secrets/jcnr-secrets.yaml. The secrets/jcnr-secrets.yaml file contains a parameter called crpd-license:
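        After both values are filled in, the data section of the Secret resembles the following (values truncated for illustration):

          data:
            root-password: <base64-encoded-password>
            crpd-license: <base64-encoded-license>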

  9. Apply the secrets to the cluster.
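    For example:

      kubectl apply -f secrets/jcnr-secrets.yaml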

  10. You can allocate PFs and VFs to Cloud-Native Router. Follow the instructions in Allocate VFs to Cloud-Native Router Pod and Allocate PFs to Cloud-Native Router Pod to allocate Cloud-Native Router interfaces.

  11. Configure how cores are assigned to the vRouter DPDK containers. The single-pod Cloud-Native Router installation supports either Static CPU Allocation or CPU Allocation via Nokia CPU Pooler.
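    For static CPU allocation, the dedicated cores are typically listed in values.yaml; the key below appears in typical JCNR values files, but verify it against your release:

      cpu_core_mask: "2,3,22,23"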

  12. Configure NodePort or LoadBalancer services for accessing agent introspect, vRouter, and cRPD telemetry.
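    As a sketch, a NodePort service for the agent introspect endpoint might look like the following; the namespace, selector label, and port are assumptions to adapt to your deployment:

      apiVersion: v1
      kind: Service
      metadata:
        name: jcnr-agent-introspect
        namespace: jcnr-custom      # your user-defined namespace
      spec:
        type: NodePort
        selector:
          app: jcnr                 # hypothetical label; match your JCNR pod's labels
        ports:
          - name: introspect
            port: 8085              # assumed agent introspect port; confirm for your release
            targetPort: 8085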

  13. Optionally, customize Cloud-Native Router configuration. See Customize Cloud-Native Router Configuration for creating and applying the cRPD customizations.

  14. Label the nodes where you want Cloud-Native Router to be installed, based on the nodeAffinity configuration (if defined in values.yaml). For example:
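    For example, if values.yaml defines a nodeAffinity rule matching the label key1=jcnr (a common default; adjust to your configuration):

      kubectl label nodes <node-name> key1=jcnr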

  15. Deploy the Juniper Cloud-Native Router using the Helm chart in the custom namespace. Navigate to the helmchart/jcnr directory and run the following command:
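    For example (the namespace name jcnr-custom is illustrative):

      cd helmchart/jcnr
      helm install jcnr . -n jcnr-custom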

  16. Confirm Juniper Cloud-Native Router deployment.
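    For example:

      helm ls -n jcnr-custom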

  17. Verify that the jcnr pod is running, and list its containers and their state using a tool such as kubectl or K9s.
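    For example, with kubectl (pod and namespace names are illustrative):

      kubectl get pods -n jcnr-custom
      kubectl get pod <jcnr-pod-name> -n jcnr-custom \
        -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\n"}{end}'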

Note: Readiness checks are not supported for single pod JCNR deployments.

Allocate VFs to Cloud-Native Router Pod

You can allocate VFs to the Cloud-Native Router Pod using the SR-IOV CNI plugin. The following steps must be performed to complete the allocation:

  1. Create VFs from SR-IOV enabled PFs.
  2. Ensure SR-IOV CNI plugin is installed in your Kubernetes environment.
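    For steps 1 and 2, a typical sequence looks like the following; the PF name and VF count are illustrative:

      # Create two VFs on the SR-IOV capable PF ens1f0
      echo 2 > /sys/class/net/ens1f0/device/sriov_numvfs
      # Confirm the SR-IOV CNI binary is present on each node
      ls /opt/cni/bin/sriov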

  3. Create and apply NetworkAttachmentDefinition manifests for SR-IOV networks—ens1f0v1 and ens1f0v2 in the Cloud-Native Router pod's namespace:
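    As a sketch, the manifest for ens1f0v1 might resemble the following; the resourceName annotation must match your SR-IOV device plugin configuration, and the namespace is your user-defined one:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: ens1f0v1
        namespace: jcnr-custom
        annotations:
          k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
      spec:
        config: '{
          "cniVersion": "0.4.0",
          "name": "ens1f0v1",
          "type": "sriov"
        }'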

  4. Configure the Cloud-Native Router helm chart to set jcnrDeploymentMode as singlePod and add references to the SR-IOV networks. Leave all other configuration as default.
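    The values.yaml schema for network references varies by release; a hypothetical fragment (verify every key except jcnrDeploymentMode against your chart's values.yaml) might look like:

      jcnrDeploymentMode: singlePod
      # Hypothetical key; consult your chart for how SR-IOV networks are referenced.
      fabricInterface:
        - ens1f0v1
        - ens1f0v2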

    Note: You can configure the interface type as fabric when deploying Cloud-Native Router as a transit node or to receive traffic from an application pod using SR-IOV VF. If Cloud-Native Router is set up for overlay management access, use the workload interface type.
  5. Continue the Cloud-Native Router installation.

Allocate PFs to Cloud-Native Router Pod

You can allocate Physical Functions (PFs) to the Cloud-Native Router pod using the host-device CNI plugin. The host-device plugin is included among the standard CNI plugins in a Kubernetes installation.

  1. Configure the host-device networking configuration files. Create a configuration file for each PF under /etc/cni/net.d:
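    For example, /etc/cni/net.d/10-hostdev-ens2f0.conf (the file name and PF name are illustrative):

      {
        "cniVersion": "0.4.0",
        "name": "hostdev-ens2f0",
        "type": "host-device",
        "device": "ens2f0"
      }
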
  2. Configure the NetworkAttachmentDefinition for each PF:
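    When a NetworkAttachmentDefinition omits spec.config, Multus resolves it against a CNI configuration in /etc/cni/net.d whose name matches, so a minimal manifest referencing the file above might be:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: hostdev-ens2f0
        namespace: jcnr-custom   # your user-defined namespace
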
  3. Configure the Cloud-Native Router helm chart to set jcnrDeploymentMode as singlePod and add references to the host-device networks. Leave all other configuration as default.

    Note: Workload type interfaces are not supported when allocating PFs to Cloud-Native Router.
  4. Continue the Cloud-Native Router installation.