Deploying Cloud-Native Router as a Single Pod in Pod Network Namespace
Cloud-Native Router can be deployed as a single pod in the pod network namespace for increased security, simplified deployment, and enhanced portability. Read this topic to understand the deployment prerequisites and processes.
For pure CNF deployments, you now have the option to deploy Cloud-Native Router as a single pod in the pod network namespace, offering the following advantages:
- Consolidate all Cloud-Native Router containers into a single pod—Simplifies deployment and lifecycle management.
- Deploy Cloud-Native Router in the pod network namespace—Eliminates the requirement for host-level networking and improves security.
- Minimize host-level dependencies—Reduces shared volume mounts and privileged/root file system access, enhancing portability and reducing security risks.
In addition, the solution implements the following changes:
- Enable Physical Function (PF) and Virtual Function (VF) provisioning using Kubernetes resource requests—The interfaces must be available within the Cloud-Native Router's namespace so that the JCNR pod can bind them to DPDK:
  - VFs are derived from a PF and allocated to the Cloud-Native Router pod using the SR-IOV CNI plugin.
  - PFs are allocated to the Cloud-Native Router pod using the host-device CNI plugin.
- Support for Nokia CPU Pooler—Manages dedicated and shared CPU resources for containers.
- Custom namespace support—Ability to deploy Cloud-Native Router in a user-defined namespace.
Note: JCNR-CNI is not supported when deploying Cloud-Native Router as a single pod in the pod network namespace.
Note: You can configure application pods that use an SR-IOV VF to send traffic to Cloud-Native Router using the SR-IOV CNI. The application pods must be configured with the Cloud-Native Router VF interface as the gateway. The pod VF and the Cloud-Native Router VF must be on the same PF.
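For illustration, a minimal sketch of what such an application-pod attachment could look like, assuming the Cloud-Native Router VF on the same PF is addressed as 10.1.1.1/24; the network name, resource name, and addresses are placeholders, not values from the Juniper documentation:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: app-sriov-net            # hypothetical application network
  namespace: custom
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "name": "app-sriov-net",
    "ipam": {
      "type": "static",
      "addresses": [{ "address": "10.1.1.2/24" }],
      "routes": [{ "dst": "0.0.0.0/0", "gw": "10.1.1.1" }]
    }
  }'

The application pod would then reference this network through the k8s.v1.cni.cncf.io/networks annotation so that its default route points at the Cloud-Native Router VF.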
Install Cloud-Native Router as a Single Pod in Pod Network Namespace
Read this section to learn the steps required to install the Cloud-Native Router.
Key points about single pod deployment:
- Supported on open-source Kubernetes deployed on Rocky Linux or Ubuntu OS.
- Supported for Cloud-Native Router CNF deployments only.
- Service chaining with cSRX is not supported.
- The jcnrDeploymentMode parameter must be set to singlePod in the deployment Helm chart, as shown in the snippet after this list.
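For reference, the corresponding setting in the deployment Helm chart's values.yaml (shown in full context in the Helm chart samples later in this topic):

jcnrDeploymentMode: singlePod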
Ensure you have configured a server based on the System Requirements.
Download the Cloud-Native Router software package Juniper_Cloud_Native_Router_release.tar.gz to the directory of your choice.
Expand the downloaded package.
tar xzvf Juniper_Cloud_Native_Router_release.tar.gz
Change directory to the main installation directory.
cd Juniper_Cloud_Native_Router_release
View the contents in the current directory.
ls
helmchart  images  README.md  secrets
Change to the helmchart directory and untar the jcnr-<release>.tgz file.
cd helmchart
ls
jcnr-<release>.tgz
tar -xzvf jcnr-<release>.tgz
ls
jcnr  jcnr-<release>.tgz
The Cloud-Native Router container images are required for deployment. Choose one of the following options:
- Configure your cluster to deploy images from the Juniper Networks enterprise-hub.juniper.net repository. See Configure Repository Credentials for instructions on how to configure repository credentials in the deployment Helm chart.
- Configure your cluster to deploy images from the images tarball included in the downloaded Cloud-Native Router software package. See Deploy Prepackaged Images for instructions on how to import images to the local container runtime.
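For example, on a node that uses containerd as the container runtime, importing a prepackaged image tarball typically looks like the following; the tarball path is a placeholder, and Deploy Prepackaged Images remains the authoritative procedure:

ctr -n k8s.io images import images/<image-tarball>.tar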
Configure the namespace, root password, and license for the Cloud-Native Router installation:
Modify the namespace in secrets/jcnr-secrets.yaml to the user-defined namespace in which you want to install the Cloud-Native Router pod:
---
apiVersion: v1
kind: Namespace
metadata:
  name: custom
---
Enter the root password for your host server into the secrets/jcnr-secrets.yaml file. You must enter the password in base64-encoded format. Encode your password as follows:
echo -n "password" | base64 -w0
Copy the output of this command into secrets/jcnr-secrets.yaml:
root-password: <add your password in base64 format>
Enter the Cloud-Native Router license in base64-encoded format.
Encode your license in base64. The licenseFile is the license file that you obtained from Juniper Networks.
base64 -w 0 licenseFile
Copy and paste your base64-encoded license into secrets/jcnr-secrets.yaml. The secrets/jcnr-secrets.yaml file contains a parameter called crpd-license:
crpd-license: |
  <add your license in base64 format>
Apply the secrets to the cluster.
kubectl apply -f secrets/jcnr-secrets.yaml
namespace/custom created
secret/jcnr-secrets created
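Optionally, confirm that the namespace and the secret were created:

kubectl get namespace custom
kubectl get secret jcnr-secrets -n custom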
You can allocate PFs and VFs to Cloud-Native Router. Follow the instructions in Allocate VFs to Cloud-Native Router Pod and Allocate PFs to Cloud-Native Router Pod to allocate Cloud-Native Router interfaces.
Configure how cores are assigned to the vRouter DPDK containers. The single-pod Cloud-Native Router installation supports either Static CPU Allocation or CPU Allocation via Nokia CPU Pooler.
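For illustration only (the parameter name and core numbers here are an assumption; see Static CPU Allocation for the exact options), static allocation is typically expressed as a core list for the vRouter DPDK forwarding threads in values.yaml:

# illustrative values.yaml snippet; adjust the cores to your NUMA layout
cpu_core_mask: "2,3,22,23"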
Configure NodePort or LoadBalancer services for accessing agent introspect, vRouter, and cRPD telemetry, as shown in the sample below.
#ports that need to be exposed as a service for single pod deployment
#default service creation is disabled and will not be created until enableService flag is set to true
enableService: true
service:
  type: NodePort
  labels: {}
  annotations: {}
  clusterIP: ""
  externalIPs: []
  # Only use if service.type is "LoadBalancer"
  loadBalancerIP: ""
  # Ports to expose on each node
  # Only used if service.type is "NodePort"
  nodePort:
    crpdMetricsPort: 30070
    vrouterMetricsPort: 30072
    agentIntrospectPort: 30085
    #crpdGnmiPort: 30077
    #vrouterGnmiPort: 30079

Optionally, customize the Cloud-Native Router configuration. See Customize Cloud-Native Router Configuration for creating and applying the cRPD customizations.
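After deployment, you can verify the service and query an exposed port through any node IP. The node IP below is a placeholder, and the /metrics path is the conventional Prometheus endpoint (an assumption here), using the vrouterMetricsPort value from the sample above:

kubectl get svc -n custom
curl http://<node-ip>:30072/metrics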
Label the nodes where you want Cloud-Native Router to be installed based on the nodeAffinity configuration (if defined in values.yaml). For example:
kubectl label nodes ip-10.0.100.17.lab.net key1=jcnr --overwrite
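You can confirm that the label was applied:

kubectl get nodes -l key1=jcnr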
Deploy the Juniper Cloud-Native Router using the Helm chart in the custom namespace. Navigate to the helmchart/jcnr directory and run the following command:
helm install jcnr . -n custom
NAME: jcnr
LAST DEPLOYED: Fri Dec 22 06:04:33 2023
NAMESPACE: custom
STATUS: deployed
REVISION: 1
TEST SUITE: None
Confirm Juniper Cloud-Native Router deployment.
helm ls -A
NAME  NAMESPACE  REVISION  UPDATED      STATUS    CHART           APP VERSION
jcnr  custom     1         <date-time>  deployed  jcnr-<version>  <version>
Verify the jcnr pod is running. You can list the containers and their state. A sample output from the K9s tool is provided below.
NAME                                 IMAGE                                                                                                  READY  STATE      INIT   RESTARTS  PROBES(L:R)  CPU/R:L    MEM/R:L    PORTS
contrail-init                        s-artifactory.juniper.net/atom-docker/cn2/bazel-build/dev/contrail-init:R25.4-27                       true   Completed  true   0         off:off      0:0        0:0
contrail-tools                       s-artifactory.juniper.net/atom-docker/cn2/bazel-build/dev/x86_64/contrail-tools:R25.4-27               true   Running    false  1         off:off      0:0        0:0
contrail-vrouter-agent               s-artifactory.juniper.net/atom-docker/cn2/bazel-build/dev/contrail-vrouter-agent:R25.4-27              true   Running    false  3         on:off       0:0        0:0        introspect:8085,grpc:50052
contrail-vrouter-agent-dpdk          s-artifactory.juniper.net/atom-docker/cn2/bazel-build/dev/contrail-vrouter-dpdk:R25.4-27               true   Running    false  3         on:off       4000:4000  1024:1024
contrail-vrouter-kernel-init-dpdk    s-artifactory.juniper.net/atom-docker/cn2/bazel-build/dev/contrail-vrouter-kernel-init-dpdk:R25.4-27   true   Completed  true   0         off:off      0:0        500:500
contrail-vrouter-telemetry-exporter  s-artifactory.juniper.net/atom-docker/cn2/bazel-build/dev/contrail-telemetry-exporter:R25.4-27         true   Running    false  1         off:off      0:0        0:0        metrics:8070
crpd                                 s-artifactory.juniper.net/junos-docker-local/warthog/crpd:25.4R1.3                                     true   Running    false  4         on:off       0:0        0:0        179
init                                 s-artifactory.juniper.net/atom-docker/cn2/bazel-build/dev/jcnr-init:R25.4-27                           true   Completed  true   1         off:off      0:0        0:0
install-cni                          s-artifactory.juniper.net/junos-docker-local/warthog/jcnr-cni:25.4-20251114-e8e0e38                    true   Completed  true   0         off:off      0:0        0:0
jcnr-config-controller               s-artifactory.juniper.net/atom-docker/cn2/bazel-build/dev/jcnr-init:R25.4-27                           true   Running    false  1         off:off      0:0        0:0
jcnr-crpd-telemetry-exporter         s-artifactory.juniper.net/atom-docker/cn2/bazel-build/dev/contrail-telemetry-exporter:R25.4-27         true   Running    false  1         off:off      0:0        0:0        metrics:8072
syslog-ng                            s-artifactory.juniper.net/contrail-docker/syslog-ng:v6                                                 true   Running    false  1         on:off       0:0        0:0        syslog-tcp:6601,syslog-udp:5514
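You can also check the pod directly with kubectl; the pod name depends on your deployment:

kubectl get pods -n custom -o wide
kubectl describe pod <jcnr-pod-name> -n custom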
Allocate VFs to Cloud-Native Router Pod
You can allocate VFs to the Cloud-Native Router Pod using the SR-IOV CNI plugin. The following steps must be performed to complete the allocation:
- Create VFs from SR-IOV-enabled PFs:
echo 2 > /sys/class/net/ens1f0/device/sriov_numvfs && sleep 1
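You can confirm that the VFs were created on the PF:

cat /sys/class/net/ens1f0/device/sriov_numvfs
ip link show ens1f0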
- Ensure that the SR-IOV CNI plugin is installed in your Kubernetes environment.
- Create and apply NetworkAttachmentDefinition manifests for the SR-IOV networks ens1f0v1 and ens1f0v2 in the Cloud-Native Router pod's namespace:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ens1f0v1
  namespace: custom
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "name": "sriov-network",
    "spoofchk": "off",
    "trust": "on"
  }'

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ens1f0v2
  namespace: custom
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "name": "sriov-network",
    "spoofchk": "off",
    "trust": "on"
  }'

kubectl apply -f ens1f0v1.yaml
kubectl apply -f ens1f0v2.yaml
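Verify that the network attachment definitions exist in the Cloud-Native Router pod's namespace:

kubectl get network-attachment-definitions -n custom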
- Configure the Cloud-Native Router Helm chart to set jcnrDeploymentMode to singlePod and add references to the SR-IOV networks. Leave all other configuration at the defaults.

# uncomment below to set jcnrDeploymentMode to singlePod; in this mode all jcnr components
# (vRouter, cRPD, config-controller, syslog-ng, contrail-tools, etc) will be
# deployed in a single pod
jcnrDeploymentMode: singlePod

################### SinglePod JCNR mode section ###################
# To run JCNR in singlePod mode, uncomment jcnrDeploymentMode: singlePod in the global section above.
# NetworkDetails - list of network attachment definitions
networkDetails:
  - name: ens1f0v1    # network attachment definition name
    namespace: custom # namespace name where the network attachment definition is created
    type: fabric      # fabric or workload, default is fabric (workload type is specifically for OAM interface and it should be a VF)
  - name: ens1f0v2    # network attachment definition name
    namespace: custom # namespace name where the network attachment definition is created
    type: fabric      # fabric or workload, default is fabric (workload type is specifically for OAM interface and it should be a VF)
# NetworkDeviceResources
networkResources:
  limits:
    intel.com/intel_sriov_netdevice: "2"
  requests:
    intel.com/intel_sriov_netdevice: "2"

Note: You can configure the interface type as fabric when deploying Cloud-Native Router as a transit node or to receive traffic from an application pod using an SR-IOV VF. If Cloud-Native Router is set up for overlay management access, use the workload interface type.

- Continue the Cloud-Native Router installation.
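After the Helm install completes, you can confirm that the VF resources were allocated to the Cloud-Native Router pod; the pod name below is a placeholder:

kubectl describe pod <jcnr-pod-name> -n custom | grep -i intel_sriov_netdevice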
Allocate PFs to Cloud-Native Router Pod
You can allocate Physical Functions (PFs) to the Cloud-Native Router pod using the host-device CNI plugin. The host-device plugin is included as part of the standard CNI plugins in a Kubernetes installation.
- Configure the host-device networking configuration files. Create a configuration file for each PF under /etc/cni/net.d:

{
  "cniVersion": "0.3.1",
  "type": "host-device",
  "device": "ens1f1"
}

{
  "cniVersion": "0.3.1",
  "type": "host-device",
  "device": "ens1f2"
}

- Configure the NetworkAttachmentDefinition for each PF:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ens1f1
  namespace: custom
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "device": "ens1f1"
  }'

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ens1f2
  namespace: custom
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "device": "ens1f2"
  }'

- Configure the Cloud-Native Router Helm chart to set jcnrDeploymentMode to singlePod and add references to the host-device networks. Leave all other configuration at the defaults.

# uncomment below to set jcnrDeploymentMode to singlePod; in this mode all jcnr components
# (vRouter, cRPD, config-controller, syslog-ng, contrail-tools, etc) will be
# deployed in a single pod
jcnrDeploymentMode: singlePod

################### SinglePod JCNR mode section ###################
# To run JCNR in singlePod mode, uncomment jcnrDeploymentMode: singlePod in the global section above.
# NetworkDetails - list of network attachment definitions
networkDetails:
  - name: ens1f1      # network attachment definition name
    namespace: custom # namespace name where the network attachment definition is created
    type: fabric      # fabric or workload, default is fabric (workload type is specifically for OAM interface and it should be a VF)
  - name: ens1f2      # network attachment definition name
    namespace: custom # namespace name where the network attachment definition is created
    type: fabric      # fabric or workload, default is fabric (workload type is specifically for OAM interface and it should be a VF)
# NetworkDeviceResources
# networkResources:
#   limits:
#     intel.com/intel_sriov_netdevice: "1"
#   requests:
#     intel.com/intel_sriov_netdevice: "1"

Note: Workload type interfaces are not supported when allocating PFs to Cloud-Native Router.

- Continue the Cloud-Native Router installation.
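As a quick sanity check (this reflects general host-device CNI behavior rather than a documented Cloud-Native Router step), once the pod is running with the host-device networks attached, the PF should no longer be visible in the host network namespace because the plugin moves the device into the pod:

ip link show ens1f1
# typically reports: Device "ens1f1" does not exist.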