Install and Verify Juniper Cloud-Native Router for Azure Deployment
The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD) to provide control plane capabilities and the JCNR-CNI to provide a container network interface. Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how to install these components of the Cloud-Native Router.
Install Juniper Cloud-Native Router Using Helm Chart
Read this section to learn the steps required to load the cloud-native router image components using Helm charts.
- Review the System Requirements for Azure Deployment section to ensure that your setup meets all the required configuration.
- Download the desired Cloud-Native Router software package to the directory of your choice.
You can download either the package that installs Cloud-Native Router only or the package that installs Cloud-Native Router together with Juniper cSRX. See Cloud-Native Router Software Download Packages for a description of the available packages. If you don't want to install Juniper cSRX now, you can always install it on your working Cloud-Native Router installation later.
- Expand the file Juniper_Cloud_Native_Router_release-number.tgz.
tar xzvf Juniper_Cloud_Native_Router_release-number.tgz
- Change directory to the main installation directory.
If you're installing Cloud-Native Router only, then:
cd Juniper_Cloud_Native_Router_<release>
This directory contains the Helm chart for Cloud-Native Router only.
If you're installing Cloud-Native Router and cSRX at the same time, then:
cd Juniper_Cloud_Native_Router_CSRX_<release>
This directory contains the combination Helm chart for Cloud-Native Router and cSRX.
Note: All remaining steps in the installation assume that your current working directory is now either Juniper_Cloud_Native_Router_<release> or Juniper_Cloud_Native_Router_CSRX_<release>.
- View the contents of the current directory.
ls
helmchart  images  README.md  secrets
- Change to the helmchart directory and expand the Helm chart.
cd helmchart
For Cloud-Native Router only:
ls
jcnr-<release>.tgz
tar -xzvf jcnr-<release>.tgz
ls
jcnr  jcnr-<release>.tgz
The Helm chart is located in the jcnr directory.
For the combined Cloud-Native Router and cSRX:
ls
jcnr_csrx-<release>.tgz
tar -xzvf jcnr_csrx-<release>.tgz
ls
jcnr_csrx  jcnr_csrx-<release>.tgz
The Helm chart is located in the jcnr_csrx directory.
- The Cloud-Native Router container images are required for deployment. Choose one of the following options:
  - Configure your cluster to deploy images from the Juniper Networks enterprise-hub.juniper.net repository. See Configure Repository Credentials for instructions on how to configure repository credentials in the deployment Helm chart (a hedged example is sketched after this list).
  - Configure your cluster to deploy images from the images tarball included in the downloaded Cloud-Native Router software package. See Deploy Prepackaged Images for instructions on how to import images to the local container runtime.
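If you choose the repository option, a common Kubernetes pattern for registry access is an image pull secret. The sketch below is illustrative only; the credential values, secret name (regcred), and namespace are placeholders, and the exact names and procedure the Helm chart expects are described in Configure Repository Credentials.
# hypothetical example: create a pull secret for the Juniper enterprise registry
kubectl create secret docker-registry regcred \
  --docker-server=enterprise-hub.juniper.net \
  --docker-username=<your-username> \
  --docker-password=<your-access-token> \
  --namespace=<namespace-expected-by-the-chart>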
- Follow the steps in Installing Your License to install your Cloud-Native Router license.
- Enter the root password for your host server into the secrets/jcnr-secrets.yaml file at the following line:
root-password: <add your password in base64 format>
You must enter the password in base64-encoded format. To encode the password, create a file with the plain text password on a single line, then issue the command:
base64 -w 0 rootPasswordFile
Copy the output of this command into secrets/jcnr-secrets.yaml.
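As an illustration of the encoding workflow (the password value and resulting string below are placeholders, not values to reuse):
# write the plain-text password on a single line, with no trailing newline
echo -n 'MyRootPassword' > rootPasswordFile
# print the base64-encoded value on one line
base64 -w 0 rootPasswordFile
# example output: TXlSb290UGFzc3dvcmQ=
Paste the printed value after root-password: in secrets/jcnr-secrets.yaml.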
- Apply secrets/jcnr-secrets.yaml to the cluster.
kubectl apply -f secrets/jcnr-secrets.yaml
namespace/jcnr created
secret/jcnr-secrets created
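If you want to confirm that the secret landed in the cluster, a quick check (using the jcnr namespace and jcnr-secrets name shown in the output above) is:
kubectl get secret jcnr-secrets -n jcnr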
- If desired, configure how cores are assigned to the vRouter DPDK containers. See Allocate CPUs to the Cloud-Native Router Forwarding Plane.
- Customize the Helm chart for your deployment using the helmchart/jcnr/values.yaml or helmchart/jcnr_csrx/values.yaml file.
See Customize Cloud-Native Router Helm Chart for Azure Deployment for descriptions of the Helm chart configurations.
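One optional way to sanity-check your values.yaml edits before installing is to render the chart locally; this is a generic Helm technique, not a Cloud-Native Router requirement:
# run from the helmchart/jcnr or helmchart/jcnr_csrx directory
helm template jcnr . > /tmp/jcnr-rendered.yaml    # inspect the manifests rendered from your values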
- Optionally, customize the Cloud-Native Router configuration.
See Customize Cloud-Native Router Configuration for creating and applying the cRPD customizations.
- If you're installing Juniper cSRX now, then follow the procedure in Apply the cSRX License and Configure cSRX.
- Label the nodes where you want Cloud-Native Router to be installed based on the nodeaffinity configuration (if defined in the values.yaml). For example:
kubectl label nodes ip-10.0.100.17.lab.net key1=jcnr --overwrite
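To confirm which nodes carry the label (key1=jcnr is simply the example label used above), you can list the labeled nodes:
kubectl get nodes -l key1=jcnr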
- Deploy the Juniper Cloud-Native Router using the Helm chart.
Navigate to the helmchart/jcnr or the helmchart/jcnr_csrx directory and run one of the following commands:
helm install jcnr .
or
helm install jcnr-csrx .
Sample output:
NAME: jcnr
LAST DEPLOYED: Fri Dec 22 06:04:33 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
- Confirm the Juniper Cloud-Native Router deployment.
helm ls
Sample output:
NAME   NAMESPACE   REVISION   UPDATED                                    STATUS     CHART            APP VERSION
jcnr   default     1          2023-12-22 06:04:33.144611017 -0400 EDT    deployed   jcnr-<version>   <version>
Verify Installation
The output shown in this example procedure depends on the number of nodes in the cluster, so the output you see in your setup may differ in that regard.
- Verify the state of the Cloud-Native Router pods by issuing the kubectl get pods -A command.
The output of the kubectl command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all pods are in the Running state. In this example, the Juniper Cloud-Native Router pods are those in the contrail-deploy, contrail, and jcnr namespaces. For example:
kubectl get pods -A
NAMESPACE         NAME                                            READY   STATUS    RESTARTS   AGE
contrail-deploy   contrail-k8s-deployer-579cd5bc74-g27gs          1/1     Running   0          103s
contrail          jcnr-0-dp-contrail-vrouter-nodes-b2jxp          2/2     Running   0          87s
contrail          jcnr-0-dp-contrail-vrouter-nodes-vrdpdk-g7wrk   1/1     Running   0          87s
jcnr              jcnr-0-crpd-0                                   1/1     Running   0          103s
jcnr              syslog-ng-ds5qd                                 1/1     Running   0          103s
kube-system       calico-kube-controllers-5f4fd8666-m78hk         1/1     Running   0          4h2m
kube-system       calico-node-28w98                               1/1     Running   0          86d
kube-system       coredns-54bf8d85c7-vkpgs                        1/1     Running   0          3h8m
kube-system       dns-autoscaler-7944dc7978-ws9fn                 1/1     Running   0          86d
kube-system       kube-apiserver-ix-esx-06                        1/1     Running   0          86d
kube-system       kube-controller-manager-ix-esx-06               1/1     Running   0          86d
kube-system       kube-multus-ds-amd64-jl69w                      1/1     Running   0          86d
kube-system       kube-proxy-qm5bl                                1/1     Running   0          86d
kube-system       kube-scheduler-ix-esx-06                        1/1     Running   0          86d
kube-system       nodelocaldns-bntfp                              1/1     Running   0          86d
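To look at only the Cloud-Native Router pods, you can scope the query to the namespaces shown above:
kubectl get pods -n jcnr
kubectl get pods -n contrail
kubectl get pods -n contrail-deploy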
- Verify the Cloud-Native Router daemonsets by issuing the kubectl get ds -A command.
The output lists all daemonsets in the cluster. In this example, the Cloud-Native Router daemonsets are those in the contrail and jcnr namespaces.
kubectl get ds -A
NAMESPACE     NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE
contrail      jcnr-0-dp-contrail-vrouter-nodes          1         1         1       1            1           <none>                     90m
contrail      jcnr-0-dp-contrail-vrouter-nodes-vrdpdk   1         1         1       1            1           <none>                     90m
jcnr          syslog-ng                                 1         1         1       1            1           <none>                     90m
kube-system   calico-node                               1         1         1       1            1           kubernetes.io/os=linux     86d
kube-system   kube-multus-ds-amd64                      1         1         1       1            1           kubernetes.io/arch=amd64   86d
kube-system   kube-proxy                                1         1         1       1            1           kubernetes.io/os=linux     86d
kube-system   nodelocaldns                              1         1         1       1            1           kubernetes.io/os=linux     86d
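If a Cloud-Native Router daemonset is not yet fully ready, one generic way to watch it converge (using the syslog-ng daemonset from the output above as the example) is:
kubectl rollout status ds/syslog-ng -n jcnr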
- Verify the Cloud-Native Router statefulsets by issuing the kubectl get statefulsets -A command.
The command output provides the statefulsets.
kubectl get statefulsets -A
NAMESPACE   NAME          READY   AGE
jcnr        jcnr-0-crpd   1/1     27m
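The cRPD statefulset can also be checked on its own (the jcnr-0-crpd name and jcnr namespace are taken from the sample output above):
kubectl get statefulset jcnr-0-crpd -n jcnr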
- Verify that the cRPD is licensed and has the appropriate configuration.
  - View the Access cRPD CLI section to access the cRPD CLI (a hedged kubectl example appears after these sub-steps).
  - Once you have accessed the cRPD CLI, issue the show system license command in CLI mode to view the system licenses. For example:
root@jcnr-01:/# cli
root@jcnr-01> show system license
License usage:
                                  Licenses   Licenses   Licenses   Expiry
  Feature name                        used  installed     needed
  containerized-rpd-standard             1          1          0   2024-09-20 16:59:00 PDT

Licenses installed:
  License identifier: 85e5229f-0c64-0000-c10e4-a98c09ab34a1
  License SKU: S-CRPD-10-A1-PF-5
  License version: 1
  Order Type: commercial
  Software Serial Number: 1000098711000-iHpgf
  Customer ID: Juniper Networks Inc.
  License count: 15000
  Features:
    containerized-rpd-standard - Containerized routing protocol daemon with standard features
      date-based, 2022-08-21 17:00:00 PDT - 2027-09-20 16:59:00 PDT
  - Issue the show configuration | display set command in CLI mode to view the cRPD default and custom configuration. The output depends on the custom configuration and the Cloud-Native Router deployment mode.
root@jcnr-01# cli
root@jcnr-01> show configuration | display set
  - Type the exit command to exit from the pod shell.
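Although the authoritative procedure is in the Access cRPD CLI section, a minimal sketch of reaching the cRPD CLI with kubectl, assuming the jcnr-0-crpd-0 pod name and jcnr namespace shown in the earlier outputs, looks like this:
kubectl exec -it jcnr-0-crpd-0 -n jcnr -- cli    # open the Junos CLI inside the cRPD pod
From that CLI prompt you can then run show system license and show configuration | display set.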
- Verify the vRouter interfaces configuration.
  - View the Access vRouter CLI section to access the vRouter CLI (a hedged kubectl example appears after these sub-steps).
  - Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output depends on the Cloud-Native Router deployment mode and configuration. An example for an L3 mode deployment, with one fabric interface configured, is provided below:
$ vif --list
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
       Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
       D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
       Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
       Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled
       Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag, HbsL=HBS Left Intf
       HbsR=HBS Right Intf, Ig=Igmp Trap Enabled, Ml=MAC-IP Learning Enabled, Me=Multicast Enabled

vif0/0      Socket: unix MTU: 1500
            Type:Agent HWaddr:00:00:5e:00:01:00 Vrf:65535
            Flags:L2 QOS:-1 Ref:3
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:0  bytes:0 errors:0
            TX packets:0  bytes:0 errors:0
            Drops:0

vif0/1      PCI: 0000:5a:02.1 (Speed 10000, Duplex 1) NH: 6 MTU: 1500
            Type:Physical HWaddr:ba:9c:0f:ab:e2:c9 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:0 Flags:L3L2Vof QOS:0 Ref:12
            RX port   packets:66 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: 0000:5a:02.1  Status: UP  Driver: net_iavf
            RX packets:66  bytes:5116 errors:0
            TX packets:0  bytes:0 errors:0
            Drops:0

vif0/2      PMD: eno3v1 NH: 9 MTU: 1500
            Type:Host HWaddr:ba:9c:0f:ab:e2:c9 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:65535 Flags:L3L2DProxyEr QOS:-1 Ref:13 TxXVif:1
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:0  bytes:0 errors:0
            TX packets:66  bytes:5116 errors:0
            Drops:0
            TX queue  packets:66 errors:0
            TX device packets:66  bytes:5116 errors:0
  - Type the exit command to exit the pod shell.
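As with the cRPD step, the authoritative procedure is in the Access vRouter CLI section; a minimal kubectl sketch, assuming the contrail namespace and the vRouter agent pod name from the earlier pod listing, looks like this:
# shell into the vRouter agent pod; add -c <container-name> if the default container is not the agent
kubectl exec -it jcnr-0-dp-contrail-vrouter-nodes-b2jxp -n contrail -- bash
vif --list    # run inside the pod shell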