Install and Verify Juniper Cloud-Native Router on Amazon EKS
The Juniper Cloud-Native Router uses the JCNR-Controller (cRPD) to provide control plane capabilities and JCNR-CNI to provide a container network interface. Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how you can install these components of the Cloud-Native Router.
Install Juniper Cloud-Native Router Using Juniper Support Site Package
Read this section to learn the steps required to install the cloud-native router components using Helm charts.
- Review the System Requirements for EKS Deployment to ensure the setup has all the required configuration.
- Download the tarball, Juniper_Cloud_Native_Router_<release-number>.tgz, to the directory of your choice. Transfer the file to your server in binary mode so that the compressed tar file expands properly; an example transfer is shown below.
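For example, you could copy the tarball to the server with scp, which transfers the file in binary mode. The username, hostname, and destination path below are placeholders for your environment:
scp Juniper_Cloud_Native_Router_<release-number>.tgz <user>@<server-hostname>:<destination-directory>/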
- Expand the file Juniper_Cloud_Native_Router_<release-number>.tgz.
tar xzvf Juniper_Cloud_Native_Router_<release-number>.tgz
- Change directory to Juniper_Cloud_Native_Router_<release-number>.
cd Juniper_Cloud_Native_Router_<release-number>
Note: All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_<release-number>.
- View the contents of the current directory.
ls
contrail-tools  helmcharts  images  README.md  secrets
- Enter the root password for your host server into the secrets/jcnr-secrets.yaml file at the following line:
root-password: <add your password in base64 format>
You must enter the password in base64-encoded format. Encode your password with the following command and copy the output into secrets/jcnr-secrets.yaml:
echo -n "password" | base64 -w0
- Enter your Juniper Cloud-Native Router license into the secrets/jcnr-secrets.yaml file at the following line:
crpd-license: |
  <add your license in base64 format>
You must enter your license in base64-encoded format. Encode your license file with the following command and copy the output into secrets/jcnr-secrets.yaml:
base64 -w0 licenseFile
Note: You must obtain your license file from your account team and install it in the jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license key and root password in the jcnr-secrets.yaml file, the cRPD pod does not enter the Running state, but remains in the CrashLoopBackOff state.
Note: Starting with Cloud-Native Router Release 23.2, the Cloud-Native Router license format has changed. Request a new license key from the JAL portal before deploying or upgrading to 23.2 or newer releases.
- Apply secrets/jcnr-secrets.yaml.
kubectl apply -f secrets/jcnr-secrets.yaml
namespace/jcnr created
secret/jcnr-secrets created
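Optionally, confirm that the secret was created in the jcnr namespace before continuing:
kubectl get secret jcnr-secrets -n jcnr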
- Create the JCNR ConfigMap if using the Virtual Router Redundancy Protocol (VRRP) for your Cloud-Native Router cluster. A sample jcnr-aws-config.yaml manifest is provided in the cRPD_examples directory in the installation bundle. Apply the jcnr-aws-config.yaml to the Kubernetes system.
kubectl apply -f jcnr-aws-config.yaml
configmap/jcnr-aws-config created
- Customize the helm chart for your deployment using the helmchart/values.yaml file. See Customize JCNR Helm Chart for EKS Deployment for descriptions of the helm chart configurations and a sample helm chart for EKS deployment. An illustrative nodeAffinity snippet follows this step.
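For illustration only, a nodeAffinity entry in values.yaml that selects nodes labeled key1=jcnr (the label used in the node-labeling step later in this procedure) might look similar to the following. The exact keys and structure are defined by the values.yaml shipped in the bundle, so treat this as a sketch rather than the authoritative format:
nodeAffinity:
- key: key1
  operator: In
  values:
  - jcnr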
- Optionally, customize the Cloud-Native Router configuration. See Customize Cloud-Native Router Configuration for creating and applying the cRPD customizations.
- Install Multus CNI using the following command:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/multus/v3.7.2-eksbuild.1/aws-k8s-multus.yaml
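If you want to confirm that Multus is in place before continuing, you can check for the kube-multus-ds daemonset that this manifest creates (the same daemonset appears in the verification section later):
kubectl get ds kube-multus-ds -n kube-system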
- Install the Amazon Elastic Block Store (EBS) Container Storage Interface (CSI) driver. One possible approach is shown below.
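How you install the EBS CSI driver depends on your environment. One common approach, assuming you use EKS managed add-ons and have already configured the IAM permissions that the driver requires, is the aws CLI; replace <cluster-name> with the name of your EKS cluster:
aws eks create-addon --cluster-name <cluster-name> --addon-name aws-ebs-csi-driver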
- Label the nodes on which Cloud-Native Router must be installed, based on the nodeAffinity defined in values.yaml. For example:
kubectl label nodes ip-10-0-100-17.us-east-2.compute.internal key1=jcnr --overwrite
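You can confirm that the label was applied by listing the nodes that carry it (assuming the key1=jcnr label from the example above):
kubectl get nodes -l key1=jcnr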
- Deploy the Juniper Cloud-Native Router using the helm chart. Navigate to the helmchart directory and run the following command:
helm install jcnr .
NAME: jcnr
LAST DEPLOYED: Fri Sep 22 06:04:33 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
- Confirm the Juniper Cloud-Native Router deployment.
helm ls
Sample output:
NAME   NAMESPACE   REVISION   UPDATED                                   STATUS     CHART         APP VERSION
jcnr   default     1          2023-09-22 06:04:33.144611017 -0400 EDT   deployed   jcnr-23.3.0   23.3.0
Install Juniper Cloud-Native Router Using AWS Marketplace Subscription
Read this section to learn the steps required to install the cloud-native router components from your AWS Marketplace subscription using Helm charts.
- Review the System Requirements for EKS Deployment to ensure the setup has all the required configuration.
- Configure AWS credentials using the following command:
aws configure
- Authenticate to the Amazon ECR repo.
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
aws ecr get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
- Download the helm package from the ECR repo.
helm pull oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/juniper-networks/jcnr --version 23.3.0
- Expand the file jcnr-23.3.0.tgz.
tar xzvf jcnr-23.3.0.tgz
- Change directory to jcnr.
cd jcnr
Note: All remaining steps in the installation assume that your current working directory is now jcnr.
- View the contents of the current directory.
ls
Chart.yaml  charts  cRPD_examples  values.yaml
- Enter the root password for your host server into the secrets/jcnr-secrets.yaml file at the following line:
root-password: <add your password in base64 format>
You must enter the password in base64-encoded format. Encode your password with the following command and copy the output into secrets/jcnr-secrets.yaml:
echo -n "password" | base64 -w0
- Enter your Juniper Cloud-Native Router license into the secrets/jcnr-secrets.yaml file at the following line:
crpd-license: |
  <add your license in base64 format>
You must enter your license in base64-encoded format. Encode your license file with the following command and copy the output into secrets/jcnr-secrets.yaml:
base64 -w0 licenseFile
Note: You must obtain your license file from your account team and install it in the jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license key and root password in the jcnr-secrets.yaml file, the cRPD pod does not enter the Running state, but remains in the CrashLoopBackOff state.
Note: Starting with Cloud-Native Router Release 23.2, the Cloud-Native Router license format has changed. Request a new license key from the JAL portal before deploying or upgrading to 23.2 or newer releases.
- Apply secrets/jcnr-secrets.yaml.
kubectl apply -f secrets/jcnr-secrets.yaml
namespace/jcnr created
secret/jcnr-secrets created
- Create the JCNR ConfigMap if using the Virtual Router Redundancy Protocol (VRRP) for your Cloud-Native Router cluster. Apply the jcnr-aws-config.yaml to the Kubernetes system.
kubectl apply -f jcnr-aws-config.yaml
configmap/jcnr-aws-config created
- Customize the helm chart for your deployment using the values.yaml file. See Customize JCNR Helm Chart for EKS Deployment for descriptions of the helm chart configurations and a sample helm chart for EKS deployment.
- Optionally, customize the Cloud-Native Router configuration. See Customize Cloud-Native Router Configuration for creating and applying the cRPD customizations.
- Install the Amazon EBS CSI driver.
- Label the nodes on which Cloud-Native Router must be installed, based on the nodeAffinity defined in values.yaml. For example:
kubectl label nodes ip-10-0-100-17.us-east-2.compute.internal key1=jcnr --overwrite
- Deploy the Juniper Cloud-Native Router using the helm chart. Run the following command:
helm install jcnr .
NAME: jcnr
LAST DEPLOYED: Fri Sep 22 06:04:33 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
- Confirm the Juniper Cloud-Native Router deployment.
helm ls
Sample output:
NAME   NAMESPACE   REVISION   UPDATED                                   STATUS     CHART         APP VERSION
jcnr   default     1          2023-09-22 06:04:33.144611017 -0400 EDT   deployed   jcnr-23.3.0   23.3.0
Verify Cloud-Native Router Installation on Amazon EKS
- Verify the state of the Cloud-Native Router pods by issuing the kubectl get pods -A command. The output of the kubectl command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all pods are in the Running state. In this example, the Juniper Cloud-Native Router pods are the ones in the contrail-deploy, contrail, and jcnr namespaces. For example:
kubectl get pods -A
NAMESPACE         NAME                                     READY   STATUS    RESTARTS      AGE
contrail-deploy   contrail-k8s-deployer-5b6c9656d5-nw9t9   1/1     Running   0             13d
contrail          contrail-vrouter-nodes-wmr26             3/3     Running   0             13d
jcnr              kube-crpd-worker-sts-3                   1/1     Running   0             13d
jcnr              syslog-ng-tct27                          1/1     Running   0             13d
kube-system       aws-node-k8hxl                           1/1     Running   1 (15d ago)   15d
kube-system       ebs-csi-node-c8rbh                       3/3     Running   3 (15d ago)   15d
kube-system       kube-multus-ds-8nzhs                     1/1     Running   1 (13d ago)   13d
kube-system       kube-proxy-h669c                         1/1     Running   1 (15d ago)   15d
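To narrow the view to only the Cloud-Native Router pods, you can also query the jcnr and contrail namespaces directly:
kubectl get pods -n jcnr
kubectl get pods -n contrail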
- Verify the Cloud-Native Router daemonsets by issuing the kubectl get ds -A command. The output lists all daemonsets in the cluster; the Cloud-Native Router daemonsets are contrail-vrouter-masters and contrail-vrouter-nodes in the contrail namespace, and syslog-ng in the jcnr namespace.
kubectl get ds -A
NAMESPACE     NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE
contrail      contrail-vrouter-masters   0         0         0       0            0           <none>                     13d
contrail      contrail-vrouter-nodes     1         1         1       1            1           <none>                     13d
jcnr          syslog-ng                  1         1         1       1            1           <none>                     13d
kube-system   aws-node                   8         8         8       8            8           <none>                     15d
kube-system   ebs-csi-node               8         8         8       8            8           kubernetes.io/os=linux     15d
kube-system   ebs-csi-node-windows       0         0         0       0            0           kubernetes.io/os=windows   15d
kube-system   kube-multus-ds             8         8         8       8            8           <none>                     13d
kube-system   kube-proxy                 8         8         8       8            8           <none>                     15d
- Verify the Cloud-Native Router statefulsets by issuing the kubectl get statefulsets -A command. The cRPD controller runs as the kube-crpd-worker-sts statefulset in the jcnr namespace.
kubectl get statefulsets -A
NAMESPACE   NAME                   READY   AGE
jcnr        kube-crpd-worker-sts   1/1     27m
- Verify that cRPD is licensed and has the appropriate configurations.
- See the Access the cRPD CLI section for instructions on how to access the cRPD CLI.
- Once you have accessed the cRPD CLI, issue the show system license command in CLI mode to view the system licenses. For example:
root@jcnr-01:/# cli
root@jcnr-01> show system license
License usage:
                                 Licenses     Licenses    Licenses    Expiry
  Feature name                       used    installed      needed
  containerized-rpd-standard            1            1           0    2024-09-20 16:59:00 PDT

Licenses installed:
  License identifier: 85e5229f-0c64-0000-c10e4-a98c09ab34a1
  License SKU: S-CRPD-10-A1-PF-5
  License version: 1
  Order Type: commercial
  Software Serial Number: 1000098711000-iHpgf
  Customer ID: Juniper Networks Inc.
  License count: 15000
  Features:
    containerized-rpd-standard - Containerized routing protocol daemon with standard features
      date-based, 2022-08-21 17:00:00 PDT - 2027-09-20 16:59:00 PDT
- Issue the show configuration | display set command in CLI mode to view the cRPD default and custom configuration. The output depends on the custom configuration and the Cloud-Native Router deployment mode.
root@jcnr-01# cli
root@jcnr-01> show configuration | display set
- Type the exit command to exit from the pod shell.
- Verify the vRouter interface configuration.
- See the Access the vRouter CLI section for instructions on how to access the vRouter CLI.
- Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output depends on the Cloud-Native Router deployment mode and configuration. An example for an L3 mode deployment, with two fabric interfaces configured, is provided below:
$ vif --list
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
       Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
       D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
       Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
       Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled
       Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag, HbsL=HBS Left Intf
       HbsR=HBS Right Intf, Ig=Igmp Trap Enabled, Ml=MAC-IP Learning Enabled, Me=Multicast Enabled

vif0/0      Socket: unix MTU: 1514
            Type:Agent HWaddr:00:00:5e:00:01:00
            Vrf:65535 Flags:L2 QOS:-1 Ref:3
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:0  bytes:0 errors:0
            TX packets:0  bytes:0 errors:0
            Drops:0

vif0/1      PCI: 0000:00:07.0 (Speed 1000, Duplex 1) NH: 6 MTU: 9000
            Type:Physical HWaddr:0e:d0:2a:58:46:4f IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:0 Flags:L3L2 QOS:0 Ref:8
            RX device packets:20476  bytes:859992 errors:0
            RX port   packets:20476 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: 0000:00:07.0  Status: UP  Driver: net_ena
            RX packets:20476  bytes:859992 errors:0
            TX packets:2  bytes:180 errors:0
            Drops:0
            TX port   packets:2 errors:0
            TX device packets:8  bytes:740 errors:0

vif0/2      PCI: 0000:00:08.0 (Speed 1000, Duplex 1) NH: 7 MTU: 9000
            Type:Physical HWaddr:0e:6a:9e:04:da:6f IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:0 Flags:L3L2 QOS:0 Ref:8
            RX device packets:20476  bytes:859992 errors:0
            RX port   packets:20476 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: 0000:00:08.0  Status: UP  Driver: net_ena
            RX packets:20476  bytes:859992 errors:0
            TX packets:2  bytes:180 errors:0
            Drops:0
            TX port   packets:2 errors:0
            TX device packets:8  bytes:740 errors:0

vif0/3      PMD: eth2 NH: 10 MTU: 9000
            Type:Host HWaddr:0e:d0:2a:58:46:4f IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:65535 Flags:L3L2DProxyEr QOS:-1 Ref:11 TxXVif:1
            RX device packets:2  bytes:180 errors:0
            RX queue  packets:2 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:2  bytes:180 errors:0
            TX packets:20476  bytes:859992 errors:0
            Drops:0
            TX queue  packets:20476 errors:0
            TX device packets:20476  bytes:859992 errors:0

vif0/4      PMD: eth3 NH: 15 MTU: 9000
            Type:Host HWaddr:0e:6a:9e:04:da:6f IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:65535 Flags:L3L2DProxyEr QOS:-1 Ref:11 TxXVif:2
            RX device packets:2  bytes:180 errors:0
            RX queue  packets:2 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:2  bytes:180 errors:0
            TX packets:20476  bytes:859992 errors:0
            Drops:0
            TX queue  packets:20476 errors:0
            TX device packets:20476  bytes:859992 errors:0
- Type exit to exit from the pod shell.