System Requirements for Azure Deployment

Read this section to understand the system, resource, port, and licensing requirements for installing Juniper Cloud-Native Router on Microsoft Azure Cloud Platform.

Minimum Host System Requirements

This section lists the host system requirements for installing the cloud-native router.

Table 1: Cloud-Native Router Minimum Host System Requirements
Component                  Value/Version        Notes
Azure Deployment           VM-based
Instance Type              Standard_F16s_v2
CPU                        Intel x86            The tested CPU is Intel Cascade Lake.
Host OS                    Rocky Linux 8.7
Kernel Version             Rocky Linux: 4.18.X  The tested kernel version is
Kubernetes (K8s) Version   1.25.x               The tested K8s version is 1.25.5.
Calico Version             3.25.1
Multus Version             4.0
Helm                       3.9.x
Container Runtime          containerd
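The version minimums in Table 1 can be checked with a small script. The `version_ge` function below is a hypothetical helper (not part of JCNR) that compares dotted version strings using GNU `sort -V`; the installed versions shown are illustrative.

```shell
# Hypothetical helper: succeeds when version $1 >= version $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

# Example checks against the minimums in Table 1 (installed versions are made up):
version_ge "1.25.5" "1.25.0" && echo "Kubernetes version OK"
version_ge "3.9.4" "3.9.0" && echo "Helm version OK"
```

On a real host, you would feed in the versions reported by, for example, `kubectl version` and `helm version`.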

Resource Requirements

This section lists the resource requirements for installing the cloud-native router.

Table 2: Cloud-Native Router Resource Requirements
Data plane forwarding cores: 2 cores (2P + 2S)

Service/Control cores: 0

UIO Driver: Add the required modules to /etc/modules-load.d/k8s.conf so that they are loaded at boot. Verify the file contents with:

cat /etc/modules-load.d/k8s.conf

The modules listed in this file are provided by the ibverbs package.

Hugepages (1G): 6 Gi. Add the GRUB_CMDLINE_LINUX_DEFAULT values in /etc/default/grub. For example:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 default_hugepagesz=1G hugepagesz=1G hugepages=6 intel_iommu=on iommu=pt"

Update grub and reboot the host. For example:

grub2-mkconfig -o /boot/grub2/grub.cfg

Verify that the hugepages are set by running the following commands:

cat /proc/cmdline
grep -i hugepages /proc/meminfo

JCNR Controller cores: 0.5

JCNR vRouter Agent cores: 0.5
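The hugepage verification above can be scripted. The sketch below parses `/proc/meminfo`-style output; sample text is embedded so the snippet is self-contained, and on a real host you would use the live output of `grep -i hugepages /proc/meminfo` instead.

```shell
# Sample output of `grep -i hugepages /proc/meminfo` on a correctly configured host.
meminfo='HugePages_Total:       6
HugePages_Free:        6
Hugepagesize:    1048576 kB'

# Six hugepages of 1 GiB (1048576 kB) each satisfy the 6-Gi requirement.
total=$(printf '%s\n' "$meminfo" | awk '/HugePages_Total/ {print $2}')
size_kb=$(printf '%s\n' "$meminfo" | awk '/Hugepagesize/ {print $2}')
if [ "$total" -ge 6 ] && [ "$size_kb" -eq 1048576 ]; then
  echo "hugepages OK"
fi
```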

Miscellaneous Requirements

This section lists additional requirements for installing the cloud-native router.

Table 3: Miscellaneous Requirements
Cloud-Native Router Release Miscellaneous Requirements
Set IOMMU and IOMMU-PT in the /etc/default/grub file. For example:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 default_hugepagesz=1G hugepagesz=1G hugepages=64 intel_iommu=on iommu=pt"

Update grub and reboot the host. For example:

grub2-mkconfig -o /boot/grub2/grub.cfg

Additional kernel modules must be loaded on the host before deploying JCNR in L3 mode. These modules are usually available in the linux-modules-extra or kernel-modules-extra packages. Add the kernel modules to /etc/modules-load.d/crpd.conf so that they are loaded at boot, and verify the file contents with:

cat /etc/modules-load.d/crpd.conf

Applicable for L3 deployments only.

Run the ip fou add port 6635 ipproto 137 command on the Linux host to enable kernel-based forwarding. (IP protocol 137 is MPLS-in-IP, and UDP port 6635 is the MPLS-over-UDP port.)

Add firewall rules for the loopback addresses in the VPC.

Configure the VPC firewall rule to allow ingress traffic with source filters set to the subnet range to which JCNR is attached, along with the IP ranges or addresses for the loopback addresses.

For example:

Navigate to Firewall policies on the Azure console and create a firewall rule with the following attributes:

  1. Name: Name of the firewall rule

  2. Network: Choose the VPC network

  3. Priority: 1000

  4. Direction: Ingress

  5. Action on Match: Allow

  6. Source filters: The subnet to which JCNR is attached and the loopback IP ranges

  7. Protocols: all

  8. Enforcement: Enabled

The source filters are the subnet to which JCNR is attached together with the IP ranges or addresses used for the loopback addresses.
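As a sketch, an equivalent rule can be created with the Azure CLI, assuming the VM's subnet is protected by a network security group (NSG). The resource group, NSG name, and address prefixes below are placeholders, not values from this document.

```shell
# Hypothetical example: allow ingress from the JCNR subnet and loopback ranges.
# Replace myResourceGroup, myNSG, and the address prefixes with your own values.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name allow-jcnr-loopbacks \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes 10.0.1.0/24 192.168.100.0/24 \
  --destination-address-prefixes '*' \
  --destination-port-ranges '*'
```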

JCNR for Azure supports IPv4 only.

Ensure accelerated networking is enabled for the fabric interface. When accelerated networking is enabled properly, two interfaces appear for the fabric interface: a synthetic interface (bound to hv_netvsc) and an SR-IOV virtual function enslaved to it. For example:

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:22:48:23:3b:9e brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::222:48ff:fe23:3b9e/64 scope link 
       valid_lft forever preferred_lft forever
4: enP22960s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth1 state UP group default qlen 1000
    link/ether 00:22:48:23:3b:9e brd ff:ff:ff:ff:ff:ff
    altname enP22960p0s2

When configuring the fabric interface in the Helm chart, you must provide the interface that has the hv_netvsc driver bound to it. Run the ethtool -i interface_name command to verify. For example:

user@jcnr01:~# ethtool -i eth1
driver: hv_netvsc
version: 5.15.0-1049-azure
firmware-version: N/A
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

Do not enable accelerated networking for the management interface.
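To locate candidate fabric interfaces, you can also scan sysfs for interfaces bound to the hv_netvsc driver. This is a minimal sketch, not a command from the Juniper documentation:

```shell
# Print every network interface whose bound driver is hv_netvsc.
for dev in /sys/class/net/*; do
  name=$(basename "$dev")
  driver=$(basename "$(readlink -f "$dev/device/driver" 2>/dev/null)" 2>/dev/null)
  if [ "$driver" = "hv_netvsc" ]; then
    echo "${name}: hv_netvsc"
  fi
done
```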

NetworkManager is a tool available on some operating systems that simplifies the management of network interfaces. Although NetworkManager can make operating and configuring the default interfaces easier, it can also interfere with Kubernetes management and cause problems.

To prevent NetworkManager from interfering with the interface configurations, perform the following steps:

  1. Create the file /etc/NetworkManager/conf.d/crpd.conf.
  2. Add the following content to the file.
    Note: enp* indicates all interfaces starting with enp. For specific interface names, provide a comma-separated list.
  3. Restart the NetworkManager service by running the command sudo systemctl restart NetworkManager.
  4. Edit the sysctl file on the host and paste the following content in it:
  5. Run the command sysctl -p /etc/sysctl.conf to load the new sysctl.conf values on the host.
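The steps above can be sketched as follows. The unmanaged-devices syntax is standard NetworkManager keyfile configuration, but treat the exact crpd.conf contents as an assumption, since the original file contents are not reproduced in this document.

```shell
# Steps 1-2: create the config marking enp* interfaces as unmanaged (assumed content).
sudo tee /etc/NetworkManager/conf.d/crpd.conf >/dev/null <<'EOF'
[keyfile]
unmanaged-devices=interface-name:enp*
EOF

# Step 3: restart NetworkManager to pick up the change.
sudo systemctl restart NetworkManager
```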
Verify the core_pattern value is set on the host before deploying JCNR:
sysctl kernel.core_pattern
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e

You can update the core_pattern value in /etc/sysctl.conf.
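A minimal sketch of setting the core_pattern value shown above through /etc/sysctl.conf:

```shell
# Append the systemd-coredump core_pattern and reload the sysctl settings.
echo 'kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
```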

Port Requirements

Juniper Cloud-Native Router listens on certain TCP and UDP ports. This section lists the port requirements for the cloud-native router.

Table 4: Cloud-Native Router Listening Ports
Protocol Port Description
TCP 8085 vRouter introspect. Used to gain internal statistical information about vRouter.
TCP 8072 Telemetry information. Used to see telemetry data from the JCNR control plane.
TCP 9091 vRouter health check. The cloud-native router checks that the contrail-vrouter-dpdk process is running.
TCP 50052 gRPC port. JCNR listens on both IPv4 and IPv6.
TCP 8081 JCNR deployer port.
TCP 666 rpd.
TCP 1883 Mosquitto MQTT. Publish/subscribe messaging utility.
TCP 9500 agentd on cRPD.
TCP 21883 na-mqttd.
TCP 50051 jsd on cRPD.
TCP 51051 jsd on cRPD.
UDP 50055 Syslog-NG.
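Before installation, you can confirm that none of the TCP ports in Table 4 are already taken. A small sketch using `ss` from iproute2:

```shell
# Count how many of the JCNR TCP listening ports are already in use.
ports_tcp="8085 8072 9091 50052 8081 666 1883 9500 21883 50051 51051"
busy=0
for port in $ports_tcp; do
  if ss -Hltn "sport = :${port}" 2>/dev/null | grep -q .; then
    echo "TCP ${port}: already in use"
    busy=$((busy + 1))
  fi
done
echo "${busy} TCP port(s) in use; UDP 50055 must also be free"
```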

Download Options

To deploy JCNR on Azure, you can download the Helm charts from the Juniper Support Site.

Note: Before deploying JCNR on Azure using Helm charts downloaded from the Juniper Support Site, you must whitelist as the JCNR image registry.

JCNR Licensing

Starting with Juniper Cloud-Native Router (JCNR) Release 22.2, we have enabled our Juniper Agile Licensing (JAL) model. JAL ensures that features are used in compliance with Juniper's end-user license agreement. You can purchase licenses for the Juniper Cloud-Native Router software through your Juniper Account Team. You can apply the licenses by using the CLI of the cloud-native router controller. For details about managing multiple license files for multiple cloud-native router deployments, see Juniper Agile Licensing Overview.


Starting with JCNR Release 23.2, the JCNR license format has changed. Request a new license key from the JAL portal before deploying or upgrading to 23.2 or newer releases.