System Requirements for Tanzu Deployment

Read this section to understand the system, resource, port, and licensing requirements for installing Juniper Cloud-Native Router on a VMware Tanzu platform.

Minimum Host System Requirements for Tanzu

Table 1 lists the host system requirements for installing JCNR on Tanzu.

Table 1: Minimum Host System Requirements for Tanzu
Component Value/Version Notes
CPU Intel x86 The tested CPU is an Intel Xeon Gold 6212U 24-core @ 2.4 GHz.
Host OS Red Hat Enterprise Linux (RHEL) 8.4, 8.5, or 8.6; Rocky Linux 8.6, 8.7, 8.8, or 8.9
Kernel Version RHEL: 4.18.X; Rocky Linux: 4.18.X The tested kernel version for RHEL is 4.18.0-305.rt7.72.el8.x86_64. The tested kernel versions for Rocky Linux are 4.18.0-372.19.1.rt7.176.el8_6.x86_64 and 4.18.0-372.32.1.rt7.189.el8_6.x86_64.

NIC
  • Intel E810 CVL with Firmware 4.22 0x8001a1cf 1.3346.0
  • Intel E810 CPK with Firmware 2.20 0x80015dc1 1.3083.0
  • Intel E810-CQDA2 with Firmware 4.20 0x80017785 1.3346.0
  • Intel XL710 with Firmware 9.20 0x8000e0e9 0.0.0
  • Mellanox ConnectX-6
  • Mellanox ConnectX-7

Support for Mellanox NICs is considered a Juniper Technology Preview (Tech Preview) feature. When using Mellanox NICs, ensure that your interface names do not exceed 11 characters in length.

IAVF driver Version 4.8.2
ICE_COMMS Version 1.3.35.0
ICE Version 1.11.20.13 The ICE driver is used only with the Intel E810 NIC.
i40e Version 2.22.18.1 The i40e driver is used only with the Intel XL710 NIC.
Kubernetes (K8s) Version 1.22.x, 1.23.x, 1.25.x The tested K8s version is 1.22.4. K8s version 1.22.2 also works.

JCNR supports an all-in-one or multinode Kubernetes cluster, with control plane and worker nodes running on virtual machines (VMs) or bare metal servers (BMS).

Note:

When you install JCNR on a VMware Tanzu Kubernetes cluster, the cluster must contain at least one worker node.

Calico Version 3.22.x  
Multus Version 3.8  
Helm 3.9.x  
Container-RT containerd 1.7.x Other container runtimes may work but have not been tested with JCNR.

Resource Requirements for Tanzu

Table 2 lists the resource requirements for installing JCNR on Tanzu.

Table 2: Resource Requirements for Tanzu
Resource Value Usage Notes
Data plane forwarding cores 2 cores (2P + 2S)  
Service/Control Cores 0  
UIO Driver VFIO-PCI To enable, configure /etc/modules-load.d/vfio.conf so that the vfio and vfio-pci modules load at boot:
cat /etc/modules-load.d/vfio.conf
vfio
vfio-pci
Hugepages (1G) 6 Gi Add GRUB_CMDLINE_LINUX_DEFAULT values in /etc/default/grub on the host. For example: GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 default_hugepagesz=1G hugepagesz=1G hugepages=64 intel_iommu=on iommu=pt"

Update grub and reboot the host. For example:

grub2-mkconfig -o /boot/grub2/grub.cfg

Verify that the hugepages are set by executing the following commands:

cat /proc/cmdline

grep -i hugepages /proc/meminfo
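
With hugepages=64 configured as in the GRUB example above, the meminfo output should look similar to the following (exact values depend on your configuration and on how many hugepages are in use):

HugePages_Total:      64
HugePages_Free:       64
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB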

Note:

This 6 x 1GB hugepage requirement is the minimum for a basic L2 mode setup. Increase this number for more elaborate installations. For example, in an L3 mode setup with 2 NUMA nodes and 256k descriptors, set the number of 1GB hugepages to 10 for best performance.

JCNR Controller cores 0.5
JCNR vRouter Agent cores 0.5

Miscellaneous Requirements for Tanzu

Table 3 lists additional requirements for installing JCNR on Tanzu.

Table 3: Miscellaneous Requirements for Tanzu

Requirement Example

Enable the host with SR-IOV and VT-d in the system's BIOS.

Depends on BIOS.

Enable VLAN driver at system boot.

Configure /etc/modules-load.d/vlan.conf as follows:

cat /etc/modules-load.d/vlan.conf
8021q

Reboot and verify by executing the command:

lsmod | grep 8021q
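
To load the module immediately without rebooting, you can also run modprobe by hand; the vlan.conf file still ensures the module loads on later boots:

modprobe 8021q
lsmod | grep 8021q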

Enable VFIO-PCI driver at system boot.

Configure /etc/modules-load.d/vfio.conf as follows:

cat /etc/modules-load.d/vfio.conf
vfio
vfio-pci

Reboot and verify by executing the command:

lsmod | grep vfio

Set IOMMU and IOMMU-PT in GRUB.

Add the following line to /etc/default/grub.
GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 default_hugepagesz=1G hugepagesz=1G hugepages=64 intel_iommu=on iommu=pt"

Update grub and reboot.

grub2-mkconfig -o /boot/grub2/grub.cfg 
reboot
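
After the reboot, you can confirm that the IOMMU settings took effect by checking the kernel command line and the boot messages, for example:

cat /proc/cmdline
dmesg | grep -i -e DMAR -e IOMMU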

Additional kernel modules need to be loaded on the host before deploying JCNR in L3 mode. These modules are usually available in linux-modules-extra or kernel-modules-extra packages.

Note:

Applicable for L3 deployments only.

Create a /etc/modules-load.d/crpd.conf file and add the following kernel modules to it:

tun
fou
fou6
ipip
ip_tunnel
ip6_tunnel
mpls_gso
mpls_router
mpls_iptunnel
vrf
vxlan

Enable kernel-based forwarding on the Linux host.

ip fou add port 6635 ipproto 137
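
You can verify that the FOU receive port was added with ip fou show; the output should resemble the following line (the exact format can vary by iproute2 version):

ip fou show
port 6635 ipproto 137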

Exclude JCNR interfaces from NetworkManager control.

NetworkManager is a tool that some operating systems use to simplify the management of network interfaces. While NetworkManager can make operating and configuring the default interfaces easier, it can also interfere with Kubernetes management and create problems.

To prevent NetworkManager from interfering with JCNR interface configuration, exclude the JCNR interfaces from NetworkManager control. Here's an example of how to do this in some Linux distributions:

  1. Create the /etc/NetworkManager/conf.d/crpd.conf file and list the interfaces that you don't want NetworkManager to manage.

    For example:

    [keyfile]
    unmanaged-devices+=interface-name:enp*;interface-name:ens*

    where enp* and ens* refer to your JCNR interfaces.

    Note: enp* indicates all interfaces whose names start with enp. For specific interface names, provide a comma-separated list.
  2. Restart the NetworkManager service:

    sudo systemctl restart NetworkManager
  3. Edit the /etc/sysctl.conf file on the host and paste the following content in it:
    net.ipv6.conf.default.addr_gen_mode=0
    net.ipv6.conf.all.addr_gen_mode=0
    net.ipv6.conf.default.autoconf=0
    net.ipv6.conf.all.autoconf=0
  4. Run the command sysctl -p /etc/sysctl.conf to load the new sysctl.conf values on the host.
  5. Create the bond interface manually. For example:

    ifconfig ens2f0 down
    ifconfig ens2f1 down
    ip link add bond0 type bond mode 802.3ad
    ip link set ens2f0 master bond0
    ip link set ens2f1 master bond0
    ifconfig ens2f0 up ; ifconfig ens2f1 up; ifconfig bond0 up
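
To confirm that NetworkManager has released the JCNR interfaces, list the device states; excluded interfaces should appear as unmanaged. For example (the device names here are illustrative):

nmcli device status | grep -E "ens2f0|ens2f1"
ens2f0  ethernet  unmanaged  --
ens2f1  ethernet  unmanaged  --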

Verify the core_pattern value is set on the host before deploying JCNR.

sysctl kernel.core_pattern
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e

You can update the core_pattern in /etc/sysctl.conf. For example:

kernel.core_pattern=/var/crash/core_%e_%p_%i_%s_%h_%t.gz
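
If you change the value in /etc/sysctl.conf, reload the file so the new pattern takes effect without a reboot:

sysctl -p /etc/sysctl.conf
sysctl kernel.core_pattern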

Enable iommu unsafe interrupts and unsafe noiommu mode.

echo Y > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts
echo Y > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
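
The echo commands above apply only until the next reboot. To make the settings persistent, you can set the corresponding module parameters in a modprobe configuration file (the filename below is illustrative):

cat /etc/modprobe.d/vfio.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1
options vfio enable_unsafe_noiommu_mode=1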

Configure iptables to accept specified traffic.

iptables -I INPUT -p tcp --dport 830 -j ACCEPT
iptables -I INPUT -p tcp --dport 24 -j ACCEPT
iptables -I INPUT -p tcp --dport 8085 -j ACCEPT
iptables -I INPUT -p tcp --dport 8070 -j ACCEPT
iptables -I INPUT -p tcp --dport 8072 -j ACCEPT
iptables -I INPUT -p tcp --dport 50053 -j ACCEPT

iptables -A INPUT -p icmp -j ACCEPT
iptables -A OUTPUT -p icmp -j ACCEPT

iptables -A INPUT   -s 224.0.0.0/4 -j ACCEPT
iptables -A FORWARD -s 224.0.0.0/4 -d 224.0.0.0/4 -j ACCEPT
iptables -A OUTPUT  -d 224.0.0.0/4 -j ACCEPT
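
Rules added with iptables do not persist across reboots on their own. On RHEL-family hosts, one way to save them is with iptables-save (a sketch; your distribution may use a different persistence mechanism):

iptables-save > /etc/sysconfig/iptables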

On the ESXi Hypervisor, enable 16 queue pairs per VF:

esxcli system module parameters set -m icen -p NumQPsPerVF=16,16,16,16

On the ESXi Hypervisor, enable trust and disable spoofcheck:

esxcli intnet sriovnic vf set -s false -t true  -v 0  -n vmnic2

Check the settings:

esxcli intnet sriovnic vf get -n vmnic2
VF ID           Trusted         Spoof Check
0               true           false

Port Requirements

Juniper Cloud-Native Router listens on certain TCP and UDP ports. This section lists the port requirements for the cloud-native router.

Table 4: Cloud-Native Router Listening Ports
Protocol Port Description
TCP 8085 vRouter introspect: used to gain internal statistical information about the vRouter
TCP 8070 Telemetry information: used to see telemetry data from the JCNR vRouter
TCP 8072 Telemetry information: used to see telemetry data from the JCNR control plane
TCP 8075, 8076 Telemetry information: used for gNMI requests
TCP 9091 vRouter health check: the cloud-native router checks that the vRouter agent is running
TCP 9092 vRouter health check: the cloud-native router checks that the vRouter DPDK process is running
TCP 50052 gRPC port: JCNR listens on both IPv4 and IPv6
TCP 8081 JCNR deployer port
TCP 24 cRPD SSH
TCP 830 cRPD NETCONF
TCP 666 rpd
TCP 1883 Mosquitto MQTT: publish/subscribe messaging utility
TCP 9500 agentd on cRPD
TCP 21883 na-mqttd
TCP 50053 Default gNMI port that listens for client subscription requests
TCP 51051 jsd on cRPD
UDP 50055 Syslog-NG
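
To confirm that the cloud-native router is listening on a given port after deployment, you can inspect the socket table on the host. For example, for the vRouter introspect port:

ss -tlnp | grep 8085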

Download Options

See JCNR Software Download Packages.

JCNR Licensing

See Manage JCNR Licenses.