System Requirements for OpenShift Deployment
Read this section to understand the system, resource, port, and licensing requirements for installing Juniper Cloud-Native Router on the Red Hat OpenShift Container Platform (OCP).
Minimum Host System Requirements for OCP
Table 1 lists the host system requirements for installing Cloud-Native Router on OCP.
Component | Value/Version | Notes
---|---|---
CPU | Intel x86 | The tested CPU is an Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz, 64 cores.
Host OS | RHCOS 4.13 | 
Kernel Version | Red Hat Enterprise Linux (RHEL): 4.18.x | The tested kernel version for RHEL is 4.18.0-372.40.1.el8_6.x86_64.
NIC | | Support for Mellanox NICs is considered a Juniper Technology Preview (Tech Preview) feature. When using Mellanox NICs, ensure that your interface names do not exceed 11 characters, and follow the procedure in Interface Naming for Mellanox NICs.
IAVF driver | Version 4.5.3.1 | 
ICE_COMMS | Version 1.3.35.0 | 
ICE | Version 1.9.11.9 | The ICE driver is used only with the Intel E810 NIC.
i40e | Version 2.18.9 | The i40e driver is used only with the Intel XL710 NIC.
OCP Version | 4.13 | 
OVN-Kubernetes CNI | | 
Multus | Version 3.8 | 
Helm | 3.12.x | 
Container-RT | crio 1.25.x | Other container runtimes may work but have not been tested with JCNR.

Note: The component versions listed in this table are expected to work with JCNR, but not every version or combination is tested in every release.
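To confirm that a worker node lines up with these versions, you can inspect it directly. These commands are illustrative and not part of the official procedure; ens1f0 is a placeholder for one of your fabric interface names.

```shell
# Check host component versions on a worker node.
uname -r                 # kernel version
ethtool -i ens1f0        # NIC driver name and version (ens1f0 is a placeholder)
crio --version           # container runtime version
helm version --short     # Helm client version
```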
Resource Requirements for OCP
Table 2 lists the resource requirements for installing Cloud-Native Router on OCP.
Resource | Value | Usage Notes
---|---|---
Data plane forwarding cores | 1 core (1P + 1S) | 
Service/Control Cores | 0 | 
UIO Driver | VFIO-PCI | To enable, create a Butane config file and apply the resulting machine config (see below).
Hugepages (1G) | 6 Gi | Configure huge pages on the worker nodes using a Tuned profile and a MachineConfigPool (see below).
Cloud-Native Router Controller cores | 0.5 | 
Cloud-Native Router vRouter Agent cores | 0.5 | 

To enable the VFIO-PCI driver, create a Butane config file (100-worker-vfiopci.bu):

```yaml
variant: openshift
version: 4.8.0
metadata:
  name: 100-worker-vfiopci
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/modprobe.d/vfio.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          options vfio-pci ids=10de:1eb8
    - path: /etc/modules-load.d/vfio-pci.conf
      mode: 0644
      overwrite: true
      contents:
        inline: vfio-pci
```

Create and apply the machine config:

```shell
$ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml
$ oc apply -f 100-worker-vfiopci.yaml
```

To configure huge pages on the worker nodes, create a Tuned profile and a MachineConfigPool:

```shell
oc create -f hugepages-tuned-boottime.yaml
```

```yaml
# cat hugepages-tuned-boottime.yaml
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: hugepages
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Boot time configuration for hugepages
      include=openshift-node
      [bootloader]
      cmdline_openshift_node_hugepages=hugepagesz=1G hugepages=6
    name: openshift-node-hugepages
  recommend:
  - machineConfigLabels:
      machineconfiguration.openshift.io/role: "worker-hp"
    priority: 30
    profile: openshift-node-hugepages
```

```shell
oc create -f hugepages-mcp.yaml
```

```yaml
# cat hugepages-mcp.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-hp
  labels:
    worker-hp: ""
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-hp: ""
```
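Once the machine configs have rolled out, you can spot-check the result on a worker node. These commands are illustrative and not part of the official procedure; the node name is a placeholder.

```shell
# Confirm vfio-pci is loaded and 1G huge pages are reserved
# (worker-0 is a placeholder node name).
oc debug node/worker-0 -- chroot /host sh -c \
  'lsmod | grep vfio; grep -i hugepages /proc/meminfo'
```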
Miscellaneous Requirements for OCP
Table 3 lists additional requirements for installing Cloud-Native Router on OCP.
Enable the host with SR-IOV and VT-d in the system's BIOS. The exact procedure depends on your BIOS.

Enable the VLAN driver at system boot. Configure /etc/modules-load.d/vlan.conf as follows:

```
cat /etc/modules-load.d/vlan.conf
8021q
```

Reboot, then verify by executing the command:

```
lsmod | grep 8021q
```

Enable the VFIO-PCI driver at system boot. Configure /etc/modules-load.d/vfio.conf as follows:

```
cat /etc/modules-load.d/vfio.conf
vfio
vfio-pci
```

Reboot, then verify by executing the command:

```
lsmod | grep vfio
```
Set IOMMU and IOMMU-PT. Create a MachineConfig object that sets the intel_iommu=on and iommu=pt kernel arguments:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 100-worker-iommu
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
    - intel_iommu=on
    - iommu=pt
```

```shell
$ oc create -f 100-worker-kernel-arg-iommu.yaml
```
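After the affected nodes reboot with the new kernel arguments, you can confirm that they took effect. This check is illustrative and not part of the official procedure; the node name is a placeholder.

```shell
# Verify the IOMMU kernel argument on a worker node (worker-0 is a placeholder).
oc debug node/worker-0 -- chroot /host grep -o 'intel_iommu=on' /proc/cmdline
```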
Disable spoofcheck on the VFs allocated to JCNR (applicable for L2 deployments only):

```
ip link set <interfacename> vf 1 spoofchk off
```

Set trust on the VFs allocated to JCNR (applicable for L2 deployments only):

```
ip link set <interfacename> vf 1 trust on
```
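You can confirm both VF settings from the physical function with ip link. This is an illustrative check; the interface name and VF index are placeholders.

```shell
# The "vf 1" line should report spoof checking off and trust on
# (<interfacename> is a placeholder for the PF name).
ip link show <interfacename> | grep 'vf 1'
```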
Additional kernel modules need to be loaded on the host before deploying Cloud-Native Router in L3 mode (applicable for L3 deployments only). These modules are typically available in standard Linux distributions. Create a conf file and add the kernel modules:

```
cat /etc/modules-load.d/crpd.conf
tun
fou
fou6
ipip
ip_tunnel
ip6_tunnel
mpls_gso
mpls_router
mpls_iptunnel
vrf
vxlan
```
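A modules-load.d file takes effect at the next boot. To load the same modules immediately without rebooting, you can run modprobe over the list; this is a sketch, not part of the official procedure.

```shell
# Load the L3-mode kernel modules now and confirm they are present (run as root).
for mod in tun fou fou6 ipip ip_tunnel ip6_tunnel mpls_gso mpls_router mpls_iptunnel vrf vxlan; do
  modprobe "$mod"
done
lsmod | grep -E 'mpls_router|fou|vxlan|vrf'
```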
Enable kernel-based forwarding on the Linux host:

```
ip fou add port 6635 ipproto 137
```
Exclude Cloud-Native Router interfaces from NetworkManager control. NetworkManager is a tool in some operating systems that simplifies the management of network interfaces. While NetworkManager can make operating and configuring the default interfaces easier, it can also interfere with Kubernetes management and create problems. To prevent this, exclude the Cloud-Native Router interfaces from NetworkManager control.
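On distributions that use NetworkManager, one common way to do this is a keyfile drop-in that marks the interfaces as unmanaged. This is a sketch, not Juniper's documented procedure; the drop-in file name and interface names are placeholders.

```shell
# Mark the JCNR fabric interfaces as unmanaged by NetworkManager
# (ens1f0/ens1f1 are placeholder interface names).
cat <<'EOF' > /etc/NetworkManager/conf.d/99-jcnr-unmanaged.conf
[keyfile]
unmanaged-devices=interface-name:ens1f0;interface-name:ens1f1
EOF
systemctl reload NetworkManager
```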
Verify the core_pattern value is set on the host before deploying JCNR:

```
sysctl kernel.core_pattern
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
```

You can update the core_pattern in /etc/sysctl.conf, for example:

```
kernel.core_pattern=/var/crash/core_%e_%p_%i_%s_%h_%t.gz
```
Port Requirements
Juniper Cloud-Native Router listens on certain TCP and UDP ports. This section lists the port requirements for the cloud-native router.
Protocol | Port | Description
---|---|---
TCP | 8085 | vRouter introspect. Used to gain internal statistical information about vRouter.
TCP | 8070 | Telemetry information. Used to see telemetry data from the Cloud-Native Router vRouter.
TCP | 8072 | Telemetry information. Used to see telemetry data from the Cloud-Native Router control plane.
TCP | 8075, 8076 | Telemetry information. Used for gNMI requests.
TCP | 9091 | vRouter health check. The cloud-native router checks to ensure the vRouter agent is running.
TCP | 9092 | vRouter health check. The cloud-native router checks to ensure vRouter DPDK is running.
TCP | 50052 | gRPC port. The Cloud-Native Router listens on both IPv4 and IPv6.
TCP | 8081 | Cloud-Native Router deployer port.
TCP | 24 | cRPD SSH.
TCP | 830 | cRPD NETCONF.
TCP | 666 | rpd.
TCP | 1883 | Mosquitto MQTT. Publish/subscribe messaging utility.
TCP | 9500 | agentd on cRPD.
TCP | 21883 | na-mqttd.
TCP | 50053 | Default gNMI port that listens to the client subscription request.
TCP | 51051 | jsd on cRPD.
UDP | 50055 | Syslog-NG.
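Before installing, you might want to confirm that none of these ports are already claimed on the host. The bash sketch below probes only loopback, so it is an approximation rather than a definitive check.

```shell
# Probe a subset of the required TCP ports on loopback; a successful
# connect means something is already listening on that port.
jcnr_ports="8085 8070 8072 8075 8076 9091 9092 50052 8081 50053 51051"
result=$(
  for port in $jcnr_ports; do
    if (echo > "/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
      echo "port ${port}: in use"
    else
      echo "port ${port}: free"
    fi
  done
)
echo "$result"
```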
Interface Naming for Mellanox NICs
When deploying Mellanox NICs in an OpenShift cluster, a conflict can arise between how OCP and Cloud-Native Router use interface names on those NICs. This might prevent your cluster from coming up.
Prior to installing JCNR, either disable predictable interface naming (Option 1: Disable predictable interface naming) or rename the Cloud-Native Router interfaces (Option 2: Rename the Cloud-Native Router interfaces). The Cloud-Native Router interfaces are the interfaces that you want Cloud-Native Router to control.
Option 1: Disable predictable interface naming
Before you start, ensure you have console access to the node.

1. Edit /etc/default/grub and append net.ifnames=0 to GRUB_CMDLINE_LINUX_DEFAULT:

   ```
   GRUB_CMDLINE_LINUX_DEFAULT="<existing_parameter_settings> net.ifnames=0"
   ```

2. Update grub:

   ```
   grub2-mkconfig -o /boot/grub2/grub.cfg
   ```

3. Reboot the node.

4. Log back into the node. You might have to do this through the console if the network interfaces don't come back up.

5. List the interfaces and take note of the names of the non-Cloud-Native Router and Cloud-Native Router interfaces:

   ```
   ip address
   ```

6. For all the non-Cloud-Native Router interfaces, update NetworkManager (or your network renderer) with the new interface names and restart NetworkManager.

7. Repeat on all the nodes where you're installing the Cloud-Native Router vRouter.

Remember to update the fabric interfaces in your Cloud-Native Router installation helm chart with the new names of the Cloud-Native Router interfaces (or use subnets).
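After the node reboots, you can confirm that the kernel picked up the parameter. This check is illustrative and not part of the official procedure.

```shell
# Prints net.ifnames=0 if the grub change took effect on this node.
grep -o 'net.ifnames=0' /proc/cmdline
```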
Option 2: Rename the Cloud-Native Router interfaces
1. Create a /etc/udev/rules.d/00-persistent-net.rules file to contain the rules.

2. Add the following line to the file:

   ```
   SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<mac_address>", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME="<new_ifname>"
   ```

   where <mac_address> is the MAC address of the interface you're renaming and <new_ifname> is the new name you want to assign to the interface (for example, jcnr-eth1).

3. Add a corresponding line for each interface you're renaming. (You're renaming all the interfaces that Cloud-Native Router controls.)

4. Reboot the node.

5. Repeat on all the nodes where you're installing the Cloud-Native Router vRouter.

Remember to update the fabric interfaces in your Cloud-Native Router installation helm chart with the new names of the Cloud-Native Router interfaces (or use subnets).
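If you have several interfaces to rename, generating the rule lines can reduce typos. The bash sketch below is illustrative; the function name and the MAC/name pairs are placeholders, not part of the official procedure.

```shell
# Print a persistent-net udev rule for a given MAC address and target name.
udev_rule() {
  printf 'SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="%s", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME="%s"\n' "$1" "$2"
}

# Example: generate rules for two hypothetical JCNR interfaces, then
# redirect the output to /etc/udev/rules.d/00-persistent-net.rules on the node.
udev_rule "aa:bb:cc:dd:ee:01" "jcnr-eth0"
udev_rule "aa:bb:cc:dd:ee:02" "jcnr-eth1"
```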