System Requirements for Wind River Deployment
Minimum Host System Requirements on a Wind River Deployment
Table 1 lists the host system requirements for installing Cloud-Native Router on a Wind River deployment.
| Component | Value/Version | Notes |
|---|---|---|
| CPU | Intel x86 | The tested CPU is Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz |
| Host OS | Debian GNU/Linux (depends on Wind River Cloud Platform version) | |
| Kernel Version | 5.10.x | The tested kernel version is 5.10.0-6-amd64 |
| NIC | Intel E810, Intel XL710; Mellanox NICs (Tech Preview) | Support for Mellanox NICs is considered a Juniper Technology Preview (Tech Preview) feature. When using Mellanox NICs, ensure your interface names do not exceed 11 characters in length. |
| Wind River Cloud Platform | 22.12, 24.09 | |
| IAVF driver | Version 4.5.3.1 | |
| ICE_COMMS | Version 1.3.35.0 | |
| ICE | Version 1.9.11.9 | ICE driver is used only with the Intel E810 NIC |
| i40e | Version 2.18.9 | i40e driver is used only with the Intel XL710 NIC |
| Kubernetes (K8s) | Version 1.24 | The tested K8s version is 1.24.4 |
| Calico | Version 3.24.x | |
| Multus | Version 3.8 | |
| Helm | 3.9.x | |
| Container-RT | containerd | Other container runtimes may work but have not been tested with JCNR. |

Note: The component versions listed in this table are expected to work with JCNR, but not every version or combination is tested in every release.
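You can spot-check several of these versions directly on the host and cluster. This is a minimal sketch; ens1f0 is a placeholder interface name, and the commands assume kubectl and helm are already configured on the host:

```shell
# Kernel version (expect 5.10.x)
uname -r

# NIC driver and firmware versions (replace ens1f0 with your data interface)
ethtool -i ens1f0

# Kubernetes and Helm versions
kubectl version --short
helm version --short

# Calico and Multus pods running in the cluster
kubectl -n kube-system get pods -o wide | grep -E 'calico|multus'
```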
Resource Requirements on a Wind River Deployment
Table 2 lists the resource requirements for installing Cloud-Native Router on a Wind River deployment.
| Resource | Value | Usage Notes |
|---|---|---|
| Data plane forwarding cores | 1 core (1P + 1S) | |
| Service/Control Cores | 0 | |
| Hugepages (1G) | 6 Gi | Lock the controller and list its memory: `source /etc/platform/openrc`, `system host-lock controller-0`, `system host-memory-list controller-0`. To set the huge pages, run the following for each processor (and repeat for each controller in your system): `system host-memory-modify controller-0 0 -1G 6` and `system host-memory-modify controller-0 1 -1G 6`. View the huge pages with `system host-memory-list controller-0`, then unlock the controller with `system host-unlock controller-0`. |
| Cloud-Native Router Controller cores | 0.5 | |
| Cloud-Native Router vRouter Agent cores | 0.5 | |
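After the controller unlocks, you can confirm the 1G hugepages are visible to Linux. This is a generic kernel check, not a Wind River-specific command:

```shell
# Overall hugepage accounting
grep -i huge /proc/meminfo

# Number of allocated 1G hugepages
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
```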
Miscellaneous Requirements on a Wind River Deployment
The following are the additional requirements for installing Cloud-Native Router on a Wind River deployment, along with example commands.

Enable SR-IOV and VT-d in the host system's BIOS. The exact steps depend on the BIOS.
Isolate CPUs from the kernel scheduler. For example:

```shell
source /etc/platform/openrc
system host-lock controller-0
system host-cpu-list controller-0
system host-cpu-modify -f application-isolated -c 4-59 controller-0
system host-unlock controller-0
```
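After the host unlocks, a hedged Linux-side check is possible, assuming the platform applies the isolation through the kernel's isolcpus boot parameter (the range follows the 4-59 example above):

```shell
# CPUs isolated from the general kernel scheduler
cat /sys/devices/system/cpu/isolated
```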
Additional kernel modules need to be loaded on the host before deploying Cloud-Native Router in L3 mode (applicable for L3 deployments only). Create a conf file and add the kernel modules:

```shell
cat /etc/modules-load.d/crpd.conf
tun
fou
fou6
ipip
ip_tunnel
ip6_tunnel
mpls_gso
mpls_router
mpls_iptunnel
vrf
vxlan
```
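The modules listed in the conf file are loaded at boot. To load them immediately and confirm they are present, you can run something like the following sketch; adjust the module list to match your conf file:

```shell
# Load each module now rather than waiting for a reboot
for m in tun fou fou6 ipip ip_tunnel ip6_tunnel mpls_gso mpls_router mpls_iptunnel vrf vxlan; do
  sudo modprobe "$m"
done

# Verify the MPLS and tunnel modules are loaded
lsmod | grep -E 'mpls|vxlan|fou|vrf'
```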
Enable kernel-based forwarding on the Linux host:

```shell
ip fou add port 6635 ipproto 137
```
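To confirm the foo-over-UDP (FoU) receive port was added:

```shell
ip fou show
```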
Verify the core_pattern value is set on the host before deploying JCNR:

```shell
sysctl kernel.core_pattern
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
```

You can update the core_pattern if needed, for example:

```shell
kernel.core_pattern=/var/crash/core_%e_%p_%i_%s_%h_%t.gz
```
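To make such a core_pattern change persistent across reboots, one option is a sysctl drop-in file; the file name below is only an example:

```shell
# Write the setting to a sysctl drop-in and apply it (file name is an example)
echo 'kernel.core_pattern=/var/crash/core_%e_%p_%i_%s_%h_%t.gz' | sudo tee /etc/sysctl.d/90-jcnr-core.conf
sudo sysctl -p /etc/sysctl.d/90-jcnr-core.conf
```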
Set adequate send and receive buffer sizes:

```shell
sysctl -w net.core.rmem_default=67108864
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_default=67108864
sysctl -w net.core.wmem_max=67108864
```
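A quick check that the new buffer sizes took effect:

```shell
sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max
```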
Requirements for Pre-Bound SR-IOV Interfaces on a Wind River Deployment
In a Wind River deployment, you typically bind all your Cloud-Native Router interfaces to the vfio DPDK driver before you deploy JCNR. The following shows an example of how you can do this on an SR-IOV-enabled interface on a host.
We support pre-binding interfaces for Cloud-Native Router L2 and L3 mode deployments.
Pre-bind the Cloud-Native Router interfaces to the vfio DPDK driver. For example:

```shell
source /etc/platform/openrc
system host-lock controller-0
system host-label-assign controller-0 sriovdp=enabled # <-- Label node to accept SR-IOV-enabled
# deployments.
system host-label-assign controller-0 kube-cpu-mgr-policy=static
system host-label-assign controller-0 kube-topology-mgr-policy=restricted # <-- see note below
system datanetwork-add datanet0 flat # <-- Create datanet0 network. You'll define this in a NAD
# later.
DTNIF=enp175s0f0
system host-if-modify -m 1500 -n $DTNIF -c pci-sriov -N 8 controller-0 $DTNIF --vf-driver=netdevice
# ^ Enable 8 (for example) VFs on enp175s0f0.
system host-if-add -c pci-sriov controller-0 srif0 vf $DTNIF -N 1 --vf-driver=vfio
# ^ Create srif0 interface that uses one of the VFs
# and bind to vfio driver.
IFUUID=$(system host-if-list 1 | awk '{if ($4 == "srif0") {print $2}}')
system interface-datanetwork-assign 1 $IFUUID datanet0 # <-- Attach srif0 interface to datanet0 network.
system host-unlock 1
```

Note: On hosts with a single NUMA node or where all NICs are attached to the same NUMA node, set kube-topology-mgr-policy=restricted. On hosts with multiple NUMA nodes where the NICs are spread across NUMA nodes, set kube-topology-mgr-policy=best-effort.
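After the host unlocks, a couple of optional checks can confirm that the VFs exist and that the SR-IOV device plugin advertises the datanet0 resource. This is a sketch; enp175s0f0 and controller-0 follow the example above:

```shell
# List the VFs created under the example physical function
ls -l /sys/class/net/enp175s0f0/device/virtfn*

# Confirm the node advertises an SR-IOV resource backed by datanet0
kubectl get node controller-0 -o jsonpath='{.status.allocatable}' | tr ',' '\n' | grep -i sriov
```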
Create and apply the Network Attachment Definition (NAD) that attaches the datanet0 network defined above. Create a yaml file for the Network Attachment Definition. For example:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: srif0net0
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_datanet0
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "sriov",
    "spoofchk": "off",
    "trust": "on"
  }'
```

Apply the yaml to attach the datanet0 network, where srif0net0.yaml is the file that contains the Network Attachment Definition above:

```shell
kubectl apply -f srif0net0.yaml
```
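To confirm the NAD was created, you can query the short name for the NetworkAttachmentDefinition CRD:

```shell
kubectl get net-attach-def srif0net0 -n default
```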
Update the Helm chart values.yaml to use the defined networks. Here's an example of using two networks, datanet0/srif0net0 and datanet1/srif1net1:

```yaml
jcnr-vrouter:
  guaranteedVrouterCpus: 4
  interfaceBoundType: 1
  networkDetails:
    - ddp: "off"
      name: srif0net0
      namespace: default
    - ddp: "off"
      name: srif1net1
      namespace: default
  networkResources:
    limits:
      intel.com/pci_sriov_net_datanet0: "1"
      intel.com/pci_sriov_net_datanet1: "1"
    requests:
      intel.com/pci_sriov_net_datanet0: "1"
      intel.com/pci_sriov_net_datanet1: "1"
```

Here's an example of using a bond interface attached to two networks (datanet0/srif0net0 and datanet1/srif1net1) and a regular interface attached to a third network (datanet2/srif2net2):

```yaml
jcnr-vrouter:
  guaranteedVrouterCpus: 4
  interfaceBoundType: 1
  bondInterfaceConfigs:
    - mode: 1
      name: bond0
      slaveNetworkDetails:
        - name: srif0net0
          namespace: default
        - name: srif1net1
          namespace: default
  networkDetails:
    - ddp: "off"
      name: bond0
    - ddp: "off"
      name: srif2net2
      namespace: default
  networkResources:
    limits:
      intel.com/pci_sriov_net_datanet0: "1"
      intel.com/pci_sriov_net_datanet1: "1"
      intel.com/pci_sriov_net_datanet2: "1"
    requests:
      intel.com/pci_sriov_net_datanet0: "1"
      intel.com/pci_sriov_net_datanet1: "1"
      intel.com/pci_sriov_net_datanet2: "1"
```
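Once values.yaml is updated, a dry-run render is a quick way to catch YAML or value errors before installing; the release name and chart path below are placeholders:

```shell
# Render the chart locally without installing (release name and chart path are placeholders)
helm template jcnr ./jcnr -f values.yaml > /dev/null && echo "values.yaml renders cleanly"
```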
Requirements for Non-Pre-Bound SR-IOV Interfaces on a Wind River Deployment
In some situations, you might want to run with non-pre-bound interfaces. The following are the requirements for non-pre-bound interfaces.

Configure IPv4 and IPv6 addresses for the non-pre-bound interfaces allocated to JCNR. For example:

```shell
source /etc/platform/openrc
system host-lock controller-0
system host-if-modify -n ens1f0 -c platform --ipv4-mode static controller-0 ens1f0
system host-addr-add 1 ens1f0 11.11.11.29 24
system host-if-modify -n ens1f0 -c platform --ipv6-mode static controller-0 ens1f0
system host-addr-add 1 ens1f0 abcd::11.11.11.29 112
system host-if-list controller-0
system host-addr-list controller-0
system host-unlock controller-0
```
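After the host unlocks, you can confirm the addresses from the Linux side (ens1f0 follows the example above):

```shell
ip addr show ens1f0
```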
Port Requirements
Juniper Cloud-Native Router listens on certain TCP and UDP ports. This section lists the port requirements for the cloud-native router.
| Protocol | Port | Description |
|---|---|---|
| TCP | 8085 | vRouter introspect: used to gain internal statistical information about the vRouter |
| TCP | 8070 | Telemetry information: used to see telemetry data from the Cloud-Native Router vRouter |
| TCP | 8072 | Telemetry information: used to see telemetry data from the Cloud-Native Router control plane |
| TCP | 8075, 8076 | Telemetry information: used for gNMI requests |
| TCP | 9091 | vRouter health check: the cloud-native router checks that the vRouter agent is running |
| TCP | 9092 | vRouter health check: the cloud-native router checks that the vRouter DPDK process is running |
| TCP | 50052 | gRPC port: Cloud-Native Router listens on both IPv4 and IPv6 |
| TCP | 8081 | Cloud-Native Router deployer port |
| TCP | 24 | cRPD SSH |
| TCP | 830 | cRPD NETCONF |
| TCP | 666 | rpd |
| TCP | 1883 | Mosquitto MQTT: publish/subscribe messaging utility |
| TCP | 9500 | agentd on cRPD |
| TCP | 21883 | na-mqttd |
| TCP | 50053 | Default gNMI port that listens to the client subscription request |
| TCP | 51051 | jsd on cRPD |
| UDP | 50055 | Syslog-NG |
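Before deploying, you can check that none of these ports are already in use on the host; the port list below simply mirrors the table above:

```shell
# Show any listeners already bound to the JCNR ports listed above
ss -tulnp | grep -E ':(8085|8070|8072|8075|8076|9091|9092|50052|8081|24|830|666|1883|9500|21883|50053|51051|50055)\b'
```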