JCNR Interfaces Overview

SUMMARY This topic describes the network communication interfaces provided by the JCNR-Controller. Fabric interfaces are aggregated interfaces that receive traffic from multiple interfaces; interfaces to which different workloads are connected are called workload interfaces.

Read this topic to understand the network communication interfaces provided by the JCNR-Controller. We cover interface names, what they connect to, how they communicate, and the services they provide.

Juniper Cloud-Native Router Interface Types

Juniper Cloud-Native Router supports two types of interfaces:

  • Fabric interfaces—Aggregated interfaces that receive traffic from multiple interfaces. Fabric interfaces are always physical interfaces; they can be either a physical function (PF) or a virtual function (VF). Because these interfaces have a higher throughput requirement, multiple hardware queues are allocated to them, and each hardware queue is allocated a dedicated CPU core. You configure these interfaces for the cloud-native router using the appropriate values.yaml file in the deployer helmcharts (see the sketch following this list). You can view the interface mapping using the dpdkinfo -c command; see the Troubleshoot via the vRouter CLI topic in the Deployment Guide for more details. There are also fabric workload interfaces, which have a low throughput requirement. Only one hardware queue is allocated to each of these interfaces, saving CPU resources. Fabric workload interfaces are likewise configured using the appropriate values.yaml file in the deployer helmcharts.

  • Workload interfaces—Interfaces to which different workloads are connected. They can be either software-based or hardware-based interfaces. Software-based interfaces are either high-performance interfaces that use the Data Plane Development Kit (DPDK) poll mode driver (PMD) or low-performance interfaces that use the kernel driver. Typically, the DPDK interfaces carry data traffic, such as GPRS Tunneling Protocol for user data (GTP-U) traffic, while the kernel-based interfaces carry control plane traffic, such as TCP. The kernel pod interfaces are typically used for operations, administration, and maintenance (OAM) traffic. These interfaces are configured as a veth pair, with one end of the interface in the pod and the other end in the Linux kernel on the host. JCNR also supports bonded interfaces via the link bonding PMD. Workload interfaces can be configured using the appropriate values.yaml file in the deployer helmcharts (see the pod attachment sketch following this list).

    JCNR supports different types of VLAN interfaces, including trunk, access, and sub-interfaces, across fabric and workload interfaces.
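    As a reference, the following is a minimal sketch of how fabric and fabric workload interfaces might be declared in the deployer's values.yaml. The interface names are placeholders, and the exact keys can vary between releases, so verify them against the values.yaml shipped with your deployer:

        # Hypothetical values.yaml fragment; eth1 and eth2 are
        # placeholder device names on the host.
        fabricInterface:
          - eth1                 # high throughput: multiple hardware
                                 # queues, one dedicated CPU core each
        fabricWorkloadInterface:
          - eth2                 # low throughput: a single hardware queue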
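    Workload pod interfaces are typically attached through a Kubernetes NetworkAttachmentDefinition. The following is a minimal sketch only: the instance name vswitch, bridge domain bd100, and all other values are hypothetical, and the exact argument names should be verified against the examples shipped with your JCNR release:

        apiVersion: "k8s.cni.cncf.io/v1"
        kind: NetworkAttachmentDefinition
        metadata:
          name: vswitch-pod1-bd100       # hypothetical name
        spec:
          config: '{
            "cniVersion": "0.4.0",
            "name": "vswitch-pod1-bd100",
            "plugins": [{
              "type": "jcnr",
              "args": {
                "instanceName": "vswitch",
                "instanceType": "virtual-switch",
                "bridgeDomain": "bd100",
                "bridgeVlanId": "100"
              },
              "kubeConfig": "/etc/kubernetes/kubelet.conf"
            }]
          }'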

JCNR Interface Details

The different JCNR interfaces are described in detail below:

  • Agent interface

    vRouter has only one agent interface. The agent interface enables communication between the vRouter-agent and the vRouter. On the vRouter CLI when you issue the vif --list command, the agent interface looks like this:
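    The output below is an illustrative sketch rather than a capture from a live system; the MAC address and counters are placeholders. The agent interface is always vif0/0, with Type:Agent and a Unix socket transport:

        vif0/0      Socket: unix
                    Type:Agent HWaddr:00:00:5e:00:01:00
                    Vrf:65535 Flags:L3 QOS:-1 Ref:3
                    RX packets:0  bytes:0 errors:0
                    TX packets:0  bytes:0 errors:0
                    Drops:0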

  • DPDK VF workload interfaces

    These interfaces connect to the radio units (RUs) or millimeter-wave distributed units (mmWave-DUs). On the vRouter CLI when you issue the vif --list command, the DPDK VF workload interface looks like this:
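    The output below is illustrative only; the PCI address, MAC address, and counters are placeholders, and the exact flags vary with the configured mode. A DPDK VF workload interface shows up with a PCI address and Type:Physical:

        vif0/3      PCI: 0000:5e:02.1 (Speed 25000, Duplex 1)
                    Type:Physical HWaddr:8a:91:5d:92:a5:f1
                    Vrf:0 Flags:L2 QOS:-1 Ref:7
                    RX packets:0  bytes:0 errors:0
                    TX packets:0  bytes:0 errors:0
                    Drops:0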

  • DPDK VF fabric interfaces (Physical Trunk)

    DPDK VF fabric interfaces, which are associated with the physical network interface card (NIC) on the host server, accept traffic from multiple VLANs.

    The cRPD interface configuration using the show configuration command looks like this (the output is trimmed for brevity):
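    A representative Junos-style snippet, assuming a fabric interface named eth2 carrying VLANs 100 and 200 (the interface name and VLAN IDs are placeholders):

        interfaces {
            eth2 {
                unit 0 {
                    family bridge {
                        interface-mode trunk;
                        vlan-id-list [ 100 200 ];
                    }
                }
            }
        }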

    On the vRouter CLI when you issue the vif --list command, the DPDK VF fabric interface looks like this:
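    The output below is illustrative only (addresses, counters, and the Vrf value are placeholders). A fabric trunk interface appears as Type:Physical with its PCI address on the first line:

        vif0/1      PCI: 0000:5e:02.0 (Speed 25000, Duplex 1)
                    Type:Physical HWaddr:ba:1d:0a:5e:00:01
                    Vrf:0 Flags:L2 QOS:-1 Ref:9
                    RX packets:0  bytes:0 errors:0
                    TX packets:0  bytes:0 errors:0
                    Drops:0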

  • Active or standby bond interfaces (Bond Trunk)

    Bond interfaces accept traffic from multiple VLANs. A bond interface runs in the active or standby mode (mode 1). You define the bond interface in the helm chart configuration as follows:
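    A minimal sketch of such a definition, assuming two placeholder VF slave interfaces; the key names follow the general shape of the JCNR values.yaml and should be verified against your release:

        bondInterfaceConfigs:
          - name: "bond0"
            mode: 1                    # 1 = active-backup
            slaveInterfaces:
              - "enp59s0f0v0"          # placeholder VF names
              - "enp59s0f1v0"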

    The cRPD interface configuration using the show configuration command looks like this (the output is trimmed for brevity):
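    A representative Junos-style snippet, assuming bond0 carries VLANs 100 and 200 (placeholder values):

        interfaces {
            bond0 {
                unit 0 {
                    family bridge {
                        interface-mode trunk;
                        vlan-id-list [ 100 200 ];
                    }
                }
            }
        }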

    On the vRouter CLI when you issue the vif --list command, the bond interface looks like this:
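    The output below is illustrative only; the bond appears to the vRouter as a single fabric interface, and first-line details such as the PCI or PMD label vary by release:

        vif0/1      PCI: 0000:00:00.0 (Speed 25000, Duplex 1)
                    Type:Physical HWaddr:ba:1d:0a:5e:00:02
                    Vrf:0 Flags:L2 QOS:-1 Ref:9
                    RX packets:0  bytes:0 errors:0
                    TX packets:0  bytes:0 errors:0
                    Drops:0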

  • Pod interfaces using DPDK data plane (Virtio Trunk)

    The trunk interfaces accept only tagged packets. Any untagged packets are dropped. These interfaces can accept a VLAN filter to allow only specific VLAN packets. A trunk interface can be a part of multiple bridge-domains (BD). A bridge domain is a set of logical ports that share the same flooding or broadcast characteristics. Like a VLAN, a bridge domain spans one or more ports of multiple devices. Virtio interfaces are associated with pod interfaces that use virtio on the DPDK data plane.

    The cRPD interface configuration using the show configuration command looks like this (the output is trimmed for brevity):

    On the vRouter CLI when you issue the vif --list command, the virtio with DPDK data plane interface looks like this:
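    The output below is illustrative only; the device label on the first line and all addresses are placeholders. Virtio pod interfaces appear as Type:Virtual:

        vif0/4      PMD: vhostnet1-9a2b3c4d
                    Type:Virtual HWaddr:02:88:ee:a9:2c:2e
                    Vrf:2 Flags:L2 QOS:-1 Ref:12
                    RX packets:0  bytes:0 errors:0
                    TX packets:0  bytes:0 errors:0
                    Drops:0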

  • Pod interfaces using kernel data plane (Veth Access)

    The access interfaces accept both tagged and untagged packets. Untagged packets are tagged with the access VLAN or access bridge domain (BD). Any tagged packets other than those with the access VLAN are dropped. An access interface is part of a single bridge domain and does not have a parent interface.

    The cRPD interface configuration using the show configuration command looks like this (the output is trimmed for brevity):

    On the vRouter CLI when you issue the vif --list command, the veth pair interface looks like this:
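    The output below is illustrative only; the device label on the first line, the MAC address, and the counters are placeholders. The vRouter end of the veth pair appears as Type:Virtual:

        vif0/5      Ethernet: jvknet1-3f2a7b
                    Type:Virtual HWaddr:02:c5:8a:16:3f:01
                    Vrf:3 Flags:L2 QOS:-1 Ref:6
                    RX packets:0  bytes:0 errors:0
                    TX packets:0  bytes:0 errors:0
                    Drops:0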

  • L2 VLAN sub-interfaces

    You can configure a user pod with a Layer 2 VLAN sub-interface and attach it to the JCNR instance. VLAN sub-interfaces are like logical interfaces on a physical switch or router. They accept only tagged packets that match the configured VLAN tag. A sub-interface has a parent interface, and a parent interface can have multiple sub-interfaces, each with its own VLAN ID. When you run the cloud-native router, you must associate each sub-interface with a specific VLAN.

    The cRPD interface configuration viewed using the show configuration command is shown below (the output is trimmed for brevity).

    For L2:

    On the vRouter, a VLAN sub-interface configuration is as shown below:

    Note:

    To see the VLAN sub-interfaces on the vRouter, connect to the vRouter agent by executing the kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash command, and then run the vif --get command.
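    For example (the vif index shown is an assumption; substitute the index reported by vif --list):

        kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash
        vif --get <vif-index>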

  • L3 Physical Interface

    Corresponding interface state in the cRPD:
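    An illustrative sketch of that state, not captured from a live system, assuming a hypothetical fabric interface eth2 with address 10.1.1.2:

        user@jcnr> show interfaces routing
        Interface        State Addresses
        eth2             Up    INET  10.1.1.2
        lo.0             Up    INET  127.0.0.1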

  • L3 Bond Interface

    Corresponding interface state in the cRPD:

  • L3 Pod Vhost-User Interface

    Corresponding interface state in the cRPD:

  • L3 Kernel Interface

    Corresponding interface state in the cRPD:

  • L3 VLAN Sub-Interfaces

    Starting in Juniper Cloud-Native Router Release 23.2, the cloud-native router supports the use of VLAN sub-interfaces in L3 mode.

    Corresponding interface state in the cRPD:
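    An illustrative sketch of that state, not captured from a live system, assuming a sub-interface eth2.100 on parent interface eth2 with a placeholder address:

        user@jcnr> show interfaces routing
        Interface        State Addresses
        eth2.100         Up    INET  10.1.100.1
        eth2             Up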