JCNR-CNI

Read this chapter to learn about JCNR-CNI, which is the primary container network interface for JCNR.

The JCNR-CNI manages the secondary interfaces that pods use. It creates the required interfaces based on the configuration in YAML-formatted network attachment definition (NAD) files. JCNR-CNI configures some interfaces before handing them off to their final location or connection point, and it provides an API for further interface configuration options.

JCNR-CNI performs the following functions:

  • Instantiates different kinds of pod interfaces

  • Creates virtio-based, high-performance interfaces for pods that leverage the DPDK data plane

  • Creates veth pair interfaces that allow pods to communicate using the Linux kernel networking stack

  • Creates pod interfaces in access or trunk mode

  • Attaches pod interfaces to bridge domains

  • Supports an IPAM plug-in for dynamic IP address allocation

  • Allocates unique socket interfaces for virtio interfaces

  • Applies L2 access control lists (ACLs) to JCNR-vRouter

  • Manages networking tasks in pods, such as assigning IP addresses and setting up interfaces between the pod and the host in a Kubernetes cluster

  • Applies Kubernetes network policies by translating them into firewall filter rules, which JCNR-CNI sends to JCNR-vRouter for enforcement in the data plane

  • Connects pod interfaces to networks for pod-to-pod and pod-to-network communication

  • Integrates with JCNR-vRouter to offload packet processing

Benefits of JCNR-CNI

  • Improved pod interface management

  • Customizable administrative and monitoring capabilities

  • Improved application security

  • Increased performance through tight integration with cRPD and vRouter components

JCNR-CNI Inside Cloud-Native Router

JCNR-CNI is a specialized container network interface that can make a variety of network connections. It operates together with the Multus CNI. Figure 1 shows how JCNR-CNI interacts with the other components in Juniper Cloud-Native Router.

Figure 1: JCNR-CNI in an L2 Deployment

JCNR-CNI Role in Pod Creation

When you create a pod for use in the cloud-native router, the Kubernetes component known as the kubelet calls the Multus CNI to set up pod networking and interfaces. Multus reads the annotations section of the pod YAML file to find the NADs. If a NAD points to JCNR-CNI as the CNI plug-in, Multus calls JCNR-CNI to set up the pod interface. JCNR-CNI creates the interface as specified in the NAD, then generates and pushes the corresponding configuration into cRPD.
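For example, a minimal pod spec selects its NAD through the standard Multus network annotation. The pod name and NAD name below (dpdk-pod, vswitch-pod1-bd100) are hypothetical placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dpdk-pod
      annotations:
        # Multus reads this annotation to find the NAD(s) for the pod's
        # secondary interfaces; each name must match an existing NAD.
        k8s.v1.cni.cncf.io/networks: vswitch-pod1-bd100
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "infinity"]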

Network Attachment Definitions

The NAD files are YAML files that the Multus CNI uses during the interface creation phase of pod creation. A NAD specifies the interface MAC address and how IP addresses are allocated. Each pod can use one or more NADs, typically one per pod interface. In the pod YAML file, you list the NADs to use under the network annotations section. In addition to creating interfaces on pods, NADs can create virtual switches and attach pod interfaces to L2 switching instances. Table 1 describes the supported L2 interface types and modes. A sketch of an access-mode NAD follows; a trunk-mode sketch appears after the table.
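As a minimal sketch, a NAD wraps a CNI configuration (a JSON string) in its spec.config field. The outer NetworkAttachmentDefinition structure and the static IPAM plug-in are standard Kubernetes/CNI constructs; the inner JCNR-CNI arguments (type, instanceName, instanceType, bridgeDomain, interfaceType) are release-dependent, so treat those field names as illustrative rather than authoritative:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: vswitch-pod1-bd100
    spec:
      config: '{
        "cniVersion": "0.4.0",
        "name": "vswitch-pod1-bd100",
        "plugins": [
          {
            "type": "jcnr",
            "args": {
              "instanceName": "vswitch1",
              "instanceType": "virtual-switch",
              "bridgeDomain": "bd100",
              "interfaceType": "virtio"
            },
            "ipam": {
              "type": "static",
              "addresses": [
                { "address": "10.1.1.2/24" }
              ]
            }
          }
        ]
      }'

In this access-mode sketch, bridgeDomain binds the interface explicitly to bridge domain bd100, interfaceType selects a virtio (DPDK) rather than veth (kernel) interface, and the ipam block delegates address allocation to the static IPAM plug-in.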

Table 1: NAD - L2 Interface Modes

Access

  Characteristics:

  • Allows untagged packets to traverse the link to the pod

  • Must be explicitly bound to a bridge domain

  Comments:

  • Virtual switches use access mode for non-DPDK interfaces and for applications such as SSH and syslog

Trunk

  Characteristics:

  • Allows packets within a specifically configured VLAN range

  • Implicitly part of one or more bridge domains

  • No IP address allocation by the CNI; if an IP address is needed, the pod must have its own allocation method, such as DHCP

  Comments:

  • Virtual switches in trunk mode carry DU user-plane traffic

  • Dynamically add and remove network slices in 5G environments without restarting the pod
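Under the same assumptions as the earlier sketch, a trunk-mode NAD would drop the ipam block (the CNI does not allocate an IP address in trunk mode) and list the allowed VLANs instead; the vlanIdList argument name below is hypothetical:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: vswitch-trunk
    spec:
      config: '{
        "cniVersion": "0.4.0",
        "name": "vswitch-trunk",
        "plugins": [
          {
            "type": "jcnr",
            "args": {
              "instanceName": "vswitch1",
              "instanceType": "virtual-switch",
              "interfaceType": "virtio",
              "vlanIdList": "100-110"
            }
          }
        ]
      }'

Because a trunk interface is implicitly part of the bridge domains that carry its configured VLANs, no explicit bridgeDomain binding appears here, and a pod that needs an IP address on the trunk must obtain one itself, for example through DHCP.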