Cloud-Native Router Common Features

SUMMARY Read this chapter to learn about the common features of the Juniper Cloud-Native Router. We discuss cloud-native router interface types and other features that are present in both the L2 and L3 deployment modes.

Juniper Cloud-Native Router Interface Types

Juniper Cloud-Native Router supports the following types of interfaces:

  • Agent interface

    vRouter has only one agent interface. The agent interface enables communication between the vRouter-agent and the vRouter. On the vRouter CLI when you issue the vif --list command, the agent interface looks like this:

  • Data Plane Development Kit (DPDK) Virtual Function (VF) workload interfaces

    These interfaces connect to the radio units (RUs) or millimeter-wave distributed units (mmWave-DUs). On the vRouter CLI when you issue the vif --list command, the DPDK VF workload interface looks like this:

  • DPDK VF fabric interfaces

    DPDK VF fabric interfaces, which are associated with the physical network interface card (NIC) on the host server, accept traffic from multiple VLANs. On the vRouter CLI when you issue the vif --list command, the DPDK VF fabric interface looks like this:

  • Active or standby bond interfaces

    Bond interfaces accept traffic from multiple VLANs. A bond interface runs in the active or standby mode (mode 0).

    On the vRouter CLI when you issue the vif --list command, the bond interface looks like this:

  • Pod interfaces using virtio and the DPDK data plane

    Virtio interfaces accept traffic from multiple VLANs and are associated with pod interfaces that use virtio on the DPDK data plane.

    On the vRouter CLI when you issue the vif --list command, the virtio with DPDK data plane interface looks like this:

  • Pod interfaces using virtual Ethernet (veth) pairs and the DPDK data plane

    Pod interfaces that use veth pairs and the DPDK data plane are access interfaces rather than trunk interfaces. This type of pod interface allows traffic from only one VLAN to pass.

    On the vRouter CLI when you issue the vif --list command, the veth pair with DPDK data plane interface looks like this:

  • VLAN sub-interfaces

    Starting in Juniper Cloud-Native Router Release 22.4, the cloud-native router supports the use of VLAN sub-interfaces. VLAN sub-interfaces are like logical interfaces on a physical switch or router. When you run the cloud-native router in L2 mode, you must associate each sub-interface with a specific VLAN. On the JCNR-vRouter, a VLAN sub-interface looks like this:

  • Physical Function (PF) workload interfaces

  • PF fabric interfaces

  • The vhost0 interface

    The vhost0 interface is an L3-only interface. When you run the cloud-native router in L3 mode, you must map the vhost0 interface to a kernel-based physical interface such as eth0, en1, etc. You make the mapping by adjusting the value of the vrouter_dpdk_physical_interface: key in the file Juniper_Cloud_Native_Router_version/helmchart/values_L3.yaml prior to deployment. In this configuration, the system uses the same physical interface for both IPv4 and IPv6 traffic.

    Alternatively, you can choose specific interfaces for IPv4 and IPv6 traffic by entering the appropriate physical interface name in the vhost_interface_ipv4: and vhost_interface_ipv6: keys respectively.
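The vhost0 mapping described above is driven by keys in values_L3.yaml. A minimal sketch follows; the interface names eth0, eth1, and eth2 are placeholders for your own host interfaces, not values from a shipped file:

```yaml
# Sketch of the vhost0 mapping keys in values_L3.yaml.
# Interface names below are placeholders; adjust to match your host.

# Option 1: one physical interface carries both IPv4 and IPv6 traffic.
vrouter_dpdk_physical_interface: "eth0"

# Option 2: separate physical interfaces for IPv4 and IPv6 traffic.
vhost_interface_ipv4: "eth1"
vhost_interface_ipv6: "eth2"
```

Set either the single-interface key or the per-address-family pair before deployment, depending on how you want vhost0 mapped.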

Note:

vRouter does not support the vhost0 interface when run in L2 mode.

The vRouter-agent detects L2 mode in values.yaml during deployment, so it does not wait for the vhost0 interface to come up before completing the installation. The vRouter-agent does not send a vhost interface add message, so the vRouter does not create the vhost0 interface.

Pods are the Kubernetes elements that contain the interfaces used in the cloud-native router. You control interface creation by manipulating the value portion of the key:value pairs in YAML configuration files. The cloud-native router uses a pod-specific file and a network attachment definition (NAD)-specific file for pod and interface creation. During pod creation, Kubernetes consults the pod and NAD configuration files and creates the needed interfaces from the values contained in the NAD configuration file.

You can see example NAD and pod YAML files in the L2 - Add User Pod with Kernel Access to a Cloud-Native Router Instance and L2 - Add User Pod with virtio Trunk Ports to a Cloud-Native Router Instance examples.
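As a rough sketch of the NAD side of this pairing, a NAD file associates a named network with the configuration the CNI uses when it attaches a pod interface. The CNI type and args keys below are assumptions made for illustration, not authoritative values from the product examples:

```yaml
# Hypothetical NAD sketch; the CNI type ("jcnr") and the args keys
# are assumptions for illustration only.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: pod1-bd100            # hypothetical network name
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "pod1-bd100",
    "type": "jcnr",
    "args": {
      "vlanId": "100"
    }
  }'
```

A pod then requests the interface by naming the NAD in its k8s.v1.cni.cncf.io/networks annotation; Kubernetes and the CNI create the interface at pod creation time.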

Logging and Notifications

Read this topic to learn about logging and notification functions in Juniper Cloud-Native Router. We discuss the location of log files, what you can log, and various log levels. You can also learn about the available notifications and how the notifications are implemented in the cloud-native router.

File Locations

The Juniper Cloud-Native Router pods and containers use syslog as their logging mechanism. You determine the location of the log files at deployment time by retaining or changing the value of the log_path key in the values.yaml file. By default, the location of the log files is /var/log/jcnr. The system stores log files from all the cloud-native router pods and containers in the log_path directory.

In addition, a syslog-ng pod stores event notification data in JSON format on the host server. The syslog-ng pod stores the JSON-formatted notifications in the directory specified by the syslog_notifications key in the values.yaml file. By default, the file location is /var/log/jcnr and the filename is jcnr_notifications.json. You can change the location and filename by changing the value of the syslog_notifications key before the cloud-native router deployment.
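Both locations come from keys in values.yaml. A minimal sketch with the default values described above:

```yaml
# Logging-related keys in values.yaml (defaults shown; change either
# value before deployment to relocate the files).
log_path: "/var/log/jcnr"                                      # pod and container syslog output
syslog_notifications: "/var/log/jcnr/jcnr_notifications.json"  # JSON-formatted event notifications
```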

When you use the default file locations, the /var/log/jcnr directory contains the following files:

Note:

The host server must manage the log rotation for the contrail-vrouter-dpdk.log and the jcnr-cni.log files.

Notifications

The syslog-ng pod continuously monitors the preceding log files for notification events such as interface up, interface down, interface add, and so on. When these events appear in a log file, syslog-ng converts the log events into notification events and stores the events in JSON format within the syslog_notifications file configured in the values.yaml file.

As of Juniper Cloud-Native Router Release 22.2, syslog-ng stores the following notifications:

Table 1: Supported Notifications

Notification                               Source Pod
License Near Expiry                        cRPD
License Expired                            cRPD
License Invalid                            cRPD
License OK                                 cRPD
JCNR Init Success                          Deployer
JCNR Init Failure                          Deployer
Upstream Fabric Bond Member Link Up        vRouter
Upstream Fabric Bond Member Link Down      vRouter
Upstream Fabric Bond Link Up               vRouter
Upstream Fabric Bond Link Down             vRouter
Downstream Fabric Link Up                  vRouter
Downstream Fabric Link Down                vRouter
Appliance Link Up                          vRouter
Appliance Link Down                        vRouter
Any JCNR Application Critical Errors       vRouter
JCNR MAC Table Limit Reached               vRouter
JCNR CLI Start                             cRPD or vRouter-Agent
JCNR CLI Stop                              cRPD or vRouter-Agent
JCNR Kernel App Interface Up               vRouter
JCNR Kernel App Interface Down             vRouter
JCNR Virtio User Interface Up              vRouter
JCNR Virtio User Interface Down            vRouter

Juniper Cloud-Native Router Licensing

Read this section to learn about Juniper Cloud-Native Router licensing.

Licensing in the Juniper Cloud-Native Router

Starting in Juniper Cloud-Native Router Release 22.2, we've enabled our Juniper Agile Licensing (JAL) model. JAL ensures that features are used in compliance with Juniper's end-user license agreement. You can purchase licenses for the Juniper Cloud-Native Router software through your Juniper Account Team. You can apply the licenses by using the CLI of the cloud-native router controller. For details about managing multiple license files for multiple cloud-native router deployments, see Juniper Agile Licensing Overview.

If your cRPD pod displays its state as running when you issue the command kubectl get pods -A on the host server, then you have properly applied your license file.
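A quick way to perform this check is to filter the pod list for the cRPD pod. The grep pattern below is an assumption based on typical pod naming; adjust it to your deployment:

```shell
# List all pods in all namespaces and filter for cRPD.
# A STATUS of "Running" indicates the license file was applied correctly.
kubectl get pods -A | grep -i crpd
```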

Note:

In Juniper Cloud-Native Router Releases 22.3 and 22.4, we only monitor license compliance. We do not enforce license compliance.

After you configure a firewall filter, apply it to a bridge domain using a cRPD configuration command similar to: set routing-instances vswitch bridge-domains bd3001 forwarding-options filter input filter1. Then commit the configuration for the firewall filter to take effect.
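Put together in cRPD configuration mode, the apply-and-commit sequence looks like this (a sketch using the filter name filter1 and bridge domain bd3001 from this example; your names will differ):

```
configure
set routing-instances vswitch bridge-domains bd3001 forwarding-options filter input filter1
commit
```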

To see how many packets matched the filter (per VLAN), use the cRPD CLI and issue the command:

The output from the above command looks like:

In this example, we applied the filter to the bridge domain bd3001. The filter has not yet matched any packets.

Useful CLI Commands

This section provides some example CLI commands and their outputs. We also provide some command completion example outputs. These outputs allow you to see the available command hierarchy, which you can explore on your own cloud-native router system.

You can see the bridge command hierarchy with the show bridge ? command shown as follows.

If you look further into the hierarchy, you see:

If you use the <[Enter]> option, you see something like:

The show bridge mac-table command displays the L2 MAC table, which the vRouter learns dynamically.

If you look at the other option, statistics, you see:

If you use the <[Enter]> option, you see:

The show bridge statistics command displays the L2 VLAN traffic statistics per interface within a bridge domain.
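To recap, the two bridge show commands covered in this section are issued from the cRPD CLI:

```
show bridge mac-table     # L2 MAC table learned dynamically by the vRouter
show bridge statistics    # per-interface L2 VLAN traffic statistics per bridge domain
```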

To see the firewall (ACL) configuration: