Cloud-Native Router L2 Features

SUMMARY Read this chapter to learn about the features of the Juniper Cloud-Native Router running in L2 mode. We discuss L2 metrics and telemetry, L2 ACLs (firewall filters), MAC learning and aging, and L2 BUM traffic rate limiting.

Juniper Cloud-Native Router Deployment Modes

Starting in Juniper Cloud-Native Router Release 22.4, you can deploy and operate Juniper Cloud-Native Router in either L2 or L3 mode. You control the deployment mode by editing the appropriate values.yaml file prior to deployment.

To deploy the cloud-native router in L2 mode, retain or modify the values in the file Juniper_Cloud_Native_Router_version-number/helmchart/values.yaml.

Throughout the rest of this chapter we identify those features that are only available in L2 mode by beginning the feature name with L2.

In L2 mode, the cloud-native router behaves like a switch and so performs no routing functions and runs no routing protocols. The pod network uses VLANs to direct traffic to various destinations.

To deploy the cloud-native router in L3 mode, retain or modify the values in the file Juniper_Cloud_Native_Router_version-number/helmchart/values_L3.yaml.

In L3 mode, the cloud-native router behaves like a router and so performs routing functions and runs routing protocols such as ISIS, BGP, OSPF, and segment routing-MPLS. In L3 mode, the pod network is divided into an IPv6 underlay network and an IPv4 or IPv6 overlay network. The IPv6 underlay network is used for control plane traffic.

Juniper Cloud-Native Router L2 Interface Types

Juniper Cloud-Native Router supports the following types of interfaces:

  • Agent interface

    vRouter has only one agent interface. The agent interface enables communication between the vRouter-agent and the vRouter. On the vRouter CLI when you issue the vif --list command, the agent interface looks like this:

  • Data Plane Development Kit (DPDK) Virtual Function (VF) workload interfaces

    These interfaces connect to the radio units (RUs) or millimeter-wave distributed units (mmWave-DUs). On the vRouter CLI when you issue the vif --list command, the DPDK VF workload interface looks like this:

  • DPDK VF fabric interfaces

    DPDK VF fabric interfaces, which are associated with the physical network interface card (NIC) on the host server, accept traffic from multiple VLANs. On the vRouter CLI when you issue the vif --list command, the DPDK VF fabric interface looks like this:

  • Active or standby bond interfaces

    Bond interfaces accept traffic from multiple VLANs. A bond interface runs in active or standby mode (mode 1).

    On the vRouter CLI when you issue the vif --list command, the bond interface looks like this:

  • Pod interfaces using virtio and the DPDK data plane

    Virtio interfaces accept traffic from multiple VLANs and are associated with pod interfaces that use virtio on the DPDK data plane.

    On the vRouter CLI when you issue the vif --list command, the virtio with DPDK data plane interface looks like this:

  • Pod interfaces using virtual Ethernet (veth) pairs and the DPDK data plane

    Pod interfaces that use veth pairs and the DPDK data plane are access interfaces rather than trunk interfaces. This type of pod interface allows traffic from only one VLAN to pass.

    On the vRouter CLI when you issue the vif --list command, the veth pair with DPDK data plane interface looks like this:

  • VLAN sub-interfaces

    Starting in Juniper Cloud-Native Router Release 22.4, the cloud-native router supports the use of VLAN sub-interfaces. VLAN sub-interfaces are like logical interfaces on a physical switch or router. When you run the cloud-native router in L2 mode, you must associate each sub-interface with a specific VLAN. On the JCNR-vRouter, a VLAN sub-interface looks like this:

  • Physical Function (PF) workload interfaces

  • PF fabric interfaces

Note:

vRouter does not support the vhost0 interface when run in L2 mode.

The vRouter-agent detects L2 mode in values.yaml during deployment, so it does not wait for the vhost0 interface to come up before completing installation. The vRouter-agent does not send a vhost interface add message, so the vRouter doesn't create the vhost0 interface.

In L3 mode, the vhost0 interface is present and functional.

Pods are the Kubernetes elements that contain the interfaces used in the cloud-native router. You control interface creation by manipulating the value portion of the key:value pairs in YAML configuration files. The cloud-native router uses a pod-specific file and a network attachment definition (NAD)-specific file for pod and interface creation. During pod creation, Kubernetes consults the pod and NAD configuration files and creates the needed interfaces from the values contained within the NAD configuration file.

You can see example NAD and pod YAML files in the L2 - Add User Pod with Kernel Access to a Cloud-Native Router Instance and L2 - Add User Pod with virtio Trunk Ports to a Cloud-Native Router Instance examples.

L2 Metrics and Telemetry

Read this topic to learn how to view Layer 2 (L2) metrics from an instance of Juniper Cloud-Native Router.

Viewing L2 Metrics

Juniper Cloud-Native Router comes with telemetry capabilities that enable you to see performance metrics and telemetry data. The container contrail-vrouter-telemetry-exporter provides this visibility. This container runs alongside the other vRouter containers in the contrail-vrouter-masters pod.

The telemetry exporter periodically queries the Introspect agent on the vRouter-agent for statistics and reports metrics in response to Prometheus scrape requests. You can view the telemetry data directly by using the following URL: http://<host server IP address>:8070. The following table shows a sample of the output.
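
For example, you can pull the metrics from the exporter with any HTTP client. Assuming a host server address of 10.0.0.1 (a placeholder), a quick check from the host looks like this:

curl http://10.0.0.1:8070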

Note:

We've grouped the output shown in the following table. The cloud-native router does not group or sort the output on live systems.

Table 1: Sample Telemetry Output
Group Sample Output
Memory usage per vRouter
# TYPE virtual_router_system_memory_cached_bytes gauge
# HELP virtual_router_system_memory_cached_bytes Virtual router system memory cached 
virtual_router_system_memory_cached_bytes{vrouter_name="jcnr.example.com"} 2635970448
# TYPE virtual_router_system_memory_buffers gauge
# HELP virtual_router_system_memory_buffers Virtual router system memory buffer 
virtual_router_system_memory_buffers{vrouter_name="jcnr.example.com"} 32689
# TYPE virtual_router_system_memory_bytes gauge
# HELP virtual_router_system_memory_bytes Virtual router total system memory 
virtual_router_system_memory_bytes{vrouter_name="jcnr.example.com"} 2635970448
# TYPE virtual_router_system_memory_free_bytes gauge
# HELP virtual_router_system_memory_free_bytes Virtual router system memory free 
virtual_router_system_memory_free_bytes{vrouter_name="jcnr.example.com"} 2635969296
# TYPE virtual_router_system_memory_used_bytes gauge
# HELP virtual_router_system_memory_used_bytes Virtual router system memory used 
virtual_router_system_memory_used_bytes{vrouter_name="jcnr.example.com"} 32689
# TYPE virtual_router_virtual_memory_kilobytes gauge
# HELP virtual_router_virtual_memory_kilobytes Virtual router virtual memory 
virtual_router_virtual_memory_kilobytes{vrouter_name="jcnr.example.com"} 0
# TYPE virtual_router_resident_memory_kilobytes gauge
# HELP virtual_router_resident_memory_kilobytes Virtual router resident memory 
virtual_router_resident_memory_kilobytes{vrouter_name="jcnr.example.com"} 32689
# TYPE virtual_router_peak_virtual_memory_bytes gauge
# HELP virtual_router_peak_virtual_memory_bytes Virtual router peak virtual memory 
virtual_router_peak_virtual_memory_bytes{vrouter_name="jcnr.example.com"} 2894328001
Packet count per interface
# TYPE virtual_router_phys_if_input_packets_total counter
# HELP virtual_router_phys_if_input_packets_total Total packets received by physical interface
virtual_router_phys_if_input_packets_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 1483
# TYPE virtual_router_phys_if_output_packets_total counter
# HELP virtual_router_phys_if_output_packets_total Total packets sent by physical interface
virtual_router_phys_if_output_packets_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 32969
# TYPE virtual_router_phys_if_input_bytes_total counter
# HELP virtual_router_phys_if_input_bytes_total Total bytes received by physical interface
virtual_router_phys_if_input_bytes_total{interface_name="bond0",vrouter_name="jcnr.example.com"} 125558
# TYPE virtual_router_phys_if_output_bytes_total counter
# HELP virtual_router_phys_if_output_bytes_total Total bytes sent by physical interface
virtual_router_phys_if_output_bytes_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 4597076
virtual_router_phys_if_input_bytes_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 228300499320
virtual_router_phys_if_output_bytes_total{interface_name="bond0",vrouter_name="jcnr.example.com"} 228297889634
virtual_router_phys_if_input_packets_total{interface_name="bond0",vrouter_name="jcnr.example.com"} 1585421179
virtual_router_phys_if_output_packets_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 1585402623
virtual_router_phys_if_output_packets_total{interface_name="bond0",vrouter_name="jcnr.example.com"} 1585403344
CPU usage per vRouter
# TYPE virtual_router_cpu_1min_load_avg gauge
# HELP virtual_router_cpu_1min_load_avg Virtual router CPU 1 minute load average
virtual_router_cpu_1min_load_avg{vrouter_name="jcnr.example.com"} 0.11625
# TYPE virtual_router_cpu_5min_load_avg gauge
# HELP virtual_router_cpu_5min_load_avg Virtual router CPU 5 minute load average
virtual_router_cpu_5min_load_avg{vrouter_name="jcnr.example.com"} 0.109687
# TYPE virtual_router_cpu_15min_load_avg gauge
# HELP virtual_router_cpu_15min_load_avg Virtual router CPU 15 minute load average
virtual_router_cpu_15min_load_avg{vrouter_name="jcnr.example.com"} 0.110156
Drop packet count per vRouter
# TYPE virtual_router_dropped_packets_total counter
# HELP virtual_router_dropped_packets_total Total packets dropped
virtual_router_dropped_packets_total{vrouter_name="jcnr.example.com"} 35850
Packet count per interface per VLAN
# TYPE virtual_router_interface_vlan_multicast_input_packets_total counter
# HELP virtual_router_interface_vlan_multicast_input_packets_total Total number of multicast packets received on interface VLAN
virtual_router_interface_vlan_multicast_input_packets_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_broadcast_output_packets_total counter
# HELP virtual_router_interface_vlan_broadcast_output_packets_total Total number of broadcast packets sent on interface VLAN
virtual_router_interface_vlan_broadcast_output_packets_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_broadcast_input_packets_total counter
# HELP virtual_router_interface_vlan_broadcast_input_packets_total Total number of broadcast packets received on interface VLAN
virtual_router_interface_vlan_broadcast_input_packets_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_multicast_output_packets_total counter
# HELP virtual_router_interface_vlan_multicast_output_packets_total Total number of multicast packets sent on interface VLAN
virtual_router_interface_vlan_multicast_output_packets_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_unicast_input_packets_total counter
# HELP virtual_router_interface_vlan_unicast_input_packets_total Total number of unicast packets received on interface VLAN
virtual_router_interface_vlan_unicast_input_packets_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_flooded_output_bytes_total counter
# HELP virtual_router_interface_vlan_flooded_output_bytes_total Total number of output bytes flooded to interface VLAN
virtual_router_interface_vlan_flooded_output_bytes_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_multicast_output_bytes_total counter
# HELP virtual_router_interface_vlan_multicast_output_bytes_total Total number of multicast bytes sent on interface VLAN
virtual_router_interface_vlan_multicast_output_bytes_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_unicast_output_packets_total counter
# HELP virtual_router_interface_vlan_unicast_output_packets_total Total number of unicast packets sent on interface VLAN
virtual_router_interface_vlan_unicast_output_packets_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_broadcast_input_bytes_total counter
# HELP virtual_router_interface_vlan_broadcast_input_bytes_total Total number of broadcast bytes received on interface VLAN
virtual_router_interface_vlan_broadcast_input_bytes_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_multicast_input_bytes_total counter
# HELP virtual_router_interface_vlan_multicast_input_bytes_total Total number of multicast bytes received on interface VLAN
virtual_router_interface_vlan_multicast_input_bytes_total{vlan_id="100",interface_id="1"} 0
# TYPE virtual_router_interface_vlan_unicast_input_bytes_total counter
# HELP virtual_router_interface_vlan_unicast_input_bytes_total Total number of unicast bytes received on interface VLAN
virtual_router_interface_vlan_unicast_input_bytes_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_flooded_output_packets_total counter
# HELP virtual_router_interface_vlan_flooded_output_packets_total Total number of output packets flooded to interface VLAN
virtual_router_interface_vlan_flooded_output_packets_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_broadcast_output_bytes_total counter
# HELP virtual_router_interface_vlan_broadcast_output_bytes_total Total number of broadcast bytes sent on interface VLAN
virtual_router_interface_vlan_broadcast_output_bytes_total{interface_id="1",vlan_id="100"} 0
# TYPE virtual_router_interface_vlan_unicast_output_bytes_total counter
# HELP virtual_router_interface_vlan_unicast_output_bytes_total Total number of unicast bytes sent on interface VLAN
virtual_router_interface_vlan_unicast_output_bytes_total{interface_id="1",vlan_id="100"} 0
...

Prometheus is an open-source systems monitoring and alerting toolkit. You can use Prometheus to retrieve telemetry data from the cloud-native router host servers over HTTP and view that data. A sample Prometheus configuration looks like this:
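The fragment below is a minimal sketch of a prometheus.yml scrape configuration, assuming a host server address of 10.0.0.1 and a 30-second scrape interval (both placeholders; substitute your own values):

scrape_configs:
  - job_name: jcnr-vrouter-telemetry
    scrape_interval: 30s
    static_configs:
      # Telemetry exporter on the cloud-native router host server (port 8070)
      - targets: ['10.0.0.1:8070']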

L2 ACLs (Firewall Filters)

Read this topic to learn about Layer 2 access control lists (L2 ACLs) in the cloud-native router.

L2 Firewall Filters

Starting with Juniper Cloud-Native Router Release 22.2, we've included a limited firewall filter capability. You can configure the filters by using the Junos OS CLI within the cloud-native router controller, NETCONF, or the cloud-native router APIs.

During deployment, the system defines and applies firewall filters to block traffic from passing directly between the router interfaces. You can dynamically define and apply more filters. With firewall filters, you can:

  • Define firewall filters for bridge family traffic.

  • Define filters based on one or more of the following fields: source MAC address, destination MAC address, or EtherType.

  • Define multiple terms within each filter.

  • Discard the traffic that matches the filter.

  • Apply filters to bridge domains.

Firewall Filter Example

Below you can see an example of the kind of firewall filter configuration you might use in a cloud-native router deployment.
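
The sketch below is modeled on the Junos family bridge filter syntax; the filter name filter1 matches the apply example later in this topic, while the term names and match values are placeholders:

set firewall family bridge filter filter1 term block-src from source-mac-address 00:11:22:33:44:55/48
set firewall family bridge filter filter1 term block-src then discard
set firewall family bridge filter filter1 term block-arp from ether-type arp
set firewall family bridge filter filter1 term block-arp then discard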

Note:

You can configure up to 16 terms in a single firewall filter.

The only then action you can configure in a firewall filter is the discard action.

After configuration, you must apply your firewall filters to a bridge domain by using a cRPD configuration command similar to: set routing-instances vswitch bridge-domains bd3001 forwarding-options filter input filter1. Then you must commit the configuration for the firewall filter to take effect.

To see how many packets matched the filter (per VLAN), you can issue the following command on the cRPD CLI:
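
We assume the standard Junos form of this command here (the exact JCNR command may differ); with the filter from the preceding example, it looks similar to:

show firewall filter filter1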

The command output looks like this:

In the preceding example, we applied the filter to the bridge domain bd3001. The filter has not yet matched any packets.

L2 Firewall Filter (ACL) Troubleshooting

The following table lists some of the potential problems that you might face when you implement firewall rules or ACLs in the cloud-native router. You run most of these commands on the host server. The "Command" column indicates whether the command shown needs to run somewhere else.

Table 2: L2 Firewall Filter or ACL Troubleshooting
Problem Possible Causes and Resolution Command
Firewall filters or ACLs not working

gRPC connection (port 50052) to the vRouter is down.

Check the gRPC connection.

netstat -antp|grep 50052

The ui-pubd process is not running.

Check whether ui-pubd is running.

ps aux|grep ui-pubd
Firewall filter or ACL show commands not working

The gRPC connection (port 50052) to the vRouter is down.

Check the gRPC connection.

netstat -antp|grep 50052

The firewall service is not running.

Check whether the firewall service is running.

ps aux|grep firewall

Check the filter log:

show log filter.log

You must run the show log filter.log command in the JCNR-controller (cRPD) CLI.

MAC Learning and Aging

Juniper Cloud-Native Router provides automated learning and aging of MAC addresses. Read this topic for an overview of the MAC learning and aging functionality in the cloud-native router.

MAC Learning

MAC learning enables the cloud-native router to efficiently send received packets to their respective destinations. The cloud-native router maintains a table of MAC addresses grouped by interface. The table includes MAC addresses, VLANs, and the interface on which the vRouter learns each MAC address and VLAN. The MAC table informs the vRouter about the MAC addresses that each interface can reach.

Queries sent to the MAC table return the interface associated with a given MAC address and VLAN. To build the MAC table, the cloud-native router performs these steps:

  • Records the incoming interface into the MAC table by caching the source MAC address for a new packet flow.

  • Learns the MAC addresses for each VLAN or bridge domain.

  • Creates a key in the MAC table from the MAC address and VLAN of the packet.

If the destination MAC address and VLAN are missing (lookup failure), the cloud-native router floods the packet out all the interfaces (except the incoming interface) in the bridge domain.

By default:

  • MAC table entries time out after 60 seconds.

  • The MAC table size is limited to 10,240 entries.

You can configure the aging timeout and MAC table size during deployment by editing the values.yaml file under the jcnr-vrouter directory on the host server. We recommend that you do not change the default values.
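
The key names for these settings vary by release, so treat the fragment below as a purely hypothetical sketch and confirm the actual keys against the values.yaml shipped with your release:

# Hypothetical key names -- check the values.yaml in your jcnr-vrouter directory.
macAgingTimeout: 60     # seconds; 60 is the default and the minimum
macTableSize: 10240     # maximum number of MAC table entries (default)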

You can see the MAC table entries by using:

  • The Introspect agent at http://<host server IP>:8085/mac_learning.xml#Snh_FetchL2MacEntry.

  • The command show bridge mac-table on the cRPD CLI.

  • The command purel2cli --mac show on the CLI of the contrail-tools pod.

If you exceed the MAC address limit, the counter pkt_drop_due_to_mactable_limit increments. You can see this counter by using the Introspect agent at http://<host server IP>:8085/Snh_AgentStatsReq.

If you delete or disable an interface, the cloud-native router deletes all the MAC entries associated with that interface from the MAC table.

MAC Entry Aging

The aging timeout for cached MAC entries is 60 seconds by default. You can configure the aging timeout at deployment time by editing the values.yaml file. The minimum timeout is 60 seconds and the maximum timeout is 10,240 seconds. You can see the time that is left for each MAC entry through the Introspect agent at http://<host server IP>:8085/mac_learning.xml#Snh_FetchL2MacEntry. We show an example of the output below:

BUM Rate Limiting

The rate limiting feature controls the rate of egress broadcast, unknown unicast, and multicast (BUM) traffic on fabric interfaces. You specify the rate limit in bytes per second by adjusting stormControlProfiles in the values.yaml file before deployment. The system applies the configured profiles to all specified fabric interfaces in the cloud-native router. The maximum per-interface rate limit value you can set is 1,000,000 bytes per second.
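
As an illustrative sketch of the values.yaml fragment (the profile name and nested schema shown here are assumptions; confirm the exact layout against the values.yaml in your release):

stormControlProfiles:
  # Profile name is a placeholder; reference it from your fabric interfaces.
  rate-limit-pf:
    bandwidth:
      bps: 64000    # egress BUM rate limit in bytes per second (max 1000000)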

If the unknown unicast, broadcast, or multicast traffic rate exceeds the set limit on a specified fabric interface, the vRouter drops the traffic. You can see the drop counter values by running the dropstats command in the vRouter CLI. You can see the per-interface rate limit drop counters by running the vRouter CLI command vif --get fabric_vif_id --get-drop-stats. For example:

When you configure a rate limit profile on a fabric interface, you can see the configured limit in bytes per second when you run either vif --list or vif --get fabric_vif_id.

L2 API to Force Bond Link Switchover

When you run the cloud-native router in L2 mode with cascaded nodes, you can configure those nodes to use bond interfaces. If you also configure the bond interfaces as BONDING_MODE_ACTIVE_BACKUP, the vRouter-agent exposes a REST API on localhost port 9091 (POST /bond-switch/<bond interface>). You can use this REST API call to force traffic to switch from the active interface to the standby interface.

The vRouter contains two CLI commands that allow you to see the active interface in a bonded pair and to see the traffic statistics associated with your bond interfaces: dpdkinfo -b and dpdkinfo -n, respectively.
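
For example, assuming the bond interface is named bond0, you could trigger a switchover from the host and then confirm the newly active member on the vRouter CLI:

# Force traffic to switch from the active to the standby interface (host, port 9091)
curl -X POST http://127.0.0.1:9091/bond-switch/bond0
# On the vRouter CLI, verify which member of the bonded pair is now active
dpdkinfo -b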

L2 Quality of Service (QoS)

Starting in Juniper Cloud-Native Router Release 22.4, you can configure quality of service (QoS) parameters including classification, marking, and queueing. The cloud-native router performs classification and marking operations in the vRouter and queueing (scheduling) operations in the physical network interface card (NIC). Scheduling is only supported on the E810 NIC.

QoS Overview

You enable QoS before deployment by editing the values.yaml file in the Juniper_Cloud_Native_Router_version-number/helmchart directory and changing the qosEnable value to true. The default value for the QoS feature is false (disabled).
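
The change itself is a one-line edit in values.yaml:

qosEnable: true    # default is false (QoS disabled)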

Note:

You can only enable the QoS feature if the host server on which you install your cloud-native router contains an Intel E810 NIC that is running LLDP.

You enable LLDP on the NIC by using lldptool, which runs on the host server as a CLI application. For example, you could use a command like the following to enable LLDP and advertise an ETS configuration on the E810 NIC:
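
This invocation is a sketch: the interface name ens1f0 is a placeholder, and the tsa and up2tc mappings simply illustrate strict-priority traffic classes with the 8 priorities mapped to 4 classes, as described below.

# Enable LLDP transmit and receive on the NIC port (interface name is a placeholder)
lldptool -L -i ens1f0 adminStatus=rxtx
# Advertise an ETS configuration TLV: use the local configuration (willing=no),
# strict-priority selection for four traffic classes, and map priorities 0-7
# onto traffic classes 0-3
lldptool -T -i ens1f0 -V ETS-CFG willing=no tsa=0:strict,1:strict,2:strict,3:strict up2tc=0:0,1:0,2:1,3:1,4:2,5:2,6:3,7:3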

The details of the above command are:

  • ETS–Enhanced Transmission Selection

  • willing–The willing attribute determines whether the system uses locally configured packet forwarding classification (PFC) or not. If you set willing to no (the default setting), the cloud-native router applies the local PFC configuration. If you set willing to yes and the cloud-native router receives a TLV from the peer router, the cloud-native router applies the received values.

  • tsa–The transmission selection algorithm is a comma-separated list of traffic class to selection algorithm maps. You can choose ets, strict, or vendor as selection algorithms.

  • up2tc–Comma-separated list that maps user priorities to traffic classes.

The list below provides an overview of the classification, marking, and queueing operations performed by the cloud-native router.

  • Classification:

    • vRouter classifies packets by examining the priority bits in the packet

    • vRouter derives traffic class and loss priority

    • vRouter can apply traffic classifiers to fabric, traffic, and workload interface types

    • vRouter maintains 16 entries in its classifier map

  • Marking (Re-write):

    • vRouter performs marking operations

    • vRouter performs rewriting of p-bits in the egress path

    • vRouter derives new traffic priority based on traffic class and drop priority at egress

    • vRouter can apply marking to packets only on fabric interfaces

    • vRouter maintains 8 entries in its marking map

  • Queueing (Scheduling):

    • Cloud-native router performs strict priority scheduling in hardware (E810 NIC)

    • Cloud-native router maps each traffic class to one queue

    • Cloud-native router limits the maximum number of traffic queues to 4

    • Cloud-native router maps the 8 possible priorities to 4 traffic classes; it also maps each traffic class to 1 hardware queue

    • Cloud-native router can apply scheduling to fabric interface only

    • Virtual functions (VFs) leverage the queues that you configure in the physical functions (interfaces)

    • vRouter maintains 8 entries in its scheduler map

QoS Example Configuration

You configure QoS classifiers, rewrite rules, and schedulers in the cRPD using Junos set commands or remotely using NETCONF. We display a Junos-based example configuration below.
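
The fragment below is a sketch of such a configuration, using generic Junos class-of-service statements; the classifier, rewrite-rule, scheduler, and forwarding-class names are placeholders, so adapt them to your deployment:

# Classify on IEEE 802.1p bits to derive forwarding class and loss priority
set class-of-service classifiers ieee-802.1 l2-classifier forwarding-class best-effort loss-priority low code-points 000
set class-of-service classifiers ieee-802.1 l2-classifier forwarding-class expedited-forwarding loss-priority low code-points 101
# Rewrite p-bits on egress based on forwarding class and loss priority
set class-of-service rewrite-rules ieee-802.1 l2-rewrite forwarding-class expedited-forwarding loss-priority low code-point 101
# Schedule: map forwarding classes to schedulers (strict-high priority shown)
set class-of-service schedulers voice-sched priority strict-high
set class-of-service scheduler-maps l2-sched-map forwarding-class expedited-forwarding scheduler voice-sched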

Viewing the QoS Configuration

You view the QoS configuration in the cRPD CLI by using show commands in Junos operational mode. The show commands reveal the configuration of classifiers, rewrite rules, or scheduler maps individually. We display three examples below, one for each operation; a sketch of the generic command forms follows the list.

  • Show Classifier

  • Show Rewrite-Rule

  • Show Scheduler-Map
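
Assuming the generic Junos forms of these commands (cRPD output may differ in detail), the three show commands look like this:

show class-of-service classifier
show class-of-service rewrite-rule
show class-of-service scheduler-map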

Native VLAN

Starting in Juniper Cloud-Native Router Release 23.1, JCNR supports receiving and forwarding untagged packets on a trunk interface. Typically, trunk ports accept only tagged packets and drop untagged packets. You can enable a JCNR fabric trunk port to accept untagged packets by configuring a native VLAN identifier (ID) on the interface on which you want to receive the untagged packets. When a JCNR fabric trunk port accepts untagged packets, it forwards them in the native VLAN domain.

native-vlan-id

To associate a VLAN ID with untagged data packets received on the fabric trunk interface, enable the native-vlan-id key in the Helm chart before deployment. Edit the values.yaml file in the Juniper_Cloud_Native_Router_<release-number>/helmchart directory and add the native-vlan-id key along with a value. For example:
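
The fragment below is an illustrative sketch: the interface name and the companion vlan-id-list key are assumptions, so confirm the exact fabric interface layout against the values.yaml in your release:

fabricInterface:
  - bond0:                          # interface name is a placeholder
      vlan-id-list: [100, 200, 300] # assumed companion key; check your values.yaml
      native-vlan-id: 100           # untagged packets join VLAN 100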

Note:

After editing the values.yaml file, you must install or upgrade JCNR using the edited values.yaml for the native-vlan-id key to take effect.

To verify that native VLAN is enabled for an interface, connect to the vRouter agent by running kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash, and then run vif --get <interface index id>. A sample output is shown below.

Preventing Local Switching

Starting in Juniper Cloud-Native Router Release 23.1, JCNR can prevent interfaces in a bridge domain that are part of the same VLAN group from transmitting Ethernet frame copies to one another. The noLocalSwitching key enables this functionality on selected VLAN IDs.

Note:

The noLocalSwitching functionality is a Technology Preview feature in Juniper Cloud-Native Router Release 23.1.

To prevent interfaces in a bridge domain from exchanging Ethernet frame copies, enable the noLocalSwitching key and assign VLAN IDs to it; interfaces that belong to those VLAN IDs then do not transmit frames to one another. Note that the noLocalSwitching key applies only to access interfaces. To enable the functionality on a trunk interface that is part of the same VLAN, you must separately set the no-local-switching key to true under that trunk interface. Use the noLocalSwitching functionality when you want to block interfaces that are part of a VLAN group from transmitting traffic directly to one another.

Note:

For all the trunk interfaces and access interfaces, cRPD isolates traffic for the bridge domains configured with no-local-switching.

To prevent local switching, perform the steps below before deployment:

  1. Edit the values.yaml file in Juniper_Cloud_Native_Router_<release-number>/helmchart directory.

  2. Enable the noLocalSwitching key and provide the VLAN IDs.

    Note:
    1. The value for the noLocalSwitching key can be an individual VLAN ID, multiple comma-separated VLAN IDs, a VLAN ID range, or a combination of comma-separated VLAN IDs and a VLAN ID range. For example, noLocalSwitching: [700, 701, 705-710].

    2. This step enables the feature for all access interfaces that have the specified VLAN IDs. You can skip the next step if you do not want to enable the feature on a trunk interface.

  3. To enable the feature on a trunk interface, add the key no-local-switching under the trunk interface configuration and set it to true (see the combined example after these steps).

  4. Install or upgrade JCNR using the values.yaml.
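
As a combined sketch of steps 2 and 3 (the interface name and trunk layout are assumptions; confirm the exact fabric interface schema against your values.yaml):

# Disable local switching for these VLANs on all access interfaces
noLocalSwitching: [700, 701, 705-710]
fabricInterface:
  - bond0:                       # interface name is a placeholder
      no-local-switching: true   # extend the behavior to this trunk interface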

Example

To see all the interfaces on which the noLocalSwitching functionality is enabled, across all VLANs, connect to the vRouter agent by running kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash, and then run purel2cli --nolocal show. A sample output is shown below.

To check whether the noLocalSwitching functionality is enabled on a specific VLAN ID, connect to the vRouter agent by running kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash, and then run purel2cli --nolocal get <VLAN ID>. A sample output is shown below.
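
Putting the verification steps together, a session might look like this (the container name placeholder and VLAN ID 700 come from the examples above):

kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash
# Inside the agent container: list no-local-switching state for all VLANs
purel2cli --nolocal show
# Check a single VLAN from the earlier example
purel2cli --nolocal get 700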