Customize JCNR Helm Chart for OpenShift Deployment

SUMMARY Read this topic to learn about the deployment configuration available for the Juniper Cloud-Native Router.

You can deploy and operate Juniper Cloud-Native Router in the L2, L3, or L2-L3 mode. You configure the deployment mode by editing the appropriate attributes in the values.yaml file prior to deployment.

Note:
  • In the fabricInterface key of the values.yaml file:

    • When all interfaces have the interface_mode key configured, the deployment mode is L2.

    • When some interfaces have the interface_mode key configured and the rest do not, the deployment mode is L2-L3.

    • When no interface has the interface_mode key configured, the deployment mode is L3.


Helm Chart Description for OpenShift Deployment

Customize the Helm chart using the Juniper_Cloud_Native_Router_<release>/helmchart/jcnr/values.yaml file. We provide a copy of the default values.yaml in JCNR Default Helm Chart.

Table 1 contains a description of the configurable attributes in values.yaml for an OpenShift deployment.

Table 1: Helm Chart Description for OpenShift Deployment
Key Description
global  
  registry   Defines the Docker registry for the JCNR container images. The default value is enterprise-hub.juniper.net. The images provided in the tarball are tagged with the default registry name. If you host the container images in a private registry, replace the default value with your registry URL.
  repository (Optional) Defines the repository path for the JCNR container images. This is a global key that takes precedence over the repository paths under the common section. Default is jcnr-container-prod/.
  imagePullSecret (Optional) Defines the Docker registry authentication credentials. You can configure credentials to either the Juniper Networks enterprise-hub.juniper.net registry or your private registry.
    registryCredentials Base64 representation of your Docker registry credentials. See Configure Repository Credentials for more information.
  secretName Name of the secret object that will be created.
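As a sketch of how the registry keys above nest in values.yaml (the secret name regcred and the credential string are placeholder values, and secretName is assumed to sit under imagePullSecret as in the default chart):

```yaml
global:
  registry: enterprise-hub.juniper.net
  imagePullSecret:
    # Base64 representation of your Docker registry credentials (placeholder)
    registryCredentials: <base64-encoded-credentials>
    # Name of the secret object to be created (placeholder name)
    secretName: regcred
```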
  common   Defines repository paths and tags for the various JCNR container images. Use default unless using a private registry.
  repository Defines the repository path. The default value is jcnr-container-prod/. The global repository key takes precedence if defined.
  tag Defines the image tag. The default value is configured to the appropriate tag number for the JCNR release version.
  replicas (Optional) Indicates the number of replicas for cRPD. Default is 1. The value for this key must be specified for multi-node clusters. The value is equal to the number of nodes running JCNR.
  noLocalSwitching (Optional) Prevents interfaces in a bridge domain from exchanging Ethernet frames with one another. Enter one or more comma-separated VLAN IDs; interfaces belonging to those VLANs do not transmit frames to one another. This key is specific to L2 and L2-L3 deployments and takes effect on all access interfaces. To enable the functionality on trunk interfaces, configure no-local-switching in fabricInterface. See Prevent Local Switching for more details.
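For instance, to keep access interfaces in two VLANs from switching traffic locally (the VLAN IDs here are illustrative):

```yaml
# Access interfaces in VLANs 100 and 200 will not forward frames to each other
noLocalSwitching: [100, 200]
```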
  iamRole   Not applicable.
  fabricInterface

Aggregated interfaces that receive traffic from multiple interfaces. Fabric interfaces are always physical interfaces; they can be either a physical function (PF) or a virtual function (VF). Because these interfaces have a higher throughput requirement, multiple hardware queues are allocated to them, and each hardware queue is allocated a dedicated CPU core. See JCNR Interfaces Overview for more information.

Use this field to provide a list of fabric interfaces to be bound to the DPDK. You can also provide subnets instead of interface names. If both the interface name and the subnet are specified, then the interface name takes precedence over the subnet/gateway combination. The subnet/gateway combination is useful when the interface names vary in a multi-node cluster.

Note:
  • When all interfaces have the interface_mode key configured, the deployment mode is L2.

  • When some interfaces have the interface_mode key configured and the rest do not, the deployment mode is L2-L3.

  • When no interface has the interface_mode key configured, the deployment mode is L3.

For example:

  # L2 only
  - eth1:
      ddp: "auto"
      interface_mode: trunk
      vlan-id-list: [100, 200, 300, 700-705]
      storm-control-profile: rate_limit_pf1
      native-vlan-id: 100
      no-local-switching: true
  # L3 only
  - eth1:
      ddp: "off"
  # L2-L3
  - eth1:
      ddp: "auto"
  - eth2:
      ddp: "auto"
      interface_mode: trunk
      vlan-id-list: [100, 200, 300, 700-705]
      storm-control-profile: rate_limit_pf1
      native-vlan-id: 100
      no-local-switching: true
  subnet An alternative mode of input to interface names. For example:

  - subnet: 10.40.1.0/24
    gateway: 10.40.1.1
    ddp: "off"

The subnet option is applicable only for L3 interfaces. With the subnet mode of input, interfaces are auto-detected in each subnet. Specify either subnet/gateway or the interface name. Do not configure both. The subnet/gateway form of input is particularly helpful in environments where the interface names vary in a multi-node cluster.

  ddp

(Optional) Indicates the interface-level Dynamic Device Personalization (DDP) configuration. DDP provides datapath optimization at the NIC for traffic like GTPU, SCTP, etc. For a bond interface, all slave interface NICs must support DDP for the DDP configuration to be enabled. See Enabling Dynamic Device Personalization (DDP) on Individual Interfaces for more details.

Options include auto, on, or off. Default is off.

Note:

The interface level ddp takes precedence over the global ddp configuration.

  interface_mode

Set to trunk for L2 interfaces; do not configure for L3 interfaces. For example:

interface_mode: trunk
  vlan-id-list

Provide a list of VLAN IDs associated with the interface.

    storm-control-profile

Use storm-control-profile to associate the desired storm control profile to the interface. Profiles are defined under jcnr-vrouter.stormControlProfiles.

  native-vlan-id

Configure native-vlan-id with any of the VLAN IDs in the vlan-id-list to associate it with untagged data packets received on the physical interface of a fabric trunk mode interface. For example:

fabricInterface: 
  - bond0: 
      interface_mode: trunk 
      vlan-id-list: [100, 200, 300] 
      storm-control-profile: rate_limit_pf1 
      native-vlan-id: 100  

See Native VLAN for more details.

  no-local-switching Prevents interfaces from communicating directly with each other when configured. Allowed values are true or false. See Prevent Local Switching for more details.
  fabricWorkloadInterface (Optional) Defines the interfaces to which different workloads are connected. They can be software-based or hardware-based interfaces.
  log_level Defines the log severity. Available value options are: DEBUG, INFO, WARN, and ERR.
Note:

Leave it set to the default INFO unless instructed to change it by Juniper Networks support.

  log_path

The defined directory stores various JCNR-related descriptive logs such as contrail-vrouter-agent.log, contrail-vrouter-dpdk.log, etc. Default is /var/log/jcnr/.

  syslog_notifications

Indicates the absolute path to the file that stores syslog-ng generated notifications in JSON format. Default is /var/log/jcnr/jcnr_notifications.json.
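Taken together, the logging keys at their documented defaults look like this in values.yaml:

```yaml
log_level: INFO                # leave at INFO unless Juniper Networks support advises otherwise
log_path: /var/log/jcnr/       # directory for JCNR descriptive logs
syslog_notifications: /var/log/jcnr/jcnr_notifications.json
```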

  corePattern

Indicates the core_pattern for the core file. If left blank, then JCNR pods will not overwrite the default pattern on the host.

Note:

Set the core_pattern on the host before deploying JCNR. You can change the value in /etc/sysctl.conf. For example, kernel.core_pattern=/var/crash/core_%e_%p_%i_%s_%h_%t.gz

  coreFilePath Indicates the path to the core file. Default is /var/crash.
  nodeAffinity

(Optional) Defines labels on nodes to determine where to place the vRouter pods.

By default the vRouter pods are deployed to all nodes of a cluster.

In the example below, the node affinity label is defined as key1=jcnr. You must apply this label to each node where JCNR is to be deployed:

nodeAffinity:
- key: key1
  operator: In
  values:
  - jcnr

On an OCP setup, node affinity must be configured to bring up JCNR on worker nodes only. For example:

  nodeAffinity:
  - key: node-role.kubernetes.io/worker
    operator: Exists
  - key: node-role.kubernetes.io/master
    operator: DoesNotExist
Note:

This key is a global setting.

  key Key-value pair that represents a node label that must be matched to apply the node affinity.
  operator Defines the relationship between the node label and the set of values in the matchExpression parameters in the pod specification. This value can be In, NotIn, Exists, DoesNotExist, Lt, or Gt.
  cni_bin_dir For Red Hat OpenShift, this field must not be left empty. Set it to /var/lib/cni/bin, the default CNI binary path on any OCP deployment.
  grpcTelemetryPort

(Optional) Enter a value for this parameter to override the cRPD telemetry gRPC server's default port of 50053.

  grpcVrouterPort (Optional) Default is 50052. Configure to override.
  vRouterDeployerPort (Optional) Default is 8081. Configure to override.
jcnr-vrouter  
  cpu_core_mask

If present, this indicates that you want to use static CPU allocation to allocate cores to the forwarding plane.

This value should be a comma-delimited list of isolated CPU cores that you want to statically allocate to the forwarding plane (for example, cpu_core_mask: "2,3,22,23"). Use cores that are not used by the host OS.

Comment this out if you want to use Kubernetes CPU Manager to allocate cores to the forwarding plane.

Note:

You cannot use static CPU allocation and Kubernetes CPU Manager at the same time. Using both can lead to unpredictable behavior.

  guaranteedVrouterCpus

If present, this indicates that you want to use the Kubernetes CPU Manager to allocate CPU cores to the forwarding plane.

This value should be the number of guaranteed CPU cores that you want the Kubernetes CPU Manager to allocate to the forwarding plane. You should set this value to at least one more than the number of forwarding cores.

Comment this out if you want to use static CPU allocation to allocate cores to the forwarding plane.

Note:

You cannot use static CPU allocation and Kubernetes CPU Manager at the same time. Using both can lead to unpredictable behavior.

  dpdkCtrlThreadMask

Specifies the CPU core(s) to allocate to vRouter DPDK control threads when using static CPU allocation. This list should be a subset of the cores listed in cpu_core_mask and can be the same as the list in serviceCoreMask.

CPU cores listed in cpu_core_mask but not in serviceCoreMask or dpdkCtrlThreadMask are allocated for forwarding.

Comment this out if you want to use Kubernetes CPU Manager to allocate cores to the forwarding plane.

  serviceCoreMask

Specifies the CPU core(s) to allocate to vRouter DPDK service threads when using static CPU allocation. This list should be a subset of the cores listed in cpu_core_mask and can be the same as the list in dpdkCtrlThreadMask.

CPU cores listed in cpu_core_mask but not in serviceCoreMask or dpdkCtrlThreadMask are allocated for forwarding.

Comment this out if you want to use Kubernetes CPU Manager to allocate cores to the forwarding plane.
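As a sketch of static CPU allocation using the masks described above: with four isolated cores, dedicating a single core to both the control and service threads leaves three cores for forwarding (the core numbers are illustrative):

```yaml
cpu_core_mask: "2,3,22,23"   # isolated cores statically allocated to the forwarding plane
dpdkCtrlThreadMask: "2"      # DPDK control threads (subset of cpu_core_mask)
serviceCoreMask: "2"         # DPDK service threads (may be the same as dpdkCtrlThreadMask)
# Cores 3, 22, and 23 remain dedicated to packet forwarding
```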

  numServiceCtrlThreadCPU

Specifies the number of CPU cores to allocate to vRouter DPDK service/control traffic when using the Kubernetes CPU Manager.

This number should be smaller than the number of guaranteedVrouterCpus cores. The remaining guaranteedVrouterCpus cores are allocated for forwarding.

Comment this out if you want to use static CPU allocation to allocate cores to the forwarding plane.
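A minimal Kubernetes CPU Manager sketch consistent with the rules above (the core counts are illustrative):

```yaml
# cpu_core_mask, dpdkCtrlThreadMask, and serviceCoreMask must be commented out
guaranteedVrouterCpus: 4     # total cores requested from the Kubernetes CPU Manager
numServiceCtrlThreadCPU: 1   # cores for service/control traffic; the remaining 3 cores forward packets
```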

  restoreInterfaces Set to true to restore the interfaces back to their original state in case the vRouter pod crashes or restarts or if JCNR is uninstalled.
  bondInterfaceConfigs (Optional) Enable bond interface configurations only for L2 or L2-L3 deployments.
  name Name of the bond interface.
  mode Set to 1 (active-backup).
  slaveInterfaces List of fabric interfaces to be bonded.
  primaryInterface

(Optional) Primary interface for the bond.

  slaveNetworkDetails Not applicable.
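A bond interface built from two fabric interfaces might look like the following (the interface names are illustrative):

```yaml
bondInterfaceConfigs:
  - name: bond0
    mode: 1                  # active-backup
    slaveInterfaces:
      - eth2
      - eth3
    primaryInterface: eth2   # optional
```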
  mtu Maximum Transmission Unit (MTU) value for all physical interfaces (VFs and PFs). Default is 9000.
  stormControlProfiles Configure the rate limit profiles for BUM traffic on fabric interfaces in bytes per second. See BUM Rate Limiting for more details.
  dpdkCommandAdditionalArgs

Pass any additional DPDK command line parameters. The --yield_option 0 is set by default and implies the DPDK forwarding cores will not yield their assigned CPU cores. Other common parameters that can be added are tx and rx descriptors and mempool. For example:

dpdkCommandAdditionalArgs: "--yield_option 0 --dpdk_txd_sz 2048 --dpdk_rxd_sz 2048 --vr_mempool_sz 131072"
  dpdk_monitoring_thread_config (Optional) Enables a monitoring thread for the vRouter DPDK container. Every loggingInterval seconds, a log containing the information indicated by loggingMask is generated.
    loggingMask Specifies the information to be generated. Represented by a bitmask with bit positions as follows:
  • 0b001 is the nl_counter

  • 0b010 is the lcore_timestamp

  • 0b100 is the profile_histogram

    loggingInterval Specifies the log generation interval in seconds.
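For example, a mask of 3 (binary 0b011) selects both the nl_counter and lcore_timestamp information; the 60-second interval is illustrative:

```yaml
dpdk_monitoring_thread_config:
  loggingMask: 3        # 0b001 (nl_counter) + 0b010 (lcore_timestamp)
  loggingInterval: 60   # generate the log every 60 seconds
```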
  ddp

(Optional) Indicates the global Dynamic Device Personalization (DDP) configuration. DDP provides datapath optimization at the NIC for traffic like GTPU, SCTP, etc. For a bond interface, all slave interface NICs must support DDP for the DDP configuration to be enabled. See Enabling Dynamic Device Personalization (DDP) on Individual Interfaces for more details.

Options include auto, on, or off. Default is off.

Note:

The interface level ddp takes precedence over the global ddp configuration.

  qosEnable

Set to true or false to enable or disable QoS. See Quality of Service (QoS) for more details.

Note:

QoS is not supported on Intel X710 NIC.

  vrouter_dpdk_uio_driver Set to vfio-pci, the UIO driver used by the vRouter DPDK.
  agentModeType

Set to dpdk.

  fabricRpfCheckDisable Set to false to enable the RPF check on all JCNR fabric interfaces. By default, RPF check is disabled.
  telemetry

(Optional) Configures cRPD telemetry settings. To learn more about telemetry, see Telemetry Capabilities.

  disable

Set to true to disable cRPD telemetry. Default is false, which means that cRPD telemetry is enabled by default.

  metricsPort

The port on which the cRPD telemetry exporter listens for Prometheus queries. Default is 8072.

  logLevel

One of warn, warning, info, debug, trace, or verbose. Default is info.

  gnmi

(Optional) Configures cRPD gNMI settings.

enable Set to true to enable the cRPD telemetry exporter to respond to gNMI requests.
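Putting the cRPD telemetry keys together with their documented defaults (gNMI is enabled here purely for illustration):

```yaml
telemetry:
  disable: false      # cRPD telemetry is enabled by default
  metricsPort: 8072   # port for Prometheus queries
  logLevel: info
  gnmi:
    enable: true      # respond to gNMI requests
```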

vrouter  
  telemetry

(Optional) Configures vRouter telemetry settings. To learn more about telemetry, see Telemetry Capabilities.

  metricsPort The port on which the vRouter telemetry exporter listens for Prometheus queries. Default is 8070.

  logLevel One of warn, warning, info, debug, trace, or verbose. Default is info.

  gnmi (Optional) Configures vRouter gNMI settings.

  enable Set to true to enable the vRouter telemetry exporter to respond to gNMI requests.

  persistConfig Set to true if you want the JCNR operator-generated pod configuration to persist even after uninstallation. This option can be set only for L2 mode deployments. Default is false.
  interfaceBoundType Not applicable.
  networkDetails Not applicable.
  networkResources Not applicable.
contrail-tools  
  install   Set to true to install contrail-tools (used for debugging).