Sample Configuration Files
Read this section to find the sample YAML configuration files used when you deploy the Juniper Cloud-Native Router. These YAML files control the features and functions available to the cloud-native router through the deployment process. This section also includes the YAML files used for workload configuration; the workload configuration files control workload functionality.
We include the following sample configuration files:
- Juniper Cloud-Native Router main configuration file (the main values.yaml file)
- Juniper Cloud-Native Router main L3 configuration file (the main values_L3.yaml file)
- Juniper Cloud-Native Router vRouter-specific configuration file
- Juniper Cloud-Native Router JCNR-CNI-specific configuration file
- L2 workload configuration files
- L3 workload configuration files
Use these files to understand the configuration options available for deploying the Juniper Cloud-Native Router. The workload configuration files show how you can configure trunk and access interfaces, and the various VLANs for each interface type. Each file contains comments that begin with a hash mark (#); in these examples the comments are highlighted in bold.
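Two interface roles recur throughout these files: fabric (trunk) interfaces and fabric workload (access) interfaces. As a quick orientation, the short sketch below contrasts the two. It is an illustrative fragment only; the interface names in it are placeholders rather than part of any shipped file.
# Illustrative sketch only -- interface names are placeholders
fabricInterface:                # trunk interface: carries all traffic types
  - bond0:
      interface_mode: trunk
      vlan-id-list: [100, 200, 700-705]   # trunk interfaces can carry multiple VLANs and VLAN ranges
fabricWorkloadInterface:        # access interface: management/control traffic only
  - enp59s0f1v0:
      interface_mode: access
      vlan-id-list: [700]                 # access interfaces carry a single VLAN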
values.yaml
This is the main values.yaml file. The TAR file also provides three additional values.yaml files, one for each installation component: jcnr-cni, jcnr-vrouter, and syslog-ng.
If a setting in an individual values.yaml file conflicts with a setting in the main values.yaml file, the setting in the main values.yaml file takes precedence.
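For example (a hypothetical illustration, not taken from the shipped files), if a component-level values.yaml and the main values.yaml both set the log severity, the deployment uses the value from the main file:
# helm_charts/jcnr/charts/jcnr-vrouter/values.yaml (component-level file)
log_level: "DEBUG"

# values.yaml (main file) -- this setting wins if the two conflict
log_level: "INFO"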
####################################################################
#                Common Configuration (global vars)                #
####################################################################
global:
  registry: svl-artifactory.juniper.net/
  # uncomment below if all images are available in the same path; it will
  # take precedence over "repository" paths under "common" section below
  #repository: path/to/allimages/
  # uncomment below if you are using a private registry that needs authentication
  # registryCredentials - Base64 representation of your Docker registry credentials
  # secretName - Name of the Secret object that will be created
  #imagePullSecret:
    #registryCredentials:
    #secretName: regcred
common:
  vrouter:
    repository: atom-docker/cn2/bazel-build/dev/
    tag: R22.4-340
  crpd:
    repository: junos-docker-local/warthog/
    tag: 22.4R1.10
  jcnrcni:
    repository: junos-docker-local/warthog/
    tag: 20221220-ce5cad7

# defines the log severity. Possible options: DEBUG, INFO, WARN, ERR
log_level: "INFO"

# "log_path": this directory will contain various jcnr related descriptive logs
# such as contrail-vrouter-agent.log, contrail-vrouter-dpdk.log etc.
log_path: "/var/log/jcnr/"

# "syslog_notifications": absolute path to the file that will contain syslog-ng
# generated notifications in json format
syslog_notifications: "/var/log/jcnr/jcnr_notifications.json"

# mode in which jcnr will operate; possible options include "l2" or "l3"
mode: "l2"

####################################################################
#                            L2 PARAMS                             #
####################################################################
# fabricInterface: NGDU or tor side interface, expected all types
# of traffic; interface_mode is always trunk for this mode
fabricInterface:
  - bond0:
      interface_mode: trunk
      vlan-id-list: [100, 200, 300, 700-705]
      storm-control-profile: rate_limit_pf1

# fabricWorkloadInterface: RU side interfaces, expected traffic is only
# management/control traffic; interface mode is always access for this mode
fabricWorkloadInterface:
  - enp59s0f1v0:
      interface_mode: access
      vlan-id-list: [700]

jcnr-vrouter:
  # restoreInterfaces: setting this to true will restore the interfaces
  # back to their original state in case vrouter pod crashes or restarts
  restoreInterfaces: false

  # bond interface configurations
  bondInterfaceConfigs:
    - name: "bond0"
      mode: 1 # ACTIVE_BACKUP MODE
      slaveInterfaces:
        - "enp59s0f0v0"
        - "enp59s0f0v1"

  # MTU for all physical interfaces (all VF's and PF's)
  mtu: "9000"

  # vrouter fwd core mask
  # if qos is enabled, you will need to allocate 4 CPU cores (primary and siblings)
  cpu_core_mask: "2,3,22,23"

  # rate limit profiles for bum traffic on fabric interfaces in bytes per second
  stormControlProfiles:
    rate_limit_pf1:
      bandwidth:
        level: 0
    #rate_limit_pf2:
    #  bandwidth:
    #    level: 0

  # Set ddp to true to enable Dynamic Device Personalization (DDP)
  # It provides datapath optimization at NIC for traffic like GTPU, SCTP etc.
  ddp: true

  # Set true/false to Enable or Disable QOS, note: QOS is not supported on X710 NIC.
  qosEnable: false

  # core pattern to denote how the core file will be generated
  # if left empty, JCNR pods will not overwrite the default pattern
  corePattern: ""

  # path for the core file; vrouter considers /var/crashes as default value if not specified
  coreFilePath: /var/crash
values_L3.yaml
When you deploy in L3 mode, the values_L3.yaml file controls the installation and operational parameters of the cloud-native router.
Note that some common configuration parameters are present in both values_L3.yaml and the main values.yaml file. Any value that is not set in values_L3.yaml is taken from the common configuration section of the main values.yaml file.
# This is a sample values.yaml file to install JCNR in L3 mode
# Install by overriding values.yaml with this file e.g.
# helm install jcnr -f values_L3.yaml
# Please note the overriding file does not replace values.yaml i.e. any values
# that are not present in this file will be taken from the original values.yaml
# e.g. if global.repository is commented in values_L3.yaml and uncommented in
# values.yaml, then the value in values.yaml is still considered
#
####################################################################
#                Common Configuration (global vars)                #
####################################################################
global:
  registry: enterprise-hub.juniper.net/
  # uncomment below if all images are available in the same path; it will
  # take precedence over "repository" paths under "common" section below
  repository: jcnr-container-prod/
  # uncomment below if you are using a private registry that needs authentication
  # registryCredentials - Base64 representation of your Docker registry credentials
  # secretName - Name of the Secret object that will be created
  #imagePullSecret:
    #registryCredentials:
    #secretName: regcred
common:
  vrouter:
    #repository: atom-docker/cn2/bazel-build/dev/
    tag: R22.4-340
  crpd:
    #repository: junos-docker-local/warthog/
    tag: 22.4R1.10
  jcnrcni:
    #repository: junos-docker-local/warthog/
    tag: 20221220-ce5cad7

# defines the log severity. Possible options: DEBUG, INFO, WARN, ERR
log_level: "INFO"

# "log_path": this directory will contain various jcnr related descriptive logs
# such as contrail-vrouter-agent.log, contrail-vrouter-dpdk.log etc.
log_path: "/var/log/jcnr/"

# "syslog_notifications": absolute path to the file that will contain syslog-ng
# generated notifications in json format
syslog_notifications: "/var/log/jcnr/jcnr_notifications.json"

# mode in which jcnr will operate; possible options include "l2" or "l3"
mode: "l3"

jcnr-vrouter:
  # vrouter fwd core mask
  cpu_core_mask: "2,3"

  # set multinode to true if you have more than one node in your Kubernetes cluster
  # (master + worker) and you want to run vrouter in both master and worker nodes
  #multinode: false

  # nodeSelector can be given as a key value pair for vrouter to install on the specific nodes, we can give multiple key value pair.
  # Example: nodeSelector: {key1: value1}
  #nodeSelector:
  #  key1: value1
  #  key2: value2
  #nodeSelector: {}

  # contrail vrouter vhost0 binding interface on the host
  vrouter_dpdk_physical_interface: "eth2"

  # uio driver will be vfio-pci or uio_pci_generic
  vrouter_dpdk_uio_driver: "vfio-pci"

  vhost_interface_ipv4: ""
  vhost_interface_ipv6: ""

  # vrouter gateway IP for IPv4
  vhost_gateway_ipv4: "" # if gateway IP is not provided vrouter will pickup the gateway IP from kernel table
  # vrouter gateway IP for IPv6
  vhost_gateway_ipv6: "" # if gateway IP is not provided vrouter will pickup the gateway IP from kernel table

  # core pattern to denote how the core file will be generated
  # if left empty, JCNR pods will not overwrite the default pattern
  corePattern: ""

  # path for the core file; vrouter considers /var/crashes as default value if not specified
  coreFilePath: /var/crash

jcnr-cni:
  # data plane default is dpdk for vrouter case, linux for kernel module
  dataplane: dpdk

  # only for development environment where master and worker on a single node, then we need to give true
  standalone: false

  # if crpd needs to be running on the master node as RR (Route Reflector) then we need to enable this field.
  cRPD_RR:
    enabled: false

  networkAttachmentDefinitionName: jcnr # default NAD name and VRF name will be Platter, if we change the name, NAD and VRF will be created on the new Name
  # Pod yaml we need to give the NAD name and VRF name as above
  vrfTarget: 10:10 # vrfTarget used for the default NAD

  # JCNR case, Calico running with default BGP port 179, then for cRPD BGP port have to be different, change the port to 178
  BGPListenPort: 178
  # if cRPD connects to MX or some other router, then we have to leave this port to 179 by default, MX wants to connect to jcnr then MX to cRPD BGP port has to be configured as 178
  BGPConnectPort: 179

  # If master node is used as a RR, then this address should be matched with master node ipv4 loopback address.
  BGPIPv4Neighbor: 10.1.1.2
  # If master node is used as a RR, then this address should be matched with master node ipv6 loopback address.
  BGPIPv6Neighbor: 2001:db8:abcd::2

  SRGBStartLabel: "400000"
  SRGBIndexRange: "4000"

  # we can add multiple master nodes configuration by copying the below node configuration as many times as nodes, have the unique name based on the node host name,
  # Name format node-<actual-node-name>.json with unique IP Address
  masterNodeConfig:
    node-masternode1.json: |
      {
        "ipv4LoopbackAddr":"100.1.1.2",
        "ipv6LoopbackAddr":"abcd::2",
        "isoLoopbackAddr":"49.0004.1000.0000.0000.00",
        "srIPv4NodeIndex":"2002",
        "srIPv6NodeIndex":"3002"
      }

  # we can add multiple worker nodes configuration by copying the below node configuration as many times as nodes, have the unique name based on the node host name,
  # Name format node-<actual-node-name>.json with unique IP Address
  workerNodeConfig:
    node-workernode1.json: |
      {
        "ipv4LoopbackAddr":"100.1.1.3",
        "ipv6LoopbackAddr":"abcd::3",
        "isoLoopbackAddr":"49.0004.1000.0000.0001.00",
        "srIPv4NodeIndex":"2003",
        "srIPv6NodeIndex":"3003"
      }
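The masterNodeConfig and workerNodeConfig comments above note that you can repeat the node entry once per node, using the node-<actual-node-name>.json naming format. A hypothetical second worker node entry might look like the sketch below; the host name, loopback addresses, and SR indices are placeholders and must be unique for each node.
# Sketch only: adding a second worker node entry (placeholder values)
workerNodeConfig:
  node-workernode1.json: |
    {
      "ipv4LoopbackAddr":"100.1.1.3",
      "ipv6LoopbackAddr":"abcd::3",
      "isoLoopbackAddr":"49.0004.1000.0000.0001.00",
      "srIPv4NodeIndex":"2003",
      "srIPv6NodeIndex":"3003"
    }
  node-workernode2.json: |
    {
      "ipv4LoopbackAddr":"100.1.1.4",
      "ipv6LoopbackAddr":"abcd::4",
      "isoLoopbackAddr":"49.0004.1000.0000.0002.00",
      "srIPv4NodeIndex":"2004",
      "srIPv6NodeIndex":"3004"
    }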
jcnr-vrouter-specific values.yaml
This values.yaml file is specific to the jcnr-vrouter pod. It is located in the Juniper_Cloud_Native_Router_<release-number>/helm_charts/jcnr/charts/jcnr-vrouter directory. If any value you enter in this file conflicts with a value in the main values.yaml file, the value in the main values.yaml file takes precedence.
#
# This is a YAML-formatted file.
#
# Declare variables to be passed into your templates.
common:
  registry: svl-artifactory.juniper.net/
  repository: atom-docker/cn2/bazel-build/dev/

# anchor tag for vrouter container images
vrouter-tag: &vrouter_tag JCNR-22.3-6

contrail_init:
  image: contrail-init
  tag: *vrouter_tag
  pullPolicy: IfNotPresent

contrail_vrouter_kernel_init_dpdk:
  image: contrail-vrouter-kernel-init-dpdk
  tag: *vrouter_tag
  pullPolicy: IfNotPresent

contrail_vrouter_agent:
  image: contrail-vrouter-agent
  tag: *vrouter_tag
  pullPolicy: IfNotPresent

contrail_vrouter_agent_dpdk:
  image: contrail-vrouter-dpdk
  tag: *vrouter_tag
  pullPolicy: IfNotPresent
  resources:
    limits:
      memory: 4Gi
      hugepages-1Gi: 4Gi # Hugepages must be enabled with default size as 1G; minimum 4Gi to be used
    requests:
      memory: 4Gi
      hugepages-1Gi: 4Gi

contrail_vrouter_telemetry_exporter:
  image: contrail-telemetry-exporter
  tag: *vrouter_tag
  pullPolicy: IfNotPresent

contrail_k8s_deployer:
  image: contrail-k8s-deployer
  tag: *vrouter_tag
  pullPolicy: IfNotPresent

contrail_k8s_crdloader:
  image: contrail-k8s-crdloader
  tag: *vrouter_tag
  pullPolicy: IfNotPresent

contrail_k8s_applier:
  image: contrail-k8s-applier
  tag: *vrouter_tag
  pullPolicy: IfNotPresent

busyBox:
  image: busybox
  tag: "latest"
  pullPolicy: IfNotPresent

vrouter_name: master

# uio driver will be vfio-pci or uio_pci_generic
vrouter_dpdk_uio_driver: "vfio-pci"

# MTU for all physical interfaces (all VF's and PF's)
mtu: "9000"

vrouter_log_path: "/var/log/jcnr/"

# Defines the log severity. Possible options: DEBUG, INFO, WARN, ERR
log_level: "INFO"

dpdkCommandAdditionalArgs: "--yield_option 0"

# Set ddp to true to enable Dynamic Device Personalization (DDP)
# It provides datapath optimization at NIC for traffic like GTPU, SCTP etc.
ddp: true

# vrouter fwd core mask
cpu_core_mask: "2,3"

# vrouter service thread mask
service_core_mask: ""

# vrouter control thread mask
dpdk_ctrl_thread_mask: ""

# dpdk_mem_per_socket: "1024"

# L3 disabled for switching mode
jcnr_mode: "l2_only"

# global Mac table size - We recommend leaving this at the default value
mac_table_size: "10240"

# timeout (seconds) for aging Mac table entries (S)
mac_table_ageout: 60

# parameters for vRouter livenessProbe
livenessProbe:
  initialDelaySeconds: 10
  periodSeconds: 20
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1

# parameters for vRouter startupProbe
startupProbe:
  initialDelaySeconds: 10
  periodSeconds: 20
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1

# setting this to true will restore the interfaces back to
# their original state in case vrouter pod crashes or restarts
restoreInterfaces: false

# tor side interface, expected all types of traffic
fabricInterface:
  - enp4s0f0vf0
  - bond0

# RU side interfaces, expected traffic is only management/control traffic
fabricWorkloadInterface:
  - enp4s0f1vf0

# bond interface configurations
bondInterfaceConfigs:
  - name: "bond0"
    mode: 1 # ACTIVE_BACKUP MODE
    slaveInterfaces:
      - "enp1s0f1"
      - "enp2s0f1"

# rate limit for broadcast/multicast traffic on fabric interfaces in bytes per second
fabricBMCastRateLimit: 0
jcnr-cni-specific values.yaml
This values.yaml file is specific to the jcnr-cni pod. The jcnr-cni-specific values.yaml file is located in the Juniper_Cloud_Native_Router_<release-number>/helm_charts/jcnr/charts/jcnr-cni directory. If any value you enter in this file conflicts with a value in the main values.yaml file, the value in the main values.yaml file takes precedence.
# Default values for jcnr.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
common:
  registry: svl-artifactory.juniper.net/
  repository: junos-docker-local/warthog/

crpdImage:
  image: crpd
  tag: "22.3R1.8"
  pullPolicy: IfNotPresent

jcnrCNIImage:
  image: jcnr-cni
  tag: "20220918-fadf886"
  pullPolicy: IfNotPresent

crpdConfigGeneratorImage:
  image: crpdconfig-generator
  tag: "v3"
  pullPolicy: IfNotPresent

busyBox:
  image: busybox
  tag: "latest"
  pullPolicy: IfNotPresent

# data plane default is dpdk for vrouter case, linux for kernel module
dataplane: dpdk

networkAttachmentDefinitionName: vswitch

crpd_log_path: "/var/log/jcnr/"

# Defines the log severity. Possible options: panic, fatal, error,
# warn or warning, info, debug, trace
log_level: "info"

# parameters for cRPD livenessProbe
livenessProbe:
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1

# parameters for cRPD startupProbe
startupProbe:
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1

crpdConfigs:
  interface_groups:
    fabricInterface: # TOR side interface, expected all types of traffic
      - bond0:
          interface_mode: trunk # interface mode is always trunk for fabricInterface
          vlan-id-list: [100, 200, 700] # vlan-id-lists
      - enp4s0f0vf0:
          interface_mode: trunk # interface mode is always trunk for fabricInterface
          vlan-id-list: [300, 500, 3001, 3002] # vlan-id-lists
      - enp4s0f0vf1:
          interface_mode: trunk # interface mode is always trunk for fabricInterface
          vlan-id-list: [3003, 3004, 3201-3250, 900] # vlan-id-lists
      - enp4s0f0vf2:
          interface_mode: trunk # interface mode is always trunk for fabricInterface
          vlan-id-list: [3251-3255] # vlan-id-lists
    fabricWorkloadInterface: # RU side interfaces, expected traffic is only management/control traffic
      - enp4s0f1vf0:
          interface_mode: access # interface mode is always access for fabricWorkloadInterface
          vlan-id-list: [700] # vlan-id-list must always be a single value for fabricWorkloadInterface
      - enp4s1f1vf0:
          interface_mode: access # interface mode is always access for fabricWorkloadInterface
          vlan-id-list: [900] # vlan-id-list must always be a single value for fabricWorkloadInterface
  routing_instances:
    - vswitch:
        instance-type: virtual-switch
nad-dpdk_trunk_vlan_3002.yaml
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: nad-vswitch-bd3002 spec: config: '{ "cniVersion":"0.4.0", "name": "nad-vswitch-bd3002", "capabilities":{"ips":true}, "plugins": [ { "type": "jcnr", "args": { "instanceName": "vswitch", "instanceType": "virtual-switch", "bridgeDomain": "bd3002", "bridgeVlanId": "3002", "dataplane":"dpdk", "mtu": "9000" }, "ipam": { "type": "static", "capabilities":{"ips":true}, "addresses":[ { "address":"2001:db8:3002::10.2.0.1/64", "gateway":"2001:db83002::10.2.0.254" }, { "address":"10.2.0.1/24", "gateway":"10.2.0.254" } ] }, "kubeConfig":"/etc/kubernetes/kubelet.conf" } ] }'
nad-kernel_access_vlan_3001.yaml
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pod1-vswitch-bd3001-1 spec: config: '{ "cniVersion":"0.4.0", "name": "pod1-vswitch-bd3001-1", "capabilities":{"ips":true}, "plugins": [ { "type": "jcnr", "args": { "instanceName": "vswitch", "instanceType": "virtual-switch", "bridgeDomain": "bd3001", "bridgeVlanId": "3001", "dataplane":"dpdk", "mtu": "9000", "interfaceType":"veth" }, "ipam": { "type": "static", "capabilities":{"ips":true}, "addresses":[ { "address":"2001:db8:3001::10.1.0.1/64", "gateway":"2001:db8:3001::10.1.0.254" }, { "address":"10.1.0.1/24", "gateway":"10.1.0.254" } ] }, "kubeConfig":"/etc/kubernetes/kubelet.conf" } ] }'
nad-odu-bd3003-sub.yaml
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: vswitch-bd3003-sub spec: config: '{ "cniVersion":"0.4.0", "name": "vswitch-bd3003-sub", "capabilities":{"ips":true}, "plugins": [ { "type": "jcnr", "args": { "instanceName": "vswitch", "instanceType": "virtual-switch", "bridgeDomain": "bd3003", "bridgeVlanId": "3003", "parentInterface":"net1", "interface":"net1.3003", "dataplane":"dpdk" }, "ipam": { "type": "static", "capabilities":{"ips":true}, "addresses":[ { "address":"10.3.0.1/24", "gateway":"10.3.0.254" }, { "address":"2001:db8:3003::10.3.0.1/120", "gateway":"2001:db8:3003::10.3.0.1" } ] }, "kubeConfig":"/etc/kubernetes/kubelet.conf" } ] }'
nad-odu-bd3004-sub.yaml
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: vswitch-bd3004-sub spec: config: '{ "cniVersion":"0.4.0", "name": "vswitch-bd3004-sub", "capabilities":{"ips":true}, "plugins": [ { "type": "jcnr", "args": { "instanceName": "vswitch", "instanceType": "virtual-switch", "bridgeDomain": "bd3004", "bridgeVlanId": "3004", "parentInterface":"net1", "interface":"net1.3004", "dataplane":"dpdk" }, "ipam": { "type": "static", "capabilities":{"ips":true}, "addresses":[ { "address":"30.4.0.1/24", "gateway":"30.4.0.254" }, { "address":"2001:db8:3004::10.4.0.1/120", "gateway":"2001:db8:3004::10.4.0.1" } ] }, "kubeConfig":"/etc/kubernetes/kubelet.conf" } ] }'
odu-virtio-subinterface.yaml
apiVersion: v1
kind: Pod
metadata:
  name: odu-subinterface-1
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name": "vswitch-bd3003-sub"
        },
        {
          "name": "vswitch-bd3004-sub"
        }
      ]
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - 5d7s39.englab.juniper.net
  containers:
    - name: odu-subinterface
      image: svl-artifactory.juniper.net/junos-docker-local/warthog/pktgen19116:subint
      imagePullPolicy: IfNotPresent
      securityContext:
        privileged: false
      resources:
        requests:
          memory: 2Gi
        limits:
          hugepages-1Gi: 2Gi
      env:
        - name: KUBERNETES_POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
      volumeMounts:
        - name: dpdk
          mountPath: /dpdk
          subPathExpr: $(KUBERNETES_POD_UID)
        - mountPath: /dev/hugepages
          name: hugepage
  volumes:
    - name: dpdk
      hostPath:
        path: /var/run/jcnr/containers
    - name: hugepage
      emptyDir:
        medium: HugePages
pod-dpdk-trunk-vlan3002.yaml
apiVersion: v1
kind: Pod
metadata:
  name: odu-trunk-1
  annotations:
    k8s.v1.cni.cncf.io/networks: nad-vswitch-bd3002
spec:
  containers:
    - name: odu-trunk
      image: svl-artifactory.juniper.net/junos-docker-local/warthog/pktgen19116:trunk
      imagePullPolicy: IfNotPresent
      securityContext:
        privileged: true
      resources:
        requests:
          memory: 2Gi
        limits:
          hugepages-1Gi: 2Gi
      env:
        - name: KUBERNETES_POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
      volumeMounts:
        - name: dpdk
          mountPath: /dpdk
          subPathExpr: $(KUBERNETES_POD_UID)
        - mountPath: /dev/hugepages
          name: hugepage
  volumes:
    - name: dpdk
      hostPath:
        path: /var/run/jcnr/containers
    - name: hugepage
      emptyDir:
        medium: HugePages
pod-kernel-access-vlan-3001.yaml
apiVersion: v1
kind: Pod
metadata:
  name: odu-kenel-pod-bd3001-1
  annotations:
    k8s.v1.cni.cncf.io/networks: pod1-vswitch-bd3001-1
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - 5d8s7.englab.juniper.net
  containers:
    - name: odu-kenel-pod-bd3001-1
      image: vinod-iperf3:latest
      imagePullPolicy: IfNotPresent
      command: ["/bin/bash","-c","sleep infinity"]
      securityContext:
        privileged: false
      env:
        - name: KUBERNETES_POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
      volumeMounts:
        - name: dpdk
          mountPath: /dpdk
          subPathExpr: $(KUBERNETES_POD_UID)
  volumes:
    - name: dpdk
      hostPath:
        path: /var/run/jcnr/containers
L3_nad-net1.yaml
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: net1 spec: config: '{ "cniVersion":"0.4.0", "name": "net1", "type": "jcnr", "args": { "vrfName": "net1", "vrfTarget": "1:11" }, "kubeConfig":"/etc/kubernetes/kubelet.conf" }'
l3_odu1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: L3-pktgen-odu1
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name": "net1",
          "interface":"net1",
          "cni-args": {
            "mac":"aa:bb:cc:dd:ee:51",
            "dataplane":"vrouter",
            "ipConfig":{
              "ipv4":{
                "address":"10.1.51.2/30",
                "gateway":"10.1.51.1",
                "routes":[
                  "10.1.51.0/30"
                ]
              },
              "ipv6":{
                "address":"2001:db8::10:1:51:2/126",
                "gateway":"2001:db8::10:1:51:1",
                "routes":[
                  "2001:db8::10:1:51:0/126"
                ]
              }
            }
          }
        },
        {
          "name": "net2",
          "interface":"net2",
          "cni-args": {
            "mac":"aa:bb:cc:dd:ee:52",
            "dataplane":"vrouter",
            "ipConfig":{
              "ipv4":{
                "address":"10.1.52.2/30",
                "gateway":"10.1.52.1",
                "routes":[
                  "10.1.52.0/30"
                ]
              },
              "ipv6":{
                "address":"2001:db8::10:1:52:2/126",
                "gateway":"2001:db8::10:1:52:1",
                "routes":[
                  "2001:db8::10:1:52:0/126"
                ]
              }
            }
          }
        }
      ]
spec:
  containers:
    - name: L3-pktgen-odu1
      image: svl-artifactory.juniper.net/blr-data-plane/dpdk-app/dpdk:21.11
      imagePullPolicy: IfNotPresent
      command: ["/bin/bash","-c","sleep infinity"]
      securityContext:
        privileged: false
      env:
        - name: KUBERNETES_POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
      resources:
        requests:
          memory: 4Gi
        limits:
          hugepages-1Gi: 4Gi
      volumeMounts:
        - name: dpdk
          mountPath: /dpdk
          subPathExpr: $(KUBERNETES_POD_UID)
        - name: hugepages
          mountPath: /hugepages
  volumes:
    - name: dpdk
      hostPath:
        path: /var/run/jcnr/containers
    - name: hugepages
      emptyDir:
        medium: HugePages