JCNR Use-Cases and Configuration Overview
SUMMARY Read this chapter to review configuration examples for various Juniper Cloud-Native Router use cases when deployed in the container network interface (CNI) mode.
The Juniper Cloud-Native Router can be deployed as a virtual switch or a transit router, either as a pure container network function (CNF) or as a container network interface (CNI). In the CNF mode, no application pods run on the node and the router only performs packet switching or forwarding through the various interfaces on the system. In the CNI mode, application pods attach to the cloud-native router using software-based network interfaces such as veth pairs or DPDK vhost-user interfaces. This chapter provides configuration examples for attaching different workload interface types to the cloud-native router CNI instance.
Configuration Example
The JCNR CNI is deployed as a secondary CNI, alongside Multus as the primary CNI, to create different types of secondary interfaces for the application pod. Multus uses a network attachment definition (NAD) file to configure a secondary interface for the application pod. The NAD specifies how to create the secondary interface, IP address allocation, the network instance, and more. A pod can have one or more NADs, typically one per pod interface. The config: field in the NAD file defines the JCNR CNI configuration. Here is the generic format of a NAD:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: <vrf-name>
spec:
  config: '{
    "cniVersion":"0.4.0",
    "name": "<vrf-name>",
    "plugins": [
      {
        "type": "jcnr",
        "args": {
          "key1":"value1",
          "key2":"value2",
          ....
        },
        "ipam": {
          "type": "<ipam-type>",
          ....
        },
        "kubeConfig":"/etc/kubernetes/kubelet.conf"
      }
    ]
  }'
| Key | Description |
|---|---|
| instanceName | The routing-instance name. |
| instanceType | One of: virtual-router (for non-VPN-related applications), vrf (Layer 3 VPN implementations), or virtual-switch (Layer 2 implementations). |
| interfaceType | Either "veth" or "virtio". |
| vlanId | A valid VLAN ID, "1-4095". |
| bridgeVlanId | A valid VLAN ID, "1-4095". |
| vlanIdList | A list of comma-separated VLAN IDs, for example "1, 5, 7, 10-20". |
| parentInterface | A valid interface name as it should appear in the pod. Child/sub-interfaces have parentInterface as their prefix, followed by ".". If parentInterface is specified, the sub-interface must be explicitly specified. |
| vrfTarget | The route target for the vrf routing instance. |
| bridgeDomain | The bridge domain under which the pod interface is attached in the virtual-switch instance. |
| type (ipam) | One of: static (assigns the same IP to all pods; to assign a unique IP per pod, define a unique NAD per pod per interface), host-local (a unique IP address per pod interface on the same host; IP addresses are not unique across two different nodes), or whereabouts (a unique IP address per pod across all nodes). |
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: vswitch-pod1-bd100 spec: config: '{ "cniVersion":"0.4.0", "name": "vswitch-pod1-bd100", "plugins": [ { "type": "jcnr", "args": { "instanceName": "vswitch", "instanceType": "virtual-switch", "interfaceType": "veth", "bridgeDomain": "bd100", "bridgeVlanId": "100" }, "ipam": { "type": "static", "addresses":[ { "address":"99.61.0.2/16", "gateway":"99.61.0.1" }, { "address":"1234::99.61.0.2/120", "gateway":"1234::99.61.0.1" } ] }, "kubeConfig":"/etc/kubernetes/kubelet.conf" } ] }'
Reference the NAD in the pod manifest using the k8s.v1.cni.cncf.io/networks annotation. For example:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  annotations:
    k8s.v1.cni.cncf.io/networks: vswitch-pod1-bd100
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - kind-worker
  containers:
  - name: pod1
    image: ubuntu:latest
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: false
    env:
    - name: KUBERNETES_POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
    volumeMounts:
    - name: dpdk
      mountPath: /dpdk
      subPathExpr: $(KUBERNETES_POD_UID)
  volumes:
  - name: dpdk
    hostPath:
      path: /var/run/jcnr/containers
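A minimal way to create the pod, assuming the manifest above is saved as pod1.yaml (the filename is an assumption):

# Create the pod from the manifest
kubectl apply -f pod1.yaml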
Once the pod is created, the DPDK interface details are written to the file /dpdk/dpdk-interfaces.json inside the application container for the DPDK application to consume. The file is also exported into the pod as a pod annotation.

When you create a pod for use with the cloud-native router, the Kubernetes component known as kubelet calls the Multus CNI to set up pod networking and interfaces. Multus reads the annotations section of the pod.yaml file to find the corresponding NAD. If a NAD points to jcnr as the CNI plug-in, Multus calls the JCNR-CNI to set up the pod interface. JCNR-CNI creates the interface as specified in the NAD, then generates and pushes a configuration into cRPD.
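As an illustration only, the configuration JCNR-CNI pushes for the vswitch example above might resemble the following Junos-style sketch. The group name cni matches the troubleshooting command at the end of this chapter, but the exact statements and the generated interface name (jvknet1-xyz here) are assumptions:

groups {
    cni {
        routing-instances {
            vswitch {
                instance-type virtual-switch;
                bridge-domains {
                    bd100 {
                        vlan-id 100;
                        /* pod-side interface created by JCNR-CNI; name assumed */
                        interface jvknet1-xyz;
                    }
                }
            }
        }
    }
}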
Troubleshooting
Pods may fail to come up for various reasons:
- Image not found
- CNI failed to add interfaces
- CNI failed to push the configuration into cRPD
- CNI failed to invoke vRouter REST APIs
- The NAD is invalid or undefined
The following commands are useful for troubleshooting pod issues:
# Check the pod status
kubectl get pods -A

# Check the pod state and CNI events
kubectl describe pod <pod-name>

# Check the pod logs
kubectl logs <pod-name>

# Check the net-attach-def
kubectl get net-attach-def <net-attach-def-name> -o yaml

# Check the CNI logs
tail -f /var/log/jcnr/jcnr-cni.log

# Check the cRPD configuration added by the CNI (on the cRPD CLI)
cli> show configuration groups cni
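Assuming the example pod1 above is running, you can also verify the result from the Kubernetes side. The interface names inside the pod are assigned by JCNR-CNI and will vary:

# List the network interfaces visible inside the pod
kubectl exec pod1 -- ls /sys/class/net

# View the pod annotations, which include the exported interface details
kubectl get pod pod1 -o jsonpath='{.metadata.annotations}'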