Deploy Kubevirt DPDK Dataplane Support for VMs
SUMMARY Cloud-Native Contrail® Networking™ (CN2) supports the deployment of Kubevirt with vRouter DPDK dataplane for high-performance Virtual Machine (VM) and container networking in Kubernetes.
Kubevirt Overview
Kubevirt is an open-source Kubernetes project that enables the management and scheduling of VM workloads alongside container workloads within a Kubernetes cluster. Kubevirt provides a method of running VMs in a Kubernetes-orchestrated environment.
Kubevirt provides the following additional functions to your Kubernetes cluster. These enhancements support Kubevirt in a cloud networking environment:
- Additional resource types, defined as Custom Resource Definitions (CRDs), on the Kubernetes API server.
- Controllers for the cluster-wide logic that supports the new resource types.
- Daemons for the node-specific logic that supports the new resource types.
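For example, after Kubevirt is installed, you can list the new resource types with kubectl get crd. The set includes, among others (a representative subset of CRD names from the upstream project):

kubevirts.kubevirt.io
virtualmachines.kubevirt.io
virtualmachineinstances.kubevirt.io
virtualmachineinstancemigrations.kubevirt.io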
Kubevirt creates and manages VirtualMachineInstance (VMI) objects. A VMI is an instantiation of a VM. In other words, when you create a VM, the VMI associated with that VM represents the unique running instance and state of that VM. VMIs enable you to stop a VM and start it again later with no change in data or state. If a VM fails, the VMI helps restore it.
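As a minimal sketch of this distinction, assuming a VM named vm-single-virtio in the contrail namespace (the name used in the examples later in this topic), the VM definition persists while its VMI exists only for a running VM:

# The VirtualMachine definition persists whether or not the VM is running.
kubectl get vm vm-single-virtio -n contrail
# The VirtualMachineInstance exists only while the VM is running.
kubectl get vmi vm-single-virtio -n contrail
# Stopping the VM deletes its VMI; starting it again creates a new VMI.
kubectl patch vm vm-single-virtio -n contrail --type merge -p '{"spec":{"running":false}}'
kubectl patch vm vm-single-virtio -n contrail --type merge -p '{"spec":{"running":true}}'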
Kubevirt and the DPDK vRouter
Kubevirt does not typically support user space networking for fast packet processing. In CN2, however, enhancements enable Kubevirt to support vhostuser interface types for VMs. A vhostuser interface is a virtual network interface that enables high-performance communication between a VM and a user-space application. These interfaces perform user space networking with the Data Plane Development Kit (DPDK) vRouter and give pods and Kubevirt VMs access to the increased performance and packet processing that the DPDK vRouter provides.
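In a VM spec, you declare a vhostuser interface the same way as other Kubevirt interface types. The following fragment, excerpted from the full examples later in this topic, pairs a vhostuser interface with a Multus network:

interfaces:
  - name: vhost-user-vn-blue
    vhostuser: {}
networks:
  - name: vhost-user-vn-blue
    multus:
      networkName: vn-blue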
The following are some of the benefits of the DPDK vRouter application:
- Packet processing occurs in user space and bypasses kernel space. This bypass increases packet-processing efficiency.
- Kernel interrupts and context switches do not occur because packets bypass kernel space. This bypass results in less CPU overhead and increased data throughput.
- DPDK enhances the forwarding plane of the vRouter in user space, increasing performance.
- DPDK lcores run in poll mode. Poll mode enables the lcores to process packets as soon as they arrive instead of waiting for an interrupt.
Kubevirt utilizes the DPDK vRouter as a high-performance networking solution for VMs running in Kubernetes. Instead of relying on default kernel-based networking, Kubevirt integrates with DPDK vRouter to achieve accelerated packet processing for the VMs.
Deploy Kubevirt
Prerequisites
You must have an active Kubernetes cluster and the ability to use the kubectl client in order to deploy Kubevirt.
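For example, you can confirm both prerequisites before you deploy (standard kubectl commands):

kubectl cluster-info
kubectl get nodes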
Pull Kubevirt Images and Deploy Kubevirt Using a Local Registry
See the following topic for information about how to deploy Kubevirt: Pull Kubevirt Images and Deploy Kubevirt Using a Local Registry.
These instructions are for the following Kubevirt releases:
- v0.58.0 (current)
- v0.48.0
Launch a VM Alongside a Container
With Kubevirt, launching and managing a VM in Kubernetes is similar to deploying a pod. You can create a VM object by using kubectl. After you create a VM object (with running: true in its spec, as in the examples below), that VM is active and running in your cluster.
Use the following high-level steps to launch a VM alongside a container:
- Create a virtual network.
- Launch a VM.
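For example, assuming you save the manifests from the following sections as vn-blue.yaml and vm.yaml (hypothetical file names), the sequence is:

kubectl apply -f vn-blue.yaml
kubectl apply -f vm.yaml
kubectl get vmi -n contrail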
Create a Virtual Network
The following NetworkAttachmentDefinition object is an example of a virtual network:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: vn-blue
  namespace: contrail
  annotations:
    juniper.net/networks: '{
      "ipamV4Subnet": "19.1.1.0/24"
    }'
  labels:
    vn: vn-blue-vn-green
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "nad-blue",
    "type": "contrail-k8s-cni"
  }'
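After you apply this manifest, you can confirm that the object exists (standard kubectl command). The juniper.net/networks annotation supplies the IPv4 subnet (19.1.1.0/24) that CN2 uses for the resulting virtual network:

kubectl get network-attachment-definitions -n contrail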
Launch a VM
The following VirtualMachine specs are examples of VirtualMachine instances with a varying number of interfaces:
For the latest Kubevirt API version information, refer to the official Kubevirt documentation: https://github.com/kubevirt/kubevirt.
- Single vhostuser interface VM:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-single-virtio
  namespace: contrail
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: vm-single-virtio
        app: vm-single-virtio-app
    spec:
      nodeSelector:
        master: master
      terminationGracePeriodSeconds: 30
      domain:
        cpu:
          sockets: 1
          cores: 8
          threads: 2
          #dedicatedCpuPlacement: true
        memory:
          hugepages:
            pageSize: "2Mi"
        resources:
          requests:
            memory: "512Mi"
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
            - name: vhost-user-vn-blue
              vhostuser: {}
          useVirtioTransitional: true
      networks:
        - name: default
          pod: {}
        - name: vhost-user-vn-blue
          multus:
            networkName: vn-blue
      volumes:
        - name: containerdisk
          containerDisk:
            image: <image>:<latest>
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
- Single vhostuser interface VM with CNI parameters:

Kubevirt also enables you to specify a network to use for a VM through the spec:networks and domain:interfaces fields. This functionality is limited because Kubevirt typically doesn't allow you to specify additional network parameters like "ips" or "cni-args". With Kubevirt running in a CN2 environment, however, you can define specific VM networks in a Kubevirt VM's annotations field.

Use the key k8s.v1.cni.cncf.io/networks, define your custom networks, and specify any additional parameters you want. The CN2 CNI resolves the custom network configuration by filling in missing fields. For example, if you specify a "cni-args" parameter in the VM annotations, the CN2 CNI attempts to merge the default network of the Kubevirt VM with the user-specified network, including the additional parameter fields.

Note the annotations field in the following example. Here, cni-args enables the health-check feature. The custom networks vn-green and vn-blue are merged with the VM's default network.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name": "vn-green",
          "namespace": "kubevirttest",
          "cni-args": {
            "core.juniper.net/health-check": "[{\"name\": \"bfd-hc\", \"namespace\": \"kubevirttest\"}]"
          }
        },
        {
          "name": "vn-blue",
          "namespace": "kubevirttest"
        }
      ]
  labels:
    special: vm-ubuntu
  name: vm-ubuntu-hc-1
  namespace: kubevirttest
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: vm-single-virtio
        app: vm-single-virtio-app
    spec:
      nodeSelector:
        master: master
      terminationGracePeriodSeconds: 30
      domain:
        cpu:
          sockets: 1
          cores: 8
          threads: 2
          #dedicatedCpuPlacement: true
        memory:
          hugepages:
            pageSize: "2Mi"
        resources:
          requests:
            memory: "512Mi"
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
            - name: vhost-user-vn-blue
              vhostuser: {}
          useVirtioTransitional: true
      networks:
        - name: default
          pod: {}
        - name: vhost-user-vn-blue
          multus:
            networkName: vn-blue
      volumes:
        - name: containerdisk
          containerDisk:
            image: <image>:<latest>
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
- Multi vhostuser interface VM:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-multi-virtio
  namespace: contrail
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: vm-multi-virtio
        app: vm-multi-virtio-app
    spec:
      nodeSelector:
        worker: worker
      terminationGracePeriodSeconds: 30
      domain:
        cpu:
          sockets: 1
          cores: 8
          threads: 2
          #dedicatedCpuPlacement: true
        memory:
          hugepages:
            pageSize: "2Mi"
        resources:
          requests:
            memory: "512Mi"
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
            - name: vhost-user-vn-blue
              vhostuser: {}
            - name: vhost-user-vn-green
              vhostuser: {}
          useVirtioTransitional: true
      networks:
        - name: default
          pod: {}
        - name: vhost-user-vn-blue
          multus:
            networkName: vn-blue
        - name: vhost-user-vn-green
          multus:
            networkName: vn-green
      volumes:
        - name: containerdisk
          containerDisk:
            image: <image>:<latest>
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
- Bridge/vhostuser interface VM:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-virtio-veth
  namespace: contrail
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: vm-virtio-veth
        app: vm-virtio-veth-app
    spec:
      nodeSelector:
        master: master
      terminationGracePeriodSeconds: 30
      domain:
        cpu:
          sockets: 1
          cores: 8
          threads: 2
          #dedicatedCpuPlacement: true
        memory:
          hugepages:
            pageSize: "2Mi"
        resources:
          requests:
            memory: "512Mi"
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
            - name: vhost-user-vn-blue
              vhostuser: {}
            - name: vhost-user-vn-green
              bridge: {}
          useVirtioTransitional: true
      networks:
        - name: default
          pod: {}
        - name: vhost-user-vn-blue
          multus:
            networkName: vn-blue
        - name: vhost-user-vn-green
          multus:
            networkName: vn-green
      volumes:
        - name: containerdisk
          containerDisk:
            image: <image>:<latest>
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=