Deploy Kubevirt DPDK Dataplane Support for VMs

SUMMARY Cloud-Native Contrail® Networking (CN2) supports the deployment of Kubevirt with the vRouter DPDK dataplane for high-performance Virtual Machine (VM) and container networking in Kubernetes.

Kubevirt Overview

Kubevirt is an open-source Kubernetes project that enables the management and scheduling of VM workloads alongside container workloads within a Kubernetes cluster. Kubevirt provides a method of running VMs in a Kubernetes-orchestrated environment.

Kubevirt provides the following additional functions to your Kubernetes cluster. These enhancements help support Kubevirt in a cloud networking environment:

  • Additional types of resources, defined as Custom Resource Definitions (CRDs), added to the Kubernetes API server.

  • Controllers for cluster-wide logic to support the new types of pods.

  • Daemons for node-specific logic to support the new types of pods.

Kubevirt creates and manages VirtualMachineInstance (VMI) objects. A VMI is an instantiation of a VM. In other words, when you create a VM, the VMI associated with that VM represents the unique instance or state of that VM. VMIs enable you to terminate a VM and initiate it again later with no change in data or state. If a VM fails, the VMI helps restore it.
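For example, after Kubevirt is installed you can confirm that these new resource types are registered and list the VMs and VMIs in your cluster with kubectl. This is a minimal sketch; vms and vmis are the short names that the Kubevirt CRDs register:

    # List the custom resource types that Kubevirt adds to the API server
    kubectl get crds | grep kubevirt.io

    # List VirtualMachine objects and their running VirtualMachineInstance objects
    kubectl get vms
    kubectl get vmis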

Kubevirt and the DPDK vRouter

Kubevirt does not typically support user space networking for fast packet processing. In CN2, however, enhancements enable Kubevirt to support vhostuser interface types for VMs. A vhostuser interface is a virtual network interface that enables high-performance communication between a VM and a user-space application. These interfaces perform user space networking with the Data Plane Development Kit (DPDK) vRouter, giving pods and Kubevirt VMs access to the increased performance and packet-processing throughput that the DPDK vRouter provides.

The following are some of the benefits of the DPDK vRouter application:

  • Packet processing occurs in user space and bypasses kernel space. This bypass increases packet-processing efficiency.

  • Kernel interrupts and context switches do not occur because packets bypass kernel space. This bypass results in less CPU overhead and increased data throughput.

  • DPDK enhances the forwarding plane of the vRouter in user space, increasing performance.

  • DPDK Lcores run in poll mode. Poll mode enables the Lcores to process packets as soon as they arrive, rather than waiting for kernel interrupts.

Kubevirt utilizes the DPDK vRouter as a high-performance networking solution for VMs running in Kubernetes. Instead of relying on default kernel-based networking, Kubevirt integrates with DPDK vRouter to achieve accelerated packet processing for the VMs.

Deploy Kubevirt

Prerequisites

To deploy Kubevirt, you must have an active Kubernetes cluster and access to the kubectl client.
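You can verify both prerequisites from the host where you run kubectl. This is a minimal sketch:

    # Confirm that the cluster is reachable and that the nodes are Ready
    kubectl get nodes

    # Confirm that the kubectl client and the API server versions are compatible
    kubectl version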

Pull Kubevirt Images and Deploy Kubevirt Using a Local Registry

See the following topic for information about how to deploy Kubevirt: Pull Kubevirt Images and Deploy Kubevirt Using a Local Registry.

Note:

These instructions are for the following Kubevirt releases:

  • v0.58.0 (current)

  • v0.48.0
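The linked topic describes how to mirror the Kubevirt images into a local registry. At a high level, the deployment itself follows the standard upstream Kubevirt flow: apply the operator manifest, apply the Kubevirt custom resource, and wait for the control plane to become available. The manifest file names below are the upstream release names and are shown only as an illustration; use the manifests that match your mirrored release and registry:

    # Deploy the Kubevirt operator, then the Kubevirt custom resource
    kubectl apply -f kubevirt-operator.yaml
    kubectl apply -f kubevirt-cr.yaml

    # Wait until the Kubevirt control plane reports Available
    kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m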

Launch a VM Alongside a Container

With Kubevirt, launching and managing a VM in Kubernetes is similar to deploying a pod. You can create a VM object using kubectl. After you create a VM object, that VM is active and running in your cluster.

Use the following high-level steps to launch a VM alongside a container:

  1. Create a VirtualNetwork.

  2. Launch a VM.
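In kubectl terms, both steps are ordinary manifest applies. The file names here are hypothetical placeholders for the manifests shown in the next two sections:

    # Step 1: create the virtual network (NetworkAttachmentDefinition)
    kubectl apply -f vn-blue-nad.yaml

    # Step 2: launch the VM that attaches to it
    kubectl apply -f vm-vhostuser.yaml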

Create a Virtual Network

The following NetworkAttachmentDefinition object is an example of a virtual network:
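In this sketch, the network name vn-blue, the namespace, and the IPAM subnet are placeholder values; the juniper.net/networks annotation and the contrail-k8s-cni plugin type follow CN2 conventions and can vary by release:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: vn-blue
      namespace: default
      annotations:
        # CN2-specific annotation that defines the subnet for this virtual network
        juniper.net/networks: '{"ipamV4Subnet": "10.10.10.0/24"}'
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "vn-blue",
        "type": "contrail-k8s-cni"
      }'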

Launch a VM

The following VirtualMachine specs are examples of VirtualMachine instances with a varying number of interfaces:

Note:

For the latest Kubevirt API version information, refer to the official Kubevirt documentation for the latest release: https://github.com/kubevirt/kubevirt.

  • Single vhostuser interface VM:
  • Single vhostuser interface VM with CNI parameters:

    Kubevirt also enables you to specify a network to use for a VM through the spec:network and domain:interfaces fields. This functionality is limited because Kubevirt typically doesn't allow you to specify additional network parameters such as "ips" or "cni-args". With Kubevirt running in a CN2 environment, however, you can define specific VM networks in a Kubevirt VM's annotations field.

    Use the k8s.v1.cni.cncf.io/networks key to define your custom networks and to specify any additional parameters you need. The CN2 CNI resolves the custom network configuration by filling in missing fields.

    For example, if you specify a "cni-args" parameter in the VM's annotations, the CN2 CNI attempts to merge the default network of the Kubevirt VM with the user-specified networks, including the additional parameter fields.

    Note the annotations field in the sketch that follows this list. In that example, cni-args enables the health-check feature, and the custom networks vn-green and vn-blue are merged with the VM's default network.

  • Multi vhostuser interface:
  • Bridge/vhostuser interface VM:
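The following is a minimal sketch of the single vhostuser interface VM with CNI parameters, assuming a CN2-patched Kubevirt build that accepts the vhostuser interface type. The VM name, image, CPU and memory sizes, and the health-check value in cni-args are illustrative placeholders, and the exact vhostuser field name can vary by CN2 release:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-vhostuser-cni
    spec:
      running: true
      template:
        metadata:
          annotations:
            # Custom networks and extra CNI parameters that the CN2 CNI
            # merges with the VM's default network
            k8s.v1.cni.cncf.io/networks: |
              [
                {"name": "vn-blue", "cni-args": {"health-check": "enable"}},
                {"name": "vn-green", "cni-args": {"health-check": "enable"}}
              ]
        spec:
          domain:
            cpu:
              cores: 2
            resources:
              requests:
                memory: 2Gi
            memory:
              # vhostuser interfaces require hugepage-backed guest memory
              hugepages:
                pageSize: 1Gi
            devices:
              interfaces:
                # vhostuser binding; this field assumes the CN2-patched Kubevirt
                - name: default
                  vhostuser: {}
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
          networks:
            - name: default
              pod: {}
          volumes:
            - name: rootdisk
              containerDisk:
                # Placeholder image; substitute your own VM image
                image: quay.io/kubevirt/fedora-cloud-container-disk-demo

After you apply a manifest like this one, kubectl get vmis shows the corresponding VirtualMachineInstance once the VM is scheduled and running.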