Deploy a KubeVirt-based VM

Cloud-Native Router supports vhost-user interfaces for DPDK-capable KubeVirt-based VMs. Read this topic to learn about deploying a KubeVirt-based VM with Cloud-Native Router.

KubeVirt lets you deploy, run, and manage virtual machines (VMs) within a Kubernetes cluster. Some workloads are not easily containerized, and KubeVirt enables pods and VM-based workloads to coexist in a common, shared environment. Cloud-Native Router supports networking for KubeVirt-based VMs as a secondary CNI, and you can create Layer 3 interfaces with L3 virtual routing and forwarding (VRF) instances.

By default, KubeVirt supports Linux bridge interfaces backed by veth pairs. Because veth-pair interfaces traverse the kernel network stack, they add processing overhead that limits throughput for DPDK-capable VMs. Cloud-Native Router can instead create vhost-user interfaces in a KubeVirt-based, DPDK-enabled VM to achieve high-performance packet processing.

Cloud-Native Router implements a network binding plugin with a sidecar container. The binary in the sidecar container creates vhost-user interfaces based on the VM specifications and updates the domain XML with the interface details.

Configuration Steps

Open-source Kubernetes v1.32 and later and Red Hat OpenShift Container Platform (RHOCP) v4.19 and later support this feature. Ensure KubeVirt v1.3.0 and the Multus CNI are installed in the Kubernetes cluster. For more details, see https://kubevirt.io/user-guide/cluster_admin/installation/.
  1. Enable KubeVirt support in the Cloud-Native Router helm chart. When enabled, this configuration registers the vhostuser network binding to the kubevirt resource in open-source Kubernetes and kubevirt-hyperconverged resource in RHOCP. You can either install Cloud-Native Router with KubeVirt settings enabled or upgrade the installation to enable it later.
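As a sketch, enabling the feature at install or upgrade time might look like the following; the release name, chart path, namespace, and the `kubevirt.enabled` value key are assumptions, so check your Cloud-Native Router helm chart's values.yaml for the actual names:

```shell
# Illustrative: enable KubeVirt support when installing or upgrading JCNR.
# The "kubevirt.enabled" key, release name, chart path, and namespace are
# assumptions; verify them against your JCNR helm chart version.
helm upgrade --install jcnr ./jcnr-chart -n jcnr --set kubevirt.enabled=true
```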
  2. Verify the KubeVirt resource is patched.
    1. View logs for the apply-jcnr-deployment pod in the contrail-deploy namespace:
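For example (the deployer pod name may carry a generated suffix in your cluster):

```shell
# List the pods in the deployer namespace to find the exact pod name,
# then view its logs to confirm the KubeVirt resource was patched.
kubectl get pods -n contrail-deploy
kubectl logs -n contrail-deploy apply-jcnr-deployment
```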
    2. You can also describe the resource and verify the network binding. An RHOCP example is provided below:
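For example, on RHOCP you can describe the kubevirt-hyperconverged resource and look for the vhostuser entry among the registered network bindings; the `openshift-cnv` namespace is the OpenShift Virtualization default and is shown as an assumption:

```shell
# RHOCP: check that the vhostuser binding is registered
oc describe hyperconverged kubevirt-hyperconverged -n openshift-cnv

# Open-source Kubernetes: inspect the kubevirt resource instead
kubectl describe kubevirt -A
```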

  3. Create and apply one or more NetworkAttachmentDefinition (NAD) manifests to define network instances net-red and net-yellow. You can also use any existing NAD for multiple VMs or pods.
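A minimal NAD sketch for net-red follows (net-yellow is analogous); the `jcnr` CNI type and args follow the usual Cloud-Native Router NAD pattern, while the namespace, vrfName, and vrfTarget values are illustrative:

```shell
kubectl apply -f - <<EOF
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: net-red
  namespace: kv1
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "net-red",
    "type": "jcnr",
    "args": {
      "vrfName": "red",
      "vrfTarget": "1:1"
    },
    "kubeConfig": "/etc/kubernetes/kubeconfig"
  }'
EOF
```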
  4. Create and apply a VM specification manifest with interface bindings and networks defined. The VM is configured with a primary interface and two secondary vhost-user interfaces in the net-red and net-yellow networks.
    Note: Bridge and veth-pair interface types are not recommended for secondary interfaces when deploying a KubeVirt-based VM with Cloud-Native Router. Only vhost-user (DPDK) interfaces are supported.
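An abbreviated VM manifest sketch follows. The `vhostuser` binding name matches the plugin registered in step 1; the VM name, namespace, hugepage size, and memory values are illustrative, and vhost-user interfaces require hugepage-backed guest memory:

```shell
kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: dpdkvm
  namespace: kv1
spec:
  running: true
  template:
    spec:
      domain:
        memory:
          hugepages:
            pageSize: 1Gi    # vhost-user needs hugepage-backed, shared memory
        devices:
          interfaces:
          - name: default
            masquerade: {}
          - name: red
            binding:
              name: vhostuser   # vhost-user binding plugin registered by JCNR
          - name: yellow
            binding:
              name: vhostuser
        resources:
          requests:
            memory: 2Gi
      networks:
      - name: default
        pod: {}
      - name: red
        multus:
          networkName: net-red
      - name: yellow
        multus:
          networkName: net-yellow
EOF
```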
  5. Verify the vhost-user interfaces are created for the VM.
    1. To access the VM, use the virtctl utility to expose a NodePort service for the VM. You can then SSH directly to the VM on the node port mapped to the target port.
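For example (the VM name `dpdkvm` and service name are illustrative; substitute a node IP and the allocated node port):

```shell
# Expose SSH on the running VM instance through a NodePort service
virtctl expose vmi dpdkvm -n kv1 --name dpdkvm-ssh --type NodePort --port 22

# Look up the allocated node port, then SSH to any cluster node on it
kubectl get svc dpdkvm-ssh -n kv1
ssh user@<node-ip> -p <node-port>
```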
    2. Verify the interfaces using the ip address command:
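For example, from the VM's shell:

```shell
# Inside the VM: list the interfaces and their addresses
ip address
```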

      Note that interfaces ens2 and ens3 are DPDK interfaces; however, the output does not display any information about them.

  6. Verify the interface binding in the domain XML of the VM.
    1. KubeVirt creates a special pod for each VM. List the pod in the kv1 namespace:
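For example:

```shell
# The VM runs inside a virt-launcher pod created by KubeVirt
kubectl get pods -n kv1
```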
    2. Verify the network interfaces are available in the pod:
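For example, using the virt-launcher pod name listed in the previous substep:

```shell
# List the network interfaces inside the virt-launcher pod
kubectl exec -n kv1 virt-launcher-dpdkvm-xpnvw -- ip address
```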

    3. List the containers in the pod:
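For example:

```shell
# Print the container names in the virt-launcher pod
kubectl get pod virt-launcher-dpdkvm-xpnvw -n kv1 \
  -o jsonpath='{.spec.containers[*].name}{"\n"}'
```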

    4. Navigate to the shell of the compute container:
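For example:

```shell
# Open an interactive shell in the compute container
kubectl exec -it virt-launcher-dpdkvm-xpnvw -n kv1 -c compute -- /bin/bash
```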

    5. List the running VMs. Take note of the domain ID.
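For example:

```shell
# Inside the compute container: list running domains and note the ID
virsh list
```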

    6. View the domain XML using the virsh dumpxml <domain_id> command. Notice the two vhostuser interfaces.
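For example, from the compute container; in the output, look for the `<interface type='vhostuser'>` elements, one per secondary network:

```shell
# Replace <domain_id> with the ID reported by "virsh list"
virsh dumpxml <domain_id>
```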

    7. You can check for binding plugin errors using the kubectl logs virt-launcher-dpdkvm-xpnvw -c hook-sidecar-0 -n kv1 command.