Deploy a KubeVirt-based VM
Cloud-Native Router supports vhost-user interfaces for DPDK-capable KubeVirt-based VMs. Read this topic to learn about deploying a KubeVirt-based VM with Cloud-Native Router.
You can run, deploy, and manage virtual machines (VMs) within a Kubernetes cluster using KubeVirt. Some workloads are not easily containerized, and KubeVirt enables pods and VM-based workloads to coexist in a common, shared environment. Cloud-Native Router supports networking for KubeVirt-based VMs as a secondary CNI, and you can create Layer 3 interfaces with Layer 3 virtual routing and forwarding (VRF) instances.
By default, KubeVirt supports Linux bridge interfaces using veth pairs. Veth pair interfaces can cause high processing latency for VMs with DPDK capabilities. Cloud-Native Router can instead create vhost-user interfaces in a KubeVirt-based, DPDK-enabled VM to achieve high-performance packet processing.
Cloud-Native Router implements a network binding plugin with a sidecar container. The binary in the sidecar container creates vhost-user interfaces based on the VM specifications and updates the domain XML with the interface details.
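To illustrate the kind of work the sidecar binary performs, the sketch below appends a vhost-user `<interface>` element to a libvirt domain XML document. This is an illustrative assumption about the mechanism, not the Cloud-Native Router plugin code; the function name and inputs are hypothetical.

```python
# Hypothetical sketch: the sort of domain-XML mutation a network binding
# sidecar performs -- append a vhost-user <interface> under <devices>.
# Not Cloud-Native Router source code; element layout mirrors the
# `virsh dumpxml` output shown later in this topic.
import xml.etree.ElementTree as ET

def add_vhostuser_interface(domain_xml: str, mac: str, sock_path: str, alias: str) -> str:
    root = ET.fromstring(domain_xml)
    devices = root.find("devices")
    iface = ET.SubElement(devices, "interface", {"type": "vhostuser"})
    ET.SubElement(iface, "mac", {"address": mac})
    ET.SubElement(iface, "source", {"type": "unix", "path": sock_path, "mode": "server"})
    ET.SubElement(iface, "model", {"type": "virtio-transitional"})
    ET.SubElement(iface, "driver", {"name": "vhost"})
    ET.SubElement(iface, "alias", {"name": alias})
    return ET.tostring(root, encoding="unicode")

# Minimal domain document for demonstration purposes only.
updated = add_vhostuser_interface(
    "<domain type='kvm'><devices/></domain>",
    "02:49:27:00:00:8f",
    "/var/run/kubevirt-hooks/vhost-podc4fb1146cc8.sock",
    "ua-vhost-user-vn-net-red",
)
```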
Configuration Steps
- Enable KubeVirt support in the Cloud-Native Router Helm chart. When enabled, this configuration registers the vhostuser network binding to the kubevirt resource in open-source Kubernetes and to the kubevirt-hyperconverged resource in RHOCP. You can either install Cloud-Native Router with the KubeVirt settings enabled or upgrade an existing installation to enable them later.

  ```
  # Specific to KubeVirt deployments; KubeVirt must be installed in the cluster
  kubevirt:
    repository: atom-docker/cn2/
    tag: v1
  ```

  ```
  # helm install jcnr .
  ```
- Verify the KubeVirt resource is patched.

  - View logs for the apply-jcnr-deployment pod in the contrail-deploy namespace:

    ```
    applier time="2025-11-18T18:52:21Z" level=info msg="Detected hco.kubevirt.io API group (OCP platform)"
    applier time="2025-11-18T18:52:21Z" level=info msg="Kubevirt support validated for platform: OCP"
    applier time="2025-11-18T18:52:21Z" level=info msg="Kubernetes version: v1.31.6"
    applier time="2025-11-18T18:52:21Z" level=info msg="Applying OCP patch with image: s-artifactory.juniper.net/atom-docker/cn2/jcnr-vhostuser-netbinding:v1"
    applier time="2025-11-18T18:52:21Z" level=info msg="Patching HyperConverged resource to register vhostuser network binding"
    applier time="2025-11-18T18:52:21Z" level=info msg="Successfully applied kubevirt patch for platform: OCP"
    ```
You can also describe the resource and verify the network binding. An RHOCP example is provided below:
    ```
    # kubectl describe hyperconvergeds kubevirt-hyperconverged -n openshift-cnv
    Name:         kubevirt-hyperconverged
    Namespace:    openshift-cnv
    Labels:       app=kubevirt-hyperconverged
    Annotations:  deployOVS: false
    API Version:  hco.kubevirt.io/v1beta1
    Kind:         HyperConverged
    Metadata:
      Creation Timestamp:  2025-04-07T23:56:28Z
      Finalizers:
        kubevirt.io/hyperconverged
      Generation:        54
      Resource Version:  179755209
      UID:               0245a746-bfae-4760-a1b5-c7c1501ced56
    Spec:
      Cert Config:
        Ca:
          Duration:      48h0m0s
          Renew Before:  24h0m0s
        Server:
          Duration:      24h0m0s
          Renew Before:  12h0m0s
      ...<trimmed>
      Network Binding:
        Vhostuser:
          Downward API:   device-info
          Sidecar Image:  s-artifactory.juniper.net/atom-docker/cn2/jcnr-vhostuser-netbinding:v1
          Resource Requirements:
      ...<trimmed>
    ```
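If you retrieve the resource as JSON instead (for example, `kubectl get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json`), you can check for the binding programmatically. The sketch below is an assumption for illustration: the camelCase field names are inferred from the describe output above, and the sample dict mirrors only the relevant fragment.

```python
# Hypothetical helper: confirm the vhostuser network binding is registered
# in a HyperConverged object's JSON. Field names (networkBinding, vhostuser,
# sidecarImage) are inferred from the describe output in this topic.
def vhostuser_binding_registered(hco: dict) -> bool:
    binding = hco.get("spec", {}).get("networkBinding", {}).get("vhostuser", {})
    return "sidecarImage" in binding

# Sample fragment mirroring the describe output above.
sample = {
    "spec": {
        "networkBinding": {
            "vhostuser": {
                "downwardAPI": "device-info",
                "sidecarImage": "s-artifactory.juniper.net/atom-docker/cn2/jcnr-vhostuser-netbinding:v1",
            }
        }
    }
}
print(vhostuser_binding_registered(sample))  # → True
```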
- Create and apply one or more NetworkAttachmentDefinition (NAD) manifests to define network instances net-red and net-yellow. You can also use any existing NAD for multiple VMs or pods.

  ```
  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  metadata:
    name: net-red
    namespace: kv1
  spec:
    config: '{
      "cniVersion":"0.4.0",
      "name": "net-red",
      "plugins": [
        {
          "type": "jcnr",
          "args": {
            "dataplane":"dpdk",
            "instanceName": "net-red",
            "instanceType": "vrf",
            "vrfTarget":"2:11"
          },
          "ipam": {
            "type": "host-local",
            "capabilities":{"ips":true},
            "ranges": [
              [ { "subnet": "10.11.0.0/24", "gateway": "10.11.0.254" } ],
              [ { "subnet": "2001:db8:10.11.0.0/120", "gateway": "2001:db8:10.11.0.254" } ]
            ],
            "routes": [
              { "dst": "0.0.0.0/0" },
              { "dst": "30.0.0.0/8", "gw": "10.11.0.254" },
              { "dst": "21.0.0.0/8", "gw": "10.11.0.254" },
              { "dst": "3ffe:ffff:0:01ff::1/64" }
            ]
          },
          "kubeConfig":"/var/home/core/kubeconfig"
        }
      ]
    }'
  ```

  ```
  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  metadata:
    name: net-yellow
    namespace: kv1
  spec:
    config: '{
      "cniVersion":"0.4.0",
      "name": "net-yellow",
      "plugins": [
        {
          "type": "jcnr",
          "args": {
            "dataplane":"dpdk",
            "instanceName": "net-yellow",
            "instanceType": "vrf",
            "vrfTarget":"2:12"
          },
          "ipam": {
            "type": "host-local",
            "capabilities":{"ips":true},
            "ranges": [
              [ { "subnet": "10.12.0.0/24", "gateway": "10.12.0.254" } ],
              [ { "subnet": "2001:db8:10.12.0.0/120", "gateway": "2001:db8:10.12.0.254" } ]
            ],
            "routes": [
              { "dst": "0.0.0.0/0" },
              { "dst": "30.0.0.0/8", "gw": "10.12.0.254" },
              { "dst": "21.0.0.0/8", "gw": "10.12.0.254" },
              { "dst": "3ffe:ffff:0:01ff::1/64" }
            ]
          },
          "kubeConfig":"/var/home/core/kubeconfig"
        }
      ]
    }'
  ```

  ```
  # kubectl apply -f net-red-nad.yaml
  # kubectl apply -f net-yellow-nad.yaml
  ```
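Because the NAD embeds its CNI configuration as a JSON string, a malformed quote or bracket surfaces only at pod attach time. A small pre-apply sanity check (an assumption for illustration, not a Cloud-Native Router tool) can parse the inner config and confirm the jcnr plugin and its IPAM ranges are well formed:

```python
# Hypothetical pre-apply check: parse a NAD's inner config string, verify
# the jcnr plugin is present with a VRF instance type, and return the
# host-local IPAM subnets for inspection.
import json

def check_nad_config(config: str) -> list[str]:
    cfg = json.loads(config)  # raises ValueError on malformed JSON
    jcnr = next(p for p in cfg["plugins"] if p["type"] == "jcnr")
    assert jcnr["args"]["instanceType"] == "vrf"
    # Flatten the nested host-local ranges into a list of subnets.
    return [r["subnet"] for group in jcnr["ipam"]["ranges"] for r in group]

# Abbreviated copy of the net-red config from the manifest above.
config = '''{
  "cniVersion": "0.4.0",
  "name": "net-red",
  "plugins": [{
    "type": "jcnr",
    "args": {"dataplane": "dpdk", "instanceName": "net-red",
             "instanceType": "vrf", "vrfTarget": "2:11"},
    "ipam": {"type": "host-local",
             "ranges": [[{"subnet": "10.11.0.0/24", "gateway": "10.11.0.254"}]]}
  }]
}'''
print(check_nad_config(config))  # → ['10.11.0.0/24']
```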
- Create and apply a VM specification manifest with interface bindings and networks defined. The VM is configured with a primary interface and two secondary vhost-user interfaces in the net-red and net-yellow networks.

  ```
  apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachine
  metadata:
    name: dpdkvm
    namespace: kv1
  spec:
    running: true
    template:
      metadata:
        labels:
          kubevirt.io/size: small
          kubevirt.io/domain: vm-virtio-veth
          app: vm-1
      spec:
        nodeSelector:
          kubernetes.io/hostname: node2
        terminationGracePeriodSeconds: 30
        domain:
          cpu:
            sockets: 1
            cores: 4
            threads: 2
            #dedicatedCpuPlacement: true
          memory:
            hugepages:
              pageSize: "1Gi"
          resources:
            requests:
              memory: "8Gi"
          devices:
            disks:
              - name: containerdisk
                disk:
                  bus: virtio
              - name: cloudinitdisk
                disk:
                  bus: virtio
            interfaces:
              - name: default
                bridge: {}
              - name: vhost-user-vn-net-red
                binding:
                  name: vhostuser
              - name: vhost-user-vn-net-yellow
                binding:
                  name: vhostuser
            useVirtioTransitional: true
        networks:
          - name: default
            pod: {}
          - name: vhost-user-vn-net-red
            multus:
              networkName: net-red
          - name: vhost-user-vn-net-yellow
            multus:
              networkName: net-yellow
        volumes:
          - name: containerdisk
            containerDisk:
              image: s-artifactory.juniper.net/atom-docker/dpdk-pktgen/vmdisks/dpdk-pktgen-auto:latest
          - downwardAPI:
              fields:
                - path: "labels"
                  fieldRef:
                    fieldPath: metadata.labels
                - path: "annotations"
                  fieldRef:
                    fieldPath: metadata.annotations
            name: podinfo
          - name: cloudinitdisk
            cloudInitNoCloud:
              #userDataBase64: SGkuXG4=
              userData: |
                #cloud-config
                runcmd:
                  - sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="default_hugepagesz=1GB hugepagesz=1G hugepages=4 intel_iommu=on iommu=pt isolcpus=2-7"/' /etc/default/grub
                  - grub2-mkconfig -o /boot/grub2/grub.cfg
  ```

  ```
  # kubectl apply -f dpdkvm.yaml
  ```
  Note: Bridge and veth-pair interface types are not recommended for secondary interfaces when deploying a KubeVirt-based VM with Cloud-Native Router. Only vhost-user (DPDK) interfaces are supported.
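A common mistake in such manifests is an interface whose name has no matching entry under networks, or a secondary interface left on the default bridge binding. The sketch below is a hypothetical pre-flight check (not part of KubeVirt or Cloud-Native Router) over the two lists from the manifest above:

```python
# Hypothetical check: every interface name must match a network entry, and
# every secondary (non-default) interface must use the vhostuser binding.
def check_vm_interfaces(interfaces: list[dict], networks: list[dict]) -> bool:
    net_names = {n["name"] for n in networks}
    for iface in interfaces:
        if iface["name"] not in net_names:
            return False  # interface has no matching network entry
        if iface["name"] != "default" and iface.get("binding", {}).get("name") != "vhostuser":
            return False  # secondary interface is not vhost-user
    return True

# The interfaces and networks lists from the dpdkvm manifest above.
interfaces = [
    {"name": "default", "bridge": {}},
    {"name": "vhost-user-vn-net-red", "binding": {"name": "vhostuser"}},
    {"name": "vhost-user-vn-net-yellow", "binding": {"name": "vhostuser"}},
]
networks = [
    {"name": "default", "pod": {}},
    {"name": "vhost-user-vn-net-red", "multus": {"networkName": "net-red"}},
    {"name": "vhost-user-vn-net-yellow", "multus": {"networkName": "net-yellow"}},
]
print(check_vm_interfaces(interfaces, networks))  # → True
```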
- Verify the vhost-user interfaces are created for the VM.
  - To access the VM, use the virtctl utility to expose a NodePort service for the VM. You can then ssh directly to the VM on the node port mapped to the target port.

    ```
    # virtctl expose vmi -n kv1 dpdkvm --name=dpdkvm-ssh-service --port=2022 --target-port=22 --type=NodePort
    ```

    ```
    # kubectl get services -n kv1
    NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    dpdkvm-ssh-service   NodePort   172.30.17.159   <none>        2022:31222/TCP   3d20h
    ```

    ```
    # ssh root@node2 -p 31222
    root@node2's password:
    Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-122-generic x86_64)

     * Documentation:  https://help.ubuntu.com
     * Management:     https://landscape.canonical.com
     * Support:        https://ubuntu.com/advantage

      System information as of Tue Nov 18 18:59:07 UTC 2025

      System load:  0.13               Processes:             200
      Usage of /:   10.7% of 38.58GB   Users logged in:       0
      Memory usage: 4%                 IPv4 address for ens1: 10.129.1.206
      Swap usage:   0%

     * Super-optimized for small spaces - read how we shrank the memory
       footprint of MicroK8s to make it the smallest full K8s around.
       https://ubuntu.com/blog/microk8s-memory-optimisation

    Last login: Fri Jul 22 20:32:33 2022 from 10.107.21.196
    ```
    Verify the interfaces using the ip a command:

    ```
    root@dpdkvm:~# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000
        link/ether 0a:58:0a:81:01:ce brd ff:ff:ff:ff:ff:ff
        inet 10.129.1.206/23 brd 10.129.1.255 scope global dynamic ens1
           valid_lft 86313365sec preferred_lft 86313365sec
        inet6 fe80::858:aff:fe81:1ce/64 scope link
           valid_lft forever preferred_lft forever
    3: ens2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 02:49:27:00:00:8f brd ff:ff:ff:ff:ff:ff
    4: ens3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 02:49:27:00:00:90 brd ff:ff:ff:ff:ff:ff
    ```

    Note that interfaces ens2 and ens3 are the DPDK interfaces; however, the output does not display any address information for them.
- Verify the interface binding in the domain XML of the VM.
  - KubeVirt creates a special pod for each VM. List the pods in the kv1 namespace:

    ```
    # kubectl get pods -n kv1
    NAME                         READY   STATUS    RESTARTS   AGE
    virt-launcher-dpdkvm-xpnvw   3/3     Running   0          26m
    ```
Verify the network interfaces are available in the pod:
    ```
    # kubectl describe pod virt-launcher-dpdkvm-xpnvw -n kv1
    Name:             virt-launcher-dpdkvm-xpnvw
    Namespace:        kv1
    Priority:         0
    Service Account:  default
    Node:             node2/10.87.70.71
    Start Time:       Tue, 18 Nov 2025 18:55:47 +0000
    Labels:           app=vm-1
                      kubevirt.io=virt-launcher
                      kubevirt.io/created-by=a50ed4b4-9d2a-43be-9aab-2c2049e27405
                      kubevirt.io/domain=vm-virtio-veth
                      kubevirt.io/nodeName=node2
                      kubevirt.io/size=small
                      vm.kubevirt.io/name=dpdkvm
    Annotations:      descheduler.alpha.kubernetes.io/request-evict-only:
                      jcnr.juniper.net/dpdk-interfaces:
                        [ { "name": "podc4fb1146cc8", "vhost-adaptor-path": "/dpdk/vhost-podc4fb1146cc8.sock", "vhost-adaptor-mode": "client", "ipv4-address": "10.11.0.1/24", "ipv6-address": "2001:db8:7a0b:1/120", "mac-address": "02:49:27:00:00:8f" },
                          { "name": "podbe2ca98ba59", "vhost-adaptor-path": "/dpdk/vhost-podbe2ca98ba59.sock", "vhost-adaptor-mode": "client", "ipv4-address": "10.12.0.1/24", "ipv6-address": "2001:db8:7a0c:1/120", "mac-address": "02:49:27:00:00:90" } ]
                      k8s.ovn.org/pod-networks:
                        {"default":{"ip_addresses":["10.129.1.206/23"],"mac_address":"0a:58:0a:81:01:ce","gateway_ips":["10.129.0.1"],"routes":[{"dest":"10.128.0....
                      k8s.v1.cni.cncf.io/network-status:
                        [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.129.1.206" ], "mac": "0a:58:0a:81:01:ce", "default": true, "dns": {} },
                         { "name": "kv1/net-red", "interface": "podc4fb1146cc8", "ips": [ "10.11.0.1", "2001:db8:7a0b:1" ], "mac": "02:49:27:00:00:8f", "dns": {}, "gateway": [ "10.11.0.254" ] },
                         { "name": "kv1/net-yellow", "interface": "podbe2ca98ba59", "ips": [ "10.12.0.1", "2001:db8:7a0c:1" ], "mac": "02:49:27:00:00:90", "dns": {}, "gateway": [ "10.12.0.254" ] }]
                      k8s.v1.cni.cncf.io/networks:
                        [{"name":"net-red","namespace":"kv1","mac":"02:49:27:00:00:8f","interface":"podc4fb1146cc8"},{"name":"net-yellow","namespace":"kv1...
                      kubectl.kubernetes.io/default-container: compute
                      kubevirt.io/domain: dpdkvm
                      kubevirt.io/migrationTransportUnix: true
                      kubevirt.io/network-info:
                        [{"name":"podc4fb1146cc8","mac":"02:49:27:00:00:8f","networkName":"net-red","intfType":"vhostuser","mode":"server","path":"vhost-podc4fb1146c...
                      kubevirt.io/vm-generation: 1
                      openshift.io/scc: kubevirt-controller
                      post.hook.backup.velero.io/command: ["/usr/bin/virt-freezer", "--unfreeze", "--name", "dpdkvm", "--namespace", "kv1"]
                      post.hook.backup.velero.io/container: compute
                      pre.hook.backup.velero.io/command: ["/usr/bin/virt-freezer", "--freeze", "--name", "dpdkvm", "--namespace", "kv1"]
                      pre.hook.backup.velero.io/container: compute
                      seccomp.security.alpha.kubernetes.io/pod: localhost/kubevirt/kubevirt.json
    Status:           Running
    ```

    List the containers in the pod:
    ```
    # kubectl top pod virt-launcher-dpdkvm-xpnvw --containers -n kv1
    POD                          NAME                  CPU(cores)   MEMORY(bytes)
    virt-launcher-dpdkvm-xpnvw   compute               1000m        113Mi
    virt-launcher-dpdkvm-xpnvw   hook-sidecar-0        0m           12Mi
    virt-launcher-dpdkvm-xpnvw   volumecontainerdisk   1m           0Mi
    ```
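The jcnr.juniper.net/dpdk-interfaces annotation shown in the pod description above is a JSON list, so it can also be inspected programmatically rather than read by eye. A short sketch (the sample data is copied from the describe output above):

```python
# Parse the jcnr.juniper.net/dpdk-interfaces pod annotation and list each
# vhost-user interface's name, adaptor socket path, and mode.
import json

# Sample annotation value, copied from the virt-launcher pod description.
annotation = '''[
  {"name": "podc4fb1146cc8", "vhost-adaptor-path": "/dpdk/vhost-podc4fb1146cc8.sock",
   "vhost-adaptor-mode": "client", "ipv4-address": "10.11.0.1/24",
   "mac-address": "02:49:27:00:00:8f"},
  {"name": "podbe2ca98ba59", "vhost-adaptor-path": "/dpdk/vhost-podbe2ca98ba59.sock",
   "vhost-adaptor-mode": "client", "ipv4-address": "10.12.0.1/24",
   "mac-address": "02:49:27:00:00:90"}
]'''

for intf in json.loads(annotation):
    print(intf["name"], intf["vhost-adaptor-path"], intf["vhost-adaptor-mode"])
```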
Navigate to the shell of the compute container:
    ```
    # kubectl exec -it virt-launcher-dpdkvm-xpnvw -n kv1 -c compute -- bash
    ```
List the running VMs. Take note of the domain ID.
    ```
    bash-5.1$ virsh list
     Id   Name         State
    -------------------------------------
     1    kv1_dpdkvm   running
    ```
    View the domain XML using the virsh dumpxml <domain_id> command. Notice the two vhostuser interfaces.

    ```
    bash-5.1$ virsh dumpxml 1
        <feature policy='require' name='vmx-exit-load-efer'/>
        <feature policy='require' name='vmx-exit-save-preemption-timer'/>
        <feature policy='disable' name='vmx-exit-clear-bndcfgs'/>
        <feature policy='require' name='vmx-entry-noload-debugctl'/>
        <feature policy='require' name='vmx-entry-ia32e-mode'/>
        <feature policy='require' name='vmx-entry-load-perf-global-ctrl'/>
        <feature policy='require' name='vmx-entry-load-pat'/>
        <feature policy='require' name='vmx-entry-load-efer'/>
        <feature policy='disable' name='vmx-entry-load-bndcfgs'/>
        <feature policy='require' name='vmx-eptp-switching'/>
        <feature policy='disable' name='hle'/>
        <feature policy='disable' name='rtm'/>
        <feature policy='disable' name='mpx'/>
        ...<trimmed>
        <interface type='ethernet'>
          <mac address='0a:58:0a:81:01:ce'/>
          <target dev='tap0' managed='no'/>
          <model type='virtio-transitional'/>
          <mtu size='1400'/>
          <alias name='ua-default'/>
          <rom enabled='no'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
        </interface>
        <interface type='vhostuser'>
          <mac address='02:49:27:00:00:8f'/>
          <source type='unix' path='/var/run/kubevirt-hooks/vhost-podc4fb1146cc8.sock' mode='server'/>
          <target dev='vhost-podc4fb1146cc8.sock'/>
          <model type='virtio-transitional'/>
          <driver name='vhost'/>
          <alias name='ua-vhost-user-vn-net-red'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
        </interface>
        <interface type='vhostuser'>
          <mac address='02:49:27:00:00:90'/>
          <source type='unix' path='/var/run/kubevirt-hooks/vhost-podbe2ca98ba59.sock' mode='server'/>
          <target dev='vhost-podbe2ca98ba59.sock'/>
          <model type='virtio-transitional'/>
          <driver name='vhost'/>
          <alias name='ua-vhost-user-vn-net-yellow'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
        </interface>
        <serial type='unix'>
        ...<trimmed>
    ```

    You can check for plugin errors using the kubectl logs virt-launcher-dpdkvm-xpnvw -c hook-sidecar-0 -n kv1 command.
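If you want to script this verification instead of scanning the full dump, the domain XML can be filtered down to just the vhost-user interfaces. The sketch below runs against a trimmed sample fragment of the dump shown above:

```python
# Filter a libvirt domain XML document down to its vhost-user interfaces,
# printing each interface's MAC address and UNIX socket path.
import xml.etree.ElementTree as ET

# Trimmed fragment of the `virsh dumpxml` output shown above.
dump = """<domain type='kvm'>
  <devices>
    <interface type='ethernet'><mac address='0a:58:0a:81:01:ce'/></interface>
    <interface type='vhostuser'>
      <mac address='02:49:27:00:00:8f'/>
      <source type='unix' path='/var/run/kubevirt-hooks/vhost-podc4fb1146cc8.sock' mode='server'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='02:49:27:00:00:90'/>
      <source type='unix' path='/var/run/kubevirt-hooks/vhost-podbe2ca98ba59.sock' mode='server'/>
    </interface>
  </devices>
</domain>"""

root = ET.fromstring(dump)
for iface in root.iterfind("./devices/interface[@type='vhostuser']"):
    print(iface.find("mac").get("address"), iface.find("source").get("path"))
```

In practice you would feed it the real dump, e.g. the output of `virsh dumpxml 1` captured from the compute container.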