
Kubevirt - Running VMs under Kubernetes

Kubevirt deployed on Rancher

Intro

Kubevirt is a Cloud Native Computing Foundation (CNCF) project that makes it possible to run virtual machines as containerized workloads on a Kubernetes cluster. We'll take a look at it by deploying it to a Rancher k3s cluster and spinning up a Debian guest VM.

We're using version v0.34.2 of Kubevirt and a three-node 1.19.3+k3s2 cluster on bare metal.


Kubevirt operator and custom resource

Installing the operator and the custom resource in the cluster will deploy the API server, controller, and operator components, so we'll be able to manage VMs.
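For reference, the custom resource we apply in a moment is tiny; a minimal KubeVirt CR looks roughly like this (a sketch based on the upstream kubevirt-cr.yaml; exact fields and the API version vary between releases):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates: []   # optional features can be enabled here
  imagePullPolicy: IfNotPresent
```

The operator watches for this resource and reacts by rolling out (or tearing down) the actual Kubevirt components.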

# Using version 0.34.2
devbox1:~$ echo $VERSION
v0.34.2

# Install the operator
devbox1:~$ kubectl create -f${VERSION}/kubevirt-operator.yaml
namespace/kubevirt created
Warning: CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use CustomResourceDefinition created
serviceaccount/kubevirt-operator created
...
deployment.apps/virt-operator created

# Install the custom resource
devbox1:~$ kubectl create -f${VERSION}/kubevirt-cr.yaml created

# Verify (install took about a minute)
devbox1:~$ kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"
Deploying
devbox1:~$ kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"
Deployed

Install virtctl

Virtctl is a binary that can be used to manage the virtual machines, much like kubectl manages the cluster. We'll download the latest version and install it to /usr/local/bin on our development box. It could also be installed as a kubectl plugin.

devbox1:~$ echo $ARCH
devbox1:~$ cd Downloads/
devbox1:~/Downloads$ curl -L -o virtctl${VERSION}/virtctl-${VERSION}-${ARCH}
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   651  100   651    0     0   2106      0 --:--:-- --:--:-- --:--:--  2100
100 45.7M  100 45.7M    0     0   971k      0  0:00:48  0:00:48 --:--:--  777k
devbox1:~/Downloads$ chmod +x virtctl
devbox1:~/Downloads$ sudo install virtctl /usr/local/bin

Running a test VM

We should now be able to spin up a virtual machine that boots from a downloaded registry disk (kubevirt/cirros-container-disk-demo). It will not persist any data, so once we're done we'll just remove it. Get the vm.yaml from Kubevirt's GitHub. The disks are defined in the yml:

        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo
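For context, the full vm.yaml is only slightly larger; a sketch of its shape (field names per the v0.34-era kubevirt.io/v1alpha3 API, details may differ from the upstream demo file):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false          # create the VM definition without starting it
  template:
    metadata:
      labels:
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk   # references the volume below
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64M
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo
```

The disks list under devices describes how a disk is attached to the guest, while the volumes list says where its data actually comes from.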

# Deploy the vm - creating a 'vm' resource
devbox1:~/Projects/kubevirts$ kubectl apply -f vm.yaml created

devbox1:~/Projects/kubevirts$ kubectl get vms
NAME     AGE
testvm   13s

# Start the vm using virtctl - creating a running machine as a 'vmi' instance
devbox1:~/Projects/kubevirts$ virtctl start testvm
VM testvm was scheduled to start
devbox1:~/Projects/kubevirts$ kubectl get vmis
NAME     AGE   PHASE     IP           NODENAME
testvm   22s   Running                castor

# Connect to the vm
devbox1:~/Projects/kubevirts$ virtctl console testvm

# Running in default namespace:
devbox1:~/Projects/kubevirts$ kubectl get pods -n default -o wide | grep virt
virt-launcher-testvm-8nwzq   2/2     Running   0          7m55s   castor   <none>           <none>

# Stop / remove virtual machine
devbox1:~/Projects/kubevirts$ virtctl stop testvm
VM testvm was scheduled to stop
devbox1:~/Projects/kubevirts$ kubectl delete vm testvm "testvm" deleted
devbox1:~/Projects/kubevirts$ kubectl get vmis
No resources found in default namespace.
devbox1:~/Projects/kubevirts$ kubectl get vms
No resources found in default namespace.

Further steps

So far so good, but of course we'd also like to run our own ISOs and disk images. To be able to upload and store images (both ISOs and qcow disk images), we'll use the CDI operator, a data importer, together with our NFS storage class to hold the images.

Deploy the Data importer custom resource

# Install the CDI operator (VERSION now refers to the CDI release)
devbox1:~/Projects/kubevirts$ echo $VERSION
devbox1:~/Projects/kubevirts$ kubectl create -f$VERSION/cdi-operator.yaml
namespace/cdi created
...
serviceaccount/cdi-operator created
deployment.apps/cdi-operator created
configmap/cdi-operator-leader-election-helper created

# Deploy the CDI custom resource itself
devbox1:~/Projects/kubevirts$ kubectl create -f$VERSION/cdi-cr.yaml created

# All running
devbox1:~/Projects/kubevirts$ kubectl get pods -n cdi
NAME                               READY   STATUS    RESTARTS   AGE
cdi-apiserver-f5fc95fb4-xv8xx      1/1     Running   0          54s
cdi-deployment-86979fbccf-4qrrw    1/1     Running   0          48s
cdi-operator-6d7579d75c-dpg8b      1/1     Running   0          110s
cdi-uploadproxy-7ff594fbdf-j4h7z   1/1     Running   0          45s

Create a PVC to hold a Debian installation image

We will set up a persistent volume claim called debian-netinst, annotated so that CDI populates it from a download link. We'll use the nfs-client storage class that is available in our cluster.

# pvc_debian.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "debian-netinst"
  labels:
    app: containerized-data-importer
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "" # URL of the Debian netinst ISO goes here
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 400Mi
  storageClassName: nfs-client

# Deploy
devbox1:~/Projects/kubevirts$ kubectl create -f pvc_debian.yml
persistentvolumeclaim/debian-netinst created

devbox1:~/Projects/kubevirts$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
debian-netinst   Bound    pvc-3d2980bf-19d5-4b79-8e44-b1cb55e6aea6   400Mi      RWO            nfs-client     112s
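CDI records the state of the import in annotations on the PVC itself, so you can follow along with kubectl describe pvc debian-netinst. Once the importer pod finishes, the PVC metadata should look roughly like this (a hypothetical excerpt; annotation names as documented by CDI):

```yaml
metadata:
  name: debian-netinst
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: ""   # the ISO URL we set
    cdi.kubevirt.io/storage.pod.phase: Succeeded  # set by CDI when the import completes
```

If the phase stays in Pending or Failed, the logs of the importer pod in the cdi namespace are the place to look.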

Define the Debian virtual machine

To define our virtual machine, we create a yml that defines a disk image, debian-vm-hd, as a PVC resource on nfs-client, and the VirtualMachine resource itself. The initial state will be running: false, and we're using 2 cores, 4G of RAM, and the default q35 chipset. As the first disk we bind the Debian netinstall image that was stored as a PVC in the previous step.

# debian.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: debian-vm-hd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-client
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: debian-vm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: debian-vm
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
            - bootOrder: 1
              cdrom:
                bus: sata
              name: cdromiso
            - disk:
                bus: virtio
              name: harddrive
        machine:
          type: q35
        resources:
          requests:
            memory: 4G
      volumes:
        - name: cdromiso
          persistentVolumeClaim:
            claimName: debian-netinst
        - name: harddrive
          persistentVolumeClaim:
            claimName: debian-vm-hd

Running the VM

We are now ready to spin up the VM. Kubevirt will ask Kubernetes to schedule a pod containing the VM.

# Deploy: this creates the disk image PVC and the VM resource
devbox1:~/Projects/kubevirts$ kubectl apply -f debian.yml
persistentvolumeclaim/debian-vm-hd created created

# Check the Virtual machine resource
devbox1:~/Projects/kubevirts$ kubectl get vms
NAME        AGE
debian-vm   88s

# Start up the VM
devbox1:~/Projects/kubevirts$ virtctl start debian-vm
VM debian-vm was scheduled to start

# Kubernetes will schedule a pod containing the VM
devbox1:~/Projects/kubevirts$ kubectl get vmis
NAME        AGE   PHASE        IP    NODENAME
debian-vm   5s    Scheduling
devbox1:~/Projects/kubevirts$ kubectl get vmis
NAME        AGE   PHASE     IP           NODENAME
debian-vm   40s   Running                pollux

Connecting to the console

Provided you have virt-viewer installed and X set up correctly, we can now connect to the console using

Hestia:~/$ virtctl vnc debian-vm
Console showing the Debian installer's GRUB menu

After going through the Debian installation, we can stop the machine, edit the yml file to remove the CD-ROM (or change the boot order), and run the VM once again. It will now boot straight into Debian. Congrats!
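Commenting out the CD-ROM device in debian.yml might look like the sketch below (indentation must match your own file); only the virtio hard drive remains in the disks list, and the matching cdromiso volume should be commented out the same way:

```yaml
        devices:
          disks:
            # - bootOrder: 1
            #   cdrom:
            #     bus: sata
            #   name: cdromiso
            - disk:
                bus: virtio
              name: harddrive
```

With the CD-ROM gone there is only one bootable disk left, so an explicit bootOrder is no longer needed.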

devbox1:~/Projects/kubevirts$ virtctl stop debian-vm

# Comment out the cdrom

devbox1:~/Projects/kubevirts$ kubectl apply -f debian.yml
persistentvolumeclaim/debian-vm-hd unchanged configured

devbox1:~/Projects/kubevirts$ virtctl start debian-vm
Debian VM booting in k8s, now from the hard disk


Kubevirt is a very interesting project so far, and the ability to join two worlds, container orchestration and virtual machine pools, seems powerful. As they mention on the site, being able to take workloads that still need virtual machines and combine them with containerized workloads is very cool.

Also, since the project uses standard Kubernetes extensions, your resources and VMs are exposed in your k8s install and can be managed using kubectl. As a further example, the Rancher management UI shows your machines without any further configuration.

Inspecting the VM's yml from within Rancher

Of course there is no specific integration in the GUI yet, and for now we need to use virtctl to manage the VMs, but this may change soon with the new Harvester project that Rancher is working on.