KubeVirt - Running VMs under Kubernetes

Intro
KubeVirt is a CNCF project that makes it possible to run virtual machines as containerized workloads on a Kubernetes cluster. We'll have a look at it by deploying it to a Rancher k3s cluster and spinning up a Debian guest VM.
We're using KubeVirt v0.34.2 and a three-node 1.19.3+k3s2 cluster on bare metal.
Installation
KubeVirt operator and custom resource
Installing the operator and custom resource in the cluster deploys the API, controller, and operator components, so we'll be able to manage the VMs.
# Using version 0.34.2
devbox1:~$ echo $VERSION
v0.34.2
# Install the operator
devbox1:~$ kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
namespace/kubevirt created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io created
priorityclass.scheduling.k8s.io/kubevirt-cluster-critical created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:operator created
serviceaccount/kubevirt-operator created
role.rbac.authorization.k8s.io/kubevirt-operator created
rolebinding.rbac.authorization.k8s.io/kubevirt-operator-rolebinding created
clusterrole.rbac.authorization.k8s.io/kubevirt-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator created
deployment.apps/virt-operator created
# Install the custom resource
devbox1:~$ kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
kubevirt.kubevirt.io/kubevirt created
# Verify (install took about a minute)
devbox1:~$ kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"
Deploying
devbox1:~$ kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"
Deployed
devbox1:~$
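At this point the KubeVirt components should be up in the kubevirt namespace: virt-api, virt-controller, virt-handler (one per node), and virt-operator. The pod names, replica counts, and ages below are illustrative, not actual output:
devbox1:~$ kubectl get pods -n kubevirt
NAME                    READY   STATUS    RESTARTS   AGE
virt-api-...            1/1     Running   0          2m
virt-controller-...     1/1     Running   0          2m
virt-handler-...        1/1     Running   0          2m
virt-operator-...       1/1     Running   0          4m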
Install virtctl
Virtctl is a binary for managing the virtual machines, similar to what kubectl is for the cluster. We'll download the latest version and install it to /usr/local/bin on our development box. It can also be installed as a kubectl plugin.
devbox1:~$ echo $ARCH
linux-amd64
devbox1:~$ cd Downloads/
devbox1:~/Downloads$ curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   651  100   651    0     0   2106      0 --:--:-- --:--:-- --:--:--  2100
100 45.7M  100 45.7M    0     0   971k       0  0:00:48  0:00:48 --:--:--  777k
devbox1:~/Downloads$ chmod +x virtctl
devbox1:~/Downloads$ sudo install virtctl /usr/local/bin
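As a quick sanity check, virtctl can report both its own version and that of the cluster-side components (output abbreviated here):
devbox1:~/Downloads$ virtctl version
Client Version: version.Info{GitVersion:"v0.34.2", ...}
Server Version: version.Info{GitVersion:"v0.34.2", ...}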
Running a test VM
We should now be able to spin up a virtual machine that boots from a registry-hosted container disk (kubevirt/cirros-container-disk-demo). It will not persist any data, so once we're done we'll simply remove it. Get the vm.yaml from KubeVirt's GitHub; the disks are defined in the YAML:
- name: containerdisk
  containerDisk:
    image: kubevirt/cirros-container-disk-demo
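For reference, a stripped-down vm.yaml for this demo looks roughly like the sketch below. The full example in KubeVirt's repository also adds a cloud-init disk and network settings, so treat this as an approximation rather than the exact file:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            # Boot disk backed by the demo container image
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64M
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo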
# Deploy the VM - this creates a 'vm' resource
devbox1:~/Projects/kubevirts$ kubectl apply -f vm.yaml
virtualmachine.kubevirt.io/testvm created
devbox1:~/Projects/kubevirts$ kubectl get vms
NAME     AGE   VOLUME
testvm   13s
# Start the VM using virtctl - this creates a running instance as a 'vmi'
devbox1:~/Projects/kubevirts$ virtctl start testvm
VM testvm was scheduled to start
devbox1:~/Projects/kubevirts$ kubectl get vmis
NAME     AGE   PHASE     IP           NODENAME
testvm   22s   Running   10.42.2.36   castor
# Connect to the VM console
devbox1:~/Projects/kubevirts$ virtctl console testvm
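# Detach from the serial console again with the escape sequence Ctrl+]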
# The VM runs as a pod in the default namespace:
devbox1:~/Projects/kubevirts$ kubectl get pods -n default -o wide |grep virt
virt-launcher-testvm-8nwzq   2/2   Running   0   7m55s   10.42.2.36   castor   <none>   <none>
# Stop and remove the virtual machine
devbox1:~/Projects/kubevirts$ virtctl stop testvm
VM testvm was scheduled to stop
devbox1:~/Projects/kubevirts$ kubectl delete vm testvm
virtualmachine.kubevirt.io "testvm" deleted
devbox1:~/Projects/kubevirts$ kubectl get vmis
No resources found in default namespace.
devbox1:~/Projects/kubevirts$ kubectl get vms
No resources found in default namespace.
Further steps
So far so good, but of course we'd also like to run our own ISOs and disk images. To be able to upload and store images, we'll use CDI (Containerized Data Importer) and our NFS storage class to hold the images - both the ISOs and the qcow2 disk images.
Deploy the Data Importer operator and custom resource
# Install the CDI operator - note that VERSION now refers to the CDI release
devbox1:~/Projects/kubevirts$ echo $VERSION
v1.26.1
devbox1:~/Projects/kubevirts$ kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
namespace/cdi created
customresourcedefinition.apiextensions.k8s.io/cdis.cdi.kubevirt.io created
clusterrole.rbac.authorization.k8s.io/cdi-operator-cluster created
clusterrolebinding.rbac.authorization.k8s.io/cdi-operator created
serviceaccount/cdi-operator created
role.rbac.authorization.k8s.io/cdi-operator created
rolebinding.rbac.authorization.k8s.io/cdi-operator created
deployment.apps/cdi-operator created
configmap/cdi-operator-leader-election-helper created
# Deploy the CDI itself
devbox1:~/Projects/kubevirts$ kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
cdi.cdi.kubevirt.io/cdi created
# All running
devbox1:~/Projects/kubevirts$ kubectl get pods -n cdi
NAME                               READY   STATUS    RESTARTS   AGE
cdi-apiserver-f5fc95fb4-xv8xx      1/1     Running   0          54s
cdi-deployment-86979fbccf-4qrrw    1/1     Running   0          48s
cdi-operator-6d7579d75c-dpg8b      1/1     Running   0          110s
cdi-uploadproxy-7ff594fbdf-j4h7z   1/1     Running   0          45s
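Like the KubeVirt resource, the CDI custom resource reports a deployment phase that we can poll. This assumes the CDI resource is cluster-scoped, which may vary by CDI version:
devbox1:~/Projects/kubevirts$ kubectl get cdi cdi -o=jsonpath="{.status.phase}"
Deployed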
Create a PVC to hold a Debian installation image
We will set up a persistent volume claim called debian-netinst, annotated so that CDI imports the installation image from the download URL. We'll use the nfs-client storage class that is available in our cluster.
# pvc_debian.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "debian-netinst"
  labels:
    app: containerized-data-importer
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-10.6.0-amd64-netinst.iso"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 400Mi
  storageClassName: nfs-client
# Deploy
devbox1:~/Projects/kubevirts$ kubectl create -f pvc_debian.yml
persistentvolumeclaim/debian-netinst created
devbox1:~/Projects/kubevirts$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
debian-netinst   Bound    pvc-3d2980bf-19d5-4b79-8e44-b1cb55e6aea6   400Mi      RWO            nfs-client     112s
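Behind the scenes, CDI starts an importer pod that downloads the ISO and writes it to the PVC. You can follow its progress roughly like this - the pod name below (importer-debian-netinst, possibly with a suffix) is illustrative:
devbox1:~/Projects/kubevirts$ kubectl get pods | grep importer
importer-debian-netinst   1/1   Running   0   30s
devbox1:~/Projects/kubevirts$ kubectl logs -f importer-debian-netinst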
Define the Debian virtual machine
To define our virtual machine, we create a YAML file with two resources: a disk image, debian-vm-hd, as a PVC on the nfs-client storage class, and the VirtualMachine itself. The initial state will be running: false, and we're giving it 2 cores, 4G of RAM, and the default q35 machine type. As the first disk in the boot order we bind our Debian netinstall image that was stored as a PVC in the previous step.
# debian.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: debian-vm-hd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-client
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: debian-vm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: debian-vm
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
            - bootOrder: 1
              cdrom:
                bus: sata
              name: cdromiso
            - disk:
                bus: virtio
              name: harddrive
        machine:
          type: q35
        resources:
          requests:
            memory: 4G
      volumes:
        - name: cdromiso
          persistentVolumeClaim:
            claimName: debian-netinst
        - name: harddrive
          persistentVolumeClaim:
            claimName: debian-vm-hd
Running the VM
We are now ready to spin up the VM. KubeVirt will ask Kubernetes to schedule a pod containing the VM.
# Deploy the manifest - this creates the disk image PVC and the VM resource
devbox1:~/Projects/kubevirts$ kubectl apply -f debian.yml
persistentvolumeclaim/debian-vm-hd created
virtualmachine.kubevirt.io/debian-vm created
# Check the Virtual machine resource
devbox1:~/Projects/kubevirts$ kubectl get vms
NAME        AGE   VOLUME
debian-vm   88s
# Start up the VM
devbox1:~/Projects/kubevirts$ virtctl start debian-vm
VM debian-vm was scheduled to start
# Kubernetes will schedule a pod containing the VM
devbox1:~/Projects/kubevirts$ kubectl get vmis
NAME        AGE   PHASE        IP   NODENAME
debian-vm   5s    Scheduling
devbox1:~/Projects/kubevirts$ kubectl get vmis
NAME        AGE   PHASE     IP           NODENAME
debian-vm   40s   Running   10.42.1.28   pollux
Connecting to the console
Provided you have virt-viewer installed and X set up correctly, we can now connect to the graphical console using:
Hestia:~/$ virtctl vnc debian-vm
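If no X-capable client is available, the serial console works as well:
devbox1:~/Projects/kubevirts$ virtctl console debian-vm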

After going through the Debian installation, we can stop the machine, edit the YAML file to remove the CD-ROM (or change the boot order), and run the VM once again. It will now boot straight into Debian. Congrats!
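The relevant part of debian.yml then looks something like the sketch below - either comment out the cdrom disk (and its matching cdromiso volume further down), or simply move bootOrder: 1 to the harddrive:
        devices:
          disks:
            # - bootOrder: 1
            #   cdrom:
            #     bus: sata
            #   name: cdromiso
            - bootOrder: 1
              disk:
                bus: virtio
              name: harddrive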
devbox1:~/Projects/kubevirts$ virtctl stop debian-vm
# After commenting out the cdrom, re-apply the manifest
devbox1:~/Projects/kubevirts$ kubectl apply -f debian.yml
persistentvolumeclaim/debian-vm-hd unchanged
virtualmachine.kubevirt.io/debian-vm configured
devbox1:~/Projects/kubevirts$ virtctl start debian-vm
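Once the VM is up, one way to reach it from outside the cluster is to expose a port through a Kubernetes service. The service name and type here are just an example:
devbox1:~/Projects/kubevirts$ virtctl expose virtualmachine debian-vm --name debian-vm-ssh --type NodePort --port 22
Service debian-vm-ssh successfully exposed for virtualmachine debian-vm
devbox1:~/Projects/kubevirts$ kubectl get svc debian-vm-ssh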

Conclusion
KubeVirt is a very interesting project so far, and the ability to join the two worlds of container orchestration and virtual machine pools seems powerful. As they mention on the site, being able to take workloads that still need virtual machines and combine them with containerized workloads is very cool.
Also, since the project uses standard Kubernetes extensions, your resources and VMs are exposed in your Kubernetes installation and can be managed using kubectl. As a further example of that, in the Rancher management UI you'll see your machines show up without any further configuration.

Of course there is no specific integration in the GUI, and for now we need to use virtctl to manage the VMs, but this may change very soon with the new Harvester project that Rancher is working on.