
Kubernetes with kubevirt and kata-containers, inside Vagrant (with libvirt/kvm)

Provides a consistent way to run Kubernetes (k8s) with KubeVirt and Kata Containers locally, provided that nested virtualization is supported.

Note: this installs Kubernetes from the “official” upstream k8s packages on Ubuntu 18.04, via kubeadm (not minikube or another installer).

If this works well inside Vagrant, the same provisioning scripts could, in principle, be used to install a similar cluster on a machine running a plain Ubuntu base system.

Features

Includes configuration for:

  • the Ubuntu 18.04 Vagrant base image generic/ubuntu1804 (from https://roboxes.org/), run inside qemu/kvm
  • Kubernetes installed (kubernetes.sh) with kubeadm, with:
    • the CRI-O container runtime, which is supposed to be compatible with both KubeVirt and Kata Containers (other options, such as containerd, may be possible): crio.sh
    • kubelet and CRI-O configured consistently to use the systemd cgroups driver (see the sketch after this list)
    • the Calico CNI network plugin (other options may be possible)
    • Rancher’s Local Path Provisioner, to automatically allocate PersistentVolumes from a directory of the node’s filesystem: rancher-local-path-provisioner.sh
    • KubeVirt, for deploying full-fledged cloud VMs on the cluster via qemu/kvm (in a similar way to IaaS clouds): kubevirt.sh
    • the Containerized Data Importer (CDI) for KubeVirt, to handle automatic import of VM images: cdi.sh
    • Kata Containers, for running pods/containers sandboxed inside mini VMs (also qemu): kata.sh
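
As noted in the list above, kubelet and CRI-O both need to use the systemd cgroups driver. The actual configuration is done by the crio.sh and kubernetes.sh scripts; as a rough sketch only, the kind of settings involved looks like this (exact file locations may differ on your system):

# /etc/crio/crio.conf (excerpt): make CRI-O use the systemd cgroup manager
[crio.runtime]
cgroup_manager = "systemd"

# kubelet: pass the matching cgroup driver, e.g. via extra arguments
# (typically /etc/default/kubelet on Ubuntu)
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd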

This is a reworked Vagrant setup, based on an initial version at https://github.com/mintel/vagrant-minikube, taking additions from https://gist.github.com/avthart/d050b13cad9e5a991cdeae2bf43c2ab3 and my own findings.

Demo

Here’s a recording of the Vagrant provisioning of the cluster:

Play the recording: https://asciinema.org/a/243325

Starting the cluster inside Vagrant

Install Pre-requisites

Support for nested virtualization

For this to work, the CPUs of your base hardware must provide virtualization support and allow “nested virtualization”. Since we will run qemu/kvm VMs inside the qemu/kvm VM started by Vagrant, this support is essential.

  • Check that your CPU provides hardware virtualization support
  • Check that the KVM modules are configured to allow nested virtualization (see the example commands below)
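
For example, on an Intel host both checks can be done with something like the following (on AMD CPUs, replace vmx with svm and kvm_intel with kvm_amd):

# hardware virtualization support: should print a non-zero count
egrep -c '(vmx|svm)' /proc/cpuinfo
# nested virtualization enabled in the KVM module: should print Y (or 1)
cat /sys/module/kvm_intel/parameters/nested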

Vagrant

Ensure you have Vagrant installed, together with its libvirt/KVM provider.

https://www.vagrantup.com/docs/installation/

You may install it using your distribution’s packages:

Arch

sudo pacman -S vagrant

Ubuntu

sudo apt-get install vagrant
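
Depending on how Vagrant was installed, the libvirt provider and the sshfs synced-folder support (used below) may need to be installed as Vagrant plugins. If your distribution’s packages don’t already provide them, something along these lines should work:

vagrant plugin install vagrant-libvirt vagrant-sshfs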

Run it

Clone this repo, then:

vagrant up --provider=libvirt

The (rather long) provisioning process will then run.

SSH into the VM

Once the provisioning ends, the cluster is ready.

You’ll perform most of the work inside the Vagrant VM:

vagrant ssh

Check the k8s cluster is up and running

kubectl get nodes
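
The single node should eventually report as Ready; the output will look roughly like the following (the node name, age and version here are only placeholders):

NAME    STATUS   ROLES    AGE   VERSION
k8s-1   Ready    master   15m   v1.14.1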

Access your code inside the VM

We automatically mount /tmp/vagrant into /home/vagrant/data.

For example, you may want to git clone some Kubernetes manifests into /tmp/vagrant on your host machine; you can then access them from inside the Vagrant VM.

This is bi-directional, and is achieved via vagrant-sshfs; see the example below.
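
For instance (the repository cloned here is only an illustration):

# on the host
git clone https://github.com/kubevirt/demo /tmp/vagrant/demo

# then, inside the Vagrant VM (vagrant ssh)
ls ~/data/demo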

Deploy stuff on the cluster

Once the k8s cluster is running you may test deployment of virtualized applications and systems.

Testing “regular cloud VMs” via KubeVirt

Basic VM instances

  • declare a KubeVirt virtual machine to be started with qemu/kvm:
    kubectl apply -f https://raw.githubusercontent.com/kubevirt/demo/master/manifests/vm.yaml
    ...
    kubectl get vms
        
  • start the VM’s execution (takes a while: downloading VM image, etc.)
    virtctl start testvm
    
    # wait until the VM is started
    kubectl wait --timeout=180s --for=condition=Ready pod -l kubevirt.io/domain=testvm
    # you can check the execution of qemu
    ps aux | grep qemu-system-x86_64
        
  • connect to the VM’s console
    virtctl console testvm
        

    it may take a while to get messages on the console, and eventually a login prompt (press ENTER if need be)
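
When you are done with this test, you can stop and delete the test VM; a quick cleanup sketch, assuming the testvm name from the manifest above:

virtctl stop testvm
kubectl delete vm testvm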

Testing automatic VM image import with DataVolumes

We have prepared a few deployment manifests to test booting VMs from boot disk images fetched from URLs.

Example with a Fedora machine

  • copy the fedora-datavolume.yaml manifest into the shared directory, so that it is available on the cluster host inside Vagrant:
    cp examples-kubevirt/fedora-datavolume.yaml /tmp/vagrant
        

    it will be available in ~vagrant/data/fedora-datavolume.yaml

  • connect via vagrant ssh, and:
    • create the DataVolume and VM Instance definitions:
      kubectl create -f data/fedora-datavolume.yaml
              
    • check that the DataVolume was created:
      kubectl get dv
              
      NAME        AGE 
      fedora28-dv 4m58s
              
    • check that the corresponding PersistentVolume Claim was allocated (automatically, thanks to the Local Path Provisioner):
      kubectl get pvc
              
      NAME        STATUS VOLUME                                   CAPACITY ACCESS MODES STORAGECLASS AGE 
      fedora28-dv Bound  pvc-b2bc560a-6b88-11e9-a6b2-525400a08028 10Gi     RWO          local-path   5m21s
              
    • look at the corresponding Persistent Volume:
      kubectl get pv
              
      NAME                                     CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM               STORAGECLASS REASON AGE 
      pvc-b2bc560a-6b88-11e9-a6b2-525400a08028 10Gi     RWO          Delete         Bound  default/fedora28-dv local-path          5m20s
              
    • watch the importer download the boot disk image and convert it automatically, thanks to the Containerized Data Importer (CDI), so that qemu can boot it:
      kubectl logs -f -l cdi.kubevirt.io=importer -l cdi.kubevirt.io/storage.import.importPvcName=fedora28-dv
              

      you’ll be able to check the growth of the contents of the PVC, where the disk.img boot disk for qemu will be constructed:

      du -sh /opt/local-path-provisioner/pvc-b2bc560a-6b88-11e9-a6b2-525400a08028/
              
      277M /opt/local-path-provisioner/pvc-b2bc560a-6b88-11e9-a6b2-525400a08028/
              
    • once the image is imported, the VM instance is launched; watch the virt-launcher’s logs:
      kubectl logs -f -l kubevirt.io=virt-launcher
              
  • Finally, you can connect to the VM’s console:
    virtctl console testvmfedora29
        

Note that you may also upload cloud images into the cluster via the Containerized Data Importer’s upload proxy, with:

# fetch the Ubuntu 18.04 cloud image (it is in qcow2 format, despite the .img extension)
wget http://cloud-images.ubuntu.com/releases/18.04/release/ubuntu-18.04-server-cloudimg-amd64.img
mv ubuntu-18.04-server-cloudimg-amd64.img ubuntu-18.04-server-cloudimg-amd64.qcow2
# upload it into a new 10Gi PVC through the cdi-uploadproxy service
# (the awk expression extracts the service's cluster IP from the second line of output)
virtctl image-upload --pvc-name=upload-pvc --pvc-size=10Gi --image-path=ubuntu-18.04-server-cloudimg-amd64.qcow2 --uploadproxy-url=https://$(kubectl get service -n cdi cdi-uploadproxy -o wide | awk 'NR==2 {print $3}'):443/ --insecure
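
Once the upload completes, the image is stored in the upload-pvc PersistentVolumeClaim created by the command above, which can be checked with:

kubectl get pvc upload-pvc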

Kata-containers

You can also test, from inside the Vagrant VM, launching containers sandboxed inside qemu mini-VMs:

kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/examples/test-deploy-kata-qemu.yaml

Once the container is running, you can run a shell inside it:

kubectl exec -it $(kubectl get pod -l run=php-apache-kata-qemu -o wide | awk 'NR==2 {print $1}') bash
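
To convince yourself that the pod really runs inside a lightweight VM, you can look, on the node, for the additional qemu process started by the Kata runtime (a rough check, similar to the one used above for KubeVirt):

ps aux | grep qemu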

Deploying a similar cluster on real OS

The scripts may be used, in the same order, to deploy a cluster on a (non-virtualized) Ubuntu 18.04 Server machine.

So far, the only limitation found relates to AppArmor constraints on libvirt, which prevent VMs from being started by KubeVirt.

An immediate workaround is to disable the corresponding AppArmor profile (which may not be the best idea, YMMV):

sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/usr.sbin.libvirtd
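
Note that the symlink only prevents the profile from being loaded on the next restart; you may also need to unload the currently loaded profile (or reboot), for instance with:

sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd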
