NUC KVM with openvswitch
In this article I will discuss a simple one-node homelab setup which I will later use for testing various technologies, including distributed systems such as Kubernetes variants, message queues, and other things I wish to experiment with.
Given the great support for hardware accelerated virtualisation and emulation offered by Linux via KVM I thought it would be a great opportunity to dig into this a little bit further with some hands-on experience.
Hardware
To start with, I’ll focus on a single physical node setup. Most distributed systems will run on a scaled-down set of hardware, often requiring 2-3 nodes for a minimal setup. For this I decided to try a four-core (2 threads per core) CPU in the form of an Intel i5-8259U. Not the beefiest of processors, but it has fairly low power consumption and is available in a NUC form factor relatively cheaply. This will allow me to allocate 2 vCPUs per node and have 4 nodes running. With 32 GB of memory, each node can have 7 GB of RAM, leaving 4 GB for the host OS and user-space programs.
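The resource budget above can be sketched quickly in the shell (the numbers are from this build):

```shell
# Resource budget for the 4-node lab: 8 hardware threads, 32 GB RAM
THREADS=8; NODES=4; TOTAL_RAM_GB=32; GUEST_RAM_GB=7

# 8 threads / 4 nodes = 2 vCPUs per guest
echo "vCPUs per node: $((THREADS / NODES))"
# 32 GB - (4 * 7 GB) = 4 GB left for the host
echo "host reserve: $((TOTAL_RAM_GB - NODES * GUEST_RAM_GB)) GB"
```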
The initial set-up for this article looks like this:
- NUC 8 i5-BEH
- 32GB Corsair Vengeance DDR4-2400 SO-DIMM memory
- 500GB Crucial P2 PCIe M.2 2280 NVMe SSD
Additionally, for the OS install you’ll need a USB stick with a minimum of 500MB of space available.
Finally, a monitor and keyboard are required for the initial Debian install.
Base OS
For the base OS I decided to go with Debian 10 (Buster). Because I do not intend to do much with the host itself, it seemed sensible to go with a stable base. This does however mean that the Linux kernel is 4.19, rather than the latest stable 5.x one that is available.
I installed this from a bootable USB stick:
- Download the Debian 10 AMD64 network install ISO
- From macOS, image the USB drive with the Debian image:

```
sudo diskutil unmountDisk /dev/diskN   # where diskN is the USB drive
sudo dd bs=1m if=debian-10.4.0-amd64-netinst.iso of=/dev/rdiskN; sync
sudo diskutil eject /dev/rdiskN
```
- Insert the USB stick into the NUC and power it up. Follow the instructions; I provision the minimal install - only the SSH server. I add my SSH key to my user’s `~/.ssh/authorized_keys` and set `PasswordAuthentication no` in `/etc/ssh/sshd_config`.
- Now is a good time to make sure everything is up-to-date:

```
su - root
apt-get update && apt-get upgrade
```
- After creating the user and root accounts and completing the initial install, install sudo and add your user to the sudo group:

```
su - root
apt-get install sudo
adduser <username> sudo
```
KVM install
Debian provides a KVM Guide on its wiki. The following is a synopsis of the instructions I followed.
- Install libvirt for a server:

```
apt-get install --no-install-recommends qemu-kvm libvirt-clients libvirt-daemon-system
```
- Add your user to the libvirt group:

```
sudo adduser <username> libvirt
```
At this point, you can run `virt-host-validate` to check if the system is configured correctly. I encountered the following issues:
```
QEMU: Checking if IOMMU is enabled by kernel : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
...
LXC: Checking if device /sys/fs/fuse/connections exists : FAIL (Load the 'fuse' module to enable /proc/ overrides)
```
The first issue requires the kernel command line to be updated. This can be done via the bootloader configuration: in `/etc/default/grub`, update the following line:

```
- GRUB_CMDLINE_LINUX_DEFAULT="quiet"
+ GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
```

NOTE: this requires `grub.cfg` to be regenerated, by running `update-grub`, followed by a reboot.
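After the reboot, a quick check (not part of the original guide) confirms whether the parameter made it onto the running kernel’s command line:

```shell
# /proc/cmdline holds the parameters the running kernel was booted with
if grep -q 'intel_iommu=on' /proc/cmdline; then
    echo "IOMMU parameter present"
else
    echo "IOMMU parameter missing"
fi
```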
For the second item (the fuse failure) we need to enable the fuse kernel module. It can be hot-loaded with `modprobe fuse`, and permanently enabled by adding it to `/etc/modules-load.d/modules.conf`:

```
fuse
```
NOTE: now would also be a good time to add the `vhost_net` module, to accelerate `virtio` networking.
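With both modules listed, `/etc/modules-load.d/modules.conf` ends up as:

```
fuse
vhost_net
```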
Openvswitch install
Openvswitch is a powerful software switch. I specifically want it so that I can provide a bridge between guest VMs and the host’s physical NIC, so that each VM can be assigned a public IP on the network. KVM supports ‘routed’ networking which directly routes traffic between the host’s physical NIC and the guest VMs, but books including Mastering KVM Virtualization do not recommend it for production. I know this isn’t production, but equally I could not get it to work out of the box.
The Debian openvswitch installation guide can be found here
A guide for using openvswitch with libvirt can be found here
- Install the openvswitch packages:

```
apt-get install openvswitch-switch openvswitch-common
```
- Create a bridge that we’ll connect KVM guests to:

```
ovs-vsctl add-br ovsbr
```
- Add the physical NIC as a port on the newly created bridge:

```
ovs-vsctl add-port ovsbr eno1
```
- Update the KVM default network to use the new openvswitch bridge:

```
virsh net-destroy default
virsh net-edit default
```

Edit the file to contain:

```
<network>
  <name>default</name>
  <forward mode='bridge'/>
  <bridge name='ovsbr'/>
  <virtualport type='openvswitch'/>
</network>
```
- Restart libvirt to make sure all settings are applied:

```
systemctl restart libvirtd
```
- Edit `/etc/network/interfaces` to disable IP allocation to the physical NIC and configure the bridge instead:

```
auto eno1
iface eno1 inet manual
    pre-up ip link set dev eno1 up
    post-down ip link set dev eno1 down

auto ovsbr
iface ovsbr inet static
    post-up ovs-vsctl set bridge ovsbr other-config:hwaddr=<desired mac address>
    address <static ip>/24
    gateway <gateway ip>
```

NOTE: Replace the values between `<` and `>` with the required values for your network setup.
- Reboot the box:

```
sudo reboot
```
The node is now ready to provision a virtual machine.
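As a sketch of what comes next: a guest attached to this network (for example with `virt-install --network network=default`) would end up with an interface stanza along these lines in its domain XML (the virtio model here is an assumption, chosen to benefit from the `vhost_net` module loaded earlier):

```
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```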
Reference versions
This article was written using the following versions:
- Debian
10.4.0
- libvirt
5.0.0-4+deb10u1
- openvswitch
2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u2
Additional Reading
- https://wiki.debian.org/KVM
- https://libvirt.org/formatnetwork.html
- http://docs.openvswitch.org/en/latest/howto/libvirt/