Tuesday, December 13, 2016

Rapido: A Glorified Wrapper for Dracut and QEMU

Introduction

I've blogged a few times about how Dracut and QEMU can be combined to greatly improve Linux kernel dev/test turnaround.

My first post covered the basics of building the kernel, running dracut, and booting the resultant image with qemu-kvm.
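
As a quick refresher, that flow is only a few commands. A minimal sketch (the initramfs filename, memory size and console settings are my own choices here):

    # from the kernel source tree: build, then generate a generic initramfs
    make -j"$(nproc)"
    dracut --no-hostonly --kver "$(make -s kernelrelease)" initrd.img

    # boot the freshly built kernel directly, with the console on stdio
    qemu-kvm -kernel arch/x86/boot/bzImage -initrd initrd.img \
             -append "console=ttyS0" -nographic -m 1024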

A later post took a closer look at network configuration, and focused on bridging VMs with the hypervisor.
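
The bridging part amounts to something like the following on the hypervisor (interface names are arbitrary):

    # create a bridge and a tap device, and attach the tap to the bridge
    ip link add br0 type bridge
    ip tuntap add dev tap0 mode tap
    ip link set tap0 master br0
    ip link set br0 up
    ip link set tap0 up

    # hand the tap device to QEMU as the VM's network adapter
    qemu-kvm ... -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
                 -device virtio-net-pci,netdev=net0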

Finally, my third post looked at how this technique could be combined with Ceph, to provide a similarly efficient workflow for Ceph development.

In bringing this series to a conclusion, I'd like to introduce the newly released Rapido project. Rapido combines all of the procedures and techniques described in the articles above into a handful of scripts, which can be used to test specific Linux kernel functionality, standalone or alongside other technologies such as Ceph.

Usage - Standalone Linux VM

The following procedure was tested on openSUSE Leap 42.2 and SLES 12 SP2, but should work fine on many other Linux distributions.

Step 4 - Boot!

In a whopping four seconds, or thereabouts, the VM should have booted to a rapido:/# bash prompt, leaving you with two zram-backed Btrfs filesystems mounted at /mnt/test and /mnt/scratch.
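
To confirm from within the VM, the standard tools work as usual:

    rapido:/# findmnt -t btrfs
    rapido:/# df -h /mnt/test /mnt/scratch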

Everything, including the VM's root filesystem, is in memory, so any changes will not persist across reboot. Use the rapido.conf QEMU_EXTRA_ARGS parameter if you wish to add persistent storage to a VM.
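
For example, attaching a raw disk image (the path here is hypothetical) could look like this in rapido.conf:

    QEMU_EXTRA_ARGS="-drive file=/path/to/persist.img,format=raw,if=virtio"

With if=virtio, the disk should appear inside the VM as /dev/vda, ready to be partitioned, formatted and mounted by hand.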

Although the network isn't used in this case, you should be able to observe that the VM's network adapter is reachable from the hypervisor, and vice versa.
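
A quick round-trip check (substitute the addresses configured via rapido.conf):

    # on the hypervisor
    ping -c 1 <VM IP address>

    # and from within the VM, back the other way
    rapido:/# ping -c 1 <hypervisor bridge address>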

Usage - Ceph vstart.sh cluster and CephFS client VM

This usage guide builds on the previous standalone Linux VM procedure, but this time adds Ceph to the mix. If you're not interested in Ceph (how could you not be!) then feel free to skip to the next section.

Step 1 - Checkout and Build

We already have a clone of the Rapido and Linux kernel repositories. All that's needed for CephFS testing is a Ceph build:
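
(Exact commands depend on the Ceph version; the sketch below assumes a checkout recent enough to carry the CMake-based do_cmake.sh wrapper, and the checkout path and job count are illustrative.)

    cd ~/ceph                  # wherever the Ceph repository was cloned
    ./do_cmake.sh
    cd build
    make -j"$(nproc)"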