<path> should be replaced with a file or device path (the same for each)

serial= specifies the SCSI logical unit serial number

This attaches two virtual SCSI devices to the VM, both of which are backed by the same file and share the same SCSI logical unit identifier.
Once booted, the SCSI devices for each corresponding path appear as sda and sdb, which are detected as multipath-enabled and subsequently mapped as dm-0.
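A QEMU invocation for this setup might look something like the following sketch. The image path, drive IDs, and serial string are illustrative assumptions, not taken from the original post; newer QEMU releases also need share-rw=on (or image locking disabled) to open the same file twice:

```shell
# Hypothetical sketch: one virtio-scsi controller, two scsi-hd devices
# backed by the same image and sharing one serial number
qemu-kvm -nographic -m 1024 \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=/tmp/mpath.img,format=raw,if=none,id=d0 \
    -device scsi-hd,drive=d0,serial=multi0,share-rw=on \
    -drive file=/tmp/mpath.img,format=raw,if=none,id=d1 \
    -device scsi-hd,drive=d1,serial=multi0,share-rw=on
```

Because both scsi-hd devices carry the same serial=, the guest's multipath layer treats them as two paths to one logical unit.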

QEMU additionally allows for virtual device hot(un)plug at runtime, which can be done from the QEMU monitor CLI (accessed via ctrl-a c) using the drive_del command. This can be used to trigger a multipath failover event:
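A failover test might proceed as sketched below; "d1" is the hypothetical -drive id= from the invocation above, and the exact multipath output will vary:

```shell
# In the guest: confirm both paths are active
multipath -ll
# Switch to the QEMU monitor with ctrl-a c, then drop one backing drive:
#   (qemu) drive_del d1
# Switch back to the guest console (ctrl-a c again); I/O should continue
# on the surviving path, with the deleted one marked failed/faulty:
multipath -ll
```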

Friday, June 9, 2017

There were many other interesting talks during the conference, all of which can be viewed on the oSC 2017 media site.
A video of my presentation is available below, and on YouTube.
Many thanks to the organisers and sponsors for putting on a great event.

The above examples work by replacing the normal git commit editor with a call to git interpret-trailers, which appends the desired tag to the commit message and then exits.
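The rb alias itself isn't shown above; a plausible reconstruction is sketched below. The alias name matches the text, but the trailer value and the shell-function wrapper are assumptions:

```shell
# Hypothetical "rb" alias: act as a git editor that appends a Reviewed-by
# trailer to the commit message file passed in $1, then exits immediately
git config --global alias.rb \
    '!f() { git interpret-trailers --trailer "Reviewed-by: Jane Developer <jane@example.com>" --in-place "$1"; }; f'
```

Note that --in-place requires git 2.9 or later.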

My specific use case is to add Reviewed-by: tags to specific commits during interactive rebase, e.g.:

> git rebase --interactive HEAD~3

This brings up an editor with a list of the top three commits in the current branch. Assuming the aforementioned rb alias has been configured, individual commits will be given a Reviewed-by tag when appended with the following line:

exec GIT_EDITOR="git rb" git commit --amend

As an example, the following will see three commits applied, with the commit message for two of them (d9e994e and 5f8c115) appended with my Reviewed-by tag.
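The edited rebase todo list might look like the sketch below; the d9e994e and 5f8c115 hashes are from the example above, while the third hash and all subject lines are placeholders:

```
pick d9e994e Fix frobnicator locking
exec GIT_EDITOR="git rb" git commit --amend
pick 5f8c115 Add frobnicator tests
exec GIT_EDITOR="git rb" git commit --amend
pick 1a2b3c4 Unrelated change
```

Only the first two commits are amended with the Reviewed-by trailer; the third is applied untouched.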

Bonus: By default, the vim editor includes git rebase --interactive syntax highlighting and key-bindings - if you press K while hovering over a commit hash (e.g. d9e994e from above), vim will call git show <commit-hash>, making reviewing and tagging even faster!

Thanks to:

Upstream Git developers, especially those who implemented the interpret-trailers functionality.

Tuesday, December 13, 2016

Introduction

I've blogged a few times about how Dracut and QEMU can be combined to greatly improve Linux kernel dev/test turnaround.

My first post covered the basics of building the kernel, running dracut, and booting the resultant image with qemu-kvm.

A later post took a closer look at network configuration, and focused on bridging VMs with the hypervisor.

Finally, my third post looked at how this technique could be combined with Ceph, to provide a similarly efficient workflow for Ceph development.

In bringing this series to a conclusion, I'd like to introduce the newly released Rapido project. Rapido combines all of the procedures and techniques described in the articles above into a handful of scripts, which can be used to test specific Linux kernel functionality, standalone or alongside other technologies such as Ceph.

Usage - Standalone Linux VM

The following procedure was tested on openSUSE Leap 42.2 and SLES 12 SP2, but should work fine on many other Linux distributions.

Step 4 - Boot!

In a whopping four seconds, or thereabouts, the VM should have booted to a rapido:/# bash prompt, leaving you with two zram-backed Btrfs filesystems mounted at /mnt/test and /mnt/scratch.

Everything, including the VM's root filesystem, is in memory, so any changes will not persist across reboot. Use the rapido.conf QEMU_EXTRA_ARGS parameter if you wish to add persistent storage to a VM.
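A rapido.conf snippet for persistent storage might look like the following; the image path and format are examples only, not taken from the original post:

```shell
# rapido.conf snippet (hypothetical): attach a persistent qcow2 disk
# to the otherwise memory-only VM
QEMU_EXTRA_ARGS="-drive file=/var/lib/rapido/persist.qcow2,format=qcow2,if=virtio"
```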

Although the network isn't used in this case, you should be able to observe that the VM's network adapter can be reached from the hypervisor, and vice-versa.

Usage - Ceph vstart.sh cluster and CephFS client VM

This usage guide builds on the previous standalone Linux VM procedure, but this time adds Ceph to the mix. If you're not interested in Ceph (how could you not be!) then feel free to skip to the next section.

Step I - Checkout and Build

We already have a clone of the Rapido and Linux kernel repositories. All that's needed for CephFS testing is a Ceph build:
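The build steps might look roughly like this; the exact invocation varies with the Ceph release, so treat it as a sketch:

```shell
# Hypothetical Ceph build from a source checkout
cd ceph
./do_cmake.sh            # generate the cmake build directory
cd build
make -j"$(nproc)" vstart # build just the targets vstart.sh needs
```

The vstart make target avoids compiling the full tree, which keeps the turnaround short.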

Tuesday, June 28, 2016

Developing a USB gadget application that runs on Linux?
Following a recent Ceph USB gateway project, I was looking at ways to test a Linux USB device without the need to fiddle with cables, or deal with slow embedded board boot times.

Ideally USB gadget testing could be performed by running the USB device code within a virtual
machine, and attaching the VM's virtual USB device port to an emulated USB
host controller on the hypervisor system.

I was unfortunately unable to find support for virtual USB device ports in QEMU, so I abandoned the above architecture, and discovered dummy_hcd.ko instead.

The dummy_hcd Linux kernel module is an excellent utility for USB device testing from within a standalone system or VM.

dummy_hcd.ko offers the following features:

Re-route USB device traffic back to the local system

Effectively providing device loopback functionality

USB high-speed and super-speed connection simulation

It can be enabled via the USB_DUMMY_HCD kernel config parameter. Once the module is loaded, no further configuration is required.
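Getting a loopback device up might look like the following sketch; it assumes dummy_hcd was built as a module and uses g_zero, the kernel's test gadget, as the example device:

```shell
# Assumes CONFIG_USB_DUMMY_HCD=m in the kernel config
modprobe dummy_hcd   # registers an emulated host controller + gadget UDC
modprobe g_zero      # bind the kernel's test gadget to the dummy UDC
lsusb                # the gadget should now appear as a local USB device
```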

Tuesday, May 10, 2016

Introduction

Ceph's vstart.sh utility is very useful for deploying and testing a mock cluster directly from the Ceph source repository. It can:

Generate a cluster configuration file and authentication keys

Provision and deploy a number of OSDs

Backed by local disk, or memory using the --memstore parameter

Deploy an arbitrary number of monitor, MDS or rados-gateway nodes

All services are deployed as the running user, i.e. root access is not needed.

Once deployed, the mock cluster can be used with any of the existing Ceph client utilities, or exercised with the unit tests in the Ceph src/test directory.
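A typical vstart.sh run might look like the sketch below; the daemon counts and the status check are examples, and paths assume a cmake build directory:

```shell
# Hypothetical mock-cluster deployment from a Ceph build directory:
# 1 monitor, 3 OSDs, 1 MDS; -n creates a new cluster, -x enables cephx
cd ceph/build
MON=1 OSD=3 MDS=1 ../src/vstart.sh -n -x --memstore
# Check cluster health using the conf file vstart.sh just generated
./bin/ceph -c ceph.conf status
```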

When developing or testing Linux kernel changes for CephFS or RBD, it's useful to also be able to use these kernel clients against a vstart.sh deployed Ceph cluster.

Test Environment Overview - image based on content by Sage Weil

The instructions below walk through configuration and deployment of all components needed to test Linux kernel RBD and CephFS modules against a mock Ceph cluster. The procedure was performed on openSUSE Leap 42.1, but should also be applicable for other Linux distributions.

Network Setup

First off, configure a bridge interface to connect the Ceph cluster with a kernel client VM network:
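With iproute2, the bridge and a tap device for the VM might be set up as follows; the interface names and subnet are examples only, and the commands need root:

```shell
# Hypothetical bridge + tap setup for the kernel client VM
ip link add br0 type bridge
ip addr add 192.168.155.1/24 dev br0
ip link set br0 up
ip tuntap add dev tap0 mode tap user "$USER"
ip link set tap0 master br0
ip link set tap0 up
```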

Kernel VM Deployment

Build a kernel:

> cd $kernel_source_dir
> make menuconfig

$kernel_source_dir should be replaced with the actual path. Ensure CONFIG_BLK_DEV_RBD=m, CONFIG_CEPH_FS=y, CONFIG_CEPH_LIB=y, CONFIG_E1000=y and
CONFIG_IP_PNP=y are set in the kernel config. A sample can be found here.

> make
> INSTALL_MOD_PATH=./mods make modules_install

Create a link to the modules directory ./mods, so that Dracut can find them:
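The link command itself appears to have been lost; a plausible reconstruction, run as root from the kernel source directory, is:

```shell
# Hypothetical reconstruction: expose the modules installed under ./mods
# at the /lib/modules path where dracut will look for them
krel="$(make -s kernelrelease)"
ln -s "$PWD/mods/lib/modules/$krel" "/lib/modules/$krel"
```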

Conclusion

A mock Ceph cluster can be deployed from source in a matter of seconds using the vstart.sh utility.
Likewise, a kernel can be booted directly from source alongside a throwaway VM and connected to the mock Ceph cluster in a couple of minutes with Dracut and QEMU/KVM.

This environment is ideal for rapid development and integration testing of Ceph user-space and kernel components, including RBD and CephFS.