Once in a while, I get to tackle issues that have little or no documentation beyond the product’s official documentation and its source code. You may know from experience that product documentation is not always sufficient to get a complete configuration working. This article intends to flesh out a solution for customizing disk configurations using Curtin.

This article takes for granted that you are familiar with Maas install mechanisms and that you already know how to customize installations and deploy workloads using Juju.

While my colleagues in the Maas development team have done a tremendous job at keeping the Maas documentation accurate (see Maas documentation), it only covers the basics when it comes to Maas’s preseed customization, especially Curtin’s customization.

Curtin is Maas’s fastpath installer, which is meant to replace Debian’s installer (familiarly known as d-i). It does a complete machine installation much faster than the standard Debian method. But while d-i is well known and it is easy to find examples of its use on the web, Curtin does not have the same notoriety and, hence, not as much documentation.

Theory of operation

When the fastpath installer is used to install a Maas unit (which is now the default), it will send the content of the files prefixed with curtin_ to the unit being installed. The curtin_userdata file contains cloud-config type commands that will be applied by cloud-init when the unit is installed. If we want to apply a specific partitioning scheme to all of our units, we can modify this file and every unit will get those commands applied to it when it installs.

But what if we only have one or a few servers with a specific disk layout that requires custom partitioning ? In the following example, I will suppose that we have one server, named curtintest, which has a one terabyte (1 TB) disk, and that we want to partition this disk with the following partition table :

Partition #1 has the /boot file system and is bootable

Partition #2 has the root (/) file system

Partition #3 has a 31 GB file system

Partition #4 has 32 GB of swap space

Partition #5 has the remaining disk space

Since only one server has such a disk, the partitioning should be specific to that curtintest server only.
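
The configuration file from the original post did not survive the archive. As a rough sketch, a Curtin config for the layout above could look like the following; the device name (/dev/vdc), the /boot and root sizes, and the sgdisk invocations are all my assumptions and should be checked against your Curtin version. Note the empty builtin list, whose importance is explained in the warning further down.

```shell
# Hypothetical sketch of a Curtin config implementing the five-partition
# layout; written to a local file for use with "curtin install -c conffile".
# /dev/vdc, the sizes and the sgdisk calls are assumptions.
cat > conffile <<'EOF'
partitioning_commands:
  builtin: []
  01_wipe: ["sgdisk", "--zap-all", "/dev/vdc"]
  02_boot: ["sgdisk", "-n", "1:0:+512M", "-t", "1:8300", "-A", "1:set:2", "/dev/vdc"]
  03_root: ["sgdisk", "-n", "2:0:+100G", "-t", "2:8300", "/dev/vdc"]
  04_data: ["sgdisk", "-n", "3:0:+31G", "-t", "3:8300", "/dev/vdc"]
  05_swap: ["sgdisk", "-n", "4:0:+32G", "-t", "4:8200", "/dev/vdc"]
  06_rest: ["sgdisk", "-n", "5:0:0", "-t", "5:8300", "/dev/vdc"]
EOF
```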

Setting up Curtin development environment

To get to a working Maas partitioning setup, it is preferable to use Curtin’s development environment to test the curtin commands. Using Maas deployments to test each command quickly becomes tedious and time consuming. Curtin’s README.txt describes how to set the environment up, but here are a few more details.

Aside from putting all the files under one single directory, the steps described here are the same as the ones in the README.txt file :

You now have an environment you can use with Curtin to automate installations. You can test it with the following command, which will start a VM and run « curtin install » in it. Once you get the prompt, log in with :

Creating Maas’s Curtin preseed commands

Now that we have our Curtin development environment available, we can use it to come up with a set of commands that will be fed to Curtin by Maas when a unit is created.

Maas uses preseed files located in /etc/maas/preseeds on the Maas server. The curtin_userdata preseed file is the one that we will use as a reference to build our set of partitioning commands. During the testing phase, we will use the -c option of curtin install along with a configuration file that will mimic the behavior of curtin_userdata.

We will also need to add a fake 1TB disk to Curtin’s development environment so we can use it as a partitioning target. So in the development environment, issue the following command :
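
The command itself is missing from the archived post; any sparse 1 TB image attached to the VM as a second disk does the job. For example (the file name is an assumption, and qemu-img create -f qcow2 disk2.img 1T would work just as well) :

```shell
# Create a sparse 1 TB raw disk image to attach to the test VM; it uses
# almost no real disk space until it is written to.
truncate -s 1T disk2.img
ls -lsh disk2.img
```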

The sources: statement is only there to avoid having to repeat the SOURCE portion of the curtin command and is not to be used in the final Maas configuration. The URL is the address of the server from which you are running the Curtin development environment.

WARNING

The builtin [] statement is VERY important. It is there to override Curtin’s native builtin command, which is to partition the disk using « block-meta simple ». If it is removed, Curtin will overwrite the partitioning with its default configuration. This comes straight from Scott Moser, the main developer behind Curtin.

Now let’s run the Curtin command :

root@ubuntu:~# /curtin/bin/curtin install -c conffile

Curtin will run its installation sequence and you will see a display that should be familiar if you have installed units with Maas previously. The command will most probably exit on an error, complaining that install-grub received an argument that was not a block device. We do not need to worry about that at the moment.

Once completed, have a look at the partitioning of the /dev/vdc device :

The partitioning commands were successful and we have the /dev/vdc disk properly configured. Now that we know that the mechanism works, let’s try with a complete configuration file. I found it preferable to start over with a fresh 1 TB disk :

You will note that I have added a few statements like [« echo », « ‘### Partitioning disk ###' »] that display some logs during the execution. Those are not strictly necessary.
Now let’s try a second test with the complete configuration file :

We now have a correctly partitioned disk in our development environment. All we need to do now is to carry that over to Maas to see if it works as expected.

Customization of Curtin execution in Maas

The section « How preseeds work in MAAS » gives a good outline of how to choose the name of a preseed file to restrict its usage to a specific sub-group of nodes. In our case, we want our partitioning to apply to only one node : curtintest. So, following the description in the section « User provided preseeds », we need to use the following template :

{prefix}_{node_arch}_{node_subarch}_{release}_{node_name}

The filename that we need to choose must end with our hostname, curtintest. The other elements are :

prefix : curtin_userdata

node_arch : amd64

node_subarch : generic

release : trusty

node_name : curtintest

So according to that, our filename must be curtin_userdata_amd64_generic_trusty_curtintest.

Now that Maas is properly configured for curtintest, complete the test by deploying a charm in a Juju environment where curtintest is properly commissioned. In this example, curtintest is the only available node, so Maas will systematically pick it up :

Conclusion

Customizing disks and partitions using Curtin is possible but currently not sufficiently documented. I hope that this write-up will be helpful. Sustained development on Curtin is under way to improve these functionalities, so things will definitely get better.

iSCSI and Device mapper Multipath test setup
Tue, 30 Sep 2014

I have seen this setup documented in a few places, but not for Ubuntu, so here it goes.

I have used this setup many times to verify or diagnose Device Mapper Multipath (DM-MPIO), since it is rather easy to fail a path by switching off one of the network interfaces. Nowadays, I use two KVM virtual machines with two NICs each.

Those steps have been tested on Ubuntu 12.04 (Precise) and Ubuntu 14.04 (Trusty). The DM-MPIO section is mostly a cut and paste from the Ubuntu Server Guide.

The virtual machine that will act as the iSCSI target provider is called PreciseS-iscsitarget. The VM that will connect to the target is called PreciseS-iscsi. Each one is configured with two network interfaces (NICs) that get their IP addresses from DHCP. Here is an example of the network configuration file :
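
The file itself did not make it into the archive; on Precise/Trusty, an /etc/network/interfaces along these lines gives a VM two DHCP-configured NICs (the interface names eth0/eth1 are assumptions) :

```shell
# Sample /etc/network/interfaces for two DHCP NICs, written to a local
# file here; on the VM this content would go in /etc/network/interfaces.
cat > interfaces.sample <<'EOF'
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp
EOF
```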

We can see in the dmesg output that the new device /dev/sda has been discovered. Format the new disk & create a file system. Then verify that everything is correct by mounting and unmounting the new file system.
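
The exact commands were stripped from the archive; the sequence is the standard one, shown here both as it would run against the iSCSI device and, so it can be executed safely anywhere, against a small image file :

```shell
# On the initiator, against the real iSCSI disk (needs root):
#   sudo mkfs.ext4 /dev/sda
#   sudo mount /dev/sda /mnt && df -h /mnt && sudo umount /mnt
# The mkfs step demonstrated on an image file:
truncate -s 64M demo-fs.img
mkfs.ext4 -q -F demo-fs.img   # -F: allow operating on a regular file
```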

All that remains is to add an entry to the /etc/fstab file so the file system that we created is mounted automatically at boot. Notice the _netdev option : it is required, otherwise the iSCSI device will not be mounted.
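
The entry shown in the original post is gone; it would look something like the line below (the device path and mount point are assumptions; in practice a UUID is safer than /dev/sda) :

```shell
# Sample /etc/fstab line for the iSCSI-backed file system, written to a
# local file here. The _netdev option delays the mount until the network
# is up.
cat > fstab.sample <<'EOF'
/dev/sda  /mnt/iscsi  ext4  defaults,_netdev  0  2
EOF
```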

Ubuntu Trusty on the Elitebook EVO 850
Fri, 05 Sep 2014

After three years of hard work, it is time to retire my 8440p and let the family enjoy its availability. For my new workhorse, I have chosen the HP Elitebook EVO 850, which fits my budget and performance requirements.

Before hosing the Windows 7 installation, I thought of testing the basic functionalities. So after booting Win7, I checked that most of the things (sound, lights, webcam, etc.) did work as expected.

Never underestimate the power of the bug : if there is some hardware issue, then it is better to do a first diagnostic on Windows. The HP tech will love you for that (been there, done that). Otherwise, there will always be a doubt that Ubuntu is the culprit & they will not try to look any further.

After a successful Windows boot, I created a bootable USB stick with the latest Ubuntu release on it to verify that Ubuntu itself runs fine. No need to wipe out Windows and install Ubuntu on it only to find out that the hardware fails miserably. Here is the command I used to create the bootable USB stick, since the USB creator has been buggy for years on Ubuntu :

$ dd if=ubuntu-14.04.1-desktop-amd64.iso of=/dev/sdc bs=4M

One important note : this laptop comes factory-installed with a Secure Boot configuration in the BIOS. I did not have to change anything to boot Ubuntu, so you should not have to either.

Since everything looked good, I went ahead & restarted the laptop to install Ubuntu Trusty Tahr 14.04.1, using a full disk install with full disk encryption. Installation was flawless and completed in less than five minutes, thanks to the 250 GB SSD drive !

remote kernel crash dump : More testing needed
Tue, 17 Jun 2014

A couple of weeks ago I announced that I was working on a new remote functionality for kdump-tools, the kernel crash dump tool used on Debian and Ubuntu.

I am now done with the development of the new functionality, so the package is ready for testing. If you are interested, just read the previous post which has all the gory details on how to set it up & test it.

remote kernel crash dump for Debian and Ubuntu
Tue, 20 May 2014

A few years ago, I started to participate in the packaging of makedumpfile and kdump-tools for Debian and Ubuntu. I am currently applying for the formal status of Debian Maintainer to continue that task.

For a while now, I have been noticing that our version of the kernel dump mechanism was lacking a functionality that has been available on RHEL & SLES for a long time : remote kernel crash dumps. On those distributions, it is possible to define a remote server to be the receptacle of the kernel dumps of other systems. This can be useful for centralization or to capture dumps on systems with limited or no local disk space.

So I am proud to announce the first functional beta-release of kdump-tools with remote kernel crash dump functionality for Debian and Ubuntu !

For those of you eager to test or not interested in the details, you can find a packaged version of this work in a Personal Package Archive (PPA) here :

New functionality : remote SSH and NFS

In the current version available in Debian and Ubuntu, the kernel crash dumps are stored on local filesystems. Starting with version 1.5.1, they are stored in a timestamped directory under /var/crash. The new functionality allows defining either a remote host accessible through SSH or an NFS mount point as the receptacle for the kernel crash dumps.

A new section of the /etc/default/kdump-tools file has been added :

# ---------------------------------------------------------------------------
# Remote dump facilities:
# SSH - username and hostname of the remote server that will receive the dump
# and dmesg files.
# SSH_KEY - Full path of the ssh private key to be used to login to the remote
# server. use kdump-config propagate to send the public key to the
# remote server
# HOSTTAG - Select if hostname or IP address will be used as a prefix to the
# timestamped directory when sending files to the remote server.
# 'ip' is the default.
# NFS - Hostname and mount point of the NFS server configured to receive
# the crash dump. The syntax must be {HOSTNAME}:{MOUNTPOINT}
# (e.g. remote:/var/crash)
#
# SSH="<user@server>"
#
# SSH_KEY="<path>"
#
# HOSTTAG="hostname|[ip]"
#
# NFS="<nfs mount>"
#

The kdump-config command also gains a new option, propagate, which is used to send a public ssh key to the remote server so passwordless ssh commands can be issued to the remote SSH host.

Those options and commands are nothing new : I simply based my work on the existing functionality from RHEL & SLES. So if you are well acquainted with the RHEL remote kernel crash dump mechanism, you will not be lost on Debian and Ubuntu. I want to thank those who built this functionality on those distributions; it was a great help in getting it ported to Debian.

Testing on Debian

First of all, you must enable the kernel crash dump mechanism at the kernel level. I will not go into details as it is slightly off topic, but you should :

Add crashkernel=128M to /etc/default/grub in GRUB_CMDLINE_LINUX_DEFAULT

Run update-grub

reboot
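
The three steps above can be sketched as follows; the edit is demonstrated on a local copy of /etc/default/grub so it runs safely (on a real system, edit the file in place as root, then run update-grub and reboot) :

```shell
# Append crashkernel=128M to GRUB_CMDLINE_LINUX_DEFAULT.
# Demonstrated on a sample copy of /etc/default/grub.
cat > grub.sample <<'EOF'
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
EOF
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 crashkernel=128M"/' grub.sample
cat grub.sample   # GRUB_CMDLINE_LINUX_DEFAULT="quiet splash crashkernel=128M"
```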

Install the beta packages

The package in the PPA can be installed on Debian with add-apt-repository. This command is in the software-properties-common package so you will have to install it first :

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1 . Then set the SSH variable to the remote hostname & credentials that you want to use to send the kernel crash dump. Here is an example :

USE_KDUMP=1
...
SSH="ubuntu@TrustyS-netcrash"

You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user you defined (ubuntu in my case) for this command :

It is a safe practice to verify that the remote SSH host can be accessed without password. You can use the following command to test (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools) :
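
The test command did not survive the archive. Assuming the key that kdump-config propagate installs lives at /root/.ssh/kdump_id_rsa (an assumption; use whatever your SSH_KEY is set to), a guarded check could look like this :

```shell
# Check that passwordless ssh works with the kdump key; the guard makes
# the script a harmless no-op when the key is absent.
KDUMP_SSH="ubuntu@TrustyS-netcrash"   # same value as SSH= in /etc/default/kdump-tools
KDUMP_KEY="/root/.ssh/kdump_id_rsa"   # assumed default; adjust to your SSH_KEY
if [ -r "$KDUMP_KEY" ]; then
    ssh -o BatchMode=yes -i "$KDUMP_KEY" "$KDUMP_SSH" true \
        && echo "passwordless ssh OK" || echo "ssh check failed"
fi
```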

If the passwordless connection can be achieved, then everything should be all set.

Testing on Ubuntu

As you would expect, setting things up on Ubuntu is quite similar to Debian.

Install the beta packages

The package in the PPA can be installed on Ubuntu with add-apt-repository. This command is in the software-properties-common package, so you will have to install it first :

$ sudo add-apt-repository ppa:louis-bouchard/networked-kdump

Packages are available for Trusty and Utopic.

$ sudo apt-get update
$ sudo apt-get -y install linux-crashdump

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1 . Then set the SSH variable to the remote hostname & credentials that you want to use to send the kernel crash dump. Here is an example :

USE_KDUMP=1
...
SSH="ubuntu@TrustyS-netcrash"

You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user you defined (ubuntu in my case) for this command :

It is a safe practice to verify that the remote SSH host can be accessed without password. You can use the following command to test (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools) :

If the passwordless connection can be achieved, then everything should be all set.

Configure kdump-tools for remote NFS capture

Edit the /etc/default/kdump-tools file and set the NFS variable with the NFS mount point that will be used to transfer the crash dump :

NFS="TrustyS-netcrash:/var/crash"

The format needs to be the syntax that normally would be used to mount the NFS filesystem. You should test that your NFS filesystem is indeed accessible by mounting it manually (you might need to install the nfs-common package) :
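
The mount test itself is not in the archive; a guarded version of it could look like this (the hostname and path match the NFS= example above, and the mount is only attempted when running as root with nfs-common installed) :

```shell
# Manually verify that the NFS export can be mounted.
NFS_SPEC="TrustyS-netcrash:/var/crash"   # same value as NFS= above
MNT=/tmp/kdump-nfs-test
mkdir -p "$MNT"
if [ "$(id -u)" -eq 0 ] && command -v mount.nfs >/dev/null 2>&1; then
    if mount -t nfs "$NFS_SPEC" "$MNT"; then
        df -h "$MNT"
        umount "$MNT"
    else
        echo "NFS mount failed : check the export and the network"
    fi
fi
```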

Once you are sure that your NFS setup is correct, then you can proceed with a real crash dump test.

Miscellaneous commands and options

A few other things are under the control of the administrator.

The HOSTTAG modifier

When sending the kernel crash dump, kdump-config will use the IP address of the server as a prefix to the timestamped directory on the remote host. You can use the HOSTTAG variable to change that default. Simply define in /etc/default/kdump-tools :

HOSTTAG="hostname"

The hostname of the server will be used as a prefix instead of the IP address.

Currently, this is only implemented for the SSH method, but it will be available for NFS as well in the final version.

kdump-config show

To verify the configuration that you have defined in /etc/default/kdump-tools, you can use kdump-config’s show command to review your options.

Juju, maas and agent-name
Thu, 28 Nov 2013

While working on a procedure with colleagues, I ran into an issue that I prefer to document somewhere. Since my blog has been sleeping for close to a year, I thought I’d put it here.

I was running a set of tests on a version of juju that will resemble 1.16.4. This version, like each one after 1.16.2, implements the agent-name identifier which is, as I understand it, used to discriminate between multiple bootstrapped environments.

I had a bootstrapped environment on a maas 1.4 server (1.4+bzr1693+dfsg-0ubuntu2~ctools0) with a few services running on two machines. Many hours of testing led us to identify that even though juju was using agent-name, it appeared that maas was not.

After upgrading to the latest and greatest version of maas (1.4+bzr1693+dfsg-0ubuntu2.2~ctools0), I ran a « juju status » and to my dismay I got this :

$ juju status

Please check your credentials or use ‘juju bootstrap’ to create a new environment.

Well, it turns out that after the upgrade of the maas packages, maas started to honour the agent-name; but since the instances were created without any agent-name, it was no longer able to provide the information.

Raphael Badin, who works on the maas team, suggested using the following maas shell commands to fix things up :

Be very careful with such a command if your maas environment is in production as this will change all the nodes present in MAAS.

Planes, Trains & Automobiles
Sun, 02 Dec 2012

Ok, the title shows my age. But a facebook post from a colleague (oh, yeah, who happens to be my boss), writing his thoughts on a train to Québec city, made me think of my own experience of today, sitting in a plane for 8 hours on my way to Montréal.

Not because we left an hour late; that was even lightened by the pilot’s humour in describing in almost real time the reasons for us being late (a mixup in the baggage loading, then nobody to remove the boarding gate, then the guy who was supposed to move the plane out of the boarding area leaving without notice). But because this plane was taking me from the country where I elected to live to the country where I was born, the city where I spent some of my best time.

Flying for me is rarely a burden, once I’m in the plane. Actually, once I’ve cleared security and am waiting to board. Then the calm reaches me; all I need to do is sit and be taken care of. Even when, while watching the beginning of « The Fight Club » on the onboard video system, I watch an « in flight collision », three days after dreaming of experiencing an airline crash on take off. I am not a frequent flyer by any measure, but air travelling is not a problem for me.

But every now and then, I get to fly back home from home. I get to return to Montréal, Québec, from Le Chesnay, France. This often puts me in curious positions. Like an hour ago when, in the hotel’s elevator, I met two people from France. I recognised the accent, then automatically switched to my « France » accent and behaved just like any other person visiting from France. But earlier, coming back from the airport, I talked to the taxi driver as any other québécois would have done. I do like this situation. It makes me think that I have taken the best of both worlds. I also remember this big map of France that I put up on my apartment wall, back when going to France was not even a possibility yet. Back then it was a wish.

It also takes me away from my family, my two beloved daughters and Nathalie, the woman I love. And brings me back to my other family, my two brothers and my parents who still live here. Even though I am briefly far away from the family I helped build, I am also briefly closer to the family who saw me grow up. Airplanes get me from one to the other in only a few hours.

Automobiles will not see me much here. This may be a proof that I am only a visitor here nowadays. I used to drive the streets of Montréal daily, without hesitation, knowing where to go. I have this feeling driving in the Paris area nowadays. I’m much more familiar with Versailles and Paris than I would be in Laval or the east end of Montréal. But if I was to come back here, it would all come back very quickly.

Planes, trains & Automobiles take us to our lives at the speed at which they evolve, or at the distances where they happen.

Installing Ubuntu Quantal with Full disk encryption
Thu, 25 Oct 2012

For historical reasons, I have been installing my Ubuntu laptop on a fully encrypted disk for years. Up until now, I needed to use the alternate CD, since it was the only way to install with full disk encryption.

This is no longer the case. If you want to use full disk encryption with Quantal Quetzal, you can use the standard installation CD and will be presented with the following options :

You can then select the « Encrypt the new Ubuntu installation for security » option to request full disk encryption. Alternatively, you can elect to use LVM as I did, but this is not a requirement in order to get full disk encryption.

Kudos to the Ubuntu development team for making this option so simple now !

Lucid panics after 208 days ? Don’t get bitten by that
Tue, 12 Jun 2012

If you are part of those people who are reluctant to upgrade to newer kernels, here is an example of how this can make your life miserable every 208 days.

There is a specific kernel bug in Lucid that will provoke a kernel panic after 208 days of uptime, which is a common occurrence on a server (and on a cloud instance ?). Here is the kernel GIT commit related to this :

This has been fixed in the Ubuntu kernel since 2.6.32-38, months ago, but if you prefer not to upgrade to newer kernels on Lucid, you will be hit by this bug.

Amazon Music Store’s MP3 on Precise
Tue, 05 Jun 2012

One nice thing about Banshee was the seamless integration of the Amazon MP3 store. Since I reinstalled my laptop on Precise, I now have Rhythmbox instead of Banshee, which does not seem to offer the same kind of integration.

Luckily for me, I found a nice little hack that will help me get my favorite non-DRMed MP3 files on Ubuntu :