I was looking for a way to automate the creation of VMs for testing various distributed system / cluster software packages. I’ve used Vagrant in the past but I wanted something that would:

Allow me to use raw ISO files as the basis for guest VMs.

Guest VMs should be set up with bridged IPs that are routable from the host.

Guest VMs should be able to reach the Internet.

Other hosts on the local network should be able to reach guest VMs. (Setting up additional routes is OK).

VM creation should work with any distro that supports Kickstart files.

Scripts should be able to create and delete VMs in a scripted, fully-automatic manner.

Guest VMs should be set up to allow passwordless ssh access from the “ansible” user.

I’ve previously used virsh’s virt-install tool to create VMs and I like how easy it is to set up things like extra network interfaces and attach existing disk images. The scripts in this repo fully automate the virsh VM creation process.
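For context, the kind of virt-install call these scripts wrap looks roughly like the sketch below (the host name, paths, and sizes are placeholders, not the repo's exact command):

virt-install \
    --name node1 \
    --memory 2048 \
    --vcpus 2 \
    --disk path=/var/lib/libvirt/images/node1.qcow2,size=20 \
    --network bridge=br0 \
    --graphics none \
    --console pty,target_type=serial \
    --location /path/to/ubuntu-18.10-server-amd64.iso \
    --initrd-inject /path/to/ubuntu.ks \
    --extra-args "ks=file:/ubuntu.ks console=ttyS0"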

Sample Kickstart file

The Ansible user: Although I’d prefer to create the “ansible” user as a locked account, with no password and just an ssh public key, Kickstart on Ubuntu does not allow this, so I set up an encrypted password instead.

To set up your own password, use the encrypt-pw script to create a SHA512-hashed password that you can copy and paste into the Kickstart file. After a VM is created you can use this password if you need to log into the VM via the console.
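If you don't have the repo handy, one quick way to generate a SHA-512 crypt hash (an assumption about what encrypt-pw does, not its exact code) is:

python3 -c 'import crypt, getpass; print(crypt.crypt(getpass.getpass(), crypt.mksalt(crypt.METHOD_SHA512)))'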

To use your own ssh key, replace the ssh key in the %post section with your own public key.

The %post section at the bottom of the Kickstart file does a couple of things:

It updates all packages with the latest versions.

To configure a VM with Ansible, you just need ssh access to the VM and Python installed on the VM, so I use %post to install an ssh server and Python.

I start the serial console, so that virsh console $vmname works.

I add a public key for Ansible, so I can configure the servers with Ansible without entering a password.

Despite the name, the commands in the %post section are not the last commands executed by Kickstart on an Ubuntu 18.10 server. The “ansible” user is added after the %post commands are executed. This means that the Ansible ssh public key gets added before the ansible user is created.

To make key-based logins work I set the UID:GID of authorized_keys to 1000:1000. The user is later created with UID=1000, GID=1000, which means that the authorized_keys file ends up being owned by the ansible user by the time the VM creation is complete.
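Putting those pieces together, the %post section looks roughly like this sketch (the package names and public key are placeholders; the Kickstart file in the repo is the authoritative version):

%post
# Update all packages to the latest versions
apt-get update && apt-get -y dist-upgrade
# Install what Ansible needs: an ssh server and Python
apt-get -y install openssh-server python
# Enable the serial console so "virsh console $vmname" works
systemctl enable serial-getty@ttyS0.service
# Add the Ansible public key (replace with your own)
mkdir -p /home/ansible/.ssh
echo "ssh-rsa AAAA... ansible@workstation" > /home/ansible/.ssh/authorized_keys
chmod 700 /home/ansible/.ssh
chmod 600 /home/ansible/.ssh/authorized_keys
# The ansible user (UID 1000, GID 1000) is created after %post runs,
# so set ownership by numeric ID now
chown -R 1000:1000 /home/ansible
%end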

Create an Ubuntu 18.10 server

This creates a VM using Ubuntu’s text-based installer. Since the `-d` parameter is used, the progress of the install is shown on screen.
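An invocation along these lines (the host name and paths are placeholders; the options are listed in the script’s usage text below) kicks off the install:

./create-vm -n node1 \
    -i ~/isos/ubuntu-18.10-server-amd64.iso \
    -k ~/kickstarts/ubuntu.ks \
    -d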

create-vm script

# Copyright 2018 Earl C. Ruby III
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

This script will take an .iso file created by revisor and generate a VM from it.

OPTIONS:
   -h Show this message
   -n Host name (required)
   -i Full path and name of the .iso file to use (required)
   -k Full path and name of the Kickstart file to use (required)
   -r RAM in MB (defaults to ${RAM})
   -c Number of VCPUs (defaults to ${VCPUS})
   -s Amount of storage to allocate in GB (defaults to ${STORAGE})
   -b Bridge interface to use (defaults to ${BRIDGE})
   -m MAC address to use (default is to use a randomly-generated MAC)
   -v Verbose
   -d Debug mode
EOF
}

I spin up a lot of VMs using VMware Fusion. I generally keep “clean” generic copies of a few different distros and versions of Linux servers ready to go with my login, an sshd server, ssh keys, and basic settings that I use already set up. When I need to quickly test something manually — usually some new, multi-VM distributed container orchestration or database system — I just make as many copies of the server’s *.vmwarevm file as I need, fire up the VM copies on my laptop, test whatever I need to test, then shut them down. Eventually I delete the copies and recover the disk space.

Depending on where my laptop is running I’ll get a completely random IP address for the VM from the local DHCP server. I would log into the consoles, get the IPs, then log into the various VMs from a terminal. (Cut and paste just works a whole lot better on a terminal than on the VMware console.)

However, since the console screens are already up and I repeat this pattern several times a week, I figured why not save a step and have the ephemeral VMs show their IP addresses on their consoles without my having to log in. So I added an “on reboot” cron file called /etc/cron.d/welcome to the master image, which updates the /etc/issue file.

When a new VM boots, it writes the hostname, kernel info, and the ethernet config to the /etc/issue file. /etc/issue is displayed on the screen before the login prompt, so now I can just glance at the console, see the IP address, and ssh to the new VM.
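The cron file is a one-liner along these lines (a sketch of the idea; adjust the interface name for your image):

# /etc/cron.d/welcome -- refresh /etc/issue at every boot
@reboot root (hostname; uname -a; echo; ip -4 addr show eth0) > /etc/issue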

Although you’d never want to do this on a production system, it works great for ephemeral, throw-away test VMs.

This is a talk I gave last week at the SF Microservices Meetup titled Policy-based Cloud Storage, Persisting Data in a Multi-Site, Multi-Cloud World. In it I cover Apcera’s approach to storage for containers and how to use policy to manage very large scale application deployments.

I needed to add some sudo access rights for support personnel on about a hundred Centos 6.6 servers. At the time no one on these hosts had sudo rights, so the /etc/sudoers file was the default file. I’m using Ansible to maintain these hosts, but rather than modify the default /etc/sudoers file using Ansible’s lineinfile: command, I decided to create a support.conf file and use Ansible’s copy: command to copy that file into /etc/sudoers.d/. That way, if a future version of Centos changes the /etc/sudoers file, I’m leaving that file untouched and my changes should still work.

The support.conf file I created copied over just fine, and the validation step of running “visudo -cf” on the file before moving it into place claimed that the file was error-free and should work just fine as a sudoers file.

I logged in as the support user and it didn’t work:

[support@c1n1 ~]$ sudo /bin/ls /var/log/*
support is not in the sudoers file. This incident will be reported.

Not only did it not work, it told me that the support user wasn’t even in the sudoers file, when the user clearly was.

After Googling around a bit and not finding much I saw this in the Sudoers Manual:

sudo will read each file in /etc/sudoers.d, skipping file names that end in ‘~’ or contain a ‘.’ character to avoid causing problems with package manager or editor temporary/backup files.

sudo was skipping the file because the file name contained a period!

I changed the name of the file from support.conf to support and it worked.
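The working task ended up looking something like this sketch (the file name and modes are mine; adapt to taste):

- name: Grant sudo rights to support personnel
  copy: src=support dest=/etc/sudoers.d/support owner=root group=root mode=0440 validate='visudo -cf %s'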

I’m using Ansible to set up the network interface cards of multiple racks of storage servers running Centos 6.6. Each server has four network interfaces to configure, a public 1GbE interface, a private 1GbE interface, and two 10GbE interfaces that are set up as a bonded 20GbE interface with two VLANs assigned to the bond.

If Ansible changes an interface on a server it calls a handler to restart the network interfaces so the changes go into effect. However, I don’t want the network interfaces of every single server in a cluster to restart at the same time, so at the beginning of my network.yml playbook I set:

serial: 1

That way Ansible just updates the network config of one server at a time.

Also, if there are any failures I want Ansible to stop immediately, so if I screwed something up I don’t take out the networking to every computer in the cluster. For this reason I also set:

max_fail_percentage: 1
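In context, the top of the play looks something like this (the host group and role names are placeholders):

- hosts: storage-servers
  serial: 1
  max_fail_percentage: 1
  roles:
    - network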

If a change is made to an interface I’ve been using the following handler to restart the interface:

- name: Restart Network
service: name=network state=restarted

That works, but about half the time Ansible detects a failure and drops out with an error, even though the network restarted just fine. Checking the server immediately after Ansible reports an error shows that the server is running and its network interfaces were configured correctly.

This behavior is annoying since you have to restart the entire playbook after one server fails. With many racks of servers and the network setup updating only one server at a time, I’d end up having to restart the playbook a half dozen times to get through it, even though nothing was actually wrong.

At first I thought that maybe the ssh connection was dropping (I was restarting the network after all) but you can log in via ssh and restart the network and never lose the connection, so that wasn’t the problem.

The connection does pause as the interface that you’re ssh-ing in over resets, but the connection comes right back.

I wrote a short script to repeatedly restart the network interfaces and check the exit code returned, but the exit code was always 0, “no errors”, so network restart wasn’t reporting an error, but for some reason Ansible thought there was a failure.

There’s obviously some sort of timing issue causing a problem, where Ansible is checking to see if all is well, but since the network is being reset the check times out.

I initially came up with this workaround:

- name: Restart Network
shell: service network restart; sleep 3

That fixes the problem. However, since “sleep 3” will always exit with a 0 exit code (success), Ansible will always think this worked even when the network restart failed. (Ansible takes the last exit code returned as the success/failure of the entire shell operation.) If “service network restart” actually does fail, I want Ansible to stop processing.

In order to preserve the exit code, I wrote a one-line Perl script that restarts the network, sleeps 3 seconds, then exits with the same exit code returned by “service network restart”.
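A minimal sketch of that one-liner (not necessarily my exact script):

perl -e 'my $rc = system("service network restart"); sleep 3; exit($rc >> 8)'

The handler runs this instead of calling “service network restart” directly, so the pause still happens but a real failure still propagates to Ansible.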

Google “How do I mount an ISO image in Linux” and most of the links still say to use “-t iso9660”. For example:

mount -t iso9660 -o loop,ro diskimage.iso /mnt/iso

That worked fine 10 years ago, but these days not all ISOs use ISO9660 file systems. Many use the UDF (Universal Disk Format) file system, and if you specify ISO9660 when mounting a UDF ISO file, subtle problems can occur. For instance, file names that contain upper case letters on a UDF file system will appear in lower case when that ISO is mounted using ISO9660.

On any modern Linux distro mount is smart enough to figure out what type of file system to use when mounting an ISO file, so it’s perfectly fine to let mount infer the type, e.g.:

mount -o loop,ro diskimage.iso /mnt/iso

Here’s an example of what happens when you try to mount a type UDF ISO as type ISO9660. Note that the case of the file names changes to all lower case when mounting as iso9660, which in this case causes subtle errors to occur within the software.
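Something along these lines illustrates the difference (the file names are hypothetical; results shown as comments):

blkid -o value -s TYPE diskimage.iso       # prints "udf" for a UDF image

mount -t iso9660 -o loop,ro diskimage.iso /mnt/iso
ls /mnt/iso                                # names appear as e.g. "readme.txt setup.sh"
umount /mnt/iso

mount -o loop,ro diskimage.iso /mnt/iso
ls /mnt/iso                                # names keep their case: "README.TXT Setup.sh"
umount /mnt/iso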

Installing Django with Ansible’s pip module works great if you’re installing on a clean, empty server, but if you’re upgrading a server that had an older version of Django on it (1.6.4 in my case) Ansible acts as if it’s installing 1.6.5, yet when it was done I still had version 1.6.4.

Note the line in pip’s output: “Requested django==1.6.5, but installing version 1.6.4”. Thanks PIP!

It turned out to be a bug in PIP versions earlier than PIP 1.4, not Ansible. A little Googling turned up a page on Stackoverflow that pointed the finger at an old cached copy of 1.6.4 in the build directory, which I found in /tmp/pip-build-root.

I updated my Ansible YAML file to get rid of the temporary directory and now it works fine:
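The updated tasks look something like this sketch (module arguments are illustrative):

- name: Remove pip's stale build directory
  file: path=/tmp/pip-build-root state=absent

- name: Install Django
  pip: name=django version=1.6.5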

On OpenSUSE systems running systemd, all rcX scripts now redirect service commands (start, stop, reload, restart, etc.) to systemctl. The messages that used to appear on STDOUT telling you whether a command succeeded are now logged, but are no longer displayed on STDOUT.

That I can deal with, but every call to an rcX script now generates the message “redirecting to systemctl” to STDERR. I have a lot of scripts that call rcX scripts, and they interpret STDERR messages as “something just broke”.

The culprit is the new /etc/rc.status script that ships with OpenSUSE. It spews out the “redirecting to systemctl” message to STDERR for every operation that you do. The following command will modify the script and remove this stupid message:
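One way to do it (a sketch, assuming the message is printed by an echo that redirects to stderr; the exact contents of /etc/rc.status vary by release, so keep a backup):

cp /etc/rc.status /etc/rc.status.orig
sed -i 's|\(redirecting to systemctl.*\)>&2|\1>/dev/null|' /etc/rc.status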

I was talking to a co-worker about Pidgin not coming into focus when there’s a new, inbound IM. The Pidgin window used to come into focus, front and center, when I was running Ubuntu/Gnome and when running OpenSUSE/KDE, but when I upgraded my office desktop to Ubuntu/Unity it stopped behaving this way. My co-worker noticed the same behavior with Fedora 17/Gnome. A new IM would come in, but the Pidgin IM window would remain in the background, hidden, unseen and unread.

I thought “There has to be a setting that controls this,” and there is…

Bring up Pidgin’s Buddy List

Click Tools > Plugins

Locate the Message Notification plugin and highlight it

At the bottom of the Plugins window is a Configure Plugin button. Click it.