I've been getting into snorkelling a bit recently. I've always enjoyed it but recently I bought some good quality gear, replacing the toy shop crap I've been using. It's another world with good equipment! It's not easy to get time, but so far I've snorkelled Jervis Bay, Bushrangers Bay, Clovelly and The Haven in Terrigal.

My son has been asking what it's like, so I bought the Kogan waterproof camera case for $19. Took it out last weekend for a spin at The Haven but the visibility was terrible. The camera case works a treat though, and I'm looking forward to using it some more. Need to work out a strap to attach it to my arm or something though.

Yesterday was the (almost) annual OpenTech conference. For various reasons, the conference didn’t happen last year, so it was good to see it back this year.

OpenTech is the conference where I most wish I could clone myself. There are three streams of talks and in pretty much every slot there are talks I’d like to see in more than one stream. These are the talks that I saw.

Electromagnetic Field: Tales From the UK’s First Large-Scale Hacker Camp (Russ Garrett)
Last August, Russ was involved in getting 500 hackers together in a field near Milton Keynes for a weekend of hacking. The field apparently had better connectivity than some data centres. Russ talked about some of the challenges of organising an event like this and asked for help organising the next one which will hopefully take place in 2014.

Prescribing Analytics (Bruce Durling)
Bruce is the CTO of Mastodon C, a company that helps people extract value from large amounts of data. He talked about a project that crunched NHS prescription data and identified areas where GPs seem to have a tendency to prescribe proprietary drugs rather than cheaper generic alternatives.

GOV.UK (Tom Loosemore)
Tom is Deputy Director at the Government Digital Service. In less than a year, the GDS has made a huge difference to the way that the government uses the internet. It’s inspirational to see an OpenTech stalwart like Tom having such an effect at the heart of government.

How We Didn’t Break the Web (Jordan Hatch)
Jordan works in Tom Loosemore’s team. He talked in a little more detail about one aspect of the GDS’s work. When they turned off the old DirectGov and Business Link web sites in October 2012, they worked hard to ensure that tens of thousands of old URLs didn’t break. Jordan explained some of the tools they used to do that.

The ‘State of the Intersection’ address (Bill Thompson)
Bill’s talk was couched as a warning. For years, talks at OpenTech have been about the importance of Open Data and it’s obvious that this is starting to have an effect. Bill is worried that this data can be used in ways that are antithetical to the OpenTech movement and warned us that we need to be vigilant against this.

Beyond Open Data (Gavin Starks)
Gavin has been speaking at OpenTech since the first one in 2004 (even before it was called OpenTech) and, as with Tom Loosemore, it’s great to see his ideas bearing fruit. He is now the CEO of the Open Data Institute, an organisation founded by Tim Berners-Lee to promote the production and use of Open Data. Gavin talked about how the new organisation has been doing in its first six months of existence.

Silence and Thunderclaps (Emma Mulqueeny)
Emma has two contradictory-sounding ideas. The Silent Club is about taking time out in our busy lives to sit and be still and silent for an hour or so; and then sending her a postcard about what you thought or did during that time. The Thunderclap is a way to get a good effect out of that stack of business cards that we all seem to acquire.

Thinking Pictures (Paul Clarke)
Paul takes very good photographs and used some of them to illustrate his talk which covered some of the ethical, moral and legal questions that go through his mind when deciding which pictures to take, share and sell.

1080s – the 300seconds project (300seconds)
The 300 seconds project wants to get more women talking at conferences. And they think that one good way to achieve that is for new speakers to only have to talk for five minutes instead of the full 20 or 40 minutes (or more) that many conferences expect. The Perl community has been using Lightning Talks to do this with great success for over ten years, so I can’t see why they shouldn’t succeed.

Scaling the ZeroMQ Community (Pieter Hintjens)
Pieter talked about how the ZeroMQ community runs itself. Speaking as someone who has run a couple of open source project communities, some of his rules seemed a little harsh to me (“you can only expect to be listened to if you bring a patch or money”) but his underlying principles are sound. All projects should aim to reach a stage where the project founders are completely replaceable.

The Cleanweb Movement (James Smith)
I admit that I knew nothing about the Cleanweb Movement. Turns out it’s a group of people who are building web tools which make it easier for people to use less energy. Which sounds like a fine idea to me.

Repair, don’t despair! Towards a better relationship with electronics (Janet Gunter and David Mery)
Janet and David started the Restart Project, which is all about encouraging people to fix electrical and electronic devices rather than throwing them out and buying replacements. They are looking for more volunteers to help people fix stuff (and to teach others how to fix stuff).

CheapSynth (Dave Green)
Dave Green has been missing from OpenTech for a few years, but this was a triumphant return. He told us how you can build a cheap synth from a repurposed Rock Band game controller. He ended his talk (and the day) by leading the room in a rendition of Blue Money.

As always, OpenTech was a great way to spend a Saturday. Thank you to all of the organisers and the speakers for creating such an interesting day. As I tweeted during the day:

Being at @opentechuk always makes me embarrassed that I’m not getting more done. Which is, I suppose, the point of it :/

OpenSCAP is a project that lets you scan physical machines looking for known vulnerabilities or configuration problems (like world-writable directories).

Obviously it would be good to use this to scan guests, especially in a cloud scenario where you want to help naive users not to deploy guests that are just going to get pwned the minute they go online.
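As a rough illustration of what a scan looks like, here's an `oscap` invocation against the local machine. The content path and profile id below are examples from one distribution's security-guide package, not universal values, so adjust for whatever SCAP content you actually have:

```shell
# Evaluate an XCCDF profile against this machine, writing machine-readable
# results and a human-readable HTML report. Paths and profile id are
# illustrative; your distribution's SCAP content will differ.
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_standard \
  --results results.xml \
  --report report.html \
  /usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml
```

Running the same kind of scan against a guest disk image, rather than a live host, is the interesting part for the cloud scenario above.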

While Puppet may get all the glory, Facter, the hard-working
information-gathering library that can, seldom gets much exciting new
functionality. However, with the release of Facter 1.7, Puppetlabs have
standardised and included a couple of useful Facter enhancements that
make it easier than ever to add custom facts to your Puppet runs.

These two improvements come under the banner of 'External Facts'. The first
allows you to surface your own facts from a static file, written either as
plain-text key=value pairs or in a specific YAML / JSON format. These static
files should be placed under /etc/facter/facts.d.
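As a quick sketch of the plain-text format (the file and fact names here are invented, and a scratch directory stands in for /etc/facter/facts.d, which needs root to write to):

```shell
# External facts are read from /etc/facter/facts.d; a scratch directory
# stands in for it in this sketch. One key=value pair per line.
factsdir=$(mktemp -d)
cat > "$factsdir/provisioning.txt" <<'EOF'
provisioned_by=kickstart
provision_date=20130612
EOF
cat "$factsdir/provisioning.txt"
```

Once a file like this is in place under /etc/facter/facts.d, `facter provisioned_by` picks the value up with no Ruby involved.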

At its simplest this is a way to surface basic, static details from
system provisioning and other similarly large events, but it's also an easy
way to include details from other daemons and cronjobs. One of my first
use cases for this was to create 'last_backup_time' and
'last_backup_status' facts that are written at the conclusion of my
backup cronjob. Having the values inserted from out of band is a much nicer
prospect than writing a custom fact that parses the cron logs.
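The tail end of that cronjob looks something like this sketch (the function name and file name are mine; the real job obviously does the backup first):

```shell
# Hypothetical helper for the end of a backup cronjob: record the outcome
# as external facts for the next facter run to pick up. The second
# argument exists only so the sketch can be exercised without root;
# in the real cronjob the facts land in /etc/facter/facts.d.
write_backup_facts() {
  status=$1
  factsdir=${2:-/etc/facter/facts.d}
  {
    echo "last_backup_time=$(date +%s)"
    if [ "$status" -eq 0 ]; then
      echo "last_backup_status=success"
    else
      echo "last_backup_status=failed"
    fi
  } > "$factsdir/last_backup.txt"
}

# At the end of the backup step:
#   write_backup_facts "$?"
```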

If that's a little too static for you then the second usage might be what
you're looking for. Any executable script dropped in the same directory
that produces one of the output formats described above will be executed
by Facter whenever it's invoked.

The ability to run scripts that provide facts and values makes
customisation easier in situations where Ruby isn't the best language for
the job. It's also a nice way to reuse existing tools or to include
information from further afield - such as the current binary log in
use by MySQL or Postgres, or the host's current state in the load
balancer.
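A minimal executable fact for that last case might look like this (the state file path and fact name are invented; the sketch writes the script to a scratch file, whereas the real thing would live in /etc/facter/facts.d and be marked executable):

```shell
# Hypothetical executable external fact. Facter runs any executable it
# finds in /etc/facter/facts.d and reads key=value pairs from its stdout.
factscript=$(mktemp)
cat > "$factscript" <<'EOF'
#!/bin/sh
state_file=/var/run/lb_state   # invented path for illustration
if [ -r "$state_file" ]; then
  echo "loadbalancer_state=$(cat "$state_file")"
else
  echo "loadbalancer_state=unknown"
fi
EOF
chmod +x "$factscript"

"$factscript"   # prints loadbalancer_state=unknown when no state file exists
```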

While there have been third-party extensions that provided this
functionality for a while, it's great to see these enhancements get
included in core Facter.

You can use qcow2 backing files as a convenient way to test what happens when you try to create exabyte-sized filesystems. Just to remind you, 1 exabyte is a million terabytes, or a pile of ordinary hard disks stacked 8 miles high.
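One way to try this (not necessarily the exact recipe, and whether your qemu build accepts a size this large may vary) is to create a huge qcow2, which costs almost no real disk space because clusters are allocated lazily, and then point libguestfs at it:

```shell
# A 1 exabyte virtual disk; the qcow2 file on disk stays tiny because
# nothing is allocated until written. File name is a placeholder.
qemu-img create -f qcow2 huge.qcow2 1E

# Hand it to libguestfs and see what happens when you try to make a
# filesystem on it. XFS nominally supports filesystems this large.
guestfish -a huge.qcow2 run : part-disk /dev/sda gpt : mkfs xfs /dev/sda1
```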

Situation: You have a Windows DVD (or ISO), but like any sane person in 2013 you don’t have a DVD drive on the computer. You want to convert the Windows DVD into a bootable USB key. There are many recipes for this online, but they all require another Windows machine and of course cannot be automated.

However with guestfish (and the always brilliant SYSLINUX doing most of the heavy lifting), this script will unpack the ISO and turn it into a bootable USB key.

Notes:

I am not going to support this script. You will need to read the script, look up the commands in the guestfish man page, and understand what it does. Any requests for help will be deleted unread.
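The script itself isn't reproduced here, but as a taste of the guestfish side of it, you can poke around inside the ISO read-only with a one-liner like this (windows.iso is a placeholder; ISOs show up as /dev/sda inside the appliance):

```shell
# List the top level of the ISO's filesystem without touching the host.
guestfish --ro -a windows.iso -m /dev/sda ls /
```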

Over time, parts of your Puppet manifests will become unneeded. You might
move a cronjob or a user into a package, or no longer need a service to be
enabled after a given release. I've recently had this use case and had two
options - either rely on comments in the Puppet code and write an out-of-band
tool to scan the code base and present a report, or record the deprecations
on the Puppet resources themselves. I chose the latter.

Below you'll find a simple metaparameter (a parameter that works with any
resource type) that adds this feature to Puppet. As this is an early
prototype I've hacked it directly into my local Puppet fork. Below you'll
see a sample resource that declares a deprecation date and message, the
code that implements it, and a simple command line test you can run to
confirm it works.

Using the metaparameter is easy enough: just specify 'deprecation' as a
property on a resource and provide a string that contains the date to start
flagging the deprecation (in YYYYMMDD format) and the message Puppet
should show. I don't currently fail the run on an expired resource, but
that would be an option.
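As a sketch of what that looks like in practice (this assumes a puppet build with the metaparameter patch applied, and the resource, message and exact string format here are illustrative):

```shell
# Illustrative only: requires a puppet fork carrying the deprecation
# metaparameter. The cron resource and message are made up.
puppet apply --verbose -e '
  cron { "old-log-shuffle":
    command     => "/usr/local/bin/log-shuffle",
    user        => "root",
    hour        => 4,
    deprecation => "20140101 Remove once the logrotate package handles this",
  }
'
```

On a run after the given date, the idea is that Puppet warns with the supplied message rather than silently managing the stale resource.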

There are some other aspects of this to consider -
Richard Clamp raised the idea of having
a native type that could indicate this for an entire class (I'd rather use
a function, but only because they are much easier to write) and Trevor
Vaughan suggested a Puppet face that could present a report of the expired,
and soon to be expired, code.

I don't know how widely useful this is but it made a nice change to
write some puppet code. The small size of the example will hopefully
show how easy it is to extend nearly every part of puppet - including
more 'complicated' aspects like metaparameters. Although not the
relationship ones, those are horrible ;) I've submitted the idea to the
upstream development list so we'll see what happens.

rsync involves using a network connection into or out of the appliance, and is therefore a lot more involved to set up. The script below shows one way to do this, by running an rsync daemon on the host, and letting the libguestfs appliance connect to it.

The script runs rsync inside the appliance, copying /home from the attached disk image out to /tmp/backup on the host. If the operation is repeated, then only incrementally changed files will be copied out. (To incrementally delete files on the target, add the deletedest:true flag).

Note you will need to open port 2999 on your host’s firewall for this to work.
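The full script isn't shown here, but its core is roughly the following sketch. The module name, paths and host address are my assumptions (169.254.2.2 is how the appliance usually reaches the host when networking is enabled with --network); the deletedest flag is the one mentioned above:

```shell
# On the host: a minimal rsync daemon serving /tmp/backup as a writable
# module, listening on the port mentioned above.
mkdir -p /tmp/backup
cat > rsyncd.conf <<'EOF'
[backup]
  path = /tmp/backup
  use chroot = false
  read only = false
EOF
rsync --daemon --port=2999 --config=rsyncd.conf

# Then let the appliance connect back to it and copy /home out of the
# disk image. -i inspects and mounts the guest's filesystems read-only.
guestfish --ro --network -a disk.img -i <<'EOF'
rsync-out /home/ rsync://169.254.2.2:2999/backup/ archive:true
EOF
```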