Wed, 26 Nov 2014

After a previous comment about "templating CloudFormation JSON from a tool
higher up in your stack" I had a couple of queries about how I'm doing this.
In this post I'll show a small example that explains the workflow. We're
going to create a small CloudFormation template, with a single embedded
Jinja2 directive, and call it from an example playbook.

This template creates an S3 bucket resource and dynamically sets the
"DeletionPolicy" attribute based on a value in the playbook. We use a
file extension of '.json.j2' to distinguish our pre-expanded templates
from those that need no extra work. The line of interest in the template
itself is "DeletionPolicy": "{{ deletion_policy }}". This is a
Jinja2 directive that Ansible will interpolate and replace with a literal
value from the playbook, helping us move past
a CloudFormation Annoyance, Deletion Policy as a Parameter.
Note that this template has no parameters; we're doing the work in
Ansible itself.
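A minimal sketch of such a template, saved as something like
's3-bucket.json.j2' (the resource name and description are placeholders of
mine), might look like this:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "S3 bucket with an Ansible controlled DeletionPolicy",
  "Resources": {
    "TestBucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "{{ deletion_policy }}"
    }
  }
}
```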

Now we move on to the playbook. The important part of the preamble is the
deletion_policy variable, where we set the value for later use
in the template. We then move on to the two essential tasks and one
housekeeping task.

Because the Ansible CloudFormation module doesn't have an inbuilt option to
process Jinja2 we create the stack in two stages. First we
process the raw Jinja2 JSON document and create an intermediate file with
the directives expanded. We then run the CloudFormation module
using the newly generated file.
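Sketched as a playbook, with the stack name, region and file paths being my
own placeholders, the two stages and the housekeeping task look like this:

```yaml
---
- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    deletion_policy: Retain    # or 'Delete' for throw away stacks

  tasks:
    # Stage one: expand the Jinja2 directives into plain JSON
    - name: Generate CloudFormation template from the Jinja2 version
      template: src=s3-bucket.json.j2 dest=/tmp/s3-bucket.json

    # Stage two: create the stack from the expanded file
    - name: Create the S3 bucket stack
      cloudformation: >
        stack_name=test-s3-bucket
        state=present
        region=eu-west-1
        template=/tmp/s3-bucket.json

    # Housekeeping: remove the intermediate file
    - name: Remove the expanded template
      file: path=/tmp/s3-bucket.json state=absent
```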

We've only covered a simple example here but if you're willing to
commit to preprocessing your templates you can add a lot of flexibility,
and heavily reduce the line count, using techniques like this. Creating
multiple subnets in a VPC, adding route associations and such is another
good place to introduce these techniques.

Mon, 24 Nov 2014

You can create some high value resources using CloudFormation that you'd
like to ensure exist even after a stack has been removed. Imagine being the
admin who accidentally deletes the wrong stack and has to watch as the RDS
master, and all the prod data, slowly vanishes into the void of AWS
reclaimed volumes. Luckily AWS provides a way to reduce this risk, the
DeletionPolicy attribute. By specifying this on a resource you can
ensure that if your stack is deleted then certain resources survive and
function as usual. This also helps keep down the number of stacks you have
in the "DELETE_FAILED" state if you try and remove a shared security group
or such.

Once you start sprinkling this attribute through your templates you'll
probably feel the need to have it vary between staging and prod. While it's
a lovely warm feeling to have your RDS masters in prod be a little harder to
accidentally kill, you'll want a clean tear down of any frequently created
staging or developer stacks, for example. The easiest way to do this would
be to make the DeletionPolicy take its value from a parameter, probably
using code like that below.
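A sketch of the attempt (the parameter and resource names are mine, and the
RDS properties are omitted for brevity):

```json
{
  "Parameters": {
    "DeletionPolicyParam": {
      "Type": "String",
      "AllowedValues": [ "Delete", "Retain" ],
      "Default": "Delete"
    }
  },
  "Resources": {
    "DBInstance": {
      "Type": "AWS::RDS::DBInstance",
      "DeletionPolicy": { "Ref": "DeletionPolicyParam" }
    }
  }
}
```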

Unfortunately this doesn't
work. You'll get an error that looks something like
cfn-validate-template: Malformed input-Template format error: Every
DeletionPolicy member must be a string. if you try to validate your
template (and we always do that, right?).

There are a couple of ways around this; the two I've used are
templating your CloudFormation JSON from a tool higher up in your stack,
Ansible for example. The downside is that your templates are unrunnable
without expansion. A second approach is to double up on some resource
declarations and use CloudFormation
Conditionals. You can then create the same resource, with the
DeletionPolicy set to the appropriate value, based off the value of a
parameter. I'm uncomfortable with this approach because of the risk of
resource removal on stack updates if the wrong parameters are ever passed
to your stack, so I prefer the first option.
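A sketch of the conditional approach, with the condition and resource names
invented for illustration - the same bucket is declared twice and only one
copy is created, based on a parameter:

```json
{
  "Conditions": {
    "RetainResources": {
      "Fn::Equals": [ { "Ref": "Environment" }, "prod" ]
    },
    "DeleteResources": {
      "Fn::Not": [ { "Condition": "RetainResources" } ]
    }
  },
  "Resources": {
    "BucketRetained": {
      "Type": "AWS::S3::Bucket",
      "Condition": "RetainResources",
      "DeletionPolicy": "Retain"
    },
    "BucketDeletable": {
      "Type": "AWS::S3::Bucket",
      "Condition": "DeleteResources",
      "DeletionPolicy": "Delete"
    }
  }
}
```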

Even though there are ways to work around this limitation it really feels
like it's something that' Should Just Work' and as a CloudFormation user
I'll be a lot happier when it does.

Sat, 22 Nov 2014

While AWS CloudFormation is one of the best ways to ensure your AWS
environments are reproducible it can also be a bit of an awkward beast to
use. Here are a couple of simple time saving tips for refining your CFN template parameters.

The first one is also the simplest: always define at least a
MinLength property on your parameters and ideally an
AllowedValues or AllowedPattern. This ensures
that your stack will fail early if no value is provided. Once you start
using other tools, like Ansible, to glue your stacks together it becomes
very easy to create a stack parameter that has an undefined value.
Without one of the above properties CloudFormation will happily use the
null value and you'll either get an awkward failure later in the stack
creation or a stack that doesn't quite work.
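As a sketch, with the parameter names being my own examples:

```json
"Parameters": {
  "KeyName": {
    "Type": "String",
    "MinLength": "1",
    "Description": "EC2 keypair to install on the instances"
  },
  "Environment": {
    "Type": "String",
    "AllowedValues": [ "dev", "staging", "prod" ]
  }
}
```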

The second tip is for the parameter's type property. While it's
possible to use a 'Type' of 'String' and an 'AllowedPattern' to ensure a
value looks like an AWS resource, such as a subnet ID, the addition of AWS-
specific types, available from November 2014, allows you to get a lot more
specific:
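For example (the parameter names are placeholders of mine, the types are
from the AWS-specific parameter type list):

```json
"Parameters": {
  "AppSubnet": {
    "Type": "AWS::EC2::Subnet::Id"
  },
  "AppSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup::Id"
  },
  "KeyName": {
    "Type": "AWS::EC2::KeyPair::KeyName"
  }
}
```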

This goes one step beyond 'Allowed*' and actually verifies that
the resource exists in the user's account. It doesn't do this at
the template validation stage, which would be -really- nice, but it does it
early in the stack creation so you don't have a long wait and a failed,
rolled back, set of resources.

Neither of these tips will prevent you from making the errors, or
unfortunately catch them on validation, but they will surface the issues
much more quickly on actual stack creation and make your templates more
robust.
Here's a
list of the available AWS Specific Parameter Types, in the table under
the 'Type' property and you can find more details in the 'AWS-Specific
Parameter Types' section.

While it's nice to see the output in facter, you need to make a small change
to your config file to use these facts in puppet. Set stringify_facts =
false in the [main] section of your puppet.conf file and you can
use the new facts inside your manifests.
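That is, assuming the default config location:

```ini
# /etc/puppet/puppet.conf
[main]
    stringify_facts = false
```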

Would I use this in general production? No, never again, but it's a nice
reminder of how easy facter is to extend. A couple of notes if you
decide to play with this fact - I deliberately filter out non-ansible
facts. There was something odd about seeing facter facts nested inside
Ansible ones inside facter. If you foolishly decide to use this heavily,
and you're running puppet frequently, adding a simple cache for the
ansible results might be worth looking at to help your performance.

Puppet's always had a couple of little inconsistencies when it comes to the
file and template functions. The file function has always been able to
search for multiple files and return the contents of the first file found but it
required absolute paths. The template
function accepts module based paths but doesn't allow for matching on the
first found file, although that can be fixed with
the Puppet
Multiple Template Source Function.

One of the little niceties that came with Puppet 3.7 is an easily missed
improvement to the file function that makes using it
easier and more consistent with the template function. In earlier puppet
versions you called file with absolute paths, like this:
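For example, with paths and a module name invented for illustration:

```puppet
# Puppet < 3.7: file() only accepts absolute paths; the first
# file found is returned
$sshd_config = file(
  '/etc/puppet/modules/ssh/files/sshd_config.rhel',
  '/etc/puppet/modules/ssh/files/sshd_config'
)
```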

Thanks to a code submission from Daniel
Thornton (which fixes an issue that's been logged since at least
2009) you can now call the file function in the same way as you'd
use template, while retaining support for matching the first
found file.
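Using the same hypothetical module as before, the call now looks like a
template one:

```puppet
# Puppet 3.7+: module based paths, still returning the first
# file found
$sshd_config = file(
  'ssh/sshd_config.rhel',
  'ssh/sshd_config'
)
```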

Although most puppet releases come with a couple of 'wow' features,
sometimes it's the little ones like this, which add consistency to the
platform and help clean up and abstract your modules, that you appreciate
more in the long term.

Sat, 04 Oct 2014

In the past if you wanted to run your own puppet-lint checks there was no
official, really clean way to distribute them outside of the core code. Now,
with the 1.0 release of puppet-lint
you can write your own, external, puppet-lint checks and make them easily
distributable.

I spent a little bit of time this morning reading through the existing 3rd
party community plugins and after porting a private
absolute
template path check over to the new system I have to say that
rodjek has done an excellent job
with both the ease of writing your own checks and the quality of the developer tutorial.
If you have any local style rules then now's a great time to get them
represented in your puppet-lint runs.

Thu, 11 Sep 2014

A little while ago, in a Twitter conversation many hops away, a few of us
discussed the Puppet Certified Professional exam and its topic coverage.
Specifically, how much of it was focused on Puppet Enterprise (PE) and
whether it would either dissuade users of purely FOSS Puppet or heavily
impact their chance of passing if they'd never used PE.

While I stand by my views I began to worry that my
knowledge of the syllabus was based only on hearsay and the
practice exam questions,
and that I was being overly harsh and possibly spreading
misinformation through my own ignorance. So I booked a place and took the
exam a couple of days later.

The exam is multiple choice and most questions are quite direct. While
there were tricky questions I only encountered one that could
be either a very subtle trick or a mistake, and I've reported that upstream
and received a positive response about it being investigated. The
questions I had pointed heavily towards topics that you'd have to use
puppet on a semi-regular basis to know the answers to.

In terms of candidate preparation, other than the obvious choice of taking
PuppetLabs training courses, I think that being comfortable with all the
material in Pro Puppet and having a decent six to twelve months of hands-on
experience with Puppet, MCollective and PuppetDB will cover most of the
scope. This also requires knowing how puppet fits together and
understanding how it works, not just being able to write modules and
work with the DSL. In hindsight I'd have scored higher by
downloading the Puppet Enterprise VM and spending a few hours working
through the GUI features. Instead I went in having never used PE and still
had a decent pass. I'd also note that the practice questions mentioned
above are an accurate illustration of the real exam questions format and
difficulty.

As I've only just taken the exam, and given that I have more than enough
puppet experience on my CV already, I don't think the cert will add much to
my employability, but for people with fewer years of puppet who are looking
to validate their skills it's not a bad way to spend an hour. Doubly so if
you can take the test for free at a local puppetcamp; in case you needed any
more reasons to attend one.

It is amazing how many small commitments and fragments of an online
presence you can collect over years of being involved in different
projects and user groups. I've ended up hosting planets, user group
sites, submission forms (and other scripts), managing twitter
announcement accounts, pushing tarballs (don't ask) and running (and
owning) more domains than I could ever really want or do anything useful
with. After an initial audit of how difficult it'd be to move some of my
public servers I've realised that something has to change.

I've decided to take a deliberate step back and reduce my involvement
in a number of projects, and my general online footprint, to levels that
are comfortable and maintainable while leaving me enough time to get
involved in some newer projects, technology and groups that are relevant
to me. Although I slowly began the cleaning process a few months
ago, initially by transferring domains and in some cases even
deleting websites and removing their DNS, there's still quite a lot
of cruft to trim.

Like most full time sysadmins my personal systems, which thanks to
Debian and Bytemark have been in use through many years and in-place release
upgrades, are a lot more disorderly, and manual, than I'd accept at work
or even in my home lab. A clean-up like this seems to be the perfect
time to move to newer, more appropriate, platforms like nginx and puppet
modules (yes I have puppet code that predates modules) and replace custom
nagios wrapping with serverspec and such. Some of the evolved
configurations with dozens of complicated edge cases are going to be
difficult to migrate and I'm trying to bring myself to just kill a
number of them, even if it leaves certain links now dead. This site
(unixdaemon.net) will probably be one of the biggest victims of this.

What have I learned from this audit and clean up? First, don't make open
ended commitments. As an example I run one site for a group that I've
not even attended for over 6 years. Secondly I no longer have the free
time I once did and so it has to count for more. I need to get more
proactive about handing things off that I'm no longer passionate about.

Wed, 23 Jul 2014

Once we started linking multiple CloudFormation stacks together with
Ansible we started to feel the need to query Amazon Web Services for
both the output values from existing CloudFormation stacks and certain
other values, such as security group IDs and Elasticache Replication
Group Endpoints. We found that the quickest and easiest way to gather
this information was with a handful of Ansible Lookup Plugins.

I've put the code for the more generic Ansible AWS Lookup Plugins
on github and even if you're an Ansible user who's not using AWS they
are worth a look just to see how easy it is to write one.

In order to use these lookup plugins you'll want to configure both your
default AWS credentials and, unless you want to keep the plugins
alongside your playbooks, your lookup plugins path in your Ansible
config.

First we configure the credentials for boto, the underlying AWS library
used by Ansible.
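The two config fragments below are sketches - the keys and paths are
placeholder values of mine:

```ini
# ~/.boto - default credentials for boto, the AWS library
# used by Ansible
[Credentials]
aws_access_key_id = AKIAEXAMPLEEXAMPLE
aws_secret_access_key = examplesecretkeyexamplesecretkey
```

And in your Ansible config, point the lookup plugin path at wherever you
checked the plugins out:

```ini
# ansible.cfg
[defaults]
lookup_plugins = /path/to/ansible-aws-lookups
```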

Tue, 25 Mar 2014

Constructing a large, multiple application, virtual datacenter with
CloudFormation can quickly lead to a sprawl of different stacks. The
desire to split things sensibly, delegate control of separate tiers and
loosely couple as many components as possible can lead to a large number
of stacks, lots of which need values from stacks created earlier in the
run order. While it's possible to do this with the native AWS
CloudFormation command line tools, or even some clever bash
(or Cumulus),
having a strong, higher level tool can make life a lot easier and
reproducible. In this post I'll show one possible way to manage
interrelated stacks using Ansible.

We won't be delving into the individual templates used in this example.
If you're having this kind of issue with CloudFormation then you
probably have more than enough of your own to use as examples. Instead,
I'll show a basic Ansible playbook for managing three related
stacks.

The first part of our playbook should be familiar to most Ansible users.
We set up where to run the playbook, how to connect and ensure we don't
spend time gathering facts. We then define the variables that we'll be
using as parameters to a number of stacks. The ability to specify
literals in a single place was the first benefit I saw when converting
a project to Ansible. This may not sound like a major win but being able
to change the AMI ID in a single place, or even store it in an external
file that our build system can automatically update, is something I'd
find difficult to give up.
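The preamble, sketched with variable names and values of my own choosing:

```yaml
---
- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    region: eu-west-1
    ami_id: ami-123example       # change the AMI here, once
    stack_prefix: dev-deanw      # per-developer stack name prefix
    notification_email: alerts@example.org
```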

Now we'll move to the first of our Ansible tasks, a CloudFormation stack
represented as a single Ansible resource. The underlying template
creates a basic SNS resource we'll later use in all our auto-scaling
groups.
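A sketch of that task, using the old inline module syntax - the template
path and parameter names are placeholders of mine:

```yaml
  tasks:
    - name: Create the SNS email topic stack
      cloudformation: >
        stack_name={{ stack_prefix }}-sns-email-topic
        state=present
        region={{ region }}
        template=templates/sns-email-topic.json
      args:
        template_parameters:
          NotificationEmail: "{{ notification_email }}"
      register: sns_stack
```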

The 'args:' section contains the values we want to pass in to the
template. Here we're only passing a single value that we defined earlier
in the 'vars:' section. We'll see more complicated examples of this
later. We also register the output from the CloudFormation action. This
includes any values we specify as "Outputs" in the template and provides
a nice way to deliberately define what we're exposing from our template.
The alternative is to pull out arbitrary values from a given resource
created in a previous stack but that's a hefty breach of encapsulation
and will often bite you later when the templates change.

The Create Security Groups CloudFormation task doesn't really have
anything interesting from an Ansible perspective; we run it, create the
groups and gather the outputs using 'register' for use in our next
template.

The 'Create Webapp' example below shows most of the basic CloudFormation
resource features in a single task. We use variables defined at
the start of the playbook to reduce duplication of literal strings. We
prefix the stack names to allow multiple developers to each build full
sets of stacks without duplicate stack name conflicts while keeping
grouping simple in the AWS web dashboard.
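Sketched out, with the output names ('EmailSNSTopicARN', 'WebappSGId') and
the registered variable names being illustrative assumptions of mine:

```yaml
    - name: Create the webapp stack
      cloudformation: >
        stack_name={{ stack_prefix }}-webapp
        state=present
        region={{ region }}
        template=templates/webapp.json
      args:
        template_parameters:
          AMIId: "{{ ami_id }}"
          # values registered from the earlier stacks
          ASGSNSArn: "{{ sns_stack.stack_outputs.EmailSNSTopicARN }}"
          WebappSG: "{{ security_groups_stack.stack_outputs.WebappSGId }}"
      register: webapp_stack
```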

In the args section we also use the return values from our previous
stacks. The nested value access is a little verbose but it's easy to
pick up and being able to see all the possible values when running
Ansible under debug mode makes things a lot easier. We also had the need
to pull down output values from stacks created outside of Ansible, so I
wrote a simple
Ansible CloudFormation lookup plugin.

So what does Ansible gain us as a stack management tool? In terms of
raw CloudFormation it provides a nice way to remove boilerplate
literals from each stack and define them once in the
'vars' section. The ability to register the output from a
stack and then use it later on is an essential one for this kind of
stack building and retrieving existing values as a pythonish hash
is much easier than doing it on the command line. As for added power, it
should be easier to implement AWS functionality that's currently
missing from CloudFormation as an Ansible module than a CloudFormation
external resource (although more on that when I actually write one) and
performing other out of band tasks, letting your ticketing system know
about a new stack for example, is a lot easier to integrate into Ansible
than trying to wrap the CLI tools manually.

I've been using Ansible for stack management in a project that involves
over a dozen separate moving parts for the last month and so far it's
been working fine with minimal pain.

Sat, 22 Mar 2014

I've been doing my usual quarterly sweep of the always too full
bookshelves and hit the usual dilemma of what to keep, what to donate to
charity and what to recycle. Among the technical books in this batch is
the 'Sendmail Cookbook', something I've always kept as a good luck charm
to ward off the evil of needing to work with mail servers with m4 based
configuration languages.

Sendmail is one of those projects that I've not kept up with over the
years. I have no idea how much has changed since the book was published
over a decade ago, 2003 in this case, so I don't know if this is a
useful book to pass on or if it's dangerously out of date and should be
removed from circulation. It'd be handy if the larger projects
maintained a page of books related to the project and a table of how
relevant the material is in relation to different versions.

This would not only help me prune my shelves of older, now out of date
books, but would help people new to a project pick books that were still
relevant for the versions they need to learn.

The native CloudFormation command line tools work well enough for a stack
or two, but they quickly become painful. The two commands below each create
stacks that depend on values from resources that have been defined in a
previous stack. You can spot these values by their unfriendly appearance,
such as 'rtb-9n0tr34lac55' and 'subnet-e4n0tr34la'.
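A sketch of the kind of invocations involved, using the old Java-based
CloudFormation CLI tools; the stack names, template files and parameter
names are placeholders of mine:

```shell
cfn-create-stack dev-routetables \
  --template-file routetables.json \
  --parameters "VPCId=vpc-9n0tr34l;PublicRouteTable=rtb-9n0tr34lac55"

cfn-create-stack dev-webapp \
  --template-file webapp.json \
  --parameters "SubnetId=subnet-e4n0tr34la;PublicRouteTable=rtb-9n0tr34lac55"
```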

When building a large, multi-tier VPC you'll often find yourself needing to extract
output values from existing stacks and pass them in as parameters to
dependent stacks. This results in a lot of repeated literal strings and
boilerplate in your commands and will soon cause you to start doubting
your approach.

The real pain came for us when we started adding extra availability
zones for resilience. A couple of my co-workers were keeping their stuff
running with bash and python + boto but the code bases were starting to
get a little creaky and complicated and this seemed like a problem that
should have already been solved in a nice, declarative way.
It was about the point when we decided to add an extra subnet to a number
of tiers that I caved and went trawling through github for somebody
else's solution. After some investigation I settled on
Cumulus as the first
project to experiment with as a replacement for our ever growing, hand
hacked, creation scripts. To pay Cumulus its proper respect, it did make
life a lot easier at first.

The code snippets below show an example set of stacks that were
converted over from raw command lines like the above to Cumulus yaml
based configs. First up we have the base declaration and a simple stack
definition.
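Reconstructed as a sketch - the exact yaml layout is from memory, so treat
the Cumulus docs as authoritative:

```yaml
locdsw:
  region: eu-west-1
  stacks:
    sns-email-topic:
      cf_template: sns-email-topic.json
      params:
        AutoScaleSNSTopic:
          value: testymctest
```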

Each of the keys under 'stacks:' will be created as a separate
CloudFormation stack by cumulus. Their names will be prefixed with
'locdsw', taken from the first line of our example, and they'll be
placed inside the 'eu-west-1' region. The configuration above will
result in the creation of a stack called 'locdsw-sns-email-topic'
appearing in the CloudFormation dashboard.

The stack's resources are defined in the template specified
in cf_template. Our example does not depend on existing stacks and takes
a single parameter, AutoScaleSNSTopic, with a value of 'testymctest'.
Cumulus has no support for variables so you'll find yourself
repeating certain parameters, like AMI ID and key ID, throughout the
configuration.

For a while we had an internal branch that treated the
CloudFormation templates as jinja2 templates. This enabled us to remove
large amounts of duplication inside individual templates. These changes
were submitted upstream but one of the goals of the Cumulus project is
that the templates it manages can still be used by the native
CloudFormation tools, so the patch was (quite fairly) rejected.

Let's move on to the second stack defined in our config. The
point of interest here is the addition of an explicit dependency on the
sns-email-topic stack. Note that it's not referred to using the prefixed
name, which can be a point of confusion for new users.
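A sketch of the second stack definition, sitting under the same 'stacks:'
key; the Owner and AMIId values are placeholders of mine:

```yaml
    webapp:
      cf_template: webapp.json
      depends: sns-email-topic        # not 'locdsw-sns-email-topic'
      params:
        Owner:
          value: deanwilson
        AMIId:
          value: ami-123example
        ASGSNSArn:
          source: sns-email-topic
          type: output
          variable: EmailSNSTopicARN
```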

The webapp params section contains two different types of values. Simple
ones we've seen before, 'Owner' and 'AMIId' for example, and composite
ones that reference values that other stacks define as outputs. Let's
look at ASGSNSArn in a little more detail.

Here, inside the webapp stack declaration, we look up a value defined in
the output of the previously executed sns-email-topic template. From the
CloudFormation Outputs for that template we retrieve the value of
EmailSNSTopicARN. We then pass this to the webapp.json template as the
ASGSNSArn parameter on stack creation. If you need to pull a parameter
in from an existing stack that was created in some other way you can
specify it as 'source: -fullstackname'. The '-' makes it an absolute
name lookup, cumulus won't prefix the stackname with locdsw for
example.

Cumulus met a number of my stack management needs, and I'm still using
it for older, longer lived stacks such as monitoring, but because of its
narrow focus it began to feel restricting quite quickly. I've started to
investigate Ansible as a possible replacement as it's a more generic tool
and I'm in need of flexibility that'd feel quite out of place in
cumulus.

In terms of day to day operations the main issues we hit included the
need to turn on ALL the debug, both cumulus and boto, to see why stack
creations failed. A lot of the AWS returned errors were being caught
and replaced by generic, unhelpful error messages at any filter level
greater than debug. Running under debug results in a LOT of output,
especially when boto is idle polling, waiting for the stack creation
to complete so it can begin the next one. The lack of any variables or
looping was also an early constraint. The first answer to this seemed to
be pushing the complexity down to the templates and writing large
mapping sections, increasing duplication of literals between templates
and requiring a lot of Fn::FindInMap calls. The second approach was to have
multiple configs. This was less than ideal due to the number of
permutations: environment (dev, stage, live), region and, in
development, which developer was using it. The third option, a small
pre-processor that expanded embedded Jinja2 into a CloudFormation
template, added another layer between writing and debugging and so
didn't last very long.

If you're running a small number of simple templates then Cumulus might
be the one tool you need. For us, Ansible seems to be a better fit, but
more about that in the next post.

Tue, 04 Mar 2014

Once we started extracting applications into different logical
CloudFormation stacks and physical templates, we began to notice quite
a lot of duplication in our json when it came to declaring
IAM rules. Some
of our projects store their puppet, hiera and rpm files in restricted S3
buckets so allowing stacks access to them based upon environment,
region, stack name and other criteria quickly becomes quite long-winded.
After looking at a couple of dozen application templates and finding
that over 30% of the json was IAM based it was time to find a different
approach.

One of the CloudFormation techniques I'd seen mentioned but never used
before was nested CloudFormation stacks.
This allows you to define an entire stack as just another resource in
your template. Here's some example json that does this:
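Something along these lines - the bucket URL and parameter names are
placeholders of mine, while 'IAMRolesStack' matches the resource name
referenced later:

```json
"Resources": {
  "IAMRolesStack": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": "https://s3-eu-west-1.amazonaws.com/example-bucket/iam/v1/iam-roles.json",
      "Parameters": {
        "Environment": { "Ref": "Environment" },
        "ApplicationName": "webapp"
      }
    }
  }
}
```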

You can see that a stack is declared in the same manner as all other
resources. The 'TemplateURL' property must point to a URL that hosts a
complete, valid CloudFormation template. This allows you to develop the
nested stack in the same way as you'd progress your actual application
templates and test it in isolation. For my experiments I found it
easiest to store them in S3 under a basic hierarchy with a little
versioning to allow multiple versions of the IAM rules to be in use at
once across the stacks. The other property in the example is
'Parameters'. These are passed to the sub-stack at creation time as
actual parameters and are what makes this approach so flexible and
powerful.

Inside the nested stack template we define an AWS::IAM::Role,
an AWS::IAM::InstanceProfile and a number of AWS::IAM::Policy types that are
abstracted to only allow access for one app/environment combination at a
time. We do this using the parameters we pass in as values at different
levels of the hierarchy. This way we can ensure that every application
using a specific version of the IAM roles gets exactly the same
permissions while not bulk pasting it into each application's JSON
template or hard coding any of the application specific values. It's
also worth noting that as stacks are given "CloudFormationed" IDs that
include some randomness you can have multiple versions of the nested
stack at once with no overlap or conflicts between apps.

You can see a small extract from our sample IAM template, with the parameters interpolated into the path, here -
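A hedged reconstruction of what such an extract could look like, with the
bucket, policy and role names invented for illustration:

```json
"S3AccessPolicy": {
  "Type": "AWS::IAM::Policy",
  "Properties": {
    "PolicyName": "s3-read-access",
    "Roles": [ { "Ref": "InstanceRole" } ],
    "PolicyDocument": {
      "Statement": [ {
        "Effect": "Allow",
        "Action": [ "s3:GetObject" ],
        "Resource": { "Fn::Join": [ "", [
          "arn:aws:s3:::example-config-bucket/",
          { "Ref": "Environment" }, "/",
          { "Ref": "ApplicationName" }, "/*"
        ] ] }
      } ]
    }
  }
}
```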

Accessing nested stack outputs is as simple as a call to Fn::GetAtt with
the resource name of the nested stack as the first argument
(IAMRolesStack as seen in our first code snippet) and the output's name
as part of the second.
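For example, assuming the nested stack declares an output named
'InstanceProfileName' (an invention of mine), you'd retrieve it like this:

```json
"IamInstanceProfile": {
  "Fn::GetAtt": [ "IAMRolesStack", "Outputs.InstanceProfileName" ]
}
```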

So what did we get from this? A few very worthwhile things. We removed
a LOT of boilerplate from all our application templates. This also makes
CloudFormation application templates easier to create as only a few
people need in-depth knowledge of our IAM rules and bucketing
scheme; application templates can focus on the application. It's also
easier to confirm that applications have the same access rights based on
the S3 bucket used, rather than diffing through lots of subtly different
IAM resources.

I'm using this technique on a couple of medium size projects at the
moment and so far it seems like a good way to overcome IAM json
spaghetti with no large drawbacks.

Sat, 01 Mar 2014

Structured facts in facter
had become the Puppet community's version of 'Duke Nukem Forever',
something that's always been just around the next corner. Now that
the facter 2.0.1 release candidate is out you can finally get
your hands on an early version and do some experimentation.

First we grab a version of facter 2 that supports structured facts from puppetlabs -
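The commands below are a sketch - the URL and version string are
illustrative, so check the puppetlabs downloads page for the actual
release candidate tarball location:

```shell
wget https://downloads.puppetlabs.com/facter/facter-2.0.1-rc1.tar.gz
tar -xzf facter-2.0.1-rc1.tar.gz
cd facter-2.0.1-rc1

# run the unpacked facter directly
ruby -Ilib bin/facter
```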

This is the part where we can be underwhelmed: it's all still flat.
Don't let the lack of nested facts dishearten you though. The Puppetlabs
people have done all the hard work of implementing structured facts
support, they've just not converted any showcase facts over yet. Instead
of waiting for an official structured fact let's add our own and have a
little play.

As we're experimenting with a throw away environment we'll drop the
structured fact directly into our expanded archive. In a real
environment you'd never do this; you'd either use FACTERLIB or deploy
your modules properly with puppet as Luke intended.
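A sketch of a structured yumplugins-style fact - the config directory,
file format handling and fact name are my own simplifications, and the
hash-building logic is kept in a plain method so it can be exercised
outside of facter:

```ruby
# yum_plugins.rb - a structured fact returning a hash of yum
# plugins and whether each is enabled.

# Pure helper: scan a pluginconf.d style directory and build the
# nested structure the fact will return.
def yum_plugin_states(config_dir = '/etc/yum/pluginconf.d')
  Dir.glob(File.join(config_dir, '*.conf')).sort.each_with_object({}) do |conf, states|
    name = File.basename(conf, '.conf')
    # a plugin is enabled when its config contains 'enabled = 1'
    enabled = !!(File.read(conf) =~ /^\s*enabled\s*=\s*1/)
    states[name] = { 'enabled' => enabled }
  end
end

# Only register the fact when running under facter itself.
if defined?(Facter)
  Facter.add(:yumplugins) do
    confine :osfamily => 'RedHat'
    setcode { yum_plugin_states }
  end
end
```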

Well, our first TODO will be to determine how to show structured facts
as strings, but we'll defer that for now as we really want to see some
deep nesting. Assuming you're on a RedHat osfamily host you can run
facter with the yaml output, otherwise you'll have to settle for the
sample outputs below:

Success! Structured fact output! From (nearly) Puppet! Of course,
this is only a release candidate for Facter 2 so we're not
production ready yet but as a taster of what's coming and a way to
get ahead and start converting your own facts it's a lovely, and
amazingly overdue, gift.

As for writing structured facts, as you can see from my
structured yumplugins fact
example there's no difference between a structured and an unstructured
one apart from the value it returns.

The Guard gem "is a command line tool to easily handle events on
file system modifications" which, simply put, means "run a
command when a file changes". While I've used a number of different
little tools to do this in the past, Guard presents a promising base to
build more specific test executors on, so I've started to integrate it
into more aspects of my workflow. In this example I'm going to show
you how to validate a CloudFormation template each time you save a
change to it.

The example below assumes that you already have the AWS CloudFormation
command line tools installed, configured and available on your path.

Now that guard is up and running open up a second terminal
to the directory you've been working in. We'll now make a couple of
changes and watch Guard in action. First we'll make a small change to the text that shouldn't break anything.

The 'FAILED: example-sns-email-topic.json' line is displayed in
less welcome red, the dialog box pops up again and we know that our
last change was incorrect. While this isn't quite as nice as having vim
running the validate in the background and taking you directly to the
erroring line, it's a lot easier to plumb into your toolchain
and gives you 80% of the benefit for very little effort. For
completeness we'll reverse our last edit to fix the template.

One last config option that's worth noting is ':all_on_start =>
false' from the Guardfile. If this is set to true then, as you'd
expect from the name, all CloudFormation templates that match the
watch will be validated when Guard starts. I find the validation to be
quite slow and I often only dip into a couple of templates, so I set
this to off. If you spend more focused time working on nothing but
templates then having this set to 'true' gives you a nice early warning in
case someone checked in a broken template. Although your git
hooks shouldn't allow this anyway. But that's a different post.

After reading through the validate errors of a couple of days work it
seems my most common issue is from continuation commas. It's just a
shame that CloudFormation doesn't allow trailing commas everywhere.

Wed, 12 Feb 2014

One of the biggest surprises of Config
Management Camp 2014 for me was how interesting Canonical's
orchestration management tool, Juju, has
become. Although I much preferred the name 'Ensemble'.

I attended the Juju session in an attempt to keep myself out of the
Puppet room and was pleasantly surprised at how much Juju had progressed
since I last looked at it. Rather than being another config management
solution, it allows you to model your systems using "charms", which can be
implemented using anything from a bash script to a set of Chef/Puppet
cookbooks/modules. It instead focuses on ensuring that they run across
your fleet in a predictable way while enforcing dependencies, even over
multiple tiers, no matter how many tools you choose to use underneath.

Listening to the presentations, it seems Juju has some very well thought
out parts. Multiple callback hooks, triggered on state changes, are used
as an orchestration back channel between different hosts, and the
services they provide flowed nicely in the demos. The web dashboard was
very polished and had some very shiny canvas magic that could be
borrowed by other tools. I also liked the command line interface for
linking different tiers and associating supporting roles, such as tying
WordPress instances to a MySQL back end. There is also some cloud
provider performance abstraction at work: you can request a
certain amount of resources and Juju will map that to the closest
instance type in whichever provider you're currently using.

I was only in the room for a couple of the talks, but both Canonical
presenters were a credit to their company. The material was well presented,
they managed to answer all the audience's questions, and you get the
impression that they'd be a nice project to work with. Hopefully I'll
have a chance to play with the platform some more in the future.

Mon, 13 Jan 2014

I'm still new to Ansible, and while it's been interesting seeing how
people are starting to use the tool, picking up bits and pieces from
different blog posts is a little too hit and miss for my learning needs.
When I spotted Ansible Configuration Management (PacktPub)
I decided to take the plunge and see if it could provide me with a more
consistent introduction. And it did.

This book makes an ideal first stop for anyone wanting to learn Ansible.
While it's a short book (92 pages, and even fewer of actual
content) it provides a very good introduction and overview of at least
your first month's experience of Ansible. While none of its coverage is
going to be the only coverage of a subject you'll ever need, it
introduces enough of the concepts and features to be the best starting
guide for Ansible I've seen so far. I found that each chapter filled in a
number of gaps in my understanding of how Ansible should be used.

If you're looking at introducing Ansible to your team, this book is far
from the worst way to do it. Its coverage is broad enough that you'll
probably get a few re-reads out of it as you bring more of your
infrastructure under Ansible control and start to evolve your needs from
basic playbooks to more advanced role composition. It's worth noting
that this isn't a cookbook; it's not going to hand-hold you through
using each of the built-in modules. For the more experienced sysadmins
looking for a quick way to learn Ansible this is a boon, as it keeps
the page count down.

I'd have liked to see more coverage of extending Ansible; the last
chapter provides a basic introduction but it's not enough for what I
need. This'd be a good subject for a second book once the testing
tool chain and such has progressed to a more mature place. Score - 7/10

Sat, 11 Jan 2014

As the Ansible/AWS investigations continue I had the need to look up
outputs from existing CloudFormation stacks. I spent ten minutes reading
through the existing lookup plugins and came up with the
Ansible CloudFormation Lookup Plugin.

I'm not sure this is going to be our final solution. Michael DeHaan suggested that
moving to a fact plugin might be better in terms of cleaner usage and
easier testing, so I'm going to at least implement a trial version of
that. I was quite surprised at how easy writing an Ansible lookup
plugin was though, even for someone with my limited Python skills.
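
To give a feel for how little code is involved, a lookup plugin of that
era boils down to a class with a run method. The sketch below is
illustrative rather than the plugin's actual source; the
'stack_name/output_key' term format and the parse_term helper are my own
assumptions for this example:

```python
# Illustrative sketch of a pre-2.0 Ansible lookup plugin that reads
# CloudFormation stack outputs via boto. Not the real plugin's source;
# the term format and helper function are assumptions.

def parse_term(term):
    """Split a 'stack_name/output_key' lookup term into its two parts."""
    stack_name, _, output_key = term.partition('/')
    return stack_name, output_key

class LookupModule(object):
    def __init__(self, basedir=None, **kwargs):
        self.basedir = basedir

    def run(self, terms, inject=None, **kwargs):
        # boto is imported lazily so the file loads without it installed;
        # it picks up AWS credentials from the usual environment variables.
        import boto.cloudformation
        conn = boto.cloudformation.connect_to_region('us-east-1')
        results = []
        for term in terms:
            stack_name, output_key = parse_term(term)
            stack = conn.describe_stacks(stack_name)[0]
            results.extend(o.value for o in stack.outputs
                           if o.key == output_key)
        return results
```

Drop a file like this into a lookup_plugins/ directory next to your
playbook and Ansible picks it up automatically.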

Once you've downloaded and installed the plugin, using it in your templates is as simple as
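
The exact term syntax depends on the plugin version, but a hypothetical
task using it might look like this (stack and output names are made up):

```yaml
- name: show an output from an existing stack
  debug: msg="VPC is {{ lookup('cloudformation', 'network-stack/VpcId') }}"
```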

It uses boto under the covers and expects to find your credentials in
environment variables. This is only a tiny chunk of code but it's
allowed us to continue on with the evaluations while gaining a little
more comfort in our ability to extend Ansible to suit our needs.

Mon, 06 Jan 2014

I picked up a copy of Learning AWS OpsWorks
during the PacktPub holiday sale. It was cheap, short and covered an AWS
product that I'd never had need to dig into and knew very little
about.

The book takes you through creating a basic stack, the layers inside it
and deploying an application to managed instances. Its coverage is
very high level and doesn't really go beyond a cursory explanation of
the services used. As you'd expect from the page count, it doesn't delve
into either the Amazon services you use or how to make Chef do your
bidding, instead sticking to its focus and giving you just enough
information to get the example working and not much else. It's worth
mentioning that the console screenshots are already out of date, so you'll
need to do a little exploring on your own as you follow the steps.

Learning AWS OpsWorks is a brief but informative high-level overview of
AWS OpsWorks and how you'd use it to create and manage basic stacks. I
don't think it's worth the full price; a Safari account would be quite
useful here. It's also very unlikely you'll need to read it more than
once, so it's not great value for money. It does however present the
concepts in an easy to understand way, so if you're looking to pick up
basic OpsWorks in a big rush it's the only real competition to the
official docs.

Fri, 03 Jan 2014

Back in November 2013 Amazon added a much requested feature to
CloudFormation: the ability to conditionally include resources or their
properties in a stack. As an example, I'm currently using this as a small
cost saving measure to ensure only my production RDS instances have
PIOPS (provisioned IOPS) applied to them, while still being able to build
each environment from a single template.

CloudFormation Conditionals
live in their own section of a CloudFormation template. In the example
below we show two ways of setting values for use in a condition: a
simple string comparison, and a longer composite comparison that
includes an 'Fn::Or'. Each of these is based on a provided parameter.
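
A sketch of such a template, with illustrative parameter, condition and
resource names, might look like this; the 'Condition' key on the resource
is what ties it all together:

```json
{
  "Parameters": {
    "Environment": {
      "Type": "String",
      "Default": "dev",
      "AllowedValues": [ "dev", "staging", "prod" ]
    }
  },
  "Conditions": {
    "InProd": { "Fn::Equals": [ { "Ref": "Environment" }, "prod" ] },
    "InDevOrStaging": {
      "Fn::Or": [
        { "Fn::Equals": [ { "Ref": "Environment" }, "dev" ] },
        { "Fn::Equals": [ { "Ref": "Environment" }, "staging" ] }
      ]
    }
  },
  "Resources": {
    "ProdOnlyBucket": {
      "Type": "AWS::S3::Bucket",
      "Condition": "InProd"
    }
  }
}
```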

The key part of this snippet is the 'Condition' line. If the value on
the right-hand side evaluates to true the resource is created when the
template runs. If the condition is false the entire resource is skipped.
As a second example we'll show how to conditionally include a single
property value, in this case the 'Iops' property of an AWS::RDS::DBInstance resource.
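
A sketch of that property, wrapped in an otherwise-minimal set of RDS
properties (all the surrounding values are illustrative), might be:

```json
"Database": {
  "Type": "AWS::RDS::DBInstance",
  "Properties": {
    "AllocatedStorage": "100",
    "DBInstanceClass": "db.m1.large",
    "Engine": "MySQL",
    "MasterUsername": "admin",
    "MasterUserPassword": "changeme",
    "Iops": { "Fn::If": [ "InProd", "1000", { "Ref": "AWS::NoValue" } ] }
  }
}
```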

If InProd is true the Iops property is included and set to 1000. If the
value of InProd is false then the special value 'AWS::NoValue' is
returned, which causes the property to be excluded entirely.

CloudFormation Conditions are quite a new feature, and I was a little
late in discovering them, so we've only just started to use them in our
templates. They are however worth learning about as they provide a
flexible new way to structure your templates.