Overview

A typical web application consists of several components:

The frontend code, written in HTML, CSS and JavaScript, which runs in the
user's browser

The web application that handles HTTP requests and serves the frontend
code

A backend database where data is stored

A reverse proxy or load balancer which sits in front of the web
application (or hosts it via FastCGI or a mod_* module in Apache)

Various other service applications (email, queues, workers, ...) which
may themselves be RESTful HTTP servers, or implement some other RPC
mechanism (Thrift, Protocol Buffers, ...)

In general, web developers focus on the frontend and web application code (with
some SQL thrown in for good measure). Configuring services, standing up
machines, monitoring performance and downtime, and deploying new releases are
all activities they'd rather not spend a lot of time on. In larger companies
there is usually an operations team which can manage many of these tasks, but
an operations team is not always available. So how can a web developer perform
these activities without them becoming a huge burden?

The Badgerodon Stack is designed to solve three of these problems: it handles
the configuration of applications via environment variables or config files,
it manages the lifecycle of applications and the machines they run on, and it
handles deployment.

More of a methodology than an application, the Badgerodon Stack involves
several concepts:

Stack

The collection of software needed to run a complete web application. For
example: the LAMP stack (Linux, Apache, MySQL, PHP).

Application

An executable program. Applications have a binary
(/bin/ls), zero or more options passed when executed
(/bin/ls -a /some/directory) and a set of
environment variables (PATH=/home/user/bin). Applications are
called services if they are meant to run continuously.

Archive

A .tar.gz or .zip file which contains an
application and any other files it needs to run. The Badgerodon Stack
extracts the contents of this archive and executes the application according
to the machine config file.

Machine Config File

A description of all the applications a machine should run: where those
applications come from, the binary path and options used to start each
application, a set of environment variables to pass to the application and
possibly symlink definitions or even directly embedded config files.

Usage

Usage is straightforward. First create a config file which describes all the
applications you would like to run on the machine:

applications:
  - name: example
    source: s3://example-bucket/stack/releases/example/v123.tar.gz
    service:
      command: [bin/example, some, argument]
      environment:
        MY_VAR: "somevalue"

Next run the stack application with options that direct it to watch this
config file and react accordingly whenever it is changed:

stack watch s3://example-bucket/stack/machine-1.yaml

All of the applications will be downloaded and extracted, and services will be
created and started (using whatever service mechanism is available on the
machine: systemd, Upstart, etc.). If the config file changes, applications
will be updated so that they are brought into alignment with the config file.
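Conceptually, each pass of the watcher can be thought of as a diff between the desired state in the config file and the currently running applications. The following is only an illustrative sketch of that idea, not the tool's actual implementation:

```python
def reconcile(current, desired):
    """Diff running applications (current) against the config file (desired).

    Both arguments map application name -> release version.
    Returns the names of applications to install, update, and remove.
    """
    install = [name for name in desired if name not in current]
    update = [name for name in desired
              if name in current and current[name] != desired[name]]
    remove = [name for name in current if name not in desired]
    return install, update, remove


# e.g. after bumping "example" to v124 and adding a "worker" application
print(reconcile(
    {"example": "v123", "redis": "2.8.19"},
    {"example": "v124", "redis": "2.8.19", "worker": "v1"},
))
# → (['worker'], ['example'], [])
```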

This makes the Badgerodon Stack a pull-based deployment system: to release
a new version of your application, build and bundle it as a new archive
(example/v124.tar.gz) and then update the config file.
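For example, taking the config from the Usage section, the release bump is a one-line change to the source field:

```yaml
applications:
  - name: example
    source: s3://example-bucket/stack/releases/example/v124.tar.gz
```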

Walkthrough

Perhaps the best way to understand the Badgerodon Stack is to see it in
action, so let's build a simple web application: a link-shortening service.
Our service will consist of two applications:

A Python application built on top of Flask which will handle two HTTP
endpoints: one to generate links and another to follow them

A Redis database which will store the links

We will deploy these two applications on a single Linux machine. (I will be
using a virtual machine with VirtualBox, but a cloud VM or a Nitrous.io box
would work just as well.)

The Python Application

We will start by creating a simple Python script in
stack-example/links/links.py:

import os
import redis
import uuid

from flask import Flask, redirect, request

# get config from the environment
redis_hostname = os.getenv("REDIS_HOSTNAME", "localhost")
port = int(os.getenv("PORT", "5000"))

# connect to redis
r = redis.StrictRedis(host=redis_hostname, port=6379, db=0)

# create our app
app = Flask(__name__)

# post to /links to create a link
@app.route("/links", methods=["POST"])
def put_link():
    if "url" in request.form:
        link_id = str(uuid.uuid4())
        r.set(link_id, request.form["url"])
        return link_id
    else:
        return "Expected URL", 400

# get /links/<link_id> to redirect to the saved url
@app.route("/links/<link_id>")
def links(link_id):
    url = r.get(link_id)
    if url:
        return redirect(url)
    else:
        return "Link Not Found", 404

# run the app
if __name__ == "__main__":
    app.debug = True
    app.run(port=port)
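For a quick local test (assuming Redis is running locally and using a placeholder URL), a link can be created and followed with curl:

```shell
# create a link; the service responds with a UUID
curl -X POST -d "url=http://example.com" localhost:5000/links

# follow it; -L tells curl to follow the redirect
curl -L localhost:5000/links/{THE_UUID_RETURNED_ABOVE}
```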

This simple app depends on Python and two libraries: redis and
flask. For development we can just install Python (via
apt-get or similar), and then use pip to install the
libraries. But this setup won't work for the Badgerodon Stack, because the
eventual server we plan to run the application on doesn't have these
installed.

So we have to bundle our application so that it has no dependencies.
Typically we would use a build server (like Jenkins) to do this, which would
also free us to use whatever operating system we wanted for local development:
we would commit our code to GitHub (or similar), and Jenkins would listen for
changes to the repository, pull down the code and build the project
accordingly. (This is known as continuous integration.)

But for this example we will just do the build locally. For Python we can
use pyinstaller.
Assuming you're in the same folder as the Python file, it can be run with:

pyinstaller -F links.py

This will create an executable (dist/links) which can be run
with no dependencies. All that remains is to package it in a
.tar.gz file:

tar -czf links.tar.gz -C dist links

Remembering all these steps may be a bit tedious, so here's a build script
which automates this process.
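A minimal sketch of such a script, assuming pyinstaller is installed and on the PATH:

```shell
#!/bin/sh
set -e

# build a standalone executable in dist/
pyinstaller -F links.py

# bundle it into an archive for release
tar -czf links.tar.gz -C dist links
```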

Name it build.sh, put it in the same directory as links.py
and then every time you run it, it will create the archive for you.

So far we have a directory tree that looks like this:

stack-example/
  links/
    links.py
    build.sh
    links.tar.gz

Redis

Now we need to create our Redis application. First we need to download the
Redis source code from here: download.redis.io/releases/redis-2.8.19.tar.gz.
Create a directory we can work from, and extract it there. You should have a
redis-2.8.19 subfolder. In that folder you should be able to build the app by
running make, which creates an executable named
redis-server in the src folder. All we need to do
is take that executable and put it in another .tar.gz archive.
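The steps above can be scripted as follows (version 2.8.19 as in the text; assumes wget and a C toolchain are installed):

```shell
# download and extract the redis source
wget http://download.redis.io/releases/redis-2.8.19.tar.gz
tar -xzf redis-2.8.19.tar.gz

# build it
cd redis-2.8.19
make

# bundle just the server executable into an archive
tar -czf ../redis.tar.gz -C src redis-server
```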

Though it can be run directly from the command line, Redis is usually
configured for your specific needs via a config file. We will come back to
this topic later.

Redis is a stateful application. Though you can certainly run it as a
volatile, pure in-memory application, you probably want to persist its
data somewhere so you don't lose everything on a restart. As with any
database there are various ways to accomplish this (EBS volumes, periodic
backups, master-slave replication, etc.), but crucially this isn't
something the Badgerodon Stack does for you. For this simple example we
will just ignore the issue and let a restart flush our data.

Storage

Now that we have our applications built and bundled we can move on to deployment.
But before we do that, we need to decide how we want to store our releases.
There are lots of options (you can find a complete list in the documentation),
but for demonstration purposes we will store releases and config on
Google Drive.

If you don't have a Google account, go ahead and make one. You will need to
generate a JSON block of credentials so the stack application can list,
retrieve and upload files. To do this, run this command and follow the
instructions:

One of the keys should be a UUID (for example
45429f7c-fd70-4e70-bd11-8bed0862b2dc). Type quit to
exit telnet. GETting that UUID from our link service will
redirect you to what was stored:

curl -L localhost:5000/links/{THE_UUID_FROM_BEFORE}

And you should see:

<b>3:16</b> For this is the way God loved the world: He gave his one and only Son, so that everyone who believes in him will not perish but have eternal life.

So our service works, but we still need to set up the stack application so
that it will pick up changes automatically. For that we need to use the
watch command instead of apply, and we need to
make our application run on boot (and continuously).

The way this is done depends on the init system your operating system uses.
For Ubuntu that means Upstart.
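An Upstart job for this might look something like the following (the install path for the stack binary is an assumption; adjust it to wherever you placed the executable):

```
# /etc/init/stack.conf
description "Badgerodon Stack watcher"

start on runlevel [2345]
stop on runlevel [016]
respawn

exec /usr/local/bin/stack watch s3://example-bucket/stack/machine-1.yaml
```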

Deployment

With the Badgerodon Stack watch service running, we should now be able to
make changes to the config file and have those changes picked up
automatically.

Let's make it so that our Redis data is saved to disk. To do this
we need to create a config file for Redis that tells it to sync
the database to disk. There are three approaches we could take:

We could directly embed the config file into the release itself

We could embed several config files (perhaps for different environments or
machines) and symlink the appropriate one on deploy

We could directly embed the config file in the machine config file

The first option is pretty straightforward (just keep in mind that
applications are run with a working directory set to the contents of the
archive). The second can be done with the special links property:

applications:
  - name: redis
    links:
      redis.conf: config/dev.conf

But let's go with the third option, since it makes it easier to update
configuration without requiring a complete rebuild of the release. We use the
special files property, taking advantage of YAML's good support
for embedded string blocks.
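A sketch of what this might look like (the exact Redis directives shown here are illustrative, not taken from the original config):

```yaml
applications:
  - name: redis
    service:
      command: [bin/redis-server, redis.conf]
    files:
      redis.conf: |
        # snapshot to disk if at least 1 key changed in the last 60 seconds
        save 60 1
        # also keep an append-only log of every write
        appendonly yes
```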

files maps file names to their contents, so the stack application
will create a redis.conf file when this configuration is applied.
We also modified the service command so that it uses the config file.
Consult the
Redis
documentation for the meanings of the various config statements.

Simply re-upload the config file to trigger a re-deploy. It should take about
15 seconds for the changes to be applied.

User Data

Now that we have one server set up, the process for setting up additional
servers is exactly the same: just point stack watch to a
different config file (or even re-use the same one). There is, however,
an additional automation step we can take to make this easier.

When you create a virtual machine on any of the cloud providers
(Amazon, Google, Digital Ocean, ...) you can specify a script to run on boot.
Here's an example script which does this:
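A sketch of such a boot script (the download URL for the stack binary is an assumption; substitute wherever you host it):

```shell
#!/bin/sh
set -e

# fetch the stack binary (URL is illustrative)
wget -O /usr/local/bin/stack https://example-bucket.s3.amazonaws.com/stack/stack
chmod +x /usr/local/bin/stack

# start watching this machine's config file
/usr/local/bin/stack watch s3://example-bucket/stack/machine-1.yaml
```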