
I am a big fan of the feature branching model. Working in an isolated branch created especially for the feature you are working on has its advantages. But there is one thing I keep forgetting: creating the actual feature branch. This means I'm committing directly to the master branch. Most of the time I notice this just before pushing. When this is the case, I quickly create a new feature branch and move my commits to it. In this post I'd like to share how I do this: how I move my commits from master to a new feature branch.

Move commits to a new feature branch

Make sure you have checked out the branch that contains the commits you'd like to move and execute the following:

git branch feature will create the feature branch called feature.

git reset --hard origin/master will reset the current local master branch to the same commit as the remote master branch.

git checkout feature will simply switch to the feature branch, which still contains the four commits.

git push origin feature will push it to the remote repository.
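The four steps above can be replayed end to end in a throwaway repository. This is only a sketch for experimenting safely: every path and name in it is made up for the demo, and the -b master flag needs git 2.28 or newer.

```shell
#!/bin/sh
# Throwaway demo: build a fake "remote" holding commit A, put commits B..E
# on the local master, then move them to a new feature branch.
set -e

tmp=$(mktemp -d)
git init -q --bare -b master "$tmp/remote.git"
git init -q -b master "$tmp/work"
cd "$tmp/work"
git config user.email demo@example.com
git config user.name demo
git remote add origin "$tmp/remote.git"

git commit -q --allow-empty -m "A"
git push -q -u origin master
for c in B C D E; do git commit -q --allow-empty -m "$c"; done

git branch feature                 # create feature, pointing at E
git reset -q --hard origin/master  # move local master back to A
git checkout -q feature            # switch to feature; B..E are still here
git push -q origin feature         # publish the feature branch
```

After running this, origin/master still sits at commit A while the feature branch carries B through E, which is exactly the end state the real workflow produces.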

Here is what happened

The following ASCII drawing represents the situation I'm in when I discover I have been working on master instead of a feature branch.

                 master
                    ↓
commits A--B--C--D--E
        ↑
  origin/master

Commit A is where origin/master, the remote master branch, points. Commits B, C, D and E are the commits that should be moved to a new feature branch.

I start by creating the new feature branch and call it feature. This should set the state of the feature branch to the same state as the one currently checked out, in my case master.

git branch feature

Now I have the following situation where master and feature point to the same commit E.

                 feature
                 master
                    ↓
commits A--B--C--D--E
        ↑
  origin/master

I do not want commits B to E to be on the master branch, so I reset to commit A with the git reset command. The easiest way is to reset to origin/master:

git reset --hard origin/master

Alternatively I could reset it n positions back. I use that approach when it is just a single commit (HEAD^), or not more than a handful (HEAD~5).

git reset --hard HEAD~4

I rarely reset to a commit sha like the following, but if you know the sha of commit A you can use it to reset to that point.

git reset --hard fd83c2

The above resets the index and working directory content of the local master branch to point to commit A.

One of the programming practices that had the biggest impact on the way I write code is Test Driven Development. It significantly shortened my development feedback loop and helps break development down into small steps, each with a clear goal. The test suite acts as a safety net that enables me to refactor with confidence. It is also a fun way to document a project in an executable form.

What if I could bring this technique to the development of my docker containers? I expect it will be at least an improvement over the ssh-into-a-container-and-start-trial-and-erroring-while-putting-the-successful-commands-in-a-dockerfile way I currently work.

The test environment

Docker doesn't come with a test environment, nor are there docker-specific test tools. But this doesn't mean we cannot test our containers. We just need a good test runner and something that can interact with the docker environment. I decided to use ruby with rspec as the test runner and the docker-api gem to interact with the docker environment.

Gemfile

To setup the environment I create a new folder and put the following Gemfile into it:

source 'https://rubygems.org'

gem 'rspec'
gem 'docker-api'

A simple bundle install will retrieve all the dependencies.

Test Driven Development

Here is the process that I have in mind:

Start with a failing test

Verify the test fails

Implement the fix

Run the tests again to verify the fix works and doesn't break anything else

Repeat

What I want to develop

My goal is to develop a docker image that can run as the database service for my application. It needs to run postgres 9.3 and must have a user in place for my application, with an empty database present.

Writing the first test

It is time to write the first test. It should just guide me to the next step in my development process, and not any further. It also must have a single, clear goal. A good one to start with is to verify that there is an image present in the docker environment. I don't care about the details of the image yet, just that it has the correct name pjvds/postgres. So I start by creating a file called specs.rb, requiring docker and writing down the first spec:
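A minimal sketch of what that first spec could look like. The image name pjvds/postgres comes from the text; the before(:all) hook and everything else are assumptions, and it needs a running docker daemon to execute against:

```ruby
require 'docker'

describe "the pjvds/postgres image" do
  before(:all) do
    # Docker::Image.get raises when no image with this name is present
    @image = Docker::Image.get('pjvds/postgres')
  end

  it "is present in the docker environment" do
    expect(@image).not_to be_nil
  end
end
```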

Driving the next step

I must write another failing test to drive the next step in my development process. Since I want to run postgres, the image should expose the postgres default tcp port 5432. Exposing is docker's way of making a port inside a container available to the outside. This information is stored in the image's container configuration and can easily be accessed with the docker-api gem. So I write the following test:
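A hedged sketch of such a test, reading the exposed ports from the image configuration; the 'Config' and 'ExposedPorts' keys are part of docker's image JSON, the rest is my assumption:

```ruby
it "exposes the postgres default tcp port" do
  # the image JSON holds the container configuration baked into the image
  exposed_ports = @image.json['Config']['ExposedPorts']
  expect(exposed_ports).to include('5432/tcp')
end
```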

Implementing the test
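A minimal Dockerfile sketch that would satisfy the specs so far; the base image and package name are assumptions, and the pgdg apt repository setup needed for postgres 9.3 is omitted for brevity:

```dockerfile
FROM ubuntu:12.04

# install postgres 9.3 (assumes the pgdg apt repository has been configured)
RUN apt-get update && apt-get install -y postgresql-9.3

# make the postgres default port mappable from outside the container
EXPOSE 5432
```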

Green

If I now run the specs again, they succeed:

$ ./build
...

Finished in 0.00278 seconds
3 examples, 0 failures

Starting the container

In the previous tests I asserted on the docker environment and the image configuration. For the next test I want to check that the container accepts postgres connections, and I need a running container instance for that. In short, I want to do the following:

Start a container

Execute tests

Stop it

I introduce a new describe level where I start the container based on the image:

describe "running it as a container" do
  before(:all) do
    id = `docker run -d -p 5432:5432 #{@image.id}`.chomp
    @container = Docker::Container.get(id)
  end

  after(:all) do
    @container.kill
  end
end

Test postgres accepts connections

Now that I have a context where the container is running, I write a small test to make sure it does not refuse connections to postgres.
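A sketch of such a test using a plain tcp connect as a stand-in; the original may well have used a real postgres client instead, and localhost plus the fixed one-second wait are assumptions:

```ruby
require 'socket'

it "accepts connections on port 5432" do
  expect {
    sleep 1 # give postgres a moment to start accepting connections
    TCPSocket.new('localhost', 5432).close
  }.not_to raise_error
end
```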

Adding a super user

Here is a tricky part: I can't add a user with the createuser tool that ships with postgres. It requires postgres to be running, and since we are running postgres in an environment that doesn't have upstart available, it isn't running after the installation. I could spend a lot of time getting it up and running in the background, or I could start it in the foreground and pipe a command to it via stdin. I opt for the latter and create a small script that does exactly that:
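A sketch of the idea: run postgres in the foreground in single-user mode and feed it the statement via stdin. The script name, the paths and the exact invocation are all assumptions:

```bash
#!/bin/bash
# create_superuser.sh: in single-user mode postgres reads SQL from stdin
# and exits when stdin is closed, so no background daemon is needed.
echo "CREATE USER root WITH SUPERUSER;" | \
  su postgres -c "/usr/lib/postgresql/9.3/bin/postgres --single \
    -D /var/lib/postgresql/9.3/main \
    -c config_file=/etc/postgresql/9.3/main/postgresql.conf \
    template1"
```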

The ADD command copies the psql file into the container. Then it gives the file execute permissions and uses it to create a superuser with the name root.
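In Dockerfile terms that step could look like this; the script name is an assumption:

```dockerfile
# copy the script into the image, make it executable and run it once
ADD create_superuser.sh /create_superuser.sh
RUN chmod +x /create_superuser.sh && /create_superuser.sh
```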

Green!

If we now run the specs again, we see them succeed:

$ build
....

Finished in 0.05831 seconds
4 examples, 0 failures

What is next

I can repeat this loop until I've added all the feature's requirements. Some tests that could follow:

does it have a user for our application?

does this user also have a database?

is this database empty?

is the postgis extension available?

Taking this to the next level, I could add this to a CI service like wercker and execute the tests on every push. This also makes it possible to do automated deployments to a docker index. But that's a scenario to cover in another post.

Conclusion

Using tests to drive the development of a docker container is pretty easy. There are a lot of client APIs that enable almost any major programming environment to become a docker test environment. The biggest difference I see compared with testing software applications is that the docker tests come in the form of integration tests. That could become a problem if they get slow, but my current container tests execute very fast. Also, rebuilding the container is quite fast because of the way docker's caching works. You can leverage this even further by creating a base image for the stable prerequisites.

In short, it's a great addition to the ssh-into-a-container-and-start-trial-and-erroring-while-putting-the-successful-commands-in-a-dockerfile way of working, and we will definitely explore this route even further.

Jeff Lindsay created Dokku, the smallest PaaS implementation you've ever seen. It is powered by Docker and written in less than 100 lines of Bash code. I have wanted to play with it ever since it was released. This weekend I finally did, and successfully deployed my application to Dokku running on a Digital Ocean droplet. In this post I share how you can do this as well. Of course I used wercker to automate everything.

With the wercker cli installed, add the project to wercker using the wercker create command (you can accept the default options for any questions it asks).

$ cd getting-started-nodejs
$ wercker create

The wercker command should finish with something that looks like:

Triggering build
A new build has been created
Done.
-------------
You are all set up for using wercker. You can trigger new builds by
committing and pushing your latest changes.
Happy coding!

Generate an SSH key

Run wercker open to open the newly added project on wercker. You should see a successful build that was triggered during the project creation via the wercker cli. Go to the settings tab and scroll down to 'Key management'. Click the generate new key pair button and enter a meaningful name; I named it "DOKKU".

Create a Dokku Droplet

Now that we have an application in place and have generated an SSH key that will be used in the deployment pipeline, it is time to get a dokku environment. Although you can run dokku virtually anywhere Linux runs, we'll use Digital Ocean to get the environment up and running within a minute.

After logging in to Digital Ocean, create a new droplet. Enter the details of your liking. The important part is to pick Dokku on Ubuntu 13.04 in the applications tab.

Get the ip

After the droplet is created, you'll see a small dashboard with the details of that droplet. Next, replace the public SSH key in the dokku setup with the one from wercker. You can find it in the settings tab of your project. Copy the public key from the key management section and replace the existing key. Next, copy the ip address from the dokku setup (you can find it in the top left corner); we'll use it later. You can now click 'Finish setup'.

Create a deploy target

Go to the settings tab of the project on wercker, click add deploy target and choose custom deploy target. Let's name it production and add two environment variables by clicking the add new variable button. Name the first one SERVER_HOSTNAME and set its value to the ip address of your newly created Digital Ocean droplet. Add another with the name DOKKU and choose SSH Key pair as the type. Now select the previously created ssh key from the dropdown and hit ok.

Don't forget to save the deploy target by clicking the save button!

Add the wercker.yml

We're ready for the last step which is setting up our deployment pipeline using the wercker.yml file. All we need to do now is tell wercker which steps to perform during a deploy. Create a file called wercker.yml in the root of your repository with the following content:
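A sketch of what the wercker.yml could contain for the node.js sample app, based on wercker's documented add-to-known_hosts, add-ssh-key and script steps; the box name, key name and dokku remote are assumptions that must match your own setup:

```yaml
box: wercker/nodejs
build:
  steps:
    - npm-install
deploy:
  steps:
    # trust the droplet's host key so git push doesn't prompt
    - add-to-known_hosts:
        hostname: $SERVER_HOSTNAME
    # load the key pair generated on wercker earlier
    - add-ssh-key:
        keyname: DOKKU
    # a dokku deploy is just a git push to the dokku remote
    - script:
        name: push to dokku
        code: |
          git remote add dokku dokku@$SERVER_HOSTNAME:getting-started-nodejs
          git push dokku $WERCKER_GIT_BRANCH:master
```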

Deploy

Go to your project on wercker and open the latest build, wait until it is finished (and green). You can now click the Deploy to button and select the deploy target we created earlier. A new deploy will be queued and you'll be redirected to it.
Wait until the deploy is finished and enjoy your first successful deploy to a Digital Ocean droplet running dokku!