
User Acceptance Tests from a can

01.08.2014

Martin Fleischer

Recently Simone wrote a blog post about the Celepedia launch. I participated in the test automation for this project. A large proportion of the tests, especially for the frontend, consists of UATs with Selenium. Since I already wrote a blog post about Selenium tests in general, I'm focusing on a special part of the test setup here. Together with the IDEAS DevOps we integrated a test tool that executes the test suite in docker containers. As I said in my former blog post, user acceptance tests with Selenium get slow really quickly. I also remarked that parallelizing the UATs helps solve this problem. Instead of showing parts of the test setup we implemented, I decided to write a tiny project to spare you the project-specific details. And since the original project is written in Java, I wanted to do it more kreuzwerker-ish and use ruby instead.

Before implementing a test, there are some things that need to be set up. The basic structure of the tests is pretty simple:

```
.
├── Gemfile
└── spec
    ├── features
    └── spec_helper.rb
```

All the dependencies are located in the Gemfile. The tests are written using the Capybara framework, which provides a nice syntax and neat helpers for testing the user interface. I also added the RSpec testing framework because I like its expectation syntax. The tests use PhantomJS, a headless WebKit browser that renders your websites much quicker than a real browser. PhantomJS can easily be plugged into Capybara using the Poltergeist gem. Thus I can use the nice Capybara DSL on top of the headless driver.
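A minimal Gemfile for this setup could look like this (a sketch; the original may pin specific versions):

```ruby
# Gemfile: the dependencies of the test suite
source 'https://rubygems.org'

gem 'capybara'    # DSL and helpers for UI tests
gem 'poltergeist' # Capybara driver for PhantomJS
gem 'rspec'       # test runner and expectation syntax
```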

The features directory is the place to add the tests. On the same level as the features directory there is the spec_helper, which is required by all of the test specs. The helper loads the frameworks and libraries I mentioned before and configures the driver to use. By default Capybara uses the Poltergeist driver in this setup. Setting the environment variable DRIVER switches to a different driver.
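A spec_helper along these lines would do that (a sketch; the real file may configure more, and the DRIVER handling is an assumption about how the switch works):

```ruby
# spec/spec_helper.rb: load the frameworks and configure the driver
require 'capybara/rspec'
require 'capybara/poltergeist'

# poltergeist is the default; e.g. DRIVER=selenium switches to a real browser
Capybara.default_driver = (ENV['DRIVER'] || 'poltergeist').to_sym
```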

Now it's time to write the first test. Due to my lack of imagination I took an example from the Selenium page, but implemented it in Capybara style. It just goes to http://www.google.com, types 'Cheese!' into the search field, which has the attribute name with the value 'q', hits the search button with the name 'btnG', and then ensures that the title includes 'Cheese!' within the next 10 seconds.
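In Capybara style, that spec might look like this (a sketch, assuming the spec_helper from above; it needs the gems and PhantomJS installed to actually run):

```ruby
# spec/features/cheese_spec.rb
require 'spec_helper'

describe 'Google search', type: :feature do
  it 'finds cheese' do
    visit 'http://www.google.com'
    fill_in 'q', with: 'Cheese!'     # the search field has name="q"
    find('input[name="btnG"]').click # the search button has name="btnG"
    # Capybara retries the matcher, here for up to 10 seconds
    expect(page).to have_title(/Cheese!/, wait: 10)
  end
end
```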

Now comes the exciting part. First I added a Dockerfile. I don't want to go into too much detail on docker here. In short, docker bundles software in lightweight Linux containers and makes it easily shippable. "Dockerized" applications can be imagined as software running in a virtual machine, but container virtualization boots much faster and is more resource-efficient than emulating a complete OS.
The Dockerfile is kind of a blueprint for a docker image. Let's just see what it looks like.

```dockerfile
# install the required gems

# add the whole project
ADD . /uats
WORKDIR /uats
```

The Dockerfile is based on the latest Ubuntu version. First the system is updated, and packages like ruby and build-essential, required for running the test suite, are installed. GNU parallel is installed as well; it will be used to run the tests in multiple containers concurrently. I'll come back to this later. To get the tests running on a headless WebKit, PhantomJS is installed. I separated adding the Gemfile and installing the gems with bundler from adding the whole project. That way changes to the tests are cached separately, and the gems are taken from the cache if the Gemfile didn't change, as described in Brian Morearty's blog.
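Pieced together from that description, the full Dockerfile probably looks roughly like this (package names and paths are assumptions; the PhantomJS installation in particular may differ in the original):

```dockerfile
FROM ubuntu:latest

# update the system and install ruby, build tools, GNU parallel and PhantomJS
RUN apt-get update && \
    apt-get install -y ruby ruby-dev build-essential parallel phantomjs

# install the required gems first, so this layer stays cached
# as long as the Gemfile doesn't change
ADD Gemfile /uats/Gemfile
RUN cd /uats && gem install bundler && bundle install

# add the whole project
ADD . /uats
WORKDIR /uats
```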

Doesn't look too different compared to a normal rspec run, except for the output format:

But have a look at the time rspec took. The test ran almost 3 times faster, not too bad.
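The concurrent run boils down to handing one spec file at a time to GNU parallel. Whether parallel drives docker from the host or forks rspec processes inside the containers is a setup detail; a host-side variant could look roughly like this (image name and options are assumptions):

```shell
# run each feature spec in its own container,
# as many at a time as there are CPU cores
ls spec/features/*_spec.rb | parallel docker run --rm uats bundle exec rspec {}
```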
I also added simple rake tasks for building the image and running the tests, which makes running the "dockerized" test suite much more comfortable.
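The rake tasks are simple wrappers around the docker CLI; a sketch (the image name "uats" is an assumption):

```ruby
# Rakefile: convenience tasks around the docker CLI
require 'rake'
extend Rake::DSL # only needed when loading this file outside of rake itself

IMAGE = 'uats'

desc 'Build the docker image for the test suite'
task :build do
  sh "docker build -t #{IMAGE} ."
end

desc 'Run the test suite inside a container'
task :test => :build do
  sh "docker run --rm #{IMAGE} bundle exec rspec"
end
```

Running `rake test` then builds the image if necessary and executes the suite in a fresh container.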

Running the tests in containers makes the test suite much more shippable. Combined with boot2docker, a virtual machine with a nice CLI management tool for Windows and OS X, everyone should be able to run the suite. More than that, it shouldn't be difficult to execute your test suite on a CI server using the docker image. When executing UATs in different environments you will notice that the environment can have a strong impact on the behavior of your tests. Running your test suite everywhere within containers prevents such effects.

The big advantages of using docker containers over virtual machines, besides the shippability, are ease and speed. But there are some points where running UATs in virtual machines shines. First of all, although boot2docker is a really neat solution, an environment with nested isolation technologies, like running docker inside a virtual machine, doesn't always bring the desired simplicity, as I see it. For example, using Samba for sharing files with the boot2docker VM didn't feel too natural to me.

Another point is that containers only run on Linux, but maybe you want to test browser compatibility with your UATs. The driver used for controlling Internet Explorer obviously only runs on Windows and thus cannot be used within docker containers.

Of course we are not the first ones running tests in docker containers. Nick Gauthier describes a quite similar setup for running tests in containers. There is a comment on this post mentioning that CircleCI offers "dockerized" testing as a service. I also found dsgrid, an interesting GitHub project that runs Selenium Grid with docker. I created a public repository where you can find the source code and play around with it yourself.