I'm going, in the next article (or two, if I go way overboard on length), to explain Maven to you as a build tool. By the end you won't be an expert, but you'll have a solid grasp of what it does to build projects, how to investigate what it does, and why it is so popular. So grab your favorite beverage (it had better be coffee), because we're jumping straight in:

Introduction

Maven does three things really well, and they are all interconnected:

1. Project Management

The name of the project, who works on it, which SCM is configured for it, licenses, etc.

2. Artifact Repository (Artifact = fancy name for binaries)

Downloading the various dependencies. Even Maven itself ships much of its functionality as plugins that get downloaded.

3. Build System

A bunch of steps to execute in order to turn a bunch of sources into a binary output.

Maven has only one configuration file per project, and it must be called pom.xml. POM stands for Project Object Model. The name is unfortunate; it simply means that all the data for a project lives there, covering all three points mentioned above (project management, artifact repository dependencies, and the build system).
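Since the pom.xml carries all three concerns, even a minimal one shows them side by side (the group, artifact, and dependency coordinates below are placeholders, not from any real project):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- project management: who and what this project is -->
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>

  <!-- artifact repository: dependencies Maven will download for us -->
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <!-- build system: everything not specified here falls back to
       Maven's conventions (source folders, lifecycle phases, etc.) -->
</project>
```

Note how little build configuration is needed: Maven's convention-over-configuration approach means the build system part is mostly implicit.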

In this article we will focus only on the third point, namely the build system.

If you don't know what fast-live-reload is, you should probably see this video. TL;DW: unlike any other live reload tool on the planet, fast-live-reload can build execution pipelines (probably still the only one that can, from the command line), serve folders, proxy sites, etc. Truly a Swiss army knife.

More than once when monitoring folders, I wanted to execute commands only for the specific files that changed.

A good example that comes to mind is AsciiDoc. When I monitor a folder containing a bunch of AsciiDoc files, I want to run asciidoctor individually, for just the file that changed. So far that was not really possible. What I would do was decide upfront which file I would work on, and run:

fast-live-reload -o '*.adoc' -e 'asciidoctor mycurrentfile.adoc'

"Superb", I know.

Finally, since version 2.6.0, if the variable $FILE appears in the executed command, the command is executed once for each changed file, with the FILE environment variable passed to the script. Thus the previous example becomes something like:

fast-live-reload -o '*.adoc' -e 'asciidoctor $FILE'

Of course, this led to the Mozilla team releasing a hotfix for Firefox, namely 47.0.1, which fixed, you guessed it, the old WebDriver API so it works again. The only small problem is that Mozilla knowingly shipped a bug that nuked all WebDriver-based Firefox tests. For three weeks. Between June 7 and June 28.

Unreal.

Solution

Now that we know this might happen, how do we mitigate this?

The answer is containers. Docker containers.

Germanium, out of the box, comes in two flavours. On one hand, there is the library itself, the one we know and love, which gets tested against a set of browsers.

On the other hand, for Firefox and Chrome, Docker images are also built automatically, which guarantees that such infrastructure changes don't wreck the stability of your tests. For Firefox, since version 47 no longer worked with the old WebDriver API, the container uses Firefox 46 and the old API (Marionette support is at this stage abysmal).

Of course, this means we’re not on the bleedingest of edges of the browser version, but that is OK. In a Continuous Integration system, we want to be sure we don’t get all our tests failing just because WebDriver has a bug today, especially since the API itself is still coagulating, so this is bound to happen again. The key is to lock down the moving parts of the testing environment, and Germanium does that by default.

These images are also used to test Germanium itself, so you know that all the API calls documented there run as expected.

This version has a far better implementation of the wait() function, that has the following guarantees:

All the closures will be run at least once: both the wait conditions and the while_not conditions.

In case the closures take more than 400ms to execute, no wait will happen; the closures will be executed again immediately.

In case the evaluation of the closures takes less than 400ms, the closure execution time will be subtracted from the wait(): for example, if the closures took 250ms, the wait will be only 150ms, compensating for the run time so that each loop stays at ~400ms.
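The three guarantees above can be sketched in Python like this. This is a hypothetical re-implementation based only on the text, not Germanium's actual code; the function signature, the `timeout_s` parameter, and the 400ms constant are assumptions:

```python
import time

LOOP_MS = 400  # target duration of one evaluation loop, per the guarantees above

def wait(condition, while_not=(), timeout_s=10):
    """Sketch of the timing rules: closures always run at least once,
    slow closures re-run immediately, fast closures sleep the remainder."""
    deadline = time.monotonic() + timeout_s
    while True:
        start = time.monotonic()
        # Guarantee 1: every closure is evaluated at least once per loop,
        # the wait condition first, then each while_not guard.
        if condition():
            return True
        for guard in while_not:
            if guard():
                return False  # a while_not condition fired, stop waiting
        if time.monotonic() > deadline:
            return False
        elapsed_ms = (time.monotonic() - start) * 1000
        # Guarantee 2: closures took >= 400ms, re-run immediately (no sleep).
        # Guarantee 3: otherwise sleep only the remainder, so one loop
        # iteration takes ~400ms total (e.g. 250ms closures -> 150ms sleep).
        if elapsed_ms < LOOP_MS:
            time.sleep((LOOP_MS - elapsed_ms) / 1000)
```

A condition that becomes true on its second evaluation should therefore return after roughly one 400ms loop, not after two arbitrary polling intervals.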