I was lucky enough to start building websites before front end tooling became mainstream, and I was able to learn the ins and outs of JS, CSS, image preparation and general performance tuning without having to enrol in a one-month bootcamp just to learn how to configure webpack.

The current state of tutorials on how to set up modern front end buildchains is akin to a tourist trying to navigate London using a town planner's blueprints: the information is in there somewhere, and we'd be better informed tourists if we took the time to understand it, but really we just want to see the Queen.

Anyway, the point is that this article will not be one of those. We'll keep it simple to get us off to a flying start.

Why?

The merits of front end asset tooling are widely accepted: it makes our lives easier by processing, transpiling, optimising and managing dependencies in a predictable way.

However, as with any diverse set of tooling, it has brought its own issues to the party. Most prominently: version management and host dependency.

I encounter this issue regularly when working on projects which I've inherited from others. The previous developers will have kindly checked their buildchain files into the project repo, codifying and version controlling them, but then it turns out that the buildchain assumes an old version of node, one of the dependent packages only works on macOS, and another requires me to install X00MB of additional OS packages which the project doesn't even make use of.

I do not like screwing up my dev machine because of ill-maintained, poorly encapsulated build systems. I just want to clone a project and run build please.

So that's where docker comes to the rescue. By encapsulating our buildchain in a docker image we're giving ourselves and other developers the opportunity to execute a potentially complex set of actions without having to first tailor the environment in which they are running. The only prerequisite is that docker is installed. No need for nvm. No need to brew install any random crap. No trying to get disparate buildchains to execute inside your multi-project homestead VM 🤮.

Just docker-compose up buildchain.

Prerequisites

Just a working installation of docker and docker-compose on your host. That's it — everything else lives inside the container.

Laying The Ground Work

We'll be creating a new image for our buildchain which will have the single purpose of building our source files into compiled assets. I like to keep all of the config for this grouped together and separated from our actual project files; this ensures that we don't end up copying our buildchain config into our PHP and nginx images unnecessarily.
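A Dockerfile along these lines will do the job — note that the node version, the `/app` working directory and the `build` script name are my assumptions, so adjust them to match your project:

```dockerfile
# docker-config/buildchain/Dockerfile
FROM node:10

# Install yarn (optional – recent official node images bundle it
# already, and you could just use npm throughout instead)
RUN npm install --global --force yarn

# A clean place inside the container to perform our work
WORKDIR /app

# Copy in our package.json and install the dependencies
# as part of the image build
COPY package.json /app/package.json
RUN yarn install

# Default command: a single compilation of our assets
CMD ["yarn", "run", "build"]
```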

We're using node as our base image because, as we all know, all build chains are JS based these days.

The next few lines are just installing yarn. This isn't strictly necessary, you can also omit this and just use npm everywhere.

We then set the working directory which gives us a clean place inside the container to actually perform our work.

We copy in our package.json which will do all of the normal things a package.json does.

We install all of our dependencies as defined in our package.json and then set the command that will run whenever this image is used to create a container. As we're just keeping things simple for now we're just going to use scripts in our package.json to run CLI binaries to compile our assets.

One important thing to note is that we're running yarn install as part of the image building process. So the node_modules folder and its contents will exist as part of our image and won't change between executions unless you make changes to package.json and rebuild the image.

This is important for two reasons:

Our node_modules folder will exist on our image's filesystem, not on our host's. We don't need it to exist in our project directory and we don't need npm/yarn installed on our host to create it.

But we will need to be able to make changes to the contents of node_modules while we're developing our buildchain so we need to remember to include some method of changing its contents without having to constantly rebuild our image.

First we mount in our package.json. This file already exists inside our image but by mounting in the same file from our host it ensures that any yarn add commands that we run inside our container will be reflected in the package.json that's checked into our project repo - not just the one inside the container.

Second we're mounting our actual project into the container. We just dump the whole lot in there so that the buildchain has access to everything.
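In docker-compose.yml those two mounts look something like this — the service name, the `./src` project directory and the `/app` container path are assumptions based on the layout used above:

```yaml
# docker-compose.yml
services:
  buildchain:
    build: ./docker-config/buildchain
    volumes:
      # Keep the repo's package.json in sync with any `yarn add`
      # commands run inside the container
      - ./docker-config/buildchain/package.json:/app/package.json
      # Mount the whole project so the buildchain has access to everything
      - ./src:/app/src
```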

Recap

So far we've:

Created a standalone buildchain image which will be responsible for compiling our assets.

Added a package.json to it and installed the dependencies.

Mounted our host's package.json to catch any yarn adds.

Mounted our project files upon which the buildchain will be acting.

First Run

Give it a try:

docker-compose up buildchain

Nice. If we have a look at our compiled assets in src/web/assets we should see that our Sass has been converted to CSS and our JavaScript has been babelified to remove ES6-only syntax.

And that's the basics of running your buildchain in a docker container. It's now ready to share as part of your project's VCS repo, and because the buildchain is isolated from the host on which it runs, any developer will be able to compile the project's assets with zero prior knowledge of the buildchain and without having to install anything.

As an added bonus we've also kept our project files nice and tidy by moving all of the buildchain config files into a directory dedicated to our buildchain - no more mixing project files with buildchain config files!

Adding More JS Modules

A package.json isn't worth its salt unless it has more devDependencies listed than you can count. So, given that we potentially don't have node/npm/yarn installed on our host, how do we update it?

We can use our container:

docker-compose run buildchain yarn add cat-names --dev

This will add the dependency to your package.json.

Be careful though: by running this we've just changed some files in a container which is based on our buildchain image, and when that container exited those changes were lost. (You can read about this here: https://docs.docker.com/v17.09...)

We also haven't updated the buildchain image itself. Any new container built from the existing image will not have cat-names installed.

But we have changed package.json, and that's what was originally used to create our image, so in order to rebuild our image based on our newly updated package.json we can run:

docker-compose build buildchain

"But I don't want to rebuild the image every time I want to add a dependency!" I hear you scream.
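Fear not — docker has named volumes for exactly this. A sketch, assuming the service layout from earlier:

```yaml
# docker-compose.yml
volumes:
  # Named volume persisted by docker across container executions
  node-modules:

services:
  buildchain:
    build: ./docker-config/buildchain
    volumes:
      - ./docker-config/buildchain/package.json:/app/package.json
      - ./src:/app/src
      # Overlay the image's node_modules with the persistent volume
      - node-modules:/app/node_modules
```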

We've added a new named volume to the volumes list and then we've used this when defining a mount point in the buildchain's list of volumes.

You'll need to run docker-compose up buildchain in order for it to pick up the config change.

Now, if you do things like yarn add and yarn remove they'll be updating both your package.json and the node_modules directory which is being persisted across container executions.

No Watch, No Glory

Currently our container will run our build scripts and then exit immediately. This is fine if all we want to do is a single compilation, like we might do during CI. But whilst in development we want to be able to watch for file changes and recompile when a change is detected.

There are all sorts of tools to do this, but we're just keeping things simple remember, and the simplest solution I can find that'll work for us is npm-watch.

Get it installed by running:

docker-compose run buildchain yarn add npm-watch

Add the following watch config to docker-config/buildchain/package.json:
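Something like the following — the `build-css` and `build-js` script names and the `src/assets` path are assumptions, so match them to your own scripts (npm-watch reads a top-level `watch` key whose entries correspond to script names):

```json
{
  "watch": {
    "build-css": {
      "patterns": ["src/assets"],
      "extensions": "scss"
    },
    "build-js": {
      "patterns": ["src/assets"],
      "extensions": "js"
    }
  },
  "scripts": {
    "build": "yarn run build-css && yarn run build-js",
    "watch": "npm-watch"
  }
}
```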

Try it out by running docker-compose up buildchain and editing the scss and js files in your src/assets directory.

Pretty neat.

Last point of discussion before I head off for a rest: why did we update the command to yarn run watch in docker-compose.yml but not in our Dockerfile?

You can think of docker-compose.yml as environment specific. It allows us to take images and make tweaks to them depending on the environment in which we're executing them.

Our image should always assume it's running in production mode. In production (which will be a continuous integration pipeline for this image) our buildchain will just be performing a single compilation with no need to watch for file changes. So we tell our image to default to just compiling the assets once (yarn run build) but override that behaviour in our docker-compose.yml which will be used during local dev.
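The override itself is a one-liner in docker-compose.yml — again assuming the service name used throughout:

```yaml
services:
  buildchain:
    build: ./docker-config/buildchain
    # During local dev, override the image's default single-shot
    # `yarn run build` with the file watcher
    command: yarn run watch
```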

Next Steps

Now you're free to develop as complicated a build chain as you like. A few guidelines to follow though:

Try to keep any files directly related to the buildchain and not the project in docker-config/buildchain

If you create additional buildchain config files make sure to COPY them into the image in the Dockerfile

And if you'd like to be able to edit them on your host without having to rebuild the image, also mount them in as volumes in docker-compose.yml

That's all the basics covered for now. In the next article in this series I'll be discussing how the image that we've created today fits into a continuous integration pipeline allowing us to ensure our production builds are identical to our local ones.

Also, stay tuned for that 'advanced techniques' article in which I'll cover things like proxying using browsersync and HMR with webpack. Gotta keep up the hipster cred.

Feedback

Noticed any mistakes, improvements or questions? Or have you used this info in one of your own projects? Please drop me a note in the comments below. 👌