Inspired to Test

Recently I have been having fun watching videos by an experienced software developer who goes by the name of Uncle Bob (Robert C. Martin). I have also been reading his book "The Clean Coder." Bob's writing and talks on YouTube have an entertaining mix of humor and information that I find engaging and enlightening.

I can imagine Bob will rub a lot of people the wrong way; he sounds exactly like a fundamentalist, know-it-all uncle who will not quit preaching about the way things should be done until you finally agree with him or walk away. I love that about Bob. He's got the experience and the conviction to make a solid point.

Uncle Bob's video on "The Three Laws of TDD" is great fun! In it, you get to watch a man who fanatically practices Test Driven Development preach his way through the three laws with zeal.

Bob makes many points worthy of reflection, but one of the most interesting is the idea that, in practicing Test Driven Development:

~ "The developer will produce algorithms that they would not have thought of making if they had not coded the software line-by-line."

That is a loaded premise that deserves testing.

My hunch is that developers think and code in paragraphs and story arcs. Perhaps this is because the big picture has to come before the line of code. Our development work drives towards some predefined feature. We sit down and start creatively writing towards our end goal, and once we think we have enough functionality to test, we run the code. As a result, our functionality may work well for the ways we expect the code to be used. But coding line by line to 100% coverage may cause us to discover solutions we could not otherwise have perceived.

And so it goes that coding your thoughts out in real time could not only set you back in terms of efficiency, but may also lead you to an entirely different architecture.

The Three Laws of Test Driven Development

What are the three laws of TDD?

First Law: You may not write production code until you have written a failing unit test.

Second Law: You may not write more of a unit test than is sufficient to fail, and not compiling is failing.

Third Law: You may not write more production code than is sufficient to pass the currently failing test.

At first pass, this technique sounds insane. Only ever writing a test first seems like it would paint you into a corner pretty quickly. But I have been practicing this dogmatically on a test project of moderate complexity, a language parsing engine, and I have not found the method to be nearly as painful as I had expected.

What I found was that development was incredibly slow and boring at first. Though I did notice that the code I had written became structured in a much simpler way than I would usually have attempted. My source was naturally falling into smaller functions with fewer concerns.

When the code complexity increased a little, the boredom disappeared. I found myself darting back through some of the first tests I had written to update assertions as I rounded out the behavior of the parsing engine.

This was all par for the course. I expected these things from my experiences writing tests for other projects. But in being 100% dogmatic about the 3 Laws of TDD, I had two very big and very unexpected "Aha!" moments.

Number & Nature

The first was the number and nature of the bugs I was exposing. With TDD I was catching bugs that I know I typically could not have found until much later in the project. I also found the bugs were smaller and more limited in scope. The reduced complexity of the bugs made them much quicker to trace and squash.

After getting deeper into the project, I realized I was flying.

The second "Aha!" moment came after a few iterations when I installed the code-coverage tool. To my surprise, I found I had thousands of lines of code, with only two if-else branches not covered.

My code was 99% green, out of the box!

Suddenly the effort of coding tests after writing the code looked like a big fat waste of time. And it was all about the complexity.

Code is Like Rope

Imagine you were looping pieces of knotted rope around your arms into a manageable bundle. If you work out the kinks and the knots each time you reach one, you will have a tight bunch, which maintains its shape without much effort.

But imagine if your boss was yelling at you to "get that God damned rope bundled as quickly as possible", and you just wound that sucker up as fast as you could into an enormous rope-spaghetti. Next time you come to use that rope, you are going to have a few problems.

The rope will not unwind. You will have to untangle one thing here so that you can un-knot another thing there. And then you will have to backtrack because you didn't realize that untangling one part makes a knot in another. It then becomes more cost-effective to buy a new rope than to spend the time untangling the old one.

In my experience, source-code is like that.

Exactly like that.

TDD vs BDD

The point of this post is to share how I set up Node.js projects for Test/Behavior Driven Development. I will explain how I do this in a moment.

While TDD stands for Test Driven Development, BDD stands for Behavior Driven Development. The two are essentially the same practice, with different nuances. BDD targets domain logic and uses a chained domain-specific language, making the tests read more like spoken English.

But why test "behaviors" instead of "units"? That's a great question. What we really want to test in software is something like: "Can my software generate Output X when running Function Y?" The small details of how Function Y produces Output X do not necessarily matter to us; we simply care that we get Output X. The advantage of thinking about tests this way is that we can change the internals of Function Y without having to rewrite most of our tests.

The first thing our lib needs is a file loader, so let's add a failing test. In the code below, we add a group of tests for the file loader in my-lib, then a test to check that our file loader will reject a promise to load a file that does not exist.

Great, it works! We have followed the TDD principles: create a test that fails, then add the minimum necessary code to make it pass. Let's keep going and write another failing test, this time one that should pass once the mock file exists. Add the following code:

All we should have to do to make this test pass is add the correct file and re-run the tests.

echo Hello World! > ../mock/yes-file.txt
mocha ./

Tada!

Now just keep going: create a failing test, write the code to pass the test, and so on. By the time your code begins to get useful, I hope you experience the same wins that I have.

Npm Testing Setup

So far we have been running our tests using the command mocha ./, but our project would be better served if we could run our tests from an NPM script in our package.json file.

Looking at our package.json file, we can see that our current test command is just mocha.

{
"scripts": {
"test": "mocha"
}
}

So if we use the command npm test or yarn test from the CLI, we can see Mocha get kicked off by Node.js.

Because we have set up our tests alongside our code, Mocha does not know where to look; by default it looks for a directory called "test".

Warning: Could not find any test files matching pattern: test

Let's update the package.json file so that Mocha has something to work with.

{
"scripts": {
"test": "mocha ./lib/**/*.spec.js"
}
}

This should make Mocha look inside all subdirectories of ./lib for *.spec.js files. If you save the JSON file and re-run npm test, you should get the following output:

Great, mocha found the tests! But...

Unfortunately our second test is failing. This is because we were running Mocha from the ./lib directory previously. The mock file ../mock/yes-file.txt was being opened relative to where the mocha command was run.

(Note: The first test does not fail because it checks that a file cannot be loaded.)

Let's update our code to handle the path. At the top of ./lib/my-lib.js:

Our relative directory is now resolved against __dirname, which is the directory that the current JavaScript file lives in.

Now we can run the same tests from the package level with yarn test, as well as from the directory level with mocha <file>. This turns out to be very useful.

Symlinking Our App

While we are talking about directories we should talk about symlinking. I find that the practice of symlinking the package directory into the ./node_modules directory allows us to simplify our imports.

Instead of requiring scripts relative to our current JavaScript file we can require scripts relative to the directory that package.json lives in. So instead of:

const myModule = require('../../../../lib/myModule')

We can do this:

const myModule = require('app/lib/myModule')

Much cleaner!

We should add our symlink creation to our package.json scripts, so that any user checking out our module will have things set up the right way. Add the following two lines to your package.json file:
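The exact scripts are an assumption; one plausible shape, using a POSIX ln in a postinstall step (see the Windows caveat below), is:

```json
{
  "scripts": {
    "postinstall": "mkdir -p node_modules && ln -nsf .. node_modules/app"
  }
}
```

The link target is `..` because the symlink lives inside ./node_modules, so it resolves back to the package root.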

When you run npm install or yarn install now, a symbolic link will be created inside of your ./node_modules directory. This allows you to link to files using Node's recursive require architecture.

If we ls the ./node_modules/app directory, we can see that it links back to the directory containing our package.json file and our ./lib and ./mock directories.

Please note: you may have to handle this differently for Windows, in which case an external script in the ./bin directory may be more appropriate. For the purposes of this tutorial, we are going to assume this code will never be run on Windows.

Now let's update our code to use the symbolically linked directory. At the top of our file ./lib/my-lib.spec.js, make these changes:

That's it for symlinking. We can now require files relative to the package.json file rather than relative to the file containing the require() statement.

Setting Up Code Coverage

Another important part of Test Driven Development is Code Coverage. We want to know that 100% of our code is covered by our tests. This does not mean that there are no bugs in our code, but it does mean that it is much harder to break the code, or introduce new bugs without noticing.

We installed the Node.js packages required for Code Coverage back in a previous step, so here we just have to implement them in our package.json file.
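One plausible shape for the cover script, assuming the istanbul package (its cover command wraps Mocha through the _mocha executable):

```json
{
  "scripts": {
    "test": "mocha ./lib/**/*.spec.js",
    "cover": "istanbul cover _mocha -- ./lib/**/*.spec.js"
  }
}
```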

Your directory structure should now look something like this (notice the .istanbul.yml file at the top):

Ok good!

Let's run yarn cover or npm run cover and see what happens.

Fantastic!

We can see from the coverage output that our coverage is at 100%. And if we click on the HTML link, we can inspect the coverage in the web browser. The table of contents looks like this:

And we can drill down further by clicking on the directories and files:

Note: all your coverage files are kept in the ./coverage directory.

Pre-Commit Testing

Practicing Test Driven Development is a great way to squash bugs early. But did you know that you can keep the majority of bugs from ever getting into your code base in the first place? It's true. You can use Git's "pre-commit" hook to run tests every time you try to commit.

git commit -m "I fixed all the things!"
# Tests kick off here

If your tests fail, the offending code never makes it into the history of your repository. It's worth thinking about that for a minute.

If your tests pass, your working code is immortalized in the code-base. In the life-cycle of a project, good reasons arise to checkout code from a previous commit: perhaps you will need to diverge the code-base from a previous point in history, or to copy over some pre-existing features that were discarded.

Wouldn't it be great to know that at any point you checked out that code-base, you could be sure you were checking out "working code"? And wouldn't it be great to know that the code had 100% coverage, so that you could tweak a feature to sit alongside current changes without side-effects? I think so.

So let's install the Node.js package git-pre-commit to handle this for us:

yarn add git-pre-commit

We can now see two things added to our .git/hooks directory:

pre-commit-utils directory

pre-commit.js file

Fortunately, we can interact with our Git Pre-Commit Hook by adding commands to scripts in our package.json file. Update the test and precommit scripts like so:
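A sketch of the updated scripts (the exact block is assumed from the description that follows):

```json
{
  "scripts": {
    "test": "mocha ./lib/**/*.spec.js && yarn run cover",
    "precommit": "yarn test"
  }
}
```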

We have added && yarn run cover to the test script so that code-coverage is automatically run after testing. Then we also added yarn test to the precommit script, which will kick off both tests and coverage every time you try to commit.

OK, let's make a bug to test this out!

In your ./lib/my-lib.js file, add some bogus data to the return content.

Super! The commit failed. Our nasty bug got Git-blocked before it could spread across the web. Let's double check by looking at the git log.

Phew.

Testing in the Cloud

With the code so far, we can practice test-driven development on our local machine. But inevitably we will share our code, and people will create Pull Requests asking us to merge their fixes and updates. We cannot be certain that the developer creating a Pull Request has set up their environment correctly and run the tests.

No worries.

We can force each new pull request to have tests run automatically in the cloud by using Travis-CI and Github. Assuming you already have a Travis-CI and Github account, you can follow these steps.

1) Enable Your Repo in Travis

2) Select Travis-CI Integration in Github Settings

3) Add a Travis YML File to your Repo

In the root of your repository (alongside package.json), add a file called .travis.yml and insert the following contents:
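A minimal .travis.yml for a Node.js project might look like the following; the Node version line is an assumption:

```yaml
language: node_js
node_js:
  - "node"
```

Travis runs `npm test` by default for Node.js projects, which is exactly the script we wired up earlier.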

Commit this file to your repository and push to Github. Travis-CI should now have what it needs to run your tests and coverage in the cloud. Tests will be run and coverage will be generated every time that someone creates a pull-request to merge into your code-base.

Now when I create a Pull Request in Github, I can see that my tests have run and passed in Travis-CI:

So all incoming code will now run the gauntlet of tests you created, as well as any tests that the collaborator adds. Passing this test bar raises confidence that the new code being committed will integrate well. And hopefully the act of having any "bar" there at all will help weed out the wrong kind of commits and the wrong kind of collaborators.

Let's take a quick look at what happened on Travis:

In this picture we can see that Mocha tests and Istanbul coverage ran in Travis CI, just like they did on our local machine.

Coverage Reports in The Cloud

Great, things are adding up. We have a system locked down that can be used to practice Behavior Driven Development, we have a way of checking our coverage, and we can make sure any incoming production code passes tests before being accepted. But we do not have any way of sharing the status of our coverage with outside developers yet.

This is not spit-ball spaghetti code. The code that the collaborator checks out has been cared for.

If the outside developer commits changes to the new code, they will have higher confidence that they are not introducing new bugs, and in turn higher confidence that their changes will be well received. That should encourage them that their submissions are not a waste of time.

To set up coverage reports, we will use Coveralls.io. Assuming you have already signed up for a Coveralls.io account, use the following steps:

1) Flip the switch on your repo

2) Copy your repo token

Click on the "Details" button. This should bring you to a screen containing your Coveralls repo token. Copy this token and create a .coveralls.yml file next to your package.json file.

In your .coveralls.yml file, add your token:

repo_token: <your-token-here>

This will allow you to push coverage results from your local machine. The repo token should not be shared publicly, so make sure you update your .gitignore file so your token does not get committed:
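For example (ignoring the generated coverage output is worthwhile too):

```
.coveralls.yml
coverage/
node_modules/
```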

This script will run your coverage, and pass the response to the Coveralls package in your ./node_modules directory, which will push the updated coverage reports to the coveralls server where they can be seen publicly.
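A plausible shape for that coveralls script in package.json (assumed here, using the coveralls npm package and Istanbul's lcov output):

```json
{
  "scripts": {
    "coveralls": "yarn run cover && cat ./coverage/lcov.info | ./node_modules/.bin/coveralls"
  }
}
```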

Add Coveralls Script to Travis

Finally, add the following lines to your .travis.yml file.

after_success:
- npm run coveralls

This will trigger the coveralls npm script after each successful build.

If we commit these changes and create a pull request, we can see that Coveralls is now reporting our coverage at 100%!

Now when we commit and push changes, the Code Coverage Badge is automatically updated to represent the most recent results, displaying them in your README.md file on Github.

Conclusion

Congratulations! You can now use Behavior Driven Development in Node.js to create production code, and reap the many benefits of TDD along the way. You can check that any incoming code passes tests before you merge it, and attract new contributors to your project with your coverage badge.

I would like to say a big thank you to David Ernst for the blog post he wrote on this subject, which was a big help in getting some of this procedure figured out at the start of the year.

If you would like to look at the code from this blog post, I have added everything here on GitHub: node-bdd-cookie-cutter.