Agnostic HTTP Endpoint Testing with Jasmine and Chai

Last updated on the 27th of February, 2016.

In this post, I’m going to share my strategy for endpoint testing. It has a few cornerstones:

It should test against a running server by sending HTTP requests to it, instead of hooking onto the server instance directly, like supertest does. This way, the strategy becomes agnostic and portable - it can be used to test any endpoint server, even servers written in other languages, as long as they communicate through HTTP.

Each suite should be written as a narrative. To this end, BDD-style testing is very suitable. As an example, consider the narrative describing the authentication flow for an app:

I register as a user, providing a suitable email and password. The server should return a 200 response and an authentication token. Then, I login using the same email and password as before. The server should return a 200 response and an authentication token. I login using a different email and password. This time, the server should return a 401 response. If I register with the same email as before, the server should return a 422 response and an error message in the response body indicating that the email has been taken.

A few points to take note of:

Even though the strategy is meant to be as agnostic as possible, you need to find a way to run the server with an empty test database, and then have some (hopefully scripted) way to drop it once the tests are complete. This part will depend on what database adapter/ORM you are using. I will share my solution for an Express server backed by RethinkDB later.

Remember that the database is a giant, singular hunk of state. If you’re going to be adopting this style of testing, there is no way around this. You’re not just going to be running GET requests - you’re going to be running POST and PUT and DELETE requests as well. This means that you need to be very careful about tests running concurrently or in parallel. It’s great to have performant tests, but don’t trade away tests that are easy to reason about, and which reveal clearly which parts of your app are breaking, for performance.

I tried Ava first, and was actually halfway through writing the test suite for a project with it. I really liked it, but Ava was built for running tests concurrently and in parallel. There came a point where the test suite would fail unpredictably depending on the order in which the tests were run. Although it’s possible to run Ava tests in serial, I felt like I was fighting against the framework.

I also looked at Tape, but I consider Ava to be superior to Tape for stateless unit testing. If you’re using Tape, do consider checking out Ava for future projects. Their syntaxes are very similar, and Ava is noticeably faster.

In the end, I settled on Jasmine, although I imagine Mocha would be equally suitable. There are three technical issues I would like to talk about: how I write the Jasmine specs in ES2015 JavaScript, how and why I used Chai, and how to set up and tear down the test database.

ES2015

Only two words are needed to describe why this is so important here:

async/await

(I know - technically, it’s not part of the ES2015 spec, but let’s dispense with the pedantry here.)

Thankfully, jasmine-es6 exists, and installing it is exactly the same as plain Jasmine. It ships with async support out of the box.

Chai

Jasmine ships with its own BDD-style expect assertions, but I chose to override them in favour of Chai’s, which has a much richer plugin ecosystem. In particular, the existence of chai-http prompted the switch. chai-http provides assertions for HTTP testing, as well as a thin superagent wrapper with which to make requests. Perfecto!
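For a flavour of what this looks like, here is a hedged sketch (not code from this post) of a jasmine-es6 spec using Chai and chai-http for the registration part of the narrative. The base URL and routes are assumptions for illustration:

```javascript
const chai = require('chai');
const chaiHttp = require('chai-http');

chai.use(chaiHttp);
const expect = chai.expect;
const api = 'http://localhost:3000'; // the already-running server under test

describe('Registration', () => {
  it('returns 200 and a token for a fresh email', async () => {
    const res = await chai.request(api)
      .post('/register')
      .send({ email: 'me@example.com', password: 'hunter2' });
    expect(res).to.have.status(200);
    expect(res.body.token).to.exist;
  });

  it('returns 422 for a duplicate email', async () => {
    let response;
    try {
      response = await chai.request(api)
        .post('/register')
        .send({ email: 'me@example.com', password: 'hunter2' });
    } catch (err) {
      response = err.response; // superagent rejects on non-2xx statuses
    }
    expect(response).to.have.status(422);
  });
});
```

Note the try/catch in the second spec: superagent’s promise interface rejects on non-2xx responses, so the expected 422 arrives on the error object rather than as a resolved value.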

It’s not really difficult to roll your own assertions and request wrapper, as I did with Ava, but why bother if you can piggyback on the hard work of others?

Database Setup/Teardown

Setup is quite straightforward - depending on what server framework you’re using, configure it (ideally using environment variables passed in through the command line) to connect to a test database using a different set of credentials from your usual development credentials.

I also reset the database between each narrative (or what Jasmine calls a suite). I find that this strikes a good balance between not resetting at all, which would make keeping track of database state untenable, and resetting after each expectation, which makes setup and teardown much more tedious and slows testing down (e.g. registering a user before each expectation).

With that in mind, a good rule of thumb emerges. If a narrative becomes so long as to make the database state confusing to reason about, it’s probably time to split it up.

As for database teardown, I rolled my own solution. For this particular project, I’m using thinky, an ORM for RethinkDB. thinky exposes the RethinkDB driver r, which allows me to write this: