As we worked on React 16, we revamped the folder structure and much of the build tooling in the React repository. Among other things, we introduced projects such as Rollup, Prettier, and Google Closure Compiler into our workflow. People often ask us questions about how we use those tools. In this post, we would like to share some of the changes that we’ve made to our build and test infrastructure in 2017, and what motivated them.

While these changes helped us make React better, they don’t affect most React users directly. However, we hope that blogging about them might help other library authors solve similar problems. Our contributors might also find these notes helpful!

At Hubba, our business needs are always evolving, and development speed needs to keep up. One of the ways to keep the team moving forward without breaking everything is end-to-end (E2E) testing.

Having a full test suite with E2E tests allows us to move quickly. It allows developers to push code without worrying about breaking things. It enables releases with extra confidence. And it catches errors that are missed during manual regression testing.

End-to-end testing is where you test your whole application from start to finish. It involves assuring that all the integrated pieces of an application function and work together as expected.

It's a bit hard to single out something in particular since I'm learning every day. But here are a few things, in no particular order:

Being in a good team with a good manager is worth more than working on particular projects. Projects come and go, succeed and fail. It's the people you work with who are making these years of your life special, and you can only do your best work when there is an atmosphere of honesty, mutual support and respect.

Big rewrites often fail. If you want to replace some old code, you need an incremental adoption strategy for it. Blindly rewriting something from scratch in the hope of solving some problems often creates many more of them, especially if you didn't write the original version.

Guillermo Rauch tweeted this a while back. Let’s take a quick dive into what it means.

NOTE: This is a cross-post from my newsletter. I publish each email two weeks after it’s sent. Subscribe to get more content like this earlier right in your inbox! 💌

Write tests. Not too many. Mostly integration.

This is deep, albeit short, so let’s dive in:

Write tests.

Yes, for most projects you should write automated tests. You should if you value your time, anyway. It's much better to catch a bug locally from the tests than to get a call at 2:00 in the morning and have to fix it then. I often find that I save time when I put time into writing tests.

React makes it simple to build functional, component-based user interfaces on web and mobile; at Facebook, we have more than 30,000 React components in our main web repo alone. React's simplicity and functionality have led to its adoption by hundreds of thousands of developers outside Facebook.

With today's release of React 16, we've completely rewritten the internals of React while keeping the public API essentially unchanged. From an engineering standpoint, it's a bit like swapping out the engine of a running car: since hundreds of other companies (including Facebook) use React in production every day, we wanted to do the swap without forcing people to rewrite their components built in React.

There are many good articles on how to get started with automated browser testing using the NodeJS version of Selenium.

Some wrap the tests in Mocha or Jasmine, and some automate everything with npm or Grunt or Gulp. All of them describe how to install what you need, along with giving a basic working code example. This is very helpful because getting all the different pieces working for the first time can be a challenge.

But they fall short of digging into the details of the many gotchas and best practices of automating your browser testing with Selenium.

This article continues where those other articles leave off, and will help you to write automated browser tests that are far more reliable and maintainable with the NodeJS Selenium API.
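To make the reliability point concrete before diving in: one classic gotcha is the flaky test built on fixed sleeps, and the usual fix is polling until a condition holds. Here is a hand-rolled sketch of that pattern; the `waitFor` helper is my own illustration, not part of the Selenium API.

```javascript
// Illustrative polling helper (not part of the Selenium API): retry a
// condition until it returns a truthy value or the timeout expires.
async function waitFor(condition, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  let lastError;
  while (Date.now() < deadline) {
    try {
      const result = await condition();
      if (result) return result; // condition satisfied: hand back its value
    } catch (err) {
      lastError = err; // condition threw; keep polling until the deadline
    }
    await new Promise(resolve => setTimeout(resolve, interval));
  }
  throw lastError || new Error(`condition not met within ${timeout}ms`);
}
```

selenium-webdriver ships the same idea as `driver.wait(...)` together with the conditions in its `until` module.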

I have never met a single programmer who can code perfectly. I don’t think such a person exists.

Once upon a time I hated testing software. It wasn’t important to me. I didn’t see the purpose. It seemed like a huge waste of everybody’s time and money.

Throughout my career I was never taught how, or why, I should be testing my software. I made lots of excuses for not wanting to learn. I spoke to many developers who also made excuses for not wanting to learn. They still make the same excuses today. I eventually learned, they didn’t.

Throughout my time working with others I've come across many different opposing views on software testing (some were valid, but that's another blog post). Below are some of the most common reasons developers don't give software testing a chance …

Test Impact Analysis (TIA) is a technique that helps determine which subset of tests to run for a given set of changes.

A similar depiction for tests to run for a hypothetical change.

The key idea is that not all tests exercise every production source file (or the classes built from that source file). Code coverage or instrumentation, gathered while tests are running, is the mechanism by which that intelligence is gleaned (details below). That intelligence ends up as a map from production sources to the tests that would exercise them, but begins as a map from each test to the production sources it exercises.
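That map can be sketched as a plain data structure; the file and test names below are hypothetical:

```javascript
// Map built while tests run: each test -> production sources it exercises.
const testToSources = {
  'cart.test.js': ['cart.js', 'money.js'],
  'login.test.js': ['auth.js'],
};

// Invert it into source file -> tests that exercise that file.
function invert(map) {
  const sourceToTests = {};
  for (const [test, sources] of Object.entries(map)) {
    for (const src of sources) {
      (sourceToTests[src] = sourceToTests[src] || []).push(test);
    }
  }
  return sourceToTests;
}

// TIA in one line: given changed files, pick only the affected tests.
function testsFor(changedFiles, map) {
  const inverted = invert(map);
  return [...new Set(changedFiles.flatMap(f => inverted[f] || []))];
}

console.log(testsFor(['money.js'], testToSources)); // ['cart.test.js']
```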

A super-fast JSDom-based Selenium Webdriver API. Write end-to-end tests once and run them against this super-fast headless browser built on Node.js; once those tests pass, you can run them against real browsers in the cloud!

Installation

npm install taxi-rank -g

Usage

In a separate terminal, run taxi-rank, then you can use cabbie (or your webdriver client of choice), to connect to this super-fast virtual driver:

When I took my first real dev job in the late 90s, it was not common for developers to write their own automated tests. Instead, large companies depended on teams of testers, who tested manually, or were experts in complex (and expensive) automation software. Small companies were more likely to depend on code review, months of “integration” after the “development,”…or most commonly: pure hope.

But times have changed. Today, on most teams, writing automated tests is a normal part of the software developer’s job. Changes to a codebase usually aren’t considered complete until there are at least

Several weeks ago, I created a Babel plugin for runtime type-checking in JavaScript. After testing it on my own projects I applied it to React’s source code and got some interesting results. In this article, I will go step by step through my experiment.

Every math or comparison operation on mixed types in JavaScript is potentially unsafe. You can get a silent, unexpected result because values are converted by tricky rules. For example, 1 + 1 = 2, but if you accidentally add 1 + "1" you will get "11". To avoid such errors you can use Flow, TypeScript, or check operand types at runtime. I will apply the last approach to the React source code.
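A minimal sketch of what such a runtime check looks like; the helper names are mine for illustration, not the plugin's actual output:

```javascript
// Runtime type guard: throw loudly instead of silently coercing.
function assertNumber(value, name) {
  if (typeof value !== 'number') {
    throw new TypeError(`${name} must be a number, got ${typeof value}`);
  }
  return value;
}

function add(a, b) {
  return assertNumber(a, 'a') + assertNumber(b, 'b');
}

console.log(add(1, 1)); // 2
// add(1, '1') now throws a TypeError instead of silently producing '11'.
```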

This short guide is intended to catch you up with the most important reasoning, terms, tools, and approaches to JavaScript testing. It combines many great recently written articles about some aspects discussed here and adds a little more from our experience.

Look at the following logo of Jest, a testing framework by Facebook.

And indeed, Facebook has an excellent reason to use this slogan. In general, JS developers are not too happy with website testing. JS tests tend to be limited, hard to implement, and slow.

Nevertheless, with the right strategy and the right combination of tools, nearly full coverage can be achieved, and tests can be very organized, simple, and relatively fast.

After five years working with Node.js, I’ve learned a lot. I’ve already shared a few stories, but this time I wanted to focus on the ones I learned the hard way. Bugs, challenges, surprises, and the lessons you can apply to your own projects!

Basic concepts

Any new platform has its share of tricky bits, but at this point these concepts are second-nature for me. Digging into a bug you caused is a good way to ensure that you learn. Even if it is a bit painful!

When I was first getting started with Node.js, I was writing a scraper. It didn’t take me long to realize that if I didn’t do something, I’d make a whole lot of requests in parallel. This alone was an important discovery. But since I hadn’t yet fully internalized

Redux has become one of the most popular Flux implementations for managing data flow in React apps. Reading about Redux, though, often causes a sensory overload where you can’t see the forest for the trees. This also holds for testing Redux projects.

As usual, we’ll start by iterating that there isn’t one right way to test your Redux project. We will present an opinionated flavor that we personally believe in.

One of the biggest motivations for testing is engineering velocity. It may seem that writing tests slows down the development process, but this perception only holds in the short term. Without automated tests, we’ve noticed that our projects can only grow so much before our ability to deliver grinds to a halt.

Tests are today an essential part of software development. They help us minimize the number of bugs. They ease the verification that everything keeps working after code refactoring or changes. And, among other things, they give confidence to the team, as they can be an indicator of the status of the project. There are many different types of tests: integration tests, unit tests, load tests… This entry will focus on unit tests in JavaScript and how to use Sinon to get rid of the dependencies on other modules. We will be able to run these tests with Mocha, as we saw previously.
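To make the idea concrete, here is a hand-rolled stub in the spirit of what Sinon provides; this is a sketch for illustration, not Sinon's actual implementation, and the example dependency is hypothetical:

```javascript
// A hand-rolled stub: records calls and returns a canned value, so a unit
// test can replace a real module dependency (e.g. a database client).
function createStub(returnValue) {
  const stub = (...args) => {
    stub.calls.push(args); // record every invocation
    return returnValue;
  };
  stub.calls = [];
  stub.calledWith = (...expected) =>
    stub.calls.some(
      args => JSON.stringify(args) === JSON.stringify(expected)
    );
  return stub;
}

// Usage: stand in for a dependency and assert on the interaction.
const fetchUser = createStub({ id: 1, name: 'Ada' });
const user = fetchUser(1);
```

In a real suite you would use `sinon.stub()` inside Mocha's `describe`/`it` blocks instead, but the mechanics are the same: record the calls, return a canned value, and assert on the interaction.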

This line jumped out at me when I first read it, and I’ve thought back on it a lot since.

Tests are funny things. It feels like writing normal application code, but it’s not. Not really. A lot of the time I’ll catch myself refactoring test setups, making weird optimizations, sometimes even metaprogramming dynamic tests… but at the end of the day it usually doesn’t matter. What matters is having explicit tests that make it easy for you to understand both 1) what is failing, and 2) how you can fix it quickly.

With the help of Electron, it is now possible to run your entire web stack (SPA + backend) in a single process. When applied to testing, this becomes a compelling alternative to webdriver based testing, as it is much faster and easier to debug. In this talk, we will look at what types of tests there are in a typical web app, what they're for and then show off a real example.

Transcript

Tonight I'm going to be talking about something unheard of, something that very few people believe is even possible. And that is full stack integration testing that doesn't suck. At Featurist we take testing very seriously, which means we write all the tests. And we also take a lot of care about our development experience. If something doesn't feel right or is painful, we experiment with the tools, try this and that, and if there's nothing good out there, we end up writing our own stuff. And I think when it comes to integration testing of web applications, we have arrived at a particularly sweet spot, and that's what this talk is about. But before we get into the details, let's quickly establish some base terminology in terms of what types of testing there are. Broadly speaking, there are unit tests and integration tests. Unit tests are focused on taking a unit of code, such as a module or a class, and testing its behavior in isolation, providing inputs and asserting outputs. Integration tests, in contrast, treat your system as a whole, running tests across the entire stack. Both have their pros and cons. The good things about unit tests: they're easy to write, since the code under test is very easy to set up (normally it's just instantiating a class or importing a module or function); they're very fast, running in a single process and incredibly quick to exercise; and they give good feedback, by which I mean that if anything goes wrong it's typically easy to understand what has gone wrong, since there's just not much going on in a single test. The not so good thing about unit tests is that they get small coverage. If you have a sufficiently large system, a single test doesn't cover much; it's not very interesting normally. Unit tests also don't cover the glue code, which is the code that is used to put the units together. And again, in a system of any size there's a lot of that code.
And finally, the user interface is typically hard to unit test, because the UI code runs in the context of this massive global API called the DOM, and unless you're using a particular framework such as React, which sort of remedies that problem, it's kind of hard to have a unit. So there are integration tests to complement those shortcomings. The good things about integration tests: they cover lots of code. A single test cuts through the entire system, all the layers, or lots of the layers, and that's why it's a valuable thing. They also cover glue code, by definition, since they exercise everything, glue code included. And they cover the UI, since that's how users exercise our systems normally, and therefore that's how an integration test exercises the system as well. The not so good things about integration tests: they're very complex to set up. Systems normally include databases and some session state and what have you, and recreating a particular state in a particular place of an application can sometimes be just really hard. They're slow: slow to start and slow to run. They're slow to start because, again, applications consist of many layers (there's a browser application, then a backend application, then a database) and you have to bring it all up and run it synchronously-ish. And they're hard to debug, because there are so many layers underneath, and the browser window is mostly all the test has to go on; that's where the feedback is. For example, if somebody is booking a ticket and something went wrong, there is no confirmation saying "ticket booked successfully", and then you have to go and figure out what exactly happened. But all you have to start with is just the lack of a div in a browser. So yeah, we have unit tests: nice and easy, a joy to work with, but sometimes not very useful. That's the good cop. And then we have integration tests: tedious, slow, and painful.
But try leaving those integration tests aside, and it's only a matter of time until you get into this. The slide is called: all unit tests are passing, no integration tests. Beautiful. So on a typical project we have few integration tests, because while you can't get away without having any at all, they're too painful to write lots of them. We have lots of unit tests to cover all the permutations that are underneath, and we have mixtures of the two in between, be it testing of some adapters, testing of queries, or maybe functional tests of particular components. Years ago smart people already noticed that and came up with the notion of the test pyramid, where we have expensive and slow UI tests at the top, some mixture of slow and fast tests in between, and finally a fat base of unit tests. But what if I told you that in JavaScript today this isn't quite true anymore? The tests could be a lot less painful, and they could also be faster. So let's have a look again at the bad things about integration tests. We have complex setup; there's not much we can do about that. We have slow and hard to debug. Those last two things are not an inherent part of integration tests as such; they are at least partly implied by the tools we're using to write those tests. And the tool that is currently mainstream for writing those tests in JavaScript is WebdriverIO, which is a Selenium binding for Node.js. And if that doesn't tell you much, it's okay; I still don't entirely understand what that means either. But I know what it does, and what it does is a very simple thing: it allows you to control the behavior of a web page, such as clicking buttons or watching things being on the page and not being on the page, from a remote process, for example from a script.
And that very neatly applies to the problem of integration testing, since we want an automated way of exercising the UI. So a typical Webdriver setup is running your backend application in one process, running your client-side JavaScript in another process (a browser process), and also running the tests in a third process. So in a test we have three of those processes all running together, coordinating the run. And that's where the slowness and the hard-to-debug bit come from. Because, for example, if we have a client application that is using CommonJS requires, then before it gets to the browser process we of course need to run it through, for example, Browserify first, which is slow. And then if you want to debug, or if you want to pause the entire test, you need like three breakpoints; you need to coordinate the breakpoints to pause the test in a particular place, which is just painful. And that brings us to the solution of this problem, and the solution is called Electron. In a nutshell, Electron is a technology that gives us access to the DOM API and to the Node API in the same process. And what that means is that on line one you can create a div, and on line two you can write a file, in the same script. Or, on line one you can mount your React application, on line two you can start your Express backend, and on line three you can seed your database. And that's exactly what I'm about to show you. Okay, so the test. We have a very simple app here. This app has a button, and when we click the button it calls the Express web service and fetches todos. Here's the call that does that. And our test is going to do exactly that: it's going to click the button and assert that the data from the server (from the database, actually) is showing on the page. Let's run it... okay, that's actually already finished, although you didn't see it. It was really fast, but we can run it again.
And this is less than a second, and the test is doing exactly what I said. It mounts the client-side application, starts up the Express server, and seeds the database as well, all within the same process and all very fast. Let's look at the test here, put some breakpoints, and see how we step through the code. Let's run the test again. And here you can see we're about to click the button, so let's go and open up the browser application. And when we click the button (here you'll have to trust me) it's going to end up here, where it makes this request to the Express API to fetch the data. So let's carry on; here we are, this is the browser application. And now let me show you something cool. Let me go into the actual server application, the Express endpoint, right where we query the SQL database on the server and return the data. Here we are. Once again, this is the full stack, as full as you can get. And our entire test is paused, waiting patiently for us to examine, for example, the data we've just returned from the database. And then we can go here, right before we show the data on the page, and go back to the browser app. Here we are, we're about to show the same data; this is the browser app already. And the test has passed. So this is what's possible with Electron. This test is written in Mocha, and electron-mocha, the library, allows us to run those tests in Electron. But another crucial bit of the whole setup is a way to interact with the DOM, by which I mean a way to assert that certain things are in the DOM, and a way to click a button, for example. And this is the library that we developed at Featurist and have been using quite extensively for quite a long time already, called browser-monkey. It allows us to do exactly that: assert on the DOM and interact with the DOM. It looks like this.
Which looks a little bit like jQuery if you really think about it, with one really important and critical difference: browser-monkey has implicit waits embedded into it. For example, right after we click the button here on line 45, when we arrive at line 46 there is still no data on the web page. So if you were using jQuery, for example, this assertion would fail; there wouldn't be a one and a two on the page yet. But browser-monkey knows that things in single-page applications aren't always there yet, so it's going to wait implicitly for stuff to appear. The reason it's fast is that there is no need to browserify or run your client-side code through webpack or something. Since Electron has the Node API as well as the DOM, we can just use our require statements and stuff just works. So it's really, really, really fast compared to Webdriver-based tests. In fact, I'm going to give you some numbers as to how fast it is. So, it's always faster the second time. But under three seconds, and here we have two full-stack tests that seed the database, start the Express services, mount the app, and do everything including cleanup and setup. And that's pretty awesome, I think. And it doesn't stop there. For example, we can do something like running our tests in vdom; it's not really Electron-related, but it's just so cool that I can't skip it. Look, it's one second; that's almost approaching that threshold of 300 milliseconds for a perfect feedback time. Okay, not quite, almost, but it's still good. Oh, okay, this is the demo. Right, so yeah. The test stack is, largely speaking, electron-mocha and browser-monkey. Those two things put together allow us to write these otherwise historically painful tests in a much, much more comfortable way. And it's not just a test project on the side; we are using this for real on real client projects.
We are running tests in Electron using electron-mocha, and browser-monkey too. So there's nothing that should stop anyone here from just going and doing it tomorrow. And finally, a final note: the fact that integration tests are suddenly easy to write doesn't mean that we should abandon all other types of tests; [inaudible 00:18:07] still a valuable thing for particular cases, for example where you need to exercise permutations of a particular behavior or logic. And, well, this is it for me. Thank you. Yeah? - [Male 1] So, on the client project where you said you're running 1,000 or whatever tests, how much time does that take? How much time does that suite take? - Well, we have a split there. We have a suite that runs only browser tests, mocking out the APIs, and this is a very, very large suite of tests, about 1,000 tests; it runs in electron-mocha and takes maybe about four minutes. Obviously it very much depends on what types of tests they are; they're quite complex tests, not perfect. And we have proper full-stack tests that run in about two or three minutes; there are fewer of them, maybe 100, but there's more to them. They're not limited to just one Express app; there are at least five or six of them, including some mocked services. So yeah, that's a few minutes, which is quite good. But again, you'd have to go and see the particulars of the tests to make some meaning of those numbers. - [Male 2] But 1,000 tests is pretty large. - Yeah. - [Male 3] [Inaudible 00:19:49]. Have you done any time or space complexity analysis on this? How does it scale relative to doing integration tests in other ways [inaudible 00:20:03]? - No, we haven't. - [Male 4] Can you just repeat the question? - Oh, sorry, yes. Repeat the questions. The question was, have we done any... - Complexity analysis... - Complexity analysis.
- I'm just wondering how it scales relative to everything else, because obviously having two tests running like that is pretty impressive, and you've got the whole stack running there. But when you have to scale it up to real-world test suites, does it scale comparably to existing tools and methods that we use already? Or is it nice, but mileage might vary? - Well, the way we're running tests, we're running them sequentially, one after another. So I suppose the way we're using it, it doesn't really matter if there's 1 or 1,000; it's just going to run them one by one. And every test tears down everything completely and brings everything back up... - Yeah. - ...mostly. - So we haven't spotted any scalability issues so far. - Yeah, I was just interested to see a graph go from one test, this is how long it took, up to maybe 10,000 tests of a similar suite across the [inaudible 00:21:20] integration test tools. - Okay, well, that's probably for the next version of this talk. - No, [inaudible 00:21:27] nice one, cheers man. - [Male 5] At the moment, if you have, basically, Webdriver, [inaudible 00:21:37], you have implicit waits, you have everything executed on load, and then you can add extra waits; what's the purpose of writing this tool? [Inaudible 00:21:50], but what is the purpose really? - The purpose of... - Because you have everything built into the Webdriver tools at the moment. So the benefit, you said, basically... - The benefit is... - [Inaudible 00:22:07]. - The benefit is the speed. The way Webdriver works is fundamentally different from the way you can run those tests in Electron. Webdriver has at least three separate processes, which makes it slower and also harder to debug. For example, if you need to pause a test the way I showed, in your Express app, you'd need to coordinate breakpoints, or you'd need to dance around a little bit.
But here you can just put a breakpoint and everything just stops there. And, like I said, since there's no Browserify or webpack step, there's no browser compilation at all. The startup time of those tests is important: if you keep repeating tests time after time while you're working on a feature, it's really important that they start up quickly. And I think there's a very impressive gain there, so those are the things. When I talk about browser-monkey and implicit waits and such: yes, in a way it mirrors what Webdriver gives you, a similar API for asserting and manipulating, or interacting with, the page. But since we're not using Webdriver, because of the disadvantages that we don't really like, we had to come up with a similar tool that allows us to interact with the page, and [inaudible 00:23:46]. - [Inaudible 00:23:47]. - Okay. - [Male 6] Can you manage multiple browsers? - This is a very good question. This is one area where this approach falls short: of course you can't manage multiple browsers, or at least I don't know how to do that, maybe you can. But I suppose the argument here would be that, largely, we are interested in exercising behavior, and therefore if we want to test the code in different browsers, we'd just end up with a separate suite of particular tests, tests that exercise the [inaudible 00:24:28] and nothing else. And so that would be my excuse. Thank you very much.

Testing is a double-edged sword. On one hand, having a solid test suite makes code easier to refactor, and gives confidence that it works the way it should. On the other hand, tests must be written and maintained. They have a cost, like any other code.

In a magical world, we could write our code, and then verify that it works with very little extra code.

Snapshot tests come close to offering this dreamy future. In this tutorial, we will go over what snapshot tests are and how to start using them with React.

What is a Snapshot Test?

A snapshot test verifies that a piece of functionality works the same as it did when the snapshot was created. It’s like taking a picture of an app in a certain state, and then being able to automatically verify that nothing has changed.
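The mechanics can be sketched in a few lines (a toy model, not Jest's implementation): serialize the output, record it on the first run, and compare on later runs.

```javascript
// Toy snapshot store: record on first run, compare afterwards.
const snapshots = {};

function toMatchSnapshot(name, value) {
  const serialized = JSON.stringify(value, null, 2);
  if (!(name in snapshots)) {
    snapshots[name] = serialized; // first run: write the snapshot
    return { pass: true, written: true };
  }
  // later runs: pass only if the serialized output is unchanged
  return { pass: snapshots[name] === serialized, written: false };
}
```

Jest stores the serialized output in a `.snap` file and fails the test when the new output differs, prompting you either to fix the regression or to update the snapshot.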

This is the biggest React Styleguidist update yet, with 300 commits and four months of work. It incorporates a lot of rethinking and rewriting, but most of the changes were made to simplify the initial configuration.

Here’s a quick overview of the most notable changes. See the release notes for the full list.

By default, it will load components from src/components/**/*.js, and example files from Component/Readme.md or Component/Component.md.
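Those defaults can be overridden in styleguide.config.js; a minimal sketch, assuming the standard `components` glob option:

```javascript
// styleguide.config.js - a sketch of overriding the default component glob
module.exports = {
  components: 'src/components/**/*.js',
};
```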

Testing is integral to creating and maintaining high-quality software. Throughout the buildout process, you’ll often find developers and designers doing manual testing — “Does this look right?” However, due to the often subjective nature of interface design, it’s not really possible to write an automated test to capture that “correctness”. This means that companies are faced with a decision between time-consuming manual testing or the inevitable decline in UI quality that results from a lack of a proper testing regime.

The reason testing UIs is hard is that the salient details of the smallest modules of UI (components) are hard to express programmatically. When is the output of a component correct? Correctness can neither be determined by the exact sequence of HTML tags/classes nor the textual part of the output. For years, different approaches have attempted to hit the sweet-spot and capture the nuance without any real success.

Test Driven Development (TDD) is a process for writing software that provably satisfies the software requirements. The process works like this:

1. Red: write a failing test for the behavior you want.
2. Green: write just enough code to make the test pass.
3. Refactor: clean up the code while keeping the tests passing.

This workflow is commonly known as Red, Green, Refactor.

When you dig into TDD you’re going to find a bunch of options for test frameworks. Let me save you some time: Which one you pick matters less than how simple your test suite is. Some of the fancier ones (Mocha, Jasmine) tend to encourage users to produce overly-complicated tests, but if you follow the advice in this article, almost any framework will suffice.

In fact, if you’re not testing a large app, a simple vanilla-js test suite is probably fine:

Building open source software can be lots of fun. For some reason, it brings me way too much joy to see code coverage and tests run on every push to GitHub. It doesn't need to be hard to set up! I'm going to show you how to add automated testing and code coverage reporting to your apps and libraries for every push, pull request, and merge! Here's what we need to get started:

Jest is the best test runner out there right now. It may not be the absolute fastest, but it has the easiest and most useful features that you're going to need. If you use mocha, you'll end up installing istanbul or something else that will run coverage. You want snapshot testing? Jest has that built in too.

In this lesson, we walk through how to use one of React's Test Utilities (from the react-addons-test-utils package) called "Shallow Rendering". This lets us render our React component one level deep - without a DOM - so that we can write tests for it. It works kind of like ReactDOM.render, where the shallow renderer is a temporary place to "hold" your rendered component so that you can assert things about its output. Tests written using the shallow renderer are great for stateless or "dumb" components that simply have their props passed to them from a parent container or "smart" component. These shallow renderer tests work especially well for those simple, presentational components.

Hello my friends, today we are going to look into the most popular solutions for functional web testing. For my review, I listed the most well-known and popular frameworks, sorted them by the number of stars in their GitHub repos, and picked the top five. Here they are (the number of stars was recorded at the time this article was written and may differ from the current score).

CasperJS is not a native Node.js solution: its scripts run inside PhantomJS rather than Node, and its command-line launcher is written in Python. However, I've added this framework to my review because it can be installed from npm and so fits well into the Node.js toolchain.

Further, we will have a detailed look at each of them. We will discuss their main features and try to perform a couple of basic actions with each, which will allow us to understand what each framework is worth. I’ll describe what you need to do to write your first simple test. This review does not encompass all the features — only the first impression from getting started with the framework. As a test scenario, we will use each framework to find its repository on GitHub.

it works out of the box, with no need to specify a glob for test files or add Babel hooks

it runs tests in parallel, which both discourages shared global state and makes the suite run faster

the tests are async by default

I quite like AVA since I can just drop it in and reap the benefits without too much hassle. I've never had to fight AVA to get tests running, which means more focus on the code and tests and less on the setup.

Running AVA at the command line is as simple as invoking the ava command (if you've got it installed globally).

Today we are going to dive into the world of functional web testing with TestCafe.

Unlike the majority of other end-to-end (e2e) testing tools, TestCafe is not dependent on Selenium or WebDriver. Instead, it injects scripts into the browser to communicate directly with the DOM and handle events. It works on any modern browser that supports HTML5 without any plugins. Further, it supports all major operating systems and can run simultaneously on multiple browsers and machines.

Parameterized tests allow a developer to run the same test over and over again using different values. This can be useful if you need to test that your function can handle a range of different inputs, including edge cases. It can be impractical to write an individual test for each input.

In this contrived example, we want to test our validateName function with a range of different inputs to try and break it. Here we are using three, but what if we want to test with hundreds? We want to avoid duplicating the same test again and again.

Many testing frameworks, such as JUnit, NUnit and MSTest, offer parameterized testing as a feature and have done so for years. Unfortunately for us JavaScript developers, Mocha does not support this feature out of the box. People generally work around the problem by writing their own forEach loops and iteratively calling their unit tests. Though this works, it can lead to a slightly confusing, cluttered syntax that detracts from the precise purpose of the test.

The idea that TDD damages design and architecture is not new. DHH suggested as much several years ago with his notion of Test-Induced Design Damage, in which he compares the design he prefers to a "testable" design created by Jim Weirich. The argument boils down to separation and indirection: DHH's concept of good design minimizes these attributes, whereas Weirich's maximizes them.