Running jQuery QUnit tests under Continuous Integration

UPDATE: The code in this post is out of date. Read it for the explanation, but if you want to implement it, go grab NQUnit via NuGet.

Setup

This post assumes you are already writing unit tests for your JavaScript code. If not, check out Chad’s post on Getting Started with jQuery QUnit. We use jQuery and QUnit at work, so my code examples are geared toward those frameworks. However, the approach should be very easy to adapt to your JavaScript framework of choice.
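For reference, a QUnit test page boils down to an HTML file that loads jQuery, QUnit, your production script, and a script of assertions. A minimal test script might look like the sketch below, using QUnit's classic global test/equals API; the function under test, formatPrice, is purely illustrative:

```javascript
// Hypothetical function under test.
function formatPrice(cents) {
  return '$' + (cents / 100).toFixed(2);
}

// QUnit assertions; these run when the test page is opened in a browser.
// Guarded so the file can also be loaded outside a QUnit page.
if (typeof test === 'function') {
  test('formatPrice formats whole dollars', function () {
    equals(formatPrice(500), '$5.00');
  });

  test('formatPrice keeps the cents', function () {
    equals(formatPrice(1999), '$19.99');
  });
}
```

Opening the page in a browser renders a pass/fail list for each test, which is exactly the output the rest of this post is concerned with harvesting.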

Overview

A good continuous integration server will let you run your automated tests, fail the build if a test fails, and publish a report of the test results. For all of that to work, your CI tool needs to be able to run the tests from the command line and harvest the output in a form it understands. The problem is that JavaScript testing frameworks like QUnit require you to open an HTML page in a browser to run the tests and view the results. You can certainly launch an HTML page in a browser from a command line ("start mypage.htm"), but you won’t be able to feed the results back to the CI server. We can get around this by using a tool like WatiN to control the browser from NUnit (or some other test framework supported by your CI server). WatiN allows you to spawn an instance of Internet Explorer, navigate to a URL, inspect the contents of the rendered DOM, and shut down the browser, all from within an NUnit test.

A simple solution

Our first approach was to modify the QUnit test runner script so that it would create an element named TESTRESULTS holding the number of failed tests. We could then run the HTML page containing our tests and use WatiN to verify that TESTRESULTS contained a 0. An entire page of QUnit tests was reported by NUnit as a single test: either every QUnit test passed and the single NUnit test passed, or at least one QUnit test failed and the NUnit test failed. You can see an example of this approach in my comment on Chad’s post referenced earlier. There are two problems with it: your total test count is inaccurate (you could have hundreds of QUnit tests that only show up as a single test in the CI test report), and, more importantly, it is not immediately obvious which test failed, since it could have been any one of the many QUnit tests within a page.
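The test-runner change amounts to a completion hook that writes the failure count into a well-known element. Here is a sketch of that idea; the element id TESTRESULTS matches the description above, but the shape of QUnit's completion callback has varied across versions, so treat the wiring at the bottom as an assumption:

```javascript
// Write the number of failed tests into a TESTRESULTS element so that a
// browser-automation tool (WatiN in our case) can read one pass/fail value.
function writeTestResults(failedCount, doc) {
  var el = doc.createElement('div');
  el.id = 'TESTRESULTS';
  el.innerHTML = String(failedCount);
  doc.body.appendChild(el);
  return el;
}

// Hook it up when running inside a QUnit page. Note that older QUnit
// versions exposed an assignable QUnit.done function rather than a
// callback registry, so this registration step may need adapting.
if (typeof QUnit !== 'undefined' && typeof document !== 'undefined') {
  QUnit.done(function (details) {
    writeTestResults(details.failed, document);
  });
}
```

On the NUnit side, the WatiN test then only has to assert that the element's text is "0".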

Our current approach

A better approach would be to have a single NUnit test for each QUnit test. However, manually writing an NUnit test for each QUnit test sounds like a nightmare. What we need is a way to generate test cases dynamically. MbUnit supports this natively with its Factory attribute. We can get the same behavior in NUnit using the IterativeTest add-in (get the source and compile it against your version of NUnit). It allows you to specify a factory method that supplies a series of values to a single test method. When NUnit loads the class with this test method, it will create a separate test case for each value passed to the test method (if your factory method returns 3 values, you will have 3 test cases).

The full code for our test fixture is at the bottom of this post. The method GetQUnitTestResults takes an HTML page name as input and returns a series of QUnitTest instances. Each QUnitTest instance contains the details of an individual QUnit test run: the file name, the test name, the result, and any related failure message. The method RunQUnitTests is the actual factory method used by the IterativeTest add-in; it lets me add new QUnit HTML pages without having to create new test methods. The gory details of parsing the DOM for the QUnit results are in grabTestResultsFromWebPage. It’s not my proudest piece of code, but it gets the job done (for now). This is likely the only method you would need to change if you were using a JavaScript test framework other than QUnit.
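To make the scraping step concrete, here is the same idea expressed in JavaScript terms (the C# fixture does the equivalent through WatiN's DOM API). It assumes the classic QUnit result markup: one li per test under the #qunit-tests list, carrying a "pass" or "fail" class and a .test-name span; the exact markup differs between QUnit versions, so check the page your version actually renders:

```javascript
// Collect { name, passed } records from QUnit's rendered result list.
// `listItems` is expected to be the <li> elements under #qunit-tests.
function collectResults(listItems) {
  var results = [];
  for (var i = 0; i < listItems.length; i++) {
    var li = listItems[i];
    results.push({
      name: li.querySelector('.test-name').textContent,
      passed: /\bpass\b/.test(li.className)
    });
  }
  return results;
}

// In the browser you would call it with:
// collectResults(document.querySelectorAll('#qunit-tests > li'));
```

Each record in the returned array corresponds to one NUnit test case in the approach described above.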

Using this approach, we now get an accurate reflection of our test count, as each QUnit test is counted with every other NUnit test. Even better, when a test fails, we get detailed information about exactly which test failed and why. This is the output of a run that has a failing test:

Notes

NUnit add-ins must be compiled against the version of NUnit you are using. Make sure you have the same version of NUnit on your developer desktops and the CI server. We standardized on NUnit 2.4.7 simply because that was the latest version supported by TestDriven.NET.

The IterativeTest add-in must be in the addins sub-folder of the folder that contains your nunit-console.exe. If you want to run the tests via TestDriven.NET, you’ll also want to put a copy of the add-in in &lt;ProgramFiles&gt;\TestDriven.NET 2.0\NUnit\2.4\addins.

While there is a way to get the TeamCity custom NUnit runner to load add-ins (using /addin:), I ran into trouble getting it all to work with WatiN as well. I discovered a JetBrains add-in for NUnit that allows nunit-console.exe to report progress to TeamCity in the same way as their custom runner. While it isn’t supposed to be available until TeamCity 4.0, I found the necessary files in this patch (see the Attachments section), which adds NUnit 2.4.7 support to TeamCity 3.1.

This is good stuff. Just thought I should mention (for anyone else’s benefit) that I tried this with WatiN 1.3 and had all kinds of problems (had to make minor changes to make it compile, but then it still didn’t work). It all worked with the WatiN 2.0 CTP.

Anton

Hey guys,

There is now a TestCaseSource attribute in NUnit, so there is no longer any need to write a custom add-in for this purpose.
Sample:

@Bayard – That is what we are doing – we use WatiN (a .NET variant of Watir) to run the tests and retrieve the output. The difference is that we do the assertions in NUnit so that each JavaScript test can be reported independently, instead of the entire page of tests failing or succeeding as one.

http://www.jeremyjarrell.org Jeremy Jarrell

Hi Joshua,

Thanks for the excellent writeup. We were able to take this code and run with it to get our JS tests running on our CI server.

However, we’ve been plagued by timeout issues when running Watin on top of IE. Has anyone else had similar issues and discovered how to solve them?