Crowdsourcing a feature support matrix using QUnit and Browserscope

The idea

While we were working on the CSS Regions feature, one of the things people asked for, from quite early on, was a way of telling which CSS Regions features were supported in which versions of the different browsers out there. In the beginning, “Get the latest WebKit nightly” was all nice and simple, but when the code got into Chrome – which has no fewer than 3 official release channels – and Microsoft started working on their own Internet Explorer 10, things got more complicated. Especially since it was still not very easy to say what and how much of the spec was implemented at any given time.

This is how the CSS Regions support matrix was born. We decided early on that we wanted the support matrix out in the open, free for anybody to check out the code for the tests, run them on their own setups and, eventually, submit results.

So, what if you want to implement a similar feature support matrix yourself, maybe for another CSS/HTML spec? Well, you could go straight to the code on GitHub, or you could read through this post for a walkthrough and then go check out the code.

The “hidden” stuff

As the title says, under the hood the support matrix is powered by QUnit, Browserscope and a suite of feature detection tests we wrote. There’s also a sprinkle of Twitter Bootstrap and jQuery, but that’s for the shiny UI – more on which later on.

Why QUnit?

We chose QUnit for a number of reasons, mainly having to do with its flexibility and how easy it is to write tests. We’ll start with the latter.

Writing QUnit tests is merely a matter of including qunit.js and calling test() with a function callback containing your assertions. By default, QUnit automatically runs all the tests once they’re loaded, but if you want to postpone running the tests, there’s a switch for that (QUnit.config.autostart) and a function to call later (QUnit.start()).

All in all, a very simple feature detection test for the hypothetical, yet oh-so-useful ‘sparkle’ CSS property might look like this:
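The code block that originally followed this sentence is missing from this copy. A minimal sketch of such a detection test, assuming qunit.js is already included in the page and using the old-style global test()/ok() API that the rest of this post uses:

```javascript
//Feature detection test for the hypothetical 'sparkle' CSS property.
test('sparkle support', function() {
    var el = document.createElement('div');
    //An unknown property is dropped by the CSS parser, so a supported
    //property survives a round-trip through the element's style object.
    el.style.cssText = 'sparkle: shiny;';
    ok(el.style['sparkle'] !== undefined, 'sparkle exists on the style object');
    equal(el.style['sparkle'], 'shiny', 'sparkle value was parsed and kept');
});
```

The property name and the round-trip trick are illustrative; any check that distinguishes a supporting browser from a non-supporting one works just as well here.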

Like most unit testing frameworks out there, QUnit provides hooks so you can integrate it with different build or report systems. As we’ve seen before, tests can be auto-run or deferred to a later time. Also, there are hooks to execute an action after every test and after all the tests have been run. These are useful, for instance, for collecting data across tests and then sending it somewhere else for processing. This, again, is fairly simple to do:

<script type="text/javascript">
//All the magic from above comes here, plus some more,
//say another test that checks support for sparkle: 'fake'

var passedTests = 0;

//This gets called after each test
QUnit.testDone = function(t) {
    //t is an object containing details about the test that just ran:
    // t.name is the name of the test
    // t.failed is the number of assertions failed
    // t.passed is the number of assertions passed
    // t.total is the total number of assertions
    //if the test passed, just increase the number of passed tests
    if (!t.failed && t.total == t.passed) {
        passedTests++;
    }
}

//This gets called once all the tests have run
QUnit.done = function(r) {
    alert(passedTests + ' tests have passed, submitting results');
    window.location = 'http://www.example.com/dashboard?passed=' + passedTests;
}
</script>

The snippet above uses the testDone() hook to count the passing tests and the done() hook to report the results to a dashboard hosted at example.com. If that looks too simple, it’s because it really is. The next section will show you neater things that can be done using these hooks.

Why Browserscope?

So now you have a bunch of feature detection tests. How do you collect, store and present the results without a headache and without writing too much code? That’s where Browserscope takes the spotlight.

In a nutshell, Browserscope is an open-source distributed testing platform designed with browser profiling and feature testing in mind. What this means is that it allows anyone with a browser and an internet connection to run your tests, provided they’re hosted on a publicly available site. It then takes care of collecting and grouping the results, based on the users’ browsers.

Your tests and the results people get by running them are associated with an API key you can get by logging in to Browserscope (you can find extensive details here). You can generate more than one key – basically each key represents a test suite and its results.

While all this talk about tests might get you thinking about testing frameworks, or at least some APIs or helpers, the reality is a lot simpler. The only thing Browserscope cares about is that you fill in the _bTestResults object with key-value pairs. The keys represent the names of the tests in the suite and the values represent the score that particular test achieved. As long as the values are numbers, they can be as simple as a 1 for pass and a 0 for fail, or as complex as percentages and fractions. Once the _bTestResults object is filled in, all you need to do is dynamically load a script from Browserscope and it will automagically send your results to the cloud for processing.

Too much talking and too little code? Here’s how you would go about sending the test results to Browserscope for our beloved ‘sparkle’ CSS property.
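The code block that originally followed here is also missing. A sketch of what it likely contained, based on Browserscope’s documented beacon mechanism – the test names are hypothetical and the test key is a placeholder you replace with your own:

```javascript
//Results to report: test names mapped to numeric scores.
var _bTestResults = {
    'sparkle-basic': 1,  // 1 = pass
    'sparkle-fake': 0    // 0 = fail
};

//Dynamically load the Browserscope beacon script; it reads the global
//_bTestResults object and stores the results under your test key.
(function() {
    var testKey = 'CHANGE-THIS-TO-YOUR-TEST-KEY';
    var newScript = document.createElement('script');
    newScript.src = 'http://www.browserscope.org/user/beacon/' + testKey;
    document.getElementsByTagName('head')[0].appendChild(newScript);
})();
```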

Congratulations, that’s it! Run this code in your browser (after filling in your test key) then head to http://www.browserscope.org/user/tests/table/CHANGE-THIS-TO-YOUR-TEST-KEY?v=3&layout=simple to check for your results.

Putting it all together

If you have followed along in the previous sections it should be pretty clear how the two work together and how they can be used to deploy a minimally functional feature support matrix. For the sake of completeness, the snippet below has pretty much everything you need to get going. Just add some markup and CSS of your liking and you’re good to go.

One final note, though: the example below automatically sends the results to Browserscope. In practice, the nice thing to do is to ask your users before sending their results to Browserscope (e.g. via an opt-in checkbox, a confirmation dialog, etc.).
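The combined snippet originally shown here is missing from this copy. Here is a sketch of what it would contain, reusing the hypothetical ‘sparkle’ test from earlier and a placeholder test key (and, as noted above, it beacons automatically – in a real deployment you would gate the QUnit.done step behind user consent):

```javascript
//Don't auto-run; we start the tests ourselves once the page has loaded.
QUnit.config.autostart = false;

var _bTestResults = {};

//One feature detection test for the hypothetical 'sparkle' property.
test('sparkle', function() {
    var el = document.createElement('div');
    el.style.cssText = 'sparkle: shiny;';
    equal(el.style['sparkle'], 'shiny', 'sparkle is supported');
});

//Record each test's result as a 1/0 score for Browserscope.
QUnit.testDone = function(t) {
    _bTestResults[t.name] = (!t.failed && t.total == t.passed) ? 1 : 0;
}

//Once everything has run, beacon the results to Browserscope.
//NOTE: in practice, ask the user first before doing this.
QUnit.done = function(r) {
    var testKey = 'CHANGE-THIS-TO-YOUR-TEST-KEY';
    var s = document.createElement('script');
    s.src = 'http://www.browserscope.org/user/beacon/' + testKey;
    document.getElementsByTagName('head')[0].appendChild(s);
}

window.onload = function() {
    QUnit.start();
};
```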

If you want some tips on how to improve the feature support matrix with a UI that… sparkles, read on!

Making it readable

OK, so now that you’ve got your feature detects running and the results comfortably aggregated in Browserscope, how do you show the world the level of support for your new feature? The simplest way is to let Browserscope display its ready-made HTML widget: adding a single script element to your code will load and display a table with the test results, grouped by “top browsers”. Different URL parameters can be used to customize both the data that’s being sent to the client and the format of this data. The most important URL parameters are:

v – specifies how the data is aggregated and what browsers are included. You can choose to group data by predefined categories of browsers (top, top-d, top-d-e, top-m) or by the browsers for which there are actual test results stored (0, 1, 2, 3). For instance v=3 will return all browser versions, while v=top will return the top browsers list (regardless of whether your tests were run on them or not).

o – specifies the format of the data. o=html returns actual HTML code, o=js loads JavaScript code that will render the table, and o=json returns the test results encoded as a JavaScript object.

w and h – set the width and height of the HTML widget, when using o=js or o=html.
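Putting those parameters together, embedding the ready-made widget with the JS renderer boils down to a single line – the test key and the dimensions below are placeholders:

```html
<script src="http://www.browserscope.org/user/tests/table/CHANGE-THIS-TO-YOUR-TEST-KEY?o=js&v=3&w=700&h=400"></script>
```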

To take care yourself of the actual rendering of the results, choose JSON as the format of the data. Doing so will return a JavaScript object that might look like this:
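The example object originally shown here is missing. An illustrative sketch of the shape of an o=json payload – the browser names, test names and scores below are made up, and the exact fields you get back come from Browserscope itself:

```javascript
//Illustrative shape of Browserscope's o=json payload: results are
//grouped per browser, with a per-test score under "results".
var results = {
    "category": "usertest_CHANGE-THIS-TO-YOUR-TEST-KEY",
    "results": {
        "Chrome 19": {
            "summary_display": "2/2",
            "total_runs": 14,
            "results": {
                "sparkle-basic": { "result": "1" },
                "sparkle-fake":  { "result": "1" }
            }
        },
        "IE 10": {
            "summary_display": "1/2",
            "total_runs": 3,
            "results": {
                "sparkle-basic": { "result": "1" },
                "sparkle-fake":  { "result": "0" }
            }
        }
    }
};

//Walking the object per browser gives you a PASS/FAIL per test:
for (var browser in results.results) {
    var tests = results.results[browser].results;
    console.log(browser + ': sparkle-basic ' +
        (tests['sparkle-basic'].result === '1' ? 'PASS' : 'FAIL'));
}
```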

This object can then be turned into a matrix-like table with PASS / FAIL / N/A markers for each browser and test combination, or into a bar chart showing the overall level of support for each browser. You can use Bootstrap to easily give your page a modern-looking styling: just download and unzip it, then include the provided CSS and JS files in your HTML – both come in vanilla and minified versions, depending on whether you want to snoop around or just drop them in. In the end, the skeleton of your support matrix will likely look like something along these lines:
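The skeleton markup originally shown here is missing. A sketch of what such a page could look like – the file paths and the results.js name are assumptions, adjust them to your own layout:

```html
<!DOCTYPE html>
<html>
<head>
    <title>Feature support matrix</title>
    <!-- Bootstrap styling; paths are placeholders -->
    <link rel="stylesheet" href="css/bootstrap.min.css">
</head>
<body>
    <div class="container">
        <h1>'sparkle' support matrix</h1>
        <!-- a script like results.js fetches the o=json data
             and fills in this table -->
        <table id="results" class="table table-bordered"></table>
    </div>
    <script src="js/jquery.min.js"></script>
    <script src="js/bootstrap.min.js"></script>
    <script src="js/results.js"></script>
</body>
</html>
```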

In the end

If you want to dive deeper into the code for the CSS Regions support matrix UI, here are a couple of tips to get you up to speed faster: it relies on Twitter’s Bootstrap for general styling and enhanced forms, while more specific styling is done using Sass. The logic for processing and displaying the Browserscope results is in the assets/js/results.js file, while the actual feature detects are in the assets/js/testregions.js file. The index.html file is a good starting point to get an idea of how things flow and are tied together.

Last but certainly not least, feel free to fork this project and write your own feature support matrices for your favorite bleeding-edge specs. Also, bug reports and pull requests are most welcome. Go sparkle!