LessThanDot


I’ve been playing around lately with a pure command-line Jasmine runner that doesn’t rely on a SpecRunner file to run tests. I work daily with a largish application that is well over 100K lines of front-end code and more than 7,000 front-end tests. As the codebase and test count have grown, our Continuous Integration environment has steadily slowed. While build servers like Jenkins and TeamCity provide some analytics around slow tests, there is still digging involved to identify the best targets for improvement, something I’m hoping a local runner can make easier.

I’ve taken a very small project I’ve used in prior posts on Karma and WallabyJS and written a reusable Jasmine console runner, relying on Phantom 2. It builds a set of statistics as it runs and tries to identify the slowest tests in the set, all without pushing anything to a remote server.

What are my slowest tests?

My test project is small enough that I won’t learn much that’s new, but it’s big enough to serve as an example.

I have Phantom installed locally and in my path, so running the tests is just a matter of invoking phantomjs with the runner script and the list of spec files.

The sample code uses RequireJS, so I’m passing in an array of specs that will be used in a define statement prior to running Jasmine.
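A sketch of that pattern, with assumed names: the runner wraps the passed-in spec array in an asynchronous module load, then starts Jasmine once every spec module is available. Here `loadSpecs` stands in for RequireJS’s `require([...], callback)` so the example runs anywhere; the real runner’s internals may differ.

```javascript
// `loadSpecs` is a stand-in for RequireJS's async `require([...], callback)`;
// `startJasmine` is a stand-in for kicking off the Jasmine env.
function runSpecs(specList, loadSpecs, startJasmine) {
  loadSpecs(specList, function () {
    // All spec modules are loaded; Jasmine can now execute them.
    startJasmine();
  });
}

// Usage with stubbed dependencies:
var loaded = [];
runSpecs(
  ['spec/tileSpec', 'spec/treeSpec'],
  function (specs, done) { loaded = specs.slice(); done(); },
  function () { console.log('starting jasmine with ' + loaded.length + ' specs'); }
);
```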

The results from running this look like:

jasmine started
suiteDone [0.003s,10/10] : compass
suiteDone [0.175s,19/19] : tile
suiteDone [0.003s,8/8] : tree
suiteDone [0.007s,31/31] : weather
68/68 specs in 0.203s
-----------------------------------------------------------------
68 tests passed in 0.186s
Average Time: 0.003s
Standard Deviation: 0.004s
28% (19) of the tests account for 90% of the overall time.
15% (10) of the tests account for 50% of the overall time.
-----------------------------------------------------------------
Slowest Tests:
[ 0.014s]: tile -> getEvaporationAmount -> should be 0 if there are no trees and the terrain doesn't have dry evaporation
[ 0.010s]: tile -> getEvaporationAmount -> should be the terrain's evaporation if there are no trees
[ 0.010s]: tile -> canSupportAdditionalTrees -> should support additional trees if there is enough average rainfall for grass, existing trees, and a new tree
[ 0.009s]: tile -> onGrow -> should provide full amount of water to trees if available after watering the terrain
[ 0.009s]: tile -> canSupportDryGrass -> should be able to support dry grass when there is enough averageRainfall available
[ 0.009s]: tile -> canSupportGrass -> should be able to support grass when there is enough averageRainfall available
[ 0.009s]: tile -> getPlantConsumptionAmount -> should be 0 when the terrain doesn't require any water and there are no trees
[ 0.009s]: tile -> canSupportAdditionalTrees -> should not support a tree if there is not enough average rainfall for grass and new tree
[ 0.009s]: tile -> onGrow -> should evenly split remainder of water if there is not enough left after watering the terrain

So from the top:

Top-level suite names are shown as each suite completes, which gives useful feedback on larger codebases

An X/Y specs in Z time overview shows how long the test run took and how many specs passed

Stats: general statistics on just the passing tests, plus the number of tests responsible for 50% and 90% of the runtime

Spec list: the tests responsible for 50% of the runtime, ordered by runtime descending
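Those summary statistics are straightforward to compute from the per-spec timings. A sketch of one way to do it (the function and field names are illustrative, not the actual reporter’s internals):

```javascript
// Given an array of per-spec durations (seconds), compute the summary stats:
// total, mean, standard deviation, and how many of the slowest tests it takes
// to reach 50% and 90% of the overall time.
function summarize(durations) {
  var sorted = durations.slice().sort(function (a, b) { return b - a; });
  var total = sorted.reduce(function (s, d) { return s + d; }, 0);
  var mean = total / sorted.length;
  var variance = sorted.reduce(function (s, d) {
    return s + Math.pow(d - mean, 2);
  }, 0) / sorted.length;

  // Walk the slowest-first list until the running total crosses the threshold.
  function countFor(fraction) {
    var running = 0, count = 0;
    while (running < total * fraction) running += sorted[count++];
    return count;
  }

  return {
    total: total,
    average: mean,
    stdDev: Math.sqrt(variance),
    for50: countFor(0.5),
    for90: countFor(0.9)
  };
}

// e.g. one slow test dominating three fast ones:
var stats = summarize([0.014, 0.010, 0.001, 0.001]);
```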

There are several things we learn from this run:

A small number of tests (15%) account for half of the overall time (the Pareto principle in action)

All of those tests belong to the same top-level suite

There is ~0.017s of difference between the total time from the individual specs and the overall run time

I don’t know whether that ~0.017s is normal, but I’ve seen some fairly large numbers sneak out of other codebases where beforeEach logic was set at entirely too high a level, code was running outside the specs, and so on. In this case it’s low enough that I wouldn’t focus on it. My first stop would be seeing what is going on with the tile class and suite, since that feels like a systematic issue across the whole suite rather than an individual test issue.

How it works (and how to re-use it)

This runner is not ready for drop-in use with another project, but it’s also not that far off.

In a nutshell, the script opens a Phantom page with minimal HTML and no URL. It then injects Jasmine, a Jasmine bootloader for RequireJS, a custom console runner, RequireJS, a RequireJS configuration, and finally a script that require()’s the passed-in spec list before calling window.execute to run the tests.
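The injection order matters, since each file depends on the ones before it. A sketch of that sequence, with a stubbed `page` so the example runs outside Phantom (in the real runner `page` would be a PhantomJS WebPage, and the file names here are illustrative):

```javascript
// Stub of PhantomJS's page.injectJs, which loads a local file into the page
// context and returns true on success.
var injected = [];
var page = { injectJs: function (file) { injected.push(file); return true; } };

[
  'lib/jasmine.js',          // Jasmine itself
  'lib/boot-requirejs.js',   // Jasmine bootloader for RequireJS
  'lib/console-reporter.js', // custom console runner
  'lib/require.js',          // RequireJS
  'src/require-config.js'    // RequireJS configuration
].forEach(function (file) { page.injectJs(file); });
```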

The custom console runner captures results from the tests and passes them back via console.log, which the outer Phantom script captures for processing. The top-level suite output flows out as each suite finishes, but the stats wait until the full run is complete, to minimize the delays that show up when you interact with the console too frequently.
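A minimal sketch of a reporter in that spirit: the hook names are Jasmine’s standard reporter interface, but the output format and the `log` parameter are illustrative, not the actual console-reporter’s API. Per-spec results are buffered so the summary can be emitted once at the end, while suite-level lines flow out immediately.

```javascript
function ConsoleReporter(log) {
  var specs = [];
  return {
    jasmineStarted: function () { log('jasmine started'); },
    specDone: function (result) {
      specs.push(result); // buffer per-spec results for end-of-run statistics
    },
    suiteDone: function (result) {
      log('suiteDone : ' + result.description); // immediate suite feedback
    },
    jasmineDone: function () {
      var passed = specs.filter(function (s) { return s.status === 'passed'; });
      log(passed.length + '/' + specs.length + ' specs');
    }
  };
}

// Driving it with fake results:
var lines = [];
var reporter = ConsoleReporter(function (msg) { lines.push(msg); });
reporter.jasmineStarted();
reporter.specDone({ status: 'passed' });
reporter.specDone({ status: 'failed' });
reporter.suiteDone({ description: 'tile' });
reporter.jasmineDone();
```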

Customizing this for other projects is relatively easy, and I’ll probably work on making it easier to reuse as I have more time. Right now the main things you need to do are:

Replace the jasmine paths with ones that make sense for your project

Replace the “inject additional required files” section with the additional dependencies you need

Update the “execute provided spec list” section to match how you run tests

You will also want to download the runner, console-reporter, and Jasmine bootloader from GitHub.

For instance, if you are using a basic SpecRunner.html file with the spec and source files listed in script tags, you could drop those into the “inject additional required files” section and replace the “execute provided spec list” section with a single call to window.executeTests().

About the Author

My roles have included accidental DBA, lone developer, systems architect, team lead, VP of Engineering, and general troublemaker. On the technical front I work in web development, distributed systems, test automation, and devop-sy areas like delivery pipelines and integration of all the auditable things.