We have reached the point in our project where we have almost a thousand tests, and people have stopped bothering to run them before a check-in because it takes so long. At best they run the tests relevant to the piece of code they changed, and at worst they simply check it in without testing.

I believe this problem is due to the fact that the solution has grown to 120 projects (we usually do much smaller projects, and this is only the second time we have done TDD properly) and the build + test time has grown to about two to three minutes on the slower machines.

How do we lower the run time of the tests? Are there techniques? Should we fake more? Fake less? Maybe the bigger integration tests shouldn't run automatically when running all the tests?

Edit: as a response to several of the answers: we already use CI and a build server, which is how I know the tests fail. The problem (actually a symptom) is that we keep getting messages about failed builds. Running partial tests is something that most people do, but not all. And regarding the tests, they are actually pretty well made; they use fakes for everything and there is no IO at all.

You already implied the solution in your question: only run the tests that are relevant to the piece of code that was changed. Run the entire test suite periodically, as part of the QA/Release cycle. That said, 2 to 3 minutes doesn't sound like a lot of time, so it is possible your developer team is checking in things too frequently.
–
Robert Harvey, Jan 25 '13 at 16:50


First benchmark, to figure out where the performance cost comes from. Are there a few expensive tests, or is it the sheer amount of tests? Are certain setups expensive?
–
CodesInChaos, Jan 25 '13 at 17:20


Damn, I wish our tests were only 2-3 minutes. To run all our unit tests, it takes 25 minutes - and we don't have any integration tests yet.
–
Izkata, Jan 25 '13 at 19:46

11 Answers

A possible solution would be to move the testing portion from the development machines to a continuous integration setup (Jenkins, for example) using version control software of some flavor (git, svn, etc.).

When new code has to be written, the developer creates a branch in the repository for whatever they are doing. All work is done in this branch, and they can commit their changes to it at any time without messing up the main line of code.

When the feature, bug fix, or whatever else they are working on has been completed, that branch can be merged back into the trunk (or however you prefer to do it), where all unit tests are run. If a test fails, the merge is rejected and the developer is notified so they can fix the errors.

You can also have your CI server run the unit tests on each feature branch as commits are made. This way the developer can make some changes, commit the code, and let the server run the tests in the background while they continue to work on additional changes or other projects.

This. If the developers "have stopped bothering with running them (the unit tests) before doing a check in", then you want your CI setup to be running them after a check in.
–
Carson63000, Jan 25 '13 at 19:45

+1: A further improvement would be to modularize the tests. If a specific module/file has not changed since the last run, there is no reason to re-run the tests that are responsible for testing it. Sort of like a makefile not recompiling everything just because one file has changed. This may require some work but will probably give you cleaner tests as well.
–
Leo, Jan 25 '13 at 23:06

Will the branching methodology work with TFS? We write C# with TFS and branching in TFS is less friendly than in git. I believe this idea will even be rejected since we never do branching.
–
Ziv, Jan 26 '13 at 10:23

I have no personal experience working with TFS; however, I was able to come across this guide from Microsoft which seems to show a similar branching strategy to the one in the post: msdn.microsoft.com/en-us/magazine/gg598921.aspx
–
Mike, Jan 30 '13 at 22:15

The majority of unit tests should take under 10 milliseconds or so each. Having 'almost a thousand tests' is nothing; they should take maybe a few seconds to run.

If they're not, then you should stop writing highly coupled integration tests (unless that's what the code needs) and start writing good unit tests (starting with well-decoupled code and proper usage of fakes/mocks/stubs/etc.). That coupling will impact test quality and the time it takes to write them too, so it's not just a matter of reducing test run time.
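For illustration, here is a minimal sketch (in C# with NUnit; the interface, fake, and class names are all invented for the example) of the kind of decoupled unit test that stays fast because it talks only to an in-memory fake:

    using NUnit.Framework;

    // The collaborator sits behind an interface, so the test never touches a real gateway.
    public interface IPaymentGateway
    {
        bool Charge(decimal amount);
    }

    // Hand-rolled fake: no IO and no network, so the test stays fast and isolated.
    public class FakePaymentGateway : IPaymentGateway
    {
        public decimal? ChargedAmount { get; private set; }

        public bool Charge(decimal amount)
        {
            ChargedAmount = amount;
            return true;
        }
    }

    public class CheckoutService
    {
        private readonly IPaymentGateway gateway;

        public CheckoutService(IPaymentGateway gateway)
        {
            this.gateway = gateway;
        }

        public bool Checkout(decimal total)
        {
            return total > 0 && gateway.Charge(total);
        }
    }

    [TestFixture]
    public class CheckoutServiceTests
    {
        [Test]
        public void Checkout_ChargesTheGatewayWithTheOrderTotal()
        {
            var gateway = new FakePaymentGateway();
            var service = new CheckoutService(gateway);

            bool accepted = service.Checkout(42.50m);

            Assert.IsTrue(accepted);
            Assert.AreEqual(42.50m, gateway.ChargedAmount);
        }
    }

A thousand tests in this style typically finish in a few seconds, because nothing ever waits on a disk, a database, or the network.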

Well, you probably shouldn't stop writing integration tests and other non-unit automated tests, as they are useful in their own right. You just shouldn't confuse them with unit tests, and keep them separate, partly because they're slower.
–
delnan, Jan 25 '13 at 17:25


You're correct that these seem to be integration tests.
–
Tom Squires, Jan 25 '13 at 17:31


This answer is not productive. Firstly, it sets an unreasonable expectation. There are overheads in the unit testing framework itself; that each test takes less than a millisecond does not imply a thousand tests must take less than a few seconds. That the OP's entire test suite finishes in 2-3 minutes is a very good sign, by most measures.
–
rwong, Jan 26 '13 at 8:47


@rwong - sorry, I call bullshit. The metric I got was from running the two different professional projects available to me: one with ~300 tests, one with ~30000 tests and looking at the test runtimes. A test suite taking 2-3 minutes for <1000 tests is atrocious and a sign that the tests are not sufficiently isolated.
–
Telastyn, Jan 26 '13 at 13:19


@rwong In the same vein as Telastyn, here's a data point from me: even with quite a few larger-than-ideal tests, the test framework (py.test) doing tons of magic in the background, and everything being pure Python code ("100x slower than C"), running the circa 500 tests in a project of mine takes less than 6 seconds on a several-years-old, slow netbook. This figure is roughly linear in the number of tests; while there is some start-up overhead, it is amortized over all tests, and the per-test overhead is O(1).
–
delnan, Jan 26 '13 at 15:43

Check execution times, find the slowest tests, and then analyze why they take so long to execute.

You have 120 projects; maybe you do not need to build and test all of them every time. Could you run the full unit test suite only in nightly builds? Create several 'fast' build configurations for daily use, so the CI server runs only the limited set of test projects related to the 'hot' parts of your current development.

Where it isn't possible to isolate such operations, maybe what you have are integration tests? Maybe you could schedule those for nightly builds only (see the sketch at the end of this answer).

Check for singletons that keep references to instances/resources and consume memory; these can lead to performance degradation when running the whole suite.

In addition, you could use the following tools to make your life easier and the tests run faster:

Gated commit: some CI servers can be configured to build and test before the code is committed to the source repository. If someone checks in code without running all the tests beforehand and it contains failing tests, the check-in is rejected and returned to the author.

Configure the CI server to execute tests in parallel, using several machines or processes. Examples are pnunit and a CI configuration with several build nodes.

A continuous testing plug-in for developers, which automatically runs the tests while they write code.
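To illustrate the 'fast' configuration idea above, here is a rough sketch using NUnit categories (the class names and the "Nightly" category are assumptions, not something from the question). The daily configuration excludes the category; the nightly build runs everything:

    using NUnit.Framework;

    // Hypothetical production class, included only to keep the sketch self-contained.
    public class PriceCalculator
    {
        public decimal ApplyDiscount(decimal price, int percent)
        {
            return price - price * percent / 100m;
        }
    }

    // Fast, isolated test: included in every build configuration.
    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void ApplyDiscount_ReducesPriceByPercentage()
        {
            var calculator = new PriceCalculator();
            Assert.AreEqual(90m, calculator.ApplyDiscount(100m, 10));
        }
    }

    // Slower test, tagged so the daily 'fast' configuration can exclude the
    // category and the nightly build can include it.
    [TestFixture]
    [Category("Nightly")]
    public class FullCatalogRecalculationTests
    {
        [Test]
        public void RecalculatingTheWholeCatalog_ProducesConsistentTotals()
        {
            Assert.Pass(); // placeholder for a long-running scenario
        }
    }

Most test runners (the NUnit console runner included) can include or exclude tests by category, so the split costs nothing beyond the attribute.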

If they're not running the tests, it means they perceive the cost (waiting for the tests to run, dealing with false failures) to be greater than the value (catching bugs right away). Decrease the costs, increase the value, and people will run the tests all the time.

1. Make your tests 100% reliable.

If you ever have tests that fail with false negatives, deal with that right away. Fix them, change them, eliminate them, whatever it takes to guarantee 100% reliability. (It's OK to have a set of unreliable, but still useful tests that you can run separately, but the main body of tests must be reliable.)

2. Change your systems to guarantee that all tests pass all the time.

Use continuous integration systems to ensure that only passing commits get merged in to the main/official/release/whatever branch.

3. Change your culture to value 100% passing tests.

Teach the lesson that a task isn't "done" until 100% of tests pass and it has been merged in to the main/official/release/whatever branch.

4. Make the tests fast.

I have worked on projects where tests take a second, and on projects where they take all day. There is a strong correlation between the time it takes to run tests and my productivity.

The longer tests take to run, the less often you'll run them. That means you'll go longer without getting feedback on the changes you're making. It also means you'll go longer between commits. Committing more often means smaller steps that are easier to merge; commit history is easier to follow; finding a bug in the history is easier; rolling back is easier, too.

Imagine tests that run so fast that you don't mind automatically running them every time you compile.

Making tests fast can be hard (that's what the OP asked, right!). Decoupling is key. Mocks/fakes are OK, but I think you can do better by refactoring to make mocks/fakes unnecessary. See Arlo Belshee's blog, starting with http://arlobelshee.com/post/the-no-mocks-book.
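As one hedged illustration of refactoring so that mocks become unnecessary (all the names here are invented): pull the business rule out into a pure function, and the test no longer needs a mock at all:

    using NUnit.Framework;

    // Pure domain logic: no repository, no clock, and therefore no mock needed to test it.
    public static class DiscountPolicy
    {
        public static decimal DiscountFor(int ordersPlaced, decimal orderTotal)
        {
            if (ordersPlaced >= 10 && orderTotal >= 100m)
            {
                return orderTotal * 0.05m;
            }
            return 0m;
        }
    }

    [TestFixture]
    public class DiscountPolicyTests
    {
        [Test]
        public void LoyalCustomerWithLargeOrder_GetsFivePercentOff()
        {
            Assert.AreEqual(5m, DiscountPolicy.DiscountFor(ordersPlaced: 12, orderTotal: 100m));
        }

        [Test]
        public void NewCustomer_GetsNoDiscount()
        {
            Assert.AreEqual(0m, DiscountPolicy.DiscountFor(ordersPlaced: 1, orderTotal: 100m));
        }
    }

The thin shell that loads the order from the repository and applies the discount still exists, but it becomes trivial enough that one or two integration tests cover it.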

5. Make tests useful.

If the tests don't fail when you screw up, then what's the point? Teach yourselves to write tests that will catch the bugs you're likely to create. This is a skill unto itself, and will take lots of attention.

STRONGLY agree, particularly points 3 & 1. If developers aren't running tests, then the tests are broken, the environment is broken, or both. Point 1 is the minimum. False fails are worse than missing tests, because people learn to accept fails. Once failure is tolerated, it spreads, and it takes a mighty effort to get back to 100% passing and EXPECTING 100% passing. Start fixing this today.
–
Bill IV, Sep 7 '14 at 19:29

Train your developers on the Personal Software Process (PSP), helping them understand and improve their performance by using more discipline. Writing code has nothing to do with slamming your fingers on a keyboard and afterwards pressing a compile and check-in button.

PSP used to be very popular in the past, when compiling code was a process that took a lot of time (hours or days on a mainframe, since everybody had to share the compiler). But when personal workstations became more powerful, we all came to accept this process:

type some code without thinking

hit build/compile

fix your syntax to make it compile

run tests to see if what you wrote actually makes sense

If you think before you type, and then review what you wrote after you typed it, you can reduce the number of errors before you run a build and the test suite. Learn not to press build 50 times a day but maybe once or twice; then it matters less that your build and test run takes a few minutes more.

A couple of minutes is OK for unit tests. However, keep in mind that there are three major types of tests:

Unit tests -- test each "unit" (class or method) independently of the rest of the project

Integration tests -- test the project as a whole, usually by making calls into the program. Some projects I've seen combine this with regression tests. There is significantly less mocking here than in unit tests

Regression tests -- test the completed project as a whole, as if the test suite were an end user. If you have a console application, you would use the console to run and test the program. You never expose internals to these tests, and any end user of your program should (in theory) be able to run your regression test suite (even though they never will)

These are listed in order of speed. Unit tests should be quick. They won't catch every bug, but they establish that the program is decently sane. Unit tests should run in 3 minutes or less on decent hardware. You say you only have 1000 unit tests and they take 2-3 minutes? Well, that's probably OK.

Things to check:

Make sure that your unit tests and integration tests are separate, though. Integration tests will always be slower.

Ensure that your unit tests are running in parallel. There is no reason for them not to if they are true unit tests (see the sketch at the end of this answer).

Ensure your unit tests are "dependency free". They should never access a database or the filesystem

Other than that, your tests don't sound too bad right now. However, for reference, one of my friends on a Microsoft team has 4,000 unit tests that run in under 2 minutes on decent hardware (and it's a complicated project). It's possible to have fast unit tests. Eliminating dependencies (and mocking only as much as needed) is the main thing to get speed.
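As a sketch of the parallel-execution point: with NUnit 3 (or any runner that supports it), parallelism can be switched on at the assembly level. The attributes below are NUnit 3 specific and will differ in other frameworks:

    // Put this in any file in the unit test project (e.g. AssemblyInfo.cs).
    using NUnit.Framework;

    // Let test fixtures in this assembly run in parallel with each other.
    [assembly: Parallelizable(ParallelScope.Fixtures)]

    // Use up to four worker threads; this is only safe because the tests share no
    // state, touch no database, and write no files (the "dependency free" point above).
    [assembly: LevelOfParallelism(4)]

If the tests really are dependency free, this alone can divide the wall-clock time by roughly the number of cores available.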

One possible way: split your solution. If a solution has 100 projects, then it's quite unmanageable. Just because two projects (say A and B) use some common code from another project (say Lib) doesn't mean they have to be in the same solution.

Instead, you can create solution A with projects A and Lib and also solution B with projects B and Lib.

As to tests that talk to servers: If it's talking to a server, it's not really a unit test, it's something higher. If I were you, I'd separate out the unit tests (which should run quick) and at least run those before every commit. That way you'll at least get the quick stuff (things that don't need to talk to the server) out of the way before code is committed.
–
Michael Kohne, Jan 25 '13 at 17:47

@MichaelKohne I knew someone would spot it. I know they are not exactly unit tests, but they serve the same purpose; it's only about how you name them.
–
Sulthan, Jan 25 '13 at 18:10


Mostly it's about how you name them, but it's good to keep the difference in mind (whatever name you use). If you don't differentiate, then (in my experience) the devs have a tendency to just write higher-level tests, at which point you don't get tests forcing you to be sensible about your abstractions and coupling.
–
Michael Kohne, Jan 25 '13 at 19:09

Though your description of the problem does not give a thorough insight into the codebase, I think I can safely say your problem is two-fold.

Learn to write the right tests.

You say you have almost a thousand tests, and you have 120 projects. Assuming that at most half of those projects are test projects, you have 1000 tests to 60 production code projects. That gives you about 16-17 tests per project!!!

That is probably the number of tests I would need to cover about 1-2 classes in a production system. So unless you only have 1-2 classes in each project (in which case your project structure is too fine-grained), your tests are too big; they cover too much ground. You say this is only the second time you are doing TDD properly. I say the numbers you present indicate that you are not doing TDD properly.

You need to learn to write the right tests, which probably means that you need to learn how to make the code testable in the first place. If you cannot find the experience inside the team to do that, I would suggest hiring help from the outside, e.g. in the form of one or two consultants helping your team over a duration of 2-3 months to learn to write testable code and small, minimal unit tests.

As a comparison, on the .NET project that I am currently working on, we can run roughly 500 unit tests in less than 10 seconds (and that was not even measured on a high-spec machine). If those were your figures, you would not be afraid to run them locally every so often.

Learn to manage the project structure.

You have divided the solution into 120 projects. That is, by my standards, a staggering number of projects.

So if it does make sense to have that number of projects (which I have a feeling it doesn't, but your question does not provide enough information to make a qualified judgement), you need to divide the projects into smaller components that can be built, versioned, and deployed separately. Then, when a developer runs the unit test suite, he/she only needs to run the tests relating to the component he/she is currently working on. The build server should take care of verifying that everything integrates correctly.

But splitting up a project into multiple components that are built, versioned, and deployed separately requires, in my experience, a very mature development team, a team that is more mature than I get the feeling your team is.

But at any rate, you need to do something about the project structure. Either split the projects into separate components, or start merging projects.

Ask yourself if you really need 120 projects?

p.s. You might want to check out NCrunch. It's a Visual Studio plug-in that runs your tests automatically in the background.

Can your test environment run anywhere? If it can, use cloud computing to run the tests. Split the tests among N virtual machines. If the time to run the tests on a single machine is T1 seconds, then the time to run them split up, T2, could approach T2=T1/N. (Assuming each test case takes about the same amount of time.)
And you only have to pay for the VMs when you're using them. So you don't have a bunch of test machines sitting in some lab somewhere 24/7.
(I'd love to be able to do this where I work, but we're tied to specific hardware. No VMs for me.)
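A minimal sketch of one way to do the split (the partitioning scheme and names are assumptions, not a prescription): give each VM an index and let it run only the fixtures that hash into its bucket.

    using System;

    public static class TestPartitioner
    {
        // Deterministically assign a test fixture to one of N machines by name, so
        // every machine runs roughly 1/N of the suite and no test runs twice.
        public static bool BelongsToThisMachine(string fixtureName, int machineIndex, int machineCount)
        {
            // Simple, stable hash so every machine computes the same assignment.
            int hash = 0;
            foreach (char c in fixtureName)
            {
                hash = unchecked(hash * 31 + c);
            }
            int bucket = ((hash % machineCount) + machineCount) % machineCount;
            return bucket == machineIndex;
        }
    }

    // Example: VM number 2 of 4 runs only the fixtures where
    // TestPartitioner.BelongsToThisMachine(fixtureName, 2, 4) returns true.

This assumes test cases take about the same amount of time, as noted above; if a few fixtures dominate, you would partition by measured duration instead.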

JUnit tests are normally supposed to be quick, but some of them simply take time to execute.

For example, database tests usually take some time to initialize and finish.

If you have hundreds of tests, they take a lot of time to run simply because of their number, even if each one is fast.

What can be done is:

1) Identify the crucial tests: those for the most important parts of the libraries and those most likely to fail after changes. Only those tests should always run on compile. If some code is often broken, its tests should be obligatory even if they take long to execute; on the other hand, if some part of the software has never caused a problem, you can safely skip its tests on each build.

2) Set up a continuous integration server that runs all tests in the background. It's up to you whether you build every hour or after every commit (the latter makes sense only if you want to automatically detect whose commit caused trouble).