Is it feasible to expect 100% code coverage in heavy jQuery/Backbone.js web applications? Is it reasonable to fail a sprint because 100% coverage was not met, when actual code coverage hovers around 92%-95% in JavaScript/jQuery?

10 Answers

Realistic
If you have automated testing that has been shown to cover the entire code base, then insisting upon 100% coverage is reasonable.
It also depends upon how critical the project is: the more critical, the more reasonable it is to expect or demand complete code coverage.
It's easier to do this for small to medium-sized projects.

Unrealistic
You're starting at 0% coverage ...
The project is monstrous with many, many error paths that are difficult to recreate or trigger.
Management is unwilling to commit to, or invest in, making sure the coverage is there.

I've worked on the gamut of projects, ranging from no coverage to decent coverage. Never a project with 100%, but there were certainly times I wished we had closer to 100%.
Ultimately the question is whether the existing coverage covers enough of the required cases for the team to be comfortable shipping the product.

We don't know the impact of a failure on your project, so we can't say whether 92% or 95% is enough, or whether 100% is really required. Or, for that matter, whether that 100% fully tests everything you expect it to.

@BryanOakley ...and also your tests could be pointless, or not even test anything
– David_001, Jun 15 '12 at 13:58

@BryanOakley And even with 100% branch coverage, it's possible for a certain combination of branches to cause a problem. (Two sequential IF statements, for example, can each be branched into and around across separate tests while missing the test that enters both: full branch coverage, yet one execution path is never exercised.)
– Izkata, Jun 15 '12 at 19:55
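To make the sequential-IF point concrete, here is a minimal sketch with a hypothetical `classify` function (names and logic invented for illustration):

```javascript
// Hypothetical function with two independent, sequential branches.
function classify(x) {
  var label = "";
  if (x < 0) {          // branch A
    label += "negative ";
  }
  if (x % 2 === 0) {    // branch B
    label += "even";
  }
  return label.trim();
}

// Two tests, classify(-3) and classify(4), exercise every branch
// outcome (A taken, A skipped, B taken, B skipped), so a coverage
// tool reports 100% branch coverage. Yet the execution path where
// BOTH branches run, e.g. classify(-2), is never tested.
```

Branch coverage counts each decision in isolation; path coverage would require testing all four combinations.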

Even 100% branch coverage, including all execution paths is not enough. Maybe some error only happens when you take some combination of branches and you have some external input, say a malformed date. There is no possibility that all cases will ever be covered. At the same time, one can have a good confidence with less than 100% coverage but suitably chosen edge cases as input.
– Andrea, Jun 16 '12 at 15:55

In most cases, 100% code coverage means that you've "cheated" a little bit:

Complex, frequently changing parts of the system (like the gui) have been moved to declarative templates or other DSLs.

All code touching external systems has been isolated or handled by libraries.

The same goes for any other dependency, particularly the ones requiring side effects.

Basically, the difficult to test parts have been shunted to areas where they don't necessarily count as "code". It's not always realistic, but note that independent of helping you test, all of these practices make your codebase easier to work on.
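A minimal sketch of that "shunting" idea, with invented names: keep the logic pure and 100% coverable, and push the DOM work into a trivial glue function that barely counts as code.

```javascript
// 100% coverable: pure logic, no DOM, no network.
function formatUser(user) {
  return user.lastName.toUpperCase() + ", " + user.firstName;
}

// Deliberately trivial glue: just hands the formatted string to the
// element. Either left out of the coverage target, or covered by a
// single smoke test using a fake element object instead of jQuery.
function renderUser($el, user) {
  $el.text(formatUser(user));
}
```

The interesting behavior lives in `formatUser`, which a unit test can hit exhaustively; `renderUser` has no branches left to miss.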

It is unrealistic in theory and impractical in a business sense.

It is unrealistic with code that has high cyclomatic complexity. There are too many variables to cover every combination.

It is unrealistic with code that is heavily concurrent. The code is not deterministic, so you can't cover every condition that might happen, because behavior can change on every test run.

It is unrealistic in a business sense: it only really pays dividends to write tests for critical-path code, that is, code that is important and likely to change frequently.

Testing every line of code isn't a good goal

Tests are very expensive to write. They are code that must itself be written and tested, code whose intent must be documented, and code that must be maintained as business logic changes, with tests failing simply because they are out of date. Maintaining automated tests and their documentation can sometimes cost more than maintaining the code itself.

This is not to say that unit tests and integration tests aren't useful, but they are useful only where they make sense. Outside of industries where failure can kill people, it doesn't make sense to try to test every line of a code base; for everything else, it is impossible to show a positive return on the investment that 100% code coverage would entail.

100% unit-test code coverage for every piece of an application is a pipe dream, even with new projects. I wish it were the case, but sometimes you just cannot cover a piece of code, no matter how hard you try to abstract away external dependencies. For example, say your code has to invoke a web service. You can hide the call behind an interface so you can mock that piece and test the business logic before and after the web service. But the code that actually invokes the web service cannot be unit tested (very well, anyway). Another example: connecting to a TCP server. You can hide the connecting code behind an interface too, but the code that physically connects cannot be unit tested, because if the server is down for any reason the test fails; and unit tests should always pass, no matter when they are invoked.
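A sketch of the interface-and-mock approach described above, with hypothetical names (`makeOrderTotaler`, `taxService` and its `getRate` method are invented for illustration):

```javascript
// The business logic depends on an injected "service" object rather
// than calling the web service directly.
function makeOrderTotaler(taxService) {
  return function total(order) {
    var subtotal = order.items.reduce(function (sum, item) {
      return sum + item.price;
    }, 0);
    // The only part touching the network is taxService.getRate():
    // in production it would wrap an AJAX call; in tests it's a stub.
    return subtotal * (1 + taxService.getRate(order.region));
  };
}

// In a unit test, the network never comes up:
var fakeTaxService = { getRate: function () { return 0.25; } };
var total = makeOrderTotaler(fakeTaxService);
```

All of the arithmetic is now fully coverable; only the thin production wrapper around the real web service remains untestable in isolation.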

A good rule of thumb: all of your business logic should have 100% code coverage, while the pieces that invoke external components should get as close to 100% as practical. If you cannot reach it, I wouldn't sweat it too much.

Much more important: are the tests correct? Do they accurately reflect your business and its requirements? Having code coverage just to have code coverage means nothing if all you're doing is testing incorrectly, or testing incorrect code. That said, if your tests are good, then 92-95% coverage is outstanding.

Testing what happens when you get strange combinations of error cases and failure-to-responds can be exceptionally tricky.
– Donal Fellows, Jun 14 '12 at 21:33

Isn't understanding what your system will do when presented with these tricky problems part of the appeal of unit testing? Also, there's a bit of confusion here between unit tests and integration tests.
– Peter Smith, Jun 15 '12 at 0:38

I'd say that unless the code is designed with the specific goal of allowing 100% test coverage, 100% may not be achievable. One reason: if you code defensively, which you should, you will sometimes have code that handles situations you're sure shouldn't or can't happen, given your knowledge of the system. Covering such code with tests is, by definition, very hard. Not having such code can be dangerous: what if you're wrong and the situation does happen one time out of 256? What if a change in an unrelated place makes the impossible possible?

So 100% may be rather hard to reach by "natural" means. For example, if you have code that allocates memory and code that checks whether the allocation failed, then unless you mock out the memory manager (which may not be easy) and write a test that returns "out of memory", covering that check will be difficult. For a JavaScript application, the equivalent is defensive coding around possible DOM quirks in different browsers, possible failures of external services, and so on.
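For the JavaScript case, a minimal sketch of such a defensive branch (the function and element id are hypothetical):

```javascript
// Defensive guard: this branch "can't happen" if the page is
// well-formed, so no natural test reaches it, and a coverage tool
// will flag it unless the broken state is deliberately faked.
function getWidgetLabel(doc) {
  var el = doc.getElementById("widget-label");
  if (!el) {
    // Only reachable if the markup is broken or a browser quirk
    // removed the node. Covering it requires a stubbed document.
    return "(missing)";
  }
  return el.textContent;
}

// Covering the defensive branch means faking the failure:
var brokenDoc = { getElementById: function () { return null; } };
```

Passing a stub object in place of `document` is one way to force the "impossible" state; whether that effort is worth the last few percent of coverage is exactly the judgment call this answer describes.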

So I would say one should strive for being as close to 100% as possible and have a good reason for the delta, but I would not see not getting exactly 100% as necessarily failure. 95% can be fine on a big project, depending on what the 5% are.

If you are starting out with a new project, and you are strictly using a test-first methodology, then it is entirely reasonable to have 100% code coverage in the sense that all of your code will be invoked at some point when your tests are executed. You may not, however, have explicitly tested every individual method or algorithm directly, due to method visibility, and in some cases you may not have tested some methods even indirectly.

Getting 100% of your code tested is potentially a costly exercise, particularly if you haven't designed your system to allow it. And if you focus your design efforts on testability, you are probably not giving enough attention to designing the application to meet its specific requirements, particularly on a large project. I'm sorry, but you simply can't have it both ways without compromising something.

If you are introducing tests to an existing project where testing has not been maintained or included before, then it is impossible to get 100% code coverage without the cost of the exercise outweighing the benefit. The best you can hope for is to provide test coverage for the critical sections of code that are called the most.

Is it reasonable to fail a sprint due to 100% coverage not being met when actual code coverage hovers around 92%-95% in javascript/jquery?

In most cases I would say you should only consider your sprint to have 'failed' if you haven't met your goals. Actually, I prefer not to think of such sprints as failing, because you need to learn from sprints that don't meet expectations in order to plan better the next time you define one. Regardless, I don't think it's reasonable to treat code coverage as a factor in the relative success of a sprint. Your aim should be to do just enough to get everything working as specified, and if you are coding test-first, you should feel confident that your tests support this aim. Any additional testing you feel you need to add is effectively sugar-coating: an added expense that can hold you up in completing your sprints satisfactorily.

I don't do this as a matter of course, but I have done it on two large projects. If you've got a framework for unit tests set up anyway, it's not hard exactly, but it does add up to a lot of tests.

Is there some particular obstacle that is preventing you from hitting those last few lines? If not, and getting from 95% to 100% coverage is straightforward, you might as well go do it. Since you're here asking, I'm going to assume that there is something. What is that something?

No, it's not possible, and it never will be. If it were possible, all of mathematics would collapse into finitism. For example, how would you test a function that takes two 64-bit integers and multiplies them? This has always been my problem with testing versus proving a program correct. For anything but the most trivial programs, testing is basically useless, as it only covers a small number of cases. It's like checking 1,000 numbers and saying you've proved the Goldbach conjecture.
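The arithmetic behind the multiplication example can be sketched directly (the throughput figure is an invented, generous assumption):

```javascript
// Two 64-bit inputs give 2^64 * 2^64 = 2^128 distinct input pairs.
const pairs = 2n ** 128n;

// Assume (generously) a billion tests per second:
const seconds = pairs / 1000000000n;
const years = seconds / 31536000n; // approx. seconds per year

// years comes out on the order of 10^22: exhaustive testing is not an
// option; one can only sample edge cases (0, 1, -1, MIN, MAX,
// overflow boundaries) and hope they are representative.
```

This is exactly why the answer distinguishes testing from proof: a proof quantifies over all 2^128 pairs at once, while a test suite can only ever visit a vanishing fraction of them.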

Oh! So somebody is upset that I didn't answer the problem on the plane of its conception; testing is a waste… I don't care if it's popular. It will never work. It cannot. The smartest computer scientists have known this (Dijkstra, Knuth, Hoare et al.). I guess if you're a JavaScript programmer huffing eXtreme Programming then you don't care about those cranks, though. Blah, whatever, who cares… write crappy code. Waste CO₂ running your tests. I mean, who has time to sit and think anymore? We've exported that to the computer.
– veryfoolish, Jun 17 '12 at 17:22

The question is tagged "TDD". TDD is more a design tool and a problem exploration tool than a testing one, and each "test" is just an example of how the code will behave in some context, so that people can read and understand what's going on, then change it safely. TDD done well tends to lead to cleaner, easier-to-use code, and running the tests just checks that the documentation is current. Most TDD suites hardly ever catch bugs; it's not what they're there for. I think you're being downvoted because your answer betrays that lack of understanding, and I hope this comment helps with that.
– Lunivore, Aug 20 '12 at 13:30