What is the % code coverage on your project? I'm curious about the reasons why.

Is the dev team happy with it? If not, what stands in the way of increasing it?

Stuart Halloway is one whose projects aim for 100% (or else the build breaks!). Is anyone at that level?

We are at a painful 25% but aspire to 80-90% for new code. We have legacy code that we have decided to leave alone as it evaporates (we are actively rewriting it).

+3 A:

We run at 85% code coverage, but falling below it does not break the build. I think using code coverage as an important metric is a dangerous practice. Just because something is covered by a test does not mean the coverage is any good. We try to use it as guidance for the areas where we are weakly covered, not as a hard fact.

80% is the exit criterion for the milestone. If we don't make it through the sprint (even though we do plan the time up front), we add it during stabilization. We might grant an exception for a particular component or feature, but then we open a Pri 1 item for the next milestone.

During coding, code coverage is measured automatically on the daily build and the report is sent to the whole team. Anything that falls under 70% is yellow, under 50% is red. We don't fail the build currently, but we have a plan to add this in the next milestone.
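The thresholds above (yellow under 70%, red under 50%, optionally failing the build) can be sketched as a small gate script. This is a hypothetical illustration, not the poster's actual tooling: the report format and component names are invented, and only the threshold values come from the answer.

```python
# Hypothetical daily-build coverage gate. Thresholds (70% yellow, 50% red)
# come from the answer above; everything else is invented for illustration.

def classify(coverage_pct):
    """Map a coverage percentage to a status color."""
    if coverage_pct < 50:
        return "red"
    if coverage_pct < 70:
        return "yellow"
    return "green"

def gate(report, fail_build=False):
    """Classify each component; optionally fail the build if anything is red."""
    statuses = {name: classify(pct) for name, pct in report.items()}
    if fail_build and "red" in statuses.values():
        raise SystemExit("build failed: a component fell below 50% coverage")
    return statuses

# Example daily report with invented numbers:
print(gate({"parser": 82.0, "scheduler": 64.5, "legacy_io": 41.0}))
```

With `fail_build=True`, the same report would abort the build, which matches the plan described for the next milestone.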

Not sure what dev happiness has to do with unit testing. Devs are hired to build a quality product, and there should be a process to enforce minimum quality and a way to measure it. If somebody is not happy with the process, they are free to suggest another way of validating their code before it is integrated with the rest of the components.

Btw, we measure code coverage on automated scenario tests as well. Thus, we have three numbers: unit, scenario, and combined.

I often use code coverage under our automated test suite, but primarily to look for untested areas. We get about 70% coverage most of the time, and will never hit 100%, for two reasons:

1) We typically automate new functionality after the release; it is manually tested for its first release and hence not included in coverage analysis. Automation is primarily for functional regression in our case and is the best place to execute and tweak code coverage.

2) Fault injection is required to get 100% coverage, as you need to get inside exception handlers. This is difficult and time-consuming to automate. We don't currently do this and hence won't ever get 100%. James Whittaker's books on breaking software cover this subject well for anyone interested.
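To make the fault-injection point concrete, here is a minimal sketch in Python: the exception handler below can only execute if the underlying I/O call is forced to fail, which a mock can do without touching the filesystem. All names (`read_config`, the file path) are invented for illustration.

```python
# Hypothetical illustration of fault injection for coverage: the handler in
# read_config is unreachable unless open() is made to fail.
from unittest import mock

def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""  # without fault injection, this line never runs

# Inject the fault: patch open() so the exception handler is exercised.
with mock.patch("builtins.open", side_effect=OSError("disk error")):
    assert read_config("settings.ini") == ""
```

In languages without easy mocking, the same effect usually requires seams in the production code or OS-level fault injection, which is why this class of coverage is expensive to reach.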

It is also worth remembering that code coverage does not equate to test coverage, as is regularly discussed in threads such as this and this over on SQAforums. Thus 100% code coverage can be a misleading metric.

I agree with smacl. In our embedded automotive projects we aim for 80% coverage. If we have less than that, we need to investigate and report why we didn't reach it. Usually we don't test all failure cases, and some error handlers don't get exercised during our test cases.

That's a really good question. I can't speak for everyone else's code. In my own, the only uncovered statements are in methods that you're forced to override and that are used mainly for logging purposes.

re: bosses. That is a tough sell, Forser. Of course you and I know that testing == catching bugs early == less money spent dealing with problems. Because that is a "3-way" equation, it is often hard to convey to non-techies.

A project I did a couple of years ago achieved 100% line coverage, but I had total control over it, so I could enforce the target.
We now have an objective of covering 50% of new code, a figure that will rise in the near future, though we currently have no way to measure it. We will soon have tooling in place to measure code coverage on every nightly run of the unit tests, so I'm convinced our position will improve.