On Wed, Mar 14, 2007 at 05:18:31PM +0100, Luis Motta Campos wrote:
> On Mar 14, 2007, at 4:39 PM, Dave Hodgkinson wrote:
> >On 14 Mar 2007, at 16:18, Luis Motta Campos wrote:
> >> Maybe some of you can tell me good (or bad) impressions about
> >>Devel::Cover.
> >Used heavily at $company-1 to generate pretty tables of coverage.
> >What you *do* with the info is another story...
> What is advisable to *do* with this info?
> Is there any problem in using this info as a guideline for
> programmer time investment on testing improvements?
I think that's definitely one thing you can do with it. If you need to
persuade your management that programmer time spent on testing is a good
thing, you need metrics for quality. Devel::Cover can help you measure
your test coverage, and thus show you the areas where your testing needs
to be improved or modified.
Software engineering involves QA. Coverage analysis with a pretty,
clickable report like the one Devel::Cover generates can help you in
several ways:
- ensure your tests cover your code
- show you which methods/branches/conditionals are naked
- show you which _other_ classes your tests reach into
The last one is a big win for me. Unit testing means testing units of
code in isolation. If I'm testing my LinkFactory class, I mock my
various Link objects, to ensure that my tests are against the
LinkFactory only. By running the tests against a single class at a time,
and generating a Devel::Cover report, I can see which other classes are
being analysed for coverage and mock those - essentially D::C is telling
me "these are the methods/conditionals/... you need to test, and these
are the classes you need to mock if you want your tests to be granular".
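For instance, a hand-rolled mock along these lines keeps the test
exercising the factory alone (LinkFactory, Link::Mock and their methods
are hypothetical stand-ins for illustration, not code from any real
project):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for a real Link class: records that it was
# called, does none of the real work.
package Link::Mock;
sub new    { bless { visits => 0 }, shift }
sub follow { $_[0]{visits}++; return 'mocked' }

# Hypothetical factory under test; the class it instantiates is
# injected, so a test can substitute the mock.
package LinkFactory;
sub new {
    my ($class, %args) = @_;
    bless { link_class => $args{link_class} || 'Link' }, $class;
}
sub make_link { my $self = shift; return $self->{link_class}->new }

package main;
# Inject the mock so only LinkFactory's own code shows up as covered:
my $factory = LinkFactory->new(link_class => 'Link::Mock');
my $link    = $factory->make_link;
print $link->follow, "\n";    # the mock, not a real Link, is exercised
```

Run the test file under coverage (e.g. with `cover -test`, or by loading
-MDevel::Cover when running the single test) and the report will show
any class other than LinkFactory that still gets touched.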
In general, though, increasing coverage across an application is a
great game for a dev team to play. Build a continuous-integration test
server which runs your test suite under Devel::Cover every time a commit
hits your source control repository. Obviously, this gives you an
automatic notification when your source tree is broken, but you can use
the coverage numbers to flag commits which lower, rather than raise,
coverage. You can also report on which developers are raising or
lowering coverage over time. I think that is exactly the kind of
quality metric of which management would approve. And the
general optimisation of team duties is much easier when the managers are
aware of the relative strengths of each team member. It also helps
when appraising programmers in performance reviews to have a concrete
metric for their contribution to the team as an engineer.
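The commit-flagging step itself is trivial once you have the numbers. A
minimal sketch (the figures are assumed; in practice you would parse the
totals out of `cover`'s summary output and store the previous one
between builds):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical gate for a CI hook: compare the new total coverage
# figure against the previous build's and flag commits that lower it.
sub coverage_delta {
    my ($previous, $current) = @_;
    return sprintf '%+.1f', $current - $previous;
}

my $previous = 84.0;    # assumed: total coverage after the last commit
my $current  = 82.5;    # assumed: total coverage after this commit

my $delta = coverage_delta($previous, $current);
if ($current < $previous) {
    print "coverage fell from $previous% to $current% ($delta%)",
          " -- flag this commit\n";
}
else {
    print "coverage change: $delta%\n";
}
```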
/joel