I am a strong advocate of test-driven development, and have been for a long time, although for most of that time I didn't know it would end up being called that. I am *not* criticising test-driven development. I am expressing my personal doubts about the tools I have seen available for supporting it.

My point, badly made, is that I have no interest in seeing screenfuls of "xxxx.t .....ok". I do not care how many tests passed, or what their names were, or what the percentages are.

The only thing I am interested in is "0 failures" or "Failure: test nn at file.pl:(nnn)". The output from the test harness doesn't tell me what I need to know: what (code, not test) failed, how it failed, and where (which source code file, not which test file).

Instead, I have to go off on a hunting spree: first locate the test that failed, then re-run it with extra print statements added to find out how it failed, and then track that back to the source code where the failure originates.

The Test::* modules are set up to make the writing, running and reporting of tests easy.
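For concreteness, here is the sort of thing I mean: a minimal, separate test script of the usual Test::More style (My::Module and its frobnicate() method are invented purely for illustration):

    # t/basic.t -- lives in the test directory, away from the code under test
    use strict;
    use warnings;
    use Test::More tests => 2;

    use My::Module;    # hypothetical module under test

    ok( defined My::Module->new,           'constructor returns an object' );
    is( My::Module->new->frobnicate(2), 4, 'frobnicate doubles its argument' );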

But writing tests is not the objective. The objective is finding and fixing failures so that the code meets its specification.

To this end, I want a process that puts the tests close to the code under test. That way, when failures occur, the messages can take me directly to the code that needs fixing, not to a test script in another directory that leaves me with nothing but grep to trace the failing test back to the failing code.

All that said, I do not have an alternative to offer. I have played a little with some ideas based upon Devel::StealthDebug.

I have ideas for combining these two notions--automated test generation inline, with the ability to turn those tests off for production use so that they have zero impact upon the tested code when disabled.

That is where I think the future lies--inline, automated (unit) testing that can be enabled and disabled via a command-line switch.
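To sketch the sort of thing I have in mind (the names are invented and this is only one possible mechanism, not a finished design): in Perl 5 you can get close to zero cost for disabled inline checks by guarding them with a compile-time constant, so the guarded blocks are optimised away entirely when the switch is off, and when they are on, the failure message points straight at the source file and line.

    package My::Module;
    use strict;
    use warnings;

    # Hypothetical switch; could be fed from a command-line flag or the environment.
    use constant INLINE_TESTS => $ENV{INLINE_TESTS} ? 1 : 0;

    sub frobnicate {
        my ($self, $n) = @_;
        my $result = $n * 2;

        if (INLINE_TESTS) {    # block is optimised away when the constant is false
            die sprintf "frobnicate(%s) gave %s at %s line %d\n",
                $n, $result, __FILE__, __LINE__
                unless $result == 2 * $n;
        }

        return $result;
    }

    1;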

I think that P6 is moving in this direction with its PRE{}, POST{}, FIRST{}, ENTER{} & LEAVE{} blocks. I have yet to see (or visualise) enough P6 code, and the specifications are still rather loose and changeable, so I cannot yet decide whether they are flexible enough to achieve everything I would like, but they look as if they might.

So, as I think I have remembered to say each time I have mentioned it--my reservations are purely my own. As with everything, what exists now is infinitely better than what might exist one day.

It sounds like what you're doing is more about testing the implementation. That's not what TDD, as it has been widely publicised, is. TDD is about testing interfaces rather than implementation (though the line is admittedly blurry at times).

Personally, I can't follow your argument about having to look at the test case to track down the failure; maybe if you only used a simple ok() function? The more expressive tests in Test::More, like is() and friends, will not only tell you that there was a failure but also what was expected and what was actually produced; together with descriptive test names, I almost never have to look into my test cases to track down a failure.
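For instance (a contrived, deliberately failing example; the values and test name are made up), is() reports both what it got and what it expected:

    use strict;
    use warnings;
    use Test::More tests => 1;

    # Deliberately wrong, to show the diagnostics is() emits on failure.
    is( 6 * 9, 42, 'multiplication behaves the way the spec says' );

When that fails, the output carries the test name along with got/expected diagnostics, so the nature of the discrepancy is usually clear without opening the test file at all.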

My own experience with doing interface tests inline is that it makes aggressive refactoring hard (which runs contrary to the spirit of test-first development). I find that the tests I write first, in particular, often suggest a completely different direction from the one the code's eventual structure grows into. That doesn't hinder me; they're separate from the code, after all.

The Test::* modules were written with that approach in mind, and since you want something other than that, they obviously won't be a particularly good fit. That doesn't make them subpar in itself; it just means there's an impedance mismatch between your goals and theirs.