I think that a lot of the current push for TDD comes from the dynamic languages / Web development camp, where tests largely compensate for the fact that these languages don't have static types, i.e., as Cas said, it's a crutch for shit languages.

The story of the origin may be true (the XP folks came from Smalltalk, which is more or less dynamic), but that doesn't mean tests are useless in a language with static types. A type system can prove the absence of certain wrong behaviors, but it can't prove the presence of correct ones. Of course, under a strict definition of "prove", tests won't do that either, but they at least provide some assurance against regressions when you refactor code.
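To make that concrete, here is a minimal sketch in Java (the `Clamp` class and both methods are hypothetical, not from the thread): two implementations with the exact same type signature, one correct and one buggy, so the compiler accepts both and only a test can tell them apart.

```java
final class Clamp {
    // Intended behavior: clamp x into the range [lo, hi].
    static int clamp(int x, int lo, int hi) {
        return Math.max(lo, Math.min(x, hi));
    }

    // Identical type signature, so this also compiles fine,
    // but min/max are swapped and the behavior is wrong.
    static int clampBuggy(int x, int lo, int hi) {
        return Math.min(lo, Math.max(x, hi));
    }

    public static void main(String[] args) {
        // A regression test pins down the behavior the types can't express.
        if (clamp(5, 0, 10) != 5)   throw new AssertionError("inside range");
        if (clamp(-3, 0, 10) != 0)  throw new AssertionError("below range");
        if (clamp(42, 0, 10) != 10) throw new AssertionError("above range");
        // The buggy version type-checks but fails the same check.
        if (clampBuggy(5, 0, 10) == 5) throw new AssertionError("bug not caught");
        System.out.println("clamp tests passed");
    }
}
```

The point is only that both versions are equally well-typed; the test is what distinguishes them, and it keeps doing so through every refactor.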

Regression tests have always been a major part of QA; it's just that dev machines have gotten so much more powerful that we can now write them preemptively and run them at the push of a button.

White-box & regression testing are awesome when needed. I think the original point, however, is that they've become faddish and are being used in places where they're worthless: "It's the write way to do it" (yes, I'm aware of my spelling there). Pointless white-box testing is a waste of both computer and, worse, human time. As an example, I've seen exhaustive unit tests for Remez method routines... WTF? Think, damn you! Don't just do it because they say you should.

I think the takeaway is that tests should arise simultaneously with source and not as an afterthought after things break, and I'm convinced that's still a good idea.

I'm not a fan of "test first", where implementation is an exercise in turning red bars green, no. To me, that's a sort of dogma that actually hurts testing: people may write sloppy tests just to bring coverage up, then write crappy code that just manages to turn the bars green. But really, no methodology can save a codebase written without any respect for craft.

A polynomial or rational min-max approximation of a function generated using the Remez exchange method. (I.e., you know in advance exactly the set of values where the maximum error occurs and what that error is, so white-box testing is trivial.)

BTW: when I said white-box testing a couple of posts back, I meant black-box testing. Bad me.
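To illustrate why testing such an approximation is nearly trivial, here is a sketch in Java. Since no actual Remez coefficients appear in the thread, it uses a degree-4 Taylor polynomial for exp(x) on [0, 1] as a stand-in (a real minimax polynomial would come with a tighter, known error bound, but the black-box check is the same shape): scan the interval and confirm the worst-case error stays below the bound the approximation was designed for.

```java
final class MinimaxCheck {
    // Stand-in approximation: degree-4 Taylor polynomial for exp(x),
    // evaluated in Horner form. Max error on [0, 1] is about 0.00995 (at x = 1).
    static double expApprox(double x) {
        return 1 + x * (1 + x * (0.5 + x * (1.0 / 6 + x * (1.0 / 24))));
    }

    // Black-box check: sample the interval and track the worst-case error.
    static double maxError() {
        double worst = 0;
        for (int i = 0; i <= 1000; i++) {
            double x = i / 1000.0;
            worst = Math.max(worst, Math.abs(Math.exp(x) - expApprox(x)));
        }
        return worst;
    }

    public static void main(String[] args) {
        // The whole "test suite" is one bound check against the design target.
        if (maxError() >= 0.0105) throw new AssertionError("error bound exceeded");
        System.out.println("max error = " + maxError());
    }
}
```

With a genuine Remez-generated polynomial you would also know the exact equioscillation points, so you could check the error at just those few x values instead of scanning a grid.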

For a month I have been working on a project with three other developers who have absolutely no clue what they are doing. When they try to fix something, they always break two other things. The code is nearly unmanageable. The only thing that still saves me is the unit tests that I wrote. Every time they change something, I can run my tests and see if they broke something again. If one test fails, I refuse to merge their work into the main branch.

Indeed, testing becomes a Big F**ing Deal when you have multiple developers, since documentation isn't always Ye Olde Tome Of The Holy Spec Volumes I-XVII -- and even when it is, who wants to dedicate days or weeks to reading the spec when the API is supposed to define boundaries of acceptable inputs anyway?

Really, I'd say this is like giving aspirin to someone seriously ill: treating the symptom and not the problem. The point of unit tests should be to reduce engineering costs, not increase them, which sounds like what's happening in this case.

A software system developed by even a small team of say ten developers is much too complex for every developer to have a full understanding of the system, or to fully foresee the effects of any given change. For this, unit tests are essential. If I was in a job interview and I was told 'we don't do unit testing here', I don't think I would take the job. At least, the money would have to be very good :-)

You're missing my point. Writing and running these tests does not advance the project. Multiply the time you're spending by two and that's your opportunity cost of not progressing your project. The fact that you might be wasting less time than otherwise doesn't change the fact that you're wasting time.

I thought unit tests were more about "preventing errors" than "fixing errors". In large projects, I thought unit tests are more for "conforming your modules to design specs" than for "fixing broken code".

Using unit tests to fix code sounds kinda silly; we have debuggers for that, and they work rather well. I thought the reason companies spend so much time on unit tests is so that the modules they write can work everywhere.

Does it waste a lot of valuable time? Hell yeah! Is it beneficial to large coding groups? Depends.

It is exactly like what the first post stated about JUnit tests: it is just a way to plaster over the cracks of coders merging modules together. Why it takes up so much development time really just depends on the coders themselves. "Writing tests is just a drag, and so is fixing your code to match those tests. Green-checking is just boring... hack-and-slash code to get the green check. Yay!" Just because you hack code together doesn't make it robust or readable. But alas, that is what JUnit tests promote when it comes to large coding groups.

We just want to code, not test. What is this, a driving school simulation?

java-gaming.org is not responsible for the content posted by its members, including references to external websites and other references that may or may not have a relation with our primarily gaming and game production oriented community. Inquiries and complaints can be sent via email to the info account of the company managing the website of java-gaming.org.