This is a mini-blog. I'm working to find a compromise between a tweet and a lengthy essay. I find it difficult to complete longer documents because of an obsession with perfection. So this little experiment is to see if I can create a blog of mini articles. Herein I will talk about many technical things generally related to software development and Agile practices.

14 June 2017

I Hate Parameterized Tests

This is totally a personal thing. Well, it's a bit 'real' too. I hate parameterized tests.

Parameterized tests obscure what is going on in the code. I have yet to see a clean, understandable, elegant example of a parameterized test. They are nearly as bad as looping around an assertion inside a test. The one redeeming quality they might have is that a better (I won't say well) done parameterized test will at least run all the cases even if one fails.

They still stink!

One of the most common issues I've run into with parameterized tests is that it is unclear which combination of conditions caused the failure. That is, some collection of parameters caused the test to fail. What were their values? What semantics are associated with that particular grouping? Most of the time nobody knows, and nobody can easily tell. In the rare case that we can quickly isolate the issue, we are often still at a loss: 'What does it mean?'

With enough effort, you might survive.

In a few cases I've put a bunch of effort into fixing parameterized tests for people. By fix, I really mean making them tolerable to have in the test suite. Usually I add an extra parameter that names the condition and gives meaning to the collection of parameters. I tried making parameter objects with explicit names once. I even used an Enumeration in Java to 'name' the parameters. It helped, but it was perilous. In all cases I think I ultimately surrendered to exhaustion rather than satisfaction.

Better Choices are... Better!

My recommendation about parameterized tests is: don't. Rather, find all the nifty edges, give them names, and use those to create named test cases. Type out 1000 tests if you have to, but know explicitly what each test does by its name. When one of these fails you will have a big red arrow pointing at your problem. No hunting, no weird structures, no head scratching. Just immediate feedback.
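To make the recommendation concrete, here is a minimal sketch in Java (no test framework, just a main method) of what named test cases look like compared to a parameterized loop. The function under test, isLeapYear, is a hypothetical example chosen only because its edge cases are well known; substitute whatever logic your real suite exercises.

```java
// Named test cases: each interesting edge gets its own method whose name
// says exactly what is being checked. When one fails, the failure message
// is the name, and the name is the diagnosis.
public class NamedTests {

    // The code under test (hypothetical example).
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    // A tiny assertion helper so this sketch runs without a framework.
    static void check(boolean condition, String testName) {
        if (!condition) {
            throw new AssertionError(testName + " failed");
        }
    }

    // One named test per edge -- no table of anonymous tuples to decode.
    static void typicalLeapYearIsLeap() {
        check(isLeapYear(2016), "typicalLeapYearIsLeap");
    }

    static void centuryYearIsNotLeap() {
        check(!isLeapYear(1900), "centuryYearIsNotLeap");
    }

    static void quadricentennialYearIsLeap() {
        check(isLeapYear(2000), "quadricentennialYearIsLeap");
    }

    static void ordinaryYearIsNotLeap() {
        check(!isLeapYear(2017), "ordinaryYearIsNotLeap");
    }

    public static void main(String[] args) {
        typicalLeapYearIsLeap();
        centuryYearIsNotLeap();
        quadricentennialYearIsLeap();
        ordinaryYearIsNotLeap();
        System.out.println("all named tests passed");
    }
}
```

Contrast this with the parameterized equivalent, a table of rows like {1900, false}: when row three fails, you get an index and a tuple, and then the archaeology begins. Here the failing name tells you which edge broke before you even open the file.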
The labor we put into a test suite is the real asset of the software. Without the tests, the software's quality is dubious at best; without the source code, the test suite still holds the answers to what should be. Making the choice to be careful and explicit is the right one. Throwing spaghetti at the wall to see what sticks is a poor way to ensure quality, and it will make change harder over time rather than easier.