Why Do Rubyists Test So Completely?

The Ruby community, according to some data I am making up, has the strongest
test-driven development attitude. Not all of us
TATFT; not all of us test most of the time; and some of us never test at all.
But those who do test make up a larger proportion than they do in, say, the
Java world.

But why? What is it about Ruby that drives us to attain 100% code
coverage?

We test. Kent Beck tests.
Rails comes set up for you to write tests as part of your application. Are you
such a skilled programmer, such a badass rebel, such a unique person, and such a
loner that you wouldn’t dare test your Ruby code?

No, because your app would break and you know it. More importantly, everyone in
the community knows it. If you publish it to RubyForge
and there are no tests, don’t expect us to use it. So you write tests, because
we want you to.

Test-driven development is cheating. You think
upfront
about the problem, the solutions, the problems with those solutions, and you
document all this in code. The test fails, with a pretty error message, so you
make it pass.

Any method, at any time, might return nil. You never know when it might
happen. Even an instance variable that was never assigned quietly evaluates to
nil. Your code will keep going until this becomes an issue…and it will become
an issue.
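A minimal sketch of the problem (the lookup method and data are invented for
illustration): a method quietly returns nil, and the failure surfaces later,
far from the call that produced it.

```ruby
# A lookup that quietly returns nil when nothing matches.
def find_user(users, name)
  users.find { |u| u[:name] == name }  # nil when no user matches
end

users = [{ name: "alice", email: "alice@example.com" }]

# The nil travels onward; the error, when it finally arrives, points at
# nil rather than at the lookup that produced it.
user = find_user(users, "bob")
begin
  user[:email].upcase
rescue NoMethodError => e
  e  # undefined method `[]' for nil
end
```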

The reason nil appears so often, even when unexpected, is partly the implicit
return of a method's last expression, and partly that if in Ruby differs from
if in C: it is an expression that produces a value, not a statement that merely
runs, so an if whose condition fails and has no else evaluates to nil. Many
people don't expect this, and the difference often goes ignored.
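A small sketch (the discount method is invented for illustration) showing both
effects at once: the if is the method's last expression, so its value, which is
nil when the condition fails and there is no else, becomes the return value.

```ruby
# The if is this method's last expression, so its value is the return value.
def discount(total)
  if total > 100
    total * 0.9
  end          # no else: the if evaluates to nil when total <= 100
end

discount(200)  # => 180.0
discount(50)   # => nil; no error yet, the caller inherits the problem
```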

We’ve worked around this in places where we expect nil. Either we rewrite the
algorithm to #compact the nils out early, or we use validates_presence_of and
stop expecting nil, or we use the #try hack,
or some other specific solution. But it lingers in the back of our mind that any
method we’re calling could produce an absolutely useless value.
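A sketch of two of those workarounds. #try comes from ActiveSupport; plain
Ruby's safe-navigation operator &. (Ruby 2.3+) expresses the same idea, so this
example uses it to stay self-contained. The data is invented for illustration.

```ruby
emails = ["a@example.com", nil, "b@example.com"]

# Compact the nils out early, before they spread through the algorithm.
clean = emails.compact
# => ["a@example.com", "b@example.com"]

# Or tolerate nil at the call site: &. returns nil instead of raising
# NoMethodError, the same idea as ActiveSupport's #try.
user = nil
user&.upcase  # => nil, no exception
```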

We have to write complete tests that verify the runtime code because there is no
tool to automate this. In a world where even C can tell you when you are passing
an integer where it expected a string—before even running your
program—Ruby cannot do anything of the sort. Your program works until it
stops working, and then you dig through Hoptoad for a minute, spend thirty
minutes writing a test and five fixing the code, and re-deploy. Live, while
the client is waiting.
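A sketch of the contrast (the add method is invented for illustration): nothing
checks this call before the program runs; Ruby only objects at runtime, when
the bad value finally arrives.

```ruby
def add(a, b)
  a + b
end

# No tool flags this line ahead of time; the mistake lives until executed.
begin
  add(1, "2")
rescue TypeError => e
  e  # String can't be coerced into Integer (wording varies by Ruby version)
end
```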

So we write the tests first so the client stays happy. Instead of just hacking
away at code until it compiles, we write tests then hack away at code until they
pass.
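A minimal red-green loop, sketched with plain assertions standing in for a
test framework; the slug method and its expectations are invented for
illustration.

```ruby
def assert(condition, message)
  raise message unless condition
end

# Step 1 (red): write the expectations first. Run them before slug exists
# and they fail with NoMethodError, a pretty error message to make pass.

# Step 2 (green): hack at the code until the assertions pass.
def slug(title)
  title.downcase.strip.gsub(/\s+/, "-")
end

assert slug("Hello World") == "hello-world", "collapses spaces to dashes"
assert slug("  Ruby  ") == "ruby", "trims and downcases"
```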

We write tests so we can refactor without fear. What if we change the methods
called, or the order of the callbacks; what will break?

This matters because code can depend on ordering. Variables can be mutated: set,
unset, and changed from anywhere. We might print to the logger in one callback
and follow up that entry in another. We might depend on an instance variable
not being nil in a method because of undocumented invariants. If we rewrite a
seemingly innocent method in a way that changes the order in which things are
mutated, anything could go wrong.
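A sketch of that hazard with invented callbacks: the first sets an instance
variable, the second silently assumes it is already set. Reorder them and the
"innocent" change produces wrong output with no error at all.

```ruby
class Signup
  def initialize
    # Each callback assumes the ones before it have already run.
    @callbacks = [:load_account, :send_welcome]
  end

  def run(name)
    @callbacks.each { |cb| send(cb, name) }
    @message
  end

  private

  def load_account(name)
    @account = name.capitalize   # sets state a later callback depends on
  end

  def send_welcome(_name)
    # Undocumented invariant: @account must already be set here.
    @message = "Welcome, #{@account}!"
  end
end

Signup.new.run("alice")  # => "Welcome, Alice!"
# Swap the callback order and @account is nil when send_welcome runs:
# the message silently becomes "Welcome, !" with no exception raised.
```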

So we test because Ruby provides no better alternative, and because testing is
awesome. What if we tested only because we enjoyed it, not because we got
anything technical out of it? What if our unit tests never failed except for
when we first wrote them? What if regressions were caught by the language
implementation instead of by custom code?

What changes would you make to Ruby to achieve the goal of never unit testing
again?