On testers and testing

Over the years, I've come to hold some strong opinions on testing, testers, and the entire business of quality assurance. Inspired by this post on Facebook's testing, I wanted to write them down so I can point people to it. Some of this is controversial. In fact, even mentioning some of this in conversation has caused people to flip the bozo bit on me.

Most product teams don't need a separate testing role. And even if they do, the ratio of full-time developers to full-time testers should be on the order of 20:1 or higher. For evidence, look no further than some of the most successful engineering teams of all time: whether it's the Facebook of today or the original NT team from 30 years ago, some great software products have come from teams with few or no testers.

Developers need to test their own code. Period. The actual mechanism is unimportant: it could be unit tests, full-fledged automated tests, manual button mashing, or some combination thereof. If your developers can't, won't, or somehow think it's beneath them, you need better developers.
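To make the point concrete, here's a minimal sketch of what "developers test their own code" can look like in practice. The `slugify` helper is hypothetical, invented just for this example; the point is that the tests sit right next to the code and are written by the same person who wrote it:

```python
def slugify(title: str) -> str:
    """Hypothetical helper: turn a post title into a URL slug."""
    return "-".join(title.lower().split())

# The developer who wrote slugify writes these too -- no separate test org needed.
def test_slugify():
    assert slugify("On Testers and Testing") == "on-testers-and-testing"
    assert slugify("") == ""
    assert slugify("  extra   spaces  ") == "extra-spaces"

test_slugify()
```

Whether this runs under a framework or as a bare script on every commit matters far less than the fact that it exists and the author owns it.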

Here's a politically incorrect thing to say: several large engineering organizations hire testers from the pool of people who couldn't cut it as developers. I've sat in (or heard of) a scary number of dev interviews where somebody said, "(S)he is not good enough to be a dev. Maybe a test role?" This leads to the widely held but rarely spoken perception that someone in test is not as smart as someone in dev.
And because that hiring pattern is so widespread, the few insanely smart people who are genuinely passionate about quality and testing get a raw deal. I know this because I've worked with a few of them.

Tracking code coverage is dangerous. Because it is so easily measurable, it often becomes a proxy for the actual goal: building quality software. If your dev team is spending its cycles writing silly tests to hit every rarely used code path, just because it shows up in some status report, you have a problem.
Code coverage is one of many signals that should be considered, but because it is so seductively numeric, it drowns out the others. It falls victim to Goodhart's law: when a measure becomes a target, it ceases to be a good measure.

I have yet to see well-written test code in organizations that keep testers separate from developers. Sadly, this is accepted as a fact of life when it doesn't need to be.

Just as metrics like code coverage are overused, other quality signals are underused: how many support emails has this feature generated? Actually using the product all the time yourself and catching issues. Analyzing logs from production and customer installations. All the other tactics mentioned in the Facebook post at the top.
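The log-analysis signal in particular is cheap to get started with. As a rough sketch, with a made-up log format for illustration, even a few lines of scripting over production logs can point testing attention where users are actually hurting:

```python
from collections import Counter

# Hypothetical excerpt of a production log (format invented for this sketch).
log_lines = [
    "2024-01-02 ERROR checkout TimeoutError",
    "2024-01-02 ERROR checkout TimeoutError",
    "2024-01-02 INFO search query ok",
    "2024-01-03 ERROR upload PermissionError",
]

# Count errors per component -- a quality signal no coverage report gives you.
errors = Counter(
    line.split()[2] for line in log_lines if " ERROR " in line
)

# errors now ranks components by real-world failures, e.g. checkout first.
print(errors.most_common())
```

The exact parsing will depend on your logging setup; the point is that a signal grounded in what actually broke for customers beats a synthetic metric computed before the code ever shipped.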

A common tactic among engineering leaders is to try to shrink their testing orgs by moving to automated testing. This is a huge mistake. If you have a user-facing product, you absolutely need manual eyes on it. Sit down for coffee with someone from the Windows organization at Microsoft and they'll blame the focus on automated testing for a lot of the issues in Windows Vista. The real mistake is assuming that you need a full-time test person to use the product.

Some disclaimers

Some of my best friends and the smartest people I've met are from the QA world. So this is by no means an indictment of everyone who is in the testing profession. You know who you are. :)

There are always exceptions. I know of several product organizations where a separate testing function is necessary (for example: hardware, mission-critical products, nuclear reactors, a huge legacy installed base, etc.). But most of the above should hold true for anyone shipping a typical internet service.

If you're a small startup, none of this applies, since you don't have the luxury of hiring full-time testers anyway.