Good Enough Releases vs. Error-Free Code: Finding the Right Balance

Developing a “proof of concept” has long been a central pillar of the business world. Under this framework, companies are encouraged to:

Build a barebones prototype.

Gauge consumer demand.

Add requested bells & whistles.

Scale up production for the masses.

This proven formula allows you to test the waters with minimal investment. You don’t need to develop a flawless product – you only need one that’s “good enough” for early adopters.

But the days of “good enough” are quickly fading as mistakes and premature releases become more expensive.

Some mistakes are minor – like an insignificant typo. And with the right approach, a company can own up to its error with good humor.

Other times, these mistakes can erode trust and inflict unimaginable damage. The recent Facebook data breach is a perfect example. And Uber’s first self-driving car fatality is another.

And sadly, we’ll see even more scandals like these as society continues to embrace the Internet of Everything. Digitization allows for unprecedented speed and convenience. But it also exposes us to more hacking and abuse.

Sometimes, there’s no malicious intent at all. A few lines of buggy code can produce negative consequences. For example, we still don’t know if the recent Uber fatality was the car’s fault or the pedestrian’s. But rest assured that the company will pay a hefty price either way.

A Wakeup Call to Software Testers and Developers

Whether deliberate or accidental, software errors can have grave consequences. And as gatekeepers of the underlying code, we have a responsibility to protect society from bad outcomes.

But how do we find the right balance between speed and quality?

On the one hand, software testers and developers face increasing pressure to deliver finished products under tight deadlines. It’s no longer advantageous to delay releases for additional testing. The longer you wait, the more likely it is that a competitor will beat you to market – making your product less relevant.

On the other hand, consumers rightfully expect that the finished product will perform as expected. And more than this, using that product shouldn’t expose them to any dangers – be they physical or virtual. Facebook (and potentially Uber) fell into this trap.

Left to their own devices, many companies choose whichever option is easier (and more profitable) in the short term. This explains why Facebook didn’t tell the public about its data breach for nearly 3 years – or why Volkswagen lied about its car emissions.

Correcting these errors simply wasn’t profitable in the here and now.

Defenders of this approach could argue that there’s a cost to preemptively fixing every bug – and that it’s ultimately the customer who pays.

However, opponents could counter that there’s also a cost to not fixing bugs – and the consumer pays that price as well.

So which should you choose – speed or quality?

When Fixing Bugs: Comparing Speed vs. Quality

The choice doesn’t have to be so black & white. With the right approach, you can tackle both speed and quality (with greater weight given to speed).

The goal is to find and fix as many bugs as possible – as quickly as possible – so that you can release stable builds with minimal delays.

You can accomplish this by prioritizing tests to identify the most critical bugs – the ones that will make consumers angry (and hackers happy). For example, you might start with mandatory testing before moving on to sanity testing. And if time permits, you could schedule some regression testing as well.
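One way to picture this prioritization is a test runner that executes checks in order of criticality and stops when the release deadline arrives. The sketch below is illustrative only – the test names, priority scheme, and `run_prioritized` helper are hypothetical, not taken from any particular framework:

```python
import time

def run_prioritized(tests, budget_seconds):
    """Run (name, priority, func) tests in priority order, stopping
    when the time budget is spent. Priority 0 = most critical
    (security, data loss); higher numbers = nice-to-have checks.
    Returns the (name, passed) results actually executed."""
    results = []
    deadline = time.monotonic() + budget_seconds
    for name, priority, func in sorted(tests, key=lambda t: t[1]):
        if time.monotonic() >= deadline:
            break  # ship with what we know; lower-priority tests wait
        try:
            func()
            results.append((name, True))
        except AssertionError:
            results.append((name, False))
    return results

# Hypothetical suite: the critical login check runs before a cosmetic one.
suite = [
    ("layout_padding", 2, lambda: None),
    ("login_rejects_bad_password", 0, lambda: None),
    ("search_regression", 1, lambda: None),
]
print([name for name, _ in run_prioritized(suite, budget_seconds=5)])
```

The point of the ordering is that if the budget runs out, the tests you skipped are the ones least likely to anger consumers or invite abuse.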

If you also migrate more of your assets over to the SaaS model, you gain even greater flexibility. This approach allows you to push continuous updates – even if the initial prototype isn’t 100% error-free. In fact, we’re huge supporters of incremental improvements – provided that the first release is stable enough.
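One common mechanism behind this kind of continuous, incremental updating is a staged rollout: a new feature is enabled for a small, deterministic slice of users first, so problems surface before everyone is exposed. A minimal sketch (the `in_rollout` helper and feature name are assumptions for illustration, not a specific vendor API):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into one of 100 buckets and
    enable the feature for the first `percent` of them. The same user
    always lands in the same bucket, so their experience is stable."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Hypothetical usage: start at 5%, widen as the release proves stable.
if in_rollout("user-42", "new_checkout", percent=5):
    pass  # serve the new code path
```

Because the bucketing is deterministic, the team can dial `percent` up from 5 to 50 to 100 as confidence grows – or back to 0 instantly if a critical bug appears.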

This strategy isn’t necessarily easier. But focusing on speed and quality together is certainly:

Cheaper when you factor in the potential fallout that often accompanies premature releases.

More profitable when you factor in the goodwill and trust that fast, stable releases can help generate.

Agree?

Disagree?

Are there better ways to increase speed – without sacrificing quality – during software testing and development?