Should Software Developers Test Their Code Before Launching?

This seems like a silly question. Obviously, developers need to make sure their builds are stable before sending them on to the testing team. So yes, developers must at least taste test their applications.

Why Developers Should Test Their Own Code

The primary benefit is speed. Developers know their code better than anyone else. This allows them to quickly dig into the scripts, find potential issues, and correct problems on the spot. And this is a huge time-saver – especially when you contrast it with the countless back-and-forth exchanges that developers and testers typically go through.
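In practice, this kind of developer-side "taste testing" often takes the form of quick unit checks written alongside the code, before the build ever reaches QA. A minimal sketch (the function and its tests are illustrative examples, not taken from any particular codebase):

```python
# A quick developer-side "taste test": a unit check written alongside
# the code so obvious problems are caught and fixed on the spot.
# normalize_email is a hypothetical helper used only for illustration.

def normalize_email(address: str) -> str:
    """Lowercase and trim an e-mail address before storing it."""
    return address.strip().lower()

def test_normalize_email():
    # The happy path the developer had in mind while coding.
    assert normalize_email("User@Example.COM") == "user@example.com"
    # An edge case caught on the spot: stray whitespace from a form field.
    assert normalize_email("  user@example.com \n") == "user@example.com"

if __name__ == "__main__":
    test_normalize_email()
    print("taste test passed")
```

Checks like these are cheap precisely because the developer already knows where the code is fragile; that is the speed advantage described above.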

A perfect example of this speed is Wix, a web design firm that churns out 800+ projects a year. The company doesn’t maintain a roster of standalone testers. Instead, its developers devote about 30% of their time to writing code – and the remaining 70% to testing it.

Having developers do some of the testing is also courteous – in a way. After all, it makes the quality assurance team’s job much easier. Why send a buggy piece of software forward when you can simply test and fix it on your own?

But there is a downside. And despite occasional outliers like Wix, we would argue that blurring the lines between developers and testers causes more harm than good. We’ve already looked at the dangers of having testers “develop” in a previous article. Now let’s look at some of the disadvantages of having developers “test.”

Why Developers Shouldn’t Test Their Own Code

Again, we’re not against minor testing here and there. Remember that developers have to “taste” the soup before letting others try it. But we are against having developers act as the last line of defense before the product launches. You shouldn’t release any build until it goes through stringent testing. And believe it or not, the developer is often the least qualified person to perform that analysis.

The reason is simple.

We all have blind spots. And the closer we are to a product, the larger those blind spots become.

As a developer, it’s very difficult to be 100% objective about something you’ve created. But even if you could perform impersonal testing, there’ll always be things you overlook. Perhaps the product works perfectly for every use case you’ve thrown at it, but what about the countless ways end-users will try to use it?

We can’t stress this point enough. Once the product is in the customer’s hands, it takes on a life of its own. And as discussed in an earlier article, it’s very difficult – if not impossible – to anticipate every single way in which your product will be used.

It sometimes takes an outsider to spot what you’ve missed – even if that outsider doesn’t know a thing about programming. In fact, this is precisely what will happen. Once your product launches, there will be hundreds, thousands, or perhaps millions of “outsiders” who’ll eventually discover the bugs you’ve overlooked.

Why rely on customers to find defects when you have access to a dedicated team of trained professionals who can do this analysis for you? They have the tools, knowledge, and skills to push your platform to the limit. Even they won’t be able to catch everything – testers have blind spots too. But they’ll come closer than anyone else.
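One reason trained testers catch more than the original developer is that they deliberately probe inputs the developer never imagined. A tiny randomized-input sketch of that mindset (the helper function and its length invariant are hypothetical, chosen only to illustrate the technique):

```python
import random
import string

def truncate_label(text: str, limit: int = 10) -> str:
    """Hypothetical helper: shorten a label, appending an ellipsis."""
    return text if len(text) <= limit else text[:limit - 1] + "…"

# A tester-style probe: hammer the helper with many randomized inputs
# and check an invariant the developer may never have written down –
# the output must never exceed the limit.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choice(string.printable)
                for _ in range(random.randrange(0, 30)))
    out = truncate_label(s)
    assert len(out) <= 10, (s, out)
```

The developer tends to test the inputs the code was designed for; the tester tests the inputs it wasn't.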

Software Testing Is an Investment – Not a Cost

There’s a reason why testing can consume such a large share of total software development costs – by some estimates, as much as 80%. Devoting that much of your budget to quality assurance is a wise investment. A so-so product that performs as promised is far more valuable than a brilliant idea full of bugs.

However, that investment only pays off if you tap into this resource and use all of the testing capabilities at your disposal.

Don’t believe us?

Try it out yourself. Take a project you’re currently working on and look for every bug you possibly can. Spare no expense. And once you’re done, give that project to a few software testers and have them do their own analyses. You’ll be amazed at what they find.