When you have a short deadline to begin with, and you're nearing the end of development, would you rather rush through coding to free up more time for QA, or keep your pace and possibly have VERY little QA time?

I am the sole developer on this project, and the sole QAer as well. The time I have from notice to release is extremely short. It is a very data-driven application, and the data tends to be a little... user-inputted. I could spend days searching through intermediate processes, but that would take away from the little valuable time I have left.

What do you recommend? Maximize QA time by rushing to finish? Or keep steady and cross my fingers on release?

I don't like the thought of shipping a release version and then just watching each new batch of data come in, waiting for it to snag and fail.

4 Answers

First, I'll note that as much as most of us wish there were a good way out of this, the fact is that you don't have good options right now. The best you can hope for is to find the least bad one(s).

That said, the general rule is that the earlier you catch a bug, the quicker/cheaper it is to fix.

As such, despite pressure to hurry, I think it's generally better to stick to your pace and attempt to do things as well as you can up-front. If you try to slap things together in a hurry and then fix the mess later, there's a strong likelihood that the overall effort will be greater (and the result probably still of lower quality).

The "make-it-fast, fix-it-later" approach was used by American car manufacturers, much to their detriment. The "catch mistakes early" approach was used by Japanese manufacturers and was a key part of their rise in the '70s and '80s. The This American Life "Nummi" episode discusses this, though the main focus is the corporate culture of the car manufacturers.
– outis, Aug 6 '11 at 18:56

How about writing some automated unit tests as you finish up? I'm not going to get all preachy and say you should write all your tests beforehand in a TDD style (though I prefer this practice), because that isn't a helpful thing to say to you in this situation.

However, picking what you perceive to be the riskiest areas of the code and at least giving them test coverage as you complete your development might be achievable. You can then finish your tests after the deadline, write up any failing ones as bugs, and change the code to make them pass. This has the advantage that you find the bugs yourself, which should show your customer that you care about the code.
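To make that concrete, here's a minimal sketch of what "cover the riskiest area first" might look like. Since your risky area is user-inputted data, I've used a hypothetical `parse_amount` routine as a stand-in for whatever messy data-handling code your application actually has:

```python
import unittest

# Hypothetical stand-in for a risky data-handling routine in your app.
def parse_amount(raw):
    """Parse a user-entered amount like ' 1,234.50 ' into a float."""
    cleaned = raw.strip().replace(",", "")
    if not cleaned:
        raise ValueError("empty amount")
    return float(cleaned)

class ParseAmountTests(unittest.TestCase):
    # Target the inputs most likely to show up in messy user data.
    def test_plain_number(self):
        self.assertEqual(parse_amount("42"), 42.0)

    def test_thousands_separator_and_whitespace(self):
        self.assertEqual(parse_amount(" 1,234.50 "), 1234.5)

    def test_blank_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_amount("   ")

# Run with: python -m unittest <this_file>
```

Even a handful of tests like these around the scariest code path buys you more confidence per hour than broad manual QA.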

Finally, as the bug reports come in (and you know they will), don't be tempted to dive in and fix them immediately. Instead, take the time to write a failing unit test that replicates the bug, then make it (and all of your other unit tests) pass. Over time, you will grow little islands of unit tests around your buggiest / most frequently changed code, and if you get to work on a version 2.0, you can write further tests to grow your coverage to where you will have more confidence in the code next time around.
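The bug-first workflow might look like this. Everything here is a made-up example (the `row_total` function and the bug report number are hypothetical), but the shape is the point: capture the reported failure as a test, then fix the code until it passes:

```python
# Suppose a (hypothetical) bug report says totals crash when a row's
# quantity field is blank. Step 1: write a test that reproduces it.
# Step 2: fix the code until this test and the rest of the suite pass.

def row_total(quantity, unit_price):
    # Fixed version: a blank quantity used to raise on int("");
    # it is now treated as zero.
    qty = int(quantity) if str(quantity).strip() else 0
    return qty * unit_price

def test_blank_quantity_regression():
    # Replicates the hypothetical bug report: blank quantity crashed.
    assert row_total("", 9.99) == 0

def test_normal_quantity_still_works():
    assert row_total("3", 2.0) == 6.0
```

The regression test stays in the suite forever, so the same bug can't quietly come back in a later release.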

Sacrifice additional features for some QA time, and then sacrifice some MORE time to implement a system that alerts you to problems once it's in the wild, along with all the information you would need to fix the bug. Then spend some time coming up with a way to quickly push changes to customers.

If you're going to be supporting a poorly QAed piece of code, at least give yourself the tools you are soon going to need.