If you're a programmer working alone, or with a couple other programmers, and you don't have dedicated testers, the programmers have to make up the difference.

Obviously, even when you have testers, everyone is responsible for quality. I know how to write unit tests. I know how to "eat my own dogfood". I know the value of a second pair of eyes, applied via pair programming, code reviews, and having programmers test each other's features. But in the past I have always counted on dedicated testers. Now I am working without any testers.

Some examples of what I know we lose by not having testers:

A programmer writing code to do X will tend to test X with the same thinking that she put into her code. So, she is likely to overlook the same mistakes both when coding and when testing.

Programmers love their programs, and have an emotional need to see their programs in the best light possible. This makes it difficult for them to find flaws.

A programmer has to split time between programming (which he is eager to do) and testing (which is less attractive). It is easy for a programmer to call testing "done" so he can move on to the next joyous programming task. This conflict doesn't exist in a dedicated tester.

Testers have testing skills that they hone over their careers. I lack many of those skills.

How can we fill in these gaps? How can we still produce high-quality software with the people we have?

7 Answers

Your first bullet assumes that you write the code before you write the tests. That's a common assumption, but note that you have a choice about whether it's true. If you change the assumption, if you write tests before you write code, you can wear your "tester hat" without being (quite so) biased by the code you've written.

You can do this on two scales: On the larger scale of a whole feature, and on the smaller scale of each method and class you write.

On the scale of the whole feature, Acceptance Test-Driven Development (ATDD) helps. When you take on a new feature, the first thing you do, before you write any code, is write examples of that feature in action. If each example is written clearly, and the set of examples is reasonably comprehensive, they can act as a guide for your development. And you can automate them. Passing the next automated test gives a nice sense of progress; failing a test that used to pass gives a quick indication that you broke something; and that fast feedback means less time spent figuring out how you broke it.
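To make that concrete, here's a minimal sketch of what such examples can look like once automated (Python/pytest here; the orders module, the apply_discount function, and the discount rule itself are all hypothetical stand-ins for whatever feature you're building):

    # ATDD sketch: these examples are written before any production code
    # exists, so the import fails until the feature is implemented.
    # `orders`, `apply_discount`, and the 10%-over-$100 rule are invented
    # placeholders, not a prescribed design.
    import pytest

    from orders import apply_discount


    def test_ten_percent_discount_over_100():
        # Example agreed with the team: orders over $100 get 10% off.
        assert apply_discount(order_total=150.00) == 135.00


    def test_no_discount_at_or_under_100():
        assert apply_discount(order_total=100.00) == 100.00


    def test_negative_totals_are_rejected():
        with pytest.raises(ValueError):
            apply_discount(order_total=-5.00)

Each test is one of the agreed examples; watching them flip from red to green is the sense of progress described above.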

On the scale of each class and method, Test-Driven Development (TDD) helps. You write one teeny tiny test. Then you write the code to pass that test. Then you clean up. Then you write one more teeny tiny test. Then the code to pass that test plus all of your existing tests. Then you clean up. Repeat until the feature is complete and all of your automated tests pass.
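One turn of that loop might look like this (pytest again; slugify and its behavior are just an invented example, and the redefinition is shown in one file purely to illustrate the two passes):

    # Step 1: write one teeny tiny test. It fails: slugify doesn't exist yet.
    def test_lowercases_a_single_word():
        assert slugify("Hello") == "hello"


    # Step 2: write just enough code to pass it.
    def slugify(text):
        return text.lower()


    # Step 3: clean up, then write the next tiny test...
    def test_spaces_become_hyphens():
        assert slugify("Hello World") == "hello-world"


    # ...which forces the next small increment of code (redefined here
    # only to show the progression; in reality you'd edit in place).
    def slugify(text):
        return text.lower().replace(" ", "-")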

Still, at some point you'll want to get a good exploratory tester involved, for all the reasons you cite.

Possibly the biggest thing you can do is get someone else to test your code. If you aren't in a position to do test-driven or test-first development, then give another programmer your code and its interfaces, and have them write the unit tests. Then get someone else again to test the integrated software.
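A sketch of that division of labour (the RateLimiter interface and its contract are invented for illustration): programmer A publishes the interface and an implementation; programmer B, who hasn't read the implementation, writes the tests purely from the documented contract.

    from abc import ABC, abstractmethod


    # Programmer A publishes only this contract...
    class RateLimiter(ABC):
        @abstractmethod
        def allow(self, client_id: str) -> bool:
            """Return True if client_id may make another request right now."""


    # ...and, separately, an implementation: at most `limit` calls per client.
    class FixedWindowLimiter(RateLimiter):
        def __init__(self, limit: int):
            self.limit = limit
            self.counts: dict[str, int] = {}

        def allow(self, client_id: str) -> bool:
            self.counts[client_id] = self.counts.get(client_id, 0) + 1
            return self.counts[client_id] <= self.limit


    # Programmer B writes the tests from the contract alone.
    def test_allows_up_to_the_limit():
        limiter = FixedWindowLimiter(limit=2)
        assert limiter.allow("alice")
        assert limiter.allow("alice")
        assert not limiter.allow("alice")


    def test_clients_are_limited_independently():
        limiter = FixedWindowLimiter(limit=1)
        assert limiter.allow("alice")
        assert limiter.allow("bob")  # bob's budget is separate from alice's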

Part of the issue here is the age-old one of testing being seen as somehow less 'valuable' than programming. They're interdependent: programmers who test and test well can get away with not having testers, but they can't get away without testing (or if they try, they end up paying the price).

The other side is the inevitable difference in skill-sets. Programmers are good at finding ways to make things work. Testers are good at finding things programmers didn't think about when they were making it work. Really good testers do this in a way that doesn't make programmers unhappy.

Some of the other things you can do:

Open betas: when you think it's mostly ready, let users who are willing to take some risks work with the software. If you add some extra logging into your software, this also gives you a much clearer picture of how the software gets used (see the logging sketch after this list).

Crowd-sourced testing, like uTest (this obviously isn't possible for everything and everyone, but it can get you a fresh perspective from people who like testing).

Build the tests in, as Dale suggested.

Automate as many tests as you can think of. If your team is writing code that tests your code (with the caveat that no one tests the code they wrote themselves), they're more likely to enjoy the testing side than if they're trying to do manual exploratory testing they're not familiar with. The tricky part is figuring out what should be automated: I'd suggest starting with a list drawn from your steel thread and expanding to likely error conditions and things people know have bitten them in the past (see the pytest sketch below).
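For the open-beta logging mentioned above, a minimal sketch (the logger name, file path, and event fields are placeholder choices, not a prescription):

    import json
    import logging
    import time

    # One structured "usage event" per feature invocation, appended to a
    # local file; in practice you'd ship these somewhere centralized.
    usage_log = logging.getLogger("beta.usage")
    usage_log.setLevel(logging.INFO)
    usage_log.addHandler(logging.FileHandler("beta_usage.jsonl"))


    def record_usage(feature: str, **details):
        usage_log.info(json.dumps({"ts": time.time(), "feature": feature, **details}))


    # Called from feature code, e.g.:
    record_usage("export_pdf", page_count=12)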
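And for the automation suggestion, one way to begin is a tiny smoke suite over the steel thread plus one known-painful error case (pytest; the myapp module, the make_test_client factory, and the signup/login flow are hypothetical stand-ins for your own application):

    from myapp import make_test_client  # hypothetical test-client factory


    def test_steel_thread_signup_login_and_load_dashboard():
        # The one end-to-end path that must always work.
        client = make_test_client()
        assert client.post("/signup", {"user": "demo", "password": "s3cret"}).ok
        assert client.post("/login", {"user": "demo", "password": "s3cret"}).ok
        assert client.get("/dashboard").ok


    def test_known_regression_empty_password_is_rejected():
        # An error condition that has bitten the team before.
        client = make_test_client()
        assert client.post("/signup", {"user": "demo", "password": ""}).status_code == 400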

Bring in some "business users" (innocent dummy users you select, not real representatives from the paying customer), show them the app, and they'll discover glaring bugs almost immediately during the first demo. (I wonder whether this phenomenon has a special name or jargon term.)

Better to bring in some fake users that you select before going out to customers and showing the demo there, where it could end in embarrassment.

I know this does not replace automated testing, but this is always a good reality check.

I do not think there is a magic bullet for overcoming the lack of a tester. You need to follow the same kind of process you would follow if you had a tester, except that during the test cycle, some or all of you will need to stop writing code. Some of you will be better at testing than others. The degree to which you succeed will depend on your collective willingness to spend your time on something that is not your favorite thing to do.

I worked at a start-up with eight to ten developers and one tester. We needed more testers, so the one tester wrote a high-level test plan and divided it up among the developers. The results were not great but were better than leaving the entire job to one person. We eventually staffed up the test team as the money became available.

I wrote a couple of blog posts about this subject some time ago, based on my experience with some Agile teams that were having trouble finding the right testers and ended up having developers perform testing tasks.

Basically, I think most programmers can do decent testing if only they approach the task professionally. The problem is that most of them don't have a clue how to do this, and they end up running whatever scenarios occur to them (much as a new tester does without any training or guidance).

In a nutshell, you need to start by understanding the limitations of developers when they test (a) their own code, and (b) any code at all. If you do this, then at least you are not diving blindly into the task.

Then make sure your developers understand a little more about the testing process: what each of them can test, how to plan their testing, how to define some scenarios up front (using any type of brainstorming/mind-mapping session), etc.

During the testing itself, make sure your developers are focused on their task and are not just pressing buttons to "get it over with". Define testing sessions, and decide how to approach the moments when they feel they have nothing else to test, etc.

In the end, I think it is mostly about mindset and approaching the testing professionally. If you want to test seriously, you will be able to find the issues in the code; but if you see testing as a burden and a waste of time, you will just try to get the "dirty task" done, and you'll surely miss even some of the most trivial issues in the product.

A disciplined write/review/inspect cycle helps:

write a design (anything from a big-ass document to a list of classes and methods),

review your design,

make someone inspect your design,

write unit tests,

review unit tests,

make someone inspect your unit tests,

write code,

review code,

make someone inspect your code.

PSP/TSP, for example, insists on the strong separation between the "write" and "review" steps, which means you:

write design/tests/code,

stop at some point and declare it finished,

then take a walk/shower/nap/lunch, return and review it by yourself,

fix found issues,

give the result to some fellow developer for inspection.

It is very important to take your time on both review and inspection: something like 30+ minutes per page, depending on the complexity of the content.
Do not forget to reserve time for the review/inspection in the project's schedule (PSP/TSP suggests roughly 50%).

@joelmonte, testers are cheaper, and at some point you will start to lose money by teaching developers how to do (black-box) testing part-time instead of hiring some full-time QA engineers. – Misha Akovantsev, Jan 23 '12 at 1:17

I'm largely just reinforcing what has been said before (such as suggestions from Dale & knb):

Test first - build the automated checks into the software to help ensure you're developing the thing right

Peer review each other's code / pair program

Frequent code retrospectives to ensure all programmers are on the same page

Remembering the business users are only actors pretending to be real users: can you get real users involved? Either get them into a UAT environment, or make releases to production which only a limited number of users can access (e.g. A/B testing or a staged rollout; see the sketch below).
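A limited production release doesn't need a full feature-flag system; a deterministic percentage gate on user IDs is enough to start (a sketch; the 5% threshold and the salt are illustrative choices, not a prescription):

    import hashlib


    def in_limited_release(user_id: str, percent: int = 5, salt: str = "2024-rollout") -> bool:
        # Hash the salted user ID so the same users are admitted on every
        # request; changing the salt reshuffles the sample.
        digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < percent


    # Route the admitted slice to the new code path:
    if in_limited_release("user-1234"):
        print("serve the new feature")
    else:
        print("serve the stable release")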