We want you to write more tests. Yes, you. You've already been told that tests are the safety net that protects you when you need to refactor your code, or when another developer adds features. You even know that tests can help with the design of your code.

But, although you've read the books and heard the lectures, maybe you need a little more inspiration, tips, and prodding. And you need it to be in a place where when you see it, you can't ignore it.

That's where we can help. We're the "Google Testing Grouplet," a small band of volunteers who are passionate about software testing.

We're unveiling the public release of "Testing on the Toilet": one of Google's little secrets that has helped us to inspire our developers to write well-tested code. We write flyers about everything from dependency injection to code coverage, and then regularly plaster the bathrooms all over Google with each episode, almost 500 stalls worldwide. We've received a lot of feedback about it. Some favorable ("This is great because I'm always forgetting to bring my copy of Linux Nerd 2000 to the bathroom!") and some not ("I'm trying to use the bathroom, can you folks please just LEAVE ME ALONE?"). Even the Washington Post noticed.

We've decided to share this secret weapon with the rest of the world to help spread our passion for testing, and to provide a fun and easy way for you to educate yourself and the rest of your company about these important tricks and techniques.

We'll be putting episodes on this blog on a regular basis and providing PDFs so you can print them out and put them up in your own bathrooms, hallways, kitchens, moon bases, secret underground fortresses, billionaire founders' Priuses, wherever. Send your photos and stories to TotT@google.com and let us know how Testing on the Toilet is received at your company.

I've worked for a long time in the software testing/QA engineering field (every company seems to call it something different, with different expectations of each, but that just leads to my point). My point is that there needs to be a bright line between QA and development. No doubt, developers need to be good at writing good unit tests. By the same token, good QA/testers need to be good at development.

But the two should remain separate. Go to any mature industry's engineering department and ask how much of the release testing is done or defined by the actual development engineers. The answer is that you'll be laughed out of the office. There's good reason for this.

This is a sign of the immaturity of the software development industry. We're getting there, though. I've worked for a couple of the truly big Seattle software interests, and they are slowly getting more mature in how they deal with QA. The key is this: however good developers are at writing unit tests for their own work, SDEs are not QAEs, and the distinction should not be blurred if you want to maintain true quality.

I think you may be missing a certain level of irony in your "positive comments"... You may also be missing a certain level of fibre from your diet if you are regularly on the loo long enough to read through so much...

To respond to Joe #302: You're wrong. I'm a developer in a Fortune 500 company and I test everything I do. By the time it goes to alpha test, only field anomalies break my code. So when you say "SDEs are not QAEs, and the distinction should not be blurred, if you want to maintain true quality," you're, well, full of crap. Your statements should be flushed. Not everyone has poor software developers working for them. A true programmer can take a project from cradle to grave, or more aptly from napkin scribblings to deployment, without problems. My software has run at 3,000 sites for over three years, with only 2 bugs related to errors on my part. Proper software engineering includes heavy testing during development. I also know of one company (Microsoft) that spends billions on everything you described, and, well, let their end product speak for itself. It does not support your conclusion. I rest my case. -T-

To respond to Joe #302: I think you may want to rethink your position and help usher your company into quicker development. Everyone in your company should be involved in QA activities, because quality is everyone's job. If developers learn from their own mistakes, they should be constantly improving, and should not have to bother you with errors they could fix themselves. The QAE should worry more about integration testing than unit testing, but the SDE should also ensure that their code is scalable and will not fail due to an inability to foresee problems.

To respond to Major Tom: I commend you on having only 2 bugs in your portion of the code across a 3,000-site deployment. However, I wonder if you're an anomaly, and I'm concerned that this methodology sets a dangerous precedent. The approach of a development engineer should be, and in the best companies is, fundamentally different from that of a test engineer. The DE is solving a positive functional problem, while the TE is addressing the validation and exceptions associated with the DE's solution. When presented with requirements from (in most cases) Marketing PMs, the DE considers how to make the feature/product work, while the TE considers where and how the feature/product will be deployed (functionality and performance testing) and under what circumstances it might break (stress, robustness, and negative testing). As much as we try in our smaller organization to unite the DE and TE mindsets to save headcount and arguably to improve unit testing prior to the hand-off to TEs, they will always remain firmly divided.

I had very good success writing my own tests as I was writing libdomainkeys. As I wrote the code, I would create the test. Every branch was labelled with a comment naming the test which forced that branch to be executed. Too bad our text editors don't (usually) support hot-linking between source files.
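The branch-labelling convention described above can be sketched in a few lines. This is a hedged illustration, not code from libdomainkeys (which is a C library); the function and test names are invented for the example. Each branch carries a comment naming the test that forces it to execute, so a reader can jump from code to test by searching for the name:

```python
# A sketch of the convention above: every branch is labelled with a
# comment naming the test that forces that branch to be executed.
# The names here (classify, test_classify_*) are hypothetical.

def classify(n):
    if n < 0:
        # covered by: test_classify_negative
        return "negative"
    if n == 0:
        # covered by: test_classify_zero
        return "zero"
    # covered by: test_classify_positive
    return "positive"

def test_classify_negative():
    assert classify(-5) == "negative"

def test_classify_zero():
    assert classify(0) == "zero"

def test_classify_positive():
    assert classify(3) == "positive"
```

Even without editor hot-linking, a plain text search for the test name (e.g. `test_classify_zero`) lands on both the branch and the test that exercises it, which is most of what the commenter is asking for.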

I think a point to be made here is that engineers of any sort tend to get accustomed to the idea or algorithm or whatever they've been working on. They see it as working just fine. The QA people are there to ensure that your idea of fine agrees with everyone else's. Sure, maybe it doesn't have bugs, but maybe the interface isn't as intuitive as it could be, or whatever. The point is, the more people who look at it before production, the more likely it is to be useful and usable. If you want to streamline your development timeline, you include QA people in the development cycle to add their input as the project progresses. That way, problems can be identified and resolved before they become major. But you already knew that - it's the basic concurrent engineering model you learned about in college. I hope.

Kudos on brainstorming the idea of testing bumper stickers (Harry Robinson style) and tests in toilets.

Here are my 2 cents:

Toilet stickers:

1. The better you code, the less it stinks.
2. You expect the toilet to be clean, just as we expect your code to be.
3. You mess, we spot, you clean.
4. You dropped the paper roll by mistake. Fix it!
5. Thanks for remembering that you need to go back and fix the bug I found.
6. for ($i=0,$i< ($i--1);++$i) - I'm not reminding you of your coding error :)
7. 234/8 - No! It's a cricket match score, not 234 bugs per 8 KLOC.
8. I am your friend, but unfortunately our customers aren't.
9. The more time you spend here, the less time you distract me and the more bugs I find.
10. Did you come running here to think about how you'll explain why you missed that bug?
11. Don't believe me when I say your code is working fine so far. I just want to give you a bigger surprise.
12. I heard you say, "Please leave me alone," and that is why I am not reminding you of the bug that took you the longest to fix.

Tests:

1. An end user tried running our program on a platform we no longer support. Should he get an error message - "You are not supposed to do that"?
2. "Arggh! The page is corrupted." - Does this mean you were unable to handle an exception?
3. I am going on a 3-week vacation, and when I come back, I will have a report that my script generated from the reliability test. Are you sure I won't see anything in RED?
4. "Sorry, no donut for you" - Who got the donut then?
5. The voice kept breaking when I was using a dial-up connection. Did ._ _ _ know?
6. I hear too many click sounds when I log in. Do I have an option to hear fewer clicks?
7. I refer to James Bach's 36 heuristics to test. Do you use them to code?
8. When the firewall is ON, your code isn't.
9. Proxies do not bypass bugs.
10. I tried closing the window partway through. It goes away smoothly, without the error message you expected to see.
11. I am always amazed at your coding skills, but not at the way you build your code.
12. His code never merges with anyone's. Beware if you are merging with him.
13. I found a bug in your unit test script, and hence I know what to test.
14. Last time, my developer forgot to enable the logger. He spent a week convincing me to reproduce the issue.
15. Psst! Don't tell anyone: I look at our competitor's known issues and test them on our product. This helps me catch bugs faster :)

I have a simple question as a former programmer, now a business guy sitting between dev and QA.

Why not put pay-for-performance incentives on both sides for each testing release?

In other words, each QA person gets $10 for each bug they find (up to 10, or whatever). Each development person gets $10 for the number of bugs not found under a certain target (10, or whatever).

Seems like an easy way to get people more motivated about the whole testing process.

This is a horrible idea.

It will not improve quality. It will not improve relations between business, developers, and testers. In fact, it will make relations worse within the company and can lead to real bug escapes that impact customers.

This will only lead to finger pointing and time wasted chasing unimportant and duplicate "bugs".

As a tester, I have often had to decide whether to report a bug as one or as 100,000. I don't know if the 100,000 errors my automated test found are due to 100,000 different problems in the code or just one. I could report them separately, or I could report them in groups, or I could pick up the phone and talk to a developer who could help me determine how the bug should be reported.

And what if the bug exists because a QA person did a poor job of reviewing the requirements? I could easily provide input to requirements that I know will lead to bugs just to get a bonus.

Not all bugs are due to a developer making a mistake in their piece of code. Some are due to bad requirements given to the developer. Some are due to bugs someone else created. If one developer's code does not properly interact with another developer's code, who is to blame? If the issue is due to bad documentation and requirements, do we blame QA?

And then we could argue about what is and what is not a bug.

There was a Dilbert cartoon many years ago about the idea of rewards for fixing bugs: "Now, I'm going to code me a new minivan."

I also had a question about the regularity of this. I think it's a great idea, but I've had a hard time finding the collection of these. Is there a place for people to submit them? I could come up with a few episodes.

Seeing that Testing on the Toilet is producing something weekly, I thought I would have a go at doing something similar with Testing on the Bog. So if you're looking for more testing ideas to place on your toilet door, please have a look.