I'm a software developer. What do I need to know about software testing (apart from TDD, unit testing, and the like) to make my software work as it should? For example, there are cases when TDD is not applicable, so I need to do simple smoke testing of my code every time it is deployed or changed. To prevent silly mistakes, I should probably have some simple test cases and check manually that everything works properly. It's better to avoid bugs than to fix them, right? That's when basic software testing skills come in handy. What are the basics I need to know?

"For example, there are cases when TDD is not applicable"? Really? Can you explain this? "I need to do simple smoke testing of my code every time it is deployed/changed"? Why not simply run all the tests? Why only run a "smoke test"? "check manually"? Why would you do that when you have automated unit testing? This question is quite difficult to understand. Can you explain what you want to know?
– S.Lott Feb 14 '12 at 20:58

Prototyping or other exploratory-style work, such as an architectural spike; user interfaces such as web pages or emails.
– altern Feb 19 '12 at 9:38

Prototypes don't go into production. There are no "silly mistakes" that matter. And why would you not test the prototype? I don't get what "basics" you're looking for. You've already enumerated them. Test everything. Every time. Except when it doesn't matter. What else do you think there is?
– S.Lott Feb 19 '12 at 16:44

"Why would you not test the prototype?" That's my point: the prototype should be tested, and it's better if that's done by the developer who created it. He understands what's under the hood, so he knows what should be tested and what shouldn't. The important thing is that the way a developer writes code affects how the app should be tested.
– altern Feb 20 '12 at 9:50

3 Answers

A book that was recommended to me, and which I relied on as a professional software tester, is Testing Computer Software by Kaner, Falk, and Nguyen. Topics include the fact that you simply can't test everything, how to report and analyze bugs (more useful for a larger team or when dealing with bug reports from customers), equivalence classes and boundary values, localization, user manuals, legal consequences, and managing a testing effort. It covers a broad range of areas relevant to testing, and it introduces and defines the key terminology. If you're going to spend a lot of time testing, or even just working with testers, I recommend this book.

The most useful skill I have ever encountered in a developer/tester is the ability to "forget" all assumptions about how the user is supposed to use the software. I won't say "dumb it down", because testers have to be very intelligent about their job, but a developer testing their own work must abandon any preconceived notions of how they envisioned the end user interacting with it.

For instance, I know a tester whose first action when opening any sort of "record maintenance" screen is to immediately hit "save". Your average developer might think that behavior is so ridiculous that they don't build in appropriate error checking for the case where NOTHING was specified by the user. Navigation testing is another common soft spot, especially in web apps: does your website react properly to the "Back" button? Many, in fact, do not, especially if they rely heavily on ViewState and postbacks.
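A minimal sketch of covering that "hit save with nothing entered" case as an automated test, assuming pytest; save_record and ValidationError are hypothetical stand-ins for your own code:

    import pytest

    class ValidationError(Exception):
        """Raised when a record fails validation."""

    def save_record(fields: dict) -> None:
        # Hypothetical save handler: reject a record with no data at all.
        if not any(value not in (None, "") for value in fields.values()):
            raise ValidationError("at least one field must be filled in")
        # ... real persistence would go here ...

    def test_save_with_nothing_entered_is_rejected():
        # The tester's first move: open the screen and immediately hit "save".
        with pytest.raises(ValidationError):
            save_record({"name": "", "email": "", "phone": ""})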

A good tester realizes that nothing can be assumed about the behavior of the end user. These are the testers that developers hate, because they'll come back with some edge case that is a pain and two-thirds to cover. But they're also the testers that help you produce rock-solid, reliable software. After a little while of submitting release candidates to a tester like this, you as a developer will begin to adopt some of the same mindset, writing automated tests for edge cases and incorporating "stupid" user actions into manual UI and end-to-end testing.

One more thing: TDD is almost never "not applicable". The only time you shouldn't TDD something is when there is no suitable testing framework for you to use, and that is only a valid reason for a very small set of "legacy" or "specialty" languages.

Forgetting all assumptions is excellent, essential advice. While working on a large software project, I realized that I had access to many experts. Sometimes the expert was someone else, sometimes it was myself. This expert knowledge is almost never distributed to customers, so it can't play a part in analyzing how to use and understand a piece of software.
– bneely Feb 14 '12 at 18:24

Well, first off, for avoiding "silly" mistakes, you can generally employ static analysis tools. These will help you catch things like failures to meet coding conventions, possible null dereferences, and so on. I would say that's your first line of defense in writing quality software.
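As an illustration (a sketch, not tied to any particular tool): a type checker such as mypy will flag a possible None dereference like the one below; find_user is a hypothetical function.

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        # Hypothetical lookup that may find nothing.
        users = {1: "alice"}
        return users.get(user_id)

    name = find_user(42)
    print(name.upper())  # mypy flags this line: name may be None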

Your second line of defense is your unit tests, which provide rapid feedback that your code does what it is supposed to and continues to do so as it's changed.
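A minimal example of that kind of fast-feedback test, using Python's standard unittest module; apply_discount is a hypothetical function under test:

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        # Hypothetical function under test.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.0, 25), 75.0)

        def test_rejects_nonsense_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()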

From there, you have the concept of integration/behavior tests that check whether the code actually fulfills your requirements. These are going to be your "smoke tests" that exercise your application in a more integrated state. I cannot overemphasize that I believe you ought to automate these. Firing up an application and clicking around randomly to get things to break (i.e. a "bug bash") is not a productive use of a skilled developer's time, in my opinion. You can always automate (that's what we do for a living). There are GUI exercise tools if need be, and you can test all the layers below the GUI as well. You can do load tests, long-running tests, randomized inputs with logged execution steps, and so on.
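To make "automate your smoke test" concrete, here is one possible sketch that launches the application and polls a health endpoint until it answers; the command, port, and /health URL are assumptions to replace with whatever your app exposes:

    import subprocess
    import time
    import urllib.request

    APP_CMD = ["python", "app.py"]               # assumption: how your app starts
    HEALTH_URL = "http://localhost:8000/health"  # assumption: a health endpoint

    def smoke_test(timeout: float = 30.0) -> bool:
        proc = subprocess.Popen(APP_CMD)
        deadline = time.monotonic() + timeout
        try:
            while time.monotonic() < deadline:
                try:
                    with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
                        if resp.status == 200:
                            return True
                except OSError:
                    time.sleep(1)  # not up yet; try again
            return False
        finally:
            proc.terminate()

    if __name__ == "__main__":
        print("smoke test passed" if smoke_test() else "smoke test FAILED")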

So, in the end, I'd say that the software testing skill that you need is how to automate tests of integrated parts of your code, and how to interpret the results of that automation. These sorts of repeatable, documentable testing efforts will be a lot more valuable to you than "try to break it" (not that this is a bad thing to do -- just not optimal for a developer to do it).

What if I don't have the chance/time/possibility to automate tests, or it's not really reasonable at the moment?
– altern Feb 14 '12 at 17:45

I don't know that automating a smoke test need take much time. It could be something as simple as a script that attempts to run 200 instances of your application.
– Erik Dietrich Feb 14 '12 at 17:55
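For what it's worth, a sketch of that "200 instances" idea; the application command line and the notion that a clean exit code means success are both assumptions:

    import subprocess

    APP_CMD = ["./myapp", "--selftest"]  # assumption: your app's command line

    failures = 0
    for i in range(200):
        try:
            result = subprocess.run(APP_CMD, capture_output=True, timeout=60)
            ok = result.returncode == 0
        except subprocess.TimeoutExpired:
            ok = False  # a hung instance counts as a failure
        if not ok:
            failures += 1
            print(f"run {i} failed")
    print(f"{failures} of 200 runs failed")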


There are automated test scripting hosts that can be invaluable in circumstances where you don't think you have time to develop a programmatic test. Simply set one to "record" and it will watch you; then you can "play back" the exact same set of actions, adding additional checks to ensure that various things occur. Voila: in just a little more time than it took you to run the test once manually, you have created an automated regression test.
– KeithS Feb 14 '12 at 18:24
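For a sense of what such a recorded-then-edited script ends up looking like once cleaned up, here is a Selenium-flavored sketch; the URL and element IDs are hypothetical:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8000/records/new")     # hypothetical page
        driver.find_element(By.ID, "save-button").click()   # the "save first" case
        error = driver.find_element(By.CLASS_NAME, "validation-error")
        assert error.is_displayed(), "expected a validation message on empty save"
    finally:
        driver.quit()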