Average time a programmer spends testing: 10 minutes. Average time a programmer spends debugging something he should have tested: 2.5 hours.
–
Craige May 25 '11 at 13:40


Does one really need to formalize testing, when 80% of all shops have no running tests at all?
–
Job May 25 '11 at 17:46

@Craige: Testing typically takes much more than 10 minutes. It might even take longer than the total time spent debugging. However, the time spent on testing is proactive (achieving comprehensive coverage, even though only a small percentage of tests will reveal defects), while the time spent on debugging is reactive (the defect jumps out at the programmer at the most inconvenient time, putting one under pressure to get rid of the bug, which often ends up introducing additional bugs as part of the fix).
–
rwong Dec 2 '12 at 9:26

10 Answers

Testing is meant to find defects in the code or, from a different angle, to establish to a suitable degree of confidence (it can never be 100%) that the program does what it is supposed to do. It can be manual or automated, and it comes in many different kinds: unit, integration, system/acceptance, stress, load, soak testing, and so on.
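To make the "manual or automated" distinction concrete, a minimal automated unit test might look like the sketch below (pytest style; apply_discount is a function invented for this example, not from the question):

    def apply_discount(price: float, percent: float) -> float:
        # Hypothetical function under test, invented for this example.
        return round(price * (1 - percent / 100), 2)

    def test_typical_discount():
        assert apply_discount(100.0, 25) == 75.0

    def test_zero_discount_leaves_price_unchanged():
        assert apply_discount(80.0, 0) == 80.0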

Debugging is the process of finding and removing a specific bug from the program. It is always a manual, one-off process, as all bugs are different.

My guess is that the author means that at Level 0, only manual tests are performed, in an ad hoc fashion, without a test plan or anything else to ensure that the tester actually tested the feature thoroughly and that the tests can be reliably repeated.

Debugging is an attempt to fix known and unknown issues by methodically going over the code. When you're debugging, you're usually not focused on the code as a whole, and you're almost always working in the back end, in the actual code.

Testing is an attempt to provoke an issue, through various ways of using the code, that can then be debugged. It's almost always done in user space, where you run the code as an end user would and try to make it break.

I agree, and I'd like to stress your point about "running the code as an end user would run it", just to highlight the over-emphasis people tend to put on automated testing and TDD. Particularly for web-based apps - what's more informative, code testing code, or people testing web pages?
–
MemeDeveloper Dec 2 '12 at 6:26

Debugging is a manual, step-by-step process that is involved, unstructured, and unreliable. By testing through debugging you create scenarios that are not repeatable, and therefore useless for regression testing. All levels other than 0 (in your example) exclude debugging, in my view, for this exact reason.

The Saff Squeeze is a debugging technique that is very structured, very reliable, not particularly involved and conceivably at least partially automatable. It achieves this by recognizing that there is, in fact, no difference between testing and debugging.
–
Jörg W Mittag May 25 '11 at 15:18
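For readers unfamiliar with it: the Saff Squeeze works by repeatedly inlining the code under test into a failing test, asserting on the intermediate values, and pruning whatever already passes, until a minimal failing test pins down the defect. A hypothetical sketch of one squeeze step (pytest style; the code and the planted bug are invented for this example):

    from dataclasses import dataclass

    @dataclass
    class Item:
        name: str
        price: int

    def parse_line(line: str) -> Item:
        name, price = line.split(":")
        return Item(name, int(price) - 1)  # planted bug: off by one

    def total(lines):
        return sum(parse_line(line).price for line in lines)

    # Step 0: the high-level test fails, but the defect could be anywhere
    # beneath total().
    def test_invoice_total():
        assert total(["book:10", "pen:2"]) == 12   # fails: 9 + 1 == 10

    # Step 1 (one squeeze): inline total() into the test, assert on the
    # intermediate values, and prune what already passes. The failure is
    # now pinned to parse_line; repeat until the failing test is minimal.
    def test_invoice_total_squeezed():
        items = [parse_line(line) for line in ["book:10", "pen:2"]]
        assert items[0].price == 10                # fails: got 9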

If your debugging is "unstructured, unreliable and manual", you are not doing it right! Or clearly we just use these two words to mean different things.
–
MemeDeveloper Dec 2 '12 at 6:23

In simple terms, a "bug" is said to have occurred when your program, on execution, does not behave the way it should; that is, it does not produce the expected output or results. Any attempt to find the source of this bug, to find ways to correct the behaviour, and to make changes to the code or configuration to correct the problem can be termed debugging.

Testing is where you make sure the program or code works correctly and robustly under different conditions: you "test" your code by providing inputs - standard correct inputs, intentionally wrong inputs, boundary values - and by changing the environment (OS, config files). Essentially, we can say that in the testing process you try to discover bugs and eventually "debug" them. Hope that helps.
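A sketch of that idea in code (pytest style; the fahrenheit function and its inputs are invented for this example):

    import pytest

    def fahrenheit(celsius: float) -> float:
        # Hypothetical function under test, invented for this example.
        if celsius < -273.15:
            raise ValueError("below absolute zero")
        return celsius * 9 / 5 + 32

    # Standard correct inputs plus the boundary value at absolute zero.
    @pytest.mark.parametrize("c, f", [(0, 32), (100, 212), (-273.15, -459.67)])
    def test_known_conversions(c, f):
        assert fahrenheit(c) == pytest.approx(f)

    # An intentionally wrong input should be rejected, not silently converted.
    def test_rejects_impossible_temperature():
        with pytest.raises(ValueError):
            fahrenheit(-300)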

In some systems - Smalltalk, for instance - there's no difference at all, because you can perform your write-test/run-test/write-code cycle entirely within your debugger.
–
Frank Shearar Dec 8 '11 at 9:29

@FrankShearar: It's probably no accident that the above paper was written by an old Smalltalker. The TDD cycle (which is of course also by Kent Beck) is basically a description of how Smalltalk code has been written since the dawn of time: write some example code in the workspace, let the debugger catch the no method exception, click on create method, write the code, resume execution (yay for resumable exceptions!), repeat.
–
Jörg W Mittag Dec 8 '11 at 12:15

Bugs are visible errors. Debugging is the process that starts after test case design. It is a more difficult task than testing, because in the debugging process we need to find the source of the error and remove it, so debugging can sometimes be frustrating.

Speaking in everyday, practical terms, I think it totally depends on the context.

In a medium-to-large team working to high or very high standards (think banking, military, large-scale, high-budget, or business-critical systems), I think "debugging" should clearly be "a result of testing", and the two are clearly very different things. Ideally testing leads to debugging (in a staging environment), and in production we need close to zero of either.

Testing is wide in scope, regular, and very formalised, while debugging is a particular process that happens occasionally, when there is a need to fix a particular failure whose cause is not obvious and requires a deeper investigation of the system's functioning and resultant outputs.

Here, in my mind, testing is something essential, while debugging is a specific tool needed only when the resolution of a failure is opaque.

I totally understand the obvious utility of TDD for large teams and/or systems that simply cannot afford to be "buggy". It also clearly makes a lot of sense for complex (often "back-end") systems, or where there is a high proportion of complexity in the code compared to the output. Then "testing" has a realistic chance of informing us when and why failures occur. Systems that do a lot of complex work and/or produce clearly measurable outputs are generally readily testable, and so testing is distinct from debugging. In these cases testing strongly implies a procedure-based, formalised method of confirming or disconfirming the match between expectations and actual output. Testing happens all the time, and occasionally informs us of the need for debugging.

It would be lovely if this were a ubiquitous truth; I'd love it if my dev cycles were delimited by a clearly defined binary output (red, green), but...

In my case (which is admittedly particular - working 98% solo on small-to-mid-sized, under-funded, web-based, data-focused corporate admin systems) I just really can't see how TDD could possibly help me. Or rather, "debugging" and "testing" are virtually the same thing.

Mainly, though, the use of the term "testing" implies, or at least closely relates to, the methodology of TDD.

I know this is a totally, utterly un-Zeitgeist, "shun the non-believer, shun, shun", despicably uncool thing to say. But thinking about my context, with a practical hat on, I just don't even vaguely, in my wildest imagination, see how TDD could possibly help me deliver more value for money to my clients.

Or rather, I strongly disagree with the common assumption that "testing" is a formal, code-based process.

My basic objection (applicable in my particular *context*) is that...

If I can't write code that works reliably, then how the hell am I supposed to write code that works reliably to test said presumably sub-standard code?

I have never seen any example or argument that (in my particular context) enthused me sufficiently to even bother thinking about writing a single test. I could be writing some laughably insubstantial testing code right now - maybe "does my repository return a User entity with Name == X, when I ask it for exactly - and only - that?" - but there's probably more utility in me writing this streaming, maybe-the-internet-really-is-just-pure-foolish-spouting-self-gratifying-wildly-under-informed-blood-boilingly-ignorant-wastefully-silly trash. I just feel the need to play devil's advocate here. (I'm kind of hoping someone will show me the light and convert me; maybe that would end up giving my clients better value for money?)
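For what it's worth, the "laughably insubstantial" test described above might look something like this sketch (the in-memory UserRepository is invented for this example, and the test mostly restates its own setup):

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str

    class UserRepository:
        # Hypothetical in-memory repository, invented for this example.
        def __init__(self, users):
            self._users = {u.name: u for u in users}

        def find_by_name(self, name: str) -> User:
            return self._users[name]

    def test_repository_returns_user_named_x():
        repo = UserRepository([User(name="X")])
        # Asserts exactly (and only) what the repository was just told to hold.
        assert repo.find_by_name("X").name == "X"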

Arguably "debugging" sometimes is the same as "testing". By this I really mean that in my daily working life I spend at least a third of my time playing about with the local version of my system in different browsers, desperately trying various different wacky things out in an attempt to break my work and then investigating the reasons why it failed and correcting them.

I 100% agree with the obvious utility of the TDD mantra "red/green/refactor", but for me (working in low-to-mid-budget, solo-dev RIA land) I would really, really love for someone to show me how I could possibly, logically, and realistically get any additional value from writing more (just as potentially flawed) testing code than I do from actually interacting with the full (and essentially only) output of my efforts, which is essentially bound to real human interaction.

For me when developers talk about "testing" it generally implies TDD.

I try to code as if there were tests, and I think all the patterns, practices, and trends that testing-focused development has encouraged are fantastic and beautiful. But for me, in my little world, "testing" is not writing more code; it's actually testing real-world outputs in an approximately realistic manner, and that's virtually the same as debugging. Or rather, the active part here is the "debugging", which is a direct result of human, output-centric, non-automated "testing". This is in contrast to the generally accepted view of "testing" as something automated and formal, and "debugging" as something human, ad hoc, and unstructured.

If the goal is really value for money/effort, and you're making web-based interactive applications, then the output of the effort is the web pages and, essentially, how they react to human input - so "testing" is best achieved by testing those web pages through real human interaction. When this interaction leads to unexpected or undesirable outputs, "debugging" occurs. Debugging is also closely related to the idea of real-time inspection of program state. Testing is generally associated with automation, which I think is often an unfortunate association.

If the goal is really value for effort, and automated testing is efficient and highly beneficial, while debugging is either just an output of that testing or a poor substitute for it, then why is the second-most-visited website in the world (Facebook) so often riddled with bugs that are blindingly obvious to users, but clearly not to the testing team or the testing code?

Maybe it's because they're concentrating on the reassuring green lights and forgetting to actually use the outputs of their work?

Do too many developers think testing is something you do with code, and debugging is something you do occasionally with the IDE because an icon turns red and you can't work out why? I think these words carry unfortunate value judgements, which generally obscure the practical reality of what we should focus on to close the gaps between expectations and outputs.

The testing maturity model that you have listed is a description of the mentality of the development team.

What the list implies, without saying explicitly, is how the change in mentality affects the way testing is conducted.

As a development team advances to the next level, the scope of testing is broadened.

At Level 0, no testing is done, because the team thinks it is not necessary.

At Level 1, testing is done to provide a nominal coverage of basic functionalities.

At Level 2, testing is broadened to include everything in Level 1, plus destructive testing: a dedicated test team, who have access to all the information that developers have (including source code and binaries), try to find bugs that can be triggered from a user's role.

At Level 4, the goals of software testing are well understood by every person involved, including customer-facing IT staff. Thus, IT staff will be able to provide feedback about what scenarios to test for, improving the overall risk coverage of the testing effort.

(Disclaimer: I do not have access to the textbook, therefore my terminology may be incorrect.)