One of the things I try to balance when writing test scripts for manual testing is trusting that the person executing the test (who may not always be me) is competent enough to know how to do certain tasks, versus assuming that the person executing the test needs instruction in how to do them.

When working on a brand new application or feature where I am involved in the design and requirements meetings, it's easy to forget that assumptions are made in those meetings that aren't always communicated across the board. When I've had to hand off the test cases to someone else, I usually end up so deeply involved in the testing, because of this inside knowledge, that I feel like I should throw up my hands and take over the testing myself.

But if I'm too detailed in my test scripts, I find I'm spending a LOT of time documenting the test steps and being rather nit-picky about little details, and I end up getting behind on a project where I don't have as much time to test as I would like.

Is this a problem, then, with the amount of detail in the step-by-step descriptions of the test script or is this something that should be documented better in requirements and specifications? At what point does adding detail to a test script become a detriment to testing?

12 Answers

Good question. As others have answered, the amount of detail will depend upon your particular context. When I'm asked that question by clients, I've started drawing them a simple 2×2 matrix.

That's a good question. I wish there were a hard and fast answer, but you need to calibrate the level of detail with the capabilities of your tester. I think you should always describe the intent of the test case, i.e. the requirement you are testing. For someone with more experience, you can leave the rest to the tester's imagination. For someone with less experience, you may need to spell things out, with the accompanying maintenance costs.

I prefer to err on the side of too few instructions rather than too many, because the specific details of how you test something may change more quickly than the intent of the test case. For example, it may be important for an individual's first name to be up to 20 characters in length, but the way you navigate to the relevant dialog box will change over time.
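As a minimal sketch of that separation (the requirement ID and the steps below are hypothetical), the intent is the durable part and the navigation is the part that churns:

```python
# Hypothetical test case: the intent/expected parts are stable; the
# navigation steps are the part most likely to change over time.
test_case = {
    "intent": "First name field accepts up to 20 characters (REQ-112, made-up ID)",
    "steps": [
        "Navigate to the user-profile dialog",       # volatile: UI paths change
        "Enter a 20-character first name and save",
        "Attempt to add a 21st character",
    ],
    "expected": "The 20-character name is saved; a 21st character is rejected",
}
```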

For better or worse, some organizations may be forced to use detailed test instructions for regulatory reasons.

In addition to the other answers, it also depends on the current members of the team and the kind of members you intend to hire in the future. Current testers will be well aware of the basic interaction with the system, and a new hire should be given appropriate training on the system, so every single step need not be documented in a TC.

It sounds like a good idea to document at such a level of detail that a person who doesn't know the system can come in and test it. But is your application really going to be tested by such a person? Time a tester spends over-documenting is also time wasted.

For TCs against functionality and requirements (documented or not), I prefer TCs that say "1. Verify that in a given condition, the application behaves a certain way", assuming the tester knows how to create the scenario. Not a rule, just a strong preference. Based on experience and judgement, additional steps can be added if required.

Documenting TCs against recently discovered defects, however, requires more detail, as you are trying to communicate something unusual or unexpected.

Bottom line: the documentation should not be more complex than your application, nor more verbose than the specifications.

For better or worse, I was taught years ago to write a test case as if a tester were walking in off the street to execute it, so my test cases tend to have a lot of detail. However, I have executed others' test cases so buried in detail that I had to take precious time to sort out the meaningful bits and stick to those. That is very frustrating, especially under the pressure of due dates.

(I will specifically address only the steps of a test case, understanding that there is much more to a test case and script than just the steps.)

To answer your question, effective test case steps should include the following information, kept as succinct as possible (a rough sketch of this structure follows the list):
- The Action/Description needs to have the URL to navigate to, path to feature, action required.
- The Input Data needs to have example (or real) data for input
- Expected Results needs to have the output results and any changes in the screen layout
- Actual Results - the actual outcome compared against the expected results
- Pass/Fail with a date
- Comments or Notes - rarely used by me; this is where I put known-bug information, or the logic behind why to do x instead of y (in other words, the program was not very intuitive in its design at this step and clarification would be appreciated)
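As a rough sketch only (this record structure is my illustration, not something prescribed above), those fields map naturally onto something like:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestStep:
    action: str                    # URL to navigate to, path to feature, action required
    input_data: str                # example (or real) data for input
    expected_result: str           # expected output and any screen-layout changes
    actual_result: str = ""        # actual outcome, compared against expected_result
    passed: Optional[bool] = None  # Pass/Fail
    executed_on: str = ""          # date the verdict was recorded
    notes: str = ""                # known bugs, or why to do x instead of y
```

An execution pass then just fills in actual_result, passed, and executed_on.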

In another post, I lost a few points because of my personal preference for inserting a short comment about the logic behind a step (or input, or expected result). Perhaps I work with a program that is not very intuitive and needs explaining here and there. I'm not sure, but the "target audience" of the test case is who I am writing for, and I would rather not presume that audience already has the small nugget of information that will make a big difference when executing that particular step. My personal preference, you could say.

Agreed. There's all the data that is set up behind the scenes, configuration requirements, environment requirements, etc. That stuff needs to be documented as well. I guess what I'm asking about is stuff like "Click on the OK button" as a specific step to be executed after a step for entering data on a form, rather than just including it in the data-entry step as "Enter and save data XYZ in the form". That's a trivial example, but it's the kind of thing I'm aiming at in this question.
– TristaanOgre May 13 '11 at 16:01

In this example, I would include it in the step for data entry. A single step for "Click on the OK button" is a waste to me.
– Laura Hensley May 13 '11 at 16:47

I have tried both ways over the years, and I still have doubts each time.
In the end it comes down to who is going to use the document: if you are sure it will be someone with in-depth knowledge of the system and good testing skills, you can easily give only the headlines.
An exception is when detail is required by a standard or certification, e.g. medical equipment or certified software such as Windows drivers.

If you need to be sure every option/path is tested (click the button, use the hot key, tab to it & press Enter), then you want to detail every step. And unfortunately, sometimes that tedious detail is necessary.

On the flip side, for a test focused on something else, I find that putting in specifics can narrow the test and prevent problems from being found.
I'm a keyboarder, and I have missed mouse-click errors as a result. If I write "tab to this button & press Enter" in my test plan, I've narrowed it down so much that the next person to pick up and execute the plan will also miss the mouse-click error. Sometimes, encouraging other testers to "do their own thing" when navigating through the less relevant steps of the plan is better.

The only drawback to that is that you might still run into the problem you experienced. Most of the time, the person testing and the person writing the tests are the same person, so that person will go through on the keyboard and not find anything, simply because the mouse clicks weren't run. In that case, perhaps two sets of test cases would be necessary... one for mouse, one for keyboard.
– TristaanOgre May 17 '11 at 13:06

Very good point. In our case we write tests and hand them over to an offshore testing group. But if I were running my own tests, I would probably need to include the different interaction paths to remind myself to check them all.
– CKlein May 20 '11 at 13:23

You need enough detail to ensure that all the requirements of what you're testing can actually be tested. And that's where it should stop. To quote Einstein, "as simple as possible, but no simpler." The more specific you are, the more likely you are to become so focused on one part of the requirements that you miss out on the others.

There is an exception to this, and that is known bugs. If a bug was recently fixed, you will want to put some additional testing around it, even though the requirements haven't changed (which, by the first rule, would suggest you don't need anything more). I would include the steps to reproduce the bug and make sure it can no longer be reproduced. Having a bug go public can be embarrassing, but not nearly as bad as saying you've fixed a bug when it actually still exists.

Now, the above is (what I would call) an idealist view on it. It doesn't take into consideration things like common sense and intuition.

tl;dr Every test case should have enough information to show a requirement being met (reference the requirement in the documentation) or a bug being fixed (reference the bug in the documentation), but no more.
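As a loose sketch of that traceability (the IDs and field names here are invented for illustration):

```python
# Hypothetical regression check tied to a fixed defect rather than a requirement.
regression_check = {
    "covers_requirement": None,     # not tied to a requirement...
    "covers_bug": "BUG-4711",       # ...but to a recently fixed bug (made-up ID)
    "repro_steps": [
        "Log in as a user with an empty profile",   # the original repro steps
        "Open the profile page",
    ],
    "expected": "Page renders normally; the crash from BUG-4711 does not recur",
}
```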

As most other posts have pointed out, the amount of detail included in a manual test plan depends on the testing experience and abilities of the tester. Prior to recent organizational changes, all of the tests that were written by QAs within my department were also executed by fellow QAs within the same department.

What this meant was that we could assume the tester would not need the test writer to map out every navigation step and write the test like a user guide. However, we have recently shifted to Scrum, where teams can take on defects for any product in the organization and each task in completing a Change Request becomes a "team goal". As a result, we often find ourselves in situations where a QA without specific knowledge of the product being tested is executing the test. We also often have developers executing tests to help reach our "team goals", since developers outnumber QAs 3 to 1 on my team.

What I quickly learned is that when you write tests at such a high level, as soon as someone less familiar with testing procedures or with the product attempts to run the test, you find yourself so involved that you might as well be running the test yourself. I now include much more detail in the testing steps, as well as any prerequisites such as data setup, scripts for data verification, etc. The extra work during test writing saves hours of time later on.

It is always good to go for detailed steps. I have seen testers run test cases without any knowledge of the product; they simply execute the pre-written test cases and fill in the results.
This helps those who don't have product knowledge, or when you want to outsource test case execution.

When writing scenarios using a header+steps convention, I usually put preconditions, actors, and involved system components in the header section, plus a brief summary and, for more complex scenarios, a "dictionary" of entities used in that particular scenario.

In writing regular steps I try to follow a simple pattern for every step:

Action:

One sentence: <achieve the goal / do something> by following (micro)steps <list>

Result:

One sentence of summary, verified by <brief checklist of a few key factors that validate the result>
This has the following advantage: new testers/users learn not only what has to be done but also how, while an experienced tester can focus on the logic of the test, skipping the details and intuitively identifying gaps in scenario logic or test coverage.
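To make the pattern concrete, here is a hypothetical step written that way (the feature and the checks are invented for illustration):

```python
# A made-up step following the Action/Result pattern above.
step = {
    "action": (
        "Create a new customer record by following: "
        "open Customers > New, fill the mandatory fields, click Save"
    ),
    "result": (
        "New record is persisted, verified by: confirmation message shown, "
        "record appears in the customer list, record ID assigned"
    ),
}
```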

Good question. In my experience, whenever you are asked to write test cases for manual testing, some documents need to be in hand first, especially if you are writing test cases during project planning and design. I feel better if I have the project requirements document (PRD) and the project workflow; with those in hand, I can proceed to write test cases. Communication between the client and the QA analyst is also very important: close communication helps the analyst understand the client's requirements.

I think that experienced testers should write less detailed test scripts. This allows them to spend more time on better test coverage. I also think it's good for experienced testers to both write and execute their own scripts; during this process they can identify gaps. Passing scripts to different testers slows down the process, with them asking questions about the scripts themselves. Also, the executors are just following pre-written scripts and don't always see the big picture. What are others' views on both writing and executing your own scripts? For UAT I agree the scripts should be detailed and executed by a different person, but I am questioning system testing.

Hi @Dave, you've got several concepts here, as well as a separate question (which is not a good fit here, because it's opinion-based, but could be rephrased as "What are the benefits and drawbacks of writing and executing your own scripts vs handing scripts to different testers?"). Perhaps you could ask that as its own question - you might get a better response than tagging onto a 2-year-old question.
– Kate Paulk Sep 16 '13 at 19:44