I am preparing an initial test plan document for a new project. The project involves several layers of hardware and software and requires mainly black-box testing, but also some level of grey-box testing (getting readings from within the code, but without actually reading or modifying the code).
The goal of this document is to assess the feasibility of the project from a testing point of view: required test equipment, time, personnel, needed automation, etc.
The approach I am going to use is to start with a table with test types on one side (Functional, Performance, Stability) and modules/sub-modules on the other (GUI, computation engine, API from A to B, etc.), then fill each cell with a high-level headline for the tests. When done, the table gives a pretty good idea of the project.
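Roughly the kind of table I have in mind, sketched out in code (module names and headlines below are just placeholders, not from the real project):

```python
# Test matrix: test types down one axis, modules across the other,
# each cell holding a high-level test headline. All names are placeholders.
matrix = {
    "Functional": {
        "GUI":                "All controls reachable, input validation",
        "Computation engine": "Known-answer tests against reference data",
        "API A->B":           "Contract checks for each call",
    },
    "Performance": {
        "GUI":                "Responsiveness under load",
        "Computation engine": "Throughput/latency vs. targets",
        "API A->B":           "Sustained request rate",
    },
    "Stability": {
        "GUI":                "Long soak run with random navigation",
        "Computation engine": "24h run with memory/leak monitoring",
        "API A->B":           "Reconnect/retry behaviour",
    },
}

# An empty or missing cell is a visible gap in the plan.
for test_type, row in matrix.items():
    for module, headline in row.items():
        print(f"{test_type:12} | {module:18} | {headline}")
```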

I tried using mind maps in the past; they were great for brainstorming but quickly lost focus.

5 Answers

If you have the space you could use index cards. This is more suited to Agile/Scrum sprints and definable tasks, but if you want visibility, nothing says it like index cards and tape on a wall.

It sounds more like you want a Test Matrix of some kind, to be able to map between test types and phases; there are lots of ways to do tables like that (Word, Excel, Project, etc.). As long as they are readable the tool doesn't always matter. In the past I've used Excel for this sort of thing, as it allowed me to manipulate the information easily, plus do counts on summary sheets to know how many tasks/tests were in each phase.
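As a rough illustration of those summary counts outside Excel (the column names here are invented, assuming one row per planned test exported from the matrix sheet):

```python
import csv
from collections import Counter

# Assumes a CSV exported from the matrix sheet with hypothetical
# columns "Phase" and "TestName", one row per planned test.
with open("test_matrix.csv", newline="") as f:
    rows = list(csv.DictReader(f))

per_phase = Counter(row["Phase"] for row in rows)
for phase, count in sorted(per_phase.items()):
    print(f"{phase}: {count} tests")
```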

+1 It's incredible. Excel is simultaneously the worst tool and best tool for so many things. It's the worst because we really abuse the idea of a spreadsheet when there are tools out there that do things (like this, and many others) so much better. At the same time, it's the best tool because of how versatile it is and how effective we can be in it, despite the availability of 'better' tools.
– corsiKa♦ May 12 '11 at 17:33


+1 to Excel. And if you are a "power user" Excel is even better.
– Rsf May 12 '11 at 17:44

I do it the same way you do - list broad areas like functionality, performance, security, then list pieces and approaches for each, then break those down into test cases if necessary.

In addition, I like to add a "goals" and "non-goals" section sometimes, especially where there are legacy tools or APIs that I am not going to explicitly test. Sometimes I'll get into test planning and start checking how that external API deals with zeros; writing the non-goals out first helps me avoid this time-sink and gets me focusing instead on "How would we handle a weird response from this external API?", which is back in scope.
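As a minimal sketch of what that in-scope question can look like as a test (`get_price` and `fetch_rate` are hypothetical stand-ins for your own code, not a real API):

```python
from unittest.mock import patch

import pytest

def fetch_rate():
    raise NotImplementedError  # stands in for the real external-API call

def get_price(quantity):
    rate = fetch_rate()
    if rate is None or rate <= 0:
        raise ValueError("bad rate from upstream")
    return quantity * rate

def test_zero_rate_from_upstream_is_rejected():
    # Simulate the "weird response" without touching the real API.
    with patch(f"{__name__}.fetch_rate", return_value=0):
        with pytest.raises(ValueError):
            get_price(10)
```

This tests our handling of the weird response, not the external API itself, which keeps the non-goal intact.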

I usually do this in a wiki, with headings for the broad areas, then sub-headings for the pieces underneath that, and a list of the types of tests (manual exploratory testing, scripted manual testing, automated integration tests, E2E tests, automated UI tests, etc.) with (possibly) test cases in a list under that. I tend to be a hierarchical thinker, so this works well for me.

I like using wikis for test plans because they are great living documents, and because I can have a Q&A section at the bottom whenever I run into something unclear, and developers can just edit it directly to give me answers. Having answers in a public place in writing does wonders for keeping everyone (devs, QA, business owners) on the same page.

What objectives are you seeking to accomplish through testing? If it is improving the quality and rate of delivery of the solution, consider the following.

First, look at the individuals and interactions around these tests. Are there multiple teams, or just one? Which teams and stakeholders are up/downstream from the product/module, and what do they want to have/see?

How do you achieve this? Look for seams. Seams are places where you can insert tests. They exist between modules (as you've identified), between teams, and between steps in the acceptance/sign-off chain.
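For example, a seam between two modules can be as small as a function boundary where a test double can be substituted (all names below are hypothetical):

```python
# The seam is the `engine` parameter: the real computation engine in
# production, a controllable stand-in under test.
class FakeEngine:
    def compute(self, data):
        return sum(data)  # deterministic stand-in for the real engine

def summarise(data, engine):
    result = engine.compute(data)
    return {"input_size": len(data), "result": result}

def test_summarise_reports_engine_result():
    report = summarise([1, 2, 3], engine=FakeEngine())
    assert report == {"input_size": 3, "result": 6}
```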

For each of these, look at how tests can facilitate collaboration between the two sides of the seam, e.g. design sessions when changing interfaces, or coming together to write BDD scenarios for sign-off.
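A sign-off scenario doesn't need a full BDD toolchain to start with; even a plain test structured as given/when/then can serve as the shared artifact (the pricing rule and names here are invented):

```python
import pytest

def price_order(cart):
    # Stand-in for the real pricing entry point; hypothetical rule:
    # coupon SAVE10 gives 10% off.
    discount = cart["total"] * 0.10 if cart.get("coupon") == "SAVE10" else 0.0
    return {"total": cart["total"] - discount, "discount": discount}

def test_discount_applied_at_checkout():
    # Given a cart with a valid coupon
    cart = {"total": 120.0, "coupon": "SAVE10"}
    # When the order is priced
    priced = price_order(cart)
    # Then the 10% discount appears on the result
    assert priced["discount"] == pytest.approx(12.0)
```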

Then consider what constitutes working software and craft what the Scrum community calls the "definition of done". This should include performance constraints and any deployment, testing, or automation requirements (and, if necessary, policies).

Then attempt to apply this definition of done to each feature (or user story if you have them) and see where the gaps are in terms of instrumentation. This will provide a roadmap for what tests need to be written.
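One trivial way to make that gap analysis concrete (the definition-of-done items and feature names below are invented):

```python
# Definition of done as a checklist; each feature records which items it
# can currently demonstrate. The missing items are the test roadmap.
DEFINITION_OF_DONE = {"functional tests", "perf baseline", "deploy script"}

features = {
    "login":         {"functional tests", "deploy script"},
    "report export": {"functional tests"},
}

for feature, done in features.items():
    gaps = DEFINITION_OF_DONE - done
    if gaps:
        print(f"{feature}: missing {', '.join(sorted(gaps))}")
```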

Using the instrumentation to measure stories in terms of functionality and performance is the next step on this chain.

By basing your tests on what has emerged from the actual definition of done and how to measure it, you'll have created a lean set of tests. These will be the most able to change if (when) the definition of done changes. Also, tests per feature mean you can handle each new feature based on its needs, rather than limiting what new software can be created based on the testing methodology.

In summary, take a step away from the tools, and see how your testing can tie the project together, lower the cost of delivering software, and improve the chances of successful delivery.

There are two parts to this: understanding the project well enough to test it, and planning the test process.

There are lots of ways to familiarize yourself with a new project. You should read as much of the project documentation as you can. If there's a prototype, try it out. If you have questions, write them down and follow up with the author or someone else more familiar with the project than you are. I like to draw diagrams showing the relevant entities, relationships, parties, and actions. When I ask questions, I take the diagram with me.

No doubt there are entire books and websites devoted to writing test plans, and you can use a search engine as well as I can. How you assemble the plan will depend on who else is available to help you. I prefer to work breadth-first, i.e. start with an outline, maybe have someone review it, and then start digging into the details. If you get writer's block, you can often use a search engine to find lists of criteria to use for tests, e.g. things to test for Windows apps, things to test for web forms, things to test about web navigation, and so on.

As you dig into the test plan details, you will come up with more questions. Write them down, put a placeholder in your test plan, and then keep writing. When you have enough questions, follow up with someone.

There is much more to say about planning, but you specifically asked how to get started, so I'll stop there.

The easiest approach is to write use cases with alternate and exception scenarios. Map the requirements to the use cases and ensure that all requirements are covered by scenarios. Your use cases then translate directly to test cases. If you are testing lower-level units, the same approach applies; just write the use cases for the lower-level units.
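The "ensure all requirements are covered" step is easy to mechanise once the mapping is written down (the IDs below are made up):

```python
# Map each use case to the requirement IDs its scenarios exercise; any
# requirement not reachable from a use case is uncovered.
requirements = {"REQ-1", "REQ-2", "REQ-3"}

use_cases = {
    "UC-login":  {"REQ-1"},
    "UC-report": {"REQ-2"},
}

covered = set().union(*use_cases.values())
print("Uncovered requirements:", sorted(requirements - covered))  # ['REQ-3']
```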