We are in the process of developing an in-house testing solution tailored to our applications. As of right now, when you add a new test case to a plan you must choose the prerequisite steps and then enter the steps, along with the additional details. My question is: when creating a new test set, should we ask whether the cases within the set are meant to be stand-alone (requiring a log-in and log-out after each case so the next case can execute), or just assume that all test sets are developed to be run sequentially, with no prerequisites necessary?

4 Answers

Instead of an "either/or", it is better to think of test case or scenario testing as a tree structure:

Trunk: This is the common test set that always runs (e.g. launch the application)

Branch: These are major scenarios that share a few common tasks (e.g. "admin" vs "user" scenarios)

Branch: ... (if needed)

Branch: ... (if needed)

Twig: Test cases that are launched from a common branch (e.g. "admin" sets up a new user with specific permissions.)

Leaf: Specific test cases to verify the permissions in a specific "twig" scenario (e.g. verify positive and negative cases for each permission granted or denied)

This allows you to use tools such as VM snapshots or automated setups to capture state at the branch (or higher) levels, so you can repeat scenarios in sequence with variations while keeping the greatest independence between setups, such that they can be run independently in any order. That structure gives you the maximum implementation flexibility for efficiently testing your application context.
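To make the layering concrete, here is a minimal sketch using Python and pytest fixtures. This is one possible reading of the tree idea, not a prescribed implementation; FakeApp and every name in it are hypothetical stand-ins used only to make the sketch self-contained and runnable.

```python
# A minimal sketch of the trunk/branch/twig/leaf layering, assuming a
# Python/pytest setup; FakeApp is a toy stand-in for a real application.
import pytest

class FakeApp:
    """Toy in-memory application standing in for the real one."""
    def __init__(self):
        self.users = {"admin": {"read", "write", "admin"}}

    def login(self, name):
        assert name in self.users
        return name

    def create_user(self, acting_user, name, permissions):
        assert "admin" in self.users[acting_user]  # only admins may do this
        self.users[name] = set(permissions)
        return name

    def can(self, user, permission):
        return permission in self.users[user]

@pytest.fixture(scope="session")
def app():
    # Trunk: common setup that always runs (e.g. launch the application).
    return FakeApp()

@pytest.fixture
def admin(app):
    # Branch: the "admin" scenario, built on top of the trunk.
    return app.login("admin")

@pytest.fixture
def restricted_user(app, admin):
    # Twig: admin sets up a new user with specific permissions.
    return app.create_user(admin, "alice", permissions=["read"])

def test_read_allowed(app, restricted_user):
    # Leaf: positive case for a permission that was granted.
    assert app.can(restricted_user, "read")

def test_write_denied(app, restricted_user):
    # Leaf: negative case for a permission that was not granted.
    assert not app.can(restricted_user, "write")
```

Because each layer is a fixture, the leaves can run in any order, and a snapshot (or session-scoped fixture) at the trunk or branch level avoids repeating expensive setup.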

This is actually a great way to think of it, and I appreciate your input. I have two reasons for wanting to run in succession: 1) they will execute much more quickly, and 2) (the most important, I feel) they will allow state changes to occur in the application that wouldn't normally happen with independent tests. As of right now, there are no charts, diagrams, or any other way for me to ascertain relationships and dependencies between forms without digging through code that is overly complicated. In your scenario, are the tests flagged in any way? More to follow...
– Spence May 18 '14 at 23:06

I'm actually going to respond to the answer below, and would love your insight on the topic of full-succession testing.
– Spence May 18 '14 at 23:22

Hmm ... we seem to be getting buried in terminology. By "full succession", I assume you mean a test that is intended to run straight through without stopping for failures (as opposed to the "fail-closed" model that Kate mentioned). Without digging through the code to capture the dependencies, I would recommend you do some heavy manual exploring and build a model of the dependencies. On one application, I kept a hierarchical list of dependencies in a Word document in outline mode; I ended up with a document a few hundred pages long. Understanding is key before automation.
– Jeff_Lucas May 19 '14 at 16:40

Thanks, that's exactly what I'm doing now, and I'd like to thank everybody that gave me feedback. I wish I had 15 reputation so I could upvote every answer I've received =) I will go ahead and mark this as the answer.
– Spence May 19 '14 at 17:39

Here are the pros and cons of running tests in a sequence. I understand you are asking in the context of automation, not manual execution.

Pros:

Execution time can be shorter, because each test sets up the prerequisites for the next one.

Cons:

Harder to run in parallel. If you have a sequence of tests that depend on each other, you cannot run them in parallel. You can only run the whole sequence in parallel with other sequences of tests, assuming the sequences are independent of each other.

Limited feedback in case of test failure. When executing tests in a sequence, the usual strategy is to skip the tests that follow a failing one. So if your sequence is long and one of the initial tests fails, you will lose a significant amount of feedback, because many of your tests will be skipped (see the sketch after this list).

Harder to maintain. What if one of the tests in a sequence verifies a feature that no longer exists in the product? You will need to remove it from the sequence, but also make sure the next test in the sequence still gets the input state/prerequisites it needs. You should therefore make sure it is clear why one test depends on the result of another.
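To illustrate the limited-feedback point, here is a small, self-contained Python sketch; it is a toy runner, not a real framework, and the step names are made up:

```python
# A toy runner illustrating the limited-feedback problem: once one step
# in a sequence fails, every later step is skipped because its
# prerequisites were never created.
def run_sequence(steps):
    results = {}
    failed = False
    for name, step in steps:
        if failed:
            results[name] = "SKIPPED"  # prerequisites were never built
            continue
        try:
            step()
            results[name] = "PASS"
        except AssertionError:
            results[name] = "FAIL"
            failed = True
    return results

def create_account():
    pass  # succeeds

def place_order():
    assert False, "order form broken"  # fails early in the sequence

def cancel_order():
    pass  # would pass, but never gets the chance to run

print(run_sequence([
    ("create_account", create_account),
    ("place_order", place_order),
    ("cancel_order", cancel_order),
]))
# -> {'create_account': 'PASS', 'place_order': 'FAIL', 'cancel_order': 'SKIPPED'}
```

One early failure hides the real status of everything downstream, which is exactly the cost you pay for the faster execution.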

It is also easier to accidentally rely on side-effects, e.g. test2 depends on something test1 did.
– user246 May 18 '14 at 16:48

@user246, right, this may result in false positives. If there is an implicit dependency on such side effects, it should be made explicit in the test.
– dzieciou May 18 '14 at 17:05

@user246 The issue I'm running into is that there is no documentation regarding dependencies within the code. If I'm stuck running one test, then resetting, I can never accurately portray what is happening in production. Am I forced to badger development about providing the required documentation while they're still busy putting out fires?
– Spence May 18 '14 at 23:28

This is a pretty common situation - and one I've dealt with myself. Based on your comments and the question itself, I can offer a few suggestions.

Document your test dependencies - if you're going to be running tests that depend on the result of other tests, make sure you document them.

Build in per-test checks - it's a lot more complex to check the results of a test with dependent tests, but if you don't do this you'll make analyzing your results much more difficult. As an example, if you run a series of sale transactions as part of a test run, you want to check the totals for each transaction immediately after the transaction completes rather than waiting until the end of the test run to check all the transactions.

Consider a fail-closed model - By this I mean abort the run as soon as any test fails because each subsequent test depends on the success of the test before it. You can work around this with well-designed per-test checks, but they won't get you out of all the potential problems. Fail-closed also saves a lot of time analyzing results after the first failure, but has the disadvantage that you won't know if there's a problem in the later tests.

Make your tests as granular as possible - You want a whole lot of small routines that you can parameterize and call at will. The structure I use for this kind of testing has a driver that pulls an ordered list of tests to run (with test type and data identifiers), then, based on the type of test, calls the appropriate routine with those data identifiers. This makes reordering tests for a change in application flow relatively trivial. Since I build in the validation on a per-test basis, a reordered flow doesn't affect my baseline data much.
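Here is a rough Python sketch of that driver structure, combining the per-test checks and the fail-closed option from above. The test types, routines, and data identifiers are hypothetical placeholders for whatever your application needs:

```python
# A sketch of a data-driven test driver: an ordered plan of
# (test_type, data_id) entries, a dispatch table of routines, per-test
# checks, and an optional fail-closed abort on the first failure.
def run_sale(data_id):
    # ...drive the application to perform a sale for data_id...
    # Faked here: "txn-bad" simulates a transaction with a wrong total.
    return {"total": 0 if data_id == "txn-bad" else 100}

def check_sale(result):
    # Per-test check: validate the transaction immediately rather than
    # waiting until the end of the run.
    return result["total"] == 100

# Dispatch table mapping test type -> (run routine, per-test check).
ROUTINES = {
    "sale": (run_sale, check_sale),
    # "refund": (run_refund, check_refund), ... as the suite grows
}

def run_plan(plan, fail_closed=True):
    """Walk an ordered list of (test_type, data_id) entries."""
    for test_type, data_id in plan:
        run, check = ROUTINES[test_type]
        result = run(data_id)
        if check(result):
            print(f"PASS: {test_type} {data_id}")
        elif fail_closed:
            print(f"FAIL: {test_type} {data_id} - aborting run")
            break  # fail-closed: later tests depend on this one
        else:
            print(f"FAIL: {test_type} {data_id} - continuing")

run_plan([("sale", "txn-001"), ("sale", "txn-bad"), ("sale", "txn-002")])
```

Reordering the plan, or flipping fail_closed, changes the flow without touching the routines themselves, which is what makes this layout cheap to maintain when the application flow changes.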

Thanks for the thorough feedback. I'm really impressed by the depth of the answers I've received. I'll be implementing all of your suggestions, as they seem almost necessary for my environment.
– Spence May 19 '14 at 13:49

An additional theoretical concept to consider is this. The ISTQB terminology separates test cases from test procedures. During test case design, the description of the test case should detail everything you mentioned: prerequisites, steps, and the expected outcome. However, when implementing the test case (either as manual or automated), you essentially create the test procedure, which may include details on how to execute the test case as well as various optimizations, such as combining test case parts if they are similar.
So, strictly speaking, your test case descriptions should be stand-alone, and you would create a separate test procedure detailing the sequential execution. In practice, of course, you don't always document both; however, it is good practice to separate the more abstract concept of a test case from its implementation details.
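As a loose illustration of that separation (not an ISTQB schema; the field names and example cases here are made up):

```python
# Stand-alone test case descriptions versus a test procedure that
# orders them for sequential execution.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Design artifact: stand-alone description, independent of order."""
    name: str
    prerequisites: list[str]
    steps: list[str]
    expected: str

@dataclass
class TestProcedure:
    """Implementation artifact: an executable ordering of test cases,
    where one case's outcome may satisfy the next case's prerequisites."""
    name: str
    ordered_cases: list[TestCase] = field(default_factory=list)

# Each case is described as if stand-alone...
login = TestCase("log in", ["user account exists"],
                 ["enter credentials", "submit"], "dashboard is shown")
create_order = TestCase("create order", ["user is logged in"],
                        ["fill in order form", "save"], "order is saved")

# ...while the procedure captures the sequential-execution decision.
smoke_run = TestProcedure("sequential smoke run", [login, create_order])
```

Keeping the sequencing decision in the procedure means the case descriptions stay reusable whether you run them stand-alone or in succession.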