When designing and play-testing a new game, the people trying out functionality often get used to how things work and stop trying the "dumb ways". So if development breaks something, it may be a feature only a new user would bother trying. What are some of the best ways to make sure that your game has its "stupid cases" tested when it's finally released, or after a major update? Methodology, software assistance, maybe even sites where people will play-test for you, et cetera.

7 Answers

Unit Test Your Code

This will make sure that certain stupid bugs, once a unit test exists for them, won't recur, because the unit test will fail if they do. It requires a change in programming methodology, but in my opinion it is completely worth it.
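As a minimal sketch of the idea (the Inventory class is hypothetical, and plain asserts stand in for a real framework such as GoogleTest or Catch2):

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical gameplay class under test.
class Inventory {
public:
    void add(int /*itemId*/, int count) {
        if (count < 0) throw std::invalid_argument("negative count");
        total_ += count;
    }
    int total() const { return total_; }
private:
    int total_ = 0;
};

// Regression test for a "stupid bug": adding a negative item count once
// silently corrupted the inventory total. If the bug ever comes back,
// this test fails and the build goes red.
void testNegativeCountRejected() {
    Inventory inv;
    bool threw = false;
    try { inv.add(42, -5); } catch (const std::invalid_argument&) { threw = true; }
    assert(threw && "negative counts must be rejected");
    assert(inv.total() == 0);
}

int main() {
    testNegativeCountRejected();
    return 0;  // a failed assert fails the build
}
```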

Automate Whatever Testing You Can

Beyond unit testing, create a set of automated functional and acceptance tests that run on every build, so you know whether a given build is good. If your controls are scriptable and your game behaves consistently, you can check for lots of bugs automatically.
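A sketch of what a scripted acceptance test might look like, assuming a headless game loop; the Game and Input types here are hypothetical stand-ins for your engine:

```cpp
#include <cassert>
#include <vector>

enum class Input { Left, Right, Jump, Fire };

struct Game {
    int level = 1;
    bool crashed = false;
    void step(Input in) {
        // Placeholder logic; your real fixed-timestep update goes here.
        if (in == Input::Fire) ++level;
    }
};

// Replay a recorded input script headlessly and verify the end state
// matches what the known-good run produced.
bool runScript(const std::vector<Input>& script) {
    Game game;
    for (Input in : script) {
        game.step(in);
        if (game.crashed) return false;
    }
    return game.level == 2;  // where the known-good run ended
}

int main() {
    const std::vector<Input> recordedRun = { Input::Right, Input::Jump, Input::Fire };
    assert(runScript(recordedRun) && "acceptance script diverged from baseline");
    return 0;
}
```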

Create A Multi-Level Test Plan

Make sure that your testers have a test plan that tests the most important bugs. This should be multi-level:

Smoke Test: Tests that the game doesn't crash in the most common cases.

Regular Test: Tests more uncommon cases.

Soak Test: Go as deep as you can, regression-testing as many known bugs as possible. Also verify that the game can stay running for very long periods of time (days) without crashing.
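A minimal soak-driver sketch, again with hypothetical Game and Input stand-ins; random inputs from a fixed seed hammer the game until a time budget expires, so any failure is reproducible:

```cpp
#include <chrono>
#include <cstdio>
#include <random>

enum class Input { Left, Right, Jump, Fire, Count };

struct Game {
    bool crashed = false;
    void step(Input) { /* fixed-tick update; placeholder */ }
};

int main() {
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::hours(48);  // "days"
    std::mt19937 rng(12345);  // fixed seed so failures are reproducible
    std::uniform_int_distribution<int> dist(0, static_cast<int>(Input::Count) - 1);

    Game game;
    long long ticks = 0;
    while (clock::now() < deadline) {
        game.step(static_cast<Input>(dist(rng)));
        if (game.crashed) {
            std::fprintf(stderr, "soak failure after %lld ticks (seed 12345)\n", ticks);
            return 1;
        }
        ++ticks;
    }
    std::printf("soak passed: %lld ticks without a crash\n", ticks);
    return 0;
}
```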

A simple form of code coverage can be applied to test cases using flags that are tripped once a block of code has executed in-game. Displaying on screen which flags have and haven't been tripped lets testers know which cases have been covered and which haven't.

Simple as it is, it's still effective so long as the flags have meaningful names so the testers can figure out what's supposed to be done.

This technique is credited to Matthew Jack, who implemented it in Crysis.
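A minimal sketch of such a flag system (the names are illustrative, and stdout stands in for the on-screen overlay):

```cpp
#include <cstdio>
#include <map>
#include <string>

class CoverageFlags {
public:
    static CoverageFlags& get() {
        static CoverageFlags instance;
        return instance;
    }
    // Register a flag up front so untested cases show up in the readout.
    void declare(const std::string& name) { flags_.emplace(name, false); }
    // Trip a flag the first time its code path runs.
    void trip(const std::string& name) { flags_[name] = true; }
    void dumpToScreen() const {
        // In-game you would render this as an overlay; stdout stands in.
        for (const auto& [name, hit] : flags_)
            std::printf("[%c] %s\n", hit ? 'x' : ' ', name.c_str());
    }
private:
    std::map<std::string, bool> flags_;
};

#define COVER(name) CoverageFlags::get().trip(name)

void onPlayerDrowns() {
    COVER("player_drowns");
    // ... actual drowning logic ...
}

int main() {
    auto& cov = CoverageFlags::get();
    cov.declare("player_drowns");
    cov.declare("vendor_out_of_stock");  // declared but never hit below

    onPlayerDrowns();
    cov.dumpToScreen();  // testers see which cases remain untested
    return 0;
}
```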

There are several good answers here on the programming side, so I'll add one that's more design-oriented.

Have your system designers write edge case test plans for testers

If they know what they're doing, the designer of a system, or the scripter of an in-game sequence, is very likely to know that system's edge cases and where it might break down. They should also have an idea of where the system interacts with others. Having them write out a test plan, or discuss with the testers where things are likely to go wrong, can save everyone time.

You also need to organize user experience testing with "Kleenex testers": testers who see your game for the first time and whom you will never use again. It's a bit expensive and complicated to organize, but well worth the effort. If you do, film every test with three cameras: one on the screen, one on the controls, and one on the tester's face in order to detect frustration.

Great question; every response in this thread is an excellent one. One thing to add:

In my experience (30 years of software development), apps and games are more reliable if at least one member of the test group is a skilled gorilla tester: masterful at ad-hoc testing, abusing apps, and deliberately using games the wrong way to find exactly the kind of bugs the original poster describes. Gorilla testers have an uncommon skill -- when you find a good one, keep them on your team.

For an example of effective gorilla testing see: www.youtube.com/watch?v=8C-e96m4730 ;)

To sum up the responses in this thread: effective software quality strategies combine multiple approaches to achieve a high level of confidence in the functional quality and reliability of your game.

An interesting trick I heard from a friend for testing graphics: during a known-good run through a level, record performance stats from the GPU at regular intervals (polygons on screen, for example). You can then replay that route, and if the numbers drift outside a given tolerance, it may mean something is no longer rendering correctly.
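A sketch of the comparison step, with made-up baseline numbers; a real version would pull the samples from your engine's GPU counters:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Compare per-checkpoint poly counts from a replay against a known-good
// baseline, flagging any sample outside the given relative tolerance.
// Assumes baseline samples are nonzero.
bool compareRun(const std::vector<long>& baseline,
                const std::vector<long>& replay,
                double tolerance /* e.g. 0.05 = 5% */) {
    if (baseline.size() != replay.size()) return false;
    bool ok = true;
    for (size_t i = 0; i < baseline.size(); ++i) {
        double drift = std::fabs(double(replay[i] - baseline[i])) / double(baseline[i]);
        if (drift > tolerance) {
            std::printf("checkpoint %zu: %ld vs %ld (%.1f%% drift)\n",
                        i, baseline[i], replay[i], drift * 100.0);
            ok = false;
        }
    }
    return ok;
}

int main() {
    std::vector<long> baseline = {120000, 98000, 143000};  // known-good run
    std::vector<long> replay   = {119500, 97800, 90000};   // current build
    if (!compareRun(baseline, replay, 0.05))
        std::puts("render regression suspected: investigate flagged checkpoints");
    return 0;
}
```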

One simple thing that helps a lot, especially with stress testing and network testing: make it possible for your AI to play against other AI. If you can leave your game playing itself, or leave a small group of AIs playing over a network for a weekend, you can learn a lot.
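A minimal self-play harness sketch, with hypothetical Game and AiController stand-ins; a networked variant would run each controller in its own process against a real server:

```cpp
#include <chrono>
#include <cstdio>

struct Game {
    bool finished = false;
    bool crashed = false;
    void applyMove(int /*player*/, int /*move*/) {
        finished = true;  // placeholder: real match logic goes here
    }
};

struct AiController {
    int chooseMove(const Game&) { return 0; }  // placeholder policy
};

int main() {
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::hours(60);  // a weekend
    AiController players[2];
    long matches = 0;

    // Run back-to-back AI-vs-AI matches until the time budget expires,
    // failing fast if any match trips the crash flag.
    while (clock::now() < deadline) {
        Game game;
        for (int turn = 0; !game.finished; turn = 1 - turn) {
            game.applyMove(turn, players[turn].chooseMove(game));
            if (game.crashed) {
                std::printf("crash in match %ld, turn owner %d\n", matches, turn);
                return 1;
            }
        }
        ++matches;
    }
    std::printf("completed %ld AI-vs-AI matches without a crash\n", matches);
    return 0;
}
```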