4 Answers
I would avoid randomness in your automated tests. You want all of your automated tests to be predictable, and random inputs are certainly not. That way, if one of your tests fails, you will be able to determine why more quickly.

Instead, spend some time thinking about cases that might deserve testing and test those explicitly. Thinking about your code like this is more effective than using random tests and hoping that they catch cases you didn't think of.

If add(x, y) = x + y and we test add() by asserting add(x, y) == add(y, x), then we can use random x and y just fine. Random inputs don't have to be unpredictable, but you do have to make sure to show (or log) the test case if it's random and something breaks. QuickCheck is a testing library that was written precisely to help with testing using random values.
– Michael Shaw, Mar 30 '13 at 2:14
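A minimal sketch of that style of property test in Python (the `add` function, input ranges, and trial count here are illustrative, not from any particular library):

```python
import random

def add(x, y):
    return x + y

def test_add_commutative(trials=100):
    # Property-based check: add(x, y) should equal add(y, x) for any inputs.
    for _ in range(trials):
        x = random.randint(-1000, 1000)
        y = random.randint(-1000, 1000)
        # Log the inputs in the assertion message so a failure is reproducible.
        assert add(x, y) == add(y, x), f"commutativity failed for x={x}, y={y}"
```

The key point is the assertion message: if a random pair ever breaks the property, the failing inputs are shown rather than lost.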

If your test only deals with a small number of randomly generated values per test, recording every random value is sufficient to allow reproduction of the test results. If this is unwieldy, saving the seed is often sufficient to allow the test to be reproduced. Of course, this sort of testing can become difficult if working with randomized algorithms, unless they accept a random number generator as a parameter. All that being said, randomness is inappropriate unless you are testing so many values that you have no choice.
– Brian, Mar 30 '13 at 15:36
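The seed-saving idea can be sketched like this in Python (`my_sort` is a hypothetical function under test, standing in for whatever the test exercises):

```python
import random

def my_sort(xs):
    # Hypothetical code under test; a real project would import its own function.
    return sorted(xs)

def test_sort_with_logged_seed():
    # Pick a fresh seed, but record it so any failing run can be replayed exactly.
    seed = random.randrange(2**32)
    rng = random.Random(seed)  # pass the RNG in rather than using the global one
    xs = [rng.randint(-100, 100) for _ in range(20)]
    assert my_sort(xs) == sorted(xs), f"failed with seed {seed}: input {xs}"
```

Passing the `Random` instance as a parameter is what makes randomized code reproducible: rerunning with `random.Random(seed)` regenerates the exact failing input.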

Now your acceptance test will fail half the time. Just re-running the test will potentially change the result. That'll make it harder to notice errors and track down the problem when it occurs. It'll be much easier if you just write two tests, one for each login method.

I sometimes write tests whose success or failure depends on random occurrences. I've learned that there are a few caveats, as described by other answers:

Ideally, you do not want a test that fails only sometimes. If you break your code, you want to find out when you integrate your change, so you know which change to undo.

If there are only a small number of inputs that cause failure, you need to know which inputs they are in order to fix the problem.

The former problem can be mostly solved, as long as you are willing to accept a slow test suite, by running the test many times. The latter problem requires you to log any random factors that may cause the failure.

These techniques also apply to situations where the randomness is unavoidable, e.g. you are testing for potential timing-related conditions in a multithreaded program.
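Both techniques together can be sketched like this in Python (`flaky_operation` is a hypothetical stand-in for code whose behavior depends on a random factor; the run count is illustrative):

```python
import random

def flaky_operation(rng):
    # Hypothetical code under test whose outcome depends on random input.
    # This sketch always succeeds; a real bug would fail for some draws.
    return 0.0 <= rng.random() < 1.0

def test_repeated(runs=1000):
    # Run many times to raise the odds of hitting a rare failure,
    # and log the seed of each run so any failure can be reproduced.
    for i in range(runs):
        seed = random.randrange(2**32)
        rng = random.Random(seed)
        assert flaky_operation(rng), f"iteration {i} failed with seed {seed}"
```

The trade-off is exactly the one noted above: a higher run count catches more intermittent failures but makes the suite slower.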

As long as you are careful, it can be done. But the best thing to do is to isolate potential failure conditions and write a predictable test that always fails for them prior to fixing the problem, IMHO.

Well, once you catch a failure via randomness, that failure should be turned into its own test, either with nonrandom parameters or a hard-coded seed. In the OP's case, may as well just start by doing that, so no randomness.
– Brian, Mar 30 '13 at 15:37
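Turning a caught failure into a deterministic regression test might look like this in Python (`code_under_test` and the seed value are hypothetical placeholders):

```python
import random

def code_under_test(rng):
    # Hypothetical randomized code; a real project would import its own function.
    return 0 <= rng.randint(0, 10) <= 10

# Hypothetical seed copied from the log of a failing random run.
# Pinning it replays the exact failing scenario on every test run.
FAILING_SEED = 123456789

def test_regression_failing_seed():
    rng = random.Random(FAILING_SEED)
    assert code_under_test(rng), f"regression with pinned seed {FAILING_SEED}"
```

Once the bug is fixed, this test stays in the suite as a predictable guard against the same failure recurring, which is exactly the "always fails prior to fixing" test the answer recommends.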