If each call to a[n] RNG produces blocks of n bits (where n > 15), [...] Each subsequent generation of an n-bit block shall be compared with the previously generated block. The test shall fail if any two compared n-bit blocks are equal.

If each call to a[n] RNG produces fewer than 16 bits, [...] (for some n > 15) [...] Each subsequent generation of n bits shall be compared with the previously generated n bits. The test fails if any two compared n-bit sequences are equal.

Isn't (n > 15) too small?

I am concerned about their choice of 16 as the minimum for 'n'. At 16 bits, one would expect the test to fail spuriously about once every 2^16 = 65,536 iterations, since the chance that a truly random block matches its predecessor is 2^-16. That is rather frequent! Even an 'n' of 32 will lead to noticeable false positives if the RNG is used very frequently.

In my specific case, I have a 32-bit TRNG, and would much rather stretch 'n' to 64 to prevent false positives.
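Something like the following sketch is what I have in mind (trng_read32() is a placeholder for my hardware's read primitive, not a real API): two raw 32-bit reads are packed into one 64-bit block, and each block is compared against its predecessor as the standard describes.

```c
#include <stdbool.h>
#include <stdint.h>

extern uint32_t trng_read32(void);   /* placeholder: one raw 32-bit TRNG read */

static uint64_t prev_block;

/* Pack two raw 32-bit reads into one 64-bit test block. */
static uint64_t next_block(void)
{
    return ((uint64_t)trng_read32() << 32) | (uint64_t)trng_read32();
}

/* Per FIPS 140-2, the first block after startup is stored for
 * comparison and never output. Call this once at power-up. */
void rng_test_init(void)
{
    prev_block = next_block();
}

/* Returns true and stores a fresh block in *out on success; returns
 * false if two consecutive 64-bit blocks were equal, in which case
 * the module must enter an error state. */
bool rng_read64(uint64_t *out)
{
    uint64_t block = next_block();
    if (block == prev_block)
        return false;
    prev_block = block;
    *out = block;
    return true;
}
```

With 64-bit blocks the spurious-failure probability per read drops to 2^-64, which is what I'm after.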

2 Answers

Superficially, as you say, this seems like a broken requirement: if the RNG were truly random, then it would, sooner or later, fail this test with high probability in practice (and since FIPS 140-2 requires the device to stop working when this happens, that's bad).

However, what we're dealing with in practice is pseudo-RNGs, and this self-test requirement basically distills to: the PRNG should never emit the same $n$-bit block twice in succession (and whatever 'device' uses it should check that it doesn't).

This is, in practice, a very easy requirement to meet in a PRNG implementation, so I would speculate that this self-test requirement was designed more to catch hardware/software errors that inadvertently repeat an already-generated block, or to guard against RNGs that sample environmental data without mixing it into existing state.
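To illustrate (a sketch, not anything prescribed by the standard): a thin wrapper around the core generator can enforce the requirement mechanically, regenerating in the astronomically unlikely event of a repeat. Here prng_next_block() stands in for whatever the underlying PRNG is. (As the comments below point out, doing this technically makes the output distinguishable from ideal randomness, but it satisfies the letter of the test.)

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_BYTES 16   /* block size of the underlying generator */

extern void prng_next_block(uint8_t out[BLOCK_BYTES]); /* hypothetical core PRNG */

/* last_block starts as all zeros, so the very first call also rejects
 * an all-zero block; that corner case is harmless. */
static uint8_t last_block[BLOCK_BYTES];

/* Emit the next block, enforcing that it differs from the last one.
 * For 128-bit blocks the loop body repeats with probability 2^-128. */
void prng_next_block_checked(uint8_t out[BLOCK_BYTES])
{
    do {
        prng_next_block(out);
    } while (memcmp(out, last_block, BLOCK_BYTES) == 0);

    memcpy(last_block, out, BLOCK_BYTES);
}
```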

Yes, one could argue that a PRNG with a 16-bit internal state is a design failure, but since the output block size need not be the same as the internal state size, the requirement is still questionable: it effectively rules out PRNGs with an output block size of less than 64 bits (in practice this is not a problem, but the requirement still feels like it's missing the point).
– Thomas Jul 9 '13 at 12:13


"The PRNG should never emit the same $n$ bit block twice in succession." That's a bullshit requirement that actually reduces randomness.
– orlp Jul 10 '13 at 11:59

I was trying hard to avoid commenting on the legitimacy of the tests; the OP was asking about practicality. The self-test requirement does appear somewhat pointless.
– archie Jul 14 '13 at 20:54

I agree with nightcracker here: if a PRNG actually never produces the same $n$-bit block twice in succession, you have just built yourself a distinguisher from an idealized RNG.
– Alexandre Yamajako Aug 8 '13 at 0:21

This requirement is part of the general "continuous tests", and as such its primary purpose is to detect "flatline" failures (if you can torture that analogy into making sense). Remember that FIPS-140 was originally written with hardware implementations in mind, so the approaches it takes to testing sometimes seem odd for software: hardware failures tend to be different from software failures.

I don't think there's a problem, though. You could choose $n = 16$ and potentially have statistical problems, but you don't have to pick something that small. Build your construction to output blocks of whatever size you want. E.g., modern constructions like CTR_DRBG from SP800-90 output 16 bytes at a time, which is well outside statistical concern.
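To put numbers on "well outside statistical concern": with $n$-bit blocks, the chance that a given block equals its predecessor is $2^{-n}$, so over $N$ blocks you expect roughly $N \cdot 2^{-n}$ spurious failures. A generator emitting $2^{30}$ (about a billion) 128-bit blocks per second for a century produces on the order of $2^{62}$ blocks, for an expected $2^{62} \cdot 2^{-128} = 2^{-66}$ false trips, i.e. effectively zero; at $n = 16$, the same workload would trip the test about $2^{46}$ times.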