
The simple answer is that if you don't trust CryptGenRandom() then you are doomed. Indeed, CryptGenRandom() is provided by the operating system, and if the OS is hostile then there is very little you can do to defend against it. A hostile OS can inspect all your RAM, log all your passwords, and send all your data to external third parties.

The less simple answer relies on a subtlety, which is that you may mistrust CryptGenRandom() while not actually supposing that the OS is hostile. Maybe you consider that CryptGenRandom() is simply flawed, by accident and not by malice, and delivers randomness of poor quality, while no other component of the OS actively tries to cheat on you.

In that case, it becomes sensible to augment CryptGenRandom() with some extra entropy. Entropy comes from physical events, and the OS is ideally placed to gather physical events; applications are in a much less ideal position for that. Nevertheless, some people have tried. For instance, there is an application-level entropy gatherer in GnuPG (at least in version 1.4.14; look for the rndw32.c file).

The correct way to gather entropy is to take all the "events" (e.g. "key X was pressed at time T"), concatenate them together, then hash the whole lot with a proper hash function like SHA-256, and use the output as the seed for a cryptographically secure PRNG. In particular, don't shun CryptGenRandom() completely; call it to get some bytes of output (say, 16 bytes), and add that output as an "event" like all the others you gather. The beauty of the hashing is that even if an event is flawed in a malicious way, it cannot lower your resulting entropy; it can only contribute positively (at worst, it contributes nothing).
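
As a minimal sketch of that pattern (Python for illustration; the event sources are placeholders, and `os.urandom(16)` stands in for the 16 bytes you would request from CryptGenRandom() on Windows):

```python
import hashlib
import os
import time

class EntropyPool:
    """Accumulate 'events' into a SHA-256 state, then derive a PRNG seed."""

    def __init__(self):
        self._hash = hashlib.sha256()

    def add_event(self, data: bytes) -> None:
        # Length-prefix each event so the concatenation is unambiguous.
        self._hash.update(len(data).to_bytes(4, "big"))
        self._hash.update(data)

    def seed(self) -> bytes:
        # 32 bytes of seed material for a cryptographically secure PRNG.
        # A flawed or even malicious event cannot reduce the entropy the
        # hash has already absorbed; at worst it adds nothing.
        return self._hash.digest()


pool = EntropyPool()

# Placeholder events; a real gatherer would feed keystroke timings,
# mouse movements, packet arrival times, and so on.
pool.add_event(time.perf_counter_ns().to_bytes(8, "big"))
pool.add_event(os.urandom(16))  # stand-in for 16 bytes from CryptGenRandom()

seed = pool.seed()  # use this to key a CSPRNG (e.g. AES-CTR, ChaCha20)
```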

The hard part of random generation is not obtaining entropy; it is estimating how much you got. A hardware event is "useful entropy" exactly inasmuch as a potential attacker cannot predict its contents. For instance, an attacker observing the network may predict the exact moment your machine receives an incoming packet, with precision down to, perhaps, the microsecond. If you time that event with nanosecond precision (Time Stamp Counters are convenient for that -- Windows calls them "performance counters"), then you may consider that each incoming packet yields about 10 bits of entropy (because 2^10 is approximately equal to 1000, which is the ratio between a microsecond and a nanosecond). It is all relative to a specific physical context; making a one-size-fits-all RNG in pure software thus necessarily implies a lot of overkill.
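
As a back-of-the-envelope check of that estimate (the microsecond/nanosecond figures are the assumed values from the example above, not measurements):

```python
import math
import time

# Assumed figures from the packet-timing example above.
attacker_precision_ns = 1_000  # attacker predicts arrival to the microsecond
our_precision_ns = 1           # we timestamp it to the nanosecond

# Number of timestamp values the attacker cannot distinguish between.
indistinguishable_values = attacker_precision_ns // our_precision_ns

# Entropy per event, in bits.
bits_per_event = math.log2(indistinguishable_values)
print(f"about {bits_per_event:.1f} bits per packet")  # about 10.0 bits

# Reading a nanosecond-resolution clock, analogous to the TSC
# (a "performance counter" in Windows terminology).
timestamp = time.perf_counter_ns()
```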