On Sun, Aug 18, 2002 at 12:38:59AM -0500, Oliver Xymoron wrote:
> On Sat, Aug 17, 2002 at 09:01:20PM -0700, Linus Torvalds wrote:
> >
> > On 17 Aug 2002, Robert Love wrote:
> > >
> > > [1] this is why I wrote my netdev-random patches. some machines just
> > > have to take the entropy from the network card... there is nothing
> > > else.
> >
> > I suspect that Oliver is 100% correct in that the current code is just
> > _too_ trusting. And parts of his patches seem to be in the "obviously
> > good" category (ie the xor'ing of the buffers instead of overwriting).
>
> Make sure you don't miss this bit, I should have sent it
> separately. This is a longstanding bug that manufactures about a
> thousand bits out of thin air when the pool runs dry.

There's a reason why I did what I did here, and it has to do with an attack which Bruce Schneier describes in his Yarrow paper, called the "iterative guessing attack". Assume that the adversary somehow knows the current state of the pool. This could be because the initial state was known to the attacker, either because the pool was in a known, initialized state (this could happen if the distribution doesn't save the state of the pool via an /etc/init.d/random script), or because the attacker managed to capture the initial seed file used by the /etc/init.d/random script. Now what the attacker can do is periodically sample the pool, and attempt to explore all possible values which have been mixed into the pool that would result in the value which he/she read out of /dev/random. If only a small amount of new entropy was mixed in between samples, that search is entirely feasible, and the attacker can keep tracking the state of the pool indefinitely.

So in fact, by being more selective about which values get mixed into the pool, you can actually help the iterative guessing attack! That's why the current code tries to mix in sufficient randomness to completely reseed the secondary extraction pool, and not just enough randomness for the number of bytes required. This was a deliberate design decision to try to get the benefits of Yarrow's "catastrophic reseeding".

Your complaint in terms of "manufacturing about a thousand bits out of thin air" is a fair one, but it depends on how you view things. From the point of view of absolute randomness, you're of course right. If the primary pool only has 100 bits of randomness, and xfer_secondary_pool attempts to transfer 1100 bits of randomness, it drains the primary pool down to 0, but credits the secondary pool with 1100 bits of randomness, and yes, we have "created" a thousand bits of randomness.
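
In code terms, the bookkeeping being complained about comes down to something like this (illustrative numbers and names only, not the actual driver code):

/*
 * Sketch of the entropy accounting in question: the primary pool is
 * debited by whatever it actually held, while the secondary pool is
 * credited with the full size of the transfer.
 */
#include <stdio.h>

int main(void)
{
    int primary_entropy   = 100;    /* bits actually in the primary pool */
    int secondary_entropy = 0;
    int transfer_bits     = 1100;   /* full reseed of the secondary pool */

    /* The primary pool can't be debited below zero... */
    primary_entropy -= transfer_bits;
    if (primary_entropy < 0)
        primary_entropy = 0;

    /* ...but the secondary pool is credited with the whole transfer. */
    secondary_entropy += transfer_bits;

    printf("primary: %d bits, secondary: %d bits\n",
           primary_entropy, secondary_entropy);   /* prints 0 and 1100 */
    return 0;
}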

That being said, though, the adversary only gets to see results pulled out of the secondary pool, and the primary pool is completely hidden from the adversary. So when xfer_secondary_pool extracts a large amount of randomness from the primary pool, it does so using extract_entropy(), which uses SHA to extract randomness from the primary pool. Significant amounts of cryptographic analysis (which would also, as a side effect, break the SHA hash) would be required in order to figure out information in the primary pool based solely on the outputs that are being fed into the secondary pool.
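
Roughly speaking, the one-way property comes from the extraction step looking something like the sketch below. This is only a paraphrase of the idea, not the real extract_entropy(); the hash_pool() and mix_words() helpers are trivial placeholders so the sketch compiles, where the kernel uses its real SHA transform and pool mixer.

/*
 * Sketch of why the secondary pool only ever sees hash output:
 * extraction returns a hash of the pool, and the hash is also folded
 * back into the pool, so an observer of the output learns nothing
 * useful about the pool contents without breaking the hash.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define POOLWORDS 128                    /* 4096-bit primary pool */

/* Placeholder "hash" -- NOT SHA, just something deterministic. */
static void hash_pool(const uint32_t pool[POOLWORDS], uint8_t out[20])
{
    uint32_t acc = 0x67452301;
    for (int i = 0; i < POOLWORDS; i++)
        acc = (acc << 5 | acc >> 27) ^ pool[i];
    for (int i = 0; i < 20; i++)
        out[i] = (uint8_t)(acc >> ((i % 4) * 8));
}

/* Placeholder input mixer. */
static void mix_words(uint32_t pool[POOLWORDS], const uint8_t *in, size_t len)
{
    for (size_t i = 0; i < len; i++)
        pool[i % POOLWORDS] ^= in[i];
}

/* The caller (e.g. the secondary pool) only ever receives hash output. */
size_t extract_entropy(uint32_t pool[POOLWORDS], int *entropy_count,
                       uint8_t *buf, size_t nbytes)
{
    size_t done = 0;

    while (done < nbytes) {
        uint8_t hash[20];
        size_t n = (nbytes - done < 20) ? nbytes - done : 20;

        hash_pool(pool, hash);           /* output depends on the whole pool */
        mix_words(pool, hash, 20);       /* feed the hash back into the pool */
        memcpy(buf + done, hash, n);
        done += n;
    }

    *entropy_count -= (int)(done * 8);   /* debit the estimate, never below 0 */
    if (*entropy_count < 0)
        *entropy_count = 0;
    return done;
}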

So is it legitimate to credit the secondary pool with 1100 bits of randomness even though the primary pool only had 100 bits of randomness in it? Maybe. It depends on whether you care more about "absolute randomness" or "cryptographic randomness". Yarrow relies entirely on cryptographic randomness; the effective sizes of its primary and secondary pools are 160 bits and 112 bits, respectively.

I tried to take a bit more of a moderate position between relying solely on cryptographic randomness and a pure absolute randomness model. So we use large pools for mixing, and a catastrophic reseeding policy.

From a pure theory point of view, I can see where this might be quite bothersome. On the other hand, practically speaking, I think what we're doing is justifiable, and not really a security problem.

That being said, if you really want to use your patch, please do it differently. In order to avoid the iterative guessing attack described by Bruce Schneier, it is imperative that you extract r->poolinfo.poolwords - r->entropy_count/32 words of randomness from the primary pool, and mix it into the secondary. However, if you want to save the entropy count from the primary pool, and use that to cap the amount of entropy which is credited to the secondary pool, so that entropy credits aren't "manufactured", that's certainly acceptable. It would make /dev/random much more conservative about its entropy count, which might not be a bad thing from the point of view of encouraging people to use it only for the creation of long-term keys, and not for the generation of session keys.

						- Ted
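
A sketch of that compromise is below: still transfer enough to completely reseed the secondary pool (so the iterative guessing attack stays defeated), but cap the entropy credit at what the primary pool could honestly supply. The struct layout and helper prototypes here are stand-ins, not the real declarations from drivers/char/random.c.

/*
 * Sketch only: full-size reseed of the secondary pool, with the entropy
 * credit capped at the primary pool's actual entropy count.
 */
#include <stddef.h>
#include <stdint.h>

#define MAX_POOLWORDS 128               /* big enough for this sketch */

struct entropy_store {
    struct { int poolwords; } poolinfo;
    uint32_t *pool;
    int entropy_count;                  /* in bits */
};

/* Stand-ins for the driver's real helpers -- prototypes only. */
void extract_entropy(struct entropy_store *r, void *buf, size_t nbytes);
void add_entropy_words(struct entropy_store *r, const uint32_t *buf, int nwords);
void credit_entropy_store(struct entropy_store *r, int nbits);

void xfer_secondary_pool(struct entropy_store *primary, struct entropy_store *sec)
{
    uint32_t tmp[MAX_POOLWORDS];
    /* Still transfer enough to rewrite every word of the secondary pool
     * that isn't already backed by entropy: poolwords - entropy_count/32. */
    int words = sec->poolinfo.poolwords - sec->entropy_count / 32;
    int honest_bits;

    if (words <= 0)
        return;
    if (words > MAX_POOLWORDS)
        words = MAX_POOLWORDS;

    /* Remember how much entropy the primary pool could honestly supply. */
    honest_bits = primary->entropy_count;

    extract_entropy(primary, tmp, (size_t)words * sizeof(uint32_t));
    add_entropy_words(sec, tmp, words);

    /* Credit no more than the primary really had, instead of words * 32. */
    if (honest_bits > words * 32)
        honest_bits = words * 32;
    credit_entropy_store(sec, honest_bits);
}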

P.S. /dev/urandom should probably also be changed to use an entirely separate pool, which then periodically pulls a small amount of entropy from the primary pool as necessary. That would make /dev/urandom slightly more dependent on the strength of SHA, while causing it to not draw down as heavily on the entropy stored in /dev/random, which would be a good thing.
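
A conceptual sketch of that arrangement follows; the structure, names, and reseed numbers are entirely hypothetical, and nothing here is the existing driver code.

/*
 * Hypothetical sketch: /dev/urandom gets its own pool, reseeded with a
 * small, fixed draw from the primary pool only after it has produced a
 * fair amount of output, so heavy /dev/urandom use no longer drains the
 * entropy backing /dev/random.
 */
#include <stddef.h>
#include <stdint.h>

struct entropy_store;                   /* the existing pool type */

/* Stand-ins for the driver's real helpers -- prototypes only. */
void extract_entropy(struct entropy_store *r, void *buf, size_t nbytes);
void add_entropy_words(struct entropy_store *r, const uint32_t *buf, int nwords);

static struct entropy_store *primary_pool;
static struct entropy_store *urandom_pool;
static size_t urandom_output_since_reseed;

#define URANDOM_RESEED_BYTES    16          /* small draw on the primary */
#define URANDOM_RESEED_INTERVAL (64 * 1024) /* output between reseeds    */

/* Called on every read of /dev/urandom. */
void urandom_read(uint8_t *buf, size_t nbytes)
{
    if (urandom_output_since_reseed >= URANDOM_RESEED_INTERVAL) {
        uint32_t seed[URANDOM_RESEED_BYTES / 4];

        /* Pull only a little from the primary pool, periodically. */
        extract_entropy(primary_pool, seed, sizeof(seed));
        add_entropy_words(urandom_pool, seed, sizeof(seed) / 4);
        urandom_output_since_reseed = 0;
    }

    /* Output strength now rests on SHA, via the urandom pool only. */
    extract_entropy(urandom_pool, buf, nbytes);
    urandom_output_since_reseed += nbytes;
}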