If somebody (else) runs a process constantly sucking numbers from /dev/random on an (maybe your) essentially "idle" machine, i.e. with little activity on keyboard, disk, network, etc., can you still guarantee that _you_ still get sufficiently good random numbers from /dev/random, to prevent any attacks, even if this "somebody else" communicates these numbers to an assumed attacker? (Do not assume that you can use e.g. the Pentium time stamp register).

If you have a "bad guy" running on your machine, they can constantly suck numbers from /dev/random. This will cause a "denial of service attack", since /dev/random will only issue random numbers if sufficient entropy is available to generate them.

So an application which uses /dev/random can block; if the application does not want to block, it can open /dev/random in non-blocking mode (usually recommended). However, this does not answer the question of what to do when /dev/random has been exhausted. The right course of action is probably to give the user a warning message and exit.

While this may not sound entirely satisfactory, consider what else an attacker could do if they have access to your machine. (a) they could try breaking in as root, (b) they could do resource starvation by running a program which does:

	while (1) {
		cp = malloc(1024 * 1024);	/* 1 megabyte */
		touch_all_memory(cp);
		fork();
	}

(c) they could break in using some neglected hole in (pick your choice of) sendmail, /proc, NIS, NFS, etc., etc., etc.

In the long run, a system which is doing fair resource allocation to prevent one user from grabbing all available CPU, virtual memory, and other resources will also have to treat /dev/random as a valuable resource whose use must be controlled, to prevent one user from grabbing all available entropy. However, this sort of resource control is hard to do right, especially if you want an efficient system! Given that we don't even handle memory exhaustion terribly well at the moment, random number exhaustion is a similar (unsolved) problem in Linux.

When we solve the general resource allocation problem, it should not be terribly difficult to extend it to solve the /dev/random allocation problem. Why hasn't it been addressed in Linux so far? I suspect because there aren't that many Linux systems doing serious time-sharing. We have machines which are network servers, and single-user desktop machines, but for those machines things like quotas and resource allocation aren't as important. While there are some time-sharing machines using Linux, they tend to be in the minority.