Posted by Unknown Lamer on Monday October 14, 2013 @08:59PM from the proofs-ahead dept.

Okian Warrior writes "As a follow-up to Linus's opinion about people skeptical of the Linux random number generator, a new paper analyzes the robustness of /dev/urandom and /dev/random. From the paper: 'From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom. In particular, we show several attacks proving that these PRNGs are not robust according to our definition, and do not accumulate entropy properly. These attacks are due to the vulnerabilities of the entropy estimator and the internal mixing function of the Linux PRNGs. These attacks against the Linux PRNG show that it does not satisfy the "robustness" notion of security, but it remains unclear if these attacks lead to actual exploitable vulnerabilities in practice.'"
Of course, you might not even be able to trust hardware RNGs. Rather than simply proving that the Linux PRNGs are not robust thanks to their run-time entropy estimator, the authors provide a new property for proving the robustness of the entropy accumulation stage of a PRNG, and offer an alternative PRNG model and proof that is both robust and more efficient than the current Linux PRNGs.

Linus signs off on many changes every day. He does expect you to read the code before trying to change it. That was the problem before: someone put up a change.org petition that made it clear they had no idea how it worked.

I just read TFA, the associated paper, and the petition. Linus was right: the petition was pointless, and motivated by confusion on the petitioner's part. However, the paper points out some scary issues in the Linux PRNG. It's tin-foil-hat stuff, but it shows how one user on a Linux system could write a malicious program that drains the entropy pools and then feeds the entropy pool non-random data which Linux would estimate as very random. If this attack were done just before a user uses /dev/random

No, RNGs are easy. Super easy. Just take a trustworthy source of noise, such as zener diode noise, and accumulate it with XOR operations. I built a 1/2-megabyte/second RNG that exposed a flaw in the Diehard RNG test suite. All it took was a 40 MHz 8-bit A/D conversion of amplified zener noise XORed into an 8-bit circular shift register. The Diehard tests took 10 megabytes of data and reported whether they found a problem. My data passed several times, so I ran the tests thousands of times and found that one test sometimes failed on my RNG data. It turns out Diehard had a bug in that test. Sometimes the problem is in the test, not the hardware.
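The rotate-and-XOR accumulator described above can be sketched in software; this is a hypothetical reconstruction (the `accumulate` helper and its parameters are mine, and Python's `random` stands in for the amplified zener/ADC samples):

```python
import random

def accumulate(samples, rounds_per_byte=80):
    """Fold 8-bit ADC samples into an 8-bit register: rotate left one bit,
    then XOR in the sample; emit a byte every rounds_per_byte samples."""
    reg = 0
    out = bytearray()
    for i, s in enumerate(samples, 1):
        reg = ((reg << 1) | (reg >> 7)) & 0xFF  # 8-bit rotate left
        reg ^= s & 0xFF                          # mix in the noise sample
        if i % rounds_per_byte == 0:
            out.append(reg)
    return bytes(out)

noise = [random.getrandbits(8) for _ in range(800)]  # stand-in for zener noise
print(len(accumulate(noise)))  # 800 samples / 80 per byte = 10 bytes
```

With 80 samples per output byte and the register rotating one bit per sample, each output bit ends up as an XOR of 10 different samples, matching the description in the comment.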

The nice thing about randomness, though, is that it adds up. If you XOR one stream of hopefully random bits with another stream of hopefully random bits, you get a result that is at least as random as the better of the two streams, and quite possibly better than either. It's a rare and precious thing in cryptography: something you can't make worse by messing up. At worst you make no difference.
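A two-line sketch of that property: XOR with an all-zero (maximally non-random) stream leaves the good stream untouched, so the combination can't be worse than the better input.

```python
import os

def xor_combine(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length streams; the result is at least as
    unpredictable as the stronger of the two inputs."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

weak = bytes(16)            # worst case: a totally broken source (all zeros)
strong = os.urandom(16)     # a hopefully-good source
combined = xor_combine(weak, strong)
print(combined == strong)   # True: the broken source did no damage
```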

So if you're paranoid, personally come up with a ridiculously long phrase (you don't need to remember it), feed it through a key derivation function, and use the result in a stream cipher with proven security guarantees (in particular, one that passes the next-bit test [wikipedia.org] for polynomial time). Instead of using this directly, XOR it together with a source of hopefully random stuff.
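A sketch of that pipeline under stated assumptions: PBKDF2 as the key derivation function and SHAKE-256 as a stand-in keystream (a real deployment would use a vetted stream cipher such as ChaCha20); the salt, iteration count, and lengths here are illustrative.

```python
import hashlib
import os

passphrase = b"a ridiculously long phrase you made up and wrote down somewhere"
# Key derivation step: stretch the phrase into a fixed-size key.
key = hashlib.pbkdf2_hmac("sha256", passphrase, b"illustrative-salt", 200_000)

# Keystream step: SHAKE-256 used here as an illustrative extendable-output
# function, NOT a recommendation over a real stream cipher.
keystream = hashlib.shake_256(key).digest(32)

# Finally, XOR with a hopefully-random hardware-backed source.
hw_random = os.urandom(32)
mixed = bytes(a ^ b for a, b in zip(keystream, hw_random))
print(len(mixed))  # 32
```

Even if the OS source turns out to be compromised, the attacker still has to break the keyed stream; even if your phrase leaks, the OS source still protects you.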

If you write to /dev/random, this is more or less what happens. Write to it to your heart's content; it can only make things better, not worse. (This is as I recall it; please check with an independent source before you rely on it.)
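On Linux, that mixing looks roughly like this: writes to /dev/random are folded into the pool, but without the root-only RNDADDENTROPY ioctl they do not increase the kernel's entropy estimate, so the worst they can do is nothing.

```python
# Mix caller-supplied data into the kernel's entropy pool (Linux-specific).
# Any user may write; the data is mixed in but not credited as entropy.
data = b"my extra hopefully-random data"
with open("/dev/random", "wb") as pool:
    written = pool.write(data)
print(written)  # number of bytes mixed into the pool
```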

Voila, no matter what NSA has done to your HRNG chip, this door is secured. Your time is better spent focusing on the other doors, or the windows.

(But you should be very careful in using HRNG output directly. I am very surprised to read that some open source OSes disable the stream cipher if a HRNG is present - this is a very bad idea!)

Pick two random dates between 1950 (or earlier... arbitrary cut-off) and today. Then go to that date and find all the sports scores from that day. Do some random math on those scores independently. Then take those two results and do some more random math between the two.

Add in more Nth days as you please.

That enough entropy for you? I guess it might not be. I suppose you could also factor in which days of the week the Cleveland Browns are likely to win on, since that is definitely random.

Good for you. That is still not a viable solution for generating cryptographic keys, IVs, salts, and so on. Two drawbacks with your idea:

1. Too slow. You need far more random data than a zener diode can generate. You could combine many of them, but then you need to combine them in the right way.

2. Unreliable. Zener diodes are easy to affect with temperature, and you need to make sure that hardware flaws don't make them produce 1 more often than 0 (or the other way around).

This is why we need software RNGs. We take a good hardware-based seed from multiple sources (combined using SHA256 or something), and then use that to seed the CSPRNG (not just a PRNG). The CSPRNG then generates a very long stream of secure random data which can then be used.
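A minimal sketch of that combine-then-expand idea (the source list and lengths are illustrative, and SHAKE-256 plays the role of the CSPRNG's output stage; the kernel's actual mixing is different):

```python
import hashlib
import os
import time

# Gather several raw sources; hashing them together means no single
# weak or compromised source fully controls the seed.
sources = [
    os.urandom(32),                        # stand-in for a HW noise source
    str(time.perf_counter_ns()).encode(),  # timing jitter
    str(os.getpid()).encode(),             # a little system state
]
seed = hashlib.sha256(b"".join(sources)).digest()

# Expand the 32-byte seed into a long output stream (illustrative only).
stream = hashlib.shake_256(seed).digest(64)
print(len(seed), len(stream))  # 32 64
```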

I'm not too pleased about the design of Fortuna, but it seems like one of the better choices for how to combine HW input and generate an output stream.

The TCO of a randomly measured zener diode or other noise source is likely to be pretty darn close, cycle to cycle, as it takes a fair TCO delta to produce much randomization change. Therefore it is NOT so easy to change unless you apply a lot of temperature -- to the same diode. Change the family, or even the diode within a family (although less so within a batch), and you can get some delta.

I agree to all suggestions that multiple hashing techniques can be used to prevent filling the entropy pools, and multiple

There are all kinds of ways to design the hardware to prevent problems besides the obvious of measuring various environmental variables like temperature.

Feedback from the output of each slicer can adjust the comparison threshold to produce an output with an equal number of high and low states over an arbitrary period of time. This technique is commonly used for closed loop control of duty cycle in applications requiring precision waveform generation. The slicer threshold can be monitored as another health

It seems unlikely that you would get good results from such a setup without whitening the data, but you don't mention doing that. Did you try it with anything other than the rather old Diehard tests? Do you know they are not an indication of cryptographically secure randomness? Do you have a schematic that could be reviewed for errors?

I built this in the mid '90s, and at the time the old DOS-based Diehard tests were the best I could find. Certainly the stream should be whitened before being used in cryptography, but the data directly from the board passed the Diehard tests without whitening. However, the board XORed 80 bits from the A/D to produce a single output bit (or more; this was software controllable). Since the 8-bit randomness accumulator was rotated 1 bit every sample, each output bit was XORed 10 times with each of the A/D outputs. I

Android. Many embedded systems. Many micro systems, such as tomsrtbt or similar (now virtually unneeded, due to the lack of floppy discs on new computers and the prevalence of booting off CDs or USB flash drives). Many lightweight systems [wikipedia.org], such as Damn Small Linux. Etc.

I don't know of any distros that completely eschew GNU. The two are very tightly integrated: originally the kernel was written to run GNU, and even now it cannot be built with anything but gcc. While I believe they have done much to contribute to the free software we care about, loads of people here on /. have some disdain for GNU, and with that in mind might want to ban it from their systems. You can do this by replacing the individual components. The BSD userland has been successfully ported to Linux, and with P

Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor. Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough. -- Eric S. Raymond [3]

I didn't even click on the link and knew it was someone linking xkcd. It's not clever. It's not funny. Just the subject containing something about RNGs, with a link under it and not even a short, useless, one-sentence post, shows the kind of unoriginal, uninspired person making the post.
It was funny to read when it came out. It's even funny when clicking on the Random button on the site and seeing it. It's NOT funny when someone links to it from a one-sentence post and thinks they're so fucking clever to have discovered xkcd.
You probably still use lmgtfy and think you're so damn clever.
It means in real life, you're an unoriginal hipster doofus.
Got anything to do with sanitizing inputs to a SQL database, etc.? Link to Bobby Tables. Got a nerd-project slow-ass turing machine? Like a minecraft logic circuit from redstone? Link to the one where it's some guy alone in the world making a computer out of rocks. Got a story about password security or encryption? Link to the one where they beat the password out of the guy with a wrench.
Fuck off. You're not clever.

At what scope/scale of time, or range of values, does it really matter if a PRNG is robust? A PRNG seeded by a computer's interrupt count, process activity, and sampled I/O traffic (such as audio input, fan-speed sensors, and keyboard/mouse input, which I believe is a common seeding method) may be deemed sufficiently robust if polled only once a second, or for only 8 bits of resolution. Exactly how much less robust does it get if you poll the PRNG, say, 1 million times per second, or in a tight loop? Does it get more or less robust when the PRNG is asked for a larger or smaller bit field?

Unless I'm mistaken, the point is moot when the only cost of having a sufficiently robust PRNG is to wait for more entropy to be provided by its seeds or to use a larger modulus for its output, both rather trivial in the practical world of computing.

The headline is somewhat sensational. There is a pretty wide gulf between an abstract and rather arbitrary metric and a practical vulnerability. This is kinda the security equivalent of pixel peeping, a fun mathematical exercise at best and pissing contest at worst, but ultimately not all that important.

Right up until someone figures out how to use those "ultimately not all that important" technical weaknesses to good use. And then suddenly every single last deployed system might turn out to be vulnerable. Which might already be the case, but has been kept carefully secret for a rainy day.

Of course, usually it's a lot of "theoretical" shadow dancing, and given the nature of the field some things will indubitably remain unclear forever (unless another Snowden stands up, who just happens to give us a clear-e

The headline is somewhat sensational. There is a pretty wide gulf between an abstract and rather arbitrary metric and a practical vulnerability. This is kinda the security equivalent of pixel peeping, a fun mathematical exercise at best and pissing contest at worst, but ultimately not all that important.

I am definitely not a statistician; but there may be applications other than crypto and security-related randomization that break (or, rather worse, provide beautifully plausible wrong results) in the presence of a flawed RNG.

It's trivial to show that typical PRNGs are frequently not good enough for the Monte Carlo simulations and testing they are thrown at. An N-bit PRNG can only produce at most half of all (N+1)-bit sequences, a quarter of all (N+2)-bit sequences, an eighth of all (N+3)-bit sequences, and so on...

Given this, obviously the larger N is, the better. Of course, most standard libraries use a 32-bit (or smaller) generator, and most programmers are lazy or uneducated in the matter... so only half of all 8-billion-to-one shots are even possible with that 32-bit generator, and each of those can only be sampled if the generator happens to be in the perfect 4-billion-to-one state...
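The coverage argument above can be checked exhaustively on a toy generator with a few bits of state (the generator here is made up purely for illustration):

```python
def sequences(n_state_bits, seq_len):
    """Enumerate every output sequence a deterministic generator with
    n_state_bits of state can produce (toy generator: x -> 5x+3 mod 2^n,
    emitting the top bit of the state each step)."""
    mod = 1 << n_state_bits
    seqs = set()
    for seed in range(mod):
        x, bits = seed, []
        for _ in range(seq_len):
            bits.append((x >> (n_state_bits - 1)) & 1)
            x = (5 * x + 3) % mod
        seqs.add(tuple(bits))
    return seqs

n = 4
reachable = sequences(n, n + 1)
# A generator with 2^n states can reach at most 2^n of the 2^(n+1)
# possible (n+1)-bit sequences -- at best half, exactly as argued above.
print(len(reachable), 2 ** (n + 1))
```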

As the saying goes... random numbers are too important to be left to chance.

Useless for you. But the NSA might disagree. The math is what keeps them at bay. If the math shows cracks, it'd be certain that the NSA has figured out some kind of exploit. Keep in mind that the NSA doesn't rely on just one technique, but can aggregate multiple data sources. So those interrupts that the RNG relies on can be tracked, and the number that results can be narrowed to a searchable space. Keep in mind that 2^32, which is big by any human standard, is minuscule for a GPU.

Exactly. If the recent leaks have taught us anything it's that the NSA has managed to produce real working exploits where previously such issues have just been discarded as nothing to worry about because they were "only theory".

At this point it's stupid to assume that just because you can't come up with a working exploit that someone with the resources the NSA has hasn't already.

It's of course not even just the NSA people should worry about; it seems naive to think the Russians, Chinese, et al. haven't put similar resources into this sort of thing. The difference is they just haven't had their Snowden incidents yet. I'd imagine the Chinese and Russians have exploits for things the NSA hasn't managed to break, just as the NSA has exploits for things the Chinese and Russians haven't managed to break. Then there are the Israelis, the French, the British, and many others.

It's meaningless to separate theory and practice at this point. If there's a theoretical exploit then it should be fixed, because whilst it may just be theoretical to one person, it may not to a group of others.

Your attitude is exactly what is wrong with security. Quite a few still use MD5 because "it is not that broken". Linus really should take a look at this new, provably better method and adopt it ASAP, not wait until it bites hard.

Lots of people still use MD5, because it's widely available, fast, and good enough for what they need. Just like CRC32 is good enough for certain tasks. MD5 may be broken for cryptographic purposes, but it's still fine to use for things like key-value stores.

First of all, not all computers are PCs. A server running in a VM has no audio input, fan speed, keyboard, mouse, or other similar devices that are good sources of entropy. A household router appliance running Linux not only has no audio input, fan, keyboard, or mouse -- it doesn't even have a clock it can use as a last resort source of entropy.

Second, there are many services that require entropy during system startup. At that point there are very few interrupts, no mouse or keyboard input yet, and some of the sources of entropy may not even be initialized yet.

One problematic situation is initializing a household router. On startup it needs to generate random keys for its encryption, TCP sequence numbers, and so on. Without a clock, a disk, a fan, or any peripherals, the only good source of entropy it has is network traffic, and there hasn't been any yet. A router with very little traffic on its network may take ages to see enough packets to gather a decent amount of entropy.

VMs do have good sources of entropy... while a server indeed has no audio / fan / keyboard / mouse inputs (whether physical or virtual), a server most certainly does have a clock (several clocks: on x86, TSC + APIC + HPET). You can't use skew (as low-res clocks are implemented in terms of high-res clocks), but you still can use the absolute value on interrupts (and servers have a lot of NIC interrupts) for a few bits of entropy. Time is a pretty good entropy source, even in VMs: non-jittery time is just too expensive to emulate, the only people who would try are the blue-pill hide-that-you-are-in-a-VM security researchers.

The real security concern with VMs is duplication... if you clone a bunch of VMs but they start with the same entropy pool, then generate an SSL cert after clone, the other SSL certs will be easily predicted. (Feeling good about your EC2 instances, eh?) This isn't a new concern - cloning via Ghost has the same problem - but it's easier to get yourself into trouble with virtualization.
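A toy stand-in for the cloning problem: two "clones" that start from an identical pool state derive identical keys (Python's `random` here is only a model of shared state, not a CSPRNG).

```python
import random

# Two VMs cloned from the same image start with the same pool state...
clone_a = random.Random(0xC0FFEE)
clone_b = random.Random(0xC0FFEE)

# ...so any "key" they derive after the clone is identical and hence
# trivially predictable from the other clone.
key_a = clone_a.getrandbits(128)
key_b = clone_b.getrandbits(128)
print(key_a == key_b)  # True
```

The fix is to re-seed each clone with fresh, per-instance entropy before generating anything secret.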

The real security concern with VMs is duplication... if you clone a bunch of VMs but they start with the same entropy pool, then generate an SSL cert after clone, the other SSL certs will be easily predicted.

Yeah, I encountered that the other day. Built a VM, took a snapshot, did some stuff, reverted, did the same stuff. I was testing a procedure doc I was writing. Part of the procedure was creating an SSL cert, and I got an identical one on both attempts. That seems a little fishy to me; I would expec

I think it is past time for CPUs to provide hardware random numbers. Via CPUs have done this [slashdot.org] for years, but Via CPUs are just too slow for most uses. (I used to run my mail server on a Via C3... I am a lot happier now that my server runs on an AMD low-power dual-core.)

Recent Intel chips do have some sort of random number generator (RdRand [wikipedia.org]).

Entropy Broker: A project to allow one computer to act as a randomness server to others. Could use any actual hardware machine to seed any number of VMs.

See also the comments in the LWN article, where someone with the user name "starlight" simply sends random data over SSH and then the receiving computer uses rngd to mix that data into the entropy pool. Simple, and simple is good.

It probably has a radio with signals of varying strengths and packet losses.

This is true. Although I suspect a lot of the time the CPU isn't going to get to see a lot of that stuff (although with most wireless chipsets being softmac devices, maybe it does?)

It also probably has a multitude of routed, nonrouted, and broadcast packets on its various network interfaces.

In a dark-start situation (coming up from a wide area power outage), possibly not. Imagine an ADSL router whereby all the clients are connected via wireless. The clients can't talk to the router until it has managed to initialise the wireless encryption subsystems, which requires entropy. Even on a wired network, you may well only see a few DHCP requests from workstations. Obviously rebooting a router onto a large active network is different to rebooting the entire network at the same time, as would happen in a power outage.

And on top of that, it's connected to a global network of nodes with varying packet delivery times, where at any time it can ask for a multitude of continuously changing and at least partially stochastic metrics (e.g. exchange rates, the 4th word of news headlines, YouTube +1 counts, etc., etc.).

A global network that it may well be unable to access until it has enough entropy to make a cryptographic handshake with the upstream peer.

The output of a software RNG, aka PRNG (pseudo random number generator), is completely determined by a seed. In other words, to a computer (or an attacker), what looks like a random sequence of numbers is no more random than, let's say,

(2002, 2004, 2006, 2008, 2010, 2012...)

However, the PRNG sequence is often sufficiently hashed up for many applications such as Monte Carlo simulations.

When it comes to secure applications such as cryptography and Internet gambling, things are different. Now a single PRNG sequence is pathetically vulnerable, and one needs to combine multiple PRNG sequences, using seeds that are somehow independently produced, to provide a combined stream that hopefully has acceptable security. But using a COTS PC or phone doesn't allow developers to create an arbitrary stream of independent RNG seeds, so various latency tricks are used. In general, these tricks can be defeated by sufficient research, so a secure service often relies partly on "security through obscurity", i.e. not revealing the precise techniques for generating the seeds.

This is hardly news. For real security you need specialized hardware devices.

The previous discussion was about whether or not to remove the Intel hardware RNG from the equation, because Intel is an American company and, as such, subject to NSA requirements. I.e., nobody knows if the Intel hardware RNG is a true random number generator or a pseudo-random number generator with a secret key that only the NSA knows.

(Some would claim that such a suggestion is tinfoil-hat material, but lately Edward Snowden has been making the tinfoil-hat crowd say "damn, it's worse tha

You cannot, generally, check how a HW RNG built into a chip works.

And heck... if the one in the Raspberry Pi is good enough for you, then the Intel equivalent should probably suffice. However, what good are real random numbers when you want a repeatable string of random numbers for which the seed is hard to guess from the numbers...

So, with all the 'revelations' and discussion surrounding this and encryption over the past several weeks, I've been wondering if a local hardware-based entropy solution could be developed. By 'solution', I mean an actual piece of hardware that takes various static noise from your immediate area, ranging from 0-40+ kHz (or into the MHz or greater?), both internal and external to the case, and with that noise builds a pool for what we use as /dev/random and /dev/urandom. Perhaps each user would decide what frequencies to use, with varying percentages contributed to the pool, etc., etc.

It just seems that with so much 'noise' going on around us in our everyday environments, we have an opportunity to use some of it as an entropy source. Is anyone doing this? Because it seems like a fairly obvious implementation.

Yes. Hardware high-speed super-random number generators are trivial. I did it with amplified zener noise through an 8-bit 40MHz A/D XORed onto an 8-bit ring-shift register, generating 0.5MB/second of random noise that the Diehard tests could not differentiate from truly random numbers. XOR that with an ARC4 stream, just in case there's some slight non-randomness, and you're good to go. This is not rocket science.

Two unstable oscillators, at different frequencies, work well, with an appropriate whitener and shielding. You need to determine when measurable entropy in the raw stream falls below a certain level and cease output. Use that to seed a pure CSPRNG; the Intel generator uses this construct with an AES-based CTR DRBG (which seems to be OK, if you trust AES, unlike the harebrained and obviously backdoored Dual_EC_DRBG). You als

No, a 120 MHz radio signal cannot bias my generator unless the signal is so strong that the noise signal goes outside the ADC input range. Simply adding non-random bias to a random signal does not reduce the randomness in the signal. Simply XOR all the ADC output bits together for 80 cycles, where the low 4 bits correlate sample-to-sample less than 1%, and you get an error of less than 1 part in 10^50 of non-randomness. Add any signal you like to the noise, and it won't make any difference, so long as
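The claimed error bound can be sanity-checked with the piling-up lemma: XORing n independent bits, each with bias eps (P[bit = 1] = 1/2 + eps), leaves a residual bias of 2^(n-1) * eps^n. A quick sketch (the eps = 0.1 figure is illustrative, not taken from the comment):

```python
def xor_bias(eps, n):
    """Residual bias of the XOR of n independent bits, each with bias eps
    (piling-up lemma): 2**(n-1) * eps**n."""
    return 2 ** (n - 1) * eps ** n

# Even a fairly large per-sample bias collapses after 80 XOR rounds.
print(xor_bias(0.1, 80))  # ~6e-57, well below 1 part in 10^50
```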

Dual_EC_DRBG did well against Diehard but is known to be backdoored by the NSA. The Linux PRNG does well against it too. In other words Diehard is known to be a poor test for a cryptographically secure PRNG.

Unfortunately there is no simple suite of tests you can perform to make this determination. Zener noise is a good source of entropy but the chances of your A/D being unbiased, or that XORing with an ARC4 stream is enough to remove the bias completely is slim. At best you created another useful source of

"Not so random" means that you can mathematically calculate how likely it is that you can predict the next number over a long time. If you can predict the next number with an accuracy of 1 in 250 while the random generator provides 1 in 1000 then the random generator isn't that random.

Many random generators pick the previous value as the seed for the next value, but that is definitely predictable. Introduce some additional factors into the equation and you lower the predictability. One real problem with random generators that use the previous value as a seed without adding a second factor is that they can't generate the same number twice or three times in a row (which is actually allowed under the rules of randomness).
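The no-consecutive-repeats point can be checked on a classic LCG, whose entire state is its last output: it can only emit the same value twice in a row at a fixed point, after which it would repeat forever, and this particular LCG has no fixed point at all.

```python
def lcg(x):
    """Classic LCG step (glibc-style constants); the whole state is x."""
    return (1103515245 * x + 12345) % 2**31

x, repeats = 42, 0
for _ in range(100_000):
    nxt = lcg(x)
    if nxt == x:        # same value twice in a row = fixed point
        repeats += 1
    x = nxt
print(repeats)  # 0: a true random source would occasionally repeat
```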

It's a completely different thing to create a true random number. For a 32-bit number you essentially need one generator source for each bit that doesn't care how the bit was set previously. It is a bit tricky to create that in a computer in a way that also allows fast access to a large quantity of random numbers and prevents others from reading the same random number as you do.

For computers it's more a question of "good enough" to make prediction an unfeasible attack vector.

Because whilst they may have had the time and resources to research it and come up with a solution, they don't necessarily have the time and resources to fight their patch past Commander Torvald's ego and army of protective zealots.

Many bright academics simply have better things to do than get involved in that sort of childishness. Research is what they enjoy and they have neither the time nor interest in political bickering to get involved, especially if it flies in the face of logic - i.e. someone refusin

Because whilst they may have had the time and resources to research it and come up with a solution, they don't necessarily have the time and resources to fight their patch past Commander Torvald's ego and army of protective zealots.

Or maybe, now that they have actually written a paper that may or may not define a real issue with the kernel in depth (I'll wait and see how it is peer reviewed, as I have no idea about theoretical weaknesses created by different ways of combining sources of entropy to create random data), rather than some moronic petition based on a completely incorrect guess at how the system works with no real investigation of the actual system involved, the kernel team will actually l

You're just focussing on one incident though, and there have been hundreds over the years. Often Linus is right and is just dealing with an idiot, but sometimes he's not.

But I'm simply reasoning a response to the GP's post as to why people may wish to steer clear of contributing a fix once they've written a paper. If a clique is unnecessarily cruel to anyone, no matter how stupid that person, then that's going to put people off interacting with it directly. As far back as the original Tanenbaum debate he's

I swear, if I worked for the NSA I'd be pushing out headlines like this to make people ignore real security issues...

The article is a highly academic piece that analyzes the security of the Linux RNG against a bizarre and probably pointless criterion: what is an attacker's ability to predict the future output of the RNG, assuming he knows the entire state of your memory at arbitrary attacker-selected points in time and can add inputs to the RNG? Their analysis that the Linux RNG is insecure under this (rather contrived) model rests on an _incorrect_ assumption that Linux stops adding to the entropy pool when the estimator concludes that the pool is full. Instead they offer the laughable suggestion of using AES in counter mode as a "provably secure" alternative.

(presumably they couldn't get a paper published that said "don't stop adding entropy just because you think the pool is at maximum entropy", either because it was too obviously good a solution or because their reviewers might have noticed that Linux already did that)

Their analysis that the linux rng is insecure under this (rather contrived) model rests on an _incorrect_ assumption that Linux stops adding to the entropy pool when the estimator concludes that the entropy pool is full.

Exactly. The maintainer of the /dev/random driver explained this, and a lot more about this paper, here [ycombinator.com].

Yeah, because random number generators are like pr0n for the masses, and will distract them from boring stuff like the government monitoring and storing that really stupid thing they said about their boss, right next to their cybersex session with their mistress.

Somebody's gotta implement it in hardware. Do you trust Intel or AMD? I don't. If I can run an OSS analyzer on it and the results come out positive, I might be convinced. But I'm not sure this feature even exists for any consumer chips.

Well, Intel and VIA have such things integrated into their processors now. Unfortunately, they (at least Intel - not sure how VIA's implementation worked) decided to whiten the data in firmware - you run a certain instruction that gives you a "random" number, instead of just polling the diode. With all the current furor over the NSA stuff, many people are claiming that it *is* hacked.

Without whitening you're likely to get biased output, which is undesirable in most of the contexts where you'd want a truly random number in the first place. You can do whitening in software, which is fine for things like seeding a PRNG where you've already got a lot of code related to managing incoming entropy, but it makes the instruction difficult to use in other contexts where you just want a random number for direct consumption. On the other hand, pre-whitened data is just as useful for PRNG seeds *and
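One classic software whitening trick is Von Neumann debiasing, which turns a biased-but-independent bit stream into an unbiased one at the cost of throughput (sketch, with a simulated 80/20-biased source):

```python
import random

def von_neumann(bits):
    """Von Neumann debiasing: examine non-overlapping bit pairs,
    emit 0 for the pair (0,1), 1 for (1,0), and drop (0,0) and (1,1)."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

rng = random.Random(1)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]  # biased source
white = von_neumann(raw)
print(round(sum(white) / len(white), 2))  # ~0.5 despite the 80/20 input
```

The cost is visible in the output length: with an 80/20 source, only about a third of the input bits survive, which is one reason hardware designers prefer to whiten before exposing the data.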

Even if they didn't whiten the data, you could never trust that they didn't bias the measurement somehow. Besides which, TFA isn't questioning the sources of entropy, of which there are many in a typical Linux system; it's questioning the way they are combined.

Laws don't mean you will know for sure how a measurement will turn out. One of the fundamental tenets of quantum mechanics is that you cannot determine in advance in which state a particle will be. If it is in a superposition of 0 and 1, for instance, there will be a likelihood for it to, once measured, be 0 or 1, and that likelihood may be biased, but you cannot know any of this unless you created the bias in the first place. Even then, even if you know the exact likelihood, unless the particle is entirely

There are quantum effects involved in this process [wikipedia.org]. Quantum effects (more specifically wave function collapse [wikipedia.org]) are thought to be a source of true, inherent and perfectly unpredictable randomness. Throw that into a massive (from an atomistic point of view) chaotic system and you get a gigantic mess that is impossible to simulate with sufficient precision to predict the noise that comes out (and far, far beyond our computational means even if you don't care about precision).

Avalanche diodes conduct bursts of current at random times. A true random number generator simply measures the time between those bursts, then scales that value to whatever numerical range you need.

You can also time the clicks produced by a geiger-mueller tube detecting beta radiation from a radioactive source, but that requires a lot more difficult-to-integrate hardware.

Even if you base the final random number on a truly random source you have to ensure that the scaling routine doesn't introduce any sort of bias into the final value. THAT is the tricky part.
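One way to keep that scaling step bias-free is rejection sampling: discard raw values that would wrap unevenly under the modulus. A sketch, with `os.urandom` standing in for the burst-timing source:

```python
import os

def unbiased_randint(n):
    """Scale a random byte to range(n) without modulo bias: reject raw
    values above the largest multiple of n that fits in a byte."""
    assert 0 < n <= 256
    span = 256 - (256 % n)   # e.g. for n=6, accept only bytes < 252
    while True:
        b = os.urandom(1)[0]
        if b < span:
            return b % n     # now every residue is equally likely

counts = [0] * 6
for _ in range(6000):
    counts[unbiased_randint(6)] += 1
print(counts)  # roughly uniform, ~1000 per bucket
```

A naive `b % 6` over all 256 byte values would favor 0-3 slightly (44 ways each) over 4-5 (42 ways each); the rejection step removes exactly that skew.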

Yes, there are apparently laws that approximate how it would work; however, to know the outputs one would need information about all the energy field states that make up the device's matter (and dark matter) at a given Planck time.

In the past, when I've needed a random number generator, I've used a single infrared LED and an IR photodiode connected to a serial port. Poll the difference between the time it takes a photon to build up, be emitted, and be detected multiple times, and use the lowest bits as the random

If the CSPRNG state is not compromised, the Linux random generators are secure. In fact, the property required for robustness is quite strong: recovery from compromise even if entropy is only added slowly. For example, the OpenSSL generator also does not fulfill the property. Fortuna does, as it is explicitly designed for this scenario.

I also have to say that the paper is not well written; the authors seem to believe that the more complicated the formalism, the better. This may also explain why there is no analysis of the practical impact: the authors seem not to understand what "practical" means and why it is important.

Here's a direct link to Ted's post [schneier.com]. The most interesting points are (if I understand them correctly):

1. The insecurity discussed in the paper is about how quickly the Linux entropy pool recovers from a compromised state. I.e., imagine that somebody somehow gains full read access to your computer's memory and reads your randomness pool (but kindly does not read all your private keys etc.), but then loses that access at some later point. How long does it take until the entropy pool has recovered enough entropy to b

I think the only question on my mind is: insecure for what, exactly? Generating public/private key pairs? Doing encryption for SSL/TLS?

I've been around computers for a good number of years and I know no computer can be truly random, but isn't there a point where we say "it's random enough"? Is this OP saying Linux's RNG isn't "random enough"? And my question is: what isn't it random enough for?

That's not quite the point. The two random devices are there not to be alternate sources of randomness, but to be a good source of provably random numbers gained from hardware randomness mixed into the entropy pool, and another source of cryptographically random numbers seeded from the first, though with perhaps orders of magnitude less entropy per output bit compared to input bits. The first source will stop outputting when it runs low on entropy.