The Random Sanity Project is a free, open-source service that helps secure the Internet by sanity-checking sources of randomness. If you are a CTO or system administrator responsible for a security-critical web server or application running on the Internet, you should consider using this service to alert you to catastrophic hardware or software failures that could completely compromise the security of your website or application.

Gavin Andresen, the former lead developer of bitcoin, is breaking his silence.

While in recent months he’s been more active on Twitter discussing the block size debate (even stamping his name on a new bitcoin scaling ‘agreement’), Andresen has largely been absent from the bitcoin developer community for about a year.

But, that doesn’t mean the prodigious worker, who did much to help build out bitcoin’s early developer team and market, hasn’t been busy.


Keeping an eye on bitcoin

In May 2015, a vulnerability in Blockchain’s Android bitcoin wallet left several users out of money. According to Softpedia, the vulnerability allowed duplicate bitcoin addresses to be created and given to different users. At its core, the problem was with Blockchain’s random number generator, random.org, which provided insufficient entropy on certain versions of the Android operating system.

And two years before, in August 2013, all bitcoin wallet applications on Android operating systems were potentially at risk when several vulnerabilities were found in another random number generator, Java’s SecureRandom.


Andresen has been working on the Random Sanity Project for about six months. According to him, the project is not intended to be a profit-making business. Instead, ideally, the project would be sponsored by an entity like the Linux Foundation to offer the service to anyone for free.

So how does Random Sanity work? Every system and every programming language has a way of getting random bytes – for instance, Linux has a special device file called ‘/dev/urandom’ and OpenSSL provides several random number generators (which Bitcoin Core uses).

Users of the Random Sanity Project can take those random numbers – from 16 to 64 bytes – and submit them to the service, which will return ‘true’ if the bytes look random, or ‘false’ if they don’t.
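As an illustration, a minimal client might look like the Python sketch below. The endpoint URL and the plain true/false response format are assumptions made for illustration; consult the project’s own documentation for the actual interface.

Code (Text):

import os
import urllib.request

# NOTE: endpoint URL and response format are assumptions for illustration;
# check the project's documentation for the actual interface.
RANDOM_SANITY_URL = "http://rest.randomsanity.org/v1/q/"

def looks_random(data: bytes) -> bool:
    """Ask the service whether a 16-64 byte sample looks random."""
    if not 16 <= len(data) <= 64:
        raise ValueError("the service accepts between 16 and 64 bytes")
    with urllib.request.urlopen(RANDOM_SANITY_URL + data.hex()) as resp:
        return resp.read().strip() == b"true"

# Sanity-check 32 bytes from the operating system's RNG.
print(looks_random(os.urandom(32)))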

“The problem of detecting whether your random numbers are good enough is a tricky problem,” Andresen told CoinDesk. “There are a bunch of ways you can screw up.”

Introduction
The haveged project is an attempt to provide an easy-to-use, unpredictable random number generator based upon an adaptation of the HAVEGE algorithm. Haveged was created to remedy low-entropy conditions in the Linux random device that can occur under some workloads, especially on headless servers. Current development of haveged is directed towards improving overall reliability and adaptability while minimizing the barriers to using haveged for other tasks.

What this means is that an attacker who can predict the output of your RNG — perhaps by taking advantage of a bug, or even compromising it at a design level — can often completely decrypt your communications. The Debian project learned this firsthand, as have many others. This certainly hasn’t escaped NSA’s notice, if the allegations regarding its Dual EC random number generator are true.

All of this brings us back to Snowden’s quote above, and the question he throws open for us. How do you know that an RNG is working? What kind of tests can we run on our code to avoid flaws ranging from the idiotic to the highly malicious?


Statistical Tests
If you look at the literature on random number generators, you’ll find a lot of references to statistical randomness testing suites like Diehard or NIST’s SP 800-22. The gist of these systems is that they look at the output of an RNG and run tests to determine whether the output is, from a statistical perspective, “good enough” for government work (very literally, in the case of the NIST suite).
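To give a flavour of what these suites actually do, here is a small Python sketch of the simplest test in the NIST suite, the frequency (monobit) test, which checks whether the counts of 0 and 1 bits are plausibly balanced. The helper name is ours; the formula follows SP 800-22 section 2.1.

Code (Text):

import math
import os

def monobit_p_value(bits: str) -> float:
    """NIST SP 800-22 frequency (monobit) test.
    Returns a p-value; below 0.01 the sequence is judged non-random."""
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)  # map bits to +/-1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

bits = "".join(format(b, "08b") for b in os.urandom(125))  # 1,000 bits
print(monobit_p_value(bits))        # typically well above 0.01
print(monobit_p_value("1" * 1000))  # all ones: p-value effectively zero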

Entropy and Randomness
Not all randomness is created equal. There are two sorts of randomness to think about: uniformity and unpredictability. A random number generator provides ‘uniform’ output if, run for long enough, every possible value comes up equally often. That’s useful for modeling random processes, but not good enough for security.

For computer security, random numbers need to be hard to guess: they need to be unpredictable. That unpredictability is quantified in a measure called entropy.

If a fair coin is tossed, it lands with equal probability on heads or tails (which can be thought of as 0 and 1). Because the two outcomes are equally likely, there’s no predictability in the coin’s ‘output’: we say each toss provides one bit of entropy.

An unfair coin toss provides less than one bit, since the outcome is easier to guess when you know the bias. Flipping a coin with heads on both sides provides no entropy, since the result can be guessed with absolute certainty.
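Those three coins correspond directly to the Shannon entropy formula, H = -Σ p·log2(p). A quick Python check (the function name is ours, purely for illustration):

Code (Text):

import math

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero terms."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))  # fair coin       -> 1.0 bit
print(shannon_entropy([0.9, 0.1]))  # biased coin     -> ~0.47 bits
print(shannon_entropy([1.0]))       # two-headed coin -> 0.0 bits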


Take a dip in the pool
On Linux, the root of all randomness is something called the kernel entropy pool. This is a large (4,096-bit) number kept privately in the kernel’s memory. There are 2^4096 possibilities for this number, so it can contain up to 4,096 bits of entropy. There is one caveat: the kernel needs to be able to fill that memory from a source with 4,096 bits of entropy. And that’s the hard part: finding that much randomness.

The entropy pool is used in two ways: random numbers are generated from it, and it is replenished with entropy by the kernel. When random numbers are generated from the pool, the entropy of the pool is diminished (because the person receiving the random number has some information about the pool itself). As the pool’s entropy diminishes with each random number handed out, the pool must be replenished.

Replenishing the pool is called stirring: new sources of entropy are stirred into the mix of bits in the pool.

This is the key to how random number generation works on Linux. If randomness is needed, it’s derived from the entropy pool. When available, other sources of randomness are used to stir the entropy pool and make it less predictable. The details are a little mathematical, but it’s interesting to understand how the Linux random number generator works, as the principles and techniques apply to random number generation in other software and systems.
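As a rough illustration of that generate-and-stir cycle, here is a deliberately simplified toy model in Python. It is not the kernel’s actual algorithm (the kernel uses its own mixing function and entropy accounting); it only shows the shape of the idea: output is derived from the pool by hashing, and new noise is hashed back in to keep the state unpredictable.

Code (Text):

import hashlib
import os
import time

class ToyEntropyPool:
    """Toy model of an entropy pool; not the kernel's real algorithm."""

    def __init__(self):
        # Stand-in for entropy gathered at boot.
        self.state = os.urandom(64)

    def stir(self, noise: bytes):
        # Mix fresh noise into the pool by hashing it with the old state.
        self.state = hashlib.sha512(self.state + noise).digest()

    def get_random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            # Derive output from the pool rather than exposing it directly.
            out += hashlib.sha512(b"output" + self.state).digest()
            # Handing out bytes leaks information about the state, so re-mix.
            self.stir(b"reseed")
        return out[:n]

pool = ToyEntropyPool()
pool.stir(time.monotonic_ns().to_bytes(8, "little"))  # e.g. timing jitter
print(pool.get_random_bytes(16).hex())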

The kernel keeps a rough estimate of the number of bits of entropy in the pool. You can check the value of this estimate through the following command:

Code (Text):

cat /proc/sys/kernel/random/entropy_avail

A healthy Linux system with a lot of entropy available will return a value close to the full 4,096 bits. If the value returned is less than 200, the system is running low on entropy.
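If you want a script to monitor this rather than a one-off check, the same figure can be read programmatically; a minimal Python sketch (Linux only, using the same thresholds as above):

Code (Text):

# Read the kernel's entropy estimate (Linux only).
with open("/proc/sys/kernel/random/entropy_avail") as f:
    entropy = int(f.read())

if entropy < 200:
    print(f"warning: entropy pool is low ({entropy} bits)")
else:
    print(f"entropy estimate: {entropy} bits")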


Running out of entropy
One of the dangers a system faces is running out of entropy. When the system’s entropy estimate drops to around the 160-bit level (the length of a SHA-1 hash), things get tricky, and how this affects programs and performance depends on which of the two Linux random number generators is used.

Linux exposes two interfaces for random data that behave differently when the entropy level is low. They are /dev/random and /dev/urandom. When the entropy pool becomes predictable, both interfaces for requesting random numbers become problematic.

When the entropy level is too low, /dev/random blocks and does not return until the level of entropy in the system is high enough. This guarantees high entropy random numbers. If /dev/random is used in a time-critical service and the system runs low on entropy, the delays could be detrimental to the quality of service.

On the other hand, /dev/urandom does not block. It continues to return the hashed value of its entropy pool even though there is little to no entropy in it. This low-entropy data is not suited for cryptographic use.
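The difference between the two interfaces can be observed directly. The Python sketch below opens each device non-blocking, so a /dev/random read that would otherwise stall instead fails immediately. (On modern kernels, 5.6 and later, /dev/random rarely blocks once the pool is initialized, so the ‘would block’ branch may never trigger; the behaviour described above is that of older kernels.)

Code (Text):

import os

def try_read(path: str, n: int = 16):
    # O_NONBLOCK turns a would-block condition into an immediate error
    # instead of stalling the process.
    fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
    try:
        return os.read(fd, n)
    except BlockingIOError:
        return None
    finally:
        os.close(fd)

print(try_read("/dev/urandom").hex())  # never blocks, always returns data
data = try_read("/dev/random")
print(data.hex() if data else "would block: entropy estimate too low")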

The solution to the problem is simply to add more entropy to the system.