Posted
by
Soulskill
on Friday September 13, 2013 @08:54AM
from the getting-in-before-the-rush dept.

DoctorBit writes "A team of researchers funded in part by the NSF has just published a paper in which they demonstrate a way to introduce hardware Trojans into a chip by altering only the dopant masks of a few of the chip's transistors. From the paper: 'Instead of adding additional circuitry to the target design, we insert our hardware Trojans by changing the dopant polarity of existing transistors. Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to most detection techniques, including fine-grain optical inspection and checking against "golden chips."' In a test of their technique against Intel's Ivy Bridge Random Number Generator (RNG) the researchers found that by setting selected flip-flop outputs to zero or one, 'Our Trojan is capable of reducing the security of the produced random number from 128 bits to n bits, where n can be chosen.' They conclude that 'Since the Trojan RNG has an entropy of n bits and [the original circuitry] uses a very good digital post-processing, namely AES, the Trojan easily passes the NIST random number test suite if n is chosen sufficiently high by the attacker. We tested the Trojan for n = 32 with the NIST random number test suite and it passed for all tests. The higher the value n that the attacker chooses, the harder it will be for an evaluator to detect that the random numbers have been compromised.'"

Silicon is just politics by other means. So presume both the Chinese and the West are trying to flood supply channels with compromised/counterfeit silicon in hopes of it finding its way into the other side's hardware.

All they need to do? It's already been done at the fab! Why else would this be coming out now? These researchers have been under a gag order for years and only now got bold enough to stand up to the NSA.

That only works if you've replaced enough other stuff in your computer, that the compromised chips don't have a code that the compromised Windows is programmed to ignore, that you didn't buy a compromised chip in the first place, etc...

Also, if I'm bothering with custom compromised chips, I might just have the CPU ID be reprogrammable on them, and bring with me a device capable of reading the code from the removed CPU and burning it into the replacement.

Are tinfoil hats on special this week? It's not very likely to happen to anybody who isn't a very big target, because to make such a modification they have to completely understand your chip design, know how you're going to use it, and judge that compromising YOUR chip design is sufficiently valuable to reap rewards.

If you consider a very widely used device, there's a greater likelihood of its being compromised, and it would more likely be done with the cooperation of the chip designers than otherwise, in which case it

Silicon wafers are generally "electrically passivated" before being diced and sent to packaging. This is often done by growing a reasonably thick layer of silicon dioxide on top to insulate the planar circuits below.

FWIW, often during pre-production, a small number of wafers are made w/o the passivation step so that engineers can use FIBs (focused ion beams) to modify the circuitry to assist in finding workarounds for bugs. The reason for this is that FIBs can't easily penetrate this layer w/o doing lots

How likely is it that the NSA or whoever already uses this? It seems to me that with many science fields, the agencies are more than happy to sit back and let someone else spend time and money to develop the tech, then they steal it, copy it, or as a last resort, buy it with taxpayer money. But then obviously, we wouldn't know if they ARE actually coming up with innovation, since they'd obviously keep it secret.

In general though, it seems like the best and brightest scientists have strong disincentives

Except that we know for sure that the NSA has made breakthroughs in the past, putting them years ahead of academia in cryptanalysis. They knew about differential cryptanalysis before it was officially discovered. Bruce Schneier points out that according to documents leaked by Snowden, the NSA's "research and development" budget for cryptanalysis is more than is being spent on cryptanalysis research by all of academia combined.

So what can they offer? A larger budget for your research than you would ever

People work for the government for job security, as it's harder to get fired just because of a personality conflict with one supervisor. Or they do it for idealism, patriotism, or a more specific desire to defeat "the terrorists" of the moment. Or the government pays to educate some bright young person and that person feels loyalty afterwards. I have to disagree with one of your points, though: "the private sector pays more...". Given that the NSA and others seem to

If I were a disgruntled member of the intelligence industrial complex and knew that this was actually being done by a government agency, and I did not relish the thought of a Russian sabbatical, couldn't I surface the news by telling researcher friends of mine how to do it?

This is a real problem rooted in an incomplete understanding of entropy and how it is used. The question is not "does rdrand provide X entropy" but "does rdrand provide at least the X entropy that it is being credited for".

If a process in Linux asks for a random number, the current pool is evaluated. Each input to the pool provides (theoretically) some X entropy and is credited with having provided some Y entropy, where (presumably) X >= Y. If the *credited* entropy is enough then a number is returned, otherwise it
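For readers unfamiliar with that mechanism, here is a minimal, hypothetical sketch of the crediting scheme the parent describes. It is an illustration only, not the actual Linux kernel code; the class, the mixing function, and the credit values are made up for the example.

import hashlib, os

class EntropyPool:
    def __init__(self):
        self.state = b""
        self.credited_bits = 0

    def add(self, data, credited_bits):
        # Mix the input into the pool, but only credit it with a conservative
        # estimate (Y) that should be <= the entropy it actually carries (X).
        self.state = hashlib.sha256(self.state + data).digest()
        self.credited_bits += credited_bits

    def read(self, nbytes):
        if self.credited_bits < nbytes * 8:
            raise BlockingIOError("not enough credited entropy; a real kernel would block here")
        self.credited_bits -= nbytes * 8
        out = hashlib.sha256(self.state + b"output").digest()[:nbytes]
        self.state = hashlib.sha256(self.state + b"step").digest()
        return out

pool = EntropyPool()
pool.add(os.urandom(32), credited_bits=128)   # e.g. interrupt timings, RDRAND, disk seeks ...
print(pool.read(16).hex())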

1. Changing the dopant in a transistor is undetectable by visual inspection - clearly;

2. Randomness isn't the same as unpredictability.

I skimmed through the paper thinking that the innovation was that they'd actually been able to modify an Intel chip. But they appear to be saying little more than that you can manufacture a chip "wrongly" (after a LOT of waffle - you'd never get away with this writing math papers!).

In security, you're trying to change the behavior of corporate drones, idiots, and people who are invested in the status quo. People use these papers as ammunition for that.

The drones will call your attack "theoretical" and "impractical" unless you spell out exactly how to do it, step by step. If they hadn't detailed exactly how to do it, the attitude would basically have been that nobody could possibly figure out the impossible complexity of weakening a REAL RNG. I mean, look at the self tests! Nobody could get around that! In fact, even people who weren't complete idiots might have guessed, at first glance, that the self tests would be hard to defeat, or that you couldn't do this hack without screwing up the chip.

Even with a detailed paper, they will probably be ignored until somebody actually does it in the field. If you wrote a one-pager that said "Warning! Somebody could alter the behavior of gates by tweaking the dopants", they would 1000 percent ignore it.

As for the verbose background information, it's standard in the field (although they went a bit heavy on it). It has zero cost, and readers in the field who don't need it simply skip it. So I don't know why you're getting so upset about it.

Please don't trash people's work in fields you don't even slightly understand.

This is not my field by a long stretch. After reading the pdf this morning, what I got from the paper was a method to undetectably make relatively easily-done changes to various transistors such that those changes offer an entry point for external reading and possibly manipulation to potentially useful effect within real-world manufacturing methods. Do this, pwn chips. Profit.

What these guys have done strikes me as impressive - and wonderfully, elegantly sneaky. I know there are some design and fab peop

Why do you think I don't even understand the field? Everything I've said is accurate, everything they've said is accurate, and all I'm saying is that I don't get what the deal is with writing a big paper about it. You've suggested that it's about socially engineering the PHBs rather than informing academia, which is fair enough... but that's not a "paper" in the way I think of it, then.

There are easy numeric methods for determining how random data is. Optical inspection would be unnecessary to discover this modification. You might even get away with generating a few megabytes of data, zipping it, and then comparing the resulting compression ratio to a known good chip.
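As a rough sketch of the compression check being suggested (hypothetical, and note the replies below: a weak source hidden behind AES whitening will still sail through this kind of test):

import os, zlib

def compression_ratio(data):
    return len(zlib.compress(data, 9)) / len(data)

good = os.urandom(4 * 1024 * 1024)       # stand-in for output from a known-good chip
suspect = bytes(4 * 1024 * 1024)         # pathological suspect: all zeros

print("good    :", round(compression_ratio(good), 3))      # ~1.0, essentially incompressible
print("suspect :", round(compression_ratio(suspect), 6))   # far below 1.0, clearly non-random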

Actually, no. Technically speaking, there is no such thing as random data, only a random process. You can certainly test how random a data stream seems, but if the data source is a black box, you never really know.

From TFS:

Since the Trojan RNG has an entropy of n bits and [the original circuitry] uses a very good digital post-processing, namely AES, the Trojan easily passes the NIST random number test suite if n is chosen sufficiently high by the attacker. We tested the Trojan for n = 32 with the NIST random number test suite and it passed for all tests.

What if your black box is just feeding you encrypted bits of pi? You would never know, but the black box's maker could trivially reproduce your "random" numbers.

The NIST 800-22 test has bit-length parameters. The article doesn't indicate whether it passed the 128-bit NIST test after they reduced the entropy to 32 bits, only that it passed *some* NIST test. From another poster it seems the standard parameters used for the NIST test may not be sufficient to verify that the PRNG exhibits the level of entropy people are relying on it to exhibit. The lavarnd folks pass a billion-bit NIST test, so it is possible to run longer versions of the test. If the reduced e

It would have to be based on a statistical analysis which means it isn't a proof, it is demonstrated to a confidence level. How confident do you need to be?

Secondly, properly evaluating a greater number of bits of entropy is going to require a larger sample, and I expect this grows exponentially. How much time do you have to reach your confidence?

The testing would be balancing those two questions, but in no case could an absolute answer be found.

But, from the horse's mouth:

The subject of statistical testing and its relation to cryptanalysis is also discussed, and some recommended statistical tests are provided. These tests may be useful as a first step in determining whether or not a generator is suitable for a particular cryptographic application. However, no set of statistical tests can absolutely certify a generator as appropriate for usage in a particular application, i.e., statistical testing cannot serve as a substitute for cryptanalysis. The design and cryptanalysis of generators is outside the scope of this paper.
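To make that caveat concrete, here is a sketch of the simplest test in the suite, the SP 800-22 frequency (monobit) test. It measures only a statistical property of the output stream and says nothing about where the bits came from, which is exactly why a whitened but backdoored generator can pass.

import math, os

def monobit_p_value(bits):
    # SP 800-22 frequency (monobit) test: compare the proportion of 1s and 0s.
    s = sum(1 if b == "1" else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

data = os.urandom(125_000)                               # one million bits
bits = "".join(f"{byte:08b}" for byte in data)
p = monobit_p_value(bits)
print(f"p-value = {p:.4f} -> {'pass' if p >= 0.01 else 'fail'}")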

As a person who has worked in semiconductors since the first SSI 7400, I can say for certain that many things have been done, and there are some really talented people who can do things that -almost- defy reason. I know that engineers put their own little signatures in ASICs and that some engineers are far more competent than can be understood by most. I have seen many circuits that were situationally controlled or externally controlled by means that would not be obvious without an understanding of the phy

Actually, no. Technically speaking, there is no such thing as random data, only a random process.

Actually, there is random data. That is, data generated by a random process.

Unsurprisingly, there are quite a few different tests which can determine, or perhaps "predict the chance", whether some data is produced by a random process, i.e. is random, or not. The easiest for a layman is to try to compress it. Random data of sufficient size won't compress, with unbelievably huge probability.

You can still generate an arbitrary amount of entropy with a compromised RNG if you know it's compromised. Let's say you have a ridiculously compromised RNG with only 1 bit of entropy and 32-bit output. Such an RNG could trivially fail statistical tests, if it used simple combinatorial logic to mix the nth output with the (n-1)th output, or it could be almost undetectable, if it uses complex mixing, such as the AES method used in Intel's RDRAND. In either case, each word will contain some entropy, even i

Yes, I just realized this. A properly written OS can periodically test the hardware RNG for reduced entropy. Let us suppose we can detect if the entropy has fallen below 32 bits. Then, whenever we are using the hardware RNG, we pessimistically assume that there are only 16 bits of entropy available per sample. Grab a bunch, run it through a good hash function, repeat, concatenate. You end up with as many bits of good random data as you need, and you XOR it with the random bits you got from other sources.
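A back-of-the-envelope sketch of that approach; the function names and the 16-bit credit per sample are assumptions for illustration, with os.urandom standing in for both the hardware RNG and the "other sources":

import hashlib, os

def hw_rng_sample():
    # Placeholder for a hardware RNG read (e.g. RDRAND); os.urandom stands in here.
    return os.urandom(8)

def pessimistic_random(nbytes, credited_bits_per_sample=16):
    out = b""
    while len(out) < nbytes:
        # Gather enough samples that, even at only 16 credited bits each,
        # a 256-bit hash output is backed by at least 256 bits of assumed entropy.
        n = (256 // credited_bits_per_sample) + 1
        samples = b"".join(hw_rng_sample() for _ in range(n))
        out += hashlib.sha256(samples).digest()
    hw_bits = out[:nbytes]
    other = os.urandom(nbytes)                 # entropy gathered from other sources
    return bytes(a ^ b for a, b in zip(hw_bits, other))

print(pessimistic_random(32).hex())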

This should still be detectable. It just requires more time. One could also reduce the time by looking at the combined output of an entire batch of chips. If they all have the same mask, they will all produce the same reduced set of random numbers. So one additional meta-test of data from a lot could show they have been compromised.
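One way such a lot-level meta-test could work is a birthday-style collision check: if every chip's 128-bit outputs are secretly determined by the same n-bit internal state, collisions show up after roughly 2^(n/2) samples instead of essentially never. A toy model of that scenario (the functions here are stand-ins, not real chip access):

import hashlib, random

def chip_output(n_bits=32):
    # Toy model: a 128-bit output value that is entirely determined by only
    # n_bits of hidden internal state (the compromised-Trojan scenario).
    state = random.getrandbits(n_bits).to_bytes(8, "big")
    return hashlib.sha256(state).digest()[:16]

seen = set()
for i in range(1, 200_001):
    v = chip_output()
    if v in seen:
        print(f"collision after {i} outputs - expected near 2**16 for 32 bits of entropy,")
        print("but essentially impossible for honest 128-bit random values")
        break
    seen.add(v)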

You are now aware that the infamous Ken Thompson Compiler / Microcode Hack was well known to the government before he pontificated on it during his ACM acceptance speech / paper. [bell-labs.com]

Acknowledgment
I first read of the possibility of such a Trojan horse in an Air Force critique (4) of the security of an early implementation of Multics.

Which was published in the very apt year of 1984, I might add...

Tell me, indeed, how exactly would you select the chips that did not already have such a modification for comparison? Oh, it should take more time indeed, but far more than you realize. Get out your Oscillosc

Since the Ivy Bridge random number generator is supposedly "unauditable", how are these researchers able to prove anything about re-doping a black-box design? Shouldn't they just look at it and spot the massive array of transistors that spells out "NSA BACKDOOR UNIT" instead of having to worry about all this subterfuge?

No, there aren't. The digits of pi have no pattern other than being the digits of pi, so they will pass randomness tests. A good pseudo-random number generator will pass randomness tests, but can be easily reproduced if you know the starting seed. Also, putting a simple sequence (1, 2, 3, 4...) through an encryption algorithm will give you an output that passes randomness tests.
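That last point is easy to demonstrate with nothing but the standard library (using a hash in place of an encryption algorithm, which serves the same illustrative purpose):

import hashlib, zlib

# Feed a trivial, fully predictable sequence (a counter) through a one-way
# function; the result looks statistically random but is trivially reproducible
# by anyone who knows the scheme.
stream = b"".join(hashlib.sha256(i.to_bytes(8, "big")).digest() for i in range(100_000))
ratio = len(zlib.compress(stream, 9)) / len(stream)
print(f"compression ratio: {ratio:.3f}")   # ~1.0: this test cannot tell it from true randomness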

I thought we already covered this in the linux rdrand story. It's called unauditable because it whitens the raw entropy output using encryption on chip, making even quite non-random source data appear to be random. It is not called unauditable because it's a black box design. The paper states that the design is very well known.

The attack described in this paper is to modify both the entropy source output "c" and the post-processing encryption key "K", undetectably setting a fraction of them to constant b

I looked at the paper from CRI, they apparently did do testing on the raw (pre-whitening) entropy source on test chips that give direct access to it. Unfortunately the goal of that audit was to build confidence in the general design, the NSA wasn't an issue when that was done.

What I take away from this is - the good news is, the RDRAND circuitry has an open, well documented design which is apparently robust. Thus, if we can obtain confidence that it's not backdoored by the NSA, it's a great feature to have.

I wonder if it's possible for an attacker to mess with microcode in such a way as to trojan things like random number generation, without having any other effects that would be more easily noticed. It doesn't seem likely.

Of course, true RNG depends on things like timing keystrokes, mouse clicks, network packets, etc. The LSBs of such times probably aren't used for anything else, and could thus be tampered with more easily.

It's pretty hard to get reliable crypto when your adversaries are the SIGINT arms of s

Also notice that this attack does not make RdRand unusable. It still gives you some bits of entropy per output value, just a lot less than expected. However if you expect nothing or very little, the output is good even in the compromised version. And for various reasons, RdRand has a lot less entropy than 1bit/1bit anyways (theoretically as low as 1bit/512bit), so hashing it together large-scale is necessary in any case (I bet many people overlooked that little gem...).

These parts would not pass the standard verification process and would be rejected from being assembled into devices. Standard testing of ICs for functional faults includes a scan process. Based on the design specification the part was supposed to be built to, a number of scan vectors are passed through the devices. These scan vectors check as much of the device as possible; the goal is to check every flop and every logic path between flops. The tests are meant to detect manufacturing errors and can find single faults in devices. Typical errors are stuck-at-1 or stuck-at-0 faults, as well as shorts, and the vectors would easily expose modifications of this sort, especially of such a scale as to radically change things.

"Hello, Intel. Under the terms of this national security letter, you must change your verification software to ignore certain errors. The engineers who carry out this order must not reveal anything about this. Anyone who does will be subject to a term of incarceration not exceeding..."

1) Computer-generated "random numbers" of the type this covers are fully state-to-state defined; they are not random in any way. To make them random you need to seed the initial state and then reduce the output. 2) The automated scan check is bit-by-bit on the logic; it does not care that 64 bits make a random number. It looks at the logic-cone input for every single bit independently and verifies the functionality. This is done to make sure all the logic works.

There is no untestable magic there: 1) entropic source, 2) digital state algorithm, 3) async sampling.

"The ES runs asynchronously on a self-timed circuit and uses thermal noise within the silicon to output a random stream of bits at the rate of 3 GHz. The ES needs no dedicated external power supply to run, instead using the same power supply as other core logic. The ES is designed to funct

Given Hanlon's razor, an accidental, rather than malicious, error in doping would be even more likely. If the chip were inadvertently doped incorrectly, it would pass visual inspections and even software tests without awareness of the defect. How many defective dice, not merely with RNGs but also with other circuits, are already in service due to inspection failures?

Although this paper shows how insidious a threat from a well-funded adversary might be, even more it shows the need for more comprehensive inspection mechanisms to discover misdoping which might go undetected by existing standard procedures.

BTW, the paper includes a well written and readable introduction to the context of the problem. Good job.

In semiconductor manufacturing, doping is the introduction of slight amounts of impurities into a semiconducting material, to create a condition of surplus or deficit electrons. Donors such as arsenic and phosphorus add electrons, creating n-type semiconductors, while acceptors such as boron and aluminum cause a deficit of electrons, making a p-type semiconductor. The terms surplus and deficit are relative to a state where all of the atomic orbitals are filled and the semiconductor has almost no conductivit

A misdoping would light up the equipment alarms and show up in in-line electrical tests and end-of-line electrical tests (both on the chips themselves and on special test regions in the lines between the chips). Doping is performed relatively early in the manufacturing process, and Intel et al. know just how big a risk a misdoping is and test for it extensively in-line. This is because if you only catch it at the end of the line you potentially have hundreds of millions of dollars' worth of product to scrap because from the 20

I would agree almost all the time. An error in doping, not being selective, would likely be obvious, because it would affect the other components on the same layer.

However, there is a small amount of boutique production which is done almost by hand, and more subject to errors. The chips are usually less complex, and given the right kind of circuit (such as the RNG from the paper) errors are more likely to slip through, especially if the circuit were to be confined, by itself, to layers not used in the inter

Then we can buy them from fabs that we trust, and they will have to more explicitly compete on the issue of trust.

There is also some possibility that buyers could inspect the manufacturing processes.

Anomalies in other computational functions are less of a concern, IMHO, because any environment with a mix of CPUs and chipsets should reveal tainted chips at least occasionally. Random number generation is an exception here.

This can only be used for attacks on things that can be compromised in a way such that they do not need to perform their original function perfectly anymore. A CPRNG is an ideal target, as it does not need to produce good _and_ bad numbers after the attack; it is sufficient if it produces bad numbers that look good. The AES whitener in the CPRNGs this was demonstrated on makes this very easy, and while it looks convenient, it may have been put in there exactly to make compromised versions of this CPRNG hard to detect. On the other hand, if you attacked, say, a hash function or a block cipher in this way, it would start producing wrong outputs, potentially for a large number of cases; not only would it fail at its original function, this would also be pretty obvious.

Still, this is a significant attack and underlines why a single source of entropy should never be fully trusted and that CPRNGs should always be open software and use multiple entropy sources that get mixed.

I don't believe the authors attacked the Ivy Bridge RNG in the way described. They described a way, they didn't do it.

Why? 1) They show a plot of a DFFR_X1. This is a normal D-type flip-flop you would find in Synopsys libraries and many other libraries you would use in an SoC process. These are not the flops used in the Ivy Bridge DRNG. Also, the plot was from a layout program, not a micrograph.

2) The proposed attack required an average of 2.1 billion attacks (fixing k and v until you hit the right CRC). I do

Sabotage would be to make something stop working. The mentioned chips will work just fine, but their RNGs will be predictable. Only the ones who caused it know and will take advantage of it. Looks like a trojan to me.

If the RNGs aren't producing numbers as "random" as claimed, then it's not working. It's sabotage.

No, it's not. Saboteurs break machines and bring them to a halt. Check the etymology.

Actually, you should check the etymology. There's no evidence for the old story about people throwing their shoes into the machines. Even if it were true, there's no requirement for there to be a stoppage of production, just the requirement of the actors maliciously disrupting the process. An RNG that doesn't output "random" numbers to spec is BROKEN. Anyone intentionally causing that is engaging in SABOTAGE.

If the RNGs aren't producing numbers as "random" as claimed, then it's not working.

Unless you have access to the AES key in the RNG chip, the numbers are effectively random. Even if an attacker knows that the numbers only jump around in, for example, a 32-bit subspace of the N-bit key space, they don't know which subspace unless they break AES. On the other hand, if you do have the key, as you probably would if you are the one who tampered with the chip, then you're in a whole new position.

I guess that's the "nice" thing about the attack -- only the one who planted it can exploit it. Usefu

I've considered this as well (I will be using the NIST random number test suite in the near future). However, what can they accomplish with this? I see two approaches they could have taken:
1. Flag a non-random generator as "random". However, just because I use the NIST test suite does not mean that I don't use any other test suites, which would presumably catch this. This seems high-risk from the NSA's point of view - just one publicly available test that proves NIST is gamed shows their hand.
2. Flag some

Yes, I know this. However, this would not require them to compromise the NIST random number test suite - no reasonable test suite would be able to detect this sort of scenario anyway.
So, back to the original question: Is the NIST Random number test suite compromised? What could they gain by doing this?

The whole point of TFA is about a technique for (mostly undetectably) modifying a good hardware RNG and turning it into a really lousy one.

Getting your entropy from multiple places probably helps (if they don't know what 6 RNGs you chose it's harder to dope them all, and even if they do, they still have to slog through the entropy from multiple crippled sources rather than only a single one (and, while it is possible to cripple the RNG entirely, that will sho

Oh, nothing requires that the RNG be on-die (and, as you say, there are all kinds of options that aren't; the universe is a very noisy place). However, unless your computer is actually configured to use whatever access to the noise of the universe it has (nasty little webcams are another good source), and dump that entropy into whatever pool(s) your software environment specifies, it doesn't help you much.

The on-die ones are valued mostly because they are really, really, fast and a whole lot cheaper than th

Well, I prefer an over-driven triode, but those are harder to get than they used to be. Nothing wrong with a mic as a source. In fact, many computers come with them built-in. Chop and hash that source a few times, compress it lightly, and fold with an xor and you've got a pretty random signal. The problem comes if you want your random numbers to follow some standardized distribution. And you usually do. Uniform random distribution and Normal random distribution are the ones usually needed, but sometim