Scientists have developed a technique to sabotage the cryptographic capabilities included in Intel's Ivy Bridge line of microprocessors. The technique works without being detected by built-in tests or physical inspection of the chip.

The proof of concept comes eight years after the US Department of Defense voiced concern that integrated circuits used in crucial military systems might be altered in ways that covertly undermined their security or reliability. That warning was the starting point for research into techniques for detecting so-called hardware trojans. But until now, there has been little study of just how feasible it would be to alter the design or manufacturing process of widely used chips to equip them with secret backdoors.

In a recently published research paper, scientists devised two such backdoors they said adversaries could feasibly build into processors to surreptitiously bypass cryptographic protections provided by the computer running the chips. The paper is attracting interest following recent revelations that the National Security Agency is exploiting weaknesses deliberately built into widely used cryptographic technologies so analysts can decode vast swaths of Internet traffic that would otherwise be unreadable.

The attack against the Ivy Bridge processors sabotages the random number generator (RNG) instructions Intel engineers added to the processor. The exploit works by severely reducing the amount of entropy the RNG normally uses, from 128 bits to 32 bits. The hack is similar to stacking a deck of cards during a game of Bridge. Keys generated with an altered chip would be so predictable an adversary could guess them with little time or effort. The severely weakened RNG isn't detected by any of the "Built-In Self-Tests" required for the NIST SP 800-90 and FIPS 140-2 compliance certifications mandated by the National Institute of Standards and Technology.
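The arithmetic behind that weakening is easy to demonstrate. The sketch below (Python, with a hypothetical hash-based key derivation standing in for the real hardware) shows why a 32-bit seed space is trivially searchable while a 128-bit one is not:

```python
import hashlib

def derive_key(seed: int) -> bytes:
    # Toy stand-in for key derivation: the seed is the only unknown input.
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()

# A victim generates a key from the weakened RNG; only 32 bits are unknown.
victim_seed = 123456
victim_key = derive_key(victim_seed)

# The attacker simply enumerates candidate seeds. The full 2**32 space is
# hours of work on commodity hardware; the intended 2**128 space is not.
recovered = None
for seed in range(2**18):  # searching a small window to keep the demo fast
    if derive_key(seed) == victim_key:
        recovered = seed
        break
```

The loop is embarrassingly parallel, so an attacker can split the 2**32 candidates across as many cores as they like.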

The tampering is also undetectable to the type of physical inspection that's required to ensure a chip is "golden," a term applied to integrated circuits known to not include malicious modifications. Christof Paar, one of the researchers, told Ars the proof-of-concept hardware trojan relies on a technique that requires low-level changes to only a "few hundred transistors." That represents a minuscule percentage of the more than 1 billion transistors overall. The tweaks alter the transistors' and gates' "doping polarity," a change that adds a small number of atoms of material to the silicon. Because the changes are so subtle, they don't show up in physical inspections used to certify golden chips.

"We want to stress that one of the major advantages of the proposed dopant trojan is that [it] cannot be detected using optical reverse-engineering since we only modify the dopant masks," the researchers reported in their paper. "The introduced trojans are similar to the commercially deployed code-obfuscation methods which also use different dopant polarity to prevent optical reverse-engineering. This suggests that our dopant trojans are extremely stealthy as well as practically feasible."

Besides being stealthy, the alterations can happen at a minimum of two points in the supply chain. That includes (1) during manufacturing, where someone makes changes to the doping, or (2) a malicious designer making changes to the layout file of an integrated circuit before it goes to manufacturing.

In addition to the Ivy Bridge processor, the researchers applied the dopant technique to lodge a trojan in a chip prototype designed to withstand so-called side-channel attacks. The result: cryptographic keys could be correctly extracted from the tampered device with a correlation close to 1. (In fairness, they found a small vulnerability in the trojan-free chip they used for comparison, but it was unaffected by the trojan they introduced.) The paper was authored by Georg T. Becker of the University of Massachusetts, Amherst; Francesco Regazzoni of TU Delft, the Netherlands, and the ALaRI Institute, University of Lugano, Switzerland; Christof Paar of UMass and the Horst Görtz Institute for IT Security, Ruhr-Universität Bochum, Germany; and Wayne P. Burleson of UMass.

In an e-mail, Paar stressed that no hardware trojans have ever been found circulating in the real world and that the techniques devised in the paper are mere proofs of concept. Still, the demonstration suggests the covert backdoors are technically feasible. It wouldn't be surprising to see chip makers and certifications groups respond with new ways to detect these subtle changes.

Story updated to change "dozen" to "hundred," after researchers clarified the "a few dozen" modified transistors applied only to the side-channel trojan. The attack on the Ivy Bridge processors requires modification of 256 or 512 transistors, depending on the entropy the attacker wants.

Promoted Comments

The engineering file that describes to the fabs how to make the chips is the primary attack vector here. And I would bet if they really wanted, the NSA and other agencies could find a way to reach this file.

The article mentions a hardware level attack like this hasn't happened yet, but I'm willing to bet $20 that within the next 10 years, we will see compromised chips. Maybe not from Intel or with the NSA behind it (I'm thinking China and a smaller manufacturer), but we will definitely see compromised chips in the very near future.

I am not sure what the actual report says has not happened, but I can tell you from personal experience that this sort of tampering isn't a new concern for certain military organizations. A malicious chip doesn't have to be a trojan; it just has to be defective yet pass 70-80% of the tests thrown at it.

If an AND gate were modified to fail (not output a high) when both inputs were high, and instead output a low, functional validation testing could in theory miss the problem (unless every value on every single pin were tested) until the part was actually in the hardware it was purchased for.

So this sort of threat is real. Perhaps not the corruption of an RNG circuit, but flawed, defective parts that pass a certain level of functional validation testing are a real concern.
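The test-escape scenario described above can be sketched in a few lines. This toy model (not any real validation flow) uses a 32-input AND tree whose output is inverted only on the all-ones trigger pattern, so random functional vectors essentially never expose it:

```python
import random

def trojan_wide_and(x: int) -> int:
    """Model of a sabotaged 32-input AND tree. The correct gate outputs 1
    only when all 32 input bits are 1; the trojan inverts the result on
    exactly that one trigger pattern."""
    correct = 1 if x == 0xFFFFFFFF else 0
    if x == 0xFFFFFFFF:          # trojan trigger: the all-ones input
        return 1 - correct       # output low where the spec says high
    return correct

# Random functional validation: 100,000 random vectors have only about a
# 1-in-43,000 chance of including the single trigger pattern (p = 2**-32
# per vector), so the defect almost certainly escapes.
random.seed(0)
escaped = all(
    trojan_wide_and(v) == (1 if v == 0xFFFFFFFF else 0)
    for v in (random.getrandbits(32) for _ in range(100_000))
)
```

Only a directed test of the exact trigger pattern, `trojan_wide_and(0xFFFFFFFF)`, reveals the sabotage, which is why "70-80% coverage" is not reassuring.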

While frightening, especially with the implications to compromising a cryptography circuit or system on the chip, I can't help but feel software will continue to be the method of choice for attackers for at least the foreseeable future. This type of exploit seems very useful to a nation state that tramples human rights (or the NSA! Zing!), but this seems to be something that will be out of reach to most criminal organizations from a cost, effort and time perspective.

Everything in life is a balancing act, with compromising computer systems being no exception. Compromising hardware is incredibly powerful because it's so difficult to detect and because of its staying power. It's incredibly weak in that you have no portability, mobility, or re-usability. Software can spread and usually can't be totally destroyed permanently (unless you zap all copies). Hardware cannot be easily transferred or downloaded and can be destroyed permanently.

My worries are not about random hackers but around State sponsored type attacks. This proof of concept shows that chips can be modified to reduce entropy without being detectable using the current tests. This has implications for anyone who buys chips outside of their country.

For instance, State A could order Company A to place weaknesses in all the chips they send to Company B, a distributor for State B, to weaken security in State B overall.

Or State A gets a batch of defective chips from Company A and forces Company B (HP/Dell/Acer) to use these defective chips in all machines they service belonging to certain organizations/states.

Or, in lieu of installing keyloggers/cameras, install this chip to gain passcodes.

Or State A could plant defective chips in all the devices they give to, say, a third-world country.

Quote:

Besides being stealthy, the alterations can happen at a minimum of two points in the supply chain. That includes (1) during manufacturing, where someone makes changes to the doping,

This is really interesting as it takes very little, (literally a few atoms) to change the amount of entropy in the RNG. The more concerning thing from a manufacturing side is how much variance does normal production currently have in this area? In other words, are there some chips with inherently poor RNG in the market without any sort of sabotage involved? At what point with decreasing transistor size do we have to worry about natural variances affecting the RNG?
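A related point of reference: the built-in self-tests mentioned in the article are statistical health checks, and they measure bias, not unpredictability. A simplified monobit test in the style of FIPS 140-2 (the 9,725-10,275 band is the old FIPS 140-2 figure for a 20,000-bit sample; the rest is an illustrative sketch, not the certified procedure) shows why a weakened-but-unbiased RNG sails through:

```python
import random

def monobit_passes(bits, lo=9725, hi=10275):
    """Simplified FIPS 140-2-style monobit test: over a 20,000-bit
    sample, count the ones and require the count to fall in a band
    around 10,000."""
    assert len(bits) == 20000
    ones = sum(bits)
    return lo <= ones <= hi

# A healthy RNG passes...
good_rng = random.Random(1)
good = [good_rng.getrandbits(1) for _ in range(20000)]

# ...but so does a sabotaged one: a stream expanded from a known 32-bit
# seed is statistically unbiased, so bias tests cannot see that its
# entropy has collapsed to the seed size.
bad_rng = random.Random(0xDEAD)   # the attacker knows this seed
bad = [bad_rng.getrandbits(1) for _ in range(20000)]

both_pass = monobit_passes(good) and monobit_passes(bad)
```

This is exactly the gap the researchers exploit: their weakened RNG produces output that is statistically clean but cryptographically worthless.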

As for someone sabotaging a production run, the NSA has had a program in place to ensure the integrity of chip manufacturing: the Trusted Foundry program. Oddly enough, Intel isn't on that list.

IBM is on the list, and their East Fishkill plant is noteworthy in that humans don't interact with the wafers from the time they begin the lithography steps until they're ready to be cut. Changing the doping would require some interaction with fab machinery or a very noticeable breach of the facility, both traceable with good security protocols in place.

Quote:

(2) a malicious designer making changes to the layout file of an integrated circuit before it goes to manufacturing.

Since this is changing the fundamental design of how the chip performs, the idea that this could be used to alter the RNG should be self evident. How little needs to be changed is noteworthy but not unexpected. Optical techniques should be able to spot this.

As far as safeguards against this attack go, though, best practices around chain of custody and encryption should be enough. The layout file exists both in digital format, as it is designed and validated through simulators, and as a physical mask for lithography purposes. The systems inside a validation cluster likely do not need online access, so they could be kept on their own private intranet.
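One concrete form of that chain-of-custody practice is digest verification of the layout database at every hand-off. This is a generic sketch (the file here is a throwaway stand-in, not a real GDSII database):

```python
import hashlib
import tempfile

def file_digest(path: str) -> str:
    """SHA-256 a layout file in 1 MiB chunks, so even multi-gigabyte
    layout databases can be checked without loading them into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest at design sign-off, then re-verify at every later
# hand-off (for example, just before mask generation).
with tempfile.NamedTemporaryFile(delete=False, suffix=".gds") as f:
    f.write(b"...layout database contents...")
    layout_path = f.name

signoff_digest = file_digest(layout_path)

# Any later modification, malicious or accidental, changes the digest.
with open(layout_path, "ab") as f:
    f.write(b"tampered")

tamper_detected = file_digest(layout_path) != signoff_digest
```

This catches tampering between sign-off and mask generation; it cannot, of course, catch a malicious change made before the digest was recorded.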

Notably "David Johnston Sep 12, 2013: My thoughts are - don't lend your chips to that guy. His proposed attack takes an average of 2147483648 Ivy Bridges to succeed. I think it's a hypothetical argument. The principle is sound though - if someone has physical access to your silicon and enough money, they can do evil things."

but also "David Johnston Sep 5, 2013: I've examined my own RNG with electron microscopes and picoprobes. So I and a number of test engineers know full well that the design hasn't been subverted. For security critical systems, having multiple entropy sources is a good defense against a single source being subverted. But if an Intel processor were to be subverted, there are better things to attack, like the microcode or memory protection or caches."
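Johnston's "multiple entropy sources" defense can be sketched as follows. Assuming the combiner (SHA-256 here) behaves like a random oracle, the mixed seed stays unpredictable as long as any one input source does:

```python
import hashlib
import os

def combined_seed(*sources: bytes) -> bytes:
    """Mix several entropy sources by hashing their concatenation.
    The output is unpredictable if ANY single input source is,
    even when the others have been silently subverted."""
    h = hashlib.sha256()
    for s in sources:
        # Length-prefix each source so different input splits can't
        # produce the same concatenation.
        h.update(len(s).to_bytes(4, "big"))
        h.update(s)
    return h.digest()

subverted = b"\x00" * 16       # a sabotaged on-chip RNG, output known
os_entropy = os.urandom(16)    # an independent OS entropy pool
jitter = os.urandom(8)         # stand-in for e.g. interrupt timing noise

seed = combined_seed(subverted, os_entropy, jitter)
```

This is why operating systems typically fold hardware RNG output into a larger entropy pool rather than consuming it raw.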

Scary. I'm assuming they can modify the "golden" inspection method so it DOES find this sort of tampering? The article mentioned that the existing method does not, but had no details on how you could find the evidence.

Couldn't this be detected by doing random point plots and looking for concentrations? We would expect to see banding on a chip tampered with in this way, as opposed to uniform noise. I've seen this technique used to show just how bad the RNG on old Unix systems is, for instance.

It won't be caught by the built-in diagnostics, sure, but that doesn't mean it's completely undetectable.
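The point-plot idea can be made concrete with a crude two-dimensional uniformity check (an illustrative sketch using a deliberately terrible 8-bit generator, not any real hardware RNG): bin successive output pairs into a grid and compare chi-square scores.

```python
import random

def chi2_pairs(samples, bins=16):
    """Crude 2-D uniformity check: bin successive output pairs into a
    bins x bins grid and compute a chi-square statistic. Generators with
    visible structure ('banding') pile counts into few cells and score
    far above the ~bins**2 expected for uniform noise."""
    grid = [[0] * bins for _ in range(bins)]
    for a, b in zip(samples, samples[1:]):
        grid[a * bins // 256][b * bins // 256] += 1
    expected = (len(samples) - 1) / (bins * bins)
    return sum((c - expected) ** 2 / expected
               for row in grid for c in row)

def bad_lcg(n, seed=1):
    """Deliberately terrible 8-bit linear congruential generator: each
    output fully determines the next, so pairs fall on a few bands."""
    out, x = [], seed
    for _ in range(n):
        x = (5 * x + 3) % 256
        out.append(x)
    return out

rng = random.Random(2)
good = [rng.randrange(256) for _ in range(10000)]
bad = bad_lcg(10000)
```

Note the caveat from the article, though: the dopant trojan's output remains statistically well distributed, so this style of test catches structurally bad generators, not a seed-space collapse.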

This whole thing seems pretty ridiculous; someone would have to "slip in" an extra implant mask during an implant step?! I kinda think that might be noticed in a modern fab. Or slip in an extra circuit during the design process? Just as unlikely to get through.

A lone guy doing this seems unlikely, but if the NSA ordered a special batch of chips and had enough money to back it up I'm betting they could get Intel to make this tiny alteration for them. Presumably the use would be for covert swapping of chips on people's machines to undermine their security.

[url="http://en.wikipedia.org/wiki/List_of_Intel_manufacturing_sites"]Wiki has a list of their fabs[/url]. They do have a fab in China, but it's not making Ivy Bridge chips. I'm not sure if the fab in Ireland makes IVB; they're on the right node size.

From the fabs, the silicon gets shipped around the world to assembly sites, but I don't think they could be tampered with during that stage.

So, someone capable of tampering with a threshold number of chips during manufacture can produce a seriously compromised computer ecosystem? Uh, duh? Not that that seems remotely feasible or anything, but I guess it's a theoretically legitimate argument. In the same way it would be theoretically legitimate to fix global warming by launching a large enough mirror into space to deflect enough of the sun's heat away from the earth that it cools down.

Ok I'm confused... is this something that can be done to a chip only during manufacture or an attack that can be done on chips that are in use?

During manufacture. They're altering the doping levels in the silicon itself, basically making a defective chip that still works well enough to pass the standard self-tests.

Most humans have at least some morals. Corporations ... not so much. It would be a disservice to their shareholders not to "play ball" in exchange for special perks. It could be cash or even access to a competitors internal communications, for example.

Intel apparently manufactures chips mostly in the US, though it does have 3 foreign fabs (Ireland, Israel and China). It's worth noting that their fab in China is a 65 nm plant making it a few generations old.

Corporations are made of humans, and corruption has existed for the whole of humanity...

What I think would be news here is if an attack vector into Intel were found which traced to a modification of the chip plans: the few hundred modified transistors mentioned in the article. Now, that would be some serious, and probably malicious, hacking.

Hmm, I wonder if Intel has appeared on any of those recent "corporate hacked" lists?

I am less concerned about actual sabotage at Intel, though it is possible, than I am about entire counterfeit chips substituted in assemblies. Counterfeiting is a current serious business issue, at least for simpler components.

This is exactly what I was thinking as I read the article. I'm not wearing my tin foil hat yet, but I have it near me at all times.

It sounds far more plausible that some state agency would be sponsoring a company or particular agents within a company to explicitly "sabotage" a batch of chips deliberately with a purposely made back door rather than some sort of "maybe it'll work" trojan technique that requires a lot of precision and, well, outright luck on so many levels just to get a finished product into the hands of end users.

Also the "snoop chip" doesn't have to be a CPU, just on the bus somewhere where it can listen.

I don't think counterfeiters are capable of making Ivy Bridge CPUs. They will relabel lower end CPUs to try to get more money for them, but that's the kind of thing someone will notice if they're paying attention at all.

Quote:

It sounds far more plausible that some state agency would be sponsoring a company or particular agents within a company to explicitly "sabotage" a batch of chips deliberately with a purposely made back door...

Even in that situation it is impractical, at least to a smart person.

The reason I say that is, if Intel were to modify their designs for the NSA, think what it would cost once it was eventually determined that the CPU caused the compromise.

We won't even get into the fact that so many people would need to know about the design change that it could never be kept secret. Just think about all the QA, testing, and design engineers it takes to validate even the addition of a single instruction to an Intel chip.

You are right. It would be easier to add the modification to the Intel chip and replace the hardware itself than it would be to fight on both ends.

Quote:

This whole thing seems pretty ridiculous; someone would have to "slip in" an extra implant mask during an implant step?! I kinda think that might be noticed in a modern fab. Or slip in an extra circuit during the design process? Just as unlikely to get through.

Welcome to academic security publishing! A sound mathematical theory is presented, which lay people rush to attach to utterly ridiculous threat models and feign clutching their pearls.

Intel doesn't have these files just lying around on a NAS for any hourly employee to tamper with. There are extensive checks on them and their integrity to protect them from knowledgeable engineers. Positing a threat model that goes after those by random fab engineers is ludicrous.

Furthermore, if we're assuming the microcode or circuit is compromised, the only logical choice is to grind the system into sand and start over. We're talking about an attacker with physical access to the machine, who can undetectably read and modify kernel memory space, and arbitrarily change inputs from the outside world. It's puzzling to me that people would act like these threat models are legitimate and Seriously Concerning, yet be unwilling to take that logic forward even a half step.

I don't really find it scary that you could alter a CPU before it's produced to introduce a trojan, or rather, to subvert its capabilities. I would find it scary if you could do it at home with a soldering iron and average PC skills.