Intel Boot Guard, Coreboot and user freedom

PC World wrote an article on how the use of Intel Boot Guard by PC manufacturers is making it impossible for end-users to install replacement firmware such as Coreboot on their hardware. It's easy to interpret this as Intel acting to restrict competition in the firmware market, but the reality is actually a little more subtle than that.

UEFI Secure Boot as a specification is still unbroken, which makes attacking the underlying firmware much more attractive. We've seen several presentations at security conferences lately that have demonstrated vulnerabilities that permit modification of the firmware itself. Once you can insert arbitrary code in the firmware, Secure Boot doesn't do a great deal to protect you - the firmware could be modified to boot unsigned code, or even to modify your signed bootloader such that it backdoors the kernel on the fly.

But that's not all. Someone with physical access to your system could reflash your system. Even if you're paranoid enough that you X-ray your machine after every border crossing and verify that no additional components have been inserted, modified firmware could still be grabbing your disk encryption passphrase and stashing it somewhere for later examination.

Intel Boot Guard is intended to protect against this scenario. When your CPU starts up, it reads some code out of flash and executes it. With Intel Boot Guard, the CPU verifies a signature on that code before executing it[1]. The hash of the public half of the signing key is flashed into fuses on the CPU. It is the system vendor that owns this key and chooses to flash it into the CPU, not Intel.
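The flow described above can be sketched in a few lines. This is a toy model, not Intel's implementation: on real hardware the check is performed by CPU microcode and an Authenticated Code Module using RSA signatures, while here the signature check is stubbed out with a boolean so that the key-pinning step stands out. All names are illustrative.

```python
import hashlib

# One-time-programmable storage: set at "manufacturing time", never changed.
FUSED_KEY_HASH = None

def fuse_key_hash(public_key: bytes) -> None:
    """Simulate the vendor burning the hash of their public key into CPU fuses."""
    global FUSED_KEY_HASH
    assert FUSED_KEY_HASH is None, "fuses can only be written once"
    FUSED_KEY_HASH = hashlib.sha256(public_key).digest()

def boot(firmware: bytes, public_key: bytes, signature_valid: bool) -> bool:
    """Execute the firmware only if the key shipped alongside it matches the
    fused hash AND the firmware's signature verifies under that key.
    (Real signature verification is replaced by a boolean here.)"""
    if hashlib.sha256(public_key).digest() != FUSED_KEY_HASH:
        return False  # key in flash doesn't match the fused hash: refuse to boot
    return signature_valid
```

The point of hashing the key rather than storing it is fuse economy: a 32-byte hash pins an arbitrarily large public key, and an attacker who replaces the firmware cannot also substitute their own key, because its hash won't match the fuses.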

This has genuine security benefits. It's no longer possible for an attacker to simply modify or replace the firmware - they have to find some other way to trick it into executing arbitrary code, and over time these will be closed off. But in the process, the system vendor has prevented the user from being able to make an informed choice to replace their system firmware.

The usual argument here is that in an increasingly hostile environment, opt-in security isn't sufficient - it's the role of the vendor to ensure that users are as protected as possible by default, and in this case all that's sacrificed is the ability for a few hobbyists to replace their system firmware. But this is a false dichotomy - UEFI Secure Boot demonstrated that it was entirely possible to produce a security solution that provided security benefits and still gave the user ultimate control over the code that their machine would execute.

To an extent the market will provide solutions to this. Vendors such as Purism will sell modern hardware without enabling Boot Guard. However, many people will buy hardware without consideration of this feature and only later become aware of what they've given up. It should never be necessary for someone to spend more money to purchase new hardware in order to obtain the freedom to run their choice of software. A future where users are obliged to run proprietary code because they can't afford another laptop is a dystopian one.

Intel should be congratulated for taking steps to make it more difficult for attackers to compromise system firmware, but criticised for doing so in such a way that vendors are forced to choose between security and freedom. The ability to control the software that your system runs is fundamental to Free Software, and we must reject solutions that provide security at the expense of that ability. As an industry we should endeavour to identify solutions that provide both freedom and security and work with vendors to make those solutions available, and as a movement we should be doing a better job of articulating why this freedom is a fundamental part of users being able to place trust in their property.

[1] It's slightly more complicated than that in reality, but the specifics really aren't that interesting.

How would you have designed this feature, or how would you suggest modifying the implementation of this feature, to preserve the same amount of additional security without the problem you're concerned about?

> How would you have designed this feature

Make fuses erasable, but only upon asserting some hardware key. So someone with physical access can force-erase the fuses, but only if the hardware key is toggled.

At the hardware level the implementation could look like this: fuses are often just flash- or EEPROM-like memory that simply lacks an ERASE procedure, so once a WRITE has occurred you can't write new values again, because to do so the flash would have to be erased and there is no ERASE. Sometimes OTP memory can be programmed more than once, as long as bits only change in the WRITE direction and never in the reverse ERASE direction. And hardware can be designed to prevent further partial WRITEs if desired.

My idea is to allow ERASE. Flash/EEPROM/etc. memory requires a "high voltage" (Vpp) to erase or program data. It is normally generated internally, but the supply input could be brought out on a separate pin, energised only by a button press. Without the button press you could neither program nor re-program the fuses. At manufacture the vendor would have to assert Vpp to write the fuses; to re-program them later you would have to do the same. The hardware itself should be unable to assert Vpp under program control - only a physical button press should activate it. If the Vpp voltage is physically absent, erase and write simply don't work, for purely physical reasons. Even if software executes the erase/write sequences, the write fails for lack of programming voltage, and there is nothing software can do about it.
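The proposal above can be modelled in a few lines. This is purely illustrative - the `FuseArray` class and its behaviour are assumptions about how such a part might work, not a description of any real device. It captures the two properties the commenter wants: programming can only clear bits (the OTP write direction), and nothing happens at all unless Vpp is physically present.

```python
class FuseArray:
    """Toy OTP-style fuse array gated by an external programming voltage."""

    def __init__(self, nbits: int = 8):
        self.nbits = nbits
        self.bits = (1 << nbits) - 1  # erased state: all bits set

    def write(self, value: int, vpp_present: bool) -> bool:
        """Programming can only clear bits (1 -> 0), and only while Vpp is up."""
        if not vpp_present:
            return False        # no programming voltage: the write physically fails
        self.bits &= value      # bits can move 1 -> 0, never 0 -> 1
        return True

    def erase(self, vpp_present: bool) -> bool:
        """Return to the all-ones state, but only while the button holds Vpp high."""
        if not vpp_present:
            return False
        self.bits = (1 << self.nbits) - 1
        return True
```

Software-issued erase/write sequences with `vpp_present=False` are simply no-ops, which is the whole point: the security boundary is a physical pin, not a software check.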

Google's approach is also worth acknowledging: they have a read-only SPI ROM containing the "base" code, and that code can be configured to allow or disallow unsigned boot, subject to certain procedures. Software can't overwrite the SPI ROM: its #WP pin is tied to ground, so it really is read-only. If someone genuinely wants to reflash that part, an SPI programmer can replace the read-only "base", after which the new read-only base runs your code. Replacing it in software simply doesn't work. This is actually the most reliable way to have a trusted code base: if the trusted code base is read-only, nobody can change it, and the requirement of physical action ensures the change can't happen unnoticed most of the time. On the other hand, we do not know whether the OEM or Intel or anyone else has shared their private key, and we have no way to check.

Make the control key writable. The user would need to sign the new key with the current key in order to write it (preventing malware from replacing the key with its own and taking control away from the user). Another option would be to require the user to flip a hardware switch before the Boot Guard key is replaced (malware can't flip physical switches on its own). The vendor would provide the initial private key to the user upon purchase; if they did not, they would be legally liable for fraud. The key would need to be legally declared the "title" to the computer: if the seller advertises that "you own the computer" but fails to hand over that title (the Boot Guard private key), the user should be able to sue the key out of them, along with punitive damages.
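The rotation rule proposed here - a new key is accepted only if the update is authenticated by the current key - can be sketched as follows. HMAC stands in for a real public-key signature, so this illustrates the chain-of-custody rule rather than a deployable design, and all names are invented for the example.

```python
import hashlib
import hmac

class BootGuardSlot:
    """Toy model of a writable control-key slot with a signed-rotation rule."""

    def __init__(self, initial_key: bytes):
        self.key = initial_key  # provided to the owner at purchase, per the proposal

    def rotate(self, new_key: bytes, tag: bytes) -> bool:
        """Accept new_key only if tag was produced with the current key."""
        expected = hmac.new(self.key, new_key, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False        # malware without the current key is rejected
        self.key = new_key
        return True

def sign_rotation(current_key: bytes, new_key: bytes) -> bytes:
    """What the legitimate owner, holding the current key, computes."""
    return hmac.new(current_key, new_key, hashlib.sha256).digest()
```

The property this buys is exactly the one argued for above: possession of the current key is what authorises a change of control, so whoever holds the initial key effectively holds title to the machine.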

In a world where the NSA requests the X.509 private key of Lavabit, what makes anyone think they won't get their hands on the Intel signing key?

That isn't to say that this security measure is useless (not every adversary is the NSA), just that this dichotomy between security and freedom is false here. As I'm sure you know well, the ability to inspect the code of your firmware is too much of a security benefit to just ignore. Intel is hurting both freedom and, in many ways, security in this case.

In a world where the NSA requests the X.509 private key of Lavabit, what makes anyone think they won't get their hands on the Intel signing key?

There is no Intel signing key. Each vendor uses their own key. There are some vendors who can probably be compelled to give up such a key by the NSA without too much trouble, and there are some vendors who probably can't.

But yeah, that's part of the problem. Being forced to trust that your hardware vendor will maintain good key security isn't necessarily realistic. UEFI Secure Boot allows you to revoke your machine's trust in your vendor, this doesn't.

You're being paranoid as hell. Here's a nice, secure solution: PUT A BLOODY JUMPER ON THE MOTHERBOARD! OR HOLD A PIN HIGH FOR X PERIOD OF TIME OR SOMETHING. Modern processor sockets have a metric ton of pins, some of which are just there to be pretty. Surely Intel could have left a pin in there to do this.

I understand you're being really really REALLY anal about security, but think of how ridiculous it sounds for someone to open up your laptop, solder a jumper on the board, close it without damage (modern shitty laptop designs with clips make it impossible) and then somehow make a patch for the BIOS.

Not to bloody mention there's really no standard way of flashing BIOSes nowadays. Not even standard tools are guaranteed to do it, and chances are they won't. It's far more common to have to do some soldering to even get Coreboot on.

How about the other way around? Have a special, security-hardened laptop for a bit more money which is unflashable. There. All's fine with the world.

There's a ton of options, and yet you display a bit of sympathy for the vendor/Intel. Also, Secure Boot has yet to be used as it was intended to be used as a security feature. Right now it's just an annoyance.

I understand you're being really really REALLY anal about security, but think of how ridiculous it sounds for someone to open up your laptop, solder a jumper on the board, close it without damage (modern shitty laptop designs with clips make it impossible) and then somehow make a patch for the bios.

There's nothing ridiculous about this at all. Several of the attacks described in the Snowden documents are exactly this.

Not to bloody mention there's really no standard way of flashing bioses nowadays

There really is - UEFI capsule updates provide a standardised mechanism for handing payloads off to the firmware.

Also Secure Boot has yet to be used as it was intended to be used as a security feature. Right now it's just an annoyance.

Nonsense. Several attacks against UEFI are entirely mitigated by Secure Boot. Several bootkits in the wild (including the GrayFish attack in the news today) are entirely prevented by Secure Boot.

Yeah, that's a rather good question. I don't believe the process is publicly documented, and my understanding is that there are three states (key provisioned, key not provisioned but can be, key not provisioned and can't be), so it may be that all chips sold on the open market are already set to permanently have no key.

"The hash of the public half of the signing key is flashed into fuses on the CPU."

This makes three questions cross my mind:

1) What happens if I replace the CPU? Am I now magically left unprotected?

2) Am I now stuck, unable to swap CPUs between different model motherboards at all once the key has been written?

3) With Intel Boot Guard, how do I deal with a vendor whose private key leaks? Do I have to dump all of that vendor's hardware to be secure again? There appears to be absolutely no way to update the public key.

In my eyes it would have been saner to add an I2C bus from the CPU to a chip that holds the public key. From the CPU's side this would be a read-only I2C bus with a very limited set of instructions. People with direct physical access could remove and replace the chip in case of a vendor private key leak, and CPUs would remain replaceable. The I2C chip could be a pure ROM, and the storage RAM for the key in the CPU could be made unwritable by anything else.

Of course, the Protected Path people do not like the idea of users loading up their own code.

Intel Boot Guard does not stop an attacker from ripping out your CPU and putting in another one. Once a person has physical access, all bets that the firmware will not be replaced are off. Really, nothing stops an attacker from replacing the complete motherboard in a device. It's not the first time complete laptops have been swapped to attack networks.

Intel Boot Guard makes things more profitable for Intel, not really any more secure. In fact it makes things worse: given two computers, one with a borked motherboard and one with a borked CPU, you will not be able to make one working machine out of the two. This could be life and death. Suppose you are somewhere like the South Pole, or on Mars, with no easy access to spares. Boot Guard could kill people.

> 1) What happens if I replace the CPU? Am I now magically left unprotected?

> Intel Boot Guard does not stop an attacker ripping out your CPU and putting in another one

Right - what's to stop the attacker from switching the CPU for one that trusts their modified firmware? Is it only intended for systems where the CPU is soldered to the motherboard?

It sounds like this scheme is intended for the ultra-paranoid against a targeted attack, but an attacker that resourceful could always swap out as much of the machine's hardware as they like.

Consequently this form of protection seems to have a fairly narrow threat model that it will protect against (an attacker sufficiently dedicated that they are targeting you and get physical access to the machine, but not to the extent that they will actually replace any hardware), which makes me highly doubt that the pros will be worth the cons.

Boot Guard would be even better if advanced users had the option of having their own signing key flashed into the chip. Then you could have the best of both worlds: the Coreboot team can sign the firmware and the owner can verify and re-sign, or build their own and sign that. And please, no NDAs.

1) This only applies to highly integrated systems with soldered down CPUs (like laptops) and anything with a user-replaceable CPU doesn't get this.

or (more likely)

2) It's not literally the CPU. It's in the PCH, but on the more highly integrated systems the CPU and PCH are in the same package (although not the same die), and I assume this is where the confusion comes from.

That solution is measured boot, and it exists within Boot Guard: Instead of validating against a fused key, the chipset (not the CPU!) sends a hash of the bootblock to the TPM ('measures the bootblock'), to be stored in a PCR (platform configuration register) that the CPU can't modify.

If you later want to make sure that your system wasn't tampered with, you can do a (remote) attestation step in which you have the TPM try to use a key sealed ('configured to only be readable if PCR(x)=y') to that hash. If the answer isn't satisfactory, something is wrong.

The TPM is a reasonably safe storage for keys, and using its crypto primitives, while slow, prevents replay attacks.

In this configuration, if you choose not to trust the vendor, you can simply clear the TPM, have it generate your own key pair, and use that for attestation.
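The measure-and-seal flow described above can be sketched as follows. The `extend` step follows the actual TPM PCR rule (the new value is the SHA-256 of the old value concatenated with a measurement digest; here the measurement is hashed inside `extend` for convenience). Sealing and unsealing are reduced to a plain dictionary and comparison, standing in for what a real TPM enforces in hardware.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM PCR extend: the register can only be updated through this
    one-way function, never set directly, so past measurements can't be erased."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def seal(pcr_value: bytes, secret: bytes) -> dict:
    """Bind a secret to an expected PCR value (toy stand-in for TPM sealing)."""
    return {"expected_pcr": pcr_value, "secret": secret}

def unseal(blob: dict, current_pcr: bytes):
    """The TPM releases the secret only if the current PCR matches."""
    return blob["secret"] if blob["expected_pcr"] == current_pcr else None
```

Because `extend` is one-way, firmware that measures a backdoored bootblock ends up with a different PCR value and the sealed secret stays locked - which is how tampering becomes detectable without forbidding the user from changing the code.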

So, the issue is that Intel, in their infinite wisdom, decided to also give vendors that other option (verified boot), which enables them to shoot their customers in the foot.

Similar to how Intel TXT uses an Authenticated Code Module (signed code as an initialization dependency, this time signed by Intel only), while AMD's SKINIT just measures the code into an otherwise inaccessible TPM register, allowing the end user to decide which code to trust.

My main concern with this is the failure mode, particularly what happens if a key gets leaked. What is the best response of a hardware vendor in that case?

a) They come clean and tell all their users that the certificates flashed into their CPUs no longer provide security and that they should go buy someone else's laptop. A competitor's laptop, to be exact, since it will take them a while to get hardware out of the door with fresh keys. They can also scrap all their inventory.

b) They say nothing, because they very likely can't afford a). *Maybe* they will replace the keys in new hardware but, really, that's incredibly unlikely.

Anyway, the point is that there seems to be absolutely no incentive for vendors to respond responsibly to any breach, and no recourse for users. I honestly don't know what a responsible and reasonable way for a company to handle this would even look like.

I think this is a really good point. This kind of "security" is still only as strong as its weakest link, and in a world (said in that movie guy voice) where leakers could be anywhere, this system may actually make things worse since it provides a huge fiscal disincentive for the vendors to notify their customers.

If someone intercepts the laptop (at an airport, at border control, or as a mail package), they can no longer flash the firmware. No big deal: they can still replace the CPU or board with a backdoored one.

However, when a security vulnerability is discovered in the UEFI implementation on my mainboard, how do I get it fixed? Will the vendors be as good at providing the patches as Android phone vendors are? What if my board is two years old and the vendor doesn't care anymore, do I have to throw it away?

It seems to me that this makes security worse, not better.

Not to mention that virtually all proprietary firmwares I've seen are severely lacking in quality. They are slow as hell, have unpleasant visual presentation (with the possible exception of Apple's) and often lack essential functionality (a debugger, a shell, boot from USB mass storage, boot from SD, ...).

Disregarding other comments, Intel is quite innovative; they introduced a PC, for instance, that uses only 10W of power when idle but can also provide enough computing performance when needed. The product got quite a bit of native advertising in the German IT press. You can guess that the development costs were high; in order to make it affordable to the public, the company includes things like the Boot Guard you mention.

At the local LUG we had two theme evenings about UEFI, and we dealt with that PC. The situation is that it will not boot from hard disk in UEFI mode, and when you ask Intel, they actually recommend using legacy mode instead. So you have a super-modern, super-innovative PC and you have to run it in legacy mode because you are not using Windows. For me this is clearly a reason not to buy such products. Intel wants to discriminate against Linux, and probably against other alternative operating systems too.

this will end coreboot. if the coreboot team doesn't like it, it's not good for anyone.

system vendors don't flash cpu fuses; that is simply ridiculous... intel does that... system vendors do flash firmware.

typical intel consumer lock-in tactics. don't be fooled by "unified"... it was EFI which was and is proprietary (EFI code is in current uefi implementations)... microsoft did the same antitrust trick in acpi. i had to disassemble, find evidence and threaten legal action to get it fixed. i do believe they contacted insyde and told them to fix the microsoft junk that is hardcoded into every one of our machines... linux users should thank me :-P

whole bunch of born-yesterday people everywhere. think y'all invented something new, but you are running on technology designed by the brightest minds over 50 years ago. please don't fail to look back and marvel, and never say something old wasn't good or better.

even in its most secure configuration, uefi provides no security guarantee and increases attack surface. i'll prove it to you one day.

16 bit bios was fine. getting to 32 bit can literally be done in a few lines of code. on modern systems, these classic bootloaders can be extremely small, simple and secure... i would say military grade.

intel is breaking some core backwards compatibility mantras here and this is definitely lock-in.

avoid uefi, call your intel representative and voice your concerns. if they want to break compatibility, they could at least fix privileged instructions so software could not break the outer ring.

So you can't replace your UEFI firmware with Coreboot. Big deal. There is no reason to install Coreboot, and it would be risky at best.

Who cares? Only those who see the benefit in open source operating systems and utilities and who support the FSF/OSF. Cost aside, they want the freedom to study how and what a program or operating system is actually doing with their personal information and data. And those who aren't in love with the Microsoft/NSA relationship - a closed-source operating system means no audit of what Microsoft could, or does, plant in the code.

Being able to have an open source BIOS, while important, is only a side issue here. The real issue is closed hardware that contains a significant amount of, for lack of a better term, run-time code - code beyond that implementing the basic hardware instruction set and functions. The issue is the Intel ME: both the dedicated hardware and the encrypted blob (code) that it runs. Perhaps its early incarnations were acceptable, but in later Intel chipsets the ME hardware and software cannot be disabled. From Igor Skochinsky's paper:

> Management Engine (or Manageability Engine) is a dedicated microcontroller on all recent Intel platforms. Shares flash with the BIOS but is completely independent from the main CPU. Can be active even when the system is hibernating or turned off (but connected to mains). Has a dedicated connection to the network interface; can intercept or send any data without the main CPU's knowledge.

So in short: if you are happy with a separate "watchdog" processor running alongside your PC's main processor, with nearly unlimited ability to snoop on what the main processor is doing and has access to, and capable of out-of-band network access, well then I guess you are right - there's no reason to want something like Coreboot. No big deal.