Out-of-date EFI versions likely put Windows and Linux PCs at risk, too.

An alarming number of Macs remain vulnerable to known exploits that completely undermine their security and are almost impossible to detect or fix even after receiving all security updates available from Apple, a comprehensive study released Friday has concluded.

The exposure results from known vulnerabilities that remain in the Extensible Firmware Interface, or EFI, the software on a computer's motherboard that runs first when a Mac is turned on. EFI identifies what hardware components are available, starts those components up, and hands control over to the operating system. Over the past few years, Apple has released updates that patch a host of critical EFI vulnerabilities exploited by attacks known as Thunderstrike and Thunderstrike 2, as well as a recently disclosed CIA attack tool known as Sonic Screwdriver.

An analysis by security firm Duo Security of more than 73,000 Macs shows that a surprising number remained vulnerable to such attacks even though they had received OS updates that were supposed to patch the EFI firmware. On average, 4.2 percent of the Macs analyzed ran EFI versions different from what was prescribed by the hardware model and OS version. Forty-seven Mac models remained vulnerable to the original Thunderstrike, and 31 remained vulnerable to Thunderstrike 2. At least 16 models received no EFI updates at all. EFI updates for other models were inconsistently successful, with the 21.5-inch iMac released in late 2015 topping the list: 43 percent of those sampled ran the wrong version.

Hard to detect, (almost) impossible to disinfect

Attacks against EFI are considered especially potent because they give attackers control that starts with the very first instruction a Mac receives. What's more, the level of control attackers get far exceeds what they gain by exploiting vulnerabilities in the OS or the apps that run on it. That means an attacker who compromises a computer's EFI can bypass higher-level security controls, such as those built into the OS or, assuming one is running for extra protection, a virtual machine hypervisor. An EFI infection is also extremely hard to detect and even harder to remedy, as it can survive even after a hard drive is wiped or replaced and a clean version of the OS is installed.

"As the pre-boot environment becomes increasingly like a full OS in and of its own, it must likewise be treated like a full OS in terms of the security support and attention applied to it," Duo Security researchers wrote in a whitepaper outlining their research. Referring to the process of assuring the quality of a release, the researchers added: "This attention goes beyond just releasing well QA'd EFI patches—it extends to the use of appropriate user and admin notifications to message the security status of the firmware alongside easy-to-apply remedial actions."

Duo Security warned that the problem of out-of-date pre-boot firmware may be even worse for computers running Windows and Linux. Whereas Apple is solely responsible for supplying the motherboards that go into Macs, a wide range of manufacturers supply motherboards for Windows and Linux machines, each providing vastly different families of firmware. Duo Security focused on Macs because Apple's control over the entire platform made such an analysis much more feasible, and because the results provide an indication of how pre-boot firmware is faring across the entire industry.

In an e-mailed statement, Apple officials wrote: "We appreciate Duo's work on this industry-wide issue and noting Apple’s leading approach to this challenge. Apple continues to work diligently in the area of firmware security and we’re always exploring ways to make our systems even more secure. In order to provide a safer and more secure experience in this area, macOS High Sierra automatically validates Mac firmware weekly."

Apple didn't respond to a follow-up question asking how the weekly firmware validation measure works in the just-released High Sierra version of macOS. The new macOS version introduces a feature called eficheck, but Duo Security researchers said they have found no evidence it warns users when they're running out-of-date EFI versions, as long as those versions are official ones from Apple. Instead, eficheck appears only to check whether EFI firmware was issued by someone other than Apple.

The research comes two years after Apple overhauled the way it delivers firmware updates. Since 2015, Apple has bundled software and firmware updates in the same release in an effort to ensure users automatically install all available security fixes. Prior to the change, Apple distributed EFI updates separately from OS and application updates. Further complicating the old process, firmware updates required users to install them by first booting into a dedicated EFI firmware mode.

The Duo Security research indicates that the new firmware patching regimen has multiple problems of its own. In some cases, entire Mac model categories aren't receiving firmware updates at all. In other cases, Mac models receive an EFI update with a version that's earlier than the one that's currently installed. The error results in no update being installed, since a Mac's EFI system will automatically reject updates that try to roll back to earlier versions. In other cases, Macs don't get updated for reasons Duo Security wasn't able to determine.
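The rollback behavior described above amounts to a strict version comparison. A minimal sketch, assuming firmware versions can be reduced to comparable numeric tuples (Apple's actual EFI version strings are model-specific and messier):

```python
def should_install(current: tuple, offered: tuple) -> bool:
    """EFI updaters typically refuse rollbacks: an update is applied only
    if the offered version is strictly newer than the installed one."""
    return offered > current

# A machine already on (226, 20) silently rejects an "update" to (225, 12),
# which is the failure mode Duo observed when Apple shipped an EFI version
# older than the one installed.
assert should_install((226, 20), (225, 12)) is False
assert should_install((226, 20), (227, 0)) is True
```

Because the rejection is silent, a user who "installs" such an update has no indication that the firmware was left untouched.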

Attacks on the bleeding edge

People with out-of-date EFI versions should know that pre-boot firmware exploits are currently considered to be on the bleeding edge of computer attacks. They require large amounts of expertise, and, in many—but not all—cases, they require brief physical access to the targeted computer. This means that someone who uses a Mac for personal e-mail, Web browsing, and even online banking probably isn't enough of a high-profile user to be targeted by an attack this advanced. By contrast, journalists, attorneys, and people with government clearances may want to include EFI attacks in their threat modeling.

Duo Security is releasing a free tool it's calling EFIgy that makes it easy to check whether a Mac is running an EFI version with a known vulnerability. For people using Windows and Linux computers, the process for verifying they have the most up-to-date UEFI version isn't nearly as simple. Windows users can open a command prompt with administrative rights, type "wmic BIOS get name, version, serialnumber", and then compare the result with what's recommended by the hardware manufacturer. Finding the UEFI version on a Linux computer varies from distribution to distribution. In some cases, out-of-date firmware can be updated. For older computers, the best course of action may be to retire the machine. Duo Security has also published a blog post accompanying the whitepaper.
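The checks described above can be wrapped in a short script. This is a best-effort sketch: it assumes only the stock tools the article names (wmic on Windows, dmidecode on Linux, the latter usually requiring root) and returns None when they're unavailable.

```python
import platform
import subprocess
from typing import Optional

def firmware_version() -> Optional[str]:
    """Report the installed BIOS/UEFI version string, or None on failure."""
    if platform.system() == "Windows":
        cmd = ["wmic", "bios", "get", "name,version,serialnumber"]
    else:
        # dmidecode reads the SMBIOS tables; it typically needs root.
        cmd = ["dmidecode", "-s", "bios-version"]
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return None
    if proc.returncode != 0:
        return None
    return proc.stdout.strip() or None
```

The result still has to be compared by hand against the version the hardware maker currently recommends, and of course a self-reported version number is only as trustworthy as the firmware doing the reporting.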

Duo Security's research exposes a security blind spot in the Mac world that almost certainly extends well into the Windows and Linux ecosystems as well. Now that the findings have gone public and a much larger sample of Macs can be tested, the world will be able to get a better idea of how widespread the problem really is. Getting a clearer picture of how Windows and Linux systems are affected will take more time.

Post updated in the eighth paragraph to add details about eficheck.

Promoted Comments

In my experience Windows and Linux users are more at risk since motherboard manufacturers don't support BIOS updates for very long. After each almost-yearly CPU+chipset revision they seem to leave the old models to rot. At least Asus does anyway, since that's what I use. I also don't think I've ever seen security fixes in any BIOS change notes. It's usually compatibility tweaks.

Conversely, Windows and Linux users have the advantage of being less of a monoculture. An exploit that targets 2018 MacBooks is likely to work against any and all 2018 MacBooks, which is a lot of devices. While PCs way outsell Macs, I doubt any one model of PC comes close to MacBook sales. The same is true, though by smaller margins, for iMacs and MacBook Pros.

I would be weary of this. Security by obscurity not withstanding, AMI and Phoenix make the vast majority of UEFI BIOS' as far as I can tell. Old fogeys will recognize these names from their 286 or 386 days.

Yup, but there are dozens if not hundreds of different UEFI versions made by each of these two brands that basically corner the market. And only the right one works on a given motherboard (a quick tour of forums illustrates that plenty of people have found that out the hard way, esp. with complex mobo naming schemes and versioning). And while some of the changes are mostly cosmetic (Asus/ASRock/Gigabyte branding), most are not (support of 3rd-party controllers, brand-specific features, etc.)

It's not so much security by obscurity, rather, it's the sheer volume of work that would be needed to create functional attacks for this highly fragmented market. Granted, finding a weakness in a component that is present throughout each brand's UEFI would simplify the task, but then there's still the work of creating all the versions and testing them.

I think a good security practice from a hardware standpoint would be to require some sort of physical switch to be activated in order to update the firmware. I guess it may be an issue when a critical update is required and there are a million laypeople you need to talk through flipping the switch and then flipping it back.

-d

The three listed attacks require physical access to the device (thunderbolt). Has there been any evidence of a remote EFI attack? That would be pretty terrible...

Dubbed "Thunderstrike 2," the new proof-of-concept attack still spreads primarily through infected Thunderbolt accessories. But where the original Thunderstrike required a malicious user to have physical access to your computer to work—something sometimes referred to as an "evil maid" attack, though an evil butler could probably do the same job—the new one can be spread remotely. The malware can be delivered "via a phishing e-mail and malicious Web site," and once downloaded it can infect connected accessories that use Option ROM (Apple's Thunderbolt-to-gigabit-Ethernet accessory is a commonly cited example). Once the accessory is infected, the malware can spread to any Mac that you plug the accessory into.

It jumps from the internet to a flashable thunderbolt accessory then from the thunderbolt accessory to EFI.

Always keep your BIOS up to date. Or at least read the release notes to see what fixes and other improvements they bring. Besides Apple, Microsoft also pushes out firmware updates automatically for their Surface devices.

I must admit that I find the line "As the pre-boot environment becomes increasingly like a full OS in and of its own, it must likewise be treated like a full OS in terms of the security support and attention applied to it. This attention goes beyond just releasing well QA'd EFI patches—it extends to the use of appropriate user and admin notifications to message the security status of the firmware alongside easy-to-apply remedial actions." to be rather chilling. Not so much because it's an ignoble sentiment compared to the "eh, if it doesn't cause hard crashes when combined with certain RAM configurations it's completely fine." school of firmware 'support'; but because the track record of things 'treated like a full OS in terms of security support and attention applied to it' is really, really not encouraging. Less totally atrocious than it used to be; but continues to be scary despite constant patching and admin vigilance.

This is probably a deeply lost cause; but system firmware seems like one of those areas where you would want to avoid being 'full OS like' as much as possible. Full OSes get a lot of attention, testing, 3rd party security tools, etc. but they are fundamentally dangerous because their mandate is to do pretty much as much as possible; run any arbitrary programs, etc. That makes them enormously flexible, powerful, and useful; but paints a massive target on their backs.

Given how painful the attempt to secure full OSes has been, and how marginally successful it has been; one would really hope that firmware would be deliberately avoiding that quagmire and keeping it simple, stupid. That doesn't have to mean "we should slavishly adhere to the IBM PC BIOS forever, Real Mode 4 lyfe!"; but it really doesn't mean "Let's have the guys who gave us ACPI write what is basically an entire OS; then leave updates at the mercy of people who manufacture motherboards!" That's a plan that sounds more like a parody of the Android release model than a sane foundation for pretty much every x86(and some non-x86) system manufactured; but it's the one we have.

You can't block all conceivable malice, especially the stuff with physical access (unless you want to do things like drop all option ROM support; or only be able to use option ROMs on devices cryptographically blessed by your platform vendor for a...small extra fee; if the attacker can get a PCIe device with an option ROM on your PCIe bus, you are going to execute it); but given the risks, one would hope that firmware would be moving in the direction of 'do as little as possible before handing off to the OS, and put at least the first stage of that in actual ROM' (ideally all of it; but things like CPU microcode updates probably preclude that, and at least a physically-not-software-modifiable first stage gives you something to reflash any later stages from).

Unfortunately, that doesn't seem to be the direction things are going.

It's basically irrelevant whether it's a Mac or not, as long as the BIOS is EFI (and, for one exploit, there's Thunderbolt on it), or am I missing something? Is the code that a MacBook runs in its EFI vulnerable where a different but similarly spec'ed laptop doesn't have these problems?

No, that's not correct. Firmware exploits need to be tailored to the firmware, and the delivery method needs to support the OS as well. Because of the vast diversity of PC firmware (there are several firmware companies, and the hardware is much more diverse as well), this is more difficult than writing firmware exploits for Macs, which all use very similar firmware. That makes Macs a much easier target, but it also makes it easier for Apple to plug security holes. Very few people ever update the firmware on their PCs, leaving them more open, but harder to target.

So if EFI is so complex and can get persistent infections since it's basically an embedded OS maybe there's some security to be gained (in addition to the simplicity/convenience) from forcing Legacy boot mode and disabling the EFI capabilities.

Seems like there's an opportunity for manufacturers to get ahead of this emerging threat by increasing their focus on security at this level, and heavily marketing themselves as such. You probably wouldn't see consumer attention at the scale you see for Intel vs. AMD, but a subset of the market might get interested enough to care that the machines they're buying include parts from a reputable OEM who truly cares about their security.

> In my experience Windows and Linux users are more at risk since motherboard manufacturers don't support BIOS updates for very long. After each almost-yearly CPU+chipset revision they seem to leave the old models to rot. At least Asus does anyway, since that's what I use. I also don't think I've ever seen security fixes in any BIOS change notes. It's usually compatibility tweaks.

PLUS most PCs are bought as pre-built units, so the typical user won't ever know of a BIOS update or how to do it. Windows Update and, AFAIK, Linux updaters don't deliver BIOS and firmware updates for motherboards. They typically need special flashing programs direct from the manufacturer.

> Seems like there's an opportunity for manufacturers to get ahead of this emerging threat by increasing their focus on security at this level, and heavily marketing themselves as such. You probably wouldn't see consumer attention at the scale you see for Intel vs. AMD, but a subset of the market might get interested enough to care that the machines they're buying include parts from a reputable OEM who truly cares about their security.

Or they could limit useless functions and thus make sure that attack surface is tiny. Add a hardware write switch to make it even safer. There is no need for EFI to do anything other than initialise the hardware and handle boot/HW settings.

> So if EFI is so complex and can get persistent infections since it's basically an embedded OS, maybe there's some security to be gained (in addition to the simplicity/convenience) from forcing Legacy boot mode and disabling the EFI capabilities.

I feel like it shouldn't require a full complicated system like EFI to verify a blob matches a signature. I do understand it has other duties which can introduce bugs, but it seems to me that splitting it off into much smaller systems that can be fully QA'd (and perhaps not even re-flashed) would make for better security.

For example, a chip that manages a fixed block of memory (BIOS/OS loader) and checks against a very simple list of built in signatures would seem quite possible to QA fully as the code shouldn't be too large if it is fixed to a very small (but appropriate) set of hashes/signature schemes.

A dedicated serial interface on the motherboard (no connection to outside the case) would allow those who wish to open their PC up and upload their own certificates. It only needs to understand three commands: put <cert>, list, delete <cert>. Even if there was a bug in it, as long as it wasn't in the heavily QA'd read+verify stage, it doesn't matter, because you need extended physical access to open up the PC and use the extra serial connection. It mitigates the evil-maid attack slightly, since quickly plugging in a Thunderbolt cable is easy; opening up the PC, not so much. As usual, if they have extended physical access and can disassemble the PC, all bets are off anyway, as they could just replace the motherboard with a vulnerable one.

That gives a secure channel into the more complicated BIOS/OS systems that can chain much more complex signature validation schemes etc if they want. It would mitigate attacks like thunder-strike as while they might be able to fool the BIOS or OS into loading them while they are plugged in, they wouldn't be able to overwrite the OS/BIOS to stick around afterwards as they have no communication line to the chip.
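The three-command store proposed above is simple enough to sketch. Everything here is hypothetical (the command framing, the names); the point is how small the attack surface of the write path can be kept relative to the read-only verify path:

```python
class CertStore:
    """Toy model of the proposed out-of-band certificate store.
    The write path understands exactly three commands (put, list,
    delete) and is reachable only over the internal serial header;
    the boot-time verify path is read-only."""

    def __init__(self):
        self._certs = set()

    def handle(self, line: str) -> str:
        """Process one command received on the serial interface."""
        cmd, _, arg = line.strip().partition(" ")
        if cmd == "put" and arg:
            self._certs.add(arg)
            return "OK"
        if cmd == "list":
            return "\n".join(sorted(self._certs)) or "(empty)"
        if cmd == "delete" and arg:
            self._certs.discard(arg)
            return "OK"
        return "ERR unknown command"

    def verify(self, signer: str) -> bool:
        """Boot-time check: was the blob signed by a known cert?
        (A real device would verify a signature, not compare strings.)"""
        return signer in self._certs
```

Because verify never mutates state and the mutating commands never touch the verify logic, the part that must be bug-free stays tiny and auditable.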

It seems to me that (U)EFI's problems so far stem from the fact it's not just doing the security stuff, it's doing the driver loading etc., and bugs found in that are allowing the security to be bypassed.

edit: Do the downvotes care to articulate a counter argument? Assuming they have one...

> So if EFI is so complex and can get persistent infections since it's basically an embedded OS, maybe there's some security to be gained (in addition to the simplicity/convenience) from forcing Legacy boot mode and disabling the EFI capabilities.

But then you'd lose out on some other security features, like secure boot, so I'm not sure the tradeoff is worthwhile.

> Or they could limit useless functions and thus make sure that the attack surface is tiny. Add a hardware write switch to make it even safer. There is no need for EFI to do anything other than initialise the hardware and handle boot/HW settings.

Sounds good to me. Still probably need to update the EFI on occasion, right?

I think the problem is flashing in the first place. If you can change it in situ, you're fucked.

That gives an attacker the ability to do things remotely, or at least without opening the case (which would make things a whole lot more conspicuous that something bad was up). Granted, physical access negates ALL security (or should be considered to do that), but allowing things to be flashed, updated and all of that just opens the first floor window to anyone who has the skill-set and determination to climb in through it and change the tinting on it.

It's not that I don't get why we can update the system BIOS, and I have to admit, it's a shit-ton easier to deal with in a mouse-controlled GUI than the older keyboard-only methods. But one would think that they'd be able to make a BIOS that self-optimizes for the hardware that's plugged in without it being able to be reprogrammed or flashed to do other things someone wouldn't like done.

It'd piss off the overclockers and gamers, but I don't see the ABILITY to reprogram the BIOS going away. I'm thinking more along the lines of those systems that have to be more secure than the average home user's. After all, generally speaking, state-sponsored malware rarely ends up in Grandma's computer. It's more likely to be in a computer connected to a utility network, a nuclear reactor, a government black-box building, or something like that.

It's one thing to be vulnerable to these kinds of hacks. But it's quite another to be targeted by them. I'm guessing (and probably deserve the down-votes for saying it), but I suspect there will always be a shit-ton more people who are vulnerable to an exploit than will ever experience that exploit on their specific system.

Getting it patched would be nice, but at that level, patching becomes a bit of an issue - especially for Grandma.

> Windows users can open a command prompt with administrative rights and type "wmic BIOS get name, version, serialnumber" and then compare the result with what's recommended by the hardware manufacturer. Finding the UEFI version on a Linux computer varies

Er? Are you f*cking kidding? Comparing the version is obviously not enough. An attacker will be smart enough not to modify the version! One would need to compare a full hash. But even then, the attacker could run the whole OS in virtualization and fake the EFI ROM read so it returns the known-good state.

> I would be weary of this. Security by obscurity not withstanding, AMI and Phoenix make the vast majority of UEFI BIOS' as far as I can tell. Old fogeys will recognize these names from their 286 or 386 days.

It may be a little bit better in that there's also Insyde Software, so there are three main UEFI core authors (based on the UEFI Forum's members), in addition to internally developed stuff at places like HP and Intel (likely limited to big servers). Still, given Apple's market footprint, your point is likely correct in that Apple's installed base of targets is no larger than other EFI authors'.

If nothing else, Apple, being an operating system developer as well, at least is familiar with patching stuff to deal with security flaws, as opposed to consumer motherboard vendors who mostly provide EFI updates for a very limited amount of time and primarily for feature and compatibility reasons.

(Ob nitpick: "wary", not "weary". Weary is being tired of something, wary is being cautious about or suspicious of something.)

So if EFI is so complex that it can pick up persistent infections, since it's basically an embedded OS, maybe there's some security to be gained (in addition to the simplicity/convenience) from forcing Legacy boot mode and disabling the EFI capabilities.

I feel like it shouldn't require a full complicated system like EFI to verify a blob matches a signature. I do understand it has other duties which can introduce bugs, but it seems to me that splitting it off into much smaller systems that can be fully QA'd (and perhaps not even re-flashed) would make for better security.

For example, a chip that manages a fixed block of memory (BIOS/OS loader) and checks against a very simple list of built in signatures would seem quite possible to QA fully as the code shouldn't be too large if it is fixed to a very small (but appropriate) set of hashes/signature schemes.
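The fixed-function checker described above could be logically as small as this (names are hypothetical; real hardware would do this in mask ROM or dedicated logic, not Python):

```python
import hashlib

# Hypothetical allowlist, fixed at manufacture time (or via the
# dedicated maintenance interface described below).
ALLOWED_DIGESTS = frozenset({
    hashlib.sha256(b"example-good-bootloader").hexdigest(),
})

def boot_block_ok(boot_block: bytes, allowed=ALLOWED_DIGESTS) -> bool:
    """Release the rest of the boot chain only if the boot block's
    hash appears on the fixed allowlist."""
    return hashlib.sha256(boot_block).hexdigest() in allowed
```

The appeal is that the entire trusted computing base is a hash function plus a set lookup, which is small enough to audit exhaustively.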

A dedicated serial interface on the motherboard (no connection to outside the case) would allow those who wish to open their PC up and upload their own certificates. It only needs to understand three commands: put <cert>, list, delete <cert>. Even if there were a bug in it, as long as it wasn't in the heavily QA'd read+verify stage it wouldn't matter, since you'd need extended physical access to open up the PC and use the extra serial connection. It mitigates the evil-maid attack slightly: quickly plugging in a Thunderbolt cable is easy; opening up the PC, not so much. As usual, if they have extended physical access and can disassemble the PC, all bets are off anyway, as they could just replace the motherboard with a vulnerable one.
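The three-command protocol above can be modeled in a few lines (a toy sketch; a real implementation would be firmware on the dedicated chip speaking over the internal serial header, and the CertStore name is mine):

```python
class CertStore:
    """Toy model of the put/list/delete interface on the cert chip."""

    def __init__(self):
        self._certs = set()

    def handle(self, line: str) -> str:
        # One command per line: "put <cert>", "list", "delete <cert>".
        cmd, _, arg = line.strip().partition(" ")
        if cmd == "put" and arg:
            self._certs.add(arg)
            return "OK"
        if cmd == "list":
            return "\n".join(sorted(self._certs)) or "(empty)"
        if cmd == "delete" and arg:
            self._certs.discard(arg)
            return "OK"
        return "ERR unknown command"
```

Because the store is only writable over this channel, nothing the OS runs can enroll a new certificate, which is the whole point.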

That gives a secure channel into the more complicated BIOS/OS systems, which can chain much more complex signature-validation schemes etc. if they want. It would mitigate attacks like Thunderstrike: while they might be able to fool the BIOS or OS into loading them while they are plugged in, they wouldn't be able to overwrite the OS/BIOS to stick around afterwards, as they'd have no communication line to the chip.

It seems to me that (U)EFI's problems so far stem from the fact that it's not just doing the security stuff; it's doing the driver loading etc., and bugs found in that are allowing the security to be bypassed.

The problem with any kind of "security by signing" is that you sign away your control to someone else. It's the way MS tried to make PCs Windows-only.

Having an open and very limited code base for initialising the HW and dealing with basic configuration (boot, voltages, memory timings and such) can be kept very simple, and thus hard for a lot of bugs to hide in.

I must admit that I find the line "As the pre-boot environment becomes increasingly like a full OS in and of its own, it must likewise be treated like a full OS in terms of the security support and attention applied to it. This attention goes beyond just releasing well QA'd EFI patches—it extends to the use of appropriate user and admin notifications to message the security status of the firmware alongside easy-to-apply remedial actions." to be rather chilling. Not because it's an ignoble sentiment; compared to the "eh, if it doesn't cause hard crashes when combined with certain RAM configurations it's completely fine" school of firmware 'support', it's admirable. It's chilling because the track record of things 'treated like a full OS in terms of security support and attention applied to it' is really, really not encouraging: less totally atrocious than it used to be, but still scary despite constant patching and admin vigilance.

This is probably a deeply lost cause; but system firmware seems like one of those areas where you would want to avoid being 'full OS like' as much as possible. Full OSes get a lot of attention, testing, 3rd party security tools, etc. but they are fundamentally dangerous because their mandate is to do pretty much as much as possible; run any arbitrary programs, etc. That makes them enormously flexible, powerful, and useful; but paints a massive target on their backs.

Given how painful the attempt to secure full OSes has been, and how marginally successful it has been; one would really hope that firmware would be deliberately avoiding that quagmire and keeping it simple, stupid. That doesn't have to mean "we should slavishly adhere to the IBM PC BIOS forever, Real Mode 4 lyfe!"; but it really doesn't mean "Let's have the guys who gave us ACPI write what is basically an entire OS; then leave updates at the mercy of people who manufacture motherboards!" That's a plan that sounds more like a parody of the Android release model than a sane foundation for pretty much every x86(and some non-x86) system manufactured; but it's the one we have.

You can't block all conceivable malice, especially the stuff with physical access (unless you want to do things like drop all option ROM support, or only be able to use option ROMs on devices cryptographically blessed by your platform vendor for a... small extra fee; if the attacker can get a PCIe device with an option ROM on your PCIe bus, you are going to execute it). But given the risks, one would hope that firmware would be moving in the direction of 'do as absolutely little as possible before handing off to the OS', with at least the first stage of that in actual ROM (ideally all of it, though things like CPU microcode updates probably preclude that; at least a physically-not-software-modifiable stage gives you something to reflash any later stages from).

Unfortunately, that doesn't seem to be the direction things are going.

I think the problem is flashing in the first place. If you can change it in situ, you're fucked.

That gives an attacker the ability to do things remotely, or at least without opening the case (which would make it a whole lot more conspicuous that something bad was up). Granted, physical access negates ALL security (or should be considered to), but allowing things to be flashed, updated and all of that just opens the first-floor window to anyone who has the skill-set and determination to climb in through it and change the tinting on it.

It's not that I don't get why we can update the system BIOS, and I have to admit it's a shit-ton easier to deal with in a mouse-controlled GUI than the older keyboard-only methods. But one would think they'd be able to make a BIOS that self-optimizes for the hardware that's plugged in without it being able to be reprogrammed or flashed to do other things someone wouldn't like done.

It'd piss off the overclockers and gamers, but I don't see the ABILITY to reprogram the BIOS going away. I'm thinking more along the lines of those systems that have to be more secure than the average home user's. After all, generally speaking, state-sponsored malware rarely ends up in Grandma's computer. It's more likely to be in a computer connected to a utility network, a nuclear reactor, a government black box building, or something like that.

It's one thing to be vulnerable to these kinds of hacks. But it's quite another to be targeted by them. I'm guessing (and probably deserve the down-votes for saying it), but I suspect there will always be a shit-ton more people who are vulnerable to an exploit than will ever experience that exploit on their specific system.

Yeah, I've never been a fan of the whole "flash your BIOS while in the OS" idea, at least as done by motherboard manufacturers. It's a risky process that I feel more comfortable doing at boot, when there is nothing to interfere. If it's done by the OS itself presumably it benefits from the safeguards that apply to updates (which sadly aren't perfect either, but a major step forward still).

A good practice would be to offer the option to disable BIOS update from anywhere outside of the BIOS itself, and then password protect access to the BIOS (which should be something everyone does anyways) so it can't be done automatically. But then OS-sanctioned updates wouldn't be possible anymore.

That was why I suggested there should be a separate dedicated interface to set custom certificates. You shouldn't be required to give away control to a few entities like MS; but from a security standpoint, if there are only a few entities you may wish to trust (Ubuntu, MS, Apple etc.), being able to upload their certificates to a dedicated and well-secured single-purpose chip to ensure the integrity of the boot process is a good idea.

UEFI as a practical implementation falls down in that Microsoft is in such an absurd position of power. But that's less a security concern and more one of principle.

At that point it might be best to have a port that you have to connect your "thingie" to in order to reflash the BIOS. That way the computer itself has no write access at all. Although I guess that would be considered user-unfriendly. The problem is that anything that is easy to use is going to be trivial to break, and as evidenced by reality, nobody gives a flying fuck about security when it comes to convenience.

That's why in my original post I recommended a serial port on the motherboard, not connected to the outside of the case. It is trivial for anyone, even just following instructions, to attach a serial cable or other suitable interface to the motherboard (mine already has one unconnected-by-default serial port for POST output etc.). It's really not a roadblock to anyone who wants to build their own system or manage its security. But having to open the PC to get to it makes drive-by physical access difficult, and remote exploitation impossible.

If they have enough time to open your PC case and start plugging things into the motherboard they have enough time to perform much worse and more comprehensive actions too. Plus you could physically secure the ability to open the case as necessary to reduce the risk in high-security scenarios.

UEFI is an industry standard, not a Microsoft standard. You can use your own private/public key pair for UEFI.

That's why I don't object to UEFI as an idea, but may have an objection to some practical implementations. One of the primary concerns was (is) that it would be easy for vendors to not bother including the ability to upload custom certificates. That was mitigated slightly as MS required it for their certification. But my understanding is that the requirement for that is now removed.

In other words, the concern isn't that UEFI as a standard hands MS disproportionate control - it was that practical implementations would just hardcode in MS's certificate and call it a day. But so far that thankfully hasn't really been an issue.

Never thought I'd see the day when my Mac (EFI firmware in my mini) would be more vulnerable to a particular attack than my stand-by Windows 10 box (self-built with an ASRock mobo from 2009, non-EFI).

Interesting times.

Security by obscurity being an argument for Windows machines.

People arguing that no one needs the processing power of the A series chips in iDevices; it's the user experience that makes Android superior. (Paraphrasing/combining a number of comments about the new iPhones, and why they're overpriced.)