The defenses put in place to thwart the 2008 attack turn out to be very weak.

Cold boot attacks, used to extract sensitive data such as encryption keys and passwords from system memory, have been given new life by researchers from F-Secure. First documented in 2008, cold boot attacks depend on the ability of RAM to remember values even across system reboots. In response, systems were modified to wipe their memory early during the boot process—but F-Secure found that, in many PCs, tampering with the firmware settings can force the memory wipe to be skipped, once again making cold boot attacks possible.

The RAM in any commodity PC is more specifically called Dynamic RAM (DRAM). The "dynamic" here is in contrast to the other kind of RAM (used for caches in the processor), static RAM (SRAM). SRAM retains its stored values for as long as the chip is powered on; once the value is stored, it remains that way until a new value is stored or power is removed. It doesn't change, hence "static." Each bit of SRAM typically needs six or eight transistors; it's very fast, but the high transistor count makes it bulky, which is why it's only used for small caches.

DRAM, on the other hand, has a much smaller size per bit, using only a single transistor paired with a capacitor. These capacitors lose their stored charge over time; when they're depleted, the DRAM no longer retains the value it was supposed to remember. To handle this, the DRAM is refreshed multiple times per second to top up the capacitors and rewrite the values being stored. This rewriting is what makes DRAM "dynamic." It's not just the power that needs to be maintained for DRAM; the refreshes also need to occur.

But that refreshing is double-edged. Memory is typically refreshed every 64 milliseconds, with the individual DRAM cells engineered to retain their value for at least that long under normal operating conditions. But outside normal operating conditions, the situation changes. At high temperatures, the memory needs to be refreshed more often. Cool the DRAM down and it needs to be refreshed less often. Cool it enough and it can go tens of seconds between refreshes.
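To put that in concrete terms, the refresh is not one big event: the memory controller staggers refresh commands across the retention window. A quick back-of-the-envelope sketch in C, assuming the common (but not universal) DDR3/DDR4 organization of 8,192 refresh commands per 64 ms window:

/* Refresh cadence under the typical organization of 8,192 refresh
   commands per 64 ms retention window. */
#include <stdio.h>

int main(void) {
    const double window_ms = 64.0; /* retention window at normal temperature */
    const int refresh_cmds = 8192; /* refresh commands per window (typical) */
    printf("one refresh command every %.2f microseconds\n",
           window_ms * 1000.0 / refresh_cmds); /* ~7.81 us */
    return 0;
}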

That retention behavior formed the basis of the 2008 cold boot attack: memory in a victim system is cooled to -50°C, and then the machine is abruptly powered off without shutting down the operating system. The frozen memory can be put into a different machine equipped with software to read it, or the machine can be rebooted into a different operating system that similarly reads the frozen memory and saves it to disk.

Industry response

The industry response to this attack was to make the system wipe memory early on in the boot process. This doesn't help if someone wants to move the chips to a different machine, but in systems with soldered-down memory it should protect against rebooting into a different operating system and dumping memory that way: by the time the different operating system is loaded, the memory has already been wiped, leaving nothing to dump.

But alas, nothing in the PC world is simple. Naively, one might think that this could be achieved by simply having the machine's firmware or processor automatically wipe the memory every single time the system is initialized. For no particularly obvious reason, that's not the solution that the PC industry chose.

Instead, the solution is something more complex: the operating system would set a special value (the "memory overwrite request," MOR) in the firmware's non-volatile storage that would specify if the memory wipe should occur or not. On booting, the firmware sets the value to indicate that a wipe should occur next boot. The operating system can, however, clear the value to suppress the wipe if it has guaranteed that it has already overwritten sensitive values in RAM. This skips the wipe next boot; the firmware then sets the value again, and the process is continued.

In this way, if the operating system is terminated without performing a clean shutdown (as is done in a cold boot attack), the MOR will still indicate that a wipe is necessary. So booting into the alternative operating system will always force memory to be overwritten first.
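On UEFI systems, the MOR flag is exposed to the operating system as a firmware variable. As a rough illustration of where this state lives, here is a minimal sketch that reads the flag on Linux, assuming the TCG-specified variable name and GUID and efivarfs's layout (four bytes of attributes followed by the payload); details may vary by platform:

/* Minimal sketch: read the MemoryOverwriteRequestControl UEFI variable
   via Linux's efivarfs. The first 4 bytes are the variable's attributes;
   bit 0 of the one-byte payload is the memory overwrite request. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const char *path = "/sys/firmware/efi/efivars/"
        "MemoryOverwriteRequestControl-e20939be-32d4-41be-a150-897f85d49829";
    FILE *f = fopen(path, "rb");
    if (!f) { perror("open MOR variable"); return 1; }

    uint32_t attrs;
    uint8_t mor;
    if (fread(&attrs, sizeof attrs, 1, f) != 1 ||
        fread(&mor, sizeof mor, 1, f) != 1) {
        fprintf(stderr, "short read\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("MOR bit: %u (%s)\n", mor & 1u,
           (mor & 1u) ? "wipe requested on next boot" : "wipe suppressed");
    return 0;
}

The operating system clears the flag with an ordinary (privileged) variable write at clean shutdown; the F-Secure attack achieves the same effect from outside the operating system by rewriting the firmware's non-volatile storage directly.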

Cold boot, rebooted

The new attack takes advantage of this design in a way that seems rather obvious: overwrite the MOR to suppress the memory wipe, then perform a cold boot attack as normal. The system boots up, sees that it shouldn't wipe memory, then loads the attacker's operating system and allows memory to be dumped, including all the encryption keys and other secrets contained within.

[Diagram: Step 2 is what distinguishes this from a traditional cold boot attack.]

The F-Secure researchers say that the attack is effective against typical corporate laptops. In response, Microsoft has updated its BitLocker configuration recommendations to require a BitLocker PIN to start and to disable system suspending (allowing only hibernation, which wipes encryption keys from memory anyway). Apple says that its systems equipped with its T2 security chip are unaffected, because they don't store secrets in main memory at all. Beyond that, however, the researchers say that there's no obvious fix to the problem.

The original specification doesn't seem blind to this problem, either. It says that the value used to determine whether a memory wipe should occur should have its integrity protected to prevent attackers from being able to tamper with it and suppress the overwrite. The success of the attacks suggests that this integrity protection either isn't happening or isn't sufficient to actually protect against attackers anyway.

Why the memory wiping is designed this way is not immediately clear, and the specification doesn't provide much elucidation. The whole memory-wiping process is only meant to be activated when powering on a machine from the S4 or S5 power states (S4 is "soft off," in which everything is powered down except for the front panel power button; S5 is "hard off" with not even the front-panel power button operational). It seems straightforward then to always perform the memory wipe; there should be no harm in doing so.

The only time you don't want a memory wipe is when restoring from the S3 suspend state. In S3 suspend, DRAM contents are refreshed, but the CPU and most other system components are powered down; this provides the combination of quick resume with low power consumption. However, the specification says that the firmware shouldn't ever perform memory wipes when leaving the S3 suspend state, so in this scenario it shouldn't matter what the MOR value is.

Update: So consensus is that the entire MOR system is there as a performance optimization. Zeroing out a few GB of system memory would add a little time to the boot process—a Skylake processor, for example, can only zero out about 20-30GB per second, depending on exact memory speed, so in typical configurations this could add a quarter of a second, perhaps even as much as half a second, to the system's boot time—so MOR was added so that the delay could be avoided. The security problem is just a bonus.
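For a sense of scale, here is the arithmetic behind that estimate:

/* Boot-time cost of a firmware memory wipe at the ~20-30 GB/s
   zeroing rates quoted above. */
#include <stdio.h>

int main(void) {
    const double sizes_gb[] = {4, 8, 16, 32};
    for (int i = 0; i < 4; i++)
        printf("%2.0f GB: %.2f s (at 30 GB/s) to %.2f s (at 20 GB/s)\n",
               sizes_gb[i], sizes_gb[i] / 30.0, sizes_gb[i] / 20.0);
    return 0;
}
/* 8 GB works out to roughly 0.27-0.40 s and 16 GB to roughly
   0.53-0.80 s, which is where the quarter-to-half-second figure
   for typical configurations comes from. */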

Whenever I'm having a bad day with the operating system and think software is terrible, I can always remember that firmware is usually worse and blindly trusted by what runs on top of it.

OSes have their vices, but at least they aren't a different set of largely obfuscated vices on each and every motherboard flavor in service.

As for "Apple says that its systems equipped with its T2 security chip are unaffected, because they don't store secrets in main memory at all." I'm curious how that works. Apple presumably stores their disk encryption keys and possibly some other first-party and 3rd-party-users-of-new-APIs application secrets I there; but it seems enormously presumptuous to assume that no applications are storing keys in main memory, depending on the security guarantees of the OS's memory allocation and access control. Are they act doing some sort of fancy memory encryption with the keys in the coprocessor; or did they just declare that they don't really care about the secrets the applications their customers run might be storing?

Because it adds boot time. Wiping memory at full "optimized zero this" speeds is fast, but with a large amount of RAM, it still can add seconds to the boot. It's a second or so to wipe 16GB of DDR4, assuming you can hold theoretical peak speeds.

With an important metric (for reasons far beyond my understanding) being boot time to the Windows desktop, given the choice between wiping memory on boot (good for security and reasonably unlikely to be exploited at wide scale) versus speeding boot by a second or two, OEMs choose boot speed.

Also, yeah, if your firmware is owned, this is pretty far down the list of things to worry about. Modern Windows OSes can load binaries to execute them from firmware...

It almost feels like this research is funded by the entertainment industry to make them look less stupid when they write completely off-the-wall scripts involving IT equipment and it ends up being plausible.

As for "Apple says that its systems equipped with its T2 security chip are unaffected, because they don't store secrets in main memory at all." I'm curious how that works. Apple presumably stores their disk encryption keys and possibly some other first-party and 3rd-party-users-of-new-APIs application secrets I there; but it seems enormously presumptuous to assume that no applications are storing keys in main memory, depending on the security guarantees of the OS's memory allocation and access control. Are they act doing some sort of fancy memory encryption with the keys in the coprocessor; or did they just declare that they don't really care about the secrets the applications their customers run might be storing?

Presumably, they're talking about FileVault and related encryption keys. I believe the T2 chip is in the path for disk IO, so that chip can store the secret and do encryption/decryption without keys in main memory.

I would assume application memory is still vulnerable to this, though there may be some "secret keeping" API I'm not aware of.

It almost feels like this research is funded by the entertainment industry to make them look less stupid when they write completely off-the-wall scripts involving IT equipment and it ends up being plausible.

I mean, two years ago, "We're going to exploit the speculative execution engine in this processor to leak secrets - it doesn't really check permissions right, so we can just help ourselves to anything we want!" would have been laughed at in a Hollywood movie.

... and now it's the gift that just keeps on giving. Kernel RAM? Hypervisor memory? SGX enclaves? Yeah, you can read all of them if you know how to ask properly. sigh

So, if your system is completely owned at the firmware level, it's completely owned? Gotcha.

The thing is that this doesn't require 'completely owned'. The original spec states that setting (and clearing) the memory overwrite flag is something you can do just by twiddling an ACPI interface.

That's generally something that the OS will treat as a privileged thing to do; but no exploitation or modification of the firmware is required for there to be an "actually, it's cool, don't bother with the overwrite" button. It's specified by default.

Was it just me, or did anyone else get to "Step 1 (get physical control over the device)" and stop reading?

The way I see it, if someone has physical control over your device in the first place, you're hosed in a whole lot of worse ways than just this one.

It's kind of like a coroner listing off all the weird ways someone can die, and someone in the audience pipes up with one more. Yeah, it's a way to compromise a system. Yeah, you can get pwned by it. But so is every other method that can be used (probably a lot simpler) if one has physical control over the device.

I don't mean this to sound as snarky as it is. I mean it's a cool (no pun intended) way to extract information. But it seems to me it's fairly complicated, and there are easier ways to get that information if you have the device in your own hands.

[edit to add: It seems to me that if one can keep someone from accomplishing "Step 1", then it's a relatively easy solution to the problem.]

Maybe I'm mis-reading this whole thing, but this seems like a good argument to use encrypted containers for really important things, as opposed to full-disk encryption (or just use both!)

It has its downsides (e.g., it leaves some things unencrypted, like the cookies that allow Gmail not to ask for a password each time), but since the software used is typically not always running (or if it is, the container isn't typically open most of the time), it means that this kind of attack would yield nothing in terms of decryption keys?

Because it adds boot time. Wiping memory at full "optimized zero this" speeds is fast, but with a large amount of RAM, it still can add seconds to the boot. It's a second or so to wipe 16GB of DDR4, assuming you can hold theoretical peak speeds.

With an important metric (for reasons far beyond my understanding) being boot time to the Windows desktop, given the choice between wiping memory on boot (good for security and reasonably unlikely to be exploited at wide scale) versus speeding boot by a second or two, OEMs choose boot speed.

Also, yeah, if your firmware is owned, this is pretty far down the list of things to worry about. Modern Windows OSes can load binaries to execute them from firmware...

Not to mention the power that writing that much memory as fast as possible will burn. To hit those kinds of speeds, the controller will be opening lots of banks of DRAM at one time to mitigate row access times. Luckily you will be writing zeros so you will get back all that charge that was used to store ones. (That's a joke. It doesn't work that way.)

Was it just me, or did anyone else get to "Step 1 (get physical control over the device)" and stop reading?

This is the plan-B scenario. You did almost all the right things, and your one fault is letting your laptop sleep instead of shutting it down (and thus wiping any private keys from memory), and then somebody steals your laptop. Could he (or she) retrieve your private keys by manipulating the hardware, exploiting your mistake of not shutting down every time?

Any pointers on where I should be looking? That document mentioned the T2 only in the context of firmware passwords, and had nothing to say about FileVault beyond the cipher it uses. I didn't see anything relevant to answering the question of which secrets one would or wouldn't expect to find in RAM on an OSX system if they dumped it while it was asleep.

In the weak sense that there is probably a path through most or all of Apple's documentation if you traverse the links in the correct order, I'm sure it's all there; but nothing in that document or nearby seemed informative in this context.

Was it just me, or did anyone else get to "Step 1 (get physical control over the device)" and stop reading?

The way I see it, if someone has physical control over your device in the first place, you're hosed in a whole lot of worse ways than just this one.

It's kind of like a coroner listing off all the weird ways someone can die, and someone in the audience pipes up with one more. Yeah, it's a way to compromise a system. Yeah, you can get pwned by it. But so is every other method that can be used (probably a lot simpler) if one has physical control over the device.

I don't mean this to sound as snarky as it is. I mean it's a cool (no pun intended) way to extract information. But it seems to me it's fairly complicated, and there are easier ways to get that information if you have the device in your own hands.

[edit to add: It seems to me that if one can keep someone from accomplishing "Step 1", then it's a relatively easy solution to the problem.]

It's just you. Namely, that you don't understand the problem space at all. OK, so it's not just you -- other people are wrong too. ;-)

The whole point of full-disk encryption products like BitLocker is to protect data in the instances when an attacker has gained physical control over your device (e.g., they stole your laptop). So yes, a vulnerability in that model is notable.

That being said, using BitLocker in transparent (TPM-only) mode has a number of such risks. Besides the cold boot methods mentioned above, both the hardware and the OS software are open to being attacked: things like FireWire/Thunderbolt ports, the OS being vulnerable to remote exploits, lock screen authentication bypasses, etc. There are a fleet of other mitigations in place for those, but Microsoft has always recommended that BitLocker be used with a PIN for these reasons. A pre-boot environment drastically reduces the attack surface and makes it a lot harder for most of these attacks to work.

Peter, I'd advocate changing the line about Microsoft updating their recommendations. Yes, they did update that KB article. But they've always recommended TPM+PIN as the secure method.

Because it adds boot time. Wiping memory at full "optimized zero this" speeds is fast, but with a large amount of RAM, it still can add seconds to the boot. It's a second or so to wipe 16GB of DDR4, assuming you can hold theoretical peak speeds.

With an important metric (for reasons far beyond my understanding) being boot time to the Windows desktop, given the choice between wiping memory on boot (good for security and reasonably unlikely to be exploited at wide scale) versus speeding boot by a second or two, OEMs choose boot speed.

Also, yeah, if your firmware is owned, this is pretty far down the list of things to worry about. Modern Windows OSes can load binaries to execute them from firmware...

On a system with 5-nines required uptime and a large amount of RAM, a full manual zeroing of memory could blow your uptime SLA with a single reboot.

Any pointers on where I should be looking? That document mentioned the T2 only in the context of firmware passwords, and had nothing to say about FileVault beyond the cipher it uses. I didn't see anything relevant to answering the question of which secrets one would or wouldn't expect to find in RAM on an OSX system if they dumped it while it was asleep.

In the weak sense that there is probably a path through most or all of Apple's documentation if you traverse the links in the correct order, I'm sure it's all there; but nothing in that document or nearby seemed informative in this context.

I believe the T2 encrypts the drives; the difference is that when not using FileVault, the drive will decrypt and mount without a password. The key in both cases sits in the T2.

Because it adds boot time. Wiping memory at full "optimized zero this" speeds is fast, but with a large amount of RAM, it still can add seconds to the boot. It's a second or so to wipe 16GB of DDR4, assuming you can hold theoretical peak speeds.

With an important metric (for reasons far beyond my understanding) being boot time to the Windows desktop, given the choice between wiping memory on boot (good for security and reasonably unlikely to be exploited at wide scale) versus speeding boot by a second or two, OEMs choose boot speed.

Also, yeah, if your firmware is owned, this is pretty far down the list of things to worry about. Modern Windows OSes can load binaries to execute them from firmware...

On a system with 5-nines required uptime and a large amount of RAM, a full manual zeroing of memory could blow your uptime SLA with a single reboot.

Such systems are usually not optimized for fast reboots, because outside of maintenance windows they shouldn't be rebooted to begin with. I've seen reboot times of such servers in the 10-minute-plus range, thanks to RAID controllers and various other things going on.

Was it just me, or did anyone else get to "Step 1 (get physical control over the device)" and stop reading?

The way I see it, if someone has physical control over your device in the first place, you're hosed in a whole lot of worse ways than just this one.

It's kind of like a coroner listing off all the weird ways someone can die, and someone in the audience pipes up with one more. Yeah, it's a way to compromise a system. Yeah, you can get pwned by it. But so is every other method that can be used (probably a lot simpler) if one has physical control over the device.

I don't mean this to sound as snarky as it is. I mean it's a cool (no pun intended) way to extract information. But it seems to me it's fairly complicated, and there are easier ways to get that information if you have the device in your own hands.

[edit to add: It seems to me that if one can keep someone from accomplishing "Step 1", then it's a relatively easy solution to the problem.]

Why do you think at-rest encryption such as FileVault or BitLocker even exists in the first place? Because devices can and do fall into the hands of other people, especially mobile ones like laptops or phones. You can't just wave that away.

When you have to explain to the CEO that the stolen HR laptop with all your employees' personal information (salary, SSN, etc.) wasn't in fact encrypted*, do you really think "but they shouldn't have let it get stolen" is going to suffice? Of course not. You want it encrypted, AND you don't want that encryption to be trivially circumvented.

Of course no security is perfect, but you want to provide a reasonable deterrent considering the value of what you're protecting. And there are solutions for physical attacks that go beyond probing pins, so that you can protect against someone freezing your chip, using a microprobe, etc. Simply shrugging and saying "eh well the device is in someone else's hands, so I'm f'ed" is NOT a solution for a huge number of cases. Many devices spend their entire (post-factory) lives outside of the secret owner's physical control.

Wouldn't system memory encryption implemented at the memory controller level be able to defeat the cold boot attack? It seems a lot simpler than having the firmware remember to wipe all memory at boot: the system can simply forget the old key and generate a new one.

Wouldn't system memory encryption implemented at the memory controller level be able to defeat the cold boot attack? It seems a lot simpler than having the firmware remember to wipe all memory at boot: the system can simply forget the old key and generate a new one.

Even if it was possible to encrypt the system memory, it would be pretty pointless because the CPU must be able to access the unencrypted content.

Was it just me, or did anyone else get to "Step 1 (get physical control over the device)" and stop reading?

The way I see it, if someone has physical control over your device in the first place, you're hosed in a whole lot of worse ways than just this one.

It's kind of like a coroner listing off all the weird ways someone can die, and someone in the audience pipes up with one more. Yeah, it's a way to compromise a system. Yeah, you can get pwned by it. But so is every other method that can be used (probably a lot simpler) if one has physical control over the device.

I don't mean this to sound as snarky as it is. I mean it's a cool (no pun intended) way to extract information. But it seems to me it's fairly complicated, and there are easier ways to get that information if you have the device in your own hands.

[edit to add: It seems to me that if one can keep someone from accomplishing "Step 1", then it's a relatively easy solution to the problem.]

Why do you think at-rest encryption such as FileVault or BitLocker even exists in the first place? Because devices can and do fall into the hands of other people, especially mobile ones like laptops or phones. You can't just wave that away.

When you have to explain to the CEO that the stolen HR laptop with all your employees' personal information (salary, SSN, etc.) wasn't in fact encrypted*, do you really think "but they shouldn't have let it get stolen" is going to suffice? Of course not. You want it encrypted, AND you don't want that encryption to be trivially circumvented.

Of course no security is perfect, but you want to provide a reasonable deterrent considering the value of what you're protecting. And there are solutions for physical attacks that go beyond probing pins, so that you can protect against someone freezing your chip, using a microprobe, etc. Simply shrugging and saying "eh well the device is in someone else's hands, so I'm f'ed" is NOT a solution for a huge number of cases. Many devices spend their entire (post-factory) lives outside of the secret owner's physical control.

*True story based on secondhand experience

If you've got physical access, you can do cool stuff like adding keyloggers with RF transmitters, etc.

One place I worked we had a PC stolen during a building power outage (probably by an electrician). It happened to be a shiny new PC we'd put a copy of the PROD DB onto prior to a major migration -- we'd run out of disk space in the rack. Yeah... that was fun... I had Forensics in bunny suits dusting for prints just behind me... The good thing was it wasn't my decision to do that, so I didn't have new orifices torn into me by the CEO, CTO, and everyone else down to the cleaner.

Because it adds boot time. Wiping memory at full "optimized zero this" speeds is fast, but with a large amount of RAM, it still can add seconds to the boot. It's a second or so to wipe 16GB of DDR4, assuming you can hold theoretical peak speeds.

With an important metric (for reasons far beyond my understanding) being boot time to the Windows desktop, given the choice between wiping memory on boot (good for security and reasonably unlikely to be exploited at wide scale) versus speeding boot by a second or two, OEMs choose boot speed.

It's absurd.

This is only relevant to soldered-memory systems (socketed memory can always just be transferred to another machine to have its contents read, so there's no promise of a memory wipe in that situation anyway), and soldered-memory systems are small. I think there are 16GB soldered systems on the market, but I don't think there's anything bigger than that presently. A Skylake processor can zero 16GB in 0.8 seconds or less (using SSE2 non-temporal stores). Faster memory and smaller capacities make that faster still. The amount of time added to the boot process is negligible. The complexity (and resultant security flaw) is totally unnecessary.
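For the curious, here's a minimal sketch of that zeroing approach (x86-only SSE2 intrinsics; the 1 MiB demo buffer is a stand-in for the real memory range):

/* Zero a buffer with SSE2 non-temporal stores. Non-temporal stores
   bypass the cache, so zeroing gigabytes doesn't evict everything
   else; the buffer must be 16-byte aligned. */
#include <emmintrin.h>
#include <stdlib.h>

static void zero_nt(void *buf, size_t bytes) {
    __m128i zero = _mm_setzero_si128();
    __m128i *p = (__m128i *)buf;
    for (size_t i = 0; i < bytes / sizeof(__m128i); i++)
        _mm_stream_si128(p + i, zero); /* non-temporal 16-byte store */
    _mm_sfence();                      /* make the stores globally visible */
}

int main(void) {
    size_t n = (size_t)1 << 20;        /* 1 MiB demo buffer */
    void *buf = aligned_alloc(16, n);  /* 16-byte alignment for SSE2 */
    if (!buf) return 1;
    zero_nt(buf, n);
    free(buf);
    return 0;
}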

Because it adds boot time. Wiping memory at full "optimized zero this" speeds is fast, but with a large amount of RAM, it still can add seconds to the boot. It's a second or so to wipe 16GB of DDR4, assuming you can hold theoretical peak speeds.

With an important metric (for reasons far beyond my understanding) being boot time to the Windows desktop, given the choice between wiping memory on boot (good for security and reasonably unlikely to be exploited at wide scale) versus speeding boot by a second or two, OEMs choose boot speed.

It's absurd.

This is only relevant to soldered-memory systems (socketed memory can always just be transferred to another machine to have its contents read, so there's no promise of a memory wipe in that situation anyway), and soldered-memory systems are small. I think there are 16GB soldered systems on the market, but I don't think there's anything bigger than that presently. A Skylake processor can zero 16GB in 0.8 seconds or less (using SSE2 non-temporal stores). Faster memory and smaller capacities make that faster still. The amount of time added to the boot process is negligible. The complexity (and resultant security flaw) is totally unnecessary.

It’s absurd now. With slower processors and memory, it was not so absurd when they came up with the fix. After that, it’s a case of “if it ain’t broke, why would we slow system boot down by almost a second?” And no matter when, the competitive benchmark disadvantage is noticeable if everyone else isn’t doing it.

So, if your system is completely owned at the firmware level, it's completely owned? Gotcha.

There's no such thing as security when an attacker has physical access to a machine. It's ridiculous that anyone is concerned with this threat that requires possession of the machine, liquid nitrogen, and specialized equipment when there are ten thousand worse security holes. How about the way 2FA is broken thanks to vulnerable txt messaging? How about the fact that a Windows 10 user account paired with an online Microsoft account is required to use the same password as that account, influencing users to implement a shorter, easier-to-remember-and-input password for an online account that's exposed to potential attack 24/7?

Yes, unless you're a billionaire criminal with Swiss bank accounts targeted by a daring heist team, or someone carrying around top-secret data targeted by nation states or mega-corporations, this will never happen to you.

No one is going to do this to get access to a $100,000 401k account or stock portfolio.

Wouldn't system memory encryption implemented at the memory controller level be able to defeat the cold boot attack? It seems a lot simpler than having the firmware remember to wipe all memory at boot: the system can simply forget the old key and generate a new one.

Even if it was possible to encrypt the system memory, it would be pretty pointless because the CPU must be able to access the unencrypted content.

The decryption key would be stored in the CPU in some kind of secure enclave, inaccessible to software or to attackers with physical access. So no, not pointless.

The threat model is that the attacker ONLY has physical control over the device, not any local/remote login access. So they don't have the ability to execute arbitrary software on the machine, not without replacing the boot media. But that would still not give you access to the original, encrypted disk.
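To make that concrete, here's a toy model of the idea (purely illustrative: a real controller, such as AMD's memory encryption, uses a proper block cipher rather than this stand-in mixing function):

/* Toy model of controller-level memory encryption with an ephemeral
   per-boot key. Everything written to "DRAM" is scrambled with a
   keystream derived from the key and the address; after a "reboot"
   the key is gone, so frozen contents decrypt to noise. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

static uint64_t boot_key; /* regenerated at every power-on */

/* cheap key/address mix standing in for a real block cipher */
static uint64_t keystream(uint64_t addr) {
    uint64_t x = boot_key ^ (addr * 0x9E3779B97F4A7C15ull);
    x ^= x >> 33; x *= 0xFF51AFD7ED558CCDull; x ^= x >> 33;
    return x;
}

static void mem_write(uint64_t *dram, uint64_t addr, uint64_t v) {
    dram[addr] = v ^ keystream(addr); /* only ciphertext hits the DIMM */
}

static uint64_t mem_read(const uint64_t *dram, uint64_t addr) {
    return dram[addr] ^ keystream(addr);
}

int main(void) {
    uint64_t dram[4] = {0};
    srand((unsigned)time(NULL));
    boot_key = ((uint64_t)rand() << 32) ^ (uint64_t)rand();

    mem_write(dram, 2, 0xC0FFEE);
    printf("before reboot: %llx\n", (unsigned long long)mem_read(dram, 2));

    boot_key = ((uint64_t)rand() << 32) ^ (uint64_t)rand(); /* "reboot" */
    printf("after reboot:  %llx\n", (unsigned long long)mem_read(dram, 2));
    return 0;
}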

Whenever I'm having a bad day with the operating system and think software is terrible I can always remember that firmware is usually worse and blindly trusted by what runs on top of it.

OSes have their vices; but at least they aren't a different set of largely obfuscated vices on each and every motherboard flavor in service.

As for "Apple says that its systems equipped with its T2 security chip are unaffected, because they don't store secrets in main memory at all." I'm curious how that works. Apple presumably stores their disk encryption keys and possibly some other first-party and 3rd-party-users-of-new-APIs application secrets I there; but it seems enormously presumptuous to assume that no applications are storing keys in main memory, depending on the security guarantees of the OS's memory allocation and access control. Are they act doing some sort of fancy memory encryption with the keys in the coprocessor; or did they just declare that they don't really care about the secrets the applications their customers run might be storing?

Of course Apple is not claiming that NO APP stores secrets in RAM; that would be an absurd claim. What they are saying is that all keys handled by Apple have authentication performed via the T2, not on the x86, not in DRAM (so, e.g., everything controlled by Keychain and Safari, disk encryption, Notes encryption). I don't know the extent to which the APIs have been updated to uniformity with iOS. On iOS the APIs allow all apps to do this (unless they are just determined to be incompetent), and I assume this is where Apple wants to take the Mac, but I don't know how far along that journey they are.

In particular, it doesn't have anything like the detail the iOS paper gives regarding exactly HOW apps should call in to get data protected or something verified.

I think it's fair to say that Macs cannot (as long as they are trapped in the world of x86) provide the cradle-to-grave complete secrets isolation that iPhones provide BUT that the ARM Macs will be able to provide that (though probably contingent on apps using new APIs).

What Apple can and does do is protect ITS secrets (the ones I listed), and that's probably good enough for most people. (Though if you use, e.g., Chrome, including Chrome as a repository of passwords, I don't know; that's the sort of case that might be problematic.)

Of course this is all moot, because I would expect that Apple doesn't even honor (or provide) that MOR setting anyway. Do we have any evidence that they implement it? They have their own firmware that does things very differently from standard PC BIOSes.

So, if your system is completely owned at the firmware level, it's completely owned? Gotcha.

There's no such thing as security when an attacker has physical access to a machine. It's ridiculous that anyone is concerned with this threat that requires possession of the machine, liquid nitrogen, and specialized equipment when there are ten thousand worse security holes. How about the way 2FA is broken thanks to vulnerable txt messaging? How about the fact that a Windows 10 user account paired with an online Microsoft account is required to use the same password as that account, influencing users to implement a shorter, easier-to-remember-and-input password for an online account that's exposed to potential attack 24/7?

This doesn't require liquid nitrogen or any other cooling; the chips are never powered off. It's a "warm boot" attack. The tools to talk to the flash chips are likely under $50.

So, if your system is completely owned at the firmware level, it's completely owned? Gotcha.

There's no such thing as security when an attacker has physical access to a machine. It's ridiculous that anyone is concerned with this threat that requires possession of the machine, liquid nitrogen, and specialized equipment when there are ten thousand worse security holes. How about the way 2FA is broken thanks to vulnerable txt messaging? How about the fact that a Windows 10 user account paired with an online Microsoft account is required to use the same password as that account, influencing users to implement a shorter, easier-to-remember-and-input password for an online account that's exposed to potential attack 24/7?

This doesn't require liquid nitrogen or any other cooling; the chips are never powered off. It's a "warm boot" attack. The tools to talk to the flash chips are likely under $50.

Are you sure about that? The article's friendly diagram includes cold boot as part of the recipe, and without a cold boot, even a 64 ms gap in refreshing during the reboot process could wipe the data:

Quote:

Memory is typically refreshed every 64 milliseconds, with the individual DRAM cells engineered to retain their value for at least that long under normal operating conditions.

If you're correct, then the article is a bit unclear about those points.

Why isn't the RAM just transparently encrypted with a reliably reset key? Maybe even on the DIMM/chip. Would this cause a noticeable performance loss? It would at least be a nice feature for hardened devices.

Because it adds boot time. Wiping memory at full "optimized zero this" speeds is fast, but with a large amount of RAM, it still can add seconds to the boot. It's a second or so to wipe 16GB of DDR4, assuming you can hold theoretical peak speeds.

With an important metric (for reasons far beyond my understanding) being boot time to the Windows desktop, given the choice between wiping memory on boot (good for security and reasonably unlikely to be exploited at wide scale) versus speeding boot by a second or two, OEMs choose boot speed.

Also, yeah, if your firmware is owned, this is pretty far down the list of things to worry about. Modern Windows OSes can load binaries to execute them from firmware...

On a system with 5-nines required uptime and a large amount of RAM, a full manual zeroing of memory could blow your uptime SLA with a single reboot.

Is anyone using a single box/board to achieve 5-nines these days? Almost 20 years ago my mom worked on a product for the telcos that used redundant everything to achieve that, including two full motherboards running in sync. Theoretically, you never rebooted the whole system ever. This was just a billing system, not an actual switch, so essentially just a hardened server.