The whole idea of a signed boot image is to prevent replacing the boot image with something customized or malicious. However, the entire process seems to rely on a single point of failure: the verification of the image signature. The boot loader will need to verify that the image has been signed by the private key, which requires the public key to be stored... somewhere.

Maybe it is stored in a non-volatile memory chip (e.g. a ROM chip). Maybe it is stored in a super-secret sector in the built-in storage device. (What other alternatives are there?)

Isn't a ROM chip (truly read-only) the only way -- barring soldering a new chip in place -- to ensure the boot process cannot be altered? If so, why aren't mobile devices concerned with maintaining a stronghold on the boot image using these chips? Is there some technical reason, or is it more of a cost/benefit trade-off?

Assuming device manufacturers want to maintain some flexibility to alter the boot process in the event bugs are discovered, how do they (try to) protect this read/write boot code and/or set of public keys?

1 Answer

For a trusted boot (or authenticated boot or secure boot or whatever you want to call it), you need a root of trust that's protected for integrity. It doesn't need to be a “super-secret” sector; it needs to be a “super-shielded” sector. ROM fits the bill.

In practice, device manufacturers like to put as little code as possible in the ROM, because they don't want to stop updating the code a few months before the device ships. So that code typically does nothing more than initialize a few core devices (e.g. put the MMU into shape), load the first few sectors from a storage medium, verify their cryptographic signature against a public key stored in ROM, and transfer execution to that code. If the signature doesn't match, the ROM code may allow the user to upload an alternate boot image (typically over USB).
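In very rough C, the ROM stage boils down to something like the sketch below. This is purely illustrative, not any vendor's actual code: the names (rom_pubkey, load_sectors, verify_signature, usb_recovery) and the stub bodies are invented, and a real ROM would implement them against the actual SoC hardware, with a real signature scheme such as RSA or ECDSA behind verify_signature.

    /* Illustrative sketch only -- not any vendor's actual ROM code.
     * All names and stub bodies are invented for this example. */
    #include <stdint.h>
    #include <stddef.h>

    /* Root of trust: a public key baked into mask ROM at manufacture. */
    static const uint8_t rom_pubkey[32] = { 0 /* ... */ };

    #define BOOT_IMG_MAX (256 * 1024)

    /* Stubs standing in for SoC-specific routines. */
    static int load_sectors(uint8_t *dst, size_t len)
    { (void)dst; (void)len; return 0; }

    static int verify_signature(const uint8_t *img, size_t len,
                                const uint8_t *key)
    { (void)img; (void)len; (void)key; return 0; }

    static void jump_to(const uint8_t *entry) { (void)entry; }
    static void usb_recovery(void) { /* wait for an image over USB */ }

    void rom_boot(void)
    {
        static uint8_t img[BOOT_IMG_MAX];

        /* 1. Minimal hardware init (clocks, RAM, MMU) happens first. */

        /* 2. Pull the first-stage bootloader off flash/eMMC. */
        if (load_sectors(img, sizeof img) != 0) {
            usb_recovery();
            return;
        }

        /* 3. The single trust decision: check the image against the
         *    key that is physically unmodifiable. Everything later in
         *    the chain inherits its trust from this comparison. */
        if (verify_signature(img, sizeof img, rom_pubkey) != 0) {
            usb_recovery();  /* bad signature: allow re-flash over USB */
            return;
        }

        /* 4. Hand control to the verified first-stage loader. */
        jump_to(img);
    }

    int main(void) { rom_boot(); return 0; }  /* lets the sketch build as a demo */

Note how little the ROM stage does: everything security-critical reduces to that one verify_signature call against a key that cannot be rewritten without new silicon.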

That first non-ROM boot code is often considerably more complex than the ROM code: it usually needs to know how to manage memory, access various storage devices, perhaps read filesystems, have at least a rudimentary user interface, and so on. So, in practice, all that code is buggy, and bugs there (or later in the boot chain) are how iPhones get jailbroken. Some manufacturers, such as Apple, are indeed deeply concerned with controlling the boot image; that doesn't mean they succeed against a determined community.

Furthermore, even if the signed boot image holds up, the operating system itself is far too complex to withstand attacks for very long. Signing the whole OS is not practical, as users of all but the most basic mobile phones and PDAs expect to be able to install their own applications (and OSes tend to have root holes). The OS can typically do pretty much what it wants, except that if it naively overwrites the boot image, the device won't boot. If you want to run a different operating system, you may be able to use the original OS as a jumping-off point from which you boot your real operating system.

Speed is another factor: verifying the authenticity of a 100kB bootloader is just a blip on a modern smartphone, but verifying the authenticity or the integrity of a multi-GB operating system is too slow to do at once. Incremental verification also slows down disk reads, and requires filesystem support that isn't widely available at this time.
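To make the incremental cost concrete, here is a rough sketch of per-block verification: each block's digest is checked against a pre-computed table on every read. Every name is invented for illustration, and hash_block is a stub standing in for a real cryptographic hash such as SHA-256.

    /* Illustrative sketch of block-level verification; all names are
     * invented, and hash_block() stands in for a real hash (SHA-256). */
    #include <stdint.h>
    #include <string.h>

    #define BLOCK_SIZE 4096
    #define HASH_SIZE  32
    #define NUM_BLOCKS 1024

    /* Per-block digests, computed at build time and covered by one
     * signature that is checked once at boot. */
    static const uint8_t hash_table[NUM_BLOCKS][HASH_SIZE] = {{0}};

    static void hash_block(const uint8_t *blk, uint8_t out[HASH_SIZE])
    { (void)blk; memset(out, 0, HASH_SIZE); }  /* stub */

    static int raw_read(uint32_t block_no, uint8_t *dst)
    { (void)block_no; memset(dst, 0, BLOCK_SIZE); return 0; }  /* stub */

    /* Every read now pays for a hash computation and a comparison --
     * the read slowdown mentioned above. */
    int verified_read(uint32_t block_no, uint8_t *dst)
    {
        uint8_t digest[HASH_SIZE];

        if (block_no >= NUM_BLOCKS || raw_read(block_no, dst) != 0)
            return -1;

        hash_block(dst, digest);
        if (memcmp(digest, hash_table[block_no], HASH_SIZE) != 0)
            return -1;  /* tampered or corrupted: refuse to return it */

        return 0;
    }

    int main(void)
    {
        uint8_t buf[BLOCK_SIZE];
        return verified_read(0, buf) == 0 ? 0 : 1;
    }

The upside is that only blocks that are actually read get verified, so boot stays fast; the downside is exactly the per-read hash cost and the storage-stack support described above.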

Why would the OS kernel components not be signed, though? It seems to me that having a signed trivial boot loader is of minimal value, whereas signing individual drivers and core static code would be a far more secure design. As for overwriting the boot image, wouldn't the original signature verifier prevent this (i.e. the intent of the question)? I don't actually know; I'm asking. I've written OS kernels for x86 processors for fun, but the whole mobile OS battlefield mentality is completely virgin territory. I understand how root holes can lead to piggyback loading vectors, though.
– logicalscope Dec 21 '11 at 1:52


@logicalscope The boot image may be signed, and further OS components may be signed. But a whole chain including third-party drivers and applications collected from hundreds or thousands of suppliers, which marketing demands be released yesterday… that doesn't happen on consumer devices (it can happen on professional devices). Two factors work against signing: the confidence you can have in the signed code decreases with the size of the signed code and with the number of suppliers. In a way, it's surprising that some Apple devices can resist jailbreaking for months.
– Gilles Dec 21 '11 at 2:08

@logicalscope Are you expanding from smart cards and readers to mobile phones, by any chance? The sheer amount of stuff that gets added into all those megabytes makes a difference. And the large number of applications combined with very high pressure on release speed also makes a difference. On the flip side, you have a processor that won't break a sweat doing crypto, but that's only the tip of the security iceberg.
– Gilles Dec 21 '11 at 2:15

If you are referring to my Common Criteria work, then actually I work on software systems rather than smartcards/readers. The mobile question has some applicability to my work, but it is a personal one I've had for some time as I watch the mobile makers try to one-up the modders/jailbreakers.
– logicalscope Dec 21 '11 at 2:29

I guess in the end, signed buggy code is still buggy code.
– logicalscope Dec 21 '11 at 2:31