IMPORTANT NOTE TO INDUSTRY FOLKS: This blog post is aimed at regular everyday folks; it’s intended to dispel a few common myths and help regular people understand UEFI a bit better. It is not a low-level fully detailed and 100% technically accurate explanation, and I’m not a professional firmware engineer or anything like that. If you’re actually building an operating system or hardware or something, please don’t rely on my simplified explanations or ask me for help; I’m just an idiot on the internet. If you’re doing that kind of thing and you have money, join the UEFI Forum or ask your suppliers or check your reference implementation or whatever. If you don’t have money, try asking your peers with more experience, nicely. END IMPORTANT NOTE

You’ve probably read a lot of stuff on the internet about UEFI. Here is something important you should understand: 95% of it was probably garbage. If you think you know about UEFI, and you derived your knowledge anywhere other than the UEFI specifications, mjg59’s blog or one of a few other vaguely reliable locations/people – Rod Smith, or Peter Jones, or Chris Murphy, or the documentation of the relatively few OSes whose developers actually know what the hell they’re doing with UEFI – what you think you know is likely a toxic mix of misunderstandings, misconceptions, half-truths, propaganda and downright lies. So you should probably forget it all.

Good, now we’ve got that out of the way. What I mostly want to talk about is bootloading, because that’s the bit of firmware that matters most to most people, and the bit news sites are always banging on about and wildly misunderstanding.

Terminology

First, let’s get some terminology out of the way. Both BIOS and UEFI are types of firmware for computers. BIOS-style firmware is (mostly) only ever found on IBM PC compatible computers. UEFI is meant to be more generic, and can be found on systems which are not in the ‘IBM PC compatible’ class.

You do not have a ‘UEFI BIOS’. No-one has a ‘UEFI BIOS’. Please don’t ever say ‘UEFI BIOS’. BIOS is not a generic term for all PC firmware, it is a particular type of PC firmware. Your computer has a firmware. If it’s an IBM PC compatible computer, it’s almost certainly either a BIOS or a UEFI firmware. If you’re running Coreboot, congratulations, Mr./Ms. Exception. You may be proud of yourself.

Secure Boot is not the same thing as UEFI. Do not ever use those terms interchangeably. Secure Boot is a single effectively optional element of the UEFI specification, which was added in version 2.2 of the UEFI specification. We will talk about precisely what it is later, but for now, just remember it is not the same thing as UEFI. You need to understand what Secure Boot is, and what UEFI is, and which of the two you are actually talking about at any given time. We’ll talk about UEFI first, and then we’ll talk about Secure Boot as an ‘extension’ to UEFI, because that’s basically what it is.

Bonus Historical Note: UEFI was not invented by, is not controlled by, and has never been controlled by Microsoft. Its predecessor and basis, EFI, was developed and published by Intel. UEFI is managed by the UEFI Forum. Microsoft is a member of the UEFI forum. So is Red Hat, and so is Apple, and so is just about every major PC manufacturer, Intel (obviously), AMD, and a laundry list of other major and minor hardware, software and firmware companies and organizations. It is a broad consensus specification, with all the messiness that entails, some of which we’ll talk about specifically later. It is no one company’s Evil Vehicle Of Evilness.

References

If you really want to understand UEFI, it’s a really good idea to go and read the UEFI specification. You can do this. It’s very easy. You don’t have to pay anyone any money. I am not going to tell you that reading it will be the most fun you’ve ever had, because it won’t. But it won’t be a waste of your time. You can find it right here on the official UEFI site. You have to check a couple of boxes, but you are not signing your soul away to Satan, or anything. It’s fine. As I write this, the current version of the spec is 2.4 Errata A, and that’s the version this post is written with regard to.

There is no BIOS specification. BIOS is a de facto standard – it works the way it worked on actual IBM PCs, in the 1980s. That’s kind of one of the reasons UEFI exists.

Now, to keep things simple, let’s consider two worlds. One is the world of IBM PC compatible computers – hereafter referred to just as PCs – before UEFI and GPT (we’ll come to GPT) existed. This is the world a lot of you are probably familiar with and may understand quite well. Let’s talk about how booting works on PCs with BIOS firmware.

BIOS booting

It works, in fact, in a very, very simple way. On your bog-standard old-skool BIOS PC, you have one or more disks which have an MBR. The MBR is another de facto standard; basically, the very start of the disk describes the partitions on the disk in a particular format, and contains a ‘boot loader’, a very small piece of code that a BIOS firmware knows how to execute, whose job it is to boot the operating system(s). (Modern bootloaders frequently are much bigger than can be contained in the MBR space and have to use a multi-stage design where the bit in the MBR just knows how to load the next stage from somewhere else, but that’s not important to us right now).

All a BIOS firmware knows, in the context of booting the system, is what disks the system contains. You, the owner of this BIOS-based computer, can tell the BIOS firmware which disk you want it to boot the system from. The firmware has no knowledge of anything beyond that. It executes the bootloader it finds in the MBR of the specified disk, and that’s it. The firmware is no longer involved in booting.

In the BIOS world, absolutely all forms of multi-booting are handled above the firmware layer. The firmware layer doesn’t really know what a bootloader is, or what an operating system is. Hell, it doesn’t know what a partition is. All it can do is run the boot loader from a disk’s MBR. You also cannot configure the boot process from outside of the firmware.

UEFI booting: background

OK, so we have our background, the BIOS world. Now let’s look at how booting works on a UEFI system. Even if you don’t grasp the details of this post, grasp this: it is completely different. Completely and utterly different from how BIOS booting works. You cannot apply any of your understanding of BIOS booting to native UEFI booting. You cannot make a little tweak to a system designed for the world of BIOS booting and apply it to native UEFI booting. You need to understand that it is a completely different world.

Here’s another important thing to understand: many UEFI firmwares implement some kind of BIOS compatibility mode, sometimes referred to as a CSM. Many UEFI firmwares can boot a system just like a BIOS firmware would – they can look for an MBR on a disk, and execute the boot loader from that MBR, and leave everything subsequently up to that bootloader. People sometimes incorrectly refer to using this feature as ‘disabling UEFI’, which is linguistically nonsensical. You cannot ‘disable’ your system’s firmware. It’s just a stupid term. Don’t use it, but understand what people really mean when they say it. They are talking about using a UEFI firmware’s ability to boot the system ‘BIOS-style’ rather than native UEFI style.

What I’m going to describe is native UEFI booting. If you have a UEFI-based system whose firmware has the BIOS compatibility feature, and you decide to use it, and you apply this decision consistently, then as far as booting is concerned, you can pretend your system is BIOS-based, and just do everything the way you did with BIOS-style booting. If you’re going to do this, though, just make sure you do apply it consistently. I really can’t recommend strongly enough that you do not attempt to mix UEFI-native and BIOS-compatible booting of permanently-installed operating systems on the same computer, and especially not on the same disk. It is a terrible terrible idea and will cause you heartache and pain. If you decide to do it, don’t come crying to me.

For the sake of sanity, I am going to assume the use of disks with a GPT partition table, and FAT32-formatted EFI system partitions. Depending on how deep you dive into this stuff you may find out that it’s not strictly speaking the case that you can always assume you’ll be dealing with GPT disks and FAT32 ESPs when dealing with UEFI native boot, but the UEFI specification is quite strongly tied to both, and this is what you’ll be dealing with in 99% of cases. Unless you’re dealing with Macs, and quite frankly, screw Macs.

Edit note: the following sections (up to Implications and Complications) were heavily revised on 2014-01-26, a few hours after the initial version of this post went up, based on feedback from Peter Jones. Consider this to be v2.0 of the post. An earlier version was written in a somewhat less accurate and more confusing way.

UEFI native booting: how it actually works – background

OK, with that out of the way, let’s get to the meat. This is how native UEFI booting actually works. It’s probably helpful to go into this with a bit of high-level background.

UEFI provides much more infrastructure at the firmware level for handling system boot. It’s nowhere near as simple as BIOS. Unlike BIOS, UEFI certainly does understand, to varying degrees, the concepts of ‘disk partitions’ and ‘bootloaders’ and ‘operating systems’.

You can sort of look at the BIOS boot process, and look at the UEFI process, and see how the UEFI process extends various bits to address specific problems.

The BIOS/MBR approach to finding the bootloader is pretty janky, when you think about it. It’s very ‘special sauce’: this particular tiny space at the front of the disk contains magic code that only really makes much sense to the system firmware and special utilities for writing it. There are several problems with this approach.

It’s inconvenient to deal with – you need special utilities to write the MBR, and just about the only way to find out what’s in one is to dd the contents out and examine them.

As noted above, the MBR itself is not big enough for many modern bootloaders. What they do is install a small part of themselves to the MBR proper, and the rest to the empty space on the disk between where the conventional MBR ends and the first partition begins. There’s a rather big problem with this (well, the whole design is a big problem, but never mind), which is that there’s no reliable convention for where the first partition should begin, so it’s difficult to be sure there’ll be enough space. One thing you usually can rely on is that there won’t be enough space for some bootloader configurations.

The design doesn’t provide any standardized layer or mechanism for selecting boot targets other than disks…but people want to select boot targets other than disks. i.e. they want to have multiple bootable ‘things’ – usually operating systems – per disk. The only way to do this, in the BIOS/MBR world, is for the bootloaders to handle it; but there’s no widely accepted convention for the right way to do this. There are many many different approaches, none of which is particularly interoperable with any of the others, none of which is a widely accepted standard or convention, and it’s very difficult to write tooling at the OS / OS installation layer that handles multiboot cleanly. It’s just a very messy design.

The design doesn’t provide a standard way of booting from anything except disks. We’re not going to really talk about that in this article, but just be aware it’s another advantage of UEFI booting: it provides a standard way for booting from, for instance, a remote server.

There’s no mechanism for levels above the firmware to configure the firmware’s boot behaviour.
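To make the first of those problems concrete, about the only way to see what’s actually in an MBR is to dump its raw bytes and pick them apart by hand. Here’s a sketch of that anatomy — it builds a dummy 512-byte image so the commands are self-contained; on real hardware you’d dump the sector with dd as root (and /dev/sda is an assumption — substitute your actual disk):

```shell
# Sketch: the anatomy of the 512-byte MBR sector. We build a dummy image
# here so this is self-contained; on real hardware you would dump it with
#   dd if=/dev/sda of=mbr.bin bs=512 count=1
# (as root; the disk name is an assumption).
head -c 446 /dev/zero  > mbr.bin   # bytes 0-445:   bootloader code (the 'special sauce')
head -c 64  /dev/zero >> mbr.bin   # bytes 446-509: four 16-byte partition table entries
printf '\x55\xaa'     >> mbr.bin   # bytes 510-511: the boot signature
od -An -tx1 mbr.bin | tail -n 2    # the final bytes should read 55 aa
```

That magic 55 aa signature at the end is essentially all that marks this disk as ‘bootable’ to a BIOS firmware — everything else in those 512 bytes is opaque code.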

So you can imagine the UEFI Elves sitting around and considering this problem, and coming up with a solution. Instead of the firmware only knowing about disks and one ‘magic’ location per disk where bootloader code might reside, UEFI has much more infrastructure at the firmware layer for handling boot loading. Let’s look at all the things it defines that are relevant here.

EFI executables

The UEFI spec defines an executable format and requires all UEFI firmwares be capable of executing code in this format. When you write a bootloader for native UEFI, you write in this format. This is pretty simple and straightforward, and doesn’t need any further explanation: it’s just a Good Thing that we now have a firmware specification which actually defines a common format for code the firmware can execute.

The GPT (GUID partition table) format

The GUID Partition Table format is very much tied in with the UEFI specification, and again, this isn’t something particularly complex or in need of much explanation, it’s just a good bit of groundwork the spec provides. GPT is just a standard for doing partition tables – the information at the start of a disk that defines what partitions that disk contains. It’s a better standard for doing this than MBR/’MS-DOS’ partition tables were in many ways, and the UEFI spec requires that UEFI-compliant firmwares be capable of interpreting GPT (it also requires them to be capable of interpreting MBR, for backwards compatibility). All of this is useful groundwork: what’s going on here is the spec is establishing certain capabilities that everything above the firmware layer can rely on the firmware to have.

EFI system partitions

I only really wrapped my head around the EFI system partition concept while revising this post, and it was a great ‘aha!’ moment. Really, the concept of ‘EFI system partitions’ is just an answer to the problem of the ‘special sauce’ MBR space. The concept of some undefined amount of empty space at the start of a disk being ‘where bootloader code lives’ is a pretty crappy design, as we saw above. EFI system partitions are just UEFI’s solution to that.

The solution is this: we require the firmware layer to be capable of reading some specific types of filesystem. The UEFI spec requires that compliant firmwares be capable of reading the FAT12, FAT16 and FAT32 variants of the FAT format, in essence. In fact what it does is codify a particular interpretation of those formats as they existed at the point UEFI was accepted, and say that UEFI compliant firmwares must be capable of reading those formats. As the spec puts it:

“The file system supported by the Extensible Firmware Interface is based on the FAT file system. EFI defines a specific version of FAT that is explicitly documented and testable. Conformance to the EFI specification and its associate reference documents is the only definition of FAT that needs to be implemented to support EFI. To differentiate the EFI file system from pure FAT, a new partition file system type has been defined.”

An ‘EFI system partition’ is really just any partition formatted with one of the UEFI spec-defined variants of FAT and given a specific GPT partition type to help the firmware find it. And the purpose of this is just as described above: allow everyone to rely on the fact that the firmware layer will definitely be able to read data from a pretty ‘normal’ disk partition. Hopefully it’s clear why this is a better design: instead of having to write bootloader code to the ‘magic’ space at the start of an MBR disk, operating systems and so on can just create, format and mount partitions in a widely understood format and put bootloader code and anything else that they might want the firmware to read there.
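To make that concrete, here’s roughly what creating an ESP looks like with standard Linux tools. This is illustrative only — it needs root, the gdisk and dosfstools packages, and /dev/sdb is an assumption (point it at the wrong disk and you’ll have a very bad day):

```
# Create a 512MB partition with the 'EFI system partition' GPT type code
# (ef00), then format it with UEFI's flavour of FAT32.
# Illustrative; requires root, and /dev/sdb is an assumption.
sgdisk --new=1:0:+512M --typecode=1:ef00 /dev/sdb
mkfs.fat -F32 /dev/sdb1
```

Note there’s nothing magic in those two commands — a ‘normal’ partition, a ‘normal’ filesystem, and one GPT type code so the firmware can find it. That’s the whole point.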

The whole ESP thing seemed a bit bizarre and confusing to me at first, so I hope this section explains why it’s actually a very sensible idea and a good design – the bizarre and confusing thing is really the BIOS/MBR design, where the only way for you to write something from the OS layer that you knew the firmware layer could consume was to write it into some (but you didn’t know how much) Magic Space at the start of a disk, a convention which isn’t actually codified anywhere. That really isn’t a very sensible or understandable design, if you step back and take a look at it.

As we’ll note later, the UEFI spec tends to take a ‘you must at least do these things’ approach – it rarely prohibits firmwares from doing anything else. It’s not against the spec to write a firmware that can execute code in other formats, read other types of partition table, and read partitions formatted with filesystems other than the UEFI variants of FAT. But a UEFI compliant firmware must at least do all these things, so if you are writing an OS or something else that you want to run on any UEFI compliant firmware, this is why the EFI system partition concept is so important: it gives you (at least in theory) 100% confidence that you can put an EFI executable on a partition formatted with the UEFI FAT implementation and the correct GPT partition type, and the system firmware will be able to read it. This is the thing you can take to the bank, like ‘the firmware will be able to execute some bootloader code I put in the MBR space’ was in the BIOS world.

So now we have three important bits of groundwork the UEFI spec provides: thanks to these requirements, any other layer can confidently rely on the fact that the firmware:

Can read a partition table

Can access files in some specific filesystems

Can execute code in a particular format

This is much more than you can rely on a BIOS firmware being capable of. However, in order to complete the vision of a firmware layer that can handle booting multiple targets – not just disks – we need one more bit of groundwork: there needs to be a mechanism by which the firmware finds the various possible boot targets, and a way to configure it.

The UEFI boot manager

The UEFI spec defines something called the UEFI boot manager. (Linux distributions contain a tool called efibootmgr which is used to manipulate the configuration of the UEFI boot manager). As a sample of what you can expect to find if you do read the UEFI spec, it defines the UEFI boot manager thusly:

“The UEFI boot manager is a firmware policy engine that can be configured by modifying architecturally defined global NVRAM variables. The boot manager will attempt to load UEFI drivers and UEFI applications (including UEFI OS boot loaders) in an order defined by the global NVRAM variables.”

Well, that’s that cleared up, let’s move on. 😉 No, not really. Let’s translate that to Human. With only a reasonable degree of simplification, you can think of the UEFI boot manager as being a boot menu. With a BIOS firmware, your firmware level ‘boot menu’ is, necessarily, the disks connected to the system at boot time – no more, no less. This is not true with a UEFI firmware.

The UEFI boot manager can be configured – simply put, you can add and remove entries from the ‘boot menu’. The firmware can also (in fact the spec requires it to, in various cases) effectively ‘generate’ entries in this boot menu, according to the disks attached to the system and possibly some firmware configuration settings. It can also be examined – you can look at what’s in it.

One rather great thing UEFI provides is a mechanism for doing this from other layers: you can configure the system boot behaviour from a booted operating system. You can do all this by using the efibootmgr tool, once you have Linux booted via UEFI somehow. There are Windows tools for it too, but I’m not terribly familiar with them. Let’s have a look at some typical efibootmgr output, which I stole and slightly tweaked from the Fedora forums:
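Here’s a representative sample of that output (reconstructed for illustration — the entry names, partition GUIDs and paths are not from any real system, and are chosen to match the entries we’ll discuss below):

```
BootCurrent: 0002
Timeout: 3 seconds
BootOrder: 0003,0002,0000,0004
Boot0000* CD/DVD Drive  BIOS(3,0,00)
Boot0001* Hard Drive    HD(2,0,00)
Boot0002* Fedora        HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d)File(\EFI\fedora\grubx64.efi)
Boot0003* opensuse      HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d)File(\EFI\opensuse\grubx64.efi)
Boot0004* Hard Drive    BIOS(2,0,00)
```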

We can see a few things going on here.

The first line tells you which of the ‘boot menu’ entries you are currently booted from. The second is pretty obvious (if the firmware presents a boot menu-like interface to the UEFI boot manager, that’s the timeout before it goes ahead and boots the default entry). The BootOrder is the order in which the entries in the list will be tried. The rest of the output shows the actual boot entries. We’ll describe what they actually do later.

If you boot a UEFI firmware entirely normally, without doing any of the tweaks we’ll discuss later, what it ought to do is try to boot from each of the ‘entries’ in the ‘boot menu’, in the order listed in BootOrder. So on this system it would try to boot the entry called ‘opensuse’, then if that failed, the one called ‘Fedora’, then ‘CD/DVD Drive’, and then the second ‘Hard Drive’.

UEFI native booting: how it actually works – boot manager entries

What do these entries actually mean, though? There’s actually a huge range of possibilities that makes up rather a large part of the complexity of the UEFI spec all by itself. If you’re reading the spec, pour yourself an extremely large shot of gin and turn to the EFI_DEVICE_PATH_PROTOCOL section, but note that this is a generic protocol that’s used for things other than booting – it’s UEFI’s Official Way Of Identifying Devices For All Purposes, used for boot manager entries but also for all sorts of other purposes. Not every possible EFI device path makes sense as a UEFI boot manager entry, for obvious reasons (you’re probably not going to get too far trying to boot from your video adapter). But you can certainly have an entry that points to, say, a PXE server, not a disk partition. The spec has lots of bits defining valid non-disk boot targets that can be added to the UEFI boot manager configuration.

For our purposes, though, let’s just consider fairly normal disks connected to the system. In this case there are three types of entry you’re likely to come across.

BIOS compatibility boot entries

Boot0000 and Boot0004 in this example are actually BIOS compatibility mode entries, not UEFI native entries. They have not been added to the UEFI boot manager configuration by any external agency, but generated by the firmware itself – this is a common way for a UEFI firmware to implement BIOS compatibility booting, by generating UEFI boot manager entries that trigger a BIOS-compatible boot of a given device. How they present this to the user is a different question, as we’ll see later. Whether you see any of these entries or not will depend on your particular firmware, and its configuration. Each of these entries just gives a name – ‘CD/DVD Drive’, ‘Hard Drive’ – and says “if this entry is selected, boot this disk (where ‘this disk’ is 3,0,00 for Boot0000 and 2,0,00 for Boot0004) in BIOS compatibility mode”.

‘Fallback path’ UEFI native boot entries

Boot0001 is an entry (fictional, and somewhat unlikely, but it’s for illustrative purposes) that tells the firmware to try and boot from a particular disk, and in UEFI mode not BIOS compatibility mode, but doesn’t tell it anything more. It doesn’t specify a particular boot target on the disk – it just says to boot the disk.

The UEFI spec defines a sort of ‘fallback’ path for booting this kind of boot manager entry, which works in principle somewhat like BIOS drive booting: it looks in a standard location for some boot loader code. The details are different, though.

What the firmware will actually do when trying to boot in this way is reasonably simple. The firmware will look through each EFI system partition on the disk in the order they exist on the disk. Within the ESP, it will look for a file with a specific name and location. On an x86-64 PC, it will look for the file \EFI\BOOT\BOOTx64.EFI. What it actually looks for is \EFI\BOOT\BOOT{machine type short-name}.EFI – ‘x64’ is the “machine type short-name” for x86-64 PCs. The other possibilities are BOOTIA32.EFI (x86-32), BOOTIA64.EFI (Itanium), BOOTARM.EFI (AArch32 – that is, 32-bit ARM) and BOOTAA64.EFI (AArch64 – that is, 64-bit ARM). It will then execute the first qualifying file it finds (obviously, the file needs to be in the executable format defined in the UEFI specification).
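As a sketch of what the firmware is scanning for, here’s the directory layout the fallback path expects on an x86-64 PC, built in a scratch directory. (Illustrative only: a real ESP is a FAT filesystem on a partition with the right GPT type, not a plain directory, and the file has to actually be a valid EFI executable.)

```shell
# The fallback path boils down to: find an ESP, then look for this one
# specifically-named file. (Scratch-directory sketch; a real ESP is a FAT
# partition with the ESP GPT type, not a directory tree like this.)
mkdir -p esp/EFI/BOOT
touch esp/EFI/BOOT/BOOTx64.EFI   # on a real ESP, a valid EFI executable
ls esp/EFI/BOOT                  # prints: BOOTx64.EFI
```

Swap BOOTx64.EFI for the appropriate machine type short-name (BOOTAA64.EFI and so on) and the same layout applies on the other architectures.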

This mechanism is not designed for booting permanently-installed OSes. It’s more designed for booting hotpluggable, device-agnostic media, like live images and OS install media. And this is indeed what it’s usually used for. If you look at a UEFI-capable live or install medium for a Linux distribution or other OS, you’ll find it has a GPT partition table and contains a FAT-formatted partition at or near the start of the device, with the GPT partition type that identifies it as an EFI system partition. Within that partition there will be a \EFI\BOOT directory with at least one of the specially-named files above. When you boot a Fedora live or install medium in UEFI-native mode, this is the mechanism that is used. The BOOTx64.EFI (or whatever) file handles the rest of the boot process from there, booting the actual operating system contained on the medium.

Full UEFI native boot entries

Boot0002 and Boot0003 are ‘typical’ entries for operating systems permanently installed to permanent storage devices. These entries show us the full power of the UEFI boot mechanism, by not just saying “boot from this disk”, but “boot this specific bootloader in this specific location on this specific disk”, using all the ‘groundwork’ we talked about above.

Boot0002 is a boot entry produced by a UEFI-native Fedora installation. Boot0003 is a boot entry produced by a UEFI-native OpenSUSE installation. As you may be able to tell, all they’re saying is “load this file from this partition”. The partition is the HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d) bit: that’s referring to a specific partition (using the EFI_DEVICE_PATH_PROTOCOL, which I’m really not going to attempt to explain in any detail – you don’t necessarily need to know it, if you interact with the boot manager via the firmware interface and efibootmgr). The file is the File(\EFI\opensuse\grubx64.efi) bit: that just means “load the file in this location on the partition we just described”. The partition in question will almost always be one that qualifies as an EFI system partition, because of the considerations above: that’s the type of partition we can trust the firmware to be able to access.

This is the mechanism the UEFI spec provides for operating systems to make themselves available for booting: the operating system is intended to install a bootloader which loads the OS kernel and so on to an EFI system partition, and add an entry to the UEFI boot manager configuration with a name – obviously, this will usually be derived from the operating system’s name – and the location of the bootloader (in EFI executable format) that is intended for loading that operating system.

Linux distributions use the efibootmgr tool to deal with the UEFI boot manager. What a Linux distribution actually does, so far as bootloading is concerned, when you do a UEFI native install is really pretty simple: it creates an EFI system partition if one does not already exist, installs an EFI boot loader with an appropriate configuration – often grub2-efi, but there are others – into a correct path in the EFI system partition, and calls efibootmgr to add an appropriately-named UEFI boot manager entry pointing to its boot loader. Most distros will use an existing EFI system partition if there is one, though it’s perfectly valid to create a new one and use that instead: as we’ve noted, UEFI is a permissive spec, and if you follow the design logically, there’s really no problem with having just as many EFI system partitions as you want.
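Sketched as commands, that installer sequence looks roughly like this. Every specific here is an assumption for illustration — the device, the mount point, the distro label, the bootloader filename — and it needs root and real UEFI NVRAM variables to actually do anything:

```
# 1. Format an ESP if one doesn't already exist
#    (partition 1 of /dev/sda - an assumption)
mkfs.fat -F32 /dev/sda1
# 2. Install the bootloader to a path on the ESP (mounted at /boot/efi here)
mkdir -p /boot/efi/EFI/fedora
cp grubx64.efi /boot/efi/EFI/fedora/
# 3. Register a named UEFI boot manager entry pointing at that file
efibootmgr --create --disk /dev/sda --part 1 --label "Fedora" \
           --loader '\EFI\fedora\grubx64.efi'
```

Step 3 is what produces a ‘full’ boot entry like Boot0002 in our example: a name, a partition, and a file path, stored in NVRAM by the firmware.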

Configuring the boot process (the firmware UI)

The above describes the basic mechanism the UEFI spec defines that manages the UEFI boot process. It’s important to realize that your firmware user interface may well not represent this mechanism very clearly. Unfortunately, the spec intentionally refrains from defining how the boot process should be represented to the user or how the user should be allowed to configure it, and what that means – since we’re dealing with firmware engineers – is that every firmware does it differently, and some do it insanely.

Many firmwares do have fairly reasonable interfaces for boot configuration. A good firmware design will at least show you the boot order, with a reasonable representation of the entries on it, and let you add or remove entries, change the order, or override the order for a specific boot (by changing it just for that boot, or directly instructing the firmware to boot a particular menu entry, or even giving you the option to simply say “boot this disk”, either in BIOS compatibility mode or UEFI ‘fallback’ mode – my firmware does this). Such an interface will often show ‘full’ UEFI native boot entries (like the Fedora and openSUSE examples we saw earlier) only by their name; you have to examine the efibootmgr -v output to know precisely what these entries will actually try and do when invoked.

Some firmwares try to abstract and simplify the configuration, and may do a good or a bad job of it. For instance, if you have an option to ‘enable or disable’ BIOS compatibility mode, what it’ll really likely do is configure whether the firmware adds BIOS compatibility entries for attached drives to the UEFI boot manager configuration or not. If you have an option to ‘enable or disable’ UEFI native booting, what likely really happens when you ‘disable’ it is that the firmware changes the UEFI boot manager configuration to leave all UEFI-native entries out of the BootOrder.

The key point to remember is that any configuration option inside your firmware interface which is to do with booting is really, behind the scenes, configuring the behaviour of the UEFI boot manager. If you understand all the stuff we’ve discussed above, you may well find it easier to figure out what’s really happening when you twiddle the knobs your firmware interface exposes.

In the BIOS world, you’ll remember, you don’t always find that systems are configured to try and boot from removable drives – CD, USB – before booting from permanent drives. Some are, and some aren’t. Some will try CD before the hard disks, but not USB. People have got fairly used to having to check the BIOS configuration to ensure the boot order is ‘correct’ when trying to install a new operating system.

This applies to the UEFI world too, but because of the added flexibility/complexity of the UEFI boot manager mechanism, it can look unfamiliar and scary.

If you want to ensure that your system tries to boot from removable devices using the ‘fallback’ mechanism before it tries to boot ‘permanent’ boot entries – as you will want to do if you want to, say, install Fedora – you need this to be the default for your firmware, or you need to be able to tell the firmware this. Depending on your firmware’s interface, you may find there is a ‘menu entry’ for each attached removable device and you just have to adjust the boot order to put it at the top of the list, or you may find that there is the mechanism to directly request ‘UEFI fallback boot of this particular disk’, or you may find that the firmware tries to abstract the configuration somehow. We just don’t know, and that makes writing instructions for this quite hard. But now you broadly understand how things work behind the scenes, you may find it easier to understand your firmware user interface’s representation of that.

Configuring the boot process (from an operating system)

As we’ve noted above, unlike in the BIOS world, you can actually configure the UEFI boot process from the operating system level. If you have an insane firmware, you may have to do this in order to achieve what you want.

You can use the efibootmgr tool mentioned earlier to add, delete and modify entries in the UEFI boot manager configuration, and actually do quite a lot of other stuff with it too. You can change the boot order. You can tell it to boot some particular entry in the list on the next boot, instead of using the BootOrder list (if you or some other tool has configured this to happen, your efibootmgr -v output will include a BootNext item stating which menu entry will be loaded on the next boot). There are tools for Windows that can do this stuff from Windows, too. So if you’re really struggling to manage to do whatever it is you want to do with UEFI boot configuration from your firmware interface, but you can boot a UEFI native operating system of some kind, you may want to consider doing your boot configuration from that operating system rather than from the firmware UI.
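For reference, here are a few of the common operations in efibootmgr form. These need root and a UEFI-native boot to work, and the entry numbers are illustrative:

```
efibootmgr -v                 # list all entries, verbosely
efibootmgr -o 0002,0003,0000  # set the BootOrder
efibootmgr -n 0003            # set BootNext: boot entry 0003 on the next boot only
efibootmgr -b 0001 -B         # delete entry Boot0001
```

The -n option is the one to remember if your firmware UI is hopeless: it lets you pick the next boot target from a running OS without permanently changing the boot order.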

So to recap:

Your UEFI firmware contains something very like what you think of as a boot menu.

You can query its configuration with efibootmgr -v, from any UEFI-native boot of a Linux OS, and also change its configuration with efibootmgr (see the man page for details).

This ‘boot menu’ can contain entries that say ‘boot this disk in BIOS compatibility mode’, ‘boot this disk in UEFI native mode via the fallback path’ (which will use the ‘look for BOOT(something).EFI’ method described above), or ‘boot the specific EFI format executable at this specific location (almost always on an EFI system partition)’.

The nice, clean design that the UEFI spec is trying to imply is that all operating systems should install a bootloader of their own to an EFI system partition, add entries to this ‘boot menu’ that point to themselves, and butt out from trying to take control of booting anything else.

Your firmware UI has free rein to represent this mechanism to you in whatever way it wants, and it may do this well, or it may do this poorly.

Installing operating systems to UEFI-based computers

Let’s have a quick look at some specific consequences of the above that relate to installing operating systems on UEFI computers.

UEFI native and BIOS compatibility booting

Here’s a very very simple one which people sometimes miss:

If you boot the installation medium in ‘UEFI native’ mode, it will do a UEFI native install of the operating system: it will try to write an EFI-format bootloader to an EFI system partition, and attempt to add an entry to the UEFI boot manager ‘boot menu’ which loads that bootloader.

If you boot the installation medium in ‘BIOS compatibility’ mode, it will do a BIOS compatible install of the operating system: it will try to write an MBR-type bootloader to the magic MBR space on a disk.

This applies (with one minor caveat I’m going to paper over for now) to all OSes of which I’m aware. So you probably want to make sure you understand how, in your firmware, you can choose to boot a removable device in UEFI native mode and how you can choose to boot it in BIOS compatibility mode, and make sure you pick whichever one you actually want to use for your installation.

You really cannot do a completely successful UEFI-native installation of an OS if you boot its installation medium in BIOS compatibility mode, because the installer will not be able to configure the UEFI boot manager (this is only possible when booted UEFI-native).

It is theoretically possible for an OS installer to install the OS in the BIOS style – that is, write a bootloader to a disk’s MBR – after being booted in UEFI native mode, but most of them won’t do this, and that’s probably sensible.

Finding out which mode you’re booted in

You might find yourself with your operating system installer booted, and not be sure whether it’s actually booted in UEFI native mode or BIOS compatibility mode. Don’t panic! It’s pretty easy to find out, in a few different ways. One of the easiest is just to try to talk to the UEFI boot manager. If what you have booted is a Linux installer or environment, and you can get to a shell (ctrl-alt-f2 in the Fedora installer, for instance), run efibootmgr -v. If you’re booted in UEFI native mode, you’ll get your UEFI boot manager configuration, as shown above. If you’re booted in BIOS compatibility mode, you’ll get an error along the lines of ‘EFI variables are not supported on this system’.

If you’ve booted some other operating system, you can try running a utility native to that OS which tries to talk to the UEFI boot manager, and see if you get sensible output or a similar kind of error. Or you can examine the system logs and search for ‘efi’ and/or ‘uefi’, and you’ll probably find some kind of indication.
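If efibootmgr isn’t handy, there’s an even simpler check on Linux: the kernel only creates /sys/firmware/efi when it was itself started in UEFI native mode. A tiny sketch:

```shell
# Which mode is this Linux system booted in?
# /sys/firmware/efi exists only if the kernel was started via UEFI.
if [ -d /sys/firmware/efi ]; then
    mode="UEFI native"
else
    mode="BIOS compatibility"
fi
echo "Booted in $mode mode"
```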

Enabling UEFI native boot

To be bootable in UEFI native mode, your OS installation medium must obviously actually comply with all this stuff we’ve just described: it’s got to have a GPT partition table, and an EFI system partition with a bootloader in the correct ‘fallback’ path – \EFI\BOOT\BOOTx64.EFI (or the other names for the other platforms). If you’re having trouble doing a UEFI native boot of your installation medium and can’t figure out why, check that this is actually the case. Notably, when using the livecd-iso-to-disk tool to write a Fedora image to a USB stick, you must pass the --efi parameter to configure the stick to be UEFI bootable.
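To illustrate the naming, here’s the directory layout the fallback path expects, mocked up in a scratch directory. On a real stick this lives on the FAT-formatted EFI system partition, and the file is a real EFI executable, not an empty placeholder:

```shell
# Mock up the fallback-path layout an installer stick needs on its ESP.
mkdir -p esp-demo/EFI/BOOT
touch esp-demo/EFI/BOOT/BOOTx64.EFI   # x86_64 fallback name; other platforms
                                      # use BOOTIA32.EFI, BOOTAA64.EFI, etc.
found=$(ls esp-demo/EFI/BOOT)
echo "$found"
rm -rf esp-demo
```

(FAT is case-insensitive, so the exact capitalization of BOOTx64.EFI doesn’t matter in practice.)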

Forcing BIOS compatibility boot

If your firmware seems to make it very difficult to boot from a removable medium in BIOS compatibility mode, but you really want to do that, there’s a handy trick you can use: just make the medium not UEFI native bootable at all. You can do this pretty easily by just wiping all the EFI system partitions. (Alternatively, if using livecd-iso-to-disk to create a USB stick from a Fedora image, you can just leave out the --efi parameter and it won’t be UEFI bootable). If at that point your firmware refuses to boot it in BIOS compatibility mode, commence swearing at your firmware vendor (if you didn’t already).

Disk formats (MBR vs. GPT)

Here’s another very important consideration:

If you want to do a ‘BIOS compatibility’ type installation, you probably want to install to an MBR formatted disk.

If you want to do a UEFI native installation, you probably want to install to a GPT formatted disk.

Of course, to make life complicated, many firmwares can boot BIOS-style from a GPT formatted disk. UEFI firmwares are in fact technically required to be able to boot UEFI-style from an MBR formatted disk (though we are not particularly confident that they all really can). But you really should avoid this if at all possible. This consideration is quite important, as it’s one that trips up quite a few people. For instance, it’s a bad idea to boot an OS installer in UEFI native mode and then attempt to install to an MBR formatted disk without reformatting it. This is very likely to fail. Most modern OS installers will automatically reformat the disk in the correct format if you allow them to completely wipe it, but if you try and tell the installer ‘do a UEFI native installation to this MBR formatted disk and don’t reformat it because it has data on it that I care about’, it’s very likely to fail, even though this configuration is technically covered in the UEFI specification. Specifically, Windows and Fedora at least explicitly disallow this configuration.

If you run parted against the disk and it reports Partition table: msdos, that’s an MBR/MS-DOS formatted disk. If it were GPT-formatted, that would say gpt. You can reformat the disk with the other type of partition table by doing mklabel gpt or mklabel msdos from within parted. This will destroy the contents of the disk.
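Since mklabel is so destructive, here’s a sketch of the check-and-reformat dance run against a scratch image file rather than a real disk, so it’s safe to experiment with. On real hardware you’d point parted at e.g. /dev/sda, and mklabel really would eat everything on it:

```shell
# Check and switch the partition table type, safely, on a scratch image.
if command -v parted >/dev/null 2>&1; then
    truncate -s 64M scratch.img
    parted -s scratch.img mklabel msdos
    table=$(parted -s scratch.img print | grep -i "partition table")
    echo "$table"                       # expect: Partition Table: msdos
    parted -s scratch.img mklabel gpt   # destroys the old label (and any data)
    table=$(parted -s scratch.img print | grep -i "partition table")
    echo "$table"                       # expect: Partition Table: gpt
    rm -f scratch.img
else
    table="parted not installed here"
    echo "$table"
fi
```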

With most OS installers, if you pick a disk configuration that blows away the entire contents of the target disk, the installer will automatically reformat it using the most appropriate configuration for the type of installation you’re doing, but if you want to use an existing disk without reformatting it, you’re going to have to check how it’s formatted and take this into account.

Handling EFI system partition if doing manual partitioning

I can only give authoritative advice for Fedora here, but the gist may be useful for other distros / OSes.

If you allow Fedora to handle partitioning for you when doing a UEFI native installation – and you use a GPT-formatted disk, or allow it to reformat the disk (by deleting all existing partitions) – it will handle the EFI system partition stuff for you.

If you use custom partitioning, though, it will expect you to provide an EFI system partition for the installer to use. If you don’t do this, the installer will complain (with a somewhat confusing error message) and refuse to let you start the installation.

So if you’re doing a UEFI native install and using custom partitioning, you need to ensure that a partition of the ‘EFI system partition’ type is mounted at /boot/efi – this is where Fedora expects to find the EFI system partition it’s using. If there is an existing EFI system partition on the system, just set its mount point to /boot/efi. If there is not an EFI system partition yet, create a partition, set its type to EFI system partition, make it at least 200MB big (500MB is good), and set its mount point to /boot/efi.
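For reference, creating a suitable partition by hand looks something like the following sketch, run against a scratch image file rather than a real disk. The names and sizes are illustrative; on a real install you’d point parted at the actual disk, format the new partition as FAT32, and mount it at /boot/efi:

```shell
# Sketch: carve out a 512MB EFI system partition on a fresh GPT disk.
if command -v parted >/dev/null 2>&1; then
    truncate -s 600M scratch.img
    parted -s scratch.img mklabel gpt
    parted -s scratch.img mkpart ESP fat32 1MiB 513MiB
    # On GPT, parted's 'boot' flag is what marks a partition as an EFI
    # system partition (newer parted also accepts an explicit 'esp' flag):
    parted -s scratch.img set 1 boot on
    result=$(parted -s scratch.img print)
    echo "$result"
    rm -f scratch.img
else
    result="parted not installed here"
    echo "$result"
fi
```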

A specific example

To boil down the above: if you bought a Windows 8 or later system, you almost certainly have a UEFI native install of Windows to a GPT-formatted disk. This means that if you want to install another OS alongside that Windows install, you almost certainly want to do a UEFI-native installation of your other OS. If you don’t like all this UEFI nonsense and want to go back to the good old world you’re familiar with, you will, I’m afraid, have to blow away the UEFI-native Windows installation, and it would be a good idea to reformat the disk to MBR.

Implications and Complications

So, that’s how UEFI booting works, or at least a reasonable approximation of it. When I describe it like that, it almost all makes sense, right?

However, all is not sweetness and light. There are problems. There always are.

Attentive readers may have noticed that I’ve talked about the UEFI spec providing a mechanism. This is accurate, and important. As the UEFI spec is a ‘broad consensus’ sort of thing, one of its major shortcomings (looked at from a particular perspective) is that it’s nowhere near prescriptive enough.

If you read the UEFI spec critically, its basic approach is to define a set of functions that UEFI compliant firmwares must support. What it doesn’t do a lot of at all is strictly requiring things to be done in any particular way, or not done in any particular way.

So: the spec says that a system firmware must do all the stuff I’ve described above, in order to be considered a UEFI-compliant firmware. The spec, however, doesn’t talk about what operating systems ‘should’ or ‘must’ do at all, and it doesn’t say that firmwares must not support (or no-one may expect them to support, or whatever)…anything at all. If you’re making a UEFI firmware, in other words, you have to support GPT formatted disks, and FAT-formatted EFI system partitions, and you must read UEFI boot manager entries in the standard format, and you must do this and that and the other – but you can also do any other crap you like.

It’s pretty easy to read certain implications from the spec – it carefully sets up this nice mechanism for handling OS (or other ‘bootable thing’) selection at the firmware level, for instance, with the clear implication “hey, it’d be great if all OSes were written to this mechanism”. But the UEFI spec doesn’t require that, and neither does any other widely-respected specification.

So, what happens in the real world is that we wind up with really dumb crap. Apple, for instance, ships at least some Macs with their bootloaders in an HFS+ partition. The spec says a UEFI-compliant firmware must support UEFI FAT partitions with the specific GPT partition type that identifies them as an “EFI system partition”, but it doesn’t say the firmware can’t also recognize some other filesystem type and load a bootloader from that. (Whether you consider such a partition to be an “EFI system partition” or not is an interesting philosophical conundrum, but let’s skate right over that for now).

The world would pretty clearly be a better place if everyone just damn well used the EFI system partition format the spec goes to such great pains to define, but Apple is Apple and we can’t have nice things, so Apple went right ahead and wrote firmwares that also can read and load code from HFS+ partitions, and now everyone else has to deal with that or tell Macs to go and get boned. Apple also goes quite a long way beyond the spec in its boot process design, and if you want your alternative OS to show up on its graphical boot menu with a nice icon and things, you have to do more than what the UEFI spec would suggest.

There are various similar incredibly annoying corner cases we’ve come across, but let’s not go into them all right now. This post is long enough.

Also, as we noted earlier, the spec makes no requirements as to how the mechanism should be represented to the user. So if a couple of software companies write OSes to behave ‘nicely’ according to the conventions the spec is clearly designed to back, and install EFI boot loaders and define EFI boot manager entries with nice clear names – like, oh say, “Fedora” and “Windows” – they are implicitly relying on the firmware to then give the user some kind of sane interface somewhere relatively discoverable that lets them choose to boot “Windows” or “Fedora”. The more firmwares don’t do a good job of this, the less willing OS engineers will be to rely on the ‘proper’ conventions, and the more likely they’ll be to start rebuilding ugly hacks above the firmware level.

To be fair, we could do somewhat more at the OS level. We could present all those neat efibootmgr capabilities rather more obviously – we can use that ‘don’t respect BootOrder on the next boot, but instead boot this‘ capability, for instance, and have ‘Reboot to Windows’ as an option. It’d be kinda nice if someone looked at exposing all this functionality somewhere more obvious than efibootmgr. Windows 8 systems do use this, to some extent – you can reboot your system to the firmware UI from the Windows 8 settings menus, for instance. But still.

All this is really incredibly frustrating, because UEFI is so close to making things really a lot better. The BIOS approach doesn’t provide any kind of convention or standard for multibooting at all – it has to be handled entirely above the firmware level. We (the industry) could have come up with some sort of convention for handling multiboot, but we never did, so it just became a multiple-decade epic fail, where each operating system came up with its own approach and lots of people wrote their own bootloaders which tried to subsume all the operating systems, and all the operating systems and independent bootloaders merrily fought like cats in a sack. I mean, pre-UEFI multibooting is such a clusterf**k it’s not even worth going into, it’s broken sixteen ways from Sunday by definition.

If UEFI – or a spec built on top of it – had just mandated that everybody follow the conventions UEFI carefully establishes, and mandated that firmwares provide a sensible user interface, the win would have been epic. But it doesn’t, so it’s entirely possible that in a UEFI world things will be even worse than they were in the BIOS world. If many more firmwares show up that don’t present a good UI for the UEFI boot manager mechanism, what could happen is that OS vendors give up on the UEFI boot manager mechanism (or decide to support it and alternatives, because choice!) and just reinvent the entire goddamn nightmare of BIOS multibooting on top of UEFI – and we’ll all have to deal with all of that, plus the added complication of the UEFI boot manager layer. You’ll have multiple bootloaders fighting to load multiple operating systems all on top of the whole UEFI boot manager mechanism which is just throwing a whole bunch of other variables into the equation.

This is not a prospect filling the mind of anyone who’s had to think about it with joy.

Still, it’s important to recognize that the sins of UEFI in this area are sins of omission – they are not sins of commission, and they’re not really the result of evil intent on anyone’s part. The entity you should really be angry with if you have an idiotic system firmware that doesn’t give you good access to the UEFI boot manager mechanism is not the UEFI forum, or Microsoft, and it certainly isn’t Fedora and even more certainly isn’t me ;). The entity you should be angry at is your system/motherboard manufacturer and the goddamn incompetents they hired to write the firmware, because the UEFI spec makes it really damn clear to anyone with two brain cells to rub together that it would be a very good idea to provide some kind of useful user interface to the UEFI boot manager, and any firmware which doesn’t do so is crap code by definition. Yes, the UEFI forum should’ve realized that firmware engineers couldn’t code their way out of a goddamned paper bag and just ordered them to do so, but still, it’s ultimately the firmware engineers who should be lined up against the nearest wall.

Wait, we can simplify that. “Any firmware is crap code”. Usually pretty accurate.

Secure Boot

So now we come, finally, to Secure Boot.

Secure Boot is not magic. It’s not complicated. OK, that’s a lie, it’s incredibly complicated, but the theory isn’t very complicated. And no, Secure Boot itself is not evil. I am entirely comfortable stating this as a fact, and you should be too, unless you think GPG is evil.

Secure Boot is defined in chapter 28 of the UEFI spec (2.4a, anyway). It’s actually a pretty clever mechanism. But what it does can be described very, very simply. It says that the firmware can contain a set of signatures, and refuse to run any EFI executable which is not signed with one of those signatures.

That’s it. Well, no, it really isn’t, but that’s a reasonably acceptable simplification. Security is hard, so there are all kinds of wibbly bits to implementing a really secure bootchain using Secure Boot, and mjg59 can tell you all about them, or you can pour another large shot of gin and read the whole of chapter 28. But that’s the basic idea.

Using public key cryptography to verify the integrity of something is hardly a radical or evil concept. Pretty much all Linux distributions depend on it – we sign our packages and have our package managers go AWOOGA AWOOGA if you try to install a package which isn’t signed with one of our keys. This isn’t us being evil, and I don’t think anyone’s ever accused an OS of being evil for using public key cryptographic signing to establish trust in this way. Secure Boot is literally this exact same perfectly widely accepted mechanism, applied to the boot chain. Yet because a bunch of journalists wildly grasped the wrong end of the stick, it’s widely considered to be slightly more evil than Hitler.
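If you’d like to see the underlying idea in miniature, here’s a toy sketch using openssl. To be clear, this is just the generic sign-and-verify pattern, not the actual Secure Boot machinery, which verifies Authenticode-signed PE executables against the firmware’s key databases:

```shell
# Toy analogue of the Secure Boot check: sign a blob with a private key,
# then verify it against the matching public key (openssl plays the
# firmware here; the file names are made up).
openssl genpkey -algorithm RSA -out vendor.key 2>/dev/null
openssl pkey -in vendor.key -pubout -out vendor.pub
echo "pretend this is a bootloader" > bootx64.efi
openssl dgst -sha256 -sign vendor.key -out bootx64.sig bootx64.efi
verdict=$(openssl dgst -sha256 -verify vendor.pub -signature bootx64.sig bootx64.efi)
echo "$verdict"
rm -f vendor.key vendor.pub bootx64.efi bootx64.sig
```

Flip one byte of the ‘bootloader’ after signing and the verify step fails; that’s the whole trick, applied by the firmware to boot code instead of by openssl to a text file.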

Secure Boot, as defined in the UEFI spec, says nothing at all about what the keys the firmware trusts should be, or where they should come from. I’m not going to go into all the goddamn details, because it gets stultifyingly boring and this post is too long already. But the executive summary is that the spec is utterly and entirely about defining a mechanism for doing cryptographic verification of a boot chain. It does not really even consider any kind of icky questions about the policy for doing so. It does nothing evil. It is as flexible as it could reasonably be, and takes care to allow for all the mechanisms involved to be configurable at multiple levels. The word ‘Microsoft’ is not mentioned. It is not in any way, shape, or form a secret agenda for Microsoft’s domination of the world. If you doubt this, at the very bloody least, go and read it. I’ve given you all the necessary pointers. There is literally not a single legitimate reason I can think of for anyone to be angry with the idea “hey, it’d be neat if there was a mechanism for optional cryptographic verification of bootloader code in this firmware specification”. None. Not one.

Secure Boot in the real world

Most of the unhappiness about Secure Boot is not really about Secure Boot the mechanism – whether the people expressing that unhappiness think it is or not – but about specific implementations of Secure Boot in the real world.

The only one we really care about is Secure Boot as it’s implemented on PCs shipped with Microsoft Windows 8 or higher pre-installed.

Microsoft has these things called the Windows Hardware Certification Requirements. There they are. They are not Top Secret, Eyes Only, You Will Be Fed To Bill Gates’ Sharks After Reading – they’re right there on the Internet for anyone to read.

If you want to get cheap volume licenses of Windows from Microsoft to pre-install on your computers and have a nice “reassuring” ‘Microsoft Approved!’ sticker or whatever on the case, you have to comply with these requirements. That’s all the force they have: they are not actually a part of the law of the United States or any other country, whatever some people seem to believe. Bill Gates cannot feed you to his sharks if you sell a PC that doesn’t comply with these requirements, so long as you don’t want a cheap copy of Windows to pre-install and a nice sticker. There is literally no requirement for a PC sold outside the Microsoft licensing program to configure Secure Boot in any particular way, or include Secure Boot at all. A PC that claims to have a UEFI 2.2 or later compliant firmware must implement Secure Boot, but can ship with it configured in literally absolutely any way it pleases (including turned off).

If you’re going to have very loud opinions about Secure Boot, you have zero excuse for not going and reading the Microsoft certification requirements. Right now. I’ll wait. You can search for “Secure Boot” to get to the relevant bit. It starts at “System.Fundamentals.Firmware.UEFISecureBoot”.

You should read it. But here is a summary of what it says.

Computers complying with the requirements must:

Ship with Secure Boot turned on (except for servers)

Have Microsoft’s key in the list of keys they trust

Disable BIOS compatibility mode when Secure Boot is enabled (actually the UEFI spec requires this too, if I read it correctly)

Support signature blacklisting

x86 computers complying with the requirements must additionally:

Allow a physically present person to disable Secure Boot

Allow a physically present person to enable Custom Mode, and modify the list of keys the firmware trusts

ARM computers complying with the requirements must additionally:

NOT allow a physically present person to disable Secure Boot

NOT allow a physically present person to enable Custom Mode, and modify the list of keys the firmware trusts

Yes. You read that correctly. The Microsoft certification requirements, for x86 machines, explicitly require implementers to give a physically present user complete control over Secure Boot – turn it off, or completely control the list of keys it trusts. Another important note here is that while the certification requirements state that the out-of-the-box list of trusted keys must include Microsoft’s key, they don’t say, for e.g., that it must not include any other keys. The requirements explicitly and intentionally allow for the system to ship with any number of other trusted keys, too.

These requirements aren’t present entirely out of the goodness of Microsoft’s heart, or anything – they’re present in large part because other people explained to Microsoft that if they weren’t present, it’d have a hell of a lawsuit on its hands[2] – but they are present. Anyone who actually understands UEFI and Secure Boot cannot possibly read the requirements any other way, they are extremely clear and unambiguous. They both clearly intend to and succeed in ensuring the owner of a certified system has complete control over Secure Boot.

If you have an x86 system that claims to be Windows certified but does not allow you to disable Secure Boot, it is in direct violation of the certification requirements, and you should certainly complain very loudly to someone. If a lot of these systems exist then we clearly have a problem and it might be time for that giant lawsuit, but so far I’m not aware of this being the case. All the x86-based, Windows-certified systems I’ve seen have had the ‘disable Secure Boot’ option in their firmwares.

Now, for ARM machines, the requirements are significantly more evil: they state exactly the opposite, that it must not be possible to disable Secure Boot and it must not be possible for the system owner to change the trusted keys. This is bad and wrong. It makes Microsoft-certified ARM systems into a closed shop. But it’s worth noting it’s no more bad or wrong than most other major ARM platforms. Apple locks down the bootloader on all iDevices, and most Android devices also ship with locked bootloaders.

If you’re planning to buy a Microsoft-certified ARM device, be aware of this, and be aware that you will not be in control of what you can boot on it. If you don’t like this, don’t buy one. But also don’t buy an iDevice, or an Android device with a locked bootloader (you can buy Android devices with unlocked or unlockable bootloaders, still, but you have to do your research).

As far as x86 devices go, though, right now, Microsoft’s certification requirements actually explicitly protect your right to determine what can boot on your system. This is good.

Recommendations

The following are AdamW’s General Recommendations On Managing System Boot, offered with absolutely no guarantees of accuracy, purity or safety.

If you can possibly manage it, have one OS per computer. If you need more than one OS, buy more computers, or use virtualization. If you can do this everything is very simple and it doesn’t much matter if you have BIOS or UEFI firmware, or use UEFI-native or BIOS-compatible boot on a UEFI system. Everything will be nice and easy and work. You will whistle as you work, and be kind to children and small animals. All will be sweetness and light. Really, do this.

If you absolutely must have more than one OS per computer, at least have one OS per disk. If you’re reasonably comfortable with how BIOS-style booting works and you don’t think you need Secure Boot, it’s pretty reasonable to use BIOS-compatible booting rather than UEFI-style booting in this situation on a UEFI-capable system. You’ll probably have less pain to deal with and you won’t really lose anything. With one OS per disk you can also mix UEFI-native and BIOS-compatible installations.

If you absolutely insist on having more than one OS per disk, understand everything written on this page, understand that you are making your life much more painful than it needs to be, lay in good stocks of painkillers and gin, and don’t go yelling at your OS vendor, whatever breaks. Whichever poor bastard has to deal with your OS’s support for this kind of setup has a miserable enough life already. And for the love of cookies, don’t mix UEFI-native and BIOS-compatible OS installations, you have enough pain to deal with already.

If you’re using UEFI native booting, and you don’t tend to build your own kernels or kernel modules or use the NVIDIA or ATI proprietary drivers on Linux, you might want to leave Secure Boot on. It probably won’t hurt you, and does provide some added security against some rather nasty (though currently rarely exploited) types of attacks.

If you do build your own kernels or kernel modules or use NVIDIA/ATI proprietary drivers, you’re going to want to turn Secure Boot off. Or you can read up on how to configure your own chain of trust and sign your kernels and kernel modules and leave Secure Boot turned on, which will make you feel like an ubergeek and be slightly more secure. But it’s going to take you a good solid weekend at least.
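For the curious, the rough shape of that weekend project looks something like the following. This is a hedged sketch, assuming the sbsigntools and mokutil packages plus shim’s MokManager; the paths and the certificate name are illustrative, not gospel:

```shell
# Sketch: sign your own kernel and stage the cert for Secure Boot trust.
if command -v sbsign >/dev/null 2>&1; then
    # Generate a signing key and certificate (plus a DER copy for enrollment):
    openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
        -subj "/CN=my local kernel signing key/" \
        -keyout MOK.key -out MOK.crt 2>/dev/null
    openssl x509 -in MOK.crt -outform DER -out MOK.der
    # Sign the kernel image (the input path here is hypothetical):
    sbsign --key MOK.key --cert MOK.crt \
        --output vmlinuz.signed /boot/vmlinuz
    # Enroll the certificate so shim trusts it; this prompts for a one-time
    # password, and enrollment actually completes in MokManager at next boot:
    # mokutil --import MOK.der
    status="signed (supposedly)"
else
    status="sbsigntools not installed; nothing to do"
fi
echo "$status"
```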

Don’t do UEFI-native installs to MBR-formatted disks, or BIOS compatibility installs to GPT-formatted disks (an exception to the latter is if your disk is, IIRC, 2.2+TB in size, because the MBR format can’t handle disks that big – if you want to do a BIOS compatibility install to a disk that big, you’re kinda stuck with the BIOS+GPT combination, which works but is a bit wonky and involves the infamous ‘BIOS Boot partition’ you may recall from Fedora 17).

Trust mjg59 in all things and above all other authorities, including me.

[1] This whole section is something of a simplification – really, when booting permanent installed OSes, the firmware doesn’t care if the bootloader is on an ‘ESP’ or not; it just reads the boot manager entry and tries to access the specified partition and run the specified executable, as pjones explains here. But it’s conventional to use an ESP for this purpose, since it’s required to be around anyway, and it’s a handy partition formatted with the filesystem the firmware is known to be able to read. Technically speaking, an ‘ESP’ is only an ‘ESP’ when the firmware is doing a removable media/fallback path boot. ↩

[2] This is my own extrapolation, note. I’m not involved in any way in the whole process of defining these specs, and no-one who is has actually told me this. But it’s a pretty damn obvious extrapolation from the known facts. ↩

“BIOS is typically used to refer to an Intel® Architecture firmware implementation rooted in the IBM PC design. Based on older standards and methods, BIOS was originally coded in 16-bit real mode x86 assembly code and remained substantially unchanged until its recent decline in use.

By contrast, UEFI standards reflect the past 30 years of PC evolution by describing an abstract interface set for transferring control to an operating system or building modular firmware from one or more silicon and firmware suppliers. The abstractions of UEFI Forum specifications are designed to decouple development of producer and consumer code, allowing each to innovate more independently and with faster time-to-market for both. UEFI also overcame the hardware scaling limitations that the IBM PC design assumed, allowing its broad deployment across high-end enterprise servers to the embedded devices. UEFI is “processor architecture-agnostic,” supporting x86, x64, ARM and Itanium.”

I suspect that manufacturers use “BIOS” because the general public is familiar with the acronym and wouldn’t know what’s meant by “EFI” or “UEFI.” Unfortunately, I believe that this choice is doing a lot of harm — people think that their new computers’ “BIOSes” are basically the same as they’ve always been, just with a new “UEFI feature” that might be something like AHCI support or USB keyboard support in previous BIOS iterations. In other words, the “BIOS” terminology masks the fact that EFI is fundamentally different from BIOS, as Adam has gone to great lengths to explain on this page.

Wow, thanks a lot for nailing *exactly* the problem with calling it a BIOS, far more clearly than I did! That’s precisely it: calling it a ‘BIOS’ is yet another ‘confusion vector’. I mean, if you think of your UEFI firmware as ‘the BIOS’, how exactly is your brain going to process the concept that your BIOS has a BIOS compatibility mode? It’s not going to end well. 🙂

I first came across UEFI in November 2010 when I decided to build a new home server. I purchased an ASRock E350-M1 EPIA motherboard and was oblivious to the fact it had UEFI firmware.

Back at that time there were few GNU/Linux distributions that could actually complete a native UEFI install. Both Fedora and OpenSUSE could boot DVD media in UEFI mode but the installations always failed. I actually ended up using ArchLinux to manage the UEFI components and ran everything else in KVM virtual machines under that.

Of course things are much different now and UEFI distribution installs are often successful. I was just pleased to note, after reading this excellent tome, that I had somehow managed to implement UEFI boot correctly using my manual process. Actually there were a cartload of other technologies I implemented at the same time (e.g. unified login using OpenLDAP and winbind etc.) which have all become mainstream after I did them the hard way.

The one thing I have learned from this experience, which you pointed out, is to always use the definitive reference material for a particular technology. Guides and HOWTOs found on the Internet are always a source of either inaccurate information or information based on experimentation and deduction rather than hard fact.

I’ve had some headaches at work trying to fix broken machines running Win8 and had to wrestle with UEFI for a bit before I got the hang of it. This post makes it a bit clearer what exactly I’m wrestling with.

Win8, UEFI and Secure Boot etc seem to work perfectly fine when everything is working correctly, but can become a bit of a nightmare (at the start) when you’re trying to do something so simple as boot to a live cd or even safe mode when the stupid thing won’t start. Most people hate Win8 for its Metro UI – I hate it because I can’t get into safe mode without pleading with it first.

So effectively what uefi is, is ramming two more levels of complicated bootloader code down into the board’s soldered on flashrom, and requiring that those two complicated bootloaders follow a set of poorly described specifications.

I really don’t see how this makes anything easier than bios, particularly since a good SSD is a heck of a lot faster to read than a crappy flashrom.

Now for my story;
I have a laptop that supports some form of EFI/UEFI. Up to Fedora 17, I was able to pass the installer’s kernel a parameter or two (might have been something like noefi or nogpt) to force the installer to work in legacy bios mode, and that would work. It would set up the standard partition format boot/root/home/swap, and the firmware would happily boot in bios compatibility mode.

So a couple of days ago, I found out that the F20 installer absolutely does not support this any more. So I tried wiping the disks and letting it do it the way it wanted to, and a very funny thing happened: it would only actually boot about 50% of the time. Completely unacceptable. So I pulled the disk and shoved it into another machine that has a most-definitely-not-[U]EFI firmware and did the install there before transferring back to the laptop. That works 100%.

It would seem that Fedora is using [U]EFI to load some weird kind of shim that loads GRUB, effectively using EFI *as* a BIOS.

Either use it correctly, or don’t use it at all. If working as [U]EFI, grub really shouldn’t exist at all.

“So effectively what uefi is, is ramming two more levels of complicated bootloader code down into the board’s soldered on flashrom”

Two more? No, it’s only one level. The implementation isn’t really soldered on anything – any given firmware’s implementation can be adjusted / fixed with a firmware update, like any other element of the firmware. The *configuration* of the UEFI boot manager is done via NVRAM variables (that’s what efibootmgr is actually twiddling).
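To make the NVRAM-twiddling concrete, here’s a sketch of what those variables encode, parsed from output in the shape `efibootmgr` prints. The sample text, labels and entry numbers are stand-ins, since this may not be running on a UEFI system; on a real one you’d start from `efibootmgr -v`:

```shell
#!/bin/sh
# Stand-in for real 'efibootmgr' output (labels/entry numbers illustrative).
sample='BootCurrent: 0000
BootOrder: 0002,0000,0001
Boot0000* Fedora
Boot0001* Windows Boot Manager
Boot0002* UEFI: Built-in Shell'

# BootOrder is just a comma-separated list of BootNNNN entry numbers;
# the firmware tries each one in turn until something boots.
order=$(printf '%s\n' "$sample" | sed -n 's/^BootOrder: //p')
echo "boot order: $order"

# The first entry in BootOrder is the default boot target.
first=$(printf '%s' "$order" | cut -d, -f1)
printf '%s\n' "$sample" | grep "^Boot${first}"

# Changing the configuration on a real system would be e.g.:
#   efibootmgr -o 0000,0001,0002    # permanent reorder
#   efibootmgr -n 0001              # BootNext: one-shot override
```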

“I really don’t see how this makes anything easier than bios, particularly since a good SSD is a heck of a lot faster to read than a crappy flashrom.”

You’re not going to notice a speed difference in loading something like a couple of KiB of configuration. The reason it *potentially* makes it better than BIOS land is that the UEFI level is a much more sensible level for ‘boot target selection’ to take place than ‘in the first few bytes of whichever disk you’re booting from’, a concept which comes with massively obvious layering violations. There are about fifty different ways to do boot target selection with MBR-based booting, none of which is compatible with any of the others, and all of which will happily fight with each other like cats in a sack if you don’t know exactly what you’re doing as you install all your OSes.

“Now for my story; […] ”

You’re misunderstanding quite a few things there, but the headline is that ‘noefi’ isn’t a Fedora installer parameter. It’s a Linux kernel parameter. It’s kind of a problematic thing to do, really, because you’re essentially doing a UEFI native boot and then pretending you didn’t. I’d recommend avoiding it. What you really want to do in your case is instruct your firmware to do a BIOS compatibility mode boot of your install media, not a UEFI native boot. The complications of this are discussed in the post. If your firmware absolutely does not allow you to do this, and you can’t use efibootmgr or similar to do it, what you can do is write your install medium in such a way that it’s not EFI bootable – doesn’t contain an ESP – and then the firmware will usually ‘automatically’ boot it in BIOS compatibility mode when you ask the firmware to boot it. To do this with a Fedora image when writing to USB, for instance, use livecd-iso-to-disk and do *not* pass --efi.
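As a side note, you can sanity-check what you’ve written: in MBR partitioning an ESP is marked with partition type 0xEF, and the type byte of the first partition entry sits at offset 450 of the first sector. A toy sketch of that check, using a scratch file as a stand-in for a real disk or stick (GPT disks record the ESP type as a GUID instead, so they’d need a fuller check):

```shell
#!/bin/sh
# Build a toy 512-byte 'MBR' and mark partition 1 as an ESP (type 0xEF).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# Partition entry 1 starts at offset 446; its type byte is at 446+4=450.
printf '\357' | dd of="$img" bs=1 seek=450 conv=notrunc 2>/dev/null   # 0xEF

type=$(od -An -tx1 -j450 -N1 "$img" | tr -d ' ')
if [ "$type" = "ef" ]; then
    echo "ESP present: firmware will likely try a UEFI-native boot"
else
    echo "no ESP: firmware should fall back to BIOS compatibility boot"
fi
rm -f "$img"
```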

When doing a UEFI native install, Fedora uses grub2-efi as its OS-specific bootloader. You *can* do ‘direct’ boot of a Linux kernel using UEFI – i.e. have your Linux kernel be an EFI executable and write a UEFI boot manager entry which points to that kernel, so you booted direct from the firmware layer to the OS kernel. (Really, the kernel contains a ‘stub’ EFI bootloader which does the job of loading the kernel – it’s much like having grub2-efi, just that the bits are all baked into the kernel). This is the ‘UEFI stubs’ thing a later commenter mentions. But it’s not very common to do this, and it’s really not an Inarguably Better approach than having an EFI bootloader between the firmware and the OS kernel.
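A quick way to see the ‘kernel as EFI executable’ idea: an EFI-stub kernel is a PE/COFF binary, so it starts with the DOS ‘MZ’ magic that the firmware’s loader looks for. A toy check using a scratch file (a real check would read /boot/vmlinuz-<version>, and registering it would be something like `efibootmgr -c ... -l '\vmlinuz' -u 'root=... initrd=...'` – paths and arguments illustrative):

```shell
#!/bin/sh
# The firmware will only load a kernel directly if it is a PE/COFF
# executable, i.e. it begins with the 'MZ' magic bytes.
img=$(mktemp)
printf 'MZ' > "$img"            # fake header; only the first two bytes matter here
magic=$(head -c2 "$img")
if [ "$magic" = "MZ" ]; then
    verdict="PE/COFF header found: loadable directly by UEFI firmware"
else
    verdict="no MZ magic: firmware cannot load this directly"
fi
echo "$verdict"
rm -f "$img"
```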

I’m not actually misunderstanding. Regardless of which level the parameter affected, the end result was something that *worked*. “that” of course, being past-tense.

Yes, obviously I could build my own install disk, but for an end user to have to do that is really pushing too far, and in my case, far far more complicated than what I did to solve the issue. Like you’ve mentioned, the installer really has to be able to deal with all the corner cases of inappropriate behavior.

Also, yes, I do mean two extra levels of bootloader. MBR and whatever actually does the multiboot.

I am not disagreeing about there being some interesting aspects of UEFI, but it is quite a major problem that all of the implementations of it are proprietary. The less proprietary code I have to depend on, the better, even if the end result is marginally less efficient. My laptop is a perfect illustration of this.

You can find all kinds of ways of doing things that work, and are bad. Sometimes those ways stop working, and no-one wants to fix them, because they’re bad. This may annoy you, but it’s not wrong.

If you want to do a BIOS compatibility mode install of your OS, boot its installation medium in BIOS compatibility mode. Really. That’s the *right way* to do it. Your firmware should make this possible relatively easily, and if it doesn’t, this post covers all kinds of things that should help you achieve it. Doing a UEFI native boot of your install media and then attempting to fool it into thinking you didn’t boot in UEFI native mode is really a messy way to go about it. If noefi really isn’t working for you any more that’s some kind of bug somewhere, sure, but I don’t develop an immediate urge to investigate that and fix it, because it’s just not the mechanism you really want to be using for what you’re trying to achieve.

“Also, yes, I do mean two extra levels of bootloader. MBR and whatever actually does the multiboot.”

What?

In the BIOS/MBR world you have some very simple logic in the firmware layer, and very complex bootloaders in MBRs.
In the UEFI world you have more complex logic in the firmware layer – but really it’s still just executing bootloader code it finds on the hard disk, there’s just more potential to configure this – and, ideally, somewhat simpler bootloaders on EFI system partitions.

Neither system viewed as a whole is inherently more complex. All the complexity that exists in the UEFI layers I describe in this post *also exists in the BIOS/MBR world*, often duplicated many, many times between different implementations, and with no standardization.

Have you read the modifications to the page I’ve made over the last day or so? They may explain this more clearly.

As it happens, someone else complained about noefi not working, and I was poking about in that code today anyway, so I went and looked at it.

Turns out the kernel’s behaviour with ‘noefi’ changed in a way which broke anaconda’s check for UEFI in this case (UEFI-native boot, but ‘noefi’ passed on the cmdline). This kind of thing is exactly why I’d suggest not relying on noefi. 🙂

Bit of a necro revival here, but I’m installing RHEL 7.1 and booting anaconda with tboot (1.8.2). It seems that tboot requires ‘noefi’ to be on the cmdline or the kernel panics. Unfortunately, I can’t find much information on why that is. Of course adding ‘noefi’ also causes anaconda to break because, as you said, it’s a native UEFI boot and so anaconda is trying to use grub2-efi, which I guess fails when EFI runtime services are disabled. Does that all sound correct? Any suggestions here? What’s the correct solution in this type of case?

I’m thinking the code issue you called out in the linked bugreport is not the problem. I have not tried supplying the updates image you linked, however I have modified that exact code manually inside the squashfs image and, based on the stack trace I’m seeing, it’s actually grub2 that’s failing – specifically, when pyanaconda tries to execute “grub2-install --no-floppy /dev/sda”, because I’m getting: “/usr/lib/grub/x86_64-efi/modinfo.sh doesn’t exist.” At least, that’s the error I get when manually running the command that I think pyanaconda is executing.

Also, per your question from that bug report: “What would be the actual use case for the behaviour you describe? (Genuine question, not snark – I wish to know). Who would want to boot in UEFI native mode, pass ‘noefi’, and have the installer do what you suggest?”

The case that I have is a necessity to boot in UEFI mode because of 4k sector drives, and needing to use tboot (apparently requires ‘noefi’), and yet anaconda fails. Maybe this is a use case you were wondering about.

It’s an *interesting* idea. I haven’t played with it enough myself to tell you whether it’s a good one or not. I guess I’d say that in my experience all the really tricky problems and misunderstandings people have with UEFI happen at the firmware layer; I haven’t really seen people have much trouble at the EFI bootloader layer. So I don’t think using an EFI stub kernel boot approach is something that would make the lives of most people struggling with UEFI any easier. But it’s sure a technically interesting approach, and if you feel like playing with it, have fun.

There are system-to-system differences in how specific boot loaders work. A couple of years ago, GRUB 2 was hideously unreliable, in my experience; it would often fail to boot kernels, would hang, and would otherwise misbehave. It’s settled down a lot recently and tends to be much more reliable these days, but it’s still the most ungainly and difficult-to-configure boot loader I’ve ever seen. It has the benefit of being worked on by many smart people so that it works reasonably well “out of the box” on MOST people’s systems; but in those cases when it doesn’t work, GRUB becomes a nightmare.

The EFI stub loader is one of these options, and in my experience it’s quite reliable; however, some people on the Arch Linux forum have reported sporadic difficulties with it with 3.7 and later kernels. I have yet to see a single report of such problems on other distributions, though, so I suspect that there’s something odd about the way the Arch kernels are being built that’s contributing to those problems.

A huge “thanks.rpm” for filtering & accumulating just what end users need to know into a good working knowledge.

The main things to remember for a prospective computing device buyer nowadays:
1. “… it’s entirely possible that in a UEFI world things will be even worse than they were in the BIOS world … … you should be angry at is your system/motherboard manufacturer …”.

BIOS compatibility mode really isn’t relevant to the ARM world. No-one’s ever shipped an ARM system with a BIOS. I don’t think any ARM UEFI firmwares implement BIOS compatibility at all, it would be a ridiculous thing to do.

The evil thing about the Windows ARM case is that the Windows certification requirements *for ARM* specifically state that the user must not be able to disable or re-configure Secure Boot – precisely the opposite of the requirements *for Windows*, which state that the user *must* be able to do those things. Understood from a ‘mobile world’ perspective, where people are comfortable with the concept of a locked bootloader, what that effectively does is enshrine bootloader locking on *ARM* Windows devices as a part of the platform definition. To be a Microsoft-certified ARM Windows device, the bootloader must be locked. As I said, that is indeed bad, though only exactly *as* bad as Apple.

Nothing should be written about UEFI other than that it is a constraining and complicated solution, compared to projects like coreboot (just give us the specs and docs please, thank you)… This does nothing to help motherboard manufacturers that will continue to produce crap BIOSes and boards.

Thanks for the condescension. I have a degree in history including several economic history modules. I am comfortable with the concept of a cartel.

Intel designed EFI as a technical solution to a range of technical problems. BIOS is not a good standard. It isn’t even a standard, in the first place, just a convention. It has its own huge set of problems and limitations which I didn’t go into in this article because it was out of scope. There is a genuine need on a technical level for a replacement for the BIOS, and UEFI is what we wound up with. Its imperfections are cock-ups, not conspiracies.

All the bits in the post where I complain about how the UEFI spec isn’t sufficiently prescriptive? Well, that goes in spades for BIOS because *there is no BIOS spec*. All you have to go on in writing a BIOS is ‘make it behave like other BIOSes’. This is ridiculous, of course. There are certainly BIOS implementations as idiotic as any UEFI implementation.

By the logic that “Bios doesn’t need a spec,” it should be fine to write a BIOS that loads the boot loader code from the LAST sector on the disk, or the SECOND sector on the disk, rather than from the first sector on the disk. This would work, of course, if boot loader code were found there, but it would be completely incompatible with every disk that holds a BIOS boot loader. That’s just one example of the need for standards in the BIOS arena; the BIOS includes dozens of system calls that are documented in an ad-hoc manner.

“Space for the MBR” is a bit wrong. The MBR is always exactly 512 bytes, the bootstrap code 440 bytes (rest is the partition table). The space before first partition is not part of the MBR. (Which just makes it even worse to assume that it’ll be available…)

On the other hand, not all bootloaders use that space. For example, syslinux does not – its MBR code jumps directly to a file in the boot partition.
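For reference, the arithmetic behind that correction (a sketch; the 440-byte code area plus the signature/padding bytes is why some tools talk about 440 bytes of bootstrap and others about 446):

```shell
#!/bin/sh
# Byte layout of a 512-byte MBR sector:
code=440            # bootstrap code area
disksig=4           # optional 32-bit disk signature
pad=2               # usually zero (sometimes a copy-protect flag)
table=$((4 * 16))   # four 16-byte primary partition entries
bootsig=2           # 0x55 0xAA boot signature
total=$((code + disksig + pad + table + bootsig))
echo "$total"       # prints 512
```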

Oh, you’re right, I do believe – we usually call it the ‘bootloader embedding space’ or something like that, right? I’ll have to refresh my memory on that and come back to it tomorrow, I’m just going off my memory of all the ‘fun’ we had with that crap several releases back. Thanks for the note.

Today, though, not all disks use 512-byte sectors. I don’t know offhand how a BIOS treats such disks from a boot perspective, although I’d wager that a lot of true BIOSes can’t boot from such disks at all.

Back when we tried doing GPT by default in Fedora 16 (IIRC), I think we found that only disks larger than 2TB were using 4K logical sectors. I *think* this still holds true today – smaller disks use 512B logical sectors, i.e. they present a 512B sector size to the system even if they really use 4K sectors internally (4K physical sectors).

Lots of USB enclosures these days seem to be using 4KiB logical sectors, no matter what the disk’s size. At least, I’ve seen problem reports in online forums related to this. For instance, some enclosures translate to 4KiB logical sectors when using USB interfaces, but present the drive’s native (usually 512-byte) logical sector size when connected via eSATA. Of course, that’s a recipe for disaster!

* You strongly discourage multibooting from the same disk. Could you shed some light on this quote in particular: “understand that you are making your life much more painful than it needs to be, [..], and don’t go yelling at your OS vendor, whatever breaks.”
From what I read in your essay, your main beef with implementations of uefi is that motherboards may _display_ the boot-choice badly or not at all. Apart from this UI issue, is there anything else that’s painful? And what else is likely/known to break?

Your text gave me the (possibly wrong) impression that you could do everything with efibootmgr, and with that, bypassing whatever downsides the motherboard UI has.

* This leads me to a dreamy crazy question: is it imaginable that we’ll see a grub (or something that looks like grub but isn’t) that makes heavy use of the same mechanisms efibootmgr uses? I could imagine this having its own entry in the efi list, and being the default boot entry. The displayed list would just show the efi entries, selecting an entry would use the nextboot efi feature, and a timeout would nextboot a custom default (not changing the efi default, which would keep pointing to this grub-like thing).

Letting my mind roam freely, this could bypass the problem of motherboards not displaying the list in the same fashion, and OSes would be able to rely on it being there and know how it presents choice to the user, making it not every individual OS’s headache. (Also gone would be the days of OSes overwriting each other’s bootmanagers.) Any installed OS would just need to add itself to the efi boot list. Done. (And add such a grub-like thing if there isn’t one yet, and make it the default.)

With a few naming conventions, it could even add temporary boot entries (like for one-time modification of kernel params) which it would delete again at the next opportunity.

The one obvious downside would be that you’re always rebooting at least once: the first boot going into this grub-like thing, and the second into the OS you pick in it. But I’d accept that for all the advantages I can think of 😉

“Apart from this UI issue, is there anything else that’s painful? And what else is likely/known to break?”

If you manage to get a given multiboot setup working, and don’t go poking it with any sticks, it *ought* to keep working. It’s mainly the deployment time where people have trouble, and then understanding what actually constitutes their boot config (and hence should not be poked with sticks) after that. I mean, it can certainly work; I just see so many people struggling with it and misunderstanding stuff that I get concerned. That section was slightly tongue-in-cheek, but with a serious point: it really is easier if you can just stick to an OS per machine or per disk if you can. It may just be part of my general inclination after years of fiddling with PCs and trying to help other people fiddle with them: I really, really believe in the ‘choice is an excellent way to shoot yourself in the foot’ argument, and try to keep my setups as simple as possible. There’s enough damn complexity in dealing with computers without you going out and voluntarily adding more on top, IMO 🙂

“Your text gave me the (possibly wrong) impression that you could do everything with efibootmgr, and with that, bypassing whatever downsides the motherboard UI has.”

Yes, this is basically true – *as long as you make sure you can get into a UEFI OS to poke efibootmgr*, of course 🙂 If you take the time to learn efibootmgr, and you make sure you always have a handy way of running it, yes, it’s a very handy get-out-of-jail-free card to have around.

People have certainly written things that are meant to sit at the UEFI bootloader level and intermediate between you and all this craziness. I have to admit that I tend to view them as yet another layer of craziness ;), but some people prefer to take the approach of picking one and making it their primary interface to the whole shebang. Rod’s rEFInd – http://www.rodsbooks.com/efi-bootloaders/refind.html – is one of these, and he has a general page on UEFI bootloaders at http://www.rodsbooks.com/efi-bootloaders/ .

Thanks for the rodsbooks.com links. Unfortunately those bootmanagers/loaders (refind, gummiboot, ..) are, as you say, adding “yet another layer of craziness”. In short, they seem to mostly leave available features aside and instead focus on what EFI requires an OS to conform to, and make use of that. And they’re adding some more requirements on top of that (like naming schemes, a certain minimum kernel version, or even limitations to certain OSes).

I can only assume that that wouldn’t always work, but I’d love to hear/read from someone who knows (like Rod).

I fear that as long as there’s no bootmanager that works with the motherboard’s efi boot entries (reading from them, possibly also with the ability to add/modify/delete/backup/restore entries), and that can boot every entry that can be booted from the motherboard’s efi bootmanager, OSes cannot rely on available features. Because of that, they will always bring along their own bootmanager/loader, which confuses things and adds complexity by only working with their own OS and perhaps 2-3 others, and which will possibly also mess with what other OSes put in place.

If there was such a bootmanager (and it was free and opensource ;)), each OS would only have to care about adding a working EFI entry for itself. Of course the OS installer could (and should) also offer to optionally install that bootmanager during installation (and not in a grub-way where they roll their own version with adjustments for their OS).

I know it’s a lot of wishful thinking 😉 I’d love to read about that topic from someone who knows this stuff though 😉

A boot manager that presents a menu based on what’s in the firmware’s NVRAM is certainly do-able. I’ve toyed with the idea of adding such a mode of operation to rEFInd, but I haven’t done so because I don’t believe it would add anything to what rEFInd already does, and in many ways it would in fact be limiting. For instance, if you use rEFInd to boot Linux kernels directly, the approach you suggest would require using efibootmgr every time you add a kernel. Since no distribution does this, it would be an extra manual step for the user — in other words, it would DEGRADE functionality.

Another problem with this approach — and a flaw with EFI generally — is that placing boot loaders on the hard disk and information about them in NVRAM creates two points of failure in the process of launching boot loaders. This is a practical real-world problem — I’ve both heard of and personally encountered systems that “forget” their boot loader entries on a regular basis. On such systems, the only practical solution is to use the fallback boot loader (EFI/BOOT/bootx64.efi), and if it were to rely on the NVRAM entries, it would fail miserably. Even systems that don’t USUALLY forget their NVRAM entries can do so occasionally. One of my computers does so if a hard disk is temporarily unplugged, for instance. There can also be OS-driven bugs. Some versions of shim can create an ever-expanding boot list, for instance, and this would be a nightmare for a boot loader following your suggested design.

That’s not to say that such an approach doesn’t have its merits, and might not appeal to some people — it does have merits, like being configurable from any OS using the standard EFI mechanisms. IMHO, though, it’s not preferable to the auto-scanning that rEFInd does, at least not in general. I’ll consider adding it as an option to rEFInd; in fact, in writing this reply, I’m contemplating ways that the NVRAM-based data might be integrated with the auto-scanned data in ways that might be useful.

Some additional comments on your post, and on Adam’s response to it:

* My own “main beef” with EFI is with the number of bugs found in the EFI world. The poor boot manager menus that some implementations present are an issue, and they represent a good argument for using rEFInd, gummiboot, GRUB, or some other boot manager. (Note that rEFInd and gummiboot are both add-on boot managers only, whereas GRUB does double duty as both a boot manager and a boot loader.) Other bugs, like EFIs that forget NVRAM entries, are less common but often much more painful than poor user interfaces.
* The type of boot manager you describe would NOT require rebooting to boot another OS; it could, like rEFInd, gummiboot, or even GRUB, boot the next boot loader directly. (Yes, GRUB can chainload to another EFI program.)
* IMO, relying on efibootmgr and the firmware’s built-in boot manager is not a good approach to managing a dual-boot computer, except possibly in those rare cases when the built-in boot manager is really good. The built-in boot managers are usually extremely limited in what they can do. Most notably, most of them require hitting a key (which varies from one computer to another, no less!) at JUST the right time in the boot process to boot anything but the default OS. This sort of design has always struck me as brain-dead, and because of system-to-system differences, documenting it is a nightmare. Fedora’s philosophy at one point was to rely on the built-in EFI boot manager for multi-booting, but if I’m not mistaken, the Fedora developers have come to their senses on this one.
* I understand the view of rEFInd or gummiboot as being “another layer of craziness,” but I believe it’s misplaced. Recall the point I made that GRUB is both a boot loader and a boot manager. Thus, when dual-booting Linux and Windows, the boot path for Linux is likely to be EFI->GRUB->Linux kernel; and when booting Windows, it will be either EFI->Windows boot loader->Windows kernel or EFI->GRUB->Windows boot loader->Windows kernel, depending on whether the installation relies on the EFI boot manager (bad choice) or GRUB (better choice) to select which OS to boot. The boot path with rEFInd or gummiboot will look EXACTLY THE SAME, except that you replace GRUB with rEFInd or gummiboot, and the details of how the Linux kernel launches are different. rEFInd CAN launch Linux via GRUB, but GRUB can also launch Linux via rEFInd. Thus, when analyzed fully and configured optimally, neither rEFInd nor gummiboot is “another layer of craziness”; they’re both alternatives to GRUB, at least functionally. Yes, rEFInd and gummiboot are both boot managers but not boot loaders; but with a boot loader built into 3.3.0 and later kernels, you don’t really need a separate boot loader program to launch Linux.
* You mentioned the minimum kernel version as a limitation of rEFInd and gummiboot. It’s true that both rely on the EFI stub loader, which was added with the 3.3.0 kernel. Not many distributions ship with kernels older than that at this point, though. Thus, this is no longer really a practical concern.
* Compared to GRUB, file naming is NOT a limitation of rEFInd; both boot programs can be configured with manual boot stanzas, which can launch kernels and boot loaders with any filename. The difference is that rEFInd can auto-detect boot loaders (including Linux kernels) in certain locations and with certain filenames. This is an ADDITIONAL FEATURE of rEFInd compared to GRUB. Thus, when comparing the two, it’s GRUB that has the deficit, not rEFInd!

Really, my intent in designing rEFInd (or expanding rEFIt’s OS-scanning features; it’s only fair to give credit to Christoph Pfisterer, rEFIt’s creator, for designing the basic framework) is the same as what you want, at a broad level: To launch any OS’s boot loader with little or no extra configuration. rEFInd does this without relying on the NVRAM entries because they’re fallible, but rEFInd scans the locations where boot loaders are supposed to go, plus some other locations where they (especially Linux kernels) are commonly found in practice. People who use rEFInd tell me that it finds boot loaders and Linux kernels quite reliably; most of the problem reports I see relate to EFI bugs or difficulties installing rEFInd in exotic setups, not to the boot loader scanning features.

At Kubuntu Forums (where I’m an admin), I have documented my UEFI experiences over time. I have come to appreciate its superiority over BIOS. Initially, I had preferred booting the kernel directly via NVRAM variables until I discovered rEFInd. I concur with everything Rod writes about how rEFInd simplifies managing multi-boot. On one system I have set up rEFInd to boot Kubuntu, openSUSE, Arch, and Windows 8.

Actually, “set up” implies too much work — the only default I changed was to use text mode boot rather than graphical. Otherwise, rEFInd is a cinch because its autodetection is so good. I no longer need to mess with NVRAM variables because rEFInd presents a “just right” abstraction layer. For these reasons, I’ve been encouraging our members to ditch GRUB completely and go with rEFInd instead. (Yes, I am gushing. Oh well!)

One thing I _don’t_ understand: these FAT partitions – are they little bitty ones that just hold UEFI code? They can’t be system partitions, because FAT doesn’t support permissions. So what and where are they?

“EFI system partition” is the name the spec uses to refer to them. They’re not “system partitions” in the sense of Unix file permissions or something like that. The sense is more “the stuff here is just uninteresting bits necessary to make the system boot, not anything you’re likely to be interested in”.

The article explains what they’re for: put very generically, it’s a place where the OS layer can write stuff for the firmware layer to read. The obvious thing that falls into that category is bootloader code, and indeed that is mostly what is put in ESPs. There are other potential uses for it, but the main thing is bootloaders. As the article (I hope) explains, the role of the ESP is essentially to do the job of:

* the MBR
* the empty space between the MBR/partition table and the first partition

in a BIOS/MBR boot. In a BIOS/MBR boot, the bootloader is located in those spaces. In a UEFI boot, it’s located on the EFI system partition.

The spec explicitly states that you can have as many EFI system partitions as you like on whatever disks you like in whatever locations you like on those disks. There is a specific GPT partition type that identifies them as ESPs.
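For reference, that GPT type GUID is C12A7328-F81F-11D2-BA4B-00A0C93EC93B (gdisk displays it as type code EF00). A small sketch of checking a partition’s reported type, with a hard-coded value standing in for real `lsblk` output:

```shell
#!/bin/sh
# The GPT partition type GUID that marks an EFI system partition:
esp_guid='c12a7328-f81f-11d2-ba4b-00a0c93ec93b'

# On a real system you'd list partition type GUIDs with e.g.:
#   lsblk -o NAME,PARTTYPE
# Stand-in value for one partition's reported PARTTYPE:
parttype='C12A7328-F81F-11D2-BA4B-00A0C93EC93B'

norm=$(printf '%s' "$parttype" | tr 'A-Z' 'a-z')
if [ "$norm" = "$esp_guid" ]; then
    result="EFI system partition"
else
    result="not an ESP"
fi
echo "$result"
```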

BTW, as a Fun Educational Side Note, this isn’t specific to UEFI – lots of other things do this, in fact it seems like every boot method invented since the BIOS/MBR style has done something like this. If you were to dig a PowerPC system out of your local junk heap and install Fedora on it, you’d find you needed either a ‘PReP Boot Partition’ or an ‘Apple Bootstrap Partition’. ARM systems that use U-Boot need a ‘U-Boot Partition’.

At least some EFIs give the option to read firmware updates from the ESP, so that’s another example of what can go there. (Apple’s EFI does this, as does the firmware in my ASUS motherboard.)

It’s possible to place EFI drivers on the ESP so that the firmware can read them. At the moment, this is probably most common in conjunction with a rEFInd installation; rEFInd comes with several EFI filesystem drivers so that rEFInd can read Linux kernels from the Linux /boot directory. In principle, though, you could put a driver for a plug-in Ethernet card, a video card, or whatever on the ESP so as to give the firmware the ability to use that hardware, even if the hardware lacks EFI-compatible firmware itself. A few such drivers do exist, but they’re pretty rare.

Some Linux boot loaders, such as ELILO, require that the Linux kernel reside on an EFI-readable partition, which in practice makes the ESP the easiest location. Along those lines, in some cases it makes sense to mount the ESP at /boot in Linux, so that the kernel will go in the ESP’s root. Some distributions don’t like this because they rely on symbolic links or other Unix filesystem features in /boot, but this approach is quite popular with Arch Linux users. The freedesktop.org boot loader proposal (http://www.freedesktop.org/wiki/Specifications/BootLoaderSpec/) would encourage, but not require, mounting the ESP at /boot.
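For anyone curious what the ESP-at-/boot approach looks like in practice, a hypothetical /etc/fstab line might be something like the following – the UUID is a placeholder (vfat ESPs get short XXXX-XXXX style UUIDs), and many distributions mount the ESP at /boot/efi instead:

```
# Illustrative only – mount the EFI system partition at /boot
# <device>       <mountpoint>  <type>  <options>    <dump> <pass>
UUID=XXXX-XXXX   /boot         vfat    umask=0077   0      2
```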

Hi
Very nice article. I am unclear about ‘Removable devices’ and ‘Removable Media’.
Most USB hard disks are of the ‘Fixed disk’ type – the SCSI Removable Media Bit (RMB) is 0.
Most USB Flash drives are of the ‘Removable’ type – the SCSI RMB is 1.
Some USB Flash drives (e.g. WinToGo Certified drives) have RMB=0.

What does the UEFI firmware consider to be a ‘Removable’ drive – does it look at what the drive reports in the RMB, or does it merely assume any mass storage device on an external interface (eSATA, USB, etc.) is a ‘Removable’ drive?

Looking through the spec, I don’t believe it ever defines what should be considered ‘removable’ or ‘non-removable’. The sensible thing for a firmware to do would of course be to respect the RMB. My opinions on how commonly firmwares do sensible things are, I believe, on the public record 😉
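On Linux, at least, you can see what the kernel believes the removable bit says, since it’s exposed in sysfs. A small sketch – the `show_removable` helper is just something made up for illustration; on a real system you’d point it at /sys/block:

```shell
# Print each block device's removable flag as the kernel sees it.
# 1 = removable media, 0 = fixed disk (for USB mass storage this
# generally mirrors the SCSI RMB the device reports).
show_removable() {
    for dev in "$1"/*; do
        [ -e "$dev/removable" ] || continue
        printf '%s: removable=%s\n' "$(basename "$dev")" "$(cat "$dev/removable")"
    done
}

# On a real system: show_removable /sys/block
```

Whether any given firmware agrees with the kernel’s view is, of course, exactly the open question above.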

It does explicitly list “Hard Drive” in section 12.3.4 – “This section describes how booting from different types of removable media is handled.” – so it clearly considers the case of a ‘removable hard drive’, but it does not quantify precisely what it means by that.

Of course, the distinction isn’t actually incredibly important; as I read it, what it basically boils down to is that when everything in the boot list is invalid and the firmware’s doing fallback path processing it is required to check removable media before fixed media, and removable media are required in the spec to have only a single EFI system partition.

I suspect the BIOS interrogates devices on external interfaces first. Where the spec says ‘removable devices’, should it really say ‘external devices’?
It should be simple enough to test, but of course, it will depend on how different BIOS vendors interpret the spec!

Your description of UEFI is quite the thing. There is so much garbage on the web about UEFI. Your article is articulate, accurate, and a joy to read. I’m going to use this in my lectures if you don’t (and, frankly, even if you do ._’) mind.

It would be nice to go into a bit more detail on how GNU/Linux and/or Windows 8 boot-loading works, but that would probably overstretch your aim.

You say that “In the BIOS world, absolutely all forms of multi-booting are handled above the firmware layer”. Mikhail Ranish actually wrote a multi-boot loader that fit entirely within the MBR code section (Cylinder 0, Head 0, Sector 1) in the late 1990s. I’m not referring to the XOSL implementation and other similar ones that booted code in the partition boot sector; this work of art fit entirely into the MBR. The “GUI” was limited of course, but even to this day, it is a work of poetry, not IT. 🙂

One thing that’s worth mentioning re: UEFI vs. BIOS booting is the fact that, with a BIOS boot, you’re heavily constrained by the fact that the CPU is starting in 16-bit real mode. The same way 8088- and 8086-based IBM PCs booted over 30 years ago, and it hasn’t changed. That’s a real pain in the ass for loading things like the Linux kernel and initrd images into extended RAM (or as it’s called on a real OS, “RAM”). You have to dick around with poking things through the A20 address gate using the keyboard controller, effectively using it to “ship-in-a-bottle” load the kernel and initrd bit by bit into extended RAM, then either jump into the kernel or flip into protected mode (or long mode on an x86_64 processor) and then jump into the kernel. UEFI, however, is running in protected mode (or long mode), so you’re not working with your hands tied behind your back.

I think I might be in love with you. This was totally awesome. I think I might print it out and frame it.

Seriously, after years of struggling and researching, I had a pretty good grasp on BIOS and a reasonable grasp on UEFI. But that came at the cost of blood, sweat, tears, and massive headaches. This condensed it into a short, well-written, easy to understand post.

But I read it all in one sitting without alcohol or caffeine, and I finished with a much better understanding of the subject matter. Which is more than I can say for any other document on the boot process I’ve ever looked at. They’re either lacking in information, plain wrong, or stunningly dull, long, and difficult to understand.

Hi Adam, I’ve loved your work since the Mandriva days. Thanks for explaining this complex topic in such a friendly way. However, I have a doubt. We use the vboxdrv kernel module here; it’s not signed, so it only works with Secure Boot disabled. What I cannot understand is whether I should recompile and sign my own kernel, or whether I can sign only the vboxdrv module…
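For what it’s worth: with most distribution kernels you shouldn’t need to rebuild the kernel at all – you can sign just the out-of-tree module with your own machine owner key (MOK) and enroll it via shim. A hedged sketch; the key file names (MOK.priv/MOK.der) and the sign-file path are typical examples, not universal, so check your distro’s docs:

```shell
# Sign just the module with your own key pair (paths vary by distro;
# this assumes a Fedora-style kernel headers layout and that you have
# already generated a key pair MOK.priv / MOK.der):
#   /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 MOK.priv MOK.der vboxdrv.ko
# Then enroll the public key once (confirmed at next reboot):
#   mokutil --import MOK.der

# You can check whether a .ko is already signed: the kernel appends a
# fixed marker string after the signature data.
is_signed() {
    tail -c 28 "$1" | grep -q '~Module signature appended~'
}
```

The marker check is handy for verifying that sign-file actually did its job before you reboot with Secure Boot enabled.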

Thanks… best article I’ve found on UEFI and secure boot. I’m an IT professional with over 35 years of hardware and software experience. I’ve been working with hard drives and disk booting since we had to low level format each drive and scan for sector errors before shipping a PC. Back then you could only boot from hard drives if you had a BIOS extension ROM plugged in to a PC slot. Ah… BIOS extension ROMs… those were the days. LOL

Anyway, I’ve been trying to figure out for months if I can enable secure boot and still swap drives in my laptop. There was so much conflicting info on UEFI out there that I was concerned that turning SB on would tie the laptop to just a single bootable drive and that I wouldn’t be able to boot from the old drive once the OS was installed on the new one. Now after reading this I understand that as long as both drives have signed operating systems on them I can swap them at will. Now I can finally format a new hybrid drive with Win8.1 Pro and still boot from the Win8.1 Home drive that came with the system if I realize I left something important on it.

A tear rolled down my cheek when I found that “someone else” has come to appreciate the magic of this program. I’m using it to this day, safely booting 7 OSes from a single drive. Actually, I use the beta-beta-beta, and I’ve been using it for so many years that I want to find the coder to tell him he can remove that “beta” tag from it 🙂

You do have some explanation here, but it appears that you have failed to exclude the garbage on the other sites too. Here is how your article appears to readers:
Garbage about other sites
something about UEFI
garbage about other misconceptions
something about UEFI
garbage and (something about uefi here)funny talk
something about UEFI here
garbage

Try keeping the nonsense entirely out. People are actually spending their precious time reading stuff on the internet. Make it count.

GREAT WORK! MANY THANKS!
I NEVER learned so much about ANY subject in such a SHORT article!
And I LOVE the garbage passages – they really hit the point.

I will build a Hackintosh within the next few days (for the first time, after being pissed off by Apple more and more), and your elucidation will surely be helpful.
I will dual boot with Linux – and as I have learned now, I shall use separate disks, which I will observe – though I have been using rEFInd (many thanks to you too, Rod!) for 2 years without any trouble on my Mac mini on ONE disk.

Thanks for this information on UEFI. However, I have a question concerning the handling of more than one EFI system partition: which one will be used when booting?
Will it be selected, as we know from BIOS, according to the boot sequence list defined in the NVRAM of the motherboard?
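As I understand it, the firmware doesn’t really pick “an ESP” as such: the BootOrder variable in NVRAM lists Boot#### entries, and each entry points at a specific partition and an EFI binary on it, so two ESPs just mean the entries happen to point at different partitions. You can inspect all this on Linux with `efibootmgr -v`. A toy sketch of the selection logic – the function and the entry IDs are purely illustrative:

```shell
# Mimic firmware BootOrder processing: try the entries listed in
# BootOrder, in order, and return the first one that is actually
# present/valid.
pick_boot_entry() {
    order=$1     # comma-separated BootOrder, e.g. "0003,0001,0002"
    valid=$2     # space-separated list of entries that are present
    for id in $(echo "$order" | tr ',' ' '); do
        case " $valid " in
            *" $id "*) echo "$id"; return 0 ;;
        esac
    done
    return 1     # nothing bootable: firmware falls back to \EFI\BOOT\
}

# pick_boot_entry "0003,0001,0002" "0001 0002"   # prints 0001
```

Only when every entry fails does the spec’s fallback path processing (the \EFI\BOOT\BOOT{arch}.EFI convention) come into play.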

Thanks for the article.
I struggled with UEFI on an ASUS desktop – it came with Windows 8.1 preinstalled.
I then rashly installed the latest version of PCLinuxOS (this is not EFI aware) without understanding the implications.
This installed itself in BIOS mode with GRUB as the bootloader and gave me options to boot PCLinuxOS and Windows.
The option to boot windows failed (seems obvious now) but I then lost the ability to view the Windows UEFI boot manager.
In the firmware I tried forcing it just to boot via UEFI but it just refused to start.
I tried booting via Ubuntu’s boot-repair but this gave an error because the grub-efi package wasn’t installed on PCLinuxOS (PCLinuxOS has the package grub2-efi). Finally I installed UEFI aware Ubuntu and it detected the PClinuxOS and fixed everything up – I can now boot Ubuntu, Windows and PCLinuxOS.
What I don’t understand is how I got stuck in BIOS mode and couldn’t see the Windows EFI boot manager and how Ubuntu fixed that up?

There is a UEFI BIOS, practically speaking. For compatibility reasons (some OSes don’t support UEFI), recent PC firmware almost always supports both legacy BIOS and UEFI. A ‘UEFI BIOS’ normally refers to a UEFI-compatible BIOS, or a mixture of UEFI and legacy BIOS.

No. You have UEFI _or_ BIOS. If you have UEFI, then you also have a BIOS compatibility mode, in which the UEFI emulates BIOS. “UEFI-compatible BIOS” means nothing. The only “mixture” I’m aware of is that goofy “Hybrid EFI” firmware that Gigabyte sold for a while. It was a nightmare, as Rod explains: http://www.rodsbooks.com/gb-hybrid-efi/

Hi Adam, I have a question. If a USB flash drive or an optical disc is not GPT-partitioned, but it does have the file \EFI\BOOT\BOOT{machine type short-name}.EFI on it, can it boot in UEFI-native mode? Can UEFI detect the bootloader automatically?
Thank you.

The answer to that is ‘possibly’ =) There’s an MBR partition type for EFI system partitions, and strictly according to the spec, firmwares are supposed to respect it. But it’s not something we think has been very widely used in The Real World and I haven’t tested it for real myself. But yeah, in theory if a disk has a partition with the MBR ESP partition type and it’s correctly laid out, the firmware should be capable of doing a UEFI native boot from it, AIUI.

Any UEFI system should be able to boot from a plain FAT16 or FAT32 MBR Primary partition. That is in the spec.
For instance, make a FAT32 USB Drive and extract the files from a Win8 x64 ISO to it and it will UEFI boot.
Easy2Boot uses this feature to boot multiple UEFI payloads from one multiboot USB drive.

The partition should not need to be marked as Active (though some UEFI systems seem to require this!) and it should be the first Primary partition on the drive (though it may still boot on some systems if it is not the 1st partition).
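For reference, the MBR partition type ID for an EFI System Partition is 0xEF. If you want to check what type byte a disk or image actually carries, it lives at a fixed offset in the MBR – a small sketch (the helper name is made up; `fdisk -l` will tell you the same thing more comfortably):

```shell
# Read the type byte of MBR primary partition N (1-4) from a disk or
# image file. The partition table starts at offset 446; each 16-byte
# entry keeps its type ID at offset 4 within the entry. An MBR-style
# EFI System Partition has type 0xef.
mbr_part_type() {
    off=$((446 + ($2 - 1) * 16 + 4))
    dd if="$1" bs=1 skip="$off" count=1 2>/dev/null | od -An -tx1 | tr -d ' \n'
}

# e.g. mbr_part_type /dev/sdb 1
# should print "ef" if the first partition is an MBR-style ESP
```

Plain FAT partitions booted via the fallback path won’t be type 0xEF, of course – that’s exactly the “it works anyway on many firmwares” case described above.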

“If you have a UEFI-based system whose firmware has the BIOS compatibility feature, and you decide to use it, and you apply this decision consistently, then as far as booting is concerned, you can pretend your system is BIOS-based, and just do everything the way you did with BIOS-style booting. If you’re going to do this, though, just make sure you do apply it consistently. I really can’t recommend strongly enough that you do not attempt to mix UEFI-native and BIOS-compatible booting of permanently-installed operating systems on the same computer, and especially not on the same disk. It is a terrible terrible idea and will cause you heartache and pain. If you decide to do it, don’t come crying to me.”
I’m metaphorically crying… Hello Adam, I’ve done a few of the things in the preceding lines, and my computer is behaving kind of weird. I installed Debian as LVM and in BIOS mode on a GPT-formatted disk, and it got really slow. After that, I wiped the entire disk and tried to install again, but in UEFI mode. However, when I turned on UEFI mode, the UEFI boot manager didn’t recognize my disk, only the CD/DVD (it does recognize the Debian installation CD but does not start it) and the USB units. I thought it was because I didn’t have an ESP, so I created one, but that didn’t work. I gave up and changed back to BIOS mode, and magically, my CD/DVD unit started working again. But since I have read about the advantages of UEFI mode, I would like to know how I can fix the apparent problem. For the record, I have a USB stick with a UEFI shell which I tried to use through STARTUP.nsh, but the script doesn’t start, which makes me think I don’t have the latest version. Also, I have the gdisk tool on a Hiren’s CD.
Thanks in advance for reading, and for your possible help.

Hi Adam
Thanks for a great ‘essay’ 🙂
It’s hard to come by relevant info regarding (U)EFI booting, so I’m glad I found this page. Most of what I have seen on forums and websites is, as you so nicely put it, half-baked truths, propaganda or downright lies.

Many (I don’t know how many) potential new Linux users hit a brick wall when they first try a Linux distro, because their PC either can’t boot the distro’s install/live media or, even worse, if they do get it installed, they can’t see or boot the installed system.
As I see it, this can be caused by one of two things:
1. The distro has poor support for UEFI firmware.
2. The pc has a poorly designed UEFI firmware.

Hypothetically, to get as many people as possible to enjoy their first encounter with a distro, you would have to have good support for UEFI in general and, beyond that, you would have to do your best to deal with poorly designed UEFI firmware.
All this should, in my opinion, be part of the live/installer media and incorporated into the installation routine, preferably dealt with automatically.

With my limited knowledge, the first obstacle is booting the live/installer media on a UEFI system, but many distros have already dealt with that, so I guess that’s not the biggest problem.
As I see it, the biggest problem is that the installation routine of many distros doesn’t deal gracefully with installing on a UEFI PC, leaving the user to try to sort it out themselves afterwards (even worse if it’s a multiboot scenario).

Now to my question:
What would in your opinion be the best way to get a Linux distro installed on a UEFI system ?
Is it, as Rod Smith suggests, to use ‘the kernel’s EFI stub loader (in conjunction with rEFInd or gummiboot, if necessary)’?
Your opinion would be highly appreciated 🙂

This is really a bad article:
* It’s not organized well; the concepts do not follow a natural logical flow of understanding. I’d like to know, at a high level, how BIOS works vs. how UEFI works, with the differences highlighted.
* The author added a lot of useless words/sentences which do not actually help understanding, but only to distract the readers.
* The author did not use concrete examples to illustrate important points.
5 Minutes into the reading, I still don’t get any clue of how UEFI works even at the high level. Waste of my time!

For what it’s worth, in the industry, on Dell’s newest servers you must enter the ‘system BIOS’ and within it you can select BIOS or UEFI as your boot mode – so they are using the same term for two different things (the basic input/output system, and also firmware in general).

If you absolutely insist on having more than one OS per disk, understand everything written on this page, understand that you are making your life much more painful than it needs to be, lay in good stocks of painkillers and gin, and don’t go yelling at your OS vendor, whatever breaks.

This seems to imply that UEFI is doing a pretty damn bad job, if something as simple as having a dual boot has to be a massive pain and recommended against.

About the statement that all x86 machines MUST be able to turn off UEFI Secure Boot: on at least the AO725 there was no such entry in the firmware setup. After much searching: you MUST set a supervisor password in the setup utility before those entries appear.
PLEASE, please choose a simple password, or you will have to do an emergency firmware restore (Fn-Esc, with the correctly named firmware file in the root of the USB flash drive, not in /EFI/BOOT).
My friend wanted a complicated password; I couldn’t be bothered to remember it – and he forgot it!!!!

And after reading, I can safely say… what the hell is Intel doing?! I can understand the desperate need to replace the aging BIOS with a more robust and functional tiered system, but this was FAR from what I was expecting to happen.

I’ll share my view: 15 years ago I stumbled across Terabyte Unlimited’s ‘BootIt Next Generation’ software, used for boot management. It fits on a 1440KB floppy disk. It installs, via the MBR, a hugely improved MBR loader, affectionately named EMBR. Back then it was EMBR 1.0; it’s since been improved.

EMBR allowed the boot manager (part of BootIt) to:

Boot any partition from any drive, anywhere (including removable devices).
Have near-UNLIMITED partitions of any kind on near-UNLIMITED attached drives.
TRULY hide drive(s) and partition(s) from the operating system you choose to boot.

And best of all, it was completely configurable, with SECURITY, a SIMPLE and CLEAN-LOOKING bootloader menu, and an integrated PARTITION AND DRIVE MANAGER able to size, resize, move, copy, create and delete partitions of ANY kind with SPEED and 100% reliability.

And all this fit on 1 floppy disk, which could have been made into a firmware BIOS.

I still use it today on BLADE SERVERS!! as well as home PCs.

Intel should have looked around FIRST before trying to build a better BIOS, because others out there had the STANDARD in place ALREADY.

I’m sticking with EMBR; it works without complication, and it’s a hell of a lot easier to understand.

There is nothing here that sells me on using UEFI on my computer. I am not changing the way I describe computers, to recognize UEFI as some kind of important development, at least until I know why it is a good idea.

A BIOS is a firmware that starts a computer. Therefore, a UEFI is a BIOS as far as I’m concerned. When I change my BIOS so that I can start my system on the basis of dependable 1980s technology, I disable UEFI.

Why is it better for the UEFI bootloader code to be located on the motherboard instead of on the hard drive anyway? It just scatters the data in the computer. I don’t feel a better sense of security over my data by having Microsoft offload its operating system onto my computer’s motherboard instead of keeping it contained on the hard drive.

(I know that Microsoft did not invent the part of the Windows which has been offloaded onto the motherboard. But Microsoft insists that this non-Microsoft code be incorporated into the design, so it is part of the Windows operating system. Otherwise, Windows could work without it. UEFI gets pushed in the way of other operating systems which are forced to work around the intrusion.)

I am annoyed with Microsoft about my lack of good access to the firmware boot manager. My hardware vendor may have been able to mitigate my inconvenience by designing better support, but I dwell more on the fact that Microsoft pressured my vendor to implement UEFI in the first place, and I do not want UEFI. Also, Microsoft has been refusing to license the sale of Windows on ARM devices that allow for legacy boot or Insecureboot. If it’s not Secureboot, it must be Insecureboot, right?

The blog post here observes that it is not a good idea to mix legacy boot with UEFI native boot. I need legacy boot, and I don’t have the choice to keep it unmixed with native UEFI boot. My BIOS is designed to scan the computer for internal and external drives that are natively bootable by UEFI. Then, it preferentially boots one of those drives, even if I have legacy boot turned on. It only boots legacy-style if a drive that is natively bootable by UEFI cannot be found.

UEFI is needed because the old BIOS+MBR scheme only copes with disks of <2TB. It also caters for different machine/CPU architectures, and UEFI can be expanded in the future.
If your BIOS will not MBR\CSM boot, then that is the fault of the BIOS/system manufacturer.
Many non-UEFI (MBR) systems are still being produced today that cannot boot from a USB drive that has the active partition past 137GB – this is due to poor coding of the BIOS USB driver and the ‘feature’ has been around for >10 years.
Some MBR BIOSes even completely ignore the MBR boot code and just execute the PBR code instead – totally against the original IBM PC spec!
You could now blame all this on the UEFI spec being too vague – i.e. it would have been an ideal opportunity to clearly specify both UEFI and MBR booting requirements once and for all, and then UEFI\CSM BIOS writers would have had a clear spec to follow. Instead they just copped out by saying ‘we will not define the CSM compatibility functions!’
There is no official MBR-booting spec (it just sort of grew, with all sorts of geometry translation schemes and bugs around the 8GB/137GB limits, etc.!) and, AFAIK, there was no developer’s guide produced on how to code a UEFI BIOS, with examples in pseudo-code, function descriptions, etc., plus what mandatory interfaces to provide to the user. Big mistake!