"This is not a d**k-sucking contest," says Linux's benevolent overlord.

The Linux kernel development process may welcome all those who love open source software and have the right coding chops, but one man remains the ultimate authority on what does and doesn't go into Linux—and he isn't afraid to let everyone know it.

The rants of Linux creator Linus Torvalds often become public through the Linux Kernel Mailing List archive. That's the open source way, and it gives us a glimpse into the thinking of the people behind one of the world's most widely used technologies.

The latest example comes from an argument between Torvalds and other Linux developers over whether the Linux kernel should include code that makes it easier to boot Linux on Windows PCs. This goes back to Microsoft requiring that PCs designed to run Windows 8 use UEFI firmware with the Secure Boot feature enabled. This has complicated the process of booting Linux on PCs that shipped with Windows 8, but it hasn't prevented people from doing so. There are workarounds, but some people are looking for a solution in the Linux kernel itself.

Last Thursday, Red Hat developer David Howells posted a request to Torvalds, which reads in part:

Hi Linus,

Can you pull this patchset please?

It provides a facility by which keys can be added dynamically to a kernel that is running in secure-boot mode. To permit a key to be loaded under such a condition, we require that the new key be signed by a key that we already have (and trust)—where keys that we "already have" could include those embedded in the kernel, those in the UEFI database and those in cryptographic hardware.

Now, "keyctl add" will already handle X.509 certificates that are so signed, but Microsoft's signing service will only sign runnable EFI PE binaries.

We could require that the user reboot into the BIOS, add the key, and then switch back, but under some circumstances we want to be able to do this whilst the kernel is running.

The way we have come up with to get around this is to embed an X.509 certificate containing the key in a section called ".keylist" in an EFI PE binary and then get the binary signed by Microsoft.

Warning: Graphic language ahead

Torvalds replied that the idea was "f*cking moronic." Ex-Red Hat and current Nebula developer Matthew Garrett noted in response that "There's only one signing authority, and they only sign PE binaries," referring to Microsoft signing Extensible Firmware Interface Portable Executable binaries.

If you want to parse PE binaries, go right ahead. If Red Hat wants to deep-throat Microsoft, that's *your* issue. That has nothing what-so-ever to do with the kernel I maintain. It's trivial for you guys to have a signing machine that parses the PE binary, verifies the signatures, and signs the resulting keys with your own key. You already wrote the code, for chrissake, it's in that f*cking pull request.

Why should *I* care? Why should the kernel care about some idiotic "we only sign PE binaries" stupidity? We support X.509, which is the standard for signing.

Do this in user land on a trusted machine. There is zero excuse for doing it in the kernel.

Linus

You might think that would end the conversation for good, but not quite. Howells pointed out problems in Torvalds' suggested approach, saying:

There's a problem with your idea.

(1) Microsoft's revocation certificates would be based on the hash of the PE binary, not the key.

(2) Re-signing would make the keys then dependent on our master key rather than directly on Microsoft's. Microsoft's revocation certificates[*] would then be useless.

(3) The only way Microsoft could then revoke the extra keys would be to revoke our *master* key.

[*] Assuming of course we add support for these.

David
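Howells' three points can be illustrated with a toy model. This is a sketch of the trust relationships only, with hypothetical names throughout; it is not real kernel or UEFI code. The point it demonstrates: when vendor keys are re-signed under a single distro master key, hash-based revocation of one vendor stops matching anything, and revoking the master key takes every vendor down at once.

```python
# Toy model of the revocation problem Howells describes.
# All names and blobs here are illustrative.

import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Scenario A: Microsoft signs each vendor's PE binary directly.
# Revocation is by hash of the signed binary, so one vendor can be
# revoked without touching the others.
vendor_binaries = {
    "vendor_a": b"vendor-a-key-wrapped-in-pe",
    "vendor_b": b"vendor-b-key-wrapped-in-pe",
}
revoked_hashes = {sha256(vendor_binaries["vendor_a"])}  # revoke only A

def trusted_direct(name: str) -> bool:
    return sha256(vendor_binaries[name]) not in revoked_hashes

assert not trusted_direct("vendor_a")   # A is revoked
assert trusted_direct("vendor_b")       # B is still trusted

# Scenario B: a distro re-signs every vendor key with its own master
# key. Microsoft's hash-based revocation list no longer matches what
# the firmware sees; the only revocable object is the master key.
master_key = "distro-master"
resigned = {name: master_key for name in vendor_binaries}
revoked_keys = set()

def trusted_resigned(name: str) -> bool:
    return resigned[name] not in revoked_keys

revoked_keys.add(master_key)            # revoking A now means revoking master...
assert not trusted_resigned("vendor_a") # ...which takes out A
assert not trusted_resigned("vendor_b") # ...and B, as collateral damage
```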

Garrett noted that modifying the Linux kernel would be due to practicality more than anything. "Vendors want to ship keys that have been signed by a trusted party. Right now the only one that fits the bill is Microsoft, because apparently the only thing vendors love more than shitty firmware is following Microsoft specs," he wrote.

Greg Kroah-Hartman, maintainer of the Linux kernel's stable branch and the Linux driver project, put in his two cents, noting that various Linux distributions already allow Linux/Windows dual-boot scenarios due to "a UEFI bootloader/shim that Microsoft has signed." (That shim was created by Garrett.)

"I'm not saying that they are not nice things to have, personally, I want some of these things for my own machines just to make things more secure, but again, these are 'I want to have', not 'Someone else is saying Linux MUST have,'" Kroah-Hartman said.

Howells hasn't given up. In an e-mail yesterday, he noted that "Unless we get the administrator to add a key prior to Linux installation, Linux cannot be booted on a secure UEFI machine—unless the boot loader is signed by Microsoft's key." In the thread's latest e-mail from today, he responds to one of Kroah-Hartman's arguments.

The discussions illustrate how proposing changes to the Linux kernel can be controversial. We imagine these types of discussions happen all the time at software companies, but without becoming public like they do with Linux. What do the CEOs of major software companies say behind closed doors when they hate a proposal made by one of their underlings? We don't know, but when it happens with Linux, we do.

Torvalds isn't a CEO, but the final decision will likely come down to what he wants. Torvalds is an employee of the Linux Foundation, which allows him to remain independent while working full-time on maintaining Linux.

As the Foundation notes, "Torvalds remains the ultimate authority on what new code is incorporated into the standard Linux kernel."

Promoted Comments

...and RedHat has their own forked version of the kernel already, so why doesn't RedHat just include the relevant changes in their own kernel?

The whole point of Open Source is that you can make your own changes, right?

Yes, but each change you produce that upstream won't merge back means an extra point of deviation for you. For the sake of keeping your fork maintainable it is in your best interest to try and influence upstream to merge back your changes as well. Too much deviation can cause your fork to be impossible to merge back, and therefore be impossible to obtain further enhancements from the upstream without considerable effort.

So here is the situation, if you care more about what is happening than you do about the language people are using to debate it. The point of the patch boils down to enabling the Linux kernel to trust third-party binary drivers which have been signed by Microsoft. Yes, you read that right: Linux drivers signed by Microsoft.

The purpose of UEFI Secure Boot is to make sure that only code signed by a trusted authority can be booted. Nearly all devices that support UEFI Secure Boot will ship with Microsoft's key/cert preloaded, and most will allow you to load other keys/certs, but not all. So, to get around this, Red Hat and others wrote a small shim bootloader, signed by Microsoft, that will then run anything. This is nice for people who don't care about Secure Boot and just want to be able to install Linux and have it work without messing with BIOS settings.

However, the vast majority of people who actually want to use Secure Boot in Linux would like for that code signing to extend all the way through the OS. That is, UEFI verifies that the bootloader is trusted, the bootloader verifies that the OS is trusted, and the OS verifies that any drivers it loads are trusted. Linux supports this using standard X.509 certificates, and all the major distributions sign the binary modules they distribute, so if you add their key as one that is trusted, everything is great.
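That chain can be sketched in miniature. This is a toy model: HMAC with a single shared key stands in for real X.509 signature verification, and all component names and blobs are made up for illustration.

```python
# Minimal sketch of a verified-boot chain: each stage checks a
# signature on the next stage before handing off. HMAC is a stand-in
# for real public-key signature verification.

import hmac
import hashlib

PLATFORM_KEY = b"platform-owner-key"  # hypothetical enrolled key

def sign(blob: bytes) -> bytes:
    return hmac.new(PLATFORM_KEY, blob, hashlib.sha256).digest()

def verify(blob: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(blob), sig)

# Each component ships alongside a signature over its contents.
chain = [
    ("bootloader", b"shim + grub image"),
    ("kernel",     b"vmlinuz image"),
    ("module",     b"vendor.ko blob"),
]
signed_chain = [(name, blob, sign(blob)) for name, blob in chain]

def boot(signed_chain):
    for name, blob, sig in signed_chain:
        if not verify(blob, sig):
            raise RuntimeError(f"{name}: signature check failed, halting boot")
    return "booted"

assert boot(signed_chain) == "booted"

# Tampering anywhere breaks the chain at exactly that stage.
tampered = list(signed_chain)
name, blob, sig = tampered[2]
tampered[2] = (name, b"rootkit.ko blob", sig)  # new blob, old signature
try:
    boot(tampered)
    raise AssertionError("tampering went undetected")
except RuntimeError as e:
    assert "module" in str(e)
```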

Except for binary drivers. Red Hat doesn't want to sign third-party binary drivers, but they want those drivers to work in this secure boot scenario. And they don't want users to have to manually add the keys for NVIDIA, AMD, etc. (which would be the secure thing to do). Instead, they had the idea to let third-party driver developers submit their drivers to Microsoft to be signed, since the kernel will trust any certificate in UEFI by default anyway and the Microsoft certificate is on nearly all computers. (Edit: Microsoft is cool with this, and does it all the time; as far as they are concerned, they are just verifying that the code comes from the specified developer, not that the code isn't malicious.)

The complicating factor is that Microsoft will only sign executables in its own PE format, so they want to enable support in the Linux kernel to load kernel modules that are wrapped in PE binary format. Linus thinks it is stupid to make a Microsoft certificate the root of your trust to begin with, and thinks it is even more stupid to add a bunch of code to the kernel to parse the PE format, just to enable this work-around.

UEFI secure boot is a useful tool. Like any useful tool, it can be used for good or for evil. The good is that it is harder to install rootkits or other tampered-with kernels. Only signed-trusted kernels will load. The bad is that it is harder to install your own custom-built kernels. If you want the advantage of additional security against rootkits and if your kernel of choice is available as signed-trusted, UEFI secure boot is awesome. Otherwise, UEFI secure boot is bad.

The design for UEFI secure boot allows for multiple signing authorities. So far, the only organization that has been able to get itself recognized and accepted as a signing authority by firmware vendors is Microsoft. Microsoft did not create this situation, but no other vendor has gone through the effort necessary to get itself accepted as a signing authority.

Microsoft is willing to sign 3rd-party code as long as it meets certain minimum requirements. The minimum requirements basically boil down to: 1) the software cannot be used to create a rootkit, and 2) the software is provided in a format that Microsoft's signing infrastructure can accept. The first requirement is absolutely crucial. To omit that requirement would bypass the whole point of UEFI secure boot. The second requirement is a bit more uncertain. I don't know for certain how the PE file format works with UEFI, but there is probably a way to make UEFI secure boot work without a PE file format (though it might require a revision to the UEFI standard -- I'm not sure about what formats UEFI supports).

Linus seems to be arguing that Linux shouldn't be in the business of using PE files for certificates because Linux already has a way to get certificates, and that the requirement for using PE files for certificates is arbitrary and temporary. I find this reasoning to be flawed -- there are lots of things in the Linux kernel that are present only to work around arbitrary or temporary limitations on firmware or hardware, and this would not be the least of them. I agree that it is annoying to have to use PE files to encapsulate UEFI certificates, but that's how UEFI works right now.

So it comes down to whether the Linux kernel will play nice with the existing UEFI secure boot implementation. Linus says he doesn't want it. He's well within his rights to say that and to not support UEFI secure boot. But to say that it keeps the kernel pure is quite silly -- the whole point of a kernel is to shield the user from the impurities of the real world by abstracting away hardware details.

In other words, I really don't see why we should bend over backwards, when there really is no reason to. It's adding stupid code to the kernel only to encourage stupidities in other people.

Seriously, if somebody wants to make a binary module for Fedora 18 or whatever, they should go to Red Hat and ask whether RH is willing to sign their key. And the whole "no, we only think it makes sense to trust MS keys" argument is so f*cking stupid that if somebody really brings that up, I can only throw my hands up and say "whatever".

In other words, none of this makes me think that we should do stupid things just to perpetuate the stupidity. And I don't believe in the argument to begin with.

- why do you bother with the MS keysigning of Linux kernel modules to begin with?

Your arguments only make sense if you accept those insane assumptions to begin with. And I don't.

Linus

Peter Jones responds:

Quote:

This is not actually what the patchset implements. All it's done here is using PE files as envelopes for keys. The usage this enables is to allow for whoever makes a module (binary only or merely out of tree for whatever reason) to sign it and vouch for it themselves. That could include, for example, a systemtap module.

-- Peter

Linus further responds with:

Quote:

Umm. And which part of "We already support that, using standard X.509 certificates" did we suddenly miss?

So no. The PE file thing makes no sense what-so-ever. What you mention we can already do, and we already do it *better*.

Linus

Oh man the picture in the article just goes with all of this perfectly.

I'm with Linus on this -- why should a non-Windows OS have to store X.509 certificates in a Windows (PE) executable just because the signing authority (Microsoft) does not support signing via the certificate file.

The problem here is that Microsoft have a monopoly in the PC space and have pushed manufacturers to ship machines with UEFI secure boot on by default for new Windows 8 machines (esp. ARM-based devices) that only have Microsoft's key installed on them (for the most part).

This will make things interesting in the future for refurbishing used PCs. I suspect that this is just the tip of the iceberg.

So, correct me if I am wrong: he is making life more difficult for users than it has to be because it maintains the purity of his ideology?

So would you incorporate something completely ridiculous into your life just because it might make some things easier, but in the end just be a waste of time and effort to support something that shouldn't be done in the first place?

Why are MS the only ones allowed to hold the power to revoke / issue certs? This is such vendor lock in, but worded in such a way that MS can make Bambi eyes at the courts and say "think of all the children deprived due to piracy of our valuable product" and not get slapped with another anti-competitive lawsuit.

I agree 100% with Linus on this one. Red Hat should be going after hardware vendors for bowing to MS.

I'm with Linus on this -- why should a non-Windows OS have to store X.509 certificates in a Windows (PE) executable just because the signing authority (Microsoft) does not support signing via the certificate file.

The problem here is that Microsoft have a monopoly in the PC space and have pushed manufacturers to ship machines with UEFI secure boot on by default for new Windows 8 machines (esp. ARM-based devices) that only have Microsoft's key installed on them (for the most part).

This will make things interesting in the future for refurbishing used PCs. I suspect that this is just the tip of the iceberg.

Because users may want to easily boot Linux on machines equipped with UEFI?

I, for one, don't want Microsoft in the trust chain for any code I run. If nothing else, their signing policy is super lax, as evidenced by their willingness to sign Red Hat's own loader... which from Microsoft's point of view could just go off and load anything.

Doing anything so that users can keep rooting their trust at Microsoft and have things "just work" isn't doing anybody any favors. It just lets them be misconfigured without knowing it.

For that matter, I'm not so sure I want Red Hat in the trust chain either.

Fix your UEFI trust roots, people, if you want to use this stuff. Cryptographic security has a cost, and if you're not willing to pay that cost you should just turn it off and forget it.

"What do the CEOs of major software companies say behind closed doors when they hate a proposal made by one of their underlings?"

Linus created Linux. But even in a case where a company's CEO actually created its product, he or she would not be participating in a decision on this kind of issue. Typically, the decision would be driven by marketing requirements. The manager with profit-and-loss responsibility for the particular product would choose someone to drive the process of resolving it. That person would gather some kind of workgroup of people with technical knowledge related to the issue. Based on their input, the workgroup leader would report back with a recommendation, or possibly a set of choices and the issues with them. The product manager would make a decision. Other people's opinions would not be solicited, and not be welcomed if offered.

So, correct me if I am wrong: he is making life more difficult for users than it has to be because it maintains the purity of his ideology?

Incorrect. He's making it more difficult for Microsoft to lock Linux out of OEM hardware.

It makes it easier for users because there's fewer hoops to jump through to install Linux on the hardware they paid for that happens to be win8-certified.

Edit: grammarfix.

This is incorrect. To install on the "hardware they paid for" the user has to turn off Secure Boot. If they are just running Linux and don't care about bootloader integrity, fine. But if they are dual-booting, this is a problem, because you lose the protection that Secure Boot provides. This is a very important point made in the LKML thread but 100% ignored by Linus. He seems to think letting an OS installation be compromised is a good thing for users.

Really disappointing. I'd somehow expected better from Linus. Regardless of whether his argument is right or wrong, there's no need to be such an asshole.

In some cultures there's nothing wrong with being an "asshole". In fact, in some cultures fear of coming across as an asshole will simply encourage everyone to ignore you. You obviously don't care enough to be worth listening to.

I'm Canadian. We're commonly considered polite to the point of hilariousness. But I also work in construction. If you're unwilling to swear or yell every once in a while, you aren't going to get far on a construction development site.

Maybe there's something about actually building something that makes people passionate. My experience is that polite people are often middle managers, lawyers, lobbyists, politicians and bureaucrats. I've never met a genuinely good engineer, architect, tradesman, foreman, superintendent or project manager that was afraid of a little colourful language. Or afraid of telling someone they're wrong. Really wrong.

I, for one, don't want Microsoft in the trust chain for any code I run. If nothing else, their signing policy is super lax, as evidenced by their willingness to sign Red Hat's own loader... which from Microsoft's point of view could just go off and load anything.

This is addressed in the LKML thread. Red Hat agreed to not load just "anything" as part of the signing agreement.

Or people could just disable Secure Boot if they want to run Linux. Except on ARM hardware, but products like Surface RT are really just stopgaps until Intel gets their power consumption under control. Backwards compatibility is too important to Windows users, especially with the limited variety of apps in the Windows Store at the moment.

This is incorrect. To install on the "hardware they paid for" the user has to turn off Secure Boot. If they are just running Linux and don't care about bootloader integrity, fine. But if they are dual-booting, this is a problem, because you lose the protection that Secure Boot provides. This is a very important point made in the LKML thread but 100% ignored by Linus. He seems to think letting an OS installation be compromised is a good thing for users.

You can do the same thing by taking a SHA512 hash of your kernel, offloading that to a 3rd party and occasionally verifying the hash. It's nothing tripwire or any other software hasn't been doing for ages. SecureBoot is all about MS stopping piracy and vendor lock-in. Period.
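That tripwire-style check is straightforward to sketch. The kernel bytes and the stored hash below are placeholders; in practice you would hash the real /boot image and keep the reference hash off the machine. Note that this verifies the file at rest, after the fact, whereas Secure Boot refuses to run tampered code at boot time, which is part of why the comparison is contested.

```python
# Offline integrity check in the spirit of tripwire: hash the kernel
# image and compare against a hash stored with a third party.
# The "kernel image" here is a placeholder byte string.

import hashlib

def sha512_of(data: bytes) -> str:
    h = hashlib.sha512()
    h.update(data)
    return h.hexdigest()

kernel_image = b"pretend this is the bytes of /boot/vmlinuz"
stored_hash = sha512_of(kernel_image)  # what you'd keep off-machine

# Later: re-hash and compare to detect tampering.
assert sha512_of(kernel_image) == stored_hash
assert sha512_of(kernel_image + b"!") != stored_hash
```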
