On Wednesday, in a presentation at Black Hat Europe, Positive Technologies security researchers Mark Ermolov and Maxim Goryachy plan to explain the firmware flaws they found in Intel Management Engine 11, along with a warning that vendor patches for the vulnerability may not be enough.
Two weeks ago, the pair received thanks …

COMMENTS

Insecurity by obscurity

"Two weeks ago, the pair received thanks from Intel for working with the company to disclose the bugs responsibly."

Well, had Intel published all the architectural details about a decade ago, none of this would have been necessary. Someone at Intel took the decision to put this into their chips but not publish how it worked / not bother to make it secure. (The two versions of that sentence are entirely equivalent to anyone who knows anything about software development.)

And whilst we are on the subject ... I'm told that AMD have a similar feature. Have they documented it? Or are they enjoying Intel's discomfort without realising that the same inexorable logic applies to them?

Re: Insecurity by obscurity

Not usually for as long, though, for something this low level. ME in its various forms has existed since the Core line of processors came around. That's 10 or so years of hardware deployment, potentially tens if not hundreds of millions of deployed platforms (not just desktops, laptops and servers - embedded devices too). What is a given, though, is that if you rely on obscurity for security (rather than doing good security analysis, and having a third party double-check your work), then once someone figures out how to understand what you are doing, your logical mistakes will become very public - as has happened here.

If you publish how something works ...

Someone with lots to lose will promptly fix it for you for free and share the fix with confidence that someone else with lots to lose will promptly fix the next flaw and share that fix too. Intel's management engine has convinced me to shop elsewhere whenever possible - that and their 'low power' CPUs are expensive, defective and not particularly low power.

Re: Insecurity by obscurity

There is no silly naive "once someone figures out how to understand what you are doing, your logical mistakes will become very public".

That can only happen through spies: stealing classified hardware and software manuals and specifications to find backdoors and flaws, and doing reverse engineering. It is all illegal stuff. It happens on purpose.

Competitors and governments' secret agencies are behind these black ops.

Re: Insecurity by obscurity

> [...] There is no silly naive "once someone figures out how to understand what you are doing, your logical mistakes will become very public".

...

> That can only happen thru spies.

...

> Competitors and governments secret agencies are behind these black ops.

I think you are giving WAY too much credit to the large pool of lazy, inept software engineers employed in a wide distribution of jobs in tech, who make life hell for the energetic, properly skilled and qualified engineers. You must not be a frequent reader of The Register either; software exploits and bugs are a common problem in tech, and very widely reported. Don't believe me? Look up "CVE Reports" on Google - and those reports in the system are just the ones that were honestly reported so they could be corrected. You don't hear from spies about spies' work - they typically play that stuff close to the chest and don't tell anyone, as it would make their job harder.

When you get down to it, a machine that runs machine code has to keep it somewhere for it to be ready to run. If you can figure out what the instructions are, then you can find the mistakes.

If all you do all day long is look at C++ code and you put a = where there should be a ==, you aren't likely to see that without help (a rules checker, a good compiler, a third party, etc.). Making mistakes is easy, getting it right is hard - humans make mistakes.

And as someone who works in Tech, I can tell you that if a piece of code or logic "reasonably approximates intended functionality", there is little incentive to revisit it unless a problem is found that causes a manager somewhere with task scheduling power some grief. If you then hide your code and refuse to publish the specifications for any peer review, you are only delaying further debugging by an adversary, not preventing it. The adversary will never care about honesty or rules or damage, imagining they do care is security suicide.

Re: Insecurity by obscurity

"What is a given though, is that if you rely on obscurity for security "

Intel didn't rely on obscurity for security. None of the security model relies on the operation methods / code not being known. They just chose not to publish the source code, which is not the same thing at all.

"giving WAY to much credit to the large pool of lazy inept software engineers"

Frankly calling the Intel development team* "Software Engineers" is a f**king outrage.

The correct term for groups who put out such work is "Code Monkeys"

*Assuming they didn't abdicate all responsibility to some no-name sweat shop in the a** end of fu**knows where. The secretiveness of this f**kup points to some MBA PHB type coming up with a "cunning plan" to slash costs on the development.

"None the security model relies on the operation methods / code not being known."

Except the "security model" does not actually work, does it?

And had the code been published, the fact that it was going to run on a very high proportion of every PC, server and laptop on the planet (regardless of what people think they are running on the actual processor they bought) would have made for quite an energetic effort to scrutinize it for bugs.

Of which there have been a distressingly large number.

You aren't by any chance one of the development team? That would explain your AC.

AMD and ARM

AMD Secure Processor (formerly “Platform Security Processor” or “PSP”) is a dedicated processor that features ARM TrustZone® technology, along with a software-based Trusted Execution Environment (TEE) designed to enable third-party trusted applications. AMD Secure Processor is a hardware-based technology which enables secure boot up from BIOS level into the TEE. Trusted third-party applications are able to leverage industry-standard APIs to take advantage of the TEE’s secure execution environment. Not all applications utilize the TEE’s security features. AMD Secure Processor is currently only available on select AMD A-Series and AMD E-Series APUs.

There is more information on ARM TrustZone here: https://en.wikipedia.org/wiki/ARM_architecture#Security_extensions

Re: AMD and ARM

It resides in the PCH (what we used to call the "chipset", consisting of the Northbridge and Southbridge). This is why they are pushing the blame for support onto "vendors": the BIOS ROM contains the ME code.

This is the full scale of the problem: the vendors first need to give a damn about your platform, then they need to download the patched ME runtime from Intel, then they need to recompile their BIOS with the new Intel ME runtime blob included, then sign the build so that the PCH will allow a valid BIOS flash, then publish the new BIOS - all that before we have to deal with the large swath of end-users who probably don't know enough to care, or couldn't be bothered to figure out each vendor's BIOS update process (I'd also hazard a guess most users will never update a BIOS after retail sale). Meanwhile, all a hacker has to do is get control of a running ME OS and block any attempts to update the firmware (because why stop a good thing!!). I'm sure Intel's lawyers are going over every bit of marketing from the last 10 years to quickly scrub any evidence of "promised security" from the written record. ;-)

Intel PR people say "Yay, we found a bug with the help of security researchers and published a patch! Job done!". No - now they get to sit back and realize what a bad business decision it was to put a hidden computer and OS inside everyone's computer, below the reach of the common users and the security market, as the world's hackers spend the next few years fine-tuning persistent hacks that block any effort by Intel to fix this mess. The NSA was smart at the beginning to ask for the High Assurance Platform (HAP) flag in the BIOS ROM, basically disabling the threat from the factory. They likely saw the writing on the wall, and probably let this happen knowing that they were basically being handed a globally distributed backdoor by Intel (whether on purpose or by accident, the outcome is the same).

This is also why having a single vendor effectively have a monopoly over data-center hardware is a potentially catastrophic global risk. If they are the only ones owning the market and they screw up, we all have no recourse.

Re: AMD and ARM

I consider that to be a feature

After all once you get your own code into Intel ME, you can control it. This feature by itself isn't actually a security problem as it requires hardware access (in which case you can do far worse). However it might allow you to actually kill ME from within, or even make it do sensible things.

Re: I consider that to be a feature

"What's this? It isn't even cryptographically signed?"

Yes, it is cryptographically signed, but that only applies to the normal way of loading code into it. If you can find something like a buffer overflow to get code execution, the code you inject arrives as data, and that data isn't signed.

Code signing is not a security feature. It will, at best, only protect the business models of hardware companies.

Who is behind all this bashing against Intel ME , uh?

Clearly competitors...

Why is no one checking for coding flaws in AMD, IBM POWER, Oracle SPARC, ARM? They all have chipsets like the Intel Management Engine running a full embedded OS whose specifications are not disclosed either.

And how is all this easy hacking of Intel ME supposed to happen? The full hardware and software specifications are not public, so other than government spies and competitors' spies there is no way to steal that data to hack the system either.

Re: Who is behind all this bashing against Intel ME , uh?

Actually they don't. Some AMDs have something similar. IBM POWER uses a different chip outside the CPU for management, and it has its own network interface that you are not required to connect if you don't want to. No idea what SPARC has (are they still doing anything?). ARM supports running TrustZone code, although not all of them use it, so it is certainly possible to buy ARM systems that don't have it enabled at all.

Intel's big mistake is putting optional stuff and essential stuff together in the same place. The essential system startup stuff has no reason to have network access at all, and the optional stuff that does have reason for network access should be something you can turn off - so it should have been an independent device, separate from the essential startup stuff. Secure Boot and remote system management have no reason to share the same CPU and OS.

Re: Who is behind all this bashing against Intel ME , uh?

> [...] Why no one is checking coding flaws in AMD, IBM Power, Oracle SPARC, ARM ? They all have the same as Intel Management Engine chipsets running full embedded OS that don't have the specifications disclosed either.

Well, 80/20 rule: the first 20% of effort gives 80% of the results, the last 80% only gives the final 20%. If you want to get into as many distributed platforms as possible, with as much RAM, CPU power, and storage attached to the services you want to compromise - do you attack AMD, POWER, SPARC, or ARM? No, you go after Intel x86. Being the biggest target means taking the most shots. Ask: Windows vs Linux, which is attacked more?

If you attack the OS and they have good security practices in place, but you find there is a low level OS that is not protected that can circumvent the protections the User's OS has in place - would you continue attacking the hardened User OS or the insecure OS below the security features? This attack makes perfect sense to me...

> [...] Also all this easy hacking of Intel ME how? The full hardware and software specifications are not public so other than government spies and competitors spies there is no way to steal that data to hack the system either.

Don't underestimate the power of common debugging tools in the hands of interested/committed engineers... ;-) JTAG, I2C, SPI, and USB are not alien technologies to most engineers, and even hobbyists are getting in the game now that ARM boards and FPGAs are reaching the masses' hands. All of those interfaces have fairly standard communication methods for the data that goes over them, which any person familiar with them can pick apart. You give the complications of modern technology too much credit for being an obscurity wall, IMHO.

Re: WTF?

Certainly none of what they listed is required and many people would rather not have it.

Sure, Microsoft wants you to use Secure Boot, and it does have some good features. So why is it in the same environment as remote management, which clearly is not required or desired in most cases? Do not combine useful local stuff with optional risky remotely accessible stuff.

Re: WTF?

I wouldn't be so sure. It is nice to think that builders of home PCs are a significant market for Intel, but I strongly suspect this is not true. If you look at the part of the market where the volumes are, i.e. server farms or office PCs, it starts to make sense that administrators cannot attend to each PC in person. They want remote administration capability, and unfortunately ME is part of it. I am not trying to argue that these administrators actually want ME, but rather that this is Intel's positioning of the feature on the market.

Re: WTF?

Sigh. If you want LOM (Lights-Out Management), then just stick a LOM card in the machine. If you are selling corporate boxes, put the LOM hardware on the motherboard. Either way, we get to remove the LOM functionality by yanking the card or pulling the configuration links. So any security snafu is completely fixable in the future.

Intel Management Engine Source code

"System owners with specialized requirements should contact the equipment manufacturers for this type of request. However, since any such configuration necessarily removes functionality required in most mainstream products, Intel does not support such configurations."

We don't necessarily want to remove it; what we want is to see exactly what it does. A good faith response to the revelations would be for Intel to publish the source code to the ME.

Re: Intel Management Engine Source code

"We don't necessarily want to remove it; what we want is to see exactly what it does. A good faith response to the revelations would be for Intel to publish the source code to the ME"

Actually, even if we had the full source code, it would still make lots of sense to disable it. After all, since it's so powerful, if there is one single unpatched security-critical bug, your system is toast. It's simply a risk Intel (and other companies) forces upon us. Security is in large part about avoiding unnecessary risks.

Only on more recent ME versions

My understanding (correct me if I'm wrong) is that these problems appeared when Intel switched over from a version of ME based on ThreadX to a competing one based on Minix. (The underlying OS isn't to blame, of course; it's just what the crap code happened to be written to run on.) The current version was chosen for flashy new features, in hindsight at the expense of real security. ME version 7, if I recall, was the first Minix one. The switch from an OS very much focused on embedded systems to a more generalized one probably reflects the two teams' proficiencies, and the wrong team lost.

TL;DR - they lowered the bar for their internal developers, and got the expected lower quality internal developers.

Only more recent...

I'm pretty sick of those who say "just use this language" (or technique, or process) and all your troubles will go away - it'll all become easy.

NO! Just because you want to believe that monkeys can write Shakespeare doesn't mean that it's going to happen with your set, and no matter what, it won't happen often (and who watches the watchers? Who notices the one critical typo?).

Rust ain't gonna fix it. Dev ops ain't gonna fix it. Drag drop or declarative programming ain't gonna fix it. Crappy software guys will find a way to code at the tube till it works, even if they don't understand why it worked that one time - and ship total crap. They'll get the job because, being crap, they don't get as much pay, and those who decide such things are short-termist MBA types who don't have a clue.

It was always thus and will always be thus. Technological solutions to human being problems NEVER WORK going forward. Never have. I await the first proof that it ever will.

Mistrust goes in what direction ?

This technology is premised on the idea that I trust the hardware, so that's who should decide if incoming software is trusted.

But in practice, it looks like I trust a signed Linux kernel more than I trust some recent hardware. So it's not clear why I would pay for something whose function is to decide if it wants to argue with me.

Re: Mistrust goes in what direction ?

You misunderstand. Things like Trusted Computing and code signing are not about you trusting your hardware, but about your hardware not trusting you. After all, the only thing code signing does is try to prevent you from running code the hardware vendor didn't allow you to put on your hardware. There is no actual user benefit from it, as any attacker with physical access will also be able to just swap the mainboard or even the whole computer. (Company computers typically are identical.)

Re: Mistrust goes in what direction ?

Actually, I don't misunderstand. Intel thinks that when this device has an argument with me, the device should win. And, like with DRM, I fail to see why I'd want to pay extra money to have diminished authority over my possessions.

Plonkers

Anything that lets a White Hat remote in with beneficial intentions can and ultimately will allow Black Hats in. This idea of remote admin is thus fundamentally insecure.

The only way to reasonably achieve actual security is to require physical presence at the device, preferably with special physical equipment. Even if that special equipment is nothing more than a header jumper, because that puts modification beyond reach of remote operators.

Yes, that might inconvenience the local IT crew, to have to actually visit the machines they admin, but just look at the situation we're facing now. Does anyone think the mess Intel created is *convenient*?

"Anything that lets a White Hat remote in with beneficial intentions can and ultimately will allow Black Hats in. This idea of remote admin is thus fundamentally insecure."

Not entirely true. If the remote admin mechanism has appropriate security in place, and no daft exploits exist that bypass it, then it shouldn't be an issue. For example, requiring a valid client certificate and secure credentials should work fine, as long as those credentials and the certificate are kept secure. No security system is perfect and such credentials can be lost, but there is always a risk with any system, and it's about balancing the risk against the benefits. When you have an estate of thousands of systems, you do not want to have to physically visit each and every one of them for maintenance reasons.

This is fine, of course, until some code monkeys implement a system where the auth checks can be bypassed with ease.

Easy solution?

Hey Intel, AMD, et al. - go ahead and put your management engine on my mobo. Add all the functionality that you want. I simply want a damned physical DIP jumper on the power trace that goes into it so I can definitively disable it. Corp types can have it on for their LOM; everyone else (or security-conscious corp types) can remove it to completely and absolutely disable the chip/processor/subsystem. Make it part of the spec.