Multicore CPUs move attack from theoretical to practical

A recently rediscovered technique allows malware to disable the protection of …

Many of us use software firewalls, virus scanners, and other security software on our PCs. We expect this software to make our computers safer, but some new research suggests that it contains a whole host of exploitable vulnerabilities.

The Matousec researchers found that common software tools, including Norton Internet Security 2010, McAfee Total Protection 2010, and Trend Micro Internet Security Pro all had flaws that allowed attackers to bypass the protections that these programs offer. The malicious software can do this without even having to run as an Administrator.

The common feature of the vulnerable software is that it patches the Windows kernel to enable it to intercept certain operations like opening files or killing processes, a process called hooking. Windows lists all these functions in a table, the System Service Descriptor Table (SSDT), with each function having a number specifying its position in the table. To call a kernel function from nonkernel—user-mode—software, Windows essentially tells the processor to switch into kernel mode and call the function with the desired number. By overwriting entries in the table, the security software can intercept function calls.

In addition to choosing the function to call, the user-mode program must also pass in any necessary data to the function—for example, the name of the file to open. The kernel has very strict rules about which memory it can access; a bad memory access that would merely crash a user-mode program will, in the kernel, cause the entire system to fail, resulting in a system-halting blue screen of death. This means that these hooked functions have to carefully validate the parameters that are passed to them, to ensure that they do not refer to any memory that is inaccessible.

After validating the parameters, the security software then typically performs some check such as ensuring that a call to NtTerminateProcess, the API that terminates processes, does not attempt to terminate one of the security software's own programs.

Finally, the hooks call the real functions, so that Windows can actually perform the requested operation. When doing this, the hooks pass in the original parameters unmodified, to ensure that the real kernel function operates correctly.

This is where the vulnerability lies. Programs can change the parameters after passing them to the hooked functions. If the change is made at just the right time—after the validation, but before the parameters are sent to the original kernel function—the program can give the hook function data that it accepts, then swap in data that it would have rejected. So, for example, it would make a call to NtTerminateProcess with a reference to a harmless process, and then quickly replace the reference with one to the security software itself. The hook would see the harmless process and permit the operation to continue, but the real kernel function would see the reference to the security software, and duly terminate it.

This requires careful timing on the part of the attacker. The replacement has to be made just at the right moment. Too soon, and the hook will attempt to validate the malicious parameters and reject the call. Too late, and the harmless data will already have been passed on to the real function. This might seem improbable, but it turns out that an attack specially written to target anti-virus and firewall software by using these hooks can successfully switch around its parameters after just a handful of attempts.

The researchers found exploitable versions of this vulnerability in every program they tested, including products from McAfee, Trend Micro, and Kaspersky. In fact, the researchers said that the only reason that they found exploits in only 34 products was that they only had time to test 34 products (Microsoft, for its part, believes that its security software is not affected, but is still investigating the issue). Many others may be vulnerable too. They also developed a toolkit dubbed KHOBE ("kernel hook bypassing engine") to allow the rapid detection and exploitation of such flaws.

Matousec initially believed the technique they were using to exploit the security software was newly discovered. After publication, however, they became aware that the basic technique was documented as a way of attacking Unix way back in 1996. A 2003 posting to security mailing list Bugtraq described using the same technique against Windows.

A matter of timing

However, until relatively recently, successful exploitation was almost impossible. To have time to switch the parameter data around, the attacking program needs to make sure that one of its threads gets a chance to run during the window of vulnerability. On a single-core machine, where only one thread can run at any one time, the probability of a thread switch occurring at just the right time (to allow the replacement to be made) is so low that the software is all but unexploitable. But multicore systems, capable of running multiple threads simultaneously, are a lot more plentiful today than they were in 2003—let alone 1996.

With multiple cores running concurrently, the timing issue is greatly reduced. Though software cannot guarantee that it will be able to replace the data at just the right time, it is free to try again over and over, and after a few attempts it can generally pull off a successful attack. The growth of multicore processors has turned an attack that was mostly theoretical into one that's practical and reliable.

This isn't the first vulnerability found in security software that modifies the SSDT. In 2007 Matousec published a list of flawed programs that had errors in their hooks that allowed attackers to crash machines or even escalate their privileges, enabling malicious software to run with all the privileges that the kernel has.

This new, old flaw does not appear to carry that kind of risk: it can't be used to achieve escalated privileges, and attackers must already be able to run software on the victim's system. But it does mean that victims can have their security software disabled at the whim of attackers.

The researchers didn't describe any solutions for the problems in their public paper, though they do have solutions available to paying customers. The difficulty inherent in any solution is that, to ensure that the real function operates properly, the hooks shouldn't modify any of the parameters, but must instead pass on the originals. This is what permits the attacks to be made, but it seems hard to avoid.

In particular, what might seem the obvious, naive approach—having the hook function copy all the user data into memory that it controls (so that it cannot be altered by the malicious software)—does not work well. This is because Windows applies different security checks depending on whether a call is made from user mode, using user-mode memory addresses, or from kernel mode. In addition to undergoing the validation mentioned previously, user-mode calls are also subject to access control list (ACL) checks. ACLs are used to secure things like files and registry keys, and are an essential part of Windows' security.

Only user-mode calls need to have ACLs checked. Kernel-mode calls do not, because the kernel is privileged and is allowed access to any file on the system. Hooks generally don't perform this kind of verification because getting it right is complicated and error-prone. Moreover, there's no need: as long as the hooks pass the original parameters to the real function, the real function will do all that work for them. The complexity of these checks means that the naive approach is itself risky, and hence a poor solution to the problem.

A 64-bit solution?

One possible solution is switching to 64-bit Windows, which largely prohibits this kind of kernel modification with a system called PatchGuard. With PatchGuard, any attempt to update the SSDT will result in the machine blue-screening shortly thereafter. On the face of it, PatchGuard should be a perfect cure, since it forces security software vendors to use different techniques, such as Windows' built-in filtering mechanisms.

Unfortunately for end-users, a number of security companies—the same security companies that routinely write insecure, exploitable kernel hooks—complained about PatchGuard, claiming that it prevented them from writing effective security software for 64-bit operating systems. Microsoft eventually relented, offering an API to allow certain patching operations to be performed by third parties.

The API that Microsoft offered in response to the security software vendors' demands is not public, so whether it permits SSDT patching is uncertain. Given the consistently flawed attempts to hook the SSDT, and the difficulty in making those hooks robust, a good case can be made that such operations should remain prohibited. Furthermore, not all security software uses SSDT hooks, so clearly this risky technique can be avoided altogether.

In spite of its limitations, this research reveals an effective technique for crippling a wide range of security software. It is unfortunate that Matousec could not test more software so that we could better gauge just how widespread the problem is, and see if there's anything on the market that avoids it. As things stand, though, it appears that even unprivileged malware can readily disable the firewall and anti-virus software that many people depend on, and that can't be a good thing.

So, using this attack, does having security software installed actually provide a bigger, badder attack vector than not having any security software installed?

I'd say not right now, because this flaw is not commonly exploited. Maybe in a few years it'll become the de facto way to attack Windows machines, since anti-virus will be the highest-privileged software besides the OS that also has access to the kernel.

It is true that these same companies could, in theory, write secure security software instead of using the kernel hooks, but that would make their lives too difficult. Ah, the irony... they write insecure software and they're a security company.


In a few years if you are still running XP or older you deserve what you get. (Vista and 7 should not be affected by this.)

The research was done on Windows XP Service Pack 3 and Windows Vista Service Pack 1 on 32-bit hardware. However, it is valid for all Windows versions including Windows 7. Even the 64-bit platform is not a limitation for the attack. It will work there against all user mode hooks and it will also work against the kernel mode hooks if they are installed, for example after disabling the PatchGuard.

Unfortunately for end-users, a number of security companies—the same security companies that routinely write insecure, exploitable kernel hooks—complained about PatchGuard, claiming that it prevented them from writing effective security software for 64-bit operating systems.

I'd REALLY like to know exactly which vendors are patching the kernel, so I can avoid them at all costs. I'm assuming it's the standard crew, Symantec, Norton, et al.?

This is a common multithreaded bug; it's called a race condition. Why is this data not protected with a lock of some sort so only one thread can access it at a time?

Remember, locks are a cooperative sharing mechanism, not a security barrier that prevents all access to a structure by other threads. They're only good if you're willing to honor them. Somehow I don't think malware writers will be interested in honoring a lock that would stop their exploit... they'll just go after the data directly.

That said, for those who were paying attention back in 2006 when the major security vendors cried and made noise like they were going to go to the DoJ because of PatchGuard, this is the result that many predicted. The vendors of the world made the system less secure for their own benefit, and the community mostly let it happen because no one wants to defend Microsoft, even when they're designing effective security measures. Congratulations to us.

...I'd REALLY like to know exactly which vendors are patching the kernel, so I can avoid them at all costs. ...

As mentioned in the Ars writeup, nothing has yet been shown to be clean. The table in the original article shows only failures. A lot of failures.

There may be a distinction among programs that use these kernel hooks between those that do so securely and those that do so insecurely (i.e., subject to the multithread attack). However, the researchers have not yet found any AV programs which are proof against this attack (or if you want to get paranoid, they haven't yet published any, maybe while they call their broker).

Supposedly because they didn't have time. But I was wondering about that too to be honest. No time before what? Someone else stole the limelight and published an article with similar research? Before one of their employees went solo with the results?

I'm so very glad I don't have to use AV software. When one has no at-risk activities, there is really no risk to not running AV. That and constant patching helps too!

Quote:

Supposedly because they didn't have time. But I was wondering about that too to be honest. No time before what? Someone else stole the limelight and published an article with similar research? Before one of their employees went solo with the results?

Whether leaving out MSE was an oversight or not, they probably thought that an 0-34 record was good enough to report with. MS is doing their own testing, so even if it fails they'll probably quietly fix it and say it has worked all along.

One way to solve this problem is to copy the arguments (and the memory they point to, in the case of pointers) to another location and use them instead. That is, the anti-virus programs should copy the arguments to a random memory location, check them, then call the real system call with the copied arguments. After the call, if needed, copy the arguments (which may be modified by the system call in some case) back to the original location.

@tcowher: a lock is a *voluntary* solution: programs must respect the lock to protect the shared data. Since the process in question is malicious, it does whatever it feels like doing :-). (There are also various kinds of kernel-enforced locks, but they're irrelevant here: the memory that's modified *belongs to the attacking process*.)

The kernel itself is free from this issue: it copies the data to internal kernel memory, and only then decides if the request is valid. The malicious program could modify the memory while it's being copied - but that wouldn't help it: some mixed data would be copied into the kernel, but in any case the kernel would use the same data both when deciding whether to grant the request and when executing it.

This only proves once again that user space antivirus apps can't enforce security. They can present a user-friendly interface that controls the policy - but the mechanism enforcing it must be in the kernel.

I'm so very glad I don't have to use AV software. When one has no at-risk activities, there is really no risk to not running AV. That and constant patching helps too!


Tell that to anyone who has ever gotten any worm. No matter what you do, you're always vulnerable to worms.

It's not about patching the kernel, it's about hooking the kernel. Hooks have been around since forever and are indispensable for many programming tasks, not just writing security software. What changed things here is finding out that multi-core CPUs create a race condition that allows regular programs IN USER MODE to change the kernel parameters after they have been verified.

On the face of things, this really is just a race condition. After it is properly researched and tested, there's no reason this can't be patched.

This will be exploited by the bad guy and probably soon. Expect to see new malware using this in a matter of weeks.


Idea: mark the memory pages containing those parameters as read-only during the critical time window, then unmark them (if they were previously r/w). If the malicious process tries to concurrently modify those pages, have a handler that checks these attempts and validates them (either emulate the write if it doesn't overwrite the protected parameters, or just block the process until the page is unmarked). This solution may limit the scalability of multithreaded programs, especially when using large pages, but maybe the userland layer that talks to the kernel could handle this by using private, small pages for each downcall.


That said, for those who were paying attention back in 2006 when the major security vendors cried and made noise like they were going to go to the DoJ because of PatchGuard, this is the result that many predicted. The vendors of the world made the system less secure for their own benefit, and the community mostly let it happen because no one wants to defend Microsoft, even when they're designing effective security measures. Congratulations to us.

Exactly why I use no antivirus on any system I oversee (mine, friends, family, etc.) except MSE, I was absolutely disgusted by the crap all the AV companies pulled about patchguard. This is why nobody should support these companies. Thanks to the environment on the internet, anything MS does can be considered antitrust, and the users suffer for it.

How about this nugget of wisdom: in my 15 years of dealing with computers online, I've never seen a worm get into a patched computer. I have, however, seen worms on boxes with Norton installed. Hell, I remember when blaster was about, the simple Windows XP firewall prevented it from working properly.

I am to this day still amazed that we have this big a problem with viruses etc. How hard is it to not run/open/click stuff when the outcome of that action is not 100% predetermined in your head? If you saw a billboard on the way home telling you to pour canola oil in the gas tank to promote injector lubrication, would you do it?

That analogy only tells half the tale about how stupid the problem really is. They make more cars every day; you can always get another one (funds permitting). With computers, however, you can store a lot of memories on a desktop PC: pictures, email, resumes, bank info, etc. If something corrupts or deletes that data, there is almost no way to get it back. Yet most people have no problem clicking install and then allow when some random popup tells them to.

Insane.

I work in Operations, and have to install this crap on servers everyday, so I get to see quite a few problems. Due to this, I have not run AV on any of my PCs at home for over 7 years. I guess I am not surprised this finally happened.

You mean to tell me that bloated buggy AV interfaces were a hint to the quality of the code behind them. Say it isn't so.

How dare you. Norton is a respectable company that blocks real viruses. Anything they can't block, they'll have a press release telling everyone that it's not a real virus, and then we can all sleep better at night./s

On the face of things, this really is just a race condition. After it is properly researched and tested, there's no reason this can't be patched.

True dat. Of course, that begs the question of why the condition exists in all these products in the first place. I suppose, as the article points out, it's because it wasn't a practical vector until the advent of multicore processors... but still.

Quote:

This will be exploited by the bad guy and probably soon. Expect to see new malware using this in a matter of weeks.

Unfortunately, that will likely be long before vendors have patched their products.

Supposedly because they didn't have time. But I was wondering about that too to be honest. No time before what? Someone else stole the limelight and published an article with similar research? Before one of their employees went solo with the results?

Yeah, I guess I find it hard to believe that MSE isn't in the top 34 virus protection products, but I guess it's possible -- it's not like it's available through MS update, and doesn't really seem to be "pushed" by Microsoft at all.


The Exchange 2007/2010 experts on the team I work on say Forefront (the server grade MS AV) is really surprisingly good. So it is in that sense at least leading in certain circles.

You mean to tell me that bloated buggy AV interfaces were a hint to the quality of the code behind them. Say it isn't so.

+1

Solution to all your problems = 1 computer that is sanitised (no questionable surfing/stupidity) and 1 computer for the rest. Feel free to replace the 2nd computer with a 2nd hard drive and another OS installation. Hey, look at that? I caught a virus/worm! Let's see what happens to it after I rewrite the entire disk in 1's and 0's...

Drastic solution, I know, but highly effective nonetheless.

Oh, and to those who wish to punish the various security vendors for forcing Microsoft to weaken security - hit them where it hurts and use a free alternative, aka Comodo


How many computer literate people are really having problems with viruses? I've been on broadband since 1997 and I've never been infected with anything.


Referring to it as "literacy / illiteracy" makes an automatic assumption that the folks who don't know are stupid. They're not. There's many intelligent folks out there that just don't have the time or inclination to learn about computers. But that doesn't mean they're dumb or "deserve what's coming to them." Like folks do with cars (rely upon a mechanic to work on it), some folks rely on a comp tech. And just like auto mechanics, there are some unsavory comp techs that leave security holes on peoples' comps so they WILL get attacked, bring the comp back in, and get charged another $100.

But that's the problem- most users are not computer literate. Most schools don't teach it as a skill set, and most older people didn't grow up with it so they didn't pass it on. It will take at least one generational shift before it becomes part of the culture (like looking before crossing the street- people didn't worry about that until cars came along since they had all learned to listen for horses already- it took time for everyone to get onboard).

I am to this day still amazed that we have this big a problem with viruses etc. How hard is it to not run/open/click stuff when the outcome of that action is not 100% predetermined in your head?

People call the big box beside the desk the HDD and the keyboard/monitor the CPU. Also, people believe that some computing devices are "magical". People still forget to check to see if a computer is turned on or even plugged into the wall. You should never be surprised at any 'odd' user behavior. The vast majority of users (and even a lot of them who consider themselves having 'high' user ability... just look at newegg customer reviews where people mark themselves as 'high' or 'very high' level of technical ability) don't know a thing about computers... or worse, know just enough to be dangerous.


I get all that. But again like the car analogy (which is easy for me to make, because I think there are leprechauns under the hood of my car), if you cannot fathom the workings of something to at least a level where you can describe its workings at a high level, then don't muck about with it.

Seems easy to me!

And I will agree that most computer illiterate folks are not "stupid" per se. Unjustifiably confident would be the term I would use.

All of these "issues" arise because Windows (and many other OS implementations) do not REALLY provide anything like complete memory/thread/user protection. There are a long list of reasons for this -- some just historical sloppiness, some issues of hardware implementations, some issues of speed.

The whole problem of user/supervisory communication (and rogue user-level code attempting to "spoof" same) can be made rigorously secure IF you are willing to force an interface system which can do so, AND can actually enforce true memory protection for all processes AND the storage system. There are LOTs of papers and some working OS/hardware implementations which do this, too.

The problem with it is that it comes with "performance penalties" (particularly for code that spawns and kills processes at high rates)... and for Microsoft it has the "big problem" that it forces HUGE penalties and restrictions on inter-process communication and the "anything can talk to/drive anything" feature seamlessness that Microsoft has always wanted to deliver.

Active-X is the poster-child here: it was "an insanely great idea" ... as long as nobody misuses it (hah!).

Developers DON'T WANT good security; it becomes a pain in the ass complexity to them. Most users never think about the issue; until something borks their system and turns it into a zombie spambox and they get disconnected by their ISP. And so many users are complete idiots for the truly dumbest "social engineering" exploits ... the "email me your root password because I tell you to" kind of thing ... and there's really nothing one can do about that except going to lock-down non-extensible kernels ... which actually seems like a good solution to me for most users ... but of course developers HATE the idea.

One way to solve this problem is to copy the arguments (and the memory they point to, in the case of pointers) to another location and use them instead. That is, the anti-virus programs should copy the arguments to a random memory location, check them, then call the real system call with the copied arguments. After the call, if needed, copy the arguments (which may be modified by the system call in some case) back to the original location.

The article specifically addressed this as not-a-solution because the new arguments now have kernel privileges instead of user-mode, and you lose ACLs.

This sounds like a race condition with parameter checking on system calls. Why does hooking matter, in this case?

How does Unix resolve this, so I can't change the pid given to kill() after the kernel has validated me? Does it get copied into a kernel buffer? Is that why hooked functions in Windows are vulnerable, because they aren't able to copy the arguments to kernel memory before validation?

This is a common multithreaded bug; it's called a race condition. Why is this data not protected with a lock of some sort so only one thread can access it at a time?

Because there's nothing to lock. Locks only work when both parties modifying the shared state use the lock to arbitrate access to that state. But that's not the case here. The user-mode program doesn't guard access to its memory with a lock.

Couldn't this just be fixed in the kernel? Copy the values passed in by reference into local vars, do the verification, do any other work, and then, when it comes time to kill the thread or whatever, the last line of code before the thread killing is to verify that the data passed into the function matches the copy made earlier in the function. Presumably the copies would be put into some protected memory that the hack can't read or modify.

It's not a complete fix; there is still a tiny window of opportunity, but it's much smaller than before.

This sounds like a race condition with parameter checking on system calls. Why does hooking matter, in this case?

How does Unix resolve this, so I can't change the pid given to kill() after the kernel has validated me? Does it get copied into a kernel buffer? Is that why hooked functions in Windows are vulnerable, because they aren't able to copy the arguments to kernel memory before validation?

In theory they could copy the arguments first. But in practice they don't because, as I write in the article, this causes considerable problems in parameter validation.

The kernel--not unreasonably--treats parameters that lie in kernel mode differently from those in user mode. If hooks were to copy memory to kernel mode, they would have to perform the extensive verification themselves.