Posted
by
Soulskill on Saturday July 03, 2010 @01:43PM
from the who-uses-apps-anyway dept.

eldavojohn writes "The latest versions of Microsoft Windows have some good security options available — now if only they could get their most popular third-party applications to use them. A report from Secunia takes a look at two such options — DEP and ASLR — and Brian Krebs breaks down who is using them and who is not. A security specialist noted, 'If both DEP and ASLR are correctly deployed, the ease of exploit development decreases significantly. While most Microsoft applications take full advantage of DEP and ASLR, third-party applications have yet to fully adapt to the requirements of the two mechanisms (PDF). If we also consider the increasing number of vulnerabilities discovered in third-party applications, an attacker's choice for targeting a popular third-party application rather than a Microsoft product becomes very understandable.' Among those with neither DEP nor ASLR: Apple QuickTime, Foxit Reader, Google Picasa, Java, OpenOffice.org, RealPlayer, and AOL's Winamp. While Flash Player can't implement DEP, it does have ASLR. Google Chrome is the only popular third-party application listed with stars across the board."
It's worth noting that several apps highlighted in the Secunia research paper have added support for those security options in recent patches, or are in the process of doing so. Examples include Firefox, VLC, and Foxit Reader.

Somehow I think that adding both of those options to anything Adobe makes wouldn't make an ounce of difference. They first need to patch that whole "putting features and pretty design before security" thing.

Neither ASLR nor DEP completely mitigates attacks. A buffer overrun is still a buffer overrun; these mechanisms just make it much harder to exploit. Ideally, the chance of success becomes so low that the attack is impossible for any practical purpose, but pragmatically, there are many creative ways in which a programmer can screw up even further and open the hole wider.

Then, of course, no ASLR will save you if you do downright moronic things, such as reading untrusted input potentially downloaded from the Net and passing it along unchecked.

Why should this be up to an application at all? Either you have a secure install or you don't. If you do, no application should have the authority to run outside of the rules; if you don't, you have to acknowledge it as a user and explicitly tell the OS not to enforce this.

DEP sounds similar to what SimCity did back in the DOS days: use memory after it had freed it. The funny thing is, Microsoft made sure that if Windows detected a DOS binary named SimCity doing that, it would allow it, to maintain backwards compatibility.

And I suspect this is also why DEP is made optional per program, as there may have been some lazy code written back in the day that's still in use somewhere.

DEP isn't really similar to that at all. That was a case of misusing a memory manager, which is bad behavior and can cause security holes, but doesn't really count as failing to use a security feature. DEP - Data Execution Prevention - does just what it sounds like: it prevents the data (stack and heap) of a program's memory representation from being executed. More specifically, if the instruction pointer tries to move to a page of memory that has the NX (No eXecute) bit set, the hardware raises a fault and the OS terminates the program.

The problem is, a lot of programs - especially those that execute any kind of code, such as JavaScript in Foxit or ActionScript in Flash - use executable code in data pages legitimately, and intentionally call into it. The CPU doesn't know the difference, so those programs get killed too. The OS *can* know the difference - you can set exemptions for specific apps in Windows - but adding such an exemption just turns off DEP for that program entirely.

Any application that doesn't run unless exempted from DEP should be considered seriously broken and require fixing (indeed, this is how it is for MS's own software). Any application that needs writable-executable data pages for whatever reason (JIT etc.) should use the appropriate API calls to request that the OS change page permissions from writable to executable and back as needed.

It's already been the user's choice since WinXP SP2. The deal is: 1) you cannot turn it on by default, because many apps will break; 2) most users are ignorant - they wouldn't know about the choice, understand the choice, or figure out what to do if stuff doesn't work and how to exclude apps if desirable.

You can enable DEP on Windows and still allocate executable memory. You just can't get it from malloc(). This capability is needed so rarely that it should be a pretty trivial amount of modification to get code working. It's probably not that they can't, but that they simply won't, because it's too low a priority compared to the next big shiny feature.

No, DEP only prevents execution of memory that is not marked executable. Enabling DEP marks all memory as non-executable by default, but you can use the VirtualAlloc [microsoft.com] function in Windows to allocate memory that is marked executable. This allows for the implementation of JIT compilers even with DEP turned on.

you can still call VirtualAlloc() with PAGE_EXECUTE_READWRITE as the protection parameter, and voila, you have read/write memory that's also executable.

Which is actually a very bad idea in general, for precisely the reasons DEP was introduced in the first place. Really, there's no reason why an app needs a memory page to be both writable and executable at the same time. A typical JIT generates the code once and executes it after; more advanced ones (e.g. JVM HotSpot) can periodically re-JIT stuff, but they don't do it all that often.

Consequently, the proper technique is: use VirtualAlloc with PAGE_READWRITE only, write whatever you want there, then use VirtualProtect to flip the page to PAGE_EXECUTE_READ before executing it.

"App" has been short for "application" for a long time. I'm more annoyed by people who think it's specific to the iPhone: an intranet blog at work not long ago claimed (with no iContext; it was about the progress of technology rather than anything directly Apple-related) that the "first app" appeared in 2008.

In my understanding, "application" means a piece of software with which users interact directly. "Program" means a piece of software in general, even kernels and libraries are programs. As "program" comes from a broader meaning (a set of contents/instructions, a plan) it is not limited to user interaction.

Nevertheless, I keep using the word "program" for applications. Probably because, back in the days of BASIC et al, we talked about writing "programs", and "application" was a later term I associate with the GUI era.

To me, apps are modules of code you find on smartphones. Applets are Java based pieces of code. Applications are executables made for a general purpose computer like a Windows machine, Mac, or pSeries. Programs are a catch-all, but I tend to use the word programs for code written on a full computer OS, as opposed to a smartphone.

You probably know this, but for people not using OS X or who never used NeXTSTEP: the extension of an application on OS X is ".app". Of course it has nothing to do with the .exe format; it is a self-contained directory "acting like" a single application file.

WindowMaker (GNUstep) dock applets are called .app too

More interestingly, Symbian calls them ".app" (e.g. Opera.app) internally too. J2ME applications? Called .fakeapp :) If I were a J2ME developer targeting Symbian devices, that would really make me think twice

Defeating DEP in and of itself is trivial. That's what ASLR is for. It's still technically possible to exploit an application that uses both, but it's much, much harder, and generally speaking you can't get a guarantee of success like you can with a return-to-shellcode or return-to-libc attack - the first of which DEP prevents and the second of which ASLR prevents.

Data execution prevention is a no-brainer. Unix has had that since the 1970s.

ASLR, though, is iffy. Randomizing the position of code in memory is a form of security through obscurity. If there's a bug that's exploitable with ASLR, it's a bug that can crash the program without it. It also makes debugging harder. No two crash dumps for the same bug are the same. Not even close.

What's more useful is running applications with very limited privileges. If the browser's renderer can't do much except render the single page it's supposed to be rendering, then corruption within it isn't a big deal. Firefox's approach to running plugins in a separate process is a big step forward, and the more jail-like that process becomes, the better. You really need a mandatory security model like SELinux to make this work, and Windows doesn't have that.

None, really. ASLR doesn't mean that every single instruction winds up somewhere random; it just means that when loading a file of executable code - either a program or a library - the OS places the in-memory representation at a random address. This means you can't, for example, do a return-to-libc attack by simply figuring out the address at which your target platform places its C runtime; it will instead be different on every system and every day. However, within any given binary, the relative locations of instructions are fixed.

You really need a mandatory security model like SELinux to make this work, and Windows doesn't have that.

Oh? Since Vista, Windows can run executables in "low integrity mode". When a low-integrity process is started, the security token of the process (which is inherited from the user) is stripped of all admin privileges, stripped of write access to anywhere but a designated cache area, and barred from making changes to the registry.

Basically, Windows allows a user account to be sub-divided based on the activity the account is used for. If it is a potentially Internet-facing activity, the app should use low-integrity mode.

for an app such as IE (or Chrome) to allow files to be downloaded, a separate "helper" or "broker" process must be used. [...] a lower integrity process *can not* send messages to a higher privileged process.

Furthermore, DEP sounds good, but my eyes were opened recently to return-oriented programming, which allows arbitrary exploits to run without ever modifying any executable code. And ASLR/DEP are useless when the exploits run as managed code anyway: a common attack vector ever since the first MS Word macro viruses.

ASLR, though, is iffy. Randomizing the position of code in memory is a form of security through obscurity.

Yes, and guess what? Security through obscurity works, too. It's not foolproof, for sure, but it can make it much harder to break something. Especially when, as is the case with ASLR, the "obscured" bits change every time.

The direct analogy would be passwords - they are themselves a classic example of security through obscurity (indeed, the security of a password-protected system hinges on only trusted people knowing the password, and no-one else), and the more often you change them, the more secure you are.

While DEP does prevent stack overflow types of attacks, it also complicates writing high security software. The inability to execute data means:

1. You can't run self-decrypting programs.
2. You can't alter instructions at runtime to fool debuggers.
3. You can't place keys in executable code sections at runtime, making it much easier for someone to stop your program and dump the keys out of the memory image.
DEP actually makes attacks against cryptographic software *easier* to implement.

Microsoft also added, "If only those applications would use our special memory access functions, they wouldn't go overwriting other programs' memory. There's nothing we can do at the OS level to prevent this, so it's up to application developers to do the right thing."

Do you mean, were Microsoft's bad decisions meant to be funny, or did you mean, was the executive summary of Microsoft's bad decisions highlighted at an opportune time with ironic phrasing meant to be funny?

I'd be a bit surprised if Java could take advantage of either of these mechanisms due to the nature of the dynamic compiler and class-loading, without major, major problems. MS probably had to build special mechanisms into the CLR runtime for it to work in .NET. On the other hand, Java has a reputation of being a pretty bulletproof platform in terms of the exploits that these two mechanisms are designed to protect against.

I'd be a bit surprised if Java could take advantage of either of these mechanisms due to the nature of the dynamic compiler and class-loading, without major, major problems.

It is entirely possible to take advantage of these counter-measures. I believe that Java on BSD does something like copying memory around to support the NX bit and still allow the running process to write new code. The restriction that is enforced is that a memory block cannot be *both* executable *and* writable. It is perfectly ok to write memory and then switch it to executable code.

MS probably had to build special mechanisms into the CLR runtime for it to work in .NET.


No, they just designed .NET to always execute fully compiled. Unlike Java, .NET's "intermediate code" was never intended to be interpreted at runtime. Instead, .NET JITs an assembly (dll) before executing. .NET even supports creating assemblies dynamically (no hacks) through Reflection.Emit (no need to save to files and do bytecode manipulation). A dynamic assembly is still compiled fully to machine instructions before execution begins.

I was just reading the .NET 4 help file on this topic this week, and the JIT compiler is invoked on a per-method basis. The virtual function table is used to substitute the compiled methods for the original bytecode.

One difference between .NET and Java is that .NET invokes the JIT on the "first call", whereas Java still prefers to run code using emulation until a method has been called a certain number of times, after which the JIT is invoked. You're almost right: .NET never executes anything other than compiled machine code.

You simply have to ask for memory that doesn't have the NX bit set when requesting a memory allocation.

Translation: You don't call malloc(), you use VirtualAlloc with the right flags. Then you get a block of memory back that can be executed.

Either way, with interpreted languages, there is no requirement to be able to directly execute the memory. The interpreter is the executing code, basing its execution path on what the "compiled" Java bytecode looks like. Java doesn't compile to native code until the JIT kicks in.

First of all, DEP is technically a kernel feature, or at least parts of it require kernel support. MS even wrote a completely software-based feature that tries to implement DEP on systems without the NX bit (it's not perfect, but it helps a bit).

Windows has 4 settings for DEP enforcement:

* Turn it all off (generally not used, unless you have a misbehaving driver). This option is only available if you know where to look; it's not in the UI
* Turn it on if a program opts in (most MS software does) - the default on workstation SKUs
* Turn it on for everything except programs you explicitly exclude - the default on server SKUs
* Turn it on for everything, with no opt-out allowed

...when it installs itself, in Windows, at %Userprofile%\Application Data\Google Chrome?
That is just amateur programming, and it's a real beast if you're in an Active Directory environment with Roaming Profiles, 'cause the damn software keeps getting copied to/from the server with every logon/logoff.
I understand Google might consider compliance with separating programs from their data "difficult," but the ease with which any malware can corrupt Chrome because of its lack of installation security is inexcusable.

There's nothing wrong with installing a program in the Application directory; it's pretty much the norm on Windows for per-user installations. Think of it as equivalent to ~/bin on Unix systems. Of course that doesn't fit in well for enterprise environments, but Google does provide a pack installer for managed systems, [google.com] which installs under "Program Files" and lets you disable auto-updates. And while the current version of Chrome is lacking other enterprise features, the next version will have full support for GPO configuration, Admin templates, and all the other things you'd expect in an enterprise.

As for your absurd claim that per-user installations are somehow a security vulnerability, you're going to have to provide something to back that up. I've spent about 15 years in the information security field, and I can't even get close to a rationale for that one.

There is a balance between a walled garden and complete anarchy. Right now, Windows programs are at such a poor quality level because they can get away with it. It is SOP in the Windows arena to ship alpha or beta code, call it a release, then fix it after launch, if ever. Most of the time, bugs end up with an "FNR" (fixed in next release) status.

When Vista came out and added UAC for basic security, the screaming of app developers whining about not being able to have all their code run with Administrator privs by default was unbelievable. Around that time, Apple changed architectures, and even though there was a tad of griping, it was nothing like the hand-wringing observed from the Windows camp. Similar when something changes under Linux that forces program developers to change course. Similar with drivers in Vista. I know of more than one company which shipped broken drivers deliberately and pointed the finger at Microsoft when things crashed, as opposed to actually writing production-quality code.

I'd like to see a compromise between the two extremes. First, applications that manage to pass a code quality review get a certificate. Second, have a rule that Authenticode-signed programs must adhere to some code quality guidelines; failure to do so gets the cert revoked. This way, such programs install as normal. Finally, programs that do neither wind up in a virtual machine, completely isolated from the main OS, and the windows they put up are clearly marked as coming from an untrusted application, similar to untrusted applets in Java's sandbox.

Microsoft has to both handle legacy code and keep a hand on lazy developers who will do the absolute minimum it takes to ship, even if it means ignoring every security guideline out there. This is what virtualization is for -- allow well-behaved apps, and companies who agreed to code quality standards, to install on the OS, while the legacy stuff goes to play at the kiddie table in an encapsulated VM. Of course, if someone wants to drop in a self-signed cert for their code as they develop it, or a company wants to write code in-house and have its CA trusted for code revisions, they can feel free to do so.

[Programs not signed by a commercial code review agency] wind up in a virtual machine, completely isolated from the main OS and the app windows they put up are clearly marked as coming from an untrusted application, similar to untrusted applets in Java's sandbox.

Then any program that doesn't have a commercial entity behind it would have to run in the sandbox. For example, a lot of free software [wikipedia.org] for Windows lacks Authenticode signatures because many individuals who maintain free software in their spare time don't want to incorporate ($100 or more depending on state) in order to become eligible for an Authenticode certificate and then keep the certificate up to date ($179.95/year [instantssl.com]).

This is probably a reference to this clause, found in both the Vista and Windows 7 EULAs (such as Vista Home Premium English and Windows 7 Home Premium English), available here:
microsoft [microsoft.com]
The XP SP2 EULA does not seem to contain a similar clause.

Also, the DEP setting is opt-in on workstation SKUs (your app has to say that it wants it) -- for compatibility, and opt-out for server SKUs (your app has to say that it doesn't want it) -- for security.

I have manually set it to opt-out on the Vista system I am posting this from. On compatibility issues, I once had to add a DEP exception for Parallels Workstation 2.2; otherwise starting a virtual machine with it would cause a BSoD. It was even worse in the original version 2.0, dating back to 2005, which did not support the PAE page table format at all, forcing PAE and thus NX to be completely disabled.

It's a term from the retail world. Each different version of Windows has a different SKU (hint: it's the UPC barcode, or maps to the same thing), so for each combo of 32/64-bit, Home/Business/Ultimate, and Upgrade or Full Install, there is a different SKU.

Because enforcing that every application use these would mean certain sorts of applications couldn't be written (or at least not as easily).

DEP is data execution prevention. It marks certain areas of address space as being "data only", so the processor won't execute them. While this is generally a good idea, as it prevents a hacker from constructing a NOP sled and then using an access violation bug somewhere to execute code they've stuck in memory, it also has the side effect of making self-modifying code much harder to write.

You mean despite the fact that other OSes enforce the security model on all the applications that expect to run on them? I know that under FreeBSD and Linux, applications are expected to run with the provided resources unless they're specifically run as root or similar. I'm not sure I understand why MS would allow third-party apps to do so without having the user make adjustments themselves. Ultimately this is MS's fault for allowing it in the first place.

The trouble is if Mozilla is pwned, and runs "arbitrary code of the attacker's choice", that code can do anything that user account can do, and access anything that user account can access. This is true for FreeBSD, Linux and Windows.

Just because I run a browser doesn't mean I want to allow it full access to whatever my account can access/do.

Windows Vista and Windows 7 actually sandbox IE, so in fact Windows is one up on most major Linux distros in that respect.

I've seen the default AppArmor template for Firefox on Ubuntu. 1) It's not enabled by default, and 2) even if you enable it, it doesn't really help if you want security: you have to modify the template if you want to protect all your non-browser-related files from a pwned browser instance.

That's not entirely true. I'm not as well versed in Linux as in BSD, but we've got things like securelevels, and file flags on top of that. An exploit of that fashion is not going to be able to do things to the kernel if you've got it properly configured, nor is it going to be able to make things run at boot without one's say-so.

Sandboxing helps, but Windows has to do it, because it's just way too easy for viruses to install crap to the boot sector.

Actually, it's not (and hasn't been for years). Opening a drive's boot sector (or loading kernel drivers) requires administrative privileges, and starting with Vista the default configuration is that your apps don't *have* admin privileges (I configured XP this way too, but it didn't have a nice mechanism like UAC or sudo for those times when Admin is needed - runas is a pain by comparison). NT has a very powerful security model... it's just that most users say "Give me and everything I run full permissions."

This is more like SELinux than resource restriction. UAC does its best to ensure that even admin users (nothing wrong with them for single-user PCs) have to explicitly grant privilege escalation (admin is more like wheel now), and in 7 it's actually tolerable to leave it on.

Unfortunately, most desktop apps don't conform to those kinds of rules in Windows any more than they do in Linux, so it doesn't enforce by default any more than SELinux is generally enabled by default.

Properly written applications will mark data areas as executable if code is going to be executed from them; it is just that many older applications aren't written properly and thus crash when DEP is enabled.

Managed execution environments, such as .NET and Java, usually recompile each method as it is executed for the first time. In a DEP environment, the JIT recompiler needs a way to tell the OS to flip parts of memory between data and executable. So if "some" argue that managed code is broken by design, I'd guess "some" work for Apple's iOS division, the only company I can think of that has explicitly banned managed code.

Managed execution environments, such as .NET and Java, usually recompile each method as it is executed for the first time. In a DEP environment, the JIT recompiler needs a way to tell the OS to flip parts of memory between data and executable.

The flags [microsoft.com] to request the newly allocated memory block to be executable have been there since WinNT 3.1.

So if "some" argue that managed code is broken by design, I'd guess "some" work for Apple's iOS division, the only company I can think of that has explicitly banned managed code.

Not really, JS is also managed code, and Apple's implementation is even a JIT.

Because enforcing that every application use these would mean certain sorts of applications couldn't be written (or at least not as easily).

Unless setting "Turn on DEP for all programs and services except those I select" doesn't do what it says (i.e., a program can still "opt-out" in code), then there are very few apps that have a problem with DEP.

I have this set on dozens of machines (both server and desktop), and have had to make exceptions for less than 5 programs, with the only really annoying one being the driver installer for a TV tuner card (since I think that means that any program named "SETUP.EXE" would be exempted). After I ran the

How would you write a JIT without the ability to turn off DEP on certain pages of memory?

The JIT engine would have to tell the operating system to mark a given range as writable, write, mark the range as executable, and finally execute. Opting in to DEP is an application's way of telling the OS that it is aware of these newly introduced DEP syscalls.

Because then 90% of old Windows apps won't run and since people only buy Windows to run Windows apps, they get pissed off.

It's bad enough with 64-bit Windows 7 where many games require hacks and workarounds or simply won't run at all in the case of old 16-bit games. I only use Windows on my laptop for games and video editing and given the incompatibility issues I'm not sure it's even worth bothering; the average older game seems about as likely to run in Wine as Windows.

16-bit Windows apps generally won't work in DOSBox, in my experience. In any case, emulating another OS on top of your current OS does not actually mean that your software will run on your current OS. It's annoying, but the simple truth is that due to the design of the processor, you can not natively run 16-bit software on 64-bit Windows.

Because the high-end part of the PC market has been all Mac for many years now, well over 90%, leaving Windows as just a low-end commodity system where nobody pays for software so it has to run stuff that's 10 years old. Because there is no incentive for the authors of Java or QuickTime to fix Microsoft's problems for them. Most Windows users are still on XP and don't even have these features.

No, for most applications it wouldn't have much impact on the code base to implement these changes, especially compared to the other changes in GUI, Networking, IPC, and other system libraries that they already have to maintain.

The two features are both about preventing memory access errors from turning into exploits. The only apps that need to be changed before enabling DEP are ones that do some sort of JIT compilation of code into data memory and then execute it - and even these apps can enable DEP if they allocate memory for this compiled code using a Windows-specific API that marks it as executable. The only apps that will run into problems with ASLR are those that hardcode memory locations. No one should be doing this, and a cross-platform app definitely won't be.

So it isn't a big deal for cross-platform applications, they probably just haven't spent the time to investigate all the ins and outs of MS's features, since they aren't native to that platform. I know I haven't on my in-house applications; I probably should.

Also I should add that Linux, OS X, and other operating systems have these same features under different names, so any work required to clean up the code to meet the standards required to enable them would be beneficial to all the platforms. Only a small amount of platform specific code would be needed to enable the features on each platform.

So, basically run your own malloc function that, in turn, detects the OS and uses the required API?

Even simpler. Since we're talking about native code here, you have to compile it separately on each platform - and, on each, you compile against the version of the library that wraps the native OS API. So there's no "detection" to speak of; it's just a thin wrapper (and if you do link-time optimization, it may even be stripped out completely in the output binary).

If it's that simple, why hasn't it been done yet?

It has been done. Thing is, most applications which are written as cross-platform to begin with usually don't have any p

Not to mention that all of these features are themselves cross-platform too. Linux has had NX support since 2.6.8, released right around XP SP2 (August 2004), for example; it was just that most distros were not enabling it because they defaulted to non-PAE kernels. What made it worse was that Intel made the mistake of releasing Pentium Ms without PAE in 2003 and 2004. They finally had to add PAE in order to add NX to the Pentium M, which was done at the beginning of 2005, but by then it was too late. Mandriva tried to default to PAE kernels back in 2005, but was forced to back off after that mistake was discovered. Ultimately, Ubuntu and Fedora added auto-detection to their installers last year, finally installing a PAE (and thus NX-capable) kernel on capable processors.

Depends on what the specifics of the code are. That's usually the responsibility of a library to deal with; you can also use ifdefs in languages like C if you have to, but generally speaking, ideal cross-platform code will segregate platform-specific code from the rest.

Apple QuickTime is Windows/Mac and shares a lot of the same code between clients. VLC? Insanely multi-platform and multi-CPU. RealPlayer is almost like Firefox: they package the open-source Helix Player for different target platforms. The OS X/Linux RealPlayers are said to differ a little from the raw material, while on Windows, you know the story.

For Opera, things get really interesting. Opera's core is actually a single, amazingly portable body of pure C. The UI is tailored for each operating system and its needs, with no need to touch the core.