The most significant change is an initial implementation of a secondary sandbox based on a seccomp filter, as recently merged into Ubuntu 12.04. This secondary sandbox is pretty powerful; I'll go into more detail in a subsequent post.

For now, suffice it to say that I'm interested in testing of this new build, e.g.:

Does it compile for you? (I've added various new gcc flags, etc.)

Any runtime regressions?

Does it run ok on 64-bit Ubuntu 12.04-beta2 or newer?

This last question is key, as that is the configuration that will automatically use a seccomp filter. The astute among you will note that beta2 is not due out until tomorrow, but an apt-get dist-upgrade from beta1 will pull in the kernel you need.

Will Drewry's excellent work on seccomp filtering is the most exciting Linux security feature in a long time, and the eventual combined vsftpd sandbox that results should be a very tough nut to crack indeed.

Thursday, March 22, 2012

This year's Pwn2Own and Pwnium contests were interesting for many reasons. If you look at the results closely, there are several observations and conclusions to be made.

$60k is more than enough to encourage disclosure of full exploits

As evidenced by the Pwnium results, $60k is certainly enough to motivate researchers into disclosing full exploits, including sandbox escapes or bypasses.

There was some minor controversy on this point leading up to the competitions, culminating in this post from ZDI. Unfortunately, the post was a little strong in its statements, including "In fact, we don't believe that even the entirety of the $105,000 we are offering would be considered an acceptable bounty", "for the $60,000 they are offering, it is incredibly unlikely that anyone will participate" and "such an exploit against Chrome will never see the light of day at CanSecWest". At least we all now have data; I don't expect ZDI to make this mistake again. Without data, it's an understandable mistake to have made.

Bad actors will find loopholes and punk you

One of the stated -- and laudable -- goals of both Pwn2Own and Pwnium is to make users safer by getting bugs fixed. As recently noted by the EFF, there are some who are not interested in getting bugs fixed. At face value, it would seem to be counterproductive for these greyhat or blackhat parties to participate.

Enter VUPEN, who somehow managed to turn up and get the best of all worlds: $60k, tons of free publicity for their dubious business model and... minimal cost. To explore the minimal cost, let's look at one of the bugs they used: a Flash bug (not Chrome as widely reported), present in Flash 11.1 but already fixed in Flash 11.2. In other words, the bug they used already had a fixed lifetime. Using such a bug enabled them to collect a large prize whilst only handing over a doomed asset in return.

Although they operated within the rules, their entry did not do much to advance user security and safety -- the fix for the bug was already in the pipeline to users. They did, however, punk $60k out of Pwn2Own and turn the whole contest into a VUPEN marketing spree.

Game theory

At the last minute at Pwn2Own, contestants Vincenzo and Willem swooped in with a Firefox exploit to collect the $30k second-place prize. The timing suggests that they were waiting to see whether their single 0-day would net them a prize. It did. We'll never know what they would have done if the $30k reward had already been sewn up by someone else, but one possibility is non-disclosure -- which wouldn't help make anyone safer.

Fixing future contests

The data collected suggests some possible structure for future contests, to ensure they bring maximal benefit to user safety:

Require full exploits, including sandbox escapes or bypasses.

Do not pay out for bugs already fixed in development releases or repositories.

Saturday, March 17, 2012

I've had cause to be staring at memory maps recently across a variety of systems. No surprise then that some suboptimal or at least interesting ASLR quirks have come to light.

1) Partial failure of ASLR on 32-bit Fedora

My Fedora is a couple of releases behind, so I've no idea if it's been fixed. It seems that the desire to pack all the shared libraries into virtual addresses of the form 0x00nnnnnn has a catastrophic failure mode when there are too many libraries: something always ends up at 0x00110000. You can see it with repeated invocations of ldd /opt/google/chrome/chrome | grep 0x0011.

Which exact library is placed at the fixed address is random. However, any fixed address can be a real problem for ASLR. For example, in the browser context, take a bug such as Chris Rohlf's older but interesting CSS type confusion. Without a fixed address, a crash is the likely outcome. With a fixed address, the exact library mapped there could easily be fingerprinted, and its BSS section read to leak heap pointers (e.g. via singleton patterns). Bye bye to both NX and ASLR.

Aside: in the 32-bit browser context with plenty of physical memory, a Javascript-based heap spray could easily fill most of the address space, such that the attacker's dereference has a low chance of failure.

Aside #2: my guess is that this scheme is designed to prevent return-to-glibc attacks via strcpy(), by making sure that all executable addresses contain a NULL byte. I'm probably missing something, but the fact that strcpy() NULL-terminates, combined with the little-endianness of Intel, makes this not so strong.

2) Missed opportunity to use more entropy on 64-bit

If you look at the maps of a 64-bit process, you'll see that most virtual memory areas correspond to the formula 0x7fnnxxxxxxxx, where all your stuff is piled together in the xxxxxxxx and nn is random. At least nothing is in, or near, a predictable location. One way to look at how this could be better: if you emit a 4GB heap spray, you have a ~1/256 chance of guessing where it is. Using the additional 7 bits of available entropy might be useful, especially for the heap.

3) Bad mmap() randomization

Although the stack, heap and binary are placed at reasonably random locations, unhinted mmap() chunks are sort of just piled up adjacent to one another, typically in a descending-vm-address fashion. This can lead to problems where a buffer overflow crashes into a sensitive mapping -- such as a JIT mapping. (This is one reason JIT mappings have their own randomizing allocator in v8.)

In both cases, the heap doesn't have to grow very large before it runs out of room. When this happens, most heap implementations fall back to mmap() allocations, and suffer the problems of 3) above. These things, chained together with a very minor infoleak such as my cross-browser XSLT heap address leak, could in fact leak the location of the executable, leading to a full NX/ASLR bypass.

Conclusion

A 32-bit address space just isn't very big any more, compared with today's large binaries, large numbers of shared library dependencies and large heaps. It's no surprise that everything is looking a little crammed in. The good news is that there are no obvious and severe problems with the 64-bit situation, although the full available entropy isn't used. Applications (such as v8 / Chromium) can and do fix that for their most sensitive mappings themselves.