Posted
by
timothy
on Saturday February 25, 2012 @07:33PM
from the sounds-spicy dept.

An anonymous reader writes "Communications of the ACM is carrying two articles promoting the Capsicum security model developed by Robert Watson (FreeBSD — Cambridge) and Ben Laurie (Apache/OpenSSL, ChromeOS — Google) for thin-client operating systems such as ChromeOS. They demonstrate how Chrome web browser sandboxing using Capsicum is not only stronger, but also requires only 100 lines of code, vs 22,000 lines of code on Windows! FreeBSD 9.0 shipped with experimental Capsicum support, OpenBSD has patches, and Google has developed a Linux prototype." While the ACM's stories are both paywalled, the Capsicum project itself has quite a bit of information online in the form of various papers and a video, as well as links to (BSD-licensed) code and to various subprojects.

Did you know it's kind of pointless, as ChromeOS is a giant fail? Oh, while I'm sure the OS is just fine if it's anything like Android, the problem is that the ODMs slapped it on underpowered hardware and then slapped a "ZOMFG! What are they thinking?" price tag on it! Hell, they wanted nearly $550 for the Samsung? It's a fricking Atom! You could buy either a full-size laptop OR a MUCH nicer netbook for much less and actually have more functionality.

Did you know that ChromeOS (a Linux-based OS) is completely irrelevant in an article about a FreeBSD feature that is now used in Chromium (a web browser)? The only vague relevance is that one of the people mentioned in TFS works on both the Chrome / Chromium and ChromeOS projects. But don't let that get in the way of a good anti-Google rant.

Actually, I was originally FOR ChromeOS, as I thought it was a brilliant idea for clueless home users: an OS that would protect them by basically running everything on the server and thus get rid of local exploits. As for Chrome, why should we care exactly? Unless you are running XP (or Linux), you have low-rights mode, which by default has lower permissions than users, which makes it pretty damned hard to infect a machine anyway. I know, as I tried to infect a machine that I planned to wipe anyway and went to eve

halt and reboot, in Unix (and also in Linux in the past), would immediately halt or reboot. No informing users, no killing processes, etc., just HALT. Shutdown, on the other hand, informed users, first sent SIGTERM, then SIGKILL, etc. It was the "friendly" choice, and gave processes a chance to exit gracefully and save anything they should.

Since so many users were using halt and shutdown -h as if they were the same, most Linux distros have adapted the behaviour to what those users (who had not read the manpage) expected.

So, we have our first solid metric: it's 220 times as hard to make Windows secure as it is for BSD or Linux.

BSD or Linux? Check the paper [cam.ac.uk]. It took 100 lines of code with Capsicum, 200 with SELinux, 605 with Linux-with-chroot, 11,301 with Linux-with-seccomp and 22,350 with Windows. So (with your metric) it is somewhere between 2 and 10 times as hard to make Windows as secure as Linux, and between 2 and 10 times as hard to make Linux as secure as FreeBSD. And the best score for Linux - the SELinux sandbox - requires enabling SELinux, which is a massive blob of code and has introduced several of its own security holes in the last couple of years, while Capsicum is much simpler and easier to audit.

See, to post your fact-filled response, you actually had to RTFA, and you had to take my silly post seriously. To post my silly joke, I only had to read the Slashdot summary. I win! :)

BTW, check your math. It's between 2 and 100 times for Win v Linux, not between 2 and 10. 22,350 is more than 100 times greater than 200. Likewise, it's between 2 and 100 with Linux v BSD. Also, chroot is part of GNU coreutils, so with a stock, vanilla GNU/Linux system, the answer seems to be 605, which means the normal,

It may sound that way, but it doesn't read that way. Perhaps if you stopped listening to articles and read the written words you'd know what they were about.

Specifically, Capsicum is a Unix (and therefore heavily C- and process-based) framework for sandboxing applications. Android applications, on the other hand, are written in Java and executed on the Dalvik VM. The "process" model is completely different from that of Unix. C applications and modules in Android can only use and link against the NDK, which doesn't expose any operating system interfaces at all. So, again, Capsicum is useless.

Capsicum also debuted, like, years ago. I doubt it will pick up steam because the necessary underpinnings will never be adopted in the Linux kernel. For one thing, anything which comes from FreeBSD always has to be re-engineered, and usually poorly. It hardly matters that the Capsicum researchers chose FreeBSD as their test bed for probably arbitrary reasons. What matters is that FreeBSD has now infected it.

Second, there are two interest groups in the Linux community that dictate security frameworks: the SELinux people and the anti-SELinux people. The anti-SELinux folk are already wedded to a host of alternatives. Capsicum will have a cold reception.

Here is what BSD magazine described as the Capsicum implementation in FreeBSD:

Capsicum is a lightweight framework which extends a POSIX UNIX kernel to support new security capabilities and adds a userland sandbox API. It was originally developed as a collaboration between the University of Cambridge Computer Laboratory and Google, sponsored by a grant from Google, with FreeBSD as the prototype platform and Chromium as the prototype application. FreeBSD 9.0 provides kernel support as an experimental feature for researchers and early adopters. Application support will follow in a later FreeBSD release and there are plans to provide some initial Capsicum-protected applications in FreeBSD 9.1.

Traditional access control frameworks are designed to protect users from each other through the use of permissions and mandatory access control policies. However, they cannot protect the user when an application, such as a web browser, processes many potentially malicious inputs, such as HTML, scripting languages, and untrusted images. Capsicum provides application developers fine-grained control over files and network sockets to provide privilege separation within an application, with minimal code changes. In other words, it provides application compartmentalisation, allowing the application itself to provide many different sandboxes to contain its various elements. As an example, each tab in the Chromium browser has its own sandbox; it is also possible to contain each image in its own sandbox. Creating sandboxes under Capsicum does not require privilege, a key problem with current UNIX sandbox approaches.

As an example, the insecure tcpdump application can be sandboxed with Capsicum in about 10 lines of code and the Chromium web browser can be sandboxed in about 100 lines of code. capsicum(4) provides an overview of the available system calls. More information, including links to technical publications, projects, and a mailing list, can be found at the Capsicum website [cam.ac.uk].

So stale it got imported into the base system and kernel of the newest release of FreeBSD.

Besides, they've proven the system works, what else is really needed? It will take time to change userland utilities to use it, and only at that point will there perhaps be a need to add more capabilities for use cases that may not have been thought of. As it is, I'd be hard pressed to think of a program more complicated than a web browser (network, disk, IPC, and UI access all needed in varying degrees).

Looks like their web site got updated after this article got posted: http://www.cl.cam.ac.uk/research/security/capsicum/ -- they have working group slides from October 2011, and news flashes from both January and February 2012, including the Communications of the ACM articles, which are definitely from 2012! However, it sounds like the hard part is yet to come -- getting APIs into operating systems is the beginning, not the end, of the story.

Disclaimer: I am a FreeBSD developer, and was visiting cl.cam.uk last week.

Capsicum is very much under active development. It's being used in Cambridge in several projects, funded by DARPA and Google. It is no longer developed on github because it is now merged upstream into FreeBSD. As TFS said, it is part of FreeBSD 9, and the core FreeBSD utilities are slowly being modified to use it (it's easy to incrementally deploy capsicum). If you want up to date documentation, check the man pages.

C applications and modules in Android can only use and link against the NDK, which doesn't expose any operating system interfaces at all.

I doubt it will pick up steam because the necessary underpinnings will never be adopted in the Linux kernel.

And, of course, a good concept with an incompatible implementation could never, ever be reimplemented to work on a different operating system or programming language.

The point is that if you are programming in Java, you can offer arbitrary security models to applications running inside the VM, without the need for any special operating system support. The hard part is enriching the security model in a useful and backwards-compatible way for applications that run natively on the hardware, which is what Capsicum does.

The point is that if you are programming in Java, you can offer arbitrary security models to applications running inside the VM, without the need for any special operating system support

This is true if and only if your JVM is 100% bug free. Do a CVE search for JVM, Flash, or JavaScript to see how likely that is. With Capsicum, the JVM can restrict itself to the capabilities that the Java code should have, so even if the VM itself is compromised, the Java code can't escape from the sandbox.

Android applications, on the other hand, are written in Java and executed on the Dalvik VM. The "process" model is completely different from that of Unix.

First sentence is true, second one isn't. To wit, Android Activity classes run under Dalvik, which itself runs as an ordinary Linux process. Moreover, Android makes extensive use of the Linux process model as part of its security system.

You don't escape the Linux process model simply by wrapping a large application framework like Android around your code. The only way to escape the Linux process model is to not run as a Linux process e.g. run in kernel space.

For one thing, anything which comes from FreeBSD always has to be re-engineered, and usually poorly. It hardly matters that the Capsicum researchers chose FreeBSD as their test bed for probably arbitrary reasons. What matters is that FreeBSD has now infected it.

God forbid they release and modify truly free code that can be taken, further modified, and then sold without risk of litigation from the original author*
If anything, unless I missed the sarcasm, I think it was wise not to infect the project with GPL, or to taint it early by trying to make it compatible with everything maintained separately rather than as a single released package that is FreeBSD.
The poor re-engineering is what makes... Were you trying to be flamebait there?!

Android applications get almost no shielding from the OS and filesystem. Security separation is based on UID and file system permissions. Since most apps are seriously lacking in file system permissions (app developers just turn on permissions for everyone, so their app works) and things like immutable files and ACLs aren't used, in practice Android is about as safe as an unpatched Windows ME machine directly connected to the Internet (slightly exaggerated for theatrical purposes). I wouldn't trust my comp

How does something like this get modded up? OP knows exactly two things here: jack and shit.

First, any trivial amount of searching would reveal that Robert Watson, author of Capsicum, is pretty much the FreeBSD project lead, and has been for a very long time. His reasons weren't arbitrary; this is a technology deliberately designed for FreeBSD. This is also not the same as SELinux. Robert Watson already wrote that 10 years ago when he worked on TrustedBSD. This is application-level sandboxing, not system-level.

In UNIX, everything that interacts with anything outside of the process goes via file descriptors. Capsicum provides special file descriptors with capabilities. When you enter capability mode, the kernel no longer allows you to create new arbitrary file descriptors. This means you can't create new sockets, you can't touch the filesystem, and you can't touch any devices. You are completely isolated unless some other process passes you a file descriptor or you create one via a set of special rights. For example, if you have the correct permission, you can use openat() to create a new file in a directory for which you have a descriptor. This allows you to, for example, set up a sandbox where an application can store files in a per-application location and can use a temporary directory. If it wants to open a socket, it has to ask another process. If it wants to open other files, it has to ask another process. The typical way of handling the second is to have a file-chooser application that allows the user to select files and then passes the rights to access them into the sandbox.

Then I don't understand the question. Capsicum is a kernel extension. It checks capability rights on file descriptors in the kernel. Nothing stops the process from making system calls - that's the point. Capsicum just stops certain system calls from doing anything...

I think the question is, why would it be "useless" for apps on Android (assuming it was implemented in Linux), if the virtual machine itself is subject to the same capabilities when it tries to do system calls for apps? That would seem to work fine.

Of course, the possibility exists that the AC is full of crap, but I'll defer to someone who might actually know rather than just speculating as I am.

Ah, I misunderstood the original claim. The original AC seems to think that only linking to the NDK means that you can't do anything malicious, which the second AC points out is nonsense because it can still do anything it wants by issuing system calls directly. I think Android does some chroot() stuff to ensure that NDK applications can't do this (they can only access most of the system by going via Dalvik). That said, if Android used FreeBSD, Capsicum could be used to enforce the Dalvik permissions at

It would work, yes, but it would provide no granularity, so there would be little if any added security. I suppose you *might* identify some calls that userland software of the type installed on a cell phone or tablet should *never* need to make, and disable those, but that is such a small set that other approaches likely make more sense.

It may sound that way, but it doesn't read that way. Perhaps if you stopped listening to articles and read the written words you'd know what they were about.

Specifically, Capsicum is a Unix (and therefore heavily C- and process-based) framework for sandboxing applications. Android applications, on the other hand, are written in Java and executed on the Dalvik VM. The "process" model is completely different from that of Unix. C applications and modules in Android can only use and link against the NDK, which doesn't expose any operating system interfaces at all. So, again, Capsicum is useless.

Capsicum also debuted, like, years ago.

I agree this will add nothing at an app level. However if it is sufficiently lightweight and powerful in could conceivably be used at an OS level below the Dalvic JVM,

The paper goes to great lengths to point out that applications can only be as secure as the operating system lets them. In section 5 of the A Taste of Capsicum: Practical Capabilities for UNIX paper, the authors talk about how Chrome has been secured on all the major operating systems. They succinctly sum up the great lengths Google has gone to in order to make Chrome secure, then promptly shoot holes in each of the approaches. Windows gets the worst treatment (lack of capabilities), followed by Linux (complicated approaches). The authors give the best marks to FreeBSD and then to the Mac OS X MAC implementation.

The point I took away from the paper was: whoever has the most complete and easiest-to-implement sandbox (MAC, DAC) implementation to take advantage of can have the most secure applications. But at the end of the day it's still up to the developer to take advantage of those capabilities.

But at the end of the day it's still up to the developer to take advantage of those capabilities.

True, although there are some ways around this. For example, with Capsicum you can quite easily create an application launcher that opens a set of file descriptors and then execs an untrusted application. It's also relatively easy to integrate this functionality into a filesystem browser or application launcher, so that if you try to run an application that is not on a whitelist, it will run in a sandbox. This is likely to appear by default around FreeBSD 10.0.

No, it's not a bullshit metric. Minimizing the amount of work applications have to do in order to use the feature means less code that those applications' coders can fuck up, while the larger amount of code that is "under the hood" is shared, so more work can be put into making it secure.

Not really. CTSRD (pronounced 'custard') is a related project. It involves a custom MIPS-based chip and will be using Capsicum as a benchmark. The aim is to see how systems like Capsicum could be improved by adding some hardware support. Capsicum itself is largely finished as a research project, but that doesn't mean it's dead; quite the reverse. It's now shipped in the standard FreeBSD 9 install and it's being used outside the lab. There is still some Capsicum-related research going on, but this is l

Finally? Multics and others had such features before the birth of Unix. Ecclesiastes 1:10: Is there anything of which one can say, "Look! This is something new"? It was here already, long ago; it was here before our time.

On windows, notepad can save a file as foo.exe, and you have an executable.

That's a rather bad example. Go ahead, open some executable file in notepad and save it, then try to run it, what do you get? Yes, that's right, it's totally corrupt. I do actually get your point, but notepad cannot do what you claim.

Very few programs actually need the capability to generate executables (the compiler, and the application installer, for instance). Allowing any and all programs to make executables along with regular file access means that any executable can be compromised into dropping malware onto your system.

How would the system know what the application is writing to a file unless it first allows the application to write to said file? Executables are not some sort of magic thingies, they're just binary data, and if you allow an application to write binary data then you are also al

If you only mean that applications shouldn't be allowed to name their files "*.exe" then yes, that would certainly be possible, but that wouldn't solve the problem. Especially on non-Windows platforms where filename doesn't determine whether or not a file is executable.

So you control which applications can set the "executable" flag.

e.g. applications shouldn't just be able to even get a directory listing on the user's files unless the user specifically allows for that

The problem is that this means any app that wants to access any data needs to ask for permission to do so. I don't see how prompting users for permission for every app that wants permission will help. I expect most apps to require at least the ability to read and write their own files in a location the user has access to from other apps, and properly managing which files/directories which apps can access seems to me like it would become incred

The problem is that this means any app that wants to access any data needs to ask for permission to do so. I don't see how prompting users for permission for every app that wants permission will help.

Opening a standard operating-system-provided File Open/Save dialog is in most cases more than enough; the application wouldn't be told the actual path to the file or allowed to get a directory listing, but would be able to read and write the file selected in the dialog. There are exceptions, yes, but again, that is nothing that couldn't be solved.

The problem is that this means any app that wants to access any data needs to ask for permission to do so. I don't see how prompting users for permission for every app that wants permission will help.

There are basically three things that most apps need to access: shared libraries, a scratch location, and user documents. Shared library paths are recorded in the binary, and rtld can grant access to them. A scratch location is easy - every application can be granted access to /tmp/{app name} or similar. User documents are easy to pass in from another process: you invoke the standard file chooser, it runs in an external process, and it gives you a file descriptor for the selected location.

On windows, notepad can save a file as foo.exe, and you have an executable.

That's a rather bad example. Go ahead, open some executable file in notepad and save it, then try to run it, what do you get? Yes, that's right, it's totally corrupt. I do actually get your point, but notepad cannot do what you claim.

Actually, there is a trick which allows converting arbitrary machine code (.com, not .exe) to plain readable ASCII text. Even changes in whitespace or line breaks will not hinder proper execution.

I wouldn't call ChromeOS a thin-client OS, as the same Linux kernel is running there as in all popular Linux distributions, available to everyone from kernel.org.

Even though ChromeOS only offers the user a web browser (simplifying), it does not mean the operating system is for thin clients.

And even though that web browser loads all "web apps" from the network, it does not mean the system is a thin client. Otherwise I am typing this on a thin client that is really thin, as it is just 3.5 cm thick and weighs under