Security Through Boredom

Tag Archives: Linux

PulseAudio is an application used on many Linux systems to handle audio. It isn't PIE, so it's not a bad idea to restrict it. I believe Fedora ships an SELinux policy for PulseAudio, but as an Ubuntu user I'm left having to write an AppArmor profile for it. If you've been reading my blog you'll know that AppArmor is a Mandatory Access Control system used by default by Ubuntu, among other Linux distributions. Restricting programs with AppArmor limits the potential damage from vulnerabilities in those programs.
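The profile body itself didn't survive this archive, but to give a sense of the shape, here's a minimal sketch of what a PulseAudio AppArmor profile can look like. The paths, capabilities, and abstractions below are illustrative assumptions, not the exact profile:

```
# Illustrative sketch of /etc/apparmor.d/usr.bin.pulseaudio -- not the full profile.
#include <tunables/global>

/usr/bin/pulseaudio {
  #include <abstractions/base>
  #include <abstractions/audio>

  capability setuid,
  capability setgid,
  capability sys_nice,

  /usr/bin/pulseaudio mr,
  /dev/snd/* rw,
  owner @{HOME}/.pulse/ rw,
  owner @{HOME}/.pulse/** rwk,
  owner @{HOME}/.pulse-cookie rwk,
}
```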

This profile works on my 64-bit Ubuntu system. I'll keep it updated here in case something changes, but I'm watching video via Chrome just fine. It's obviously not a very strong AppArmor profile, as PulseAudio starts off running with very high rights/capabilities, but we can at least somewhat limit file access. I'm going to try to limit library access further, but for now this is something.

I’ll update this as needed, but as it is things should work smoothly. Follow me @insanitybit for consistent updates.

For years, Linux users have been finding workarounds to get Netflix running, primarily by running Windows in a virtual machine and then Netflix within that virtual machine. The reason is that Netflix will only run with DRM support, and although Linux projects exist that can handle Silverlight content, they could not recreate or bypass the DRM.

Recently there has been a major advancement. Wine, the software used to run Windows software on Linux, has a few patches that allow it to run Netflix on Linux systems. It's not perfect yet, it's a little choppy, but you can run Netflix straight from your Linux OS without having to resort to resource-heavy virtual machines.

Browsers keep 'pieces' of a webpage in what's called a cache. The cache lets them quickly pull files from the disk (which is fairly quick) instead of having to redownload them (which is slow). Your system's RAM is even faster than your disk, hundreds of times faster, and keeping a file in RAM means accessing it will be nearly instant. Browsers are going to load these files into RAM regardless, but we can speed up writes to the cache and improve privacy by keeping the entire cache in RAM from the beginning. To do so we'll create a RAM disk and then tell Chrome to use it.

Remember, your cache is deleted every time you shut down your computer if you follow this guide. It will get rebuilt the next session.

First off we're going to create a directory in /tmp/, which we'll call ccache.

mkdir /tmp/ccache

Then we need to open up /etc/rc.local. The commands in /etc/rc.local are run whenever the system starts up. Enter the following lines, which recreate the directory (remember, /tmp is cleared at boot) and mount the RAM disk at /tmp/ccache/. You'll see size=700M; that's how much RAM you're allocating to the new filesystem, and the M suffix means megabytes. You can change the size to whatever you want. I don't think anyone's going to be using much more than 300MB, but I keep some extra room in there.

mkdir -p /tmp/ccache
mount -t tmpfs -o size=700M,mode=0744 tmpfs /tmp/ccache/

We now set the permissions on /tmp/ccache (recursively) to 777. The mount above used mode 0744, which only lets root write, and the browser runs as your regular user, so it needs write access; 777 is the blunt way to get that (you could also just mount with mode=0777 in the first place).

chmod -R 777 /tmp/ccache/
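If you'd rather not rely on rc.local, the same mount can be expressed as an /etc/fstab line; a sketch, matching the size and mode above. One caveat: on systems that wipe /tmp at boot the mountpoint may not exist when fstab is processed, so rc.local can be the more reliable spot.

```
# /etc/fstab: mount a 700MB tmpfs at /tmp/ccache on every boot
tmpfs  /tmp/ccache  tmpfs  size=700M,mode=0777  0  0
```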

Now that that's all set up, we need to create a Chrome desktop shortcut. However your distro lets you do that, just drag it wherever. Right-click it, open its properties, and add the following (WordPress mangles double dashes, so you'll have to type those out):

--disk-cache-dir="/tmp/ccache/" --disk-cache-size=600000000

Now when you launch Chrome from that shortcut it’ll use the disk cache we’ve set up.
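On a freedesktop-style system, the resulting launcher ends up looking something like this sketch (the binary name google-chrome is an assumption; yours may be chromium-browser):

```
[Desktop Entry]
Type=Application
Name=Chrome (RAM cache)
Exec=google-chrome --disk-cache-dir="/tmp/ccache/" --disk-cache-size=600000000
```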

Now, in terms of privacy, what we've done is eliminate the ability of an attacker with access to your system to view your cache and see what you've been up to, *except* for the current session. Every time you shut your system down you lose the cache, so no one can see where you've been.

If you were on some dodgy site or doing something sensitive, all you have to do is restart the system and there will be no trace left (in the cache, at least).

It may or may not be worth the trouble to you. Since Chrome already maps this stuff into RAM anyway, you shouldn't expect any major performance improvement. But those who fear micro-writes to their SSD can use this to prevent wear and tear.

The Linux security model is the same as Windows and OS X: ACLs are based on users and groups. If an attacker gains access to a process running as user ID 100, the assumption is that the entire user ID is compromised. This is the model of separation. Contrast this with a security model like Android's, where every app is separate from every other app (intents being the IPC that bridges them). Android actually does use separate user accounts for security reasons as well, but the central security model is that each app gets its own set of rights and abilities, as opposed to each user or group.

Neither is 'right', and the two approaches are compatible. They keep things separated, and that's all good.

The issue comes with X. The X Window System provides a graphical user interface (GUI) for many Linux distros, including Ubuntu 12.04. X (or X11) bridges the gap between separated users and groups: a process running as user ID 100 can both send input to and receive input from a process running as user ID 0, or 5, or 50, or whichever. It breaks the model of separation.

Let's say I'm running a graphical program such as Pidgin. Pidgin is running under its own separate UID and inside an enforced AppArmor profile. I then have Xchat running as another separate user, also confined by AppArmor. I also have Firefox running under the Pidgin UID.

As an attacker I gain access to Pidgin. I'm now restricted by AppArmor and I can only access the files available to that UID. Because of AppArmor I can't touch Firefox's files, but I can interact with it through X, sending and receiving keys. I'm actually alright with this.

The issue is that I can also, from Pidgin, use X to send and receive keys to Xchat. That's not OK: whereas Firefox and Pidgin share a UID, Xchat does not, and interaction like this should not be allowed without root.

Basically, X should be split into sessions (or treated as if each user had a separate session) based on users/groups, and global hotkeys should require root. I'm not sure how feasible this is, but the idea is that a user's own processes can see its keystrokes while separate users cannot.

Until this is solved there is a massive hole in every Linux user's system: you can use grsecurity, PaX, AppArmor, whatever, and if an attacker so much as gets a shell in a process they can potentially do whatever they want. SELinux provides a potential solution, but ultimately this is a design flaw that needs to be handled at the design level.

The Wayland compositor will be available to Ubuntu users via PPA. Though it's not ready for stable release, users will be able to install it, test it out, and track the progress.

Moving from X to Wayland yields various benefits and pitfalls, and they're not really within the scope of this blog's focus. Essentially they concern Compiz and your graphical user interface.

What’s more interesting about the move to Wayland is the separation of global hotkeys.
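You can see the current problem for yourself with xinput (assuming it's installed and you're in a running X session); the device id is something you look up, not a fixed value. From any unprivileged terminal:

```shell
# Sniffing X11 input as an unprivileged user (assumes xinput and an X session).
command -v xinput >/dev/null || { echo "xinput not installed"; exit 0; }
xinput list 2>/dev/null || echo "no X display available"
# Pick your keyboard's id from the list, then run: xinput test <id>
```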

Now begin typing. You'll quickly see your keys appearing in the terminal. No matter where you type (a sudo or gksudo prompt, for some scary examples), your input will be logged. And without root.

This is because hotkeys for a single X service are global – any application can register hotkeys. An exploited process can both send and intercept all input if it’s running within the same X service.

Wayland separates hotkeys. They aren't globally registered, so one program's hotkeys should be isolated from the next. That's how I've understood it, at least, but I haven't read much; I can't really find a ton out there. If anyone has more information on this, leave it in a comment. Thanks.

This vulnerability in X is fairly major, and it's been known and demonstrated for ages. Yet there's no fix or any plan to fix it. It's a bit ridiculous. With a compromised Pidgin process (for example) I can read input to any other window. If a user opens up TrueCrypt, I get their root password as they type it into gksudo, and I get their TrueCrypt password too.

If they open up a terminal I can sniff their root password. From that point I can actually send the terminal my own input, from Pidgin, allowing me to do just about anything.

Recently I read a great blog post from Azimuth Security entitled Poking Holes in AppArmor Profiles. The piece highlights issues with currently deployed AppArmor profiles and how using Ux, Px, and Cx can lead to privilege escalation or even a complete breakout from the profile.

It's because of the points brought up in that post that I tend to audit and recreate profiles for my specific needs. Oftentimes when I read through profiles I see a lot of #includes that are too 'wide', and a lot of profile authors leaving Ux's in the profile with a note saying they'll profile that program later.

Adding a Ux or Px or Cx is not the worst possible thing; your attacker isn't suddenly in control of the system. But it gives them a huge amount of room, so you want to avoid it whenever possible. An attacker can write their payload to disk with the initially exploited program and then use the Ux transition to have the other program run it unconfined.

The story is mostly the same with Px and Cx but instead of full execution you just get potential privilege escalation (going from a very confined profile to a less confined one).
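To make the difference concrete, here's an illustrative profile fragment; the path is hypothetical, and in a real profile you'd pick exactly one execution mode per program:

```
# Hypothetical rules for a child program -- pick one mode:
/usr/bin/helper Ux,  # run unconfined: an attacker can have it run an arbitrary payload
/usr/bin/helper Px,  # run under helper's own (possibly weaker) profile
/usr/bin/helper ix,  # inherit the current profile: usually the safest choice
```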

AppArmor can be fixed, though. You can read more about that in Azimuth's blog post, because I'm too lazy to cover it myself; I just got back from my trip.

Let's also keep in mind that there are other benefits to AppArmor. Initial exploits often won't work on a program confined by AppArmor, because the exploit will try to read or write files that might allow for an information leak or give details the attacker needs to successfully exploit the program. If you aren't using Ux, none of this even matters, and personally I've made it a habit to profile what I can.

This post is dedicated to showing you how to run Pidgin under a separate user account. You can apply this to other programs as well. I'll add a bit later on setfacl and allowing shared files between user accounts.

Why Are We Doing This?

There are three main benefits to running programs in a separate user account.

1) The Linux ACL system is user/group based; therefore one user account is largely limited in its interaction with another.

2) The X11 system allows key passing between all applications in a user group. You can restrict X11 access to specific users, so for a program that doesn't need X11 access (i.e. some service) we can run it in a separate user account and prevent keylogging through X11. Pidgin needs X11 access, so it unfortunately will not benefit from this.

3) iptables can match on group. While an outbound firewall may be virtually useless for a typical system, if you were to separate each application into its own group you would essentially create an application firewall, allowing only specific groups to use specific ports. This is far better than the typical outbound firewall setup that allows all applications to use any outbound port.
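As a sketch of that third point, here's roughly what such rules look like in iptables-restore format. The group name webclient and the port choices are assumptions for illustration:

```
# Fragment of an iptables-restore file: only processes in the 'webclient'
# group may open outbound HTTP/HTTPS connections; everything else is dropped.
*filter
-A OUTPUT -m owner --gid-owner webclient -p tcp --dport 80 -j ACCEPT
-A OUTPUT -m owner --gid-owner webclient -p tcp --dport 443 -j ACCEPT
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -j DROP
COMMIT
```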

Notes

If I use <username> I'm talking about your default username. If I use <username.program> I'm talking about, in this case, username.pidgin.

If you run Pidgin as another user and someone links you to something and you click it, the browser will open up under that user. There is likely a way around this using setfacl, but I haven't gotten to that yet.

If someone sends you a file, it will end up in the other user account's folder.

There is a distinct hit to basic user convenience in exchange for a potentially more secure system. If you are not willing to take that hit, I suggest you set up a comprehensive AppArmor profile instead.

It’s quite easy to undo everything in this guide. You simply remove the user and use your old shortcut.

Let’s Get Started

The first thing we do is actually create the user. This is simple.

sudo adduser --force-badname <username.pidgin>

It doesn't have to be username.pidgin; it can be just pidgin, or it can be 'koala'. I really don't care what you name it, and neither does Linux. It's purely organizational.

We need to give Pidgin X11 access; it's a graphical program after all. Run this as your normal desktop user (the one who owns the display), no sudo needed:

xhost +SI:localuser:<username.pidgin>

If you ever want to remove that simply turn the + to a -.

This only gives access until a reboot. Anyone know how to make it permanent? Other than rc.local.

Now we create a shortcut to this new Pidgin. Open gedit and enter the following
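The shortcut's contents didn't survive here, so here's a sketch of a launcher that fits the setup above. gksudo is assumed to be installed, and <username.pidgin> is whatever you named the account:

```
[Desktop Entry]
Type=Application
Name=Pidgin (separate user)
Exec=gksudo -u <username.pidgin> pidgin
Icon=pidgin
```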

Right now there's no way to stop a program from having network access, and outbound firewall rules are basically useless.

The fact is that if I open literally any single (outbound) port on my system, an attacker can use it. Whether I have 1 port open or 1,000, if they're on my system they'll have access to it.

What I want is a way to grant network access per application, not per port. I'd love a simple firewall that just says "application X can bind port Y" instead of "only allow UDP out of port Y."

Without an application firewall, an outbound firewall is only going to stop automated attacks.

I still don't like outbound firewalls, but at least make a useful one.

The network rules in AppArmor are really terrible too. I want to be able to restrict everything with AppArmor. Chrome only needs a few specific protocols; I want to blacklist the rest. Same goes for Xchat and Pidgin. This would prevent actual attacks like NAT pinning.
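For reference, the granularity AppArmor actually offers looks roughly like this profile fragment: you can allow or deny by address family and socket type, but not by port or destination:

```
# Illustrative AppArmor network rules -- coarse-grained by design:
network inet stream,     # allow IPv4 TCP sockets
network inet dgram,      # allow IPv4 UDP sockets
deny network raw,        # no raw sockets
```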

The Valve Linux Team has started a blog dedicated to gaming on Linux. There was confirmation some time ago that games would be coming to Linux through Steam and it’s great to see them putting their full force behind this.

The truth is that this whole situation has been surrounded by rumors, and this blog's purpose is to give you a direct line to the source.

I think what’s exciting is that even non-gamers will benefit.

1) More games = more users. I know dozens of people who would switch to Linux if it weren’t for games.

2) More games = better driver support! Well, hopefully. It would be great to see GPU vendors take Linux more seriously.

So I’m excited. I won’t ever ditch Windows entirely for one reason or another but at least now I’ll have one less reason to boot into it.

The seccomp filters implemented in the 3.5 kernel (and in Ubuntu's kernel) are really cool, and I'm bored, so I want to write about them (hooray for having a blog). I'm going to explain what seccomp filters actually do at as low a level as I feel comfortable with. I'll leave some stuff out and gloss over a few other things, because either 1) I personally don't know it well enough or 2) it would take forever to explain. I want to make this as accessible as possible for readers who aren't necessarily familiar with all of the terminology.

Seccomp filters are a whitelist, written into the program by its developers, of which system calls the program may make. If the program makes a system call that hasn't been whitelisted, the kernel kills it.

What Is A System Call?

A system call is basically how a program speaks to the kernel. Programs are basically (or literally, I guess) instructions; they want to get something done. Oftentimes they have to (for performance or ease-of-use reasons) outsource that action to the kernel. They do this through a system call, something like write(). The parentheses hold your parameters, so you might have write("hello world") (not a real-world example; in reality write() takes a file descriptor and a buffer, among other things). Your program passes that to the kernel, which sees "the syscall is 'write' and the argument is 'hello world'", does its thing, and you end up writing "hello world" somewhere.
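You can watch real write() syscalls happen with strace (assuming it's installed and ptrace isn't blocked); each traced line shows the syscall name and its actual arguments:

```shell
# Trace only the write() syscalls that 'echo' makes (assumes strace is available).
command -v strace >/dev/null || { echo "strace not installed"; exit 0; }
strace -e trace=write echo "hello world" 2>&1 | grep 'write('
```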

What’s The Issue?

There are a few issues with this. The first is that the kernel is the most privileged level that software can reach (the innermost protection ring). This means kernel exploits also run at the highest level, and at that point they can do practically anything, including directly interact with your hardware. Furthermore, you can only exploit code that you can interact with, directly or indirectly. A system call is a way for a program at any privilege level to interact with the kernel; therefore it's a way for any program to attempt escalation to kernel level via an exploit.

The other issue is that there are a lot of system calls, and new ones appear over time as new kernel features land. That means new kernel attack surface, and it also means new capabilities for programs. What if I don't want my program to be able to write? Well, it has access to write(), so I'd have to find some other way to stop that, like an LSM; and there are a lot of other syscalls not so easily stopped. By whitelisting syscalls we implement absolute least privilege, meaning programs can only use the syscalls they really need.

The short answer is that abusing syscalls allows for new and unforeseen behaviors as well as potential privilege escalation. Filtering syscalls directly limits kernel attack surface and what programs can do.

Where Filters Really Help

To understand where these filters really help, I should explain the concept of least privilege. Least privilege means implementing a program so that it only has access to what it needs and nothing more. If there are files A through Z on a system and the program only ever uses A, B, and C, then it won't have access to D through Z. It may not need inter-process communication (IPC) with various programs, so the IPC may be restricted too. Maybe it shouldn't be able to execute specific files; again, limit it. The idea is to make it so that it can do only what it needs to function and nothing else.

This is one of the more important concepts in computer security. It means that if the aforementioned program gets exploited and my critical file is at E, the hacker can't get to E; they're stuck with some useless config files at A through C. And maybe there's a way to exploit program F, but again, they can't access F, so the visible attack surface is reduced.

The simplest way out of a good sandbox (one not full of holes, or in our case, letters) is usually privilege escalation, and a kernel exploit is great for that. So if the above program is exploited and I can then feed the kernel something like write(exploit code), I've made breaking out a lot simpler.

This is where seccomp filters are best used: reinforcing least privilege. They directly reduce visible kernel attack surface, thereby reinforcing any strong sandbox.

And Hopefully…

Right now Chrome, OpenSSH, and a few other programs have implemented these filters. It's not too difficult to implement them, and I'd really like to see them in more applications, especially running services. In an ideal world everything would have seccomp filters, as least privilege should be applied universally, but I'd settle for a few services like cupsd running with one. The biggest issue is that third-party libraries can have compatibility issues.
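On newer kernels (3.8 or so, if I recall correctly; treat that as an assumption) you can check whether a given process actually has a filter installed via /proc:

```shell
# Check a process's seccomp state via /proc (field present on newer kernels).
# Seccomp: 0 = disabled, 1 = strict mode, 2 = filter (BPF) mode.
grep Seccomp /proc/self/status || echo "this kernel doesn't report seccomp status"
```

Swap self for a real PID (e.g. a Chrome renderer) to check a running service.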

What I Left Out

I didn't go into libraries and APIs; I just kind of folded those ideas into the system calls themselves. For those interested in programming, you already know what an API is and you probably know what a library is.

If I got anything wrong, let me know. I'm a crap programmer and I extrapolate a lot. If you notice a gaping hole in what I'm saying, point it out (be gentle) and I'll be happy to learn something and correct it ASAP.