For the last couple of weeks I’ve been using HTTPSwitchBoard. It’s reminiscent of NoScript or RequestPolicy on Firefox, but has a wonderful and intuitive user interface. The goal of the extension is to let users control what content is loaded on their webpages. It intercepts requests for content and displays them to the user, who decides which to allow. It is the first ‘script control’ or ‘content control’ extension I have used on Chrome that has a decent user interface and isn’t totally broken. And it works – it passes various JavaScript tests.

As you can see in the above screenshot you get quite a lot of information about what a website needs. In this case I’m creating a whitelist of content for http://www.insanitybit.com and any third party content loaded onto it.

Creating the whitelist is simple, and you can get quite strict with the settings.

As you can see I’ve opened up a ‘list’ in the top left; that list determines where these rules apply. By default it applies to “*”, which means that whatever I whitelist will be whitelisted on all sites (that don’t have more specific rules). I can also do http://*.insanitybit.com, which means if there’s a forum.insanitybit.com the whitelist applies there too. Or, as in this case, I can limit the rules to http://www.insanitybit.com. This is a wonderful feature. I, for example, have globally whitelisted imgur.com, because it’s loaded on so many sites. I can then also have my Facebook rules apply only to Facebook, leaving it blocked by default on all other sites. Very simple, very powerful.
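To make the scoping concrete, here’s a toy Python sketch of how scoped rules might resolve – purely illustrative, not the extension’s actual algorithm, and the rule contents below are hypothetical:

```python
# Toy model of scoped whitelist rules: a third-party host is allowed
# if any matching scope (exact host, wildcard subdomain, or global)
# whitelists it. This is a simplification for illustration only.

RULES = {
    "*": {"imgur.com"},                            # global whitelist
    "*.facebook.com": {"facebook.com"},            # Facebook only on Facebook
    "www.insanitybit.com": {"google-analytics.com"},
}

def scopes_for(host):
    """Yield rule scopes from most to least specific for a page host."""
    yield host                                     # exact host
    parts = host.split(".")
    for i in range(1, len(parts) - 1):
        yield "*." + ".".join(parts[i:])           # wildcard subdomain scopes
    yield "*"                                      # global fallback

def allowed(page_host, third_party):
    return any(third_party in RULES.get(scope, set())
               for scope in scopes_for(page_host))
```

So `imgur.com` resolves as allowed everywhere, while `facebook.com` scripts only resolve as allowed under a `*.facebook.com` page.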

Scripting control extensions have suffered in the past because Chrome didn’t give developers access to powerful enough APIs. The developer solved this by handling all script control through Content Security Policy – injecting a modified CSP header before the page is rendered, thus disallowing JavaScript reliably.
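As a rough sketch of the idea – the directives below are illustrative, not the headers the extension actually generates:

```python
# A Content-Security-Policy header value that disallows all script
# execution. An extension can inject such a header into a response
# before the page renders, so the browser itself enforces the block.
BLOCK_ALL_SCRIPTS = "script-src 'none'"

# A whitelisting variant: scripts only from the page's own origin plus
# one trusted host (hosts here are hypothetical examples).
WHITELIST = "script-src 'self' https://cdn.example.com"

def source_allowed(policy, origin):
    """Crude check of whether a script origin appears in a script-src
    directive. Real CSP source matching is far more involved."""
    sources = policy.split()[1:]
    return "'none'" not in sources and origin in sources
```

The point is that the blocking is done declaratively by the browser’s CSP engine, not by the extension racing to cancel requests.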

I like the extension a lot – while I am not really worried at all about security on my system, I find it much faster than Adblock Plus, simple to use, and I enjoy being able to control the content on webpages.

The developer is very responsive and all of the code is on GitHub, which is wonderful.

A lot of people aren’t super into Chrome OS, but I personally think it’s a great operating system for netbooks. They’re light as hell on your resources and Chrome OS is arguably the most secure consumer operating system around.

So, why did I buy the Chromebook?

The Hardware

The Chromebook has some decent specs for the price (270 dollars after everything).

Now, this is not the most powerful device in the world. Intel really screwed up, in my opinion, when they left the AVX and AES-NI instructions out of this CPU, but it’s still not weak at all. 4GB of RAM is definitely adequate for browsing and using many apps, and you get a decent screen and an SSD.

The hardware is really quite decent for a netbook, certainly for the price (comparable Acer netbooks cost the same). Battery life is also really great – 8.5 hours advertised, and in my experience Chromebooks typically get as good or better battery life than advertised.

This is perfect for travel or going to my classes, which is 99% of the workload it’ll get.

The Software

Chrome OS is a really cool operating system. In my opinion, it’s the ideal operating system for a netbook. Whereas other operating systems will boot up taking 1GB of RAM, or more, just for the OS itself, ChromeOS (last I checked) boots with under 100MB usage. It’s a very stripped down and optimized Linux system, booting in just a few seconds. The hardware is completely dedicated to the operating system, so even though the specs aren’t very powerful, they’re not going to waste time on anything.

Chrome OS is easily the most secure operating system in terms of protecting the user from infection or exploitation. The Chrome sandbox on Linux is something I’ve written about in the past and I feel very confident in its security. As I’ve recently written about, Native Client apps, which allow for very low level and powerful programs to run on your Chromebook, are also placed into a sandbox.

On the topic of Native Client, I think it could be huge for Chromebooks. Right now many apps are glorified bookmarks – you click them, they take you to a site, and that’s it. Once Portable Native Client is released in Chrome 31 developers will have the tools to port projects that already exist over to ChromeOS with ease. LastPass has already started work on a Native Client binary plugin, and other projects can potentially be ported.

I’ll also be able to use my Chromebook to control other computers I own that run Chrome via the Chrome Remote Desktop plugin. That means that, should anything arise that my Chromebook can’t handle, I can simply control a system that I own that can handle the task.

The majority of the Chromebook usage is going to be Netflix, Google Docs, and Cloud 9 IDE, but I think I’ll have a lot of fun with it. I may at some point turn on Dev mode and start hacking at the low level stuff, but for the most part I just want a low maintenance system that I can take around with me.

For those of you who don’t know, Google’s Native Client is a way for browsers to run native code within the browser. In other words, I can write a C/C++ program (or a program in any other LLVM-supported language) and run it within the browser – pretty cool! The benefits are all over the place but, basically, ChromeOS has been largely criticized for being a ‘limited’ operating system, with apps that aren’t very powerful, and NaCl provides a way for developers to create secure and powerful applications.

But NaCl isn’t the first project to try to do this. The infamous ActiveX tried beforehand and, as we all know, totally sucked in terms of security. Will NaCl be a massive hole in an otherwise secure browser? Nope, because Google poured on the security goodness here once more. Seriously, I realize most people don’t have the monetary capabilities of Google, but they do a hell of a lot when it comes to securing products these days.

We all know by now (if you don’t, read more of my posts!) that Chrome runs in a pretty cool sandbox. On Windows sandboxing is limited and, while Chrome does an excellent job, Linux provides more tools for sandboxing that address critical issues. On Linux, even conservatively, the sandbox is very impressive. Your renderer process, the most exposed codebase, is running with no rights – it can’t interact with the kernel, it has no file access, it basically gets fonts and that’s it. It’s locked into a tight sandbox. Yet Google decided that, for NaCl, they’re going to add *yet another sandbox*, which means that all NaCl code runs within the Chrome sandbox and the NaCl sandbox. In short, the Native Client process is a PPAPI process that runs in the Chrome Renderer process, so it is limited in the same ways.

That’s pretty cool. What’s cooler is how the NaCl sandbox works (without getting into PPAPI and the proxy I’m not doing it full justice, but I’m writing this spontaneously at 3am, so oh well!).

On x86 NaCl uses a processor-specific feature called segmentation. Segmentation, something I’ve seen used by PaX (the project that invented security techniques such as ASLR), is a method for the CPU to change which areas of address space are accessible to programs, and with what rights. Unfortunately, segmentation is not supported on other architectures, and NaCl supports ARM and x86_64 as well as x86. Just like PaX found a workaround, so did Google – the implementation differs between ARM and x86_64, but the goals are the same. (The presenter in a video on NaCl also skims over this – anyone know where it’s documented? It seems like for 64-bit they just use guard pages to separate the data/code ‘segments’.)

NaCl executables are built with a toolchain that does a couple of pretty interesting things. Specific instructions are blacklisted and will simply not be emitted into the codebase. Interestingly, they ban ret – so instead of returning, you pop the return address and jmp to it. There’s also a toolchain feature having to do with instruction alignment; rather than get into the details, the point is that you can’t jump into the middle of an instruction sequence, you have to jump to its beginning. The assembly the toolchain produces follows a safer and saner memory model that removes the ability to exploit specific types of vulnerabilities.
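The alignment trick can be modeled very roughly: code is laid out in fixed-size, aligned ‘bundles’, and every computed jump target gets masked down to a bundle boundary first, so you can never land mid-instruction. A toy Python model of the masking (NaCl does this in the generated machine code, of course; 32 bytes is the x86 bundle size):

```python
BUNDLE_SIZE = 32             # x86 NaCl uses 32-byte instruction bundles
BUNDLE_MASK = ~(BUNDLE_SIZE - 1)

def mask_jump_target(addr):
    """Force an indirect jump target onto a bundle boundary, so a
    computed jump can never land in the middle of an instruction."""
    return addr & BUNDLE_MASK
```

Since the compiler never lets an instruction straddle a bundle boundary, masked jumps can only reach instruction starts the validator has already checked.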

NaCl also performs instruction validation. If it sees any blacklisted instructions it kills the process, naturally. It basically does a check, before runtime, on the file to ensure that it’s not trying to perform actions that shouldn’t be allowed (though if you use the toolchain these should never be built in anyways).
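A toy sketch of what validation of that sort might look like over a disassembled instruction stream – the real validator operates on raw machine code and enforces far more than a mnemonic blacklist:

```python
# Illustrative subset of instructions a NaCl-style validator would
# reject: returns and direct kernel-entry instructions.
BLACKLIST = {"ret", "int", "syscall", "sysenter"}

def validate(instructions):
    """Reject a module if any instruction mnemonic is blacklisted.
    Returns True if the stream passes, False otherwise."""
    return all(ins.split()[0] not in BLACKLIST for ins in instructions)
```

If validation fails, the module is never loaded at all; the check happens before any of the code runs.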

Again, all of the visible attack surface from a NaCl executable is also sandboxed. That means that even if I get out of the NaCl sandbox through the proxy interface or through the renderer I’m still stuck in what are essentially the strongest sandboxes currently implemented on consumer systems and I still need to leverage another attack to get out.

I’d love to take each specific area of the sandbox (like the ret removal) and just break down exactly how it works and how effective it is, but this was a post born of boredom and an inability to sleep. The sandbox itself is very complex, but pretty cool. I’m not quite sure how I feel about it right now but, as an extra layer, I think it’s somewhat ideal in its goals at least. We’ll see how it works out; I’m looking forward to the next Pwnium when we’ve got NaCl built in. I’d also love to see Google add a 20,000 dollar bug bounty reward for NaCl sandbox bypasses like they’ve done for broker sandbox bypasses.

I probably missed a lot of stuff, most of what I’ve read was a while ago, but I’m hoping that we get more documentation soon.

Honestly, I just wish every company had the resources to do what Google does with security. NaCl was some experimental little project hack they made, and they are able to pour massive resources into fuzzing and all sorts of stuff. Really cool.

So I got a message asking me to expand on my previous post on browser exploitation. The user wanted to know how security software such as NoScript and Sandboxie would deal with a browser exploit. I’m going to go through each one on its own and explain what an attacker would be dealing with in each case.

The scenario is that you’re running Firefox with NoScript and Firefox with Sandboxie (separately, for simplicity) and you’ve visited a malicious website where the attacker controls the entire page of content. The attacker’s goal is to exploit the browser and monetize the system.

NoScript

NoScript works in a few ways. For the purposes of this post I’ll be focusing on the scripting whitelist aspect of it, as things like HSTS/XSS won’t make a difference in our scenario.

As an attacker I’m incredibly limited by NoScript. Most exploits are going to be in the JavaScript engine or in some plugin. With NoScript I have none of that attack surface. Instead I have to resort to exploiting some other component, like a font renderer, or find a flaw in NoScript itself that allows a bypass.

This limitation is significant. I can’t even start my attack unless it’s a very specific (and less common) type. So NoScript is incredibly effective here.

If, however, I trick the user into whitelisting the site (or I have hacked an already whitelisted site) my options are much better. Now I can run Javascript, and now my exploit should work just about perfectly, as long as it doesn’t rely on XSS/CSRF.

On a whitelisted site the user is partially protected, specifically against XSS/CSRF attacks, but if I control the entire site and it is whitelisted I have enough power to exploit the browser as if it weren’t there.

Sandboxie

Sandboxie is a program designed to create a copy-on-write sandbox for programs. It emulates system services and attempts to isolate the browser as best it can. As an attacker Sandboxie doesn’t come into play until I’ve actually taken over the browser.

So, I get you to click a website, I break into your browser (see my other post on browser exploitation), and now I’m in a somewhat confined environment. Anything on the system is readable by default, giving me a massive amount of valuable information, like what programs are installed, security policies, personal documents, passwords, databases, etc. Post-exploitation becomes much easier when read access is granted so gratuitously.

As an attacker I can probably already make serious money off of this user. I have their browser info, potentially passwords or hashes, I can get personal documents, I can keylog, I can read work documents, etc. But what if I want persistence? What if I want this to be part of my new botnet? I have to get out of the sandbox.

Now I have to get out of the sandbox if I want enough rights to hook this machine up to my botnet. How do I go about doing this? Well, thanks to the read access I’ve been given, I have a ton of info on the system, which makes local exploitation much easier. I can exploit the kernel from inside the sandbox (reducing kernel attack surface on Windows is ridiculously difficult – read: not a practical approach) and break right out. Once I’m at kernel level I simply unhook Sandboxie and I own the computer; I can do whatever I want.

Depending on the sandbox configuration things can be much much easier or potentially more difficult (I see more weak policies than strong policies in my experience).

Conclusion

And there you have it. Two security programs that a few people have been asking me to discuss for some time. I’m avoiding talking about the programs themselves and their own attack surface, but if you read my posts you’ll be able to extrapolate.

I would say that NoScript adds a very significant layer of security, and should be in every Firefox user’s browser. Sandboxie is a good choice if you’re willing to set up powerful policies and start denying read access – a default install is OK though.

For a while I’ve had to keep the Restrict mprotect() option in PaX disabled because it wasn’t compatible with certain programs. It was kind of a huge pain to deal with for that reason. But I’ve finally taken the 30 seconds to just deal with it and I’ll post how.

The program that has the biggest issue with the restrictions is Unity, the program that handles your user interface on Ubuntu. So, we need to kill Unity so that we can use the paxctl program to disable mprotect restrictions.

Keep in mind that you need to enable CONFIG_PAX_PT_PAX_FLAGS in your kernel config for this.

1) Download paxctl

A simple ‘apt-get install paxctl’ is enough here.

2) Kill Unity and Xorg

This is the annoying part. Xorg just restarts every time it’s killed. So you have to run the following command:

service lightdm stop

And then hit ctrl + alt + F4.

You should now have a terminal.

3) Apply flags

Run:
paxctl -c /usr/bin/unity
paxctl -m /usr/bin/unity

Now you can reboot and your UI should work. You’ll have to do this for a few programs (like Chrome) as well.

From the Grsecurity wiki on mprotect() restrictions:

Enabling this option will prevent programs from
– changing the executable status of memory pages that were
not originally created as executable,
– making read-only executable pages writable again,
– creating executable pages from anonymous memory,
– making read-only-after-relocations (RELRO) data pages writable again.

You should say Y here to complete the protection provided by
the enforcement of non-executable pages.
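You can see what’s being restricted with a quick test: mapping anonymous memory as both writable and executable succeeds on a stock kernel, but under PaX with MPROTECT enabled the same call is refused. This snippet is a Unix-only sketch, and the ‘allowed’ branch assumes a non-PaX kernel:

```python
import mmap

# On a stock Linux kernel this W|X anonymous mapping succeeds. Under
# PaX MPROTECT it fails, because creating executable pages from
# anonymous memory is one of the forbidden operations listed above.
try:
    m = mmap.mmap(-1, 4096,
                  prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    print("W|X mapping allowed (no MPROTECT restriction)")
    m.close()
except (OSError, ValueError):
    print("W|X mapping refused (MPROTECT or a similar policy is active)")
```

This is also roughly why JIT-heavy programs like browsers trip over MPROTECT: they genuinely need to write code into memory and then execute it.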

I’ve seen a lot of reports in the last year that have been prompted by the massive password dumps on major websites. The focus of these reports has been about ‘killing passwords’ and replacing them with new technology. The thing is, passwords are actually great, and they don’t need to go anywhere.

First of all, passwords simply aren’t going anywhere. You’re not going to reinvent every website’s authentication – we can barely convince sites to stop storing passwords in plaintext, or to use something other than MD5, so you’re absolutely not going to convince anyone to change their entire authentication method from the ground up.

On top of that… there’s just nothing wrong with passwords. Passwords on their own are kind of awesome and, if used properly, well beyond most attacks. If you were to come up with a completely random 16-character password you could rest assured that for the next wonderful couple hundred million years of your life you wouldn’t have to worry about anyone bruteforcing it.

The problem is that remembering something like L10F!E4d1I4U8Nhr is difficult, and remembering a unique password for every site is even harder, given that most people have at least a dozen websites that they log into.
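The arithmetic backs this up. A 16-character password drawn randomly from the roughly 95 printable ASCII characters gives about 105 bits of entropy, and even at a trillion guesses per second (generous to the attacker) the search time is astronomical:

```python
import math

ALPHABET = 95            # printable ASCII characters
LENGTH = 16
GUESSES_PER_SEC = 1e12   # generous offline-cracking rate estimate

combinations = ALPHABET ** LENGTH
entropy_bits = LENGTH * math.log2(ALPHABET)
years_to_exhaust = combinations / GUESSES_PER_SEC / (3600 * 24 * 365)

print(f"{entropy_bits:.0f} bits, ~{years_to_exhaust:.1e} years to exhaust")
```

That works out to over a trillion years to exhaust the keyspace, which is why random passwords, not passwords themselves, are the part that actually works.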

So should we dump the password? Definitely not. We should instead move to password management systems, like LastPass, and implement two-factor auth on critical websites. This should have a very small effect on usability while having a very significant effect on security.

With a password manager like LastPass you don’t have to remember any of your passwords, so there’s no reason for you to use the same password twice, or use something easy to remember – you can very easily use 16 character random passwords for every site you visit. The only password you have to remember is your master password, and that’s the ‘point of failure’ that needs to be addressed.
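Generating such a password is trivial for software, which is exactly why a manager can do what humans can’t. A quick sketch using Python’s secrets module (a manager like LastPass has its own generator, of course):

```python
import secrets
import string

# Full printable alphabet: letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=16):
    """A cryptographically random password; no memorability required."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())
```

Every site gets a fresh one of these, and the only thing the human remembers is the master password.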

Addressing that master password security is actually not so difficult. LastPass deals with it in two ways.

1) PBKDF2 rounds make bruteforcing far less practical, with a default of 5,000 iterations and an incredibly high maximum value of 256,000. That means every password attempt costs ~5,000x as much as a single unstretched hash. You can raise this number significantly to make even weaker passwords far too expensive to bruteforce.

2) Two-Factor Authentication means that even if an attacker has compromised your password they still need access to a physical device that’s used for authentication, such as an Android device, or a piece of paper.

So bruteforcing the master password just isn’t practical anymore, if you use even a slightly strong password with PBKDF2 and 2FA.
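The key-stretching half of that is easy to see with Python’s standard library (illustrative only – LastPass runs its own client-side PBKDF2, and the salt is fixed here purely so the example is reproducible; real systems use a random per-user salt):

```python
import hashlib

password = b"correct horse battery staple"
salt = b"fixed-demo-salt"  # demo only; use a random per-user salt in practice

# Each guess now costs 5,000 chained HMAC-SHA256 computations, so a
# brute-force attack slows down by the same factor.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 5000)

# Raising the round count makes every guess proportionally slower
# (and yields a different derived key).
stronger = hashlib.pbkdf2_hmac("sha256", password, salt, 256000)
```

The derived key, not the raw master password, is what the attacker has to match – and the round count is a knob the defender controls.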

It’s dead easy to use, you can access it anywhere with an internet connection (or use the Android app, which is great), and it addresses password reuse, weak passwords, and other issues.

Of course, websites themselves should always assume the worst. They should always use PBKDF2 or bcrypt, and websites that store critical information should use 2 Factor Auth as well. But, for the users end of things, a password manager solves most issues.

So rather than scrap the most basic authentication mechanism used everywhere, just harden it. It’s not difficult.

CloudNS is a DNS host that supports a few cool security features. I’ve set it up, and it’s working for me on Ubuntu 13.04. I think its security features give it the potential to be the preferred choice for those looking for that higher level of security and privacy.

* DNSCrypt Support
We only allow connections to our service using DNSCrypt, this
provides confidentiality and message integrity to our DNS
resolver, and makes it harder for an adversary watching the
traffic of our resolver to identify the origin of a DNS query as
all the traffic is mixed together.
* DNSSEC Validation
Our server does complete trust validation of DNSSEC enabled
names, protecting you from upstream DNS poisoning attacks or
other DNS tampering.
* Namecoin resolution
Namecoin is an alternative, decentralized DNS system, that is
able to prevent domain name censorship. Our DNS server does local
namecoin resolution of .bit domain names making it an easy way to
start exploring namecoin websites.
* Hosted in Australia
Our DNS Server is hosted in Australia, making it a faster
alternative to other open public DNS resolvers for Australian
residents.
* No domain manipulation or logging
We will not tamper with any domain queries, unlike some
public providers who hijack domain resolution for domains that
fail to resolve. Our servers do not log any data from connecting
users including DNS queries and IP addresses that make
connections.

I think those are some really interesting features. For one thing, it forces DNSCrypt and validates with DNSSEC, and it appears to be the only resolver to do both of these things. And it’s also hosted outside of the US, which has its own implications for security.

So I went ahead and set up CloudNS using the following command (and setting this in rc.local) after configuring DNSCrypt from this guide. You can check Cloudns.com.au for the updated information, but as of today (Aug 8th, 2013) this command works for me.
dnscrypt-proxy --user=dnscrypt --daemonize --resolver-address=113.20.6.2:443 --provider-name=2.dnscrypt-cert.cloudns.com.au --provider-key=1971:7C1A:C550:6C09:F09B:ACB1:1AF7:C349:6425:2676:247F:B738:1C5A:243A:C1CC:89F4

So the three big improvements for me are DNSSEC, DNSCrypt, and Australia hosting.

DNSSEC

DNSSEC is an extension of DNS that aims to provide authentication and integrity of DNS results; it ensures that you know who the result is from and that no one else has tampered with it. DNS responses are authenticated but they are not encrypted, so DNSSEC does not prevent someone between you and the resolver from viewing the request.

DNSCrypt

DNSCrypt encrypts DNS requests, providing confidentiality: an attacker between you and the resolver cannot view the traffic between you and your DNS resolver.

Stacking DNSSEC and DNSCrypt works out very well, as you end up covering your bases and achieving confidentiality, integrity, and authentication.

Hosting In Australia

While I’m not particularly familiar with Australia’s laws, hosting outside of the US definitely provides a bit more peace of mind. Just yesterday we learned that Lavabit (the email provider chosen by Edward Snowden) has shut down due to the US government trying to compromise their ability to protect their users. The truth is that hosting in the US just makes a service less trustworthy at this point, and hosting outside is a big plus. This, combined with Namecoin and their pledge to not log, is really somewhat comforting.

So, while I can’t absolutely recommend it at this point (I haven’t been using it long enough) I think there’s a lot of potential here.

I read a lot of “If you’re smart you’ll be fine” posts on the internet about information security. “Just don’t go to shady websites” and the like. This is a really common attitude, even (or especially) among those with backgrounds in security. But it’s really just not the truth anymore, as has been demonstrated time and time again. Sophos reports have shown that the majority of attacks go through hacked legitimate websites, and Google’s malware transparency reports have shown the same thing.

Recently Ubuntuforums.org was hacked, and I feel like it’s just the pinnacle of “being smart doesn’t do shit for you”. I post, on occasion, on the ubuntu forums to give security advice and whatnot. There are some really smart people there, people with certifications in security, and who do this sort of thing for a living. These are not stupid people, they are definitely more informed than your average user. But they visited ubuntuforums.org. And for six days that website was under the control of an attacker, and for six days that attacker had the opportunity to put up an exploit page, knowing full well that everyone was running Linux.

The attacker did not do this, he pulled passwords and emails, and as far as we know that’s all. But being “smart” didn’t stop anyone from visiting a website that was under the control of an attacker.

Instead of putting up a page saying “You just got hacked” he could have put up an exploit. Being smart would not have saved you, common sense would be useless.

I think people need to consider that being smart is not a strong security policy. If someone’s got a gun on you does being smart help much? Not really, you’re kinda at their mercy. Attackers are actively working against you, and it is to their benefit to do things that you can’t anticipate. Blaming people for visiting a hacked site is just as silly as blaming anyone on the ubuntu forums for visiting a webpage that they go to often.

Keep that in mind when you think that ‘average users’ must be so stupid to get infected.

Android 4.3 came out about a week ago and it brought SELinux to the operating system. Now, maybe it’s just me, but I feel this is a massive waste of resources. SELinux is going to take a very long time to get working properly (right now, I believe, the system won’t boot if you set it to enforcing), probably months, and the benefits are not significant.

SELinux is an LSM used to confine services and users, implementing Least Privilege on the system. But attacks on Android have often leveraged kernel exploits, something that SELinux simply doesn’t address. Where SELinux comes in handy is securing services, and preventing an attacker from abusing that service.
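For context, SELinux policy is a set of type-enforcement rules like the fragment below (hypothetical types and permissions, just to show the shape); anything not explicitly allowed is denied, which is what confines a compromised service:

```
# Hypothetical type-enforcement rules: a media service domain may read
# media files but nothing else, so an exploited media service cannot,
# say, open another app's database.
allow mediaserver media_data_file:file { read getattr };
neverallow mediaserver app_data_file:file *;
```

Writing and debugging a complete set of such rules for every Android service is exactly the months-long effort in question.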

So I think the real question is… how much is this hurting Android security? Because SELinux is addressing issues that aren’t so considerable, and the amount of work involved is quite high.

Given that Grsecurity/PaX have ported their main and most important features (ex: UDEREF) to ARM, I would imagine that implementing those features would have a significantly lower cost while providing a very high level of security. Numerous Grsecurity features have been ported and should work on Android, and they would make attacking both services and the kernel considerably more difficult.

Beyond that, implementing a MAC system before you harden the kernel is not the most sensible approach. Your MAC relies entirely on the kernel, so protection of the kernel should be the priority. An exploit in an SELinux service will lead to confinement, but on a weak kernel an attacker can break out easily using local kernel escalation. So it makes sense to focus on the kernel itself before you try to have it enforce policies.

Grsecurity also leverages user restrictions well, with a multitude of features (like TPE partial restriction) that apply generically to user accounts. These features would layer beautifully with Android’s own security model, which is heavily reliant on users and groups.

So while we wait for months for a working SELinux profile for Android, we could have significant advances in Android security very quickly if the focus were changed to projects like Grsecurity.

SELinux also fails to deal with Android’s other security issue – apps requesting privileges that they don’t need and shouldn’t have. For example, Angry Birds asks for GPS and all sorts of other information, but it absolutely doesn’t need that to be a playable game. OpenPDroid addresses this by allowing the user to remove arbitrary permissions from apps. SELinux does not (it works at the Linux layer, not the Java layer).

OpenPDroid is a framework that already exists. Just as with Grsecurity it would likely not take nearly as long to implement it compared to implementing SELinux.

So focusing on SELinux means less focus on projects that would take less time and provide a higher level of (more relevant) security.