Security Through Boredom


I’ve seen a lot of reports in the last year prompted by the massive password dumps on major websites. The focus of these reports has been ‘killing passwords’ and replacing them with new technology. The thing is, passwords are actually great, and they don’t need to go anywhere.

First of all, passwords simply aren’t going anywhere. You’re not going to reinvent every website’s authentication – we can barely convince sites to stop storing passwords in plaintext, or to use something other than MD5, so you’re absolutely not going to convince anyone to change their entire authentication method from the ground up.

On top of that… there’s just nothing wrong with passwords. Passwords on their own are kind of awesome and, if used properly, well beyond the reach of most attacks. If you were to come up with a completely random 16-character password, you could rest assured that for the next couple hundred million years you wouldn’t have to worry about anyone bruteforcing it.

The problem is that remembering something like L10F!E4d1I4U8Nhr is difficult, and remembering a unique password for every site is even harder, given that most people have at least a dozen websites they log into.
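Generating one of these random passwords is the easy part; any system with OpenSSL can do it. A quick sketch (12 random bytes happen to encode to exactly 16 base64 characters):

```shell
# Generate a random 16-character password:
# 12 bytes of kernel randomness -> 16 base64 characters
openssl rand -base64 12
```

Each run prints a fresh ~96-bit password, which is well beyond any practical bruteforce. The hard part, as above, is remembering it.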

So should we dump the password? Definitely not. We should instead move to password management systems, like LastPass, and implement two-factor auth on critical websites. This should have a very small effect on usability while having a very significant effect on security.

With a password manager like LastPass you don’t have to remember any of your passwords, so there’s no reason for you to use the same password twice, or use something easy to remember – you can very easily use 16 character random passwords for every site you visit. The only password you have to remember is your master password, and that’s the ‘point of failure’ that needs to be addressed.

Securing that master password is actually not so difficult. LastPass deals with it in two ways.

1) PBKDF2 rounds make bruteforcing far less practical, with a default of 5,000 iterations and a maximum of 256,000. That means every password guess takes ~5,000x as long as a single hash computation. You can raise this number significantly to make even weaker passwords far too expensive to bruteforce.
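To get a feel for what those rounds buy you, here’s a rough sketch using OpenSSL’s enc command (the -pbkdf2 and -iter flags assume OpenSSL 1.1.1 or newer; this illustrates iteration stretching generally, not LastPass’s actual vault format):

```shell
# Encrypt a file with a key derived from a password via 256,000 PBKDF2 rounds
echo "vault contents" > vault.txt
openssl enc -aes-256-cbc -salt -pbkdf2 -iter 256000 \
    -pass pass:'my-master-password' -in vault.txt -out vault.enc

# Every decryption attempt - right or wrong - must repeat all 256,000 rounds,
# which is what slows a bruteforce to a crawl
openssl enc -d -aes-256-cbc -pbkdf2 -iter 256000 \
    -pass pass:'my-master-password' -in vault.enc -out vault.dec
```

A wrong password fails only after paying the full iteration cost, so the attacker’s guess rate collapses while a legitimate login barely notices.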

2) Two-Factor Authentication means that even if an attacker has compromised your password they still need access to a physical device that’s used for authentication, such as an Android device, or a piece of paper.

So bruteforcing the master password just isn’t practical anymore, if you use even a slightly strong password with PBKDF2 and 2FA.

It’s dead easy to use, you can access it anywhere with an internet connection (or use the Android app, which is great), and it solves password reuse, weak passwords, and other issues.

Of course, websites themselves should always assume the worst. They should always use PBKDF2 or bcrypt, and websites that store critical information should offer two-factor auth as well. But on the user’s end of things, a password manager solves most issues.

So rather than scrap the most basic authentication mechanism used everywhere, just harden it. It’s not difficult.

CloudNS is a DNS host that supports a few cool security features. I’ve set it up, and it’s working for me on Ubuntu 13.04. I think its security features give it the potential to be the preferred choice for those looking for a higher level of security and privacy.

* DNSCrypt Support
We only allow connections to our service using DNSCrypt. This
provides confidentiality and message integrity to our DNS
resolver, and makes it harder for an adversary watching the
traffic of our resolver to identify the origin of a DNS query, as
all the traffic is mixed together.
* DNSSEC Validation
Our server does complete trust validation of DNSSEC enabled
names, protecting you from upstream DNS poisoning attacks or
other DNS tampering.
* Namecoin resolution
Namecoin is an alternative, decentralized DNS system that is
able to prevent domain name censorship. Our DNS server does local
Namecoin resolution of .bit domain names, making it an easy way to
start exploring Namecoin websites.
* Hosted in Australia
Our DNS Server is hosted in Australia, making it a faster
alternative to other open public DNS resolvers for Australian
residents.
* No domain manipulation or logging
We will not tamper with any domain queries, unlike some
public providers who hijack domain resolution for domains that
fail to resolve. Our servers do not log any data from connecting
users, including DNS queries and the IP addresses that make
connections.

I think those are some really interesting features. For one thing, it forces DNSCrypt and validates with DNSSEC, and it appears to be the only resolver to do both of these things. And it’s also hosted outside of the US, which has its own implications for security.

So I went ahead and set up CloudNS using the following command (and setting this in rc.local) after configuring DNSCrypt from this guide. You can check Cloudns.com.au for the updated information, but as of today (Aug 8th, 2013) this command works for me.
dnscrypt-proxy --user=dnscrypt --daemonize --resolver-address=113.20.6.2:443 --provider-name=2.dnscrypt-cert.cloudns.com.au --provider-key=1971:7C1A:C550:6C09:F09B:ACB1:1AF7:C349:6425:2676:247F:B738:1C5A:243A:C1CC:89F4

So the three big improvements for me are DNSSEC, DNSCrypt, and Australia hosting.

DNSSEC

DNSSEC is an extension of DNS that aims to provide authentication and integrity of DNS results; it ensures that you know who the result is from and that no one else has tampered with it. DNS responses are authenticated but they are not encrypted, so DNSSEC does not prevent someone between you and the resolver from viewing the request.

DNSCrypt

DNSCrypt provides encryption of DNS requests, which provides confidentiality of the requests, meaning that an attacker between you and the resolver can not view the traffic between you and your DNS resolver.

Stacking DNSSEC and DNSCrypt works out very well, as you end up covering your bases and achieving confidentiality, integrity, and authentication.

Hosting In Australia

While I’m not particularly familiar with Australia’s laws, hosting outside of the US definitely provides a bit more peace of mind. Just yesterday we learned that Lavabit (the email provider chosen by Edward Snowden) has shut down due to the US government trying to compromise their ability to protect their users. The truth is that hosting in the US just makes a service less trustworthy at this point, and hosting outside is a big plus. This, combined with Namecoin and their pledge to not log, is really somewhat comforting.

So, while I can’t absolutely recommend it at this point (I haven’t been using it long enough) I think there’s a lot of potential here.

I read a lot of “If you’re smart you’ll be fine” posts on the internet about information security. “Just don’t go to shady websites” and the like. This is a really common attitude, even (or especially) among those with backgrounds in security. But it’s really just not the truth anymore, as has been demonstrated time and time again. Sophos reports have shown that the majority of attacks go through hacked legitimate websites, and Google’s malware transparency reports have shown the same thing.

Recently Ubuntuforums.org was hacked, and I feel like it’s just the pinnacle of “being smart doesn’t do shit for you”. I post, on occasion, on the ubuntu forums to give security advice and whatnot. There are some really smart people there, people with certifications in security, and who do this sort of thing for a living. These are not stupid people, they are definitely more informed than your average user. But they visited ubuntuforums.org. And for six days that website was under the control of an attacker, and for six days that attacker had the opportunity to put up an exploit page, knowing full well that everyone was running Linux.

The attacker did not do this, he pulled passwords and emails, and as far as we know that’s all. But being “smart” didn’t stop anyone from visiting a website that was under the control of an attacker.

Instead of putting up a page saying “You just got hacked” he could have put up an exploit. Being smart would not have saved you; common sense would have been useless.

I think people need to consider that being smart is not a strong security policy. If someone’s got a gun on you does being smart help much? Not really, you’re kinda at their mercy. Attackers are actively working against you, and it is to their benefit to do things that you can’t anticipate. Blaming people for visiting a hacked site is just as silly as blaming anyone on the ubuntu forums for visiting a webpage that they go to often.

Keep that in mind when you think that ‘average users’ must be so stupid to get infected.

Android 4.3 came out about a week ago and it’s brought SELinux to the operating system. Now, maybe it’s just me, but I feel this is a massive waste of resources. SELinux is going to take a very long time to get working properly (right now if you set it to enforce the system won’t boot, I believe), probably months, and the benefits are not significant.

SELinux is an LSM used to confine services and users, implementing Least Privilege on the system. But attacks on Android have often leveraged kernel exploits, something that SELinux simply doesn’t address. Where SELinux comes in handy is securing services, and preventing an attacker from abusing that service.

So I think the real question is… how much is this hurting Android security? SELinux addresses issues that aren’t that significant, and the amount of work required is quite high.

Given that Grsecurity/PaX have ported their main and most important features (e.g. UDEREF) to ARM, I would imagine that implementing those features would cost significantly less while providing a very high level of security. Numerous Grsecurity features have been ported and should work on Android, and they would make attacking both services and the kernel considerably more difficult.

Beyond that, implementing a MAC system before you harden the kernel is not the most sensible approach. Your MAC relies entirely on the kernel, so protection of the kernel should be the priority. An exploit in an SELinux-confined service leads to confinement, but on a weak kernel an attacker can break out easily using a local kernel escalation. So it makes sense to focus on the kernel itself before you try to have it enforce policies.

Grsecurity also leverages user restrictions well, with a multitude of features (like TPE partial restriction) that apply generically to user accounts. These features would layer beautifully with Android’s own security model, which is heavily reliant on users and groups.

So while we wait for months for a working SELinux profile for Android, we could have significant advances in Android security very quickly if the focus were changed to projects like Grsecurity.

SELinux also fails to deal with Android’s other security issue – apps requesting privileges that they don’t need, and shouldn’t have. For example, Angry Birds asks for GPS and all sorts of other information but you absolutely don’t need that to play the game. OpenPDroid addresses this by allowing the user to remove arbitrary permissions from apps. SELinux does not address this (as it works at the Linux layer, not the Java layer).

OpenPDroid is a framework that already exists. Just as with Grsecurity it would likely not take nearly as long to implement it compared to implementing SELinux.

So focusing on SELinux means less focus on projects that would take less time and provide a higher level of (more relevant) security.

A couple of months ago I wrote a post about antivirus as attack surface. The benefit for an attacker going after an antivirus is that they bypass a security mechanism and typically gain administrative privileges on the machine. Well, recently a new exploit tool came out, and it’s targeting McAfee ePO.

The tool allows an attacker on the local network to add rogue systems to an enterprise ePO server, steal domain credentials if they are cached within ePO, upload files to the ePO server, and execute commands on the ePO server as well as any systems managed by ePO.

Basically, if someone gets onto your network they can control any systems under the protection of the ePolicy Orchestrator. For an enterprise this is a huge deal, as you can have hundreds of systems under ePO “protection”, and therefore a compromise of the ePO means the attacker controls the fleet.

With antivirus software injecting itself into all sorts of processes, allowing remote endpoint management, and more, it makes for a very tempting attack surface. In an enterprise environment where you’re dealing with so-called ‘APT’, this is exactly the type of attack that would be used: full compromise of the majority of systems, allowing for a massive number of credentials to be stolen, more successful phishing attacks, etc. It is not hard to imagine an entire network being controlled in relatively little time with this type of attack.

This is just one example of an attack on security software, and it will definitely not be the last we see.

So I’m sick today, and very very bored. Nothing out there is interesting right now so I have nothing to really write about. So I’m just going to write about how I secure my system, and where the threats are on a system configured this way. Very bored.

I run Ubuntu 13.04.

Attack Surface

The attack surface on this system is fairly easy to determine – where is data coming in from? Keep in mind that my system is not default; I remove and disable many programs/services that could otherwise expose me to local attacks.

Mainly, these are the only programs that interact with the outside world:

1) DNSCrypt

2) Chrome

3) Pidgin

4) DHClient

5) Updates (apt)

6) Kernel *

These are the programs that interact with the internet, and that’s where the initial attack is most likely to begin – physically local attacks are less important to me, as that would entail someone breaking into my house, and then I have other shit to worry about.

* The kernel is only exposed to the outside world through the TCP/IP stack, and in this case I’m only assessing it as a threat for local attacks.

DNSCrypt

DNSCrypt is a program used to encrypt your DNS requests. It’s a cool program, and while it interacts with the internet, it’s pretty secure. It chroots itself and uses secure compiler flags. I have it running in an Apparmor profile that limits its rights significantly, I have iptables rules set up to restrict its internet access, and the UID it runs as has few rights and TPE restrictions.

The program hasn’t seen that much exposure as far as I know, so there could still be some vulnerabilities in its attack surface. Thankfully, because DNSCrypt’s developer took the time to secure it, attacks on it would be very difficult – an attacker may compromise DNSCrypt, but they are left in a severely limited environment, and the easiest way out would likely be a local kernel exploit (more on this in the kernel section).

I would deem DNSCrypt very difficult to attack.

Chrome

My browser of choice is Google Chrome Beta, which, like DNSCrypt, also makes use of modern mitigation techniques and chroots. Chrome has a really powerful sandbox, making use of many of the security features of Linux including namespaces, chroots, and seccomp filters.

Chrome takes in untrusted input all of the time. Loading this page I sent out input, and I got unencrypted input back – I have no real way to determine whether an attacker intercepted the transmission and sent me back an exploit page and this is the case every time I load an unencrypted page.

So Chrome definitely deals with a lot of hostile code very directly. But the areas of code that are most exposed – the Javascript renderer, Flash plugin, etc, are all run within very restricted environments. On top of that Javascript is limited by TLD and plugins are all click to play. I also run Chrome in an Apparmor profile.

So while an attacker has a lot of opportunity to exploit Chrome they will have a very difficult time breaking out of the sandbox. In a typical sandbox the path of least resistance is often exploiting the kernel to bypass the restrictions but Chrome limits kernel exposure with seccomp filters making that much harder.

I would deem Chrome incredibly difficult to attack.

Pidgin

Pidgin is an IM client, and it also takes in untrusted input by default. It makes use of modern mitigation techniques as well, but doesn’t use any sort of sandbox like the above programs. Pidgin messages are not encrypted, so an attacker could potentially MITM them, substituting their own messages and potentially delivering an exploit. They could also simply instant message me through a friend’s compromised account, or even through a new account if I’m convinced to ‘allow’ the message (I almost always do if the name isn’t blatantly spam).

I run Pidgin in an apparmor profile, so there is a sandbox around it, but it’s less than ideal as a lot of exceptions must be made to allow it to work properly. An attacker could potentially abuse Px, Cx rules to gain rights.

Pidgin is likely the easiest program to attack on my system, though that isn’t saying much as an attacker still needs to do quite a bit of work.

DHClient

DHClient is responsible for assigning the computer an IP address based on the network configuration. It only listens to the local network, so it’s not exposed outside of my network. Because of this an attacker must go through another component on the network – my phone, another laptop that’s connected, or the router. They must then attempt to exploit DHClient.

DHClient also uses modern mitigation techniques, and it’s run within an Apparmor sandbox.

Because it is only exposed locally I would consider DHClient incredibly difficult to exploit.

Updates

Every time I update I take in executable files and run them as root. I don’t know of a single distro that updates over a secure connection like HTTPS, but they do all sign packages by default. The issue is that signing is not enforced (that I know of), so one application may not use a signature. If that’s the case, an attacker just replaces the application with their own, and it’s installed.

I believe every package I have installed is in fact signed, which means that an attacker would have to find a hash collision – incredibly difficult, and something only an incredibly motivated and rich attacker (as in, a country) could possibly do, and doubtful for SHA any time soon.

Attacking apt in this way is not *currently* feasible, though once there’s more work on attacking SHA it could very well be – Flame did it for MD5.

Kernel

The kernel is the core of the operating system – I’ve mentioned local privilege escalation multiple times as a way to bypass other security measures such as sandboxes. That escalation exploit is most likely to go through the kernel.

I compile my kernel with Grsecurity, which makes exploitation of kernel vulnerabilities significantly harder. There are numerous mitigations that are years ahead of what you’ll find elsewhere, so an attack on the kernel may be completely removed, or, it might take weeks instead of days to create.

Attacking a Grsecurity kernel, even after you get remote code execution in one of the above programs, is very difficult.

Where I Could Be Better

I could set up RBAC through Grsecurity, which would be an extra layer of access control.

Ubuntu could start shipping PIE so that an attacker can’t attack a root service locally as easily.

Ubuntu could start using TLS for updates.

Pidgin and DHClient could implement seccomp filters.

Conclusion

That’s pretty much it. That’s where I see my attack surface, and you get a little insight into how my own system is set up. I honestly don’t feel that I need to do most of this to stay safe, but I get very bored and using these tools is nice motivation to learn how they work in detail. So you get to see what ‘security through boredom’ truly means.

It was requested that I give a plain English explanation of how an attacker compromises a browser. I’m going to try to give a lot of detail in some areas but I will leave specific things out in order to not confuse. Hopefully by understanding how programs are exploited readers will be better equipped to choose security programs.

The first step in an attack is getting the victim to hit some kind of content that an attacker controls. This is generally achieved by either hacking a legitimate site and waiting for the user to get there, or sending the victim a link and tricking them into clicking it. Those are not the only ways, nor are they mutually exclusive.

Attacks against a browser can potentially be in the JavaScript renderer, a piece of the browser that looks at the JavaScript code on a webpage. The attacker can slip in some malformed and malicious code that may do many things. One of those things may be to overflow a buffer set up by the renderer.

By doing this an attacker can write to data outside of the buffer, and overwrite some piece of code or data that will essentially give them control over the renderer process. This initial code that runs the necessary instructions to compromise the process is the stage one shellcode. All of these instructions are essentially manipulated instructions that already exist in the process, and an attacker calls them from the legitimate program (in order to bypass DEP) using ‘return oriented programming’ (ROP). ASLR randomizes the location of these usable instructions, which forces an attacker to search for them in some manner (information leak is the most reliable, or heap spray). Once an attacker knows the location of the instructions they can chain them (ROP chain) together to call various functions. There are tools to automate ROP chain generation, which means attackers have to do a lot less work.

Attackers will now typically (though they could do it all from stage one through ROP) download their second stage shellcode into the victim browser’s address space, and this shellcode will carry out some instruction, such as creating a connection to a remote server.

At this point an attacker has full control over the renderer process (in Firefox this is the same as the browser process; in Chrome it is separated into another process) and can execute code in the context of this process. They can make calls that download and execute payloads, or they can create new threads in a separate process, or read a file, or write a file, etc. It is important to know that the attacker’s code is the renderer process – they are the same process, no new process has spawned, it’s all happening within the renderer.

If the attacker wants to they can now drop and execute their third payload (as is common), and this is where programs such as antiexecutables come in – very late in the game. Keep in mind that although an antiexecutable can potentially prevent this payload from executing, attackers have already compromised the process – they can enumerate processes in the system, hop control to another process, keylog, read/write to the file system and registry, they can even run local privilege escalation attacks long before they ever execute this final payload or touch the disk.

So as an attack continues the attacker gains more and more control over a process. Defenses that prevent attacks in stage 1 are ideal, and attacks that don’t kick in until stage 3 have to be very powerful to be effective.

One of the restrictions an attacker will come across is the space they have to fit their shellcode, which is why, oftentimes, they simply drop the initial shellcode in and have it immediately pull down and execute the file payload. But the advantage of staged shellcode that waits before downloading that final payload is obvious – an attacker can gain significant information about the system, and they can avoid forensic analysis, AV, AE, etc. But as most users make virtually no effort to secure their systems, users of such products have gotten away with it by being a just-difficult-enough target relative to the rest of the world.

It’s important to note that attacks do not have to follow this pattern. Java attacks are typically ‘sandbox escapes’: they run Java code on the system and then trick the Java Virtual Machine into running the code as a standard user, so there’s no shellcode – there’s just a malicious class file launching and running the malware directly. I didn’t go into information leaking; I really just talked about a very specific attack using a buffer overflow, and the focus was more to explain how staged shellcode works.

DNSCrypt is a tool that encrypts the DNS requests between you and the first-level DNS resolver. I have a guide for setting it up here. This guide will be about restricting the process and user account, making DNSCrypt more resilient to attack – I will continue to update this guide, as I have a few more ideas.

One of the nice features of DNSCrypt is that it actually takes security into account. I wish this weren’t something to be shocked by, but, *gasp* it actually uses compiler security flags. Specifically, it uses the following flags:

-fPIC -fPIE -fstack-protector-all -fno-strict-overflow -fwrapv **

-fPIC and -fPIE tell the compiler to create a relocatable binary, completing the implementation of ASLR. It’s a mitigation technique we rarely see used, despite it being somewhat critical and having been around for years. So right off the bat they’re doing more than most.

-fstack-protector-all (unlike the oft used -fstack-protector, which only protects functions using char arrays/strings) tells the compiler to protect every function with a stack canary. If an overflow occurs the canary may be overwritten, and the function will fail.

-fno-strict-overflow and -fwrapv are essentially the same (in other words, I don’t know the difference); they tell GCC not to make assumptions about signed overflow – basically, not to assume that overflows won’t occur. Compilers assume overflows won’t happen when they generate optimized assembly, and build optimizations on that assumption – these flags prevent that, which is safer.

So these are nice, and we like them. But DNSCrypt also does a bit more.

You can create a new DNSCrypt user with no write rights, and DNSCrypt will chroot itself into that user’s home directory and drop rights. This is great, since a chroot’d process with no ability to write is difficult to break out of. And running as a separate user means no X11 access, it gets its own home folder, and it’s generally more isolated from the system – all good things!

But it means some other stuff too. Because it does all of the above we as users can take that protection further – beyond where typical programs allow us to. I think this demonstrates what a strong security model really can do when built from the ground up.

So, on to what we can do.

First things first, we’re going to want some information on our DNSCrypt user.

Run ‘id dnscrypt’

You should get something similar to:

id dnscrypt
uid=109(dnscrypt) gid=123(dnscrypt) groups=123(dnscrypt)

We’re going to need this.

IPTables On User

Note that if you’re using UFW this may cause issues – using UFW/GUFW with iptables isn’t recommended, and your mileage may vary. To remove your UFW rules, run ‘iptables -F’.

Normally I’m not fond of outbound filtering, but because DNSCrypt separates itself into another user it’s actually not such a bad idea. It means a compromised DNSCrypt can’t evade the filter by proxying its traffic through another program running under the same account, and it means the ports we limit are limited for that user account specifically. This assumes you are running DNSCrypt under a user called ‘dnscrypt’.

So it’s a lot more worthwhile to set up outbound filtering here.

DNSCrypt should only need outbound access to port 443, with UDP. So we can restrict it to just port 443 and UDP with the following IPTables rules:
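Something along these lines should do it (a sketch; run as root, and it assumes the account is named ‘dnscrypt’, as shown by ‘id dnscrypt’ above):

```shell
# Allow the dnscrypt user outbound UDP to port 443 only
iptables -A OUTPUT -m owner --uid-owner dnscrypt -p udp --dport 443 -j ACCEPT
# Drop everything else that user tries to send
iptables -A OUTPUT -m owner --uid-owner dnscrypt -j DROP
```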

Basically, the first rule allows outbound access to the DNSCrypt user over port 443 and UDP, and the second rule denies everything. If the first rule is hit, and it passes, the second rule doesn’t have to come into play.

***

DNSCrypt is now restricted to UDP over port 443, and all processes running under the dnscrypt user are as well. If you followed the tip then no new inbound connections can be made to your system except over port 53 (you can have dnscrypt listen on another port, in which case you’ll switch that rule to whatever port that is – I have yet to figure out the details, and I’ll edit them in when I do).

Trusted Path Execution

If you care about security you’re already running Grsecurity, but if not, see my guide here.

Grsecurity has an option called Trusted Path Execution that allows us to limit a group, allowing it to execute only files that are owned by root and writable only by root – since our program doesn’t run as root, and can’t write anywhere, it won’t be able to execute anything an attacker manages to drop.

So check the TPE box and add the GID for untrusted users, in this case 123.
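If your kernel was built with grsecurity’s sysctl interface, the same thing can be expressed as sysctl settings – a sketch, with the names assuming the standard grsecurity sysctl tree (substitute your own GID for 123):

```shell
kernel.grsecurity.tpe = 1
kernel.grsecurity.tpe_gid = 123
```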

Now, this protection is somewhat superfluous – DNSCrypt shouldn’t be able to write to the filesystem, so it shouldn’t be able to execute any payloads off of the filesystem anyway – but it’s still good to have, as the protection is now implemented by the user account itself and doesn’t rely on the program dropping rights properly, or on a perfect implementation of chroot.

Chroot Restrictions

While you’re compiling your Grsecurity kernel, you can also go ahead and turn on every single chroot restriction without worry – DNSCrypt works fine with them all. DNSCrypt already can’t write to its chroot, so as far as I know there’s no known bypass as is, but you can safely enable all of these restrictions. Some of them are a bit redundant due to the aforementioned write restrictions, but a few are quite nice.

Apparmor

Apparmor is an LSM (Linux Security Module) that restricts a process. If Apparmor is the LSM used on your distribution (Ubuntu and its derivatives), you can find my profile here. Apparmor will restrict file access, which programs can be executed, which libraries can be loaded, etc. An attacker who winds up in a program confined with Apparmor must either find a flaw in Apparmor or the profile, or use a local escalation attack. If you’re using everything listed above, this is going to be a lot of work for them.

Users of other LSMs, such as SELinux, will need to build their own profiles. This shouldn’t be hard – DNSCrypt needs very little file access to work.

Conclusion

In the situation where an attacker finds himself compromising the DNSCrypt proxy on a system that has done all of the above, they’re going to be pretty pissed off. There is still room for improvement (seccomp filters), but right now an attacker is going to have to do a lot of work to get a reliable exploit.

For a program like DNSCrypt this level of security is great. It already chroots itself to a directory that it can’t write to, and they use compiler security, so you know they’re taking this stuff seriously. That’s what allows us to spend our time securing it further. If DNSCrypt did not so gracefully run as another user, and if it weren’t built to drop its rights to the extent that it does, then our apparmor profile would be more convoluted, TPE may not be possible, and an outbound Firewall would have been a useless attempt at security through obscurity. But because it’s built from the ground up to be this way we can reinforce it well.

Notes/Tips

Much of this can be done for any process/service with a bit of change, but it’s nice to be able to do it for a process like DNSCrypt.

**

Keep in mind that you can add your own flags to the makefile, such as “-march=native”, optimizing for your CPU. I can’t guarantee that this will play nice, or that it won’t add in unsafe compiler optimizations! But you may end up using something like AES instructions, since this deals with crypto and math, and that could speed things up.

***

Tip: The following commands will set your firewall so that:

1) If a connection is new, is over the loopback interface, is UDP, and uses port 53, we accept it (this allows DNS resolution).
2) If a connection was already established from an outbound connection, then we allow the inbound traffic.
3) All other connections that do not meet the above criteria are blocked.
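A sketch of iptables rules implementing those three criteria would look something like the following. Chain names and the loopback interface are the iptables defaults; adapt to your own setup before enforcing.

```
# 1) New UDP connections to port 53 over loopback: accept (local DNS resolution)
iptables -A INPUT -i lo -p udp --dport 53 -m state --state NEW -j ACCEPT

# 2) Inbound packets belonging to connections we initiated: accept
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# 3) Everything else inbound: drop by default policy
iptables -P INPUT DROP
```

Rule order matters here: the default-drop policy only applies to packets that matched neither of the two ACCEPT rules above it.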

The dnscrypt-proxy service can run as a separate user, chroot itself into a directory, and drop its rights. It also makes use of compiler security flags, so it’s PIE-enabled, uses full RELRO, and has stack protection. It’s pretty cool, but I like to be sure, so enforcing an Apparmor profile is always nice.

With this apparmor profile enabled an attacker who compromises DNSCrypt will have absolutely no write access to the file system, and incredibly limited read access. The most viable option at this point is for them to go for a local kernel exploit.

edit: I want this edit right at the top. ES has apparently stated that they have now (October ’13) added in stage one exploit mitigation techniques. They have provided zero documentation on how these techniques supposedly work. My verdict of ‘use EMET’ has not changed, and I suspect one of their techniques is quite similar to the one used in EMET.

Recently a program called ExploitShield, by the startup Zero Vulnerability Labs, made its way into the security market, offering protection against a wide variety of exploits. Just the other day it was purchased by Malwarebytes and rebranded as Malwarebytes Anti-Exploit. I find MBAM to be one of the best antiviruses, and I understand why something like ES would appeal to them, but I don’t see ES as being such a boon to security.

ExploitShield is essentially a “smart antiexecutable”, though they wouldn’t call it that. It actually has very little to do with exploits, and it certainly doesn’t prevent them. Instead it attempts to detect the exploit and then prevent any new payloads from executing. This is nice compared to a regular AE, because it’s not so stupidly overbearing. But just like with an AE, the attacker already has full remote code execution by the time they get shell, and the defense takes place too late in the game.

ExploitShield does not prevent exploits (despite its name), and it does not make vulnerabilities more difficult to exploit. What it does is attempt to detect exploits in various ways, and then, based on that detection, decide whether the ‘shielded’ program should be allowed to execute a payload.

My real issue with ExploitShield is that it actually doesn’t do anything to prevent exploits. It detects them, and then takes basically a single measure against them: preventing their final payload from executing. This is not nearly comprehensive enough. Until the recent merger with MBAM, ExploitShield claimed to prevent Advanced Persistent Threats, or APTs. An APT is basically an attack in which an attacker is dedicated to defeating your system and has some knowledge of the mechanics behind your security policies. An attacker with knowledge that you are running ExploitShield would tear through it. All of the 0days that work without ExploitShield work with ExploitShield; the only thing an attacker has to do is change how they execute the payload.

How I would bypass ExploitShield is by creating a buffer in memory and using reflective DLL injection (optional, just to avoid AV detection and forensics). I’d load that DLL into another process (AFAIK ExploitShield does not hook CreateRemoteThread()) and then execute it in the context of another, unshielded process, or really any process, shielded or not, because it won’t detect an exploit in Pidgin if I exploited Firefox. The details are a bit more complex; if they detect process ID enumeration, or one of the other steps, that’s that. But I think with just a bit of extra ROP you can evade their detection.

I’m sure there are many other ways to bypass ExploitShield. For example, I could drop my payload without executing it, and then, from shell, write a startup entry to the Windows registry. The user reboots at some point, the startup entry activates, and there you go.
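That persistence step is a one-liner from shell. A hedged sketch of what it could look like, where the value name and payload path are entirely hypothetical examples:

```
:: Illustrative only: register a previously dropped payload to run at next login.
:: "Updater" and the file path are made-up examples, not real indicators.
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" ^
    /v Updater /t REG_SZ /d "C:\Users\victim\AppData\updater.exe"
```

Nothing executes at the moment the key is written, which is exactly why a payload-execution monitor has nothing to flag until the next boot.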

I haven’t tried these things, but I’d like to. I’ve been meaning to write a POC “malware” to bypass AV/AE/ES for a while now; I have the framework written out, I just have to get around to it. Either way, whether these specific bypasses work or there’s some tiny flaw in them, I am certain that, by design, ExploitShield is not going to protect you from exploits, only from their payloads.

I take issue with a product saying it prevents something when it doesn’t. All it does is detect specific traits that exploits can leave behind.

A semi-formal review of ExploitShield came out, in reaction to the way-too-positive reviews from journalists who don’t know what they’re talking about, and it wasn’t exactly positive (read it here). The response from Zero Vulnerability Labs was to pick out a specific part that was wrong (having to do with detection) and state that that’s not how it works; they don’t say how it works, just that it’s different. Of course, the method of detection is irrelevant; the real issue is that it’s simply hooking a few functions and trying to detect exploits when they’re called.

The post details that ExploitShield is fundamentally not about exploits, and that they are misusing the term.

It is my belief that when ExploitShield uses the term ‘exploit’, they really mean ‘payload’.

[…]
ExploitShield is great if the attacker doesn’t know it’s there, and, isn’t globally represented enough to be a problem in the large for an attacker. If the attacker knows it’s there, and cares, they can bypass it trivially.

Enough said? Well, not really. The author goes on to discuss other issues, but I don’t feel the need to.

I like Malwarebytes, and I find it to be the best antivirus. They’re a legit company, and maybe they’ll turn ExploitShield into something legitimate as well. But until then I don’t see its use; it just seems like extra attack surface.

If you want a real anti-exploit program, use EMET. It actually prevents exploits, or mitigates stage 1/2 payloads (as opposed to stage 3, which is the final executable), and prevents an attacker from ever getting shell access.

I wish I could write something nicer, because I do like MBAM, but I really dislike products with misleading names. Since the takeover they’ve removed the nonsense about preventing APTs, but the name “ExploitShield” is making people ask “Do I still need EMET?” as if the two do the same thing. It’s a disservice to your customers when they don’t understand your product.

Keep in mind that I began writing this quite some time ago and only published it recently. In all that time MBAE may have changed, and they certainly claim to have done so. While I consider their flaws to be somewhat inherent to the design, I can’t confirm that everything I say here will always remain true; until I perform an in-depth reverse of their product, I can’t claim anything about how they are right now, only how they were in the past. This isn’t to say that I believe they are suddenly incredible, or worse, or anything, but people seem to have some issue, so yeah, there’s your little disclaimer.