Tuesday, July 22, 2008

The DNS is falling

What's the danger? It means a hacker can attack an ISP's DNS resolvers in order to hijack the sessions of that ISP's customers. A hacker can deliver browser/plugin attacks. A hacker can inject cross-site-scripting style attacks. A hacker can take control of victims' online accounts. A hacker can deliver Trojans, convincing victims to run the hacker's malware. Hijacking a single ISP could deliver tens of thousands of machines to a hacker's botnet.

Mainstream (unskilled) hackers probably don't have the tools to do this. That's the good news. By the time the tools become available, ISPs will have updated their servers. On the other hand, a skilled hacker can put together a robust toolkit in 24 hours and start hacking into your Gmail account.

ISPs are quickly updating their DNS resolvers with a patch to mitigate the problem. The core issue is that transactions are tracked with a 16-bit number, which is small enough to be guessed. This allows a hacker to guess the transaction ID and provide a forged answer. The solution ISPs are putting into place is to randomize the 16-bit UDP source port as well, providing 32 bits total of randomization. The highly regarded djbdns already does this; other DNS servers (BIND and Microsoft DNS) were recoded to make this their default behavior as well.
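To see why the extra bits matter, here's a rough back-of-the-envelope sketch in Python. The packet count per attack window is an illustrative assumption, not a measurement:

```python
# A 16-bit transaction ID alone gives only 65,536 possibilities; an off-path
# attacker flooding spoofed answers has a real chance of a collision.
# Randomizing the UDP source port adds roughly another 16 bits, raising the
# search space to ~2^32 and making blind guessing impractical.

TXID_SPACE = 2 ** 16
COMBINED_SPACE = 2 ** 32

def hit_probability(guesses: int, space: int) -> float:
    """Chance that at least one of `guesses` uniformly random spoofed
    answers matches the secret value(s) the resolver is expecting."""
    return 1.0 - (1.0 - 1.0 / space) ** guesses

# Assume the attacker lands 10,000 spoofed packets in one query window:
p_txid_only = hit_probability(10_000, TXID_SPACE)      # roughly 14%
p_with_port = hit_probability(10_000, COMBINED_SPACE)  # a few millionths
```

With only the transaction ID, a modest flood succeeds every handful of attempts; with the port randomized too, the same flood almost never does.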

There are other potential solutions to this problem. I wrote a signature for cache poisoning for the BlackICE intrusion-detection system. What made BlackICE different from standard intrusion-detection systems is that it was "state-based". In this case, BlackICE would check to see if more than one response arrived for the same query. This signature is now in IBM's Proventia. It should detect this new cache poisoning vulnerability. IBM's managed services group should be able to detect if there has been an upswing in cache poisoning attacks. Unfortunately, IBM's managed clients are mostly corporations rather than ISPs, so they won't see nearly as much cache poisoning.

DNS resolvers can do the same sort of detection as my intrusion signature. They can keep track of the most recent transactions in order to see if a duplicate response is received. If there is a duplicate response, they can flush the cached entry. There are tricks you can use to make this fast, although if you code it wrong, you might introduce a DoS vector against the DNS server, letting hackers continually flush cache entries.
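A minimal sketch of that idea, assuming a simple in-memory table of outstanding queries — the class, its time window, and its eviction rule are hypothetical illustrations, not the actual BlackICE/Proventia logic:

```python
import time

class DuplicateResponseDetector:
    """Flag a possible poisoning attempt when a second answer arrives
    for the same (transaction ID, query name) pair within a short window."""

    def __init__(self, window=2.0):
        self.window = window   # seconds a query stays "outstanding"
        self.seen = {}         # (txid, qname) -> (first_seen, count)

    def on_response(self, txid, qname, now=None):
        """Return True if this response duplicates one seen in the window."""
        now = time.monotonic() if now is None else now
        # Evict expired entries so state stays bounded (otherwise the
        # tracking table itself becomes a memory-exhaustion target).
        self.seen = {k: v for k, v in self.seen.items()
                     if now - v[0] <= self.window}
        key = (txid, qname)
        if key in self.seen:
            first, count = self.seen[key]
            self.seen[key] = (first, count + 1)
            return True   # duplicate: caller should flush the cache entry
        self.seen[key] = (now, 1)
        return False
```

Note the DoS tension mentioned above: if an attacker can trigger the "duplicate" path at will, the flush itself becomes a weapon, so real implementations need rate limits on flushing.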

Another thing to look at is "additional records". Resolving a single name requires a chain of requests. A server can send back more data than was specifically requested -- data it believes the resolver is about to request further down the chain. In my experience, there is little reason to cache these additional records beyond the needs of the current request chain.
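One way to limit what gets cached is a bailiwick-style check: accept an additional record only if its name falls under the zone the answering server is responsible for. The function below is my own simplification of that rule, not a complete implementation:

```python
def in_bailiwick(record_name, zone):
    """True if record_name is at or under `zone`. A resolver that asked
    a server for example.com data should cache additional records only
    for names ending in example.com, not arbitrary glued-in names."""
    record_name = record_name.rstrip(".").lower()
    zone = zone.rstrip(".").lower()
    return record_name == zone or record_name.endswith("." + zone)

# in_bailiwick("www.example.com", "example.com")  -> True
# in_bailiwick("www.bank.com", "example.com")     -> False (don't cache)
```

The label-boundary check (the leading dot) matters: without it, an attacker-controlled name like evilexample.com would wrongly pass for example.com.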

Another mitigation strategy is to make resolvers "private". Resolvers at an ISP should only respond to requests from customers. Companies should run resolvers only within their firewalls. Of course, one customer of an ISP can still attack other customers of the same ISP, but at least the danger is reduced. Unfortunately, if you scan the Internet, you'll find a lot of open resolvers that can be attacked by anybody.
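In BIND, for example, restricting recursion to customers is a few lines of configuration. This is a hypothetical fragment -- the address blocks are placeholders, not real customer prefixes:

```
// named.conf sketch: answer recursive queries only for our own
// customers, refusing the open Internet.
acl customers { 192.0.2.0/24; 198.51.100.0/24; };
options {
    allow-recursion { customers; localnets; };
};
```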

You can run your own resolver in your home that would be protected from this nonsense. The reason to use ISP resolvers is that they likely have responses cached, so you'll get a response quicker than if you resolved a name yourself. On the other hand, most sites resolve pretty quickly anyway, and once you've resolved it, you won't need to resolve it again for a long time. Indeed, it may even be a better experience, because ISP resolvers are often overloaded, and they fail often.

Some have suggested that SSL fixes this. It doesn't - users rarely pay attention to certificate warnings. Some have suggested that DNSSEC would solve this problem. Unfortunately, DNSSEC is sufficiently complex that it has resisted widespread adoption. Another solution is to use OpenDNS, which keeps up with patches, rather than your ISP's DNS, which may not. Presuming your ISP restricts usage to its own customers, though, using OpenDNS exposes you to a wider range of possible attackers.

Lastly, one debate is whether this attack is worth the hype. That's a stupid question. It's a man-bites-dog sort of story. The latest browser vulnerabilities are far more serious, yet boring; this resurrects an old bug, and is kinda interesting. It's worthwhile to tell ISPs to take it seriously while hiding the details.

2 comments:

Well put. I find that the hype machine has out-done itself on this one, not to say that Dan's find and coordination of the industry isn't impressive. What's interesting as well is the age of this type of vulnerability, and the fact that years after we've gone far down the rabbit-hole of fixing complex security issues, DNS is still as vulnerable, and as simple, as it was when the Internet was used by little more than enthusiasts and computer nerds. I can't help but feel we're missing the forest for the trees here... the bigger-picture issue isn't that DNS is vulnerable to this attack, it's that we've been completely asleep at the wheel, chasing more "sexy" solutions to Web 2.0 problems while Web 0.1Beta issues are still just as nasty, or worse. A compromise in the underpinnings of the Internet is unconscionable... and the fact that it's out there in 2008 is a black eye for Internet engineers and security alike.

Bad news: although Dan did a private disclosure to the companies and had it patched without external review of the bug/vulnerability, there are still big problems. Because of the lack of peer review of potential patches to DNS servers, an exploit of the patch has been found by a Russian physicist. You can read more here: