
L3sPau1 writes "Network security researcher Dan Kaminsky has had a year to reflect on the impact of the cache poisoning vulnerability he discovered in the Domain Name System. In the time since, Kaminsky has become an advocate for improving security in DNS, and ultimately, trust on the Internet. One way to do this is with the widespread use of DNSSEC (DNS Security Extensions), which essentially brings PKI to website requests. In this interview, Kaminsky talks about how the implementation of DNSSEC would enable greater security and trust on the Net and provide a platform for the development of new security products and services."

Coming up on celebrity death match we have Dan Kaminsky vs. Dan J. Bernstein. Let's stay tuned. In all honesty, I tend to agree with the notion that SSL is a joke, and hence DNS based on SSL is just as bad. SSL suffers from many flaws that most people either don't know about or choose to remain ignorant of, based on the popular notion that SSL is safe. SSL relies on you trusting a third party as being secure, when it only takes one corrupt employee to violate the sanctity of a PKI private key. Verisign, the globa

The "Kaminsky bug" is a hoax. Kaminsky didn't discover anything. The only thing that Kaminsky can put his name on is the hoax. In my NTIA comments http://www.ntia.doc.gov/dns/comments/comment027.pdf [doc.gov] I traced down everything Kaminsky claimed to have discovered to find the true author.

There are no (or rare) "Kaminsky exploits" in the wild. All servers but BIND have implemented UDP port randomization for years. WITHOUT port randomization, one can exhaust the 16 bits of Query ID in 65,000 spoofed UDP packets--if o

I actually completely agree with your desire to see trust in the edges. That's what's so interesting about DNSSEC -- DNS, by its very design, is all about getting the core the hell out of the way and delegating, delegating, and delegating some more until the organization that manages the namespace can directly control it.

Indeed, in the ultimate vision of DNSSEC, we have full on validating resolvers in clients. The endpoints themselves can finally - finally! - recognize their peers directly, without having to trust anyone or depend on the admitted messiness of the existing SSL CA infrastructure.

The reality about Active MITM is that it's out there. See the BGP work that came out in tune with my talk -- note that all that still works, today, even with my big fix. Active MITM isn't some theoretical attack, and the reality is that everything is vulnerable to it. An ounce of cryptography is worthless without a metric asston of key management. DNSSEC is just the best way I can see to do it.

Regarding the trust anchor size, the reality is that we have 25 years of it not being a problem. That's the simple truth. Everything I did last year could have been done by a malicious root. It wasn't.

Your corporate/intranet myth is funny, because it strikes at the heart of the problem. You think there's just one corporate/intranet to authenticate. It's literally like you're suggesting: email to other companies is complicated, so let's just do email to our own company. Nice, but not good enough. We need cross-organizational trust. We need something to bootstrap it. DNSSEC should be that.

Do you even know what DNSSEC is? It is nothing he is trying to sell; it is him trying to completely take care of the flaw he found in DNS last summer. A flaw that could have seriously fubared the net if he hadn't coordinated massive patching with internet providers and large companies. And just so you know, just about all DNSSEC software is open source, meaning it is free. He isn't a con artist, and it is pretty ignorant to call him one; he spent countless hours last summer getting the patch out, which he didn't make money from since he released it for free.

The flaw is really in DNS - the only authentication field in a DNS request is a 16-bit query ID, plus the implicit authentication of a 16-bit port number, and IIRC you could also birthday-attack the query ID. Kaminsky's changes to DNS implementations such as BIND (which were built into djbdns etc. from the beginning) get you a few more bits of protection against an attack, but that just means that DNS is "still pretty weak" as opposed to "really really weak".
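The search-space arithmetic behind those bits can be sketched in a few lines (illustrative only; this is not a model of any particular resolver):

```python
# Rough search-space arithmetic for blind DNS spoofing.
qid_bits = 16                                # DNS query ID
port_bits = 16                               # UDP source port, if randomized

space_fixed = 2 ** qid_bits                  # fixed port: match the query ID only
space_random = 2 ** (qid_bits + port_bits)   # randomized port: match ID and port

print(space_fixed)    # → 65536
print(space_random)   # → 4294967296

# Chance that at least one of n spoofed packets matches one outstanding
# query: 1 - (1 - 1/space)^n
n = 65000
p_fixed = 1 - (1 - 1 / space_fixed) ** n
print(round(p_fixed, 2))  # → 0.63
```

With a fixed port, 65,000 spoofed packets give roughly a 63% chance per race; with port randomization the search space grows by a factor of 65,536, which is why the patch helps even though the protocol is unchanged.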

And unfortunately, IPv6 DNS is no better - it keeps the same basic header for compatibility, adds some new longer record types, and adds some 128-bit addressing, but the QueryID's still the same old 16 bits.

DNSSEC gets to the root of the problem, with cryptographic signatures on the data. It may be overkill compared to just putting in a 128-bit or 256-bit Query-ID field, but basically it's something that can actually get deployed, because it's a set of additional data transported in DNS, not a replacement for DNS's transport protocols. The reasons it wasn't done years ago have a lot to do with the NSA/FBI anti-crypto policies of the '90s, and Verisign's reluctance to do a huge amount of work nobody cares about to protect .com, but we're finally getting the root signed.

DNSSEC must be the most wildly overrated technology to ever come out of the internet.

Seriously, it's just terrible from a system administrator's perspective.

It's been a year since I listened to a speech about DNSSEC (from an official ISC representative) at a Linux User Group meeting, so I don't have every last detail on the tip of my brain. However, it provides a little more security in some ways, while making other things worse.

Trust me, I'm raising more hell than you can imagine about the deployment issues of DNSSEC. Here's the truth:

1) You don't actually need to do all that resigning stuff. When best practices involve increasing your costs 100x, something is wrong.
2) You don't actually need to have your signatures expire.
3) You don't actually need that cron job.
4) They fixed that zone walking problem with NSEC3. If you have online keysigning, which I expect everyone to have, you don't even need that.
5) .org is signed. .com is coming, as is the root itself. Things have changed.

Stand by. Seriously, this is coming, and it's not going to be miserable by the time you actually need to deploy it.

To be fair, I don't see much of a difference between NXDOMAIN and SERVFAIL except possibly impact on negative caching. Stuff doesn't work.

DNSSEC planners have been way, way too willing to let things break in order to protect non-critical features. DNS is not allowed to just return SERVFAIL. Luckily, the protocol itself is flexible enough to allow much more stable deployments.

You don't host anything for real paying customers, do you? Let me give you a summary of how interaction with "security consultants" usually goes:

1. Customer gets cold called, sees some FUD on local TV or portscans, or the "consultant" has some dude in Malaysia digging around for pennies an hour to find the sites hosted.
2. Customer gets bilked out of a couple hundred dollars for a 'security audit' (usually a scan using a common tool with default settings).
3. Customer fails to understand any of it.
4. Passes a

Aside from a few articles here and there, the "real world exploits" for this stuff, where someone actually gets harmed... well, where are THOSE reports?

Since Dan Kaminsky is active in this thread, I'd love to see him answer this question. I'm guessing he's probably bound by non-disclosure agreements and can't give us any details, but I'd like to know if he's seen successful, real-world attacks out "in the wild" that resulted in real damage done.

The DNS cache poisoning attack was used the same week it was put into Metasploit, on a Verizon DNS server in Texas where the attacker was forwarding people to a fake Google page with malware on it. Just one I can recall from when this first came out.

There's lots of shysters in every field. Doesn't change the fact that there are problems out there that need fixing. Security is in fact a problem.

In concrete terms, just for my vuln, about 1% of one of Brazil's largest bank's customers had their info taken by my bug. That wasn't fun. And China Netcom got hit pretty hard too, go Google that. Of course, there's a lot of data we're missing, because nobody needs to report. But anecdotally, this was a problem, but not the end of the world. Good! I didn't exactly set out to end the world:)

In terms of what I see fixing, I see a lot of technologies repeatedly sold as "and then you get certificates", and nobody does, because it's just such a cross organizational nightmare to manage. Certs aren't working, and it's holding back auth technology after auth technology. Verizon Business' data says that 60% of vulns aren't implementation flaws -- they're authentication flaws, with passwords at the heart of them.

Why so many passwords? Because they work. Well, DNS works too. Maybe we can use it to make all the better things scale across organizational boundaries.

Security is a tricky thing. You say security people sell you things "you don't need". But if you wait until you NEED security, it is already too late because you have a breach.

Security is not an ER visit; it is a regular preventative exam with your physician. It is something you have to take a proactive approach with. Yes, this often means investing time and money in something that has no immediate ROI. But that is the nature of the problem you are dealing with.

There are numerous issues with implementing DNSSEC. The idea has been around for like 14 years now. Also, there are alternatives like DNSCurve which provide more security with considerable ease of deployment.

DNSCurve can't achieve end-to-end security while still caching. Without the former, you're trusting the name server at Starbucks not to be malicious. Without the latter, there's a 10x (minimum) increase in DNS traffic and the internet collapses.

I'm just going to repost my last comment on this subject. I don't think things have changed since then, but if they have, I'd certainly be interested to know. :)

You might be interested in this thread: https://lists.dns-oarc.net/pipermail/dns-operations/2008-May/002736.html [dns-oarc.net]
where Paul Vixie recommends that nobody should ever deploy a stub resolver that supports DNSSEC, but instead use TSIG to talk to the recursive resolver. Which really makes DNSSEC's security characteristics look very much like DNSCurve. Th

DNSSEC doesn't let you set your laptop's DNS to Starbucks' NS and be safe, because you can't do DNSSEC from a stub resolver and have to use TSIG (which protects the client->server data transfer, not the zone data).

DNSCurve doesn't let you set your laptop's DNS to Starbucks' NS and be safe, because DNSCurve protects the client->server data transfer (of each step of the process) not the zone data.

You're screwed either way.

Or, you can decide to either use a different resolver somewhere on the interne

So the deal is, DNSSEC lets the server at Starbucks cache DNSSEC records for you. So even if it's not doing the validation, it can at least remember the crypto such that each backend host that is doing validation can enjoy the cached records on the shared NS.

You can't do that with DNScurve, since the crypto is link based.

I've been playing with the Curve25519 code lately. It's cool, I have use for it (understatement), and it's a joy to work with. But DNS

1. Without an unbroken chain of trust to the root, it's worthless. Self-signed DNSSEC is no better than no DNSSEC.
2. It's vulnerable to MITM attacks - just strip the DNSSEC information from the returned packets and return a normal (modified) DNS reply.
3. Because of the chain of trust, setting it up will cost $$$ - probably going to Verisign, as usual.
4. Until absolutely everyone in the world uses DNSSEC the fallback to normal DNS cannot be removed, so 2. remains a problem. 3. gu

Oh, and I'd add the thing is f..ing *hideously* complex to set up, with multiple competing implementations that aren't compatible with each other because some have special DNS tags, some use TXT records, the formats keep changing, etc. I spent 4 days on it once... I got maybe 10% of the available DNSSEC testers to even recognize that I implemented their brand of DNSSEC.

1) Agreed. I'm not very popular in some DNSSEC circles because of it. :) But yes, the entire Trust Anchor Repository thing is a mess. That's why it's so important to get the root signed.
2) With the root signed, you always have a trusted path that says whether a given domain has DNSSEC or not. If it does, stripping the DNSSEC won't matter; you'll know there are *supposed* to be signatures there.
3) Because DNSSEC delegates, it's not really amenable to the sort of tricks that have cost money in the pa

A "validating stub resolver" will actually need to be (basically) a recursive resolver itself (it needs to do multiple queries to verify each signature in the delegation chain from the root down to the record it's actually interested in). And thus, if you want it to perform well, it'll need a local cache.
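As a toy illustration of why validation means walking the whole delegation chain (no real crypto or network here; all zone names and "keys" are invented):

```python
# Toy model of why a validating stub must walk the delegation chain
# root -> TLD -> zone: one key lookup per zone cut.
chain = {
    ".":            "root key signs com.",
    "com.":         "com key signs example.com.",
    "example.com.": "zone key signs www.example.com. A 192.0.2.1",
}

def validate(zones):
    queries = 0
    for zone in zones:
        queries += 1          # one DNSKEY/DS fetch per delegation step
        assert zone in chain  # stand-in for an actual RRSIG verification
    return queries

print(validate([".", "com.", "example.com."]))  # → 3
```

Each validated answer costs several extra lookups, which is exactly why a validating stub wants a local cache: repeat queries under the same delegation reuse the already-fetched keys.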

Now it's basically a caching recursive resolver.

So, is your claim that because the caching recursive resolver which you run on localhost can use a second-level remote cache instead of talking to the authori

Estimates on cache hit rates in DNS are about 90% -- meaning for every query that reaches a server, nine more got chomped in a cache.

I'm uncomfortable asking the Internet to increase their DNS query capacity by 10x. DNS has a performance curve where once it dies, it dies kind of catastrophically. 10x increases are asking for trouble.
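The amplification being described is just arithmetic on the hit rate (a sketch using the 90% figure above):

```python
# If 90% of queries are absorbed by caches, bypassing them multiplies the
# load on authoritative servers by 1/(1 - hit_rate).
hit_rate = 0.90
amplification = 1 / (1 - hit_rate)
print(round(amplification, 1))  # → 10.0
```

A 95% hit rate would make the same bypass a 20x increase, so the cost of uncacheable designs grows sharply as caches get better.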

...not to mention that DNSCurve requires per-query crypto on the server, while DNSSEC does not (by a design that really, really wants to allow offline key signing). Curve25519 is fast but it's not *that* fast.

I'll also note that DNSCurve lookups use less network bandwidth than DNSSEC. DNSSEC drastically increases network bandwidth requirements with all those individual record signatures that need to be returned...

The point is that we can actually share DNSSEC responses across multiple nodes, not just a single node, using the existing framework. Yes, we will need clients that *can* go straight to the root. But they won't *have* to, which is a neat design element of DNSSEC.

The one difference is that in most cases the recursive resolver will be under the control of either the user themselves or the owner of the network. E.g., home computers implement stub resolvers talking to the nameserver included in the home router, which does the DNSSEC. You have to trust the server the stub resolvers are talking to, but that server's the little box sitting on the shelf above your computer instead of some random nameserver several hops out under the control of someone you don't know. Or you

Well, I was one of the guys who was wrong (about DNSSEC, anyway) so it doesn't completely match up.

Look, simple question: Do you think the existing system, of X.509 certificates, of SSL CA's, of private PKI's, is at all working? I sure don't. All I see are crappy passwords everywhere, being left as default, getting leaked, being brute forced, etc. Most security technology isn't working.

I find the (Slashdot) story title and summary to be a bit misleading, though I doubt that was intentional. As a result, I think most of the replies to you fail to appreciate that you are not really talking about fixing a flaw in DNS; you are talking about using DNS as an infrastructure to make many more things start depending on it. From TFA:

DNSSEC is interesting not because it fixes DNS. DNSSEC is interesting because it allows us to start addressing core problems we have on the Internet in a systematic a

Sure, DNS is a single point of failure with security implications. What else is new? Half my talk last year showed what sort of damage you could do if you could corrupt any DNS name. The root can, today.

It also scales really, amazingly, wonderfully well. See, DNS actually delegates, unlike X.509. That means the root doesn't interact with most people, just a few countries and gTLDs.

So, how many people do you have in your GPG keyring? Few dozen? Few hundred? I spent six months interacting with people over email securely. It added an average of 72 hours of time before work could be done, and often it didn't work. C'mon, this ain't scaling.

Sure, DNS is a single point of failure with security implications. What else is new? Half my talk last year showed what sort of damage you could do if you could corrupt any DNS name. The root can, today.

I guess what I am asking is this: such a failure that results in corrupting a DNS name is already bad enough. Would it not be worse if many other security mechanisms also depended on it?

What makes DNSSEC better than using protocols (such as SSH) which can independently verify the identity of a host by means of cryptographic signatures? This to me also seems consistent with the idea that good security is done in layers, not by single ultimate solutions.

I don't mind SSH, with its key caching, but what about initial trust? What about the fact that, in the real world, keys change (just like IPs change) and when they do, something needs to say the new content is OK? It'd be nice to have a sys

If I get a cert for www.doxpara.com from a CA, I need to get another cert for mail.doxpara.com, foobar.doxpara.com, and so on (assuming I want one private key per server).

If I acquire doxpara.com from Verisign, I can handle the rest myself thank you very much.

It's a pretty major difference.

Also a major difference is the reality of who can screw you. In DNSSEC, you have to worry about your registrar (GoDaddy), your registry (Verisign), and the root. In x.509, you have to trust every CA in the

I don't see Verisign really being in a position to "stick it" to the states that control ccTLDs or the registries that control various gTLDs (org, info, etc.). And while Verisign will in fact be able to place a toll on names under com and net, they're in the competitive position of needing to be reasonable compared to org, info, and other domains. This is exactly analogous to the position Verisign has on .com and .net today, as they're the exclusive registry for those TLDs. If you don't like .com/.

Yes, with the SSL situation, you can pick any random "el-cheapo" SSL CA _you_ like (or can con). Your users' browsers will still trust them. There's less of a monopoly there compared to DNSSEC, right? Go to Firefox, Tools, Options, Advanced, Encryption, View Certificates, Authorities. Pick the cheapest CA there who will do what you want[1], and you're set.

But don't forget, you still end up having to pay them every year or so. It doesn't matter that they are crap; you still need to pay a toll to somebody. You yourself

Excellent, excellent questions. This is the sort of stuff I was asking before I switched sides on the DNSSEC war.

The problem with SSL is it doesn't matter if *you* aren't paying a worthless CA; as long as a worthless CA is out there, he can corrupt every domain, everywhere. That sucks. So SSL becomes a matter of finding the least secure CA possible and compromising that.

Things are different in DNSSEC. Because of delegation, the root is the only entity with absolute power over everyone -- an

But the fix for SSL is not about fixing the CAs, it's getting the browsers to behave more like SSH (or better). Then at least the browser will give useful warnings for a change, that'll help people who really care about security. While it won't help the "click through" users, nothing much will help those against attackers anyway.

Then that's the way it should be - YOU decide who you want to trust, with the help of technology.

Yes, because browsing securely should look like UAC, with every new site throwing a prompt in your face as if you had enough information to go on.

No. We can, and need to stop imagining the user is some sort of god that can accurately judge risk of accepting unknown keys (or worse, keys 'recognizable' with some arbitrary sequence of hexadecimal characters). This is a lie we're telling ourselves, and I'm done with it.

You're right that Verisign controls .com. Guess what, they control it *today*

DNS doesn't take up a lot of network bits - at the beginning of a data flow, you typically look up a DNS name to find the IP address, then start doing things, and even if all you're doing is a small text email or fetching a small text html page, the protocol headers alone are a lot bigger than the DNS query, and usually the data's a lot bigger. Changing to DNSSEC adds a few hundred bytes to that query, but it's almost always a drop in the bucket.

I haven't deployed DNSSEC yet on my external domains because of cost/complexity. When I looked into it, my options for DNSSEC were:

1) implement BIND and do the key management and rotation from the command line
2) spend $10,000 or so for an appliance from secure64 or nixu
3) spend $1k/month for a hosted DNS provider like neustar or verisign
4) install Win2008R2 RC and use it in production

I work in a windows shop, so I'll probably go with option 4, but I'm surprised there aren't more set-it-and-forget-it tools out there for DNSSEC deployment. I'm open to recommendations.

How hard is it to implement DNSSEC in my recursive cache? How many RFCs am I going to have to toil over to understand DNSSEC well enough to implement it? About how long will it take me to code MaraDNS to have full DNSSEC support?

I have a bad feeling that DNSSEC is a monster to implement and that we will not see many independent implementations of it; right now BIND and Unbound appear to be the only DNS servers to support it. DjbDNS doesn't support it, of course, and probably never will. My own MaraDNS and PowerDNS also don't support it.

What are your thoughts? Has a reasonable effort been made to make DNSSEC easy to implement?

Considering you can't be bothered to read the information out there to even understand it, I think you'll find it very hard.

It's not actually hard if you use someone else's encryption libraries, but if you are too lazy to even look up how it works, and it's fairly clear you have no understanding of how it works, it's probably safe to say you are going to consider it hard.

In reality, it's not really any worse than, say, adding SSL support to a web browser.

And, since you're too lazy to post links to DNSSEC howtos (like this one [dnssec.net]), you're not helping, only name-calling. The issue is that there are 15 RFCs with DNSSEC in the title [rfc-editor.org] and no clear idea where to get started.

But, hey, this is Slashdot, where any idiot can get a lame name like "BitZream" and post insults anonymously.

It's a bit tricky, but we can work with you on the trickiness. DNSSEC is *much* easier to implement if you drop the somewhat unnecessary requirement for offline key signing, which is why BIND is so messy. Libunbound/ldns is flexible enough that you can integrate it; otherwise we can help you with the various wonkiness. Email me offline, dan@doxpara.com?

The Kaminsky attack is an attack on a client that is dumb enough to let an attacker trigger thousands of DNS requests to subdomains of a single domain. If the solution is changing to DNSSEC, the clients will have to change to DNSSEC too, so we may as well prevent the attack by making the DNS clients smarter. For example, a rule could be added that says that if there were more than 10 invalid responses for subdomains of the same domain, then when a valid response arrives, only cache the IP for the subdomain requested and not
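The proposed heuristic might be sketched like this (a toy; the threshold, class, and record names are invented for illustration, and real resolvers use more nuanced bailiwick rules):

```python
from collections import defaultdict

INVALID_THRESHOLD = 10  # the "more than 10 invalid responses" rule above

class CautiousCache:
    """Toy resolver cache: after too many bogus responses for one domain,
    only cache the exact name asked for, never piggybacked extras."""
    def __init__(self):
        self.invalid_counts = defaultdict(int)
        self.cache = {}

    def note_invalid(self, domain):
        self.invalid_counts[domain] += 1

    def store(self, domain, asked_name, records):
        suspicious = self.invalid_counts[domain] > INVALID_THRESHOLD
        for rec_name, ip in records.items():
            # Under suspicion, drop any "extra" records riding along with
            # the answer -- the payload vector of the Kaminsky attack.
            if suspicious and rec_name != asked_name:
                continue
            self.cache[rec_name] = ip

c = CautiousCache()
for _ in range(11):
    c.note_invalid("example.com")
c.store("example.com", "a.example.com",
        {"a.example.com": "192.0.2.1", "ns1.example.com": "203.0.113.9"})
print(sorted(c.cache))  # → ['a.example.com']
```

Note the poisoned ns1.example.com record is discarded while the legitimately requested name is still cached.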

It's a matter of the scope of the attack. Any resolver (including your ISP's caching nameserver) can be the target. It wouldn't make much sense for an individual's resolver (their PC) to be the target--first of all, it's hard to get it to issue thousands of queries. Second, your payoff is small--you got one machine to think that ns1.example.com resolves to your IP address.

The real target is any caching server that lots of people use. It's much easier to get these to make requests for lots of subd

Yes. But the us govt could do that before. DNSSEC doesn't enable this.

DNSSEC only enables a false sense of security that it wouldn't happen, while leaving the man-in-the-middle attack ignored and vulnerable.

There are basically two kinds of attacks: man-in-the-middle (MITM) and blind attacks, which cannot see responses. UDP port randomization makes blind attacks quite nearly impossible, and this has been known since 1999 or before. TCP DNS makes blind attacks impossible.

Well, to be fair, the US govt has waged wars against two countries, Iraq and Afghanistan, and did not interfere with their domains. There are some lines that are technically possible to cross, but aren't likely to be crossed. It would create a lot of diplomatic fallout to alter domains.

The only case I can think of where it might be a real problem is the case where there is a government in exile that requests changes to their country's domain that harms the de facto government in the country.

I was actually thinking about this the other day.
HTTP has absolutely no security at all built into the protocol. It is all hacked together with cookies, the server remembering sessions, etc.
The protocol itself is dumb. Make a request... get a response; that's it. Any security is on top of that.
If there were a standard for secure HTTP, all of these gimmicks and schemes could be removed from the hundreds of web frameworks and implemented in the browser / HTTP server.

There already is a standard: SSL. It includes encryption (to secure the content going across the channel against eavesdropping) along with bidirectional authentication (server certificates to verify that you're talking to the server you think you're talking to, as well as the less-commonly-used client certificates to authenticate to the server that you're who you claim to be).

If I were setting up a secure site, it'd be SSL-only. As part of the account-setup process, you'd be asked to generate a client certi

Well, SSL/TLS can, with the proper use of certificates, secure the endpoints against impersonation attacks. If, for instance, the browser has the server's certificate directly in it and won't accept any others, then it doesn't matter if the attacker can redirect the traffic to their server and forge DNS perfectly, the browser will still reject the server because it doesn't have the certificate the browser expects. Ditto the other way using client certificates. An attacker would have to compromise the endpoi

If I were setting up a secure site, it'd be SSL-only. As part of the account-setup process, you'd be asked to generate a client certificate and upload the public certificate (over an SSL connection) to the server to be attached to your account. From that point on, when you attempted to log on using your username the server would only accept the request if it came over a connection presenting the client certificate attached to that username.
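In stdlib Python, the server-side half of that policy amounts to a few lines of TLS configuration (a sketch; the commented-out file paths are placeholders for real PEM files):

```python
import ssl

# Minimal sketch of an SSL-only server that rejects any client not
# presenting a certificate we already trust.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a client cert
# ctx.load_cert_chain("server.crt", "server.key")     # server's own identity
# ctx.load_verify_locations("trusted_clients.pem")    # accepted client certs

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
```

With CERT_REQUIRED set, the TLS handshake itself enforces the account binding; the application never sees a connection from a client without an accepted certificate.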

Why should it be hard? We're not talking about getting a CA-signed certificate verifying identity here; it's a self-signed certificate to ensure that only the person who created the account accesses it. Prompt for a couple of things like a name, generate and sign the certificate, and add it to the appropriate certificate stores for use. Assuming you're not just reusing a certificate you've already generated. This should be trivial; it's just that the toolmakers (browser makers, etc.) don't bother with proper support for more tha

This should be trivial, it's just that the toolmakers (browser makers, etc.) don't bother with proper support for more than the bare basics of SSL.

I think you answered yourself.

The Web works because it has very low friction. Having to play idiot games with certificates and managing them adds tremendous friction. That you don't seem to see this suggests you haven't thought very carefully about the consequences of such a system, if at all.

That's the thing, though: it shouldn't need to add significant friction. There's no games to play, and minimal management that ought to be needed. It should be about as complicated as keeping track of the credit cards in your wallet, which seems to be well within the realm of the majority.

Some parts of the DNSSEC process invite clean UI connections - setting the keys, running human-oriented query tool, etc. It's far easier if you include registering the key as part of the process of registering the name, of course. But that's not how most people use DNS.

So you type a URL into your browser, and the browser hands it to a resolver client, and the resolver client does a query. With vanilla DNS, if the query fails, the client tells the browser it fails, and the browser either gives you so

That's kind of the point. Dan has found a flaw in the basement of your house. The entire house is in jeopardy, no matter how well built. Every house is affected.

Do you:
A: Call Kaminsky a damn liar, denounce his snake oil, sip your turpentine.
B: Stucco and paint every 10 days, whistling to yourself forcefully.
C: Try to jackhammer out the flaw and form up some new foundation in the meantime.
D: Nuke the house from orbit, start from scratch, total web tech do-over in IPv6.

Nothing's perfect, but the DNSSEC signature process is mostly out in the open - you can see the public keys for the name servers, and you can check the signatures on the keys for yourself, and you can also get yourself domain names almost anywhere in the world if you don't like a given registrar/registry. So while a government _could_ probably bully a registry into signing a forged certificate for your domain name, it would at least be publicly visible that "your" key had changed.

I think my comments to the NTIA on DNSSEC hit the point on Kaminsky and the DNS scam. As others pointed out, this is a group of shysters. MIT's "Technology Review" picked up the "Media Hack" aspect of the story in December. That article is a good read if someone has a link.

The .ORG operator won't respond to the question of whether they had regulatory approval to carry out this action.

A Top Level Domain (TLD) is operated under the supervision of ICANN and IANA, just like the root DNS servers. So TLDs should have permission from ICANN & IANA (and so from the NTIA of the Department of Commerce of the US Govt), again, just like the root DNS servers need approval. The NTIA requested comments on DNSSEC (which I responded to), but the NTIA has not announced any authorization to

Absolutely true. However, the ability to delegate/federate security is such a powerful force for lowering the costs of proper design and management that making this one technical change will facilitate the operational strength you correctly call out.