Posted by timothy on Friday July 16, 2010 @10:40PM from the signing-of-the-times dept.

r00tyroot writes with news that slipped by yesterday, quoting from the Internet Systems Consortium's release: "ISC joined other key participants of the Internet technical community in celebrating the achievement of a significant milestone for the Domain Name System today as the root zone was digitally signed for the first time. This marked the deployment of the DNS Security Extensions (DNSSEC) at the top level of the DNS hierarchy and ushers the way forward for further roll-out of DNSSEC in the top level domains and DNS Service Providers."

That depends on whether the registry for your TLD supports DNSSEC. There has to be a chain of trust all the way down from the root nameservers to yours. .ORG does support DNSSEC now.

I'm currently trying to find a registrar that definitely has DNSSEC support in their web management interface for .ORG domains. GoDaddy looks like a good bet on this point, but I'd also like IPv6 glue support (i.e. so I can create a new AAAA record with an IPv6 address, then also set that name as an NS record and have that data in the glue records).

Actually, you can't transfer a domain when it's close (~30 days, I think) to expiring, to avoid it expiring mid-transfer. You shouldn't lose any time off of the original registration; the transfer should just extend it, so it's probably better to transfer now. Check the rules with both registrars.

All I'd been reading so far was no transfers for X days after a new registration or expiry. But I'll check and make sure, thanks.

I'll also mention that name.com also supports both DNSSEC and IPv6 glue now. They've only done so for a week and haven't yet updated their FAQs. I'd even read The Register article about this just last week!

"ISC has been intimately involved with the development of DNSSEC for more than fourteen years..." "Today's milestone marked the final step in a seven-month process of evaluation and incremental deployment, assuring operational readiness of systems, software, and processes necessary for any significant change to the DNS root."

Just like the good old days. Not like the Rapid Application Development that pushes crap out the door that goes obsolete before all the bugs are fixed. I miss those days.

Rapid application development has its place. The point is to iterate quickly and have short milestones; it doesn't have anything to do with "shove stuff out the door and stop maintaining it."

That said, the majority of software projects, in my experience, would be much better off adopting a more waterfall-like development model rather than that agile crap or whatever the latest buzzword is. Obviously a system that affects the entire fricken internet is one such example.

Things have changed, a bit. The once radical idea of domain names has become so infrastructural that a failure of the DNS system would amount to a denial-of-service attack on the global economy. Basically, there probably isn't a single system more critical to the global economy than DNS, except perhaps the IMF.

So, 7 months to roll out... pretty aggressive, if you ask me! I can't imagine the pressure that people in these positions actually have to endure...

What kinds of services rely on DNS? Web and email communication, obviously, but would voice communication, either via cell phones or landlines, break down? I suppose much of the voice traffic is routed over the same physical backbone as the Internet, but does it share the same server infrastructure, including DNS? What about bank transactions? Are companies smart enough to handle internal communication (even if it touches the net) in a way that would work without DNS? Or would my toilet stop working?

Though your toilet may continue to work without DNS, the company that keeps your water flowing would likely slow to a crawl if it were unable to e-mail/call the partners it does business with.

VoIP servers, when calling other VoIP servers, will make DNS lookups to get the IPs needed to establish such calls, though anything done over the PSTN just goes through the phone company's version of DNS: the CO.

E-mail would fall apart within the TTL of the cached entries, and web browsing would quickly deteriorate. Most debit machines I've installed are hand-coded with static IPs, though most ABMs used DNS names (because the service cost for ABMs is much higher than just walking the business owner/tech through changing IPs on a terminal over the phone).

However, since the DNS system follows the CO ideology, the ISPs along the way could simply switch away from the CO-stored root zone and make only certain names resolvable. This would let ISPs offer "services like Google! Something not all providers can say!" as a promo, attracting people who don't know better.

In my city, the vast majority of DNS names for city locations/devices are internal names anyway; none of them are accessible via the root zone. To systems like these, the aforementioned changes would make no difference in the world.

DNSSEC is generally optional. You can now speak DNSSEC to your local DNS server, and the lookup can stay DNSSEC all the way to the root (assuming there are no breaks in the chain). Prior to this you could authenticate your own DNS server's response, but you were never sure it was talking to the right party. If you send a standard DNS request, the server responds in the standard, albeit insecure, way. DNSSEC's sole purpose in life is to prevent DNS hijacking.
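For the curious, a DNSSEC-aware request boils down to setting one flag, the DNSSEC OK (DO) bit, inside an EDNS0 OPT pseudo-record appended to an ordinary query. A minimal sketch of building such a query on the wire (the name example.com, the transaction ID, and the 4096-byte buffer size are arbitrary choices, not anything the thread specifies):

```python
import struct

def build_dnssec_query(name: str, txid: int = 0x1234) -> bytes:
    # DNS header: ID, flags (standard query, RD=1), QDCOUNT=1, ARCOUNT=1
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 1)
    # Question section: length-prefixed labels, then QTYPE=A, QCLASS=IN
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)
    # EDNS0 OPT RR: root name (0x00), TYPE=41, CLASS=advertised UDP buffer,
    # then the repurposed TTL field (ext-RCODE, version, flags) and RDLEN=0.
    # The 0x8000 in the flags word is the DO bit.
    opt = b"\x00" + struct.pack("!HHBBHH", 41, 4096, 0, 0, 0x8000, 0)
    return header + question + opt

pkt = build_dnssec_query("example.com")
```

A resolver that doesn't understand EDNS0 simply ignores the OPT record and answers the old-fashioned way, which is why the upgrade is backwards compatible.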

A better question is whether there is any portable API for accessing this information. When I call getaddrinfo(), can I tell whether a particular address is DNSSEC-signed? OpenBSD has a flag for this, but is it going to be standardised? Do other platforms have anything equivalent? If it is using DNSSEC, can I also check easily if there is an IPSECKEY record and establish an IPsec connection using it if there is?

You can get a plugin for Firefox that tells you whether something is signed and validated, signed but not validated, or signed and broken. But you need a caching server that does all the checks for you. If you don't have a chain of trust through the entire chain (. -> com -> domain -> www) or a parent/child lookup, DNSSEC doesn't provide any verification of the results.
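To make the chain-of-trust idea concrete, here's a toy sketch, not the real DNSSEC record formats: each parent publishes a digest of its child's key (the role a DS record plays), and validation walks the chain top-down, so one bad link leaves everything below it unverifiable. All zone names and "keys" here are made up.

```python
import hashlib

# Illustrative only; real DNSSEC uses DNSKEY/DS records and RRSIG signatures.
zone_keys = {"com": b"com-key", "example.com": b"example-key"}

# Each parent publishes a digest of its child's key -- the DS-record role.
ds_records = {z: hashlib.sha256(k).hexdigest() for z, k in zone_keys.items()}

def chain_valid(path):
    """Walk the delegation path top-down; every link's digest must match."""
    return all(hashlib.sha256(zone_keys[z]).hexdigest() == ds_records[z]
               for z in path)

print(chain_valid(["com", "example.com"]))  # True: every link checks out

# If an attacker swaps in their own key mid-chain, validation fails there
zone_keys["com"] = b"attacker-key"
print(chain_valid(["com", "example.com"]))  # False: the chain is broken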

Then Google gets all your DNS queries, too. Don't be surprised if one day you get targeted ads based on which DNS queries were done from your IP. Well, maybe you shouldn't have clicked on that goatse link anyway...:-)

I'd also like to mention namebench, which will recommend the best DNS servers for you, as well as tell you which ones are pulling tricks like www hijacking or google.com hijacking (à la OpenDNS).

That is not generally true. Clients should not configure root servers as one of their recursive resolvers. There's nothing wrong with using root servers as non-recursive resolvers though.

I recommend running Unbound [unbound.net] locally. Unbound is a small recursive resolver that validates records with DNSSEC. You can run it as a service on your Windows machine and point your DNS at 127.0.0.1. This way your computer does all the cryptographic checking itself. It will talk to the root servers directly, but only infrequently.
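For anyone wanting to try it, a minimal unbound.conf along these lines might look like the following (a sketch; the paths and the trust-anchor location are assumptions, so check your platform's defaults):

```conf
server:
    # Only answer queries from this machine
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    # Validate with DNSSEC; keeps the root trust anchor current (RFC 5011).
    # The file path is distribution-dependent.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
```

With that in place, answers that fail DNSSEC validation come back as SERVFAIL rather than being silently passed through.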

They poison *all* DNS requests to any DNS server and return a random IP address for sites like twitter. This is precisely the type of thing that DNSSEC should help with (if only people knew how to set it up... it shouldn't be that hard)

Of course they *can* easily block DNSSEC. They can also easily block OpenVPN and other such things, but they are choosing not to for now. And while they aren't, wouldn't it be good to make use of this new standard?

DNSSEC has always seemed to me as being overly complex for what it is actually doing (I'd say the same thing about the DNS protocol in general).

It seems to me that DNSSEC was "designed by ISC for ISC," in the sense that the only people who have the time, resources and willpower to set up BIND/DNSSEC correctly are the ones running the root nameservers. However, I would have thought the interface between users and the multitude of privately operated nameservers would be the most critical aspect of securing DNS. If administr

DNSSEC doesn't need to be complex at all. Basically, any secure communications system of this kind must have a mechanism for authenticating whatever provides the service, authenticating (if/as necessary) the recipients of that service, encrypting and tamper-proofing both the request and the result, and (where necessary) limiting queries to those authorized for that recipient.

There are plenty of authentication mechanisms out there (Kerberos, SASL, SSL, TLS, S/Key) - some for the server, some for the cli

DNSSEC has always seemed to me as being overly complex for what it is actually doing (I'd say the same thing about the DNS protocol in general).

Given that the DNS protocol is about the simplest protocol currently deployed on the Internet, and yet has managed to scale to the insane degree demanded of it, I can't help think that this implies that you have absolutely no idea what you are talking about.

Since you are dealing with public-key cryptography, your private keys have to be kept private. That's not so difficult if you have a machine that's not connected to the Internet: you sneaker-net the zone-signing keys over and sign your zones there. If your private key-signing key got out, your signatures could easily be compromised. Not too difficult if you follow NIST's 140-page manual.

Of course, a machine that could do all the work for you would be what's best.

DNSSEC has always seemed to me as being overly complex for what it is actually doing (I'd say the same thing about the DNS protocol in general).

...

When I read about DNSCurve, it seems much simpler while achieving similar goals.

I read comments like this quite regularly. Actually, DNSCurve does something pretty different from DNSSEC.

DNSCurve encrypts communication between DNS clients and servers (or between DNS servers). Like with HTTPS or IMAPS, this means someone between you and your DNS provider can't see what you're looking up, or MITM you to change results.

But DNSCurve does nothing to guarantee you're getting a good answer. You have to trust your DNS provider: both that they are trustworthy and that they have their server s

What should DNS server administrators do to sign our own domains, and configure our servers to pay attention to DNSSEC when performing lookups?

I learned how to configure BIND a decade ago, and it's mostly just been smooth sailing since then. I have no idea what's involved in setting up DNSSEC, whether it's something I can figure out how to enable in 20 minutes or a huge project that really won't be feasible for me to undertake at all. Can somebody point me in the right direction?

It's apparently been over a decade since you've tried to look up information on the internet too. We no longer use gopher. There's this new thing called HTTP and WWW. There's also an upstart new search engine company that'll probably die out in a few years--but you can use them here [lmgtfy.com].

Configuration is relatively easy if all you've got is a couple of zones. Maintenance is what takes work. You don't just turn a switch on and let things go on their own.

Keys expire and need to be rolled over. Signatures expire even more often and need to be refreshed. Your TLD registrar needs a robust mechanism for establishing and maintaining the trust chain. And it can all go to hell in an instant if someone's behind a router that is filtering EDNS or TCP DNS queries, or truncating DNS packets.
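To make the "configuration is easy" part concrete, the basic signing cycle with BIND 9's tools looks roughly like this (a sketch: the zone name, key sizes, and file names are illustrative, and the <id> placeholders stand for the key tags that dnssec-keygen prints):

```sh
# Generate a key-signing key (KSK) and a zone-signing key (ZSK)
dnssec-keygen -a RSASHA256 -b 2048 -f KSK example.com
dnssec-keygen -a RSASHA256 -b 1024 example.com

# $INCLUDE the .key files in the zone file, then sign with both keys
dnssec-signzone -o example.com -k Kexample.com.+008+<ksk-id>.key \
    db.example.com Kexample.com.+008+<zsk-id>.key

# Load db.example.com.signed in named.conf, hand the KSK's DS record to
# your registrar, and re-sign before the RRSIGs expire (30 days by default)
```

The last comment is the maintenance burden in a nutshell: signing is a one-off command, but the re-signing and key rollovers have to keep happening forever.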

I never really understood the reason for this. What I mean is: per the DNS RFC, if a response exceeds 512 bytes, the lookup falls back to TCP instead of UDP. Fair enough. But why 512 bytes? Why can't we go up to at least 1500 bytes (the size of an Ethernet frame)? Or beyond? I suppose it comes down to packet fragmentation issues, but still, at least 1024 bytes should be fine.

I suppose this 'stupid' 512 byte limit was designed in the early days of the internet, when it was a munge

The Internet is not an Ethernet network. The Internet Protocol guarantees that datagrams under 576 bytes (including packet header) are not fragmented, but a 1500 byte Ethernet frame still will be. You don't find Ethernet anywhere other than the edges of the Internet. The backbones still use a variety of other standards.

Fragmentation is a problem for a UDP-based protocol, which is why pretty much any UDP-based protocol tells you not to use packets bigger than the network MTU (1500 bytes for Ethernet, 576 for the Internet).
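For what it's worth, the arithmetic behind those numbers is simple (header sizes taken from RFC 791 for IPv4 and RFC 768 for UDP):

```python
# Minimum IPv4 datagram size every host must be able to handle (RFC 791)
MIN_IPV4_DATAGRAM = 576
IPV4_HEADER_MIN = 20   # minimal IP header; options can add up to 40 more
UDP_HEADER = 8         # fixed UDP header size (RFC 768)

room_for_payload = MIN_IPV4_DATAGRAM - IPV4_HEADER_MIN - UDP_HEADER
print(room_for_payload)  # 548 -- DNS's 512-byte cap fits with headroom
```

The gap between 548 and 512 leaves room for IP options, which is presumably why DNS settled on the round power of two rather than the exact maximum.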

Thanks for the explanation. Interestingly, I noticed an exception to this 512 byte size limit - the 'unbound' resolver daemon. I run this on my LAN and from what I can see, it seems to ignore this 512 limit and continues to do full UDP lookups against the root name servers, which are still happy to serve a valid reply to this (here is a full tcpdump):

The "packets of 576 bytes can't be fragmented" claim is the commonly stated reason, but it is wrong; it's a myth/misunderstanding. It has, in practice, been true since probably the late 1980s, but DNS was around long before that. Indeed, if you read some of the earlier RFCs, it is quite clear that packets of any size could be fragmented, down to something like 16 bytes of payload per fragment.
No, the reason for the 512-byte payload size is much more basic than that. Back in the early 80s, memory was tight: you could have mainframes supporting dozens of users on a machine with maybe 1MB of memory, and each user could have more than one active network connection. IP supports packet sizes up to around 64KB, but it would be unreasonable to expect every host to accept such a large packet. Hosts could receive fragments from all those packets piecemeal and out of order, so reconstructing each packet would require holding lots of 64KB buffers, and each of those buffers would be over 6% of all available memory.
It would be very unreasonable to expect every host on the internet to accept packets of any size. Protocols like TCP can negotiate the segment size, but for UDP that gets messy and slow. So it is a *requirement* that each host on the internet accept a packet with 512 bytes of payload. That packet can be fragmented, but it has to be accepted.

Sooner or later it will be common for DNSSEC-enabled servers to have expired keys, and the sysadmin who installed DNSSEC (the only person who knows how to renew the key) will have moved on. At that point Aunt Maude will be surfing the Net and she'll get a popup: "Warning! Zone server key has expired!" (or whatever). Auntie will of course click "Continue Anyway," because she's seen that popup and bypassed it many times before. Of course, sooner or later Maude will log on to what she thinks is the bank