A new DNS protocol extension aims to reduce network congestion and latency for …

A group of DNS providers and content delivery network (CDN) companies has devised a new extension to the DNS protocol that aims to more effectively direct users to the closest CDN endpoint. Google, OpenDNS, BitGravity, EdgeCast, and CDNetworks are among the companies participating in the initiative, which they are calling The Global Internet Speedup.

The new DNS protocol extension, which is documented in an IETF draft, specifies a means for including part of the user's IP address in DNS requests so that the nameserver can more accurately pinpoint the destination that is topologically closest to the user. Ensuring that traffic is directed to CDN endpoints that are close to the user could potentially reduce latency and congestion for high-impact network services like video streaming.
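The option payload the draft describes is tiny. Below is a rough sketch of the IPv4 encoding in Python (the helper name is mine, and the DNS message framing and option-code assignment are omitted); the key privacy detail is that only the octets covered by the prefix length are sent, not the user's full address:

```python
import struct

def encode_client_subnet(ip: str, prefix_len: int) -> bytes:
    """Encode the edns-client-subnet OPTION-DATA for an IPv4 address.

    Layout per the draft: FAMILY (2 bytes), SOURCE PREFIX-LENGTH (1 byte),
    SCOPE PREFIX-LENGTH (1 byte, zero in queries), then only as many
    address octets as the prefix length covers -- host bits are omitted.
    """
    octets = [int(o) for o in ip.split(".")]
    keep = (prefix_len + 7) // 8          # octets needed to cover the prefix
    addr = bytes(octets[:keep])
    return struct.pack("!HBB", 1, prefix_len, 0) + addr  # family 1 = IPv4

# A resolver forwarding a query for a client at 198.51.100.7 would
# typically reveal only the /24, not the full address:
data = encode_client_subnet("198.51.100.7", 24)
print(data.hex())  # 00011800c63364
```

The nameserver can then pick an answer for that /24 and indicate, via the scope field in its response, how widely the answer may be cached.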

The new protocol extension has already been implemented by OpenDNS and Google's Public DNS, and it works with the CDN services that have signed on to participate in the effort. Google and OpenDNS hope to make the protocol extension an official IETF standard. Other potential adopters, such as ISPs, are free to implement it from the draft specification.

It's not entirely clear how much impact this will have on real-world network performance. GeoIP lookup technology is already used by some authoritative DNS servers for location-aware routing, but the new protocol extension reportedly addresses some of the limitations of those earlier approaches.

It certainly does not take 5s for me; more like 1s for a complete refresh. Pay attention to what your status bar says: "Looking up www.ars..." is DNS, while "connecting to ..." or "Transferring from ..." is HTTP and Ars' fault.

Be sure that you use a good DNS provider. At my home, switching from Earthlink/Time Warner's own DNS to OpenDNS was like night and day, a good two years before today's announcement. Visit opendns.com for instructions.

I don't want to use this protocol unless there is a way for the individual user to blacklist certain DNS providers who are notorious for misdirection, non-resolution, or hijacking (you ISPs know who you are and what you are doing).

Because you (and lots of other people) are not understanding the difference between bandwidth and latency.

I've never found snarky replies that assume ignorance to be very productive, but to each his own. In this case your assumption is wrong.

My point was that while optimizing protocols for video requests is nice, there are still quite a few optimizations and advanced protocols that could benefit the simple process of loading a web page.

A lot of the latency is caused by the need for compatibility and by widespread development techniques, rather than by the speed of light between the west and east coasts.

Google IS working on this. Look at SPDY, which is a replacement protocol for HTTP that is focused on reducing latency. SPDY is implemented by a number of Google properties, and in Chrome.

It would be fair to complain that SPDY is not yet public, so no one else can benefit from its speedup. Google's reply to this is that they are still working on improving it, and going public too soon would severely constrain future improvements. I see, so far at least, no reason to doubt them on this score.

Why is this necessary? Content providers already see your IP address, so why can't they already shift you to a better server?

Some CDNs (the Apple store, for one) try to load balance based on DNS lookups, so when you use a third party such as Google or OpenDNS, every customer using OpenDNS gets the same Apple server, making performance go to shit.

This will help the DNS server on the remote end make intelligent decisions about what IP to give back based on the end user's IP, instead of the middleman's (OpenDNS, Google, etc.).

It's not limited to OpenDNS and Google, actually. *All* caching DNS servers will do the same.

The difference in the case of the Google and OpenDNS initiative is that instead of having the CDNs' authoritative nameservers do the load-distributing, the open DNS providers' servers will do it. They (the open DNS providers) only need 'participation' from CDNs to properly do load-distribution.

This initiative actually does make sense: almost no one ever accesses the CDNs' authoritative nameservers directly, but a *lot* of people use Google's and OpenDNS' nameservers (I myself use 8.8.8.8 and 8.8.4.4 everywhere), thus bypassing the ISPs' caching nameservers.

Hmm... I'm not really sold on SPDY. It seems to me like 'mere' compression of HTTP, which will be pointless for images, audio, and video (they are already compressed). CMIIW.

I'd rather see a protocol built from the ground up to address HTTP's shortcomings with regard to today's interactive and/or multimedia-heavy webpages. Waka comes to mind.

SPDY is NOT DESIGNED to make video (or large file transfers) faster. It is designed to reduce the latency when transmitting small packets of info between client and web server --- think UI interactions in GMail for example. It is silly to complain that it doesn't solve a problem it wasn't designed to solve. And there is rather more to it than just "compression of HTTP".

Multiplexing, prioritization, most browsers do that. Content compression using gzip is also supported by modern browsers. So that leaves HTTP header compression that's uniquely SPDY.
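For what it's worth, both the header-compression win and the "already compressed" objection from earlier in the thread are easy to demonstrate, with zlib standing in for SPDY's actual header-compression scheme (this is an illustration, not SPDY's algorithm):

```python
import os
import zlib

# Repetitive HTTP request headers compress extremely well...
headers = (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
           b"User-Agent: Mozilla/5.0\r\nAccept-Encoding: gzip\r\n") * 20
header_ratio = len(zlib.compress(headers)) / len(headers)
print(header_ratio)   # tiny ratio: a big per-request saving

# ...but re-compressing already-compressed media (simulated here with
# incompressible random bytes) buys essentially nothing.
media = os.urandom(len(headers))
media_ratio = len(zlib.compress(media)) / len(media)
print(media_ratio)    # ratio near 1.0: no saving at all
```

Which is consistent with both sides here: header compression shaves latency off every small request, while the media payload itself is untouched either way.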

Whatever HTTP and SPDY are designed for, there's no arguing that a significant amount of web traffic consists of multimedia content (images, audio, and video), and HTTP/SPDY are just not efficient enough for it.

With that context, SPDY in my opinion is just a stop-gap 'band-aid' solution.

Waka protocol is designed with such usage in mind. It is designed to match/support REST, and its 'framed' delivery also matches how virtually all multimedia content is constructed. This allows interspersing of metadata and/or REST client-server interaction between frames.

Too bad the Waka protocol still exists only in the mind of Roy Fielding; there's no formal definition or standard that I know of. But that's beside the point.

Now, if only Google were willing to contact Mr. Fielding and figure out a way to push the finalization and deployment of the Waka protocol...

OpenDNS? Aren't those the guys who keep undermining the DNS system by redirecting all failed requests instead of sending a proper "does not resolve"? And now they hack the protocol to speed up the Internet? LOL!

Seems to me that the DNS server needs to decide which of several possible IPs it returns to the requestor. To do that, it needs to know which host IP is closest to the requestor's IP. But surely the DNS already knows the requestor IP because it has to answer the request, so I don't understand what's new here. How does "adding a portion of the requestor's IP to the DNS request" provide info that wasn't already known?

Be sure that you use a good DNS provider. At my home, switching from Earthlink/Time Warner's own DNS to OpenDNS was like night and day, a good two years before today's announcement. Visit opendns.com for instructions.

Same here. Time Warner was driving me mad: sometimes the DNS servers would not resolve for almost a minute (!!!) at a time, I'd get page load errors, etc. Switching to OpenDNS solved every issue I had. I don't know how Time Warner and others can let their servers get that bad and not even realize it (this was going on for over a month!).

Seems to me that the DNS server needs to decide which of several possible IPs it returns to the requestor. To do that, it needs to know which host IP is closest to the requestor's IP. But surely the DNS already knows the requestor IP because it has to answer the request, so I don't understand what's new here. How does "adding a portion of the requestor's IP to the DNS request" provide info that wasn't already known?

Usually the DNS server receives a recursive lookup from a client like you or me.

I.e., what typically happens is that a client makes a DNS query to the ISP's nameserver. If the ISP nameserver doesn't already have the answer cached (as it would if you had gone to arstechnica.com 5 minutes before me), it eventually goes out and asks arstechnica's nameserver what the IP of the website is.

For a typical consumer ISP, you will probably have been given a 'close' nameserver when the ISP gave you an IP address. Arstechnica sees this and perhaps chooses to return the address of a webserver near that nameserver (or perhaps even a server that is part of a content distribution network that is sitting in leased space within the ISP itself.)

When you use OpenDNS or Google's Public DNS, though, that locality information is lost. All arstechnica sees is Google's or OpenDNS's servers.
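A toy sketch makes the loss concrete (every hostname and prefix below is made up): a server that keys its answer on the querying address alone returns a generic fallback when the query arrives from a faraway public resolver, but picks the right edge node when it can see the user's own prefix, which is exactly what the client-subnet extension forwards.

```python
import ipaddress

# Hypothetical CDN points of presence, keyed by the client networks
# they serve best.
POPS = {
    ipaddress.ip_network("203.0.113.0/24"): "tokyo-edge.example.net",
    ipaddress.ip_network("198.51.100.0/24"): "newyork-edge.example.net",
}
DEFAULT = "fallback.example.net"

def pick_endpoint(addr: str) -> str:
    """Return the PoP serving the network this address falls in."""
    ip = ipaddress.ip_address(addr)
    for net, host in POPS.items():
        if ip in net:
            return host
    return DEFAULT

# Without client-subnet, the authoritative server sees only the resolver:
print(pick_endpoint("8.8.8.8"))        # fallback.example.net
# With the extension it sees the user's own /24 and can do better:
print(pick_endpoint("203.0.113.42"))   # tokyo-edge.example.net
```

Real CDNs use far richer topology data than a prefix table, but the decision input is the same: an address that stands in for the user's location.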

On the subject of the "Global Internet Speedup": why, in 2011, does it still take 3 to 5 seconds to load a web site like Ars (same as most sites)?

I'm talking 3 to 5 seconds on a brand-new high-end laptop, with one of the fastest browsers available (Chrome 13), and a reasonably fast broadband connection.

Why can we reliably stream HD video from Netflix at over 2Mb/sec, yet it takes 5 seconds to refresh a typical commercial web site?

Because you (and lots of other people) are not understanding the difference between bandwidth and latency.

For example, one could strap a 16GB memory card to a carrier pigeon rather than send the data to a friend over a landline 2Mb/s connection. If the friend lives anywhere reachable within an 18-hour pigeon flight (which can be up to several hundred miles away), the bandwidth of this IP over Avian Carriers (IPoAC) link can match or exceed the landline Internet connection. However, the latency of IPoAC is the flight time of the pigeon, whereas for the landline connection it's in milliseconds. If you just want a few kilobytes of data transferred, the typical activity of visiting a webpage, latency is the concern, not bandwidth.
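The trade-off is easy to put numbers on. A quick back-of-the-envelope check using the figures above (a 16GB card, an 18-hour flight, a 2Mb/s line):

```python
# Figures from the comment above: 16 GB payload vs. a 2 Mb/s landline.
payload_bits = 16e9 * 8            # 16 GB memory card, in bits
line_rate = 2e6                    # 2 Mb/s

line_seconds = payload_bits / line_rate
print(line_seconds / 3600)         # ~17.8 hours to push 16 GB through the line

# So the pigeon roughly ties the line on throughput over an 18-hour
# flight (and wins outright for any shorter one), but its latency is the
# full flight time, while the line delivers its first byte in tens of
# milliseconds. Fetching a few kilobytes of web page is latency-bound.
```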

Why is this necessary? Content providers already see your IP address, so why can't they already shift you to a better server?

My content provider (or rather, my content provider's DNS system) currently sees the location of my ISP's DNS servers, which are several hundred miles away from my actual location. My content provider won't see my actual IP until my first connection to them, after the DNS process has finished.

First swallows can carry coconuts, now pigeons can carry 16GB of data?

Because mainstream media sites like Ars embed javascript from all over the Internet: Facebook "Like" buttons, "Tweet" buttons, Google Analytics, slow ad servers...

You could probably count 10-20 different scripts from different sources, all contributing to perceived latency and plastering cookies to track you.

+1 - This is also why I use Firefox + Adblock Plus to remove over 99% of the advertisements on websites. When people start to complain about a website's ads, and it's one I use, I'll turn off Adblock just to see what the fuss is about.

Most websites I wouldn't even bother to visit if I didn't have that addon.

On the subject of "Global Internet Speedup", why does in 2011 it still take 3 to 5 seconds to load a web site like Ars (same as most sites)?

Google Analytics, DoubleClick, Flash, etc. Use Privoxy and block javascript and Flash, and it doesn't take anywhere near a full second to load most pages. Or at least that's my experience.

Seriously. If Google wants to speed up the web, then they need to start with themselves. When I find a page loading slowly, Google Analytics is the cause more than half the time. The second biggest cause is ads, including DoubleClick. The fact that some podunk website can serve their portion of the page faster than Google can is a disgrace.

Inb4 some Ars staffer throws a fit. Technically, it's an EULA violation to block ads on Ars. They even blocked access for Adblock users for a day and wrote a highly biased article afterwards, one that many other site admins debunked and/or laughed at.

They didn't say anything about blocking social crap though (fanboy's annoyances and tracking list).

I'm not sure, though IIRC it doesn't. OpenDNS does by default, but you can opt out. But really, unless the DNS server of your ISP is unreliable or overloaded during peak hours (or messes with responses and you can't opt out), there's no need to use a third-party one. Most of the time your ISP will have a resolver in the exchange near you (as I suppose ISPs in the USA do, where networks are spread over long distances) and it will be lightning-fast.