Meet the network operators helping to fuel the spike in big DDoS attacks

SoftLayer, GoDaddy, AT&T, and iWeb make the list of the 10 most abused networks.

A list of the 10 network operators with the highest number of open DNS resolvers, as measured by CloudFlare. Over the past three weeks, third-party attackers have been abusing them around the clock in an attempt to knock a website offline.

A company that helps secure websites has compiled a list of some of the Internet's biggest network nuisances—operators that run open servers that can be abused to significantly aggravate the crippling effects of distributed denial-of-service attacks on innocent bystanders.

As Ars recently reported, DDoS attacks have grown increasingly powerful in recent years, thanks in large part to relatively new tools and methods. But one technique that is playing a key role in many recent attacks isn't new at all. Known as DNS amplification, it relies on open domain name system servers to multiply the amount of junk data attackers can direct at a targeted website. By sending a modest-sized domain name query to an open DNS server and instructing it to send the result to an unfortunate target, attackers can direct a torrent of data at the victim site that is 50 times bigger than the original request.
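The multiplier arithmetic is worth making concrete. A small Python sketch, in which the 50x factor is the figure cited above and the attacker bandwidth is an invented illustrative number:

```python
# Back-of-the-envelope DNS amplification arithmetic. The 50x factor is
# the one cited in the article; the attacker bandwidth is illustrative.

AMPLIFICATION = 50  # response-to-query size ratio cited above


def reflected_bandwidth(attacker_bps: float, factor: float = AMPLIFICATION) -> float:
    """Bandwidth arriving at the victim when every spoofed query is
    reflected through open resolvers and amplified by `factor`."""
    return attacker_bps * factor


# A modest 400 Mbps of spoofed queries becomes a 20 Gbps flood,
# the scale of the sustained attack described in this article.
attacker = 400e6
print(f"{reflected_bandwidth(attacker) / 1e9:.0f} Gbps at the victim")
```

The point of the sketch is that the attacker never needs the victim-facing bandwidth himself; the open resolvers supply it.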

Engineers at San Francisco-based CloudFlare have been shielding one customer from the effects of a DDoS attack that has flooded it with 20 gigabits per second of data around the clock for three weeks. While attacks of 100Gbps aren't unheard of, 20Gbps is still a massive attack that even large botnets are generally unable to wage.

CloudFlare engineers soon determined the attackers behind the assault were abusing the open DNS resolvers belonging to a variety of large network operators. Many of these are well-known brand names: US-based SoftLayer, GoDaddy, AT&T, iWeb, and Amazon. The sustained attack comes as several distinct botnets appear to have been updated to enumerate huge lists of open resolvers. That means amplification attacks could become more common.

Given the damage they can cause to innocent bystanders, such open servers have long been considered a nuisance. It's the Internet equivalent of a dilapidated crack house in the inner city or a rural front yard filled with old washing machines and rusted car parts. As a result, operators have been admonished repeatedly to make DNS resolvers available only to addresses located on their network, rather than to the Internet as a whole.
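That repeated admonition usually comes down to a couple of lines of resolver configuration. A minimal BIND named.conf sketch, where the directive names are real BIND options but 192.0.2.0/24 is only a placeholder for an operator's own address space:

```
options {
    recursion yes;
    // Answer recursive queries only for our own address space,
    // not for the Internet at large.
    allow-recursion { 127.0.0.1; 192.0.2.0/24; };
};
```

Other resolver software has equivalent knobs; the principle is the same: recursion for your customers, nothing for everyone else.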

The CloudFlare engineers compiled a list of the networks hosting the open DNS servers and ranked them by those responsible for the most damage. With 68,459 unique open resolvers participating in the ongoing attack, there was plenty of blame to go around. The list names networks located on every corner of the globe, including those owned by Amazon, Turk Telekomunikasyon Anonim Sirketi, and Nepal Telecommunications Corporation. Still, CloudFlare CEO Matthew Prince found that the top 10 offenders provided 15,611 of those servers—or almost 23 percent of the firepower behind the attack.

"Wonder why there's been an increase in big DDoS attacks?" Prince wrote in a blog post published on Tuesday. "It's in large part because the network operators listed above have continued to allow open resolvers to run on their networks and the attackers have begun abusing them."

In a previous blog post documenting CloudFlare's work in blocking DDoS attacks that reached an astounding 65Gbps in size, Prince said the company regularly reaches out to the worst open DNS offenders. Frequently, the advisories fall on deaf ears.

"One of the great ironies when we deal with these attacks is we'll often get an e-mail from the owner of the network where an open resolver is running asking us to shut down the attack our network is launching against them," he explained. "They're seeing a large number of UDP packets with one of our IPs as the source coming in to their network and assume we're the ones launching it. In fact, it is actually their network which is being used to launch an attack against us."

Ars contacted representatives of all four US-based companies and received replies from all but AT&T. The three responding operators stressed they take the issue of open, "recursive" DNS servers seriously and recognize them as a security issue that can affect the overall health of the Internet. They went on to describe the difficulty of ensuring each DNS server running on their network is secured properly, in large part because improper configurations are often the result of decisions made by paying customers.

"As an unmanaged hosting provider, SoftLayer does not make proactive direct changes to our customers' servers," said Ryan Carter, a manager in the abuse department at SoftLayer. "These customers are able to run their own authoritative name servers on their servers, and they're able to configure them for resolvers. DNS is the hardest simple protocol out there because so many people have no clue what it is or how it works. Instead of learning the best practices of DNS management, they'll take the path of least resistance to just get the functionality online."

A statement attributed to GoDaddy Director of Information Security Operations Scott Gerlach said a "handful of Go Daddy customers are using the dedicated and virtual dedicated server environments to configure DNS on their systems" and disputed the number of open DNS servers cited by CloudFlare.

"Anyone who detects malicious traffic emanating from our network can best serve the interest of the Internet community by contacting us quickly and directly," the statement continued. "This will trigger a specific and swift investigation so we can take appropriate action."

In an e-mail, iWeb co-founder Martin Leclair wrote: "Open resolvers are vulnerable to multiple malicious activities and... the best practice is to prevent open resolvers. So when we detect open resolvers on our network we recommend to our users to follow the best practices. It is not that easy because the DNS products can sometimes default to open resolver when installed, and customers need to tweak the configurations to limit DNS resolution."

Given that many private efforts by CloudFlare haven't worked, the latest name-and-shame approach can't hurt. If you're a manager at one of the above-named operators, or at any of the almost 4,000 other operators named in the complete list, you might think about getting hold of someone at CloudFlare. They'll be happy to help you make the Internet a more secure place by restricting access to your DNS servers.

Isn't the Google 8.8.8.8 DNS server/infrastructure considered to be "open"? edit: And if so, why are they getting a pass from being on this list? Having a load balancer/distribution architecture to hide all their open DNS servers behind a single IP, they still probably have dozens if not hundreds of boxes serving the load, and gobs of bandwidth to boot.

If you're a manager at one of the above-named operators, or at any of the almost 4,000 other operators named in the complete list, you might think about getting hold of someone at CloudFlare. They'll be happy to help you make the Internet a more secure place by restricting access to your DNS servers.

Either misinformed, or disingenuous. The actual network operators aren't the ones hosting the 15,611 open resolvers - customers who have leased and/or colocated servers there are.

It would be nice to see more info about how this "amplification" works better than directly targeting the victim does. Definitely nicer than seeing a summation like this that makes it clear that the author himself (or his editor, more likely) is missing the ball.

So why doesn't this work with authoritative nameservers? Presumably those need to be open/queryable by all, right? And they can be used in the same way.

No, not unless they're configured for open recursion. Authoritative nameservers are normally supposed to return no answer for queries to zones they don't host, rather than recursively fetching the answer for the client in question.

And regarding Softlayer/The Planet, I'd wager a guess that the sole reason they're so highly listed is because of one of their biggest customers: Hostgator.

Hostgator is a primarily cPanel shop, and it's been known to have serious flaws before. Also a lot of customers fail to properly secure their webservers, either the applications they run to just outright having OS vulnerabilities or weak passwords. From there, it's an easy matter of getting exploited and participating.

So why doesn't this work with authoritative nameservers? Presumably those need to be open/queryable by all, right? And they can be used in the same way.

No, not unless they're configured for open recursion. Authoritative nameservers are normally supposed to return no answer for queries to zones they don't host, rather than recursively fetching the answer for the client in question.

Right, but as described in the post, they're simply doing a query and having the result of the query sent to another IP. So what does it matter if I query google.com from an open resolver and have the result sent to DoS an IP, versus querying the authoritative nameserver for google.com and having the result sent to DoS an IP?

I'm guessing here that the answer lies in the actual query being performed, it's likely randomized or something else that wouldn't be possible (or more difficult) if it had to figure out and go to the authoritative server.

It would be nice to see more info about how this "amplification" works better than directly targeting the victim does. Definitely nicer than seeing a summation like this that makes it clear that the author himself (or his editor, more likely) is missing the ball.

@Alan H. You have to spoof the UDP packet's source address first, so the source address is different from the actual source. If you have a large network and analysis infrastructure like Google, it's probably not hard to tell that the packet doesn't come from the specified source address. Then Google can safely ignore that request.

In addition, how often do you need to resolve a domain name? Not very often. Any open DNS server can just record requests from each IP address and set a dynamic threshold to prevent overly frequent requests. It's easy anomaly detection for any decent-sized network, and no harder than blocking repetitive malicious requests.

I think those US company representatives made it fairly clear that it's customers' DNS servers inside their network causing the problems. It could indeed be the case. If customers are clueless, they make simple mistakes. DNS amplification is really not that hard to prevent once you are aware of the configurations.

Isn't the Google 8.8.8.8 DNS server/infrastructure considered to be "open"? edit: And if so, why are they getting a pass from being on this list? Having a load balancer/distribution architecture to hide all their open DNS servers behind a single IP, they still probably have dozens if not hundreds of boxes serving the load, and gobs of bandwidth to boot.

Wut? A pass? This article is based on *actual* problems noticed on actual networks. If you have evidence that this problem happens with Google's infrastructure, I'm sure they would love to hear about this. But any suggestion that they are picking and choosing who to target is FUD unless you have evidence to back that up. They reported on the networks whose problems they noticed. If they didn't mention Google the default assumption is that they didn't encounter any attacks that used their servers. You know, reporting based on evidence rather than theorizing.

Isn't the Google 8.8.8.8 DNS server/infrastructure considered to be "open"? edit: And if so, why are they getting a pass from being on this list? Having a load balancer/distribution architecture to hide all their open DNS servers behind a single IP, they still probably have dozens if not hundreds of boxes serving the load, and gobs of bandwidth to boot.

I don't think Google gets a pass.. BUT my guess is they have measures in place to help protect against malicious use of their services. Given that they are so secretive about their infrastructure, we will never know unless there is an issue.

Open recursive resolvers are attractive targets for launching amplification attacks. They are high-capacity, high-reliability servers and can produce larger responses than a typical authoritative nameserver — especially if an attacker can inject a large response into their cache. It is incumbent on any developer of an open DNS service to prevent their servers from being used to launch attacks on other systems.

Amplification attacks can be difficult to detect while they are occurring. Attackers can launch an attack via thousands of open resolvers, so that each resolver only sees a small fraction of the overall query volume and cannot extract a clear signal that it has been compromised.

Malicious traffic must be blocked without any disruption or degradation of the DNS service for normal users. DNS is an essential network service, so shutting down servers to cut off an attack is not an option, nor is denying service to any given client IP for too long. Resolvers must be able to quickly block an attack as soon as it starts, and restore fully operational service as soon as the attack ends.

The best approach for combating DoS attacks is to impose a rate-limiting or "throttling" mechanism. Google Public DNS implements two kinds of rate control:

Rate control of outgoing requests to other nameservers. To protect other DNS nameservers against DoS attacks that could be launched from our resolver servers, Google Public DNS enforces per-nameserver QPS limits on outgoing requests from each serving cluster.

Rate control of outgoing responses to clients. To protect any other systems against amplification and traditional distributed DoS (botnet) attacks that could be launched from our resolver servers, Google Public DNS performs two types of rate limiting on client queries:

To protect against traditional volume-based attacks, each server imposes per-client-IP QPS and average bandwidth limits.

To guard against amplification attacks, in which large responses to small queries are exploited, each server enforces a per-client-IP maximum average amplification factor. The average amplification factor is a configurable ratio of response-to-query size, determined from historical traffic patterns observed in our server logs.

If queries from a specific source IP address exceed the maximum QPS, or exceed the average bandwidth or amplification limit consistently (the occasional large response will pass), we return (small) error responses or no response at all.
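A toy version of those two per-client controls might look like the following Python sketch. The QPS cap, amplification threshold, and window length are invented illustrative values, not Google's actual settings:

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds only -- real services tune these from traffic logs.
QPS_LIMIT = 100              # max queries per second per client IP (assumed)
MAX_AVG_AMPLIFICATION = 5.0  # max average response/query size ratio (assumed)
WINDOW = 1.0                 # seconds of history for the QPS check
RATIO_HISTORY = 50           # number of recent size ratios to average


class ClientLimiter:
    """Per-client-IP QPS limit plus average-amplification limit."""

    def __init__(self):
        self.times = defaultdict(deque)   # client -> recent query timestamps
        self.ratios = defaultdict(deque)  # client -> recent response/query ratios

    def allow(self, client_ip, query_len, response_len, now=None):
        now = time.monotonic() if now is None else now
        t = self.times[client_ip]
        t.append(now)
        while t and now - t[0] > WINDOW:
            t.popleft()
        if len(t) > QPS_LIMIT:
            return False  # over QPS: drop, or answer with a small error
        r = self.ratios[client_ip]
        r.append(response_len / query_len)
        if len(r) > RATIO_HISTORY:
            r.popleft()
        # The occasional large response passes; a high *average* does not.
        return sum(r) / len(r) <= MAX_AVG_AMPLIFICATION
```

A single large DNSSEC answer slips through, but a client whose responses are consistently dozens of times larger than its queries gets cut off, which is exactly the shape of an amplification flood.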

Isn't the Google 8.8.8.8 DNS server/infrastructure considered to be "open"? edit: And if so, why are they getting a pass from being on this list? Having a load balancer/distribution architecture to hide all their open DNS servers behind a single IP, they still probably have dozens if not hundreds of boxes serving the load, and gobs of bandwidth to boot.

Google has spent a lot of effort securing Google Public DNS. They use rate limiting and enforce a maximum amplification ratio per IP address. While I'm sure that Google's solutions aren't perfect, they are clearly doing enough to drastically limit the number of attacks sent through the service. In contrast, the unwitting administrators unknowingly running open DNS resolvers have, by definition, no extra security measures in place.

To use an analogy, I lock my front door and only open it for authorized visitors, just as a secured DNS resolver only serves requests for machines on its own network. In contrast, the local shopping mall keeps its doors unlocked so it can serve the public, but hires security guards, uses CCTV, and bans troublemakers to provide safety, just as Google Public DNS is open to the public, but uses special security measures to mitigate the risks. Either approach can work. What doesn't work is leaving your place unlocked, unattended, and looking like an inviting place for some enterprising folks to set up a meth lab.

Here is how the attack works: send a request to a DNS server, but make the request look as if it comes from your attack target, and make it a request for a lot of data. The DNS server sends a large response (larger than the request, anyway), but it sends the response to the fake return address, which is really the attack target.

The main reason that DNS servers are attractive for this is that DNS uses UDP, where the return address is very easy to spoof. By comparison, http servers use TCP, which is harder to spoof. (DNS uses UDP because it is more efficient.) DNS servers can send moderately large responses by asking for a domain's TXT record, which is just a human-readable comment. (So the attacker needs to find some domains that have large TXT records, then ask the DNS for those domains.)
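The size asymmetry is easy to demonstrate on the query side. A Python sketch that hand-builds a minimal TXT query in the RFC 1035 wire format, without sending anything; example.com is just a placeholder name:

```python
import struct

def dns_query(name: str, qtype: int = 16) -> bytes:  # 16 = TXT record type
    # 12-byte RFC 1035 header: ID, flags, then four section counts.
    header = struct.pack(">HHHHHH",
                         0x1234,  # transaction ID
                         0x0100,  # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question, no other records
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode()
                     for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = dns_query("example.com")
print(len(query), "bytes of DNS payload")
```

The whole question fits in a few dozen bytes of UDP payload, while a TXT or DNSSEC-laden response can run to kilobytes; that ratio is the amplification.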

An open (and recursive) DNS server is simply a DNS server that responds to requests from anywhere (that's the open part) and gives responses about any domain (that's the recursive part). There are two other types of DNS servers: authoritative (open but non-recursive) servers, which normally only send responses about the domains they contract with (less useful for the attack unless they happen to have large TXT records). And closed (and recursive) DNS servers that only respond to requests from some addresses; this is typically a server owned by the ISP which only responds to requests originating inside its own network.

While limiting DNS to closed servers would certainly help (this is what CloudFlare suggested, I think), the trend for many users has been away from that because of poor response time or because ISPs were abusing their DNS for advertising purposes. Instead of blaming open DNS servers, you might as well blame fast DNS servers, and publish accusatory lists of people with powerful machines.

Now, the ISP responses to CloudFlare suggest that many DNS servers are only accidentally recursive, and were really intended to be authoritative. If that is the case, better defaults in DNS software and better education of those setting it up are what is needed. The ISPs could certainly play a role in that.

DNS servers can send moderately large responses by asking for a domain's TXT record, which is just a human-readable comment. (So the attacker needs to find some domains that have large TXT records, then ask the DNS for those domains.)

The attacks I observed asked for the DNS records for isc.org. ISC writes the software that does most Internet DNS, they have all the bells and whistles set on their domains. In particular their DNSSEC records are very large. A small query generates a huge response.

Quote:

So what does it matter if I query google.com from an open resolver and have the result sent to DoS an IP, versus querying the authoritative nameserver for google.com and having the result sent to DoS an IP?

Because you want a response MUCH larger than your query. The attacker's entire outbound bandwidth is full of queries, he can't manage any more. If the DNS responses are the same size as the queries he might as well have directed his outbound bandwidth right at the target.

The only advantage of your scenario is that the traffic is laundered through google. If the victim wants to (have his ISP) manually back trace the forged packets to find the botnet he can't - he has to ask Google to start the trace.

edited to add: I just ran a sniffer. My 78-byte, single-packet DNS query for "isc.org. any" generates 3,357 bytes of response, in 3 packets. You can see that if an attacker with a small 1M pipe uses it entirely to forge queries, he can drive some high-bandwidth DNS server to generate way more bandwidth directed at his target.

The attacks I observed asked for the DNS records for isc.org. ISC writes the software that does most Internet DNS, they have all the bells and whistles set on their domains. In particular their DNSSEC records are very large. A small query generates a huge response.

True. And if you look at the UDP header, there's also a tell-tale sign that they're not legit, because I can't think of any reason why a real DNS stub resolver would do this; and the only reason I can come up with for why the amplification packets are like this is to get through imprecise firewalls.

And regarding Softlayer/The Planet, I'd wager a guess that the sole reason they're so highly listed is because of one of their biggest customers: Hostgator.

Former HostGator employee here. I can confirm that a large number of the nearly 2000 shared servers and the-hell-if-I-know-how-many reseller servers are programmed to be open resolvers, at least as of several months ago. (I won't even mention the justification I received when I asked about it, as I can't be sure how official it was.)

In their defense, the server monitoring team probably pays attention to this sort of thing and shuts it down when it happens. They're quite good, given the large amount of shenanigans going on with customers and attackers.

EDIT: Also, given the recent purchase of HG by EIG, things may be changing more for the better in terms of technical people making the decisions.

DOUBLE EDIT: Also, this is not to impugn the idea of an open resolver. I myself run one, and I keep tabs on it and filter known-bad queries over UDP, so that no amplification takes place. Nice try, though, a-holes.

TRIPLE EDIT: I just ran a test on about 4000 IPs (likely shared between the Softlayer and ThePlanet AS spaces) which I know belong to HG shared and reseller servers, and 1706 of them returned an A record when queried about google.com.

Again, in their defense, I wouldn't be surprised if new servers are not being provisioned like this, and that it's mostly a bunch of old ones which answer recursive queries publicly.

The attacks I observed asked for the DNS records for isc.org. ISC writes the software that does most Internet DNS, they have all the bells and whistles set on their domains. In particular their DNSSEC records are very large. A small query generates a huge response.

Funny.

bersl2 wrote:

True. And if you look at the UDP header, there's also a tell-tale sign that they're not legit, because I can't think of any reason why a real DNS stub resolver would do this; and the only reason I can come up with for why the amplification packets are like this is to get through imprecise firewalls.

The only thing that is not legit is the return address in the original request, but the DNS server has no way to tell. Generally a DNS server supports the entire DNS protocol, even the parts that seem useless. Only malformed requests are rejected.

The problem has nothing to do with firewalls. The victim of the attack can easily tell that the received DNS response is fake because he didn't send a request in the first place. Receiving the response doesn't do any damage. The problem is that his network connection is saturated with those responses, so legitimate traffic cannot get through.

That's why DoS attacks are hard to fight: typically each hardware or software component involved does exactly what it is supposed to do, it just does it faster than the victim can handle. Yet for components in the middle it is hard to tell that anything fishy is going on, because each individual packet is fine by itself.

True. And if you look at the UDP header, there's also a tell-tale sign that they're not legit, because I can't think of any reason why a real DNS stub resolver would do this; and the only reason I can come up with for why the amplification packets are like this is to get through imprecise firewalls.

The only thing that is not legit is the return address in the original request, but the DNS server has no way to tell. Generally a DNS server supports the entire DNS protocol, even the parts that seem useless. Only malformed requests are rejected.

In general, that's true. I guess everyone just sees a more representative sample of attackers than I do, because what I see is easy enough to block and has nothing to do with the DNS layer.

EDIT:

Quote:

The problem has nothing to do with firewalls. The victim of the attack can easily tell that the received DNS response is fake because he didn't send a request in the first place. Receiving the response doesn't do any damage. The problem is that his network connection is saturated with those responses, so legitimate traffic cannot get through.

This is relevant if you are the target victim of a DoS attack. Those who run open resolvers and do not manage the traffic are not the targets, but rather are enablers of the attack. The traffic each resolver sees is modest, but the target sees a flood of unsolicited DNS answers. If you run a resolver, by blocking the trickle of spoofed queries, you aren't contributing to the DDoS anymore with a flood of outgoing DNSSEC-laden answers against the target.

If people were really serious about solving the problem, they'd cut the offenders off the Internet until they fixed the problem. In my (simple, but effective) world, any network broadcasting spam, botnet traffic, open DNS servers, or any other malicious service/device would be removed or blocked from communicating with anyone until the problem was resolved. Regardless of whether they're a paying customer. If the company or rackspace renter won't fix it, hold the provider responsible. They'll only have to get burned once or twice to realize how badly they're affecting (and pissing off) the rest of their customers and take notification seriously.

This is effectively digital terrorism, and until we hunt it down, this is going to keep happening.

Isn't the Google 8.8.8.8 DNS server/infrastructure considered to be "open"? edit: And if so, why are they getting a pass from being on this list? Having a load balancer/distribution architecture to hide all their open DNS servers behind a single IP, they still probably have dozens if not hundreds of boxes serving the load, and gobs of bandwidth to boot.

In addition to what has already been pointed out, the Google public DNS servers use anycasting. They may have hundreds or even thousands of servers, or it could be an app on the internal cloud of tens/hundreds of thousands of servers, but you as an attacker can only talk to one of them, so your effort to leverage the strength of their network in your attack is seriously kneecapped right from the start. Then layer on the defensive measures others outlined.

I think their IPs are 8.8.8.8 and 8.8.4.4, so if they were on this list, they'd likely be somewhere toward the bottom, not shown, with "# of open resolvers: 2." But they take extra steps that Joe Blow, VPS "sysadmin," doesn't, so they don't merit inclusion.

I think it's somewhat disingenuous of Cloudflare to imply that the network operators are completely responsible here.

We've seen a lot of these attacks at work recently (standard disclaimer: I do not speak for my employer on Ars) - enough that I have a standard email for them, and a standard set of throttling rules prepared to try and block these attacks.

In every single case that I can recall, the issue was as other commenters above me have described - somebody wanted to run an authoritative DNS server, and did not realise that it was recursive. Many implementations of BIND are recursive by default, and Microsoft's DNS server will do the same thing, for example.
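In that common case, an authoritative server that never needed recursion at all, the fix is essentially one BIND option. A sketch in real named.conf syntax (the comments are mine):

```
options {
    recursion no;              // serve only the zones we are authoritative for;
                               // refuse to recurse for arbitrary names
    allow-transfer { none; };  // optional hardening: no public zone transfers
};
```

Microsoft's DNS server has an equivalent "Disable recursion" checkbox in the server properties; either way, an authoritative-only server stops being usable as an amplifier for arbitrary domains.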

In every case, we have contacted the customer and they have fixed it.

Perhaps it would be fair of Cloudflare to argue that these companies need to do more of that, and it's possible that that's what they meant. But I don't think these companies are 'enabling DDoS attacks' (and I don't feel that Ars is out of line for describing it that way - I think that's how the original post may read).

As a side note, these attacks require the ability to spoof IPs. It's fairly trivial to spot incorrect IP addresses leaving your networks and drop them, and most people do.

However, there are certain providers who do allow this, and while it may be an oversight in some cases, in many, it is not. As an experiment, try Googling 'Hosting that allows spoofed IPs', or similar. I can guarantee that you'll find certain providers that have a reputation for being 'DDoS friendly'.

The way I see it, the real problem here isn't DNS, it's UDP... and there are two solutions here:

1. DNS over TCP instead of UDP. (Which already exists, it's just not mandated.)

2. ISPs start configuring routers to filter UDP packets with source addresses set to impossible locations. This could either be done at the customer-facing edge routers (you can't pass a source address outside your own subnet) or at the peering points (you can't pass a source address outside the ISP's entire network space). Doing it the first way might be simpler, and it would keep one ISP customer from using this technique to DoS another customer of the same ISP.
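The second option is essentially BCP38-style egress filtering. An illustrative fragment for a Linux-based edge router using nftables, where cust0 and 192.0.2.0/24 are made-up stand-ins for a customer-facing interface and its assigned subnet (shown as configuration, not a tested script):

```shell
# Drop any forwarded packet arriving on the customer port whose
# source address is not in that customer's assigned prefix.
nft add table inet edge
nft add chain inet edge egress '{ type filter hook forward priority 0; }'
nft add rule inet edge egress iifname "cust0" ip saddr != 192.0.2.0/24 drop
```

With that in place, a customer machine can still send any traffic it likes, but spoofed-source packets (the raw material of reflection attacks) never leave the access network.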

The kinds of things going on now (rate-limiting, trying to keep people from running open recursive servers, etc) aren't really solutions, they're just mitigations.

As a side note, these attacks require the ability to spoof IPs. It's fairly trivial to spot incorrect IP addresses leaving your networks and drop them, and most people do.

For this attack, spoofing isn't needed (or even desired) on the DNS server side, it's needed on the DNS client side. The DNS server has no way of knowing if the client address is spoofed or not, and it does no spoofing of its own.

Now, if the *end user ISPs* were detecting and filtering UDP and ICMP packets with spoofed source addresses... then you'd have something, and by "something" I mean "this problem not existing anymore".

There really should be an RFC regarding filtering spoofed source packets on edge routers, IMO.

they're simply doing a query and having the result of the query sent to another IP. So what does it matter if I query google.com from an open resolver and have the result sent to DoS an IP, versus querying the authoritative nameserver for google.com and having the result sent to DoS an IP?

You're limited to the number of actual authoritative nameservers for that domain, in the second case. There aren't 15,000+ authoritative nameservers for google.

In particular, certain domain names make MUCH better vehicles for "amplification" because they have extremely large records (like isc.org's DNSSEC record, as outlined above). So rather than being able to target tens of thousands of recursive DNS servers with requests for that record, you'd then be limited to the 4 authoritative servers for isc.org. If you wanted more bandwidth than they could provide, you'd then need to find a different zone with another record that was really useful. All this means a lot more complexity required to craft an effective attack.

All of this, of course, is just mitigation. As long as source spoofing isn't caught and filtered out at the edge router level, and as long as UDP services are being run, this kind of thing will be possible. What makes DNS such an effective medium for it now is its ubiquitous and well-published nature.

As a side note, these attacks require the ability to spoof IPs. It's fairly trivial to spot incorrect IP addresses leaving your networks and drop them, and most people do.

For this attack, spoofing isn't needed (or even desired) on the DNS server side, it's needed on the DNS client side. The DNS server has no way of knowing if the client address is spoofed or not, and it does no spoofing of its own.

Yes, sorry if that wasn't clear. My meaning was that the attacker wouldn't be spoofing packets, if their hosting was blocking it properly. Sadly, there'll always be *somewhere* for these people - I'm just saying we should place at least some of the blame on the people who willingly make their network that somewhere.

There really should be an RFC regarding filtering spoofed source packets on edge routers, IMO

There are several, for instance RFC 3704 from 2004. See the Wikipedia entry for "reverse path forwarding".

If we can't keep the Russian Business Network and other "bullet proof hosting" off the Internet, I'm not sure how we can get lesser restrictions like these enforced.

Get a few major ISPs on board to have it implemented by a target date, then start refusing all UDP traffic from any major network that hasn't implemented spoof prevention after that date. Once you've got the major ISPs (Time Warner, Comcast, AT&T, Hinet, etc.) on board, and the problem has subsided to the occasional "lol-friendly" network here and there, it becomes easier to just blackhole the living hell out of offenders as they pop their heads up.

@Alan H. You have to spoof the UDP packet's source address first, so the source address is different from the actual source. If you have a large network and analysis infrastructure like Google's, it's probably not hard to tell that a packet doesn't come from its claimed source address. Then Google can safely ignore that request.

In addition, how often do you need to resolve a domain name? Not very often. Any open DNS server can record requests from each IP address and set a dynamic threshold to reject overly frequent ones. That's easy anomaly detection for any decent-sized network, and no harder than blocking repetitive malicious requests.
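A sliding-window limiter along those lines might look like this. The window size and threshold are made-up illustrative values, and a real resolver would enforce this in its server software or firewall rather than a Python loop:

```python
import time
from collections import defaultdict, deque

# Per-client rate limit sketch: track recent query timestamps per
# source IP and refuse clients that exceed a threshold within a window.
WINDOW_SECONDS = 1.0   # illustrative window
MAX_QUERIES = 20       # illustrative per-window limit

recent = defaultdict(deque)  # src_ip -> timestamps of recent queries

def allow_query(src_ip, now=None):
    """Return True if this client is still under the rate limit."""
    now = time.monotonic() if now is None else now
    q = recent[src_ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()               # expire timestamps outside the window
    if len(q) >= MAX_QUERIES:
        return False              # too many queries: likely abuse
    q.append(now)
    return True

# A client bursting 25 queries in the same instant trips the limit:
results = [allow_query("192.0.2.1", now=100.0) for _ in range(25)]
print(results.count(True))        # 20 allowed, 5 refused
```

Note this counts by claimed source address, so during a reflection attack it throttles queries "from" the victim, which is exactly what you want.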

I think those US company representatives made it fairly clear that it's customers' DNS servers inside their networks causing the problems. That could well be the case. If customers are clueless, they make simple mistakes. DNS amplification is really not that hard to prevent once you're aware of the configuration issues.

Any firewalling technique based on source-IP frequency detection will need to be revised so it doesn't fall apart with IPv6. You don't get an IPv6 allocation smaller than a /64, and a single /64 gives you 2**64 addresses. The same problem applies to email DNSBLs.

Isn't the Google 8.8.8.8 DNS server/infrastructure considered to be "open"? Edit: And if so, why is it getting a pass from this list? Even hiding all their open DNS servers behind a single IP with a load-balancing/distribution architecture, they probably have dozens if not hundreds of boxes serving the load, and gobs of bandwidth to boot.

Clearly there's a difference between being open for legitimate use and being open for any kind of [ab]use.

For example, this comment forum is "open" for anyone to comment on. However, unlike some other comment forums, you don't see any spam here. Clearly there's some kind of gate that prevents abuse. Nothing wrong with that.

Any firewalling technique based on source-IP frequency detection will need to be revised so it doesn't fall apart with IPv6. You don't get an IPv6 allocation smaller than a /64, and a single /64 gives you 2**64 addresses. The same problem applies to email DNSBLs.

You can ignore the last 64 bits when setting up filters. The nice thing about IPv6 is that it's supposed to be mostly hierarchical, which should make it relatively easy to block entire countries/ISPs.
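Concretely, "ignore the last 64 bits" just means keying your filters on the /64 prefix rather than the full 128-bit address, so one subscriber can't dodge a rate limit by rotating through addresses in their subnet. A quick sketch (a hypothetical helper, not from any real firewall):

```python
import ipaddress

def filter_key(src_ip: str) -> str:
    """Key per-source tracking on the /64 prefix for IPv6 addresses."""
    addr = ipaddress.ip_address(src_ip)
    if addr.version == 6:
        # strict=False clears the host bits, collapsing the whole
        # subscriber /64 down to a single filter key.
        return str(ipaddress.ip_network(f"{addr}/64", strict=False))
    return str(addr)  # IPv4 addresses are left as-is

print(filter_key("2001:db8::1"))     # 2001:db8::/64
print(filter_key("2001:db8::ffff"))  # same key: 2001:db8::/64
print(filter_key("192.0.2.5"))       # 192.0.2.5
```

Both example IPv6 addresses map to one key, so a frequency threshold applied per key covers the entire /64 at once.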