Posted
by
Soulskill
on Friday July 30, 2010 @04:22PM
from the cleaning-up-the-e-streets dept.

penciling_in writes "ISC has made the announcement that they have developed a technology that will allow 'cooperating good guys' to provide and consume reputation information about domain names. The release of the technology, called Response Policy Zones (DNS RPZ), was announced at DEFCON. Paul Vixie explains: 'Every day lots of new names are added to the global DNS, and most of them belong to scammers, spammers, e-criminals, and speculators. The DNS industry has a lot of highly capable and competitive registrars and registries who have made it possible to reserve or create a new name in just seconds, and to create millions of them per day. ... If your recursive DNS server has a policy rule which forbids certain domain names from being resolvable, then they will not resolve. And, it's possible to either create and maintain these rules locally, or, import them from a reputation provider. ISC is not in the business of identifying good domains or bad domains. We will not be publishing any reputation data. But, we do publish technical information about protocols and formats, and we do publish source code. So our role in DNS RPZ will be to define 'the spec' whereby cooperating producers and consumers can exchange reputation data, and to publish a version of BIND that can subscribe to such reputation data feeds. This means we will create a market for DNS reputation but we will not participate directly in that market.'"
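For readers curious what this might look like in practice, here is a minimal sketch of a BIND resolver subscribing to such a feed. The zone name, server address, and file name are all invented, and the exact directives are whatever the final spec and BIND release settle on:

```
// named.conf: point the resolver at a policy zone (names are made up)
options {
    response-policy { zone "rpz.reputation-provider.example"; };
};

// the reputation feed arrives as an ordinary slave zone transfer
zone "rpz.reputation-provider.example" {
    type slave;
    masters { 192.0.2.53; };    // the reputation provider's server
    file "rpz.reputation-provider.example.db";
};

// inside the zone file, a CNAME to the root forces NXDOMAIN:
//   badname.example.com    CNAME .
//   *.badname.example.com  CNAME .
```

The appeal of reusing the zone-transfer machinery is that a reputation feed can be distributed and updated with tools every DNS operator already runs.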

First of all, didn't they say that the reputation would be determined by "cooperating good guys"? Since when has Comcast ever been described as "cooperative" or "good"? ;-)

But seriously, reputations aren't usually vetoes where one person can blackball a server, are they? I would imagine they'd realize that would be a waste of time, since the rest of the "good guys" would collectively carry too much weight for any one entity to sabotage the system effectively.

I also imagine that they'd realize that this would be a good way to lose credibility as a "good guy", and maybe have it revoked.

Hopefully the same principle would apply on the other end if a "non-good guy" gets into the system in order to push bad sites.

I have a lot of time for Paul Vixie, but in this particular case he has come up with a bad idea. This should absolutely not be handled in DNS. There are plenty of reputation-based schemes already in operation for per-protocol black or white listing which work as well (and as badly) as any such scheme can do. There is no need to drag it down to the core, polluting DNS with yet more protocol shenanigans as we do so.

DNS was always a simple protocol which did one job and did it well. Please stop trying to expand it to solve problems which have already been solved (by those who wish to do so) elsewhere.

There are plenty of reputation-based schemes already in operation for per-protocol black or white listing which work as well (and as badly) as any such scheme can do. There is no need to drag it down to the core, polluting DNS with yet more protocol shenanigans as we do so.

Given that connections via most protocols are preceded by DNS queries (unless you're using hardcoded IP addresses for everything), I think whether this is or isn't a good idea comes down to one question:

Are there a lot of domains out there that deserve a bad reputation for things they do on certain ports or over certain protocols, but are otherwise fine and upstanding members of society?

I think there are plenty of companies out there that are respectable outfits but make some poor choices vis-à-vis email.

No. SSL certificates are useful for providing encryption and a better sense of security, but they're far too corporate. The certificate companies aren't going to spend much time verifying that people are who they say they are for a cheap certificate, because it would cost them money. Not to mention that certificates aren't used on most of the internet (they're a waste of money on personal sites). This creates a way to come up with better security information for every site.

Self-signed certificates are one of my biggest problems with SSL. They give you the same general level of security as SSH[1], but browsers are set up to make people trust sites with self-signed certificates less than sites with no certificate at all.

[1] You can't be sure it's the right computer the first time you connect (unless you already have the certificate), but every time after that you can know it's the same computer and the connection is encrypted.
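The SSH-style model the footnote describes is usually called "trust on first use," and it is easy to sketch. This is an illustrative toy, not any browser's actual behavior; the function and store are made up for the example:

```python
import hashlib

def tofu_check(host, cert_der, store):
    """Trust-on-first-use: pin the certificate's fingerprint on first
    contact, then require the same fingerprint on every later contact
    (the model SSH uses for host keys)."""
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in store:
        store[host] = fp   # first connection: nothing to verify, so pin it
        return "pinned"
    return "ok" if store[host] == fp else "MISMATCH"

store = {}
print(tofu_check("example.com", b"cert-bytes-1", store))  # pinned
print(tofu_check("example.com", b"cert-bytes-1", store))  # ok
print(tofu_check("example.com", b"cert-bytes-2", store))  # MISMATCH
```

The first call is the only window of vulnerability, exactly as the footnote says; after that, any change in the certificate is loudly visible.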

Wikipedia is wrong then. Internet Explorer doesn't do SNI on Windows XP, but Firefox is fine. More specifically, the SChannel library is broken on XP, so the browsers depending on SChannel have a problem. That includes Internet Explorer, and it at least used to include Chrome, although Google has been working on an alternative NSS implementation. It seems that Chrome M6 has the problem fixed.

Stopping the names from resolving leaves the user wondering whether they're experiencing a network failure. What is needed is a new response, and support for that response, not simply resolution failure.

It doesn't just prevent the name from resolving, though. It will also return the fact that the query was blocked by RPZ via a status code. At that point, I think it should be up to the application causing the DNS query, such as the browser, to read the status code and provide the appropriate message, just as it shows "server not found" in response to a query that returns NXDOMAIN.
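Whatever signal the server sends back ultimately lands in the 4-bit RCODE field of the DNS header, which is normally all a client application has to go on. A rough stdlib sketch of extracting it; the sample header bytes below are fabricated for the example:

```python
import struct

def rcode(response: bytes) -> int:
    """Return the 4-bit RCODE from a raw DNS response.

    The flags live in bytes 2-3 of the 12-byte header; the RCODE
    occupies the low four bits of that 16-bit field (RFC 1035).
    """
    flags = struct.unpack("!H", response[2:4])[0]
    return flags & 0x000F

RCODES = {0: "NOERROR", 2: "SERVFAIL", 3: "NXDOMAIN", 5: "REFUSED"}

# fabricated header: QR/RD/RA set, RCODE = 3 (NXDOMAIN), one question
hdr = struct.pack("!HHHHHH", 0x1234, 0x8183, 1, 0, 0, 0)
print(RCODES[rcode(hdr)])  # NXDOMAIN
```

This is also why the parent's concern holds: unless the blocking resolver adds some extra signal, an RPZ block that synthesizes NXDOMAIN looks identical on the wire to a name that genuinely doesn't exist.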

I actually think this is pretty cool and am excited about it, although I suspect that I'm in the minority on this.

It looks like you can also define policy in the RPZ zone so that the domain you're trying to block can be pointed to a web server where you have a block message up, presumably describing the policy reason that the site is being listed.
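If the draft works the way it appears to, that walled-garden redirect would just be a record in the policy zone pointing at your own web server instead of forcing NXDOMAIN. All names here are invented:

```
; entries in the RPZ zone file (hypothetical names)
badsite.example.com     CNAME blocked.mycompany.example.
*.badsite.example.com   CNAME blocked.mycompany.example.
; blocked.mycompany.example serves the "why this was blocked" page
```

That addresses the complaint upthread about users mistaking a block for a network failure, at least for web traffic.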

Additionally, there is no requirement that says one must subscribe to a Spamhaus-style service; that's just a hypothetical option. Besides, if your recursive DNS servers are blocking stuff you want to get to anyway, you can choose different ones, or set up your own. Setting up BIND as a recursive DNS server is ridiculously easy, and you can ignore RPZ zones to your heart's content.
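A minimal recursive-only named.conf really is about this short; the directory and LAN range below are placeholders, and with no response-policy clause, RPZ never enters the picture:

```
options {
    directory "/var/named";
    recursion yes;
    allow-recursion { 127.0.0.1; 192.168.1.0/24; };
    // no response-policy clause here, so no RPZ filtering occurs
};
```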

Besides, if your recursive DNS servers are blocking stuff you want to get to anyway, you can choose different ones, or set up your own.

Unless, for example, your ISP has a transparent proxy that redirects all outgoing traffic to known public recursive servers (e.g. Google's 8.8.4.4) to your ISP's recursive server instead. Do any ISPs in the developed world do this?

Is it April Fool's Day already?
This strikes me as viscerally wrong on so many levels, but one is immediately articulable: This would be an attempt to solve a social issue via technical means, and such efforts are usually doomed to failure. But not before wasting a lot of money, effort, and billable hours...

Paul Vixie already has quite the reputation for high-handed wholesale blocking of sites deemed improper. MAPS RBL was his baby, and while the political fallout from that misadventure cost him much of his reputation, it looks like he's trying to keep at it but put the blame on someone else this time.

Regardless of that, this scheme will be afflicted with the same problems that MAPS had. When what the people can see or read depends upon the ratings applied by some special (and probably secret) group then they'll twist this power to serve themselves. Malware or spam? Blocked. Porn? Blocked. Negative opinions about the blocking? Blocked. Wrong political position? Blocked. Didn't pay protection or get approval from the government? Blocked.

Paul Vixie is undeniably talented and knows a lot about networking. But his knowledge of human nature and how society works is woefully inadequate. Something that is always true: when you attempt to apply technological solutions to societal problems, you don't solve the problems, and you introduce new and usually worse ones. See RIAA/MPAA vs. everyone for insight into how blocking creates more problems than it solves.

A whole lot depends on implementation. The initial intent seems to be to provide a mechanism for blocking domain names that have just been created and have a high probability of being used for phishing, spamming, or other nefarious purposes. Theoretically, DNS could be updated to include the age of the record to help clients make up their own minds about whether to connect, but then you'd start down a slippery slope of attaching additional information to records.

By building the protocol around a layer of abstraction, additional information can be considered: the actual IP the name resolves to, how rapidly that's changing, how many different domain names are being created against the same netblock, and so on. Much richer information, which can theoretically provide much more useful results.

The implementation? It's going to be problematic for some, since the decision about what is trusted is being made by a third party. But this is already the case with many ISPs' DNS servers: if a name doesn't resolve, you end up at a search page instead of getting a DNS error. This won't affect the majority of users in a way they perceive. Is that a good thing? Most of the time...

Overall, if the DNS server I used was smart enough to prevent successful lookups of records created recently (less than a day old), records associated with IPs that saw more than n records added per time period, and maybe one or two other basic things, I'd have significantly reduced vulnerability to drive-by downloads, bots depending on fast-fluxing C&C servers, and other actively nefarious threats.
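Those heuristics amount to a small local policy function. A sketch, with the thresholds and inputs entirely made up (real resolvers don't currently expose record age, which is exactly the gap the grandparent points out):

```python
from datetime import datetime, timedelta

def should_block(created, names_on_netblock, now=None,
                 min_age=timedelta(days=1), max_names=100):
    """Hypothetical local policy: refuse very new domains, and domains
    whose netblock is sprouting new names unusually fast."""
    now = now or datetime.utcnow()
    if now - created < min_age:          # record is less than a day old
        return True
    if names_on_netblock > max_names:    # netblock looks like a name farm
        return True
    return False

now = datetime(2010, 7, 30)
print(should_block(datetime(2010, 7, 29, 12), 5, now=now))  # True: 12h old
print(should_block(datetime(2010, 7, 1), 5, now=now))       # False: old, quiet
print(should_block(datetime(2010, 7, 1), 500, now=now))     # True: name farm
```

The thresholds are the hard part: set min_age too high and you break every legitimate launch day; set it too low and the fast-flux operators just wait you out.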

It seems there is no doubt that this will be used the wrong way. Just look at all the domains that don't resolve, which your ISP tries to "help" with by sending you to lots of lovely ad pages. What if Wikileaks gets on the blacklist? No matter how you look at it, this is goodbye to net neutrality.