Posted
by
michael
on Tuesday January 15, 2002 @01:59PM
from the icann-refuses-to-do-its-sole-job dept.

Damalloch writes: "The BBC website has this story about the EU's concern over ICANN's refusal to make guarantees about root server stability. Domain name registrars such as Nominet are threatening to withhold payment of ICANN's fees unless something is done to reassure them. So far ICANN has remained stubborn because of the huge lawsuit potential if a root server were to go down, but with the possibility of having their income reduced, they might just be convinced to do something."

But if one server went down, wouldn't the requests just go to the other root servers instead? Isn't that how DNS works?

So presumably they've got decent machines and power supplies and connections for each server. And so the chance of one going down is quite low. The chance of enough of them going down at the same time to cause disaster has to be vanishingly small. If it's too big, add a few more servers.

Unless they include the possibility of them being hacked I suppose. But then they could just use several different operating systems and name server software to hugely reduce the chances.

I'm not sure I'm convinced that this is really the reason they won't give any guarantees; it seems like a reasonably safe thing to do to me.

It would (go to another root) - but if these systems are already running close to capacity, then that may be enough to cause the next server to choke, crash, and the next server will fall even faster.

It's a scenario much like the AT&T switch fiasco, where a seldom-exercised chunk of code took out one switch. Once one switch was down, the others took more load, which, coupled with the fact that part of the problem was a live switch receiving an "I'm back!" message while under heavy load, caused more switches to go. Cascade failure all the way.
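The overload dynamic is easy to see in a toy model (the capacities and loads below are invented for illustration): once the survivors are pushed past capacity, each additional failure makes the next one more likely.

```python
# Toy model of cascade failure: total load is shared evenly among the
# surviving servers; any server pushed past its capacity also fails.

def cascade(n_servers, total_load, capacity_per_server):
    """Return how many servers survive after one server is knocked out."""
    alive = n_servers - 1          # one server goes down initially
    while alive > 0:
        load_each = total_load / alive
        if load_each <= capacity_per_server:
            break                  # remaining servers can absorb the load
        alive -= 1                 # another server chokes and crashes
    return alive

# Plenty of headroom: losing one of 13 servers is harmless.
print(cascade(13, total_load=100, capacity_per_server=10))   # -> 12 survive
# Running near capacity: one failure takes the whole system down.
print(cascade(13, total_load=125, capacity_per_server=10))   # -> 0 survive
```

The qualitative point is the cliff: below a load threshold nothing happens, above it one failure cascades all the way to zero.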

After reading the article, I'm actually rather surprised myself. These systems must chew a ton of bandwidth, but it seems ICANN doesn't pay for it? Not to mention that all but three are in the US - isn't that going to oversaturate the cross-oceanic links?

I think I'm definitely with the registrar organizations - ICANN should be having contracts in place to require certain things, rather than a wink and a nod and a handshake.

Reading through the page will give you an idea of the bandwidth these servers have at their disposal. The fact that most TLD servers still average 100+ msec pings would indicate, IMO, that those servers are under load.

A faulty version of software was released. And yes, the fault was buried waaay down in a giant case or if/elseif statement. Normally no big deal, right? Just roll back. But they had things set up so that any machine connected to another would poll it for the version of software it had. If what it connected to had a newer version, it would download that and then hand it off to all its fellows. So by the time the bad code triggered and they realized they had a problem, it had already spread virus-like across the whole network. Going back to the older version on one machine was futile, because as soon as it booted up it would connect to other machines and download the flawed software.

They eventually had to take their old version, give it a new, higher number, and then compile and release that, so that the 'feature' once again became a feature and not a bug. Many lessons to be learned.
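The propagation-and-fix story can be sketched in a few lines (a deliberately simplified model; real switches negotiate far more than a version number):

```python
# Sketch of the self-upgrading network described above: every machine
# adopts the highest version number it sees, so a bad release spreads
# like a virus, and the only cure is to re-release the old code under
# an even higher version number.

BAD = 2  # the faulty release

def gossip(versions):
    """One round of polling: every machine adopts the network's max version."""
    top = max(versions)
    return [top] * len(versions)

fleet = [1, 1, 1, BAD, 1]         # one machine got the bad release
fleet = gossip(fleet)
assert all(v == BAD for v in fleet)   # rolling back one machine is futile now

fleet[0] = 3                      # the OLD code, re-numbered as version 3
fleet = gossip(fleet)
assert all(v == 3 for v in fleet)     # the 'feature' spreads the fix instead
```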

The nameservers are near capacity at the moment; however, since name servers effectively load balance, it's rather difficult to notice.
There's a fascinating paper about it here [mit.edu]
The root/gTLD name servers are in a lot worse state than most people think. It is possible that in a few years they become too overloaded and just melt down. Imagine the internet without a functional DNS :)

Well yes, that would be pretty bad. Shame I can't read the article, as it's a .ps file which I have no way of reading.
But I can think of a number of ways to help with this.

For example, set up a new set of root name servers. Make sure that the database is duplicated to those machines at more or less the same time as the original machines, and then persuade one of the big providers, AOL or someone, to use those as their root name servers instead. They would get better service, having dedicated machines, and it would lessen the load on the existing servers.

Not an ideal solution, but I'm sure it would work to reduce the load for a while.

Since DNS is a hierarchy, wouldn't it be just the DNS servers at the next level down that need modifying?

No.

Crash course in DNS: In a typical setup, your machine asks the DNS server at your ISP (this is called a "recursive resolver" and it's really not part of the DNS hierarchy) where www.foo.com is. Then it does the following (assuming all cache misses -- in real life, not all these connections would really happen most of the time): it has a list of root DNS servers stored in a config file somewhere. It picks a root DNS server and asks where the .com DNS server is. The root tells your ISP's DNS server where .com is, and then your ISP's DNS server asks the .com DNS server where foo.com is, and then when it gets the answer to that, it asks the DNS server for foo.com where www.foo.com is. Then your ISP's DNS server passes the result to you. (Glossing over a few details.)
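That walk can be simulated against a toy delegation table (the table, server names, and addresses here are hypothetical; a real resolver speaks the DNS wire protocol to real servers):

```python
# A toy version of iterative resolution: start at the root, follow one
# referral per level until we reach an address instead of a nameserver.

ZONES = {
    ".":        {"com.": "ns.com-tld"},        # the root knows where .com lives
    "com.":     {"foo.com.": "ns.foo"},        # .com knows foo.com's server
    "foo.com.": {"www.foo.com.": "10.0.0.80"}, # foo.com knows the address
}

def resolve(name):
    """Walk from the root down the hierarchy, one referral at a time."""
    zone = "."
    labels = name.rstrip(".").split(".")       # ["www", "foo", "com"]
    for i in range(len(labels) - 1, -1, -1):
        child = ".".join(labels[i:]) + "."
        answer = ZONES[zone][child]
        if not answer.startswith("ns."):       # toy marker: reached an address
            return answer
        zone = child                           # follow the referral down
    raise LookupError(name)

print(resolve("www.foo.com"))   # -> 10.0.0.80
```

Caching (the part elided above) would simply short-circuit this walk at whatever level the resolver already knows.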

The root servers are what makes a sea of unconnected networks into the apparently seamless internet. What you are suggesting would fragment the internet back into separate networks. Typing slashdot.org in europe could go to their 'root' servers and be directed to whoever their root says owns that domain. While typing the same address elsewhere in the world would take you to a different site.

Pretty big change. There have been companies that set up new top level extensions (impatient with ICANN and who can blame them) and sell those addresses, but for visitors to get to those sites the visitors need to have the dns settings in their computer modified. And if ICANN eventually rolls out the new extension (and I think there is one extension that this applies to, can anyone remember? biz maybe?) you could then have two company.biz sites, and which one the browser goes to depends on which root it's querying. Man, what a mess.

What does it matter if some DNS servers think there are 13 root nameservers and some think there are 14? This isn't fragmenting anything - just that some of the servers the next level down have more choices than others.

company.biz will always be the same, because all 14 root nameservers have the same information.

The problem is that 13 is a magic number in DNS. The maximum size of a DNS message when carried over UDP is 512 bytes. And guess how many NS records and associated A records you can fit in 512 bytes, assuming domain name compression is working as efficiently as possible? Thirteen.
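A rough byte accounting, assuming the X.root-servers.net naming scheme and best-case name compression, shows how quickly the 512-byte budget goes (the per-record sizes below are approximate, and the deployed count also leaves safety headroom below the hard limit):

```python
# Rough size of a root "priming" response carrying n NS + n A records.

HEADER   = 12           # fixed DNS header
QUESTION = 1 + 4        # root name "." (1 byte) + qtype/qclass
FIRST_NS = 1 + 10 + 20  # owner "." + fixed fields + full "a.root-servers.net"
NEXT_NS  = 1 + 10 + 4   # later rdata: one letter label + 2-byte name pointer
A_RECORD = 2 + 10 + 4   # compressed owner name + fixed fields + IPv4 address

def priming_size(n):
    return (HEADER + QUESTION + FIRST_NS
            + (n - 1) * NEXT_NS + n * A_RECORD)

for n in (13, 14, 17):
    print(n, priming_size(n))   # 13 -> 436 bytes; 17 -> 560 bytes
```

By this count, 13 servers fit in 436 bytes and the 512-byte ceiling is hit in the upper teens, which is why the number of root names stalled in the low teens.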

If you add more root name servers, then when name servers look up the list of root name servers (via something called a priming query) the DNS message gets truncated, those name servers retry over TCP, and all hell breaks loose.

That said, two of the existing roots (j and l) are temporarily housed at ISI and VeriSign, which already have roots. Those two really need to be deployed to parts of the Internet that need them.

All DNS does is translate human friendly names to IP addresses. If the root servers died tomorrow, hitting slashdot's IP address would still work.

Granted, you'd have a tough time finding the IP if the roots were really down, but the failure of DNS has nothing to do with how the "networks" talk to each other.

What you're really talking about is the unified domain name space; having most of the users of the Net able to resolve names certainly does keep the Net moving.

However, the ICANN roots (and their name space) are not the only ones in town. There are currently several different groups of alternate Network Information Centers (NICs), such as OpenNIC [unrated.net]. Using them is fairly trivial for any admin; if enough of us start using them, ICANN no longer has power.

Individual users don't need to modify their DNS setups, they should be pointing to their ISP's name servers anyway; saves both bandwidth and lookups.

First and foremost, because it's a U.S. entity that pretends to be an international entity, and the Internet quit being a U.S. entity a long time ago.

I suspect that China will be the first to set up its own root DNS servers and start issuing non-ICANN-approved domain names, probably in competition with ICANN and VeriSign. Others will soon follow. Soon every big ISP, in the U.S. and abroad, will see the need to have its own root DNS server. Of course, some cooperation will be required between the different DNS roots if their customers are going to be happy. Hopefully, this new cooperation will end the monopoly ICANN has over the administration of the Internet, leaving unsportsmanlike players like VeriSign standing out in left field, wondering why nobody is tossing them the ball anymore.

Yeah, after
the last story about DNS [slashdot.org], I started considering switching to one of the alternate DNS systems. Hmmm...I wonder if uber.geek is taken?

The claim that they're worried about lawsuits seems silly to me (at least to some degree). They can at least try to increase their redundancy, security and stability--then just put some blurb in their agreements that they don't guarantee stability (just like many modern corporations do). The article said they weren't even paying the companies/organizations that ran the root level servers! Isn't this a big part of what they are supposed to be doing? What kind of crap is that?

I have to wonder if this is just some ploy so that the players can stuff their wallets with ICANN money...

First and foremost, because it's a U.S. entity that pretends to be an international entity, and the Internet quit being a U.S. entity a long time ago.

The catch for ICANN is it needs legitimacy to enforce policy both in the U.S. and especially abroad, but it can't gain global recognition and respect without enforcing policy and taking responsibility.

From the article: Nigel Roberts, head of the Channel Island domain registry and a member of Icann's country code committee, said the row was leading people to question just what Icann was for. "The issue is not the amount of money," he said. "It is about the role that Icann has."

I think ICANN could help resolve this by giving guarantees to take an active role in use and abuse of the root servers. They could more closely track and monitor root server usage, and make recommendations and requests to providers on where to put root servers to improve DNS efficiency and reliability. ICANN could also publish data on whose root servers are performing well and whose are not, shaming poor providers who create bottlenecks into better service. If ICANN performs those duties well and is responsive to concerns like those in the EU, it will become a more effective body.

Charge for a subscription to a root DNS server. One can make money off both ends: charge the domain name holder for the reservation on your server, AND charge the end user a yearly or a per use fee for DNS resolution. The latter requires some form of micropayment, but it's probably quite workable.

The benefit to the end user is that one could subscribe to a completely Disney-fied root that would have only family-friendly sites, whereas another server would have all the wacky pr0n sites you could ask for. Somebody would probably even run a free root server out there based on his/her special interest groups.

Heck, you could even charge for translating addresses to other systems. No need to worry about foreign DNS servers - if they don't pay up, they don't get access to your root.

Some people would still get around the whole thing by just typing in the octets directly, but that would be such a small percentage that it wouldn't even matter.

China will not take over existing root servers. But if they establish their own root server for use within China, then that will actually further their goal of controlling Chinese citizens' access to information. Even if no one else in the world uses the China root server, it could still help to control the Chinese population, which is their overall goal.

When they pull stuff like jailing people for posting dissenting opinions of the government on the internet, or restricting people's internet activities, as they do, who will be willing to sign up with them as a root, or for any other higher-level management function for that matter?

That's just it... they can make it law that everyone in China has to use the Chinese root servers. Those who dissent are jailed.

No, China will not take over root servers. Some other nation might, maybe. But, definitely not China.

I disagree. While taking over root servers may not be the most successful policy, it is certainly a viable option. It's not an issue of whether or not China can setup root servers, China definitely has people who can, and would be willing to, setup and maintain root servers. The bigger issue is their motivation. Given a desire to prevent people from posting and reading dissenting opinions, the Chinese government may perceive root servers as a very viable option.

Unless you enter a computer's address in hex, in which case it goes straight to the system.
There are also people who maintain their own DNS systems, albeit smaller and personalized.
But in general that is true.

Hex? Are you referring to the MAC address of a machine? That's only really valid on the local LAN. To route to a machine on the opposite "side" of the internet you need its IP number (think dotted-decimal), and DNS is the service used to obtain the IP number from readable names like "slashdot.org".

An IP number is a 32-bit binary number. It can be represented in the familiar dotted-decimal format. It can be represented as a hex number. It can even be represented as a huge decimal number. Take your pick. If you convert the dotted decimal to a regular decimal number (and that isn't just taking out the dots and stringing the numbers together), and type that into your browser, you will get to the same destination as the dotted-decimal number. I don't know if the browsers will behave the same if you enter it in hex, though.
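The conversion is just bit-shifting, spelled out here for one address (66.35.250.150, which I believe was slashdot.org's address around this time; the specific octets don't matter for the arithmetic):

```python
# The same IP address in its different costumes: dotted-decimal,
# one huge decimal number, and hex.

dotted = "66.35.250.150"
octets = [int(o) for o in dotted.split(".")]

# Pack the four octets into one 32-bit number, most significant octet first.
as_int = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

print(as_int)        # -> 1109654166   (the "huge decimal number" form)
print(hex(as_int))   # -> 0x4223fa96   (the hex form)
```

Note this is not "taking out the dots": 66.35.250.150 becomes 1109654166, not 6635250150.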

You are correct, unless the client's DNS server doesn't have the answer, in which case it looks to the root servers; unless the DNS server is set to forward queries, in which case it forwards them to another DNS server, which might or might not have the answer... "Almost every time anyone looks for a webpage these root servers are consulted..." is hyperbole.

Yes, DNS servers do significant amounts of caching. When I worked at a small ISP a few years ago, they cached DNS lookups for a week. I believe that almost everyone caches for 48 hours or less nowadays.

Plus, I'm sure that at least 10% of normal web browsing comes straight from the user's cache on their hard drive, so the internet isn't accessed at all.

Looking at my DNS config files, it looks like each domain can set its own TTL (Time To Live) duration for its current settings before they need refreshing. The default setting is 3 hours, which is what I presume everyone normally leaves it at.
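A resolver-side cache can be sketched in a few lines (a simplification: a real cache honors the TTL carried on each record rather than one default, but the expiry logic is the same):

```python
# Minimal sketch of DNS caching: answers are kept until their TTL
# expires, after which the next lookup has to go back out to the network.

import time

DEFAULT_TTL = 3 * 3600   # 3 hours, in seconds, mirroring the default above

class DnsCache:
    def __init__(self):
        self._store = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl=DEFAULT_TTL, now=None):
        now = time.time() if now is None else now
        self._store[name] = (address, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None or now >= entry[1]:
            return None          # miss, or entry expired: refresh needed
        return entry[0]

cache = DnsCache()
cache.put("slashdot.org", "66.35.250.150", now=0)
print(cache.get("slashdot.org", now=100))        # fresh: returns the address
print(cache.get("slashdot.org", now=4 * 3600))   # 4 hours later: None, expired
```

The week-long caching mentioned above amounts to ignoring the published TTL, which is exactly why stale answers lingered.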

the EU's concern over ICANN's refusal to make guarantees about root server stability.

It sounds as if all that's required is a standard Service Level Agreement. The kind of thing that's standard through most big corporates, and has a clause along the lines of "we guarantee 99.5% uptime, if service drops below this we pay £x.xx per quarter percent below.".
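The arithmetic behind such a clause is easy to sketch (the payout rate below is invented; "per quarter percent" is read here as per 0.25 percentage points of shortfall):

```python
# What a 99.5% uptime SLA actually allows per quarter, and what a
# "pay per quarter-percent below" penalty clause would cost the provider.

HOURS_PER_QUARTER = 365.25 / 4 * 24   # ~2191.5 hours in a quarter

def allowed_downtime_hours(sla_pct):
    return HOURS_PER_QUARTER * (1 - sla_pct / 100)

def penalty(actual_pct, sla_pct=99.5, rate_per_quarter_pct=1000.0):
    """Payout for each 0.25 percentage points the provider falls short."""
    shortfall = max(0.0, sla_pct - actual_pct)
    return (shortfall / 0.25) * rate_per_quarter_pct

print(round(allowed_downtime_hours(99.5), 2))   # ~10.96 hours per quarter
print(penalty(99.0))                            # 0.5% short -> two increments
```

So "99.5%" still permits roughly eleven hours of downtime a quarter; the guarantee is about liability, not perfection.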

It seems that it's the refusal to provide something like this, rather than technical worries, that is underlying this dispute.

However, many of the servers are looked after on an ad hoc basis by very different companies. Icann does not pay the wages of the people that oversee the servers, nor has it signed contracts with the organisations that look after the root servers to establish service levels, standards of reliability or security.

If ICANN can't legally hold accountable the people running the root servers, then there's no way they'd provide any guarantees to anyone. That much makes sense.

Furthermore, the root servers (again, from the article, don't flame me if I'm missing a nuance or two) don't really DO much. They just tell you where to go to get info for each of the top-level domains. Not exactly a whole lot to running one of these other than keeping it from crashing.

My question, though, is why is anyone worried about a root server crashing? There are 13 of 'em. Wouldn't your DNS server ask someone else if the "preferred" root server suddenly went Tango Uniform? Are there backup root servers out there to jump in? Ways to route around the damage, as it were?

What I still find amazing is that ICANN hasn't managed to take full physical and financial control of all the root servers. When I was in school, I remember thinking it was cool that we had one of the root servers (terp) in my building. It was amazing to see how a loose group of unrelated institutions had somehow set up a reliable, workable, DNS system.

In fact, it sounds like this is still the case, somewhat. Do these root server operators have ANY contractual controls on what they do? If not, then why the hell can't we just get THEM to add new top level domains? Screw ICANN. The servers don't belong to them, they belong to the people running 'em. As long as the guys running the roots don't point .com to some other universe, they should be able to avoid getting sued into oblivion, right?

And, if they were to do this, could ICANN even stop them? They'd have to repoint all the root.hints files across the entire globe, wouldn't they?

Furthermore, the root servers (again, from the article, don't flame me if I'm missing a nuance or two) don't really DO much. They just tell you where to go to get info for each of the top-level domains. Not exactly a whole lot to running one of these other than keeping it from crashing.

What a root server does isn't very hard. What is hard is keeping the damn thing running. They get a high load (every DNS server in the world hits one once a day for each TLD), they get all sorts of script kiddies hitting them, and because of their profile, it's very hard to make changes.

If a root server goes down, there are lots of redundant alternatives. However, the possibility and damage of domain name hijacking is much more serious... This is especially true since ICANN does not even operate the root servers! What's stopping one of the companies that operate root servers from suddenly deciding to take over the .uk top level domain? There is probably no law or contract stopping them from doing so.

You're probably right. NSI hijacks people's domains all the time and doesn't get in trouble. Makes me wonder if there actually is any law that prevents it. If they hijacked a TLD, I'm sure everyone would make a law against it real quick, though.

The real issue here is that many thousands of companies have based their businesses on the assumption that DNS will always be available and reliable. The original intent of the DNS system was to provide a convenient service to Internet users, not to serve as a point of failure for the entire net.

Why should ICANN promise to deliver something that they know they are unable to?

What we really need is to start over with a new specification for domain names that reflects the needs of the current Internet - a system that can provide the security and reliability that we now depend on.

Great idea... but, it has taken the entire community years of fighting to agree on things like IPv6. How long until that gets implemented? Can you even imagine how long it would take to: a) come up with a new spec and b) implement it?

It sounds like they are using this as an excuse to not pay. I doubt they really care, but it is a convenient excuse to use, since they know ICANN can't come up with a solution and implement it rapidly due to politics.

If your company was administering a ccTLD and ICANN comes knocking at your door for money when they can't make any assurances of your ccTLD being served to the rest of the world, why should you pay them?

To make an analogy, ICANN is to the Internet like the UN is to an international government; they are both generally ineffective but continuously demanding an ever increasing sum of money to be able to join the party.

The simple fact is that ICANN can't... (make any assurances) because they ultimately can't step in and take over the root servers. Otherwise, they'll find themselves in a bigger controversy. Mind you, ICANN is no stranger to controversy.

Maybe we should be wondering where all ICANN's money goes? According to their budget [icann.org], the law firm Jones, Day, Reavis, and Pogue [jonesday.com] gets about $734,000.00 !!!

ICANN should be less worried about the ccTLDs and focus on their own organization! The total personnel costs for ICANN are projected at $2.217 million! I would like to know what EXACTLY the staff members do to deserve this type of money. ICANN is the biggest bunch of hypocrites to come along since the US Congress!

That's ~3150 queries per second. I imagine a good chunk of that 8 gigs is RAM used to create sockets and threads that do the lookups - I also suspect that it's a heavy SMP machine, each processor with its own RAM. If there were, say, 32 processors, each with 256 megs of RAM, and each processor ran (X) threads to handle requests...

That's ~3150 queries per second. I imagine a good chunk of that 8 gigs is RAM used to create sockets and threads that do the lookups - I also suspect that it's a heavy SMP machine, each processor with its own RAM. If there were, say, 32 processors, each with 256 megs of RAM, and each processor ran (X) threads to handle requests...

Err no, none of the memory is used for sockets and none for threads.

DNS is a UDP protocol and there is no good reason to talk TCP to a root name server so those requests would be firewalled off to a different node.

As a UDP protocol DNS is stateless and there is not a good reason to use threads. Ungranted requests can be cached in the network interface drivers. At least that is the way the servers running BIND function. I have not read the Nominet code but I doubt it is different.
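The stateless pattern is easy to show with a toy UDP responder (it just uppercases the datagram rather than parsing DNS wire format, but the one-socket, no-threads, reply-and-forget loop is the point):

```python
# Minimal stateless UDP service: one socket, one loop, no per-request
# state kept between datagrams and no threads spawned.

import socket

def serve_one_request(sock):
    data, client = sock.recvfrom(512)      # classic DNS-over-UDP message limit
    sock.sendto(data.upper(), client)      # reply and forget: no state kept

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))              # ephemeral port for the demo
addr = server.getsockname()

# Act as our own client so the demo is self-contained.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"www.foo.com?", addr)
serve_one_request(server)
reply, _ = client.recvfrom(512)
print(reply)                               # -> b'WWW.FOO.COM?'
```

Because nothing survives between datagrams, a crash or restart loses no client state, which is part of why the protocol tolerates flaky servers so well.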

I don't know why Paul would have so much RAM on his box. The dotcom zone is many gigabytes but the root zone only has 200 records.

http://www.cisco.com/public/sw-center/sw_download_guide/dnsfaq.html gives a list of root servers and their IP addresses, as well as some good information about the basics of DNS.

http://www.isi.edu/in-notes/rfc2870.txt talks about the requirements for a root server. From this:

1.1 The Internet Corporation for Assigned Names and Numbers (ICANN) has become responsible for the operation of the root servers. The ICANN has appointed a Root Server System Advisory Committee (RSSAC) to give technical and operational advice to the ICANN board. The ICANN and the RSSAC look to the IETF to provide engineering standards.

As such, it looks like ICANN is the only organization that can take responsibility of the system.

Section 2.3 estimates that two-thirds of the servers could be taken out and functionality would be maintained.

The Internet Software Consortium runs F on BIND 8.2.3 (Hrmmn... their latest release is 8.3.0, they've noted that 8.2.5 has a security bug, and the 9 series *is* out and at 9.2; does anyone else find it disconcerting that they run 8.2.3?) Does anyone know of a list of who takes care of these root servers?

8.2.5 has the bug. The only remote exploits I know of myself were introduced after 8.2.3

Actually, both 8.1.2 and 8.2.3 are very stable and secure in the 8 series.

I personally run 8.1.2 on half of my servers (slaves), as I don't need the newer features of 8.2 on them.

8.1.2 is also not affected by the holes introduced in the 8.2.2 series, which existed up until, I believe, 8.2.2p5 (but don't quote me on that patch level).
8.2.3 was basically a polished version of this.

Anything in the 8.2 tree released after that potentially still has bugs, and they did not fix them there because 9.x was so close to release.

I have not paid any mind or attention to the 9.x tree at all myself, and won't until it gets a tad more stable.

Additionally, there are still 4.x versions that are extremely stable and secure and running on the Internet's backbones.

Just because a version is older doesn't mean it automatically has bugs.
Some people either know or feel more comfortable with the 4.x zone files than they do with 8.x.
They should not be forced to upgrade if they don't want to.
It's the same with 8.x to 9.x.

Most of the changes are not security or stability fixes anyway, only new features.

The EU believes that because the root servers are not controlled and administered by one central authority, they are unreliable.

This is true, to an extent. Different and widely spread organizations run the root name servers, using different OSes, hardware configurations, and network connectivity.

And this is a Good Thing.

Concentrating and centralizing the root name servers would defeat the diversity that now exists. If one goes down, the others pick up the load. If there's a fatal hardware bug in one, it probably won't affect the servers running on different hardware. And, most of all, a single business or management failure will not disrupt root nameservice.

The OpenNIC is a user owned and controlled Network Information Center offering a democratic, non-national, alternative to the traditional Top-Level Domain registries.

Users of the OpenNIC DNS servers, in addition to resolving host names in the Legacy U.S. Government DNS, can resolve host names in the OpenNIC operated namespaces as well as in the namespaces with which we have peering agreements (at this time those are AlterNIC and The Pacific Root).

Membership in the OpenNIC is open to every user of the Internet. All decisions are made either by a democratically elected administrator or through a direct ballot of the interested members and all decisions, regardless of how they are made, within OpenNIC are appealable to a vote of the general membership.

So a couple of years ago Jon Postel (RIP) could redirect all authoritative root server queries to his lab PC and the internet was no worse for the wear, but ICANN, with substantially more resources, redundant locations, and dozens of authoritative root servers, cannot guarantee that some subset of them will always be online?

Huh?

What did I miss? We all have to meet requirements with respect to uptime and service availability, whether you're a five-nines shop (God help you) or not. Why should ICANN be any different?

A couple of years ago certain destabilizing influences were not on the net. Today, the net is littered with cracked copies of Win2K on cable modems, not to mention serving "the enterprise," whatever that is. The vulnerability demonstrated by all those crippled machines did start to destabilize routers all around the world. You did not miss all the fun, did you?

Unless people get smart and dump M$, it's hard for anyone to guarantee any service. It's kind of like planning to meet someone on Bourbon Street for Mardi Gras: your voice will be lost in the noise. All the resources in the world won't protect you from irresponsible net usage.

/quote/
2.3 At any time, each server MUST be able to handle a load of requests for root data which is three times the measured peak of such requests on the most loaded server in then current normal conditions. This is usually expressed in requests per second. This is intended to ensure continued operation of root services should two thirds of the servers be taken out of operation, whether by intent, accident, or malice.
/quote/
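A quick sanity check of those numbers (13 roots, each provisioned for three times the single-server peak, load assumed to spread evenly across survivors):

```python
# How many of the 13 root servers can be lost before the survivors,
# each rated at 3x the normal single-server peak, are overwhelmed?

import math

SERVERS = 13
capacity_factor = 3   # the MUST from RFC 2870 section 2.3

# Total normal load is SERVERS units; each survivor can carry 3 units,
# so we need at least ceil(SERVERS / 3) survivors.
min_survivors = math.ceil(SERVERS / capacity_factor)   # -> 5
max_losses = SERVERS - min_survivors                    # -> 8

print(min_survivors, max_losses, round(max_losses / SERVERS, 2))
```

Five survivors out of thirteen suffice, so roughly two-thirds of the servers can indeed be lost, matching the RFC's stated intent.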

Given the nature of how DNS works, and how the root servers are run, how can ICANN guarantee anything? (it can't) If they do provide some sort of guarantee then haven't they added a financial incentive for someone to DOS the root servers?

The Europeans are asking for something that cannot be delivered (currently), and if they get it the chances increase that someone will DOS the servers for some financial gain. (i.e. your server went down, I now don't have to pay you x dollars). If I was ICANN I wouldn't want to sign an agreement. It may be time for ICANN to change the way it does business, and the "ad hoc" nature that the root servers are maintained may have to change. DNS the protocol itself needs to be very carefully looked at as well.

The root servers should be owned by a formal co-op, owned collectively by everyone who has a domain name registered, and run by an elected board with a hired staff. This would be a "producer co-op", like Agway [agway.com], the giant co-op for farmers, rather than the more common consumer co-op. This would bring together the interests of the people who need the root servers to stay up, the domain owners, and the ownership of them.

Add 'no corporations nor governments' to your statement, and you have me 50% sold. Just don't call them 'socialist' servers; there are people whose support would be needed in this idea who, like you mention, think that socialism = communism = red = bad, long-bearded dude who smokes stinking cigars.

That's one thing that has always puzzled me. root.hints contains the list of root servers, and it doesn't go all the way through Z in the current naming convention, so why can't we have more root servers? I mean, especially with the price of hardware and such being what it is, it shouldn't be that hard to set up additional root servers. I mean, what if the DNS howtos of the world just included a line like, "Your root.hints now includes the ICANN servers, add these additional listings for the other servers"? I partially agree that there does need to be a central authority for all this, but I do think ICANN is handling it in the best way. There is a need for some control so that two people don't try to register the same name with different authorities and create a conflict. However, I also think it should be a case of first come, first served on getting the names, and the trademark game should not be a consideration.
But I could be completely wrong, because I also think that DNS records should include rudimentary routing info that helps the rest of the world find that last hop to my network, since my ISP will not route for me. And I also think that DNS should have the ability to have a PORT record, so when doing a DNS lookup the person looking me up can be directed to service ports within my IP. That way www.foo.com can live on port 8090, for instance, because cable modem companies sometimes block port 80. When www.foo.com gets looked up, the client not only gets the IP, but the port on the server to connect to, so users don't have to have stupid URLs like http://www.foo.com:8090; DNS takes care of passing the 8090 as part of the lookup reply.
I am working on the RFC for this since there doesn't seem to be one.

Uh, your post shows that you don't know the difference between the internet and the WWW. Not everything runs on port 80. Domain names have nothing to do with ports. Your domain name points you to an IP, which identifies your machine. You then connect to a port on that machine. The port you connect to is either identified by convention (such as port 80 = HTTP) or specified explicitly. If the server is running a service on non-standard ports, it is the responsibility of the server to redirect clients to the correct port.

Actually I do, I was using port 80 as an example of what i was talking about....

Perhaps a closer read next time, instead of just skimming for flamebait.

To reiterate and expand... If a user, such as one on a cable modem, wants to have a web site, and the ISP blocks port 80, then if DNS had the ability to pass port information with the DNS reply, the user could have www.foo.com as the URL leading to the site, instead of www.foo.com:8090.

Another example: I have one IP but want two sites, and they live on different boxes. The same rationale applies. With DNS carrying the port number, one site could live directly on port 80 and the other on 8090, and both could have simple names, www.foo.com and www.bar.com. (I am actually running into this problem now, as I already have a domain, and my girlfriend would like one as well.)
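A toy sketch of what the proposed PORT lookup might look like from the client side. Everything here is hypothetical: the zone data is a stand-in dict, and no such record or API exists in real DNS.

```python
# Hypothetical sketch of the proposed PORT-record lookup, client side.
# FAKE_ZONE stands in for DNS data; nothing here is a real DNS API.
FAKE_ZONE = {
    "www.foo.com": ("10.0.0.5", 8090),  # port 80 blocked by the ISP
    "www.bar.com": ("10.0.0.5", 80),    # second box reached by forwarding
}

def lookup(name):
    """Return (ip, port), the way the proposed reply would carry both."""
    return FAKE_ZONE[name]

# A client could then connect without the user ever seeing ":8090":
ip, port = lookup("www.foo.com")
```

The point of the sketch is only that the port would travel in the lookup reply rather than in the URL.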

I am the DNS admin for several Internet domains, and have been for 5+ years in a professional capacity. I have been on the Internet in one capacity or another for 10+ years. I remember a time when the web didn't even exist.

I agree with Eristone, I'm afraid. If you can't figure out how to do port forwarding for subdomains and virtual hosting...

I'm still wondering what ports have to do with DNS, and why you'd want port information attached to your DNS if you're running more than one service. Even assuming this weren't doable in other ways that don't require major changes to DNS, 99% of the time services will be running on their standard ports on legit servers anyway. (BTW, your ISP blocks port 80 because running servers is against the AUP. That means it's not a legit server.)

OK, I CAN do all that, but I am trying to solve problems for average users. I know geeks trying to help non-geeks is generally frowned upon, but sue me, I have a big heart and want to help them get solutions. See the answer above for yet another expanded description of what I am attempting to do for these people.

BTW, I don't believe ISPs should be able to limit what you do with the bandwidth; call my desire to help people who have their port 80 blocked civil disobedience...

I am done with this conversation, because obviously you people have so limited a view of things that you can't open your minds enough to understand what it is that I am trying to accomplish.

Call me wacky, but I just don't think that DNS (a way of identifying a machine) is the proper place to be storing ports (a way of identifying a process). For what it's worth, there's at least one company that provides everything you want to do in an easy-to-use solution for home users, without needing changes to the DNS system (yeah, watch that happen).

The reason why there are only 12 or 13 root servers is based on several factors.

The most basic factor is that the DNS specification imposes an obsolete 512-byte limit on the size of UDP DNS packets. (DNS can run over TCP, but the overhead is much higher than with UDP.)

Since reply packets often contain many resource records, and DNS names can be up to 255 bytes each, you can see that one could brew up server names that would press hard against that 512-byte limit even with only two servers. Fortunately, server names are usually not all that long.
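A back-of-envelope check of how 13 servers fit the budget. The byte counts assume the RFC 1035 wire format (12-byte header, 11 bytes of fixed fields per record, 2-byte compression pointers) and the a.root-servers.net naming scheme; this is an illustration, not a real packet dump.

```python
# Rough size of a root "priming" response under RFC 1035 wire format,
# assuming the a.root-servers.net naming scheme and name compression.
HEADER = 12                       # fixed DNS header
QUESTION = 1 + 2 + 2              # root name "." + QTYPE + QCLASS
RR_FIXED = 1 + 2 + 2 + 4 + 2      # owner "." + TYPE + CLASS + TTL + RDLENGTH
FIRST_NS = 2 + 13 + 4 + 1         # "a" + "root-servers" + "net" + root label
NEXT_NS = 2 + 2                   # one-letter label + compression pointer
A_RR = 2 + 2 + 2 + 4 + 2 + 4      # pointer name + fixed fields + IPv4 address

def priming_size(n):
    """Approximate response size for n root servers (NS plus glue A records)."""
    ns_total = (RR_FIXED + FIRST_NS) + (n - 1) * (RR_FIXED + NEXT_NS)
    a_total = n * A_RR
    return HEADER + QUESTION + ns_total + a_total

print(priming_size(13))  # 436 bytes: under 512, but without much headroom
```

Each additional server costs about 31 bytes under this model, which shows why the count stalled in the low teens.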

DNS name compression comes into play to help, and the situation has improved since most root servers now share root-servers.net as the right-hand part of their names.

Internationalization of domain names under the ACE rules coming out of the IETF will work the other way: internationalized server names will tend to be longer than the a.root-servers.net form we see today.

Now, just because we see one NS record in a list of servers doesn't mean that there is only one computer there - or even that it is in one place. Many servers are actually clusters that are hiding behind load balancers.

And with IP "anycast" technology (essentially a way of establishing multiple instances of the same address block by using localized, more-specific route announcements) we can have as many servers as we want at the same apparent address but located in widely scattered locations around the world. The .biz servers are, I believe, handled this way.

Oh, by the way, don't fall into the belief that the names/addresses listed in the "hints" file are the root - those addresses merely serve as a way to find a single root server. That server, in turn, will provide the actual set of root servers. That's why the hints file is called "hints" - it's just there to get the ball rolling.
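That "get the ball rolling" step can be seen by hand with dig; the address below was a.root-servers.net's address at the time and is only illustrative:

```
# Priming by hand: ask one hinted address for the real root NS set.
# (198.41.0.4 was a.root-servers.net's address when this was written.)
dig @198.41.0.4 . NS
```

The answer and additional sections of that reply are the authoritative root server list, regardless of what the hints file said.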

There's no reason we have to use whichever ACE becomes the standard in the domain names of root name servers. We sacrificed the old domain names of the root name servers (e.g., ns.nasa.gov) to the greater good of better domain name compression years ago.

The countervailing force is EDNS0, which allows 4096-byte UDP-based DNS messages. BIND 8.3.0, recently released, supports EDNS0; the F root server is already running it. Once 8.3.0 is fully deployed on the roots, I think additional root name servers are just a quick hack away:

- System query without EDNS0: You get 13 root name servers
- System query with EDNS0: You get more
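As a sketch of the mechanics, assuming nothing beyond RFC 2671's wire layout: the resolver advertises its larger buffer by appending an OPT pseudo-record whose CLASS field is reused to carry the UDP payload size.

```python
import struct

def opt_record(payload_size=4096):
    """Minimal EDNS0 OPT pseudo-record advertising a UDP payload size.

    Per RFC 2671 the record reuses existing fields: the owner name is
    the root, TYPE is 41 (OPT), the CLASS slot holds the payload size,
    and the TTL slot holds extended RCODE, version, and flags.
    """
    return (b"\x00"                            # owner name: root
            + struct.pack("!H", 41)            # TYPE 41 = OPT
            + struct.pack("!H", payload_size)  # "CLASS" = UDP payload size
            + struct.pack("!I", 0)             # "TTL" = ext-RCODE/version/flags
            + struct.pack("!H", 0))            # RDLENGTH = 0, no options

# An EDNS0-aware query is an ordinary query with this 11-byte record
# appended to the additional section (and ARCOUNT bumped to 1).
```

A server that understands the record can then send responses up to the advertised size instead of truncating at 512 bytes.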

And I also think that DNS should have the ability to have a PORT record so when doing a DNS lookup the person looking me up can be directed to service ports within my IP so www.foo.com can live on port 8090 for instance because cable modem companies sometimes block port 80.

Been there, done that. It is called the SRV record, and it works in much the same way as the email MX record.

Not supported in any of the browsers yet, but it is used extensively in W2K for other purposes.

Which is sort of the point I was trying to make originally, because SRV records aren't fully supported. There needs to be an agreement on making something of this sort happen that will allow all clients and servers to respect the information coming back...

Thanks, I will check into that. I think I looked into those before, but that was a while back when I was first learning the DNS admin job, and let's face it, for ordinary everyday tasks the zone file doesn't get that complicated, so those sorts of things slip your mind... but at least it was a rational response with useful information, rare around here.

RT is not the record of interest for ports; SRV is.
This is from chapter 15.7.6:

Quoting the book (and all credits due)
~~~~~

The experimental SRV record, introduced in RFC 2052, is a general mechanism for locating services. SRV also provides powerful features that allow domain administrators to distribute load and provide backup services, similar to the MX record.

A unique aspect of the SRV record is the format of the domain name it's attached to. Like service-specific aliases, the domain name to which an SRV record is attached gives the name of the service sought, as well as the protocol it runs over, concatenated with a domain name. So, for example:
ftp.tcp.movie.edu
would represent the SRV records someone ftping to movie.edu should retrieve in order to find the movie.edu FTP servers, while:
http.tcp.www.movie.edu
represents the SRV records someone accessing the URL http://www.movie.edu/ should look up in order to find the www.movie.edu web servers.
~~~~~~~~~~~
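Applied to the earlier cable-modem example, a zone fragment in the quoted RFC 2052 style might look like the following; the names and port are the poster's hypothetical ones, not a working deployment:

```
; Hypothetical RFC 2052-style SRV records for the blocked-port-80 case.
; Fields after "IN SRV" are: priority, weight, port, target host.
http.tcp.www.foo.com.  IN SRV  0 0 8090  www.foo.com.
ftp.tcp.foo.com.       IN SRV  0 0   21  ftp.foo.com.
```

A client that honored SRV would learn both the host and the 8090 from the lookup, exactly what the proposed PORT record was meant to do.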

The computer of someone searching for www.bbc.co.uk for the first time would consult the closest root server and would find out that Nominet handles the database of net domains ending .uk.

The root server then would pass on the net address of Nominet to allow the searching machine to find the exact web address of the BBC website.

This is totally inaccurate. If you are searching for www.bbc.co.uk, your computer asks the local DNS cache (listed in /etc/resolv.conf, unless you have some brain-dead OS). That cache then asks a root server for www.bbc.co.uk (if that information has not already been cached). This produces a referral to the .uk nameservers. The process continues for co.uk and bbc.co.uk as necessary. Note that it does not ask the closest root server, because the cache has no way to know which that is. BIND uses the "fastest" server (until it overloads from all the other BIND servers using this strategy); djbdns's dnscache picks one at random.
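The walk just described can be sketched as a toy model. The list of zones is computed, not looked up; real caches follow NS referrals over the network, but the order of queries is the same.

```python
# Toy model of iterative resolution: the cache starts at the root and
# follows referrals down one zone cut at a time.  No real DNS involved.
def referral_chain(name):
    """Zones consulted, in order, while resolving `name` from scratch."""
    labels = name.rstrip(".").split(".")
    zones = ["."]                                  # start at the root
    for i in range(len(labels) - 1, 0, -1):
        zones.append(".".join(labels[i:]) + ".")   # each delegation below
    return zones

print(referral_chain("www.bbc.co.uk"))
# ['.', 'uk.', 'co.uk.', 'bbc.co.uk.']
```

In practice some of these cuts may be served by the same machines (uk and co.uk, say), in which case a step is skipped.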

One way to avoid delays at the root servers is to run your own local root server, and periodically download the root zone. This [open-rsc.org] shows you how to do it using the ORSC root zone, but you can do it with the standard root as well. You can AXFR it from one of the root servers. Then you tell your local cache to use your local root as the root server.
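A minimal sketch of that recipe, assuming BIND and a root server that still permits zone transfers; the server choice and file path are illustrative:

```
# Pull a copy of the root zone via AXFR from a root server:
dig @f.root-servers.net . axfr > root.zone

# named.conf fragment: serve the downloaded zone locally as the root.
zone "." {
    type master;
    file "root.zone";
};
```

Remember to refresh the downloaded zone periodically, or your local root will drift out of date.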

I wrote a document about some simple steps that could be taken to improve DNS security before ICANN's meeting last November.

http://www.cavebear.com/rw/steps-to-protect-dns.htm [cavebear.com]

Don't let the fact of 12 or 13 servers lull one into a sense of security - they are all fed data from the same source, and if that source is corrupted, then all the root servers will be corrupted. And that's not hypothetical - the entire .com top-level domain disappeared for a few hours in 2000. (Most people didn't notice because of the damping provided by DNS caching, but it would have become really bad had the situation continued for a few more hours.)

Also, because all of the root servers run a nearly common code base, they are potentially vulnerable to a common weakness.

In addition, one need not bring down a server to take it off-line: an attacker need merely saturate the network in the vicinity of a target server so that no good traffic can get through. An even scarier notion is corruption of Internet routing, so that packets flowing to DNS server addresses are forwarded out router interface null0.

If I read this correctly, the reason why the EU local registries don't have their own root servers, and hence control over service levels is a historical issue [isc.org].

Excerpting from the Internet Software Consortium's page, linked above - and please allow me to state that such a reference is anecdotal rather than given fact,

We then discussed potential candidates and found no volunteers in the Asia-Pacific region, none in Africa, and only one in Europe.

The "one in Europe" btw was NOT Nominet or another registrar, it was a guy working for LINX, the London INternet eXchange.

There's good reason for this. As late as the early 1990s, Europe was still thinking that X.500 was the way forward, and a large amount of resources from universities, telcos, and local standards agencies was devoted to "interoperability" testing of X.500 directory services. What really happened was that the standards lagged the implementations so badly that vendors and implementors went ahead and did their own thing, creating, as anyone who has dealt with X.500 knows, a nightmare for inter-vendor interoperability. That created the space in which the Internet and DNS/BIND could flourish. FWIW, LDAP is (not precisely, so please don't flame me; too large a subject for absolute accuracy here) a lightweight derivative of DAP, the access protocol of X.500. Novell's eDirectory, which runs some of the largest sites (CNN.com, AOL messenger services), is itself a souped-up LDAP implementation.

You can find a brief overview of X.500 and what the "authorities" in Europe were up to as late as 1990 and beyond in this history of X.500 [salford.ac.uk]

I'm British-born myself, but this all seems to me to be Euro-whining. Particularly the UK's Nominet making an issue of this is absolute BS. Nominet has, IMO, very sharp practises. If you "buy" a domain in the UK (domain.co.uk) via an ISP, Nominet maintains a "tag" linking your domain to the "providing" ISP until another ISP takes it over. Domains _never_ go back into circulation when they expire. Nominet refuses, on the whole, unless you threaten or cajole them with considerable effort, to "release" your domain, because it states it will not get involved in contractual disputes between you and your ISP. Most UK ISPs make contracts which lock you in to your services and charge a considerable and hefty severance fee, usually buried in the small print.

You _can_ get a "Neutral Tag" applied to a UK domain if you pay £80 for two years, which fee goes back to the ISPs who are members of Nominet - a for-profit company, limited by guarantee, a rare form of UK company which offers very lax statutory reporting. Even though you _can_ do all this, I've had several clients now who've complained to Nominet, e.g. when their ISP is TU and no longer provides service, and Nominet tells them anyway that they can only deal with an ISP who is a member of Nominet. Obviously that's BS. But you can't register a .co.uk domain in the UK and run your own DNS and maintain it under your own authority without a *lot* of expensive hassle, and possibly an attorney. You could hire me, of course, but this kind of work sucks, so I wouldn't offer it generally.

Sorry for that rant against Nominet, but it's Crocodile Tears time again and minus several million points for the Brits, as per usual.

Nominet has, IMO, very sharp practises. If you "buy" a domain in the UK (domain.co.uk) via an ISP, Nominet maintains a "tag" linking your domain to the "providing" ISP until another ISP takes it over.

Oh, for heaven's sake!

Anyone can be a Nominet tag holder [www.nic.uk]. I'm a tag holder myself. You don't have to be an ISP. You don't have to run your own DNS. If you want complete control over your domain, just register your own tag.

The point folk seem to miss is that the root name server IP addresses are hard-coded into the infrastructure. To change the root servers you have to either wait for everyone to redeploy BIND or get an address reassigned somehow. There is a hard limit of 13 servers, set by the 512-byte limit on UDP DNS replies, the size of the records, and the need to guarantee that the packets don't fragment.

Reassigning a root server address is hard because the operator likely has other machines in the address block whose numbers would also have to change.

The EU concern is not irrational; it is pretty weird that the root zone is essentially a volunteer effort, given that the costs are not negligible and the responsibility immense.

Against this however there is a major political issue at stake. The root operators are in effect the arbiters of the DNS. If ICANN gets too big for its boots they are a check on it.

The other issue is that there are very few companies that could credibly manage the root zone on a contractual basis. It is one thing to run a server on a volunteer basis, quite another to provide a service guarantee.

One thing in the pipeline may well change some of these concerns: anycast addressing, which allows multiple servers to sit on the same IP address. Packets are routed to the 'nearest' machine. That will allow the deployment of additional root servers, and it will also address some of the denial-of-service concerns.