Posted
by
CmdrTaco
on Monday September 17, 2007 @10:19AM
from the adopt-a-puppy-instead dept.

alphadogg writes "For a decade, IPv6 proponents have pushed this upgrade to the Internet's main communications protocol because of its three primary benefits: a gargantuan address space, end-to-end security, and easier network administration through automatic device configuration. Now it turns out that one of these IPv6 benefits — autoconfiguration — may not be such a boon for corporate network managers. A growing number of IPv6 experts say that corporations probably will skip autoconfiguration and instead stick with DHCP, which has been updated to support IPv6."

DHCP in an IPv6 world is a buggy whip [wikipedia.org]. It's not necessary. An IPv6 device can discover its own IP address, gateway router, and subnet prefix (if necessary) without the help of any servers, because it's built into the protocol stack.

DHCP doesn't give a network admin any more control over a network, either. That's just a silly statement. How does having a server doling out IP addresses make it any easier to control a network? It's not like a device *must* be set to use DHCP. It's not difficult to figure out what IP address ranges a DHCP server is not doling out and use one of those, even on IPv4.

The reality of the situation is that stateless autoconfig in IPv6 is one way to get basic network connectivity set up; DHCP is another. Depending on your situation, the phase of the moon, and any of a number of philosophical viewpoints held by the network admin, stateless autoconfig might or might not get used. *shrug* Even with stateless autoconfig, DHCPv6 might also get used to configure other information that is not handled by stateless autoconfig (DNS servers, NTP servers, any of a huge list of other things).
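Which of those two paths a host takes is signaled by two flag bits in the ICMPv6 Router Advertisement (RFC 4861): M ("managed", get addresses from DHCPv6) and O ("other configuration", get DNS/NTP/etc. from DHCPv6). A minimal sketch of how a host might interpret that flags byte (the function name is my own, not from any stack):

```python
# Sketch: interpreting the M ("managed") and O ("other config") flag bits
# carried in an ICMPv6 Router Advertisement (RFC 4861). In the RA body the
# flags byte follows the hop-limit byte: bit 0x80 = M, bit 0x40 = O.

def ra_config_hint(flags_byte: int) -> str:
    """Return how a host should obtain its configuration, per the RA flags."""
    managed = bool(flags_byte & 0x80)  # M: get addresses from DHCPv6
    other = bool(flags_byte & 0x40)    # O: get DNS/NTP/etc. from DHCPv6
    if managed:
        return "stateful DHCPv6 for addresses and other configuration"
    if other:
        return "stateless autoconfig for addresses, DHCPv6 for other info"
    return "stateless autoconfig only"

print(ra_config_hint(0x00))  # pure SLAAC
print(ra_config_hint(0x40))  # the hybrid described above: SLAAC + stateless DHCPv6
print(ra_config_hint(0x80))  # full stateful DHCPv6
```

The 0x40 case is exactly the "stateless autoconfig plus DHCPv6 for everything else" arrangement the comment describes.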

The important point to remember, though, is *2 YEARS*. That's how long we have until the IPv4 address space is fully allocated at the top level. It may take a little longer (months?) before people start really feeling any pain from that at the end-user level. But it's the critically important point for people to realize. Can you be ready for IPv6 in 2 years? You need to be. If it's gonna take you 2 years to get IPv6 functioning in your network, then you need to start *NOW*.

Yes, that's a smaller task (though I wouldn't call it small... there are all sorts of corner cases to consider with that sort of solution). But it's the same answer. If it's gonna take you 2 years, you need to start now. If it's gonna take less than 2 years, then you've got a little bit of time... but do you really want to risk painting yourself into a corner if it doesn't go as smoothly as you expect?

I would say start now (if you haven't already), and if you get done beforehand... well, good for you, you're ready.

But what about the application protocols? That's what matters. We are not just talking about a "gateway", but a full-blown proxy that translates IPv4/IPv6 sessions and protocols. Then there is the encryption that is built in to IPv6, which makes this MUCH more difficult.


The important point to remember, though, is *2 YEARS*. That's how long we have until the IPv4 address space is fully allocated at the top level. It may take a little longer (months?) before people start really feeling any pain from that at the end-user level. But it's the critically important point for people to realize.

Son, they've been saying that for 15+ years.

Yes, there is a limit. But once IPv4 address space at the "top level" becomes scarce, it will be handled according to the rules of any scarce commodity - it'll become more expensive. That will encourage efficiency, free space from wasteful users, etc. Then we'll get close again, lather, rinse, repeat. We will eventually hit the point of "full", but it's not like in September 2010 suddenly there will be no more routable IPs for the next system that needs one.


About once a year I investigate the current state of ipv6 support, and every time so far I have found every major operating system (including linux-based ones) to be inadequate to the task of deploying ipv6. The software support is just not there, on both the system and application levels. Sure, I can configure ipv6 interfaces on hosts and even have some of them set up tunnels and talk to each other, but it is entirely impossible for me to configure a non-trivial network without ipv4 support on every host and still expect it to work, so there's no damned point.

NAT is the solution to the address space problem. Get used to it, because ipv6 has spent the last five years failing to become a solution. When we finally run out of ipv4 addresses, we aren't going to switch to ipv6, we're going to switch to using NAT at the ISPs. You won't get an internet-routable address for anything other than a server after that happens - regular DSL lines will be allocated an address from one of the private ranges and NATted onto a smaller pool of routable addresses as they leave the ISP's network.

It's going to come down to a choice between a technology that has spent years going nowhere and a technology that has spent years being used as the solution to the problem. I know which way the ISPs are all going to jump.


I suppose there might be (I no longer remember since autoconfig always seemed like a gimmick to me) but it takes time to convince an admin that they should change from old and not busted to new and unknown.

I can see both sides of this argument pretty easily. Perhaps you have multiple gateways for different reasons but are too dumb to subnet/VLAN. Of course with 802.1X authentication I'm not sure how that would work without DHCP given that an address is assigned dynamically and RADIUS accounting determines when and if they gain access and for how long among other features.

Theoretically it could be done without DHCP although I imagine the software clients wo

Yes, you can get your IP address and router, but you won't get a DNS server. I don't know about you, but I'm not a huge fan of manually entering 128-bit addresses...
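The manual-entry pain is real, though the "::" zero-compression notation softens it a little. A quick sketch with Python's `ipaddress` module (the address here is just the documentation prefix, picked for illustration) shows the two spellings of the same 128-bit address:

```python
# Sketch: the same 128-bit IPv6 address in its compressed and fully
# exploded spellings -- the former is what you'd actually type by hand.
import ipaddress

addr = ipaddress.ip_address("2001:db8:0:0:0:0:0:53")
print(addr.compressed)  # 2001:db8::53
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0053
```

Even compressed, addresses with autoconfigured 64-bit interface identifiers rarely shorten much, which is the poster's point.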

IPv6 autoconf resembles BOOTP or Inverse ARP more than it does DHCP. Also, DHCP has steadily developed a bunch of knobs over the years so that (for instance) IP phones can be told which TFTP server to use - that sort of functionality doesn't exist in v6 autoconf today. Not to say that it never will, but v6 autoconf doesn't currently have anywhere near the capabilities that v4 DHCP does.

And DHCPv6 provides for more information than merely the IP, Subnet, and Router addresses (say, DNS, boot server, configuration file name, time server, etc). And yes, you can configure a network in such a way that the device is required to be known by the DHCP server before it is allowed to talk (off of its local network anyway...).

DHCP doesn't give a network admin any more control over a network, either. That's just a silly statement. How does having a server doling out IP addresses make it any easier to control a network? It's not like a device *must* be set to use DHCP. It's not difficult to figure out what IP address ranges a DHCP server is not doling out and use one of those, even on IPv4.

I beg to differ.

DHCP combined with modern network infrastructure allows network administrators complete control over all addressing issues in the network - including preventing non-DHCP hosts from participating in the network (via DHCP snooping) and location-based services ("DHCP option 82"). DHCP is so much more than just a kludge to get an IP address to the host. The extensibility of DHCP allows network administrators to append information such as DNS, NTP, and TFTP (for IP telephony/TV) server information and so much more - default gateway and static routes, just to name a few. All this is pretty much lacking from IPv6 autoconfiguration.
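The option 82 mentioned here (Relay Agent Information, RFC 3046) is itself a container of TLV sub-options - typically sub-option 1 (circuit-id, e.g. the switch port) and 2 (remote-id, e.g. the switch MAC). A minimal sketch of the on-the-wire encoding, with made-up example values:

```python
# Sketch: encoding DHCPv4 option 82 (Relay Agent Information, RFC 3046).
# The option value is itself a list of TLV sub-options:
#   sub-option 1 = circuit-id (e.g. switch port), 2 = remote-id (e.g. switch MAC).

def encode_suboption(code: int, value: bytes) -> bytes:
    # one-byte code, one-byte length, then the raw value
    return bytes([code, len(value)]) + value

def encode_option82(circuit_id: bytes, remote_id: bytes) -> bytes:
    payload = encode_suboption(1, circuit_id) + encode_suboption(2, remote_id)
    return bytes([82, len(payload)]) + payload

# hypothetical port name and switch MAC, for illustration only
opt = encode_option82(b"eth1/0/24", b"\x00\x11\x22\x33\x44\x55")
print(opt.hex())
```

This is what lets the DHCP server key its decisions (which address, which options) to *where* the request physically entered the network.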

Don't forget the WPAD information so that web browsers can find their proxy server. Handing this out in DHCP is faster for the browser than just configuring WPAD in DNS (it can be done both ways, and should be for redundancy - but setting it in DHCP generally results in better behavior).

DHCP doesn't give a network admin any more control over a network, either. That's just a silly statement. How does having a server doling out IP addresses make it any easier to control a network? It's not a like a device *must* be set to use DHCP. It's not difficult to figure out what IP address ranges a DHCP server is not doling out and use that, even on IPV4.

As others have said, DHCP does so much more.

I run DHCP on my home network to set up DNS, WINS, gateway, IP address, NTP (time), and other services. I also use it to record MAC addresses for security reasons, and to easily grab them so that I can configure static IPs and DNS names for specific systems without having to ask people for their specific MAC (unless they're coming in wireless... then they need to provide it to get access anyway...).

Point is, it gives me an easy way to manage my network. And honestly, I was looking forward to playing around with IPv6 in the same manner on my home network (because I could, and wanted to experiment), but some things just aren't ready for it yet, and the lack of a DHCPv6 server (at the time) to manage the auto-configuring was an issue too.

Additionally, I have played with IPv6 with its auto-config, which at least under Windows 2003 is a joke, as it is a half-baked implementation that is just plain broken. Half the time it works, and half the time it doesn't. And when it doesn't, it is seriously borked, and breaks everything else too. (And I was only running a set of 6 systems (4 servers and 2 clients) in my test network.) It took a lot of time to get systems reconnected when things failed out due to IPv6 addressing not working. Haven't tried it much under Linux yet... but I would still have Windows clients to support, and VPN and other software that would need to be supported. (Software I don't control the version or support on... work does.)

Anyhow... DHCPv6 would have made supporting that test network a lot easier, and actually would have kept it functional. I cannot imagine what kind of problems admins will have trying to deploy IPv6 with auto-config on a larger network. (Imagine: your computer gets a new IPv6 address just because you rebooted... now make that your server, and you could really screw up your network quickly.)

>DHCP doesn't give a network admin any more control over a network, either.

Sure it does. I can set certain classes of my clients to use a certain set of DNS servers; I can blacklist specific MAC addresses from getting an address, or I can grant them addresses on a VLAN that has no corporate access but has Internet access; I can have a central location that records the addresses of my clients and who they're linked with, etc...

At least these are services I'm used to right now with DHCPv4. I'm going to be

That's not a great analogy. A buggy whip is obsolete in an automotive world because automobiles can't be made to accelerate by scaring them. It's true that stateless address configuration takes away one problem that DHCPv4 solves. So stateless is a bit like Bonjour, only you get a routable address. Not a bad thing. But there are a lot of other things that DHCP does in a network infrastructure that stateless can't do so easily, from DNS updates (unless you solve the key distribution problem somehow) t

The same way that it's done under ipv4 with small servers. The server itself informs the DNS of the change and the DNS propagates the change. The DHCP server presently only does that if it is told to do so and knows that there are servers on the network, which MAC addresses they have, etc. They don't do this if you're running a small server over a private network unless you've specifically set it up like that. Most if not all of the dynamic DNS outfits provide a utility that will do the updating. I see no rea

I could be wrong about that, I haven't delved into the technical specs for it. The thing that I don't know for certain is whether that function is built into ipv6 instead of being an addon like with ipv4. If it isn't built in, the people doing the design work should be ashamed of themselves, as it is probably one of the easiest needs to predict. A datacenter should be easier though, as any datacenter is going to be in a world of hurt without a couple of people around to manage the network. The scenarios

There is a long thread about ipv6 & dynamic updates located here [ietf.org]

There is a draft rfc for adding a router message to the autoconfiguration of ipv6 addresses to include sending dns addresses. The draft is available here [ietf.org]. Of course after the draft is finalized kernel(l

I like the fact that you can map a MAC address to an IP address in DHCP. So my machines in my network always get the same IP address, even though I set them all to be dynamically allocated. Autoconfig removes this ability, as far as I can see.
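What the poster describes is usually called a host reservation: the server consults a fixed MAC-to-IP table before dipping into its dynamic pool. A hypothetical minimal sketch of that logic (all names and addresses here are made up for illustration; real servers like ISC dhcpd do this via `host` declarations):

```python
# Hypothetical sketch of DHCP host reservations: a fixed MAC -> IP table is
# consulted before the dynamic pool, so known machines always get the same
# address even though they are configured as ordinary DHCP clients.
import ipaddress

RESERVATIONS = {                         # hypothetical example data
    "00:11:22:33:44:55": "192.168.1.10",
}
POOL = ipaddress.ip_network("192.168.1.0/28")

def allocate(mac: str, leased: set) -> str:
    if mac in RESERVATIONS:
        return RESERVATIONS[mac]          # known machine: always the same IP
    reserved = set(RESERVATIONS.values())
    for host in POOL.hosts():             # otherwise: first free pool address
        addr = str(host)
        if addr not in leased and addr not in reserved:
            leased.add(addr)
            return addr
    raise RuntimeError("pool exhausted")

leased = set()
print(allocate("00:11:22:33:44:55", leased))  # always 192.168.1.10
print(allocate("aa:bb:cc:dd:ee:ff", leased))  # first free dynamic address
```

SLAAC has no such central table, which is exactly the ability the poster misses (though, as the reply below notes, EUI-64 addresses are at least stable per MAC).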

IPv6 autoconfig for addresses uses the machine's MAC address to generate its IPv6 address. The method is referred to as EUI-64. There is a tutorial here. [ieee.org]

As long as the machine's MAC doesn't change (e.g. generating a new MAC for privacy reasons), it will configure itse
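The EUI-64 derivation mentioned above is simple enough to sketch: split the 48-bit MAC in half, insert `ff:fe` in the middle, and flip the universal/local bit of the first byte (this is the modified EUI-64 form from RFC 4291; the function name is my own):

```python
# Sketch of the EUI-64 derivation SLAAC uses: split the 48-bit MAC, insert
# ff:fe in the middle, and flip the universal/local bit of the first byte.

def eui64_interface_id(mac: str) -> str:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                          # flip the U/L bit
    eui = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    # render as four 16-bit IPv6 groups, leading zeros suppressed
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:11:22:33:44:55"))
# combined with a prefix, e.g. 2001:db8::/64 -> 2001:db8::211:22ff:fe33:4455
```

Since the result depends only on the MAC, the address is stable across reboots, which is the poster's point.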

First off, it was never "finished", insofar as many features available in other things were/are not available in OSI... Given the level of "optional" features in OSI, in practice, full systems never did manage to communicate with each other. Given the complexity of the standards, building software and debugging things was very, very hard.

I am more than willing to grant that some very specific bits coming out of the OSI process were good

What's up with the OSI protocols? NIH, I guess. Let's all reinvent the wheel instead.

That's not the problem that I saw, 15 or 20 years back, when I was involved in a number of OSI implementation projects. We were in fact looking at several competing protocols, with the idea of implementing them all and developing test suites to determine their good and bad points.

But something interesting happened on all the OSI projects: We'd need the specs, of course, and you couldn't download them. You had to order the hard copy. This meant going through the usual corporate red tape for ordering stuff. You'd fill out a requirement doc, get it ok'd. You'd fill out a purchase req, figure out whose signatures you needed, and have the secretaries work on collecting the signatures. You'd mail off the order, and wait.

Meanwhile, since there was a lot of waiting to do, we'd work on the IP version. We'd download the RFCs, spend an hour or so reading and a few hours discussing, and then we'd sit down at a terminal and start coding. We'd be at the testing stage within a day, and have usable results in a few days. By the time the OSI specs showed up on our desks, we'd have had the IP version up and running for weeks. While we were reading the OSI specs (always much larger than the IP specs), we'd have users getting experience with the IP version, and sending in bug reports and/or change/feature requests. By the time we finally got an OSI version to the alpha stage, the IP version would be ready to send to the first customers.

If the OSI gang had had the sense to make their docs available free on the Internet, they might not have lost so badly. But by trying to make the specs a profit center, and by using a different competing delivery network (the postal system), they put a major time blockade in the way of developers. So they lost out big time to IP.

I've never been all that convinced that IP was any better than OSI, especially now with the big migration to IPv6 peering over the horizon. But I never really got a good chance to test them and compare their capabilities. The OSI version of our code was always so far behind the IP version that the whole issue was moot. IP won every race, because OSI was so slow out of the starting box. And that was because we developers couldn't get our hands on the specs in a timely manner.

IPv6 autoconfiguration is close but no cigar in a couple of significant ways:

1) DNS server information wasn't baked in from the beginning (there are now some drafts to fix this, but I haven't yet seen the working code) - all this time, and we managed to recreate BOOTP...

2) Because autoconfiguration uses /64 addresses for hosts, the address-size gain, while large, isn't anywhere near the original promise, and encoding the MAC address into a globally-visible IP address does release information about hosts which was formerly private (NIC vendor, for one, as well as the more theoretical complaint about the layering violation).

3) Just try it with VMWare or other virtualization software. Ouch. There's a whole lot of borked there.

4) Obviously you wouldn't want to use it for a true server, because who wants their server IP to change when a NIC burns out?

All that said, in a dual-stack environment it works reasonably well, but it doesn't honestly look like anyone gave much thought to a time when IPv4 wouldn't be present on the LAN or on the hosts...
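The privacy complaint in point 2 is easy to demonstrate: from an EUI-64-derived address, anyone on the internet can read the host's MAC (and thus the NIC vendor's OUI) straight back out. A small sketch, using a documentation-prefix address for illustration:

```python
# Sketch: recovering the MAC (and hence the NIC vendor OUI) from an
# EUI-64-derived IPv6 address -- the information leak noted in point 2.
import ipaddress

def mac_from_eui64(addr: str) -> str:
    iid = ipaddress.ip_address(addr).packed[8:]   # low 64 bits: interface id
    if iid[3:5] != b"\xff\xfe":
        raise ValueError("not an EUI-64 interface id")
    # drop the ff:fe filler and flip the U/L bit back
    mac = bytes([iid[0] ^ 0x02]) + iid[1:3] + iid[5:]
    return ":".join(f"{x:02x}" for x in mac)

print(mac_from_eui64("2001:db8::211:22ff:fe33:4455"))  # -> 00:11:22:33:44:55
```

The first three recovered bytes are the vendor OUI; this leak is what the later "privacy extensions" (randomized interface IDs) were designed to plug.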

"3) Just try it with VMWare or other virtualization software. Ouch. There's a whole lot of borked there."

Eh, what?

As far as I could tell, as soon as I started radvd on my gateway all my xen guests autoconfigured their global v6 address. Perhaps you have a VMWare specific issue?

"4) Obviously you wouldn't want to use it for a true server, becuase who wants their server IP to change when a NIC burns out?"

Obviously you don't use a hardware-tied IP address for a true server service. You dedicate an IP address to the actual service so you can move it around freely, decoupled from the hardware and any other services on the box. (And to tie back to your earlier point: if you're virtualizing, there's no connection between the hardware and the MAC address anyway.)

When you have a bazillion ip addresses it's not like you have to save them for a rainy day.

I haven't used Xen with v6. VMWare had problems getting the guest to do the autoconfiguration instead of having the host do it - that provides a vector to get from guest -> host... You do have a fair point: I should probably consider that a VMWare issue rather than an autoconf issue, but the general v6 approach is to have a single gigantic broadcast domain per "site," instead of learning the lessons we all have in the past 10 years about the benefits of small layer-2 islands connected with layer-3... So

"VMWare had problems getting the guest to do the autoconfiguration instead of having the host do it." Could be a bridging issue with VMWare. As long as the virtualization software acts as an unfiltered bridge, v6 autoconf really should work.

"instead of learning the lessons we all have in the past 10 years about the benefits of small layer-2 islands connected with layer-3."

I agree in part, but it depends on how you look at it. I think the idea is to use it as huge, very sparsely populated layer-2 islands conne

You do have a fair point: I should probably consider that a VMWare issue rather than an autoconf issue, but the general v6 approach is to have a single gigantic broadcast domain per "site," instead of learning the lessons we all have in the past 10 years about the benefits of small layer-2 islands connected with layer-3... So the natural way of doing things in v6 will encounter this problem.

Are you sure? I thought this would have been link-local. Basically you would break your net into pools and then the com

The problem is that "link-local" isn't. Layer-2 devices will happily forward frames with fe80:: source and destination addresses. Routers are supposed to stop it, but that requires a layer-3 boundary, which defeats the point of having a /64 for a single site (i.e. single router or router-pair).

On an IPv4 network, DHCP is quite handy even for true servers, because it gives you a single point of control for the MAC/IP/hostname mapping. When you have to move a machine from one subnet to another, it means that you can take the machine down, update your DNS/DHCP tables, and restart the machine on the new subnet. You don't have to update anything on the machine itself. Autoconfig doesn't do that. I've looked a little at DDNS with DHCP, and from what I've been able to tell, the trust model appears to be rever

Autoconfig is nice for home networks and such. For the corporate world, DHCPv6 is far more useful.

Most people think of DHCP as just giving an IP address, mask, gateway, and DNS. DHCP can do SO much more. We're talking HUNDREDS of pieces of data, including custom strings. Want to tell your IP phone where the call manager is? DHCP. Want to tell your Netware clients where the nearest replica server is? DHCP. Still using WINS for some strange reason? DHCP.

Autoconfig is nice for the lazy admin, but for folks who want to keep track of where their IPs are going and want to deploy additional features, DHCP is the better option.

IPv6 Anycast returns the nearest server that supports the capability you want. True, you wouldn't use the router advertisement protocol, but there are major advantages to having lightweight protocols that can be added to as extra needs develop, as opposed to having one monolithic protocol that requires excessive space on the network and heavyweight processes to churn over.

Sounds like Anycast is similar to a service advertisement though. Wouldn't it make more sense to use a challenge-response mechanism like DHCP instead of a multicast? DHCP can take a bit of bandwidth, yes, but it only transmits when asked. That's why you see less and less service advertisement kind of stuff coming from Novell and Microsoft.

With anycast, a packet sent to the shared anycast address is routed to the nearest device that has that address configured, and that device replies. Thus, it does indeed find the nearest device without any device at all having knowledge of where such devices are. No hardcoding is required. There is no single point of failure. If a server goes down, then it doesn't respond and the next nearest is the one that responds. Thus, with anycast, if you have an address, it is a working address. DHCP offers no s

I started my open DHCPv6 implementation over 4 years ago. Once in a while, someone reports a bug or says that it works fine, so people are using it. The rate of adoption is not that great, but I've got feedback from 28 countries.
Anyway, that's hardly news. The basic DHCPv6 spec was published in 2003.
By the way: there's a small misunderstanding. Formally, the whole autoconf process in IPv6 is split into stateless and stateful (DHCPv6) parts.

Most people think of DHCP as just giving an IP address, mask, gateway, and DNS. DHCP can do SO much more. We're talking HUNDREDS of pieces of data, including custom strings. Want to tell your IP phone where the call manager is? DHCP. Want to tell your Netware clients where the nearest replica server is? DHCP. Still using WINS for some strange reason? DHCP.

Forgive me if I'm wrong, but that sounds less like "DHCP is awesome" and more like "Lazy devs have added extensions to DHCP rather than implement a proper auto-configuration protocol for their other services."

Forgive me if I'm wrong, but that sounds less like "DHCP is awesome" and more like "Lazy devs have added extensions to DHCP rather than implement a proper auto-configuration protocol for their other services."

What what what???

Instead of writing proprietary, incompatible protocols, developers have plugged their products into the industry standard, openly documented auto-configuration protocol, which was designed to be extensible. And they get called lazy?!

Want to tell your IP phone where the call manager is? DHCP. Want to tell your Netware clients where the nearest replica server is? DHCP. Still using WINS for some strange reason? DHCP.

So can it also tell my wife where her keys are? If so, I'll be adopting it right away.

(I've been looking for a key-chain gadget that combines GPS and wifi capabilities. I could write my own program that queries it and tells me where she left it. Then the only remaining problem would be the not-so-good accuracy, to within ab

From what I've been able to tell from the discussions on the IETF's IPv6 mailing list, it probably won't just be corporate networks going with DHCPv6. The greatest problem with IPv6 autoconfiguration (probably since its inception) is the fact that while you get a network address, you don't get any information about available DNS servers, which no modern IP node can do without in reality.

There have been a number of suggestions to solve that problem, of course, ranging from adding an extra field for DNS servers in the autoconfig ICMP messages, to using well-known unicast addresses for the closest recursive DNS server, to using a dedicated protocol just to discover DNS servers. The first and last of those have (rightfully, IMNSHO) been shot down because then one might "just as well" use DHCP, which exists and has a solution ready for the issue at hand. I cannot remember why the unicast suggestion has been rejected, though, and it has been bothering me, because I think it is the best solution. I really just cannot see the drawbacks to it. I guess there might have been some talk about lack of security in that model, but that's a problem with DNS in general. That's why DNSSEC was invented.

Last I looked, the consensus seems to be to use autoconfig for address generation, and then request network information (such as DNS servers) from a link-local DHCPv6 server. When everything comes around, I think that's a rather good solution. Clients can still get whatever non-occupied address they want (which means the privacy extensions will also continue to work), and still get the information they find relevant, and a DHCPv6 server should be easy to implement on a network of any scale.
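The hybrid described here (SLAAC for addresses, DHCPv6 only for "other information") rides on the stateless DHCPv6 Information-Request exchange from RFC 3315. A sketch of the wire format of that message, requesting the DNS recursive name servers option (option 23, RFC 3646); the transaction id here is an arbitrary example value:

```python
# Sketch: the wire format of a stateless DHCPv6 Information-Request
# (msg-type 11, RFC 3315): 1-byte type, 3-byte transaction id, then options.
# We attach an Option Request Option (code 6) asking for the DNS recursive
# name servers option (code 23, RFC 3646).
import struct

def information_request(txid: int) -> bytes:
    header = struct.pack("!I", (11 << 24) | (txid & 0xFFFFFF))
    oro = struct.pack("!HHH", 6, 2, 23)   # option 6, length 2, requesting 23
    return header + oro

msg = information_request(0xABCDEF)       # example transaction id
print(msg.hex())                          # 0babcdef000600020017
```

Because no address state is involved, the server answering this can be nearly stateless itself, which is why a DHCPv6 server for this role "should be easy to implement on a network of any scale", as the comment says.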

I'm deploying an IPv6 VPN at my company, and I've avoided the use of DHCPv6, as they have avoided DHCPv4 to make it less easy for people to attach to our networks. By using stateless autoconfig and not having DHCPv6, I can still restrict use of company nameservers to systems with our loadsets that have the DNS setup statically defined.

Note that that won't restrict use of your nameservers. It just means a rogue machine has to find out what the IP addresses of the nameservers are so it can configure them. That may be easy if the rogue machine is an unauthorized laptop belonging to a legitimate user who's got the configuration of his desktop readily to hand to copy information from.

And autoconfig pretty much makes it impossible to restrict access to the network at all. Autoconfig'd machines probably can't get through the router and may not

Autoconfig is a nice default for something that "just works" without much need for an admin to plan out the network, and DHCP is great for tighter control where needed. What's wrong with having both options available?

IPv6 may offer a range of new features over IPv4, but realistically, people will move to IPv6 for one of two reasons:

1. They have run out of IP addresses (remember, the 10.0.0.0 private network is pretty big!)
2. Everyone else is doing it.
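The "pretty big" aside in point 1 is easy to quantify with Python's `ipaddress` module; the three RFC 1918 private ranges together hold nearly 18 million addresses:

```python
# Quantifying the aside: the RFC 1918 private ranges really are big.
import ipaddress

for net in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    print(net, ipaddress.ip_network(net).num_addresses)
# 10.0.0.0/8 alone holds 2**24 = 16,777,216 addresses
```

That headroom, behind NAT, is exactly why point 1 bites so few organizations.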

Option 1 is only really going to be a problem for the really big firms, and they will be really careful. All those corporate apps need retesting with the new IP addresses, and that is a non-trivial exercise (think Y2K all over again, except you could do it piecemeal). It's a hard sell to the business: Mr. PHB, we'd like to spend a large amount of money retesting all the applications in Globocorp to use a new IP numbering scheme. Nope, you won't get any business benefit.

ISPs may force people onto IPv6 at some point, but that's only been an issue in South Korea so far. Everyone else still has enough IP addresses right now.

And until we get a critical mass of people going for Option 1, option 2 is a no go.

Isn't the headline and summary putting this entirely backwards? Now that DHCPv6 is available IN ADDITION to autoconfiguration, that's one MORE reason to adopt IPv6, or rather, one less reason to stick with IPv4. It's not like autoconfiguration suddenly can't or doesn't work now that DHCP is available as an alternative option.

Look, if we don't switch to IPv6 one of these days, then in 100 years from now an angry IT network sys admin is going to go insane with the mess we left him and invent a time machine and come back to blow us all up.

It is going to have to happen, and the longer we put it off, the more expensive it is going to get to replace all the equipment. Yes, NAT works, but it's like keeping an old road infrastructure in place that will at some point be more costly to maintain than to replace.

Methinks one reason IPv6 hasn't been adopted is because those who have chunks of the IPv4 space are quite happy having what is essentially an artificially precious resource.

Most people think the IP address space is "nearly full", but a handful of companies are sitting on prime real estate (never mind that there is a huge amount of "reserved" space which is not in use). For example, why do the following companies have entire Class A's to themselves?

It all seems a little premature to me. Both sides have benefits and pains. But IPv6 (which I remember being chosen as "the standard" we'd go with moving forward back in 1994 or '95?) is seriously at the point where IPv4 was when the internet was nothing more than a research network used by universities.

DHCPv6 has a number of advantages for a corporation; where it makes sense in the network and where it doesn't will still remain much the same.

Cisco IOS's many integrations into dhcp v6 are interesting, but so much of it

DHCP (and BOOTP and RARP before it) served primarily the purpose of letting an otherwise unconfigured node discover which IPv4 address it should use. Along with that, DHCP could deliver more information important to the node as well - default router, DNS information and so on. With IPv6, the basic networking information is automatically configured when a node connects to the network. DHCP's purpose in such environments is to allow unconfigured nodes, once they've configured IPv6, to discover things like DNS

So there are huge swaths of IP space tied up in entities which don't need anywhere near as many addresses as they did before NAT. If ARIN's requirements for usage were enforced, we might be fine for the next 10 years. Anyone with a Class A needs to figure out what they're doing and return some major swaths of IP space:

1.0.0.0/8 - IANA
2.0.0.0/8 - IANA
3.0.0.0/8 - GE
4.0.0.0/8 - Level 3
5.0.0.0/8 - IANA
6.0.0.0/8 - DoD
7.0.0.0/8 - DoD
8.0.0.0/8 - Level 3
9.0.0.0/8 - IBM
10.0.0.0/8 - NAT (we all love it)
11.0.0.0/8 - DoD
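Rough arithmetic on the /8 blocks listed above: each /8 holds 2**24 addresses, so even this short list ties up a noticeable slice of the entire 32-bit space:

```python
# Rough arithmetic on the listed /8 blocks: each /8 holds 2**24 addresses,
# so eleven of them tie up a sizeable slice of the 32-bit IPv4 space.
blocks = 11
per_block = 2 ** 24
total = blocks * per_block
print(total)                    # 184,549,376 addresses
print(total / 2 ** 32)          # about 4.3% of all IPv4 space
```

And that is just the first eleven /8s; the full legacy Class A allocations cover far more.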

I'm not sure if autoconf and IPv6 addresses in general have too much or too little magic. Admittedly, these are in part Linux implementation bugs, but the kitchen sink nature of v6 sure isn't helping. If I can't reach anything by v6, I don't really care that I can secure my connection to nowhere with crypto.

On the too much magic side, I announced 5001::/64 on my LAN at home as a test. Yet when I try to go to a v6-enabled site, Firefox tries to use v6, even though having 5001::/64 in no way implies that I can reach 2001:: from here.

On the too little magic side, some sites have more than one router (it's called a private network; some people don't care to splatter their private business all over the world). I tried setting that up with both overlapping and non-overlapping prefixes. Neither worked at all. The machines bounced from one to another, so they didn't even keep a consistent address. Good thing I had v4 so I could fix it without hopping from machine to machine. Possible good behaviours might have been:

Just pick one and use it

Actually assign both and just pick one when there is an overlap otherwise use the most specific route.
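The "most specific route" behaviour suggested above is just longest-prefix match over the advertised prefixes. A minimal sketch, using made-up documentation prefixes:

```python
import ipaddress

# Sketch of "assign both prefixes, pick the most specific match per
# destination". The prefixes here are illustrative (2001:db8::/32 is the
# documentation range), standing in for prefixes learned from two routers.

def pick_prefix(prefixes, destination):
    dest = ipaddress.ip_address(destination)
    matches = [p for p in map(ipaddress.ip_network, prefixes) if dest in p]
    if matches:
        # Longest prefix length wins when prefixes overlap.
        return str(max(matches, key=lambda p: p.prefixlen))
    return prefixes[0]  # no match: arbitrary but at least consistent

prefixes = ["2001:db8::/32", "2001:db8:aa::/48"]
print(pick_prefix(prefixes, "2001:db8:aa::1"))  # 2001:db8:aa::/48
print(pick_prefix(prefixes, "2001:db8:bb::1"))  # 2001:db8::/32
```

Either rule would at least keep a host's address stable instead of bouncing between routers.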

The success or failure of v6 will be in client support. Nobody is going to accept a v6 only web server if clients can't reach it. Many of those clients have v4 only ISPs with dynamic IPs. They will not want their entire lan to renumber every time they get a new dynamic assignment. So, they will want a non-routable prefix for local to local traffic. Too bad the silly machines will try to use that for non-local traffic!

Admins are often busy people. They don't want to devote their professional life to a v6 rollout, especially when practically nothing out there is reachable by v6 yet. If it's not dead simple it won't happen at all. I tried dead simple and as a result some sites became unreachable. I killed it again and they came back (falling back to their v4 addresses).

Link local addresses don't seem to work AT ALL. I see the route but I get EINVAL if I try to ping.

I suppose my next attempt will be to rip all the autoconfig stuff out of the kernel and implement a userspace daemon. It doesn't really belong in the kernel anyway. That's what initramfs is for.

In general, this business of having to add 6, -6, -A inet6, or other junk to the command line, or appending a 6 to the name of the utility, has got to go. It's one thing to need a software update to handle v6; I can understand that older software might not even know v6 exists. But the v6 version has no excuse for not knowing v4 exists: ping6 192.168.2.1: unknown host.

All of this tells me that v6 still has the status of a toy to play around with, not a supported standard ready for worldwide use. That's fine as far as it goes, but it's not exactly making people anxious to get on with the upgrade.

It's been over a decade now. Let's quit pretending it's just inertia and try to address the real adoption barriers while there are still v4 addresses left.

My advice:

Just forget about scope. Prefix lengths are more than adequate for making routing decisions.

Quit appending a 6 to everything (this is not marketing!). That includes structs and function calls in the library. For the most part, a v6 address as ASCII is distinguishable from a v4 one. Likewise, in binary, 4 octets followed by nulls (or nulls followed by 4 octets) is a v4 address. An app that won't run when the size of sockaddr_in changes was wrong to begin with. If you must, make -DV4ONLY use the old v4-only structs etc. so broken source will compile and run. Done right, a number of old but well-written programs could support v6 just by recompiling.
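The version-agnostic lookup being argued for already exists in getaddrinfo-style APIs: one code path accepts a v4 literal, a v6 literal, or a hostname, and returns the right family. A quick sketch:

```python
import socket

# One resolution path for both protocols: AF_UNSPEC lets getaddrinfo
# return whichever family matches the input, so no ping-vs-ping6 split.

def resolve(host):
    infos = socket.getaddrinfo(host, None, socket.AF_UNSPEC,
                               socket.SOCK_STREAM)
    family, _, _, _, sockaddr = infos[0]
    kind = "IPv6" if family == socket.AF_INET6 else "IPv4"
    return kind, sockaddr[0]

print(resolve("192.168.2.1"))   # ('IPv4', '192.168.2.1')
print(resolve("2001:db8::1"))   # ('IPv6', '2001:db8::1')
```

A tool built on this would never say "ping6 192.168.2.1: unknown host"; it would just fall through to v4.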

Allow multiple router announcements and behave gracefully. Use the prefix that best matches the destination.

Simpler handling of 6to4 and 4to6 translations. With so many people using APs and other routers for a broadband connection now, automatic en/de-capsulation in IPv4 can go a long way with very little configuration and a few simple rules.
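The "very little configuration" point holds for 6to4 in particular, because per RFC 3056 a site's /48 prefix is derived mechanically from its public IPv4 address by prepending 2002::/16. A sketch of that derivation:

```python
import ipaddress

# 6to4 prefix derivation (RFC 3056): 2002:VVVV:VVVV::/48 where VVVV:VVVV
# is the site's public IPv4 address. The address below is an example from
# the documentation range, not a real endpoint.

def sixto4_prefix(ipv4: str) -> str:
    packed = ipaddress.IPv4Address(ipv4).packed      # 4 octets
    prefix = b"\x20\x02" + packed + b"\x00" * 10     # pad to 128 bits
    return str(ipaddress.IPv6Network((prefix, 48)))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Since the prefix falls out of the IPv4 address, a home router can pick it with no operator input at all, which is exactly the low-configuration encapsulation path described above.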

Link local addresses are only valid when coupled with an interface name. The reason why is rather obvious.

As in assigned to an interface? It is (or rather they are) assigned to an interface and have (automatic) routing entries. Manual manipulation of the routing table doesn't help. Smells like hard coded policy implemented in kernel code (bad form).

Why would my belief that autoconfiguration belongs in userspace indicate a lack of understanding?

No, you need to specify which interface you're talking about when using link local addresses. Sit back and think for a moment. How is the kernel supposed to guess which interface a link local address is on? Do you want it to just spit the packets out all interfaces?
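This is visible right in the sockets API: the sockaddr for an IPv6 address carries a scope_id field naming the interface, and a link-local literal needs a %interface suffix to fill it in. A sketch (assuming a Linux-style loopback interface named "lo"):

```python
import socket

# The same fe80::/10 address can exist on every link, so the kernel needs
# a scope_id to know which interface is meant. The %lo zone suffix below
# assumes a loopback interface named "lo"; the address itself is arbitrary.

infos = socket.getaddrinfo("fe80::1%lo", None,
                           socket.AF_INET6, socket.SOCK_DGRAM)
addr, port, flowinfo, scope_id = infos[0][4]

# The suffix resolved to the interface's index; without it, scope_id
# would be 0 and the kernel has no way to pick an outgoing link.
print(scope_id == socket.if_nametoindex("lo"))
```

That unresolved ambiguity is also a plausible source of the EINVAL the grandparent saw when pinging a bare link-local address.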

Why would my belief that autoconfiguration belongs in userspace indicate a lack of understanding?

The fact that you think that a link-local address is valid without an interface name indicates a lack of understanding.

I'd be reluctant to go without IPv6 autoconfig. I run a DHCP server for IPv4 and enabling IPv6 wouldn't be too much hassle, but it's a great deal nicer not to have to. Arguments re shifting addresses are bogus, since you'll usually allocate addresses for services/network entities when you wish them to remain persistent anyway, just like you currently reserve static IPs in DHCP for servers.

I'm not too happy that no agreement has been reached on anycast DNS, though. The lack of a mechanism for clients to disc

Of course they'll skip "stateless autoconfiguration." Even if it could be upgraded to provide enough information to the client (such as the location of the DNS servers), the fact remains that the autoconfigured IP address is selected by running a reversible algorithm on the MAC address. This means that in every communication you're advertising to the entire network:

1. What manufacturer made your NIC chipset
2. With high probability, which NIC chipset is in use
3. With high probability, which firmware revision is
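The reversible algorithm in question is the modified EUI-64 construction (RFC 4291, appendix A): insert ff:fe into the middle of the MAC and flip the universal/local bit. A sketch with an example MAC shows why the vendor OUI leaks straight into the address:

```python
import ipaddress

# Modified EUI-64 interface ID: MAC with ff:fe inserted in the middle and
# the universal/local bit (0x02 in the first octet) flipped. The MAC below
# is an example value; its first three octets (the OUI) identify the NIC
# vendor and survive intact in the resulting address.

def slaac_address(prefix: str, mac: str) -> str:
    octets = bytes(int(b, 16) for b in mac.split(":"))
    eui64 = (bytes([octets[0] ^ 0x02]) + octets[1:3]
             + b"\xff\xfe" + octets[3:])
    net = ipaddress.IPv6Network(prefix)
    return str(ipaddress.IPv6Address(net.network_address.packed[:8] + eui64))

print(slaac_address("2001:db8::/64", "00:11:22:33:44:55"))
# 2001:db8::211:22ff:fe33:4455
```

Anyone seeing that address can run the construction backwards and recover the MAC, which is the privacy leak being described. (This is also why privacy extensions, which randomize the interface ID, were later added.)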

What's news is that we're still dragging our heels on IPv6. We dodged the bullet once by developing and widely deploying NAT and at the same time reclaiming large amounts of unused address space via switching core routing to CIDR. However, that trick only bought us a certain amount of time. As the world becomes increasingly connected, we're going to face the same problem again. Why are we waiting until it's a crisis to deal with it?

So the accepted wisdom became that the whole thing was just an alarmist fiasco that chewed up a bunch of money unnecessarily. They don't realize that y2k was no problem precisely because of all the noise. A LOT of people did a lot of planning and a lot of work, and that all paid off in how few problems there really were.

But the common man, and unfortunately the common leaders don't understand that. So now y2k was a so-called crisis, wasn't a problem, and we can approach our next so-called crisis without the extensive preparation we "wasted" on y2k. Oh boy!

Because M$ does not currently support DHCP static lease address assignment in their IPv6 implementation. Many large enterprises like to have their primary servers at a certain known address, e.g. "10.1.2.3" for the proxy, "10.1.2.5" for the file server, etc. Sorry, no can do in a windoze IPv6 environment. MS isn't the only culprit. Don't get me started on the debacle they called "site-local". Every place says it's deprecated, but precious few will tell you what to use as a replacement or the best practice for its use.

Ironically, the longer you wait to deal with it, the cheaper it may be!

There are some obvious reasons why waiting longer makes it cost more, but there are quite a few subtle reasons why it's cheaper to wait. For example:

1) If your current hardware is not IPv6 capable and you buy new IPv6-capable hardware now, it may reach end-of-life before you need the IPv6 capability.

2) IPv6 routes take more memory than IPv4 routes. The longer you wait, the cheaper it will be to add this memory. (Note that we're not just talking cheap main memory, we're talking expensive CAM and custom chip memory.)

3) Research and development are constantly progressing. The longer you wait, the better researched the solution you ultimately deploy may be. (To a limit, of course. You also lose the chance to gain experience.)

On balance, I think we're progressing at a sensible pace, perhaps a bit slower than perfect. People are continuing to do test deployments to see how IPv6 will work and make sure they'll be able to implement it for real when the demand comes. But they're not wasting money replacing working hardware or increasing network instability on the real, live Internet we all depend on for our daily (hourly? half-hourly?) /. fix.

Thank you. I had been wondering how to get DNS to Just Work [TM] over an autoconfigured v6 interface for a while. I had been falling back to using the dhclient-enter-hooks file to stop a rewrite of resolv.conf on v4 lease acceptance and manually applying my nameservers' v6 addresses. This is the problem with any "new" technology that replaces something ubiquitous; the accepted ways of doing things sometimes no longer apply. It doesn't help that OpenBSD's dhclient dropped support for many options that I succ

The idea is that the Internet might establish that a particular anycast address is the logical address of the DNS server. Then host software could be configured at the manufacturer to always send DNS queries to the DNS anycast address. In other words, anycasting could be used to support autoconfiguration of DNS resolvers.