Posted by Soulskill on Sunday November 30, 2008 @10:35AM
from the getting-things-done dept.

ruphus13 points out news from the Linux Foundation, which announced that all major Linux distributions meet certification requirements for the US Department of Defense's IPv6 mandates. The announcement credits work done by the IPv6 Workgroup, whose members include IBM, HP, Nokia-Siemens, Novell and Red Hat. Quoting:
"Linux has had relatively robust IPv6 support since 2005, but further work was needed for the open source platform to achieve full compliance with DoD standards. The Linux Foundation's IPv6 workgroup analyzed the DoD certification requirements and identified key areas where Linux's IPv6 stack needed adjustments in order to guarantee compliance. They collaboratively filled in the gaps and have succeeded in bringing the shared technology into alignment with the DoD's standards."

Well, Apple and MS have had some IPv6 support for a while, but there are shades to how much support. I believe IPv6 was available in Linux before MS or Apple (since 1996). However, it was deemed "experimental" until 2005, even though it worked well enough for most people and distros. MS has had limited IPv6 starting with Win2K and shipped some IPv6 support with XP in 2002. As for DoD compliance, only Vista with SP1 is partially compliant [disa.mil], and OS X does not appear to have been tested.

Until Vista, SMB/CIFS didn't support IPv6, so sharing resources over an IPv6 local network didn't work. On top of that, 2005 is the year the "experimental" status was removed. In fact, that status was rather conservative; many distros routinely ship kernels with experimental options enabled (e.g. the tickless kernel, the WMI drivers, etc.).

Apple didn't spend much at all. They use the KAME stack, which was developed by a consortium of Japanese companies for BSD-family systems. It was started in 1998 and achieved full compliance in 2006. Apple just pulled in the code and merged it. Since it already ran on BSD/OS, FreeBSD, NetBSD, OpenBSD and DragonflyBSD, this was not a huge undertaking.

The support in Win2K was an experimental add-on published by Microsoft Research; it was never an official feature. It was XP which first introduced support in the base distribution, but it was not turned on by default, and if autoconfiguration didn't work you had to use the CLI tools to configure it. Also, it wouldn't do DNS over IPv6, so you still needed IPv4 connectivity for your DNS at least.
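For reference, enabling and manually configuring the XP-era stack went roughly like this (a sketch from memory of the XP/SP1-era tooling; the interface name and address are placeholders):

```shell
# Install the IPv6 stack on Windows XP (not present by default)
ipv6 install
# or, on later service packs, via netsh:
netsh interface ipv6 install

# Manually add an address when autoconfiguration fails
netsh interface ipv6 add address "Local Area Connection" 2001:db8::2
```

Note there was no GUI for any of this, which is part of why the support counted as "limited".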

Linux had support a lot earlier, as you pointed out, as did Digital Unix (aka Tru64 Unix); the BSDs got support fairly early too. It was only marked experimental because there was really no other reason to use it: you could pretty much only get tunneled IPv6 from a free tunnel broker, with no guarantee you would keep the addresses, etc. In terms of functionality, the stack worked great even in the 2.2.x kernel. IPv6 has long been popular on IRC because you can create vanity hostnames more easily, and it's a little harder for some of the script kiddies to DoS you.

I always thought NAT was a good solution from a security perspective for most homes and organizations.

It does help against some security problems, but it also introduces new security problems (for example DNS is sometimes done from a random port to help against poisoning, but if that goes through a NAT the random port is replaced with a non-random port). And the workarounds needed because of NAT are not improving security either. They make software more complicated for no good reason, and more complicated means more bugs, including security vulnerabilities.
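The DNS source-port problem is easy to see in miniature. This is a toy Python sketch (the function names and the sequential-allocation behavior are illustrative assumptions, not a model of any particular router): the resolver picks random source ports to resist cache poisoning, and a naive NAT rewrites them sequentially, destroying that entropy.

```python
import random

def resolver_source_ports(n):
    # A resolver randomizing its UDP source port per query (anti-poisoning)
    return [random.randint(1024, 65535) for _ in range(n)]

def naive_nat(ports, base=30000):
    # A naive NAT handing out external ports sequentially,
    # discarding the randomness the resolver worked to create
    return [base + i for i, _ in enumerate(ports)]

inside = resolver_source_ports(5)   # hard to guess
outside = naive_nat(inside)         # trivially predictable: 30000, 30001, ...
```

An attacker off-path only ever sees the `outside` ports, so the resolver's defense is silently neutered.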

NAT forces the router to do connection tracking, and it is also forced to filter out incoming packets that don't match a known connection. The security it provides is a side effect, not a design goal. You can do all the connection tracking and filtering without translation; that way you get the benefits without the drawbacks. The vendors just have to start making routers that support IPv6 and do connection tracking and filtering by default. Apple already makes routers that will do 6to4 tunneling by default; I don't know if they also do connection tracking and filtering on IPv6 by default.
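To make "filtering without translation" concrete, here's an illustrative rule set for a netfilter-based IPv6 router (the interface name and address are placeholders, and this is a sketch, not a hardened config):

```shell
# Drop all forwarded traffic by default -- no address translation involved
ip6tables -P FORWARD DROP

# Allow replies to connections initiated from the inside
ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow anything originating from the LAN side
ip6tables -A FORWARD -i eth1 -j ACCEPT

# Explicitly open a hole for one internal host's web server
ip6tables -A FORWARD -d 2001:db8::80 -p tcp --dport 80 -j ACCEPT
```

Same stateful behavior a NAT box gives you by accident, but every host keeps its real address and end-to-end connectivity is a policy decision, not a side effect.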

There's support and there's support. The first OS to have certified DoD-compliant IPv6 support (what this topic is about) was Vista. Solaris 10 came second. Neither had IKEv2 capability. Then came Novell and Red Hat, both with IKEv1 and IKEv2.

So it's not only a neck-and-neck race: you can be first (to certify at all), and you can be first with IKEv2.

> What happens if NAT is used all over the place? You could imagine a bunch of subnets that use one address to the outside world but have hundreds or thousands of machines internally.

It *is* used all over the place. It's even used on an ISP-wide scale (expect that to become more common in the west). NAT delayed IP address exhaustion for a few years, a few years ago. The current rate of IP usage is what's happening *with* widespread use of NAT.

> There's a lot to be said for NAT from a security point of view too. Since you need to open up holes manually for incoming services, incoming connections for anything else will be blocked which makes it impossible for people to exploit most security flaws on the machines behind the router.

You can get all of that from a stateful firewall that blocks inbound connections by default.

> Reading between the lines it seems like IPv6 was a revolutionary solution to running out of address space. NAT was an evolutionary one. As usual the market has picked the evolutionary solution and more purist types are whining about it.

NAT isn't a solution at all, it's a way to delay the inevitable. It has successfully done that, into approximately 2011-2012. What it doesn't do is change the fundamental problem, it's not possible to use it *enough* to hold off exhaustion indefinitely.
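The arithmetic behind "delay, not solve" is stark. A back-of-the-envelope comparison (the 1000-hosts-per-NAT figure is an arbitrary illustration, not a measurement):

```python
# IPv4 offers 2**32 addresses; IPv6 offers 2**128.
ipv4_total = 2 ** 32                 # 4294967296, ~4.3 billion
ipv6_total = 2 ** 128

# Even if every public IPv4 address fronted a NAT hiding 1000 hosts,
# the reachable total only grows by a small constant factor:
natted_ipv4 = ipv4_total * 1000      # 4294967296000, ~4.3 trillion

# IPv6 isn't a constant factor bigger -- it's 2**96 times bigger:
ratio = ipv6_total // ipv4_total     # 79228162514264337593543950336
```

A constant multiplier buys years; an extra 96 bits of address space removes the problem.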

Breaking end-to-end connectivity isn't the primary concern. This has already largely happened with NAT, and will continue to happen to a certain extent with IPv6 because we'll be using stateful firewalls. We can deal with this for most home users.

The problem is that NAT still consumes IPs, and other hosts like servers really do need to be reachable. The market prefers NAT now because exhaustion hasn't happened yet, and as the last few months have demonstrated, the market is remarkably good at ignoring problems for as long as possible.

Purist types *are* whining about it. But pragmatic types like me are also concerned that people like you seem to think NAT is something we can use later as a solution, when we've already been using it for years as a way to buy time.