
In hindsight, we reached peak IPv4 two years ago. The good news is that IPv6 is doing very well—but not nearly well enough. Is the IPv6 glass 1 percent full or 99 percent empty?

"Hi, I'd like to sign up for Internet service at my new apartment."

"That's great! We have the highest speeds at the best prices, you won't be disappointed. But unfortunately, last week Europe ran out of IPv4 addresses. We still have plenty of IPv6, though."

"IPv6? So I can use that to visit all my favorite websites, use IM and VoIP, download podcasts, and watch videos?"

"Well..."

Luckily, I escaped this conversation when recently signing up for an Internet connection. But if I move again next year or even the year after, I could end up with a faster Internet connection that is less functional, because it will no longer let me connect to every other Internet user. All because we ran out of numbers, which don't even cost anything. Sadly, not having them will cost us a lot of time, money, and effort as some cling to IPv4 and others adopt IPv6—by choice or otherwise—over the next few years.

Where we stand

First, let's look at IPv4. Five Regional Internet Registries (RIRs) give out IP addresses in different parts of the world. APNIC (Asia, the Pacific, and Australia) ran out in April of 2011, and this past September the RIPE NCC (Europe, the Middle East, and the former Soviet Union) did the same. As a result, the number of IPv4 addresses given out this year is about a third of what it was in 2010: only 80 million.

(The statistics are derived from files the five RIRs publish on their FTP sites every day. However, the ARIN numbers didn't look right, so I replaced them with those found here. This also changes the earlier reported totals for previous years.)

So ISPs and other users of IPv4 addresses in the RIPE and especially APNIC regions have to make do with whatever is left in their own pipelines, which typically hold six months' to two years' worth. Beyond that, they can request one final block of 1,024 addresses from their RIR, or they have to do some trading.

ARIN (North America) has 45 million addresses left and gave out 24 million IPv4 addresses this year. So barring unforeseen events, ARIN will be in a situation similar to those of APNIC and the RIPE NCC in the first half of 2014. LACNIC (Latin America and the Caribbean) also has about a year and change until IPv4 addresses run out, while AFRINIC (Africa) has enough for several more years.

Back in 1994, Christian Huitema (then at INRIA, later Microsoft) looked into networks running out of address space, coming up with an "HD ratio." This is the logarithm of the number of systems connected to the network divided by the logarithm of the number of possible addresses. Experience with several different networks showed that an HD ratio of up to 80 percent was reasonable, 85 percent painful, 86 percent extremely painful, and 87 percent the practical maximum.

According to the ISC Domain Survey, there were 909 million systems present in the Domain Name System (DNS) as of July 2012 (resulting in an HD ratio of 93 percent). Obviously, that is well beyond that practical maximum. In this sense, IPv4 is like a tube of toothpaste that's almost empty: every day, if you squeeze hard enough, a little more will come out. But at some point it's easier to just buy a new one.
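That 93 percent figure is easy to check yourself. A quick sketch using the article's numbers (only the formula comes from Huitema's work; the rest is illustration):

```python
import math

def hd_ratio(hosts, address_bits=32):
    """HD ratio: log of systems in use divided by log of possible addresses."""
    return math.log(hosts) / math.log(2 ** address_bits)

# The ISC Domain Survey's 909 million systems against IPv4's 32-bit space:
print(f"{hd_ratio(909_000_000):.0%}")  # → 93%
```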

Now, with IPv4 in decline, surely IPv6 must be ready to pick up the slack? Yes and no. Yes, IPv6 is doing incredibly well compared to even one or two years ago, but... it's not enough.

IPv6 refresher

IP addresses are 32 bits in size, which means there can be some four billion of them. In the early 1990s, the Internet Engineering Task Force (IETF) realized the Internet was growing toward a size that requires more than four billion addresses. Increasing the size of IP addresses required modifications to the layout of IP packets, which meant that all systems that handle IP packets—in other words, everything connected to the Internet—must be upgraded. To be on the safe side, the new system uses an address length of no less than 128 bits, allowing a mind-boggling number of addresses:

340,282,366,920,938,463,463,374,607,431,768,211,456
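That figure is simply 2 to the 128th power, which is easy to verify:

```python
# 128-bit addresses give 2**128 possibilities -- the figure quoted above.
total = 2 ** 128
print(f"{total:,}")
# → 340,282,366,920,938,463,463,374,607,431,768,211,456

# Put differently: 2**96 new addresses for every single IPv4 address.
print(total // 2 ** 32 == 2 ** 96)  # → True
```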

The existing Internet Protocol carries version number four. Five was already taken by the experimental Internet Stream Protocol (ST), so the new version got six; hence IPv4 and IPv6. In addition to the larger addresses, IPv6 differs from IPv4 in a number of aspects, so the IPv4 ways of doing things don't always translate one-to-one. But IPv6 is still IP, and it can fulfill the same functions as IPv4. Just on a much larger scale.

IPv6 by the end of 2012: 1 percent full or 99 percent empty?

2012 was a good year for IPv6. NetApp's Lars Eggert has been measuring how many of the top 500 websites have IPv6 enabled. After last year's World IPv6 Day and this year's World IPv6 Launch, we're now at around 10 percent for the top 500 sites in Finland, Germany, India, Japan, South Korea, the UK, and the US. China is lagging behind at 4.8 percent. And of the worldwide top 500 sites, 22.4 percent have an IPv6 address in the DNS, up from eight percent a year ago. However, of the Alexa top one million websites, only five percent have an IPv6 address in the DNS.

According to Google's measurements (Flash required), currently about one percent of its users are able to reach those IPv6-enabled websites over IPv6, up from 0.4 percent a year ago and 0.2 percent two years ago. So the rate at which Google's users are taking up IPv6 has increased from a factor of two in 2011 to a factor of 2.5 in 2012. If we can stick with that factor of 2.5, the entire Internet will have IPv6 by the end of 2017. Of course, this type of growth tends to slow down as it approaches 100 percent.
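The arithmetic behind that 2017 estimate is a straightforward compound-growth projection. A back-of-envelope sketch, assuming the roughly 1 percent share at the end of 2012 and a sustained yearly factor of 2.5 (which, as noted, is unlikely to hold all the way):

```python
# Compound growth: ~1% of Google users on IPv6 at the end of 2012,
# multiplied by 2.5 each year until essentially everyone has it.
share, year = 0.01, 2012
while share < 0.95:          # treat 95%+ as "the entire Internet"
    share *= 2.5
    year += 1
print(year, f"{share:.2f}")  # → 2017 0.98
```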

More evidence that IPv6 is taking off can be found in a paper on measuring the deployment of IPv6. Researchers at the Cooperative Association for Internet Data Analysis (CAIDA) observe that after years of linear growth, IPv6 deployment across the autonomous systems (mostly ISP networks) that make up the Internet started to go up along an exponential curve around 2008. The growth of IPv4 autonomous systems, on the other hand, had been exponential until about a decade ago; it has been linear since. Note the slightly different scales in the figure, though: 40,000 IPv4 ASes versus 4,500 IPv6 ASes.

Last but not least, there are actual IPv6 traffic statistics. Akamai's IPv6 statistics show the content network has 0.8 percent IPv6 hits in North America, 0.3 percent in Europe, and less than 0.1 percent elsewhere. The 0.3 percent number is similar to the amount of IPv6 traffic at two of Europe's big Internet Exchanges: AMS-IX in Amsterdam and DE-CIX in Frankfurt. AMS-IX IPv6 traffic has always been relatively high, but DE-CIX IPv6 traffic has increased from 1 to 5 Gbps in the past twelve months.


Iljitsch van Beijnum
Iljitsch is a contributing writer at Ars Technica, covering network protocols as well as Apple topics. He is currently finishing his Ph.D. work at the telematics department at Universidad Carlos III de Madrid (UC3M) in Spain. Email: iljitsch.vanbeijnum@arstechnica.com // Twitter: @iljitsch

298 Reader Comments

It's a shame that quite a few people who have problems with IPv6 just end up disabling it instead of diagnosing the problem.

Personally I've found IPv6 a joy to use - once you understand it. If you stay in an IPv4 mindset and consider the NAT box to be your firewall which secures everything and that port forwarding is trivial, then IPv6 won't offer any benefits for you.

End-to-end connectivity is great: I can connect to my home machines while at work without having to deal with port forwarding, VPNs, or conflicts between the private IP ranges of my home and work LANs. I can just SSH directly to the hostname and get in quickly and easily.

Then there's the advantage of the large address space and being able to do your own routing within your own /48 if you need to. Virtual machines? No problem, just route a /64 to that specific server.

Could I do this with IPv4 on a typical ISP? No.
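The subnetting arithmetic the poster describes can be sketched with Python's ipaddress module. The 2001:db8::/48 prefix is the reserved documentation range (RFC 3849), standing in for whatever /48 an ISP might delegate:

```python
import ipaddress

# A /48 delegation, using the IPv6 documentation prefix as a stand-in.
site = ipaddress.ip_network("2001:db8::/48")

# A /48 holds 2**(64-48) = 65,536 /64 subnets, each with 2**64 addresses,
# so routing a /64 to one server is no hardship at all.
print(site.num_addresses == 2 ** 80)             # → True
first_lan = next(site.subnets(new_prefix=64))
print(first_lan)                                 # → 2001:db8::/64
```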

The era of easy IPv4 addresses is coming to an end. At some point it's going to be increasingly difficult to get IPv4 addresses from an ISP, from your dedicated server host, etc. You'll be behind an ISP's NAT, and port forwarding isn't going to save you there.

Quite frankly, the whole move to IPv6 doesn't amount to a hill of beans when the majority of router vendors flat out refuse to provide firmware updates to add IPv6 functionality to existing routers, let alone ensure that routers shipping today have IPv6 out of the box. Btw, for the record, my ISP is giving out IPv6 addresses; it just so happens that most of the router vendors are lazy.

Two words, DD-WRT & Tomato.

I have a Belkin DB600 ADSL Modem/Router (F9J1102 v1) which pretty much makes me SOL when it comes to DD-WRT and Tomato.

Wireless tech can't get faster? That's why 802.11ac is 1Gbps instead of a max of 150Mbps like 802.11n, right?

It can get faster, but there are stricter limits and higher costs on when that speed is attainable. Shannon's capacity theorem is useful for understanding this limit, and it contains a clearly marked "impossible region" you'd be interested in.

Raising capacity further comes at increasing cost, which isn't trivial in the wireless space.

For example, 1Gbps throughput for 802.11ac requires the use of 256QAM rather than 64QAM, which has a far higher likelihood of symbol (bit) errors. You can counter this by raising the power output, but that obviously isn't too viable in wireless devices. (And yes, 802.11ac jumps down to 64QAM and 16QAM when its SNR is inadequate, just like 802.11n.)

In addition, the bandwidth requirement for a single 1Gbps stream is 160MHz. Compare that to the standard 20MHz used today.

For these reasons, 802.11ac is a 5GHz-only technology, not 2.4GHz, which is the effective norm for home networks.

An alternative to increased bandwidth is additional spatial streams (MIMO), but again that means multiplying the power output and occupied space, as each stream requires its own antenna.
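The Shannon limit mentioned above is C = B * log2(1 + SNR). A sketch with illustrative numbers (the 30 dB SNR is an assumption for the example, not a measured figure):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# 256QAM carries 8 bits per symbol versus 64QAM's 6 -- the price is
# needing a much cleaner signal to tell 256 constellation points apart.
print(math.log2(256), math.log2(64))      # → 8.0 6.0

# A 160 MHz 802.11ac channel at a (generous, assumed) 30 dB SNR:
snr = 10 ** (30 / 10)                     # 30 dB → linear ratio of 1000
print(f"{shannon_capacity(160e6, snr) / 1e9:.2f} Gbit/s")  # → 1.59 Gbit/s
```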

Grr, Negative4 doesn't understand how networking works, and your analogy indicates you don't understand how Government spending works. The debt ceiling is about current debt, i.e. debts that are already owed due to some past-tense spending that was authorized. In your analogy, the root problem that needs fixing is the pork barrel and other over-spending that causes the debt in the future to increase to the point that the ceiling needs to be increased too.

Negative4, just because you start some normal download or Netflix stream doesn't cause your entire ISP-provided bandwidth allocation to be consumed. Netflix uses a max of about 5mbps, so if you're using the 1mbps connection you mentioned, you're going to see congestion (well if Netflix didn't back it down). But if you've got a 100mbps connection you've still got 95 or so left. Priority doesn't help for any other traffic you might want to start at that point since there's nothing to prioritize. (Although I suspect you may be saying you're downloading dozens of torrents or something silly that truly does max out all your available bandwidth, while simultaneously trying to play a game. If so, who cares, save your movie stealing for another time.)

Yeah netflix uses 5 mbps now but in the future it'll be more. Either way I run into bandwidth problems frequently and so do a lot of people I know, it's definitely not a minority thing. Bandwidth needs to be prioritizable.

Just had a look at my stats at home over the last month, and it looks like at least some amount of traffic has been going over the tunnel (no native IPv6, nor plans of it on Cablevision). My v4 traffic was 57GB down/15GB up, and my v6 was 6GB down/1GB up. I've been very happy with he.net's tunnel service, I really have no clue where the traffic is going unless it's something where I see my own IP flashed at me like a webpage that announces "you're using IPv6!" or logging into an IRC server or something where my IP is echoed back at me. I certainly don't notice it due to lag or lower bandwidth.

On the downside, this does make me yearn for the internet of old. Back in the '90s, you'd have seen all the ISPs that were implementing something new like this actually discussing the challenges publicly and sharing information. Now, it's everyone for themselves lest they lose a competitive advantage, tarnish their image by asking something stupid on a mailing list, or get spanked by their legal department. So now smaller shops that could learn something from the large ISPs are in the dark as to all the little practical issues. I'm in the middle of turning up v6 for a small ISP and it's so depressing that there are no good "best practices" documents around, or "securing IOS for IPv6", or "Gigantic fuckups to avoid when first enabling IPv6 on IOS".

Now the question is: in his new apartment, did the author of this article choose a full dual stack IPv4 + IPv6 connection from a great ISP, or did he choose an IPv4-only connection from another, lagging provider ... ? ;-)

I think that if we don't reward hardware, software and service providers for their IPv6 steps with our money, it's not going to happen any sooner.

Yeah netflix uses 5 mbps now but in the future it'll be more. Either way I run into bandwidth problems frequently and so do a lot of people I know, it's definitely not a minority thing. Bandwidth needs to be prioritizable.

Negative4, can you elaborate a bit? How fast is your connection, and what do you mean by "bandwidth problems"? What problems are you experiencing?


Yeah netflix uses 5 mbps now but in the future it'll be more. Either way I run into bandwidth problems frequently and so do a lot of people I know, it's definitely not a minority thing. Bandwidth needs to be prioritizable.

No, for the love of god, no, Negative4, you really do not know your networking fundamentals here at all, as has been stated before. QoS is a band-aid around misconfigured switch/firewall buffers and poor bandwidth. It is the wrong solution to the whole problem. Prioritization primarily only helps for a saturated connection, in which case you need to get a faster connection, not start fiddling with TCP packets and deciding who is more important.

Adding buffer space is also the wrong solution. Once you start to buffer, you've lost the war and are not only admitting defeat but burning your own country on the way down. The problem we have today is that we have horribly configured uplinks beyond our networks as well as within our own networks. Queueing algorithms like RED help somewhat.

That is why you want to use an OS that supports CoDel, which stands for COntrolled DELay; right now Linux is it, with the 3.5+ kernel or CeroWrt. It is a prioritization algorithm with the best configuration possible. It has one knob, which amounts to: how fast is my link. There is no other configuration. The basic algorithm is described in this paper (http://queue.acm.org/detail.cfm?id=2209336) by Van Jacobson, the guy who saved TCP/IP from congestion collapse in the '80s (Jacobson's algorithm, Wikipedia it). Right now QoS is solving the latency problem, poorly. CoDel does it better.

Once CoDel is put into more OSes, routers, etc., but primarily edge switches, where it will be the most beneficial due to speed disparities, you will notice latency and RTT going down overall.

After installing OpenWrt at home on my edge network, I am able to saturate my uplink and still get webpages to load as if there were no delay at all. This is with zero configuration for QoS. QoS and buffers are BAD; stop trying to say they are good. They are not; they are as bad as NAT. Buffers should be empty by default, and if they fill up, that is also bad. CoDel fixes that last bit and makes QoS unnecessary by and large.

...Prioritization primarily only helps for a saturated connection, in which case you need to get a faster connection, not start fiddling with TCP packets and deciding who is more important...

...but if you're an average home user then there is no "faster connection" - you already have whatever your ISP provides. And if that's ADSL then moving to a different ISP isn't going to increase your speed, given that your line length won't change.

Of course a faster connection will reduce your need for QoS, but for most folks that "faster connection" isn't an option !

...Prioritization primarily only helps for a saturated connection, in which case you need to get a faster connection, not start fiddling with TCP packets and deciding who is more important...

...but if you're an average home user then there is no "faster connection" - you already have whatever your ISP provides. And if that's ADSL then moving to a different ISP isn't going to increase your speed, given that your line length won't change.

Of course a faster connection will reduce your need for QoS, but for most folks that "faster connection" isn't an option !

This! WARNING: Personal anecdote. I'm not running QoS right now because I'm not sharing my connection, so if a torrent or something is causing latency I can just turn it off. However, when I had three roommates and we were sharing a 5 down/1 up DSL line (the fastest we could get at the time without submitting ourselves to a 60GB cap and massive overage charges), QoS saved our collective asses. One roommate is watching Netflix, one has torrents running, one is gaming online, and one is browsing ... no problem, the QoS settings on our Smoothwall sorted it all out for us and nobody perceived any significant latency. Before setting up QoS, one person torrenting could cause high latency for one person browsing.

...Prioritization primarily only helps for a saturated connection, in which case you need to get a faster connection, not start fiddling with TCP packets and deciding who is more important...

...but if you're an average home user then there is no "faster connection" - you already have whatever your ISP provides. And if that's ADSL then moving to a different ISP isn't going to increase your speed, given that your line length won't change.

Of course a faster connection will reduce your need for QoS, but for most folks that "faster connection" isn't an option !

Precisely. And no, your ISP is not going to rip out every piece of their carrier network gear and put "teh Linux" in place to solve what amounts to a last-mile problem.

QoS is not "bad", NAT is not "bad", and I'm sure CODEL is not "bad". These are all imperfect solutions that exist because we live in an imperfect world.

I run QoS at home, mainly on the upstream side because I only have 8Mb/s or so up and I use VoIP. It was a three click setup that basically says "trust the DSCP markings/TOS and shove these packets to the front of the queue, also please prioritize ACKs, DNS and ICMP". Works beautifully. I can saturate the upstream with BT, ftp, scp, etc. and still talk on the phone, do work in ssh terminals without noticing that I am maxing out my meager connection. When the cable fails and the 1.5/384 DSL takes over, it still works. If I didn't have QoS as an option, my home internet access would be much suckier.

It appears that permanently switching on IPv6 support by major producers of traffic such as Google, YouTube, Facebook, etc. has made all the 'dormant' IPv6 connections people have at home suddenly very active. They watch their videos over IPv6 without probably even realizing it.

One of the larger ISPs (XS4ALL) in the Netherlands enabled IPv6 on all existing fibre connections and on all new fibre and DSL connections on that day as well.

I'm in Boston, also with a 6121 I bought, with a 4th gen Apple Time Capsule behind it. I have IPv6 with 6-to-4 working perfectly using a combination of Comcast's (IPv4) and Google's DNS servers as an Extreme 105 customer.

No, for the love of god, no, Negative4, you really do not know your networking fundamentals here at all, as has been stated before. QoS is a band-aid around misconfigured switch/firewall buffers and poor bandwidth. It is the wrong solution to the whole problem. Prioritization primarily only helps for a saturated connection, in which case you need to get a faster connection, not start fiddling with TCP packets and deciding who is more important.

As a consumer, it doesn't matter how fast connections get; businesses are going to have way more bandwidth, and downloads will ALWAYS saturate your connection. Prioritization is the only fix.

Maybe I'm missing something but all the FUD sounds an awful lot like Y2K and I don't see why the vast vast majority of Internet users have anything to concern themselves with.

I'm interested because I'm a geek and web developer, but ultimately the vast majority of people have ISP-owned and managed equipment (modems) between their local IPv4 network (router) and the Internet at large. If their ISP gives their modem an IPv6 address and the modem handles the IPv4 tunnel where appropriate, then it's totally transparent to them. Those folks still think their IP address is 192.168.1.100, and this is already going on.

Those few of us that actually run publicly accessible Internet services are the tiny group that actually does need to worry a little about this stuff. Even then we only need to worry marginally, as most of us that run web services have hosting companies to manage the servers.

While there are certainly folks that really DO need to concern themselves with IPv6 (and I'm one of them), I really wish the reporting on it acknowledged that 99% of people have no need to concern themselves with it, just like Y2K.

Now the question is: in his new apartment, did the author of this article choose a full dual stack IPv4 + IPv6 connection from a great ISP, or did he choose a IPv4-only connection from another, lagging provider ... ? ;-)

I considered ADSL from XS4ALL, which has been doing IPv6 for some time. I'm not sure whether their 41 euros for 40 Mbps requires a phone line for an additional fee. But when I was looking earlier last year, it looked like their prices were higher than Ziggo's, the local cable company in The Hague, for similar speeds. I got 50 Mbps down and 5 up from them for 52 euros, which includes a phone line and about 60 digital cable channels, some 15 in HD. They have since upgraded the bandwidth to 60/6.

Although Ziggo doesn't have IPv6 (there were news reports that they would start rolling it out in late 2012) it has one very big thing going for it in addition to the price: no commitment. You can cancel at any time. Since I don't know how long I'll be staying here that's a big plus.

the designers of IPv6 did a poor job of providing for a clear migration path. Surprisingly, though the very problem they were trying to solve was that we were running out of IPv4 address space, it seems they failed to grasp just how much hardware and software depended on IPv4.

Of course the vast majority of that hard- and software was yet to be created when the first IPv6 specifications were published in 1995.

Stuff needing IPv4 could be a reason to not turn off IPv4, not a reason to not turn on IPv6. (Sorry about the compound negatives.)

It's still true, though: if there was some form of upgrade path built into the IPv6 spec, then we wouldn't have this problem right now. I know you simply cannot get IPv4 systems to connect to an IPv6 system without some form of translation, but if every router knew what kind of device was connected to it and did a translation between v4 and v6, then it might make sense (though OK, I've just described 6to4 NAT; you'd have to assign an IPv6 address to map to an internal IPv4 address, which shouldn't be much of a hardship). If this was in the spec and fully supported by all routers, you'd be OK.

As it is, you can have an IPv4 address encoded in IPv6 and it's readable: in IPv6, a run of zero groups is represented by two colons, so an IPv4-mapped address is written ::ffff:192.168.0.1, which isn't exactly brain-numbing. So let's stop with the FUD about impossible-to-remember addresses.

As for home kit, you can still run an IPv4 network internally, so your TV or printer will still work fine. They just won't be able to connect to the big, bad IPv6 internet and vice-versa, which is probably a very good thing.

I still want to see more, no, all, home modems/routers support IPv6. Without that, there is no IPv6 Internet.
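For what it's worth, the standard way to embed an IPv4 address in IPv6 is the IPv4-mapped form ::ffff:a.b.c.d (RFC 4291), which Python's ipaddress module can illustrate:

```python
import ipaddress

# An IPv4-mapped IPv6 address: the last 32 bits are the IPv4 address.
addr = ipaddress.IPv6Address("::ffff:192.168.0.1")
print(addr.ipv4_mapped)    # → 192.168.0.1
print(addr.compressed)     # → ::ffff:c0a8:1  (the same bits in hex groups)
```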

That is why you want to use an OS that supports CoDel, which stands for COntrolled DELay; right now Linux is it, with the 3.5+ kernel or CeroWrt. It is a prioritization algorithm with the best configuration possible. It has one knob, which amounts to: how fast is my link. There is no other configuration. The basic algorithm is described in this paper (http://queue.acm.org/detail.cfm?id=2209336) by Van Jacobson, the guy who saved TCP/IP from congestion collapse in the '80s (Jacobson's algorithm, Wikipedia it). Right now QoS is solving the latency problem, poorly. CoDel does it better.

Once CoDel is put into more OSes, routers, etc., but primarily edge switches, where it will be the most beneficial due to speed disparities, you will notice latency and RTT going down overall.

CoDel is not a prioritization algorithm and has nothing to do with QoS. CoDel is a dynamic queue length management algorithm, and it aims to solve one specific problem in router design: how to have large buffers, needed to deal with incoming packet bursts and sustain high-bandwidth operation, without risking excessive queuing and latency (buffer bloat).

Case in point: my cable modem suffers from buffer bloat. If I try to saturate its 2 Mbit/s upstream, latency rises to ~2000 ms. If I keep it below ~98% of those 2 Mbit/s, it works fine with <100 ms latencies. In my case, I worked around the problem by having a Linux server act as router to the rest of the network and limit upstream to ~98% of those 2 Mbit/s with a token bucket filter (TBF). TBF does the job quite well, but it's CPU intensive and needed quite a bit of trial and error with the parameters to get right. CoDel is a less CPU-intensive, single-knob solution to this problem. Hopefully it's simple enough that it will be more widely used in consumer modems/routers.
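For the curious, CoDel's core drop decision can be sketched in a few lines. This is a toy simplification of the algorithm in the ACM Queue paper linked above, not the Linux fq_codel implementation; the constants are the paper's suggested defaults:

```python
# Toy sketch of CoDel's drop decision: drop only when packets' sojourn
# time in the queue has stayed above TARGET for a full INTERVAL.
TARGET = 0.005      # 5 ms of acceptable standing queue
INTERVAL = 0.100    # 100 ms, roughly a worst-case RTT

class CoDelSketch:
    def __init__(self):
        self.drop_at = None     # deadline set when sojourn first exceeds TARGET

    def should_drop(self, now, sojourn):
        if sojourn < TARGET:
            self.drop_at = None          # queue drained below target; reset
            return False
        if self.drop_at is None:
            self.drop_at = now + INTERVAL
            return False
        return now >= self.drop_at       # bad for a whole interval: drop

q = CoDelSketch()
print(q.should_drop(0.00, 0.050))   # → False (just went above target)
print(q.should_drop(0.15, 0.050))   # → True  (above target for >100 ms)
```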

Maybe I'm missing something but all the FUD sounds an awful lot like Y2K and I don't see why the vast vast majority of Internet users have anything to concern themselves with.

You only think of Y2K as FUD because a lot of work was done to successfully fix most of the issues.

Quote:

I'm interested because I'm a geek and web developer, but ultimately the vast majority of people have ISP-owned and managed equipment (modems) between their local IPv4 network (router) and the Internet at large. If their ISP gives their modem an IPv6 address and the modem handles the IPv4 tunnel where appropriate, then it's totally transparent to them. Those folks still think their IP address is 192.168.1.100, and this is already going on.

Those few of us that actually run publicly accessible Internet services are the tiny group that actually does need to worry a little about this stuff. Even then we only need to worry marginally, as most of us that run web services have hosting companies to manage the servers.

While there are certainly folks that really DO need to concern themselves with IPv6 (and I'm one of them), I really wish the reporting on it acknowledged that 99% of people have no need to concern themselves with it, just like Y2K.

You're wrong. When your ISP changes from giving your modem a public IPv4 address to giving it a public IPv6 address (this is DS-Lite), then you'll be depending on a form of large scale NAT to talk to other IPv4 sites and users out there. And you're subject to the same kind of problems LS-NAT incurs. These problems include applications like P2P and games not working properly, generally worse performance, and the possibility of stuff like being banned from a website because of what someone else did, who happens to use the same LS-NAT gateway you do.

I doubt I'm the only one who doesn't find it a tragedy that you can't.

Did you just post this to be sadistic? It's a problem a LOT of people have, not just me.

I'm saying that I don't find it a tragedy that you can't consume all of the bandwidth available to you twenty four hours a day, 365 days a year, but instead have to cut back a little in order to game.

Cheap consumer Internet uplinks are made economical by oversubscription. When you continually use all the bandwidth available to you, that's a lesser amount of possible link oversubscription, which means increased costs that have to be passed on, or the introduction of traffic shaping or quotas for everyone.

Yeah netflix uses 5 mbps now but in the future it'll be more. Either way I run into bandwidth problems frequently and so do a lot of people I know, it's definitely not a minority thing. Bandwidth needs to be prioritizable.

They run into bandwidth problems because they don't have enough bandwidth, not because they need priorities.

If you have a 5Mb Netflix stream and only 5Mb of bandwidth, you may get buffering issues. No amount of priorities can increase your bandwidth.

If you don't have enough bandwidth because it costs too much, then blame the monopoly of the telecom system.

Everyone should have 100Mb of bandwidth for at most $70, and we should be seeing 1Gb and 10Gb in the future.

Technology is outpacing demand, and we'll be seeing a jump in bandwidth supply with recent huge breakthroughs in fiber optics that should be trickling down in the next 5 to 10 years.

Another interesting thing is a lot of the current bandwidth demand growth is being driven by more and more people switching over to Netflix type things. These switch-overs are more of a one time thing. Once Netflix type services reach market saturation, demand is going to slow down until the next big thing, and even then it won't be as big of a difference as people going from FaceBook/email to Video Streaming.

Priorities address the symptom of a broken system that doesn't even follow industry standards. Fix the problem, not cover the symptom.

Priorities address the problem of a finite resource, it's never going to be not needed. If everyone had 100 Mbit connections businesses would download to you at 100 Mbits. Say you want one download to take priority over others, what then? You're leaving your computer and you have 5 things to download and you want one to finish first. Priorities are necessary on a finite resource.

Precisely. And no, your ISP is not going to rip out every piece of their carrier network gear and put "teh Linux" in place to solve what amounts to a last-mile problem.

QoS is not "bad", NAT is not "bad", and I'm sure CODEL is not "bad". These are all imperfect solutions that exist because we live in an imperfect world.

I run QoS at home, mainly on the upstream side because I only have 8Mb/s or so up and I use VoIP. It was a three click setup that basically says "trust the DSCP markings/TOS and shove these packets to the front of the queue, also please prioritize ACKs, DNS and ICMP". Works beautifully. I can saturate the upstream with BT, ftp, scp, etc. and still talk on the phone, do work in ssh terminals without noticing that I am maxing out my meager connection. When the cable fails and the 1.5/384 DSL takes over, it still works. If I didn't have QoS as an option, my home internet access would be much suckier.
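A setup like the one described can be sketched with Linux tc, roughly along these lines. This is an illustrative sketch, not the poster's actual config: the interface name, bands, and filters are assumptions, and a real setup would also need egress shaping below the uplink rate.

```shell
# Three-band prio qdisc: its default priomap already classifies
# packets into bands by TOS/DSCP markings, i.e. "trust the markings".
tc qdisc add dev eth0 root handle 1: prio bands 3

# ICMP (protocol 1) to the highest-priority band
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip protocol 1 0xff flowid 1:1

# DNS (UDP, destination port 53) to the highest-priority band
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
    match ip protocol 17 0xff match ip dport 53 0xffff flowid 1:1

# TCP packets with the ACK flag set (byte 13 of the TCP header),
# so acknowledgments don't get stuck behind bulk uploads
tc filter add dev eth0 parent 1: protocol ip prio 3 u32 \
    match ip protocol 6 0xff \
    match u8 0x10 0xff at nexthdr+13 flowid 1:1
```

The point of the sketch is how little it takes: one qdisc and a few filters reproduce the "DSCP plus ACK/DNS/ICMP to the front" policy.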

I said right now Linux is the only OS with an implemented CODEL. Where in my original reply did I state "ZOMG LINUX EVERYWARE ZOMGS GUYZ"? It is a queuing algorithm like RED. Any OS can support it; Linux is just ahead of the game. It is rather simple to patch systems to support new features. Not sure where you got this impression from my post.

QoS is bad in the sense that it masks bandwidth problems. Yes, it can be of some use, but queueing for latency is an overall better solution.

And yes, NAT is bad. IPSEC tunnels are a good example of why.

I had QoS set up before this for my home connection. After turning on fq_codel, I've not found it to ever fail where QoS would succeed. You do realize that CODEL is optimizing for latency and not bandwidth, right? Your use case is exactly what it is intended to fix, besides bufferbloat. I run bulk rsyncs from home that peg my upload, and ssh etc. connections run fine, with no QoS setup or overhead on the perimeter router.
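For comparison with the multi-filter QoS recipe, the fq_codel equivalent really is a one-liner (assuming a Linux router; the interface name is illustrative):

```shell
# Replace the root qdisc with fq_codel: per-flow fair queuing plus
# CODEL's latency-targeting drop policy, no manual classes or filters.
tc qdisc replace dev eth0 root fq_codel

# Inspect per-qdisc statistics (drops, ECN marks, queue delay)
tc -s qdisc show dev eth0
```

That single-knob property is exactly the contrast being drawn with hand-tuned QoS setups.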

CODEL is not a prioritization algorithm and has nothing to do with QoS. CODEL is a dynamic queue length management algorithm that aims to solve one specific problem in router design: how to have the large buffers needed to absorb incoming packet bursts and sustain high-bandwidth operation, without risking excessive queuing and latency (buffer bloat).

Case in point: my cable modem suffers from buffer bloat. If I try to saturate its 2 Mbit/s upstream, latency rises to ~2000 ms. If I keep it below ~98% of those 2 Mbit/s, it works fine with <100 ms latencies. In my case, I worked around the problem by having a Linux server act as router for the rest of the network and limiting upstream to ~98% of those 2 Mbit/s with a token bucket filter (TBF). TBF does the job quite well, but it's CPU intensive and needed quite a bit of trial and error with the parameters to get right. CODEL is a less CPU-intensive, single-knob solution to this problem. Hopefully it's simple enough that it will be more widely used in consumer modems/routers.

However, it has nothing to do with QoS and packet prioritization.
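The TBF workaround described above can be sketched as follows; the interface name and the burst/latency values are illustrative and are exactly the parameters that need the trial-and-error tuning mentioned:

```shell
# Shape upstream to ~98% of a 2 Mbit/s uplink (1960 kbit/s) so the
# modem's own oversized buffer never fills and latency stays low.
tc qdisc add dev eth0 root tbf rate 1960kbit burst 16kbit latency 50ms
```

The trick is that queuing then happens in the Linux box, where the queue is short, instead of in the modem, where it is thousands of milliseconds deep.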

From my experience using it at the border at home, it scales quite nicely. The issue I run into is that from 4 to 10ish on weekdays, I have approximately 500-2000 ms of buffering between me and not-Comcast. Past 10pm this goes down to a reasonable sub-100 ms.

I've scp'd entire files out of my network only to have the transfer fail due to the upstream buffer applying RED randomly to my stream. Looking at the network dumps is most enlightening. QoS and packet prioritization do nothing to alleviate or fix these problems. Unless you are talking about QoS'ing file transfers on, say, port 22 to be exceedingly slow, in which case yes, they will help somewhat. But the cure would be worse than the disease.

A couple of German ISPs (Deutsche Telekom, Unitymedia) started switching their customers over to a dual stack connection a few weeks ago, and at least Unitymedia is met with quite a bit of backlash. The primary problem people are facing is that their Xbox 360 isn't dual stack compatible. People are of course blaming their ISP for this ("it worked yesterday and then you changed something").

I'm rather surprised that a company like Microsoft introduced a product in 2005 that isn't dual stack compatible (even Windows XP had rudimentary IPv6 support) and hasn't fixed it since.

Then I discovered their TechNet document outlining IPv6 support (http://technet.microsoft.com/en-us/netw ... 94905.aspx) and I'm even more surprised that Windows Phone 7.5 - which is from this year, and Windows Phone 7 isn't that old either - supports neither IPv6 nor dual stack.

We still have a long way to go.

It would be great to see an Ars article about the problems typical consumer devices might face now or in the future with IPv6 or especially dual stack. If Microsoft can't get their cell phones in line, I wonder how Blu-ray players, TVs, AV receivers, game consoles, etc. will fare.


Of course not. QoS is not meant to fix buffer bloat, CODEL (and RED) are. Conversely, CODEL is not meant to minimize the problems QoS is meant to minimize.

There's no such thing as "not dual stack compatible." IPv4-only systems work without issue in a dual stack environment, as they totally ignore the IPv6 packets.

What exists are broken dual stack systems and environments.

not compatible != not implemented

Apparently stuff doesn't work in a dual stack (lite) environment, and they even document it. To me that's not compatible.

Ah! DS-Lite is something else. In a DS-Lite environment, your router does not have a public IPv4 address; access to the IPv4 Internet is made via an ISP-level NAT (LS-NAT). And of course, when you're behind LS-NAT, things do break (hence the need for people to get on board with IPv6).

What big name web servers don't support both stacks? This is the main problem. If I could switch to IPv6 as a consumer and be able to connect to everything I normally connect to AND be able to get an actual address for each device in my house, I would do it in a heartbeat.

First, every part of the code that touches IP addresses needs to be modified to handle IPv6 addresses. E.g., Ars surely logs the IP address for each of our posts; the code that does that needs to be upgraded to handle IPv6 addresses.

The second issue is fear of poor IPv6 connectivity. I.e., when a domain has both IPv4 and IPv6 addresses and the computer has native IPv6, browsers will use IPv6. However, sometimes IPv6 is broken somewhere along the path, and with few users, these issues go unnoticed for longer. Due to that, many sites prefer not to enable IPv6. That's also why Google had ipv6.google.com for years before enabling IPv6 on www.google.com.
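As a hypothetical illustration of that first point (not Ars' actual code): the textual form of an IPv6 address can be three times longer than IPv4's 15-character maximum, so any fixed-width log field, database column, or buffer sized for dotted quads will truncate it.

```shell
# Longest textual IPv4 address: 15 characters.
v4_max="255.255.255.255"
# Longest common textual IPv6 form (an IPv4-mapped address
# written with full groups) runs to 45 characters.
v6_max="ffff:ffff:ffff:ffff:ffff:ffff:255.255.255.255"
echo "${#v4_max} ${#v6_max}"   # prints "15 45"
```

That 15-vs-45 gap is why "just log the address" code, schemas, and parsers all need auditing before a site can safely go dual stack.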


But the problem people are describing here, and trying to fix, is buffer bloat. Latency problems on saturated links are what CODEL fixes, and what posters here are trying to fix using QoS on existing equipment that doesn't support CODEL yet. There _may_ be other problems that QoS can help fix, but I would consider it a last resort, as there are usually better and more fundamental ways to solve network problems than layering on QoS.


But I think it is worth highlighting that www.google.com does have a AAAA record and the world is not crashing down. Whatever incompatibility is left out in the world is now part of the current normal. I'm sure that if an ISP's users can't get to Google, that is not a failure that will go unnoticed.