Cox ready to throttle P2P, other non-“time sensitive” traffic

Cox Cable rolls out a traffic management system designed to throttle content …

It takes guts—or perhaps something a bit further down the anatomy—to wait until Comcast has been smacked down for singling out P2P, the Obama administration has come to power, and Democrat Michael Copps (temporarily) heads the FCC to roll out a new Internet traffic management system that delays only some kinds of content during moments of congestion.

But that's exactly what Cox Cable, the third largest cable system in the US, has just announced.

According to the announcement made Tuesday night, Cox will trial the system in Kansas and Arkansas first, expanding it to the rest of its territory later in the year if all goes well.

Here's how the company describes the new setup: "During the occasional times the network is congested, this new technology automatically ensures that all time-sensitive Internet traffic—such as web pages, voice calls, streaming videos and gaming—moves without delay. Less time-sensitive traffic, such as file uploads, peer-to-peer and Usenet newsgroups, may be delayed momentarily—but only when the local network is congested."

All traffic defaults into the "time sensitive" category, but Cox has decided on a set of uses that can be delayed. At the beginning of the trial, this list includes:

File Access (Bulk transfers of data such as FTP)

Network Storage (Bulk transfers of data for storage)

P2P (Peer to peer protocols)

Software Updates (Managed updates such as operating system updates)

Usenet (Newsgroup related)
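Conceptually, the policy this list describes is a default-allow classifier: everything is "time sensitive" unless it falls into one of the listed categories. A minimal sketch of that idea in Python; the category names are illustrative, not Cox's actual implementation:

```python
# Hypothetical classifier matching the policy described above:
# traffic defaults to time-sensitive, and only explicitly listed
# categories may be delayed during congestion.
DELAYABLE = {"file_access", "network_storage", "p2p",
             "software_updates", "usenet"}

def priority(category: str) -> str:
    """Everything defaults to time-sensitive unless explicitly listed."""
    return "delayable" if category in DELAYABLE else "time_sensitive"
```

Note the asymmetry: anything the classifier fails to recognize gets the favorable treatment, which is why the default matters.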

When the network is not experiencing congestion, all traffic will flow without delay, no matter what its type; only during periods of congestion will some traffic be delayed. The system sounds quite a bit like Comcast's new system—with one huge distinction that could well land Cox in hot water at the FCC: Comcast looks only at each user's overall bandwidth usage of the last few minutes, while Cox singles out specific uses of the Internet for delay.

This would certainly seem to go against the tone of the recent FCC ruling against Comcast, which suggested that allowing ISPs to pick and choose what traffic would be throttled was a bad idea. Nevertheless, as Cox says on its site, "the technology and policies at work in this trial also factor in the guidance provided by the Federal Communications Commission."

The actual "Internet Policy Statement" adopted by the FCC in 2005 says only that:

Consumers are entitled to access the lawful Internet content of their choice

Consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement

Consumers are entitled to connect their choice of legal devices that do not harm the network

Consumers are entitled to competition among network providers, application and service providers, and content providers

All of these principles are subject to "reasonable network management."

If groups like Free Press (instrumental in the Comcast case) have anything to do with it, the FCC could well be dragged into another network management battle over the new policy.

Free Press policy director Ben Scott already fired off a statement saying, "The lesson we learned from the Comcast case is that we must be skeptical of any practice that comes between users and the Internet. As a general rule, we're concerned about any cable or phone company picking winners and losers online. These kinds of practices cut against the fundamental neutrality of the open Internet. We urge the FCC to subject this practice to close scrutiny and call on Cox to provide its customers with more technical details about exactly what it's doing."

Already anticipating the objections, Cox makes clear that "these congestion management techniques are not a replacement for upgrades to our network." In other words, it's not just "damn the cost of upgrades, let's just throttle like the Boston Strangler!"

Cox customers are asked to send their thoughts to coxmessage@cox.com, an e-mail address we are quite sure will soon get a workout.

I don't really see a problem with it. They are not saying they are blocking it, they are just saying one set of packets that requires specific timing is going to take priority over those which don't. So if you are playing a game you won't get a sudden spike of lag when your next door neighbor tries to download all of the New Kids On The Block mp3s at once. Makes perfect sense to me, especially given that it is not a blanket slowdown and only occurs when the network is overloaded. I think I would be a bit more pissed about my movies stuttering, or getting killed repeatedly because my games keep lagging out, than I would about having to wait another hour to finish the 5 gig file I am downloading.

I really would like them to define the term "delay" as it is used in all these silly traffic management posts. They aren't storing the packets to be rebroadcast when the link is less congested, they are dropping them and relying on the client to resend them, which I think is a very significant difference.

Also: Who is Cox to decide that software updates (assuming OS especially) aren't time-critical? That seems like a particularly dumb idea. Sure, the chances of running into a bug or virus during network congestion probably aren't much greater than a few hours later when traffic clears, but... isn't it still in the interest of security and stability (and sometimes performance) to get patches to as many users as possible, ASAP?

Personally, I'd think that if they're going to do something like this, then they need to be also able to relay real-time congestion status to users to allow them to make informed decisions in their data usage. Also, explain how the throttling works, so that people know up front what is involved.

Or, perhaps, we take a more modest proposal as has been advocated in another industry (with relative approval from Arsians in the past): How about we take the "smart meter" approach that people like for the power grid, apply that to network congestion, and charge people a premium for usage beyond a certain level during times when congestion is high? What's good for the goose is good for the gander, and all that...

I guess I don't see the big deal with their solution to network congestion. It seems to me that as long as they don't prioritize their own services over a third party's, it's not a big deal. I'm fine with having my P2P app run slightly slower so that someone can make a Vonage phone call (for example).

(Granted, I've always personally believed that "network neutrality" just means that all ISPs refuse to prioritize traffic based on host address or destination address, not that certain protocols can't be given higher priority)

So... Cox has oversold its service. Cox already has usage caps in place. These aren't enough to hold back the hordes of Torrent-loving pirates? lol

If they are suffering congestion problems, why does Cox offer Powerboost? Cox wants to sell service without upgrading their infrastructure; that's why you hear about the high speeds and Powerboost on TV, but in the background you hear about usage caps and throttling. Combine that with the constant price hikes, and it is easy to feel taken advantage of.

Maybe Cox is strapped for cash and can't find financing to beef up their network, and by the same token can't afford to stop taking on new subscribers. Blame the economy I guess. Everybody is getting cheap these days out of necessity.

Sounds fair to me - a well thought out system. Hope Time Warner does the same. It's pretty naive to think that all internet access should be treated equally. At the end of the day, the internet is just like a system of roads, and as such needs rules to run efficiently - you know, freeways, stoplights, yield signs, congestion zone pricing, etc.

I have to agree this is an entirely reasonable thing to do. I'd rather never deal with congestion, but if I must, I'd rather enjoy fast web browsing and video streaming at the expense of large file downloads. I knew the large downloads were going to take multiple seconds/minutes/hours anyway, and already budgeted time for that, so a few minutes longer makes no difference. The difference between five minutes and ten minutes is much less than the difference between five seconds and ten seconds.

Also, Cox needs to publish more details about how neighbors' connections will interact. If I understand the article, Cox might prioritize my neighbors' web & video traffic over my file DL. Why is someone's Youporn traffic more important than my recovery DL from an online backup service? If the network's overloaded, throttle everybody's total bandwidth proportionally & prioritize traffic streams only within each user's pipe.
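This commenter's alternative (equal per-user shares, with prioritization only inside each user's share) could be sketched roughly as follows. The equal-split rule, the two priority classes, and all the numbers are assumptions for illustration, not anything Cox has described:

```python
def allocate(capacity: float, users: dict) -> dict:
    """Toy model of the commenter's proposal: split link capacity
    evenly across active users, then within each user's share let
    high-priority traffic claim bandwidth before low-priority.
    users maps a name to (high_priority_demand, low_priority_demand).
    Returns name -> (high_granted, low_granted)."""
    share = capacity / len(users)
    out = {}
    for name, (hi_demand, lo_demand) in users.items():
        hi = min(hi_demand, share)          # priority traffic first...
        lo = min(lo_demand, share - hi)     # ...bulk gets the remainder
        out[name] = (hi, lo)
    return out
```

Under this scheme one user's bulk download can never crowd out a neighbor's streams, because prioritization only happens within each user's own slice.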

Too many people wail that their cheap-ass internet connection should get the full bandwidth 24/7, without realising that it's only cheap because it is shared between multiple subscribers.

It's really unfair that someone's enjoyment of their service is ruined by someone else downloading and uploading non-time-critical media. Therefore boosting the real-time priority seems sensible to me.

And to the person saying that this is achieved by dropping packets and relying on the client to resend: if it's done correctly using something like DiffServ, then no packets are dropped; they're just queued differently in the routers, and the client won't continue to send once its TCP window is full for that connection, so you can get very effective rate restriction without losing efficiency.
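A minimal sketch of that strict-priority queuing idea (nothing is dropped; low-priority traffic just waits behind the high-priority queue). This is a toy two-queue model, not a real DiffServ router implementation:

```python
from collections import deque

class PriorityLink:
    """Toy two-queue strict-priority scheduler: low-priority packets
    are never dropped, they simply wait until the high-priority
    queue has drained. Illustrative only."""
    def __init__(self):
        self.high = deque()
        self.low = deque()

    def enqueue(self, packet, time_sensitive: bool):
        (self.high if time_sensitive else self.low).append(packet)

    def dequeue(self):
        # Always serve the high-priority queue first.
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

In a real router the low-priority queue is finite, so under sustained congestion it eventually fills and packets do get dropped; the queuing just makes drops a last resort rather than the mechanism.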

Also: Who is Cox to decide that software updates (assuming OS especially) aren't time-critical? That seems like a particularly dumb idea. Sure, the chances of running into a bug or virus during network congestion probably aren't much greater than a few hours later when traffic clears, but... isn't it still in the interest of security and stability (and sometimes performance) to get patches to as many users as possible, ASAP?

I agree. Imagine a DDoS attack causing congestion, but we've got no way to DL patches to fix the problem... because the updates are delayed.

It at least seems better thought out than many other versions, and promises to only do it when bandwidth is an issue in the area. How the implementation works relative to how it is marketed is of course yet to be seen.

I don't see why telcos and ISP go to such troubles. There is a simpler way to do this.

1. Accumulate incoming packets in a queue.
2. Move the packets to an intermediate storage.
3. Go through the intermediate storage and send only one packet per destination.
4. Add any incoming packets that came in during step 3 to the intermediate storage.
5. Go to step 3.

This algorithm has several advantages:

It's simple. That means it's easy to implement and debug.

It's fast. It only adds one step to the packet handling: moving them to the intermediate storage.

It's fair. Everyone gets one slice of the time pie. The difference is that during heavy loads, there are fewer pies.

It's automatic. There is no need for additional code. More code means more bugs and more time delays. And time delays are not something you want to add during congestion.

It's ethical. There is no need for packet inspection. All that needs to be checked is the destination, something that needs to be looked at anyway to determine where the packet is going.

Throttling is built in. If someone downloads a large file, his packets spend more time in the intermediate storage than others.
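The five steps above amount to a per-destination round-robin. A runnable sketch of the scheme, assuming packets are simply (destination, payload) pairs:

```python
from collections import OrderedDict, deque

def round_robin(packets):
    """Sketch of the commenter's scheme: group queued packets by
    destination, then emit one packet per destination per pass, so a
    bulk transfer waits behind everyone else's single packet."""
    # Steps 1-2: accumulate packets into per-destination queues.
    queues = OrderedDict()
    for dest, payload in packets:
        queues.setdefault(dest, deque()).append(payload)
    # Steps 3-5: one packet per destination per pass, until empty.
    sent = []
    while queues:
        for dest in list(queues):
            sent.append((dest, queues[dest].popleft()))
            if not queues[dest]:
                del queues[dest]
    return sent
```

This is essentially round-robin fair queuing: the heavy downloader's backlog drains last, which is the "built in throttling" the commenter describes.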

It sounds reasonable the way they SAY it. "Delay". Like SuperSpy said, what does that REALLY mean? Is the "delayed" traffic simply dropped? We've seen this before (Ars link needed). The way they SAY it, if I start a download and go to bed, it should be finished in the morning, even if it was "delayed" at some point during the night. If instead I wake up to find that 5% completed before it was dropped, I'd be pissed off.

Does all "delayable" traffic need smarter client software that will auto-restart downloads? How many clients can do that today?

If Cox is anything like Comcast, just complain to every service rep you can. Thus far I've been pushing over 400GB per month with no notices from Comcast. I also recently switched to Dish network for my television service. Comcast has been calling occasionally to see if I plan on dropping any other services, and to ask if they can "offer me better rates". In short, make the cable companies realize that they're no longer monopolies when it comes to television and Internet access!

Cheap? Cable modem service in the US isn't really cheap. Your average monthly cost is about $40/mo.

The service isn't being SOLD as a shared service with all the other subscribers, so there is no reasonable expectation in the mind of the average customer in that regard. BTW, I am tired of people trotting out that old worn out line that cable service is shared but DSL is not... it isn't any truer now than it was in the 90's... with DSL you can STILL have congestion at the DSLAM. It is incumbent on cablecos AND telcos alike to properly segment their networks to avoid over-congestion at aggregation points.

You get Cox and Comcast-style shenanigans when the cableco or telco won't or can't upgrade their network. Let's not forget AT&T wanted to charge Google for bandwidth in the not too distant past.

Where do you live that you get $40/mo cable internet? My regional monopoly (Mediacom) charges $65 + fees if you don't get their TV service (minimum charge of $25/mo + fees = 11 channels). So the smallest monthly bill you can have from them is around $70. DSL is worse... Embarq FTL.

If Cox throttles P2P, how are they going to know whether or not it's Skype? Skype uses P2P on its phone network.

And a lot of streaming audio and video uses P2P technology as well. TVAnts, Sopcast, and the like use P2P to send the streams out.

Also, since they allow VPN, based on the press release, someone could bypass throttling by simply logging onto a public subscription VPN. There are lots of them on the Net from as little as $5/month up to $60/month. With your traffic encrypted, there is no possible way for Cox to know what you are up to.

Originally posted by MathRockBrock:Also: Who is Cox to decide that software updates (assuming OS especially) aren't time-critical? That seems like a particularly dumb idea. Sure, the chances of running into a bug or virus during network congestion probably aren't much greater than a few hours later when traffic clears, but... isn't it still in the interest of security and stability (and sometimes performance) to get patches to as many users as possible, ASAP?

Cox is practicing sane network management policies. File transfers are not time sensitive from a functional standpoint, and Cox isn't blocking them completely. It just means that your updates will take a few minutes longer. VOIP, Gaming, and VPN are examples of time and bandwidth-sensitive functions that stand to suffer without ISPs managing their networks against file sharing given how fast the demands of file sharing are increasing and inundating networks. This is reasonable network management.

I always find it interesting when people accuse ISPs of overselling. Of course they're overselling! That's what makes a 5Mb/sec service so affordable, since the connectivity ISPs sell is very much a time-share-like arrangement. If you want a guaranteed 5Mb/sec with the availability of an unabated and constant 5Mb/sec draw, it would probably cost you in excess of $1000 a month when gotten directly from the backbone as many of the ISPs do. Co-location fees would be extra, and then you would have to backhaul it to your home.

The heaviest P2P downloaders are highly subsidized by the casual users, and yet they complain the most...

Originally posted by whquaint:It sounds reasonable the way they SAY it. "Delay". Like SuperSpy said, what does that REALLY mean? Is the "delayed" traffic simply dropped? We've seen this before (Ars link needed).[snip]

I assume you're referring to Comcast's initial method of filtering where they were injecting forged RSTs between bittorrent users. This is quite different from just dropping packets. If memory serves, TCP accounts for dropped packets, whereas the forged RSTs fool it quite handily, and would require unreasonable application behavior to overcome the ISP's RSTs.

Assuming by "delay" that Cox means dropping few enough packets that TCP connections will shrink the transfer window but not drop the connection, I don't think that's an unreasonable way to manage a particular connection. This (and worse) is what would happen to every connection in times of congestion without network management.
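The TCP behavior this comment relies on (the window shrinks when packets are dropped, and grows again as transfers succeed) is commonly modeled as additive-increase/multiplicative-decrease. A toy version with illustrative constants, not a faithful TCP implementation:

```python
def aimd(events, cwnd=10.0):
    """Toy additive-increase/multiplicative-decrease model of a TCP
    sender's congestion window: each successful ACK grows the window
    by one segment, each detected drop halves it. Constants and units
    are illustrative only."""
    for ev in events:
        if ev == "drop":
            cwnd = cwnd / 2     # multiplicative decrease on loss
        else:
            cwnd = cwnd + 1     # additive increase on success
    return cwnd
```

This is why a managed trickle of drops slows a connection down smoothly, while a forged RST kills it outright: halving the window is recoverable, a reset is not.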

However, I do think Cox's idea is a problem for the reasons the Free Press brings up, that they are "picking winners and losers online", by prioritizing one protocol over another. I'm no fan of Comcast, but I find their current method more reasonable (if I understand it correctly): throttle only when necessary, by dropping packets, only on the heaviest users at that time, on a protocol-agnostic basis. (Of course, I do find it unfortunate that they had to be arm-twisted into that, and that they also have a transfer cap. Still glad I'm not their customer. )

Originally posted by johnsonfromwisconsin:Cox is practicing sane network management policies. File transfers are not time sensitive from a functional standpoint, and Cox isn't blocking them completely. It just means that your updates will take a few minutes longer. VOIP, Gaming, and VPN are examples of time and bandwidth-sensitive functions that stand to suffer without ISPs managing their networks against file sharing given how fast the demands of file sharing are increasing and inundating networks. This is reasonable network management.

I understand that argument, but I don't agree that it's "reasonable". Do you, or do you not have a compelling reason why my neighbor's all-night WoW raids and porn streams (Cox includes video under "priority") should not be delayed, while critical system patches should? What about people who use VPN for filesharing?

Originally posted by MathRockBrock:I understand that argument, but I don't agree that it's "reasonable". Do you, or do you not have a compelling reason why my neighbor's all-night WoW raids and porn streams (Cox includes video under "priority") should not be delayed, while critical system patches should? What about people who use VPN for filesharing?

Because both of you are paying for a pipe. You are not paying for priority access on some services over others, or even for guaranteed speeds; that's what SLAs are for.

Both of you have equal access to that pipe, and both of you have the right to use it and get a decent experience out of it.

Your neighbor is paying for a decent experience on his internet, which means having his videos stream without stuttering, or excessive lag on his games.

"Critical system patches?" You're really and truly that worried that your latest update is going to take 2 minutes instead of 1? Besides, your ISP really sucks if one person next door to you streaming a video or playing a game cuts your transfer rate in half.

This is INDUSTRY STANDARD QOS FILTERING. Look it up. This technology was designed to solve exactly these problems.

File transfers are, by definition, "bulk data", and anything that requires its packets to arrive on time, or else suffer lagging, skipping, or other degradation (video, voice, gaming), is upgraded to time sensitive. Which it is.

Nothing about this is industry standard. They're substituting a cheap, bizarre form of QOS that won't work instead of spending real $ to upgrade their network and providing the service that I was promised and pay for.

I canceled Cox yesterday and signed up for U-verse; hopefully AT&T won't start doing this sort of crap. I'm not confident about that.