Posted
by
kdawson
on Friday December 04, 2009 @10:45AM
from the careful-they-got-tusks dept.

eldavojohn writes "Benoit Felten, an analyst in Paris, has heard enough of the elusive creature known as the bandwidth hog. Like its cousin the Boogie Man, the 'bandwidth hog' is a tale that ISPs tell their frightened users to keep them in check or to cut off whoever they want to cut off from service. And Felten's calling them out because he's certain that bandwidth hogs don't exist. What's actually happening is the ISPs are selecting the top 5% of users, by volume of bits that move on their wire, and revoking their service, even if they aren't negatively impacting other users. Which means that they are targeting 'heavy users' simply for being 'heavy users.' Felten has thrown down the gauntlet asking for a standardized data set from any telco that he can do statistical analysis on that will allow him to find any evidence of a single outlier ruining the experience for everyone else. Unlikely any telco will take him up on that offer but his point still stands." Felten's challenge is paired with a more technical look at how networks operate, which claims that TCP/IP by its design eliminates the possibility of hogging bandwidth. But Wes Felter corrects that mis-impression in a post to a network neutrality mailing list.

They are generally using UDP, so the original assertion that they degrade other users' experience should hold true, since UDP should break down long before TCP does. Though I do agree that if Comcast's system works as described, it's probably the best solution for a network that can't implement QoS.

Why do you think they are using UDP? Most of the bandwidth being used at this point, to my knowledge, is for streaming video (read: porn) and BitTorrent (read: porn). Both of them use TCP for the majority of their bandwidth usage (Some BitTorrent clients support UDP communication with the tracker, but the file is still transferred by TCP). Getting built-in error-checking, congestion control and streaming functionality in TCP makes much more sense than a UDP-based protocol where you have to reimplement all of that yourself.

Why do you think they are using UDP? Most of the bandwidth being used at this point, to my knowledge, is for streaming video (read: porn) and BitTorrent (read: porn). Both of them use TCP for the majority of their bandwidth usage (Some BitTorrent clients support UDP communication with the tracker, but the file is still transferred by TCP).

Most of the streaming protocols that I dealt with used UDP as their basis. The need to deliver the next frame or audio sample as soon as possible outweighs the need to guarantee that every single frame or byte arrives. We accept the occasional dropout in return for expedited delivery of data.

Unfortunately, when trying to achieve the data rate necessary to mask the occasional dropouts, some protocols neglect to be good stewards of network bandwidth and have no throttle (i.e., no congestion control).

Yeah, on looking around, it looks like the streaming protocols are UDP-based. That still doesn't give UDP a flat majority of traffic; BitTorrent, along with the dedicated file sharing programs, is a huge bandwidth consumer (the customers that are maxing out their connections aren't actively streaming video 24 hours a day), so if overusers of congestion-unfriendly UDP are the problem, dropping the highest-bandwidth users won't solve it (because they're mostly using the relatively network-friendly TCP).

One problem is that, by default, many network devices/OSes do bandwidth distribution on a per-_connection_ basis, not on a per-IP basis. So if there are only two users, and one user has 1000 active connections while the other has just one, the first user will get about 1000 times more bandwidth than the second.

P2P clients typically have very, very many connections open, whereas other clients might not.

A much fairer way would be to share bandwidth amongst users on a per IP basis. That means if two users are active they get about 50% each, even if one user has 100 P2P connections and the other user has only one measly http connection.

Then, within each customer's "per IP" queue, you could improve the customer's experience by prioritizing latency- or loss-sensitive stuff like DNS, TCP ACKs, typical game connections, ssh, telnet and so on over all the other less sensitive/important stuff.
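To make the difference concrete, here's a toy Python sketch (all numbers invented) of how a congested link divides up under per-connection versus per-IP sharing:

```python
# Hypothetical illustration of bandwidth division on a congested link:
# per-connection sharing vs. per-user (per-IP) sharing.

def per_connection_share(link_mbps, conns_by_user):
    """Each connection gets an equal slice of the link."""
    total_conns = sum(conns_by_user.values())
    return {user: link_mbps * n / total_conns
            for user, n in conns_by_user.items()}

def per_ip_share(link_mbps, conns_by_user):
    """Each active user gets an equal slice, regardless of connection count."""
    n_users = len(conns_by_user)
    return {user: link_mbps / n_users for user in conns_by_user}

# One P2P user with 1000 connections vs. one user with a single HTTP connection
users = {"p2p_user": 1000, "web_user": 1}

print(per_connection_share(100, users))  # p2p_user ~99.9 Mbps, web_user ~0.1 Mbps
print(per_ip_share(100, users))          # both get 50 Mbps
```

Under per-connection sharing, opening more connections buys you more of the link; under per-IP sharing it buys you nothing.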

Of course if you have oversubscribed too much, you will have way too many active users for your available bandwidth. A fair distribution of "not enough" will still be not enough.

If you have two people and you give each a fair slice of one banana, they each get half a banana. Maybe both are satisfied. If you have 1000 people and you give each a fair slice of one banana, they each get 1/1000th of a banana. Not many are going to be satisfied. ;)

And that's where we come to the other problem.

The problem with P2P is many customers will often leave their P2P clients on 24/7, EVEN when some of them don't really care very much about how fast it goes (some do, but some don't). To revisit the banana analogy, what you have here is 1000 people, and 1000 of them ask for a slice of the banana, EVEN though some of them don't really care - they'll only really feel like having a slice next week, when they're back from their holiday!

So how do you figure out who cares and who doesn't care?

Right now what many ISPs do is have quota limits - they limit how much data can be transferred in total. When the quota runs out "stuff happens" (connections go slow, users get charged more etc). So the users have to manage it.

BUT this is PRIMITIVE, because if you can figure out when a user doesn't care about the speed etc, technology allows you to easily prioritize other traffic over that user's "who cares" traffic.

So what's a better way of figuring it out?

My proposal is to give the customers a "dialer" which allows users to "log on" to "priority Internet" (and only then does anything start counting the bytes ;) ), BUT even when they "log out" they _still_ get always-on internet access; it's just at a lower priority (but with NO byte quota!). A customer might be restricted to, say, 10GB of "priority" a month.

The advantages of this method are:
1) There is no WASTED capacity - almost all the available bandwidth can be used all the time, without affecting the people who NEED "priority" internet access (and still have unused quota).
2) It allows an ISP to better figure out how much capacity to actually buy.
3) If there is insufficient capacity for "priority Internet", the ISP can actually inform the user and not put the user on "priority" (where the quota is counted). While the user might not be that happy, this is much fairer than getting crappy access while having your quota deducted anyway.
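A minimal sketch of the accounting side of this proposal, in Python; the class name, the 10GB quota, and the traffic-class labels are all made up for illustration:

```python
# Sketch of the "priority dialer" idea: traffic is always carried, but
# bytes only count against the monthly quota while the user is "logged
# on" to priority service. All names and numbers here are invented.

PRIORITY_QUOTA = 10 * 1024**3   # e.g. 10 GB of priority traffic a month

class Account:
    def __init__(self, quota=PRIORITY_QUOTA):
        self.quota_left = quota
        self.priority_on = False

    def log_on(self):
        # An ISP could also refuse here if the priority class is full.
        if self.quota_left > 0:
            self.priority_on = True
        return self.priority_on

    def log_off(self):
        self.priority_on = False

    def transfer(self, nbytes):
        """Returns the traffic class the bytes were carried in."""
        if self.priority_on and self.quota_left >= nbytes:
            self.quota_left -= nbytes
            return "priority"
        return "best-effort"   # still carried, just at lower priority

acct = Account()
acct.log_on()
print(acct.transfer(5 * 1024**3))    # "priority": 5 GB deducted from quota
acct.log_off()
print(acct.transfer(50 * 1024**3))   # "best-effort": no quota consumed
```

The key property is the last line: logged-off traffic is never blocked and never billed against the quota, it just loses out when the link is busy.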

Perhaps this system is not needed and will never be needed in countries that don't seem to have big problems offering 100Mbps internet access to lots of people.

But it might still be useful in countries where internet access and the telcos are poorly regulated/managed. For example: you run a small ISP in one of those crappy countries and so pay more for bandwidth from your providers - this system could allow you to make better use of your bandwidth and be a more efficient competitor. And maybe even give your customers better internet service at the same time.

Yes the ISP could always buy enough bandwidth so that _everyone_ can get the offered amount even though not everyone really cares all the time (believe me this is true). But that could mean the ISP's internet access packages being much more expensive than they could be.

One problem is by default many network devices/OSes do bandwidth distribution on a per _connection_ basis not on a per IP basis.

Standard IP networks do bandwidth distribution on the basis of the clients backing down (or, if nobody backs down, on the basis of who can get packets in when the queue isn't full). If the systems all use equally aggressive implementations of TCP, then each TCP connection will get roughly equal bandwidth. OTOH, a custom UDP-based protocol can easily take far more or far less than its fair share, depending on how aggressive it is.

A much fairer way would be to share bandwidth amongst users on a per IP basis. That means if two users are active they get about 50% each, even if one user has 100 P2P connections and the other user has only one measly http connection.

It's a nice idea, but there are a couple of issues:
1: It takes more computing resources to implement a source-based priority queue than to implement a simple FIFO queue.
2: To be of any use, such prioritisation needs to happen at the pinch point (that is, at the place where a queue actually builds up); unless you deliberately pinch off bandwidth or you overbuild large parts of your network, predicting the pinch point's location may not be easy.

You say that like you think you were really 'showing them' by taking your business elsewhere. They were trying to get rid of you and when you left their attitude was more of "I pity the fool that signs that a**hole!"

> Voting with my dollars worked for me back in the 90s, but now, there are just less viable players
> in the field, and none of them seem much better than the others.

Yes, there used to be a few 'hacker friendly' ISPs who were usually run by people like thee and me and utterly clueless about ISP economics. They went out of business. It isn't quite as bad in the broadband world but back in the dialup era it was just insane to keep a nethog once we got past the period when those heavy users were also helping bring in new customers. Do the math.

Customer 1 is a nethog. They nail up a connection pretty much 24/7/365 and push as many bits through it as they can. They pay regular price. In the dialup days that was typically $19.95/mo, and you pretty much had to dedicate a modem and terminal server port to the idiot. The ISP's cost: one modem and port, the telco charge for one business line, plus about 4% of a T1 for upstream. Hint: the ISP is paying more than $20/mo for the inbound phone line unless the ISP is AT&T.

Customer 2 is a normal. They connect for four or five hours per day, perhaps six or seven the first six months. You can sign up four to six of these per port. And since most of their activity is bursty the impact on your upstream is minimal.

Customer 3 is a light user. After the shiny wore off the Internet they typically do email and hit a few websites. One port will support ten or more of these people.

It should be obvious that you should want to lose Customer 1 ASAP since they cost you more money than they pay in. If you have a good mix of the second and third type you can get six to eight customers per line and not have too much fussing. AOL used to run ten to twenty customers per dialup port.
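The arithmetic behind "lose Customer 1 ASAP" can be sketched as a quick back-of-envelope calculation; the $19.95/mo retail price comes from the comment above, while the $40/mo all-in port cost is an assumed placeholder:

```python
# Back-of-envelope dialup economics. PRICE is from the comment;
# PORT_COST is an assumed all-in monthly cost of one modem/port/line.

PRICE = 19.95      # monthly revenue per customer
PORT_COST = 40.00  # assumed monthly cost of one dialup port (placeholder)

def monthly_margin(customers_per_port):
    """Revenue minus cost for one dialup port."""
    return customers_per_port * PRICE - PORT_COST

print(monthly_margin(1))   # nethog monopolizes the port -> a loss
print(monthly_margin(5))   # ~5 normal users per port -> profit
print(monthly_margin(10))  # ~10 light users per port -> bigger profit
```

Whatever the exact port cost, the shape is the same: a customer who pins one port to himself is underwater, and margin scales with how many customers you can multiplex onto each port.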

In the broadband era the upstream is the biggest contended resource and, depending on the market, can be very expensive. Again, the P2P user is the one you want to gift to your competition, if you can do it in a way that won't lose his friends/family or generate undue media attention.

> The problem really is ignorance. Too many people just don't understand the service that they are
> buying well enough to know when they are being offered less for their money than advertised...

Agreed, but it is you who are ignorant. I admin at a public library. We have a 6Mbps link delivered as four T1 circuits, and we also have a 6Mbps business-grade DSL line that I use to push most of the http traffic through to ease the load on the main link. The DSL link is pretty much what us normal people buy and is just a little over $100/mo. It is good, but doesn't always deliver full capacity. If it goes out we just fail over to driving everything out the T1s. The T1 circuits cost a hell of a lot more, but always deliver the goods and have a great SLA.

Question: in your world, why would we pay for the T1 lines? Why would anyone? Really, if we could lawyer up and force AT&T to give us the 6Mbps we 'paid for' 24/7, why are we paying for the dedicated service? Because we understand the real world. And the C block we get as part of the statewide WAN is a big plus. :)

That depends on what you mean by bandwidth. I would say that, in the case you describe, he is NOT getting the bandwidth promised. It is the responsibility of the ISP to A) enforce their caps fairly (and as advertised) and B) make sure they have enough "pipe" to handle ALL of the demands that are made within the allowed amounts.

So if I subscribed for 100 units of bandwidth (it doesn't matter what the units are), I should have that available to me, regardless of what any other network user is taking up. If they can provide that while oversubscribing (because most people seldom use even a fraction of their allotment), then more power to them.

If that means they need a LOT more pipe to deliver what they sold, then maybe they shouldn't have oversold so much pipe. Oversubscription is always a bit of a gamble.

Sometimes you gamble and lose, sometimes you can't prevent your commitments to one customer from causing issues with the commitments to another. When that happens to a company, they need to find a way to deal and to adjust either their capabilities or their offerings.

It is regrettable to offer a service, have genuine problems, and have to modify agreements or spend more on expansion. It is downright dishonest to continue to advertise a service that you already know that you can't provide as advertised, and continue to take your customers money while blaming them for being why you can't provide the service that they are paying you for.

Modern Torrent clients that support DHT (most of them) generally default to UDP. Since the Torrent protocol already includes block checksumming, there's no reason to also use TCP for that; congestion control generally isn't an issue with Torrent traffic either, you just push the pipe till it's full. For video, unless you have significant buffering there's little reason to have error checking or congestion control, because if you can't get the bits in fast enough without retransmits, then the video's not going to play smoothly anyway.

I acknowledged my error on streaming video, but BitTorrent (and other file sharing programs) are still big TCP users.

While DHT is UDP-based, the file transfer is still TCP-based, and no client I know of allocates more than 10% of its bandwidth to DHT use (usually much less). Beyond the protocol compatibility issues, why waste a re-download of an up-to-4MB block when you could have TCP fix the error much earlier? TCP has a rough error interval of one bit in every trillion bits (that's from memory, but it's in that ballpark).

(disclaimer: I am living in Eastern Europe, so things may look very different from the US, but then again, maybe it's for the better for people to get a glimpse of how things are done somewhere else on the globe)

Well, as usual, the truth is somewhere in the middle.
I have 2 ISPs (2 different providers). One is CAT5-based (plus optical fiber going out of the area) and the other uses CaTV (together with those infamous CaTV modems I hate). To make things shorter, I'll call the CAT5-based one ISP1 and the other ISP2.
ISP1 offers max metropolitan bandwidth 24/7. My city has roughly 300K home Internet subscribers, not counting businesses. I can download from any of them at a 100 Mbps max theoretical transfer rate. When using Torrent-based downloads, every single one caps at 95-97% of the theoretical maximum, which is impressive to say the least. Furthermore, I can browse at the same time without interruptions or latency. I was playing games such as EVE Online and WoW while downloading literally tens of gigabytes of data at max speed, and my latency as shown in WoW was about 150-250 ms, which is excellent in my view.
I have never ever had any warnings from my ISP1 during last 3 years, mainly because they do not count metropolitan data transfers (I asked). They also told me why. All ISPs which offer metropolitan high speed access have an agreement to let those transfers flow freely (mutual advantage) and not count them against customers. It seems the logical thing to do. It's pretty much like throwing a cable between me and my neighbour and turning the pipe on. It's a self-managed thing, and if it works like shit, then it's our fault, not the ISP's.
ISP2 offers CaTV-based Internet access. Now, I have my reasons to loathe cable modems, because they proved to be unreliable, slower than other types of Internet access, and prone to desync. I've had countless problems with this sort of implementation. Anyways, ISP2 downloads cap at 2 MB/s when downloading from either metropolitan or external sources. They brag about offering 50 Mbps transfer rates from metropolitan sources, but this doesn't seem to be true. I keep ISP2 for backup purposes, so it goes largely unused (I think I used their service for like 10 days during the last year or so).
Maybe ISP1 or ISP2 do have a policy to cap heavy downloaders who access data from outside the metropolitan network area, but I've never heard of any case where they applied it. So either the policy exists but is not applied, or it doesn't exist at all. Oh, I forgot to mention that ISP2 has nation-wide coverage and ISP1 is just city-wide.
So I was wondering what makes US-based (and probably other) ISPs come to such a conclusion and apply such policies. I think it's because their network implementation plainly sucks. Maybe they rely on third party networks to get data across areas where they have no coverage, and that costs them. Makes sense for a company looking to maximize profits (I don't like this approach though). Don't they have a minimum guaranteed bandwidth? We do have it here, and if one starts complaining that he can only download at 2x of the minimum guaranteed speed limit, the ISP just laughs in your face, because that's twice what they guarantee. And to that I agree :)
Let's assume I use videoconferencing from home. A lot. I know people who participate in high-bandwidth audio+video conferences all day, from home. So they eat up a lot of bandwidth for business purposes. They would be pissed to have a cap limit enforced on them :) So what's the ISP's take on such cases?
One more thing: if this policy is written in your contract with them, then you're legally screwed. If not, they're legally screwed. It all comes down to this in the end.
As a conclusion, I don't think "Bandwidth hogs" exist. They're mythical creatures indeed. But what is real is piss-poor network implementation, especially on WAN.

So I was wondering what makes US-based (and probably other) ISPs come to such a conclusion and apply such policies.

Modern American society has a sense of entitlement, and that applies even to the government-granted monopolies. ISPs were given hundreds of BILLIONS of dollars to push broadband out to every address in the US. They didn't do it, never got spanked for it, and abuse the customers and continually charge more using "service enhancements" and "upgrades" as their justification.

The only way I've ever been able to peg my connection is to start three or four Linux ISO downloads from different FTP sites. Just what are people doing with these connections that I'm unable to do with mine?

Because the operators pay for the bandwidth. The high bandwidth users are less profitable than the other ones.

That is why tiered services would solve the problem. High bandwidth users should be more profitable than other ones. Then the ISPs would be profit motivated to encourage heavy bandwidth usage, and the users would be cost motivated to be efficient with their usage.

No single thing is more at the root of this problem than the word "unlimited."

They don't negatively impact operations in the sense of taking up a scarce resource that degrades other customers' performance. However, they do still use above-average amounts of bandwidth, which costs ISPs money. When offering a flat-rate, unlimited-use service, your economics come out ahead if you can find some way to skew your customers towards those who don't actually take advantage of your claimed "unlimited use".

Because they're worried that if they don't, they'll have to pay for equipment upgrades to handle the extra load, and I doubt they have the monitoring in place to figure out whether a "hog" is actually impairing the experience of other customers (after all, you'd need to analyze a whole lot of factors at each link in the chain where connections join, and that costs money too). Their paranoid belief is that half the customers will up and leave because their connection is one step shy of perfect, so they throttle the heaviest users preemptively instead.

I have personally witnessed hogging of bandwidth and, I'd wager, so have you. This term describes when an individual user uses more bandwidth resources than they were assumed to need.

Example: My brother moves in with two of his friends. His latency is horrible. When his roommate is not home, the internet is fine. When he's away at work it becomes unusable. He calls me to look at the situation, and we determine that one of his roomies is a heavy torrent user. Turns out the roommate was ramping up torrents of anime shows he wanted to watch while he was gone. He was aware of the impact to his own internet experience, and so ramped it back down when he wanted to use it himself.

If that's not hogging bandwidth, I'm not too sure what is.

If this doesn't scale, logically, up to the network as a whole, I'm not sure why.

Now, to be completely clear - I feel overselling bandwidth is wrong. I feel the proper response to issues like this on the larger network is guaranteed access to the full amount of bandwidth sold at all times. On the local scale, these men should have brought in another source of internet. On the larger scale, the telco should do the same.

Denying that the issue can happen, however, is stupid to the point of sabotage.

An end-user can download all his access line will sustain when the network is comparatively empty, but as soon as it fills up from other users' traffic, his own download (or upload) rate will diminish until it's no bigger than what anyone else gets.

So, if I understand this statement, if a user is hogging all the bandwidth until no one gets any connectivity - since it is all the same it is totally fair. One user can bottleneck the pipes, but since their stuff isn't fast any more either, we're all good?

I used QoS with iproute2 and iptables (see http://lartc.org/howto) when I faced that issue. I do not mean to say I had room mates, but when I used BitTorrent and noticed how it abused the network, I used that howto to limit its bandwidth.
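For reference, a minimal sketch of the kind of tc setup the LARTC howto describes, assuming the uplink is eth0 with about 10mbit of upstream and that BitTorrent traffic is on port 6881; the interface, rates, port, and class IDs are all illustrative, and the commands need root:

```shell
# Root HTB qdisc; unclassified traffic falls into class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20

# Parent class sized slightly under the real uplink rate,
# so the queue builds here instead of in the modem.
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit

# Low-priority class for BitTorrent: guaranteed 1mbit, may borrow up to 4mbit.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit ceil 4mbit

# Everything else: guaranteed 9mbit, may borrow up to the full link.
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 9mbit ceil 10mbit

# Steer traffic destined for the (assumed) BitTorrent port into class 1:10.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 6881 0xffff flowid 1:10
```

With borrowing (`ceil`), BitTorrent can still use the whole link when it's otherwise idle; it only gets squeezed back to its guaranteed rate when other traffic appears.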

I've seen a similar thing with a neighbor of mine in our apartment building. We're on Comcast--and the congestion stopped when he switched to DSL (He had been sharing the connection of another neighbor, who moved out.)

That is, it's like going back to pre-hub-style Ethernet, where every computer is listening for the next millisecond of no signal on the coax so that it can hopefully push its next packet on there. There is a reason why this was quickly replaced with switches once said tech became available at acceptable prices...

No, No NO! For the love of God, NO! You're completely wrong, and you have no idea what you're talking about. There is no such thing as "coax style networking", and there never has been. And the network behavior of cable broadband connectivity has nothing whatsoever to do with the fact that some cable connections use coaxial wiring.

You are probably thinking of the old 10BASE2 Ethernet standard (http://en.wikipedia.org/wiki/10BASE2), which used coaxial cable with BNC connections and T-connectors to a shared cable bus medium. Cable broadband uses the DOCSIS protocol (http://en.wikipedia.org/wiki/DOCSIS) over coaxial cable with F connectors. The cable is the only really similar thing between the two technologies, everything else is pretty different.

10BASE2, like all Ethernet technologies, is a shared-medium, PURE collision-detection protocol. The hosts share the cable segment as a broadcast medium, so that a transmission by one host will be "heard" by all the rest. Each host makes its own decisions about when it wants to transmit, independent of the rest, and then transmits when it senses that the cable is "silent". If multiple hosts start transmitting at almost exactly the same time, they will all shortly detect the "collision". They all cease transmitting, and each picks a short random-length interval to wait before trying to transmit again, unless another host that picked a shorter timeout window starts transmitting, first. Statistically, it's unlikely that two hosts will pick the same random wait timeout, so most collisions resolve quickly unless the network is particularly congested.
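A toy simulation of that random-wait idea (real Ethernet uses binary exponential backoff with fixed slot times; this sketch only shows the pick-a-random-slot, smallest-unique-pick-wins intuition described above):

```python
# Toy model of collision resolution on a shared medium: after a
# collision, each host picks a random backoff slot. The host with the
# unique smallest slot transmits first; if two hosts pick the same
# smallest slot, they collide again and everyone retries.
import random

def resolve_collision(hosts, slots=16, rng=random):
    """Returns the host that wins the medium, or None on another collision."""
    picks = {h: rng.randrange(slots) for h in hosts}
    best = min(picks.values())
    winners = [h for h, s in picks.items() if s == best]
    return winners[0] if len(winners) == 1 else None

random.seed(1)
winner = None
while winner is None:            # retry rounds until one host wins
    winner = resolve_collision(["A", "B", "C"])
print(winner)
```

Statistically, ties on the smallest slot are rare, so most collisions resolve in one or two rounds; under heavy congestion (many hosts, few slots) the retry loop runs longer, which is exactly the degradation the comment describes.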

DOCSIS uses a mixture of time-division, code-division, and collision-based contention behaviors (depending on the exact revision, too), but the impact of contention is really limited. From a bandwidth scheduling and congestion standpoint, it's nothing like 10BASE2, because the TDMA and CDMA elements of the protocol help each node see a "fair share" of throughput. Plus, modern DOCSIS supports quality-of-service tags, which (if properly implemented) are pretty much a brick wall against congestion issues.

Mostly it seems to me that the ISPs crying loudest are the ones that geared up when the net was mostly static webpages and FTP file transfers, able to handle the odd spike of traffic when someone clicked a link. But now the gear they have sitting around, which they were banking on not having to replace for the next decade or so, barring hardware failure, is being swamped by continual "spikes". And the only way they can fix that at their end is by replacing the gear ahead of schedule, playing havoc with their earnings estimates. And rather than doing that, they break out the whip, trying to force the "cattle" back into the "pen".

I don't think you have any kind of real grasp on the technical implications of terms like "swamped" or "spike" in this context. You certainly understand the metaphor, and I bet you could analogize extensively comparing electrical, water, or highway systems to the Internet, but you don't seem to know too much about actual networking beyond setting up your home LAN.

You pay for a 70Mbps connection. The ISP is saying that if you buy that service and then have the audacity to use the service you buy you're doing something wrong. Taking up 60Mbps and leaving 10Mbps for your roommate is one thing, but if the two of you are paying for 70Mbps you should get to use it.

The ISP should be required to provide the service paid for. If they throttle, they should be required to specify, say, a 70Mbps instantaneous rate and a 10Mbps sustained rate, for example. That would provide a clear description of the service.

I think you pointed out the real problem. The telcos want you to pay for the 70Mbps line, but don't want you to use it. If you cannot support users doing 70Mbps, don't sell 70Mbps. I know that building an infrastructure based on the assumption that all users will use maximum bandwidth would be costly, but then adapt your marketing practices: sell a lower sustained speed and offer a "speed on demand" service that is easy to use, so when you want/need to download the new 8GB PS3 game, you can play it before the next day.

You pay for an 'up to' 70Mbps connection. 'Up to' means exactly what it sounds like - you are never going to go above that rate. It says absolutely nothing about the minimum or average rate. Since they make no claims at all about minimum or average rate, there is no false advertising. Every consumer is well familiar with what 'up to' means. How many times do you see an ad that says 'Sale! Save up to 50%'? Does that imply that you are in fact going to save 50% on everything you buy? No, it implies that some items may be discounted by as much as 50%.

The basic counter-argument is that TCP "fairness" assumes everyone wants the same experience. As you pointed out, a true bandwidth hog doesn't care about the latency during their hogging sessions, since they plan around them, and therefore arguing that TCP is fair because it treats all packets the same is pure rubbish. If everyone (including the hog) is trying to make a VOIP call or play WoW, then sure, the system is fair, because the hog has degraded service just like everyone else. The enterprising hog simply schedules their hogging for times when they don't care about the degradation, while everyone else still suffers it.

I should point out that this sort of thing, while true, is often overstated because of poor local network configuration. When I first set up my new Vista machine a couple years back, I noticed that torrents on it would frequently interfere with internet connectivity on other networked devices in the house. I hadn't had this problem before and was curious as to the cause. I initially tried setting the bandwidth priorities by machine IP and by port, setting the desktop and specifically uTorrent's port to the lowest priority for traffic (similar to what ISPs do when they try to limit by protocol, but more accurate and without an explicit cap), but that actually made the situation worse; the torrents ran slower, and the other machines behaved even worse.

Turned out the problem was caused by the OS. Vista's TCP settings had QoS disabled, so when the router sent messages saying to slow down on the traffic, or just dropped the packets, the machine just ignored it and resent immediately, swamping the router's CPU resources used to filter and prioritize packets. The moment I turned on QoS the problem disappeared. The only network using device in my house that still has a problem is the VOIP modem, largely because QoS doesn't work quickly enough for the latency requirements of the phone, but it's not dropping calls or dropping voice anymore, it's just laggy (and capping the upload on uTorrent fixes it completely; the download doesn't need capping).

"If this doesn't scale, logically, up to the network as a whole, I'm not sure why."

Plenty of reasons why that won't scale up to the network as a whole. First and foremost, your ISP's network topology is a lot more effective for many users than the simple "star" topology most home router/switch combos give you. Beyond just the topology, the ISP uses better equipment that can cap bandwidth usage and dynamically shift priorities to maintain a minimum level of service for all users even in the presence of a very heavy user. The ISP also has much higher capacity links than what you have at home, and certainly more than the link they give you, and so even if there were a very poor topology and no switch level bandwidth management, it would be very difficult for a single user to severely diminish service for others.

I do not have any sympathy for ISPs when it comes to this issue. If they sell me broadband service and expect me to not use it, then they are supremely stupid, and retaliating against those users who actually make use of the bandwidth they are sold is just insulting. They oversold the bandwidth and they should suffer for it; blaming the users is just misguided.

I disagree. The system wasn't designed, nor sold, with torrents in mind. End points are supposed to be content consumers, not content providers. It isn't exactly the ISP's fault that those end users want the system to function in a way against which it is designed.

The ISP should redesign the system. Absolutely, without a doubt. Meanwhile those users that aren't getting what they want shouldn't necessarily be ruining it for everyone else, should they?

"It isn't exactly the ISP's fault that those end users want the system to function in a way against which it is designed."

I would agree if the ISPs were honest about how they built their network, but they continue to lie and then complain about people believing their lies. If an ISP designed its network with downloading in mind, and provides only the minimum upload capacity needed to facilitate such service, they should be very clear about that. It is not "unlimited Internet access," it is "Internet access with limited upload."

I disagree. Overselling is fine; the problem comes when they squeeze too much overselling out of what they've got.

For example, ISP A has 100gb of bandwidth and 1000 customers. They could sell each customer 0.1gb and everyone would be fine, but no customer will ever use that much bandwidth, so most of the network capacity is wasted; and since the upstream ISP sells it to you at quite a large sum, you'll find you have no customers, as the price you'd have to charge them is prohibitive.

The system wasn't designed, nor sold, with torrents in mind. End points are supposed to be content consumers, not content providers.

This is incorrect. TCP is designed so that every computer on the network is a peer. There is no fundamental difference between my computer at home and slashdot.org. The great promise of the internet is that everyone can be a content provider. The ISPs seek to destroy this notion in favor of simply creating a content distribution mechanism that they control. That is far, far worse than any "bandwidth hog" could ever be.

Companies overselling is a very popular and acceptable thing too (for them). Airlines, hotels, and movie theaters often do this, expecting no-shows and cancellations. But I expect the percentage oversold is based on historical facts for that particular day the previous year. ISPs might have been able to oversell so much in the past, but as more content moves from TV/phone/radio to the internet, the typical usage might be outstripping the previous year's usage numbers. Just my thoughts...

Yes. Your brother's room-mate was hogging the available bandwidth in their apartment.

If this doesn't scale, logically, up to the network as a whole, I'm not sure why.

It will only scale up to the network as a whole if you're overselling your bandwidth.

Now, to be completely clear - I feel overselling bandwidth is wrong.

Err... Well, if it's wrong to oversell bandwidth... Then it is wrong to create a situation where it is possible to hog bandwidth...

If I buy a 5 Mbps connection from a small ISP here in town, I expect to be able to get roughly 5 Mbps. And unless they make it very clear to me ahead of time, I'm going to expect that "unlimited" really means unlimited.

So... Yes, it is possible to hog the bandwidth. But only if the ISP oversells their bandwidth. Which means that if the ISP is being honest in its marketing and sales material, it should be impossible to hog the bandwidth.

And it has been generally accepted that the network is oversold by design and that using it in a manner that pretends that it was not oversold is hogging it.

Yes, overselling is wrong.

Pretending that it was not oversold is also wrong.

And to go one further, pretending that your price for a non-oversold network would be the same is also wrong.

Overselling isn't wrong; it is necessary for services like this. The fact is, all service providers oversell their capacity. The trick is to manage the overselling to a ratio that, on average and within a specific scope, doesn't cause bandwidth jams beyond a prescribed statistical level.

Having run an ISP, the oversell ratio is about 10:1 - 15:1 depending on who your subscribers are and their usage patterns. This means you can get 10-15 people on a data circuit that is really designed to handle 1 fully loaded client. This statistical usage only works at large scales, and as the scale increases, it may actually raise the oversubscription to 20 or 25 to one.
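The oversell arithmetic described here can be sketched in a few lines. The circuit size and plan speed below are made-up illustrative numbers, not the poster's:

```python
# Rough oversubscription arithmetic (illustrative numbers, not the poster's).
def subscribers_supported(circuit_mbps, plan_mbps, oversell_ratio):
    """How many subscribers fit on a circuit at a given oversell ratio."""
    return int(circuit_mbps / plan_mbps * oversell_ratio)

# Example: a ~45 Mbps T-3 selling 5 Mbps plans at the 10:1 and 15:1 ratios above
low = subscribers_supported(45, 5, 10)   # 90 subscribers
high = subscribers_supported(45, 5, 15)  # 135 subscribers
```

At the 20:1 or 25:1 ratios mentioned for larger scales, the same hypothetical circuit would carry 180-225 subscribers.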

I guarantee you that if everyone wanted to torrent all the time, at full speed, the internet could not handle the traffic. It wasn't designed to. It has been oversold.

BitTorrent is not a normal traffic pattern. A torrent is a congestion point on the internet, at a place where one is not expected. Most people don't need 80 gigabytes of streaming data, day in and day out. If this were DVD movies, you'd be downloading more movies than you could watch.
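A quick back-of-the-envelope check on the movie claim, assuming ~4.7 GB single-layer DVDs and ~2-hour films:

```python
# Sanity check: 80 GB/day expressed as DVD movies (assumed 4.7 GB each, ~2 hours).
GB_PER_DVD = 4.7
HOURS_PER_MOVIE = 2

daily_gb = 80
dvds_per_day = daily_gb / GB_PER_DVD             # about 17 DVDs a day
hours_of_video = dvds_per_day * HOURS_PER_MOVIE  # about 34 hours -- more than a day holds
```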

I don't have any complaints for ISPs that throttle torrents and take other measures against "high usage" users who are file sharing. I'm not against file sharing; I'm against idiots who cause congestion because they don't know how to configure a BitTorrent client to be "reasonable".

I don't have any complaints for ISPs that throttle Torrents and take other measures against "high usage" users, who are file sharing.

My only issue with throttling is that there are better ways to manage your network than arbitrarily lowering someone's bandwidth. My office is located out in the sticks and we can only get a T-1. I have to share 1.5 Mbit/s with 60 employees, including time-critical services such as VoIP and VPNs. I set up a priority list that looks like this:
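The poster's actual list was cut off. A hypothetical ordering for a shared link like that might look as follows; these traffic classes and their order are assumptions for illustration, not the original list:

```python
# Hypothetical traffic classes for a shared T-1, highest priority first.
# These are assumptions for illustration, not the poster's actual list.
PRIORITY = [
    "voip",   # latency-sensitive; starving it makes calls unusable
    "vpn",    # business-critical interactive traffic
    "dns",    # tiny, but everything stalls without it
    "web",    # ordinary browsing
    "email",  # tolerant of delay
    "bulk",   # updates, backups, everything else
]

def served_first(a, b):
    """Lower index wins: return the class that gets bandwidth first."""
    return a if PRIORITY.index(a) < PRIORITY.index(b) else b
```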

The ISP already limits my bandwidth. Say you purchase 'unlimited' 5mbps/768kbps service.
You have a 5mbps download limit and a 768kbps upload limit. If they can't support you using that at full blast, then they should lower the limit. Instead they punish you for using the allocation they gave you.
It would be like imposing a limit on your roommate, then kicking him out for using the limit every day.
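A cap like that is typically enforced with something like a token bucket: tokens accrue at the advertised rate and each packet spends tokens. A minimal sketch of the idea, not any vendor's actual implementation:

```python
# Minimal token-bucket sketch of an ISP-side rate cap (illustrative only).
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # refill rate: the advertised speed
        self.capacity = burst_bits  # how big a burst is tolerated
        self.tokens = burst_bits    # start full
        self.last = 0.0

    def allow(self, packet_bits, now):
        # Refill for elapsed time, capped at the burst size, then spend.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

# A 768 kbps upload cap that tolerates a one-packet (1500-byte) burst
upload_cap = TokenBucket(rate_bps=768_000, burst_bits=1500 * 8)
```

Enforced this way, a subscriber simply cannot exceed 768 kbps sustained, which is the poster's point: the cap itself is the limit, so using it fully shouldn't be punishable.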

The correct metaphor would be if each roommate paid for a separate internet connection, and when one roommate ramped up bittorrent, the other two's connections dropped. THAT would prove the existence of a "Bandwidth Hog"

The fact is that what most telcos call hogs are simply people who overall and on average download more than others. Blaming them for network congestion is actually an admission that telcos are uncomfortable with the 'all you can eat' broadband schemes that they themselves introduced on the market to get people to subscribe. In other words, the marketing push to get people to subscribe to broadband worked, but now the telcos see a missed opportunity at price discrimination...

It's nice of him to declare that without evidence. Now I know it to be true.

I'm not saying he's wrong... quite possibly he's right, but seriously - how does someone's blog entry that doesn't provide one single data point to back up the claim make it to the front page?

I'm not saying he's wrong... quite possibly he's right, but seriously - how does someone's blog entry that doesn't provide one single data point to back up the claim make it to the front page?

The important thing that he's doing is trying to shift the burden of proof back onto the ISPs and telcos. They just declared that some people are bandwidth hogs and terminated their connection. They didn't give the public any proof that they were ruining the internet experience for anyone else... nor did anyone come forward after the purge and say, "Gee, my internet sure is fast now that the bandwidth hogs are disconnected!"

So he calls for proof since he hasn't seen any. He has to say that there are no bandwidth hogs in order to get a response from the telcos. Saying someone might be wrong doesn't have the same impact as calling someone a liar. Yes, he's basing this on an assumption, but it's no different from everyone assuming there were individuals out there ruining the experience. All of us just let the telcos terminate the service of whoever they wanted and then we moved on with our lives.

I welcome his opposing viewpoint and challenge to "because we said so." They can release anonymous usage data without harming anyone so why not open it up to a request?

What he states in that quote is that telcos call people hogs when they maximize their utilization of the connection they were sold. The telcos blame them for causing network congestion; ergo, the telcos believe they cannot provide what they sold to their customers.

The telco T claims they can provide bandwidth B to the customer C. The average customer Q never uses what they've been sold, while the alleged hog H does, all the time.

Lately I've had to deal with this problem. Our solution was rather simple. We use NTOP on an Ubuntu box at the internal switch. We replicate all the traffic coming into that switch to a port that the NTOP box listens on.

It may not be a perfect solution, but it can easily let us know who the top talkers are and give us a historical look at what they are doing.

From that report, we look for anyone uploading more than they download. We also look for people who upload/download a consistent amount every hour. If you see someone doing 80gb in traffic each day with 60gb uploaded, you probably have a file sharer. When you see the 24-hour reports for the user and see 2~3gb every hour on upload, you *know* you have a file sharer.
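The heuristic described above — more upload than download, or a steady large hourly upload — is easy to express. The thresholds below are assumptions for illustration, not the poster's exact rules:

```python
# Flag likely seeders from hourly transfer logs (thresholds are assumptions).
def looks_like_seeder(hourly_up_gb, hourly_down_gb):
    total_up, total_down = sum(hourly_up_gb), sum(hourly_down_gb)
    uploads_more = total_up > total_down                    # net uploader
    steady_upload = all(2 <= h <= 3 for h in hourly_up_gb)  # ~2-3 GB every hour
    return uploads_more or steady_upload

# 24 hours at a constant 2.5 GB up / 0.8 GB down trips both tests
flagged = looks_like_seeder([2.5] * 24, [0.8] * 24)
```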

After that, it's as simple as going to the DHCP server and locking their MAC address to an IP. Then we drop all that traffic (access-list extended is wonderful) to another Ubuntu box. That box has a web page explaining what we saw, why the user is banned, and the steps they need to take to get back online.

Most users are very apologetic. We help them to set up upload/download limits on their bittorrent client and then we put them back online.

This upsets the customer. I know it sounds completely back-asswards, but most people would rather be blocked for an hour, told why they are blocked and told to change, and then resume their normal speeds, as opposed to getting NO warning, having their speeds decreased below what they are paying for, and being left alone and angry to the point where they will go somewhere else.

Is there really a problem with allowing your users to actually use their connection? By my rough calculations, 2-3GB/hr is only about 550-850KB/s of upload. I really don't understand why you can't handle that unless you're massively overselling.
I would be a lot more sympathetic if we were talking about users maxing out fiber connections or something higher speed.
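Since "gb" in the parent comment could mean gigabytes or gigabits, here is the conversion both ways (decimal units assumed; a rough sketch):

```python
# Convert an hourly transfer volume to a sustained rate in kilobytes/second.
def gbit_per_hr_to_KBps(gbit):
    return gbit * 1e9 / 8 / 3600 / 1000   # gigabits/hour -> KB/s

def gbyte_per_hr_to_KBps(gbyte):
    return gbyte * 1e9 / 3600 / 1000      # gigabytes/hour -> KB/s

as_bits = gbit_per_hr_to_KBps(2)    # ~69 KB/s sustained
as_bytes = gbyte_per_hr_to_KBps(2)  # ~556 KB/s sustained
```

Either way, the sustained rate is modest for a network that isn't heavily oversold.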

Well, another small ISP here. Couple of things. First off, customers are NOT paying for what's called a CIR. So, of course the service is "oversold". Every service provider industry is "oversold". Landlines, Cell Phones, Car Mechanics, TV Repairmen, Satellite TV, even Tech Support. You think there's one guy in India sitting there waiting for you to call about your Dell? No, of course not. By definition, service providers HAVE to oversell to survive.

Secondly, it's really not about just one person doing something like this as a small ISP. Yes, one person doing such can have a seriously negative impact on the rest of the users, but it's when you get multiple people doing it that really compounds the problem. One torrent user generally isn't too much of a problem. Get two or three with high connection limits, and up/down set to unlimited, and you have a serious problem on your hands.

Finally, equipment is expensive, commercial connections are expensive. If you don't believe me, go price out some comparable commercial internet connections from Cogent, Level3, any of the baby bells (Verizon, Qwest, AT&T/Cingular, etc), and you'll see that you'll easily be paying 10x more than what a cable/FiOS user is going to pay for a residential connection. There's a reason, and it's up in the first point.

You need a TCP/IP stack with queueing. You give each IP address a fair chance to transfer and/or receive some data, and as always you drop any traffic for which you have no time... but you drop from the bottoms of queues first, and the queues are per-IP (or per-subscriber, which is harder to manage unless you are properly subnetted... in which case they can be per-network, for which computation is cheap). This should be the only kind of QoS necessary to preserve network capacity for all users and prevent any single subscriber from starving the rest.
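The per-IP queueing idea can be sketched as a toy round-robin scheduler: each address gets one packet per turn, so a flooding subscriber only competes with itself. This is a sketch of the fairness principle, not a real kernel implementation:

```python
from collections import OrderedDict, deque

# Toy round-robin scheduler over per-IP queues: each IP gets one packet per
# round, so a heavy sender can't crowd out light users (illustrative only).
class FairScheduler:
    def __init__(self):
        self.queues = OrderedDict()  # ip -> queued packets

    def enqueue(self, ip, packet):
        self.queues.setdefault(ip, deque()).append(packet)

    def drain(self, budget):
        """Serve up to `budget` packets, one per IP per round."""
        served = []
        while budget > 0 and any(self.queues.values()):
            for q in self.queues.values():
                if q and budget > 0:
                    served.append(q.popleft())
                    budget -= 1
        return served
```

With one "hog" holding ten queued packets and two light users holding one each, draining four packets serves hog, light, light, hog — the light users are never starved.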

These ISPs sold what they ain't got. Sold more bandwidth than they can sustain, and when someone actually takes delivery of what was promised, these telcos bellyache, "we never thought you will ask for all we sold you! whachamagontodoo?". Eventually they will introduce billing by the gigabyte, and by pipe size. Like the electric utilities charge you by the kWh and limit the amperage of your connection.

Then they will introduce the "friends and family" of ISP, some downloads and some sites will be "unmetered", and the sources will be the friends and family of the ISP. You know? the "partners" who provide "new and exciting" products and content to their "valued customers". Net neutrality will go down the tubes. ha ha.

Google needs the net to be open and neutral for it to freely access and index content. When the dot-com bubble burst, Google bought tons and tons of bandwidth, the dark fibers, the unlit strands of fiber optic lines. If the net fragments, I expect Google to step in, light up these strands, and go head to head with the ISPs providing metro-level WiFi. Since it is not a government project, it could not be sabotaged the way Verizon and AT&T torpedoed municipal high-speed networks.

These ISPs sold what they ain't got. Sold more bandwidth than they can sustain, and when someone actually takes delivery of what was promised, these telcos bellyache, "we never thought you will ask for all we sold you! whachamagontodoo?"

Basically, they want to sell a product with high speed - but not continual use. A product where more of the bandwidth is used - or dedicated, not oversubscribed - is vastly more expensive, and is what they sell to businesses. To fix this problem, they should start with metered pricing.

I also go through my client list and drop those that consume more of my time and resources in favour of the easier clients who ultimately improve my business at a lesser cost. What's wrong with that? My company, my rules. "We reserve the right to refuse service to anyone" -- it's in every restaurant. Why would you expect a business to serve you? Why would you consider it a right?

I also go through my client list and drop those that consume more of my time and resources in favour of the easier clients who ultimately improve my business at a lesser cost. What's wrong with that? My company, my rules. "We reserve the right to refuse service to anyone" -- it's in every restaurant. Why would you expect a business to serve you? Why would you consider it a right?

Your company's service isn't based on federal subsidies meant to provide internet access to all citizens.

Because the net has become as integral to modern life as water and electricity?

With a restaurant, one can go somewhere else to get a ready meal, or one can make one's own from parts sold almost anywhere. The net, however, is not something one can make at home if needed, and rarely does one find more than 2 suppliers (or even that many) in an area...

I also go through my client list and drop those that consume more of my time and resources in favour of the easier clients who ultimately improve my business at a lesser cost. What's wrong with that? My company, my rules. "We reserve the right to refuse service to anyone" -- it's in every restaurant. Why would you expect a business to serve you? Why would you consider it a right?

Let's say you sell widgets.

You have 5 people come to you, each one wants to buy 1 widget. And another guy shows up and wants to buy 5 widgets.

You only have 5 widgets in stock, you need 10, but you really want their money. So you sell each of those people a coupon for their widgets, and tell them to pick it up at your warehouse. You figure they won't all run over there right now, and you'll probably have time to get a couple more widgets in stock before anybody notices.

Of course you don't tell your customers this. You don't tell them "I only have 5 right now, you'll have to wait 'til the next shipment." You just take their money and leave them with the impression that the widget is there, waiting for them, available for pickup whenever they want.

So all of them show up at the warehouse about 5 minutes later. All of them want their widgets now. But you don't have enough widgets to go around. So you call the guy who bought 5 widgets a "widget hog", cancel his order, and throw up a hastily-made sign that says "limit 1 per customer."

Legal? Yeah, I guess... Assuming you refund his money.

Right? Not so much. You should have clearly explained that you only had 5 widgets in stock, or that the coupon couldn't be redeemed for a week, or that there was a limit of 1 per customer, or something. You misrepresented what you were selling to your customers.

The problem I have is with an ISP selling something called "unlimited" when they know perfectly well that they have neither the ability nor the intention of delivering anything vaguely resembling unlimited service.

And while I can assume that they don't actually mean unlimited literally, I don't generally have any way to know what they do mean by unlimited. Most of the time the limits or caps are not documented, or are not predictably enforced, or are not made available to customers.

I have absolutely no problem paying for the level of quality that I want.

I do have a problem paying for the level of quality that I want, and then finding out that the ISP has a different definition of "quality."

"We reserve the right to refuse service to anyone" -- it's in every restaurant.

Actually, only sort of. If there's a pattern to who you refuse service to, it can get you into big trouble. For instance, if you consistently refuse service to black people, you are in violation of a number of civil rights laws.

If you bought a month of internet use at a certain speed, you can't be blamed if you use it, even if you use all of it. If doing that causes problems for other customers or the ISP, it is the ISP's fault for selling more than what they have, not yours.

First of all, I am, and always will be, a bandwidth hog. Why? Because I'm better at using the internet than everyone around me. That means I find more things, and bigger things, to download. If someone banned P2P, I'd still have more streamed video than anyone I know. If they banned that too, I'd still download more images. If they banned that, I'd still have more web traffic, email, IM, etc etc etc. I will always be a 'hog' in any environment. I was even told that I was the "#1 abuser" of the "Unlimited" service when I was on dial-up in a small town, and they tried to charge me an extra $300 that month. As another ISP had just come into town, I switched, obviously.

I don't pay for the top tier of residential service to just let it sit idle. I'm going to -use- it.

I have absolutely no sympathy for people that sell me something and then get upset when I actually use it within the original limitations. I have only a small amount of sympathy for people that sell me something and I use it beyond their arbitrary limitations, even if I agreed to them.

Why?

America has -crap- for internet compared to other developed countries. We are quickly falling behind the rest of the world in terms of internet bandwidth. This is purely from greed and laziness on the part of the ISP. They refuse to upgrade and try to prevent competition at the same time. Sprint even has the nerve to advertise Pure and claim that it's faster than Cable internet, despite being 1/10th of the speed.

I work for a large ISP, and for residential accounts, we don't particularly care if you're a "bandwidth hog," as long as you're not affecting other customers around you. If we see that one person is causing significant congestion, then that's a problem that we'll address (but only when it happens repeatedly and consistently). Most of the time the customer is either unaware, has an open router, or has a virus/worm/trojan.

During a Slashdotting, the problem is rarely network-related (aside from people who use a cheap host and have very low artificial bandwidth limitations, or are hosting their site on a low-end cable connection).

More often than not, the database goes down. MySQL is especially prone to just dying when put under any significant workload. That's why you'll often see error messages saying that the web front end can't connect to the database. You can still get to the site because the network can handle the volume.

If those numbers are correct (5% taking up 95% of bandwidth), then kicking out the top few percent of users (of the entire population, not necessarily of the current customer base) seems exceptionally good for one's bottom line. After that, there's no point removing the new "heavy users" since you've already removed most of your traffic, and your existing infrastructure can more than handle the remaining traffic, so you're better off getting the revenue.
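The bottom-line arithmetic behind that, under the quoted 5%/95% split (illustrative only):

```python
# If 5% of subscribers generate 95% of traffic, dropping them keeps 95% of the
# revenue while shedding 95% of the load (illustrative arithmetic only).
heavy_user_share = 0.05
heavy_traffic_share = 0.95

traffic_remaining = 1 - heavy_traffic_share  # 5% of traffic left to carry
revenue_remaining = 1 - heavy_user_share     # 95% of subscribers still paying
revenue_per_unit_traffic = revenue_remaining / traffic_remaining  # ~19x better
```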

Yes, actually, false advertising is a problem. If an ISP tells me I can make unlimited use of my 10Mbps connection, I expect to be able to make unlimited use of it -- including sustaining 10Mbps or something reasonably close all day and all night. If such a level of service is impossible for an ISP to provide and remain profitable, why the hell are they advertising these plans?

If they are lying to consumers about the level of service they can provide, that is false advertising.

Comcast isn't/wasn't lying by saying I have unlimited bandwidth. It put in a new rule that stated 250GB was my limit. Ok, no problem; I've not hit the 200GB mark in quite a while so it's not usually an issue for me. NOW the fuckheads at Comcast also want to throttle me if I use 100% of my line speed for 15 minutes.

What the fuck? Can I actually download at full speed now? Well sure I can, if it's under X amount. Anything over it and the brakes go on. I totally have no issue with a drop in line speed if my connection were actually congesting the network.

I don't know whether bandwidth hogging is a problem, but it is possible that one heavy user isn't impacting everyone while the top 5% together are sucking up lots of bandwidth. So either Benoit is raising a red herring or the Slashdot précis is defective.

I do think that if a telco is going to advertise unlimited bandwidth, they are required to provide unlimited bandwidth. It's a stupid ad to make anyhow. They should say they will support up to X, where X really is what they will support.

Billing is a very large cost. In the telco world, the cost of billing passed the cost of transmission about two decades ago. That's even more true for Internet transmission at the retail level.
With proposed "economic" solutions, you have to factor in the costs of billing. Those costs apply both to the provider and the customer. If customers have to meter to control their expenses, that's a cost to the customer in attention, and drives business elsewhere.