There is a falling out between governments and ISPs on the one hand and consumer groups and companies like YouTube and Netflix on the other. Lately, more punitive measures affecting these companies and consumers have emerged, including increased throttling, more per-usage billing, and lower data caps. The internet as a whole is struggling to find a self-sustaining business model that supports the rising speed and bandwidth requirements of consumers and online media purveyors. The conflict boils down to who should pay, and how much.

Having worked in networking, I've had to deal with the realities of network equipment, dropped packets, peering...

1. Ban usage charges. We should not have usage charges: ISPs can already control the usage of heavy users by throttling them, so per-usage billing is unnecessary. Ban it.

2. Allow throttling. Maybe you get the first X GB throttle-free, after which point the ISP can throttle you down. Companies can experiment with various models here. I should note... this is per-user throttling, not throttling based on service type.

This will cause ISPs to compete on the quality of their networks. The better managed the network, the more customers it will attract. The ISPs will have to pay for their network equipment. Well, users will pay in the end... but it keeps the money flowing in the right direction. An ISP that doesn't upgrade will have a poorer network, and users will leave it.
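The "first X GB throttle-free" model above can be sketched as a simple rate rule. All numbers here (quota, speeds) are invented for illustration, not any real ISP's policy:

```python
# Sketch of per-user throttling: same price for everyone, reduced speed
# past a quota. Quota and rates below are illustrative assumptions.
FULL_SPEED_MBPS = 100
THROTTLED_MBPS = 5
FREE_QUOTA_GB = 50

def allowed_rate_mbps(monthly_usage_gb: float) -> float:
    """Per-user throttling, independent of what service the traffic is for."""
    if monthly_usage_gb <= FREE_QUOTA_GB:
        return FULL_SPEED_MBPS
    return THROTTLED_MBPS

print(allowed_rate_mbps(10))   # light user: full speed
print(allowed_rate_mbps(80))   # heavy user: throttled, but no extra charge
```

The key property is that the rule looks only at volume, never at the service type, which is what distinguishes it from the content-based throttling the thread objects to.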

Why, in economic terms, should you ban usage charges? I understand why you may not like them, but what is the rationale for banning them? It is really not at all clear why someone who downloads dozens of movies in a month should pay the same as someone who downloads a few emails, or why someone who listens non-stop to radio over the net should pay the same as someone who only uses email.

And the proposal to ban usage charges would take what form? A matter of law, or of telecoms regulation?

Why don't we ban usage charging for beer? That way, people who only have one a couple of times a week would subsidize those who really, really enjoy their drinking.

In 'economic terms', it is very difficult to price data accurately. The cost is certainly not a flat rate per GB. I could go into more detail here... but basically, it's very difficult to price. It would take some overly complex formula involving time-of-day usage, destination of the packet (peering), current congestion, infrastructure pricing... all to create what is an artificial price. There is really no marginal cost to data once the infrastructure is in place, with the exception of transit charges.
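A toy calculation, with invented numbers, shows why a per-GB price is mostly artificial: nearly all of the cost is fixed infrastructure, so the "cost per GB" is just amortization that shrinks with volume, plus a small real transit component:

```python
# Toy cost model (all numbers invented): once capacity is built, the
# marginal cost of a GB is close to zero except for transit/peering.
MONTHLY_INFRASTRUCTURE_COST = 1_000_000.0  # sunk: equipment, staff, upkeep
TRANSIT_COST_PER_GB = 0.01                 # the one real per-GB cost

def average_cost_per_gb(total_gb_delivered: float) -> float:
    """Amortized fixed cost plus transit; not a true marginal cost."""
    return MONTHLY_INFRASTRUCTURE_COST / total_gb_delivered + TRANSIT_COST_PER_GB

# The apparent "price per GB" collapses as usage grows:
for gb in (1e6, 1e7, 1e8):
    print(f"{gb:.0e} GB -> ${average_cost_per_gb(gb):.3f}/GB")
```

Any per-GB price an ISP quotes is therefore a choice about how to allocate fixed costs, not a measurement of what a GB costs.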

But more importantly, it is too tempting for a natural monopoly like an ISP to abuse its power and collect overage charges from users well beyond what the data actually costs.

ISPs can also abuse their power to prevent competition in VoIP, video services, and so on, since many ISPs are involved in many markets. This is what happened in Canada: Netflix set up shop, and suddenly Rogers, Bell, and the rest decided to start charging for usage above 25 GB, basically making Netflix unusable.

Lastly, I don't believe regulators should play numbers games or get into the details when regulating. They should make big blanket regulations and enforce them. That's my own political view of regulation.
And I don't want government regulators digging into pricing details, trying to figure out whether ISPs are abusing their monopoly or pricing data correctly...

Given that an ISP can throttle traffic... just make one big blanket regulation, ban usage charges, and avoid the whole mess.

Back when I first got internet access, the web was at least partially sane: a lot more content was static, and a lot more content was cached in proxy servers. An ISP could provide high-speed connections from the proxy cache to its clients and use the bandwidth provided by its upstream supplier much more efficiently. For example, an ISP that pays $1 per MiB to get data from upstream into its proxy cache could charge maybe $0.10 per MiB to get data from the cache to clients, on the basis that enough clients would download the same content to recover the cost (and make a profit).
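The break-even arithmetic behind those example prices is simple: one upstream fetch is paid for by the clients who later hit the cache, so the ISP needs enough cache hits per object to cover the upstream cost:

```python
# Break-even arithmetic for the proxy-cache pricing above:
# the ISP pays $1/MiB upstream but charges clients $0.10/MiB,
# betting that enough clients fetch the same cached object.
UPSTREAM_COST_PER_MIB = 1.00
CLIENT_PRICE_PER_MIB = 0.10

def profit_per_mib(clients_served_from_cache: int) -> float:
    """One upstream fetch of a MiB, served to N clients from the cache."""
    return clients_served_from_cache * CLIENT_PRICE_PER_MIB - UPSTREAM_COST_PER_MIB

print(profit_per_mib(5))    # too few hits: ISP loses money
print(profit_per_mib(10))   # break-even at 10 clients per object
print(profit_per_mib(50))   # popular static content is profitable
```

Which is exactly why uncacheable dynamic content breaks the model: every request becomes its own $1 upstream fetch sold for $0.10.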

As time passed, things got more stupid: streaming content, and idiotic web-monkeys abusing dynamic web pages for little reason instead of trying their best to make sure their content could be cached. They've mostly made proxy caches futile, and probably caused "upstream" bandwidth requirements across the entire internet infrastructure to increase by several orders of magnitude for no sane reason.

It doesn't help that the underlying protocols (e.g. HTTP, FTP) were very badly designed to begin with. For example, the client could/should be able to say "I've got version 0x12345678 of www.example.com/foo/bar.html" and get a response that is either "that's the most recent version" or "here's the new version". Instead, FTP has always been virtually impossible to cache effectively, and HTTP has relied on stupid crap like expiry times (and often only for the HTML pages, not for images, CSS, etc.).
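A sketch of the version-check exchange described above, with all names invented. (For what it's worth, HTTP/1.1's ETag / If-None-Match headers do work along similar lines; the complaint is about how unevenly such revalidation is applied in practice.)

```python
# Sketch of "I've got version X" revalidation (all names invented).
# The client sends the version tag it already has; the server replies
# either "unchanged" (no transfer) or with the new content.
import hashlib

def version_of(content):
    """Short content hash standing in for the '0x12345678' version tag."""
    return hashlib.sha256(content).hexdigest()[:8]

class Server:
    def __init__(self, content):
        self.content = content

    def fetch(self, client_version=None):
        if client_version == version_of(self.content):
            return ("unchanged", None)          # nothing to transfer
        return ("new-version", self.content)    # full transfer only when needed

server = Server(b"<html>hello</html>")
status, body = server.fetch()                   # first fetch: full download
status2, _ = server.fetch(version_of(body))     # revalidation: nothing sent
print(status, status2)                          # new-version unchanged
```

Every intermediate cache in a hierarchy could answer the same question on the origin's behalf, which is what makes the scheme compose into a tree of caches.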

Basically, what I'm saying is that the entire web needs to be redesigned, from the underlying protocols up to the content itself, and should be treated as a hierarchical tree of caches rather than "dumb" pipes between content providers and end users.

The pricing model should reflect this. Bandwidth charges should depend on how often the content changes (regardless of whether the ISP or backbone actually has the most recent version cached), so that content that can't be cached (e.g. VoIP) attracts a relatively high bandwidth charge while content that almost never changes is almost free.
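A toy version of that pricing rule, with invented constants, just to make the shape concrete: price decays with how long the content stays valid, between a high rate for live traffic and a floor for effectively static content:

```python
# Toy change-frequency pricing (all constants invented): live, uncacheable
# traffic like VoIP pays the most per GB; near-static content pays a floor.
def price_per_gb(seconds_between_changes: float) -> float:
    UNCACHEABLE_RATE = 0.10   # $/GB for traffic that can never be cached
    FLOOR = 0.001             # $/GB for effectively static content
    hours_cacheable = seconds_between_changes / 3600.0
    return max(FLOOR, UNCACHEABLE_RATE / (1.0 + hours_cacheable))

print(price_per_gb(0))                # VoIP: changes constantly, top rate
print(price_per_gb(365 * 24 * 3600))  # yearly-updated archive: floor price
```

The particular decay curve is arbitrary; the point is only that the price is a function of change frequency, not of raw bytes moved.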