Getting the Twain to Meet

I was riding around with Scott Bradner the other day when he uttered a piece of wisdom that brought great clarity to the black-hole argument-sink that network neutrality has become — and which had been depressing me for so long that I had all but given up trying to get past it.

He said the argument had become a religious one, in the sense that the opposing sides hold very different beliefs. One line in particular stood out: the carriers believe the Internet doesn't work.

That is, the carriers—phone companies in particular—are in the business of billing for uses. They are also committed to such virtues as "carrier-grade" service and five or six nines of reliability. The Internet protocol wasn't built for billing, or for those other concerns. Nor were (as far as I know) any of the other protocols in the Net's protocol suite. The geeks behind many of those protocols believe in "best effort" delivery and "rough consensus and running code". So it's no wonder the two groups talk past each other.

Meanwhile, as dialog fails, carriers continue to build out "last mile" connections that serve one virtue users like -- downstream speed -- while sacrificing others that most users don't miss but whose absence forecloses many future uses. This sleight-of-service works by misdirection: drawing attention to one thing while something quite different happens, unnoticed, in plain sight.

We were recently offered a trial of Optus' new Supersonic Broadband service, which uses the DOCSIS 3.0 cable modem standard to boost the peak speed of Optus cable services in parts of Sydney, Melbourne and Brisbane from around 20Mbps to 100Mbps (Telstra offers a similar service, called BigPond Ultimate Cable, in parts of Melbourne).

So they tested it. The summary paragraphs:

Overall, the faster service was responsive and snappy, with many pages loading quickly and streaming video starting almost immediately. We were able, for example, to watch 1080p YouTube videos at full-screen resolution with minimal buffering, and jumping to unbuffered parts of a video took less than a second before buffering completed and playback resumed.

So it handles well. But how fast does it go?

Using a Speedtest.net server from our test site in Melbourne against an Optus-hosted server also located in Melbourne, benchmarking repeatedly clocked up speeds of up to 101Mbps downstream and around 1.7Mbps upstream, confirming the service is performing as expected (unless, that is, you expected faster upstream performance).

Note the dismissive parenthesis around upstream performance. (These are not the droids you are looking for.) Likewise, the rest of the piece ignored the extreme asymmetry of the test results, and the consequences, focusing instead on other failings. Fortunately, the first three commenters all noted the problem. Wrote one,

Interesting that the technology has a downstream:upstream ratio of about 3:1 or 4:1, but the 100Mbps services offered by Optus and Telstra are both deliberately strangled to a maximum of 2 Mbps upstream - a ratio of 50:1. The services are still aimed at passive entertainment, not business or even SMB users, who send a greater proportion of their traffic outbound.

Exactly. Note that DOCSIS is a cable standard. It's for networks that serve TV first and Internet second (or third, after telephone service) over coaxial cables to homes. These companies have always been in the entertainment delivery business, not the all-purpose network business. They're couch potato farmers. Many cable companies do have business customers, but most of those don't work in homes. So maybe we can excuse cable companies for caring less about business and more about passive consumption. But we shouldn't let ZDnet/CNET off the hook.

Look up CNET and Cloud on Google and you get more than 1,500,000 results. Look up ZDnet and Cloud and you get more than 1,300,000 results. It matters hugely that many cloud computing services are hobbled by strangled upstream speeds. Want to store data in Amazon S3? Don't even bother if you're only getting 1Mbps of upstream service. How about computing remotely with Amazon EC2? Not if you need to move a lot of data between your local machine and the virtual ones in Amazon's cloud. How about future services, such as remote video editing and rendering in the cloud? Not even worth thinking about if your upstream speeds are throttled down that far.
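The arithmetic here is brutal, and easy to check. A quick back-of-the-envelope sketch, using the review's measured figures (1.7 Mbps up, roughly 100 Mbps down) as assumptions:

```python
# Back-of-the-envelope: how long to move data over an asymmetric link?
# Figures assumed from the review above: ~1.7 Mbps measured upstream,
# ~100 Mbps advertised downstream.

def transfer_hours(size_gb, link_mbps):
    """Hours to move size_gb (decimal) gigabytes over a link_mbps link."""
    bits = size_gb * 8 * 1000**3          # gigabytes to bits
    seconds = bits / (link_mbps * 1e6)    # megabits/s to bits/s
    return seconds / 3600

# Pushing 50 GB of raw video to a cloud store like Amazon S3:
up = transfer_hours(50, 1.7)     # at the measured upstream rate
down = transfer_hours(50, 100)   # at the advertised downstream rate
print(f"upstream: {up:.1f} hours, downstream: {down:.1f} hours")
```

The same 50 GB that downloads in about an hour takes the better part of three days to upload. That's the difference between a cloud service being usable and being a non-starter.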

Now think about this for a minute. What's the least compromised video you can show on your new 1080p flat screen? Is it--

1. HD channels from cable or satellite?

2. What you can shoot yourself with a new 1080p camcorder?

The answer is 2. That's because what you see on cable and satellite is compressed to the max, so you get plaid skies and other "artifacts". The carriers compete with each other by squeezing as many channels as they can into one data path, and the result is many channels that are HD but ugly. You don't have to do that. Your camcorder has some compression artifacts, but nothing like the highly lossy compression the carriers feed you. (Verizon's fiber-based FiOS is an exception, but it's still compressed to some degree, and at the mercy of sources that also compress the video.)
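To put rough numbers on that claim, here's a sketch comparing compression ratios. The bitrates are assumptions on my part -- typical ballpark figures for an MPEG-2 cable HD channel and a consumer AVCHD camcorder -- not measurements from any particular service:

```python
# Rough compression ratios for 1080p/30 video, 8-bit 4:2:0 sampling
# (12 bits per pixel). Delivered bitrates below are assumed typical
# values, not measurements of any specific carrier or camera.

RAW_MBPS = 1920 * 1080 * 30 * 12 / 1e6   # ~746 Mbps uncompressed

for source, mbps in [("cable HD channel (MPEG-2, ~10 Mbps)", 10),
                     ("consumer camcorder (AVCHD, ~24 Mbps)", 24)]:
    print(f"{source}: roughly {RAW_MBPS / mbps:.0f}:1 compression")
```

On those assumptions the cable channel is squeezed more than twice as hard as the camcorder's own recording, which is where the plaid skies come from.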

Now, what happens when more of us start producing videos with higher resolutions—say, with a camera from Red? Check out the specs. (And check out how much we've covered Linux + movies here in Linux Journal.) Within a few short years many of us will be in the position to shoot and edit cinema-quality video, if we aren't already. Whether or not we do our editing and rendering in some cloud (surely Linux-based, regardless of our client devices), widespread video shooting and production will turn more and more data paths into two-way streets. Earth to carriers: there is business here.

But so far most carriers are clueless, even as they do contribute plenty to the advancement of networking technology and deployment. (Credit where due, and they've never gotten much from net-heads like me.) For one mind-blowing example of how lame carrier thinking can get, dig Operator Giants Mull Creation of New OS, in Mobile Business Briefing. It begins,

French newspaper Le Figaro has reported that Stephane Richard... chief executive officer of France Telecom-Orange, has invited the heads of Deutsche Telekom, Telefonica and Vodafone to discuss the possible creation of a common platform for mobile devices. The talks, which are scheduled to take place 8 October in Paris, are motivated by a view that Apple's iOS and Google's Android operating systems have become a "Trojan horse" for these companies to establish their own relationships with mobile customers, reducing the significance of the operators in the value chain. While operators globally have moved away from a closed ecosystem to support products and services from Apple, Google and other companies from the computing and Internet industries, these new partners have been gradually increasing their influence in the mobile space, at the expense of operators and their traditional ownership of the relationship with subscribers. Due to the early state of the talks, it has not been decided what form the alliance will take, with options mooted including the formation of a joint venture or creation of a common apps development unit. With the four operators having access to a subscriber base of around one billion, they will certainly have the buying power to attract the support of vendors, who will benefit from the ability to serve a large customer base using a common device platform.

The value chain these carriers want to preserve (or toward which they yearn nostalgically) is the one that was wrapped around users' and application developers' necks before Apple and Google blew it to pieces, adding many $billions in new value to the whole industry—and, more importantly, to other industries this one's infrastructure and services support. (Did any of us like having our relationships "owned" by anybody, least of all phone companies?) The idea of creating a new OS platform (read: user and developer trap) at this point in history is insane.

To be fair, the piece goes on to say that Android is one option being considered. But even there the goal is to do with Android what phone makers and carriers together did with Linux before Android came along: bury its open virtues so deep in closed proprietary designs that it hardly mattered what the OS was. In other words, they put billing ahead of every other interest. And they clearly don't believe in the Net. Especially mobile carriers, which seem to consider "bill shock" for customers using 3G and 4G data devices outside national boundaries a bonus billing opportunity.

One good thing about the ongoing Australian experience is that building out fiber capacity across the country is widely understood as an essential public good that can support far more private wealth creation than would be produced just by leaving it all up to phone and cable operators. The topic is so hot, in fact, that it played a key role in deciding the recent election. The Net should be so lucky here in the U.S.

So, what can we do here in America, Land of Opportunism? Forget, if you can, arguments about network neutrality for now. Think about the ideal end state for what we now call the Internet. Think about that end state the way we currently regard the electric grid, roads and water systems: as familiar capacities we take for granted. How do we get there? The answer will have to involve the carriers, unless we want to start from zero. And it can't steam-roll the net-heads who want to keep the Net as open and free (as in freedom, not beer) as possible. It will have to pull in the best of both sides, because we need both sides, even if they don't agree with each other or believe the same things. How do we do that?

I used to think I knew the answers. Now I'm mostly trying to stay optimistic about the prospects.

Comments

This is why something like WiMax on open spectrum is important. It would be a huge kick in the pants to the wire nazis. It would completely solve the last mile issue.

My dream business is WiMAX-like wireless "data-only" Internet access, complete with IPv6 and symmetric bandwidth. We just need the protocol to be open, and the spectrum to be just as open. I have almost every single piece needed to make it work.

I've worked long enough as a data monkey for telcos, or in places that partner with the telcos, to completely understand their point of view. It's desperation, and it will happen, but slowly. In the meantime, I'm sharpening my skill set and keeping my eyes on the prize.

-- FLR or flrichar is a superfan of Linux Journal, and goofs around in the LJ IRC Channel

I worked for a phone company; they did put a compiled Linux kernel in ClearCase. That stuff is ages old. I guess they want to sell it to third-world countries. So the best help for the poor is to get them to DIY with the correct tools.

That's why the carriers don't like it and hope it will fail.
But the truth of the matter is they've lost the war - it's just that the buggers are going to try to squeeze as much out of us as possible before we realise they've lost and don't actually have anything to sell but their petty little roadblocks on the information highway.
We're in the strange situation where the luddites own the looms!

A good wireless router that supports IPv6 and the exterior routing protocol(s) of your choice is pretty cheap to obtain (about $80 for a Ubiquiti NanoStation) and not too taxing for a technical person to set up.
It should only take a few of these nodes to set up a neighborhood-level IPv6 network, where everyone is publicly reachable and has a symmetric connection, at least within the network and some of the other networks to which it has gateways.
The carriers have no incentive to provide consumers with a decent connection; if you need a good up/down ratio or a reachable IP address (not even a static one!), they want you to pay more for the "business" package. It costs them money to reconfigure their equipment for IPv6, and what do they get out of it? It isn't worth the carriers' while.
It's worth our while, as technical folks who have neighbors, to do it ourselves. The end state I see is unbilled symmetric IPv6 connectivity for everyone in areas of sufficient population density, if not for everyone globally.

Ah yes, this reminds me of the story of the AT&T graybeards who were presented with the concept of packet switching and wondered what to do with a loop when no packet was being transferred. The response, "well, you turn it off", basically made them cry heresy, since their way of thinking was that you do not turn off a loop unless either end has hung up.

Still, with VoIP becoming more and more common, one may well find that people expect it to be as reliable as the fixed-line phones of the past, even while it rests on a less reliable transport technology. I suspect it will take maybe one death from a failed VoIP call to make politicians shut down any talk of a neutral net.
