Category: Internet

Swinging off something I discussed in another place, the Wikipedia list of culture-bound syndromes is fascinatingly odd, although several of them seem to reduce to depression and several more to sexism. I wonder if different Wikipedias have different ones?

But what interests me is this: what with globalisation an’ all, will these get smoothed out by the invisible hand like so many obscure languages, until we’re all crazy according to world-class best practice and international standards?

Or will we get something different? Weird jarring mashups from the grab-bag of available symptoms are a possibility. Try a combination of ufufuyane, tanning addiction, and scrupulosity, or perhaps bouffée délirante, smilorexia, and puppy pregnancy syndrome.

That’s if nothing entirely novel emerges.

Perhaps it already has and “Troll (Internet)” should be in the list. Perhaps I should put it there.

Oh Rugby League, must it always be so? The answer is always yes. The FFR XIII, the French governing body, had the great idea of streaming their match with Wales today on the web, presumably because TV wasn’t interested and there are plenty of weirdos who would get up for the England/Samoa and Australia/New Zealand games and would also watch the French one.

But tell me, having made the momentous decision, did they do a good job? Did they ask people who knew how to do a good job? You know the answer.

It ended up on Dailymotion, in really terrible quality, with no score, but not before they’d also knocked over their own WordPress site by putting the embedded video on the front page and handing out the link, so the thundering herd hit whatever VPS they bought for their website first rather than Dailymotion’s CDN infrastructure. Not surprisingly the database got its knickers in a twist. Why involve a database when what you really need is a cache?
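Why involve a database when what you really need is a cache? A minimal sketch of the idea in Python (the decorator, the function names, and the 60-second TTL are all illustrative, not anything FFR XIII actually ran): even a short-lived cached copy means a thundering herd touches the database once rather than ten thousand times.

```python
import time

def cached(ttl_seconds):
    """Cache a zero-argument page-render function for ttl_seconds,
    so repeated hits serve the stored copy instead of the database."""
    def decorator(render):
        state = {"value": None, "expires": 0.0, "db_hits": 0}
        def wrapper():
            now = time.time()
            if state["value"] is None or now >= state["expires"]:
                state["db_hits"] += 1       # only path that touches the DB
                state["value"] = render()
                state["expires"] = now + ttl_seconds
            return state["value"]
        wrapper.state = state               # exposed for inspection
        return wrapper
    return decorator

@cached(ttl_seconds=60)
def front_page():
    # Stand-in for an expensive WordPress database query.
    return "<html>match stream embed</html>"

# A thundering herd of 10,000 requests touches the database once.
for _ in range(10_000):
    front_page()
print(front_page.state["db_hits"])  # 1
```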

So a good idea that our amateurish execution turned into a humiliating fiasco. Where have we heard that one before?

OK, net neutrality. Just to set down how I think about this. The fundamental issue here is the termination fee regime on the Internet, or rather the lack of one.

So what is termination? When a phone call (remember them?) goes from network A to network B, network B charges network A for “terminating” the call, i.e. accepting it for final delivery to one of their subscribers. This practice originates with the way 19th century German postal administrations handled cross-boundary mail. There were other options, but the Prussians insisted on being paid for accepting inbound mail, and what they said went. This practice was taken over by the Universal Postal Union as a worldwide standard, so you can blame Anthony Trollope.

When international telephone interconnection became a thing, the ITU took over the postmen’s practices. Later we got privatisation, and with the emergence of GSM, a vast increase in the importance of interconnection through the issue of roaming.

But on the Internet, there is no termination regime, or rather there is one, but the termination fee is in principle zero. It’s possible, of course, to charge termination on IP traffic, and in fact mobile operators do this to each other via the GRX/IPX system, but there isn’t very much mobile-to-mobile traffic because everything interesting is on the proper Internet.

The first big, important point about termination is that it’s a purely regulatory construct. It feels aesthetically right to think in terms of “importing” or “exporting” traffic, but it doesn’t actually describe an economic reality. For two internetworks to interconnect, they have to run a cable down to a meet-me room somewhere, perhaps place some equipment, and do some configuration. These costs are all incurred at the time of setting up the link.

This takes us to the second big, important point, which follows from the first: the direction the traffic is flowing doesn’t affect the costs at all. The glass fibres don’t know which way the packets are moving. Whether the up/down ratio is 50:50, 80:20, or something else doesn’t change the costs of production. How much total traffic there is certainly does change the total cost, but that’s a separate issue.

This has important consequences.

Mobile carriers love termination; this is why they whine so much about Neelie Kroes and why they tend to report their numbers as “ex-MTR” and also as “including regulatory MTR changes”. (The technical term for the second metric is “the truth”.) They have two reasons for this. The first is that, although termination is charged to other operators, that doesn’t mean it’s a wash. Even if all the operators had roughly balanced termination accounts, they would still be able to pass the cost on to the customer, i.e. you.

The second is that termination fees flow towards the big battalions. If I have 20 million subscribers and you have 1 million subscribers, your subs are much more likely to call mine and hence generate termination revenue than the other way around. And because termination has nothing in principle to do with the underlying costs of production, it’s pure economic rent and hence, margin. Mmm, margin!
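A back-of-the-envelope illustration with made-up but roughly proportionate numbers: assume everyone dials numbers uniformly at random and per-subscriber calling rates are equal.

```python
# Two networks: A with 20 million subscribers, B with 1 million.
subs_a, subs_b = 20_000_000, 1_000_000
total = subs_a + subs_b

# Probability a randomly dialled number lands on the other network,
# i.e. the share of each network's outbound calls that generates
# termination revenue for the other side:
b_calls_to_a = subs_a / total   # ~0.95 of B's calls pay A termination
a_calls_to_b = subs_b / total   # ~0.05 of A's calls pay B termination

# The big network earns termination on 20x the share of traffic.
print(round(b_calls_to_a / a_calls_to_b))  # 20
```

The asymmetry scales directly with the subscriber ratio, which is why the fees flow towards the big battalions.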

There is of course no reason to treat 1870s Prussian postal practice as sacrosanct, to say the least. There might have been more reason back when telecoms operators were public services, but that was then, and anyway there were plenty of problems with that set-up. Now that the big battalions are private interests, it’s much harder to defend.

By contrast, a system with no termination regime (known variously as bill-and-keep, settlement-free interconnection, and so on) has the property that big operators implicitly subsidise small ones, and specifically, access operators implicitly subsidise hosting operators. I don’t have to pay Deutsche Telekom or whoever to let its subscribers read my blog. This is a very important part of the Internet’s special nature.

The telephony ecosystem provides for universal interconnection – anyone on the phone can call any other number – but it provides only very poorly for applications, rather than access, operators. It sometimes claims to provide more thoroughly for public service, but it usually only does so when forced to by regulatory action. The Internet termination regime provides for universal interconnection, but also provides much more thoroughly for the existence of stuff you might want to interconnect with.

It is also true that, because termination is a regulatory construct, major access operators who want to charge something like a termination fee are usurping the powers of the regulator.

Universal interconnection is very valuable. Fans of markets in everything tend to think that it wouldn’t be so bad if you needed to subscribe to multiple ISPs to get the whole of the Internet. This is just a reflection of their general pollyannaism. We can see this because the market has spoken; nobody wants not-quite the Internet. When something like that is offered, customers invariably demand the real thing. Similarly, when filtering is offered as a commercial product, nobody ever buys it. They may think it’s good for everyone else, but they don’t want it for themselves. Everyone would like 1Gbps symmetric, thanks, and get out of the way. They would also want a full BGP routeview if they only knew what one was.

This is also why it is a much less important issue in the EU than in the US. In the EU, structural separation and wholesale requirements mean that the whole of the Internet and shut up is always likely to be on offer. In the US, not so much.

As the former chief engineer at Akamai, Patrick Gilmore, said on NANOG recently: having seen the size of the routing table pass 500,000 routes, why don’t we make an effort to tidy up and push it back underneath? Especially as this would give all the world’s Cisco Sup720 routers a new lease of life.

In general, brilliant schemes to reorganise the Internet aim for incremental efficiency gains at the cost of the weirdness, slack, and flexibility that makes it special. The slack and quirks are a source of antifragility and strength. It is crazy talk to spoil the ship for three months’ deferred investment, especially when tweaking and better practice often deliver more.

So #Heartbleed, perhaps the best software bug ever. I spent much of today checking websites and changing passwords. Fortunately, I use the Firefox password manager to store mine and sync them with the browser on my mobile phone, so I could open it, search for “https://”, and work through the list. I eventually used 30 or so random sequences from random.org, starting with anything that had money attached. It was an advance on my plan, over a decade old, of using the names of Australian cattle stations.
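For what it’s worth, Python’s standard library will do the random.org job locally. A sketch (the alphabet and the 24-character length are arbitrary choices of mine, not a recommendation from anywhere official):

```python
import secrets
import string

def make_password(length=24):
    """Generate a genuinely random password from letters, digits,
    and a few punctuation marks, using the OS entropy pool."""
    alphabet = string.ascii_letters + string.digits + "-_!@#"
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = make_password()
print(len(pw))  # 24
```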

That was fair enough, but I kept running into the same problem – I had to log in, root around in some e-commerce site to find the “change password” link, and then futz around still more to persuade Firefox to save the new password. The champion was probably a ticketabc site where I had to feign interest in a Pharcyde gig to change my password.

The problem that you can’t explicitly edit the passwords is solved with this extension, which also helps with some web sites that don’t flag the password fields properly. PayPal even stops you copying and pasting, to make absolutely sure you can’t use it without passing a typing test.

But this is all kludge. The main problem with passwords is that if they are any good, you can’t remember them. The other main problem with passwords is that if you can think them up, they probably aren’t any good. The other other main problem with them is that the whole life-cycle is so almost.

What I want is this: my Web browser generates a genuinely long and random password whenever I need one, and stores it. It fetches it whenever I want to log in. When I don’t want it any more, it deletes it. If there is some reason to think it’s been compromised, I press a button and the password is revoked and a new one generated.

Seems simple enough, and I was thinking about getting the JavaScript book out and making a browser extension…until I started changing the passwords. The problem is that there are so, so many daft, broken, almost ways of implementing simple password schemes. And wouldn’t it be that bloody horrible Verified by Visa mess that doesn’t either pass or fail the test for Heartbleed, when it is supposedly all that stands between my money and the scum of the Internet? secure5.arcot.com, I’m looking at you.

What I want, then, is a simple standard that allows a Web site (or if you like, anything else using it) to trigger the creation of a password by the password manager, which then stores it for later use, and that provides for the password to later be changed. This must allow for an external device to generate the password if desired, for a master credential, and for the password store to be sync’d between machines if desired. It must also allow for a big REVOKE ALL THE THINGS button that causes all (or a subset) of the stored passwords to be expired and regenerated.

That’s basically an API with five calls:

>makePassword(site, username)
>login(site, username, password)
>logout(site, username)
>deletePassword(site, username, password)
>revokePassword(site, username, password)

and the fifth is really just a delete followed immediately by a make.
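A minimal sketch of how such a store might behave, in Python rather than browser JavaScript. The method names follow the calls above; everything else (the in-memory dict, `token_urlsafe`, the no-op logout) is illustrative only; a real version would encrypt at rest, sync, and hold a master credential.

```python
import secrets

class PasswordStore:
    """Toy model of the five-call password-manager API."""
    def __init__(self):
        self._store = {}   # keyed by (site, username); plaintext, toy only

    def make_password(self, site, username):
        pw = secrets.token_urlsafe(32)      # genuinely long and random
        self._store[(site, username)] = pw
        return pw

    def login(self, site, username):
        # Fetch the stored credential for presentation to the site.
        return self._store[(site, username)]

    def logout(self, site, username):
        pass   # nothing to do in this sketch; a real one clears session state

    def delete_password(self, site, username):
        del self._store[(site, username)]

    def revoke_password(self, site, username):
        # As noted above: just a delete followed immediately by a make.
        self.delete_password(site, username)
        return self.make_password(site, username)

store = PasswordStore()
old = store.make_password("example.com", "alice")
new = store.revoke_password("example.com", "alice")
print(new != old and store.login("example.com", "alice") == new)  # True
```

The big REVOKE ALL THE THINGS button is then just `revoke_password` in a loop over the keys.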

Why the hell hasn’t W3C done anything like this? It seems such a basic and useful project compared to the vast effort poured into the semantic web black hole.

Update: Naadir Jeewa objects.

@yorksranter i mistrust the identity management of 95% of websites that much, that I'd rather risk it

I think he is wrong. Not only is OAuth in the sense of “sign in with Facebook”, i.e. the sense in which it gets used, a bad case of pre-Snowden thinking, it’s also true that it works for me about 25% of the time.

Something we’ve needed for a while: a good hard stomp on the knuckles of all this MAGIC FACEBOOK DRONEZ FOR AFRICA nonsense. Provided. I especially like the point that, in fact, mobile operators are building 3G coverage in these places right now using the exciting new technology of sticking the antenna on a pole. A case in point: Vodafone’s M-PESA mobile payments platform is moving this spring into hosting in Kenya, having so far been based in a Vodafone data centre in Germany. That’s a huge vote of confidence in Kenyan infrastructure.

I would only add that a typical national cellular network is between 3,000 and 13,000 Node Bs, and that’s a lot of flying robots, especially when you think that they will need to rotate home for downtime. It’s also a hell of a lot of aerial activity for countries that don’t have much in the way of air traffic control. And typical monthly blended ARPU in these areas is around $5. If you want to attach a flying robot to each cell, how’s that going to add up?
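To see how badly it adds up, a deliberately crude back-of-the-envelope. Every figure except the $5 monthly ARPU and the 3,000 to 13,000 Node B range is an assumption pulled out of the air for illustration:

```python
# All assumed for illustration:
subs_per_cell = 1_000      # subscribers served per cell
arpu_monthly = 5.0         # dollars per month, per the text
drone_capex = 100_000      # dollars per long-endurance drone (guess)
drones_per_cell = 1.5      # spares, since they rotate home for downtime

revenue_per_cell_year = subs_per_cell * arpu_monthly * 12   # gross revenue
capex_per_cell = drone_capex * drones_per_cell

# Years of *gross* revenue eaten by the flying robots alone,
# before spectrum, backhaul, staff, or the rest of the network:
print(capex_per_cell / revenue_per_cell_year)  # 2.5
```

On these numbers the drone capex alone is two and a half years of gross revenue per cell, which is why the pole keeps winning.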

It’s basically the equivalent to all the people who were going to cover this or that with free WiFi back in about 2004 and we wouldn’t need boring carriers with all their boring regulation and boring unions and boring universal service and boring and why you so boring, Sven Radioplanner?

Speaking of which, I saw a Bell Labs presentation in about 2005 of research into mobile base stations that would actually be mobile themselves, chugging about airports on their wheels to optimise the network design. I note that I’ve yet to find a Node B chasing me into a tube station, like the infrastructure for the Direct Line phone. I suspect that the problem of designing such a highly dynamic radio network might be quite complicated. Presumably the drones talk to each other, so it’s a mesh network, and one thing we know about those is that they don’t scale particularly well.

It is actually true that the bellhead/nethead divide persists after all these years. At MWC the other week, I was amazed at the big deal people on the main site made about having an app or a Web site, while over at the developer event people would start up a RESTful service during their own presentations. Similarly, at IETF this week, I mentioned BCP38 to someone and they had no idea what it was – the stereotype of being a bit unworldly and not really interested in user or operator problems has a grain of truth.

But this sort of stupid cap-badge politics divide is just that – stupid, and misleading. It also acts as camouflage for all sorts of ugly prejudices and assumptions, in this case that Africans need saving by DRONEZ, that Facebook is the first of their concerns, that everyone who works for a telco or worse, a government, is an idiot, and that only idiots get involved with infrastructure.

Meanwhile in the UK, we still haven’t fixed the thing where you get to not pay rates on new fibre until it’s sold and profitable, but only if you’re BT, and Cory Doctorow is worrying about the renewed London property boom eating start-ups so they can be replaced by oligarch units.

Here’s a really nice group profile of Xavier Niel, Stéphane Richard, and Martin Bouygues from Le Monde. It’s a pity the reporter doesn’t seem able to critically assess anything technical they say – it’s certainly not true that Free doesn’t do engineering – but it does point up the way they seem to come from three different versions of France. Richard, the super-elite but entirely general-purpose technocrat; Bouygues, the Neuilly heir to a fortune built on selling construction projects to the government; Niel, the post-1983 guy who ran away to the Internet and thinks everyone should learn programming.

This is only one of the reasons why squatting in other people’s netblocks is a bad idea. To understand the point, you’ve got to go back to the BT 21CN project, which was one of those “the Internet is just another service over our private network” ideas telcos tend to love. Although a lot of it didn’t work, like the weird ethernet-level multiservice router, they did build a huge MPLS core network that carries all the other stuff – i.e. mostly the Internet – as encapsulated traffic.

Because they did it this way, they also didn’t do IPv6, which left them with a problem. One of the advantages of doing it the way they did was that they could trivially have a parallel management network. But that meant finding at least two addresses per device for the whole of the UK. So they had the bright idea of picking a big netblock that doesn’t appear in the Internet routing table, and “borrowing” that.

Sensibly, they looked for one that would be very unlikely to ever be announced. Some organisations that got huge IP allocations back in the day, like MIT with its 18/8, have been prevailed on to give at least some of the space back for public use. The classic case is the trade show Interop, which used to own 45/8 and only use it one week a year.

The US Department of Defense, however, has a hell of a lot of address space, and usually doesn’t route it publicly, for fairly obvious reasons. And if they don’t want to give it up, who’s going to make them? So they peeked into the DoD NIC allocations and picked 30/8. This is quite common; one day somebody will audit it all and there will be surprises.
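Python’s `ipaddress` module makes the distinction concrete. A sketch of classifying an address you’re thinking of “borrowing” (the labels are mine, and the real lesson is that only the RFC 1918 blocks are actually yours to use):

```python
import ipaddress

# RFC 1918 private space - the designated blocks for exactly this job -
# versus the DoD's 30/8, which is allocated but not publicly announced.
PRIVATE = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
DOD_30 = ipaddress.ip_network("30.0.0.0/8")

def classify(addr):
    ip = ipaddress.ip_address(addr)
    if any(ip in net for net in PRIVATE):
        return "RFC 1918 private"
    if ip in DOD_30:
        return "DoD 30/8 - allocated, unrouted, and not yours"
    return "public"

print(classify("10.1.2.3"))   # RFC 1918 private
print(classify("30.4.5.6"))   # DoD 30/8 - allocated, unrouted, and not yours
print(classify("8.8.8.8"))    # public
```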

I think it is probably important to direct attention to this post, which contains the only convincing explanation of PRISM I’ve yet seen, including the tiny budget (if it only cost $20m to process everything in Apple, Google, Facebook etc., what do they need all those data centres for?), the overt denials, and the denial of any technical backdoor.

Basically, the argument is that PRISM is an innovation in the technology of law rather than the technology of computing, some sort of expedited court order programmed in Lawyer requiring the disclosure of specified data, and perhaps providing for enduring or repeated collection. This would avoid the need to duplicate vast amounts of infrastructure or trawl every damn thing, would stick to the letter of the law, and would help engineers sleep, as it wouldn’t imply creating a vulnerability that could be used by both the NSA and God-knows-who. It would also permit the President and such folk to deny that everyone was being monitored, as of course they are not.

That said, data could be requested on anybody who the court could be convinced was of interest. As the legalities seem quite permissive and anyway the court is a bit of a flexible friend, this means a lot of people. And in an important sense it doesn’t matter. The fact that surveillance is possible is important in itself. Bentham’s panopticon was based on the combination of overt surveillance – the prisoners knew that there was a guard watching them – and covert surveillance – the fact that the prisoners didn’t know at any given moment who the guard might be watching and therefore could not be certain they were not being observed.

The degree to which this was an aim of PRISM must be limited, because it was after all meant to be secret. But it is hard to avoid the conclusion that it’s there.

Something else. I’ve occasionally said that the Great Firewall of China should be seen as a protectionist trade-barrier as much as an instrument of censorship. Huge Chinese Internet companies exist that probably wouldn’t if everyone there used Facebook, Google, etc. Here you see another benefit of it – the Public Security Bureau gets to spy on QQ, but it’s harder for the Americans (or anyone else) to poke around. This may explain why the NSA seems to pick up lots of data from India and much less from KSA or China; you can PRISM for terrorists trying to affect the Indo-Pak nuclear balance and you can’t for Chinese targets.

Borders are always interesting, and this is today’s version.

Iran, of course, does another twist on this. It has a vigorous internal ISP industry, but monopolises international interconnection through a nationalised telco, DCI, that practices serious censorship. However, the same company also sells unfiltered, real Internet connectivity to actors outside Iran, notably in Oman, Pakistan, Iraq, and Afghanistan, almost certainly following Iranian foreign policy goals. DCI has even gone so far as to invest heavily in a new Europe-Middle East submarine cable to add capacity and improve quality (notably by taking a shorter route to Europe, and adding path-diversity against Cap’n Bubba and his anchor). Back in 2006, supposedly, the best Internet service in Kabul was in the cybercafe they installed in the Iranian embassy’s cultural centre.

Yahoo! has not joined any program in which we volunteer to share user data with the U.S. government. We do not voluntarily disclose user information. The only disclosures that occur are in response to specific demands. And, when the government does request user data from Yahoo!, we protect our users. We demand that such requests be made through lawful means and for lawful purposes. We fight any requests that we deem unclear, improper, overbroad, or unlawful. We carefully scrutinize each request, respond only when required to do so, and provide the least amount of data possible consistent with the law.

The notion that Yahoo! gives any federal agency vast or unfettered access to our users’ records is categorically false. Of the hundreds of millions of users we serve, an infinitesimal percentage will ever be the subject of a government data collection directive. Where a request for data is received, we require the government to identify in each instance specific users and a specific lawful purpose for which their information is requested. Then, and only then, do our employees evaluate the request and legal requirements in order to respond—or deny—the request.

Yahoo!’s top lawyer, spinning like a top, but basically confirming the notion of PRISM as a surveillance technology implemented in Lawyer.

A case of China exporting its internal chaos, as Jamie Kenny would say; I was recently talking to someone who had installed a wireless broadband network in China, and they mentioned that they’d had an exciting experience with a Huawei router. Politicians whose constituents include Huawei’s competitors are endlessly insinuating that their equipment is always secretly talking back to the Chinese, but no-one has ever caught them at it.

So our chap was suitably fascinated when they turned the thing up and they immediately started to see traffic heading for an apparently inexplicable address within China Telecom’s provincial network in Guangdong. Now, they weren’t in the province, but of course Huawei HQ is. Of course they fired up a monitoring tool to capture the traffic and see what it was.

It turned out to be the router’s internal inter-chassis traffic, which should have been going to its own loopback interface but was instead leaking onto the Internet. It seemed that someone in Huawei had borrowed some public IP addresses for their lab, rather than using Huawei’s own address space privately or else the designated private address space, had baked one of those addresses into the router firmware, and had then forgotten about it. (Rather like the time all the D-Link Wi-Fi boxes in the world started asking some guy in Denmark for a time signal, in case you think it’s just the Chinese who do these things.)

Obviously, routing via China would have been…suboptimal, and would have involved passing through the Great Firewall. But it would have worked in Huawei’s lab, or locally in Guangdong. No conspiracy, just internal chaos leaking across the border.

So, that PCCSpoil blog. To begin with, it was a collection of spoiled ballots from the police commissioner elections, a large (>75%) proportion of which seemed to add the hashtags #PCCSpoil or #policespoilballot. I had the impression that this suggested a campaign of some sort. After all, why the hashtag if you weren’t planning to put it on the web?

Since then, the blog has vanished, briefly shown a Mike Giggleresque student politics video, and now points at a petition to explain one’s spoiled ballot to No.10 Downing Street. Someone on Twitter thought the slogan “Don’t politicise the police”, which many of the spoilers used, might be a Police Federation internal line. But it’s not.

Spoiltpapers.org.uk was registered by someone in Wrexham on the 9th of November, the same day the Facebook page appeared. Along the way, Plaid Youth glommed on. But many of the people involved were in the comments to this Guardian piece, on their Northerner blog, from the 5th of November.

A special note. Tumblr, like Facebook, doesn’t delete your photos if you kill your account or even delete stuff from it. They remain in whichever content-delivery network they used. I know this because, after the PCC blog vanished, I noticed I still had a copy open in a browser tab, and I was able to wget all the images and the HTML wrapper into an archive.
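The archiving trick generalises. A sketch using Python’s `html.parser` to list the CDN image URLs out of a saved wrapper page, ready to fetch with wget or similar (the URLs here are made up for illustration):

```python
from html.parser import HTMLParser

class ImageLister(HTMLParser):
    """Collect the src of every <img> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.images.append(src)

# Stand-in for the saved wrapper page from the browser tab.
page = ('<html><img src="https://media.example.com/ballot1.jpg">'
        '<img src="https://media.example.com/ballot2.jpg"></html>')

parser = ImageLister()
parser.feed(page)
print(parser.images)
```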

Update: One of the people in this post is now claiming to be me! As a note to the TV producers who are asking me for copies of the spoiled ballots, PCCSpoil is not my blog and has nothing to do with me. My bet is the Plaid guy.

there was a great distinction between telephones and such subjects as gas and water. Gas and water were necessaries for every inhabitant of the country; telephones were not and never would be. It was no use trying to persuade themselves that the use of telephones could be enjoyed by the large masses of the people in their daily life. [An hon. MEMBER: “America.”]

He did not think his hon. Friend was aware of the fact that in the large towns of America subscribers had to pay £40 to £50 for the service which a subscriber in London obtained for from £10 to £20. He went further and said that in a town like London, or Glasgow, or Belfast, an effective telephone service would be practically impossible if the large majority of the houses were furnished with telephones, so great would be the confusion caused by the increased number of exchanges. He was not stating his own opinion, but that of experts.

You’ve got to love the appeal to nameless experts there, and the general 640K-ness. But there’s more, and as this evening’s unstated theme is turning out to be “blog about why things everybody agrees on don’t get on the ballot”, it’s worth reading on.

For a start, they’re debating the question of whether cities ought to be allowed to run their own networks, a topic which is just as fresh today as it was then. Everyone agrees that a private-sector monopoly is undesirable, but is the answer muni-fibre (well, muni-copper), regulation and a universal-service fund, a nationalised industry, or something else? The minister, Arnold Morley, argues that it’s mostly a national-level or even supranational (he says imperial) infrastructure issue. Glasgow Labour MP A. C. Corbett makes a vigorous case for municipal socialism in telephony as in everything else. Sir J.E. Gorst sketches out the situation, which is almost as much of a mess as UK telecoms policy is now. A good row is fought out about who is responsible.

A. D. Provand, yet another Glasgow MP, invents settlement-free peering 100 years early and points up the difference between peering and termination:

The terminals would operate in this way: If, for example, London had a telephone licence, it could not send a message to Brighton unless Brighton had also an exchange, which would deliver the message free; otherwise the London message would be delivered through the present Company at Brighton, which would charge a terminal for doing so. The effect of that would be to double the rate between London and Brighton.

And A.C. Corbett basically hits on the solution everybody who’s seriously thought about it prefers: open access to shared civil infrastructure! But why put up with the open access bit when the town hall can run the whole thing?

It had been suggested that where underground telephones were necessary, and where it was impossible for the municipality to entrust the control of the streets to any private corporation working for profit, that the municipality should lay these underground wires and take all the care of them. If the municipality was to lay the wires and take all the care of maintenance, there was no possible reason why the municipality should not take over all the undertaking and derive all the profit to be got from it.

Then again, once you’ve got the rights-of-way and the ducts in the public hand, you can do both, like Singapore’s NBN.

Well, at least we were spared the private sector monopoly…until we got it anyway. It is pretty astonishing, though, when you think of some of the places that have got 150Mbps FTTH networks and some of the places that have already sorted out LTE spectrum, and then of the UK. Agenda-setting is a powerful force.

Wisdom. But perhaps the EU should be read exoterically. It needs a big transfer budget to work without Greekifying a country every 5 years to show who's boss. What chances are there of such a budget? Right. POSIWID!

RAF decision to drop ALARM, de-prioritise EW because we're getting stealth! is beginning to look a bit tatty. Not sure what the logic is here - whole of EuroNATO can offer jets, and plenty will get F-35, but who has a "100 Group" capability?