It looks like Google is going to start explicitly labelling slow sites as such in their search results (much like they recently started explicitly labelling mobile-friendly sites). So far it’s limited to Google’s own properties but it could be expanded.

Personally, I think this is a fair move. If the speed of a site were used to rank sites differently, I think that might be going too far. But giving the user advance knowledge and leaving the final decision up to them… that feels good.

There are some good points here comparing HTTP/2 and SPDY, but I’m mostly linking to this because of the three wonderful opening paragraphs:

A very long time ago—in 1989—Ronald Reagan was president, albeit only for the final 19½ days of his term. And before 1989 was over Taylor Swift had been born, and Andrei Sakharov and Samuel Beckett had died.

In the long run, the most memorable event of 1989 will probably be that Tim Berners-Lee hacked up the HTTP protocol and named the result the “World Wide Web.” (One remarkable property of this name is that the abbreviation “WWW” has twice as many syllables and takes longer to pronounce.)

Tim’s HTTP protocol ran on 10Mbit/s Ethernet and coax cables, and his computer was a NeXT Cube with a 25-MHz clock frequency. Twenty-six years later, my laptop CPU is a hundred times faster and has a thousand times as much RAM as Tim’s machine had, but the HTTP protocol is still the same.

I don’t tend to be a “magic pill” kind of believer, but I can honestly say that embracing progressive enhancement can radically change your business for the better. And I’m glad to see Google agrees with me.

Google has updated its advice to people making websites who might want those sites indexed by Google. There are two simple bits of advice: optimise for performance, and use progressive enhancement.

Just like modern browsers, our rendering engine might not support all of the technologies a page uses. Make sure your web design adheres to the principles of progressive enhancement as this helps our systems (and a wider range of browsers) see usable content and basic functionality when certain web design features are not yet supported.
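To make that concrete, here’s a small sketch of what that advice can look like in script form (the form, IDs, and endpoint are my own invention, not anything from Google’s guidance): the server-rendered form does the real work, and the JavaScript only layers on the nicer experience when the features it needs are actually present.

```typescript
// A minimal progressive-enhancement sketch (hypothetical form, IDs, and
// endpoint): the server-rendered form works on its own, and the script
// only takes over when the features it relies on are supported.
const form = document.querySelector<HTMLFormElement>('#search');

if (form && 'fetch' in window) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    const query = (form.elements.namedItem('q') as HTMLInputElement).value;
    const response = await fetch('/search?q=' + encodeURIComponent(query));
    const results = document.querySelector('#results');
    if (results) {
      // Enhanced path: swap in results without a full page reload.
      results.innerHTML = await response.text();
    }
  });
}
// If fetch (or JavaScript itself) is unavailable, the form still submits
// the old-fashioned way, and crawlers still see usable content.
```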

An early look at the just-in-time interactions that Scott has been working on:

Nearby works like this. An enabled object broadcasts a short description of itself and a URL to devices nearby listening. Those URLs are grabbed and listed by the app, and tapping on one brings you to the object’s webpage, where you can interact with it—say, tell it to perform a task.
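To give a sense of how lightweight that broadcast is, here’s a rough sketch of unpacking such a URL advertisement on the listening device. I’m assuming an Eddystone-URL-style frame layout here, which may not be exactly what the Physical Web prototype uses:

```typescript
// A rough sketch of decoding a broadcast URL on the listening device,
// assuming an Eddystone-URL-style frame (an assumption, not necessarily
// the exact format used by the Physical Web experiments).
const SCHEMES = ['http://www.', 'https://www.', 'http://', 'https://'];
const EXPANSIONS = [
  '.com/', '.org/', '.edu/', '.net/', '.info/', '.biz/', '.gov/',
  '.com', '.org', '.edu', '.net', '.info', '.biz', '.gov',
];

function decodeUrlFrame(frame: Uint8Array): string {
  // Byte 0: frame type (0x10 = URL), byte 1: TX power, byte 2: URL scheme.
  if (frame[0] !== 0x10) throw new Error('not a URL frame');
  let url = SCHEMES[frame[2]] ?? '';
  for (const byte of frame.slice(3)) {
    // Low byte values are shorthand codes for common URL fragments.
    url += byte < EXPANSIONS.length ? EXPANSIONS[byte] : String.fromCharCode(byte);
  }
  return url;
}

// e.g. decodeUrlFrame(new Uint8Array([0x10, 0xeb, 0x02,
//   0x65, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x00]))
// returns 'http://example.com/'
```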

Here’s a ridiculous Heath-Robinsonesque convoluted way of getting the mighty all-powerful Googlebot to read the web thangs you’ve built using the new shiny client-side frameworks like Angular, Ember, Backbone…

Here’s another idea: output your HTML in HTML.

That solution works for machines and humans. As a bonus, outputting your HTML in HTML avoids turning JavaScript into a single point of failure.
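For illustration, here’s what the boring server-rendered version might look like (the data and markup are made up): the same list a client-side framework would assemble in the browser, but sent down the wire as HTML, so nothing depends on the JavaScript arriving intact.

```typescript
// A minimal sketch of "output your HTML in HTML": render the data into
// markup on the server, so crawlers and JavaScript-less browsers get real
// content. The routes and data are illustrative assumptions.
import { createServer } from 'node:http';

const links = [
  { title: 'HTTP/2 and SPDY', url: '/links/http2' },
  { title: 'Progressive enhancement', url: '/links/pe' },
];

createServer((_req, res) => {
  const items = links
    .map((link) => `<li><a href="${link.url}">${link.title}</a></li>`)
    .join('\n');
  res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
  // The same markup a client-side framework would build, but delivered
  // as HTML, so nothing breaks if the JavaScript never arrives.
  res.end(`<!DOCTYPE html><title>Links</title><ul>\n${items}\n</ul>`);
}).listen(8080);
```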

We have lost an ally in the fight to maintain net neutrality. I wonder how Vint Cerf feels about his employer’s backtracking.

The specific issue here is with using a home computer as a server. It’s common for ISPs to ban this activity, but that doesn’t change the fact that it flies in the face of the fundamental nature of the internet as a dumb network.

Stuart nails it: the real problem with delegating identity is not what some new app will do with your identity details, it’s what the identity provider—Twitter, Google, Facebook—will do with the knowledge that you’re now using some new app.

A superb piece by Marco Arment prompted by the closing of Google Reader. He nails the power of RSS:

RSS represents the antithesis of this new world: it’s completely open, decentralized, and owned by nobody, just like the web itself. It allows anyone, large or small, to build something new and disrupt anyone else they’d like because nobody has to fly six salespeople out first to work out a partnership with anyone else’s salespeople.

And he’s absolutely on the money when he describes what changed:

RSS, semantic markup, microformats, and open APIs all enable interoperability, but the big players don’t want that — they want to lock you in, shut out competitors, and make a service so proprietary that even if you could get your data out, it would be either useless (no alternatives to import into) or cripplingly lonely (empty social networks).

Google’s track record is not looking good. There seems to be a modus operandi of bait-and-switch: start with open technologies (XMPP, CalDAV, RSS) and then, once they’ve amassed a big enough user base, ditch the standards.

Good news from Google: it’s going to start actively penalising sites for perpetrating the worst practices for mobile, e.g. redirecting a specific “desktop” URL to the homepage of the mobile site, or shoving a doorslam “download our app” message at users.

I wish that we could convince people not to do that crap on the basis of it being, well, crap. But when all else fails, saying “Google says so” carries a lot of weight (see also: semantics, accessibility, yadda, yadda, yadda).

Charles Arthur analyses the data from Google’s woeful history of shutting down its services.

So if you want to know when Google Keep, opened for business on 21 March 2013, will probably shut - again, assuming Google decides it’s just not working - then, the mean suggests the answer is: 18 March 2017. That’s about long enough for you to cram lots of information that you might rely on into it; and also long enough for Google to discover that, well, people aren’t using it to the extent that it hoped.

This looks like being an excellent—and free—resource “…meant to provide web application developers, browser engineers, and information security researchers with a one-stop reference to key security properties of contemporary web browsers.”