
The enigmatic Steven Pemberton is at XTech to tell us Why you should have a Web site: it’s the law! (and other Web 3.0 issues). God, I hope he’s using Web 3.0 ironically.

Steven has heard many predictions in his time: that we will never have LCD screens, that digital photography could never replace film, etc. But the one he wants to talk about is Moore’s Law. People have been saying that it hasn’t got long to go since 1977. Steven is going to assume that Moore’s Law is not going to go away in his lifetime.

In the 1980s the most powerful computers were the Crays. People used to say One day we will all have a Cray on our desk. In fact most laptops are about 120 Craysworth and mobile phones are about 35 Craysworth.

There is actually an LED correlation to Moore’s Law (they keep getting brighter and cheaper, faster). Steven predicts that within our lifetime all lighting will be LEDs.

Bandwidth follows a similar trend. Jakob Nielsen likes to claim this law; that bandwidth will double every year. In fact the timescale is closer to 10.5 months.

Following on from Moore’s and Nielsen’s laws, there’s Metcalfe’s Law: the value of a network is proportional to the square of the number of nodes. This is why it’s really good that there is only one email network and bad that there are so many instant messenger networks.
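The arithmetic behind that claim is worth spelling out. Here’s a quick sketch (the numbers are purely illustrative, not from the talk) comparing one unified network against the same users split across incompatible networks:

```python
# Metcalfe's Law: a network's value is proportional to the square of
# the number of nodes. Take the proportionality constant as 1.

def metcalfe_value(nodes: int) -> int:
    """Value of a single network with the given number of nodes."""
    return nodes * nodes

# One email network connecting 90 users...
unified = metcalfe_value(90)          # 90^2 = 8100

# ...versus the same 90 users split across three incompatible
# instant messenger networks of 30 users each.
fragmented = 3 * metcalfe_value(30)   # 3 * 30^2 = 2700

print(unified, fragmented)  # 8100 2700 - fragmentation cuts value to a third
```

In general, splitting n users into k walled gardens divides the total Metcalfe value by k, which is exactly why one email network beats many instant messenger networks.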

Let’s define the term Web 2.0 using Tim O’Reilly’s definition: sites that gain value by their users adding data to them. Note that these kinds of sites existed before the term was coined. There are some dangers to Web 2.0. When you contribute data to a web site, you are locking yourself in. You are making a commitment just like when you commit to a data format. This was actually one of the justifications for XML — data portability. But there are no standard ways of getting your data out of one Web 2.0 site and into another. What if you want to move your photos from one website to another? How do you choose which social networking sites to commit to? What about when a Web 2.0 site dies? This happened with MP3.com and Stage6. Or what about if your account gets closed down? There are documented cases of people whose Google accounts were hacked so those accounts were subsequently shut down — they lost all their data.

These are examples of Metcalfe’s law in action. What should really happen is that you keep all your data on your website and then aggregators can distribute it across the Web. Most people won’t want to write all the angle brackets but software should enable you to do this.

What do we need to realize this vision? First and foremost, we need machine-readable pages so that aggregators can identify and extract data. They can then create the added value by joining up all the data that is spread across the whole Web. Steven now pimps RDFa. It’s like microformats but it will invalidate your markup.
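To make the aggregation step concrete, here’s a minimal sketch of what an aggregator might do with a machine-readable page. It uses hCalendar-style class names (`vevent`, `summary`, `dtstart`) purely as an illustrative convention; a real aggregator would use a proper RDFa or microformats parser rather than this hand-rolled one:

```python
# Hypothetical sketch: extract event data from a page that marks up
# events with hCalendar-style class names. Uses only the standard
# library; a production aggregator would use a real parser.
from html.parser import HTMLParser

class EventExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []
        self._field = None  # property we're currently inside, if any

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "vevent" in classes:
            self.events.append({})          # start a new event record
        elif self.events and "summary" in classes:
            self._field = "summary"
        elif self.events and "dtstart" in classes:
            self._field = "dtstart"

    def handle_data(self, data):
        if self._field and self.events:
            self.events[-1][self._field] = data.strip()
            self._field = None

page = """
<div class="vevent">
  <span class="summary">XTech 2008</span>
  <span class="dtstart">2008-05-06</span>
</div>
"""

parser = EventExtractor()
parser.feed(page)
print(parser.events)
# [{'summary': 'XTech 2008', 'dtstart': '2008-05-06'}]
```

Once data is exposed in an agreed format like this, joining it up across sites is just a matter of crawling and merging records.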

Once you have machine-readable semantics, a browser can do a lot more with the data. If a browser can identify something as an event, it can offer to add it to your calendar, show it on a map, look up flights and so on. (At this point, I really have to wonder… why do the RDFa examples always involve contact details or events? These are the very things that are more easily solved with microformats. If the whole point of RDFa is that it’s more extensible than microformats, then show some examples of that instead of showing examples that apply equally well to hCalendar or hCard)

So rather than putting all your data on other people’s Web sites, put all your data on your Web site and then you get the full Metcalfe value. But where can you store all this stuff? Steven is rather charmed by routers that double up as web servers, complete with FTP. For a personal site, you don’t need that much power and bandwidth. In any case, just look at all the power and bandwidth we do have.

To summarise, Web 2.0 is damaging to the Web. It divides the Web into topical sub-webs. With machine-readable pages, we don’t need those separate sites. We can reclaim our data and still get the value. Web 3.0 sites will aggregate your data (Oh God, he is using the term unironically).

Questions? Hell, yeah!

Kellan kicks off. Flickr is one of the world’s largest providers of RDFa. He also maintains his own site. Even he had to deal with open source software that got abandoned; he had to hack to ensure that his data survived. How do we stop that happening? Steven says we need agreed data formats like RDFa. So, Kellan says, first we have to decide on formats, then we have to build the software and then we have to build the aggregators? Yes, says Steven.

Dan says that Web 2.0 sites like Flickr add the social value that you just don’t get from building a site yourself. Steven points to MP3.com as a counter-example. Okay, says Dan, there are bad sites. Simon interjects, didn’t Flickr build their API to provide reassurance to people that they could get their data out? Not quite, says Kellan, it was created so that they could build the site in the first place.

Someone says they are having trouble envisioning Steven’s vision. Steven says I’m not saying there won’t be a Flickr — they’ll just be based on aggregation.

Someone else says that far from being worried about losing their data on Flickr, they use Flickr for backup. They can suck down their data at regular intervals (having written a script on hearing of the Microsoft bid on Yahoo). But what Flickr owns is the URI space.

Ian says that Steven hit on a bug in social websites: people never read the terms of service. If we encouraged best practices in EULAs we could avoid worst-case scenarios.

Someone else says that our focusing on Flickr is missing the point of Steven’s presentation.

Someone else agrees. The issue here is where the normative copy of your data exists. So instead of the normative copy living on Flickr, it lives on your own server. Flickr can still have a copy though. Steven nods his head. He says that the point is that it should be easy to move data around.

Time’s up. That was certainly a provocative and contentious talk for this crowd.

It seems to have become a tradition that I do a “bluffing” presentation every year. I did How to Bluff Your Way in CSS two years ago with Andy. Last year I did How to Bluff Your Way in DOM Scripting with Aaron. This year, Andy was once again my partner in crime and the topic was How to Bluff Your Way in Web 2.0.

It was a blast. I had so much fun and Andy was on top form. I half expected him to finish with “Thank you, we’ll be here all week, try the veal, don’t forget to tip your waiter.”

As soon as the podcast is available, I’ll have it transcribed. In the meantime, Robert Sandie was kind enough to take a video of the whole thing. It’s posted on Viddler which looks like quite a neat video service: you can comment, tag or link to any second of a video. Here, for instance, Robert links to the moment when I got serious and called for the abolition of Web 2.0 as a catch-all term. I can assure you this moment of gravity is the exception. Most of the presentation was a complete piss-take.

My second presentation was a more serious affair, though there were occasional moments of mirth. Myself and Derek revisited and condensed our presentation from Web Directions North, Ajax Kung Fu meets Accessibility Feng Shui. This went really well. I gave a quick encapsulation of the idea of Hijax and Derek gave a snappy run-through of accessibility tips and tricks. We wanted to make sure we had enough time for questions and I’m glad we did; the questions were excellent and prompted some great discussion.

Again, once the audio recording is available, I’ll be sure to get it transcribed.

And yes, once the audio is available, I’ll get it transcribed. Seeing a pattern here? Hint, hint, other speakers.

As panels go, the microformats one was pretty great, in my opinion. Some of the other panels seem to have been less impressive according to the scuttlebutt around the blogvine.

Khoi isn’t keen on the panel format. It’s true enough that they probably don’t entail as much preparation as full-blown presentations but then my expectations are different going into a panel than going into a presentation. So, for something like Brian’s talk on the Mobile Web, I was expecting some good no-nonsense practical advice and that’s exactly what I got. Whereas for something like the Design Workflows panel, I was expecting a nice fireside chat amongst top-notch designers and that’s exactly what I got. That’s not to say the panel wasn’t prepared. Just take one look at the website of the panel which is a thing of beauty.

Anyway, whether or not you like panels as a format, there’s always plenty of choice at South by Southwest. If you don’t like panels, you don’t have to attend them. There’s nearly always a straightforward presentation on at the same time. So there isn’t much point complaining that the organisers haven’t got the format right. They’re offering every format under the sun—the trick is making it to the panels or presentations that you’ll probably like.

In any case, as everyone knows, South by Southwest isn’t really about the panels and presentations. John Gruber wasn’t keen on all the panels but he does acknowledge that the real appeal of the conference lies elsewhere:

At most conferences, the deal is that the content is great and the socializing is good. At SXSWi, the content is good, but the socializing is great.

This is quite possibly the best thing I’ve seen since breakfast: Cerado’s Web 2.0 or Star Wars Quiz. The premise is simple: decide whether a silly-sounding word is the name of an over-hyped web company or whether it’s really the name of a character from Star Wars.

I scored a reasonably good 41 thanks to my knowledge of Star Wars, I’m glad to say… I’d hate to have scored well by actually recognising half of the companies listed:

31-40: As your doctor, I recommend moving out of your parents’ basement.

All in all, it was a good conference. A lot of the subject matter was more techy than I’m used to, but even so, I found a lot to get inspired by. I probably got the most out of the “big picture” discussions rather than presentations of specific technology implementations.

Apart from my outburst during Paul Graham’s keynote, I didn’t do any liveblogging. Suw Charman, on the other hand, was typing like a demon. Be sure to check out her notes.

The stand-out speaker for me was Steven Pemberton of the W3C. He packed an incredible amount of food for thought into a succinct, eloquently delivered presentation. Come to think of it, a lot of the best stuff was delivered by W3C members. Dean Jackson gave a great report of some of the most exciting W3C activities, like the Web API Working Group, for instance.

I had the pleasure of chairing a double-whammy of road-tested presentations by Tom Coates and Thomas Vander Wal. I knew that their respective subject matters would gel well together but the pleasant surprise for me was the way that the preceding presentation by Paul Hammond set the scene perfectly for the topic of open data and Web Services. Clearly, a lot of thought went into the order of speakers and the flow of topics.

Stepping back from the individual presentations, some over-arching themes emerge:

The case for declarative languages was strongly made. Steven Pemberton gave the sales pitch while the working example came in an eye-opening presentation of Ajax delivered via XForms.

Tim O’Reilly is right: data is the new Intel Inside. Right now, there’s a lot of excitement to do with access to data via APIs but I think in the near future, we might see virtual nuclear war fought around control for people’s data (events, contacts, media, etc.). I don’t know who would win such a war but, based on Jeffrey McManus’s presentation, Yahoo really “gets it” when it comes to wooing developers. On the other hand, Jeff Barr showed that Amazon can come up with APIs for services unlike any others.

Standards, standards, standards. From the long-term vision of the W3C right down to microformats, it’s clear that there’s a real hunger for standardised, structured data.

Put all that together and you’ve got a pretty exciting ecosystem: Web Services as the delivery mechanism, standardised structures for the data formats and easy to use declarative languages handling the processing. Apart from that last step — which is a longer-term goal — that vision is a working reality today. Call it Web 2.0 if you like; it doesn’t really matter. The discussion has finally moved on from defining Web 2.0 to just getting on with it (much like the term “information architecture” before it). The tagline of XTech 2006 — Building Web 2.0 — was well chosen.

But the presentations were only one part of the conference. Just like every other geek gathering, the real value comes from meeting and hanging out with fellow web junkies who invariably turn out to be not only ludicrously smart but really, really nice people too. It helps that a city like Amsterdam is a great place to eat, drink and talk about matters nerdy and otherwise.

Something that became very clear — both at the Carson Workshops Summit and at the many web app panels at South by Southwest — is that websites like these are never finished. Instead, the site evolves, growing (and occasionally dropping) features over time.

Traditionally, the mental model for websites has been architectural. Even the term itself, website, invites a construction site comparison. Plans are drawn up and approved, then the thing gets built, then it’s done.

That approach doesn’t apply to the newer, smarter websites that are dominating the scene today. Heck, it doesn’t even apply to older websites like Amazon and Google, which have always been smart about constantly iterating changes.

Steve Ballmer was onto something when he said “developers, developers, developers, ad nauseam”. Websites, like Soylent Green, are people. Without the people improving and tweaking things, the edifice of the site structure will crack.

I’m going to make a conscious effort to stop thinking about the work I do on the Web in terms of building and construction: I need to find new analogies from the world of biology.