Power vs. Authority

More than a year ago I had the pleasure of meeting a man by the name of Amit Green. At the time we got to talking about lots of things, and eventually his arguments helped turn the tide on making the big changes required for Dojo 0.9. As a part of those discussions, he convinced me to read this book, which in 2007 read as part wise tome and part time-capsule from a simpler, slower time in business. Perhaps the most important thing I took away from it was the difference between power and authority. To quickly summarize, “power” is the ability to effect change while “authority” is the right to make any given decision. It’s easy to see how these are different: the person actually doing the work has all the power while the person who signed off on the spec has the authority. Sometimes these things are embodied in the same person, but the minute they’re not you have a “management problem”. The solution isn’t to just assume that one person will do everything; you’ll get no division of labor that way, which defeats the entire point of modern economics. Instead, you have to get everyone to respect the other parties such that authorized decisions represent the interests of those with power and that those with power have some agency.

But what happens when that process breaks down? When those with power forget the role they have (or should have) in instigating new decisions or framing them correctly? Or when everyone on the outside assumes that those with authority are the ones who can really make something happen? I’ll argue that you get the web as we know it today.

As a case study in putting your faith in the wrong idols, you can’t do better than posts like this which “blame the W3C” (via Molly). Blaming the W3C for not pushing the web forward is both humorously off-target and distressingly common. I’ve written about this before, but fundamentally you can’t blame the W3C for failing to act because it’s not the W3C’s job to act. An MBA should be able to tease this out a bit more effectively – any decision only requires that you have answers for five questions: why? what? how? when? who?

Answering these for pushing the web forward is straightforward, even on a simplistic level:

What?: new tags, JS and DOM APIs, CSS syntax, and renderer support for all of the above. Eventually, a spec or five reflecting these new technologies.

How?: we could try asking the W3C to do it, but they don’t have any power. When they’ve been left to their own devices, the W3C has failed. Miserably. Over and over and over again. Instead, browser makers should introduce new stuff and then agree to agree on it (via the W3C or similar organizations).

When?: introducing new features in any given browser seems doable in short order. In the case of Open Source browsers, the answer is “as soon as someone decides to invest in them”. Competition has even spurred Microsoft to some level of action. The likely time-scale for new features overall, though, appears to be on the order of 5+ years. That’s clearly not soon enough. TODO: investigate ways to speed this up.

Who?: browser makers and others in a position to affect the code that goes into the renderers we use.

Figuring out “how” leads you directly to “who” in this case. The action we all want is the sole purview and responsibility of the browser vendors and they alone have the power to push the web forward. The “web standards community” has made it clear that they’ll need the imprimatur of some authoritative body where agreement can be forced, but that hasn’t kept the browser vendors from taking the initiative there, either. The big, open questions then center around how the “web standards community” can make enough room for renderer vendors to try out new stuff, since that’s how we get new things. Demanding agreement on what to do before trying it out demonstrably doesn’t work, so it’s then imperative that there be a mechanism for the web to iterate prior to standardization. In fact, I’ll argue that this is now the biggest reason that Paul Ellis isn’t getting the improvements he wants out of the web: there’s no mechanism in place by which any browser vendor can take significant risks without incurring the wrath of a swarm of WaSPs, or worse. Attempts to even begin to lay the groundwork for such a mechanism have been shot down forcefully by many folks who, like Paul, view “fixing the web” as the W3C’s job.

Standards bodies are animated only by the needs of industry to reduce costs by forcing vendors to agree on things. Like Open Source, they can act as a back-stop to the monopoly-creating power of network effects by ensuring that the price of software commodities eventually does reach zero. In this context, then, the W3C’s only effective function is to drive consensus when visions for how to go forward diverge or lead down proprietary ratholes. Asking the W3C for more is the fast path to continued disappointment.

The W3C is just a sail and all sails need the wind to function. You can’t blame the sail for the wind not blowing.

22 Comments

Interesting parallel between the W3C process and the Waterfall model of software development. In the extremes of the waterfall model, practical work can languish while waiting for a perfect specification that can never come. Most developers have also realized that both the specification and the final product can often be improved by rapid cycles of work and feedback. While most developers are not pure Agilists, most have recognized the limits of a pure waterfall approach, and rejected it.

Why, then, having rejected this model for our own development, do we demand browsers follow it? Yes, I insist my sites adhere to standards, and want the browser makers to implement existing standards correctly. However, browser makers should not be forced to wait for an up-front consensus before innovating. That way, we would be stuck in 1999, without even XHR. If browsers can make a better way for me, I want them to try. I can decide what to adopt, or not adopt.
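The XHR example is worth making concrete: it shipped in browsers years before any spec existed, and libraries simply probed for whichever implementation was present. A minimal sketch of that era’s detection dance (the `createXHR` helper and its injected `global` parameter are illustrative, not taken from any particular library):

```javascript
// Hypothetical helper showing how libraries found an XHR implementation
// before standardization. `global` is injected so the fallback chain is
// explicit (and testable); in a page you would call createXHR(window).
function createXHR(global) {
  if (global.XMLHttpRequest) {
    // The constructor everyone eventually agreed on: Mozilla, Safari,
    // Opera, and finally IE 7+
    return new global.XMLHttpRequest();
  }
  if (global.ActiveXObject) {
    // IE 5/6, where XHR only existed as an ActiveX control
    return new global.ActiveXObject("Microsoft.XMLHTTP");
  }
  // Neither available: roughly the web of 1999
  return null;
}
```

The feature only became a “standard” after this kind of wrapper had already made it ubiquitous, which is the iterate-then-standardize order the post argues for.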

The “proprietary rathole” is a real danger at the other end of the spectrum, particularly for single-OS browsers. Standards bodies, rather than holding back browser makers, should find some way to encourage those makers to document the features/interfaces of their added features, so that other makers can implement them. Should multiple vendors adopt them, and they prove popular with users, they could even be folded into the “standard.” Bottom up, driven by need — that is how technology is driven, and standards created, in virtually every arena; why are web browsers any different?

Alex, I’m in total agreement until your final conclusion. I agree that browsers will need to try new features out before standardization if we want to see faster iteration. But I don’t agree that WaSPs and other standardistas are standing in the way of this approach.

This all goes back to the long-held opinion you aired at the WaSP panel at SXSW this year. I simply don’t agree that WaSP projects a my-way-or-the-highway attitude.

Consider the recent CSS features added by WebKit: transformations, animations, gradients, masks, et cetera. They’ve very nearly _run out_ of standards to implement, so they’re starting to implement the wouldn’t-it-be-cool-if stuff. If I’m not mistaken, this is the exact sort of thing you’re wishing for.

I do concede that similar efforts on IE’s part would be met with far more hostility. But I think it’s far more important for them to implement DOM L2 Events than, say, CSS animations. They’re free to innovate, but I’m free to criticize them for misplaced priorities. They can have their cake as soon as they eat their broccoli.

good analogy! I think the 10-cent answer about why browsers are different is that the developers of the content don’t control them in any meaningful way. The rest of the software world has some amount of control regarding deployment environment. Changing the renderer (which is what we’re talking about when we talk about upgrading “the web”) goes hand-in-hand today with upgrading the *rest* of the browser as well, which requires the user to care…and users (to a one) don’t give a flying leap about CSS 2.1 support. They care about tabs and history features and extensions and security and compatibility.

In short, browsers are different because the reasons that they evolve in terms of rendering content are completely divorced from the reasons that anyone will pick a particular browser as their day-to-day client (assuming that said browser renders most of the web).

I find that stance odd: it’s ok for the minority market share browser to innovate, but not the browser that we REALLY NEED improvement in? I mean, sure, they need to get their CSS 2.1 story in order, but if they added all of Safari’s CSS extensions or implemented hbox and vbox, would that not cover a multitude of sins? Those who would answer “no”, IMO, are part of the problem and are the reason why organizations like WaSP have zero strategic viewpoint. It’s not that WaSP is “my way or the highway”, it’s that it actually can’t see what’s good for the web beyond whether or not something is standardized. It’s just not in the charter:

WaSP clearly lays out the problems that every webdev faces and then proceeds to suggest how it will continue to be bringing knives to gunfights for the foreseeable future:

Thus one of WaSP’s primary goals is to provide educational resources that can help our peers learn standards-compliant methods that are in their interest and that of their clients and site users.

Our problem isn’t that people are building non-standard markup, it’s that the markup and styling tools we’ve got are completely inadequate. Having won the last decade’s war, WaSP seems totally content to sit around the VFW hall and tell the youngins about how semantic markup will solve all their problems if only they’d listen.

Companies in the Web market are competing very hard. The W3C is a platform where companies come and decide to discuss around the table. The W3C offers tools (such as the Royalty Free Patent Policy, to keep the Web open) and system resources for achieving this. It takes time to create standards, and it doesn’t mean that everyone will agree. As long as there is no agreement on *implementations* between major vendors, it is very hard to achieve.

On the topic of other technologies – be it the Semantic Web, MMI, CSS, etc. – it really depends on their own market. The Semantic Web is really successful in some communities. CSS has lately taken on a new pace.

I don’t know if you have followed the work on HTML 5, where all browser vendors are in the group, actively discussing and implementing. There is a lot of work to do. It is a functional specification including the DOM, the markup, APIs, and the parsing algorithm. It is not written for content authors, but for developers.

Often things go wrong when the main companies of the market are not here.

You also cite JavaScript. It is done by ECMA, which is another standards organization. A new version of JavaScript is being ironed out there.

I’m aware of the W3C’s activities around HTML5, DOM, WebAPIs, etc and appreciate that it is very hard work. Having served (poorly) as an invited expert to the ECMAScript working group and having participated in OpenAjax I know that it takes folks more patient than myself to build good standards. I didn’t mean to suggest by my post that the W3C isn’t doing the right things of late, only that the impetus for taking on specs that actually matter to the web at large comes from outside; or in your parlance, from whatever motivates the Membership.

To lead the World Wide Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web.

But I argue that where the W3C has failed at that, it’s probably not the organization’s fault per se. Indeed, the whole post was simply an extended way of saying that blaming the W3C for a lack of progress in achieving that goal is misdirected opprobrium.

It would have been nice for the W3C to have exercised considerably better judgement about what activities to undertake since the turn of the century, but no one really had any right to expect better. The W3C is a pay-to-play membership organization and members shape what activities are pursued. In order to provide the level playing field (patent policies, etc.) that you outline it would be difficult to be organized any other way.

Please don’t think that I’m somehow angry or disappointed in the W3C. Much to the contrary; I just have expectations of the organization which I think more closely match its structure as a member-directed body. Many web developers, I fear, don’t understand this and therefore expect too much of the W3C, venerate it unnecessarily, and then blame it for outcomes which it cannot control.

Thanks very much for the follow-up. An organization is a strange thing but not a stone. The W3C Process evolved, and will continue to evolve and the way it is organized will also evolve given the new requirements and the strong push of the community for the open Web standards.

It has happened in the past already. The W3C had the same RAND system as the IETF. The community pushed very hard, saying that the W3C should give more guarantees that it would stay open, and so the RF Patent Policy was created. It created a lot of discussions and frictions, but in the end it was something beneficial.

And I agree with you that there will be interesting challenges to come, but they will push us to evolve too.

I have to mirror Karl Dubost’s line here: the Semantic Web hasn’t failed – it is just taking a little longer than planned. The first SemWeb specs came out in 1999. Nine years is a long time to wait, sure. But we have had equivalently long waits for CSS. And in the Semantic Web space, the stuff that W3C has worked on in the last few years has been driven by practicality, and much more by the sort of agile process that is desirable.

Look, for instance, at SPARQL. This was not some kind of waterfall model – they waited until there was some kind of agreement, took two years and just nailed the spec down, providing comprehensive test cases, involving the community. Obviously, the process used for SPARQL will not fit all W3C activities – HTML and CSS, and other standards directly relevant to widely-implemented browser technology will take longer and necessitate a lot more pain.

I think that the W3C is learning very quickly that they need to change and that the nature of the Web as a decentralised system is going to change what it means to be a standards body. Two of the four browser rendering engines are open source, after all. Overall, there are a great many reasons to be optimistic about the W3C.

Alex, thank you for an interesting post. You link to the Semantic web initiative under the word “Miserable [failure]”. I am not sure how you come to the conclusion that something is a “failure”? I know of many projects that are seeing a lot of benefit from many of the semweb technologies. I am sure they won’t consider that initiative a failure.

“I find that stance odd: it’s ok for the minority market share browser to innovate, but not the browser that we REALLY NEED improvement in? I mean, sure, they need to get their CSS 2.1 story in order, but if they added all of Safari’s CSS extensions or implemented hbox and vbox, would that not cover a multitude of sins?”

They are free to innovate as they wish. But until they’re caught up with the other three browsers, I’ll wonder aloud why implementing, say, hbox/vbox was more important to them than implementing DOM L2 Events, the remainder of CSS 2.1, or *anything else that we would be able to use TODAY were it not for IE*.

Your rubric is clearly different from mine. But don’t misunderstand: I find it most critical for IE to implement stuff that will make my life easier *tomorrow*, regardless of whether it’s a standard or not. For instance: I think IE should support the CANVAS element, even though HTML5 is not yet final. I’d want them to support it even in the absence of any standard simply because they’re the only browser left that *hasn’t*.
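The canvas complaint also illustrates how developers actually experience these gaps: libraries have to probe each browser for the feature rather than consult any spec. A minimal sketch of that kind of detection (the `supportsCanvas` helper and injected `doc` parameter are illustrative; in a page you would pass `document`):

```javascript
// Illustrative feature test: does this browser actually implement <canvas>?
// A real implementation exposes getContext on the created element; a
// browser without support (IE at the time) returns a generic element
// that knows nothing about drawing contexts.
function supportsCanvas(doc) {
  var el = doc.createElement("canvas");
  return !!(el.getContext && el.getContext("2d"));
}
```

Detection like this is cheap, which is part of why per-browser innovation ahead of the spec is tolerable for developers: you can branch on what the renderer really does rather than on what any standard promises.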

You may be right that WaSP needs to restate their focus. They mean “standards-compliant” as an alternative to the browser wars (proprietary features and APIs designed to stay proprietary), rather than as an alternative to innovation within a web standards framework (features that act as proposals for new standards). That’s my interpretation, at least.

Consider the recent CSS features added by WebKit: transformations, animations, gradients, masks, et cetera. They’ve very nearly _run out_ of standards to implement, so they’re starting to implement the wouldn’t-it-be-cool-if stuff. If I’m not mistaken, this is the exact sort of thing you’re wishing for.

One of the things I find interesting–and even amusing–is that (IMHO) Webkit is doing these things not because “they’ve nearly run out of cool things to implement”; I think they are doing this as part of the Mobile Me push, and as part of the iPhone browser/app push. The irony (and probably it’s a good thing, not a bad one) is that this is exactly the reason why XMLHTTP evolved–IIRC, it was created at the request of the Outlook Web team (i.e. Microsoft) to help them make Outlook Web a more seamless experience.

I’m not pessimistic about the W3C, I’m indifferent to it. I’m not a member, I pay no dues, and while it ratified some good specs back in The Day, it just hasn’t done much to keep its legitimacy or build its brand this century. Were I at the helm of such an organization, I’d be deeply worried that it has fallen prey to the problem that Hamming reported of Shannon:

When you are famous it is hard to work on small problems.

I’ve loved a lot of “dead” technologies: capabilities for security applications, functional programming languages, non-relational databases, JavaScript – lord knows I’ve worked on my share of things that took much longer than planned…so I’m completely willing to accept the premise that I’ll be terribly wrong some day about the importance of the things that I linked to, and in specific the “Semantic Web”.

It’s no stretch, however, to suggest that what the web has needed most, and for a long time, is for our existing markup language (HTML) to evolve some much less academically interesting semantics to better handle the applications being demanded of it. That work is now (slowly) under way via HTML 5, but it seems to have taken some doing to talk the W3C’s membership and participants down from the Semantic Web and XML ledge. That HTML 5 even clings to an XML serialization at all as a goal is pretty laughable given the glaring design flaws and adoption failure of XML as a format. What I worry most about is that it’s indicative of how jaundiced the “main line” W3C participant perspective has become with regards to the real-world application of W3C specs.

Real-world application is what buys the organization legitimacy, and my point wasn’t that those specs were “failures” in solving their specific problems, it was that the W3C can’t drive their adoption much beyond the already appreciated need in the market for any given standard. The Semantic Web might yet succeed (I’m a pessimist), but the W3C surely failed to drive its adoption by its own might. No one should have expected differently then or now, as Ian points out. Least of all the W3C membership and staff.

Jon Ferraiolo said to me once:

A good spec is one that everyone implements.

The W3C’s failure here isn’t that it took on Semantic Web, it’s that it did so when there was no clear, broad, deep constituency for it, no existing set of vendors jockeying to have their specs ratified, and no discernible relationship to how the real, actual web is evolving. It’s a great-big organizational fail.

Alex, good to see another Boilermaker bringing some more (sane) discussion of this topic to light. My point was really more that the W3C has failed in being a place where vendors can standardize the needed advancements in the web. They can’t _make_ anything happen, but they can _foster_ it.

I think you hit the nail on the head when you say “there’s no mechanism in place by which any browser vendor can take significant risks without incurring the wrath of a swarm of WaSPs”. Conceptually, why couldn’t Silverlight become a standard, or at least be considered? (not that I’m arguing for or against that happening) There are two different implementations of Silverlight, which is one of the W3C’s requirements.

Hey, you’re saying HTML5 is evolving slowly, which might be true, but from the specifications I contribute towards, it seems to be moving the fastest. Generally it also tries to specify those features vendors are interested in implementing now and will in fact drop features browser vendors do not implement.

In any case, tips on how to make it move faster? I’d like for that to happen if possible. :-)

No doubt that HTML 5 is moving fast as a spec, particularly in comparison to the dearth of new stuff over the last 8 years or so. The reality for web authors, though, is that even when the spec work wraps up, it’s still gonna be half a decade until we can use HTML 5. The bits that are pervasive (some variant of local storage, etc.) are already getting wrapped up by JavaScript libraries, which gives us new stuff on an accelerated timeframe, but that approach doesn’t really have the power to tackle things like new tags.
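The library-wrapping point can be made concrete: a shim probes for whichever persistence variant the browser ships and normalizes it behind one tiny API. A sketch of the pattern, with hypothetical names (`makeStore` and its injected `global` parameter are illustrative, not any specific library’s API):

```javascript
// Sketch of the kind of wrapper libraries use to paper over local-storage
// variants. `global` is injected for testability; in a page, pass window.
function makeStore(global) {
  if (global.localStorage) {
    // The variant that HTML 5 is converging on
    return {
      get: function (k) { return global.localStorage.getItem(k); },
      set: function (k, v) { global.localStorage.setItem(k, v); }
    };
  }
  // Last resort: an in-memory object that only lives for the page view
  var mem = {};
  return {
    get: function (k) { return mem.hasOwnProperty(k) ? mem[k] : null; },
    set: function (k, v) { mem[k] = v; }
  };
}
```

This works precisely because storage is a pure API feature; as the comment above notes, no amount of script wrapping can conjure up a renderer’s support for a genuinely new tag, which is why that class of change has to come from the browser vendors themselves.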

[…] of browser vendors decide to follow. The W3C can make recommendations as to what should be done but it has no authority to force implementation. If the W3C could actually create standards, then we’d not still be […]