Posts in the Standards Category

One of the few things I think XHTML2 got absolutely, totally, 125% right was freeing the href attribute from the few elements that accepted it and spreading it all over the language. It saddens me that this isn’t happening in HTML5, especially since at least 1.5 of the four reasons given seem off base or flat-out incorrect. From where I stand, at any rate.

Here, let me explain by having a pseudo-dialogue with the four reasons.

It isn’t backwards compatible with existing browsers.

Neither was CSS, table markup, PNG support, or any number of other worthwhile advancements in the web. And yes, table markup was an absolutely worthwhile advancement: previous to that, the only way to have a table of data that lined up in any fashion was to space-format it and throw the whole thing into a pre element. Ugly nonsemantic fun!

For that matter, if lack of backwards compatibility is an accepted reason to exclude something from HTML5, then a whole bracket of new elements—like, say, nav, article, aside, dialog, section, time, progress, meter, figure, video, datagrid, header, footer, need I go on?—need to come out of the specification right now. They’re totally unsupported, and may not even be stylable, by older browsers.

(Yes, I just proposed that the term for a group of elements be a “bracket”. A pod of whales, a flock of seagulls, a bracket of elements. Try it out, see how it feels on the tongue. A little angular, perhaps? Don’t worry, you’ll get used to it.)

It adds no new functionality that can’t already be achieved using the a element.

What? What?!?

Given a table where each row contains several cells of summary data, and there is a desire to be able to click on a row to get more detailed information via a search keyed off that summary information, please explain to me how being able to use <tr href="..."> on each row—as opposed to writing a whole bunch of JavaScript to associate a click event listener and delegation code and handler functions and target assembly logic just to simulate what <tr href="..."> would do, were it permitted—constitutes “no new functionality”. Please. I would love to hear that one.
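Just to put a point on what “no new functionality” actually costs, here’s roughly the scripting involved—a minimal sketch, in which the table ID, the data-href attribute, and the navigation behavior are all invented for illustration:

```js
// A minimal sketch of faking <tr href="...">. The table ID and the
// data-href attribute are invented for illustration; a real page
// would also want hover styling and keyboard support.
var table = document.getElementById('summary');
table.onclick = function (event) {
  event = event || window.event;
  var node = event.target || event.srcElement;
  // Walk up from whatever was clicked to the enclosing row.
  while (node && node.nodeName.toLowerCase() !== 'tr') {
    node = node.parentNode;
  }
  // Since href isn't allowed on tr, each row stashes its target
  // URL in an attribute, and we simulate the link by navigating.
  if (node && node.getAttribute('data-href')) {
    window.location.href = node.getAttribute('data-href');
  }
};
```

And that’s before you add the row highlighting, status-bar feedback, and keyboard accessibility that a real href would have given us for free.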

Unless of course HTML5 is going to let us wrap a elements around whatever arbitrary collection of elements we like, in which case, never mind. I’ll just wrap all my tr elements in a elements and be done with it. That’d be keen. Will that be possible? Will the HTML5 syntax be so flexible as to permit that?

And for the flip side of this, see Wilson Miner’s “Accessible Data Visualization with Web Standards”, where a bar graph is built out of spans so that they can all be wrapped in an a element in order to let you click on any “row”—that is, what would have been a row had he been able to use table markup—and get more information. Yes, absolutely, all that stuff should be in a table, but it was a case of having a table with a bunch of not-that-easy JS forced onto it, or having the contents of every cell in a row be a separate hyperlink to the same destination, or doing simple markup with savaged semantics. We shouldn’t be forced into that choice.

It doesn’t make sense for all elements, such as interactive elements like input and button, where the use of href would interfere with their normal function.

True enough. So don’t add it to those elements which would suffer, like input and button. Or, alternatively, define behavior conflict resolution in those cases. There might actually be good reasons to have button accept an href value as a fallback in cases where the normal function of the button fails in some manner.

Either way, the fact that adding href doesn’t work in some cases is no reason to forgo its addition in all cases.

Browser vendors have reported that implementing it would be extremely complex.

I’m always willing to hear why implementors think something is complex to implement, as they’re often subtle and fascinating insights into web browser development. Still, it seems to me that everything ubiquitous href attribution would imply can be recreated with a heavy dose of JavaScript event handlers and related code—(on)click for sending you off to the target (and any :active-style effects you wanted to bring in), (on)mouseover or plain old :hover for the interactive effects, et cetera, et cetera. Are they really saying that it’s more complex to support this sort of thing in markup than it is to support all the scripting and DOMiness that permits people to laboriously recreate it on their own? If so, why? I’m really curious to know what would make this “extremely complex”, which sounds a good deal more dire than “complex” or just plain old “difficult”.

I’m open to having my mind changed by strong evidence that this would be borderline impossible to implement, even though it can apparently be simulated via existing DOM/JS implementations. Anything short of that, however, isn’t going to convince me that this should be dropped. It was a good idea when it was in XHTML2, and it shouldn’t be abandoned if there’s any chance to save it.

When I first wrote Cascading Style Sheets: The Definitive Guide, the part that caused me the most difficulty and headaches was the line layout material. Several times I was sure I had it all figured out and accurately described, only to find out I was wrong. For two weeks I corresponded with Ian Hickson and David Baron, arguing for my understanding of things and having them show me, in merciless detail, how I was wrong. I doubt that I will ever stop owing them for their dedication to getting me through the wilderness of my own misunderstandings.

Later on, I produced a terse description of line layout which went through a protracted vetting process with the CSS Working Group and the members of www-style. At the time it was published, there was no more detailed and accurate description of line layout available. Even at that, corrections trickled in over the years, which made me think of it as my own tiny little The Art of Computer Programming. Only without the small monetary reward for finding errors.

The point here is that line layout is very difficult to truly understand—even given everything I just said, I’m still not convinced that I do—and that there are often surprises lurking for anyone who goes looking into the far corners of how it happens. As I’ve said before, my knowledge of what goes into the layout of lines of text imparts a sense of astonishment that any page can be successfully displayed in less than the projected age of the universe.

Why bring all this up? Because I went and poked line-height: normal with a stick, and found it to be both squamous and rugose. As with all those driven to such madness, I now seek, grinning wildly, to infect others.

Here’s the punchline: the effects of declaring line-height: normal not only vary from browser to browser, which I had expected—in fact, quantifying those differences was the whole point—but they also vary from one font face to another, and can also vary within a given face.

I did not expect that. At least, not consciously.

My work, let me show it to you: a JavaScript-driven test file where you can pick from a list of fonts and see what happens at a variety of sizes. (Yes, the JS is completely obtrusive; and yes, the JS is the square of amateur hour. Let’s move on, please. I’m perfectly happy to replace what’s there with unobtrusive and sharper JS, as long as the basic point of the page, which is testing line-height: normal, is not compromised. Again, moving on.)

When you first go to the test, you should (I hope) see a bunch of rulered boxes containing text using the very common font face Webdings, set at a bunch of different font sizes. The table shows you how tall the simple line boxes are at each size, and therefore the numeric equivalent for line-height: normal at those sizes. So if a line box is using font-size: 50px and the line box is 55 pixels tall, the numeric equivalent for line-height: normal is 1.1 (55 divided by 50).
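(If you want to check that arithmetic yourself, the measurement reduces to something like the sketch below. It is not the test page’s actual code; the element ID is invented, and it assumes a single-line box with no padding or borders.)

```js
// A sketch of the measurement: set a single line of text to a given
// font-size with line-height: normal, then divide the resulting line
// box height by the font-size. Assumes no padding or borders, so
// offsetHeight equals the line box height. The ID is invented.
function derivedNormal(id, fontSizePx) {
  var box = document.getElementById(id);
  box.style.fontSize = fontSizePx + 'px';
  box.style.lineHeight = 'normal';
  return box.offsetHeight / fontSizePx;
}
// e.g., derivedNormal('test', 50) returns 1.1 when the line box is 55px tall
```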

On my PowerBook, Webdings always yields a 1:1 ratio between the font-size and line box height. The ten-pixel font size yields a ten-pixel-tall line box, and so on.

Here is what the CSS 2.1 specification has to say about the normal value of line-height:

Tells user agents to set the used value to a “reasonable” value based on the font of the element. The value has the same meaning as <number>. We recommend a used value for ‘normal’ between 1.0 to 1.2. The computed value is ‘normal’.

This is basically what CSS has said since its first days (see the equivalent text in CSS1 or in CSS2 for confirmation) and there’s always been a widespread assumption that, since 1.0 is probably too crowded, something around 1.2 is much more likely.

So finding a value of 1 was a surprise. It was an even bigger surprise to me that this held true in Camino 1.5.2, Firefox 2.0.0.14, and Safari 2.0.4, all on OS X. Firefox 3b5 didn’t render Webdings at all, so I don’t know if it would do the same. I actually suspect not, for reasons best left for another time (and, possibly, a final release of Firefox 3).

Various browsers doing the same thing in an under-specified area of the spec? That can’t be right. It’s pretty much an article of faith that, given the chance to do anything differently, browsers will. The sailing was so unexpectedly smooth that I immediately assumed a storm lurked just over the horizon.

Well, I was right. All I had to do was start picking other font faces.

To start, I picked the next font on the list, Times New Roman, and the equivalent values for normal immediately changed. In other words, the numeric equivalents for Times New Roman are different than those for Webdings. The browsers weren’t maintaining a specific value for normal, but were altering it on a per-face basis.

Now, this is legal, given the way normal is under-specified. There’s room to allow for this behavior. It’s actually, once you think about it, a fairly good thing from a visual point of view: the best default line height for Times New Roman is probably not the best default line height for Courier New. So while I was initially surprised, I got over it quickly. The seemingly obvious conclusion was that browsers were actually respecting the fonts’ built-in metrics. This was reinforced when I found that the results were exactly the same from browser to browser.

Then I looked more closely at the numbers, and confusion set back in. For Times New Roman, I was getting values of 1.1, 1.12, 1.16, 1.15, 1.149, and 1.1499. If you were to round all of those numbers to two decimal places, you’d get 1.10, 1.12, 1.16, 1.15, 1.15, 1.15. If you round them all to one decimal place, you’d get 1.1, 1.1, 1.2, 1.2, 1.1, 1.1. They’re inconsistent.

But wait, I thought, I’m trying to compare numbers I derived by dividing pixels by pixels. Let’s turn it around. If I multiply the most precise measurement I’ve gotten by the various font sizes, I get… carry the two… 11.499, 28.7475, 57.495, 114.99, 1149.9, 11499. As compared to the actual values I got, which were 11, 28, 58, 115, 1149, and 11499.

Which means the results were inappropriately rounded up in some cases and down in others. 28.7475 became 28 and 1149.9 became 1149, whereas 57.495 became 58. Even though 11.499 became 11 and 114.99 became 115.
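Here’s the same check in script form, for anyone who wants to verify my carrying of the two:

```js
// Multiply the most precise derived ratio by each tested font size
// and compare against the measured line box heights from the test page.
var ratio = 1.1499;
var sizes = [10, 25, 50, 100, 1000, 10000];
var measured = [11, 28, 58, 115, 1149, 11499];
for (var i = 0; i < sizes.length; i++) {
  // e.g., 28.7475 was rounded down to 28, yet 57.495 was rounded up to 58
  console.log(sizes[i] + 'px: expected ' + (ratio * sizes[i]) +
    ', measured ' + measured[i]);
}
```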

This was consistent across all the browsers I was testing. So again, I was suspecting the fonts themselves.

And then I switched from Times New Roman to just plain old Times, and the storm was full upon me. I’ll give you the results in a table.

Derived normal equivalents for Times in OS X browsers

| font-size | Camino 1.5.2 | Firefox 2.0.0.14 | Safari 2.0.4 |
|-----------|--------------|------------------|--------------|
| 10        | 1            | 1.2              | 1.3          |
| 25        | 1            | 1                | 1.16         |
| 50        | 1            | 1                | 1.18         |
| 100       | 1            | 1                | 1.15         |
| 1000      | 1            | 1                | 1.15         |
| 10000     | 1            | 1                | 1.15         |

Much the same happened when comparing Courier New with plain old Courier: full consistency on Courier New between browsers, albeit with the same strange (non-)rounding effects as seen with Times New Roman; but inconsistency between browsers on plain Courier—with Camino yielding a flat 1 down the line, Firefox going from 1.2 to 1, and Safari having a range of values above the others’ values.

Squamous! Not to mention rugose!

Now it’s time for the stunning conclusion that derives from all this information, which is: not here. Sorry. So far all I have are observations. I may turn all this into a summary page which shows the results for all the font faces across multiple browsers and platforms, but first I’ll need to get those numbers.

I do have a few speculations, though:

Firefox’s inconsistency within font faces (see Times and Courier, above) may come from face substitution. That’s when a browser doesn’t have a given character in a given face, so it looks for a substitute in another face. If Firefox thinks it doesn’t have 10-pixel Times, it might substitute 10-pixel something else serif-ish, and that face has different line height characteristics than Times. I don’t know what that other face might be, since it’s not Times New Roman or Georgia, but this is one possibility. It is not the minimum font size setting in the preferences, as I’ve triple-checked to make sure I have that set to “None”.

Another possibility for Firefox’s line height weirdness is a shift from subpixel font rendering to pixelly font rendering. 10-pixel text in Firefox is distinctly pixelly compared to the other browsers I tested, while larger sizes are nice and smooth. Why this would drive up the line height by two pixels (20%), though, is not clear to me.

Much of what I’ve observed will likely be laid to rest at the doorsteps of the font faces themselves. I’d like to know how it is that the rounding behaviors are so (mathematically) messed up within faces, though. Perhaps ideal line heights are described as an equation rather than a simple ratio?
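For what it’s worth, numbers like 1.1499 do smell like font metrics. Here’s a sketch using ascent, descent, and line-gap values commonly cited for Times New Roman’s hhea table—treat all four numbers as assumptions, since I haven’t pulled them out of the font file myself:

```js
// Hypothetical: derive a default line height from TrueType-style
// metrics. These values are commonly cited for Times New Roman's
// hhea table, but they are assumptions here, not verified measurements.
var ascent = 1825, descent = 443, lineGap = 87, unitsPerEm = 2048;
var normal = (ascent + descent + lineGap) / unitsPerEm;
console.log(normal); // 1.14990234375 -- eerily close to the derived 1.1499
```

If that’s really where the numbers come from, then the per-face variation makes sense, even if the rounding still doesn’t.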

Again, this was all done in OS X; I’ll be very interested to find out what happens on Windows, Linux, and other operating systems. Side note for the Mac Opera fans warming up their flamethrowers: I’ve left Opera 9.27 for OS X out of this because it seems to cap font sizes at a size well below 1000, although this limit varied from one face to another. Webdings capped at 507 pixels, whereas Courier capped at 574 pixels and Comic Sans MS stopped at 707 pixels. I have no explanation, though doubtless someone will, but the upshot is that direct comparisons between Opera and the other browsers are impossible. For sizes up to 100 pixels, the results were exactly consistent with Camino, if that means anything.

The one tentative conclusion I did reach is this: line-height: normal is a jumbled terrain of inconsistent behaviors, and it’s best avoided in any sort of precision layout work. I’d already had that feeling, but at least now there’s some evidence to back up the feeling.

In any case, I doubt this is the last I’ll have to say on this particular topic.

Update 7 May 08: I’ve updated the test page with a fix from Ben Lowery so that it works in IE. Thanks, Ben! Now all I need is to add a way to type in any arbitrary font-family’s name, and we’ll have something everyone can use. (Or else a way to use JavaScript to suck up the names of all the fonts installed on a machine and put them into the dropdown. That would be cool, too.)
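(The first part, at least, is nearly trivial—here’s a sketch, with the element IDs invented for illustration:)

```js
// A sketch of the "type in any font-family" idea: apply whatever the
// user entered to every test box. The IDs are invented for illustration.
function applyCustomFont() {
  var family = document.getElementById('fontInput').value;
  var boxes = document.getElementById('testArea').getElementsByTagName('div');
  for (var i = 0; i < boxes.length; i++) {
    boxes[i].style.fontFamily = family;
  }
}
```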

Seriously; no sarcasm or passive-aggressiveness intended. If I thought my reset styles, or really anything I’ve ever published or advocated, was a be-all end-all ultimate solution for every designer and design that’s ever been and could ever be, I’d be long past due for six rounds on the receiving end of a clue-by-four.

Reset styles clearly work for a lot of people, whether as-is or in a modified form. As I say on the reset page, those styles aren’t supposed to be left alone by anyone. They’re a starting point. If a thousand people took them and created a thousand different personalized style sheets, that would be right on the money. But there’s also nothing wrong with taking them and writing your own overrides. If that works for you, then awesome.

For others, reset styles are more of an impediment. That’s only to be expected; we all work in different ways. The key here, and the reason I made the approving comment above, is that you evaluate various tools by thinking about how they relate to the ways you do what you do—and then choose what tools to use, and how, and when. That’s the mark of someone who thinks seriously about their craft and strives to do it better.

I’m not saying that craftsmen/craftswomen are those people who reject the use of common tools, of course. I’m saying that they use the tools that fit them best and modify (or create) tools to best fit them, applying their skills and knowledge of their craft to make those decisions. It’s much the same in the world of programming. You can’t identify a code craftsman by whether or not they use this framework or that language. You can identify them by how they decide which framework or language to use, or not use, in a given situation.

Craftsmanship is something I’ve been thinking about quite a bit recently, as has Joshua Porter. I delivered a keynote address on that very topic just a few days ago in Minneapolis, and my thinking infuses both of the talks I’m giving next week at An Event Apart New Orleans. I’ve started looking harder for evidence of it, both in myself and in what I see online, and I believe striving toward being a craftsman/craftswoman is an important process for anyone who chooses to work in this field.

Because this isn’t a field of straightforward answers and universal solutions. We are often faced with problems that have multiple solutions, none of them perfect. To understand what makes each solution imperfect and to know which of them is the best choice in the situation—that’s knowing your craft. That’s being a craftsman/craftswoman. It’s a never-ending process that is all the more critical precisely because it is never-ending.

So it’s no surprise that we, as a community, keep building and sharing solutions to problems we encounter. Discussions about the merits of those solutions in various situations are also no surprise. Indeed, they’re exactly the opposite: the surest and, to me, most hopeful sign that web design/development continues to mature as a profession, a discipline, and a craft. It’s evidence that we continue to challenge ourselves and each other to advance our skills, to keep learning how better to do what we love so much.

Because as lovely as it is to see that you can, in fact, get one or more browser implementation teams to jump in a precisely defined sequence through a series of cunningly (one might say sadistically) placed hoops, half of which are on fire and the other half lined with razor wire, it doesn’t strike me as the best possible use of the teams’ time and energy.

No, I don’t hate standards, though I may hate freedom (depends on who’s asking). What I disagree with is the idea that if you cherry-pick enough obscure and difficult corners of a bunch of different specifications and mix them all together into a spicy meatball of difficulty, it constitutes a useful test of the specifications you cherry-picked. Because the one does not automatically follow from the other.

For example, suppose I told you that WebKit had implemented just the bits of SMIL-related SVG needed to pass the test, and that in doing so they exposed a woefully incomplete SVG implementation, one that gets something like 2% pass rates on actual SMIL/SVG tests. Laughable, right? Yes, well.

Of course, that’s in a nightly build and they might totally support SMIL by the time the corresponding final version is released and we’ll all look back on this and laugh the carefree laugh of children in springtime. Maybe. The real point here is that the Acid3 test isn’t a broad-spectrum standards-support test. It’s a showpiece, and something of a Potemkin village at that. Which is a shame, because what’s really needed right now is exhaustive test suites for specifications—XHTML, CSS, DOM, SVG, you name it. We’ve been seeing more of these emerge recently, but they’re not enough. I’d have been much more firmly in the cheering section had the effort that went into Acid3 gone into, say, an obsessively thorough DOM test suite.

I’d had this post in mind for a while now, really ever since Acid3 was released. Then the horse race started to develop, and I told myself I really needed to get around to writing that post—and I got overtaken. Well, that’s being busy for you. It’s just as well I waited, really, because much of what I was going to say got covered by Mike Shaver in his piece explaining why Firefox 3 isn’t going to hit 100% on Acid3. For example:

Ian’s Acid3, unlike its predecessors, is not about establishing a baseline of useful web capabilities. It’s quite explicitly about making browser developers jump… the Acid tests shouldn’t be fair to browsers, they should be fair to the web; they should be based on how good the web will be as a platform if all browsers conform, not about how far any given browser has to stretch to get there.

That’s no doubt more concisely and clearly stated than I would have managed, so it’s all for the best that he got to say it first.

By the by, I was quite intrigued by this part of Mike’s post:

You might ask why Mozilla’s not racking up daily gains, especially if you’re following the relevant bugs and seeing that people have produced patches for some issues that are covered by Acid3.

The most obvious reason is Firefox 3. We’re in the end-game of building what I really do believe is the best browser the web has ever known, and we expect to be putting it in the hands of more than 170 million users in a pretty short period of time. We’re still taking fixes for important issues, but virtually none of the issues on the Acid3 list are important enough for us to take at this stage. We don’t want to be rushing fixes in, or rushing out a release, only to find that we’ve broken important sites or regressed previous standards support, or worse introduced a security problem. Every API that’s exposed to content needs to be tested for compliance and security and reliability… We think these remaining late-stage patches are worth the test burden, often because they help make the web platform much more powerful, and reflect real-web compatibility and capability issues. Acid3’s contents, sadly, are not as often of that nature.

You know, it’s weird, but that seems really familiar, like I’ve heard or read something like that before. Now if only I could remember… Oh yeah! It’s basically what the IE team said about not passing Acid2 when the IE7 betas came out, for which they were promptly excoriated.

Huh.

Well, never mind that now. Of course it was a totally different set of circumstances and core motivations, and I’m sure there’s absolutely no parallel to be drawn between the two situations. At all.

Returning to the main point here: I’m a little bit sad, to tell the truth. The original acid test was a perfect example of what I think makes for a good stress test. Recall that the test’s original name, before it got shorthanded, was the “Box Model Acid Test”. It was a test of CSS box model handling, including floats. That’s all it was designed to do. It did that fairly well for its time, considering it was part of a CSS1 test suite. It didn’t try to combine box model testing with tests for PNG support, HTML parse error recovery, and DOM scripting.

To me, the ideal CSS test suite is one that has a bunch of basic property/value tests, like the ones I’ve been responsible for creating (1, 2), along with a bunch of acid tests for specific areas or concepts in that specification. So an acidified CSS test suite would have individual acid tests for the box model, positioning, fonts, selectors, table layout, and so on. It would not involve scripting or markup parsing (beyond what’s needed to handle selectors). It would not use animated SVG icons. Hell, it probably wouldn’t even use PNGs, except possibly alphaed PNGs when testing opacity and RGBA colors. And maybe not even then.

So in a DOM test suite, you’d have one test page for each method or attribute, and then build some acid tests out of related bits (say, on an entire interface or set of closely related interfaces). And maybe, at the end, you’d build an overarching acid test that rolled everything in the DOM spec into one fiendishly difficult test. But it would be just about the DOM and whatever absolute minimum of other stuff you needed, like text rendering and maybe GIF support. (Similarly, the CSS tests had to assume some basic HTML and CSS selector support, or else everything else fell down.)
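To make “one test page for each method” concrete: a hypothetical page for document.getElementById might boil down to nothing more than this—a sketch, not anything from a real test suite:

```js
// A hypothetical per-method DOM test page: one method, one
// unambiguous pass/fail answer, and nothing else.
function testGetElementById() {
  var el = document.createElement('p');
  el.id = 'dom-test-target';
  document.body.appendChild(el);
  var found = document.getElementById('dom-test-target');
  document.body.removeChild(el);
  return found === el ? 'PASS' : 'FAIL';
}
document.title = 'getElementById: ' + testGetElementById();
```

Hundreds of small, boring pages like that would tell implementors far more than one fiendish showpiece.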

And then, after all those test suites have been built up and a series of acid tests woven into them, with each one culminating in its own spec-spanning acid test, you might think about taking those end-point acid tests and slamming them all together into one super-ultra-hyper-mega acid test, something that even the xenomorphs from the Alien series would look at and say, “That’s gonna sting”. That would be awesome. But that’s not what we have.

I fully acknowledge that a whole lot of very clever thinking went into the construction of Acid3 (as was true of Acid2), and that a lot of very smart people have worked very hard to pass it. Congratulations all around, really. I just can’t help feeling like some broader and more important point has been missed. To me, it’s kind of like meeting the general challenge of finding an economical way to loft broadband transceivers to an altitude of 25,000 feet (in order to get full coverage of large metropolitan areas while avoiding the jetstream) by daring a bunch of teams to plant a transceiver near the summit of Mount Everest—and then getting them to do it. Progress toward the summit can be demonstrated and kudos bestowed afterward, but there’s a wider picture that seems to have been overlooked in the process.

I woke up this morning (duh DAAAH dah DUH) and yesterday’s announcement was the first thing on my mind. No doubt it’ll be a recurrent topic, at least for a little while.

One of the takeaways is what this change demonstrates about the IE team: standards support is and was their preferred default. If it weren’t, they just would have found a way to square the IE7-default behavior with the Interoperability Principles announced late last month (slightly tricky but entirely possible). That they initially chose otherwise speaks volumes about the pressures they face internally, and their willingness to publicly change direction speaks volumes about their commitment to supporting standards. While I’m sure community feedback informed their decision, they pretty much knew what the reaction would be from the get-go. If that was going to be the deciding factor, they would’ve chosen differently up front.

So what drove that change? I keep coming back to two things, both of which were explicitly mentioned in yesterday’s announcement.

The first is, perhaps obviously, the previously mentioned Interoperability Principles. Head on over there and read Principle II, “Support for Standards”. If that isn’t a solid foundation on which to build an internal case for change, I don’t know what is. I’m wryly amused by the idea that the IE team used the Interoperability Principles as a way to batter their way out of the grip of those internal pressures I mentioned. The former aikido student in me finds that very satisfying. True, the Principles came under fire for being just another set of empty words, but it would seem that they can be used for at least some concrete good.

As for the second, there’s a phrase repeated between the two announcements that I didn’t quote yesterday because I was still pondering its meaning. I’m still not certain about it, but having had a chance to sleep on it, my initial reading hasn’t changed, so I’m going to quote and comment on it now. First, from the press release:

“While we do not believe there are currently any legal requirements that would dictate which rendering mode must be chosen as the default for a given browser, this step clearly removes this question as a potential legal and regulatory issue,” said Brad Smith, Microsoft senior vice president and general counsel.

Speaking of Opera, there’s another side to all this that I find quite interesting. So far, the reaction to Microsoft’s announcement has been overwhelmingly positive. The sense I’ve picked up is, “Hooray! IE will act like browsers always have, and the problem is solved!”.

But is it? The primary objection raised by Opera and several members of the community was that version targeting is an anti-competitive move, one which will force browser makers like Opera and authors of JavaScript libraries to support an ever-increasing and complex web (sorry) of rendering-engine behaviors in the market leader. So far as I can tell, the change in default behavior does next to nothing to address that objection. The various versions will still be there and still invoke-able by any page author who so chooses. Yes, the default will be better for authors, but I don’t see how things get any better for Opera, Firefox, Safari, jQuery, Prototype, et al.

Perhaps I’ve missed something basic (“Again!” shouts the chorus). If so, what? If not, then why all the hosannas?

Now here’s something I didn’t expect to see when I woke up this morning:

“Microsoft Expands Support for Web Standards: Company outlines new approach to make standards-based rendering the default mode in Internet Explorer 8, will work with Web designers and content developers to help with standards behavior transition.”

Seriously, that’s the title and subhead of Microsoft’s latest press release.

About halfway through, there’s this from Ray Ozzie:

…we have decided to give top priority to support for these new Web standards. In keeping with the commitment we made in our Interoperability Principles of being even more transparent in how we support standards in our products, we will work with content publishers to ensure they fully understand the steps we are taking and will encourage them to use this beta period to update their sites to transition to the more current Web standards supported by IE8.

Microsoft recently published a set of Interoperability Principles. Thinking about IE8’s behavior with these principles in mind, interpreting web content in the most standards compliant way possible is a better thing to do.

We think that acting in accordance with principles is important, and IE8’s default is a demonstration of the interoperability principles in action.

I’m relieved and glad on the one hand, and a little worried on the other. It’s not like the issues I discussed, or Jeffrey wrote about, have gone away. It’s just that the way in which they’re handled by IE has shifted—which in some ways is a huge difference.

I think what worries me most is the possibility that when the public beta hits, there will be enough incompatibility problems that pushback from other constituencies forces a change back to the original behavior. I hope not. I hope that what will happen is that any problems that come up will be addressed by spreading the news far and wide that there’s a simple one-line fix for those sites.
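(That one-line fix being the version-targeting meta element—a site that breaks under IE8’s standards mode can pin itself to IE7’s rendering like so:)

```html
<meta http-equiv="X-UA-Compatible" content="IE=7" />
```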

I’m glad that IE will act as browsers have always done, and default to the latest and greatest in the absence of any explicit direction to the contrary. I’m doubly glad that the IE team is willing to do that, even knowing what they have to handle. And I’m triply glad that the proposal was made in public ahead of time, with plenty of opportunity for debate, so that we could have a chance to weigh in and affect the browser’s behavior.

I’m not going to comment on the views presented; both gentlemen do a fine job. What I do wish to add, or perhaps to restate, is an observation about everyone interested in, and thinking or arguing about, this topic:

We all care about the same thing.

We all want to advance web standards. We all want browsers to improve their support. We all want better and more advanced specifications. We all want to reduce inconsistencies. We all want a better web.

The disagreement is over how best to get there given the situation we face now, as well as how we perceive that current situation. A recurrent metaphor for me is that we’re a large group of pioneers trying to chart the best course through an unknown country, and there is disagreement on which route entails the least risk to the whole group. Cross the desert or the mountains? Traverse a swampy delta or a hilly forest? Move through this valley or that one?

Sometimes what binds us is strong enough that the few differences seem sharper by comparison. That shouldn’t keep us from remembering what we have in common, and the importance of that commonality.