Introduction

His article “To Hell with WCAG 2” was an eye-opener. In it, Joe Clark expressed his anger about the W3C, its slow, lobby-driven, bureaucratic processes, and the gruelling internal fights within working groups, the Web Accessibility Initiative (WAI) in particular. The WCAG 2.0 Working Draft was a disappointment: basically unreadable, impossible to understand, and failing on major issues after five years in the making (WCAG 1 took a little longer than two years to become a finalized standard).

At that time I hadn’t read WCAG 2 yet. Joe Clark’s passionate article was a revelation, and all the blogs I read agreed with him. Perhaps I wasn’t the only one who hadn’t read the working draft by then. Because now I have, and while there are some valid points in his inflammatory speech, there are also major flaws.

Did you know that, contrary to Joe Clark’s beliefs, validating code is a requirement? That semantic markup is enforced? Other issues are totally overrated, like claiming that requiring a decent tab order amounts to the “prohibition of CSS layouts.” But more on that later.

Clark’s public criticism might be seen as a bold and important step. However, actually calling group members “arrogant and ignorant,” hanging up during conference calls and allegations like “some teenagers have greater understanding of valid, semantic markup than the Working Group” (ibid.) hurts the cause. If you search the WAI mailing list for more contributions from Joe Clark, you will immediately notice his egocentric, cynical, and often insulting style. He criticizes ignorance and harassment within that working group, but he’s part of the problem.

Another accusation has to be seen in the context of IBM’s contribution to the Mozilla source code for DHTML accessibility. To implement that important technique, IBM proposed an extension of the HTML specification to allow a negative tabindex. Joe Clark overstates that simple extension as “IBM actively promoting a DHTML technique that breaks the HTML specification.”
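For context, here is a minimal sketch of the disputed technique; the element ids and the script are my own illustration, not taken from IBM’s patch. A tabindex of -1 keeps an element out of the tab order while still letting a script move keyboard focus to it, which is exactly what DHTML widgets need:

```html
<!-- Sketch of the disputed extension: tabindex="-1" removes an element
     from the tab order but still allows scripts to focus it. -->
<ul id="menu">
  <li id="item-1" tabindex="-1">First entry</li>
  <li id="item-2" tabindex="-1">Second entry</li>
</ul>
<script type="text/javascript">
  // A DHTML widget can now move keyboard focus in response to arrow keys:
  document.getElementById('item-2').focus();
</script>
```

Without the extension, only links and form controls could receive scripted focus, so the widget’s list items would be unreachable for keyboard users.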

Differences between WCAG 1 and WCAG 2

WCAG 2 introduces four basic principles of accessibility. Content must be POUR:

Perceivable

Operable

Understandable

Robust

The guidelines are organized around these four principles, and checkpoints for the guidelines are now called success criteria. Each success criterion comes with an extended commentary, techniques to meet the guidelines, and common failures. That’s more comprehensive than version 1 and leaves less room for ambiguity.

The success criteria for each guideline are organized into three levels, though not all guidelines contain success criteria at every level. Levels are like WCAG 1 priorities, but they are more precise. WCAG 1 checkpoints had only one priority allocated, while a success criterion with multiple levels can be more differentiated.

In the past, programming a website to conform to level Triple-A took an almost impossible and disproportionate effort. Besides, guidelines always leave room for interpretation, so experts may disagree on whether a criterion has been met. The working group was therefore pragmatic enough to grant Triple-A conformance if at least 50% of all level 3 success criteria are fulfilled.

Techniques for a website are defined in a public baseline, like “the specification that this content relies upon is: XHTML 1.0 (Strict). The specifications that this content uses but does not rely on are: JavaScript 1.2, CSS 2.” You can specify any reasonable technology; there is no longer a preference for W3C technologies. If Flash is accessible enough and sufficient for the job, you don’t have to use SMIL. More on the baseline concept later. Your conformance claims and conformance scope are published in the same way.

Since the working group tried to be as generic as possible, some terms have changed. They speak of “web units” instead of “pages,” because web units include things like Ajax applications, which wouldn’t be covered by the term “page.” You get used to the new terms fairly quickly.

What’s wrong with WCAG 2?

The information is still too scattered across different documents, which makes it hard to read and comprehend. However, the working group notes that this is still a work in progress, and they want to create separate files for each success criterion as well as a navigation structure. That sounds like the commented German accessibility guidelines, so I’m confident the end result will be more usable.

I must admit it could also be more readable. A professional copywriter could help, but I can imagine the exact wording of each success criterion was probably a big issue in WAI meetings, so the stakeholders won’t give up so easily just for enhanced readability.

We’ve come a long way in understanding that these features are not for cognitively challenged people alone; they are basic requirements for all of us. Young people with insufficient reading skills, elderly people who are new to the Internet: we all benefit from features like breadcrumb trails, consistent navigation with clear wording, well-written text, or relevant search results. These guidelines spell usability! We can’t allow them to be removed!

Discussion

His article raised critical awareness of the last-call W3C working draft, which led to an extension of the comments period. Still, the degree of concern and fear he stirred up was unnecessary. Many issues are exaggerated, distorted by omission, or plain wrong. Let’s discuss them in detail:

1. Definitions of “page” and “site”

Exactly what a “page” is, let alone a “site,” will be a matter of dispute.

Agreed. Although everybody knows what these terms mean, a formal definition is missing. But there are more important issues; that’s typical Joe Clark.

2. Validation

Not true. Success criterion 4.1.1, in plain English, demands that IDs must be unique and elements properly nested. One technique to ensure that is validation. How could Joe Clark miss such an important point? Had he actually read his printouts, or was he just summarizing his correspondence with the working group?
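In markup terms, this is roughly what 4.1.1 rules out; the example is my own, not taken from the draft:

```html
<!-- Fails: duplicate id, improperly nested inline element -->
<p id="note">An <em>important note</p></em>
<p id="note">Another note</p>

<!-- Passes: unique ids, properly nested elements -->
<p id="note-1">An <em>important</em> note</p>
<p id="note-2">Another note</p>
```

Run the first fragment through any validator and both errors are flagged immediately, which is why validation is the obvious technique here.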

However, it is true that a comparison of DOM outputs is offered as an alternative technique to ensure proper nesting and well-formedness. That won’t work. Even valid code doesn’t result in identical DOM outputs across browsers. Take, for example, line breaks and code indented with tabs (white-space): Mozilla counts these as text nodes, while IE ignores them in the DOM tree. Besides, it would be a tremendous effort to compare the trees manually. Would somebody please create a validation tool with different browser engines under the hood?
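To illustrate with a simplified example of my own, the very same valid fragment yields different trees:

```html
<!-- Valid markup, indented with line breaks and tabs -->
<ul>
	<li>one</li>
	<li>two</li>
</ul>
<!-- Mozilla's DOM tree: the ul element has five child nodes, because
     the three whitespace runs become text nodes alongside the two li
     elements. IE's DOM tree: the ul has only the two li children. -->
```

So two perfectly conforming engines disagree on the node count before any real error enters the picture.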

3. Table layout

Of course we know the disadvantages of table layout, and they are a pain to every standardista. From a pure accessibility standpoint they are tolerable as long as they can be linearized and semantic markup like <th> isn’t misused. But speaking semantically, isn’t a <td> supposed to represent tabular data?
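A quick sketch of that distinction, using an example of my own:

```html
<!-- Misuse: <th> abused in a layout table just for bold, centered text -->
<table>
  <tr><th>Navigation</th><td>Content area</td></tr>
</table>

<!-- Proper use: <th> marks genuine column headers of tabular data -->
<table summary="Visitors per month">
  <tr><th scope="col">Month</th><th scope="col">Visitors</th></tr>
  <tr><td>May</td><td>10,400</td></tr>
</table>
```

A screen reader announces the header cells of the second table when reading its data cells, which is exactly the relationship a layout table falsely implies.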

4. Blinking elements

Everybody hates the infamous blink tag because it makes text unreadable. But I don’t think that’s what the guideline is speaking of; it’s more about blinking in banner ads or Ajax notifications. For three seconds, I can live with that. The difference between blinking and flashing is the frequency: the latter can trigger epileptic seizures. Simply avoid anything between three and fifty flashes per second.

5. Baseline technologies

You’ll be able to define entire technologies as a “baseline,” meaning anyone without that technology has little, if any, recourse to complain that your site is inaccessible to them.

Not true. Baselines have to be reasonable. If they are not reasonable, others, including your government, can set a baseline.

There are a couple of examples of reasonable baselines, like “only technologies that have been widely supported by more than one accessible and affordable user agent for more than one release” for a government site. More examples can be found in the …

6. Conformance claims

If you wish to claim WCAG 2 compliance, you must publish a checklist of declarations more reminiscent of a forced confession than any of the accessibility policies typically found today.

Hmm, which of the following do you think is clearer?

On 5 May 2006, “G7: An Introduction” http://telcor.example.com/nav/G7/intro.html conforms to W3C’s WCAG 2 Conformance Level Double-A. The following additional success criteria have also been met: 1.1.2, 1.2.5, and 1.4.3. The baseline for this claim is UDBaseline#1-2006 at http://UDLabs.org/baselines#1-2006.html. The specification that this content “relies upon” is: XHTML 1.0 (Strict), and Real Video. The specifications that this content “uses but does not rely on” are: JavaScript 1.2, CSS 2.

My vote goes to the WCAG 2 conformance claim. Include it elegantly as an RDF file referenced from a <link> element (something missing in this working draft), and I’m a happy developer.
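To be clear, the working draft defines no such mechanism; what I have in mind is something like this hypothetical sketch, where the rel value and file name are made up for illustration:

```html
<!-- Hypothetical: a machine-readable conformance claim linked from
     the document head. Nothing in the draft specifies this. -->
<link rel="meta" type="application/rdf+xml"
      href="/wcag2-conformance-claim.rdf"
      title="WCAG 2 conformance claim" />
```

Tools could then discover and aggregate claims automatically instead of scraping prose statements.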

7. Conformance scope

You’ll be able to define entire directories of your site as off-limits to accessibility (including, in WCAG 2’s own example, all your freestanding videos).

That’s true. But you can’t exclude integral parts of a process, like parts of a shop, though a further definition of which parts can be excluded and which can’t would be required. It becomes clearer when you take an example where the scope is set with a date:

Materials with creation or modification dates before 31 December 2006 conform to WCAG 1.0 Level Double-A. Materials with creation or modification dates after 31 December 2006 conform to WCAG 2.0 Level Double-A.

I can imagine cases where new content does conform to WCAG 2 while nobody bothers to touch really old content somewhere deep in the archives. So scoping is a practical matter: better to have conformance for the new and important parts than none at all.

8. Video

Not that anybody ever made them accessible, but if you post videos online, you no longer have to provide audio descriptions for the blind at the lowest “conformance” level. And only prerecorded videos require captions at that level.

Joe Clark was dubbed the king of closed captions, so from his point of view WCAG 2 must be a step backwards: audio descriptions for prerecorded video are required at level 2; at level 1, either audio descriptions or a transcript is sufficient. Audio descriptions for live video content were dropped in November 2005, and captions for live videos are level 2.

WCAG 1 was very vague and didn’t distinguish between live and prerecorded video: everything had to have audio descriptions at the lowest level. The WCAG 2 approach doesn’t go that far. Although this might be disappointing, it is more differentiated and more reasonable.

9. Audio

I agree that not many people have the equipment to correctly measure a difference of 20 dB(A), but as a rule of thumb, any dialogue should be easy to understand. Having my own programme on a free radio station, I can dig that. But there’s another major point Joe misses: although audio-only content is technically not multimedia, it still has to be transcribed.

10. Skip links

You can put a few hundred navigation links on a single page and do nothing more, but if you have two pages together that have three navigation links each, you must provide a way to skip navigation.

Skip links are good; I don’t mind having them everywhere. But under WCAG 1, “a few hundred navigation links” would have been required to be structured with subheadlines to aid understanding. That’s the real issue here!
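A sketch of both techniques side by side; the URLs and headings are made up:

```html
<!-- The WCAG 2 requirement: a way to skip repeated navigation -->
<a href="#content">Skip to main content</a>

<!-- The WCAG 1 idea: long link lists structured with subheadlines -->
<h2>Products</h2>
<ul>
  <li><a href="/widgets/">Widgets</a></li>
  <li><a href="/gadgets/">Gadgets</a></li>
</ul>
<h2>Support</h2>
<ul>
  <li><a href="/faq/">FAQ</a></li>
</ul>

<div id="content">
  <h1>Welcome</h1>
</div>
```

The subheadlines let screen reader users jump between navigation groups directly, which a single skip link can’t offer.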

11. Offscreen positioning

You can’t use offscreen positioning to add labels (e.g., to forms) that only some people, like users of assistive technology, can perceive. Everybody has to see them.

Although the intent of that criterion clearly is to make structure available to screen readers through semantic markup, the sentence Joe Clark refers to could be interpreted the way he does. Here’s the original:

The purpose of this success criterion is to ensure that when such relationships are perceivable to one set of users, those relationships can be made to be perceivable to all.

That’s not really new. Think of zoom readers with tab navigation via keyboard. If you place content offscreen, it can be quite irritating when an element gains focus but still isn’t visible. Good practice requires at least moving the element to a visible position when it gains :focus. Labeling forms, as in Joe’s example, can be achieved without offscreen positioning anyway, since the title attribute is deemed sufficient.
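A minimal sketch of that good practice; the class name is my own:

```html
<style type="text/css">
  /* Position the link offscreen by default */
  a.skip { position: absolute; left: -9999px; }
  /* Bring it back into the document flow when it receives keyboard focus */
  a.skip:focus { position: static; }
</style>
<a class="skip" href="#content">Skip to content</a>
```

Sighted keyboard users then see the focused link instead of watching the focus indicator vanish into nowhere.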

12. Tab source order

CSS layouts, particularly those with absolutely-positioned elements that are removed from the document flow, may simply be prohibited at the highest level. In fact, source order must match presentation order even at the lowest level.

Death to any-order columns! Death to the Holy Grail! I can understand the feelings some have towards that, but again, it’s nothing new. In fact, Germany’s eAccessibility law implements it as a priority 2 feature, not priority 3. Tab navigation shouldn’t be irritating and jump across the page, so take care of your source order.

13. Idioms, acronyms, and pronunciation

Also at the highest level, you have to provide a way to find all of the following:

Definitions of idioms and “jargon”

Expansion of acronyms

Pronunciations of some words

While for English-speaking developers, marking up foreign-language passages might be an exceptional case demanding an unusual amount of “fanatical care,” people in countries like Japan or Germany, where technical texts in particular are full of anglicisms, have had a long time to get accustomed to it.

In fact, according to WCAG 1, identifying any language change was a priority 1 requirement, while in WCAG 2 only the document’s primary language is level 1, and only passages or phrases within the text are level 2; single words are exempt. This is a considerable reduction in workload. And considering that screen readers make a subtle pause before language switches, which can get annoying in texts with many foreign words, it is also an improvement for those users.
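In markup, the two levels look like this; the example sentences are my own:

```html
<!-- Level 1: declare the document's primary language on the root element -->
<html lang="de" xml:lang="de">

<!-- Level 2: mark up whole passages or phrases in another language;
     single foreign words like everyday anglicisms are exempt -->
<p>Wie Joe Clark schrieb: <span lang="en">To hell with WCAG 2.</span></p>
```

A screen reader switches its pronunciation engine at the span boundary instead of mangling the English phrase with German phonetics.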

14. Alternate documents

You also have to provide an alternate document if a reader with a “lower secondary education level” couldn’t understand your main document. (In fact, WCAG 2 repeatedly proposes maintaining separate accessible and inaccessible pages. In some cases, you don’t necessarily have to improve your inaccessible pages as long as you produce another page.)

Again, that’s an old friend, and it’s emphasized that an alternate version “is a fallback option and is not preferable to making the content itself accessible.” So what? If a technical document for a specific target audience can’t use very simple language, it is common to provide a text summary for people with lower reading skills. That doesn’t mean we forget about accessibility and start publishing nothing but alternate text versions again.

Conclusion

Joe Clark scores some points, but on closer look his article leaves a bitter taste, as another tool for enforcing his point of view on the Web Content Accessibility Working Group. Okay, probably most of us who care about web standards today were nerds as teenagers. But then again, most of us have quit playing Dungeons and Dragons and got a social life. We don’t spend hours in front of our TVs writing angry, nitpicking letters to the CBC. We don’t harass W3C working groups, and when they won’t be intimidated, we don’t found our own. Be gentle and play with the other kids.

If you want reform then do it from the inside — the subtlety of subversion is more effective than revolution.

I have no problem admitting I once played D&D, also my hairstyle in the eighties was ugly. I would have commented in your blog, alas it seems comments are disabled.

You are right about the lateness of my response, but the reason was not a language barrier; I simply didn’t have the time to research earlier. About two weeks ago I gave a talk about WCAG 2 at a local developer barcamp, for which I read the entire working draft and the other documents, and compared them with the claims in your article. I was disappointed that so many of your claims were just hot air, and I decided to share the information.

I don’t have anything against you as a person. Please excuse my stereotyped irony about D&D nerds; judging from the very positive responses to your @media 2005 speech, I think you’ve got some humo(u)r.

Valid markup is not enforced; “unambiguous parsing” is. This is not the same thing: validation is only one technique you may use to try to ensure unambiguous parsing. I don’t agree with everything Joe Clark says, but I do agree with a lot of it. He can be acerbic at times, but that doesn’t mean he’s not right.

Although one thing both you and Joe missed (and I did too, at first) is that the scoping of conformance claims isn’t new to WCAG 2.0; you can do this in WCAG 1.0 also.

I disagree with you on the baselines thing, because I don’t think that, in practice, governments will set baselines for all sites from their country, meaning that people can exclude any sort of content they want.

If you want more detail you can read my initial take on WCAG 2.0 or my overview of it if you like. I am hopeful WCAG 2.0 can be saved but I do think it will need a lot of work.

And yes, I’ve still got my D&D rulebooks. So Joe, Martin, if you fancy a game…

Actually, from what I can tell, you agree with me on nine items, disagree on four, and have no opinion or a different opinion on four others. So bravo, I guess.

Given that the title of my A List Apart article was ironic (cf. “X considered harmful”) and wished an inanimate object would go to hell, the title of your post is unnecessarily aggressive. I am not an inanimate object. I have feelings and, contrary to your insinuations, I did more research on WCAG 2, including multiple full readings, than anyone else had done at the time.

Despite your objections, your aim all the while was to harm me in some way, if only by making me look extremist and foolish (though apparently not in 13 out of 17 examples, as it turned out). So again, bravo, I guess.

Incidentally, valid code is not required by WCAG 2, though nearly all commenters who expressed an opinion on the subject wanted it to be required. If you’re so sure it’s already there, then you won’t object if the WCAG Working Group makes it crystal clear, will you, now?

[...] So, as I say, I came across this post entitled “To Hell With Joe Clark” (and yes, he has noticed it) where someone disagrees with some of the things Joe is saying and then goes on to perpetuate an ad hominem attack on him, saying that he has an egocentric, cynical and often insulting style. [...]

Joe, everybody has some “learning the world” ahead, every day. Anyway, as you certainly have noticed, the title was a reference to your original article, so as you certainly know, it was not intended to hurt your feelings.

Besides, by my count I agree on four items, disagree on 14, and partially agree/disagree on two. Valid code is one technique to prove criterion 4.1.1, although the alternative of comparing DOM trees will not work. So technically speaking, valid code is not required, but you act as if it had never been part of the working draft at all, ever, and that’s not true.

@ThePickards: His personal behaviour would have been irrelevant, but the tone within the working group is part of the critique. Joe Clark was an invited expert and thus part of the working group, and his tone on the mailing list certainly didn’t improve the manners there. That’s why I had to address it, with links to relevant sections of the mailing list archives for citation, so you can form your own opinion. Alas, the sources were not recognizable in your blog, since the links were not included.

People would be more apt to pay attention to these accessibility-related discussions if they didn’t always degenerate into name-calling and mud-slinging. Start talking about accessibility, and stop talking about each other, and maybe someone will listen.

@Martin: fair enough. I don’t necessarily agree with you, but I understand where you’re coming from better now. It had seemed to me that you were dismissive of Joe’s arguments because you didn’t like his personality. Instead, you’re saying his personality hasn’t helped matters and, as a separate issue, you disagree with some of his points. That’s fair enough (although, like I say, I don’t necessarily agree).

[...] I write this post for two reasons. First, I am alarmed by my own experiences with webpage development and application procurement in higher education. Despite being a legal mandate for many institutions and a moral mandate for all institutions, accessibility is not even on the radar screen. It’s not a low priority; it’s not a priority at all. I understand that some of the issues may appear complicated, but making no effort whatsoever is shameless and unethical. I place some of the blame on the vendors who continue to ignore the issue (the major projects in which I’ve helped purchase, configure, and maintain web-based systems left us with no accessibility options unless we developed the systems ourselves, and we lacked the resources to develop them in-house). [...]