In case you don't know, the Canvas API provides a PostScript-like drawing API to JavaScript running inside web pages. It was introduced by Apple to provide a richer graphical experience for the Dashboard feature in Mac OS X Tiger.

The argument on one side seems to be that if you don't bake accessibility right into the API, such that programmers don't have to do anything particularly special to make their work accessible, then people just won't do accessibility.

There's the extreme position that the Canvas API should never have been added to browsers in the first place, and shouldn't be in any standard, because it's inherently inaccessible.

Then there are reactions to that extreme position, arguing that sometimes we need to make content that is inherently not accessible to everyone, that this is okay, and that such content is a legitimate form of expression. Besides, we don't ask that everyone put a wheelchair ramp on their house, so we shouldn't require that every single website be accessible.

Of course, there already is a simple accessibility mechanism built into the canvas tag: simply put some plain text inside the tag as fallback content. A client that doesn't understand the canvas tag will display the text content instead.
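As a concrete sketch of that mechanism (the element id and fallback text here are invented for illustration):

```html
<!-- Browsers that don't support <canvas> render the inner text instead. -->
<canvas id="sales-chart" width="400" height="300">
  Sales rose steadily from 10 units in January to 40 units in March.
</canvas>
```

Note that the fallback only describes a static snapshot; it says nothing about anything drawn or updated later by script.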

A counterexample to that approach, and the one that I believe sparked the debate, is Bespin. Bespin implements a widget/GUI system that runs entirely inside a single canvas tag, both for performance and for control over the GUI experience. Static text content is not a sufficient fallback for such a program.

Here's my opinion: it would be a grave mistake to let projects like Bespin distract us into thinking that the Canvas API's accessibility features are lacking. A widget system is only one potential use for the canvas tag, and gearing an accessibility design around this one use case will cripple the Canvas API and, ultimately, I believe, make accessibility worse for uses that are NOT widget systems.

The truth is, I don't think there's any way for a browser receiving Canvas API instructions to know precisely what those instructions mean. It could be something like Bespin, or it could just be a simple animation. It could be a graph, or it could be abstract artwork. You could ask the programmer to provide hints in the API calls, but that's not really any better than the "tacked on" accessibility that accessibility proponents speak out against. Without such hints, all you see is "curve, curve, rectangle, image, textbox", and so on. There's no way to know whether anything is interactive, no way to know what the text in a textbox refers to, and no way to know what kind of image a sequence of shapes is constructing.
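To make that concrete, here's a small sketch (the function names `drawButton` and `recordCalls` are invented for illustration) that records the method names a drawing routine invokes on a mock context. The recorded stream is all a browser or assistive technology would have to work with:

```javascript
// Two visually different things can produce the same opaque call stream.
function drawButton(ctx) {
  ctx.fillRect(10, 10, 120, 40); // a button background... or just a box?
  ctx.fillText("OK", 55, 35);    // a button label... or decorative text?
}

function recordCalls(draw) {
  const calls = [];
  // Every property access on the mock returns a function that logs its name.
  const mockCtx = new Proxy({}, {
    get: (_target, name) => (..._args) => { calls.push(name); }
  });
  draw(mockCtx);
  return calls;
}

console.log(recordCalls(drawButton)); // ["fillRect", "fillText"]
```

Nothing in that stream distinguishes a clickable button from a decorative rectangle with a caption, which is exactly the problem.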

A low level drawing API is the wrong target for accessibility. You might as well try to make the PNG image format "accessible" by having a screen reader read out the color of each pixel, or make SVG accessible by having it read out each curve and shape in sequence.

For Bespin, what you need to make accessible is its widget toolkit. That's the level of abstraction where you have some actually useful information: where you know whether something is a menu, a button, or a text box. There's already a standard called ARIA that aims to make dynamic widget toolkits in HTML accessible. The limitation in this case, though, is ARIA's assumption that you'll have some kind of 1:1 correspondence between HTML tags and widgets, and its dependence on your ability to assign attributes to each of those HTML tags.
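The usual ARIA pattern looks something like this (the labels here are invented for illustration), and it shows exactly the assumption that breaks down for Bespin: one annotated element per widget, whereas Bespin has one canvas tag containing an entire toolkit with nowhere to hang these attributes:

```html
<!-- ARIA assumes each widget is its own HTML element. -->
<div role="menubar">
  <div role="menuitem" tabindex="0" aria-haspopup="true">File</div>
</div>
<div role="button" tabindex="0" aria-label="Save document">Save</div>
```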

I think what we really need is some kind of low level accessibility API, at a level equal to the Canvas API, but not cannibalizing it, and without attempting to combine the two into a single API. Then, to make programmers want to use it, you need to make it useful for things other than accessibility. Here's a suggestion:

Let's provide a low level JavaScript API that makes it relatively straightforward to turn your JavaScript program, embedded in an HTML page, into a command line utility, an interactive terminal program, a web service, or even the basis for a desktop application using a native OS widget toolkit. Just add a few extra API calls, and you get to reuse your efforts in many different contexts, accessible interfaces being just one of them, almost by chance.

What's the best way to do that? I'm not sure exactly, but I think it's a more productive direction to explore than the futile task of trying to make abstract low level drawing commands mean something to a computer: something which I think can only be achieved via the sort of advanced artificial intelligence that is perpetually ten years away.

My instinct is to encourage MVC-style programming via the design of the in-browser API for constructing applications, the same way Apple's Cocoa encourages good MVC design by simply making it the path of least resistance. This would involve encouraging programmers to build a clean "domain model" version of their JavaScript programs, and making it incredibly easy to express the logic of that program through rich UIs (potentially using the canvas tag), or via a command line program, interactive console program, web service, or accessible client, all without changing the code of the "model" tier. Go further: make a model tier simply work on its own, without any GUI programming, so that adding a rich GUI with graphics is simply an enhancement. Make this the easiest way to write a program, and they will come. Consider the analogy:

html : css :: javascript models : canvasApi

Nobody complains that CSS is not accessible. Why is that?
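A minimal sketch of that model-tier separation, under my own invented names (`CounterModel`, `textView`): the model knows nothing about rendering, and a plain-text frontend, a canvas frontend, or an accessible client can each subscribe to it without the model changing at all.

```javascript
// Model tier: pure logic, no knowledge of any UI.
class CounterModel {
  constructor() { this.value = 0; this.listeners = []; }
  onChange(fn) { this.listeners.push(fn); }
  increment() { this.value += 1; this.listeners.forEach(fn => fn(this.value)); }
}

// One possible frontend: a plain-text "view" that could back a command
// line tool, a screen reader client, or a test harness.
function textView(model) {
  const lines = [];
  model.onChange(v => lines.push(`count is now ${v}`));
  return lines;
}

// A rich canvas view would subscribe the same way, e.g.:
//   model.onChange(v => ctx.fillText(String(v), 10, 10));

const model = new CounterModel();
const log = textView(model);
model.increment();
model.increment();
console.log(log); // ["count is now 1", "count is now 2"]
```

The point of the sketch is that the graphical view is an add-on subscriber, not the program itself; that's the property an API should make the path of least resistance.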

One thing I think we should definitely not do: standardise an "accessible" version of the Canvas API that doesn't have a single implementation, and hasn't yet gone through the trials and tribulations of real world practice.