Introduction

JG: Please note that if we don't have implementation experience, we will
have to spend time at Candidate Recommendation status

EH: I would not want the document to get stale in CR for 6 months.

RS: This is a complex document. Few UAs will conform when the document
becomes a Recommendation. If there are really sticky issues, we should push
them off to the next version

AG: A guidelines document is somewhat different from a lower-level
technical specification. It's not clear that W3C understands how to handle
guidelines entirely. I think it's ok to have some checkpoints be
future-looking.

DA: We need to remember that this document is to promote accessibility, and
we shouldn't sacrifice accessibility in order to get the document out
faster.

JG: If we know about a problem but don't have solutions, we may want (in
another document) to spend time in CR to get developer input.

MQ: It is possible to note in the document which issues are important but
not entirely implemented yet.

WCAG says to create them, ATAG says to help people create them, UAAG says
to make them available

Hierarchical definition track and use of precise language.

DP: Ascii art as braille is not very helpful

EH: WCAG requires text equivalents for non-text content. But generally
speaking, an equivalency target doesn't have to be accessible.

AG:

It is critical that user agents implement the format. Talking about
author's intent is problematic (and how to capture it in the format). We
want the user agent to inspect the markup and to offer substitutes. But
the user agent needs to give the user the ultimate choice.

I think we are in agreement on requirements, but not language. There are
some cases where polarity is clear (e.g., IMG/alt). But in the discussion
of equivalence, we are also addressing the case (e.g., SMIL), where the
"ruling" case is not clear.

JG: I think in PF they are moving away from specific markup to more general
solutions that also benefit accessibility.

EH: I think that some requirements, such as making all alternatives
available (including those without clear accessibility implications, such as
alternative languages), fall under checkpoint 2.1. I think that the
definition of equivalency is an assertion about the accessibility of pairs
of content.

RS: I think the bottom line is text equivalents.

AG: Because of cases like SMIL, where the accessibility impact may not be
in the markup, if you don't capture the general case in the UAAG, you miss the
special case of accessibility. I think that alternative languages are an
accessibility requirement.

EH: Refer to my comments to the SYMM WG (Member-only?) about needing
additional markup to identify accessibility content explicitly. If the markup
is insufficient, you can't have rational support for accessibility.

AG: I think that you should make the link to accessibility not in the
definitions but in the checkpoints or at the guideline level.

EH: If you unbind the terms from the accessibility implication, the WCAG
definition of non-text element falls apart. It doesn't handle, e.g., ascii art
and scripts. These consist of text characters but are "non-text elements". So
if you define text element as only being composed of text characters, you
break the WCAG definition. If you are willing to tinker with WCAG language,
you can shift the accessibility criterion to other definitions or spell it out
in the checkpoint.

AG: Fuzzy definition not a big problem for WCAG because they are speaking
to the human author. If there were something that the UA had to do
automatically in software, we would have a problem since the definition is not
good enough. But we don't have that problem for alternatives.

IJ: But we do for 1.5, for example.

AG: But then you can talk to authors again (the UA doesn't have to do
anything).

RS: We need to be able to have access to the equivalent so that we can
render it in other modes. In the UAAG, we have no control over what the author
did. I think we should refer to WCAG and address the problems there.

DA: I think our problem here is that we are talking about things with
linguistic content.

EH: Another way of talking about text elements is that they have a quality
of "rendering independence". The trimodal approach is not totally open, not
total independence.

AG: I think the improvements that we're talking about are good and should
be in WCAG 2.0. But for UAAG 1.0, you need to move forward.

EH: I wouldn't use the word "equivalent" however - I would use another
term.

IJ: We already proposed "alternative"

AG: Check out how ATAG uses "alternative". That should work.

/* AG notes that people are reading glossary entries as definitions */

IJ: So rationale for broader 2.3:

Some languages have inadequate markup

Some languages are designed to use more general markup (equivalents not
just for accessibility)

Resolved:

Broaden 2.3 to include all recognized alternatives. This is broader than
WCAG requirements to ensure that the user has access.

Issue 323: Using accessibility APIs rather than standard APIs to make
non-W3C based content accessible

RS: I think you can differentiate standard APIs used for keyboard and mouse
access to support physical disabilities from the actual rendering to the
screen (e.g., in the case of Java or vector graphics). The accessibility API
provides the same information in a better format than the output API.

IJ: Note that 1.2 still helps ATs that use an offscreen model. Be careful
about removing the requirement to use the standard output APIs.

RS: MSAA and Java are about device-independent access (at the component
level), access to pre-rendered content. They do not support the standard OS
features for mobility access - that's the role of the application.

RS: There are cases where MSAA doesn't support all text. They're fixing
this.

IJ: Note that in Princeton we explicitly decided to require all
standard APIs and to suggest higher-level APIs (not require them).

RS: Only recently did MSAA add access to element content. The Java
approach: we knew we had to run across several operating systems, so we
created an API to access text independent of platform. In that particular
case, you have a solution that is the "only solution" for that platform.

Proposal: Change 1.2 to be "Use accessibility APIs, or if you don't, use
standard device APIs".

RS:

Use the accessibility APIs for the target platform (e.g., Java, Windows,
etc.)

Where those APIs don't provide access to all content, use the standard
system APIs for output (e.g., drawing text); do the same where they do
not support the system APIs for mobility access. For Java, you would
have to do this for mobility-access features, system high-contrast
settings, and fonts.

Change 1.2 to require implementation of available standard accessibility
APIs, and where these APIs do not provide required functionality (by this
document), support standard device APIs.
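The proposed checkpoint 1.2 logic above can be sketched as a simple fallback: prefer the platform accessibility API, and fall back to the standard system (device) API where the accessibility API lacks coverage. This is a hypothetical illustration, not from the minutes; the `msaa_like` and `gdi_like` coverage sets are invented for the example.

```python
# Hypothetical sketch (not from the minutes) of the proposed 1.2 logic:
# use the accessibility API where it covers a functionality, otherwise
# fall back to the standard system (device) API.
def choose_api(functionality: str, a11y_api: set, system_api: set) -> str:
    if functionality in a11y_api:
        return "accessibility API"
    if functionality in system_api:
        return "standard system API"
    raise LookupError("no API exposes %r" % functionality)

# Invented coverage sets, for illustration only:
msaa_like = {"element content", "roles"}
gdi_like = {"drawn text", "element content"}

assert choose_api("roles", msaa_like, gdi_like) == "accessibility API"
assert choose_api("drawn text", msaa_like, gdi_like) == "standard system API"
```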

AG: Make clear that information available as text must be available as text
to the ATs (bits written to the screen don't count). Make the text case a
clear example.

IJ: Note that the Note needs to be edited in light of this. And the part
about not bypassing the standard output API is deleted.

AG: I think that the fact that the spec supports access to information at
both the accessibility-API level and the DOM level is part of the reasoning
why all of the info needn't be available at the device level.

RS: Need to ensure that there's documentation about how to have access to
these APIs.

Issue 324: How do developers interpret the phrase "appropriate for a task"
in checkpoint 6.2

/* Scott Totman arrives */

IJ: Are we, in effect, requiring all conforming user agents to implement
one or more W3C specs?

JG: People can create xml interfaces to documents in formats that are not
in a w3c format.

IJ: How do you reduce the instances where UAAG requirements need
interpretation?

DP: WCAG requires use of accessible formats. Adobe is trying hard to make a
user agent that is responsible for rendering their content. I think they fall
within our purview, but their format may not fall in the scope of WCAG.

RS: Note that the latest release of MSAA doesn't support access to
tables.

JG: Recently I was playing with SPSS (a statistics package) and you could
output the results as HTML. One idea is to require output in at least one W3C
format.

IJ: That makes it an authoring tool.

AG: Is a piece of software that doesn't implement at least one W3C spec
really a Web app?

AG: One approach is to say that we don't have enough experience with the
accessibility process for PDF (e.g., WCAG 1.0 doesn't cover) and therefore
that shouldn't be our focus today.

HB: XSLT could be a valid way to get around this, but even after the
transformation you might have an inaccessible result due to lack of
information in PDF to begin with.

RS: We are starting to see formatting objects on the Web...

AG: I agree with Ian - to what extent can we write this document to be
format-independent, and to what extent should it push people to use W3C
formats. This is a balancing act we are stuck with. I think it's a practical
problem that for W3C formats, we have access to the specs and it's easier to
be clear about general principles. Yes, we'd like to write functional
requirements to help Adobe promote accessible practices, but it's not so clear
that we are in a position to write general, clear requirements.

AG: The realities that we're looking at are like APIs: W3C formats are like
APIs - they deliver the best engineered solution today for accessibility. We
should have something in here to promote those formats.

HB: We have problems when W3C Recommendations are in conflict.

IJ: Note that "available" has some wording around it in techniques about
implementation schedules.

AG: You shouldn't have to go to the Techniques document to get this
information.

JG:

If you use W3C specs, conform to them.

If you don't support a W3C format, support an accessible format.

EH: We could say that we don't have specific criteria for identifying what
is "appropriate for a task" and leave it at that.

HB: These specs need to be open.

IJ: I have a problem saying "accessible spec" since we don't have a spec
that explains what that means.

AG: You need the format plus the software. You can say in a Note that the
developer needs to implement functionalities in the manner of W3C specs.

EH: This reminds me of our discussion about the scope of our repair
requirements. When you talk about outputting PDF as HTML, that's a repair
functionality.

Issue 327: Add requirement for support of charset expected of each API?

AG: Proper character encoding is required for proper text handling.

AG: This could be a requirement that is included in a general "conform to
specs" requirement. Otherwise, I think this needs to be a separate requirement
for handling text properly, and that is very important for accessibility.

Resolved: Include a P1 requirement for proper support of character
encodings for each supported API. You can't break text.

Action IJ: Get wording from Martin for this
requirement (e.g., "conform", "implement", etc.)
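The "you can't break text" resolution can be illustrated with a minimal sketch (not from the minutes): decoding bytes with the wrong character encoding corrupts text silently, with no error raised, which is why proper charset support per API matters.

```python
# Minimal illustration (not from the minutes): decoding UTF-8 bytes as
# Latin-1 silently garbles non-ASCII text instead of raising an error.
raw = "café".encode("utf-8")      # b'caf\xc3\xa9'

correct = raw.decode("utf-8")     # correct charset: text survives
assert correct == "café"

garbled = raw.decode("latin-1")   # wrong charset: no error, broken text
assert garbled == "cafÃ©"         # mojibake: 'é' became two characters
assert garbled != correct
```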

Is perfect support for a language that the user doesn't understand an
accessibility problem?

DA: This could be a problem for users with cognitive disabilities. One
idea is to allow the user to say "don't give me content in these
languages".

Support for language, but resources not available

Support for language, but language specified by author unknown

No support for language

IJ: There are different issues for graphical rendering and speech
rendering. For graphical, encoding (should be) sufficient to tell UA which
character (though UA may not have glyphs). For speech, need more than encoding
information, need natural language information.
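IJ's distinction can be sketched as follows (a hypothetical illustration, not from the minutes; the pronunciation table is invented): the character encoding fully determines which characters the text contains, but a speech renderer additionally needs natural-language metadata to choose a pronunciation.

```python
# Hypothetical sketch (not from the minutes): the encoding identifies
# the characters, but speech rendering also needs natural-language
# information (e.g., a lang attribute) to pick the right pronunciation.
text = b"chat".decode("utf-8")    # characters determined by the encoding

# Invented pronunciation table, for illustration only:
PRONUNCIATIONS = {("chat", "en"): "/tʃæt/", ("chat", "fr"): "/ʃa/"}

def pronounce(word, lang):
    if lang is None:
        raise ValueError("speech rendering needs natural-language info")
    return PRONUNCIATIONS[(word, lang)]

# Same characters, different spoken result depending on language:
assert pronounce(text, "en") != pronounce(text, "fr")
```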

Apparent requirements:

Alert that there is lack of support for content in some language

Indication in context of where lack of support occurs

Skip over content in a language that isn't supported

IJ: (Phill Jenkins comment #3): Why is this an accessibility issue?

AG: This is an issue for speech users more than cognitive users (due to
serial access).

EH: Change this to P2 and say more strongly "Do not" instead of
"Avoid".

JG: Note that 5.8 is a P2 to use OS conventions.

DA: You should be able to provide other input configs if they are
better.

GR: There are two things going on:

Avoid conflicts with system conventions

Don't mess with bindings explicitly for the purpose of
accessibility.

IJ: We might add at the end of the checkpoint "for input".

EH: You can better justify the P1 by narrowing the scope this way.

AG: My problem with saying "for input" is that you are creating a total
input/output division and that's not how GUIs work. You shouldn't interfere
with some output features either (that may work with input configs, e.g.,
sound sentry).

AG: It's important to support conventions of the OS even if they are not
specifically for accessibility (e.g., F1 bound to help) - that standardization
promotes accessibility.

JG: We wanted keyboard access for all functionalities. If we reduce the
requirements for other input devices (mouse, voice), we will address some
requirements from users.

IJ: Is full functionality through the mouse an accessibility
requirement?

DA: Yes. Some people only have access through head pointers (and cannot use
voice input).

JG: You can exclude voice from your conformance claims.

IJ: It might be useful to have a "voice" content label (but that would be
the first input in this section, so I'm not so sure...)

GR: I would prefer to see 1.1 stay as is. In the conformance claim, have
the developer have to specify which input APIs are supported.

AG: It's P1 to have all functions available through an API. If you've got
the keyboard API there, the fact that you add another interface that does some
of the functions should not degrade you from having Single-A conformance.

RS: If the OS is controlled by voice, your application should also be
controlled by voice, period. If it's controlled by keyboard and mouse, then
those should be the input APIs.

Demote 1.1 to P2 for mouse, voice, other input APIs. This assumes that
you can emulate everything through the keyboard API.

Narrow scope of 1.1 to certain functionalities.

Allow conformance for input methods other than the keyboard (namely
pointing device and voice).

Note that people can make conformance claims with other software such as
an onscreen keyboard.

JG: HoTMetaL integrates an onscreen keyboard.

AG: A voice browser running at the end of a telephone is not really the
focus of these guidelines.

DA: Keyboard APIs already let you do mouse things and vice versa. There are
examples of full mouse accessibility through the keyboard API.
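DA's point can be sketched as a toy simulation (not from the minutes; the key names and click convention are invented, in the spirit of the system "MouseKeys" feature): pointer actions can be emulated entirely through keyboard events.

```python
# Hypothetical sketch (not from the minutes): emulating pointer movement
# and clicks through keyboard events alone, MouseKeys-style.
DELTAS = {"Up": (0, -1), "Down": (0, 1), "Left": (-1, 0), "Right": (1, 0)}

def emulate_pointer(keys, start=(0, 0)):
    """Replay key events; arrows move the pointer, Enter clicks."""
    x, y = start
    clicks = []
    for key in keys:
        if key in DELTAS:
            dx, dy = DELTAS[key]
            x, y = x + dx, y + dy
        elif key == "Enter":
            clicks.append((x, y))
    return (x, y), clicks

# Two steps right, one down, then a "click" at the final position:
pos, clicks = emulate_pointer(["Right", "Right", "Down", "Enter"])
assert pos == (2, 1) and clicks == [(2, 1)]
```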

JG: Our goal with 1.1 is to access functionalities of the UA. The APIs used
to do that are not as big a deal.

EH: What if we limit the scope of 1.1 to keyboard and pointer API.

EH: What about this: All functionality has to be available through the
keyboard or mouse API.

GR: Proposed: Leave 1.1 as is, but add that to conform you may require
emulation via a standard API.

RS: Not required, here's an example: you may do voice recognition, but be
able to do some things in a device-independent manner. If you claim support
for a specific input modality, then the user with a disability should expect
that all functions are available through that modality.

JG: Introduce modality in the section on conformance. Talk about the
checkpoints you must satisfy to make a claim for that modality.

EH: I hear:

Rich wants to talk about devices/modalities in 1.1 and APIs in 1.2: if
you allow voice input, you must allow control of all functionalities
through voice.

AG: This is about the user, not the API.

AG: You could satisfy 1.1 for three modalities by implementing just one
standard API (for the keyboard).

IJ: It sounds like 1.1 is about native user interface and 1.2 is about
APIs.

DA: You should not be able to conform for voice modality if you don't
allow access to all functionalities through voice.

For increased conformance granularity, include in the conformance
section the ability to claim conformance for individual modalities.
Keyboard is always a required modality. The other two that UAAG 1.0 will
deal with are pointing device and voice.

There is no longer a need for clarifying language about keyboard support
through the mouse and vice versa.

AG: You need to be able to style the pseudo-properties focus/select. You
don't need different rules for styling: what you style is the sum of content +
the state of the user's interaction. Visited and non-visited links is also
state information. You style all the properties of content: both static and
dynamic.
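AG's point that what you style is "content + the state of the user's interaction" can be sketched as follows (a hypothetical illustration, not from the minutes; the rule tables are invented): one styling mechanism handles both static properties and dynamic state such as focus, selection, and visited links.

```python
# Hypothetical sketch (not from the minutes): computed style as a merge
# of static content style with rules keyed on dynamic interaction state.
BASE_STYLE = {"a": {"color": "blue"}}       # static content rules
STATE_STYLE = {                             # dynamic, state-dependent rules
    "focus":   {"outline": "2px solid"},
    "visited": {"color": "purple"},
}

def computed_style(element, states):
    """Merge the element's static style with rules for its current states."""
    style = dict(BASE_STYLE.get(element, {}))
    for state in states:
        style.update(STATE_STYLE.get(state, {}))
    return style

assert computed_style("a", set()) == {"color": "blue"}
assert computed_style("a", {"visited", "focus"}) == {
    "color": "purple", "outline": "2px solid",
}
```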

IJ: Note that our requirements were developed in parallel (we have about
the same for content and selection/focus).

JG: Is the cost of reorganization worth it?

RS: Note that CSS lets you style focus but not selection. I think we should
push this reorg to another version.

Action IJ: Propose new checkpoints to see how it feels to harmonize the
requirements. If the WG isn't thrilled, we will leave document as is.

Issue 349: New requirement for support of deprecated features (currently
informative in 6.2)

AG: Note that in pwWebSpeak, content can be rendered with the focus moved
automatically by the software - the user doesn't navigate the active elements.
You need to get the focus to all elements so that you can interact with them.
That may not require navigation on the part of the user. Navigation assumes a
geometry of the content - it's a more structured process than the minimal
requirement.

IJ: The user must be able to activate all active elements.

Action IJ: Add some more explanation about the difference between 7.3 and
7.4.

Resolved:

No change to the document, even though the WG appreciates the
clarification about the minimal requirement to activate active
elements.

There was discussion about PDF, continuing yesterday's discussion of
checkpoint 6.2.

DA: There's potential abuse here that people will claim conformance for
formats that "don't allow" control of any properties.

IJ: It is my interpretation that 6.1 doesn't always apply: a spec may not
include any accessibility features and yet still be accessible.

AG: For PDF, background images, Adobe can recognize a supertype (all
images). Therefore they can implement a policy that addresses this issue.

IJ: I want to say:

Your formats should include accessibility features

If they don't, some requirements may not apply.

IJ: Note that plain text doesn't meet the needs of all users with
disabilities. Would we want to say that a viewer of plain text could not
conform to the UAAG 1.0 because the format it renders doesn't (potentially)
meet the needs of all users with a disability?

AG: The list of speech characteristics will grow. You don't want to be
locked in to a list.

IJ: History - we used to have a broader requirement for control of speech
characteristics (in general). The WG explicitly chose this list (and confirmed
it at the 24
August teleconference).

MQ: Cheaper synthesizers will not support the full list we ask for.

RS: I think that the UA should support configuration of all characteristics
supported by the speech synthesizer.

JG: SAPI lets you pick persona, pitch, speed, volume. If I picked this
synth and provided access to the full range of values it offers, would I
conform?

DP: Depending on the synth, some of these properties may not be supersets
of others (which is Phill's suggestion).

EH: Should we say something like "support at least one synthesizer that
provides such capabilities"?

DA: You have the capability if the synth offers it.

MQ: We can't require people to have a particular synthesizer.

JG: We are not - we are requiring a synth with particular
functionalities.

DP: This is not intended for users who are blind. We are talking about
content delivered as speech.

AG: I don't agree with making hardware synths out-of-scope. But I think
that there should be an escape clause that for what the synth doesn't support,
the UA doesn't have to allow configuration. The requirement is punctuation
plus three degrees of freedom (like colors, fonts, plus one...). You need to
have control over reading punctuation (period). I think that differentiation
is important. Then list the 6 characteristics and say that if these are
available, then must allow user to choose from among them.
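AG's "escape clause" can be sketched as follows (a hypothetical illustration, not from the minutes): the UA must expose configuration for a required characteristic only if the installed synthesizer actually supports it. The six characteristics are assumed here, for illustration, to be the CSS2-style aural properties (rate, volume, pitch, pitch range, stress, richness).

```python
# Hypothetical sketch (not from the minutes): the UA exposes user
# configuration only for characteristics the synthesizer supports.
# The characteristic list is an assumption, modeled on CSS2 aural properties.
REQUIRED_CHARACTERISTICS = {
    "rate", "volume", "pitch", "pitch range", "stress", "richness",
}

def configurable(synth_supported):
    """Characteristics the UA must let the user configure."""
    return REQUIRED_CHARACTERISTICS & synth_supported

# A cheaper synthesizer exposing only a subset:
cheap_synth = {"rate", "volume", "pitch"}
assert configurable(cheap_synth) == {"rate", "volume", "pitch"}
# "stress" is unsupported, so the UA need not offer it:
assert "stress" not in configurable(cheap_synth)
```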

IJ: Reminder - this checkpoint was written to say "These are the required
characteristics". Thus, the exception clause goes against this design.

MQ: Every synth has a different definition for the terms we have in our min
reqs list. Note that some speech plug-ins don't offer keyboard access...

JG: Voice processing software can generate punctuation and handle
numbering even if synths don't have this capability natively.

AG: I agree with Phill that the current list of min reqs is too long. If we
pull back, we could have a requirement that the default style sheet have a
mapping from supported characteristics to CSS-like properties. It's a
requirement that the UA not ignore, e.g., stress simply because it doesn't
recognize the property.

IJ: Note - we don't have in our conformance provisions a requirement to
identify the speech engine that is used to meet the requirements.

DA: Content is important - e.g., punctuation. The UA can control that. The
speech engine has control of the dynamics.

AG: Some of these requirements came from people having hearing
disabilities, and you need the "rendering space" to accommodate these needs.
People with lower bandwidth hearing need to move the center.

MQ: I think we need to keep pitch in the list (for hearing
impairments).

DP: Remember I18N requirements - you need to look at speech from the
perspective of natural language (e.g., accents, etc.). Do we have information
from playback organizations (e.g., Daisy) that we can use to help us?

JG: Recall big problem of event bubbling. It's thus problematic to include
a requirement to navigate to event handlers since it's possible to make the
whole document an active element.
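JG's event-bubbling problem can be shown with a simplified simulation (not from the minutes; the class names are invented): a single handler on the document root effectively makes every element "active", so "navigate to elements with event handlers" could mean navigating to everything.

```python
# Simplified simulation (not from the minutes) of DOM event bubbling:
# one handler on the root fires for a click on any descendant.
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.handlers = name, parent, []

def dispatch(target):
    """Bubble an event from target up to the root, collecting fired handlers."""
    fired, node = [], target
    while node is not None:
        fired.extend(h(node) for h in node.handlers)
        node = node.parent
    return fired

doc = Node("document")
para = Node("p", parent=doc)
doc.handlers.append(lambda n: "handled click bubbling to <%s>" % n.name)

# A click on a plain paragraph still triggers the document-level handler:
assert dispatch(para) == ["handled click bubbling to <document>"]
```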

DA: "Dragger" from origin instruments lets you move the mouse to a location
and generate a double-click at a location. This one works with the Microsoft
mouse driver.

JG: I think our change in 1.1 yesterday affects this. We went from an API
requirement to a functional requirement. If you support the mouse, you must
provide pointer-based event handling. If you're compatible with mouse keys,
you can do the mouse movements with the keyboard.

RS: One problem with that: you don't know that something is active by
visual inspection. You could use style sheets to indicate which elements are
active.

JG: Recall that we said "explicit event handlers" in the definition of
active elements.

IJ: We don't have implementation experience for navigation to elements with
event handlers attached.

RS: Flash content may be attached to elements and may have event
handling.

AG: You could have handlers that are specified outside the document.

RS: Need to clarify that what is active must be identified in markup (not
through scripts).

RS: What about CSS ":hover"?

JG: That doesn't trigger a script, only style.

IJ: Another answer: style is disposable. If not, this is an authoring bug.
Suppose you are using style sheets to expand the table of contents. It's an
authoring bug to rely on style sheets alone (in my opinion).

Resolved:

Clarify that active elements must be identifiable through markup. Add to
definition of active element.

Add a Note in the definition that styling events are out of scope.
This is only about scripting events.
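The resolution above can be illustrated with a sketch (not from the minutes; the class name is invented): an element counts as "active" when its event handler is identifiable in the markup itself (explicit on* attributes), not when a handler is attached later by a script.

```python
# Illustrative sketch (not from the minutes): recognizing active
# elements by explicit event-handler attributes in the markup.
from html.parser import HTMLParser

class ActiveElementFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.active = []
    def handle_starttag(self, tag, attrs):
        # Explicit handler attributes (onclick, onmouseover, ...) are
        # visible in the markup, so the element is identifiably active.
        if any(name.lower().startswith("on") for name, _ in attrs):
            self.active.append(tag)

finder = ActiveElementFinder()
finder.feed('<p onclick="go()">link-like</p><div>plain</div>'
            '<script>document.onclick = go;</script>')
# Only the markup-declared handler is recognized; the script-attached
# one is invisible to this kind of inspection.
assert finder.active == ["p"]
```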

Issue 374: Definition: Selection, current selection and use of inflected
speech.

RS: I think that we already have this by virtue of our requirement of
access to viewports.

IJ: For viewports that are included in a conformance claim, 7.1 covers
this. But we lack a requirement that the UI focus be available to other
software. I think that the issue is that the other application requires the
right to process events in the user interface (mouse input, keyboard input,
etc.). It's uncool to steal focus from other applications and not give it
back. But is this an accessibility issue?

Resolved:

If the plug-in is part of a conformance claim, covered by 7.1

A requirement for applications to provide and/or not steal UI focus is
out of scope for this document and doesn't seem to be an accessibility
issue.

AG: If Adobe can provide an equivalent from the logical structure of the
data, but they are unable to track where it goes on the screen, what do they
do? Our 2.3 sounds very visually oriented.

Resolved:

Techniques to satisfy 2.3 do not have to be screen-position based.

Option three (query) is based on the logical document structure.

If you don't know where the equivalents are in the content at all, then
the format is problematic per checkpoint 6.2.

Note that 2.3 is not simply about access to all alternatives (that's
2.1), it's about knowing close relationships among alternatives.
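The "query" option resolved above can be sketched as follows (an illustration, not from the minutes; the class name is invented): an equivalent is located through the logical document structure, here an img element's alt attribute, rather than through screen position.

```python
# Illustrative sketch (not from the minutes): finding an equivalent via
# logical document structure (img/alt) instead of screen position.
from html.parser import HTMLParser

class EquivalentFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.pairs = []               # (element, equivalent) pairs
    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" in attrs:
            self.pairs.append((tag, attrs["alt"]))

finder = EquivalentFinder()
finder.feed('<img src="chart.png" alt="Sales rose 10% in Q3">')
# The relationship between element and equivalent comes straight from
# the markup structure - no rendering geometry needed.
assert finder.pairs == [("img", "Sales rose 10% in Q3")]
```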