This section lists each checkpoint of "User Agent Accessibility Guidelines
1.0" [UAAG10] along with some possible
techniques for satisfying it. Each checkpoint definition includes a link to the
checkpoint definition in "User Agent Accessibility Guidelines 1.0". Each
checkpoint definition is followed by one or more of the following:

Notes and rationale: Additional rationale and explanation
of the checkpoint;

Who benefits: Which users with disabilities are expected
to benefit from user agents that satisfy the checkpoint;

Example techniques: Some techniques to illustrate how a
user agent might satisfy the requirements of the checkpoint. Screen shots and
other information about deployed user agents have been included as sample
techniques. References to products are not endorsements of those products by
W3C;

Doing more: Techniques to achieve more than what is
required by the checkpoint;

Related techniques: Links to other techniques in section
3. The accessibility topics of section 3 generally apply to more than one
checkpoint.

References: References to other guidelines,
specifications, or resources.

Note: Most of the techniques in this document are designed
for graphical browsers and multimedia players running on desktop computers.
However, some of them also make sense for assistive technologies and other user
agents. In particular, techniques about communication between user agents will
benefit assistive technologies. Refer, for example, to the appendix on loading assistive technologies
for access to the document object model.

If the user agent does not satisfy this checkpoint, one or more groups of
users with disabilities will find it impossible to access the Web. Satisfying
this checkpoint is a basic requirement for enabling some people to access the
Web.

If the user agent does not satisfy this checkpoint, one or more groups of
users with disabilities will find it difficult to access the Web. Satisfying
this checkpoint will remove significant barriers to Web access for some
people.

If the user agent satisfies this checkpoint, one or more groups of users
with disabilities will find it easier to access the Web.

Note: This information about checkpoint priorities is
included for convenience only. For detailed information about conformance to
"User Agent Accessibility Guidelines 1.0"
[UAAG10], please refer to that document.

Spatial (e.g., when the keyboard is used to move the pointing device in two-dimensional visual space to
manipulate a bitmap image).

User agents should support direct or sequential keyboard operation for all
functionalities. Furthermore, the user agent should satisfy this checkpoint by
offering a combination of keyboard-operable user interface controls (e.g.,
keyboard operable print menus and settings) and direct keyboard shortcuts
(e.g., to print the current page).

It is also possible to claim conformance
to User Agent Accessibility Guidelines 1.0
[UAAG10] for full support through pointing device input and/or voice
input. See the section on Input
modality labels in UAAG 1.0.

Notes and rationale

It is up to the user agent developer to decide which functionalities are
best served by direct access, sequential access, and access through two-dimensional visual space. The
UAAG 1.0 does not discourage a pointing device interface, but it does require
redundancy through the keyboard. In most cases, developers can allow operation
of the user agent without relying on motion through two-dimensional visual space; this
includes text selection (a text caret may be used to establish the start and
end of the selection), region selection (allow the user to describe the
coordinates or position of the region, e.g., relative to the viewport),
drag-and-drop (allow the user to designate start and end points and then say
"go"), etc.

For instance, the user must be able to do the following through the
keyboard alone (or pointing device alone or voice alone):

Select content and operate on it. For
example, if the user can select rendered text with the mouse and make it the
content of a new link by pushing a button, they also need to be able to do so
through the keyboard and other supported devices. Other operations include cut,
copy, and paste.

Use the graphical user interface menus. Some users may
wish to use the graphical user interface even if they cannot use or do not wish
to use the pointing device.

Fill out forms.

Access documentation.

Suppose a user agent such as a Web browser does not allow complete
operation through the keyboard alone. It is still possible to claim conformance
for the user agent in conjunction with another software component that "fills
in the gap".

Who benefits

Users with blindness are most likely to benefit from direct access through
the keyboard, including navigation of user interface controls; this
is a logical navigation, not navigation in two-dimensional visual space.

Users with physical disabilities are most likely to benefit from a
combination of direct access and spatial access through the keyboard. For some
users with physical disabilities, moving the pointing device using a physical
mouse may be significantly more difficult than moving the pointing device with
arrow keys, for example.

This checkpoint will also benefit users of many other alternative input
devices (which make use of the keyboard API) and also anyone without a
mouse.

While keyboard operation is expected to improve access for many users,
operation by keyboard shortcuts alone may reduce accessibility (and usability)
by requiring users to memorize a long list of shortcuts. Developers should
provide mechanisms for contextual access to user agent functionalities
(including keyboard-operable cascading menus, context-sensitive help, keyboard
operable configuration tabs, etc.) as well as direct access to those
functionalities. See also
checkpoint 11.5.

Provision one of this checkpoint applies to handlers of any input
device event type, including event types for keyboard, pointing device, and
voice input.

The user agent is not required to allow activation of event handlers
associated with a given device (e.g., the pointing device) in any order other
than what the device itself allows (e.g., a mouse down event followed by a
mouse drag event followed by a mouse up event).

The requirements for this checkpoint refer to any
explicitly associated input device event handlers associated with an
element, independent of the input
modalities for which the user agent conforms. For example, suppose that an
element has an explicitly associated handler for pointing device events. Even
when the user agent only conforms for keyboard input (and does not conform for
the pointing device, for example), this checkpoint requires the user agent to
allow the user to activate that handler with the keyboard.

Note: Refer to the checkpoints of guideline 9 for more information about focus
requirements.

Notes and rationale

For example, users without a pointing device need to be able to activate form controls and links (including
the links in a client-side image map).

Events triggered by a particular device generally follow a set pattern, and
often in pairs: start/end, down/up, in/out. One would not expect a "key down"
event for a given key to be followed by another "key down" event without an
intervening "key up" event.

Who benefits

Users with blindness or some users with a physical disability, and anyone
without a pointing device.

Example techniques

When using the "Document Object Model (DOM) Level 2 Events Specification"
[DOM2EVENTS], activate an event
handler as described in
section 1.5:

Create an event of the given type by calling
DocumentEvent.createEvent, which takes an event type as parameter,
then

Dispatch this event using EventTarget.dispatchEvent.

To preserve the expected order of events, provide a dynamically changing
menu of available handlers. For example, an initial menu of handlers might only
allow the user to trigger a "mousedown" event. Once triggered, the menu would
not allow "mousedown" but would allow "mouseup" and "mouseover", etc.
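The dynamically changing menu described above can be sketched as a small state
table mapping the last dispatched event to the events the user may trigger
next. The event names and table are illustrative assumptions, not part of any
UAAG or DOM requirement:

```javascript
// Illustrative state table: given the last input device event dispatched,
// which events may the user trigger next without violating the expected
// device order? (A "mousedown" may not follow another "mousedown".)
const eventOrder = {
  start: ["mousedown", "mouseover"],
  mousedown: ["mousemove", "mouseup"],
  mousemove: ["mousemove", "mouseup"],
  mouseup: ["mouseover", "mousedown"],
  mouseover: ["mousedown", "mouseout"],
  mouseout: ["mouseover"],
};

function availableEvents(lastEvent) {
  // Fall back to the initial menu when no event has been dispatched yet.
  return eventOrder[lastEvent] || eventOrder.start;
}
```

A user agent would rebuild the handler menu from `availableEvents` after each
dispatch, offering only the entries that the focused element actually handles.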

In some markup languages, it is possible (though somewhat nonsensical) for
two actions to be assigned to the same input event type for a given element
(e.g., one through an explicit event handler and one "intrinsic" to the
element). In this case, offer the user a choice of which action to take.

The DOM Level 2 Events specification does not provide a key event
module.

Sequential
navigation technique: Add each input device event handler to the navigation
order (refer to checkpoint 9.3).
Alert the user when the user has navigated to an event handler, and allow
activation. For example, a link that also has onMouseOver and onMouseOut event
handlers defined might generate three "stops" in the navigation order: one for
the link and two for the event handlers. If this technique is used, allow
configuration so that input device event handlers are not inserted in the
navigation order.
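The sequential navigation technique above can be sketched as follows; the
element and handler shapes are assumptions for illustration, not a defined
user agent API:

```javascript
// Expand each element into navigation "stops": one for the element itself
// and, optionally, one per explicitly associated input device event handler.
// The includeHandlers flag models the required configuration that keeps
// handler stops out of the navigation order.
function navigationOrder(elements, includeHandlers = true) {
  const stops = [];
  for (const el of elements) {
    stops.push({ element: el.name, stop: "element" });
    if (includeHandlers) {
      for (const handler of el.handlers || []) {
        stops.push({ element: el.name, stop: handler });
      }
    }
  }
  return stops;
}
```

For a link with onMouseOver and onMouseOut handlers, this yields the three
"stops" described above; with the configuration off, only the link itself.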

Query technique: Allow the user to query the element with content focus for
a menu of input device event handlers.

Descriptive information about handlers can allow assistive technologies to
choose the most important functions for activation. This is possible in the
Java Accessibility API [JAVAAPI], which provides an
AccessibleAction Java interface. This interface provides a list of actions and
descriptions that enable selective activation. See also checkpoint
6.3.

Note: For example, if the user is alerted of an event by an
audio cue, a visually-rendered text equivalent in the status bar could satisfy
this checkpoint. Per checkpoint
6.5, a text equivalent for each such message must be available through an
API. See also
checkpoint 6.6 for
requirements for programmatic notification of changes to the user
interface.

Notes and rationale

User agents should use modality-specific messages in the user interface
(e.g., graphical scroll bars, beeps, and flashes) as long as redundant
mechanisms are available or possible. These redundant mechanisms will benefit
all users, not just users with disabilities.

Who benefits

Users with blindness, deafness, or who are hard of hearing. Mechanisms that
are redundant to audio will benefit individuals who are deaf, hard of hearing,
or operating the user agent in a noisy or silent environment where the use of
sound is not practical.

Example techniques

Render text messages on the status bar of the graphical user interface.
Allow users to query the viewport for this status information (in addition to
having access through graphical rendering).

Make available information in a manner that allows other software to
present it according to the user's preferences. For instance, if proportional
scroll bars are used in the graphical interface to indicate the position of the
viewport in content, make available this same information in text form. For
instance, this will allow other software to render the proportion of content
viewed as synthesized speech or as braille.
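As a minimal sketch of making scroll bar information available in text form
(the function name and message format are illustrative assumptions):

```javascript
// Express the proportional scroll bar's information as text so that other
// software can render it as synthesized speech or braille.
function viewportPositionText(scrollTop, viewportHeight, contentHeight) {
  const seen = Math.min(100, Math.round(
    ((scrollTop + viewportHeight) / contentHeight) * 100));
  return seen + "% of document viewed";
}
```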

Doing more

Allow configuration to render or not render status information (e.g., allow
the user to hide the status bar).

Rendering requirements include format-defined interactions between author
preferences and user preferences/capabilities (e.g., when to render the
"alt" attribute in HTML, the
rendering order of nested OBJECT elements in HTML, test attributes
in SMIL, and the cascade in CSS2).

When a rendering requirement of another specification contradicts a
requirement of UAAG 1.0, the user agent may disregard the rendering requirement
of the other specification and still satisfy this checkpoint; see the section
on the relation of User Agent Accessibility Guidelines 1.0 to general software
design guidelines and other specifications for more information.

The user agent is not required to satisfy this checkpoint for all
implemented specifications; see the section on
conformance profiles for more information.

Note: If a conforming user agent does not render a content
type, it should allow the user to choose a way to handle that content (e.g., by
launching another application, by saving it to disk, etc.).

Notes and rationale

Provision two of the checkpoint only applies when the rendering requirement
of another specification contradicts the requirements of the current document;
no exemption is granted if the other specification is consistent with or silent
about a requirement made by the current document.

Example techniques

Provide access to attribute values (one at a time, not as a group). For
instance, allow the user to select an element and read values for all
attributes set for that element. For many attributes, this type of inspection
should be significantly more usable than a view of the text source.

When content changes dynamically (e.g., due to embedded scripts or
automatic content retrieval), users need to have access to the content before
and after the change.

Make available information about abbreviation and acronym expansions. For
instance, in HTML, look for abbreviations specified by the ABBR and ACRONYM
elements. The expansion may be given with the "title" attribute (refer to the
Web Content Accessibility Guidelines 1.0
[WCAG10], checkpoint 4.2). To provide expansion information, user
agents may:

Allow the user to configure the user agent so that expansions are rendered in
place of abbreviations,

Provide a list of all abbreviations in the document, with their expansions
(a generated glossary of sorts).

Generate a link from an abbreviation to its expansion.

Allow the user to query the expansion of a selected or input
abbreviation.

If an acronym has no expansion in one location, look for another occurrence
in content that does. User agents may also look for possible expansions (e.g.,
in parentheses) in surrounding context, though that is a less reliable
technique.

Related techniques

Doing more

If the requirements of the current document contradict the rendering
requirements of another specification, the user agent may offer a configuration
to allow conformance to one or the other specification.

References

Sections 10.4 ("Client Error 4xx") and 10.5 ("Server Error 5xx") of the
HTTP/1.1 specification [RFC2616] state that user agents
should have the following behavior in case of these error conditions:

Except when responding to a HEAD request, the server SHOULD include an
entity containing an explanation of the error situation, and whether it is a
temporary or permanent condition. These status codes are applicable to any
request method. User agents SHOULD display any included entity to the user.

For
content authored in text formats, provide a
view of the text source. For the
purposes of this checkpoint, a text format is any media object given an
Internet media type of "text" (e.g., "text/plain", "text/html", or "text/*") as
defined in RFC 2046 [RFC2046], section 4.1.
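The media type test implied by this checkpoint can be sketched as a simple
check of the top-level type (per RFC 2046, section 4.1); the function name is
an assumption for illustration:

```javascript
// A media object counts as a text format when its Internet media type has
// the top-level type "text". Media type parameters (e.g., "; charset=utf-8")
// are ignored, and the comparison is case-insensitive.
function isTextFormat(mediaType) {
  const type = mediaType.split(";")[0].trim().toLowerCase();
  return type.startsWith("text/");
}
```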

The user agent is only required to satisfy this checkpoint for text formats
that are part of a conformance claim; see the section on
conformance profiles for more information. However, user agents should
provide a text view for all implemented text formats.

Notes and rationale

In general, user agent developers should not rely on a "source view" to
convey information to users, most of whom are not familiar with markup
languages. A source view is still important as a "last resort" to some users as
content might not otherwise be accessible at all.

Who benefits

Users with blindness, low vision, or deafness, users who are hard of hearing,
and any user who requires the text source to understand the content.

Example techniques

Make the text view useful. For instance, enable links (i.e.,
URIs), and allow searching and other navigation within the view.

A source view is an easily-implementable view that will help users inspect
some types of content, such as style sheet fragments or scripts. This does not
mean, however, that a source view of style sheets is the best user
interface for reading or changing style sheets.

Doing more

Even when an Internet media type is not available (e.g., for local files),
provide a text view for common text formats such as HTML and XHTML.

Provide a source view for any text format, not just implemented text
formats.

To satisfy provision one of this checkpoint, the configuration may be a
switch that, for all content, turns on or off the access mechanisms described
in provision two.

To satisfy provision two of this checkpoint, the user agent may provide
access on a per-element basis (e.g., by allowing the user to query individual
elements) or for all elements (e.g., by offering a configuration to render
conditional content all the time).

Note: For instance, an HTML user agent might allow users to
query each element for access to conditional content supplied for the
"alt", "title", and "longdesc"
attributes. Or, the user agent might allow configuration so that the value of
the "alt" attribute is rendered in place of all IMG
elements (while other conditional content might be made available through
another mechanism).

Notes and rationale

There may be more than one piece of conditional content associated with
another piece of content (e.g., multiple captions tracks associated with the
visual track of a presentation).

Note that the alert requirement of this checkpoint is per-element. A single
resource-level alert (e.g., "there is conditional content somewhere here") does
not satisfy the checkpoint, but may be part of a solution for satisfying this
checkpoint. For example, the user agent might indicate the presence of
conditional content "somewhere" with a menu in the toolbar. The menu items could
provide both per-element alert and access to the content (e.g., by opening a
viewport with the conditional content rendered).

Who benefits

Any user for whom the author has provided conditional content for
accessibility purposes. This includes text equivalents for users with blindness
or low vision or who are deaf-blind, and captions for users with deafness or
who are hard of hearing.

Example techniques

Allow users to choose more than one piece of conditional content at a given
time. For instance, users with low vision may want to view images (even
imperfectly) but require a text equivalent for the image; the text may be rendered with a large font or as
synthesized speech.

In HTML 4 [HTML4], conditional content
mechanisms include the following:

Do not render the long description, but allow the user to query whether an
element has an associated long description (e.g., with a context-sensitive
menu) and provide access to it.

Use an icon (with a text equivalent) to indicate the
presence of a long description.

Use an audio cue to indicate the presence of a long description when the
user navigates to the element.

For an object (e.g., an image) with an author-specified geometry that the
user agent does not render, allow the user to configure how the conditional
content should be rendered. For example, render it within the specified
geometry, ignore the specified geometry altogether, etc.

For multimedia presentations with several alternative tracks, ensure access
to all tracks and allow the user to select individual tracks. (As an example,
the QuickTime player [QUICKTIME] allows users to turn
on and off any number of tracks separately.) For example, construct a list of
all available tracks from short descriptions provided by the author (e.g.,
through the "title" attribute).

For multimedia presentations with several alternative tracks, allow users
to choose tracks based on natural language preferences. SMIL
1.0
[SMIL] allows users to specify captions in different natural languages. By
setting language preferences in the SMIL player (e.g., the G2 player [G2]),
users may access captions (or audio) in different languages. Allow users to
specify different languages for different content types (e.g., English audio
and Spanish captions).
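Per-content-type language selection, as described above, can be sketched like
this; the track and preference shapes are assumptions, not a SMIL player API:

```javascript
// Choose one track per content type (e.g., "audio", "captions") according
// to per-type natural language preferences, so a user can combine, say,
// English audio with Spanish captions.
function selectTracks(tracks, prefs) {
  const chosen = {};
  for (const track of tracks) {
    if (prefs[track.type] === track.lang) chosen[track.type] = track;
  }
  return chosen;
}
```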

If a multimedia presentation has several captions (or subtitles) available, allow the
user to choose from among them. Captions might differ in level of detail,
reading level, natural language, etc. Multilingual
audiences may wish to have captions in different natural languages on the screen at
the same time. Users may wish to use both captions and audio descriptions
concurrently as well.

Section 7.8.1 of SMIL 2.0 [SMIL20] defines the 'readIndex'
attribute, which specifies the position of the current element in the order in
which values of the longdesc, title, and
alt attributes are to be read aloud.
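The 'readIndex' ordering amounts to a simple sort; the element shape below is
an illustrative assumption:

```javascript
// Order elements whose descriptive attributes (longdesc, title, alt) are to
// be read aloud by ascending readIndex, per SMIL 2.0 section 7.8.1.
function readingOrder(elements) {
  return [...elements].sort((a, b) => a.readIndex - b.readIndex);
}
```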

Related techniques

Doing more

If the user agent satisfies the checkpoint by implementing 1b
(placeholders), allow the user to toggle back and forth between a placeholder
and the original author-supplied content. Some users with a cognitive
disability may find it difficult to access content after turning on rendering
of too many images (even when those images were turned on one by one). Sample
technique: allow the user to designate a placeholder and request to view the
associated content in a separate viewport (e.g., through a context menu), leaving the placeholder in
context. Allow the user to close the new viewport manually.

Make information available with different levels of detail. For example,
for a voice browser, offer two
options for HTML IMG elements:

Speak only "alt" text by default, but allow the user to hear "longdesc"
text on an image by image basis.

Speak "alt" text and "longdesc" for all images.

Allow the user to configure different natural language preferences for
different types of conditional content (e.g.,
captions and audio descriptions). Users with disabilities may need to choose
the language they are most familiar with in order to understand a presentation
for which supplementary tracks are not all available in all desired languages.
In addition, some users may prefer to hear the program audio in its original
language while reading captions in another, fulfilling the function of
subtitles or improving foreign language comprehension. In classrooms, teachers
may wish to configure the language of various multimedia elements to achieve
specific educational goals.

This image shows how users select a natural language
preference in the RealPlayer. This setting, in conjunction with language markup
in the presentation, determines what
content is rendered.

The user agent may satisfy this checkpoint by pausing processing
automatically to allow for user input, and resuming processing on explicit user request. When
this technique is used, pause at the end of each time interval where user input
is possible. In the paused state:

Alert the user that the rendered content has been paused
(e.g., highlight the pause button in a multimedia player's control panel).

Allow the user to resume on explicit user request (e.g., by
pressing the play button in a multimedia player's control panel; see also checkpoint 4.5).

The user agent may satisfy this checkpoint by generating a
time-independent (or, "static") view, based on the original content, that offers the user the same
opportunities for interaction. The static view should reflect the structure and
flow of the original time-sensitive presentation; orientation cues will help
users understand the context for various interaction opportunities.

When satisfying this checkpoint for a real-time presentation, the user
agent may discard packets that continue to arrive after the construction of the
time-independent view (e.g., when paused or after the construction of a static
view).

This checkpoint does not apply when
the user agent cannot recognize
the time interval in the presentation format, or when the user agent cannot
control the timing (e.g., because it is controlled by the server).

Note: If the user agent satisfies this checkpoint by
pausing automatically, it may be necessary to pause more than once when there
are multiple opportunities for time-sensitive user interaction. When pausing,
pause synchronized content as well (whether rendered in the same or different
viewports) per checkpoint
2.6. In SMIL 1.0 [SMIL], for example, the
"begin", "end", and "dur" attributes synchronize presentation
components. See also checkpoint 3.5, which involves client-driven content
retrieval.

Notes and rationale

The user agent could satisfy this checkpoint by allowing the user to step
through an entire presentation manually (as one might advance frame by frame
through a movie). However, this is likely to be tedious and lead to information
loss, so the user agent should preserve as much of the flow and order of the
original presentation as possible.

The requirement to pause at the end (rather than at the beginning)
of a time interval is to allow the user to review content that may change
during the elapse of this time.

The configuration option is important because techniques used to satisfy
this checkpoint may lead to information loss for some types of content (e.g.,
highly interactive real-time presentations).

When different streams of time-sensitive content are not synchronized (and
rendered in the same or different viewports), the user agent is not required to
pause the pieces all at once. The assumption is that both streams of content
will be available at another time.

Who benefits

Example techniques

Some HTML user agents recognize time intervals specified through the
META element, although this usage is not defined in HTML 4
[HTML4].

Render time-dependent links as a static list that occupies the same screen
real estate; authors may create such documents in SMIL 1.0
[SMIL]. Include temporal context in the list of links. For example,
provide the time at which the link appeared along with a way to easily jump to
that portion of the presentation.

For a presentation that is not "live", allow the user to choose from a menu
of available time-sensitive links (essentially making them
time-independent).

Doing more

Provide a view where time intervals are lengthened, but not infinitely
(e.g., allow the user to multiply time intervals by 3, 5, or 10). Or, allow
the user to add extra time (e.g., 10 seconds) to each time interval.
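Both configurations above can be sketched with one small function (the name
and option names are assumptions for illustration):

```javascript
// Lengthen a time interval either by a multiplicative factor (e.g., 3, 5,
// or 10) or by adding a fixed number of extra seconds to each interval.
function extendInterval(seconds, { factor = 1, extra = 0 } = {}) {
  return seconds * factor + extra;
}
```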

Allow the user to view a list of all media elements or links of the
presentations sorted by start or end time or alphabetically.

Alert the user whenever pausing the user agent may lead to packet
loss.

Notes and rationale

Users may wish to read a transcript at the same time as a related visual
or audio track and pause the visual or audio track while reading; see checkpoint 4.5.

Who benefits

Users with blindness or low vision (who benefit from audio descriptions) and
users with deafness or who are hard of hearing (who benefit from captions).

Example techniques

Allow users to turn on and off audio descriptions and captions.

For the purpose of applying this clause, SMIL 1.0
[SMIL] user agents should recognize as captions any media object
whose reference from SMIL is guarded by the 'system-captions' test
attribute.

SMIL user agents should allow users to configure whether they want to view
captions, and this user interface switch should be bound to the
'system-captions' test attribute. Users should be able to indicate
a preference for receiving available audio descriptions. Note:
SMIL 1.0 [SMIL] does not include a mechanism
analogous to 'system-captions' for audio descriptions, though
[SMIL20] does, called 'systemAudioDesc'.

Another SMIL 1.0 test attribute, 'system-overdub-or-captions',
allows users to choose between subtitles and overdubs in multilingual
presentations. User agents should not interpret a value of
'caption' for this test attribute as meaning that the user prefers
accessibility captions; that is the purpose of the
'system-captions' test attribute. When subtitles and accessibility
captions are both available, users who are deaf may prefer to view captions, as
they generally contain information not in subtitles: information on music,
sound effects, who is speaking, etc.

User agents that play QuickTime movies should allow the user to turn on and
off the different tracks embedded in the movie. Authors may use these
alternative tracks to provide content for accessibility purposes. The Apple
QuickTime player provides this feature through the menu item "Enable
Tracks."

User agents that play Microsoft Windows Media Object presentations should
provide support for Synchronized Accessible Media Interchange (SAMI
[SAMI], a protocol for creating and displaying captions) and should
allow users to configure how captions are viewed. In addition, user agents that
play Microsoft Windows Media Object presentations should allow users to turn on
and off other conditional content, including
audio description and alternative visual tracks.

Notes and rationale

The term "synchronization cues" refers to pieces of information that may
affect synchronization, such as the size and expected duration of tracks and
their segments, the type of element and how much those elements can be sped up
or slowed down (both from technological and intelligibility standpoints).

Captions and audio descriptions may not make
sense unless rendered synchronously with related video or audio content. For
instance, if someone with a hearing disability is watching a video presentation
and reading associated captions, the captions should be synchronized with the audio so that the
individual can use any residual hearing. For audio descriptions, it is crucial
that an audio track and an audio
description track be synchronized to avoid having them both play at once, which
would reduce the clarity of the presentation.

Who benefits

Users with blindness (for synchronized audio descriptions and audio tracks),
users with deafness or who are hard of hearing (for synchronized captions),
and some users with a cognitive
disability.

Example techniques

The idea of "sensible time-coordination" of components in the definition of
synchronize centers on the idea of
simultaneity of presentation, but also encompasses strategies for handling
deviations from simultaneity resulting from a variety of causes. Consider how
deviations might be handled for captions for a multimedia presentation such
as a movie clip. Captions consist of a text equivalent of the audio track that
is synchronized with the visual track. Typically, a segment of
the captions appears visually near the video for several seconds while the
person reads the text. As the visual track continues, a new segment of the
captions is presented. However, a problem arises if the captions are longer
than can fit in the display space. This can be particularly difficult if, due to
a visual disability, the font size has been enlarged, thus reducing the amount
of rendered caption text that can be presented. The user agent needs to respond
sensibly to such problems, for example by ensuring that the user has the
opportunity to navigate (e.g., scroll down or page down) through the caption
segment before proceeding with the next segment of the visual track.

Developers of user agents need to determine how they will handle other
synchronization challenges, such as:

Under what circumstances will the presentation automatically pause? Some
circumstances where this might occur include:

the segment of rendered caption text is more than can fit on the visual
display

the user wishes more time to read captions or the collated text
transcript

the audio description is of longer duration than the natural pause in the
audio.

Once the presentation has paused, then under what circumstances will it
resume (e.g., only when the user signals it to resume, or based on a predefined
pause length)?

If the user agent allows the user to jump to a location in a presentation
by activating a link, then how will related tracks behave? Will they jump as
well? Will the user be able to return to a previous location or undo the
action?

The user agent may satisfy this checkpoint by basing the repair text on any
of the following available sources of information: URI reference, content type,
or element type. Note, however, that additional information that would enable
more helpful repair might be available but not "near" the missing conditional
content. For instance, instead of generating repair text from a simple URI
reference alone, the user agent might look for helpful information near a different
instance of the URI reference in the same document object, or might retrieve
useful information (e.g., a title) from the resource designated by the URI
reference.

Who benefits

Users with blindness or low vision.

Example techniques

When HTTP is used, HTTP headers provide information about the URI of the Web resource ("Content-Location") and
its type ("Content-Type"). Refer to the HTTP/1.1 specification
[RFC2616], sections 14.14 and 14.17, respectively. Refer to "Uniform
Resource Identifiers (URI): Generic Syntax" ([RFC2396], section 4) for
information about URI references, as well as the HTTP/1.1 specification
[RFC2616], section 3.2.1.
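For instance, when repairing a missing text equivalent for an image retrieved over HTTP, the response headers available to the user agent might look like this (URI and values illustrative):

```http
HTTP/1.1 200 OK
Content-Location: http://example.org/images/sales-chart.png
Content-Type: image/png
```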

An image or another piece of content may appear several times in content.
If one instance has associated conditional content but others do not, reuse
what the author did provide.

Repair content may be part of another piece of content. For instance, some
image formats allow authors to store metadata there; refer to "Describing and
retrieving photos using RDF and HTTP"
[PHOTO-RDF].

Note: In some authoring scenarios, empty content (e.g.,
alt="" in HTML) may make an appropriate text equivalent, such as when non-text content has no other
function than pure decoration, or when an image is part of a "mosaic" of
several images and does not make sense out of the mosaic. Refer to the Web
Content Accessibility Guidelines 1.0 [WCAG10] for more information about
text equivalents.
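For example, an HTML author might give a purely decorative image an empty text equivalent (file name illustrative):

```html
<!-- Decorative only: empty "alt" indicates there is no information to convey -->
<img src="corner-ornament.gif" alt="">
```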

Notes and rationale

User agents should render nothing in this case because the author may
specify an empty text equivalent
for content that has no function in the page other than as decoration.

Who benefits

Users with blindness or low vision.

Example techniques

The user agent should not render generic labels such as "[INLINE]" or
"[GRAPHIC]" for empty conditional
content (unless configured to do so).

If no captioning information is available and captioning is turned on,
render "No captioning information available" in the captioning region of the
viewport (unless configured not to generate repair content).

Doing more

Labels (e.g., "[INLINE]" or "[GRAPHIC]") may be useful in some situations,
so the user agent may allow configuration to render "No author text" (or
similar) instead of empty conditional content.

Note: For instance, an HTML user agent might allow
configuration so that the value of the "alt" attribute is rendered in place of all
IMG elements (while other conditional content might be made
available through another mechanism). The user agent may offer multiple
configurations (e.g., a first configuration to render one type of conditional
content automatically, a second to render another type, etc.).

Who benefits

Users who have difficulties with navigation and manual access to content,
including some users with a physical disability and users with blindness or low
vision.

Example techniques

Provide a "conditional content view", where all content that is not
rendered by default is rendered in place of associated content. For example,
Amaya
[AMAYA] offers a "Show alternate" view that accomplishes this. Note,
however, cases where an element has more than one piece of associated
conditional content (e.g., render them all as a list, or as a list of links,
etc.). For long conditional content, instead of rendering in place, link to the
content.

This checkpoint does not require the user agent to allow different
configurations for different natural languages.

Note: This checkpoint is designed primarily to benefit
users with serial access to content
or who navigate
sequentially, allowing them to skip portions of content that would be
unusable if rendered as "garbage".

Notes and rationale

A script is a means of supporting the visual rendering of content in a
particular natural language. So, for user agents that render content visually,
a user agent might not recognize "the Cyrillic script", which would mean that
it would not support the visual rendering of Russian, Ukrainian, and other
languages that employ Cyrillic when written.

There may be cases when a conforming user agent supports a natural language
but a speech synthesizer does not, or vice versa.

Who benefits

Example techniques

Use a text substitute or accessible graphical icon to indicate that content
in a particular language has not been rendered. For instance, a user agent that
does not support Korean (e.g., does not have the appropriate fonts or voice
set) should allow configuration to announce the language change with the
message "Unsupported language – unable to render" (e.g., when the
language itself is not recognized) or "Korean not supported – unable to
render" (e.g., when the language is recognized but the user agent does not have
the resources to render it). The user should also be able to turn off alerts of
language changes. Rendering could involve speaking in the designated natural
language in the case of a voice browser or screen reader. If the natural
language is not supported, the language change alert could be spoken in the
default language by a screen reader or voice browser.

A user agent may not be able to render all characters in a document
meaningfully, for instance, because the user agent lacks a suitable font, a
character has a value that may not be expressed in the user agent's internal
character encoding, etc. In this case,
section 5.4 of HTML 4
[HTML4] recommends the following for undisplayable characters:

Adopt a clearly visible (or audible), but unobtrusive mechanism to alert
the user of missing resources.

If missing characters are presented using their numeric representation, use
the hexadecimal (not decimal) form since this is the form used in character set
standards.

CSS2's attribute selector may be used with the HTML "lang" or XML
"xml:lang" attributes to control rendering based on recognized natural language information.
Refer also to the ':lang'
pseudo-class ([CSS2], section 5.11.4).
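For instance, a style sheet might select content by recognized language with either mechanism (font name illustrative):

```css
/* Attribute selector on the HTML "lang" attribute */
*[lang|="ru"] { font-family: "Some Cyrillic Font", serif; }

/* Similar selection with the ':lang' pseudo-class */
:lang(ru) { font-family: "Some Cyrillic Font", serif; }
```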

References

For information on language codes, refer to "Codes for the representation
of names of languages" [ISO639].

Refer to "Character Model for the World Wide Web"
[CHARMOD]. It contains basic definitions and models, specifications
to be used by other specifications or directly by implementations, and
explanatory material. In particular, this document addresses early uniform
normalization, string identity matching, string indexing, and conventions for
URIs.

The user agent may satisfy this checkpoint with a configuration to not
render any images, including background images. However, user agents
should satisfy this checkpoint by allowing users to turn off background images
alone, independent of other types of images in
content.

When configured not to render background images, the user agent is not
required to retrieve them until the user requests them explicitly. When
background images are not rendered, user agents should render a solid
background color instead; see checkpoint 4.3 for information about text colors.
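A CSS user agent might satisfy this with the equivalent of the following user style sheet rules (color value illustrative):

```css
/* Turn off background images alone and substitute a solid color */
* { background-image: none !important; }
body { background-color: white !important; }
```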

This checkpoint only requires control of background images for "two-layered
renderings", i.e., one rendered background image with all other content
rendered "above it".

Note: When background images are not rendered, they are
considered conditional
content. See checkpoint
2.3 for information about providing access to conditional content.

Notes and rationale

This checkpoint does not address issues of multi-layered renderings and
does not require the user agent to change background rendering for multi-layer
renderings (refer, for example, to the 'z-index' property in Cascading Style
Sheets, level 2 ([CSS2], section 9.9.1)).

Who benefits

Some users with a cognitive disability or color deficiencies who may find
it difficult or impossible to read superimposed text or understand other
superimposed content.

Example techniques

If background images are turned off, make available to the user associated
conditional
content.

This configuration is required for content rendered without any user
interaction (including content rendered on load or as the result of a script),
as well as content rendered as the result of user interaction that is not an explicit user request (e.g.,
when the user activates a link).

Note: See
guideline 4 for additional requirements related to the control of rendered
audio, video, and animated images. When these content types are not rendered,
they are considered conditional content. See checkpoint 2.3 for
information about providing access to conditional content.

Who benefits

Some users with a cognitive disability, for whom an excess of visual
information (and in particular animated information) might make it impossible
to understand parts of content. Also, audio rendered automatically on load may
interfere with speech synthesizers.

Example techniques

For user agents that hand off content to different rendering engines, this
configuration should prevent the content from being handed off and cause a
placeholder to be rendered instead.

The "silent" or "invisible" solution for satisfying this checkpoint (e.g.,
by implementing the
'visibility' property defined in section 11.2 of CSS 2
[CSS2]) is not recommended. This solution means that the content is
processed, though not rendered, and processing may cause undesirable side
effects such as firing events. Or, processing may interfere with the processing
of other content (e.g., silent audio may interfere with other sources of sound
such as the output of a speech synthesizer). This technique should be deployed
with caution.
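For illustration, the technique being cautioned against amounts to hiding rather than suppressing the content:

```css
/* Not recommended: the object is still processed (and may fire events
   or play audio); it is merely not rendered visually */
object, embed { visibility: hidden; }
```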

As a placeholder for an animated image, render a motionless image built
from the first frame of the animated image.

Note: Animation (a rendering effect) differs from streaming
(a delivery mechanism). Streaming content might be rendered as an animation
(e.g., an animated stock ticker or vertically scrolling text) or as static text
(e.g., movie subtitles, which are rendered for a limited time, but do not give
the impression of movement).

Notes and rationale

The definition of blinking text is based on the CSS2 definition of the
'blink' value; refer to [CSS2], section 16.3.1.

Who benefits

Users with photosensitive epilepsy (for whom flashing content may trigger
seizures) and users with some cognitive disorders (for whom the distraction may
make the content unusable). Blinking text can also affect screen reader users,
since screen readers (in conjunction with speech synthesizers or braille
displays) may re-render the text every time it blinks.

Configuration is preferred as some users may benefit from blinking effects
(e.g., users who are deaf or hard of hearing). However, the priority of this
checkpoint was assigned on the basis of requirements unrelated to this
benefit.

Example techniques

The user agent may render the motionless text in a number of ways. Inline
is preferred, but for extremely long text, it may be better to render the text
in another viewport, easily reachable from the user's browsing context.

Allow the user to turn off animated or blinking text through the user agent user interface
(e.g., by pressing the Escape key to stop animations).

Some sources of blinking and moving text are:

The BLINK element in HTML. The BLINK element is not defined by a W3C
specification.

The MARQUEE element in HTML. The MARQUEE element is not defined by a W3C
specification.
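These nonstandard elements typically appear in content as follows:

```html
<blink>Limited-time offer!</blink>
<marquee>Text that scrolls across the viewport</marquee>
```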

Note: Scripts and applets may provide very useful
functionality, not all of which causes accessibility problems. Developers
should not consider that the user's ability to turn off scripts is an effective
way to improve content accessibility; turning off scripts means losing the
benefits they offer. Instead, developers should provide users with finer
control over user agent or content behavior known to raise accessibility
barriers. The user should only have to turn off scripts as a last resort.

Notes and rationale

Executable content includes scripts,
applets, ActiveX controls, etc. This checkpoint does not apply to plug-ins; they are not part of content.

Executable content includes content that runs "on load" (e.g., when a document
loads into a viewport) and content that runs when other events occur (e.g.,
user interface events).

Where possible, authors should encode knowledge in a declarative manner
(i.e., through static definitions and expressions) rather than in scripts.
Knowledge and behaviors embedded in scripts can be difficult or impossible to
extract, which means that user agents are less likely to be able to offer
control by the user over the script's effect. For instance, with SVG animation
(see chapter
19 of SVG 1.0 [SVG]), one can create animation
effects in a declarative manner, using recognizable elements and attributes.

Who benefits

Some users with photosensitive epilepsy; flickering or flashing,
particularly in the 4 to 59 flashes per second (hertz) range, may trigger
seizures. Peak sensitivity to flickering or flashing occurs at 20 hertz. Some
executable content can cause the screen to flicker.

Example techniques

Some user agents allow users to turn off scripts in the "Security" part of
the user interface. Since some users seeking accessibility features may not
think to look there, include the on/off switch in an accessibility part of the
user interface as well. Also, include a "How to turn off scripts" entry in the
documentation index.

Related techniques

Doing more

When support for scripts is turned on, and when the user agent recognizes
that there are script alternatives available (e.g., NOSCRIPT in
HTML), alert the user to the presence of the alternative (and make it easily
available). If a user cannot access the script content, the alert will raise
the user's awareness of the alternative, which may be more accessible.

While this checkpoint only requires an on/off configuration switch, user
agents should allow finer control over executable content. For instance, in
addition to the switch, allow users to turn off just input device event
handlers, or to turn on and off scripts in a given scripting language
only.

When the user chooses not to retrieve (fresh) content, the user agent may
ignore that content; buffering is not required.

The user agent is not required to satisfy this checkpoint for "client-side
redirects", i.e., author-specified instructions that a piece of content is
temporary and intermediate, and is replaced by content that results from a
second request. Authors (and Webmasters) should use the redirect mechanisms of
HTTP instead of client-side redirects.

This checkpoint only applies when the user agent (not the server)
automatically initiates the request for fresh content.

Note: For example, if an HTML author has used a
META element for automatic content retrieval, allow configuration to
override the automatic behavior with manual confirmation.

Notes and rationale

Some HTML authors specify automatic content retrieval
using a META element with http-equiv="refresh", with the frequency specified by
the "content" attribute (seconds between retrievals).

Who benefits

Some users with a cognitive disability, users with blindness or low vision,
and any user who may be disoriented (or simply annoyed) by automatically
changing content.

Example techniques

Alert the user that suppressing the retrieval may lead to loss of
information (e.g., packet loss).

Doing more

When configured not to retrieve content automatically, alert the user of
the frequency of retrievals specified in content, and allow the user to
retrieve fresh content manually (e.g., by following a link or confirming a
prompt).

Allow users to specify their own retrieval frequency.

Allow at least one configuration for low-frequency retrieval (e.g., every
10 minutes).

Retrieve new content without displaying it automatically. Allow the user to
view the differences (e.g., by highlighting or filtering) between the currently
rendered content and the new content (including no differences).

Allow configuration so that a
client-side redirect only changes
content on explicit user
request. This configuration need not apply to client-side redirects
specified to occur instantaneously (i.e., after no delay). Client-side
redirects may disorient the user, but are less serious than automatic content
retrieval since the intermediate state (just before the redirect) is generally
not important content that the user might regret missing. Some
HTML user agents support client-side redirects authored using a
META element with http-equiv="refresh". This use of
META is not a normative part of any W3C Recommendation and may
pose interoperability problems.
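Such a client-side redirect typically takes the following form (URI illustrative), replacing the current page after five seconds:

```html
<meta http-equiv="refresh" content="5; url=http://example.org/new-location">
```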

Provide a configuration so that when the user navigates "back" through the
user agent history to a page with a client-side redirect, the user agent does
not re-execute the client-side redirect.

References

For Web content authors: refer to the HTTP/1.1 specification
[RFC2616] for information about using server-side redirect
mechanisms (instead of client-side redirects).

Notes and rationale

This priority of
checkpoint 3.2 is higher than the priority of this checkpoint because an
excess of moving visual information is likely to be more distracting to some
users than an excess of still visual information.

Who benefits

Some users with a cognitive disability, for whom an excess of visual
information might make it difficult to understand parts of content.

The user agent may satisfy provision one of this checkpoint through a
number of mechanisms, including zoom, magnification, and allowing the user to
configure a reference size for rendered text (e.g., render text at 36 points
unless otherwise specified). For example, for CSS2
[CSS2] user agents, the 'medium' value of the 'font-size' property
corresponds to a reference size.
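For instance, a configuration might establish a large reference size from which author-specified relative sizes derive, preserving relative size relationships (values illustrative):

```css
body  { font-size: 36pt; }  /* user-chosen reference size */
h1    { font-size: 150%; }  /* still renders larger than body text */
small { font-size: 80%; }   /* still renders smaller than body text */
```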

The word "scale" is used in this checkpoint to mean the general size of
text.

The user agent is not required to satisfy this requirement through
proportional scaling. What must hold is that if rendered text A is smaller than
rendered text B at one value of this configuration setting, then text A will
still be smaller than text B at another value of this configuration
setting.

Notes and rationale

For example, allow the user to configure the user agent to apply the same
font family across Web resources, so that all
text is displayed by default using that font family. Or, allow the user to
control the text scale dynamically for a given element, e.g., by navigating to
the element and zooming in on it.

The choice of optimal techniques depends in part on which markup language
is being used. For instance, HTML user agents may allow the user to change the
font size of a particular piece of
text (e.g., by using CSS user style sheets) independent of other content
(e.g., images). Since the user agent can reflow the text after resizing the
font, the rendered text will become more legible without, for example,
distorting bitmap images. On the other hand, some languages, such as SVG, do
not allow text reflow, which means that changes to font size may cause rendered
text to overlap with other content, reducing accessibility. SVG is designed to
scale, making a zoom functionality the more natural technique for SVG user
agents satisfying this checkpoint.

The primary intention of this checkpoint is to allow users with low vision
to increase the size of text. Full configurability includes the choice of very
small text sizes that may be available, though this is not considered by the
User Agent Accessibility Guidelines Working Group to be part of the priority 1
requirement. This checkpoint does not include a "lower bound" (above which text
sizes would be required) because of how users' needs may vary across writing
systems and hardware.

Who benefits

Users with low vision, who benefit from the ability to increase the text
scale. Note that some users may also benefit from the ability to choose small
font sizes (e.g., users of screen readers who wish to have more content per
screen so they have to scroll less frequently). People who use captions may
need to change the text scale.

Example techniques

The ratios of the sizes should be compressed at large text sizes, as the
same number of different sizes must be packed into a smaller dynamic
range.

Vectorial formats such as Scalable Vector Graphics [SVG]
are designed to scale. For bitmap fonts, the user agent may need to round off
font sizes when the user increases or decreases the scale.

Note: For example, allow the user to specify that all text is to be rendered in a particular
sans-serif font family.
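In a CSS user agent, this might be the equivalent of a one-rule user style sheet (family name illustrative):

```css
/* Render all text in the user's preferred sans-serif family */
* { font-family: Helvetica, sans-serif !important; }
```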

Who benefits

Users with low vision or some users with a cognitive disability or reading
disorder. Some people require the ability to change the font family of text in
order to read it. People who use captions may also need to change the font
family.

Note: User configuration of foreground and background
colors may inadvertently lead to the inability to distinguish ordinary text
from selected text, focused text, etc. See checkpoint 10.2 for more
information about highlight styles.

Who benefits

Users with color deficiencies and some users with a cognitive disability.
People who use captions may need to change the text color.

SMIL does not have a global property for "background color", but allows
specification of background color by region (refer, for example, to the
definition of the 'background-color' attribute defined in section 3.3.1 of
SMIL 1.0 [SMIL]). In the case of SMIL, the
user agent would satisfy this checkpoint by applying the user's preferred
background color to all regions (and to all root-layout elements
as well). SMIL 1.0 does not have a way to specify the foreground color of text,
so that portion of the checkpoint would not apply.
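A sketch of a SMIL 1.0 layout in which the user's preferred background color has been applied to the root-layout and to each region (names and dimensions illustrative):

```xml
<layout>
  <root-layout width="320" height="280" background-color="black"/>
  <region id="video-area" top="0" left="0"
          width="320" height="240" background-color="black"/>
  <region id="caption-area" top="240" left="0"
          width="320" height="40" background-color="black"/>
</layout>
```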

Allow the user to slow the presentation rate
of rendered audio and animation content (including video and
animated images).

As part of satisfying provision one of this
checkpoint, for a visual track, provide at
least one setting between 40% and 60% of the original speed.

As part of satisfying provision one of this
checkpoint, for a prerecorded audio track including audio-only
presentations, provide at least one setting between 75% and 80% of the
original speed.

When the user agent allows the user to slow
the visual track of a synchronized multimedia presentation to between 100% and
80% of its original speed, synchronize the visual and audio tracks (per checkpoint 2.6). Below 80%,
the user agent is not required to render the audio track.

The user agent is not required to satisfy this checkpoint for audio and
animations whose recognized role is to create
a purely stylistic effect. Purely stylistic effects include background sounds,
decorative animated images, and effects caused by style sheets.

Note: The style exception of this checkpoint is based on
the assumption that authors have satisfied the requirements of the "Web Content
Accessibility Guidelines 1.0" [WCAG10] not to convey information
through style alone (e.g., through color alone or style sheets alone).

Notes and rationale

Slowing one track (e.g., video) may make it harder for a user to understand
another synchronized track (e.g., audio), but if the user can understand
content after two passes, this is better than not being able to understand it
at all.

Some formats (e.g., streaming formats) might not enable the user agent to
slow down playback and would thus be subject to applicability.

Who benefits

Some users with a learning or cognitive disability, or some users with
newly acquired sensory limitations (such as a person who is newly blind and
learning to use a screen reader). Users who have beginning familiarity with a
natural language may
also benefit.

Example techniques

When changing the rate of audio, avoid pitch distortion.

In HTML 4 [HTML4], background animations may
be specified with the deprecated "background" attribute.
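For example (file name illustrative):

```html
<!-- Deprecated: an animated image used as the page background -->
<body background="animated-stars.gif">
```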

The user agent may satisfy the navigation requirement of provision two of
this checkpoint through forward and backward serial access techniques (e.g., advance
five seconds), or direct access techniques (e.g., play starting at the
10-minute mark), or some combination.

When serial access techniques
are used to satisfy provision two of this checkpoint, the user agent is not
required to play back content during advance or rewind (though doing so may
help orient the user).

When the user pauses a real-time audio or animation, the user agent may
discard packets that continue to arrive during the pause.

This checkpoint applies to content that is either rendered automatically
(e.g., on load) or on explicit request from the user.

The user agent is not required to satisfy this checkpoint for audio and
animations whose recognized role is to create
a purely stylistic effect; see
checkpoint 4.4 for more information about what constitutes a stylistic
effect.

Note: The lower bound of three seconds is part of this
checkpoint since control is not required for brief audio and animation clips,
beeps, etc. Respect synchronization cues per checkpoint 2.6.

Notes and rationale

Some formats (e.g., streaming formats) might not enable the user agent to
fast forward or rewind content and would thus be subject to applicability.

For some streaming media formats, the user agent might not be able to offer
some functionalities (e.g., fast forward) when the content is being delivered
over the Web in real time. However, the user agent is expected to offer these
functionalities for content (in the same format) that is fully available, for
example on the user's computer.

Playback during serial advance or rewind is not
required. For example, the user agent is not required to play an animation at
double speed during a continuous fast forward. Similarly, when the user fast
forwards or rewinds an animation, the user agent is not required to play back a
synchronized audio track.

Who benefits

Some users with a cognitive disability.

Example techniques

Serial access and sequential
navigation techniques include, for example, rewind in large or small time
increments, forward to the next audio track, etc. Direct access techniques
include access to visual track number 7, to the 10-minute mark, etc. The best
choice of serial, sequential, or direct access techniques will depend on the
nature of the content being rendered.

If buttons are used to control advance and rewind, make the advance/rewind
distances proportional to the time the user activates the button. After a
certain delay, accelerate the advance/rewind.

The
SMIL 2.0 Time Manipulations Module ([SMIL20], chapter 11) defines the
speed attribute, which can be used to change the playback
direction (forward or reverse) of any animation. See also the
accelerate and decelerate attributes.
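For instance, the following SMIL 2.0 fragment would play a video at half its original speed (source URI illustrative):

```xml
<video src="movie.mpg" speed="0.5"/>
```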

Some content lends itself to different forward and reverse functionalities.
For instance, compact disk players often let listeners fast forward and rewind,
but also skip to the next or previous song.

Doing more

Allow fine control over advance and rewind functionalities. Some users with
a physical disability will find useful the ability to advance or rewind the
presentation in (configurable) increments.

The user agent should display time codes or otherwise represent position in
content to orient the user.

Notes and rationale

Rendering captions in a separate viewport may make it easier for users with
screen readers to access the captions.

Traditionally, caption text is rendered on a solid background color.
Research shows that some users prefer white lettering on a black
background.

Who benefits

Some users with a cognitive disability or with color deficiencies, who may
need to configure rendering to make captions more legible.

Example techniques

For the purpose of applying this clause, SMIL 1.0
[SMIL] and SMIL 2.0 [SMIL20] user agents should
recognize as captions any media object whose reference from SMIL is affected by
the 'system-captions' test attribute.
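In SMIL 1.0, such a caption reference typically takes this form (file names illustrative); the text stream is rendered only when the player's captioning setting is on:

```xml
<par>
  <video src="movie.mpg"/>
  <textstream src="captions.rt" system-captions="on"/>
</par>
```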

Doing more

Allow the user to turn off rendering of captions.

Allow users to position captions. Some users (e.g., users with low vision
and a hearing disability, or users who are not fluent in the language of an
audio track) may need captions and video to have a particular spatial relation
to each other, even if this results in partially obscured content. Positioning
techniques include the following:

User agents should implement the positioning features of the employed
markup or style sheet language. Even when a markup language does not specify a
positioning mechanism, when a user agent can recognize distinct text transcripts, collated text
transcripts, or captions, the user agent
should allow the user to reposition them. User agents are not expected to allow
repositioning when the captions, etc., cannot be separated from other media
(e.g., the captions are part of the visual track).

Allow the user to choose whether captions appear at the bottom or top of
the video area or in other positions. Currently authors may place captions
overlying the video or in a separate box. Overlaid captions may prevent users
from viewing other information in the video or on other parts of the screen,
making it necessary to move the captions in order to view all content at once.
In addition, some users will find captions easier to read if they can place
them in a location best suited to their reading style.

Allow users to configure a general preference for caption position and to
be able to fine-tune specific cases. For example, the user may want the
captions to be in front of and below the rest of the presentation.

Allow the user to drag and drop the captions to a place on the screen. To
ensure device-independence, allow the user to enter the screen coordinates of
one corner of the caption.

Do not require users to edit the source code of the presentation to achieve
the desired effect.

Allow the user to position all parts of a presentation rather than trying
to identify captions specifically (i.e., solving the problem generally may be
easier than for captions alone).

Note: User agents should allow configuration of volume
through available operating environment
mechanisms.

Example techniques

Use audio control mechanisms provided by the operating environment. Control
of volume mix is particularly important, and the user agent should provide easy
access to those mechanisms provided by the operating environment.

The user control required by this checkpoint includes the ability to override author-specified volumes for the
relevant sources of audio.
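For CSS2 user agents, one way to override author-specified volumes is the equivalent of a user style sheet rule using the aural 'volume' property ([CSS2], section 19):

```css
/* Override author-specified volumes with the user's preference */
* { volume: medium !important; }
```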

The user agent is not required to satisfy this checkpoint for audio whose
recognized role is to create a purely
stylistic effect; see checkpoint
4.4 for more information about what constitutes a stylistic effect.

Note: The user agent should satisfy this checkpoint by
allowing the user to control independently the volumes of all audio sources (e.g., by implementing a general
audio mixer type of functionality). See checkpoint 4.10 for information about controlling the volume
of synthesized speech.

Notes and rationale

Sounds that play at different times are distinguishable and therefore
independent control of their volumes is not required by this checkpoint (since
volume control required by checkpoint 4.7 suffices).

There are at least three good reasons for strongly recommending that the
volume of all audio sources be independently configurable, not just those
synchronized to play simultaneously:

Sounds that are not synchronized may end up playing simultaneously.

If the user cannot anticipate when a sound will play, the user cannot
adjust the global volume control at appropriate times to affect this
sound.

It is extremely inconvenient to have to adjust the global volume
frequently.

Sounds specified by the author to play "on document load" are likely to
overlap with each other. If they continue to play, they are also likely to
overlap with subsequent sounds played manually or automatically.

Who benefits

Users (e.g., with blindness or low vision) who rely on audio and
synthesized speech rendering.

Related techniques

For each source of audio, allow the user to control
the volume using the same user interface used to satisfy the requirements of checkpoint 4.5.

Doing more

Provide the same functionality for audio whose recognized role is to create a purely
stylistic effect.

Note: The range of synthesized speech rates offered by the
speech synthesizer may depend on natural language.

Example techniques

For example, many speech synthesizers offer a range for English speech of
120 - 500 words per minute or more. The user should be able to increase or
decrease the rendering rate in convenient increments (e.g., in large steps,
then in small steps for finer control).

User agents may allow different synthesized speech rate configurations for
different natural languages. For example, this may be implemented with CSS2
style sheets using the :lang
pseudo-class ([CSS2], section 5.11.4).
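
For instance, a user style sheet along these lines could slow speech for one language while leaving another at a faster rate, using the CSS2 aural "speech-rate" property; the values shown are illustrative:

```css
/* Illustrative user style sheet: per-language synthesized speech rates,
   using the CSS2 :lang pseudo-class and aural 'speech-rate' property. */
:lang(en) { speech-rate: 180 }   /* words per minute */
:lang(fr) { speech-rate: slow }  /* keyword value defined by CSS2 */
```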

Who benefits

Users (e.g., with blindness or low vision) who rely on audio and
synthesized speech rendering.

Doing more

Content may include commands that are interpreted by a speech synthesizer
to change the rate (or control other synthesized speech parameters). This
checkpoint does not require the user agent to allow the user to override
author-specified rate changes (e.g., by transforming or otherwise stripping out
these commands before passing on the content to the speech synthesizer). Speech
synthesizers themselves may allow user override of author-specified rate
changes. For such synthesizers, the user agent should ensure access to
this feature.

Note: This checkpoint is more specific than checkpoint
4.11. It requires support for the voice characteristics listed in the
provisions of this checkpoint. Definitions for these characteristics are based
on descriptions in section 19 of the Cascading Style Sheets Level 2
Recommendation [CSS2]; refer to that specification
for additional informative descriptions.
Some speech synthesizers allow users to choose values for synthesized speech
characteristics at a higher abstraction layer, i.e., by choosing from preset
options distinguished by "gender", "age", "accent", etc. Ranges of values may
vary among speech synthesizers.

Who benefits

Users (e.g., with blindness or low vision) who rely on audio and
synthesized speech rendering. Some users with a hearing disability as well may
require control over these parameters.

Note: Definitions for the functionalities listed in the
provisions of this checkpoint are based on descriptions in section 19 of the
Cascading Style Sheets Level 2 Recommendation
[CSS2]; refer to that specification for additional informative descriptions.

Example techniques

This image shows how ViaVoice
[VIAVOICE] allows users to add entries to the user's personal
dictionary.

Who benefits

Users (e.g., with blindness or low vision) who rely on audio and
synthesized speech rendering.

References

For information about these functionalities, refer to descriptions in
section 19.8 of Cascading Style Sheets Level 2
[CSS2].

This checkpoint only applies to user agents that
support style sheets.

Note: By definition, the user agent's default style
sheet is always present, but may be overridden by author or user styles.
Developers should not consider that the user's ability to turn off author and
user style sheets is an effective way to improve content accessibility; turning
off style sheet support means losing the many benefits they offer. Instead,
developers should provide users with finer control over user agent or content
behavior known to raise accessibility barriers. The user should only have to
turn off author and user style sheets as a last resort.

Example techniques

For HTML [HTML4], make available "class" and
"id" information so that users can override styles.
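
For example, if the author's "class" and "id" values are exposed, a user style sheet can override them; the selector names below are hypothetical:

```css
/* Hypothetical author class/id names; '! important' gives user rules
   precedence under the CSS2 cascade. */
p.fineprint { font-size: 120% ! important }   /* enlarge small text */
#sidebar    { display: none ! important }     /* suppress a distracting region */
```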

Who benefits

Any user with a disability who needs to override the author's style sheets
or user agent default style sheets in order to have control over style and
presentation, or who needs to tailor the style of rendered content to meet
their own needs.

Doing more

Allowing the user to select more than one style sheet may be a useful way
to implement other requirements of this document. Also, if the user agent
offers several default style sheets, the user agent can also use these to
satisfy some requirements. By making alternative style sheets available on the
Web, people can thus improve the accessibility of deployed user agents.

Inform the user (e.g., through a discreet flag in the user interface) when
alternate author style sheets are available. Allow the user to easily choose
from among them.

References

Chapter 7 of the CSS1 Recommendation
[CSS1] recommends that the user be able to specify user style
sheets, and that the user be able to turn off individual style sheets.

To satisfy provision one of this checkpoint, configuration is preferred,
but is not required if the content focus can only ever be moved on explicit user request.

Who benefits

Some users with a cognitive disability, blindness, or low vision, who may
be disoriented if the focus moves automatically (and unexpectedly) to a new
viewport. These users may also find it difficult to restore the previous point
of regard.

Example techniques

Allow the user to configure how the current focus changes when a new
viewport opens. For instance, the user might choose between these two options:

Do not change the focus when a viewport opens, but alert the user (e.g.,
with a beep, flash, and text message on the status bar). Allow the user to
navigate directly to the new window upon demand.

Change the focus when a window opens and use a subtle alert (e.g., a beep,
flash, and text message on the status bar) to indicate that
the focus has changed.
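
The two options above can be sketched as a single configurable handler; the preference name and the alert callback are hypothetical:

```javascript
// Hypothetical handler invoked when content opens a new viewport.
// prefs.moveFocusToNewViewport selects between the two options above;
// alertUser stands in for a beep, flash, or status-bar message.
function handleViewportOpen(newViewport, prefs, alertUser) {
  if (prefs.moveFocusToNewViewport) {
    newViewport.focus();                      // focus follows the new viewport
    alertUser("Focus moved to a new window");
    return "new";
  }
  alertUser("A new window has opened");       // focus stays; user may navigate on demand
  return "original";
}
```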

If a new viewport or prompt appears
but focus does not move to it, alert assistive technologies (per checkpoint 6.6) so that they
may discreetly inform the user.

When a viewport is duplicated, the focus in the new viewport should
initially be the same as the focus in the original viewport. Duplicate
viewports allow users to navigate content (e.g., in search of some information)
in one viewport while allowing the user to return with little effort to the
point of regard in the duplicate viewport. There are other techniques for
accomplishing this (e.g., "registers" in Emacs).

In JavaScript, the focus may be changed with
myWindow.focus();

For user agents that implement CSS 2
[CSS2], the following rule will generate a message to the user at
the beginning of link text for links that are meant to open new windows when
followed:

A[target=_blank]:before { content: "Open new window" }

Doing more

The user agent may also allow configuration about whether the pointing
device moves automatically to windows that open without an explicit user
request.

When configured per provision one of this
checkpoint, instead of opening a viewport automatically, alert the user and
allow the user to open it with an explicit request (e.g., by
confirming a prompt or following a link generated by the user agent).

If a viewport (e.g., a frame set) contains other viewports, these
requirements only apply to the outermost container viewport.

User creation of a new viewport (e.g., empty or with a new resource loaded)
through the user agent's user interface constitutes an explicit user
request.

Note: Generally, viewports open automatically as the result
of instructions in content. See also checkpoint 5.1 (for
control over changes of focus when a viewport opens) and checkpoint 6.6 (for
programmatic notification of changes to the user interface).

Who benefits

Some users with serial access to content or who navigate
sequentially, who may find navigation of multiple open viewports difficult.
Also, some users with a cognitive disability may be disoriented by multiple
open viewports.

Example techniques

For HTML [HTML4], allow the user to control
the process of opening a document in a new "target" frame or a viewport created
by a script. For example, for target="_blank", open the window
according to the user's preference.

For SMIL [SMIL], allow the user to control
viewports created with the "new" value of the "show"
attribute.

In JavaScript, windows may be opened with:

myWindow.open("http://example.com/", "myNewWindow");

myWindow.showHelp(URI);

Doing more

Allow configuration to prompt the user to confirm (or cancel) closing
any viewport that starts to close without explicit user request. For
instance, in JavaScript, windows may be closed with
myWindow.close();. Some users with a cognitive disability may find it
disorienting if a viewport closes automatically. On the other hand, some users
with a physical disability may wish these same viewports to close automatically
(rather than being required to close them manually).

Note: For example, if users navigating links move to a
portion of the document outside a graphical viewport, the viewport should
scroll to include the new location of the focus. Or, for users of audio
viewports, allow configuration to render the selection or focus immediately
after the change.

Who benefits

Users who may be disoriented by a change in focus or selection that is not
reflected in the viewport. This includes some users with blindness or low
vision, and some users with a cognitive disability.

Example techniques

There are times when the content focus changes (e.g., link navigation) and
the viewport should track it. There are other times when the viewport changes
position (e.g., scrolling) and the content focus should follow it. In either
case, the focus (or selection) should be in the viewport after the change.

If a search causes the selection or focus to change, ensure that the found
content is not hidden by the search prompt.

When the content focus changes, register the newly focused element in the
navigation sequence; sequential navigation should
start from there.

Unless viewports have been coordinated, changes to selection or focus in
one viewport should not affect the selection or focus in another viewport.

The persistence of the selection or focus in the viewport will vary
according to the type of viewport. For any viewport with persistent rendering
(e.g., a two-dimensional
graphical or tactile viewport), the focus or selection should remain in the
viewport after the change until the user changes the viewport. For any viewport
without persistent rendering (e.g., an audio viewport), once the focus or
selection has been rendered, it will no longer be "in" the viewport. In a pure
audio environment, the whole persistent context is in the mind of the user. In
a graphical viewport, there is a large shared buffer of dialog information in
the display. In audio, there is no such sensible patch of interaction that is
maintained by the computer and accessed, at will, by the user. The audio
rendering of content requires the elapse of time, which is a scarce resource.
Consequently, the flow of content through the viewport has to be managed more
carefully, notably when the content was designed primarily for graphical
rendering.

If the rendered selection or focus does not fit entirely within the limits
of a graphical viewport, then:

if the region actually displayed prior to the change was within the
selection or focus, do not move the viewport.

otherwise, if the region actually displayed prior to the change was not
within the newly selected or focused content, move to display at least the
initial fragment of such content.

Configuration is preferred, but is not required if forms can only ever be
submitted on explicit user
request.

Note: Examples of automatic form submission include:
script-driven submission when the user changes the state of a particular form
control associated with the form (e.g., via the pointing device), submission
when all fields of a form have been filled out, and submission when a
"mouseover" or "change" event
occurs.

Example techniques

Allow the user to configure script-based submission (e.g., form submission
accomplished through an "onChange" event). For instance, allow these settings:

Do not allow script-based submission.

Allow script-based submission after confirmation from the user.

Allow script-based submission without prompting the user (but not by
default).
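
These three settings might be wired up as follows; the setting names and the confirmation callback are hypothetical:

```javascript
// Gate script-driven form submission on a hypothetical user setting:
// "never", "confirm", or "always".
function submitViaScript(form, setting, confirmFn) {
  if (setting === "never") return false;                     // block it outright
  if (setting === "confirm" && !confirmFn("Submit this form?")) {
    return false;                                            // user cancelled
  }
  form.submit();                                             // "always", or confirmed
  return true;
}
```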

Authors may write scripts that submit a form when particular
events occur (e.g., "onchange" events). Be watchful for this type of code,
which may disorient users:

<SELECT NAME="condition" onchange="switchpage(this)">

As soon as the user attempts to navigate the menu, the "switchpage" function
opens a document in a new viewport. Try to avoid orientation problems that may
be caused by scripts bound to form elements.

Be aware that users may inadvertently press the Return or
Enter key and accidentally submit a form.

In JavaScript, a form may be submitted with:

document.forms[0].submit();

document.all.mySubmitButton.click();

Generate a form submit button when the author has not provided one.

Who benefits

Any user who might be disoriented by an automatic form submission (e.g.,
users who navigate sequentially through
select box options, or some users with a cognitive disability) or who might
inadvertently submit a form (e.g., some users with a physical disability).

Doing more

Some users may not want to have to confirm all form submissions, so allow
multiple configurations, such as: confirm all form submissions; confirm
script-activated form submissions; confirm all form submissions except those
done through the graphical user interface (e.g., when the user moves content focus to a submit button and
activates it).

Users with serial access to content
or who navigate
sequentially may think that the submit button in a form is the "final" user interface control they
need to complete before submitting the form. Therefore, for forms in which
additional controls follow a submit button, if those controls have not been
completed, inform the user and ask for confirmation (or completion) before
submission.

For forms, allow users to search for
user interface controls that need to be changed by the user before
submitting the form.
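
A sketch of such a search over the form's controls; the { name, required, value } shape of the control objects is assumed for illustration:

```javascript
// Return the names of controls the user still needs to complete before
// submission. The { name, required, value } control shape is assumed.
function incompleteControls(controls) {
  return controls.filter(function (c) {
    return c.required && (c.value == null || c.value === "");
  }).map(function (c) {
    return c.name;
  });
}
```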

If the user can modify HTML and XML content ("write access") through the
user interface (e.g., through form controls), allow for the same
modifications programmatically.

Notes and rationale

The primary reason for requiring user agents to provide access to the
Infoset is that this gives assistive technologies access to the original
structure of the document. For example, this means that assistive technologies
that render content as synthesized speech are not required to construct the
speech view by "reverse engineering" a graphical view. Direct access to the
structure allows the assistive technologies to render content in a manner best
suited to a particular output device. This does not mean that assistive
technologies should be prevented from having access to the rendering of the
conforming user agent; rather, that they not be required to depend entirely on
it. In fact, user agents that render content as synthesized speech may wish to
synchronize a graphical view with a speech view; see checkpoint 6.4 for
information about access to some rendered information.

modify the attribute list of a document and thus add information into the
document object that will not be rendered by the user agent.

add entire nodes to the document that are specific to the assistive
technologies and that may not be rendered by a user agent unaware of their
function.

The ability to write to the Infoset can improve performance for the
assistive technology. For example, if an assistive technology has already
traversed a portion of the document object and knows that a section (e.g., a
style element) could not be rendered, it can mark this section "to be
skipped".

Another benefit of write access is to add information necessary for audio
rendering but that would not be stored directly in the document object during
parsing (e.g., numbers in an ordered list). An assistive technology component
can add numeric information to the document object. The assistive technology
can also mark a subtree as having been traversed and updated, to eliminate
recalculating the information the next time the user visits the subtree.
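
For example, an assistive technology might record each list item's number, and a "traversed" marker, directly in the document object. In this sketch the node shape and attribute names are stand-ins, not actual DOM API calls:

```javascript
// Annotate an ordered list's items with their computed numbers so the
// assistive technology need not recalculate them on the next visit.
// Nodes are plain objects standing in for document-object nodes.
function annotateOrderedList(olNode) {
  var n = 0;
  olNode.children.forEach(function (child) {
    if (child.name === "li") {
      n += 1;
      child.attributes["at-item-number"] = String(n); // hypothetical attribute
    }
  });
  olNode.attributes["at-traversed"] = "true";         // mark subtree as done
  return n;
}
```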

Who benefits

Users with a disability who rely on assistive technologies for input and
output.

Provide access to the content required in checkpoint 6.1 by conforming to the following modules of the
W3C Document Object Model (DOM) Level 2 Core Specification
[DOM2CORE] and exporting bindings for the interfaces they define:

The user agent is not required to export the bindings outside of the user
agent process (though doing so may be useful to assistive technology
developers).

Note: Refer to the "Document Object Model (DOM) Level 2
Core Specification"
[DOM2CORE] for information about HTML and
XML versions covered. This checkpoint stands apart from
checkpoint 6.1 to emphasize
the distinction between what information is required and how to provide access
to that information.

Notes and rationale

Provide programmatic read (and write) access to the document object in a
thread-safe manner, to ensure that the application and system are not
compromised. In multi-threaded environments, assistive technologies will access
the document object (in or out of process) on a separate thread. Simultaneous
access to the document object on more than one thread could result in deadlock
situations and memory access violations, corrupting the application and
possibly the assistive technology. Developers should therefore use commonly
available operating system supported interprocess communication features (such
as semaphores) to ensure synchronized, thread-safe access.

Who benefits

Users with a disability who rely on assistive technologies for input and
output.

Example techniques

When generating repair content, user agents should try
to ensure consistency after repair between the document object and the
rendering structure (see
checkpoint 6.4).

"Structured programmatic access" means access through an API to recognized
information items of the content (such as the information items of the XML
Infoset [INFOSET]). Plain text has little
structure, so an API that provides access to it will be correspondingly less
complex than an API for XML content. For content more structured than plain
text, an API that only provides access to a stream of characters does not
satisfy the requirement of providing structured programmatic access. This
document does not otherwise define what is sufficiently structured access.

An API is considered "available" if the specification of the API is
published (e.g., as a W3C Recommendation) in time for integration into a user
agent's development cycle.

Related techniques

References

Sun Microsystems Java Accessibility API ([JAVAAPI]) in Java JDK. If the
user agent supports Java applets and provides a Java Virtual Machine to run
them, the user agent should support the proper loading and operation of a Java
native assistive technology. This assistive technology can provide access to
the applet as defined by Java accessibility standards.

The ATK library [ATK] provides a set of interfaces for
accessibility in the GNOME environment.

For graphical user agents, make
available bounding dimensions and coordinates of rendered graphical objects.
Coordinates must be relative to the point of origin in the graphical
environment (e.g., with respect to the desktop), not the viewport.

For graphical user agents, provide
access to the following information about each piece of rendered text: font
family, font size, and foreground and background colors.

As part of satisfying provisions one
and two of this checkpoint, implement at least one API according to the API
cascade described in provision two of checkpoint 6.3.

Note: User agents should provide programmatic access to
additional useful information about rendered content that is not available
through the APIs required by checkpoints 6.2 and
6.3, including the correspondence (in both directions) between graphical
objects and their source in the document object, and information
about the role of each graphical object.

Notes and rationale

The first two provisions of this checkpoint refer to what is actually
rendered on the screen. In CSS for example, this means "actual values"
rather than "computed values". Note, however, that the CSS module of the
DOM Level 2 Style Specification
[DOM2STYLE] does not provide access to actual values, only read-only
access to computed values.

This document requires programmatic access to rendering structure (even in
the absence of standard APIs) for at least the following reasons:

Some user agents (e.g., screen magnifiers) are more interested in what is
rendered than the document object, so access to the document object may not be
helpful.

A graphical user agent knows what information is available on the screen.
Assistive technologies should not be required to recalculate what's on the
screen because the work has already been done.

The user agent's rendering is definitive. If an assistive technology is
required to build a rendering structure from the same document object, style
sheets, and user preferences, that rendering is unlikely to match exactly the
user agent's own rendering.

HTML content on the Web may be invalid. Some user agents generate rendering
structure based on content that is different from what appears in the document
object after repair. Thus, there can be a mismatch between what's on the screen
and what's available through the DOM.

Who benefits

Users with a disability who rely on assistive technologies for input and
output.

Note: APIs used to satisfy the requirements of this
checkpoint may be independent of a particular operating environment (e.g., the
W3C DOM), conventional APIs for a particular operating environment,
conventional APIs for programming languages,
plug-ins, virtual machine environments, etc. User agent developers are
encouraged to implement APIs that allow assistive technologies to interoperate
with multiple types of software in a given operating environment (user agents,
word processors, spreadsheet programs, etc.), as this reuse will benefit users
and assistive technology developers. User agents should always follow operating
environment conventions for the use of input and output APIs.

Notes and rationale

It is important to use APIs that ensure that
text content is available to assistive technologies as text and not, for
example, as a series of strokes drawn on the screen.

Who benefits

Users with a disability who rely on assistive technologies for input and
output.

Example techniques

Use conventional user interface controls. Third-party
assistive technology developers are more likely to be able to access conventional controls than custom controls.
If you use custom controls, review them for accessibility and compatibility
with third-party assistive technology. Ensure that they provide accessibility
information through an API as is done for the conventional controls.

Operating system and application frameworks have conventions for
communication with input devices. In the case of Windows, OS/2, the X Windows
System, and Mac OS, the window manager provides graphical user interface
(GUI) applications with this information through the
messaging queue. In the case of non-GUI applications, the compiler run-time
libraries provide conventional mechanisms for receiving keyboard input in the
case of desktop operating systems. If you use an application framework such as
the Microsoft Foundation Classes, the framework used should support the same
conventional input mechanisms.

Do not communicate directly with an input device; this may circumvent operating
environment messaging. For instance, in Windows, do not open the keyboard
device driver directly. It is often the case that the windowing system needs to
change the form and method for processing conventional input mechanisms for
proper application coexistence within the user interface framework.

Do not implement your own input device event queue mechanism; this may
circumvent operating environment messaging. Some assistive technologies use
conventional system facilities for simulating keyboard and mouse events. From
the application's perspective, these events are no different than those
generated by the user's actions. The "Journal Playback Hooks" (in both OS/2 and
Windows) are one example of a mechanism that feeds the standard event
queues. For an example of a standard event queue mechanism, refer to the
"Carbon Event Manager Preliminary API Reference"
[APPLE-HI].

Operating
environments have conventions for communicating with output devices. In the
case of common desktop operating systems such as Windows, OS/2, and Mac OS,
conventional
APIs are provided for writing to the display and the multimedia
subsystems.

Avoid rendering text in the form of a bitmap
before transferring to the screen, since some screen readers rely on the user
agent's offscreen model. An offscreen model is rendered content created by an
assistive technology that is based on the rendered content of another user
agent. Assistive technologies that rely on an offscreen model generally
construct it by intercepting conventional operating environment drawing
calls. For example, in the case of display drivers, some screen readers are
designed to monitor what is drawn on the screen by capturing drawing calls at
different points in the drawing process. While knowing about the user agent's
formatting may provide some useful information to assistive technologies, this
document encourages assistive technologies to access content directly
through published APIs (such as the DOM) rather than via a particular
rendering.

Common operating environment two-dimensional graphics engines and drawing
libraries provide functions for drawing
text to the screen. Examples of this are the Graphics Device Interface
(GDI) for Windows, Graphics Programming Interface (GPI) for OS/2, and the X
library (XLIB) for the X Windows System or Motif.

When writing textual information in a GUI operating environment, use
conventional operating environment APIs for
drawing text.

Use operating
environment resources for rendering audio information. When doing so, do
not take exclusive control of system audio resources. This could prevent an
assistive technology such as a screen reader from speaking if they use software
text-to-synthesized speech conversion. Also, in operating environments like
Windows, a set of audio sound resources is provided to support conventional
sounds such as auditory alerts. These preset sounds are
used to trigger SoundSentry graphical
cues when a problem occurs; this benefits users with hearing disabilities.
These cues may be manifested by flashing the desktop, active caption bar, or
current viewport. Thus, it is important to use the conventional mechanisms to
generate audio feedback so that operating environments or special assistive
technologies can add additional functionality for users with hearing
disabilities.

API designers should promote backwards compatibility so that assistive
technologies do not suddenly break when a new version of an API is published
and implemented by user agents.

References

Some public accessibility APIs include:

Microsoft Active Accessibility ([MSAA]). This is the conventional
accessibility API for the Windows 95/98/NT operating systems. See, for example,
information about the
IAccessible interface.

Sun Microsystems Java Accessibility API ([JAVAAPI]) in the Java JDK. This
is the conventional accessibility API for the Java environment. If the user
agent supports Java applets and provides a Java Virtual Machine to run them,
the user agent should support the proper loading and operation of a Java native
assistive technology. This assistive technology can provide access to the
applet as defined by Java accessibility standards.

The user agent is not required to provide notification of changes in the
rendering of content (e.g., due to an animation effect or an effect
caused by a style sheet) unless the document object is modified to make those
changes.

Note: For instance, provide programmatic notification when
user interaction in one frame causes automatic changes to content in
another.

Who benefits

Users with a disability who rely on assistive technologies for output.

Example techniques

Write output to and take input from conventional operating environment
APIs rather than directly from hardware. This will enable the
input/output to be redirected from or to assistive technology devices –
for example, screen readers and braille displays often redirect output (or copy
it) to a serial port, while many devices provide character input, or mimic
mouse functionality. The use of generic APIs makes this feasible in a way that
allows for interoperability of the assistive technology with a range of
applications.

Provide notification when an action in one frame causes the content of
another frame to change. Allow the user to navigate with little effort to the
frame(s) that changed.

Related techniques

Doing more

Enhance the functionality of conventional operating environment controls where
accessibility is lacking, for example by responding to conventional keyboard
input mechanisms: provide keyboard navigation to menus and dialog box controls
in the Apple Macintosh operating system. Another example is
the Java Foundation Classes, where internal frames do not provide a keyboard
mechanism to give them focus. In this case, you will need to add keyboard
activation through the conventional keyboard activation facility for Abstract
Window Toolkit components.

Note: Support for character encodings is important so that
text is not "broken" when communicated to assistive technologies. For example,
the DOM Level 2 Core Specification [DOM2CORE], section 1.1.5
requires that the DOMString type be encoded using UTF-16.

Who benefits

Users with disabilities who rely on assistive technologies for input and
output.

Example techniques

The list of character encodings that any conforming implementation of Java
version 1.3 [JAVA13] must support is: US-ASCII,
ISO-8859-1, UTF-8, UTF-16BE, UTF-16LE, and UTF-16.

MSAA [MSAA] relies on the
COM interface, which in turn relies on Unicode
[UNICODE], which means that for MSAA a user agent must support
UTF-16. From Chapter 3 of the COM documentation, on interfaces, entitled "Interface Binary Standard":

Finally, and quite significantly, all strings passed through all COM
interfaces (and, at least on Microsoft platforms, all COM APIs) are Unicode
strings. There simply is no other reasonable way to get interoperable objects
in the face of (i) location transparency, and (ii) a high-efficiency object
architecture that does not in all cases intervene system-provided code between
client and server. Further, this burden is in practice not large.

Note: For example, the programmatic exchange of information
required by other checkpoints in this document should be efficient enough to
prevent information loss, a risk when changes to content or user interface
occur more quickly than the communication of those changes. Timely exchange is
also important for the proper synchronization of alternative renderings. The
techniques for this checkpoint explain how developers can reduce communication
delays. This will help ensure that assistive technologies have timely access to
the document object model and other
information that is important for providing access.

Notes and rationale

This document requires that a conforming user agent provide access to
content and user interface information through APIs because assistive
technologies must be able to respond incrementally to changes in the user's
session. Simply providing a "text dump" of content to an assistive technology,
for example, would make it extremely difficult for assistive technologies to
provide timely access (as the assistive technology would have to recalculate
much more information rather than having information about incremental
changes).

Who benefits

Users with disabilities who rely on assistive technologies for input and
output.

Notes and rationale

Much of the rationale behind the content requirements of User Agent
Accessibility Guidelines 1.0 also makes sense for the user agent user interface
(e.g., allow the user to turn off any blinking or moving user interface
components).

Microsoft Windows offers an accessibility function called "High Contrast".
Standard window classes and controls automatically support this setting.
However, applications created with custom classes or controls must use the
"GetSysColor" API to ensure compatibility with High Contrast.

Apple Macintosh offers an accessibility function called "Sticky Keys".
Sticky Keys operates on keys that the operating environment recognizes as
modifier keys, so a custom control should not attempt to define a new
modifier key.

Maintain consistency in the user interface between versions of the
software. Consistency is less important than improved general accessibility and
usability when implementing new features. However, developers should make
changes conservatively to the layout of user interface controls, the
behavior of existing functionalities, and the default keyboard
configuration.

Note: For example, in some operating environments, when a
functionality may be triggered through a menu and through the keyboard, the
developer may design the menu entry so that the character of the activating key
is also shown. See
checkpoint 11.5 for information about the user agent's default input
configuration.

Who benefits

Many users with many types of disabilities.

Example techniques

Use operating
environment conventions to indicate the current configuration (e.g., in
menus, indicate what key strokes will activate the functionality, underline
single keys that will work in conjunction with a key such as Alt,
etc.). These are conventions used by the Java Foundation Classes
[JAVA-TUT] and the Microsoft Foundation Classes for Windows.

Ensure that information about changes to the input configuration is
available in a device-independent manner (e.g., through visual and audio cues,
and through text).

If the current configuration changes locally (e.g., a search prompt opens,
changing the keyboard bindings for the duration of the prompt), alert the
user.

Named configurations are easier to remember. This is especially important
for people with certain types of cognitive disabilities. For example, if the
invocation of a search prompt changes the input configuration, the user may
remember more easily which key strokes are meaningful in search mode if alerted
that there is a "Search Mode". Context-sensitive help (if available) should
reflect the change in mode, and a list of keybindings for the current mode
should be readily available to the user.
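The idea of a named mode with its own bindings can be sketched as a lookup table (the mode names and commands below are hypothetical, not part of any checkpoint):

```javascript
// Sketch: mode-aware keybinding lookup. When a local mode such as a
// hypothetical "search" mode is active, its bindings shadow the default
// configuration, and the mode's name is available for alerts and for
// context-sensitive help.
const bindings = {
  default: { Tab: 'next-element', Enter: 'activate' },
  search:  { Enter: 'find-next', Escape: 'leave-search' },
};

function lookupCommand(mode, key) {
  // Keys the mode does not redefine fall back to the default configuration.
  return (bindings[mode] && bindings[mode][key]) || bindings.default[key] || null;
}
```

A list of the keybindings for the current mode is then simply a rendering of the active table merged with the defaults.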

Who benefits

Many users with many types of disabilities.

Example techniques

Make obvious to users features that are known to benefit accessibility.
Make them easy to find in the user interface and in documentation.

Some specifications include optional features (not required for conformance
to the specification). If an optional feature is likely to cause accessibility
problems, developers should either ensure that the user can turn off the
feature or not implement the feature at all.

Refer also to the accessibility features of HTML 4 [HTML4], in addition to
those described in techniques for checkpoint 2.1.

When a requirement of another specification contradicts a requirement of
the current document, the user agent may disregard the requirement of the other
specification and still satisfy this checkpoint.

Note: For instance, for markup, the user agent may conform to HTML 4
[HTML4], XHTML 1.0 [XHTML10], and/or
XML 1.0 [XML]. For style sheets, the user
agent may conform to CSS ([CSS1],
[CSS2]). For mathematics, the user agent may conform to MathML 2.0
[MATHML20]. For synchronized
multimedia, the user agent may conform to SMIL 1.0
[SMIL].

Notes and rationale

The right to disregard only applies when the requirement of another
specification contradicts the requirements of the current document; no
exemption is granted if the other specification is consistent with or silent
about a requirement made by the current document.

Conformance to W3C Recommendations is not a priority 1 requirement because
user agents can (and should!) provide access for non-W3C specifications as
well.

The requirement of this checkpoint is to conform to at least one
W3C Recommendation that is available and appropriate for a particular task, or
at least one non-W3C specification that allows the creation of content that
conforms to WCAG 1.0 [WCAG10]. For example, user agents
would satisfy this checkpoint by conforming to the Portable Network Graphics
1.0 specification [PNG] for raster images. In addition,
user agents may implement other image formats such as JPEG, GIF, etc. Each
specification defines what conformance means for that specification.

Who benefits

Many users with many types of disabilities.

Example techniques

If more than one version or level of a specification is appropriate for a
particular task, user agents are encouraged to conform to the latest version.
However, developers should consider implementing the version that best supports
accessibility, even if this is not the latest version.

For reasons of backward compatibility, user agents should generally
continue to implement deprecated features of specifications. Information about
deprecated language features is generally part of the language's
specification.

When a viewport includes no enabled elements (either because the format
does not provide for this, or a given piece of content has no enabled
elements), the content focus requirements of the following checkpoints do not
apply: 1.2, 5.1, 5.4, 6.6, 7.1, 9.3, 9.4, 9.5, 9.6, 9.7, 10.2, and 11.5.

Note: For example, when two frames of a frameset contain
enabled elements, allow the user to make the content focus of either frame the
current focus. Note that viewports "owned" by
plug-ins that are part of a conformance claim are also covered by this
checkpoint. See
checkpoint 7.1 for information about implementing content focus according
to operating
environment conventions.

Who benefits

Users who rely on the content focus for interaction (e.g.,
for interaction with enabled elements through the keyboard, or for assistive
technologies that consider the current focus a point of regard). This includes some
users with blindness, low vision, or a physical disability.

Who benefits

Users who rely on the user interface focus for
interaction (e.g., for interaction with user interface controls
through the keyboard, or for assistive technologies that consider the current
focus a point of regard). This
includes some users with blindness, low vision, or a physical disability.

Note: In addition to forward sequential navigation, the
user agent should also allow reverse sequential navigation. See checkpoint 9.9 for information
about structured navigation. See checkpoints 5.1 and 6.6 for more information
about focus changes.

Who benefits

Users who rely on the focus for interaction (e.g., for interaction with
enabled elements through the keyboard, or for assistive technologies that
consider the focus a point of regard). This includes some users with blindness,
low vision, or a physical disability.

Allow the user to move the content focus to each enabled element by
repeatedly pressing a single key. Many user agents enable sequential
navigation through repeated keystrokes – for example, using the
Tab key for forward navigation and Shift-Tab for reverse
navigation. Because the Tab key is typically on one side of the
keyboard while arrow keys are located on the other, users should be allowed to
configure the user agent so that sequential navigation is possible with keys
that are physically closer to the arrow keys. See also checkpoint 11.3 for information
about overriding bindings in the default input configuration.

Maintain a logical element navigation order. For instance, users may use
the keyboard to navigate among elements or element groups using the arrow keys
within a group of elements. One example of a group of elements is a set of
radio buttons. Users should be able to navigate to the group of buttons, then
be able to select each button in the group. Similarly, allow users to navigate
from table to table, but also among the cells within a given table (up, down,
left, right, etc.).
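Navigation within such a group, wrapping from the last member back to the first, reduces to index arithmetic; a minimal sketch:

```javascript
// Sketch: move the focus index within a group of `size` members (e.g., a
// set of radio buttons), wrapping at both ends so a single repeated key
// cycles through the whole group in either direction.
function nextInGroup(index, size, delta) {
  if (size === 0) return -1; // empty group: nothing to focus
  return ((index + delta) % size + size) % size; // double modulo handles negative delta
}
```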

Respect author-specified information about navigation order (e.g., the
"tabindex" attribute in HTML 4
[HTML4], section 17.11.1). Allow users to override the
author-specified navigation order (e.g., by offering an alphabetized view of
links or other orderings).
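A simplified sketch of the resulting order (positive "tabindex" values first, in ascending order, then the remaining elements in document order; the object shape here is invented for illustration):

```javascript
// Simplified sketch of HTML tabbing order: elements with a positive
// tabindex come first in ascending tabindex (ties broken by document
// order); elements with no tabindex (or tabindex 0) follow in plain
// document order.
function tabOrder(elements) {
  const positive = elements.filter(e => e.tabindex > 0)
    .sort((a, b) => a.tabindex - b.tabindex || a.docOrder - b.docOrder);
  const rest = elements.filter(e => !(e.tabindex > 0))
    .sort((a, b) => a.docOrder - b.docOrder);
  return positive.concat(rest);
}
```

An alphabetized view of links, as suggested above, is then just a different comparator applied to the same element list.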

The default sequential navigation order should respect the conventions of
the natural language of
the document. Thus, for most left-to-right languages, the usual navigation
order is top-to-bottom and left-to-right. For right-to-left languages, the
order would be top-to-bottom and right-to-left.

In Java, a component is part of the sequential navigation order
when it has been added to a panel and its isFocusTraversable method returns
true. A component can be removed from the navigation order by extending the
component, overriding this method, and returning false.

This image shows how JAWS for Windows [JFW]
allows users to navigate to links in a document and activate them
independently. Users may also configure the user agent to navigate visited
links, unvisited links, or both. Users may also change the sequential
navigation order, sorting links alphabetically or leaving them in the logical
tabbing order. The focus in the links view follows the focus in the main
view.

Doing more

Provide other sequential navigation
mechanisms for particular element types or semantic units, e.g., "Find the next
table" or "Find the previous form." For more information about sequential
navigation of form controls and
form submission, see techniques for checkpoint 5.5.

For graphical user interfaces (or for any user agent offering a
two-dimensional display), navigation based not on document order but on layout
may also benefit the user. For example, allow the user to navigate up, down,
left, and right to the nearest rendered enabled link. This type of navigation
may be particularly useful when it is clear from the layout where the next
navigation step will take the user (e.g., grid layouts where it is clear what
the next link to the left or below will be).

Excessive use of sequential navigation can
reduce the usability of software for both disabled and non-disabled users. Some
useful types of direct navigation include: navigation based on position (e.g.,
all links are numbered by the user agent), navigation based on element content
(e.g., the first letter of
text content), direct navigation to a table cell by its row/column
position, and searching (e.g., based on form element text, associated labels,
or form element names).

The viewport history associates values for these three state variables (point of regard, content focus, and selection) with a particular document
object. If the user returns to a state in the history and the user agent
retrieves new content, the user agent is not required to restore the saved
values of the three state variables.

Notes and rationale

This checkpoint only refers to a per-viewport history mechanism, not a
history mechanism that is common to all viewports (e.g., of visited Web
resources).

Who benefits

Users who may have difficulty re-orienting themselves during a browsing
session. This includes some users with a memory or cognitive disability, some
users with a physical disability, and some users with serial access to content or who navigate
sequentially, for whom repositioning will be time-consuming.

Example techniques

For each state in the history, keep track of the last time the content was
modified. When returning to that state in the history, restore the three state
variables of the content being rendered as long as the content has not been
retrieved more recently than that date.
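That bookkeeping can be sketched as follows (the field names are invented for illustration):

```javascript
// Sketch: restore the three state variables (point of regard, content
// focus, selection) recorded for a history entry only when the content
// has not been retrieved again since the entry was recorded.
function stateToRestore(entry, contentRetrievedAt) {
  if (contentRetrievedAt > entry.recordedAt) {
    return null; // newly retrieved content: the saved positions may be stale
  }
  return {
    pointOfRegard: entry.pointOfRegard,
    contentFocus: entry.contentFocus,
    selection: entry.selection,
  };
}
```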

If the user agent allows the user to browse multimedia or audio-only
presentations, when the user leaves one presentation for another, pause the
presentation. When the user returns to a previous presentation, allow the user
to resume the presentation where it was paused (i.e., return the point of regard to the same place in
space and time). Note: This may be done for a presentation that is available
"completely" but not for a "live" stream or any part of a presentation that
continues to run in the background.

Allow the user to configure whether leaving a viewport pauses a multimedia
presentation.

If the user activates a broken link, leave the viewport where it is and
alert the user (e.g., in the status bar and with a graphical or audio alert). Moving the
viewport suggests that a link is not broken, which may disorient the user.

In JavaScript, the following may be used to change the Web resource in the
viewport, and navigate the history:

myWindow.home();

myWindow.forward();

myWindow.back();

myWindow.navigate("http://example.com/");

myWindow.history.back();

myWindow.history.forward();

myWindow.history.go( -2 );

location.href = "http://example.com/";

location.reload();

location.replace("http://example.com/");

Doing more

Restore the point of regard, content focus, and selection after the user
reloads the same content.

References

Refer to the HTTP/1.1 specification for information about history
mechanisms ([RFC2616], section 13.13).

Note: For instance, in this configuration for an HTML
document, do not activate any handlers for the 'onfocus',
'onblur', or 'onchange' attributes. In this
configuration, user agents should still apply any stylistic changes (e.g., highlighting) that may occur when there is
a change in content focus.

Notes and rationale

Event handlers associated with setting (or removing focus) may cause
disorienting changes to content. The purpose of this checkpoint is to reduce
unexpected changes while navigating with the focus.

Who benefits

Users with blindness or some users with a physical disability, and anyone
without a pointing device.

Example techniques

Allow the following configurations:

On invocation of the input binding, move focus to the associated enabled
element, but do not activate it.

On invocation of the input binding, move focus to the associated enabled
element and prompt the user with information that will allow the user to decide
whether to activate the element (e.g., link title or text). Allow the user to
suppress future prompts for this particular input binding.

On invocation of the input binding, move focus to the associated enabled
element and activate it.

Note: For example, allow the user to query the element with
content focus for the list of input device event types, or add them directly to
the sequential
navigation order described in
checkpoint 9.3. See checkpoint 1.2 for information about activation of event
handlers associated with the element with focus.

Who benefits

Users with blindness or some users with a physical disability, and anyone
without a pointing device.

Example techniques

For HTML content, the left mouse button is generally the only mouse button
that is used to activate event handlers associated with mouse clicks.

Authors may specify redundant event handlers (e.g., the same handler for
both onmouseover and onfocus events). When the user
agent recognizes the same handler for two event types, present only one of them
to avoid confusion.

When using the "Document Object Model (DOM) Level 3 Events Specification"
[DOM3EVENTS], find the list of
event types for which there are event handlers explicitly associated with an
element as described in
section 1.3.1, using the methods
EventTarget.isRegisteredHere and EventTarget.canTrigger. The first
method provides information about the target node only, the second about
whether there are handlers on any node in the path between the target node and
the root node.
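Where such DOM methods are unavailable in a given implementation, a rough approximation for HTML content is to walk from the target node to the root collecting explicitly declared "on*" attributes. This sketch uses plain objects with an `attributes` map and a `parent` reference as stand-ins for real DOM nodes:

```javascript
// Sketch: collect the event types for which handlers are explicitly
// declared via HTML attributes ("onclick", "onfocus", ...) on a node and
// on every ancestor up to the root.
function declaredEventTypes(node) {
  const types = new Set();
  for (let n = node; n; n = n.parent) {
    for (const name of Object.keys(n.attributes || {})) {
      if (name.startsWith('on')) types.add(name.slice(2)); // "onclick" -> "click"
    }
  }
  return Array.from(types).sort();
}
```

Note that this only finds handlers declared in markup; listeners registered through addEventListener are not visible this way.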

As part of satisfying provision one of this
checkpoint, the user agent must not include disabled elements in the navigation
order.

Who benefits

Users who rely on the focus for interaction (e.g., for interaction with
enabled elements through the keyboard, or for assistive technologies that
consider the focus a point of regard). This includes some users with blindness,
low vision, or a physical disability.

Notes and rationale

This checkpoint involves searching through rendered content only.
Thus, the user agent should not search through unrendered conditional
content. It may be confusing to allow users to search for text content that
is not rendered (and thus that they have not viewed). Since checkpoint 2.3 requires
that the user have access to conditional content, the user can search through
that content once rendered.

Who benefits

Some users with serial access to content or who navigate
sequentially, some users with a cognitive disability (who may have
difficulty locating information among other information), and some users with a
physical disability (for whom navigation may be a significant effort).

Example techniques

Use the selection or focus to indicate found text. This will provide
assistive technologies with access to the text.

Allow users to search all views (e.g., including views of the text
source).

For extremely small viewports or extremely long matches, the entire matched
text content may not fit within the viewport. In this case, developers may move
the viewport to encompass the initial part of the matched content.

The search string input method should follow operating environment
conventions (e.g., for international character input).

When the point of regard depends on time (e.g., for audio viewports), the
user needs to be able to search through content that will be available through
that viewport. This is analogous to content rendered graphically that is
reachable by scrolling.

For multimedia presentations, allow users to search and examine
time-dependent media elements and links in a time-independent manner. For
example, present a static list of time-dependent links.

Allow users to search the element content of form elements (where
applicable) and any label text.

When searching a document, the user agent should not search text whose
properties prevent it from being visible (such as text with the CSS
'visibility: hidden' property), or equivalent text for elements with such
properties (such as "alt" text for an image with 'visibility: hidden').
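A sketch of a search that honors this rule, over a flattened list of plain objects standing in for rendered nodes (the `visible` flag is an assumption of the illustration):

```javascript
// Sketch: search only through text that is actually visible; nodes whose
// properties hide them (e.g., visibility: hidden) are skipped entirely,
// including their "alt" equivalents.
function findVisibleMatch(nodes, query) {
  for (const node of nodes) {
    if (!node.visible) continue;
    const haystack = node.text || node.alt || '';
    if (haystack.includes(query)) return node;
  }
  return null;
}
```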

Doing more

Allow reverse search in addition to forward search.

Allow the user to start a search from the beginning of the document rather
than from the current selection or focus.

Provide distinct alerts for when there are no matches and when there are no
more matches.

Allow the user to easily start a search from the beginning of the content
currently rendered in the viewport.

Provide the option of searching through conditional content that is
associated with rendered content, and render the found conditional content
(e.g., by showing its relation to the rendered content).

For frames, allow users to search for content in all frames, without having
to be in a particular frame.

If the number of matches is known, provide this information to orient the
user.

References

For information about when case is significant in a
script, refer to Section 4.1 of Unicode
[UNICODE].

Allow the user to navigate efficiently to and among important structural
elements in rendered
content.

As part of satisfying provision one of this checkpoint, allow forward and
backward sequential
navigation.

Note: This specification intentionally does not identify
which "important elements" must be navigable as this will vary by
specification. What constitutes "efficient navigation" may depend on a number
of factors as well, including the "shape" of content (e.g., sequential
navigation of long lists is not efficient) and desired granularity (e.g., among
tables, then among the cells of a given table).

Notes and rationale

User agents should construct the navigation view with the goal of breaking
content into sensible pieces according to the author's design. In most cases,
user agents should not break down content into individual elements for
navigation; element-by-element navigation of the document object does not meet
the goal of facilitating navigation to important pieces of content. (The
navigation view may also serve as an expanding/contracting outline view; see
the outline view requirement of checkpoint 10.4.) Instead, user agents are expected to
construct the navigation view based on markup.

Allow navigation based on commonly understood document models, even if they
do not adhere strictly to a document type definition (DTD) or schema. For instance, in
HTML, although headings (H1-H6) are not containers, they may be treated as such
for the purpose of navigation. Note that they should be properly nested.
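Treating headings as implicit containers can be sketched by folding a flat heading list into a tree; improperly nested headings (e.g., an H3 directly after an H1) still produce a usable outline:

```javascript
// Sketch: build a nested navigation tree from a flat list of headings,
// treating each heading of level n as the container for everything up to
// the next heading of level n or higher.
function buildOutline(headings) {
  const root = { level: 0, text: null, children: [] };
  const stack = [root];
  for (const h of headings) {
    // Pop until the node on top of the stack can contain this heading.
    while (stack[stack.length - 1].level >= h.level) stack.pop();
    const node = { level: h.level, text: h.text, children: [] };
    stack[stack.length - 1].children.push(node);
    stack.push(node);
  }
  return root.children;
}
```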

Use the DOM ([DOM2CORE]) as the basis of
structured navigation (e.g., a postorder traversal). However, for well-known
markup languages such as HTML, structured navigation should take advantage of
the structure of the source tree and what is rendered.

Allow the user to limit navigation to the cells of a table (notably left
and right within a row and up and down within a column). Navigation techniques
include keyboard navigation from cell to cell (e.g., using the arrow keys) and
page up/down scrolling. See the section on table navigation.

Alert the user when navigation has led to the beginning or end of a
structure (e.g., end of a list, end of a form, table row or column end, etc.).
See checkpoint 1.3 for
information about text messages to the user.

For those languages with known (e.g., by specification, schema, metadata,
etc.) conventions for identifying important components, user agents should
construct the navigation tree from those components, allowing users to navigate
up and down the document tree, and forward and backward among siblings. At the
same time, allow users to shrink and expand portions of the document tree. For
instance, if a subtree consists of a long series of links, this will pose
problems for users with serial access to content or who navigate
sequentially. At any level in the document tree (for forward and backward
navigation of siblings), limit the number of siblings to between five and ten.
Break longer lists down into structured pieces so that users can access content
efficiently, decide whether they want to explore it in detail, or skip it and
move on.
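Breaking a long run of siblings into bounded groups is straightforward; a minimal sketch:

```javascript
// Sketch: split a long flat list of sibling nodes into groups of at most
// `limit` entries (five to ten, per the guidance above), producing an
// intermediate level that keeps forward/backward navigation short.
function chunkSiblings(items, limit) {
  const groups = [];
  for (let i = 0; i < items.length; i += limit) {
    groups.push(items.slice(i, i + limit));
  }
  return groups;
}
```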

Tables and forms illustrate the utility of a recursive navigation
mechanism. The user should be able to navigate to tables, then change "scope"
and navigate within the cells of that table. Nested tables (a table within the
cell of another table) fit nicely within this scheme. The same ideas apply to
forms: users should be able to navigate to a form, then among the controls within that
form.

Navigation and orientation go together. The user agent should allow the
user to navigate to a location in content, explore the context, navigate again,
etc. In particular, user agents should allow users to:

Navigate to a piece of content that the author has identified as important
according to the markup language specification and conventional usage. In HTML,
for example, this includes headings, forms, tables, navigation mechanisms, and
lists.

Navigate past that piece of content (i.e., avoid the details of that
component).

Navigate into that piece of content (i.e., choose to view the details of
that component).

Change the navigation view as they go, expanding and contracting portions
of content that they wish to examine or ignore. This will speed up navigation
and facilitate orientation at the same time.

Provide context-sensitive navigation. For instance, when the user navigates
to a list or table, provide locally useful navigation mechanisms (e.g., within
a table, cell-by-cell navigation) using similar input commands.

Allow users to skip author-specified navigation mechanisms such as
navigation bars. For instance, navigation bars at the top of each page at a Web
site may force users with screen readers or some physical disabilities to wade
through many links before reaching the important information on the page. User
agents may facilitate browsing for these users by allowing them to skip recognized navigation bars (e.g., through a
configuration option). Some techniques for this include:

Providing a functionality to jump to the first non-link content.

If the number of elements of a particular type is known, provide this
information to orient the user.

In HTML, the MAP element may be used to mark up a navigation bar (even when
there is no associated image). Thus, users might ask that MAP elements not be
rendered in order to hide links inside the MAP element. User agents might allow
users to hide MAP elements selectively. For example, hide any MAP element with
a "title" attribute specified. Note: Starting in
HTML 4, the MAP element allows block content, not just AREA
elements.

Allow depth-first and breadth-first navigation through the document
object.

Doing more

Allow the user to navigate characters, words, sentences, paragraphs,
screenfuls, etc. according to conventions of the natural language. This benefits
users of synthesized speech-based user agents and has been implemented by
several screen readers, including Winvision
[WINVISION], Window-Eyes
[WINDOWEYES], and JAWS for Windows
[JFW].

Related techniques

See checkpoint 4.5
for information about navigating synchronized multimedia presentations.

References

The following is a summary of ideas provided by the National Information
Standards Organization with respect to Digital Talking Books
[TALKINGBOOKS]:

A talking book's "Navigation Control Center" (NCC) resembles a traditional
table of contents, but it is more than that. It contains links to all headings at all
levels in the book, links to all pages, and links to any items that the reader
has chosen not to have read. For example, the reader may have turned off the
automatic reading of footnotes. To allow the user to retrieve that information
efficiently, the reference to the footnote is placed in the NCC and the reader
can go to the reference, understand the context for the footnote, and then read
the footnote.

Once the reader is at a desired location and wishes to begin reading, the
navigation process changes. Of course, the reader may elect to read serially,
but often some navigation is required (e.g., frequently people navigate forward
or backward one word or character at a time). Moving one sentence or
paragraph at a time is also needed. This type of local navigation is different
from the global navigation used to get to the location of what you want to
read. It is frequently desirable to move from one block element to the next.
For example, moving from a paragraph to the next block element, which may be a
list, blockquote, or sidebar, is the normally expected mechanism for local
navigation.

This checkpoint refers only to cell/header relationships that the user
agent can
recognize.

Notes and rationale

A cell may be associated with more than one header.

Who benefits

Users for whom two-dimensional relationships may be difficult to process
(e.g., users with serial access to content or who navigate
sequentially, or some users with a cognitive disability). Renderings that
provide easy access to cell header information will also help some users with
low vision or a physical disability, for whom it may be time-consuming to
scroll in order to locate relevant headers.

Example techniques

When rendering the table cell and associated header information so they are
both visible in the same viewport, use a technique frequently employed by
spreadsheet applications: the user agent fixes the position of headers in the
viewport and allows the user to scroll through associated data cells. Through
horizontal and vertical alignment, the data cells and header cells are visually
associated.

The headers of a nested table may provide important context for the cells
of the same row(s) or column(s) containing the nested table.

The
THEAD, TBODY, and TFOOT elements of HTML 4 ([HTML4], section 11.2.3) allow
authors to specify portions of a large table that should remain available (e.g., when
scrolling). When a table is constructed with a TBODY element, the
'overflow' property of CSS 2 ([CSS2], section 11.1.1) may be used
to create a scrollable area.

tbody { height: 10em; overflow: auto }

In HTML, beyond the TR, TH, and TD
elements, the table attributes "summary", "abbr", "headers", "scope", and
"axis" also provide information about relationships among cells and headers.
For more information, see the section on table techniques.

Doing more

Make available (e.g., through a context
menu) information summarizing table structure, including any table head and
foot rows, and possible row grouping into multiple table bodies, column groups,
header cells and how they relate to data cells, the grouping and spanning of
rows and columns that apply to qualify any cell value, cell position
information, table dimensions, etc.

When providing serial access to a table, allow the
user to specify how cell header information should be rendered before cell data
information. Some possibilities are illustrated by the
CSS2 'speak-header' property ([CSS2], section 17.7.1).

For graphical user interfaces, as part
of satisfying provision one of this checkpoint, if a highlight mechanism
involves text size, font family, rendered text foreground and background
colors, or text decorations, offer at least the following range of values:

for text size, the range required by provision three of checkpoint 4.1.

for font family, the range required by provision three of checkpoint 4.2.

for text foreground and background colors and decorations, the range
offered by the conventional utility available in the operating environment for
users to choose rendered text colors or decorations (e.g., the standard font
and color dialog box resources supported by the operating system). If no such
utility is available, the range supported by the conventional APIs of the
operating environment for specifying text colors or drawing text.

Highlight enabled elements according to the
granularity specified in the format. For example, an HTML user agent rendering
a PNG image as part of a client-side image map is only required to highlight
the image as a whole, not each enabled region. An SVG user agent rendering an
SVG image with embedded graphical links is required to highlight each (enabled) link that may be rendered
independently according to the SVG specification.

Note: Examples of highlight mechanisms for selection and
content focus include foreground and background color variations, underlining,
distinctive synthesized speech prosody, border styling, etc. Because the
selection and focus change frequently, user agents should not highlight them
using mechanisms (e.g., font size variations) that cause content to reflow, as
this may disorient the user. Graphical highlight mechanisms that generally do
not rely on rendered text foreground and background color alone include
underlines or border styling. Per checkpoint 7.1, follow operating environment conventions
that benefit accessibility when implementing the selection and content focus.
For instance, if specified at the level of the operating environment, inherit
the user's preferences for selection styles.

Notes and rationale

In many graphical user interfaces, all links on a page are highlighted so
that users know at a glance where to interact.

Who benefits

Users with color deficiencies, low vision, or blindness, for whom color may
not be useful. Also, some devices may not render colors (e.g., speech
synthesizers, black and white screens). If highlighting is done through text
styles, some users with low vision may need to configure them.

Example techniques

Inherit selection and focus information from the user's settings for the
operating environment. Explain in the user agent documentation where to find
information in the operating environment documentation about changing these
settings.

For content highlighting:

Use CSS2 [CSS2] to add style to these
different classes of elements. In particular, consider the
'text-decoration' property ([CSS2], section 16.3.1), aural
cascading style sheets, font properties, and color properties.
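For instance, a style sheet along the following lines (selectors and color values are illustrative, not normative) distinguishes link states by more than color alone, and uses CSS2 aural properties to give links distinctive synthesized speech prosody:

```css
/* Illustrative sketch: distinguish link states without relying on
   color alone (the underline survives monochrome rendering). */
a:link    { color: #0000cc; text-decoration: underline; }
a:visited { color: #551a8b; text-decoration: underline; }
a:active  { color: #cc0000; text-decoration: underline; background-color: #ffff99; }

/* CSS2 aural style sheet: distinctive prosody for links. */
@media aural {
  a { pitch: high; richness: 90; }
}
```

Note that the highlight styles above avoid properties (such as font size or weight) that would cause content to reflow when the state changes.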

Make available to the user an "outline"
view of rendered content,
composed of labels for important structural elements (e.g., heading text, table
titles, form titles, and other labels that are part of the content).

What constitutes a label is defined by each markup language specification.
For example, in HTML, a heading (H1-H6) is a label
for the section that follows it, a CAPTION is a label for a table,
the "title" attribute is a label for its element, etc.

The user agent is not required to generate a label for an important element
when no label is present in content. The user agent may generate a label when
one is not present.

Note: This outline view will provide the user with a
simplified view of content (e.g., a table of contents). For information about
what constitutes the set of important structural elements, see the Note
following checkpoint 9.9. By
making the outline view navigable, it is possible to satisfy this checkpoint
and checkpoint 9.9 together:
allow users to navigate among the important elements of the outline view, and
to navigate from a position in the outline view to the corresponding position
in a full view of content. See checkpoint 9.10 for additional configuration options.

Who benefits

Users with a memory or cognitive disability, as well as users with serial access to content or who navigate
sequentially. The outline view is a type of summary view and should reduce
orientation time. A navigable outline view will add further benefits for these
users.

Example techniques

For instance, in HTML, labels include the following:

The CAPTION element is a label for a TABLE.

The "title" attribute is a label for many elements.

The H1-H6 elements are labels for the sections that
follow them.

The LABEL element is a label for a form control.

The LEGEND element is a label for a set of form controls.

The TH element is a label for a row or column of table
cells.

The TITLE element is a label for the document.
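A brief HTML fragment (element contents are illustrative) showing several of these labeling mechanisms together:

```html
<!-- CAPTION labels the table; TH labels a column of cells -->
<table summary="Example data table">
  <caption>Monthly totals</caption>
  <tr><th>Month</th><th>Total</th></tr>
  <tr><td>January</td><td>120</td></tr>
</table>

<!-- LEGEND labels the group of controls; LABEL labels one control -->
<form action="/submit" method="post">
  <fieldset>
    <legend>Contact details</legend>
    <label for="email">Email address</label>
    <input type="text" id="email" name="email">
  </fieldset>
</form>
```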

Allow the user to expand or shrink portions of the outline view (configure
detail level) for faster access to important parts of content.

Provide a structured view of form controls (e.g., those grouped
by LEGEND or OPTGROUP in HTML) along with their
labels.

This image shows the table of contents view provided by
Amaya
[AMAYA]. This view is coordinated with the main view so that users
may navigate in one viewport and the focus follows in the other. An entry in
the table of contents with a target icon means that the heading in the document
has an associated anchor.

The user agent is not required to compute or make available information
that requires retrieval of linked Web resources.

Who benefits

Users for whom following a link may lead to loss of context upon return,
including some users with blindness and low vision, some users with a cognitive
disability, and some users with a physical disability.

Example techniques

Some markup languages allow authors to provide hints about the nature of
linked content (e.g., in HTML 4 [HTML4], the "hreflang" and "type"
attributes on the A element). Specifications should indicate when this type of
information is a hint from the author and when these hints may be overridden by
another mechanism (e.g., by HTTP headers in the case of HTML). User agent
developers should make the author's hints available to the user (prior to
retrieving a resource), but should provide definitive information once
available.

Links may be simple (e.g., HTML links) or more complex, such as those
defined by the XML Linking Language (XLink)
[XLINK].

The scope of "recently followed link" depends on the user agent. The user
agent may allow the user to configure this parameter, and should allow the user
to reset all links as "not followed recently".

User agents should cache information determined as the result of retrieving
a Web resource and should make it available to the user. Refer to HTTP/1.1
caching mechanisms described in RFC 2616
[RFC2616], section 13.

For a link that has content focus, allow the user to query
the link for information (e.g., by activating a menu or key stroke).

Do not mark all local links (to anchors in the same page) as visited when
the page has been visited.

Related techniques

Doing more

Provide information about any input bindings associated with a link; see checkpoint 11.2 for
information about author-specified input bindings.

Allow configuration to prompt the user to confirm (or cancel) any
payment that results from activation of a fee link. For the purpose of
this document, the term fee link refers to a link that, when activated, debits
the user's electronic "wallet" (generally, a "micropayment"). The link's role
as a fee link is identified through markup (in a manner that the user agent can
recognize). This definition of fee link
excludes payment mechanisms (e.g., some form-based credit card transactions)
that cannot be recognized by the user agent as causing payments. Note: Previous
versions of UAAG 1.0 included requirements related to fee links.

Additional fee link techniques:

While configuration to prompt before payment is preferred, it is sufficient
(to meet the goal of informed consent) to only ever allow activation of fee
links on explicit user
request.

Allow the user to configure the user agent to prompt for payments above a
certain amount (including any payment). Warn the user that even in this
configuration, the user agent may not be able to recognize some payment
mechanisms.

References

User agents may use HTTP HEAD rather than GET for information about size,
language, etc. Refer to RFC 2616 [RFC2616], section 9.3.

For information about content size in HTTP/1.1, refer to RFC 2616
[RFC2616], section 14.13. User agents are not expected to compute
content size recursively (i.e., by adding the sizes of resources referenced by
URIs within another resource).

For graphical viewports, as part of
satisfying provision one of this checkpoint, provide at least one highlight
mechanism that does not rely on rendered text foreground and background
colors alone (e.g., use a thick outline).
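As one sketch of such a mechanism (property values are illustrative), a graphical user agent that supports CSS-like styling could mark the content focus with a thick outline; unlike changes to font size or weight, an outline is drawn outside the element's box and does not cause content to reflow:

```css
/* Sketch only: a thick outline marks the element with content
   focus without relying on foreground/background color alone. */
:focus {
  outline: 3px solid black;
}
```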

Who benefits

Users with color deficiencies or blindness, for whom color will not be
useful. Also, some devices may not render colors (e.g., speech synthesizers,
black and white screens).

Example techniques

Offer a configuration whereby the window containing the viewport with the
current focus is brought to the foreground or maximized automatically. For
example, maximize the parent window of the browser when launched, and maximize
each child window automatically when it receives
focus. Maximizing does not necessarily mean occupying the whole screen or
parent window; it means expanding the viewport in a manner that reduces the
amount of horizontal and vertical scrolling required of the user.

If the viewport with the current focus is a frame, or the user does not want
windows to pop to the foreground, use border colors, reverse video, or other
graphical cues to indicate the viewport with the current focus.

If the default highlight mechanism is inherited from the operating
environment, document how to change it, or explain where to find this
information in the documentation for the operating environment.

For synthesized speech or braille output, use the frame or window title to
identify the viewport with the current focus.

The user agent may calculate the relative position according to content
focus position, selection position, or viewport position, depending on how the
user has been browsing.

The user agent may indicate the proportion of content viewed in a number of
ways, including as a percentage, as a relative size in bytes, etc. See checkpoint 1.3 for more information
about text versions of messages to the user, including messages about position
information.

For two-dimensional spatial
renderings, relative position includes both vertical and horizontal
positions.

This checkpoint does not require the user agent to present information
about retrieval progress. However, for streaming content, viewport
position may be closely tied to retrieval progress.

Notes and rationale

This checkpoint does not specify how to calculate the proportion in all
cases, and implementations may vary. For instance, suppose a user agent is to
render fifty audio clips one after the other. It may be costly to calculate the
proportion based on the total time required by all fifty clips (as this may
require the user agent to fetch all fifty in advance). Instead, the user agent
may represent the proportion as something like "2:43 remaining in the tenth
audio clip (of fifty)."

Who benefits

Users with serial access to content
or who navigate
sequentially and some users with a cognitive disability. This type of
context information generally benefits all users.

Example techniques

The proportion should be indicated using a relative value where applicable
(e.g., 25%), otherwise as an absolute offset (e.g., 3k) from some recognized
landmark.

Indicate the size of the document, so that users may decide whether to
download for offline viewing. For example, the playing time of an audio file
could be stated in terms of hours, minutes, and seconds. The size of a
primarily text-based Web page might be stated in both kilobytes and screens,
where a screen of information is calculated based on the current dimensions of
the viewport.

Indicate the number of screens of information, based on the current
dimensions of the viewport (e.g., "screen 4 of 10").

Use a variable pitch audio signal to indicate the viewport's different
positions.

Provide markers for specific percentages through the document.

Provide markers for positions relative to some reference point (a
user-selected point, the bottom, the H1, etc.).

Put a marker on the scrollbar, or a highlight at the bottom of the page
while scrolling (so the user can see what was at the bottom before scrolling
started).

For images that render gradually (coarsely to finely), it is not necessary
to show percentages for each rendering pass.

To satisfy this checkpoint, the user agent may make available binding
information in a centralized fashion (e.g., a list of bindings) or a
distributed fashion (e.g., by listing keyboard shortcuts in user interface
menus). See related documentation checkpoints 12.2, 12.3, and 12.5.

Note: For example, for HTML documents, provide a view of
keyboard bindings specified by the author through the "accesskey"
attribute. The intent of this checkpoint is to centralize information about
author-specified bindings so that the user does not have to read an entire
document to look for available bindings.
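For example, given markup such as the following (attribute values are illustrative), the user agent could gather every "accesskey" attribute in the document into one centralized list of author-specified bindings:

```html
<!-- Author-specified keyboard bindings via "accesskey" (HTML 4) -->
<a href="search.html" accesskey="s">Search this site</a>
<form action="/comments" method="post">
  <label for="fb" accesskey="c">Comments</label>
  <textarea id="fb" name="fb" rows="4" cols="40"></textarea>
</form>
```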

Who benefits

Users with blindness, some users with a physical disability, and some users
with a memory or cognitive disability.

Example techniques

If the user agent offers a special view that lists author-specified
bindings, allow the user to navigate easily back and forth between the viewport
with the current focus and the list of bindings.

Related techniques

Doing more

In addition to providing a centralized view of bindings, allow users to
find out about bindings in content. For example, highlight enabled elements
that have associated event handlers (e.g., by indicating bindings near the
element).

Example techniques

Related techniques

Doing more

Allow users to choose from among prepackaged configurations, to override
portions of the chosen configuration, and to save it as a profile. Not only will the user save time
configuring the user agent, but this will reduce questions to technical support
personnel.

Allow users to easily restore the default input configuration.

Allow users to create macros and bind them to key strokes or other input
methods.

Test the default keyboard configuration for usability. Ask users with
different disabilities and combinations of disabilities to test
configurations.

Allow the user to override any binding in the
user agent default keyboard configuration with a binding to either a key plus
modifier keys or to a single key.

For
each functionality in the set required by checkpoint 11.5, allow the user to configure a single-key binding. A
single-key binding is one where a single key press performs the task, with zero
modifier keys.

Provision two of this checkpoint does not require single physical key
bindings for character input, only for the activation of user agent
functionalities.

If the number of physical keys on the keyboard is less than the number of
functionalities required by checkpoint 11.5, then provision two of this checkpoint does
not require the user agent to allow single-key bindings for all of the
functionalities. The user agent should give preference to those functionalities
listed in provision one of
checkpoint 11.5.

This checkpoint is distinct from checkpoint 11.3 because it is specific to the keyboard, in
order to emphasize the importance of easy keyboard access.

Note: Because single-key access is so important to some
users with physical disabilities, user agents should ensure that: (1) most keys
of the physical keyboard may be configured for single-key bindings, and (2)
most functionalities of the user agent may be configured for single-key
bindings. For information about access to user agent functionality through a
keyboard API, see checkpoint
6.7.

Notes and rationale

When using a physical keyboard, some users require single-key access,
others require that keys activated in combination be physically close together,
while others require that they be spaced physically far apart.

In some modes of interaction (e.g., when the user is entering text), the
number of available single keys will be significantly reduced.

A "single-key mode" allows user agents to "save" keys for other bindings by
default and still satisfy this checkpoint. However, even when a single-key mode
is offered, user agents should include as many required single-key bindings as
possible in the default keyboard configuration. The user should be able to
enter a single-key mode by pressing a single key.

Who benefits

Users with a physical disability (for whom single-key access is
particularly important), and some users with a memory or cognitive disability
(who may require simple interaction).

Example techniques

Offer a single-key mode where, once the user has entered into that mode
(e.g., by pressing a single key), most of the keys of the keyboard are
configurable for single-key operation of the user agent. Allow the user to exit
that mode by pressing a single key as well. For example, Opera
[OPERA] includes a mode in which users can access important user
agent functionalities with single strokes from the numeric keypad.

Consider distance between keys and key alignment (e.g., "9/i/k", which
align almost vertically on many keyboards) in the default configuration. For
instance, if Enter is used to activate links, put other link
navigation commands near it (e.g., page up/down, arrow keys, etc., on many
keyboards). In configurations for users with reduced mobility, pair related
functionalities on the keyboard (e.g., left and right arrows for forward and
back navigation).

Mouse Keys (available in some operating
environments) allow users to simulate the mouse through the keyboard. They
provide a usable command structure for individuals who require keyboard-only
and single-key access, without interfering with the user interface for individuals
who do not; see checkpoint
1.1 for more information about keyboard access requirements.

Doing more

Allow users to accomplish tasks through repeated key strokes (e.g., sequential
navigation) since this can mean less physical repositioning for all users.
However, repeated key strokes may not be efficient for some tasks. For
instance, do not require the user to position the pointing device by pressing
the "down arrow" key repeatedly.

So that users do not mistakenly activate certain functionalities, make
certain combinations "more difficult" to invoke (e.g., users are not likely to
press Control-Alt-Delete accidentally).

The user agent may satisfy the functionality of entering a URI for a new
resource in a number of ways, including by prompting the user or by moving the
user interface focus to a control for entering
URIs.

Note: This checkpoint does not make any requirements about
the ease of use of default input configurations, though clearly the default
configuration should include single-key bindings and allow easy operation. Ease
of use is addressed by the configuration requirements of checkpoint 11.3.

Who benefits

Users with blindness, some users with a physical disability, and some users
with a memory or cognitive disability.

Example techniques

Input configurations should allow quick and direct navigation that does not
rely on graphical output. Do not
require the user to navigate through a graphical user interface as the only way
to activate a functionality.

Related techniques

See the techniques of checkpoint 7.4 for information about indicating input
configurations.

Doing more

Provide different input configuration
profiles (e.g., one keyboard profile with key combinations close together
and another with key combinations far apart).

Offer a mode that makes the input configuration compatible with other
versions of the software (or with other software).

Allow the user to configure how much the viewport should move when
scrolling the viewport backward or forward through content (e.g., for a
graphical viewport, "page down" causes the viewport to move half the height of
the viewport, or the full height, or twice the height, etc.).

Example techniques

Allow users to choose a different profile, to switch rapidly between
profiles, and to return to the default input configuration.

If the user can edit the profile by hand, the user agent documentation
should explain the profile format.

Doing more

If the user agent offers a way to restore the user agent default
configuration (e.g., by pushing a button), prompt the user to save the current
configuration before restoring the default configuration. This scenario
illustrates the value of named, persistent, reloadable configurations.

Who benefits

Users with serial access to content
or who navigate
sequentially, and some users with a memory or cognitive disability (who may
have difficulty remembering where and how to access user agent
functionalities).

Example techniques

Allow the user to show and hide user interface controls. This benefits
users with cognitive disabilities, as well as users with serial access to content or who navigate
sequentially through user interface controls.

Allow the user to choose icons and/or text.

Allow the user to change the grouping of icons and the order of menu
entries (e.g., for faster access to frequently used user interface
controls).

Allow multiple icon sizes (big, small, other sizes). Ensure that these
values are applied consistently across the user interface.

Allow the user to change the position of tool bars, icons, etc. Do not rely
solely on drag-and-drop for reordering the tool bar. Allow the user to
configure the user agent user
interface in a device-independent manner (e.g., through a text-based profile).

Provide a text equivalent for
audio user agent tutorials. Tutorials that use synthesized speech to guide a
user through the operation of the user agent should also be available at the
same time as graphical
representations.

Use clear and consistent navigation and search mechanisms;

Use the NOFRAMES element when the support/documentation is
presented in a FRAMESET;

Describe the user interface with device-independent terms. For example, use
"select" instead of "click on".

Provide documentation in small chunks (for rapid downloads) and also as a
single source (for easy download and/or printing). A single source might be a
single HTML file or a compressed archive of several
HTML documents and included images.

Ensure that run-time help and any Web-based help or support information is
accessible and may be operated with a single, well-documented, input command
(e.g., key stroke). Use operating environment
conventions for input configurations related to run-time help.

Ensure that user agent identification codes are accessible to users so they
may install their software. Codes printed on software packaging may not be
accessible to people with visual disabilities.

Doing more

Provide accessible documentation for all audiences: end users, developers,
etc. For instance, developers with disabilities may wish to add accessibility
features to the user agent, and so require information on available APIs and
other implementation details.

Provide documentation in alternative formats such as braille (refer to
"Braille Formats: Principles of Print to Braille Transcription 1997" [BRAILLEFORMATS]), large
print, or audio tape. Agencies such as Recording for the Blind and Dyslexic
[RFBD] and the (USA) National Braille Press [NBP]
can create alternative formats.

For the purposes of this checkpoint, a user agent feature that benefits
accessibility is one implemented to satisfy the requirements of this document
(including the requirements of checkpoints 8.1 and 7.3, and the API requirements of
guideline 6).

Note: The help system should include discussion of user
agent features that benefit accessibility. The user agent should satisfy this
checkpoint by providing both centralized and integrated views of accessibility
features in the documentation.

Who benefits

Many users with many types of disabilities.

Example techniques

Document any features that affect accessibility and that depart from system
conventions.

Provide a sensible index to accessibility features. For instance, users
should be able to find "How to turn off blinking text" in the documentation
(and the user interface). The user agent may support this feature by turning
off scripts, but users should not have to guess (or know) that turning off
scripts will turn off blinking text.

Document configurable features in addition to defaults for those
features.

Document the features implemented to conform with these guidelines.

Include references to accessibility features in both the table of contents
and index of the documentation.

If configuration files are used to satisfy the requirements of this
document, the documentation should explain the configuration file formats.

In developer documentation, document the APIs that are required by this
document; see the API requirements of guideline 6.

References

Apple publishes a list of Macintosh Accessibility Features
[MAC-ACCESS].

If the user agent does not allow the user to override the default user
agent input configuration (see
checkpoint 11.3), the documentation used to satisfy this checkpoint also
satisfies checkpoint
11.1.

Note: Documentation should warn the user whenever the
default input configuration is inconsistent with conventions of the operating
environment.

Notes and rationale

Documentation of keyboard accessibility is particularly important to users
with visual disabilities and some types of physical disabilities. Without this
documentation, a user with a disability (or multiple disabilities) may not
think that a particular task can be performed. Or the user may try to use a
much less efficient technique to perform a task, such as using a mouse, or
using an assistive technology's mouse emulation through key strokes.

Who benefits

Many users with many types of disabilities.

Example techniques

If the user agent inherits default values (e.g., for the input
configuration and for highlight styles) from the operating environment,
document how to modify them in the operating environment, or explain where to
find this information in the documentation for the operating environment.

Note: Developers are encouraged to integrate descriptions
of accessibility features into the documentation alongside other features, in
addition to providing a centralized view.

Who benefits

Many users with many types of disabilities.

Example techniques

Integrate information about accessibility features throughout the
documentation. The dedicated section on accessibility should provide access to
the documentation as a whole rather than standing alone as an independent
section. For instance, in a hypertext-based help system, the section on
accessibility may link to pertinent topics elsewhere in the documentation.