The terms "must", "should", and "may" (and related terms) are used in
this document in accordance with RFC 2119 [RFC2119]. This section uses
the expression "subject of a claim" to refer to a user agent about
which someone wishes to claim some level of conformance to this
document. The subject of a claim may be one or more software
components (e.g., a browser plus additional software).
Note: Conformance to the requirements of this document is expected to
be a strong indicator of accessibility, but it is neither a necessary
nor sufficient condition for ensuring the accessibility of software.
Some software may not conform to this document but still be accessible
to some users with disabilities. Conversely, software may conform to
this document but still be inaccessible to some users with
disabilities. Please refer to the section on known limitations of this
document.
3.1 Conformance model
There are two ways to conform to this document: unconditionally or
conditionally. A user agent conforms unconditionally to this document
if it satisfies all of the requirements of all of the checkpoints.
Some checkpoints include more than one requirement.
A user agent conforms conditionally if it satisfies the set of
requirements that results from following these steps:
1. Choose a conformance level, which establishes a set of
requirements.
2. Remove the requirements associated with any unsupported content
type labels. In order to conform conditionally, a user agent must
satisfy the requirements associated with at least one content type
label.
3. Remove the requirements associated with any unsupported input
modality label. In order to conform conditionally (or
unconditionally), a user agent must be fully operable through the
keyboard, and must satisfy the input device requirements of this
document for the keyboard.
4. Remove the requirements of any checkpoints that do not apply.
Since these steps produce very different sets of checkpoints for
different user agents, a valid conformance claim must indicate which
requirements the subject of the claim does not satisfy.
Note: The checklist [UAAG10-CHECKLIST] may be used when evaluating a
user agent for conformance.
Conformance levels
Each conformance level defines a set of requirements, based on
priority.
* Conformance Level "A": the requirements of all Priority 1
checkpoints.
* Conformance Level "Double-A": the requirements of all Priority 1
and 2 checkpoints.
* Conformance Level "Triple-A": the requirements of all Priority 1,
2, and 3 checkpoints.
Note: Conformance levels are spelled out in text (e.g., "Double-A"
rather than "AA") so they may be understood when rendered as speech.
Content type labels
Each content type label defines a set of requirements based on support
for images, video, animations, visually displayed text (in color), and
synthesized speech.
VisualText
This content type label refers to all of the requirements
related to the visual rendering of text for the following
checkpoints: 3.3, 4.1, and 4.2.
ColorText
This content type label refers to all of the requirements
related to text foreground and background color for the
following checkpoint: 8.3.
Image
This content type label refers to all of the requirements
related to images for the following checkpoints: 3.1 and 3.8.
To conform, the user agent must implement at least one image
format.
Animation
This content type label refers to all of the requirements
related to animated images for the following checkpoints: 3.2,
3.4, 4.4, 4.5, 4.7, and 4.8. To conform, the user agent must
implement at least one animation format.
Video
This content type label refers to all of the requirements
related to video for the following checkpoints: 2.4, 2.5, 3.2,
4.4, 4.5, 4.7, and 4.8. To conform, the user agent must
implement at least one video format.
Audio
This content type label refers to all of the requirements
related to audio for the following checkpoints: 2.4, 2.5, 3.2,
4.4, 4.5, 4.7, 4.8, 4.9, and 4.10. To conform, the user agent
must implement at least one audio format.
Speech
This content type label refers to all of the requirements
related to synthesized speech for the following checkpoints:
4.11, 4.12, and 4.13. To conform, the user agent must support
synthesized speech.
Note: Some of the labels above require implementation of at least one
format (e.g., for images). This document does not require
implementation of specific formats (e.g., PNG [PNG] versus SVG [SVG]
for images). However, please see the requirements of checkpoint 6.2.
Input modality labels
Each input modality label defines a set of requirements based on
support for pointing device and voice input.
Pointer
This input modality label refers to all of the input device
requirements of this document, applied to pointing device
input.
Voice
This input modality label refers to all of the input device
requirements of this document, applied to voice input.
Note: Developers are encouraged to design user agents that are at
least partially operable through all three input modalities.
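Taken together, the conformance model above reads as a filtering procedure: choose a level, then remove the requirements excluded by content type labels, input modality labels, and applicability. The sketch below illustrates that procedure only; the checkpoint identifiers, priorities, and label associations in its table are hypothetical inventions, not this document's actual checkpoint list, and input modality labels would be filtered exactly like content type labels.

```python
# Illustrative sketch of conditional conformance (steps 1-4 of the
# conformance model). The checkpoint table is hypothetical, chosen
# only to exercise the procedure.
REQUIREMENTS = [
    # (checkpoint, priority, labels whose removal drops the requirement)
    ("1.1", 1, set()),   # keyboard operability: never removable
    ("3.2", 1, {"Animation", "Video", "Audio"}),
    ("4.2", 1, {"VisualText"}),
    ("4.9", 2, {"Audio"}),
    ("8.3", 2, {"ColorText"}),
    ("4.11", 3, {"Speech"}),
]

CONTENT_LABELS = {"VisualText", "ColorText", "Image", "Animation",
                  "Video", "Audio", "Speech"}

def conditional_requirements(level, unsupported_labels, inapplicable):
    """Steps 1-4: choose a level, remove requirements for unsupported
    labels, remove checkpoints that do not apply."""
    max_priority = {"A": 1, "Double-A": 2, "Triple-A": 3}[level]
    # At least one content type label must remain supported.
    if CONTENT_LABELS <= set(unsupported_labels):
        raise ValueError("must support at least one content type label")
    kept = []
    for checkpoint, priority, labels in REQUIREMENTS:
        if priority > max_priority:
            continue  # step 1: outside the chosen conformance level
        if labels and labels <= set(unsupported_labels):
            continue  # steps 2-3: every associated label is unsupported
        if checkpoint in inapplicable:
            continue  # step 4: checkpoint does not apply
        kept.append(checkpoint)
    return kept

print(conditional_requirements("A", {"Speech"}, set()))
# ['1.1', '3.2', '4.2']
```

Note how the keyboard requirement survives every filter, matching the rule that a user agent must be fully operable through the keyboard to conform at all.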
3.2 Checkpoint applicability
A checkpoint (or portion of a checkpoint) applies unless any one of
the following conditions is met:
1. The checkpoint makes requirements for graphical user interfaces or
graphical viewports and the subject of the claim only has audio or
tactile user interfaces or viewports.
2. The checkpoint refers to a role of content (e.g., transcript,
captions, text equivalent, fee link, etc.) that the subject of the
claim cannot recognize because of how the content has been encoded
in a particular format. For instance, HTML user agents can
recognize "alt", OBJECT content, or NOFRAMES content as providing
equivalents for other content since these are specified by the
markup language. HTML user agents are not expected to recognize
that a text description embedded without indicative markup in a
nearby paragraph is a text equivalent for the image.
3. The checkpoint requires control of a content property that the
subject cannot recognize because of how the content has been
encoded in a particular format. Some examples of this include:
+ captioning information that is "burned" into a video
presentation and cannot be recognized as captions in the
presentation format;
+ streamed content that cannot be fast-forwarded or reversed;
+ information or relationships encoded in scripts in a manner
that cannot be recognized. For instance, the requirements of
checkpoint 3.3 would not apply to animation effects caused
by scripts. Similarly, the requirements for input
configuration bindings (refer to guideline 9) do not apply
to bindings created through scripts in a manner that the
user agent cannot recognize.
3.3 Well-formed conformance claims
A claim is well-formed if it meets the following conditions:
1. It includes the following information:
1. The date of the claim.
2. The guidelines title/version: "User Agent Accessibility
Guidelines 1.0".
3. The URI of the guidelines:
http://www.w3.org/WAI/UA/WD-UAAG10-20010116.
4. The conformance level satisfied: "A", "Double-A", or
"Triple-A".
5. Information about the subject. The subject of the claim may
consist of one or more software components (e.g., a browser
plus a multimedia player plus a plug-in). For each component,
the claim must include the following:
o The product name and version information (version
number, minor release number, and any required patches
or updates). The claim must also include the vendor name
if it is required to identify the product.
o The operating system name and version number.
2. It conforms to the "Web Content Accessibility Guidelines 1.0"
[WCAG10], level A.
A well-formed claim may also include the following information:
1. Content type labels. Each content type label is an assertion that
the user agent does not satisfy the requirements associated with
the label. A well-formed conformance claim must not include all of
the content type labels (because the user agent must support at
least one of the content types).
2. Input modality labels. Each input modality label is an assertion
that the user agent does not satisfy the requirements associated
with the label.
3. A list of checkpoints that the claim asserts do not apply. A
well-formed claim should include rationale for why a checkpoint
doesn't apply.
There is no restriction on the format used to make a well-formed
claim. For instance, the claim may be marked up using HTML (see sample
claim), or expressed in the Resource Description Framework (RDF)
[RDF10].
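For illustration only, the conditions above can be mechanized over a claim held as a plain data structure (an HTML or RDF representation would carry the same information). The dictionary keys below are this sketch's own naming, not a schema defined by this document, and the separate WCAG 1.0 level-A condition is not modeled.

```python
# Sketch of a well-formedness check; the dictionary keys are this
# example's own naming, not a schema defined by the guidelines. The
# WCAG 1.0 level-A condition is outside the scope of this sketch.
REQUIRED_CLAIM_FIELDS = {"date", "guidelines_title", "guidelines_uri",
                         "conformance_level"}
REQUIRED_COMPONENT_FIELDS = {"product_name", "version",
                             "os_name", "os_version"}
ALL_CONTENT_LABELS = {"VisualText", "ColorText", "Image", "Animation",
                      "Video", "Audio", "Speech"}

def is_well_formed(claim):
    if not REQUIRED_CLAIM_FIELDS <= claim.keys():
        return False
    if claim["conformance_level"] not in ("A", "Double-A", "Triple-A"):
        return False
    components = claim.get("components", [])
    if not components:
        return False  # the subject must name at least one component
    if any(not REQUIRED_COMPONENT_FIELDS <= c.keys() for c in components):
        return False
    # A claim must not include every content type label, since the
    # user agent must support at least one content type.
    if set(claim.get("content_type_labels", ())) >= ALL_CONTENT_LABELS:
        return False
    return True

claim = {
    "date": "2001-01-16",
    "guidelines_title": "User Agent Accessibility Guidelines 1.0",
    "guidelines_uri": "http://www.w3.org/WAI/UA/WD-UAAG10-20010116",
    "conformance_level": "Double-A",
    "components": [{"product_name": "ExampleBrowser", "version": "1.0",
                    "os_name": "ExampleOS", "os_version": "5"}],
    "content_type_labels": ["Speech"],  # requirements not satisfied
}
print(is_well_formed(claim))  # True
```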
3.4 Validity of a claim
A conformance claim is valid if the following conditions are met:
1. The claim is well-formed.
2. The claim indicates which requirements the user agent does not
satisfy through one conformance level and any relevant content
type labels, input modality labels, and applicability information.
3. It is verified that the user agent satisfies all other
requirements not exempted by the claim through these mechanisms.
It is not currently possible to validate a claim entirely
automatically.
Each checkpoint requirement must be satisfied by making information or
functionalities available through the user interface of the subject of
the claim unless the checkpoint explicitly states that the requirement
must be met by making information available through an application
programming interface (API). These API checkpoints are labeled
"checkpoints for communication with other software."
Note: The subject of the claim may consist of more than one software
component, and taken together they must satisfy all requirements
that are not excluded through the claim. This includes assistive
technologies and operating system features that are part of a claim.
Some components may not have to satisfy some requirements as long as
the subject as a whole satisfies them. For instance, a particular
component of the subject may not have to conform to the DOM APIs
required by guideline 5 as long as the subject of the claim as a whole
makes all content available through those APIs.
Note: Ideally, the standard (or default) user agent installation
procedure should provide and install all components that are part of a
conformance claim. This is because the more software components the
user must install in order to construct a conforming user agent, the
higher the risk of failure. Failure may be due to inaccessible
mechanisms for downloading and installing plug-ins, or lack of
installation access privileges for a computer in a public space.
Use of operating system features as part of conformance
To satisfy the requirements of this document, developers are
encouraged to adopt operating system conventions and features that
benefit accessibility. When an operating system feature (e.g., the
operating system's audio control feature) is adopted to satisfy the
requirements of this document, it is part of the subject of the claim.
Developers may provide access through the user agent's user interface
to operating system features adopted to satisfy the requirements of
this document. For example, if the user agent adopts the operating
system's audio control feature to satisfy checkpoint 4.9, the user
agent may (but is not required to) include those controls in its own
user interface.
Restricted functionality and conformance
There may be scenarios where a content provider wishes to limit the
user's full access to content. For instance, a content provider may
wish to limit access to content through an API (e.g., to protect
intellectual property rights, or for security reasons), or to provide
a "read-only" view (allowing no user interaction). A valid conformance
claim remains valid even when the functionality of a conforming user
agent is restricted in a particular setting. The validity of a
conformance claim will be seriously jeopardized if a user agent does
not meet the requirements of this document for general-purpose
content.
Note: The User Agent Accessibility Guidelines Working Group recognizes
that further work is necessary in the area of accessibility and
digital rights management.
3.5 Responsibility for claims
Anyone may make a claim (e.g., vendors about their own products, third
parties about those products, journalists about products, etc.).
Claims may be published anywhere (e.g., on the Web or in paper product
documentation).
Claimants (or relevant assuring parties) are solely responsible for
the validity of their claims, keeping claims up to date, and proper
use of the conformance icons. As of the publication of this document,
W3C does not act as an assuring party, but it may do so in the future,
or it may establish recommendations for assuring parties.
Claimants are expected to modify or retract a claim if it may be
demonstrated that the claim is not valid. Claimants are encouraged to
claim conformance to the most recent User Agent Accessibility
Guidelines Recommendation available.
3.6 Conformance icons
As part of a conformance claim, people may use a conformance icon (or
"conformance logo") on a Web site, on product packaging, in
documentation, etc. Each conformance icon (chosen according to the
appropriate conformance level) used on the Web must link to the W3C
explanation of the icon. The appearance of a conformance icon does not
imply that W3C has reviewed or validated the claim. An icon must be
accompanied by a well-formed claim.
Draft Note: In the event this document becomes a W3C Recommendation,
it will link to the W3C Web site for additional information
about the icons and how to use them.
4. Glossary
Active element
An active element is a piece of content with behaviors that may
be activated (or "triggered") either through the user interface
or through an API (e.g., by using scripts).
What constitutes an active element depends on the content. In
HTML 4 [HTML4] documents, for example, active elements include
links, image maps, form controls, element instances with a
value for the "longdesc" attribute, and element instances with
scripts (event handlers) explicitly associated with them (e.g.,
through the various "on" attributes). The requirements of this
document refer only to active elements that may be recognized
through markup (and not, for example, through scripts or style
sheets). Some element instances may be active at times but not
at others (e.g., they may be "deactivated" through scripts, or
they may only be active for a period of time determined by the
author).
Potential user interaction with a piece of content does not
imply that the content constitutes an active element. For
example, the user may select text and copy it to the clipboard,
but the selected text is not (necessarily) an active element,
because the selection is a functionality provided by the user
agent. For the purposes of this document, markup languages
determine which elements are potentially active elements.
The effect of activation depends on the element. For instance,
when a link is activated, the user agent generally retrieves
the linked Web resource. When a form control is activated, it
may change state (e.g., check boxes) or may take user input
(e.g., a text entry field). See also the definition of event
handler.
Most systems use the content focus to indicate which active
element will be activated on user demand.
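As a sketch of the "recognized through markup" criterion above, an HTML user agent might scan start tags for links, form controls, "longdesc", and explicitly associated event handlers. The scanner below is a deliberately naive illustration (the "on" prefix test is simplistic), not a complete definition of active elements.

```python
from html.parser import HTMLParser

# Naive scan for HTML active elements recognizable through markup:
# links, form controls, "longdesc", and explicit "on*" event handler
# attributes. The "on" prefix test is deliberately simplistic.
FORM_CONTROLS = {"input", "button", "select", "textarea"}

class ActiveElementScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.active = []

    def handle_starttag(self, tag, attrs):
        names = {name for name, _ in attrs}
        if (tag == "a" and "href" in names
                or tag in FORM_CONTROLS
                or "longdesc" in names
                or any(n.startswith("on") for n in names)):
            self.active.append(tag)

scanner = ActiveElementScanner()
scanner.feed('<p onclick="f()">Hi <a href="/x">link</a> '
             '<img longdesc="d.html" alt="chart"> <em>plain</em></p>')
print(scanner.active)  # ['p', 'a', 'img']
```

The EM element falls through every test: plain text content, even though selectable, is not an active element.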
Alert
In this document, "to alert" means to make the user aware of
some event, without requiring acknowledgement. For example, the
user agent may alert the user that new content is available on
the server by displaying a text message in the user agent's
status bar. See checkpoint 1.3 for requirements about alerts.
Application Programming Interface (API), standard input/output/device
API
An application programming interface (API) defines how
communication may take place between applications.
As part of encouraging interoperability, this document
recommends using standard APIs where possible, although this
document does not define in all cases how those APIs are
standardized (i.e., whether they are defined by specifications
such as W3C Recommendations, defined by an operating system
vendor, de facto standards, etc.). Implementing APIs that are
independent of a particular operating system (e.g., the W3C DOM
Level 2 specifications) may reduce implementation costs for
multi-platform user agents and promote the development of
multi-platform assistive technologies. Implementing standard
APIs defined for a particular operating system may reduce
implementation costs for assistive technology developers who
wish to interoperate with more than one piece of software
running on that operating system.
A "device API" defines how communication may take place with an
input or output device such as a keyboard, mouse, video card,
etc. A "standard device API" is one that is considered standard
for that particular device on a given operating or windowing
system.
In this document, an "input/output API" defines how
applications or devices communicate with a user agent. As used
in this document, input and output APIs include, but are not
limited to, device APIs. Input and output APIs also include
more abstract communication interfaces than those specified by
device APIs. A "standard input/output API" is one that is
expected to be implemented by software running on a particular
operating system. Standard input/output APIs may vary from
system to system. For example, on desktop computers today, the
standard input APIs are for the mouse and keyboard. For touch
screen devices or mobile devices, standard input APIs may
include stylus, buttons, voice, etc. The graphical display and
sound card are considered standard output devices for a
graphical desktop computer environment, and each has a standard
API.
Assistive technology
In the context of this document, an assistive technology is a
user agent that:
1. relies on services (such as retrieving Web resources, parsing
markup, etc.) provided by one or more other "host" user
agents. Assistive technologies communicate data and messages
with host user agents by using and monitoring APIs.
2. provides services beyond those offered by the host user
agents to meet the requirements of users with disabilities.
Additional services include alternative renderings (e.g., as
synthesized speech or magnified content), alternative input
methods (e.g., voice), additional navigation or orientation
mechanisms, content transformations (e.g., to make tables
more accessible), etc.
For example, screen reader software is an assistive technology
because it relies on browsers or other software to enable Web
access, particularly for people with visual and learning
disabilities.
Examples of assistive technologies that are important in the
context of this document include the following:
+ screen magnifiers, which are used by people with visual
disabilities to enlarge and change colors on the screen to
improve the visual readability of rendered text and images.
+ screen readers, which are used by people who are blind or
have reading disabilities to read textual information through
synthesized speech or braille displays.
+ speech recognition software, which may be used by people who
have some physical disabilities.
+ alternative keyboards, which are used by people with certain
physical disabilities to simulate the keyboard.
+ alternative pointing devices, which are used by people with
certain physical disabilities to simulate mouse pointing and
button activations.
Beyond this document, assistive technologies consist of
software or hardware that has been specifically designed to
assist people with disabilities in carrying out daily
activities, e.g., wheelchairs, reading machines, devices for
grasping, text telephones, vibrating pagers, etc.
Attribute
This document uses the term "attribute" in the XML sense: an
element may have a set of attribute specifications (refer to
the XML 1.0 specification [XML] section 3).
Audio, Audio object
An audio object is content rendered as sound through an audio
viewport.
Audio-only presentation
An audio-only presentation is a presentation consisting
exclusively of one or more audio tracks presented concurrently
or in series. Examples of an audio-only presentation include a
musical performance, a radio-style news broadcast, and a book
reading.
Audio track
An audio track is an audio object that is intended as a whole
or partial presentation. An audio track may, but is not
required to, correspond to a single audio channel (left or
right audio channel).
Auditory description
An auditory description is either a prerecorded human voice or
a synthesized voice (recorded or generated dynamically)
describing the key visual elements of a movie or animation. The
auditory description is synchronized with the audio track of
the presentation, usually during natural pauses in the audio
track. Auditory descriptions include information about actions,
body language, graphics, and scene changes.
Author styles
Author styles are style property values that come from a
document, or from its associated style sheets, or that are
generated by the server.
Captions
Captions (sometimes called "closed captions") are text
transcripts that are synchronized with other audio or visual
tracks. Captions convey information about spoken words and
non-spoken sounds such as sound effects. They benefit people
who are deaf or hard-of-hearing, and anyone who cannot hear the
audio (e.g., someone in a noisy environment). Captions are
generally rendered graphically above, below, or superimposed
over video. Note: Other terms that include the word "caption"
may have different meanings in this document. For instance, a
"table caption" is a title for the table, often positioned
graphically above or below the table. In this document, the
intended meaning of "caption" will be clear from context.
Collated text transcript
A collated text transcript is a text equivalent of a movie or
animation. More specifically, it is the combination of the text
transcript of the audio track and the text equivalent of the
visual track. For example, a collated text transcript typically
includes segments of spoken dialogue interspersed with text
descriptions of the key visual elements of a presentation
(actions, body language, graphics, and scene changes). See also
the definitions of text transcript and auditory description.
Collated text transcripts are essential for individuals who are
deaf-blind.
Configure and Control
In the context of this document, the verbs "to control" and "to
configure" share the idea of governance that a user may
exercise over interface layout, user agent behavior,
rendering style, and other parameters required by this
document. Generally, the difference in the terms centers on the
idea of persistence. When a user makes a change by
"controlling" a setting, that change usually does not persist
beyond that user session. On the other hand, when a user
"configures" a setting, that setting typically persists into
later user sessions. Furthermore, the term "control" typically
means that the change can be made easily (such as through a
keyboard shortcut) and that the results of the change occur
immediately, whereas the term "configure" typically means that
making the change requires more time and effort (such as making
the change via a series of menus leading to a dialog box, via
style sheets or scripts, etc.) and that the results of the
change may not take effect immediately (e.g., due to time spent
reinitializing the system, initiating a new session, rebooting
the system). In order to be able to configure and control the
user agent, the user must be able to "read" as well as "write"
values for these parameters. Configuration settings may be
stored in a profile. The range and granularity of the changes
that can be controlled or configured by the user may depend on
system or hardware limitations.
Both configuration and control may apply at different "levels":
across Web resources (i.e., at the user agent level, or
inherited from the system), to the entirety of a Web resource,
or to components of a Web resource (e.g., on a per-element
basis). In this document, the term global configuration is used
to emphasize when a configuration must apply across Web
resources. For example, users may configure the user agent to
apply the same font family across Web resources, so that all
text is displayed by default using that font family. On the
other hand, the user may wish to configure the rendering of a
particular element type, which may be done through style
sheets. Or, the user may wish to control the text size
dynamically (zooming in and out) for a given document, without
having to reconfigure the user agent. Or, the user may wish to
control the text size dynamically for a given element, e.g., by
navigating to the element and zooming in on it.
User agents may allow users to choose configurations based on
various parameters, such as hardware capabilities, natural
language, etc.
Note: In this document, the noun "control" means "user
interface component" or "form component".
Content
In this specification, the noun "content" is used in three
ways:
1. It is used to mean the document object as a whole or in
parts.
2. It is used to mean the content of an HTML or XML element, in
the sense employed by the XML 1.0 specification ([XML],
section 3.1): "The text between the start-tag and end-tag is
called the element's content." Context should indicate that
the term content is being used in this sense.
3. It is used in the context of the phrases non-text content and
text content.
Device-independence
Device-independence refers to the ability to make use of
software with any supported input or output device.
Document Object, Document Object Model
In general usage, the term "document object" refers to the user
agent's representation of data (e.g., a document). This data
generally comes from the document source, but may also be
generated (from style sheets, scripts, transformations, etc.),
produced as a result of preferences set within the user agent,
added as the result of a repair performed automatically by the
user agent, etc. Some data that is part of the document object
is routinely rendered (e.g., in HTML, what appears between the
start and end tags of elements and the values of attributes
such as "alt", "title", and "summary"). Other parts of the
document object are generally processed by the user agent
without user awareness, such as DTD-defined names of element
types and attributes, and other attribute values such as
"href", "id", etc. These guidelines require that users have
access to both types of data through the user interface.
A "document object model" is the abstraction that governs the
construction of the user agent's document object. The document
object model employed by different user agents may vary in
implementation and sometimes in scope. This specification
requires that user agents implement the APIs defined in
Document Object Model (DOM) Level 2 Specifications ([DOM2CORE]
and [DOM2STYLE]) for access to HTML, XML, and CSS content.
These DOM APIs allow authors to access and modify the content
via a scripting language (e.g., JavaScript) in a consistent
manner across different scripting languages. As a standard
interface, the DOM APIs make it easier not just for authors but
also for assistive technology developers to extract information
and render it in ways most suited to the needs of particular
users.
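Python's standard library happens to ship a small DOM implementation, xml.dom.minidom, covering a subset of the DOM Core interfaces; it can illustrate how both routinely rendered data (such as "alt") and data normally processed without user awareness (such as "href") are reachable through the same APIs:

```python
from xml.dom.minidom import parseString

# Accessing content through DOM APIs (xml.dom.minidom implements a
# subset of the DOM Core interfaces). Both the routinely rendered
# "alt" value and the normally invisible "href" value are reachable.
doc = parseString(
    '<p><a href="moon.html"><img src="moon.png" alt="Full moon"/></a></p>')
img = doc.getElementsByTagName("img")[0]
link = doc.getElementsByTagName("a")[0]
print(img.getAttribute("alt"))    # Full moon
print(link.getAttribute("href"))  # moon.html

# The same interface permits modification, e.g. an automatic repair
# that copies the text equivalent into a missing "title":
link.setAttribute("title", img.getAttribute("alt"))
print(link.getAttribute("title"))  # Full moon
```

An assistive technology monitoring these APIs sees the repaired document object, not just the document source.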
Document character set
A document character set (a concept taken from SGML) is a
sequence of abstract characters that may appear in Web content
represented in a particular format (such as HTML, XML, etc.). A
document character set consists of:
+ A "repertoire": a set of abstract characters, such as the
Latin letter "A", the Cyrillic letter "I", the Chinese
character meaning "water", etc.
+ "Code positions": a set of integer references to characters in
the repertoire.
For instance, the character set required by the HTML 4
specification [HTML4] is defined in the Unicode specification
[UNICODE]. Refer to "Character Model for the World Wide Web"
[CHARMOD] for more information about document character sets.
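The repertoire/code-position split can be made concrete with Unicode, whose code positions Python exposes directly through ord() and chr():

```python
# A document character set pairs a repertoire of abstract characters
# with integer code positions. HTML 4's document character set is
# Unicode, whose code positions Python exposes via ord() and chr().
repertoire = {"A", "И", "水"}  # Latin A, Cyrillic I, Chinese "water"
code_positions = {c: ord(c) for c in sorted(repertoire)}
print(code_positions)  # {'A': 65, 'И': 1048, '水': 27700}

# A code position is an unambiguous reference back into the repertoire.
assert all(chr(n) == c for c, n in code_positions.items())
```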
Document source, Document source view
In this document, the term "document source" refers to the data
that the user agent receives as the direct result of a request
for a Web resource (e.g., as the result of an HTTP/1.1
[RFC2616] "GET", as the result of opening a local resource,
etc.). A "document source view" generally renders the document
source as text written in the markup language(s) used to build
it. The document source is generally a subset of the document
object (e.g., since the document object may include repair
content).
Documentation
Documentation refers to all information provided by the vendor
about a product, including all product manuals, installation
instructions, the help system, and tutorials.
Element
This document uses the term "element" both in the XML sense (an
element is a syntactic construct as described in the XML 1.0
specification [XML], section 3) and more generally to mean a
type of content (such as video or sound) or a logical construct
(such as a header or list).
Equivalent (for content)
In the context of this document, an equivalency relationship
between two pieces of content means that one piece -- the
"equivalent" -- is able to serve essentially the same function
for a person with a disability (at least insofar as is
feasible, given the nature of the disability and the state of
technology) as the other piece -- the "equivalency target" --
does for a person without any disability. For example, the text
"The Full Moon" might convey the same information as an image
of a full moon when presented to users. If the image is part of
a link and understanding the image is crucial to guessing the
link target, then the equivalent must also give users an idea
of the link target. Thus, an equivalent is provided to fulfill
the same function as the equivalency target.
Equivalents include text equivalents (e.g., text equivalents
for images; text transcripts for audio tracks; collated text
transcripts for multimedia presentations and animations) and
non-text equivalents (e.g., a prerecorded auditory description
of a visual track of a movie, or a sign language video
rendition of a written text, etc.). Please refer to the
definitions of text content and non-text content for more
information.
Each markup language defines its own mechanisms for specifying
equivalents. For instance, in HTML 4 [HTML4] or SMIL 1.0
[SMIL], authors may use the "alt" attribute to specify a text
equivalent for some elements. In HTML 4, authors may provide
equivalents (or portions of equivalents) in attribute values
(e.g., the "summary" attribute for the TABLE element), in
element content (e.g., OBJECT for external content it
specifies, NOFRAMES for frame equivalents, and NOSCRIPT for
script equivalents), and in prose. Please consult the Web
Content Accessibility Guidelines 1.0 [WCAG10] and its
associated Techniques document [WCAG10-TECHS] for more
information about equivalents.
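As a toy illustration of equivalents that markup lets a user agent recognize, the collector below gathers "alt" and "summary" attribute values from HTML start tags; equivalents supplied in element content (OBJECT, NOFRAMES, NOSCRIPT) would need additional element-content handling, and this is not a conformance technique.

```python
from html.parser import HTMLParser

# Toy collector for attribute-based text equivalents that HTML markup
# lets a user agent recognize: "alt" and "summary" values. Equivalents
# in element content (OBJECT, NOFRAMES, NOSCRIPT) are not handled.
class EquivalentCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.equivalents = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("alt", "summary"):
                self.equivalents.append((tag, value))

c = EquivalentCollector()
c.feed('<img src="moon.png" alt="The Full Moon">'
       '<table summary="Monthly totals"><tr><td>1</td></tr></table>')
print(c.equivalents)
# [('img', 'The Full Moon'), ('table', 'Monthly totals')]
```

A text description embedded without indicative markup in a nearby paragraph would, as noted under checkpoint applicability, be invisible to such a collector.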
Events and scripting, event handler
User agents often perform a task when an event occurs that is
due to user interaction (e.g., document loading, mouse motion, or a
key press), a request from the operating system, etc. Some
markup languages allow authors to specify that a script, called
an event handler, be executed when the event occurs. Note: The
combination of HTML, style sheets, the Document Object Model
(DOM) and scripting is commonly referred to as "Dynamic HTML"
or DHTML. However, as there is no W3C specification that
formally defines DHTML, this document only refers to event
handlers and scripts.
Explicit user request
In several checkpoints in this document, the term "explicit
user request" is used to mean any user interaction recognized
with certainty to be for a specific purpose. For instance, when
the user selects "New viewport" in the user agent's user
interface, this is an explicit user request for a new viewport.
On the other hand, it is not an explicit request when the user
activates a link and that link has been marked up by the author
to open a new viewport (since the user may not know that a new
viewport will open). Nor is it an explicit user request even if
the link text states "will open a new viewport". Some other
examples of explicit user requests include "yes" responses to
prompts from the user agent, configuration through the user
agent's user interface, activation of known form submit
controls, and link activation (which should not be assumed to
mean more than "get this linked resource", even if the link
text, title, or role indicates more). Some examples of
behaviors that happen without explicit user request include
changes due to scripts. Note: Users make mistakes. For example,
a user may submit a form inadvertently by activating a known
form submit control. In this document, this type of mistake is
still considered an explicit user request.
Fee link
For the purpose of this document, the term "fee link" refers to
a link that when activated, debits the user's electronic
"wallet" (generally, a "micropayment"). The link's role as a
fee link must be identified through markup in a manner that the
user agent can recognize. This definition of fee link excludes
payment mechanisms (e.g., some form-based credit card
transactions) that cannot be recognized by the user agent as
causing payments. For more information about fee links, refer
to "Common Markup for micropayment per-fee-links"
[MICROPAYMENT].
Focus, content focus, user interface focus, current focus
The notion of focus refers to two identifying mechanisms of
user agents:
1. The "content focus" designates an active element in a
document (e.g., a link or radio button). A viewport has at
most one content focus.
2. The "user interface focus" designates a control of the user
interface that will respond to user input (e.g., a radio
button, text box, menu, etc.).
In this document, the term "focus" by itself encompasses both
types of focus. Where one is meant specifically in this
document, it is identified.
When several viewports coexist, each may have a content and
user interface focus. At all times, only one content focus or
one user interface focus is active, called the current focus.
The current focus responds to user input and may be toggled
between content focus and user interface focus through the
keyboard, pointing device, etc. Both the content and user
interface focus may be highlighted. See also the definition of
point of regard.
Graphical
In this document, the term "graphical" refers to information
(text, colors, graphics, images, animations, etc.) rendered for
visual consumption.
Highlight
In this document, "to highlight" means to emphasize through the
user interface. For example, user agents highlight which
content is selected or focused and which viewport is the
current viewport. Graphical highlight mechanisms include dotted
boxes, underlining, and reverse video. Synthesized speech
highlight mechanisms include alterations of voice pitch and
volume.
Input configuration
An input configuration is the mapping of user agent
functionalities to some user interface trigger mechanisms
(e.g., menus, buttons, keyboard keys, voice commands, etc.).
The default input configuration is the mapping the user finds
after installation of the software; it must be part of the user
agent documentation (per checkpoint 10.3). Input
configurations may be affected by author-specified bindings
(e.g., through the "accesskey" attribute of HTML 4 [HTML4]).
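Such an author-specified binding might look like this (illustrative markup only):

```html
<!-- Illustrative only: the author binds the "U" key to this
     form field with the HTML 4 "accesskey" attribute, which
     may affect the user agent's input configuration. -->
<LABEL for="user" accesskey="U">User name</LABEL>
<INPUT type="text" id="user" name="user">
```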
Multimedia Presentation
For the purposes of this document, a multimedia presentation is
a presentation that is not a visual-only presentation,
audio-only presentation, or tactile-only presentation. In a
"classic" multimedia presentation (e.g., a movie that has a sound
track or an animation with accompanying audio), at least one
visual track is closely synchronized with at least one audio
track.
Natural language
Natural language is spoken, written, or signed human language
such as French, Japanese, and American Sign Language. On the
Web, the natural language of content may be specified by markup
or HTTP headers. Some examples include the "lang" attribute in
HTML 4 ([HTML4] section 8.1), the "xml:lang" attribute in XML
1.0 ([XML], section 2.12), the HTML 4 "hreflang" attribute for
links in HTML 4 ([HTML4], section 12.1.5), the HTTP
Content-Language header ([RFC2616], section 14.12) and the
Accept-Language request header ([RFC2616], section 14.4). See
also the definition of script.
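Two of these markup mechanisms are shown below (illustrative markup only; the URI is invented):

```html
<!-- Illustrative only: "lang" gives the natural language of
     element content; "hreflang" gives the natural language of
     the linked resource. -->
<P lang="fr">Bonjour</P>
<A href="http://example.org/doc" hreflang="ja">Japanese version</A>
```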
Placeholder
A placeholder is content generated by the user agent to replace
author-supplied content. A placeholder may be generated as the
result of a user preference (e.g., to not render images) or as
repair content (e.g., when an image cannot be found).
Placeholders can be any type of content, including text and
images. This document does not require user agents to include
placeholders in the document object. A placeholder inserted in
the document object should conform to the Web Content
Accessibility Guidelines 1.0 [WCAG10]. If a placeholder is not
part of the document object, it is part of the user interface
only (and subject, for example, to checkpoint 1.3).
Point of regard
The point of regard is a position in rendered content that the
user is presumed to be viewing. The dimensions of the point of
regard may vary. For example, it may be a point (e.g., a moment
in an audio rendering or a cursor in a graphical rendering), or
a range of text (e.g., focused text), or a two-dimensional area
(e.g., content rendered through a two-dimensional graphical
viewport). The point of regard is almost always within a
viewport (though the dimensions of the point of regard could
exceed those of the viewport). The point of regard may also
refer to a particular moment in time for content that changes
over time (e.g., an audio-only presentation). User agents may
use the focus, selection, or other means to designate the point
of regard. A user agent should not change the point of regard
unexpectedly as this may disorient the user.
Presentation
In this document, the term presentation refers to a collection
of information, consisting of one or more Web resources,
intended to be rendered simultaneously, and identified by a
single URI. In general, a presentation has an inherent time
component (i.e., it is not just a static "Web page"; refer to
the definition of "Web page" in "Web Characterization
Terminology and Definitions Sheet" [WEBCHAR]).
Profile
A profile is a named and persistent representation of user
preferences that may be used to configure a user agent.
Preferences include input configurations, style preferences,
natural language preferences, etc. On systems with distinct
user accounts, profiles enable users to reconfigure software
quickly when they log on, and profiles may be shared by several
users. Platform-independent profiles are useful for those who
use the same user agent on different platforms.
Prompt
In this document, "to prompt" means to require input from the
user. The user agent should allow users to configure how they
wish to be prompted. For instance, for a user agent
functionality X, configurations might include: always do X
without prompting me, never do X without prompting me, don't
ever do X but tell me when you could have done X but didn't,
don't ever do X and don't tell me, etc.
Properties, values, and defaults
A user agent renders a document by applying formatting
algorithms and style information to the document's elements.
Formatting depends on a number of factors, including where the
document is rendered: on screen, on paper, through
loudspeakers, on a braille display, on a mobile device, etc.
Style information (e.g., fonts, colors, speech prosody, etc.)
may come from the elements themselves (e.g., certain font and
phrase elements in HTML), from style sheets, or from user agent
settings. For the purposes of these guidelines, each formatting
or style option is governed by a property and each property may
take one value from a set of legal values. Generally in this
document, the term "property" has the meaning defined in CSS 2
([CSS2], section 3). A reference to "styles" in this document
means a set of style-related properties.
The value given to a property by a user agent when it is
installed is called the property's default value.
Recognize
Authors encode information in markup languages, style sheet
languages, scripting languages, protocols, etc. When the
information is encoded in a manner that allows the user agent
to process it with certainty, the user agent can "recognize"
the information. For instance, HTML allows authors to specify a
heading with the H1 element, so a user agent that implements
HTML can recognize that content as a heading. If the author
creates headings using a visual effect alone (e.g., by
increasing the font size), then the author has encoded the
heading in a manner that does not allow the user agent to
recognize it as a heading.
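The heading example above can be made concrete (illustrative markup only):

```html
<!-- Recognizable: the H1 element identifies a heading. -->
<H1>Chapter One</H1>

<!-- Not recognizable as a heading: the author relies on a
     visual effect (larger, bold text) alone. -->
<FONT size="+2"><B>Chapter One</B></FONT>
```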
Some requirements of this document depend on content roles,
content relationships, timing relationships, and other
information that must be supplied by the author. These
requirements only apply when the author has encoded that
information in a manner that the user agent can recognize. See
the section on conformance for more information about
applicability.
In practice, user agents will rely heavily on information that
the author has encoded in a markup language or style sheet
language. On the other hand, information encoded in a script
may not be recognized by the user agent as easily. For
instance, a user agent is not expected to recognize that, when
executed, a script will calculate a factorial. The user agent
will be able to recognize some information in a script by
virtue of implementing the scripting language or a known
program library (e.g., the user agent is expected to recognize
when a script will open a viewport or retrieve a resource from
the Web). The Techniques document [UAAG10-TECHS] lists some
markup known to affect accessibility that user agents can
recognize.
Rendered content, rendered text
Rendered content is the part of content capable of being
perceived by a user through a given viewport (whether visual,
auditory, or tactile). Some rendered content may lie "outside"
of a viewport at some times (e.g., when the user can only view
a portion of a large document through a small graphical
viewport, when audio content has already been played, etc.). By
changing the viewport's position, the user can view the
remaining rendered content.
Note: In the context of this document, "invisible content" is
content that influences graphical rendering of other content
but is not rendered itself. Similarly, "silent content" is
content that influences audio rendering of other content but is
not rendered itself. Neither invisible nor silent content is
considered rendered content.
Repair content, repair text
In this document, the term "repair content" refers to content
generated by the user agent in order to correct an error
condition. "Repair text" means repair content consisting only
of text. Some error conditions that may lead to the generation
of repair content include:
+ Erroneous or incomplete content (e.g., ill-formed markup,
invalid markup, missing text equivalents, etc.);
+ Missing resources for handling or rendering content (e.g.,
the user agent lacks a font family to display some
characters, the user agent doesn't implement a particular
scripting language, etc.).
This document does not require user agents to include repair
content in the document object. Repair content inserted in the
document object should conform to the Web Content Accessibility
Guidelines 1.0 [WCAG10]. For more information about repair
techniques for Web content and software, refer to "Techniques
for Authoring Tool Accessibility Guidelines 1.0"
[ATAG10-TECHS].
Script
In this document, the term "script" almost always refers to a
scripting (programming) language used to create dynamic Web
content. However, in checkpoints referring to the written
(natural) language of content, the term "script" is used as in
Unicode [UNICODE] to mean "A collection of symbols used to
represent textual information in one or more writing systems."
Selection, current selection
The selection generally identifies a range of content (e.g.,
text, images, etc.) in a document. The selection may be
structured (based on the document tree) or unstructured (e.g.,
text-based). Content may be selected through user interaction,
scripts, etc. The selection may be used for a variety of
purposes: for cut and paste operations, to designate a specific
element in a document, to identify what a screen reader should
read, etc.
The selection may be set by the user (e.g., by a pointing
device or the keyboard) or through an application programming
interface (API). A viewport has at most one selection (though
the selection may be rendered graphically as discontinuous text
fragments). When several viewports coexist, each may have a
selection, but only one is active, called the current
selection.
On the screen, the selection may be highlighted using colors,
fonts, graphics, magnification, etc. The selection may also be
rendered through changes in speech prosody, for example.
Support, implement, conform
In this document, the terms "support", "implement", and
"conform" all refer to what a developer has designed a user
agent to do, but they represent different degrees of
specificity. A user agent "supports" general classes of
objects, such as "images" or "Japanese". A user agent
"implements" a specification (e.g., the PNG and SVG image
format specifications, a particular scripting language, etc.)
or an API (e.g., the DOM API) when it has been programmed to
follow all or part of a specification. A user agent "conforms
to" a specification when it implements the specification and
satisfies its conformance criteria. This document includes some
explicit conformance requirements (e.g., to a particular level
of the "Web Content Accessibility Guidelines 1.0" [WCAG10]).
Synchronize
In this document, "to synchronize" refers to the
time-coordination of two or more presentation components (e.g.,
in a multimedia presentation, a visual track with captions).
For Web content developers, the requirement to synchronize
means to provide the data that will permit sensible
time-coordinated rendering by a user agent. For example, Web
content developers can ensure that the segments of caption text
are neither too long nor too short, and that they map to
segments of the visual track that are appropriate in length.
For user agent developers, the requirement to synchronize means
to present the content in a sensible time-coordinated fashion
under a wide range of circumstances including technology
constraints (e.g., small text-only displays), user limitations
(slow reading speeds, large font sizes, high need for review or
repeat functions), and content that is sub-optimal in terms of
accessibility.
Tactile object
A tactile object is output from a tactile viewport. Tactile
objects include text (rendered as braille) and graphics
(rendered as raised-line drawings).
Tactile-only presentation
A tactile-only presentation is a presentation consisting
exclusively of one or more tactile tracks presented
concurrently or in series.
Tactile track
A tactile track is a tactile object that is intended as a whole
or partial presentation. This does not necessarily correspond
to a single physical or logical track on the storage or
delivery media.
Text
In this document, the term "text" used by itself refers to a
sequence of characters from a markup language's document
character set. Refer to the "Character Model for the World Wide
Web" [CHARMOD] for more information about text and characters.
Note: This document makes use of other terms that include the
word "text" that have highly specialized meanings: collated
text transcript, non-text content, text content, non-text
element, text element, text equivalent, and text transcript.
Text content, non-text content, text element, non-text element, text
equivalent, non-text equivalent
In this document, the term "text element" means content that,
when rendered, is understandable in each of three modes to
three reference groups:
1. visually-displayed text, for users who are deaf and adept in
reading visually-displayed text;
2. synthesized speech, for users who are blind and adept in use
of synthesized speech;
3. braille, for users who are deaf-blind and adept at reading
braille.
In these definitions, a text element is said to be
"understandable" when it fulfills its communication function to
representatives of the three reference groups. Furthermore,
these definitions make assumptions such as the availability of
appropriate hardware and software, that content represents a
general mix of purposes (information, education, entertainment,
commerce), that the individuals in the groups are able to
understand the natural language of the content, that the
individuals in the groups are not required to have specialized
skills (e.g., a computer science degree, etc.).
A text element may contain markup for style (e.g., font size or
color), structure (e.g., heading levels), and other semantics.
However, the essential function of the text element should be
retained even if style information happens to be lost in
rendering. In this document, the term "text content" refers to
content that is composed of one or more text elements. A
"non-text element" is an element that fails to be
understandable when rendered in one or more of the three modes
to the corresponding reference groups. Thus, text elements
have essential accessibility advantages often associated with
text while non-text elements are those that lack one or more
such advantages.
In this document, the term "non-text content" refers to content
that is composed of one or more non-text elements. Per
checkpoint 1.1 of "Web Content Accessibility Guidelines 1.0"
[WCAG10], authors must provide a text equivalent for every
author-supplied non-text element. Similarly, user agent
developers must provide a text equivalent for every non-text
element offered by the user agent to the user (see checkpoint
1.3).
Note that the terms "text element" and "non-text element" are
defined by the characteristics of their output (e.g.,
rendering) rather than those of their input (e.g., information
sources) or their internals (e.g., format). For example, in
principle, a text element can be generated or encoded in any
fashion as long as it has the proper output characteristics. In
general, text elements are composed of text (i.e., a sequence
of characters). Both text elements and non-text elements should
be understood as "pre-rendering" content in contrast to the
"post-rendering" content that they produce.
A "text equivalent" is a text element that, when rendered,
serves essentially the same function as some other content
(i.e., an equivalency target) does for a person without any
disability. Similarly, a "non-text equivalent" is a non-text
element that, when rendered, serves essentially the same
function as the equivalency target does for a person without
any disability. Please refer also to the definition of
equivalent.
Text decoration
In this document, a "text decoration" is any stylistic effect
that the user agent may apply to visually rendered text that
does not affect the layout of the document (i.e., does not
require reformatting when applied or removed). Text decoration
mechanisms include underline, overline, and strike-through.
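These mechanisms can be expressed, for instance, with the CSS 'text-decoration' property (illustrative markup only; note that the CSS value for strike-through is "line-through"):

```html
<!-- Illustrative only: CSS 'text-decoration' values applied
     inline; toggling them does not reflow the document. -->
<P>Text may be
   <SPAN style="text-decoration: underline">underlined</SPAN>,
   <SPAN style="text-decoration: overline">overlined</SPAN>, or
   <SPAN style="text-decoration: line-through">struck through</SPAN>.</P>
```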
Text transcript
A text transcript is a text equivalent of audio information
(e.g., an audio-only presentation or the audio track of a movie
or animation). It provides text for both spoken words and
non-spoken sounds such as sound effects. Text transcripts make
audio information accessible to people who have hearing
disabilities and to people who cannot play the audio. Text
transcripts are usually pre-written but may be generated on the
fly (e.g., by speech-to-text converters). See also the
definitions of captions and collated text transcripts.
User agent
In this document, the term "user agent" is used in two ways:
1. Any software that retrieves and renders Web content for
users. This may include Web browsers, media players,
plug-ins, and other programs -- including assistive
technologies -- that help in retrieving and rendering Web
content.
2. The subject of a conformance claim to this document. This is
the most common use of the term in this document and is the
usage in the checkpoints.
User agent default styles
User agent default styles are style property values applied in
the absence of any author or user styles. Some markup languages
specify a default rendering for documents in that markup
language. Other specifications may not specify default styles.
For example, XML 1.0 [XML] does not specify default styles for
XML documents. HTML 4 [HTML4] does not specify default styles
for HTML documents, but the CSS 2 [CSS2] specification suggests
a sample default style sheet for HTML 4 based on current
practice.
User interface
For the purposes of this document, user interface includes
both:
1. the "user agent user interface", i.e., the controls and
mechanisms offered by the user agent for user interaction,
such as menus, buttons, keyboard access, etc.
2. the "content user interface", i.e., the active elements that
are part of content, such as form controls, links, applets,
etc.
The document distinguishes them only where required for
clarity.
User styles
User styles are style property values that come from user
interface settings, user style sheets, or other user
interactions.
Visual object
A visual object is output from a visual viewport. Visual
objects include graphics, text, and visual portions of movies
and animations.
Visual-only presentation
A visual-only presentation is a presentation consisting
exclusively of one or more visual tracks presented concurrently
or in series.
Visual track
A visual track is a visual object that is intended as a whole
or partial presentation. A visual track does not necessarily
correspond to a single physical or software object. A visual
track can be text-based or graphic, static or animated.
Views, viewports, and current viewport
User agents may handle different types of content: markup
language, sound, video, etc. The user views rendered content
through a viewport, which may be a window, a frame, a piece of
paper, a loudspeaker, a virtual magnifying glass, etc. A
viewport may contain another viewport (e.g., nested frames).
User interface controls such as prompts, menus, alerts, etc.
are not viewports.
The viewport that contains both the current focus and the
current selection is called the current viewport. The current
viewport is generally highlighted when several viewports
coexist. A user agent must provide mechanisms for accessing all
content that can be presented by each viewport (e.g., scrolling
mechanisms, advance and rewind, etc.).
User agents may render the same content in a variety of ways;
each rendering is called a view. For instance, a user agent may
allow users to view an entire document or just a list of the
document's headers. These are two different views of the
document.
Voice browser
From "Introduction and Overview of W3C Speech Interface
Framework" [VOICEBROWSER]: "A voice browser is a device
(hardware and software) that interprets voice markup languages
to generate voice output, interpret voice input, and possibly
accept and produce other modalities of input and output."
Web resource
The term "Web resource" is used in this document in accordance
with Web Characterization Terminology and Definitions Sheet
[WEBCHAR] to mean anything that can be identified by a Uniform
Resource Identifier (URI) as defined in RFC 2396 [RFC2396].
_________________________________________________________________
5. References
For the latest version of any W3C specification please consult the
list of W3C Technical Reports at http://www.w3.org/TR. Some documents
listed below may have been superseded since the publication of this
document.
5.1 Normative references
[DOM2CORE]
"Document Object Model (DOM) Level 2 Core Specification", A. Le
Hors, P. Le Hégaret, L. Wood, G. Nicol, J. Robie, M. Champion,
S. Byrne, eds., 13 November 2000. This W3C Recommendation is
http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113.
[DOM2STYLE]
"Document Object Model (DOM) Level 2 Style Specification", V.
Apparao, P. Le Hégaret, C. Wilson, eds., 13 November 2000. This
W3C Recommendation is
http://www.w3.org/TR/2000/REC-DOM-Level-2-Style-20001113.
[RFC2119]
"Key words for use in RFCs to Indicate Requirement Levels", S.
Bradner, March 1997.
[WCAG10]
"Web Content Accessibility Guidelines 1.0", W. Chisholm, G.
Vanderheiden, and I. Jacobs, eds., 5 May 1999. This W3C
Recommendation is
http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505.
5.2 Informative references
[ATAG10]
"Authoring Tool Accessibility Guidelines 1.0", J. Treviranus,
C. McCathieNevile, I. Jacobs, and J. Richards, eds., 3 February
2000. This W3C Recommendation is
http://www.w3.org/TR/2000/REC-ATAG10-20000203.
[ATAG10-TECHS]
"Techniques for Authoring Tool Accessibility Guidelines 1.0",
J. Treviranus, C. McCathieNevile, I. Jacobs, and J. Richards,
eds., 4 May 2000. This W3C Note is
http://www.w3.org/TR/2000/NOTE-ATAG10-TECHS-20000504/.
[CHARMOD]
"Character Model for the World Wide Web", M. Dürst and F.
Yergeau, eds., 29 November 1999. This W3C Working Draft is
http://www.w3.org/TR/1999/WD-charmod-19991129/.
[CSS1]
"CSS, level 1 Recommendation", B. Bos, H. Wium Lie, eds., 17
December 1996, revised 11 January 1999. This W3C Recommendation
is http://www.w3.org/TR/1999/REC-CSS1-19990111.
[CSS2]
"CSS, level 2 Recommendation", B. Bos, H. Wium Lie, C. Lilley,
and I. Jacobs, eds., 12 May 1998. This W3C Recommendation is
http://www.w3.org/TR/1998/REC-CSS2-19980512.
[HTML4]
"HTML 4.01 Recommendation", D. Raggett, A. Le Hors, and I.
Jacobs, eds., 24 December 1999. This W3C Recommendation is
http://www.w3.org/TR/1999/REC-html401-19991224.
[MATHML]
"Mathematical Markup Language", P. Ion and R. Miner, eds., 7
April 1998. This W3C Recommendation is
http://www.w3.org/TR/1998/REC-MathML-19980407.
[MICROPAYMENT]
"Common Markup for micropayment per-fee-links", T. Michel, ed.,
25 August 1999. This W3C Working Draft is
http://www.w3.org/TR/1999/WD-Micropayment-Markup-19990825.
[PNG]
"PNG (Portable Network Graphics) Specification 1.0", T.
Boutell, ed., 1 October 1996. This W3C Recommendation is
http://www.w3.org/TR/REC-png.
[RDF10]
"Resource Description Framework (RDF) Model and Syntax
Specification", O. Lassila, R. Swick, eds., 22 February 1999.
This W3C Recommendation is
http://www.w3.org/TR/1999/REC-rdf-syntax-19990222.
[RFC2396]
"Uniform Resource Identifiers (URI): Generic Syntax", T.
Berners-Lee, R. Fielding, L. Masinter, August 1998.
[RFC2616]
"Hypertext Transfer Protocol -- HTTP/1.1", J. Gettys, J. Mogul,
H. Frystyk, L. Masinter, P. Leach, T. Berners-Lee, June 1999.
[SMIL]
"Synchronized Multimedia Integration Language (SMIL) 1.0
Specification", P. Hoschka, ed., 15 June 1998. This W3C
Recommendation is http://www.w3.org/TR/1998/REC-smil-19980615.
[SVG]
"Scalable Vector Graphics (SVG) 1.0 Specification", J.
Ferraiolo, ed., 2 August 2000. This W3C Candidate
Recommendation is http://www.w3.org/TR/2000/CR-SVG-20000802/.
[UAAG10-CHECKLIST]
An appendix to this document lists all of the checkpoints,
sorted by priority. The checklist is available in either
tabular form or list form.
[UAAG10-TECHS]
"Techniques for User Agent Accessibility Guidelines 1.0", I.
Jacobs, J. Gunderson, E. Hansen, eds. The latest draft of the
techniques document is available at
http://www.w3.org/WAI/UA/UAAG10-TECHS/.
[UNICODE]
"The Unicode Standard, Version 3.0", The Unicode Consortium,
Reading, MA, Addison-Wesley Developers Press, 2000. ISBN
0-201-61633-5. Refer also to
http://www.unicode.org/unicode/standard/versions/.
[VOICEBROWSER]
"Voice Browsers: An introduction and glossary for the
requirements drafts", M. Robin, J. Larson, 23 December 1999.
This document is
http://www.w3.org/TR/1999/WD-voice-intro-19991223. This
document includes references to additional W3C specifications
about voice browser technology.
[W3CPROCESS]
"World Wide Web Consortium Process Document", I. Jacobs ed. The
11 November 1999 version of the Process Document is
http://www.w3.org/Consortium/Process/Process-19991111/.
[WCAG10-TECHS]
"Techniques for Web Content Accessibility Guidelines 1.0", W.
Chisholm, G. Vanderheiden, and I. Jacobs, eds. This W3C Note is
http://www.w3.org/TR/1999/WAI-WEBCONTENT-TECHS-19990505.
[WEBCHAR]
"Web Characterization Terminology and Definitions Sheet", B.
Lavoie, H. F. Nielsen, eds., 24 May 1999. This is a W3C Working
Draft that defines some terms to establish a common
understanding about key Web concepts. This W3C Working Draft is
http://www.w3.org/1999/05/WCA-terms/01.
[XHTML10]
"XHTML[tm] 1.0: The Extensible HyperText Markup Language", S.
Pemberton, et al., 26 January 2000. This W3C Recommendation is
http://www.w3.org/TR/2000/REC-xhtml1-20000126.
[XML]
"Extensible Markup Language (XML) 1.0", T. Bray, J. Paoli, C.M.
Sperberg-McQueen, eds., 10 February 1998. This W3C
Recommendation is http://www.w3.org/TR/1998/REC-xml-19980210.
6. Acknowledgments
The active participants of the User Agent Accessibility Guidelines
Working Group who authored this document were: James Allan, Denis
Anson (College Misericordia), Kitch Barnicle, Harvey Bingham, Dick
Brown (Microsoft), Al Gilman, Jon Gunderson (Chair of the Working
Group, University of Illinois, Urbana-Champaign), Eric Hansen
(Educational Testing Service), Ian Jacobs (Team Contact, W3C),
Marja-Riitta Koivunen, Tim Lacy (Microsoft), Charles McCathieNevile
(W3C), Mark Novak, David Poehlman, Mickey Quenzer, Gregory Rosmaita
(Visually Impaired Computer Users Group of New York City), Madeleine
Rothberg, and Rich Schwerdtfeger.
Many thanks to the following people who have contributed through
review and past participation in the Working Group: Paul Adelson,
Olivier Borius, Judy Brewer, Bryan Campbell, Kevin Carey, Tantek
Çelik, Wendy Chisholm, David Clark, Chetz Colwell, Wilson Craig, Nir
Dagan, Daniel Dardailler, B. K. Delong, Neal Ewers, Geoff Freed, John
Gardner, Larry Goldberg, Glen Gordon, John Grotting, Markku Hakkinen,
Earle Harrison, Chris Hasser, Kathy Hewitt, Philipp Hoschka, Masayasu
Ishikawa, Phill Jenkins, Earl Johnson, Jan Kärrman (for help with
html2ps), Leonard Kasday, George Kerscher, Peter Korn, Josh Krieger,
Catherine Laws, Greg Lowney, Susan Lesch, Scott Luebking, William
Loughborough, Napoleon Maou, Peter Meijer, Karen Moses, Masafumi
Nakane, Charles Oppermann, Mike Paciello, David Pawson, Michael
Pederson, Helen Petrie, Michael Pieper, Jan Richards, Hans Riesebos,
Joe Roeder, Lakespur L. Roca, Lloyd Rutledge, Liam Quinn, T.V. Raman,
Robert Savellis, Constantine Stephanidis, Jim Thatcher, Jutta
Treviranus, Claus Thogersen, Steve Tyler, Gregg Vanderheiden, Jaap van
Lelieveld, Jon S. von Tetzchner, Willie Walker, Ben Weiss, Evan Wies,
Chris Wilson, Henk Wittingen, and Tom Wlodkowski.