This document provides guidelines for designing user
agents that lower barriers to Web accessibility for people with
disabilities. User agents include browsers and other types of software that
retrieve and render Web content. A user agent that
conforms to these guidelines will promote
accessibility through its own user interface and through other internal
facilities, including its ability to communicate with other technologies
(especially assistive
technologies). Furthermore, all users, not just users with disabilities,
should find conforming user agents to be more usable.

In addition to helping developers of browsers and media players, this
document will also benefit developers of assistive technologies because it
explains what types of information and control an assistive technology may
expect from a conforming user agent. Technologies not addressed directly by
this document (e.g., technologies for braille rendering) will be essential to
ensuring Web access for some users with disabilities.

The "User Agent Accessibility Guidelines 2.0" (UAAG 2.0) is part
of a series of accessibility guidelines published by the W3C Web Accessibility
Initiative (WAI).

May be Superseded

This section describes the status of this document at the time of its
publication. Other documents may supersede this document. A list of current
W3C publications and
the latest revision of this technical report can be found in the W3C technical reports
index at http://www.w3.org/TR/.

Editor's Draft of UAAG 2.0

This document is the internal working draft used by the UAWG and is updated continuously and without notice. This document has no formal standing within W3C. Please consult the group's home page and the W3C technical reports index for information about the latest publications by this group.

No Endorsement

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a
draft document and may be updated, replaced or obsoleted by other documents
at any time. It is inappropriate to cite this document as other than work in
progress.

A user agent is any software that retrieves and presents Web content for
end users. Examples include Web browsers, media players, plug-ins, and other
programs, including assistive technologies, that help in retrieving, rendering,
and interacting with Web content. This document specifies requirements that,
if satisfied by user agent developers, will lower barriers
to accessibility.

Overview

Accessibility involves a wide range of disabilities, including visual,
auditory, physical, speech, cognitive, language, learning, and neurological
disabilities, as well as disabilities related to ageing. This document emphasizes
the goal of ensuring that users, including users with disabilities, have
control over their environment for accessing the Web. Key methods for
achieving that goal include:

optional self-pacing

configurability

device-independence

interoperability

direct support for both graphical and auditory output

adherence to published conventions.

Some users may have more than one disability, and the needs of different
disabilities may conflict. Thus, many of the requirements in this document
involve configuration as one way to ensure that functionality designed to
improve accessibility for one user does not interfere with accessibility for
another. A default user agent setting may be useful for one user but
interfere with accessibility for another; therefore, this document prefers
configuration requirements over requirements for default settings. For
some content, a feature required by this document may be ineffective or cause
content to be less accessible, making it imperative that the user be able to
turn off the feature. To avoid overwhelming users with an abundance of
configuration options, this document includes requirements that promote ease
of configuration and documentation of accessibility features.

This document also acknowledges the importance of author preferences;
however, it includes requirements to override certain author preferences
when the user would not otherwise be able to access that content.

Some of the requirements of this document may have security implications,
such as communication through APIs, and allowing programmatic read and write
access to content and user interface
control. This document assumes that the features it requires
will be built on top of an underlying security architecture. Consequently,
unless permitted explicitly in a success criterion, this document grants no
conformance exemptions based on security issues.

The UAWG expects that software which satisfies the requirements of this
document will be more flexible, manageable, extensible, and beneficial to all
users.

UAAG 2.0 Layers of Guidance

In order to meet the varying needs of the different audiences using UAAG,
several layers of guidance are provided including overall
principles, general guidelines, testable success
criteria, and a rich collection of sufficient techniques and
resource links.

Principles - At the top are five principles that
provide the foundation for accessible user agents. Three of the
principles are congruent with the Web Content Accessibility Guidelines
(WCAG) 2.0: perceivable, operable, and understandable. Two principles have
been added which are specific to user agents: follows
specifications and programmatic access.

Guidelines - Under the principles are guidelines.
The guidelines provide the basic goals that developers should work toward in
order to make user agents more accessible to users with different
disabilities. The guidelines are not testable, but provide the framework
and overall objectives that help developers understand the success criteria
and better implement the techniques.

Success Criteria - For each guideline, testable
success criteria are provided to allow UAAG 2.0 to be used where
requirements and conformance testing are necessary such as in design
specification, purchasing, regulation, and contractual agreements. In
order to meet the needs of different groups and different situations,
three levels of conformance are defined: A (lowest), AA, and AAA
(highest). Additional information on UAAG levels can be found in the
section on Conformance.

All of these layers of guidance (principles, guidelines, and success criteria) work together to provide guidance on
how to make user agents more accessible. Developers are encouraged to view
and apply all layers that they are able to, including the advisory
techniques, in order to best address the needs of the widest possible range
of users.

Note that even user agents that conform at the highest level (AAA) will
not be accessible to individuals with all types, degrees, or combinations of
disability, particularly in the cognitive, language, and learning areas.
Developers are encouraged to consider the full range of techniques, including
the advisory techniques, as well as to seek relevant advice about current
best practice to ensure that their user agent is accessible, as far as
possible, to this community.

UAAG 2.0 Supporting Documents

A separate document, entitled "Implementing User Agent
Accessibility Guidelines 2.0" (the "Implementing document" from here on), provides suggestions and
examples of how each success criterion might be satisfied. It also includes
references to other accessibility resources (such as platform-specific
software accessibility guidelines) that provide additional information on how
a user agent may satisfy each success criterion. The techniques in the
Implementing document are informative examples only,
and other strategies may be used or required to satisfy the success criteria.
The UAWG expects to update the Implementing document more
frequently than the current guidelines. Developers, W3C Working Groups,
users, and others are encouraged to contribute examples and resources.

Components of Web
Accessibility

Web accessibility depends not only on accessible user agents, but also on
the availability of accessible content, a factor that is greatly influenced
by the accessibility of authoring tools. For an overview of how these
components of Web development and interaction work together, see the WAI
resource "Essential Components of Web Accessibility".

1.4.1 Follow Specifications:
Render content according to the technology specification. This
includes any accessibility features of the technology (see Guideline 1.3). (Level A)

1.4.2 Handle Unrendered
Technologies: If the user agent does
not render a technology, it allows the user to choose a way to handle content
in that technology (e.g., by launching another application or by saving it to
disk). (Level A)

Applicability Note:

When a rendering requirement of another specification contradicts a
requirement of UAAG 2.0, the user agent may disregard the rendering
requirement of the other specification and still satisfy this guideline.

2.1.3 Accessible
Alternative: If a feature is not supported by the accessibility
architecture(s), provide an equivalent feature that does support the
accessibility architecture(s). Document the equivalent feature in the
conformance claim. (Level A)

2.1.4 Programmatic Availability of
DOMs: If the user agent implements one or more DOMs, they must be
made programmatically available to assistive technologies. (Level A)

2.1.5 Write Access: If the
user can modify the state or value of a piece of content through the user interface (e.g., by checking a
box or editing a text area), the same degree of write access is available
programmatically. (Level A)
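As an informative sketch of this symmetry, the following models a control whose state can be changed equally through the user interface and programmatically. The class and method names are illustrative assumptions, not part of any platform accessibility API:

```javascript
// Hypothetical sketch of 2.1.5: a checkbox whose state can be changed
// through the user interface and, with the same degree of write access,
// programmatically (e.g., by an assistive technology).
class CheckboxControl {
  constructor() {
    this.checked = false;
    this.listeners = [];
  }
  // Path 1: user interaction through the user interface (e.g., a click).
  handleClick() {
    this.setChecked(!this.checked);
  }
  // Path 2: the same write access, exposed programmatically.
  setChecked(value) {
    this.checked = Boolean(value);
    // Notify observers, e.g., a platform accessibility API (see 2.1.6 (f)).
    this.listeners.forEach(fn => fn(this.checked));
  }
  onStateChange(fn) {
    this.listeners.push(fn);
  }
}
```

Both paths funnel through the same setter, so an assistive technology can neither do less nor observe less than a mouse user.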

2.1.6 Properties: If any of
the following properties are supported by the accessibility platform
architecture, make the properties available to the accessibility platform
architecture: (Level A)

(a) the bounding dimensions and coordinates of rendered graphical
objects

(b) font family of text

(c) font size of text

(d) foreground color of text

(e) background color of text.

(f) change state/value notifications

(g) selection

(h) highlighting

(i) input device focus

2.1.7 Timely Communication:
For APIs (for non-web-based user agents) implemented to satisfy the requirements of this document, ensure
that programmatic exchanges proceed at a rate such that users do not perceive
a delay. (Level A).

PRINCIPLE 3: Perceivable - The user interface
and rendered content must be presented to users in ways they can perceive

3.1.1 Identify Presence of Alternative Content: The user can have indicators rendered along with rendered elements that have alternative content (e.g., visual icons rendered in proximity to content that has short text alternatives, long descriptions, or captions). In cases where the alternative content has different dimensions than the original content, the user has the option to specify how the layout/reflow of the document should be handled. (Level A)

3.1.2 Configurable Default
Rendering: The user has a global option to specify which types of alternative content to render by default and, in cases where the alternative content has different dimensions than the original content, how the layout/reflow of the document should be handled. (Level A)

3.1.3 Browse and Render:
The user can browse the alternatives, switch between them, and render them according to the following (Level A):

synchronized alternatives for time-based media (e.g., captions, audio descriptions, sign language) can be rendered at the same time as their associated audio tracks and visual tracks, and

non-synchronized alternatives (e.g., short text alternatives, long descriptions) can be rendered as replacements for the original rendered content.

3.1.4 Rendering Alternative
(Enhanced): Provide the user with the global option to configure a
cascade of types of alternatives to render by default, in case a preferred
type is unavailable. If the alternative content has a different height or
width, then the user agent will reflow the viewport. (Level AA)
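A cascade of this kind amounts to a preference lookup: take the user's ordered list of alternative content types and pick the first one available for a given item. The type names below are illustrative only:

```javascript
// Informative sketch of 3.1.4: choose the first alternative content type
// from the user's configured cascade that is actually available.
function chooseAlternative(available, cascade) {
  for (const type of cascade) {
    if (type in available) {
      return { type, content: available[type] };
    }
  }
  return null; // nothing in the cascade is available: keep the original
}
```

For an image offering only a short text alternative, a cascade that prefers long descriptions would fall through to the short text.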

3.6.1 Configure Text: The
user can globally set the following
characteristics of visually rendered text content, overriding any specified by the author or user agent defaults (Level A):

(a) text scale (i.e., the general size
of text) ,

(b) font family, and

(c) text color (i.e., foreground and
background).
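The required precedence (user settings over author values over user agent defaults) can be sketched as a simple merge; the property names are illustrative assumptions:

```javascript
// Informative sketch of 3.6.1: compute effective text characteristics,
// with global user settings overriding author-specified values, which in
// turn override user agent defaults.
function effectiveTextStyle(uaDefaults, authorStyle, userSettings) {
  // Later spreads win, so user settings take precedence over the rest.
  return { ...uaDefaults, ...authorStyle, ...userSettings };
}
```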

3.6.2 Preserve
Distinctions: The user has the ability to preserve distinctions in the size of rendered text when that text is rescaled (e.g., headers continue to be larger than body text) within absolute limitations imposed by the platform. (Level A)
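One way to preserve distinctions is to scale all sizes uniformly, shrinking the whole set again if the largest size exceeds what the platform can render. This is a sketch under that assumption, not the only conforming approach:

```javascript
// Informative sketch of 3.6.2: rescale a set of text sizes by a
// user-chosen factor while preserving the author's relative distinctions
// (headings stay larger than body text), within a platform maximum.
function rescaleSizes(sizes, factor, platformMax) {
  const scaled = sizes.map(s => s * factor);
  const largest = Math.max(...scaled);
  // If the largest size exceeds the platform limit, shrink everything
  // uniformly so the relative distinctions survive.
  const fit = largest > platformMax ? platformMax / largest : 1;
  return scaled.map(s => s * fit);
}
```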

3.6.3 Option Range: The
range of options for each text characteristic includes at least (Level A):

(a) the range offered by global preference settings supported by the operating environment (i.e., configured through the Control Panel or System utility),
or

(b) if no such utility is available,
the range supported by the conventional APIs of
the operating environment for drawing text.

3.8.3 Advanced Speech Characteristics: The
user can set all of the speech characteristics offered by the speech
synthesizer, according to the full range of values available, overriding any values specified by the
author. (Level AAA)

3.8.4 Speech Features: The
following speech features are provided (Level AA):

(a) user-defined extensions to the
synthesized speech dictionary,

(b) "spell-out", where text is spelled
one character at a time, or according to language-dependent pronunciation
rules,

(c) at least two ways of speaking numerals:
one where numerals are spoken as individual digits and punctuation (e.g. 'one two zero three point five'
for 1203.5 or 'one comma two zero three point five' for 1,203.5), and
one where full numbers are spoken (e.g. 'one thousand, two hundred
and three point five').

(d) at least two ways of speaking
punctuation: one where punctuation is spoken literally, and one
where punctuation is rendered as natural pauses.
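The "individual digits and punctuation" mode from (c) can be sketched in a few lines; the full-number mode would require a conventional number-to-words routine and is omitted here:

```javascript
// Informative sketch of 3.8.4 (c), digits mode: speak a numeral one
// character at a time, with punctuation spoken literally.
const DIGIT_NAMES = ["zero", "one", "two", "three", "four",
                     "five", "six", "seven", "eight", "nine"];
function speakAsDigits(numeral) {
  return [...numeral].map(ch => {
    if (ch === ".") return "point";
    if (ch === ",") return "comma";
    return DIGIT_NAMES[Number(ch)];
  }).join(" ");
}
```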

Guideline 3.10 Help user to use and orient within
viewports.

3.10.1 Highlight Viewport:
The viewport with the current focus (including nested viewports and their containers) is highlighted, and the user can customize attributes of the highlight mechanism, including, but not limited to, shape, size, stroke width, color, and blink rate (if any). (Level A)

3.10.2 Move Viewport to Selection and Focus: When a viewport's selection or content focus changes, the viewport moves as
necessary to ensure that the new selection or content focus location is at least partially in the viewport. (Level A)
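In one dimension, "moves as necessary" reduces to scrolling the minimum distance that brings the focus at least partially into view. The following is an informative sketch of that calculation:

```javascript
// Informative sketch of 3.10.2 in one dimension: return the new viewport
// start position so that the focus location becomes at least partially
// visible, scrolling the minimum distance necessary.
function scrollToFocus(viewportStart, viewportSize, focusStart, focusSize) {
  const viewportEnd = viewportStart + viewportSize;
  const focusEnd = focusStart + focusSize;
  if (focusEnd > viewportStart && focusStart < viewportEnd) {
    return viewportStart; // already at least partially in the viewport
  }
  if (focusStart >= viewportEnd) {
    // Focus is past the viewport: scroll forward just far enough.
    return Math.min(focusStart, focusEnd - viewportSize);
  }
  // Focus is before the viewport: scroll back to its start.
  return focusStart;
}
```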

3.10.3 @@Editor's Note: Merged with 3.10.2. Renumber.@@

3.10.4 Resizable: The user has the option to make graphical viewports
resizable, within the limits of the display, overriding any values
specified by the author. (Level A)

3.10.5 Scrollbars:
Graphical viewports include scrollbars if the rendered content
(including after user preferences have been applied) extends beyond the
viewport dimensions, overriding any values specified by the
author. (Level A)

3.10.6 Viewport History: If
the user agent maintains a viewport history mechanism (e.g., via the "back
button") that stores previous "viable" states (i.e., that have not been
negated by the content, user agent settings or user agent extensions), it
maintains information about the point of
regard and it restores the saved values when the user returns to a state
in the history. (Level A)

3.10.7 Open on Request: The user has the option of having "top-level" viewports (e.g., windows) only open on explicit user request. In this
mode, instead of opening a viewport automatically, notify the user and allow
the user to open it with an explicit request (e.g., by confirming a prompt or
following a link generated by the user agent). (Level AA)

3.10.8 Do Not Take Focus:
When configured to allow "top-level" viewports to open without
explicit user request, the user has the option that if a "top-level"
viewport opens, neither its content focus nor its user interface focus
automatically becomes the current focus. (Level AA)

3.10.9 Stay on Top: The user has the option of having the viewport with the
current focus remain "on top" of all other viewports with which it overlaps.
(Level AA)

3.10.10 Close Viewport: The
user can close any "top-level" viewport. (Level AA)

3.10.11 Same UI: The user has the option of having all "top-level"
viewports follow the same user interface configuration as the current or
spawning viewport. (Level AA)

3.10.12 Indicate Viewport Position:
Indicate the viewport's position relative to rendered
content (e.g., the proportion along an audio or video timeline, the
proportion of a Web page before the current position). (Level AAA)

3.12.2 Outline View: An
"outline" view of rendered content is provided,
composed of labels for important structural elements (e.g., heading text,
table titles, form titles, and other labels that are part of the content).
(Level AA)

Note: What constitutes a label is defined by each markup
language specification. For example, in HTML, a heading
(H1-H6) is a label for the section that follows it,
a CAPTION is a label for a table, and the title attribute is a label for its element.

3.12.3 Configure Set of Important
Elements: The user has the option to configure the set of important elements for the "outline" view,
including by element type (e.g., headers). (Level AAA)

PRINCIPLE 4. Ensure that the user interface is
operable

4.1.1 Keyboard Operation: All
functionality can be operated via the keyboard using sequential or direct
keyboard commands that do not require specific timings for individual
keystrokes, except where the underlying function requires input that depends
on the path of the user's movement and not just the endpoints (e.g., free
hand drawing). This does not forbid and should not discourage providing mouse
input or other input methods in addition to keyboard operation. (Level A)

(a) in the UI: if keyboard focus can be
moved to a component using the keyboard, then focus can be moved away
from that component using standard sequential keyboard commands (e.g.,
TAB key)

(b) in the rendered content: provides a
documented direct keyboard command that will always restore keyboard
focus to a known location (e.g., the address bar).

(c) in the rendered content: provides a
documented direct keyboard command that will always move keyboard focus
to a subsequent focusable element

4.1.4 Separate Selection from
Activation: The user has the option to
have selection separate from activation (e.g., navigating through a set of radio buttons without changing
which is the active/selected option). (Level A)

4.1.6 Present Direct Commands in Rendered Content: The user has the option to have any recognized direct commands (e.g. accesskey) in rendered content be presented with their associated elements (e.g. "[Ctrl+t]" displayed after a link whose accesskey value is "t", or an audio browser reading the value or label of a form control followed by "accesskey control plus t"). (Level A)
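Formatting such an indicator is straightforward; the sketch below assumes "Ctrl" as the accesskey modifier, though the actual modifier varies by browser and platform:

```javascript
// Informative sketch of 4.1.6: format a recognized accesskey for
// rendering next to its associated element, e.g. "[Ctrl+t]" after a
// link. The "Ctrl" modifier is an assumption for illustration.
function accesskeyLabel(element, modifier = "Ctrl") {
  return element.accesskey ? `[${modifier}+${element.accesskey}]` : "";
}
```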

4.1.7 Present Direct Commands in User Interface: The user has the option to have any direct commands (e.g. keyboard shortcuts) in the user agent user interface be presented with their associated user interface controls (e.g. "Ctrl+S" displayed on the "Save" menu item and toolbar button). (Level AA)

4.1.8 Keyboard Navigation:
The user can use the keyboard to navigate from group to group of focusable
items and to traverse forwards and backwards all of the focusable elements within each group. Groups include, but are not limited to, toolbars, panels,
and user agent extensions. (Level AA)

4.1.9 Important Command Functions: Important command functions (e.g. related to navigation, display, content, information management, etc.) are available using a single or sequence of keystrokes or key combinations. (Level AA)

4.1.10 Override of UI Keyboard Commands:
The user can override any keyboard shortcut binding for the user agent user
interface except for conventional bindings for the operating environment
(e.g., for access to help). The rebinding options must include single-key and
key-plus-modifier keys if available in the operating environment. (Level
AA)
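The override logic can be sketched as a guarded rebinding; the reserved set and binding table below are illustrative assumptions, not a normative list:

```javascript
// Informative sketch of 4.1.10: rebind a user-interface shortcut while
// refusing keys reserved by operating-environment conventions.
const RESERVED_KEYS = new Set(["F1"]); // e.g., conventional help key

function overrideShortcut(bindings, command, newKey) {
  if (RESERVED_KEYS.has(newKey)) {
    return false; // conventional bindings cannot be overridden
  }
  bindings[command] = newKey; // single key or key-plus-modifier
  return true;
}
```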

4.1.11 User Override of Accesskeys: The
user can override any recognized author-supplied content keybinding (i.e., access key). The user must have an option to save the
override of user interface keyboard shortcuts so that the rebinding persists
beyond the current session. (Level AA)

4.4.1 Three Flashes or Below Threshold: In its default configuration, the user agent does not display any user interface components or recognized content that flashes more than three times in any one second period, unless the flash is below the general flash and red flash thresholds. (Level A)

4.4.2 Three Flashes: In its default configuration, the user agent does not display any user interface components or recognized content that flashes more than three times in any one second period (regardless of whether or not the flash is below the general flash and red flash thresholds). (Level AAA) [WCAG 2.0]
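The counting part of both criteria reduces to a sliding-window check over flash onset times. This sketch covers only the count; the general and red flash thresholds of 4.4.1 involve a separate luminance-based analysis defined by WCAG 2.0 and are not modeled here:

```javascript
// Informative sketch of the 4.4 flash limit: given flash onset times in
// seconds, report whether any one-second period contains more than
// three flashes.
function exceedsThreeFlashes(timestamps) {
  const t = [...timestamps].sort((a, b) => a - b);
  for (let i = 0; i + 3 < t.length; i++) {
    // A fourth flash within one second of the i-th breaks the limit.
    if (t[i + 3] - t[i] < 1.0) return true;
  }
  return false;
}
```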

4.6.1 Find: The user can perform a search within rendered content (e.g., not hidden with a style), including text alternatives, for any sequence of characters from the document character set. (Level A)

4.6.2 Find Direction: The user has the option of searching forward or backward from the focused location in content. The user is notified of changes in search direction, and when the search reaches the upper or lower extent of the content based on the search direction. (Level A)

4.6.3 Match Found: When there is a match, the user is alerted and the viewport moves so that the matched text content is at least partially within it. The user can search for the next instance of the text from the location of the match. (Level A)

4.6.4 Alert on No Match: The user is
notified when there is no match or after the last match in content (i.e.,
prior to starting the search over from the beginning of content). (Level
A)
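Taken together, 4.6.1-4.6.4 describe a directional search that either reports a match location or notifies the user that the content extent was reached. An informative sketch over plain text:

```javascript
// Informative sketch of 4.6.1-4.6.4: search text forward or backward
// from a focus position, reporting the match location or that no match
// exists before the relevant extent of the content.
function find(text, query, from, direction) {
  const index = direction === "backward"
    ? text.lastIndexOf(query, from)
    : text.indexOf(query, from);
  if (index === -1) {
    return { match: false, notify: "no match before content extent" };
  }
  return { match: true, start: index, end: index + query.length };
}
```

A real user agent would search the rendered content model, including text alternatives, rather than a flat string.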

4.6.5 Advanced Find: The user agent provides an accessible advanced
search facility, with case-sensitive and case-insensitive search
options, and the ability for the user to perform a search within all
content (including hidden content and captioning) for text and text
alternatives, for any sequence of characters from the document character
set. (Level AA)

4.7.3 Access Relationships: Provide access to explicitly-defined relationships based on the
user's position in content (e.g., show form control's label, show label's
form control, show a cell's table headers, etc.). (Level A)

4.7.4 Location in
Hierarchy: The user can view the path of nodes leading
from the root of any content hierarchy in which the structure and
semantics are implied by presentation, as opposed to an explicit logical
structure with defined semantics (such as the HTML5 Canvas Element), or
as a consequence of decentralized-extensibility (such as the HTML5 item
/ itemprop microdata elements), and only if the user agent keeps an
internal model of the hierarchy which it does not expose via the DOM or
some other accessibility mechanism. (Level A).

Editors' Note: Success Criteria from 3.3 have been
moved to 4.9. SC 3.3.3 has been moved to 5.1

4.7.5 Direct Activation: Direct movement to and activation of any operable
elements in rendered content is provided. (Level AA)

4.7.6 Configure Set of Important Elements: The user has the option to
configure the set of important elements for structured navigation, including
by element type (e.g., headers, list items, images). (Level AAA) @@Editor's
note: Review the definition of "important elements" @@

Guideline 4.9 Provide control of
content that may reduce accessibility.

4.9.2 Time-Based Media
Load-Only: The user has the option to
load time-based media content @@DEFINE@@
such that the first frame is displayed (if video), but the content is not
played until explicit user request. (Level
A)

4.9.3 Execution
Placeholder: The user has the option to
render a placeholder instead of executable
content that would normally be contained within an on-screen area (e.g.,
Applet, Flash), until explicit user request to
execute. (Level A)

4.9.4 Execution Toggle: The
user has the option to turn on/off the execution
of executable content that would not normally be contained within a
particular area (e.g., Javascript). (Level A)

4.9.5 Playback Rate Adjustment for Prerecorded Content: The user can adjust the playback rate of prerecorded time-based media content, such that all of the following are true (Level A):

The user can adjust the playback rate of the time-based media tracks to between 50% and 250% of real time.

Speech whose playback rate has been adjusted by the user maintains pitch in order to limit degradation of the speech quality.

Audio and video tracks remain synchronized across this required range of playback rates.

The user agent provides a function that resets the playback rate to normal (100%).
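For HTML media, these four conditions map closely onto the standard HTMLMediaElement interface. The following is an informative sketch, not a complete implementation; pitch preservation and track synchronization are handled by the user agent itself:

```javascript
// Informative sketch of 4.9.5 against the standard HTMLMediaElement
// interface: clamp the requested rate to the required 50%-250% range,
// keep pitch, and provide a reset to normal speed.
function applyPlaybackRate(media, requestedRate) {
  const rate = Math.min(2.5, Math.max(0.5, requestedRate));
  media.preservesPitch = true; // limit degradation of speech quality
  media.playbackRate = rate;   // the UA keeps audio/video synchronized
  return rate;
}

function resetPlaybackRate(media) {
  return applyPlaybackRate(media, 1.0); // back to normal (100%)
}
```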

4.9.6 Stop/Pause/Resume
Multimedia: The user can stop, pause, and resume rendered audio and
animation content (including video and
animated images) that last three or more seconds at their default playback
rate. (Level A)

4.9.6 Navigate Multimedia: The user can navigate along the timebase using a continuous scale, and by relative time units within rendered audio and animations (including video and animated images) that last three or more seconds at their default playback rate. (Level A)

4.9.7 Semantic Navigation of Time-Based Media: The user can navigate by semantic structure within the time-based media, such as by chapters or scenes, if present in the media. (Level AA)

4.9.8 Track Enable/Disable of Time-Based Media: During time-based media playback, the user can determine which tracks are available and select or deselect tracks. These selections may override global default settings for captions, audio descriptions, etc.

4.9.9 Sizing Playback Viewport: The user can adjust the size of the time-based media up to the full height or width of the containing viewport, with the ability to preserve aspect ratio and to adjust the size of the playback viewport to avoid cropping, within the scaling limitations imposed by the media itself. (Level AA)

4.9.10 Scale and Position Alternative Media Tracks: The user can scale and position alternative media tracks independently of the base video. (Level AAA)

4.9.11 Adjust Playback Contrast and Brightness: The user can control the contrast and brightness of the content within the playback viewport.

Applicability Notes:

The guideline only applies to images, animations, video, audio, etc. that
the user agent can recognize.

If the browser plays the video natively, there is only one user agent, and it falls to the browser to meet the UAAG requirements.

If the author embeds Windows Media Player inside the video element, the browser needs to map its native controls to the embedded player's controls, and provide access to all of the controls.

The user needs to be able to define rendering parameters of playback at render time.

Principle 5: Ensure that user interface is
understandable

5.1.1 Option to Ignore: The
user has the option to turn off rendering of
non-essential or low priority text messages or updating/changing information in the content based on priority properties
defined by the author (e.g., ignoring updating content
marked "polite" ). (Level AA)
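Filtering on an author-supplied priority property can be sketched as follows; the priority vocabulary here borrows the WAI-ARIA live region politeness values as an illustrative assumption:

```javascript
// Informative sketch of 5.1.1: when the user opts to ignore low-priority
// updates, drop recognized messages whose author-supplied priority is
// not "assertive" (e.g., those marked "polite").
function filterUpdates(updates, ignoreLowPriority) {
  if (!ignoreLowPriority) return updates;
  return updates.filter(u => u.priority === "assertive");
}
```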

Note 1: Although conformance can only be achieved at the stated levels,
developers are encouraged to report (in their claim) any progress toward
meeting success criteria from all levels beyond the achieved level of
conformance.

Conformance Claims (Optional)

If a conformance claim is made, then the conformance claim must meet the
following conditions and include the following information (user agents
can conform to UAAG 2.0 without making a claim):

Conditions on Conformance Claims

At least one version of the conformance claim must be published on the
web as a document meeting level "A" of WCAG 2.0. A suggested metadata
description for this document is "UAAG 2.0 Conformance Claim".

Whenever the claimed conformance level is published (e.g., product
information web site), the URI for the on-line published version of the
conformance claim must be included.

The existence of a conformance claim does not imply that the W3C has
reviewed the claim or assured its validity.

Claimants may be anyone (e.g., user agent developers, journalists, other
third parties).

Claimants are solely responsible for the accuracy of their claims
(including claims that include products for which they are not
responsible) and keeping claims up to date.

Claimants are encouraged to claim conformance to the most recent version
of the User Agent Accessibility Guidelines Recommendation.

Required Components of an UAAG 2.0 Conformance Claim

Claimant name and affiliation.

Date of the claim.

Conformance level satisfied.

User agent information: The name of the user agent and sufficient
additional information to specify the version (e.g., vendor name,
version number (or version range), required patches or updates, human
language of the user interface or documentation).
Note: If the user agent is a collection of software components (e.g., a
browser and extensions or plug-ins), then the name and version information must be provided
separately for each component, although the conformance claim will treat
them as a whole. As stated above, the Claimant has sole responsibility
for the conformance claim, not the developer of any of the software
components.

Included Technologies: A list of the web content technologies
(including version numbers) rendered by the user agent that the Claimant
is including in the conformance claim. By including a web content
technology, the Claimant is claiming that the user agent meets the
requirements of UAAG 2.0 during the rendering of web content using that
web content technology.

Note 1: Web content technologies may be a combination of constituent web
content technologies. For example, an image technology (e.g., PNG) might
be listed together with a markup technology (e.g., HTML) since web
content in the markup technology is used to make web content in the image
technology accessible (e.g., a PNG graph is made accessible using an
HTML table).

Excluded Technologies: A list of any web content technologies produced
by the user agent that the Claimant is excluding from the
conformance claim. The user agent is not required to meet the
requirements of UAAG 2.0 during the production of the web content
technologies on this list.

Declarations: For each success criterion:
A declaration of whether or not the success criterion has been
satisfied; or
A declaration that the success criterion is not applicable and a
rationale for why not.

Platform(s): The platform(s) upon which the user agent was evaluated:
For user agent platform(s) (used to evaluate web-based user agent user
interfaces): provide the name and version information of the user agent(s).
For platforms that are not user agents (used to evaluate non-web-based
user agent user interfaces) provide: The name and version information of
the platform(s) (e.g., operating system, etc.) and the name and
version of the platform accessibility architecture(s) employed.

Optional Components of an UAAG 2.0 Conformance Claim

A description of how the UAAG 2.0 success criteria were met where this
may not be obvious.

"Progress Towards Conformance" Statement

Developers of user agents that do not yet conform fully to a particular
UAAG 2.0 conformance level are encouraged to publish a statement on
progress towards conformance. This statement would be the same as a
conformance claim except that this statement would specify an UAAG 2.0
conformance level that is being progressed towards, rather than one
already satisfied, and report the progress on success criteria not yet
met. The author of a "Progress Towards Conformance" Statement is solely
responsible for the accuracy of their statement. Developers are
encouraged to provide expected timelines for meeting outstanding success
criteria within the Statement.

Disclaimer

Neither W3C, WAI, nor the UAWG takes any responsibility for any aspect
or result of any UAAG 2.0 conformance claim that has not been published
under the authority of the W3C, WAI, or UAWG.

Content that is used in place of other content that a person may not be able to access. Alternative content fulfills essentially the same function or purpose as the original content. Examples include text alternatives for non-text content, captions for audio, audio descriptions for video, sign language for audio, media alternatives for time-based media. See WCAG for more information.

alternative content
stack:

The set of alternative content items for a
given position in content. The items may be mutually exclusive (e.g.,
regular contrast graphic vs. high contrast graphic) or non-exclusive
(e.g., caption track that can play at the same time as a sound
track).

Graphical content that is rendered such that it can automatically change over time, potentially giving the user a visual perception of movement. Examples include video, animated images, scrolling text, programmatic animation (e.g., moving or replacing rendered objects).

relies on services (such as retrieving Web
resources and parsing markup) provided by one or more other
"host" user agents. Assistive technologies communicate data and
messages with host user agents by using and monitoring APIs.

provides services beyond those offered by the host user agents to
meet the requirements of users with disabilities. Additional
services include alternative renderings (e.g., as synthesized
speech or magnified content), alternative input methods (e.g.,
voice), additional navigation or orientation mechanisms, and
content transformations (e.g., to make tables more accessible).

Examples of assistive technologies that are important in the context
of this document include the following:

screen magnifiers, which are used by people with visual
disabilities to enlarge and change colors on the screen to improve
the visual readability of rendered text and images.

screen readers, which are used by people who are blind or have
reading disabilities to read textual information through
synthesized speech or braille displays.

voice recognition software, which may be used by people who have
some physical disabilities.

alternative keyboards, which are used by people with certain
physical disabilities to simulate the keyboard.

alternative pointing devices, which are used by people with
certain physical disabilities to simulate mouse pointing and button
activations.

Beyond this document, assistive technologies consist
of software or hardware that has been specifically designed to assist
people with disabilities in carrying out daily activities. These
technologies include wheelchairs, reading machines, devices for
grasping, text telephones, and vibrating pagers. For example, the
following very general definition of "assistive technology device"
comes from the (U.S.) Assistive Technology Act of 1998 [AT1998]:

Any item, piece of equipment, or product system, whether acquired
commercially, modified, or customized, that is used to increase,
maintain, or improve functional capabilities of individuals with
disabilities.

The technology of sound reproduction. Audio can be created synthetically (including speech synthesis), streamed from a live source (such as a radio broadcast), or recorded from real world sounds.

audio description - also called
described video, video description and descriptive narration

An equivalent alternative that takes the form of narration added to
the audio to describe important visual details
that cannot be understood from the main soundtrack alone. Audio
description of video provides information about actions, characters,
scene changes, on-screen text, and other visual content. In standard
audio description, narration is added during existing pauses in
dialogue. In extended audio
description, the video is paused so that there is time to add
additional description.

authors

The people who have worked either alone or collaboratively to create
the content (includes content authors, designers, programmers,
publishers, testers, etc.).

An equivalent alternative that takes the form of text presented and synchronized with time-based media to provide not only the speech, but also non-speech information conveyed through sound, including meaningful sound effects and identification of speakers. In some
countries, the term "subtitle" is used to refer to dialogue only and
"captions" is used as the term for dialogue plus sounds and speaker
identification. In other countries, "subtitle" (or its translation) is
used to refer to both. Open captions are captions that are
always rendered with a visual track; they cannot be turned off.
Closed captions are captions that may be turned on and off.
The captions requirements of this document assume that the user agent
can recognize the captions as such. Note: Other terms that include the word "caption" may
have different meanings in this document. For instance, a "table
caption" is a title for the table, often positioned graphically above
or below the table. In this document, the intended meaning of "caption"
will be clear from context.

A collated text transcript is a text equivalent of a movie or
other animation. More specifically, it is the combination of the text transcript of the audio track and the text equivalent
of the visual track. For example, a
collated text transcript typically includes segments of spoken dialogue
interspersed with text descriptions of the key visual elements of a
presentation (actions, body language, graphics, and scene changes). See
also the definitions of text
transcript and audio description. Collated
text transcripts are essential for individuals who are deaf-blind.

Information and sensory experience to be communicated to the user by means of a user agent, including code or markup that defines the content's structure, presentation, and interactions [adapted from WCAG 2.0]

empty
content (which may be alternative content) is
either a null value or an empty string (i.e., one that is zero
characters long). For instance, in HTML, alt="" sets the
value of the alt attribute to the empty string. In some
markup languages, an element may have empty content (e.g., the
HR element in HTML).
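As an illustrative sketch, the two forms of empty content mentioned above might be written in HTML as follows (the file name is hypothetical):

```html
<!-- Empty alternative content: the alt attribute is set to the empty string -->
<img src="spacer.png" alt="">

<!-- An element defined to have empty content: HR contains nothing -->
<hr>
```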

The Document Object Model is a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. The document can be further processed and the results of that processing can be incorporated back into the presented page. This is an overview of DOM-related materials here at W3C and around the web:
http://www.w3.org/DOM/#what.

Any information that supports the use of a user agent. This information may be found, for example, in manuals, installation instructions, the help system, and tutorials. Documentation may be distributed in several forms (e.g., some files installed as part of the software, some parts delivered on CD-ROM, others on the Web). See guideline 5.3 for information about
documentation.

This document uses the terms "element" and "element
type" primarily in the sense employed by the XML 1.0 specification
([XML], section 3): an element
type is a syntactic construct of a document type definition (DTD) for
its application. This sense is also relevant to structures defined by
XML schemas. The document also uses the term "element" more generally
to mean a type of content (such as video or sound) or a logical
construct (such as a header or list).

An element with associated behaviors that can be activated through the user interface or through an API. The set of elements that a user agent enables is generally derived from, but is not limited to, the set of elements defined by implemented markup languages. A disabled element is a potentially enabled element that is not currently available for activation (e.g., a "grayed out" menu item).

Content that is an acceptable substitute for other content that a person may not be able to access. An equivalent alternative fulfills essentially the same function or purpose as the original content upon presentation:

text alternative [WCAG 2.0]: text that is available via the operating environment that is used in place of non-text content (e.g., text equivalents for images, text transcripts for audio tracks, or collated text transcripts for a movie).

full text alternative for synchronized media including any interaction [WCAG 2.0]: document including correctly sequenced text descriptions of all visual settings, actions, speakers, and non-speech sounds, and transcript of all dialogue combined with a means of achieving any outcomes that are achieved using interaction (if any) during the synchronized media.

User agents often perform a task when an event
having a particular "event type" occurs, including user interface
events, changes to content, loading of content, and requests from the
operating environment.
Some markup languages allow authors to specify that a script, called an
event
handler, be executed when an event of a given type occurs. An
event handler is explicitly associated with an
element through scripting, markup or the DOM.
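For example, the same association between an element and an event handler can be expressed either in markup or through scripting and the DOM (a minimal sketch; the element content and handler behavior are illustrative only):

```html
<!-- Event handler associated through markup -->
<button onclick="alert('Activated')">Activate</button>

<!-- Event handler associated through scripting and the DOM -->
<button id="go">Activate</button>
<script>
  document.getElementById("go")
          .addEventListener("click", function () { alert("Activated"); });
</script>
```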

Some examples of explicit user requests include when the user selects "New viewport," responds "yes" to a prompt in the user agent's user interface, configures the user agent to behave in a certain way, or changes the selection or focus with the keyboard or pointing device.

Note: Users can make errors when interacting with the user agent. For example, a user may inadvertently respond "yes" to a prompt instead of "no." In this document, this type of error is still considered an explicit user request.

The input focus location in the active viewport. The active input focus is in the active viewport, while the inactive input focus is in an inactive viewport. The active input focus is usually visibly indicated. In this document "active input focus" generally refers to the active keyboard input focus. @@ Editors' Note: this term is not used in the document other than the glossary.@@

active selection

The selection that will currently be affected by a user command, as opposed to selections in other viewports, called inactive selections, which would not currently be affected by a user command. @@ Editors' Note: this term is not used in the document other than the glossary.@@

cursor

Visual indicator showing where keyboard input will occur. There are two types of cursors: focus cursor (e.g. the dotted line around a button) and text cursor (e.g. the flashing vertical bar in a text field, also called a 'caret'). Cursors are active when in the active viewport, and inactive when in an inactive viewport.

focus cursor

Indicator that highlights a user interface element to show that it has keyboard focus, e.g. a dotted line around a button, or brightened title bar on a window. There are two types of cursors: focus cursor (e.g. the dotted line around a button) and text cursor (e.g. the flashing vertical bar in a text field).

focusable element

Any element capable of having input focus, e.g. link, text box, or menu item. In order to be accessible and fully usable, every focusable element should take keyboard focus, and ideally would also take pointer focus.

highlight, highlighted, highlighting

Emphasis indicated through the user interface. For example, user agents highlight content that is selected, focused, or matched by a search operation. Graphical highlight mechanisms include dotted boxes, changed colors or fonts, underlining, magnification, and reverse video. Synthesized speech highlight mechanisms include alterations of voice pitch and volume ("speech prosody"). User interface items may also be highlighted, for example a specific set of foreground and background colors for the title bar of the active window. Note that content that is highlighted may or may not be a selection.

inactive input focus

An input focus location in an inactive viewport such as a background window or pane. The inactive input focus location will become the active input focus location when input focus returns to that viewport. An inactive input focus may or may not be visibly indicated.

inactive selection

A selection that does not have the input focus and thus does not take input events.

input focus

The place where input will occur if a viewport is active. Examples include keyboard focus and pointing device focus. Input focus can also be active (in the active viewport) or inactive (in an inactive viewport).

keyboard focus

The screen location where keyboard input will occur if a viewport is active. Keyboard focus can be active (in the active viewport) or inactive (in an inactive viewport).

pointer

Visual indicator showing where pointing device input will occur. The indicator can be moved with a pointing device or emulator such as a mouse, pen tablet, keyboard-based mouse emulator, speech-based mouse commands, or 3-D wand. A pointing device click typically moves the input focus to the pointer location. The indicator may change to reflect different states. NOTE: When touch screens are used, the "pointing device" is a combination of the touch screen and the user's finger or stylus. On most systems there is no pointer (on-screen visual indication) associated with this type of pointing device.

pointing device focus

The screen location where pointer input will occur if a viewport is active. There can be multiple pointing device foci, for example when using a screen sharing utility there is typically one for the user's physical mouse and one for the remote mouse. @@ Editors' Note: this term is not used in the document other than the glossary.@@

selection

A user agent mechanism for identifying a (possibly empty) range of content that will be the implicit source or target for subsequent operations. The selection may be used for a variety of purposes, including for cut and paste operations, to designate a specific element in a document for the purposes of a query, and as an indication of point of regard, e.g. the matched results of a search may be automatically selected. The selection should be highlighted in a distinctive manner. On the screen, the selection may be highlighted in a variety of ways, including through colors, fonts, graphics, and magnification. When rendered using synthesized speech, the selection may be highlighted through changes in pitch, speed, or prosody.

split focus

A state when the user could be confused because the input focus is separated from something it is usually linked to, such as being at a different place than the selection or similar highlighting, or has been scrolled outside of the visible portion of the viewport. @@ Editors' Note: this term is not used in the document other than the glossary.@@

text cursor

Indicator showing where keyboard input will occur in text (e.g. the flashing vertical bar in a text field, also called a caret).

content focus, user interface focus, current
focus @@Editor's Note: Need to find the hrefs to these definitions and fix them. @@

This specification intentionally does not identify
which "important elements" must be navigable as this will vary by
specification. What constitutes "efficient navigation" may depend on a
number of factors as well, including the "shape" of content (e.g.,
sequential navigation of long lists is not efficient) and desired
granularity (e.g., among tables, then among the cells of a given
table). Refer to the Implementing document [Implementing UAAG 2.0] for information
about identifying and navigating important elements. @@ Ed Note: Update links @@

The set of "bindings"
between user agent functionalities and user
interface input mechanisms (e.g., menus, buttons, keyboard keys,
and voice commands). The default input configuration is the set of
bindings the user finds after installation of the software. Input
configurations may be affected by author-specified bindings (e.g.,
through the accesskey attribute of HTML 4 [HTML4]).
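For instance, an author-specified binding in HTML 4 might look like the following sketch (the key choice and control names are illustrative; how the key is mapped, e.g. as Alt+S, depends on the platform and user agent):

```html
<!-- The author binds the access key "s" to the search field;
     the user agent incorporates it into its input configuration -->
<label for="q" accesskey="s">Search</label>
<input type="text" id="q" name="q">
```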

Direct Commands (also called keyboard shortcuts or accelerator keys) are those tied to particular UI controls or application functions, allowing the user to navigate to or activate them without traversing any intervening controls (e.g., "ctrl"+"S" to save a document). It is sometimes useful to distinguish direct commands that are associated with controls that are rendered in the current context (e.g., "alt"+"D" to move focus to the address bar) from those that may be able to activate program functionality that is not associated with any currently rendered controls (e.g., "F1" to open the Help system). Direct commands help users accelerate their selections.

What is identified as "normative" is required for conformance (noting that one may conform in a
variety of well-defined ways to this document). What is identified as
"informative" (sometimes, "non-normative") is never required for
conformance.

To make the user aware of events or status changes. Notifications can occur within the user agent user interface (e.g., status bar) or within the content display. Notifications may be passive and not require user acknowledgment, or they may be presented in the form of a prompt requesting a user response (e.g., a confirmation dialog).

In this document, the term "override" means that one
configuration or behavior preference prevails over another. Generally,
the requirements of this document involve user preferences prevailing
over author preferences and user agent default settings and behaviors.
Preferences may be multi-valued in general (e.g., the user prefers blue
over red or yellow), and include the special case of two values (e.g.,
turn on or off blinking text content).

A placeholder is content generated by the user agent
to replace author-supplied content. A placeholder may be generated as
the result of a user preference (e.g., to not render images) or as repair content (e.g., when an
image cannot be found). Placeholders can be any type of content,
including text, images, and audio cues. A placeholder should identify
the technology of the object whose place it holds. Placeholders appear
in the alternative content stack.

A programmatic interface that is specifically engineered to enhance
communication between mainstream software applications and assistive
technologies (e.g., MSAA, UI Automation, and IAccessible2 for Windows applications, AXAPI for MacOSX applications, Gnome Accessibility Toolkit API for Gnome applications, Java Access for Java applications, etc.). On some platforms it may be conventional to enhance
communication further via implementing a DOM.

The point of regard is a position in rendered content that the user
is presumed to be viewing. The dimensions of the point of regard may
vary. For example, it may be a point (e.g., a moment during an audio
rendering or a cursor position in a graphical rendering), or a range of
text (e.g., focused text), or a two-dimensional area (e.g., content
rendered through a two-dimensional graphical viewport). The point of
regard is almost always within the viewport, but it may exceed the
spatial or temporal dimensions of the
viewport (see the definition of rendered content for more
information about viewport dimensions). The point of regard may also
refer to a particular moment in time for content that changes over time
(e.g., an audio-only
presentation). User agents may determine the point of regard in a
number of ways, including based on viewport position in content, content focus, and selection. The stability of the point
of regard is addressed by @@.

A profile is a named and persistent representation
of user preferences that may be used to configure a user agent.
Preferences include input configurations, style preferences, and
natural language preferences. In operating environments
with distinct user accounts, profiles enable users to reconfigure
software quickly when they log on. Users may share their profiles with
one another. Platform-independent profiles are useful for those who use
the same user agent on different platforms.

A user agent renders a document by applying
formatting algorithms and style information to the document's elements.
Formatting depends on a number of factors, including where the document
is rendered: on screen, on paper, through loudspeakers, on a braille
display, or on a mobile device. Style information (e.g., fonts, colors,
and synthesized speech prosody) may come from the elements themselves
(e.g., certain font and phrase elements in HTML), from style sheets, or
from user agent settings. For the purposes of these guidelines, each
formatting or style option is governed by a property and each property
may take one value from a set of legal values. Generally in this
document, the term "property"
has the meaning defined in CSS 2 ([CSS2], section 3). A
reference to "styles" in this document means a set of style-related
properties. The value given to a property by a user agent at
installation is called the property's default value.

Authors encode information in many ways, including
in markup languages, style sheet languages, scripting languages, and
protocols. When the information is encoded in a manner that allows the
user agent to process it with certainty, the user agent can "recognize"
the information. For instance, HTML allows authors to specify a heading
with the H1 element, so a user agent that implements HTML
can recognize that content as a heading. If the author creates a
heading using a visual effect alone (e.g., just by increasing the font
size), then the author has encoded the heading in a manner that does
not allow the user agent to recognize it as a heading.
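The heading example above can be sketched in markup (illustrative only):

```html
<!-- Recognizable: the user agent can identify this as a heading -->
<h1>Chapter 1</h1>

<!-- Not recognizable as a heading: a visual effect alone -->
<span style="font-size: 200%">Chapter 1</span>
```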

Some requirements of this document depend on content roles, content
relationships, timing relationships, and other information supplied by
the author. These requirements only apply when the author has encoded
that information in a manner that the user agent can recognize. See the
section on conformance for more information
about applicability.

In practice, user agents will rely heavily on information that the
author has encoded in a markup language or style sheet language. On the
other hand, behaviors, style, and meaning encoded in a script, and markup in an unfamiliar XML
namespace may not be recognized by the user agent as easily or at all.
The Techniques document [UAAG10-TECHS] lists
some markup known to affect accessibility that user agents can
recognize.

Rendered content is the part of content that the user agent makes
available to the user's senses of sight and hearing (and only those
senses for the purposes of this document). Any content that causes an
effect that may be perceived through these senses constitutes rendered
content. This includes text characters, images, style sheets, scripts,
and anything else in content that, once processed, may be perceived
through sight and hearing.

The term "rendered text" refers to text
content that is rendered in a way that communicates information about
the characters themselves, whether visually or as synthesized
speech.

In the context of this document, invisible
content is content that is not rendered but that may
influence the graphical rendering (e.g., layout) of other content.
Similarly, silent content is content that
is not rendered but that may influence the audio rendering of other
content. Neither invisible nor silent content is considered rendered
content.

In this document, the term "repair content" refers
to content generated by the user agent in order to correct an error
condition. "Repair text" refers to the text portion of repair
content. Some error conditions that may lead to the generation of
repair content include:

Missing resources for handling or rendering content (e.g., the
user agent lacks a font family to display some characters, or the
user agent does not implement a particular scripting language).

This document does not require user agents to include repair content
in the document object. Repair content
inserted in the document object should conform to the Web Content
Accessibility Guidelines 1.0 [WCAG10]. For more
information about repair techniques for Web content and software, refer
to "Techniques for Authoring Tool Accessibility Guidelines 1.0"
[ATAG10-TECHS].

In this document, the term "script" almost always
refers to a scripting (programming) language used to create dynamic Web
content. However, in guidelines referring to the written (natural)
language of content, the term "script" is used as in Unicode [UNICODE] to mean "A
collection of symbols used to represent textual information in one or
more writing systems."

Information encoded in (programming) scripts may be
difficult for a user agent to recognize. For
instance, a user agent is not expected to recognize that, when
executed, a script will calculate a factorial. The user agent will be
able to recognize some information in a script by virtue of
implementing the scripting language or a known program library (e.g.,
the user agent is expected to recognize when a script will open a
viewport or retrieve a resource from the Web).

In this document, the term "selection" refers to a
user agent mechanism for identifying a (possibly empty) range of content. Generally, user agents limit
the type of content that may be selected to text content (e.g., one or
more fragments of text). In some user agents, the value of the selection is constrained by the
structure of the document tree.

On the screen, the selection may be highlighted in
a variety of ways, including through colors, fonts, graphics, and
magnification. The selection may also be highlighted when rendered as
synthesized speech, for example through changes in speech prosody. The
dimensions of the rendered selection may exceed those of the
viewport.

The selection may be used for a variety of purposes, including for
cut and paste operations, to designate a specific element in a document
for the purposes of a query, and as an indication of point of regard.

The selection has state, i.e., it may be "set," programmatically or
through the user interface.

In this document, each viewport is expected to have at most one
selection. When several viewports coexist, at most one
viewport's selection responds to input events; this is called the
current selection.

Note: Some user agents may also implement a
selection for designating a range of information in the user agent user
interface. The current document only includes requirements for a content selection mechanism.

In this document, the expression "serial access"
refers to one-dimensional access to
rendered content. Some examples of serial access include listening to
an audio stream or watching a video (both of which involve one temporal
dimension), or reading a series of lines of braille one line at a time
(one spatial dimension). Many users with blindness have serial access
to content rendered as audio, synthesized speech, or lines of braille.

The expression "sequential navigation" refers to navigation through
an ordered set of items (e.g., the enabled
elements in a document, a sequence of lines or pages, or a sequence
of menu options). Sequential navigation implies that the user cannot
skip directly from one member of the set to another, in contrast to
direct or structured navigation. Users with blindness or some users
with a physical disability may navigate content sequentially (e.g., by
navigating through links, one by one, in a graphical viewport with or
without the aid of an assistive technology). Sequential navigation is
important to users who cannot scan rendered content visually for
context and also benefits users unfamiliar with content. The increments
of sequential navigation may be determined by a number of factors,
including element type (e.g., links only), content structure (e.g.,
navigation from heading to heading), and the current navigation context
(e.g., having navigated to a table, allow navigation among the table
cells).

Users with serial access to content or who navigate sequentially may
require more time to access content than users who use direct or
structured navigation.

In this document, the terms "support," "implement,"
and "conform" all refer to what a developer has designed a user agent
to do, but they represent different degrees of specificity. A user
agent "supports" general classes of objects, such as "images" or
"Japanese." A user agent "implements" a specification (e.g., the PNG
and SVG image format specifications or a particular scripting
language), or an API
(e.g., the DOM API) when it has been programmed to follow all or part
of a specification. A user agent "conforms to" a specification when it
implements the specification and satisfies its conformance
criteria.

In this document, "to synchronize" refers to the act
of time-coordinating two or more presentation components (e.g., a visual track with captions, or
several tracks in a multimedia presentation). For Web content
developers, the requirement to synchronize means to provide the data
that will permit sensible time-coordinated rendering by a user agent.
For example, Web content developers can ensure that the segments of
caption text are neither too long nor too short, and that they map to
segments of the visual track that are appropriate in length. For user
agent developers, the requirement to synchronize means to present the
content in a sensible time-coordinated fashion under a wide range of
circumstances including technology constraints (e.g., small text-only
displays), user limitations (slow reading speeds, large font sizes,
high need for review or repeat functions), and content that is
sub-optimal in terms of accessibility.

A mechanism for encoding instructions to be rendered, played or
executed by user agents. Web Content
technologies may include markup languages, data formats, or programming
languages that authors may use alone or in
combination to create end-user experiences that range from static Web
pages to multimedia presentations to dynamic Web applications. Some
common examples of Web content technologies include HTML, CSS, SVG,
PNG, PDF, Flash, and JavaScript.

As used in this document a "text element" adds text
characters to either content or the user
interface. Both in the Web Content Accessibility Guidelines 1.0
[WCAG10] and in this
document, text elements are presumed to produce text that can be
understood when rendered visually, as synthesized speech, or as
braille. Such text elements benefit at least these three groups of
users:

visually-displayed text benefits users who are deaf and adept in
reading visually-displayed text;

synthesized speech benefits users who are blind and adept in use
of synthesized speech;

braille benefits users who are blind, and possibly deaf-blind,
and adept at reading braille.

A text element may consist of both text and non-text data. For
instance, a text element may contain markup for style (e.g., font size
or color), structure (e.g., heading levels), and other semantics. The
essential function of the text element should be retained even if style
information happens to be lost in rendering.

A user agent may have to process a text element in order to have
access to the text characters. For instance, a text element may consist
of markup, it may be encrypted or compressed, or it may include
embedded text in a binary format (e.g., JPEG).

"Text content" is content that is composed of one or more text
elements. A "text equivalent" (whether in content or the user
interface) is an equivalent composed of
one or more text elements. Authors generally provide text equivalents
for content by using the alternative content
mechanisms of a specification.

A "non-text element" is an element (in content or the user
interface) that does not have the qualities of a text element.
"Non-text content" is composed of one or more non-text elements. A
"non-text equivalent" (whether in content or the user interface) is an
equivalent composed of
one or more non-text elements.

In this document, a "text decoration" is any
stylistic effect that the user agent may apply to visually rendered text that does not
affect the layout of the document (i.e., does not require reformatting
when applied or removed). Text decoration mechanisms include underline,
overline, and strike-through.
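For instance, HTML's del and ins elements are conventionally rendered with strike-through and underline decorations; in this sketch the styles are written out explicitly to show that applying or removing them does not reflow the line:

```html
<!-- Text decorations: stylistic effects on visually rendered text
     that do not change the document's layout. -->
<p>Price: <del style="text-decoration: line-through">$25</del>
   <ins style="text-decoration: underline">$19</ins></p>
```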

A text transcript is a text equivalent of audio
information (e.g., an audio-only presentation
or the audio track of a movie or other
animation). It provides text for both spoken words and non-spoken
sounds such as sound effects. Text transcripts make audio information
accessible to people who have hearing disabilities and to people who
cannot play the audio. Text transcripts are usually created by hand but
may be generated on the fly (e.g., by voice-to-text converters). See
also the definitions of captions and collated text
transcripts.
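A hypothetical excerpt of such a transcript, covering both spoken words and non-spoken sounds, might look like:

```html
<!-- Hypothetical fragment of a text transcript for an audio track:
     spoken words plus bracketed non-spoken sounds. -->
<p>[phone rings]<br>
   Narrator: Welcome to the tutorial.<br>
   [upbeat music fades in]</p>
```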

An "audio track" is content rendered as sound through an
audio viewport. The audio track may be all
or part of the audio portion of a presentation (e.g., each instrument may
have a track, or each stereo channel may have a track). See also the definition of visual track.

User agent default styles are style property
values applied in the absence of any author or user styles. Some
markup languages specify a default rendering for content in that markup
language; others do not. For example, XML 1.0
[XML]
does not specify default styles for XML documents.
HTML 4 [HTML4] does not specify
default styles for HTML documents, but the CSS 2 [CSS2]
specification suggests a sample
default style sheet for HTML 4 based on current practice.
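A few rules in the spirit of that sample style sheet (the values here are illustrative; actual user agent defaults vary):

```html
<style>
  /* Illustrative default-style rules, adapted in spirit from the
     CSS 2 sample style sheet for HTML 4; exact values differ
     between user agents. */
  h1 { display: block; font-size: 2em; margin: .67em 0 }
  ul { display: block; list-style-type: disc; margin: 1.12em 0 }
  em { font-style: italic }
</style>
```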

In this document, the term "user interface" covers both:

the "user agent user interface," i.e., the controls (e.g., menus, buttons,
prompts, and other components for input and output) and mechanisms
(e.g., selection and focus) provided by the user agent ("out of the
box") that are not created by content;

the "content user interface," i.e., the enabled elements that are
part of content, such as form controls, links, and applets.

The document distinguishes them only where required for clarity. For
more information, see the section on requirements for content, for user
agent features, or both @@.

The term "user interface control" refers to a component of the user
agent user interface or the content user interface, distinguished where
necessary.

The user agent renders content through one or
more viewports. Viewports include windows, frames, pieces of paper,
loudspeakers, and virtual magnifying glasses. A viewport may contain
another viewport (e.g., nested frames). User
agent user interface controls such as prompts, menus, and alerts
are not viewports.
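As a sketch (the file name is hypothetical), an HTML document rendered in a window is one viewport, and an inline frame within it is a nested viewport:

```html
<!-- One viewport nested in another: the window renders this
     document, and the iframe is a viewport onto a second document. -->
<p>Content in the outer (window) viewport.</p>
<iframe src="inner.html" title="Nested viewport"
        width="300" height="150"></iframe>
```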

Graphical and tactile viewports have two spatial dimensions. A viewport may also
have temporal dimensions, for instance when audio, speech, animations,
and movies are rendered. When the dimensions (spatial or temporal) of
rendered content exceed the dimensions of the viewport, the user agent
provides mechanisms such as scroll bars and advance and rewind controls
so that the user can access the rendered content "outside" the
viewport. This happens, for example, when the user can view only a
portion of a large document through a small graphical viewport, or when
audio content has already been played.

When several viewports coexist, only one has the current focus at a given moment.
This viewport is highlighted to make it stand out.

User agents may render the same content in a variety of ways; each
rendering is called a view. For instance, a user agent may
allow users to view an entire document or just a list of the document's
headers. These are two different views of the document.

"Top-level" viewports are
viewports that are not contained within other user agent viewports.

A visual object is content rendered through a
graphical viewport. Visual objects include
graphics, text, and visual portions of movies and other animations. A
visual track is a visual object that is intended as a whole or partial
presentation. A visual track does not necessarily correspond to a
single physical object or software object.

Appendix B: How to refer to
UAAG 2.0 from other documents

Appendix C: References

For the latest version of any W3C specification please
consult the list of W3C Technical Reports at
http://www.w3.org/TR/. Some documents listed below may have been superseded
since the publication of this document.

Note: In this document, bracketed labels such as
"[WCAG20]" link to the corresponding entries in this section. These labels
are also identified as references through markup.

This publication has been funded in part with Federal funds from the U.S.
Department of Education, National Institute on Disability and Rehabilitation
Research (NIDRR) under contract number ED05CO0039. The content of this
publication does not necessarily reflect the views or policies of the U.S.
Department of Education, nor does mention of trade names, commercial
products, or organizations imply endorsement by the U.S. Government.