UAAG 2.0 provides guidelines for designing user
agents that lower barriers to Web accessibility for people with
disabilities. User agents include browsers and other types of software that
retrieve and render Web content. A user agent that
conforms to these guidelines will promote
accessibility through its own user interface and through other internal
facilities, including its ability to communicate with other technologies
(especially assistive
technologies). Furthermore, all users, not just users with disabilities,
should find conforming user agents to be more usable.

In addition to helping developers of browsers and media players, UAAG 2.0 will benefit developers of assistive technologies because it
explains what types of information and control an assistive technology may
expect from a conforming user agent. Technologies not addressed directly by
UAAG 2.0 (e.g. technologies for braille rendering) will be essential to
ensuring Web access for some users with disabilities.

The "User Agent Accessibility Guidelines 2.0" (UAAG 2.0) is part
of a series of accessibility guidelines published by the W3C Web Accessibility
Initiative (WAI).

May be Superseded

This section describes the status of this document at the time of its
publication. Other documents may supersede this document. A list of current
W3C publications and
the latest revision of this technical report can be found in the W3C technical reports
index at http://www.w3.org/TR/.

Editor's Draft of UAAG 2.0

This document is the internal working draft used by the UAWG and is updated continuously and without notice. This document has no formal standing within W3C. Please consult the group's home page and the W3C technical reports index for information about the latest publications by this group.

No Endorsement

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a
draft document and may be updated, replaced or obsoleted by other documents
at any time. It is inappropriate to cite this document as other than work in
progress.

A user agent is any software that retrieves and presents Web content for
end users. User agents include Web browsers, media players, and plug-ins that help in retrieving, rendering
and interacting with Web content. UAAG 2.0 specifies requirements for user agent developers that will lower barriers
to accessibility.

Overview

Accessibility involves a wide range of disabilities. These include visual,
auditory, physical, speech, cognitive, language, learning, neurological
disabilities, and disabilities related to ageing. The goal of UAAG 2.0 is to ensure that all users, including users with disabilities, have
control over their environment for accessing the Web. Key methods for
achieving that goal include:

configurability

device independence

interoperability

direct support for both graphical and auditory output

optional self-pacing

adherence to published conventions

Some users have more than one disability, and the needs of different
disabilities may conflict. Thus, many UAAG 2.0 requirements
use configuration to ensure that functionality designed to
improve accessibility for one user does not interfere with accessibility for
another. UAAG 2.0 prefers
configuration requirements rather than requirements for default settings, because a default user agent setting may be useful for one user but
interfere with accessibility for another. For example, a feature required by UAAG 2.0 may be ineffective or cause
content to be less accessible, making it imperative that the user be able to
turn off the feature. To avoid overwhelming users with an abundance of
configuration options, UAAG 2.0 includes requirements that promote documentation and ease
of configuration.

Although author preferences are important, UAAG 2.0 includes requirements to override certain author preferences
when the user would not otherwise be able to access that content.

Some UAAG 2.0 requirements may have security implications,
such as communicating through APIs, or allowing programmatic read and write
access to content and user interface
control. UAAG 2.0 assumes that features required by UAAG 2.0
will be built on top of an underlying security architecture. Consequently,
UAAG 2.0 grants no
conformance exemptions based on security issues, unless permitted explicitly in a success criterion.

The UAWG expects that software that satisfies the requirements of UAAG 2.0 will be more flexible, manageable, extensible, and beneficial for all
users.

UAAG 2.0 Layers of Guidance

In order to meet the needs of different audiences using UAAG,
several layers of guidance are provided, including overall
principles, general guidelines, testable success
criteria, and explanatory intent, examples and
resource links.

Principles - At the top are five principles that
provide the foundation for accessible user agents. Principles 1, 2, and 3 parallel the Web Content Accessibility Guidelines (WCAG)
2.0: to make the user agent perceivable, operable, and understandable. Principles 4 and 5 are specific to user agents: facilitate programmatic access and comply with specifications and conventions.

Guidelines - Under the principles are guidelines.
The guidelines are goals that developers should work toward in
order to make user agents more accessible to users with
disabilities. The guidelines are not testable, but provide the framework
and overall objectives to help developers understand the success criteria
and better implement them.

Success Criteria - For each guideline, testable
success criteria are provided to allow UAAG 2.0 to be used where
requirements and conformance testing are necessary, such as design
specification, purchasing, regulation, and contractual agreements. Three levels of conformance meet the needs of different groups and different situations: A (lowest), AA, and AAA
(highest). Additional information on UAAG levels can be found in the
section on Conformance.

The principles, guidelines, and success criteria work together to provide guidance on
how to make user agents more accessible. Developers are encouraged to use them in order to best address the needs of the widest possible range
of users.

Even user agents that conform at the highest level (AAA) may
not be accessible to individuals with all types, degrees, or combinations of
disability, particularly in the cognitive, language, and learning areas.
Developers are encouraged to seek out current
best practice to ensure that user agents are as accessible as
possible.

UAAG 2.0 Supporting Documents

A separate document, entitled Implementing User Agent
Accessibility Guidelines 2.0 (referred to as the "Implementing document" from here on), provides explanations and
examples of how each success criterion might be satisfied. It also includes
references to other accessibility resources (such as platform-specific
software accessibility guidelines) that provide additional information on how
a user agent may satisfy each success criterion. The examples in the
Implementing document are informative only.
Other strategies may be used or required to satisfy the success criteria.
The UAWG expects to update the Implementing document more
frequently than the current guidelines. Developers, W3C Working Groups,
users, and others are encouraged to contribute examples and resources.

Components of Web Accessibility

Web accessibility depends on accessible user agents and accessible content. The availability of accessible content is greatly influenced
by the accessibility of the authoring tool. For an overview of how these
components of Web development and interaction work together, see the WAI
resource Essential Components of Web Accessibility.
Users interacting with a web browser may do so using one or more input methods including keyboard, mouse, speech, touch, and gesture. It's critical that each user be free to use whatever input method or combination of methods works best for a given situation. Therefore every potential user task must be accessible via modality independent controls that any input technology can access.

For instance, a user who can't use or doesn't have access to a mouse, but can use and access a keyboard, should still be able to invoke a modality independent control to trigger functionality the author attached to a mouse-specific event such as onmouseover.
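As a sketch of this idea (the function and event names below are illustrative, not taken from UAAG), a user agent or author can route every input modality through one activation check instead of binding behavior to a mouse-only event:

```javascript
// Decide whether an input event should trigger a control's single,
// modality-independent "activate" action. Mouse and touch user agents
// synthesize "click" on activation; keyboards conventionally use
// Enter or Space.
function shouldActivate(event) {
  if (event.type === "click") return true; // mouse, touch, or synthesized click
  if (event.type === "keydown") {
    return event.key === "Enter" || event.key === " ";
  }
  return false; // e.g. mouseover alone never counts as activation
}

// In a browser, a control would wire this up once for all modalities:
//   element.addEventListener("click",   e => { if (shouldActivate(e)) reply(e); });
//   element.addEventListener("keydown", e => { if (shouldActivate(e)) reply(e); });
```

Because the decision is centralized, any input technology that can produce a click or a key press reaches the same behavior.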

While it is common to think of user agents retrieving and rendering web content for one group of people (end-users) that was previously authored by another group (authors), user agents are also frequently involved with the process of authoring content.

For these cases, it is important for user agent developers to consider the application of another W3C WAI Recommendation, the Authoring Tool Accessibility Guidelines (ATAG). ATAG (currently 2.0 is in draft) provides guidance to the developers of tools regarding the accessibility of authoring interfaces to authors (ATAG 2.0 Part A) and ways in which all authors can be supported in producing accessible web content (ATAG 2.0 Part B).

PRINCIPLE 1 - Ensure that the user interface and rendered content are perceivable

Summary: The user can choose to render any type of alternative content available (1.1.1). The user can also choose at least one alternative, such as alt text, to be always displayed (1.1.2), but it's recommended that users also be able to specify a cascade (1.1.4), such as alt text if present, otherwise longdesc, otherwise filename, etc. It's recommended that the user can configure captions and that text or sign language alternatives not obscure the video or its controls (1.1.3). The user can configure the size and position of media alternatives (1.1.5).

1.1.1 Render Alternative Content [was 1.1.3]:

For any content element, the user can choose to render any types of alternative content that are present. (Level A)

1.1.2 Configurable Alternative Content Defaults [was 1.1.1]:

For each type of non-text content, the user can specify a type of alternative content that, if present, will be rendered by default. (Level AA)

1.1.3 Display of Time-Based Media Alternatives:

For recognized on-screen alternatives for time-based media (e.g. captions, sign language video), the following are all true: (Level AA)

Do not obscure primary media: The user can specify that displaying media alternatives doesn't obscure the primary time-based media; and

Do not obscure controls: The user can specify that displaying media alternatives doesn't obscure recognized controls for the primary time-based media.

Note: Depending on the screen area available, the display of the primary time-based media may need to be reduced in size to meet this requirement.


1.1.4 Default Rendering of Alternative Content (Enhanced):

For each type of non-text content, the user can specify the cascade order in which to render different types of alternative content when preferred types are not present. (Level AAA)
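A minimal sketch of such a cascade, assuming a simple lookup over whatever alternatives the author supplied (the property names are illustrative):

```javascript
// Pick which alternative content to render for a non-text element,
// following a user-configured cascade of preferred types.
function pickAlternative(available, cascade) {
  for (const type of cascade) {
    if (available[type]) return { type, value: available[type] };
  }
  return null; // nothing in the cascade is present
}

// e.g. prefer alt text, then a long description, then the file name:
const choice = pickAlternative(
  { longdesc: "desc.html", filename: "chart.png" },
  ["alt", "longdesc", "filename"]
);
// choice.type is "longdesc" because no alt text is present
```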

Summary: The user can request useful alternative content when the author fails to provide it. For example, showing metadata in place of missing or empty alt text (1.2.1). The user can ask the browser to predict missing structural information, such as field labels, table headings or section headings (1.2.2).

1.2.1 Support Repair by Assistive Technologies:

If text alternatives for non-text content are missing or empty, both of the following are true: (Level AA)

the user agent does not attempt to repair the text alternatives with text values that are also available to assistive technologies, and

the user agent makes metadata related to the non-text content available programmatically (and not via fields reserved for text alternatives).

1.2.2 Repair Missing Structure:

The user can specify whether or not the user agent should attempt to insert the following types of structural markup on the basis of author-specified presentation attributes (e.g. position and appearance): (Level AAA)

Summary: The user can visually distinguish selected, focused, and enabled items, and recently visited links (1.3.1), with a choice of highlighting options that at least include foreground and background colors, and border color and thickness (1.3.2).

1.3.1 Highlighted Items:

The user can specify that the following classes be highlighted so that each is uniquely distinguished: (Level A)

Summary: If synthesized speech is produced, the user can specify speech rate and volume (1.6.1), pitch and pitch range (1.6.2), and synthesizer speech characteristics like emphasis (1.6.3) and features like spelling (1.6.4).

1.6.1 Speech Rate, Volume, and Voice:

If synthesized speech is produced, the user can specify the following: (Level A)

speech rate,

speech volume, and

voice.

1.6.2 Speech Pitch and Range:

If synthesized speech is produced, the user can specify the following if offered by the speech synthesizer: (Level AA)

pitch (average frequency of the speaking voice), and

pitch range (variation in average frequency)

Note: Because the technical implementations of text to speech engines vary (e.g., formant-based synthesis or concatenative synthesis), a specific engine may not support varying pitch or pitch range. A user agent will expose the availability of pitch and pitch range control if the currently selected or installed text to speech engine offers this capability.
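One plausible way a user agent could honor this note, sketched against the Web Speech API's SpeechSynthesisUtterance attribute ranges (rate 0.1–10, pitch 0–2, volume 0–1); the per-engine capability object passed in is an assumption, since capability reporting varies by engine:

```javascript
// Clamp user-requested speech settings to what the installed
// synthesizer actually supports, exposing pitch control only
// when the engine offers it.
function applySpeechSettings(requested, caps) {
  const clamp = (v, lo, hi) => Math.min(hi, Math.max(lo, v));
  return {
    rate:   clamp(requested.rate   ?? 1, caps.minRate, caps.maxRate),
    pitch:  caps.supportsPitch ? clamp(requested.pitch ?? 1, 0, 2) : 1,
    volume: clamp(requested.volume ?? 1, 0, 1),
  };
}

// In a browser these values would then be copied onto a
// SpeechSynthesisUtterance before calling speechSynthesis.speak().
```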

1.6.3 Advanced Speech Characteristics:

The user can adjust all of the speech characteristics offered by the speech synthesizer. (Level AAA)

1.6.4 Synthesized Speech Features:

If synthesized speech is produced, the following features are provided: (Level AA)

user-defined extensions to the synthesized speech dictionary,

"spell-out", where text is spelled one character at a time, or according to language-dependent pronunciation rules,

at least two ways of speaking numerals: spoken as individual digits and punctuation (e.g. "one two zero three point five" for 1203.5, or "one comma two zero three point five" for 1,203.5), and spoken as full numbers (e.g. "one thousand, two hundred and three point five" for 1203.5), and

at least two ways of speaking punctuation: spoken literally, and with punctuation understood from natural pauses.

Summary: The user agent supports user stylesheets (1.7.1), and the user can choose which, if any, user-supplied (1.7.2) and author-supplied (1.7.3) stylesheets to use. The user agent allows users to save user stylesheets (1.7.4).

1.7.1 Support User Stylesheets:

If the user agent supports a mechanism for authors to supply stylesheets, the user agent also provides a mechanism for users to supply stylesheets. (Level A)

1.7.2 Apply User Stylesheets:

If user stylesheets are supported, then the user can enable or disable user stylesheets for: (Level A)

all pages on specified websites, or

all pages

1.7.3 Author Stylesheets:

If the user agent supports a mechanism for authors to supply stylesheets, the user can disable the use of author stylesheets on the current page. (Level A)
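In a browser, 1.7.2 and 1.7.3 can reduce to toggling the disabled flag on entries of document.styleSheets; the sketch below models that decision as a pure function (the sheet descriptors and preference names are illustrative):

```javascript
// Decide the disabled flag for each stylesheet given the user's
// preferences. A user agent would apply the result over
// document.styleSheets by setting each sheet's disabled property.
function styleSheetStates(sheets, prefs) {
  return sheets.map((sheet) => ({
    ...sheet,
    disabled:
      (sheet.origin === "author" && !prefs.authorEnabled) ||
      (sheet.origin === "user"   && !prefs.userEnabled),
  }));
}
```

Disabling author stylesheets on the current page (1.7.3) then becomes a single preference flip rather than per-sheet surgery.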

1.7.4 Save Copies of Stylesheets:

The user can save copies of the stylesheets referenced by the current page, in order to edit and load the copies as user stylesheets. (Level AA)

Guideline 1.8 - Help users to use and orient within windows and viewports. [Implementing 1.8]

Summary: The user agent provides programmatic and visual cues to keep the user oriented. These include highlighting the viewport (1.8.1), keeping the focus within the viewport (1.8.2 & 1.8.7), resizing the viewport (1.8.3), providing scrollbar(s) that identify when content is outside the visible region (1.8.4) and which portion is visible (1.8.5), changing the size of graphical content with zoom (1.8.6 & 1.8.12), and restoring the focus and point of regard when the user returns to a previously viewed page (1.8.8). Users can set a preference for whether new windows or tabs open automatically (1.8.9) or get focus automatically (1.8.10). Additionally, the user can specify that all viewports have the same user interface elements (1.8.11), if and how new viewports open (1.8.9), and whether the new window automatically gets focus (1.8.10). The user can mark items in a webpage and use shortcuts to navigate back to marked items (1.8.13).

1.8.1 Highlight Viewport:

The viewport with the input focus is highlighted and the user can customize attributes of the highlighting mechanism (e.g. shape, size, stroke width, color, blink rate). The viewport can include nested viewports and containers. (Level A)

1.8.2 Move Viewport to Selection and Focus:

When a viewport's selection or input focus changes, the viewport's content moves as necessary to ensure that the new selection or input focus location is at least partially in the visible portion of the viewport. (Level A)

1.8.3 Resize Viewport:

The user can resize graphical viewports within the limits of the display, overriding any values specified by the author. (Level A)

1.8.4 Viewport Scrollbars:

When the rendered content extends beyond the viewport dimensions, users can have graphical viewports include scrollbars, overriding any values specified by the author. (Level A)

1.8.5 Indicate Viewport Position [was 1.8.5]:

The user can determine the viewport's position relative to the full extent of the rendered content. (Level A)

1.8.6 Zoom [was 1.8.X]:

The user can rescale content within graphical viewports as follows: (Level A)

Zoom in: to at least 500% of the default size; and

Zoom out: to at least 10% of the default size, so the content fits within the height or width of the viewport.

1.8.7 Maintain Point of Regard [was 1.8.Z]:

To the extent possible, the point of regard remains visible and at the same location within the viewport when the viewport is resized, when content is zoomed or scaled, or when content formatting is changed. (Level A)

1.8.8 Viewport History [was 1.8.5]:

For user agents that implement a viewport history mechanism (e.g. "back" button), the user can return to any state in the viewport history that is allowed by the content, including a restored point of regard, input focus and selection. (Level AA)

1.8.9 Open on Request [was 1.8.6]:

The user can specify whether author content can open new top-level viewports (e.g. windows or tabs). (Level AA)

1.8.10 Do Not Take Focus:

If new top-level viewports (e.g. windows or tabs) are configured to open without explicit user request, the user can specify whether or not top-level viewports take the active keyboard focus when they open. (Level AA)

1.8.11 Same UI:

1.8.12 Reflowing Zoom:

The user can request that when reflowable content in a graphical viewport is rescaled, it is reflowed so that one dimension of the content fits within the height or width of the viewport. (Level AA)

1.8.13 Webpage Bookmarks [was 1.8.m, 1.8.13]:

The user can mark items in a webpage, then use shortcuts to navigate back to marked items. The user can specify whether a navigation mark disappears after a session, or is persistent across sessions. (Level AAA)
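A minimal sketch of such a mark list (the names are illustrative); persistence across sessions could store the serialized form in a user profile, while session-only marks are simply cleared:

```javascript
// Navigation marks: the user marks items, then cycles back to them
// with shortcuts. The cursor wraps at both ends of the list.
function createMarkList() {
  const marks = [];
  let cursor = -1;
  return {
    add(target) { marks.push(target); cursor = marks.length - 1; },
    next() {
      if (!marks.length) return null;
      cursor = (cursor + 1) % marks.length;
      return marks[cursor];
    },
    prev() {
      if (!marks.length) return null;
      cursor = (cursor - 1 + marks.length) % marks.length;
      return marks[cursor];
    },
    serialize() { return JSON.stringify(marks); }, // persist across sessions
    clear() { marks.length = 0; cursor = -1; },    // session-only behavior
  };
}
```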

Summary: The user can view the source of content (1.9.2), or an outline view of important elements (1.9.1).

1.9.1 Outline View:

Users can view a navigable outline of rendered content composed of labels for important structural elements, and can move focus efficiently to these elements in the main viewport. (Level AA)

Note: The important structural elements depend on the web content technology, but may include headings, table captions, and content sections.

PRINCIPLE 2. Ensure that the user interface is operable

Summary: Users can operate all functions (2.1.1), and move focus (2.1.2) using just the keyboard. Users can activate important or common features with shortcut keys (2.1.6), override keyboard shortcuts in content and user interface (2.1.4), escape keyboard traps (2.1.3), specify that selecting an item in a dropdown list or menu not activate that item or move to that new web page (2.1.4), and use standard keys for that platform (2.1.5).

2.1.1 Keyboard Operation:

All functionality can be operated via the keyboard using sequential or direct keyboard commands that do not require specific timings for individual keystrokes, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints (e.g. free-hand drawing). This does not forbid and should not discourage providing other input methods in addition to keyboard operation, including mouse, touch, gesture and speech. (Level A)

2.1.2 Keyboard Focus (former 1.9.2):

2.1.3 No Keyboard Trap (former 2.1.5):

If keyboard focus can be moved to a component using a keyboard interface (including nested user agents), then focus can be moved away from that component using only a keyboard interface. If this requires more than unmodified arrow or tab keys (or other standard exit methods), users are advised of the method for moving focus away. (Level A)

2.1.4 Separate Selection from Activation (former 2.1.4):

The user can specify that focus and selection can be moved without causing further changes in focus, selection, or the state of controls, by either the user agent or author content. (Level A)

2.1.5 Follow Text Keyboard Conventions (former 2.1.7):

2.1.6 Efficient Keyboard Access:

The user agent user interface includes mechanisms to make keyboard access more efficient than sequential keyboard access. (Level A)


Guideline 2.2 - Provide sequential navigation [new, includes former 2.1.8 and 1.9.8, and a new SC][Implementing 2.2]

Summary: Users can use the keyboard to navigate sequentially (2.2.3) to all the operable elements (2.2.1) in the viewport as well as between viewports (2.2.2). Users can optionally disable wrapping or request a signal when wrapping occurs (2.2.4).

Summary: Users can navigate directly (e.g. keyboard shortcuts) to important elements (2.3.1), with the option of immediate activation of the operable elements (2.3.3). Direct commands can be displayed with their associated elements to make them easier for users to discover (2.3.2 & 2.3.4). The user can remap and save direct commands (2.3.5).

2.3.1 Direct Navigation to Important Elements (former 2.7.4):

The user can navigate directly to any important (e.g. structural or operable) element in rendered content. (Level A)

2.3.2 Present Direct Commands in Rendered Content (former 2.1.6):

The user can have any recognized direct commands in rendered content (e.g. accesskey, landmark) be presented with their associated elements (e.g. Alt+R to reply to a web email). (Level A)

2.3.3 Direct Activation (former 2.7.6):

The user can move directly to and activate any operable elements in rendered content. (Level AA)

2.3.4 Present Direct Commands in User Interface (former 2.1.7):

The user can have any direct commands in the user agent user interface (e.g. keyboard shortcuts) be presented with their associated user interface controls (e.g. "Ctrl+S" displayed on the "Save" menu item and toolbar button). (Level AA)

2.3.5 Customize Keyboard Commands:

The user can override any keyboard shortcut including recognized author supplied shortcuts (e.g. accesskeys) and user agent user interface controls, except for conventional bindings for the operating environment (e.g. arrow keys for navigating within menus). The rebinding options must include single-key and key-plus-modifier keys if available in the operating environment. The user must be able to save these settings beyond the current session. (Level AA)

Summary: Users can search rendered content (2.4.1) forward or backward (2.4.2) and can have the matched content highlighted in the viewport (2.4.3). The user is notified if there is no match (2.4.4). Users can also search by case and for text within alternative content (2.4.5).

2.4.1 Text Search:

The user can perform a search within rendered content (e.g. not hidden with a style), including rendered text alternatives and rendered generated content, for any sequence of printing characters from the document character set. (Level A)

2.4.2 Find Direction:

The user can search forward or backward in rendered content. (Level A)

2.4.3 Match Found:

When a search operation produces a match, the matched content is highlighted, the viewport is scrolled if necessary so that the matched content is within its visible area, and the user can search from the location of the match. (Level A)

2.4.4 Alert on Wrap or No Match:

The user can be notified when there is no match to a search operation. The user can be notified when the search continues from the beginning or end of content. (Level A)
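The search behavior in 2.4.1 through 2.4.4 can be sketched as a pure function over the rendered text (a real user agent would walk rendered text nodes, including rendered text alternatives); the wrapped flag is what drives the wrap notification:

```javascript
// Search rendered text for a query, forward or backward from a
// starting offset, wrapping past the end or start of content and
// reporting when the wrap happened so the user can be notified.
function findInContent(text, query, from, direction) {
  const t = text.toLowerCase();
  const q = query.toLowerCase();
  let index;
  let wrapped = false;
  if (direction === "forward") {
    index = t.indexOf(q, from);
    if (index === -1) { index = t.indexOf(q); wrapped = index !== -1; }
  } else {
    index = t.lastIndexOf(q, from);
    if (index === -1) { index = t.lastIndexOf(q); wrapped = index !== -1; }
  }
  return index === -1 ? { match: false } : { match: true, index, wrapped };
}
```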

2.4.5 Search Alternative Content:

The user can perform text searches within textual alternative content (e.g. text alternatives for non-text content, captions) even when the textual alternative content is not rendered onscreen. (Level AA)

2.5.1 Location in Hierarchy [was 2.5.3]:

When the user agent is presenting hierarchical information, but the hierarchy is not reflected in a standardized fashion in the DOM or platform accessibility services, the user can view the path of nodes leading from the root of the hierarchy to a specified element. (Level AA)

2.5.2 Navigate by structural element [was 2.5.5]:

The user agent provides at least the following types of structural navigation, where the structure types exist: (Level AA)

Summary: Users can interact with web content by mouse, keyboard, voice input, gesture, or a combination of input methods. Users can discover which event handlers (e.g. onmouseover) are available at an element and activate an element's events individually (2.6.1).

2.6.1 Access to Input Methods:

The user can discover recognized input methods explicitly associated with an element, and activate those methods in a modality independent manner. (Level AA)

Summary: Users can restore preference settings to default (2.7.2), and accessibility settings persist between sessions (2.7.1). Users can manage multiple sets of preference settings (2.7.3), and adjust preference settings outside the user interface so the current user interface does not prevent access (2.7.4). It's also recommended that groups of settings can be transported to compatible systems (2.7.5).

2.8.2 Reset Toolbar Configuration:

Summary: Users can extend the time limit for user input when such limits are controllable by the user agent (2.9.1); by default, the user agent shows the progress of content in the process of downloading (2.9.2).

2.9.1 Adjustable Timing:

Where time limits for user input are recognized and controllable by the user agent, the user can extend the time limits. (Level A)

2.9.2 Retrieval Progress:

Summary: To help users avoid seizures, the default configuration prevents the browser user interface and rendered content from flashing more than three times a second above a luminance or color threshold (2.10.1), or does not flash at all (2.10.2).

2.10.1 Three Flashes or Below Threshold:

In its default configuration, the user agent does not display any user interface components or recognized content that flashes more than three times in any one-second period, unless the flash is below the general flash and red flash thresholds. (Level A)
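The flash-frequency part of this check (not the general flash and red flash thresholds, which involve a separate photometric computation not modeled here) can be sketched as: more than three flashes in a second means four flash onsets within some 1000 ms window:

```javascript
// Given flash-onset timestamps in milliseconds, report whether any
// one-second window contains more than three flashes, i.e. whether
// four consecutive onsets span 1000 ms or less.
function exceedsThreeFlashesPerSecond(timestamps) {
  const sorted = [...timestamps].sort((a, b) => a - b);
  for (let i = 0; i + 3 < sorted.length; i++) {
    if (sorted[i + 3] - sorted[i] <= 1000) return true;
  }
  return false;
}
```

A user agent could run such a check over recognized animation frames and suppress or pause the offending rendering in its default configuration.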

2.10.2 Three Flashes:

In its default configuration, the user agent does not display any user interface components or recognized content that flashes more than three times in any one-second period (regardless of whether or not the flash is below the general flash and red flash thresholds). (Level AAA)

2.11.1 Background Image Toggle:

2.11.2 Time-Based Media Load-Only:

The user can override the play on load of recognized time-based media content such that the content is not played until explicit user request. (Level A)

2.11.3 Execution Placeholder:

The user can render a placeholder instead of executable content that would normally be contained within an on-screen area (e.g. Applet, Flash), until explicit user request to execute. (Level A)

2.11.4 Execution Toggle:

The user can turn on/off the execution of executable content that would not normally be contained within a particular area (e.g. JavaScript). (Level A)

2.11.5 Playback Rate Adjustment for Prerecorded Content:

The user can adjust the playback rate of prerecorded time-based media content, such that all of the following are true: (Level A)

The user can adjust the playback rate of the time-based media tracks to between 50% and 250% of real time.

Speech whose playback rate has been adjusted by the user maintains pitch in order to limit degradation of the speech quality.

Audio and video tracks remain synchronized across this required range of playback rates.

The user agent provides a function that resets the playback rate to normal (100%).
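A sketch of these requirements against an HTMLMediaElement-like object: playbackRate (1.0 = real time) and preservesPitch are real media-element properties in current browsers, while the clamp range below comes from this success criterion:

```javascript
// Clamp the requested playback rate to the 50%–250% range the
// success criterion requires, and keep pitch constant so adjusted
// speech stays intelligible.
function setPlaybackRate(media, requested) {
  media.playbackRate = Math.min(2.5, Math.max(0.5, requested));
  media.preservesPitch = true; // maintain pitch while rate-shifted
  return media.playbackRate;
}

// Reset function required by the last clause: back to normal (100%).
function resetPlaybackRate(media) {
  media.playbackRate = 1.0;
  return media.playbackRate;
}
```

Track synchronization across the range is handled by the media pipeline itself; the user agent only needs to keep audio and video clocked from the same rate-adjusted timebase.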

2.11.6 Stop/Pause/Resume Time-Based Media:

The user can stop, pause, and resume rendered audio and animation content (including video, animated images, and changing text) that lasts three or more seconds at its default playback rate. (Level A)

2.11.7 Navigate Time-Based Media:

The user can navigate along the timebase using a continuous scale, and by relative time units within rendered audio and animations (including video and animated images) that last three or more seconds at their default playback rate. (Level A)

2.11.8 Semantic Navigation of Time-Based Media:

The user can navigate by semantic structure within the time-based media, such as by chapters or scenes present in the media. (Level AA)

2.11.9 Track Enable/Disable of Time-Based Media:

During time-based media playback, the user can determine which tracks are available and select or deselect tracks, overriding global default settings for captions, audio descriptions, etc. (Level AA)

2.11.10 Video Contrast and Brightness [was 2.11.12]:

Users can adjust the contrast and brightness of visual time-based media. (Level AAA)


Summary: For all input devices supported by the platform, the user agent should let the user perform all functions aside from entering text (2.12.2), and enter text with any platform-provided text input features (2.12.1). It is also encouraged, where possible, to let the user enter text even if the platform does not provide such a feature (2.12.3).

2.12.1 Support Platform Text Input Devices:

If the platform supports text input using an input device, the user agent is compatible with this functionality. (Level A)

2.12.2 Operation With Any Device:

If an input device is supported by the platform, all user agent functionality other than text input can be operated using that device. (Level AA)

2.12.3 Text Input With Any Device:

If an input device is supported by the platform, all user agent functionality including text input can be operated using that device. (Level AAA)

3.2.4 Text Entry Undo:

Note: Submission can be triggered in many different ways, such as clicking a submit button, typing a key in a control with an onkeypress event, or by a script responding to a timer.


3.2.5 Settings Change Confirmation:

If the user agent provides mechanisms for changing its user interface settings, it either allows the user to reverse the setting changes, or allows the user to require confirmation before setting changes take effect. (Level A)

Summary: User documentation is available in an accessible format (3.3.1), documents the accessibility features (3.3.2), describes differences between versions (3.3.3), provides a centralized view of UAAG 2.0 conformance (3.3.4), and is available as context-sensitive help in the user agent (3.3.5).

3.3.1 Accessible documentation:

The product documentation is available in a format that meets the success criteria of WCAG 2.0 Level A or greater. (Level A)

3.3.3 Changes Between Versions:

3.3.4 Centralized View:

There is a dedicated section of the documentation that presents a view of all features of the user agent necessary to meet the requirements of User Agent Accessibility Guidelines 2.0. (Level AAA)

PRINCIPLE 4: Facilitate programmatic access

Summary: Be compatible with assistive technologies by supporting platform standards (4.1.1), including providing information about all menus, buttons, dialogs, etc. (4.1.2, 4.1.6), access to DOMs (4.1.4), and access to structural relationships and meanings, such as what text or image labels a control or serves as a heading (4.1.5). Where something can't be made accessible, provide an accessible alternative version, such as a standard window in place of a customized window (4.1.3). Make sure that programmatic exchanges are quick and responsive (4.1.7).

4.1.3 Accessible Alternative:

If a component of the user agent user interface cannot be exposed through the platform accessibility services, then the user agent provides an equivalent alternative that is exposed through the platform accessibility services. (Level A)

4.1.4 Programmatic Availability of DOMs:

If the user agent implements one or more DOMs, they must be made programmatically available to assistive technologies. (Level A)
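The requirement above can be made concrete with a small sketch. Here Python's standard-library xml.dom.minidom stands in for a DOM that a user agent exposes; the markup and the headings helper are invented for illustration, showing the kind of structural query an assistive technology could run once a DOM is programmatically available.

```python
# Sketch: what "programmatic availability of a DOM" enables. Python's
# built-in xml.dom.minidom stands in for the DOM a user agent would expose;
# an assistive technology could walk it to extract structure such as headings.
from xml.dom.minidom import parseString

markup = "<html><body><h1>Weather</h1><p>Sunny today.</p></body></html>"
dom = parseString(markup)

def headings(node):
    """Collect the text content of every h1 element reachable from node."""
    found = []
    for h1 in node.getElementsByTagName("h1"):
        found.append("".join(t.data for t in h1.childNodes
                             if t.nodeType == t.TEXT_NODE))
    return found

print(headings(dom))  # the markup-encoded headings
```

A screen reader performing heading navigation depends on exactly this kind of query being possible through a programmatic interface rather than by inspecting pixels.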

4.1.5 Write Access:

If the user can modify the state or value of a piece of content through the user interface (e.g., by checking a box or editing a text area), the same degree of write access is available programmatically.
(Level A)

4.1.6 Expose Accessible Properties:

If any of the following properties are supported by the platform accessibility services, make the properties available to the accessibility platform architecture: (Level A)

the bounding dimensions and coordinates of onscreen elements

font family of text

font size of text

foreground color of text

background color of text

state/value change notifications

selection

highlighting

input device focus

direct keyboard commands

underlining of menu items (keyboard commands/shortcuts)

4.1.7 Timely Communication:

For APIs implemented to satisfy the requirements of UAAG 2.0, ensure that programmatic exchanges proceed at a rate such that users do not perceive a delay. (Level A)

5.1.6 Enable Reporting of User Agent Accessibility Faults:

Applicability Note:

When a rendering requirement of another specification contradicts a
requirement of UAAG 2.0, the user agent may disregard the rendering
requirement of the other specification and still satisfy this guideline.

Conformance

This section is normative.

Conformance means that the user agent satisfies the success criteria
defined in the guidelines section. This conformance section describes
conformance and lists the conformance requirements.

Note 1: Although conformance can only be achieved at the stated levels,
developers are encouraged to report (in their claim) any progress toward
meeting success criteria from all levels beyond the achieved level of
conformance.

Conformance Claims (Optional)

If a conformance claim is made, the conformance claim must meet the
following conditions and include the following information (user agents
can conform to UAAG 2.0 without making a claim):

Conditions on Conformance Claims

At least one version of the conformance claim must be published on the
web as a document meeting level "A" of WCAG 2.0. A suggested metadata
description for this document is "UAAG 2.0 Conformance Claim".

Whenever the claimed conformance level is published (e.g. product
information website), the URI for the on-line published version of the
conformance claim must be included.

The existence of a conformance claim does not imply that the W3C has
reviewed the claim or assured its validity.

Claimants may be anyone (e.g. user agent developers, journalists, other
third parties).

Claimants are solely responsible for the accuracy of their claims
(including claims that include products for which they are not
responsible) and keeping claims up to date.

Claimants are encouraged to claim conformance to the most recent version
of the User Agent Accessibility Guidelines Recommendation.

Required Components of an UAAG 2.0 Conformance Claim

Claimant name and affiliation.

Date of the claim.

Conformance level satisfied.

User agent information: The name of the user agent and sufficient
additional information to specify the version (e.g. vendor name,
version number (or version range), required patches or updates, human
language of the user interface or documentation).
Note: If the user agent is a collection of software components (e.g. a
browser and extensions or plugins), then the name and version information must be provided
separately for each component, although the conformance claim will treat
them as a whole. As stated above, the Claimant has sole responsibility
for the conformance claim, not the developer of any of the software
components.

Included Technologies: A list of the web content technologies
(including version numbers) rendered by the user agent that the Claimant
is including in the conformance claim. By including a web content
technology, the Claimant is claiming that the user agent meets the
requirements of UAAG 2.0 during the rendering of web content using that
web content technology.
Note 1: Web content technologies may be a combination of constituent web
content technologies. For example, an image technology (e.g. PNG) might
be listed together with a markup technology (e.g. HTML) since web
content in the markup technology is used to make web content in the image
technology accessible (e.g. a PNG graph is made accessible using an
HTML table).

Excluded Technologies: A list of any web content technologies produced
by the user agent that the Claimant is excluding from the
conformance claim. The user agent is not required to meet the
requirements of UAAG 2.0 during the production of the web content
technologies on this list.

Declarations: For each success criterion:
A declaration of whether or not the success criterion has been
satisfied; or
A declaration that the success criterion is not applicable and a
rationale for why not.

Platform(s): The platform(s) upon which the user agent was evaluated:
For user agent platform(s) (used to evaluate web-based user agent user
interfaces): provide the name and version information of the user agent(s).
For platforms that are not user agents (used to evaluate non-web-based
user agent user interfaces) provide: The name and version information of
the platform(s) (e.g. operating system, etc.) and the name and
version of the platform accessibility service(s) employed.

Optional Components of an UAAG 2.0 Conformance Claim

A description of how the UAAG 2.0 success criteria were met where this
may not be obvious.

"Progress Towards Conformance" Statement

Developers of user agents that do not yet conform fully to a particular
UAAG 2.0 conformance level are encouraged to publish a statement on
progress towards conformance. The progress statement is the same as a
conformance claim, except that it identifies an UAAG 2.0 conformance level being progressed towards rather than one already satisfied, and reports progress on the success criteria not yet met. Authors of "Progress Towards Conformance" statements are solely responsible for the accuracy of their statements. Developers are encouraged to provide expected timelines for meeting outstanding success criteria within the statement.

Disclaimer

Neither W3C, WAI, nor UAWG take any responsibility for any aspect or
result of any UAAG 2.0 conformance claim that has not been published
under the authority of the W3C, WAI, or UAWG.

alternative content

Content that can be used in place of default content that may not be universally accessible. Alternative content fulfills the same purpose as the original content. Examples include text alternatives for non-text content, captions for audio, audio descriptions for video, sign language for audio, and media alternatives for time-based media. See WCAG for more information.

alternative content stack

A set of alternative content items. The items may be mutually exclusive (e.g.
regular contrast graphic vs. high contrast graphic) or non-exclusive
(e.g. caption track that can play at the same time as a sound
track).
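As an illustrative sketch (the data model and function are invented, not specified by UAAG 2.0), selection from an alternative content stack might distinguish mutually exclusive items, which share a group, from non-exclusive items:

```python
# Illustrative sketch: selecting items from an alternative content stack.
# Mutually exclusive items share a group and the user's preference picks one
# per group; non-exclusive items (e.g. a caption track) can be enabled
# alongside whatever else is playing.

def select_from_stack(stack, preferences):
    """stack: list of dicts with 'name' and 'group' (None if non-exclusive).
    preferences: maps a group name to the preferred item name."""
    chosen = []
    for item in stack:
        group = item["group"]
        if group is None:
            chosen.append(item["name"])           # non-exclusive: always eligible
        elif preferences.get(group) == item["name"]:
            chosen.append(item["name"])           # exclusive: only the preferred one
    return chosen

stack = [
    {"name": "regular-contrast graphic", "group": "graphic"},
    {"name": "high-contrast graphic", "group": "graphic"},
    {"name": "caption track", "group": None},
]
print(select_from_stack(stack, {"graphic": "high-contrast graphic"}))
```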

assistive technology

In the context of UAAG 2.0, an assistive technology is software that both:

relies on services (such as retrieving Web resources and parsing markup) provided by one or more other "host" user agents. Assistive technologies communicate data and messages with host user agents by using and monitoring APIs; and

provides services beyond those offered by the host user agents to meet the requirements of users with disabilities. Additional services include alternative renderings (e.g. as synthesized speech or magnified content), alternative input methods (e.g. voice), additional navigation or orientation mechanisms, and content transformations (e.g. to make tables more accessible).

Examples of assistive technologies that are important in the context
of UAAG 2.0 include the following:

screen magnifiers, which are used by people with visual
disabilities to enlarge and change colors on the screen to improve
the visual readability of rendered text and images.

screen readers, which are used by people who are blind or have
reading disabilities to read textual information through
synthesized speech or braille displays.

voice recognition software, which is used by some people who have physical disabilities to simulate the keyboard and mouse.

alternative keyboards, which are used by some people with
physical disabilities to simulate the keyboard and mouse.

alternative pointing devices, which are used by some people with
physical disabilities to simulate mouse pointing and button
activations.

Beyond UAAG 2.0, assistive technologies consist
of software or hardware that has been specifically designed to assist
people with disabilities in carrying out daily activities. These
technologies include wheelchairs, reading machines, devices for
grasping, text telephones, and vibrating pagers. For example, the
following very general definition of "assistive technology device"
comes from the (U.S.) Assistive Technology Act of 1998 [AT1998]:

Any item, piece of equipment, or product system, whether acquired
commercially, modified, or customized, that is used to increase,
maintain, or improve functional capabilities of individuals with
disabilities.

audio description

An equivalent alternative that takes the form of narration added to
the audio to describe important visual details
that cannot be understood from the main soundtrack alone. Audio
description of video provides information about actions, characters,
scene changes, on-screen text, and other visual content. In standard
audio description, narration is added during existing pauses in
dialogue. In extended audio
description, the video is paused so that there is time to add
additional description.

authors

The people who have worked either alone or collaboratively to create
the content (e.g. content authors, designers, programmers,
publishers, testers).

captions

An equivalent alternative that takes the form of text presented and synchronized with time-based media to provide not only the speech, but also non-speech information conveyed through sound, including meaningful sound effects and identification of speakers. In some
countries, the term "subtitle" is used to refer to dialogue only and
"captions" is used as the term for dialogue plus sounds and speaker
identification. In other countries, "subtitle" (or its translation) is
used to refer to both. Open captions are captions that are
always rendered with a visual track; they cannot be turned off.
Closed captions are captions that may be turned on and off.
The captions requirements of UAAG 2.0 assume that the user agent
can recognize the captions as such. Note: Other terms that include the word "caption" may
have different meanings in UAAG 2.0. For instance, a "table
caption" is a title for the table, often positioned graphically above
or below the table. In UAAG 2.0, the intended meaning of "caption"
will be clear from context.

A collated text transcript is a text equivalent of a movie or
other animation. It is the combination of the text transcript of the audio track and the text equivalent
of the visual track. For example, a
collated text transcript typically includes segments of spoken dialogue
interspersed with text descriptions of the key visual elements of a
presentation (actions, body language, graphics, and scene changes). See
also the definitions of text
transcript and audio description. Collated
text transcripts are essential for people who are deaf-blind.

content

Information and sensory experience to be communicated to the user by means of a user agent, including code or markup that defines the content's structure, presentation, and interactions. [adapted from WCAG 2.0]

empty content

Content (which may be alternative content) that is either a null value or an empty string (e.g. one that is zero characters long). For instance, in HTML, alt="" sets the value of the alt attribute to the empty string. In some markup languages, an element may have empty content (e.g. the HR element in HTML).

reflowable content

Content that can be arbitrarily wrapped over multiple lines. The primary exceptions to reflowable content are graphics and video.

continuous scale

When interacting with a time-based media presentation, a continuous scale allows user (or programmatic) action to set the active playback position to any time point on the presentation timeline. The granularity of the positioning is determined by the smallest resolvable time unit in the media timebase.
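A minimal sketch of seeking on a continuous scale, assuming an invented 1 ms timebase resolution: any requested time point is accepted, clamped to the presentation timeline, and snapped to the smallest resolvable unit.

```python
# Sketch of seeking on a continuous scale. The 1 ms resolution is an assumed
# example of "the smallest resolvable time unit in the media timebase".

def seek(requested_seconds, duration_seconds, resolution_seconds=0.001):
    clamped = min(max(requested_seconds, 0.0), duration_seconds)
    units = round(clamped / resolution_seconds)   # snap to the timebase grid
    return units * resolution_seconds

print(seek(72.3456, 600))   # snapped to millisecond granularity
print(seek(-5, 600))        # clamped to the start of the timeline
```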

A viewport may also
have temporal dimensions, for instance when audio, speech, animations,
and movies are rendered. When the dimensions (spatial or temporal) of
rendered content exceed the dimensions of the viewport, the user agent
provides mechanisms such as scroll bars and advance and rewind controls
so that the user can access the rendered content "outside" the
viewport. Examples include: when the user can only view a portion of a
large document through a small graphical viewport, or when audio
content has already been played.

document object model (DOM)

The Document Object Model is a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. The document can be further processed and the results of that processing can be incorporated back into the presented page. An overview of DOM-related materials at W3C and around the web is available at http://www.w3.org/DOM/#what.

documentation

Any information that supports the use of a user agent. This information may be found, for example, in manuals, installation instructions, the help system, and tutorials. Documentation may be distributed (e.g. some files installed as part of the installation, other parts delivered on CD-ROM or on the Web). See guideline 5.3 for information about documentation.

UAAG 2.0 uses the terms "element" and "element
type" primarily in the sense employed by the XML 1.0 specification
([XML], section 3): an element
type is a syntactic construct of a document type definition (DTD) for
its application. This sense is also relevant to structures defined by
XML schemas. UAAG 2.0 also uses the term "element" more generally
to mean a type of content (such as video or sound) or a logical
construct (such as a header or list).

enabled element

An element with associated behaviors that can be activated through the user interface or through an API. The set of elements that a user agent enables is generally derived from, but is not limited to, the set of elements defined by implemented markup languages. A disabled element is a potentially enabled element that is not currently available for activation (e.g. a "grayed out" menu item).

equivalent alternative

An acceptable substitute for content that a user may not be able to access. An equivalent alternative fulfills essentially the same function or purpose as the original content upon presentation:

text alternative: text that is available via the operating environment and is used in place of non-text content (e.g. text equivalents for images, text transcripts for audio tracks, or collated text transcripts for a movie). [from WCAG 2.0]

full text alternative for synchronized media including any interaction: document including correctly sequenced text descriptions of all visual settings, actions, speakers, and non-speech sounds, and transcript of all dialogue combined with a means of achieving any outcomes that are achieved using interaction (if any) during the synchronized media. [from WCAG 2.0]

User agents often perform a task when an event
having a particular "event type" occurs, including a user interface
event, a change to content, loading of content, or a request from the
operating environment.
Some markup languages allow authors to specify that a script, called an
event
handler, be executed when an event of a given type occurs. An
event handler is explicitly associated with an
element through scripting, markup or the DOM.
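A minimal, language-neutral sketch of the relationship between event types and handlers; the class and names below are illustrative, not an API defined by UAAG 2.0:

```python
# Illustrative sketch of event types and handlers: handlers are explicitly
# associated with an element for a given event type, and the user agent runs
# them when a matching event occurs.

class Element:
    def __init__(self, name):
        self.name = name
        self.handlers = {}            # event type -> list of handlers

    def add_event_handler(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event_type):
        """Run every handler registered for this event type."""
        return [handler(self) for handler in self.handlers.get(event_type, [])]

button = Element("submit-button")
button.add_event_handler("activate", lambda el: f"{el.name} activated")
print(button.dispatch("activate"))
```

Dispatching an event type with no registered handler simply runs nothing, which mirrors how user agents ignore events an author has not bound.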

explicit user request

An interaction by the user through the user
agent user interface, the focus, or the selection. User requests are made, for example, through user
agent user interface controls and keyboard commands. Some examples of explicit user requests include when the user selects "New viewport," responds "yes" to a prompt in the user agent's user interface, configures the user agent to behave in a certain way, or changes the selection or focus with the keyboard or pointing device. Note: Users can make errors when interacting with the user agent. For example, a user may inadvertently respond "yes" to a prompt instead of "no." This type of error is considered an explicit user request.

active input focus

The input focus location in the active viewport. The active input focus is in the active viewport, while the inactive input focus is in an inactive viewport. The active input focus is usually visibly indicated. In UAAG 2.0 "active input focus" generally refers to the active keyboard input focus. @@ Editors' Note: this term is not used in the document other than the glossary.@@

active selection

The selection that will currently be affected by a user command, as opposed to selections in other viewports, called inactive selections, which would not currently be affected by a user command. @@ Editors' Note: this term is not used in the document other than the glossary.@@

cursor

Visual indicator showing where keyboard input will occur. There are two types of cursors: focus cursor (e.g. the dotted line around a button) and text cursor (e.g. the flashing vertical bar in a text field, also called a 'caret'). Cursors are active when in the active viewport, and inactive when in an inactive viewport.

focus cursor

Indicator that highlights a user interface element to show that it has keyboard focus, e.g. a dotted line around a button, or brightened title bar on a window. There are two types of cursors: focus cursor (e.g. the dotted line around a button) and text cursor (e.g. the flashing vertical bar in a text field).

focusable element

Any element capable of having input focus, e.g. link, text box, or menu item. In order to be accessible and fully usable, every focusable element should take keyboard focus, and ideally would also take pointer focus.

highlight, highlighted, highlighting

Emphasis indicated through the user interface. For example, user agents highlight content that is selected, focused, or matched by a search operation. Graphical highlight mechanisms include dotted boxes, changed colors or fonts, underlining, adjacent icons, magnification, and reverse video. Synthesized speech highlight mechanisms include alterations of voice pitch and volume ("speech prosody"). User interface items may also be highlighted, for example a specific set of foreground and background colors for the title bar of the active window. Content that is highlighted may or may not be a selection.

inactive input focus

An input focus location in an inactive viewport such as a background window or pane. The inactive input focus location will become the active input focus location when input focus returns to that viewport. An inactive input focus may or may not be visibly indicated.

inactive selection

A selection that does not have the input focus and thus does not take input events.

input focus

The place where input will occur if a viewport is active. Examples include keyboard focus and pointing device focus. Input focus can also be active (in the active viewport) or inactive (in an inactive viewport).

keyboard focus

The screen location where keyboard input will occur if a viewport is active. Keyboard focus can be active (in the active viewport) or inactive (in an inactive viewport). See keyboard interface definition for types of keyboards included and what constitutes a keyboard.

keyboard interface

Keyboard interfaces are programmatic services provided by many platforms that allow operation in a device independent manner. A keyboard interface can allow keystroke input even if particular devices do not contain a hardware keyboard (e.g. a touchscreen-controlled device can have a keyboard interface built into its operating system to support onscreen keyboards as well as external keyboards that may be connected). Note: Keyboard-operated mouse emulators, such as MouseKeys, do not qualify as operation through a keyboard interface because these emulators use pointing device interfaces, not keyboard interfaces. [from ATAG 2.0]

pointer

Visual indicator showing where pointing device input will occur. The indicator can be moved with a pointing device or emulator such as a mouse, pen tablet, keyboard-based mouse emulator, speech-based mouse commands, or 3-D wand. A pointing device click typically moves the input focus to the pointer location. The indicator may change to reflect different states. When touch screens are used, the "pointing device" is a combination of the touch screen and the user's finger or stylus. On most systems there is no pointer (on-screen visual indication) associated with this type of pointing device.

pointing device focus

The screen location where pointer input will occur if a viewport is active. There can be multiple pointing device foci; for example, when using a screen sharing utility there is typically one for the user's physical mouse and one for the remote mouse. @@ Editors' Note: this term is not used in the document other than the glossary.@@

selection

A user agent mechanism for identifying a (possibly empty) range of content that will be the implicit source or target for subsequent operations. The selection may be used for a variety of purposes, including for cut-and-paste operations, to designate a specific element in a document for the purposes of a query, and as an indication of point of regard
(e.g. the matched results of a search may be automatically selected). The selection should be highlighted in a distinctive manner. On the screen, the selection may be highlighted in a variety of ways, including through colors, fonts, graphics, and magnification. When rendered using synthesized speech, the selection may be highlighted through changes in pitch, speed, or prosody.

split focus

A state when the user could be confused because the input focus is separated from something it is usually linked to, such as being at a different place than the selection or similar highlighting, or has been scrolled outside of the visible portion of the viewport. @@ Editors' Note: this term is not used in the document other than the glossary.@@

text cursor

Indicator showing where keyboard input will occur in text (e.g. the flashing vertical bar in a text field, also called a caret).

@@ Editor's Note: Need to find the hrefs to these definitions and fix them. @@

This specification intentionally does not identify
which "important elements" must be navigable because this will vary by
specification. What constitutes "efficient navigation" may depend on a
number of factors as well, including the "shape" of content (e.g.
sequential navigation of long lists is not efficient) and desired
granularity (e.g. among tables, then among the cells of a given
table). Refer to the Implementing document [Implementing UAAG 2.0] for information
about identifying and navigating important elements. @@ Editors' Note: Update links

input configuration

The set of bindings
between user agent functionalities and user
interface input mechanisms (e.g. menus, buttons, keyboard keys,
and voice commands). The default input configuration is the set of
bindings the user finds after installation of the software. Input
configurations may be affected by author-specified bindings (e.g.
through the accesskey attribute of HTML 4 [HTML4]).
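The idea can be sketched as follows; the key names are invented, and the dictionary merge order simply illustrates user-specified bindings taking precedence over author-specified and default ones:

```python
# Sketch of an input configuration: a set of bindings from input mechanisms
# to user agent functions. The default configuration ships with the software;
# author-specified bindings (e.g. from an HTML accesskey attribute) may add
# entries, and user-specified bindings take precedence over both.

default_bindings = {"Ctrl+S": "save-page", "Ctrl+F": "find"}
author_bindings = {"Alt+D": "jump-to-donate-link"}   # e.g. from accesskey="d"
user_bindings = {"Ctrl+F": "find-accessible"}        # user override

# Later dicts win, so user preferences prevail over author and default ones.
effective = {**default_bindings, **author_bindings, **user_bindings}
print(effective["Ctrl+F"])
```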

keyboard

The letter, symbol and command keys or key indicators that allow a user to control a computing device. Assistive technologies have traditionally relied on the keyboard interface as a universal, or modality independent, interface. In this document references to keyboard include keyboard emulators and keyboard interfaces that make use of the keyboard's role as a modality independent interface (see Modality Independence Principle). Keyboard emulators and interfaces may be used on devices which do not have a physical keyboard, such as mobile devices based on touchscreen input.

keyboard command

A key or set of keys that are tied to a particular UI control or application function, allowing the user to navigate to or activate the control or function without traversing any intervening controls (e.g. CTRL+"S" to save a document). It is sometimes useful to distinguish keyboard commands that are associated with controls that are rendered in the current context (e.g. ALT+"D" to move focus to the address bar) from those that may be able to activate program functionality that is not associated with any currently rendered controls (e.g. "F1" to open the Help system). Keyboard commands can be triggered using a physical keyboard or keyboard emulator (e.g. on-screen keyboard or speech recognition). (See Modality Independence Principle).

What is identified as "normative" is required for conformance (noting that one may conform in a
variety of well-defined ways to UAAG 2.0). What is identified as
"informative" (or, "non-normative") is never required for
conformance.

notify

To make the user aware of events or status changes. Notifications can occur within the user agent user interface (e.g. a status bar) or within the content display. Notifications may be passive and not require user acknowledgment, or they may be presented in the form of a prompt requesting a user response (e.g. a confirmation dialog).

In UAAG 2.0, the term "override" means that one
configuration or behavior preference prevails over another. Generally,
the requirements of UAAG 2.0 involve user preferences prevailing
over author preferences and user agent default settings and behaviors.
Preferences may be multi-valued in general (e.g. the user prefers blue
over red or yellow), and include the special case of two values (e.g.
turn on or off blinking text content).
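A small sketch of this cascade, with invented setting names: user preferences prevail over author preferences, which in turn prevail over the user agent's defaults.

```python
# Sketch of "override" as a preference cascade. The setting names are
# invented for illustration; UAAG 2.0 only requires that user preferences
# prevail over author preferences and user agent defaults.

def resolve(setting, user, author, defaults):
    for source in (user, author, defaults):      # highest precedence first
        if setting in source:
            return source[setting]
    raise KeyError(setting)

defaults = {"blink-text": True, "link-color": "blue"}
author = {"link-color": "red"}
user = {"blink-text": False}                     # user turns off blinking text

print(resolve("blink-text", user, author, defaults))   # user value wins
print(resolve("link-color", user, author, defaults))   # author beats default
```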

A placeholder is content generated by the user agent
to replace author-supplied content. A placeholder may be generated as
the result of a user preference (e.g. to not render images) or as repair content (e.g. when an
image cannot be found). A placeholder can be any type of content,
including text, images, and audio cues. A placeholder should identify
the technology of the replaced object.
Placeholders appear in the alternative content stack.

platform accessibility service

A programmatic interface that is engineered to enhance communication between mainstream software applications and assistive technologies (e.g. MSAA, UI Automation, and IAccessible2 for Windows applications, AXAPI for Mac OS X applications, the Gnome Accessibility Toolkit API for Gnome applications, Java Access for Java applications). On some platforms it may be conventional to enhance
communication further via implementing a DOM.

The point of regard is the position in rendered content that the user
is presumed to be viewing. The dimensions of the point of regard may
vary. For example, it may be a point (e.g. a moment during an audio rendering or a cursor position in a graphical rendering), a range of text (e.g. focused text), or a two-dimensional area (e.g. content rendered through a two-dimensional graphical viewport). The point of
regard is almost always within the viewport, but it may exceed the
spatial or temporal dimensions of the
viewport (see the definition of rendered content for more
information about viewport dimensions). The point of regard may also
refer to a particular moment in time for content that changes over time
(e.g. an audio-only
presentation). User agents may determine the point of regard in a
number of ways, including based on viewport position in content, keyboard focus, and selection. The stability of the point of regard is addressed by
success criterion 1.8.7.

A profile is a named and persistent representation
of user preferences that may be used to configure a user agent.
Preferences include input configurations, style preferences, and
natural language preferences. In operating environments
with distinct user accounts, profiles enable users to reconfigure
software quickly when they log on. Users may share their profiles with
one another. Platform-independent profiles are useful for those who use the same user agent on different devices.
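A profile can be sketched as a named preference set serialized to disk, here as JSON with invented preference keys; nothing about the storage format is specified by UAAG 2.0:

```python
# Sketch: a profile as a named, persistent representation of user
# preferences, serialized as JSON so it can be reloaded at logon or carried
# between devices. The preference keys are illustrative only.
import json
import os
import tempfile

profile = {
    "name": "high-contrast-speech",
    "preferences": {
        "font-size": 24,
        "colors": {"foreground": "white", "background": "black"},
        "speech-rate": 180,
        "language": "en",
    },
}

path = os.path.join(tempfile.gettempdir(), "uaag-profile-demo.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(profile, f)

with open(path, encoding="utf-8") as f:      # reload, e.g. at next logon
    restored = json.load(f)

print(restored["preferences"]["font-size"])
```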

A user agent renders a document by applying
formatting algorithms and style information to the document's elements.
Formatting depends on a number of factors, including where the document
is rendered (e.g. on screen, on paper, through loudspeakers, on a braille
display, on a mobile device). Style information (e.g. fonts, colors,
synthesized speech prosody) may come from the elements themselves
(e.g. certain font and phrase elements in HTML), from style sheets, or
from user agent settings. For the purposes of these guidelines, each
formatting or style option is governed by a property and each property
may take one value from a set of legal values. Generally in UAAG 2.0, the term "property"
has the meaning defined in CSS 2 ([CSS2], section 3). A
reference to "styles" in UAAG 2.0 means a set of style-related
properties. The value given to a property by a user agent at
installation is the property's default value.

recognize

Authors encode information in many ways, including
in markup languages, style sheet languages, scripting languages, and
protocols. When the information is encoded in a manner that allows the
user agent to process it with certainty, the user agent can "recognize"
the information. For instance, HTML allows authors to specify a heading
with the H1 element, so a user agent that implements HTML
can recognize that content as a heading. If the author creates a
heading using a visual effect alone (e.g. just by increasing the font
size), then the author has encoded the heading in a manner that does
not allow the user agent to recognize it as a heading.

Some requirements of UAAG 2.0 depend on content roles, content
relationships, timing relationships, and other information supplied by
the author. These requirements apply only when the author has encoded
that information in a manner that the user agent can recognize. See the
section on conformance for more information
about applicability. User agents rely heavily on information that the
author has encoded in a markup language or style sheet language. Behaviors,
styles, and meaning encoded in a script, or markup in an unfamiliar XML
namespace, may be recognized less easily, or not at all.
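The heading example above can be sketched in two HTML fragments (illustrative only):

```html
<!-- Recognizable: the h1 element lets a user agent that implements
     HTML identify this content as a heading. -->
<h1>Chapter 1</h1>

<!-- Not recognizable as a heading: only a visual effect (a larger
     font size) encodes the author's intent. -->
<span style="font-size: 2em">Chapter 1</span>
```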

Relative time units define time intervals for navigating media relative to the current point (e.g. move forward 30 seconds). When interacting with a time-based media presentation, a user may find it beneficial to move forward or backward via a time interval relative to their current position. For example, a user may find a concept unclear in a video lecture and elect to skip back 30 seconds from the current position to review what had been described. Relative time units may be preset by the user agent, configurable by the user, and/or automatically calculated based upon media duration (e.g. jump 5 seconds in a 30-second clip, or 5 minutes in a 60-minute clip). Relative time units are distinct from absolute time values such as the 2 minute mark, the half-way point, or the end.

Rendered content is the part of content that the user agent makes
available to the user's senses of sight and hearing (and only those
senses for the purposes of UAAG 2.0). Any content that causes an
effect that may be perceived through these senses constitutes rendered
content. This includes text characters, images, style sheets, scripts,
and any other content that, once processed, may be perceived
through sight and hearing.

The term "rendered text" refers to text
content that is rendered in a way that communicates information about
the characters themselves, whether visually or as synthesized
speech.

In the context of UAAG 2.0, invisible
content is content that is not rendered but that may
influence the graphical rendering (i.e. layout) of other content.
Similarly, silent content is content that
is not rendered but that may influence the audio rendering of other
content. Neither invisible nor silent content is considered rendered
content.

Repair content is content generated by the user agent to correct an error
condition. "Repair text" refers to the text portion of repair
content. Error conditions that may lead to the generation of
repair content include:

Missing resources for handling or rendering content (e.g. the
user agent lacks a font family to display some characters, or the
user agent does not implement a particular scripting language).

UAAG 2.0 does not require user agents to include repair content
in the document object. Repair content
inserted in the document object should conform to the Web Content
Accessibility Guidelines 1.0 [WCAG10]. For more
information about repair techniques for Web content and software, refer
to "Techniques for Authoring Tool Accessibility Guidelines 1.0"
[ATAG10-TECHS].

In UAAG 2.0, the term "script" almost always
refers to a scripting (programming) language used to create dynamic Web
content. However, in guidelines referring to the written (natural)
language of content, the term "script" is used as in Unicode [UNICODE] to mean "A
collection of symbols used to represent textual information in one or
more writing systems."

Information encoded in (programming) scripts may be
difficult for a user agent to recognize. For
instance, a user agent is not expected to recognize that, when
executed, a script will calculate a factorial. The user agent will be
able to recognize some information in a script by virtue of
implementing the scripting language or a known program library (e.g.
the user agent is expected to recognize when a script will open a
viewport or retrieve a resource from the Web).
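As an illustrative sketch (the function names are hypothetical), a user agent can recognize the viewport-opening call below by virtue of implementing the scripting language, but is not expected to recognize what the second function computes:

```html
<script>
  // Recognizable: window.open is part of the implemented scripting
  // environment, so the user agent can tell a viewport will open.
  function openHelp() {
    window.open("help.html");
  }

  // Not expected to be recognized: the user agent cannot tell that
  // this function, when executed, computes a factorial.
  function factorial(n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
  }
</script>
```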

Serial access is one-dimensional access to
rendered content. Some examples of serial access include listening to
an audio stream or watching a video (both of which involve one temporal
dimension), or reading a series of lines of braille one line at a time
(one spatial dimension). Many users with blindness have serial access
to content rendered as audio, synthesized speech, or lines of braille.

The expression "sequential navigation" refers to navigation through
an ordered set of items (e.g. the enabled
elements in a document, a sequence of lines or pages, or a sequence
of menu options). Sequential navigation implies that the user cannot
skip directly from one member of the set to another, in contrast to
direct or structured navigation. Users with blindness or some users
with a physical disability may navigate content sequentially (e.g. by
navigating through links, one by one, in a graphical viewport with or
without the aid of an assistive technology). Sequential navigation is
important to users who cannot scan rendered content visually for
context and also benefits users unfamiliar with content. The increments
of sequential navigation may be determined by a number of factors,
including element type (e.g. links only), content structure (e.g.
navigation from heading to heading), and the current navigation context
(e.g. having navigated to a table, allow navigation among the table
cells).

Users with serial access to content or who navigate sequentially may
require more time to access content than users who use direct or
structured navigation.

A style sheet is a mechanism for communicating style property settings for web content, in which the style property settings are separable from other content resources. This separation is what allows author style sheets to be toggled or substituted, and user style sheets to be defined so that they apply to more than one resource. Style sheet web content technologies include Cascading Style Sheets (CSS) and Extensible Stylesheet Language (XSL).

User style sheet: a style sheet specified by the user, resulting in user styles.

Author style sheet: a style sheet specified by the author, resulting in author styles.
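As a sketch (the file names are hypothetical), an author style sheet arrives with the content, while a user style sheet is supplied through the user agent's own settings and can apply across many resources:

```html
<!-- Author style sheet: referenced from the content, so the user
     agent can toggle or substitute it. -->
<link rel="stylesheet" href="author.css">

<!-- A user style sheet is not part of the content. The user points
     the user agent at a file such as user.css, whose rules might be:

       body { font-size: 150%; color: black; background: white; }

     The user agent then applies these user styles to any resource
     it renders. -->
```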

Support, implement,
and conform all refer to what a developer has designed a user agent
to do, but they represent different degrees of specificity. A user
agent "supports" general classes of objects, such as "images" or
"Japanese." A user agent "implements" a specification (e.g. the PNG
and SVG image format specifications or a particular scripting
language), or an API
(e.g. the DOM API) when it has been programmed to follow all or part
of a specification. A user agent "conforms to" a specification when it
implements the specification and satisfies its conformance
criteria.

To synchronize is the act
of time-coordinating two or more presentation components (e.g. a visual track with captions, or
several tracks in a multimedia presentation). For Web content
developers, the requirement to synchronize means to provide the data
that will permit sensible time-coordinated rendering by a user agent.
For example, Web content developers can ensure that the segments of
caption text are neither too long nor too short, and that they map to
segments of the visual track that are appropriate in length. For user
agent developers, the requirement to synchronize means to present the
content in a sensible time-coordinated fashion under a wide range of
circumstances including technology constraints (e.g. small text-only
displays), user limitations (e.g. slow reading speeds, large font sizes,
high need for review or repeat functions), and content that is
sub-optimal in terms of accessibility.

A web content technology is a mechanism for encoding instructions to be rendered, played, or
executed by user agents. Web content
technologies may include markup languages, data formats, or programming
languages that authors may use alone or in
combination to create end-user experiences that range from static Web
pages to multimedia presentations to dynamic Web applications. Some
common examples of Web content technologies include HTML, CSS, SVG,
PNG, PDF, Flash, and JavaScript.

A text element adds text
characters to either content or the user
interface. Both in the Web Content Accessibility Guidelines 2.0 [WCAG20] and in UAAG 2.0, text elements are presumed to produce text that can be
understood when rendered visually, as synthesized speech, or as
braille. Such text elements benefit at least these three groups of
users:

visually-displayed text benefits users who are deaf and adept in
reading visually-displayed text;

synthesized speech benefits users who are blind and adept in use
of synthesized speech;

braille benefits users who are blind, and possibly deaf-blind,
and adept at reading braille.

A text element may consist of both text and non-text data. For
instance, a text element may contain markup for style (e.g. font size
or color), structure (e.g. heading levels), and other semantics. The
essential function of the text element should be retained even if style
information happens to be lost in rendering.

A user agent may have to process a text element in order to have
access to the text characters. For instance, a text element may consist
of markup, it may be encrypted or compressed, or it may include
embedded text in a binary format (e.g. JPEG).

Text content is content that is composed of one or more text
elements. A text
equivalent (whether in content or the user
interface) is an equivalent composed of
one or more text elements. Authors generally provide text equivalents
for content by using the alternative content
mechanisms of a specification.

A non-text
element is an element (in content or the user
interface) that does not have the qualities of a text element.
Non-text
content is composed of one or more non-text elements. A
non-text equivalent (whether in content or the user interface) is an
equivalent composed of
one or more non-text elements.

Text decoration is any
stylistic effect that the user agent may apply to visually rendered text and that does not
affect the layout of the document (i.e. does not require reformatting
when applied or removed). Text decoration mechanisms include underline,
overline, and strike-through.
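These mechanisms correspond, for example, to CSS text-decoration values (illustrative fragment); applying or removing them changes the text's appearance without reflowing the document:

```html
<style>
  /* Text decoration: stylistic effects that do not affect layout. */
  ins   { text-decoration: underline; }
  del   { text-decoration: line-through; }  /* strike-through */
  .over { text-decoration: overline; }
</style>
```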

A text transcript is a text equivalent of audio
information (e.g. an audio-only presentation
or the audio track of a movie or other
animation). A text transcript provides text for both spoken words and non-spoken
sounds such as sound effects. Text transcripts make audio information
accessible to people who have hearing disabilities and to people who
cannot play the audio. Text transcripts are usually created by hand but
may be generated on the fly (e.g. by voice-to-text converters). See
also the definitions of captions and collated text
transcripts.

A timebase defines a common time scale for all components of a time-based media presentation. For example, a media player will expose a single timebase for a presentation composed of individual video and audio tracks, allowing users or other technologies to query or alter the playback rate and position.

An audio track is content rendered as sound through an
audio viewport. The audio track may be all
or part of the audio portion of a presentation (e.g. each instrument may
have a track, or each stereo channel may have a track). See also the definition of visual track.

A collection of commonly used controls presented in a region that can be configured or navigated separately from other regions. Such containers may be docked or free-floating, permanent or transient, and integral to the application or provided as add-ons. Variations are often called toolbars, palettes, panels, or inspectors.

User agent default styles are style property
values applied in the absence of any author or user styles. Some
markup languages specify a default rendering for content in that markup
language; others do not. For example, XML 1.0
[XML]
does not specify default styles for XML documents.
HTML 4 [HTML4] does not specify
default styles for HTML documents, but the CSS 2 [CSS2]
specification suggests a sample
default style sheet for HTML 4 based on current practice.
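As an illustration, a user agent's default style sheet for HTML might include rules along these lines (the values here are examples in the spirit of the CSS 2 sample style sheet, not normative defaults):

```html
<style>
  /* Illustrative user agent default styles, applied only in the
     absence of author or user styles. */
  h1 { display: block; font-size: 2em; font-weight: bolder; }
  p  { display: block; margin: 1em 0; }
  em { font-style: italic; }
</style>
```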

The user agent user
interface, i.e. the controls (e.g. menus, buttons,
prompts, and other components for input and output) and mechanisms
(e.g. selection and focus) provided by the user agent ("out of the
box") that are not created by content.

The "content user interface," i.e. the enabled elements that are
part of content, such as form controls, links, and applets.

The document distinguishes them only where required for clarity. For
more information, see the section on requirements for content, for user
agent features, or both @@.

The term "user interface control" refers to a component of the user
agent user interface or the content user interface, distinguished where
necessary.

A user interface function that lets users interact with web content. UAAG 2.0 recognizes a variety of approaches to presenting the content in a view, such as:

rendered view: Views in which content is presented such that it is rendered, played or executed. There are several sub-types:

In conventionally rendered views the content is rendered, played or executed according to the web content technology specification. This is the default view of most user agents.

In unconventionally rendered views the content is rendered quite differently from what the technology specification describes (e.g. rendering an audio file as a graphical waveform); or

source view: Views in which the web content is presented without being rendered, played or executed. The source view may be plain text (i.e., "View Source") or it may include some other organization (e.g., presenting the markup in a tree).

outline view: Views in which only a subset of the rendered content is presented, usually composed of labels or placeholders for important structural elements. The important structural elements will depend on the web content technology, but may include headings, table captions, and content sections.
Note: Views can be visual, audio, or tactile.

Top-level viewports are
viewports that are not contained within other user agent viewports.

The part of a view that the user agent is currently presenting onscreen to the user, such that the user can attend to any part of it without further action (e.g. scrolling). There may be multiple viewports onto the same view (e.g. when a split screen is used to present the top and bottom of a document simultaneously) and viewports may be nested (e.g. a scrolling frame located within a larger document). When the viewport is smaller in extent than the content it is presenting, user agents typically provide mechanisms to bring the occluded content into the viewport (e.g. scrollbars).

A visual object is content rendered through a
graphical viewport. Visual objects include
graphics, text, and visual portions of movies and other animations. A
visual track is a visual object that is intended as a whole or partial
presentation. A visual track does not necessarily correspond to a
single physical object or software object.

References to the latest version of "User Agent Accessibility
Guidelines 2.0." Use the "latest version" URI to refer to
the most recently published document in the series: http://www.w3.org/TR/UAAG20/.

In almost all cases, references (either by name or by link) should be to
a specific version of the document. W3C will make every effort to make UAAG 2.0 indefinitely available at its original address in its original form.
The top of UAAG 2.0 includes the relevant catalog metadata for specific
references (including title, publication date, "this version" URI,
editors' names, and copyright information).

An XHTML 1.0 paragraph including a reference to this specific document
might be written:

For very general references to this document (where stability of content
and anchors is not required), it may be appropriate to refer to the latest
version of this document. Other sections of this document explain how to build a conformance
claim.

Appendix C: References

For the latest version of any W3C specification please
consult the list of W3C Technical Reports at
http://www.w3.org/TR/. Some documents listed below may have been superseded
since the publication of UAAG 2.0.

Note: In UAAG 2.0, bracketed labels such as
"[WCAG20]" link to the corresponding entries in this section. These labels
are also identified as references through markup.

This publication has been funded in part with Federal funds from the U.S.
Department of Education, National Institute on Disability and Rehabilitation
Research (NIDRR) under contract number ED-OSE-10-C-0067. The content of this
publication does not necessarily reflect the views or policies of the U.S.
Department of Education, nor does mention of trade names, commercial
products, or organizations imply endorsement by the U.S. Government.