Month: October 2016

How do users navigate and interact with web pages?

To help understand the reasons behind technical accessibility requirements, it's worthwhile describing just a few of the strategies available to a screen reader user for navigating and interacting with web pages.

Tabbing:

Users can jump forwards or backwards from one focusable control to the next using the TAB and Shift-TAB keys. Hyperlinks and form controls are always in the tab order, but controls built from other HTML elements (e.g. <span>s and <div>s) must be added to the tab order manually using tabindex.
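As a sketch of the difference (the element names and text here are illustrative):

```html
<!-- A native button: focusable and in the tab order by default -->
<button type="button">Save</button>

<!-- A custom control built from a <div>: it must be added to the
     tab order explicitly with tabindex="0" (a value of "-1" would
     make it focusable only from script, not via TAB) -->
<div role="button" tabindex="0">Save</div>
```

Note that tabindex only makes the <div> focusable; activation via enter and space still has to be wired up separately with script.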

Arrow keys, enter and space:

Arrow keys are used to interact with controls (e.g., a select dropdown or set of radio buttons); enter and space often activate buttons, links and other focusable controls.

Tables:

<caption>s and row and column headings (<th>s): Using a command key, users can jump directly from table to table in a webpage, and then navigate the rows and columns using arrow keys. If the table has a <caption>, it is announced when it receives screen reader focus; if the table has properly marked up row and column headers, they are announced while users navigate up and down rows and across columns.
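A minimal sketch of a table marked up this way (the caption and data are made up for illustration):

```html
<table>
  <caption>Monthly rainfall (mm)</caption>
  <tr>
    <th scope="col">Month</th>
    <th scope="col">Rainfall</th>
  </tr>
  <tr>
    <!-- scope="row" marks this cell as the heading for its row -->
    <th scope="row">October</th>
    <td>86</td>
  </tr>
</table>
```

The scope attribute tells screen readers whether a <th> heads a column or a row, so the right heading can be announced as users move between cells.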

Headings:

Users can navigate a hierarchical heading structure. For instance, users can move directly to the <h1> (which should head up the main content) when the page loads, or jump from one subsection to the next if they are headed by <h2>s. They can also access a list of headings, ordered by appearance, hierarchy or alphabetically, through which they can browse and move focus to any one directly.
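For example, a page with a heading structure like the following can be navigated section by section (the headings themselves are illustrative):

```html
<h1>Choosing a screen reader</h1>
<h2>Desktop screen readers</h2>
<h3>Windows</h3>
<h3>macOS</h3>
<h2>Mobile screen readers</h2>
```

Because the <h2>s and <h3>s nest properly under the <h1>, users can jump between sections at a chosen level, or skim the heading list to get an overview of the whole page.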

ARIA landmark roles:

If ARIA landmark role attributes are used to mark headers, navigation, search widgets, main content and footers, users can jump from one block to the next.
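A sketch using explicit role attributes (HTML5 sectioning elements such as <header>, <nav>, <main> and <footer> map to most of these roles implicitly in modern browsers):

```html
<div role="banner"><!-- site header --></div>
<div role="navigation"><!-- main menu --></div>
<div role="search"><!-- search form --></div>
<div role="main"><!-- main content --></div>
<div role="contentinfo"><!-- footer --></div>
```

With these in place, a screen reader can list the page's landmarks and jump straight to, say, the main content, skipping past the header and navigation.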

Hyperlinks:

Users can jump from hyperlink to hyperlink, or bring up a list of hyperlinks, order them in a number of ways, and then move to or follow one of them.

<label>s and <fieldset>s:

Likewise for form controls – users can jump sequentially through form controls, or bring up a list of labels, then order them in different ways, browse through them, choose one, and jump straight to that control. If a form control is inside a <fieldset> with a <legend>, the legend is announced as context for each of the labels the <fieldset> contains.
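A minimal sketch (the field names are illustrative):

```html
<fieldset>
  <legend>Delivery address</legend>

  <!-- the for/id pairing associates each label with its control -->
  <label for="street">Street</label>
  <input type="text" id="street" name="street">

  <label for="city">City</label>
  <input type="text" id="city" name="city">
</fieldset>
```

When focus lands on the Street field, a screen reader can announce something like "Delivery address, Street, edit text" – the <legend> providing the context for each label.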

Lists:

Users can jump to the beginning or end of a list (<ul>, <ol>, or <dl>) with one command, or move through each list item in turn.
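For example (the content is illustrative):

```html
<ul>
  <li>Keyboard</li>
  <li>Braille display</li>
</ul>
```

Because the markup is a real <ul>, a screen reader can announce it as a list of two items, and users can skip past the whole thing in one move.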

The implications? Use semantic markup

Reading through the short list of strategies above, it should be apparent that effectively understanding and navigating web pages relies heavily on semantic markup and structured content (headings, lists, labels, fieldsets, etc.). This is why it is important to always use appropriate semantic elements, and why doing so is a requirement of WCAG 2.

Remember, while semantically neutral markup may be styled using CSS to visually convey structure, that structure cannot be detected by screen readers – so the structural and semantic information is lost to their users. For example, all of the following can be styled with CSS to visually convey structure and meaning, but screen reader users will miss out on this contextual and structural information:

Using <div>s for <fieldset>s

<b> or <strong> for headings or labels

<span>s with onclick events standing in for hyperlinks

a series of <span>s for a list
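The hyperlink case can be sketched like this (go() is a hypothetical script function, named only for illustration):

```html
<!-- Looks like a link when styled, but is not focusable, is not in
     the tab order, and does not appear in a screen reader's link list -->
<span class="link" onclick="go('/about')">About us</span>

<!-- A real hyperlink: focusable, announced as a link, and listed
     among the page's links -->
<a href="/about">About us</a>
```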

Furthermore, if semantic markup is used inappropriately, it can lead to screen reader users missing or misunderstanding page content, or perhaps skipping over what they assume is content they are not interested in. For example:
Using headings to embolden text instead of <strong>

Form controls with no <label>s

<label>s with no form controls

non-hierarchical use of headings

buttons where a hyperlink would be more appropriate (e.g. for navigation links)

Taking all of the above into account, screen readers, like browsers, work most robustly and consistently when web pages are well crafted with semantic code.

Remember: One of the main foundations of accessibility is using semantic markup.

While WCAG has been formulated without reference to specific assistive technologies, among the most widely used are screen readers.

Screen readers are applications which allow people – usually people who are blind or partially sighted, but also others, such as people with dyslexia – to use computers, including operating systems, word processors, integrated development environments, music players – and of course browsers.

Screen readers:

Communicate all content by voice, and/or by braille display.

Enable users to navigate a site, and explore a webpage’s content and current states without needing to use a pointing device or view a screen.

Alert users to changes in state and content of web pages.

Enable users to read and interact with a web page’s links, forms, widgets and other focusable controls using only a keyboard.

Who uses them?

Screen readers are used by:

Blind people, and partially sighted people without enough useful sight to see and operate a webpage.

Partially sighted people who have some useful vision, but who might find it inconvenient or exhausting to rely on sight alone.

People with sensitive eyes who find looking at a screen for prolonged periods painful.

Dyslexic people who might have good vision, but have difficulty in reading text.

How do they work?

We'll concentrate on screen readers used with browsers, although of course they are used to interact with most software using similar methods.

A screen reader application acts as an intermediate layer between the browser and the user. Hooking into the browser's accessibility API, it builds its own Accessible DOM from the browser's DOM of the webpage. When someone operates a screen reader, they are interrogating and navigating this Accessible DOM, rather than navigating the browser's DOM directly. The screen reader manages its own virtual cursor, which is independent of the one seen in the browser – be aware that the virtual cursor position may not match the browser's cursor visible onscreen! Of course, to the screen reader user it appears they are interacting directly with the webpage.

Navigation and interaction is entirely keyboard based, using tab and shift-tab, arrow keys, and other special command keys. There is no reliance whatsoever on pointing devices, and neither is it necessary to see the screen, or even have one plugged in.

When users interact with the Accessible DOM – for example, clicking links or using widgets such as drop-down menus – the screen reader forwards those commands on to the browser. Screen readers also allow users to fill in forms, often using some form of 'forms mode', which, for instance, treats keystrokes as input for text boxes rather than as commands to move around the page.
If the content of a webpage (i.e., the browser’s DOM) updates or changes, whether caused by the user or not, the browser’s accessibility API triggers change events. The screen reader can detect these, and it updates its own Accessible DOM accordingly.

Sighted users will probably be able to see the changes, but screen reader users will not know the changes have happened unless they happen to navigate to them later on. Sometimes this is fine, but if it is important for screen reader users to be aware of the changes (e.g., an error message), the webpage can be marked up with ARIA (Accessible Rich Internet Applications) live regions. Then, when content changes inside these regions, screen readers alert users of those changes as they happen, with varying degrees of specified urgency.
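A sketch of the two common levels of urgency (the ids are illustrative):

```html
<!-- "polite": changes are announced when the user next pauses -->
<div aria-live="polite" id="status"></div>

<!-- role="alert" implies aria-live="assertive": changes interrupt
     the user immediately, so reserve it for things like errors -->
<div role="alert" id="errors"></div>
```

When script inserts text into either region, screen readers announce it without the user having to move focus there.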

It's important to understand that users, mediated by the screen reader, access the DOM of the webpage directly, and pay no attention to the visual representation of the webpage the browser builds using CSS.

The next post will describe how screen reader users interact with web pages…