Archive for the ‘accessibility’ Category

The classic usability complaint is that projects just tack a usability test on at the end of development, when it is too late to make any changes. This leaves the usability consultant in the unenviable position of having to tell the project team that their product doesn’t work, when nothing can be done about it. It can feel like a waste of time and money.

In reality, these sessions are rarely entirely useless, and I’d rather run them than have nothing at all. A lot of the feedback is often about content, which can usually be changed at the last minute. You can also capture general customer research insights that can feed into the next project.

A couple of projects I’ve been involved with recently have included late-stage usability testing. We need to tackle this, but we’ve got some bigger challenges than usual in bringing in a better approach to usability testing.

1. The organisation can’t afford rounds of testing

This is hardly unique to us and I fully expected this when I took the job. The answer usually involves the word “guerrilla” at some point.

2. We have some challenges in doing guerrilla testing

Our target audience (blind and partially sighted people) is a small section of the population and can’t easily be found by popping into libraries and coffee shops. Everybody else really isn’t representative and would give completely different results. Admittedly, our target audience can often be found in our own offices, or rather in the public resource centre downstairs. But you can’t just get them to test on your laptop, as they need the access tech they are used to using. We might need to find people who are both willing to test and who use the access tech we have available. These aren’t insurmountable problems, but they will take a bit of planning.

3. Can’t easily do paper-based testing or flat onscreen mock-ups.

I’ve mentioned this particular challenge before. We can survey and interview quite easily. We can test existing or competitor systems. But when it comes to trying out how well new designs are working, our options narrow considerably. Whilst it would be interesting to experiment with tactile mock-ups, the admin overheads and learning curve probably aren’t justified. Really we should just concentrate on working prototypes, rather than getting carried away with how cool an IA presentation idea “tactile wireframes” is.

I’m not a screenreader expert and if you are wondering how your site works in screenreaders it is worth getting it tested properly by experts. But if you just want to get a flavour of what it is like to use a screenreader or how screenreaders cope with particular types of content…then these tools might be helpful.

Fangs Screen Reader Emulator :: Add-ons for Firefox. This Firefox add-on produces a text-only version of your page to give you an idea of how a screenreader might read it. It’s only an approximation, as the output depends on the particular screenreader, and it doesn’t help you understand how the page might actually sound.
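To get a feel for what this kind of linearisation involves, here’s a rough stdlib-only Python sketch. It is not how Fangs actually works, just an illustration of the idea: headings get announced with their level, links are flagged, and images fall back to their alt text.

```python
# Illustrative only: a crude screenreader-style linearisation of HTML.
from html.parser import HTMLParser

class Linearizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.output = []
        self._context = []  # stack of announcements for open tags

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self._context.append(f"Heading level {tag[1]}:")
        elif tag == "a":
            self._context.append("Link:")
        elif tag == "img":
            # Images are read via their alt text, if any
            self.output.append(f"Graphic: {attrs.get('alt', '')}")

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6", "a") and self._context:
            self._context.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            prefix = self._context[-1] + " " if self._context else ""
            self.output.append(prefix + text)

parser = Linearizer()
parser.feed('<h1>News</h1><p>See <a href="/x">our report</a>.</p>'
            '<img src="logo.png" alt="RNIB logo">')
print("\n".join(parser.output))
```

A real screenreader does far more (tables, forms, landmarks, verbosity settings), which is exactly why this only gives you a flavour.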

If you want to experience the actual audio experience:

NVDA is a free and open source screen reader for Windows. Apparently it works best with Firefox. I find it useful for quickly pointing the cursor at a bit of the page and listening to how that is read out. If you want to get a real sense of how the page might be navigated, then you’ll need to learn some of the commands. And you’ll probably want to slow it down to start with (go to preferences > voice controls).

JAWS is a widely used screenreader but definitely not free. You can, however, download a free trial. As with NVDA, you’ll need to learn some commands.

All the screenreaders are easier to use if you tend to use the keyboard more than the mouse. You’ll already be in the habit of memorising all those key combinations.

It is important to remember that a screenreader’s experience of your page will vary depending on how many of the screenreader’s functions the user knows and how they have their preferences set. The setting that controls how much punctuation is read makes a big difference, but there are legitimate reasons for having it set to read all punctuation (which probably makes the page sound worse and harder to process).

After some of the frustrations with the accessibility of the iPhone when it first launched, I wondered what people were saying about the accessibility of the iPad. There’s not masses of commentary yet, and none of it seems to come from anyone with first-hand experience (unsurprisingly).

This didn’t stop abledbody being unimpressed with the accessibility of the announcement:

“In Apple’s rush to debut the new iPad tablet it forgot one little piece of marketing: Accessibility. Apple has an accessibility page but it didn’t bother to add the iPad before launching it yesterday at its headquarters. And even though Steve Jobs’ keynote was likely prepared, Apple didn’t bother to add captions for deaf or hard of hearing reporters, nor did it add captions to the 46-minute video broadcast of Jobs’ speech or the video “demo” of the new tablet.”

But they do go on to say that the iPad has the same accessibility features as the iPhone including VoiceOver, screen zoom, mono audio and closed-captioned support. They believe the size and weight are a good thing, as are the built in speakers.

Not so good is the shortage of captioned content to actually watch, and the inability to plug in alternative input devices.

Mac-cessibility Network comments that “iWork for the Mac is almost entirely accessible, and Apple has made it a point to have good access to its AppStore offerings. We expect iWork for the iPad to be accessible, but this is not confirmed.”

They also have content concerns:

“To date, electronic book stores, such as Amazon’s Kindle store, have not provided books in an accessible format, owing to DRM restrictions. We hope Apple may be able to pave the way for the visually impaired and their access to content with the iBooks application and store. If VoiceOver does indeed have access to the content in these publications, it would be a tremendous step forward for access to printed media.”

Web accessibility is a reasonably familiar topic for IAs but document accessibility is also important. Here’s some considerations for your typical Word documents.

To support screen magnification and other adjustments:

don’t set the text to black; choose automatic (if you set the text to black and the person reading has the colours reversed for ease of reading, then all of your text will disappear)

use a simple, clear font, e.g. Arial

avoid italics

use left-aligned text, including headings (screen magnification users often don’t realise there is content that is centred or right-aligned)

don’t use other colours for fonts (the RNIB training specifically asks us not to use fancy colours like purple. I don’t think it was particularly aimed at me)

use 14 point text as the standard font size (this seems huge to me, but it is our recommended standard for meeting the needs of most readers)

To support screen readers with speech output:

use the correct Word styles

use heading hierarchies to communicate the structure of the document
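To illustrate why the heading hierarchy matters, here’s a small hypothetical checker (not a feature of Word, just a sketch of the principle): it flags skipped heading levels, such as a Heading 3 placed directly under a Heading 1, which break the outline that screenreader users navigate by.

```python
# Hypothetical helper: flag skipped heading levels in a document outline.
def heading_skips(levels):
    """Return (position, previous_level, current_level) for each skip."""
    skips = []
    prev = 0
    for i, level in enumerate(levels):
        if level > prev + 1:  # e.g. jumping from Heading 1 to Heading 3
            skips.append((i, prev, level))
        prev = level
    return skips

print(heading_skips([1, 2, 2, 3, 2]))  # -> [] (well-formed outline)
print(heading_skips([1, 3, 2]))        # -> [(1, 1, 3)] (jumps 1 -> 3)
```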

You’ll note the advice is less detailed for screen readers. This mirrors my experience with web design at the RNIB. Outside the RNIB, most accessibility conversations I heard focused on the challenge of designing for screenreaders, but the challenges are much greater in designing for both magnification users and fully sighted users at the same time.

Some of my colleagues use screen readers with braille display output. I’d never come across this particular form of access tech before, and it wasn’t immediately obvious why the braille displays are necessary… or indeed how they work. They look a bit space-age, or rather a 1960s sci-fi movie’s idea of space-age.

According to Wikipedia:

“The mechanism which raises the dots uses the piezo effect of some crystals, where they expand when a voltage is applied to them. Such a crystal is connected to a lever, which in turn raises the dot. There has to be a crystal for each dot of the display, i.e. eight per character.”

An RNIB training video made the why clear. The braille output is often used in combination with speech output and it is particularly useful for punctuation, spelling and codes. These can’t be easily heard in the speech output, at least not without seriously compromising your ability to listen to the speech comfortably. You can ask the screenreader to speak all the punctuation and spell out words, but you wouldn’t always want it to be doing that. And the braille display is much more like reading, as opposed to listening, which could make it easier for precision work and for remembering. The video featured a computer programmer explaining how valuable the braille display is for her when reading computer code.

They’re not cheap though. The Braille display available from the RNIB shop is £1,195.00 (Ex. VAT).

Being the IA in an organisation that is fundamentally and very practically committed to accessibility is for the most part an IA dream.

Imagine it. A top-down drive towards machine readable content. An emphasis on the content rather than the style. A management team that understands that whizzy and award-winning is no b****y use if your users can’t use it (unless you count getting management their next job as a use).

But occasionally IA and accessibility, if not conflict, at least exchange a couple of slightly sniffy words.

Let’s take machine readable for a start. Which machine is doing the reading? And what language does it speak? Google and JAWS, at the very least, speak different dialects. I’ve been struggling for a while to get to the bottom of the punctuation-in-URLs issue. SEO suggests a slight preference for hyphens in URLs, while screenreaders (well, JAWS) seem to work better with CamelCase than with either hyphens or underscores (if the screenreader is set to read out the punctuation, then imagine listening to all those underscores). It isn’t clear cut with either technology.
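A rough sketch of the trade-off. The spoken strings below are a simplification of what a screenreader with punctuation verbosity turned up might announce, not exactly what JAWS says, and the URLs are made up:

```python
# Illustrative only: how URL separators might be announced when a
# screenreader is set to read all punctuation.
def spoken(url_slug):
    return (url_slug.replace("-", " dash ")
                    .replace("_", " underscore "))

print(spoken("annual-report-2010"))   # annual dash report dash 2010
print(spoken("annual_report_2010"))   # annual underscore report underscore 2010
print(spoken("AnnualReport2010"))     # AnnualReport2010 (nothing extra to read)
```

CamelCase has no separators to announce, which is the heart of its appeal here, while hyphens keep the SEO folk happy.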

(as an aside, I was impressed to discover that JAWS seems to get Latin and had no trouble trotting through the Lorem Ipsum in lots of my documents)

In an effort to get a local navigation that shows the user where they are on the site, regardless of whether they are using a screenreader, we’ve ended up with a rather unfamiliar pattern of navigation on our new site. And as a general rule I don’t like novel patterns for common stuff like navigation. No-one wants to think about navigating.

But mostly my IA instincts and the needs of screenreader users are happily in tune, or at the very least don’t interfere with each other (courtesy of the magic of CSS).

Where it really gets interesting is when you consider screen magnification users. Screen magnification users are using the same interface as everyone else, just a whole lot bigger. I actually find screen mag much harder to use than a screenreader. I can mostly touch type and I tend to use the keyboard rather than the mouse so I don’t find a screenreader too much of a leap (when the site is accessible, of course!). But a significantly magnified screen is just baffling. It is the world as you knew it but nothing quite works the same. And moving around the screen just makes me feel a bit sick.

So some design constraints are: You can only see a very small amount of the screen at any one time. You don’t know where the next bit of information is, unless part of it is already on screen. And you don’t want to have to go back and forth on the page.

In many ways this helps the IA. It reinforces the need to follow accepted patterns. If the mag user is expecting the search box to be top left then don’t stick it in the middle of the left column or they’ll never find it.

Magnification creates a slight preference for a linear, left-aligned layout. You have to be careful with white space, otherwise the mag user is left with nowhere to go. I’m noticing a tendency for my layouts to end up with empty space towards the right and bottom of the page.

A similar issue that isn’t really about magnification but about designing for low vision comes up when you design for significant font resizing. You can find that you are not making full use of the screen when the font is smaller.

Now all of this can be sorted out with some clever information design and a CSS whizz. Except maybe the URL punctuation, but I should probably just get over that and worry about something a little more important.

“From the data you can see that with CAPTCHA on, there was an 88% reduction in SPAM but there were 159 failed conversions. Those failed conversions could be SPAM, but they could also be people who couldn’t figure out the CAPTCHA and finally just gave up. With CAPTCHA’s on, SPAM and failed conversions accounted for 7.3% of all the conversions for the 3 month period. With CAPTCHA’s off, SPAM conversions accounted for 4.1% of all the conversions for the 3 month period. That possibly means when CAPTCHA’s are on, the company could lose out on 3.2% of all their conversions!

Given the fact that many clients count on conversions to make money, not receiving 3.2% of those conversions could put a dent in sales. Personally, I would rather sort through a few SPAM conversions instead of losing out on possible income.”
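The arithmetic in that quote is easy to check. A quick sketch, using the study’s own percentages (theirs, not mine):

```python
# Reproducing the quoted study's arithmetic: the gap between the failure
# rate with CAPTCHA on and the spam rate with CAPTCHA off is the share of
# conversions potentially lost to the CAPTCHA itself.
captcha_on_loss_pct  = 7.3  # spam + failed conversions, CAPTCHA on
captcha_off_spam_pct = 4.1  # spam conversions, CAPTCHA off

potential_lost_pct = round(captcha_on_loss_pct - captcha_off_spam_pct, 1)
print(potential_lost_pct)  # 3.2
```

The caveat the author acknowledges still stands: some of those failed conversions may themselves have been spam, so 3.2% is an upper bound on genuine customers lost.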

Most of the document is marketing bumf but chapter 5 is a useful set of principles and guidelines. It is worth noting that Microsoft is not saying that these principles are requirements/commitments for their products. They tend to say something more vague like “[this is] an approach which Microsoft is integrating into its products”.

Peter Morville has published a list of UX deliverables, complete with cute icons.

It is a nice list but the preamble rang warning bells for me, with lots of enthusiasm for visual thinking. I’m increasingly unable to benefit from discussions about IA deliverables in the IA community because I have to produce deliverables that are accessible to blind and partially sighted people.

The list started well in terms of accessibility with stories and proverbs, hardly typical on a list of UX deliverables. I’ve reviewed Peter’s list and compared it to my early thoughts on accessible deliverables to see if I’ve progressed at all.

stories – fine

proverbs – great, potentially even more memorable than stories and consequently repeatedly accessible

personas – works, but without the poster

scenarios – ok without the illustrations

content inventories – fine, but needs careful layout in Excel

analytics – presentation can be tricky. collection software often inaccessible

surveys – much the same as analytics

concept maps – love them but very tricky

system maps – tricky – we tend to cobble something together in Excel/Word and use outlining to create a hierarchy

process flows – also tricky

wireframes – largely doomed, if being used for a partially sighted audience then you need to think very carefully about descriptive text and the positioning of annotations

storyboards – definitely doomed

concept designs – ditto

prototypes – paper no, XHTML could be good, not sure about tools like Axure

narrative reports – fine, although any illustrations will be a problem

presentations – forget the PowerPoint, just talk

plans – don’t know if MS Project works with screenreaders; could probably do something that sort of works in Excel

specifications – as for narrative reports

style guides – depends how it is produced, some elements will be inaccessible but acceptably so

Looking at all those deliverables that are essentially flows or concept maps makes me think a screenreader-friendly mapping technique would be a big win. Even if you still won’t be able to “see it all at once”!
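One sketch of what such a technique might look like, assuming the map can be flattened into a tree: express the concept map as an indented text outline, which screenreaders handle far better than a diagram. The example map here is made up:

```python
# Illustrative only: render a concept map (as an adjacency dict) as an
# indented outline that a screenreader can read top to bottom.
def outline(tree, node, depth=0):
    lines = ["  " * depth + node]
    for child in tree.get(node, []):
        lines.extend(outline(tree, child, depth + 1))
    return lines

concept_map = {
    "Accessibility": ["Screen readers", "Magnification"],
    "Screen readers": ["Speech output", "Braille display"],
}
print("\n".join(outline(concept_map, "Accessibility")))
```

You lose the cross-links a real concept map can have, but the hierarchy itself survives, and in Word you could do the same with heading levels or the outlining view.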

I first attended the IA Summit in 2004 and I’ve gone every year since. Each time the conference has given me a much needed boost of energy and optimism. So I’m sad not to be going to Memphis.

Timing isn’t good with one project launching and another kicking off in anger. But I would also have struggled to make the business case to my charity employers. We have budget to send staff to conferences but we need to be really really clear about the benefits.

The programme this year looks intriguing as ever but there’s nothing explicitly about my sector (charities), main products (intranet and CRM), technology (SharePoint) or dominant issue (accessibility). There is a session about Agile and one on Web Standards but they’re the only sessions that my organisation would recognise as being relevant to what I do.

The presentation titles aren’t really very helpful on their own (Evolve or Die? You’re Not Doing It Right? IA Spy School? A House Divided?). I needed the descriptions when I was trying to make the business case!

I’ve got no team to manage anymore so the UX management stream is far less relevant than when I was at the BBC. I can’t use visual communication methods like comics and lots of IA deliverables wouldn’t be easily re-usable with blind team-members without a lot of effort. Anything too future-facing/web 3.0 is just pie in the sky when you are still trying to get web 1.0 to work for all your users.

The strategic stuff would be applicable, although it is nowhere near as imperative in a 3,000-person organisation compared to a 30,000-person one. MetaSearch, Facets of Faceting, and Business Centred Design all sound like sessions I would attend, but they’re not enough.

Interestingly, having always worked in not entirely commercial companies, I feel a much greater sense of responsibility for the RNIB’s cash. The money we receive (for the most part) comes from people who wanted to make someone else’s life better, rather than expecting to get some benefit in return.

Getting employees re-energised and re-inspired is a legitimate way for charities to use that money… but I feel an obligation to think of ways of achieving the same goal that don’t require me to fly to Memphis.