Preparing Your Website For Voice-Recognition Usability

Posted by Nick Rigby | July 26, 2017 | 3:21pm

As an English native living in America whose accent technology often struggles to understand, I find this a bit of a sore subject. Nonetheless, voice-recognition functionality may just be the next big step for website usability.

Amazon, Google, Microsoft, and now Apple are racing to deliver the best voice-activated consumer products. And as this technology becomes more common, users may come to expect a similar “talk and response” relationship with websites.

It’s hard to imagine a future in which a website’s user interface disappears entirely and the information on the site is conveyed by devices like Amazon Echo, but I do foresee websites using voice-activated and vocal communication tools to save users time and make it easier for them to find information.

For example, a university website could have a search function in which users ask common questions, like “When’s the next campus tour?” or “What science-based degrees do you offer?” — and are then taken to a corresponding part of the website or read a response. Particular web pages could even utilize screen reading technology for especially long paragraphs or news articles. It all depends on what users will want and expect from a website.

Of course, a similar user experience already exists as a way to make websites accessible to visually impaired users.

When navigating a website, visually impaired users rely on assistive technology that reads aloud information coded into a website’s HTML. You can think of HTML as a blueprint for a website’s content. For example, if a web page starts with the heading “About the Author”, and then has the sentence “Ernest Hemingway was born on July 21st, 1899” underneath, the HTML should read:
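Something along these lines, with the heading marked up as a heading element and the sentence as a paragraph (the exact heading level would depend on the page’s structure):

```html
<h1>About the Author</h1>
<p>Ernest Hemingway was born on July 21st, 1899.</p>
```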

When the HTML is semantic, meaning each element accurately describes the content it contains, it’s easier for visually impaired users to navigate a web page and listen to the exact information they need. For example, if a page has several headings marking different sections of content, assistive technology can often jump between them so the user can skim the page and read only the sections they care about. So if the user doesn’t care about “About the Author”, they can skip the body copy under it and jump to another section of the site.

If the HTML is poorly structured and less semantic, perhaps because it doesn’t use an h1 or skips from an h1 to an h3, the user can have a much harder time navigating the website. This happens quite often, because web developers can write HTML that ignores semantic rules yet looks identical on the front end to a site that follows them. The result is a page that seems perfectly fine to the average user but is difficult to navigate with assistive technology.
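As a sketch of the difference, consider two fragments that could be styled to look identical in a browser. Assistive technology can build a navigable outline from the first, but not reliably from the second:

```html
<!-- Semantic: heading levels descend in order, so assistive
     technology can present them as a navigable outline -->
<h1>About the Author</h1>
<h2>Early Life</h2>
<h2>Major Works</h2>

<!-- Less semantic: the page may look the same with CSS applied,
     but the skipped level (h1 to h3) and the styled <div> posing
     as a heading break heading-based navigation -->
<h1>About the Author</h1>
<h3>Early Life</h3>
<div class="heading">Major Works</div>
```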

That’s why we write proper, semantic HTML. We know that by doing so, the websites we build aren’t just accessible to more users. They’re also more SEO-friendly, since Google crawls HTML to find information, and they’re potentially primed for the future of voice-activated tools, since voice-activated software would likely depend on well-structured HTML to read information aloud.

Preparing for the future of voice-activated web usability is a group effort between developers, copywriters, content strategists, designers, and user experience professionals. It goes far beyond organized HTML. We need site structures built for navigation so software can find information, content organized around the questions audiences actually ask, and copy that’s well-written, persuasive, and on-brand.

We may even reach a time when a website will need a literal brand voice to read information to users. And when we do, consider this your invitation to hire my inherently charming, though not always coherent, British accent. Cheers, mate!