5 Second Test: An important conversion optimization tool

A 5 second test can help increase website conversion and improve online ROI. Here’s how.

Five seconds may seem like a short time, but it is more than enough time for a website visitor to decide whether your website has enough quality to be worth staying on, or to leave, potentially never to return. Using a 5 second test to optimize conversion is a powerful way to improve the ROI of a website. In the hundreds of website audits I have conducted over the years, I have found that a critical driver of website success is the ability of the home page, or any page for that matter, to deliver three pieces of critical information in five seconds or less:

Who are you?

What product or service do you provide?

Why should I care (what’s in it for ME)?

Websites that quickly and efficiently communicate these three critical elements within 5 seconds typically have much better conversion, and thus ROI, than websites that don’t.

Why Five Seconds

But why five seconds? Five seconds matters because research studies demonstrate that website visitors take a very short amount of time, in some cases a fraction of a second, as little as 50 milliseconds, to judge the quality of a website. As stated in an important study of the timing of website visual quality judgments by Lindgaard et al.…

“Our ambition was to determine how quickly people decide whether they like or dislike what they see, and whether such judgments may constitute a mere exposure effect. The above data suggest that a reliable decision can be made in 50 ms, which supports the contention that judgments of visual appeal could represent a mere exposure effect. The level of agreement between participants and between experiments was impressive and highly correlated even for the 50-ms condition.”*

In addition, hundreds of conversion optimization testing studies I have conducted over the years corroborate this: test participants quickly scan a page for just a few seconds before either staying or moving on in their hunt for information.

5 Second Test Definition

I define a 5 second test for websites as…

“A five second test is a usability testing method in which the participant is exposed to an image of a webpage for five seconds. The image is then removed and the participant is asked questions about what they remember seeing on the page. The test is used for evaluating how well the page communicates the purpose and content within.”

5 Second Test Provides Quantitative and Qualitative Data

A major benefit of a 5 second test is the data that can be obtained, which is both quantitative and qualitative. Because the test is so fast and easy to distribute, it can be deployed to dozens, hundreds or even thousands of testers in a matter of hours, or at the most days. It is relatively quick and easy to obtain statistically significant results that can be organized into charts and graphs for analysis. This takes almost all of the guesswork out of validating if a page is communicating effectively or not, and makes it easy to develop useful conversion optimization recommendations.
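Because the data is quantitative, even a quick tally can be turned into numbers you can put in front of stakeholders. Below is a minimal sketch, using entirely hypothetical response data, of how graded 5 second test answers might be tallied and reported with a rough 95% confidence interval on each proportion:

```python
import math
from collections import Counter

# Hypothetical responses to "What product or service does this
# company provide?", coded by a reviewer into three grades.
responses = ["correct"] * 14 + ["partial"] * 22 + ["incorrect"] * 64

counts = Counter(responses)
n = len(responses)

for grade in ("correct", "partial", "incorrect"):
    share = counts[grade] / n
    # Normal-approximation 95% confidence interval for the proportion
    margin = 1.96 * math.sqrt(share * (1 - share) / n)
    print(f"{grade:>9}: {counts[grade]:3d}  ({share:.0%} ± {margin:.0%})")
```

Even with 100 participants, the margins of error are small enough to say with some confidence whether the page is communicating its purpose.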

As the example below shows, this kind of quantitative data is useful for analyzing exactly how well the webpage is working in terms of communicating with the intended audience. In this example we can see that the vast majority of participants had partial or no idea as to what service or product the company provides.

Homepage and the Five Second Test

In fact, one of the better uses of a 5 second test is to conduct tests of the home page. That is because the home page, of all the pages of a website, is most critical for communicating who the firm is, what products or services the firm offers and why the visitor should care (what’s in it for them). Conversion optimizations of the home page based on analysis of 5 second test results can greatly improve page flow, number of pages visited and bounce rate.

To demonstrate how effectively this test can work, examine each of the two images below for 5 seconds, one at a time. Both are for websites that provide a particular product. Can you determine in those 5 seconds:

What product or service is provided?

Who is the company?

What’s the benefit to you, the visitor?

What product or service does this firm provide?

What product or service does this firm provide?

Both sites are in the retail business, and both sell women’s shoes. Both appeared in search results for “women’s shoes.” But did one clearly communicate women’s shoes better than the other?

Most likely you found it easier to identify the product, company and benefit from the second website, which happens to be the Nordstrom shoe page. Even if you were not aware of either brand prior to seeing the images, you probably found it easier to understand the product, company and benefit to you on the Nordstrom page. The first image is also a shoe retailer, but was that as obvious as it is on the Nordstrom page?

Conversion optimization is most effective when quantitative data gathered from the 5 second test exposes issues with the page content, leading to optimization recommendations.

5 Second Test Methodology

To conduct a 5 second test, and use the results for conversion optimization, apply the following methodology…

Evaluate which page you would like to test. Typically I like to start with the home page, as it most often receives the highest amount of traffic, has the highest bounce rate, and is also the most important page for communicating the three critical elements to the visitor. However, there are other opportunities to optimize page conversion with a 5 second test, including landing pages, category pages, product pages, information pages, customer service pages, contact us pages and more.

Capture an image of the page. It’s typically best to control the test by providing an image of the page, rather than sending the test participant to the live page. This is primarily because, on the actual website, some testers may click away to other pages within the 5 seconds to help them identify the purpose of the site. Displaying an image instead of the actual webpage also reduces the chances that the test will be flawed by slow load times or other technology glitches that cause the full website to be displayed for less than the full 5 seconds. By displaying only an image, you, the tester, can control how long the participant views the page and curtail any desire to escape the page to find the missing information.

Identify test participants. If the website has a specific target audience, say, for example, educators at universities, then it can be helpful to find testers who match that Persona. Likewise, if the website is oriented more to the general public, then you can find testers who match the public at large. One word of caution here: ALL website home pages, whether targeting specific audiences or not, should be able to effectively and efficiently communicate the three core elements no matter WHO is viewing the page. This is why it is less essential to be overly focused on the exact Persona for those pages.

Conduct the test. There are multiple ways to conduct the test. A low-tech way is to find someone in the hallway who matches your test participant profile, show them a printout of the page for 5 seconds, then remove the image and ask them your questions. There are online tools you can use as well, including the 5 Second Test website that I mentioned in the 24 Usability Testing Tools review. To record the results, a simple Excel spreadsheet can be used to document each participant’s comments, or, if using the 5 Second Test tool, a download of the results is available.

Analyze the results. Depending on the question, I typically divide the results into three categories: correct answers, partially correct answers, and incorrect answers. For questions about what captured the most attention, I typically list the elements that are important for communicating the purpose of the page, including the company logo, heading or explanation copy, and any strong image. Interestingly, a strong image is often what captures the most attention, yet is also often guilty of not communicating what the product or service is. In all cases I total the results for each group of answers and provide that data, including a chart, in my report.

Make recommendations for conversion optimization. The analysis of the results will quickly reveal where there are opportunities for conversion optimization. Typically these fall into several areas: the brand name or logo is too difficult to see, a strong non-product image or images capture all the attention of the participant, or the value proposition is unclear or missing entirely. The recommendations for optimization will become clear based on which elements of the page are not working.
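The analysis step above can be sketched in a few lines of code. The graded answers below are hypothetical, standing in for a results spreadsheet export:

```python
from collections import defaultdict

# Hypothetical graded answers, one row per participant per question,
# as might be exported from a results spreadsheet.
graded = [
    (1, "Who is the company?", "correct"),
    (1, "What product or service?", "partial"),
    (2, "Who is the company?", "incorrect"),
    (2, "What product or service?", "incorrect"),
    (3, "Who is the company?", "correct"),
    (3, "What product or service?", "partial"),
]

# Total the results for each group of answers, per question
totals = defaultdict(lambda: {"correct": 0, "partial": 0, "incorrect": 0})
for _participant, question, grade in graded:
    totals[question][grade] += 1

# Print a simple text chart suitable for a report draft
for question, tally in totals.items():
    print(question)
    for grade, count in tally.items():
        print(f"  {grade:<9} {'#' * count} ({count})")
```

The same totals can of course be pasted into Excel or a charting tool to produce the charts for the final report.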

5 Second Test of Carousel Sliders

Many websites today use a carousel at the top of the home page that briefly displays a set of horizontally sliding images. Testing this type of page is important because the number of sliders, and how long each stays on screen, directly affect the communicative ability of the page. But how do you test animated sliders? Conversion optimization of multiple sliders may seem difficult, but it is not if the following approach is used.

If the initial slider stays on the home page for 5 seconds or longer, then the problem is solved and that image can be used. If, however, the sliders change more quickly, then try categorizing all of the sliders into common groups and use an example image from each category. For an eCommerce website, for example, there may be several sliders that include a product image, and several that are information only, with no product image or with an image of something other than a product. I typically try to test each category of images. Ultimately you could test all the slider images, but that will require a much larger pool of testers to draw from.
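The grouping approach described above can be sketched as follows; the slider filenames and categories here are hypothetical stand-ins for a real carousel inventory:

```python
# Hypothetical carousel inventory: (image file, category) pairs.
sliders = [
    ("spring_sale.png", "promotion"),
    ("red_pumps.png", "product"),
    ("free_shipping.png", "promotion"),
    ("black_boots.png", "product"),
    ("store_locator.png", "information"),
]

# Keep the first image seen in each category as its representative,
# so each slider group gets exactly one test image.
representatives = {}
for image, category in sliders:
    representatives.setdefault(category, image)

for category, image in sorted(representatives.items()):
    print(f"Test {image!r} to represent the {category!r} slider group")
```

Testing one representative per category keeps the tester pool small while still covering every type of message the carousel rotates through.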

Conclusion: 5 Second Test and Conversion Optimization

The 5 second test is a powerful tool to test the communicative ability of a webpage and provide data for conversion optimization. Remember that the test’s purpose is to evaluate how well the page communicates the following three critical elements:

Who is the company?

What product or service is provided?

What’s the benefit to your visitor?

A 5 second test can provide quantitative data as well as qualitative data, and because of the speed and efficiency can produce very quick results. Using a five second test on critical pages of a website is one of the best ways to identify opportunities to optimize the UX and thus improve conversion and ROI.

How to Not Throw Out 125 Million Dollars

“As the WSJ’s Drew Fitzgerald reported earlier today, Avon is pulling the plug on a $125 million software system rollout that has been in the works for four years after a test of the system in Canada drove away representatives the door-to-door beauty product company relies on to drive sales.

Avon began testing the new order management software system in Canada in the second quarter. While the new system based on software supplied by SAP AG worked as planned, it was so burdensome and disruptive to the representatives’ daily routine that they left in meaningful numbers. Avon relies on a direct sales model where its representatives aren’t employees, which makes it difficult to add new tasks associated with the software system.”

Ouch.

If you are a CIO and you like your job I have a simple tip for you…

“Conduct usability testing often and extensively when developing new applications.”

If you are a CIO you can stop reading now. Thanks for stopping by!

But for the rest of us, let us examine why Avon ended up throwing out $125,000,000.00 and four long years of work by what was probably a large team. The reason is called usability, or user experience. Without an easy-to-use and satisfying user experience, your application will NOT be used by your users and you will have wasted your money.

As the article in the Wall Street Journal states…

“At a time when people are accustomed to using well-designed applications from companies such as Google Inc. and Apple Inc. in their personal lives, they have little patience for workplace applications that leave them confused. Functionality is no longer the definition of success. Usability is key.”

Just to repeat a very important phrase…

“USABILITY IS KEY”

Here are five tips you can use to make sure YOU do not throw out one hundred and twenty five million dollars:

1. Identify Your Personas

A Persona is a fictional representation of your typical users, and includes both behavioral and goal oriented information that is critical for design decisions. In a previous article I wrote about tips you can use to create Personas and how to avoid bad ones, but suffice it to say it is critical that you have Personas BEFORE starting any application development project.

2. Design for Persona Critical Tasks

Personas include information on the top 3 or so critical tasks the user MUST accomplish to be successful. Treat it as a bold statement on your project: failure to make those tasks brain-dead simple means failure of the application. Never forget those critical tasks, and make sure your application design is focused on continually seeking ways to make them simple, fast and super-easy.

3. Conduct Early Prototype Usability Testing

Conducting usability testing early and often is not just a catch-phrase. Early prototype testing includes testing wireframes and even conducting Card Sorts very early in the process. The data gathered from this testing will ensure your application design is focused on the users’ perspective. There are plenty of free or paid card sorting tools that make conducting card sorts and creating information architectures easy, so there is no excuse for not gathering this data. Testing paper wireframes is also a really easy but extremely helpful data point.

4. Test Often During Development

Testing often during your application development is another key to creating usable (and thus successful) applications. Using remote usability testing methods means testing can be done almost instantly, and data can be captured in hours, not days or weeks. There are excellent books on how to conduct remote testing that make it fast and easy for even newbies to create and run usability tests. Using an agile method of application development? No problem, remote testing during sprints means never having to say you are sorry (to your spouse after you come home with a box of your stuff in hand because you just got fired for a bad application).

5. Include users in your team

An excellent idea that few organizations seem to use is including actual users as part of the application design team. Having a group of actual users that you reach out to for feedback and input will clear up disagreements and clarify your purpose as you move through your process. Panels of users are worth their weight in gold. Use the input and commentary you receive from actual users as you go through your sprints or waterfall process. Their input will ensure you are keeping your application on track from a usability perspective.

Conclusion on How to Not Throw Out 125 Million Dollars

By incorporating usability testing and conducting user-centered design as part of your application development process you will ensure your design is user-friendly and successful. Failure to do so risks the potential of your application not being used, which can waste 125 Million Dollars and four years of work. As has been said before, Failure is Not an Option!

For more tips on how to include testing as part of an application development process read the article on 24 usability testing tools.

Amazon versus Walmart and the usability testing results

Comparing Amazon and Walmart with simple but critical usability testing tasks – finding and buying an iPad. Who won?

Amazon and Walmart are kings of eCommerce. But how do they compare in usability? To answer this, I created a simple but useful usability test: something thousands of users were trying to do this holiday season, finding and buying an iPad.

The usability testing protocol I created was simple, but not meant to be exhaustive in terms of comparing the user experience of both sites. Rather, the test was a quick evaluation of how easy or difficult it was for users to find an iPad with the best possible features for the price (the value of which had to be less than $550) and then buy it.

Here’s the usability testing protocol I set up for the test. It’s simple, quick, but importantly meant to be directional only. I used usertesting.com as my tool for this test.

Amazon versus Walmart Usability Testing Protocol

Introduction: You are buying an iPad as a gift for a family member. You only have a total of $550. You want to buy the best one you can for the price in terms of functionality and features.

Task 1: Please show me how you would find an iPad or iPads that are equal to or less than your price range?

Task 2: Let’s assume you’ve decided to purchase one of the iPads, please show me what you would do to buy it. Please go through all the steps without actually purchasing it.

Tester Age: 18 to 65+

Tester Household Income: $40k to $150k+

Gender: Any

Web Expertise: Any

Country: United States

Number of Testers: Six total (3 for Amazon and a different 3 for Walmart)

Testing Dates: December 6-17, 2012

Usability Testing Results of Amazon versus Walmart

The results of the usability tests are revealing and point to several areas where both Walmart and Amazon may need to explore further usability optimization. And even though this test was simple, quick and used a rather small number of testers (3 for Walmart and a separate 3 for Amazon), it clearly shows how even a small amount of usability testing can reveal important places where the user experience can potentially be improved. For eCommerce, this also means improving revenue!

Let’s look first at the results for Amazon and Walmart in terms of how they performed for several key tasks, including:

Finding an iPad within the price range

Filtering the search results

Being offered the opportunity to purchase a protection plan (something that is no doubt high on the Walmart and Amazon teams’ radar, as it is a good source of incremental revenue per shopping cart)

Purchasing the item

We also look at several other errors that appear to be easy to fix, or at least worth evaluating.

And now, on with the results!

1. Amazon versus Walmart Usability Test Task, Finding an iPad

Winner Walmart

Amazon and Walmart take different approaches to displaying and filtering product search results, so a true apples-to-apples comparison is not possible. However, we can compare the overall ease of use of each system based on the task of asking a tester to “find an iPad in the $550 or less range,” a real-world scenario.

Based on the results of this test, the advantage goes to Walmart. This is primarily due to the displayed list of results after the user enters iPad into the search tool. All our testers were able to easily navigate the results, and take the next step promptly, which was to use filtering to find the product in the right price range.

Amazon did not do as well in this test as it could have. The search results are critical to helping our testers sift through the hundreds of thousands of products Amazon sells to find an iPad in the $550 or less range. Even here, at the very start of searching, there were potential usability issues.

One of our testers almost immediately became confused when he noticed that the top result for the search term “iPad” was an iPad 2 Second Generation, which sent him off in a different direction, spending a large amount of time trying to find the newer models (as of the writing of this article, the iPad Fourth Generation is the newest iPad). Interestingly, all testers mentioned that “typically the best product is at the top,” even though this clearly was not the case, and all of our testers had to do a fair amount of searching, scrolling up and down or clicking on various links, to find the newer iPad models that fit their $550 price limit.

I am guessing Amazon has a usability team, so I’m hoping they can evaluate this test result to determine if there’s a need to find a better way to put the newer (aka “hotter”) products at the top of their search results display. I’m thinking perhaps some search algorithm testing is in order.

For Walmart, things went well for testers who used the search bar, but the one tester who did not use the search bar had a much harder time finding iPads. Lesson for Walmart? Consider making your search bar bigger, to attract more attention and cause fewer users to attempt a more difficult navigation path.

A critical element of eCommerce is using filter tools to narrow search results, which both Amazon and Walmart do, but using vastly different methods. For Amazon, there is no readily apparent filter tool such as Walmart’s; however, users do have the ability to filter results, IF they know where to look.

Interestingly, the testers using Amazon had a more difficult time than the Walmart testers finding the iPad that fit our parameters, in this case a model that gave the most performance and features at a cost of $550 or less. This was specifically because the Walmart filter tool enables users to easily filter based on price. That’s not to say our testers used the Walmart tool without problems (they did encounter some).

Still, Amazon’s filtering (or lack thereof) of product results based on pricing parameters was something that all our testers struggled with. All testers resorted to scrolling through pages of results, and some gave up early and selected a product because it was listed near the top and seemed to fit the test parameters. In the real world, I’m betting this behavior happens more often than may be realized, and not always, I suspect, to the benefit of Amazon or Amazon users.

Several times, testers became lost in their search after scrolling through so many results and had to “reset” themselves by going back to the starting results page. The inclusion of peripherals, spread in seemingly random fashion through the results, did not help matters, as it made hunting through the results for the latest model iPad even more difficult.

Because of the extra cognitive load Amazon puts on users, we give the nod to Walmart for this part of the test.

It would be interesting to see what the usability test results for Amazon would be if they were to offer their users a filtering set of tools along the lines of the Walmart tool, versus what Amazon users currently have available.

2. Walmart Filtering Tools are Good, but not Great

Walmart has one advantage over Amazon in terms of our test of finding an iPad in our price range, and that is the filter tools on the left side. Interestingly, all of the testers used this tool, and all of them were able to use it to reduce what was a much larger list of products down to those they felt met their parameters. That’s not to say the tool didn’t cause issues: several testers found the refresh that happens without warning rather disconcerting, and one mentioned that sliders would be preferred, as that way the exact pricing parameters they wanted could be entered.

3. Amazon versus Walmart Usability Test Task, Protection Plan

Winner Amazon

A critical element of eCommerce success is adding additional SKUs to a shopping cart, in this case a protection plan. Typically this is good for the company, as it is an incremental source of revenue. But it can be a good idea for the shopper too: reminding them to buy additional items, or a protection plan they (if they are anything like my family) will end up using when something bad happens to their product, is not a bad idea.

In terms of the offers, both Amazon and Walmart pop up the protection plan, but that is where the similarities end. Notice the critical difference: Amazon makes “Add Coverage” the bright yellow, some would almost say default, button. Because of this, people evaluating the extra coverage may have more of a tendency to click the highlighted button, all other things being equal. In essence, the default is YES.

But with Walmart, note the choice is “I prefer not to add coverage.” Ouch. The default here is NO. Also note that with Amazon you only have to click one button to make your selection, while Walmart requires two clicks: one on the radio choice button, and then one way down at the bottom of the pop-up for “Continue.” My guess is Walmart is losing hundreds of thousands, maybe millions, of incremental dollars with their current protection plan user experience. Perhaps the Walmart usability or metrics team may disagree with me, but I would test a much more Amazon-like user experience here, just to see if there’s a difference (I am betting lunch with the entire Walmart usability team that there is; if you know any of them, forward them that message from me).

The Amazon pop up with the Protection Plan offer has a single button to buy the product

The Walmart pop up with the Protection Plan offer requires two clicks, and does not highlight the YES choice

And just to provide an additional data point: it’s interesting that the only tester to choose the protection plan was an Amazon tester, although one Walmart tester was tempted.

4. Amazon versus Walmart Usability Test Task, Purchasing

Winner Amazon

Both Amazon and Walmart are about equal in terms of the ease of moving through the buy-flow, and both have what can be described as best-in-class user experiences in the shopping-cart-to-purchase task flow. That said, Amazon has a slight edge with its ability to move users through the process with a bit less cognitive load, as witnessed by the several errors that occurred for our Walmart testers but not for our Amazon testers.

Since so much went right in both purchase flows, let us focus on the errors we picked up, both in the buy-flow and elsewhere. Amazon more than once tripped up our testers with offers to buy a product at a price that seemed to disappear when they actually went to the results pages to find the product at that price. Walmart had several avoidable user errors in its buy-flow, mostly caused by simple things like not labeling required fields or hiding critical choices in the middle of a rather busy purchase page. Simple usability and A/B testing could easily address all of these easy-to-fix errors.

Summary of Amazon versus Walmart in Usability, Who Won?

So in summary, based on this simple usability test we performed, it would appear that Amazon and Walmart are about equal in terms of the usability of finding and purchasing an iPad, with Amazon winning two categories and Walmart winning two.

However, I actually believe that based on this test Walmart has the edge in usability. The primary reason? I believe Walmart provides an overall easier and faster user experience in the searching, filtering and vetting process associated with seeking out and purchasing a product.

The primary advantage Walmart has over Amazon is the availability of filters on the left side of the products search results pages. This filter set enables users to very easily target products that meet their parameters, to find the best product possible for the given budget range.

You Should Usability Test, Even With Just 1 Person

Wanna know what I think? I think usability testing is so important, so amazingly powerful, and so useful for companies that want (no, NEED) to increase web site ROI that they should (no, MUST) usability test – even with just 1 person.

Only 1 person? Not 7 people? I know – I know, you’re reaching for the phone to call the insane asylum and have me committed. But before you do, just hear me out – you may decide I’m crazy like a Fox (or a really, really smart Badger).

Crazy like a Fox

ANY usability testing is better than NO usability testing

You may not believe me, but this is a universal truth: ANY usability testing is better than no usability testing. Don’t believe me? OK, maybe you’ll believe a couple of usability gurus.

“As soon as you collect data from a single test user, your insights shoot up and you have already learned almost a third of all there is to know about the usability of the design. The difference between zero and even a little bit of data is astounding.”

– Jakob Nielsen

Now, of course, I’m not advocating using ONLY one person at all times. But in critical situations where resources, money and/or time are tight, usability testing with just one person is an acceptable alternative to full usability testing with 7 or so people.
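The “almost a third” claim corresponds to Nielsen and Landauer’s problem-discovery model, in which each tester independently finds roughly the same proportion of the usability problems. A quick sketch of the math (the 31% discovery rate is their published estimate, not data from my own tests):

```python
# Nielsen & Landauer's problem-discovery model: if one tester finds
# a proportion L of the usability problems (about 0.31 in their
# published estimate), n testers find roughly 1 - (1 - L) ** n.
L = 0.31

for n in (1, 2, 3, 5, 7):
    found = 1 - (1 - L) ** n
    print(f"{n} tester(s): ~{found:.0%} of problems found")
```

The curve flattens quickly, which is exactly why a single tester already delivers a big chunk of the value, and why 5 to 7 testers is usually considered enough.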

Usability testing case study: Heardable.com

I’m doing more and more usability testing with just one person, and you know what, it works really well!

Case in point: I recently used a usability test with just one person for one of my clients: Heardable.com.

Heardable.com is a web service that enables Brands to measure and monitor critical social attributes. I like Heardable because it also provides actionable information about how to improve the attributes. I’m a big fan of actionable and useful data, so I’m a big fan of Heardable.

Because Heardable.com is a start-up, its founders, like those of any start-up, had many issues to resolve, everything from how to explain what Heardable.com is on the home page to how to access detailed metrics and data.

Because Heardable is in public Beta, the founders asked me to help identify some potential opportunities for usability improvements. But with their resources being tight, and knowing many more changes were coming, they asked me for a low-cost, very fast way to do a quick usability test.

How did I do it? Easy…

A VERY quick usability test with one person

In only three days, I:

Created a Persona (it was easy, they already had very specific data on their target users)

Identified five critical tasks that needed testing

Created a usability test protocol

Recruited a test participant

Conducted the test using Morae

Analyzed the results

Edited the snippet videos showing usability improvement opportunities

Created the PowerPoint analysis document

Sent the analysis to the clients

Submitted my invoice for payment

Almost broke my arm patting myself on the back for a job well done

Visited my chiropractor for adjustment on that arm

The results of the usability test and analysis were excellent. The test found 11 potential opportunities for usability optimization, and more than double that number of recommendations the Heardable team could use to implement those optimizations.

Could additional test participants have found more issues? You bet. But the point is that with the limited time, resources and money available, this test provided them with critical usability information that is actionable – and can make a big difference for long-term improvement.

Conclusion: Usability Testing With 1 Person Works Well

So what am I saying here?

I’m saying ANY usability testing is way, way better than no usability testing.

I’m saying the ability to conduct usability testing in a matter of days (not weeks) is powerful.

I’m saying the ability to conduct usability testing for low cost (not the cost of a mid-size car) is a significant reality.

And I’m saying the ability to conduct usability testing that provides actionable and useful information that can be used NOW is brilliant, because it enables a company to improve the usability, and thus the ROI, of its web site or application in near real-time.

What’s not to love?

The very smart founders of Heardable know that usability testing, ANY usability testing, even testing with just one person, is way better than no usability testing.

Now you do too. So what are you gonna do about it?

Feel free to contact me if you want more information about how a usability test with just one person can help improve your web site’s ROI.

Perceived Affordance, Usability and Online Sales:

One of the most important goals of web site usability testing is finding and fixing perceived affordance issues. You can increase your usability, conversion and thus your web site Return on Investment (ROI) by improving perceived affordance.

What’s perceived affordance? For web site owners, it’s the art and science of designing objects like ‘buy now’ buttons in such a way that your web site visitors know, just by looking at them, that they can be clicked.

One of the most important functions of web site usability testing is to evaluate the perceived affordance of links and buttons. By testing and optimizing perceived affordance of critical objects, such as ‘Add to Cart’ or ‘Buy Now’ buttons, web sites can dramatically increase conversion, and thus ROI.

Definition of Perceived Affordance:

According to Don Norman, the Godfather of design and usability and author of the book “The Design of Everyday Things,” the concept of perceived affordance is defined this way:

“The word “affordance” was originally invented by the perceptual psychologist J. J. Gibson (1977, 1979) to refer to the actionable properties between the world and an actor (a person or animal).

What the designer cares about is whether the user perceives that some action is possible (or in the case of perceived non-affordances, not possible).

In product design, where one deals with real, physical objects, there can be both real and perceived affordances, and the two need not be the same. In graphical, screen-based interfaces, all that the designer has available is control over perceived affordances. The computer system, with its keyboard, display screen, pointing device (e.g., mouse) and selection buttons (e.g., mouse buttons) affords pointing, touching, looking, and clicking on every pixel of the display screen.”

By evaluating the design elements that communicate perceived affordance for the various objects in your web site, you can determine which category each object falls into – and, if it’s the wrong one, take steps to correct it.

Perceived Affordance is Critical for Your Web Site Success:

When you think about your web site, your ROI in fact lives or dies on your ability to successfully manipulate design to improve perceived affordance. Your web site is primarily a one-way pipe of information, the majority being visual information (with the potential for some audio). You provide the visual information, and your web site visitors consume and comprehend it (or at least try to).

Because the primary interaction that takes place on your site is one-way visual, you must be zealous in your attempts to understand and evaluate how well you are communicating perceived affordance. Testing and optimization of elements that impact perceived affordance should be your number one goal, because it directly impacts your conversion rates, and thus your web site’s ROI.

Actions your web site visitors take such as mouse clicks or typing characters, although very important, are never going to happen unless you provide clear, consistent and effective visual clues about how to take actions. You do this by continually testing and optimizing the crucial elements of your site that establish and communicate perceived affordance.

Examples of Perceived Affordance in Buttons:

Let’s examine a few examples of perceived affordance in action. To visually communicate that a button is clickable and will enable the site visitor to take action, the design must visually separate, distinguish and illuminate its function.

As demonstrated above, Amazon.com uses many design elements to generate high perceived affordance of their ‘Add to Shopping Cart’ button, including use of:

Strongly contrasting yellow button color

Only use of that yellow color on the page

Heavy outline border around button

Round strongly contrasting icon of shopping basket

Text in button ‘Add to Shopping Cart’

Larger font for button text

Elongated shape, round on left side, squared on right side

Gradient fill in top of button to visually mimic 3-D shape

Dark blue background color for surrounding box

Another example is eBay, which creates a high perceived affordance of the ‘Buy It Now’ button.

To provide contrast, let’s examine use of design elements that appear to provide a function, but in fact do not. This is known as false affordance, and can work against web site visitors.

False Affordance:

A false affordance is an apparent affordance that has no real function. False affordance is a major contributor to lower web site conversion and lost online sales. This is because a false affordance breaks the faith a web site visitor has in the web site’s functional abilities, and causes doubt and confusion.

Example of a False Affordance:

In this example, the prominently displayed ‘Featured Gift’ and photo of the toy seem to indicate that more information about the toy might be available by clicking, but where? Web site visitors who come across the display are left wondering, because no clear action button seems available for this toy.

A common tool many web site designers use is to make the image of the product clickable. But that is not the case here.

In fact, there is no action available, the image of the toy is not clickable, nor is the heading ‘Featured Gift.’ There is no way to navigate to the featured toy using the visual designs offered, thus the connection with a ‘false affordance.’

There are many types of designs that can lead to false affordance, some of the more common being:

Objects that look like buttons, but are not

Photos of objects that are not links, especially if placed alongside photos that are links

A blue outline around an image when no link is present

Underlined text that is not a link

Use of blue in text that is not a link

Form data entry fields that are not active

For web site owners, false affordances are extremely damaging, and cause many more problems than simply lost clicks to a particular item.

By prominently displaying a false affordance on the home page, a web site causes damages including:

Lost faith (visitors wonder “is this clickable, what about this, or this?”)

Lost focus (visitors spend more time trying to solve a navigation problem than shopping)

Lost sales (frustrated visitors will often not complete their task)

Lost trust (many visitors will simply leave the site – never to return)

Finding and fixing false affordances should be a high-priority job of every web site owner, especially those who own eCommerce sites – as false affordances cost lost visitors, conversion and sales.

Poor Design and Hidden Perceived Affordance:

As with false affordance, poor design techniques can hurt perceived affordance and cause major performance issues for web site owners. This is referred to as Hidden Affordance. In the case of poor design, the visual clues that a link or function is present are not visually separate, distinguished and illuminated.

Example of poor perceived affordance:

The example above demonstrates a site that provides web site visitors with a display of products available for purchase. However, the function associated with ‘Checkout Now’ (in this case a link to an online order form) is poorly displayed because it offers minimal visual clues as to its function, and thus has low perceived affordance.

Among the perceived affordance problems with the ‘Checkout Now’ button are:

No button shape around the text

Yellow text color is not a strong contrast against the white page

No underline when mouse rolls over text

Text in button visually close to ‘Back to results’ text

Missing a background color to call attention to location

Upper left location not typically associated with ‘continue’ action

Improve Perceived Affordance with Testing:

So how do you improve the perceived affordance of your web site objects? With testing and re-testing. There are four primary types of testing that can be used to analyze and optimize perceived affordance. They are:

Expert Usability Review Also called a ‘heuristic review.’ This review uses expert analysis of interaction devices such as buttons, links and related functions against industry standards and best practices. The best form of an expert usability review is to receive several, since each expert might focus on unique aspects that grouped together form a better picture of what needs to be improved and why.

Usability Testing Using 1-on-1 moderated testing, a web site owner can quickly find problems with task flows for critical tasks. These often involve issues with perceived affordance. Because usability testing only needs about 7 or so participants, and because it uses real web site visitors, and can be done very quickly and for low cost, usability testing is a great way to find issues with perceived affordance. It is the only method a web site owner can use to determine the ‘why’ of an actual web site visitor’s behavior.

A/B Testing Two different versions of a button, link or related object can be tested on your web site at the same time using a traffic split. 50% of the traffic goes to the ‘A’ version (usually the original version of the object) and 50% to the new test ‘B’ version. Once enough results are captured to reach statistical significance, a winner can be picked based on interaction rate. A/B testing is pretty reliable, assuming enough traffic is present. However, it won’t tell you the ‘why’ of the visitor behavior, and of course it might negatively impact your conversion if the ‘B’ test version is worse than the original version.
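The significance check described above can be sketched with a standard two-proportion z-test (ordinary statistics, not any particular testing tool’s API). The click and visit counts below are hypothetical example numbers.

```python
# Minimal sketch: judge an A/B button test with a two-proportion z-test.
from math import sqrt, erf

def two_proportion_z(clicks_a, visits_a, clicks_b, visits_b):
    """Return (z, two-sided p-value) for the difference in click rates."""
    pa, pb = clicks_a / visits_a, clicks_b / visits_b
    pooled = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (pb - pa) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 'A' converts 120/5000, 'B' converts 160/5000
z, p = two_proportion_z(clicks_a=120, visits_a=5000, clicks_b=160, visits_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests 'B' is the likely winner
```

The usual convention is to call the result significant when the p-value drops below 0.05; stopping a test before that point is the classic way to pick a false winner.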

Multivariate Testing For sites with large amounts of traffic, multiple versions of objects can all be tested at the same time. This allows for rapid analysis and iteration of the best possible combination of elements. The downside to multivariate testing is that it needs lots and lots of traffic to establish statistically significant results. In addition, as with A/B testing, the ‘why’ of visitor behavior won’t be known, only which combination of elements performs the best.
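The traffic requirement above can be made concrete with a standard sample-size formula (95% confidence, 80% power). This is a rough sketch; the 3% baseline conversion rate and 10% relative lift are hypothetical numbers chosen only to illustrate how totals balloon as variants multiply.

```python
# Rough per-variant sample-size sketch using the standard two-proportion
# power formula (z_alpha = 1.96 for 95% confidence, z_beta = 0.84 for 80% power).
from math import ceil, sqrt

def visits_per_variant(base_rate: float, lift: float,
                       z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visits each variant needs to detect a relative lift."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar)) +
          z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Hypothetical: 3% baseline conversion, hoping to detect a 10% relative lift
per_variant = visits_per_variant(base_rate=0.03, lift=0.10)
for variants in (2, 8, 16):  # simple A/B vs. two multivariate layouts
    print(f"{variants} variants: ~{variants * per_variant:,} total visits needed")
```

Each added variant multiplies the total traffic required, which is why multivariate testing is practical only on high-traffic sites.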

Perceived affordance is critical to your web site success, and to your conversion and ROI. Perceived affordance determines how well your interaction object designs communicate their function and use to your web site visitors. Poor perceived affordance hurts your web site interaction, conversion and sales and results in lower ROI. You can increase your ROI by conducting testing and optimization with the interaction objects on your web site. An excellent way to identify potential issues and optimizations of perceived affordance is with usability testing. Continual testing and re-testing ensures you are maximizing your potential usability, perceived affordance and thus ROI of your web site.