Executives, product managers, and developers have all, at some point, joined the call to reduce the number of clicks. Behind that call is the goal of improving the usability of a website or software by reducing the number of steps, and the amount of time, it takes users to accomplish their goals.

Reducing clicks has an intuitive appeal. Why make users click more than they have to? More clicks usually means more screens. More screens usually means spending more time completing tasks. More time spent on tasks usually means higher task failure and a poorer user experience.

Both click counts and task time are metrics for measuring efficiency—one of the key elements of usability. All else being equal, a task that takes less time to complete is more usable than one that takes more. Efficiency isn’t everything. A task that takes less time can still be more burdensome on a user or lead to higher error rates—usability is about balancing efficiency, effectiveness and satisfaction.

But when you measure efficiency, are click counts an acceptable substitute for task times? Is the number of clicks really the metric to manage?

Counting Clicks

The nice thing about click counts is that they seem relatively easy to measure. Just look at a screen and count how many clicks it takes to go down a path. Task time is more complicated to measure and usually involves observing users.

However, it doesn’t take much to see where click counts fall apart. Taken to an extreme, you can optimize for click counts by putting all functions on one or a few screens. The result is a clutter of information and functions that’s close at hand but hard to parse. In most cases, when I hear “reduce clicks” I hear “reduce task time.” But how well do click counts correlate with task times?

Clicks and Clocks

I looked at data from three unmoderated usability tests of ecommerce websites I recently conducted. I used UserZoom for all three studies, which captured both clicks and task times. In total, I had data from 1,228 users across 19 tasks, for a total of 4,892 observations of task times and click counts.

I found a reasonably strong correlation between click counts and task times. The average correlation across the 19 tasks was r = .5. You can see the distribution of correlations by task in Figure 1 below. A correlation of 0 means no relationship, and a correlation of 1 means a perfect correlation (where clicks are a perfect substitute for time).

Clicks aren’t the Same as Time

A correlation of this magnitude means there is clearly overlap between task time and clicks. However, the overlap is not strong enough for clicks to be a replacement for time, or vice versa. On average, you would explain only about 25% of the variation in task times by counting clicks on websites. For consumer software (e.g., QuickBooks and Quicken), you’d perhaps predict around 44% of the variation in task times.
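The 25% and 44% figures come from squaring the correlation (the coefficient of determination). A quick sketch, assuming the software studies correspond to an average correlation of roughly r = .66 (that value is inferred from the percentage, not stated in the source):

```python
def variance_explained(r: float) -> float:
    """Proportion of variance in task time explained by clicks: r squared."""
    return r ** 2

# Average correlation across the 19 website tasks:
print(f"websites: {variance_explained(0.50):.0%}")  # 25%

# An r of about .66 would yield the ~44% figure cited for
# consumer software (the .66 is an inferred assumption).
print(f"software: {variance_explained(0.66):.0%}")  # 44%
```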

Clicks, then, measure something similar to task time, but they aren’t measuring the same thing. If you’ve ever tried counting clicks while watching videos of users (you should try it if you haven’t), you’ll quickly notice that not all clicks are equal. For example, some users will use one click to scroll down a page while others will click many times on the small arrow below the scroll bar. Does that count as one click or many?

The good news is you can still get a decent estimate of task times without measuring users. Keystroke Level Modeling (KLM) provides a method for decomposing a task, just like counting clicks, but it gives a more accurate account of how long it would take a skilled user to complete the task.
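As a sketch of how KLM works, the standard operator times published by Card, Moran, and Newell can be summed over a task's sequence of physical and mental steps. The task breakdown below is a hypothetical example, not one from the studies described here:

```python
# Standard KLM operator times in seconds (Card, Moran & Newell).
# K varies with typing skill; 0.28 s is an average-typist value.
KLM_OPERATORS = {
    "K": 0.28,  # keystroke or button press
    "P": 1.10,  # point with the mouse at a target on screen
    "B": 0.10,  # press or release the mouse button (BB = one click)
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for the next step
}

def klm_estimate(sequence: str) -> float:
    """Sum operator times for a sequence like 'MPBB' (think, point, click)."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical task: think, point at a search box, click it,
# type a 6-letter query, then think, point at and click Search.
task = "MPBB" + "K" * 6 + "MPBB"
print(f"Estimated skilled-user time: {klm_estimate(task):.2f} s")
```

Notice that the model counts only two clicks but predicts roughly seven seconds, most of it pointing, typing, and thinking, which is exactly why click counts alone understate task time.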

So the next time you are in a discussion about clicks, ask whether it really should be about clocks. If you can’t measure users but want some idea of improvements in efficiency, then use KLM.

If there really is a good reason to reduce clicks, then by all means count and reduce. If, however, the intention is to improve productivity by making users more efficient with the software, then time on task is probably the better metric to manage.