When a user first experiences an application, it takes some time to figure out the interface, the controls, and how to perform tasks. As the user continues to perform similar tasks, she learns to operate the application more effectively and saves time. According to Wikipedia, learnability is:

the capability of a software product to enable the user to learn how to use it. Learnability may be considered as an aspect of usability, and is of major concern in the design of complex software applications.

But how does one measure learnability? My idea is to have a number of users perform similar tasks on the same application, measuring the time it takes to perform each task. My hypothesis is that the time to perform similar tasks will decrease the more of them the user performs, up to a certain level where she doesn't learn any more.

Suppose I were to compare the learnability of two different applications, for example travel agencies' web sites, letting users find a certain hotel in a certain place. Then I would (presumably) get a graph looking like the following:

Reading the graph, one could say that Agency 1 has better learnability than Agency 2, but that Agency 1 is harder to use for new users. Over time, Agency 1 will be faster to use.
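The hypothesis could also be checked numerically rather than by eye: fit the classic power-law learning curve T(n) = a * n^(-b) to the measured times and compare the fitted learning rates. A minimal sketch in Python, assuming per-repetition task times have already been collected (all names are hypothetical):

```python
import math

def fit_learning_curve(times):
    """Fit the power-law learning curve T(n) = a * n**(-b) by ordinary
    least squares on log(T) versus log(n). `a` estimates first-attempt
    time; a larger `b` means the user speeds up faster (learns more)."""
    n = len(times)
    xs = [math.log(i + 1) for i in range(n)]   # repetition number, 1-based
    ys = [math.log(t) for t in times]          # task times must be positive
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    b = -slope
    a = math.exp(my - slope * mx)
    return a, b
```

Comparing the fitted `b` for Agency 1 and Agency 2 would quantify the "steeper curve" in the graph, instead of judging it visually.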

Is this a fair measurement on Learnability? Could I measure Learnability in a better way?

Why is learnability measured in seconds? You are faster, but does that mean you understand it better? Or are you just typing faster, because Agency 1 supports the keyboard better? Or clicking faster, because the buttons are closer together in one layout? You can measure efficiency (reactions) in seconds, but learnability (understanding)? I don't think so.
–
FrankLOct 10 '12 at 8:06

6 Answers

I was just reading an article on Coding Horror that draws parallels between Groundhog Day and A/B testing. But I think the movie tells you a lot more about learnability than about A/B testing. In the movie, Bill Murray lives the same day over and over again for a very long (debatable) time. He wakes up each morning to the same day, with the same series of events leading to the same consequences.

During the latter parts of the movie, he knows exactly what to do. If Bill Murray were the user of an application and visited the same site every day performing the same set of actions, he would know exactly which buttons to press to get the desired outcome.

The point I am trying to make is that you need to figure out the series of events the user has to perform to get to a desired result. Example: on a banking website, going from logging in to generating a statement requires some learning. Now start analyzing data for a set of five to ten users. Keep the data set small and spread across demographics.

If you have an analytics tool, start building intelligence that tells you the exact path taken by a first-time visitor compared to a returning visitor performing the same set of actions. That will give you a dataset to analyze and convert into a learnability parameter.

This is just a suggestion and would need some coding, but the results will tell you exactly how users' interaction with the website changes over time. It will also show you steps you can merge, or perhaps condense, for faster performance.
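One simple way to turn the path comparison above into a number is to count how many steps beyond a known shortest path each session needed; a shrinking count over sessions suggests the user is learning a more direct route. A rough sketch, assuming sessions are recorded as lists of page or event names (names are illustrative):

```python
def excess_steps(sessions, optimal_path):
    """For each recorded session (a list of page/event names), count how
    many steps beyond the shortest known path the user needed.
    A series trending toward zero suggests the UI is being learned."""
    return [len(session) - len(optimal_path) for session in sessions]
```

For example, sessions of 5, 4, and 3 steps against a 3-step optimal path yield [2, 1, 0].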

+1 for adding such an unusual reference as a movie. I'm not afraid of coding, so your suggestion looks like it could be done. Thanks!
–
Benny Skogberg♦Oct 9 '12 at 10:25

"the exact path taken ... to perform the same set of actions" Mmmmh, and if you learned to skip unnecessary steps you are out of dataset? I think you will still measure effeciency/performance only. Not understanding and learned use.
–
FrankLOct 10 '12 at 8:28

The effort you spend coding, analyzing, preparing and filtering the analytics data is huge.

Why not just run a usability test of both designs with people you can talk to, instead of doing unknown magic with analytics numbers? Give them a predefined task the first time, and meet them again a week later with the same task. It costs you 2 x 1 day for 6 people (80% error detection), some coffee and snacks, maybe a goodie.

If you see a decent fall in the scale scores, learnability has increased.
And you can actually ask them after the second test how they remembered and learned the interface! Users give good tips on how and what to change.

Actually, you can measure all four items online with analytics as well (except the SUS survey). But I recommend asking people. It's faster (2 days instead of approximately 3 weeks of tracking time) and you get more insights if the designs differ by more than just a minor layout issue.
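The SUS survey mentioned above has a fixed scoring rule: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to land on a 0-100 scale. A minimal sketch in Python:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert
    responses, where responses[0] is item 1. Odd-numbered items
    contribute (response - 1), even-numbered items (5 - response);
    the total is scaled by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5
```

Comparing the week-one and week-two SUS scores per participant gives you the "fall in the scales" this answer is looking for.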

Edit:
After searching the web I found a compelling and official source: ISO 9126-2.
And a presentation about it at slide 28.

Learnability
Learnability metrics assess how long it takes users to learn to use
particular functions, and the effectiveness of help systems and
documentation.


ISO/IEC 9126-2 Learnability metrics

• Ease of function learning How long does the user take to learn to
use a function?

• Ease of learning to perform a task in use How long does the
user take to learn how to perform the specified task efficiently?

• Help Accessibility What proportion of the help topics can the
user locate?

• Effectiveness of the user documentation and/or help system
What proportion of tasks can be completed correctly after using the
user documentation and/or help system?

• Effectiveness of user documentation and help systems in use
What proportion of functions can be used correctly after reading
the documentation or using help systems?

IMO, when added to the usability definition, the learnability facet refers to how recurring users can come back to the UI and already know how to use it, because they learned it in previous sessions.
If so, this could be measured by comparing the performance of a first session against a later one.
If you are doing A/B testing, you could measure the effort needed to perform an operation the first time and in later sessions. Effort is to be defined; it might be a combination of elapsed time plus number of actions (clicks, keystrokes, pages, ...), something representative and measurable. @rizwaniqbal mentioned the dynamic analysis of the logs, and you might want to store some additional information about usage patterns.

If the UI is learnable, then the measured effort would ideally have a high value for the first session and a low, constant value for all remaining sessions.
That would mean the user learned it once and for all the first time she used it.
By contrast, a linearly decreasing function would mean (linearly) progressive learning: faster if the line is steep, slower if it is flat.
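The effort measure and the curve shapes described above can be sketched in a few lines of Python. The action weight and the classification thresholds are assumptions to be tuned for a real application, not established values:

```python
def effort(elapsed_seconds, actions, action_weight=2.0):
    """One possible 'effort' score: elapsed time plus a weighted count
    of user actions (clicks, keystrokes, page loads). The weight is an
    assumption and would need tuning for the application at hand."""
    return elapsed_seconds + action_weight * actions

def learning_profile(efforts, flat_tolerance=0.1):
    """Classify a per-session effort series: 'learned at once' if effort
    drops sharply after session one and then stays roughly flat,
    'gradual' if it declines steadily, 'flat' otherwise."""
    first, rest = efforts[0], efforts[1:]
    if not rest:
        return "insufficient data"
    drop = (first - rest[0]) / first
    spread = (max(rest) - min(rest)) / first
    if drop > 0.5 and spread < flat_tolerance:
        return "learned at once"
    if all(a > b for a, b in zip(efforts, efforts[1:])):
        return "gradual"
    return "flat"
```

A series like [100, 40, 38, 41] matches the "learned it once and for all" ideal, while [100, 90, 80, 70] matches the progressive-learning line.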

Another method, with instant results, would be to take note of the user's mistakes, where a mistake is a wrong path taken or a long period of inaction.
This focuses on the understanding part of learning (learning = understanding + remembering).
Conversely, if users make mistakes, they might not be learning well. As in school, exams measure what the student has learned, and the score is almost always impacted by errors.
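Both kinds of mistake mentioned here (off-path events and long pauses) are easy to count from a timestamped event log. A rough sketch, where the 30-second inaction limit is an arbitrary assumption:

```python
def count_mistakes(events, expected_path, inaction_limit=30.0):
    """Count mistakes in one session: any event outside the expected
    path, plus any gap between consecutive events longer than
    inaction_limit seconds. `events` is a list of (timestamp_seconds,
    event_name) pairs in chronological order."""
    expected = set(expected_path)
    mistakes = sum(1 for _, name in events if name not in expected)
    for (t1, _), (t2, _) in zip(events, events[1:]):
        if t2 - t1 > inaction_limit:
            mistakes += 1
    return mistakes
```

A per-session mistake count trending toward zero would be the "instant result" version of the learning curve.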
But isn't this covered by qualitative usability testing, where you have users use the UI and change it where they get lost?

One could argue that if users understand the UI at first sight, then you are safe.
Moreover, if they get it at once, why should they have to learn it at all? In fact, this is the point of a UI (as opposed to the command line).
Actually, this is how I think: the zero-training UI, which might not be possible in all cases.

That's certainly one dimension to look at, and it would be a good proxy for learnability in some situations. I think you'll have problems using it as a direct metric to compare sites, however.

As a separate point, some companies get hung up on the "standard" definition of usability - and start looking for metrics to track learnability, etc. over time. I think that this is, nearly always, a mistake. It's very easy to miss factors when looking at individual metrics.

For example in your graph of travel agencies - maybe there's another story besides the learnability one that would explain the graphs. Maybe, for example, agency 2 is better at cross/up-selling which is more profitable for the business, but makes the hotel finding task take longer... so there's a constant-time hit to the task that has nothing to do with how hard it is to learn.

Or maybe agency 1's web site is slow and if you take that into account the overall task is much easier and quicker to learn than for agency 2.

Another thing to consider is breadth of feature usage. If features Foo, Bar and Ni are of use to a certain group of users, and only features Foo and Bar get used regularly, that might say something about how easy it is to discover/learn Ni.
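Breadth of feature usage reduces to a simple coverage ratio over a usage log; a one-function sketch (feature names borrowed from the example above, everything else hypothetical):

```python
def feature_coverage(usage_log, available_features):
    """Share of available features that appear in the usage log at all.
    A persistently unused feature (Ni, in the example above) may simply
    be hard to discover or learn."""
    used = set(usage_log) & set(available_features)
    return len(used) / len(available_features)
```

Tracking this ratio per user cohort over time would show whether discovery of the neglected feature ever happens.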

I thought you would be one to answer this question, given your knowledge of statistics. And I agree: one couldn't use learnability by itself without looking at other parameters as well. But it can give you a clue about where to look for difficulties, from another perspective. Thanks for your insight!
–
Benny Skogberg♦Oct 9 '12 at 9:11

You could perhaps take the variables into account and, instead of looking at time in seconds, find the percentage faster a user can do things. For working on a live site using analytics, I'd not look at how long it takes them to do multiple tasks, but instead treat each task as its own metric (as some users may stop after 2 tasks).
–
Bryan RobinsonOct 9 '12 at 13:22
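The per-task percentage speed-up suggested in the comment above could be computed like this; tasks missing from the later session are skipped rather than guessed, matching the "users may stop after 2 tasks" caveat (task names are illustrative):

```python
def percent_faster_per_task(first_times, later_times):
    """Per-task relative speed-up between a first and a later session,
    keeping each task as its own metric. Both arguments map task name
    to seconds; 0.25 means 25% faster. Tasks absent from the later
    session are omitted from the result."""
    return {
        task: (first_times[task] - later_times[task]) / first_times[task]
        for task in first_times
        if task in later_times
    }
```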

One thing that is often stressed in education is that it's almost impossible to measure the efficiency of changes in how we teach a topic. There are too many factors that influence learning. And I think that warning is applicable here as well.

Learning a new interface and learning algebra are very different things, but they share the same problem when we try to compare different ways of learning the same process. The advice I've gotten from experts in the field is that when evaluating learning, results (such as grades) do not make a good point of comparison.

Instead, my professor in educational software proposed using related factors. Look at user satisfaction, perceived effort, and retention. And you probably want to do this over a longer time period. If your user comes back after two months, will they remember how to do all the tasks? Will they be as satisfied as they were the first time? How hard do they think it is now?