Design/UX goals in your company

6 Mar 2009 - 3:24pm

Last reply:
5 years ago

7 replies

4221 reads

Alan Cox

2009

As it grows, the company I work for is becoming more metric-driven.
Ultimately, I support the idea of having goals and metrics that help
us understand whether we're doing good work, the right work, etc.

I don't expect goals & metrics to ever tell the whole story; the
world is squishy and numbers are unlikely to paint a completely
honest picture. I do think, however, that they'll help us start
conversations and give us something to shoot towards.

I'm curious: what type of goals and metrics exist in your company
that are related to good user experience and good design? Do you
have goals & metrics that are company-wide, team-wide and
individual?

Alan

Comments

6 Mar 2009 - 5:21pm

Chauncey Wilson

2007

Hello Alan,

You might want to get the book Built for Use: Driving Profitability
Through the User Experience by Karen Donoghue and Michael Schrage.

The book never got the attention it should have, but it is full of
good information and stories about how to connect:

business goals to user experience goals to product features to
specific metrics.

There is much discussion of how to set usability goals (multiple
metrics or a composite metric).

There is also an approach called SUM (Single Usability Metric) for
comparing products or different versions of a product. A paper on it
can be found at:

In your goal setting, you might examine the corporate goals and then
fit your goals to the corporate goals (see the Donoghue book for a
matrix that lays this out in a very powerful fashion).
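For a concrete feel for a composite usability metric like SUM, here is
a rough sketch in Python. This is not the published SUM formula (which
standardizes each measure as a z-score); the normalizations and
numbers below are simplified assumptions for illustration only.

```python
# Illustrative sketch of a composite usability score in the spirit of
# SUM. The published method standardizes each measure as a z-score;
# the simple 0-1 normalizations below are assumptions for illustration.

def composite_score(completion_rate, satisfaction, mean_time, target_time):
    """Average several usability measures on a common 0-1 scale."""
    # Completion rate is already a proportion in [0, 1].
    # Satisfaction is assumed to be on a 1-5 scale; rescale to 0-1.
    sat_norm = (satisfaction - 1) / 4
    # Time: 1.0 if at or under target, degrading as tasks run long.
    time_norm = min(1.0, target_time / mean_time)
    return (completion_rate + sat_norm + time_norm) / 3

# Hypothetical study results: 85% completion, 4.2/5 satisfaction,
# 95 s mean task time against a 60 s target.
score = composite_score(completion_rate=0.85, satisfaction=4.2,
                        mean_time=95, target_time=60)
print(round(score, 2))
```

The point of the composite is only to give stakeholders one trendable
number; the component measures still need to be reported alongside it.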

Chauncey


6 Mar 2009 - 5:25pm

Scott Berkun

2008

There are several flavors of these sorts of things, but ironically,
setting metrics is often done without a well-formed goal. Are you
trying to measure the value of your group? The value of people on your
team? To win arguments with other groups? It's usually a mistake to
create metrics unless you're clear on what you want to do with them,
or, more cynically, how others might use them against you.

The most common reason this stuff gets generated is that everyone else
is doing it, or the CEO or VP is mandating it. In that case you should
quickly decide which parts of the process are done for show and which
parts you care about and will find useful. Politically speaking, it
can be best to align your metrics with the peer team that has the
strongest standing and the most affinity for your group. It's
certainly a consideration to keep in mind (in other words, at a
minimum your metrics should speak a similar language to theirs).

I studied this stuff years ago, so here's a rusty recollection of an opinion
on this:

The usability/analytic side is easier:

1) Ratio of usability recommendations to implemented changes - This is
the most effective metric of how much value a usability group is
adding. Running studies is one thing, but if a study results in zero
changes then either the study was unnecessary or the results were
ignored.

Often this ratio points out that usability teams are better at
generating data than at getting anyone to do anything with it, which
suggests their growth will come more from developing persuasion,
storytelling, communication and political skills than from learning
new methodologies.

2) Number of requests for consultations and usability studies - This
is a reflection of how valued the usability team is perceived to be.
If no one is asking for your input, perceived value is low. If
everyone is asking for your input and you can't meet demand, perceived
value is high. You should also track, per group, a) when in their
project cycle the request for help came and b) whether they used your
advice - both are further indicators of perceived value.

3) How often usability goals appear in the goals of project managers, team
leaders and even executives. Ideally a UX goal is simply one of several
project goals that the entire project team is expected to defend. If the
only organization with a UX goal is the UX team, something is wrong - the UX
team is set up to fail.

4) Work produced. This is easy to measure but has questionable value:
# of reports written, # of studies done, etc. It captures nothing
about the impact or value of the work. Popular things like usability
scorecards or heuristic evaluations are effectively a kind of
recommendation generator (see #1 above) and are best measured in terms
of their impact rather than their quantity.
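The tracking suggested in #2 above is simple to set up. Here is a
minimal sketch with invented data; the group names, project stages and
outcomes are all hypothetical placeholders.

```python
# Minimal sketch of tracking consultation requests per group: how many
# requests each group makes, when in their cycle they ask, and whether
# the advice was used. All data here is invented for illustration.
from collections import defaultdict

requests = [
    # (group, stage_when_requested, advice_used)
    ("payments", "design", True),
    ("payments", "spec", True),
    ("search",   "post-release", False),
]

by_group = defaultdict(lambda: {"count": 0, "used": 0, "stages": []})
for group, stage, used in requests:
    rec = by_group[group]
    rec["count"] += 1
    rec["used"] += used          # True counts as 1
    rec["stages"].append(stage)

for group, rec in sorted(by_group.items()):
    rate = rec["used"] / rec["count"]
    print(f'{group}: {rec["count"]} requests, '
          f'{rate:.0%} advice used, stages={rec["stages"]}')
```

Groups that ask early (spec/design) and act on the advice are the ones
where perceived value is highest; post-release requests with unused
advice are the warning sign.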

For design/creative it's harder:

The way designers are used varies so much it's harder to give one generic
answer. Managers and team leads always have highly subjective measures for
their own performance - so don't be afraid of having subjective measures for
designers (There is a good philosophical argument that all metrics are
subjective simply because someone has to pick which things to measure :).

1) Recommendations vs. implementations is always a good measure.
However, for design it's more subjective, as what constitutes a design
recommendation vs. a prototype or a conversation is something you have
to sort out. Still, the emphasis should be on impact and effect on
what goes out the door to customers.

2) Initiatives vs. results. Designers in a proactive role should be
initiating feature, project and process designs (e.g. the drafting of
UX guidelines, or a new metaphor for a new website). Did anyone use
them? How well were they used? Even a subjective measure, by you and
other designers, of the impact of designer-driven initiatives has
value. For example, every quarter there should be one design
initiative per designer, and your job as a team is to meet at the end
of the quarter and evaluate the results. Even subjective measures
("score from 1 to 5 how successful this was on the following
attributes..." etc.) can be useful.

3) Requests vs. results. In more service-oriented roles, how did the
engineer or manager requesting services feel their needs were met?
Basic customer-satisfaction data can be collected here in much the
same way you do for actual (external) customers.
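The quarterly 1-to-5 scoring of initiatives suggested above can be
aggregated very simply. The attribute names and ratings below are
invented purely for illustration.

```python
# Sketch of aggregating subjective 1-5 reviewer scores for a design
# initiative. Attribute names and ratings are made up for illustration.
scores = {
    "adopted by teams":     [4, 3, 5],   # one rating per reviewer
    "effect on shipped UI": [3, 4, 4],
    "effort vs. payoff":    [2, 3, 3],
}

for attribute, ratings in scores.items():
    avg = sum(ratings) / len(ratings)
    print(f"{attribute}: {avg:.1f} / 5")

# Overall: mean across every individual rating, not of the averages.
overall = (sum(sum(r) for r in scores.values())
           / sum(len(r) for r in scores.values()))
print(f"overall: {overall:.1f} / 5")
```

Even a crude roll-up like this makes the quarter-over-quarter
conversation concrete: the scores are subjective, but the trend is not.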

If you tell me more about the design work you're doing and the nature
of the relationship (proactive/responsive) with your clients, I'll
have better advice on the design side.

References:

I haven't done work on this stuff in a while, but here are some
working links from an old pile of bookmarks. Sadly, googling for "ux
goals" brings up very little:


6 Mar 2009 - 6:12pm

Chauncey Wilson

2007

Here are some classic references that discuss usability goals. The
earliest examples of usability specifications that I could locate came
from Tom Gilb in the late 1970s and early 1980s. Whiteside, Bennett,
and Holtzblatt's chapter in the Handbook of HCI described usability
specifications and highlighted how field work can inform usability
goals. Mayhew's book describes how goals fit into the usability
engineering lifecycle.

Gilb, T. (1988). Principles of software engineering management.
Wokingham, England: Addison-Wesley.
In his book on software engineering, Gilb actually uses "usability"
in some of his examples as a quality attribute and he had principles
for developing attribute specifications that include "measurability"
(all attributes should be made measurable) and "result-oriented
attributes" (the attributes should be specified in terms of the final
end-user results demanded). Gilb also gets into principles for
choosing solutions to help designers meet those objectives.

Mayhew, D. (1999). The usability engineering lifecycle: A
practitioner’s handbook for user interface design. San Francisco. CA:
Morgan Kaufmann.
Mayhew’s book is a detailed blueprint of the usability engineering
life cycle with a wealth of practical advice. This book has four
sections: Requirements Analysis, Design/Testing/Development,
Installation, and Organizational Issues. Each chapter discusses
usability engineering tasks, roles, resources, levels of effort, short
cuts (quick and dirty techniques to use when a rigorous approach isn’t
possible), Web notes, and sample work products and templates. The book
is both detailed and readable and worthwhile for both new and
experienced usability specialists.

Whiteside, J., Bennett, J., & Holtzblatt, K. (1988). Usability
engineering: Our experience and evolution. In M. Helander, (Ed.),
Handbook of human-computer interaction (pp. 791-817). Amsterdam:
North-Holland.
This chapter laid out the general guidelines for a usability
specification which is the deliverable listing a product’s "usability
requirements". A usability specification contains the usability
attributes that are critical to the product's quality, the technique
for measuring the attributes (which would include the context,
constraints, user data requirements, etc), the quantitative metric
that represents the usability value (say task completion rate without
assistance), and the minimum level of usability for each attribute and
the planned level. The Whiteside, et. al. chapter also made a point
that usability requirements (and the scenarios for obtaining usability
requirements) should be based on field input (through contextual
inquiry or other methods) so that the requirements are realistic.

Scott brought up some good issues about the impact of metrics. If
your metric is around finding problems and you are not persuasive
enough to get the fixes implemented, then your impact might be low.
Paul Sawyer, Dennis Wixon and Alicia Flanders wrote about a metric
they called the impact ratio. Here is the abstract:

ABSTRACT
In this methodology paper we define a metric we call
impact ratio. We use this ratio to measure the effectiveness
of inspections and other evaluative techniques in getting
usability improvements into products. We inspected ten
commercial software products and achieved an average
impact ratio of 78%. We discuss factors affecting this ratio
and its value in helping us to appraise usability
engineering's impact on products.
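The impact ratio in the abstract is simple arithmetic. Here is a
minimal sketch; the counts are invented, chosen only so the example
reproduces the paper's 78% average.

```python
# Sketch of the impact ratio from the abstract: the share of usability
# recommendations that actually made it into the product.
# The counts below are invented for illustration.

def impact_ratio(recommended, implemented):
    """Fraction of recommended changes that were implemented."""
    return implemented / recommended if recommended else 0.0

# e.g. 45 recommendations from inspections, of which 35 were shipped
print(f"{impact_ratio(45, 35):.0%}")   # -> 78%
```

As the next paragraph notes, this only counts fixes; it says nothing
about whether the implemented fixes were any good.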

So this metric gets at how many recommendations are implemented, but
there is another step: how much did the changes that were implemented
improve the product on whatever usability attributes are most
important? What if you implement fixes for 80% of the problems, but
the fixes are bad?

So perhaps you can measure how fixes from one version to the next make
the product better, but does that impact the revenues/profits of the
company? It could be that you made your product 20% better and met
your goal, but your competitor just came out with a really usable and
useful product that was 20% better than your version.


6 Mar 2009 - 6:48pm

Mike Myles

2009

I've been working on using desired user responses as a way to
communicate design intent. These responses are potentially measurable
goals, but more importantly they are effective at getting
non-designers to understand the core objectives of a project.

It's important to be able to measure design and usability objectives,
and the response approach I've used can be linked to detailed
qualitative and quantitative test plans. But I've found in most cases
that measurable goals, specs, prototypes, etc. don't help in the least
with communicating design intent.

User responses are something I started using on a recent project, and
they look to be very effective. I've started work on a presentation
in an attempt to generalize their use for any project.

I have an early PowerPoint slide deck available for download off my
website. There are no accompanying notes as of yet. I planned to do a
few verbal presentations first to refine the message before adding
that to the file.

That said, you are welcome to review the slides in the current
format. Perhaps you will find them useful; and any comments,
questions, or criticisms are welcome.

> I'm curious: what type of goals and metrics exist in your company
> that are related to good user experience and good design? Do you
> have goals & metrics that are company-wide, team-wide and
> individual?

I actually think this is really, startlingly, shockingly easy.

Whatever goals and metrics exist for your larger company, those are
what you use for user experience and design. If UX is not contributing
to the organization's goals and metrics, then what good is it?

This often means that design/UX has to do stuff that's not sexy, but
that's ok. At our recent MX conference, Prof Sara Beckman related the
story of Sam Lucente, VP of Design at HP. Sam was brought in by Carly
Fiorina, but then had to figure out a way to succeed when she was
replaced by Mark Hurd. Mark is a cost-cutter and efficiency guy.

So, Sam pointed out that through a design program to standardize the
use of HP's "jewel" logo and make it consistent, he could save
$50,000,000. And that got Mark's attention. It wasn't sexy (it's
essentially an operational project), but it helped Mark understand
that design could deliver the kind of value he sought. And when it
proved successful, it opened the doors to the additional value that
design can bring.

At Cisco we have been using SUMI (Software Usability Measurement
Inventory) for a few years (not to be confused with SUM, which
Chauncey Wilson mentioned).

Though SUMI is good (as described below), does anyone know of new or
upcoming metrics methodologies/tools for non-website design?

Our design projects tend to be application software that is sometimes
web-based. We don't do web pages, so web metrics don't apply. Some
product examples are WebEx, Linksys home networking, and of course
our huge line of enterprise networking products.

Some facts about SUMI in case you want to try it:

- It works best as an addition to a usability study
- Developed by Dr. Jurek Kirakowski of the Human Factors Research
Group at University College Cork. See
http://www.ucc.ie/hfrg/questionnaires/sumi/
- A rigorously tested and proven method of measuring software quality
from the end user's point of view. Similar in design to Myers-Briggs
- It measures Efficiency, Affect (emotion), Helpfulness, Control and
Learnability