Five UX Research Pitfalls

In the last few years, more and more organizations have come to view UX design as a key contributor to successful products, connecting teams with end-users and guiding product innovation within the organization. Though it’s fantastic to see this transition happen, there are growing pains associated with becoming a user-driven organization. These are the pitfalls that I see organizations grappling with most often.

Pitfall 1: It’s easier to evaluate a completed, pixel-perfect product so new products don’t get vetted or tested until they’re nearly out the door.

Months into a development cycle and just days before the release date, you realize that the UI has serious flaws or missing logic. If you’re lucky, there is enough flexibility in the schedule to allow grumbling engineers to re-architect the product. More likely, though, the PM will push to meet the original deadline with the intent to fix the UI issues later. However, “later” rarely happens. Regardless, everyone wonders: how could these issues have been caught earlier?

The UI is typically built after the essential architectural elements are in place, and it can be hard to test unreleased products with users until the very last moment. However, you can gather feedback early in the process:

Don’t describe the product and ask users if they would use it. In this case, you are more likely testing your sales pitch rather than the idea itself. If you ask users if they want a new feature, 90% of the time they’ll say yes.

Test with the users you want, not the users you already have. If you want to grow your audience with a new product, you should recruit users outside your current community.

Validate that the problem you are solving actually exists. Early in the design cycle, find your future users and research whether your product will solve their real-world problems. Look for places where users are overcoming a problem via work-around solutions (e.g., emailing links to themselves to keep an archive of favorite sites) or other ineffective practices (e.g., storing credentials in a text file because they can’t remember their online usernames and passwords).

Verify your mental models. Make sure that the way you think about the product is the same as your user. For instance, if you’ve been pitching your product idea to your coworkers as “conversational email” but your actual users are teenagers who primarily use text messaging, then your email metaphor probably won’t translate to your younger users. Even if you don’t intend to say “conversational email” in your product, you will unconsciously make subtle design choices that will limit your product’s success until you find a mental model that fits that of your users, not of your coworkers.

Prototype early. Create and test a Flash-based or patched-together prototype internally as soon as possible. Even if your prototype doesn’t resemble a finished product, you’ll uncover the major issues to wrestle down in the design process and develop confidence in your priorities. You’ll also have an easier time spotting the areas of the product that need animations, on-the-fly changes, or other work that requires significant engineering time but wasn’t recognized in the project scope because the product was only explored in wireframes and design specs.

Plan through v2. If you intend to launch a product with minimal vetting or testing, make sure you’ve written down and talked about what you intend for the subsequent version. One of the downsides of the “release early, release often” philosophy is that it’s easy to get distracted or discouraged if your beta product doesn’t immediately succeed. Upon launch you might find your users pulling you in a direction you hadn’t intended because the product wasn’t fully fleshed out, or you might spend weeks bug-fixing and lose sight of the big picture. Once the first version is out the door, keep your team focused and dedicated to that second version.

Pitfall 2: Users click on things that are different, not always things they like. Curious trial users will skew the usage statistics for a new feature.

Upon adding a “Join now!” button to your site, you cheer when you see an unprecedented 35% click-through rate. Weeks later, registration rates are abysmal and you have to reset expectations with crestfallen teams. So you experiment with the appearance of your “Join now!” button by changing its color from orange to green, and your click rates shoot up again. But a few days later, your green button is again performing at an all-time low.

It’s easy for an initial number spike to obscure a serious issue. Launching a new feature into an existing product is especially nerve-wracking because you only have one chance to make a good first impression. If your users don’t like it the first time, they likely won’t try it again and you’ve squandered your best opportunity. Continuously making changes to artificially boost numbers leads to feature-blindness and distrustful users. Given all of this, how and when can you determine if a product is successful?

Instrument the entire product flow. Don’t log just one number. If you’re adding a new feature, you most likely want to know at least three stats: 1) what percentage of your users click on the feature, 2) what percentage complete the action, and 3) what percentage repeat the action again on a different day. By logging the smaller steps in your product flow, you can trace the usage statistics within all of these points to look for significant drop-offs.
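Logging the smaller steps makes the drop-off analysis mechanical. Here’s a minimal sketch of that idea, assuming events arrive as (user ID, step) pairs; the step names and sample data are invented for illustration:

```python
from collections import defaultdict

# Hypothetical event log: (user_id, step) pairs. The step names are
# illustrative, not from any real logging schema.
events = [
    ("u1", "saw_button"), ("u1", "clicked"), ("u1", "completed"),
    ("u2", "saw_button"), ("u2", "clicked"),
    ("u3", "saw_button"),
]

FUNNEL = ["saw_button", "clicked", "completed"]

def funnel_report(events, funnel):
    """Count distinct users reaching each step and the conversion from the previous step."""
    users_at = defaultdict(set)
    for user, step in events:
        users_at[step].add(user)
    report = []
    prev = None
    for step in funnel:
        n = len(users_at[step])
        rate = n / prev if prev else 1.0
        report.append((step, n, rate))
        prev = n or 1  # guard against division by zero on empty steps
    return report

for step, n, rate in funnel_report(events, FUNNEL):
    print(f"{step}: {n} users ({rate:.0%} of previous step)")
```

A sharp drop between two adjacent steps tells you exactly where in the flow users are bailing out, which a single top-line number never will.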

Test in sub-communities. If you are launching a significant new feature, launch the feature in another country or in a small bucket and monitor your stats before launching more widely.
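One common way to carve out a small bucket is to hash the user ID so that assignment is deterministic and stable across visits. A minimal sketch, with an invented feature name and rollout fraction:

```python
import hashlib

def in_experiment(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically assign a user to a feature bucket.

    Hashing (feature + user_id) gives each feature an independent,
    stable slice of users: the same user always gets the same answer,
    so the experience doesn't flip between visits.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to [0, 1) and compare to the rollout fraction.
    bucket = int(digest[:8], 16) / 0x100000000
    return bucket < percent

# Roll a hypothetical "join_button_green" variant out to 5% of users:
enabled = in_experiment("user-12345", "join_button_green", 0.05)
```

Because the bucket is a function of the ID rather than a random draw at request time, you can widen the rollout later (say, 5% to 20%) without kicking anyone out of the experiment.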

Dark-launch features. If you are worried that your feature could impact site performance, launch the feature silently without any visible UI and look for changes in uniques, visit times, or reports of users complaining about a slow site. You’ll minimize the number of issues you have to debug upon the actual launch.
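The dark-launch pattern can be sketched as a wrapper that exercises the new code path on real traffic while logging its timing and errors, and never surfaces the result to the user. The function names here are hypothetical:

```python
import logging
import time

log = logging.getLogger("dark_launch")

def dark_launch(new_code_path):
    """Run a not-yet-visible code path silently.

    The new path's results and errors are logged but never shown, so a
    bug or slowdown in it can't break the visible experience.
    """
    def run(*args, **kwargs):
        start = time.monotonic()
        try:
            new_code_path(*args, **kwargs)
            log.info("dark path ok in %.1f ms", (time.monotonic() - start) * 1000)
        except Exception:
            log.exception("dark path failed (user saw nothing)")
    return run

# Hypothetical new backend being exercised silently alongside the real one:
def new_search_backend(query):
    return query.upper()

shadow_search = dark_launch(new_search_backend)
shadow_search("hello")  # runs and times the new path; the user sees old results
```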

Anticipate a rest period. Don’t promise statistics the day after a release. You’ll most likely want to see a week of usage before your numbers begin leveling.

Test the discoverability of your real estate. Most pieces of your UI will have certain natural discoverability rates. For instance, consider temporarily adding a new link to your menu header for a very small percentage of your users just to understand the discoverability rates for different parts of your UI. You can use these numbers as a baseline for evaluating future features.

Pitfall 3: Users give you conflicting feedback.

You are running a usability study and evaluating whether users prefer to delete album pictures using a delete keystroke, a remove button, a drag-to-trash gesture, or a right-click context menu. After testing a dozen participants, your results are split among all four potential solutions. Maybe you should just recommend implementing all of them?

It’s unrealistic to expect users to understand the full context of our design decisions. A user might suggest adding “Apply” and “Save” buttons to a font preference dialog. However, you might know that an instant-effect dialog where the settings are applied immediately without clicking a button or dismissing the dialog makes for an easier, more effective user experience. With user research, it’s temptingly easy to create surveys or design our experiments so study participants simply vote on what they perceive as the right solution. However, the user is giving you data, not an expert opinion. If you take user feedback at face value, you typically end up with a split vote and little data to make an informed decision.

Ask why. Asking users for their preference is not nearly as informative as asking users why they have a preference. Perhaps they are basing their opinion upon a real-world situation that you don’t think is applicable to the majority of your users (e.g., “I like this new mouse preference option because I live next to a train track and my mouse shakes and wakes up my screen saver”).

Make a judgment call. It’s not often helpful to users to have multiple forms of the same UI. In most cases it adds ambiguity or compensates for a poorly designed UI. When the user feedback is conflicting, you have to make a judgment call based upon what you know about the product and what you think makes sense for the user. Only in rare cases will all users have the same feedback or opinion in a research study. Making intelligent recommendations based upon conflicting data is what you are paid to do.

Don’t aim for the middle ground. If you have a legitimate case for building multiple implementations of the same UI (e.g., language differences, accessibility, corporate vs. consumer backgrounds, etc.), don’t fabricate a hodgepodge persona (“Everyone speaks a little bit of English!”). Instead, do your best to dynamically detect the type of user situation upfront, automate your UI for that user, and offer your user an easy way to switch.

Pitfall 4: Any data is better than no data, right?

You are debating whether to put a search box at the top or the bottom of a content section. While talking about the issue over lunch, your business development buddy suggests that you try making the top search box “Search across the Web” and the bottom search box “Search this article” to compare the results between the two. You can’t quite put your finger on why this idea seems fishy, though you can see why this would be more efficient than getting your rusty A/B testing system up and running again. Sensing your skepticism, your teammate adds, “I know it’s not perfect, but we’ll learn something about search boxes, right? I don’t see a reason not to put it in the next release if it’s easy.”

The human mind’s ability to fabricate stories to fill in the gaps in one’s knowledge is absolutely astounding. Given two or three data points, our minds can construct an alternate reality in which all of those data points make flawless sense. Whether it’s an A/B test, a usability study, or a survey, if your exploration provides limited or skewed results, you’ll most likely end up in a meeting room discussing everyone’s different interpretations of the data. This meeting won’t be productive and you’ll either agree with the most persuasive viewpoint or you’ll realize that you need a follow-up study to reconcile the potential interpretations of your study.

Push for requirements. When talking with your colleagues, try to figure out what you are trying to learn. What is the success metric you’re looking for? What will the numbers actually tell you? What are the different scenarios? This will help you determine the study you should run while also anticipating future interpretations of the data before running the study (e.g., if the top search bar performs better, did you learn that the top placement is better or just that users look for site search in the upper left area of a page?).

Recognize when a proposed solution is actually a problem statement. Sometimes someone will propose an idea that doesn’t seem to make sense. While your initial reaction may be to be defensive or to point out the flaws in the proposed A/B study, you should consider that your buddy is responding to something outside your view and that you don’t have all of the data. In this scenario, perhaps your teammate is proposing running the search box study because he has a meeting early next week and needs to work on a quicker timeline. From his perspective, he’s being polite by leading with a suggestion without realizing that you don’t have the context for his suggestion. However, after pushing him for what problem the above study will resolve, you can also help him think through alternative ways of getting the data he needs faster.

Avoid using UX to resolve debates. UX might seem like a fantastic way to avoid personal confrontation (especially with managers and execs!). After all, it’s far easier to debate UX results than personal viewpoints. However, data is rarely as definitive as we’d like. Conducting needless studies runs the risk of slowing down your execution speed and leaving unresolved deeper philosophical issues that will probably resurface. Sometimes we agree to a study because we aren’t thinking fast enough to weigh the pros and cons of the approach, and it seems easier to simply agree. However, you do have the option of occasionally saying, “You’ve raised some really good points. I’d like to spend a few hours researching this issue more before we commit to this study. Can we talk in a few hours?” And when you do ask for this time, be absolutely certain to follow up with some alternative proposals or questions, not just reasons why you think it won’t work. You should approach your next conversation with, “I think we can apply previous research to this problem,” or “Thinking about this more, I realized I didn’t understand why it was strategically important to focus on this branding element. Can you walk me through your thinking?” or “After today’s conversation, I realized that we were both trying to decrease churn but in different ways. If we do this study, I think we’re going to be overlooking the more serious issue, which is…”

Pitfall 5: By human nature, you trust the numbers going in the right direction and distrust the numbers going in the wrong direction.

Hours after a release, you hear the PM shout, “Look! Our error rates just decreased from .5% to .0001%. Way to go engineering team! Huh, but our registration numbers are down. Are we sure we’re logging that right?”

Even with well-maintained scripts, the most talented stats team, and the best intentions, your usage statistics will never be 100% accurate. Because double-checking every number is unrealistic, you naturally tend to optimize along two paths: 1) distrust the numbers that are going in the wrong direction and, more dangerously, 2) trust the numbers that are heading in the right direction. To make matters worse, data logging is amazingly error-prone. If you spot a significant change in a newly introduced user activity metric, nine times out of ten it’s due to a bug rather than a meaningful behavior. As a result, five minutes of logging can result in five days of data analyzing, fixing, and verifying.

Hold off on the champagne. Everyone wants to be the first to relay good news so it’s hard to resist saying, “We’re still verifying things and it’s really early, but I think registration numbers went up ten-fold in the last release!” Train yourself to be skeptical and to sanity-check the good news and the bad news.

QA your logging numbers. Data logging typically gets inserted when the code is about to be frozen. Since data logging shouldn’t interfere with the user experience, it tends not to be tested. Write test cases for your important data logging numbers and include testing them in the QA process.
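Including logging in the QA process can be as simple as a unit test that exercises an action and asserts that exactly one event fires with the expected fields. A minimal sketch, with all names (`record_event`, `register_user`, the event name) invented for illustration:

```python
# `record_event` stands in for whatever your real logging client exposes;
# in tests it just appends to an in-memory list we can inspect.
logged_events = []

def record_event(name, **fields):
    logged_events.append({"name": name, **fields})

def register_user(email):
    """The action under test: registration should log exactly one event."""
    # ... real registration work would happen here ...
    record_event("registration_completed", email_domain=email.split("@")[1])
    return True

def test_registration_logs_one_event():
    logged_events.clear()
    register_user("ada@example.com")
    assert len(logged_events) == 1
    event = logged_events[0]
    assert event["name"] == "registration_completed"
    assert event["email_domain"] == "example.com"

test_registration_logs_one_event()
```

A test like this catches the classic failure modes: the event never fires, fires twice, or fires with a malformed field, all of which would otherwise surface as a confusing dashboard anomaly weeks later.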

Establish a crisp data vocabulary. Engagement, activity, and session can mean entirely different things between teams. Make sure that your data gatekeeper has made it clear how numbers are calculated on your dashboards to help avoid false alarms or overlooked issues.
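One way to keep the vocabulary crisp is to write the definition down as code that every dashboard shares. For example, a single function defining a “session” as a run of events with no gap longer than 30 minutes; the threshold here is an illustrative assumption, not a standard:

```python
SESSION_GAP_SECONDS = 30 * 60  # assumed convention: a 30-minute inactivity gap ends a session

def count_sessions(timestamps):
    """Count sessions in a sorted list of one user's event timestamps (in seconds).

    Keeping this definition in one shared function means every dashboard
    computes "session" the same way, instead of each team re-deriving it.
    """
    if not timestamps:
        return 0
    sessions = 1
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > SESSION_GAP_SECONDS:
            sessions += 1
    return sessions

# Three events, with a two-hour gap before the last one -> two sessions:
count_sessions([0, 600, 600 + 2 * 3600])
```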

One of the main tenets of user research is to constantly test the assumptions that we develop from working on a product on a daily basis. It takes time to develop the skills to know how to apply our UX techniques, when our professional expertise should trump the user’s voice, or when to distrust user data. As a researcher, you are trained to keep an open mind and to keep asking questions until you understand the user’s entire mental picture. However, it’s that same open-mindedness and willingness to understand the user’s perspective that makes it easy to assume that because the user’s perspective makes sense, it should also justify changes to our product design. Or, because we are so comfortable with a particular type of UX research, we tend to over-apply it to our team’s questions.

While by no means a complete list, I hope these five pitfalls from my personal experience will be relevant to your professional lives and perhaps provide some food for thought as we all strive to become better researchers and designers.

ABOUT THE AUTHOR(S)

Elaine Wherry is Co-founder and VP of Products at Meebo and oversees Meebo's Web, User Experience, and Product Management teams. She takes a special interest in finding passionate folks who want to build amazing products that bring people closer together across the Web: http://www.meebo.com/jobs/openings/. You can find her personal blog at http://www.ewherry.com/.

Comments

That brought to mind a couple things I've noticed:
1. Not entirely UX-related but similar to Pitfalls 4 and 5: there may be times when it's easy to measure the upside but difficult to measure the downside, and then there's often a tendency to ignore the downside. E.g., we added a big distracting button over here and our whatever rate has gone up a bit, but are people getting annoyed and leaving the service? We can't tell if they are, so we'll keep the button!

2. While I haven't seen any data to back this up, I suspect users will complain more than they will compliment. So if you change something and look at user feedback, you may see a lot of complaints, even if overall more users like it the new way.

Hi Elaine, nice article. I agree with several points, but would also like to point out a few things I would differ on.

1. When I worked at Apple we used 0 data. We maintained the fact that "we know what you want better than you do."
2. When I worked at Microsoft we used a MOUNTAIN of data and I totally agree with not using UX data to argue a point. You can use it.. and I have, but as I always state "UX can be used for good or evil." Several times UX data has come back negatively and my response was, "The participants were wrong.. period."
3. Determine if you are going to treat your product as a web page or software. Design accordingly. Do not fall into the typical web design patterns and junk that most web people do. I have been quoted numerous times saying that web designers are the red headed step children of software. I still think that.
4. Do not micro-analyze your data. Don't get caught up in the # of clicks on a single button. Take a step back and look at the overall picture to find out the use cases for why they are clicking on it. Not the purpose of the button, color, etc.

The main problem with using web standards to build a piece of software is that ... as in meebo... with your notifier, you have begun to creep outside the web world and into my world. Now what do you do?

If you rely on the browser to perform functionality that you should be doing (like change font size in meebo) what do you do when the browsers change? Minimize dependencies.

Very nice article Elaine. Very detailed and extremely valuable in terms of the examples you quote and experience you've shared. Your article was a good read.

~
JSH
~

While I was reading your article, I coincidentally got a tweet with the following link: http://repeatgeek.com/technical/10-resources-for-design-challenged-programmers/
Although I don't believe everything that is being said about developers in the above link, some of the points are really hilarious in the way a programmer is stereotyped and contrasted with a usability and UI designer's point of view. Thought you'd like it.

Yeah, and arrogant web designers like Ryan Carson are one of the many reasons UX specialists are necessary. People who think that their knowledge of CSS, HTML, and whatever usability experience they've picked up along the way qualifies them to research and design user-centered software products are way underestimating other people's skills and experience, and way overestimating their own. Having good UX pros on your team helps inoculate your project against people who arrogantly assume they know best. There's a *HUGE* difference between web design and software product design, both in terms of the tools and of the complexity involved in building them. Would Ryan Carson's advanced knowledge of HTML5 and CSS do him much good on the Adobe Creative Suite design team, or on the Amazon information architecture group, or on the National Federation for the Blind's website?

Anyway, I don't need to keep going on about how wrong that guy is... a bunch of people have already done a great job of breaking down that assertion from a hundred different directions. What shines out of his responses to the comments is that he believes that designers should be like unto all-knowing gods of every facet of research and design, and that he considers himself such a god. If you truly had such a designer, you can be sure he'd be charging much higher rates than your average web design production monkey.

Given the fact that a large number of Meebo users are following your link to this article from Meebo's start area, it would be a good idea to include a sentence to define or give preliminary background info on "UX design". I understand that the article was written for a magazine of the same name, so yes, it was safe to assume that those readers would already be familiar with "UX". But in the online world, any article can be posted and/or linked to by an outside group, so basic background information would be necessary and would avoid a lot of abandoned attempts to read your article to completion. This is particularly true where a search of a term like "UX" would surely return a very large number of differing results that would only add confusion. Just my 2 cents. :) Thanks

What's good here is seeing UX techniques presented from a usability study perspective. We usually hear a lot about UX from the early design phase. It validates the need for up-front design, which a lot of skeptical employees (particularly in tech companies) dismiss as a nice-to-have rather than a must-have.

I agree, some companies have a tendency to create experiments/surveys where participants simply vote for what they perceive as the right solution. Providing participants with every single feature they ask for creates a monster. More often than not, users just don't really know what they want.

Wow this is a great article, I will have to reread it a couple of times to get the most out of it. We are now in the second redesign of our beta product. The funny part is that the 3rd one is already on its way. The more we talk to users the more we see the product through their eyes...