Review of the New UX Book "Remote Research"

Damnit. I wanted to write a book like Remote Research (Rosenfeld, 2010), but these guys beat me to it. Though I probably wouldn't have done it as well, or as thoroughly, or as amusingly as Nate Bolt and Tony Tulathimutte did. And if I didn't like those guys and respect them so much, I'd be really upset. I'll admit that my intention was to just skim the book enough to write a minimally credible review. But I couldn't put it down. (If that makes me a geek, I'm okay with that.)

Doing remote research is all the rage. I first started doing it when my clients wanted geographic diversity but didn't want to pay for travel to the geographic locations. This trend continues even as budgets begin to grow back to pre-economic-crisis levels because it just makes good sense; remote research is a smart tool to have in the toolbox.

There are many advantages to conducting remote sessions, and the UX world is now seeing them. It's very inexpensive. You can gather a lot of data without leaving the comfort of your own office. Team members who are remote can easily observe sessions. Even if you don't buy special tools for conducting online sessions, it appears pretty easy to put a study together.

The discussion lists are full of questions about which tools to use and how much they cost. Nate and Tony have anticipated every question and answered each clearly and intelligently. This book is smart, it's plain, and it is clear that these guys know of what they speak. But they haven't said, “Do it our way or hit the highway.” They present the options, share their experiences, and call on others to tell their stories (I'm one of them). We even get lessons learned.

For example, one of the most difficult things to do on any usability test (especially for automated remote usability tests) is to create tasks for participants to perform. Nate and Tony do a masterful job of using their own experiences to illustrate what to do and what not to do, and why. In fact, this section of the book is worth its price alone. They explain beautifully how to develop task scenarios based on research questions. I actually think their section about task development (or "elicitation," as they call it) is better than the one in my book, Handbook of Usability Testing Second Edition. I absolutely intend to point people to their explanation every chance I get. This book is loaded with just those kinds of gems.

Though there is a lot of information about tools, applications, and services to use, this book is not masquerading as a third-party user manual for any of them. The undercurrent here is an outline for performing research projects, including advice about methodological subtleties. I just love the section in Chapter 5 called "Quiet, Chatty, Drunk, Bored, and Mean" (preview an excerpt from this section) about how to handle different types of problem participants. And while the authors make no claims of having done "remote ethnography," the authors share some really useful tips about the lovely qualitative information you might pick up about the physical context the participant is in if you are very observant.

The heart of the book is a straight read: conversational prose about what remote research is about, why you would do it, when it is best and when it might not answer your questions, and how to do it to gather the data you need. If you're wondering about whether remote research is the right approach to answer your research questions, you'll get well-informed answers in this book.

Beyond that, there are dozens of neat inserts and insets. From excellent, brief case studies to side-by-side comparisons along with samples of deliverables, the book offers tips, tricks, and techniques told by the authors and a cast of others who have trod these remote waters and lived to tell the tale.

The greatest contribution this book makes to the UX world, however, is in how Nate and Tony handle the definitions, constraints, and advantages of moderated versus automated remote research. I personally am not a fan of automated tests, especially for teams that are new to user research. People who ask me about it seem to be hoping to put a website out there and then sit back and wait for the data to flow in. But as Remote Research points out, performing a good, useful automated test takes careful research design and scripting.

For this public service, I salute the authors. As the guys say, "don't waste your life doing pointless research." Get this book. Then email Nate and ask how he talked Rosenfeld Media into putting a pink cover on it.

Comments


Right from the start there are some useful case studies and comparisons. These are what make the book so useful. With the various other tips and tricks it is overall a pretty good addition to literature on remote research.

It sounds to me like a lot of designers have been integrating remote research into their projects. It is something I am just now becoming familiar with, but it seems like the next big trend.

I went directly from this post to another one (http://uxmag.com/technology/usabilla-and-loop11) on a mission to integrate remote research tools into the UX side of my design work. I have to say that I learned some cool tricks, but for the most part I will not be implementing these methods in my everyday work, just on special occasions.

I read the book and came away with some pretty useful points in the long run. I am a firm believer that when it comes to UX it is important to stay on top of the game and I feel like this book did help me out with this. Not highly recommended, but recommended.

I recently read this book and was able to apply it in my everyday work. I was actually planning out usability testing but had the constraints of a limited budget, no lab, and few participants who could test in person.

The methods, insights, and suggestions in the book really helped me to perform some very successful usability testing within my resources.

None of these methods limited my results, and I actually got the information I needed from the tests more quickly, which let me get right to work on a strategy for site improvements based on the results.