I am working on a new design for bank reconciliation in a piece of accounting software. We have identified our users' existing workflows and problems with our current design from an earlier round of customer interviews.

A couple of additional feature suggestions for bank reconciliation came up from our team based on other user needs they have seen in the past. An example is the ability to attach notes and files to a bank reconciliation. I'm not sure to what extent our users would actually use these features (they weren't core needs that we found in the earlier interviews), and I don't want to add to development time and clutter the screen with these bonuses if there's not high value in them for our users.

I'm planning to use a remote survey tool such as Verify to study this. I could ask the question in a couple of ways off the top of my head:

How likely would you be to use [feature] on a scale of 1-10?

How likely would you be to recommend [feature] to a colleague on a scale of 1-10?

Would you have a use for [feature] in your regular process (yes or no)?

What is the best way to ask the question and interpret the results in order to get an accurate assessment of how widely used these bonus features would be? I don't want the results biased by respondents simply being agreeable.

Are you trying to (a) determine whether users have the problem you imagine they need you to solve, or (b) assuming they have the need, determine how hard to solve it for them (from the simplistic "here's a workaround, good luck", all the way up to "here's something completely magical, and it will even cook breakfast for you the next morning")?
– Erics, Nov 26 '13 at 11:40

@Erics Basically (a). But it's not about the overall problem; it's trying to gauge the value of a few bonus features related to solving that problem.
– Mike Eng, Nov 26 '13 at 23:07

5 Answers

This actually comes up a lot in my own practice, mainly because feature parity is often highly valued by internal stakeholders. If you've already defined the workflows from interviews and this didn't come up, then you probably need to do two things: 1) make sure the feature is/isn't needed and 2) quell the call for the feature.

To verify the feature is/isn't needed, test your design in whatever form it is in, low- or high-fidelity. (This doesn't need to be many people; 7-10 should do.) The main goal is to test the goals and tasks you've already defined and make sure your designs adequately address those first. Then, probe users for new features by asking questions like:

What can't you do here?

How would this make your job harder?

What's missing from this?

I propose this type of questioning (open questions) to elicit a negative response because users are notoriously deceptive, albeit unknowingly, when answering self-reporting questions like "who would use..." and "would you recommend..." (closed questions). Here's a quick summary of open vs. closed questions:

Open-ended questions:

Pros: develop trust, are perceived as less threatening, allow an unrestrained or free response, and may be more useful with articulate users.

Cons: can be time-consuming, may result in unnecessary information, and may require more effort on the part of the user.

Closed-ended questions are those which can be answered finitely by either "yes" or "no" [or from some finite set of options].

Pros: quick and require little time investment, just the answer.

Cons: incomplete responses, require more time with inarticulate users, can be leading and hence irritating or even threatening to the user, can result in misleading assumptions/conclusions about the user's information need, and discourage disclosure.

To test your assumptions, I would consider a prototype MVP (minimum viable product) instead of a survey.

For example, instead of implementing the entire bank reconciliation attachment feature, you could have a link or button that says "Attach Notes or Files". Then, you could measure how many times the link is clicked. This will help you determine if this is a feature that people would use.

The problem with prototype MVPs is that the user is then left with an unmet need. After the user clicks the link, you should explain to them that they just voted for this feature in an upcoming release and they can email you if they have further feedback.

Before creating your prototype MVP, I would document your assumptions. You should explain why you think the feature needs to exist and the supporting research.
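As a rough sketch of the click-counting idea behind the fake-door link, here is what the tracking could look like. All names here (the tracker, the feature IDs) are hypothetical illustrations, not part of any real product or analytics API:

```javascript
// A minimal "fake door" counter sketch (hypothetical names; your product
// and analytics stack will differ). The placeholder "Attach Notes or Files"
// button does nothing yet; each click is simply counted so you can gauge
// interest before building the feature.
function createFakeDoorTracker() {
  const counts = {};
  return {
    // Record one click on a placeholder feature.
    record(featureId) {
      counts[featureId] = (counts[featureId] || 0) + 1;
    },
    // How many times the placeholder has been clicked so far.
    interest(featureId) {
      return counts[featureId] || 0;
    },
  };
}

// Example: two users try to attach notes, one tries to attach a file.
const tracker = createFakeDoorTracker();
tracker.record("attach-notes");
tracker.record("attach-notes");
tracker.record("attach-files");
```

In a real product, the record call would post the event to your analytics backend and then show the "you just voted for this feature" message. Note that the raw count only tells you how often the door was opened, not how strong the need behind it is.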

You do have to be careful about false positives here, though, due to humans being curious monkeys, poking at things they don't strongly want or need just to figure out what they are.
– Erics, Nov 26 '13 at 11:47

I'd go along with @Andrew's answer and build a prototype of some kind and test your assumptions with that.

It doesn't even necessarily have to be something in the main product - a paper prototype or other mock-up should be enough to get you some level of validation that the feature meets some real customer need.

The point I'd add is that the style of questions you proposed ...

How likely would you be to use [feature] on a scale of 1-10?

How likely would you be to recommend [feature] to a colleague on a scale of 1-10?

Would you have a use for [feature] in your regular process (yes or no)?

... are, in my experience, really ineffective.

First, without the context of the actual feature and workflow, the customer has to imagine their actual context. Second, you've already delivered a leading question by discussing and describing the feature. Both of these combined tend to produce answers that bear little relation to how the feature is actually used.

Prototyping - and interviews in the working context - are much more effective.

Have a conversation with a couple of users to start out, and then do a survey. Either way, find out how they are currently handling notes and files (attachments):

paper files

local and/or network hard drive

thumb drive

3rd party service like Dropbox or Evernote

Are they burdened with creating complex folder structures to "match" the information in your application?

Do they prefer the flexibility and sharing capabilities of some of the 3rd-party apps and would prefer your app to work with them?

Meshing with other services may be a way to promote your product as well. Have a conversation with your users. It costs a respondent nothing to answer "yes" to new features, so I don't think a survey alone will give you any better understanding of how your app fits into all of their needs. Ultimately, you want them to say, "You know what would really help?"

Thanks, but I think I may have led you astray with the question. I edited it to clarify that "notes" and "files" are some potential bonus features to the core functionality. The question is about whether or not there is a common enough use for notes and files on this core functionality.
– Mike Eng, Jun 12 '13 at 15:11

You say that these features came from "our team, based on other user needs they have seen in the past." This makes sense, coming up with solutions to problems someone has witnessed, rather than solutions to problems someone imagines. Or, worse yet, features that seem cool.

If it were me, I'd verify that those problems exist by observing users as they perform their reconciliation tasks. However they do it - with your tool, with a competitor's tool, using pen and paper, ... Do they add notes as they work? As you observe, you can ask for details on what they're doing as it's happening, and ask questions immediately afterward. "What was the hardest part of that?" "Is there a way to make that step easier?"

That's what I would do because I trust observation a lot more than surveys and interviews.