I read a great post by Jeff Bussgang (an entrepreneur turned VC) where he talked about “mother-in-law market research”. He shared a quote by his MBA classmate:

“I think [this consumer product] will be a hit because I can see my mother-in-law buying it.”

I don’t have to explain why such an assertion should never be made without supporting data. (Does the MIL match the primary or secondary personas? Does she have the right demographic / psychographic profile? Was there qualitative and quantitative research to back it up?)

The “focus group of one” happens to the best of us, regardless of our roles. Sometimes we use the MIL, most times we project our own views onto the target personas.

It generally goes like this. Someone in the company becomes fixated on a product feature. 99% of the time, he or she is not a good match for the target personas. (He might be a man commenting on a product that treats hot flashes for menopausal women.) He or she shares an opinion:

“Yesterday I saw a demo of <product feature>, and it immediately made me think people can <achieve improbable application of product feature to unrealistic use case>. I am now absolutely convinced we must line up all our resources to optimize the user experience for <unrealistic use case> because if I thought of this, lots of other people would want to do that too.”

I’ve seen it happen to founders, CEOs, CTOs, COOs, or SVPs of something-or-other. For all their brains and success (present and past), they fall into the trap of believing they can project themselves into the minds of target end users, without taking the time to really understand the latter.

Unfortunately, these guys routinely underestimate the magnitude of thrashing they can cause. Let’s face it, if the SVP Sales declares that product feature X must do Y, the product team isn’t going to spend 2 minutes convincing her otherwise. Instead, it is going to spend 2 business days putting together a well-structured argument based on facts. Then they will make an appointment with her to present their arguments. They may even commission a new research study to put the argument to bed.

And let’s face it… if she remains unconvinced despite hard facts, and the company is set up in that way, Feature X shall do Y in the next release. Forget about the research results and the needs of the end users.

So, to folks on the management team: please don’t become the focus group of one. At best, you will waste much more time of your product team than you realize. At worst, your stray comments could cost your company its ability to develop great products.

There are many templates for positioning statements, and different ones exist for brand, product and services.

Being a product person, I must admit that I really only care about functional positioning statements for products or services. I am partial to the product version below (courtesy of GrowthConnection):

Product Position Statement

For [target end user]

Who wants/needs [compelling reason to buy]

The [product name] is a [product category]

That provides [key benefit].

Unlike [main competitor],

The [product name] [key differentiation]

Just for laughs I am going to apply it to a fictitious product that I would love to buy: a miniature kitchen composter that doesn’t take up much space, requires no worms, maintains a high core temperature purely from absorbing energy from the lights that are on every night, and doesn’t smell at all.

For consumers passionate about the environment,

Who practice green, sustainable living in every aspect of their lives,

The magic composter is an indoor composter

That provides a fast, odorless way to sustainably convert all your kitchen waste into compost, 365 days a year.

Unlike the Worm Factory®,

The magic composter has the footprint of a 2.5 gallon trash bin, emits no odor, can keep up with the kitchen waste of a family of 6, costs only $15, and requires no worms to operate.

Now I am sure my magic composter needs lots more work, but that’s the idea. This format forces you to think about why the product exists and what it does for the target persona. I love it.
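To make the fill-in-the-blanks nature of the template concrete, here is a minimal sketch in Python that assembles a positioning statement from its parts. The field names and helper are my own illustrative choices, not part of the GrowthConnection template itself:

```python
# A hypothetical sketch: the positioning template expressed as a string
# template, filled in with the magic composter example from above.
TEMPLATE = (
    "For {target_end_user},\n"
    "Who {compelling_reason},\n"
    "The {product_name} is a {product_category}\n"
    "That provides {key_benefit}.\n"
    "Unlike {main_competitor},\n"
    "The {product_name} {key_differentiation}."
)

statement = TEMPLATE.format(
    target_end_user="consumers passionate about the environment",
    compelling_reason="practice green, sustainable living in every aspect of their lives",
    product_name="magic composter",
    product_category="indoor composter",
    key_benefit="a fast, odorless way to convert kitchen waste into compost",
    main_competitor="the Worm Factory",
    key_differentiation="requires no worms and has the footprint of a 2.5 gallon trash bin",
)
print(statement)
```

The point of writing it this way: if you can’t fill in every field without hand-waving, the positioning isn’t done yet.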

“Beta” is a nickname for software that has passed the alpha testing stage of development and has been released to users for testing before its official release. It is a prototype of the software released to the public. Beta testing allows the software to undergo usability testing with users who provide feedback, so that any malfunctions these users find can be reported to the developers and fixed. Beta software can be unstable and may cause crashes or data loss.

In my mind, the beta program provides an early window into how the market will receive the product release. It generally happens at the very end of the development cycle. I generally run small-scale beta programs as follows:

Recruit 10-20 beta testers to match the primary and secondary personas that the product is designed for

Do a kickoff meeting (either one on one or in a group) to set expectations on what’s in the new release, and how we expect to collect feedback from beta testers

Ask beta testers to use the product in the target environment of use

Schedule a call with each tester on the phone 1 week into the program, to ensure everything is going smoothly

Use phone calls and email to keep track of progress during the program

At the end of the beta program, schedule a phone conference or an in person debrief to collect feedback.

Since beta programs occur at the very end of the development cycle, typically weeks before the target release date, they are really only useful for testing things that can be iterated right before the release: positioning and messaging, delivery and support mechanisms, and the like. There is a great recent post on beta programs by Dave Daniels of Pragmatic Marketing that outlines all this – do take a look, it brings into sharp relief many of the questionable practices a lot of software companies take for granted.

Findings from beta programs can also be used to pull a release if (gasp!) a customer discovers a fatal bug that the QA department failed to find. Lastly, it can also be used as a vehicle to collect customer feedback for the next release. It is NOT a vehicle for usability testing – it is way too late in the game for that! Usability studies (whether in lab or extended use tests) should be done early in the development cycle, before the product is finalized and when there is still time to effect change.

Using beta programs to test positioning is a great idea. One can save a lot of money in marketing programs by iterating the messaging with target buyers until winning messages emerge.

For me, I prefer to think that a startup becomes a small business once it has evolved a business model, acquired a substantial number of customers, is at least somewhat dependent on the resultant revenue stream, and is feeling a need to scale.

Now why do we care about this terminology? It’s because “startup” and “small business” evoke completely different images and expectations on the proper way to conduct business. A startup’s charter is to experiment and learn until it finds something that works and has lasting value. A small business must keep its existing business model going while investing in new innovations. Naively applying startup thinking to a small business or vice versa could very well result in a suboptimal decision making process.

One place that really matters is the pivot. A startup, with a small number of customers, can rapidly iterate on learnings from the market, and pivot on product strategy, offerings, positioning / messaging, and go-to-market strategy without worrying about alienating a large customer base. A small business, however, has amassed substantial numbers of customers that it cannot offend (having no alternative customer base to help maintain its revenue stream). Thus it must keep a certain level of staff for baseline development, sales and marketing and customer support activities, just to stay in business.

So the small business can still pivot, but the scope of change is more constrained. It can iterate on positioning / messaging and go-to-market strategy, but changes in product and service offerings must take on a smaller scope and/or a longer timeline. Of course, ultimately whether a company can pivot and change course quickly depends on its cash situation… a small business with a large war chest can well out-pivot a startup with low cash reserves.

So is it good or bad to have made this transition? I for one think it’s great news. It means the company is one step closer to nirvana – becoming a successful business and bringing lasting value to customers. It’s a great time to be with a company!

After years of denial, I finally caved and switched to an iPhone after going through the following devices in the past 5 years:

A Palm Treo 650 (2005-2007)

A Samsung Blackjack II (2007-2009)

A BlackBerry Curve 8900 (2009-2010)

Being a cheapskate, I bought my iPhone 3G 8GB used on eBay. So where does that place me in the technology adoption curve? Am I:

In the early majority for smartphone technology, or

In the laggards category for iPhone adoption?

I must say now that I’ve finally joined the club, I feel extremely stupid not to have chosen it over the 8900. I miss my QWERTY keyboard tragically, but the UI! And the apps! Wow, the apps! I’m a total convert after 3 days of use!

Usability research for consumer electronics products can be very costly. There are companies that specialize in doing it the right way, with high end audio/video equipment and multi-stream video editing and compositing integrated into the program. The deliverable is typically an incredibly insightful presentation with snippets of video that tell a compelling story all by themselves.

Since I work with startups and small businesses, I have never had the luxury of doing it “right”. My theory is that some research trumps no research. So I butcher best practices until they become unrecognizable but affordable (deepest apologies to Scott Weiss who taught me how to do it the right way!)

I usually start with a research protocol that clearly states the questions we want to answer, provides a guideline for recruitment, and lists the props required for the session, which include a mini-DVD camcorder and a tripod. Then I design the session, which is videotaped throughout. I try to keep the session under an hour if at all possible.

Let’s say I am comparing the usability of two smart phones for working moms aged 35-45, with at least one child under the age of 12 living in their house. The session could look something like this:

Introductions and orientation – explain purpose of research to subject and let them know what to expect (5 min)

Execute any paperwork, such as an NDA, photo and video release forms, and a profiling questionnaire (5 min)

Ask the subject to familiarize themselves with Device 1. Product manuals are provided to the subject. (5 min)

Repeat with Device 2. (5 min)

Ask the subject to execute a scripted task list for Device 1. Tasks tend to be fairly specific – for instance, I could ask them to make a call, send and receive a text message, check traffic, take a picture, upload a picture to a computer, take a video, etc. Ask them to verbalize what they are doing as they try things out (but do not offer hints or commentary – we are there to watch and learn, not to talk.) (10 min)

Repeat with Device 2. (10 min)

Debrief – loosely guided interview to ask subjects to rate the usability of each task on a scale of 1 to 5, as well as answer some open ended questions about their general impressions and perceptions (20 min)

Present the incentive check (typically $50-100, depending on the nature of the study).
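As a quick arithmetic check on the schedule above, the segments add up exactly to the one-hour budget. A trivial sketch (the segment labels are my own paraphrases of the steps above):

```python
# Tally the usability session segments (minutes) to verify the 1h budget.
segments = {
    "Introductions and orientation": 5,
    "Paperwork": 5,
    "Familiarization with Device 1": 5,
    "Familiarization with Device 2": 5,
    "Scripted tasks, Device 1": 10,
    "Scripted tasks, Device 2": 10,
    "Debrief": 20,
}
total = sum(segments.values())
print(f"Total session length: {total} min")  # 60 min, right at the budget
```

There is no slack in that hour, which is why the moderator has to keep each segment moving.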

This format is great at providing a sanity check for the out-of-box experience for consumer electronics devices. Can the end user figure out how to set up a new device and get it working without groaning and gnashing of teeth? Lots of times they can’t. I’ve learned so much about what’s wrong with the current packaging design and documentation from watching subjects struggle through product setup. It is very hard to keep quiet and not offer suggestions along the way… but the learnings are priceless.

As with all other kinds of research, I am aggressive about inviting engineering team members to be observers in these sessions. This is the best way to help them understand who they are designing for and why certain feature enhancements are necessary to ensure an awesome user experience.

When I do qualitative research, invariably someone will say: “oh, so you are doing focus groups!” which usually makes me cringe. The reality is that 99% of the time I am not doing focus groups. I may be doing detailed interviews or observations, which are 1-on-1 techniques. Or I may be doing a photo essay or journal study. Or I may be doing roundtable discussions.

For those, I adhere to the same best practices one uses to run focus groups:

Include no more than 8 participants per session.

Craft a screening questionnaire and recruit carefully to ensure you get the right crowd.

Separate the genders. You get much more candid discussions that way (especially for younger demographics).

Control the discussion. Ensure everyone at the table has a turn (including the reticent ones).

Record the discussion on video for future reference.

Such a roundtable discussion becomes a focus group only if two more criteria are met:

The moderator is an independent third party who is engaged by the company to do this research.

The discussion is held in a research facility with one-way glass.

I’m old school. I insist on using an independent moderator for focus groups. In my experience, employees tend to be too close to the products and services provided by the employer. They have assumptions and expectations that may impact their ability to be impartial. A third party moderator has no such baggage. He or she is free to learn about the product by asking probing questions, then lead a discussion where more probing questions are asked. The resulting quality of the discussion tends to be much higher and more unbounded for this reason.

As for the facility, the one-way-glass room has superb benefits. It allows a lot more people from the company to observe. It also removes employees from the participants, helping to foster a more genuine discussion. It costs more than using the company lunch room, but I have never felt like I didn’t get my money’s worth in such a facility.

What’s your take? Would you do a focus group in a hotel meeting room? I would love to hear your thoughts.