We recently had a good question from a CEO who just heard about Adaptive Survey(r) Technology, “Why should I use Adaptive instead of SurveyMonkey or any other tool my people already know?”

One of our advisors, Professor Raghu Santanam from Arizona State University, jumped right in with a short definition of the various uses of research, “There are three basic uses for research…”

Descriptive surveys are backward-looking and capture what happened to which demographic in the past. CX (Customer Experience) surveys fall into this category.

Confirmatory surveys confirm information you already know by taking a current reading. Some of these track results over time to see whether anything changes – NPS and other tracking surveys, for example.

Discovery surveys are forward-looking: you want actionable insights that lead you to do something.

Adaptive Survey(r) technology falls primarily into the Discovery category. It is used to generate new ideas, innovations, or simply unexpected opportunities to delight customers. If you are looking for new, actionable insights in priority order, Adaptive is the right type of tool for you. Adaptive generates ideas that would not be conceived in a traditional survey design.

The first and only tool for doing Adaptive Surveys(r) is at GroupInsight.com.

Once you see some results from your discovery survey, you’ll find that Adaptive is also useful in descriptive and confirmatory surveys, replacing open-ended questions and collapsing multiple ratings into one Adaptive Question(r). Many surveys can be reduced by 80% this way; 30 questions can become 5 or 6. Adaptive surveys require dramatically fewer questions, yet provide more business insight.

There is a lot of talk about big data and data mining. It’s all good stuff as people try to get control of their data stream and monetize it. The current state seems to be organizational — put it in a form that people can read and identify issues. This is a huge step forward.

What do you think about mining the information in people’s minds and organizing it in a way that makes sense? I’m not talking about mind reading, just involving a lot of people in the decision-making process in order to find the best solutions to everyday problems. You could involve employees, customers, prospects, and so on. Here is how it works…

Come up with the ten best solutions to any business problem you have and show them to any group of people who can help. Ask each person to review 10 possible solutions from the pool, indicate the ones they agree with, and then prioritize those. If they have new solutions or don’t see their own, they can add to the pool. After everyone has had a chance to respond, drop the solutions that few people agree with or few consider priorities.

Once you have the top 10 or so solutions from the group, combine them with the ten you started with and pick the one that makes the most sense: the best decision.
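The process above is easy to sketch in code. This is a minimal illustration, not any vendor’s actual implementation; the 30% cutoff and the data shapes are assumptions for the example.

```python
AGREE_THRESHOLD = 0.30  # assumed cutoff: drop solutions under 30% agreement

def rank_solutions(seed_pool, responses):
    """seed_pool: the ten starting solutions (strings).
    responses: one dict per respondent, e.g. {"agree": [...], "added": [...]}.
    Returns surviving solutions sorted by agreement rate, highest first."""
    pool = list(seed_pool)
    for r in responses:                      # respondent-added ideas join the pool
        for s in r.get("added", []):
            if s not in pool:
                pool.append(s)

    n = len(responses)
    rates = {s: sum(s in r["agree"] for r in responses) / n for s in pool}
    kept = [s for s in pool if rates[s] >= AGREE_THRESHOLD]
    return sorted(kept, key=lambda s: rates[s], reverse=True)
```

A solution added mid-fielding competes on the same terms as the seed ideas, which is the heart of the approach: the group both generates and filters the pool.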

Collecting customer feedback for B2B companies requires a much different methodology than that used by a traditional consumer-oriented company. For one, B2C survey respondents are kept anonymous to allow for candidness and “honesty”. For B2B surveys, this doesn’t really work, because relationships are at the crux of recurring revenue. To foster a strong relationship, Customer Success Managers need to follow up with respondents and dig deeper to solve potential issues, or even just say, “Thanks.” Acting on customer feedback is arguably more important than deploying the survey: sending the request for feedback sets expectations, and not acting can do more harm than good.

But this is all much easier said than done, and that’s the hard truth most B2B companies face when closing the loop. It takes extra effort for someone on the team to set aside time to make phone calls or hand-write (what?) a thank-you note. You’ve got to find the right shipping address to send a little thank-you item or card. And what about the work-from-home folks who aren’t in the same office as everyone else? Someone has to look up those details. And you may not always have an intern to handle these “small” tasks.

But the thing is, it’s worth the time investment; eventually, it will pay off. And what are now considered “old-school” methods, like picking up the phone or hand-writing a snail-mail note, are the best ways to make a splash. Chances are your competitors aren’t doing it; if they are, that experience may be enough to win your customers’ business.

For example, we at Waypoint Group always tell our clients to actively recruit survey respondents so that 1) responses come from all the right people and give an accurate picture of the account, and 2) closing the loop is easier post-survey.

While difficult, we’ve had several clients do this well and see great benefits, including the example below. When 800 Account Managers (AMs) actively recruited contacts within their accounts to be sure they had correct email information, not only were new people identified within the accounts, but new upsell and cross-sell opportunities were discovered. Relationships became stronger as a result, and some customers were able to resolve an existing issue – something that certainly delights and can convert a Detractor or Passive.

With a third of these AMs uncovering new leads, it’s results like these that make closing the loop worth it. Do you have a proper strategy for doing so? It’s not too late to make one.

One of my favorite people commented that my last post sounded angry. Sorry about that. It was really just frustration with the process of larger companies.

Some additional information about the results might help you understand my frustration. As you might recall, there was an extended conversation about who to include in the sample for the company’s NPS survey. It is a relationship survey, and many people, like me, feel that customer surveys of this type should be conducted among a representative sample of all customers. The executives decided to include only top-tier customers: those who spend the most money.

Here are the results. The number of completes for our group was about 1,500 last quarter; this quarter…wait for it…46. Our NPS measurement is 16 plus or minus 31. Out of tens of thousands of customers, we got 46 completed surveys. The anger this quarter is coming from the executive team above those who made this decision. To top it off, the same process will be used for the current quarter using twice as many top-tier accounts. I am guessing that we will get about 92 responses this time. I’m not sure whether to laugh or cry.
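For readers wondering where a margin like plus or minus 31 comes from: NPS is the promoter proportion minus the detractor proportion, so its sampling error can be approximated with the standard difference-of-proportions formula. The promoter/passive/detractor split below is hypothetical (the actual mix wasn’t published); the point is how fast the margin grows as n shrinks.

```python
import math

def nps_with_margin(promoters, passives, detractors, z=1.96):
    """Return (NPS, 95% margin of error), both in points on -100..100.
    Uses the standard variance of a difference of proportions:
    var = (p_prom + p_det - (p_prom - p_det)**2) / n."""
    n = promoters + passives + detractors
    p, d = promoters / n, detractors / n
    nps = p - d
    var = (p + d - nps ** 2) / n
    return nps * 100, z * math.sqrt(var) * 100

# A hypothetical split for 46 completes yields a margin around 25+ points;
# the same mix at ~1,500 completes would come in near 4-5 points.
nps, moe = nps_with_margin(promoters=23, passives=8, detractors=15)
```

Whatever the exact mix, no formula rescues a 46-complete sample drawn from tens of thousands of customers.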

I heard a long conversation about an NPS sample that includes only the company’s top accounts.

I get that you want to include top accounts in your sample, but to the exclusion of all other customers? This decision was driven from the top of a large company, by executives who want to understand this key group of customers. What they don’t realize is that once they get a score from this exclusive group, the very next question is, “Compared to what?”

Stay with me for a minute. What if you get an NPS of 35? Is that good? Is it bad? What if you get a score of 60? Wow, a great score. But what if the rest of your customers give you a score of 95? The 60 doesn’t look so good now, does it?

It doesn’t cost a single penny more to survey a representative sample of all customers. Then at the end you can separate this key group from all other customers and see if they are different. NPS is not that hard. Come on people. The worker bees in the room need to tell executives that they are making a mistake. It’s your job. Do it.
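The suggestion above (survey everyone, then split out the key group at analysis time) is trivial once the data is in hand. A sketch with hypothetical, segment-tagged responses:

```python
def nps(scores):
    """NPS from 0-10 likelihood-to-recommend scores: %promoters - %detractors."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses: (segment label, 0-10 score).
responses = [
    ("top-tier", 9), ("top-tier", 7), ("top-tier", 10), ("top-tier", 5),
    ("other", 10), ("other", 9), ("other", 8), ("other", 9), ("other", 3),
]
overall = nps([s for _, s in responses])
top_tier = nps([s for seg, s in responses if seg == "top-tier"])
other = nps([s for seg, s in responses if seg == "other"])
```

Same survey, same cost, and you can now answer “compared to what?” for the key group.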

We got this client feedback through our partner about Adaptive Questions where respondents can see responses from others.

“I spoke to the client today and they are excited about the prospect of this project. They have only one concern which I could not address… they are worried about the likelihood of negative comments surfacing and being shared within the survey group, and possibly shared outside the group. “

One of our best long-term customers from TurboTax put this in a most elegant way, “You have to expect negative feedback to make any progress. If you filter it out, you are doing a disservice to your company.”

I usually like to put both positive and negative people in the same Adaptive Question because we can see how they agree and disagree. Think about how valuable it is to see agreement and disagreement between Promoters and Detractors, happy and unhappy customers. Personally, I prefer to allow respondents to say anything they want because…

Bad ideas, negative ideas, and complaints usually aren’t seen by many people: they rarely attract much agreement, because we word questions to elicit constructive feedback, not complaints.

Given the sensitivity of some clients such as this one, there are a couple of fairly easy solutions.

1. Moderate the comments – only allow constructive comments to be seen. CloudMR has an option to require moderation: respondents see only the comments you approve. That way we can limit the pool of ideas to positive, constructive improvements. The client will need to give us some guidance, which is part of our process. Downside: if you are not aggressive about approving legitimate ideas early in the fielding process, you could go through your entire sample with just the initial 10 seed ideas.

2. Separate positive and negative respondents into two Adaptive Questions(TM) – it is natural to collect a rating such as satisfaction or likelihood to recommend before the Adaptive Question. Simply use the built-in logic to ask unhappy respondents for improvement suggestions and happy customers for positive sound bites. This isolates the negatives to an extent.
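The built-in logic described here amounts to a simple branch on the prior rating. Purely illustrative; the question names and the NPS-style 0-10 scale are assumptions, and grouping passives (7-8) with happy respondents is one choice among several:

```python
def route(likelihood_to_recommend):
    """Route a respondent to the next question based on a 0-10 rating.
    Unhappy respondents get the improvement question; others get the
    positive-soundbite question (passives grouped with happy here)."""
    if likelihood_to_recommend <= 6:       # detractor-range rating
        return "adaptive_q_improvements"   # collect improvement suggestions
    return "adaptive_q_soundbites"         # collect positive sound bites
```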

3. Combine 1 and 2. Adaptive Questions are ideal for capturing word of mouth (WOM). The analysis will include both general buckets of ideas and individual comments. If you want to use WOM in your promotions or advertising, we will ask respondents to identify themselves for the quotes.

When we started thinking about text analysis, one of the issues we discussed was sentiment analysis. Lots of people have tried to figure this out and some companies claim a pretty high success rate using algorithms or other techniques.

Our feeling is that language is so complex, and subtleties such as sarcasm so slippery, that it is very difficult to be accurate. So we took a different approach to the problem: what if we leave the determination of sentiment up to real humans? That simplifies the task to finding which comments to read.

CloudMR’s patent-pending algorithm sorts a list of comments in priority order. The comments most likely to resonate with other respondents are sorted at the top. If you want to know sentiment, just read the top comment for yourself. If you are feeling really energetic, read the first ten. Sentiment problem solved…