Topic

I was curious as to how any of you currently score your survey questions. At one point in time, before we understood how scoring could be important, we made positive responses worth more and negative responses worth less (e.g., "Were we helpful?" No = 1 point, Yes = 2 points).

Since then, we have completely turned this on its head and developed the concept where NEGATIVE feedback is scored HIGHER. For example, one question asking about specific categories our customers may have had issues in has each individual answer choice worth 50 points. This allows us to pull only surveys with very high scores and review them for negative feedback and ways we can improve. We have a "Win Back" team that specifically reaches out to those customers who have given negative surveys to resolve these issues.

I'm not sure this is exactly industry standard, but thus far it has worked for our purposes.

Answer

We answered this in our expert seminar and I will touch on the main points here as well.

First of all, the scoring should go hand-in-hand with the survey methodology (e.g., if using a Net Promoter type of question, then the scoring should be based on a 0-10 scale). This will allow you to compare your results to published industry standards.

To dig deeper, I understand the thought process around making negative scores higher so that the ‘Win Back’ team can easily identify these for follow-up. However, it seems like this will cause issues whenever anyone outside of the process needs to understand survey results. Anytime the results need to be communicated on an executive level they would need to be accompanied by a caveat explaining why high scores are very negative. This may also cause an unnecessary learning curve for new employees or internal transfers as it seems counter-intuitive.

There should be enough functionality in the product to keep a normal scoring system and still recognize low (negative) scores. A few options include the following:

Create custom reports that sort on score ascending. This will ensure that all low scores are shown first when running the reports.

Utilize exceptions in reports that specifically target a score below some threshold. This combined with scheduled reports that run regularly should point out the negative responses that require follow-up.

Have follow-up processes built into the survey itself. In advanced mode, it is possible to branch based on the response to any question in the survey and then create incidents or notifications which can be assigned to the ‘Win Back’ team. This automates the process ensuring that no human needs to notice the negative score in a report.
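To make the first two options concrete, here is a generic sketch (my own illustration, not the product's report builder): sort the results ascending so the lowest scores surface first, and flag any score below a follow-up threshold, just as a scheduled exception report would.

```python
# Hypothetical survey results; "score" is the total survey score.
surveys = [
    {"id": 101, "score": 9},
    {"id": 102, "score": 3},
    {"id": 103, "score": 7},
]

# Option 1: sort ascending so negative responses appear first in the report.
by_score = sorted(surveys, key=lambda s: s["score"])

# Option 2: an exception rule targeting scores below some threshold.
FOLLOW_UP_THRESHOLD = 5
needs_follow_up = [s for s in surveys if s["score"] < FOLLOW_UP_THRESHOLD]

print([s["id"] for s in by_score])         # [102, 103, 101]
print([s["id"] for s in needs_follow_up])  # [102]
```

Either way, the 'Win Back' team sees the negative responses first without the scoring scale itself being inverted.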

I know this post is pretty old, but I figured I'd present our strategy and hope to get some feedback as well.

We currently score our satisfaction responses in the following manner:

Extremely Satisfied = 5

Somewhat Satisfied = 4

Neither Satisfied nor Dissatisfied = 3

Somewhat Dissatisfied = 2

Extremely Dissatisfied = 1

Essentially, we tally up the scores with the weights above, figure out the average out of a perfect '5', then convert that to a percentage of overall satisfaction. However, there are some major flaws with this methodology:

1. If someone scores a 1 on all aspects, this is still 20% overall - this may be okay from a Pass/Fail or Successful/Unsuccessful perspective, but now your total scores would only effectively range from 20%-100%.

2. If we changed our score values from 1-5 to 0-4, then in my mind, the mapped percentages for each of the scores would then fall similar to this:

0=0%

1=25%

2=50%

3=75%

4=100%
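Both flaws are easy to demonstrate with a quick sketch (my own illustration, not part of any survey product): the original 1-5 method has a floor of 20%, while the rescaled 0-4 method fixes the floor but maps the neutral answer to 50%.

```python
def percent_1_to_5(scores):
    """Original method: average of 1-5 responses as a fraction of 5."""
    return sum(scores) / len(scores) / 5 * 100

def percent_0_to_4(scores):
    """Rescaled method: shift 1-5 responses down to 0-4, then divide by 4."""
    return sum(s - 1 for s in scores) / len(scores) / 4 * 100

worst = [1, 1, 1, 1]               # all "Extremely Dissatisfied"
print(percent_1_to_5(worst))       # 20.0 -- flaw 1: the floor is 20%, not 0%
print(percent_0_to_4(worst))       # 0.0  -- rescaling fixes the floor...

neutral = [3, 3, 3, 3]             # all "Neither Satisfied nor Dissatisfied"
print(percent_0_to_4(neutral))     # 50.0 -- flaw 2: neutral now reads as 50%
```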

So looking at this model, say we get all 2's in our surveys throughout the month (Neither Satisfied nor Dissatisfied); then our CSAT for the month would be 50%. Now when we report this back up to our Senior Execs, they will look at the score and think 'COMPLETELY FAILED', as we have certain goals at 90% CSAT and they most likely will not understand our scoring methodology behind the scenes. They will most likely interpret this as a school grade:

A = 90%, B = 80%, and eventually to <60% = F.

Therefore, I'm not sure it's appropriate to score “Neither Satisfied nor Dissatisfied” as a 50%. If we were to follow the Net Promoter strategy, I'm not sure how we'd present the NPS score in a meaningful way or how to relate that to our 90% CSAT goals.
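For reference, the standard Net Promoter calculation (this is the published NPS definition, not anything specific to this product) uses a 0-10 "likelihood to recommend" question and reports promoters minus detractors, which is one reason it doesn't translate directly into a 0-100% CSAT goal:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    The result ranges from -100 to +100, not 0% to 100%.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return (promoters - detractors) / len(ratings) * 100

ratings = [10, 9, 8, 7, 6, 3]  # 2 promoters, 2 passives, 2 detractors
print(nps(ratings))            # 0.0 -- promoters and detractors cancel out
```

Note that passives (7-8) count in the denominator but not the numerator, so an NPS of 0 can still describe a mostly satisfied customer base, which is exactly why it's hard to relate to a 90% CSAT target.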

I know that this post is a few years old, but I think it's still a relevant issue. I'm trying to create a new survey focused on our IVR system. We are using a few matrix questions to determine which self-service options are helpful and not helpful to our customers. However, not all customers use all of our products so we have "N/A" as a possible option. I don't want the "N/A" to help or hurt our survey scores, how do I account for that when I set up the scores for each possible response?
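One common approach to the N/A problem (an assumption on my part, not a documented product feature) is to treat "N/A" as missing data and exclude it from both the numerator and the denominator, so it neither helps nor hurts the average:

```python
NA = None  # sentinel for an "N/A" response

def csat_percent_ignoring_na(scores, max_score=5):
    """Average only the answered responses, as a percent of max_score."""
    answered = [s for s in scores if s is not NA]
    if not answered:
        return None  # every response was N/A; nothing to score
    return sum(answered) / len(answered) / max_score * 100

responses = [5, 4, NA, 5, NA]
print(csat_percent_ignoring_na(responses))  # ~93.3 -- the two N/As are ignored
```

In a product that only supports per-choice point values, the rough equivalent is giving N/A a score of zero AND keeping it out of the response count used for the average, since zero-scoring N/A alone would still drag the average down.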

I initially tried to set up the negative responses with a negative number (-2 for Not helpful at all and -1 for Not very helpful), but I can't set up a negative number in the questions explorer. I'm using the May 2013 version - but we are upgrading to the February 2015 version on Wednesday night of next week.

I guess I have two questions:

1. Can I set up a negative score (e.g. -2) for an answer response in the February 2015 version?

The product still does not allow one to put in a negative value for a question choice.

However, there is a workaround that I didn't mention in my previous post that could be used by those who want negative values. Using the 'Set Field' element on an advanced survey, one can manipulate the survey score, and yes, this includes decrementing the score to a negative value. So, you could effectively do the scoring directly in the survey flow itself by incrementing/decrementing the score based on the response selected (and just leave the choice scores all set to 0).

The problem is that it gets pretty intense depending on how many questions you are scoring in the survey. In my example (see screenshot), I set up a one-question survey with three choices (Bad, Neutral, Good). My submit path goes to a case statement so that I can take a different path depending on the choice. You can see that 'No Value' and 'Neutral' both go straight to the Redirect to URL (Thank you page) as they do not affect the score in this example. The 'Bad' path takes you to a Set Field which decrements the survey score by 1, and the 'Good' path takes you to a Set Field which increments it by 1. So, the possible survey scores for this question in this example are -1, 0, and 1.
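As a rough model of that flow (my own sketch in Python, not the product's survey designer), the case statement amounts to a per-choice score delta, with every choice score left at 0:

```python
# Score deltas applied by the Set Field elements in the branching flow.
SCORE_DELTAS = {
    "Bad": -1,     # Set Field: decrement survey score
    "Neutral": 0,  # goes straight to the thank-you page
    "Good": +1,    # Set Field: increment survey score
    None: 0,       # "No Value" -- the question was skipped
}

def survey_score(choices):
    """Total survey score after applying each choice's delta."""
    return sum(SCORE_DELTAS[c] for c in choices)

print(survey_score(["Bad"]))          # -1
print(survey_score(["Good", "Bad"]))  # 0
```

Each scored question multiplies the branches in the flow, which is exactly why this gets unwieldy fast.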

As I alluded to, I think this would turn into a major hassle fairly quickly, so it is more of an FYI than a recommendation. As for the question of how to score an 'N/A', it is probably best to hear from other customers on that.