
Thanks for the offer, but the project manager has already used your information plus other resources she found to develop a prototype. We are currently in the process of testing it and making revisions.

Thanks again for your help – we used your advice to guide some of the design decisions.

Other than the questions themselves, I feel it is very important to also look at how you ask your questions and how you allow your users to answer.

Personally, I went with a Likert scale using the following ratings: N/A (default), Strongly Disagree, Disagree, Agree, Strongly Agree.

My group spent a few hours debating this topic, so you should weigh the pros and cons of using "Strongly" versus "Slightly," as well as whether to include a Neutral option. We wanted our users to specifically agree or disagree, with no neutral middle ground. We also made N/A the default selection so that if someone doesn't want to answer a statement, or it doesn't apply to them, we don't capture a forced answer.
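The scale above could be sketched as a small data structure. This is just my illustration, assuming a hypothetical 1–4 scoring convention (the option names come from the scale above; the numbers and function names are mine):

```python
# Hypothetical scoring for the four forced-choice options (no Neutral).
SCORES = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Agree": 3,
    "Strongly Agree": 4,
}
DEFAULT = "N/A"  # default selection; intentionally carries no score


def score(response):
    """Return the numeric score for a response, or None for N/A / unanswered."""
    return SCORES.get(response)


# Averaging a statement's responses while skipping N/A, so non-answers
# and not-applicable statements don't skew the result:
responses = ["Agree", DEFAULT, "Strongly Agree", "Disagree"]
usable = [score(r) for r in responses if score(r) is not None]
average = sum(usable) / len(usable)  # only the three real answers count
```

Keeping N/A out of the scoring map (rather than giving it a 0) is what makes the "default means no answer" choice work: an untouched item simply drops out of the average.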

We also discussed as a group whether the feedback would be mandatory for all users. We chose not to make it mandatory because we want organic, clean data: if we make the feedback process easy, we will gather more of it.

One other topic we debated was how we phrased the questions. We decided to go with first-person statements that the user can rate, such as, "I was comfortable with the pace of the course." The user can then think about things from a first-person point of view, which makes the statements more personal to them.

When I made my questions, I made sure to:

keep them short and sweet

tie a very specific action item to each statement

keep them consistent across all the training modules I am evaluating

Well, I tried to be as descriptive as possible. Use that first link I gave you to help come up with questions that match your training environment and needs.