I'm Andy Warren, currently a SQL Server trainer with End to End Training. Over the past few years I've been a developer, DBA, and IT Director. I was one of the original founders of SQLServerCentral.com and helped grow that community from zero to about 300k members before deciding to move on to other ventures.

Session evaluations are one real benefit that speakers derive from participating in community events - a chance to see how they did and maybe even get some ideas on how to improve. The challenge is that most attendees tend to fill out the eval only if the session was really good or really bad. We tried to fix that this year by asking the questions in a different way, and also by treating each completed eval as a raffle ticket in the end-of-session drawing for a couple of books.

Here's the form we used:

And thanks to my friend and volunteer Mike Antonovich, we've finally got the compiled results - here's a sample of the output:

Full results can be downloaded directly from http://www.sqlsaturday.com/files/SQLSat8EvaluationResultsBySession.xps. If you look, you'll see one speaker with especially bad results, and it wasn't his fault; we couldn't get the projector to work with his laptop (they had weird VGA connectors) and an attempt to use a loaner didn't work out. Aside from that, the results are pretty good!

Is it enough feedback? The right feedback? I'm open to discussion but I think this is decent.

One of the things I hope to add for next time is a short video that reviews what we're looking for in evals and why they matter. I've put off adding web support for capturing and reporting on this data until we could lock in some type of standard. Ideally we would collect these at the end of each session and then have a couple of volunteers key the data while the event was in progress. Direct capture would be nice, but trying to set up a kiosk machine just adds complexity and a potential bottleneck. I've considered some type of cell phone solution, and maybe that's worth a look. I lean toward analog and old-fashioned because it reduces the complexity and hopefully leverages the volunteers.
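As a rough illustration of the "volunteers key the data, then compile" step, here's a minimal sketch of what the compilation might look like. The session names, speaker names, and the 1-5 score scale are my own assumptions for the example, not the actual eval form fields:

```python
from collections import defaultdict
from statistics import mean

def compile_evals(rows):
    """Group keyed eval scores by (session, speaker) and compute averages.

    Each row is a (session, speaker, score) tuple; scores are assumed
    to be on a 1-5 scale, with 5 the best.
    """
    by_session = defaultdict(list)
    for session, speaker, score in rows:
        by_session[(session, speaker)].append(int(score))
    return {
        key: {"count": len(scores), "avg": round(mean(scores), 2)}
        for key, scores in by_session.items()
    }

# Hypothetical evals keyed in by volunteers during the event
rows = [
    ("Intro to SSIS", "A. Speaker", 5),
    ("Intro to SSIS", "A. Speaker", 4),
    ("Indexing 101", "B. Speaker", 3),
]
print(compile_evals(rows))
```

Something this simple would let a couple of volunteers key paper forms into a spreadsheet during the event and have per-session summaries ready before the closing raffle.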

Comments

Posted by John Magnabosco on 24 November 2008

We have used evals for past events and having them as the "ticket" for a drawing is very effective in getting them filled out.

One challenge is that if the person places their name on the eval, you may not get very accurate feedback. No one wants to be known as the guy who gave a speaker a 1 (poor) rating. My thought is that a number could be assigned to the eval, with a tear-off ticket that matches.

The other challenge is the collection of the eval data into electronic format. Manually perusing written responses is quite time consuming. My thought is to consider the old "fill in the circle" approach, with the forms scanned through some OCR software.

Posted by Andy Warren on 24 November 2008

I know putting the name on them does increase the chance of someone not giving us a true eval, but there's always the option to leave your name off and still get the feedback in. The tear-off is doable with a little extra expense. I skipped it this year because we had confusion last year over which end to keep!

OCR is interesting. I'm not sure how well it would do on comments; I guess we could possibly just show them the scan minus the contact info.