The most important thing to remember when helping develop and improve the breadth, depth and range of presentations is that we all started someplace. At every meeting of my user group, the Ohio North SQL Server Users Group, I share what others call my "spiel". I share it at every meeting of other user groups I attend, and in every presentation I give, whether at a SQL Saturday, the PASS Summit or any other event where I've been invited to speak. Here's what I say:

There isn't a person in this room who doesn't have some knowledge that we can all learn from. In other words, every one of you has something that I can learn from, but the only way that can happen is if you get up here and share it with the rest of us. It does two things. One, we get to learn from you. Two, you get to learn more about something you're already passionate about. You have to know more about something to present it than to just do it every day. By sharing it with us, we learn from you and you learn it better.

Now I don't mean for someone to get up the very first time and expect to be at a level that's ready for a major conference. That takes experience. That takes understanding that someone in the audience isn't really interested in your topic, and it's OK if they get up and leave. That also takes understanding that someone in the audience wants to prove that they know more than you know about your subject. I've seen this happen to both new speakers and to very experienced ones. Those of us who have been on the speaking circuit for a while have dealt with those people, and I encourage this group to help the newbie by letting the offending audience member know that their comments can wait until after the presentation is over. (There's no "good" way to handle this kind of heckler, and it's best to get them to shut up or leave.)

I like Erin's idea about a "buddy" system, to help each other out. It allows us to provide new speakers the kind of feedback they won't get on an evaluation form, and it provides moral support. I feel extremely proud that five people from our user group in Cleveland will be presenting at this year's PASS Summit, including both Erin and me. I think this stems from my "spiel" and the supportive approach we take during user group meetings where new speakers present.

Brent has some good points about the PASS Summit requiring the best speakers. The rating system in place isn't objective enough for ratings to be used exclusively, though. Speakers often get bad ratings because of things out of the speaker's control, things like the temperature of the room, random disturbances outside the room, poor audio or video projection systems, etc. There also doesn't seem to be a way to let attendees know what to expect, and even when one exists, attendees often pay little attention to prerequisites or session goals. Everyone has their own agenda, and that agenda becomes the criterion by which the speaker is rated. I don't know how to fix this, but it deserves some attention.

Most importantly, while we need to see the speakers we know will "deliver the goods", we also need fresh faces and new ideas. My "spiel" is my way of encouraging new speakers, and I think we're successful. SQL Saturdays offer a great avenue for new and experienced speakers to learn from each other. I ask my experienced colleagues to lend a hand and help new people wherever possible, and attend their sessions, even if it's a topic that you already know thoroughly. (I once attended a "Basic T-SQL Backup" session by my friend and SQL Server MVP/MCM Sean McCown and learned things about backup I hadn't known, after using backup for 20 years.) By attending these sessions you provide support to the new speaker, you can intervene in the case of a negative attendee scenario, and you also just might learn something.

Comments

"Speakers often get bad ratings because of things out of the speaker's control, things like the temperature of the room, the random disturbances outside the room, poor audio or video projection systems, etc."

That's actually really easy to solve if you get a psychometrician involved in the survey question design. It's a common issue in surveys - you need one question designed to gauge satisfaction with the environment, another about the presentation content, and another about the presentation delivery, for example. There's a whole science around how you design questions to get the right feedback. I totally agree that the current question set doesn't work, and it clearly wasn't designed by a psychometrician (or even by someone with experience working with one).

Allen, completely agree - we are all in this together! However, we also really have to rethink how feedback is given to a speaker (no matter their experience). I say this because I'm tired of seeing feedback forms (or online portals) that follow the template "Rate this speaker from 1 to 10" or "From 1 to 10, how would you rate the speaker's presentation skills?". Seriously, how can that be valuable feedback? (OK, it can be for some specific cases and purposes, but it's not optimal.) As speakers we don't care about numbers (or at least we shouldn't, unless they're presented in a specific context). We care about what was good, what wasn't, and what could have been improved - and we want someone to tell us that! Being rated 6/10 on one of those forms is just nonsense, and yet that's what happens at almost every event out there...

Brent, thanks. I've also gotten dinged at a SQL Saturday because the speaker originally scheduled for the time slot cancelled, I was asked to fill in, and an attendee rated me poorly because I wasn't the original speaker. Those kinds of issues, plus ratings based on expectations that aren't aligned with session goals, are still going to be problematic.

Boris, like it or not, conferences need numerical rating systems to rank speaker feedback. It doesn't do a lot for the speaker, but it allows the event to rank the sessions. Microsoft lives, eats and breathes by the numerical ratings at their Tech Ed conferences, and the various platform areas are pitted against each other based on these ratings. They're not going away. That said, I encourage attendees in every session I present to give me comments because, as I tell them, "I can't change something based on a number, but I can change something if you tell me about how it doesn't work for you".

Good stuff, and I totally agree. If I can speak (a socially awkward person with serious stage fright; and I was a LOT worse 13 years ago!), anyone can. I was kind of forced into it by a manager of mine who had us all write abstracts and submit them for CA World one year. I got accepted to speak about ERWin macros. As much as I hated it, I also really liked how much it pushed me to learn, so I submitted for PASS in London, and up until this year I spoke every year (I didn't submit this year, so it was my fault :))

>>Brent has some good points about the PASS Summit requiring the best speakers. <<

I agree, but PASS also needs the best variety of topics. If we have 100 great speakers talking about indexing, no one gets any information about the new stuff, old stuff, or the really old stuff (like database design, for an obvious example that I care about.)

And that is a tough problem, because with SQL Saturday and user groups, we tend to need lots of the DBA/Tuning sessions, but as you narrow things down to a 3 day conference, we squeeze the pool of good speakers because of topic choice.

I know I have survived as a speaker often because I have chosen (quite by accident) topics that aren't the most sexy, but are useful.

Yeah, it is a terrible necessity. But I wonder how often people really think about each question. Do people really differentiate between a good speaker and a bad presentation? Or do they go +/- 1 or so down the line? Last year, there was a question on the survey where the bad answer was the opposite of the other questions, and MANY just put the worst answer because they weren't paying attention.

I have often wished we had volunteers who graded sessions, in addition to the regular attendees. They could judge things like the speaker's management of questions, time, demos, slides, and other fine-grained details. They could also judge more than just the speaker. Were the attendees all asleep? Were they unruly? Was the room suited to them? Was the sound terrible? Was it a session that needed interaction while everyone was on their phone? A great presentation is sometimes not the most entertaining; perhaps a few people were following along and getting tremendous information, but everyone else wanted a laser light show.

Allen, yes, exactly. I always say the same to my audience, and I am quite sure that numbers are not going away, for one reason or another. The point is that we have to make sure that everyone we mentor or help become a speaker is aware that "numbers" will not always be their "best friend", and that they must always push in the "let me know your real opinion" direction. I think that's what really makes a speaker advance, and I can easily remember how my first presentations (during a course I was taking) were met with some serious negative feedback. If it weren't for that feedback back then, I'm not sure where I would have gone with my presentation skills. And indeed, that feedback didn't consist of any points or numbers...

I've attended one SQL Saturday and about 4 user group meetings. I have been in the habit of giving the speaker top grades, just because I think that it's awesome that people are willing to get up and share their knowledge.

Maybe when the awesome-ness of this community wears off, I can be more critical!

Nice article! I tried to promote this at my annual end-of-the-year @SFSSUG party, and made it into a contest called "Speaker Idol" :)

I agree with you, Allen, and without knowing it, I've also been encouraging members to share, because we are all experts in one area or another, and we can all benefit from each other's feedback and expertise.

In Columbus, we are considering a modification to our meeting structure to allow for a 10-minute lightning talk in addition to the feature presentation. It is our hope that the shorter time slot will encourage some new presenters to come forward and build confidence. Additionally, we're hoping to give the lightning talk a different target audience than the feature - for example, a DBA feature presentation might be paired with a BI or Dev lightning talk.

October 22, 2014 11:22 AM

About AllenMWhite

Allen White is a consultant and mentor for Upsearch Technology Services in Northeast Ohio. He has worked as a Database Administrator, Architect and Developer for over 30 years, supporting both the Sybase and Microsoft SQL Server platforms over that period.