As we look to upgrade from 4.5.6 to 5.0, we would like to include internal usability testing. Wondering if anyone has done this with their own internal employees who will ultimately use the system and, if so, how was the test structured? I know we should include both experienced and non-experienced users alike. I'm thinking it makes sense to create a script of tasks we want them to complete, and have non-biased observers recording user actions, confusion, frustration, etc.

The devil's advocate side of me says, "We didn't do usability testing on our email system, so why do it here?"

We actually did this in part of the company about six months after we launched our Jive platform. This was back in the days of 3.0. And yes, we had a script. We would lay out a task, ask them to complete it, then shut up and watch. We recorded sessions (but never actually used the recordings), and made sure to time each response. For example, you would say, "You remember John Smith just wrote an article that is relevant to a project you are starting. How would you find John's most recent posts?" And then you time how long it takes them to reach the goal.

It is important to make sure there is a defined end point to the task. This means you need to do some prep to see what exists and what you might need to create to make the scenarios make sense.

Not really convinced the observers need to be unbiased, just that they actually record what they witness and don't provide help to the people being observed. In fact, I think it is good for people who "know" the answers to watch. There were things that surprised all of us.

The reasons we found that it made sense:

Jive is much more complicated than email. If you had/have an intranet, you might have considered doing usability testing there. And Jive is more complex than an intranet, so why not?

We uncovered areas that were common problems. It allowed us to figure out where to focus training and documentation efforts. We also implemented a "Tip of the Day" that was publicized to staff via other mediums, so we could be more certain of reaching the audience with helpful tidbits.

We are also currently gearing up for an upgrade from 4.5 to 5. We've done user testing (and are still in the process) in two ways:

First: We interviewed a group of power users and, similar to what Tracy mentioned above, asked them to show us specific actions and interactions in the community and recorded the sessions. The results from these interviews helped us determine how people are using the community in their day-to-day work, and also allowed us to watch how they navigated to their content. One challenge we faced was getting a wide range of users: our power users were of course on board (they are power users for a reason), while the folks who are less active in the community or have less experience were less willing to participate.

The feedback from this has helped us to determine gaps in our previous training (which would have been when we launched a year ago with 4.5) and is helping us to shape our training for v5, and map out any potential UI changes, enhancements, apps, etc. to invest in as we move to v5. It was also encouraging that many of the "barriers" or "issues" called out in these interviews are items that the functions of v5 actually address!

-Note: as we are an advertising agency, we enlisted the help of one of our UI experts to structure and conduct the interviews as well as give recommendations.

Second: We have a QA version of our community in v5 that we've opened up to our power users to play around in and test out the functionality of the new version. We initially had plans of having our UAT folks test out specific functions in the QA environment, to keep the testing structured and gather feedback. But we have found that keeping these users engaged in testing is challenging for a couple of reasons: 1. our QA environment does not contain live content, and 2. our testers are doing this on top of their regular day jobs, which take priority. We have largely abandoned our "structured" testing plan, and our testers are still giving good feedback as they test when they can, on whatever interests them most.

We are currently trying to re-brand this UAT as more of a "Sneak Peek" of the new version to get some more interest, and invite more people to test. If anyone has had success in running a testing environment like this or tips for keeping the testers engaged I'd love to hear it!

Great guidance, Erin, and thank you! Interesting point on power users vs. non. I'd like to get many of those with little to no experience in our current instance to assist us. To your point, it could be a bit of a challenge. Your second point is dead on. Anyone will be reluctant to invest time "testing" unless it has direct impact on their own day-to-day work in getting things done. Ideally, if both versions could co-exist, perhaps that challenge would be alleviated. I like the "Sneak Peek" thought. Makes me think of setting up a "kiosk" of sorts in a high-traffic area that runs through a video loop of the new UAT. Hmmmm.

Thanks so much, Tracy. Very helpful. I believe if we communicate its purpose properly, i.e. for training modules, documentation, Tips... as well as record how long tasks take and what the stumbling blocks are, this will be successful. I think the challenge will be when we get questions like "Why do we have to do it that way?" or "Why can't you just change the colors?" Don't want to get bogged down in individual requests for changes that we ultimately can't control. Picking the right audience will help with that.

We made it clear that we were testing across a wide audience and looking to see what challenges were and see where there might be opportunities. We honestly didn't get a lot of negative feedback from the participants, and based on what we saw, we were able to give some of them really quick, easy tips to improve their experience right then and there. I also suggest making sure you select people from different departments and different experience levels.

Congratulations, Scott - testing your site with users as part of this upgrade process is a great idea.

My high level recommendations to you are:

Try to shoot for 5-8 users in each category that makes sense based on your community. Examples: Community Managers and Admins (you may not have 5-8 of these, but as many as you can get up to that range), New users, Existing users.

Create tasks for each group based on (if you are able) user interviews, or based on what you know about your user base and existing permissions.

Keep the tasks short and to the point, and avoid Jive-specific terminology whenever possible. User flows that come up in daily business don't always have the added bonus of a "content type." Examples:

"Add your meeting notes from this morning's meeting into the system." vs. "Create a document and enter your meeting notes in the Project Managers group." This lets you watch their thought process and can also inform your information architecture.

"Ask your peers if anyone has ever <something that makes sense for your community>." vs. "Start a discussion to find out if..."

Don't feel compelled to answer or respond to questions like "Can we change the colors of the site?" When those things come up, the usability proctor can simply thank the user and offer to share their comments and feature requests with the people who make the decisions.

I've done the quick lunchroom testing before (kiosk-style as you mention above) and we lured people in by offering something along the lines of "Tell us what you think, wear jeans on Friday." We worked out a deal with management to provide special (and rare) jeans-wearing passes for their time. It worked well!