Abroad101 has many reviews from students. These study abroad reviews are meant for study abroad students, their schools, and future students to analyze the programs attended, but also to help students reflect on and understand what they learned from their time abroad. Future students and parents find reviews very helpful when deciding which program to pick for an upcoming study abroad trip. With this in mind, the quality and variety of the review comments and ratings become very important.

Our guest blogger, Missy Gluckmann, wrote an article about this concept. It is reposted for you here.

Education Abroad Evaluations: Why Imperfection is Ideal

The culture of higher education is to want to measure. We want to know not only how people have been changed by education abroad experiences, but how they FEEL about them too. Assessment has grown exponentially in our field and we sometimes get stuck on the metrics over the human experience – at least that is my opinion. But this post is about evaluations, so let’s talk about them.

I recently had a conversation with Mark Shay, CEO of Abroad101.com, about evaluations and what they really mean in our field these days. Let me back up though and share some history; Mark and I have worked in international education since the 1990s. While we did not work “directly” together on a specific project back then, I clearly remember the early days of his educational entrepreneur ventures at studyabroad.com. Back in the day (wait, am I really old enough to say that – yup, apparently I am!), this was “the tool” for finding education abroad program information. Today, he is leading Abroad101.com which is described as follows:

“With more than 20,000 reviews of 3,800 programs, Abroad101 is the leading student review site for study abroad programs. It provides a free service for universities and students to rate, review and rank student experiences in study abroad programs.”

It is a timely business idea, as education abroad offices are tasked with creating evaluations, measuring “success” and increasingly looking to student satisfaction in this game of extreme competition that has evolved in higher education and sadly, has trickled down to the education abroad world. Ironically, small education abroad offices often don’t have much time to think about evaluations, and those that do often don’t have much time to make meaning of the data, or they simply use it to pull “sound bites” for marketing.

The “bigger players” tend to evaluate, sometimes not allowing students to receive their credits until a mandatory evaluation is completed. I find this to be a challenge, as I believe that time to properly reflect on an experience abroad is more important than keeping to a timetable. Withholding grades, in my opinion, borders on unethical.

Mark and I had a chat about the direction of his company’s website now that he is “officially” on board as CEO. We got into an important and sticky topic – why do so many programs come up so very high on ratings? Why are there so many “perfect” ratings in education abroad reviews?

We know that education abroad experiences, like many things in life, cannot reach perfection or come close to it. The lack of funding to train faculty to encourage participants to do what Dr. Anthony Ogden describes as “getting students off the veranda” in this piece is evidenced by the countless “island” experiences that we witness on faculty-led academic courses abroad. We see comments by students about “how transformed” they are, yet how does a perfect five star rating illustrate how a program’s design has positively impacted that young mind and the abroad experience? This is where the rating (“five out of five stars”) of a program can diverge from the actual review of the details of a program – and the “devil is in the details”!

What is “perfection” in education abroad? Can we truly be “perfect” at anything in life? Does a “perfect” score mean we are doing all that we can to create an optimal learning experience for young minds? Or does it mean we are not setting a high enough bar or that students don’t have high expectations – or even enough life experience to have realistic expectations across cultures? Does a lack of maturity prevent students from providing deep and thoughtful critical analysis?

More precarious still, there may be pressure from program providers and administrators to keep the scores “high.” Some education abroad departments are embracing the competitive philosophy of higher education admissions, transferring the demand to perform onto students going through re-entry by offering them an incentive (bookstore bucks, access to academic records, etc.) to complete a positive evaluation.

Another challenge is that students haven’t been given much guidance in HOW to actually review a program so that information can be gleaned that will actually help improve the program design. I recently wrote a guest blog post for abroad101.com about this subject.

During our chat, Mark and I quickly agreed that giving (or coaching your students to give) a “perfect” score on a program is not useful to those who are researching future participation in an experience abroad. Giving someone a score of five on a site like abroad101.com is not like jumping on a site like yelp.com and letting someone know that you thought a particular meal in a new restaurant was “spot on” and to your liking on a given day. Education abroad is so much more complex than one random stop for brunch. It requires that we are critical and thoughtful in our feedback and that we specifically disclose the imperfections to provide accountability and necessary education for others who are considering these programs.

Let’s consider an example: A student in Rome completed an evaluation and rated the program “five stars” (a perfect score), yet she indicated that she would not return abroad with the same program. Reading her evaluation further, one finds that she raised concerns about program administration, yet still rated the program with a perfect score.

Confused? So am I.

My guess is that someone in her study abroad office failed to take her to task on the incongruous feedback. It would have been so much more helpful if she had actually taken her comments on the program administration and carefully weighed them against the scale, providing a more realistic overall rating score, perhaps a three out of five. This would then prompt future readers to consider the program weaknesses, which really are opportunities to address issues and improve processes and outcomes. These types of scores are the ones that create REAL dialogue about program design and delivery of service to an academic sojourner. They also open up the door for discussion about partnership between universities and third party providers (who I prefer to call third party partners – as that is what the relationship should be based on – partnership…but I digress).

I have seen a similar example for a third party provider in Tanzania. The student rates the program a five yet consistently ranks it lower in all subcategories, including a “one star” on the academic experience with this commentary:

“The academics were overall pretty terrible, whenever the professors did actually show up they usually did not even teach material that was relevant.” Regarding housing, she says “The dorms are pretty terrible and my roommate didn’t even live here so I was by myself. The safety on campus is not great and you can’t walk alone after dark. It’s about a 15 minute walk to class and a 30 minute bus ride to nightlife. There is also no access to kitchens….and that was a big problem.”

One has to ask how THAT evaluation was overlooked by the home university and permitted to land on the site in such condition.

A score of five indicates to others that a program is stellar. Perfect. Wouldn’t change a thing. Except that in some cases, there IS much that needs to change. Sadly, we miss the opportunity to get to the fine detail about what those components in need of work are when we see imperfection as failure.

Had the students rated the programs with more accuracy (e.g. “not perfect”), it would encourage not only administrators but prospective students to more carefully consider the actual “review.” A score in the “high fours” is much more revealing to someone seeking information on a program. It offers a positive endorsement of the overall program yet provides specific details of what to expect, what could change to improve the experience, and what to consider when making the important decision of whom to go abroad with. Its value is priceless when compared to the ubiquitous “five,” as it actually provides insight into the little nuances that are so important (e.g. what to pack that was not mentioned in pre-departure materials/orientation) and the bigger ticket items (e.g. feedback on where a provider can adjust the budget to allow students to engage in activities more consistently throughout a period abroad, vs. blowing all of their money during the first two weeks).

I realize that someone who rates a five may be someone who feels they had a positive, “life changing” overall experience abroad and wants to communicate that to the masses, but as Katy Rosenbaum from the Melibee team stated so eloquently, “I think it’s safe to say that a great time does not necessarily equal a great program.” Frankly, we all know of less than immersive education abroad programs that are highly rated.

Perhaps this quote by Iain Thomas sums it up best when it comes to education abroad evals:

“But life isn’t something that should be edited. Life shouldn’t be cut. The only way you’ll ever discover what it truly means to be alive and human is by sharing the full experience of what it means to be human and each blemish and freckle that comes with it.”

Give me imperfection over a perfect score any day. It is those blemishes and freckles that will inform and make the evolving world of free evaluation services in education abroad a truly meaningful tool for all of us.