Reviewing US News Release of Honor Rolls of Online Education

The moment has arrived. The ‘mother of all rankings,’ also known as the US News and World Report rankings, have birthed their newest baby: a list of the “Top Online Programs.” Rather than an overall institutional ranking, they have created “Honor Rolls” of the top programs in bachelor’s degrees and in graduate degrees in business, education, engineering, information technology, and nursing.

We have followed this idea from when it was first announced last summer. At first glance, many of our initial reservations are confirmed, we found a few things that we like, and we see a new concern. We’ll give you some background and our initial reactions.

WCET’s History with the US News Rankings

In July, WCET hosted a webcast with Bob Morse and Eric Brooks from US News and World Report to try to better understand how these rankings would work. We came away almost as empty-handed as we went into the conversation. The one nugget we carried out was that, while the methodology for determining the rankings had yet to be set, data was being collected and this first survey and ranking would inform the methodology for future rankings. In early September, we followed up with a blog post to share our impressions, as well as those of some of our higher education friends. To say the least, we were not hopeful at that time that the process would produce useful results.

One of our members, Capella University’s President Deborah Bushway, wrote an article for the Huffington Post about why Capella would not be participating in the US News Rankings. We heard from several other members that they would not be participating for many of the same reasons Dr. Bushway cites – the focus on inputs rather than outputs and the lack of a ranking methodology.

What We Observe Now that the Rankings are Released

Judging online programs like produce at a county fair…appearance over substance.

The rankings were just released, so we will give you our initial reaction to what has been published:

‘Honor Roll’ Not Rankings. We were glad to see that they used an ‘Honor Roll’ for the top programs in each section. Given the imprecise science of rankings, this makes more sense than an absolute ordinal scale. Even so, they went ahead and ranked within each subcategory.

Small Number of Usable Surveys. US News sent out nearly 2,000 surveys and received 969 responses. Yet, few responses were usable. The Online Bachelor’s Degree rankings had the most responses (194), and not even all of those could be used. Many institutions appear in the final rankings even though they were not able to respond to all of the subcategories.

Overall Methodology is Still Questionable. The small number of usable surveys speaks to the scattershot methodology used by US News. We would have thought that they would have engaged experts to develop questions and then pilot tested the survey on a sample of institutions. Instead, they seemed to develop questions and assign point values based on “interviews with decision makers in high-enrollment online bachelor’s degree programs, online education literature reviews, and pre-existing U.S. News ranking practices.” They did interview people at distance education programs, but that is far different from fully engaging people cognizant of online education practices in developing the questions. As a result, this survey became a large-scale pilot test with questions that many institutions could not or would not answer.

Questions Focused on Inputs, Not Outcomes. The survey is almost completely focused on inputs to the educational experience. With the focus on outcomes by accreditors, the Department of Education, and institutions, this seems like a major problem.

Scattershot Question Asking. From the methodology: “Once the survey deadline passed, U.S. News analyzed the quantity and quality of data collected to determine which questions could be used for rankings.” Instead of focusing on a few pre-tested questions that would lead to assessing quality, the survey was a smorgasbord of questions; only after the fact did they decide which ones to use. That’s a fairly disrespectful use of staff time at our nation’s colleges and universities. Also, institutional personnel could have suffered from ‘survey fatigue’ and skipped some questions, only to learn later that they had missed a crucial one.

Specific Questions Showed Lack of Knowledge about Online Education. There were several head-scratching examples of questions and weightings – places where the nuance of what actually happens on campus was not understood. Here are a couple of examples that we found in our short time analyzing the results:

The criterion “training required in online teaching best practices before instructors are allowed to teach” is scored so that an affirmative response receives the full 10 points. Presumably a negative response receives zero. This misses the wide array of possibilities in between. It is often hard to institute a faculty requirement such as this, and there are many effective non-required faculty development efforts. Is a cursory one-hour required training session worth 10 points, while an in-depth training experience that 80% of faculty complete is worth nothing?

Why does “Corresponding undergraduate programs ABET accredited” rate 20 out of 100 points in the “Student engagement and accreditation ranking”? It might be a valid criterion (or might not), but why is it paired with student engagement issues? Was there nowhere else to put it?

Under the Bachelor’s survey section on “Student engagement and assessment ranking,” the “Additional indicators (2)” criterion scores several aspects of faculty feedback availability and timing. This assumes a model that not all institutions are using.

Many have said that it is important to measure online learning, but it is essential to use the correct tool. A ruler is a valid measuring tool, but there are better options if you are trying to determine someone’s weight.

Honor Roll Weirdness. While we like that they use the Honor Roll, we found some oddities. The way onto this list is: “A program made the Top Online Bachelor’s Degrees Honor Roll if it was ranked in the top one third of all three indicators: faculty credentials and training, student engagement and assessment, and student services and technology.” Only 55 of the 194 responding institutions were able to be ranked in the “faculty credentials and training” category. The questions this raises:

If it was so difficult for institutions, was it a valid set of criteria?

Institutions were penalized or helped by the small number completing the faculty section. For example, Central Michigan University fell just beyond the top third in the faculty section, but was in the top third for the other two categories. If just a small handful of additional institutions had answered the faculty section to US News’ satisfaction, it would have made the Honor Roll. In essence, institutions were penalized by the vagaries of how many answered this section and the difficulty in answering it.
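The top-one-third rule quoted above, and the way a thin response pool shrinks one category’s top third, can be sketched in a few lines of Python. All institution names, ranks, and counts below are invented for illustration; the only rule taken from the US News methodology is that a program must sit in the top third of all three indicators.

```python
# Hypothetical sketch of the Honor Roll rule: a program qualifies only if it
# ranks in the top one third of ALL three indicators. Because only 55 schools
# were ranked on faculty credentials, that category's "top third" is roughly
# the top 18 -- far smaller than the top third of a 194-school category.

def honor_roll(ranks_by_school, ranked_counts):
    """ranks_by_school: {school: {indicator: rank}} with 1 = best.
    ranked_counts: {indicator: number of schools ranked on that indicator}."""
    qualifiers = []
    for school, ranks in ranks_by_school.items():
        # Must have a rank in every indicator AND be in its top third.
        if all(
            indicator in ranks and ranks[indicator] <= count / 3
            for indicator, count in ranked_counts.items()
        ):
            qualifiers.append(school)
    return qualifiers

counts = {"faculty": 55, "engagement": 194, "services": 194}
ranks = {
    "School A": {"faculty": 10, "engagement": 40, "services": 30},  # qualifies
    "School B": {"faculty": 19, "engagement": 12, "services": 8},   # 19 > 55/3: misses
    "School C": {"engagement": 5, "services": 5},                   # unranked on faculty
}
print(honor_roll(ranks, counts))  # -> ['School A']
```

Note how School B, strong in two categories, misses the list solely because the faculty category’s cutoff is so tight, which is the Central Michigan University situation described above.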

Pushing Students to a Lead Aggregation Tool. When you get to the US News site, they prominently display a link to their “Degree Finder Tool.” This is one of those annoying lead aggregation sites that asks for your email address on the third page. They sell these leads, and the person who gives their contact information will undoubtedly be bombarded by recruiters.

Conclusions

A recent Washington Post article highlights the pervasive proliferation of rankings – from the oldest and best known, such as the US News rankings, to the downright silly, like the hairiest students. The methodologies for collecting data for these rankings vary from surveys of college administrators to allowing anyone with an email address to rate, rank, grade, or otherwise comment on the rigor, friendliness, drugginess, or hairiness of a college.

Of course, a single ranking on a basketful of criteria is a questionable undertaking in any case. How would you rank the best food, car, pet, or television show? Certainly, you have your opinion, but would that be the same opinion held by your spouse, your parents, your children, that weird neighbor down the street, or someone from a completely different geographic and economic background?

In short, there is an absurdity to many of these rankings, and no matter how much any of us may hate them, there’s not much we can do about them.

Or is there? If we were to create a culture of transparency throughout higher education, with institutions sharing data openly and publicly, and giving students tools to make informed decisions, would the rankings live on? If we were accountable to ourselves and our students, would we need to use these arbitrary rankings in our marketing?

For now, these rankings are a part of our world, they infiltrate the work that we do and unduly influence our students. At the very least we should strive to understand them.

It would be better if all schools participated, or if there were a list of those schools that did. Otherwise, the basis for comparison is limited to those institutions that responded or even knew to participate, and it is not clear which institutions were considered for the rankings. Thus, a really good online institution that did not participate, for any of a variety of reasons, would not appear, despite the fact that it might rank highly on one or more of these measures. Lists like these are only valuable if you know the comparison institutions…good qualitative statistics are not apparent here!