Every fall, U.S. News and World Report releases its much-read (but oft-criticized) college rankings. This year, U.S. News had new competition from the New York Times, which ranked colleges based on socioeconomic diversity, and LinkedIn, which tracked graduates’ employment outcomes. Meanwhile, the U.S. Department of Education has been preparing to release the most significant instrument yet: a ratings system that will eventually be tied to federal funding. With college ratings growing in significance for both consumers and policymakers, we asked our contributors: What criteria should weigh most heavily in college and university ratings? How should the department hold institutions accountable for these variables? What’s the right way to judge our institutions of higher education? Join the conversation: you’re seated at the Field Day blog round table.

Families and Policymakers Both Require Quality Data
Bill DeBaun, Analyst, National College Access Network

Conflicting views on college ratings show the complexity of developing a system that is useful for both policymakers and consumers. These groups’ needs differ when it comes to ratings, but both require current, complete, and comprehensible data to promote accountability and well-informed consumer decisions.

When developing ratings, policymakers should hold institutions of higher education accountable for the aims of the programs from which those institutions receive money. For Pell Grants, that means spurring high completion rates among grant recipients—many of whom are low-income and first-generation college students. For loan programs, that means graduating students with an education that helps them repay taxpayers’ investment in their degrees. Accordingly, policymakers should consider loan default rates and students’ 24-month post-graduation incomes to compare workforce outcomes across institutions.

For consumers, I endorse my colleague Carrie Warick’s suggestions for a system that includes information on institutions’ net price, admissions, completion rates, and average student loan debt and default rates, all disaggregated by institutional and student characteristics. Students need better data to make critical financial and professional decisions about matriculation.

Both groups’ needs for better data could be addressed by a student unit record system, which has been proposed by the New America Foundation and others. The imperfect data available underscores the need for a more complete collection of higher education information. The Integrated Postsecondary Education Data System only reports graduation rates for first-time, full-time students, ignoring growing numbers of returning and part-time students. Better data will allow policymakers, consumers, and researchers to better understand key questions about institutional outcomes. Whether used for public information or accountability, ratings will only be as good as the data on which they are based. Students and institutions deserve the best basis on which to rate and be rated.

Popular Rankings Don’t Reflect Student Experience on Campus
Dylan Hackbarth, School Counselor, Fairfax County Public Schools

This past summer, I worked as a student advocate on a week-long trip to a mid-sized, mid-Atlantic university with a group of about 60 first-generation college students. As we met with admissions representatives and university advisers, they told us about their national rankings, intermingled with a few hard facts about five-year graduation rates, student financial aid packages, and admissions requirements.

While the Princeton Review’s top ranking for dining hall food is interesting, it is also subjective. I rarely hear students talk about lifestyle rankings like “best dorms” or “best food,” though I know reputations matter for students and their families when deciding where to apply. At the beginning of the college search, I see students swayed by parents who push for elite and top-ranked schools. With some elite colleges and universities boasting admission rates between 5 and 8 percent, I’ve seen students who demonstrate an interest in this type of setting applying to 10 to 20 colleges to hedge their bets. Rankings strongly influence students’ application lists, but once financial aid packages are parceled out and an enrollment deposit needs to be paid, the conversation often swings instead to best fit. University rankings contribute only to perceived prestige, blurring reality for students and parents, especially when rankings are based on data like a university’s endowment or applicants’ SAT scores.

For this reason, whenever I meet with admissions representatives, I ask not about prestige rankings but about student financial aid, first-year retention, and four-year graduation rates. These data points shed better light on the student experience at a particular institution. Students need to understand the financial and academic realities of attending college.
Getting into college is only one part of the postsecondary equation—being able to pay and stay on track academically are of premium importance.

In developing college and university ratings systems, focus must be placed on criteria that evaluate institutions based on their outcomes relative to their unique student populations.

Simply grouping an institution with peers defined by traditional sector or geographic region does not give an accurate expectation of that institution’s outcomes. Ratings systems need to be able to evaluate institutions relative to the demographics of students they serve. This is what’s commonly referred to as an “input-adjusted” metric or evaluation.

Input adjustment involves examining outcomes while controlling for key factors so that valid comparisons can be made among the outcomes of different institutions. Predicted graduation rate is an excellent example of an input-adjusted metric. A calculation could be done using student demographic information—socioeconomic status or race, for instance—to determine the institution’s predicted graduation rate.

If such a calculation were to be incorporated into President Obama’s proposed Postsecondary Institutional Ratings System, the predicted rate could serve as that school’s expected benchmark. The U.S. Department of Education could then evaluate each institution based on how close its actual graduation rate is to its predicted rate, allowing for an “apples to apples” comparison that would otherwise be impossible. As with most input-adjusted metrics, this would incentivize schools to improve graduation rates but would not unfairly penalize them or inadvertently encourage them to stop admitting at-risk students.
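To make the idea concrete, here is a minimal sketch of how an input-adjusted graduation metric could be computed. All institution data below is invented for illustration, and the two predictors (share of Pell-eligible students, share of part-time students) are assumptions standing in for whatever demographic factors an actual ratings system would use; the real Department of Education methodology is not specified here.

```python
import numpy as np

# Hypothetical data: each row is one institution's share of Pell-eligible
# students and share of part-time students. The numbers are invented.
X = np.array([
    [0.60, 0.40],
    [0.20, 0.10],
    [0.45, 0.30],
    [0.70, 0.50],
    [0.30, 0.20],
])
# Invented actual six-year graduation rates for the same institutions.
actual = np.array([0.45, 0.85, 0.60, 0.42, 0.75])

# Fit a simple linear model:
#   predicted rate = b0 + b1 * pell_share + b2 * part_time_share
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, actual, rcond=None)
predicted = A @ coef

# The input-adjusted score is the gap between actual and predicted rates:
# a positive score means the school outperforms the expectation set by
# the demographics of the students it serves.
score = actual - predicted
for pred, s in zip(predicted, score):
    print(f"predicted {pred:.2f}, over/under-performance {s:+.3f}")
```

The key design point is that the benchmark is each school’s own predicted rate, not a sector-wide average, so a school serving many at-risk students is compared against an expectation that reflects that population.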

Institutions within the same sector and state, or with similar missions, can vary widely in terms of the characteristics of their students and programs. Given the differences that can exist even within broad categories of institutions, colleges and universities must be evaluated based on how well they serve their own unique population of students. Only then will schools’ ratings accurately measure the outcomes they produce.

What do you think? Engage these bloggers or share your own perspective on higher education rankings and ratings in the comments below.

