8/11/2010 @ 6:00PM

Methodology

Americans spend more money on college education than nearly any other investment. Just as they want home inspectors to evaluate possible house purchases, and Consumer Reports or J.D. Power and Associates to help guide their car purchases, they look to information providers like Forbes to assist them in their choice of colleges and universities.

The Forbes ranking is designed to meet the needs of undergraduate students. It attempts to help them evaluate things that many believe are important criteria when selecting a college:

–Do students enjoy their classes and overall academic experience?

–Do graduates succeed in their occupations after college?

–Do most students graduate in a timely fashion, typically four years?

–Do students incur massive debts while in school?

–Do students succeed in distinguishing themselves academically?

We use more than 10 factors in compiling these rankings, with no single factor counting as much as 20%. The rankings are objectively determined, with the only subjective judgments being those of the Center for College Affordability and Productivity and Forbes as to which factors to include and the weights to be used in evaluating each factor.

Students have varying tastes, preferences, academic abilities and financial situations, so the “best” school for each student depends not only on overall quality as measured by rankings such as this one, but other considerations specific to individual students. A do-it-yourself ranking indicator allows users to personalize rankings to fit their own tastes. We also offer a “best value” ranking that relates institutional quality to costs as measured by tuition and fees.

A detailed discussion of the methodology used in compiling these rankings follows.

Ranking Factors and Weights

The Center for College Affordability and Productivity (CCAP), in conjunction with Forbes, compiled its college rankings using five general categories, with several components within each general category. The weightings are listed in parentheses:

No. 1: Student Satisfaction (27.5%)

Student Evaluations from RateMyProfessors.com (17.5%)

Freshman-to-Sophomore Retention Rates (5%)

Student Evaluations from MyPlan.com (5%)

No. 2: Postgraduate Success (30%)

Salary of Alumni from PayScale.com (15%)

Listings of Alumni in Who’s Who in America (10%)

Alumni in Forbes/CCAP Corporate Officers List (5%)

No. 3: Student Debt (17.5%)

Four-year Debt Load for Typical Student Borrower (12.5%)

Student Loan Default Rates (5%)

No. 4: Four-year Graduation Rate (17.5%)

Actual Four-year Graduation Rate (8.75%)

Predicted vs. Actual Four-year Graduation Rate (8.75%)

No. 5: Competitive Awards (7.5%)

Student Nationally Competitive Awards (7.5%)

Changes This Year

In light of the addition of new variables (data on student default rates on educational loans, data on alumni who are either CEOs or on the boards of directors of America’s leading companies, freshman-to-sophomore retention rates, and student evaluations from MyPlan.com), we dropped the faculty awards measure from this year’s rankings. Last year we reduced the weighting of faculty awards from 8.33% to 5%, and this year we determined that the new variables deserved that weight instead.

In the past we have had technical problems with this component because a very large number of schools receive zeros, making the variable relevant to only a small percentage of schools in our sample. Additionally, we had some concern that faculty awards were more of an input, while our rankings pride themselves on being output-based. Schools can effectively “buy” faculty who have won these awards, so our use of this measure was potentially susceptible to institutional manipulation. Furthermore, because these awards do not directly measure what distinguished faculty actually convey to their students, faculty awards gauge an input to college education rather than an output.

School Selection

The 610 institutions of higher education in this ranking are schools that award undergraduate degrees or certificates requiring “four or more years” of study, according to the U.S. Department of Education. Only those schools categorized by The Carnegie Foundation [2] as doctorate-granting universities, master’s colleges and universities or baccalaureate colleges are included in this sample of schools. Of the 610 schools included in the sample, 599 were included in the 2009 college ranking. (A total of 600 schools were ranked in 2009, but Bob Jones University is excluded from the list this year because data were unavailable.) Name changes that have occurred over the past year are accounted for in this year’s list. Also, the 2010 rankings include only the metropolitan campus of Fairleigh Dickinson University; in the past, we had included both of its campuses as a composite school in our rankings.

The other 11 schools in the sample were added based on school size and performance in other, comparable rankings published within the United States. These additional schools are classified as either master’s colleges and universities or baccalaureate colleges with FTE undergraduate enrollments greater than 1,000, are not affiliated with a large multicampus university, and do not offer primarily online education. [3]

Student Evaluations From RateMyProfessors.com (17.5%)

RateMyProfessors.com, founded in 1999 as TeacherRatings.com, is a free online service that allows college students from American, Canadian, British, New Zealand and Australian institutions to assign ratings to professors anonymously.

Student participation on the website has been quite significant: around 10 million evaluations have been posted to the site, although we used only those evaluations for professors who have taught at the schools in our sample. University administrations have no control over the evaluation process, meaning schools would find it difficult to “game” the process by manipulating student participation. Furthermore, the database is very useful because it provides a uniform evaluation method for all instructors at all schools in the country.

Any student can enter ratings of professors via RateMyProfessors.com. All categories are based on a 5-point rating system, with 5 as the highest rating. Students evaluate classes on easiness, helpfulness and clarity. Overall quality is determined by averaging the helpfulness and clarity ratings given by students. There is also a “chili” (hotness) component that assesses the professor’s physical appearance, which we ignored in the determination of this component of the rankings.

Why This Measure?

Students are consumers who attend college to learn and acquire knowledge and skills. The core dimension of the learning experience comes from attending classes taught by instructors. Asking students what they think about their courses is akin to what agencies like Consumer Reports or J.D. Power and Associates do when they provide information on various goods or services. As Otto, Sanford and Ross note, students who post ratings on the website can be viewed as experts due to their significant experience with the professors they are evaluating. Considering the popularity of RateMyProfessors.com (RMP), with students themselves using the ratings to develop expectations about faculty members and set their schedules, we agree with these scholars when they argue that online ratings should be taken seriously. [4]

To be sure, the use of this instrument is not without criticism. Some would argue that only an unrepresentative sample of students complete the forms. In some cases the results for a given instructor might be biased because only students who are unhappy with the course complete the evaluation, while in other instances perhaps an instructor urges students liking the course to complete the evaluation, biasing results in the opposite direction.

It is possible that this concern has some validity as it applies to individual instructors. But when the evaluations of dozens or even hundreds of instructors are added together, most examples of bias are washed out–or any systematic bias that remains is likely relatively similar from campus to campus. What is important to us is the average course evaluation for a large number of classes and instructors, and the aggregation of data should largely eliminate major interschool biases in the data. In fact, on an institutional level, there is evidence that higher RMP scores are correlated with fewer evaluations; that is, the lower the number of RMP evaluations per enrollment, the higher the school’s composite RMP score.

The other main objection to the RMP measure is that instructors can “buy” high rankings by making their courses easy and giving high grades. Again, to some extent the huge variations in individual instructor rigor are reduced when the evaluations of all instructors are aggregated–nearly every school has some easy and some difficult professors. Nonetheless, we took this criticism seriously and did observe some inter-institutional variation in course easiness, as perceived by the students themselves. Other things equal, an institution’s score on this factor should be enhanced if it has a relatively high proportion of “hard” instructors or courses, for two reasons. First, there is a negative correlation between students’ overall evaluation of a course and its degree of difficulty, and we should control for this in order to obtain evaluations relatively unbiased by it. Second, a case can be made that where difficulty is perceived to be high, more learning is likely occurring–students on average are being challenged more. For these reasons, we gave special consideration to difficulty in the measurement of this component, as discussed below.

Scholarly Assessments of RateMyProfessors.com

There have been a number of studies assessing the validity of the RMP evaluations. The general approach is to relate the results on this website to the more established student evaluations of teaching (SET) that are routinely performed by most North American institutions of higher education. Since the schools themselves think their SET provides useful information in assessing the effectiveness of faculty and instruction, if these institutional evaluations correlate well with the RMP results, it enhances the likelihood that RMP is a valid instrument.

The research to date cautiously supports the view that RMP is relatively similar to the SET used by universities themselves. As one oft-cited study puts it, “The results of this study offer preliminary support for the validity of the evaluations on RateMyProfessors.com.” [5] Thomas Coladarci and Irv Kornfield, surveying instructors at the University of Maine, note that “… these RMP/SET correlations should give pause to those who are inclined to dismiss RMP indices as meaningless,” although they also expressed some concern that the correlation between the two types of instruments was far from 1.00. [6] Otto, Sanford and Ross concluded that their analysis of ratings on RMP revealed what would be expected if the online ratings of professors were in fact valid measures of student learning. [7] More recently, Bleske-Rechek and Michels, in an analysis of students across majors at a state university, contradicted the popular notion that students who use RMP post only highly negative or highly positive ratings. [8] Bleske-Rechek and Michels also concluded that the evidence does not support the common assumption that students who post on RMP are not typical of the whole student body.

To be sure, the research is not all enthusiastically supportive of RMP. Felton, Koper, Mitchell and Stinson suggest that the positive correlation between RMP quality ratings and ease-of-course assessments makes this a questionable instrument. [9] While Bleske-Rechek and Michels confirmed the existence of a positive relationship between student evaluations of quality and easiness at the instructor level, they warned that “it is misguided to jump to the conclusion that the association between easiness and quality is necessarily a product of just bias” and suggested that the RMP data may only be reflecting that “quality instruction facilitates learning.” [10] However, regardless of the precise causes of the positive relationship between student assessments of quality and easiness, we have adjusted the RMP score for course easiness to correct for this potential bias.

In spite of some drawbacks of student evaluations of teaching, they apparently have value for the 86% of schools that have some sort of internal evaluation system, and RMP ratings give similar results to these systems. Moreover, they are a measure of consumer preferences, which is critically important in rational consumer choice. Combined with the significant advantages of being uniform across different schools, not being subject to easy manipulation by schools, and being publicly available, RMP data is a preferred source for information on student evaluations of teaching–it is the largest single uniform data set of student perceptions of instructional quality that we know of.

Calculating the Schools’ Scores

We take the average overall quality rating for all instructors at each school, based on the quality ratings of individual professors listed on the RMP website. [11] We also derive a measure of course rigor from the RMP easiness variable, which, like the overall quality variable, is based on a scale from 1 to 5, with 5 being the easiest. To establish the rigor measure, we invert the scale by subtracting the easiness score from 5.

The overall RMP score was generated by weighting the overall course/instructor rating three times as heavily as the constructed rigor factor and summing the two. This composite score was then updated using Bayesian methods that consider the number of votes submitted. [12] The data were then standardized and given a score between 0 and 100 commensurate with their location in a normal distribution. Put differently, 13.125% of the total ranking of each school was based on student perceptions of course/instructor quality, and 4.375% was based on student perceptions of course rigor, with more points given the more difficult the course was perceived to be. Thus, RateMyProfessors.com evaluations account for 17.5% of the final score for each school in this ranking.
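To make the arithmetic concrete, here is a minimal Python sketch of this composite under stated assumptions: the three schools and their figures are invented, and the shrinkage formula (a pseudo-count of 25 prior votes pulling low-vote schools toward the all-school mean) is our own stand-in, since the methodology specifies only that a Bayesian adjustment based on vote counts was applied. [12]

```python
import numpy as np
from scipy.stats import norm

def bayesian_adjust(score, n_votes, global_mean, prior_votes=25):
    # Assumed shrinkage: schools with few evaluations move toward the mean.
    return (n_votes * score + prior_votes * global_mean) / (n_votes + prior_votes)

def to_0_100(values):
    # Standardize, then map Z-scores through the normal CDF to a 0-100 score.
    z = (values - values.mean()) / values.std(ddof=0)
    return 100 * norm.cdf(z)

# Hypothetical per-school RMP averages on the site's 1-5 scale
quality = np.array([3.9, 3.4, 4.2])
easiness = np.array([3.1, 2.6, 3.5])
votes = np.array([1200, 80, 450])

rigor = 5 - easiness             # invert easiness to obtain course rigor
composite = 3 * quality + rigor  # quality weighted three times rigor
print(to_0_100(bayesian_adjust(composite, votes, composite.mean())))
```

In this sketch the school with only 80 evaluations is pulled hardest toward the all-school mean before the normal-distribution scaling is applied.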

Freshman-to-Sophomore Retention Rates (5%)

Data on freshman-to-sophomore retention rates are quite common and are often used in college rankings or databases on colleges, such as the College Board’s. For our purposes, we use retention rates as a gauge of student satisfaction with the education offered by the college or university students attend. We interpret higher retention rates as suggestive of higher student satisfaction, while lower retention rates indicate (other things equal) a lower level of student satisfaction. Like any other metric used in assessing colleges and universities, retention rates are limited in the amount of reliable information they can convey. However, because retention rates are readily available and are a respected measure of a college’s performance, they provide some of the best available data on college students’ satisfaction.

The data on freshman-to-sophomore retention are part of the data collected annually by the U.S. Department of Education from any college or university that receives federal funding. These data are available within the U.S. Department of Education’s comprehensive database (IPEDS). [13] For those schools in our sample that do not report retention rates to the U.S. Department of Education, we obtained the data by other means. (For instance, for Hillsdale College we received freshman-to-sophomore retention rates directly from the school.) The data we used were the retention rates reported for fall 2008 in IPEDS.

After compiling the data set, we standardized the data using Z-scores; in the final computation to obtain the rankings, the standardized data are then weighted so that they carry a 5% weight in the final 2010 Forbes ranking.
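This standardize-and-weight step recurs throughout the methodology; a minimal sketch, with invented retention figures, looks like this:

```python
import numpy as np

def z_scores(values):
    # Standardize a raw component to mean 0, standard deviation 1.
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std(ddof=0)

# Hypothetical fall 2008 freshman-to-sophomore retention rates
retention = [0.97, 0.84, 0.91]

# The standardized component enters the final ranking at its 5% weight
print(0.05 * z_scores(retention))
```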

Student Evaluations from MyPlan.com (5%)

MyPlan.com is a website devoted to helping students and professionals find information about educational and career options. While the site includes traditional data on hundreds of American colleges and universities, perhaps most interesting and useful is its user questionnaire, which surveys a university’s current and former students. The survey asks a number of questions, but the one used in the Forbes/CCAP ranking is question 15, which asks: “How happy are you having gone to this college?” and offers five possible responses: (1) “Worst decision I ever made;” (2) “Not horrible, but I probably wouldn’t choose to go here again;” (3) “I have mixed feelings;” (4) “I’m happy I chose this college;” (5) “Time of my life! Best choice I ever made.” To review a college one must be logged into the website; creating an account is free.

Why Use MyPlan.com?

MyPlan.com’s survey provides data on how satisfied students and alumni are with their overall college experience. When selecting a college, a prospective student is likely concerned with many different factors. We try to capture many of these in our ranking (e.g., graduation rates, professor evaluations, debt loads). Yet there are many aspects of a college for which no quantifiable data are available for use in our calculations. The Forbes/CCAP ranking strives to be outcomes-based, and the most important outcome is arguably whether graduates are satisfied with their investment. The MyPlan.com data allow us to assess this important question.

Inclusion of this variable of course invites some criticism. Some will argue that the number of reviews per school is too small to be reliable. To help minimize this problem, CCAP included only those schools that had at least five reviews submitted. Under this criterion, 468 schools have usable data (76.7% of the 610 schools in the sample), with a total of 7,994 reviews by students and alumni. The average number of reviews for this sample is 17.08 and the median is 12. For schools that did not meet the five-review threshold, the 5% MyPlan.com weighting was instead evenly distributed (i.e., 2.5% each) to the PayScale.com and RateMyProfessors.com components, as sketched below.
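A minimal sketch of that reweighting rule, with the threshold and percentages taken from the text:

```python
def myplan_weight_split(n_reviews):
    # Returns (MyPlan, extra PayScale, extra RateMyProfessors) weights.
    # Below the five-review threshold, MyPlan's 5% is split evenly
    # between the PayScale and RateMyProfessors components.
    if n_reviews >= 5:
        return 0.05, 0.0, 0.0
    return 0.0, 0.025, 0.025

print(myplan_weight_split(12))  # (0.05, 0.0, 0.0)
print(myplan_weight_split(3))   # (0.0, 0.025, 0.025)
```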

It is our hope that MyPlan.com will attract more users and gather more surveys as this variable contains very important information for prospective students. It is assigned a 5% weighting in the overall 2010 ranking.

Listings of Alumni in Who’s Who in America (10%)

Alumni listings in Who’s Who in America are part of our measure of career success for college graduates. We used a different data set from previous years, drawing on the more comprehensive Who’s Who in America Online Edition. The weight of this variable dropped to 10% (from 12.5% in 2009) due to the addition of an alternative component for postgraduate success (the Corporate Officers composite data set).

Why Use Who’s Who in America?

Published by Marquis Who’s Who, Who’s Who in America has contained biographical sketches of influential and noteworthy men and women since its first appearance in 1899. [14] The Who’s Who volumes are routinely purchased by libraries as a standard biographical reference. We used Who’s Who in America Online to look at all degree-earning Who’s Who entries born in 1954 or later. Other college rankings typically rely on the inputs to a college education: the reputation of institutions among education professionals (peer assessments), selectivity (acceptance rates, high school performance and SAT scores), and variables related to institutional resources (faculty resources, instructional and research spending, and student-faculty ratio). Some rankings consider student attitudes toward schools almost exclusively, while others take into account financial dimensions, such as the cost of attending college. The Forbes/CCAP college rankings stress the outputs of a college education rather than the inputs incorporated by the more pervasive ranking systems.

Who’s Who in America, while imperfect, is a sampling of America’s successful citizens. By recording the college attendance of persons in Who’s Who, the rankings account for their achievements once they leave college, allowing us to determine how many graduates of a particular college reach a significant level of accomplishment. No fee is charged of those whose biographies are listed in Who’s Who, and the purchase of the publication is not a factor in deciding which biographies are included. According to the Preface of the 62nd edition, neither a person’s wealth, social position, nor desire to be listed is a sufficient reason for inclusion. Researchers employed by Marquis consult lists such as the Forbes Celebrity 100, the Fortune 500, general interest magazines, special interest magazines and lists specific to various industries and professions when deciding whom to include. [15]

We are aware that this approach is not perfect. There are cases–relatively few in our judgment–of individuals with decidedly modest vocational achievement being included in the Who’s Who volumes. And while Marquis’s team of researchers completes biographies for the most prominent members of society, there are other cases of accomplished individuals who simply refuse to fill out the forms and are thus not included. While these deficiencies exist, they apply to graduates of all universities and do not work to create any known bias in favor of a particular individual institution or class of institutions within our sample.

Developing the Data Set

We focused on the recent success of colleges rather than recording people who last attended an institution more than 40 years ago. The year 1954 was chosen as the earliest birth date of those sampled because, using data from a sample of slightly over 5,000 names in a two-year-old study and extrapolating to the full population of approximately 100,000 names, approximately 20% of the persons should have been born in 1952 or later; moving that study’s cutoff forward two years yields 1954. [16] By restricting our sample to those born in 1954 or later, we generated a data set of roughly 44,000 names. The online edition of Who’s Who in America allowed us to specify the birth year range, institution attended and type of degree awarded. We then recorded the number of entries for each university that were born in 1954 or later and earned undergraduate degrees. The data were recorded in such a fashion that we could later replicate the sample or revisit individual institutions. Since the typical student born in 1954 graduated from college no earlier than 1976, our analysis focuses on graduates of the 1970s, 1980s, 1990s and, in a few cases, this decade.

We standardized college names, especially for schools that have aliases (while also searching under all known alias names), such as College of the Holy Cross (Massachusetts) (a.k.a. Holy Cross College, Holy Cross). In the case of different schools that share a name but exist in different states (e.g., Augustana College, St. John’s University and Westminster College), when the state was not listed in the entry, every entry was re-examined to check for recording errors. If no further information was found, we used other clues to determine the exact institution. For example, if a student was employed in Norton, Mass., (the location of Wheaton College) during his or her undergraduate years, we assumed that the student attended Wheaton College of Massachusetts as opposed to Wheaton College of Illinois. If biographical clues given in Who’s Who were not enough to determine which school the person attended, we looked for information online (employees’ profiles, public officials’ websites, educators’ curricula vitae and so forth).

For entries that omitted the campus name (e.g., University of Michigan, University of Arkansas and California State University), we re-examined the entries for additional information and searched through online databases. For entries still lacking a specified campus, we apportioned the remaining undetermined entries using a new method this year, based on the average of two signals. First, we looked at each campus’s graduation rate multiplied by its historical FTE enrollment (this was the sole apportioning method used in last year’s list). Second, we used the distribution of the entries whose campus could be determined. For example, if there were 100 entries for the University of Michigan, Ann Arbor, and only two each for UM Dearborn and UM Flint, we used that proportion as the other half of our Who’s Who apportionment equation, as the sketch below illustrates.
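Here is a minimal sketch of the apportionment, using invented University of Michigan figures; the equal averaging of the two signals follows the description above, but the exact normalization is our assumption.

```python
def apportion_unspecified(unspecified, campuses):
    # Split entries with no campus listed across a system's campuses by
    # averaging two normalized signals: graduation rate times historical
    # FTE enrollment, and each campus's share of the determined entries.
    size_signal = {c: d["grad_rate"] * d["fte"] for c, d in campuses.items()}
    known_signal = {c: d["known_entries"] for c, d in campuses.items()}

    def normalize(sig):
        total = sum(sig.values())
        return {c: v / total for c, v in sig.items()}

    size_share = normalize(size_signal)
    known_share = normalize(known_signal)
    return {c: unspecified * (size_share[c] + known_share[c]) / 2
            for c in campuses}

# Hypothetical figures for a three-campus system
campuses = {
    "Ann Arbor": {"grad_rate": 0.70, "fte": 25000, "known_entries": 100},
    "Dearborn":  {"grad_rate": 0.25, "fte": 6000,  "known_entries": 2},
    "Flint":     {"grad_rate": 0.22, "fte": 5000,  "known_entries": 2},
}
print(apportion_unspecified(10, campuses))
```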

Some schools (e.g., Rhodes College and the College of Idaho) have changed their names significantly over time; these changes were accounted for while we developed the Who’s Who database. We controlled for institutional size because introducing enrollment numbers allows us to compare schools as large as Ohio State University and as small as Thomas Aquinas College. The average college graduate in our sample attended school between 1980 and 1990, so we calculated the average FTE undergraduate enrollment for 1980 and 1990 and divided the absolute number of Who’s Who entries for each school by this historical average enrollment. Given the varying graduation dates of entries and changing enrollments, this is not a precise method of adjusting for enrollment variation, but it combines the virtue of simplicity with relative accuracy.

The enrollment-adjusted numbers were standardized using Z-scores; in the final computation to obtain the rankings, the standardized data are weighted so that they constitute a 10% importance in determining the final ranking.
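A minimal sketch of the enrollment adjustment, with invented entry counts and enrollments:

```python
# Hypothetical Who's Who entry counts and historical FTE enrollments
schools = {
    "School A": {"entries": 120, "fte_1980": 18000, "fte_1990": 20000},
    "School B": {"entries": 15,  "fte_1980": 1200,  "fte_1990": 1400},
}

for name, d in schools.items():
    hist_avg = (d["fte_1980"] + d["fte_1990"]) / 2  # 1980/1990 average
    print(name, d["entries"] / hist_avg)            # entries per student
```

In this example the smaller school produces far fewer entries in absolute terms but scores higher per student, which is precisely the point of the size control.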

Salaries of Alumni From PayScale.com (15%)

PayScale.com is a market leader in global online compensation data. The website is used to help employers as well as employees better gauge the current job market. The “PayScale Salary Survey,” which is updated daily, is one of the largest online salary surveys in the world. Persons complete the survey in exchange for a free salary report that anonymously compares them to other people with similar jobs in similar locations. The survey is designed to discourage salary inflation and is unique in that it prompts users to reveal details such as position, level of work, location, industry and education. In addition to individual surveys, PayScale receives data from employer surveys administered on behalf of trade associations.

PayScale’s algorithms apply stringent rules to ensure data validity before entries are added to the database. Duplicate entries are eliminated, and unusual results are reviewed by an in-house compensation analyst before being added. Cost-of-living adjustments are not made, and PayScale reports only data that have been collected and approved by its in-house compensation analysts.

Why This Measure?

To most college students, the bottom line of higher education is whether it helps them get a good job after graduation. Of the many measures that qualify a job as “good,” we believe that compensation is near the top in the mind of a student. Other things being equal, students will choose a school that provides them the opportunity to earn the highest possible salary upon graduation.

One criticism is that schools with larger sample sizes do better on this portion of the rankings. We analyzed this claim and found that there is a slight, but not significant, positive correlation between the number of surveys completed and the median salary one to four years after graduation. Though the gap widens for salaries 10 to 19 years after graduation, the positive correlation is still not significant.

Another criticism is that the website does not differentiate between bachelor’s and graduate salaries. There is a statistically significant difference between baccalaureate schools (those without graduate programs) and schools that do have graduate programs: students who attended a school offering graduate studies were likely to earn a little over $5,000 more than those at a baccalaureate-only school. However, this difference is mitigated by the methodology described below.

How We Measured the PayScale Component

We used salary data from one to four years after graduation and from 10 to 19 years after graduation. Average salary one to four years after graduation is a good measure of what a student can expect to earn directly after graduation, and of the value added to a job applicant by his or her alma mater. The one-to-four-year figure made up 50% of the rating. The other 50% came from the growth rate between the one-to-four-year and 10-to-19-year salaries. Because someone from a larger and more prestigious school typically earns more directly after graduation, there is a natural upward bias in the one-to-four-year figure. The growth rate between the two periods is a good indicator of value-added skills, both technical and soft, learned during school. In other words, we believe that the growth of salary throughout a career is just as important as starting salary.
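A minimal sketch of the component, with invented salaries; applying the same Z-score-to-normal-percentile scaling used elsewhere in the methodology is our assumption here.

```python
import numpy as np
from scipy.stats import norm

def payscale_component(early, mid_career):
    # Half early-career salary (1-4 years out), half the growth rate
    # from early- to mid-career (10-19 years out).
    early = np.asarray(early, dtype=float)
    mid_career = np.asarray(mid_career, dtype=float)
    growth = (mid_career - early) / early

    def score(v):  # Z-score, then 0-100 via the normal CDF
        return 100 * norm.cdf((v - v.mean()) / v.std(ddof=0))

    return 0.5 * score(early) + 0.5 * score(growth)

# Hypothetical median salaries for three schools
print(payscale_component([52000, 45000, 61000], [98000, 90000, 104000]))
```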

Alumni in Forbes/CCAP Corporate Officers List (5%)

Why This Measure?

Another measure of career success in America is to be appointed chief executive officer (CEO) or named a member of the board of directors (BOD) of a leading American company. While obviously this is a measure of success in the field of business, it can also be a mark of success in other fields. Someone not in the field of business but who has achieved a particularly high level of distinction in his or her field can receive appointment to either of these types of positions. Companies can, and often do, reach outside of the traditional field of business to find persons qualified to be the chief executive of the company or to serve as a director. Thus, using the measure of alumni who go on to serve as CEOs or on BODs measures more than just success in the field of business.

Building the Data Set

A substantial portion of this component derives from educational data on persons who serve on the boards of directors of leading American corporations. First, we compiled a list of boards of directors from the Forbes 400 Best Big Companies and assembled a data set that included the institution from which each person had obtained a bachelor’s degree. We then added educational data on about 700 CEOs from a database Forbes has compiled of CEOs of many prominent American companies. We combined the boards-of-directors and CEO data sets into one composite data set with 5,043 unique entries (in more than one case, a single person served on multiple boards; any duplicates were removed from the composite data set).

For our purposes, we are interested solely in the institution from which these persons obtained an undergraduate college degree. We gathered educational and background data on the persons in the composite data set mainly from the websites Businessweek.com, Infoweek.com, and the Marquis Who’s Who online database. In cases where those databases did not include data on a particular person, we gathered data from news articles, company memos or other such sources. There were many entries within our composite data set for which educational data were unattainable or where the only data available were for graduate or professional degrees rather than undergraduate degrees. The names of individuals whose undergraduate institution could not be determined, as well as the names of those who received an undergraduate degree from a foreign institution, were excluded from our final composite data set.

When we finished compiling the list, we ensured that the names of all the institutions within our composite database were uniform (e.g., changing University of California-Berkeley to University of California, Berkeley). We also gathered any information necessary to identify which campus a person attended if he or she attended a multicampus school or a university system, so that credit would be awarded to the correct institution.

Each school is awarded a point for each CEO/director who graduated from that institution. For directors who graduated from multiple undergraduate programs, the point was divided evenly among the institutions represented (if a director attended two undergraduate institutions, each was awarded half a point; if three institutions awarded degrees, each of the three received one-third of a point). Each institution’s raw score was then divided by its full-time enrollment to give an enrollment-weighted score. This score was then entered into the master ranking consistent with this component’s 5% weighting in the overall 2010 Forbes ranking.
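A minimal sketch of the point allocation, with invented officers and enrollments:

```python
from collections import defaultdict

def officer_scores(officers, fte):
    # One point per CEO/director, split evenly across the undergraduate
    # institutions each person attended, then divided by FTE enrollment.
    raw = defaultdict(float)
    for institutions in officers:
        for school in institutions:
            raw[school] += 1 / len(institutions)
    return {school: points / fte[school] for school, points in raw.items()}

# Hypothetical: two officers, one holding degrees from two institutions
officers = [["School A"], ["School A", "School B"]]
fte = {"School A": 20000, "School B": 1500}
print(officer_scores(officers, fte))
```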

Four-Year Debt Load For Typical Student Borrowers (12.5%)

Student debt is incorporated in the ranking as a measure of the relative affordability of attending a particular school. The amount of debt students can expect to face if they borrow money to attend college is important consumer information. The weighting of this component decreased to 12.5% this year from the 20% weighting it received in the 2009 ranking. Part of the reason for the decrease is a response to well-founded criticisms of our prior rankings: while debt certainly is an important factor affecting the choices college students and their families make, it is by no means the primary factor, let alone the single most important one. After reviewing criticisms of our past methodologies, we concluded that a 20% weighting on four-year debt load was excessively high. Furthermore, this year we have introduced student loan default rates as a new variable (discussed below), and part of the weighting assigned to raw debt in the 2009 ranking was reassigned to that variable.

The figure used for student debt is the average debt for the typical student borrower; we exclude from consideration those students who do not borrow for college. The data on student debt are obtained from the U.S. Department of Education database (IPEDS). The figure is the sum of the average amount of loan aid received by student borrowers for the years 2004-05 to 2007-08 (the most recent years with available data). This four-year span assumes that a student graduates in the normal four-year period. According to IPEDS, student debt is defined as any financial aid which the student must repay, including “all Title IV subsidized and unsubsidized loans and all institutionally and privately sponsored loans.” This debt burden, however, does not include PLUS loans or other loans made directly to parents.

Although full data is available in IPEDS for most schools, there are several data anomalies. For schools with data missing for only one of the four years, the year of missing data is estimated as the average debt of the three other years for which data is available. Two other schools (Hillsdale and Grove City Colleges) do not report debt figures for any of these years. For these schools, the “average indebtedness of 2007 graduates” as reported on the website Collegedata.com is used.

The overall four-year raw debt data are standardized using Z-scores. These standardized rates are then assigned a score between zero and 100 commensurate with where they fall in a normal distribution. This final indexed figure is weighted at 12.5% of the overall 2010 Forbes ranking of America’s Best Colleges.

Student Loan Default Rates (5%)

A new component for this year, accounting for 5% of the rankings, student loan default rates were added in order to gauge students’ ability to pay back their student debt. This is a measure of institutional quality in that it provides insight into whether students can manage the debt accumulated from attending the institution. Schools of higher caliber enhance a student’s postgraduate opportunities and ability to pay back student debt; the inverse is true for schools of poor quality. Therefore, a low student loan default rate is considered better in our rankings.

Student loan default rates are gathered and calculated by the U.S. Department of Education. The two-year cohort default rate is used in this year’s rankings. This default rate is the percentage of borrowers defaulting on either Federal Family Education Loans (FFEL) or William D. Ford Federal Direct Loans within two fiscal years after entering the repayment period. Our rankings took a three-year average of the fiscal year 2005, 2006 and 2007 cohort default rates.

Several schools in our sample do not take federal monies and therefore have no student loan default rate. Grove City College and Hillsdale College do not accept federal funding, making their students incapable of defaulting on federal loans. Because of this, their default components were calculated assuming a zero default rate. Additionally, Principia College did not have a default rate: although its students are permitted to receive federal loans, in recent years they have not been accepting them (according to IPEDS data), so Principia was also assigned a default rate of zero.

The three-year averages of the student loan default rates are standardized using Z-scores. These standardized rates are then assigned a score between zero and 100 commensurate with where they fall in a normal distribution. This final indexed figure is weighted at 5% of the overall 2010 Forbes rankings.

Four-Year Graduation Rates (17.5%)

Graduation rates measure how effectively institutions of higher education deliver education to their students. The higher a college’s four-year graduation rate, the higher the proportion of students who fulfill the requirements of their academic program within the normal time of study; and the higher this proportion, other things equal, the lower the cost for a student to obtain a college education. Our measure of graduation rates includes two components: the actual four-year graduation rate and the difference between the actual graduation rate and a predicted graduation rate.

Why Use Four-Year Graduation Rates?

Traditionally, college education in America has been viewed, particularly by students and their parents, as a four-year educational investment. In recent times, the higher education sector has increasingly relied upon five- or even six-year graduation rates as a measure of how successfully students complete their programs of study at American colleges and universities. Consistent with our approach in constructing previous rankings, we have chosen to incorporate the four-year graduation rate rather than the five- or six-year graduation rates used in other college rankings. Because all of the schools included in this sample are classified as offering instructional programs of “four years or more” by the U.S. Department of Education, it is perfectly legitimate to assess these schools using a four-year graduation rate. After all, prospective students arguably view that “four years or more” classification as an implication that they can graduate from any of these schools within four years.

Using the four-year graduation rate is not beyond criticism. Several schools included in this sample focus heavily on five-year academic programs (this is particularly true of some STEM-intensive schools, which require not only four years of academic study but also one year of co-op/internship experience). For these schools, many students take more than four years to satisfy the requirements for graduation. However, we believe that using a four-year graduation rate is valid, because these schools are included in the traditional four-year college classification and because some students even at these schools do in fact graduate within four years. Arguably, a four-year graduation rate is a more meaningful measure than either a five- or six-year rate, because according to the U.S. Department of Education, “normal time” for completion of a bachelor’s degree is four years.

After the publication of the 2008 college ranking, some in the higher education sector expressed concern over the use of the four-year graduation rate instead of a five-year graduation rate. While we ultimately decided to retain the four-year rate, we did explore alternative measures for graduation rates. Interestingly enough, using a five-year graduation rate yielded mixed results as far as the STEM-intensive schools are concerned: while some of these schools fared better under a five-year graduation rate model, others fared much worse.

Summary of the Statistical Model

We rely upon a statistical model to predict what a school’s four-year graduation rate is expected to be based on a number of input criteria which measure the academic quality of incoming students. In order to capture the quality of students, we use 25th percentile composite SAT scores, acceptance rates, full-time enrollment rates (how many admitted students actually matriculate), percentage of students receiving Pell Grants, percentage of students enrolled in STEM majors, [18] a dummy variable for public or private institutional control, and regional dummy variables. We first transformed the four-year graduation rate data with the logistic transformation (occasionally referred to as the log of the odds ratio) to account for the particular bounded nature of that variable. We next regressed this transformed variable against the list of regressors mentioned above using the least squares method. Due to the nature of the logistic transformation, and the history of even respected academics misinterpreting the coefficient estimates, [19] we do not encourage interpretation of coefficient estimates on graduation rates and therefore suppress them in this methodology.

Schools increased their final score by having actual graduation rates that exceeded those predicted by the regression model; they decreased their score if the actual graduation rate fell below the model’s predicted rate. The differences between the actual and predicted rates for all schools were standardized in the same manner as the other components of the index previously discussed. A simplified sketch of the model follows.
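The sketch below uses invented figures and only a subset of the regressors (the full model also includes yield, Pell and STEM shares, and control and region dummies), but it follows the stated procedure: logistic-transform the graduation rate, fit by least squares, and score schools on the residual.

```python
import numpy as np

def logit(p):
    # Log of the odds ratio; maps a rate in (0, 1) onto the real line.
    p = np.clip(np.asarray(p, dtype=float), 0.01, 0.99)
    return np.log(p / (1 - p))

# Hypothetical inputs: intercept, SAT 25th percentile, acceptance rate
X = np.array([
    [1, 1280, 0.35],
    [1, 1100, 0.70],
    [1, 1400, 0.15],
    [1, 1050, 0.85],
    [1, 1210, 0.50],
])
actual = np.array([0.82, 0.45, 0.91, 0.33, 0.64])  # 4-year graduation rates

beta, *_ = np.linalg.lstsq(X, logit(actual), rcond=None)  # least squares
predicted = 1 / (1 + np.exp(-X @ beta))  # invert the logistic transform
print(actual - predicted)  # positive residuals raise a school's score
```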

A Note on the Data Sources

The primary source for the data used was IPEDS. The actual graduation rate, according to IPEDS, is computed by dividing “the total number of students completing a bachelor degree or equivalent within four years (100% of normal time) … by the revised bachelor sub-cohort minus any allowable exclusions.” The IPEDS database is also the source of the data for the variables used in the statistical model, with a few exceptions. For several schools (notably Hillsdale College), the data came from other sources: Collegedata.com, the College Board and Hillsdale.edu. In cases where current data were unavailable from any of these sources, we developed estimates based on the most recent publicly available data. The data we use for Fairleigh Dickinson University are derived from the combination of the data available for both the Florham and Metropolitan campuses.

The graduation rate component accounted for 17.5%, apportioned equally between the actual graduation rate (8.75%) and the graduation performance of a school relative to its predicted graduation rate (8.75%).

Student Nationally Competitive Awards (7.5%)

Every year students from colleges and universities across the country compete with one another for highly prestigious student awards. Analyzing the number of award winners per school serves as an indicator of how well an institution is preparing its students to compete successfully for these awards. Winning a nationally competitive award indicates that the student is not only thoroughly academically prepared and qualified but also possesses other qualities such as a high level of motivation, initiative and leadership. Those schools with a high number of award winners are better preparing their students.

The following nine nationally competitive student awards were considered, with the years of awards considered included in parentheses:

The Rhodes Scholarship (2002-2010)

The British Marshall Scholarship (2002-2010)

The Gates Cambridge Scholarship (2002-2010)

The Harry S. Truman Scholarship (2006-2010)

The Barry M. Goldwater Scholarship (2010)

The Jack Kent Cooke Graduate Scholarship (2002-2010)

The Boren (NSEP) Fellowship (2009-2010)

National Science Foundation (NSF) Fellowships (2010)

USA Today All-Academic First and Second Teams (2002-2010)

The Rhodes, Marshall and Gates-Cambridge Scholarships are included because they are widely recognized as three of the most selective and prestigious of all postgraduate awards to undergraduate students. Additionally, each year, USA Today names approximately 40 students to its “All-Academic” first and second teams. Winners of this award come from across the country and are some of the most accomplished college students in the country across many different academic disciplines. The remaining five awards attempt to encompass a variety of different academic backgrounds. The Truman award is directed toward students interested in pursuing careers in public service while the Goldwater Scholarship targets students pursuing careers in the natural sciences, mathematics or engineering. The Jack Kent Cooke scholarships are not limited to specific areas of study; grants are awarded to deserving low-income students wishing to pursue graduate studies. The Boren Fellowship is an award funded by the National Security Education Program and given to support graduate studies in areas of the world critical to U.S. interests. Finally, National Science Foundation Fellowships are awarded to students wishing to pursue graduate study in the sciences (including social sciences), mathematics and engineering.

Due to the varying number of awards given in a single year among these nine awards, it is necessary to use multiple years of data to expand the sample size. However, several of these programs grant a sufficient number of awards every year for the single most recent year’s data to suffice for use in the study.

After calculating the raw number of each award won by students from an institution over the examined period, each award is weighted to reflect its competitiveness and prestige. Because the Rhodes Scholarship is the most competitive and prestigious of undergraduate awards, we give it a higher weighting relative to the other awards. The same is true, although to a lesser extent, of the Marshall and Gates-Cambridge Scholarships. Therefore, the Rhodes Scholarship is weighted five times, and the Marshall and Gates-Cambridge Scholarships three times, relative to the other six awards. If a school had one scholarship winner for each award we use, that institution’s total number of awards would be recorded as 17. In the rare cases where award winners studied for a significant amount of time (at least two years) at an institution before transferring to the institution at which they were current students upon winning the award, credit for the student’s award was divided equally between the two institutions.

Enrollment size of an institution is accounted for as well. A school with a greater number of students, other things equal, has a better chance of producing an award winner. Thus, the number of award winners is adjusted by the school’s full-time equivalent undergraduate enrollment during the fall of 2008 (the most recent year for which enrollment data are available). These enrollment-adjusted numbers for student award recipients account for 7.5% of the final score for each school in the sample, as sketched below.
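A minimal sketch of the weighting and size adjustment; award counts and enrollment are invented, and with one winner of each award the weighted total is 17, as the text notes.

```python
# Rhodes weighted 5x, Marshall and Gates Cambridge 3x, the other six 1x
AWARD_WEIGHTS = {"Rhodes": 5, "Marshall": 3, "Gates Cambridge": 3,
                 "Truman": 1, "Goldwater": 1, "Cooke": 1,
                 "Boren": 1, "NSF": 1, "USA Today": 1}

def award_score(counts, fte):
    # Weighted award total, adjusted by FTE undergraduate enrollment.
    total = sum(AWARD_WEIGHTS[award] * n for award, n in counts.items())
    return total / fte

# One winner of every award gives a weighted total of 17 (5+3+3+6)
counts = {award: 1 for award in AWARD_WEIGHTS}
print(sum(AWARD_WEIGHTS[a] * n for a, n in counts.items()))  # 17
print(award_score(counts, fte=8000))                          # 0.002125
```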

A Note on the “Best Value Ranking”

For many students the price of a school is as important a factor in deciding where to go to college as its quality. Knowing where you can get the most quality for each tuition dollar spent is important for those shopping on a budget. Answering this question is the goal of this year’s “Best College Buy” ranking for 2010. To produce the ranking, we divide each school’s overall quality score by its 2008 in-state tuition and fees. [20]

The overall quality score for each school (exactly the score used to derive the overall 2010 America’s Best Colleges ranking) is divided by the school’s published (or “sticker price”) tuition for 2008, the most recent data available in the IPEDS comprehensive database. Published tuition is the amount of tuition and fees required of students as payment for the cost of receiving their collegiate education. Published tuition does not include any financial aid (whether scholarships, grants or other forms of aid that do not need to be repaid) that students directly receive from government, institutional or other sources. For those schools that charge $0 tuition and fees (e.g., the service academies) or automatically offer all students scholarships or grants valued at the full price of tuition (e.g., College of the Ozarks and Cooper Union [21]), we arbitrarily set tuition and fees at one cent. After obtaining this list of schools, we remove those with four-year graduation rates below 20%, using this 20% rate as a baseline level of quality. (Since the overall list contains only 610 schools, it is possible that some low-quality schools would appear on the best value list simply because they have low tuition; in the spirit of maintaining a list that indicates high quality, and not just low tuition, this adjustment is necessary.)
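A minimal sketch of the best-value computation, with invented figures:

```python
def best_value(quality_score, tuition, grad_rate_4yr):
    # Quality per tuition dollar; schools under a 20% four-year
    # graduation rate are dropped, and $0 tuition becomes one cent.
    if grad_rate_4yr < 0.20:
        return None  # excluded from the best-value list
    return quality_score / max(tuition, 0.01)

print(best_value(quality_score=87.2, tuition=38000, grad_rate_4yr=0.81))
print(best_value(quality_score=55.0, tuition=0, grad_rate_4yr=0.45))
```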

Last year we used “net tuition” (which controls for financial aid received by typical students) in constructing the “Best College Buy.” However, we determined that, for the purposes of calculating these rankings, published tuition is a more realistic measure of the cost to students and their families of attending a college or university. Prior to attending an institution, students and their families often do not know with certainty the precise amount of tuition discounts they will receive once enrolled; they do know that if no tuition discounts are received, they will be responsible for paying the published tuition costs.

A Note on the “Do-It-Yourself” Variables

Although not included in the Forbes/CCAP college ranking, several additional variables are available for readers to use in constructing their own rankings (this option is available on Forbes.com). All of the variables used to construct the official Forbes/CCAP ranking are available in the “do-it-yourself” option; the additional variables are: five-year graduation rate, acceptance rate, SAT scores, undergraduate enrollment, student-faculty ratio, in-state tuition, out-of-state tuition and a crime index.

For the five-year graduation rate, we use a model (identical to the model we use for the four-year graduation rate in the official Forbes/CCAP ranking) to predict a school’s five-year graduation rate. Half of this component is based on the actual graduation rate while the other half is based on the difference between the actual and predicted graduation rate.

The acceptance rate is the percentage of student applicants who received an offer to enroll in the institution. The SAT scores are the 25th percentile composite score for admitted students. The undergraduate enrollment is the FTE undergraduate enrollment for the 2006-2007 academic year. The student-to-faculty ratio is computed by dividing FTE undergraduate enrollment by FTE faculty. In-state tuition is the cost of instruction (tuition and required fees) that public institutions charge undergraduate students who are residents of the state in which the school is located, while out-of-state tuition is the cost of instruction charged to undergraduates who are not residents of that state. For private schools, in-state and out-of-state tuition charges are the same.

The crime index is computed by dividing the weighted number of crimes reported by a school by the school’s FTE undergraduate enrollment (in thousands). For this variable, we averaged the index for 2006 and 2007. The weighted number of crimes is the sum of murders/non-negligent homicides, negligent manslaughters, forcible and non-forcible sexual offenses, robberies, aggravated assaults, burglaries, motor vehicle thefts and arsons, with murder and forcible sexual offenses weighted double the other reported crimes. Thus, the crime index is the weighted number of reported crimes per year per thousand undergraduate students, as sketched below.
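A minimal sketch of the index, with invented counts for one campus-year; in the actual variable the index is averaged over 2006 and 2007.

```python
# Murder and forcible sex offenses weighted double the other crimes
CRIME_WEIGHTS = {"murder": 2, "forcible_sex_offense": 2,
                 "negligent_manslaughter": 1, "nonforcible_sex_offense": 1,
                 "robbery": 1, "aggravated_assault": 1, "burglary": 1,
                 "motor_vehicle_theft": 1, "arson": 1}

def crime_index(reported, fte):
    # Weighted reported crimes per thousand FTE undergraduates.
    weighted = sum(CRIME_WEIGHTS[crime] * n for crime, n in reported.items())
    return weighted / (fte / 1000)

reported = {"burglary": 12, "robbery": 2, "aggravated_assault": 1,
            "forcible_sex_offense": 1, "motor_vehicle_theft": 3,
            "murder": 0, "negligent_manslaughter": 0,
            "nonforcible_sex_offense": 0, "arson": 0}
print(crime_index(reported, fte=18000))  # crimes per 1,000 students
```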

Some of the data used for the “do-it-yourself” variables are based on estimates for schools where the data was not directly reported by the school to the U.S. Department of Education.

No inherent weight is assigned to any of these “do-it-yourself” variables. Individual readers can assign weights to each of them to construct the ranking they deem best suited to their own needs.

Conclusions

Any set of rankings is subject to criticisms: the variables used are inappropriate or mis-measured, key factors are neglected, and so on. Yet what is the alternative? No rankings at all? We feel, and many independent observers agree, that these rankings provide a good assessment of the quality of the educational experiences at over 600 institutions. They further improve student choices and provide external assessments of the often inflated institutional claims of excellence. We hope they prove interesting, informative and valuable to those using them.

Endnotes

[1] The compilation of these rankings was done at the Center for College Affordability and Productivity, although in cooperation and active consultation with the staff of Forbes. At Forbes, the CCAP staff worked particularly closely with David Ewalt. This was truly a collaborative effort in the finest sense of that term.

At CCAP, Director Richard Vedder took overall charge of the project. He appreciates the assistance of Ohio University, and particularly the cooperation of the chair of its Economics Department, professor Rosemary Rossiter. The work on the rankings was done by a team of students working with Prof. Vedder. Five persons were particularly critical: Matthew Denhart, Jonathan Leirer, Christopher Matgouranis, Jonathan Robe and Robert Villwock. Others assisting in the effort included Ryan Brady, Andrew Cadamagnani, Christopher Denhart, Will Drabold, Michael Malesick and Karen Vedder.

[2] For further information on the Carnegie Classification system, see The Carnegie Foundation for the Advancement of Teaching, “Classification Description,” accessed Aug. 5, 2010.

[3] For our purposes full-time equivalent (FTE) undergraduate enrollment is defined as the sum of the full-time undergraduate enrollment and one-third of the part-time undergraduate enrollment as reported by the Department of Education.

[9] James Felton, Peter T. Koper, John Mitchell and Michael Stinson, “Attractiveness, Easiness, and Other Issues: Student Evaluations of Professors on RateMyProfessors.com,” accessed Aug. 5, 2010.

[10] Bleske-Rechek and Michels, p. 9.

[11] Unfortunately, Corban College has only one professor evaluation on RateMyProfessors.com. To calculate a score, we used the average score for all schools considered “Baccalaureate” institutions under the Carnegie classification.

[12] For further discussion of Bayesian approaches to rankings see James O. Berger and John Deely. “A Bayesian Approach to Ranking and Selection of Related Means With Alternatives to Analysis-of-Variance Methodology” Journal of the American Statistical Association 83, no. 402 (June 1988): 364-373.

[13] Data on freshman-to-sophomore retention rates are also reported annually by schools to institutions such as the College Board and other publications which publish data on colleges or publish other college rankings.

[17] These four schools were: New College of Florida, Marlboro College, College of the Atlantic and Mount St. Mary’s College (California).

[18] This control variable was added to address the (valid) criticism that, other things equal, students enrolled in STEM majors tend to take more time to graduate because of the higher rigor associated with their fields of study. The addition of this control variable allows us to take into account that schools with higher percentages of students enrolled in STEM majors will likely have lower four-year graduation rates. Thus, these schools should not be “penalized” by the model because of the higher percentage of STEM students.

[19] One prominent example of a misinterpretation of a logistic regression coefficient is discussed in the following letter to the editor. Andrew Gelman, “Letter to the editors regarding some papers of Dr. Satoshi Kanazawa” Journal of Theoretical Biology 245, no. 3 (April 7, 2007): 597-599.

[20] For private schools, in-state tuition and fees is the same as out-of-state tuition.

[21] Berea College in Kentucky also offers its students full scholarship and grant coverage for the cost of tuition, but according to the IPEDS data, students are required to pay an $866 fee to the school for the year.