Discussion and analysis of international university rankings and topics related to the quality of higher education.
Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes99@yahoo.com

Thursday, December 21, 2006

Reply to Open Letter

John O'Leary, editor of the THES, has replied to my open letter:

Dear Mr Holmes

Thank you for your email about our world rankings. As you have raised a number of detailed points, I have forwarded it to QS for their comments. I will get back to you as soon as those have arrived but I suspect that may be early in the New Year, since a lot of UK companies have an extended Christmas break.

Best wishes

John O'Leary
Editor
The Times Higher Education Supplement

If nothing else, perhaps we will find out how long an extended Christmas break is.

Saturday, December 16, 2006

Open Letter to the Times Higher Education Supplement

This letter has been sent to THES

Dear John O’Leary,

The Times Higher Education Supplement (THES) world university rankings have acquired remarkable influence in a very short period. It has, for example, become very common for institutions to include their ranks in advertising or on websites. It is also likely that many decisions to apply for university courses are now based on these rankings.

Furthermore, careers of prominent administrators have suffered or have been endangered because of a fall in the rankings. A recent example is that of the president of Yonsei University, Korea, who has been criticised for the decline of that university in the THES rankings compared to Korea University (1) although it still does better on the Shanghai Jiao Tong University index (2). Ironically, the President of Korea University seems to have got into trouble for trying too hard and has been attacked for changes designed to promote the international standing, and therefore the position in the rankings, of the university. (3) Another case is the Vice-Chancellor of Universiti Malaya, Malaysia, whose departure is widely believed to have been linked to a fall in the rankings between 2004 and 2005, which turned out to be the result of the rectification of a research error.

In many countries, administrative decisions and policies are shaped by the perception of their potential effect on places in the rankings. Universities are stepping up efforts to recruit international students or to pressure staff to produce more citable research. Ranking scores are also used as ammunition for or against administrative reforms: recently, we saw a claim that Oxford’s performance renders any proposed administrative change unnecessary (4).

It would therefore be unfortunate for THES to produce data that are in any way misleading, incomplete or affected by errors. I note that the publishers of the forthcoming book that will include data on more than 500 universities quote a comment by Gordon Gee, Chancellor of Vanderbilt University, that the THES rankings are “the gold standard” of university evaluation (5). I also note that on the website of your consultants, QS Quacquarelli Symonds, readers are told that your index is the best (6).

It is therefore very desirable that the THES rankings should be as valid and as reliable as possible and that they should adhere to standard social science research procedures. We should not expect errors that affect the standing of institutions and mislead students, teachers, researchers, administrators and the general public.

I would therefore like to ask a few questions concerning three components of the rankings, which together account for 65% of the overall evaluation.

Faculty-student ratio

In 2005 there were a number of obvious, although apparently universally ignored, errors in the faculty-student ratio section. These include ascribing inflated faculty numbers to Ecole Polytechnique in Paris, Ecole Normale Superieure in Paris, Ecole Polytechnique Federale in Lausanne, Peking (Beijing) University and Duke University, USA. Thus, Ecole Polytechnique was reported on the site of QS Quacquarelli Symonds (7), your consultants, to have 1,900 faculty and 2,468 students, a ratio of 1.30 students per faculty; Ecole Normale Superieure 900 faculty and 1,800 students, a ratio of 2.00; Ecole Polytechnique Federale 3,210 faculty and 6,530 students, a ratio of 2.03; Peking University 15,558 faculty and 76,572 students, a ratio of 4.92; and Duke 6,244 faculty and 12,223 students, a ratio of 1.96.
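For readers who wish to check, the ratios above are a simple division of reported students by reported faculty. A short Python sketch, using only the figures quoted in this letter, reproduces them:

```python
# Reported 2005 QS figures quoted above: (faculty, students).
reported = {
    "Ecole Polytechnique": (1900, 2468),
    "Ecole Normale Superieure": (900, 1800),
    "Ecole Polytechnique Federale": (3210, 6530),
    "Peking University": (15558, 76572),
    "Duke University": (6244, 12223),
}

for name, (faculty, students) in reported.items():
    ratio = students / faculty  # students per faculty member
    print(f"{name}: {ratio:.2f} students per faculty")
```

Ratios this low are far out of line with the published figures of comparable institutions, which is what makes these entries look like data errors rather than genuine staffing levels.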

In 2006 the worst errors seem to have been corrected although I have not noticed any acknowledgement that any error had occurred or explanation that dramatic fluctuations in the faculty-student ratio or the overall score were not the result of any achievement or failing on the part of the universities concerned.

However, there still appear to be problems. I will deal with the case of Duke University, which this year is supposed to have the best score for faculty-student ratio. In 2005 Duke, according to the QS Topgraduates site, had, as I have just noted, 6,244 faculty and 12,223 students, giving it a ratio of about one faculty member to two students. This is quite implausible and most probably resulted from a data entry error, with an assistant or intern confusing the number of undergraduates listed on the Duke site, 6,244 in the fall of 2005, with the number of faculty. (8)

This year the data provided are not so implausible, but they are still highly problematic. In 2006 Duke, according to QS, has 11,106 students, but the Duke site refers to 13,088. True, the site may be in need of updating, but it is difficult to believe that a university could reduce its total enrollment by about a sixth in the space of a year. Also, the QS site would have us believe that in 2006 Duke has 3,192 faculty members. But the Duke site refers to 1,595 tenure and tenure-track faculty. Even if you count other faculty, including research professors, clinical professors and medical associates, the total of 2,518 is still much less than the QS figure. I cannot see how QS could arrive at such a low figure for students and such a high figure for faculty. Counting part-timers would not make up the difference, even if this were a legitimate procedure, since, according to the US News & World Report (America’s Best Colleges 2007 Edition), only three percent of Duke faculty are part time. My incredulity is increased by the surprise expressed by a senior Duke administrator (9) and by Duke's being surpassed by several other US institutions on this measure, according to the USNWR.

There are of course genuine problems about how to calculate this measure, including the question of part-time and temporary staff, visiting professors, research staff and so on. However, it is rather difficult to see how any consistently applied conventions could have produced your data for Duke.

I am afraid that I cannot help but wonder whether data for 2005 and 2006 were entered in adjacent rows of a database covering all three years, and that the top score of 100 for Ecole Polytechnique in 2005 was entered into the data for Duke in 2006 – Duke was immediately below the Ecole in the 2005 rankings – with the numbers of faculty and students then worked out backwards. I hope that this is not the case.

-- Could you please indicate the procedures that were employed for counting part-timers, visiting lecturers, research faculty and so on?

-- Could you also indicate when, how and from whom the figures for faculty and students at Duke were obtained?

-- I would like to point out that if the faculty-student ratio for Duke is incorrect then so are all the scores for this component, since the scores are indexed against the top scorer, and therefore all the overall scores. Also, if the ratio for Duke is based on an incorrect figure for faculty, then Duke’s score for citations per faculty is incorrect. If the Duke score does turn out to be incorrect, would you consider recalculating the rankings and issuing a revised and corrected version?
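The propagation effect described in that last point can be sketched in a few lines of Python. The exact THES formula is not published, so this assumes the simple indexing convention that the best ratio (fewest students per faculty) is scaled to 100 and every other institution is scored relative to it; universities A, B and C are made up for illustration:

```python
def indexed_scores(ratios):
    """Scale students-per-faculty ratios so that the best (lowest) ratio
    scores 100 and all others are expressed relative to it.
    This indexing convention is an assumption, not THES's published method."""
    best = min(ratios.values())
    return {name: 100 * best / r for name, r in ratios.items()}

# Hypothetical ratios (students per faculty member).
ratios = {"A": 4.0, "B": 8.0, "C": 16.0}
print(indexed_scores(ratios))  # A: 100.0, B: 50.0, C: 25.0

# If an error makes the top scorer's ratio look twice as good as it is,
# every other institution's indexed score is halved.
ratios_err = {"A": 2.0, "B": 8.0, "C": 16.0}
print(indexed_scores(ratios_err))  # B drops from 50.0 to 25.0, C from 25.0 to 12.5
```

Under this convention, a single bad number at the top silently rescales the whole column, which is why the Duke figure matters for every university in the table.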

International faculty

This year the university with the top score for international faculty is Macquarie, in Australia. On this measure it has made a giant leap forward from 55 to 100 (10).

This is not, I admit, totally unbelievable. THES has noted that in 2004 and 2005 it was not possible to get data for Australian universities about international faculty. The figures for Australian universities for these years therefore simply represent an estimate for Australian universities as a whole with every Australian university getting the same, or almost the same, score. This year the scores are different suggesting that data has now been obtained for specific universities.

I would like to digress a little here. On the QS Topgraduate website the data for 2005 gives the number of international faculty at each Australian university. I suspect that most visitors to the site would assume that these represent authentic data and not an estimate derived from applying a percentage to the total number of faculty. The failure to indicate that these data are estimates is perhaps a little misleading.

Also, I note that in the 2005 rankings the international faculty score for the Australian National University is 52, for Monash 54, for Curtin University of Technology 54 and for the University of Technology Sydney 33. For the other thirteen Australian and New Zealand universities it is 53. It is most unlikely that if data for these four universities were not estimates they would all differ from the general Australasian score by just one digit. It is likely then that in four out of seventeen cases there have been data entry errors or rounding errors. This suggests that it is possible that there have been other errors, perhaps more serious. The probability that errors have occurred is also increased by the claim, uncorrected for several weeks at the time of writing, on the QS Topuniversities site that in 2006 190,000 e-mails were sent out for the peer review.

This year the Australian and New Zealand universities have different scores for international faculty. I am wondering how they were obtained. I have spent several hours scouring the Internet, including annual reports and academic papers, but have been unable to find any information about the numbers of international faculty in any Australian university.

-- Can you please describe how you obtained this information? Was it from verifiable administrative or government sources? It is crucially important that the information for Macquarie is correct because if not then, once again, all the scores for this section are wrong.

Peer Review

This is not really a peer review in the conventional academic sense, but I will use the term to avoid distracting arguments. My first concern with this section is that the results are wildly at variance with data that you yourselves have provided and with data from other sources. East Asian and Australian and some European universities do spectacularly better on the peer review, either overall or in specific disciplinary groups, than they do on any other criteria. I shall, first of all, look at Peking University (which you usually call Beijing University) and the Australian National University (ANU).

According to your rankings, Peking is in 2006 the 14th best university in the world (11). It is 11th on the general peer review, which according to your consultants explicitly assesses research accomplishment, and 12th for science, 20th for technology, 8th for biomedicine, 17th for social science and 10th for arts and humanities.

This is impressive, all the more so because it appears to be contradicted by the data provided by THES itself. On citations per paper Peking is 77th for science and 76th for technology. This measure is an indicator of how a research paper is regarded by other researchers. One that is frequently cited has aroused the interest of other researchers. It is difficult to see how Peking University could be so highly regarded when its research has such a modest impact. For biomedicine and social sciences Peking did not even do enough research for the citations to be counted.

If we compare overall research achievements with the peer review we find some extraordinary contrasts. Peking does much better on the peer review than California Institute of Technology (Caltech), with a score of 70 to 53 but for citations per faculty Peking’s score is only 2 compared to 100.

We find similar contrasts when we look at ANU. It was 16th overall and had an outstanding score on the peer review, ranking 7th on this criterion. It was also 16th for science, 24th for technology, 26th for biomedicine, 6th for social science and 6th for arts and humanities.

However, the scores for citations per paper are distinctly less impressive. On this measure, ANU ranks 35th for science, 62nd for technology and 56th for social science. It does not produce enough research to be counted for biomedicine.

Like Peking, ANU does much better than Caltech on the peer review, with a score of 72, but its research record is less distinguished, with a citations-per-faculty score of 13.

I should also like to look at the relative position of Cambridge and Harvard. According to the peer review Cambridge is more highly regarded than Harvard. Not only that, but its advantage increased appreciably in 2006. But Cambridge lags behind Harvard on other criteria, in particular citations per faculty and citations per paper in specific disciplinary groups. Cambridge is also decidedly inferior to Harvard and a few other US universities on most components of the Shanghai Jiao Tong index (12).

How can a university that has such an outstanding reputation perform so consistently less well on every other measure? Moreover, how can its reputation improve so dramatically in the course of two years?

I see no alternative but to conclude that much of the remarkable performance of Peking University, ANU and Cambridge is nothing more than an artifact of the research design. If you assign one third of your survey to Europe and one third to Asia on economic rather than academic grounds, and then allow or encourage respondents to nominate universities in those areas, then you are going to have large numbers of universities nominated simply because they are the best of a mediocre bunch. Is ANU really the sixth best university in the world for social science and Peking the tenth best for arts and humanities, or is it just that there are so few competitors in those disciplines in their regions?

There may be more. The performance on the peer review of Australian and Chinese universities suggests that a disproportionate number of e-mails were sent to and received from these places even within the Asia-Pacific region. The remarkable improvement of Cambridge between 2004 and 2006 also suggests that a disproportionate number of responses were received from Europe or the UK in 2006 compared to 2005 and 2004.

Perhaps there are other explanations for the discrepancy between the peer review scores for these universities and their performance on other measures. One is that citation counts favour English speaking researchers and universities but the peer review does not. This might explain the scores of Peking University but not Cambridge and ANU. Perhaps, Cambridge has a fine reputation based on past glories but this would not apply to ANU and why should there be such a wave of nostalgia sweeping the academic world between 2004 and 2006? Perhaps citation counts favour the natural sciences and do not reflect accomplishments in the humanities but the imbalances here seem to apply across the board in all disciplines.

There are also references to some very suspicious procedures. These include soliciting more responses to get more universities from certain areas in 2004. In 2006, there is a reference to weighting responses from certain regions. Also puzzling is the remarkable closing of the gap between high- and low-scoring institutions between 2004 and 2005. Thus in 2004 the mean score for the peer review of all universities in the top 200 was 105.69 compared to a top score of 665, while in 2005 it was 32.82 compared to a top score of 100.
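The compression is easier to see once each year's mean is expressed as a percentage of that year's top score. A quick check using only the figures quoted above:

```python
# Figures quoted above: mean peer review score of the top 200 vs that year's top score.
mean_2004, top_2004 = 105.69, 665
mean_2005, top_2005 = 32.82, 100

pct_2004 = 100 * mean_2004 / top_2004  # roughly 15.9% of the top score
pct_2005 = 100 * mean_2005 / top_2005  # roughly 32.8% of the top score
print(f"2004: {pct_2004:.1f}% of top score; 2005: {pct_2005:.1f}% of top score")
```

In relative terms the average institution moved from about 16% of the top score to about 33%, so the spread between leaders and the rest was roughly halved in a single year, which is hard to explain as a genuine change in reputations.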

I would therefore like to ask these questions.

-- Can you indicate the university affiliation of your respondents in 2004, 2005 and 2006?

-- What was the exact question asked in each year?

-- How exactly were the respondents selected?

-- Were any precautions taken to ensure that those and only those to whom the survey was sent completed it?

-- How do you explain the general inflation of peer review scores between 2004 and 2005?

-- What exactly was the weighting given to certain regions in 2006, and to whom exactly was it given?

-- Would you consider publishing raw data to show the number of nominations that universities received from outside their regions, and therefore the genuine extent of their international reputations?

The reputation of the THES rankings would be enormously increased if there were satisfactory answers to these questions. Even if errors have occurred it would surely be to THES’s long-term advantage to admit and to correct them.

The Australian has an outstanding short article on the THES rankings by Professor Simon Marginson of the University of Melbourne here.

An extract is provided below.

Methodologically, the index is open to some criticism. It is not specified who is surveyed or what questions are asked. The student internationalisation indicator rewards entrepreneurial volume building but not the quality of student demand or the quality of programs or services. Teaching quality cannot be adequately assessed using student-staff ratios. Research plays a lesser role in this index than in most understandings of the role of universities. The THES rankings reward a university's marketing division better than its researchers. Further, the THES index is too easily open to manipulation. By changing the recipients of the two surveys, or the way the survey results are factored in, the results can be shifted markedly.

This illustrates the more general point that rankings frame competitive market standing as much or more than they reflect it.

Results have been highly volatile. There have been many sharp rises and falls, especially in the second half of the THES top 200 where small differences in metrics can generate large rankings effects. Fudan in China has oscillated between 72 and 195, RMIT in Australia between 55 and 146. In the US Emory has risen from 173 to 56 and Purdue fell from 59 to 127. They must have let their THES subscriptions lapse.

Second, the British universities do too well in the THES table. They have done better each successive year. This year Cambridge and Oxford suddenly improved their performance despite Oxford's present problems. The British have two of the THES top three and Cambridge has almost closed the gap on Harvard. Yet the British universities are manifestly under-funded and the Harvard faculty is cited at 3 1/2 times the rate of its British counterparts. It does not add up. But the point is that it depends on who fills out the reputational survey and how each survey return is weighted.

Third, the performance of the Australian universities is also inflated.

Despite a relatively poor citation rate and moderate staffing ratios they do exceptionally well in the reputational academic survey and internationalisation indicators, especially that for students. My university, Melbourne, has been ranked by the 3703 academic peers surveyed by the THES at the same level as Yale and ahead of Princeton, Caltech, Chicago, Penn and University of California, Los Angeles. That's very generous to us but I do not believe it.

Friday, December 08, 2006

Comments on the THES Rankings

Professor Thomas W. Simon, a Fulbright scholar, has this to say about the THES rankings on a Malaysian blog.

Legitimate appearance

Quacquarelli Symonds (QS), the corporate engine behind the THES rankings, sells consulting for business education, job fairs, and other services. Its entrepreneurial, award-winning CEO, Nunzio Quacquarelli, noted that Asian universities hardly ever made the grade in previous ranking studies. Curiously, on the last QS rankings, Asian universities improved dramatically. They accounted for nearly 30% of the top 100 slots. Did QS choose its methodology to reflect some desired trends?

QS has managed to persuade academics to accept a business ploy as an academic report. Both The Economist and THES provide a cover by giving the rankings the appearance of legitimacy. Shockingly, many educators have rushed to get tickets and to perform for the magic show, yet it has no more credibility than a fairy tale and as much objectivity as a ranking of the world's most scenic spots.

Major flaws

It would be difficult to list all the flaws, but let us consider a few critical problems. Scientists expose their data to public criticism, whereas QS has not published its raw data or detailed methodologies. The survey cleverly misuses scholarly terms to describe its methodology. THES calls its opinion poll (of over 2,000 academic experts) “peer review”, but an opinion poll of 2,000 self-proclaimed academic experts bears no resemblance to scholars submitting their research to those with expertise in a narrow field. Further, from one year to the next, QS unapologetically changed the weighting (from 50% peer review to 40%) and added new categories (employer surveys, 10%).

He concludes:

Concerned individuals should expose the QS/THES scandal. Some scholars have produced scathing critiques of pieces of the survey; however, they should now launch an all-out attack. Fortunately, this survey is only a few years old. Let us stop it before it grows to maturity and finds a safe niche in an increasingly commercialised global world.