Relative Research Performance

I guess it won’t surprise anyone that my post the other day on the merger or lack of merger between the Faculty of Economics and Commerce and MBS led to some controversy. That is always the way when you start comparing performance.

Now I didn’t really want to revisit that but in the comments thread, the current Associate Dean of Research at FEC, Professor Ian King — who is one of Australia’s most highly regarded macroeconomic researchers — took issue with this statement:

Through the merger process we learned that on any objective measure of research output our faculty out performed those of FEC (citations, journal publications and grants received per capita).

This, it should be said, was against a backdrop of some ‘conventional wisdom’ that ‘at MBS you get great teaching but no research.’ So I’ll admit to it being a touchy subject for me given that this is not my perception at all.

In the comments, Ian King said the following:

If you look at the University of Melbourne’s website, and the research statistics contained therein http://www.research.unimelb.edu.au/performance/published/researchperformance you will find that the FEC has systematically out-performed the MBS in terms of refereed journal articles (weighted by number of authors) per staff member. For example, in 2008, on average, each FEC staff member produced approximately 61% more than each MBS one.

This highlights the importance of using the data-led approach to analysis, favored by the FEC, rather than the alternative approach apparently favored at the MBS. Students seeking to learn analytical techniques that peer beyond salesmanship should choose the FEC for their graduate studies.

He should know: he is in charge of the research side of FEC. Now I didn’t quote any data in my original post (it wasn’t that kind of post), but to be accused of not using a data-led approach was a surprise to me. And the suggestion that students should go to FEC rather than MBS for graduate studies was a stretch.

Anyhow, I went to the data and pulled all of the relevant bits from the University’s web-site. I have put it all in a convenient table from 2005 to 2008, here. Now FEC’s preferred measure of performance is to look at refereed publications only and to use the ‘weighted by author’ measure. Under that measure, if I were to co-author with, say, an international academic, the paper would count for only half of a single-authored piece. You can see that on that measure FEC outperforms MBS. So Ian King is right: my statement that MBS outperforms FEC on ‘any’ objective measure was wrong. I should have written that on virtually all objective measures of research output our faculty outperformed FEC (citations, journal publications and grants received per capita). Mea culpa.

But that raises the question of what the right measure is. It is hard to tell. The University, when comparing faculties, places weight on DIISR publications. And it is true that MBS encourages co-authorship, and we do not believe that a co-authored paper is worth half of a single-authored one. This might reflect a difference in philosophy, but I can’t, as an economist, draw that conclusion just because FEC chooses to highlight author-weighted refereed publications as its preferred measure.

Anyhow, the data is there for all to see, and I was happy to spend time compiling it because I think it is very important. Students are welcome to look at it or, even better, look at the research itself when choosing institutions. (They can also look at The Economist’s ‘Which MBA?’ ranking, but I’ll admit that is just one ranking.) And they can see how academic discourse emerges in public arenas as information, too.

[Update: Put some data in front of academics and they can’t help but fret over it (or I guess be led by it!). An FEC colleague griped that the data didn’t adjust for quality. An MBS colleague worried about the same thing and grabbed the data on papers and their classification using the Australian Business Deans Council rankings. As I understand it these classifications are contentious and the government is coming up with a new one. In their absence, the spreadsheet now includes the quality breakdown for refereed journal articles for 2008 (the year MBS supposedly performed worse than FEC on that measure). (I am happy for anyone to provide data for other years.) Anyhow, you can see that MBS had a much greater share of A* and A articles than FEC in that year (57% to 36%), so much so that on a per faculty basis it outperformed FEC on A* publications.

Also, by way of update, Ian King has told me his comments were all ‘tongue in cheek.’ I can see that perspective (and am happy that that is the case) but the data can of worms has now been opened.]


5 thoughts on “Relative Research Performance”

Looking at those numbers I can see why the Associate Dean was annoyed. Yet the MBS still looks very good. Bearing in mind that a business school academic is expected to have less of an ivory-tower perspective and to contribute to industry and so on, I suspect the average productivity (broadly defined) at the MBS is still greater than that on the main campus.

Thank you for posting the numbers in such a clear format. The picture that emerges is quite telling.
When looking at the figures for “Research Output per FTE”, if you look at the last two columns, you can see that the FEC beat the MBS in publishing refereed journal articles in all years except 2005, and the gap is widening over time (from 3% in 2006 to 11% in 2008).
This set of figures also shows that, over that period, the MBS faculty have been producing relatively more publications in non-refereed outlets. (This comes from the previous 4 columns.)

When looking at the figures “Weighted by Author Number”, again, at the last 2 columns, which are restricted to refereed journals, you see a similar picture, but the effect is more pronounced, with the FEC’s lead growing from 18% in 2006 to 61% in 2008.

Moreover, you can clearly see the double-counting problem, by comparing these two sets of figures. For example, in 2008, the MBS produced 0.98 refereed journal articles per FTE, but when this is corrected for the double-counting problem, we can see that the actual number of papers per FTE comes down to 0.46. Thus, the raw, uncorrected figure pretty much literally double-counts: by a factor of 2.13. The comparable distortion for the FEC is somewhat smaller: 1.47.
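The distortion factor described here is just the ratio of the raw articles-per-FTE figure to the author-weighted one. A minimal sketch of that arithmetic, using only the 2008 MBS figures quoted above (the function name is mine; the FEC’s underlying raw and weighted figures are not quoted in the thread, so only the MBS ratio is reproduced):

```python
# Ratio of raw refereed-articles-per-FTE to the author-weighted figure.
# A ratio of exactly 2 would mean every paper had, on average, two
# in-house co-authors.
def distortion_factor(raw_per_fte: float, weighted_per_fte: float) -> float:
    return raw_per_fte / weighted_per_fte

# MBS, 2008: figures quoted in the comment above.
mbs_2008 = distortion_factor(0.98, 0.46)
print(round(mbs_2008, 2))  # 2.13
```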

Now, keeping in mind that the double-counting problem is more pronounced in the MBS than in the FEC, let’s turn to the last set of figures, which break down the shares of the articles according to the ABDC classifications. (Let me tug on my beard for a moment.) Okay. It’s not clear exactly how this distortion affects those figures — although one could argue that the articles in the top journals are more likely to have multiple co-authors than those in lower ranked journals. However, I’m willing to give ground here.

My main point is this: the breakdown of the merger between the FEC and the MBS had nothing to do with the relative qualities of the research undertaken at these institutions. Quite seriously, as one of the anonymous commenters mentioned yesterday, there’s not much difference between the two.

The breakdown of the merger occurred due to the fact that the people negotiating on behalf of the MBS did not involve their donor members in the negotiations until a very late stage in the game. When the donor members were finally informed of what was going on, they rebelled against the deal that had ostensibly been made on their behalf by the MBS negotiating team. Why this negotiating team chose to keep the donor members in the dark up to that point will probably remain one of the great mysteries of the universe.

Rationalizing the breakdown by claiming that it had something to do with the relative research outputs of the FEC and MBS is, frankly, not very helpful.

Ian, I missed where I suggested that the breakdown in negotiations had anything to do with research performance. I can’t imagine that it did. I am not sure where you get that impression. When I said it was something revealed through the merger process, I meant it came out of due diligence, nothing more.
But I will say this: I find it unlikely that, even if there was some mismanagement with respect to donors, a mere tactical error led to a year-long, highly costly process being derailed. I suspect there were some deeper issues at work. That said, I was not in the inner circle on merger issues, so I can only speculate on that.
What is true is this: is there really anything we could have achieved with integration that we cannot achieve now without it? As economists, you and I know that ownership rarely matters if you have good relations.
PS. Do you know how a co-authored paper between someone at MBS and someone at FEC (and I can think of several) would be counted? Also, the ANU uses the square root of the number of authors as a weight. Let’s face it, that makes more sense overall.

Hi Joshua,
I should be clear that I wasn’t in the inner circle either, I’m only going on what has been made public through the media.
I agree that ample opportunities for synergies and collaboration between the FEC and MBS have existed for a long time, and will continue to exist. (We are, after all, the #1 and #2 institutions of our kind in the country! ;)

On the co-author weighting scheme, though, I disagree. Fundamentally, one paper is one paper. If, for example, 4 co-authors in an institution put out one paper together, this is still only one paper. According to the scheme you are proposing (as I understand it), this would count as 2 papers! That would be a free lunch that, as an economist, I’m sure you couldn’t possibly countenance.
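The disagreement comes down to how much credit a department gets for one paper with n in-house co-authors. A minimal sketch of the three possibilities floating around this thread — an equal per-author split, the ANU-style square-root weight, and raw counting — applied to Ian’s example of one paper with 4 co-authors at the same institution (the scheme names and the function are mine, and this assumes all co-authors are in-house):

```python
import math

def department_credit(in_house_authors: int, scheme: str) -> float:
    """Credit a department receives for ONE paper whose co-authors
    are all in-house, under three possible weighting schemes."""
    n = in_house_authors
    if scheme == "per-author split":   # each author carries 1/n of the paper
        return n * (1 / n)             # = 1: one paper is one paper
    if scheme == "square root":        # each author carries 1/sqrt(n)
        return n / math.sqrt(n)
    if scheme == "raw count":          # the paper counts once per author
        return float(n)
    raise ValueError(f"unknown scheme: {scheme}")

# One paper, 4 co-authors, all at the same institution:
for s in ("per-author split", "square root", "raw count"):
    print(s, department_credit(4, s))
# per-author split 1.0
# square root 2.0   <- Ian's point: the single paper counts as 2
# raw count 4.0
```

Under the square-root scheme the in-house 4-author paper does indeed count as 2, which is exactly the “free lunch” objection above; the per-author split is the only one of the three that pins one paper at one unit of credit regardless of team size.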

I agree with Ian that IF WE ARE TRYING TO RANK DEPARTMENTS then a paper with two MBS authors should not count as 2. There are rather few of these papers, though, I think. Papers with an MBS author and a non-MBS author should surely count for 1 (not 1/2). Ultimately, it is pretty hard to argue against counting the number of PAPERS per department.