
Asked to list the elite institutions of English higher education, a few names would easily roll off most tongues, all of them members of the Russell Group. One reason for this is their age. Another is their high entry standards. But, especially when it comes to league tables, their pre-eminent status is largely due to the strength and scale of their research.

But how long research will continue to be the primary driver of prestige, in England at least, is open to question. The government is making a concerted push for a sector with – as a previous White Paper put it – students at the heart of the system. Jo Johnson, minister for universities and science, argues that some institutions have traded on their world-class research reputation without giving their £9,000 fee-paying students an educational experience to match. Enter the teaching excellence framework (TEF). This is the government’s attempt to assess higher education providers on their educational standards, to give students better information about what they can expect from their course and to give universities a financial and reputational incentive to pull their teaching socks up.

The big question is how much the TEF will shake up the established hierarchy. How closely will the results resemble the research excellence framework, or the world university rankings? According to exclusive research by Times Higher Education, the answer is likely to be: not very closely at all.

THE’s data team modelled the potential results of the TEF for 120 UK universities using the three supposed core aspects of teaching performance that the government recently confirmed it will measure in the early stages of the TEF: graduate employment, student retention and student satisfaction.

Looking at institutions’ raw scores on those metrics, the traditional hierarchy is still evident, with the Russell Group and some other select pre-92 universities leading the way. But that dominance seems to be based, in many cases, on the types of students that these institutions recruit. When the scores are adjusted for factors such as subject mix, entry qualifications and ethnicity – as they will be in the real TEF – a very different picture emerges.

Far from the “golden triangle” of Oxbridge and London reigning supreme, the top performers form a Midlands triangle: Loughborough, Aston and De Montfort universities. Coventry University is another post-92 institution that features very highly, and also conspicuous in the top 10 are a number of smaller research-intensives, such as the universities of Swansea, Kent, Surrey, Bath and Lancaster.

Phil Baty, THE’s rankings editor, says that the TEF “could send shockwaves not just through the UK but across the world”.

Universities’ absolute and benchmarked scores in mock TEF

“Some household names – major international university brands – previously judged to be world-class based mainly on their proven research excellence, will suddenly be exposed to the world as failing to deliver the expected results for their students – surely their most important stakeholders,” Baty says. “The metrics used may continue to be highly controversial, and will continue to be debated, but they will bear the powerful stamp of government approval, so they will be taken seriously as a reference point around the world.

“Of course, the prestige that comes from a long history of research excellence, membership of the elite Russell Group and selective entry standards will continue to attract students internationally, but there is huge potential for disruption.”

Baty’s point about the controversy over the metrics used is an important one. There are many who would argue that the best university teaching involves making students feel challenged and even uncomfortable; something that cannot always be associated with satisfaction, as measured by the National Student Survey. Likewise, many are concerned that assessing universities on graduate employment, via the Destinations of Leavers from Higher Education survey, ignores so much of the personal development that is central to higher education.

It is also worth noting that THE’s modelling of the TEF metrics cannot take into account the additional institutional evidence that – as set out in last month’s White Paper, Success as a Knowledge Economy – will be submitted by each provider to special TEF panels, and will be used to supplement the core data before a final verdict is reached.

But many of the universities that perform best in our mock exercise will take the results as confirmation of what they have been saying for a long time: that smaller, campus-based, research-active institutions offer the best student experience. Helen Higson, Aston’s deputy vice-chancellor, argues that if the TEF metrics can be got right and put in the appropriate context, it will be positive for the sector to have greater focus on teaching standards.

“I went to a very research-intensive university where I wasn’t taught by anyone who knew how to teach or why they were teaching me. Although it was great to be taught by the top minds, they weren’t necessarily the top teachers,” says Higson. “There is still a lot of public resource going into supporting students, so I think to run any [course] that isn’t excellent – that is run by ‘amateurs’ – is ethically not right, especially as a lot of individuals are now putting a lot of money into their education.”

Examination of the top performers’ results reveals that while they tend to perform above their benchmarks on student satisfaction and retention, it is often on graduate employment where they really excel, coming in well above what might be expected.

The top-performing Russell Group institutions – which also include the universities of Birmingham and Exeter – tend to perform well beyond expectations on graduate employment, but are closer to their benchmarks on the other metrics. Many other members of the mission group struggle to significantly outperform their benchmarks, and some fall well short in some areas.

All this will be useful and fascinating information for academics and university administrators, but the TEF is likely to have the significant impact that the government desires only if students take notice of it when choosing where to study, and if employers do the same when deciding which graduates to recruit. For instance, Russell Group institutions attract a significant proportion of international students, and the fee revenue that comes with them (see 'Market share of non-UK undergraduates' graph, below), so any dent to their reputation could lead to a major reallocation of resources within the sector.

As Baty says, the backing of the government has the potential to give the TEF significant weight, and this is welcomed by Robert Allison, Loughborough’s vice-chancellor. Like many university leaders, he stresses the importance of making sure the TEF metrics truly reflect excellence, but he believes that the TEF has the potential to “challenge orthodoxies” in the sector.

“I’m in no doubt that in the TEF there will be some of the Russell Group right at the top and I won’t be surprised if there are some quite a long way down,” says Allison. “Will the Russell Group implode as a consequence? I doubt it. But will that challenge the message that the Russell Group gives out that ‘we are the country’s elite universities’? I hope it does, because if there is evidence to suggest constructs like that are true, I’m pleased it reinforces it. But if there is evidence that indicates constructs like that aren’t necessarily everything they are made out to be, that would be an interesting consequence.”

An institution’s performance in the TEF will be communicated to students and employers via the rating they will be awarded. They will be judged to “meet expectations”, or to be “excellent” or “outstanding”. The government has already signalled that it expects about 20 per cent of institutions to fall into the first, lowest category, with between 50 and 60 per cent of institutions deemed excellent and the other 20 to 30 per cent outstanding.

Based on THE’s analysis, this means that no Russell Group universities would be judged merely to “meet expectations”. But as few as five of the 24 members – add Newcastle University to the four previously mentioned – could be judged “outstanding”, with the universities of Leeds and Oxford on the borderline if the government opts for the lower cut-off. Cardiff University would be in a similar position if Wales decides to participate in the full TEF. (The devolved nations have not yet decided whether to do so.)

In England at least, the strongest public indication of an institution’s performance is likely to be the tuition fees it is allowed to charge, with excellent and outstanding institutions permitted to make increases in line with inflation for the 2019-20 academic year onwards, and those that meet expectations permitted to raise their fees at only half that rate. However, this may limit the transformative power of the TEF since the worst-performing institutions in relative terms tend to be among those that do not do well on an absolute ranking or other measures of excellence commonly used in the sector.

Other significant brakes on the TEF as an agent for change are likely to be the enduring power of prestige and reputation and the lack of assessment at subject level until the fourth year of the exercise. This lack of disciplinary breakdown may prove to be particularly damaging to the government’s ambitions for the TEF to feed directly into sixth-formers’ choice of institution given that most choose course before institution, and there is likely to be widespread variation in standards of teaching on the various courses across institutions.

Stephen Isherwood, chief executive of the Association of Graduate Recruiters, warns that institutional-level data are likely to be “too generic” to be used in student application or post-university hiring decisions. Ultimately, there is also a question about how important relative TEF scores will be to employers – and, consequently, to applicants – when the selectivity of elite institutions is likely to ensure that their graduates will still have among the strongest outcomes in absolute terms.

Nick Hillman, director of the Higher Education Policy Institute, started to explore ways of assessing providers on their teaching standards while he was a special adviser to David Willetts, the former universities and science minister. He argues that the TEF will “quite quickly become a significant source of information used by applicants”. But even he accepts that it will take time to change employers’ views of universities, and that there will probably “always be a cachet from being really old or from being really research-intensive”.

The Russell Group, for its part, cautions that it would be “wrong to attach any weight” to the THE exercise, since it is based on only one year’s data, rather than the government’s preferred three, and because, according to the mission group, the methodology “doesn’t meet Office for National Statistics standards”.

Market share of non-UK undergraduates

Note: Universities are ordered according to their benchmarked TEF scores, with the highest scorers towards the left. In a small number of cases, data are not shown, either because the data are not collected (Northern Ireland) or because they were withheld by the institution.

Duncan Ross, data and analytics director at THE, responds: “Despite not being a public body (and therefore being outside the scope of the legislation), THE, and this analysis in particular, meets all of the relevant principles of the UK Statistics Authority Code of Practice for Official Statistics. This analysis is clearly in the public interest, as it gives a new and vital insight into the effectiveness of teaching in UK institutions.”

Wendy Piatt, the Russell Group’s director general, acknowledges that there is “always room for improvement” in teaching, but argues that “robust and credible measures of teaching quality will take time to develop through piloting and review. A huge amount of time, effort and resources have been devoted to improving the education and student experience our universities provide. This is reflected in feedback from employers and our students who, year-on-year, express above-average levels of overall satisfaction with the quality of their course.”

Beyond shaking up the traditional hierarchy of institutions, our mock TEF also paints a worrying picture for London. Below-benchmark NSS scores drag down University College London, King’s College London and the London School of Economics: all members of the Russell Group. There are no London institutions in the “outstanding” category, while one in four of those in the “meets expectations” group is from the capital, with retention and graduate employment often proving problematic too.

London institutions currently attract disproportionate numbers of international students, but Hillman says that the TEF could encourage them in particular to look further afield: “International students will have to use data because it is hard for them to travel to London before they apply,” he notes.

If the devolved UK governments do take part in the TEF, the scores could also give some indication of the relative performance of institutions in different parts of the UK, and the impact of different funding systems. While the average score achieved in relative terms across the UK is 55.8, Scottish institutions – which have not benefited from £9,000 tuition fee income from domestic students – average just 44 points. Wales and Northern Ireland, which both face significant cuts in government funding for higher education, average 49.3 and 55.1 respectively.

Whatever happens – and whatever view people take of what the core metrics actually measure – our exercise suggests that the TEF will enable us to see one of the most renowned university systems in the world from a new and potentially enlightening angle.

Methodology for the THE mock teaching excellence framework

Times Higher Education’s modelling of the teaching excellence framework draws on the government’s core measures for the opening stages of the exercise: student satisfaction, student retention and graduate employment.

On satisfaction, THE’s data team uses the results of the 2015 National Student Survey. Specifically, we use question 22, relating to overall satisfaction, benchmarked for subject and students’ age, ethnicity, gender and disability. The government has recently announced that it plans to use the results of specific NSS questions on assessment and feedback, teaching and academic support, but benchmarks for these are not publicly available.

On student retention, we use the Higher Education Statistics Agency’s performance indicator T5 for 2013-14, which measures the proportion of full-time first degree students who are on course to achieve their chosen qualification. This is benchmarked for subject and students’ entry qualifications and age.

The government has recently said that it plans to draw on a different performance indicator, T3, which measures the proportion of students that continue with their studies beyond their first year. However, the two measures are strongly correlated, and it could be argued that T5 is a better measure of teaching quality because it does not reward universities where large numbers of degree students leave with diplomas instead.

To measure employment, THE commissioned a graduate destination marker from Hesa, which mimics what the government is consulting on for the TEF, measuring the proportion of university leavers who go into professional-level jobs, as defined by the Office for National Statistics, or postgraduate-level study. We also commissioned benchmarks for each institution, based on subject mix and students’ entry qualifications, age, ethnicity and gender.

Data for 120 institutions from across the UK are included in the final table. Some small, specialist providers have been excluded, either because they have potentially anomalous results in areas such as graduate employment or because they have limited numbers of full-time undergraduates.

Giving each of the three metrics equal weighting, an absolute score for each institution is generated, as well as a relative score reflecting benchmarking.
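The combination step described above can be sketched in code. This is a minimal illustration rather than THE’s actual pipeline: it follows the method THE’s data team describes in the comments (z-scoring each indicator to a common scale, then combining with equal weights), but the institution figures are invented and the 0-100 rescaling is an assumption.

```python
import statistics

def z_scores(values):
    """Standardise a list of metric values to mean 0, s.d. 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def combined_scores(satisfaction, retention, employment):
    """Equal-weight average of three z-scored metrics,
    rescaled so the top institution scores 100 and the bottom 0."""
    zs = [z_scores(metric) for metric in (satisfaction, retention, employment)]
    raw = [sum(triple) / 3 for triple in zip(*zs)]
    lo, hi = min(raw), max(raw)
    return [100 * (r - lo) / (hi - lo) for r in raw]

# Invented figures for four hypothetical institutions
satisfaction = [86, 90, 84, 88]
retention = [92, 95, 89, 93]
employment = [70, 78, 65, 74]
scores = combined_scores(satisfaction, retention, employment)
print(scores)  # the second institution, strongest on every metric, scores 100
```

The z-scoring step matters because the three metrics sit on different scales; averaging the raw percentages instead would implicitly weight whichever metric happens to vary most.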

The THE exercise is unable to draw on qualitative submissions from institutions, which will be used in the real TEF to determine final ratings. Another caveat is that THE uses only one year of data, rather than the three that are planned for the TEF. However, Nicki Horseman, lead higher education data analyst for THE, doubts that using three years instead would affect the results significantly: “Using three years is aimed at providing sufficient data to break down the student body into smaller cohorts, such as part-time students. Here, we are not attempting that finer-grained analysis,” she says.

Horseman says the process of modelling the TEF highlights significant issues that the government will face when deciding how to rate providers: “Some institutions are strong across the board on all three measures, but it’s much more complicated when an institution is strong on one and weak on another,” she says. “How is that actually going to be assessed?”

Horseman says that another challenge will be how to divide universities into the government’s three categories – “meets expectations”, “excellent” and “outstanding” – when three clear groupings are not evident in the metrics. Instead, universities’ performance appears to range along a continuum, with a few outliers at either end.

“One of the criticisms of rankings and league tables is that we have very small differences between individual places,” says Horseman. “Institutions can get overly concerned about ranks when there is actually little difference in scores, but the TEF is proposing three assessment levels. Everybody wants to be outstanding and I don’t know how there will be evidence of a real difference between them.”

The advantages that large pre-1992 institutions enjoy in research are unlikely to give them a similar head start in the teaching excellence framework, Times Higher Education’s analysis suggests.

Here, we plot the data from our modelling of the core TEF metrics against institutions’ intensity-weighted grade point average in the 2014 research excellence framework, calculated by multiplying the standard GPA by the proportion of eligible staff submitted (“REF 2014 rerun: who are the 'game players'?”, Features, 1 January 2015). Dot sizes correspond to each provider’s student numbers.

Members of the Russell Group cluster at the top end of the REF scale. These institutions also tend to perform well on the TEF measures, but are not among the top tier after benchmarking.

The graph shows that, instead, it is often significantly smaller institutions that perform best on the core TEF measures of retention, student satisfaction and graduate employment. While many of the top performers on these metrics also have strong research profiles, they are not among the top research performers, and some of them had quite low scores for research power: a hybrid measure that mimics the research funding formula by combining quality and volume.

The lower performers in REF terms – most of which are post-1992 institutions – display a greater diversity of scores on the TEF metrics.

Nicki Horseman, lead higher education data analyst for THE, says the results reveal only a weak correlation between REF and TEF performance.

“Some of the institutions which are doing best on the measures we put together are smaller, campus-based universities,” she said. “It may be that they have a stronger connection with their students and more evidence of a community.”

How well do universities perform in getting their graduates into employment?

Getting students into a job is unlikely to be enough to guarantee success for a university in the teaching excellence framework; placing them in what the government calls “graduate-level employment” is likely to be key.

The government is exploring whether it should introduce a metric for the proportion of graduates in highly skilled employment, as defined by the Office for National Statistics: an approach followed by Times Higher Education in its modelling of the possible TEF results.

This graph compares universities’ performance on the proportion of their graduates in all employment six months after graduation with THE’s measure of the proportion of students in professional employment and postgraduate study after six months. After benchmarking for the type of students it recruits and courses it runs, De Montfort University stands out as the exceptional performer on both measures. Other top performers include the universities of Sussex, Hertfordshire and Huddersfield, plus Coventry, Lancaster and Robert Gordon universities.

But, interestingly, there are also a number of providers that are very successful at getting their leavers into work but are not as good at placing them in what are deemed professional jobs. Institutions in this group include the universities of West London, Derby, Wolverhampton and Northampton.

On the flip side, two Russell Group members – the London School of Economics and the University of Glasgow – stand out for being comparatively poor at getting their students into employment within six months. However, when these students do get jobs, they are more likely to get a professional role.

Notes: Graduate destination data are drawn from the Destinations of Leavers from Higher Education Survey 2013-14. Completion data are drawn from Higher Education Statistics Agency performance indicators for 2013-14. Student satisfaction scores are drawn from question 22 of the 2015 National Student Survey, which asks about overall level of satisfaction with course quality. The overall benchmarked TEF score is based on the distance from benchmark in each of the three categories, weighted equally.

That may well be; I find it difficult to tell what was included and how it was weighted. But the key question is: how similar will the HEFCE metrics be to the ones used here? That is, are these results relevant to the real TEF?

I completely agree. Slow news day at THE? And however the actual TEF is calculated, will it really “send shockwaves not just through the UK but across the world”? Come on. The bottom line is that league tables, and the information potential students use to decide which institution to go to, are what count. This information already exists. Whatever the TEF tables look like, and they may well reflect some of the patterns imagined above, their impact, rightly or wrongly, will be much smaller. Also, poor (or what is perceived as poor) teaching exists across all types of institutions.

I agree with all the comments above, but also: why are the results adjusted for subject mix and entry qualifications when we want to assess the impact of the type of student that is recruited? Don't international students matter here? I have recently written an article arguing that there is a fundamental problem with the TEF as a national tool, in that it legitimises the subordination of international students in recent changes to higher education. International students are an important group with an impact on the TEF results, but they are not considered in any of the discussions regarding the TEF.

I would be grateful for further explanation of how these scores were arrived at, because there appear to be some anomalies. The pdf table helpfully gives the breakdown of scores for the 3 components, i.e. destination, student satisfaction, and completion rates. We are told that:
“Giving each of the three metrics equal weighting, an absolute score for each institution is generated”. This implies that, before any benchmarking was done, the column called Overall Absolute Score should be derived from the sum of the three components. I can see it has been scaled to range from 0 to 100, but that should not alter the relative standing of institutions. However, we have several instances where institutions with the same total score have different values on Overall Absolute Score.
For example, if you average the non-benchmarked scores for Graduate Destinations, Completion and Student Satisfaction for Surrey, Sussex, Leeds, Cardiff and UCL they all are the same, with mean score of 87.33. Yet their ranks on the non-benchmarked scale are given as 12, 20, 14, 16 and 29 respectively. Either there is an error in the tables, or the description of the computations is incomplete. I wondered whether there had been some adjustment for the size of the institution. As I noted in a Times Higher blogpost, this kind of adjustment penalises large institutions because their raw scores have a smaller standard error (see: https://www.timeshighereducation.com/blog/nss-and-teaching-excellence-wrong-measure-wrongly-analysed).
I plan also to look at the impact of benchmarking on your results, but suffice it to say, if your tables have been computed using the same kind of benchmarking as I described in my blogpost, they are highly questionable. When evaluating institutions, one can make a case for benchmarking against student intake characteristics when the measure is something like completion rates - although even there, one wonders whether the average student (for whose benefit we are told this whole exercise exists) is more concerned about absolute or benchmarked estimates of outcomes when considering where to apply. But for student satisfaction, benchmarking seems to imply that we must expect that students with lower entry level qualifications will be less satisfied with their courses, and adjust for this. This seems to go against one of the stated goals of the White Paper, which is to encourage institutions to provide courses that are supportive of those from non-traditional backgrounds.

Thanks for your interest. The method applied to generate the score is to take each of the individual indicators (absolute or relative to benchmark), z-score them to bring them to a common scale, and then combine them with equal weights to produce either an absolute or a relative score. The data contain more decimal places than presented in the article, and there is no adjustment for the size of institutions.
The benchmarks are established benchmarks derived by HEFCE/HESA. Entry qualifications are used as a factor in the Completion and Graduate Destination benchmarks but not in Student Satisfaction. This is as proposed in the TEF Year Two technical consultation:
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/523340/bis-16-262-teaching-excellence-framework-techcon.pdf

I've just looked at the data behind that figure purporting to show "only a weak correlation between REF and TEF performance."
Essentially, you can show pretty much anything, depending on how exactly you compute and select your scales. For the TEF and REF indices used in the figure, the correlation between the two is .359. If you take the absolute TEF, without benchmarking, the correlation with the REF is .732. So can we really conclude, as the government would have us believe, that there is a reciprocal relationship between research excellence and teaching excellence?
In your piece it is suggested that: “Some of the institutions which are doing best on the measures we put together are smaller, campus-based universities,” she said. “It may be that they have a stronger connection with their students and more evidence of a community.” Alternatively, it could be that they benefit greatly from use of a benchmarked index, especially if the TEF index is biased in favour of small institutions because of the larger standard error around the estimates of their scores.
If the TEF is going to gain the confidence of institutions, it's going to have to be based on better measures than this.
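The commenter’s point, that the REF-TEF correlation depends heavily on which scales are chosen, can be demonstrated with a short sketch. The numbers below are invented, not the actual REF or TEF data; only the Pearson formula itself is standard.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Invented scores for six hypothetical institutions
ref = [3.2, 3.0, 2.8, 2.5, 2.0, 1.8]        # REF GPA-like scale
tef_absolute = [90, 88, 85, 80, 72, 70]     # tracks REF closely
tef_benchmarked = [55, 70, 48, 75, 60, 68]  # reshuffled by benchmarking

print(round(pearson(ref, tef_absolute), 3))    # close to 1
print(round(pearson(ref, tef_benchmarked), 3)) # weak, here negative
```

The same underlying institutions thus yield either a strong or a weak correlation depending on whether the absolute or the benchmarked TEF index is used, which is exactly the gap between the .732 and .359 figures quoted above.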

Only an OFSTED inspection will check the accuracy of information collected by universities. Keele University, for example, had only 4% of its postgraduates filling in the survey conducted by an independent company. We know this because the independent company sent out an email a week before the survey deadline encouraging more students to complete it. The next thing we saw were large banners everywhere stating that 71% of students thought Keele was outstanding. Were the 4% employed by Keele in some capacity or other, I wonder? Did the other 96% fill in the survey form within a week? What do you think? Discuss.
