Universities were the main focus of TEF-related press coverage, with some surprises emerging on TEF day. While some universities landed exactly where their metrics suggested they would (think LSE), others were mysteriously moved up or down based on the TEF panel's interpretation of their provider statements.

In all, twenty-six universities were moved up from their initial hypothesis, while only three were moved down. Eleven of those moved up were institutions based in London.

Following on from Catherine’s analysis of how colleges fared in front of the TEF panels, and Ant’s initial look at how some written submissions may have helped or hindered universities, I’ve taken another look at what factors might have been at play when the panels decided on what award to give.

Pieces in the puzzle

It is difficult to piece together how much weight the TEF panels placed on student satisfaction metrics. “Excellent outcomes” prevails over poor student satisfaction in many of the panel’s judgements, except where there is insufficient evidence in the provider statement to show (or sufficiently claim) that poor student satisfaction is being tackled. Reading the panel judgements, it appears that the panel was willing to cut many universities some slack for poor NSS scores, provided those scores were deemed to have been addressed in the providers’ submissions.

BPP University is one institution that was not granted this mercy. It was the only HEI entrant where the panel deemed the institution did “not address… areas of concern”; in this case, low satisfaction with assessment and feedback and academic support.

BPP’s written submission acknowledges a problem with its NSS scores, but appears to apportion blame to “high assessment anxiety” and increasing demands for academic and pastoral support from undergraduates and overseas students. BPP appears to have been surprised at such demands from eighteen-year-old undergraduates, and seems to believe it is not geared up to deliver on their needs compared to those of its “elite postgraduates”. Only three paragraphs out of a total of 48 were dedicated to addressing its poor NSS scores; perhaps an author is kicking themselves somewhere for not having fully owned the issue. The panel was evidently not impressed.

Contrast this with the University of Bristol, whose Silver outcome was a move up from its Bronze initial hypothesis, despite below-benchmark satisfaction with assessment and feedback and academic support. Curiously, the panel judgement does not explicitly state that the university has addressed its double-negative flags for student satisfaction. Indeed, it appears that Bristol used its written submission to dispute the validity of NSS data, whilst also citing NSS metrics (not used in TEF) that cast it in a more favourable light. We can only defer to the TEF panel’s expert judgement that this was deemed sufficient to merit a Silver rather than a Bronze.

The London effect?

Universities based in London rested heavily on their provider submissions to mitigate poor NSS scores, and many were moved up from the initial Bronze hypotheses indicated by their base metrics. King’s College London received double negative flags for assessment and feedback and academic support, as did University College London and London South Bank University. All were deemed to have “largely addressed” their below-benchmark performance in their written submissions.

King’s specifically asks for the “weighting of the provider submission [to be] increased”, going on to say it believes that the “weighting and choice of TEF metrics should in future take more account of the particular circumstances of large-scale providers in London and other major cities which may have significant impact on NSS scores for reasons unrelated to teaching quality.” King’s claims that there is a clear connection between the challenges of London and poor student satisfaction, citing tube strikes as an impediment to NSS scores if they are disruptive to examinations.

London South Bank’s submission provides a graph to attempt to demonstrate how the London effect impedes all London universities compared to the rest of the UK. UCL appears to have conceded that it has a problem, although in its Education Strategy it also blames poor expectation management.

Praise should be given to Birkbeck College, which did not claim the effects of London as an excuse for lower satisfaction scores, instead highlighting the nature of its work as a facilitator of part-time study and the challenges that this brings.

Plaudits for part-time

Successful outcomes for part-time students appear to have been looked upon favourably by the panel. Leeds Beckett University has particularly good metrics for its part-time students, and was moved up to a Silver award despite below-benchmark full-time outcomes. The same appears to have happened for the University of Central Lancashire. The TEF panel appears to have valued the success of part-time students – something not to be overlooked in the context of declining part-time student provision.

Clear as mud

It is difficult to identify a consistent pattern in how the TEF panel made its decisions. Some institutions were bumped up after excusing their negative flags on the basis of being in London, while others disputed the validity of the NSS; there is also some amusement to be found in how universities cherry-picked NSS statistics that were not being considered as part of TEF, and used these to bolster their claims to be moved up in their written submissions. Could the panels have been influenced by well-crafted submissions that were as much creative marketing as robust analysis of an institution’s performance?