Farewell TEF, hello TEaSOF: Year 3 digested

If you want the TL;DR version of the latest slew of TEF documents, it’s that TEF3 will be a lot like TEF2 in shape and outcomes, but with some tweaked metrics and the exciting new additions of supplementary metrics and a grade inflation measure. For those of you looking for more, read on for an update on what’s changed.

The amendments to the exercise which Jo Johnson announced at the beginning of September have been spun out into two hundred pages of policy documents. Inevitably, there are still plenty of holes in the policy, including the statement that “the results are generally perceived as credible and reflecting teaching excellence across the sector.” Let’s recall that the exercise does not measure teaching, and move on from there.

It’s clear that the Department for Education also realises that the name needed changing, and the exercise will now be called the ‘Teaching Excellence and Student Outcomes Framework.’ Disappointingly, the department still wants to use TEF as the acronym (rather than TEaSOF, or “tease-off”). It’s yet another missed opportunity to improve the exercise.

Here beginneth the lesson

From the lessons learned document [pdf], of which we had a taster in September, we have the rationale for halving the impact of NSS and changing the way in which the initial hypothesis of awards is made:

A provider with positive flags (either + or ++) in core metrics that have a total value of 2.5 (after accounting for the weighting set out in 7.10 [The three core metrics based on the NSS have a weight of 0.5. The other three core metrics have a weight of 1.0.]) or more and no negative flags (either – or – – ) should be considered initially as Gold.

A provider with negative flags in core metrics that have a total value of 1.5 or more should be considered initially as Bronze, regardless of the number of positive flags.

All other providers, including those with no flags at all, should be considered initially as Silver.
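The rules above amount to a simple weighted tally. As a rough sketch, here is how the initial hypothesis might be computed; the metric names and function shape are illustrative assumptions, and only the weights (0.5 for the three NSS-based core metrics, 1.0 for the rest) and the 2.5/1.5 thresholds come from the specification:

```python
# Hypothetical sketch of the initial-hypothesis rules quoted above.
# Metric names are illustrative; weights and thresholds are from the spec.

NSS_METRICS = {"teaching", "assessment", "support"}  # weight 0.5 each

def initial_hypothesis(flags):
    """flags maps metric name -> one of '++', '+', '-', '--'."""
    def weight(metric):
        return 0.5 if metric in NSS_METRICS else 1.0

    positive = sum(weight(m) for m, f in flags.items() if f in ("+", "++"))
    negative = sum(weight(m) for m, f in flags.items() if f in ("-", "--"))

    if negative >= 1.5:
        return "Bronze"  # regardless of the number of positive flags
    if positive >= 2.5 and negative == 0:
        return "Gold"
    return "Silver"      # includes providers with no flags at all
```

Note that the Bronze rule dominates: 1.5 or more of weighted negative flags sends a provider to Bronze even with a strong positive tally, while Gold requires a clean sheet of negatives.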

There’s also an explicit reference to NUS’s boycott of NSS:

“If a provider does not have reportable metrics for the 2017 National Student Survey and there is evidence of a boycott of the NSS by students at that provider, the provider shall be treated as if it had reportable metrics for that year for the purposes of eligibility and award duration.”

These measures, while retaining NSS within the exercise, diminish its role and the position of the student voice, as Gwen van der Velden has argued.

One thing that isn’t changing is the names of the award categories. This knotty question is one of many that have been left for the more formal review of TEF which is mandated in the HE and Research Act. The Independent Review is due in 2018-19, the results of which will inform the exercise from 2019-20. While some thought was given to changing the names, there was this gem on the steps being taken to explain the meaningless terms to baffled audiences worldwide:

“During TEF year Two we recognised that explaining TEF to an international audience would be a challenge, specifically to communicate the subtle message that TEF bronze shows teaching excellence – and builds upon very high national quality assurance thresholds throughout the UK. We have worked with stakeholders to try and mitigate this risk – e.g. through developing an international script – and will continue to do.”

A subtle message indeed.

The specifics

Within the specification [pdf] of TEF3, we have plenty of detail about the changes made to the exercise. Highlights include the inclusion of LEO (graduates’ salary data) and the new grade inflation metric as trailed last month. Other changes include allowing providers with a majority of part-time provision to submit additional evidence to make their case, giving the Director of Fair Access and Participation (as the role has been renamed) a role in eliminating ‘game playing’, and the power of referral to reevaluate a provider against the threshold quality level.

TEF assessors will know whether a provider is in the top or bottom decile of the metrics on absolute, rather than benchmarked, scores: “Where a metric is flagged, the flag will form the basis of determining the initial hypothesis. However, where a metric is not flagged, a high or low absolute value will be treated as, respectively, a positive or negative flag in that metric.” This is perhaps the most regressive of the changes to TEF’s design, moving it further from the ‘added value’ approach of benchmarked data towards something more akin to a traditional ranking system.

A new grade inflation measure will attempt to measure the “rigour and stretch” of a provider’s provision with institutions providing data on degree classifications over the years. They’ll have to make a case for why their numbers have changed to convince the assessors that any increase in firsts or upper seconds isn’t a result of rampant lowering of standards.

Looking back

A true document for wonks, the analysis of final awards [pdf] is an in-depth analysis of how various external factors did – or did not – influence the awards, in order to learn lessons for TEF3. The major headline is that there was no statistically significant evidence that provider type, tariff level or student characteristics (ethnicity, gender or disability) are associated with particular award levels. Considering the role that benchmarking has played in TEF, this is good news for the architects of the exercise. When the initial year two results came out, Wonkhe undertook some analysis of the ‘London effect’, a topic which has been considered very carefully by DfE. Although a small difference in award allocations was found in the London/South East area, it wasn’t statistically significant. While there won’t be benchmarking by region in TEF3, panel members will have more information about institutions’ recruitment of local students.

T(EF)rivia

Just for the hard-core wonks, there’s a note in the lessons learned document which shows that it wasn’t updated following the announcements at Conservative Party conference, as it refers to the repayment threshold of student loans at £21,000.

Despite some good advice on Wonkhe, there’s a repeat of an error in the description of providers, in which the quality of their learning resources is described without any metrics relating to it.

What’s next?

HEFCE will publish its guidance for TEF applicants this month and a Benchmarking Review in November. HEFCE’s TEF baton will pass to OfS in April. Then we’ll all wait with bated breath for the results of TEF3 next year and the Independent Review.

4 responses to “Farewell TEF, hello TEaSOF: Year 3 digested”

We can always count on WonkHE for a rapid and thorough analysis of our documents! Just wanted to pick up a couple of small factual points, for the benefit of your readers.

1. Today’s WonkHE daily suggested that the supplementary LEO metrics are not benchmarked. In fact, both are benchmarked: the list of factors is set out on page 39 of the Specification and, for the threshold metric, includes sex, ethnicity, social disadvantage, subject, entry qualification, disability and level.

2. The continued incorrect assumption in ‘T(EF)rivia’ that the TEF is only based on metrics. As the Year Two results clearly demonstrated, assessments are made against the ten TEF criteria, based on evidence from the metrics AND the provider submission. The existence of criterion LE1 (Resources) explains why the level descriptors refer to the quality of learning resources; the fact that evidence against this criterion will come from the submission rather than a metric does not make it less important.

Always enjoy WonkHE’s articles on the TEF and will look forward to seeing what comes next.

On point one, we drew on paragraphs 191 and 194 of the lessons learned document – HEFCE have been investigating the use of typical benchmarking approaches for LEO, but it wasn’t clear from the text that these had been consulted on and approved (indeed, there is a benchmarking consultation due in November, no?).

On point two, thank you for the clarification! We would never stoop to claiming the TEF is based entirely on metrics – indeed the effect of the panel has been fascinating to model! Our comments merely relate to the perception issues – other criteria have attached metrics, “resources” does not. As a fan of the use of open educational resources this is a topic that particularly interests me.

I’ve just read the guidance… Even a cursory glance at the new metric split, and indeed the measures put in place for institutions with no reportable NSS data, would suggest that the student voice has been diminished yet further. This, in an exercise that is supposed to help students make informed decisions about their institution choices… By halving that student voice, HEFCE has distanced the TEF even further from its original intentions.

The only metric with the student experience at the heart of it (and yes, I concede it is a metric of student satisfaction NOT teaching quality) is the NSS. A poor metric, but the only one we have that consults students themselves. By lessening the impact of NSS, institutions in London/red bricks will rejoice (the cynic in me can’t help but think this is a deliberate move by HEFCE/OfS to re-establish the old elite). But for everyone else, this data exercise undermines the work around student engagement and experience that has enabled institutions beyond the elite to be competitive and distinctive.

You skip over the particular highlight that as “only” a quarter of international students think a bronze award means teaching quality is “unsatisfactory”, that’s alright then, nothing to see here, move along (para 249).