It wasn’t always like this. A few years ago, it was perfectly easy to discuss higher education policy without recourse to spreadsheets and visualisations. Rankings and ratings were simply devices to sell newspaper advertising and could be dismissed as jejune and largely irrelevant.

The tone of academic life

Sure, there was evidence. It was carefully and painfully collected, analysed by statistical experts and used to understand where policy needed to happen – not to set policy itself. But quietly, insidiously, the data became the reason, and the policymakers could only follow. Leadership and strategy became synonyms for trend extrapolation and shape-matching. An inhuman intelligence had taken over the HE sector and no one knew for sure what its goals were.

The very tone of academic life altered. Time-worn practices and concepts were discarded so universities could fit into the new reality. Systems and processes snaked into every office, lab, and seminar room – hungry for more and more data. An algorithm told us who would have a job and who would not. An Excel formula dictated who passed and who failed.

Then these alien lifeforms began to link together, viewing our endeavours from every possible angle. Nothing went unmeasured, and the forms and returns that suddenly seemed to have inveigled themselves into every aspect of the university took on even more sinister overtones. Meanwhile, the Elder Gods of Governance were stirring, animated by the sheer force of prediction…

Don’t have nightmares

Well, yes, some people do see data in the sector in ways like that. But real wonks know that data and metrics are tools – just like committees and strategies. They have strengths and weaknesses, and have been used both to the benefit and the detriment of sound policy making – which does remain a purely human preserve.

On the site, we’ve examined all kinds of data and metrics over the last year.

Each has offered us a limited but important view of the way the sector is, and the way it appears to be changing. We’ve been consistent in calling for caveats and data hygiene, and we’ve been as suspicious of overstated claims as we have been of “policy by anecdote” – and the last year has seen plenty of both.

Dare you miss Wonkfest?

At Wonkfest, we expect metrics and data will play a part in our discussions, as they should in any serious examination of policymaking. The session on REF will feature insight from Jisc’s Neil Jacobs and Interfolio’s Steve Goldenberg on the way technology and data are changing research, with contributions from Jennifer Stergiou from Northumbria University on how this plays out at an institutional level. Catriona Firth from Research England will help us understand the way the REF itself is changing, and the way the findings will be used in the future.

LEO has been one of the most controversial new measures to enter HE over the past few years, and few people understand the dataset like Anna Vignoles. Along with Celia Hunt from HEFCW and Ian Campbell from Hertfordshire, we’ll discuss what it can tell us about the future we are preparing graduates for – and what it can’t.

Learning and teaching might feel like the hardest university mission to measure, but it has been one of the most measured in recent times. A key figure in the implementation of both the TEF and the National Student Survey, Graeme Rosenburg from the OfS joins an all-star panel of educational developers – Robin Middlehurst, James Wisdom of SEDA, Debbie Holley from Bournemouth, and Jisc’s Sarah Knight – to discuss the impact of these measures on what happens in the lecture theatre, seminar room, lab, and workshop.

Measures can be useful when used maturely and safely, but without support for the work needed to address their findings, nothing will ever change.