In light of a recent thread, I'd like to propose that we scrap degree classifications in favour of rankings.

Someone who has just missed out on a 2.1, ending up with a very high 2.2 from Cambridge, will have worked much harder than someone who just about scraped a 2.1 from London Met.

However, the Cambridge student would be filtered out by employers due to this absurd obsession with "2.1 or above".

It's unfair to filter by 2.1 degrees when some universities so blatantly abuse the system via grade inflation.

Degree classification abuse

Look at the LSE. They awarded first class honours degrees to half of their BSc Economics students - that's 50%.

Cambridge, on the other hand, awarded just 29% of their Economists a first, and UCL a mere 17%.

You have to question the system:
LSE economics students are not better than Cambridge students - on average they perform worse at A-level (Unistats). Given that Cambridge takes on 90% of its offer holders, while LSE loses many of its offer holders to Oxbridge, it's clear that Cambridge first-years are on average of a higher calibre on paper (and if you don't want to go by paper scores alone, Cambridge interviews its applicants; LSE doesn't).

Even if you were to propose that LSE's students were stronger than Cambridge's, and that LSE's teaching was superior to Cambridge's (which I wouldn't believe for a second), there's no way that they are almost *twice* as good (29% vs. 50%).

Instead, I propose a simple ranking - you can't tweak a ranking. Making an exam easier will mean that more of your students get a 2.1, but it won't mean that more of your students are ranked in the top 40%, as that proportion is fixed.

LSE could not, however hard they tried, have 100 students ranked in the top 10.
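To illustrate the fixed-proportion point (a rough sketch with made-up marks, and assuming the "easier exam" lifts everyone's marks without reordering anyone):

```python
# Made-up marks for a tiny cohort - purely illustrative.
marks = {"Alice": 68, "Bob": 62, "Carol": 71, "Dan": 59, "Eve": 64}

def classified_21_or_above(scores, boundary=60):
    """Students at or above a 2.1 boundary (assumed 60% here)."""
    return {name for name, mark in scores.items() if mark >= boundary}

def ranking(scores):
    """Names ordered from highest mark to lowest."""
    return sorted(scores, key=scores.get, reverse=True)

# An "easier exam" modelled as every mark rising by 5.
inflated = {name: mark + 5 for name, mark in marks.items()}

# The 2.1 club grows under inflation...
print(sorted(classified_21_or_above(marks)))     # Dan misses out
print(sorted(classified_21_or_above(inflated)))  # Dan now makes it
# ...but the ranking is identical.
print(ranking(marks) == ranking(inflated))       # True
```

The boundary-based classification moves with the marks; the rank order cannot.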

This makes comparison a whole lot easier. Compare the current format:

Economics - University of Cambridge - Upper Second Class
Economics - London School of Economics - First Class

with a ranked format (illustrative numbers):

Economics - University of Cambridge - Ranked 50th of 160
Economics - London School of Economics - Ranked 95th of 200

The ranking would be out of the students in your year on the same degree course, so you could add "in the year for BSc Economics" for clarity.

Which of the two is clearer, given that it's the *same* two students in each case?

Example of employer bias

Only last year I spoke to an employer at a Magic Circle law firm about degree classifications, and he bluntly replied that a 2.1 from Liverpool would be superior to a 2.2 from Cambridge, and that the Cambridge 2.2 would be all but ruled out.

Now, I think it's ridiculous that a 1% difference in exams at completely different universities (the Liverpool grad could have just scraped the 2.1, and the Cambridge grad just missed it) can have such an enormous influence on job prospects.

The change would eliminate grade inflation/deflation, and allow for more flexibility with employers rather than tarring 40% of the year with the same '1st' brush.

TL;DR: Instead of "BSc Economics First Class Honours, LSE, 2005",
have "BSc Economics, LSE, 2005, 30th highest score out of 200 students graduating on that course"

What is the purpose of this ... is it to inform employers ... I only ask as I imagine most would believe that a 2.2 from Cambridge would probably be a better bet than a 2.1 from LMU.

Are you suggesting that every student studying a specific course in the UK each year be ranked?

Perhaps the LSE teaching of economics is better than that at Cambridge and, therefore, that the knowledge of students at the end of the degree is better there?

I am suggesting that yes, each year group for each subject is ranked. I'll add it to the OP.

The purpose is for employers, yes, as well as academic interviewers.

I highly doubt that LSE teaching is better than Cambridge's! The attention which Oxbridge students receive from their teachers is enormous, whereas LSE students frequently complain about a lack of contact with their lecturers, and a complete lack of care from them about their undergraduates.

Perhaps LMU is an extreme example, but only last year I spoke to an employer at a Magic Circle law firm (if you don't know the Magic Circle, they are the most prestigious firms in the UK, with an incredibly competitive hiring process) about the issue of the 2.1, and he bluntly replied that a 2.1 from Liverpool would be superior to a 2.2 from Cambridge, and that the Cambridge 2.2 would be ruled out.

Now, I think it's ridiculous that a 1% difference in exams at completely different universities (the Liverpool grad could have just scraped the 2.1, and the Cambridge grad just missed it) can have such an enormous influence on job prospects.

This could have odd effects in the essay subjects where (imo) the marker reads your assignment, basically decides what class it belongs in, then gives it a % in accordance with that.
I think it could cause precipices and increase stress and the rate of complaints about marking. Currently you're unconditionally happy to get 71% on an essay; in your scheme you're not so happy if 10 other people got 72%. Maybe you put a comma in the reference list where it should have had a full stop.

(Original post by Upper Echelons)
I am suggesting that yes, each year group for each subject is ranked. I'll add it to the OP.

Seems problematic ... especially when you consider the vast range of subject titles

I highly doubt that LSE teaching is better than Cambridge's! The attention which Oxbridge students receive from their teachers is enormous, whereas LSE students frequently complain about a lack of contact with their lecturers, and a complete lack of care from them about their undergraduates.

For economics specifically, LSE scores higher than Cambridge on student satisfaction according to the Complete University Guide

Graduate Prospects, on the other hand, are ranked the other way ... suggesting that employers may well be able to see past the issues you suggest exist

hmmm 80.9% at LSE get a Good Degree compared with 87.4% at Cambridge ... I cannot see the subject specifics for these at the moment

Perhaps LMU is an extreme example, but only last year I spoke to an employer at a Magic Circle law firm ... about the issue of the 2.1, and he bluntly replied that a 2.1 from Liverpool would be superior to a 2.2 from Cambridge, that the Cambridge 2.2 would be ruled out.

Well I do not know about the relative teaching of law at those two institutions so I could not comment on the relative ranking that you might consider appropriate for them

However, employers will be looking, to some extent, at how a candidate has performed against the expectations he was faced with ... and the 2.2 candidate will have performed less well ... albeit by a small margin

I imagine that the employers in question look in greater detail and, since they are so competitive, it is unlikely that the "just scraped at Liverpool" candidate would get much further in the process. Indeed my understanding of MC processes (based only on previous information read on this site) would be ... eliminate below a 2.1 ... quickly followed by an institutional ranking anyway

(Original post by TenOfThem)
Well I do not know about the relative teaching of law at those two institutions so I could not comment on the relative ranking that you might consider appropriate for them

However, employers will be looking, to some extent, at how a candidate has performed against the expectations he was faced with ... and the 2.2 candidate will have performed less well ... albeit by a small margin

I imagine that the employers in question look in greater detail and, since they are so competitive, it is unlikely that the "just scraped at Liverpool" candidate would get much further in the process. Indeed my understanding of MC processes (based only on previous information read on this site) would be ... eliminate below a 2.1 ... quickly followed by an institutional ranking anyway

Practically, it's not problematic at all. If a university can work out your degree classification they can certainly rank their students.

If you mean in terms of comparison, then you'd have the same problem with degree classifications anyway!
It's more difficult to compare:
BA Economic Studies - First
BSc Economic Sciences with Studies of Economics - 2.1
than it would be if you included rankings - or do you not agree?

It's true, but the fact that the 2.2 would be instantly eliminated is unfair.

It becomes even more unfair at the higher levels, where a 1st from LSE could be seen as more prestigious than a 2.1 from Cambridge, even though, as I showed with the statistics, that would not necessarily be the case at all.

Even if you don't seem to see the flaws with degree classification (many do see the flaws - the reason I made the thread was that someone complained about it just now in another thread), I can't see how the rankings wouldn't be an improvement.

(Original post by Joinedup)
This could have odd effects in the essay subjects where (imo) the marker reads your assignment, basically decides what class it belongs in, then gives it a % in accordance with that.
I think it could cause precipices and increase stress and the rate of complaints about marking. Currently you're unconditionally happy to get 71% on an essay; in your scheme you're not so happy if 10 other people got 72%. Maybe you put a comma in the reference list where it should have had a full stop.

There should be more complaints about marking - the whole idea is that grading is a huge issue, especially grade inflation and the difficulty of comparing graduates.

Perhaps we could go with deciles or percentages?
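To make the decile/percentage idea concrete, here's a rough sketch (purely illustrative - the function names and the handling of position are my own assumptions, not anything formally proposed):

```python
def percentile(position, cohort_size):
    """Percentage of the cohort at or below this student,
    where position 1 is the top mark in the year."""
    return round(100 * (cohort_size - position + 1) / cohort_size)

def decile(position, cohort_size):
    """Decile band from 1 (top 10%) to 10 (bottom 10%)."""
    return -(-10 * position // cohort_size)  # ceiling division

# The TL;DR example: 30th highest score out of 200 students.
print(percentile(30, 200), decile(30, 200))
```

A decile would blunt the "one place apart" stress somewhat, at the cost of reintroducing small cliff edges at each band boundary.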

I definitely understand your point, though, about how it would no longer be the case that you could simply do a set task and get a classification. In current exams you can get a certain number of answers right and get a first, regardless of competition; a ranking is skewed in that it depends on your year group's performance.

It's a very difficult task, comparing university graduates, when they each have individual exams and student profiles.

(Original post by Upper Echelons)
Even if you don't seem to see the flaws with degree classification (many do see the flaws - the reason I made the thread was that someone complained about it just now in another thread), I can't see how the rankings wouldn't be an improvement.

This system will also have its problems because it is difficult to compare universities. For example, who is better: someone in the 90th percentile at LSE, or someone in the 75th percentile at Cambridge?

First you have to rank the institutions ... impossible as can be seen by the different league tables that are available

Whilst you could rank students within an institution this ranking would not use consistent markers ... a student at an institute that assesses through coursework may be ranked very differently if their institute assessed through exams

Then, as I say, how do you rank courses that have different titles/different content/different proportions of modules from out of subject/etc/etc/etc

Are the rankings national or simply for the university? If they're only for the latter then you still have problems concerning the difficulty of the assessment at different universities. If they had standardised tests then problem solved?

I like the idea of rankings; they offer much detail as to a student's potential. In France I believe they rank people nationally at the baccalaureate and at some universities within specific subjects.

The issue with this is the inevitable variance between years. For example I could be doing x degree at y uni and come, say, 10th, when the very same performance could have put me first in the year above/below me. I then apply to jobs alongside people ranked 1st in the same degree from the same uni when in actual fact I am more qualified than them.

(Original post by TenOfThem)
First you have to rank the institutions ... impossible as can be seen by the different league tables that are available

Whilst you could rank students within an institution this ranking would not use consistent markers ... a student at an institute that assesses through coursework may be ranked very differently if their institute assessed through exams

Then, as I say, how do you rank courses that have different titles/different content/different proportions of modules from out of subject/etc/etc/etc

You don't have to rank the institutions - I just said internal rankings of *students*! That is all that matters in this case: knowing how well someone did within their university. Formal comparison across universities is impossible - it comes down entirely to subjective prestige - so we should keep the formal classifications objective.

Are the rankings national or simply for the university? If they're only for the latter then you still have problems concerning the difficulty of the assessment at different universities. If they had standardised tests then problem solved?

I like the idea of rankings; they offer much detail as to a student's potential. In France I believe they rank people nationally at the baccalaureate and at some universities within specific subjects.

The Bacc is a standardised exam, which for obvious reasons can't continue to university level.

The rankings are just within the university: "Steve graduated #3 out of 200 Economists at UCL this year"

(Original post by charliemac41)
The issue with this is the inevitable variance between years. For example I could be doing x degree at y uni and come, say, 10th, when the very same performance could have put me first in the year above/below me. I then apply to jobs alongside people ranked 1st in the same degree from the same uni when in actual fact I am more qualified than them.

Indeed, this is the main issue. I suppose it would be easier for employers to recognise, though, that when it comes down to someone who came #9 one year vs. someone else who came #3 another year, the difference is negligible.

Intelligent employers would allow a few places to compensate for variance, especially at the top end.

Are you suggesting that every student studying a specific course in the UK each year be ranked?

(Original post by Upper Echelons)
I am suggesting that yes, each year group for each subject is ranked.

(Original post by Upper Echelons)
You don't have to rank the institution, I just said internal rankings of *students*!

See the lack of clarity
I asked a question
You said YES
You meant NO

So you simply want people to be given a class position rather than a classification

I do not see how that helps an employer at all ... I assume that, year on year, an institution tries to ensure that the same or at least similar requirements have to be met for a student to achieve a specific classification, whereas the quality or performance of students may not be consistent ... hence a ranking tells them nothing

And how does it change the arbitrary cut-off issue ... does the MC say "we will only look at people who are in the top 10% of their ranking"? Someone who is 21 out of 200 at Cambridge will still be rejected whilst 50 out of 500 from Liverpool will not (numbers made up to be specifically provocative)

Couldn't you just give out a percentage mark instead of a ranking, then? You don't have to make a classification - just keep the numbers.
In the end I feel employers will still just make up an arbitrary cut-off point though :/