
Grade inflation may be due to overemphasis on university rankings

Grade inflation has been a well-worn topic of discussion for years. It seems to have permeated all levels of education in the UK, but is particularly evident in undergraduate degrees. Grades have never been higher. Perhaps this is all down to the students never having been brighter, teachers never more effective, and universities never more supportive. This would be lovely if true, but I fear it’s a fairy tale.

I reckon that teaching is possibly slightly better, and I would hope that universities are somewhat more supportive than they have been in the past. But to assume that students are brighter doesn’t seem particularly tenable. The bigger point is that it’s a pretty remote possibility that any combination of these factors could account for the scale of the rise in degree grades between 2010-11 and 2018-19. Over that period, the proportion of students awarded firsts almost doubled, from 16% to 30%, and the proportion awarded either a first or a 2:1 rose from 67% to 79%. The Office for Students (OfS) itself accepts that a significant proportion of this increase cannot be explained by legitimate improvements in standards.


How do we explain this rise, then? Here’s my theory: it’s the fault of university rankings. There is an adage which I think explains a lot of the problems encountered by modern institutions: Goodhart’s law, which states that ‘when a measure becomes a target, it ceases to be a good measure’. In this case, the university rankings are the measure. They are increasingly looked to by prospective students and future employers for an indication of the quality of an institution. The trouble is that universities are paying too much attention to these rankings, turning the measure into a target. They are all learning to play the game.

There is one sure-fire way of having a decent shot at coming out well in this game – inflating grades. Look at the inputs the rankings use for their results: student satisfaction, value added to students, satisfaction with teaching, career after university, continuation rates, employer reputation, and so on. All of these can be bolstered simply by adding a few percentage points to grades. Students will be more satisfied, fewer will drop out, more value will appear to have been added, it will be easier to get a good graduate job, employers will look more favourably on the institution, and so on. The only thing universities need to watch out for is going too far. Once 55% of Imperial graduates are being awarded a first, the game is pretty much up.

The grade inflation problem perhaps has its worst effects at university level for one reason: the inflation is not uniform. It occurs at hugely different rates across institutions. So not only does it become more difficult to compare grades between years, it also becomes more difficult to compare grades within years, between universities.


Consider this example. The percentage of Warwick Economics students achieving a 2:1 or a first is a very admirable 80%. The percentage of Economics students at the nearby Coventry University achieving those grades is exactly the same. Are we meant, therefore, to take the academic quality of the two sets of students to be essentially the same? That seems unlikely. For one thing, the average A-levels of Warwick Economics students were A*AA, compared to CCD for Coventry Economics students. If that’s anything to go by, Coventry’s teaching would have to be pretty exceptional to produce students of the same standard by the end of the course. The trouble is that Warwick is world-renowned for its Economics department, so it is unlikely to be beaten in that respect either.

So, presumably we are not supposed to assume that ‘a first’s a first’, no matter where it came from. The trouble is that many people do assume just that. Anyone who has applied for internships or graduate schemes will be well aware of entry requirements that state only a degree classification, regardless of where it was achieved. Indeed, plenty of employers, in the name of ‘university-blind applications’, will not ask for details of an applicant’s university at any point in the process. Even where such policies are not used, how can we reasonably expect employers to recognise the intricate differences in standards applied between and within universities?

Surely, then, the solution is to have broad parity across standards. That might be achievable with some heavy regulation, but the problem with that approach is that it turns our diverse set of universities into a homogeneous mass, stripping them of the flexibility that is so valuable at this stage of education. I would suggest instead that the place to start is to demand some honesty from universities: don’t award higher grades to play the game; award them when they have been earned.
