The UK's exam results farce only deepens the inequality between private and state pupils

Photograph: Reuters

In July, I thought the major risk of this extraordinary exam results season would be the unconscious bias of teachers when they made their predictions of student grades. The effects might be quite subtle, and universities would have to be correspondingly nuanced in their response. I have to say, it never occurred to me that teachers would be broadly disregarded, and instead, any existing disparity between the private and state sectors, between affluent and deprived areas, would be baked in by design.

There is a political complacency to all this, an assumption that parents and students will simply weather it. And so here we are: a fortnight ago in Scotland, students from richer areas saw their predicted grades, if they were A to C, reduced by seven percentage points; in the poorer areas, by 15. This, apparently, was to preserve “credibility” – if there had always been better grades in these richer areas, what would the world make of a year in which the poorer kids did somewhat less badly? What would it mean for the students of 2019 and 2021, sandwiched either side of this freak year in which deprivation didn’t show (so much)?

Almost no student in Scotland was untouched by this system – among 138,000 entrants, 125,000 grades were changed. But the truly life-changing impact – A and B predictions reduced to Cs and Ds – was of course reserved for the highest achieving students in the poorest schools.


The decision in England, where results come out on Thursday, is more rigid still: alongside their predictions, teachers were asked to put their students in order, with none jointly ranked. Then, in what Ofqual called its “direct centre-level approach”, the predictions were largely disregarded; instead, schools have been allocated a fixed number of As, Bs etc based on their performances in previous years. Students’ results will rely jointly on where they were placed relative to their peers, and how students did in the years before them.

This system would be statistically absurd in the more obscure subjects, with five students or fewer, and in those, teachers’ predictions are still decisive. But this is the plan in subjects with more than 15 entrants – English, maths, history. There is almost no scope for appeal at an individual level: you can challenge the result if you can prove that this process wasn’t followed, but you can’t challenge the process itself.

A student at an independent or selective school can expect to receive their most optimistic result, those institutions being accustomed to exam success (not only because of their intake, but because they quietly shuffle out underachieving students over the course of their school careers). A student in a historically low-performing school (and very recent history, at that) can expect to see their results downgraded. Everyone knows that in this unprecedented era of the exam-guesstimate, no results will be perfect. But to choose a system in which those that already have shall be given As, while those that have not, Cs, is shameful.

In all the handwringing, there is an acceptance of one inevitable choice: either Ofqual extrapolated from precedent and kept grades level, or it took teachers’ predictions and accepted perhaps a significant amount of grade inflation. I would have been far happier with the latter: grade inflation, like any inflation, erodes inequality by shaving away pre-existing advantage in a relatively painless way.

Yet that isn’t the only measure they could have considered. For the purposes of the ceaseless measurement that has become the hallmark of state education, secondary schools have a “value added” score, marking how good their actual teaching is, as distinct from the advantages of their intake. It relies on a comparison between SAT scores at the end of primary school and performance at key stages 3 and 4, ending in GCSE. Schools in deprived areas tend to have good value-added scores – if they had an intake that was below average at key stage 2, they’d have to, to keep Ofsted off their backs.


Independent schools, meanwhile, don’t measure value added at all, because independent primaries don’t always sit SATs in year six. It would have been perfectly possible to add that as a reference point to these calculations, and to consider not only how many As to Cs a school got over the past three years, but also how much value it typically added in order to get there. You might still be left with some grade inflation, but then you could always adjust downwards for the sector that couldn’t provide full data – private schools – or for the schools that didn’t add much value: selective state schools.

It is inevitable that results that were only guesses would disadvantage some hard-working pupils. But all these decisions have been flaccidly political – they didn’t even set out to penalise the disadvantaged; they just didn’t think in any serious way about how not to.

After all that’s happened – the chaotic closure of schools, the assumption that everybody could work from home while simultaneously homeschooling, the sudden instruction to go back to the office with no mention of educational provision, the reopening for years 1 and 6 on a sporadic, confusing, discretionary basis, the announcement of a full return in September with no indication that teachers’ unions were on board, the seemingly tabloid-led announcement of priorities, a prime minister adjudicating between pubs and schools hypothetically, and on a whim – perhaps this was predictable.

• Zoe Williams is a Guardian columnist