A-Levels 2020: Derailed by a mutant algorithm

How did the government get A-Level results so wrong?

An algorithm used to determine A-Level grades left thousands of students devastated after their teacher-predicted grades were marked down. Statistics showed that 39% of results were downgraded, and that the students affected were more likely to attend state schools; in contrast, not a single student from Eton had their grades reduced. The country erupted in protest, with some claiming that the algorithm was classist and others arguing that a computer is clearly smarter than a teacher and cannot be biased.

When people hear about algorithms and Artificial Intelligence (AI), they often assume the computer has human-level intelligence and is capable of making its own decisions. But this is completely wrong – an algorithm is just a series of instructions for the computer to follow, with no concept of morals. It simply does what you tell it.

Okay, but surely AI must be intelligent? Well, it’s worth noting that there are two types of AI: Weak and Strong. Chances are that when you think of AI, you’re thinking of Strong AI, which is able to make decisions and learn in an attempt to mimic human intelligence. However, most AI currently in use is Weak AI, and it is likely that this is what lay behind the A-Level algorithm.

So how do we end up with bias within algorithms? Well, the first kind of bias is caused by malicious intent. As previously mentioned, an algorithm will do what it’s told, so if you want to build an algorithm that discriminates against a group of people, it will do just that. The second kind of bias comes from failing to recognise bias already present in the data. This is often harder to spot, as people tend to accept numbers at face value, forgetting that numbers carry no real-world context.

For example, suppose you are tasked with building a system to direct police patrol cars around a city. The first thing you might look at is which areas have a higher arrest rate, and then choose to send more officers there. What those numbers don’t reflect is that areas with a higher percentage of people of colour have historically been, and still are, more heavily policed and therefore unfairly targeted, which results in a higher arrest rate. By not asking why there are more arrests in certain areas, you could end up building a system which is systematically racist.
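To make that concrete, here is a minimal sketch in Python. The areas, numbers and function are all invented for illustration and are not taken from any real policing system; the sketch simply shows what happens if patrols are allocated purely on historical arrest counts:

```python
# Hypothetical illustration only: how allocating patrols by raw arrest
# counts can create a self-reinforcing feedback loop. All numbers are invented.

def allocate_patrols(arrest_counts, total_patrols):
    """Split patrol cars across areas in proportion to past arrest counts."""
    total_arrests = sum(arrest_counts.values())
    return {
        area: round(total_patrols * count / total_arrests)
        for area, count in arrest_counts.items()
    }

# Area B starts with more recorded arrests, partly because it was already
# more heavily policed: the data encodes that history, not "more crime".
arrest_counts = {"Area A": 50, "Area B": 150}

for year in range(3):
    patrols = allocate_patrols(arrest_counts, total_patrols=20)
    print(f"Year {year}: {patrols}")
    # More patrols in an area mean more arrests get recorded there,
    # which in turn earns that area even more patrols the following year.
    for area, cars in patrols.items():
        arrest_counts[area] += cars * 10
```

Because the extra patrols in Area B generate extra recorded arrests, the original disparity in the data is reproduced year after year, whatever the true underlying level of offending.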

It then becomes the responsibility of the developers to spot this kind of prejudice within the data and adjust the algorithm to account for it. However, something as simple as a lack of diversity in a development team can mean that some forms of discrimination are overlooked.

Students protesting against the A-Level results algorithm. Image: Evening Standard.

The A-Level results in August would appear to suggest that the algorithm reinforced class prejudice against students from working-class backgrounds by consistently downgrading them. It is highly likely that this was a result of ignorance rather than malice, but it does raise the question: who was ultimately responsible for overseeing the algorithm and spotting this inbuilt prejudice?
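As a rough illustration of how that kind of downgrading can happen, here is a deliberately simplified, hypothetical Python sketch. It is not Ofqual’s actual model (which also used teachers’ rank orders of students and other adjustments); it only shows what can go wrong when individual grades are forced to fit a school’s historical grade distribution:

```python
# Hypothetical and heavily simplified sketch, NOT Ofqual's real model.
# It shows how fitting this year's grades to a school's historical
# distribution can cap able students at historically weaker schools.

def moderate_grades(teacher_predictions, historical_distribution):
    """Re-assign grades so they match the school's past distribution.

    teacher_predictions: list of (student, predicted_grade), ranked best first.
    historical_distribution: grade -> number of students who typically
    achieved that grade at this school in previous years.
    """
    # Build the pool of grades the school "usually" produces.
    grade_pool = [g for g, n in historical_distribution.items() for _ in range(n)]
    grade_pool.sort()  # 'A' sorts before 'B', 'B' before 'C', and so on
    # Hand the best available historical grades to the highest-ranked students;
    # the individual predictions themselves are largely ignored.
    return [(student, grade_pool[i]) for i, (student, _) in enumerate(teacher_predictions)]

# A school that historically produced one A, two Bs and two Cs.
history = {"A": 1, "B": 2, "C": 2}

# This year's teachers predict three As, but the model only allows one.
predictions = [("Asha", "A"), ("Ben", "A"), ("Cara", "A"), ("Dev", "B"), ("Eli", "C")]

print(moderate_grades(predictions, history))
# [('Asha', 'A'), ('Ben', 'B'), ('Cara', 'B'), ('Dev', 'C'), ('Eli', 'C')]
```

In this toy example, Ben and Cara are both predicted As, but because the school has historically produced only one A they are pushed down to Bs, and Dev slips to a C: a strong student at a historically weaker school is capped by the school’s past rather than judged on their own ability.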

Had these issues been picked up at an earlier stage, the algorithm could have been a force for good – equalising results across private and state schools. However, what was ultimately developed just reaffirmed the classism that exists within the UK. 

By Elizabeth Sarell

Header image: The Guardian