
The A-level results crisis has once again shown the limits of AI

When will those in government learn that technology isn’t magic?

Let’s be clear: artificial intelligence (AI) and its offspring are wonderful things. They can recognise patterns faster than humans can, meaning things like security threats can be detected more quickly and accurately. They can be used to recognise hand gestures, which could have a positive impact on surgery, for example. In self-driving cars they can – theoretically at least – detect hazards and react to them more quickly than humans.

But when it comes to making decisions about people’s lives, AI is often the worst possible tool to throw at the problem, as this year’s A-level results fiasco (and, undoubtedly, the GCSE results fiasco, when they’re released on 20 August) shows.

This isn’t news to anyone, either – or at least it shouldn’t be. Algorithms have repeatedly determined that black people, especially black men, are criminals. And that’s if they can even differentiate between one black person and another. Asian people, meanwhile, are terrorists, algorithms have determined. If you’re white, however, then please come this way, sir or madam, as you’re clearly virtuous thanks to your skin colour – or so a now-shelved Home Office AI system determined.

While the algorithm being used to determine this year’s exam results isn’t racially biased, it’s defective in multiple other ways and is ruining people’s lives.

As revealed in tweets from BBC Newsnight’s policy editor, Lewis Goodall, the AI system being used to decide students’ grades has penalised outstanding students based on the performance of past students, sometimes to an extraordinary extent. These stories include a student predicted a B or C whom the algorithm instead awarded a U, thanks to an apparently faulty decision process. Another student, predicted AAA by his school and A*A*A* by UCAS, was given BBB by the algorithm. And there are many, many more.

Of course, every year there are some students who are disappointed by their final grades, but this is not the same. This year’s students are effectively being penalised for not having taken the exam at all, which they were unable to do through no fault of their own.

As much as some people may declare that A-level, GCSE and Scottish Higher results don’t matter, the reality is that they do. They matter in the moment emotionally to the children who have taken them, and they matter because they are a stepping stone to the next stage of their education or to entering the workforce.

I can’t shake the feeling that the UK government, and governments elsewhere, have fallen in love with a technology they don’t fully understand. But as those blinded by love so often do, they’re sticking by the object of their affection, despite clear evidence that it’s fundamentally broken.

As I’ve said before, technology isn’t magic. In fact, it can plunge us into a dystopian world where our fate is determined by a faulty AI that pays no regard to our own character or achievements. Instead, a black box algorithm pulls in data about where we live, what school we went to, and the colour of our skin.

There’s no sci-fi hero coming to save us from the misuse of technology, though – it’s up to us as a society to push back against it, before it’s too late.
