Robot dogs won't save policing – but AI just might

Police using AI-powered facial recognition

After a spate of outrages and protests in recent years, it’s clear that policing needs to change. Technology could be the key – but ideas such as robot police, facial-recognition software and AI won’t eradicate fundamental challenges including low budgets, internet-savvy criminals and human racism.

Take facial recognition. Last year in Detroit police wrongly arrested Robert Williams for shoplifting, based on an incorrect identification by facial-recognition software. Williams was arrested at work and spent the night in jail, despite having nothing to do with the case.

We haven’t seen such headlines in the UK so far, but it could be only a matter of time. The Court of Appeal has ruled against South Wales Police’s use of facial recognition on privacy and data protection grounds, but Fraser Sampson, commissioner for the retention and use of biometric material, told the Financial Times in May that police would eventually “have no alternative but to use facial recognition, along with any other technology that is reasonably available to them”.

Indeed, the Metropolitan Police has already started trialling the tech to scan crowds for wanted individuals, although the results have shown a low level of accuracy. Across three tests in 2020, the Met scanned 13,000 faces, from which the system picked out eight individuals – but seven of them turned out not to be the wanted suspects.
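As a rough sense-check, the arithmetic below simply restates those reported trial figures – 13,000 faces scanned, eight people flagged, seven of them wrongly – as a precision and false-discovery rate; it is back-of-the-envelope only, not an official evaluation.

# Back-of-the-envelope check of the reported Met trial figures.
faces_scanned = 13_000
flagged = 8
false_alarms = 7
true_matches = flagged - false_alarms

precision = true_matches / flagged                 # share of flags that were right
false_discovery_rate = false_alarms / flagged      # share of flags that were wrong

print(f"Precision: {precision:.1%}")                        # 12.5%
print(f"False discovery rate: {false_discovery_rate:.1%}")  # 87.5%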

Robot cops have also failed to become a widespread reality, despite the best efforts of technology-minded police forces. In New York, police brought a Boston Dynamics robot to respond to a home invasion in the Bronx, and then to a gun-related incident at an apartment in Manhattan – but this prompted heated accusations of overly aggressive policing. A few weeks later the robot police dog, named Digidog, was retired from service.

And what of machine learning? Its use continues to be fraught with challenges. In the US in 2016, research by ProPublica revealed that a system called COMPAS, designed to assess the likelihood of a convicted criminal reoffending, exhibited a bias against black people. In the UK, the Home Office spent £10 million designing an AI system to predict gun and knife crime, but it was never used after a flaw was spotted in the training data, making its predictions useless.

Critics argue that such technologies have no place in policing, not just because they remain largely unproven but because erroneous outcomes are potentially life damaging. Then again, human police are already heavily criticised for bias, including disproportionately targeting BAME people. Is it possible that technology can make policing work better for all of society?

Motivating forces

Owen West is a retired chief superintendent at West Yorkshire Police who now researches policing at Keele University. He believes that, even in the face of criticism and risks, police forces in the UK are keen to trial new systems such as facial recognition, to push back against a long-held suspicion that the police are overlooking opportunities and cost savings. “Governments think greater technology can reduce the number of officers or police staff, and therefore provide policing at less cost than conventional approaches,” West tells us.

There’s pressure from the security and defence industry too, he adds, which may give away technology for free or at a discount. “It’s often big business in the security and surveillance sectors that seeks to lobby for greater technologies in policing,” West says. “Much of this is seen in the overlap between the defence industries – surplus military equipment and technologies being made available at significant discount to police departments in the US.”

Such projects may seem like a good deal, but West warns that police should be wary of handing over large, valuable datasets to commercial partners. “The police are providing gold dust to such companies but never realise the monetary value they present,” he notes. “If forces were really effectively partnering, they would ensure that the commercial growth and revenue of the product they have hosted or trialled is paid for by the company. All too often the company has the commercial advantage.”

Whatever the balance of power, Fraser Sampson has already signalled the government’s direction, telling the Financial Times that the use of AI is an “inevitable… increasingly necessary component of policing”.

Problems with tech

One of the core problems with AI, whether in facial recognition or data analytics, is that limited or poor-quality datasets end up encoding bias into the decision-making process, meaning the same mistakes humans make are replicated by digital systems.

For example, Adriane Chapman, professor in electronics and computer science at the University of Southampton and a Turing Fellow, points to institutional policing problems that have been highlighted by the Black Lives Matter movement. “We are at a point in society when we acknowledge that a change in responses and actions must happen, and yet the technology takes in what happens in the past, learns from it and repeats it,” she says.

“Humans bring with them biases,” says Chapman. “It’s attractive to try to take them out of the equation using a technical tool. But the tech to work through these biases, and how to mitigate them, isn’t quite ready yet.” In fact, by mechanising such faults, we risk exacerbating them. “One officer cannot generate too many false arrests too much of the time,” West points out. “With facial recognition and AI, what we see is this potentially speeded up and localised.”

Even the best of intentions may not go far enough. The COMPAS system we mentioned above was devised specifically because human judges were observed to be biased. “The goal was to take that horrible human bias out of the loop, by inserting a coolly clinical machine,” says Chapman. “Indeed, the data that was used to train the machine had the race variable hidden.”

Yet that wasn’t enough to avoid race becoming an issue, for two reasons. “First, it was created by the previous biased sentencing/bail decisions from the past,” Chapman says. “Second, there are many other variables in the data that are related to race – for example, zip code – so race is effectively not hidden. In effect, we’ve used data that we know is biased to train the machines.” Which leads to a crucial question: “[When] we know that there are errors in the system, can we justifiably strip someone of their freedoms based on it?”
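The proxy effect Chapman describes is easy to reproduce. The sketch below is a minimal illustration on entirely synthetic data – not COMPAS itself – using a standard scikit-learn classifier: the protected attribute is never shown to the model, yet a correlated field standing in for a zip code lets it score the two groups very differently.

# Synthetic sketch of the proxy problem: the protected attribute is dropped,
# but a correlated field lets the model recover much of the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                                 # hidden protected attribute
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)    # strongly correlated proxy
# Historical labels carry the old bias: one group was flagged far more often.
biased_label = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)

# Train only on the zip code -- the protected attribute never enters the model.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), biased_label)
scores = model.predict_proba(zip_code.reshape(-1, 1))[:, 1]

print("Mean risk score, group 0:", round(scores[group == 0].mean(), 2))
print("Mean risk score, group 1:", round(scores[group == 1].mean(), 2))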

Fixing the problems

There are ways to deal with the downsides and imperfections of such technologies. One is to restrict them to roles where mistakes are less likely to have serious consequences. “If a false ID leads to someone being shot by police or incarcerated, as we’re seeing across the US, then it is very dangerous,” Chapman says. “However, what if facial recognition were being used to identify trafficked and vulnerable individuals? In this case, the societal good may make the use by police palatable.”

She also points to the creation of frameworks to help developers consider the risks and impact of new technologies at an early stage. But while these can focus attention on the potential impact of a project on police and citizens, it remains difficult to assess the further knock-on effects of such ideas. “There is ongoing work looking at how to mitigate bias in datasets, or adjust the models themselves to be more fair, but still much work to be done,” Chapman says.
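One simple measure used in that line of work – a generic example for illustration, not one Chapman names – is the demographic parity difference: how much more often an automated system flags one group than another. It takes only a few lines to compute and can be tracked alongside accuracy.

# A generic fairness check, assumed for illustration: the gap in flag rates
# between two groups. Zero means both groups are flagged at the same rate.
import numpy as np

def demographic_parity_difference(flags: np.ndarray, group: np.ndarray) -> float:
    """Flag rate for group 1 minus flag rate for group 0."""
    return float(flags[group == 1].mean() - flags[group == 0].mean())

# Toy example: group 1 is flagged three times as often as group 0.
flags = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(demographic_parity_difference(flags, group))  # 0.5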

Even getting companies to use the existing frameworks, limited as they are, is a challenge in itself. “Why would a company who needs to meet a bottom line and sell a product engage with these?” Chapman asks. “What we need is some regulation.” That could be coming, with the EU working on regulation of AI systems.

AI systems can also be monitored by those using them, by tracking mistakes and using that data to inform decision-making. “We know these systems have error rates,” says Chapman. “The implications of any action or inaction should be weighed with respect to those error rates.”
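What weighing actions against known error rates could look like in practice is sketched below. The match probability and harm weights are illustrative placeholders, not figures from any force or vendor.

# Illustrative only: the probabilities and "harm" weights are placeholders.
def should_act(p_match_correct: float, harm_false_action: float,
               harm_missed_match: float) -> bool:
    """Act on an automated alert only if the expected harm of acting is
    lower than the expected harm of ignoring it."""
    expected_harm_acting = (1 - p_match_correct) * harm_false_action
    expected_harm_ignoring = p_match_correct * harm_missed_match
    return expected_harm_acting < expected_harm_ignoring

# At a precision like the Met trial's (~12.5%), acting on every alert only makes
# sense if a missed match is judged far more harmful than a wrongful stop.
print(should_act(0.125, harm_false_action=10, harm_missed_match=5))    # False
print(should_act(0.125, harm_false_action=10, harm_missed_match=100))  # True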

“For instance, the Avon and Somerset Police uses a data-driven approach, not to target crime or bad guys, but to police the police and identify vulnerable individuals. In this case, a false accusation does not impact a member of the public or incarcerate anyone.” That police force does, however, use algorithms and AI across a wider range of areas.

What’s next

That last point may be the key to safely using AI in policing. Rather than letting technology make decisions about who to arrest and who to refuse bail, it can be put to positive uses, such as identifying and protecting vulnerable people.

For example, West predicts that police will turn to technology that allows them to understand vulnerability. “The ability to walk down a street and receive alerts – who needs a welfare check visit, which house was recently burgled and could be revisited, any vulnerable children at an address, women and young girls at risk in a particular home – that sort of operational picture,” he says.

West calls such systems “descriptive technologies,” saying they help paint a picture of operational context and make using data easier, while AI will fit into “predictive policing” such as crime-pattern analysis. That will take time to perfect – or even make good enough for limited use – and will require public support and trust. “The notion of Big Brother remains strong, and I cannot see a great deal of public confidence in such technology anytime soon,” he says.

Fundamentally, West argues that technology should enhance human policing, rather than replacing it. “Where technology adds value is in the connectivity between the police and the communities they serve,” he says. “Technology for people to easily connect, communicate and do business with their local officers, technology for people to ascertain things they currently have to phone the police to find out about. In other words, technology that makes the customer experience much better than it currently is.” Forget robotic police dogs – this is how technology could actually help police serve and protect better.