Tech firms are saving face by dropping facial recognition

Facial recognition used for mass surveillance (Image credit: Shutterstock)

I would love to see a fact-based comedy about all the times Woody Harrelson has found himself, intentionally or not, on the wrong side of a technology dispute. It should include that time he posted 5G conspiracy theories on Instagram and also that mad story about the NYPD using his image in a facial recognition system to catch a criminal doppelganger. It could be called: “White Men Can’t Jump to Conclusions”.

The NYPD reportedly couldn't match fuzzy CCTV footage of a petty theft to anything in its database. But because the suspect bore a resemblance to the actor, detectives simply winged it and ran a photo of Harrelson through the system instead, which we are led to believe resulted in an arrest.
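To see why a lookalike's photo can "work" as a probe, it helps to know that these systems typically reduce each face to an embedding vector and then rank every face in the database by similarity to the probe. The sketch below is a minimal illustration of that nearest-neighbour step, not any vendor's actual pipeline: the embeddings are random NumPy stand-ins, and the 0.6 threshold is an arbitrary assumption.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the gallery identity most similar to the probe, if above threshold."""
    scored = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    name, score = max(scored.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

# Toy data: in a real system these vectors would come from a face-embedding
# model; here they are random stand-ins purely for illustration.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # stands in for a clean celebrity headshot

print(best_match(probe, gallery))
```

The system always returns whichever gallery face sits closest to the probe; a sufficiently lax threshold turns "closest" into "good enough", which is how a Hollywood headshot can stand in for a blurry thief.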

Unethical tales like this make me glad that Amazon has taken its Rekognition software away from the police (for a year, at least), saying it wants to give US lawmakers time to come up with workable legislation to regulate the technology. Just two days earlier, IBM decided to scrap its own facial recognition system in the wake of the Black Lives Matter protests, citing fears that it could be used by law enforcement for racial profiling.

As protests over the alleged murder of George Floyd continue in the US, there is some small comfort in knowing that two of the world's biggest tech firms won't be providing a technology well known for harbouring racial bias to agencies accused of the same thing.

The problem, though, is that facial recognition is big business, and other companies, such as the controversial Clearview AI, can always step in as replacements. US Senator Edward Markey has already voiced concerns about Clearview being used in cities where protests are being held, but the chance of him being heard in the White House is slim – it definitely won't reach the emergency bunker.

The best-case scenario is that Congress does indeed build a legislative framework for the use of facial recognition and the trend is then taken up around the world. But as the former UK Prime Minister Tony Blair pointed out at CogX, government is slow when it comes to regulating technology. Getting regulation in place is difficult at the best of times, and it's even harder when the subject is a complex technology that even the technologists themselves can't get right.

Naturally, law enforcement agencies around the world have been keen to give facial recognition a go, backed by governments too short-sighted to contemplate the ethics of it. This is despite AI-based technologies proving to be racially biased time and time again.

In 2018, the UK’s Metropolitan and South Wales police forces continued to deploy facial recognition technology even as their own results showed it was ludicrously inaccurate. Indeed, its deployment and continued use were backed by the Home Office, despite a growing backlash from privacy groups and warnings from the Information Commissioner’s Office.
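The inaccuracy is worse than the raw match rate suggests, because of base-rate arithmetic: even a low false positive rate, applied to tens of thousands of innocent faces at an event, swamps the handful of genuine watchlist matches. A quick worked example (all numbers are illustrative assumptions, not the forces' actual trial figures):

```python
# Base-rate arithmetic for face matching in a crowd.
# All numbers below are illustrative assumptions, not real trial figures.

crowd_size = 50_000          # faces scanned at an event
watchlist_hits = 20          # people in the crowd actually on the watchlist
true_positive_rate = 0.90    # chance a watchlisted face is correctly flagged
false_positive_rate = 0.001  # chance an innocent face is wrongly flagged

true_alerts = watchlist_hits * true_positive_rate
false_alerts = (crowd_size - watchlist_hits) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:.0f}")   # 18
print(f"False alerts: {false_alerts:.0f}")  # ~50
print(f"Precision:    {precision:.1%}")     # ~26%: most alerts are wrong
```

Even at a 0.1% false positive rate – far better than the early UK trials reportedly achieved – roughly three out of every four alerts would point at an innocent person.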


We’re now in a situation where an untrustworthy technology that should have been binned ages ago is being used by authorities we have dwindling faith in. What’s worse, the best hope we have of stopping it is not the government, but the very organisations that made mass facial recognition possible saving face by pulling their technology back.

Bobby Hellard
