Amazon outlines regulatory framework for facial recognition

Web giant's call for new laws follows MIT research demonstrating possible racial bias in the firm's own system


Amazon has outlined a regulatory framework for facial recognition technology amid mounting concerns of racial bias in its own system, and murky ethics around its use in business and wider society.

Facial recognition should be subject to human review when used in law enforcement, the web giant has suggested, and matches should only be acted upon above a 99% confidence score threshold. There should also be sufficient notice when video surveillance is deployed in public settings, and law enforcement should be transparent about how it uses the technology.
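In practice, a 99% confidence threshold would mean treating any candidate match scored below that level as no match at all. The sketch below illustrates the idea with hypothetical match records and field names; it does not use Amazon's actual Rekognition response format.

```python
# Sketch of filtering face-match candidates by the suggested 99%
# confidence threshold. The match records and field names here are
# hypothetical illustrations, not Rekognition's real API output.
CONFIDENCE_THRESHOLD = 99.0

def actionable_matches(candidates):
    """Keep only matches at or above the 99% threshold; anything
    below it should be discarded rather than passed to an analyst."""
    return [c for c in candidates if c["confidence"] >= CONFIDENCE_THRESHOLD]

candidates = [
    {"subject_id": "A", "confidence": 99.4},
    {"subject_id": "B", "confidence": 97.8},  # below threshold: discarded
]

print(actionable_matches(candidates))  # only subject A remains
```

Under Amazon's proposal, even matches that clear the threshold would still go to a human reviewer before any law-enforcement action is taken.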

"New technology should not be banned or condemned because of its potential misuse," said Amazon Web Service's (AWS) vice president for global public policy Michael Punke.

"Instead, there should be open, honest, and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced.

"AWS dedicates significant resources to ensuring our technology is highly accurate and reduces bias, including using training data sets that reflect gender, race, ethnic, cultural, and religious diversity.

"We're also committed to educating customers on best practices, and ensuring diverse perspectives in our technology development teams."

Last month researchers at the Massachusetts Institute of Technology (MIT) found that Amazon's Rekognition facial recognition platform may not identify race or gender accurately or fairly. For instance, the system mistakenly identified some pictures of women as men, and such misidentification was more common for pictures of women with darker skin.

Amazon employees also expressed concerns this summer over a deal struck between the firm and the US Immigration and Customs Enforcement (ICE) agency, covering the use of Rekognition in border control and immigration enforcement.

This was considered especially pertinent in light of ongoing troubles at the US/Mexico border. Employees were concerned the tool was being used to facilitate "historic militarization of police, renewed targeting of Black activists, and the growth of a federal deportation force currently engaged in human rights abuses".

Amazon's decision to weigh in on a facial recognition regulatory framework follows Microsoft's call for new laws in December last year. Microsoft president Brad Smith called for lawmakers to draw up a set of laws to protect citizens' civil rights and privacy, suggesting the technology is open to abuse and needs much further research.

Google Cloud, similarly, has imposed a self-ban on selling general-purpose AI-powered facial recognition technology until the underlying systems mature and the privacy and data collection concerns are addressed in law.

The UK has seen several isolated facial recognition trials deployed by police services across the country, such as the Metropolitan Police's eight-hour trial in London in December, among several other instances.

But the Met's use of this technology is "dangerously inaccurate" according to the privacy group Big Brother Watch, which published a set of findings into the police's use of facial recognition in May last year. The report, which was presented to parliament, claimed the Met has a failure rate of 98%, and misidentified 95 individuals as criminals during the 2017 Notting Hill Carnival.

Amazon's calls for new regulations in the US also coincide with the University of East Anglia's (UEA) calls yesterday for more rigorous testing and greater transparency around the technology's use in law enforcement.

"These FRT trials have been operating in a legal vacuum. There is currently no legal framework specifically regulating the police use of facial recognition technology," said UEA lecturer in criminal law Dr Joe Purshouse.

"Parliament should set out rules governing the scope of the power of the police to deploy facial recognition technology surveillance in public spaces to ensure consistency across police forces.

"As it currently stands, police forces trialling facial recognition technology are left to come up with divergent, and sometimes troubling, policies and practices for the execution of their facial recognition technology operations."

Amazon says it will continue to work with industry partners, the government, academics, and campaign groups in order to improve the standards of what it considers a powerful tool with wider business and social applications.
