Amazon outlines regulatory framework for facial recognition

Web giant's call for new laws follows MIT research demonstrating possible racial bias in the firm's own system

Amazon has outlined a regulatory framework for facial recognition technology amid mounting concerns over racial bias in its own system and the murky ethics around its use in business and wider society.

Facial recognition must be subject to human review when used in law enforcement, and to a 99% confidence score threshold, the web giant has suggested. There should also be sufficient notice when video surveillance is deployed in public settings, and law enforcement should be transparent about how it uses the technology.
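For illustration only, the sketch below shows how a confidence threshold along the lines of Amazon's suggested 99% figure could be applied when calling Rekognition's CompareFaces API via the boto3 Python SDK. The bucket and image names are hypothetical placeholders, and this is a rough sketch rather than code published by Amazon.

```python
# Minimal sketch, not Amazon's own guidance: applying a 99% similarity
# threshold to a Rekognition CompareFaces call. Bucket and object names
# are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "reference.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "cctv-frame.jpg"}},
    SimilarityThreshold=99.0,  # only return candidate matches at or above 99% similarity
)

# Under Amazon's proposed framework, any match that clears the threshold
# would still be referred for human review rather than acted on automatically.
for match in response["FaceMatches"]:
    print(f"Candidate match at {match['Similarity']:.2f}% similarity - refer for human review")
```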

"New technology should not be banned or condemned because of its potential misuse," said Amazon Web Service's (AWS) vice president for global public policy Michael Punke.

"Instead, there should be open, honest, and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced.

"AWS dedicates significant resources to ensuring our technology is highly accurate and reduces bias, including using training data sets that reflect gender, race, ethnic, cultural, and religious diversity.

"We're also committed to educating customers on best practices, and ensuring diverse perspectives in our technology development teams."

Last month, researchers at the Massachusetts Institute of Technology (MIT) found that Amazon's Rekognition facial recognition platform may not identify race or gender accurately or fairly. For instance, the system mistakenly identified some pictures of women as men, an error that was more common with pictures of women with darker skin.
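As a hedged illustration of how such an audit surfaces these errors (this is not MIT's methodology, and the image name is a hypothetical placeholder), the sketch below queries Rekognition's DetectFaces API via boto3, which returns a gender estimate alongside a confidence score that researchers can compare across skin tones.

```python
# Minimal sketch, not the MIT study's code: inspecting the gender estimate
# and confidence that Rekognition attaches to a detected face. The bucket
# and image name are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "portrait.jpg"}},
    Attributes=["ALL"],  # request demographic attributes, including the gender estimate
)

for face in response["FaceDetails"]:
    gender = face["Gender"]
    # Auditors compare predictions like this against ground-truth labels,
    # broken down by skin tone, to measure error-rate disparities.
    print(f"Predicted {gender['Value']} with {gender['Confidence']:.1f}% confidence")
```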

Amazon employees also expressed concerns last summer over a deal struck between the firm and the US Immigration and Customs Enforcement (ICE) agency, covering the use of Rekognition in border control and immigration enforcement.

This was considered especially pertinent in light of ongoing troubles at the US/Mexico border. Employees were concerned the tool was being used to facilitate "historic militarization of police, renewed targeting of Black activists, and the growth of a federal deportation force currently engaged in human rights abuses".

Amazon's decision to weigh in on a facial recognition regulatory framework follows Microsoft's call for new laws in December last year. The firm's president Brad Smith urged lawmakers to draw up a set of laws to protect citizens' civil rights and privacy, suggesting the technology was open to abuse and needed much further research.

Google Cloud, similarly, has imposed its own ban on selling general-purpose AI-powered facial recognition technology until the underlying systems improve and the privacy and data collection concerns are addressed in law.

The UK has seen several facial recognition trials deployed by police forces in isolated areas across the country, including an eight-hour trial the Metropolitan Police ran in London in December.

But the Met's use of this technology is "dangerously inaccurate", according to the privacy group Big Brother Watch, which published a set of findings into the police's use of facial recognition in May last year. The report, which was presented to parliament, claimed the Met's system had a failure rate of 98% and misidentified 95 individuals as criminals during the 2017 Notting Hill Carnival.

Amazon's call for new regulations in the US also coincides with the University of East Anglia's (UEA) call yesterday for more rigorous testing and greater transparency around the technology's use in law enforcement.

"These FRT trials have been operating in a legal vacuum. There is currently no legal framework specifically regulating the police use of facial recognition technology," said UEA lecturer in criminal law Dr Joe Purshouse.

"Parliament should set out rules governing the scope of the power of the police to deploy facial recognition technology surveillance in public spaces to ensure consistency across police forces.

"As it currently stands, police forces trialling facial recognition technology are left to come up with divergent, and sometimes troubling, policies and practices for the execution of their facial recognition technology operations."

Amazon says it will continue to work with industry partners, the government, academics, and campaign groups to improve the standards of what it considers a powerful tool with wider business and social applications.
