San Francisco bans facial recognition technology

Lawmakers say it's not an "anti-tech policy", but FR is "uniquely dangerous and oppressive"


San Francisco has become the first city to ban government agencies from using facial recognition technology.

Eight of the city's supervisors voted to approve the proposal, with one voting against. The ordinance will prevent government agencies, such as law enforcement, from using the technology.

The Stop Secret Surveillance Ordinance, proposed by supervisor Aaron Peskin in January, also requires departments in the city to seek approval from the board of supervisors before using or buying surveillance technology. Other cities have approved similar transparency measures.

In a statement seen by The Verge, Peskin said it was "an ordinance about having accountability around surveillance technology". Peskin added that it was not an "anti-technology policy" but stated that facial recognition is "uniquely dangerous and oppressive".

San Francisco's ban comes as a broader debate about the ethical use of facial recognition rages on. The technology can be used to rapidly identify individuals for security purposes, but there have been a number of cases in which its results have been plagued by bias and inaccuracy.

"The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring," the ordinance states.

In January, researchers at MIT found that Amazon's Rekognition facial recognition technology wasn't identifying race or gender accurately or fairly. The researchers reported that in their tests Rekognition mistakenly identified some pictures of women as men, and that these errors were more prevalent for pictures of darker-skinned women.

Where companies get the data to train facial recognition models has also proved controversial. In March, it was revealed that IBM had used Flickr images to train its facial recognition technology without letting the people pictured know.

The company is said to have used almost a million pictures from the Flickr photo-sharing site to train its platform. However, the people in those pictures weren't told the company would use their features to determine gender, race or other identifiable attributes, such as eye colour, hair colour and whether someone was wearing glasses.

In the UK, the Metropolitan Police has come under heavy scrutiny for its use of facial recognition, owing to its disastrous success rate. The force revealed that the scheme, intended to help identify and apprehend violent criminals, resulted in zero arrests.

Arguing the case for the technology, Microsoft, which offers facial recognition tools, has called for some form of regulation, but exactly how to regulate the technology remains contested.
