San Francisco bans facial recognition technology
Lawmakers say it's not an "anti-tech policy", but FR is "uniquely dangerous and oppressive"
San Francisco has become the first city to ban government agencies from using facial recognition technology.
Eight of the city's supervisors voted to approve the proposal, with one against. The ordinance will prevent government agencies, such as law enforcement, from using the technology.
The Stop Secret Surveillance Ordinance, proposed by supervisor Aaron Peskin in January, also requires departments in the city to seek approval from the board of supervisors before using or buying surveillance technology. Other cities have approved similar transparency measures.
In a statement seen by The Verge, Peskin said it was "an ordinance about having accountability around surveillance technology". Peskin added that it was not an "anti-technology policy" but stated that facial recognition is "uniquely dangerous and oppressive".
San Francisco's ban comes as a broader debate about the ethical use of facial recognition rages on. The technology can be used to rapidly identify individuals for security purposes, but there have been a number of cases where its results have been plagued by bias and inaccuracies.
"The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring," the ordinance states.
In January, researchers at MIT found that Amazon's Rekognition facial recognition technology wasn't identifying gender accurately or fairly. In tests they conducted, Rekognition mistakenly identified some pictures of women as men, and this was more prevalent with pictures of darker-skinned women.
Where companies get the data to train facial recognition models has also proved controversial. In March, it was revealed that IBM had used Flickr images to train its facial recognition technology without letting the people involved know.
The company is said to have used almost a million pictures from the Flickr photo-sharing site to train its platform. However, the people in the pictures weren't told the company would use their features to determine gender, race or other identifiable characteristics, such as eye colour, hair colour, or whether they were wearing glasses.
In the UK, the Metropolitan Police has come under heavy scrutiny for using facial recognition due to the system's dismal accuracy. The force revealed that the scheme, intended to help identify and apprehend violent criminals, resulted in zero arrests.
Arguing the case for the technology, Microsoft, which offers facial recognition tools, has called for some form of regulation, but exactly how the tool should be regulated remains contested.