Canadian mounted police broke law with Clearview AI deal
The use of a facial image database violated the country’s Privacy Act, said Canada’s privacy regulator
The Canadian police force broke the law when using Clearview AI facial recognition software, the country's privacy regulator has ruled.
The Royal Canadian Mounted Police's (RCMP) use of the technology to conduct hundreds of searches of a database compiled illegally by a commercial company was deemed a violation of the Privacy Act, a report from the Office of the Privacy Commissioner (OPC) of Canada found.
Clearview AI was found to have violated Canada's federal private sector privacy law by creating a database of over three billion images scraped from internet websites without users' consent. Clearview users, like the RCMP, could match photographs of people against photographs in the database.
"The use of FRT [facial recognition technology] by the RCMP to search through massive repositories of Canadians who are innocent of any suspicion of crime presents a serious violation of privacy," commissioner Daniel Therrien said. "A government institution cannot collect personal information from a third party agent if that third party agent collected the information unlawfully."
The RCMP had initially stated that it was not using Clearview AI, only to later admit that it had used the company's technology "in a limited way", primarily for identifying, locating, and rescuing children who were victims of online sexual abuse.
However, the OPC found that only 6% of the RCMP's searches using the technology appeared to be related to victim identification, with a further 9% attributed to other justifiable law enforcement activities. Based on Clearview's records, the police force was unable to provide adequate justification for the vast majority (85%) of the searches it conducted.
In a statement, the RCMP said it publicly acknowledged its use of the technology in February 2020, and ceased using Clearview AI in July 2020, when the company ended its operations in Canada.
"We acknowledge that there is always room for improvement and we continually seek opportunities to strengthen our policies, procedures and training," said a police spokesperson. "The RCMP has accepted all of the recommendations of the OPC and has already begun efforts towards their implementation."
In May, Privacy International and several other European digital privacy campaigners launched legal action against Clearview AI, claiming that the company's methods for collecting images violate European privacy laws.
A cyber attack against Clearview AI in February 2020 reportedly revealed that a number of high-profile public agencies, including the FBI, were on the company's client list.
The company gained notoriety after the New York Times ran a feature about its work with law enforcement agencies and how its facial recognition models were trained on three billion images harvested from social media sites.