Clearview AI ordered to cease data scraping in Australia


The Australian Information Commissioner has found that Clearview AI breached Australians’ privacy by scraping their biometric information from the web and disclosing it through a facial recognition tool.

A joint investigation between the Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) found that Clearview AI breached the Australian Privacy Act 1988 by collecting Australians’ sensitive information without consent, collecting personal information by unfair means, and failing to take reasonable steps to notify individuals of the collection of their personal information.

It also found the company didn’t take reasonable steps to ensure the personal information it disclosed was accurate, or to implement practices, procedures, and systems to ensure compliance with the Australian Privacy Principles.

The determination orders the company to cease collecting facial images and biometric templates from individuals in Australia, and to destroy the images and templates it has already collected from the country.

Clearview AI’s facial recognition tool includes a database of over three billion images scraped from social media platforms and other publicly available websites. The tool allows users to upload a photo of an individual’s face and find other facial images of that person gleaned from the internet. It then links to where the photos appeared for identification purposes.

The OAIC pointed to a lack of transparency around the company’s collection practices, the monetisation of individuals’ data for a purpose entirely outside reasonable expectations, and the risk of harm to people whose images are included in its database.

“When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes,” said privacy commissioner Angelene Falk. “The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”

The ICO’s Elizabeth Denham said that as the digital world is international, the regulatory work must be international too, particularly where regulators are looking to anticipate, interpret, and influence developments in tech for the global good.

“That doesn’t mean sharing the same laws or approaches, but on finding ways for our different approaches to work side by side and to co-ordinate and share the regulatory challenge where technologies impact our citizens across international borders,” said Denham. “This helps minimise the burden on data protection authorities and those they regulate. That is what we were able to achieve in this case, and the result is an investigation that will protect consumers in both the UK and Australia.”

The ICO is also considering its next steps and any formal regulatory action that may be appropriate under the UK’s data protection laws.

Clearview AI also provided trials of its facial recognition tool to Australian police forces between October 2019 and March 2020, allowing them to conduct searches using facial images of individuals located in the country. The OAIC is currently finalising an investigation into whether the police complied with requirements under the Australian Government Agencies Privacy Code to assess and mitigate privacy risks during these trials.


The AI company had previously argued the information it handled was not personal information and, as it was based in the US, it was not within the Privacy Act’s jurisdiction. Clearview also claimed it stopped offering services to Australian law enforcement shortly after the OAIC’s investigation began.

However, Falk said she was satisfied the company was required to comply with Australian privacy law and that the information it handled was covered by the Privacy Act.

Clearview AI told IT Pro it intends to appeal the commissioner’s decision, arguing that the decision misunderstands how the company operates and that the commissioner lacks jurisdiction over it.

“We only collect public data from the open internet and comply with all standards of privacy and law,” said Hoan Ton-That, CEO of Clearview AI. “I respect the time and effort that the Australian officials spent evaluating aspects of the technology I built. But I am disheartened by the misinterpretation of its value to society. I look forward to engaging in conversation with leaders and lawmakers to fully discuss the privacy issues, so the true value of Clearview AI’s technology, which has proven so essential to law enforcement, can continue to make communities safe.”

This isn’t the first time Clearview AI has come up against privacy laws. In June 2021, Canada’s privacy regulator found that the Canadian police force broke the law when using the company’s facial recognition software, and that Clearview itself had violated Canada’s federal private sector privacy laws by creating a database of over three billion images scraped from the internet without users’ consent.

Zach Marzouk

Zach Marzouk is a former ITPro, CloudPro, and ChannelPro staff writer, covering topics like security, privacy, worker rights, and startups, primarily in the Asia Pacific and the US regions. Zach joined ITPro in 2017 where he was introduced to the world of B2B technology as a junior staff writer, before he returned to Argentina in 2018, working in communications and as a copywriter. In 2021, he made his way back to ITPro as a staff writer during the pandemic, before joining the world of freelance in 2022.