Anonymous data "easily identifiable", says report

Tools to re-identify individuals are easily available, and failure to sufficiently anonymise data will breach GDPR

Current methods used for anonymising data leave individuals at risk of being re-identified, according to research.

Scientists from Imperial College London and Belgium's Université Catholique de Louvain (UCLouvain) have developed an algorithm showing that individuals in anonymised databases can be re-identified with 99.98% accuracy.

Sampled data is often anonymised by stripping away identifying characteristics like names and email addresses, so that individuals can't be identified. For example, a hospital may remove patients' names, addresses and dates of birth from health records, allowing them to open up access to these large datasets for researchers to analyse.
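The stripping process described above can be sketched in a few lines. This is a hypothetical illustration (the field names and record are invented, not from the research): removing direct identifiers such as names and email addresses still leaves quasi-identifiers like postcode, birth year and gender in place, and it is these remaining attributes that re-identification attacks exploit.

```python
# A hypothetical health record. "name" and "email" are direct identifiers;
# "postcode", "birth_year" and "gender" are quasi-identifiers that survive
# this kind of anonymisation.
record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "postcode": "SW7 2AZ",
    "birth_year": 1985,
    "gender": "F",
    "diagnosis": "asthma",
}

DIRECT_IDENTIFIERS = {"name", "email"}

def strip_direct_identifiers(rec):
    """Remove direct identifiers; quasi-identifiers are left untouched."""
    return {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}

anonymised = strip_direct_identifiers(record)
print(anonymised)  # no name or email, but postcode/birth_year/gender remain
```

The point of the sketch is that the output record, while free of obvious identifiers, can still single out an individual once it is linked against an outside data source.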

Once data is free of these identifying characteristics, it's no longer subject to data protection regulations and can be freely used and sold to third parties, such as advertising companies and data brokers.

But the research from UCLouvain and Imperial argues that this kind of anonymisation is not enough to exempt companies from laws such as GDPR. "Our results reject the claims that, first, reidentification is not a practical risk and, second, sampling or releasing partial datasets provide plausible deniability.

"Moving forward, they question whether current deidentification practices satisfy the anonymisation standards of modern data protection laws such as GDPR and CCPA [California consumer privacy act] and emphasise the need to move, from a legal and regulatory perspective, beyond the deidentification release-and-forget model."

Using their model, the researchers found that 99.98% of Americans could be correctly re-identified in any dataset using 15 demographic attributes. Even heavily sampled anonymised datasets are unlikely to satisfy the modern standards for anonymisation set out in GDPR, the researchers said, seriously challenging the technical and legal adequacy of the de-identification release-and-forget model.
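A toy linkage attack shows why combining demographic attributes is so powerful. This sketch is illustrative only and is not the researchers' model: an attacker who knows a few attributes about a person (from a voter roll, social media or a data broker) checks how many records in a "de-identified" dataset match. A unique match effectively re-identifies the record, and each extra known attribute shrinks the candidate set.

```python
# A tiny invented "de-identified" dataset: no names, only quasi-identifiers.
dataset = [
    {"postcode": "SW7", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "SW7", "birth_year": 1990, "gender": "M", "diagnosis": "flu"},
    {"postcode": "N1",  "birth_year": 1985, "gender": "F", "diagnosis": "eczema"},
]

def matching_records(dataset, known_attrs):
    """Return the records consistent with the attacker's background knowledge."""
    return [r for r in dataset
            if all(r.get(k) == v for k, v in known_attrs.items())]

# Attributes the attacker already knows about the target.
known = {"postcode": "SW7", "birth_year": 1985, "gender": "F"}
matches = matching_records(dataset, known)
if len(matches) == 1:
    # The sensitive attribute (diagnosis) is now linked to a known person.
    print("Unique match, re-identified:", matches[0])
```

With only three attributes and three records a unique match already falls out here; the research's headline figure concerns what happens at population scale with 15 attributes, where almost everyone becomes unique.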

GDPR doesn't apply to personal data which has been "rendered anonymous in such a manner that the data subject is not or no longer identifiable". If a data set is inadequately anonymised before being sold, however, allowing the buyer to use available tools to re-identify the individuals, then it will have been "pseudonymised" rather than anonymised, and pseudonymisation is not sufficient, according to lawyer Frank Jennings.

"Given the ICO's new willingness to issue higher fines, organisations should make sure they properly anonymise data before the ICO becomes aware and it's only a matter of time," Jennings said. "As tools and technology are constantly developing, what was sufficient to anonymise last year might not be sufficient this year."

The ICO has clear guidelines on this scenario, as do most of Europe's data regulators. In March 2019, the Danish data protection agency fined a taxi company 140,000 for failing to properly anonymise data.
