Anonymous data "easily identifiable", says report

Tools to re-identify individuals are easily available, and failure to sufficiently anonymise data will breach GDPR

Current methods used for anonymising data leave individuals at risk of being re-identified, according to research.

Scientists from Imperial College London and Belgium's Université Catholique de Louvain (UCLouvain) have developed an algorithm showing that supposedly anonymous databases can be reverse-engineered with 99.98% accuracy.

Sampled data is often anonymised by stripping away identifying characteristics like names and email addresses, so that individuals can't be identified. For example, a hospital may remove patients' names, addresses and dates of birth from health records, allowing them to open up access to these large datasets for researchers to analyse.
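To illustrate the weakness described above, here is a minimal sketch using entirely hypothetical records: even after names and addresses are stripped, combinations of remaining "quasi-identifiers" (here postcode, sex and date of birth) can still single individuals out. The field names and data are invented for illustration, not taken from the research.

```python
# Hypothetical illustration: direct identifiers (names, addresses) are already
# removed, yet quasi-identifier combinations can still be unique to one person.
from collections import Counter

# Invented hospital records with names stripped out
records = [
    {"postcode": "SW7 2AZ", "sex": "F", "dob": "1984-03-12", "diagnosis": "asthma"},
    {"postcode": "SW7 2AZ", "sex": "M", "dob": "1990-07-01", "diagnosis": "flu"},
    {"postcode": "SW7 2AZ", "sex": "F", "dob": "1984-03-12", "diagnosis": "eczema"},
    {"postcode": "W1A 1AA", "sex": "M", "dob": "1975-11-23", "diagnosis": "diabetes"},
]

quasi_identifiers = ("postcode", "sex", "dob")

# Count how many records share each quasi-identifier combination
combos = Counter(tuple(r[k] for k in quasi_identifiers) for r in records)

# A record is trivially re-identifiable when its combination is unique:
# anyone who knows those three facts about a person can find their row.
unique = sum(
    1 for r in records
    if combos[tuple(r[k] for k in quasi_identifiers)] == 1
)
print(f"{unique} of {len(records)} records have a unique quasi-identifier combination")
# → 2 of 4 records have a unique quasi-identifier combination
```

The researchers' point is that with enough such attributes (they used 15), almost every record becomes unique in this sense, regardless of how the dataset was sampled.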

Once data is free of these identifying characteristics, it's no longer subject to data protection regulations and can be freely used and sold to third parties, such as advertising companies and data brokers.

But the research from UCLouvain and Imperial argues that anonymisation is not enough for companies to get around laws such as GDPR. "Our results reject the claims that, first, reidentification is not a practical risk and, second, sampling or releasing partial datasets provide plausible deniability.

"Moving forward, they question whether current deidentification practices satisfy the anonymisation standards of modern data protection laws such as GDPR and CCPA [the California Consumer Privacy Act] and emphasise the need to move, from a legal and regulatory perspective, beyond the deidentification release-and-forget model."

Using their model, the researchers found that 99.98% of Americans could be correctly re-identified in any dataset using 15 demographic attributes. Even heavily sampled anonymised datasets are unlikely to satisfy the modern standards for anonymisation set forth in GDPR, the researchers said, seriously challenging the technical and legal adequacy of the de-identification release-and-forget model.

GDPR doesn't apply to personal data which has been "rendered anonymous in such a manner that the data subject is not or no longer identifiable". If a dataset is inadequately anonymised before being sold, however, and the buyer can use readily available tools to re-identify individuals, then the data has only been "pseudonymised" rather than anonymised, and pseudonymisation is not sufficient, according to lawyer Frank Jennings.

"Given the ICO's new willingness to issue higher fines, organisations should make sure they properly anonymise data before the ICO becomes aware and it's only a matter of time," Jennings said. "As tools and technology are constantly developing, what was sufficient to anonymise last year might not be sufficient this year."

The ICO has clear guidelines on this scenario, as do most of Europe's data regulators. In March 2019, the Danish data protection agency fined a taxi company £140,000 for failing to properly anonymise data.
