Anonymous data "easily identifiable", says report
Tools to re-identify individuals are easily available, and failure to sufficiently anonymise data will breach GDPR
Current methods used for anonymising data leave individuals at risk of being re-identified, according to research.
Scientists from Imperial College London and Belgium's Université catholique de Louvain (UCLouvain) have developed a model showing that individuals in anonymised databases can be re-identified with 99.98% accuracy.
Sampled data is often anonymised by stripping away identifying characteristics like names and email addresses, so that individuals can't be identified. For example, a hospital may remove patients' names, addresses and dates of birth from health records, allowing them to open up access to these large datasets for researchers to analyse.
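As a toy illustration of this kind of anonymisation (the field names and records here are invented, not taken from the study), stripping direct identifiers might look like this:

```python
# Illustrative sketch: "anonymising" records by stripping direct
# identifiers such as names, addresses and dates of birth.
# Field names are hypothetical examples, not from the research.

DIRECT_IDENTIFIERS = {"name", "address", "date_of_birth", "email"}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "address": "1 High Street",
    "date_of_birth": "1980-04-02",
    "postcode_prefix": "SW1",
    "sex": "F",
    "diagnosis": "asthma",
}

anonymised = strip_identifiers(patient)
print(anonymised)
```

Note that the fields left behind (postcode prefix, sex and so on) are quasi-identifiers: harmless on their own, but potentially enough to single a person out in combination, which is exactly the weakness the researchers exploit.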
Once data is free of these identifying characteristics, it's no longer subject to data protection regulations and can be freely used and sold to third parties, such as advertising companies and data brokers.
But the research from UCLouvain and Imperial argues that this kind of anonymisation is not enough for companies to comply with laws such as GDPR. "Our results reject the claims that, first, reidentification is not a practical risk and, second, sampling or releasing partial datasets provide plausible deniability.
"Moving forward, they question whether current deidentification practices satisfy the anonymisation standards of modern data protection laws such as GDPR and CCPA [California Consumer Privacy Act] and emphasise the need to move, from a legal and regulatory perspective, beyond the deidentification release-and-forget model."
Using their model, the researchers found that 99.98% of Americans could be correctly re-identified in any dataset using 15 demographic attributes. Even heavily sampled anonymised datasets are unlikely to satisfy the modern standards for anonymisation set out in GDPR, the researchers said, a finding that seriously challenges the technical and legal adequacy of the de-identification release-and-forget model.
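The core idea can be sketched with a toy example (this is an invented dataset and a simple uniqueness count, not the researchers' statistical model): the more attributes you combine, the more likely each combination matches exactly one person, making that person re-identifiable.

```python
# Toy sketch of the re-identification idea: count how many records share
# each combination of quasi-identifiers. A combination that matches
# exactly one record singles that individual out.
# The data and attributes are invented for illustration.

from collections import Counter

# Each tuple: (birth year, sex, postcode prefix, occupation)
records = [
    ("1980", "F", "SW1", "teacher"),
    ("1980", "F", "SW1", "teacher"),  # two people share this combination
    ("1975", "M", "E2",  "teacher"),
    ("1992", "F", "N7",  "engineer"),
    ("1992", "M", "N7",  "engineer"),
]

counts = Counter(records)
unique = [r for r in records if counts[r] == 1]
print(f"{len(unique)}/{len(records)} records are unique "
      "on just four attributes")
```

With only four attributes, three of the five toy records are already unique; the study's point is that with 15 demographic attributes, almost everyone in a real population becomes unique in this sense.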
GDPR doesn't apply to personal data which has been "rendered anonymous in such a manner that the data subject is not or no longer identifiable". If a data set is inadequately anonymised before being sold to a buyer, however, allowing that buyer to use available tools to re-identify the individuals, then it will have been "pseudonymised" rather than anonymised, and pseudonymisation is not sufficient, according to lawyer Frank Jennings.
"Given the ICO's new willingness to issue higher fines, organisations should make sure they properly anonymise data before the ICO becomes aware and it's only a matter of time," Jennings said. "As tools and technology are constantly developing, what was sufficient to anonymise last year might not be sufficient this year."
The ICO has clear guidelines on this scenario, as do most of Europe's data regulators. In March 2019, the Danish data protection agency fined a taxi company 140,000 for failing to properly anonymise data.