ICO seeks expert input on AI regulation


The Information Commissioner's Office (ICO) has called upon industry leaders to help construct a new framework to audit data protection provisions in artificial intelligence (AI).

As AI permeates more facets of everyday life, the ICO said, it becomes increasingly important that the technology is regulated and not used for illicit purposes.

That said, the ICO also recognises that there are many beneficial and exciting use cases for AI which can have a positive impact on businesses and individuals. The healthcare, retail and recruitment sectors, for example, were cited as just a few that could benefit from the ever-growing technology.

Simon McDougall, the ICO's executive director for technology policy and innovation, stressed the importance of the GDPR and said its core principles must be applied to AI, given the vast amounts of data the technology processes.

"The GDPR strengthens individuals' rights when it comes to the way their personal data is processed by technologies such as AI," McDougall wrote in a blog post.

"They have, in some circumstances, the right to object to profiling and they have the right to challenge a decision made solely by a machine, for example".

The law as set out in the Data Protection Act 2018 requires organisations to hard-wire data protection into new products by design, identifying risks at the outset through risk assessments.

The audit framework, McDougall said, "is at the very early stages of our thinking". He invited experts ranging from data scientists and app developers to leaders of AI businesses to help the ICO make the technology better and more privacy-centric for everyone.

"Whether you're from the private, public or third sector, we want you to join our open discussion about the genuine challenges arising from the adoption of AI," he said.

"This will ensure the published framework will be both conceptually sound and applicable to real life situations."

The ICO's focus on AI isn't new. Back in November 2018, the data protection watchdog appointed its first postdoctoral research fellow for AI, Dr Reuben Binns, who leads a team tasked with developing the AI framework.

"Artificial intelligence is an exciting area that has great potential. However, like all new technologies, it also has the potential to be misused, and there is understandable anxiety amongst the public around how decisions using AI are being made," McDougall added.

"The ICO is committed to remaining engaged with emerging technologies, and Reuben's work will deepen our understanding of this complex area."

The feedback given to the ICO will inform a formal consultation paper due to be released in January 2020, with the final audit framework scheduled for publication in spring 2020.
