
AI should include 'undesirable side effect' warnings, claims PwC

Maria Luciana Axente says data scientists should adhere to an ethical code, just as medical practitioners do

[Image: A handful of colourful pills scattered on a grey surface]

Artificial intelligence (AI) should be treated with the same scrutiny as medication and warn users about potential “undesirable side effects”, according to an AI ethics expert.

Maria Luciana Axente, PwC UK’s Responsible AI and AI for Good lead, made the statement during a discussion on the second day of the newly launched AI Festival, adding that data scientists should adhere to an ethical code, just as medical practitioners do.

When quizzed by BT's head of AI and Data Science Research, Detlef Nauck, about whether she thinks AI systems should be regulated like medicines that come “with a description of undesirable side effects”, Axente replied: “Of course, no doubt about it.”

"We have to move in that direction,” she said. “Because we started using AI in so many different domains of life that have a significant impact on people's lives.”

However, Axente added that, although she has heard arguments for creating a dedicated agency to regulate the use of AI, much as the US Food and Drug Administration protects public health, she believes it may be better to focus on existing institutions.

“Let's see what we can be doing on this part of the pond, and how can we leverage some of the institutions we already have, rather than creating new ones – but they will have to go in that direction, especially for high-risk application [of AI],” she said.


She referenced the European Commission’s plans to revise EU laws on the use of AI, with new regulations “set to come in April”. 

“I think [they] will allow us to at least separate the use cases that need more attention and more governance from the ones that we can be a little bit more relaxed,” she said.

Axente’s comments come as Facebook’s head of hardware confirmed that the tech giant is considering using facial recognition technology for its upcoming smart glasses, which are set to be released later this year.

Facial recognition has been the subject of a petition launched last week by a coalition of privacy advocates, who are pressuring EU regulators to take advantage of the bloc's upcoming revision of AI laws and ban the use of biometric mass surveillance tools.

