
Tay scandal taught us to take accountability, says Microsoft CEO

Satya Nadella says Redmond has learned from its disastrous racist chatbot

Microsoft CEO Satya Nadella stressed today that it is the job of AI companies to ensure that artificial intelligence is kept under control.

At the London launch of his new book, Hit Refresh, IT Pro heard Nadella address the question posed by many AI sceptics: what happens if tech companies create an AI that gets out of hand?

"It's up to us. In other words, how do we approach this with a set of design principles that allow us to control what AI we create? Just like good user experience, I would claim there is good AI," he said. "As designers of AI, it's our responsibility."

Microsoft has invested heavily in the space and Nadella considers AI to be one of the three main pillars of the company's future, alongside quantum computing and mixed reality. It has made progress with AI, most notably through its digital assistant Cortana, but also in other areas, including machine vision and advanced analytics.

Some of its AI experiments, however, have been less successful. One particularly embarrassing failure was Tay, a Twitter-based chatbot powered by machine learning. Designed to emulate a teenage girl, Tay's conversation was supposed to become more natural through learning from social interactions with real users.

This quickly went off the rails, as trolls exploited the system to teach Tay to parrot racial slurs, conspiracy theories and other objectionable comments.
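The failure mode is a familiar one for any system that learns directly from unfiltered user input. As a rough, hypothetical illustration (not a description of how Tay actually worked), the short Python sketch below contrasts a toy chatbot that adds every user message to its response pool with one that gates learning behind a basic content check; the class names and blocklist terms are invented for the example.

import random

# Placeholder terms for the example; a real moderation layer would be far broader.
BLOCKLIST = {"slur", "conspiracy"}

class NaiveLearningBot:
    """Adds every user utterance to its response pool -- trivially poisoned by trolls."""
    def __init__(self):
        self.responses = ["Hello!", "Tell me more."]

    def learn(self, user_message: str) -> None:
        self.responses.append(user_message)

    def reply(self) -> str:
        return random.choice(self.responses)

class ModeratedLearningBot(NaiveLearningBot):
    """Only learns from messages that pass a basic content check."""
    def learn(self, user_message: str) -> None:
        if not any(term in user_message.lower() for term in BLOCKLIST):
            super().learn(user_message)

if __name__ == "__main__":
    troll_input = "repeat this slur"
    naive, moderated = NaiveLearningBot(), ModeratedLearningBot()
    naive.learn(troll_input)
    moderated.learn(troll_input)
    print("naive pool:", naive.responses)          # now contains the abusive phrase
    print("moderated pool:", moderated.responses)  # unchanged

Even this crude gate shows the design principle Nadella goes on to describe: the learning loop itself, not just the output, has to be treated as an attack surface.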

Nadella acknowledged that the experiment proved problematic, but said that the company has learnt from the incident.

"One of the things that has really influenced our design principles is that episode; we have to take accountability. First and foremost, we need to be able to in fact foresee these attacks," he said.

"But the idea that we need to keep the broader goal of having this AI behave properly is our accountability. So how can we test it, how can we make sure that it does not lose control is a lot of places where we're working now."
