Tay scandal taught us to take accountability, says Microsoft CEO

Satya Nadella says Redmond has learned from its disastrous racist chatbot

Microsoft CEO Satya Nadella stressed today that it is the job of AI companies to ensure that artificial intelligence is kept under control.

At the London launch of his new book, Hit Refresh, IT Pro heard Nadella address a question posed by many AI sceptics: what happens if tech companies create an AI that gets out of hand?


"It's up to us. In other words, how do we approach this with a set of design principles that allow us to control what AI we create? Just like good user experience, I would claim there is good AI," he said. "As designers of AI, it's our responsibility."

Microsoft has invested heavily in the space, and Nadella considers it one of the three main pillars of the company's future, alongside quantum computing and mixed reality. Its most visible AI progress has come through the digital assistant Cortana, but it has also advanced in other areas, including machine vision and advanced analytics.

Some of its AI experiments, however, have been less successful. One particularly embarrassing failure was Tay, a Twitter-based chatbot powered by machine learning. Designed to emulate a teenage girl, Tay's conversation was supposed to become more natural through learning from social interactions with real users.


This quickly went off the rails, as trolls exploited the system in order to teach Tay to parrot racial slurs, conspiracy theories and other objectionable comments.
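For readers wondering how that kind of poisoning happens, the sketch below is a purely hypothetical illustration, not Tay's actual code or Microsoft's design: a chatbot that naively memorises user phrases will repeat whatever it is taught, while even a crude content filter blocks the most obvious abuse.

```python
# Hypothetical illustration only -- not Tay's implementation.
# A bot that learns phrases verbatim from users can be taught to say anything;
# a simple blocklist shows one (very crude) mitigation.
import random

BLOCKLIST = {"slur", "conspiracy"}  # placeholder terms for disallowed content


class NaiveChatbot:
    def __init__(self):
        self.learned_phrases = ["hello!", "tell me more"]

    def learn(self, message: str) -> None:
        # Unfiltered learning: whatever users say becomes part of the bot's output.
        self.learned_phrases.append(message)

    def learn_filtered(self, message: str) -> None:
        # Filtered learning: drop messages containing disallowed terms.
        if not any(term in message.lower() for term in BLOCKLIST):
            self.learned_phrases.append(message)

    def reply(self) -> str:
        # Replies are sampled from everything the bot has "learned".
        return random.choice(self.learned_phrases)


bot = NaiveChatbot()
bot.learn("repeat this conspiracy theory")           # poisoned input is absorbed
bot.learn_filtered("repeat this conspiracy theory")  # filtered path rejects it
print(bot.reply())
```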


Nadella acknowledged that the experiment proved problematic, but said that the company has learnt from the incident.

"One of the things that has really influenced our design principles is that episode; we have to take accountability. First and foremost, we need to be able to in fact foresee these attacks," he said.

"But the idea that we need to keep the broader goal of having this AI behave properly is our accountability. So how can we test it, how can we make sure that it does not lose control is a lot of places where we're working now."
