Stephen Hawking signs open letter against AI "pitfalls"

Elon Musk also signs letter warning AI researchers to make society-friendly robots

Stephen Hawking and tech investor Elon Musk have signed an open letter calling for greater focus on making artificially intelligent robots do just what we tell them to.

They join dozens of scientists, professors and experts who have also signed the Future of Life Institute's (FLI) letter, which comes as leading visionaries warn against AI's potential threat to human jobs.

Scientist Hawking even warned that AI could spell the end for humanity last month, while Tesla Motors founder Musk has spoken of the potential for "a dangerous outcome" from AI research.

The FLI's letter calls for researchers to create "robust and beneficial" AI, while avoiding any "potential pitfalls".


It reads: "Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

"We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."

Other signatories include Google researchers, Oxford and Cambridge professors and three co-founders of DeepMind, a startup bought by Google for $400 million.

DeepMind claims to have created an advanced neural network that allows machines to store short-term memories and learn from them.

The company said this creates the ability for machines to operate beyond the initial capabilities of their programming.

SpaceX founder Musk said last October that AI could be our "biggest existential threat", suggesting that regulatory oversight could be necessary to safely develop robots that pose no threat to the human race.


He told delegates at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence.

"I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."

However, the open letter also outlined hopes that smart robots could help solve various human crises such as the spread of diseases.

"The eradication of disease and poverty are not unfathomable," it said.

Google's chairman, Eric Schmidt, called concerns over AI "misguided" last month.

