Stephen Hawking signs open letter against AI "pitfalls"

Elon Musk also signs letter warning AI researchers to make society-friendly robots

Stephen Hawking and tech investor Elon Musk have signed an open letter calling for greater focus on making artificially intelligent robots do just what we tell them to.

They join dozens of scientists, professors and experts who have also signed the Future of Life Institute's (FLI) letter, which comes as leading visionaries warn against AI's potential threat to human jobs.


Hawking warned last month that AI could spell the end for humanity, while Tesla Motors founder Musk has spoken of the potential for "a dangerous outcome" from AI research.

The FLI's letter calls for researchers to create "robust and beneficial" AI, while avoiding any "potential pitfalls".

It reads: "Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

"We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."

Other signatories include Google researchers, Oxford and Cambridge professors and three co-founders of DeepMind, a startup bought by Google for $400 million.

DeepMind claims to have created an advanced neural network that allows machines to store short-term memories and learn from them.


The company said this enables machines to operate beyond the initial capabilities of their programming.


SpaceX founder Musk said last October that AI could be our "biggest existential threat", suggesting that regulatory oversight may be necessary to safely develop robots that pose no threat to the human race.

He told delegates at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence.

"I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."

However, the open letter also outlined hopes that smart robots could help solve various human crises such as the spread of diseases.

"The eradication of disease and poverty are not unfathomable," it said.

Google's chairman, Eric Schmidt, called concerns over AI "misguided" last month.


