Stephen Hawking signs open letter against AI "pitfalls"

Elon Musk also signs letter warning AI researchers to make society-friendly robots

Stephen Hawking and tech investor Elon Musk have signed an open letter calling for greater focus on making artificially intelligent robots do just what we tell them to.

They join dozens of scientists, professors and experts who have also signed the Future of Life Institute's (FLI) letter, which comes as leading visionaries warn against AI's potential threat to human jobs.


Hawking warned last month that AI could spell the end for humanity, while Tesla Motors CEO Musk has spoken of the potential for "a dangerous outcome" from AI research.

The FLI's letter calls for researchers to create "robust and beneficial" AI, while avoiding any "potential pitfalls".

It reads: "Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

"We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."

Other signatories include Google researchers, Oxford and Cambridge professors and three co-founders of DeepMind, a startup bought by Google for $400 million.

DeepMind claims to have created an advanced neural network that allows machines to store short-term memories and learn from them.


The company said this creates the ability for machines to operate beyond the initial capabilities of their programming.


SpaceX founder Musk said last October that AI could be our "biggest existential threat", suggesting that regulatory oversight may be needed to ensure robots are developed safely and pose no threat to the human race.

He told delegates at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence.

"I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."

However, the open letter also expressed hope that intelligent machines could help tackle major human crises, such as the spread of disease.

"The eradication of disease and poverty are not unfathomable," it said.

Google's chairman, Eric Schmidt, called concerns over AI "misguided" last month.
