Elon Musk co-founds OpenAI non-profit research firm

Elon Musk and a number of prominent Silicon Valley figures have come together to form an artificial intelligence research company called OpenAI, with the aim of ensuring that any advancements in artificial intelligence benefit humanity as a whole.

The non-profit organisation will conduct research in collaboration with experts in machine learning and related fields, and its researchers will be encouraged to publish their findings as papers, blog posts and open-source code. Any patents it files will also be made available to everyone.

"Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely," the collective said in a blog post.

OpenAI has received funding from a number of individuals and organisations, including AWS, Infosys and YC Research, amounting to a total of $1bn (£660m), although the company said it expects to spend only a fraction of that in the next few years.

"AI systems today have impressive but narrow capabilities," the statement continued. "It seems that we'll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly."

Elon Musk, CEO of Tesla and SpaceX, has previously spoken out against the rise of artificial intelligence (AI), calling it the "biggest existential threat" to the human race.

The comments were made during a panel discussion at the Google Quad campus in Silicon Valley (via Huffington Post), where Musk warned the audience of the potentially apocalyptic future that could be brought about by AI.

He said: "The AI researchers are all racing toward creating [superintelligence] without wondering what's going to happen if they succeed. I think AI risk is the biggest [existential] risk that I can see today by a fairly significant margin and it's happening fast - much faster than people realised."

As a potential solution, AI researcher Stuart Russell of the University of California, Berkeley recommends that AI research prioritise machines that benefit the human race over merely "smart machines".

The PayPal and Tesla co-founder has funded 37 AI projects around the world, seven of them in the UK. His $10 million (£6.5 million) contribution will, it is hoped, ensure such projects are safe and beneficial for humans.

Musk will also spend another £1 million on building an AI research centre run jointly by Oxford and Cambridge universities, in collaboration with the Open Philanthropy Project.

"There are reasons to believe that unregulated and unconstrained development could incur significant dangers, both from 'bad actors' like irresponsible governments and from the unprecedented capability of the technology itself," said Oxford University's Nick Bostrom.

"The centre will focus explicitly on the long-term impacts of AI, the strategic implications of powerful AI systems as they come to exceed human capabilities in most domains of interest, and the policy responses that could best be used to mitigate the potential risks of this technology."

The £6.5 million fund is in the care of the Future of Life Institute (FLI), which received applications from more than 300 AI researchers who were hoping to get their hands on some of the PayPal founder's fortune.

Musk said: "Here are all these leading AI researchers saying that AI safety is important. I agree with them, so I'm committing $10 million to support research aimed at keeping AI beneficial for humanity."

Last year, Musk described AI as "summoning the demon", while in March he predicted that self-driving cars would completely replace human drivers, though he does not see them as the same kind of threat as other forms of AI.

"I don't think we have to worry about autonomous cars, because that's sort of like a narrow form of AI," he claimed.

This article was originally published on 08/07/15 and has been updated several times since, most recently on 13/12/15

Caroline Preece

Caroline has been writing about technology for more than a decade, switching between consumer smart home news and reviews and in-depth B2B industry coverage. In addition to her work for IT Pro and Cloud Pro, she has contributed to a number of titles including Expert Reviews, TechRadar, The Week and many more. She is currently the smart home editor across Future Publishing's homes titles.

You can get in touch with Caroline via email at caroline.preece@futurenet.com.