Read Google's five rules for human-friendly AI

Google updates Asimov's Three Laws of Robotics for AI developers

Google has come up with five rules to create human-friendly AI - superseding Isaac Asimov's Three Laws of Robotics.

The tech giant, whose DeepMind division recently devised an AI capable of beating the world's best Go player, believes AI creators should ask themselves these five fundamental questions to avoid the risk of a singularity in which robots rule over humankind.

Google Research's Chris Olah outlined the questions in a research paper titled Concrete Problems in AI Safety, saying: "While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative.

"We believe it's essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably."


Published in collaboration with OpenAI, Stanford and Berkeley, the paper uses a cleaning robot as an example to outline the following five rules.

Avoiding negative side effects: Ensuring that an AI system will not disturb its environment in negative ways while completing its tasks.

Avoiding reward hacking: An effective AI needs to complete its task properly without cutting corners.

Scalable oversight: An AI should be able to learn from limited human feedback, without needing to check in with its programmer at every step.

Safe exploration: AI needs to avoid damaging objects in its environment as it performs its task.

Robustness to distributional shift: AI should be able to adapt to an environment that it has not initially been conditioned for, and still perform.
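
To make the first two of those ideas concrete, here is a minimal, hypothetical sketch in Python - the cleaning-robot state fields, reward values and penalty weight are our own illustration, not taken from the paper - showing how a naively specified reward can be "hacked" by hiding the mess rather than cleaning it, and how penalising side effects changes the incentive:

    # Toy illustration of "reward hacking" and "negative side effects" for a
    # hypothetical cleaning robot. None of these names come from the paper.

    def naive_reward(state):
        # Rewards the robot whenever the mess is no longer visible - which the
        # robot can achieve by covering the mess up instead of cleaning it.
        return 1.0 if not state["mess_visible"] else 0.0

    def safer_reward(state, side_effect_penalty=5.0):
        # Rewards actual cleaning and penalises disturbing the environment.
        reward = 1.0 if state["mess_cleaned"] else 0.0
        return reward - side_effect_penalty * state["objects_knocked_over"]

    # The "hack": the mess is hidden, not cleaned, and a vase got knocked over.
    hacked = {"mess_visible": False, "mess_cleaned": False, "objects_knocked_over": 1}
    honest = {"mess_visible": False, "mess_cleaned": True, "objects_knocked_over": 0}

    print(naive_reward(hacked), naive_reward(honest))   # 1.0 1.0 - hacking pays as well as cleaning
    print(safer_reward(hacked), safer_reward(honest))   # -5.0 1.0 - only real cleaning pays

The point, as the paper argues, is that small gaps between what a reward function measures and what its designers actually want can be exploited by a sufficiently capable learner.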

Google has poured significant resources into deep learning and AI, against a backdrop of fears about the technology voiced by luminaries including SpaceX founder Elon Musk and physicist Stephen Hawking.

DeepMind is also working on a failsafe that would effectively shut off an AI if it attempted to disobey its users.

Other firms are exploring AI too: Microsoft, for example, has used AI to tell stories about holiday photos, and debuted its teen chatbot, Tay, which spouted offensive replies on Twitter.
