So long Skynet: DeepMind sets up ethics committee

AI tech isn't "value neutral" and its impact must be controlled, organisation says

Alphabet's AI research unit, DeepMind, has established an ethics and society committee to help avoid a dystopian future ruled over by hyper-intelligent and potentially deadly machines.

DeepMind is part of an increasingly large segment of the tech industry that's starting to consider the potential negative consequences of AI, and looking to mitigate them.

While not quite in the group that believes malevolent AI is almost a given, such as Elon Musk and Steve Wozniak, the organisation stated that "technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work".

"The development of AI creates important and complex questions. Its impact on society and on all our lives is not something that should be left to chance," said DeepMind Ethics and Society co-leads Verity Harding and Sean Legassick in a blog post. "Beneficial outcomes and protections against harms must be actively fought for and built-in from the beginning."

This is easier said than done in the field of AI, though, Harding and Legassick said, which is why DeepMind Ethics & Society has been established. It will examine the real-world impacts of AI with the aim of helping technologists put ethics into practice and of helping society at large understand the potential effects of AI. This latter aspect, they hope, will enable society to direct the course of the technology's development "so that it works for the benefit of all".

While it is a DeepMind initiative, Ethics & Society will draw on input from several external "fellows". These include Nick Bostrom of Oxford University's Future of Humanity Institute and Strategic Artificial Intelligence Research Centre; Diane Coyle, economics professor at the University of Manchester and co-director of Policy@Manchester; and Jeffrey Sachs, UN senior advisor and director of the Center for Sustainable Development at Columbia University.

"If AI technologies are to serve society, they must be shaped by society's priorities and concerns," said Harding and Legassick.

"With the creation of DeepMind Ethics & Society, we hope to challenge assumptions including our own and pave the way for truly beneficial and responsible AI," they concluded.
