Facebook and TUM create joint AI ethics research centre

The social network will contribute $7.5 million to the centre over a period of five years

Facebook has teamed up with the Technical University of Munich (TUM) to create an independent research centre focused on the study of AI ethics. 

The Institute for Ethics in Artificial Intelligence will draw on the expertise of thought leaders and academics to research potential ethical issues related to the use of AI - such as safety, privacy, fairness, and transparency - as well as identifying possible new use cases.

Facebook will contribute $7.5 million over five years and offer insight into how it's using AI and algorithms in initiatives such as its Fairness Flow tool, which can detect unintended bias. TUM also plans to consider other funding sources.

"At Facebook, ensuring the responsible and thoughtful use of AI is foundational to everything we do - from the data labels we use, to the individual algorithms we build, to the systems they are a part of," Joaquin Quiñonero Candela, director of Applied Machine Learning at Facebook, wrote in a post announcing the partnership.

"AI poses complex problems which industry alone cannot answer, and the independent academic contributions of the Institute will play a crucial role in furthering ethical research on these topics... The Institute will also benefit from Germany's position at the forefront of the conversation surrounding ethical frameworks for AI - including the creation of government-led ethical guidelines on autonomous driving - and its work with European institutions on these issues."

The Institute for Ethics in Artificial Intelligence will be led by Professor Dr. Christoph Lütge.

"At the TUM Institute for Ethics in Artificial Intelligence, we will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy," Dr. Lütge said.

"Our evidence-based research will address issues that lie at the interface of technology and human values. Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms.

"We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction."

