AI can develop prejudice without human input

MIT researchers found that robots in groups can become prejudiced against AIs in other groups


There is a danger that human prejudices will make their way into AI, but research from MIT suggests that robots can also develop prejudice on their own.

Researchers set up a simulated game in which groups of AI-powered robots each chose whether to donate to a robot within their own group or to one in an outside group. The robots based these decisions on each robot's reputation and on their own donation strategies.

The researchers ran thousands of simulations, during which the robots learned new strategies by copying one another, either within their own group or across the entire population. The study found that the robots copied strategies that gave them a better payoff in the short term.
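The study's actual model isn't detailed in this article, so the following is a minimal, hypothetical sketch of a donation game of this kind: agents carry a strategy (here reduced to a single probability of donating out-group), donating costs the donor and benefits the recipient, and agents copy the strategy of peers with a higher short-term payoff. All names and parameters are illustrative assumptions, not the researchers' implementation.

```python
import random

NUM_GROUPS = 4
AGENTS_PER_GROUP = 25
ROUNDS = 200

class Agent:
    def __init__(self, group):
        self.group = group
        # Strategy: probability of donating to an out-group agent;
        # otherwise the agent donates within its own group.
        self.out_group_rate = random.random()
        self.payoff = 0.0

def play_round(agents):
    """Every agent donates once: a small cost to the donor, a larger
    benefit to the recipient."""
    for agent in agents:
        if random.random() < agent.out_group_rate:
            pool = [a for a in agents if a.group != agent.group]
        else:
            pool = [a for a in agents
                    if a.group == agent.group and a is not agent]
        recipient = random.choice(pool)
        agent.payoff -= 1.0      # cost of donating
        recipient.payoff += 2.0  # benefit to the recipient

def evolve(agents):
    """Each agent copies a random peer's strategy if that peer earned
    a higher short-term payoff, with a little mutation, then payoffs
    reset for the next round."""
    for agent in agents:
        peer = random.choice(agents)
        if peer.payoff > agent.payoff:
            mutated = peer.out_group_rate + random.gauss(0, 0.05)
            agent.out_group_rate = min(1.0, max(0.0, mutated))
    for agent in agents:
        agent.payoff = 0.0

agents = [Agent(g) for g in range(NUM_GROUPS)
          for _ in range(AGENTS_PER_GROUP)]
for _ in range(ROUNDS):
    play_round(agents)
    evolve(agents)

avg_out_rate = sum(a.out_group_rate for a in agents) / len(agents)
print(f"Average out-group donation rate after {ROUNDS} rounds: "
      f"{avg_out_rate:.2f}")
```

In a sketch like this, groups whose members keep donations in-group concentrate the benefits among themselves, so payoff-chasing imitation can drive out-group donation rates down without any agent being told to discriminate, which is the flavour of result the researchers describe.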

As such, the results showed that the robots became increasingly prejudiced against those from other groups they had not learnt from, and the researchers noted that groups of autonomous machines could demonstrate prejudice simply by identifying, copying and learning behaviour from one another.


"Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivized in virtual populations, to the detriment of wider connectivity with others," wrote Cardiff University's Professor Roger Whitaker, as reported by TechCrunch.

"Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse."

The research therefore suggested to the team that high cognitive ability isn't necessarily required to develop prejudice.

Until now, artificial intelligence that has shown signs of prejudice has done so as a result of its human interactions, such as Microsoft's ill-fated chatbot Tay, which began posting racist content after learning it from other Twitter users.

However, the research from MIT suggests that even a limited form of the technology might develop such bias by default, adding to concerns about a field that is increasingly coming under scrutiny. Recently, the incoming president of the British Science Association warned of a fear-driven public backlash fuelled by misconceptions about AI.
