AI can develop prejudice without human input

Researchers from Cardiff University and MIT found that robots in groups can become prejudiced against AIs in other groups


There is a danger that human prejudices will make their way into AI, but research from Cardiff University and MIT suggests that robots can develop prejudice on their own.

The researchers set up a simulated game using groups of AI-powered robots, in which each robot chose whether to donate to a member of its own group or to a member of another group. Each robot based its decisions on the reputations of the other robots and on its own donation strategy.

The researchers ran thousands of simulations, and the robots learned new strategies by copying one another, either within their own groups or across the entire population. The study found that the robots copied strategies that gave them a better payoff in the short term.
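The mechanism described above can be illustrated with a minimal sketch of such a donation game. All names, parameters and the payoff values here are hypothetical and chosen only for demonstration; they are not taken from the study itself. Each agent carries an "in-group bias" trait that governs how willing it is to donate to outsiders, and periodically copies the trait of a better-paid agent:

```python
import random

def run_donation_game(n_groups=4, group_size=10, rounds=200, seed=0):
    """Toy donation game: agents donate freely within their own group,
    donate to out-group members with probability (1 - bias), and copy
    the bias trait of higher-payoff agents. Returns the mean bias."""
    rng = random.Random(seed)
    n = n_groups * group_size
    group = [i // group_size for i in range(n)]
    bias = [rng.random() for _ in range(n)]   # 1.0 = donate in-group only
    payoff = [0.0] * n

    for _ in range(rounds):
        # Each agent considers donating to one randomly chosen partner.
        for i in range(n):
            j = rng.randrange(n)
            if j == i:
                continue
            same_group = group[i] == group[j]
            # In-group partners always receive; out-group partners
            # receive only if the donor's bias permits it.
            if same_group or rng.random() > bias[i]:
                payoff[i] -= 1.0   # cost to the donor
                payoff[j] += 2.0   # benefit to the recipient
        # Social learning: copy the trait of a better-off agent.
        for i in range(n):
            j = rng.randrange(n)
            if payoff[j] > payoff[i]:
                bias[i] = bias[j]

    return sum(bias) / n
```

Because donating to outsiders carries a cost with no short-term return, high-bias agents tend to accumulate larger payoffs, and the copying step spreads their bias through the population, which mirrors the dynamic the researchers describe.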

As such, the results showed that the robots became increasingly prejudiced against those from groups they had not learnt from, and the researchers noted that groups of autonomous machines could demonstrate prejudice simply by identifying, copying and learning behaviour from one another.

"Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivized in virtual populations, to the detriment of wider connectivity with others," wrote Cardiff University's Professor Roger Whitaker, seen via TechCrunch.

"Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse."

The research therefore suggested to the team that high cognitive ability is not necessarily required to develop prejudices.

Until now, artificial intelligence that has shown signs of prejudice has done so as a result of human interaction, such as Microsoft's ill-fated chatbot Tay, which began posting racist content after learning it from other Twitter users.

However, this research suggests that even a limited form of the technology could exhibit such bias by default, adding another concern to a field that is increasingly coming under scrutiny. Recently, the incoming president of the British Science Association warned of a fear-driven public backlash fuelled by misconceptions about AI.
