World Economic Forum lambasts AI bias

Ethical questions about artificial intelligence are being raised by “obvious problems” with biased algorithms


A lack of diversity in the tech industry is raising serious questions about the future of artificial intelligence, according to the head of AI and machine learning at the World Economic Forum.

Speaking at an event in Tianjin, China, Kay Firth-Butterfield flagged the issue of bias within AI algorithms and called for the industry in the West to become "much more diverse".

"There have been some obvious problems with AI algorithms," she told CNBC, citing a case from 2015 in which Google's image-recognition software labelled a black man and his friend as "gorillas". According to a report published earlier this year by Wired, Google has yet to properly fix the issue, opting instead simply to block search terms relating to primates.

"As we've seen more and more of these things crop up, then the ethical debate around artificial intelligence has become much greater," said Firth-Butterfield. She also noted the rollout of General Data Protection Regulation (GDPR) in Europe, claiming this has brought ethical questions about data and technology "to the fore".

The dominance of "white men of a certain age" in building technology was singled out as a root cause of bias creeping into the algorithms behind AI. Training machine-learning systems on racially uneven data sets has previously been identified as a problem, particularly within facial-recognition software.

An experiment undertaken earlier this year at the Massachusetts Institute of Technology (MIT), for example, involved testing three commercially available facial-recognition systems, developed by Microsoft, IBM and the Chinese firm Megvii. The researchers found that the systems correctly identified the gender of white men 99% of the time, but that this success rate plummeted to 35% for black women.

Dr Adrian Weller, programme director for artificial intelligence at The Alan Turing Institute, told IT Pro: "Algorithmic systems are increasingly used in ways that can directly impact our lives, such as in making decisions about loans, hiring or even criminal sentencing. There is an urgent need to ensure that these systems treat all people fairly - they must not discriminate inappropriately against any individual or subgroup. 

"This is a particular concern when machine learning methods are used to train systems on past human decisions which may reflect historic prejudice."

Weller noted that a growing body of work is addressing the challenge of making algorithms fair, transparent and ethical. This outlook is similar to that of Firth-Butterfield, who emphasised that the World Economic Forum is trying to ensure AI grows "for the benefit of humanity".

A lack of human diversity might not be the only source of AI bias. A recent study by Cardiff University and MIT found that groups of autonomous machines can demonstrate prejudice simply by identifying, copying and learning the behaviour from one another.
