AI framework to prevent sexist and racist algorithms

The World Economic Forum releases guidelines for businesses and governments to build and deploy ethical AI systems


Businesses and governments are being urged to build diversity into their work on artificial intelligence to reduce the risk of gender and racial bias within computer systems.

The framework will act as a guide for governments and businesses, pinpointing where and how they can integrate an ethical approach to algorithms and data sets.

With AI-based technology spreading quickly around the world, there is growing concern that a lack of diversity in its development could taint these systems with prejudice.

To combat this, the World Economic Forum has released its 'Responsible Use of Technology Report', a framework for governments and businesses to counter the growing societal risks linked to AI.

"Numerous government and large technology companies around the world have announced strategies for managing emerging technologies," said Pablo Quintanilla, a fellow at the World Economic Forum and director in the Office of Innovation, Salesforce.

"This project presents an opportunity for companies, national governments, civil society organisations and consumers to teach and to learn from each other how to better build and deploy ethically-sound technology. Having an inclusive vision requires collaboration across all global stakeholders."

The guide was co-designed by industry leaders from civil society, international organisations and businesses including the United Nations Office of the High Commissioner for Human Rights, Microsoft, Uber, Salesforce, IDEO, Deloitte, Omidyar Network and Workday.

For the project, teams examined national technology strategies, international business programmes and ethical task forces from around the world, combining lessons learned with local expertise to develop a guide that would be inclusive across different cultures.

Similar work has been undertaken in the UK with the Centre for Data Ethics and Innovation joining forces with the Cabinet Office's Race Disparity Unit to investigate potential bias in algorithmic decision-making.

"We want to work with organisations so they can maximise the benefits of data-driven technology and use it to ensure the decisions they make are fair," said Roger Taylor, chair of the Centre for Data Ethics and Innovation.

"As a first step, we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people's lives."
