AI framework to prevent sexist and racist algorithms

The World Economic Forum releases guidelines for businesses and governments to build and deploy ethical AI systems


The World Economic Forum has published a framework for ethical work on artificial intelligence, designed to reduce the risk of gender and racial bias within computer systems.

The framework will act as a guideline for governments and businesses, pinpointing where and how they can integrate an ethical approach to algorithms and data sets.

With AI-based technology quickly spreading throughout the world, there is growing concern that a lack of diversity among those building these systems could embed prejudice into them.

To combat this, the World Economic Forum has released its 'Responsible Use of Technology Report', a framework for governments and businesses to counter the growing societal risks linked to AI.

"Numerous government and large technology companies around the world have announced strategies for managing emerging technologies," said Pablo Quintanilla, a fellow at the World Economic Forum and director in the Office of Innovation, Salesforce.

"This project presents an opportunity for companies, national governments, civil society organisations and consumers to teach and to learn from each other how to better build and deploy ethically-sound technology. Having an inclusive vision requires collaboration across all global stakeholders."

The guide was co-designed by industry leaders from civil society, international organisations and businesses including the United Nations Office of the High Commissioner for Human Rights, Microsoft, Uber, Salesforce, IDEO, Deloitte, Omidyar Network and Workday.

For the project, teams examined national technology strategies, international business programmes and ethical task forces from around the world, combining lessons learned with local expertise to develop a guide that is inclusive across different cultures.

Similar work has been undertaken in the UK with the Centre for Data Ethics and Innovation joining forces with the Cabinet Office's Race Disparity Unit to investigate potential bias in algorithmic decision-making.

"We want to work with organisations so they can maximise the benefits of data-driven technology and use it to ensure the decisions they make are fair," said Roger Taylor, chair of the Centre for Data Ethics and Innovation.

"As a first step, we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people's lives."
