AI framework to prevent sexist and racist algorithms

The World Economic Forum releases guidelines for businesses and governments to build and deploy ethical AI systems


A new framework aims to help governments and businesses working on artificial intelligence reduce the risk of gender and racial bias within computer systems.

The framework will act as a guideline for governments and businesses, helping them pinpoint where and how to integrate an ethical approach to algorithms and data sets.

With AI-based technology spreading quickly throughout the world, there is growing concern that a lack of diversity among those building these systems will taint them with prejudice.

To combat this, the World Economic Forum has released its 'Responsible Use of Technology Report', a framework for governments and businesses to counter the growing societal risks linked to AI.

"Numerous government and large technology companies around the world have announced strategies for managing emerging technologies," said Pablo Quintanilla, a fellow at the World Economic Forum and director in the Office of Innovation, Salesforce.

"This project presents an opportunity for companies, national governments, civil society organisations and consumers to teach and to learn from each other how to better build and deploy ethically-sound technology. Having an inclusive vision requires collaboration across all global stakeholders."

The guide was co-designed by leaders from civil society, international organisations and businesses, including the United Nations' Office of the High Commissioner for Human Rights, Microsoft, Uber, Salesforce, IDEO, Deloitte, Omidyar Network and Workday.

For the project, teams examined national technology strategies, international business programmes and ethical task forces from around the world, combining lessons learned with local expertise to develop a guide that would be inclusive across different cultures.

Similar work has been undertaken in the UK, where the Centre for Data Ethics and Innovation has joined forces with the Cabinet Office's Race Disparity Unit to investigate potential bias in algorithmic decision-making.

"We want to work with organisations so they can maximise the benefits of data-driven technology and use it to ensure the decisions they make are fair," said Roger Taylor, chair of the Centre for Data Ethics and Innovation.

"As a first step, we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people's lives."
