IBM to snuff out AI bias with updated Watson OpenScale

Watson OpenScale now has recommended bias monitors to automatically detect gender and ethnic bias


IBM has added a feature to its Watson OpenScale software that detects and mitigates gender and ethnic bias.

These recommended bias monitors are the latest addition to Watson OpenScale, which launched in September 2018 to give business users and non-data scientists the ability to monitor their AI and machine learning models and better understand their performance. The software monitors for algorithmic bias and provides explanations for AI outputs.

Until now, users manually selected which features or attributes of a model to monitor for bias in production, based on their own knowledge. According to IBM, with the recommended bias monitors, Watson OpenScale will now automatically identify whether known protected attributes, including sex, ethnicity, marital status, and age, are present in a model and recommend that they be monitored.
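Conceptually, that check amounts to scanning a model's input features against a list of known protected attributes. The sketch below is a hypothetical illustration of the idea only, assuming a simple name-matching approach; the function and attribute list are made up for this example and are not the Watson OpenScale API.

```python
# Hypothetical sketch: flag a model's features that match known protected
# attributes so they can be recommended for bias monitoring. Illustrative
# only -- not the actual Watson OpenScale implementation.

PROTECTED_ATTRIBUTES = {"sex", "gender", "ethnicity", "race",
                        "marital_status", "age"}

def recommend_bias_monitors(feature_names):
    """Return the subset of a model's features that match known
    protected attributes and so warrant a bias monitor."""
    recommended = []
    for name in feature_names:
        # Normalise the feature name before comparing against the list.
        if name.lower().replace("-", "_") in PROTECTED_ATTRIBUTES:
            recommended.append(name)
    return recommended

features = ["income", "age", "Marital_Status", "credit_score", "sex"]
print(recommend_bias_monitors(features))  # → ['age', 'Marital_Status', 'sex']
```

In practice such a check would also need to catch proxy features that correlate with protected attributes, which simple name matching cannot do.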

What's more, IBM says it is working with the regulatory compliance experts at Promontory to continue expanding this list of attributes to cover the sensitive demographic attributes most commonly referenced in data regulation.

"As regulators begin to turn a sharper eye on algorithmic bias, it is becoming more critical that organisations have a clear understanding of how their models are performing and whether they are producing unfair outcomes for certain groups," said Susannah Shattuck, the offering manager for Watson OpenScale.


Artificial intelligence is advancing rapidly, particularly in the UK, which is frequently cited as one of the field's leading developers. That growth, however, is tempered by concerns that the technology is being developed in ways that accentuate inequality.

In March, the Centre for Data Ethics and Innovation (CDEI) announced it had joined forces with the Cabinet Office's Race Disparity Unit to investigate potential bias in algorithmic decision-making.

As algorithms become more commonplace in society, their potential to help people increases. However, recent reports have shown that human bias can creep into algorithms, ultimately harming the very people they are meant to help.
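One common way such bias is quantified is the disparate impact ratio: the rate of favourable outcomes for an unprivileged group divided by the rate for the privileged group, with values below roughly 0.8 often treated as evidence of adverse impact (the "four-fifths rule"). The following is a minimal sketch of that standard metric, not a description of any specific product's implementation.

```python
# Minimal sketch of the disparate impact ratio, a widely used fairness
# metric. Data and group labels below are invented for illustration.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favourable-outcome rates: unprivileged / privileged.

    `outcomes` holds 1 (favourable) or 0 per individual; `groups` holds
    the corresponding group label for each individual.
    """
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["f", "f", "m", "m", "f", "m", "f", "m"]
ratio = disparate_impact(outcomes, groups, unprivileged="f", privileged="m")
print(round(ratio, 2))  # → 0.33, well below the 0.8 rule-of-thumb threshold
```

A ratio this far below 0.8 would be exactly the kind of unfair outcome for a group that monitoring tools aim to surface.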
