IBM to snuff out AI bias with updated Watson OpenScale

Watson OpenScale now has recommended bias monitors to automatically detect gender and ethnic bias

IBM has added a feature to its Watson OpenScale software that detects and mitigates gender and ethnic bias.

These recommended bias monitors are the latest addition to Watson OpenScale, which launched in September 2018 to give business users and non-data scientists the ability to monitor their AI and machine learning models and better understand how they perform. The software monitors for algorithmic bias and provides explanations for AI outputs.

Until now, users have had to manually select which features or attributes of a model to monitor for bias in production, based on their own knowledge. According to IBM, with the recommended bias monitors, Watson OpenScale will now automatically identify whether known protected attributes, including sex, ethnicity, marital status and age, are present in a model and recommend that they be monitored.
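
IBM has not published the internals of these monitors, but the underlying idea is to compare how often a model produces a favourable outcome for different groups defined by a protected attribute. The Python sketch below is an illustrative example of that kind of check, not IBM's implementation or the OpenScale API: it computes a disparate impact ratio (the basis of the common "four-fifths rule") over a batch of model predictions. The column names, group labels and 0.8 threshold are assumptions made for the example.

```python
# Illustrative sketch only: a simplified fairness check in the spirit of
# OpenScale's bias monitors, NOT IBM's implementation. Column names,
# group labels and the 0.8 threshold are assumptions for this example.
import pandas as pd

def disparate_impact(df: pd.DataFrame, attribute: str,
                     reference_group: str, favourable_label: int = 1) -> dict:
    """Ratio of each group's favourable-outcome rate to the reference group's.

    A ratio below roughly 0.8 (the "four-fifths rule") flags potential bias.
    """
    # Favourable-outcome rate per group of the protected attribute
    rates = (df.groupby(attribute)["prediction"]
               .apply(lambda p: (p == favourable_label).mean()))
    reference_rate = rates[reference_group]
    return {group: rate / reference_rate for group, rate in rates.items()}

# Hypothetical scored loan applications with a monitored 'sex' attribute
scores = pd.DataFrame({
    "sex":        ["male", "male", "female", "female", "female", "male"],
    "prediction": [1, 1, 0, 1, 0, 1],   # 1 = loan approved (favourable)
})
for group, ratio in disparate_impact(scores, "sex", "male").items():
    flag = "  <- potential bias" if ratio < 0.8 else ""
    print(f"{group}: disparate impact = {ratio:.2f}{flag}")
```

In this toy data, female applicants are approved a third as often as male applicants, so the check flags the "sex" attribute; a production monitor such as OpenScale's would run comparisons of this sort continuously against live scoring traffic rather than a static batch.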

What's more, IBM says it is working with the regulatory compliance experts at Promontory to continue expanding this list of attributes to cover the sensitive demographic attributes most commonly referenced in data regulation.

"As regulators begin to turn a sharper eye on algorithmic bias, it is becoming more critical that organisations have a clear understanding of how their models are performing and whether they are producing unfair outcomes for certain groups," said Susannah Shattuck, the offering manager for Watson OpenScale.

Artificial intelligence is advancing rapidly, particularly in the UK, which is frequently cited as one of the field's leading developers. That growth, however, is often tempered by concerns that the technology is being developed in ways that accentuate inequality.

In March, the Centre for Data Ethics and Innovation (CDEI) announced it had joined forces with the Cabinet Office's Race Disparity Unit to investigate potential bias in algorithmic decision-making.

As algorithms become more commonplace in society, their potential to help people increases. However, recent reports have shown that human bias can creep into algorithms, ultimately harming the very people they are meant to help.
