Why transparency is key to promoting trust in artificial intelligence

Breaking open the black box with explainable AI is the first step in making the technology fairer for all

Artificial intelligence (AI) is inescapable. In our daily lives we probably encounter it, and its best friend machine learning, much more frequently than we think. Did you buy something online yesterday, use face login on your smartphone, check your Facebook, look for something on Google, or use Google Maps? AI was right there.


When AI is helping us find the most efficient route home, we’re often quite happy to let it do its job. But this technology already does so much more, from helping to decide whether we’re granted bank loans and assisting in diagnosing our illnesses, to presenting us with targeted advertising.

A question of trust

As AI gets more and more embedded in our lives and helps make decisions that are increasingly significant to us, we’re rightly concerned about transparency. When big news stories such as the Cambridge Analytica scandal, or the ongoing discussion around inherent biases in facial recognition, hit the headlines, we worry about bias (intentional or otherwise) and our trust in AI takes a hit.

Explainable AI gives us a route to greater trust in AI. It is designed to help us learn more about how AI works in any given situation. So, instead of the AI just giving us an answer to a question, it shows us how it got to that answer. The alternative is the so-called ‘black box’ situation, where an AI uses an unspecified range of information and algorithms to reach an answer but doesn’t make any of this transparent.


In theory, explainable AI gives us confidence in the conclusions an AI system draws. Dr Terence Tse, Associate Professor of Finance at ESCP Business School, gives the following example: “Imagine you want to obtain a loan and the approval is purely determined by an algorithm. Your loan gets rejected. If the algorithm in question is a black box it’s an issue for all parties. The bank cannot say why this is happening, and you don't know what to do in order to obtain the loan. Having explainable AI will help.”
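To make the idea concrete, here is a minimal sketch in Python, not drawn from any of the systems mentioned in this article, showing how even a simple, interpretable loan model can report which factors pushed an application towards rejection. The feature names, data and decision rule are invented purely for illustration.

```python
# A minimal, illustrative sketch: an interpretable loan model that can
# explain its own decisions. All data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [income (thousands), debt ratio, years of credit history]
X = rng.normal(loc=[45, 0.4, 8], scale=[15, 0.15, 5], size=(500, 3))
# Toy labelling rule: higher income and history, lower debt -> approve
y = (0.04 * X[:, 0] - 3.0 * X[:, 1] + 0.1 * X[:, 2] > 1.2).astype(int)

feature_names = ["income", "debt_ratio", "credit_history_years"]
model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Print the decision and each feature's contribution to the score."""
    contributions = model.coef_[0] * applicant
    decision = "approved" if model.predict([applicant])[0] == 1 else "rejected"
    print(f"Decision: {decision}")
    for name, value, contrib in zip(feature_names, applicant, contributions):
        print(f"  {name} = {value:.2f} pushed the score by {contrib:+.2f}")

# A hypothetical rejected applicant: low income, high debt ratio
explain(np.array([28.0, 0.65, 2.0]))
```

A rejected applicant, or the bank, can see at a glance which factors weighed against the application, which is exactly the information a black-box model withholds.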

Shedding light on competence

Explainable AI is a vital aspect of understanding an AI’s competence in coming up with any particular set of outputs. Mark Stefik, Research Fellow and Lead of Explainable AI at PARC, a Xerox company, tells IT Pro: “Typically, when people interact with AIs and the systems do the right thing, then people overestimate the AI’s competence. They assume that the machines think like people, which they do not. They assume that machines have common sense, which they do not.”

In fact, AI does not ‘think’ like humans do at all. We use ‘think’ in relation to AI to describe a way of working that in reality is different to that of our own brains. AI uses algorithms and machine learning to help it draw conclusions from data it is given, or from insights it generates. In showing how an AI has reached its decision, explainable AI can help uncover biases and in doing so not only provide individuals with redress, as in the banking example above, but also help refine the AI system itself. 


Oleg Rogynskyy, Founder and CEO of People.ai says: “A lack of explainability on how the machine learning model thinks can result in biases. If there is a bias hidden in the data set a machine learning model is trained on, it will consider the bias a ground truth.

“Explainability techniques can be used to detect and then remove biases and ensure a level of trust between the machines and the user.”
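As an illustration of the kind of check Rogynskyy describes, the short Python sketch below compares a trained model’s approval rates across a sensitive attribute. The data, column names and the 80% threshold are assumptions made for the example, not a prescribed method.

```python
# Illustrative bias check: compare a model's approval rate across a
# sensitive attribute. The data, column names and the 80% rule of
# thumb are assumptions for this example, not a fixed standard.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Pretend these are a trained model's decisions for 1,000 applicants,
# with a sensitive attribute recorded for auditing purposes only.
df = pd.DataFrame({
    "approved": rng.integers(0, 2, size=1000),
    "group": rng.choice(["A", "B"], size=1000, p=[0.7, 0.3]),
})

# Inject a skew against group B so the check has something to find.
df.loc[(df["group"] == "B") & (rng.random(len(df)) < 0.5), "approved"] = 0

rates = df.groupby("group")["approved"].mean()
print(rates)

# Rule of thumb: flag the model if one group's approval rate falls
# below 80% of the best-off group's rate.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias detected -- investigate the training data.")
```

A check like this only surfaces the symptom; the fix usually lies in the training data the model was fed, which is Rogynskyy’s point about hidden biases becoming ground truth.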

Making explainable AI ubiquitous

As AI takes an increasingly important role in our everyday lives, we are getting more and more concerned about whether we can trust it. As Stefik puts it: “The need for explainable AI increases if we want to use the systems in critical situations, where there are real consequences for good and bad decisions. People want to know when they can trust the systems before they rely on them.”


The industry recognises this need. In a recent IBM survey of 4,500 IT decision makers, 83% of respondents said being able to explain how AI arrived at a decision was important. That figure rose to 92% among those already deploying AI, compared with 75% of those still considering a deployment.


Rogynskyy is unequivocal in his message, saying: “Explainable AI must be prevalent everywhere.” Tse is similarly forthright, adding: “If we want to gain public trust in the deployment of AI, we have to make explainable AI a priority.”

Stefik, however, has reservations, particularly when it comes to how we define terms like ‘trust’ and ‘explainable’, which he argues are nuanced and complex concepts. Nevertheless, he hasn’t written explainable AI off completely, saying: “It is not ready as a complete (or well-defined) approach to making trustworthy systems, but it will be part of the solution.”
