Businesses blame lack of resources for not explaining AI decisions

Research finds there's no 'one-size-fits-all' approach as context behind decisions matters more than anything

Companies using artificial intelligence (AI) more often than not cite non-technical reasons, such as high costs, to excuse not fully explaining to people why these systems have reached certain conclusions.

The main challenges that businesses face in explaining the rationale to end users behind decisions made by AI-powered systems hinge mostly on logistics and resources, rather than anything technical, the Information Commissioner's Office (ICO) has revealed.

Its interim report, published this week in collaboration with the Alan Turing Institute, was commissioned to explore how businesses approach explaining AI decisions to people affected by them, and how this can improve in future.

The fact that technical feasibility wasn't cited as a major stumbling block came as a source of relief to the ICO, as it means organisations responsible for deploying AI are confident the technology can be explained.

The UK's data regulator also claims its findings highlight a need for raising the profile of explaining AI decisions at board-level. This is so the right budget and personnel allocations can be made to address these issues. Businesses also face difficulties due to the lack of any standardised approach for internal accountability for explainable AI decisions.

"If an AI system makes a decision about an individual, should that person be given an explanation of how the decision was made?" said the ICO's senior policy officer Alex Hubbard.

"Should they get the same information about a decision regarding criminal justice as they would about a decision concerning healthcare?

"Industry roundtable participants generally felt confident they could technically explain the decisions made by AI. However, they raised other challenges to 'explainability' including cost, commercial sensitivities (eg infringing intellectual property) and the potential for 'gaming' or abuse of systems."

As part of its research, the ICO and the Alan Turing Institute hosted several roundtable discussions with industry representatives from the public, private and third sectors. Individuals and consumers were also invited to 'citizen juries' to discuss these themes.

Beyond the main challenges in explaining AI decisions, the report found there's a desire for education and awareness-raising activities to inform the public on the use and benefits of AI. It's not, however, clear which section of society should take responsibility for engaging the public on these issues.

There are also risks, the report claims, that awareness-raising can "simply serve to normalise the use of AI decisions, disproportionately emphasising its benefits" so people are less likely to question its use.

However, the strongest message to emerge, according to the ICO, is that context matters more than anything else in shaping individuals' expectations.

Factors such as the importance or urgency of the decision, the individual's power to change the factors influencing it, and even the scope for bias play a huge role in what people expect from organisations that deploy these technologies.

Generally, people who participated in the research expected an explanation of an AI decision in the same way they would expect an explanation of a human judgement. But they also questioned whether AI decisions should be held to higher standards, given the possibility for humans to harbour ulterior or selfish motives.

Overall, this suggests there's no "one-size-fits-all" approach; rather the content and delivery of explanations must adapt to the audience and context around the AI-based decisions.

"The ICO has said many times that data protection is not a barrier to the use of innovative and data-driven technologies," Hubbard continued. "But these opportunities cannot be taken at the expense of being transparent and open with individuals about the use of their personal data.

"The guidance will help organisations to comply with data protection law but will not be limited to this. It will also promote best practice, helping organisations to foster individuals' trust, understanding, and confidence in AI decisions."

The interim report will feed directly into the ICO's guidance for organisations, which will go out for public consultation over the summer before being published in full in the autumn.
