Businesses put AI innovation on hold to avoid public backlash

Several significant barriers to building AI embedded with ethical principles are contributing to swelling public distrust in automated and data-driven technologies and software, according to the Centre for Data Ethics and Innovation (CDEI).

A number of UK sectors may be unwilling to experiment and engage in AI-based innovation for fear of sustaining reputational damage, a panel of more than 120 experts concluded in the arm's-length government organisation's AI Barometer.

There are considerable opportunities in successfully adopting ethically embedded AI, ranging from building a fairer justice system to more efficient decarbonisation and more effective public health research and disease tracking. However, these 'harder to achieve' opportunities are unlikely to be realised without concerted government support and a clear national policy, according to CDEI chair Roger Taylor.

“These opportunities have a number of common characteristics,” Taylor said. “They require coordinated action across organisations or ecosystems; they involve the use of very large-scale complex data about people; and they affect decisions that have an immediate and significant impact on people’s lives.

“The second overarching conclusion is that there are a number of common barriers to achieving these ‘harder to achieve’ benefits. Some relate to the workforce – the skills and diversity of those working on these problems. Some involve our state of knowledge, about, for example, what the public will accept as ethical. Others relate to the data governance and regulatory structures we currently have in place.”

As part of the barometer, the panel assessed the viability and severity of 19 common risks across five sectors: criminal justice, financial services, health and social care, digital and social media, and energy and utilities.

Bias leading to discrimination and a lack of explainability were both deemed severe risks in four of the five sectors. Cyber attacks, the failure of consent mechanisms, and a lack of transparency were also judged severe risks in most industries, and moderate risks in the rest.

Conversely, the panellists rated the loss of trust in institutions as a severe or moderate risk, but judged the loss of trust in AI, along with low accuracy, to be generally low-to-moderate risks across the five sectors.

There are several barriers that exist to addressing these risks, the report continued, ranging from regulatory confusion to market disincentives.

Regulatory confusion may arise, for example, with new technologies such as facial recognition, where the ethics and application can fall through the gaps between disparate regulators. Market disincentives, meanwhile, might manifest as social media companies fearing a loss of profits if they take action to mitigate disinformation.

The CDEI picked out three barriers in particular that are acutely contributing to a swelling sense of public distrust: low data quality and availability, a lack of coordinated policy and practice, and a lack of transparency around AI and data use.

The use of poor data in training algorithms can lead to faulty or biased systems, the report outlined, while the concentration of market power over data and an unwillingness to share data both stymie innovation.

The guidance, training and approaches used across the development and deployment of AI and data-driven systems are also highly localised and disparate. Regulatory approaches may vary between sectors, which can lead to confusion among those deploying the technology as well as those overseeing it.

This comes in tandem with a lack of transparency, with neither the private nor the public sector always being open about how they use AI, or how they are regulated. This prevents the scrutiny and accountability that would otherwise support ethical innovation.

If these barriers are not addressed, they will feed into a chronic loss of trust, which the report deems a bigger brake on innovation than several of the barriers combined. Consumers would then be unlikely to use new technologies or share the data needed to build them, which would not only hamper businesses' ability to build functional and useful AI products, but also deter them from engaging in innovation for fear of meeting opposition.

The CDEI plans to promote the findings of its 152-page AI Barometer to policymakers and other decision-makers across the industry, in regulation and in research. The report will also be further expanded over the next year with the panel examining additional sectors to gather a broader understanding of the barriers to implementing ethical AI.

The body is also launching a new programme of work that aims to address many of these institutional barriers as they arise in various settings, ranging from policing to social media platforms. The CDEI plans to work with private and public sector partners to ensure the recommendations are taken seriously and implemented.

Keumars Afifi-Sabet
Features Editor

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars on LiveScience, where he runs its Technology section.