UK gov says media reports to blame for public distrust of AI, data collection


Research into the UK public’s perception of artificial intelligence (AI) has revealed that people still view the technology as “scary and futuristic”, even as they feel comfortable with some of its most basic applications already in use.

The UK government’s Centre for Data Ethics and Innovation (CDEI) published its findings after surveying a broad selection of the public on matters concerning data use, data security, and AI.

The report showed that a third (32%) of people expressed discomfort with the idea of AI being used to recommend web pages in internet search results. Respondents were also near-equally split over the use of AI in recruitment to decide which candidates should get an interview, with 45% saying they were comfortable with the idea.

Digital familiarity played a role in the findings, with lower technological understanding broadly correlating to increased feelings of worry and fear related to AI.

The report highlighted media coverage of AI and other data-related issues as a key driver of the public’s negative perception of the technologies and the potential for innovation using them. The government ranked second only to social media companies among the organisations the public least trusts with their data.

Highlighting some of the top stories of the latter half of 2021, such as the NHS’ plan for wider data sharing and various facial recognition cases, the CDEI said the sentiment of the coverage was more frequently negative than positive.

In the case of the NHS’ push to implement the General Practice Data for Planning and Research (GPDPR) scheme, the media echoed wider calls from healthcare and privacy experts to halt the initiative. More than a million people opted out of the programme, which was ultimately shelved as a result.

The most-read data-related stories included various data breaches, such as those involving the Labour Party, the government’s New Year Honours list, Afghan interpreters, T-Mobile, and British Airways.

AI was, for the most part, represented more positively than data sharing, with stories about the technology aiding dementia diagnosis and demonstrating advanced debating capabilities among the most popular of the year.

Data has become more of a focal point in society in recent years, and the Cambridge Analytica scandal shone a light on the dangers of mishandling personal data at a corporate level.

Since then, the UK has introduced numerous data protection initiatives and pieces of legislation in a bid to win over public trust and alleviate concerns about how personal data can be used to provide better products and services to end users.

Despite the strengthened efforts to uphold data security in the UK, the public remains uncertain about how data is used and the CDEI said these concerns must be addressed “for the full potential of data to be realised”.

“People report feelings of uncertainty about current data practices and fairly limited knowledge regarding how data about them is used and collected in their day-to-day lives, demonstrating the opportunity and importance of meaningful transparency about data use by organisations,” the report read.


“This uncertainty, alongside perceived risks around data security, data control and data accountability are barriers that must be overcome to build confidence in data use.”

Most people (52%) said they don’t have a strong understanding of how their data is being used or how it’s collected in the normal course of living their digital life, despite 93% of all respondents indicating they use the Internet either most days or every day.

Data security is seen as the most pressing issue, but despite this uncertainty over how data is used in society, people are broadly happy with their data being used in a variety of contexts, particularly for personal benefit or to improve the lives of others, according to the report.

The use of data in the government’s COVID-19 response was seen as one of the most positive applications of wide-scale data sharing, the survey showed, though concerns were raised over whether the current level of data sharing and collection benefits all areas of society.

Almost a third of people (31%) said they believed data use would adversely impact all groups in society, while 28% said they felt AI was having a negative effect on ethnic minority groups.

AI has been shown to exhibit biases of many kinds, which has led to the likes of Twitter opening up its algorithms in the hope that increased scrutiny would help stamp them out.

AI is also used alongside machine learning and machine vision to power facial recognition, a beleaguered technology plagued by stories of racial bias in law enforcement and government applications worldwide.

Connor Jones
News and Analysis Editor

Connor Jones has been at the forefront of global cyber security news coverage for the past few years, breaking developments on major stories such as LockBit’s ransomware attack on Royal Mail International, and many others. He has also made sporadic appearances on the ITPro Podcast discussing topics from home desk setups all the way to hacking systems using prosthetic limbs. He has a master’s degree in Magazine Journalism from the University of Sheffield, and has previously written for the likes of Red Bull Esports and UNILAD tech during his career that started in 2015.