Public sector to be probed over AI and data-driven tech use


An independent committee will scrutinise the potential impact of artificial intelligence on the public sector and the standards to which it delivers its services.

The Committee on Standards in Public Life will conduct the review, which aims to ensure that high standards of public service are maintained as "technologically assisted decision-making" becomes more widespread across the public sector.

"The increasing development and use of data and data-enabled technologies in our public services can potentially bring huge advantages in terms of pace and scale of service delivery, but there are some major ethical and practical challenges about what this means for accountability, objectivity and the other Nolan principles," said the Committee.

"As the Committee celebrates its 25th year as an advisory body conducting broad reviews of key ethical issues, we want to look at what the future holds for public services and help ensure that high standards of conduct continue to be built in' to new ways of making decisions on the public's behalf."

Those standards will centre on the values the Committee enshrines, namely "honesty, integrity, objectivity, openness, leadership, selflessness and accountability", and how such values are applied and upheld in the public sector.

While the Committee only provided a general overview of its intention to assess the impact of AI and data-driven technology on the public sector, it is likely to look at issues such as bias in machine learning systems, how data is collected and used, how aware the public is of the way AI technology is applied to their data and the services they use, and who is accountable for such technology use.

As the use of AI-centric technology and algorithmic decision-making grows within the public sector, its deployment is coming under increased scrutiny.

For example, the Centre for Data Ethics and Innovation and the Cabinet Office's Race Disparity Unit have jointly launched an investigation into the potential for bias in algorithms.

This is yet another indicator that, despite the march of AI and machine learning development, there are still plenty of practical and ethical concerns around the use of such technologies that need to be addressed.

Roland Moore-Colyer

Roland is a passionate newshound whose journalism training initially involved a broadcast specialism, but he’s since found his home in breaking news stories online and in print.

He held a freelance news editor position at ITPro for a number of years after his lengthy stint writing news, analysis, features, and columns for The Inquirer, V3, and Computing. He was also the news editor at Silicon UK before joining Tom’s Guide in April 2020 where he started as the UK Editor and now assumes the role of Managing Editor of News.

Roland’s career has seen him develop expertise in both consumer and business technology, and during his freelance days, he dabbled in the world of automotive and gaming journalism, too.