Public sector to be probed over AI and data-driven tech use

The Committee on Standards in Public Life aims to assess the ethical and practical challenges as AI tech adoption grows

An independent committee will scrutinise the potential impact of artificial intelligence on the public sector and the standards to which it delivers its services.

The Committee on Standards in Public Life will conduct the review, which aims to ensure that high standards of public service are maintained as "technologically assisted decision-making" becomes more widespread across the public sector.

"The increasing development and use of data and data-enabled technologies in our public services can potentially bring huge advantages in terms of pace and scale of service delivery, but there are some major ethical and practical challenges about what this means for accountability, objectivity and the other Nolan principles," said the Committee.

"As the Committee celebrates its 25th year as an advisory body conducting broad reviews of key ethical issues, we want to look at what the future holds for public services and help ensure that high standards of conduct continue to be built in' to new ways of making decisions on the public's behalf."

Those standards will centre on the Nolan principles of honesty, integrity, objectivity, openness, leadership, selflessness and accountability, and on how those values are applied and upheld in the public sector.

While the Committee has only provided a general overview of its plans to assess the impact of AI and data-driven technology on the public sector, it is likely to look at issues such as bias in machine learning systems, how data is collected and used, how aware the public is of the way AI technology is applied to their data and the services they use, and who is accountable for such technology use.

As the use of AI-centric technology and algorithmic decision-making grows within the public sector, its deployment is coming under increased scrutiny.

For example, the Centre for Data Ethics and Innovation and the Cabinet Office's Race Disparity Unit have jointly launched an investigation into the potential for bias in algorithms.

This is yet another indication that, despite the rapid advance of AI and machine learning development, there are still plenty of practical and ethical concerns around the use of such technologies that need to be addressed.
