EU watchdog urges AI policymakers to protect fundamental rights

Regulator calls for more guidance on the use of AI, as it is "made by people" and therefore not "infallible"


The European Union's rights watchdog has warned of the risks posed by predictive artificial intelligence (AI) used in policing, medical diagnoses and targeted adverts.

The warning came in a report produced by the Agency for Fundamental Rights (FRA), which is urging policymakers to provide more guidance on existing rules and how they can be applied to AI to ensure future laws do not harm fundamental rights. 

AI is widely used by law enforcement agencies and often comes up in cases where the technology, particularly facial recognition, clashes with privacy laws and human rights protections. The European Commission is currently mulling new legislation on the use of AI, but so far has exercised little regulatory authority over the technology.

The FRA's report, 'Getting the future right - Artificial intelligence and fundamental rights in the EU', calls on EU countries to ensure that AI respects all fundamental rights - not just privacy and data protection, but also non-discrimination and access to justice. It wants a guarantee that people can challenge automated decisions, as AI is "made by people".

Governments within the bloc should also assess AI both before and during its use to reduce negative impacts, particularly where it may discriminate. The report also calls for an "effective oversight system", which it suggests should be "joined-up" across member states to hold businesses and public administrations to account.


Authorities are also being urged to ensure that oversight bodies have adequate resources and skills to do their job.

"AI is not infallible, it is made by people, and humans can make mistakes," said FRA director Michael O'Flaherty. "That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people's rights both in the development and use of AI. 

"We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them."
