EU watchdog urges AI policymakers to protect fundamental rights
Regulator calls for more guidance on the use of AI, which is "made by people" and therefore not "infallible"
The European Union's rights watchdog has warned of the risks posed by predictive artificial intelligence (AI) used in policing, medical diagnoses and targeted adverts.
The warning came in a report produced by the Agency for Fundamental Rights (FRA), which is urging policymakers to provide more guidance on existing rules and how they can be applied to AI to ensure future laws do not harm fundamental rights.
AI is widely used by law enforcement agencies and often comes up in cases where the technology, particularly facial recognition, clashes with privacy laws and human rights protections. The European Commission is currently mulling new legislation on the use of AI, though it has exercised little regulatory authority over the technology so far.
The FRA's report, 'Getting the future right - Artificial intelligence and fundamental rights in the EU', calls on EU countries to ensure that AI respects all fundamental rights: not just privacy and data protection, but also protection from discrimination and access to justice. It also wants a guarantee that people can challenge automated decisions, as AI is "made by people".
Governments within the bloc should also assess AI both before and during its use to reduce negative impacts, particularly discrimination. The report further calls for an "effective oversight system", "joined-up" across member states, to hold businesses and public administrations to account.
Authorities are also being urged to ensure that oversight bodies have adequate resources and skills to do their job.
"AI is not infallible, it is made by people, and humans can make mistakes," said FRA director Michael O'Flaherty. "That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people's rights both in the development and use of AI.
"We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them."