EU to propose GDPR-like fines for AI abuses

AI will be prohibited from use in mass surveillance or for ranking social behaviour

The European Union (EU) is set to propose a set of enforceable rules restricting the use of artificial intelligence (AI) systems, backed by the threat of hefty GDPR-like fines for flagrant violations.

Under the proposals drafted by the European Commission (EC), organisations operating in the EU would not be allowed to use AI for mass surveillance or for ranking social behaviour, according to Bloomberg. Systems deployed to manipulate human behaviour or exploit information about individuals or groups would also be banned in the EU.

Under the rules, authorisation would be required to use biometric identification systems in public spaces, while high-risk AI applications would need to undergo a thorough inspection before they're deployed. The high-risk category would include applications that use facial recognition, affect physical safety or healthcare, or are used in transport or energy.

In these cases, member states would need to appoint assessment bodies to examine whether the systems are trained on unbiased data sets and have sufficient human oversight. These bodies would inspect and ultimately certify the systems that meet the criteria.

While some companies would be allowed to assess themselves, others would need to be vetted by a third party, which would issue compliance certificates valid for up to five years.

Failure to comply with the terms set out in the proposals would result in a range of punishments, including financial penalties of up to 4% of global revenue, the same maximum penalty as for violating GDPR.

There are several exemptions, however, including the use of AI for safeguarding public security, as well as AI systems used exclusively for military purposes.

The EU has long been keen to devise a set of enforceable rules governing the use of AI systems by organisations operating in its territories, amid growing concerns about the consequences of unregulated AI deployments. Google has been among a sea of voices calling for some kind of framework governing the use of AI, while the Information Commissioner's Office (ICO) is currently consulting with experts on AI regulation.

In March last year, the EC launched the first phase of this process, setting out in its AI white paper the goals for a regulated industry that taps into its digital single market.

The EU also hopes these rules will serve as a model for other nations and territories, as has been the case with GDPR since its introduction in 2018, which has inspired legislation such as the California Consumer Privacy Act (CCPA).
