Massive fines headed for tech giants that host harmful online content

New powers will allow Ofcom to fine companies up to 5% of their revenues


Video sharing platforms such as Facebook, Instagram and YouTube could soon face fines of millions of pounds for hosting harmful videos, as part of the government's ongoing commitment to enforce EU law.

The government will hand communications regulator Ofcom new policing and sanctioning powers to protect children from violent, abusive and pornographic content, according to the Telegraph.

The tech giants could face fines of up to 5% of their revenue, and could see their platforms banned in the UK, if they fail to comply with Ofcom's rulings.

The transfer of powers to Ofcom is being made to comply with the UK's current obligations to the EU, specifically the Audiovisual Media Services Directive (AVMSD), which aims to deliver greater protections for children, preserve cultural diversity and guarantee the independence of national media regulators.

However, the regulator may never get to exercise these powers, as Ofcom's new role is proposed to begin on 19 September 2020, beyond the current date for the UK's withdrawal from the EU. Once it leaves the bloc, the UK will no longer be legally obliged to enforce the AVMSD.

"The implementation of the AVMSD is required as part of the United Kingdom's obligations arising from its membership of the European Union and until the UK formally leaves the European Union all of its obligations remain in force," said a spokesman for the Department for Digital, Culture, Media and Sport to the BBC.

"If the UK leaves the European Union without a deal, we will not be bound to transpose the AVMSD into UK law."

Under the same rules, the apps in question will also face fines for failing to implement robust age verification systems and parental controls on videos.

Social media platforms have faced heightened scrutiny this year after a number of incidents in which terrorist attacks were broadcast online. Operators, including Facebook, have been accused of failing to remove such videos expeditiously.

"[The fact that] 1.5 million copies of the video had to be removed by Facebook - and could still be found on YouTube for as long as eight hours after it was first posted - is a stark reminder that we need to do more both to remove this content, and stop it going online in the first place," said former Prime Minister Theresa May at the Online Extremism Summit in Paris.

Facebook, Twitter and YouTube all faced harsh criticism after the New Zealand shooter's video evaded all three sites' harmful content detection algorithms.

Google, which owns YouTube, has previously boasted impressive figures concerning the accuracy of its machine learning algorithms, which it first deployed on YouTube's platform in 2017.

Within a year of their implementation, most violent or extremist content was being removed from the site with fewer than 10 views.
