Massive fines headed for tech giants that host harmful online content

New powers will allow Ofcom to fine companies up to 5% of their revenues


Video sharing apps such as Facebook, Instagram and YouTube could soon face fines of millions of pounds for hosting harmful videos, as part of the government's ongoing commitment to enforce EU laws.

The government will hand communications regulator Ofcom new policing and sanctioning powers to protect children from violent, abusive and pornographic content, according to the Telegraph.


The tech giants could face fines of up to 5% of their revenue, and could see their platforms banned in the UK altogether, if they fail to comply with Ofcom's rulings.

The handoff of powers to Ofcom is being made to comply with the UK's current obligations to the EU, specifically its Audiovisual Media Services Directive (AVMSD), which aims to deliver greater protections for children, preserve cultural diversity and guarantee the independence of national media regulators.

However, the regulator may never get to enjoy these powers, as the proposed start date for Ofcom's new role is 19 September 2020, beyond the current withdrawal date for the UK leaving the EU. Once it leaves the bloc, the UK will no longer be legally obligated to enforce the AVMSD.

"The implementation of the AVMSD is required as part of the United Kingdom's obligations arising from its membership of the European Union and until the UK formally leaves the European Union all of its obligations remain in force," said a spokesman for the Department for Digital, Culture, Media and Sport to the BBC.


"If the UK leaves the European Union without a deal, we will not be bound to transpose the AVMSD into UK law."

Under the same rules, the apps in question will also face fines for failing to implement robust age verification systems and parental controls on videos.

Social media platforms have faced heightened scrutiny this year after a number of incidents in which terrorist attacks were broadcast over online platforms. Operators, including Facebook, have been accused of failing to remove such videos quickly enough.

"[That] 1.5 million copies of the video had to be removed by Facebook - and could still be found on YouTube for as long as eight hours after it was first posted - is a stark reminder that we need to do more both to remove this content, and stop it going online in the first place," said former Prime Minister Theresa May at the Online Extremism Summit in Paris.


Facebook, Twitter and YouTube all faced harsh criticism after the New Zealand shooter's video evaded all three sites' harmful content algorithms.

Google, which owns YouTube, has previously boasted impressive figures concerning the accuracy of its machine learning algorithms, which were first deployed on YouTube's platform in 2017.

Within a year of their implementation, most violent or extremist content was being removed from the site with fewer than 10 views.

