EU will fine social media firms for failing to remove extremist material

The European Union (EU) is drawing up plans to fine social media companies that fail to remove extremist content from their services.

Under the new proposals, companies such as Facebook and YouTube would be compelled to remove terrorist propaganda within one hour or face hefty fines, according to Julian King, the European commissioner for the security union, speaking to the Financial Times.

"We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon," King said, adding that new regulations would create legal certainty for websites of all sizes.

"The difference in size and resources means platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent.

"All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform."

The draft proposals, set to be published next month, signal a shift from the EU's current regulatory approach, under which companies voluntarily remove content deemed to incite terrorist violence or radicalise users.

The EU's decision to make its guidelines legally enforceable mirrors a change of heart in the UK's strategy, with the government earlier this year hinting at new rules that mark a clear shift away from voluntary guidelines and self-policing.

After ten of the 14 companies invited to government talks failed to turn up, the then secretary of state for digital, culture, media and sport (DCMS), Matt Hancock, said in May that the UK would draft laws to fine firms that failed to tackle online abuse or remove inappropriate content.

"The fact that only four companies turned up when I invited the 14 biggest in; it gave me a big impetus to drive this proposal to legislate through," Hancock said on BBC One's the Andrew Marr Show.

"Before then, and until now, there has been this argument - work with the companies, do it on a voluntary basis, they'll do more that way because the lawyers won't be involved.

"And after all, these companies were set up to make the world a better place. The fact that these companies have social media platforms with over a million people on them, and they didn't turn up [is disappointing]."

The wider movement towards tougher, more meaningful regulation has been motivated in part by the still-unfolding data misuse scandal involving Facebook and the now-defunct Cambridge Analytica. In an interim report published last month, the DCMS select committee, for instance, proposed several new laws that would make social media companies such as Facebook liable for misinformation allowed to spread on their platforms.

Meanwhile, in February the former home secretary Amber Rudd unveiled an auto-blocking tool that the government hopes can detect and flag extremist content automatically, without human intervention. Developed by the London-based artificial intelligence company ASI Data Science, the tool was trained on thousands of hours of ISIS-produced content and forms part of the government's wider efforts to tackle online hate speech and extremist material.
