Tech firms lobby EU over harmful content liability
Companies are desperate to avoid sanctions for failing to remove content, but admit some oversight is needed
Lobbyists working on behalf of the biggest tech companies have been writing to European Union (EU) lawmakers to ensure they cannot be held liable for harmful content hosted on their platforms.
With the European Commission in the midst of drawing up a Digital Services Act, tech firms such as Google and Facebook have been working to water down the legislation to avoid prospective sanctions, according to the Financial Times.
Self-regulation should continue, according to a letter sent from Edima, an organisation established to lobby on behalf of Silicon Valley’s biggest companies. There should, however, also be a “new approach”, which could involve an additional degree of oversight.
In the absence of any existing legislation, the EU has given social media companies the scope to regulate themselves, with platforms trusted to remove extremist or illegal content of their own accord.
These companies would argue that, despite gaps and flaws in their processes, their self-regulatory systems have improved in the last few years.
The European Commission published research in February 2019 showing tech giants have become better at removing content like hate speech and child sexual abuse.
The lobbyists have urged lawmakers not to introduce an element of liability in any new legislation drawn up, as this could lead to punishments for companies that have proactively tried to unearth illegal content.
New rules could establish “a perverse incentive whereby companies are discouraged from taking action before being made aware of the existence of illegal content, for fear of incurring additional liability”, according to Edima's director-general, Siada El Ramly.
Member states and lawmakers, however, have moved towards a consensus that the existing model of self-regulation is flawed, and that tech companies can be neither trusted nor sufficiently incentivised to remove harmful content without external oversight.
The majority of reports from the EU in the last few years have pointed towards a much more aggressive regulatory landscape, where removing harmful content is concerned.
Lawmakers have been urging social media companies to do more for years, and despite some progress, the lack of significant improvement has led to increasingly harsh briefings.
In March 2018, for example, the European Commission suggested limiting the statutory deadline for tech giants to remove illegal content to just one hour.