Tech giants like Facebook should be made liable for fake news, DCMS committee says
Wide-reaching report pinpoints the social media company's role in the spread of disinformation and hate speech
Facebook and other social media companies should be made liable for the content that appears on their websites, MPs have concluded in a lengthy report into fake news and the misuse of data in political campaigning.
Society faces a "democratic crisis", according to parliament's digital, culture, media and sport (DCMS) select committee, with Facebook called on to take responsibility for identifying and preventing illegal election campaign activity on its own platform.
The 89-page interim report, published following an 18-month investigation, also called for sweeping reforms to UK electoral law in light of the Electoral Commission's finding that Vote Leave campaigned illegally during the 2016 EU referendum.
"We are facing nothing less than a crisis in our democracy, based on the systematic manipulation of data to support the relentless targeting of citizens, without their consent, by campaigns of disinformation and messages of hate," said DCMS chair Damian Collins MP.
"We heard evidence of coordinated campaigns by Russian agencies to influence how people vote in elections around the world. This includes running adverts through Facebook during elections in other countries and in breach of their laws.
"Facebook failed to spot this at the time, and it was only discovered after repeated requests were made for them to look for evidence of this activity."
Among its recommendations, the DCMS committee said social media companies should be placed in an entirely new category, neither 'platform' nor 'publisher', with a clear liability to act against harmful or illegal content appearing on their sites.
The committee highlighted Facebook's deployment of Free Basics in Burma as an example of how the company abdicated its responsibilities and exacerbated the spread of hate speech in the region.
Free Basics provides people in developing countries with data-free access to Facebook. But the United Nations found the product had played a key role in stirring up hatred against the Rohingya Muslim minority in Rakhine State, largely because people could only access news and content via Facebook.
The DCMS committee labelled Free Basics "dangerous to consumers and deeply unethical", citing it as a "further example of Facebook failing to take responsibility for the misuse of its platform".
"Arguably, more invasive than obviously false information is the relentless targeting of hyper-partisan views, which play to the fears and prejudices of people, in order to influence their voting plans and their behaviour," the report outlined in its summary.
The focus of the DCMS inquiry, according to its summary, shifted from the phenomenon of fake news to its wider role in digital election campaigning, and the manipulation and misuse of data by malign actors.
"Data crimes are real crimes, with real victims," Collins continued. "This is a watershed moment in terms of people realising they themselves are the product, not just the user of a free service. Their rights over their data must be protected."
The interim report also touched on the illegal and unethical activities undertaken by a host of companies during elections, including SCL, the parent company of the now-defunct Cambridge Analytica. SCL used behavioural micro-targeting ahead of the US midterm elections in 2014, for instance, while Canada-based AggregateIQ used tools that "scrape" user profile data from LinkedIn.
Elsewhere, the report said the government should establish a voluntary 'Digital Atlantic Charter' to reassure citizens of their digital rights. This would be underpinned by a legal framework setting out obligations in each participating country, ensuring at a minimum a degree of alignment between nations.
The report also called on the government to close a loophole in the GDPR-inspired Data Protection Act 2018 that, after Brexit, would allow US-based social media companies to process UK users' personal data beyond the Act's oversight.
Facebook has been in the crosshairs of UK regulators for some time, with the social media giant recently fined £500,000 by the Information Commissioner's Office (ICO), the maximum penalty available under the Data Protection Act 1998, for its role in the Cambridge Analytica scandal.
"The Committee has raised some important issues and we were pleased to be able to contribute to their work," said Richard Allan, vice president for Policy at Facebook.
"We share their goal of ensuring that political advertising is fair and transparent and agree that electoral rule changes are needed. We have already made all advertising on Facebook more transparent.
"We are working on ways to authenticate and label political ads in the UK and create an archive of those ads that anyone can search. We will work closely with the UK Government and Electoral Commission as we develop these new transparency tools.
"We're also investing heavily in both people and technology to keep bad content off our services. We took down 2.5 million pieces of hate speech and disabled 583 million fake accounts globally in the first quarter of 2018, much of it before anyone needed to report this to Facebook.
"By using technology like machine learning, artificial intelligence and computer vision, we can detect more bad content and take action more quickly."
The committee's final report, which will include further conclusions based on the continued assessment of data and other evidence, will be released before the end of 2018.