Coronavirus forces social media to rely heavily on AI moderation

The pandemic has put social media’s automated takedown software to the test - with some room for error

Facebook, Twitter, and YouTube have warned that more videos and other content could be mistakenly flagged as policy violations and removed, as the companies rely more heavily on the judgement of artificial intelligence (AI) during the coronavirus pandemic.

AI has been left to do its content-policing job virtually unattended as tech giants empty their offices and ask staff to work from home to protect them from the virus and curb the spread of the pandemic.


But this unprecedented situation has put social media’s automated takedown software to the test, with some room for error.

“We've invested significantly in automated systems for content review but they are not always as accurate or granular in their analysis of content as human reviewers,” Google said in a blog post. “These systems are configured deliberately to identify content that may violate our policies. So on YouTube there may be an increase in content classified for removal during this time—including some content that does not violate our policies.”

Twitter also announced that it would be increasing its use of machine learning and automation. In a blog post, legal, policy and trust & safety lead Vijaya Gadde and VP of sales Matt Derella warned: “While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes”.


The company also assured users that it would not be permanently suspending any Twitter accounts based solely on the judgement of its automated enforcement systems.

Facebook announced that it would ask its content review contractors to work from home, but warned that some duties could not be performed remotely “due to safety, privacy and legal reasons”. Nevertheless, it assured users that AI would help tackle the content review workload.

“We believe the investments we’ve made over the past three years have prepared us for this situation,” assured Kang-Xing Jin, Facebook’s head of health. “With fewer people available for human review we’ll continue to prioritize imminent harm and increase our reliance on proactive detection in other areas to remove violating content. We don’t expect this to impact people using our platform in any noticeable way.”

Despite assurances that the situation would not heavily affect the experiences of Facebook users, Jin warned that “there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result”.

