Coronavirus forces social media to rely heavily on AI moderation

The pandemic has put social media’s automated takedown software to the test - with some room for error

Facebook, Twitter, and YouTube have warned that more videos and other content could be mistakenly classified as policy violations and removed, because the companies are relying more heavily on the judgement of artificial intelligence (AI) during the coronavirus pandemic.

AI has been left to do its content-policing job virtually unattended as tech giants empty their offices and ask staff to work from home to protect them from the virus and curb the spread of the pandemic.

But this unprecedented situation has put social media’s automated takedown software to the test, with some room for error.

“We've invested significantly in automated systems for content review, but they are not always as accurate or granular in their analysis of content as human reviewers,” Google said in a blog post. “These systems are configured deliberately to identify content that may violate our policies. So on YouTube there may be an increase in content classified for removal during this time—including some content that does not violate our policies.”

Twitter has also announced that it will increase its use of machine learning and automation. In a blog post, legal, policy and trust & safety lead Vijaya Gadde and VP of sales Matt Derella warned: “While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes”.

The company also assured users that it would not be permanently suspending any Twitter accounts based solely on the judgement of its automated enforcement systems.
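That policy, automation may remove individual pieces of content but never permanently suspend an account on its own, can be sketched as a simple decision rule. The sketch below is purely illustrative; the threshold, field names, and function are assumptions for the example, not Twitter's actual implementation:

```python
# Illustrative sketch only: a toy model of the policy described above.
# An automated score may flag content for removal, but a permanent
# account suspension is never issued on the automated verdict alone;
# it is instead escalated for human review.

from dataclasses import dataclass

REMOVAL_THRESHOLD = 0.9  # hypothetical confidence cut-off, not a real value


@dataclass
class Decision:
    remove_content: bool
    permanent_suspension: bool
    needs_human_review: bool


def moderate(violation_score: float, repeat_offender: bool) -> Decision:
    """Automation may remove content, but never bans an account outright."""
    flagged = violation_score >= REMOVAL_THRESHOLD
    return Decision(
        remove_content=flagged,
        # Permanent suspension is always withheld pending human judgement.
        permanent_suspension=False,
        needs_human_review=flagged and repeat_offender,
    )


print(moderate(0.95, repeat_offender=True))
```

Even a high-confidence flag on a repeat offender only queues the account for a human decision, which is the trade-off the companies describe: faster takedowns at the cost of more reversible mistakes.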

Facebook announced that it would ask its content review contract workers to work from home, yet warned that the staff could not perform some duties remotely “due to safety, privacy and legal reasons”. Nevertheless, it assured users that AI would help tackle the content review workload.

“We believe the investments we’ve made over the past three years have prepared us for this situation,” assured Kang-Xing Jin, Facebook’s head of health. “With fewer people available for human review we’ll continue to prioritize imminent harm and increase our reliance on proactive detection in other areas to remove violating content. We don’t expect this to impact people using our platform in any noticeable way.”

Despite assurances that the situation would not heavily affect the experiences of Facebook users, Jin warned that “there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result”.

