Facebook claims AI has cut hate speech by almost 50%

The social media platform has hit back at claims the tech it uses to fight hate speech is inadequate

Facebook has hit back at reports claiming its artificial intelligence (AI) fails to detect hate speech, countering that the technology has cut the prevalence of such content by almost 50%.

On Sunday, the Wall Street Journal (WSJ) published a report, based on internal documents and employee accounts, which suggests the social media platform removes only “a low-single-digit percentage” of posts that violate its rules of conduct.

The AI used to identify harmful content has trouble detecting first-person shooting videos and racist rants, according to the report, as well as telling the difference between cockfighting and car crashes. It is, however, far cheaper to run than human review, which in 2019 was costing the company “$2 million a week, or $104 million a year”, the WSJ says.

Facebook’s VP of Integrity, Guy Rosen, issued a response hours after the article was published, stating that hate speech prevalence on the platform had fallen by almost 50% over the last three quarters.

According to the company, “prevalence is the most important metric to use because it shows how much hate speech is actually seen on Facebook”. 

“Recent reporting suggests that our approach to addressing hate speech is much narrower than it actually is, ignoring the fact that hate speech prevalence has dropped to 0.05%, or 5 views per every 10,000 on Facebook,” said Rosen.
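
By that definition, prevalence is simple arithmetic: the share of sampled content views that turn out to be views of hate speech. As a rough illustration only (Facebook has not published its exact sampling methodology, so the function below is an assumption, not a description of its system), Rosen’s figure works out as follows:

```python
# Illustrative sketch only: Facebook's actual measurement methodology
# is not public. Prevalence here means: of all sampled content views,
# what fraction were views of content labelled as hate speech?

def prevalence(hate_speech_views: int, total_sampled_views: int) -> float:
    """Return prevalence as a fraction of sampled content views."""
    return hate_speech_views / total_sampled_views

# Rosen's figure: 5 views of hate speech per 10,000 content views.
p = prevalence(5, 10_000)
print(f"{p:.4%}")                             # 0.0500%
print(f"{p * 10_000:.0f} views per 10,000")   # 5 views per 10,000
```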

When it’s uncertain whether a post violates Facebook’s terms, the platform reduces the post’s visibility by limiting its distribution and excluding it from recommendations. This is done to protect people who post “content that looks like hate speech but isn’t”, such as posts “describing experiences with hate speech or condemning it”. The company also stated that 97% of removed content is identified by its algorithm, up from 23.6% in 2016.
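
Facebook has not disclosed how this demotion mechanism works internally, but the general pattern it describes can be sketched as a hypothetical confidence-threshold rule. Everything in the snippet below (the thresholds, the score, the action names) is an illustrative assumption, not Facebook’s published logic:

```python
# Hypothetical sketch: thresholds and actions are assumptions,
# not Facebook's published moderation logic.
REMOVE_THRESHOLD = 0.95   # assumed: confident violations are removed
DEMOTE_THRESHOLD = 0.60   # assumed: uncertain cases are demoted

def moderation_action(hate_speech_score: float) -> str:
    """Map a classifier's confidence score to a moderation action."""
    if hate_speech_score >= REMOVE_THRESHOLD:
        return "remove"   # clear policy violation
    if hate_speech_score >= DEMOTE_THRESHOLD:
        # Uncertain: limit distribution and exclude from recommendations,
        # e.g. posts quoting or condemning hate speech rather than
        # expressing it.
        return "demote"
    return "allow"
```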

Facebook didn’t address the WSJ’s claim that its decision to rely on AI to monitor hate speech was driven by cost savings.

These latest allegations come amid a difficult month for the social media platform, which was recently accused by former product manager turned whistleblower Frances Haugen of repeatedly prioritising profits over user safety. On 4 October, Facebook, as well as its subsidiaries WhatsApp and Instagram, also suffered a six-hour outage.
