Facebook’s 'reliance on AI' leaves minority groups vulnerable to hate speech

Tech isn’t stopping violent rhetoric on Facebook, and Zuckerberg refuses to correct false political ads, claims Avaaz

Global nonprofit advocacy organisation Avaaz has published a report claiming that Facebook is once again failing to prevent the spread of anti-Muslim hate speech on its platform in the Assam region of northeast India.

The group accused Facebook of relying too heavily on underdeveloped artificial intelligence (AI) technology to detect hate speech and using its understaffed team of human content moderators to review pre-flagged content rather than employing them as the first line of defence.

Since India's Hindu nationalist government excluded nearly 1.9 million Muslims and other minorities from the National Register of Citizens (NRC), the Muslim population in the country's northeastern region has come under threat of statelessness.

In July, the United Nations (UN) expressed concern over the NRC process while at the same time warning of the role of social media in the rise of hate speech in Assam.

"This process may exacerbate the xenophobic climate while fueling religious intolerance and discrimination in the country," it said in a statement that harkened back to Facebook's crisis just over a year ago, when the UN criticised it for playing a "determining role" in the violence against the Rohingya people in Myanmar.

For its report, Avaaz combed through 800 Facebook posts relating to Assam and the NRC for keywords in Assamese, comparing them against the three tiers of prohibited hate speech defined in Facebook's Community Standards.

At least 26.5% of the posts constituted hate speech targeting religious and ethnic minorities; between them, these posts had been shared 99,650 times and viewed at least 5.4 million times.

The comments especially targeted Bengali Muslims, calling them "criminals," "rapists," "terrorists," "pigs," and demanding that people "poison" daughters and legalise female foeticide.

The report reiterates that Facebook leans on AI to flag hate speech that human users have not reported, while its limited staff of human content moderators is used only to review AI-detected content rather than to actively uncover it.

"Facebook is being used as a megaphone for hate, pointed directly at vulnerable minorities in Assam," said senior Avaaz campaigner Alaphia Zoyab. "Despite the clear and present danger faced by these people, Facebook is refusing to dedicate the resources required to keep them safe."

A spokesperson for Facebook told TechCrunch: "We have invested in dedicated content reviewers, who have local language expertise and an understanding of India's longstanding historical and social tensions. We've also made significant progress in proactively detecting hate speech on our services, which helps us get to potentially harmful content faster. But these tools aren't perfect yet."

Just over a year ago, Facebook CEO Mark Zuckerberg optimistically projected that, once the technology had matured enough to become reliable, AI would take over the hate speech detection process. At the time, Zuckerberg said this would take five to ten years. "Today we're just not there on that," he admitted. Recent failures to properly police hate speech suggest that the social media company may have jumped the gun in relying on AI.

Avaaz has challenged Facebook to beef up its protections for minorities in Assam, suggesting the company implement a "human-led 'zero tolerance' policy" against hate speech and recruit more human moderators with expertise in local languages.

Avaaz further calls on Facebook to correct disinformation in ads on the platform, a topic that has drawn renewed attention since the recent release of a letter in which Facebook employees pleaded with their executives to do just that.

Facebook's current policy on political ads allows politicians to post any claim they want, regardless of factuality. Zuckerberg backed this stance as a defender of free expression in his address at Georgetown University in Washington, D.C.

Roughly 250 employees, however, argued that refraining from fact-checking political ads "doesn't protect voices, but instead allows politicians to weaponize [the] platform by targeting people who believe that content posted by political figures is trustworthy".

Whether in detecting and removing hate speech or in correcting false claims in advertising, Facebook has arguably shown that it still has a long way to go before its platform is properly policed.