Facebook updates hate speech detection ahead of Myanmar election


Facebook has outlined plans to improve its technology that detects hate speech on its platform ahead of crucial November elections in Myanmar.

The company has expanded its misinformation policy so that it will now remove fake news that could lead to voter suppression or damage the integrity of the election. Facebook will also use its AI to remove content such as hate speech that could lead to offline harm as well as attempts to suppress the vote, according to a blog post.

The firm claims to have invested significantly in proactive detection technology to help catch such content more quickly than before, using AI to identify hate speech in 45 languages, including Burmese.

"In the second quarter of 2020, we took action against 280,000 pieces of content in Myanmar for violations of our Community Standards prohibiting hate speech, of which we detected 97.8% proactively before it was reported to us," Facebook’s director of public policy for Southeast Asia emerging markets, Rafael Frankel said.

"This is up significantly from Q1 2020, when we took action against 51,000 pieces of content for hate speech violations, detecting 83% proactively," he added.

Political ads in Myanmar will also be labelled with a tag showing which individual or organisation has paid for the content, alongside verified badges for the pages of official political organisations. This is in addition to limits on message forwarding to five people, as well as the launch of a third-party fact-checking programme.

Facebook has previously been criticised for allowing hate speech and prejudice to spread in the region, and for inadequate processes that "enable genocide", in the words of New Zealand’s privacy commissioner.

The reliance on AI to detect harmful content, such as hate speech, was also criticised by the global nonprofit advocacy organisation Avaaz. The group published a study in October 2019 suggesting that overreliance on this under-developed technology was leaving minorities vulnerable.

Attention will also turn to Facebook’s role in the upcoming US presidential election, and to how false and harmful content spreads on its platforms. The firm’s reliance on automated moderation recently came under fire after it wrongly blocked a pro-Joe Biden ad, telling a Democratic group that the clip violated its policy against "sensational" content.

The decision was later reversed, according to Reuters, with Facebook citing an "enforcement error" for the rejection of only certain versions of the advert, while others were allowed to run.

Keumars Afifi-Sabet
Features Editor

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. A regular contributor to other tech sites in the past, these days Keumars can be found on LiveScience, where he runs its Technology section.