Social media companies vow to reduce abuse of women online


Facebook, Google, Twitter, and TikTok have committed to changing their moderation policies to protect women from abuse online.

The announcement, made at the UN Generation Equality Forum in Paris, was orchestrated by the World Wide Web Foundation (WWWF) in response to rising concerns over harassment online. The social media giants have agreed to a set of commitments developed during WWWF workshops on tackling gender-based abuse online.

The commitments focus on two areas: content curation and online abuse reporting. On the curation side, the workshops found that women needed more control over what they see online and who could comment on their posts. They also highlighted the need for better systems to report abusive content.

Each commitment has four considerations. On the curation side, companies must offer more granular settings over who can see and reply to posts. They must also provide more accessible language throughout the user experience, easy navigation and access to safety tools, and actively reduce the amount of abuse women see online.

The reporting commitment requires companies to offer users the ability to manage and track their reports, increase their capacity to address context and language, provide more policy and product guidance when reporting abuse, and establish additional channels for help and support during the reporting process.

The WWWF said it would measure the companies' performance in these areas and report it annually.

The foundation cited an Economist Intelligence Unit report measuring online violence against women.

The report found that 85% of women reported witnessing online violence against other women, including outside their networks. It also found 38% had personally experienced online abuse.

The most common form of abuse was misinformation and defamation, experienced by 67% of survey respondents. The least common was violent threats, which an alarming 52% of respondents still reported experiencing.

Other abuse tactics included publishing personal information, impersonation, sharing damaging information across multiple platforms, image- and video-based abuse, and stalking or hacking.


Some of the tech firms backing the commitments have work to do when it comes to treating various groups equitably, both online and offline.

TikTok has drawn criticism for allegedly telling moderators to suppress videos from users deemed not attractive or rich enough and from users with disabilities. Google, which already changed its harassment reporting policies following a mass employee walkout, drew flak late last year for allegedly dismissing AI ethics co-lead Timnit Gebru after she questioned the company’s treatment of women and people of color.

Last year, Plan International sent an open letter to social media platforms demanding action after its survey found that harassment across the most popular platforms is driving girls and young women offline.

Danny Bradbury

Danny Bradbury has been a print journalist specialising in technology since 1989 and a freelance writer since 1994. He has written for national publications on both sides of the Atlantic and has won awards for his investigative cybersecurity journalism work and his arts and culture writing. 

Danny writes about many different technology issues for audiences ranging from consumers through to software developers and CIOs. He also ghostwrites articles for many C-suite business executives in the technology sector and has worked as a presenter for multiple webinars and podcasts.