YouTube wrongly removes Syria conflict videos

Removing the videos could harm documentation of human rights violations, activists say

Thousands of videos documenting violence in Syria were wrongly removed by YouTube after its machine learning software flagged them, prompting staff to intervene and reinstate some of them.

The machine learning technology the site uses to police extremist videos flagged the videos as inappropriate content, leading activists to warn that this could jeopardise future efforts to prosecute war crimes.

Eliot Higgins, founder of Bellingcat, a citizen journalism website, told the BBC: "We have a situation where a few videos get wrongly flagged and a whole channel is deleted. For those of us trying to document the conflict in Syria, this is a huge problem."

YouTube does not allow violent, harmful or dangerous content on its site unless the purpose is "educational, documentary, scientific or artistic (EDSA), and it isn't gratuitously graphic". While there is little detail on which factors the machine learning software considers when assessing a video, YouTube's human reviewers weigh a video's metadata, its description and, most importantly, its context before deciding whether to take it down.


The video streaming site introduced machine learning to help it police the 400 hours of content uploaded to YouTube every minute. Earlier this month, it revealed that 75% of the violent extremist videos it removed had been spotted by the software before any human viewer notified the firm, helping it more than double its take-down rate.

Keith Hiatt, a vice president of human rights technology tools firm Benetech, told the New York Times that removing these videos means losing "the richest source of information about human rights violations in closed societies".

A YouTube spokesperson said: "YouTube is a powerful platform for documenting world events, and we have clear policies that outline what content is acceptable to post. We recently announced technological improvements to the tools our reviewers use in video takedowns and we are continuing to improve these.

"With the massive volume of videos on our site, sometimes we make the wrong call. When it's brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it."

02/08/2017: Machine learning doubles YouTube's extremist video take-down rate

Machine learning has helped YouTube identify and take down extremist content before a viewer has flagged the video themselves.


The video streaming site recently began using machine learning technology to identify and remove "violent extremism and terrorism-related content in a scalable way".

In the last month, over 75% of the violent extremist videos YouTube removed were spotted using machine learning before any human viewer notified the firm, it said.

With 400 hours of content uploaded to YouTube every minute, removing inappropriate or extremist videos is a challenge for the website, but it said: "Our initial use of machine learning has more than doubled both the number of videos we've removed for violent extremism, as well as the rate at which we've taken this kind of content down."

The system is also "more accurate than humans at flagging videos that need to be removed", YouTube said.


The company is now looking to hire more employees to review content and enforce its upload policies, and plans to invest in further technical resources to address these issues.

Other measures it has taken include working with 15 additional expert NGOs to understand complex issues like hate speech and radicalisation, so the site can identify this content.


It is also implementing features from Jigsaw's Redirect Method: users who search for sensitive keywords on YouTube are redirected to playlists of videos that "confront and debunk violent extremist messages".

Tech giants are facing growing calls to take a more proactive role in controlling what appears on their platforms. The pressure intensified after the UK government pulled its advertising from YouTube in March, when its ads appeared next to videos of a hate preacher banned in the UK.

Furthermore, the UK and French governments are working together to tackle online radicalisation, seeking to introduce laws that will punish tech firms that fail to remove radical material or hate speech from their platforms.

YouTube and other tech companies met UK Home Secretary Amber Rudd this week at a Silicon Valley forum set up to counter terrorism, which aims to develop new ways of identifying and removing terrorist content from the internet.
