YouTube Announces Its Four-Step Plan to Fight Online Terror

YouTube is struggling with the thorny problem of how to regulate videos that support hateful ideologies but do not specifically encourage violence.

The main problem YouTube faces is that these videos do not break any of the platform’s rules or guidelines. Banning some of them on the basis of ideology alone could therefore set a slippery-slope precedent that undermines YouTube’s core promise: that users can upload their own videos, as long as the content is legal, without fear of being censored.

On Sunday, Google announced new measures for managing such content in a blog post by Kent Walker titled “Four steps we’re taking today to fight online terror.”

The first two steps focus on finding and removing videos that specifically encourage terrorism. This is harder than it sounds: as of 2012, an hour of content was being uploaded to YouTube every second.

“This can be challenging: a video of a terrorist attack may be informative news reporting by the BBC, or glorification of violence if uploaded in a different context by a different user,” Walker wrote.

At the moment, YouTube uses a combination of video analysis software and human content flaggers to identify and remove videos that break its community guidelines.

The first step is to devote more resources “to apply our most advanced machine learning research” to that software, building systems that learn over time which content violates the guidelines.

The second step involves expanding the number of “independent experts in YouTube’s Trusted Flagger Program,” a group of users and organizations who report inappropriate content directly to the company. Specifically, Google plans to add 50 expert non-governmental organizations to the program and support them with operational grants for their content-review work.

“Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech,” Walker wrote.

The third step focuses on content that does not break the guidelines but pushes hateful agendas, “for example, videos that contain inflammatory religious or supremacist content.”

The final step involves using “targeted online advertising to reach potential ISIS recruits” and then redirecting them “towards anti-terrorism videos that can change their minds about joining.”

Social media companies continue to struggle with the fact that they can serve as breeding grounds for radicalism. By their nature they act as global free-speech platforms, which makes them attractive recruiting hotspots.
