Caught in the Net: The Impact of “Extremist” Speech Regulations on Human Rights Content

San Francisco – Social media companies have long struggled with what to do about extremist content that advocates for or celebrates terrorism and violence. But the currently dominant approach, built on overbroad and vague policies and practices for removing content, is already decimating human rights content online, according to a new report from the Electronic Frontier Foundation (EFF), Syrian Archive, and WITNESS. The report concludes that the reality of faulty content moderation must be confronted in ongoing efforts to regulate extremist content.

The pressure on platforms like Facebook, Twitter, and YouTube to moderate extremist content only increased after the mosque shootings in Christchurch, New Zealand earlier this year. In the wake of the Christchurch Call to Action Summit held last month, EFF teamed up with Syrian Archive and WITNESS to show how faulty moderation inadvertently captures and censors vital content, including activism, counter-speech, satire, and even evidence of war crimes.

“It’s hard to tell criticism of extremism from extremism itself when you are moderating thousands of pieces of content a day,” said EFF Director for International Freedom of Expression Jillian York. “Automated tools often make everything worse, since context is critical when making these decisions. Marginalized people speaking out on tricky political and human rights issues are too often the ones who are silenced.”

The examples cited in the report include a Facebook group advocating for the independence of the Chechen Republic of Ichkeria that was mistakenly removed in its entirety for “terrorist activity or organized criminal activity.” Groups advocating for an independent Kurdistan are also frequent targets of overbroad content moderation, even though only one such group (the Kurdistan Workers’ Party, or PKK) is designated a terrorist organization by governments. In another example of political content being wrongly censored, Facebook removed an image of a Hezbollah leader with a rainbow Pride flag overlaid on it. The image was intended as satire, yet the mere fact that it included the face of a Hezbollah leader led to its removal.

Social media often serves as a vital lifeline for publicizing on-the-ground political conflict and social unrest. In Syria, human rights defenders use it as many as 50 times a day, and there are now more hours of social media content about the Syrian conflict than there have been hours in the conflict itself. Yet YouTube has used machine-learning-powered automated flagging to terminate thousands of Syrian YouTube channels that published videos of human rights violations, endangering those defenders’ ability to create a public record of the abuses.

“In the frenzied rush to delete so-called extremist content, YouTube is erasing the history of the conflict in Syria almost as quickly as human rights defenders can hit ‘post,’” said Dia Kayyali, Program Manager for Tech and Accountability at WITNESS. “While ‘just taking it down’ might seem like a simple way to deal with extremist content online, we know current practices not only hurt freedom of expression and the right to access information, they are also harmful to real efforts to fight extremism.”

For the full report:
https://www.eff.org/wp/caught-net-impact-extremist-speech-regulations-human-rights-content

This article was sourced from EFF.org
