New Zealand Attack Exposes Social Media Sites’ Tolerance of Anti-Muslim Content

Image via www.siasat.pk

"Tech giants like Facebook don’t always crack down on Islamophobia as much as other forms of online hate."

Mother Jones, March 15, 2019

Nearly a year ago, Mark Zuckerberg testified before Congress that Facebook does not allow hate groups on its platform. “If there’s a group that—their primary purpose or—or a large part of what they do is spreading hate, we will ban them from the platform,” he told the House Energy and Commerce Committee on April 11, 2018.

Across the country in San Francisco, Madihha Ahussain was watching from her office at Muslim Advocates, a civil rights group that seeks to protect Muslim Americans from discrimination, bigotry, and violence. She found the assertion shocking. Since 2013, Muslim Advocates had been urging Facebook to remove Islamophobic hate groups and material. Ahussain tracks anti-Muslim bigotry, and Muslim Advocates had recently sent Facebook a list of 26 anti-Muslim groups on the platform. As Zuckerberg spoke, 23 remained active.

“We never received a response about the list that we sent,” Ahussain told Mother Jones in January. “We never received any indication about what they were doing to address those hate groups on the platform. So the reality is, they never really took any steps to address the presence of those hate groups on the platform.”

The massacre of 49 Muslims at two mosques in New Zealand on Friday was a devastating reminder of the presence of anti-Muslim hate online and its effects. While the path to radicalization of the perpetrator and potential co-conspirators remains unclear, social media was weaponized to spread images of the attack, as well as anti-Muslim propaganda. The attacker linked to his Facebook livestream on 8chan, a platform where extremist content percolates, exploiting a connection between the world’s largest social network and the darkest corners of the internet that experts on extremism have warned about for years. The footage spread quickly: Facebook removed the original livestream after 20 minutes, but copies of the video continued to circulate on major tech platforms for hours. The attack quickly became an extreme example of a problem that has festered for years: the flourishing of anti-Muslim hate online. And it has been made possible, civil rights advocates and extremism watchdogs say, because large tech companies have largely ignored the problem.

There’s a reason that nudity and ISIS propaganda rarely show up on YouTube or Facebook: These are types of content that Silicon Valley has cracked down on. “When was the last time you were recommended an ISIS video on YouTube or Facebook?” tweeted NBC tech reporter Ben Collins after the New Zealand shooting. “The answer is probably never. That’s because law enforcement and tech companies made it a top priority.”

But when it comes to content that vilifies Muslims, Facebook hasn’t always been quick to remove it. In fact, the company has repeatedly chosen to leave such content up, deeming it a valid political viewpoint rather than dangerous hate speech.

Perhaps the best example is the Facebook account of Tommy Robinson, a British anti-Muslim activist who developed a large following on social media by highlighting crimes committed by Muslims in the UK. By late 2018, Robinson had more Facebook followers than British Prime Minister Theresa May. His following catapulted him to an advisory position in Britain’s far-right UK Independence Party, and Politico described him as one of Britain’s leading political voices. In July 2018, just a few months after Zuckerberg’s testimony, an undercover investigation by the UK-based Channel 4 Dispatches, an investigative documentary series, revealed that Facebook protected far-right content if it came from activists with a large number of followers. Facebook had granted Robinson’s page the same special status given to government accounts, which meant that hateful content on his page could not be deleted by a regular moderator and instead had to be flagged to someone higher up at Facebook.

Facebook had given the same protection to Britain First, a now-defunct anti-Muslim political party. In 2017, Trump retweeted three Britain First videos purporting to show heinous crimes committed by Muslim immigrants. (One of the videos was a fake, another was footage from the Arab Spring uprising in Egypt years earlier, and the origin of the third was unverified.) Twitter banned Britain First weeks later, in December 2017, and then banned Robinson in March 2018, both for hateful conduct. Facebook was slower to react. It took down Britain First’s page in March 2018, after the group’s leaders were jailed for committing hate crimes against Muslims. By that point, the page had more than 2 million likes. Robinson’s page was banned in February 2019 for violating Facebook’s hate-speech policies. In a blog post, Facebook said it banned the page because Robinson had promoted violence against Muslims and repeatedly violated the platform’s “policies around organized hate.” ...
Read full article at Mother Jones
