Mainstream social media bans ‘erode influence of extremist groups’

The Global Research Network on Terrorism and Technology is led by the Royal United Services Institute (RUSI) in the UK and includes research bodies based around the world. According to its latest research, removing extremist groups from social media platforms is an effective technique for limiting their influence and exposure.

In its paper, Following the Whack-a-Mole: Britain First’s Visual Strategy from Facebook to Gab, the network describes the impact of the removal of the far-right UK group Britain First from Facebook in 2018 for violations of community standards. The ban was enforced after the group’s leaders, Paul Golding and Jayda Fransen, were convicted of hate crimes against Muslims. After being removed, the group was forced to shift to Gab, a Facebook-like ‘free speech’ platform largely populated by far-right users banned by mainstream social media companies.

While it had 1.8 million followers and more than two million likes on Facebook – making it the second most liked Facebook page within the ‘politics and society’ category in the UK – it has around 11,000 followers on Gab. According to the report, its move to Gab has resulted in a notable “move towards more extreme content”.

This year, Facebook has banned other far-right figures, including Stephen Yaxley-Lennon (‘Tommy Robinson’), for violating its policies on organised hate. Following the Christchurch attack, in which a white nationalist terrorist killed 51 Muslims attending prayers in Christchurch, New Zealand, Facebook, YouTube, Twitter, and other social media platforms have renewed their purge of far-right figures and organisations, including Alex Jones and Milo Yiannopoulos. Facebook notably announced that it would ban all content promoting white nationalism and separatism, which the company had previously not defined as hate speech. Yvette Cooper MP, chair of the Commons Home Affairs Select Committee, commented at the time that the bans were “long overdue”.

“For too long social media companies have been facilitating extremist and hateful content online and profiting from the poison,” Cooper said.

In June, social media platform Reddit quarantined the Internet’s most high-profile and active pro-Trump community (r/the_donald) after users repeatedly called for violence against public officials and police officers. A 2017 study led by researchers at the Georgia Institute of Technology found that banning extremist and other hateful communities from Reddit is a highly effective method of reducing hate speech: former users of banned communities reduced their use of hate speech by at least 80 per cent, without causing serious problems in the communities they ‘migrated’ to following the bans.

According to the Global Research Network’s report, these sorts of bans leave extremist groups without a ‘gateway’ to larger pools of people to radicalise.

“The removal of the extremist group Britain First from Facebook in March 2018 successfully disrupted the group’s online activity, leading them to have to start anew on Gab, a different and considerably smaller social media platform,” the authors write. “The removal also resulted in the group having to seek new online followers from a much smaller, less diverse recruitment pool.”

The report recommends that mainstream social media companies should continue to remove extremist groups like Britain First which breach their terms of service, and should share best practice on the removal and monitoring of extremist content with smaller social media platforms. It also calls on the UK and US governments to develop relationships with ‘fringe’ platforms so that content on these sites can be regulated.

Another recent Global Research Network paper argues that major technology platforms should begin developing ‘hybrid’ systems that combine human and algorithmic decision-making to handle content which “is not obviously terrorist content but not clearly innocent either”. It also calls for social media companies to promote appeal procedures that include informing users why their content has been removed, and for the Global Internet Forum to Counter Terrorism’s database of terrorist images and videos – which is shared between Facebook, Microsoft, Twitter, and YouTube – to be expanded to cover a wider range of content.
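
Neither paper specifies how such a hybrid system would be built. Purely as illustration, the sketch below shows one common way to combine the two: an upstream classifier scores content, unambiguous cases are handled automatically, and only the grey zone is escalated to human moderators. All names and thresholds here are invented for the example and are not taken from the report.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    REMOVE = "remove"   # clearly violating: taken down automatically
    ALLOW = "allow"     # clearly innocent: left up automatically
    REVIEW = "review"   # the grey zone: escalated to a human moderator


@dataclass
class ModerationResult:
    decision: Decision
    score: float
    reason: str  # shown to the user on appeal, per the report's recommendation


# Hypothetical thresholds, invented for this sketch; a real system
# would tune them per policy area and language.
REMOVE_THRESHOLD = 0.9
ALLOW_THRESHOLD = 0.2


def triage(score: float) -> ModerationResult:
    """Route one item of content given a classifier score in [0, 1].

    The score is assumed to come from an upstream machine-learning
    classifier (not specified in the report); only the routing logic
    is sketched here.
    """
    if score >= REMOVE_THRESHOLD:
        return ModerationResult(Decision.REMOVE, score,
                                "high-confidence match against terrorist-content policy")
    if score <= ALLOW_THRESHOLD:
        return ModerationResult(Decision.ALLOW, score, "no policy match")
    # Content that "is not obviously terrorist content but not clearly
    # innocent either" falls between the thresholds and goes to a human.
    return ModerationResult(Decision.REVIEW, score,
                            "ambiguous; queued for human review")


print(triage(0.95).decision)  # Decision.REMOVE
print(triage(0.5).decision)   # Decision.REVIEW
```

A design like this concentrates human moderators’ time on the ambiguous band, and the recorded reason supports the appeal procedures the paper calls for.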

Hilary Lamb, E&T News

https://eandt.theiet.org/content/articles/2019/07/mainstream-social-media-bans-erodes-influence-of-extremist-groups/
