APD News

Freedom of speech complicates online extremism regulation

Insights

2019-03-27 10:24

With a French Muslim group suing Facebook and YouTube for allowing the Christchurch video to be broadcast, and New Zealand pressuring Facebook over its handling of violent videos, the global battle against online extremism seems to be on.

But over the years, tech companies have been reluctant to open themselves to scrutiny, and there has also been public backlash over fears of governments becoming "Big Brother." Yet combating online extremism is simply not something tech companies can accomplish on their own.

The public is concerned that there could be a thin line between regulation and oppression. The age of social media has fundamentally changed how media operates, and with the rise of populism in many Western countries, distrust of authorities and elites has grown deeper than ever. Long gone are the days when only large media outlets could set the agenda on social issues and control the volume.

On platforms like Facebook and YouTube, the editors and reporters are essentially the users themselves. There are no editorial policies guiding them toward balanced "reports." That should sound like paradise to anyone who believes in "freedom of speech" and the "marketplace of ideas," where the "best" idea wins.

But the dark side of such a utopia is that there can also be a thin line between free speech and hate speech, and sometimes between free speech and carefully packaged extremist ideas and violence.

And the ways tech companies are dealing with disinformation and conspiracy theories are far from satisfactory. They increasingly rely on algorithms that push "relevant" posts and videos, luring media consumers into spending more time on their apps and web pages.

Participants arrive for the World Internet Conference in Wuzhen, China's Zhejiang Province, December 16, 2015. /VCG Photo

If a user is already on the "right" side of the political spectrum, "relevant" videos and posts may push them further toward the far right.

This is not to say that people become radicalized because of social media, but unregulated extremist posts combined with algorithms can create a social "filter bubble" that makes people less likely to tolerate diverse political views.

In this way, users' online experience has become more isolated than open, which only reinforces their existing beliefs and deepens divisions between racial and political groups. And when hatred and far-right ideology turn into action, tragedy happens.

What's more worrying is that extremist groups and terrorists seem skilled at harnessing the power of social media. Since no media organization would speak for ISIL, it relies on social media to spread its propaganda and recruit members and supporters.

Although ISIL has lost most of its territory and has even, as announced by U.S. President Donald Trump, been "defeated," it has never disappeared from the dark corners of the Internet, where the battle over extremist ideas continues.

According to a 2015 report, Documenting the Virtual "Caliphate," ISIL uses the promise of an "Islamic utopia" as one of its techniques to attract supporters. It conjures an imaginary world in which Muslims will live in full happiness if they join – an effective lure for targets who often live in disappointing conditions and tend to hold black-and-white worldviews.

Hardline Muslims protest in Jakarta against Facebook's blocking pages belonging to Islamic mass-organizations and Islamic teachings, January 12, 2018. /VCG Photo

And the perpetrator of the Christchurch massacre also knew how to turn a cruel mass murder into a publicity event for far-right extremism. His live stream of real-life gun violence and his online manifesto are exactly the kind of material that would once have been tabloids' best front-page fodder.

Unfortunately, tech companies have proven slow to react to hateful content. It took Facebook 29 minutes to take down the live stream of the Christchurch murders. Audio clips of the video are still circulating on social media platforms, which are scrambling to take them down.

In the meantime, as Internet entrepreneur Kalev Leetaru has forcefully argued in a recent piece for Forbes, it is now technologically possible to swiftly detect violent and horrific videos by combining AI with human review, but doing so is expensive. Media platforms also lack the incentive to upgrade, because posting hate speech and depictions of terrorism is not necessarily illegal in most countries.

Only under intense pressure from governments and the public have Twitter and Facebook shifted their positions and committed to "doing more" to combat online extremism.

But it's time to stop looking back at what they should have done and start doing it before any more tragedies happen. Given the data these platforms collect from millions of users, arguments that they should be free from government intervention ring hollow.

The online battle against extremism is not a commercial matter but a public one, requiring constant public scrutiny. The stakes are too high for it to be left to companies that pursue profit rather than public responsibility.

(CGTN)