Buffalo shooting: a test for censorship of “terrorist” content

More than 50 countries, including the United States, Germany and India, along with internet companies such as Meta (Facebook and Instagram), Google and Twitter, have signed up to the so-called “Christchurch Call”, an initiative aimed at creating a coordinated response by social media companies when violent and extremist videos are broadcast on their platforms.

The Wall Street Journal says the shooting in Buffalo, New York, in which 10 people were killed, was a “test” of this agreement.

A Meta spokesman was quoted as saying the company was taking its own measures, including investing in detection technology and working with the Christchurch Call and other groups.

The Christchurch Call was created after a white supremacist killed 51 people at two mosques in Christchurch, New Zealand, and broadcast the attacks live on Facebook.

The attack on a supermarket in Buffalo, New York, was broadcast on Twitch, an Amazon-owned platform that specializes in live streaming.

The newspaper says the live broadcast of the Buffalo supermarket shooting exposed both the strengths and the shortcomings of the global agreement to combat the spread of terrorist content on the internet.

“The shooting incidents in Buffalo will undoubtedly give additional impetus” to what the Christchurch Call is trying to achieve, said Paul Ash, the New Zealand government’s coordinator for the Call.

“As a society, we will analyze this tragic event and use what we have learned to further strengthen crisis response measures,” he added.

The group’s main success has been getting technology companies to use the Global Internet Forum to Counter Terrorism as a central point for sharing “digital fingerprints” of offending videos and photos, according to Professor Dave Barry, head of information technology at Australia’s Murdoch University.

This technology allows computer algorithms to identify suspicious content when it is uploaded.
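At its simplest, fingerprint matching means computing a hash of each upload and checking it against a shared database of hashes of known violating content. The sketch below is illustrative only: it uses an exact cryptographic hash (SHA-256) as a stand-in for the perceptual hashes that industry databases actually share, and the database contents are hypothetical.

```python
import hashlib

# Hypothetical shared database of fingerprints of known violating content.
# Real hash-sharing systems use perceptual hashes rather than plain SHA-256,
# so that near-duplicate copies of a video or photo also match.
KNOWN_BAD_HASHES = {
    "b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9",
}

def fingerprint(data: bytes) -> str:
    """Return a hex fingerprint of an uploaded file."""
    return hashlib.sha256(data).hexdigest()

def is_known_bad(data: bytes) -> bool:
    """Flag an upload if its fingerprint appears in the shared database."""
    return fingerprint(data) in KNOWN_BAD_HASHES

print(is_known_bad(b"hello world"))  # True: matches the listed fingerprint
print(is_known_bad(b"benign clip"))  # False: fingerprint is unknown
```

The value of a central database is that once one platform fingerprints a piece of content, every participating platform can block re-uploads without re-reviewing it.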

Monitoring the internet is a challenging task, given the volume of video uploaded and users’ ability to manipulate files to make them harder for algorithms to detect.
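The manipulation problem is easy to demonstrate: with an exact cryptographic hash, changing even a single byte of a file produces a completely different fingerprint, which is why platforms rely on perceptual hashes that tolerate small edits. A minimal illustration, using invented file contents:

```python
import hashlib

original = b"frame data of a flagged video"
edited = b"frame data of a flagged video!"  # one byte appended

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(edited).hexdigest()

# A trivial edit changes an exact hash entirely, so a naive check
# against the shared database no longer matches the known fingerprint.
print(h1 == h2)  # False
```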

“There’s a lot of combat material on the web, including video game sequences, and attack videos are often of low quality, which makes it much harder for automated systems to understand and then block content,” Barry said.

Meta says it has banned more than 250 white supremacist organizations worldwide and removed nearly 900 militarized social movements from its platform.

Governments also designate material that they consider terrorist and violent extremist content.

In Australia, a signatory to the Christchurch Call, government officials said they referred nearly 6,000 offensive posts to video and social media platforms between the beginning of 2020 and mid-March this year, of which more than 4,200 were removed.

Aliya Danzeisen, national coordinator of the Muslim Women’s Council of New Zealand and a member of the Call’s advisory committee, said the initiative was expected to move beyond stopping the distribution of the Christchurch video and address why hatred based on factors such as religion and ethnicity flourishes on the internet.

“However, it kind of stopped,” Danzeisen told the newspaper.

She said governments should end the anonymity and impunity that social media offers extremists, and the lack of legal consequences for internet companies over the content they allow.

But others say social media companies are already going too far in moderating content and policing speech.

More than 60 civil society organizations, including Amnesty International, Reporters Without Borders and the Electronic Frontier Foundation, have opposed legislation passed by the European Parliament in April 2021 requiring online platforms to remove terrorist content within an hour of it being flagged.

They said that, given the law’s cross-border reach and the absence of judicial oversight, it is open to abuse by governments and could encourage excessive censorship and threaten freedom of expression.

In addition to the algorithms that monitor offensive posts, videos and photos, there are advertising algorithms that, according to the newspaper, leave the user “living in a room where he hears only what he wants to hear”.

These algorithms learn the user’s interests and tailor the flow of content accordingly.

These algorithms help advertisers and social media companies reach customers, but they also allow extremists to form “closed communities” in which only their own views are heard.
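The feedback loop described above can be sketched in a few lines: rank candidate items by how often the user has already engaged with each topic. Everything here is invented for illustration (the topics, items and engagement counts are hypothetical), and real recommenders are vastly more complex, but the narrowing dynamic is the same.

```python
from collections import Counter

# Hypothetical engagement history: topics the user has interacted with.
engagement = Counter({"politics": 12, "sports": 3, "cooking": 1})

# Candidate items in the feed, each tagged with a single topic.
candidates = [
    ("clip A", "politics"),
    ("clip B", "cooking"),
    ("clip C", "politics"),
    ("clip D", "sports"),
]

def rank(items, history):
    """Order items by the user's past engagement with their topic."""
    return sorted(items, key=lambda item: history[item[1]], reverse=True)

for title, topic in rank(candidates, engagement):
    print(title, topic)
# Items on already-favoured topics crowd the top of the feed; each click
# raises their counts further, pushing the user toward a "closed community".
```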
