Facebook’s Auto-Generation Tech is Creating “Memories” Videos for Terrorist Groups


Facebook’s auto-generation technology is creating ‘Local Business’ pages for terrorist organisations, as well as “celebration” and “memories” videos for groups such as al-Qaeda and ISIS.

A whistleblower has accused Facebook of “auto-generating” extremist content on the social media platform and has filed an official complaint with regulators in the United States.

The complaint, filed with the United States Securities and Exchange Commission, claims the company has misled shareholders by failing to remove extremist content.

White supremacist and far-right content, as well as jihadist videos and business pages for the terror group al-Qaeda, are among the types of content auto-generated by the platform.

The study, which lasted five months, monitored thousands of people who liked or were connected to pages promoting organisations listed as terror groups. It found that the platform’s tools were automatically creating content for groups such as ISIS and al-Qaeda, including “celebration” and “memories” videos that had remained live for several months.

Research also showed that the two major terrorist groups were “openly active” on the social media platform.

The ‘Local Business’ page for al-Qaeda generated by Facebook had more than 7,000 likes and, according to the study, gave the group “valuable data” which could be used to recruit people and identify supporters around the globe.

This page was also populated with job descriptions that users had put in their profiles and featured flags, branding and other content used by the group in a promotional capacity.


Facebook and CEO Mark Zuckerberg have been highly vocal on the issue of extremist content in recent months, and the company says it has improved its processes for identifying and removing extremist content.

Last month, the social media giant banned a number of far-right groups and individuals for breaching platform rules on inciting hate and violence.

In a statement, Facebook said: “After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago. We don’t claim to find everything, and we remain vigilant in our efforts against terrorist groups around the world.”

The National Whistleblower Center, which released the study, contested Facebook’s claims in a statement. The organisation claims that toward the end of 2018, the percentage of profiles of people who identify themselves as ‘friends’ of terror groups removed by Facebook was less than 30%.

Additionally, of the profiles that displayed terrorist imagery and symbols, Facebook removed just 38% during the study period.


John Kostyack, Director of the Center, said: “We are grateful to the whistleblower for bringing this disturbing information to the SEC and for authorising its release to the public.

“We hope that the SEC takes prompt action to impose meaningful sanctions on Facebook, and that this information stimulates a public debate about the accountability of social media companies for addressing terrorist recruiting and other organised crime on their websites.”


