Facebook 'Auto-Generated' Extremist Content
Facebook has been accused of "auto-generating" extremist content, including a celebratory jihadist video and a business page for al-Qaeda.
The material was uncovered by an anonymous whistleblower who filed an official complaint with US regulators.
Similar content for self-identified Nazi and white supremacist groups was also found on the platform.
Facebook said it had become better at removing extremist content, but acknowledged its systems were not perfect.
The whistleblower's study ran for five months and monitored the pages of 3,000 people who had liked or connected with organisations designated as terrorist groups by the US government.
The study found that groups such as the Islamic State group and al-Qaeda were "openly" active on the social network.
In addition, it found that Facebook's own tools were automatically creating fresh content for the proscribed groups by producing "celebration" and "memories" videos when pages racked up enough views or "likes", or had been active for a certain number of months.
The al-Qaeda local business page generated by Facebook's tools had 7,410 "likes" and gave the group "valuable data" it could use to recruit members and seek out supporters, the complaint said.
Facebook's algorithms populated the page with job descriptions that users had entered in their profiles, along with images, branding and flags used by the group.
Similar content was automatically produced for white supremacist and Nazi groups active on Facebook.
The complaint, filed with the US Securities and Exchange Commission, alleges that Facebook misled shareholders by claiming to remove extremist content while allowing it to persist on the site.
John Kostyack, director of the National Whistleblower Center, which released the study on behalf of the whistleblower, said he was "grateful" that the "disturbing information" had been made public.
"We hope that SEC takes prompt action to impose meaningful sanctions on Facebook," he said in a statement.
In a statement, Facebook said: "After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago.
"We don't claim to find everything and we remain vigilant in our efforts against terrorist groups around the world."
The study follows a series of missteps by Facebook, which has faced repeated criticism over how it handles hate speech and extremist content.
This week, Facebook co-founder Chris Hughes argued in a New York Times op-ed that it was time to break up the company.
"The government must hold Mark [Zuckerberg] accountable," he wrote.