Facebook has removed more than three billion fake accounts during a six month period, according to its latest enforcement report.
The report details how many posts and accounts the social media platform took action against between October 2018 and March 2019. More than seven million “hate speech” posts were removed by the company, a record high that follows an intensifying crackdown on extremist content on the site.
Across the platform, around 5% of monthly active users were fake accounts, the report shows.
“Bad actors” were blamed for the sharp rise in the number of deleted fake accounts, the company said, adding that many of them are increasingly turning to automated methods to create large numbers of bogus accounts.
The majority of these accounts were identified and deleted within a matter of minutes, Facebook insisted, preventing them from causing any harm to other users.
During this six month period, Facebook took action against content including child sex abuse imagery, violence and terrorist propaganda.
Earlier this year, a host of far-right hate groups and prominent figures within those political spheres were banned from the platform in an unprecedented crackdown on hate speech. In the report, Facebook also estimated how often users were exposed to such content.
Out of every 10,000 content views on Facebook, fewer than 14 included nudity, the report shows. Around 25 included violent or graphic content, while fewer than three included child abuse imagery or terrorist propaganda.
“For hate speech, we now detect 65% of the content we remove, up from 24% just over a year ago when we first shared our efforts,” Facebook said. “In the first quarter of 2019, we took down 4 million hate speech posts and we continue to invest in technology to expand our abilities to detect this content across different languages and regions.”
Facebook’s report also revealed that between January and March of this year, more than one million appeals were made to the site after posts were removed for breaching rules on hate speech.
During this period, around 150,000 posts that were found not to have breached the company’s policies were restored. Facebook conceded that its enforcement “isn’t perfect” and said it rectifies mistakes as soon as they are identified. For the sake of transparency, the company has included this data in the most recent report.
“That’s why we are including how much content was restored after it was appealed, and how much content we restored on our own – even if the content wasn’t directly appealed,” the report said.
In summer 2018, the social media firm began using artificial intelligence (AI) to identify content that violates policies on regulated goods, such as firearms and drugs.
Since then, the company said, these increased efforts have “enabled us to take action on more content and, in the majority of cases, to do so before people need to report it to us.”
In the first quarter of 2019, Facebook took action on around 900,000 pieces of drug sale content. Of this, the company detected around 83.3% proactively. During this same period, action was also taken on about 670,000 pieces of firearm sale content.