Social network said it removed more than 2bn fake accounts in the first quarter of this year.
Facebook has said that its aggressive policies to scrub the platform of hate speech, fake news and other forms of harmful content are working.
More than two months after the Christchurch tragedy in New Zealand, in which a gunman livestreamed the killings via his Facebook account, the social network has come under intense scrutiny from governments and regulators across the world.
‘We estimated for every 10,000 times people viewed content on Facebook, 25 views contained content that violated our violence and graphic content policy’
– GUY ROSEN
In its twice-a-year Community Standards Enforcement Report, Facebook revealed that it scrubbed more than 2bn fake accounts from the platform in Q1 2019. That’s quite a staggering number when you consider that Facebook has 2.3bn monthly active users.
But it is also clear that the social network is playing a tough cat-and-mouse game against bad actors, and that it is relying heavily on AI to help catch and purge harmful content and fake accounts.
The report covers metrics across various policy areas, including adult nudity and sexual activity, bullying and harassment, child nudity and sexual exploitation of children, fake accounts, hate speech, regulated goods (drugs and firearms), spam, global terrorist propaganda, and violence and graphic content.
Purge the hate
According to the report, Facebook removed 4m hate speech posts during the first three months of 2019 and detected 65pc of them using AI, up from 24pc a year earlier.
The social network said that its automated systems for detecting violence caught more than 95pc of violent content posted on its platform before users reported it.
“We estimated for every 10,000 times people viewed content on Facebook, 11 to 14 views contained content that violated our adult nudity and sexual activity policy,” said Guy Rosen, vice-president of integrity at Facebook.
“We estimated for every 10,000 times people viewed content on Facebook, 25 views contained content that violated our violence and graphic content policy.
“For fake accounts, we estimated that 5pc of monthly active accounts are fake.”
Rosen added: “For fake accounts, the amount of accounts we took action on increased due to automated attacks by bad actors who attempt to create large volumes of accounts at one time. We disabled 1.2bn accounts in Q4 2018 and 2.19bn in Q1 2019.”
Updated, 12.26pm, 24 May 2019: This article was updated to clarify that Facebook reported catching more than 95pc of violent content before it was reported, not 98pc.