Facebook says it took down 583 million fake accounts in Q1 2018

Entrance to Facebook's Menlo Park office

Facebook removed more than 20 million pieces of adult nudity or pornography in three months

Facebook said it found nearly all of that content before anyone had reported it, and that removing fake accounts is the key to combating that type of content.

To distinguish the many shades of offensive content, Facebook separates it into six categories: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.

The amount of content moderated by Facebook is influenced both by the company's ability to find and act on infringing material and by the sheer quantity of items posted by users.

The report also doesn't address how Facebook is tackling another vexing issue: the proliferation of fake news stories planted by Russian agents and other fabricators trying to sway elections and public opinion. Additionally, Facebook cleared 2.5 million hate speech posts, 1.9 million pieces of terrorist propaganda, 3.4 million posts containing graphic violence, and 21 million posts featuring nudity or sexual activity. But the report also indicates Facebook is having trouble detecting hate speech, and only becomes aware of the majority of it when users report the problem.

In the latest stop on its post-Cambridge Analytica transparency tour, Facebook today unveiled its first-ever Community Standards Enforcement Report, an 81-page tome that spells out how much objectionable content is removed from the site in six key areas. "The rate at which we can do this is high for some violations, meaning we find and flag most content before users do", the company wrote. "This increase is mostly due to improvements in our detection technology", the report notes.

"We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 per cent of which was identified by our technology before it was reported to Facebook".

The report looks at Facebook's enforcement efforts from Q4 2017 and Q1 2018, and shows an uptick in the prevalence of nudity and graphic violence on the platform.

The company removed, or placed a warning screen in front of, 3.4 million pieces of content depicting graphic violence in the first quarter, almost triple the 1.2 million a quarter earlier, according to the report.

In some instances, Facebook said it may cover graphic content with a warning that requires people to click to uncover it, to prevent it from accidentally being viewed by underage users.

Facebook said it released the report to start a dialog about harmful content on the platform, and how it enforces community standards to combat it. "Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, seven to nine views were of content that violated our adult nudity and pornography standards".

Facebook estimates that 3-4% of monthly active users during the last three months of 2017 and the first three months of 2018 were fake. Explaining why content such as hate speech remains difficult to catch automatically, the company said: "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important".

The social network says that taking action on flagged content does not necessarily mean the content has been taken down.

Facebook says AI has played an increasing role in flagging this content.

The firm disabled about 583 million fake accounts, most of which were caught within minutes of registering.
