Facebook admits 4% of accounts were fake

Posted May 16, 2018

All in all, the company removed 583 million fake accounts, although these were not all active at the same time.

That was less than 0.1 per cent of viewed content - which includes text, images, videos, links, live videos and comments on posts - Facebook said, adding that it had dealt with almost 96 per cent of those cases before being alerted to them.

Getting rid of racist, sexist and other hateful remarks is challenging, Facebook said Tuesday, because computer programs have difficulty understanding the nuances of human language.

The quarterly report also details how much guideline-violating content was seen by users, how much was removed after being reported, and how much was taken down before any user reported it.

It took action on 21 million pieces of content containing nudity and sexual activity. Of the 2.5 million hate speech posts removed, only 38 per cent were pulled by Facebook's technology before users reported them. Compare that with the 95.8 per cent of nudity and 99.5 per cent of terrorist propaganda that Facebook purged automatically.
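For context, those "proactive" percentages are simple ratios: the share of actioned content that Facebook's systems flagged before any user report. A minimal sketch of the arithmetic in Python (the function name and the 0.95 million figure, derived here from 38 per cent of 2.5 million, are ours, not the report's):

```python
# Proactive detection rate: the share of actioned content that was
# flagged by automated systems before any user reported it.
def proactive_rate(flagged_before_report: float, total_actioned: float) -> float:
    return flagged_before_report / total_actioned

# Illustrative check against the hate-speech figure above:
# ~0.95 million of 2.5 million posts flagged first gives 38 per cent.
print(f"{proactive_rate(0.95e6, 2.5e6):.0%}")  # -> 38%
```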

While artificial intelligence is able to sort through nearly all spam and content glorifying al-Qaeda and ISIS, as well as most violent and sexually explicit content, it is not yet able to do the same for attacks on people based on personal attributes such as race, ethnicity, religion, or sexual and gender identity, the company said in its first-ever Community Standards Enforcement Report.

Facebook also increased the amount of content taken down by using new AI-based tools to find and moderate content without needing individual users to flag it as suspicious.

"We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 per cent of which was identified by our technology before it was reported to Facebook".

Facebook took action on 1.9 million pieces of content containing terrorist propaganda.

This was up by nearly three-quarters from 1.1 million in the previous quarter, thanks to improvements in Facebook's ability to find such content using photo-detection technology.
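As a quick worked check of that "three-quarters" figure (the numbers come from the article; the rounding is ours):

```python
# Quarter-over-quarter growth in actioned terrorist-propaganda content.
previous_quarter = 1.1e6  # Q4 2017 pieces of content, per the article
current_quarter = 1.9e6   # Q1 2018 pieces of content, per the article

increase = (current_quarter - previous_quarter) / previous_quarter
print(f"{increase:.0%}")  # -> 73%, i.e. roughly three-quarters
```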

The social network estimates that it found and flagged 85 per cent of that content before users saw and reported it - a higher rate than previously, thanks to technological advances. The renewed attempt at transparency is a nice start for a company that has come under fire for allowing its social network to host all kinds of offensive content.

"For serious issues like graphic violence and hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams", Mr Rosen said.

While AI is getting more effective at flagging content, Facebook's human reviewers still have to finish the job. But, as Facebook's Alex Schultz made clear, none of this is complete.

"All of this is under development. These are the metrics we use internally and as such we're going to update them every time we can make them better", he said. By releasing these numbers, Facebook can claim that it's getting a grip on its community.

Facebook has summits planned for May 16 in Oxford and May 17 in Berlin. The company expects to have 20,000 people working on security and content moderation by the end of the year.