The social networking giant said that the report shows the scale of spam, hate speech and violence encountered on the platform, and revealed that it had closed 583 million fake accounts in the first three months of 2018.

Facebook said that it had taken moderation action against almost 1.5 billion accounts and posts that violated its community standards in the first three months of 2018.

The company said in its first quarterly Community Standards Enforcement Report that the overwhelming majority of moderation action was against spam posts and fake accounts.

Elaborating further, Facebook said that it took action on 837 million pieces of spam, and shut down a further 583 million fake accounts on the site.

Beyond spam and fake accounts, Facebook said it moderated 2.5 million pieces of hate speech, 1.9 million pieces of terrorist propaganda, 3.4 million pieces of graphic violence and 21 million pieces of content featuring adult nudity and sexual activity.

Commenting on the figures released, Richard Allan, Facebook’s vice president of public policy for Europe, the Middle East and Africa, said, “This is the start of the journey and not the end of the journey and we’re trying to be as open as we can.”

Further, Alex Schultz, the company’s vice president of data analytics, revealed that the amount of content moderated for graphic violence almost tripled quarter-on-quarter.

Schultz added that one hypothesis for the increase is that “in [the most recent quarter], some bad stuff happened in Syria. Often when there’s real bad stuff in the world, lots of that stuff makes it on to Facebook.”

He further stressed that much of the moderation in those cases was “simply marking something as disturbing.”

According to reports, however, many categories of violating content mentioned in Facebook’s moderation guidelines were not included in the report.

Commenting on child exploitation imagery, Schultz said that the company still needed to make decisions about how to categorise different grades of content, for example cartoon child exploitation images.

He added, “We’re much more focused in this space on protecting the kids than figuring out exactly what categorisation we’re going to release in the external report.”

In the report, Facebook also said that its AI tools found 98.5 percent of the fake accounts it shut down, and “nearly 100 percent” of the spam.

Schultz pointed out that automatic flagging worked well for finding instances of nudity, since it was easy for image recognition technology to know what to look for.

Meanwhile, commenting on the moderation for hate speech, Facebook said in the report, “We found and flagged around 38 percent of the content we subsequently took action on, before users reported it to us.”