After Facebook’s red warning labels for flagging fake news backfired, the social networking giant is now literally shrinking dubious stories to curb the spread of falsehoods on its platform.

As part of its new strategy to combat fake news, Facebook wants users to overlook these stories as they scroll their News Feed, without removing them altogether, walking a fine line “between censorship and sensibility”, according to a media report.

When an article is verified as inaccurate by the social network’s third-party fact-checkers, Facebook will shrink the size of the link post in the News Feed, TechCrunch reported on Saturday.

“We reduce the visual prominence of feed stories that are fact-checked false,” a Facebook spokesperson was quoted as saying.

To combat the menace of fake news, Facebook earlier introduced red warning labels. But this led some users to share the false stories even more aggressively, forcing the social network to ditch the red flag. Facebook then started showing “Related Articles” from trusted news sources in the hope of offering its users the correct perspective.

The move to reduce the visual prominence of inaccurate stories is another effort in the same direction. Facebook detailed its new tactics to fight fake news at its Fighting Abuse @Scale event in San Francisco, according to TechCrunch.

Facebook, the report said, is also now using machine learning to look at newly published articles and scan them for signs of falsehood. “We use machine learning to help predict things that might be more likely to be false news, to help prioritize material we send to fact-checkers (given the large volume of potential material),” a Facebook spokesperson was quoted as saying.
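Facebook has not disclosed how this prioritization works internally. Purely as an illustration of the idea described in the quote, the sketch below assumes a hypothetical model has already assigned each article a falsehood score, and shows how a backlog could be triaged so fact-checkers see the highest-risk items first; the `triage` function, field names, and scores are all invented for this example.

```python
import heapq

def triage(articles, k):
    """Return the k articles with the highest predicted-falsehood
    scores, i.e. the ones to send to human fact-checkers first."""
    # heapq.nlargest stays efficient (O(n log k)) even when the
    # backlog of newly published articles is very large.
    return heapq.nlargest(k, articles, key=lambda a: a["score"])

# Hypothetical backlog: each entry pairs an article with a score
# from an (assumed) machine-learning classifier.
backlog = [
    {"url": "a", "score": 0.91},
    {"url": "b", "score": 0.12},
    {"url": "c", "score": 0.77},
    {"url": "d", "score": 0.35},
]

queue = triage(backlog, 2)
print([a["url"] for a in queue])  # → ['a', 'c']
```

The point of ranking rather than filtering is the volume problem the spokesperson mentions: human fact-checkers can only review a fraction of new material, so the model's job is to order the queue, not to make the final call.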