FACEBOOK is rating users based on how “trustworthy” it thinks they are.

Users receive a score on a scale from zero to one that determines if they have a good or bad reputation – but it’s completely hidden.


Your Facebook usage is being monitored, and may be converted into a trustworthiness score

The rating system was revealed in a report by the Washington Post – and later confirmed by Facebook to The Sun – which says it’s in place to “help identify malicious actors”.

Facebook tracks your behaviour across its site and uses that info to assign you a rating.

Tessa Lyons, who heads up Facebook’s fight against fake news, said: “One of the signals we use is how people interact with articles.

“For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false news feedback more than someone who indiscriminately provides false news feedback on lots of articles, including ones that end up being rated as true.”


Facebook can see everything you do on the site – which helps build a highly detailed picture of who you are

Earlier this year, Facebook admitted it was rolling out trust ratings for media outlets.

This involved ranking news websites based on the quality of the news they were reporting.

This rating would then be used to decide which posts should be promoted higher in users’ News Feeds.

User ratings are employed in a similar way – helping Facebook judge the quality of a user’s post reports.


According to Lyons, a user’s rating “isn’t meant to be an absolute indicator of a person’s credibility”.

Instead, it’s intended as a way of working out how risky a user’s actions may be.

How does Facebook’s user rating system work?

Facebook told The Sun that this is how the system works…

Facebook works to fight fake news by using machine learning systems

These automated systems predict articles that its human fact-checkers should review

Facebook developed a process that protects against people “indiscriminately flagging news as fake” in an attempt to game the system

One of the indicators used in this process is how people report articles as false

For instance, if someone previously gave Facebook feedback that an article was false, and then that article was confirmed false by a fact-checker, that person’s future feedback would be weighted more positively

This is reflected in an invisible score or rating, which changes depending on the quality of a person’s reports

So if someone regularly reports news as false, and that news is later rated as true, that person’s future reports will be given less weight than those of someone with a higher score

Facebook says this is an effective way to fight misinformation

Facebook says that people often report something as false because they disagree with a story, or are trying to target a particular publisher

Attempts to game this feedback are why Facebook can’t rely on the reporting system as a totally accurate indicator
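The weighting process described above can be sketched roughly in code. Facebook has not published its actual model, so everything here – the class name, the neutral starting score, the simple hit-rate formula – is an assumption, not the real system; it only illustrates the idea that a reporter’s track record against fact-checker verdicts determines how much their future reports count.

```python
# Hypothetical sketch of the reporter-weighting idea.
# All names and formulas are assumptions; Facebook's real model is not public.

class ReporterScore:
    """Tracks how often a user's 'false news' reports match fact-checker verdicts."""

    def __init__(self):
        self.confirmed = 0  # reports later confirmed false by a fact-checker
        self.total = 0      # all of this user's reports that were fact-checked

    def record_verdict(self, confirmed_false: bool) -> None:
        """Update the tally once a fact-checker rules on a reported article."""
        self.total += 1
        if confirmed_false:
            self.confirmed += 1

    @property
    def score(self) -> float:
        """Score on a zero-to-one scale; a new user starts at a neutral 0.5."""
        if self.total == 0:
            return 0.5
        return self.confirmed / self.total

    def report_weight(self) -> float:
        """How much a fresh report from this user counts toward review priority."""
        return self.score


# A reliable reporter: three reports, all confirmed false by fact-checkers.
reliable = ReporterScore()
for _ in range(3):
    reliable.record_verdict(confirmed_false=True)

# An indiscriminate flagger: four reports, only one confirmed false.
flagger = ReporterScore()
for verdict in (False, False, True, False):
    flagger.record_verdict(confirmed_false=verdict)

print(reliable.report_weight())  # 1.0
print(flagger.report_weight())   # 0.25
```

In this toy version, the reliable reporter’s next flag carries four times the weight of the indiscriminate flagger’s, matching the article’s point that accurate past feedback is “weighted more” when deciding which articles human fact-checkers should review.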

Facebook told The Sun that the rating is specific to its fake news team, and that there’s no unified score that is like a credit rating used everywhere

A Facebook spokesperson told The Sun: “The idea that we have a centralised ‘reputation’ score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading.

“What we’re actually doing: we developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system.

“The reason we do this is to make sure that our fight against misinformation is as effective as possible.”