SAN FRANCISCO — Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to 1.

The previously unreported ratings system, which Facebook has developed over the past year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors.

Facebook developed its reputation assessments as part of its effort against fake news, Tessa Lyons, the product manager who is in charge of fighting misinformation, said in an interview. The company, like others in tech, has long relied on its users to report problematic content — but as Facebook has given people more options, some users began falsely reporting items as untrue, a new twist on information warfare for which it had to account.

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said.

A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.

It is unclear what other criteria Facebook measures to determine a user’s score, whether all users have a score and in what ways the scores are used.

The reputation assessments come as Silicon Valley, faced with Russian interference, fake news and ideological actors who abuse the companies’ policies, is recalibrating its approach to risk — and is finding untested, algorithmically driven ways to understand who poses a threat. Twitter, for example, now factors in the behavior of other accounts in a person’s network as a risk factor in judging whether a person’s tweets should be spread.

[Video: How to spot fake news. Consider these points before sharing an article on Facebook. It could be fake. (Monica Akhtar/The Washington Post)]

But how these new credibility systems work is highly opaque, and the companies are wary of discussing them, in part because doing so might invite further gaming — a predicament that the firms increasingly find themselves in as they weigh calls for more transparency around their decision-making.

“Not knowing how [Facebook is] judging us is what makes us uncomfortable,” said Claire Wardle, director of First Draft, a research lab within the Harvard Kennedy School that focuses on the impact of misinformation and that is a fact-checking partner of Facebook. “But the irony is that they can’t tell us how they are judging us — because if they do, the algorithms that they built will be gamed.”

The system Facebook built for users to flag potentially unacceptable content has in many ways become a battleground. The activist Twitter account Sleeping Giants called on followers to take technology companies to task over the conservative conspiracy theorist Alex Jones and his Infowars site, leading to a flood of reports about hate speech that resulted in him and Infowars being banned from Facebook and other tech companies’ services. At the time, executives at the company questioned whether the mass reporting of Jones’s content was part of an effort to trick Facebook’s systems. False reporting has also become a tactic in far-right online harassment campaigns, experts say.

[Video: Apple, Facebook, YouTube and Spotify have moved to remove the content of prominent right-wing talk show host Alex Jones for violating hate speech policies. (The Hollywood Reporter)]

Tech companies have a long history of using algorithms to make all kinds of predictions about people, including how likely they are to buy products and whether they are using a false identity. But as misinformation proliferates, companies are making increasingly sophisticated editorial choices about who is trustworthy.

In 2015, Facebook gave users the ability to report posts they consider to be false. A tab on the upper right-hand corner of every Facebook post lets people report problematic content for a variety of reasons, including pornography, violence, unauthorized sales, hate speech and false news.

Lyons said she soon realized that many people were reporting posts as false simply because they did not agree with the content. Because Facebook forwards posts that are marked as false to third-party fact-checkers, she said it was important to build systems to assess whether the posts were likely to be false to make efficient use of fact-checkers’ time. That led her team to develop ways to assess whether the people who were flagging posts as false were themselves trustworthy.

“One of the signals we use is how people interact with articles,” Lyons said in a follow-up email. “For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”
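The weighting Lyons describes — trusting future flags more from users whose past flags were confirmed by fact-checkers, and less from indiscriminate reporters — can be illustrated with a minimal sketch. Facebook has not disclosed its actual formula; the function names, the smoothing choice and the numbers below are purely hypothetical, meant only to show the general idea of credibility-weighted reporting.

```python
# Hypothetical sketch of credibility-weighted flagging. Nothing here comes
# from Facebook's actual system; it only illustrates the idea Lyons describes.

def trust_score(confirmed_false: int, total_reports: int) -> float:
    """Fraction of a user's past 'false news' flags that fact-checkers
    confirmed, with Laplace smoothing so a brand-new reporter starts
    near 0.5 rather than at 0 or 1."""
    return (confirmed_false + 1) / (total_reports + 2)

def weighted_flags(reporters: list[tuple[int, int]]) -> float:
    """Total weight of flags on one article: each flag counts for the
    reporter's trust score instead of a flat 1."""
    return sum(trust_score(c, t) for (c, t) in reporters)

# Three reporters flag the same article:
#  - an accurate reporter (9 of 10 past flags confirmed false),
#  - a brand-new reporter (no history),
#  - an indiscriminate reporter (2 of 40 past flags confirmed).
score = weighted_flags([(9, 10), (0, 0), (2, 40)])
```

Under this sketch, the accurate reporter's flag carries roughly ten times the weight of the indiscriminate reporter's, so an article flagged mostly by credible users would rise in the fact-checking queue.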

The score is one signal among many that the company feeds into more algorithms to help it decide which stories should be reviewed.

“I like to make the joke that, if people only reported things that were false, this job would be so easy!” Lyons said in the interview. “People often report things that they just disagree with.”

She declined to say what other signals the company used to determine trustworthiness, citing concerns about tipping off bad actors.
