The magazine published a profile, available here, highlighting the project’s ongoing work on media manipulation and computational propaganda.

Howard heads the Computational Propaganda Project at Oxford, an interdisciplinary research group that combines the methods of computer science, political science, and sociology to examine how the internet and social media can be used to manipulate public opinion. His project has brought scientific rigor to bear on the phenomenon of fake news by systematically cataloging and analyzing how bots and automation are influencing public opinion.

Trained as a sociologist, Howard got his first taste of the fake news phenomenon while living in Budapest in 2012, when allies of Hungarian Prime Minister Viktor Orban used the power of innuendo and rumor to cast aspersions on the country’s Roma population. When Russian-backed separatists shot down Malaysia Airlines Flight 17 over eastern Ukraine in 2014, Howard’s Hungarian friends began passing along the many conspiracy theories promoted by the Kremlin to explain the disaster — among them, for example, the claim that the U.S. military had downed the packed airliner.

With this development, Howard’s long-standing research interests suddenly intersected with the world of online propaganda. “I’m interested in studying how political elites manipulate people — and how they do that over the internet,” Howard says.

10 Things you wish you didn’t know about elections (and what to do about them)
http://philhoward.org/10-things-you-wish-you-didnt-know-about-elections-and-what-to-do-about-them/
Tue, 12 Dec 2017 21:44:10 +0000

In March 2017, I spoke at an event hosted by the University’s Social Sciences Division to highlight projects across Oxford that are receiving ERC funding. Professor Roger Goodman, Head of the Social Sciences Division, hosted the event, which marked the 10th anniversary of the ERC and celebrated the researchers supported by these grants.

The talk was recently made into an interactive presentation, accessible online here.

I am delighted to share the good news that the OII’s Project on Computational Propaganda was awarded the National Democratic Institute’s “Democracy Award” on November 2nd.

Every year the NDI recognizes organizations that have demonstrated a deep and abiding commitment to democracy and human rights. This year, NDI honored the Oxford Internet Institute and the Project on Computational Propaganda for their research on the front lines of the global fight against disinformation and false news.

This is the first time that a university has won the award.

The NDI presented the award to the Oxford Internet Institute at its annual Democracy Dinner, held November 2nd in Washington, DC. At the dinner, Senator Chris Murphy provided a perspective from the U.S. Congress on this important topic and on efforts being taken to counter disinformation. Past awardees include Vaclav Havel, Aung San Suu Kyi, Kofi Annan, Archbishop Desmond Tutu, and Ellen Johnson Sirleaf. More information about the award is available here: https://www.ndi.org/our-stories/disinformation-vs-democracy-fighting-facts

The Oxford study categorized 12,413 Twitter users and 11,103 Facebook users whose social media messages referred to or carried content from one or more of the Russian-linked websites between April 2 and May 2, 2017. The researchers used sophisticated modeling to examine how tweets and “likes” of Facebook posts broadened the effects of junk and phony news from the three sites, sometimes directly connecting recipients with Russian trolls.
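As a rough illustration of the categorization step described above, a script along these lines could flag users whose posts link to a watchlist of sites. The domain list, field names, and data layout here are invented for the example; this is a sketch of the general approach, not the project’s actual pipeline.

```python
from urllib.parse import urlparse

# Hypothetical watchlist; placeholder domains, not the study's actual list.
FLAGGED_DOMAINS = {"junk-example.ru", "propaganda-example.info"}

def flagged_users(posts):
    """posts: iterable of dicts like {"user": "name", "urls": ["http://...", ...]}.

    Returns the set of users who shared a link to any flagged domain.
    """
    users = set()
    for post in posts:
        for url in post["urls"]:
            domain = urlparse(url).netloc.lower()
            if domain.startswith("www."):
                domain = domain[4:]  # normalize "www." prefixes before matching
            if domain in FLAGGED_DOMAINS:
                users.add(post["user"])
    return users
```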

“On Twitter there are significant and persistent interactions between current and former military personnel and a broad network of Russia-focused accounts, conspiracy theory-focused accounts and European right-wing accounts,” the researchers concluded.

Russian operatives used Twitter and Facebook to target veterans and military personnel, study says
http://philhoward.org/russian-operatives-used-twitter-and-facebook-to-target-veterans-and-military-personnel-study-says/
Fri, 13 Oct 2017 21:32:59 +0000

The project’s latest memo on junk news and social media operations against veterans was covered in the Washington Post.

The researchers also tracked information on several military-themed websites and used the traffic to these sites, along with the Twitter data, to determine which Facebook accounts promoted similar content on publicly available pages. That yielded maps of online interaction showing, for example, that accounts that linked frequently to veterans and military issues also in many cases linked to content related to Russia.
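A minimal sketch of that kind of co-linking analysis, assuming share records have already been reduced to (account, domain) pairs; the data shape and domain groupings are hypothetical, not the study’s actual method.

```python
from collections import defaultdict

def colinking_accounts(shares, domains_a, domains_b):
    """Return accounts that linked to at least one domain in each group.

    shares: iterable of (account, domain) pairs harvested from public posts.
    domains_a, domains_b: sets of domains, e.g. veterans-themed vs. Russia-focused.
    """
    linked = defaultdict(set)
    for account, domain in shares:
        linked[account].add(domain)
    # Keep accounts whose shared domains overlap both groups.
    return [acct for acct, doms in linked.items()
            if doms & domains_a and doms & domains_b]
```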

The kinds of information shared by and with veterans and active-duty personnel span a wide range, with liberal political content also common, though not as common as conservative political content. The online military community, the researchers found, also shared links about sustainable agriculture, mental health issues such as addiction, and conspiracy theories.

No one subject dominated the online content flowing among these communities, but the largest individual categories dealt with military or veteran matters. Russian disinformation was a smaller but significant and persistent part of the overall information flow.

“The very idea that there’s aggressive campaigns to target military personnel with misleading content on national security issues is surprising. It’s disappointing,” Howard said. “Because they’re opinion leaders, they get more attention from governments and people who spread misinformation.”

Social media provides political news and information for both active-duty military personnel and veterans. We analyze the subgroups of Twitter and Facebook users who spend time consuming junk news from websites that target US military personnel and veterans with conspiracy theories, misinformation, and other forms of junk news about military affairs and national security issues. (1) Over Twitter, we find significant and persistent interactions between current and former military personnel and a broad network of extremist, Russia-focused, and international conspiracy subgroups. (2) Over Facebook, we find significant and persistent interactions between public pages for military and veterans’ issues and subgroups dedicated to political conspiracy on both sides of the political spectrum. (3) Over Facebook, the users most interested in conspiracy theories and the political right seem to distribute the most junk news, whereas users who are in the military or are veterans are among the most sophisticated news consumers and share very little junk news through the network.

John D. Gallacher, Vlad Barash, Philip N. Howard, and John Kelly. “Junk News on Military Affairs and National Security: Social Media Disinformation Campaigns Against US Military Personnel and Veterans.” Data Memo 2017.9. Oxford, UK: Project on Computational Propaganda. comprop.oii.ox.ac.uk.

Washington Post: Facebook has so much more to tell us
http://philhoward.org/washington-post-facebook-has-so-much-more-to-tell-us/
Thu, 05 Oct 2017 15:02:53 +0000

This originally appeared as “Facebook has so much more to tell us” and was written by my collaborator Bence Kollanyi and me.

Facebook and Twitter have taken the important step of handing over to Congress thousands of ads that were bought and circulated by Russian strategists to influence our elections. These examples show just how expertly the Russian propaganda machine can craft messages that stoke fear, hatred and panic among American voters.

But sharing examples is only the first small step in what should be a systematic analysis of foreign political influence on American voters through online networks. Facebook and Twitter are unique as media companies because they provide us with platforms for communicating with our networks of family and friends. The next step in understanding Russian interference involves sharing network data — not just ad examples.

At the Oxford Internet Institute, we have been studying how governments use social media to manipulate public opinion — in their own country and in others. Information provided by Facebook and Twitter has already allowed us to fill in some of the details.

We know, for example, that Russian strategists bought 3,000 Facebook ads with a budget of $100,000 and that RT, the news outlet funded by the Russian government and previously known as Russia Today, spent $274,100 on 1,823 promoted tweets. We know Russians set up and managed fake accounts that pretended to belong to American voters. We have found them managing networks of highly automated accounts. We know they use such accounts to direct political conversation among their own citizens (some 45 percent of Russian Twitter is managed by bots). We know they know how to use Twitter’s and Facebook’s algorithms to push propaganda at voters in democracies. And we now know what some of the ads look like.

Yet to really understand the influence of ads — as well as the impact of surreptitious content that isn’t as obvious as a purchased ad — we need to understand the networks, not the examples. Twitter recently offered a helpful blog post on its reaction to the ad examples Facebook shared with lawmakers. Twitter used Facebook’s list of known fake American-voter accounts originating in Russia to identify a set of suspicious users on its own platform. Twitter then closed a handful of accounts and provided the Senate Intelligence Committee with examples of the ads that the main Russian news agency, RT, had pushed across the platform.

Because Twitter now suggests that content from Russian news agencies may be a proxy for the larger campaign of dark ads and bot networks, we can further visualize how the Russian content targeted voters around the United States. For example, my team and I recently found that junk news — propaganda and ideologically extreme information — was concentrated in swing states. Russian content also appears to have been steered into states where the Trump campaign was doing well.

But working without the cooperation of Twitter and Facebook means that our models aren’t perfect. No doubt, Twitter and Facebook have higher-quality data on all this. They certainly employ some of the best network analysts and data scientists in the world. Yet it has taken an FBI inquiry, congressional investigations, nearly a year of bad press and pressure from outside researchers such as us to dislodge some examples of Russian interference.

Evidence of campaign coordination and election interference will be in the network data, not just the ad buys. The next step should be open collaborations that explain network effects and help restore public trust in social media.

The Guardian: Social media companies must respond to the sinister reality behind fake news
http://philhoward.org/the-guardian-social-media-companies-must-respond-to-the-sinister-reality-behind-fake-news/
Mon, 02 Oct 2017 14:53:14 +0000

This originally appeared as “Social media companies must respond to the sinister reality behind fake news” and was written by my collaborator Bence Kollanyi and me.

Social media companies such as Facebook and Twitter have begun to share evidence of how their platforms are used and abused during elections. They have developed interesting new initiatives to encourage civil debate on public policy issues and voter turnout on election day.

Computational propaganda flourished during the 2016 US presidential election. But what is most concerning is not so much the amount of fake news on social media as where it might have been directed. False information didn’t flow evenly across social networks. There were six states where the margin between Donald Trump and Hillary Clinton was less than 2% – Florida, Michigan, Minnesota, New Hampshire, Pennsylvania and Wisconsin. If there were any real-world consequences to fake news, that’s where they would appear – where public opinion was evenly split right up to election day.

US voters shared large volumes of polarising political news and information in the form of links to content from Russian sources, WikiLeaks and junk news outlets. Was this low-quality political information distributed evenly over social networks around the country, or concentrated in swing states and specific areas? How much of it was extremist, sensationalist, or commentary masquerading as news?

To answer these questions, our team at Oxford University collected data on fake news – though we use the term “junk news” because it is impossible to tell how much fact-checking work went into a story just by reading it. But the junk is often easy to spot: extremist, sensationalist, conspiratorial stories, commentary essays presented as news, or content sourced from foreign governments.

Using self-reported location information from Twitter users, we placed a third of users by state and created a simple index for the distribution of polarising content around the country. First, we found that nationally, Twitter users got more misinformation, polarising and conspiratorial content than professionally produced news. Second, we found that users in some states were sharing more polarising political news and information than users in other states. Average levels of misinformation were higher in swing states than in uncontested states, even when weighted for the relative size of the user population.
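A minimal sketch of what such an index could look like, assuming each located tweet has already been classified as junk or professional news; the per-state ratio implicitly adjusts for the size of each state’s user population. Field names and data layout are hypothetical, not the project’s code.

```python
from collections import Counter

def junk_index_by_state(tweets):
    """tweets: iterable of dicts like {"state": "FL", "kind": "junk"},
    where "kind" is either "junk" or "professional"."""
    junk, professional = Counter(), Counter()
    for t in tweets:
        (junk if t["kind"] == "junk" else professional)[t["state"]] += 1
    # Ratio of junk to professional shares per state; states with no
    # professional-news shares are skipped to avoid division by zero.
    return {state: junk[state] / professional[state] for state in professional}
```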

Political speech has a lot of protection in the US, but the reasonable balance between freedom of speech and election interference has been tipped. There is such a significant volume of misinformation flowing over social media that it is difficult to imagine voters in the US are equipped with what they need to make good decisions. Did voters in swing states get the political news and information they needed to make good decisions? Our conclusion: certainly not.

We did similar research during a less controversial election in Germany and found that for every four stories sourced to a professional news organisation, there was one piece of junk. In part, this healthier ratio is because levels of education are high in Germany, and there is public financing for several kinds of professional news organisations. But the voting public in Germany – and its politicians – are panicked even by this level of misinformation.

But the real strain on democracy lies ahead, not behind. Social networks can be fragile in important ways. If the followers of candidates who lost the election begin to un-friend the followers of candidates who won, our social networks will become even more bounded than they already are. Worse, the politicians who won with the backing of junk news are likely to keep generating it. They may be more likely to consult it when the time comes to make big public policy decisions.

What worries us now is that junk news is becoming a vehicle for junk science. Campaigns of misinformation about climate change, the link between smoking and cancer, and the health benefits of inoculating children also spread like wildfire over social networks. Our next project, with the Oxford Martin School, is to work out who is behind campaigns to persuade our political leaders to ignore scientific recommendations.

It is hard to know what a comprehensive solution might be. Part of the explanation for all this involves significant changes in the business of news and generational differences in how young people consume news. But we are at a point where some kind of public policy oversight is needed, and past the point where voluntary initiatives from social media firms are sufficient.

Facebook and Twitter don’t generate junk news but they do serve it up to us. We must hold social media companies responsible for serving misinformation to voters, but also help them do better. They are the mandatory point of passage for this junk, which means they could also be the choke point for it.

There are some obvious ways to fix this without interfering with political speech. In the US, the Uniform Commercial Code, which is a supplement to contract law in many states, could be the place to make both advertisers and social media companies adhere to some basic anti-spam and truth-in-advertising rules. But in all democracies, paid political content on social media platforms should come with clear disclosures. Bots should declare their paymasters and ads should disclose their backers. Social media platforms should be required to file all political advertising and political bot networks with election officials. Bots should be clearly identified to users.

Most people, most of the time, don’t use social media for politics. But in the days before a major election or referendum social media platforms provide the most important source of information in most democracies. How they design for deliberation is now crucial to the success of democracy.

Fake News on Twitter Flooded Swing States That Helped Trump Win
http://philhoward.org/fake-news-on-twitter-flooded-swing-states-that-helped-trump-win/
Sun, 01 Oct 2017 15:22:12 +0000

The project’s latest research on politicized information and the 2016 US election was covered in Mother Jones.

Millions of tweets were flying furiously in the final days leading up to the 2016 US presidential election. And in closely fought battleground states that would prove key to Donald Trump’s victory, they were more likely than elsewhere in America to be spreading links to fake news and hyperpoliticized content from Russian sources and WikiLeaks, according to new research published Thursday by Oxford University.

Nationwide during this period, roughly one polarizing story was shared for every story produced by a professional news organization. However, fake news on Twitter reached higher concentrations than the national average in 27 states, 12 of which were swing states — including Pennsylvania, Florida and Michigan, where Trump won by slim margins.

Propaganda flowed heavily into battleground states around election, study says
http://philhoward.org/propaganda-flowed-heavily-into-battleground-states-around-election-study-says/
Sat, 30 Sep 2017 15:18:26 +0000

Our latest research into polarizing information shared in the lead-up to the 2016 US election was covered in the Washington Post.

Propaganda and other forms of “junk news” on Twitter flowed more heavily in a dozen battleground states than in the nation overall in the days immediately before and after the 2016 presidential election, suggesting that a coordinated effort targeted the most pivotal voters, researchers from Oxford University reported Thursday.

The volumes of low-quality information on Twitter — much of it delivered by online “bots” and “trolls” working at the behest of unseen political actors — were strikingly heavy everywhere in the United States, said the researchers at Oxford’s Project on Computational Propaganda. They found that false, misleading and highly partisan reports were shared on Twitter at least as often as those from professional news organizations.

But in 12 battleground states, including New Hampshire, Virginia and Florida, the amount of what they called “junk news” exceeded that from professional news organizations, prompting researchers to conclude that those pushing disinformation approached the job with a geographic focus in hopes of having maximum impact on the outcome of the vote.