Monday’s meeting between Donald Trump and Vladimir Putin in Helsinki caps a big few days on the topic of cyber election interference. On Friday, special counsel Robert Mueller—tasked with investigating Russian interference in the 2016 US election—indicted 12 Russian intelligence officers over the hacking of the Democratic National Committee and the Hillary Clinton campaign. The indictment, remarkable for publicly revealing the extent to which US intelligence services themselves have compromised Russian networks, is further confirmation of the US intelligence community’s unanimous view on the certainty of the Kremlin’s interference.

The 2016 US presidential election was a remarkable event in the history of US national security, both for the brazen nature of Russia’s interference (not to mention unresolved questions about the Trump campaign’s involvement) and for the unprecedented nature and scale of the cyber tools deployed. Yes, foreign ‘influence operations’ are a long-established stratagem of intelligence services, but the blatant leveraging of cyberspace—and social media in particular—to influence voter behaviour took most by surprise.

Our recent article in the journal Contemporary Politics develops an analytical framework for thinking about how cyber interference affects voting behaviour. We ask how cyber tools can shape individual decisions by making someone more or less likely to vote for a particular candidate, to vote at all, or to engage with political processes in other ways. We examine three cyber tools in particular.

Fake news, now a familiar (though often misused) term, is false or misleading content usually spread via social media and fashioned to appear credible or authentic. For example, a US Justice Department indictment alleges that during the 2016 campaign, Russian operatives promoted (false) allegations of voter fraud by the Democratic Party, echoing Trump’s own message that the system was rigged.

The second tool is doxing: the strategic release of hacked confidential material, such as the emails stolen from the DNC and the Clinton campaign, timed and framed to damage a target.

Finally, trolling is the act of posting provocative and/or lurid content online to elicit emotional responses and widen existing social or political cleavages. Trolling isn’t limited to comments authored by automated bots or hired ‘troll farms’; it can also include paid advertisements on divisive social and political issues such as race, religion and gun control.

These tools were deployed with sophistication during the 2016 campaign. The hacked Podesta emails were published within minutes of the infamous Trump Access Hollywood tape emerging, distracting from that scandal. Paid advertisements targeted highly specific demographics such as Beyoncé fans or secessionist Texans. Facebook’s general counsel described Russia-linked advertisements as ‘an insidious attempt to drive people apart’. Our research unpacks how these activities might have affected the political behaviour of American voters and potentially swung a historically close election.

It’s impossible to know for sure whether Russia’s efforts were decisive, though James Clapper, the former director of national intelligence, believes they were. But there’s enough evidence to understand how these tools operated and identify local conditions that may have exacerbated or blunted their impact.

In particular, it’s instructive to compare the 2016 US election with the 2017 French presidential election, which saw similar tactics deployed. Emails from the campaign of frontrunner (and eventual winner) Emmanuel Macron were leaked just before the final-round vote; fake news stories included a report that the Macron campaign was funded by Saudi Arabia and supported by al-Qaeda; and a coordinated effort, including trolling activity, was directed towards supporting Russia’s preferred candidate, the far-right nationalist Marine Le Pen.

Analysing and comparing the two elections suggests that local factors matter in shaping the effectiveness of cyber voter interference.

First, an election season spanning months in the US (compared to a narrow five-week window in France, including a media blackout just prior to voting) gave ample time for hackers to infiltrate political campaigns. It also allowed the fatiguing psychological impacts of fake news and trolling to accumulate.

Second, the incumbent French government, the Macron campaign and tech companies each took more initiative in conducting cyber defence—for example, taking decisive and public action in the wake of the Macron email leaks to caution the public and remind the media of their duty to protect the integrity of the vote. The Obama administration, in contrast, was inhibited by both confusion about the nature of the attack and partisan resistance from Republicans.

As Thomas Rid observes, cyber voter interference seeks ‘to drive wedges into pre-existing cracks’, not create new ones. Highly polarised or divided democracies may be more vulnerable due to their tribal nature, but our comparison of the US and French elections suggests that the effectiveness of cyber tools also depends on how the sources and integrity of information entering public discourse are regulated.

Trust in the underlying integrity of public discourse is vital for the functioning of a liberal democracy. The policy challenge is to regulate but not suffocate flows of information, promoting transparency and civility while leaving space for a diversity of opinions and perspectives. These issues are just as complex closer to home—Australia’s new Electoral Integrity Task Force has much on its plate.