The abuse of modern telecommunications technologies by governments is capturing headlines the world over: the 2018 Cambridge Analytica scandal, for example, helped put digital disinformation squarely on the public agenda. But state-sponsored trolling—a new form of human rights abuse that uses online hate and harassment campaigns (often waged by people, but sometimes automated) to intimidate and silence voices critical of the state—remains an underappreciated threat to journalists around the world. In 2017 alone, at least forty-six journalists were killed for their reporting; at least one of them, Daphne Caruana Galizia, was subjected to state-sponsored online harassment in the lead-up to her death.

In the short term, solutions to these problems will depend on the unwavering efforts of journalists and civil society to raise them with the highest levels of political and corporate leadership. Both groups are often the first point of contact for those who, like Caruana Galizia, find themselves in the crosshairs of online disinformation and harassment campaigns. Journalists and civil society facilitate quick, clear communication channels from sources on the ground to technology companies and governments, and they bring a unique, detailed understanding of the relevant linguistic and cultural context; as a result, their ability to quickly convey the urgency and severity of crises stemming from digital communication is vital.

In the medium term, research and fact-finding are key. Without the labor and expertise that go into uncovering online disinformation and harassment campaigns, it is impossible to understand and prevent them. Several groups of savvy researchers have shown that independent research can deepen public understanding of these campaigns and have real impact. DFRLab and Bellingcat, for instance, use open-source forensic journalism to unmask influence operations and astroturf trolling campaigns online. The Citizen Lab and the Computational Propaganda Project conduct research on the scale and impact of these campaigns; scholars at Indiana University have created the Botometer and other tools for detecting and measuring these phenomena. In Taiwan, a civic tech collective called g0v.tw supports (among other projects) a crowdsourced fact-checker operating via a popular peer-to-peer messaging platform. Similarly, databases and research compiled by civil society, such as the International Press Institute’s On The Line database of online harassment of journalists and Freedom House’s annual Freedom on the Net report, are crucial for understanding the scale and depth of attempts to intimidate, drown out, or silence independent voices online.

What lessons have these complementary efforts unearthed to date? Together, they suggest that attempts to solve the problems of disinformation and state-sponsored trolling should focus on four “TORR” principles: Transparency, Ownership, Release, and Regulation. First, as Oxford scholar Robert Gorwa has compellingly argued, bots should be transparent about their automated status on platforms (as is now required by a recently passed California law). Second, civil society should continue to push for personal ownership of data: users should have a say in how their data is used, as well as the ability to understand and consent to its use; Tim Berners-Lee, often credited as the inventor of the World Wide Web, has long advocated for this solution. Third, it is critical that private companies, governments, and researchers release data on known propaganda campaigns, in order to better understand past challenges and prepare for future ones. And finally, civil society and journalists should continue to push for regulation of malicious uses of tools, but not necessarily regulation of the tools themselves. A blanket ban on bots, for example, would be detrimental to the functioning of the web as we know it today, given their myriad uses in business, public health, artistic expression, and other areas; a 2017 study even found that networks of bots can effectively disseminate and amplify socially useful messages. Legal regulation that targets specific political uses of bots is a far more fine-tuned solution, undercutting the effectiveness of astroturfing campaigns while preserving freedom of speech.

It can be easy to despair about the Internet’s prospects, but the same technologies that enable state-sponsored astroturf trolling also empower genuine grassroots activists. High-quality news and information are available to huge swaths of humankind, even if falsehood and sensationalism are also abundant. While policy prescriptions for governments and platforms are crucial to tipping the online balance in favor of democracy and free expression, even in the best-case scenario they will take time to implement. Time is a luxury the targets of these campaigns do not always have. Until or unless a systemic fix arrives, civil society efforts to research, monitor, and report state-sponsored harassment and disinformation campaigns will remain essential.

Nick Monaco is a disinformation researcher at Graphika, a company that tracks and analyzes disinformation campaigns online. He is also a research affiliate at the Oxford Internet Institute Computational Propaganda Project.

The views expressed in this post represent the opinions and analysis of the author and do not necessarily reflect those of the National Endowment for Democracy or its staff.