To understand why data protection is necessary, we can take a closer look at the activities of a company that engaged in data analysis for political actors – the Canadian firm AggregateIQ (AIQ).

AggregateIQ and the political technology industry

BC-based data analytics firm AIQ found itself at the centre of an international scandal this year around the use of information gleaned from Facebook users. Christopher Wylie, a former employee of and whistleblower on Cambridge Analytica, the company that extracted the information from Facebook, said that AIQ bought some of the data to help with the pro-Brexit campaign in the UK.

AIQ told Canadian MPs at committee that it sees itself as part of a legitimate global political technology industry. Canada, like many other Western democracies, has witnessed parties turn to data analytics and digital media to better run their campaigns. For its proponents, technology-intensive campaigning gets people out to vote. It also helps parties to find the right supporters, be more responsible with their limited funds, and ultimately win.

Jeff Silvester of AggregateIQ appears at a Commons ethics committee in Ottawa on June 12, 2018. The Canadian Press, by Sean Kilpatrick.

AIQ, by all accounts, seems to have had little uptake in Canada. This is not because there is an aversion to technology in politics, but because the parties already have their own solutions in place. The Conservative Party uses NationBuilder together with a proprietary database. The Liberal Party uses the US Democratic Party-affiliated NGP VAN, and the NDP consults with another Democrat-affiliated firm, Blue State Digital, as well as using its own tool, Populus. All these firms and the parties will have to adapt if Canada implements the committee’s recommendations for the protection of data.

Disaggregating AggregateIQ

What exactly does AIQ do in the political technology industry? Some clues may be found in an application it made in 2017 to the National Research Council (NRC) for funding to develop a “political campaign online reporting tool.”

A page from AggregateIQ’s application for funding from the National Research Council. Source: The Canadian Press.

Though AIQ included more detailed descriptions of these tasks in its application, these were deemed confidential third-party information and redacted.

But the information that was released largely matches my own observations of political engagement platforms in Canada. Crucially, what these technologies do is collect political data found elsewhere and make it operational for the parties they work for. Here are some of the things political engagement platforms allow parties to do, according to my research.

Maintain records of voters and their support of the party. These databases might be supplemented with limited third-party data in Canada, such as lists of magazine subscribers, as well as with data collected by the party, to develop more complete pictures of potential voters.

Facilitate contact with voters and, crucially, solicit more donations. This contact could include sending emails, running a website or posting to social media. Today, most political engagement platforms also have voter apps that help canvassers knocking on doors to avoid wasting time visiting non-supporters, as well as to log issues citizens raise at the door. Interactions are entered into the database, helping campaigns track their voter relations.

Track voters’ intentions and activities to increase participation. Participation can range from visiting the website to voting on Election Day.

Conduct data analysis, usually to predict who voters will support, the issues that are relevant to them, and the likelihood they will vote.

AIQ’s activities bolster the committee’s call for “data protection.” In the ethics section of its application for NRC funding, AIQ said that “with no personal data and no data that could be matched back to an individual, we believe this project meets all ethical requirements and does not require further ethical review.”

That statement reflects a belief that non-identifiable data, or data posted online, does not require the same standards of protection as other personal data. The statement is not far from academic debates about what to do with the volumes of data now willingly posted online. The issue before the Ethics committee, and before academics, is what should be done with this data? What limits should we place on political technology?

The limits of political technology

Before discussing data protection in politics, I should note that the impact of these technologies has often been overstated (often in hopes of selling the product). In his testimony, Jeff Silvester of AIQ warned the committee of “widely speculative comments” about his company.

The data-driven campaigning possibly enabled by AIQ is both a dream and a reality. From the George W. Bush campaign’s microtargeting of certain pockets of voters to the various innovations of the Obama campaigns, there is a belief that data helps win elections. Campaigners aspire to use data to make better decisions, but these plans often come up short because of a lack of expertise, too little money or too short a campaign.

The Ethics committee noted that the European Union’s General Data Protection Regulation (GDPR) could be a model for Canada’s data protection legislation. As described by privacy expert Colin Bennett, the GDPR requires limits on the collection and storage of data, new demands for fairness and transparency, as well as greater accountability. If the Canadian government moves forward on the Ethics committee’s recommendations, it will have to translate these principles for the Canadian political context.

These matters are part of a potential national data strategy and are beyond the scope of the Ethics committee. When the committee returns, however, it will have to answer questions such as these about political engagement platforms like AIQ and their role in globalized political campaigns:

How can regulators audit or monitor the political engagement platforms being used by parties? It has never been easy to audit these tools. If parties agree to abide by new privacy policies, will they also agree to be more transparent about their use of new political technologies?

How can we ensure that data analytics concerning voters do not have a bias? Data analytics often function as a proxy for voters themselves. Data about voters helps parties decide who to target, who to encourage (or discourage) to vote. Yet, biases about voters are often embedded in big data and computer models, as Trevor Deley and Julia Szwarc describe in a recent Policy Options article. The committee has to be able to identify possible biases and ways to mitigate their potential harms.

How can regulators be sure that models developed with data collected improperly by a user in one region do not travel unchecked through a political engagement platform to another user? Much of the concern to date has been the spread of data that was illegally collected on Facebook. We have yet to understand this shadow industry of political data and how it functions.

Even more important than how to stop the flow of bad data is the question of how global companies like AIQ can ensure that a computer model trained with bad data does not travel from one jurisdiction to another. From press coverage about AIQ, we know that it solicited data from Internet service providers in Trinidad and Tobago, buying records of everything its customers did online to start constructing psychological profiles of them. The worst-case scenario is that bad data that is used to train a compromised data analytics tool spreads globally. As I have noted previously, political engagement platforms help parties circulate campaign innovations. Analytics also spreads, but it keeps its flawed origins hidden.

Hopefully, when the House resumes in the fall and the Ethics committee begins work on its final report, it will answer these questions.

Some soft regulations might also help the committee in its task of ensuring better elections. As an example of such soft regulations, University of Ottawa professor Elizabeth Dubois and I have called for a digital code of conduct for political parties that would apply to their data analytics and the other ways parties rely on political engagement platforms. As well, the committee might outline some of the ways parties could limit their data collection and improve the transparency of their data analytics as a back-up, in case the data regulations are unsuccessful.

Dealing with data-driven campaigning

The former president of the Canadian Political Science Association, Edwin R. Black, speaking at a conference in 1983, said “computer simulations and model-testing could, theoretically, lead to innovative policies.” He had reason to be excited then, thirty-five years ago. Political data promised new ways to help parties engage apathetic voters and run more efficient campaigns. However, he worried that what now would be called political data would instead be used for political gain. “The winners,” he warned, “will be those prepared to learn what [electronic data processing] in government is all about and who then go on to bend its promise to achieve their own power goals.” This seems to be at the heart of the matter before the committee.

The current aversion to data-driven campaigning is an important moment in which to reflect on the under-appreciated role of technology in campaigning, as well as to advocate for sensible reforms in the way data is affecting the parties’ relationships with voters. What the Cambridge Analytica/Facebook scandal has exposed, more than anything, is the public’s unawareness, resignation or willed ignorance about data mining and analysis, of which its use in politics is just a small part. We must be mindful of data-driven campaigning and its limitations, while also being critical of the rhetoric justifying more data collection without a clear explanation of how it benefits voters. As it demonstrated by issuing its report, the Commons Ethics committee has started to pay attention to political technology. Developing data protection laws would be a good first step toward correcting these problems.

Photo: Jeff Silvester, back middle left, and Zackary Massingham, back middle right, of AggregateIQ, appear as witnesses at the Commons Access to Information, Privacy and Ethics Committee in Ottawa in April, 2018. The Canadian Press, by Sean Kilpatrick.


Fenwick McKelvey is an assistant professor in the Department of Communication Studies at Concordia University. He is director of the Algorithmic Media Observatory and co-director of the Media History Research Center.