The power and limits of data-driven campaigning

Two new books reveal the empirical side of politics

For decades, the structure of American electoral campaigns has looked a lot like the winning formula Richard Nixon used in 1968: TV ads plus consultants plus money.

After losing to the more telegenic John F. Kennedy in 1960, Nixon recognized the power of TV broadcasting and called on skilled advertisers from Madison Avenue to guide his campaign. He ultimately “depended on a television studio the way a polio victim relied on an iron lung” (McGinniss, 1970).

This TV-first, consultant-driven campaign model has dominated up through the latest presidential election. And because TV is a medium that generates little performance data, nobody can say for certain how well it’s been working.

“The election industry has shown little interest in testing its methods,” observed Andrew Cockburn, writing in Harper’s during the 2016 primaries. “Techniques tend to be a matter of lore or seat-of-the-pants instinct, the only constant being that they require lots of money. … However, in 1998, two Yale political scientists, Donald Green and Alan Gerber, set out to change all that.”

Green and Gerber’s experimental research on voter turnout challenged the conventional wisdom, finding that personal, face-to-face mobilization was actually the best way to get people to the polls. Campaigns took note, and began to shape their efforts accordingly in the early 2000s.

Campaigners’ growing interest in proving what works coincided with the rapid expansion of digital media platforms, which not only offered alternative communication channels to TV but also dramatically increased the amount of available voter data, permitting large-scale experimental testing using the methods of digital marketing.

Two recently published books highlight the shift to data-driven campaigning beyond the confines of well-funded presidential campaigns. Both focus on advocacy organizing, drawing lessons and case studies from electoral campaigns, which serve as the large-scale testing grounds for new digital strategies.

Analytic Activism by David Karpf, a professor at GWU’s School of Media and Public Affairs (and a former professor of mine), examines how advocacy organizations use digital platforms to grow their supporter bases and rely on testing and analytics to guide decision-making and increase their impact.

Engagement Organizing by Matt Price, a veteran campaigner, offers a practical guide to combining traditional organizing with digital tools and data to engage supporters at scale.

Both authors highlight the power of data-driven campaigning, but also caution that over-reliance on it can sap political power.

Infrastructure and Growth

The foundations of data-driven campaigning are so unglamorous that they are easily overlooked.

The first imperative is keeping track of supporters, which requires a database (also known as CRM software). Price explained, “Today a database is a campaign’s most important asset after its people. It is the campaign’s brain.” The database serves as a central platform for digital communication, fundraising, petitioning, canvassing and any other element of campaigning with a digital component.

As little as ten years ago, according to Price, establishing an integrated database required significant in-house expertise and resources, but now off-the-shelf packages such as NationBuilder have removed the barriers to centralized data management for even the smallest organizations.
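As a loose illustration (not drawn from either book, and much simpler than a real CRM such as NationBuilder), the core idea of a centralized database is that every interaction, whatever the channel, gets attached to a single supporter record. The names and fields below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Supporter:
    """One record in a campaign's central database (CRM)."""
    email: str
    tags: set = field(default_factory=set)      # e.g. {"donor", "volunteer"}
    history: list = field(default_factory=list) # petitions, donations, shifts

db = {}  # keyed by email, the common identifier across channels

def record_action(email, action, tag=None):
    """Log any campaign touchpoint against a single supporter record."""
    supporter = db.setdefault(email, Supporter(email=email))
    supporter.history.append(action)
    if tag:
        supporter.tags.add(tag)

# The same person shows up via a petition and later via fundraising;
# both actions land on one record instead of in separate silos.
record_action("amy@example.org", "signed climate petition", tag="petition-signer")
record_action("amy@example.org", "donated $25", tag="donor")
```

The payoff is that fundraising, petitioning and canvassing all read from and write to the same “brain,” so the campaign can see each supporter’s full relationship in one place.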

A second key area for most data-oriented organizations is list growth — finding and connecting with new supporters. Karpf reports that online petitions are “the most flexible and essential tool of analytic activism” because they can rapidly attract large numbers of like-minded individuals. And, unlike social media engagements, signing a petition leaves an email address behind, which means signers can be contacted repeatedly and converted into supporters.

In fact, Karpf points out, “A viral petition can give birth to a political organization.”

A Culture of Testing

Reaching large scale is one of the defining features of analytic activism because it’s a mathematical prerequisite for powerful data analysis. Another key characteristic is a culture of testing, which is defined by the manner in which an organization uses analytics. Karpf identifies three possible applications for analytics: tactical optimization, computational management and passive democratic feedback.

Tactical optimization is pretty much what it sounds like: using testing to refine individual campaign elements and improve action rates (e.g., testing email subject lines or fundraising appeals).
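To make the idea concrete (this sketch and its numbers are hypothetical, not from either book), a subject-line A/B test is typically judged with something like a two-proportion z-test: did variant B’s open rate beat variant A’s by more than chance would explain?

```python
import math

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """z-score for the difference between two open rates."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled rate under the null hypothesis that the variants perform equally.
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_b - p_a) / se

# Hypothetical test: subject line A opened by 300 of 2,000 recipients,
# subject line B by 340 of 2,000.
z = two_proportion_z(300, 2000, 340, 2000)
print(round(z, 2))  # |z| > 1.96 would indicate significance at the 5% level
```

In this made-up example the z-score falls short of 1.96, so a careful campaign would keep testing rather than declare B the winner on a 2-point lift alone.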

Computational management means relying on testing as a core component of organizational governance by using analytics to “evaluate competing tactics and strategies.” More broadly, it’s part of a philosophy of empirical organizing in which challenging assumptions is encouraged.

Collecting passive democratic feedback refers to using analytics to help identify members’ priorities. Karpf explained why this approach is so important to organizers, saying, “We can easily evaluate whether an electoral campaign has won or lost. But the near-term measures of activist success are indeterminate. And that creates space for activist organizations to employ digital listening for broader agenda-setting purposes.”
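In its simplest form (a hypothetical illustration, not Karpf’s implementation), this kind of digital listening is just aggregation: tally engagement by issue and let the totals surface what members care about.

```python
from collections import Counter

# Hypothetical petition log: (issue, signatures gathered)
petitions = [
    ("climate", 12400), ("housing", 8900), ("climate", 5100),
    ("transit", 2300), ("housing", 4700),
]

# Aggregate engagement per issue to surface members' priorities.
priorities = Counter()
for issue, signatures in petitions:
    priorities[issue] += signatures

for issue, total in priorities.most_common():
    print(issue, total)
```

Ranked this way, the organization’s own supporters have effectively voted with their signatures on where the agenda should go next.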

While pretty much any modern campaign relies on tactical optimization, using computational management and passive democratic feedback is part of the culture of testing that, in Karpf’s model, defines analytic activism.

Participatory Communication

Passive information collection can help set priorities, but it’s long been understood that personal interaction is a key part of organizing. Saul Alinsky observed, “Communication is a two-way process. If you try to get your ideas across to others without paying attention to what they have to say to you, you can forget about the whole thing.”

Price highlights Green and Gerber’s findings that more personal forms of communication are more persuasive, and points out that, unlike broadcast media, digital platforms offer the opportunity to engage in personalized communication on a large scale.

He explains, “When people can talk back, a campaign must not only have the capacity to receive and acknowledge them respectfully but also be open to truly hearing what they are saying and to engage in meaningful dialogue with them.”

Engaging with supporters does require organizational resources, but there’s a significant payoff as well. The same platforms that facilitate communication between supporters and staff also allow each individual to take the organization’s messages, shape them and share them with their own personal networks. As a result, a message can spread widely without any direct input from staff.

What Data Can’t Do

Both authors caution that, despite their significant value, analytics and digital tools alone are not sufficient to build political power.

Karpf points out that over-reliance on analytics can skew priorities toward whatever efforts are most easily measured. Also, supporters’ overall preferences can get distorted in an environment focused solely on narrow testing regimes. He concludes, “analytics and the broader culture of testing … are a valuable additional input into strategic thinking. They are not a replacement for strategic thinking.”

Similarly, Price emphasized that digital tools can facilitate the basic functions of advocacy organizing, but can’t take their place. He observed, “Digital tools and practices and good data management do not ‘change everything’ and by themselves do not build real power.” Success still relies on leadership, personal relationships, training and a powerful, organic message.

Both of these books are highly recommended for campaign practitioners and political nerds seeking to understand the ongoing shift to data-driven campaigning. The most important lesson they offer is that winning campaigns should be guided by data, not blinded by it.