intellectual self-deference

When a researcher tells me their research topic is fascinating, I get a sinking feeling, like when someone announces they are going to tell me about the dream they had last night. Here’s something I’ve learnt in my time as a scholar: finding something fascinating isn’t a good guide to what actually is fascinating.

There are plenty of questions which I used to find fascinating. Topics which, for years, I would have said fascinated me, but which now I think are of little interest. That feeling that something was a deep question, that it promised – somehow – to help reveal the secrets of the universe, wasn’t to be trusted. With a bit more thought, or experience, my fascination with a topic turned out to be a dead end. Now I think those topics are not productive to research; they don’t promise to reveal anything. What appeared to be a mysterious contradiction was just a blunt fact; a universal symbol turned out to be a boring particular.

I’m not going to give you an example, because I don’t want to focus on a specific case, but on that feeling of fascination which drives our curiosity, which must in some form be the foundation of a research programme.

Personal fascination is a poor guide to a good research topic, but it has also been the guide for the research I’ve done of which I’m most proud, and which I think makes the most important contributions.

The trick is not to blindly trust your fascination, but to draw it out. Can you explain why something is so fascinating? Can you show the connections to wider topics? Can you show that suggested explanations are inadequate?

Without action all you have is a feeling, which has as much currency with other people as an account of one of your dreams. You may feel deeply involved, but there’s no compelling reason for other people to be.

Facebook is a specific, known threat to democracy, not a general, unknown threat to our capacity for rationality

Zeynep Tufekci has a TED talk, ‘We’re building a dystopia just to make people click on ads’. In it she talks about the power of Facebook as a ‘persuasion architecture’ and makes several true, useful points about why we should be worried about the influence of social media platforms, platforms which have as their raison d’être the segmentation of audiences so they can be sold ads.

But there’s one thing I want to push back on. Tufekci’s argument draws some of its rhetorical power from a false model of how persuasion works. This is a model in which persuasion by technology or advertising somehow subverts normal rational processes, intervening in our free choices in some sinister way ‘without our permission’. I’m not saying she would explicitly endorse this model, but it seems latent in the way she describes Facebook, so I thought it worth bringing into the light, pausing just for a moment to look at what we really mean when we warn about persuasion by advertising.

In her talk Tufekci cites an experiment in which a single Facebook get-out-the-vote message is estimated to have generated an extra 340,000 votes. Now 340,000 votes is a lot, enough to swing an election, but it would be a mistake to think that these people were coerced or tricked into acting out of character by the advert. These were people who might have voted anyway, and the advert was a nudge.

Think of it like this. Imagine you offer someone an apple and they say yes. Did you trick them into desiring fruit? In what sense did you make them want an apple? If you offer apples to millions of people you may convert hundreds of thousands into apple-eaters, but you haven’t woven any special magic. At one end, the people who really like apples will have one already. At the other, people who hate apples won’t ever say yes. For people who are in between, something about your offer may speak to them and they’ll accept. A choice doesn’t have to originate entirely from within a person, completely without reference to the options presented to them, to be a reasonable, free choice.

No model of human rationality is harmed by the offer of these apples.

Our choices are always codetermined by ourselves and our environment. Advertising is part of the environment, but it isn’t a privileged part — it doesn’t override our beliefs, habits or values. It affects them, but no more so, and in no different way, than everything else that affects us. This is easy to see when the offer is apples, but something about advertising obscures the issue.

Take the limit case — some political candidate figures out the perfect target audience for their message and converts 100% of that audience from non-voters into voters with a Facebook advert. Would we care? What would that advert — and those voters — look like? They would be people who might vote for the candidate anyway, and who could be persuaded to vote for someone else by all the normal methods of persuasion that we already admit into the marketplace of ideas / clubhouse of democracy. They wouldn’t vote for a candidate they didn’t sincerely believe in, and the advert wouldn’t mean that their vote couldn’t be changed at some later point, whether by another advert, by new information, by an argument with a friend, or whatever.

There are still plenty of reasons to worry about Facebook:

Misinformation — how it can embed and lend velocity to lies.

Lack of transparency — in who is targeting, who is targeted, and why.

Lack of common knowledge — consensus politics is hard if we don’t all live in the same informational worlds.

Tufekci covers these factors. My position is that it hasn’t been shown that there is anything special about Facebook as a ‘persuasion architecture’ beyond them. Yes, we should worry about something with the size and influence of Facebook, but we already have frameworks for thinking about ‘persuasional harm’ — falsehoods are not a legitimate basis for persuasion, for example, so we are particularly concerned to hunt down fake news; and it is worrying when one interest group controls a particular media form, such as newspapers. Yes, Facebook persuades, but it doesn’t do so in a way that is itself pernicious. Condemning it in general terms would be misplaced, a harm to any coherent model of citizens as reasonable agents, and a distraction from the specific and novel threats that Facebook and related technologies constitute to democracy.