Don’t fear Facebook’s emotion manipulation experiment

Rage has erupted over a Facebook experiment aimed at manipulating users' emotions. There are reasons not to get angry, says psychology researcher Tal Yarkoni

By Tal Yarkoni

Whatever device you use to look at Facebook, you see what the company wants you to see

(Image: Robert Galbraith/Reuters)

I don’t find myself defending Facebook very often. But there seems to be a lot of rage at the moment over a new study involving a large experimental manipulation of users in order to show that emotional contagion occurs on social networks.

Before getting into the source of that rage – and why I think it is misplaced – it’s worth describing the research. The paper, in the Proceedings of the National Academy of Sciences, involved Facebook manipulating news feeds of almost 700,000 of its users during one week in January 2012 to see if a reduction in negative or positive content changed the posting behaviour of those people.

Reducing the number of negative emotional posts people saw led those people to produce more positive and fewer negative words (relative to the unmodified control group) in their own status updates; conversely, reducing the number of positive posts led people to produce more negative and fewer positive words of their own.

Taken at face value, this is interesting and informative. But, first, these effects, while highly statistically significant, are tiny. The largest effect reported shifted a user’s own emotional word use by just two-hundredths of a standard deviation. In other words, the manipulation had a negligible real-world impact on any individual user.
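How can an effect this small still be "highly statistically significant"? Because significance depends on sample size as well as effect size. A minimal sketch, using an assumed even split of the roughly 700,000 users into two groups (the study's actual design and analysis differed), shows how a two-hundredths-of-a-standard-deviation shift sails past any conventional significance threshold at this scale:

```python
import math

# Hypothetical illustration, not the paper's actual analysis:
# two groups of ~350,000 users each, and a standardised mean
# difference of 0.02 (two-hundredths of a standard deviation).
n_per_group = 350_000
effect_size_d = 0.02

# z-statistic for a two-sample mean comparison with unit variance:
# z = d / sqrt(1/n1 + 1/n2)
z = effect_size_d / math.sqrt(1 / n_per_group + 1 / n_per_group)

print(round(z, 1))  # about 8.4 -- far beyond the usual cutoff of ~2
```

The point of the sketch is that with hundreds of thousands of participants, virtually any non-zero effect becomes detectable; statistical significance says nothing about whether the effect matters in practice.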

Second, the fact that users in the experimental conditions produced content with very slightly more positive or negative emotional content doesn’t mean that they actually felt any differently. It’s entirely possible – and I would argue, probable – that much of the effect was driven by changes in the expression of ideas or feelings that were already on people’s minds, and that such subtle behavioural changes shouldn’t really be considered genuine cases of emotional contagion.

Manipulating sentiments

On Twitter, the reaction has been similarly negative. A lot of people seem to be very upset that Facebook would manipulate news feeds in a way that could potentially influence emotions. The outrage implies that Facebook added something designed to induce a different emotional experience. In reality, Facebook simply removed a variable proportion of status messages that were automatically detected as containing positive or negative emotional words.

Crucially, we need to remember that the Facebook news feed is, and has always been, a completely contrived environment. I hope that people who are concerned about this research realise that Facebook is constantly manipulating the experience of its users.

In fact, by definition, every single change Facebook makes to the site alters the user experience, since there simply isn’t any experience to be had on Facebook that isn’t entirely constructed by Facebook. When you log on, you’re not seeing a comprehensive list of everything your friends are doing, nor are you seeing a completely random subset of events. In the former case, you would be overwhelmed with information, and in the latter case, you’d get bored very quickly. Instead, what you’re presented with is a carefully curated experience that is, from the outset, crafted in such a way as to be more engaging.

What you see is determined by a complex and ever-changing algorithm to which you contribute only in part. It has always been this way, and it’s not clear that it could be any other way.

Global betterment

If you were to construct a scale of possible motives for manipulating user behaviour – with the global betterment of society at one end, and something really bad at the other end – I submit that conducting basic scientific research would almost certainly be much closer to the former than other standard motives we find on the web – like trying to get people to click on more ads.

Data scientists and user experience researchers at companies like Facebook, Twitter and Google routinely run dozens, hundreds or thousands of experiments a day, all of which involve random assignment of users to different conditions. Typically, these manipulations aren’t conducted to test basic questions about emotional contagion; they are conducted with the explicit goal of helping to increase revenue. If the idea that Facebook would actively try to manipulate your behaviour bothers you, you should probably stop reading this right now and go and close your account.

Oh, and you should probably also stop using Google, YouTube, Yahoo, Twitter, Amazon and pretty much every other major website – because I can assure you that, in every case, there are people out there who get paid a good salary to… yes, manipulate your emotions and behaviour.

There’s nothing intrinsically evil about this. Everybody you interact with – including friends, family and colleagues – is trying to manipulate your behaviour. Your mother wants you to eat more broccoli; your friends want you to come get smashed with them at a bar; your boss wants you to stay at work longer and take fewer breaks.

None of this is meant to suggest that there aren’t legitimate concerns one could raise about Facebook’s more general behaviour – or about the immense and growing social and political influence that social media companies wield. You can certainly question whether it is fair to expect users signing up for a service like Facebook’s to read and understand user agreements containing dozens of pages of dense legalese, including a clause consenting to the kind of research under discussion.

I’m certainly not suggesting that we give Facebook, or any other large web company, a free pass to do as it pleases. What I am suggesting, however, is that even if your real concerns are about the broader social and political context in which Facebook operates, using this particular study as a lightning rod for criticism of Facebook is misplaced.