News & Views

Dates, data and manipulation

Of course companies want to play with our emotions. But context is key, and Facebook and OKCupid have breached their users’ trust.

When Facebook and OKCupid revealed they had been secretly conducting experiments on some of their users, the ensuing furore was warranted. Far from simply iterating to improve usability, they had set out to see if they could emotionally manipulate users without warning. Brands try to do this all the time, so it’s important to define when it is appropriate and when, as in these instances, it isn’t.

Facebook altered the News Feeds of 700,000 English-language users without their knowledge, so that they featured a higher-than-average proportion of either positive or negative updates from their networks. Facebook found that those exposed to more positive updates created more positive updates themselves. Likewise, more negative updates in their feeds drove users to use more negative language in their own posts. In light of the outcry, Facebook apologised, saying the research, which appeared in the Proceedings of the National Academy of Sciences, was ‘poorly communicated’.

Meanwhile, free dating site OKCupid confessed that it had told some members they were matched to people whom the site’s usual rating system wouldn’t have paired them with. It found that simply being labelled a ‘match’ increased the likelihood that the not-so-star-crossed lovers made contact and exchanged messages. But ultimately, those users weren’t being presented with truly ‘optimal’ matches.

Why were so many upset? In a controversial post on the experiment, the site’s co-founder Christian Rudder (who definitely doesn’t have a book out in September about Big Data or anything like that) claimed: ‘If you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.’

Rudder raises an important point. Behind the scenes, developers constantly tweak digital experiences, learn from users’ behaviour and iterate accordingly. But typically they want to improve usability or lift a metric such as ‘basket size’. Most users understand that, within the context of digital utilities or commerce platforms, this is to be expected, and they can adapt their own behaviour accordingly.
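For readers curious about the mechanics, the routine experimentation Rudder describes usually amounts to something as mundane as splitting users into buckets and comparing a metric between them. A minimal sketch of that bucketing, with purely illustrative names (neither site has published its actual code):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically assign a user to an experiment variant.

    Hashing the user id together with the experiment name gives a
    stable, roughly uniform split without storing any per-user state:
    the same user always lands in the same bucket for a given test.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user gets the same bucket every time for a given experiment,
# but may fall into a different bucket for a different experiment.
print(assign_variant("user-42", "checkout-button-colour"))
```

The point of the determinism is consistency of experience: a user shouldn’t see the site flicker between versions on each visit, which is also why such tests usually go unnoticed.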

Experiments like Facebook’s and OKCupid’s, however, are opaque. Were they undertaken to benefit the user, the business or the advertiser (read: customer)? In both cases it’s not clear. But the emotions and social interactions of some users were manipulated without due care or consideration towards the users themselves. Of course, companies try to move people and affect behaviour in complex ways all the time, but this largely happens in spaces where it’s expected. When they watch an ad on TV, for example, or see a funny post from a brand on a social network, audiences are waiting to have their eye caught and emotions stirred.

As we now know, though, companies are able not only to affect user choices, such as purchases, with design, data and algorithms; they can also alter emotions and deliberately shape life-paths in spaces where, I believe, context for users is lacking. On Facebook, users expect to be moved, for good or ill, by the posts of their friends, not the tinkering of News Feed designers.

When organisations behave like this, it demonstrates a yawning gulf between the corporate world and the everyday lives of people (as if it weren’t wide enough already). Rather than data and algorithms binding the two together to create shared value, they appear as a threatening rising tide to customers, who are expected to acquiesce and accept a raw deal amid ‘free’ services, social capital and cats. (As much as it’s useful for users to remember they’re the product when they’re not paying, in truth everyone should keep in mind that even free services and experiences are co-created. Users bring their time, content and data, while advertisers bring their cash.)

As organisations tool up with data scientists and neuroscientists in a bid to gain advantage, they’d do well to ensure they’re not punch-drunk on this stuff, and are capable of stepping back to see the picture through the eyes of their customers. Researching customer attitudes may be part of this. Web-sociologist Nathan Jurgenson suggests that an independent ethics board be created to ‘distinguish what requires opt-in, debriefing etc’. I’m inclined to agree, although how workable this might be is unclear. I’d like to see some companies take a stand, to lead and influence, much as Unilever has in sustainability, by being clear, transparent, committed and accountable. Ultimately, a huge grey area exists between the type of experiments outlined in this article and the common-or-garden A/B testing that drives typical website development. It’s only going to get more complicated, and it deserves debate.