Simplicity 2.0

Wrestling With the Ethics of the Facebook Experiments

Article by Sharon Fisher, August 21, 2014

In the era of big data and analytics, the conventional wisdom seems to be that businesses should try lots of different things, test the results and continuously optimize to develop new innovations and better customer experiences.

In some cases, the experimentation is going even further. In Amsterdam, two designers set up a workspace for freelancers that started out with lots of perks and benefits, à la Google. It even included a pet rabbit. Over the course of a month, the perks (including the rabbit) gradually went away. The designers claimed the diminishing benefits were in response to user requests.

By the end of the month, the designers had converted the once-lush workspace into rows of numbered gray cubicles under the constant surveillance of a virtual boss who, appearing on an overhead screen, led the freelancers in a series of exercises every few hours, Fast Company reports. The designers also installed movement sensors, and when a person got up for, say, a cup of coffee, the virtual boss barked that it wasn’t break time yet.

Ironically, the designers reported that the workers were more productive without all the perks and benefits. But that’s not exactly the point. The real question is whether this sort of thing is okay: when does A/B testing cross the line into inappropriate social experimentation?

Social scientists have a long history of performing experiments on human subjects. Some of these experiments have led to valuable insights, such as the Stanford Prison Experiment (the one where a group of students was divided into “prisoners” and “guards” to see how they’d act) and the Milgram Experiment (the one where many people were willing to administer what they believed were painful electric shocks to strangers when told to).

Now, with easy access to enormous populations, such as the more than 1 billion Facebook users, social scientists see many opportunities. At the same time, social scientists today operate under ethical guidelines for experimentation on human subjects; in fact, experiments like Milgram and Stanford Prison might not have been allowed under modern guidelines. Essentially, researchers say, if an experiment has the potential to change someone’s emotional state, the subjects are supposed to give informed consent: consent that goes beyond a mention in Facebook’s terms of use that such experimentation might happen.

Thanks in part to the backlash over the Facebook experiments—which even drew a Congressional letter to the Federal Trade Commission—work is now underway to develop ethical guidelines for online experimentation. The scientist who developed the Facebook experiment is involved in the work, as are researchers from Microsoft Research, MIT, and Stanford, according to the New York Times.

“Such testing raises fundamental questions,” writes the Times. “What types of experiments are so intrusive that they need prior consent or prompt disclosure after the fact? How do companies make sure that customers have a clear understanding of how their personal information might be used? Who even decides what the rules should be? Existing federal rules governing research on human subjects, intended for medical research, generally require consent from those studied unless the potential for harm is minimal. But many social science scholars say the federal rules never contemplated large-scale research on Internet users and provide inadequate guidance for it.”

In addition, social network companies are starting to clean their own houses in terms of experimental ethics. Facebook is reportedly tightening up its experimental guidelines. And DERP specifically notes that it will only support research that “respects user privacy, responsibly uses data, and meets IRB [Institutional Review Board] approval.”

Facebook has been heavily criticized for its experiments, with 84 percent of surveyed users saying they had lost trust in Facebook, and 66 percent claiming they had considered deleting their Facebook accounts because of it. Though it’s too soon to say what effect this all could have on people’s willingness to continue using Facebook, it suggests that a company’s reputation is something you might not want to experiment with.

Simplicity 2.0 is where we examine the intricate and transitory world of technology—through a Laserfiche lens. By keeping an eye on larger trends, we aim to make software that’s relevant to modern day workers, rather than build technology for technology’s sake.

Subscribe to Simplicity 2.0 and follow us on Twitter. If what we’re saying piques your interest, head over to Laserfiche.com where you’ll see how we apply the lessons learned on Simplicity 2.0 to our own processes, products and industry.