Oh, look, that DFH conspiracy website Quartz says that “The US military is already using Facebook to track your mood“:

Critics have targeted a recent study on how emotions spread on the popular social network site Facebook, complaining that some 600,000 Facebook users did not know that they were taking part in an experiment. Somewhat more disturbing, the researchers deliberately manipulated users’ feelings to measure an effect called emotional contagion.

Cornell University, home to at least one of the researchers, said the study received no external funding, but it turns out that the university is currently receiving Defense Department money for some extremely similar-sounding research—the analysis of social network posts for “sentiment,” i.e. how people are feeling, in the hopes of identifying social “tipping points.”

The tipping points in question include “the 2011 Egyptian revolution, the 2011 Russian Duma elections, the 2012 Nigerian fuel subsidy crisis and the 2013 Gazi park protests in Turkey,” according to the website of the Minerva Initiative, a Defense Department social science project…

If the idea of the government monitoring and even manipulating you on Facebook gives you a cold, creeping feeling, the bad news is that you can expect the intelligence community to spend a great deal more time and money researching sentiment and relationships via social networks like Facebook. In fact, defense contractors and high-level US intelligence officials say that social network data has become one of the most important tools they use in collecting intelligence….

The growth of social media has not just changed day-to-day life at agencies like DIA, it’s also given rise to a mini gold rush in defense contracting. The military will be spending an increasing amount of the $50 billion intelligence budget on private contractors to perform open-source intelligence gathering and analysis, according to Flynn. That’s evidenced by the rise in companies eager to provide those services…

Lots of exciting anecdotes on the “One tweet and they can find you” theme at the link.

Meanwhile, Facebook hunts for a medium-strong frowny-face emoticon and says it is sorry, not sorry:

On Wednesday, Facebook’s second-in-command, Sheryl Sandberg, expressed regret over how the company communicated its 2012 mood manipulation study of 700,000 unwitting users, but she did not apologize for conducting the controversial experiment. It’s just what companies do, she said.

“This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated,” Sandberg, Facebook’s chief operating officer, told the Wall Street Journal while travelling in New Delhi. “And for that communication we apologize. We never meant to upset you.”…

(I’m sure FB has done the research establishing that a major corporation should always shove a chick out front under these circumstances, because women do the best sorry-not-sorrys.)

And the Wall Street Journal helpfully explains that, hey, FB’s been doing this for years, and there’s nothing you can do about it, citizen:

Thousands of Facebook Inc. users received an unsettling message two years ago: They were being locked out of the social network because Facebook believed they were robots or using fake names. To get back in, the users had to prove they were real.

In fact, Facebook knew most of the users were legitimate. The message was a test designed to help improve Facebook’s antifraud measures. In the end, no users lost access permanently.

The experiment was the work of Facebook’s Data Science team, a group of about three dozen researchers with unique access to one of the world’s richest data troves: the movements, musings and emotions of Facebook’s 1.3 billion users….

Until recently, the Data Science group operated with few boundaries, according to a former member of the team and outside researchers. At a university, researchers likely would have been required to obtain consent from participants in such a study. But Facebook relied on users’ agreement to its Terms of Service, which at the time said data could be used to improve Facebook’s products. Those terms now say that user data may be used for research.

“There’s no review process, per se,” said Andrew Ledvina, a Facebook data scientist from February 2012 to July 2013. “Anyone on that team could run a test,” Mr. Ledvina said. “They’re always trying to alter people’s behavior.”…

Since its creation in 2007, Facebook’s Data Science group has run hundreds of tests. One published study deconstructed how families communicate, another delved into the causes of loneliness. One test looked at how social behaviors spread through networks. In 2010, the group measured how “political mobilization messages” sent to 61 million people caused people in social networks to vote in the 2010 congressional elections…

“Facebook deserves a lot of credit for pushing as much research into the public domain as they do,” said Clifford Lampe, an associate professor at the University of Michigan’s School of Information who has worked on about 10 studies with Facebook researchers. If Facebook stopped publishing studies, he said, “It would be a real loss for science.”…

Anne Laurie | 2014-07-03 | In Case You Were Still Wondering, Facebook Is Not Your Friend

I want to know how the mood alteration study passed institutional review at Cornell. When we want to do a study, we have to get informed consent. A casual mention that data might be used for research, buried in the middle of a multi-page document that most people click through without reading, doesn’t come close to counting. And that’s for stuff where we’re collecting saliva or urine, or administering questionnaires; actually doing something to a person requires even more review. There’s no way a reasonably competent ethics group should have approved this.

They shouldn’t be allowed to publish their research in scientific journals if they don’t follow scientific rules and guidelines, to my mind. And Clifford Lampe is a collaborator, personally complicit, and thus in no way to be trusted as an unbiased authority on the subject, to state the bleeding obvious.

All the so-called studies, it should be noted, are of a self-selected group – that is, people who deliberately chose to sign up for Facebook (and may or may not have been less than fully truthful about personal data).

Whether that group is in any way a statistically valid representation of people in general is entirely unproven.

I really don’t like Sheryl Sandberg. Can’t figure out what it is but she rubs me the wrong way. To me she comes across as someone who grew up with every advantage but who thinks she grew up disadvantaged. I don’t know if that’s the case but that’s the vibe I get when I see her interviews.

My point is that while I don’t trust Facebook to tell me the color of the sky, I expect better from major research institutions like Cornell. They’re lending their name to the study, so they have an ethical obligation to vet the procedure before it’s ever carried out, or to prevent their researchers from using the data if it was already collected under conditions that wouldn’t pass institutional review. I want to know what in hell their IRB was thinking giving the OK to this study, or why the researchers are still employed there if they published research without IRB approval.

@Violet: One of those folks born on third base who thinks she hit a triple. That’s why she has the gall to write a book telling the rest of us dumber folks to do what she did so we can be fabulously wealthy CEOs also.

OK wait. The post says that this particular study received “no external funding.” Let’s leave the next sentence with “extremely similar sounding” alone for a moment. Your question about how it passed the IRB at Cornell is legitimate, but it did not have to if that is Cornell’s policy. There was no “external,” and clearly no government funding. Most large research institutions use the same IRB standards for all studies, but does Cornell?

The question to be raised is: did the “similar sounding research” that is funded by DOD pass the IRB? And who, if anyone except the author of this article, determined what “similar sounding” means? That’s something that needs to be peer reviewed.

For the Facebook “study” in the second portion of this post, Facebook is effectively saying that a user agreeing to the terms of service is “informed consent.” Maybe, maybe not. Someone needs to sue to find out.

Also: Screw Facebook. They suck. Just on general principle.

When the internet started many, many moons ago, I was at Columbia University (ironically – or not) running the administration for a research dept., and a buddy of mine there said “we all want to kill the lawyers first, but damn, with this new internet we’re gonna need all the lawyers we can get.” Quite prophetic was he.

Manipulate ’em back. Go to the page, see all your peeps, but only interact with the most leftwing stuff imaginable, and never get sucked into looking at ‘awful’ backward nonsense. Balloon Juice will bring you that in suitable snarky privacy! :D

Does the person writing this know anything about sociology studies? Are they going to be horrified to learn that League of Legends has been studying how the use of different colors in suggestions to be a nicer player affects complaint statistics?

@Roger Moore:
This, on the other hand, is an excellent point. It sounds weird to me, too.

@schrodinger’s cat: @Gretchen: Yep. That’s how she seems to me. She’s got this faux-earnest manner that seems exceptionally rehearsed–like some PR person had her do exercises on how to appear interested or something. Her body language doesn’t match her words and her facial expressions make her seem annoyed by the whole process. She’s a modern day version of rich white people telling poor brown people what to do and how to be.

Edit: She is a perfect fit for Facebook and Mark Zuckerberg who actually is annoyed by having to do interviews and so forth. She’s exactly the type of public face I’d expect him to put on Facebook and he probably thinks people believe her. Wrong.

@NotMax: More than that, I’d doubt most of the shit posted is realistic. Between the rolling perpetual Christmas letter writing poseurs marketing their personal lifestyle and personality brand, the drama addicts, the Trolls, the front ends and Nigerian Princes, that would be one odd dataset to pull a signal from.

“Facebook deserves a lot of credit for pushing as much research into the public domain as they do,” said Clifford Lampe, an associate professor at the University of Michigan’s School of Information who has worked on about 10 studies with Facebook researchers. If Facebook stopped publishing studies, he said, “It would be a real loss for science.”…

Really? “Science” is now conducting “research” on internet users without their knowledge. Okay, there’s “informed consent,” but it’s buried in the middle of some Terms of Service agreement that no one reads. That’s just depressing. Does this guy own a ton of stock in Facebook or something?

There’s a nice writeup regarding this, including informed consent, at Lawyers, Guns and Money:

They don’t get into the nitty gritty of institutional review, but there was at least a short section on that in the Atlantic article they linked to. It sounds as if they got around the IRB by letting Facebook do the dirty work. Facebook passed the study through their own internal review, which appears to be primarily focused on data privacy rather than conventional rules of human subjects protection. Once they had the data in hand, they sent it to their academic collaborators, who were able to bypass most of the institutional review because it was classified as existing data. So this is theoretically kosher from the academic institutions’ point of view because they didn’t directly violate anyone’s rights. It’s OK to use Dr. Mengele’s data as long as you weren’t personally involved in the experiments.

@scav: Facebook is very big in many countries outside the US, and people post actual, true information. The original target market in the US (I had a daughter at an Ivy when it started, and it stopped being cool or useful to her ages ago) has moved on, and it is mostly Mombook now, but elsewhere it’s still very big.

I’m with you. Perhaps I’m being unfair, as I haven’t read her book and when I see her name I actually have to take a scone to remind myself who she even is, but I find her ever so slightly creepy. And then I feel guilty about feeling that way, like a self-hating feminist or something. But yeah, she rubs me wrong.

EDIT: “Take a scone” = “take a second.” Stupid Autocorrect, what are you gonna do, right?

@Helen:
I’m not talking about what is legal, I’m talking about what is right. Medical researchers adopted their standards of informed consent because experimentation without consent led to horrors like the Tuskegee Syphilis Experiment. If there are institutions who think that ethical behavior is optional and they only need to get informed consent when they’re getting money from Uncle Sam, I want to know about it.

@Gin & Tonic: Oh, as a dataset it’s large and has its value, but it’s messy and I’m not sure how they’d clean it for all the probable biases. Probably more than adequate for their internal marketing etc. research (which works in that space anyway) and for other external purposes for lack of other opportunities. It’s just far from ideal, and they sure don’t seem to be playing by scientific rules, so I fail to see why we should award them scientific brownie points. Let them go satisfy themselves with their corporate cookies and gateau.

ETA: To be clear, if they adopt the same ethics and guidelines used by others in the field when publishing, why not? I just don’t care for corporate scientists opting out of norms they don’t care for while nevertheless insisting on getting all the benefits and acclaim of those abiding by the guidelines.

“This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated,” Sandberg, Facebook’s chief operating officer, told the Wall Street Journal while travelling in New Delhi. “And for that communication we apologize. We never meant to upset you.”…

Facebook and/or any of its representatives or executives shall be permitted to manipulate your emotions, expose you to epilepsy-causing content, radiation, chemical bombing, identity theft, and otherwise subject you to experimental measures, and shall not be liable for any adverse events or outcomes related to such experimentation. Your acceptance of the license to use Facebook constitutes an admission of informed consent. Do not accept this license if you are unable or unwilling to be subject to the condition of experimental subject at the whim of any of Facebook’s executives or representatives, including but not limited to: customer support representatives, marketing executives, salespeople, Facebook’s advertising clients, and other at-home workers we pay to provide experimental and/or advertorial content on users’ Facebook home pages.

@Roger Moore:
Did you see anything about the exact nature of the experiment? It’s unethical to manipulate subjects in even the most minor way without consent, but ‘unethical’ and ‘immoral’ are not always the same. The article leaps straight to scary implications that Big Brother Is Watching YOU, but the vague description given sounds along the lines of ‘Scan for people posting a status of happy, record number of Friends who post that they’re happy in the next twelve hours’ repeated ten thousand times and run through statistical analysis kind of stuff. It’s about as threatening as measuring sales of alcohol to figure out which states party the most.

The ‘manipulation’ is what I’d need to know more about. Of course, even if it’s not dangerous or scary, that doesn’t mean it’s something a reputable scientific body should be doing.

EDIT – @Roger Moore:
And that is why ‘ethical’ and ‘moral’ are different, and research institutions should damn well stick to ‘ethical’ so moral issues never come up.

@Roger Moore: I don’t know about the existing data stuff, though that sounds fishy, but I used to serve on my university’s IRB, and we had to approve all research on human subjects whether the study had federal funding or not. The federal regs say that if the institution gets federal funding, all studies there need to be approved.

My own studies needed IRB approval and believe me there was no federal funding.

The institution gets audited regularly and if there are too many violations, all federal funding can get yanked. The university would have been most displeased.

Sandberg reminds me of Tracy Flick from the movie Election. She’s slightly different from blimp-egoed asshole businesspeople like Jack Welch, but the smug lecturing tone of her public speaking rubs me the wrong way, like it’s all a very calculated exercise and she thinks you’re too stupid to notice it.

At the moment FB is sitting on ~$11 billion cash-on-hand, so they’ve recouped $8 billion (IIRC) from the buy in about 5 months. A 12% return on the investment would be ~$2.28 billion/year, or $570 million a quarter. Given their customer base, the numbers add up.

Apple is reportedly sitting on $147 billion.

Overall, Forbes estimates US corporations are sitting on $21 trillion in off-shore accounts.

@Frankensteinbeck: Yeah, manipulation would be a key factor in determining what level of approval this thing would need. You can usually observe people in public without an informed consent form and FB might count as public. I don’t know. The standards for online research were evolving when I quit the IRB because it was an enormous time sink. You’d have to think about degree of risk too, like if you were observing illegal behavior or something.

@Roger Moore: Let’s leave the “right” grounds. The government is not “right,” and neither is it “moral.” And neither is government funding. This particular experiment did not get government funding. Should the test be “if you get any external or governmental funding (and I mean any government funding for anything, including student loans), you need to pass the IRB”? Maybe so. But that is not the law now. Tuskegee was conducted with government funds. The study described above was not.

@Iowa Old Lady: Please see me at 24. Did I miss something? IRB approval for the study they are talking about – because it did not have external funding therefore it did not need approval – is irrelevant.

@Helen: It’s absolutely relevant. As I say, I served on my university’s IRB for four years and we approved ALL studies involving human participants whether they had external funding or not. If the institution gets federal funding, that’s required.

My own work involved observing engineers in industry settings. No manipulation, just shadowing. And no external funding. I still needed IRB approval.

@Iowa Old Lady: @TheMightyTrowel: But what does Cornell say? That was one of my original questions. I do not know the answer. But I know for sure that a university can exempt itself (meaning internal funding) from IRB review. So there can be many levels of review.

IRB approval for the study they are talking about – because it did not have external funding therefore it did not need approval – is irrelevant.

IRB review is still relevant because it’s a sign that somebody has vetted the planned research to make sure that it isn’t mistreating the subjects. Many institutions and journals require IRB approval of all research, regardless of funding source, as evidence that the work was carried out ethically. I see this research as ethically wrong, and I want to know how it got published despite obvious ethical problems. If it passed an IRB, what was the IRB thinking? If it didn’t pass an IRB, why wasn’t it subject to IRB approval, and why was it published despite obvious ethical questions and lack of an IRB OK?

It sounds as if it was approved by an IRB because the authors were able to wriggle through a loophole. Facebook, which generated the data, did not have a rigorous IRB process. They appear to care only about protecting subject privacy but not about getting informed consent. They then passed on the data to their collaborators, who were able to present it to their IRB as existing data and thus not subject to the same kind of rigorous review that would be required if the academic researchers had been involved in collecting it. IMO there are two things that should come out of this:

1) The existing data loophole should be closed. If the goal is ethical conduct rather than compliance with government mandates, it shouldn’t be possible to bypass those ethical requirements by getting unethically gathered data from a third party. Researchers who want to use existing data should either show that it was gathered ethically or explain why that isn’t relevant (e.g. it’s from before the modern IRB process).
2) There should be an investigation to see if this truly was existing data. If the researchers consulted with Facebook about the experiment before the data was collected, that exception shouldn’t apply.

When supermarkets test different genres of music on the muzak to affect buying patterns are they supposed to get signed consent forms from everyone that comes in? I don’t really see what FB did as much different from that.

Every company you have ever bought anything from in your entire life has tried one way or another to influence your mood.

I was wondering if Anne was going to post about this. It seemed right up her alley. The whole thing is sourced to SCGNews, which is actually just some guy’s blog; the link in her post goes to his editorial/analysis. If you read his contact or about page, you can’t even see who he is, because he intentionally hides his identity. If you look through his site, it’s all pro-Putin, pro-Assad, conspiratorial propaganda.

Their evidence that the funding came from the DoD is almost non-existent and if anything suggests the opposite conclusion. If you actually read the project synopsis on the MINERVA site, there’s nothing about US social media / Facebook.

The reason people are more worked up about this than some random store counting smiles at the customer service desk is thus:

When you have a ginormous, unaccountable corporation with twenty billion dollars doing things like this on purpose, the scale on which they can do them makes it significant.

Jaron Lanier would tell you it’s the size of Facebook itself that makes it stifle other things in its class (never mind simply buying them up!) and I think he’s not wrong there. So you end up with a situation where you’re hoping to regulate the possibly creepy or downright alarming behavior of a company that is the only place you can GET a certain online thing, and unlikely to be supplanted because of its economies of scale.

Ask yourself what would happen if they resumed that experimentation, but making all white people report good news and show happy feelings, while silently discarding the same from black users and tending to show bad news and unhappy situations from black users.

To ‘see what would happen’.

Because hey, it’s not like giant corporations are ever steered by the agendas of petty, over-wealthy humans who might seek to ‘color’ general opinion of a class of people.

How many of you would cheerfully do the same for the inverse, say teabaggers, in well-meaning efforts to defuse the damage those folks are doing rather than to try and engage with the people? Knowing that trying to rationally engage with teabaggers is challenging because they’re the product of EXACTLY THIS sort of conscious meddling, but from a news agency rather than ‘their peers’?

Embrace the creepiness of all this because it’s going to have to be part of our conversation as citizens. The entities with billions of dollars WANT things, now. They happen to have tools we just don’t have, to get those things, and it is a big deal.
