Month / June 2011

Ezra Klein says so. The comparison isn’t quite 100%, but it’s a lot closer than most people think. I agree with pretty much everything below:

At the New York Times, academics and activists and authors lend their time, name and authority to the publication. The payoff? A quote in the paper, some influence over the story, a bit of publicity for their work and a role in the broader debate. But no money. Never any money. The New York Times would fire a reporter for offering sources money.

At the Huffington Post, you’re seeing the same transaction, but run more efficiently: Academics, activists and authors lend their time, name and authority to work they’ve written themselves, that gets published at its full length, where their names always appear up at the top. The tradeoff is that, in most cases, fairly few people see their work. But that’s better than no one seeing their work, which is often the realistic alternative.

Are these unpaid writers helping to make Arianna Huffington rich? They are. But the insight, expertise and inside information of unpaid sources has made many newspapers rich, too. And the fact that the work those sources put into those subjects appeared under someone else’s byline made it worse, not better.

At its best, journalism brings a lot of different perspectives into the conversation. But it’s always been the people aggregating these perspectives who got paid. That remains true at the Huffington Post, and perhaps it’s something that the Huffington Post’s unpaid contributors should be angry about. But it’s not something that the journalists and news outlets have much standing to condemn. We’ve long been asking people to contribute pro bono labor to the products sold by our for-profit companies.

Also, you don’t have to buy this comparison to realize this suit is ridiculous. And can we please not turn this into a lefty protest? More good thoughts on that part from Matt Yglesias.

I have a piece up at The Atlantic (went up Friday) titled “The Future of Media Bias” that I hope you’ll read. I suppose the title is deliberately misleading, since the topic isn’t media bias in the typical sense. Here’s the premise:

Context can affect bias, and on the Web — if I can riff on Lessig — code is context. So why not design media that accounts for the user’s biases and helps him overcome them?

Head over to The Atlantic to read it. In the meantime, I want to expand a bit on some of my ideas.

1) This is not just about pop-up ads. The conceit of the post is visiting a conservative friend’s site and being hit with red-meat pop-ups that act as priming mechanisms. But that was just a way of introducing my point. (Evidence from comments and Twitter suggests this may have distracted some.) So while pop-ups can illustrate the premise, the premise is in no way restricted to pop-ups, either as they exist in practice to sell ads or as they might be used in theory.

2) More on self-affirmation. I might have been clearer on how self-affirmation exercises work, though they were not described in detail in either paper I referenced. Here’s how I understand them: you’re asked to select a value that is important to your self-worth – maybe something like honesty – and then you write a few sentences about how you live by that value. Writing out a few sentences about what an honest and therefore valuable person you are makes you less worried about information that threatens your self-worth.

I want to address a few potential objections to embedding an exercise like this in media. One might argue that no one would complete the exercise (I’m imagining it as a pop-up right now). Perhaps. But you could incentivize it: anyone who completes it gets their comments displayed higher, or something like that. Build incentives into a community reputation system. A second objection is that maybe you could get people to complete it once, but it’s impractical to think anyone would do it before every article. Fair point. But perhaps you just need people to do it once, and then it’s displayed alongside or above the content, for the reader to view, to prime them. Finally, I want to note that this is just one example, and I don’t think my argument rides too much on it. I used it because a) there was lots of good research behind it and b) it fit nicely with the pop-up conceit of the post.
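To make the incentive idea a bit more concrete, here’s a toy sketch of the reputation mechanism. Everything here – the field names, the boost value – is my own invention for illustration, not anything from the research:

```python
# Toy sketch: comments from readers who completed the (hypothetical)
# self-affirmation exercise get a small ranking boost. The field names
# and the boost value are invented for illustration.

def rank_comments(comments, boost=0.25):
    """Sort comments by score, adding a bonus for affirmed users."""
    def effective_score(comment):
        bonus = boost if comment.get("affirmed") else 0.0
        return comment["score"] + bonus
    return sorted(comments, key=effective_score, reverse=True)

comments = [
    {"user": "alice", "score": 1.0, "affirmed": False},
    {"user": "bob", "score": 0.9, "affirmed": True},
]
# bob's affirmed bonus (0.9 + 0.25) lifts him above alice (1.0)
print([c["user"] for c in rank_comments(comments)])
```

The point is just that the incentive could live entirely in the ranking layer, so no other part of the commenting system would need to change.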

3) More examples. One paper I referenced re: global warming suggests that the headline can affect susceptibility to confirmation/disconfirmation biases. So what if the headline changed depending on the user’s biases? This would be tricky in various ways, but it’s hardly inconceivable. In fact I wish I’d mentioned it, since in some ways it seems more practical than the self-affirmation exercises. It would, however, introduce a lot of new difficulties into the headline-writing process.
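For what it’s worth, here’s a minimal sketch of how that selection might work, assuming a site had some signal about the reader’s stance. The profile field and both headline variants are hypothetical:

```python
# Minimal sketch of bias-aware headline selection. The stance signal and
# the headline variants are hypothetical; a real system would need actual
# user data and editorially written alternatives.

HEADLINES = {
    "direct": "Temperature Data Confirm Warming Trend",
    "hedged": "What Two Decades of Temperature Data May Tell Us",
}

def pick_headline(user_profile):
    # Readers likely to resist the claim get the hedged framing,
    # on the theory that it lowers defensive processing.
    if user_profile.get("skeptical_of_topic"):
        return HEADLINES["hedged"]
    return HEADLINES["direct"]

print(pick_headline({"skeptical_of_topic": True}))
```

Of course, writing two defensible headlines per story is exactly the editorial difficulty I mentioned.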

Another thing I might have mentioned is the ordering of content. Imagine you’re looking at a Room for Debate at NYT. Which post should you see first? In the course of researching the Atlantic piece, I came across some evidence that the order in which you receive information matters (with the first piece being privileged), though I’m having trouble finding where I saw that now. And it’s not obvious that that kind of effect would persist in cases of political information. But, still, there may well be room to explore ordering as a mechanism for dealing with bias.
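If ordering does matter, a sketch might look like this – assuming (and it’s only an assumption) that putting the post furthest from the reader’s stance first is the right intervention, and that stances can be scored at all:

```python
# Toy sketch of ordering as a bias intervention: show the reader the post
# furthest from their own stance first, on the (assumed) theory that the
# first item seen is privileged. The stance scores are invented.

def order_debate_posts(posts, user_stance):
    """Sort posts so the one furthest from the reader's stance comes first."""
    return sorted(posts, key=lambda p: abs(p["stance"] - user_stance),
                  reverse=True)

posts = [
    {"title": "The Case For", "stance": 1.0},
    {"title": "The Case Against", "stance": -1.0},
]
# A reader leaning "for" (0.8) sees the opposing post first
print([p["title"] for p in order_debate_posts(posts, user_stance=0.8)])
```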

Finally – and at this point I’m working off of no research and just thinking out loud – what if you established the author’s credibility by showing the work the user was most likely to agree with, in cases of disconfirmation bias (and the reverse in confirmation bias cases)? So, say I’m reading about climate change and you knew I’d be biased against evidence for it. But the author making the case for that evidence wrote something last week that I do agree with, that does fit my worldview. What if a teaser for that piece was displayed alongside the global warming content? Would that help?
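As a rough illustration of what I mean – with every field here invented, since again I’m working off of no research – the teaser could just be the author’s past article the reader is most likely to agree with:

```python
# Rough sketch of the credibility-teaser idea: alongside a piece the reader
# is likely to resist, surface an earlier article by the same author that
# fits the reader's worldview. Tags and weights are hypothetical.

def pick_teaser(author_articles, user_views):
    """Return the author's past article most agreeable to this reader."""
    def agreement(article):
        return sum(user_views.get(tag, 0) for tag in article["tags"])
    return max(author_articles, key=agreement)

articles = [
    {"title": "Why the Climate Models Hold Up", "tags": ["climate"]},
    {"title": "In Praise of Free Markets", "tags": ["markets"]},
]
# A market-friendly reader gets the markets piece as the teaser
print(pick_teaser(articles, {"markets": 1})["title"])
```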

I’ve also wondered if asking users to solve fairly simple math problems would prime them to think more rationally, but again, that’s not anything based on research; just a thought.

So that’s it. A few clarifications and some extra thoughts. My hope for this piece is to inject the basic idea into the dialogue, so that researchers start to think of media as an avenue for testing their theories, and so that designers, coders, journalists, etc. start thinking of this research as input for innovation in how they create new media.

UPDATE: One more cool one I forgot to mention… there’s some evidence that presenting information graphically makes a point so forcefully that it takes too much mental energy to rationalize around it. You can read more about that here. This column in the Boston Globe refers to a study concluding – in the columnist’s words – that “people will actually update their beliefs if you hit them ‘between the eyes’ with bluntly presented, objective facts that contradict their preconceived ideas.” This strikes me as along the same lines as the graph experiment. These are things to keep in mind as well.

The psychology on self-control is fascinating to me, and a recent article at The New Republic relayed how that research is intersecting with research into poverty. Here’s the gist:

…psychologists and economists have been exploring one particular source of stress on the mind: finances. The level at which the poor have to exert financial self-control, they have suggested, is far lower than the level at which the well-off have to do so. Purchasing decisions that the wealthy can base entirely on preference, like buying dinner, require rigorous tradeoff calculations for the poor. As Princeton psychologist Eldar Shafir formulated the point in a recent talk, for the poor, “almost everything they do requires tradeoff thinking. It’s distracting, it’s depleting … and it leads to error.” The poor have to make financial tradeoff decisions, as Shafir put it, “on anything above a muffin.”

The article also contains a very good summary (at least to my non-expert eye) of the literature here. Basically:

In the 1990s, social psychologists developed a theory of “depletable” self-control. The idea was that an individual’s capacity for exerting willpower was finite—that exerting willpower in one area makes us less able to exert it in other areas…

…This theory of depletable willpower has its detractors, and, as in most academic topics studied across disciplinary fields, one finds plenty of disputes over the details. But this model of self-control is now one of the most prominent theories of willpower in social psychology, at the core of what E. Tory Higgins of Columbia University described in 2009 as “an explosion of scientific interest” in the topic over the last decade. Some skeptics correctly emphasize the vital role of motivation, and some emphasize instead that “attention” is limited. But the core of the breakthrough is that resolving conflicts among choices is expensive at a cognitive level and can be unpleasant. It causes mental fatigue.

And:

what about the possibility of strengthening the willpower “muscle”? Here, the research is complicated. While one line of research has found reason to think that drained willpower can be restored in the short term—by taking a walk in nature or watching a humorous video, for instance—studies on how to strengthen the willpower muscle in the long term are far less conclusive. This second line of research seems to be more promising in children than in adults. As Kathleen Vohs of the University of Minnesota, who has done extensive research on willpower, put it, “There might be something of a developmental sweet spot.” In twelve U.S. states, a program called Tools of the Mind is explicitly aimed at improving willpower functions in prekindergarten and kindergarten children. While some of the strategies would be quite difficult in much of the developing world, many are not, or could be adapted.

I’ve been reading a lot about cognitive biases lately for a post I recently finished (which hopefully will be published soon), and I wanted to share something only slightly related that didn’t make it in. Jonah Lehrer has a characteristically fascinating post at Wired on how ads implant false memories. You really should read it all. But here’s a bit that struck me, from the perspective of someone interested in media:

A new study, published in The Journal of Consumer Research, helps explain both the success of this marketing strategy and my flawed nostalgia for Coke. It turns out that vivid commercials are incredibly good at tricking the hippocampus (a center of long-term memory in the brain) into believing that the scene we just watched on television actually happened. And it happened to us.

The experiment went like this: 100 undergraduates were introduced to a new popcorn product called “Orville Redenbacher’s Gourmet Fresh Microwave Popcorn.” (No such product exists, but that’s the point.) Then, the students were randomly assigned to various advertisement conditions. Some subjects viewed low-imagery text ads, which described the delicious taste of this new snack food. Others watched a high-imagery commercial, in which they watched all sorts of happy people enjoying this popcorn in their living room. After viewing the ads, the students were then assigned to one of two rooms. In one room, they were given an unrelated survey. In the other room, however, they were given a sample of this fictional new popcorn to taste. (A different Orville Redenbacher popcorn was actually used.)
One week later, all the subjects were quizzed about their memory of the product. Here’s where things get disturbing: While students who saw the low-imagery ad were extremely unlikely to report having tried the popcorn, those who watched the slick commercial were just as likely to have said they tried the popcorn as those who actually did. Furthermore, their ratings of the product were as favorable as those who sampled the salty, buttery treat. Most troubling, perhaps, is that these subjects were extremely confident in these made-up memories. The delusion felt true. They didn’t like the popcorn because they’d seen a good ad. They liked the popcorn because it was delicious.

Read the whole post to learn more about the science here. But isn’t that variable of text vs. video fascinating?

I don’t care (like really don’t care) about sports. I don’t read Bill Simmons. I find Klosterman amusing but tiresome. Eggers is on the to-read list. Malcolm Gladwell is wrong about Twitter and other than that I don’t pay much attention. And yet I’m excited about Simmons’ new site Grantland.

In practice I’m excited about it because Katie Baker’s “The Garden of Good and Evil”, about being a one-time and born-again Knicks fan, was awesome. So good it made me like reading about sports.

We had four goals for this site. The first was to find writers we liked and let them do their thing. The second was to find sponsors we liked and integrate them within the site — so readers didn’t have to pay for content, and also, so we didn’t have to gravitate toward quantity over quality just to chase page views. The third was to take advantage of a little extra creative leeway for the right reasons and not the wrong ones. And the fourth was to hire the right blend of people — mostly young, mostly up-and-comers, all good people with good ideas who aren’t afraid to share them.

Simmons is one of those few lucky writers who doesn’t have to care about pageviews. He’s got ’em. If Grantland is a way for him to offer cover to young writers to also not have to care about pageviews – if he can increase the number of good writers able to prove themselves outside of the content sweatshops – that strikes me as a very good thing.

On the other hand, Nick Jackson has a pretty compelling takedown at The Atlantic. Especially this bit:

We already have long-form websites that give us the literary. We have the New Yorker‘s website (which enjoyed record traffic last month with 3.7 million unique visitors according to Women’s Wear Daily), we have the New York Review of Books, and we have dozens of others. Simmons has to build a long-form populist site. We’re going to go long, Simmons warns his profiler. And that means, he must hope, 5,000 words on boobs. Hell, even I would read that.

I suppose the beauty for me is that I have zero skin in the game. I don’t like sports. Not pop culture’s biggest fan either. So if I’m wrong to be excited, really, there’s nothing for me to lose.

Eli Pariser, president of the board at MoveOn.org, has a new book out called The Filter Bubble, and based on his recent NYT op-ed and some interviews he’s done I’m extremely excited to read it. Pariser hits on one of my pet issues: the danger of Facebook, Google, etc. personalizing our news feeds in a way that limits our exposure to news and analysis that challenges us. (I’ve written about that here, here, and here.) In this interview with Mashable he even uses the same metaphor of feeding users their vegetables!


So, thus far my opinion of Pariser’s work is very high. But what kind of blogger would I be if I didn’t quibble? So here goes…

From the Mashable interview (Mashable in bold; Pariser non-bold):

Isn’t seeking out a diversity of information a personal responsibility? And haven’t citizens always lived in bubbles of their own making by watching a single news network or subscribing to a single newspaper?

There are a few important ways that the new filtering regime differs from the old one. First, it’s invisible — most people aren’t aware that their Google search results, Yahoo News links, or Facebook feed is being tailored in this way.

When you turn on Fox News, you know what the editing rule is — what kind of information is likely to get through and what kind is likely to be left out. But you don’t know who Google thinks you are or on what basis it’s editing your results, and therefore you don’t know what you’re missing.

I’m just not sure that this is true. I completely recognize the importance of algorithmic transparency, given the terrific power algorithms have over our lives. But it’s not obvious to me that we’re living in a less transparent world. Do we really know more about how Fox’s process works than we do about how Google’s does? It seems to me that in each case we have a rough sketch of the primary factors that drive decisions, but in neither case do we have perfect information.

But to me there is an important difference: Google knows how its own process works better than Fox knows how its process works. Such is the nature of algorithmic decision-making. At least to the people who can see the algorithm, it’s quite easy to tell how the filter works. This seems fundamentally different from the Fox newsroom, where even those involved probably have imperfect knowledge of the filtering process.

Life offline might feel transparent, but I’m not sure it is. Back in November I wrote a post responding to The Atlantic’s Alexis Madrigal and a piece he’d written on algorithms and online dating. Here was my argument then:

Madrigal points out that dating algorithms are 1) not transparent and 2) can accelerate disturbing social phenomena, like racial inequity.

True enough, but is this any different from offline dating? The social phenomena in question are presumably the result of the state of the offline world, so the issue then is primarily transparency.

Does offline dating foster transparency in a way online dating does not? I’m not sure. Think about the circumstances by which you might meet someone offline. Perhaps a friend’s party. How much information do you really have about the people you’re seeing? You know a little, certainly. Presumably they are all connected to the host in some way. But beyond that, it’s not clear that you know much more than you do when you fire up OkCupid. On what basis were they invited to the party? Did the host consciously invite certain groups of friends and not others, based on who he or she thought would get along together?

Is it at least possible that, given the complexity of life, we are no more aware of the real-world “algorithms” that shape our lives?

So to conclude… I’m totally sympathetic to Pariser’s focus and can’t wait to read his book. I completely agree that we need to push for greater transparency with regard to the code and the algorithms that increasingly shape our lives. But I hesitate to call a secret algorithm less transparent than the offline world, simply because I’m not convinced anyone really understood how our offline filters worked either.