Meta

Month / December 2015

When you’re writing regularly, even weekly, the stories can start to blur together. For me, at least, it can get to the point where it’s hard to answer the question, “What have you been working on lately?” So I decided this year to look back at everything I wrote in 2015. And as I suspected, a couple of major themes emerged. I’ve grouped them together here, mostly for my own clarity. Here’s what I wrote about in 2015:

Algorithms, bias, and decision-making

I spent a lot of time reading, writing, and editing about how humans feel about robots and algorithms, and it culminated in this piece for the June issue of HBR. Long story short: we’re skeptical of algorithms, but give them a voice and put them inside a robot’s body and we start to become more trusting. If you just want to read about the research on our fear of algorithms, I wrote about that here.

I was excited to write more about inequality this year, but along the way some of the most interesting assignments were about the more fundamental question: how do labor markets work? This piece asked that question from the perspective of a CEO considering raising wages. This one compared skills and market power as explanations for inequality.

The IGM Forum recently published its latest poll of economists, and it reminds me of one of the reasons that these polls are so interesting. They illustrate that expert disagreement is seldom a 50/50 split between two diametrically opposed viewpoints. And incorporating these more complicated disagreements into journalism isn’t always easy.

One big, well-known challenge in reporting is to be fair to various viewpoints without resorting to the most naive type of “false balance,” like including the views of a climate change denier just to make sure you have “the other side of the story.”

But false balance isn’t the only complication when reporting on experts’ views on an empirical topic. How do you portray disagreement? Typically, the easiest way is to quote one source on one side of the disagreement, and another source on the other side. But that assumes the expert disagreement is split down the middle, between only two camps.

Sometimes expert opinion really is symmetrical, like economists on the $15 minimum wage and whether it would decrease total employment:

This data (from the IGM poll) helps visualize a symmetrical disagreement among experts, arguably the easiest case for reporters to deal with. But even here there’s a subtlety. If you get a source to say a $15 minimum wage will kill jobs, and one to say that it won’t, have you correctly reported on the disagreement among economists?

Sort of. But you’ve left out the single biggest chunk of experts: the ones who aren’t sure. Should you quote an agnostic in your piece? Should you give the agnostic’s arguments more attention, since they represent the most prominent expert viewpoint? I’ve tried writing about stories like this, but making uncertainty into a snappy headline isn’t easy.

Or consider one of my favorite IGM polls, about the long-term effects of the stimulus bill, a subject of endless political debate:

You can see here that there is disagreement over the effects of the stimulus. But it’s not that a lot of economists think it was worth it and a bunch think it wasn’t. It’s that a lot of economists think it was worth it, and a smaller but still significant group just isn’t sure.

What’s neat about this is that if you believe the results*, they really can help guide the way a reporter should balance a story on this topic. Obviously, you’ll want to find someone to explain why the stimulus was a good idea. But when you’re searching for a viewpoint to counter that, you don’t actually want to find an opponent, at least if your goal is to faithfully explain the debate between experts. Instead, you want to find someone who has serious doubts, someone who isn’t sure.

The IGM poll demonstrates the complexity of these disagreements, and it can serve as a useful heuristic when you’re writing a story about one of these topics. I’m not saying journalists should be dogmatic about allocating space in their stories to exactly match polls of experts: there’s more to good reporting than getting experts’ views right; these polls don’t necessarily do enough to weight the views of specialists in the area of interest; even on economic stories there are other experts to consult beyond economists; these polls aren’t the final word on what economists think*; journalists should consider evidence directly and not rely exclusively on experts; and so on.

Nonetheless, they’re a nice check. If your story gives as much space to stimulus skeptics as to advocates, you’ve probably succumbed to false balance. On the other hand, if it cites only stimulus advocates, maybe there’s room to throw some uncertainty into the mix.

I’m thinking of all of this because of the most recent poll, on the Fed and interest rates:

The Fed is likely going to raise interest rates. What do experts think of that, and how should you report on it? Well, it’s complicated, but polls like this do give you a rough sense of things.

There’s a lot of support for the move: it’s the single biggest position within this group, so it probably should be represented in your story. But combined, the uncertains and the disagrees are almost as large a group. That’s probably worth a mention, too.
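The heuristic above is simple enough to sketch in a few lines. The numbers below are invented for illustration, not the actual IGM results; the idea is just to compare the largest single bloc against the combined share of everyone who doesn’t endorse the move:

```python
# Toy sketch of the balance heuristic. Response counts are hypothetical,
# invented for illustration; they are NOT the real IGM poll numbers.
responses = {
    "strongly agree": 5,
    "agree": 18,
    "uncertain": 10,
    "disagree": 6,
    "strongly disagree": 1,
}

total = sum(responses.values())

# The largest single bloc: economists who support the move.
agree = responses["strongly agree"] + responses["agree"]

# Everyone else combined: the uncertains plus the disagrees.
not_agree = total - agree

print(f"agree share: {100 * agree / total:.1f}%")
print(f"uncertain/disagree share: {100 * not_agree / total:.1f}%")
```

With these made-up numbers, supporters are the biggest bloc, but the uncertains and disagrees together come close, which is roughly the shape of the story you’d want your sourcing to reflect.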

*After I’d finished a draft of this post, Tyler Cowen posted a spirited critique of the latest IGM results. Worth keeping in mind. If only someone could run a meta-survey of how trustworthy experts deem such results!