Wednesday, November 26, 2014

Since the midterm elections, there have been a number of stories about "What 'big thing' would reinvigorate the Democratic Party?" to quote the title of one of them. There seems to be agreement that the problem is that middle-class and working-class incomes haven't been rising, but in the stories I've read no one has raised what seems like an obvious idea: making it easier for workers to unionize (although Thomas Edsall hints at it). Back when unions mattered, there was a good deal of research on the "union wage effect," and the consensus was that they raised wages by something like 20% on the average (more for low paid workers and less for high paid workers). So unionization would help address the problem, and do it without requiring taxes or spending.

Is the lack of attention because unions have become so unpopular that it's not worth raising the issue? In 1952, the Gallup poll asked "In the labor disputes of the last two or three years, have your sympathies--in general--been on the side of the unions or on the side of the companies?" a number of times. After a long gap, the question was revived in 1999, and was also asked in 2002, 2005, and 2011. Each time, a plurality said unions--the margin ranged from 3 points (37% to 34%) to 18 points (52% to 34%). The average margin was 9.7 points in 1952 and 10.5 points in 1999-2011--the difference is not close to being statistically significant. (There are some ups and downs, but they have no obvious pattern, so I don't show the graph.) So it seems like a pro-union effort would have a reasonable prospect of being popular with the public.

Saturday, November 22, 2014

A recent piece in the New York Times by Jason Weeden and Robert Kurzban tells us that self-interest influences political views. Along with some uncontroversial examples, there was one that caught my attention:
"Those who do best under meritocracy — people who have a lot of education and excel on tests — are far more likely to want to reduce group-based preferences, like affirmative action." This didn't sound right to me: if it were true, universities, especially elite universities, would be centers of opposition to affirmative action.

Since "affirmative action" can mean different things to different people, I looked for questions that asked directly about test scores. There weren't many, but I found one in a CBS News/60 Minutes/Vanity Fair survey from 2013. It asked "Which phrase comes closest to how you would describe the SAT tests that are used for college admissions in the United States: a successful equalizer, a failed ideal, a waste of time, or a necessary evil?" The first answer can be regarded as positive, the second and third as negative, and the last one as neutral. Using this classification, here is the breakdown by education:

                   Pos   Neutral   Neg   Pos-Neg
Less than HS        33      43     23      +10
HS                  25      40     35      -10
Some college        22      44     34      -12
College graduate    21      48     31      -10
Grad school         17      44     39      -22

So people without a high school degree have the most favorable opinions, and people with graduate education have the least favorable. You get a similar pattern with income: people with incomes under $30,000 are the most favorable and those with incomes over $250,000 (admittedly a small group) are the most unfavorable. Of course, the general point that a lot of opinions have a straightforward relation to self-interest is valid, but as this example shows, there are exceptions.

PS: I promised an examination of own vote vs. predicted winner in my last post. Own vote predicted 31 states correctly (that is, there were 31 states in which a majority of the sample said they would vote for X and X won), while the candidate respondents predicted would win actually won in 29 states. So own vote had a slight advantage, but not a decisive one. [Data from the Roper Center for Public Opinion Research]
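For anyone who wants to check the arithmetic, the Pos-Neg column is just the positive percentage minus the negative one. A minimal sketch of that calculation (the percentages are taken from the breakdown above; the dictionary layout and labels are mine):

```python
# Breakdown by education from the CBS News/60 Minutes/Vanity Fair
# survey, using the classification described above: "successful
# equalizer" = pos, "necessary evil" = neutral, the other two = neg.
responses = {
    "Less than HS":     {"pos": 33, "neutral": 43, "neg": 23},
    "HS":               {"pos": 25, "neutral": 40, "neg": 35},
    "Some college":     {"pos": 22, "neutral": 44, "neg": 34},
    "College graduate": {"pos": 21, "neutral": 48, "neg": 31},
    "Grad school":      {"pos": 17, "neutral": 44, "neg": 39},
}

# Pos-Neg margin for each group: positive minus negative percentage.
for group, r in responses.items():
    print(f"{group:18s} {r['pos'] - r['neg']:+d}")
```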

Monday, November 17, 2014

After the election, Justin Wolfers wrote a column saying that questions about who people expected to win were better predictors of election outcomes than questions about who people intended to vote for. He offered this explanation: "Asking voters about their expectations allows them to reflect on everything they know about the race — which way they currently intend to vote, how likely they are to vote, whether they’re persuadable, the voting intentions of their friends and neighbors, and their observations about bumper stickers, yard signs, the resonance of a candidate’s message and the momentum they sense in their communities."

You can see how this explanation would appeal to an economist, because it parallels the way that markets work: combining scattered information in an optimal (or more realistically, pretty good) fashion. But there's another possibility: that voters are echoing what the "experts" are saying, rather than drawing on information from their own lives. Even if they're not paying close attention to the campaign, voters are likely to get a sense of what "everybody thinks" will happen. In 2014, this would mean good predictions, because all the experts were saying that the Republicans would win big, while the polls left more doubt. But there have been other campaigns in which the experts were wrong, notably 1948. Wolfers's explanation implies that voters would have called that one correctly, or at least come close.

Did they? In late September 1948, a Gallup poll asked "regardless of how you, yourself, plan to vote, which candidate do you think will carry this state: Truman, Dewey, or Wallace?" 25% said Truman, 56% said Dewey, 15% didn't know, and the rest said Wallace or someone else (presumably Thurmond, who did carry several states in the South). Of course, the polls were famously wrong that year, but the survey also asked who respondents would vote for: 46% said Dewey, 40% Truman, 4% Wallace, and 2% Thurmond. Given that Dewey's margin was not large, it seems like voters' own stated preferences might have been better predictors of how their states would go. Gallup did record state, so that could be checked, and I will do that in a later post.
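The state-by-state check is simple to sketch: for each state, compare the candidate a majority of that state's respondents named with the actual winner, and count the matches. This is only an illustration of the bookkeeping--the respondent answers and state results below are invented, not the 1948 Gallup data:

```python
# Illustrative sketch: count states where the majority answer in the
# survey matches the actual winner. The data here are made up.
from collections import Counter

def majority_choice(answers):
    """Most common answer among a state's respondents."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical respondent answers by state (NOT the 1948 Gallup data).
own_vote = {
    "Ohio":     ["Dewey", "Truman", "Truman"],
    "New York": ["Dewey", "Dewey", "Truman"],
}
actual_winner = {"Ohio": "Truman", "New York": "Dewey"}

correct = sum(
    majority_choice(votes) == actual_winner[state]
    for state, votes in own_vote.items()
)
print(correct, "of", len(own_vote), "states called correctly")
```

The same loop run twice--once on own-vote answers and once on predicted-winner answers--gives the two counts to compare.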

Wednesday, November 5, 2014

I get most of my election coverage from the NY Times. It occurred to me the other day that I hadn't read much about the race for Connecticut governor, even though the Times has a lot of readers here and the race was expected to be close. I'm not sure, but I suspect that media coverage has shifted towards national politics over the years, and I wondered if that was reflected in a decline in knowledge of state politics among the general public. There have been occasional survey questions over the years on whether people could name the governor of their state. The results:

Now that's what I call a trend. I think that knowledge of basic facts about national politics has been stable or declined slightly, but nothing like this.

Sunday, November 2, 2014

Since 1985, a number of surveys (first by the Times-Mirror Corporation and later by Pew) have asked "How would you rate the believability of _____ on this scale of 1 to 4?" The scale goes from 4 ("believe all or most of what they say") to 1 ("believe nothing"). I picked four publications--the Wall Street Journal, USA Today, Time Magazine, and the New York Times--and summarized the results by the logarithm of positive responses (3 or 4) divided by negative responses (1 or 2).*
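For the record, the summary measure works like this (a sketch with made-up counts, not the actual survey numbers; the function name is mine):

```python
import math

def believability_score(counts):
    """Log of (responses of 3 or 4) over (responses of 1 or 2).

    counts maps scale points 1-4 to response counts. Positive values
    mean more respondents believe the outlet than not; 0 is an even
    split. Illustrative helper, not the exact code behind the graph.
    """
    pos = counts[3] + counts[4]
    neg = counts[1] + counts[2]
    return math.log(pos / neg)

# Made-up counts for illustration: 70 positive vs. 30 negative.
example = {1: 10, 2: 20, 3: 40, 4: 30}
print(round(believability_score(example), 3))  # -> 0.847
```

Using the log of the ratio makes the measure symmetric: a 70-30 split scores +0.847 and a 30-70 split scores -0.847.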

The obvious point is that ratings of the believability of all have declined. Two other points that I'm not sure of, but are interesting possibilities:

1. In 1985, the Wall Street Journal and Time were rated much higher than USA Today. In the last ten years, there's been little difference among them--that is, the ones that started higher declined faster (unfortunately they didn't ask about the New York Times until 2004).

2. The decline is pretty well approximated by a linear trend. However, it also seems like there was an additional fall between 2002 and 2004 from which they haven't recovered. It seems reasonable that some people would have felt they were misled after the Iraq war didn't go as smoothly as promised--and even though government officials were the original source of the misleading information, the "believability" of news outlets would suffer.

*In retrospect, I should have just taken the averages on the four point scale, but for reasons that I have forgotten I started by collapsing the scores into two groups.

About Me

I am a professor of sociology at the University of Connecticut, and editor of the journal Comparative Sociology. My book, Hypothesis Testing and Model Selection in the Social Sciences, was published by The Guilford Press in April 2016.