Different modes of discourse

In a negotiation you hold back: you only make concessions if you have to, or in exchange for something else. In scholarly communication you look for your own mistakes, you volunteer information to others, and if someone points out a mistake, you learn from it. (Just a couple of days ago, in fact, someone sent me an email showing a problem with bayesglm. I ran and altered his code, and it turned out we did have a problem. Based on this information, Yu-Sung found and fixed the code. I was grateful to be informed of the problem.)

Not all scholarly exchange goes like this, but that’s the ideal. In contrast, openness and transparency are not ideals in politics and business; in many cases they’re not even desired. If Barack Obama and John Boehner are negotiating on the budget, would it be appropriate for one of them to just start off the negotiations by making a bunch of concessions for free? No, of course not. Negotiation doesn’t work that way.

I got to thinking about this topic after some blog exchanges with Ron Unz, a political activist (I’m using that term not in any negative way but merely as a description; for example, Unz’s Wikipedia entry describes him as a “former businessman and political activist”), who wrote an article a few months ago claiming, among other things, that Harvard discriminates in favor of Jews in its undergraduate admissions. See here and here for lots of details. The short version is that at first I felt no particular reason to doubt his numbers, but after seeing more data it became pretty clear to me that Unz had made some mistakes both in classification and in data analysis. As I’ve emphasized throughout, this does not make all of Unz’s larger claims false; what this new information does is change the status of Unz’s article from a data-based analysis to an anecdote-based opinion piece containing numbers and comparisons of uneven quality.

OK, now back to today’s topic: different modes of discourse.

Consider Unz’s reaction to the data and comments provided by Janet Mertz, a professor of oncology at the University of Wisconsin who’s published two peer-reviewed articles on the demographics of high-achieving math students.
Mertz came into the picture to shoot down two of the most visible numbers in Unz’s article—two numbers that made it into the New York Times, thus reaching a much broader audience than the readership of the American Conservative and Statistical Modeling, Causal Inference, and Social Science. Unz wrote, “During the 1970s, well over 40 percent of the [U.S. Mathematical Olympiad team] were Jewish . . . However, during the thirteen years since 2000, just two names out of 78 or 2.5 percent appear to be Jewish.” Mertz, however, reports from direct contact with the students that over 12% of Olympiad team members since 2000 were Jewish, and she estimates 25-30% in the 1970s. It seems that Unz’s claim of 44% in the 1970s came by incorrectly including as Jewish any student with a German or Polish name, even though many of them were found by Mertz to be Christian; on the other hand, in the 2000s, he no longer counted such names as Jewish and even failed to count as Jewish a student with a clearly Israeli-Hebrew name.

Going from a factor of 17 to a factor of 2 or 2 1/2, that’s a huge deal. At a direct level, it changes the flavor of one of Unz’s main points (“the strange collapse of Jewish academic achievement”) while, indirectly, it casts doubt on any estimates he creates based on identifying people’s ethnicity by name.
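The arithmetic behind those two factors is easy to check from the percentages quoted above (taking the midpoint of Mertz’s 25-30% range):

```python
# Ratios implied by the two sets of estimates of Jewish representation
# on the U.S. Math Olympiad team (percentages as quoted in the text).
unz_1970s, unz_2000s = 44.0, 2.5        # Unz's figures
mertz_1970s, mertz_2000s = 27.5, 12.0   # Mertz: midpoint of 25-30%, and "over 12%"

print(unz_1970s / unz_2000s)            # 17.6 -- the "factor of 17"
print(mertz_1970s / mertz_2000s)        # ~2.3 -- the "factor of 2 or 2 1/2"
```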

Unz recently accepted his error on the low number (although he shades things a bit by rounding his original estimate up from 2.5% to 3% and reporting Mertz’s number as 12% rather than “over 12%”), but he seems to be standing fast on his claim of 44% for the 1970s, which seems odd to me given that it is based on his subjective ability to recognize Jewish names, an ability he evidently lacks, at least for these data.

OK, so far we could just be seeing some garden-variety stubbornness. Unz worked hard in compiling his numbers and in writing his article, so he’s understandably reluctant to admit some of this effort was wasted.

But here’s the interesting part. In the same blog entry where he admits error on the 2.5%, Unz writes:

The angry criticism of Prof. Mertz and “NB” had been floating around the Internet for some time, and had been widely ignored or dismissed.

So he’s known “for some time” that an expert on the demographics of math competition participants thought his numbers were wrong, and his reaction was to “ignore” and “dismiss” her? From an academic standpoint, this seems strange. We take criticisms seriously. When an expert (or even an informed outsider) says our numbers are wrong, we listen (even if we don’t end up agreeing).

But it all looks different if you take this as a negotiation or debate. Remember the Obama-Boehner story? Hold on to everything, don’t make any concessions till they drag it out of you. That seems to be what’s going on here. For some period of time, Unz has known that an expert in the field thinks his numbers are off by a factor of 5. But he sits on it and then after much criticism on many points, he admits a couple of mistakes but avoids comment on others. Savvy negotiating strategy. That’s how they do it in Washington. And, again, that’s how they should do it. If Obama (or, for that matter, Boehner) just started making unforced concessions, we wouldn’t admire the guy. We’d think he’s weak. And we’d be right. In a negotiation, you don’t get credit for openness. You gotta play by the rules.

So, I think what’s happening is that Unz and the rest of us are playing different games. We’re playing the academic research game, he’s playing the politics game.

Again, I’d better interject that I think politics is just fine. I’m a political scientist! I completely respect Unz’s choice to be a political activist and spend his life trying to change the world through political means (or to stop others from instituting what he believes would be negative changes). Calling Unz a political activist or politically-motivated is not an insult. It’s who he is. Unz made the choice not only to interpret the world but to change it, and that’s a choice we can all respect. He has every right to spend his time and money in this way.

But it gets tricky when academic and political norms collide. Unz is behaving in a perfectly reasonable way politically, dodging criticism as much as he can, fighting it where appropriate, and occasionally making a strategic concession of as little as possible in order to stay ahead of the story. This is what Mayor Bloomberg does, it’s what Paul Ryan and Hillary Clinton do, etc.

From an academic perspective, though, this kind of strategic behavior just comes off as weird. Listening to criticism and recognizing our errors isn’t just good scientific ethics, it’s also good for us. Consider this story, for example. After the 2008 election, I’d posted some colorful maps based on polling analysis, estimating how people from different demographic groups voted in different states. Lots of people loved the maps, but then I got one criticism from the political blogger Kos. He was the only person out there criticizing me, and indeed his criticism was aggressive in tone. It would’ve been easy for me to ignore him, to laugh it off and to say that thousands of people had seen my graphs and he was the only one to complain. I could’ve even questioned his motives. But did I? No. I took the criticism seriously. And it turned out he was right! Fixing my errors took months of research, but it was worth it. Yes, you heard that correctly: a kick in the pants from Daily Kos was a key input to this American Journal of Political Science paper.

If I’m not too proud to listen to Kos, there’s no reason Ron Unz should be too proud to listen to Janet Mertz. And, indeed, he did, but it took him a while to do so, and his admission is still only partial (for some reason he seems to continue to take his 44% number as fact) and so grudging that he’s still a long way from making forward progress. For example, rather than criticizing himself for ignoring and dismissing Mertz’s accurate points, he criticizes her for sending her criticisms by email. That’s totally wack. When people point out errors to me by email, I appreciate the free information!

As they say in AA (or someplace like that), it’s only after you admit you’re a sinner that you can be redeemed. I know that I’m a sinner. I make statistical mistakes all the time. It’s unavoidable. Unz still has not reached that stage. Or maybe he does realize he’s made some big mistakes but he’s holding back on conceding them until he feels the time is right. I have no idea; I suspect it’s a mix of the two.

I have no reason to think Unz is insincere in what he writes, any more than I think Obama, Boehner, etc. disbelieve the economic policies they espouse. They’re just playing a complicated game in which communication is itself part of the strategy. In Unz’s case, he either knew for weeks or months that some of his numbers were wrong but refused to admit it, or he saw Mertz’s criticism but did not look at it. Neither of these reactions looks good to an academic, but, politically, they might have been smart gambles.

In his most recent post, Unz has moved in the direction of criticizing me as well, writing:

Individuals who become emotionally involved with a particular position of ideological or ethnic advocacy may lose their ability to dispassionately analyze data, and this intellectual failing may sometimes even apply to award-winning Ivy League statistics professors.

I agree. Even Ivy League statistics professors can go astray! I strongly believe Don Rubin analyzed his data honestly when he was being paid by the cigarette companies, but not everyone agrees with me on this. And some people felt that McShane and Wyner were misled by ideology when writing their controversial article on global warming for the Annals of Applied Statistics. As to my own writings, I’m subject to all the human flaws we all hold within ourselves. In this particular case, though, I don’t find Unz’s speculation convincing. Here’s what I wrote earlier regarding my interests in the matter:

I personally have connections both to Harvard and to Jews, so you can make of this what you will. All I can say on that account is that, when Unz’s article came out a few months ago, I had no problem presenting its claims as stated; it was only after receiving some recent emails with detailed statistics that I got the impression that Unz’s numbers were mistaken. What Unz did seems reasonable from a distance (and I can understand why he made the choices he did in making his estimate), but his conclusions don’t seem to hold up on closer inspection.

I think Unz is in a difficult position and I don’t know an easy way out for him. When I posted a couple weeks ago, I hoped/expected he would learn from the criticism, revise his numbers, and come up with a more nuanced picture of college admissions and ethnicity. He’d see the data showing a factor of 2 or 2 1/2 (rather than a factor of 17) decline in the proportion of Jews in the Olympiad and see this as a moderate decline explainable by increased competition and demographic change; he’d see that a calibrated analysis shows the Jewish enrollment at Harvard to be comparable to the rate of academically high-achieving Jews in the relevant population; and so on. Some of what he said in his original article still goes through, but he would need more nuance.

But it didn’t happen that way, and I think I’m starting to understand why. Unz is in the arena, he’s been a public figure for decades, and lots of people are angry at him before he says a single word. He gets lots and lots of political attacks, and it’s natural for him to treat all criticisms—from academics and non-academics alike—as political. Now, think about it. If you’re being criticized, year after year, on purely political grounds, it’s probably appropriate for your first reaction to be to fight and defend, not to examine your own writings for errors. What happens when an AFL-CIO researcher reads a new salvo from the Chamber of Commerce? Does he say: “Hey, those business guys have a point! Maybe I’m wrong about that whole collective-bargaining thing after all”? No, of course not. Instead he’ll look to see if the report reveals any weaknesses in his argument, not with any expectation or intention of changing his thinking, but so that he can strengthen his claims. It’s a propaganda war. One might consider this a version of Kuhnian “normal science,” but I don’t think so. Even a non-revolutionary scientist can only make serious progress by questioning his or her own assumptions.

Another possible reason for Unz not to admit some flaws and make a strategic retreat is, of course, that he honestly doesn’t think he made any mistakes (beyond the very few he admitted in his follow-up post). This is possible, but it just pushes the question back one step: why would he trust his demonstrably wrong counting procedure over the estimates of an expert? And it also doesn’t resolve why he sat on the criticisms for a while without acknowledging his errors.

My guess is that Unz sits in some sort of intermediate state (as with the cat in the famous scenario pictured above). At some level he realizes his numbers are iffy. But at the same time he believes so strongly in his conclusions that I suspect he feels that even his false numbers are true in some deeper sense. Thus, for example, Unz tried to explain away the 12%+ Jews in recent math olympiads by claiming that many were recent immigrants. That claim also turned out to be wrong, but the very fact that he made the attempt indicates to me that he was trying to preserve all he could of his earlier views. From a statistical standpoint, this makes no sense: after all, if it’s informative to learn that the proportion of Jews in the math olympiad declined by a shocking factor of 17, then it should be informative (in the other direction) that the decline is only a factor of 2 or so. Surely our first response to the disproof of a shocking and surprising claim should be to be un-shocked and un-surprised, not to try to explain away the refutation. From a political standpoint, though, criticisms are not to be taken at face value. And once Unz feels that he’s in a battle, the rules all change, as discussed above. When you feel you’re attacked, anything goes.

A couple more things (for now)

1. A reader of this blog might reasonably ask: Why devote so much time to the case of a minor political figure who self-published some mistaken statistical claims? The short answer is that some of these numbers appeared in the New York Times. But that’s not the full answer, given that newspaper columnists make mistakes all the time (as do I). What really got me going in this case was a feeling of responsibility, in that I too had initially reported and reacted to Unz’s claims without skepticism. This then got me interested in the larger issue of how such numbers get put into wide circulation, and the corresponding difficulty in retracting them. First I expressed frustration that David Brooks issued no retraction, then I got to thinking about Ron Unz’s motivation in not admitting his mistakes. The larger question of scholarly discourse vs. political negotiation seemed important to me.

2. I’m not placing scholarly values above political values, nor am I implying that scholars always act in what I define as the “scholarly” way. Of course not! Indeed, we speak of “academic politics” and so forth to describe scholars behaving politically. Politics is important, and if you’re running a campaign of some sort, it’s generally not best for you to lay all your cards on the table. It can make perfect sense to fight every concession, kicking and screaming. I’m not much of a politician myself, and I respect those political skills that I do not have. All that said, I do find it frustrating when people behave in “political” ways. It doesn’t make me comfortable, hence this post.

3. In his recent post, Unz compared his critics to litigators, writing that “litigators who choose to completely ignore the overwhelming volume of the facts in a case but spend all their time angrily pounding the desk on an insignificant one hardly demonstrate the strength of their position.” I do not think this analogy is appropriate. First, while I have not been “angrily pounding the desk,” I can understand how Janet Mertz and my other correspondent might be drawn to anger, given that their comments were valid and clearly stated, yet had been (in Unz’s words) “ignored or dismissed.” That would make me angry too! Second, although (as I’ve written many times) several of Unz’s points remain relevant even after his errors are corrected, his claim that Harvard discriminates in favor of Jews and his claim of a dramatic drop in Jewish accomplishment do not hold up to scrutiny. These claims may well be a minor part of Unz’s bigger picture, but they did make their way to the New York Times.

4. I thought twice about posting all of this because I fear it will annoy Unz, and (despite what you might think based on reading this blog), I have no desire to make more enemies in life. But on balance I think the point about modes of discourse is important, and I think it is in the context of a live story that this all comes out, so I wanted to make the post. To Unz and his friends, let me just say what I told David Brooks, that as a statistician I feel very strongly about the use of numbers. I have no desire to silence Unz. Rather, I feel (from a scholarly perspective) that whatever arguments he seeks to make will be stronger when attached to good data and clean statistical reasoning.

Comments

Great post; I found it very interesting (as a nonpolitical scientist) in general, not just for the case in point. Though I can’t help but think that scientists do have the better mode of discourse: even though you say both are fine, the only advantages I see for politics-style discourse are staying in office or on top, not finding the truth or making the best changes. I wonder whether political actors will weigh in, considering that they made the decision to change things (the necessary difference) but may also be frustrated with what seems like a bad mode of discourse (maybe not a necessary difference; and since science still has some way to go to reach that ideal, political discourse could probably shift closer too).

The Gelman-Unz back-n-forth is atypical also because we have an academic engaging with a politician. I think the goals are at cross purposes. For an academic, correctness of methodology is a win in itself.

OTOH, for someone like Unz, his message or conclusion seems more important. If he were to now say “I was wrong, yet this new analysis also yields the same conclusion,” I think he’d lose a lot of his readers at the first part of the sentence.

It is more common in academia to admit mistakes and ditch approaches for better ones. Popular discourse is harsher, with less space to admit errors; you’ve got to get things right on the first go. Readers are less forgiving: if you admit you got it wrong the first time, how do we trust you won’t get it wrong again?

Unz is a former theoretical physicist who founded Wall Street Analytics. His business is extremely numerate, as is his argument, whatever its political motivation. Unfortunately, the good Professor Gelman, who also has a political motivation, doesn’t seem to have any numbers in his responses.

I don’t know why you say I have a political motivation but you can feel free to believe whatever you want. If you want to see some numbers, go to my original post here. The purpose of the above post was not the numbers (which we’ve already discussed endlessly earlier) but rather a discussion of different modes of discourse. I found it interesting that Unz (who, we both agree, is numerate) took so long to acknowledge one of the errors he made and has still not acknowledged others. It seems odd to me from my academic-discourse perspective but perhaps makes more sense in the context of political debate. I suspect that Rahul’s guess above is correct although of course I do not know. Even former physics students (I am one too!) can fool themselves.

I think the other problem is that Unz is so used to political arguments that he doesn’t recognize a scholarly conversation when he sees one. See my remark above, just below the Venn diagram. If you or Unz think that his being “extremely numerate” is protection against him making a statistical mistake, you are naive about the process of scientific discovery. Extremely numerate people make mistakes all the time. Everybody makes mistakes all the time. Being open to learning from your mistakes, that’s how to move forward. Denying your mistakes and fighting, that’s not a way to move forward in your understanding. Trying to be sarcastic by calling me “the good Professor Gelman” might be amusing, but it is not a good way to move forward.

I’m sorry to say that the majority of people who purport to play the academic game in philosophy (in my interactions, and from information from others over many years) do not acknowledge or act on challenges to their views, corrections to their arguments, or even evidence of blatant factual mistakes! It has been my greatest disappointment in philosophy, because I naively thought the field was not as political as others. At least part of it is knowing they can get away with ignoring someone who is not part of one of the popular boys’ clubs. It is no wonder our field so rarely advances, and often actually goes backwards.

Great post! One issue and problem is that although the “modes of discourse” are different between politicians and academics, the language is very similar. It is odd to think that the epistemological basis of a statement like “17 percent drop in X” would be very different when coming from a politician vs from an academic. But it really *is* true that the statements have different epistemological bases (for all the reasons in this beautiful post), and therefore slightly different meanings.

Good post on the meta-level of the situation. It is properly restrained, but as a commenter I’m more free to come down hard on Mr. Unz. Usually politicians and lawyers stay within their own domains. They use arguments they know are flawed and data they know are dubious, but it’s in speeches and courtrooms, and they’re trying to fool the common man. Usually the “facts” that are distorted are so vaguely defined anyway that it doesn’t count as dishonesty; “Obama’s economic record has been highly successful” isn’t something I’d call a lie even though I’d vehemently disagree with it, any more than “Burger King has the best hamburgers.”

Unz, though, wrote as a scholar would, and your and my first thought was to take him as a fellow scholar. He doesn’t have any credentials, maybe, but that doesn’t matter to the value of one’s work. (I just looked him up; he does have a Harvard physics BS and did some PhD work, so he has pretty good credentials, actually.) But now I feel betrayed because he cooked the books. He presented false data. He is lying, and should be fried for it. Having false data is not bad in itself, and often it is not even sloppy or biased, it’s easy to get things wrong, especially investigating a new topic. But continuing to stand by your numbers when they’re shown to be wrong is a sin, combining lying and pridefulness, and it also throws doubt on everything else you’ve done that hasn’t been checked yet.

I wonder whether the don’t-back-down strategy is actually used in law, business, or politics when it comes to serious discourse between peers. Talking to the jury, the pointy-headed boss, and the voter is different from talking to the lawyer on the other side in private, to an intelligent boss who has time to check your facts, and to the other party’s member of the conference committee.

But maybe I’m thinking as an INTJ. A lot of academics are in the 2% of the population that is in that Myers-Briggs type, very objective and willing to admit being wrong, perhaps because they think that it’s no use pretending, but that is rare in the general population. See http://www.16personalities.com/intj-strengths-and-weaknesses . It is useful for us to think about this for help in relating to students and administrators.

I’ve heard that said, but never with an explanation beyond that it doesn’t have a real theory behind it and that it divides people into 16 types rather than placing them on a four-dimensional continuum. Could you suggest a source on why it is bad? Connected with that is why people prefer the Big Five—my casual look gives me the impression that it’s just that they like factor analysis, a technique that to my mind needs justification.

Of course! Here’s a readable breakdown: http://www.skepdic.com/myersb.html with some papers you can follow up on if you’re interested.
One important point is that according to the authors you can simply choose a different type in your consultation if you’re unhappy with your result, though personal consultation isn’t always part of the test.
About the continuum-type stuff, they have a fairly simplistic approach to types, i.e. cutoffs. The scores are normally distributed, not bimodal, so with a cutoff in the middle you get low retest reliability of the types, etc., but that’s probably what you heard before. It’s no small thing, though, because the types are usually what gets reported and interpreted.
With the dimensional scores you can do more useful stuff, though then you still don’t have information on emotional stability (in the Big 5), a fairly useful trait for predicting relationship and job outcomes (fairly…).
That is, the MBTI lacks coverage in comparison to the Big 5, which at least have been shown to account for a lot of variance in the words people use to describe others and themselves. That’s fairly atheoretical (the MBTI has a “real theory” behind it, but it’s made up and doesn’t validate).
The widespread preference for the five-factor model in academic psychology also stems from a combination of good marketing and history (once a test is established, it’s harder to deviate from it). I don’t think the Big 5 are the final word in personality psychology, and I believe few people do, but I think it’s pretty clear that they are much better than the MBTI, which is more like astrology. Criticism of the MBTI is mostly about how it fails basic requirements of good science (e.g. peer-reviewed research is rare, conflicts of interest abound); criticism of the Big 5 is more about improving our understanding of personality.
Another point is the high licence cost of these tests (both MBTI and NEO PI-R, the most popular big 5 measure), to my young internet-influenced mind they’re ridiculous and impede science. There are more public-domain alternatives for the five factor model, google e.g. IPIP.
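The point about cutoffs and retest reliability can be made concrete with a small simulation (the 0.8 score reliability below is an illustrative assumption, not a published MBTI figure): even when the underlying continuous score is highly reliable across two sittings, the median-split “type” disagrees surprisingly often.

```python
import math
import random

def type_agreement(r=0.8, n=100_000, seed=1):
    """Fraction of simulated respondents whose dichotomized 'type'
    (score above vs. below a cutoff at 0) matches across two sittings,
    given a test-retest score correlation of r."""
    rng = random.Random(seed)
    same = 0
    for _ in range(n):
        x = rng.gauss(0, 1)                                  # first sitting
        y = r * x + math.sqrt(1 - r * r) * rng.gauss(0, 1)   # retest
        same += (x > 0) == (y > 0)
    return same / n

# For a median split of bivariate normals, P(same side) = 1/2 + arcsin(r)/pi,
# so with score reliability r = 0.8 roughly one respondent in five
# switches type on retest.
print(type_agreement())                  # simulated agreement, about 0.8
print(0.5 + math.asin(0.8) / math.pi)    # theoretical value, about 0.795
```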

Right on, Eric. The primary problem here is that Unz wrote an article in a format that pretends to be a very careful, scholarly piece of work (that he wants the US Supreme Court Justices to seriously consider in their deliberations regarding their affirmative action case), when, in reality, it was just a self-published political commentary written by a political activist with his own personal agenda who was using whatever data and methods supported the answers he desired. On the other hand, when Brooks and Krugman publish op-eds in the NYT, it is clear to all that these are simply opinion pieces.

What you’re missing is the second half of the equation. Jewish achievement versus their representation at elite schools. Unz claims their enrollment numbers are up while their achievement is down. Put in context, the 24% under-achievement weighed against a substantial rise in enrollment buttresses his argument. After all, the main point of his article or “scholarly essay” was admissions policies. A point that’s been lost here.

As most should know, I don’t agree with the statistical approach Unz used. It’s incumbent on academic scholars to point these errors out, as Gelman makes clear. However, I’m not convinced the scholars are going far enough, namely to shoot down the Weyl analysis for its inaccuracy. A true scientific approach would suggest improvements to the model. Or take a Cartesian approach: rip the entire edifice down and build an approach that might give us accurate results. This is why I stated “garbage in, garbage out.” If the model is garbage, it’s going to produce garbage even if you add “Gold,” “Goldberg,” “Goldman,” etc.
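For readers who haven’t seen it, a Weyl-type surname analysis works roughly like this: count holders of surnames assumed to be highly distinctive of the group, then scale up by an externally estimated fraction of the group bearing such names. This is a minimal sketch, not Unz’s actual procedure; the name set and scaling fraction below are made up for illustration.

```python
# Weyl-style surname estimation, minimal sketch.  The estimate is
# hits / share, so any error in the assumed share (or in which names
# count as "distinctive") is multiplied straight through -- the
# garbage-in, garbage-out problem.

DISTINCTIVE_NAMES = {"cohen", "kaplan", "levy"}   # illustrative only
SHARE_WITH_DISTINCTIVE_NAME = 0.03                # illustrative only

def weyl_estimate(surnames):
    """Implied number of group members in a list of surnames."""
    hits = sum(1 for s in surnames if s.lower() in DISTINCTIVE_NAMES)
    return hits / SHARE_WITH_DISTINCTIVE_NAME

roster = ["Cohen", "Smith", "Garcia", "Kaplan", "Nguyen", "Miller"]
print(weyl_estimate(roster))   # 2 hits / 0.03, about 66.7 implied members
```

Note that halving the assumed share doubles the estimate, so the method is only as good as its name list and scaling constant.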

Sadly, we’re still not talking the language of science. From some of the responses here, we’re still in the political sphere. Tribal mentality isn’t limited to politics. There’s nothing worse than scholars defending other scholars because they’re scholars. Let’s try and focus on discerning truth.

As a side note, I was hoping one of our scholars might run their version (apparently there are several out there) of the Weyl analysis on the members of Congress. This gives us the benefit of a nationwide sample based on the distributed population. Not to mention it can be directly checked. I’ve got a good idea of its accuracy. :0 Ouch!

Unz’s claim of a very high Jewish enrollment at Harvard, compared to high-end Jewish academic performance, has not “been lost here.” As has been discussed many times in the recent posts, that claim is based on Unz’s comparison of incompatible numbers. I agree with you that there is a garbage-in, garbage-out problem here. Please recall that the Weyl method was introduced by Unz in this context; my correspondents were using it to demonstrate that Unz’s claims were inconsistent with his own methods.

Bud Wiser, a sample size of 535 is not large enough to tell us anything about the accuracy of Weyl analysis.

You state, “a true scientific approach would suggest improvements to the model.” It is my belief that there is no accurate way to definitively determine whether an individual is Jewish other than by asking him (unless he or other members of his family publicly identify themselves as such). In fact, among US IMO team members Prof. Mertz did not already personally know, we made little effort to confirm the ethnic background of those with Anglo sounding surnames, as we simply assumed they were not Jewish. Last night, I found out that another IMO participant, whose paternal grandfather’s surname was anglicized upon immigrating to the US, has Jewish ancestry.

Phil calls it “garbage time”: the point in a blog thread where people start repeating their points. I don’t think the issue is person A chasing person A’s tail. It’s A chasing B’s tail and B chasing A’s tail.

I’m guessing you’re suggesting I may be A or B? :) Fair enough. However, that’s why I limited my response to NB. I didn’t feel rehashing my previous points was going to influence NB. I was glad to hear Janet will be suggesting ways to improve our analysis. That’s what I came here for.

I also appreciate that some of your subsequent blogs touched on issues I’ve been concerned with here. I’m not sure if it’s subconscious or you’re keenly aware. Either way it’s O.K.

BTW, who is Phil? It’s nice to know there are people who know when to quit.

Interesting. One of the confusions of this whole story is that Unz publishes in his own magazine; that is, his article, which David Brooks characterized as a “magazine essay” is essentially a blog post.

Of course, Unz’s article is also a magazine essay; even if self-published, that’s what it is. But I wonder if its presence in what seemed to be an official published source lent it a bit of undeserved credibility. If the statistics in a “magazine article” are rebutted by data in emails and a blog post, perhaps there’s a tendency of a traditional journalist such as Brooks to trust the printed source.

To put it Bayesianly, online information sources that are obviously self-published, such as emails and blogs, are held to a higher evidential standard than conventional-appearing publications, because of the implicit model that online material is not checked, whereas conventional publications are assumed to be edited for accuracy.

I like your last part, except that I think people have it the wrong way around. In the age of the search engine, online materials, indexed by Google etc., are more likely to be read and checked, and thus more likely to have their mistakes spotted. Sunlight, goes the saying, is the best disinfectant.

Obviously there is a lot of garbage on the internet, but often it is uncontroversially garbage (e.g., CIA caused 9/11) and so it is ignored. Mainstream controversial claims, on the other hand, can elicit a crowd-sourced review, as happened to Unz on your blog.

Andrew, the kind of hard-ball negotiation strategy you ascribe to business is certainly not what is prescribed as first-best in most negotiation texts, where the only thing that matters is the payoff and especially where you are aiming for repeat business. It might make for good political theatre, but I suspect it is not good politics either. Or, has politics ceased to be defined as the art of compromise?

[…] to himself, Columbia University statistics professor Andrew Gelman has now seen fit to publish his sixth(!) lengthy blogsite column discussing or sharply critiquing my analysis of Ivy League university admissions. Just like most […]

You write, “For reasons best known to himself, Columbia University statistics professor Andrew Gelman has now seen fit to publish his sixth(!) lengthy blogsite column discussing or sharply critiquing my analysis of Ivy League university admissions.”

I have in fact already given my reasons for posting this. See items 1 and 4 at the end of this post, under the heading “A couple more things (for now).” As you and I both know, one advantage of self-publishing is that it allows us to explore issues in depth and to return to earlier discussions.

For all I know you are perfectly disinterested and unbiased, as you seem to say. If so, it’s not easy to explain several things:

1. You keep calling Unz a “political activist”, keep saying this is not an insult, and keep saying that Unz is not acting appropriately according to the academic norms you try to follow but rather is acting like a political activist. It’s obvious that by calling Unz a political activist you meant something negative (especially since you also called him “sloppy”). The puzzle is that you continue to insist you didn’t.

2. You call Mertz a professor of oncology who has written two papers about something relevant. True. What’s also true is that at least one of those papers has an obvious ideological bias. As Unz says, only an ideologue would say that finding that 10% of some group of mathematicians are women is evidence against Larry Summers’s view that women might be less good at math than men. Surely you know this. The puzzle is that you ignore it and repeatedly describe only some of the truth about Mertz. Which is misleading.

3. Unz is right when he says that you keep emphasizing one small point while ignoring the rest of his evidence or at least refusing to comment on it. After Unz points out that you are doing this, you keep on doing it. Again, puzzling. Especially since you complain that Unz is stubborn.

4. You seem to fail to understand the concept of thought experiment (Unz’s comment about Mormons). Again, puzzling.

1. I’m preparing a detailed response to Unz. As an academic who gets paid to do cancer research, I don’t have time to respond both rapidly and carefully to this little side project. Sorry to keep you waiting.

2. Actually, I have published 3 articles on the topic of gender, ethnicity, and mathematics performance. Have you actually taken the time to read them? The first 2 deal with IMO- and Putnam-level performance, from which you quote the 10% females number. My 2012 Notices of the AMS article addresses the question of the ratio of male variance to female variance in the distribution of math performance on standard international exams, i.e., the PISA and TIMSS. These articles document with hard data the existence of huge differences across cultures in means, variance ratios, and identification of females who excel in mathematics performance at the IMO/Putnam level. The primary conclusion of my articles is that sociocultural factors play a large role in how females vs. males perform in mathematics at all levels. In other words, the extreme scarcity of US females who are top math research professors is due, in considerable part, to culture, not solely to gender differences in “intrinsic aptitude” as Larry Summers “hypothesized” in the absence of data or doing any research himself on the topic. In recent years, the very top US math departments have been successfully hiring terrific women math professors. The problem is that most of them were born and raised in other countries (e.g., France, Germany, Belgium, China, Russia, etc.), not the US. If you were raising a daughter in the US, you would see for yourself how US culture strongly discourages girls from pursuing high-level mathematics.

I have read only one of your papers, Prof. Mertz. Your comment is not reassuring that you are not an ideologue. Your work sounds good. You may be right about why there is an extreme scarcity of US females as top math professors. But I believe that researchers in the area of behavioral genetics and education will be stunned to learn that it is as simple as you make it out to be. You find “huge differences across cultures in means, variance ratios, and other relevant stuff”. From this you conclude “sociocultural factors play a large role in how females vs. males perform in mathematics at all levels”. The first finding isn’t even a correlation. The second statement (“sociocultural factors…”) is a conclusion about causality. Researchers in this area find it much harder to make causal statements.

You need to read my other articles as well. The second one is Hyde & Mertz, Proc Natl Acad Sci U S A. 2009 Jun 2;106(22):8801-7. You can access it via the www at “Pubmed” for free. The third one is Kane & Mertz, 2012 (www.ams.org/notices/201201/rtx120100010p.pdf). All three articles went through serious peer review. The 1st and 3rd were published by the American Mathematical Society, the primary society of research mathematicians in North America. The 2nd article was published by the highly prestigious journal of the US National Academy of Sciences; the editor of the article was the editor-in-chief of the journal. All three articles are full of lots of tables of hard data. Wherever appropriate, we give Pearson correlations and p-values. Many of the p-values are <0.01; some are <0.001. The 3rd article includes analysis of data sets with over 1/4 million students. So far, nobody has written to me mentioning any errors needing correction; if they did, I would be happy to do so. All 3 articles were well received by the math community as well as covered in the lay press. The first one was even mentioned in a NYT op-ed written by Bob Herbert, referring to my conclusion that the US is doing a poor job of even meeting the needs of many of its academically most gifted white male students, let alone its female and under-represented minority ones. Yes, these articles were picked up by the lay press because they mentioned Larry Summers. However, the research was done and published in quality journals because it provides lots of high-quality data that many people were interested in seeing. Yes, different folks may believe the data should be interpreted in a variety of different ways to reach different conclusions; that's what the Discussion section of articles is for. However, while folks may differ on how they interpret the findings, nobody questions the data themselves and the validity of the methods used to obtain these data.

Compare the above statements with Unz’s Meritocracy article. He self-published, bypassing peer review. Some of the data in his tables and statements he makes are absolutely incorrect (Jews on 21st-century US IMO teams), yet he refuses to admit they are, let alone correct the errors. Nowhere does he present estimates regarding either the precision or accuracy of his methods other than a single statement that the Weyl method yielded the same answer overall on the NMS data. The primary reason for these incorrect data is that the primary method he used to obtain them is highly subjective and appears to have a very large error with respect to % Jews. The error may be so large that his data related to Jews may be incorrect to the point that most, if not all, of the conclusions he draws from these data regarding Jews, and, indirectly, non-Jewish whites, may be wrong. He has yet to do anything to determine the size of his errors so that correction factors and ranges of values, including p-values, can be stated. I will provide additional details regarding my scientific concerns with the Unz article in the statement I am preparing, which includes suggestions for ways he could try to address them to transform his article from a very long political commentary into an academic article, if he desires to do so. I understand that he is primarily a policy advocate and, thus, may not have any interest in doing so. Nevertheless, the fact remains that the current version of his article is badly lacking in academic rigor despite his pretense that it meets that standard.

Regarding point 3, it is not a small point that Unz underestimated the % of Jews on the US IMO team since 2000 by a factor of at least 5. Unz’s analysis of Jewish academic achievement is predicated on his ability to identify Jews on the basis of their names, which proved spectacularly wrong in the one data set on which there exists confirmed data about the ethnic background of the students, thus calling into question his entire analysis wrt Jews. Also, please note that Unz previously stated: “Science largely runs on the honor system, and once simple statements of fact—in Gould’s case, the physical volume of human skulls—are found to be false, we cannot trust more complex claims made by the particular scholar.”

Regarding Mormons, we need not perform a thought experiment. Unz evidently considers SAT scores as a critical measure of academic merit, given his analysis is based on his estimates of the demographics of NMS semifinalists. The NMS qualifying score in Utah is 206 (see here for NMS qualifying scores by state). This corresponds to an SAT score of 2060, which is way below average for Harvard, meaning few Utah NMS semifinalists are actually Harvard material. Contrast with New York state, which has a MUCH higher qualifying score of 218, corresponding to an SAT score of 2180, meaning NY contains a far greater share of students who are Harvard material than does Utah. Yet 2180 is still below average for Harvard, so it’s most instructive to consider states like Massachusetts with the highest qualifying scores, typically 221-223, corresponding to SAT scores of 2210-2230. Unz reported that 19% of MA NMS semifinalists are Jewish (which is an underestimate, in my opinion, based on my perusal of the names), meaning that Jews are overrepresented by a factor of 5 among NMS semifinalists in the state with the highest NMS qualifying score and the state that I imagine is the most disproportionately represented at Harvard (due to both geography and MA having arguably the best educational system in the nation, which Romney pointed out in his campaign, as you might recall). A far more significant share of MA NMS semifinalists are Harvard material than in Utah, where I imagine most American Mormons live. As I discuss in greater detail here, it is often the case that in states with high NMS qualifying scores, non-Jewish non-Hispanic whites are underrepresented among NMS semifinalists in proportion to their population in that state.

1. I fear there’s nothing I can do that will satisfy you. It is not just me who labels Unz a political activist; as noted above, he is so described in Wikipedia, and as far as I know that’s what he does full time. Given that I explicitly said that I don’t think it’s a negative to call someone a political activist, but you don’t believe me, there’s really nothing further I can say on the matter.

2. I suggest you read Mertz’s articles more carefully.

3. See point 3 at the very end of my blog above.

4. I do understand the concept of a thought experiment. However, as a statistician I prefer to answer questions as directly as possible, as I did in that example.

The claim that “[Prof. Gelman] neglected to note that Weyl Analysis also produced a substantially *lower* estimate for the other 17 states I used” is untrue as I demonstrated here. Perhaps part of the disparity is due to the fact that I interpreted “Gold—“ in Unz’s description of Weyl Analysis to mean Gold* when perhaps he meant a specific list of “Gold-” names. I’ve sought clarification of this issue in the comments sections here to no avail, and I recently sent Unz an email directly asking this question, so I hope Unz will reply.
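For readers unfamiliar with the method under discussion, Weyl-style distinctive-surname estimation can be sketched roughly as follows. Everything concrete here is an illustrative placeholder of mine, not Weyl’s or Unz’s actual lists or scale factor:

```python
# Weyl-style distinctive-surname estimation (illustrative sketch only).
# Idea: count surnames borne almost exclusively by the group of
# interest, then scale up by the externally estimated share of the
# group that bears such names.

DISTINCTIVE = {"cohen", "kaplan", "levy"}  # hypothetical name list
PREFIXES = ("gold",)    # reading "Gold-" as a prefix rule (Gold*)
SCALE = 25  # assume distinctive names cover ~1/25 of the group (made-up figure)

def is_distinctive(surname: str) -> bool:
    s = surname.lower()
    return s in DISTINCTIVE or s.startswith(PREFIXES)

def weyl_estimate(surnames):
    """Scaled-up group-size estimate from a roster of surnames."""
    hits = sum(is_distinctive(s) for s in surnames)
    return hits * SCALE

roster = ["Smith", "Cohen", "Goldberg", "Jones", "Miller", "Kaplan"]
print(weyl_estimate(roster))  # 3 distinctive names x 25 -> prints 75
```

The sketch makes concrete why the “Gold*” vs. fixed-list ambiguity matters: a prefix rule sweeps in every Gold- surname, changing the hit count and therefore the entire scaled-up estimate.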

FWIW (and Eli was there and was affected), the elite schools (read: Ivies) had quotas for Jews, and even for various other ethnic whites such as Italians and Poles. These persisted well into the 60s and even 70s. These groups were primary beneficiaries of the civil rights movement, which broke the barriers to admission for Blacks (not quotas, but an absolute we-don’t-take-any policy, with rare exceptions). Recognition of this explains much of the hostility of Jews to the idea of quotas: quotas were used to exclude them. Quotas were a vast improvement for Blacks.

Ron, can you elaborate on how you imagine this bias in favor of Jews would look in practice? Just thinking about it a bit, I can’t imagine any practical way this Jewish admissions boost could work across so many Ivy schools over a period of so many years, at least not without dozens upon dozens of leaks coming from admissions committee members unsympathetic to the massive preference given to Jews over other religions and races (just as so many adcoms tell stories of the “chore” of reviewing Asian applicants; in fact, in your own comment section one person noted that when they worked in admissions everyone dreaded the “night of 1000 Lees”). Is there not a single non-Jewish minority on all the adcoms who might take issue with giving such a large boost to Jewish applicants?

I would think it would take a mass conspiracy for this to actually work in practice.

In other words, how could this work in any way that wouldn’t sound cartoonish? We have a general idea of how regular affirmative action works (e.g., separating out applications of certain minority groups and evaluating them using separate criteria), but I presume this isn’t how you envision it playing out with Jewish preference. Do you think it’s unconscious? Purposeful but unspoken? Totally out in the open, with the applications actually separated into a “Jewish stack,” etc.?

(*) I really hope that this sideshow regarding jewish overrep in uni admissions doesn’t cause the larger and more serious issue of Unz’s article to get buried which is the blatant discrimination against asian students.

(*) It is noteworthy that if admissions officers have no way to infer ethnicity other than names (and self reported religious preferences), then the discrepancy between J mertz’s more rigorous jew-counting and the name analysis used by weyl or unz is irrelevant for the IMO numbers at least. If admissions officers favor applicants which have apparently jewish names, then whether or not the applicants actually are jewish doesn’t matter. The bias applied to their application is based on their apparent ethnicity, not their actual ethnicity.

(*) I believe that ivy apps also require the full, and maiden names of both parents and guardians for applicants, (as well as country of birth) so the admissions staff can use this information to discriminate for/against applicants whose *parents* have names associated with a favored/unfavored group. This works even if the applicant’s ethnicity can’t be easily inferred from their name. If they say information will not be used to discriminate, one can be almost certain that it will!

(*) The bias toward Jewish names could be a form of operant conditioning. Admissions officers see quite a few applicants with genuinely amazing accomplishments and Jewish names; eventually the association between the two is learned. After some conditioning, the quality of an applicant’s accomplishments is rated higher if the applicant is observed to have a Jewish name. Perhaps this idea could be tested in the lab, or on Mechanical Turk.

(*) The reliability of the hillel stats is a major weakness – someone should find name lists of ivy and non-ivy colleges for populations which overlap with the NMS lists to do a real comparison.

(*) I note in closing that I find this whole topic disturbing because (a) the dark history of jew counting (b) “jewishness” is a very vaguely defined ethnic category, much moreso than “black”, “chinese”, “hispanic”, etc. there are apparently as many definitions of jewishness as there are jews..

“(*) I really hope that this sideshow regarding jewish overrep in uni admissions doesn’t cause the larger and more serious issue of Unz’s article to get buried which is the blatant discrimination against asian students.”
There are serious concerns regarding the methodology Unz used to obtain his Asian-American data as well as his Jewish data. Although I have not personally studied the Asian issue in detail, I briefly mention some of the problems in my formal response to Unz that I hope Gelman will post soon on his blog.

“(*) It is noteworthy that if admissions officers have no way to infer ethnicity other than names (and self reported religious preferences), then the discrepancy between J mertz’s more rigorous jew-counting and the name analysis used by weyl or unz is irrelevant for the IMO numbers at least. If admissions officers favor applicants which have apparently jewish names, then whether or not the applicants actually are jewish doesn’t matter. The bias applied to their application is based on their apparent ethnicity, not their actual ethnicity.”
There are no solid data indicating that the admissions folks ARE actually favoring Jews in admissions. That is one of the major claims made by Unz that we are questioning. Unz is using the Hillel data as the primary basis of his claim that Jews are favored. The Hillel data are not based upon apparently Jewish names. IF real, they are probably based upon who identified as a Jew on a religious-affiliation questionnaire filed with their college or attends Hillel events.

“(*) I believe that ivy apps also require the full, and maiden names of both parents and guardians for applicants, (as well as country of birth) so the admissions staff can use this information to discriminate for/against applicants whose *parents* have names associated with a favored/unfavored group. This works even if the applicant’s ethnicity can’t be easily inferred from their name. If they say information will not be used to discriminate, one can be almost certain that it will!”
Many of the 3rd/4th-generation US IMO team members whom I have identified as Jewish do not have parents with obviously Jewish names, either. For example, my older son’s parents are named Janet Mertz and Jonathan Kane. He was born and raised in Madison, WI. There was NOTHING in his college applications that indicated his ethnicity other than that he checked the “white” box. He was admitted to all of the colleges to which he applied, including Harvard, in part because he was an IMO gold medalist with a good chance of helping the college that successfully recruited him field a top-ranked Putnam team.

“(*) The reliability of the hillel stats is a major weakness – someone should find name lists of ivy and non-ivy colleges for populations which overlap with the NMS lists to do a real comparison.”
Yes, the Hillel data should not be used. N.B. has been analyzing lists of college students using another method.

“(*) I note in closing that I find this whole topic disturbing because (a) the dark history of jew counting (b) “jewishness” is a very vaguely defined ethnic category, much moreso than “black”, “chinese”, “hispanic”, etc. there are apparently as many definitions of jewishness as there are jews..”
Agreed. I have been counting Jews by asking US IMO team members how many of their grandparents considered themselves to have been born ethnically or religiously Jewish, regardless of what religion, if any, they currently observe. That is how I ended up with many of the students claiming they are 1/2 or 1/4 Jewish. Whatever they claimed is how I counted, even when they claimed 0% Jewish while having a likely Jewish surname.

r m adler, regarding your statement “It is noteworthy that if admissions officers have no way to infer ethnicity other than names (and self reported religious preferences), then the discrepancy between J mertz’s more rigorous jew-counting and the name analysis used by weyl or unz is irrelevant for the IMO numbers at least. If admissions officers favor applicants which have apparently jewish names, then whether or not the applicants actually are jewish doesn’t matter”….it is important to note that Unz’s tally of 2 out of 78 Jewish names among IMO participants since 2000 is still incorrect as Unz excluded an obviously Israeli/Hebrew name and multiple names described as possibly Jewish by ancestry.com (Kane, Miller, Price, etc). And since Unz compared his tally of Jews based on name inspection to Hillel’s count, which would certainly include one of the IMO participants whom Unz counted as non-Jewish white but who is pictured on Hillel’s webpage, it is evident that Unz’s analysis does not address your point either. The larger point of Unz’s factor of 5+ underestimate in identifying the Jews among recent IMO participants is that Unz’s entire analysis of Jewish academic achievement is predicated on his ability to identify Jews on the basis of their names, which proved spectacularly wrong in the one data set on which there exists confirmed data about the ethnic background of the students, thus calling into question his entire analysis wrt Jews. More generally, it is difficult to identify Jews on the basis of their names alone, as many Jews do not have obviously Jewish names.

Regarding discrimination against Asian-Americans: I find this aspect of Unz’s article puzzling in that he argues in the beginning that the Ivies have quotas for Asian-Americans (albeit on the basis of partly erroneous data, as Prof. Mertz and I will discuss in detail in later blog posts), but then concludes after analyzing his NMS data that “there appears to be no evidence for racial bias against Asians.” I will also discuss in detail in an upcoming blog post that Unz’s analysis of NMS data is deeply flawed.

[…] from your model. Darwin understood that, and so should we all. This also arose in our recent discussions about college admissions: I had various nitty-gritty data questions with some found annoying but I […]

[…] of course, I’m involved in scientific rather than political debates; he’s engaged in a different mode of discourse than I am. But for reasons I’ve explained in an old post, I do think that even in science […]

The self-reported Hillel data shows that Jews are 30 TIMES as likely per capita as non-Jewish whites to attend the Ivies. Instead of simply rejecting the validity of this data, or pointing to its possible inconsistency with other data sources, why not accept it on its own terms? Do you believe that these self-identified Jews were lying? Were they lying at high enough rates to skew their representation by a factor of 30? What would be their motivations for lying?

Jewish IQs suggest that Jews should be overrepresented in elite universities, but not by a factor of 30. In his writings, Emile Durkheim notes that Jews in France were overrepresented by a factor of 8. Assuming an IQ cutoff of 130, Jews should be overrepresented by 7 to 1. With a cutoff of 145, by 4.5 to 1.
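The arithmetic behind such tail-ratio figures can be sketched under a standard normal model. The specific mean advantage below (a one-SD shift, to N(115, 15)) is an illustrative assumption of mine, not a figure stated in the comment:

```python
# Overrepresentation ratio in the right tail of two normal
# distributions, the usual model behind "7 to 1"-style figures.
# Assumed (illustratively): general population IQ ~ N(100, 15),
# comparison group ~ N(115, 15), i.e., a one-SD mean advantage.
from math import erfc, sqrt

def tail(cutoff, mu, sigma):
    """P(X > cutoff) for X ~ Normal(mu, sigma)."""
    return 0.5 * erfc((cutoff - mu) / (sigma * sqrt(2)))

def overrep(cutoff, mu_group=115.0, mu_pop=100.0, sigma=15.0):
    """Ratio of the group's tail probability to the population's."""
    return tail(cutoff, mu_group, sigma) / tail(cutoff, mu_pop, sigma)

print(round(overrep(130), 1))  # prints 7.0, matching the 7-to-1 figure
print(round(overrep(145), 1))  # prints 16.9
```

Note that under these equal-variance assumptions the ratio grows as the cutoff rises (roughly 7:1 at 130 but roughly 17:1 at 145), so the 4.5:1 figure quoted for a cutoff of 145 would require different assumptions, such as a smaller within-group variance; this is exactly the kind of variance-ratio question Mertz’s articles address.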

How do you account for these facts? The merit scholar name tally is uninteresting to me. Jewish achievement may or may not have collapsed. But why are these numbers off by a factor of more than 4 (30 vs. 7)? Do you account for it by other behavioral traits possessed by Jews and lacking in non-Jewish whites? By Jewish privilege (not many universities have departments teaching about that, but our apolitical professoriate takes abundant joy in inculcating shame about “white privilege”)?

You boast about your peer-reviewed priesthood, which ensures that no dissident writings squeak through. Give us your professional analysis, please.

I don’t know who you’re talking about when you write, “You boast about your peer-reviewed priesthood.” Nobody on this blog, either in posts or comments, has ever to my knowledge boasted or even talked about a peer-reviewed priesthood. With your comments on “lying,” “our apolitical professoriate,” “abundant joy,” and “white privilege” you’re basically ranting here, and this blog is not a good place to rant. You might try the comments section at Gawker; they have some great rants there. It looks like a fun place to troll, and also a fun place to battle with trolls if that is your desire.

With regard to your substantive comments, as discussed above, there are many questions about the Hillel numbers, the subjective counts, and the scale-up methods used by Unz in his article. When applied to the same data sets, these methods gave markedly different answers. So I did not see evidence that Jews were attending Harvard at rates much greater than their rates among high-achieving high school students. The differences might be there, in which case I agree there should be reasons for them, but the data I’ve seen so far seem too inconclusive to support the sort of dramatic claims you seem prepared to make. You can feel free to make such claims, which might indeed be valid; I just object to the implication that such claims are (currently) supported by the data.