Monday, November 30, 2009

Lecture 1: The Emergence of Modern Economic Growth: A Comparative and Historical Analysis

The focus of this course is on understanding the pattern of world development in the long run, starting with the Neolithic Revolution around 10,000 years ago.

The most dramatic fact is the relative evolution of income per capita vividly portrayed in the data by Maddison shown on the next slide. When I was a (British, and worse, British Empire) schoolboy we used to just call this the industrial revolution but now we call it the Great Divergence.

This terminology reflects a significant shift in emphasis, away from explaining why the industrial revolution happened and toward explaining why the technologies and methods of organization it generated diffused so unevenly across the world.

I've looked at some of economic historian Angus Maddison's stuff, and to me it seems like little more than guesswork. But the figures are pretty...

Early industrialization is bad for the average person -- it makes you nasty, brutish and short ;-) But in the long run, it's the way to get ahead.

Tuesday, November 24, 2009

One of the most mysterious aspects of the nature-nurture question is the difficulty in characterizing the nurture component.

Turkheimer and Waldron: When genetic similarity is controlled, siblings often appear no more alike than individuals selected at random from the population. ... it has become widely accepted that the source of this dissimilarity is a variance component called nonshared environment.

... In what may have been the most influential article ever written in the field of developmental behavior genetics, Plomin and Daniels (1987) reviewed evidence that a substantial portion of the variability in behavioral outcomes could not be explained by the additive effects of genotype or the environmental influences of families. They suggested that this residual term, which they called the nonshared environment, had been neglected by environmentally oriented researchers who assumed that the most important mechanisms of environmental action involved familial variables, like socioeconomic status [SES] and parenting styles, that are shared by siblings raised in the same home and serve to make siblings more similar to each other. Indeed, Plomin and Daniels argued, once genetic relatedness has been taken into account, siblings seem to be hardly more similar than children chosen at random from the population.

In other words, despite a lifetime of proximity, your adopted child may bear no more similarity to you (in terms of, e.g., intelligence) than someone selected at random from the general population. The shared family environment that your children (biological or adopted) experience has little or no measurable effect on their cognitive development. While there are environmental effects on intelligence (the highest estimates of heritability for adult IQ are around .8, and some would argue for a lower value; see here for Turkheimer's work suggesting low heritability in the case of severe deprivation), they seem to be idiosyncratic factors that can't be characterized using observable parameters such as the parents' SES, parenting style, level of education, or IQ. It is as if each child experiences their own random micro-environment, independent of these parental or family characteristics.

The nonshared influences are by far the largest environmental (non-genetic) influences on intelligence -- in fact, they are the only detectable non-genetic influences. (From a review by Plomin; more recent overview here.)

Identical twins, whether raised together or apart, turn out to be very similar, but one still finds differences in IQ and personality. The cause of those differences must be the different environments experienced by the twins, but can't be characterized by simple variables of the sort listed above: it is not the case that the twin raised by the higher SES family has, on average, a much higher IQ! In fact, twins raised in the same family are about as similar as those raised apart, so family shared environment does not produce a large measurable influence. See below for a plausible model that accounts for such outcomes.
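The twin pattern described above is usually quantified with the classical ACE variance decomposition. The sketch below uses Falconer's formula; the 0.85 and 0.45 correlations are made up to look like typical adult IQ findings, not taken from any specific study.

```python
# Classical ACE decomposition from twin correlations (Falconer's formula).
# A = additive genetic variance, C = shared (family) environment,
# E = nonshared environment (plus measurement error).
r_mz = 0.85  # illustrative IQ correlation, identical twins reared together
r_dz = 0.45  # illustrative IQ correlation, fraternal twins reared together

A = 2 * (r_mz - r_dz)  # heritability estimate
C = r_mz - A           # shared-environment estimate
E = 1 - r_mz           # nonshared-environment estimate

print(f"A (genes)         = {A:.2f}")
print(f"C (shared env)    = {C:.2f}")
print(f"E (nonshared env) = {E:.2f}")
```

With inputs like these the shared-environment term C comes out near zero while the nonshared term E absorbs the rest of the non-genetic variance, which is exactly the qualitative pattern in the studies discussed above.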

By now these results are well understood and accepted by experts, but not by the general population or even policy makers. (See the work of Judith Rich Harris for a popular exposition.) The naive and still widely held expectation is that, e.g., high SES causes a good learning environment, leading to positive outcomes for children raised in such environments. However, the data suggest that what is really being passed on to the children is the genes of the parents, which are mainly responsible for, e.g., above average IQ outcomes in high SES homes (surprise! high SES parents actually have better genes, on average). Little or no positive effect can be traced to the SES variable for adopted children.

The implications are quite shocking, especially for two groups: high investment parents (because the ability of parents to influence their child's development appears limited) and egalitarians (because the importance of genes and the difficulty in controlling environmental effects seem to support the Social Darwinist position widely held in the previous century).

It is plausible to me that each child tends to create their own environment over time, by selectively seeking out or avoiding stimuli of various types. A bookish kid may end up at the library regardless of whether their father takes them there. An athletic kid may end up on the playground whether or not their mother takes them there. It has been argued that this effect is the reason that the heritability of IQ increases with age: over time, genetic influences assume greater importance as they cause the individual to create or seek out their preferred environment.

In a previous post I discussed individual cognitive profiles as described by an n-vector. Similarly, one could think of an individual's learning profile and learning environment as two more n-vectors. These n-vectors may or may not be well-matched, leading to outcomes with significant and hard to characterize variability. For example, one can imagine that both the environment (provided by parents, siblings, teachers and peers) and a particular child's reactions vary in each of the factors listed below.

Pressure and competition

Stimulation through stories and pretend play; flights of imagination

Ability to learn from repetition and drill / tendency to boredom

Isolated study vs group activities

Visual vs aural vs mechanical stimulation

Level of discipline or structure imposed

Close mentoring vs freedom of exploration

Abstraction vs experimentation

(One can think of many more.)

The factors listed are not intrinsically good or bad for learning -- what matters is whether the learning environment is matched to the nature of the individual child. Some react well to discipline or pressure or story telling, others do not. Further, none of the factors is obviously correlated with SES, parental education level or IQ. Even if they were, it's plausible that a child to some extent creates their learning environment outside the control of parents and teachers (e.g., through peer group or choice of play activities).

An individual whose learning vector (learning style) is well matched to their environment will thrive: the nonshared environmental component in their development will be large and positive. For others, the environment will have a smaller or even negative impact. Because both the learning vector and the environment vector vary in a many-dimensional space, and over time, prediction or control of the overall environmental effect on development is difficult.
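One can caricature this matching picture as a cosine similarity between two factor vectors. Everything below (the dimension, the vectors, the noise scale) is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of learning-style / environment factors

def match(learning, environment):
    """Alignment of a child's learning vector with an environment vector,
    expressed as a cosine similarity in [-1, 1]."""
    lv = learning / np.linalg.norm(learning)
    ev = environment / np.linalg.norm(environment)
    return float(lv @ ev)

child = rng.normal(size=n)
tuned_env = child + 0.2 * rng.normal(size=n)  # environment tuned to the child
random_env = rng.normal(size=n)               # typical untuned environment

print(match(child, tuned_env))   # close to +1: child thrives
print(match(child, random_env))  # near zero on average, can even be negative
```

In this toy picture the nonshared environmental effect is the alignment score, which is large and positive only when the two vectors happen to line up.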

Nonshared environmental contributions to development, which are the largest environmental contributions, are effectively random. They are not amenable to control, either by parents or policy makers. Note, this picture -- that each child creates their own environment, or experiences an effectively random one -- does not seem to support the hypothesis that observed group differences in cognitive ability are primarily of non-genetic origin. Nor does it suggest that any simple intervention (for example, equalizing average SES levels) will eliminate group differences. However, it's fair to say our understanding of these complex questions is limited.

Technical remark: if n is large, and factors uncorrelated, the observed environmental variation in a population will be suppressed as n^{-1/2} relative to the maximum environmental effect. That means that the best or worst case scenarios for environmental effect, although hard to achieve, could be surprisingly large. In other words, if the environment is perfectly suited to the child, there could be an anomalously large non-genetic effect, relative to the variance observed in the population as a whole. Of course, for large n these perfect conditions are also harder to arrange. (As a super-high investment parent I am actually involved in attempting to fine tune n-vectors ;-)
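A quick simulation of the n^{-1/2} claim; the uniform factor distribution and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each child's total environmental effect is the average of n uncorrelated
# factors, each bounded in [-1, 1]. The best possible environment scores +1
# on every factor regardless of n, but the population spread shrinks ~ n^-1/2.
for n in (4, 16, 64, 256):
    factors = rng.uniform(-1, 1, size=(100_000, n))
    effect = factors.mean(axis=1)
    predicted_sd = 1 / np.sqrt(3 * n)  # SD of uniform(-1,1) is 1/sqrt(3)
    print(n, round(float(effect.std()), 4), round(float(predicted_sd), 4))
```

So the gap between the typical environment and the perfectly tuned one grows like sqrt(n) population standard deviations, consistent with the remark above.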

Environmental effects cause regression to the mean of a child relative to the parental midpoint. Parents who are well above average likely benefited from a good match between their environment and individual proclivities, as well as from good genes. This match is difficult to replicate for their children -- only genes are passed on with certainty.
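The regression effect can be illustrated with the standard quantitative-genetics approximation, in which the expected child deviation from the population mean is the heritability times the midparent deviation. The h2 = 0.5 and the IQ numbers below are made up for illustration.

```python
# Expected regression to the mean under the midparent model:
#   E[child] = mean + h2 * (midparent - mean)
h2 = 0.5          # illustrative narrow-sense heritability
mean = 100        # population mean
midparent = 145   # average of two exceptional parents

expected_child = mean + h2 * (midparent - mean)
print(expected_child)  # 122.5: well above average, but below the parents
```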

Saturday, November 21, 2009

I get yelled at from all sides whenever I mention IQ in a post, but I'm a stubborn guy, so here we go again.

Imagine that you would like to communicate something about the size of an object, using as short a message as possible -- i.e., a single number. What would be a reasonable algorithm to employ? There's obviously no unique answer, and the "best" algorithm depends on the distribution of object types that you are trying to describe. Here's a decent algorithm:

Let rough size S = the radius of the smallest sphere within which the object will fit.

This algorithm allows a perfect reconstruction of the object if it is spherical, but isn't very satisfactory if the object is a javelin or bicycle wheel.

Nevertheless, it would be unreasonable to object to this definition as a single number characterization of object size, given no additional information about the distribution of object types.
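A toy version of the size algorithm. The point clouds are invented, and the centroid-based radius below is only an easy upper bound on the true smallest enclosing sphere (computing that exactly would need something like Welzl's algorithm).

```python
import numpy as np

def rough_size(points):
    """Radius of a sphere, centered at the centroid, containing all points.
    An upper bound on the smallest-enclosing-sphere radius S."""
    center = points.mean(axis=0)
    return float(np.linalg.norm(points - center, axis=1).max())

rng = np.random.default_rng(2)

# Points on a unit sphere: S ~ 1 and describes the object almost perfectly.
ball = rng.normal(size=(2000, 3))
ball /= np.linalg.norm(ball, axis=1, keepdims=True)

# A "javelin" 2 units long and 0.01 units thick: S is also ~ 1,
# even though the two objects look nothing alike.
javelin = np.column_stack([
    rng.uniform(-1, 1, 2000),
    rng.uniform(-0.005, 0.005, 2000),
    rng.uniform(-0.005, 0.005, 2000),
])

print(rough_size(ball), rough_size(javelin))
```

The single number S is identical for the two shapes, which is exactly the lossy-compression point: perfect for the sphere, nearly useless for the javelin.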

I suggest we think about IQ in a similar way.

Q1: If you had to supply a single number meant to characterize the general cognitive ability of an individual, how would you go about determining that number?

I claim that the algorithm used to define IQ is roughly as defensible for characterizing cognitive ability as the quantity S, defined above, is for characterizing object size. The next question, which is an empirical one, is

Q2: Does the resulting quantity have any practical use?

In my opinion reasonable people should focus on the second question, that of practical utility, as it is rather obvious that there is no unique or perfect answer to the first question.

To define IQ, or the general factor g of cognitive ability, we first define a battery of tests of cognitive ability -- tests which measure capabilities like memory, verbal ability, spatial ability, pattern recognition, etc. Of course this set of tests is somewhat arbitrary, just as the primitive concept "size of an object" is somewhat arbitrary (is a needle "bigger" than a thimble?). Let's suppose we decide on N different kinds of tests. An individual's score on this battery of tests is an N-vector. Sample from a large population and plot each vector in the N-dimensional space. We might find that the resulting points are concentrated on a submanifold of the N-dimensional space, such that a single variable (which is a special linear combination of the N coordinates) captures most of the variation. As an extreme example, imagine the points form a long thin ellipse with one very long axis; position on this long axis almost completely specifies the N-vector. (See these slides for more explanation and some figures.)

What I've just described geometrically is the case where the N mental abilities display a lot of internal correlation, and have a dominant single factor that arises from factor analysis. This dominant factor is what we call g. Note it did not have to be the case that there was a single dominant factor -- the sampled points could have had any shape -- but for the set of generally agreed upon human cognitive abilities, there is.
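The geometry can be simulated directly. Below, six hypothetical test scores are generated from a single common factor plus independent noise; the top eigenvalue of the correlation matrix then dominates, just as described. All the numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_tests = 5000, 6

# Toy single-factor model: score = loading * g + test-specific noise.
g = rng.normal(size=n_people)
loadings = rng.uniform(0.6, 0.9, size=n_tests)
scores = np.outer(g, loadings) + 0.5 * rng.normal(size=(n_people, n_tests))

# Eigenvalues of the correlation matrix, largest first. The first
# component -- the analogue of the long axis of the ellipse -- is the
# toy version of g.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(eigvals / eigvals.sum())
```

With these settings the first component carries well over half the total variance; had the tests been uncorrelated, each of the six eigenvalues would instead sit near 1/6.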

(What this implies about underlying brain wetware is an interesting question but would take us too far afield. I will mention that g, defined as above using cognitive tests, correlates with neurophysical quantities like reaction time! So it's at least possible that high g has something to do with generally effective brain function -- being wired up efficiently. It's now acknowledged even by hard line egalitarians that g is at least partly heritable, but for the purposes of this discussion we only require a weaker property -- that adult g is relatively stable.)

To summarize, g is the best single number compression of the N vector characterizing an individual's cognitive profile. (This is a lossy compression -- knowing g does not allow exact reconstruction of the N vector.) Of course, the choice of the N tests used to deduce g was at least somewhat arbitrary, and a change in tests results in a different definition of g. There is no unique or perfect definition of a general factor of intelligence. As I emphasized above, given the nature of the problem it seems unreasonable to criticize the specific construction of g, or to try to be overly precise about the value of g for a particular individual. The important question is Q2: what good is it?

A tremendous amount of research has been conducted on Q2. For a nice summary, see Why g matters: the complexity of ordinary life by psychologist Linda Gottfredson, or click on the IQ or psychometrics label link for this blog. Links and book recommendations here. The short answer is that g does indeed correlate with life outcomes. If you want to argue with me about any of this in the comments, please at least first read some of the literature cited above.

Personnel selection research provides much evidence that intelligence (g) is an important predictor of performance in training and on the job, especially in higher level work. This article provides evidence that g has pervasive utility in work settings because it is essentially the ability to deal with cognitive complexity, in particular, with complex information processing. The more complex a work task, the greater the advantages that higher g confers in performing it well.

... These conclusions concerning training potential, particularly at the lower levels, seem confirmed by the military’s last half century of experience in training many millions of recruits. The military has periodically inducted especially large numbers of “marginal men” (percentiles 10-16, or WPT 10-12), either by necessity (World War II), social experiment (Secretary of Defense Robert McNamara’s Project 100,000 in the late 1960s), or accident (the ASVAB misnorming in the early 1980s). In each case, the military has documented the consequences of doing so (Laurence & Ramsberger, 1991; Sticht et al., 1987; U.S. Department of the Army, 1965).

... all agree that these men were very difficult and costly to train, could not learn certain specialties, and performed at a lower average level once on a job. Many such men had to be sent to newly created special units for remedial training or recycled one or more times through basic or technical training.

Limitations and open questions:

1. Are there group differences in g? Yes, this is actually uncontroversial. The hard question is whether these observed differences are due to genetic causes.

2. Is it useful to consider sub-factors? What about, e.g., a 2 or 3-vector compression instead of a scalar quantity? Yes, that's why the SAT has an M and a V section. Some people are strong verbally, but weak mathematically, and vice versa. Some people are really good at visualizing geometric relationships, some aren't, etc.

3. Does g become less useful in the tail of the distribution? Quite possibly. It's harder and harder to differentiate people in the tail.

4. How stable is g? Adult g is pretty stable -- I've seen results with .9 correlation or greater for measurements taken a year apart. However, g measured in childhood is nowhere near a perfect predictor of adult g. If someone has a reference with good data on childhood/adult g correlation, please let me know.

5. Isn't g just the same as class or SES? No. Although there is a weak correlation between g and SES, there are obviously huge variations in g within any particular SES group. Not all rich kids can master calculus, and not all disadvantaged kids read below grade level.

6. How did you get interested in this subject? In elementary school we had to take the ITED (Iowa Test of Educational Development). This test had many subsections (vocabulary, math, reading, etc.) with 99th percentile ceilings. For some reason the teachers (or was it my parents?) let me see my scores, and I immediately wondered whether performance on different sections was correlated. If you were 99 on the math, what was the probability you were also 99 on the reading? What are the odds of all 99s? This leads immediately to the concept of g, which I learned about by digging around at the university library. I also found all five volumes of the Terman study.

7. What are some other useful compressed descriptions? It is claimed that one can characterize personality using the Big Five factors. The results are not as good as for g, I would say, but it's an interesting possibility, and these factors were originally deduced in an information theoretic way. Big Five factors have been shown to be stable and somewhat heritable, although not as heritable as g. Role playing games often use compressed descriptions of individuals (Strength, Dexterity, Intelligence, ...) as do NFL scouts (40 yd dash, vertical leap, bench press, Wonderlic score, ...) ;-)

It's a shame that I have to write this post at all. This subject is of such fundamental importance and the results so interesting and clear cut (especially for social science) that everyone should have studied it in school. (Everyone does take the little tests in school...) It's too bad that political correctness means that I will be subject to abuse for merely discussing these well established scientific results.

Why think about any of this? Here's what I said in response to a comment on this earlier post:

Intelligence, genius, and achievement are legitimate subjects for study. Anyone who hires or fires employees, mentors younger people, trains students, has kids, or even just has an interest in how human civilization evolved and will evolve should probably think about these questions -- using statistics, biography, history, psychological studies, really whatever tools are available.

Were this model to be true, one would expect with overwhelming probability to find that the vast majority of rich people have IQ around 120, but not much higher. This is because IQ is normally distributed: as you go further out the tail the population decreases exponentially. To be specific, IQ = 120 corresponds to the 90th percentile, whereas IQ = 135 is 99th percentile (i.e., only 1 in 10 people with IQ > 120 have IQ > 135) and IQ = 145 is 99.9th percentile (i.e., only 1 in 100 people with IQ > 120 have IQ > 145).
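These tail figures follow directly from the normal model. A quick check using only the standard library, with the usual IQ convention of mean 100 and SD 15:

```python
from math import erf, sqrt

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentile of an IQ score under a normal(mean, sd) model."""
    z = (iq - mean) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))

for iq in (120, 135, 145):
    print(iq, round(100 * iq_percentile(iq), 2))

# Conditional tails within the IQ > 120 group:
tail_120 = 1 - iq_percentile(120)
frac_135 = (1 - iq_percentile(135)) / tail_120  # about 0.11
frac_145 = (1 - iq_percentile(145)) / tail_120  # about 0.015
print(frac_135, frac_145)
```

The exact conditional fractions (roughly 1 in 9 above 135, and between 1 in 100 and 1 in 50 above 145) come out slightly larger than the round 1-in-10 and 1-in-100 figures quoted above, but they are of the same order and do not change the argument.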

Now let's look at the 2009 Forbes list of richest people in the world:

If the Igon Model were correct, we would not expect to find this list dominated by people with IQ much higher than 120. But in fact we do. Note these three made their money in different ways: Gates founded a software company, Buffett is primarily an investor, and Carlos Slim is an oligarch ;-)

Bill Gates scored 1580 on the pre-1995 SAT. His IQ is clearly >> 145 and possibly as high as 160 or so.

Warren Buffett graduated high school at 16 ranked in the top 5 percent of his class despite devoting substantial effort to entrepreneurial activities. Most people who know him well refer to him as brilliant, that folksy quote above notwithstanding. I would suggest the evidence is strong that his IQ is above 135, perhaps higher than 145.

Carlos Slim studied engineering and taught linear programming while still an undergraduate at UNAM, the top university in Mexico. He reportedly discovered the use of compound interest at age 10. I would suggest his IQ is also at least 135.

So it would appear that the three richest men in the world all have IQs that are higher than 90 percent or even 99 percent of the > 120 IQ population. (Relative to the general population they are all likely in the 99th or even 99.9th percentile.) The probability of this happening in the Igon Model is less than 1 in 1000.
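The 1-in-1000 figure is just the joint probability under independence, using the rounded conditional tail fraction quoted earlier:

```python
# Under the "Igon Model" null hypothesis, the three richest people are
# three independent draws from the IQ > 120 population, so the chance
# that all three also exceed IQ 135 is the cube of the conditional tail.
p_135_given_120 = 0.10  # rounded figure: 1 in 10 of the >120 group exceed 135
p_all_three = p_135_given_120 ** 3
print(p_all_three)  # about 0.001, i.e. 1 in 1000
```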

[Here's a basketball analogy: the analogous Igon Model for basketball would say height over 6ft2 (90th percentile) doesn't increase likelihood of success in basketball. Suppose we find the 3 top players in the world are 7ft (Shaq/Gates), 6ft8 (LeBron/Buffett) and 6ft6 (Kobe/Slim). That strongly disfavors the model, as a random draw of 3 people from the set of people over 6ft2 in height has almost zero probability of producing the 3 heights we found.]

Note to angry Gladwell egalitarians: don't take this analysis too seriously :-) It's really an example of "Igon analysis" in the spirit of MG!

There are many factors aside from intelligence that impact success in business or investing. See here for a discussion by money manager and investment theorist William Bernstein, which is very similar to what Buffett has said on various occasions. If you carefully study biographies of the three men listed above, what really stands out (aside from high mental ability) is their determination, drive and fascination with material success beginning at a young age. See also: success vs ability and creators and rulers.

What about the broader population? It's well established that graduates of elite universities earn more than graduates of less selective schools. But, interestingly, controlling for SAT score (IQ) largely eliminates the differential. I wonder why? (See also here for UT Austin data on earnings variation with SAT and major.)

Strangely we haven't heard much recently about impending gigantic Goldman bonuses. Once the issue hits the news radar again, I hope to see some detailed analyses of how, exactly, Goldman made its recent record profits.

At the link below you will find an analysis of Goldman's prop trading numbers for 2008 (not a good year), using the public records of its charitable Goldman Sachs Foundation. Thanks to a reader for sending this. I don't know how reliable this method is -- it all depends on whether GSF's records reflect the firm's overall trading pattern.

Zerohedge: ... Sometimes no capital is allocated to excluded strategies, but usually, and especially for product agnostic funds such as Goldman, each entity will be allowed its pro rata share based on the "fungible" capital that makes up the firm's entire Assets Under Management. Therefore, the GS Foundation ("GSF"), with its $270 million of capital at the beginning of 2008, would likely get its pro rata allocation as a percentage of the total capital backing the Goldman hedge fund (which can come from such places as Goldman Sachs Asset Management, and Goldman Sachs & Co., which in turn gets it funding via such taxpayer conduits as the Fed's repo operations and the Discount Window). So if Goldman for example had access to total capital of $50 billion last year (roughly), each trade, when allocated to GSF, would account for about half a percent (0.5%), absent special treatment, of the total capital invested or disposed. As an example, if Goldman were to trade $100 million notional in 10 year Index Swaps, GSF would thus be allocated about $500,000 of the trade.

Why is all this relevant?

Were one to comb through GSF's tax filings, one would uncover in 2007 over 500 pages worth of single-spaced trades, and over 200 in 2008, across absolutely every single asset class: equities, indices, futures, fixed income, currencies, credit swaps, IR swaps, FX, private equity, hedge fund investment, you name it (oddly absent are CDS trades). And this is in 2007 alone. These are a one-for-one proxy of absolutely every single trade that Goldman executed in its capacity as a prop trader in the last two years. The only question is what is the proration multiple to determine what the appropriate P&L for the entire firm would have been based on any one single trade allocated to GSF, and subsequently, disclosed in the foundation's tax forms.

... Yet what is obvious no matter how the data set is sliced and diced, is that the firm was bleeding money across virtually all prop-traded groups in 2008. Is it any wonder that the firm's only source of revenue is courtesy of i) the near-vertical treasury curve (thank you taxpayers) and ii) the ability to demand usurious margins on Fixed Income and other products from clients trading in bulk who have no other middleman choices.

Unfortunately some of the books listed below are hard to find, unless you have access to a good library.

Curzio Malaparte: The novels Kaputt and The Skin are worth reading, but The Volga Rises in Europe, which is a collection of dispatches from the Eastern Front, is priceless. His dispatches were censored, but have been collected with the author's additional comments.

Monday, November 16, 2009

Perhaps fittingly, the first use of "igon value" was in a profile of (then obscure) hedge fund philosopher Nassim Taleb. (See earlier post Pinker on Gladwell.)

New Yorker, April 22 & 29, 2002: [this version retrieved from gladwell.com] ... As the day came to an end, Taleb and his team turned their attention once again to the problem of the square root of n. Taleb was back at the whiteboard. Spitznagel was looking on. Pallop was idly peeling a banana. Outside, the sun was beginning to settle behind the trees. "You do a conversion to p1 and p2," Taleb said. His marker was once again squeaking across the whiteboard. "We say we have a Gaussian distribution, and you have the market switching from a low-volume regime to a high-volume. P21. P22. You have your igon value." He frowned and stared at his handiwork. The markets were now closed. Empirica had lost money, which meant that somewhere off in the woods of Connecticut Niederhoffer had no doubt made money. That hurt, but if you steeled yourself, and thought about the problem at hand, and kept in mind that someday the market would do something utterly unexpected because in the world we live in something utterly unexpected always happens, then the hurt was not so bad. Taleb eyed his equations on the whiteboard, and arched an eyebrow. It was a very difficult problem. "Where is Dr. Wu? Should we call in Dr. Wu?"

I doubt the New Yorker and its famous fact checkers caught the error. Possibly not a single New Yorker employee knows any linear algebra. Who needs all that geeky math stuff? [Update: Apparently the New Yorker did correct the electronic version now available on its site, although one can find references to the error online in 2003. See here and comments below for more.]

... a man whom Taleb refers to, somewhat mysteriously, as Dr. Wu wandered in. Dr. Wu works for another hedge fund, down the hall, and is said to be brilliant. He is thin and squints through black-rimmed glasses. He was asked his opinion on the square root of n but declined to answer. "Dr. Wu comes here for intellectual kicks and to borrow books and to talk music with Mark," Taleb explained after their visitor had drifted away. He added darkly, "Dr. Wu is a Mahlerian."

Sunday, November 15, 2009

Thanks to a reader for pointing out this Steve Pinker review of Malcolm Gladwell's latest collection in the Sunday Times. I had more or less stopped reading stuff on Gladwell, as the uncritical acceptance of many of his claims is just too depressing a reminder of the mediocrity of our commentariat.

An eclectic essayist is necessarily a dilettante, which is not in itself a bad thing. But Gladwell frequently holds forth about statistics and psychology, and his lack of technical grounding in these subjects can be jarring. He provides misleading definitions of “homology,” “sagittal plane” and “power law” and quotes an expert speaking about an “igon value” (that’s eigenvalue, a basic concept in linear algebra). In the spirit of Gladwell, who likes to give portentous names to his aperçus, I will call this the Igon Value Problem: when a writer’s education on a topic consists in interviewing an expert, he is apt to offer generalizations that are banal, obtuse or flat wrong.

...

The common thread in Gladwell’s writing is a kind of populism, which seeks to undermine the ideals of talent, intelligence and analytical prowess in favor of luck, opportunity, experience and intuition. For an apolitical writer like Gladwell, this has the advantage of appealing both to the Horatio Alger right and to the egalitarian left. Unfortunately he wildly overstates his empirical case. It is simply not true that a quarter­back’s rank in the draft is uncorrelated with his success in the pros, that cognitive skills don’t predict a teacher’s effectiveness, that intelligence scores are poorly related to job performance or (the major claim in “Outliers”) that above a minimum I.Q. of 120, higher intelligence does not bring greater intellectual achievements.

Malcolm Gladwell shows exquisite taste in the subjects he writes and talks about -- he has a nose for great topics. I just wish his logical and analytical capabilities were better ... My feeling is that Gladwell's work appeals most to people who can't quite understand what he is talking about.

What Pinker refers to as the major claim of Outliers (that IQ above 120 doesn't matter) is easily shown to be false. Randomly selected eminent scientists have IQs much higher than 120 and also much higher than the average science PhD (120-130); math ability within the top percentile measured in childhood is predictive of future success in science and engineering; and advanced education and a challenging career do not enhance adult IQs relative to childhood IQ.

So, accomplished scientists tend to have high IQs, and their IQs were already high before they became scientists -- the causality is clear. 10,000 hours of practice may be necessary but is certainly not sufficient to become a world class expert.

I recently remarked to a friend that many aspects of psychometrics which were well established by the 1950s now seem to have been completely forgotten due to political correctness. This leads to the jarring observation that recent social science articles (the kind that Gladwell is likely to cover) are sometimes completely wrongheaded (even contradicted by existing data of which the authors are unaware), whereas many 50-year-old articles are clearly reasoned and correct. The data I cite in the links above comes from the Roe study of eminent scientists and the Terman longitudinal study of gifted individuals, both of which were conducted long ago, and the SMPY longitudinal study of mathematically precocious youth, which is ongoing. I've interacted with many social scientists whose worldview is inconsistent with the established results of these studies, of which they are unaware.

Saturday, November 14, 2009

What should be the goals and responsibilities of a great university? Should it strive to maximize the future contributions of its graduates to humanity? Or should the university define its interests more narrowly, in terms of institutional prestige, social cachet and financial wealth?

Below are more excerpts from Jerome Karabel's The Chosen, an in-depth analysis of admissions at Harvard, Yale and Princeton in the 20th century. All of the excerpts are from Chapter 9: Wilbur Bender and his Legacy, which chronicles the late 1950's confrontation between elements of the Harvard faculty (often idealistic scientists), who wanted to place more emphasis on intellectual merit, and then Dean of Admissions Wilbur Bender, who was more narrowly focused on Harvard's institutional priorities. (If you find this post interesting, I highly recommend a look at the book. At the Google link above all of Chapter 9 is available.)

Although Karabel does an excellent job (see below) of characterizing the two sides of the argument, he does not examine the conflict in fundamental values between the scholar-scientists and Bender: the best and brightest for their future contributions to mankind, or the best for Harvard's future as an institution? To prepare "leaders" who will pursue power (some of which shall accrue, indirectly, to Harvard) or to prepare scientists and scholars who will create knowledge to be shared by all?

... Brinton, a former Rhodes Scholar with a broad historic and comparative perspective on higher education, posed a sharp question to clarify the issue at hand: "Do we want an Ecole Normale Superieure, a 'cerebral school' aimed solely at preparing students for the academic professions?" Bender's answer was a resounding no. But to Wilson the matter was not so clear: the basic issue was which students "could take advantage of the unique intellectual opportunity which Harvard has to offer." In a barb clearly aimed at Bender, Wilson proclaimed that "he just did not accept potential financial return ... as the basis for showing favoritism to Harvard sons who were less well qualified academically than other admission candidates."

Having been under assault by segments of the faculty for almost two years, at first by Holton, then by Kistiakowsky, and now by Wilson, Bender apparently decided that he had had enough. In a meeting of the committee a month after this testy exchange, he announced his resignation as dean of Admissions and Financial Aids, and stated that he would prefer neither to affix his signature to the final report of the subcommittee nor to withhold his vote of approval. His departure was set for July 1, 1960, and he agreed to continue to meet with the subcommittee until it completed its mission.

It is interesting that Ecole Normale Superieure (ENS) features so prominently in Harvard's internal discussions. Along with Ecole Polytechnique, ENS is at the pinnacle of the strictly meritocratic French system of higher education. (See earlier post: Les Grandes Ecoles.)

The University of Chicago is an example of a school that followed the rigorous, meritocratic path, and suffered a consequential decline in social cachet and financial standing. Idealism damaged Chicago's position in the competition against Harvard and others. As one realistic Harvard commenter noted, one needs to "admit the bottom 10 percent to continue to attract the top 10 percent" -- even the intelligentsia value the social cachet of their alma mater.

... In a pair of letters that constituted something of a manifesto for the wing of the faculty favoring strict academic meritocracy, Wilson explicitly advocated admitting fewer private school students and commuters, eliminating all preferences for athletes, and (if funds permitted) selecting "the entering class regardless of financial need on the basis of pure merit." The issue of athletes particularly vexed Wilson, who stated flatly: "I would certainly rule out athletic ability as a criterion for admission of any sort," adding that "it bears a zero relationship to the performance later in life that we are trying to predict." He also argued that "it may well be that objective test scores are our only safeguards against an excessive number of athletes only, rich playboys, smooth characters who make a good impression in interviews, etc." As a parting shot, Wilson could not resist accusing Ford of anti-intellectualism; citing Ford's desire to change Harvard's image, Wilson asked bluntly: "What's wrong with Harvard being regarded as an egghead college? Isn't it right that a country the size of the United States should be able to afford one university in which intellectual achievement is the most important consideration?"

E. Bright Wilson was professor of chemistry and member of the National Academy of Sciences, later a recipient of the National Medal of Science. The last quote from Wilson could easily have come from anyone who went to Caltech! Indeed, both E. Bright Wilson and his son, Nobel Laureate Ken Wilson (theoretical physics), earned their doctorates at Caltech (the father under Linus Pauling, the son under Murray Gell-Mann).

For Bender, who loved Harvard, and had devoted much of his life to it ... "whether our eventual goal for Harvard is an American Ecole Normale, or the nearest approach to it we can get." In Bender's reading, "it is implied, but not directly stated" that Harvard should emulate this model, which admits students purely on the basis of their performance on an exam and serves as a training ground for many of France's leading academics and intellectuals. Professors, in particular, were especially prone to take this view: "My guess is that many, perhaps most, of the faculty would support such a policy, and many would assume that the case for it was obvious and irrefutable."

To Bender, however, the vision of a freshman class selected solely on the basis of academic criteria was nightmarish. "Would we have a dangerously high incidence of emotional problems, of breakdowns and suicides? Would we get a high proportion of rather precious, brittle types, intellectuals in quotes, beatniks, etc.?" "Do we really want," he continued, "a college in which practically everyone was headed for a career as a scholar, scientist, college teacher or research doctor?"

For his purposes -- the narrow institutional interests of Harvard -- Bender was absolutely right. Filtering purely by intellectual merit (as opposed to using a broader set of criteria, and several categories under which students are admitted) would not maximize Harvard's influence in government or business, or its financial wealth. Again see earlier post: Creators vs Rulers.

Bender also had a startlingly accurate sense of how many truly intellectually outstanding students were available in the national pool. He doubted whether more than 100-200 candidates of truly exceptional promise would be available for each year's class. This number corresponds to (roughly) +4 SD in mental ability. Long after Bender resigned, Harvard still reserved only 10 percent of its places (roughly 150 spots) for "top brains". (See category "S" listed at bottom.)
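As a sanity check on Bender's estimate: assuming a normal distribution of ability and a US birth cohort of very roughly 4 million (both illustrative assumptions, not figures from the book), the +4 SD tail does come out in the low hundreds per year:

```python
import math

def fraction_above(z):
    """Upper-tail probability of a standard normal at z standard deviations."""
    return 0.5 * math.erfc(z / math.sqrt(2))

cohort = 4_000_000          # rough size of a US birth cohort (assumption)
p = fraction_above(4.0)     # about 3.2e-05
print(f"P(ability > +4 SD) = {p:.2e}")
print(f"expected count per cohort: {cohort * p:.0f}")  # roughly 130
```

So Bender's guess of 100-200 truly exceptional candidates per class is just what a +4 SD cutoff predicts for a cohort of this size.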

... To test his hypothesis that Harvard's most brilliant students were not its most "distinguished graduates," he [Bender] carried out his own study of exceptionally successful alumni.

The twenty-six men studied were a veritable Who's Who of the American elite: among them was a former secretary of defense, the president of Commonwealth Edison and Electric Bond and Share, the publisher of the Minneapolis Star and Tribune, the senior partner of Davis Polk, and (not least) the general chairman of the Program for Harvard College. Twenty-two were private school graduates, with St. Paul's (four) and Groton (three) leading a list of the nation's most elite boarding schools. These men had not compiled particularly distinguished academic records at Harvard; the majority of them had relatively poor grades. A casual inspection suggested "a much higher than average participation by the above in athletic and other extracurricular activities" -- precisely the kinds of students likely to be excluded by the Ecole Normale model.

Harvard much prefers that its graduates ascend to positions of power, as opposed to graduates of Stanford or Berkeley. But do the differences between these schools have any effect on the actual quality of leadership? Does it matter to the Nation? Whose interests are at stake?

Bender, above all, loved Harvard. Professors like E. Bright Wilson were, for better or worse, much more idealistic: looking far beyond their home institution, they held knowledge itself preeminent.

Typology used for all applicants, at least as late as 1988:

1. S First-rate scholar in Harvard departmental terms.

2. D Candidate's primary strength is his academic strength, but it doesn't look strong enough to qualify as an S (above).

3. A All-American: healthy, uncomplicated athletic strengths and style, perhaps some extracurricular participation, but not combined with top academic credentials.

Wednesday, November 11, 2009

... except that the Jews, unlike Asian Americans, made a fuss about it.

About The Chosen: ... But the admissions policies of elite universities have long been both tightly controlled and shrouded in secrecy. In The Chosen, the Berkeley sociologist Jerome Karabel lifts the veil on a century of admission and exclusion at Harvard, Yale, and Princeton. How did the policies of our elite schools evolve? Whom have they let in and why? And what do those policies say about America?

p. 76: ... Harvard, Yale, and Princeton thus faced a painful choice: either maintain the almost exclusively objective academic standards for admission and face the arrival of increasing numbers of Jews or replace them with more subjective criteria that could be deployed to produce the desired outcome. Their decision to do the latter was a great departure from their historic practices and bequeathed to us the peculiar admissions process that we now take for granted.

US News ... Translating the advantages into SAT scores, study author Thomas Espenshade, a Princeton sociologist, calculated that African-Americans who achieved 1150 scores on the two original SAT tests had the same chances of getting accepted to top private colleges in 1997 as whites who scored 1460s and Asians who scored perfect 1600s.

Espenshade found that when comparing applicants with similar grades, scores, athletic qualifications, and family history for seven elite private colleges and universities:

Whites were three times as likely to get fat envelopes as Asians. Hispanics were twice as likely to win admission as whites. African-Americans were at least five times as likely to be accepted as whites.

More from The Chosen below. OCR = Department of Education's Office of Civil Rights, which conducted an investigation of anti-Asian bias in Harvard admissions around 1990.

The Chosen, p.510: ... Asian Americans had the highest SATs of all [among groups admitted to Harvard]: 1450 out of a possible 1600. In 1991 the Asian-American/white admission ratio [ratio of percentages of applicants from each group admitted] stood at 84 percent -- a sharp downturn from 98 percent in 1990, when the scrutiny from OCR was at its peak. Though [this ratio] never dropped again to the 64 percent level of 1986, it never returned to its 1990 zenith. Despite Asian Americans' growing proportion of the national population, their enrollment also peaked in 1990 at 20 percent, where it more or less remained until 1994. ... by 2001 it had dropped below 15 percent.

So the "subjective but fair" measures used in admissions resulted in a record high admit rate for Asians during the year Harvard was under investigation by the federal government. But mysteriously the admit rate (relative to that of white applicants) went down significantly after the investigation ended, and the overall Asian enrollment has not increased despite the increasing US population fraction of Asians.

Saturday, November 07, 2009

I will be participating in a public Q&A session with Freeman Dyson later this term. Any reader of this blog will know that I'm an admirer of both his work in theoretical physics and his popular writing. (Related posts here.) In preparing for the event, I've been reading and re-reading all sorts of things by and about Dyson. Below is something I found quite striking:

Disturbing the Universe: ... In that spring of 1948 there was another memorable event. Hans [Bethe] received a small package from Japan containing the first two issues of a new physics journal. Progress of Theoretical Physics, published in Kyoto. The two issues were printed in English on brownish paper of poor quality. They contained a total of six short articles. The first article in issue No. 2 was called "On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields," by S. Tomonaga of Tokyo University. Underneath it was a footnote saying, "Translated from the paper . . . (1943) appeared originally in Japanese." Hans gave me the article to read. It contained, set out simply and lucidly without any mathematical elaboration, the central idea of Julian Schwinger's theory. The implications of this were astonishing. Somehow or other, amid the ruin and turmoil of the war, totally isolated from the rest of the world, Tomonaga had maintained in Japan a school of research in theoretical physics that was in some respects ahead of anything existing anywhere else at that time. He had pushed on alone and laid the foundations of the new quantum electrodynamics, five years before Schwinger and without any help from the Columbia experiments. He had not, in 1943, completed the theory and developed it as a practical tool. To Schwinger rightly belongs the credit for making the theory into a coherent mathematical structure. But Tomonaga had taken the first essential Step. There he was, in the spring of 1948, sitting amid the ashes and rubble of Tokyo and sending us that pathetic little package. It came to us as a voice out of the deep.

A few weeks later, Oppy received a personal letter from Tomonaga describing the more recent work of the Japanese physicists. They had been moving ahead fast in the same direction as Schwinger. Regular communications were soon established. Oppy invited Tomonaga to visit Princeton, and a succession of Tomonaga's students later came to work with us at Princeton and at Cornell. When I met Tomonaga for the first time, a letter to my parents recorded my immediate impression of him: "He is more able than either Schwinger or Feynman to talk about ideas other than his own. And he has enough of his own too. He is an exceptionally unselfish person." On his table among the physics journals was a copy of the New Testament.

Ironically, Schweber, in his magisterial book QED and the Men Who Made It, argues that Dyson deserved a share of the Nobel awarded to Feynman, Schwinger and Tomonaga, and somewhat downplays the role of Tomonaga.

Below is a list of questions I am considering for Dyson (I doubt he'll see it beforehand; does he read my blog? :-). Any suggestions are welcome!

You've written about how depressed you became over your war work analyzing Allied strategic bombing. Yet later you were a Jason, doing top secret military work for the US government. Could you talk about those two experiences, and your opinion about scientists working on weapons and advising the military?

Of the bomb designer turned disarmament activist Ted Taylor, who was the subject of a book called The Curve of Binding Energy, you once said "Very few people have Ted's imagination. ... I think he is perhaps the greatest man that I ever knew well. And he is completely unknown." Could you tell us more about Taylor?

You had a close association with many of the giants of the past -- Feynman, Dirac, Oppenheimer, Bethe. How do you compare them to the best people working today? Would they still be giants?

You advised Francis Crick, while he was still a physicist, that moving into biology might be premature. You thought that biology would eventually be more interesting than physics, but that Crick was too early. What would you be working on today if you were 25 years old?

You wrote that since childhood, some part of you had always known that the “Americans held the future in their hands and that the smart thing for me to do would be to join them.” Do Americans still hold the future in their hands, or will the future be made somewhere else -- for example in Asia or once again in Europe?

You've proposed that genetic engineering might be used for many purposes, from green energy to adapting humans for life in space. What about engineering ourselves for greater intelligence; could that be the next leap forward in human evolution?

You were at Princeton when Everett proposed his "Many Worlds" interpretation of quantum mechanics. Could you describe the reaction to his ideas then (including your own), and your present opinion? Any thoughts on the foundational questions of quantum mechanics?

How well did Feynman understand Second Quantization (or the idea of a quantum field) when he developed his approach to QED? At what point did he really understand the Schwinger / Tomonaga approach?

How much did Dirac understand about the path integral formulation of quantum mechanics before Feynman came along? Feynman was inspired by a formula in one of Dirac's papers, but has claimed that Dirac later acknowledged not knowing whether or how the analogy between amplitude and exponential of action could be made into an equality. Do you have any insight on this?

Friday, November 06, 2009

Take a population of children with similar (high) IQ scores. Follow them for the next 40 years. Consider two subsets:

Group A, those who obtain advanced (e.g., graduate) education and achieve exceptional career success as scientists, professionals or business leaders, and

Group C, another group that end up in less (intellectually) challenging jobs and with much less formal education (often no more than a high school diploma).

On the re-tests of these adults, did Group A outperform Group C relative to their childhood scores? No.

What does this mean? Enrichment is again seen as unlikely to drastically alter cognitive ability. An 1150 SAT kid is not going to become a 1460 or 1600 kid as a result of their college education. Yes, those funny little tests are measuring something real and relatively stable.
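A minimal sketch of why high test-retest stability makes this outcome unsurprising: modeling two testings as a bivariate normal with an assumed correlation of 0.9 (the SAT-style scale and the correlation are illustrative assumptions, not Terman data), the expected retest of an 1150 scorer stays near 1150, nowhere near 1460:

```python
import math
import random
import statistics

random.seed(0)
MEAN, SD, R = 1000, 200, 0.9   # assumed score scale and test-retest correlation

def paired_scores(n):
    """Draw (first, retest) score pairs from a bivariate normal with correlation R."""
    pairs = []
    for _ in range(n):
        z1 = random.gauss(0, 1)
        z2 = R * z1 + math.sqrt(1 - R * R) * random.gauss(0, 1)
        pairs.append((MEAN + SD * z1, MEAN + SD * z2))
    return pairs

pairs = paired_scores(200_000)
# average retest score among those who first scored near 1150
retests = [y for x, y in pairs if 1140 <= x <= 1160]
print(f"mean retest among ~1150 scorers: {statistics.mean(retests):.0f}")  # ~1135
```

In this toy model the expected retest is MEAN + R * (1150 - MEAN) = 1135; intervening enrichment would have to break the correlation itself to move someone to 1460.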

These results have been known for many years, thanks to the Terman study of 1,538 gifted individuals (see here and here). Note that overall the "Termites" tended to be very successful in life; we focus on the most and least successful outliers (Groups A and C) to test the effect of enrichment on cognitive ability.

Terman Study, volume 5: ... Perhaps the most direct way of determining whether the higher and lower occupational groups differed in the way they changed from childhood to young adulthood and then to middle age is to compare the average rank order of [each group] ... at these three stages of life. There was no difference in the amount of change in intellectual performance (IQ) to young or middle adulthood or in Concept Mastery score over the decade from early to later middle age. We must conclude that ... there was no tendency for the lower-level group to slip downwards during their careers.

Stanford alumni magazine: ... the 100 most successful and 100 least successful men in the group, defining success as holding jobs that required their intellectual gifts. The successes, predictably, included professors, scientists, doctors and lawyers. The non-successes included electronics technicians, police, carpenters and pool cleaners, plus a smattering of failed lawyers, doctors and academics. But here's the catch: the successes and non-successes barely differed in average IQ. [All Termites had high childhood IQs as a consequence of the selection process.] The big differences turned out to be in confidence, persistence and early parental encouragement.

10,000 hours of practice won't make you a genius, but being good at something might make you more likely to pursue it for 10,000 hours!

A whistleblower report by UCLA professor Timothy Groseclose details administration attempts to circumvent Proposition 209. If you read this report you will have a good idea of the micro-level dynamics and political economy of UC admissions. The title is pretty strong: Report on suspected malfeasance in UCLA admissions and the accompanying cover-up; let's see how long Groseclose retains the Marvin Hoffenberg Chair of American Politics :-/

Tuesday, November 03, 2009

New paper! This is a followup to our earlier work 0805.0145, which characterized the size of uncertainties in coupling constant unification due to unknown short distance physics such as quantum gravity. In the new paper we show that non-supersymmetric SU(5) and SO(10) models can be made to unify for reasonable (natural) sizes of short distance effects. This raises the question of whether successful unification in supersymmetric models should be taken as strong evidence in favor of low-energy supersymmetry, as has been argued.

We systematically study the unification of gauge couplings in the presence of (one or more) effective dimension-5 operators $c\,H\,G_{\mu\nu}G^{\mu\nu}/4M_{Pl}$, induced in the grand unified theory by gravitational interactions at the Planck scale $M_{Pl}$. These operators alter the usual condition for gauge coupling unification, which can, depending on the Higgs content $H$ and vacuum expectation value, result in unification at scales $M_X$ significantly different than naively expected. We find non-supersymmetric models of SU(5) and SO(10) unification, with natural Wilson coefficients $c$, that easily satisfy the constraints from proton decay. Furthermore, gauge coupling unification at scales as high as the Planck scale seems feasible, possibly hinting at simultaneous unification of gauge and gravitational interactions. In an appendix we work out the group theoretical aspects of this scenario for SU(5) and SO(10) unified groups in detail; this material is also relevant in the analysis of non-universal gaugino masses obtained from supergravity.

From the introduction to the paper:

What are the boundary conditions for grand unification? One typically assumes that the gauge couplings of the broken subgroups must become numerically equal at the unification scale $M_X$ [1]. However, effects from physics above the unification scale can alter the gauge coupling unification condition. In an effective field theory approach, such effects can be caused by dimension-5 operators of the form $c\,H\,G_{\mu\nu}G^{\mu\nu}/4M_{Pl}$, which shift the coefficients of the gauge kinetic terms in the low-energy theory after the Higgs $H$ acquires a vacuum expectation value in grand unified symmetry breaking [2, 3]; one obvious source of such operators is quantum gravitational effects. Indeed, it would be unnatural (or require some special explanation) to assume that the Wilson coefficients $c$ above be zero or especially small [4]; the default assumption should be that these coefficients are of order unity in grand unified models, with consequent unification conditions.
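Schematically, once $H$ acquires a vev the shifted kinetic terms change the matching at $M_X$. A sketch of the altered boundary condition (with $\delta_i$ standing in for the group-theory coefficients that depend on the Higgs representation; this is my shorthand, not the paper's notation):

$$\frac{1}{g_i^2(M_X)}\,\bigl(1+\epsilon\,\delta_i\bigr)=\frac{1}{g_X^2},
\qquad \epsilon \sim \frac{c\,\langle H\rangle}{M_{Pl}} ,$$

so instead of the three $1/g_i^2$ meeting exactly, they need only meet up to small representation-dependent offsets of order $\epsilon$.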

In conventional unification models, one might expect $\langle H \rangle \sim 10^{16}$ GeV, plausibly leading to effects from quantum gravity of order a fraction of a percent, $\langle H \rangle / M_{Pl} \sim 10^{-3}$, on the gauge coupling unification condition. In [5] we showed that these dimension-5 operators can be even more relevant than previously suspected, since the Planck mass $M_{Pl}$ tends to be smaller than naively assumed due to its renormalization group evolution [6, 7] under the influence of the large number of fields in supersymmetric grand unified theories. It was noted [5] that these dimension-5 operators introduce in supersymmetric unification models an uncertainty that can be bigger than the two-loop effects which are considered to be necessary to obtain good numerical unification of the gauge couplings.

The aim of this paper is different. We study whether the dimension-5 operators discussed above can lead to perfect gauge coupling unification without supersymmetry by modifying the gauge coupling unification condition. This unification scheme has been studied previously in the literature for models with and without supersymmetry, e.g. in [2, 3, 5, 8-13], but in less detail and generality, and mostly only the effect from a single gravitational operator has been considered. ...
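To get a feel for the numbers, here is a rough one-loop sketch (not the paper's calculation; the inverse couplings at $M_Z$ are approximate textbook values): the three Standard Model couplings fail to meet at a single scale, and the spread in $1/\alpha$ near $10^{15}$ GeV sets the size of the dimension-5 shifts needed to repair the mismatch.

```python
import math

MZ = 91.19  # GeV
# One-loop SM beta coefficients (GUT-normalized U(1)_Y) and rough 1/alpha_i at MZ
b      = {"a1": 41 / 10, "a2": -19 / 6, "a3": -7.0}
inv_MZ = {"a1": 59.0,    "a2": 29.6,    "a3": 8.47}

def inv_alpha(name, mu):
    """One-loop running of 1/alpha_i from MZ up to scale mu (GeV)."""
    return inv_MZ[name] - b[name] / (2 * math.pi) * math.log(mu / MZ)

def crossing_scale(i, j):
    """Scale at which couplings i and j become equal at one loop."""
    t = (inv_MZ[i] - inv_MZ[j]) * 2 * math.pi / (b[i] - b[j])
    return MZ * math.exp(t)

for i, j in [("a1", "a2"), ("a1", "a3"), ("a2", "a3")]:
    print(f"{i}-{j} cross at ~{crossing_scale(i, j):.1e} GeV")

# The dimension-5 operator shifts 1/alpha_i fractionally by ~ <H>/M_Pl per unit
# Wilson coefficient; the spread below is the mismatch those shifts must absorb.
mu = 1e15
spread = max(inv_alpha(n, mu) for n in inv_MZ) - min(inv_alpha(n, mu) for n in inv_MZ)
print(f"spread of 1/alpha at 1e15 GeV: {spread:.1f}")
```

The pairwise crossings land at three different scales (roughly $10^{13}$ to $10^{17}$ GeV), which is the well-known non-SUSY failure of unification; the paper's point is that natural-sized Wilson coefficients, especially with large Higgs representations, can close exactly this kind of gap.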

Sunday, November 01, 2009

Caption: Left to right: Ludwig Prandtl (German scientist), Qian Xuesen, Theodore von Kármán. Prandtl served Germany during World War II; von Kármán and Qian served [in the] US Army; after 1956, Qian served China. Notice that at that time Qian held US Army rank. Interestingly, Prandtl was doctoral advisor to von Kármán, and von Kármán was doctoral advisor to Qian. (Picture and caption from this Wikipedia entry.)

"It was the stupidest thing this country ever did," former Navy Secretary Dan Kimball later said, according to Aviation Week. "He was no more a Communist than I was, and we forced him to go."

My father, also a professor of aerospace engineering, was an admirer of Qian and of von Karman. He was quite pleased that I decided to attend Caltech as an undergraduate -- although he would have preferred, for practical reasons, that I study EE or CS rather than theoretical physics!

Deported in 1955 on suspicion of being a Communist, the aeronautical engineer educated at Caltech became known as the father of China's space and missile programs.

November 1, 2009

Qian Xuesen, a former Caltech rocket scientist who helped establish the Jet Propulsion Laboratory before being deported in 1955 on suspicion of being a Communist and who became known as the father of China's space and missile programs, has died. He was 98.

Qian, also known as Tsien Hsue-shen, died Saturday in Beijing, China's state news agency reported. The cause was not given.

Honored in his homeland for his "eminent contributions to science," Qian was credited with leading China to launch intercontinental ballistic missiles, Silkworm anti-ship missiles, weather and reconnaissance satellites and to put a human in space in 2003.

The man deemed responsible for these technological feats also was labeled a spy in the 1999 Cox Report issued by Congress after an investigation into how classified information had been obtained by the Chinese.

Qian, a Chinese-born aeronautical engineer educated at Caltech and the Massachusetts Institute of Technology, was a protege of Caltech's eminent professor Theodore von Karman, who recognized him as an outstanding mathematician and "undisputed genius."

Qian's research contributed to the development of "jet-assisted takeoff" technology that the military began using in the 1940s.

He was the founding director of the Daniel and Florence Guggenheim Jet Propulsion Center at Caltech and a member of the university's so-called Suicide Squad of rocket experimenters who laid the groundwork for testing done by JPL.

But his brilliant career in the United States came to a screeching halt in 1950, when the FBI accused him of being a member of a subversive organization. Qian packed up eight crates of belongings and set off for Shanghai, saying he and his wife and two young children wanted to visit his aging parents back home. Federal agents seized the containers, which they claimed contained classified materials, and arrested him on suspicion of subversive activity.

Qian denied any Communist leanings, rejected the accusation that he was trying to spirit away secret information and initially fought deportation. He later changed course, however, and sought to return to China.

Five years after his arrest, he was shipped off in an apparent exchange for 11 American airmen captured during the Korean War.

"I do not plan to come back," Qian told reporters. "I have no reason to come back. . . . I plan to do my best to help the Chinese people build up the nation to where they can live with dignity and happiness."

Welcomed as a national hero in China, where the Communist regime had defeated the Nationalist forces, Qian became director of China's rocket research and was named to the Central Committee of the Communist Party. China, whose scientific development lagged during the Communist revolution, quickly began making strides.

Qian was born in the eastern city of Hangzhou, and in 1934 graduated from Jiaotong University in Shanghai, where he studied mechanical engineering. He won a scholarship to MIT and, after earning a master's degree in aeronautical engineering there, continued his doctoral studies at Caltech.

He taught at MIT and Caltech and, having received a security clearance, served on the Scientific Advisory Board that advised the U.S. military during and after World War II.

Sent to Germany to interrogate Nazi scientists, Qian interviewed rocket scientist Wernher von Braun. As the trade magazine Aviation Week put it in 2007, upon naming Qian its person of the year, "No one then knew that the father of the future U.S. space program was being quizzed by the father of the future Chinese space program."

Qian returned to Caltech in 1949 and a year later faced the accusation by two former members of the Los Angeles Police Department's "Red Squad" that he was a card-carrying member of the Communist Party.

He admitted that while a graduate student in the 1930s he had been present at social gatherings organized by colleagues who also were accused of party membership, but he denied any political involvement.

Few can agree on the question of whether Qian was a spy. An examination of the papers Qian packed away failed to turn up any classified documents. Colleagues at Caltech firmly stood behind him, and he continued to do research there after he lost his security clearance. In fact, the university gave him its distinguished alumni award in 1979 in recognition of his pioneering work in rocket science.

Although federal officials started deportation procedures in 1950, he was prevented from leaving the country because it was decided that he knew too much about sensitive military matters that could be of use to an enemy.

For years, Qian was in a sort of limbo, being watched closely by the U.S. government and living under partial house arrest. Eventually he quit fighting his expulsion and actively worked to return to China. Some associates said that he was insulted because his loyalty to this country was questioned and that he initially wanted to clear his name.

Once he returned home in 1955, he threw himself into his research with what some saw as calculated revenge.

"It was the stupidest thing this country ever did," former Navy Secretary Dan Kimball later said, according to Aviation Week. "He was no more a Communist than I was, and we forced him to go."