About Rationally Speaking

Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.

Tuesday, January 15, 2013

Rationally Speaking podcast: Intelligence and Personality Testing

What's your IQ? Are you an ENTJ, or maybe an ISFP? What's your Openness score, your Conscientiousness score, your Neuroticism score? And just how seriously should you take all those test scores, anyway?

In this episode of Rationally Speaking, Massimo and Julia discuss the science - and lack thereof - of intelligence and personality testing.

13 comments:

I bet NT's are overrepresented here. See what I did there? The value of personality tests is in how well they correlate with behaviors and outcomes. Doesn't matter if the test is how you sign your name or how you eat an Oreo cookie, as long as it predicts something of interest.

Picture-based IQ tests like Raven's Progressive Matrices test pattern recognition and abstract thinking by making you find the simplest rule that describes two patterns, and apply it to a third pattern. I suspect that playing video games like Tetris and creating computer graphics would improve one's score.

Overwhelming majority? That's surprising. Over-representation makes sense, but considering INTJs and INTPs are only 10% of the population, if they were really in the *majority* that would be kind of shocking.

For comparison with a similar demographic, 26% of LWers are INTJ or INTP (including myself).

So far on this thread it's three for three, because really, who comments on blogs about rationality? I wrote "majority" at first, but decided to play it safe with "overrepresented." Don't know about Internet Infidels, but I'd expect more diversity there.

At some point (or now, probably) they will data-mine the internet using statistical learning and be able to predict all sorts of stuff about us based on who-knows-what sort of breadcrumbs we leave behind. Just like 20Q.

You correctly point out the lack of theoretical underpinning for the personality theories you discussed. But the theory may come at the level of the individual trait or trait cluster rather than for personality as a whole. For example, psychologist Jerome Kagan has correlated emotional reactivity in infancy (specifically, distress in response to novel situations) with an “inhibited” temperament later in life. He does not propose a hard determinism and acknowledges the role of environmental influences in shaping personality. But his work suggests, among other things, that highly reactive infants are unlikely to develop into extreme extroverts. Other traits may be more difficult to study in a similar way, but at least in this case it seems possible to conceive of a personality feature as originating in a physiologic response.

We live in an odd time in which there is a widely used scheme for classifying personality disorders, but no widely accepted theory of normal personality. Without a standardized vocabulary to describe the varieties of human behavior in non-pathologic terms, there’s a natural tendency to over-apply labels like “Asperger’s-like” or “frontal” or “borderline” to people who seem eccentric to us, even if we grant a continuum from the normal to the quirky to the disordered. Then there’s the age-old problem of simply failing to understand those who don’t match the ideal of our own personalities.

Trait theories of personality (or temperament), for all their flaws, attempt to provide such a vocabulary and implicitly encourage an attitude of acceptance of individual differences, as well as a reluctance to “diagnose” if there are more neutral ways to describe someone. That may explain in part why they are so popular. Obviously we shouldn’t endorse unsupported ideas simply because there might be some sociological benefits. But we could certainly benefit from having better theories of personality.

Late in the podcast you mention that it is quite reasonable to assume different environmental factors explain differences in test results, but dismiss out of hand the possibility that categories like race or gender could as well.

I sympathise with the motives, but the podcast is called 'Rationally Speaking', so we can't just dismiss conclusions we don't like without reason. If the test-result differences were caused by questions of the 'missing tennis net' sort, it would be reasonable to dismiss them. However, obvious clangers like that have been excluded for a long time now, and still the massive margins persist. To conclude that the groups that do poorly should have extra resources dedicated to them, over and above the rest, in the hope that the Flynn effect will average everyone out, is really not all that reasonable either. What if all groups got the same amount of extra attention? Would they all see IQs rising, i.e. would the gap remain?

I think it is surprising how hard it is for us to shake off our blank slate premises when, say, no one would seriously apply the same premises when it comes to physical attributes of different gender/race.

> What if all groups got the same amount of extra attention? Would they all see IQs rising, i.e. would the gap remain? <

It's an empirical question, and the evidence leans toward a negative answer. Even if the answer were positive, it would still mean that every group benefits from better education, contra the conclusions often reached by right-wingers sympathetic to a strong notion of genetic determinism.

It's not accurate to say that g was designed as a common factor across "IQ tests", as though the tests came first. If anything, causation went the other way. Historically, a huge range of individual questions was tried out, a single unexpectedly large common factor (g) was noted even across very dissimilar items, and IQ tests were then _designed_ by choosing those questions that loaded most strongly onto g. Or, since as best I can see the more common practice is to look for several statistical factors rather than one, questions were also added to play up secondary statistical factors (quantitative, verbal, etc.) and gather as much independent information on them as possible.
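The "unexpectedly large common factor" described above can be illustrated with a minimal sketch (hypothetical data, numpy only, not any real test battery): simulate scores on a battery of tests that all partly tap one latent ability, then look at the eigenvalues of their correlation matrix. The first component dwarfs the rest, which is the statistical signature usually labeled g.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 5000, 8

# One latent ability shared by all tests (the assumed "g" in this toy model).
g = rng.normal(size=n_people)

# Each test loads on g with a different (hypothetical) strength.
loadings = rng.uniform(0.5, 0.9, size=n_tests)

# Score = g-loaded part + independent noise, scaled so each test has unit variance.
noise = rng.normal(size=(n_people, n_tests))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

# Correlate the tests with each other and inspect the eigenvalue spectrum.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

print(eigvals.round(2))            # one large eigenvalue, the rest near 1 or below
print(eigvals[0] / eigvals.sum())  # share of total variance in the first component
```

Item selection as described in the comment then amounts to keeping the questions whose column in `loadings` is largest, which is exactly why modern tests correlate so strongly with the factor they were built around.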

If you are going to question whether g measures "something real", you should really mention the correlation with reflex speed. That seemed a striking omission, given which way you were leaning.

And if you want to enlighten people about the Flynn effect, please mention that the evidence is consistent with it operating exclusively at the lower end of the IQ distribution. This seems a very important factual point.

Good point about the asymmetrical operation of the Flynn effect. I'm not sure what reflex speed has to do with this, though. As for the history of IQ testing, my understanding is that the first talk of a factor-analytic common g factor was the result of correlating performance on a number of IQ tests. I'm sure that once that happened, *further* development of IQ testing relied on correlation with the (alleged) g factor.

The correlation of g with reaction time, even for tasks too simple to require "intelligence", is pertinent to the hypothesis that g might index total cognitive power = circuits * speed. g also correlates with brain volume. These findings are reviewed in http://www.nature.com/ejhg/journal/v14/n6/full/5201588a.html

Yes, there were some kinds of IQ tests before there was g, but they are best thought of as proto-IQ tests. Modern IQ test items have been selected with a view to their factor loadings, with a bias towards discarding obviously culturally-dependent items EVEN IF they load strongly. If you want to know what a modern IQ test is, you should discuss how the questions were chosen. When you said on the podcast that "g measures correlations between IQ tests", there was no reason not to assume you were talking about modern IQ tests, and under that assumption the statement describes a circularity and hence makes no sense.

The bit about reaction time is interesting, since as you say it would indicate that "g" - whatever it is - measures something basic, which may in turn be correlated with "intelligence" (whatever *that* is).

As for the IQ vs g issue, I'm afraid we are still faced with circularity: original IQ tests > g as their correlate > new IQ tests correlated with g.

As you can see, I'm pretty skeptical of statistical reification in general, and of the idea that something as multifaceted as intelligence can be captured by a simple linear scale.

I agree that there was a circularity, but it still needs to be noted that modern IQ tests are designed to measure g; this is still a better, if incomplete, summary of affairs and their causation than saying that g measures the correlation between different contemporary IQ tests.

Internal evidence, for what little it is worth, comes from the fact that such a large single component defies the expectations of many, including yourself. As a counterintuitive result, it was clearly not an a priori goal. A huge amount of effort has gone into designing tests that will yield more factors of decent size. The amount of success (not zero) as a function of the amount of effort on the part of people who shared your skepticism that intelligence could be collapsed to one dimension, is uninspiring. Still, if you think intelligence is multifaceted, maybe you should have discussed the secondary factors that have been found? There are interesting stories there.

External validation, rather more importantly, comes from the correlation of g with reaction time, brain size, birth weight, probability of admission to hospital as an accident victim etc. etc.