Author: Dalliard

Given the central role that testing plays in the American educational system, most datasets that we have on racial and ethnic differences in cognitive ability include only children, adolescents, or young adults. Most of the economic and social effects of cognitive differences are, however, produced by the working-age population, so it would be useful to have test scores from older adults as well. The PIAAC survey of adult skills conducted by the OECD provides excellent data for this purpose.

Regression to the mean, RTM for short, is a statistical phenomenon which occurs when a variable that is in some sense unreliable or unstable is measured on two different occasions. Another way to put it is that RTM is to be expected whenever there is a less than perfect correlation between two measurements of the same thing. The most conspicuous consequence of RTM is that individuals who are far from the mean value of the distribution on first measurement tend to be noticeably closer to the mean on second measurement. As most variables aren’t perfectly stable over time, RTM is a more or less universal phenomenon.

In this post, I will attempt to explain why regression to the mean happens. I will also try to clarify some common misconceptions about it, such as why RTM does not make people more average over time. Much of the post is devoted to demonstrating how RTM complicates group comparisons, and what can be done about it. My approach is didactic and I will repeat myself a lot, but I think that’s warranted given how often people are misled by this phenomenon.
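Both points above can be seen in a minimal simulation. The sketch below uses made-up numbers (a hypothetical test-retest correlation of 0.6 and a 2 SD selection cutoff, not values from any study): each score is a stable true component plus occasion-specific noise, scaled so both measurements have unit variance and correlate at r.

```python
import random

random.seed(42)
N = 100_000
r = 0.6  # assumed test-retest correlation (illustrative, not from any dataset)

# Each measurement = stable true score + occasion-specific noise, scaled so
# that both measurements have unit variance and correlate at r.
scores = []
for _ in range(N):
    true = random.gauss(0, r ** 0.5)
    t1 = true + random.gauss(0, (1 - r) ** 0.5)
    t2 = true + random.gauss(0, (1 - r) ** 0.5)
    scores.append((t1, t2))

# Individuals far above the mean at time 1 (> 2 SD)...
tail = [(t1, t2) for t1, t2 in scores if t1 > 2]
mean_t1 = sum(t1 for t1, _ in tail) / len(tail)
mean_t2 = sum(t2 for _, t2 in tail) / len(tail)
print(f"tail mean at time 1: {mean_t1:.2f}")  # around 2.4
print(f"tail mean at time 2: {mean_t2:.2f}")  # around r * 2.4, much closer to 0

# ...yet the overall spread does not shrink: the time-2 distribution is as
# wide as the time-1 distribution, so the population is no more "average".
def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"SD at time 1: {sd([t1 for t1, _ in scores]):.2f}")  # ~1.0
print(f"SD at time 2: {sd([t2 for _, t2 in scores]):.2f}")  # ~1.0
```

The selected tail regresses toward the mean on remeasurement, while the marginal distributions keep the same spread, because different individuals occupy the tails on each occasion.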

In the more than three years of its existence, this blog has published about 110 posts. While posting has unfortunately been light here recently, the upside of our data- and analysis-heavy format is that posts rarely lose their relevance with time, making our archives well worth perusing.

To help readers search through our archives, below is a list of what I consider some of the best content we’ve published. These are not necessarily our most popular posts, but I think they offer a good introduction to human biodiversity, and in particular to our perennial favorite topic, IQ differences between groups. The list is in order of original publication.

In the classic twin study design, identical (MZ) twin pairs are compared to fraternal (DZ) twin pairs so as to estimate the relative contributions of heredity and environment to individual differences. The classic twin design depends on the equal environments assumption (EEA), according to which the shared environment of MZ twins is no more similar than that of DZ twins.

The claim that the EEA is an unrealistic assumption that is routinely violated in reality is perhaps the most common criticism of the classic twin design. Violations of the EEA generally bias estimates of the effect of heredity upwards and those of the environment downwards. For this reason, the assumption has been put to the test in a number of studies, with research questions such as:

Are twin pairs who are misinformed about their actual zygosity as similar as pairs who know their real zygosity?

Are twin pairs with objectively more similar environments more similar phenotypically?

Are the results of twin studies consistent with the results of other kinds of behavioral genetic designs, such as adoption studies?

This research has indicated that the EEA is generally valid and that even when it’s violated, the effect on parameter estimates is small (Barnes et al., 2014; Felson, 2014).

I think sex differences offer an underappreciated way of further evaluating the EEA. Roughly half of DZ pairs are same-sex (male-male or female-female) and half are opposite-sex (male-female), whereas MZ pairs are, of course, all same-sex. Differences in twin correlations across these sex categories are informative about the EEA because if the shared environment differs by zygosity, you would expect it to differ by sex, too.
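The logic of the classic design, and of what an EEA violation does to it, can be summarized with Falconer’s standard formulas. The twin correlations in the sketch below are made-up illustrative values for an IQ-like trait, not estimates from any study:

```python
# Under the ACE model, and assuming the EEA holds,
#   rMZ = a2 + c2        (MZ twins share all genes plus the shared environment)
#   rDZ = a2/2 + c2      (DZ twins share half their segregating genes)
# Solving for the variance components gives Falconer's formulas:

def falconer(r_mz, r_dz):
    a2 = 2 * (r_mz - r_dz)   # heritability
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # unique environment + measurement error
    return a2, c2, e2

# Illustrative (made-up) twin correlations:
a2, c2, e2 = falconer(0.80, 0.50)
print(round(a2, 2), round(c2, 2), round(e2, 2))  # 0.6 0.2 0.2
```

The formulas make the direction of the bias transparent: if an EEA violation inflates rMZ relative to rDZ through environments rather than genes, the extra similarity is credited to a2 and subtracted from c2, which is exactly the upward/downward bias described above.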

In his recent book Hive Mind, economist Garett Jones argues that the direct effect of IQ on personal income is modest, and that most of the benefits of higher IQ flow from various spillover effects that make societies more productive, boosting everyone’s income. This, he says, explains the “IQ paradox” whereby IQ differences appear to explain a lot more of the economic differences between nations than within them.

Jones does not say in his book what he thinks the exact effect of IQ on personal income is, but on Twitter he has asserted that “Fans of g would do well to look at the labor lit: 1 IQ point predicts just 0.5% to 1.2% higher wages.” He has also said that, in terms of standardized effect sizes, IQ accounts for only about 10% of variance in personal income (a correlation of ~0.32).

While I don’t doubt Jones’s overall thesis that the effect of IQ on productivity is broader than its effect on personal productivity or income, I think he understates the importance of IQ in explaining income differences between individuals. I analyzed a large American population sample and found a substantially larger effect of IQ on permanent income than previous investigations have. It appears that the literature Jones refers to has failed to pay sufficient attention to various measurement issues.
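Jones’s two framings are consistent with each other, since a correlation r explains r² of the variance. A quick arithmetic check:

```python
import math

# "10% of variance explained" corresponds to a correlation of sqrt(0.10):
r = math.sqrt(0.10)
print(round(r, 3))  # 0.316, i.e. the ~0.32 quoted above
```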

Michael Rönnlund and colleagues have a very nice paper out in Intelligence. They show that the individual differences in general intelligence that exist at age 18 are almost perfectly preserved to age 60, after which this stability starts to slowly break down.

A few years ago James Heckman, together with some other economists, published a study arguing that “achievement tests” and “IQ tests” are different beasts: the former, they claim, are better predictors of criterion outcomes (such as grade point averages) and are more strongly influenced by personality differences than the latter. Like most of Heckman’s forays into psychometrics — he has been obsessed with trying to shoot down Bell Curve-type arguments ever since the book was released — the study leaves much to be desired. David Salkever has published a nifty reanalysis of Heckman and colleagues’ study, showing that their results stem from faulty imputation and a failure to account for age effects.

There’s a long-standing debate about whether and how parental socioeconomic status moderates the heritability of IQ. Research has often, but not always, found that heritability is lower in low-SES families. See Turkheimer and Horn’s excellent review for details (although some of Turkheimer’s own research on this question is less than convincing).

Robert Kirkpatrick and colleagues have conducted what may be the best study on the question so far. They use a big Minnesota sample, comprising about 2,500 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings, and investigate whether SES moderates either the genetic or the environmental determinants of IQ.

It is claimed that implicit association tests, or IATs, reveal unconscious biases against racial and ethnic minorities and other stigmatized groups. The tests are simple and their results appear to be straightforward to interpret: if you are quicker to associate positive words (or other positive stimuli) with the non-stigmatized group (e.g., whites) and quicker to associate negative words with the stigmatized group (e.g., blacks), you have an implicit preference for the former and against the latter. Moreover, it has been shown that IAT scores are (modestly) related to arguably discriminatory behaviors. Given that the IAT scores of most people suggest that they are biased against stigmatized groups, it has been claimed that implicit biases explain discriminatory behaviors in the real world.

Hart Blanton, a long-time critic of the various theoretical and methodological absurdities of the IAT paradigm, has written, with some colleagues, a paper challenging a key assumption of the IAT. Reanalyzing several published implicit bias studies, they found that the standard IAT scoring procedure will typically label as implicitly biased people whose observed behavior is neutral and unbiased. IAT researchers assume that individuals who associate positive and negative IAT stimuli with different groups with equal ease are unbiased, but the research by Blanton et al. suggests that such individuals tend to be biased in favor of the stigmatized group. In other words, the zero point of the IAT scale is not associated with behavioral neutrality.

The results of Blanton et al. are pretty straightforward, but not necessarily easy to understand, so I’ll try to clarify them a bit.