I have no strong opinion on the general utility of "Implicit Association Tests", but I find these results entirely believable. An association of "American" and "White" would have been entirely unremarkable a generation or two ago. Here, the participants are present-day Yale students.

J Pers Soc Psychol. 2005 Mar;88(3):447-66.

American = White?

Devos T, Banaji MR.

Department of Psychology, San Diego State University, San Diego, CA 92182-4611, USA. tdevos@sciences.sdsu.edu

Six studies investigated the extent to which American ethnic groups (African, Asian, and White) are associated with the category "American." Although strong explicit commitments to egalitarian principles were expressed in Study 1, Studies 2-6 consistently revealed that both African and Asian Americans as groups are less associated with the national category "American" than are White Americans. Under some circumstances, a dissociation between mean levels of explicit beliefs and implicit responses emerged such that an ethnic minority was explicitly regarded to be more American than were White Americans, but implicit measures showed the reverse pattern (Studies 3 and 4). In addition, Asian American participants themselves showed the American = White effect, although African Americans did not (Study 5). The American = White association was positively correlated with the strength of national identity in White Americans. Together, these studies provide evidence that to be American is implicitly synonymous with being White. ((c) 2005 APA, all rights reserved).

PMID: 15740439 [PubMed - indexed for MEDLINE]

[. . .]

These data also indicate that the American = White effect cannot be reduced to a form of pro-White automatic attitude. Even though American symbols were highly valued, pairing these symbols with faces of White and Asian individuals produced a pattern of associations that differs, in terms of direction and intensity, from that observed on a measure tapping implicit ethnic attitudes. Specifically, Asian American participants displayed a significant implicit preference for their ethnic group (in-group favoritism), yet they showed the American = White effect. In addition, responses provided by White American participants indicated that their propensity to link Whites to American was much stronger than their automatic pro-White attitude.

[. . .]

The strength of American and ethnic identity were associated for White American participants (r =.40, p <.02), whereas no significant association between these two indexes was found for Asian American participants (r =.00). These findings are consistent with data reported by Sidanius et al. (1997) and support the idea that American and ethnic identities overlap for White Americans, whereas these identities are distinct for an ethnic minority such as Asian Americans.

[. . .]

The results of this study provide strong evidence for implicit national identity. The category “American” automatically elicits a positive evaluation. It is also clearly incorporated in the collective aspect of the self. A comparison of the mean levels of American identity for Asian and White Americans revealed that these two groups displayed equally strong American identity. This finding is counterintuitive, because Asian Americans, at the same time, internalized the idea that their group does not fully belong to the national entity. These data are in line with results of a previous study showing that African Americans felt as strongly American as White Americans but were aware that they were not perceived as being American (Barlow, Taylor, & Lambert, 2000). A major difference between this previous study and the results obtained here is that we provide evidence for a discrepancy between beliefs about the group and the self operating outside of conscious control.

The equally strong level of American identification among White and Asian Americans should not eclipse important differences in the interrelations among ethnic and American identities. In line with social dominance theory (Sidanius et al., 1997; Sidanius & Petrocik, 2001; Sinclair et al., 1998), ethnic and American identities were inextricably linked for White Americans, whereas these identities were distinct for Asian Americans. [. . .] Even on the basis of the current findings, a clear asymmetry characterized the interrelations between ethnic and American identities for the White majority and the ethnic minority. In contrast to White Americans, Asian Americans cannot rely on their ethnicity to achieve a national identity. For White Americans, these two identities tend to be merged beyond the level of conscious awareness.

white jumpers go 1 and 3 at high jump, black americans do not even final at long jump or high jump, yet here we are, immediately back to talking about 100 meters. actually, we never stopped, because fawning over black 100 meter runners is the main interest of "track" fans like steve. most other interesting questions about track and field are rarely explored.

it's like some kind of jon entine-esque gay fantasy. and let's not pretend that's not what's going on here.

the immediate and unrelenting comparison of west african sprinters to white men, and only white men, as if white men were the only other humans on earth, and they alone needed to be singled out for being inferior, when in fact they're pretty good athletes, and a lot better than the other races on average, well, it's just gay. and i don't mean gay in the "this is retarded" way. it borders on homosexual.

not an olympics wrap-up, just a track and field wrap-up. and not even a track & field wrap-up. just a track wrap-up, and pretty much just a track sprinting wrap-up.

totally ignoring field is very important for the silly entine-esque analysis going on here. white men can't jump, except when they can. black americans easily dominate jumping, except when they don't.

throwing objects is completely natural, as natural as running. it's why there is a huge difference between men and women when it comes to throwing. men are literally designed to throw. it's genetic. africans are throwing rocks and spears every single day. yet it's white men who are better at throwing things on average. and not just a shotput or a discus or a javelin, but footballs and baseballs and basketballs too. they're just plain better at hurling and shooting stuff for accuracy and distance. but positive stereotypes about white athletes are never allowed.

should we even get into how ridiculous some of steve's statements are in his vdare column? boxing is not important now he says. that can only be because white boxers have taken many of the belts.

In the same way that wide-ranging, exorbitantly expensive attempts to find and develop White world beaters in the 100 meters, 10,000 meters, or table tennis would be mocked, America’s attempts to have world-beating Black weightlifters, Black swimmers, Black shot putters, Black wrestlers, Black cyclists, Black high jumpers, Black javelin throwers, and even Black or semi-Black decathletes are ridiculous. Indeed it is nothing more than a severe abuse of the available potential.

Fifty-two Jewish subjects and 60 non-Jewish subjects were presented with a disguised ethnic identification task. Sixty systematically grouped portraits were used. Religion sortings were administered among sortings by intelligence, age and likability. Subjects were also questioned concerning the cues they used. Jewish pictures were comparatively accurately identified by all subjects, and tended to be more accurately identified by Jewish subjects than by non-Jews. Pictures classified as Jewish also tended to be sorted as bright, especially by Jewish subjects. There was less evidence of ethnocentricity in other sortings, apparently in part because of relatively stereotyped classifications by brightness and likability. These two evaluative dimensions were also highly correlated. Jewish pictures tended to be judged through the use of physiognomic cues, whereas Catholicism was frequently inferred from other categories (nationality); Protestantism tended to be a residual category.

[. . .]

Accuracy of Identification. All three groups of subjects (Jews, Catholics and Protestants) identified the Jewish pictures as Jewish at better than chance frequency. Combining all the subjects, 16 of the 20 Jewish pictures were correctly identified (see criterion above for correct identification). Using the binomial expansion, it was found that the probability of correctly identifying this many or more pictures by chance was less than .006. Thirteen of the 20 Jewish pictures were correctly identified by more than half of the subjects.
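The reported chance probability can be checked with a simple binomial tail calculation. A minimal sketch, assuming each picture had an even (0.5) chance of being called Jewish by guessing — the paper's exact criterion for "correct identification" is the one referenced above, and the 0.5 chance model is our assumption:

```python
from math import comb

n, k = 20, 16   # 16 of the 20 Jewish pictures correctly identified
p = 0.5         # assumed per-picture chance probability (our assumption)

# Binomial tail: probability of k or more correct identifications by chance
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
# tail comes out just under .006, matching the probability reported in the text
```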

The level of accuracy achieved by the classifiers in this study, using only small photographs, suggests a lower bound: anyone with above-average perceptiveness interacting with actual people could only improve on it.

I actually agree with the commenter at Dienekes' that the fraction of (Ashkenazi) American Jews with blatant, stereotypically-Jewish features is around 1/2. But I strongly disagree with the implicit claim that the other half are visually indistinguishable as Jews.

The four most frequently cited sets of cues were dark hair, an oversized nose, general "Jewish" facial characteristics, and similarity in appearance of the depicted persons to Jews in the subjects' acquaintance. Non-Jewish subjects seemed more prone than Jewish subjects to refer to specific features (hair color being the most salient example), whereas Jews seemed more inclined to cite global characteristics. [A superscript for the following footnote appears here: "Rommetveit concludes from a series of studies of social perception that "intuitive" global judgments (as opposed to those based on specific traits) are characteristic of accurate social perceivers. (See Ragnar Rommetveit, Selectivity, Intuition and Halo Effects in Social Perception, Oslo, Norway: Oslo University Press, 1960.)] Dark complexion constitutes an exception to this rule, since it was cited by proportionately more of the Jewish subjects.

Those who hilariously cite such blond Jews as Barbra Streisand and Gene Wilder as proof "Jews are white" should see the second point I've emphasized in the above text (on global judgments vs. specific traits). Two more of RM's "Northern European" Jews, pre-dye and nose jobs:

Incidentally, with data of the sort presumably to be generated by the "Jewish HapMap Project", it ought to be easy enough to quantify European admixture in Ashkenazi Jews. It may even be possible to directly test the proposition that AJs were selected for lighter pigmentation while remaining predominantly non-European in ancestry.

Compared to the recent Europe-wide genetic structure paper, this paper contains more (and better-characterized with respect to geography) samples from Finland and Sweden, but typed at fewer loci. The authors detect an east-west duality in Finland. They fail to detect substructure within Sweden, though poorer-quality data or the presence of non-European immigrants in their Swedish sample may be confusing the issue. Nonetheless:

The principal component analysis clearly separated the Finnish regions and Eastern and Western counties from the Swedish as well as the Finnish regions and counties from each other (Figure 2C and 2D). Geneland showed three clusters (Figure 3B), roughly corresponding to Sweden, Eastern Finland and Western Finland. Thus, Geneland was able to correctly identify the country of origin of the individuals despite the lower quality of the Swedish data. Interestingly, the county-level PCA (Figure 2D) and Geneland (Figure 3B) placed the Finnish subpopulation of Swedish-speaking Ostrobothnia closest to Sweden. This minority population originates from the 13th century, when Swedish settlers inhabited areas of coastal Finland [34]. Our result is in congruence with earlier studies where intermediate allele frequencies between Finns and Swedes have been observed in the Swedish speaking Finns [35].

Geneland is an algorithm which "in contrast with Structure, assumes that population membership is structured across space":

If this assumption is correct, the power of inferring clusters increases; if the assumption is incorrect, it will lead to a loss of power but generally not to inference of spurious clusters (in the case of weak spatial organization, Geneland tends to perform like Structure in terms of inferred clusters [27]). Besides, in previous studies with similar goals it has been estimated that Structure needs a minimum of 65 to 100 random markers to separate continental groups and that the number of markers rather than samples is the most important parameter determining statistical power [13, 37]. The differences between and within the neighbouring countries studied here are presumably smaller than those between continents and not large enough to be detected by Structure.

The detection of three clusters by Geneland versus one single cluster by Structure can thus be interpreted as an example of increased power in spatially structured populations.
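The cited point that the number of markers, more than the number of samples, determines assignment power can be illustrated with a toy likelihood-based assignment simulation. This is a sketch under assumed allele frequencies (a uniform frequency difference of 0.2 across unlinked SNPs), not the Structure or Geneland model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def assignment_accuracy(n_markers, delta=0.1, n_ind=2000):
    """Fraction of individuals assigned to their true population by a
    log-likelihood comparison, given n_markers unlinked biallelic SNPs
    whose allele frequencies differ by 2*delta between two populations."""
    p1 = np.full(n_markers, 0.5 - delta)  # frequencies in population 1
    p2 = np.full(n_markers, 0.5 + delta)  # frequencies in population 2
    # Diploid genotypes (0, 1, or 2 copies) for individuals from population 1
    g = rng.binomial(2, p1, size=(n_ind, n_markers))
    # Log-likelihood of each individual's genotypes under each population
    ll1 = (g * np.log(p1) + (2 - g) * np.log(1 - p1)).sum(axis=1)
    ll2 = (g * np.log(p2) + (2 - g) * np.log(1 - p2)).sum(axis=1)
    return float(np.mean(ll1 > ll2))

few, many = assignment_accuracy(5), assignment_accuracy(65)
# Accuracy rises sharply between 5 and 65 markers; with the much smaller
# within-Europe frequency differences, far more markers would be needed.
```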

[. . .]

Our results from the Geneland algorithm demonstrate the benefit of including spatial information in clustering individuals according to their genetic similarity, particularly at low levels of differentiation. Although Geneland has successfully clustered individuals into groups with low or moderate FST in ecological studies [44-46], to the best of our knowledge, this is the first time the algorithm has been used for human or SNP data.

Background
Despite several thousands of years of close contacts, there are genetic differences between the neighbouring countries of Finland and Sweden. Within Finland, signs of an east-west duality have been observed, whereas the population structure within Sweden has been suggested to be more subtle. With a fine-scale substructure like this, inferring the cluster membership of individuals requires a large number of markers. However, some studies have suggested that this number could be reduced if the individual spatial coordinates are taken into account in the analysis.

Results
We genotyped 34 unlinked autosomal single nucleotide polymorphisms (SNPs), originally designed for zygosity testing, from 2044 samples from Sweden and 657 samples from Finland, and 30 short tandem repeats (STRs) from 465 Finnish samples. We saw significant population structure within Finland but not between the countries or within Sweden, and isolation by distance within Finland and between the countries. In Sweden, we found a deficit of heterozygotes that we could explain by simulation studies to be due to both a small non-random genotyping error and hidden substructure caused by immigration. Geneland, a model-based Bayesian clustering algorithm, clustered the individuals into groups that corresponded to Sweden and Eastern and Western Finland when spatial coordinates were used, whereas in the absence of spatial information, only one cluster was inferred.

Conclusions
We show that the power to cluster individuals based on their genetic similarity is increased when including information about the spatial coordinates. We also demonstrate the importance of estimating the size and effect of genotyping error in population genetics in order to strengthen the validity of the results.
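The link between hidden substructure and a deficit of heterozygotes is the classic Wahlund effect. A minimal numerical sketch with hypothetical allele frequencies (the paper's own simulations, which also model genotyping error, are more elaborate):

```python
# Wahlund effect: pooling two subpopulations that differ in allele
# frequency yields fewer heterozygotes than Hardy-Weinberg predicts
# for the pooled sample.
p1, p2 = 0.3, 0.7                 # hypothetical subpopulation allele frequencies
p_bar = (p1 + p2) / 2             # pooled frequency (equal-sized subpopulations)

h_expected = 2 * p_bar * (1 - p_bar)            # H-W heterozygosity, pooled sample
h_observed = (2*p1*(1-p1) + 2*p2*(1-p2)) / 2    # mean within-subpopulation heterozygosity
f_st = (h_expected - h_observed) / h_expected   # the deficit, expressed as F_ST
# Here h_expected = 0.5, h_observed = 0.42, so F_ST = 0.16
```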

In screening for subjects for a reeducation experiment, Gregory Razran collected data on the "ethnic attitudes" of 150 Americans (about 100 college students from Columbia and Barnard and 50 middle-aged New Yorkers) in the 1930s, using 'a special "ethnic surnames plus nonethnic photographs" rating method' along with interviews of some subjects. Razran finds [1]:

The evidence for the existence of very definite unfavorable stereotypes and dislikes of Jews and Italians, and to a small extent also of Irish, is unmistakable. Photographs to which Jewish surnames had been attached dropped, as seen from Table 3, 1.21 points in General Liking, 0.81 in Character, 0.29 in Beauty, while going up 1.01 in Ambition and 0.36 in Intelligence, with little consistent change in Entertainingness. The photographs with Italian surnames went down 0.78 points in General Liking, 0.33 in Beauty, 0.35 in Intelligence, 0.45 in Character, 0.34 in Entertainingness, while going up 0.39 points in Ambition. The Irish surnames produced a drop of 0.25 points in General Liking, 0.12 points in Beauty, 0.19 in Intelligence, 0.29 in Character, 0.11 points in Entertainingness and a rise of 0.18 points in Ambition. The drops in General Liking and Character for Jews and Italians and the rise in Ambition for Jews are fully reliable statistically, while the other drops and rises possess some degree of reliability, are consistent, and borne out, in the main, by the interviews. The results are even more striking if one considers the fact that nearly 30 per cent of the subjects showed no consistent changes in their ratings and were—as revealed by some post-ratings questionings—definitely free from ethnic dislike.

In contrast, "changes in the ratings of the photographs with Old American surnames" were "few and in no case statistically reliable".

Contrary to what one might expect based on the rantings of various German-identified types, the Anglo-Saxons in this sample are not overly philo-Semitic--just the opposite:

ethnic dislike and unfavorable stereotyping of Jews among Americans of different ethnic descents diminishes in the following order [n/a: line breaks added]:

The dislikes and unfavorable stereotypings of Italians follow approximately the same order, except that here the differences do not become statistically reliable till we pit those of Anglo-Saxon, German, Scandinavian, Irish, and Dutch descents against those who descend from white ethnic groupings in Eastern and Southern Europe (including the Jews among the latter).

Regional differences seem minimal, but in this sample, at least, Mid-Atlantic and New England residents like Jews the least and Southerners like Jews the most.

Also of interest, dislike of Jews peaks among the middle-class and middle-income, while dislike of Italians continues to rise with income (on the whole, all occupation/income groups dislike both Jews and Italians; only the degree varies):

From Table 5 we learn that college students are less prejudiced against Jews when their parents' incomes are either less than $3,000 or more than $12,000 than when these incomes are in intermediate brackets; that the parents' college education lessens a little the prejudices of their children; and that children of professionals, laborers, and big businessmen have less Jewish prejudice than children of farmers, white-collar workers, and small businessmen. The differences in amount of Jewish prejudice between the children of professionals and white-collar workers are fully reliable statistically, while the other differences are fairly or somewhat reliable. On the other hand, this table shows that prejudice against Italians is smaller among children of white-collar workers and small businessmen than among children of laborers and big businessmen; and that this prejudice is little affected by the education of the parents of the students, and is the greater the higher the income of the parents.

Additionally:

prejudice against Jews is greater among Republicans than among Democrats, among opponents than among proponents of the New Deal (in 1938), among men students than among women students, among students who are members of sororities or fraternities than among those who are not members of these organizations, and among those who spent part of their lives in rural communities. Prejudice against Italians is, on the other hand, unrelated to political party preferences, attitudes toward the New Deal, and residence in rural communities. [. . .] Both prejudices are less among Catholics than among Protestants—very much less in the prejudice against Italians—but in this study religious affiliations have been so much overshadowed by ethnic descent that not much significance should be attached to this finding.

Razran concludes:

that among present-day Americans ethnic dislike and unfavorable stereotyping of Jews possess an extent, a quality, and a structure that mark them off significantly from the dislike and unfavorable stereotyping of a comparable group such as Italians, not to mention the mild dislike and unfavorable stereotyping of the Irish. Merely quantitatively, in terms of standard scores, the mean of ethnic dislike of Jews is about 50 per cent higher than that of Italians and about five times as great as the dislike of the Irish. [Freudian psychobabble excised.]

For one thing, in some areas that are no doubt determinants of ethnic—or any other social—status, the Jews have been judged favorably, or only slightly or moderately unfavorably. Take, for instance, intelligence and education—or rather a lack of them. The adjectives "ignorant," "stupid," "uncivilized," "primitive," "naive," and the like have been applied very lavishly to the Italians in this study, and to some small extent also to the Irish. But in the case of the Jews the stereotyping was, as seen from the tables, in a favorable direction, with only occasional unfavorable comments in the interviews such as "Jewish intelligence lacks originality," is "destructive," or is "too verbal and academic," and the like. In another area, that of "hygiene" and "grooming" and the adjectives of "dirty," "smelly," "sloppy," the stereotype of Jews was only a little more unfavorable than that of the Irish and clearly less unfavorable than that of the Italians.

[. . .]

In two other social realms that unquestionably are determining factors in ethnic status and distance, the Jews were judged moderately unfavorably. These realms are, first, what may be called "manners," "etiquette," and "taste," and, secondly, emotional stability. Neither of these realms has come in for direct rating, but both of them have been important in setting the rating of General Liking—to a considerable extent also Character and Entertainingness— and both have figured heavily in the interviews. In the first realm, the characterizations of "loud," "gaudy," "vulgar," "ostentatious," "uncouth," "don't know how to behave," often dubbed both Jews and Italians. There was, however, this difference that the ill-manners of the Italians were attributed to ignorance and to what may be called a "culture lag"—using culture in its popular rather than its sociological connotation—while Jewish bad manners were said to stem from more basic character defects which, as will be seen later, are the crux and "focal organizer" of nearly all the prejudice toward Jews. The unfavorable stereotyping with regard to emotional stability was "neuroticism" for the Jews, "irresponsibility," "hot air," and "alcoholism," for the Irish, and "hot tempered," "impulsive," "revengeful," and "primitive emotionality," for the Italians. Again, there was a tendency to consider the alleged instability of the Jews as socially more offensive, even though it was admitted that in concrete social situations the alleged instability of the other two groups would be more likely to be harmful and disruptive.

Except for General Liking, Jews scored lowest in Character and highest in Ambition. However, while Ambition is quite a specific aspect of behavior to be rated, Character is of course very composite, and we must turn to the interviews for specifications. On the whole, the chief determinants of the very low Jewish scores in Character was the stereotype of their unethical conduct—"unscrupulous," "dishonest," "crooked," "unfair," "scheming," "egotistic," "egocentric,"—with the stereotypes of "aggression," "cowardice," and "ill-manners" following in order.

Continuing, Razran finds:

Another distinguishing characteristic, this time a favorable one, of attitudes toward Jews is the considerable number of individuals whose attitudes and stereotyping clearly class them as pro-Jewish. (Excluding self-ratings no comparable pro-Italian and pro-Irish groups, to speak of, were found.) There were 29 such individuals—15 per cent—in this study, and 12 of them were interviewed. In three of the 12, the pro-Jewishness was primarily a matter of ethics, Christian ethics, a desire to help the downtrodden, to atone for the "sins of the fathers," and in at least one of the three, these feelings were tied up with an unhappy frustrated personal life. The pro-Jewishness of the remaining nine was, however, little governed by such considerations, but seemed to stem directly from a conviction that the Jews are a superior group in most, if not in all, personal and social qualities. Among the students, this alleged superiority revolved around Jewish contributions to civilization, their preeminent and forward ideas and ideals, almost a belief that most positive qualities of Western culture are largely due to Jews. In some ways, the views of these non-Jewish Americans correspond to the doctrine of a "Jewish mission:" "peace and love," as preached by some American rabbis; "revolution and a new social order," as put forth by some early Russian-Jewish revolutionaries. On the other hand, the pro-Jewish subjects of the middle-aged group saw Jewish superiority primarily in the personal success, achievements, and habits of the latter. Said a small storekeeper of Irish descent: "I take my hat off to the Jews. They know how to do things and get things, despite handicaps. I certainly would be happy if my daughter married a Jew. Jews are good family people, good providers, loyal to their wife and children, and don't drink."

The general curve of the distribution of dislike-like of Jews also seems to differ from the curves of dislike-like of the two other ethnic groups, as may be gathered from Table 8. The curves for both Italians and Irish are essentially unimodal, the first being bell-shaped and the second being positively skewed, but the curve for the Jews unmistakably points toward a bi-modality of distribution.

[1] Razran, G. Ethnic dislikes and stereotypes: a laboratory study. The Journal of Abnormal and Social Psychology. Vol. 45(1), January 1950, pp. 7-27.

Previous studies based on genome-wide SNP diversity reported differences between individuals of southern and northern/central European ancestry [3, 5, 6] and, to a lesser extent, between those of eastern and western European ancestry [3], which were not confirmed in our study.

However, looking at the earlier studies cited and comparing like to like, the picture is broadly similar: the main axis of genetic variation in Europe is North-South; Greeks and Italians are cleanly separable from Northern Europeans. The N-S gap is bridged somewhat by central Europeans and Iberians, but (unmixed) Iberians in particular are numerically insignificant in the U.S.

Regardless of whether genetic variation is "clinal" or "clustered" within Europe, America's Northern European majority is genetically distinct from its southern Italian minority.

The image at left shows Utah whites (the "CEU" HapMap population) overlaid on the PCA plot of European populations generated by Lao et al. Clearly, Utah whites are not going to be confused with Italians, "small" differences or not.

Other points:

As discussed elsewhere, the "UK" sample is a reference sample from London and likely contains many individuals with Irish, Welsh, and Scottish forebears. It should not be taken as representative of the English, or used as the basis for arguments about the genetic impact of historical migrations.

To determine the genetic patterns across the British Isles, we will use genetic “markers” to look at every individual sample. One might expect, for example, to find fewer genetic differences between people in Cornwall and Devon than Cornwall and the Shetlands because, historically, there has been less movement between the more distant counties.

Once these genetic patterns have been identified, it should also be possible to use them to investigate historical patterns of movement within the UK. As well as this, comparison of these patterns with results from other populations that surround the UK, such as the Scandinavians, French and Germans, should help us to understand the impact they have had on the British over the centuries.

[. . .]

What we plan to do is collect blood samples from between 100 and 150 people from about 30 different rural regions throughout the UK. To try and make sure that the sample is representative of the area throughout the ages, we are looking for people whose parents and grandparents were all born in the same locality.

The goal is 3500 samples, of which the project website reports 3294 have been collected. According to the May 2008 newsletter (pdf):

We are currently in the process of analysing our data from the latest round of genotyping and hope to report the results later on this year.

The final sample consisted of 10 participants (5 men and 5 women) who were second-generation graduate students from a predominantly White, midsized urban university in the Northeast. This sample size corresponds to the CQR method of recruiting between 8 and 12 participants (Hill et al., 1997). Regarding racial background, 5 identified as Asian/Pacific Islander, 3 identified as Hispanic, 1 identified as Caribbean, and 1 identified as White/Hispanic.

[. . .]

Physical Characteristics of True Americans

Seven participants reported that white skin, blonde hair, and blue eyes were the physical characteristics of a true American. Among these 7 participants, 6 participants mentioned White (n = 2), Caucasian (n = 3), or light skin (n = 1); 4 participants mentioned blonde hair (n = 2) or light hair (n = 2); and 4 participants mentioned blue eyes (n = 2) or light eyes (n = 2). For example, 1 Asian American male participant strongly associated being American with being White and believed that skin color was more important than other characteristics. He stated, “Being White is like a trump card, you can be like ignorant in politics and be White but more American than like a Black or Asian person.” Only 1 of the 7 participants who described White features also included gender. This Caribbean American man stated, “Definitely male, White umm, definitely male and White.” Only 2 of the 7 participants spoke of these features being part of a cookie-cutter or stereotypical American view of what is considered American.

[. . .]

These findings should be considered in light of recent research in the area of American identity. For example, Cheryan and Monin (2005) found that although Asian Americans felt as American as their White American counterparts, they also recognized that they were not perceived as such by other Americans. Thus, it is possible that although our participants may have felt American, as second-generation Americans and racial/ethnic minorities, they may also have recognized that they were not perceived to be as American as White European Americans and thus described features such as blonde hair and blue eyes.

[. . .]

Collectively, the results of our study indicated that being and feeling like a true American was complex and related to a number of individual and contextual factors. The complexity of participants' American identity definitions and negotiations is clearly evident in the results, in which four out of the six domains included categories that could be considered conceptual opposites: physical characteristics (White with blonde hair and blue eyes vs. diverse); beliefs and values (ethnocentrism vs. multiculturalism); impact of 9/11 (us-vs.-them mentality vs. greater unity); and participants' American identity (felt like a true American vs. did not feel like a true American). In addition, our results highlight the potential impact of sociopolitical forces in determining individuals' definitions and feelings of inclusion within a superordinate national identity.

[1] Park-Taylor et al. What It Means to Be and Feel Like a “True” American: Perceptions and Experiences of Second-Generation Americans. Cultural Diversity and Ethnic Minority Psychology. April 2008, Vol. 14, No. 2, pp. 128-137.

The present studies demonstrate that conceiving of racial group membership as biologically determined increases acceptance of racial inequities (Studies 1 and 2) and cools interest in interacting with racial outgroup members (Studies 3-5). These effects were generally independent of racial prejudice. It is argued that when race is cast as a biological marker of individuals, people perceive racial outgroup members as unrelated to the self and therefore unworthy of attention and affiliation. Biological conceptions of race therefore provide justification for a racially inequitable status quo and for the continued social marginalization of historically disadvantaged groups. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

PMID: 18505316 [PubMed - in process]

More:

Human survival and well-being fundamentally depend on connections to other people. In the present research, we examine the extent to which people's conceptions of social groups determine which connections are most worthy of investment. Specifically, we investigate whether conceiving of racial group membership as biologically rooted determines to whom people attend and with whom they affiliate. We argue that a biological notion of race saps people's desire to reach out to members of racial groups that have been historically disadvantaged. These biological outgroup members ultimately are rendered, as a group and individually, less relevant to the self.

In the United States, race has traditionally been viewed in terms of biological essentialism—that is, race is understood to be a fundamental and stable source of division among humankind that is rooted in our biological makeup. More recently, however, some have come to see race as a social construct, initially created for purposes of maintaining a hierarchical social order but now a meaningful marker of cultural orientation, social identity, and experiences with discrimination (Smedley & Smedley, 2005).

[. . .]

The purpose of the present research is not to determine which view is most accurate but instead to investigate the consequences of endorsing one conception over another.

[. . .]

Less often have researchers investigated the role of people's evaluatively neutral beliefs in explaining reactions to racial disparities and the quality of interracial interactions. Beyond racial prejudice, in this article we investigate whether a simple belief that racial categories are biologically determined has the power to dampen people's motivation to engage with historically disadvantaged racial groups. Affiliating and engaging with others is a fundamental need. However, a biological conception of race may function as an affiliation cue that operates preferentially, such that people who hold this conception most desire to affiliate with those who are in their biological ingroup. That is, because people are more likely to direct their resources and attention to those whom they perceive as kin (Hamilton, 1964; Kruger, 2003; O'Gorman, Wilson, & Miller, 2005), they may direct their resources and attention to those within their racial ingroup when they view race as biological in nature.

We demonstrate in the present studies that individuals who understand race to be biologically derived are more accepting of racial inequities. They tend to understand racial inequities as natural, unproblematic, and unlikely to change (Study 1), a relationship that cannot be accounted for by racial prejudice. Moreover, an experimentally manipulated view of race as biological leads people to respond to racial inequities with less emotional engagement (Study 2). That is, they are not only less motivated to change racial inequities but also less concerned with and moved by such disparities. At the interpersonal level, we show that those with a biological conception of race maintain friendship networks that are less racially diverse (Study 3), have less desire to develop friendships across race lines (Studies 3 and 4), and are less interested in simply sustaining contact with a person of another race (Study 5) than are those with a social conception of race. Thus, we argue that a biological notion of race—beyond racial prejudice—sharpens associational preferences along race lines.

Consanguineous marriages: do genetic benefits outweigh its costs in populations with α+-thalassemia, hemoglobin S, and malaria?

Srdjan Denic et al.

Consanguinity is widespread in populations with endemic malaria. This practice, leading to an increase of homozygosis, could be either detrimental for lethal alleles (like hemoglobin S) or potentially advantageous for beneficial alleles (like α+-thalassemia). The objective of this study was to analyze the effects of inbreeding on the fitness of a population with both α+-thalassemia and hemoglobin S mutations. We calculated the relative fitness of an inbred population with α+-thalassemia and sickle cell anemia using a standard formula, and then compared it to that of an outbred population. An increase in the frequency of the α+-thalassemia allele (0–1) results in a gain of relative fitness that is proportional to the coefficient of inbreeding; it is maximal at an allele frequency in the vicinity of 0.5. For hemoglobin S, an increase of frequency (0 to equilibrium point) produces a progressive loss of relative fitness that is also proportional to the coefficient of inbreeding; it is lowest at the equilibrium frequency, which is always lower than 0.5. In a consanguineous population with both α+-thalassemia and hemoglobin S under selection pressure of malaria, the sum of the contrary effects of inbreeding on the relative fitness of the population depends on the frequencies of the two alleles and the coefficient of inbreeding.

Our findings provide a plausible hypothesis for explaining the confinement of consanguineous marriages to the tropical and subtropical regions where malaria is endemic and explain their absence in other parts of the world. As such, they complement the socioeconomic benefits theory of consanguinity ([Alwan and Modell, 1997], [Bittles, 2001] and [Khlat, 1997]). If consanguinity produces more surviving offspring (higher fitness) in some malarious populations, then a better protection of these survivors of malaria, as per the socioeconomic theory of consanguinity, would further add to family fitness. Although neither theory is experimentally testable, the theoretical arguments underpinning both, and the complementary picture they paint, will further our insight into the causes and effects of customs regarding human reproduction.
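The opposing effects described above can be sketched with the standard inbreeding-adjusted Hardy-Weinberg genotype frequencies. This is a minimal illustration, not the authors' formula or parameters: the fitness values below are invented to show the direction of the two effects (inbreeding lowers mean fitness when the homozygote is lethal, as with hemoglobin S, and raises it when the two homozygotes are on average fitter than the heterozygote, as with a protective allele like α+-thalassemia).

```python
# Sketch: mean fitness of a population with inbreeding coefficient F,
# using inbreeding-adjusted Hardy-Weinberg genotype frequencies.
# Fitness values are illustrative, not the paper's actual parameters.

def mean_fitness(q, F, w_AA, w_Aa, w_aa):
    """q = frequency of the variant allele a; F = inbreeding coefficient."""
    p = 1.0 - q
    f_AA = p * p + F * p * q        # inbreeding shifts mass from
    f_Aa = 2.0 * p * q * (1.0 - F)  # heterozygotes to both homozygotes
    f_aa = q * q + F * p * q
    return f_AA * w_AA + f_Aa * w_Aa + f_aa * w_aa

# Hemoglobin S under malaria: heterozygote advantage, SS homozygote lethal.
# Inbreeding raises SS homozygosity, so mean fitness falls.
hbs_outbred = mean_fitness(q=0.1, F=0.00, w_AA=0.8, w_Aa=1.0, w_aa=0.0)
hbs_inbred  = mean_fitness(q=0.1, F=0.05, w_AA=0.8, w_Aa=1.0, w_aa=0.0)

# alpha+-thalassemia (toy values): homozygous carriers best protected, and
# the homozygote average (0.8 + 1.0)/2 exceeds the heterozygote fitness
# 0.85, so inbreeding raises mean fitness.
thal_outbred = mean_fitness(q=0.5, F=0.00, w_AA=0.8, w_Aa=0.85, w_aa=1.0)
thal_inbred  = mean_fitness(q=0.5, F=0.05, w_AA=0.8, w_Aa=0.85, w_aa=1.0)
```

The direction of each effect scales with F, matching the abstract's statement that both the gain (thalassemia) and the loss (hemoglobin S) of relative fitness are proportional to the coefficient of inbreeding.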

Previous studies have reported variation in women's preferences for masculinity in men's faces and voices. Women show consistent preferences for vocal masculinity, but highly variable preferences for facial masculinity. Within individuals, men with attractive voices tend to have attractive faces, suggesting common information may be conveyed by these cues. Here we tested whether men and women with particularly strong preferences for male vocal masculinity also have stronger preferences for male facial masculinity. We found that masculinity preferences were positively correlated across modalities. We also investigated potential influences on these relationships between face and voice preferences. Women using oral contraceptives showed weaker facial and vocal masculinity preferences and weaker associations between masculinity preferences across modalities than women not using oral contraceptives. Collectively, these results suggest that men's faces and voices may reveal common information about the masculinity of the sender, and that these multiple quality cues could be used in conjunction by the perceiver in order to determine the overall quality of individuals.

Why do some dads get more involved than others? Evidence from a large British cohort

Daniel Nettle

Previous studies in developed-world populations have found that fathers become more involved with their sons than with their daughters and become more involved with their children if they are of high socioeconomic status (SES) than if they are of low SES. This paper addresses the idea proposed by Kaplan et al. that this pattern arises because high-SES fathers and fathers of sons can make more difference to offspring outcomes. Using a large longitudinal British dataset, I show that paternal involvement in childhood has positive associations with offspring IQ at age 11, and offspring social mobility by age 42, though not with numbers of grandchildren. For IQ, there is an interaction between father's SES and his level of involvement, with high-SES fathers making more difference to the child's IQ by their investment than low-SES fathers do. The effects of paternal investment on the IQ and social mobility of sons and daughters were the same. Results are discussed with regard to the evolved psychology and social patterning of paternal behaviour in humans.

Keywords: Fathers; Sons; Daughters; Socioeconomic status

[. . .]

As several previous studies in developed societies have also found ([Cabrera et al., 2000], [Harris et al., 1998], [Kaplan et al., 1998] and [Lawson & Mace, submitted for publication]), paternal involvement is patterned by SES and by sex of the child, with high-SES fathers more involved than low-SES ones, and sons receiving more paternal involvement than daughters. High paternal involvement is associated with significantly increased IQ scores at age 11 in this large British cohort, even when family SES and number of other siblings are controlled for. This result is consistent with previous findings for IQ and educational attainment measures from this (Flouri & Buchanan, 2004) and other (Kaplan et al., 1998) cohorts.

[. . .]

This study shows for the first time an interaction effect with father's SES, with professional and managerial fathers making more difference to child IQ scores when they invest than unskilled fathers do (see Fig. 3). High-SES fathers may have more skills to enrich and improve the environment of the child's development than low-SES fathers do. As Kaplan et al. (1998) suggested might be the case, high SES fathers seem to be more efficient at embodying human capital in their children than low-SES fathers are. This gives a powerful potential explanation of why low-SES groups are characterised by low paternal effort. The returns to effort are low, and therefore men have no incentive for higher effort.

[. . .]

High-investing fathers did not have more grandchildren than low-investing fathers in this cohort. This does not necessarily mean that investment is not adaptive, since evolution favours strategies that maximise the contribution of the lineage to the population at an indefinitely far point in the future, and strategies can be adaptive even if their mean payoffs do not exceed the average for several generations (McNamara & Houston, 2006). High-investing fathers, especially from high SES backgrounds, did improve the quality and final social status of their children, and given that social status generally predicts marriage and fertility, at least for men (Fieder & Huber, 2007), it is quite plausible that they thereby reduce the risk of lineage extinction in the longer term. On the other hand, it may be that in this low-fertility, high parental investment, post demographic transition society, investment strategies that might have had an adaptive basis in ancestral environments have become decoupled from realised (grand)offspring numbers.

Ellis (1988) and others (e.g., Rushton, 1985) argue that a fast LH strategy underlies general criminality. Consistent with this view, we found that the short form of the Arizona LH Battery converged on the Protective LH factor with measures of socially deviant attitudes (e.g., aggression, psychopathy, machiavellianism), which served as inverse indicators of that factor. As noted above, LH strategies are composed of coordinated tactics. Our findings suggest that if men possess evolved specialized adaptations for sexual coercion, then sexual coercion may be one tactic among many subsumed by a general fast LH strategy, that is, a suite of tactics characterized by a diverse repertoire of socially deviant adaptive tactics. For example, if general social deviance is driven by fast LH strategies (e.g., [Ellis, 1988] and [Figueredo et al., 2006]), then sexually coercive individuals could be “criminal-generalists” (Malamuth et al., 2005), yet also be specialized to use sexual coercion as one of the tactics characteristic of their fast reproductive strategies. In short, sexual coercion could be one specialized adaptive tactic that contributed to the reproductive success of fast LH individuals in certain social contexts. One possibility is that fast LH strategies develop partly in response to self-assessments of low mate value and that these strategies are specialized for sexual coercion. Alternatively, sexual coercion may not have directly contributed to reproductive success but instead might be generated as a side effect of selection for fast LH traits that were under direct selective pressure such as interest in casual sex and risk-taking ([Palmer, 1991], [Symons, 1979] and [Thornhill and Palmer, 2000]).

[. . .]

To summarize, slow LH strategy, mate value, low mating-effort, a long-term sexual strategy, low psychopathy, low machiavellianism, and low aggression clustered into one common Protective LH factor that was negatively associated with a Sexual Coercion factor. The Protective LH factor fully mediated the relation between subject sex and Sexual Coercion. Therefore, Protective LH predictors co-occurred within individuals, indicating a single underlying construct that buffers individuals against using sexually coercive tactics. The LH view is consistent with either the idea that sexual coercion is a specific adaptation or that it is a by-product of traits adaptive for fast LH individuals ([Palmer, 1991], [Thornhill and Palmer, 2000], [Thornhill and Palmer, 2004] and [Thornhill and Thornhill, 1992]). The Protective LH factor found in the present study must be replicated in other samples to support or refute the view that the three seemingly alternative evolutionary accounts describe different features of fast LH individuals.

Social Differences in Insulin-like Growth Factor-1: Findings from a British Birth Cohort

Meena Kumari et al.

Purpose

Insulin-like growth factor-1 (IGF-1) is related to factors that are socially patterned and may play a role in social differences in the development of morbidities including disability. Our aim is to examine whether there are social differences in IGF-1 in a cohort of participants between 44 and 45 years of age.

Methods

We examine the association of IGF-1 with social position measured by father's or own occupational class at three time points in childhood and adulthood, in a cohort of individuals born in one month in 1958 (N = 3,374 men and 3,302 women).

Results

Lower IGF-1 levels were associated with lower social position measured with father's occupational class at birth (p < 0.0001) and own occupational class aged 42 years (p < 0.001). Adult social position was associated with IGF-1 independently of social position at birth (p < 0.001) or any covariates examined.

Conclusions

IGF-1 secretion is associated with social position such that low social position is associated with lower levels of IGF-1. This biomarker may play a role in the development of social differences in morbidities associated with aging, such as the development of disability.

Insulin-like growth factor–1 (IGF-1) is an anabolic protein, related to insulin, with important actions on cell division and metabolism, as well as on cell proliferation in vascular smooth muscle. Low levels of IGF-1 are associated with atherosclerosis and may be predictive of cardiovascular events (1), type 2 diabetes (2), and loss of physical functioning (3), whereas high levels of IGF-1 are associated with the development of certain cancers (2).

[. . .]

Heart disease (1), diabetes (2), and impaired functioning or disability (3) all show associations with social position, and our findings suggest that IGF-1 may play a role in the pathways that mediate these differences. This is interesting in light of the recent increase in the prevalence of type 2 diabetes and in the context of an aging population, with the resultant increases in disability and poor functioning. An independent association with disability may be mediated by IGF-1 because low IGF-1 levels may correspond to a decreased ability to maintain muscle mass (42). Further investigation into the predictors of high IGF-1 levels may help to identify predictors for the maintenance of muscle mass that may militate against the development of disability.

Female facial attractiveness was best predicted by BMI and past health problems, whereas male facial attractiveness was best predicted by the socioeconomic status (SES) of their rearing environment.

[. . .]

Good genes theory predicts that variables contributing positively to individual health and fitness should be positively related to each other, and negatively related to variables that impact negatively on health and fitness. In this study, “positive” variables are SES and attractiveness, and “negative” variables are BMI, asymmetry and Health Problems. The results of between-variables correlations are thus generally consistent with good genes theory, although not all correlations were significant (Table 2).

A Comparative Analysis of the Genetic Epidemiology of Deafness in the United States in Two Sets of Pedigrees Collected More than a Century Apart

Kathleen S. Arnos et al.

Abstract

In 1898, E.A. Fay published an analysis of nearly 5000 marriages among deaf individuals in America collected during the 19th century. Each pedigree included three-generation data on the marriage partners, at least one of whom was a deaf proband ascertained by complete selection. We recently proposed that the intense phenotypic assortative mating among the deaf might have greatly accelerated the normally slow response to relaxed genetic selection against deafness that began in many Western countries with the introduction of sign language and the establishment of residential schools. Simulation studies suggest that this mechanism might have doubled the frequency of the commonest form of recessive deafness (DFNB1) in this country during the past 200 years. To test this prediction, we collected pedigree data on 311 contemporary marriages among deaf individuals that were comparable to those collected by Fay. Segregation analysis of the resulting data revealed that the estimated proportion of noncomplementary matings that can produce only deaf children has increased by a factor of more than five in the past 100 years. Additional analysis within our sample of contemporary pedigrees showed that there was a statistically significant linear increase in the prevalence of pathologic GJB2 mutations when the data on 441 probands were partitioned into three 20-year birth cohorts (1920 through 1980). These data are consistent with the increase in the frequency of DFNB1 predicted by our previous simulation studies and provide convincing evidence for the important influence that assortative mating can have on the frequency of common genes for deafness.

[. . .]

Introduction

The importance of heredity as a cause of hearing loss has been recognized at least since the beginning of the 19th Century. For example, in 1857, the Irish otologist William Wilde concluded from an analysis of questions about deaf individuals in census data that parental consanguinity and the existence of deafness in one or both parents were important indicators of a hereditary etiology in some cases.1 In 1883, Alexander Graham Bell published a report titled Memoir upon the Formation of a Deaf Variety of the Human Race, which included a retrospective analysis of records from schools for the deaf in the United States.2 Bell expressed his concern about “the formation of a deaf variety of the human race in America,” based on analyses of the frequency of deaf relatives of deaf students and the hearing status of the offspring of marriages among those who were congenitally deaf compared to those who were adventitiously deaf. Bell argued that the use of sign language, the trend toward education in residential schools, and the creation of societies and conventions for deaf people restricted mating choices and fostered intermarriage, leading to a steady increase in the frequency of congenital deafness. Geneticists have generally discounted Bell's concerns once the extreme heterogeneity of genes for deafness was recognized; however, as described below, recent evidence suggests that, in combination with relaxed selection, assortative mating among the deaf population might in fact have preferentially amplified the commonest forms of recessive deafness.3

[. . .]

Because of the large number of recognized genes for deafness, the discovery that mutations at a single locus, DFNB1 (MIM 220290), account for 30%–40% of nonsyndromic deafness in many populations came as a great surprise.[12] and [13] DFNB1 includes the GJB2 (MIM 121011) and GJB6 (MIM 604418) genes, coding for the Connexin 26 (Cx26) and Connexin 30 (Cx30) subunits of homologous gap-junction proteins. These subunits are expressed in the inner ear, where they form heteromeric gap-junction channels between adjacent cells that permit the exchange of small molecules and may facilitate the recycling of potassium ions from the hair cells, after acoustic stimulation, back into the cochlear endolymph. More than 154 mutations have been identified in the coding exon of GJB2, but a single chain-termination mutation, 35 del G, accounts for up to 70% of pathologic alleles in many populations. Although DFNB1 is common in Western Europe and the Middle East,[14] and [15] much lower frequencies have been observed in Asia.[16], [17] and [18] The 35 del G mutation exhibits linkage disequilibrium, and haplotype analysis suggests that it arose from a single individual in the Middle East approximately 10,000 years ago.[19] and [20]

[. . .]

In 2000, we proposed that the high frequency of DFNB1 deafness reflects the joint effect of intense assortative mating and the relaxed genetic selection against deafness, which occurred after the introduction of sign language 400 years ago in many Western countries and the subsequent establishment of residential schools for the deaf.29 Using computer simulation, we showed that this mechanism could have doubled the frequency of DFNB1 deafness in the United States during the past 200 years.3
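The mechanism proposed here can be sketched as a deterministic one-locus recursion. This is a toy model with invented parameters (initial allele frequency, 90% assortative mating among the deaf), not the authors' actual simulation; it only illustrates why deaf × deaf matings that are all noncomplementary accelerate the rise of a recessive deafness allele once selection against deafness is relaxed.

```python
# Toy model: recessive deafness allele a under assortative mating.
# Deaf = aa. A fraction m of deaf individuals mate assortatively; at a
# single locus such matings are noncomplementary (all offspring deaf).
# Parameters are illustrative, not from the Nance et al. simulations.

def next_generation(f_AA, f_Aa, f_aa, w_deaf, m):
    # Selection: deaf (aa) individuals have relative fitness w_deaf.
    tot = f_AA + f_Aa + f_aa * w_deaf
    f_AA, f_Aa, f_aa = f_AA / tot, f_Aa / tot, f_aa * w_deaf / tot
    # Deaf x deaf pairs contribute only aa offspring.
    assort = m * f_aa
    rest = 1.0 - assort                           # random-mating remainder
    q = (0.5 * f_Aa + (1.0 - m) * f_aa) / rest    # allele freq in that pool
    p = 1.0 - q
    return (rest * p * p, rest * 2 * p * q, rest * q * q + assort)

# Start near Hardy-Weinberg with allele frequency q0 = 0.01 and run
# ~8 generations (~200 years) of relaxed selection (w_deaf = 1).
q0 = 0.01
geno = ((1 - q0) ** 2, 2 * q0 * (1 - q0), q0 ** 2)
for _ in range(8):
    geno = next_generation(*geno, w_deaf=1.0, m=0.9)
```

With m = 0 (random mating) the same relaxed-selection run leaves the Hardy-Weinberg frequencies unchanged in this model, which is the point of the paper's argument: relaxed selection alone acts very slowly, and it is the assortative mating that amplifies the commonest recessive form.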

Importance of the Mating Structure of the Population

Along with consanguinity, assortative mating is an important characteristic of a population that can have a profound influence on the incidence of deafness. When a new recessive mutation first arises, there is a substantial risk that it will be lost by stochastic processes. Consanguinity helps ensure that at least some recessive mutations are expressed phenotypically where they can be exposed to positive or negative selection. Only after genes for deafness are expressed can assortative mating accelerate their increase in response to relaxed selection. Consanguinity, of course, affects all recessive genes indiscriminately, but the effect of assortative mating among the deaf is limited to genes for deafness, in which it preferentially increases the frequency of the commonest form of recessive deafness in a population.3 Acting together, these genetic mechanisms can thus promote the survival, expression, and spread of genes for deafness. The acquisition of either a traditional or an indigenous sign language, especially when used by both deaf and hearing family members, is perhaps the most important factor that can improve the “genetic fitness” of the deaf population. Although their fitness was generally quite low in Europe prior to the time that sign language and schools for the deaf were introduced, it is now becoming apparent from a growing number of examples that a similar amplification of the frequency of specific genes for deafness can result from the development of indigenous sign languages that are used within extended families to allow deaf and hearing family members to communicate with one another.[30], [31], [32] and [33] As a result of the integration of the deaf population into the community, the fitness of deaf individuals can be unimpaired in this setting, and when D × D marriages occur, virtually all are noncomplementary, as expected, because there is usually only one form of genetic deafness in the community. Although gene drift and endogamy undoubtedly play essential roles in the survival and initial phenotypic expression of genes in such populations, it is hard to escape the conclusion that relaxed selection and assortative mating must also contribute to the remarkable increases that can be seen in both gene and phenotype frequencies and to the strong evidence for a founder effect.

[. . .]

In the United States, 80%–90% of individuals with profound deafness currently marry a deaf partner;39 however, the introduction of cochlear-implant technology is profoundly altering the mating structure of the deaf population. By facilitating oral communication and educational mainstreaming, implants will redirect substantially all of the deaf children of hearing parents into the hearing mating pool. Even if all of the deaf children of deaf parents eschewed implants, continued to learn sign language, and mated assortatively, the size of the pool would decrease dramatically and would be increasingly composed of individuals with DFNB1 mutations. Under these assumptions, the ultimate size at which the mating pool stabilizes might well be influenced by the extent to which genotypic mate selection replaces phenotypic selection in the interim (Nance et al., American College of Medical Genetics meeting 2006, San Diego, USA, Abstract 52). On the other hand, if deaf couples begin to embrace cochlear-implant technology for their children, the pool size will continue to decrease, eventually resulting in the substantial disappearance of the deaf culture. Thus, the collection and analysis of data on marriages of deaf individuals might represent a vanishing opportunity to understand the factors that have contributed to secular changes in the genetic epidemiology of deafness in this country since Fay's landmark study.

Prehistoric population history: from the Late Glacial to the Late Neolithic in Central and Northern Europe

Stephen Shennan and Kevan Edinborough

Abstract

Summed probability distributions of radiocarbon dates are used to make inferences about the history of population fluctuations from the Mesolithic to the late Neolithic for three countries in central and northern Europe: Germany, Poland and Denmark. Two different methods of summing the dates produce very similar overall patterns. The validity of the aggregate patterns is supported by a number of regional studies based on other lines of evidence. The dramatic rise in population associated with the arrival of farming in these areas that is visible in the date distributions is not surprising. Much more unexpected are the fluctuations during the course of the Neolithic, and especially the indications of a drop in population at the end of the LBK early Neolithic that lasted for nearly a millennium. Possible reasons for the pattern are discussed.

In a recent paper Gamble et al. (2005) used the S2AGES database of radiocarbon dates for the period from c. 25–8 ka that they had compiled for western and northern Europe to propose an outline of the population history of the region during the Late Glacial period. The object of this paper is to follow up that study, albeit on a more limited geographical scale, by adopting essentially the same approach to trace regional population histories in three areas of Central and Northern Europe up into the Neolithic, and in particular beyond the Neolithic transition on which earlier radiocarbon work by one of us was focussed (Gkiasta et al., 2003). In our view the results reveal some striking patterns which have significant implications for our understanding not just of the beginning of the Neolithic but more importantly what happened after it.
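The summing procedure behind these date distributions can be sketched as follows. Real analyses first calibrate each radiocarbon determination against a calibration curve (e.g. IntCal), and the paper compares two different summing methods; this simplified, hypothetical sketch skips calibration and just sums a Gaussian likelihood per date over a calendar grid, so denser clusters of dates produce higher summed probability, which is what serves as the population proxy.

```python
import math

def summed_probability(dates, grid):
    """dates: (mean_BP, sigma) radiocarbon determinations;
    grid: calendar points in years BP.
    Sums one normal density per date, then normalises over the grid.
    Simplification: real SPDs calibrate each date against a curve first."""
    total = [0.0] * len(grid)
    for mean, sigma in dates:
        for i, t in enumerate(grid):
            z = (t - mean) / sigma
            total[i] += math.exp(-0.5 * z * z) / sigma
    s = sum(total)
    return [v / s for v in total]

# Hypothetical sample: many dates near 6500 BP (dense occupation),
# few near 8000 BP (sparse occupation).
dates = [(6500, 60)] * 10 + [(8000, 60)] * 2
grid = list(range(5500, 9001, 10))
spd = summed_probability(dates, grid)
peak = grid[spd.index(max(spd))]   # the SPD peaks where dates cluster
```

The sketch makes the method's key assumption visible: the number of dated events per unit time is taken as proportional to population, which is why sampling biases between the Mesolithic and Neolithic, as discussed below, matter for the inference.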

[. . .]

Starting with the earliest periods and working to the right, a number of features may be observed. In all cases the Mesolithic population shows fluctuations, but immediately before the beginning of the Neolithic it is actually lower than it was in some earlier phases. If one compares the maximum Mesolithic peak with the first Neolithic peak in each region, the Mesolithic peak in Denmark is proportionally the highest, which is likely to be a reflection of the significance of aquatic resources in Denmark and the high populations they are capable of supporting (cf. Schmölke, 2005). As noted above, while there may be some doubt about the comparability of the Mesolithic and Neolithic proxy population patterns, the bias against the former in favour of dates for the latter would have to be massive to alter the obvious inference to be made from the figure. The Danish pattern is basically the same as that produced in a similar radiocarbon exercise for Denmark and Sweden by Persson (1998, reproduced in Price, 2003). The beginning of the Neolithic is strikingly apparent in all three areas: the start of the LBK in Germany at c. 5500 cal BC, slightly later in Poland; and the beginning of the TRB Neolithic in Denmark at just after 4000 cal BC. In all cases there is a rapid rise in population to a ceiling; in Germany and Denmark this is basically maintained for some time; the marked dip in the Polish R_Combine data may or may not be a sampling artefact.

After 5000 cal BC the German data suggest a remarkable decline in population, to a fraction of its maximum LBK levels, lasting, with one or two fluctuations, until after 3500 cal BC. Poland shows a very similar picture although the decline is not as striking. In Denmark there is no such marked crash although there is a decline to just over half the maximum 3500 cal BC value at c. 3000 BC, roughly at the transition between the Middle Neolithic TRB and the Single Grave Culture. A slight upturn follows, with a more marked decline after 2500 BC. In Poland a sudden rise to a peak at 3500 BC is followed by a decline to a much lower level in the centuries after 3000 BC, corresponding to the various local Polish versions of the Corded Ware. Germany by contrast shows a rapid rise to a new population plateau at c. 3400 BC, maintained until 2500 BC, followed by a marked dip and then a rapid rise again at a time corresponding to the Bell Beaker culture and the beginning of the early Bronze Age. The pattern in the final centuries of the third millennium BC should be treated with some caution, since in southern Germany and Poland at least this is already the beginning of the Early Bronze Age, so it is possible that not all available dates have been included.

[. . .]

That the appearance of the LBK marked a major population increase in the areas where it is found is well established. What the data make clear is the extremely low levels of Mesolithic population prior to this arrival; the implication being that existing hunter-gatherer populations only made a significant contribution demographically, genetically and culturally to the extent that they were incorporated into the advancing LBK demographic wave.

However, the most significant result, we would argue, is the demonstration of the drastic demographic decline at the end of the LBK and the long subsequent period of relatively low population levels. Explaining the reasons for this now becomes a major issue. The decline suggested here on the basis of the radiocarbon evidence also fits in with an increasing number of indications from other sources that, far from being the foundation of the subsequent Neolithic across large parts of central, northern and northwestern Europe, the LBK in some respects at least actually left little trace. Thus, the recent ancient DNA study of LBK samples (Haak et al., 2005) suggested that the most frequent mtDNA variant was one which is extremely rare in the region in modern populations. Archaeobotanical studies are also making it increasingly apparent that the LBK crop exploitation system was an unusual one which did not have any descendants (Coward et al., unpublished paper and Bakels, in press).