Abstract: Crystallized intelligence is a pivotal broad ability factor in the major theories of intelligence, including the Cattell-Horn-Carroll (CHC) model, the three-stratum model, and the extended Gf-Gc (fluid intelligence-crystallized intelligence) model, and it is usually measured by means of vocabulary tests and other verbal tasks. In this paper, the C-Test, a text-completion test originally proposed as a measure of general proficiency in a foreign language, is introduced as an integrative measure of crystallized intelligence. Based on the existing evidence in the literature, it is argued that the construct underlying the C-Test closely matches the abilities underlying the language component of crystallized intelligence as defined in well-established theories of intelligence. It is also suggested that, by carefully selecting texts from pertinent knowledge domains, the C-Test could additionally measure the factual knowledge component of crystallized intelligence.

Abstract: Despite the high heritability of intelligence in the normal range, molecular genetic studies have so far yielded many null findings. However, large samples and self-imposed stringent standards have prevented false positives and have gradually narrowed down where effects can still be expected. Rare variants and mutations of large effect do not appear to play a major role beyond intellectual disability. Common variants can account for about half the heritability of intelligence and suggest that collaborative efforts will identify more causal genetic variants. Gene–gene interactions may explain some of the remainder, but are only starting to be tapped. Evolutionarily, stabilizing selection and selective (near-)neutrality are consistent with the facts known so far.

Abstract: The role of response time in completing an item can have very different interpretations. Responding more slowly could be positively related to success if slower responding reflects more careful processing; conversely, the association may be negative if working faster indicates higher ability. The objective of this study was to clarify the validity of each assumption for reasoning items, taking the mode of processing into account. A total of 230 persons completed a computerized version of Raven’s Advanced Progressive Matrices test. Results revealed that response time had an overall negative effect. However, this effect was moderated by items and persons: for easy items and able persons the effect was strongly negative, whereas for difficult items and less able persons it was less negative or even positive. The number of rules involved in a matrix problem significantly explained item difficulty. Most importantly, a positive interaction effect between the number of rules and item response time indicated that the response time effect became less negative with an increasing number of rules. Moreover, exploratory analyses suggested that the error type influenced the response time effect.
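The moderated response-time effect described in this abstract can be sketched as a simple item-response model in which the slope on (standardized log) response time depends on the number of rules. All parameter values and the function name below are illustrative assumptions for exposition, not estimates reported in the study.

```python
import math

def p_correct(theta, b, log_rt, n_rules, gamma0=-0.5, gamma1=0.15):
    """Illustrative probability of solving a matrix item.

    theta   : person ability
    b       : item difficulty
    log_rt  : standardized log response time
    n_rules : number of rules in the matrix problem

    The response-time slope gamma0 + gamma1 * n_rules is negative for
    items with few rules and moves toward zero (or becomes positive) as
    the number of rules grows -- the interaction pattern the abstract
    reports. The numeric values are assumptions, not fitted estimates.
    """
    slope = gamma0 + gamma1 * n_rules
    logit = theta - b + slope * log_rt
    return 1.0 / (1.0 + math.exp(-logit))

# One-rule item: taking longer lowers the success probability.
# Four-rule item: the response-time penalty shrinks or reverses.
few_rules_effect = p_correct(0.0, 0.0, 1.0, 1) - p_correct(0.0, 0.0, -1.0, 1)
many_rules_effect = p_correct(0.0, 0.0, 1.0, 4) - p_correct(0.0, 0.0, -1.0, 4)
```

In this sketch, `few_rules_effect` is negative (slower responding hurts on easy, low-rule items) while `many_rules_effect` is larger, mirroring the reported interaction.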

Abstract: Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often fit the data better than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work hypothesizing an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were drawn from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of the true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.
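The "true bi-factor structure" from which such Monte Carlo samples are drawn can be written down directly: each indicator loads on one general factor and one group factor, and the model-implied covariance is Λ Λᵀ + Θ. A minimal sketch with illustrative loadings (not the values used in the study) of generating one simulated sample:

```python
import numpy as np

# Six indicators, one general factor, two group factors (3 indicators each).
# Loadings are illustrative assumptions, not taken from the paper.
general = np.full(6, 0.6)
group = np.array([
    [0.4, 0.4, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.4, 0.4, 0.4],
]).T
lam = np.column_stack([general, group])        # 6 x 3 loading matrix
theta = np.diag(1.0 - (lam ** 2).sum(axis=1))  # unique variances -> unit total variance

sigma = lam @ lam.T + theta                    # model-implied covariance

rng = np.random.default_rng(0)
data = rng.multivariate_normal(np.zeros(6), sigma, size=500)
# `data` plays the role of one Monte Carlo replication; in a study like the
# one summarized above, the competing models would be fitted to many such
# samples and compared on approximate fit indices.
```

Note that within-group indicators covary more strongly than cross-group ones (general plus group loading versus general loading alone), which is exactly the structure a bi-factor model is designed to recover.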

Abstract: This paper analyzes notions of culture and human intelligence. Drawing on implicit and explicit theory frameworks, I explore discourses about perceptions of intelligence and culture, including cultural perceptions and meanings of intelligence in Asian, African, and Western cultures. While there is little consensus on what intelligence really means from one culture to the next, the literature suggests that the culture or subculture of an individual will determine how intelligence is conceived. In conclusion, it is argued that culture and intelligence are interwoven.