Abstract: Publication date: July 2018 Source:Cognition, Volume 176 Author(s): James Winters, Simon Kirby, Kenny Smith Aligning on a shared system of communication requires that senders and receivers reach a balance between simplicity, where there is a pressure for compressed representations, and informativeness, where there is a pressure to be communicatively functional. We investigate the extent to which these two pressures are governed by contextual predictability: the amount of contextual information that a sender can estimate, and therefore exploit, in conveying their intended meaning. In particular, we test the claim that contextual predictability is causally related to signal autonomy: the degree to which a signal can be interpreted in isolation, without recourse to contextual information. Using an asymmetric communication game, where senders and receivers are assigned fixed roles, we manipulate two aspects of the referential context: (i) whether or not a sender shares access to the immediate contextual information used by the receiver in interpreting their utterance; (ii) the extent to which the relevant solution in the immediate referential context is generalisable to the aggregate set of contexts. Our results demonstrate that contextual predictability shapes the degree of signal autonomy: when the context is highly predictable (i.e., the sender has access to the context in which their utterances will be interpreted, and the semantic dimension which discriminates between meanings in context is consistent across communicative episodes), languages develop which rely heavily on the context to reduce uncertainty about the intended meaning. When the context is less predictable, senders favour systems composed of autonomous signals, where all potentially relevant semantic dimensions are explicitly encoded.
Taken together, these results suggest that our pragmatic faculty, and how it integrates information from the context in reducing uncertainty, plays a central role in shaping language structure.

Abstract: Publication date: July 2018 Source:Cognition, Volume 176 Author(s): Shiri Lev-Ari We learn language from our social environment, but the more sources we have, the less informative each source is, and therefore, the less weight we ascribe to its input. According to this principle, people with larger social networks should give less weight to new incoming information, and should therefore be less susceptible to the influence of new speakers. This paper tests this prediction, and shows that speakers with smaller social networks indeed have more malleable linguistic representations. In particular, they are more likely to adjust their lexical boundary following exposure to a new speaker. Experiment 2 uses computational simulations to test whether this greater malleability could lead people with smaller social networks to be important for the propagation of linguistic change despite the fact that they interact with fewer people. The results indicate that when innovators were connected with people with smaller rather than larger social networks, the population exhibited greater and faster diffusion. Together these experiments show that the properties of people’s social networks can influence individuals’ learning and use as well as linguistic phenomena at the community level.
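The diffusion dynamic described above can be sketched as a toy simulation. Everything structural here is invented for illustration, not taken from the paper's actual model: a ring-plus-shortcuts background community, five "bridge" agents who hear the innovator directly, and a learning rate inversely proportional to an agent's own network size (the malleability principle).

```python
import random

def simulate(bridge_extra_links, n=40, n_bridges=5, alpha=1.0,
             threshold=0.5, max_steps=5000, seed=1):
    """Steps until a variant seeded by one innovator reaches 50% adoption."""
    rng = random.Random(seed)
    edges = set()

    def link(a, b):
        if a != b:
            edges.add((min(a, b), max(a, b)))

    # Background community: a ring plus random shortcuts keeps it connected.
    others = list(range(1 + n_bridges, n))
    for i, a in enumerate(others):
        link(a, others[(i + 1) % len(others)])
    for _ in range(20):
        link(rng.choice(others), rng.choice(others))

    # "Bridge" agents hear the innovator (node 0) directly;
    # bridge_extra_links controls how large their own networks are.
    for b in range(1, 1 + n_bridges):
        link(0, b)
        for a in rng.sample(others, bridge_extra_links):
            link(a, b)

    nbrs = {i: [] for i in range(n)}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)

    x = [0.0] * n          # each agent's representation of the variant
    x[0] = 1.0             # the innovator always uses the new variant
    for step in range(1, max_steps + 1):
        new = x[:]
        for i in range(1, n):
            m = sum(x[j] for j in nbrs[i]) / len(nbrs[i])
            # Malleability principle: learning rate shrinks with network size.
            new[i] = x[i] + (alpha / len(nbrs[i])) * (m - x[i])
        x = new
        if sum(x[1:]) / (n - 1) > threshold:
            return step
    return max_steps

small = simulate(bridge_extra_links=2)    # bridges with small networks
large = simulate(bridge_extra_links=10)   # bridges with large networks
print({"small-network bridges": small, "large-network bridges": large})
```

Whether the small-network condition diffuses faster in this toy depends on its invented parameters; the sketch illustrates the degree-dependent-malleability mechanism, while the paper's simulations are what establish the reported effect.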

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Ruth E. Corps, Abigail Crossley, Chiara Gambi, Martin J. Pickering During conversation, there is often little gap between interlocutors’ utterances. In two pairs of experiments, we manipulated the content predictability of yes/no questions to investigate whether listeners achieve such coordination by (i) preparing a response as early as possible or (ii) predicting the end of the speaker’s turn. To assess these two mechanisms, we varied the participants’ task: They either pressed a button when they thought the question was about to end (Experiments 1a and 2a), or verbally answered the questions with either yes or no (Experiments 1b and 2b). Predictability effects were present when participants had to prepare a verbal response, but not when they had to predict the turn-end. These findings suggest content prediction facilitates turn-taking because it allows listeners to prepare their own response early, rather than because it helps them predict when the speaker will reach the end of their turn.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Giles Hamilton-Fletcher, Katarzyna Pisanski, David Reby, Michał Stefańczyk, Jamie Ward, Agnieszka Sorokowska Cross-modal correspondences describe the widespread tendency for attributes in one sensory modality to be consistently matched to those in another modality. For example, high pitched sounds tend to be matched to spiky shapes, small sizes, and high elevations. However, the extent to which these correspondences depend on sensory experience (e.g. regularities in the perceived environment) remains controversial. Two recent studies involving blind participants have argued that visual experience is necessary for the emergence of correspondences, wherein such correspondences were present (although attenuated) in late blind individuals but absent in the early blind. Here, using a similar approach and a large sample of early and late blind participants (N = 59) and sighted controls (N = 63), we challenge this view. Examining five auditory-tactile correspondences, we show that only one requires visual experience to emerge (pitch-shape), two are independent of visual experience (pitch-size, pitch-weight), and two appear to emerge in response to blindness (pitch-texture, pitch-softness). These effects tended to be more pronounced in the early blind than late blind group, and the duration of vision loss among the late blind did not mediate the strength of these correspondences. Our results suggest that altered sensory input can affect cross-modal correspondences in a more complex manner than previously thought and cannot solely be explained by a reduction in visually-mediated environmental correlations. We propose roles of visual calibration, neuroplasticity and structurally-innate associations in accounting for our findings.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): James S. Adelman, Zachary Estes, Martina Cossu Rapidly communicating the emotional valence of stimuli (i.e., negativity or positivity) is vital for averting dangers and acquiring rewards. We therefore hypothesized that human languages signal emotions via individual phonemes (emotional sound symbolism), and more specifically that the phonemes at the beginning of the word signal its valence, as this would maximize the receiver’s time to respond adaptively. Analyzing approximately 37,000 words across five different languages (English, Spanish, Dutch, German, and Polish), we found emotional sound symbolism in all five languages, and within each language the first phoneme of a word predicted its valence better than subsequent phonemes. Moreover, given that averting danger is more urgent than acquiring rewards, we further hypothesized and demonstrated that phonemes that are uttered most rapidly tend to convey negativity rather than positivity. Thus, emotional sound symbolism is an adaptation providing an early warning system in human languages, analogous to other species’ alarm calls.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Steven Moran, Damián E. Blasi, Robert Schikowski, Aylin C. Küntay, Barbara Pfeiler, Shanley Allen, Sabine Stoll How does a child map words to grammatical categories when words are not overtly marked either lexically or prosodically? Recent language acquisition theories have proposed that distributional information encoded in sequences of words or morphemes might play a central role in forming grammatical classes. To test this proposal, we analyze child-directed speech from seven typologically diverse languages to simulate maximum variation in the structures of the world’s languages. We ask whether the input to children contains cues for assigning syntactic categories in frequent frames, which are frequently occurring nonadjacent sequences of words or morphemes. In accord with aggregated results from previous studies on individual languages, we find that frequent word frames do not provide a robust distributional pattern for accurately predicting grammatical categories. However, our results show that frames are extremely accurate cues cross-linguistically at the morpheme level. We theorize that the nonadjacent dependency pattern captured by frequent frames is a universal anchor point for learners on the morphological level to detect and categorize grammatical categories. Whether frames also play a role on higher linguistic levels such as words is determined by grammatical features of the individual language.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Andrea M. Cataldo, Andrew L. Cohen A context effect is a change in preference that occurs when alternatives are added to a choice set. Models of preferential choice that account for context effects largely assume a within-dimension comparison process. It has been shown, however, that the format in which a choice set is presented can influence comparison strategies. That is, a by-alternative or by-dimension grouping of the dimension values encourage within-alternative or within-dimension comparisons, respectively. For example, one classic context effect, the compromise effect, is strengthened by a by-dimension presentation format. Extrapolation from this result suggests that a second context effect, the similarity effect, will actually reverse when stimuli are presented in a by-dimension format. In the current study, we presented participants with a series of apartment choice sets designed to elicit the similarity effect, with either a by-alternative or by-dimension presentation format. Participants in the by-alternative condition demonstrated a standard similarity effect; however, participants in the by-dimension condition demonstrated a strong reverse similarity effect. The present data can be accounted for by Multialternative Decision Field Theory (MDFT) and the Multiattribute Linear Ballistic Accumulator (MLBA), but not Elimination by Aspects (EBA). Indeed, when some weak assumptions of within-dimension processes are met, MDFT and the MLBA predict the reverse similarity effect. These modeling results suggest that the similarity effect is governed by either forgetting and inhibition (MDFT), or attention to positive or negative differences (MLBA). These results demonstrate that flexibility in the comparison process needs to be incorporated into theories of preferential choice.
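The way attention switching between dimensions can generate a similarity effect can be illustrated with a toy sequential-sampling race loosely in the spirit of MDFT. This is not the authors' implementation: the attribute values, threshold, and noise level below are invented, and MDFT's distance-dependent lateral inhibition and contrast dynamics are omitted; only the stochastic attention-switching component is kept.

```python
import random

def mdft_choice(M, n_trials=2000, theta=12.0, noise=0.5, seed=0):
    """Toy race: choice proportions for alternatives with two attributes.

    At each moment attention stochastically fixates one attribute; each
    alternative's preference accumulates its advantage over the mean of
    the set on the attended attribute, plus independent noise.  The
    first alternative to reach the threshold theta wins the trial.
    """
    rng = random.Random(seed)
    k = len(M)
    wins = [0] * k
    for _ in range(n_trials):
        p = [0.0] * k
        while True:
            attr = rng.randrange(2)                 # attention switching
            col = [m[attr] for m in M]
            mean = sum(col) / k
            for i in range(k):
                p[i] += (col[i] - mean) + rng.gauss(0, noise)
            best = max(range(k), key=lambda i: p[i])
            if p[best] >= theta:
                wins[best] += 1
                break
    return [w / n_trials for w in wins]

# Similarity-effect setup: A and B trade off on two attributes; S is
# nearly identical to A.  (Values are invented for illustration.)
A, B, S = (1.0, 3.0), (3.0, 1.0), (0.9, 3.1)
print(mdft_choice([A, B]))      # binary baseline: roughly even split
print(mdft_choice([A, B, S]))   # adding S draws share from its neighbor A
```

Because A and S rise and fall together under attention switching while B moves in the opposite direction, A and S split their joint wins and B gains relative share, which is the standard similarity effect in this model family.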

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Heida Maria Sigurdardottir, Liv Elisabet Fridriksdottir, Sigridur Gudjonsdottir, Árni Kristjánsson Evidence of interdependencies of face and word processing mechanisms suggests possible links between reading problems and abnormal face processing. In two experiments we assessed such high-level visual deficits in people with a history of reading problems. Experiment 1 showed that people who were worse at face matching had greater reading problems. In Experiment 2, matched dyslexic and typical readers were tested, and difficulties with face matching were consistently found to predict dyslexia over and above both novel-object matching as well as matching noise patterns that shared low-level visual properties with faces. Furthermore, ADHD measures could not account for face matching problems. We speculate that reading difficulties in dyslexia are partially caused by specific deficits in high-level visual processing, in particular for visual object categories such as faces and words with which people have extensive experience.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Ryan B. Scott, Jason Samaha, Ron Chrisley, Zoltan Dienes While theories of consciousness differ substantially, the ‘conscious access hypothesis’, which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1–4 the stimuli were word-pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name-profession pairs where one name was paired with a creative profession and the other an uncreative profession. A supraliminal task followed requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. 
Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3 utilising non-linguistic stimuli. The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not consciously perceived and thus challenge the global access hypothesis and those theories embracing it.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Samuel Shaki, Martin H. Fischer Spatial-numerical associations (SNAs) have been studied extensively in the past two decades, always requiring either explicit magnitude processing or explicit spatial-directional processing. This means that the typical finding of an association of small numbers with left or bottom space and of larger numbers with right or top space could be due to these requirements and not the conceptual representation of numbers. The present study compares explicit and implicit magnitude processing in an implicit spatial-directional task and identifies SNAs as artefacts of either explicit magnitude processing or explicit spatial-directional processing; they do not reveal spatial-conceptual links. This finding requires revision of current accounts of the relationship between numbers and space.

Abstract: Publication date: May 2018 Source:Cognition, Volume 174 Author(s): Anna Vaskevich, Roy Luria Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Erin M. Anderson, Susan J. Hespos, Lance J. Rips Infants fail to represent quantities of non-cohesive substances in paradigms where they succeed with solid objects. Some investigators have interpreted these results as evidence that infants do not yet have representations for substances. More recent research, however, shows that 5-month-old infants expect objects and substances to behave and interact in different ways. In the present experiments, we test whether infants have expectations for substances when the outcomes are not simply the opposite of those for objects. In Experiment 1, we find that 5-month-old infants expect that when a cup of sand pours behind a screen, it will accumulate in just one pile rather than two. Similarly, infants expect that when two cups of sand pour in separate streams, two distinct piles will accumulate rather than one. Infants look significantly longer at outcomes with an inconsistent number of piles, providing evidence that infants have expectations for how sand accumulates. To test whether the number of cups or the number of pours guided expectations about accumulation, Experiment 2 placed these cues in conflict. This resulted in chance performance, suggesting that, for infants to build expectations about these outcomes, they need both cues (cup and pour) to converge. These findings offer insight into the nature of infants’ representations for non-cohesive substances like sand.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Hernando Taborda-Osorio, Erik W. Cheries Adults and preschool-aged children believe that internal properties are more important than external properties when determining an agent’s identity over time. The current study examined the developmental origins of this understanding using a manual-search individuation task with 13-month-old infants. Subjects observed semi-transparent objects that looked and behaved like animate agents placed into a box that they could reach but not see into. Across trials infants observed objects with either the same- or different-colored insides placed into the box. We found that infants used internal property differences more than external property differences to determine how many agents were involved in the event. A second experiment confirmed that this effect was specific to the domain of animate entities. These results suggest that infants are biased to see an agent’s ‘insides’ as more important for determining its identity over time than its outside properties.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Ryan J. Brady, Robert R. Hampton Working memory is a system by which a limited amount of information can be kept available for processing after the cessation of sensory input. Because working memory resources are limited, it is adaptive to focus processing on the most relevant information. We used a retro-cue paradigm to determine the extent to which monkey working memory possesses control mechanisms that focus processing on the most relevant representations. Monkeys saw a sample array of images, and shortly after the array disappeared, they were visually cued to a location that had been occupied by one of the sample images. The cue indicated which image should be remembered for the upcoming recognition test. By determining whether the monkeys were more accurate and quicker to respond to cued images compared to un-cued images, we tested the hypothesis that monkey working memory focuses processing on relevant information. We found a memory benefit for the cued image in terms of accuracy and retrieval speed with a memory load of two images. With a memory load of three images, we found a benefit in retrieval speed but only after shortening the onset latency of the retro-cue. Our results demonstrate previously unknown flexibility in the cognitive control of memory in monkeys, suggesting that control mechanisms in working memory likely evolved in a common ancestor of humans and monkeys more than 32 million years ago. Future work should be aimed at understanding the interaction between memory load and the ability to control memory resources, and the role of working memory control in generating differences in cognitive capacity among primates.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Alon Hafri, John C. Trueswell, Brent Strickland A crucial component of event recognition is understanding event roles, i.e. who acted on whom: boy hitting girl is different from girl hitting boy. We often categorize Agents (i.e. the actor) and Patients (i.e. the one acted upon) from visual input, but do we rapidly and spontaneously encode such roles even when our attention is otherwise occupied? In three experiments, participants observed a continuous sequence of two-person scenes and had to search for a target actor in each (the male/female or red/blue-shirted actor) by indicating with a button press whether the target appeared on the left or the right. Critically, although role was orthogonal to gender and shirt color, and was never explicitly mentioned, participants responded more slowly when the target’s role switched from trial to trial (e.g., the male went from being the Patient to the Agent). In a final experiment, we demonstrated that this effect cannot be fully explained by differences in posture associated with Agents and Patients. Our results suggest that extraction of event structure from visual scenes is rapid and spontaneous.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Roger Johansson, Franziska Oren, Kenneth Holmqvist When recalling something you have previously read, to what degree will such episodic remembering activate a situation model of described events versus a memory representation of the text itself? The present study was designed to address this question by recording eye movements of participants who recalled previously read texts while looking at a blank screen. An accumulating body of research has demonstrated that spontaneous eye movements occur during episodic memory retrieval and that fixation locations from such gaze patterns to a large degree overlap with the visuospatial layout of the recalled information. Here we used this phenomenon to investigate to what degree participants’ gaze patterns corresponded with the visuospatial configuration of the text itself versus a visuospatial configuration described in it. The texts to be recalled were scene descriptions, where the spatial configuration of the scene content was manipulated to be either congruent or incongruent with the spatial configuration of the text itself. Results show that participants’ gaze patterns were more likely to correspond with a visuospatial representation of the described scene than with a visuospatial representation of the text itself, but also that the contribution of those representations of space is sensitive to the text content. This is the first demonstration that eye movements can be used to discriminate on which representational level texts are remembered and the findings provide novel insight into the underlying dynamics in play.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): You-jung Choi, Hyun-joo Song, Yuyan Luo The present study examines how infants use their emergent perspective-taking and language comprehension abilities to make sense of interactions between two human agents. In the study, one agent (Agent1) could see only one of two identical balls on an apparatus because of a screen obstructing her view while the infant and another agent (Agent2) could see both balls. 19-month-old English-learning monolingual infants seemed to expect Agent2 to grasp the ball visible to Agent1 when she said to Agent2 “Give me the ball” but not when she said “Give me a ball.” 14-month-olds appeared to accept that Agent2 could grasp either ball when Agent1 said “Give me the ball.” Therefore, by 19 months of age, English-learning infants seem to attend to the specific linguistic units used, e.g., the definite article, to identify the referent of others’ speech. Possible reasons in connection with language acquisition processes and/or environmental factors for the two age groups’ respective failures with the definite and the indefinite articles are discussed.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Kaidi Lõo, Juhani Järvikivi, R. Harald Baayen Estonian is a morphologically rich Finno-Ugric language with nominal paradigms that have at least 28 different inflected forms but sometimes more than 40. For languages with rich inflection, it has been argued that whole-word frequency, as a diagnostic of whole-word representations, should not be predictive for lexical processing. We report a lexical decision experiment, showing that response latencies decrease both with frequency of the inflected form and its inflectional paradigm size. Inflectional paradigm size was also predictive of semantic categorization, indicating it is a semantic effect, similar to the morphological family size effect. These findings fit well with the evidence for frequency effects of word n-grams in languages with little inflectional morphology, such as English. Apparently, the amount of information on word use in the mental lexicon is substantially larger than was previously thought.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Laura M. Getz, Michael Kubovy An audiovisual correspondence (AVC) refers to an observer’s seemingly arbitrary yet consistent matching of sensory features across the two modalities; for example, between an auditory pitch and visual size. Research on AVCs has frequently used a speeded classification procedure in which participants are asked to rapidly classify an image when it is accompanied by either a congruent or an incongruent sound (or vice versa). When, as is typically the case, classification is faster in the presence of a congruent stimulus, researchers have inferred that the AVC is automatic and bottom-up. Such an inference is incomplete because the procedure does not show that the AVC is not subject to top-down influences. To remedy this problem, we devised a procedure that allows us to assess the degree of “bottom-up-ness” and “top-down-ness” in the processing of an AVC. We did this in studies of AVCs between pitch and five visual features: size, height, spatial frequency, brightness, and angularity. We find that all the AVCs we studied involve both bottom-up and top-down processing, thus undermining the prevalent generalization that AVCs are automatic.

Abstract: Publication date: June 2018 Source:Cognition, Volume 175 Author(s): Alessandro Guida, Ahmed M. Megreya, Magali Lavielle-Guida, Yvonnick Noël, Fabien Mathy, Jean-Philippe van Dijck, Elger Abrahamse The ability to maintain arbitrary sequences of items in the mind contributes to major cognitive faculties, such as language, reasoning, and episodic memory. Previous research suggests that serial order working memory is grounded in the brain’s spatial attention system. In the present study, we show that the spatially defined mental organization of novel item sequences is related to literacy and varies as a function of reading/writing direction. Specifically, three groups (left-to-right Western readers, right-to-left Arabic readers, and Arabic-speaking illiterates) were asked to memorize random (and non-spatial) sequences of color patches and determine whether a subsequent probe was part of the memorized sequence (e.g., press left key) or not (e.g., press right key). The results showed that Western readers mentally organized the sequences from left to right, Arabic readers spontaneously used the opposite direction, and Arabic-speaking illiterates showed no systematic spatial organization. This finding suggests that cultural conventions shape one of the most “fluid” aspects of human cognition, namely, the spontaneous mental organization of novel non-spatial information.

Abstract: Publication date: May 2018 Source:Cognition, Volume 174 Author(s): Jodi R. Smith, Teresa A. Treat, Thomas A. Farmer, Bob McMurray This work applies a dynamic competition framework of decision making to the domain of sexual perception, which is linked theoretically and empirically to college men’s risk for exhibiting sexual coercion and aggression toward female acquaintances. Within a mouse-tracking paradigm, 152 undergraduate men viewed full-body photographs of women who varied in affect (sexual interest or rejection), clothing style (provocative or conservative), and attractiveness, and decided whether each woman currently felt sexually interested or rejecting. Participants’ mouse movements were recorded to capture competition dynamics during online processing (throughout the decisional process), and as an index of the final categorical decision (endpoint of the decisional process). Participants completed a measure of Rape-Supportive Attitudes (RSA), a well-established correlate of male-initiated sexual aggression toward female acquaintances. Mixed-effects analyses revealed greater curvature toward the incorrect response on conceptually incongruent trials (e.g., rejecting and dressed provocatively) than on congruent trials (e.g., rejecting and dressed conservatively). This suggests that the two decision alternatives are simultaneously active and compete continuously over time, consistent with a dynamic competition account. Congruence effects also emerged at the decisional endpoint; accuracy was typically lower when stimulus features were incongruent, rather than congruent. RSA potentiated online congruence effects (intermediate states of behavior) but not offline congruence effects (endpoint states of behavior). In a hierarchical regression analysis, online processing indices accounted for unique variability in RSA above and beyond offline accuracy rates. 
The process-based account of men’s sexual-interest judgments ultimately may point to novel targets for prevention strategies designed to reduce acquaintance-initiated sexual aggression on college campuses.

Abstract: Publication date: May 2018 Source:Cognition, Volume 174 Author(s): Linda Liu, T. Florian Jaeger One of the central challenges in speech perception is the lack of invariance: talkers differ in how they map words onto the speech signal. Previous work has shown that one mechanism by which listeners overcome this variability is adaptation. However, talkers differ in how they pronounce words for a number of reasons, ranging from more permanent, characteristic factors such as having a foreign accent, to more temporary, incidental factors, such as speaking with a pen in the mouth. One challenge for listeners is that the true cause underlying atypical pronunciations is never directly known, and instead must be inferred from (often causally ambiguous) evidence. In three experiments, we investigate whether these inferences underlie speech perception, and how the speech perception system deals with uncertainty about competing causes for atypical pronunciations. We find that adaptation to atypical pronunciations is affected by whether the atypical pronunciations are seen as characteristic or incidental. Furthermore, we find that listeners are able to maintain information about previous causally ambiguous pronunciations that they experience, and use this previously experienced evidence to drive their adaptation after additional evidence has disambiguated the cause. Our findings revise previous proposals that causally ambiguous evidence is ignored during speech adaptation.

Abstract: Publication date: May 2018 Source:Cognition, Volume 174 Author(s): Mara Breen Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically-regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes.

Abstract: Publication date: May 2018 Source:Cognition, Volume 174 Author(s): Kathryn E. Schertz, Sonya Sachdeva, Omid Kardan, Hiroki P. Kotabe, Kathleen L. Wolf, Marc G. Berman Prior research has shown that the physical characteristics of one’s environment have wide ranging effects on affect and cognition. Other research has demonstrated that one’s thoughts have impacts on mood and behavior, and in this three-part research program we investigated how physical features of the environment can alter thought content. In one study, we analyzed thousands of journal entries written by park visitors to examine how low-level and semantic visual features of the parks correlate with different thought topics. In a second study, we validated our ecological results by conducting an online study where participants were asked to write journal entries while imagining they were visiting a park, to ensure that results from Study 1 were not due to selection bias of park visitors. In the third study, we experimentally manipulated exposure to specific visual features to determine if they induced thinking about the same thought topics under more generalized conditions. Results from Study 3 demonstrated a potential causal role for perceived naturalness and high non-straight edges on thinking about “Nature”, with a significant positive interaction. Results also showed a potential causal effect of naturalness and non-straight edges on thinking about topics related to “Spiritual & Life Journey”, with perceived naturalness having a negative relationship and non-straight edges having a positive relationship. We also observed a significant positive interaction between non-straight edge density and naturalness in relation to “Spiritual & Life Journey”. These results have implications for the design of the built environment to influence human reflection and well-being.

Abstract: Publication date: May 2018 Source:Cognition, Volume 174 Author(s): Martin Marko, Igor Riečanský Cognitive flexibility emerges from an interplay of multiple cognitive systems, of which lexical-semantic and executive are thought to be the most important. Yet this has not been addressed by previous studies demonstrating that such forms of flexible thought deteriorate under stress. Motivated by these shortcomings, the present study evaluated several candidate mechanisms proposed to mediate the impairing effects of stress on flexible thinking. Fifty-seven healthy adults were randomly assigned to a psychosocial stress or control condition while assessed for performance on cognitive flexibility, working memory capacity, semantic fluency, and self-reported cognitive interference. Stress response was indicated by changes in skin conductance, heart rate, and state anxiety. Our analyses showed that acute stress impaired cognitive flexibility via a concomitant increase in sympathetic arousal, while this mediator was positively associated with semantic fluency. Stress also decreased working memory capacity, which was partially mediated by elevated cognitive interference, but neither of these two measures was associated with cognitive flexibility or sympathetic arousal. Based on these findings, we conclude that acute stress impairs cognitive flexibility via sympathetic arousal that modulates lexical-semantic and associative processes. In particular, the results indicate that stress-induced sympathetic activation may restrict the accessibility and integration of remote associates and bias the response competition towards prepotent and dominant ideas. Importantly, our results indicate that stress-induced impairments of cognitive flexibility and executive functions are mediated by distinct neurocognitive mechanisms.

Abstract: Publication date: May 2018 Source:Cognition, Volume 174 Author(s): Brian P. Keane In his monograph Modularity of Mind (1983), philosopher Jerry Fodor argued that mental architecture can be partly decomposed into computational organs termed modules, which are characterized as having nine co-occurring features such as automaticity, domain specificity, and informational encapsulation. Do modules exist? Debates thus far have been framed very generally with few, if any, detailed case studies. The topic is important because it has direct implications for current debates in cognitive science and because it potentially provides a viable framework from which to further understand and make hypotheses about the mind’s structure and function. Here, the case is made for the modularity of contour interpolation, which is a perceptual process that represents non-visible edges on the basis of how surrounding visible edges are spatiotemporally configured. There is substantial evidence that interpolation is domain specific, mandatory, fast, and developmentally well-sequenced; that it produces representationally impoverished outputs; that it relies upon a relatively fixed neural architecture that can be selectively impaired; that it is encapsulated from belief and expectation; and that its inner workings cannot be fathomed through conscious introspection. Upon differentiating contour interpolation from a higher-order contour representational ability (“contour abstraction”) and upon accommodating seemingly inconsistent experimental results, it is argued that interpolation is modular to the extent that the initiating conditions for interpolation are strong. As interpolated contours become more salient, the modularity features emerge. The empirical data, taken as a whole, show that at least certain parts of the mind are modularly organized.

Abstract: Publication date: May 2018 Source:Cognition, Volume 174 Author(s): Marcell Székely, John Michael Can the perception that one’s partner is investing effort generate a sense of commitment to a joint action? To test this, we developed a 2-player version of the classic snake game which became increasingly boring over the course of each round. This enabled us to operationalize commitment in terms of how long participants persisted before pressing a ‘finish’ button to conclude each round. Our results from three experiments reveal that participants persisted longer when they perceived what they believed to be cues of their partner’s effortful contribution (Experiment 1). Crucially, this effect was not observed when they knew their partner to be an algorithm (Experiment 2), nor when it was their own effort that had been invested (Experiment 3). These results support the hypothesis that the perception of a partner’s effort elicits a sense of commitment, leading to increased persistence in the face of a temptation to disengage.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Yuqi Liu, Jared Medina In the mirror box illusion, participants often report that their hand is located where they see it, even when the position of the reflected hand differs from the actual position of their hand. This illusory shift (an index of multisensory integration) is stronger when the two hands engage in synchronous bimanual movement, in which visual and proprioceptive information is congruent in both motor-based (i.e. coordinates centered on the effector) and external (i.e. coordinates centered on elements external to the effector) frames of reference. To investigate the separate contributions of external and motor-based congruence in multisensory integration, we instructed participants to make synchronous or asynchronous tapping movements in either the same (i.e. both hands palms up) or opposing (palm up, palm down) postures. When in opposing postures, externally congruent movements were incongruent in a motor-based frame of reference, and vice versa. Across three experiments, participants reported more illusory shift and stronger ownership of the viewed hand in the mirror for external versus motor-based congruence trials regardless of motor outflow or motor effort, indicating that information from an externally-based representation is more strongly weighted in multisensory integration. These findings provide evidence that not only information across sensory modalities, but also information regarding crossmodal congruence represented in different spatial frames of reference, is differentially weighted in multisensory integration. We discuss how our findings can be incorporated into current computational models on multisensory integration.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Maria Kozhevnikov, Yahui Li, Sabrina Wong, Takashi Obana, Ido Amihai This research reports the existence of enhanced cognitive states in which dramatic temporary improvements in temporal and spatial aspects of attention were exhibited by participants who played (but not by those who merely observed) action video-games meeting certain criteria. Specifically, Experiments 1 and 2 demonstrate that the attentional improvements were exhibited only by participants whose skills matched the difficulty level of the video game. Experiment 2 showed that arousal (as reflected by the reduction in parasympathetic activity and increase in sympathetic activity) is a critical physiological condition for enhanced cognitive states and corresponding attentional enhancements. Experiment 3 showed that the cognitive enhancements were transient, and were no longer observed after 30 min of rest following video-gaming. Moreover, the results suggest that the enhancements were specific to tasks requiring visual-spatial focused attention, but not distribution of spatial attention, which has been reported to improve significantly and durably as a result of long-term video-game playing. Overall, the results suggest that the observed enhancements cannot be simply due to the activity of video-gaming per se, but might rather represent an enhanced cognitive state resulting from specific conditions (heightened arousal in combination with active engagement and optimal challenge), resonant with what has been described in previous phenomenological literature as “flow” (Csikszentmihalyi, 1975) or “peak experiences” (Maslow, 1962). The findings provide empirical evidence for the existence of the enhanced cognitive states and suggest possibilities for consciously accessing latent resources of our brain to temporarily boost our cognitive capacities upon demand.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): O. Rosa-Salva, M. Hernik, A. Broseghini, G. Vallortigara From the first hours of life, the prompt detection of animate agents allows identification of biologically relevant entities. The motion of most animate agents is constrained by their bilaterally-symmetrical body-plan, and consequently tends to be aligned with the main body-axis. Thus parallelism between the main axis of a moving object and its motion trajectory can signal the presence of animate agents. Here we demonstrated that visually-naïve newborn chicks (Gallus gallus domesticus) are attracted to objects displaying such parallelism, and thus show preference for the same type of motion patterns that elicit perception of animacy in humans. This is the first demonstration of a newborn non-human animal’s social preference for a visual cue related to the constraints imposed on behaviour by bilaterian morphology. Chicks also showed preference for rotational movements – a potential manifestation of self-propulsion. Results are discussed in relation to the mechanisms of animacy and agency detection in newborn organisms.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Daniel Kleinman, Tamar H. Gollan It is commonly assumed that bilinguals enable production in their nondominant language by inhibiting their dominant language temporarily, fully lifting inhibition to switch back. In a re-analysis of data from 416 Spanish-English bilinguals who repeatedly named a small set of pictures while switching languages in response to cues, we separated trials into different types that revealed three cumulative effects. Bilinguals named each picture (a) faster for every time they had previously named that same picture in the same language, an asymmetric repetition priming effect that was greater in their nondominant language, and (b) more slowly for every time they had previously named that same picture in the other language, an effect that was equivalent across languages and implies symmetric lateral inhibition between translation equivalents. Additionally, (c) bilinguals named pictures in the dominant language more slowly for every time they had previously named unrelated pictures in the nondominant language, exhibiting asymmetric language-wide global inhibition. These mechanisms dynamically alter the balances of activation between languages and between lemmas, providing evidence for an oft-assumed but seldom demonstrated key mechanism of bilingual control (competition between translations), resolving the mystery of why reversed language dominance sometimes emerges (the combined forces of asymmetrical effects emerge over time in mixed-language blocks), and also explaining other longer-lasting effects (block order). Key signatures of bilingual control can depend on seemingly trivial methodological details (e.g., the number of trials in a block) because inhibition is applied cumulatively at both local and global levels, persisting long after each individual act of selection.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Myrthe Faber, Gabriel A. Radvansky, Sidney K. D'Mello How does the dynamic structure of the external world direct attention? We examined the relationship between event structure and attention to test the hypothesis that narrative shifts (both theoretical and perceived) negatively predict attentional lapses. Self-caught instances of mind wandering were collected while 108 participants watched a 32.5 min film called The Red Balloon. We used theoretical codings of situational change and human perceptions of event boundaries to predict mind wandering in 5-s intervals. Our findings suggest a temporal alignment between the structural dynamics of the film and mind wandering reports. Specifically, the number of situational changes and likelihood of perceiving event boundaries in the prior 0–15 s interval negatively predicted mind wandering net of low-level audiovisual features. Thus, mind wandering is less likely to occur when there is more event change, suggesting that narrative shifts keep attention from drifting inwards.

Abstract: Publication date: May 2018 Source:Cognition, Volume 174 Author(s): Daniel N. Bub, Michael E.J. Masson, Hannah van Mook Switching between competing grasp postures incurs costs on speeded performance. We examined switch costs between lift versus use actions under task conditions that required subjects to identify familiar objects. There were no asymmetrical interference effects, though reliable costs occurred when the same object required a different action on consecutive trials. In addition, lift actions were faster to objects targeted for a prospective use action than objects irrelevant to this intended goal. The benefit of a lift-then-use action sequence was not merely due to the production of two different actions in short order on the same object; use actions to an object marked for the distal goal of a lift action were not faster than use actions applied to another object. We propose that the intention to use an object facilitates the prior action of lifting it because the motor sequence lift-then-use is habitually conscripted to enact the proper function of an object.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Nese Oktay-Gür, Alexandra Schulz, Hannes Rakoczy Three studies tested scope and limits of children’s implicit and explicit theory of mind. In Studies 1 and 2, three- to six-year-olds (N = 84) were presented with closely matched explicit false belief tasks that differed in whether or not they required an understanding of aspectuality. Results revealed that children performed equally well in the different tasks, and performance was strongly correlated. Study 3 tested two-year-olds (N = 81) in implicit interactive versions of these tasks and found evidence for dis-unity: children performed competently only in those tasks that did not require an understanding of aspectuality. Taken together, the present findings suggest that early implicit and later explicit theory of mind tasks may tap different forms of cognitive capacities.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Athena Vouloumanos Infants understand that speech in their native language allows speakers to communicate. Is this understanding limited to their native language or does it extend to non-native languages with which infants have no experience? Twelve-month-old infants saw an actor, the Communicator, repeatedly select one of two objects. When the Communicator could no longer reach the target but a Recipient could, the Communicator vocalized a nonsense phrase either in English (infants’ native language), Spanish (rhythmically different), or Russian (phonotactically different), or hummed (a non-speech vocalization). Across all three languages, native and non-native, but not humming, infants looked longer when the Recipient gave the Communicator the non-target object. Although, by 12 months, infants do not readily map non-native words to objects or discriminate most non-native speech contrasts, they understand that non-native languages can transfer information to others. Understanding language as a tool for communication extends beyond infants’ native language: By 12 months, infants view language as a universal mechanism for transferring and acquiring new information.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Emmanuel Dupoux Spectacular progress in the information processing sciences (machine learning, wearable sensors) promises to revolutionize the study of cognitive development. Here, we analyse the conditions under which ‘reverse engineering’ language development, i.e., building an effective system that mimics infants’ achievements, can contribute to our scientific understanding of early language development. We argue that, on the computational side, it is important to move from toy problems to the full complexity of the learning situation, and to take as input reconstructions of the sensory signals available to infants that are as faithful as possible. On the data side, accessible but privacy-preserving repositories of home data have to be set up. On the psycholinguistic side, specific tests have to be constructed to benchmark humans and machines at different linguistic levels. We discuss the feasibility of this approach and present an overview of current results.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Angela Cooper, Natalie Fecher, Elizabeth K. Johnson How do children represent words? If lexical representations are based on encoding the indexical characteristics of frequently-heard speakers, this predicts that speakers like a child’s own mother should be best understood. Alternatively, if they are based on the child’s own motor productions, this predicts an own-voice advantage in word recognition. Here, we address this question by presenting 2.5-year-olds with recordings of their own voice, another child’s voice, their own mother’s voice, and another mother’s voice in a child-friendly eye-tracking procedure. No own-voice or own-mother advantage was observed. Rather, children uniformly performed better on adult voices than child voices, even performing better for unfamiliar adult voices than own voices. We conclude that children represent words not in the form of own-voice motor codes or frequently heard speakers, but on the basis of adult speech targets.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Daniel Yon, Clare Press Perception during action is optimized by sensory predictions about the likely consequences of our movements. Influential theories in social cognition propose that we use the same predictions during interaction, supporting perception of similar reactions in our social partners. However, while our own action outcomes typically occur at short, predictable delays after movement execution, the reactions of others occur at longer, variable delays in the order of seconds. To examine whether we use sensorimotor predictions to support perception of imitative reactions, we therefore investigated the temporal profile of sensory prediction during action in two psychophysical experiments. We took advantage of an influence of prediction on apparent intensity, whereby predicted visual stimuli appear brighter (more intense). Participants performed actions (e.g., index finger lift) and rated the brightness of observed outcomes congruent (index finger lift) or incongruent (middle finger lift) with their movements. Observed action outcomes could occur immediately after execution, or at longer delays likely reflective of those in natural social interaction (1800 or 3600 ms). Consistent with the previous literature, Experiment 1 revealed that congruent action outcomes were rated as brighter than incongruent outcomes. Importantly, this facilitatory perceptual effect was found irrespective of whether outcomes occurred immediately or at delay. Experiment 2 replicated this finding and demonstrated that it was not the result of response bias. These findings therefore suggest that visual predictions generated during action are sufficiently general across time to support our perception of imitative reactions in others, likely generating a range of benefits during social interaction.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Samuel J. Gershman The dilemma between information gathering (exploration) and reward seeking (exploitation) is a fundamental problem for reinforcement learning agents. How humans resolve this dilemma is still an open question, because experiments have provided equivocal evidence about the underlying algorithms used by humans. We show that two families of algorithms can be distinguished in terms of how uncertainty affects exploration. Algorithms based on uncertainty bonuses predict a change in response bias as a function of uncertainty, whereas algorithms based on sampling predict a change in response slope. Two experiments provide evidence for both bias and slope changes, and computational modeling confirms that a hybrid model is the best quantitative account of the data.
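The bias-versus-slope distinction drawn in this abstract can be made concrete with a minimal sketch. The code below is illustrative, not the authors' model: function names, the logistic choice rule, and the Gaussian-posterior assumption for Thompson sampling are our own simplifications. An uncertainty bonus adds a term to each option's value, shifting the intercept of the choice function (a bias change), whereas Thompson-style sampling divides the value difference by the total posterior uncertainty, changing its slope:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_choose1_bonus(v1, v2, s1, s2, gamma=1.0, beta=1.0):
    """Uncertainty-bonus (UCB-like) rule: each option's uncertainty s
    is added to its value, so a more uncertain option gains a fixed
    advantage -- uncertainty shifts the *intercept* (response bias)."""
    return logistic(beta * ((v1 + gamma * s1) - (v2 + gamma * s2)))

def p_choose1_thompson(v1, v2, s1, s2):
    """Thompson sampling with independent Gaussian posteriors:
    P(sample1 > sample2) = Phi((v1 - v2) / sqrt(s1^2 + s2^2)),
    so total uncertainty scales the *slope* of the choice function."""
    z = (v1 - v2) / math.sqrt(s1 ** 2 + s2 ** 2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

With equal values, the bonus rule favors the more uncertain option (a pure bias shift) while Thompson sampling remains indifferent; raising total uncertainty under Thompson sampling flattens the choice curve toward chance, which is the slope signature the experiments exploit.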

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Leyre Castro, Edward A. Wasserman, Marisol Lauffer Supervised learning results from explicit corrective feedback, whereas unsupervised learning results from statistical co-occurrence. In an initial training phase, we gave pigeons an unsupervised learning task to see if mere pairing could establish associations between multiple pairs of visual images. To assess learning, we administered occasional testing trials in which pigeons were shown an object and had to choose between previously paired and unpaired tokens. Learning was evidenced by preferential choice of the previously unpaired token. In a subsequent supervised training phase, learning was facilitated if the object and token had previously been paired. These results document unsupervised learning in pigeons and resemble statistical learning in infants, suggesting an important parallel between human and animal cognition.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Sayuri Hayakawa, Boaz Keysar Mental imagery plays a significant role in guiding how we feel, think, and even behave. These mental simulations are often guided by language, making it important to understand what aspects of language contribute to imagery vividness and consequently to the way we think. Here, we focus on the native-ness of language and present evidence that using a foreign language leads to less vivid mental imagery than using a native tongue. In Experiment 1, participants using a foreign language reported less vivid imagery of sensory experiences such as sight and touch than those using their native tongue. Experiment 2 provided an objective behavioral measure, showing that muted imagery reduced accuracy when judging the similarity of shapes of imagined objects. Lastly, Experiment 3 demonstrated that this reduction in mental imagery partly accounted for the previously observed foreign language effects in moral choice. Together, the findings suggest that our mental images change when using a foreign tongue, leading to downstream consequences for how we make decisions.

Abstract: Publication date: April 2018 Source:Cognition, Volume 173 Author(s): Eckart Zimmermann Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that apparent numerosity is derived from an analysis of more low-level features like the size and density of the set. The second theory suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex which are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation. I measured the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels, where receptive fields are comparably small, and the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist for generating the perception of visual numerosity. Whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the size of the area over which the objects are spread.

Abstract: Publication date: March 2018 Source:Cognition, Volume 172 Author(s): Calum Hartley, Sophie Fisher Ownership has a unique and privileged influence on human psychology. Typically developing (TD) children judge their objects to be more desirable and valuable than similar objects belonging to others. This ‘ownership effect’ is due to processing one’s property in relation to ‘the self’. Here we explore whether children with autism spectrum disorder (ASD) – a population with impaired self-understanding – prefer and over-value property due to ownership. In Experiment 1, we discovered that children with ASD did not favour a randomly endowed toy and frequently traded for a different object. By contrast, TD children showed a clear preference for their randomly endowed toy and traded infrequently. Both populations also demonstrated highly-accurate tracking of owner-object relationships. Experiment 2 showed that both TD children and children with ASD over-value their toys if they are self-selected and different from other-owned toys. Unlike TD children, children with ASD did not over-value their toys in comparison to non-owned identical copies. This finding was replicated in Experiment 3, which also established that mere ownership elicited over-valuation of randomly endowed property in TD children. However, children with ASD did not consistently regard their randomly endowed toys as the most valuable, and evaluated property irrespective of ownership. Our findings show that mere ownership increases preferences and valuations for self-owned property in TD children, but not children with ASD. We propose that deficits in self-understanding may diminish ownership effects in ASD, eliciting a more economically-rational strategy that prioritises material qualities (e.g. what a toy is) rather than whom it belongs to.

Abstract: Publication date: March 2018 Source:Cognition, Volume 172 Author(s): Rebecca M. Foerster, Werner X. Schneider Many everyday tasks involve successive visual-search episodes with changing targets. Converging evidence suggests that these targets are retained in visual working memory (VWM) and bias attention from there. It is unknown whether all or only search-relevant features of a VWM template bias attention during search. Bias signals might be configured exclusively to task-relevant features so that only search-relevant features bias attention. Alternatively, VWM might maintain objects in the form of bound features. Then, all template features will bias attention in an object-based manner, so that biasing effects are ranked by feature relevance. Here, we investigated whether search-irrelevant VWM template features bias attention. Participants had to saccade to a target opposite a distractor. A colored cue depicted the target prior to each search trial. The target was predefined only by its identity, while its color was irrelevant. When target and cue matched not only in identity (search-relevant) but also in color (search-irrelevant), saccades went more often and faster directly to the target than without any color match (Experiment 1). When introducing a cue-distractor color match (Experiment 2), direct target saccades were most likely when target and cue matched in the search-irrelevant color and least likely in case of a cue-distractor color match. When cue and target were never colored the same (Experiment 3), cue-colored distractors still captured the eyes more often than different-colored distractors despite color being search-irrelevant. As participants were informed about the misleading color, the result argues against a strategic and voluntary use of color. Instead, search-irrelevant features biased attention obligatorily, arguing for involuntary top-down control by object-based VWM templates.

Abstract: Publication date: March 2018 Source:Cognition, Volume 172 Author(s): Cornelius Maurer, Valerian Chambon, Sacha Bourgeois-Gironde, Marion Leboyer, Tiziana Zalla The present study was designed to investigate the effects of reputational priors and direct reciprocity on the dynamics of trust building in adults with (N = 17) and without (N = 25) autism spectrum disorder (ASD) using a multi-round Trust Game (MTG). On each round, participants, who played as investors, were required to maximize their benefits by updating their prior expectations (the partner’s positive or negative reputation), based on the partner’s direct reciprocity, and adjusting their own investment decisions accordingly. Results showed that reputational priors strongly oriented the initial decision to trust, operationalized as the amount of investment the investor shares with the counterpart. However, while typically developed participants were mainly affected by the direct reciprocity, and rapidly adopted the optimal Tit-for-Tat strategy, participants with ASD continued to rely on reputational priors throughout the game, even when experience of the counterpart’s actual behavior contradicted their prior-based expectations. In participants with ASD, the effect of the reputational prior never disappeared, and affected judgments of trustworthiness and reciprocity of the partner even after completion of the game. Moreover, the weight of prior reputation positively correlated with the severity of the ASD participants’ social impairments while the reciprocity score negatively correlated with the severity of repetitive and stereotyped behaviors, as measured by the Autism Diagnostic Interview–Revised (ADI-R). 
In line with Bayesian theoretical accounts, the present findings indicate that individuals with ASD have difficulties encoding incoming social information and using it to revise and flexibly update prior social expectations, and that this deficit might severely hinder social learning and everyday life interactions.

Abstract: Publication date: March 2018 Source:Cognition, Volume 172 Author(s): Stian Reimers, Chris Donkin, Mike E. Le Pelley When people consider a series of random binary events, such as tossing an unbiased coin and recording the sequence of heads (H) and tails (T), they tend to erroneously rate sequences with less internal structure or order (such as HTTHT) as more probable than sequences containing more structure or order (such as HHHHH). This is traditionally explained as a local representativeness effect: Participants assume that the properties of long sequences of random outcomes—such as an equal proportion of heads and tails, and little internal structure—should also apply to short sequences. However, recent theoretical work has noted that the probability of a particular sequence of, say, heads and tails of length n occurring within a larger (>n) sequence of coin flips actually differs by sequence, so P(HHHHH) < P(HTTHT). In this alternative account, people apply rational norms based on limited experience. We test these accounts. Participants in Experiment 1 rated the likelihood of occurrence for all possible strings of 4, 5, and 6 observations in a sequence of coin flips. Judgments were better explained by representativeness in alternation rate, relative proportion of heads and tails, and sequence complexity than by objective probabilities. Experiments 2 and 3 gave similar results using incentivized binary choice procedures. Overall, the evidence suggests that participants are not sensitive to variation in the objective probabilities of a sub-sequence occurring; they appear to use heuristics based on several distinct forms of representativeness.
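The counterintuitive probability fact this account rests on is easy to verify by brute force. The sketch below (an illustration, not the authors' code) enumerates every fair-coin sequence of a given length and counts those containing the target string; because HHHHH overlaps with itself, its occurrences clump together and it therefore appears in fewer fixed-length windows than HTTHT:

```python
from itertools import product

def occurrence_probability(pattern, n):
    """Probability that `pattern` occurs at least once somewhere
    within a sequence of n fair coin flips, by exhaustive
    enumeration of all 2**n equally likely sequences."""
    hits = sum(1 for flips in product("HT", repeat=n)
               if pattern in "".join(flips))
    return hits / 2 ** n
```

At a fixed position, every specific length-5 string has the same probability (1/32), which is exactly what enumeration over 5-flip sequences recovers; it is only the "occurs anywhere in a longer window" probability that differs by sequence, giving P(HHHHH) < P(HTTHT) for windows longer than five flips.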