Amongst philosophers and cognitive scientists, modularity remains a popular choice for an architecture of the human mind, primarily because of the supposed explanatory value of this approach. Modular architectures can vary both with respect to the strength of the notion of modularity and the scope of the modularity of mind. We propose a dilemma for modular architectures, no matter how these architectures vary along these two dimensions. First, if a modular architecture commits to the informational encapsulation of modules, as is the case for modularity theories of perception, then modules are on this account impenetrable. However, we argue that there are genuine cases of the cognitive penetrability of perception and that these cases challenge any strong, encapsulated modular architecture of perception. Second, many recent massive modularity theories weaken the strength of the notion of module, while broadening the scope of modularity. These theories do not require any robust informational encapsulation, and thus avoid the incompatibility with cognitive penetrability. However, the weakened commitment to informational encapsulation greatly weakens the explanatory force of the theory and, ultimately, is conceptually at odds with the core of modularity.

Since social skills are highly significant to the evolutionary success of humans, we should expect these skills to be efficient and reliable. For many Evolutionary Psychologists, efficiency entails encapsulation: the only way to get an efficient system is via information encapsulation. But encapsulation reduces reliability in opaque epistemic domains. And the social domain is darkly opaque: people lie and cheat, and deliberately hide their intentions and deceptions. Modest modularity [Currie and Sterelny (2000) Philos Q 50:145–160] attempts to combine efficiency and reliability. Reliability is obtained by placing social skills in un-encapsulated central cognition; efficiency by having the social system sensitive to encapsulated socially tagged cues. In this paper, I argue that this approach fails. I focus on eye-gaze as a plausible example of a socially significant encapsulated cue. I demonstrate, contra modest modularity, that eye-gaze is subject to influence from central cognition.

Is vision informationally encapsulated from cognition or is it cognitively penetrated? I shall argue that intentions penetrate vision in the experience of visual spatial constancy: the world appears to be spatially stable despite our frequent eye movements. I explicate the nature of this experience and critically examine and extend current neurobiological accounts of spatial constancy, emphasizing the central role of motor signals in computing such constancy. I then provide a stringent condition for failure of informational encapsulation that emphasizes a computational condition for cognitive penetration: cognition must serve as an informational resource for visual computation. This requires proposals regarding semantic information transfer, a crucial issue in any model of informational encapsulation. I then argue that intention provides an informational resource for computation of visual spatial constancy. Hence, intention penetrates vision.

To what extent are cognitive capacities, especially perceptual capacities, informationally encapsulated and to what extent are they cognitively penetrable? And why does this matter? Two reasons we care about encapsulation/penetrability are: (a) encapsulation is sometimes held to be definitional of modularity, and (b) penetrability has epistemological implications independent of modularity. I argue that modularity does not require encapsulation; that modularity may have epistemological implications independently of encapsulation; and that the epistemological implications of the cognitive penetrability of perception are messier than is sometimes thought.

My aim in this paper is to defend the view that the processes underlying early vision are informationally encapsulated. Following Marr (1982) and Pylyshyn (1999) I take early vision to be a cognitive process that takes sensory information as its input and produces the so-called primal sketches or shallow visual outputs: informational states that represent visual objects in terms of their shape, location, size, colour and luminosity. Recently, some researchers (Schirillo 1999, Macpherson 2012) have attempted to undermine the idea of the informational encapsulation of early vision by referring to experiments that seem to show that colour recognition is affected by the subject's beliefs about the typical colour of objects. In my view, however, one can reconcile the results of these experiments with the position that early vision is informationally encapsulated. Namely, I put forth a hypothesis according to which the early vision system has access to a local database that I call the mental palette and define as a network of associative links whose nodes stand for shapes and colours. The function of the palette is to facilitate colour recognition without employing central processes. I also describe two experiments by which the mental palette hypothesis can be tested.

Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when, and only when, it increases the reliability of perception.

This chapter critically assesses recent arguments that acquiring the ability to categorize an object as belonging to a certain high-level kind can cause the relevant kind property to be represented in visual phenomenal content. The first two arguments, developed respectively by Susanna Siegel (2010) and Tim Bayne (2009), employ an essentially phenomenological methodology. The third argument, developed by William Fish (2013), by contrast, is supported by an array of psychophysical and neuroscientific findings. I argue that while none of these arguments ultimately proves successful, there is a substantial body of empirical evidence that information originating outside the visual system can nonetheless modulate the way an object’s low-level attributes visually appear. Visual phenomenal content, I show, is not only significantly influenced by crossmodal interactions between vision and other exteroceptive senses such as touch and audition, but also by interactions between vision and non-perceptual systems involved in motor planning and construction of the proprioceptive body-image.

The focus of this research is the re-use of the Yakṣī image. The study of the evolution of a certain iconography induces one to face the problem of re-use in correlation to the transmission of images in time, and of their survival or transformation in historical and cultural environments different from the original ones. In fact, every different time period formulates its new iconography but above all it takes up again pre-existing images: these may be “revived” and, hence, may be rediscovered after a period of oblivion, or they may have “survived”, either maintaining their original aspect or having suffered modifications. In the figurative domain, re-use often serves as an explicit quotation in order to assert a new ideology, a new ruler, or a new religion. In this article we investigate the different aspects of the mechanism of re-use of the iconography of Yakṣī focusing, in the end, on the case-study of Cenna Keśava Temple at Beḷūr.

The direct reading emission spectrometer was developed during the 1940s. By substituting photo-multiplier tubes and electronics for photographic film spectrograms, the interpretation of spectral lines with a densitometer was avoided. Instead, the instrument provided the desired information concerning percentage concentration of elements of interest directly on a dial. Such instruments `de-skill' the job of making such measurements. They do this by encapsulating in the instrument the skills previously employed by the analyst, by `skilling' the instrument. This paper presents a history of the development of the Dow Chemical/Baird Associates direct reader. This history is used to argue for a materialist conception of knowledge. The instrument is a material form of knowledge, knowledge of aspects of spectroscopy, analytical spectrochemistry, electronics, instrument design and construction, and metal production industry economics.

According to Pylyshyn, the early visual system is able to categorize perceptual inputs into shape classes based on visual similarity criteria; it is also suggested that written words may be categorized within early vision. This speculation is contradicted by the fact that visually unrelated exemplars of a given letter (e.g., a/A) or word (e.g., read/READ) map onto common visual categories.

Thomas & Karmiloff-Smith (T&K-S) raise the excellent and, in retrospect, obvious point that in a dynamic learning environment where feedback is possible, we should expect networks to adapt to damage by altering details of their behavior. We should therefore not expect that developmental disorders should result in “normal” modules. The implications of this point go much further, since interprocess dependency in the brain does not rely only on learned neural connections. This argues strongly against behavioral and process-related definitions, as opposed to structural and architecture-related definitions, of mental modularity.

Churchland's paper "Perceptual Plasticity and Theoretical Neutrality" offers empirical, semantical and epistemological arguments intended to show that the cognitive impenetrability of perception "does not establish a theory-neutral foundation for knowledge" and that the psychological account of perceptual encapsulation that I set forth in The Modularity of Mind "[is] almost certainly false". The present paper considers these arguments in detail and dismisses them.

Language is at the core of the cognitive revolution that has transformed psychology over the last forty years or so, and it is also the central paradigm for the most prominent attempt to synthesise psychology and evolutionary theory. A single and distinctively modular view of language has emerged out of both these perspectives, one that encourages a certain idealisation. Linguistic competence is uniform, independent of other cognitive capacities, and with a developmental trajectory that is largely independent of environmental input (Pinker 1994; Pinker 1997). Thus language is seen as a paradigm of John Tooby and Leda Cosmides’ concept of “evoked culture”: linguistic experience serves only to select a specific item from a menu of innately available options (Tooby and Cosmides 1992). In explaining this concept, they appeal to the metaphor of a jukebox. The human genome pre-stores a set of options, and the different experiences provided by different cultures select different elements out of this option set. I think an appropriate evolutionary perspective on language substantially undercuts this idealisation and the evoked culture model of language. Variability between speakers, the sensitivity of linguistic development to environmental input, and the limits of encapsulation are not noise. They are central to language and its evolution.

Paul Churchland has recently argued that empirical evidence strongly suggests that perception is penetrable by the beliefs or theories held by individual perceivers (1988). While there has been much discussion of the sorts of psychological cases he presents, little has been said about his arguments from neurology. I offer a critical examination of his claim that certain efferents in the brain are evidence against perceptual encapsulation. I argue that his neurological evidence is inadequate to his philosophical goals, both by itself and taken in concert with his psychological evidence.

The view that moral cognition is subserved by a two-tiered architecture is defended: Moral reasoning is the result both of specialized, informationally encapsulated modules which automatically and effortlessly generate intuitions; and of general-purpose, cognitively penetrable mechanisms which enable moral judgment in the light of the agent's general fund of knowledge. This view is contrasted with rival architectures of social/moral cognition, such as Cosmides and Tooby's view that the mind is wholly modular, and it is argued that a two-tiered architecture is more plausible.

One of the most foundational and continually contested questions in the cognitive sciences is the degree to which the functional organization of the brain can be understood as modular. In its classic formulation, a module was defined as a cognitive sub-system with (all or most of) nine specific properties; the classic module is, among other things, domain specific, encapsulated (i.e. maintains proprietary representations to which other modules have no access), and implemented in dedicated neural substrates. Most of the examinations—and especially the criticisms—of the modularity thesis have focused on these properties individually, for instance by finding counterexamples in which otherwise good candidates for cognitive modules are shown to lack domain specificity or encapsulation. The current paper goes beyond the usual approach by asking what some of the broad architectural implications of the modularity thesis might be, and attempting to test for these. The evidence does not favor a modular architecture for the cortex. Moreover, the evidence suggests that the best way to approach the understanding of cognition is not by analyzing and modelling different functional domains (visual perception, attention, language, motor control, etc.) in isolation from the others, but rather by looking for points of overlap in their neural implementations, and exploiting these to guide the analysis and decomposition of the functions in question. This has significant implications for the question of how to approach the design and implementation of intelligent artifacts in general, and language-using robots in particular.

One major idea within the great epic of the Mahabharata is the concept of fate. Daiva, literally 'of the gods', could be said to direct or even manipulate every character and theme throughout the entire epic. The story of Nala and Damayanti offers us an opportunity for insight into Daiva within the epic as a whole. The short story, when placed in the Mahabharata, results in an interesting encapsulation of a love story, numerous metaphors and a tale of initial loss and eventual redemption. Through the investigation of each character's specific dharma, we will see that actions and consequences seemingly blend together, with an arguable disregard for the passage of time. Throughout the story of Nala and Damayanti, we will notice the overarching theme of fate. Human choice and divine authority are questioned as people and gods are unable to escape from what must be.

In Computer Science stepwise refinement of algebraic specifications is a well-known formal methodology for rigorous program development. This paper illustrates how techniques from Algebraic Logic, in particular that of interpretation, understood as a multifunction that preserves and reflects logical consequence, capture a number of relevant transformations in the context of software design, reuse, and adaptation, difficult to deal with in classical approaches. Examples include data encapsulation and the decomposition of operations into atomic transactions. But if interpretations open such a new research avenue in program refinement, (conceptual) tools are needed to reason about them. In this line, the paper’s main contribution is a study of the correspondence between logical interpretations and morphisms of a particular kind of coalgebras. This opens the way to the use of coalgebraic constructions, such as simulation and bisimulation, in the study of interpretations between (abstract) logics.

We respond to Farah (1994) by making some general remarks about information encapsulation and locality and asking how these are violated in her computational models. Our point is not that we disagree, but rather that Farah's treatment of the issues is not sufficiently rigorous to allow an evaluation of her claims.

Inspired by the thinking of authors such as Andrew Feenberg, Tim Ingold and Richard Sennett, this article sets forth substantial criticism of the ‘social uprooting of technology’ paradigm, which deterministically considers modern technology an autonomous entity, independent and indifferent to the social world (practices, skills, experiences, cultures, etc.). In particular, the authors focus on demonstrating that the philosophy, methodology and experience linked to open source technological development represent an emblematic case of re-encapsulation of the technical code within social relations (reskilling practices). Open source is discussed as a practice, albeit not a unique one, of community empowerment aimed at the participatory and shared rehabilitation of technological production ex ante. Furthermore, the article discusses the application of open source processes in the agro-biotechnological field, showing how they may support a more democratic endogenous development, capable of binding technological innovation more closely to the objectives of social sustainability (reducing inequalities) and environmental sustainability.

The target article argues for the modularity of language interpretive processes without the usual criterion that a module be informationally encapsulated. It is the encapsulation criterion, however, that gives modularity most of its testability. Without the criterion of encapsulation, testing whether relatively automatic comprehension processes use their own unique resource is a very tricky matter.

The journal Medical Humanities started life in 2000 as a special edition of the JME. However, the intellectual taproots of the medical humanities as a field of enquiry can be traced to two developments: calls made in the 1920s for the development of multidisciplinary perspectives on the sciences that shed historical light on their assumptions, methods and practices; and refusals to assimilate all medical phenomena to a biomedical worldview. The term 'medical humanities' stems from a desire to situate the significance of medicine as a product of culture. But despite growing usage over half a century, the term defies a unifying encapsulation and continues to conjure up a multitude of discourse communities, including scholars working at the interfaces of health and humanities, arts and health, and medical education and bioethics. The field is intellectually capacious and polymorphous, forming and reforming around critical new research questions and teaching tasks spanning disciplines.

This paper presents use cases for modular development of ontologies using the OWL imports mechanism. Many of the methods are inspired by work in modular development in software engineering. The approach is aimed at developers of large ontologies covering multiple subdomains that make use of OWL reasoners for inference. Such ontologies are common in biomedical sciences, but nothing in the paper is specific to biomedicine. There are four groups of use cases: (i) organisation and factoring of ontologies; (ii) maintaining stable interfaces and bindings between ontologies and between ontologies and software; (iii) localization of ontologies to the requirements of specific sites; and (iv) extension of ontologies and encapsulation of modifications. OWL's axiom-oriented import mechanism has many similarities with import mechanisms in object-oriented software but also important differences – in particular, the effects of OWL imports are global, and the order in which modules are imported is irrelevant. The advantages and disadvantages of OWL's axiom-oriented approach are discussed, and suggestions are made for extensions to allow axioms to be filtered out as well as added – a mechanism that we term “adaptation” to distinguish it from the standard import mechanism. Finally, we discuss possible alternatives and practical experience with the approaches presented.
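The global, order-insensitive semantics of OWL imports described in this abstract can be sketched as a set union over the transitive import closure. The module names and axiom strings below are hypothetical illustrations, and the sketch abstracts away entirely from OWL parsing and reasoning:

```python
def import_closure(module, imports, axioms):
    """Collect the axioms of a module together with those of every
    ontology it (transitively) imports. Because the result is a set
    union over all reachable modules, the effect of an import is
    global and the order of import statements is irrelevant."""
    seen, stack, result = set(), [module], set()
    while stack:
        m = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        result |= axioms.get(m, set())
        stack.extend(imports.get(m, []))
    return result

# Hypothetical modular ontology: 'cell' imports 'anatomy' and 'chemistry'.
axioms = {
    "anatomy": {"Organ subClassOf BodyPart"},
    "chemistry": {"Protein subClassOf Molecule"},
    "cell": {"Cell subClassOf AnatomicalEntity"},
}
imports = {"cell": ["anatomy", "chemistry"]}

closure = import_closure("cell", imports, axioms)
```

Reversing the import list leaves the closure unchanged, mirroring the order-irrelevance the paper notes; the proposed "adaptation" mechanism would amount to filtering axioms out of this union, something the plain union semantics cannot express.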

The recently retired Homeland Security Advisory System constituted a main means by which the intensity of the terrorist threat was communicated to the United States' public. An examination of its inner workings and its social impact shows the System as part of a modality of government: an encapsulation of intelligence-led governmentality. Informed by the political philosophy of Cornelius Castoriadis, I contextualise this modality as a settling of fundamental tensions inherent in modern sociopolitical culture, those between the principle of social and personal autonomy, and that of rational mastery of people and nature. These principles are strongly connected to democratic and oligarchic political organisation, respectively, and they give rise to different justifications of state authority. In turn, they pertain to the fundamental question of whether scientific expertise on politics is possible.

This research concentrates on the design and analysis of an algorithm referred to as Virtual Network Configuration (VNC) which uses predicted future states of a system for faster network configuration and management. VNC is applied to the configuration of a wireless mobile ATM network. VNC is built on techniques from parallel discrete event simulation merged with constraints from real-time systems and applied to mobile ATM configuration and handoff. Configuration in a mobile network is a dynamic and continuous process. Factors such as load, distance, capacity and topology are all constantly changing in a mobile environment. The VNC algorithm anticipates configuration changes and speeds the reconfiguration process by pre-computing and caching results. VNC propagates local prediction results throughout the VNC enhanced system. The Global Positioning System is an enabling technology for the use of VNC in mobile networks because it provides location information and accurate time for each node. This research has resulted in well-defined structures for the encapsulation of physical processes within Logical Processes and a generic library for enhancing a system with VNC. Enhancing an existing system with VNC is straightforward, assuming the existing physical processes do not have side effects. The benefit of prediction is gained at the cost of additional traffic and processing. This research includes an analysis of VNC and suggestions for optimization of the VNC algorithm and its parameters.
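The pre-compute-and-cache step at the heart of VNC can be illustrated with a minimal sketch. The class, state names, and cost model below are hypothetical stand-ins for the dissertation's prediction machinery, not its actual interfaces:

```python
class PredictiveConfigCache:
    """Toy illustration of precomputing configurations for predicted
    future states, so that when a state actually occurs the expensive
    configuration step is replaced by a cache lookup."""

    def __init__(self, compute_config):
        self.compute_config = compute_config  # expensive configuration function
        self.cache = {}

    def precompute(self, predicted_states):
        # Done ahead of time, e.g. from GPS-derived position predictions.
        for state in predicted_states:
            self.cache[state] = self.compute_config(state)

    def configure(self, actual_state):
        # Fast path: the prediction was correct and the result is cached.
        if actual_state in self.cache:
            return self.cache[actual_state]
        # Slow path: misprediction, fall back to on-demand computation.
        return self.compute_config(actual_state)

# Hypothetical usage: configuration as a function of a node's cell location.
cache = PredictiveConfigCache(lambda cell: f"handoff-plan-for-{cell}")
cache.precompute(["cell-7", "cell-8"])
plan = cache.configure("cell-8")
```

The trade-off the abstract identifies shows up directly here: `precompute` spends extra work (and, in the real system, extra traffic) on states that may never occur, in exchange for a cheap lookup when a predicted state does occur.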

The phenomenal concept strategy is considered a powerful response to anti-physicalist arguments. This physicalist strategy aims to provide a satisfactory account of dualist intuitions without being committed to ontological dualist conclusions. In this paper I first argue that physicalist accounts of phenomenal concepts fail to explain their cognitive role. Second, I develop an encapsulation account of phenomenal concepts that best explains their particularities. Finally, I argue that the encapsulation account, which features self-representing experiences, implies non-physical referents. Therefore, the account of phenomenal concepts that has strong explanatory power does not explain away dualist intuitions—rather, it reinforces dualism.

In the Philosophy of Cognitive Science, it is a commonly held view that the modularity hypothesis for cognitive mechanisms and the innateness hypothesis for mental contents are conceptually independent. In this paper I distinguish between substantial and deflationist modularity as well as between substantial and deflationist innatism, and I analyze whether the conceptual independence between substantial modularity and innatism holds. My conclusion will be that if what is taken into account are the essential properties of the substantial modules, i.e. domain specificity and informational encapsulation, then there seems to be such independence. However, if what is taken into account is the function of the substantial modules, then there seems to be a conceptual connection from modularity to substantial innateness.

The broad objective of this paper is to examine the evolution of gendered aspects of livelihood strategies and their interaction with various development interventions. Central to this is an empirical analysis of gendered divisions of labor in the context of rapidly changing pastoralist livelihoods. The paper begins with a literature review on gender roles in pastoralist societies. Two important gaps in the existing literature are identified. First, studies on gender roles are too often studies on women’s roles, as men’s roles are rarely included. Second, despite a recognition that pastoral livelihoods are rapidly changing, much of the research has ignored the gendered impacts of this change. The study area is Loitokitok Division, Kajiado District, Kenya. Field data were collected in an extensive household survey, key informant interviews, and group discussions held in two field seasons between 2001 and 2004. Results indicate that development interventions led to land use encapsulation, sedentarization, new ways of accessing dry season grazing areas, new land uses, new livestock breeds, and increased school enrollment. In the context of these livelihood changes and increasing drought, a fundamental shift in gendered roles in livestock production has occurred. Maasai women in the study area contribute more labor to livestock production than men do. Various efforts to modernize the livestock sector are leading to a loss of women’s control of milk resources. This finding has important implications for current and future development interventions in pastoralist communities and their ability to improve livelihoods of the most vulnerable sections of the population.

It is widely accepted that the ethical supervenes on the natural, where this is roughly the claim that it is impossible for two circumstances to be identical in all natural respects, but different in their ethical respects. This chapter refines and defends the traditional thought that this fact poses a significant challenge to ethical non-naturalism, a view on which ethical properties are fundamentally different in kind from natural properties. The challenge can be encapsulated in three core claims which the chapter defends: that a defensible non-naturalism is committed to the supervenience of the ethical, that this commits the non-naturalist to a brute necessary connection between properties of distinct kinds, and that commitment to such brute connections counts against the non-naturalist’s view. Each of these claims has recently been challenged. Against Nicholas Sturgeon’s recent doubts about the dialectical force of supervenience, this chapter defends a supervenience thesis as deserving to be common ground among ethical realists. It is then argued that attempts to explain supervenience on behalf of the non-naturalist either fail as explanations, generate near-identical explanatory burdens elsewhere, or appeal to commitments that are inconsistent with core motivations for non-naturalism. The chapter concludes that, suitably refined, the traditional argument against non-naturalism from supervenience is alive and well.

In Dynamics of Reason Michael Friedman proposes a kind of synthesis between the neokantianism of Ernst Cassirer, the logical empiricism of Rudolf Carnap, and the historicism of Thomas Kuhn. Cassirer and Carnap are to take care of the Kantian legacy of modern philosophy of science, encapsulated in the concept of a relativized a priori and the globally rational or continuous evolution of scientific knowledge, while Kuhn's role is to ensure that the historicist character of scientific knowledge is taken seriously. More precisely, Carnapian linguistic frameworks guarantee that the evolution of science proceeds in a rational manner locally, while Cassirer's concept of an internally defined conceptual convergence of empirical theories provides the means to maintain the global continuity of scientific reason. In this paper it is argued that Friedman's neokantian account of scientific reason based on the concept of the relativized a priori underestimates the pragmatic aspects of the dynamics of scientific reason. To overcome this shortcoming, I propose to reconsider C.I. Lewis's account of a pragmatic a priori, recently modernized and elaborated by Hasok Chang. Keywords: Dynamics of reason, Paradigms, Logical Empiricism, Neokantianism, Pragmatism, Mathematics, Communicative Rationality.

I present an argument that encapsulates the view that theory is underdetermined by evidence. I show that if we accept Williamson's equation of evidence and knowledge, then this argument is question-begging. I examine ways in which defenders of underdetermination may avoid this criticism. I also relate this argument and my critique to van Fraassen's constructive empiricism.

Inferentialism claims that expressions are meaningful by virtue of rules governing their use. In particular, logical expressions are autonomous if given meaning by their introduction-rules, rules specifying the grounds for assertion of propositions containing them. If the elimination-rules do no more, and no less, than is justified by the introduction-rules, the rules satisfy what Prawitz, following Lorenzen, called an inversion principle. This connection between rules leads to a general form of elimination-rule, and when the rules have this form, they may be said to exhibit “general-elimination” harmony. Ge-harmony ensures that the meaning of a logical expression is clearly visible in its I-rule, and that the I- and E-rules are coherent, in encapsulating the same meaning. However, it does not ensure that the resulting logical system is normalizable, nor that it satisfies the conservative extension property, nor that it is consistent. Thus harmony should not be identified with any of these notions.

Cognitive science is, more than anything else, a pursuit of cognitive mechanisms. To make headway towards a mechanistic account of any particular cognitive phenomenon, a researcher must choose among the many architectures available to guide and constrain the account. It is thus fitting that this volume on contemporary debates in cognitive science includes two issues of architecture, each articulated in the 1980s but still unresolved: • Just how modular is the mind? – a debate initially pitting encapsulated mechanisms against highly interactive ones. • Does the mind process language-like representations according to formal rules? – a debate initially pitting symbolic architectures against less language-like architectures. Our project here is to consider the second issue within the broader context of where cognitive science has been and where it is headed. The notion that cognition in general—not just language processing—involves rules operating on language-like representations actually predates cognitive science. In traditional philosophy of mind, mental life is construed as involving propositional attitudes—that is, such attitudes towards propositions as believing, fearing, and desiring that they be true—and logical inferences from them. On this view, if a person desires that a proposition be true and believes that if she performs a certain action it will become true, she will make the inference and perform the action.

The notion of the absolute time-constituting flow plays a central role in Edmund Husserl’s analysis of our consciousness of time. I offer a novel reading of Husserl’s remarks on the absolute flow, on which Husserl can be seen to be grappling with two key intuitions that are still at the centre of current debates about temporal experience. One of them is encapsulated by what is sometimes referred to as an intentionalist (as opposed to an extensionalist) approach to temporal experience. The other centres on the thought that temporal experience itself necessarily unfolds over time. I show how some of Husserl’s more enigmatic-sounding remarks about the absolute flow become intelligible if they are read as attempts to accommodate both these intuitions at the same time. However, I also question whether Husserl ultimately provides good reasons for preferring his intentionalist approach to a rival extensionalist one.

It is natural to assume that the fine-grained and highly accurate spatial information present in visual experience is often used to guide our bodily actions. Yet this assumption has been challenged by proponents of the Two Visual Systems Hypothesis (TVSH), according to which visuomotor programming is the responsibility of a “zombie” processing stream whose sources of bottom-up spatial information are entirely non-conscious. In many formulations of TVSH, the role of conscious vision in action is limited to “recognizing objects, selecting targets for action, and determining what kinds of action, broadly speaking, to perform”. Our aim in this study is to show that the available evidence not only fails to support this dichotomous view but actually reveals a significant role for conscious vision in motor programming, especially for actions that require deliberate attention.

In my book How the Mind Works, I defended the theory that the human mind is a naturally selected system of organs of computation. Jerry Fodor claims that 'the mind doesn't work that way' (in a book with that title) because (1) Turing Machines cannot duplicate humans' ability to perform abduction (inference to the best explanation); (2) though a massively modular system could succeed at abduction, such a system is implausible on other grounds; and (3) evolution adds nothing to our understanding of the mind. In this review I show that these arguments are flawed. First, my claim that the mind is a computational system is different from the claim Fodor attacks (that the mind has the architecture of a Turing Machine); therefore the practical limitations of Turing Machines are irrelevant. Second, Fodor identifies abduction with the cumulative accomplishments of the scientific community over millennia. This is very different from the accomplishments of human common sense, so the supposed gap between human cognition and computational models may be illusory. Third, my claim about biological specialization, as seen in organ systems, is distinct from Fodor's own notion of encapsulated modules, so the limitations of the latter are irrelevant. Fourth, Fodor's arguments dismissing the relevance of evolution to psychology are unsound.

Variabilism is the view that proper names (like pronouns) are semantically represented as variables. Referential names, like referential pronouns, are assigned their referents by a contextual variable assignment (Kaplan 1989). The reference parameter (like the world of evaluation) may also be shifted by operators in the representation language. Indeed, verbs that create hyperintensional contexts, like ‘think’, are treated as operators that simultaneously shift the world and assignment parameters. By contrast, metaphysical modal operators shift the world of assessment only. Names, being variables, refer rigidly in the latter merely intensional contexts, but may vary their reference in hyperintensional contexts. This conforms to the intuition that the content of attitude ascriptions encapsulates referential uncertainty. Furthermore, names in hyperintensional contexts are ambiguous between de re* and de dicto* interpretations. This fact is used to account for asymmetric mistaken identity attributions (for example, Biron thinks Katherine is Rosaline, but he doesn’t think Rosaline is Katherine). The variable theory compares favourably with its alternatives, including Millianism and descriptivism. Millians cannot account for the behaviour of names in hyperintensional contexts, while descriptivists cannot generate a necessary contrast between intensional and hyperintensional contexts. No other theory can capture the facts pertaining to the existentially bound use of names.

In a shift of position that has gone largely unnoticed by the great majority of commentators, Thomas Kuhn's version of the incommensurability thesis underwent a major transformation over the last decade and a half of his life. In his later work, Kuhn argued that incommensurability is a relation of translation failure between local subsets of interdefined theoretical terms, which encapsulate the taxonomic structure of a theory. Incommensurability arises because it is impossible to transfer the natural categories employed within one taxonomic structure into the categorial system of another such structure. Apparently on the basis of such taxonomic incommensurability, Kuhn asserted a number of antirealist theses about truth, reference and reality. In this paper, it will be argued, however, that, far from leading to antirealist consequences about the relationship between theory and reality, the taxonomic incommensurability thesis may be incorporated unproblematically within a reasonably robust scientific realist framework.

The idea that there is a “Number Sense” (Dehaene, 1997) or “Core Knowledge” of number ensconced in a modular processing system (Carey, 2009) has gained popularity as the study of numerical cognition has matured. However, these claims are generally made with little, if any, detailed examination of which modular properties are instantiated in numerical processing. In this article, I aim to rectify this situation by detailing the modular properties on display in numerical cognitive processing. In the process, I review literature from across the cognitive sciences and describe how the evidence reported in these works supports the hypothesis that numerical cognitive processing is modular. I outline the properties that would suffice for deeming a certain processing system a modular processing system. Subsequently, I use behavioral, neuropsychological, philosophical, and anthropological evidence to show that the number module is domain specific, informationally encapsulated, neurally localizable, subject to specific pathological breakdowns, mandatory, fast, and inaccessible at the person level; in other words, I use the evidence to demonstrate that some of our numerical capacity is housed in modular casing.