In this paper, I review the objections against the claim that brains are computers, or, to be precise, information-processing mechanisms. By showing that practically all the popular objections are either based on an uncharitable interpretation of the claim, or simply wrong, I argue that the claim is likely to be true, relevant to contemporary cognitive (neuro)science, and non-trivial.

The claim defended in the paper is that the mechanistic account of explanation can easily embrace idealization in large-scale brain simulations, and that only causally relevant detail should be present in explanatory models. The claim is illustrated with two methodologically different models: Blue Brain, used for particular simulations of the cortical column in hybrid models, and Eliasmith’s SPAUN model, which is both biologically realistic and able to explain eight different tasks. By drawing on the mechanistic theory of computational explanation, I argue that large-scale simulations require that the explanandum phenomenon is identified; otherwise, the explanatory value of such explanations is difficult to establish, and testing the model empirically by comparing its behavior with the explanandum remains practically impossible. The completeness of the explanation, and hence the explanatory value of the explanatory model, is to be assessed vis-à-vis the explanandum phenomenon, which is not to be conflated with raw observational data and may be idealized. I argue that idealizations, which include building models of a single phenomenon displayed by multi-functional mechanisms, lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration, are indispensable for complex systems such as brains; otherwise, the model may be as complex as the explanandum phenomenon, which would make it prone to the so-called Bonini paradox. I conclude by enumerating the dimensions of empirical validation of explanatory models according to the new mechanism, given in the form of a “checklist” for the modeler.

In this paper, we defend a novel, multidimensional account of representational unification, which we distinguish from integration. The dimensions of unity are simplicity, generality and scope, non-monstrosity, and systematization. In our account, unification is a graded property. The account is used to investigate the issue of how research traditions contribute to representational unification, focusing on embodied cognition in cognitive science. Embodied cognition contributes to unification even if it fails to offer a grand unification of cognitive science. The study of this failure shows that unification, contrary to what defenders of mechanistic explanation claim, is an important mechanistic virtue of research traditions.

In this paper, the author reviews the typical objections against the claim that brains are computers, or, to be more precise, information-processing mechanisms. By showing that practically all the popular objections are based on uncharitable interpretations of the claim, he argues that the claim is likely to be true, relevant to contemporary cognitive science, and non-trivial.

In this paper, I argue that even if the Hard Problem of Content, as identified by Hutto and Myin, is important, it was already solved in naturalized semantics, and satisfactory solutions to the problem do not rely merely on the notion of information as covariance. I point out that Hutto and Myin have double standards for linguistic and mental representation, which leads to a peculiar inconsistency. Were they to apply the same standards to basic and linguistic minds, they would either have to embrace representationalism or turn to semantic nihilism, which is, as I argue, an unstable and unattractive position. Hence, I conclude, their book does not offer an alternative to representationalism. At the same time, it reminds us that representational talk in cognitive science cannot be taken for granted and that information is different from mental representation. Although this claim is not new, Hutto and Myin defend it forcefully and elegantly.

In this paper, we argue that several recent ‘wide’ perspectives on cognition (embodied, embedded, extended, enactive, and distributed) are only partially relevant to the study of cognition. While these wide accounts override traditional methodological individualism, the study of cognition has already progressed beyond these proposed perspectives towards building integrated explanations of the mechanisms involved, including not only internal submechanisms but also interactions with others, groups, cognitive artifacts, and their environment. The claim is substantiated with reference to recent developments in the study of “mindreading” and debates on emotions. We claim that the current practice in cognitive (neuro)science has undergone, in effect, a silent mechanistic revolution, and has turned from initial binary oppositions and abstract proposals towards the integration of wide perspectives with the rest of the cognitive (neuro)sciences.

The purpose of this paper is to present a general mechanistic framework for analyzing causal representational claims, and offer a way to distinguish genuinely representational explanations from those that invoke representations for honorific purposes. It is usually agreed that rats are capable of navigation because they maintain a cognitive map of their environment. Exactly how and why their neural states give rise to mental representations is a matter of ongoing debate. I will show that anticipatory mechanisms involved in rats’ evaluation of possible routes give rise to satisfaction conditions of contents, and this is why they are representationally relevant for explaining and predicting rats’ behavior. I argue that a naturalistic account of satisfaction conditions of contents answers the most important objections of antirepresentationalists.

In this article, after presenting the basic idea of causal accounts of implementation and the problems they are supposed to solve, I sketch the model of computation preferred by Chalmers and argue that it is too limited to do full justice to computational theories in cognitive science. I also argue that it does not suffice to replace Chalmers’ favorite model with a better abstract model of computation; it is necessary to acknowledge the causal structure of physical computers that is not accommodated by the models used in computability theory. Additionally, an alternative mechanistic proposal is outlined.

In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay, it grows stronger every year.

Replicability and reproducibility of computational models have been somewhat understudied by “the replication movement.” In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using original code and data, and model reproducibility, or independent researchers' ability to recreate a model without original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors' omitting to provide crucial information in scientific papers, and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only relevant bits of code.

Cognitive science is an interdisciplinary conglomerate of various research fields and disciplines, which increases the risk of fragmentation of cognitive theories. However, while most previous work has focused on theoretical integration, some kinds of integration may turn out to be monstrous, or result in superficially lumped and unrelated bodies of knowledge. In this paper, I distinguish theoretical integration from theoretical unification, and propose some analyses of theoretical unification dimensions. Moreover, two research strategies that are supposed to lead to unification are analyzed in terms of the mechanistic account of explanation. Finally, I argue that theoretical unification is not an absolute requirement from the mechanistic perspective, and that strategies aiming at unification may be premature in fields where there are multiple conflicting explanatory models.

In this paper, I show how semantic factors constrain the understanding of the computational phenomena to be explained so that they help build better mechanistic models. In particular, understanding what cognitive systems may refer to is important in building better models of cognitive processes. For that purpose, a recent study of some phenomena in rats that are capable of ‘entertaining’ future paths (Pfeiffer and Foster 2013) is analyzed. The case shows that the mechanistic account of physical computation may be complemented with semantic considerations, and in many cases, it actually should.

In this paper, an account of theoretical integration in cognitive (neuro)science from the mechanistic perspective is defended. It is argued that mechanistic patterns of integration can be better understood in terms of constraints on representations of mechanisms, not just on the space of possible mechanisms, as previous accounts of integration had it. This way, integration can be analyzed in more detail with the help of the constraint-satisfaction account of coherence between scientific representations. In particular, the account has resources to talk of idealizations and research heuristics employed by researchers to combine separate results and theoretical frameworks. The account is subsequently applied to an example of successful integration in the research on hippocampus and memory, and to a failure of integration in the research on mirror neurons as purportedly explanatory of sexual orientation.

The paper defends the claim that the mechanistic explanation of information processing is the fundamental kind of explanation in cognitive science. These mechanisms are complex organized systems whose functioning depends on the orchestrated interaction of their component parts and processes. A constitutive explanation of every mechanism must appeal both to its environment and to the role the mechanism plays in it. This role has been traditionally dubbed competence. To fully explain how this role is played, it is necessary to explain the information processing inside the mechanism embedded in the environment. The most usual explanation at this level takes the form of a computational model, for example a software program or a trained artificial neural network. However, this is not the end of the explanatory chain. What is left to be explained is how the program is realized (or what processes are responsible for information processing in the artificial neural network). By using two dramatically different examples from the history of cognitive science, I show the multi-level structure of explanations in cognitive science. These examples are (1) the explanation of human problem solving as proposed by A. Newell & H. Simon; (2) the explanation of cricket phonotaxis via robotic models by B. Webb.

In this paper, the role of the environment and physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon’s perspective, the embodied view on cognition seems natural, but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that this is notoriously over-appreciated in the current debate. On the new mechanistic view of explanation, even if it is critical to situate a mechanism in its environment and study its physical composition, or realization, not all detail counts, and some bodily features of cognitive systems should be left out of explanations.

I discuss whether there are lessons for philosophical inquiry into the nature of simulation to be learnt from the practical methodology of reengineering. I will argue that reengineering serves a similar purpose to simulations in theoretical sciences such as computational neuroscience or neurorobotics, and that the procedures and heuristics of reengineering help to develop solutions to outstanding problems of simulation.

In most accounts of realization of computational processes by physical mechanisms, it is presupposed that there is one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically. In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.

Is the mathematical function being computed by a given physical system determined by the system’s dynamics? This question is at the heart of the indeterminacy of computation phenomenon (Fresco et al. [unpublished]). A paradigmatic example is a conventional electrical AND-gate that is often said to compute conjunction, but it can just as well be used to compute disjunction. Despite the pervasiveness of this phenomenon in physical computational systems, it has been discussed in the philosophical literature only indirectly, mostly with reference to the debate over realism about physical computation and computationalism. A welcome exception is Dewhurst’s ([2018]) recent analysis of computational individuation under the mechanistic framework. He rejects the idea of appealing to semantic properties for determining the computational identity of a physical system. But Dewhurst seems to be too quick to pay the price of giving up the notion of computational equivalence. We aim to show that the mechanist need not pay this price. The mechanistic framework can, in principle, preserve the idea of computational equivalence even between two different enough kinds of physical systems, say, electrical and hydraulic ones.
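The AND/OR gate example can be made concrete with a minimal sketch in Python (all names and labeling conventions below are illustrative assumptions, not taken from the paper): one fixed "physical" input-output mapping over voltage levels yields conjunction under one labeling of the levels and disjunction under the inverted labeling.

```python
# Illustrative sketch (not from the paper): one physical mapping, two computational readings.
HIGH, LOW = "high", "low"

def physical_gate(a, b):
    """The 'physics': output voltage is HIGH iff both inputs are HIGH."""
    return HIGH if (a == HIGH and b == HIGH) else LOW

# Two labeling conventions for the same voltage levels (assumed for illustration):
positive = {HIGH: 1, LOW: 0}   # read HIGH as logical 1
negative = {HIGH: 0, LOW: 1}   # read HIGH as logical 0

def logical_table(labels):
    """Translate the gate's physical behavior into a logical truth table."""
    return {
        (labels[a], labels[b]): labels[physical_gate(a, b)]
        for a in (HIGH, LOW)
        for b in (HIGH, LOW)
    }

AND = {(1, 1): 1, (1, 0): 0, (0, 1): 0, (0, 0): 0}
OR  = {(1, 1): 1, (1, 0): 1, (0, 1): 1, (0, 0): 0}

# The same dynamics computes AND under one convention and OR under the other.
print(logical_table(positive) == AND)  # True
print(logical_table(negative) == OR)   # True
```

The dynamics of `physical_gate` never changes; only the mapping from voltages to logical values does, which is exactly the sense in which the computed function is underdetermined by the physics.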

Is there a field of social intelligence? Many different disciplines approach the subject, and it may only seem natural to suppose that different fields of study aim at explaining different phenomena; in other words, that there is no special field of study of social intelligence. In this paper, I argue for the opposite claim. Namely, there is a way to integrate research on social intelligence, as long as one accepts the mechanistic account of explanation. Mechanistic integration of different explanations, however, comes at a cost: mechanism requires explanatory models to be fairly complete and realistic, and this does not seem to be the case for many models concerning social intelligence, especially models of economic behavior. Such models need either be made more realistic, or they would not count as contributing to the same field. I stress that the focus on integration does not lead to ruthless reductionism; on the contrary, mechanistic explanations are best understood as explanatorily pluralistic.

The purpose of this paper is to argue against the claim that morphological computation is substantially different from other kinds of physical computation. I show that some (but not all) purported cases of morphological computation do not count as specifically computational, and that those that do are solely physical computational systems. These latter cases are not, however, specific enough: all computational systems, not only morphological ones, may (and sometimes should) be studied in various ways, including their energy efficiency, cost, reliability, and durability. Second, I critically analyze the notion of “offloading” computation to the morphology of an agent or robot, by showing that, literally, computation is sometimes not offloaded but simply avoided. Third, I point out that while the morphology of any agent is indicative of the environment that it is adapted to, or informative about that environment, it does not follow that every agent has access to its morphology as the model of its environment.

Many philosophers use “physicalism” and “naturalism” interchangeably. In this paper, I will distinguish ontological naturalism from physicalism. While broad versions of physicalism are compatible with naturalism, naturalism doesn't have to be committed to strong versions of physical reductionism, so it cannot be defined as equivalent to physicalism. Instead of relying on the notion of ideal physics, naturalism can refer to the notion of an ideal natural science that doesn't imply the unity of science. The notion of ideal natural science, as well as the notion of ideal physics, will be vindicated. I will briefly explicate the notion of ideal natural science, and define ontological naturalism based on it.

The paper proposes an empirical method to investigate linguistic prescriptions as inherent corrective behaviors. The behaviors in question may, but need not, be supported by any explicit knowledge of rules. It is possible to gain insight into them, for example by extracting information about corrections from revision histories of texts (or by analyzing speech corpora where users correct themselves or one another). One easily available source of such information is the revision history of Wikipedia. As is shown, the most frequent and short corrections are limited to linguistic errors such as typos (and editorial conventions adopted in Wikipedia). By perusing an automatically generated revision corpus, one gains insight into the prescriptive nature of language empirically. At the same time, the prescriptions offered are not reducible to descriptions of the most frequent linguistic use.

Multiple realizability (MR) is traditionally conceived of as a feature of computational systems, and has been used to argue for the irreducibility of higher-level theories. I will show that there are several ways a computational system may be seen to display MR. These ways correspond to (at least) five ways one can conceive of the function of the physical computational system. However, they do not match common intuitions about MR. I show that MR is deeply interest-related, and for this reason, difficult to pin down exactly. I claim that MR is of little importance for defending computationalism, and argue that computationalism should rather appeal to organizational invariance or substrate neutrality of computation, which are much more intuitive but cannot support strong antireductionist arguments.

In Darwin’s Dangerous Idea, Daniel Dennett claims that evolution is algorithmic. On Dennett’s analysis, evolutionary processes are trivially algorithmic because he assumes that all natural processes are algorithmic. I will argue that there are more robust ways to understand algorithmic processes that make the claim that evolution is algorithmic empirical and not conceptual. While laws of nature can be seen as compression algorithms of information about the world, it does not follow logically that they are implemented as algorithms by physical processes. For that to be true, the processes have to be part of computational systems. The basic difference between mere simulation and real computing is having proper causal structure. I will show what kind of requirements this poses for natural evolutionary processes if they are to be computational.

Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper evaluation of models does not depend on whether they target real biological agents or not; instead, the standards of evaluation depend on the use of models rather than on the reality of their targets. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.

Nietzsche's treatment of Epicurus is an interesting example of philosophical hermeneutics. Epicurus has been notoriously misinterpreted, claims Nietzsche, because his mask has been taken for his true face. Traditionally, Epicurus is presented as a utilitarian or hedonist avant la lettre. This is a simplification motivated by a desire to deprecate his philosophy. To Nietzsche, Epicurus was „an idyllic hero”, a teacher with aristocratic predilections and his own concept of good, critical of the traditional form of religion and of the „pre-existent form of Christianity”. As a hedonist he was much less convincing, as he was afraid of both pain and pleasure. He assumed the mask of an epicure in order to hide his true self. Nietzsche warns us over and over again not to trust the traditional interpretation of Epicurus and urges us to penetrate beyond the veil of a stylish disguise.

Recent work on the skin-brain thesis suggests the possibility of empirical evidence that empiricism is false. It implies that early animals need no traditional sensory receptors to be engaged in cognitive activity. The neural structure required to coordinate extensive sheets of contractile tissue for motility provides the starting point for a new multicellular organized form of sensing. Moving a body by muscle contraction provides the basis for a multicellular organization that is sensitive to external surface structure at the scale of the animal body. In other words, the nervous system first evolved for action, not for receiving sensory input. Thus, sensory input is not required for minimal cognition; only action is. The whole body of an organism, in particular its highly specific animal sensorimotor organization, reflects the bodily and environmental spatiotemporal structure. The skin-brain thesis suggests that, in contrast to empiricist claims that cognition is constituted by sensory systems, cognition may also be constituted by action-oriented feedback mechanisms. Instead of positing the reflex arc as the elementary building block of nervous systems, it proposes that endogenous motor activity is crucial for cognitive processes. In the paper, I discuss whether the skin-brain thesis and its supporting evidence can really be used to overthrow the main tenet of empiricism empirically, by pointing to cognizing agents that lack any sensory apparatus.

The standard objection against naturalized epistemology is that it cannot account for normativity in epistemology (Putnam 1982; Kim 1988). There are different ways to deal with it. One of the obvious ways is to say that the objection misses the point: it is not a bug, it is a feature, as there is nothing interesting in normative principles in epistemology. Normative epistemology deals with norms, but they are of no use in practice. They are far too general to be guiding principles of research, up to the point that they even seem vacuous (see Knowles 2003). In this chapter, my strategy will be different and more in the spirit of the founding father of naturalized epistemology, Quine, though not faithful to the letter. I focus on methodological prescriptions supplied by cognitive science in the re-engineering of cognitive architectures. Engineering norms based on mechanism design have not been treated as seriously as they should be in epistemology, and that is why I will develop a sketch of a framework for researching them, starting from analysing cognitive science as engineering in section 3, then showing functional normativity in section 4, to eventually present functional engineering models of cognitive mechanisms as normative in section 5. Yet before showing the kind of engineering normativity specific to these prescriptions, it is worthwhile to review briefly the role of normative methodology and the levels of norm complexity in it, and show how it follows Quine’s steps.