Does computation
require representation? To what extent should representation figure within
computational models? Can representational properties causally influence
computation? How central an explanatory role should semantics occupy within
computational psychology? Is the mind a “syntax-driven” machine? Can
computational models help elucidate the nature of representation? Can they help
us reduce the intentional to the non-intentional? What semantic frameworks are
most useful for computer science and Artificial Intelligence? Can we build an
artificial computing machine that thinks? How might the construction of such a
machine illuminate the mind, including our capacity to represent? Is mental
activity best modeled through “classical” computation, through “connectionist”
computation, or through some other framework?

Key works

The seminal article Turing 1936 introduces the
Turing machine, thereby laying the foundation for all subsequent research on
computation within computer science, recursion theory, Artificial Intelligence,
cognitive psychology, and philosophy. Putnam 1967 introduces philosophers to the thesis that Turing-style computation provides illuminating models of mental activity. Fodor 1975 develops Putnam’s suggestion, combining it with the
traditional picture of the mind as a representational organ. Fodor’s subsequent
writings, including Fodor 1981 and many other articles and books, investigate the
relation between mental computation and mental representation. Stich 1983 combines
a computational approach to the mind with eliminativism
regarding intentionality. Dennett 1987 advocates a broadly instrumentalist
approach to intentionality. Searle 1980 is a widely discussed critique of the
computational approach, centered on the relation between syntax and semantics. Putnam 1975 introduces the Twin Earth thought experiment, which crucially
informs much of the subsequent literature on computation and representation. Burge 1982 applies the Twin Earth thought experiment to mental representation (whereas
Putnam initially applied it only to linguistic representation).
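The Turing machine itself is simple enough to sketch in a few lines of code. The following is an illustrative simulator (the function name, the blank-symbol convention, and the sample machine table, a unary incrementer, are all hypothetical and for exposition only), not a reconstruction of the notation of Turing 1936.

```python
# Minimal Turing machine sketch: a finite control reads and writes one
# tape cell at a time, moving left or right, per a transition table.

def run_turing_machine(tape, transitions, state="start", accept="halt"):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != accept:
        symbol = tape.get(head, "_")          # '_' is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back in order, dropping blanks at the edges.
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Illustrative table: (state, read) -> (next state, write, head move).
# This machine appends a '1' to a unary string, i.e. increments a number.
increment = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the 1s
    ("start", "_"): ("halt", "1", "R"),   # write one more 1, then halt
}

print(run_turing_machine("111", increment))  # -> 1111
```

Despite its simplicity, this table-plus-tape scheme is the model whose universality underwrites the research programs surveyed above.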

Introductions

The first three chapters of Rogers 1987 present
the foundations of computation theory, with an emphasis on the Turing machine. Fodor 1981 offers a good (albeit opinionated) introduction to issues
surrounding computation and mental representation.
Horst 2005 and Pitt 2008 offer helpful surveys of the contemporary literature.

It is unlikely that the systematic, compositional properties of formal symbol systems -- i.e., of computation -- play no role at all in cognition. However, it is equally unlikely that cognition is just computation, because of the symbol grounding problem (Harnad 1990): The symbols in a symbol system are systematically interpretable, by external interpreters, as meaning something, and that is a remarkable and powerful property of symbol systems. Cognition (i.e., thinking) has this property too: Our thoughts are systematically interpretable by external interpreters as meaning something. However, unlike symbols in symbol systems, thoughts mean what they mean autonomously: Their meaning does not consist of, or depend on, anyone making or being able to make any external interpretation of them at all. When I think "the cat is on the mat," the meaning of that thought is autonomous; it does not depend on YOUR being able to interpret it as meaning that (even though you could interpret it that way, and you would be right).
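The point about external interpretation can be made concrete with a toy formal system (everything below is illustrative; none of these tokens, rules, or readings comes from Harnad 1990). The token manipulations go through identically under any interpretation, so whatever the symbols "mean" is contributed entirely by an external mapping.

```python
# Toy illustration of the symbol grounding point: a rule system
# manipulates uninterpreted tokens; "meaning" is supplied only by an
# external interpretation mapping, which can be swapped freely.

rules = {("A", "B"): "C"}  # purely formal: from tokens A and B, derive C

def derive(premises):
    # Apply each rule whose antecedent tokens are all among the premises.
    return {rules[pair] for pair in rules if all(p in premises for p in pair)}

# Two external interpreters assign different meanings to the same tokens;
# the formal derivation is identical either way.
cat_reading = {"A": "the cat is on the mat", "B": "mats are indoors",
               "C": "the cat is indoors"}
num_reading = {"A": "x > 2", "B": "x is even", "C": "x >= 4"}

derived = derive({"A", "B"})
print(derived)                                # -> {'C'}
print([cat_reading[t] for t in derived])      # -> ['the cat is indoors']
print([num_reading[t] for t in derived])      # -> ['x >= 4']
```

Thoughts, on the grounding view sketched above, are precisely not like the tokens here: their meaning does not await an external mapping.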

This work investigates symbols and transformative symbol systems from a variety of angles and philosophical/religious viewpoints. The inquiry begins by defining discourses on the idea and term of the symbol and integrating them with cultural, philosophical, and historical time-frames. This is carried into an investigation of both the original and essential qualities involved, and an exploration of the purposes and intentionalities of symbolic perception. Throughout the work, a secondary theme is that of occult and messianic connections and undertones, and speculations are made concerning the historical/cosmic grounding of the symbolic realms. Tantric practice is also touched upon, especially in relationship to the formative principles of the circle, square, and triangle. A number of suggestions are given for the interplay of transformative symbol systems, including methodologies for discerning the structural components of these systems. The final parts of the work are dedicated to investigations centering upon the ways in which symbol systems and symbolic perception are both grounded in our past and vital to future cognition. A glossary of terms is included.

The immune self is our reified way of describing the processes through which the immune system maintains the differentiated identity of the organism and itself. This is an interpretative process, and to study it in a scientifically constructive way we should merge a long hermeneutical tradition of questions about the nature of interpretation with a modern understanding of the immune system, emerging sensing technologies, and advanced computational tools for analyzing the sensors' data.

Various research initiatives try to utilize the operational principles of organisms and brains to develop alternative, biologically inspired computing paradigms and artificial cognitive systems. This article reviews key features of the standard method applied to complexity in the cognitive and brain sciences, i.e. decompositional analysis or reverse engineering. The indisputable complexity of brain and mind raises the issue of whether they can be understood by applying the standard method. Indeed, recent findings in the experimental and theoretical fields question central assumptions and hypotheses made for reverse engineering. Using the modeling relation as analyzed by Robert Rosen, the scientific analysis method itself is made a subject of discussion. It is concluded that the fundamental assumption of cognitive science, i.e. that complex cognitive systems can be analyzed, understood and duplicated by reverse engineering, must be abandoned. Implications for investigations of organisms and behavior, as well as for engineering artificial cognitive systems, are discussed.

What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, not mental capabilities and consciousness, could be a product of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin’s opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery, which we have only recently learnt how to design and build, and which could not even have been thought about in Darwin’s time, can interact with the physical machinery in which they are implemented without being identical with their physical implementation or mere aggregates of physical structures and processes. The existence of various kinds of virtual machinery (including both “platform” virtual machines that can host other virtual machines, e.g. operating systems, and “application” virtual machines, e.g. spelling checkers and computer games) depends on complex webs of causal connections involving hardware and software structures, events and processes, where the specification of such causal webs requires concepts that cannot be defined in terms of the concepts of the physical sciences. That indefinability, plus the possibility of various kinds of self-monitoring within virtual machinery, seems to explain some of the allegedly mysterious and irreducible features of consciousness that motivated Darwin’s critics and also more recent philosophers criticising AI. There are consequences for philosophy, psychology, neuroscience and robotics.

As a step towards comprehensive computer models of communication, and effective human-machine dialogue, some of the relationships between communication and affect are explored. An outline theory is presented of the architecture that makes various kinds of affective states possible, or even inevitable, in intelligent agents, along with some of the implications of this theory for various communicative processes. The model implies that human beings typically have many different, hierarchically organized, dispositions capable of interacting with new information to produce affective states, distract attention, interrupt ongoing actions, and so on. High "insistence" of motives is defined in relation to a tendency to penetrate an attention filter mechanism, which seems to account for the partial loss of control involved in emotions. One conclusion is that emulating human communicative abilities will not be achieved easily. Another is that it will be even more difficult to design and build computing systems that reliably achieve interesting communicative goals.

A great deal of the literature on the symbol has approached this notion from epistemological, ontological, and hermeneutic perspectives. This article examines the symbol as an important category of philosophical anthropology that sheds light on the issue of man's origins and culture.

This article reviews eight proposed strategies for solving the Symbol Grounding Problem (SGP), which was given its classic formulation in Harnad (1990). After a concise introduction, we provide an analysis of the requirement that must be satisfied by any hypothesis seeking to solve the SGP, the zero semantical commitment condition. We then use it to assess the eight strategies, which are organised into three main approaches: representationalism, semi-representationalism and non-representationalism. The conclusion is that all the strategies are semantically committed and hence that none of them provides a valid solution to the SGP, which remains an open problem.

This paper presents an approach to solve the symbol grounding problem within the framework of embodied cognitive science. It will be argued that symbolic structures can be used within the paradigm of embodied cognitive science by adopting an alternative definition of a symbol. In this alternative definition, the symbol may be viewed as a structural coupling between an agent's sensorimotor activations and its environment. A robotic experiment is presented in which mobile robots develop a symbolic structure from scratch by engaging in a series of language games. In this experiment it is shown that robots can develop a symbolic structure with which they can communicate the names of a few objects with a remarkable degree of success. It is further shown that, although the referents may be interpreted differently on different occasions, the objects are usually named with only one form.

It is easy to give a list of cognitive processes. They are things like learning, memory, concept formation, reasoning, maybe emotion, and so on. It is not easy to say, of these things that are called cognitive, what makes them so. Knowing the answer is one very important reason to be interested in the mark of the cognitive. In this paper, we consider some answers that we think do not work and then offer one of our own, which ties cognition to actions explained via the having of reasons.

Among such social-philosophic notions as society, culture, civilization, system, human, sense, sign, truth and others, the concept of the “symbol” occupies a special place. Most researchers share the view that the symbol plays an important part in the development of culture as a social phenomenon. The role of the symbol in culture's birth and development is characterized by antipathy and polysemy. Nevertheless, revealing the symbol's role in the spiritual processes of cultural transitions is beyond question one of the urgent philosophical issues. The symbol is a form of access to the supersensible substance of culture; it functions as a repository and hearth of cultural senses, which systems of signs, images, and metaphors carry into the circulation of world and human relations.

Thirty years ago, grounded cognition had roots in philosophy, perception, cognitive linguistics, psycholinguistics, cognitive psychology, and cognitive neuropsychology. During the next 20 years, grounded cognition continued developing in these areas, and it also took new forms in robotics, cognitive ecology, cognitive neuroscience, and developmental psychology. In the past 10 years, research on grounded cognition has grown rapidly, especially in cognitive neuroscience, social neuroscience, cognitive psychology, social psychology, and developmental psychology. Currently, grounded cognition appears to be achieving increased acceptance throughout cognitive science, shifting from relatively minor status to increasing importance. Nevertheless, researchers wonder whether grounded mechanisms lie at the heart of the cognitive system or are peripheral to classic symbolic mechanisms. Although grounded cognition is currently dominated by demonstration experiments in the absence of well-developed theories, the area is likely to become increasingly theory driven over the next 30 years. Another likely development is the increased incorporation of grounding mechanisms into cognitive architectures and into accounts of classic cognitive phenomena. As this incorporation occurs, much functionality of these architectures and phenomena is likely to remain, along with many original mechanisms. Future theories of grounded cognition are likely to be heavily influenced by both cognitive neuroscience and social neuroscience, and also by developmental science and robotics. Aspects from the three major perspectives in cognitive science—classic symbolic architectures, statistical/dynamical systems, and grounded cognition—will probably be integrated increasingly in future theories, each capturing indispensable aspects of intelligence.

The notion of a ‘symbol’ plays an important role in the disciplines of Philosophy, Psychology, Computer Science, and Cognitive Science. However, there is comparatively little agreement on how this notion is to be understood, either between disciplines, or even within particular disciplines. This paper does not attempt to defend some putatively ‘correct’ version of the concept of a ‘symbol.’ Rather, some terminological conventions are suggested, some constraints are proposed and a taxonomy of the kinds of issue that give rise to disagreement is articulated. The goal here is to provide something like a ‘geography’ of the various notions of ‘symbol’ that have appeared in the various literatures, so as to highlight the key issues and to permit the focusing of attention upon the important dimensions. In particular, the relationship between ‘tokens’ and ‘symbols’ is addressed. The issue of designation is discussed in some detail. The distinction between simple and complex symbols is clarified and an apparently necessary condition for a system to be potentially symbol- or token-bearing is introduced.

The substitution of knowledge for information as the entity that organizations process and deliver raises a number of questions concerning the nature of knowledge. The dispute on the codifiability of tacit knowledge, and that juxtaposing the epistemology of practice with the epistemology of possession, can be better faced by revisiting two crucial debates. One concerns the nature of cognition and the other the famous mind-body problem. Cognition can be associated with the capability of manipulating symbols, as in the traditional computational view of organizations, interpreting facts or symbols, as in the narrative approach to organization theory, or developing mental states (events), as argued by the growing field of organizational cognition. Applied to the study of organizations, the mind-body problem concerns the possibility (if any) and the forms in which organizational mental events, like trust, identity, cultures, etc., can be derived from the structural aspects (technological, cognitive or communication networks) of organizations. By siding in extreme opposite positions, the two epistemologies appear irreducible to one another, and each pays for its inner consistency with remarkable difficulties in describing and explaining some empirical phenomena. Conversely, by legitimating the existence of both tacit and explicit knowledge, by emphasizing the space of human interactions, and by assuming that mental events can be explained with the structural aspects of organizations, Nonaka's SECI model seems an interesting middle way between the two rival epistemologies.

In 1949, the Department of Philosophy at the University of Manchester organized a symposium “Mind and Machine” with Michael Polanyi, the mathematicians Alan Turing and Max Newman, the neurologists Geoffrey Jefferson and J. Z. Young, and others as participants. This event is known among Turing scholars because it laid the seed for Turing’s famous paper on “Computing Machinery and Intelligence”, but it is scarcely documented. Here, the transcript of this event, together with Polanyi’s original statement and his notes taken at a lecture by Jefferson, are edited and commented for the first time. The originals are in the Regenstein Library of the University of Chicago. The introduction highlights elements of the debate that included neurophysiology, mathematics, the mind-body-machine problem, and consciousness, and shows that Turing’s approach, as documented here, does not lend itself to reductionism.