Knowledge concerning the cognition involved in perceiving and remembering faces has informed the design of at least two generations of facial compositing technology. These systems allow a witness to work with a computer (and a police operator) in order to construct an image of a perpetrator. Research conducted with systems currently in use has suggested that basing the construction process on the witness recalling and verbally describing the face can be problematic. To overcome these problems and make better use of witness cognition, the latest systems use a combination of Principal Component Analysis (PCA) facial synthesis and an array-based interface. The present paper describes a preliminary study conducted to determine whether the use of an array-based interface really does make appropriate use of witness cognition and what issues need to be considered in the design of emerging compositing technology.

This paper analyzes the process of perceptual recalibration (PR) in light of two cases of technologically-mediated cognition: sensory substitution and perceptual modification. We hold that PR is a very useful concept — perhaps necessary — for explaining the adaptive capacity that natural perceptive systems display as they respond to functional demands from the environment. We also critically survey related issues, such as the role of learning, training, and nervous-system plasticity in the recalibration process. Attention is given to the interaction between technology and cognition, and the case of epistemic prostheses is presented as an illustration. Finally, we address the following theoretical issues: (1) the dynamic character of spatial perception; (2) the role of functional demands in perception; (3) the nature and interaction of sensory modalities. We aim to show that these issues may be addressed empirically and conceptually — hence, the usefulness of sensory-substitution and perceptual-modification studies in the analysis of perception, technologically-mediated cognition, and cognition in general.

Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing (“know-how”). This is called the Turing Test. It cannot test whether a process can generate feeling, hence thinking — only whether it can generate doing. The processes that generate thinking and know-how are “distributed” within the heads of thinkers, but not across thinkers’ heads. Hence there is no such thing as distributed cognition, only collaborative cognition. Email and the Web have spawned a new form of collaborative cognition that draws upon individual brains’ real-time interactive potential in ways that were not possible in oral, written or print interactions.

Robotics can be seen as a cognitive technology, assisting us in understanding various aspects of autonomy. In this paper I will investigate a difference between the interpretations of autonomy that exist within robotics and philosophy. Based on a brief review of some historical developments I suggest that within robotics a technical interpretation of autonomy arose, related to the independent performance of tasks. This interpretation is far removed from philosophical analyses of autonomy focusing on the capacity to choose goals for oneself. This difference in interpretation precludes a straightforward debate between philosophers and roboticists about the autonomy of artificial and organic creatures. In order to narrow the gap I will identify a third problem of autonomy, related to the issue of what makes one’s goals genuinely one’s own. I will suggest that it is the body, and the ongoing attempt to maintain its stability, that makes goals belong to the system. This issue could function as a suitable focal point for a debate in which work in robotics can be related to issues in philosophy. Such a debate could contribute to a growing awareness of the way in which our bodies matter to our autonomy.

This paper explores the evolution of the techno-management imagination (TMI). This is the process by which, in times of crisis, managers think not just out of the box, but out of the very reality in which the box resides. Tacit social consensus, also known as corporate culture, can lead to a shared, implicit, and incorrect view that certain actions are impossible. TMI transcends local culture, accessing technological solutions that are unknown and/or unimagined. Members of the organization tend to call such solutions “magic”. The paper looks at social, perceptual, and managerial aspects of magic from a practical point of view that is grounded in research. It examines the risks of TMI, and concludes with suggested perspectives and research questions for management scientists and cognitive scientists.

The main topic of the present paper is the impact of new advanced technology on issues concerning meaningful information and its relation to studies of intelligence. The paper discusses the advantages, disadvantages and implications of the synthetic methodology developed by cognitive scientists, according to which mechanical models of the mind, such as computer simulations or self-organizing robots, may provide good explanatory tools for investigating cognition. A difficulty with this methodology is pointed out, namely the use of meaningless information to explain intelligent behavior that incorporates meaningful information. In this context, the paper asks what cognitive science contributes to contemporary studies of intelligent behavior, and what role technology may play in analyzing the relationships that organisms establish with their natural and social environments.

The relationship between cognition and culture is discussed in terms of technology and representation. The computational metaphor is discussed as an account of cognitive and technical development: the role of representation, of self-modification through environmental manipulation, and of the development of open learning from stigmergy. A rationalisation for the transformational effects of information and representation is sought in the physical and biological theories of Autokatakinetics and Autopoiesis. The conclusion drawn is that culture, rather than being an intrinsic property of the human phenotype, was learned; that cultural cognition is an information-transforming system inadequately characterised by notions of parameterised deep structure; and that it is an open and potentially unbounded informational system.

This paper explores connections between Radical Empiricism (RE), a philosophic attitude developed by William James at the beginning of the 20th century, and Empirical Modelling (EM), an approach to computer-based modelling that has been developed by the author and his collaborators over a number of years. It focuses in particular on how both RE and EM promote a perspective on the nature of knowing that is radically different from that typically invoked in contemporary approaches to knowledge representation in computing. This is illustrated in detail with reference to the modelling of several scenarios of lift use. Some potential implications for knowledge management are briefly reviewed.