London, 4 April 2014

OVERVIEW

Artificial Intelligence, in its early years, adopted an almost purely “disembodied” view of behaviour and cognition. The brain was modelled as a black-box information-processing device, fed with symbolic representations of “percepts” derived from the world and producing symbolic “actions” which somehow affect physical systems. The shortcomings of this approach, however, have become all too apparent in recent decades. They were manifest not only in the difficulty of creating real-world, physically embedded intelligent systems such as robots, but also in experimental results showing the centrality of embodiment to human and animal cognition. Further, it has become clear that embodiment is inseparable from both language use and, more generally, social coordination. In the investigation of intelligence, therefore, increasing importance is given to more holistic approaches that transcend the dichotomies of body vs. mind, motor act vs. speech act and, most crucially, self vs. other.

Simulation has also played a crucial role in research on Artificial Intelligence. To test theories and study the behaviour of systems, one can strive to link physical implementation with real-world experimentation. Often, however, this route is impractical or impossible. This is the case, for example, when studying phenomena such as language evolution that unfold across large spatiotemporal ranges; when the creation and deployment of physical instantiations is costly or difficult, as in examining massive amounts of human-robot interaction data; or when working from assumptions that contradict actual physical or social conditions. In these cases, and many more, computational simulation offers an attractive alternative. It brings, however, its own shortcomings: how does one show that the results of simulations have real-world validity? Are there methodologically constrained upper bounds on the strength of simulation-based claims?
Also, practically, how does one devise effective and realizable simulations for classes of problems related to behaviour and cognition? These and many other interesting questions have arisen alongside the numerous successful past applications of simulation, which has been used not only as an alternative to embodied experimentation but also in conjunction with it. This symposium will focus on the embodiment vs. simulation debate, exploring the strengths, limitations and contributions of these approaches to the study of behaviour, language and cognition. Our aim is to use the outcomes to develop optimal ways of exploring complex phenomena such as human behaviour and cognition. Highlighting the complexity of coordinated behaviour, the symposium will give special weight to language and, specifically, to how it is integrated with action and perception. Could viewing language as a coordination device benefit from the use of embodied platforms? What is, and has been, the role of simulation in language studies? Could whole-body human coordination, in which wordings play a part, become the central “object” of enquiry in the language sciences?

Recent years have seen a huge increase in the public’s awareness of so-called ‘intelligent’ gadgets and gizmos, many of which include voice-based interaction. Probably the most significant was the surprise release in 2011 of Siri, Apple’s voice-enabled personal assistant and knowledge navigator. Siri represents the culmination of many years’ R&D investment in the field of spoken language technology, and it gives an insight into the practicalities of future language-based human-machine interaction. However, Siri is neither embodied nor simulated; it has no direct experience of the physical world (other than through its microphone), yet it is obliged to interact vocally in real time with genuine human users. As a result, the scientific community is faced with an interesting dilemma: does real-world technology such as Siri tell us something interesting about language, cognition and behaviour, or does it represent a pragmatic solution of no long-term value? This talk will attempt to unravel some of these issues by addressing the role of robotics in the study of language-based interaction and the barriers that currently prevent us from developing truly communicative machines.

In order to use AI techniques to study human cognition, an AI model should be developed so that it mimics human behaviour as closely as possible, ideally using the same cognitive mechanisms, and, to validate such a model, its behaviour should be compared with human behaviour. In this presentation, I describe our approach to applying this idea to the study of children’s early language acquisition, in which we aim to implement multimodal interactions between children and their social environment in an agent-based model. These interactions are based on corpora derived from observations of natural behaviour collected in Mozambique and the Netherlands. The language development observed in the model can then be compared to the development observed in reality, thus validating the model. Initially, the model will be implemented in simulations, but, at a later stage, it is desirable to validate it on a robotic platform. I will discuss the advantages and disadvantages of using agent-based simulations, as well as robots.
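To illustrate the general shape of such an agent-based approach, the following is a minimal sketch, not the model described above: it assumes a toy cross-situational word learner whose vocabulary growth over simulated multimodal interactions could, in principle, be compared against developmental data. All names (ChildAgent, WORDS, scene_size) and the learning rule are illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical toy lexicon: each simulated interaction pairs an uttered word
# with the set of referents co-present in the scene (the "multimodal" input).
WORDS = ["ball", "dog", "cup", "shoe", "milk"]

class ChildAgent:
    """Toy cross-situational learner: counts word-referent co-occurrences."""

    def __init__(self):
        self.counts = {}  # (word, referent) -> co-occurrence count

    def observe(self, word, scene):
        # Credit every referent present when the word was uttered.
        for referent in scene:
            key = (word, referent)
            self.counts[key] = self.counts.get(key, 0) + 1

    def meaning(self, word):
        # Current best guess: the referent most often co-present with the word.
        candidates = {r: c for (w, r), c in self.counts.items() if w == word}
        return max(candidates, key=candidates.get) if candidates else None

    def vocabulary(self):
        # Words whose best guess is correct (in this toy world, a word's
        # correct referent is simply itself).
        return [w for w in WORDS if self.meaning(w) == w]

def run_simulation(n_interactions=200, scene_size=3):
    agent = ChildAgent()
    growth = []  # vocabulary size after each interaction
    for _ in range(n_interactions):
        target = random.choice(WORDS)
        # The scene contains the target referent plus random distractors.
        distractors = random.sample(
            [w for w in WORDS if w != target], scene_size - 1)
        agent.observe(target, [target] + distractors)
        growth.append(len(agent.vocabulary()))
    return agent, growth

agent, growth = run_simulation()
print("learned:", sorted(agent.vocabulary()))
```

In a serious model the `growth` trajectory would be the point of contact with reality: it is the quantity one would compare against vocabulary-growth curves derived from the observational corpora, which is what makes the simulation empirically testable rather than merely plausible.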