Professor Nick Campbell

Fellow Emeritus (C.L.C.S.)

Nick Campbell (nick@tcd.ie) is SFI Stokes Professor of Speech & Communication Technology at Trinity College Dublin (The University of Dublin) in Ireland. He received his Ph.D. in Experimental Psychology from the University of Sussex in the U.K. He was previously engaged at the Japanese National Institute of Information and Communications Technology (as nick@nict.go.jp) and as Chief Researcher in the Department of Acoustics and Speech Research at the Advanced Telecommunications Research Institute International (as nick@atr.jp) in Kyoto, Japan, where he also served as Research Director for the JST/CREST Expressive Speech Processing project and the SCOPE "Robot's Ears" project. Earlier, he was a Research Fellow at the IBM U.K. Scientific Centre, where he developed algorithms for speech synthesis, and then at AT&T Bell Laboratories, where he worked on the synthesis of Japanese. He served as Senior Linguist at the Edinburgh University Centre for Speech Technology Research before joining ATR in 1990. His research interests are based on large speech databases and include nonverbal speech processing, concatenative speech synthesis, and prosodic information modeling. He spends his spare time working with postgraduate students as Visiting Professor at the School of Information Science, Nara Institute of Science and Technology (NAIST), Nara, Japan, and was also Visiting Professor at Kobe University, Kobe, Japan for ten years.

Multiperspective Multimodal Dialogue: a dialogue system with metacognitive abilities

The goal of METALOGUE is to produce a multimodal dialogue system able to implement interactive behaviour that seems natural to users and is flexible enough to exploit the full potential of multimodal interaction. This will be achieved by understanding, controlling, and manipulating both the system's own and the users' cognitive processes. The new dialogue manager will incorporate a cognitive model based on metacognitive skills, enabling the planning and deployment of appropriate dialogue strategies. The system will be able to monitor both its own and the users' interactive performance, reason about the progress of the dialogue, infer the users' knowledge and intentions, and thereby adapt and regulate its dialogue behaviour over time. The metacognitive capabilities of the METALOGUE system will be based on a state-of-the-art approach incorporating multitasking and the transfer of knowledge among skills. The models will be implemented in ACT-R, which provides a general framework for metacognitive skills. http://cordis.europa.eu/projects/rcn/110655_en.html
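Purely as an illustrative sketch (in Python rather than ACT-R, and not the project's actual design), such a metacognitive dialogue manager can be pictured as a monitor-and-adapt loop; all class, strategy, and threshold names below are hypothetical:

    # Hypothetical sketch of a metacognitive dialogue-management loop:
    # the manager monitors the user's performance and shifts dialogue
    # strategy when progress stalls. All names are illustrative, not
    # taken from METALOGUE.
    from dataclasses import dataclass, field

    @dataclass
    class DialogueState:
        turn: int = 0
        user_success_rate: float = 1.0   # estimated user task progress
        history: list = field(default_factory=list)

    class MetacognitiveManager:
        # Ordered from least to most supportive strategy.
        STRATEGIES = ("open_questions", "guided_prompts", "explicit_tutoring")

        def __init__(self):
            self.strategy = "open_questions"

        def monitor(self, state):
            """Metacognitive step: reason about progress, regulate behaviour."""
            idx = self.STRATEGIES.index(self.strategy)
            if state.user_success_rate < 0.5:
                # User seems to struggle: adopt a more supportive strategy.
                self.strategy = self.STRATEGIES[min(idx + 1, 2)]
            elif state.user_success_rate > 0.8:
                # User is doing well: hand the initiative back.
                self.strategy = self.STRATEGIES[max(idx - 1, 0)]

        def next_move(self, state, user_input):
            state.turn += 1
            state.history.append(user_input)
            self.monitor(state)
            return "[%s] response to: %r" % (self.strategy, user_input)

    mgr = MetacognitiveManager()
    state = DialogueState(user_success_rate=0.4)
    print(mgr.next_move(state, "I don't follow the question"))
    # -> "[guided_prompts] ..." after the strategy has been adapted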

Funding Agency: EU
Programme: FP7-ICT
Project Type: STREP
Person Months: 2
Project Title: JOKER
From: Jan 2014
To: Dec 2016

Summary: JOKe and Empathy of a Robot/ECA: towards social and affective relations with a robot

This project will build and develop JOKER, a generic intelligent user interface providing a multimodal dialogue system with social communication skills, including humor, empathy, compassion, charm, and other informal socially-oriented behavior. Talk during social interactions naturally involves the exchange of propositional content but also, and perhaps more importantly, the expression of interpersonal relationships, as well as displays of emotion, affect, interest, etc. This project will facilitate advanced dialogues employing complex social behaviors in order to provide a companion machine (robot or ECA) with the skills to create and maintain a long-term social relationship through verbal and non-verbal interaction. Such social interaction requires that the robot be able to represent and understand some complex human social behavior, and it is not straightforward to design a robot with such abilities. Social interactions require social intelligence and 'understanding' (for planning ahead and dealing with new circumstances) and employ theory of mind for inferring the cognitive states of another person.

JOKER will emphasize the fusion of verbal and non-verbal channels for the perception, interaction, and generation of emotional and social behavior. Our paradigm invokes two types of decision: intuitive (based mainly upon non-verbal multimodal cues) and cognitive (based upon the fusion of semantic and contextual information with non-verbal multimodal cues). The intuitive type will be used dynamically in the interaction at the non-verbal level (empathic behavior: synchrony of facial mimicry such as smiles and nods) but also at the verbal level for reflex small talk (politeness behavior: verbal synchrony with hello, how are you, thanks, etc.). Cognitive decisions will be used for reasoning about the dialogue strategy and for deciding on more complex social behaviors (humor, compassion, white lies, etc.), taking into account the user profile and contextual information. http://www.chistera.eu/projects/joker
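Purely for illustration, the two decision pathways could be sketched as below; the percept fields, rules, and behavior names are hypothetical assumptions, not JOKER's implementation:

    # Hypothetical two-pathway decision sketch: a fast "intuitive" route
    # driven by non-verbal cues (mimicry, reflex small talk) and a slower
    # "cognitive" route fusing verbal content with the user profile.
    # Fields, rules, and behavior names are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Percept:
        smile: bool = False
        nod: bool = False
        greeting: bool = False       # e.g. "hello", "thanks"
        utterance: str = ""
        user_profile: dict = None

    def intuitive_decision(p):
        """Reflex-level acts synchronised with the user's behaviour."""
        acts = []
        if p.smile:
            acts.append("smile_back")
        if p.nod:
            acts.append("nod_back")
        if p.greeting:
            acts.append("return_greeting")
        return acts

    def cognitive_decision(p):
        """Slower reasoning over fused verbal, contextual and profile cues."""
        profile = p.user_profile or {}
        if "bad day" in p.utterance:
            # Choose humor or compassion deliberately, using the profile.
            return ["tell_gentle_joke" if profile.get("likes_humor")
                    else "express_compassion"]
        return []

    def respond(p):
        # Intuitive acts fire immediately; cognitive acts follow.
        return intuitive_decision(p) + cognitive_decision(p)

    print(respond(Percept(smile=True, greeting=True,
                          utterance="I had a bad day",
                          user_profile={"likes_humor": True})))
    # -> ['smile_back', 'return_greeting', 'tell_gentle_joke']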

My background is in experimental psychology and linguistics, but most of my experience is in speech technology. I prefer corpus-based approaches and have pioneered advanced (and paradigm-shifting) methods of speech synthesis and of natural conversational speech collection in multimodal environments. My principal interest is in speech prosody, and I am extending this research to social interaction to show how the voice is used in discourse to express personal relations as well as propositional content. Most of my previous work used speech materials collected in Japan, and I am happy now to be in Ireland, where I can confirm the universality of my earlier findings, both for Irish and for Hiberno-English. Ultimately, I am working to produce a friendlier speech-based human-machine interface for web-based information, customer services, games, and robotics, while trying to understand how humans so often achieve such seemingly perfect communication.