The AISB Convention is an annual conference covering the range of AI and Cognitive Science, organised by the Society for the Study of Artificial Intelligence and Simulation of Behaviour. The 2016 Convention will be held at the Uni...

Stephen Hawking thinks computers may surpass human intelligence and take over the world. This view is based on the ideology that all aspects of human mentality will eventually be realised by a program running on a suitable compu...

All individual members of The Society for the Study of Artificial Intelligence and Simulation of Behaviour have a personal subscription to the Taylor & Francis journal Connection Science as part of their membership.
How to Acce...

AISB Committee member and Research Fellow at Goldsmiths, University of London, Dr Mohammad Majid al-Rifaie was interviewed by the BBC (in Farsi) along with his colleague Mohammad Ali Javaheri Javid on 6 November 2014. He was a...

After 2 hours of judging at Bletchley Park, 'Rose' by Bruce Wilcox was declared the winner of the Loebner Prize 2014, held in conjunction with the AISB. The event was well attended, filmed live by Sky News and the special guest jud...

The AISB Convention is an annual conference covering the range of AI and Cognitive Science, organised by the Society for the Study of Artificial Intelligence and Simulation of Behaviour. The 2015 Convention will be held at the Uni...

AISB Committee member, and Philosophy Programme Director and Lecturer, Dr Yasemin J. Erden was interviewed by the BBC on 29 October 2013. Speaking on the Today programme for BBC Radio 4, as well as the Business Report for BBC world N...

Mark Bishop, Chair of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, appeared on Newsnight to discuss the ethics of ‘killer robots’. He was approached to give his view on a report raising questions on the et...

The AISB has launched a YouTube channel: http://www.youtube.com/user/AISBTube
The channel currently holds a number of videos from the AISB 2010 Convention. Videos include the AISB round t...

Notice

AISB event Bulletin Item

CFP: LREC 2010 Workshop on Multimodal Corpora

*** 1st Call for Papers ***
LREC 2010 Workshop on
Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality
*** 18 May 2010, Malta ***
http://www.multimodal-corpora.org
A "Multimodal Corpus" involves the recording, annotation and analysis of
several communication modalities such as speech, hand gesture, facial
expression, body posture, etc. As many research areas are moving from
focused but single modality research to fully-fledged multimodality
research, multimodal corpora are becoming a core research asset and an
opportunity for interdisciplinary exchange of ideas, concepts and data.
This workshop follows similar events held at LREC 00, 02, 04, 06, 08.
There is increasing interest in multimodal communication and multimodal
corpora, as evidenced by European Networks of Excellence and integrated
projects such as HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet.
Furthermore, the success of recent conferences and workshops dedicated to
multimodal communication (ICMI-MLMI, IVA, Gesture, PIT, Nordic Symposium
on Multimodal Communication, Embodied Language Processing) and the
creation of the Journal of Multimodal User Interfaces also testify to the
growing interest in this area, and the general need for data on multimodal
behaviours.
The 2010 full-day workshop is planned to result in a significant follow-up
publication, similar to previous post-workshop publications like the 2008
special issue of the Journal of Language Resources and Evaluation and the
2009 state-of-the-art book published by Springer.
AIMS
In 2010, we are aiming for a wide cross-section of the field, with
contributions on collection efforts, coding, validation and analysis
methods, as well as actual tools and applications of multimodal corpora.
However, we want to emphasize that there have been significant advances
in capture technology that make highly accurate data available to the
broader research community. Examples include the tracking of face, gaze,
hands and body, and the recording of articulated full-body motion using
motion capture. These data are much more accurate and complete than the
simple videos traditionally used in the field, and will therefore have a
lasting impact on multimodality research. At the same time, the richness
of the signals and the complexity of the recording process urgently call
for an exchange of state-of-the-art information on recording and coding
practices, new visualization and coding tools, and advances in the
automatic coding and analysis of corpora.
TOPICS
This LREC 2010 workshop on multimodal corpora will feature a special
session on databases of motion capture, trackers, inertial sensors,
biometric devices and image processing. Other topics to be addressed
include, but are not limited to:
* Multimodal corpus collection activities (e.g. direction-giving
dialogues, emotional behaviour, human-avatar interaction, human-robot
interaction, etc.) and descriptions of existing multimodal resources
* Relations between modalities in natural (human) interaction and in
human-computer interaction
* Multimodal interaction in specific scenarios, e.g. group interaction
in meetings
* Coding schemes for the annotation of multimodal corpora
* Evaluation and validation of multimodal annotations
* Methods, tools, and best practices for the acquisition, creation,
management, access, distribution, and use of multimedia and multimodal
corpora
* Interoperability between multimodal annotation tools (exchange
formats, conversion tools, standardization)
* Collaborative coding
* Metadata descriptions of multimodal corpora
* Automatic annotation, based e.g. on motion capture or image
processing, and the integration with manual annotations
* Corpus-based design of multimodal and multimedia systems, in
particular systems that involve human-like modalities either in input
(Virtual Reality, motion capture, etc.) or output (virtual
characters)
* Automated multimodal fusion and/or generation (e.g., coordinated
speech, gaze, gesture, facial expressions)
* Machine learning applied to multimodal data
* Multimodal dialogue modelling
IMPORTANT DATES
* Deadline for paper submission (complete paper): 12 February 2010
* Notification of acceptance: 10 March
* Final version of accepted paper: 26 March
* Final program: 7 April
* Final proceedings: 14 April
* Workshop: 18 May
SUBMISSIONS
The workshop will consist primarily of paper presentations and
discussion/working sessions. Submissions should be 4 pages long, must be
in English, and must follow the submission guidelines available at
http://multimodal-corpora.org/mmc10.html
Submit your paper here: https://www.softconf.com/lrec2010/MMC2010
Demonstrations of multimodal corpora and related tools are encouraged as
well (a demonstration outline of 2 pages can be submitted).
LREC-2010 MAP OF LANGUAGE RESOURCES, TECHNOLOGIES AND EVALUATION
When submitting a paper through the START page, authors will be kindly
asked to provide relevant information about the resources that have been
used for the work described in their paper or that are the outcome of
their research. For further information on this new initiative, please
refer to
http://www.lrec-conf.org/lrec2010/?LREC2010-Map-of-Language-Resources
ORGANISING COMMITTEE
Michael Kipp, DFKI, Germany
Jean-Claude Martin, LIMSI-CNRS, France
Patrizia Paggio, University of Copenhagen, Denmark
Dirk Heylen, University of Twente, The Netherlands