November 25, 2003

ICVS was in Toulouse this year, in the southwest of France. Once again (after TIDSE’03), this European conference managed to gather researchers on virtual storytelling from both Europe and the US. Three other continents were also represented, by researchers from Japan, South Africa and Australia.

The term “virtual” in the name of the conference reveals an orientation towards Virtual Reality, hence some papers which, in my view (and others’!), were not entirely on the topic of interactive story… I will obviously focus here on the papers related to narrative.

Ken Perlin opened the conference with a great introductory talk. He presented his past and current research, including the Polly project, which aims at achieving universal procedural literacy by providing children with a digital storytelling environment in which they program their own characters. Beyond the quality of Ken’s work, I could appreciate how it has often been motivated by a need for “self-expression”, which departs from traditional scientific approaches (including mine!).

At this conference, few people were tackling the very problem of mixing interactivity and narrative. Several projects were simply based on directed graphs! Two innovative papers deserve mention, however. First, Mark Riedl (Liquid Narrative Group) presented an architecture for interactive narrative which mixes plot-based and character-based approaches: the idea is to have a narrative plan for the global plot and to distribute this plan into smaller plans, one for each character, according to their specific traits (skills, limitations, personality). Characters are then real actors, in the sense that they act in order to “show off”, to demonstrate what they are. Second, the paper presented by Marc Cavazza (University of Teesside) described an interactive story environment based on mixed reality, where the user’s gestures and speech must be interpreted in real time and incorporated into the story. For this, a mapping between user actions and speech acts is performed, based on the character’s current plan (the speech act recognition is contextual).

Several talks concerned the authoring issue. Barry Silverman presented a fully working system called AESOP. The ambition of the project is to put the system in the authors’ hands, so that they can tell their own stories (educational stories, in a medical context). But even writing nodes and lines in a graph proved difficult for authors, so in the end, a secretary enters the stories! These graph-based representations are very far from the concepts manipulated by the IDtension system, which I presented: goals, tasks, obstacles, values. I focused my presentation (and my paper) on the difficulty of authoring with the system, and proposed a general framework which should help writing at the abstract level of what I called “highly generative Interactive Drama” (paper available by request!). Ulrike Spierling’s paper also reported on the difficulties of authoring interactive narrative, and proposed a simple methodology to facilitate the process. The paper by Daniel Sobral et al. (INESC, IST, Lisbon) proposed the use of an ontology as a common language between the author and the system, the ontology being specific to each story.

Another trend was the “mixed reality” concept. In Marc Cavazza’s new system, the player sees himself sitting in the projected virtual world, as in a mirror. In Ulrike Spierling and Ido Iurgel’s paper, set in an edutainment context, the user sees and interacts with virtual characters around a real painting (a replica!). Sally J Norman, who directs the “Ecole Supérieure de l’Image” in Angoulême, France, explained how current art practice, including that of her students, tends to merge technology into the real world, departing from the dominant “Virtual Reality” paradigm of the conference.

Two papers focused on the narrative discourse, i.e. on the way the story is told: one on lighting (Magy Seif El-Nasr, Northwestern University), another on camera positioning (Nicolas Courty et al.). Congratulations to Magy, who received the best paper award without being there! (Ian Horswill did a good job on the presentation…).

Virtual Reality is sometimes used differently than in Interactive Drama: a storyteller, rather than characters, is simulated in the virtual world. In the Papous project, from André Silva et al. (INESC, Lisbon), a “3D granddad” tells a story, adapting to the audience’s interactions (through real cards inserted into an “influencing box”). The University of Cape Town, South Africa, created a virtual environment containing a San storyteller and his companions (the San people are indigenous to southern Africa). They measured the influence of additional sound and images on the perception of the story. Interestingly, this study includes a rigorous psychological evaluation of three factors: Presence, Story Involvement and Enjoyment.

Finally, some interesting, more art-related projects were presented. Naoko Tosa presented an interactive storytelling environment based on Buddhism. It includes a neural-network-based component for automatically identifying the user’s personality, according to the arrangement of elements placed on the screen by the user. Martin Hachet, from the University of Bordeaux, presented a performance environment where real theatre clowns interact with both the audience and the virtual environment.

I have omitted many other papers, demos and posters (there were 30 in total), including various head-mounted displays and 3D visualization technologies (Immersion company). My general impression from this technical conference is that the field of Interactive Storytelling is slowly evolving: towards the authoring issue, towards new interfaces, towards mixed reality.