Abstract: In this presentation, we will focus on two main challenges faced by designers of intelligent agents for complex simulations and video games: the credibility challenge and the challenge of scale. We illustrate how we tackle these challenges in a number of projects, with a main focus on the TerraDynamica collaborative project, which aims at simulating realistic virtual cities and in which our group is in charge of the AI of virtual inhabitants. We will show in more detail our approach to modeling how affects impact behaviors to improve credibility, and our approach to dynamic change of representations to allow scaling up the number of agents. Lastly, we will focus on the designer's point of view, with some considerations on how richer AI techniques can be kept manageable and on the trade-offs to explore in this area.

Speaker Bio: Vincent Corruble is Associate Professor in the Multi-Agent Systems group at LIP6, the Computer Science laboratory of University Pierre et Marie Curie (Paris 6) in Paris, France. He has a background in AI and Knowledge Discovery, and has been active for over 10 years in the field of Intelligent Agents, especially Learning Agents. He became interested in video games first as fantastic experimental platforms for AI techniques, but also as a great source of new problems and ideas for AI research. He has contributed to the area of learning game AI, especially for large strategy games; to dynamic difficulty adjustment and game balancing via learning; and to the notion of credible NPCs through deep models of emotions and how they interact with personality and social relations. He is currently heading in his lab a large collaborative project with industry and academic partners aiming at building large-scale simulations of virtual cities to be used for games and other applications (security, urban planning, etc.). This work has required tackling several challenges for AI, such as the design of flexible agent architectures, the authoring of credible behaviors, the ability to manage up to hundreds of thousands of agents, and the coordination of agents involved in collective behaviors.

Abstract: Augmented Reality (AR) overlays virtual content, such as computer-generated graphics, on the physical world. The augmented view of the world can be presented to the user via a head-mounted display, a tablet/mobile device, or projection on the physical space around the user. While Ivan Sutherland first presented the concept of the “Ultimate Display” in 1965, it was not possible to truly implement augmented reality applications until almost 25 years later. Therefore, the field of AR research is usually considered to have begun in the early 90s. In this 20-year period, AR has gone from being viewed as a heavyweight technology, appropriate only for industrial and military applications, to a new medium for art, games, and entertainment applications. The evolution of the field is due in part to the extensive research that has gone into exploring the AR application space, but also to the recent rise of powerful mobile devices that make it easy to deploy a wide variety of AR applications to consumers.

This is a critical moment for the field of AR. Over the past three years, AR technology has become accessible outside of computer science research labs. At first these new adopters were mainly HCI researchers, but now we see participation from a variety of groups, including game developers, visual and performance artists, user experience experts, toy designers, web developers, and entrepreneurs. As a result, there is an increased demand for tools and techniques that support AR experience design, evaluation, development, and deployment, and that fully address the needs of these diverse groups.

Low-level AR research in computer vision, graphics, sensors, and optics is, of course, critical to the success and growth of AR. However, my research focuses on higher level questions regarding what applications are appropriate for AR, how effective AR applications can be designed, and, most importantly, how we can support the participation of makers from outside the AR research domain. In this talk I will discuss the three intertwined research domains that are critical to the advancement of AR as a new medium: authoring, evaluation, and deployment.

Speaker Bio: Maribeth Gandy is the Director of the Interactive Media Technology Center and the Associate Director of Interactive Media in the Institute for People and Technology at Georgia Tech. She received a B.S. in Computer Engineering as well as an M.S. and Ph.D. in Computer Science from Georgia Tech. In her twelve years as a research faculty member, her work has focused on the intersection of technology for augmented reality, accessibility/disability, human-computer interaction, and gaming. She has developed computer-based experiences for entertainment and informal education in a variety of forms, including augmented reality, virtual reality, and mobile. She also teaches the “Video Game Design” and “Computer Audio” courses in the College of Computing at Georgia Tech. In her AR research, she is interested in advancing AR as a new medium by focusing on authoring, evaluation, and deployment. She was the lead architect on a large open source software project called the Designer’s Augmented Reality Toolkit (DART), which had thousands of users and was used to create a variety of large-scale AR systems. She was also co-PI on an NSF grant focused on the development of presence metrics for measuring engagement in AR environments using qualitative and quantitative data. She is currently collaborating on the creation of an open source AR web browser called Argon. She is also interested in the use of gaming interfaces for health and wellness. Currently, she is the co-PI on an NSF grant exploring the concept of cognitive gaming for older adults. The goal is both to isolate what components are necessary in an activity for it to have general cognitive benefits and to craft a custom game based on these guidelines that is accessible and compelling for an older player. Previously, she led a project funded by Georgia Tech’s Health Systems Institute to develop home-based computer games for stroke rehabilitation.
For seven years she worked in the fields of disability and accessibility as a project director in the Wireless RERC (through the Shepherd Center in Atlanta and Georgia Tech) and generated guidelines for universal design and for user-centered design processes involving disabled persons. In her consulting work, she has built commercial games, designed a home medical device for older adults, enhanced live rock concerts, and worked with startup companies to develop AR business models and products.

Abstract: Storytelling is a pervasive part of the human experience: we as humans tell stories to communicate, inform, entertain, and educate. Indeed, there is evidence to suggest that narrative is a fundamental means by which we organize, understand, and explain the world. In this talk, I present research on artificial intelligence approaches to the generation of narrative structures. I discuss how computational story generation capabilities facilitate the creation of engaging, interactive user experiences in virtual worlds, computer games, and training simulations. I conclude with an ongoing research effort toward generalized computational narrative intelligence.

Speaker Bio: Mark Riedl is an Assistant Professor in the Georgia Tech School of Interactive Computing and director of the Entertainment Intelligence Lab. Dr. Riedl's research focuses on the intersection of artificial intelligence, virtual worlds, and storytelling. The principal research question Dr. Riedl addresses through his research is: how can intelligent computational systems reason about and autonomously create engaging experiences for users of virtual worlds and computer games? Dr. Riedl earned a PhD in 2004 from North Carolina State University, where he developed intelligent systems for generating stories and managing interactive user experiences in computer games. From 2004 to 2007, Dr. Riedl was a Research Scientist at the University of Southern California Institute for Creative Technologies, where he researched and developed interactive, narrative-based training systems. Dr. Riedl joined the Georgia Tech College of Computing in 2007, where he continues to study artificial intelligence approaches to story generation, interactive narratives, and adaptive computer games. His research is supported by the NSF, DARPA, the U.S. Army, and Disney.


Virtual Humans

Talk Title: Virtual Humans
Speaker: Stacy Marsella, Associate Director of Social Simulation, Institute for Creative Technologies, and Research Associate Professor of Computer Science at the University of Southern California
Talk Date: October 29, 2012

Time: 10:00 AM
Place: 3211 EBII, NCSU Centennial Campus

Abstract: Virtual humans are autonomous virtual characters that can have meaningful interactions with human users. They can reason about the environment, understand and express emotion, and communicate using speech and gesture. I will discuss various application areas of virtual humans in education, health intervention and entertainment. I will then go on to discuss the design of virtual humans with specific focus on their expressive capabilities.

Speaker Bio: Stacy C. Marsella is a Research Associate Professor in the Department of Computer Science at the University of Southern California, Associate Director of Social Simulation Research at the Institute for Creative Technologies (ICT), and a co-director of USC’s Computational Emotion Group. His general research interest is in the computational modeling of cognition, emotion, and social behavior, both as a basic research methodology in the study of human behavior and in the use of these computational models in a range of gaming and analysis applications. His current research spans the interplay of emotion and cognition, the modeling of the influence that beliefs about the mental processes of others have on social interaction, and the role of nonverbal behavior in face-to-face interaction. He has extensive experience in the application of these models to the design of virtual humans, software entities that look human and can interact with humans in a virtual environment using spoken dialog. He is an associate editor of IEEE Transactions on Affective Computing, a member of the steering committee of the Intelligent Virtual Agents conference, and a member of the International Society for Research on Emotions (ISRE). Professor Marsella has published over 150 technical articles and received the Association for Computing Machinery's (ACM/SIGART) 2010 Autonomous Agents Research Award for research influencing the field of autonomous agents.