AAAI-06 Tutorial Forum

We are pleased to present the tutorials for AAAI-06. Please note that there are several full-day tutorials this year. Descriptions of the tutorials are linked to the tutorial code number (such as SA1).

Teams of agents are more robust and potentially more efficient than single agents, but the team members need to be coordinated effectively. This tutorial covers recent advances in auction-based methods for agent coordination, where agents bid on tasks and the tasks are then allocated to the agents by methods that resemble winner-determination methods in auctions. In terms of communication overhead, computational efficiency, and solution quality, auction-based methods strike a balance between fully centralized coordination methods and fully decentralized coordination methods that use no communication at all.

This tutorial gives an overview of various auction-based methods for agent coordination, discusses their advantages and disadvantages, and compares them with one another and with other coordination methods. It covers algorithms, their analysis, implementation issues, and their experimental evaluation, using multi-robot routing tasks as running examples. The tutorial teaches the necessary background in artificial intelligence, robotics, economics, and operations research, and thus assumes nothing about the background of the audience other than a general understanding of algorithms. Consequently, the tutorial is appropriate for students (both undergraduate and graduate), researchers, and practitioners who are interested in learning how to coordinate teams of agents using auction-based mechanisms.
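As a concrete illustration of the flavor of these methods, the sketch below allocates tasks through a sequence of single-item auctions in which each robot bids its marginal travel cost and the lowest bid wins. All names and the Manhattan-distance cost model are invented for this example; the tutorial covers far more sophisticated bidding and winner-determination rules.

```python
# A minimal sequential single-item auction for multi-robot task
# allocation (illustrative only; robot/task names and the cost model
# are assumptions made for this sketch).

def sequential_auction(robot_positions, task_positions):
    """Allocate each task to the robot with the lowest marginal travel cost.

    Each round, every robot bids its travel cost from its current
    endpoint to every unallocated task; the single lowest (robot, task)
    bid wins that round, as in winner determination for an auction.
    """
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance

    allocation = {r: [] for r in robot_positions}
    location = dict(robot_positions)    # each robot's current endpoint
    remaining = dict(task_positions)    # tasks not yet allocated

    while remaining:
        # Winner determination: lowest bid over all robot/task pairs.
        robot, task = min(
            ((r, t) for r in allocation for t in remaining),
            key=lambda rt: dist(location[rt[0]], remaining[rt[1]]),
        )
        allocation[robot].append(task)
        location[robot] = remaining.pop(task)  # robot ends at the task

    return allocation
```

For example, with robots at (0, 0) and (5, 5) and tasks at (1, 0) and (5, 6), each robot wins the task nearest to it.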

The presented material is provided by the speakers and Pinar Keskinocak of Georgia Institute of Technology and includes material by their coworkers A. Stentz, D. Kempe, A. Meyerson, V. Markakis, A. Kleywegt, and C. Tovey.

M. Bernardine Dias is research faculty at the Robotics Institute at Carnegie Mellon University. Her research interests are in technology for developing communities, multirobot coordination, space robotics, and diversity in computer science. Her dissertation developed the TraderBots framework for market-based multirobot coordination and she has published extensively on a variety of topics in robotics.

Sven Koenig is an associate professor at the University of Southern California. From 1995 to 1997, Sven demonstrated that it is possible to combine ideas from different decision-making disciplines by developing a robust mobile robot architecture based on POMDPs from operations research. Since then, he has published over 100 papers in robotics and artificial intelligence, continuing his interdisciplinary research.

Michail G. Lagoudakis is an assistant professor at the Technical University of Crete. He is interested in machine learning (reinforcement learning), decision making under uncertainty, numeric artificial intelligence, as well as robots and other complex systems. He has published extensively in artificial intelligence and robotics.

Robert Zlot is a Ph.D. student at the Robotics Institute at Carnegie Mellon University, where he earned a Master’s degree in Robotics in 2002. Robert’s main interests are in multirobot coordination and space robotics. His current research focuses on market-based algorithms for tasks that exhibit complex structure.

Nidhi Kalra is a Ph.D. student at the Robotics Institute at Carnegie Mellon University. She is interested in developing coordination strategies for robots working on complex real-world problems. To this end, she is developing the market-based Hoplites framework for tight multirobot coordination.

E. Gil Jones is a Ph.D. student at the Robotics Institute at Carnegie Mellon University. His primary interest is market-based multi-robot coordination. He received his BA in computer science from Swarthmore College in 2001, and spent two years as a software engineer at Bluefin Robotics in Cambridge, Massachusetts.

The goal of this tutorial is to provide participants with an understanding of the theoretical foundations of the field of case-based reasoning (CBR) and with practical information about fielded applications of CBR. CBR involves comparing a new problem to prior similar problems in order to reuse past experience in solving the new problem. This tutorial will describe the case-based reasoning process, quickly review classic CBR research applications, give detailed descriptions of current CBR applications that have been fielded in the real world, and then have tutorial participants create small CBR systems themselves. Participants should leave the tutorial with an understanding of the history of the field, how the theory was developed, how it has been applied, and a working knowledge of how to apply it themselves. We expect participants to have general programming experience and a basic understanding of AI. No other prior knowledge is required.
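The core retrieve-and-reuse cycle described above can be sketched in a few lines. The feature names, weights, and distance measure below are hypothetical; real CBR systems add adaptation, revision, and retention steps.

```python
# A minimal case-based reasoning sketch: retrieve the stored case most
# similar to the new problem and reuse its solution. (Feature names and
# the distance measure are assumptions made for this example.)

def retrieve(case_base, problem):
    """Return the stored case whose features are closest to the problem."""
    def distance(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)  # squared Euclidean
    return min(case_base, key=lambda case: distance(case["features"], problem))

case_base = [
    {"features": {"temp": 70, "pressure": 1.0}, "solution": "setting A"},
    {"features": {"temp": 90, "pressure": 1.4}, "solution": "setting B"},
]
new_problem = {"temp": 88, "pressure": 1.3}

# Reuse: adopt (or adapt) the retrieved case's solution.
best = retrieve(case_base, new_problem)
```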

Cindy Marling is an assistant professor at Ohio University, where she teaches “AI: Case-Based Reasoning.” Her research focus is on case-based and multi-modal reasoning in medical domains. She has cochaired three workshops on CBR in the health sciences at the international and European conferences on case-based reasoning.

William Cheetham is a senior researcher at the General Electric (GE) Global Research Center and an adjunct professor at Rensselaer Polytechnic Institute. He has led the development of over a dozen intelligent systems that are currently in use throughout GE. Most of these systems use case-based reasoning.

Computational Biology: Perspectives and Approaches Based on Feature Extraction and Selection (SA3)

Weixiong Zhang
Sunday, July 16, 9:00 am–1:00 pm

From a computational viewpoint, a central theme of computational biology and genomics is to extract and identify characteristic features from experimental data and to construct computational models that elucidate the biological mechanisms underlying the observations. For many problems in biology, finding the right features is critical. However, extracting and identifying the most characteristic features is nontrivial and computationally difficult, owing to factors such as the complexity of the biological processes involved (such as cancer and neurodegenerative disorders) and the heterogeneity and sheer quantity of the available experimental data (such as genomic sequences, gene expression, and proteomic data).

This tutorial has three objectives. The first is to discuss important problems in biology and genomics where feature selection plays a central role. The second is to survey existing computational methods for feature extraction and identification; I will discuss various methods, for example, singular value decomposition, variable ranking, motif identification, and discriminative models. The third is to demonstrate how feature selection methods can be applied to problems in molecular biology and genomics. This tutorial is intended for young researchers who are entering the area of computational biology. It is also suitable for practitioners who would like to become familiar with various methods for feature extraction and selection and their applications in biology.
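As a minimal illustration of variable ranking, the sketch below scores each feature by the absolute Pearson correlation between its values and the class labels and keeps the top-k features. This is a toy under stated assumptions (binary labels, no normalization); real genomic pipelines add preprocessing and multiple-testing control.

```python
# A toy variable-ranking feature selector: rank features by absolute
# Pearson correlation with the class labels and keep the top k.
# (A sketch only, not a production pipeline.)

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy) if vx and vy else 0.0

def rank_features(samples, labels, k):
    """samples: list of feature vectors; labels: 0/1 class per sample.

    Returns the indices of the k features most correlated with the labels.
    """
    n_features = len(samples[0])
    scores = [
        (abs(pearson([s[j] for s in samples], labels)), j)
        for j in range(n_features)
    ]
    scores.sort(reverse=True)          # strongest correlation first
    return [j for _, j in scores[:k]]
```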

ResearchCyc is a version of the Cyc common-sense knowledge base and reasoning system that is available at no cost for research purposes. It contains a large, richly represented, and extensible ontology as well as a broad range of knowledge specification, querying, importing and exporting, natural language parsing and generation, and reasoning capabilities. The technology can be used in a number of ways, including as a repository of common-sense and domain-specific knowledge, as a suite of knowledge-based services, or as a development environment for knowledge-based applications.

This tutorial will present a tour of the Cyc technology, including its tools and interfaces, and show how these can be used to support a broad range of research projects. In addition, we will show examples of how ResearchCyc is currently being used in academic and industrial research labs. Upon completion of this tutorial, researchers and software engineers will understand Cyc’s basic architecture and underlying functionality and will know how to explore and modify Cyc’s knowledge base. The course will describe Cyc’s ability to link to external information sources (such as SQL databases) and its tools for exporting Cyc knowledge in other human- and machine-readable forms. In addition, attendees will be able to assert and query knowledge programmatically as well as through Cyc’s GUIs and application interfaces. A basic understanding of predicate calculus or formal representation languages and logical inference would be beneficial but is not strictly required.

Larry Lefkowitz is the executive director for business solutions at Cycorp and is responsible for the development of applications of the Cyc technology. His AI research has focused on knowledge representation and acquisition, planning and plan recognition, and intelligent tutoring. He has consulted worldwide on the use of advanced technology to address business challenges. Lefkowitz received his Ph.D. in computer and information science from the University of Massachusetts, Amherst.

Michael Witbrock is the vice president for research at Cycorp where he has overall responsibility for corporate research. His current interests include automating the process of knowledge acquisition and elaboration, extending the range of knowledge representation and reasoning to mixed logical and probabilistic representations, and validating and elaborating knowledge in the context of task performance, particularly in tasks that involve understanding text and communicating with users. Witbrock received his Ph.D. in computer science from Carnegie Mellon University, where his dissertation focused on speaker modeling.

Keith Goolsbey is the chief software architect and inference engine programmer of the Cyc system, and is a senior member of the technical staff. He holds a B.S. in electrical engineering and an M.S. in computer science, both from the University of Texas at Austin. The focus of his M.S. studies was artificial intelligence and expert systems. In 1990, he joined the Cyc Project at Microelectronics and Computer Technology Corporation (MCC), where he worked to expand the nascent Cyc ontology. In 1995, Keith became a founding member of Cycorp, Inc. At this time, he assumed responsibility for the design and implementation of numerous enhancements to the Cyc technology, including the storage of the knowledge base and the inference engine.

This tutorial will provide an in-depth introduction to a framework that aids in the development, testing, and performance evaluation of intelligent systems. This framework comprises a set of simulation tools and intelligent control modules that enable researchers and developers to build high-fidelity models of robots that behave realistically within complex simulated, real, and mixed (real or simulated) environments.

Mobility open architecture simulation and tools (MOAST) and urban search and rescue simulation (USARSim) provide a turn-key framework that is an exceptional educational tool for introducing students to mobile robotics and encapsulates facilities that enable researchers to explore advanced topics on specific aspects of autonomous robots. This tutorial will address a broad range of topics including the framework’s simulation engine, control architecture, and communications mechanisms, as well as how the framework aids in the development of sensor processing, world modeling, and single-agent and multiagent control algorithms. In addition, the new RoboCup USAR virtual competition will be discussed.

This tutorial will offer hands-on developmental experience that will appeal to researchers and developers from various backgrounds in the fields of mobile robotics, urban search and rescue, distributed AI, human-computer interaction, and simulation/training systems. Basic knowledge of a Unix-like computer environment (Linux, Cygwin, SunOS) and of C++ are prerequisites.

Stephen Balakirsky is a researcher in the Knowledge Systems Group of the Intelligent Systems Division at the National Institute of Standards and Technology. He has over 15 years of experience in multiple areas of robotic systems that include simulation, autonomous plan and behavior generation, human-computer interfaces, automatic target acquisition, and image stabilization. Balakirsky’s research interests include planning systems, simulation development environments, knowledge representations, world modeling, and architectures for autonomous systems.

Michael Lewis is an associate professor in the Department of Information Science and Telecommunications in the School of Information Sciences at the University of Pittsburgh. His research has been supported by NASA, NSF, ONR, AFRL, and AFOSR. Applications of his current research include interaction design for a wide-area search munition used in a live flight test and development of a robotic simulation adopted for an urban search and rescue competition by RoboCup. His projects have investigated human-robot interaction, information fusion, the design of analogical representations, the effectiveness of visual information retrieval interfaces (VIRIs), human-agent interaction, and virtual environments.

Stefano Carpin obtained his M.Sc. and Ph.D. in computer science from the University of Padova (Italy) in 1999 and 2002, respectively. Since 2003 he has been with the International University Bremen, Germany, where he is currently an assistant professor of computer science. His research interests are in robot algorithms and simulation.

Constraint-based local search—the idea of using constraints to describe and control local search—combines the high-level modeling and control structures of constraint programming with the computational model of local search. Constraint-based local search enables combinatorial optimization applications to be expressed in terms of modeling and search components. The modeling component conveys the combinatorial structure of the applications, while the search component uses the model to drive the search toward high-quality solutions. As a result, constraint-based local search provides a number of fundamental benefits, ranging from software engineering concerns such as expressiveness, modularity, and reuse to computational properties such as incrementality, efficiency, and scalability.

This tutorial is a comprehensive introduction to constraint-based local search and its implementation in the Comet programming language and system. It discusses the constraint-based architecture, its modeling concepts (e.g., differentiable constraints and objectives), and its advanced control structures. The architecture is then illustrated on a number of realistic applications in resource allocation, combinatorial design, facility location, sequencing, and scheduling. The tutorial also reviews novel opportunities offered by constraint-based local search.
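The division of labor between a model that measures constraint violations and a search that uses those violations to drive moves can be sketched in ordinary Python. This is a generic min-conflicts search on n-queens, not Comet code; it only illustrates the architectural idea that the constraints themselves supply the gradient for local search.

```python
# A toy constraint-based local search in the min-conflicts style:
# the "model" counts constraint violations, and the "search" repeatedly
# moves a conflicted queen to its least-conflicted row.
# (A Python sketch of the idea, not the Comet language.)

import random

def conflicts(rows, col):
    """Number of queens attacking the queen in column `col`."""
    return sum(
        1 for c, r in enumerate(rows)
        if c != col and (r == rows[col] or abs(r - rows[col]) == abs(c - col))
    )

def min_conflicts_queens(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]    # one queen per column
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c)]
        if not conflicted:
            return rows                            # all constraints satisfied
        col = rng.choice(conflicted)
        # Greedy repair: move this queen to the row minimizing violations.
        rows[col] = min(
            range(n),
            key=lambda r: conflicts(rows[:col] + [r] + rows[col + 1:], col),
        )
    return None                                    # no solution found in time
```

Because min-conflicts can stall on plateaus, practical use wraps the search in random restarts (for example, retrying with fresh seeds until a violation-free assignment is found).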

Pascal Van Hentenryck is a professor of computer science at Brown University. Before coming to Brown in 1990, he spent four years at the European Computer-Industry Research Center (ECRC), where he was the primary designer and implementer of the CHIP programming system, one of the pioneering constraint programming systems. In the last decade, he has built a number of influential systems, including Numerica, OPL, and Comet. Pascal is the author of four books, all published by The MIT Press, and is the recipient of an NSF National Young Investigator award, the 2002 INFORMS ICS Award for research excellence at the interface between computer science and operations research, and an IBM Faculty Award. He is on the board of directors of the INFORMS Computing Society.

During the last decade a number of breakthroughs have lifted the efficiency of domain-independent planning to a level required in many challenging applications. This tutorial presents the most important approaches to state space traversal used in planning, including techniques based on propositional satisfiability testing, heuristic state-space search, and logic-based data structures like binary decision diagrams. The main applications of these techniques in classical planning and in more complex forms of planning, including planning with uncertainty and incomplete information, are explained.

Rintanen’s presentation is based on general and uniform concepts that allow the most central notions in planning to be described concisely and make the essential differences and similarities between the approaches more apparent. Unlike most research papers, which use restricted STRIPS operators, the presentation uses an expressive language similar to ADL/PDDL, as used by many recent planner implementations. The only prerequisite for attending the tutorial is basic knowledge of classical propositional logic and of search algorithms.

Jussi Rintanen earned his doctoral degree at the Helsinki University of Technology in 1997. Since then he has held research and academic positions at the universities of Ulm and Freiburg in Germany. As of January, 2006, Rintanen is a senior researcher in the Knowledge Representation and Reasoning program at the National ICT Australia in Canberra.

This tutorial frames and motivates the problem of developing automated trading strategies for electronic markets. It surveys the state of the art in analyzing strategies for basic market games, covers examples of more complex (intractable) market scenarios, and presents a general methodology (empirical and game-theoretic) for trading agent design and analysis. Examples from an annual trading agent competition illustrate concepts and results.

The tutorial is designed to be broadly accessible for those interested in trading domains as an application area for AI methods, for either research or practice. Previous exposure to basic concepts of game theory (e.g., the definition of Nash equilibrium) is assumed. Some knowledge of auctions is helpful, but not required.

If the tutorial is successful, participants will walk away with an understanding of (1) what is and is not known about strategies for trading games, (2) the key building blocks of successful trading strategies, (3) what to consider in evaluating particular agent designs, and (4) how to combine analysis and search to derive new and improved trading strategies.

Note that this tutorial is oriented toward market games motivated by commerce scenarios, and will not focus on financial (e.g., equities, commodities) markets. It will not address techniques for predicting price movements in financial markets.

Michael P. Wellman is a professor of computer science and engineering at the University of Michigan, where his research focuses on computational market mechanisms for distributed decision making and electronic commerce. Wellman is Chair of the ACM Special Interest Group on Electronic Commerce (SIGecom) and a Fellow of the AAAI and ACM.

Intelligent software agents, general game players, and high-level controllers for autonomous robots are examples of systems for which the ability to reason about actions and their effects plays a key role. Action programming languages allow us to design such systems, which can solve complex tasks with huge state spaces. Thanks to a high level of abstraction, these programs are easy to write, understand, and maintain. Highly optimized implementations have recently been developed for various action programming languages.

This tutorial will give an introduction to selected languages and systems. Participants will learn how to specify domains and how to write programs for endowing autonomous agents with problem solving abilities. The tutorial will provide an insight into the underlying mathematics and into the advantages and disadvantages of these languages in comparison. A variety of successful applications of action programming languages will be discussed with a focus on general game playing on the one hand, and the combination with low-level control of autonomous robots on the other hand.

The tutorial is directed at every AI researcher who wants to gain an insight into state-of-the-art research in knowledge representation for actions and action programming languages. The only required background is some basic knowledge of standard propositional and first-order logic.

Michael Thielscher is a professor and head of the Computational Logic Group at Dresden University, Germany. He received his Ph.D. with distinction from Darmstadt University. He has published a variety of papers on knowledge representation for actions. With his system FLUXPLAYER, he successfully participated in the first General Game Playing Competition at AAAI-05, tying for third place.

Scenario-based Design of User Interfaces: Theory from AI and Application in HCI (MA2)

Hermann Kaindl
Monday, July 17, 9:00 am–1:00 pm

This intermediate-level tutorial teaches scenario-based design of (traditional and intelligent) user interfaces, drawing on aspects of both artificial intelligence and human-computer interaction. Usage scenarios are successfully applied in, among other areas, object-oriented approaches, requirements engineering, interaction design, and the design of user interfaces. This tutorial shows how scenario-based design can be theoretically underpinned by work on artificial intelligence (AI) planning and functional representation. This insight was used to create an intelligent user interface in the form of a guide through a systematic process for requirements engineering and interaction design (a process that is both scenario-based and itself described in scenarios). This user interface will be demonstrated and its functioning explained in terms of the theoretical background, and the scenario-based design process itself will be taught. Attendees will thus see how AI theories can be applied usefully outside of AI. Finally, this tutorial shows how to design more traditional user interfaces based on scenarios, again building on the theories presented. The presenter conjectures that such applications are facilitated by the deeper understanding gained from these theories. The tutorial will consist of lectures, group exercises, and discussions, with the technical points illustrated by running examples throughout.

Hermann Kaindl joined the Vienna University of Technology in Vienna, Austria, in early 2003 as a full professor. Prior to moving to academia, he was a senior consultant with the division of program and systems engineering at Siemens Austria, where he gained more than 24 years of industrial experience.

Much of computer science and most of AI is empirical, meaning that we test ideas by building and studying systems. Students in most empirical disciplines learn the methods of their field in graduate school: how to visualize data, design experiments, run statistical tests, build statistical models, and so on. AI has no such curriculum, nor can one be provided in a single tutorial. A tutorial can, however, introduce and illustrate a wide range of empirical methods, present the general logic of empirical research, and give heuristic guidance about good and not-so-good practice. This tutorial will start with exploratory data analysis and ways to present and transform data. It will introduce the logic of hypothesis testing, the idea of sampling distributions, confidence intervals, errors, and significance. The most important kinds of tests will be illustrated, including t tests, chi-square tests, and analysis of variance. Nonparametric resampling methods, including randomization and the bootstrap, provide ways to test hypotheses and estimate errors for unconventional statistics. Lastly, the tutorial will survey issues in experiment design and management. All these topics will be tied together by a progression of heuristics, or tricks of the empiricist’s trade.
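As a small taste of the resampling material, the sketch below computes a percentile bootstrap confidence interval for a mean by resampling the data with replacement. The runtime figures are hypothetical data invented for the example.

```python
# A nonparametric bootstrap sketch: estimate a 90% confidence interval
# for the mean by resampling the observed data with replacement.

import random

def bootstrap_ci(data, n_resamples=5000, alpha=0.10, seed=0):
    """Percentile bootstrap confidence interval for the sample mean."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(data, k=len(data))) / len(data)
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]          # 5th percentile
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]  # 95th percentile
    return lo, hi

runtimes = [12.1, 9.8, 11.4, 10.3, 13.0, 9.5, 10.9, 11.7]  # hypothetical
low, high = bootstrap_ci(runtimes)
```

The same resampling loop works for any statistic (a median, a trimmed mean, a difference between two systems), which is what makes the bootstrap useful for unconventional statistics.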

Paul Cohen is deputy director of the Intelligent Systems Division at USC’s Information Sciences Institute, and a research professor in USC’s Department of Computer Science. Prior to joining ISI, Cohen was a professor at the University of Massachusetts, where he taught a course based on his textbook Empirical Methods for Artificial Intelligence. Cohen advises several government programs on evaluation methods. Cohen’s Ph.D. is from Stanford University. He is a Fellow of AAAI.

The tutorial presents the state of the art in the emerging area of semantic web services. The semantic web vision has identified ontologies and Web services as the base technologies for the next generation of the Web, enabling more sophisticated web content processing and computing over the Internet. To overcome the deficiencies of initial Web service technologies for automated discovery, composition, and usage of Web services, semantic web services provide semantically enabled technologies built on exhaustive frameworks and ontologies as the underlying data model. This offers an integrated technology for realizing the vision of the semantic web, turning the Web from a worldwide infrastructure for information consumption and exchange by humans into a Web for distributed computation with semantic interoperability.

The aim of the tutorial is to familiarize attendees with the ideas and concepts of semantic web services and to present the most recent technology developments. The tutorial addresses academic as well as industrial researchers and developers who work with Web technologies and have an interest in Web services and the semantic web. Although no specific knowledge is required as a prerequisite, basic knowledge of ontologies and web services will allow participants to better understand and follow the tutorial.

The tutorial will be presented as a full-day event with the following schedule: (1) Introduction: Semantic Web and Web Services; (2) Semantic Web Service Frameworks (WSMO and OWL-S); (3) Semantic Techniques and Tools (discovery, composition, mediation); (4) Semantic Web Service Systems (WSMX and IRS); and (5) Hands-On Session (practical exercises).

Michael Stollberg is a researcher with the Digital Enterprise Research Institute DERI. He is working in the area of semantic web services, being a founding member of the WSMO working group. He has published around 20 articles in international conferences and journals, and is project manager of the Semantic Web Fred project and workpackage manager in the DIP project.

Emilia Cimpian is a researcher with the Digital Enterprise Research Institute DERI. She is working in the area of semantic web services, focusing on development of semantic web services technologies. Emilia is a founding member of Web Service Execution Environment (WSMX), her main interest being the development of a process mediation framework as part of WSMX.

John Domingue is the deputy director of the Knowledge Media Institute at The Open University, UK. He has published over 80 refereed articles in the areas of AI and human-computer interaction; he is involved in a number of projects and is currently a co-principal investigator on the UK EPSRC-funded Advanced Knowledge Technologies (AKT) project, the scientific director of the EU-funded Integrated Project on Semantic Web Services DIP, and a chair of the WSMO working group. John Domingue is the director of the Fourth Summer School on Ontological Engineering and the Semantic Web, run under the auspices of the KnowledgeWeb EU Network of Excellence.

Liliana Cabral is a research fellow at the Knowledge Media Institute, The Open University, where she has been employed since September 2002. Her main research focuses on the design and implementation of IRS-II, a framework for semantic web services. Previous projects include MIAKT, in which knowledge technologies from the AKT project were combined with medical imaging technology and GRID services to support the diagnosis of breast cancer. She is currently a member of the EU Framework 6 project DIP.

Planning and scheduling algorithms are increasingly guiding autonomous systems that interact with the environment and with humans in the real world. Without effective management of time and resources, these autonomous systems cannot guarantee safe and efficient operations over a long period of time. In this tutorial we review basic and advanced topics in time and resource constraint reasoning and their applications to planning, scheduling, and execution. The emphasis on plan execution is increasingly important as planning moves from the laboratory to real applications. Significant CPU and memory limitations during plan execution provide a strong driver for the design of efficient algorithms. Several such algorithms will be presented in this tutorial, together with their justification from applications such as space exploration, health care systems, military systems, and robotic soccer. The tutorial will present a comprehensive review of current temporal and resource constraint-based formalisms, their motivation, their propagation algorithms, and their use in planning, scheduling, and execution systems.

The tutorial targets both graduate students who are starting to work in this area and researchers in planning and execution who want to deepen their knowledge of advanced topics. Familiarity with algorithms for shortest-path and maximum flow is an advantage but not an essential prerequisite.
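As a taste of the temporal-reasoning material, a Simple Temporal Network (STN) can be tested for consistency by looking for a negative cycle in its distance graph, for example with the Floyd-Warshall all-pairs shortest-path algorithm. This is a standard construction; the events and time bounds below are invented for the example.

```python
# Consistency checking for a Simple Temporal Network (STN): the network
# is consistent iff its distance graph contains no negative cycle,
# which Floyd-Warshall detects in O(n^3).

def stn_consistent(n, constraints):
    """constraints: list of (i, j, w) meaning  t_j - t_i <= w."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in constraints:
        d[i][j] = min(d[i][j], w)          # keep the tightest bound
    for k in range(n):                     # all-pairs shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # A negative distance from a node to itself marks a negative cycle.
    return all(d[i][i] >= 0 for i in range(n))

# Example: a task must start 10..20 after t0 and end 30..40 after t0,
# yet may last at most 5 time units -- these bounds cannot all hold.
ok = stn_consistent(3, [(0, 1, 20), (1, 0, -10),   # 10 <= t1 - t0 <= 20
                        (0, 2, 40), (2, 0, -30),   # 30 <= t2 - t0 <= 40
                        (1, 2, 5)])                # t2 - t1 <= 5
```

The minimized distance matrix also yields the tightest implied bounds between every pair of events, which is the basis of the propagation algorithms the tutorial covers.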

Nicola Muscettola is the chief scientist for autonomy at the Advanced Technology Center, Lockheed Martin Corporation. Muscettola received all his degrees from the Politecnico di Milano, Milano, Italy. He worked in temporal planning and scheduling research at Carnegie Mellon University from 1987 to 1993. From 1993 to 2006 Muscettola was at the NASA Ames Research Center. He was the architect and project lead for the Planner/Scheduler module of the Deep Space 1 Remote Agent that flew in May 1999. He is the architect of the Intelligent Distributed Execution Agent, a re-engineering and rationalization of the Remote Agent architecture that extends it to multiagent systems with real-time guarantees. Muscettola has conducted pioneering research in both temporal and resource reasoning, including the analysis of the executability of temporal networks, temporal constraint propagation with controllability and observability constraints, statistical analysis of schedule resource congestion, and the polynomial theory of resource envelopes.

Martha E. Pollack is a professor and the associate chair for computer science and engineering at the University of Michigan. She does research on the foundations of plan generation, execution, and management, as well as in the use of these techniques in the development of assistive technology for people with cognitive impairment. She is a Fellow of the AAAI, and a past recipient of the Computers and Thought Award.

Intelligent user interfaces (IUIs) aim to improve human-machine interaction by representing, reasoning about, and intelligently acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture). IUIs are multifaceted in purpose and nature, and include capabilities for multimedia input analysis, multimedia presentation generation, and the use of user, discourse, and task models to personalize and enhance interaction. Some IUIs support asynchronous, ambiguous, and inexact input through analysis of multimodal input. Others include animated computer agents that express system and discourse status via facial displays, tailor explanations to particular contexts, or manage dialogues between human and machine.

Potential IUI benefits include more efficient interaction — enabling more rapid task completion with less work; more effective interaction — doing the right thing at the right time, tailoring the content and form of the interaction to the context of the user, task, or dialogue; and more natural interaction — supporting humanlike spoken, written, and gestural interaction.

Methods that identify similar (but not identical) units of text have wide potential application. For example, Web search results can be better organized by grouping together pages with related and similar content. E-mail can be automatically foldered and categorized by finding which messages are similar to each other. Word senses can be discovered by clustering multiple contexts that use a particular ambiguous word.

This tutorial will introduce a language-independent methodology for identifying similar contexts based on lexical features. The tutorial will explore the use of first- and second-order co-occurrence vectors for representing contexts, and introduce methods for carrying out dimensionality reduction that lower the noise and computational complexity associated with these large feature spaces. A number of different clustering methods will be discussed, as will various methods of evaluating the quality of the clustering results. Finally, the tutorial will explore methods of automatically generating descriptive labels for clusters.
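The first-order representation can be sketched directly: each context becomes a bag-of-words vector, and contexts are compared by cosine similarity. The stopword list and example sentences below are invented for illustration; second-order vectors, dimensionality reduction, and clustering all build on this base.

```python
# First-order context vectors: each context is a bag-of-words feature
# vector, and two contexts are compared by cosine similarity.
# (Stopwords and example sentences are assumptions for this sketch.)

from collections import Counter
from math import sqrt

def vectorize(context, stopwords=frozenset({"the", "a", "of", "in"})):
    """Turn a context string into a word-count vector."""
    return Counter(w for w in context.lower().split() if w not in stopwords)

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(u[w] * v[w] for w in u)      # Counter returns 0 for missing keys
    norm = sqrt(sum(c * c for c in u.values())) * \
           sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

c1 = vectorize("deposit the check at the bank before noon")
c2 = vectorize("the bank raised the interest rate on the deposit")
c3 = vectorize("we fished from the grassy bank of the river")
# c1 and c2 (financial sense of "bank") share more features than
# c1 and c3 (river sense), so their cosine similarity is higher --
# exactly the signal a clustering method would exploit to separate
# the two senses.
```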

The tutorial will also include a hands-on option for those with laptop computers. Attendees will be given a bootable Knoppix CD that will let them experiment with many of these ideas and applications using the SenseClusters package. This tutorial only presumes an interest in the topic; no specific background knowledge is required.

Ted Pedersen is a tenured associate professor in the Department of Computer Science at the University of Minnesota Duluth. His research interests are in computational linguistics and natural language processing, in particular on developing methods to automatically resolve the meaning of ambiguous words in written and spoken communication. This research is partially supported by a National Science Foundation CAREER grant.