What Went Wrong and Why: Lessons from AI Research and Applications

Papers from the AAAI Workshop

Bugs, glitches, and failures shape research and development by charting the boundaries of technology: they identify errors, reveal assumptions, and expose design flaws. When a system works, we focus on its input/output behavior; but when a problem occurs, we examine the mechanisms that generated the behavior to account for the flaw and hypothesize corrections. This process produces insight and forces incremental refinement. In a sense, failures are the mother of necessity, and therefore the grandmother of invention.

Unfortunately, bugs, glitches, and failures are rarely mentioned in academic discourse, and their role in informing design and development is essentially lost. The first What Went Wrong and Why workshop, held during the 2006 AAAI Spring Symposium, began to address this gap by inviting AI researchers and system developers to discuss their most revealing bugs and relate those problems to lessons learned.

This workshop continues our analysis of failures in research. In addition to examining the links between failure and insight, we would like to determine whether there is a hidden structure behind our tendency to make mistakes, one that could be exploited to guide future research.