Models of dynamic control tasks are often inaccurate. Their accuracy can be improved through recalibration, but this requires an enormous amount of data. An alternative approach improves a model by learning from experience; in particular, by using nearest-neighbor and similar memory-based reasoning algorithms to improve performance. Recently these methods have begun to be explored by researchers studying dynamic control tasks, with some degree of success. For example, published demonstrations have shown how they can be used to simulate running machines, bat balls into a bucket, and balance poles on a moving cart. However, these demonstrations did not highlight the fact that small changes in these learning algorithms can dramatically alter their performance on dynamic control tasks. We describe several variations of these algorithms, and apply them to the problem of teaching a robot how to catch a baseball. We empirically investigate several hypotheses concerning design decisions that should be addressed when applying nearest-neighbor algorithms to dynamic control tasks. Our results highlight several strengths and limitations of memory-based control methods.
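To make the memory-based idea concrete, here is a minimal sketch of the kind of nearest-neighbor prediction these methods rely on: past (state, action) experiences are stored verbatim, and the action for a new state is taken from the k stored states closest to it. The function name `knn_predict`, the averaging rule, and the example memory are illustrative assumptions, not the specific algorithm variations studied in the paper.

```python
import math

def knn_predict(memory, query, k=3):
    """Predict an action for `query` by averaging the actions of the
    k stored (state, action) pairs whose states are closest to it.
    `memory` is a list of (state_tuple, action_value) experiences."""
    # Sort stored experiences by Euclidean distance to the query state.
    ranked = sorted(memory, key=lambda sa: math.dist(sa[0], query))
    neighbors = ranked[:k]
    # Simple unweighted average of the neighbors' actions.
    return sum(action for _, action in neighbors) / len(neighbors)

# Hypothetical memory of experiences collected from earlier trials.
memory = [((0.0, 0.0), 1.0), ((1.0, 0.0), 2.0),
          ((0.0, 1.0), 3.0), ((5.0, 5.0), 10.0)]

print(knn_predict(memory, (0.1, 0.1), k=3))  # → 2.0 (mean of 1.0, 2.0, 3.0)
```

Even in this toy form, the design decisions the paper investigates are visible: the choice of k, the distance metric, and whether neighbor actions are averaged or weighted can all change the controller's behavior.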
