Explanation-based learning is a recently developed approach to concept acquisition by computer. In this type of machine learning, a specific problem's solution is generalized into a form that can later be used to solve conceptually similar problems. A number of explanation-based generalization algorithms have been developed. Most do not alter the structure of the explanation of the specific problem: no additional objects or inference rules are incorporated. Instead, these algorithms generalize by converting constants in the observed example into variables with constraints. However, many important concepts, in order to be properly learned, require that the structure of explanations be generalized. This can involve generalizing such things as the number of entities involved in a concept or the number of times some action is performed. For example, concepts such as momentum and energy conservation apply to arbitrary numbers of physical objects, clearing the top of a desk can require an arbitrary number of object relocations, and setting a table can involve an arbitrary number of guests.

Two theories of extending explanations during the generalization process have been developed, and computer implementations have been created to test these approaches. The Physics 101 system utilizes characteristics of mathematically based problem solving to extend mathematical calculations in a psychologically plausible way, while the BAGGER system implements a domain-independent approach to generalizing explanation structures. Both of these systems are described and the details of their algorithms presented. Several examples of learning in each system are discussed. An approach to the operationality/generality trade-off and an empirical analysis of explanation-based learning are also presented. The computer experiments demonstrate the value of generalizing explanation structures in particular, and of explanation-based learning in general.
These experiments also demonstrate the advantages of learning by observing the intelligent behavior of external agents. Several open research issues in generalizing the structure of explanations are raised, and related approaches to this problem are discussed. This research brings explanation-based learning closer to its goal of acquiring the full concept inherent in the solution to a specific problem.
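The conventional constant-to-variable generalization described above, which BAGGER and Physics 101 go beyond, can be sketched in a few lines. This is a minimal illustrative sketch, not the algorithm of either system; the predicate tuples, the `?x` variable convention, and the `generalize` helper are all assumptions introduced here for illustration:

```python
# Sketch of the standard explanation-based generalization step:
# each constant in the specific explanation becomes a variable,
# carrying along the type constraint the explanation relied on.
# Note the explanation STRUCTURE is untouched: same facts, same arity.

def generalize(facts, types):
    """Replace constants with fresh variables; return (rule, constraints)."""
    var_of = {}

    def var(constant):
        if constant not in var_of:
            var_of[constant] = f"?x{len(var_of)}"  # fresh variable per constant
        return var_of[constant]

    rule = [(pred,) + tuple(var(a) for a in args) for pred, *args in facts]
    constraints = [(types[c], v) for c, v in var_of.items()]
    return rule, constraints

# Specific explanation: Block-A sits on Table-1, Block-B sits on Block-A.
facts = [("on", "blockA", "table1"), ("on", "blockB", "blockA")]
types = {"blockA": "Block", "table1": "Table", "blockB": "Block"}

rule, constraints = generalize(facts, types)
# rule keeps exactly two "on" facts; only the participants are variable.
```

Because the number of facts is fixed at generalization time, a rule learned this way from a two-block example can never describe a tower of arbitrary height; that limitation is precisely what generalizing the structure of explanations addresses.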