Virtual environments (VEs) provide an appealing vehicle for training complex skills, particularly for domains where real-world practice incurs significant time, expense, or risk. Two impediments currently block widespread use of intelligent training tools for VEs. The first impediment is that techniques for assessing performance focus on algorithmic skills that force learners to follow rigid solution paths. The second impediment is the high cost of authoring the models that drive intelligent training capabilities.

This paper presents an approach to training in VEs that directly addresses these challenges and summarizes its application to a weapons maintenance task. With our approach, a learner's actions are recorded as they complete training exercises in a semantically instrumented VE. An example-tracing methodology, in which the learner's actions are compared to a predefined solution model, is used to generate assessment information with contextually relevant feedback. Novel graph-matching technology, grounded in edit-distance optimization, aligns learner actions with solution models while tolerating significant deviation. With this robustness to learner mistakes, assessment can support exploratory learning processes rather than forcing learners down fixed solution paths.
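To make the alignment idea concrete, here is a minimal sketch of edit-distance matching between a learner's recorded action sequence and a predefined solution path. The system described above aligns full solution *graphs*; this linear-sequence version is a simplifying assumption, and all action names and function names are illustrative, not taken from the system. The core idea carries over: the smaller the edit distance, the more closely the learner's behavior tracks a valid solution, and deviations are tolerated rather than rejected outright.

```python
def edit_distance(learner_actions, solution_path):
    """Levenshtein distance over action labels, computed by dynamic programming."""
    m, n = len(learner_actions), len(solution_path)
    # dp[i][j] = cost of aligning the first i learner actions
    # with the first j solution steps
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i              # deletions: extra learner actions
    for j in range(n + 1):
        dp[0][j] = j              # insertions: skipped solution steps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if learner_actions[i - 1] == solution_path[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # learner did an extra action
                           dp[i][j - 1] + 1,        # learner skipped a step
                           dp[i - 1][j - 1] + sub)  # match, or wrong action
    return dp[m][n]


def best_match(learner_actions, solution_paths):
    """Pick the solution path the learner's actions most closely follow."""
    return min(solution_paths, key=lambda p: edit_distance(learner_actions, p))
```

For example, a learner who performs two required steps in swapped order incurs a distance of 2 against the canonical path, rather than being flagged as entirely off-track, which is what permits exploratory rather than rigid solution paths.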

Our approach to content creation leverages predefined ontologies, enabling authoring by domain experts rather than technology experts. A semantic mark-up framework supports authors in overlaying ontologies onto VE elements and in specifying actions with their effects. Drawing on these semantics, exercises and their solutions are created through end-user programming techniques: a domain expert demonstrates one or more solutions to a task and then annotates those solutions to define a generalized solution model. A concept validation study shows that users are comfortable with this approach and can apply it to create quality solution models.
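The demonstrate-then-annotate workflow can be sketched as follows. A domain expert records one solution trace, then annotates it, for instance marking some steps as order-independent and others as optional, to yield a generalized solution model. The data structure, annotation vocabulary, and action names below are hypothetical assumptions for illustration; the actual mark-up framework and ontology are not specified here.

```python
from itertools import permutations

def generalize(trace, unordered_group=(), optional=()):
    """Expand one annotated demonstration into a set of accepted action
    sequences (a toy generalized solution model).

    `unordered_group` names steps whose relative order does not matter;
    `optional` names steps that may be skipped entirely.
    """
    accepted = set()
    group = [a for a in trace if a in unordered_group]
    for perm in permutations(group):
        it = iter(perm)
        # re-thread the permuted steps back into their slots in the trace
        variant = [next(it) if a in unordered_group else a for a in trace]
        # each optional step may be kept or skipped; for brevity this sketch
        # only emits the two extremes (all kept, all skipped)
        accepted.add(tuple(variant))
        accepted.add(tuple(a for a in variant if a not in optional))
    return accepted
```

A learner's recorded sequence can then be checked for membership in this set, or scored against its elements by edit distance, so that one demonstration plus a handful of annotations covers a family of acceptable solutions rather than a single rigid path.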