Testing denotes a set of activities that aim at comparing a system's actual and intended behaviors. The intended behavior can be provided in the form of explicit behavior models. Runs of validated models serve as test cases, i.e., sequences of input and expected output signals. Since models are abstractions, or simplifications, abstract model traces must be translated into concrete intended traces of the implementation; the latter are then compared to actual traces. In this way, complexity is distributed between the model and the driver component that performs the translation.

Technologically, approaches to test case generation based on symbolic execution are presented. In particular for deterministic systems, test case generation can be understood as a search problem over the model's state space. State space explosion can be alleviated by strategies for storing sets of states as well as by variants of directed search. For structural coverage criteria, algorithms for the incremental generation of integration test suites are presented. This entails the generation of selection criteria on the runs of the model, so-called test case specifications; test case generators can then automatically compute corresponding test cases. For functional test case specifications given as interaction patterns, test case generation amounts to filling in missing signals. For universal properties such as invariants, a methodology is presented that derives test case specifications by syntactically transforming temporal logic formulae; the result of this transformation can then be used for automated test case generation.

Further methodological results concern different scenarios of model-based testing. In particular, the relationship with automatic code generation and the use of models as specifications is discussed, and a possible dovetailing of the incremental development of models with the incremental development of test cases is examined.
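The separation of concerns described above, between an abstract test case derived from the model and a driver that concretizes it, can be sketched as follows. All names here (the signal maps, the toy system under test, the byte encodings) are illustrative assumptions, not part of the original work:

```python
# Sketch: a test case as a sequence of (input, expected output) signal
# pairs at the model's level of abstraction, plus a driver that
# translates abstract signals into concrete messages of the
# implementation before comparing traces. All names are hypothetical.

ABSTRACT_TEST_CASE = [
    ("card_inserted", "request_pin"),
    ("pin_ok", "grant_access"),
]

# The driver's translation tables: abstract signals -> concrete messages
# (here, APDU-like byte strings merely for illustration).
IN_MAP = {"card_inserted": b"\x00\xa4", "pin_ok": b"\x00\x20"}
OUT_MAP = {"request_pin": b"\x61\x00", "grant_access": b"\x90\x00"}

def run_test(sut, test_case):
    """Concretize each abstract input, feed it to the system under test,
    and compare the actual concrete output with the concretized
    expected output. Returns True (pass) or False (fail)."""
    for abs_in, abs_out in test_case:
        actual = sut(IN_MAP[abs_in])
        if actual != OUT_MAP[abs_out]:
            return False
    return True

def toy_sut(msg):
    """A toy implementation standing in for the real system under test."""
    return {b"\x00\xa4": b"\x61\x00", b"\x00\x20": b"\x90\x00"}[msg]
```

Note how the model-level test case never mentions concrete byte strings: the driver alone carries that complexity, which is the distribution of complexity the text refers to.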
Industrial case studies in the domain of chip card applications demonstrate the practical applicability of the presented concepts and approaches.
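The view of test case generation as a search of the model's state space can be illustrated with a minimal sketch. The toy model, the transition table, and the predicate standing in for a test case specification are all assumptions for illustration; breadth-first search is used here, where a variant of directed search would replace the queue with a priority queue ordered by a heuristic:

```python
from collections import deque

# Sketch: test case generation as state-space search. A deterministic
# model is given as a transition table state -> {input: (output, next)};
# the "test case specification" is a predicate on states (e.g., a
# coverage goal). BFS returns the i/o trace reaching the first state
# that satisfies the specification. All names are hypothetical.

TRANSITIONS = {
    "idle":     {"insert": ("ask_pin", "wait_pin")},
    "wait_pin": {"good_pin": ("ok", "granted"),
                 "bad_pin": ("retry", "wait_pin")},
    "granted":  {},
}

def generate_test_case(initial, spec):
    """Search reachable states; return the sequence of (input, output)
    pairs leading to a state satisfying spec, or None if unreachable.
    The visited set stores explored states to curb state explosion."""
    queue = deque([(initial, [])])
    visited = {initial}
    while queue:
        state, trace = queue.popleft()
        if spec(state):
            return trace
        for inp, (out, nxt) in TRANSITIONS[state].items():
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, trace + [(inp, out)]))
    return None
```

The returned trace is itself a test case in the sense above: a sequence of inputs together with the outputs the model predicts, ready to be concretized by a driver.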
