Using Model Counting to Find Optimal Distinguishing Tests

Testing is the process of stimulating a system with inputs in order to reveal hidden parts of the system state. For non-deterministic systems, the difficulty arises that a single input pattern can generate several possible outcomes. Some of these outcomes make it possible to distinguish between different hypotheses about the system state, while others do not. In this paper, we present a novel approach to finding, for non-deterministic systems modeled as constraints over variables, tests that distinguish among the hypotheses as well as possible. The idea is to assess the quality of a test by determining the ratio of distinguishing (good) to non-distinguishing (bad) outcomes. This measure refines previous notions proposed in the literature on model-based testing and can be computed using model counting techniques. We propose and analyze a greedy-type algorithm to solve this test optimization problem, using existing model counters as a building block. We give preliminary experimental results of our method and discuss possible improvements.

Authors: Stefan Heinz; Martin Sachenbacher
Report: ZIB-Report 08-32, 2008-08-14
ISSN: 1438-0064
URN: urn:nbn:de:0297-zib-10832
Appeared in: Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems: 6th International Conference, CPAIOR 2009, Lecture Notes in Computer Science 5547, pp. 117-131, 2009
Project: VeriCount
Keywords: counting; automated test generation; constraint programming
Subjects: Mathematics; Mathematical Optimization
Files:
https://opus4.kobv.de/opus4-zib/files/1083/ZR_08_32.pdf
https://opus4.kobv.de/opus4-zib/files/1083/ZR_08_32.ps
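The quality measure described in the abstract can be illustrated with a small enumeration sketch. The two hypothesis models below are hypothetical toy systems invented for illustration; the paper computes these counts with model counters over constraint models rather than by explicit enumeration:

```python
# Toy non-deterministic system: each hypothesis maps an input to the SET
# of outputs it may produce (hypothetical models, not from the paper).
def hyp_ok(x):      # hypothesis 1: component healthy
    return {x % 3, (x + 1) % 3}

def hyp_faulty(x):  # hypothesis 2: component faulty
    return {(x * 2) % 3}

def distinguishing_ratio(x):
    """Fraction of possible outcomes that identify a unique hypothesis.

    An outcome is distinguishing (good) if only one hypothesis can
    produce it, and non-distinguishing (bad) if both can.  In the paper
    this ratio is obtained via model counting; here we enumerate.
    """
    a, b = hyp_ok(x), hyp_faulty(x)
    good = len(a ^ b)       # symmetric difference: producible by one hypothesis only
    return good / len(a | b)

# Greedy-style test selection: pick the input with the highest ratio.
best = max(range(3), key=distinguishing_ratio)
```

With these toy models, input 2 yields only distinguishing outcomes (ratio 1.0), so the greedy selection picks it; the paper's algorithm applies the same ranking idea to counts delivered by an off-the-shelf model counter.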