A random testing strategy for object-oriented software constructs test cases by performing two tasks: 1) randomly select a method under test (MUT); 2) randomly select or construct objects to serve as the target or the arguments of the chosen method. Usually, all objects created for or returned by a MUT are stored in an object pool so they can be reused in future test cases. For OO software equipped with contracts, it is difficult for a random testing strategy to select objects that satisfy the precondition of the MUT. As a result, some methods are never tested because all generated test cases fail to satisfy their preconditions. An evaluation of the object pool showed that the traditional strategy often misses object combinations that do satisfy the MUT's precondition. We therefore keep track of these object combinations during the testing process and select them directly for MUTs. We call this the smart object selection strategy.

We implemented the idea in AutoTest, our testing tool for Eiffel, by introducing a predicate pool that keeps track of object combinations satisfying the preconditions of a given method. All preconditions appearing in the classes under test are collected into the predicate pool. After each test-case run, these predicates are evaluated against the objects used in that test case. Object combinations satisfying a given predicate are marked in the pool and associated with that predicate. Later, when a method is to be tested, objects satisfying that method's precondition predicates (as recorded in the predicate pool) can be selected directly: this is smart object selection.

We ran this algorithm for one hour on classes with strong preconditions. The results show that it is indeed able to test methods whose preconditions were rarely satisfied by the original random testing strategy, and which therefore often remained untested.
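The predicate-pool mechanism described above can be sketched as follows. This is a minimal illustration in Python (AutoTest itself is written in Eiffel), and the class and method names are hypothetical, not taken from the tool:

```python
import random


class PredicatePool:
    """Maps each precondition predicate to the object combinations
    observed to satisfy it during earlier test-case runs."""

    def __init__(self):
        # predicate name -> list of argument tuples known to satisfy it
        self._satisfying = {}

    def record(self, name, predicate, objects):
        """Called after a test-case run: evaluate the predicate against
        the objects used in that run and remember satisfying combinations."""
        try:
            if predicate(*objects):
                self._satisfying.setdefault(name, []).append(objects)
        except Exception:
            pass  # a predicate that raises is treated as unsatisfied

    def select(self, name):
        """Smart object selection: return a combination already known to
        satisfy the precondition, or None if none has been observed yet."""
        candidates = self._satisfying.get(name)
        return random.choice(candidates) if candidates else None
```

For a MUT with a strong precondition, say a square-root routine requiring `x >= 0`, the pool lets the tester pick an argument already known to satisfy the precondition instead of sampling blindly from the object pool.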
In terms of the number of faults, the algorithm finds slightly more than original random testing overall. In terms of the kinds of faults, in some classes up to 30% of the faults it finds cannot be found by original random testing.

Bios: Yi Wei has been a PhD candidate at the Chair of Software Engineering, ETH Zurich, since 2007, working on software testing. Before coming to ETH, he received a master's degree in Software Engineering from Wuhan University, China, in 2006. He was an intern at Eiffel Software, California, USA, from 2005 to 2006.

Serge Gebhardt is an MSc student at the Chair of Software Engineering at ETH Zurich, where he is currently writing his MSc thesis in the area of automated software testing. He previously earned a BSc in Computer Science from ETH Zurich and has been a visiting research student at EPFL (Lausanne, Switzerland).