The objectives of this workshop are to bring together researchers and industrial practitioners from both SBST and the wider software engineering community to collaborate, share experience, provide directions for future research, and encourage the use of search techniques in novel aspects of software testing, in combination with other aspects of the software engineering lifecycle.

The 9th International Workshop on Search-Based Software Testing (SBST) will be co-located with ICSE 2016 in Austin, Texas, on May 16-17, 2016.


Call for Submissions

Full Papers

Maximum of 10 pages, presenting original research (either empirical or theoretical) in SBST, practical experiences of using SBST, or SBST tools.

Short Papers

Maximum of 4 pages, describing novel techniques, ideas, or positions that are not yet fully developed, or discussing the importance of recently published SBST work by another author in setting a direction for the SBST community.

Position Papers

Maximum of 2 pages, analyzing trends in SBST and raising issues of importance. Position papers are intended to seed discussion and debate at the workshop, and will be reviewed with respect to relevance and their ability to spark discussions.

Competition Reports

Maximum of 4 pages. We invite researchers, students, and industrial developers to design innovative new approaches to software test generation; see the Tool Competition section below for details.

Format and Submission

All submissions must be in PDF format. Make sure that you use the correct ACM style file (for LaTeX, the "option 2" style) and that the paper is in US letter page format. All submissions must be made electronically through EasyChair.

Accepted papers will be published in the ICSE 2016 Workshop Proceedings in the ACM and IEEE Digital Libraries. The official publication date of the workshop proceedings is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of ICSE 2016. The official publication date affects the deadline for any patent filings related to published work.

Keynote Speakers

Claire Le Goues
Passing tests is easy: when full coverage isn't enough

Research in automated program improvement seeks to improve programs by, e.g.,
fixing bugs, porting functionality, or improving non-functional properties.
Most such techniques, whether search-based or semantic, rely on test cases to
validate transformation correctness, in the absence of formal correctness
specifications. In this talk I will discuss the progression of the area of
automated bug repair in particular. I will especially focus on the key
challenge of assuring, measuring, and reasoning about the quality of bug-fixing
patches. I will outline recent results on the relationship between test suite
quality and origin, on the one hand, and output quality, on the other, with
observations about both semantic and heuristic approaches. I will conclude
with a discussion of potentially
promising future directions and open questions, especially focusing on the
potential synergies with search-based automated testing.

Claire Le Goues is an Assistant Professor in the School of Computer Science at Carnegie Mellon University in the Institute for Software Research. She is broadly interested in how engineers can construct, maintain, evolve, and then assure high-quality, real-world and open-source systems. Her research is in Software Engineering, inspired and informed by program analysis and transformation, with a side of search-based software engineering. She focuses on automatic program improvement and repair (using stochastic or search-based as well as more formal approaches such as SMT-informed semantic code search); assurance and testing, especially in light of the scale and complexity of modern evolving systems; and quality metrics.
For more, see her vita, list of publications, or home page.

Tim Menzies
Data Science² = (Test * Data Science)

I will argue that the limits to test are really the limits to science
and, also, the limits to data science.

This is an important point since half of “data science” is “science”
and science is about communities studying each other's models, while trying
to refute or improve those models. Yet much of the current work is
concerned with either (1) the systems layer required to reason over data
sets or (2) the creation of dashboards that just let anyone view the data.

Missing in much of that work are the tools required to continually share,
critique, and perhaps refute and improve the models generated by others.
Accordingly, this talk explores what more is required in order to perpetually
test the models generated by communities working on data science.

Tim Menzies (Ph.D., UNSW) is a full Professor in Computer Science at North Carolina State University, where he teaches software engineering and automated software engineering. His research relates to synergies between human and artificial intelligence, with particular application to data mining for software engineering. He is the author of over 230 refereed publications and is one of the 100 most cited authors in software engineering out of over 80,000 researchers (http://goo.gl/BnFJs). In his career, he has been a lead researcher on projects for NSF, NIJ, DoD, NASA, and USDA, as well as on joint research work with private companies. Prof. Menzies is the co-founder of the PROMISE conference series devoted to reproducible experiments in software engineering (http://openscience.us/repo). He is an associate editor of IEEE Transactions on Software Engineering, Empirical Software Engineering, the Information and Software Technology journal, the Automated Software Engineering Journal, the Software Quality Journal, and the Big Data Research Journal. In 2015, he served as co-chair of the ICSE'15 NIER track. In 2016, he serves as co-general chair of ICSME'16. In 2017 he will serve as PC co-chair for SSBSE'17. For more, see his vita, list of publications, or home page.

Tutorial

EvoSuite is a tool that automatically generates test cases with assertions for classes written in Java. To achieve this, EvoSuite applies a novel hybrid approach that generates and optimizes whole test suites towards satisfying a coverage criterion. For the produced test suites, EvoSuite suggests possible oracles by adding small and effective sets of assertions that concisely summarize the current behavior; these assertions allow the developer to detect deviations from expected behavior, and to capture the current behavior in order to protect against future defects breaking it. In this tutorial, Gordon Fraser will discuss how to use EvoSuite, how to integrate it into other tools, and how to extend it.
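To give a flavor of what such generated tests look like, here is a small hand-written sketch (not actual EvoSuite output; the test names are illustrative) of a JUnit test in the style EvoSuite produces, using java.util.Stack as the class under test:

    import static org.junit.Assert.*;

    import java.util.EmptyStackException;
    import java.util.Stack;

    import org.junit.Test;

    // Hand-written sketch of the kind of test EvoSuite generates (not actual
    // tool output), using java.util.Stack as the class under test. The
    // assertions record the current behavior, so any later change that
    // alters this behavior will make the test fail.
    public class Stack_ESTest {

        @Test
        public void testPushThenPop() {
            Stack<String> stack = new Stack<>();
            stack.push("hello");
            String popped = stack.pop();

            // Regression assertions summarizing the observed behavior
            assertEquals("hello", popped);
            assertTrue(stack.isEmpty());
            assertEquals(0, stack.size());
        }

        @Test
        public void testPopOnEmptyStackThrows() {
            Stack<String> stack = new Stack<>();
            try {
                stack.pop();
                fail("Expecting exception: EmptyStackException");
            } catch (EmptyStackException e) {
                // Current behavior: popping an empty stack throws
            }
        }
    }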

Gordon Fraser is a lecturer in Computer Science at the University of Sheffield, UK. He received a PhD in computer science from Graz University of Technology, Austria, in 2007, and worked as a post-doc researcher at Saarland University, Germany. The central theme of his research is improving software quality, and his recent research concerns the prevention, detection, and removal of defects in software. More specifically, he develops techniques to generate test cases automatically, and to guide the tester in validating the output of tests by producing test oracles and specifications. He is chair of the steering committees of the International Conference on Software Testing, Verification, and Validation (ICST) and the International Symposium on Search-Based Software Engineering (SSBSE).

Tool Competition

After three successful competitions, we again invite developers of tools for Java unit testing at the class level (both SBST and non-SBST) to participate in the 4th round of our tool competition!

The contest is targeted at developers and vendors of testing tools that generate test input data for unit testing Java programs at the class level. Each tool will be applied to a set of Java classes taken from open-source projects and selected by the contest organizers. The participating tools will be compared on statement and branch coverage ratios, fault detection and mutation scores, and preparation, generation, and execution times.

Competition entries are in the form of short papers (maximum of 4 pages) describing an evaluation of your tool against a benchmark supplied by the workshop organizers. In addition to comparing your tool to other popular and successful tools such as Randoop, we will manually create unit tests for the classes under test, so that benchmark scores for manual and automated test generation can be obtained and compared.