NSF is sponsoring a one-day workshop to consider how ideas and technologies from Artificial Intelligence (AI) can help achieve the goals of Software Engineering (SE). The workshop will gather researchers from both communities who share an interest in leveraging AI research to advance SE; the converse -- improving SE for AI applications and systems -- is also in scope. The objective is to formulate a fruitful research agenda. After the workshop, the organizers will invite some attendees to co-author a report entitled Future Directions in Software Engineering and Artificial Intelligence Research.

Participation is by invitation only. Prospective participants should submit a research vision statement written from one or more of the following perspectives:

On the Value of User Preferences in Search-Based Software Engineering: A Case Study in Software Product Lines

Abdel Salam Sayyad, Tim Menzies, Hany Ammar

Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature models) using various search-based software engineering methods. As we increase the number of optimization objectives, we find that methods in widespread use (e.g., NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best because it makes the most use of user preference knowledge. Hence it not only does better on the standard measures (hypervolume and spread) but also generates far more products with 0% violations of domain constraints. Our conclusion is that we need to change our methods for search-based software engineering, particularly when studying complex decision spaces.
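
To make the setting more concrete, the sketch below shows one hypothetical way to score a product-line configuration against several competing objectives, including a count of domain-constraint violations. The feature names, costs, defect counts, and constraints are invented placeholders, and the sketch does not implement IBEA, NSGA-II, or SPEA2; it only illustrates the kind of multi-objective fitness function such optimizers would search over.

# A minimal, self-contained sketch (not the paper's implementation) of a
# multi-objective fitness function for configuring a software product line.
# Features, costs, defect counts, and cross-tree constraints are invented
# placeholders; a real study would derive them from a feature model.

import random

FEATURES = ["core", "gui", "db", "cache", "logging", "crypto"]
COST     = {"core": 5, "gui": 3, "db": 4, "cache": 2, "logging": 1, "crypto": 6}
DEFECTS  = {"core": 1, "gui": 4, "db": 2, "cache": 3, "logging": 1, "crypto": 2}

# Cross-tree constraints as (antecedent, consequent): selecting the first
# feature requires selecting the second.
REQUIRES = [("cache", "db"), ("crypto", "core")]

def evaluate(product):
    """Return a tuple of objectives to minimize for one configuration.

    product: dict mapping feature name -> bool (selected or not).
    Objectives: (constraint violations, total cost, total defects,
                 number of deselected features).
    """
    violations = sum(1 for a, b in REQUIRES if product[a] and not product[b])
    cost = sum(COST[f] for f in FEATURES if product[f])
    defects = sum(DEFECTS[f] for f in FEATURES if product[f])
    missing = sum(1 for f in FEATURES if not product[f])
    return (violations, cost, defects, missing)

def random_product():
    """Draw a random configuration; a search-based method would evolve these."""
    return {f: random.random() < 0.5 for f in FEATURES}

if __name__ == "__main__":
    # Score a handful of random configurations. An optimizer such as IBEA,
    # NSGA-II, or SPEA2 would search this space rather than sampling blindly.
    for _ in range(5):
        p = random_product()
        print(sorted(f for f in FEATURES if p[f]), "->", evaluate(p))
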

Fayola Peters, Tim Menzies, Andrian Marcus
How can we find data for quality prediction? Early in the lifecycle, projects may lack the data needed to build such predictors. Prior work assumed that relevant training data was found nearest to the local project. But is this the best approach? This paper introduces the Peters filter, which is based on the following conjecture: when local project data is scarce, there is more information in other projects than locally. Accordingly, this filter selects training data via the structure of the other projects. We tested the Peters filter on 21 small data sets, looking for training data in 35 larger data sets. In the majority of cases (67%), the Peters filter builds much better defect predictors than the current state-of-the-art methods. Hence, we recommend the Peters filter for cross-company learning.
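
As a rough illustration of that conjecture, the sketch below selects cross-project training rows guided by the local project's structure: each candidate row from other projects is grouped under its nearest local row, and only the closest candidate per local row is kept. The data, metric vectors, and Euclidean distance are placeholders, and this is a simplified sketch in the spirit of the idea rather than the published Peters filter algorithm.

# A simplified, self-contained sketch of cross-project training-data selection
# guided by the structure of the local project. The metric vectors and the
# Euclidean distance are placeholders; this is not the exact published filter.

import math

def distance(a, b):
    """Euclidean distance over numeric metric vectors (e.g. code metrics)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_training_rows(local_rows, other_rows):
    """Pick cross-project rows using the local project's structure.

    Each cross-project row is grouped under the local row it sits closest to;
    then, for each local row, only the single closest cross-project row that
    chose it is kept. The result is a small training set shaped by the
    structure of the local data.
    """
    # Group cross-project rows by their nearest local row.
    fans = {}  # index of local row -> list of (distance, cross-project row)
    for row in other_rows:
        dists = [distance(row, loc) for loc in local_rows]
        nearest = min(range(len(local_rows)), key=lambda i: dists[i])
        fans.setdefault(nearest, []).append((dists[nearest], row))

    # For each local row, keep the closest of its candidates as training data.
    selected = []
    for _, candidates in sorted(fans.items()):
        candidates.sort(key=lambda pair: pair[0])
        selected.append(candidates[0][1])
    return selected

if __name__ == "__main__":
    # Toy metric vectors: the local project is small, other projects are larger.
    local = [(1.0, 2.0), (5.0, 5.0)]
    other = [(0.9, 2.1), (1.2, 1.8), (4.8, 5.2), (9.0, 9.0)]
    print(select_training_rows(local, other))
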