In the age of big data, data science is an essential skill for software engineers. It can be used to predict useful information about new projects based on completed projects. This tutorial reflects on the state of the art in this important field. Before data mining, this tutorial discusses the tasks needed to deploy data mining algorithms in organizations, including how to determine the information needs of particular managers. During data mining, this tutorial discusses the following: (a) when studying particular organizations, how to use surveys and interviews to guide data analysis; (b) when local data is scarce, we show how to adapt data from other organizations to local problems; (c) when working with data of dubious quality, we show how to prune spurious information; (d) when data or models seem too complex, we show how to simplify data mining results; (e) when the world changes and old models need to be updated, we show how to handle those updates; (f) when the effect is too complex for one model, we show how to reason over ensembles. Target audience: Software practitioners and researchers wanting to understand the state of the art in using data mining for software engineering (SE) data. Pre-requisites: This tutorial makes minimal use of maths or advanced algorithms and should be understandable by developers and technical managers.

Mining software repositories gives developers and researchers a chance to learn from previous development activities and apply that knowledge to the future. Ultra-large-scale open source repositories (e.g., SourceForge with 350k+ projects) provide an extremely large corpus on which to perform such mining tasks. This large corpus allows researchers the opportunity to test new mining techniques and empirically validate new approaches on real-world data. However, the barrier to entry is often extremely high. Researchers interested in mining must know a large number of techniques, languages, tools, etc., each of which is often complex. Additionally, performing mining at the scale proposed above adds further complexity and is often difficult.
The Boa language and infrastructure were developed to solve these problems. Boa provides a web-based interface for mining ultra-large-scale software repositories. Users write queries in a domain-specific language tailored for software repository mining and submit them to the website. These queries are then automatically parallelized and executed on a cluster.
This tutorial teaches users how to efficiently perform mining tasks on over 600k open-source software projects. We introduce the language and supporting infrastructure and then perform several mining tasks. Users need not be experts in mining: Boa is simple enough for even novices, yet still powerful enough for experts.
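To give a flavor of the kind of mining task Boa supports (e.g., counting projects per programming language), here is a rough stand-in written in plain Python over a tiny hypothetical corpus. This is only an analogy to illustrate the emit-and-aggregate style of such queries, not Boa's actual DSL syntax, and real queries run server-side over the full 600k+ project dataset.

```python
from collections import Counter

# Hypothetical mini-corpus standing in for repository metadata; a real Boa
# query would be compiled and run in parallel over the whole corpus.
projects = [
    {"name": "alpha", "language": "Java"},
    {"name": "beta", "language": "C"},
    {"name": "gamma", "language": "Java"},
]

def count_projects_by_language(projects):
    """Emit one count per project and aggregate, mirroring the
    emit-and-aggregate style of repository-mining queries."""
    counts = Counter()
    for p in projects:
        counts[p["language"]] += 1
    return dict(counts)

print(count_projects_by_language(projects))
```

The point of the DSL is that the loop body above is all a user writes; parallelization and aggregation across the cluster are handled by the infrastructure.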

Over the past decade, the reliance of the Software Engineering (SE) community on data and on quantitative analysis has grown tremendously. Statistical tools, given their general nature, require a level of sophistication and in-depth knowledge about modeling and data analysis, yet a typical SE practitioner has limited exposure to these domains. Furthermore, most tutorial examples for statistical tools tend to be drawn from non-SE domains and do not take into account the peculiarities of integrating highly structured and large-scale data from the version control systems and other data sources used in SE.
By the end of this half-day tutorial, participants will be familiar with many of the challenges associated with statistical analysis of SE data, and will have been exposed to the best practices and techniques aimed at addressing such challenges. In particular, we will focus on best practices for building regression models, pitfalls in interpreting the models, and ways of examining the validity of the models. We will also cover some basic topics, such as hypothesis testing, and some advanced topics, such as the role of sampling, the time dependence of data (i.e., auto-correlation), the handling of missing data, and censored observations. Participants can then apply this knowledge to their own problems and data sets.
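As a minimal, hedged sketch of the model-building step discussed above (synthetic data, stdlib Python only; the tutorial itself relies on full statistical tooling), ordinary least squares for a single predictor can be written out directly from its closed form:

```python
# Minimal OLS sketch: fit y = slope * x + intercept by least squares.
def ols_fit(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)          # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance term
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (module size in KLOC, defect count) pairs, chosen to be
# exactly linear so the fit is easy to check by hand.
kloc = [1.0, 2.0, 3.0, 4.0, 5.0]
defects = [3.0, 5.0, 7.0, 9.0, 11.0]
slope, intercept = ols_fit(kloc, defects)
print(slope, intercept)  # 2.0 1.0 on this exactly linear data
```

Real SE data is rarely this clean; the pitfalls the tutorial covers (auto-correlation, missing data, censoring) are precisely what this naive fit ignores.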

Software for massive transaction-processing systems, smart grids, and telecommunication systems must deliver the expected quality of service without disruption almost all the time. Due to the enormous complexity of such systems and their operating environments, applications become much more vulnerable to failure. Assuring fault-tolerance of such systems is absolutely essential and highly challenging. Traditionally, fault-tolerance has been addressed only at the infrastructure level. However, this is not sufficient to ensure uninterrupted service as demanded today, when business needs change constantly and the operating environment exhibits unexpected behavior. In addition to the infrastructure, fault-tolerance must be crafted at the application level.
In this tutorial we present a range of useful models and methods across the various stages of the software development life cycle for ensuring fault-tolerance at the software application level.
Part 1: Software Architecture & Design. We discuss key architecture and design challenges in building fault-tolerant applications that can adapt themselves to future uncertainties.
Part 2: Software Testing. We discuss verification and validation of the application's fault tolerance. We introduce the concept of fault models and software fault injection techniques.
Part 3: Post-deployment Stage. Here we discuss techniques to assure the fault-tolerance of the software after deployment, making it possible to predict and prevent failures, and to recover quickly after a failure.
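As a tiny, hedged illustration of the fault injection idea from Part 2 (all names hypothetical, stdlib Python only, not a specific tool from the tutorial): a dependency is wrapped so that it fails on demand, letting us verify that the caller degrades gracefully instead of crashing.

```python
import random

def fetch_balance(account):
    """Stand-in for a real dependency (e.g., a remote service call)."""
    return 100

def inject_faults(fn, rate, rng):
    """Wrap fn so it raises with probability `rate` — a simple fault injector."""
    def faulty(*args):
        if rng.random() < rate:
            raise IOError("injected fault")
        return fn(*args)
    return faulty

def balance_with_fallback(fetch, account):
    """Application-level fault-tolerance: fall back to a safe default."""
    try:
        return fetch(account)
    except IOError:
        return 0  # degraded, but the service stays up

rng = random.Random(42)
faulty_fetch = inject_faults(fetch_balance, rate=1.0, rng=rng)
print(balance_with_fallback(faulty_fetch, "acct-1"))  # 0: fault tolerated
```

With `rate=1.0` every call fails, which deterministically exercises the recovery path; lower rates probe intermittent-failure behavior.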

Conference presentations are the moment to share your results and to connect with researchers about future directions. However, presentations are often created as an afterthought, and as a result they are often not as exciting as they could be. In this tutorial Felienne Hermans, software engineering researcher and highly experienced speaker (TEDx 2011, StrataConf keynote 2013), will share hands-on techniques to engage an audience. Participants of this workshop are invited to bring a paper and accompanying slides, which will be refined during the workshop based on material from Felienne and feedback from other participants. The workshop covers the whole spectrum of presenting: we start with advice on how to structure a talk and how to incorporate a core message into it. Once we have addressed the right structure for a talk, we will work on adding a story line and arcs of tension to your presentation. Finally, to really perform as a presenter, we will talk about how slide design and body language can support your presentation.

To be successful in software development, software teams must have the knowledge to systematically evaluate the progress and health of their projects, and to detect and resolve risks early. How do teams acquire and apply such knowledge? How do teams adapt this knowledge to different development contexts? This tutorial demonstrates how Essence, the software engineering kernel and language, addresses these challenges. Essence is the result of the global SEMAT initiative that has been under way for several years and is now emerging as a standard adopted by the OMG. Essence provides an innovative and novel user experience based on cards and game boards that are used to assess the progress and health of software development. Through the cards and the boards, developers can enact various development games, such as planning sprints/iterations, agreeing on lifecycle models, and evaluating the health and progress of a project, all of which are demonstrated in this tutorial. Essence is an effective approach both in real software development and in software engineering education. Moreover, Essence provides a foundation for software engineering research and has been demonstrated as a framework for presenting case studies. The target audiences for this tutorial are practitioners, educators, and researchers.

Fred Brooks, in his book "The Mythical Man Month", describes how the inherent properties of software (i.e., complexity, conformity, changeability, and invisibility) make its design an "essential" difficulty. Good design practices are fundamental requisites for addressing this difficulty. One such good design practice is identifying and addressing smells. Most practitioners know about identifying and refactoring code smells. However, there is a lack of awareness of refactoring design smells and architecture smells, which are equally important for creating high-quality software. In this half-day tutorial, we introduce a comprehensive catalog, classification, and naming scheme for design smells to the participants. We discuss important structural design smells based on how they violate the four key object-oriented design principles (abstraction, encapsulation, modularization, and hierarchy). Each of these smells is illustrated through design smells found in the OpenJDK 7.0 (open source Java Development Kit) code base, with detailed discussions of refactoring strategies for addressing them. Participants are expected to have working knowledge of an object-oriented programming language. By attending this session, participants will gain a good understanding of design smells and how to refactor them in real-world projects.
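For a flavor of the kind of refactoring discussed, here is a hypothetical Python example (not one of the OpenJDK cases from the tutorial) of a class mixing two unrelated concerns, split so that each abstraction has a single, focused responsibility:

```python
# Before: one class holds both computation and presentation — a structural
# design smell against the 'abstraction' principle (hypothetical example).
class ReportManager:
    def __init__(self, rows):
        self.rows = rows

    def total(self):
        return sum(self.rows)

    def render_html(self):  # presentation concern tangled into the data class
        return "<p>total=%d</p>" % self.total()

# After: the presentation concern is extracted into its own abstraction,
# so each class can evolve (and be tested) independently.
class Report:
    def __init__(self, rows):
        self.rows = rows

    def total(self):
        return sum(self.rows)

class HtmlRenderer:
    def render(self, report):
        return "<p>total=%d</p>" % report.total()

print(HtmlRenderer().render(Report([1, 2, 3])))  # <p>total=6</p>
```

The observable behavior is unchanged; what the refactoring buys is that a new output format (say, JSON) becomes a new renderer rather than another method bolted onto the data class.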

Case study research is the investigation of a small sample of cases in the field, in order to identify the structural components of the case that are responsible for the production of observed phenomena. In software engineering, the structural components of a case can be people, techniques, methods, hardware, software, organizational roles, and organizational entities, among others. The goal of case study research is to reveal how these components contribute to phenomena in software engineering projects, improving our knowledge about their applicability in the real world.
In this tutorial we explain two case study research methodologies in empirical software engineering, namely a methodology for observational case study research, in which the researcher does not intervene in the case, and a methodology for technical action research, in which the researcher intervenes
to help real-world stakeholders, and then studies the effects of this intervention.
The tutorial presents checklists for designing, reporting and interpreting case study research. We discuss the role of theory, and of causal and architectural reasoning in explaining case phenomena. We also discuss the role of analogy in generalizing from case studies, and of the process of analytical induction to
extend and refine a case-based generalization. We contrast this with sample-based statistical generalization. We give examples from published research, and there will be the opportunity to discuss the application of this material to case studies done by members of the audience. After this tutorial, participants should be able to design, execute, and report on a case study.
The current tutorial is part of a PhD course given yearly to PhD students in information systems, databases and artificial intelligence in The Netherlands.

Mobile application usage and development is experiencing exponential growth. According to Gartner, by 2016 more than 300 billion applications will be downloaded annually. The mobile domain presents new challenges to software engineering. Mobile platforms are rapidly changing, incorporating diverse capabilities such as GPS, sensors, and input modes. Applications must be omni-channel and work on all platforms. Modern applications activated on mobile platforms must be elastic and scale on demand according to hardware capabilities. Bring-your-own-device (BYOD) policies bring new challenges to mobile development in enterprises, e.g., security risks such as data leaks.
Developing mobile applications requires suitable practices and tools e.g., architecture techniques that relate to the complexity at hand; improved refactoring tools for hybrid applications using dynamic languages and polyglot development; and testing techniques for applications that run on different devices.
The goal of this tutorial is to introduce and practice development and testing of mobile applications using a mobile development platform. As part of this tutorial, we review the challenges in mobile development and present advanced techniques for developing and testing mobile applications. We also facilitate hands-on exercises in mobile client development so that participants experience some of the aspects that are presented. This tutorial is relevant for software practitioners who are interested in the development and testing of mobile applications and for managers who wish to bring mobile applications into their organizations. It is also relevant for instructors in academia who are interested in incorporating such practices into their teaching.

This tutorial provides a conceptual understanding of the use of social network analysis techniques in the software engineering domain and brings out how these models supplement and complement existing approaches in software engineering. In the first part of the tutorial, we provide rigorous foundations for the relevant concepts in social network analysis and the construction of useful networks from software engineering data. In the second part, we bring out how social network analysis techniques help analyze certain problems in software engineering better, and show how to apply these concepts to problem solving in a rigorous way. In particular, we present a comprehensive study of a few contemporary and pertinent problems at the intersection of software engineering and social network analysis, such as allocating resources to project maintenance, understanding project organization structure, and forming effective teams. In the third part of the tutorial, we will provide a hands-on demonstration of software tools for network analysis and visualisation. And finally, we'll have a Bring-Your-Own-Problem session to liven up the discussion and get the audience to start using the concepts that we introduced.
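As a small, hedged illustration of constructing a network from software engineering data (a hypothetical commit log, stdlib Python only; the tutorial uses dedicated analysis tools): developers are linked when they have touched the same file, and each developer's degree counts their distinct collaborators.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical (developer, file) pairs; real inputs would be extracted
# from a version control system's history.
commits = [
    ("ann", "core.py"), ("bob", "core.py"),
    ("bob", "ui.py"), ("eve", "ui.py"),
    ("ann", "docs.md"),
]

# Group developers by the files they touched.
devs_per_file = defaultdict(set)
for dev, path in commits:
    devs_per_file[path].add(dev)

# Build an undirected collaboration edge for every co-editing pair.
edges = set()
for devs in devs_per_file.values():
    for a, b in combinations(sorted(devs), 2):
        edges.add((a, b))

# Degree: the number of distinct collaborators per developer.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
print(dict(degree))  # bob bridges the two files, so he has the highest degree
```

Even on this toy network, the structural insight the tutorial develops is visible: "bob" is the broker connecting otherwise separate contributors, the kind of signal used when reasoning about team formation and maintenance allocation.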

Meta modeling conceptualizes data models and provides the facilities required to adapt a data model to new requirements. Meta models are useful for describing data, relationships, and associated information such as rules and exceptions in order to model complex applications such as workflows, e-contracts, and e-services. Meta models are also useful for defining (or augmenting) new constructs, instances, constraints, and semantics, besides supporting reusability. The supporting features of meta models help in modeling the complexities across many applications so as to instantiate an appropriate (or customizable) model for a specific application. Meta execution models are specific to conceptualizing and representing the execution logic for executing processes. For example, meta execution workflows describe the execution logic of a workflow management system (WfMS) that drives the specification and execution of workflows. The overall goal of this tutorial is to present a class of meta models and meta execution models by promoting active conceptual modeling. The objective is to bring out the importance and role of meta models in the design of complex applications and to present recent developments in supporting meta modeling and meta execution models. This tutorial introduces meta models and meta execution models and covers how they help orchestrate complex application design, development, and deployment. The tutorial focuses on understanding the underlying concepts and issues in developing meta models and meta execution models. We also present our concepts through a case study on e-contract enactment and describe meta models and meta execution models (through meta execution workflows) to support their enactment.

Recently, numerous articles have appeared in the popular press discussing the strengths and weaknesses of MOOCs (Massive Open Online Courses). MOOCs are computer-based, distance education systems characterized by very large-scale participation and open access via the web. For better or for worse, these systems represent a potentially disruptive technology that may change the scope and purpose of education, how and where education is delivered, and how it is funded throughout the world. Although MOOCs are currently in the public eye, they are just one manifestation of technology-assisted teaching approaches, loosely referred to as "Active Learning". Other examples include SPOCs (Small, Private Online Courses) and other hybrid face-to-face models referred to as "blended learning" or "flipped classrooms". As the name suggests, active learning approaches are structured so that learners affirmatively engage in and actively construct their learning. In contrast, passive approaches, such as the traditional lecture, are often structured so that the learner passively receives information from an instructor. This tutorial will teach participants about Active Learning approaches and share some of the author's personal experience and lessons learned.

Recent surveys of over 50,000 software projects covering a range of size and complexity show that only 10% of large (>$10M) software projects using conventional methodologies such as Waterfall are successful. In contrast, leading SaaS companies such as Amazon and others build large, complex, and reliable sites comprising hundreds of integrated subsystems by using modern agile methods and service-oriented architecture. Sadly, however, few university students are taught these methods. As a result, industry often complains that academia ignores vital software topics, leaving students unprepared upon graduation.
Happily, the confluence of cloud computing, Massive Open Online Courses (MOOCs), and Software as a Service has not only revolutionized the future of software, but made it easier and more rewarding than ever to teach. UC Berkeley’s revised Software Engineering course and the accompanying textbook, Engineering Software as a Service (ESaaS), allow students both to enhance a legacy application and to develop new apps that match the requirements of non-technical customers, all using Agile techniques and the same best-of-breed tools used by professional developers. By experiencing the whole software lifecycle repeatedly within a single college course, students actually use and learn to appreciate the skills that industry has long encouraged. The ESaaS course is now popular with students, rewarding for faculty, and praised by industry. Indeed, our students now create software for nonprofit organizations and campus business units who would otherwise be unable to afford to hire professional help, thus “doing well by doing good.” A subset of the course has been offered as a MOOC (free Massive Open Online Course) to hundreds of thousands of students via the edX platform.
To encourage other instructors to adopt ESaaS, we have created a low-cost textbook (under US$10, and rated 4.4 out of 5 stars on Amazon) available both in print and as an ebook that receives free updates for life; hosted software that performs automated detailed grading of student programming assignments; the ability for instructors to use the edX software for a SPOC (Small Private Online Course) customized to their own classroom; lecture videos with self-check questions; worksheets for small-group lab activities; an online instructor community including frequent teleconferences; and the ability for ambitious instructors to create their own assignments that can be graded using our autograder, further enriching the ESaaS education ecosystem.
This tutorial will introduce and demonstrate all these features for instructors potentially interested in adopting ESaaS in their classrooms. We will give an overview of the material covered in the course and book, suggestions for how to run a course with and without open-ended projects, guidance for managing student team projects, a demonstration of the autograders and how to use them, and a description of how a course using ESaaS meets the 2013 ACM/IEEE curriculum guidelines for software engineering.
Participants should bring a wifi-enabled laptop. No other special software is needed. Background reading materials will be made available before the tutorial. More information about the material can be found at http://saasbook.info.

Social sciences and information systems have widely adopted ethnographic methods to understand social communities and subcultures. They provide a source of insight complementary to quantitative methods predominantly applied in software engineering. As a research method in software engineering, ethnography allows the researcher to focus on how a software team achieves their outcome without intervention and without imposing preconceptions about how software should be developed. Ethnography thus allows the researcher to ground eventual improvements in a sound understanding of software development practice.
We have been applying ethnographic methods for several years, as a research approach in its own right and also in the context of tool design and method development. This tutorial distills our experience and lessons learned about how to make the most of ethnographic studies within empirical software engineering research, illustrating how these methods can lead to insightful results both in research and in practice. The proposed tutorial introduces ethnography and its application in software engineering research. It is mainly aimed at researchers and graduate students with a basic knowledge
of empirical studies in software engineering, who are interested in increasing their repertoire of empirical research methods. The tutorial is especially appropriate for those wanting to study the complexities of software development practice. The tutorial will provide a sound understanding of ethnographic
methods and how they can be meaningfully applied in software engineering research.

Software is a crucial element of virtually all modern engineered systems. Software plays a major role in the design, manufacture, and operation of artifacts from automobiles and aircraft to bridges and buildings to games and consumer products. Ensuring that engineering systems are adequately dependable is a significant challenge, and requires a variety of analysis and development techniques. Computer engineers, software engineers, and project managers need to understand the major elements of current technology in the field of dependability, yet this material tends to be unfamiliar.
This tutorial will present the principles of dependability from the software engineer's point of view, showing:
(1) How software engineering affects and is affected by the engineering of dependable systems.
(2) The key techniques that need to be applied in software engineering to address the demands imposed on software by the system in which the software operates.
The areas to be covered include the basic terminology of dependability, dependability requirements, types of faults, dependability analysis, computing platform hardware architectures, software fault avoidance in specification and implementation, software fault elimination in implementation, software fault tolerance, and software assurance arguments. Specific software topics that will be introduced include formal languages, formal specification, strongly typed programming languages, correctness by construction using SPARK Ada, rigorous inspections, design diversity, data diversity, key issues in testing, the role of standards, and the role of assurance arguments. Workshop attendees will be supplied with a copy of the text "Fundamentals of Dependable Computing for Software Engineers", authored by the presenter of this tutorial.
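One of the software fault-tolerance techniques named above, design diversity, can be sketched in a few lines (a hedged illustration in stdlib Python, not material from the tutorial's text): several independently written versions compute the same function, and a voter masks a faulty minority.

```python
from collections import Counter

# Three hypothetical independent implementations of integer-sqrt-floor,
# standing in for diversely designed versions of one specification.
def v1(n):
    return int(n ** 0.5)

def v2(n):
    i = 0
    while (i + 1) * (i + 1) <= n:
        i += 1
    return i

def v3(n):  # deliberately faulty, to show the voter masking it
    return n // 2

def majority_vote(versions, n):
    """N-version programming: run all versions, return the majority answer."""
    results = Counter(v(n) for v in versions)
    value, count = results.most_common(1)[0]
    if count * 2 <= len(versions):
        raise RuntimeError("no majority agreement")
    return value

print(majority_vote([v1, v2, v3], 49))  # 7: the faulty version is outvoted
```

The scheme tolerates faults only to the extent that the versions fail independently, which is exactly why the diversity of the designs (not just their number) matters.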

Many recent studies show that a significant percentage of software projects are over budget, suffer delays, or even have to be cancelled. One of the most widely recognized causes of this scenario is failure to produce good requirements specifications. Methods for improving their quality are therefore needed. Including patterns in requirements engineering is one such strategy. The definition and use of a software requirements pattern catalogue supports the elicitation, validation, documentation, and management of requirements. By designing an appropriate catalogue, an IT organization may reduce its requirements engineering costs and produce better requirements. This briefing will present the state of the art and state of the practice of software requirements patterns. For the state of the art, the results of a systematic literature review we have conducted will serve to present the different existing approaches, classified along several dimensions. For the state of the practice, we will summarize the results of some existing empirical studies (mainly surveys but also semi-structured interviews) and one case in which we have applied them. All these sources will be consolidated to present a unified view.
The briefing is addressed to researchers, practitioners, and educators in software engineering, especially requirements engineers. For researchers, an updated state of the art will be presented, and the presentation will rely on scientific grounds. For practitioners, processes and templates will be outlined and a successful case study of pattern-based requirements engineering will be presented. For educators, the briefing will provide the basis for developing course material.

We present an overview of an extensible multi-agent system (MAS) modeling and verification framework. Rather than develop a verification engine specific to the language, we use the semantics of the language along with the model to generate the feasible execution behaviors of the system. The generated transition graph is then encoded automatically into the input format for different state-of-the-art verification engines. In our framework we developed Brahms models to depict interactions between automated systems and humans; we also implemented the Brahms semantics as a Java library. We then use Java PathFinder to explore all possible behaviors of the model and also to produce a generalized intermediate representation that encodes these behaviors. The intermediate representation is automatically transformed into the input language of mainstream model checkers, including PRISM, SPIN, and NuSMV, allowing us to check different types of properties. We will present some results on how this approach has been successfully used to model and verify the Air France Flight 447 accident, among others.

Goal-oriented requirements engineering has shown that goals are among the key forces in requirements elicitation, modelling, analysis, and evolution. Goals capture stakeholder purposes which are related to software requirements. Under this assumption, techniques, methods, and modeling frameworks (GBRAM, KAOS, NFR, etc.) have been formulated since the early 90s. Among them, the i* framework occupies a prominent position. By explicitly modelling and analyzing strategic relationships among multiple actors, the i* framework incorporates rudimentary social analysis into a systems analysis framework. Actors depend on each other for goals to be achieved, tasks to be performed, and resources to be furnished. A notion of softgoal is used to deal systematically with non-functional requirements. Networks of dependencies are analyzed using qualitative and even quantitative reasoning. Actors explore alternative configurations of dependencies to assess their strategic positioning in a multi-agent, social context.
This briefing will present a general overview of the i* framework. It will focus on the modeling language and the portfolio of available reasoning techniques and will also provide a historical view to learn about the dialects and variations proposed in the community. Also it will compile some success stories of industrial application reported in the community. The briefing is addressed to researchers, practitioners and educators in software engineering, especially requirements engineers. For researchers, an updated state of the art will be exposed, and the presentation will rely on scientific grounds. For practitioners, the briefing will provide examples of applicability. For educators, it will provide the basis for developing course material.