Communications networks remain incredibly difficult to manage, troubleshoot, and secure. Network management challenges exist in all kinds of networks. In this talk, I will describe how Software Defined Networking (SDN), which decouples logical network control from the underlying network infrastructure, can simplify many network management tasks in different types of networks and may ultimately provide a means by which network operators (and home users) can make their networks more predictable, manageable, and secure. I will first present Kinetic, a new programming language and runtime for SDNs that we have developed, implemented, and deployed (in both home networks and on a large campus network) and describe how it allows network operators to express and implement complex policies in a simple, high-level control framework. Current SDN controller platforms typically offer little domain-specific support for programming changes to data-plane policy over time (dynamic policy): links are provisioned and fail; users arrive and depart; traffic demands change; and hosts are compromised and patched. Today’s controller platforms offer SDN programmers little guidance on how to encode dynamic policies, which makes the resulting programs difficult to write and analyze. Kinetic encodes dynamic policies and realizes them in the underlying network. It offers novel Finite State Machine (FSM)-based abstractions for encoding dynamic policies that are expressive and intuitive, efficient for programmers to write, and amenable to automated verification. To prevent state explosion, we develop a new type of runtime policy that reactively generates only the portions of the FSM abstractions that correspond to received events. I will then describe how we are applying new SDN abstractions and control to longstanding problems in interdomain routing in a framework called SDX.
To date, SDN has not affected how we interconnect separately administered networks, which we do today through BGP. Because many of the Internet’s current failings are due to BGP’s poor performance and limited functionality, it behooves us to explore incrementally deployable ways to leverage SDN’s power to improve interdomain routing. Toward this goal, this project exploits the re-emergence of Internet eXchange Points (IXPs) to create Software Defined eXchanges (SDXs). Although the SDX approach does involve deploying SDN technology at IXPs, the improvements we describe go beyond technology deployment to fundamental changes in network control. I will describe how improved network control can realize the potential of SDN-capable functions at Internet exchange points.

The design of the Dart programming language draws heavily on the syntax, features, and performance characteristics of past object-oriented systems. This intentional choice has resulted in a productive yet simple-to-learn programming language with clear semantics that can be implemented efficiently on a wide variety of platforms. In this talk, we will discuss several important design decisions, including constructor semantics, the optional type system, and support for incremental execution and fast application startup. Finally, we will evaluate where the language should be improved.

The value of software is no longer just about the logic of its algorithms, but also about the data flowing through that logic. Think of popular services like Google search, Bing search, Yelp, Facebook, and Instagram. These services are useful because of a combination of user-generated data (restaurant reviews, status posts) and the algorithms that surface that data (rankings and recommendations based on machine learning). Even software that is valued for its logic, like games, often has data-driven features, like matching up players. Behind the scenes, software companies are also using data to make engineering and business decisions. For instance, some product teams monitor real-time metrics to troubleshoot problems and to decide when to scale out to more servers. Other teams analyze usage data to triage bugs and to brainstorm new features. Whether inside the product or behind the scenes, today, software = logic + data. Sadly, programming tools have not kept up and are still designed for authoring logic. Indeed, the popular languages and tools for analyzing data, like R, MATLAB, and the IPython Notebook, are a separate world from development environments, like Visual Studio, XCode, and Eclipse. This separation creates awkward and inefficient workflows. For example, a data scientist might use one language and tool (say, R) to explore new recommendation algorithms; then, to deploy the final algorithm into the service, a programmer will entirely re-implement it, using a different language and tool (say, C# and Visual Studio). This problem gets worse as companies switch their focus from stored data to real-time streaming data, like live service telemetry and sensor data from wearable devices and the Internet of Things. In this talk, I’ll describe how data is changing the nature of professional software development and demonstrate new programming tools that make it easier for the user to work with both data and logic together.

Formal specifications for APIs help developers use them correctly and enable checker tools to automatically verify their uses. However, formal specifications are not always available for released APIs. In this work, we demonstrate an approach for mining API preconditions from a large-scale corpus of open-source software. It treats the conditions guarding API calls in client code as potential preconditions of the corresponding APIs. It then uses consensus among a large number of API usages to keep the conditions that appear in the majority of them. Finally, the mined preconditions are ranked by frequency and reported to users.
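The mining pipeline described above can be sketched roughly as follows. This is our own simplified illustration, not the tool's implementation; the function names, input encoding, and the 50% support threshold are all assumptions for the sake of the example:

```python
from collections import Counter

def mine_preconditions(usages, min_support=0.5):
    """Mine likely API preconditions from client-side guard conditions.

    `usages` maps an API name to a list of call sites, each represented as
    the list of (normalized) conditions guarding that call. A condition is
    kept as a candidate precondition if it appears in at least `min_support`
    of the API's usages (the consensus step); candidates are then ranked
    by their relative frequency.
    """
    mined = {}
    for api, guard_lists in usages.items():
        # Count each condition once per call site, across all call sites.
        counts = Counter(cond for guards in guard_lists for cond in set(guards))
        total = len(guard_lists)
        ranked = sorted(
            ((cond, n / total) for cond, n in counts.items() if n / total >= min_support),
            key=lambda pair: -pair[1],
        )
        mined[api] = ranked
    return mined

# Three hypothetical call sites of a substring-like API in client code.
usages = {
    "String.substring": [
        ["start >= 0", "start <= length()"],
        ["start >= 0"],
        ["start >= 0", "obj != null"],  # idiosyncratic guard, filtered by consensus
    ],
}
print(mine_preconditions(usages))
# -> {'String.substring': [('start >= 0', 1.0)]}
```

The consensus step is what makes the approach robust: guards that reflect client-specific logic rather than genuine API requirements appear in only a minority of usages and are discarded.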

Dynamic analysis tools often perform instrumentation via interfaces that are implementation-specific, so are not supported by alternative implementations of a given source language.
The Android mobile platform is one example: its Dalvik virtual machine executes an alternative, register-based bytecode, and lacks debugging and instrumentation interfaces that Java analysis developers rely upon.
In this demonstration, we present a framework for dynamic program analysis development on Android, based on the existing ShadowVM framework for Java.
By re-creating the latter's abstractions in the impoverished Android environment, our framework offers a high-level programming interface, load-time instrumentation, full bytecode coverage, and strong isolation, thereby avoiding common problems suffered by existing dynamic analyses on Android (offline-only instrumentation, lack of support for dynamic loading, and risk of unsound results owing to gaps in coverage).
We will demonstrate our system with an Android-specific network traffic analysis, deployed on both an ARM/Intel-based emulator and a real device.

A typical mobile user employs multiple devices (e.g., a smartphone, a tablet, and wearables). These devices are powered by varying mobile platforms. Enabling such cross-platform devices to seamlessly share their computational, network, and sensing resources has great potential benefit. However, sharing resources across platforms is challenging for several reasons. First, the varying communication protocols used by major mobile vendors tend to overlap minimally, making it impossible for the devices to communicate through a single protocol. Second, the host platforms' underlying architectural differences lead to drastically dissimilar application architectures and programming support. In this demo, we present Heterogeneous Device Hopping, a novel approach that systematically empowers heterogeneous mobile devices to seamlessly, reliably, and efficiently share their resources. The approach comprises 1) a declarative domain-specific language for device-to-device communication based on the RESTful architecture; 2) a powerful runtime infrastructure that supports the language's programming model. In this demo, we show how our approach can be used to implement a multi-device animation across heterogeneous nearby devices. The animation starts on one device and moves across the device boundaries, irrespective of the underlying mobile platform.

Live programming environments are powerful experimental tools that enable programmers to write programs in a trial-and-error way thanks to their quick feedback. Since the feedback includes intermediate data, such as control flow and histories of variable bindings, live programming environments integrate debugging into editing. One disadvantage of such interactive systems is that tests are transient: if we wrote persistent tests using an automated testing framework like JUnit, we could not fully enjoy "liveness," because we would need to write proper parameters and expected values in advance. We are developing Shiranui, a live programming environment with unit-testing features. In Shiranui, programmers can check functions' behaviors in a lively manner and then convert the results into persistent test cases. One feature enables programmers to make a test case from an intermediate result found during a debugging process, which makes constructing error-reproducing tests easier.

Programming language researchers often study real-world projects to see how language features have been adopted and are being used. Typically researchers choose a small number of projects to study, due to the immense challenges associated with finding, downloading, storing, processing, and querying large amounts of data. The Boa programming language and infrastructure was designed to solve these challenges and allow researchers to focus on simply asking the right questions. Boa provides a domain-specific language to abstract details of how to mine hundreds of thousands of projects and also abstracts how to efficiently query that data. We have previously used this platform to perform a large study of the adoption of Java's language features over time. In this demonstration, we will show you how we used Boa to quickly analyze billions of AST nodes and study the adoption of Java's language features.

Understanding the run-time behaviour of object-oriented applications entails the comprehension of run-time objects. Traditional object inspectors favor generic views that focus on the low-level details of the state of single objects. While universally applicable, this generic approach does not take into account the varying needs of developers, who could benefit from tailored views and exploration possibilities. GTInspector is a novel moldable object inspector that provides different high-level ways to visualize and explore objects, adapted to both the object and the current developer need. More information about GTInspector can be found at: scg.unibe.ch/research/moldableinspector

Pointcut fragility is a well-documented problem in Aspect-Oriented Programming; changes to the base-code can lead to join points incorrectly falling in or out of the scope of pointcuts. Deciding which pointcuts have broken due to base-code changes is a daunting venture, especially in large and complex systems. We demonstrate an automated tool called FRAGLIGHT that recommends a set of pointcuts that are likely to require modification due to a particular base-code change. The underlying approach is rooted in harnessing unique and arbitrarily deep structural commonality between program elements corresponding to join points selected by a pointcut in a particular software version. Patterns describing such commonality are used to recommend pointcuts that have potentially broken with a degree of confidence as the developer is typing. Our tool is implemented as an extension to the Mylyn Eclipse IDE plug-in, which maintains focused contexts of entities relevant to a task.

This demonstration presents JerryScript, a JavaScript engine for the Internet of Things (IoT). JerryScript is a lightweight engine intended to run on very constrained devices such as microcontrollers, with only a few kilobytes of RAM available to the engine (<64 KB RAM) and constrained ROM space for the code of the engine (<200 KB ROM). The engine is ECMA-262 5.1 compliant, supports on-device compilation and execution, and provides access to peripherals from JavaScript. It powers the IoT.js project, which provides an interoperable service platform for the world of web-based IoT. This demonstration shows that using JavaScript even on such constrained devices is practical and beneficial.

This paper proposes the idea of Trace Register Allocation, a register allocation approach that is tailored for just-in-time (JIT) compilation in the context of virtual machines with run-time feedback.
The basic idea is to offload costly operations such as spilling and splitting to less frequently executed branches and to focus on efficient register allocation for the hot parts of a program. This is done by performing register allocation on traces instead of on the program as a whole.
We believe that the basic approach is comparable to Linear Scan, the predominant register allocation algorithm for just-in-time compilation, in both code quality and allocation time, while our design leads to a simpler and more extensible solution. This extensibility allows us to add further enhancements that optimize the allocation based on the run-time profile of the application and thus to outperform current Linear Scan implementations.

In recent years, execution-trace obliviousness has become an important security property for various applications in the presence of side channels. On the one hand, a cryptographic protocol called Oblivious RAM (ORAM) has been developed as a generic tool to achieve obliviousness, but it incurs an overhead. On the other hand, customized oblivious algorithms with better performance have been developed; this approach, however, does not scale in terms of human effort. This thesis work adopts a language-design approach to help users develop efficient oblivious applications. I will study sequential and parallel programs and different channels, and design languages and security type systems that support efficient algorithm implementations while formally enforcing obliviousness. My study of the secure-computation application shows that, using our compiler, one PhD student can develop in one day an oblivious algorithm that took a research group of 5 researchers 4 months to develop in 2013, while achieving 10x to 20x better performance.

In collaborative software development, developers submit their contributions, such as code commits or pull requests, to a repository. Often, these code contributions are reviewed in order to avoid privacy and security problems. Manual code review is a common way to detect such problems, but it is expensive, error-prone, and time-consuming. Existing automatic approaches are either designed for specific domains, such as the Android platform, or demand significant effort from developers. To minimize these problems, we propose a new policy language that allows developers to specify constraints for code contributions and to enforce them between existing code and new code contributions. Our language implementation automatically checks that new code contributions adhere to these constraints, for systems from different domains, without demanding further effort from developers. Moreover, we plan to evaluate it with respect to effectiveness and the reduction of effort in finding privacy and security violations.

This paper takes a cognition-centric approach to programming languages. It promotes the spreadsheet paradigm, with two concrete goals. First, it calls for the design and implementation of several language features to enhance the expressiveness of spreadsheet programming. Second, it describes a plan for rigorous empirical studies to ensure that these features retain the learnability of spreadsheet programming.

Live programming environments help programmers try out expressions by giving immediate feedback on the results as well as on intermediate evaluation processes. However, the feedback is transient, and its correctness is merely confirmed by the programmer's manual inspection. We seamlessly integrate live programming with unit testing by proposing two novel features: (1) converting a lively-tested expression into a unit test case, and (2) extracting a unit test case from the execution trace of a lively-tested expression. In this poster, we give an overview of Shiranui, our live programming environment, and present the proposed features as implemented in Shiranui.

The computational heart of modern mobile devices such as smartphones, tablets, and wearables is a powerful system-on-chip (SoC) with rich parallelism and heterogeneity. While the hardware parallelism of these mobile systems continues to increase year-over-year, they remain resource constrained with respect to power consumption and thermal dissipation. Efficient use of multi-core processors is a key requirement for improving performance while staying within the power and thermal limits of mobile devices.

The growing ubiquity of personal connected devices has created the opportunity for a range of applications that tap into their sensors. The sensing requirements of applications often evolve dynamically over time, depending on contextual factors, evolving interest in different types of data, or simply the need to economize resource consumption. The code implementing this evolution is typically mixed with the code implementing the application's functionality. Here we separate the two concerns by modeling the evolution of sensing requirements as transitions between modes. The paper describes ModeSens, an approach to modeling and programming the multi-modal sensing requirements of applications. The approach improves programmability by enhancing modularity. Our experimental evaluation measures the performance and energy costs of using ModeSens.

Encapsulation and information hiding are essential and fundamental to object-oriented and aspect-oriented programming languages. These principles ensure that one part of a program does not depend on assumptions about the internal structure and logic of other parts of the program. While this allows for clearly defined modules, interfaces, and interaction protocols when software is initially developed, rigid encapsulation can cause problems, such as brittleness, as software changes and evolves over time. We suggest that, just as type systems have relaxed in strictness over time, perhaps structural boundaries could, too, be relaxed. Perhaps there could be a new kind of flexible encapsulation: one that allows non-permanent and flexible boundaries between program parts.

This paper introduces “statik”, a C++ software library for automatically generating fully-incremental compilers. Given a grammar for any phase of a compilation process (e.g. lexer, parser, code-generator), the library provides a top-down chart parser that accepts incremental changes to a linked-list of input for that compilation phase, and emits the corresponding changes as a linked-list of output. The output of one phase can be chained as input to another, so that a whole compiler can be constructed as a pipeline of an arbitrary number of compilation phases. This can be used as an incremental mapping between character-by-character edits anywhere in an input source file through to the resulting changes in the compiled object code, with minimal recomputation of intermediary state. Statik is released as Free software, and is available under the GPLv3+ license at http://statik.rocks.

A type system is a set of type rules, and with respect to these rules a type checker plays the important role of ensuring that programs exhibit a desired behavior. We consider Java type rules and extend the co-contextual formulation of type rules introduced in [1] to Java. In this extension, the result of a type rule is a type, a set of context requirements, and a set of class requirements. Since context and class requirements are propagated bottom-up while traversing the syntax tree and are merged from independent subexpressions, the type system can be made incremental, which increases performance.
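The bottom-up flavor of co-contextual checking can be illustrated with a toy sketch. This is our own simplified reconstruction for a tiny expression language, not the paper's Java formulation; the encoding of expressions and the single `int` type are assumptions for illustration:

```python
def check(expr):
    """Co-contextual checking sketch: instead of passing a typing context
    down the tree, each subexpression returns its type together with the
    context requirements (constraints on free variables) it imposes.
    Requirements from independent subexpressions are merged bottom-up.

    Expressions: ("lit", n) | ("var", name) | ("add", e1, e2)
    Returns (type, requirements), where requirements maps variable -> type.
    """
    tag = expr[0]
    if tag == "lit":
        return "int", {}
    if tag == "var":
        # A variable use generates a requirement on the (as yet unseen) context.
        return "int", {expr[1]: "int"}
    if tag == "add":
        t1, r1 = check(expr[1])
        t2, r2 = check(expr[2])
        assert t1 == t2 == "int", "operands of + must be int"
        merged = dict(r1)
        for var, ty in r2.items():  # merged requirements on the same variable must agree
            assert merged.get(var, ty) == ty, f"conflicting requirements on {var}"
            merged[var] = ty
        return "int", merged
    raise ValueError(f"unknown expression tag: {tag}")

# x + (y + 1): checked bottom-up with no context; requirements bubble up.
print(check(("add", ("var", "x"), ("add", ("var", "y"), ("lit", 1)))))
# -> ('int', {'x': 'int', 'y': 'int'})
```

Because a subexpression's result depends only on the subexpression itself, an edit invalidates only the results along the path from the changed node to the root, which is what enables incremental re-checking.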

In widely-used actor-based programming languages, such as Erlang, sequential execution performance is as important as the scalability of concurrency. We are developing a virtual machine called Pyrlang for the Erlang BEAM bytecode with a just-in-time (JIT) compiler. By using RPython’s tracing JIT compiler, our preliminary evaluation showed an approximately two-fold speedup over the standard Erlang interpreter. In this poster, we give an overview of the design of Pyrlang and the techniques for applying RPython’s tracing JIT compiler to BEAM bytecode programs written in Erlang’s functional style of programming.

Federated conferences such as SPLASH are complex organizations composed of many parts (co-located conferences, symposia, and workshops), and are put together by many different people and committees. Developing the website for such a conference requires a considerable effort, and is often reinvented for each edition of a conference using software that provides little to no support for the domain. In this paper, we give a high-level overview of the design of Conf.Researchr.Org, a domain-specific content management system developed to support the production of large conference web sites, which is being used for the federated conferences of ACM SIGPLAN.

Bitmap indices are popular in managing large-scale data, but without compression their size quickly grows out of core. At the same time, Moore's law enables a proliferation of machines with parallel architectures, letting users exploit symmetric multiprocessors (SMPs) for common tasks. In this poster, we evaluate two widely used parallel work distribution models for parallelizing bitmap compression.

Program comprehension requires developers to reason about many kinds of highly interconnected software entities. Dealing with this reality prompts developers to continuously intertwine searching and navigation. Nevertheless, most integrated development environments (IDEs) address searching by means of many disconnected search tools, making it difficult for developers to reuse search results produced by one search tool as input for another search tool. This forces developers to spend considerable time manually linking disconnected search results. To address this issue we propose Spotter, a model for expressing and combining search tools in a unified way. The current implementation shows that Spotter can unify a wide range of search tools. More information about Spotter can be found at scg.unibe.ch/research/moldablespotter.

Unmanned Aerial Vehicles (UAVs) have recently emerged as a promising platform for civilian tasks and public interests, such as merchandise delivery, traffic control, news reporting, natural disaster management, mobile social networks, and Internet connectivity in third-world countries.
Looking forward, the exciting potential of UAVs is accompanied by significant hurdles that call for broad and concerted interdisciplinary research, with diverse focuses on real-time system design, energy efficiency, safety and security, programmability, and robotics and mechanical design, among others. This poster proposes an open-source and extensible software infrastructure for UAVs.

Java 8 is one of the largest upgrades to the popular language and framework in over a decade. However, the Eclipse IDE is missing several key refactorings that could help developers take advantage of new features in Java 8 more easily. In this paper, we discuss our ongoing work in porting the enhanced for loop to lambda expression refactoring from the NetBeans IDE to Eclipse. We also discuss future plans for new Java 8 refactorings not found in any current IDE.

In the 21st Century, software is the enabling innovation pillar for all of civilization’s needs – including: food supply, living space (water, waste, power, and climate) management, services (health, financial, transportation, communication) and human relations (social networking). While the professionalism inherent in implementing, deploying, and configuring software systems may not appear as advanced as that found in other more regulated professions such as medicine, aviation, and engineering – is it “good enough”? This panel will discuss whether we are learning effectively from our experiences with failure and human hazards. Panelists will also discuss how software professionalism can be accelerated and debate the effectiveness of proficiency certifications in fostering increased professionalism.

In the beginning “programs” were patterns of bits that commanded the execution of individual machines. As machines evolved in complexity – languages evolved, starting with a variety of assembly languages and growing to encompass higher levels of abstraction. Over the years – somewhat surprisingly – programmers evolved from engineers at the pinnacle of their profession with many years of experience to individuals not yet 10 years old giving evidence that programming does not necessarily require a formal education. This panel will bring together a diverse set of industry and academic professionals to discuss the future of programming languages and programmers.

Transactional programs, which use transactional memory (TM), and non-transactional (non-TM) programs (e.g., those using locks) provide weak semantics under commonly used memory models. Strong memory models incur high implementation overhead and yet prove to be insufficient. TM programs and non-TM programs have different semantics depending on the memory model. Adding new atomic blocks to lock-based code is difficult without incurring high overhead or introducing weak semantics. A system in which users can seamlessly add atomic blocks or lock-based critical sections to existing TM programs or lock-based code facilitates incremental deployment. A unified and strong memory model, enforced efficiently by a single runtime for both kinds of programs, is therefore desirable.

Simultaneous use of multiple programming languages helps create efficient modern programs in the face of legacy code; however, creating language bindings to low-level languages like C by hand is tedious and error-prone. We offer an automated suite of analyses that enhances the quality of automatically produced bindings by recovering high-level array type information missing from C's type system. We emit annotations in the style of GObject Introspection, which produces bindings from annotations. We annotate an array argument as terminated by a special sentinel value, as having a fixed length of a constant size, or as having a length determined by another argument. This information helps produce more idiomatic, efficient bindings.

Some developers do not trust automated refactoring tools to refactor correctly. Refactoring without tools can be a cumbersome and error-prone process. It is possible for a tool to support developers in refactoring without requiring them to trust in automated code manipulation. This paper contributes KinEdit—a tool which is designed to help developers manually refactor more quickly, with higher quality, and without requiring the developers’ trust.

Spreadsheets are considered one of the most widely used end-user programming environments. Just as it is important for software to be free of bugs, spreadsheets need to be free of errors; in some cases, errors in spreadsheets can cost a financial entity thousands of dollars. In this work, we formulate a class of commonplace errors based on our manual inspection of real-life spreadsheets, and provide an analysis algorithm to detect these errors. We introduce "reference counting" as a simple yet effective algorithm for detecting range errors. We finally demonstrate how reference counting can effectively point out erroneous cells in faulty spreadsheets.
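The intuition behind reference counting can be sketched as follows. This is our own simplified reconstruction of the idea, not the paper's implementation; the input encoding and the majority-vote heuristic are assumptions for the sake of the example:

```python
from collections import Counter

def reference_counts(formulas):
    """Count how many formulas reference each cell.

    `formulas` maps a formula cell to the set of cells its ranges expand to;
    e.g. "=SUM(A1:A3)" in B1 is represented as "B1" -> {"A1", "A2", "A3"}.
    """
    counts = {}
    for referenced in formulas.values():
        for cell in referenced:
            counts[cell] = counts.get(cell, 0) + 1
    return counts

def suspicious_cells(counts, region):
    """Flag cells in a contiguous region whose reference count deviates
    from the majority count of that region -- a likely range error,
    such as an off-by-one range that omits the last row."""
    majority = Counter(counts.get(c, 0) for c in region).most_common(1)[0][0]
    return [c for c in region if counts.get(c, 0) != majority]

# A4 was left out of both SUM ranges -- an off-by-one range error.
formulas = {
    "B1": {"A1", "A2", "A3"},  # =SUM(A1:A3), should have been A1:A4
    "C1": {"A1", "A2", "A3"},  # =AVERAGE(A1:A3), same omission
}
print(suspicious_cells(reference_counts(formulas), ["A1", "A2", "A3", "A4"]))
# -> ['A4']
```

The appeal of the approach is its simplicity: no formula semantics are needed, only which cells each formula's ranges cover.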

While existing architectures like x86 and SPARC provide strong hardware memory consistency models, such as TSO, programming language memory models are more relaxed. This divide nullifies the usefulness of providing strong hardware memory models, since languages and compilers provide a weaker guarantee. Moreover, current shared memory systems implement complex cache coherence protocols which add to the complexity.
This work proposes a microarchitecture, called Viser, that ensures strong semantics---serializability of synchronization-free regions (SFRs)---in the absence of region conflicts even for racy program executions. Given an execution, Viser either reports a serializability violation or guarantees SFR-serializability, in effect providing the same guarantees provided by languages such as C++ and Java for data-race-free programs only. Viser's design also allows for greatly simplifying existing cache coherence protocols, without requiring any assumptions about language-level properties such as data-race-freedom.

Reactive programming is a declarative style of defining applications that deal with continuous inputs of new data and events. Being declarative in nature, reactive programming allows the programmer to state the intent of the application, instead of specifying concrete execution behavior as needed in applications using the Observer design pattern. Declarative definitions not only improve code clarity, but also leave concrete execution behavior unspecified – the underlying runtime can freely change as long as the intended semantics are kept intact. We exploit this freedom to support concurrent propagation of concurrently admitted changes.
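The contrast with the Observer pattern can be made concrete with a toy sketch. This is our own minimal, single-threaded illustration, not the system described above; the `Signal`/`derived` names are invented for the example. The key point is that the user only states *what* `total` is, while propagation order stays unspecified, which is precisely the freedom a runtime can exploit to propagate changes concurrently:

```python
class Signal:
    """A reactive value: dependents recompute automatically when it changes."""
    def __init__(self, value=None):
        self._value = value
        self._dependents = []

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        for dep in self._dependents:
            dep.recompute()

def derived(fn, *sources):
    """Declaratively define a signal as a function of other signals.
    No observer wiring is written by the user; dependencies are implicit."""
    sig = Signal()
    def recompute():
        sig._value = fn(*(s.get() for s in sources))
        for dep in sig._dependents:  # propagate along derived chains
            dep.recompute()
    sig.recompute = recompute
    for s in sources:
        s._dependents.append(sig)
    recompute()
    return sig

a, b = Signal(1), Signal(2)
total = derived(lambda x, y: x + y, a, b)  # intent: total is always a + b
print(total.get())  # -> 3
a.set(10)           # propagation happens implicitly
print(total.get())  # -> 12
```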

Modeling energy related concepts as high level constructs that can be checked by a type system is challenging due to the dependency on runtime factors related to energy consumption. Pushing energy concepts such as energy mode types into a language helps less skilled programmers write energy-aware software without relying on lower level techniques that depend upon hardware. We develop a language that allows energy specific type checking to be done gradually with both static and dynamic checks. As a result we allow energy-aware programming that is both natural and flexible at the language level.

Due to its asynchronous, event-driven nature, JavaScript, like concurrent programs, suffers from data races. Past research provides methods for automatically exploring a web application to generate a trace and uses an offline dynamic race detector to find data races in the application. However, the existing random exploration techniques fail to identify races that require complex interactions with the application. While more sophisticated approaches to exploring websites exist, they are not targeted at finding data races. We conduct a study of data race bugs in open-source software that shows most data race bugs are related to AJAX requests. Motivated by these findings, we present an approach for UI-level test generation that explores a website with the goal of finding additional data races.

Female software developers account for only a small portion of the total developer community. This inequality has been attributed to subtle beliefs and sometimes interactions between different genders and society, referred to as implicit biases and explicit behavior, respectively. In this study, I mined user contribution acceptance from a popular software collaboration service. The contributions of female developers were accepted into open-source projects with roughly the same success as those of males, partially discounting recent findings that explicit behavior accompanies implicit gender bias, while bolstering the claim that implicit bias is cultural rather than a result of innate differences.

Developers use configuration options to tailor systems to different platforms. This configurability leads to exponential configuration spaces, and traditional tools (e.g., gcc) check only one configuration at a time. As a result, developers introduce configuration-related issues (i.e., bad smells and faults) that appear only when certain configuration options are selected. By interviewing 40 developers and surveying 202 developers, we found that configuration-related issues are harder to detect and more critical than issues that appear in all configurations. We propose a strategy to detect configuration-related issues and a catalogue of refactorings to remove bad smells in preprocessor directives. We found 131 faults and 500 bad smells in 40 real-world configurable systems, including Apache and Libssh.

A strong memory model, such as region serializability, helps programmers reason about programs at the granularity of synchronization-free regions and allows the compiler and hardware to reorder accesses more freely. However, providing region serializability is usually expensive in software or requires custom hardware.
We introduce a new approach to supporting a memory model that guarantees write atomicity and a consistent snapshot view for reads within a synchronization-free region, by tolerating the majority of region conflicts (those caused by write-write and write-read conflicts) and freezing the program state if a read-write conflict would violate the memory model.

We have recently introduced object propositions as a modular verification technique that combines abstract predicates and fractional permissions. The Oprop tool implements the theory of object propositions and verifies programs written in a simplified version of Java, augmented with the object propositions specifications. Our tool parses the input files and automatically translates them into the intermediate verification language Boogie, which is verified by the Boogie verifier. We present the details of our implementation, the lessons that we learned and a number of examples that we have verified using the Oprop tool.

The Eclipse platform was originally designed for building an integrated development environment for object-oriented applications. Over the years it has developed into a vibrant ecosystem of platforms, toolkits, libraries, modeling frameworks, and tools that support various languages and programming styles. The seventh ETX workshop provides a platform for researchers and practitioners to transfer knowledge about the Eclipse Platform and exchange new ideas. It is held in Pittsburgh, Pennsylvania on October 27th, 2015 and co-located with SPLASH 2015.

The goal of the MobileDeli 2015 workshop is to establish a vibrant community of researchers and practitioners for sharing work and leading further research and development in the area of mobile software engineering. At the workshop, we will discuss how other technologies (e.g., DSLs, cloud computing) drive new capabilities in mobile software development. The workshop attendees will also examine the lifecycle of mobile software development and how it relates to the software engineering lifecycle. There will also be working group discussions and activities in which attendees will explore and evaluate existing techniques, patterns, and best practices of mobile software development. Additional information about the workshop (e.g., photos, presentations, schedule) can be found at the MobileDeli workshop website: http://sysrun.haifa.il.ibm.com/hrl/mobiledeli2015

The second international workshop on Software Engineering for Parallel Systems (SEPS) will be held in Pittsburgh, PA, USA on October 27, 2015 and co-located with the ACM SIGPLAN conference on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH 2015). The purpose of this workshop is to provide a stable forum for researchers and practitioners dealing with compelling challenges of the software development life cycle on modern parallel platforms. The increased complexity of parallel applications on modern parallel platforms (e.g., multicore, manycore, distributed or hybrid) requires more insight into development processes, and necessitates the use of advanced methods and techniques supporting developers in creating parallel applications or parallelizing and reengineering sequential legacy applications. We aim to advance the state of the art in different phases of parallel software development, covering software engineering aspects such as requirements engineering and software specification; design and implementation; program analysis, profiling and tuning; testing and debugging.

Dynamic analysis techniques are widely used for understanding runtime program behaviors for bug detection, memory management, or performance analysis. The 13th International Workshop on Dynamic Analysis (WODA’15) provides a forum for researchers and practitioners to discuss recent developments in dynamic analysis techniques and exchange new ideas. It is held in Pittsburgh, PA on October 26, 2015 and is co-located with SPLASH 2015.

The AGERE! workshop focuses on programming systems, languages and applications based on actors, active/concurrent objects, agents and -- more generally -- high-level programming paradigms promoting a mindset of decentralized control in solving problems and developing software. The workshop is designed to cover both the theory and the practice of design and programming, bringing together researchers working on models, languages and technologies, and practitioners developing real-world systems and applications.

Domain-specific languages provide a viable and time-tested solution for continuing to raise the level of abstraction, and thus productivity, beyond coding, making systems development faster and easier. When accompanied by suitable automated modeling tools and generators, they deliver on the promises of continuous delivery and DevOps. In domain-specific modeling (DSM), models are constructed using concepts that represent things in the application domain, not concepts of a given programming language. The modeling language follows the domain abstractions and semantics, allowing developers to perceive themselves as working directly with domain concepts. Together with frameworks and platforms, DSM can automate a large portion of software production. This paper introduces domain-specific modeling and describes the SPLASH 2015 workshop, to be held on October 27th in Pittsburgh, PA, which marks the 15th anniversary of the event.

Today, mobile devices (e.g., smartphones, tablets, smartwatches) are the main target platforms for developers. Traditional programming languages are not enough to meet these new challenges, and new ones are emerging to enable programmers (and even end-users) to develop software that takes advantage of the most recent hardware capabilities. Since the first edition in 2013, PROMOTO has brought together researchers interested in exploring new programming paradigms and embracing the new technologies in the area of touch-enabled mobile devices.

NOOL-15 is a new unsponsored workshop bringing together users and implementors of new(ish) object-oriented systems. Through presentations, panel discussions, and demonstrations, as well as video and audiotapes, NOOL-15 will provide a forum for sharing experience and knowledge among experts and novices alike.

Parsing@SLE is a workshop on parsing programming languages, now in its third edition and co-located with SLE and SPLASH 2015. It is held in Pittsburgh, Pennsylvania, USA on October 25th, 2015. The goal is to bring together today's experts in the field of parsing to hear about ongoing research, explore open questions, and possibly forge new collaborations. Parsing@SLE 2015 will feature an invited talk and eight regular talks. We expect to attract participants who have been or are developing theory, techniques, and tools in the broad area of parsing non-natural languages such as programming languages.

Reactive programming and event-based programming are two closely related programming styles that are becoming ever more important with the advent of advanced HCI technology and the ever increasing requirement for applications to run on the web or on collaborating mobile devices. A number of publications about middleware and language design – so-called reactive and event-based languages and systems – have already appeared, but the field still raises several questions. For example, the interaction with mainstream language concepts is poorly understood, implementation technology is in its infancy, and modularity mechanisms are almost entirely lacking. Moreover, large applications are still to be developed, and patterns and tools for developing reactive applications remain largely unexplored. This workshop gathers researchers in reactive and event-based languages and systems. The goal of the workshop is to exchange new technical research results and to better define the field through taxonomies and overviews of the existing work.

Are there any good lessons that software people can learn 15 years after the Y2K crisis? We live in a much more software-dependent world today, and the next generation of technical innovations may have some technical risks that have worldwide consequences. The 1990s saw the most recent massive effort to improve and modernize software, and we might look to the past to explore some technical and management approaches that will prepare us for the “smart technology” wave. As was the case in the 1990s, every company, every industry, and every country will need to be concerned with the potential risks in our software-driven world of the future: to better address software requirements, design, coding, and testing of our smart applications and smart support software. Are there some valuable technical and management ideas we can use again?
