This is a list of past D3S Seminars. If you are looking for scheduled (future) seminars, please go to the
main D3S Seminar page.

June 2017

2017-06-06 10:00 in S5: GPU System Calls

Ján Veselý (Rutgers University, USA)

We explore how to directly invoke generic system calls in GPU programs. We examine how system calls should be meshed with prevailing GPGPU programming models, where thousands of threads are organized in a hierarchy of execution groups: Should a system call be invoked at the level of an individual GPU task, or at one of the execution-group levels? What are reasonable ordering semantics for GPU system calls across this hierarchy of execution groups? To study these questions, we implemented GENESYS – a mechanism that allows GPU programs to invoke system calls in the Linux operating system. Numerous subtle changes to Linux were necessary, as the existing kernel assumes that only CPUs invoke system calls. We analyze the performance of GENESYS using micro-benchmarks and three applications that exercise the filesystem, networking, and memory allocation subsystems of the kernel. We conclude by analyzing the suitability of all of Linux’s system calls for the GPU.

We present a novel technique for Satisfiability Modulo the quantifier-free theory of nonlinear arithmetic with transcendental functions over the reals. The approach is based on an abstraction-refinement loop, in which the nonlinear functions are represented as uninterpreted in the abstract space, which is described in terms of the combined theory of linear arithmetic on the rationals with uninterpreted functions. Nonlinear functions are incrementally axiomatized by leveraging techniques from differential calculus, using a lemmas-on-demand approach. Besides presenting the basic solving procedure, we also discuss how to integrate it in state-of-the-art SMT based techniques for the verification of transition systems with nonlinear constraints.

May 2017

2017-05-30 14:00 in S9: Checker Framework

Vlastimil Dort

The seminar will introduce the Checker Framework, developed at the University of Washington. The framework provides type checking of pluggable type systems for Java. Each type system is defined in terms of a collection of Java type annotations, which can be applied to types in Java 8 code. The framework also comes with various type systems and checkers designed to find and prevent common bugs such as null pointer dereferences, out-of-bounds array accesses, or invalid format strings.

April 2017

2017-04-11 14:00 in S9: Communication in intelligent ensembles

Vladimír Matěna

2017-04-05 09:00 in S5: Scalable Constraint Solving

Antti Hyvärinen (USI Lugano)

Modeling systems and reasoning about their properties in an automatic and scalable way is increasingly important in a variety of modern applications. For such an approach to be successful it is necessary to find a compromise between the expressiveness of the modeling language and the efficiency of the deduction engine. The Satisfiability Modulo Theories (SMT) framework integrates a modeling language with the solving machinery, but provides little support for automatically adjusting the language to fit the needs of a particular modeling task. The proposed research answers this challenge through a natural combination of SMT theories with techniques for over-approximating and refining the model based on counter-examples, which is at the same time capable of adjusting to a computing environment with a high degree of parallelism.

March 2017

2017-03-14 14:00 in S9: Context-sensitive XSS

Antonín Steinhauser

Cross-site scripting bugs are not always caused by missing sanitization of user inputs. Sometimes the inputs are sanitized, but the sanitization is incompatible with the output context of the sanitized value. That can leave the application as vulnerable as if no sanitization were used, yet it is impossible to discover these bugs with traditional taint tracking. We propose an extension of dynamic taint tracking for web applications that can successfully discover these context-sensitive cross-site scripting bugs.
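To make the context-mismatch failure mode concrete, here is a minimal, hypothetical illustration in Python (not the proposed tool): standard HTML escaping is correct for a double-quoted attribute context but insufficient for an unquoted one.

```python
import html

# User-controlled value, "sanitized" with standard HTML escaping.
payload = 'x onmouseover=alert(1)'
escaped = html.escape(payload)  # escapes & < > " ' -- but not spaces or '='

# Matching context: double-quoted attribute -- the escaping is sufficient.
safe = '<img src="{}">'.format(escaped)

# Mismatched context: unquoted attribute -- spaces and '=' survive escaping,
# so the value still injects a new event-handler attribute.
unsafe = '<img src={}>'.format(escaped)
print(unsafe)  # <img src=x onmouseover=alert(1)>
```

Here the sanitizer does run, so classic taint tracking would consider the value clean, yet the second output is exploitable – exactly the class of bug that context-sensitive tracking targets.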

Regular expressions describe sets of strings using a pattern-based syntax. Writing a correct regex that exactly captures the desired set of strings is difficult, also because a regex is seldom syntactically incorrect, so faults are rarely detected at parse time. We propose a fault-based approach for generating tests for regexes. We identify fault classes representing possible mistakes a user can make when writing a regex, and we introduce the notion of a distinguishing string, i.e., a string that is able to witness a fault. We provide a tool, based on the automata representation of regexes, for generating distinguishing strings that expose the faults introduced in mutated versions of a regex under test. The basic generation process is improved by two techniques, namely monitoring and collecting. Experiments show that the approach produces compact test suites with a guaranteed fault detection capability, unlike other test generation approaches.
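The notion of a distinguishing string can be sketched in a few lines of Python (a hypothetical illustration, not the authors' automata-based tool): given a regex and a mutant from a fault class – here, a wrong quantifier – a distinguishing string is one accepted by exactly one of the two.

```python
import re

def distinguishing_string(regex_a, regex_b, candidates):
    """Return a candidate accepted by exactly one of the two regexes,
    i.e. a string that witnesses the fault in the mutant."""
    for s in candidates:
        if (re.fullmatch(regex_a, s) is None) != (re.fullmatch(regex_b, s) is None):
            return s
    return None

# Intended regex vs. a mutant from a "wrong quantifier" fault class.
original = r'[0-9]+'
mutant = r'[0-9]*'   # '*' also accepts the empty string

witness = distinguishing_string(original, mutant, ['', '7', '42', 'x'])
print(repr(witness))  # ''
```

Any test suite containing the witness is guaranteed to detect this particular mutant, which is the fault-detection guarantee the approach aims for.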

February 2017

2017-02-22 09:00 in S5: Tutorial on formal methods II.

Jan Kofroň

The seminar will introduce the basic concepts and algorithms used in software verification and model checking to make the subsequent specialized seminars more comprehensible for people from outside this research area.

January 2017

The seminar will introduce the basic concepts of statistical evaluation of computer performance data. The focus will be on statistical methods that are useful for evaluating whether two data sets differ (e.g. to detect performance regressions). Practical behaviour of the statistical methods on real data will be reported.
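As a minimal sketch of the kind of method involved (illustrative only, with made-up data): a two-sample permutation test estimates how likely the observed difference of mean response times would be if both data sets came from the same distribution.

```python
import random
from statistics import mean

def permutation_test(xs, ys, trials=2000, seed=0):
    """Estimate the p-value of the observed difference of means under the
    null hypothesis that both samples come from the same distribution."""
    rng = random.Random(seed)
    observed = abs(mean(xs) - mean(ys))
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(xs)]) - mean(pooled[len(xs):]))
        if diff >= observed:
            hits += 1
    return hits / trials

# Two sets of made-up response times (seconds); a small p-value indicates
# the difference is unlikely to be random noise, i.e. a likely regression.
before = [10.1, 10.3, 9.8, 10.0, 10.2, 9.9]
after = [11.0, 11.2, 10.9, 11.1, 10.8, 11.3]
print(permutation_test(before, after))
```

Unlike a t-test, the permutation test makes no normality assumption, which matters for performance data that is often skewed and multi-modal.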

December 2016

2016-12-14 09:00 in S5: Atos training course

Steffen Becker (TU Chemnitz)

An ever-increasing number of developers have been working in the software industry for some time after completing their studies or equivalent education. Their knowledge of recent software engineering topics is often limited and needs to be kept up to date in regular training courses. One of the topic areas often not known to those developers is that of software architectures. It is an abstract topic covering component selection and composition, architecture documentation and evaluation, as well as the skills and tasks needed by architects. Several books exist covering all aspects of software architecture, and their number is still increasing. Based on these books, some proposals for architecture courses, in particular at universities, have been published. Using the available background material and our own experience, we have designed and conducted a training course on software architecture for senior-level developers from Atos, one of Europe’s largest software development companies. While teaching our course we learned several lessons, in particular that using physical bricks to represent components is a suitable way to approach the topic when educating senior staff.

2016-12-13 14:00 in S9: Time Series Analysis to Architecture Modes in Smart Cyber Physical Systems

Rima Al Ali

2016-12-06 14:00 in S9: Abstract Interpretation of Programs with Strings

Vlastimil Dort

November 2016

2016-11-22 14:00 in S9: Partial Variable Assignment Interpolants

Martin Blicha

Craig interpolants are widely used in program verification as a means of abstraction. Computed interpolants are often used again in later steps in verification algorithms as part of SAT/SMT queries. Since querying solver is an expensive operation, it is desirable for the interpolants to be as small as possible. A variable assignment focuses computed interpolants by restricting the set of clauses taken into account during interpolation. A framework of Labeled Partial Assignment Interpolation Systems has been proposed for computation of such focused interpolants. We show that if a general input is considered (in contrast to input expected in CNF), the framework can be improved to further reduce the size of the computed interpolants.

The Robocup Rescue Simulation Platform is a comprehensive simulation environment for research in disaster response management. The talk presents an overview of the platform, challenges related to the coordination of a multi-agent system in the simulation, and common approaches to coping with the complexity of the environment.

2016-11-15 14:00 in S9: gRPC - A solution for RPCs by Google

2016-11-01 14:00 in S9: Tutorial on formal methods I.

Jan Kofroň

The seminar will introduce the basic concepts and algorithms used in software verification and model checking to make the subsequent specialized seminars more comprehensible for people from outside this research area. [PDF]

October 2016

Smart cyber-physical systems place great emphasis on autonomous component operation as well as opportunistic cooperation, and ensemble-based component models such as DEECo allow us to implement both without violating core architectural principles. However, while the current implementation of DEECo theoretically supports ensembles of all forms, some more complicated cases must be implemented manually by having a component decide the structure of the system, thus polluting the business logic with coordination code. To address this, we have introduced an ensemble definition language (EDL) and enriched the ensemble semantics with more powerful coordination constructs. By allowing developers to directly describe the coordination constraints and goals, such a language can then be used for partially automating the sCPS software process, thus both decreasing the likelihood of human error and increasing the confidence of the architect in the code. This seminar presents an overview of the current features of the EDL and its runtime realization, implemented via the Java EMF tools, Xtext, and the Z3 SMT solver.

2016-10-19 16:20 in S1: Nagini: Verifying Python Programs in Viper

Marco Eilers (ETH Zürich)

Dynamic languages like Python and JavaScript have gained popularity because of their expressiveness and ease of use. They are increasingly being used even for critical applications with high correctness demands. However, the state of the art in static analysis and program verification provides little support for reasoning about such programs. In this talk, we present Nagini, a verifier for Python. Nagini takes Python programs that include statically-checkable type annotations as well as contracts, and encodes them into the Viper verification infrastructure. We will explain some highlights of this encoding and show how to write specifications for programs that make use of object-orientation and polymorphism. We also outline some advanced verification techniques for liveness properties and Input/Output behavior.

The automation of verification techniques based on first-order logic specifications has benefited greatly from verification infrastructures such as Boogie and Why. These offer an intermediate language that can express diverse language features and verification techniques, as well as back-end tools such as verification condition generators. However, these infrastructures are not ideal for verification techniques based on separation logic and other permission logics, because they do not provide direct support for permissions and because existing tools for these logics often prefer symbolic execution over verification condition generation. Consequently, tool support for these logics is typically developed independently for each technique, dramatically increasing the burden of developing automatic tools for permission-based verification. In this talk, we present a verification infrastructure whose intermediate language supports an expressive permission model natively. We provide tool support, including two back-end verifiers – one based on symbolic execution and one on verification condition generation – and a specification inference tool based on abstract interpretation. Various existing verification techniques can be implemented via this infrastructure, alleviating much of the burden of building permission-based verifiers, and allowing the developers of higher-level techniques to focus their efforts at the appropriate level of abstraction.

2016-10-05 09:00 in S5: News in Java

Petr Hnětynka

June 2016

2016-06-28 14:00 in S7: String Analysis for Code Contracts

Vlastimil Dort

2016-06-22 09:00 in S8: Narrowing the Uncertainty Gap between Software Models and Performance Results

Catia Trubiani (GSSI, Italy)

The problem of interpreting the results of performance analysis is quite critical in the software performance domain: mean values, variances, and probability distributions are hard to interpret when providing feedback to software architects. Support for interpreting such results, which would help fill the gap between numbers and software alternatives, is still lacking. This talk illustrates PANDA (Performance Antipatterns aNd FeeDback in software Architectures), a framework that addresses the result interpretation and feedback generation problem by means of performance antipatterns, which capture recurring bad practices in software development. Such antipatterns can play a key role in understanding the traceability between software model elements and performance analysis results, since they can be used to narrow the uncertainty while bridging these two domains.

2016-06-14 14:00 in S9: Self-Adaptive Software Systems

Modern software systems often have to operate under highly dynamic conditions that introduce uncertainties, such as changing conditions in the environment, user behavior that is difficult to predict, and goals that may dynamically change. Business continuity requires that these uncertainties are handled at runtime. The key idea of self-adaptation is to let a system gather new knowledge at runtime, reason about itself and its context and adapt to realise its goals. In this talk, I will elaborate on what is a self-adaptive system and highlight our current research on executable formal models combined with effective verification at runtime to provide guarantees for the adaptation goals under uncertainties.

2016-06-01 10:00 in S8: Processes of Software Development and Maintenance

Václav Rajlich (Wayne State University, US)

Software development and maintenance cover an overwhelming proportion of software engineering activities. There are numerous software processes used in these stages; this talk discusses iterative, agile, directed, educational, safeguarded, open source, inner source, exploratory, solo development, and waterfall. These are well-known processes with a track record of success. These processes share certain practices; the most common of them is the practice of software change, where old code is updated in order to accommodate a new functionality or a new property. The most common changes are localized changes, and this talk explains two techniques: the Phased Model of Software Change (PMSC) and wrapping.

May 2016

2016-05-31 14:00 in S9: Autonomous Agent Behaviour Modelled in PRISM

Ruth Hoffmann (University of Glasgow, UK)

With the rising popularity of autonomous systems and their increased deployment within the public domain, ensuring the safety of these systems is crucial. Although testing is a necessary part of the process of deploying such systems, simulation and formal verification are key tools, especially at the early stages of design. Simulation allows us to view the continuous dynamics and monitor the behaviour of a system. On the other hand, formal verification of autonomous systems provides a cheap, fast, and extensive way to check for safety and correct functionality, which is not possible using simulations alone. In this talk I will demonstrate a simulation and the corresponding probabilistic model of an unmanned aerial vehicle (UAV) in an exemplary autonomous scenario and present results for the discrete models. Further, I discuss a possible formal framework to abstract autonomous systems using simulations to inform probabilistic models.

Model refinement is a technique indispensable for modeling large and complex systems. Many formal specification methods share this concept, which usually comes together with the definition of refinement correctness, i.e., the mathematical proof of a logical relation between an abstract model and its refined models. Model refinement is one of the main concepts the Abstract State Machine (ASM) formal method is built on. Proofs of correct model refinement are usually performed manually, which reduces the usability of the ASM model refinement approach. Automatic support to assist the developer in proving refinement correctness along the chain of refinement steps could be extremely important for improving the practical adoption of ASMs. We present how the integration between ASMs and Satisfiability Modulo Theories (SMT) can be used to automatically prove correctness of model refinement for the ASM method.

Service dependability is a major challenge for the widespread adoption of virtual and cloud environments for mission-critical applications. The provided level of dependability (availability and reliability) is a major distinguishing factor between different service offerings. To make such offerings comparable, novel metrics and techniques are needed that allow measuring and quantifying the dependability of virtual and cloud environments, e.g., public cloud platforms or general virtualized service infrastructures. In this talk, we first discuss the inherent challenges of providing service dependability in virtual and cloud environments in the presence of highly variable workloads, load spikes, and security attacks. We then present novel metrics and techniques for measuring and quantifying service dependability, specifically taking into account the dynamics of modern service infrastructures. We consider both environments where virtualization is used as a basis for enabling resource sharing, e.g., as in Infrastructure-as-a-Service (IaaS) offerings, as well as multi-tenant Software-as-a-Service (SaaS) applications, where the whole hardware and software stack is shared among different customers. We focus on evaluating three dependability aspects: i) the ability of the system to provision resources in an elastic manner, i.e., system elasticity, ii) the ability of the system to isolate different applications and customers sharing the physical infrastructure in terms of the performance they observe, i.e., performance isolation, and iii) the ability of the system to deal with attacks exploiting novel attack surfaces such as hypervisors, i.e., intrusion detection and prevention. We discuss the challenges in measuring and quantifying these three dependability properties, presenting existing approaches to tackle them. Finally, we discuss open issues and emerging directions for future work in the area of dependability benchmarking.

April 2016

2016-04-26 14:00 in S9: Engineering Scalable Cloud Systems

Steffen Becker (TU Chemnitz, Germany)

Cloud computing offers scalable, elastic infrastructures and platforms that make it possible to build systems at web scale. However, the applications running on those platforms need to utilize the provided features so that the overall solution becomes scalable, elastic, and ultimately efficient. In the CloudScale EU FP7 project, we developed over the last three years a method to engineer such systems. The method helps software architects in planning, designing, and analyzing such systems. In addition, it also provides support for migrating existing legacy systems into cloud-computing environments and hence offering web-scale services.

2016-04-12 14:00 in S9: Rules selection & Robot Soccer Strategies

Václav Svatoň (Technical University of Ostrava)

The robot soccer game presents an uncertain and dynamic environment for cooperating agents. Robot soccer is interesting mainly for its multi-agent research, including real-time image processing and control, path planning, obstacle avoidance, and machine learning. In robot soccer, the game situation on the playground is typically read in terms of robot postures and the ball position. Using real-time information about this dynamically changing game situation, the robot soccer team's system needs to continually decide the action of each team robot and direct each robot to perform the selected action. Our goal is to propose new algorithms for strategy comparison and evaluation. The talk summarizes our approach to rule selection and evaluation using sequence extraction and methods for sequence comparison. The focus in the area of robot soccer games is to create algorithms that will be applicable in different simulators and environments, and in games containing real or simulated robots.

March 2016

Building a feature model for an existing SPL can improve the automatic analysis of the SPL and reduce the effort in maintenance. However, developing a feature model can be error prone, and checking that it correctly identifies each actual product of the SPL may be infeasible due to the huge number of possible configurations. We apply mutation analysis and propose a method that selects special configurations that are able to distinguish a feature model from its mutants, to detect conformance faults, and to remove them. We propose a technique that, by iterating this process, is able to repair a faulty model. We devise several variations of a simple hill climbing algorithm for automatic removal of faults, and we compare them by a series of experiments on three different sets of feature models. We find that our technique is able to improve the conformance of around 90% of the models and find the correct model in around 40% of the cases.

Smart Cyber-Physical Systems (sCPS) are complex distributed decentralized systems of cooperating components. They typically operate in uncertain environments and thus require means for managing variability at run-time. Architectural modes have traditionally been a proven means for the runtime variability. They are easy to understand, easy to realize in resource-constrained systems and (contrary to more sophisticated methods of learning) provide an explicit specification that can be inspected and validated at design time. However, in uncertain environments (which is the case of sCPS), they tend to lack expressivity to take into account the level of uncertainty and factor it in the mode-switching logic. In this paper we present a rich language to specify mode-switch guards. The semantics of the language is based on statistical tests, which, as we show, is a convenient way to reason about uncertainty in the state of the environment.

Computer system performance is a result of complex interactions between multiple system elements, from modern multicore processors through virtualization layers to application code and supporting frameworks executing in managed runtime environments. The talk summarizes our past research results that move towards achieving performance awareness in software development, and illustrates how the same issues can also threaten the integrity of experimental evaluation in computer science.

The Invariant Refinement Method for Self Adaptation (IRM-SA) is a design method targeting development of smart Cyber-Physical Systems (sCPS). It allows for a systematic translation of the system requirements into the system architecture expressed as an ensemble-based component system (EBCS). However, since the requirements are captured using natural language, there exists the danger of their misinterpretation due to natural language requirements' ambiguity, which could eventually lead to design errors. Thus, automation and validation of the design process is desirable. In this paper, we (i) analyze the translation process of natural language requirements into the IRM-SA model, (ii) identify individual steps that can be automated and/or validated using natural language processing techniques, and (iii) propose suitable methods.

2016-03-08 14:00 in S9: Architecture Homeostasis in siCPS

Dominik Škoda

Software-intensive cyber-physical systems (siCPS) encounter a high level of run-time uncertainty. Numerous failures may appear when self-adaptive siCPS operate in environment conditions they are not specifically designed for. We propose architecture homeostasis for siCPS – the ability to change their self-adaptation strategies at run-time according to environment stimuli (including unanticipated ones). We also describe three mechanisms that reify the idea: Collaborative Sensing, Faulty Component Isolation, and Unspecified Mode Switching.

2016-03-02 09:00 in S5: Truffle Development Updates

Jaroslav Tulach

Truffle is a novel open-source framework for implementing managed languages in Java. The language implementer writes an AST interpreter. It uses the Truffle framework that allows tree rewriting during AST interpretation. Tree rewrites incorporate type feedback and other profiling information into the tree, thus specializing the tree and augmenting it with run-time information. When the tree reaches a stable state, partial evaluation compiles the tree into optimized machine code. The partial evaluation is done by Graal, a just-in-time compiler for the Java HotSpot VM.
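The tree-rewriting idea can be sketched as follows (a toy Python illustration of the concept only; Truffle itself is a Java framework with a different API): a node starts uninitialized, specializes to the operand types it first observes, and rewrites itself to a generic variant when that speculation later fails.

```python
class Const:
    def __init__(self, value):
        self.value = value
    def execute(self):
        return self.value

class Add:
    """A self-specializing AST node in the spirit of Truffle's tree
    rewriting: it starts uninitialized, specializes to the types observed
    on first execution, and falls back to a generic variant when that
    speculation later fails (deoptimization)."""
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.state = 'uninitialized'

    def execute(self):
        l, r = self.left.execute(), self.right.execute()
        both_int = isinstance(l, int) and isinstance(r, int)
        if self.state == 'uninitialized':
            # Type feedback: rewrite to the specialized variant.
            self.state = 'int' if both_int else 'generic'
        elif self.state == 'int' and not both_int:
            # Speculation failed: rewrite to the generic variant.
            self.state = 'generic'
        return l + r

tree = Add(Const(1), Const(2))
print(tree.execute(), tree.state)   # 3 int
tree.right = Const(0.5)
print(tree.execute(), tree.state)   # 1.5 generic
```

In Truffle, a stable specialized tree is what the Graal partial evaluator compiles into optimized machine code; the generic fallback is what makes the speculation safe.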

2016-03-01 14:00 in S9: ROS simulations

Vladimír Matěna

Robot Operating System (ROS) is a set of libraries providing an environment for the development of robotic applications. The backbone of the whole system is a message-passing middleware used to implement publish–subscribe and remote procedure calls for ROS modules. These are implemented as standalone applications, possibly running on different network nodes. ROS comes with modules implementing basic robot functions such as navigation, localization, sensor reading, and actuator control. Moreover, it is possible to run the system with a simulated robot using Stage, Gazebo, or STDR. We used a Stage-based simulation in order to implement a simulation framework consisting of ROS, Stage, and the jDEECo runtime. The framework allows us to run simulations of DEECo-based robotic systems with little effort. In particular, we focused on Turtlebot robots equipped with an 802.15.4 radio interface. Finally, as the ROS interface of the simulated robot is identical to that of the real one, the same code can be used in simulation and real-world deployment.

In production environments, runtime performance monitoring is often limited to logging of high level events. More detailed measurements, such as method level tracing, tend to be avoided because their overhead can disrupt execution. This limits the information available to developers when solving performance issues at code level. One approach that reduces the measurement disruptions is dynamic performance monitoring, where the measurement instrumentation is inserted and removed as needed. Such selective monitoring naturally reduces the aggregate overhead, but also introduces transient overhead artefacts related to insertion and removal of instrumentation. We experimentally analyze this overhead in Java, focusing in particular on the measurement accuracy, the character of the transient overhead, and the longevity of the overhead artefacts. Among other results, we show that dynamic monitoring requires time from seconds to minutes to deliver stable measurements, that the instrumentation can both slow down and speed up the execution, and that the overhead artefacts can persist beyond the monitoring period.
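The insert-and-remove mechanism can be sketched in Python (the study itself targets Java; the names here are illustrative): instrumentation wraps a method with timing code, and removal restores the original, so the aggregate overhead is only paid while the wrapper is in place.

```python
import time

records = []  # (name, duration) samples collected while instrumented

def instrument(holder, name):
    """Insert method-level tracing: replace a function with a timing wrapper."""
    original = getattr(holder, name)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return original(*args, **kwargs)
        finally:
            records.append((name, time.perf_counter() - start))
    wrapper.__wrapped__ = original
    setattr(holder, name, wrapper)

def uninstrument(holder, name):
    """Remove the instrumentation by restoring the original function."""
    setattr(holder, name, getattr(holder, name).__wrapped__)

class Service:
    @staticmethod
    def process(n):
        return sum(i * i for i in range(n))

instrument(Service, 'process')
Service.process(10_000)          # measured
uninstrument(Service, 'process')
Service.process(10_000)          # no longer measured
print(len(records))              # 1
```

In a JIT-compiled runtime such as the JVM, the insertion and removal themselves trigger recompilation, which is the source of the transient overhead artefacts the paper measures; this sketch only shows the mechanism, not those effects.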

January 2016

2016-01-20 09:00 in S6: Porting HelenOS to RISC-V

Martin Děcký

RISC-V is the most recent attempt (originally from UC Berkeley) to design a brand new instruction set architecture based on the reduced instruction set computing (RISC) principles. One of its goals is to be completely open and free (both as in free beer and as in free speech) for designers, users and manufacturers. HelenOS is an open source operating system designed and implemented from scratch based on the microkernel multiserver design principles. One of its goals is to provide excellent target platform portability and it currently supports 8 different hardware platforms.

Both projects are still in the process of maturing: while the unprivileged (user-space) instruction set architecture of RISC-V was declared stable in 2014, the privileged instruction set architecture is still in draft form and may change in the future. Likewise, many major design features and building blocks of HelenOS are already in place, but no official commitment to ABI or API stability has been made yet.

This talk introduces both projects, presents the initial lessons learned from porting HelenOS to RISC-V and evaluates the portability of HelenOS on yet another porting effort. [PDF]

The Kalman filter is a recursive algorithm that estimates the state of a linear dynamical system from a sequence of noisy sensor measurements. Due to its relative simplicity, numerical efficiency and optimality, the Kalman filter and its variants have been applied to a wide range of problems in technology, notably in the areas of guidance, navigation, and control. The traditional definition of the Kalman filter is based on the assumption that at any given time, the errors associated with the predicted state estimate and the observation are statistically independent. However, in many practical problems, this assumption is not satisfied, and as such the Kalman filter may provide overconfident state estimates and diverge. This can have serious consequences in the context of safety-critical systems. Although there are modifications of the Kalman filter that accommodate various types of correlation in the process and observation noises, these are not suitable in the situation where the correlation between the errors associated with the predicted state estimate and the observation is caused by the presence of common past information between the state estimate and the observation, which is characteristic of distributed sensor networks. On the contrary, existing methods that deal with the common past information problem either provide overly conservative estimates, or have too strict assumptions on the structure of the problem, such as the communication topology of the sensor network. This thesis presents two new filters to address various correlated estimation problems that are based on the Ensemble Kalman filter, a Monte Carlo variant of the Kalman filter, which represents the state estimates and observations using sets of random samples instead of the conventional mean vectors and covariance matrices. 
Specifically, both of these filters provide a new generalised update rule that computes consistent state estimates even in the presence of correlation between the errors associated with the state estimate and the observation. This is only possible due to the fact that in the context of the Ensemble Kalman filter, the magnitude of such a correlation can be estimated from the random samples. The new filters retain all of the important features of the Ensemble Kalman filter, such as scaling linearly with the number of state-space dimensions, and supporting non-linear process and observation models. An analysis of the numerical properties of the filters is provided, including a comparison with state-of-the-art methods in several benchmark scenarios. Furthermore, in order to demonstrate their practical utility, the new filters have been applied to three different real-world problems in the larger field of robot localisation: cooperative vehicle localisation, simultaneous localisation and mapping, and global satellite-based positioning.
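For reference, the classical scalar Kalman update and the independence assumption it embodies can be written down in a few lines (a generic textbook sketch, not the thesis' filters):

```python
def kalman_update(x, p, z, r):
    """Standard scalar Kalman update: fuse a state estimate x (variance p)
    with an observation z (variance r), assuming the two errors are
    independent -- the assumption the thesis relaxes."""
    k = p / (p + r)            # Kalman gain
    x_new = x + k * (z - x)    # corrected estimate
    p_new = (1 - k) * p        # reduced variance
    return x_new, p_new

# Prior estimate 10.0 (variance 4.0) fused with observation 12.0 (variance 4.0).
x, p = kalman_update(10.0, 4.0, 12.0, 4.0)
print(x, p)  # 11.0 2.0
```

The Ensemble Kalman filter replaces (x, p) by a set of random samples, which is what lets the thesis' filters estimate, from those samples, the state–observation error correlation that this update assumes to be zero.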

December 2015

2015-12-03 15:40 in S8: Uncovering distributed system bugs with P#

Pantazis Deligiannis (Imperial College London, UK)

Distributed systems are notoriously hard to test. This is due to many well-known sources of nondeterminism, such as races in the asynchronous interaction between system components and unexpected failures. Unit testing, integration testing and stress testing techniques (commonly used by engineering teams today) are typically unable to capture and control all these sources of nondeterminism. This means that some very tricky bugs (heisenbugs) can be missed during testing and only get exposed after a system has been put in production, which can cause downtimes, data loss, customer dissatisfaction and financial losses.

We are trying to address this long-standing problem using P#. P# is based on successful ideas from the P language (used for systematically testing the Windows 8 USB 3.0 drivers) and the Chess systematic concurrency tester. P# provides (i) extensions to C# for asynchronous event-driven programming and writing environmental models, and (ii) a systematic testing engine for .NET that captures and controls sources of nondeterminism and is able to detect safety and liveness violations in the actual executable code. The P# project is available as open source at https://github.com/p-org/PSharp, and is a collaboration between Microsoft Research and the Multicore Programming Group at Imperial College London.

We have used P# to test various systems inside Microsoft. The Azure Storage team used P# to test components of Azure Storage vNext (the next generation storage system for Azure), and discovered a tricky liveness bug that could not be found for months using traditional testing techniques. The Tools for Software Engineers team used P# during development of a Live Azure Table Migration protocol, and discovered more than 10 safety bugs. Researchers at MSR India developed a P# executable model for Azure Service Fabric (http://azure.microsoft.com/en-gb/campaigns/service-fabric/), which can be used for systematic testing of user services built on top of Fabric.

In this talk, I will give you an overview of the P# project and discuss the Azure Storage vNext case study in more detail.

October 2015

2015-10-22 09:00 in S5: Model-Driven Support for Semi-automated Architectural Abstraction

Uwe Zdun (Universität Wien, Austria)

The talk proposes an approach for supporting the semi-automated abstraction of architectural models throughout the software lifecycle based on model-driven concepts. It addresses the problem that the design and the implementation of a software system often drift apart as software systems evolve, leading to architectural knowledge evaporation. Our approach provides concepts and tool support for the semi-automatic abstraction of architectural knowledge from implemented systems and for keeping the abstracted architectural knowledge up-to-date. In particular, we propose architecture abstraction concepts that are supported through a domain-specific language (DSL). We focus on providing architectural abstraction specifications in the DSL that only need to be changed if the architecture changes, but can tolerate non-architectural changes in the underlying source code. The approach supports full traceability between source code elements and architectural abstractions, as well as guidance through automatically calculated software metrics.

2015-10-20 14:00 in S9: Ph.D. defense rehearsal

Peter Libič

On Garbage collection.

2015-10-15 15:40 in Refectory: Time and Events: From Physics to Informatics and Music

Gérard Berry (Collège de France)

Time has always been a mystery, both in current life and in Physics. It took a very long time to build accurate clocks telling what time it is and making it possible to precisely measure durations, a problem that has been only recently solved by Physics thanks to atomic clocks. This is reflected in our everyday language, which is largely unable to talk precisely about time, as well as in classical programming languages that basically ignore time and keep the handling of external events outside their instruction core. However, correctly handling time- and event-related issues has become crucial in many domains: electronic circuits driven by multiple clocks, network-based distributed systems, cyber-physical systems that embed computers to control physical objects, time-aware databases, computer music, etc. The talk discusses the recent ways to deal with time and events using specific formalisms and programming languages. We demonstrate that the standard real-number based "time arrow" is too limited and discuss much more elaborate models that generalize the basic notion of time to the repetition of arbitrary and possibly irregular events, deal with actions that look timeless and atomic at one level of observation but timeful at a lower abstraction level, etc. We present the programming formalisms and languages that implement this richer view, and discuss applications in fields as diverse as electronic circuits, critical software in avionics, and computer music.

Cyber-physical systems (CPSs) are deemed the key enablers of next-generation applications. Needless to say, the design, verification and validation of cyber-physical systems reaches unprecedented levels of complexity, especially due to their sensitivity to safety issues. From this perspective, leveraging architectural descriptions to reason about a CPS seems to be the obvious way to manage its inherent complexity. A body of knowledge on architecting CPSs has been built up in the past years. Still, the trends of research on architecting CPSs are unclear. In order to shed some light on the state of the art in architecting CPSs, this talk presents the results of an ongoing systematic study of the challenges, goals, and solutions reported so far in architecting CPSs.

September 2015

2015-09-15 14:00 in S11: Application of Software Components in Operating System Design

Martin Děcký

Ph.D. defense rehearsal

2015-09-14 09:00 in S11: Ph.D. Rehearsal talk

This is a rehearsal of two talks before the SCANS 2015 workshop.

The first talk: Recently, several ensemble-based component models have been created to address the dynamicity and complexity of designing cyber-physical systems. Experience in applying these models to actual case studies has shown that there are still scenarios in distributed organization that are hard to capture by utilizing only the concepts of these component models. In this paper, we present a summary of the issues encountered, based on the analysis of selected case studies. We propose new concepts that build on those contained in ensemble-based models. In particular, we introduce the ideas of ensemble nesting, dynamic role cardinalities and ensemble fitness. These concepts and their support in the runtime framework aim at serving as a bridge between high-level ensemble formation rules and the low-level decentralized implementation. The concepts are illustrated on one of the case studies, demonstrating a domain-specific language based on that used in the DEECo component model.

The second talk: Security and trust play an important role in Smart Cyber-Physical Systems (sCPS), which are formed as open and large collections of autonomous context- and self-aware adaptive components that dynamically group themselves and cooperate (all in a rather decentralized manner). Such a high level of dynamicity, open-endedness and context-dependence, however, makes existing approaches to security and trust in distributed systems not fully suitable (typically being too static and unable to cope with decentralization). In this paper we introduce the concepts of context-dependent security and trust defined at the architecture level of sCPS. Contrary to traditional approaches, our solution allows components to adapt their security clearance according to their context (i.e. their state and the surrounding environment), while preserving high-level security policies. We further define the interplay of security and trust in sCPS and show their interrelation as an important ingredient in achieving security in systems of adaptive autonomous components.

July 2015

2015-07-07 11:30 in S7: Why3 and SPARK projects

David Hauzar (Inria, France)

June 2015

Modern software systems typically operate in dynamic environments and deal with highly changing operational conditions: components can appear and disappear, may become temporarily or permanently unavailable, may change their behavior, etc. Self-adaptation is an effective approach to dealing with the complexity, uncertainty, and dynamicity of these systems. To provide guarantees of the functional correctness of the adaptation logic, formal methods can be used as a rigorous means for specifying and reasoning about the behavior of self-adaptive systems. Formally founded design models, covering both structural and behavioral aspects of self-adaptation, and approaches to validate and verify behavioral properties are in high demand. The talk presents a conceptual and methodological framework for modeling, validating, and verifying distributed self-adaptive systems. The framework is based on the multi-agent Abstract State Machine formal method, and makes it possible to specify decentralized adaptation control by using MAPE-K (Monitor-Analyze-Plan-Execute over a Knowledge base) computations.

2015-06-23 14:00 in S9: Using logic solvers to detect faults in feature models

Angelo Gargantini (Università di Bergamo, Italy)

Feature models (FMs) allow designers to specify families of products, generally called Software Product Lines (SPLs), in a simple way. A feature model lists the features in a product line together with their possible values and constraints. In this way, it can represent in a compact and easily manageable way millions of variants, each representing a possible product. We can distinguish two kinds of faults in FMs: static faults or anomalies (like dead features) that refer to the structure of the FM (regardless of the actual SPL it should represent), and behavioural or conformance faults that refer to discrepancies between the FM and the SPL it should represent. We show that using a logic representation and a logic solver (like a SAT solver) our method is able to detect static anomalies and generate tests able to discover behavioural faults.
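The dead-feature check mentioned above can be sketched in a few lines of Python. The feature model and its constraints below are invented for illustration, and exhaustive enumeration stands in for the SAT solver the talk relies on (a real tool would translate the FM to CNF and query the solver per feature):

```python
from itertools import product

# Toy feature model (hypothetical): features and one boolean validity predicate.
features = ["car", "engine", "electric", "gas"]

def valid(cfg):
    """All constraints of the feature model, conjoined."""
    return (cfg["car"]                                # root is mandatory
            and cfg["engine"] == cfg["car"]           # engine mandatory under car
            and (cfg["electric"] or cfg["gas"])       # or-group: at least one
            and not (cfg["electric"] and cfg["gas"])  # xor: not both
            and not cfg["electric"])                  # cross-tree constraint kills 'electric'

def dead_features():
    """A feature is dead (a static anomaly) if no valid configuration selects it."""
    alive = set()
    for bits in product([False, True], repeat=len(features)):
        cfg = dict(zip(features, bits))
        if valid(cfg):
            alive |= {f for f in features if cfg[f]}
    return [f for f in features if f not in alive]

print(dead_features())  # prints ['electric']: the feature can never be selected
```

With a SAT solver, the same question becomes one satisfiability query per feature: is "constraints AND feature" satisfiable?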

One of the well-known techniques for model-based test generation exploits the capability of model checkers to return counterexamples upon property violations. However, this approach is not always optimal in practice due to the required time and memory, or may even be infeasible due to the state explosion problem of model checking. A way to mitigate these limitations is to decompose a system model into suitable subsystem models that can be analyzed separately. We show a technique to decompose a system model into subsystems by exploiting the dependencies among model variables, and then we propose a test generation approach which builds tests for the single subsystems and combines them later in order to obtain tests for the system as a whole. This approach mitigates the exponential increase of test generation time and memory consumption and, compared with the same model-based test generation technique applied to the whole system, proves to be more efficient. We prove that, although not complete, the approach is sound.

2015-06-17 09:00 in S5: Control Theory for Software Engineering

Many software applications may benefit from the introduction of control theory at different stages of the development process. The identification of requirements often translates directly into the definition of control goals. In fact, when these requirements are quantifiable and measurable, control theory offers a variety of techniques to complement the software design process with a feedback loop design process, which empowers the original software with self-adaptive capabilities and allows it to fulfill the quantifiable requirements. The feedback loop design process consists in defining the “knobs”, that is, what can be changed during the software's lifetime to affect its behavior, together with the measurements of the goals. Control theory then makes it possible to define models that describe the relationship between the values of the knobs and the measured values of the software behavior. These models are used to design decision loops and to guarantee properties of the closed-loop systems. In this talk I will briefly describe examples where model-based control allowed us to guarantee the satisfaction of specific properties, such as synchronization between different nodes in a wireless sensor network and upper bounds on response times in a cloud application.
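The knob/measurement/goal structure described above can be sketched with a toy integral controller. Everything here is illustrative: the linear "plant" model, the gain, and the admission-fraction knob are assumptions, not the speaker's actual system:

```python
# A hypothetical cloud service whose response time we want to keep at a goal
# by turning a knob (the fraction of offered requests that are admitted).
GOAL = 100.0   # target response time (ms)
K_I = 0.002    # integrator gain, hand-tuned for the toy model below

def measure_response_time(load, knob):
    # Assumed linear plant model: response time grows with the admitted load.
    return 20.0 + 0.8 * load * knob

knob = 1.0  # start by admitting everything
for step in range(200):
    load = 300.0                                    # offered load (req/s), constant here
    error = GOAL - measure_response_time(load, knob)
    knob = min(1.0, max(0.0, knob + K_I * error))   # integrate the error, clamp to [0, 1]

# The closed loop settles where the measured response time equals the goal.
print(round(measure_response_time(300.0, knob), 1))  # prints 100.0 for this model
```

The controller needs no detailed knowledge of the plant beyond the sign and rough magnitude of the knob's effect; that robustness is one of the selling points of the control-theoretic framing.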

Today's complex, heterogeneous and open information systems bring multiple challenges in the area of cyber security. One of the methods of securing enterprise information systems at the network level is the implementation of Intrusion Prevention Systems (IPS). The talk gives an overview of the principles of IPS functionality and its architecture. We will then focus in more detail on signature-based attack detection methods, their advantages and limitations.

May 2015

We present a new verification algorithm, PANDA, that combines predicate abstraction with concrete execution and dynamic analysis. Both the concrete and abstract state spaces are traversed simultaneously, guiding each other through on-the-fly mutual interaction. Specific information from dynamic program states is used to improve precision even further. A consequence of the simultaneous concrete and abstract execution is that inconsistencies may arise during traversal of the combined state space. We designed two methods for solving the inconsistencies - (1) dynamic pruning of abstract branches with discovery of feasible covering paths, and (2) adjusting of concrete states based on predicates' valuations. Additional spurious errors are eliminated using the well-known approach based on lazy abstraction refinement with interpolants. [PDF]

2015-05-18 15:30 in S8: The Abstract State Machines Method for the Design and Analysis of Software-Intensive Systems

Egon Börger (Università di Pisa, Italy)

The huge gap between much of academic theory and the prevailing software and hardware practice is still with us, as is a wide-spread scepticism about the industrial benefit of formal methods. In this talk I will survey the ASM approach to the design and analysis of software-intensive systems which contributes to bridging this gap. I will explain that it offers a mathematically well founded and rigorous but nevertheless simple discipline, practical and scalable to industrial applications in a great variety of fields including programming and domain specific languages, architectures, protocols, control systems, business processes and others. As an illustration I will present the recent definition of ASM nets which are tailored for rigorous business process specification.

2015-05-13 09:00 in S5: An Architecture Framework for Experimentations with Self-Adaptive Cyber-Physical Systems

Ilias Gerostathopoulos

Rehearsal talk for SEAMS 2015 artifact presentation. Expected audience are the members of the component group.

2015-05-12 14:00 in S9: Nested Antichains for WS1S

Tomáš Fiedor (FIT VUT Brno)

We propose a novel approach for coping with alternating quantification as the main source of nonelementary complexity of deciding WS1S formulae. Our approach is applicable within the state-of-the-art automata-based WS1S decision procedure implemented, e.g., in MONA. The way in which the standard decision procedure processes quantifiers involves determinization, with its worst case exponential complexity, for every quantifier alternation in the prefix of a formula. Our algorithm avoids building the deterministic automata---instead, it constructs only those of their states needed for (dis)proving validity of the formula. It uses a symbolic representation of the states, which have a deeply nested structure stemming from the repeated implicit subset construction, and prunes the search space by a nested subsumption relation, a generalization of the one used by the so-called antichain algorithms for handling nondeterministic automata. We have obtained encouraging experimental results, in some cases outperforming MONA by several orders of magnitude.

April 2015

2015-04-28 14:00 in S9: Programming with Numerical Uncertainties

Eva Darulová (EPFL, Switzerland)

Numerical software, common in scientific computing or embedded systems, inevitably uses an approximation of the real arithmetic in which most algorithms are designed. Finite-precision arithmetic, such as fixed-point or floating-point, is a common and efficient choice, but introduces an uncertainty on the computed result that is often very hard to quantify. We need adequate tools to estimate the errors introduced in order to choose suitable approximations which satisfy the accuracy requirements. I will present a new programming model where the scientist writes his or her numerical program in a real-valued specification language with explicit error annotations. It is then the task of our verifying compiler to select a suitable floating-point or fixed-point data type which guarantees the needed accuracy. I will show how a combination of SMT theorem proving, interval and affine arithmetic and function derivatives yields an accurate, sound and automated error estimation which can handle nonlinearity, discontinuities and certain classes of loops. We have further combined our error computation with genetic programming to not only verify but also improve accuracy. Finally, together with techniques from validated numerics we developed a runtime technique to certify solutions of nonlinear systems of equations, quantifying truncation in addition to roundoff errors.
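The interval-arithmetic building block mentioned in the abstract can be illustrated with a minimal sketch. This is a simplification: real tools combine intervals with affine arithmetic and SMT, and they round interval endpoints outward so the enclosures stay sound under floating-point evaluation, which this toy class does not do:

```python
# Toy interval arithmetic: each value is known only up to an uncertainty,
# and operations propagate guaranteed lower/upper bounds.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds are the min/max over all endpoint combinations.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# x carries a rounding/measurement uncertainty of +/- 0.01:
x = Interval(0.99, 1.01)
y = Interval(2.0, 2.0)
print(x * y + x)  # an enclosure of 3x, roughly [2.97, 3.03]
```

Note how the 0.01 input uncertainty grows to 0.03 on the result; quantifying exactly this kind of error propagation, automatically and soundly, is what the talk's verifying compiler does.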

The emerging area of (smart) Cyber-Physical Systems (sCPS) triggers demand for new methods of design, development, and deployment of architecturally dynamic distributed systems. Current approaches (e.g. Component-Based Software Engineering and Agent-Based Development) become insufficient since they fail to address challenges specific to sCPS such as mobility, a heterogeneous and unreliable deployment infrastructure, and architectural dynamicity. The strong dependence on the underlying communication infrastructure, often combining ad-hoc links typical of wireless connectivity with the more reliable connections of infrastructural networks, requires a novel method to optimize system deployment. In this paper we propose such a method based on the domain knowledge elicited from the design-level specification. As a proof of concept, we have provided an extension to the DEECo (Dependable Emergent Ensembles of Components) model and validated it on a scenario from the domain of Vehicular Area Networks.

Many software-intensive systems today are very-large-scale software systems with systems of systems (SoS) architectures comprising interrelated and heterogeneous systems developed by diverse teams over many years. Due to their scale, complexity, heterogeneity, and variability, engineers face significant challenges when determining the compliance of SoS with their requirements. In particular, certain behavior only emerges at runtime due to complex interactions between the involved systems and their environment. Monitoring the behavior of SoS at runtime is thus essential. However, existing requirements monitoring approaches are often limited to particular architectural styles or technologies and are thus hard to apply in SoS architectures. They do not adequately consider the characteristics of SoS: requirements exist at different levels, across different systems, and are owned by diverse stakeholders. This talk provides an overview of the research on requirements monitoring for very-large-scale software systems conducted at the Christian Doppler Laboratory MEVSS at the Johannes Kepler University Linz, Austria. More specifically, in cooperation with the company PRIMETALS Technologies, we have been developing REMINDS, a flexible framework for runtime monitoring of system-of-systems architectures, which is based on a requirements monitoring model defining the key elements to be monitored and their relations: requirements in SoS can have different scopes and have to be refined and formalized as constraints to allow checking them at runtime. The SoS is instrumented using probes, which provide events and event data at runtime to a unified event model. Constraints operate on this model and check events and event data. Examples include behavioral constraints such as event sequences, pre-conditions and post-conditions on event occurrence, as well as invariant checks performed on the data associated with monitored events. The separation of concerns between the actual systems and a higher-level instance for constraint definition and evaluation allows the definition of cross-cutting and global constraints, which require data aggregated from various systems. The talk concludes with a summary of ongoing work, including details on how we manage the co-evolution of the monitoring infrastructure and the monitored system by applying variability management techniques.

March 2015

In this talk I will first give a quick overview of existing empirical research methodologies and then describe the design and execution of a controlled experiment with students that we performed in D3S to evaluate a design method for distributed dynamic systems, the Invariant Refinement Method. I will present the results of the experiment and the lessons learned. Finally, the related threats to validity will be discussed together with the way we tried to address them. [PDF]

2015-03-24 14:00 in S9: Verification of Use-Cases with FOAM tool in Context of Cloud Providers

Jiří Vinárek

Use-cases are a well-known technique for capturing functional requirements. Their advantage is their understandability for a wide range of stakeholders. With a growing number of use-cases and their continuous refactoring, inconsistencies inevitably sneak in. This problem has been targeted by the FOAM tool, which runs lightweight formal verification of temporal invariants in use-cases. This talk demonstrates the usability of our tool on a real-life case study of a system for managing applications on a PaaS cloud platform. In particular, we show how development and refactoring are supported by our tool and the types of errors that can be discovered early. [PDF]

2015-03-18 09:00 in S5: How to Optimize the Use of SAT and SMT Solvers for Test Generation of Boolean Expressions

Paolo Arcaini

In the context of automatic test generation, the use of propositional satisfiability (SAT) and Satisfiability Modulo Theories (SMT) solvers is becoming an attractive alternative to traditional algorithmic test generation methods, especially when testing Boolean expressions. The main advantages are the capability to deal with constraints over the inputs, the generation of compact test suites and the support for fault-detecting test generation methods. However, these solvers normally require more time and a greater amount of memory than classical test generation algorithms, which makes them not always applicable in practice. We propose several ways to optimize the SAT/SMT-based process of test generation for Boolean expressions and we compare several solving tools and propositional transformation rules. These optimizations promise to make SAT/SMT-based techniques as efficient as standard methods for testing purposes, especially when dealing with Boolean expressions, as shown by our experiments.
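The fault-detecting generation mentioned above reduces to satisfiability: a test that detects a given fault is any assignment on which the original expression and the faulty one ("mutant") differ, i.e. a model of their XOR. A sketch, with an invented example expression and exhaustive enumeration playing the role of the SAT solver:

```python
from itertools import product

def orig(a, b, c):
    return (a and b) or c

def mutant(a, b, c):           # seeded operator fault: 'and' mistyped as 'or'
    return (a or b) or c

def distinguishing_test(f, g, arity):
    """Find a model of (f XOR g), i.e. an input that detects the fault."""
    for bits in product([False, True], repeat=arity):
        if f(*bits) != g(*bits):
            return bits
    return None                # no model: the two expressions are equivalent

print(distinguishing_test(orig, mutant, 3))  # prints (False, True, False)
```

A SAT solver answers the same query without enumerating the 2^n inputs, which is what makes the approach viable for large expressions, at the cost of the per-query overhead the talk sets out to optimize.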

2015-03-17 14:15 in S9: Asynchronous programming in C# and new features of C# 6

The paper introduces a technique to symbolically execute hierarchically composed models based on communicating state machines. The technique is modular and starts with non-composite models, which are symbolically executed. The results of the execution, symbolic execution trees, are then composed according to the communication topology. The composite symbolic execution trees may be composed further, reflecting the hierarchical structure of the analyzed model. The technique supports reuse, meaning that already generated symbolic execution trees, composite or not, are used any time they are required in the composition. For illustration, the technique is applied to analyze UML-RT models, and the paper shows several analysis options such as reachability checking or test case generation. The presentation of the technique is formal, but we also report on the implementation and we present some experimental results.

February 2015

Incremental static analysis involves analyzing changes to a version of the source code along with analyzing code regions that are semantically affected by the changes. Existing analysis tools that attempt to perform incremental analysis can perform redundant computations due to poor abstraction. In this paper, we design a novel and efficient incremental analysis algorithm for reducing the overall analysis time. We use a path abstraction that encodes different paths in the program as a set of constraints. The constraints, encoded as boolean formulas, are input to a SAT solver and the (un)satisfiability of the formulas drives the analysis further. While a majority of boolean formulas are similar across multiple versions, the problem of finding their equivalence is graph isomorphism complete. We address a relaxed version of the problem by designing efficient memoization techniques to identify equivalence of boolean formulas to improve the performance of the static analysis engine. Our experimental results on a number of large codebases (up to 87 KLoC) show a performance gain of up to 32% when incremental analysis is used. The overhead associated with identifying equivalence of boolean formulas is smaller (not more than 8.4%) than the overall reduction in analysis time.

The processing of Big Data often includes sorting as a basic operator. Indeed, it has been shown that many software applications spend up to 25% of their time sorting data. Moreover, for compute-bound applications, the most energy-efficient executions have been shown to use a CPU speed lower than the maximum speed: the CPU sweet spot frequency. In this talk, I'll present recent findings on running Big Data intensive applications in a more energy-efficient way. I'll show empirical evidence that data-intensive analytic tasks are more energy-efficient when CPU(s) operate(s) at sweet spot frequencies. The approach uses a novel high-precision, fine-grained energy measurement infrastructure to investigate the energy (joules) consumed by different sorting algorithms and database queries. The experiments show that algorithms and queries can have different sweet spot frequencies for the same computational task. To leverage these findings, I'll describe how a self-adaptive system can be engineered which makes use of sweet spot frequencies.

January 2015

On contemporary execution platforms, even small scale performance experiments are turning from an easy and reliable way of evaluating software performance into a difficult exercise of tracking a multitude of technical details that must be dealt with even in otherwise very simple scenarios. The goal of this tutorial is to help practitioners - programmers, administrators, researchers - who need to evaluate software performance by providing a compact technical overview of the essential problems and the available solutions in performance experiments.

Many decisions taken during software development impact the resulting application performance. The key decisions whose potential impact is large are usually carefully weighed. In contrast, the same care is not used for many decisions whose individual impact is likely to be small – simply because the costs would outweigh the benefits. Developer opinion is the common deciding factor for these cases, and our goal is to provide the developer with information that would help form such opinion, thus preventing performance loss due to the accumulated effect of many poor decisions. Our method turns performance unit tests into recipes for generating performance documentation. When the developer selects an interface and workload of interest, relevant performance documentation is generated interactively. This increases performance awareness – with performance information available alongside standard interface documentation, developers should find it easier to take informed decisions even in situations where expensive performance evaluation is not practical. We demonstrate the method on multiple examples, which show how equipping code with performance unit tests works.
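The idea of a performance unit test that doubles as documentation can be sketched with the standard library's `timeit`. The function name, workload, and output format below are illustrative inventions, not the authors' tool; the point is only that a small, repeatable measurement can be attached to an interface and rendered next to its API docs:

```python
import timeit

def perf_unit_test(name, stmt, setup, repeat=5, number=1000):
    """Run a tiny benchmark and return a documentation-ready one-liner."""
    # Best-of-N timing reduces noise from other activity on the machine.
    best = min(timeit.repeat(stmt, setup, repeat=repeat, number=number))
    per_call_us = best / number * 1e6
    return f"{name}: ~{per_call_us:.1f} us/call (best of {repeat} runs)"

doc_line = perf_unit_test(
    "sorted() on 1000 random floats",
    "sorted(data)",
    "import random; data = [random.random() for _ in range(1000)]",
)
print(doc_line)
```

Generated lines like this one, regenerated whenever the tests run, are exactly the kind of always-current performance documentation the abstract argues for: cheap enough to attach to many interfaces, yet concrete enough to inform small design decisions.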

We present an SMT-based symbolic model checking algorithm for safety verification of recursive programs. The algorithm is modular and analyzes procedures individually. Unlike other SMT-based approaches, it maintains both over- and under-approximations of procedure summaries. Under-approximations are used to analyze procedure calls without inlining. Over-approximations are used to block infeasible counterexamples and detect convergence to a proof. We show that for programs and properties over a decidable theory, the algorithm is guaranteed to find a counterexample, if one exists. However, efficiency depends on an oracle for quantifier elimination (QE). For Boolean Programs, the algorithm is a polynomial decision procedure, matching the worst-case bounds of the best BDD-based algorithms. For Linear Arithmetic (integers and rationals), we give an efficient instantiation of the algorithm by applying QE lazily. We use existing interpolation techniques to over-approximate QE and introduce Model Based Projection to under-approximate QE. Empirical evaluation on SV-COMP benchmarks shows that our algorithm improves significantly on the state-of-the-art.

December 2014

2014-12-17 09:00 in S5: On Models at Runtime in the Design of Flexible Software Architectures - Lessons Learned and Challenges Ahead

Mahdi Derakhshanmanesh (Universität Koblenz)

Software needs to be flexible so it can be altered in the face of ever changing requirements. If software is no longer adaptable, it needs to be re-engineered or even replaced. Modern approaches such as self-adaptive software and dynamic software product lines aim to provide solutions in which software is capable of autonomously adapting itself to sensed changes in its context (e.g., technical or human). The vision is that such systems reduce the amount of human intervention to the cases in which fundamental changes are required (macro-adaptation). In addition, software can react quickly, within certain bounds, by making changes to its own behavior, structure and state (micro-adaptation). To engineer such complex software systems, solid conceptual and technological foundations are crucial. In this talk we report on our experiences with models at runtime as an enabling technology for designing, constructing and maintaining flexible software architectures. In our approach, maintenance and self-adaptation can be carried out at the level of models that are an integrated part of the software system. We elaborate on lessons learned and discuss identified challenges as a basis for further discussion and collaboration.

Physically-driven methods of simulating fluid dynamics and frequency-based ocean surface synthesis methods are of long-standing interest for the field of computer graphics. However, they have been historically used separately or without any interaction between them. The thesis presented in this talk focuses on the possibility of combining the approaches into one adaptive solution by proposing methods for unified surface representation, method result blending and one-way interaction between the methods. The thesis also outlines several future developments of the combined method and proposes a level-of-detail approach taking advantage of hardware tessellation that can be used regardless of what method was used for the simulation.

Application crashes and errors that occur while loading a document are one of the most visible defects of consumer software. While documents become corrupted in various ways - from storage media failures to incompatibility across applications to malicious modifications - the underlying reason they fail to load in a certain application is that their contents cause the application logic to exercise an uncommon execution path which the software was not designed to handle, or which was not properly tested. We present Docovery, a novel document recovery technique based on symbolic execution that makes it possible to fix broken documents without any prior knowledge of the file format. Starting from the code path executed when opening a broken document, Docovery explores alternative paths that avoid the error, and makes small changes to the document in order to force the application to follow one of these alternative paths. We implemented our approach in a prototype tool based on the symbolic execution engine KLEE. We present a preliminary case study, which shows that Docovery can successfully recover broken documents processed by several popular applications such as the e-mail client pine, the pagination tool pr and the binary file utilities dwarfdump and readelf.

November 2014

Control theory provides solid foundations and tools for designing and developing a reliable feedback control that drives software adaptations at runtime. However, the integration of the resulting control mechanisms is usually left to extensive handcrafting of non-trivial implementation code. This is a challenging task considering the variety and complexity of contemporary distributed computing systems. In this talk I will present a domain-specific modeling language called FCDL that addresses the integration of adaptation mechanisms into software systems through an external feedback control loop. The key advantage of the domain-specific modeling approach is the possibility to raise the level of abstraction at which the feedback control, its processes and its interactions are described. This makes the resulting architecture amenable to automated analysis and implementation code synthesis. FCDL defines feedback control architectures as hierarchically organized networks of adaptive elements, actor-like entities that represent the corresponding processes such as monitoring, decision-making and reconfiguration. It is a statically typed language with support for composition, distribution and reflection, thereby enabling coordination of multiple hierarchically composed control loops. As a result, this should allow researchers and engineers to easily experiment with and put into practice different self-adaptation mechanisms and policies.

2014-11-18 14:00 in S9: Simulink Block Library for LEGO NXT

Dominik Škoda

Model-driven development (MDD) is a modern methodology of software creation. Its growing potential is used in the development of embedded and real-time systems (ERS). There is an effort to teach students this development technique in the area of embedded real-time devices. One suitable target platform for this purpose is LEGO NXT: a low-cost, modular platform with a wide range of sensors that reflects the hardware parameters of industrial products in the ERS category. A development environment suitable for teaching MDD techniques is Simulink, an extension of Matlab. Although there is official support for LEGO NXT in Simulink, it does not fulfill our needs. The main reasons are that the support works only on the Windows operating system and that it is closed source, preventing modification, customization and extension. The goal of this study was to implement this support for LEGO NXT and seamlessly integrate it into Simulink. The solution works on Linux and is open to further modification.

Domain-specific languages (DSLs) have demonstrated their capability to reduce the gap between the problem domain and the technical decisions made during the software development process. However, building a DSL is not an easy task because it requires specialized knowledge and skills. Moreover, the challenge becomes even more complex in the context of multi-domain companies, where several domains coexist across the business units and, consequently, there is a need to deal not only with isolated DSLs but also with families of DSLs. To deal with this complexity, the research community has been working on the definition of approaches that use the ideas of Software Product Line Engineering (SPLE) for building and maintaining families of DSLs. In this talk, I will present my current PhD thesis, which aims to contribute to this effort. In particular, I will explain the challenges that need to be addressed during the process of going from a family of DSLs to a software language line.

2014-11-04 14:00 in S9: Deviations prediction in timetables based on AVL data

Zbyněk Jiráček

Path planning using public transport is limited by the reliability of the transportation network. In some cases it turns out that we can plan paths with respect to expected delays and thereby improve the reliability of the resulting path. This study focuses on prediction of delays in public transport systems using data from vehicle tracking systems, known as AVL data, which are typically collected by transit operators. Various algorithms are compared using real data from the Prague tram tracking system. The study also includes a discussion of possible uses of the information gained from these methods in passenger information systems.
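As a hedged illustration of the kind of baseline such a comparison might include, the sketch below predicts a vehicle's deviation at a stop by exponentially smoothing historically observed delays per (line, stop) pair. All names (`DelayPredictor`, the line and stop identifiers) are illustrative and not taken from the talk.

```python
from collections import defaultdict

class DelayPredictor:
    """Baseline AVL delay predictor: exponential smoothing per (line, stop)."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.avg = defaultdict(float)  # (line, stop) -> smoothed delay in seconds

    def observe(self, line, stop, delay_s):
        key = (line, stop)
        # new estimate = alpha * latest observation + (1 - alpha) * old estimate
        self.avg[key] = self.alpha * delay_s + (1 - self.alpha) * self.avg[key]

    def predict(self, line, stop):
        return self.avg[(line, stop)]

p = DelayPredictor(alpha=0.5)
for d in (60, 80, 100):            # observed deviations (seconds) at one stop
    p.observe("22", "Malostranska", d)
print(p.predict("22", "Malostranska"))  # 77.5
```

More recent observations dominate the estimate, which matches the intuition that a tram currently running late tends to stay late at downstream stops.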

October 2014

Typical CEGAR-based verification methods refine the abstract domain based on full counterexample traces. The finite state model checking algorithm IC3 introduced the concept of discovering, generalizing from, and thereby eliminating individual state counterexamples to induction (CTIs). This focus on individual states suggests a simpler abstraction-refinement scheme in which refinements are performed relative to single steps of the transition relation, thus reducing the expense of refinement and eliminating the need for full traces. Interestingly, this change in refinement focus leads to a natural spectrum of refinement options, including when to refine and which type of concrete single-step query to refine relative to. Experiments validate that CTI-focused abstraction refinement, or CTIGAR, is competitive with existing CEGAR-based tools.

2014-10-21 14:00 in S9: Implementation of the DEECo component framework for embedded systems

Vladimír Matěna

Recent development in the field of distributed and decentralized cyber-physical systems has led to the emergence of the DEECo model. As many DEECo use cases are embedded applications, it is interesting to evaluate DEECo on embedded hardware. Currently, the only reference DEECo implementation is written in Java and thus cannot be used for embedded applications. In this talk I will present a C++ DEECo mapping and the embedded CDEECo++ framework, designed in C++ on top of the FreeRTOS operating system. The mapping and the framework will be described using a simple application designed for the STM32F4 board.

The number of interleavings of a concurrent program makes automatic analysis of such software very hard. Modern multiprocessors' execution models make this problem even harder. Modelling program executions with partial orders rather than interleavings addresses both issues: we obtain an efficient encoding into integer difference logic for bounded model checking that enables first-time formal verification of deployed concurrent systems code. We implemented the encoding in the CBMC tool and present experiments over a wide range of memory models, including SC, Intel x86 and IBM Power. Our experiments include core parts of PostgreSQL, the Linux kernel and the Apache HTTP server.

Craig interpolants are widely used in program verification as a means of abstraction. In this paper, we (i) introduce Partial Variable Assignment Interpolants (PVAIs) as a generalization of Craig interpolants. A variable assignment restricts the set of clauses taken into account during interpolation, thus focusing the computed interpolant. PVAIs can, for example, be employed in the context of DAG interpolation in order to prevent unwanted out-of-scope variables from appearing in interpolants. Furthermore, we (ii) present a way to compute PVAIs for propositional logic based on an extension of the Labeled Interpolation Systems, and (iii) analyze the strength of the computed interpolants and prove the conditions under which they have the path interpolation property.

2014-10-13 09:30 in a lecture room announced later: Runtime monitoring through Abstract State Machines

Paolo Arcaini (University of Bergamo, Italy)

In runtime monitoring, operational models describing the expected system behavior offer some advantages with respect to declarative specifications of properties, especially when designers are more accustomed to them. CoMA (Conformance Monitoring by Abstract State Machines) is a specification-based approach for runtime monitoring of Java software. Based on the information obtained from code execution and model simulation, the conformance of the concrete implementation is checked with respect to its formal specification, given in terms of Abstract State Machines. At runtime, undesirable behaviors of the implementation, as well as incorrect specifications of the system behavior, are recognized. Nondeterminism in the specification usually affects the performance of CoMA, which explicitly represents all the possible conformant states. To mitigate this problem, CoMA-SMT has been proposed: an SMT-based technique in which ASM computations are symbolically represented and conformance verification is performed by means of satisfiability checking.

2014-10-07 14:00 in S9: Resource-aware runtime monitoring

Borzoo Bonakdarpour (McMaster University, Canada)

Runtime monitoring refers to a technique where a monitor process checks at run time whether or not the execution of a system under inspection satisfies its specification. Runtime monitoring has numerous applications in online testing, tracing, controlling, and steering of computing systems. Thus, research efforts span different models of computation (e.g., real-time and distributed systems), application domains (e.g., cyber-physical, fault-tolerant, and secure systems), and levels of expressiveness of specifications. In all these cases, the main drawback of runtime monitoring is its runtime overhead. That is, naive design and implementation of a runtime monitor may seriously violate the resource constraints of the system under scrutiny. In this talk, I will present our techniques for the design, implementation, and deployment of resource-aware runtime monitors for LTL specifications that take timing, power, memory, and CPU constraints into account simultaneously. These techniques are based on a diverse set of sophisticated static and dynamic analysis methods, control theory, optimization and constraint solving, and parallel processing.

September 2014

Writing high quality, bug-free code is extremely time-consuming and expensive. Static software analysis tools promise to alleviate this problem by finding (residual) errors in programs fully automatically. However, they often suffer from certain imprecisions that result, e.g., in false positives. In this talk we present the Software Bounded Model Checker LLBMC, a high-precision software analysis tool for C and C++ programs that has been developed at the Karlsruhe Institute of Technology (KIT). LLBMC builds upon the LLVM compiler framework, using LLVM's intermediate representation as a starting point for its analysis. Within LLBMC, programs are modeled with bit-precision, thus achieving high accuracy. LLBMC also uses advanced SMT solvers as core decision procedures and employs a large set of rewriting rules to improve performance. LLBMC participated successfully in the Competitions on Software Verification (SV-COMP), and recently received a Gödel medal at the FLoC Olympic Games 2014.

2014-09-16 14:00 in S7: Correct Compilers for Correct Processors

Andreas Krall (Technische Universität Wien, Austria)

2014-09-16 09:00 in S9: Faults in Linux 2.6

Julia Lawall (INRIA/LIP6, France)

In August 2011, Linux entered its third decade. Ten years before, in 2001, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired a number of development and research efforts on improving the reliability of driver code. Today Linux is used in a much wider range of environments, provides a much wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? Are drivers still a major problem? To answer these questions, we have transported the experiments of Chou et al. to all the versions of Linux 2.6, released between 2003 and 2011. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been decreasing. And, even though the drivers directory still accounts for a large part of the kernel code and contains the most faults, its fault rate is now below that of other directories, such as arch (HAL) and fs (file systems). These results can guide further development and research efforts for the decade to come. To enable others to continually update these results as Linux evolves, we define our experimental protocol and make our checkers available. Joint work with Nicolas Palix, Gael Thomas, Suman Saha, Christophe Calves, and Gilles Muller.

August 2014

2014-08-19 14:00 in S9: Normative Multiagent Systems

Samhar Mahmoud (King's College London)

Norms provide a valuable mechanism for establishing coherent cooperative behaviour in decentralised systems with populations of self-interested agents. However, the existence of norms does not mean that agents will always comply with them. Within the literature, norm compliance can be established using two different approaches, top-down and bottom-up. In the top-down approach, a norm is imposed by some kind of authority, which is responsible for monitoring compliance with the norm. In the bottom-up approach, agents become aware of the existence of a norm through their interactions with each other, and it is the responsibility of every agent to monitor compliance with this norm. In this talk, I will discuss these two approaches, focusing more on the latter. In particular, I will show how the metanorm model and adaptive punishment managed to achieve norm emergence over various types of network topologies.

June 2014

2014-06-25 09:00 in S5: Graal and Truffle: One VM to Rule Them All

Thomas Wuerthinger (Oracle Labs Austria)

Graal is a dynamic meta-circular research compiler for Java that is designed for extensibility and modularity. One of its main distinguishing elements is the handling of optimistic assumptions obtained via profiling feedback and the representation of deoptimization guards in the compiled code. Truffle is a self-optimizing runtime system on top of Graal that uses partial evaluation to derive compiled code from interpreters. Truffle is suitable for creating high-performance implementations of dynamic languages with only moderate effort. The presentation includes a description of the Truffle multi-language API and performance comparisons within the industry of current prototype Truffle language implementations (JavaScript, Ruby, and R). Both Graal and Truffle are open source and together form research platforms in the area of virtual machine and programming language implementation (http://openjdk.java.net/projects/graal/).

Resource management is critical for application domains where components share their execution environments but belong to different stakeholders, such as smart homes or cloud systems. Yet, current middleware and application containers often hide system-level details needed for dynamic resource management. In particular, they tend to hide resource usage and offer automatic management instead (e.g., CPU, memory and I/O). In contrast, system-level containers, such as Linux Containers (LXC), allow fine-grain resource management. However, they lack knowledge about the application’s structure and its requirements in order to provide well tuned resource management. In this paper, we propose a flexible and efficient approach to resource management that takes advantage of the application’s structure and requirements to deploy components on system-level containers to manage resources. Our approach follows the models@runtime paradigm, which simplifies the development of self-adaptive systems and captures structural information of the application. In Squirrel, the application’s model is augmented with resource management contracts that are used at runtime to drive system-level containers and enforce resource management. We validate Squirrel’s feasibility and show its overhead regarding communication, CPU/memory consumption, and adaptation performance. The results demonstrate negligible impact on performance and only slight memory overhead when comparing a resource managed application to its original version.

May 2014

2014-05-27 14:00 in S9: Partial state matching

Pavel Jančík

2014-05-13 14:00 in S9: Dynamic Predicate Abstraction

Jakub Daniel

Last year we started working on dynamic predicate abstraction in the context of Java Pathfinder. Our approach combines systematic traversal of the concrete program state space with on-the-fly predicate abstraction. This talk will cover the basic approach, specific technical details (predicate language, heap representation, state matching), and current status of a prototype implementation.
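A minimal sketch of the core idea, on-the-fly predicate abstraction with abstract state matching: each concrete state is mapped to a valuation of a fixed predicate set, and the traversal prunes states whose abstraction has already been seen. The predicates and the dictionary state representation are invented for illustration; they are not the talk's actual predicate language.

```python
def abstract(state, predicates):
    """Map a concrete state (dict of variables) to a valuation of predicates."""
    return tuple(p(state) for p in predicates)

# Illustrative predicate set over two integer variables.
predicates = [
    lambda s: s["x"] > 0,
    lambda s: s["x"] == s["y"],
]

visited = set()

def should_explore(state):
    """State matching on the abstraction: prune if the valuation was seen."""
    a = abstract(state, predicates)
    if a in visited:
        return False        # abstractly matched -> prune this branch
    visited.add(a)
    return True

print(should_explore({"x": 1, "y": 1}))   # True  - new abstract state
print(should_explore({"x": 5, "y": 5}))   # False - same valuation (True, True)
```

Distinct concrete states collapse into one abstract state, which is exactly what makes the combined traversal terminate on otherwise unbounded state spaces.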

2014-05-07 09:00 in S5: Fault detection and classifications

Mo Adda

The visualization of classified faults can help network managers to take well informed decisions pertaining to security, faults and performance. This presentation therefore addresses fault classification using FCM. We will present a prototype to demonstrate the efficiency of the algorithm used. The empirical results are compared with other algorithms such as Neural Networks. Results reveal that FCM outperforms other algorithms.
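Assuming FCM here refers to fuzzy c-means, the sketch below shows the membership update at its core: a data point receives a membership degree in each cluster, inversely weighted by its relative distance to the cluster centers, so memberships always sum to one.

```python
def fcm_memberships(point, centers, m=2.0):
    """One fuzzy c-means membership update for a single 1-D point:
    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)).
    Assumes the point does not coincide with any center (no zero distance)."""
    d = [abs(point - c) for c in centers]
    return [1.0 / sum((di / dj) ** (2 / (m - 1)) for dj in d) for di in d]

u = fcm_memberships(1.0, [0.0, 4.0])
print([round(x, 2) for x in u])  # [0.9, 0.1] - closer center gets higher membership
```

A fault sample would be a feature vector rather than a scalar, but the membership formula is the same with a vector distance.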

The design of Exascale computing is opening a new horizon for interconnection networks in parallel systems. This presentation will look at the evolution of parallel computers from on-chip to off-chip design, suggest a generic semantics to classify interconnection networks, and discuss their generalisation into super-nodes. We will look at optimisations in terms of power consumption requirements for future networks.

2014-04-30 09:00 in S5: On data-flow analysis of dynamic language

2014-04-02 09:00 in S5: PonyCloud

March 2014

2014-03-12 09:00 in S5: On The Limits of Modeling Generational Garbage Collector Performance

Peter Libič

ICPE rehearsal

2014-03-11 14:00 in S9: Performance Awareness (Keynote Rehearsal)

Petr Tůma

The talk will take a broad look at performance awareness, defined as the ability to observe performance and to act on the observations. The implicit question posed in the talk is what can be done to improve various aspects of performance awareness – be it our awareness of the various performance relevant mechanisms, our awareness of the expected software performance, our ability to attain and exploit performance awareness as software developers, and our options for implementing performance aware applications.

2014-03-05 09:00 in S5: A brief overview of Java 8 new features

Petr Hnětynka

A brief overview of Java 8 new features

February 2014

2014-02-25 14:00 in S9: An Overview of Requirements Traceability Methods

Jiří Vinárek

Requirements traceability, defined as “the ability to describe and follow the life of a requirement, in both a forwards and backwards direction (i.e., from its origins, through its development and specification, to its subsequent deployment and use, and through periods of on-going refinement and iteration in any of these phases)” (Gotel and Finkelstein 1994) is a critical element of any rigorous software and systems development process. This talk will present overview of the methods for (semi)automatic trace derivation.

2014-02-19 09:00 in S5: Introduction to Bck2brwsr project

Jaroslav Tulach (Oracle)

A year and a half ago I started to play with the idea of a small Java that would fit into any modern browser and run without any plugins. Since then I have managed to reach several milestones. I have a special, slim API for controlling an HTML page front end from Java. I can run the system on desktop, some mobile phones/platforms and also directly in a browser. The system can run complex apps, including those using Javac: http://dew.apidesign.org/dew/. I have IDE tooling that makes creating such clients real fun. Overall I am starting to believe that coding in my Java/HTML mixture is more effective, more compact and less error-prone than using JavaScript instead. The work is not finished, but I believe there is a lot to show. Stop by, I’ll be glad to share the direction to get Java bck2brwsr…

December 2013

Formal verification techniques have been used successfully in analyses of software and hardware. Where they are lacking is the domain of Hybrid Systems, which model the interaction of continuously evolving dynamics with discrete transitions. Common examples are safety-critical embedded systems which interact with physical systems using sensors and actuators in the domains of robotics, automotive and aviation. In this talk, we present an approach for falsification of safety properties. A falsification procedure finds a trajectory and the associated concrete set of conditions that lead to the safety property being violated. Our approach borrows ideas from the fields of optimal control and motion planning to search for candidate trajectory segments which can then be spliced together to form concrete trajectories. We compare our technique with other falsification procedures, including uniform random sampling and the robustness-guided falsification used by S-Taliro. A preliminary evaluation shows the potential of our approach.

2013-12-12 14:00 in S8: Utilisation of formal methods for cyber security at FireEye

Hendrik Tews

The talk will first provide some background information about the challenges of operating-system verification and the results achieved so far. I will then present FireEye's approach for improving cyber security and introduce the new research and development center at Dresden, whose task is to use formal methods to further improve FireEye's products.

2013-12-11 09:00 in S5: Cooperative Web Cache

Luboš Mátl

Many web pages try to attract as many visitors as possible with attractive design. However, this comes at a price, as clients need to transmit significantly more data. The average size of today's web pages is rising so fast that it actually surpasses the increase in internet connection speeds, resulting in slower page load times. A longer wait for a page load discourages visitors, resulting in a loss of potential customers. The goal of CWC (Cooperative Web Cache) is to improve the current method of distributing content on the web by introducing a distributed cache. The basic idea is to share static content from visited web pages among their visitors. By providing a small amount of their resources, visitors help spread the static content distributed by web servers. In comparison to the current model of obtaining the content needed to render a page, this approach brings excellent scalability, because the number of content providers increases along with the number of clients requesting the content. CWC is built on the peer-to-peer network Pastry and its extension Scribe.
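The sketch below illustrates one way such a distributed cache might route requests: Pastry-style key-based selection of the peer responsible for a static resource, where both peers and resources are hashed into one id space and the numerically closest peer serves the content. The hashing scheme and peer names are illustrative assumptions, not CWC's actual protocol.

```python
import hashlib

def key(s):
    """Hash a string (peer id or resource URL) into a large integer id space."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

def responsible_peer(resource_url, peer_ids):
    """Pick the peer whose id is numerically closest to the resource key,
    mimicking Pastry-style key-based routing."""
    k = key(resource_url)
    return min(peer_ids, key=lambda p: abs(key(p) - k))

peers = ["peer-a", "peer-b", "peer-c"]
print(responsible_peer("https://example.com/logo.png", peers))
```

Every client computes the same mapping independently, so requests for the same static object converge on the same cache peer without central coordination.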

2013-12-10 14:00 in S9: Composability and Predictability in the CoMPSoC Platform

Benny Åkesson (FEL ČVUT)

System-on-chip (SOC) design gets increasingly complex, as a growing number of applications are integrated in modern systems. Some of these applications have real-time requirements, such as a minimum throughput or a maximum latency. To reduce cost, system resources are shared between applications, making their timing behavior inter-dependent. Real-time requirements must hence be verified for all possible combinations of concurrently executing applications, which is not feasible with commonly used simulation-based techniques. This presentation addresses this problem using two complexity-reducing concepts: composability and predictability. Applications in a composable system are completely isolated and cannot affect each other’s behaviors, enabling them to be independently verified. Predictable systems, on the other hand, provide lower bounds on performance, allowing applications to be verified using formal performance analysis. Five techniques to achieve composability and/or predictability in SOC resources are presented and we explain their implementation for processors, interconnect, and memories in the CoMPSoC platform. [PDF]

2013-12-03 14:00 in S9: Highlights of MOD 2013

Pavel Jancik

November 2013

2013-11-27 09:00 in S5: Aspect-driven design of Information Systems

Karel Čemus (FEL, ČVUT)

Contemporary enterprise web applications must deal with a large stack of different kinds of concerns involving business rules, security policies, cross-cutting configuration, etc. At the same time, increasing demands on user interface complexity force designers to consider the above concerns in the presentation layer. To locate concern knowledge, we try to identify an appropriate system component holding the concern definition. Unfortunately, this is not always possible, since some concerns cross-cut multiple components; to capture the entire knowledge, we thus need to locate multiple components. In addition, we must restate the knowledge in the user interface because of the technological incompatibility between the knowledge source and the user interface language. Such a design suffers from tangled and hard-to-read code, due to the cross-cutting concerns, and also from restated information and duplicated knowledge. This leads to a product that is hard to maintain: a small change becomes expensive, error-prone and tedious due to the necessity of manual changes in multiple locations. The talk will introduce a novel approach based on an independent description of all orthogonal concerns in information systems and their dynamic, automated weaving according to the current user's context. Such an approach reduces information restatement, speeds up development and simplifies maintenance efforts through automated programming and runtime weaving of all concerns, and thus distributes the knowledge through the entire system.

2013-11-26 14:00 in S9: Is Static Analysis of PHP Simple?

David Hauzar

Report on what we have been doing in the static analysis of PHP topic. The talk will summarize what has to be done to implement static analysis that supports real-world code.

2013-11-20 09:00 in S5: News from Garbage Collection World

Peter Libic

Report on the interesting GC papers: one on GC implementation in FPGA and one collection algorithm that uses free()-like hints to decrease GC pause times.

2013-11-19 14:15 in S9: Summary of CSEE&T and ICSE 2013

Pavel Jezek

2013-11-13 09:00 in S5: Designing for adaptation in DEECo-based systems

Ilias Gerostathopoulos

This is a continuation of last week's seminar held on 06/11/13. This part will focus on runtime monitoring of assumptions, selection of alternatives through the SAT solving, extensions of the jDEECo platform to support IRM-SA, and interesting future extensions to IRM-SA. [PDF]

2013-11-06 09:00 in S5: Designing for adaptation in DEECo-based systems

Ilias Gerostathopoulos

In this talk we will describe IRM-SA (IRM for Self-Adaptivity), which is an extension to our IRM (Invariant Refinement Method). IRM is a design method specifically focusing on the domain of DEECo-based (or, more generally, ensemble-based) systems that allows for traceability between high-level system goals/requirements and low-level implementation artifacts. IRM-SA builds on this traceability and extends it by connecting the identified assumptions about different operational environments with predefined variants of system architecture, thus achieving situation-based adaptation.

October 2013

2013-10-30 09:00 in S5: Read-Copy-Update for HelenOS, Part II

Martin Děcký

Second part of the talk about Read-Copy-Update algorithms for HelenOS. This time we will hopefully get to the actual implementation. [PDF]

2013-10-29 14:00 in S9: Read-Copy-Update for HelenOS

Martin Děcký

Overview of the characteristics of the new scalable microkernel-specific RCU algorithms developed by Adam Hraška and others for HelenOS. These novel RCU algorithms allow for implementing highly scalable concurrent data structures, for example a hash table that provides concurrent lock-free lookups, concurrent lock-free updates and concurrent growing and shrinking. [PDF]
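A toy sketch of the read-copy-update idea (not the HelenOS algorithms themselves): readers take a snapshot reference without locking, while a writer copies the data, modifies the copy, and publishes it with a single atomic reference swap. Readers that started before the update keep seeing the old version unchanged.

```python
import threading

class RcuCell:
    """Toy read-copy-update cell. Readers never block; the lock only
    serializes concurrent writers."""
    def __init__(self, data):
        self._data = dict(data)        # treated as immutable once published
        self._lock = threading.Lock()

    def read(self):
        return self._data              # lock-free: a single reference load

    def update(self, key, value):
        with self._lock:
            copy = dict(self._data)    # copy
            copy[key] = value          # update
            self._data = copy          # publish: one reference assignment

cell = RcuCell({"a": 1})
snapshot = cell.read()                 # an in-flight "reader"
cell.update("a", 2)
print(snapshot["a"], cell.read()["a"])  # 1 2
```

A real RCU implementation must also decide when old versions can be reclaimed (grace periods); this sketch simply leaves that to the garbage collector.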

2013-10-23 09:00 in S5: Predicate Abstraction in Java Pathfinder

Jakub Daniel

A presentation of an effort to support Predicate Abstraction in JPF. Our solution supports predicates over numerical variables. The main challenges that we have addressed include (1) the design of the predicate language, (2) support for arrays, (3) finding predicates affected by a given statement, (4) aliasing between variables, (5) propagating values of predicates over method call boundaries, and (6) computing weakest preconditions for complex predicates. The talk will describe our solution to these challenges and possible directions of future work.
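Challenge (6), computing weakest preconditions, can be sketched for the simplest case, an assignment statement, where wp(x := e, P) is P with e substituted for x. The string-based substitution below is only an illustration; a real implementation works on a syntax tree rather than on text.

```python
import re

def wp_assign(var, expr, predicate):
    """Weakest precondition of the assignment `var := expr` w.r.t. a
    predicate, via syntactic substitution: wp(x := e, P) = P[e/x]."""
    return re.sub(r"\b%s\b" % re.escape(var), "(%s)" % expr, predicate)

print(wp_assign("x", "x + 1", "x > 0"))  # (x + 1) > 0
```

Reading it back: `x > 0` holds after `x := x + 1` exactly when `(x + 1) > 0` held before, which is how predicate valuations are propagated backwards across statements.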

2013-10-22 14:00 in S9: GPCE'13 rehearsal talk - ShadowVM

Lukáš Marek

Dynamic analysis tools are often implemented using instrumentation, particularly on managed runtimes including the Java Virtual Machine (JVM). Performing instrumentation robustly is especially complex on such runtimes: existing frameworks offer limited coverage and poor isolation, while previous work has shown that apparently innocuous instrumentation can cause deadlocks or crashes in the observed application. This talk presents ShadowVM, a system for instrumentation-based dynamic analyses on the JVM which combines a number of techniques to greatly improve both isolation and coverage. These centre on the offload of analysis to a separate process.

Embedded systems usually implement a set of control loops to interact with their environment. These control loops are associated with quality-of-control (QoC) requirements, which are necessary to achieve a desired behavior. For example, control loops might be required not to exceed a specified settling time, or to have a given stability margin, etc. However, since most embedded systems are implemented upon limited resources, it is sometimes difficult to reliably guarantee all QoC requirements. In particular, since delay might affect control performance, the effect of scheduling on control algorithms needs to be considered. In this presentation, we discuss some techniques for controller/scheduler co-design that allow efficiently utilizing available resources and, at the same time, meeting all QoC requirements. In particular, we analyze scheduling strategies for control messages on a mixed time-/event-triggered bus such as FlexRay in the automotive domain.

2013-10-03 09:00 in S5: Software Deployment on Heterogeneous Platforms

Ivica Crnkovic (MDH, Sweden)

The recent development of heterogeneous platforms (i.e. those containing different types of computing units such as multicore CPUs, GPUs, and FPGAs) has enabled significant improvements in performance when processing large amounts of data in real time. This possibility, however, is still not fully utilized due to a lack of methods for optimal configuration of software; the allocation of different software components to different computing unit types is crucial for getting the maximal utilization of the platform, but for more complex systems it is difficult to find a good enough or optimal configuration ad hoc. This seminar will give an overview of an approach to find a feasible and locally optimal solution for allocating software components to processing units in a heterogeneous platform. This work is part of a research program, RALF3, with the objective of providing methods and tools for the optimal distribution of software, particularly software with heavy calculations and high data rates, on hardware platforms that include Field-Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs) and Central Processing Units (CPUs). Two cases are explored in the program as proofs of concept: a) an underwater robot with stereo cameras and a heterogeneous platform that includes multicore CPUs, FPGAs and GPUs, with the ability to recognize objects in real time under specific conditions with predictable and optimized performance on the heterogeneous platform, and b) a mammography device based on non-harmful microwaves with the ability to build images from detected microwave scattering with a significant decrease in computation time. A short overview of the program will also be presented.
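As a hedged sketch of what an allocation procedure might look like (the actual RALF3 methods are more sophisticated than this), the code below greedily assigns each software component to the cheapest computing unit under a capacity limit. The cost table, capacities, and component names are invented for illustration.

```python
def allocate(components, units, cost):
    """Greedy allocation: assign each component to the lowest-cost unit
    that still has capacity. `cost[c][u]` is an assumed cost table
    (e.g. estimated execution time of component c on unit type u)."""
    capacity = {u: 2 for u in units}   # assumed uniform per-unit capacity
    load = {u: 0 for u in units}
    placement = {}
    for c in components:
        feasible = [u for u in units if load[u] < capacity[u]]
        best = min(feasible, key=lambda u: cost[c][u])
        placement[c] = best
        load[best] += 1
    return placement

cost = {
    "filter":  {"CPU": 5, "GPU": 1, "FPGA": 2},   # data-parallel: cheap on GPU
    "control": {"CPU": 1, "GPU": 4, "FPGA": 3},   # branchy logic: cheap on CPU
}
print(allocate(["filter", "control"], ["CPU", "GPU", "FPGA"], cost))
# {'filter': 'GPU', 'control': 'CPU'}
```

A greedy pass like this yields a feasible starting point; local search or exact optimization can then improve it toward the locally optimal solutions the talk describes.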

September 2013

2013-09-30 09:00 in S7: Towards Model-Driven Engineering for Robotics: modelling that can also be used by the robots!

Herman Bruyninckx (KU Leuven, Belgium)

Lessons learned and tooling in Model-Driven Engineering are, slowly, being introduced in the domain of complex robotics systems. The main drivers are the impossibility of making significant progress with only hand-coded components, and the increasing expectation of code reuse. The robotics domain brings one particular extra challenge to MDE: robots are expected to themselves make use of the formal models that human developers have used to create them. So, this presentation will make the link between MDE and "cognitive" knowledge engineering.

2013-09-25 10:15 in S7: ProCom and beyond

Jan Carlson

ProCom is a component model for embedded real-time systems, developed at Mälardalen University, Sweden. It is based around a notion of rich, predictable design-time components, and relies on a synthesis mechanism to produce efficient run-time executables. In this talk, I will describe the key characteristics of ProCom, and present some of our recent and current lines of research in the area, such as timing analysis, synthesis for hierarchical scheduling, mode shift handling and GPU/multicore allocation.

Including performance tests as a part of unit testing is technically more difficult than including functional tests -- besides the usual challenges of performance measurement, specifying and testing the correctness conditions is also more complex. In earlier work, we have proposed a formalism for expressing these conditions, the Stochastic Performance Logic. In this paper, we evaluate our formalism in the context of performance unit testing of JDOM, an open source project for working with XML data. We focus on the ability to capture and test developer assumptions and on the practical behavior of the built-in hypothesis testing when the formal assumptions of the tests are not met.
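The flavor of such a performance unit test can be sketched in Python. This is a simplified stand-in for the idea, not the actual SPL formalism; `measure`, `assert_at_most_slower` and the toy workloads are invented for illustration.

```python
import statistics
import time

def measure(fn, samples=50):
    """Collect per-invocation wall-clock times for fn."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return times

def assert_at_most_slower(fn_a, fn_b, factor):
    """Simplified performance condition: the mean time of fn_a
    must not exceed `factor` times the mean time of fn_b."""
    mean_a = statistics.mean(measure(fn_a))
    mean_b = statistics.mean(measure(fn_b))
    return mean_a <= factor * mean_b

# Toy workloads standing in for two implementations under test.
def small(): sum(range(1_000))
def big(): sum(range(10_000))

print(assert_at_most_slower(small, big, 2.0))
```

SPL itself phrases such conditions as logic formulas backed by proper statistical hypothesis tests rather than a plain comparison of means; the sketch only conveys the overall shape of a performance unit test.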

The talk aims at the problem of automatic generation of a control strategy for a robotic vehicle given a complex mission specification. In particular, we focus on cases when the robot cannot accomplish the desired goal as a whole due to the unrealizability of the specification and/or the environmental constraints. Our target is to find the optimal, i.e. the least-violating control strategy while taking into consideration different priorities of different parts of the specification. We consider the robot modeled as a deterministic transition system and the mission expressed as a set of linear temporal logic formulas. We suggest a quantitative metric determining "how close" each violating trace of the robot is to a trace satisfying the specification and we propose a method that builds on the automata-based approach to model-checking to automatically find a provably optimal robotic trace with respect to this metric.

Conventional software engineering on the basis of informal or semi-formal methods is facing tremendous challenges in ensuring software quality. Formal methods have attempted to address those challenges by introducing mathematical notation and calculus to support formal specification, refinement, and verification in software development. The theoretical contributions to the discipline of software engineering made by formal methods researchers are significant. However, in spite of their potential for improving the controllability of the software process and software reliability, formal methods are generally difficult to apply to large-scale and complex systems in practice because of many constraints (e.g., capability limitations, limited expertise, complexity, changing requirements).

June 2013

2013-06-04 14:00 in S7:Computational Grid: Aiming to Provide Easy and Uniform Access to HPC Resources - the New Zealand Experience

Vladimir Mencl (University of Canterbury, New Zealand)

The computational grid aims to provide easy and uniform access to high-performance computing (HPC) resources. This talk will first provide an overview of the concepts used in the computational grid and the technologies implementing these concepts (Globus Toolkit, VOMS, GUMS, ...), as well as other e-Research services closely related to the grid (federated identity management, data grid). This talk will also reflect on the speaker's experience of working on the computational grid for 6+ years and highlight the ups and downs the grid has experienced in the New Zealand (and Australian) context.

May 2013

2013-05-29 09:00 in S5: Using May-Happen-Before Analysis to Optimize State Space Traversal with JPF

Pavel Jancik

2013-05-22 09:00 in S5: WEb VErifiCAtion for PHP - progress report

David Hauzar

2013-05-21 14:00 in S9: Application-Only Call Graph Construction

Ondrej Lhotak

A call graph is a basic prerequisite for any kind of interprocedural static program analysis. Existing precise call graph construction algorithms for Java are expensive because they require the whole program to be analyzed, including all libraries. I will report on our recent work to efficiently construct sound and precise call graphs for Java programs without analyzing their dependent libraries.
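As a rough illustration of why analyzing dependent libraries is costly, here is a toy worklist-based call graph builder over a hypothetical direct-call relation (all method names are invented for the example; real construction must also resolve virtual dispatch, which this sketch ignores):

```python
from collections import deque

# Hypothetical direct-call relation (method -> callees); all names invented.
CALLS = {
    "App.main":     ["App.run", "Lib.init"],
    "App.run":      ["App.helper"],
    "App.helper":   [],
    "Lib.init":     ["Lib.internal"],
    "Lib.internal": ["Lib.deep"],
    "Lib.deep":     [],
}

def reachable_call_graph(entry):
    """Worklist reachability: collect every call edge transitively
    reachable from the entry method."""
    edges, seen, work = [], {entry}, deque([entry])
    while work:
        caller = work.popleft()
        for callee in CALLS.get(caller, []):
            edges.append((caller, callee))
            if callee not in seen:
                seen.add(callee)
                work.append(callee)
    return edges

full = reachable_call_graph("App.main")
app_only = [(a, b) for (a, b) in full if a.startswith("App.")]
print(len(full), len(app_only))  # the app-only view omits edges internal to the library
```

Even in this tiny example the whole-program graph carries edges entirely internal to the library; in realistic programs such library-internal edges dominate the analysis cost, which motivates constructing application-only call graphs.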

2013-05-14 14:00 in S9: GC modelling - "progress" report

Peter Libic

Report on what we have been doing in the GC modelling topic: measurement accuracy issues, benchmark scalability, using less input data.

April 2013

Software systems are constantly evolving, with new versions and patches being released on a continuous basis. Unfortunately, software updates present a high risk, with many releases introducing new bugs and security vulnerabilities. We tackle this problem using a simple but effective multi-version based approach. Whenever a new update becomes available, instead of upgrading the software to the new version, we run the new version in parallel with the old one; by carefully coordinating their executions and selecting the behaviour of the more reliable version when they diverge, we create a more secure and dependable multi-version application. We implemented this technique in Mx, a system targeting Linux applications running on multi-core processors, and show that it can be applied successfully to several real applications such as Coreutils, a set of user-level UNIX applications; Lighttpd, a popular web server used by several high-traffic websites such as Wikipedia and YouTube; and Redis, an advanced key-value data structure server used by many well-known services such as GitHub and Flickr.

2013-04-16 14:00 in S9: Dynamic CBSE with Models@runtime

Noel Plouzeau (IRISA/ISTIC, France)

In this talk I will present the general approach used by the Triskell team to address adaptation of distributed component based systems: model driven engineering techniques, prototypes and ongoing research on open issues.

2013-04-10 09:00 in S5: How you should (not) do a user study & Development of research "prototypes"

Lukas Marek

In the first part, the seminar will summarize our experience with conducting a user study on programming with DiSL. The second part of the seminar will focus on several aspects of software development in a research environment. [PDF]

2013-04-09 14:00 in S9: How you should (not) do a user study & Development of research "prototypes"

Lukas Marek

In the first part, the seminar will summarize our experience with conducting a user study on programming with DiSL. The second part of the seminar will focus on several aspects of software development in a research environment. [PDF]

Distributed systems involve complex infrastructures that are characterized by a wide diversity of technologies and requirements imposed by the domains they target. They face stringent requirements and many extra-functional properties need to be dealt with. This presentation focuses on the software engineering perspective of these platforms. We present our results on the design and implementation of component-based and service-oriented platforms. We especially focus on the solutions that enable adaptation and reconfiguration of these platforms, both at design time and at run-time.

March 2013

The talk will serve as a rehearsal for the habilitation thesis presentation.

2013-03-19 14:30 in S9:Are We Ready for Computer Assisted Living?

Tomas Bures

The talk will serve as a rehearsal for the habilitation thesis presentation.

2013-03-13 09:00 in S5: DEECo Computation Model

Rima Al Ali

The talk will overview the current draft of the DEECo computational model. The aim of the seminar is to review the concepts and the semantics of the model.

2013-03-12 14:00 in S9: C# 5.0, .NET 4.5 and beyond

Pavel Jezek

2013-03-05 14:00 in S9: JMS performance modeling

Tomas Martinec

JMS frameworks are intended to make the implementation of distributed software in Java easier by providing message-based communication to developers. As the size of the distributed system grows, the hardly predictable performance aspects of the messaging layer become important. This talk will be about ongoing research on how to model and predict message throughput and latency in JMS-based software, which is being done in the scope of the Ferdinand project.

February 2013

2013-02-27 09:00 in S5: DEECo Roadmap 2013

Tomas Bures

The talk will briefly overview the recent research activities in DEECo. Further the talk will overview features planned for 2013 and connect them with upcoming activities in the ASCENS project.

2013-02-26 14:00 in S9: Renormalized Synergetics within CHAP

Alfons Salden (ALMENDE, the Netherlands)

In this talk, first of all, the mathematical-physical concepts underlying the synergetics of systems and their environment are presented. This will give insight into why and how dynamic network patterns of such systems and environments emerge through the interaction of a large number of microscopic-scale subsystems. Then renormalization techniques are considered to capture the interaction and co-evolution of a large number of multiple spatio-temporal scale subsystems, letting effective systemic-environmental properties emerge. Second, it is shown how renormalized synergetics of multi-agent systems and their environments is sustained by the open complex adaptive system development approach of CHAP, the Common Hybrid Agent Platform, continuously aiming at enabling the interoperability and adaptability of the collective of human networks, agents and other cyber-physical systems. [PDF]

2013-02-20 09:00 in S5: Usefulness evaluation of debugging tools

Tomas Martinec

We will talk about an ongoing study of how people debug computer programs, which is currently being conducted for the MFF course on operating systems. The main goal of the study is to provide a methodology for evaluating the usefulness of various debugging tools, and to perform the evaluation for some of them. We will explain why it is worthwhile to propose such a methodology. Furthermore, we can discuss secondary results of the study that may be worth publishing, or details about the study.

2013-02-12 14:00 in S9: DiSL - Rehearsal Talk

Lukáš Marek

Rehearsal talk for Malostranská IT setkání (14.2.2013). This talk will introduce DiSL (already presented 2011-12-21), a framework for instrumentation. The length of the talk is only 10 minutes. The talk will be given in Czech.

January 2013

2013-01-08 14:00 in S9: Summary of LinuxCon Europe 2012

Martin Děcký

This talk provides a summary of some of the topics discussed at LinuxCon Europe 2012, a major free and open source software conference organized by the Linux Foundation. Besides providing a generic overview of the current hot topics that move both the community and the industry around GNU/Linux, we will also cover a few topics in more detail.

December 2012

2012-12-19 09:00 in S5: TraceContract

Rima Al Ali

2012-12-18 14:00 in S9: GC Modeling: Input Sensitivity

Peter Libic

Report on progress (or lack of it) in the topic of GC modeling. Experiments to determine simulation algorithm stability on various input parameters.

An introduction to the main concepts of goal-oriented requirements engineering is given by inspecting the KAOS model and method. We will attempt to compare KAOS to DEECo predicate-based design method. [PDF]

2012-12-11 14:00 in S9: DAG interpolant

Pavel Jančík

2012-12-04 14:00 in S9: Shadow VM

Lukáš Marek

This talk presents ShadowVM, a system for dynamic analysis of programs running within the Java Virtual Machine (JVM) which advances on prior work by simultaneously combining strong isolation and high coverage with lower run-time overhead than comparable systems. Analyses execute asynchronously with respect to the observed program, allowing parallelism to mitigate isolation-induced slowdowns.

2012-12-03 10:40 in S1:Interpolant Strength and Proof Reduction in OpenSMT

Simone Fulvio Rollini, Natasha Sharygina (University of Lugano)

Craig interpolation is a well known method of abstraction successfully used in both hardware and software model checking. The logical strength of interpolants can affect the quality of approximations and the performance of the model checkers. Recently, it was observed that for the same resolution proof a complete lattice of interpolants ordered by strength can be derived. Most state-of-the-art model checking techniques based on interpolation subject the interpolants to verification-specific constraints which in general are not satisfied by all possible interpolants. We analyze the restrictions within the lattice of interpolants under which the required constraints are satisfied; this enables investigation of the effect of the strength of interpolants on the particular techniques, while preserving their soundness. Interpolants are usually generated from proofs of unsatisfiability in the propositional resolution system, and the size of the interpolants critically depends on the size of the proofs. We present a post-processing method for proof reduction, based on the elimination of redundancies of occurrences of pivots along the proof paths, by means of matching and rewriting substructures into smaller ones; we show how proof reduction concretely affects the generation of interpolants. The talk will include a discussion of interpolation and proof reduction within the OpenSMT solver, and a short demo will be given.
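For readers unfamiliar with the notion, a standard textbook example (not taken from the talk): given an unsatisfiable conjunction A ∧ B, a Craig interpolant is a formula I over the variables shared by A and B such that A implies I and I ∧ B is unsatisfiable.

```latex
\begin{aligned}
&A = p \land q, \qquad B = \lnot p \land r, \qquad A \land B \models \bot,\\
&I = p: \quad A \models I, \qquad I \land B \models \bot, \qquad
 \operatorname{vars}(I) \subseteq \operatorname{vars}(A) \cap \operatorname{vars}(B) = \{p\}.
\end{aligned}
```

Strength is the implication order: $I_1$ is stronger than $I_2$ when $I_1 \models I_2$; the lattice mentioned in the abstract orders the interpolants derivable from one resolution proof by this relation.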

November 2012

2012-11-28 09:00 in S5: Compact Symbolic Execution

Marek Trtík (FI MUNI)

We present a generalisation of King's symbolic execution technique called compact symbolic execution. It is based on a concept of templates: a template is a declarative parametric description of a program part that generates paths in the symbolic execution tree with regularities in the program states along them. Typical sources of these paths are program loops and recursive calls. Using the templates, we fold the corresponding paths into single vertices and therefore considerably reduce the size of the tree without loss of any information. There are even programs for which compact symbolic execution trees are finite even though the classic symbolic execution trees are infinite.
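The path explosion that templates fold away can be illustrated with a minimal sketch (not the paper's algorithm): a loop whose body branches on a fresh symbolic condition each iteration yields exponentially many leaves in the classic symbolic execution tree.

```python
from itertools import product

def symbolic_paths(n):
    """Each of n loop iterations branches on a fresh symbolic condition;
    classic symbolic execution explores every outcome combination as a
    separate path."""
    return list(product((True, False), repeat=n))

leaves = symbolic_paths(10)
print(len(leaves))  # 2**10 = 1024 leaves in the classic tree
```

A template, by contrast, summarizes the loop with one parametric vertex, folding all of these leaves into a single node.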

2012-11-21 09:00 in S5: Predator

We will present the current state of SimCo -- a tool developed for simulation-based testing of component applications. The use of scenarios in SimCo will be shown, as well as the possibilities of measurements that can be performed by this tool. The last topic will be the automation of generating mockup components, which is in progress right now.

2012-11-14 09:00 in S5: Weverca

David Hauzar

The talk will present our recent results in the Weverca project. In particular, we will mention the memory model and open issues in future work.

2012-11-13 14:00 in S9: DEECo and ASCENS e-mobility case-study

Michal Kit

This seminar will present the implementation of the ASCENS e-mobility case-study in the jDEECo framework and the lessons learned along the way.

An overview of the envisioned high-level design of DEECo-based applications will be given. The approach is based on capturing the system requirements by means of predicates over stakeholders' knowledge and refining them to the level of component communication and computation semantics. The ASCENS e-mobility case-study will be used as a running example.

October 2012

2012-10-31 09:00 in S5: NP-complete services

Jaroslav Tulach (Oracle)

Modularity is an essential part of modern computing systems. Various Linux distributions are based on RPM or dpkg. Many frameworks come up with their own packaging schemes (Ruby, Scala, etc.). Modularity and dependencies are a hot industry topic. Too bad their current realizations include inherent NP-complete problems. There is a proposal to include modularity in the Java language. As part of trying to make the new Java modularity sound, this talk is going to analyse the reasons why resolving dependencies leads to NP-completeness. It will come up with advice on how to avoid it. It will show the limitations of a non-NP dependency system. It will show the need for a more flexible one. It will show why such enhancements again lead to NP-completeness. It will state the conditions under which the NP-completeness can be eliminated. Then real-world examples demonstrating why such a system is insufficient for production use will be brought up. After struggling through all this pain, a system of modular dependencies satisfying most of the business requirements and avoiding NP-completeness will be presented.
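The combinatorial core of the problem can be sketched as follows. The toy repository and brute-force resolver are invented for illustration; production package managers typically encode version selection as a SAT or optimization problem rather than enumerating combinations.

```python
from itertools import product

# Hypothetical repository: package -> {version -> [(dependency, allowed versions)]}.
REPO = {
    "app":  {1: [("lib", {1, 2}), ("ui", {2})]},
    "lib":  {1: [("core", {1})], 2: [("core", {2})]},
    "ui":   {2: [("core", {1})]},
    "core": {1: [], 2: []},
}

def resolve(root, root_version):
    """Exhaustively try one version per package; this brute-force search
    over all combinations is the exponential blow-up behind the
    NP-completeness of general dependency resolution."""
    names = list(REPO)
    for choice in product(*(REPO[n] for n in names)):
        picked = dict(zip(names, choice))
        if picked[root] != root_version:
            continue
        if all(picked[dep] in allowed
               for name, ver in picked.items()
               for dep, allowed in REPO[name][ver]):
            return picked
    return None

print(resolve("app", 1))  # lib 2 would force core 2, which ui forbids
```

Even this four-package example already exhibits the characteristic conflict: a locally valid choice (lib 2) is globally infeasible, so the resolver must backtrack over combinations.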

2012-10-30 14:00 in S9: HelenOS: State of the Union 2012

Martin Děcký

Summary of latest achievements and improvements in HelenOS including the most recent 0.5.0 release, Google Summer of Code projects and defended master theses. We should also discuss the draft of the upcoming HelenOS-related PhD thesis. [PDF]

2012-10-24 09:00 in S5: OSRAc Project Report

Michal Malohlava

The OSRAc project focuses on the consolidation of an on-board software reference architecture for space vehicles. The talk is going to explain the architecture, the corresponding Space Component Model, and our role in the context of the project.

The seminar will be focused on my ongoing research in the area of requirements engineering. I will describe the idea of deriving a domain model from a plain-text specification using a statistical approach (supervised learning which utilizes the Maximum Entropy models for classification). I will also present the design of an experiment which evaluates the reliability of this technique. [PDF]

The seminar reports about our effort on a formal model of property networks allowing for efficient capture of modifications of architecture-relevant information and shows, how this model can be used to employ the concept of modes for system architectures in hierarchical component systems. [PDF]

The MechatronicUML method enables the model-driven design of discrete software of self-adaptive mechatronic systems. The key concepts of MechatronicUML are a component-based system model which enables scalable compositional verification of safety-properties, the model-driven specification and verification of reconfiguration operations, and the integration of the discrete software with the controllers of the mechatronic system. Therefore, MechatronicUML provides a set of domain specific visual languages (DSVL) as well as a defined development process. In MechatronicUML, system elements are specified by components that can be composed hierarchically. Self-adaptive behavior is specified by reconfiguration operations that change the embedded component structure of a hierarchical component. In this presentation, I will focus on how we enable reconfiguration operations that require changes on several levels of hierarchy in a hierarchical component. In addition, I explain how we map reconfigurable components to MATLAB/Simulink which is an industry standard tool for the development of software for mechatronic systems.

2012-10-09 14:00 in S9: An Overview on CRC 901 - On-The-Fly Computing

Steffen Becker (University of Paderborn)

The CRC 901 On-The-Fly (OTF) Computing addresses the future of global service markets. We assume a world with a large variety of compute centre providers, service providers which host their components on the compute centre's hardware and OTF service providers which use their domain knowledge to offer (automated) composition services for their domain to end-users. In my talk, I will give an overview on CRC 901 and highlight its research challenges. In particular, I will focus on the software engineering research in the CRC. This research deals with service specifications in a heterogeneous world, automated service composition, analysis of compositions for functional and non-functional correctness as well as techniques for certifying correct implementations of service specifications. Finally, I will detail a modelling and analysis approach for the performance of services in dynamic environments.

2012-10-03 09:00 in S5: Composability Testing for Plux Components

Markus Löberbauer (Johannes Kepler University, Linz)

Plux is a plugin platform for extensible and customizable programs. It supports dynamic composition of software, which allows developers to build applications in which users can load and integrate just those components which they need for their current work. With dynamic composition users can also reconfigure an application on-the-fly by dynamically swapping components. Component provision is the process that connects host components with their requested contributor components. In this talk, we present a classification of component provision characteristics, define composition mechanisms by their component provision characteristics, and show typical composability faults for each composition mechanism. We also present the test method Act and the test tool Actor that tests the composability of Plux components. Furthermore, we present the debugging method Doc and the debugger Doctor that locates the cause of composability errors.

2012-10-02 14:00 in S9: Plux - A Dynamic Plugin Platform

Reinhard Wolfinger, Markus Jahn (Johannes Kepler University, Linz)

Plux is a plugin platform for extensible and customizable programs. It supports dynamic composition of software, which allows developers to build applications in which users can load and integrate just those components which they need for their current work. With dynamic composition users can also reconfigure an application on-the-fly by dynamically swapping components. In this talk, we present the Plux composition model, the Plux composition infrastructure, and an application for recording working hours that we have built with Plux.

September 2012

Modern multicore processors share cache and bandwidth across all cores, leading to performance degradation for cache- or bandwidth-sensitive applications. To understand this impact, we need to know the application's sensitivity to its shared resource allocation. However, this is quite difficult to model due to the complexities of the hardware, including out-of-order execution, hardware prefetching, memory system queues, and the details of cache replacement policies. To overcome these difficulties we have developed techniques that allow us to "steal" shared resources from an application while measuring its performance, thereby precisely capturing the application's shared resource sensitivity, while including all the effects of the real hardware. This data can then be used to accurately predict the impact of resource sharing across multiple applications. This talk will cover these technologies and how they work together to enable accurate shared resource modeling.

2012-09-24 15:00 in S4:Predictability and Evolution in Resilient Systems

Ivica Crnkovic (Mälardalen University, Sweden)

This seminar gives an overview of the challenges in the software development of resilient systems. The challenges stem from the nature of resilience itself: it is an emerging system lifecycle property, neither directly measurable nor computable. While software is an essential part of a system, its analysis alone is not enough for determining the system's resilience. The talk will discuss system resilience reasoning, its limitations, and possible approaches in software design that include resilience analysis. [PDF]

2012-09-18 14:00 in S4:Variability of Execution Environments for Component-based Systems

July 2012

2012-07-11 09:45 in S4:Are we ready for computer assisted living? (seminar will be held in Czech language)

2012-07-11 09:00 in S4:Supporting dynamicity and variability in component-based systems (seminar will be held in Czech language)

Petr Hnetynka

June 2012

2012-06-26 15:45 in S4:Software Change in the Solo Iterative Process: An Experience Report

Václav Rajlich (Wayne State University)

This lecture reports the experience of a solo programmer who added a new feature to the open source program muCommander. The process is observed at two granularities: the granularity of a software change (SC) and the granularity of the Solo Iterative Process (SIP). The experience confirms that both the SC and SIP process models can be successfully enacted, are able to implement the described feature, and produce high-quality code in reasonable time. The lessons learned, particularly the exit criteria for SC phases, are discussed in more detail and may be applicable to team iterative processes, including agile processes.

Our SLAstic approach aims to increase the resource efficiency of distributed, component-based software systems employing architecture-based runtime reconfiguration. Relevant aspects about the software system, including its composition, deployment, and quality-of-service (QoS) objectives are specified in architectural models. At runtime, these models are updated based on continuous monitoring and used for online QoS evaluation in order to plan and execute appropriate reconfiguration plans. In this talk, I would like to provide an overview of the approach and the tool infrastructure for monitoring and online adaptation, which has been developed for the experimental evaluation.

2012-06-05 14:00 in S9: Dynamic Program Analysis within the Observed Process - A Misdirection?

Danilo Ansaloni (University of Lugano)

Dynamic program analysis tools support numerous software engineering tasks, including profiling, debugging, and reverse engineering. In Java, these tools are usually based on low-level bytecode instrumentation; that is, some monitoring code is inserted at interesting points of the observed program's bytecode. At execution time, the monitoring code may inspect the runtime state of the program and perform the analysis, usually within the same Java Virtual Machine process that is being observed. While this approach simplifies the specification and the deployment of the analysis, it also introduces serious problems, such as infinite regression, performance degradation, and measurement perturbation. In this talk, we elaborate on the subtleties of performing analysis tasks within the observed address space and present new techniques that help tackle the aforementioned issues.

2012-06-04 10:00 in S11:Introduction to the Descartes Meta-Model (DMM)

Samuel Kounev (University of Karlsruhe)

Modern service-oriented enterprise systems have increasingly complex and dynamic loosely-coupled architectures that often exhibit suboptimal performance and resource efficiency and have high operating costs. This is due to the inability to predict at run-time the effect of dynamic changes in the system environment and adapt the system configuration accordingly. Architecture-level performance models provide a powerful tool for performance prediction, however, current approaches to modeling the execution context of software components are not suitable for use at run-time. In this talk, we analyze the typical online performance prediction scenarios and present a novel meta-model, the Descartes Meta-Model, for expressing and resolving parameter and context dependencies, specifically designed for use in online scenarios. We motivate and validate our approach in the context of a realistic and representative online performance prediction scenario based on the SPECjEnterprise2010 standard benchmark. The Descartes Meta-Model (DMM) is a new architecture-level modeling language for modeling quality-of-service and resource management related aspects of modern dynamic IT systems, infrastructures and services. DMM is designed to serve as a basis for self-aware resource management during operation, ensuring that system quality-of-service requirements are continuously satisfied while infrastructure resources are utilized as efficiently as possible. The term Quality-of-Service (QoS) is generally used to refer to non-functional system properties including performance (considering response time, throughput, scalability and efficiency) and dependability (considering, in addition to performance: availability, reliability and security aspects). The current version of DMM is focused on performance and availability including capacity, responsiveness, and resource/energy efficiency aspects, however, work is underway to provide support for modeling further QoS properties. 
The meta-model itself is designed in a generic fashion and is intended to eventually support the full spectrum of QoS properties mentioned above. [PDF]

May 2012

2012-05-23 09:00 in S5: SPL Progress Report

2012-05-22 14:00 in S9: Volatile Variables and Memory Barriers

Andrej Podzimek

Andrej's previous talk presented a concurrent AVL tree algorithm developed at Stanford University. This algorithm has the potential to improve the scalability of current operating system kernels. However, the original implementation is written in Java and strongly bound to the Java memory model, namely to the Java-specific meaning of „volatile“. The current Oracle JVM separates accesses to „volatile“ variables by memory barriers. Presumably, not all the barriers are necessary. Frequent memory barriers can lead to high memory subsystem overhead, growing with the SMP system size and defeating the purpose of parallel non-blocking algorithms. This talk will focus on open questions related to the automated placement of memory barriers and (hopefully) initiate a discussion on how current model-checking techniques could help. [PDF]

2012-05-09 09:00 in S5: Problems and challenges in performance analysis in large scale enterprise systems

Dawid Nowak (UCD, Ireland)

With the increasing adoption of various cloud offerings, companies are finding themselves in a situation where it is getting harder and harder to test their applications. As the number of possible components grows, it becomes harder and harder to generate enough load in the system to ensure that the application is working sufficiently well under stress. Even when enough load is generated, the analysis of the output leaves much scope for improvement, as the tools available on the market offer very limited functionality when it comes to analysis and correlation of data points from different sources such as network monitoring tools, application logs, heap dumps, and system performance monitors. Sophisticated algorithms are needed to automatically understand and correlate key points and automatically identify bottlenecks and other patterns of invalid behaviour. In this presentation I would like to highlight some of these problems from the perspective of a big verification lab.

Which GC algorithm is better? When developers care for performance they usually ignore reference counting as an option that is too slow. The talk will present a recent paper showing that a well tuned and optimized reference counting collector can compete in performance with a modern tracing collector. [PDF]
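The basic mechanics of reference counting, immediate reclamation at the cost of per-reference bookkeeping, can be sketched minimally (a toy model invented for illustration, not the paper's optimized collector):

```python
class Obj:
    """Toy reference-counted object. Plain counting reclaims garbage
    immediately but cannot collect cycles; optimized collectors add
    techniques such as deferred counting or backup tracing."""
    freed = []

    def __init__(self, name):
        self.name, self.rc = name, 0

    def retain(self):
        self.rc += 1

    def release(self):
        self.rc -= 1
        if self.rc == 0:
            Obj.freed.append(self.name)  # reclaimed at once, no GC pause

a, b = Obj("a"), Obj("b")
a.retain(); b.retain()  # one root reference each
b.retain()              # a second reference to b
a.release()             # count hits zero: "a" is freed immediately
b.release(); b.release()
print(Obj.freed)  # ['a', 'b']
```

The appeal is the short, predictable pauses visible here: each object is reclaimed the moment its count drops to zero, which is why a well-tuned counting collector can be competitive with tracing.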

April 2012

The growing performance of hand-held device hardware prepares the ground for a growing market of mobile applications. The advantage of running applications anytime, anywhere has one significant drawback - it increases battery consumption. As Internet connectivity has become available nearly anywhere, an attractive technique to overcome this obstacle is the remote execution of processor-intensive parts of an application when a network connection is available...

2012-04-24 14:00 in S9: FOAM : A Lightweight Method for Verification of Use-Cases

Viliam Simko

Textual use-cases have been traditionally used at the design stage of development process for describing software functionality from the user's view. Their advantage is that they can be easily understood by stakeholders and domain experts. However, since use-cases typically rely on a natural language, they cannot be directly subject to a formal verification. We present FOAM - a method featuring simple user-definable annotations, which are inserted into a use-case to make its semantics more suitable for verification. Subsequently a model-checking tool verifies temporal invariants associated with the annotations. This way, FOAM allows for harnessing the benefits of model-checking while still keeping the use-cases understandable for non-experts. [PDF]

2012-04-18 09:00 in S5: On the Accuracy of Cache Sharing Models

Vlastimil Babka

The seminar is based on the paper to be presented at ICPE 2012, put into the context of the previous talks about our Composable Cache Model and its evaluation. We investigate the problem of obtaining accurate stack profiles that serve as inputs to the model and thus greatly affect its accuracy. The problem appears to be related to the LRU approximation used in the processor. We show how two of the previously used approaches for obtaining stack profiles can be combined using a specially developed algorithm that performs memory accesses according to a given profile. The resulting profiles significantly improve the model accuracy. [PDF]

2012-04-17 14:00 in S9: Main research directions of ASCENS

Jaroslav Keznikl

Summary of the topics presented on the ASCENS meeting in Florence.

2012-04-11 09:00 in S5: Report on KnowLang

Ilias Gerostathopoulos

The aim of this talk is to present the advances of Lero - the Irish Research Center - in specifying a formal language for knowledge representation in autonomic service-component ensembles, termed "KnowLang". The requirements, design choices and the proposed specification model are discussed. The use of KnowLang in the "Robots Ensemble" use case is also briefly examined. [PDF]

March 2012

2012-03-21 09:00 in S5: Connector as a Service

Jaroslav Král

Large information systems (IS) as a rule support business and are involved in social processes. We show that this implies that the communication of the software components forming the IS must take human aspects into account. The communication must be based on messages well understood by users. The messages should be coarse-grained, declarative, and based on the users' knowledge domain. Many components are integrated as black boxes; examples are legacy systems or the (information) systems of business partners. Their interfaces are as a rule fine-grained and programmer-oriented (they use RPC). We often have a limited possibility, if any, to change them, as usually no source code is available. An effective solution in SOA is to design the interface as a two-layer structure where the upper layer is implemented as a service called a front-end gate (FEG). We show that an FEG can easily address many business needs (agile business processes, roles in business suites, development of business intelligence) and has many technical advantages (easy prototyping, agile development). An FEG is in fact a connector; SOA as commonly practised rarely uses the concept of a connector explicitly. Most importantly, an FEG can easily be modified into the head of a composition service (HCS), enabling unlimited composition of services. An HCS can also be modified so that it serves as a generalized router. Together, FEGs and HCSes are architectural services. The constructed SOA is a layered structure formed by application services and a network of architectural services (ArS). It remains open how further concepts, results, and models of component orientation can be applied in such an SOA. The author has used variants of ArS in several successful projects, including soft real-time ones.

2012-03-20 14:00 in S9: MINIX 3: Building a Dependable Operating System

Ben Gras, Arun Thomas (Vrije Universiteit, Netherlands)

MINIX 3 is an open-source multiserver operating system designed for high reliability. Unlike traditional operating systems, where flaws in device drivers and system components can crash the entire operating system, MINIX 3 can transparently recover from many such failures. In this talk, we will describe our ongoing research into OS dependability. Additionally, we will outline our efforts to turn MINIX 3 into a mainstream embedded operating system and provide a (rough) roadmap for the future.

2012-03-06 14:00 in S9: ASCENS brainstorming

February 2012

2012-02-29 11:00 in S5: Platform Dependent Verification

Jiri Barnat (FI MUNI)

With the increase in complexity and degree of parallelism of computer systems, it has become critical to develop automated formal verification methods for ensuring their quality. Unfortunately, those methods are computationally demanding and memory-intensive in general; hence, their applicability to the large and complex systems routinely seen in practice these days is limited. To verify large systems, no option was left but to employ the combined computing power of multiple computing devices. In this talk we will exemplify the steps that had to be taken to build a high-performance, scalable tool for parallel and distributed LTL model checking. [PDF]

2012-02-28 14:00 in S9: ASCENS brainstorming

2012-02-22 09:00 in S5: DEECo and ASCENS

D3S Component Group

The talk will overview the basic concepts of DEECo, a component model currently being developed for ubiquitous components with emergent connections. Subsequently, the seminar will take the form of brainstorming, the aim of which is to concretize the D3S tasks within the ASCENS project. [PDF]

2012-02-21 14:00 in S9: HAVEN: An Open Framework for FPGA-Accelerated Functional Verification

Marcela Šimková (VUT Brno)

Functional verification is a widespread technique to check whether a hardware system satisfies a given correctness specification. As the complexity of modern hardware systems rises rapidly, it is a challenging task to find appropriate techniques for acceleration of this process. The presentation is aimed at HAVEN, a freely available open functional verification framework that exploits the field-programmable gate array (FPGA) technology for cycle-accurate acceleration of simulation-based verification runs. HAVEN takes advantage of the inherent parallelism of hardware systems and moves the verified system together with transaction-based interface components of the functional verification environment from software into an FPGA. The presented framework is written in SystemVerilog and complies with the principles of functional verification methodologies (OVM, UVM), assertion-based verification, and also provides adequate debugging visibility, making its application range quite large. Performed experiments confirm the assumption that the achieved acceleration is proportional to the complexity of the verified system, with the peak acceleration ratio being over 1,000. [PDF]

2012-02-02 14:00 in a lecture room announced later: Introduction to Genode

Norman Feske (Genode Labs, Dresden)

Today's operating systems try to find a balance between seemingly conflicting goals. Ease of use is traded against security, resource utilization is traded against resource accountability, and system complexity is traded against scalability. For example, SELinux is ill famed as hard to use and consequently remains widely unused. As another example, isolation kernels minimize the complexity of critical system software but at the cost of limiting these solutions to static applications.

The Genode OS architecture shows how these apparently inherent conflicts can be solved by operating-system design. By combining a recursive system structure with capability-based security, mandatory access control becomes easy to deploy. At the same time, the trusted computing base can be minimized for each application individually such that the attack surface for security-critical system functions gets reduced by orders of magnitude compared to existing approaches. Furthermore, a concept for trading physical resources among processes allows for dynamic workloads while maintaining quality of service. That is not just theory - the system is ready for demonstration.

Norman Feske will briefly present the roots and mission of Genode Labs, the company founded to support and drive the Genode OS technology. The main part of the talk will be focused on the OS architecture, give a glimpse at the implementation via live demonstrations, and outline the future road map. [PDF]

January 2012

2012-01-18 10:00 in S9: Overview of the e-Mobility case study in the EU-Project ASCENS

Henry Bensler (Corporate Research, Volkswagen, Germany)

The talk will give a short overview of Volkswagen Group Research and of VW's involvement in the ASCENS project. In particular, it will present: the motivation and objectives of the e-mobility case study, the design approach and scenario description, model-driven requirements, a description of service component types, levels of service component ensembles, awareness requirements, and the expected results of the project.

2012-01-10 14:00 in S9: System verification and integration in Airbus

Louis Hache

After a brief introduction to Airbus's history, organization and way of working, the aim of this talk is to present a tool used for system mixability checking. Additionally, the talk will discuss the usefulness of task automation as well as the problems developers may encounter while working in a company of Airbus's size.

2012-01-04 09:00 in S5: Industry experience

Tomáš Poch

Introduction to GoodData.

2012-01-03 14:00 in S9: Making SPL practical via Java annotations

Vojtěch Horký

Past seminars introduced the idea of capturing and measuring relative method performance using Stochastic Performance Logic (SPL). This talk will focus on the practical aspects of this technique, namely annotating Java code with "performance assertions". Progress in the implementation of a simple performance-testing framework as well as its usability with real-world projects will be reported. [PDF]

December 2011

Many dynamic analysis tools for programs written in managed languages such as Java rely on bytecode instrumentation. Tool development is often tedious because of the use of low-level bytecode manipulation libraries. While aspect-oriented programming (AOP) offers high-level abstractions to concisely express certain dynamic analyses, the join point model of mainstream AOP languages such as AspectJ is not well suited for many analysis tasks, and the code generated by weavers in support of certain language features incurs high overhead. DiSL (domain-specific language for instrumentation) is a new language designed especially for dynamic program analysis. DiSL offers an open join point model where any region of bytecodes can be a shadow, synthetic local variables for efficient data passing, efficient access to comprehensive static and dynamic context information, and weave-time execution of user-defined static analysis code. [PDF]

2011-12-20 14:00 in S9: Garbage Collection: What Do We Need for Simulation?

Peter Libič

A report on progress (or lack thereof) in the topic of garbage collection modelling. An overview of problems and my attempts at solutions. Discussion of possible usage scenarios. [PDF]

2011-12-14 09:00 in S5: JPF-Inspector

2011-12-13 14:00 in S9: GPU Accelerated Explicit-State Model Checking

Jiří Barnat (FI MUNI)

The aim of the talk is to present and summarize recent results in GPU acceleration of formal verification procedures, explicit-state model checking in particular. The talk will briefly recapitulate the explicit-state model checking workflow and will identify which steps can and which cannot be accelerated. The talk will focus on the adaptation of parallel model checking algorithms and on the usage of multiple GPU cards for handling a single verification task.

2011-12-07 09:00 in S5: AGLOBE - Lessons learned

Michal Kit

The talk will present preliminary experience with the AGLOBE middleware. In addition to an overview of the API, code samples will also be presented. [PDF]

2011-12-06 14:00 in S9: PhD thesis progress report

November 2011

2011-11-30 09:00 in S5: Moving from Specifications to Contracts in Component-based Design

Sebastian Bauer (LMU, Germany)

My talk is about recent results of our ongoing effort to develop a unified framework for compositional specification of component-based systems. In particular, I will focus on the relation between specifications of component behaviors and contracts providing means to specify assumptions on environments as well as component guarantees. A contract framework can be built in a generic way on top of any specification theory which supports composition and specification refinement. Our contract framework lifts refinement to the level of contracts and proposes a notion of contract composition on the basis of dominating contracts. Contract composition satisfies a universal property and can be constructively defined if the underlying specification theory is complete, i.e. if it offers operators for quotienting and conjoining specifications. I will illustrate the generic construction of contracts by moving a specification theory for modal transition systems to contracts and we show that a (previously proposed) trace-based contract theory is an instance of our framework.

AGLOBE is an agent platform designed for testing experimental scenarios featuring agent position and communication inaccessibility, but it can also be used without these extended functions. The platform provides functions for the residing agents, such as communication infrastructure, store, directory services, migration function, deploy service, etc. Communication in AGLOBE is very fast and the platform is relatively lightweight. AGLOBE is suitable for real-world simulations including both static (e.g. towns, ports, etc.) and mobile units (e.g. vehicles). AgentFly is a multi-agent system enabling large-scale simulation of civilian and unmanned air traffic. The system integrates advanced flight path planning and decentralized collision avoidance with highly detailed models of the airplanes and the environment. All aerial vehicles in AgentFly are modeled by autonomous software agents. Each vehicle/agent is responsible for its own flight operation. The high-level mission of each vehicle is specified by an arbitrary number of waypoints. The operation is tentatively planned before take-off without consideration of possible collisions with other flying objects. During the flight, the agents detect mutual future conflicts in their flight plans and engage in peer-to-peer negotiation aimed at sophisticated re-planning in order to avoid the conflicts and maintain collision-free trajectories.

During model checking of software against various specifications, it is often the case that the same parts of the program have to be modeled/verified multiple times. To reduce the overall verification effort, this paper proposes a new technique that extracts function summaries after the initial successful verification run, and then uses them for more efficient subsequent analysis of the other specifications. Function summaries are computed as over-approximations using Craig interpolation, a mechanism well known to preserve the most relevant information; they thus tend to be a good substitute for the functions that were examined in the previous verification runs. In our summarization-based verification approach, the spurious behaviors introduced as a side effect of the over-approximation are ruled out automatically by means of counter-example guided refinement of the function summaries. We implemented interpolation-based summarization in our FunFrog tool and compared it with several state-of-the-art software model checking tools. Our experiments demonstrate the feasibility of the new technique and confirm its advantages on large programs. [PDF]

2011-11-22 14:00 in S9: SCEL: Service Component Ensemble Language

Jaroslav Keznikl

The draft of the Service Component Ensemble Language as it is being prepared in ASCENS will be presented and feedback on the applicability will be gathered. [PDF]

This reading seminar will describe a new concurrent binary search tree algorithm based on AVL trees, developed and benchmarked at Stanford University by Nathan G. Bronson et al. (http://github.com/nbronson/snaptree). Since the algorithm has been implemented in the Java environment (depending on garbage collection and specific features of the Java memory model), making it applicable in operating system kernels would require a more general variant based around the RCU mechanism. Combining our RCU mechanism for UTS (the Solaris kernel) with the presented algorithm by Bronson et al. might improve concurrency of UTS kernel algorithms that use AVL trees. [Link]
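The core trick that makes such trees scale - optimistic, version-validated reads - can be sketched on a single node (a toy Python illustration rather than the Java original; the field and method names are invented):

```python
# Sketch of the optimistic, version-validated read underlying Bronson et
# al.'s concurrent AVL tree: a reader records a node's version, reads the
# field, and retries if a concurrent writer changed the version in between.
# (Single-node toy, not the full tree or the RCU variant discussed above.)

import threading

class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.version = 0                 # even = stable; writers bump it
        self.lock = threading.Lock()

    def write(self, value):
        with self.lock:
            self.version += 1            # odd: node is changing
            self.value = value
            self.version += 1            # even again: stable

def optimistic_read(node):
    while True:
        v = node.version
        value = node.value               # unprotected read
        if v % 2 == 0 and node.version == v:
            return value                 # no writer interfered: read is valid

n = Node("k", 1)
n.write(2)
print(optimistic_read(n))                # → 2
```

An RCU-based kernel variant would replace the version check with read-side critical sections and deferred reclamation, which is exactly the adaptation the seminar proposes.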

2011-11-15 14:00 in S9: Overview of previous work

2011-11-02 09:00 in S5: The ASCENS Science Cloud

Stephan Reiter (LMU, Germany)

Cloud computing is a model that enables access to data and services from all kinds of devices over the Internet. Inherent in the continually growing use of products such as groupware, cloud storage and infrastructure as a service, is a trend towards centralization of computing resources, yielding potentially widespread disruption in case of failures at a provider. Just recently, Research in Motion had to acknowledge that a defect in its internal network resulted in an outage of the Blackberry e-mail service, affecting millions of customers around the globe. In the ASCENS project we focus on building systems on top of autonomous components. As part of a case study we pursue the implementation of cloud computing on top of a peer-to-peer network, utilizing the resources of connected computers in a decentralized manner. Availability of services, smart scheduling of tasks and easy participation are in the focus of our research. In this talk we will discuss these and other goals for our Science Cloud, present use cases and give an introduction to the technology that we will use to implement the cloud.

2011-11-01 14:00 in S9: EPEW 2011: Overview of selected papers

Vlastimil Babka

The talk will summarize two selected papers from the recent EPEW 2011 workshop, related to I/O and memory allocation profiling, and also present our own paper from the workshop - Can Linear Approximation Improve Performance Prediction ? [PDF]

October 2011

Compared to functional unit testing, automated performance testing is difficult, partially because correctness criteria are more difficult to express for performance than for functionality. Where existing approaches rely on absolute bounds on the execution time, we aim to express assertions on code performance in relative, hardware-independent terms. To this end, we introduce Stochastic Performance Logic (SPL), which allows making statements about relative method performance. Since SPL interpretation is based on statistical tests applied to performance measurements, it allows (for a special class of formulas) calculating the minimum probability at which a particular SPL formula holds. We prove basic properties of the logic and present an algorithm for SAT-solver-guided evaluation of SPL formulas, which allows optimizing the number of performance measurements that need to be made. Finally, we propose integration of SPL formulas with Java code using higher-level performance annotations, for performance testing and documentation purposes. [PDF]
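To make the idea concrete, here is a rough sketch of evaluating one SPL-style assertion - "foo is at most twice as slow as bar" - over measured samples (the sample data is invented, and the decision rule is a simplification of the logic's actual statistical semantics):

```python
# Hypothetical evaluation of a relative performance assertion via a
# Welch-style t statistic on measured samples. Fixed data stands in for
# real measurements; SPL's actual interpretation is more refined.

import math

def welch_t(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    vx = sum((x - mx) ** 2 for x in xs) / (len(xs) - 1)
    vy = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
    return (mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

def spl_holds(foo_times, bar_times, factor, threshold=2.0):
    # H0: mean(foo) <= factor * mean(bar); reject only on strong evidence.
    scaled = [factor * t for t in bar_times]
    return welch_t(foo_times, scaled) < threshold

foo = [10.2, 10.5, 9.9, 10.1, 10.4]      # ms, hypothetical measurements
bar = [ 6.0,  6.2,  5.9,  6.1,  6.0]
print(spl_holds(foo, bar, factor=2.0))   # → True (foo well under 2x bar)
```

Because the bound is relative rather than an absolute time limit, the same assertion can hold across machines of different speeds, which is the hardware-independence the abstract emphasizes.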


2011-10-18 14:00 in S9: Overview of previous work

Iris Breddin (RELATE Candidate)

In this talk, Iris Breddin will describe her work on supervised learning in AI with focus on mathematical algorithmic problems and the design of a wireless network protocol that uses minimum power and storage.

2011-10-12 09:00 in S5: Computer Memory: Why We Should Care What Is Under The Hood

Vlastimil Babka, Petr Tůma

The memory subsystems of contemporary computer architectures are increasingly complex - in fact, so much so that it becomes difficult to estimate the performance impact of many coding constructs, and some long-known coding patterns are even discovered to be principally wrong. In contrast, many researchers still reason about algorithmic complexity in simple terms, where memory operations are sequential and of equal cost. The goal of this talk is to give an overview of some memory subsystem features that violate this assumption significantly, with the ambition to motivate the development of algorithms tailored to contemporary computer architectures.

This seminar summarizes our paper presented at the FACS 2011 symposium. We have presented a semi-automated method that helps iteratively write use-cases in natural language and verify the consistency of behavior encoded within them. In particular, this is beneficial when the use-cases are created simultaneously by multiple developers. The proposed method allows verifying the consistency of a textual use-case specification by employing annotations in use-case steps that are transformed into LTL formulae and verified within a formal behavior model. A supporting tool for plain-English use-case analysis is currently being enhanced by integrating the verification algorithm proposed in the paper. [PDF]

2011-10-04 14:00 in S9: Hunting Bugs Inside Web Applications

David Hauzar

The state-of-the-art tools for bug discovery in languages used for web application development, such as PHP, suffer from a relatively high false-positive rate and low coverage of real errors; this is caused mainly by imprecise modeling of the dynamic features of such languages and the path-insensitivity of the tools. In this talk, we will describe our approach to finding security problems inside web applications. It combines path-sensitive static analysis, concrete and symbolic execution, literal analysis, taint analysis and type analysis.
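The taint-analysis ingredient can be sketched in miniature (the source/sanitizer/sink names and the three-address program encoding are invented for illustration; the actual tool works on real PHP and is path-sensitive):

```python
# Toy taint analysis: values from sources are tainted, taint propagates
# through assignments, and tainted data reaching a sink without
# sanitization is reported.

SOURCES, SANITIZERS, SINKS = {"get_param"}, {"escape"}, {"echo"}

def analyze(statements):
    """statements: (target, op, args) triples; returns sink warnings."""
    tainted, warnings = set(), []
    for target, op, args in statements:
        if op in SOURCES:
            tainted.add(target)
        elif op in SANITIZERS:
            tainted.discard(target)      # the sanitized copy is clean
        elif op in SINKS:
            if any(a in tainted for a in args):
                warnings.append(f"tainted data reaches {op}({args})")
        elif any(a in tainted for a in args):
            tainted.add(target)          # plain assignment propagates taint
    return warnings

program = [
    ("x", "get_param", ("id",)),         # $x = $_GET['id'];
    ("y", "=",         ("x",)),          # $y = $x;
    (None, "echo",     ("y",)),          # echo $y;  <-- unsanitized!
]
print(analyze(program))                  # → one warning about echo
```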

May 2011

This talk will give an overview of selected recent benchmarking techniques applicable to distributed computing and storage systems. Early classical benchmarking metrics for database systems (the Wisconsin Benchmark) will be described and compared to recent advances in the area of cloud benchmarking suites.

2011-05-25 09:00 in S5: Challenges in developing long living automation systems

Roland Weiss (ABB AG, Ladenburg, Germany)

ABB provides automation systems in the process, manufacturing, and power domains that are in service for a long time; e.g., it is not uncommon to run plants for more than 30 years. This poses several challenges to ABB's software development in maintaining these products over their whole lifecycle, i.e. maintenance and evolution. Technologies in the IT area typically have very different lifecycles; e.g., operating systems change significantly within 10 years, and support for 10-year-old operating systems is not common. In addition, the downtime of plants (from both an economical and a social viewpoint) has to be reduced. This requires runtime mechanisms for live updates, redundancy etc. In this talk I want to discuss the challenges we see in software development at ABB, areas for research, and also some solutions that we promote.

2011-05-24 14:00 in S9: Ferdinand - Middleware measurement

Lukáš Marek

The current state of the Ferdinand project focused mainly on middleware measurement and related tasks. [PDF]

2011-05-17 14:00 in S9: Current Issues in Software Composition, Adaptation, and Reconfiguration

Carlos Canal (University of Malaga)

Component-based Software Development aims at building systems by composition of existing services, possibly developed by third parties. Indeed, a vast number of services are already available. These services are black-box components described by their interfaces. Hence, there is a need for techniques to combine them safely and efficiently. Moreover, components or services are hardly ever reused as they are, due to interface mismatch, and they require customized adaptation to combine them. Our group has been working on Software Adaptation for about ten years now. One of our recent research lines is to apply our previous results to dynamic system reconfiguration when adaptation is required, ensuring in this way adaptation to environmental changes (such as network failures, component upgrading, or availability of new, more suitable components) without stopping the system, by means of transparent substitution, and preserving certain system properties that go beyond backwards compatibility.

2011-05-10 14:00 in S9: An Approach to Embedded System Development Based on Dynamically-typed Language

Marcel Paška (University of West Bohemia, Pilsen)

The talk will present a novel approach to software development, mainly useful for embedded devices. Embedded software is described in a programming language with very high level of abstraction. We first generate a verifiable code from the description and prove that it has certain properties defined by LTL formulae. Then we generate the final C code with the same properties.

2011-05-04 09:00 in S5: Evaluation of the Composable Cache Model

Vlastimil Babka

The talk presents an initial evaluation of the precision of the cache capacity model introduced at the previous seminar on 23rd November. The discussion includes the effects of using different methods to obtain the stack profile inputs, factors that could be causing the cases of imprecise prediction, and also an initial comparison with a different published cache model. [PDF]

2011-05-03 14:00 in S9: Asynchronous Components with Futures: Semantics, Specification, and Proofs in a Theorem Prover

Ludovic Henrio (INRIA, France)

This talk presents a model for distributed components communicating asynchronously using futures – placeholders for results. Our components communicate via asynchronous requests and replies, where the requests are enqueued at the target component and the invoker receives a future. Future references can then be dispersed among components. When the result of a future becomes available, it needs to be transmitted to all interested components, as determined by a future update strategy. We present a formal semantics of our component model incorporating a formalisation of one such future update strategy. We formalise in Isabelle/HOL a component model comprising notions of hierarchical components, asynchronous communication, and futures. We present the basic constructs and corresponding lemmas related to the structure of components. We present the formal operational semantics of our components in the presence of a future update strategy; proofs showing the correctness of our future update strategy are presented. Our work can be considered a formalisation of ProActive/GCM and shows the correctness of the middleware implementation.
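The request/future pattern itself can be illustrated with Python's standard concurrent.futures (only an analogy to the ProActive/GCM model, which additionally allows futures to be dispersed among components and resolved by a chosen update strategy):

```python
# The request/future pattern: the invoker gets a placeholder immediately,
# and the result is filled in ("future update") once the target finishes.

from concurrent.futures import ThreadPoolExecutor

def component_request(n):
    return n * n            # stands in for a remote component's computation

with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(component_request, 7)   # enqueue, receive a future
    # ... the invoker continues; the future could be passed on elsewhere ...
    print(future.result())                        # blocks until updated → 49
```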

April 2011

2011-04-27 09:00 in S5: REPROTOOL Workflow

Viliam Šimko

The talk will overview the goals of the ongoing project REPROTOOL, which takes up on Procasor and extends it by explicit use of models and different input/outputs. [PDF]

The development of service-oriented applications, prominently Web Services, raises several challenges that call for new ideas, specialized models and dedicated analysis techniques. This talk covers some contributions that address such challenges: namely, we present a specification model based on a novel notion of conversation, and static analysis techniques that allow checking crucial properties of service-oriented systems --- safety and progress of communication protocols. Our approach addresses systems that involve several simultaneous conversations between multiple parties, including conversations with a dynamic and unpredictable number of participants - scenarios found in real systems that are out of reach of previous works.

Many dynamic analysis tools for the Java Virtual Machine are implemented with low-level bytecode instrumentation techniques. While program manipulation at the bytecode level is very flexible, because the possible bytecode transformations are not restricted, tool development is tedious and error-prone. Specifying dynamic analyses at a higher level using AOP is a promising alternative in order to reduce tool development time and cost. However, general-purpose AOP languages lack some features that are essential for dynamic analyses, such as pointcuts at the level of individual bytecodes and basic blocks of code, primitives for passing data between woven advice in local variables, and support for custom static analysis at weaving time. This talk presents the design of @J (Aspect Tools in Java), an aspect-oriented DSL for dynamic program analysis currently under development at the University of Lugano. @J addresses the aforementioned issues and offers dedicated support for optimizing dynamic analyses on multicores.

2011-04-19 14:00 in S9: Effective Concurrent Programming

Václav Pech (JetBrains)

An overview of advanced concurrent programming abstractions and concepts (actors, agents, fork/join, dataflow concurrency, continuations) and an introduction to the GPars framework for Groovy. [Link]

2011-04-12 14:00 in S9: Dynamic reconfigurations in SOFA 2

David Babka

The talk will overview the status and achievements of the master thesis on the dynamic reconfigurations in SOFA 2. The thesis covers the factory pattern as proposed in SOFA 2 and the newer extensions, which allow modelling of dynamic entities.

2011-04-06 09:00 in S5: Formal Approaches to Software Architecture

Jaroslav Keznikl

Reading seminar on methods for formal modeling of software architectures, their evolution and checking of architectural properties. [PDF]

2011-04-05 14:00 in S9: Identification of Abstractions in Documents

Viliam Šimko

Report on the paper "On the Effectiveness of Abstraction Identification in Requirements Engineering". The paper describes a statistical method for automatic identification of abstractions in documents written in natural language. The method computes a log-likelihood (LL) value for every word (using the English National Corpus), and a heuristic function computes the score of multi-word terms. This method performs better than the original single-word approach of AbstFinder. To evaluate the method, the authors used a book about RFID technology with the book index as the set of abstractions. [PDF]
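The LL score in question is the standard corpus-comparison log-likelihood statistic; a sketch with invented counts:

```python
# Sketch of the log-likelihood (LL) keyword score: how surprising a word's
# frequency in the analysed document is, relative to a large reference
# corpus. The counts below are invented for illustration.

import math

def log_likelihood(a, b, c, d):
    """a: word count in document, b: in corpus; c, d: total word counts."""
    e1 = c * (a + b) / (c + d)           # expected count in the document
    e2 = d * (a + b) / (c + d)           # expected count in the corpus
    ll = 0.0
    if a:
        ll += a * math.log(a / e1)
    if b:
        ll += b * math.log(b / e2)
    return 2 * ll

# "tag" occurs 50 times in a 10,000-word RFID book but only 100 times in a
# 1,000,000-word reference corpus: a strong abstraction candidate. A word
# occurring exactly at its expected rate scores 0.
score  = log_likelihood(50, 100, 10_000, 1_000_000)
common = log_likelihood(40, 4_000, 10_000, 1_000_000)
print(score > common)                    # → True
```

High-LL words then seed the multi-word term scoring described in the abstract.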

March 2011

2011-03-30 09:00 in S5: Education in Global Software Engineering, Experience of a course Distributed Software Development

Ivica Crnkovic (MDU, Sweden)

This talk presents the needs, challenges and experience of education in Global Software Engineering. In particular, experiences from the course "Distributed Software Development" will be discussed. The DSD course was organized and run jointly by the School for Innovation, Design and Engineering at Mälardalen University (MDU), Sweden, and the University of Zagreb, Faculty of Electrical Engineering and Computing (FER), Croatia, partially joined by the University of Paderborn (UPB), Germany. The talk will discuss the challenges in creating a joint DSD course (organizational, distance-related and cultural), the solutions implemented in DSD, lessons learned, and success stories. The talk is based on several papers, the latest two accepted at the CTGDSD workshop at ICSE 2011.

2011-03-29 14:00 in S9: Microsoft Azure

Pavel Ježek

The talk will overview the Microsoft Azure cloud technology. It will focus on the services provided and it will demonstrate the technology on code examples. The talk will also mention the business model underlying the provisioning of the platform.

2011-03-23 09:00 in S5: Semantics of C++ Concurrency

David Hauzar

The talk will introduce basic concepts of the specification of concurrent behavior in the next version of C++. [PDF]

2011-03-22 14:00 in S9: Run-time characteristics of JavaScript.

Pavel Jančík

How many JavaScript programs use eval? Do the authors of JavaScript engines, optimizations and static analyzers make correct assumptions about the behavior of JavaScript? How does the behavior of JavaScript differ from that of other languages? [PDF]

2011-03-16 09:00 in S5: Technology trends Q1 2011

Tomáš Pop

This seminar reports on emerging trends and technologies that affect the market today or are expected to affect it in the near future. The overview will summarize the Gartner top 10 technologies, the ThoughtWorks Technology Radar and the IBM Next Five reports.

2011-03-15 14:00 in S9: SOFAnet 2 - master thesis report

Michal Papež

The aim of SOFAnet 2 is to exchange components between SOFAnodes in a simple and rational way. It introduces new high-level concepts of Applications and Components. The talk will cover their mapping to SOFA 2 entities, the means of distribution, and a methodology for keeping the SOFA 2 repository clean. A description of the prototype implementation and its current status will be presented and new tools demonstrated. [PDF]

2011-03-09 09:00 in S9: Paradoxes of API Design

Jaroslav Tulach (Oracle, Czech Republic)

API design is something that everyone needs, yet only a few have enough experience to do it right. As an original NetBeans architect, I went through many phases: from believing that API design is an art, to understanding that it is hard. Finally I realized that there are general rules which distinguish good design from bad. I'll explain some of them in this presentation. However, as humans are attracted by weird creatures, I'll demonstrate them on paradoxes. We'll start with claims that look silly and then investigate why they are correct in the world of API design. I hope this is going to be an interesting investigation.

2011-03-08 14:00 in S9: Garbage Collection Modeling: Update

Peter Libič

The talk will present progress in the effort to model Java garbage collector performance, discuss the motivation for further study, and outline possible future work on the topic. A report from a stay at USI Lugano will also be part of the seminar. [PDF]

2011-03-02 09:00 in S5: New features in Java 7

Petr Hnětynka

The talk will provide an overview of the features expected to appear in new Java 7 (and possibly Java 8 ...). [PDF]

2011-03-01 14:00 in S9: Generic Process Shape Types and the Poly* System

Jan Jakubův

Process calculi are used to model concurrent systems with several interacting units called processes. Shape types are a general concept of process types which allows verification of various properties of processes from various calculi. The Poly* system, originally designed by Makholm and Wells, is a type system scheme which can be instantiated to a shape type system for many calculi. The talk starts with a short introduction to concurrent systems, to process calculi, and to process calculi type systems. Then the Poly* system is introduced and Poly* instantiation is demonstrated on simple examples. Finally we show several concrete applications of the Poly* system and its instantiations. [PDF]

February 2011

MetaEdit+ is a leading tool in the domain of domain-specific modeling, primarily targeting the design of graphical modeling languages. It combines a flexible modeling environment with simple but powerful code generation techniques. The objective of the seminar is to show the main concepts of the tool. The talk will demonstrate them on the development of a simple graphical language for controlling a mixing machine. [PDF]

In the late 1990s, after almost a decade in which "cluster" was a synonym for VMS Cluster, many other commercial cluster solutions appeared. Later, the cluster principle was applied not only to operating systems and applications, but also to storage systems, databases, and virtual machines. Since there are many similarities and differences among all of these clusters, we propose a cluster taxonomy based on the level of resource sharing and load-balancing capabilities. In the second part of the talk we will discuss design patterns of shared-everything clusters.

Extensible applications can be extended and customized by third-party developers. This is an increasingly popular style (e.g., NetBeans, Firefox, Eclipse), but experience has shown that such applications are difficult to build, as they require the application creators to predict the expected types of extensions in advance. We address this by adding three constructs to an ADL, allowing both reuse and evolution to be captured in the design phase. This fully aligns initial creation and subsequent extension, whilst retaining an architectural focus. We show the applicability to extensible systems where even unplanned extension is catered for. We will also demonstrate our toolset support for the approach, featuring automated structural consistency checking. At the end of the presentation, there will be an opportunity for a hands-on tutorial. [Link][Link]

2011-02-08 14:00 in S6: Process Algebras for Performance Evaluation: An Overview

Mirco Tribastone (LMU, Munich)

This talk gives an overview of recent research concerning the use of stochastic process algebras for the quantitative analysis of hardware/software systems. The talk consists of three parts. The first part will introduce some basic notions on the usual discrete-state interpretation of models as continuous-time Markov chains. The second part will discuss deterministic approximations with ordinary differential equations for the analysis of large-scale models defined with the stochastic process algebra PEPA. Finally, the third part will conclude with comparisons against other approaches in the literature and, if time permits, with a discussion of open problems and ongoing research to tackle them.

January 2011

2011-01-18 14:00 in S9: More CUDA accelerated LTL Model Checking

Milan Češka (FI, MUNI)

Recent technological developments have made various many-core hardware platforms widely accessible. These massively parallel architectures have been used to significantly accelerate many computationally demanding tasks. In this presentation we show how algorithms for LTL model checking can be redesigned in terms of matrix-vector products in order to accelerate LTL model checking on many-core GPU platforms. Our experiments demonstrate that using the NVIDIA CUDA technology results in a significant speedup of the verification process. However, the CUDA technology is often limited to small or middle-sized instances due to space limitations, which is also the case for our CUDA-aware LTL model checking solution. We further suggest how to overcome these limitations by employing multiple CUDA devices to accelerate our fine-grained, communication-intensive parallel algorithm for LTL model checking. [PDF]

2011-01-12 09:00 in S5: Highlights of The Future of Software Engineering Symposium

David Hauzar

A brief overview and summary of several interesting topics presented at The Future of Software Engineering Symposium by Erich Gamma, Joseph Sifakis and Andreas Zeller. [PDF]

2011-01-11 14:00 in S9: Modal Transition Systems

Nikola Beneš (FI, MUNI)

Modal transition systems are a formalism that allows describing partial (loose) specifications. The formalism supports stepwise refinement and compositional reasoning; it is thus well suited for the design and specification of component-based systems. The aim of the first part of the talk is to introduce modal transition systems and some of the problems that arise in connection with this formalism. The second part of the talk is devoted to some of our original results and is based on two published papers (TCS'09 and ICTAC'09). [PDF]

December 2010

2010-12-22 09:00 in S5: STRIPS Planning as an Approach to Automated Transformations of Object Oriented Models

Model transformations are one of the most discussed topics of modern software engineering. This talk deals with STRIPS planning as a powerful engine for uniformly automating object-oriented model transformations such as refactoring, design pattern application, and object normalization. [PDF]

2010-12-21 14:00 in S9: Pattern-based verification of concurrent programs

Tomas Poch

Reachability of a program location in a concurrent program is an undecidable problem. Pattern-based verification is a technique that decides a related problem: whether the program location is reachable assuming that the threads are interleaved in a way corresponding to a given pattern. The pattern takes the form w_1* ... w_n*, a sequence of words, each of which can repeat arbitrarily many times. The talk will formulate the task of pattern-based verification as a language problem and present the decision procedure in this manner. The talk will also cover the application of the technique in the context of parallel processes communicating in the rendezvous style as well as parallel processes with shared memory, touch on recent complexity results, and comment on the ongoing work on the tool implementation. [PDF]
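As an illustration of the pattern language w_1* ... w_n*, a membership check for a concrete interleaving can be sketched in a few lines. This is a toy illustration of the pattern shape only, with hypothetical names; it is not the talk's decision procedure, which reasons about program locations rather than concrete traces:

```python
import re

def matches_pattern(trace, words):
    """Check whether `trace` lies in the language w_1* w_2* ... w_n*,
    i.e., some number of repetitions of w_1, then of w_2, and so on."""
    pattern = "".join("(?:%s)*" % re.escape(w) for w in words)
    return re.fullmatch(pattern, trace) is not None

# The thread interleaving "ababab" fits the pattern (ab)* (cd)*,
# while "abcdab" does not (it returns to w_1 after leaving it):
print(matches_pattern("ababab", ["ab", "cd"]))   # True
print(matches_pattern("abcdab", ["ab", "cd"]))   # False
```

Deciding reachability under such a pattern is of course much harder than this membership test; the sketch only fixes the intuition for what the pattern constrains.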

2010-12-15 09:00 in S5: Modal Interface Theories

Rolf Hennicker (LMU, Germany)

Preservation of compatibility by refinement and preservation of refinement by component composition are crucial requirements for any kind of interface theory. In this talk we discuss several variants of interface theories on the basis of modal I/O-transition systems (MIOs) introduced by Larsen et al. First we present interface theories for MIOs with strong and weak modal refinement and compatibility on the basis of synchronous communication. Then we describe extensions in two directions, to asynchronous communication and towards the integration of changing data states of components. For each case we consider particular compatibility and refinement notions and we show that they work properly together, i.e. form an interface theory.

2010-12-14 14:00 in S9: Verification of Timed Output-Compatibility and Timed Input-Compatibility in Networks of Timed Input/Output-Automata

Ludwig Adam (LMU, Germany)

The ability to specify and verify time constraints is of great importance in time-critical enterprise applications. In this work we consider Timed I/O-Automata for the specification of component behaviour with time constraints. The synchronized communication of a component with its environment can lead to a violation of the specified time constraints. We are therefore interested in two aspects of compatibility: while timed output-compatibility ensures that output messages of a component are accepted by its environment, timed input-compatibility ensures that an input requirement of a component is always satisfied by its environment. Current model checkers do not explicitly support these notions of compatibility. We introduce syntactical transformation rules for Timed I/O-Automata that allow us to automatically verify these properties with the model checker UPPAAL.

2010-12-14 09:00 in S201: Benchmarking in Virtual Environments

Andy Georges (Ghent University, Belgium)

In spite of the widespread adoption of virtualization and consolidation, there exists no consensus with respect to how to benchmark consolidated servers that run multiple guest VMs on the same physical hardware. For example, VMware proposes VMmark which basically computes the geometric mean of normalized throughput values across the VMs; Intel uses vConsolidate which reports a weighted arithmetic average of normalized throughput values.

These benchmarking methodologies focus on total system throughput (i.e., across all VMs in the system), and do not take into account per-VM performance. We argue that a benchmarking methodology for consolidated servers should quantify both total system throughput and per-VM performance in order to provide a meaningful and precise performance characterization. We therefore present two performance metrics, Total Normalized Throughput (TNT) to characterize total system performance, and Average Normalized Reduced Throughput (ANRT) to characterize per-VM performance.

We compare TNT and ANRT against VMmark using published performance numbers, and report several cases for which the VMmark score is misleading. That is, VMmark says one platform yields better performance than another; however, TNT and ANRT show that the two platforms represent different trade-offs in total system throughput versus per-VM performance. Or, even worse, in a couple of cases we observe that VMmark yields opposite conclusions to TNT and ANRT, i.e., VMmark says one system performs better than another, which is contradicted by the TNT/ANRT performance characterization.
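The contrast can be sketched numerically. The formulas below are simplified stand-ins (a geometric-mean aggregate versus a total-plus-per-VM view), not the exact TNT and ANRT definitions from the work itself:

```python
from math import prod

def geo_mean_score(norm_tputs):
    # VMmark-style aggregate: geometric mean of per-VM normalized throughputs.
    return prod(norm_tputs) ** (1.0 / len(norm_tputs))

def total_and_avg_reduction(norm_tputs):
    # Simplified stand-ins for TNT and ANRT: total normalized throughput,
    # and the average per-VM throughput reduction relative to isolated runs.
    total = sum(norm_tputs)
    avg_reduction = sum(1.0 - t for t in norm_tputs) / len(norm_tputs)
    return total, avg_reduction

# Two consolidated platforms with the *same* geometric-mean score ...
a = [0.9, 0.9, 0.9, 0.9]        # every VM slowed down evenly
b = [1.0, 1.0, 0.81, 0.81]      # two VMs unharmed, two slowed down more
print(round(geo_mean_score(a), 6), round(geo_mean_score(b), 6))  # 0.9 0.9
# ... but b buys its higher total throughput at the cost of its slowest
# VM (0.81 vs. 0.9), which a single aggregate score cannot show:
print(total_and_avg_reduction(a), min(a))
print(total_and_avg_reduction(b), min(b))
```

The single score is identical for both platforms, while the two-metric view exposes that they distribute the consolidation cost very differently across VMs.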

2010-12-13 16:30 in Malá Aula: Evaluation in Computer Science

Andy Georges (Ghent University, Belgium)

In (experimental) computer science, a large portion of time should be (but mostly is not) spent on evaluating ideas, refining them, re-evaluating, and finally measuring the results of the implemented idea. Computer science papers usually contain an evaluation section -- albeit a fairly limited one at times -- where the authors try to provide evidence that their approach is both valid and improves the state of the art. During the last decade, there has been an increase in awareness that the computer science community is lacking in several aspects of (performance) evaluation. We have made several important contributions to improving the state of the art in this domain, focusing on Java (process-level virtualisation). However, while we need to take the rigour in evaluating our ideas and tools one step further, we feel that many researchers still have not caught up yet. Furthermore, in our opinion, the field urgently needs to wake up and become more scientific in its ways.

In this talk we will discuss some of the steps that have been proposed by us and others, and present opportunities for further improving the way we conduct evaluations. These ideas are by no means set in stone; they are meant to provide a lead for further discussion and refinement. Three issues seem to form the crux of the matter: experimental design, benchmarks and measurements. However, all of these issues are intertwined with the motivation people are given to conduct decent experimentation, and here as well a (cultural) change is needed.

In the presentation, a component-based simulation framework for component testing is described. The simulation framework, which is currently under active development, makes it possible to test extra-functional properties of real software components in a simulated environment. Hence, there is no need to create (potentially incorrect) models of the components. The framework itself is constructed from software components in order to ensure the modularity of the performed simulations. [PDF]

2010-12-07 14:00 in S9: Pogamut - Middleware for bot behaviours

Jakub Gemrot

There is a rising interest in complex virtual worlds, including among the U.S. Army, therapists, and firefighters. They may be used for training simulations, medical treatment of psychological illnesses, or training rescue missions. These applications demand simulated virtual beings, be they virtual humans or animals, with complex behaviours. Experimenting with virtual behaviours means having a virtual world paired with a tool that enables one to quickly test different approaches to behaviour coding. As far as we know, our tool Pogamut is the only one that is being systematically developed to enable such experimentation with different behaviour languages. The presentation will discuss the problems of pairing behaviour-language interpreters with virtual worlds, showing Pogamut bound together with the virtual world of the Unreal Tournament 2004 computer game. [PDF]

2010-12-01 09:00 in S5: Models everywhere

Michal Malohlava

The talk summarizes the 1st Modeling Wizards summer school, which took place in Oslo at the end of September. The talk highlights the key points of the sessions and reports the challenges identified in the modeling world.

November 2010

In the late 1990s, after almost a decade in which "cluster" was a synonym for VMS Cluster, many other commercial cluster solutions appeared. Later, the cluster principle was applied not only to operating systems and applications, but also to storage systems, databases, and virtual machines. Since there are many similarities and differences among all of these clusters, we propose a cluster taxonomy based on the level of resource sharing and load-balancing capabilities. In the second part of the talk we will discuss design patterns of shared-everything clusters.

2010-11-24 09:00 in S5: Read-Copy-Update for OpenSolaris

Andrej Podzimek

Read-Copy-Update (RCU) is a synchronization mechanism that increases concurrency and improves throughput in SMP (Symmetric Multiprocessing) environments. These improvements are achieved by keeping multiple versions of the protected data, which allows readers and writers to run in parallel. The RCU mechanism is used in virtually all subsystems of the Linux kernel. The talk will present a new and original implementation of RCU for UTS (the kernel used by Solaris, formerly OpenSolaris, Nexenta, and Illumos). The new RCU implementation for UTS has been benchmarked in cooperation with a non-blocking hash table. The benchmark compared the RCU algorithm with a readers-writer lock and with another RCU implementation called QRCU. More details can be found in Andrej's diploma thesis linked below. The second link refers to the presentation slides. [PDF][PDF]
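The core idea of RCU (readers proceed without locking while updaters publish modified copies) can be sketched in a few lines. This is a toy illustration with hypothetical names only; real RCU implementations such as the Linux or UTS ones additionally track grace periods so that old versions can be reclaimed safely:

```python
import threading

class RcuCell:
    """Toy read-copy-update cell: readers never block; updaters copy the
    current version, modify the copy, and publish it atomically."""

    def __init__(self, data):
        self._data = data                    # published snapshot, treated as immutable
        self._write_lock = threading.Lock()  # writers serialize; readers do not

    def read(self):
        return self._data                    # lock-free: one atomic reference load

    def update(self, mutate):
        with self._write_lock:
            copy = dict(self._data)          # read-copy ...
            mutate(copy)                     # ... update ...
            self._data = copy                # ... publish (atomic reference swap)

cell = RcuCell({"hits": 0})
snapshot = cell.read()                       # a reader keeps its old version
cell.update(lambda d: d.__setitem__("hits", 1))
print(snapshot["hits"], cell.read()["hits"])  # 0 1
```

Readers that obtained a snapshot before the update keep seeing the old version while new readers see the new one, which is exactly the multiple-versions behavior described above.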

2010-11-23 14:00 in S9: A Cache Capacity Sharing Model

Vlastimil Babka

The talk introduces a work-in-progress model for predicting cache misses and performance of workloads running in parallel over a shared cache. The model takes stack distance profiles and several other metrics of the workloads as its input, and can produce an effective stack distance profile of the workloads under sharing. The talk also describes the adopted method for obtaining the required input metrics of an executable workload, and shows preliminary results. [PDF]
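For readers unfamiliar with the model's input, a stack distance profile of an access trace can be computed with a simple LRU stack. This is an illustrative sketch with hypothetical names, not the tool used in the talk:

```python
def stack_distance_profile(trace):
    """Stack (reuse) distances for a trace of accessed cache lines.

    The distance of an access is the number of *distinct* lines touched
    since the previous access to the same line; first accesses get
    distance inf. The histogram of these distances is the profile."""
    stack = []                      # LRU stack, most recently used last
    profile = {}
    for line in trace:
        if line in stack:
            depth = len(stack) - 1 - stack.index(line)
            stack.remove(line)
        else:
            depth = float("inf")
        stack.append(line)          # line becomes most recently used
        profile[depth] = profile.get(depth, 0) + 1
    return profile

print(stack_distance_profile(["A", "B", "C", "B", "A"]))  # {inf: 3, 1: 1, 2: 1}
```

An access with stack distance d hits in a fully associative LRU cache of at least d+1 lines, which is why such a profile predicts miss counts for any cache size.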

Chaplin ACT is a Java class transformer, which modifies classes in such a way that their instances may form composites at runtime. While Scala and other languages support static object composition, i.e. a composite has an immutable structure defined at the time of the composite's creation, Chaplin ACT allows creating composites whose internal structures may evolve. A new component may be added to a composite or, conversely, an existing component may be removed from it. Furthermore, after inserting an object into a composite as a new component, the object's behavior may change. In this sense, the object plays a certain role in the composite. Conversely, the presence of a new component in a composite may also influence the behavior of other components. It is analogous to our everyday experience: anybody who enters a certain context normally adapts his/her behavior to it. Some behavior may be suppressed while other behavior may emerge. The presence itself may also influence the behavior of other people in the current context. We will demonstrate this technology on several examples and show how to incorporate it into our applications. Furthermore, some insights into the project's internals will be given. Chaplin ACT is developed as an open-source project (http://www.iquality.org/chaplin/) in the framework of my doctoral thesis. [PDF]

2010-11-03 09:00 in S5: Compliance of TBP with implementation

Pavel Jančík

Behavior protocols (BP) are typically used to check the correctness of component composition. Even though the component composition is error-free (from the BP point of view), a component system can still violate the behavior protocol due to bugs in the implementation. The talk gives an overview of how to check the compliance of a primitive component implementation with its behavior specification and summarizes the experience gained during the work on my master thesis. The overall structure of the checker, the extension of Java PathFinder, as well as the features of the TBP protocol and their implications for model checking of code will be presented. [PDF]

The systematic review led to the analysis of 20 primary studies (16 approaches), obtained after a carefully designed procedure for selecting papers published in journals and conferences from 1996 to 2008 as well as Software Engineering textbooks. A conceptual framework was designed to provide common concepts and terminology and to define a unified transformation process. [PDF]

2010-10-20 09:00 in S5: Industrialization of Research Tools: the ATL Case (Reading seminar)

Tomáš Pop

This seminar reports on the paper "Industrialization of Research Tools: the ATL Case". The paper describes the AtlanMod research team's experience with the long-term process of transferring research results into a commercial-quality tool with a large user base. [PDF]

2010-10-19 14:00 in S9: Highlights of the 40th International Summer School in Marktoberdorf

Martin Děcký

A brief overview and summary of several interesting topics presented at the 40th International Summer School in Marktoberdorf (Software and Systems Safety: Specification and Verification) by Manfred Broy, Tony Hoare, Ed Brinksma, Carlo Ghezzi, Susanne Graf, John Harrison, Connie Heitmeyer, Holger Hermanns, Kim Larsen, Doron Peled and John Rushby. [PDF]

Modern software applications are increasingly embedded in an open world that is constantly evolving, because of changes in the requirements, in the environment, and in usage profiles. These changes are difficult to predict and anticipate, and are out of control of the application. In many cases, changes cannot be handled off-line, but require the software to self-react by adapting its behavior dynamically, to continue to ensure the desired quality of service. The big challenge in front of us is how to achieve the necessary degrees of flexibility and dynamism required by software applications without compromising the needed dependability. The talk focuses on quantitative probabilistic requirements. It discusses how the initial design of an application may proceed through a model-driven process towards an implementation that satisfies requirements. Design-time parameters characterizing the environment are intrinsically subject to uncertainty, both because predictions are intrinsically inaccurate and because the environment is likely to change. It is thus necessary to check at run time if these parameters change significantly and may eventually lead to requirements violations. To do so, we need to extend verification to run time, by monitoring the environment, in order to get the real data that characterize it and affect the behavior of the application, and feeding the model (which continues to exist at run time) with new parameters, replacing the outdated values that were used at development time. The updated model can check whether the requirements are still met, or whether a reconfiguration is needed in the application in order to continue to satisfy the requirements. The talk reports on some results of recent research, developed within the SMScom project, funded by the European Commission, Programme IDEAS-ERC, Project 227977 (http://www.erc-smscom.org/). The focus is on modelling, model update, and automatic reasoning on requirements. An initial attempt to support self-adapting reactions will also be outlined.

2010-10-06 09:00 in S5: Various

The CESAR project aims at improving the development process of safety-critical embedded systems. A major goal of CESAR is to provide a reference technology platform (RTP) for embedded systems development. The vision of the RTP is a set of compatible entities that can be used to form tailored, integrated tool chains for embedded systems development. These tool chains follow a model-based integration approach. Among the first tools adapted to the RTP were AVL InMotion and Papyrus for EAST-ADL2. These tools have been combined to create a tool chain that allows tracing between requirements and test cases as well as multi-site, multi-user collaboration. [PDF]

September 2010

2010-09-29 09:00 in S5: Scala and its Ecosystem

Petr Hošek

The Scala language continuously gains popularity among programmers thanks to the many new concepts it offers. Therefore, it makes perfect sense to take a closer look at this language. Besides the language itself, its ecosystem is also very important, which is why we will focus on the Scala ecosystem in this presentation. First of all, we will take a closer look at the Scala compiler, which is used not only as a compiler but also as an important building block of other tools. We will show how the compiler can be easily extended and used as part of our applications. We will continue with an overview of the most important tools and libraries used when developing applications in Scala. Finally, we will take a look at an entirely new member of the Scala ecosystem, the Collaborative Scaladoc project. This project was created within the scope of the Google Summer of Code 2010 program and aims to use a collaborative approach for the development of Scala documentation. [PDF]

2010-09-06 14:00 in S6: Scaling a game from 100 to 100 M users

Luke Rajlich (Zynga)

The talk will cover how FarmVille built a massively scalable game service. This talk will cover the technical challenges the team faced in building the service and how to overcome hurdles in scaling a service. Participants will leave with a better understanding of how large scale game services are designed, built, and scaled.

July 2010

2010-07-28 09:00 in S4: The Poor State of Experimental Evaluation in Computer Science

Peter Sweeney (IBM T. J. Watson Hawthorne)

A summary of the state of the art.

June 2010

2010-06-30 09:00 in S5: On Quantitative Software Verification

Marta Kwiatkowska (University of Oxford, UK)

The vast majority of software verification research to date has concentrated on qualitative analysis methods, for example the absence of safety violations in program executions. Many programs, however, contain randomisation, timing and resource information. Quantitative verification is a technique for establishing quantitative properties of a system model, such as the probability of battery power dropping below minimum, the expected time for message delivery and the expected number of messages lost before protocol termination. Tools such as the probabilistic model checker PRISM (www.prismmodelchecker.org) are widely used in several application domains, including security and network protocols, but their application to real software is limited. This lecture presents recent results concerning quantitative software verification for ANSI-C programs extended with random assignment. The goal is to focus on system software that exhibits probabilistic behaviour and properties such as “the maximum probability of file-transfer failure”, or “the maximum expected number of failed transmissions”. We use a quantitative abstraction-refinement framework based on predicate abstraction, in which probabilistic programs are represented as Markov decision processes and their abstractions as stochastic two-player games. These techniques have been implemented and successfully used to verify actual networking software. [PDF]

2010-06-08 14:00 in S9: CBSE presentation rehearsal

2010-06-01 09:00 in S9: Foundations of Service-Oriented Modelling

José Luiz Fiadeiro (University of Leicester, UK)

We report on the work that we developed within the FP6-IST-FET Integrated Project SENSORIA (www.sensoria-ist.eu), which aimed at providing formal support for modelling service-oriented systems in a way that is independent of the languages in which services are programmed and the platforms over which they run. We discuss the semantic primitives that are being provided in the SENSORIA Reference Modelling Language (SRML) for modelling composite services, i.e. services whose business logic involves a number of interactions among more elementary service components as well as the invocation of services provided by external parties. This includes a logic for specifying stateful, conversational interactions, a language and semantic model for the orchestration of such interactions, and an algebraic framework supporting service discovery, selection and dynamic assembly. (More information is available at http://www.cs.le.ac.uk/srml/) [PDF]

May 2010

I will introduce TectoMT, a highly modular NLP (Natural Language Processing) software system implemented in the Perl programming language under Linux. It is primarily aimed at machine translation, making use of the ideas and technology created during the Prague Dependency Treebank project. At the same time, it is also hoped to significantly facilitate and accelerate the development of software solutions for many other NLP tasks, especially due to the reusability of the numerous integrated processing modules (such as taggers, parsers, or named-entity recognizers), which are all equipped with uniform object-oriented interfaces.

2010-05-12 09:00 in S5: What is Powering the Matrix?

Radovan Janeček (CA Inc.)

I will provide a brief overview of the IBM Mainframe platform with an emphasis on the latest trends (virtualization, data center consolidation, power/cooling optimization, etc.) that keep this platform very interesting for CIOs. As the mainframe has become a first-class citizen of the distributed world (zLinux, J2EE, CICS web services, etc.), the question is whether there are interesting research topics we can collaborate on together. Interactive discussion and joint discoveries are the main purpose of this seminar.

2010-05-05 09:00 in S5: Performance Study of Active Tracking in a Cellular Network Using a Modular Signaling Platform

Tomáš Pop

This seminar will present a complex performance study of SS7Tracker, an application for tracking mobile users in GSM networks built on top of a modular architecture. The analysis is based on simulation using statistical distributions extracted from previous system measurements. [PDF]

Garbage collectors are an integral part of many runtime systems. The seminar consists of two parts: the first is an overview of a selection of contemporary garbage collection algorithms, including basic reference counting, tracing, and advanced algorithms like G1 and concurrent mark-and-sweep. The second part presents the basic models and their simple solutions. [PDF]
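The simplest of the algorithms mentioned, reference counting, fits in a few lines. This is a toy sketch with hypothetical names; it also exhibits the classic limitation that reference cycles are never reclaimed:

```python
class Obj:
    """Toy heap object for a reference-counting collector sketch."""
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.fields = []        # outgoing references to other objects

freed = []                      # stand-in for actually returning memory

def add_ref(obj):
    obj.refcount += 1

def release(obj):
    """Drop one reference; when the count hits zero, free the object and
    recursively release everything it points to."""
    obj.refcount -= 1
    if obj.refcount == 0:
        freed.append(obj.name)
        for child in obj.fields:
            release(child)

a, b = Obj("a"), Obj("b")
a.fields.append(b); add_ref(b)  # a -> b
add_ref(a)                      # a root reference to a
release(a)                      # dropping the root frees a, then b
print(freed)                    # ['a', 'b']
```

A cycle (a -> b -> a) would keep both counts above zero forever, which is exactly why tracing collectors such as the ones covered in the talk are needed.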

April 2010

2010-04-28 09:00 in S5: SOFA-HI

Petr Hošek

Overview of the current status of SOFA HI, a profile of SOFA 2 targeted at the development of high-integrity real-time embedded applications. Generic concepts will be presented together with a description of the prototype implementation, and possible future development directions will be given. [PDF]

2010-04-27 14:00 in S9: Implementing component migration in SOFA 2

Václav Remeš

The possibility of migrating components between different computers at application runtime opens up many possibilities, mainly in the field of system load balancing. There are many issues related to component migration, from leading the component into a reconfigurable state to obtaining the component state and transferring it to a different computer. The talk is a discussion of experience gained during the work on the master thesis "Migration and load-balancing in distributed hierarchical component systems". [PDF]

2010-04-21 09:00 in S5: SOFA Microdock

Michal Malohlava

uSOFA represents a new generation of the SOFA component system. The main goal of uSOFA is to describe SOFA concepts with SOFA itself and hence to provide a highly configurable, self-hosted SOFA implementation. uSOFA is based on a simple micro-component model allowing us to capture design-time as well as runtime concerns of component-based applications. To mitigate the runtime complexity, micro-components can be assembled into two types of model aspects: a component aspect, which provides a control layer for components, and platform aspects, which focus on expressing the required platform functionality. The talk focuses on clarifying these concepts and explains them on an example. [PDF]

2010-04-13 14:00 in S9: Object Instance Profiling

Lukáš Marek

Existing Java profilers mostly use one of two distinct profiling methods, sampling and instrumentation. Sampling is not as informative as instrumentation, but its overall overhead can be small. Instrumentation is more informative than sampling, since it intercepts every entry and exit in the measured code, but the overhead is large. We propose a method that collects profiling information associated with a specific object instance, rather than with a specific code location. Our method, called object instance profiling, can collect contextual information similarly to other instrumentation methods, but can be used more selectively and therefore with lower overhead. [PDF]
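The idea of attaching profiling to one chosen instance rather than to code locations can be sketched in plain Python (a hypothetical illustration of the concept only; the talk's method instruments Java code):

```python
def profile_instance(obj):
    """Wrap the public methods of one chosen instance so that only calls
    on *that* instance are counted; all other instances of the class
    stay uninstrumented, which keeps the overhead selective."""
    obj.call_counts = {}
    for name, method in vars(type(obj)).items():
        if callable(method) and not name.startswith("_"):
            def wrapper(*args, _m=method, _n=name, **kw):
                obj.call_counts[_n] = obj.call_counts.get(_n, 0) + 1
                return _m(obj, *args, **kw)  # delegate to the real method
            setattr(obj, name, wrapper)      # shadow the class method
    return obj

class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, x):
        self.balance += x

hot = profile_instance(Account())   # the one instance we care about
cold = Account()                    # runs without any interception
hot.deposit(5); hot.deposit(7); cold.deposit(1)
print(hot.call_counts)              # {'deposit': 2}
```

Only the wrapped instance pays the interception cost, which is the source of the selectivity (and lower overhead) mentioned in the abstract.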

2010-04-07 09:00 in S5: How to Develop Adaptive Applications for Ubiquitous Computing Environments?

Kurt Geihs (Universität Kassel, Germany)

Adaptive applications modify their behaviour at run-time in response to significant changes in their operating context. In particular, ubiquitous computing applications require run-time adaptivity due to the inherently dynamic nature of the computing environment. We discuss the adaptation requirements and constraints of such scenarios. Our proposed solution is based on architectural adaptation and extended context management that integrates dynamic service discovery. This allows the substitution of an application component by an external, dynamically discovered service at run-time, if the utility of the application is increased by the reconfiguration. The development of such adaptive applications is a complex and challenging task. We present a new model-driven development method that supports the design and implementation of adaptive applications. The core of the approach is an architectural variability model that specifies application variants and their dependencies on the operating context. This design-time adaptation model is transformed to a run-time model which is used by the adaptation manager in the middleware to decide about adaptation actions according to a given objective function. Experiences with case studies and open research questions will be discussed at the end of the talk. [PDF]

2010-04-06 14:00 in S9: Overview of ETAPS 2010

March 2010

2010-03-30 14:00 in S9: Static analysis of PHP applications

Ondřej Šerý

Low-level errors related to memory safety (e.g., buffer overflow) are no longer considered to be the most dangerous programming errors. Nowadays, the top-ranking programming errors (see, e.g., http://cwe.mitre.org/top25/) are SQL injection (SQLI) and Cross-site scripting (XSS). These errors are especially hard to avoid in dynamically typed scripting languages, such as PHP, with fast unconstrained evolution and rather poor information sources. This talk is a brief excursion into the topic of static analysis of web applications, with special focus on PHP and on static string analysis as published by Wassermann and Su. [PDF]
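As a toy illustration of the underlying idea — not Wassermann and Su's grammar-based static analysis, which over-approximates string values without executing the program — the following sketch tracks tainted user input through string concatenation and flags queries that reach a database sink unsanitized. All names here are hypothetical.

```python
class Tainted(str):
    """A string marked as untrusted user input (illustrative only)."""

def concat(parts):
    # Taint propagates: if any part is tainted, the result is tainted.
    result = "".join(parts)
    return Tainted(result) if any(isinstance(p, Tainted) for p in parts) else result

def sanitize(s):
    # Stand-in for an escaping routine; str methods return a plain,
    # untainted str, which models "sanitization removes taint".
    return s.replace("'", "''")

def check_query(query):
    # A sink (e.g. a SQL query executor) must never receive tainted data.
    return not isinstance(query, Tainted)

user = Tainted("1' OR '1'='1")
unsafe = concat(["SELECT * FROM t WHERE id='", user, "'"])
safe = concat(["SELECT * FROM t WHERE id='", sanitize(user), "'"])
```

Here `check_query(unsafe)` fails while `check_query(safe)` passes; a static SQLI analysis reaches the same verdict by reasoning over all possible string values instead of one concrete run.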

Continuation of the previous talk considering refinement of TBP supporting unlimited number of threads. [PDF]

2010-03-23 14:00 in S9: From Textual Use-Cases to component based implementation

Viliam Šimko

An automated model-driven approach to generate executable code directly from textual use-cases written in plain English. The approach is based on an earlier JEE generator, extended to allow the use-cases to be prepared in different formats and to generate the final code in different programming languages and component frameworks. [PDF]

2010-03-10 09:00 in S5: MEF (Managed Extensibility Framework)

The talk starts with an overview of the various challenges in the construction of reliable, safety-critical embedded systems. One challenge is to overcome the brittle, hardware-dependent timing behavior of existing systems by providing an adequate correct-by-construction method. The Timing Definition Language (TDL) is the result of several years of research of our group in this area. TDL allows the explicit description of timing behavior along with automatic (timing) code generation and formal guarantees of the timing behavior of concurrent activities. TDL also meets real-world requirements known from the automotive and aerospace domains, such as the description of both synchronous and asynchronous activities. We present TDL's core language elements, highlight some concepts for code generation and demonstrate the TDL tool chain. [PDF]

2010-03-03 09:00 in S5: Real-time Java: Promises and Challenges

Tomáš Kalibera

Real-time Java is being considered as a promising platform for real-time and embedded systems of the future. Distinguishing features of Java compared to technologies used today include automatic memory management, integrated support for concurrency or a good supply of libraries with standard APIs. Despite several successful deployments in demonstrator studies based on production and open-source implementations, there are significant open issues that prevent deployment in resource constrained embedded systems and/or safety-critical systems. The talk will focus on automated memory management as a key feature desired for real-time Java systems, which is also the key obstacle for predictability and throughput. The talk will also mention open issues related to the verification of real-time Java systems. [PDF]

Software performance prediction methods are typically validated by taking an appropriate software system, performing both performance predictions and performance measurements for that system, and comparing the results. The validation includes manual actions, which makes it feasible for a small number of systems only. To significantly increase the number of systems on which software performance prediction methods can be validated, and thus improve the validation, we propose an approach where the systems are generated together with their models and the validation runs without manual intervention. The approach is described in detail and initial results demonstrating both its benefits and its issues are presented. [PDF]

February 2010

2010-02-24 09:00 in S5: A Profile Approach to Using UML Models for Rich Form Generation

Tomáš Černý (FEL, ČVUT)

Model Driven Development (MDD) has provided a new way of engineering today's rapidly changing requirements into the implementation. However, the development of the user interface (UI) part of an application has not benefited much from MDD, although today's UIs are complex software components and play an essential role in the usability of an application. As one of the most common UI examples, consider view forms that are used for collecting data from the user. View forms are usually created with a lot of manual effort after the implementation. For example, in the case of Java 2 Enterprise Edition (Java EE) web applications, developers create all view forms manually by referring to entity beans to determine the content of the forms, but such manual creation is tedious, very much error-prone, and makes system maintenance difficult. One promise of MDD is that we can generate code from UML models. Existing design models in MDD, however, cannot provide all class attributes that are required to generate practical code of UI fragments. In this paper, we propose a UML profile for view form generation as an extension of the object relational mapping (ORM) profile. A profile form of the Hibernate validator is also introduced to implement practical view form generation that includes user input validation. [PDF]

VCC is an industrial-strength verification suite for the formal verification of concurrent, system-level C code. VCC's development is driven by two major verification initiatives in the Verisoft XT project: the Microsoft Hyper-V Hypervisor and SYSGO's PikeOS micro kernel. The talk will give a brief overview on the Hypervisor with a special focus on verification related challenges that this kind of highly optimized, concurrent, system-level software poses. It will discuss how the design of VCC addresses these challenges, and will highlight some specific issues of the Hypervisor verification and how they can be solved with VCC. [PDF]

While software is an immaterial object that does not decay with time, Parnas pointed out that it is in fact aging. Lehman's laws of software evolution accordingly state that a system that is being used undergoes continuing adaptation or degrades in effectiveness. Consequently, we can observe that the ability to cost-effectively adapt software has become one of the most important critical success factors for software development today.

One particular vision to address this challenge is self-adaptive software, which incorporates the capability to adjust itself to changing needs into the software itself. This capability promises to considerably reduce the costs of required administration and maintenance and to avoid a decline in quality. In addition, future generations of software systems that interconnect today's more or less decoupled applications into complex, evolving software landscapes will require the capability of self-adaptation as an important cornerstone, as the software as a whole can no longer be engineered at development time.

In this talk we review why we should look for means to engineer self-adaptive software systematically and what requirements have to be fulfilled to achieve the systematic software engineering of self-adaptive systems. Then, we look into the particular role of models for engineering self-adaptive systems and discuss the current vision for the model-driven software engineering of self-adaptive systems. Besides the means to build self-adaptive systems with models, we also review the role of models for the validation and verification of such systems. [PDF]

2010-02-09 14:00 in S4:Orthographic Software Modeling: A Practical Approach to View-Oriented, Component-Based Development

Colin Atkinson (University of Mannheim, Germany)

Although they differ in how they decompose and conceptualize software systems, one thing that all advanced software engineering paradigms such as SOA, MDD and CBD have in common is that they increase the number of different views involved in visualizing a system. Managing these different views can be challenging even when a paradigm is used independently, but when they are used together the number of views and inter-dependencies quickly becomes overwhelming. In this talk Colin Atkinson will present a novel approach for organizing and generating the different views used in advanced component-based development methods, referred to as Orthographic Software Modeling (OSM). This provides a simple metaphor for integrating different development paradigms and for leveraging domain-specific languages in software engineering. Development environments that support OSM essentially raise the level of abstraction at which developers interact with their tools by hiding the idiosyncrasies of specific editors, storage choices and artifact organization policies. The overall benefit is to simplify the development, evolution and maintenance of component-based systems. [PDF]

January 2010

Application landscapes in large enterprises consist of hundreds or thousands of highly connected semi-autonomous application systems which are designed, created, evolved, maintained, used and financed by people with diverse interests and sometimes incompatible educational background. We report on recent efforts in academia and industry to improve the long-term and strategic management of this core enterprise asset by improving the communication between these stakeholders. A key challenge is to develop models, visualizations, tools and management practices which simultaneously address social, technical and economic aspects in a balanced and pragmatic manner based on sound software, service and data modeling principles. [PDF]

2010-01-19 14:00 in S1:Component-Based Real-Time Operating System for Embedded Applications

Frédéric Loiret (INRIA, France)

As embedded systems must constantly integrate new functionalities, their development cycles must be based on high-level abstractions, making the software design more flexible. CBSE provides an approach to these new requirements. However, low-level services provided by operating systems are an integral part of embedded applications, which are furthermore deployed on resource-limited devices. Therefore, the expected benefits of CBSE must not impact the constraints imposed by the targeted domain, such as memory footprint, energy consumption, and execution time. In this talk, we will present the componentization of a legacy, industry-established real-time operating system, and how component-based applications are built on top of it. We use the Think framework, which allows producing flexible systems while paying for flexibility only where desired. The performed experiments show that the induced overhead is negligible. [PDF]

2010-01-13 09:00 in S5: The Progress Integrated Development Environment

2009-12-08 14:00 in S9: Linux Driver Verification

Alexey Khoroshilov (Linux Verification Center, ISPRAS)

The talk covers the main activities of the Linux Verification Center of ISPRAS (LinuxTesting.org): (i) the model-based (formal specification based) test suite OLVER covering more than 1500 interfaces of the Linux Standard Base Core, (ii) the LSB Infrastructure Program, which is a joint effort of the ISPRAS and the Linux Foundation, and (iii) the Linux Driver Verification Program, aimed at developing a domain-specific automatic verification toolset for Linux device drivers. The latter program is discussed in detail, including the specifics of the domain (Linux device drivers) and their consequences for verification tools, a repository of domain-specific rules to be verified, our experience with BLAST and other approaches, and first results and future plans. [PDF]

November 2009

2009-11-25 09:00 in S5: Automated Verification of Software

Runtime verification and runtime enforcement are lightweight and effective software validation techniques that do not require complete and accurate formal models of the system under consideration. Thus, they can be easily applied to service-oriented applications, where components can be dynamically updated or available only as binary code. In this presentation we will discuss the expressive power of these techniques in the framework of the safety/progress classification of linear temporal properties. We will give an efficient procedure to generate a validation or enforcement monitor from a given property, and we will discuss some practical implementation issues through our prototype tool called j-VETO (Java Validation and Enforcement Tool). [PDF]

2009-11-11 09:00 in S5: Study of the SpMV performance in a NUMA architecture using the Roofline Model

2009-11-10 14:00 in S9: HelenOS Architecture Description

Status update on the progress of formal description of the architecture and behavior of HelenOS. General approach, the use of ADL and BP, challenges and open questions. [PDF]

2009-11-04 09:00 in S5: Domain-Specific Software Component Models

Michal Malohlava

Domain modeling and engineering is currently a massively adopted approach. Different types of domain-specific languages (DSLs), as well as frameworks for deriving such DSLs, have been introduced. However, in the domain of component-based systems, pure domain-specific component models do not exist. This talk presents an approach proposed by Kung-Kiu Lau and Faris M. Taweel to derive domain-specific component models for a given domain. [PDF]

2009-11-03 14:00 in S9: Extending the OMG’s D&C specification for the design and analysis of real-time component-based applications

October 2009

2009-10-27 14:45 in S9:Automated Versioning As a Mechanism for Component Software Consistency Guarantee

Jaroslav Bauml (University of West Bohemia in Pilsen)

The existing consistency controls in component systems at build time or at runtime cannot prevent type mismatch failures caused by independent client and server bundle development. Our solution to this problem uses automated versioning of components. Version identifiers are generated from the results of a subtype-based comparison of component representations, thus achieving a consistent and formally backed interpretation of the version numbering scheme. [PDF]
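The following is a simplified sketch of how such automated versioning might work, with component representations reduced to sets of signature strings. The change-category names and bump rules are illustrative assumptions, not the scheme actually presented in the talk.

```python
def compare_interfaces(old, new):
    """Classify the change between two interface representations,
    given as sets of method signatures -- a simplified stand-in for
    the subtype-based comparison of component representations."""
    if new == old:
        return "identical"
    if new >= old:                # every old signature still provided
        return "specialization"   # subtype of the old interface
    return "mutation"             # something was removed or changed

def next_version(version, change):
    """Derive the next version identifier from the comparison result,
    so the numbering scheme has a fixed, formally backed meaning."""
    major, minor, micro = version
    if change == "identical":
        return (major, minor, micro + 1)
    if change == "specialization":
        return (major, minor + 1, 0)   # backward-compatible extension
    return (major + 1, 0, 0)           # incompatible change

old = {"get():int", "set(int):void"}
new = {"get():int", "set(int):void", "reset():void"}
change = compare_interfaces(old, new)
version = next_version((1, 2, 3), change)
```

Because the version number is computed rather than assigned by hand, a client can rely on the guarantee that, say, a minor bump never removes a signature it depends on.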

Component interchangeability checks will fail if extra-functional properties are not resolved properly. Although a lot of work on the description of extra-functional properties has been done, a solution targeting their context independence has not been proposed yet. We have been working on a model hiding direct values of the properties, which allows us to deploy components in a set of contexts without destroying the scales of values. We propose a novel system of registries which encapsulates the direct (context-dependent) values in a registry prepared for each context. A component's properties descriptor consequently uses only abstract names of values, which are linked to the registries.

2009-10-21 09:00 in S5: The RTEmbed Extension for JPF

Overview of the current state of the Q-ImPrESS project will be presented; the focus will be set on the current state of the tools, on the outlook to the nearest future, and identification of actions to be performed to prepare the Q-ImPrESS tools for annual review. Please note that the presentation will be quite technical and insight into the project technologies is a prerequisite for understanding the talk. [PDF]

2009-10-14 09:00 in S5: Scala programming language

2009-10-13 14:00 in S9: Web application optimization

2009-10-07 09:00 in S5: State Dependence in Performance Evaluation of Component-Based Software Systems

Barbora Bühnová [Zimmerová] (FI MUNI)

Performance evaluation of (component-based) system models is highly dependent on the level of detail used for building the models. This talk focuses on one particular construct, which is not getting much attention in existing performance models - the context and history-dependent internal state of the system (or its components). In the talk, we define the state, design a classification of possible state types, and discuss the performance impact of the state types, together with the implied increase of the model size. [PDF]

Model-driven performance prediction methods require abstract design models to evaluate the performance of the target system during early development stages. Developing such abstract models including important implementation details should not incur additional effort for performance engineers. The talk focuses on performance completions which hide low-level details from performance engineers and introduces their automatic integration using model-driven technologies. [PDF]

September 2009

2009-09-22 14:00 in S9: topic to be announced

2009-09-08 09:00 in S9:Q-Impress coding session report

August 2009

2009-08-25 14:00 in S9: When Opportunity Proceeds from Autonomy: A Tour-Based Architecture for Disconnected Mobile Sensors

Radim Bartos (University of New Hampshire, Durham)

We consider the case of sparse mobile sensors deployed to implement missions in challenging environments. This paper explores a notion of tour networks that is well suited to circumstances in which autonomous sensing agents cannot rely on standard networking abstractions and must create their own opportunities for communication and interaction. Tours are high-level building blocks that combine motion, communication and sensing and can be assembled to implement a broad class of autonomous sensing missions. They are supported by an architecture designed to deliver performance and robustness without compromising design abstraction.

In the aeronautical context, numerous regulations constrain both the building of aircraft (A/C) and the way they are used. In the context of many European projects, mainly supported by DG TREN (Directorate-General for Energy and Transport), the underlying concepts either must be compliant with current regulations, or impact them so much that the regulations must be updated. In the INOUI (Innovative Operational UAV Integration) European project, as computer scientists, we have developed and implemented MISR, a methodology that checks compliance with air regulations (mainly CS25 from EASA and Annex II of ICAO). This methodology begins with specifying the concepts under consideration – A/C and UAVs – and the current regulations. UML is used as a rigorous notation. Class diagrams support the structural view of the integration of A/C and UAVs during the different phases of flight, whereas state-transition diagrams describe the dynamic aspects of the concepts. MISR has been specifically developed to help derive, from state-transition diagrams, the safety requirements that UAVs must meet. The talk will present the methodology and the first available results.

Frama-C is a collection of tools for the analysis of the source code of software written in C. Frama-C is developed in the context of the ES_PASS project at CEA LIST (France). The Jessie plug-in of Frama-C allows the deductive verification of C programs, whose behavior has been formally specified with the ANSI C Specification Language (ACSL). Frama-C can often automatically prove that a piece of C software corresponds to its ACSL-specification. At Fraunhofer FIRST we use Frama-C for the verification of safety-critical railway software. In my talk I will give an overview on ACSL and will explore opportunities and limits of deductive verification with Frama-C.

Random testing is a fundamental testing technique. Recently, we have proposed how to improve the fault-detection capability of random testing by enforcing a more even, well-spread distribution of test cases over the input domain. Such an approach is called adaptive random testing (ART). In this seminar, we will cover: 1. the motivation; 2. failure-pattern-based testing; 3. various principles that could enforce an even spread of test cases, and the advantages and disadvantages of their corresponding ART implementations; 4. a comparison of random testing and adaptive random testing with respect to various testing effectiveness measures; 5. applications of ART.
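One of the best-known principles for enforcing an even spread is the fixed-size-candidate-set variant of ART: each new test case is the random candidate farthest (by minimum distance) from all tests executed so far. The sketch below illustrates it on a one-dimensional numeric domain; real ART studies use various distance metrics and input domains, and the parameter values here are arbitrary.

```python
import random

def adaptive_random_tests(domain, n_tests, k=10, seed=0):
    """Fixed-size-candidate-set ART: generate k random candidates per
    round and keep the one farthest from all previously executed tests,
    spreading the test cases evenly over the input domain."""
    rng = random.Random(seed)
    lo, hi = domain
    executed = [rng.uniform(lo, hi)]          # first test is purely random
    for _ in range(n_tests - 1):
        candidates = [rng.uniform(lo, hi) for _ in range(k)]
        best = max(candidates,
                   key=lambda c: min(abs(c - t) for t in executed))
        executed.append(best)
    return executed

# The motivation: failure-causing inputs often form contiguous regions,
# so well-spread tests tend to hit such a region with fewer test cases.
executed = adaptive_random_tests(domain=(0.0, 1.0), n_tests=50)
failures = [t for t in executed if 0.40 <= t <= 0.45]
```

Plain random testing would draw all 50 points independently; ART's extra distance computations buy a more even coverage of the domain, which is exactly the trade-off the seminar's comparison (item 4) examines.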

May 2009

2009-05-27 09:00 in S5: A Self Introduction & A brief outline Of Education In India

Durga Prasana Sahoo

2009-05-20 09:00 in S5: topic to be announced

Peter Libic

2009-05-19 14:00 in S9: Engineering a State of the Art Software Model Checker: MoonWalker

Viet Yen Nguyen (RWTH Aachen)

Good model checkers are hard to engineer. State space explosion, unoptimised code and outdated algorithms easily lead to a poorly performing model checker. This is our experience from engineering MoonWalker, a model checker for .NET programs, which is the central topic of this talk. The talk is outlined as follows: first, we introduce MoonWalker and give a live demo where we use it to hunt for a hard-to-spot data race. This is followed by an overview of its implementation, with a particular focus on the novel algorithms and techniques it employs to effectively traverse the state space. The talk concludes with the engineering problems we encountered, how they inspired us to devise novel solutions, and which challenges lie ahead for future work.

2009-05-13 09:00 in S5: Summary of the selected papers from ETAPS'09

Ondrej Sery

The talk will summarize selected papers from the ETAPS'09 conference. The goal is to present the most recent advancements and current trends in testing, verification, and model checking of code.

2009-05-12 14:00 in S9: Scade

Michal Malohlava

April 2009

2009-04-29 09:00 in S5: Summary of selected papers from ETAPS'09

Pavel Parizek

The talk will focus on papers that present application of formal methods in software development tasks. [PDF]

2009-04-28 14:00 in S9: Overview of recent projects and activities

Michal Bečka

2009-04-22 09:00 in S5: Virtualization and VMware

Martin Decky, Pavel Jezek

2009-04-21 14:00 in S9: What is a Multi-Modeling Language?

Martin Wirsing (Ludwig-Maximilians-Universität München)

In large software projects often multiple modeling languages are used in order to cover the different domains and views of the application and the language skills of the developers appropriately. Such “multi-modeling” raises many methodological and semantical questions, ranging from semantic consistency of the models written in different sublanguages to the correctness of model transformations between the sublanguages. We provide a first formal basis for answering such questions by proposing semantically well-founded notions of a multimodeling language and of semantic correctness for model transformations. In our approach, a multi-modeling language consists of a set of sublanguages and correct model transformations between some of the sublanguages. The abstract syntax of the sublanguages is given by MOF meta-models. The semantics of a multi-modeling language is given by associating an institution, i.e., an appropriate logic, to each of its sublanguages. The correctness of model transformations is defined by semantic connections between the institutions.

2009-04-15 09:00 in S5: Summary of the selected papers from ETAPS'09

Ondrej Sery

The talk will summarize selected papers from the ETAPS'09 conference. The goal is to present the most recent advancements and current trends in testing, verification, and model checking of code. [PDF]

2009-04-14 14:00 in S9: Service Architectures - Open Issues

Martin Necasky

In this presentation we will briefly introduce architectures based on services, namely Service-Oriented Architectures and Event-Oriented Architectures, and we will discuss some open research problems in this area. These problems include conceptual modeling, storage, integration, evolution and security. We will introduce our new results as well as our future research related to service architectures.

2009-04-08 09:00 in S5: Partial Order Reduction for State/Event LTL

Nikola Benes (Faculty of Informatics, Masaryk University, Brno)

Software systems assembled from a large number of autonomous components become an interesting target for formal verification due to the issue of correct interplay in component interaction. State/event LTL incorporates both states and events to express important properties of component-based software systems. The main contribution of this work is a partial order reduction technique for verification of state/event LTL properties. The core of the partial order reduction is a novel notion of stuttering equivalence which we call state/event stuttering equivalence. The positive attribute of the equivalence is that it can be resolved with existing methods for partial order reduction. State/event LTL properties are, in general, not preserved under state/event stuttering equivalence. To this end we define a new logic, called weak state/event LTL, which is invariant under the new equivalence.

2009-04-07 14:00 in S9: Report on a tool: CHESS

Ondrej Sery

CHESS is a tool for finding and reproducing Heisenbugs in concurrent programs. CHESS repeatedly runs a concurrent test, ensuring that every run takes a different interleaving. If an interleaving results in an error, CHESS can reproduce the interleaving for improved debugging. CHESS is available for both managed and native programs. [http://research.microsoft.com/en-us/projects/chess/default.aspx] [PDF]
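The core idea — systematically running a test under every thread interleaving — can be sketched in a few lines of Python. This toy enumerator explores all schedules by brute force, without CHESS's pruning of equivalent interleavings or its context bounding; the buggy counter is a made-up example.

```python
from itertools import permutations

def interleavings(threads):
    """Enumerate all interleavings of the per-thread step lists.
    Each thread is a list of atomic steps (callables on shared state)."""
    slots = tuple(tid for tid, steps in enumerate(threads) for _ in steps)
    for order in set(permutations(slots)):   # each distinct schedule once
        positions = [0] * len(threads)
        schedule = []
        for tid in order:
            schedule.append(threads[tid][positions[tid]])
            positions[tid] += 1
        yield schedule

def run(schedule):
    # Shared state: both threads do a non-atomic increment of `counter`
    # (load into a thread-local slot, then store local + 1).
    state = {"counter": 0, "t0": None, "t1": None}
    for step in schedule:
        step(state)
    return state["counter"]

t0 = [lambda s: s.update(t0=s["counter"]),       # load
      lambda s: s.update(counter=s["t0"] + 1)]   # store
t1 = [lambda s: s.update(t1=s["counter"]),
      lambda s: s.update(counter=s["t1"] + 1)]

# Exhausting the schedules exposes the lost update: some interleavings
# yield counter == 2, but the racy ones yield counter == 1.
results = {run(schedule) for schedule in interleavings([t0, t1])}
```

A single stress-test run would hit the bad interleaving only by luck; enumerating schedules, as CHESS does, finds it deterministically and can replay the exact offending schedule.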

2009-04-01 09:00 in S5: Modeling cache sharing

March 2009

2009-03-18 09:00 in S5: TBP vs. Code: Extraction and Verification

Pavel Parizek, Tomas Poch

2009-03-17 14:00 in S9: Computer-Assisted Proving for the Analysis of Systems and Specifications

Wolfgang Schreiner (Johannes Kepler University, Linz, Austria)

Nowadays the formal methods landscape is dominated by model checking, an approach to the verification of programs/systems that is fully automatic but limited in its scope, in particular with respect to verifiable properties. In this talk, we discuss the alternative and more general direction of computer-assisted interactive proving which in the last decade (due to the development of automatic "Satisfiability Modulo Theories" solvers as supporting components) has made tremendous progress; consequently practical formal reasoning on more general system models and system properties has become viable. Furthermore, while not fully automatic and depending on human assistance, this approach has the potential of increasing a programmer's understanding (why is a system correct or what are the fundamental reasons of its failure) much more than the plain yes/no answers and counterexample executions produced by fully automatic model checkers. As an example of this direction, we present the "RISC ProofNavigator", an interactive proving assistant developed for education in reasoning about programs and specifications. Finally, we argue why modern computer science curricula should teach the use of proving assistants for formal specification and verification as an integral part of developing correct software. [PDF]

2009-03-11 09:00 in S5: TBP vs. Code: Extraction and Verification

Pavel Parizek, Tomas Poch

2009-03-10 14:00 in S9: Parallel Verification of LTL(F,G) Properties

Jitka Kudrnacova

The computational complexity of model checking is the reason why researchers are trying to find ways to lower the amount of memory and time needed for the model checking procedure. When the specification formulae for model checking are restricted to the fragment LTL(F,G), the specification formula can be translated into a finite disjunction of formulae in a special form, called alpha-formulae. Then, instead of verifying the original formula, we verify every formula in the disjunction. The implementation of this translation of formulae from the fragment LTL(F,G) into a finite disjunction of alpha-formulae is the main objective of this thesis. Since every alpha-formula can be easily translated into a Büchi automaton, and this automaton will very likely be significantly smaller than the automaton obtained by the traditional translation of LTL formulae into Büchi automata, the translation of alpha-formulae into Büchi automata will be implemented as well. Afterwards, model checking of alpha-formulae will be evaluated within the DiVinE environment.

February 2009

2009-02-25 09:00 in S5:

Mini-workshop - ADAM Team, Lille, France

2009-02-24 13:30 in S9:

Mini-workshop - ADAM Team, Lille, France

January 2009

2009-01-14 09:00 in S5: Unit checking for Java IDE

Michal Kebrt

Unit testing is a frequently used technique for ensuring quality of software during its development and maintenance. Frameworks (e.g., JUnit) exist that facilitate creation, execution, and evaluation of simple unit tests. With the increasing popularity of software model checking and availability of tools (e.g., Java PathFinder), the place of software model checking in the development cycle is being discussed. One of the ideas is to employ model checking in a way similar to unit testing; i.e., to create simple scenarios for checking small portions of software—unit checking. The benefit lies in the ability of a model checker to exhaustively examine all possible executions (including random choices and all thread interleavings). We will present results collected in past months while working on a diploma thesis which aims at extending Java PathFinder to support checking of JUnit tests. Problems that appeared on both the JPF and JUnit sides will be shown. We will also introduce an Eclipse plugin that allows running JUnit tests under JPF.

2009-01-13 14:00 in S9: Flexible component runtime

2009-01-06 14:00 in S9: Entities and dynamic reconfigurations

December 2008

2008-12-16 14:00 in S9: Evaluating the Visual Syntax of UML: Improving the Cognitive Effectiveness of the UML Suite of Diagrams

Daniel L. Moody (University of Twente)

UML is a visual language. However surprisingly, there has been very little attention in either research or practice to the visual notations used in UML: both academic analyses and revisions to UML have focused almost exclusively on semantic issues, with little debate about or modification to the visual notations. We believe that this is a major oversight and that for this reason, UML’s "visual development" is lagging behind its "semantic development". The lack of attention to visual aspects is particularly surprising given that the form of representations is known to have a comparatively greater effect on understanding and problem solving performance than their content. The UML visual notations were developed in a bottom-up manner, by reusing and synthesising existing notations. There is little or no justification for choice of graphical conventions, with decisions based on "expert consensus" rather than scientific evidence. This paper evaluates the UML family of diagrams using a set of predefined principles for visual notation design, which are based on theory and empirical evidence from communication, semiotics, graphic design, visual perception, psychophysics and cognitive science. This is the first comprehensive analysis of the UML visual language and covers all diagram types, constructs and symbols in the latest release of UML (2.1.2). The paper identifies some serious design flaws in the UML visual notations together with practical recommendations for improvement. The conclusion from the analysis is that radical surgery is required to the UML visual notations for them to be cognitively effective. We believe that the time is right for a major revamp of the UML visual notations now its semantics (the UML metamodel) is relatively complete and stable.

Often, academic approaches do not make it to industrial application, for several reasons. We are very interested in validating and improving our approach in an industrial setting. In this talk we will see how we can use the approach presented in the first talk to solve real problems in component-based systems. To show this, we will give an overview of several properties and optimizations of the defined verification process that we are working on. Finally, we will present the cornerstones of the case study we are currently performing with our industrial partner, together with early results. [PDF]

Often, component-based systems describe their interfaces purely by signatures. Using type checking, it can be verified automatically whether components are used consistently with their signatures. However, checking signatures alone is not sufficient for checking the correct usage of components. General specifications (e.g. contracts) are more useful, but fully automatic checking of preconditions cannot be expected. An intermediate approach between these two extremes is protocol checking: such approaches assume that components implement interfaces, and that component protocols are described by finite state machines defining the legal sequences of procedure calls to a component. Checking conformance to a component protocol then means checking whether there is a sequence of procedure calls to the component that is not accepted by its protocol. Related work also uses finite state machines to describe the use of a component and reduces the protocol checking problem to a language inclusion problem between two regular languages. These approaches can construct counterexamples, i.e. sequences of procedure calls to a component that violate the component's protocol. Such finite state machine approaches can also deal with concurrent execution. However, we show that if recursion is present (e.g. due to recursive call-backs), finite state machine approaches can lead to false positives: although the approaches confirm protocol conformance, there are protocol violations. By modelling the use of components with pushdown machines, it is possible to avoid this problem; however, pushdown machines do not model concurrency. Our main contribution is the extension of the above approaches for protocol conformance checking to both recursion and concurrency. For modelling recursive procedures and concurrent execution we use process rewrite systems (Mayr 1997). Using Mayr's decidability results, we show that the protocol conformance checking problem then becomes undecidable. Therefore, we use a conservative approach to checking protocol conformance. False negatives introduced by the conservative approximation might be mitigated by a CEGAR approach (counterexample-guided abstraction refinement) or by safely pruning the search for counterexamples. Finally, we show that process rewrite systems can be made compositional, which enables their use for component systems without the availability of the components' code. [PDF]
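To make the finite-state-machine style of protocol checking described above concrete, here is a minimal sketch (all names are hypothetical, not from the talk) of a component protocol as an automaton over procedure-call names, checked dynamically by advancing the automaton on each call:

```java
import java.util.HashMap;
import java.util.Map;

// A component protocol as a finite automaton over procedure-call names.
// A call that the automaton cannot accept signals a protocol violation.
public class Protocol {
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private String state;

    public Protocol(String initialState) { this.state = initialState; }

    public void addTransition(String from, String call, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(call, to);
    }

    // Advance the automaton; returns false on a protocol violation.
    public boolean call(String procedure) {
        Map<String, String> out = transitions.get(state);
        if (out == null || !out.containsKey(procedure)) return false;
        state = out.get(procedure);
        return true;
    }
}
```

For example, an open/read/close protocol accepts the sequence open, read, close, but rejects a read after close. Note that such a dynamic monitor only detects violations on observed runs; the static checking discussed in the talk decides language inclusion over all runs.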

2008-12-01 14:00 in a lecture room announced later: On the Observable Behaviour of Composite Components

Rolf Hennicker (Ludwig-Maximilians-Universität München)

The crucial strength of the component paradigm lies in the possibility to encapsulate behaviours. In this work, we focus on the observable behaviour of composite components which encapsulate the behaviour of (possibly large) assemblies of connected subcomponents. We first present our general component model which is equipped with a precise formal semantics allowing us to distinguish systematically different kinds of behaviours for ports, for components, and for component assemblies; technically we use UML2 notation for describing component structures and I/O-transition systems for behaviours. Then we investigate an efficient method for the computation of the observable behaviour of composite components which can circumvent the possibly infeasible intermediate computation of the usually complex behaviour of underlying assemblies if there are behaviourally neutral subcomponents. Finally, we utilise the fact that components are connected via ports such that checks for behavioural neutrality of components can be reduced to checks for behavioural neutrality of connected ports in the case of weakly deterministic port behaviours. [PDF]

November 2008

2008-11-26 09:00 in S5: ADCSS 2008 - Report from ESA workshop

When designing a metamodel for a particular application domain, offering just the structural view of allowed concepts and relationships is not enough. Most often, there are additional constraints that must be enforced which cannot be expressed in a graphical manner. These take the form of metamodel WFRs (Well-Formedness Rules) and may be stated either in natural language or using a textual formalism such as OCL. The latter approach is desirable, since it makes it possible to take advantage of automatic tool support in checking models' consistency with respect to the stated WFRs. We illustrate these ideas using a proposal for a component metamodel, together with the OCLE tool. Within the universe of OCL-supporting tools, OCLE distinguishes itself by the fact that it not only reports validation errors, but also offers valuable aid in identifying the reasons for failure and correcting them in real time. Since the same WFR can generally be expressed by multiple equivalent OCL expressions, we show that the shape of such an expression is highly important from an evaluation perspective. [PDF]

2008-11-18 14:00 in S9: The Java Modeling Language

Viliam Holub

The Java Modeling Language (JML) is a design-by-contract formal specification language for Java. JML keeps close to Java's syntax, making the language easier to learn, but extends it with special constructs where the original syntax lacks expressiveness. In this talk, we will explain the basic constructs for annotating Java classes and interfaces, as well as more advanced constructs. [PDF]
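As a small illustrative sketch (the class and its contracts are invented for this example, not taken from the talk), JML specifications live in special comments (`//@` or `/*@ ... @*/`), so an annotated class remains plain compilable Java; a JML checker interprets the clauses:

```java
// A bounded counter with JML contracts. The requires/ensures clauses and
// the invariant are JML; \old refers to the pre-state, \result to the
// return value, and "pure" marks a side-effect-free method.
public class BoundedCounter {
    //@ public invariant 0 <= count && count <= max;

    private final int max;
    private int count;

    //@ requires limit > 0;
    //@ ensures this.max == limit && this.count == 0;
    public BoundedCounter(int limit) {
        this.max = limit;
        this.count = 0;
    }

    //@ requires count < max;
    //@ ensures count == \old(count) + 1;
    public void increment() { count++; }

    //@ ensures \result == count;
    public /*@ pure @*/ int getCount() { return count; }
}
```

A runtime assertion checker or a static verifier can then check calls such as `new BoundedCounter(3).increment()` against these clauses.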

The Real-Time Specification for Java (RTSJ) is becoming a popular choice in the world of real-time and embedded programming. However, RTSJ introduces many non-intuitive rules and restrictions which prevent its wide adoption. In this talk we extend our philosophy, postulated in our prior work, that RTSJ concepts need to be considered at early stages of software development, into a framework that provides a continuum between the design and implementation process. A component model designed specifically for RTSJ serves here as a cornerstone. As the first contribution of this work, we propose a development process where RTSJ concepts are manipulated independently of functional aspects. Second, we mitigate the complexities of RTSJ development by automatically generating an execution infrastructure where real-time concerns are transparently managed. We thus allow developers to create systems for variously constrained real-time and embedded environments. Benchmarks show that the overhead of the framework is minimal in comparison to manually written object-oriented applications, while providing more extensive functionality. Finally, the framework is designed with stress on dynamic adaptability of target systems, a property we envisage as fundamental in an upcoming era of massively developed real-time systems. [PDF]

Self-management is put forward as one of the means by which we could provide systems that are scalable, support dynamic composition and rigorous analysis, and are flexible and robust in the presence of change. In the talk, we focus on architectural approaches to self-management, not because the language-level or network-level approaches are uninteresting or less promising, but because we believe that the architectural level provides the required level of abstraction and generality to deal with the challenges posed. A self-managed software architecture is one in which components automatically configure their interaction, in a way that is compatible with an overall architectural specification, to achieve the goals of the system. The objective is to minimise the degree of explicit management necessary for construction and subsequent evolution whilst preserving the architectural properties implied by its specification. The talk discusses some of the current promising work and presents an outline three-layer reference model as a context in which to articulate some of the main outstanding research challenges. In addition, the talk will describe a prototype system being developed at Imperial College that conforms to this model.

This seminar addresses behavioural specification of distributed components. We present a specification language that allows us to specify the behaviour of distributed components, and behavioural models that can be created from instances of the specification language. The specification language is close to Java, and provides a powerful high-level abstraction of the system. It deals with hierarchical components that communicate by asynchronous method calls, gives the component behaviour as a set of services, and provides semantics close to a programming language by dealing with abstractions of user-code. The benefits are twofold: (i) we can interface with verification tools, so we are able to verify various kinds of properties; and (ii), the specification is complete enough to generate code-skeletons defining the control part of the components.

The Oasis team develops methods for programming large-scale Grid applications using (distributed) component-based techniques. One important point in this approach is to provide formal models, and software tools, to guarantee the properties of components, of their composition, and of application management, including reconfiguration of the components. In this presentation, we will describe a behavioural model that is suitable for modelling such applications and for reasoning about their behavioural properties. The model is called "Parameterized Networks of Synchronised Automata" (pNets), and is both expressive and compact (hierarchical and parameterized by data variables). We will show how we generate pNets models for Fractal components (including their non-functional management) and for distributed components. We will discuss abstraction methods for building finite abstract models preserving behavioural properties of our components. Finally, we will describe our prototype tool platform Vercors and show some results on a case study.

October 2008

2008-10-22 09:00 in S5: A CBSE approach for the development of trustworthy systems

Mubarak Mohammad (Concordia University, Canada)

Developing trustworthy software systems that are complex and used by a large heterogeneous population of users is a challenging task. Component-based software engineering (CBSE) has many attractive features that can provide an effective solution to these challenges. However, the essential requirements of CBSE have not been met by current approaches. Therefore, we present a CBSE approach comprising three contributions. The first contribution is a component model that defines trustworthiness quality attributes as first-class structural elements. This enables us to formally verify trustworthiness properties and demonstrate that a high level of trustworthiness has been achieved. In our approach, formalism is integrated into the various stages of the development process; our second contribution is a process model that plays this role. The third and final contribution is a development framework with comprehensive tool support. We describe the tools and justify their role in assuring trustworthiness during the different stages of software development. [PDF]

2008-10-21 14:00 in S9: A formal component model and ADL for trustworthy systems

Mubarak Mohammad (Concordia University, Canada)

Existing architecture description languages mainly support the specification of the structural elements of the system under design. These languages have either only limited support or no support for specifying non-functional requirements. In a component-based development of trustworthy systems, the trustworthiness properties must be specified at the architectural level, and analysis techniques should be available to verify them early, at design time. Towards this goal we present a meta-architecture based on formal foundations, and TADL, a new architecture description language suited for describing the architecture of trustworthy component-based systems. TADL is a uniform language for specifying the structural, functional, and non-functional requirements of component-based systems. It also provides a uniform source for analyzing the different trustworthiness properties using unified methods. Moreover, we present analysis techniques to generate behavior models and verify the trustworthiness properties by analyzing the TADL specification early, at design time. [PDF]

2008-10-15 09:00 in S5: Java Profiling

Peter Libic

Primarily a presentation of the paper by A. Georges, D. Buytaert and L. Eeckhout from the Department of Electronics and Information Systems, Ghent University, Belgium: Statistically Rigorous Java Performance Evaluation. A discussion of performance measurement methodologies and guidelines on how to interpret results so that they are correct from a statistical perspective. The main focus is the Java language, but the ideas are applicable to all languages. [PDF]

2008-10-08 09:00 in S5: Resource Experiments in Q-Impress

2008-10-07 15:40 in S3: An Introduction to Top-k and Skyline Computation

Apostolos N. Papadopoulos (Aristotle University of Thessaloniki)

Preference queries are very important to users, since they return the "best" objects according to some criteria. In this talk, we discuss the fundamental issues regarding Top-k and Skyline computation, which constitute the most important methods for determining the objects that best match users' preferences. More specifically, we study the fundamental research contributions by Ronald Fagin towards Top-k processing and an efficient Skyline algorithm. Moreover, we provide the necessary background material (e.g. R-trees) and give application examples where preference queries are of enormous interest. Additionally, we touch some advanced topics and briefly discuss research directions in the area.
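To make the skyline notion concrete, a point is in the skyline if no other point dominates it, i.e. is at least as good in every dimension and strictly better in at least one. A minimal block-nested-loop sketch (assuming smaller values are preferred, e.g. price and distance; not the optimized algorithms covered in the talk):

```java
import java.util.ArrayList;
import java.util.List;

// Naive O(n^2) skyline computation over d-dimensional points,
// where smaller coordinate values are preferred in every dimension.
public class Skyline {
    // True if a dominates b: a is <= b everywhere and < b somewhere.
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;      // a is worse in this dimension
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    public static List<double[]> compute(List<double[]> points) {
        List<double[]> skyline = new ArrayList<>();
        for (double[] p : points) {
            boolean dominated = false;
            for (double[] q : points) {
                if (q != p && dominates(q, p)) { dominated = true; break; }
            }
            if (!dominated) skyline.add(p);
        }
        return skyline;
    }
}
```

For the points (1,9), (2,2), (9,1), (5,5), the point (5,5) is dominated by (2,2) and the skyline consists of the other three. Index structures such as the R-trees mentioned above exist precisely to avoid this quadratic scan.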

Web 2.0 represents the recent evolutionary shift towards a more user-oriented and user-driven web. The presentation will try to distil the essence of this concept: identify the distinguishing characteristics of the new web, illustrate them on successful services, and comment on the trends. As a specific topic, the phenomenon of integrated services ("mashups") will be discussed, pointing out its crucial issues, including technological as well as legal and business aspects.

2008-10-01 09:00 in S5: INRIA Internship Experience

Michal Malohlava

The presentation describes experiences gained during an internship in the ADAM project-team at INRIA. It presents the results of work done in the context of generating an execution infrastructure for RTSJ-based component systems. Besides this research-related information, the talk also offers several notes about living and working in France. [PDF]

September 2008

2008-09-23 14:00 in S9: Formal Verification of Components in Java

Pavel Parízek

Formal verification of a hierarchical component application involves (i) checking of behavior compliance among sub-components of each composite component, and (ii) checking of implementation of each primitive component against its behavior specification and other properties like absence of concurrency errors. In this thesis, we focus on verification of primitive components implemented in Java against the properties of obeying a behavior specification defined in behavior protocols (frame protocol) and absence of concurrency errors. We use the Java PathFinder model checker as a core verification tool. We propose a set of techniques that address the key issues of formal verification of real-life components in Java via model checking: support for high-level property of obeying a behavior specification, environment modeling and construction, and state explosion. The techniques include (1) an extension to Java PathFinder that allows checking of Java code against a frame protocol, (2) automated generation of component environment from a model in the form of a behavior protocol, (3) efficient construction of the model of environment's behavior, and (4) addressing state explosion in discovery of concurrency errors via reduction of the level of parallelism in a component environment on the basis of static analysis of Java bytecode and various heuristics. We have implemented all the techniques in the COMBAT toolset and evaluated them on two realistic component applications. Results of the experiments show that the techniques are viable. [PDF]

The semantics of modelling languages are not always specified in a precise and formal way, and their rather complex underlying models make it a non-trivial exercise to reuse them in newly developed tools. We report on experiments with a virtual machine-based approach for state space generation. The virtual machine's (VM) byte-code language is straightforwardly implementable, facilitates reuse and makes it an adequate target for translation of higher-level languages like the SPIN model checker's PROMELA, or even C. As added value, it provides efficiently executable operational semantics for modelling languages. To evaluate the benefits of the proposed approach, several tools have been built around the VM implementation we developed, among them a model checker for Embedded Systems software.

2008-09-16 14:00 in S9: A Software Component Model with Encapsulation and Compositionality

Kung-Kiu Lau (University of Manchester, UK)

A software component model should define what components are and how they can be composed. Current models lack explicit composition operators with proper composition theories. In this talk we discuss a model we have defined. Our model has explicit composition operators that can be used in both the design phase and the deployment phase of the component life cycle. Composition operators themselves can be composed. Such composite composition operators are in fact design patterns. Furthermore, our model can also be specialised into domain-specific models by using domain models.

2008-09-02 14:00 in S9: Real-Time Java in Space On-Board Software

Martin Děcký

Results of evaluating the usability of two real-time Java implementations for space on-board software at SciSys UK Ltd. General properties of hard real-time systems used by the European Space Agency and of Real-Time Java are also discussed. The other part of the talk deals with the big picture of using Java in space, especially concerning explicit architecture description (components), verification of correctness, and other safety properties.

July 2008

2008-07-23 14:00 in S1: Performance Optimizations in High Performance Computing

Juan Ángel Lorenzo

Using hardware counters to improve the performance of irregular code on the FinisTerrae computing cluster.

May 2008

2008-05-27 14:00 in S9: CoCoME in SOFA

Peter Hladký

CoCoME, the Common Component Modeling Example, models a real trading system. The implementation of CoCoME serves as a benchmark for software modeling technologies, including performance modeling. The goal is to create a version of CoCoME suitable for performance modeling, with hardware resource usage by individual parts of the application. The talk covers the technologies and procedures used to implement CoCoME in the SOFA component model, with components reflecting the existing SOFA CoCoME architecture.

2008-05-20 14:00 in S4: Concept location

Václav Rajlich (Wayne State University)

Concept location in source code is an evolution activity that identifies where a software system implements a specific concept. While it is well accepted that concept location is essential for the maintenance of complex procedural code like code written in C, it is much less obvious whether it is also needed for the maintenance of the Object-Oriented code. After all, the Object-Oriented code is structured into classes and well-designed classes already implement concepts, so the issue seems to be reduced to the selection of the appropriate class. The objective of our work is to see if the techniques for concept location are still needed (they are) and whether Object-Oriented structuring facilitates concept location (it does not).

The problem of data semantics is establishing and maintaining a correspondence between a data source (e.g., a database, an XML document) and its intended subject matter. We review the (relatively minor) role data semantics has played in Databases under the term "semantic data models", its more prominent place in ontology-based information integration, and then outline two new views: (i) Semantics as a composition of mappings between models, and (ii) Attaching intentional aspects (stakeholder goals) to Information Systems. (Joint work with John Mylopoulos and others at Univ. of Toronto)

The talk gives an overview of our previous and recent work in performance prediction of component systems with resource sharing. In the past we have been using the LQN formalism for the performance model of the CoCoME system. The talk discusses the benefits of switching to the QPN formalism and first results of the work in progress, demonstrating the new possibilities of resource modeling.

The purpose of the talk is to provide a basic overview of a new course called "Best Programming Practices" taught at the Department of Software Engineering. The goal of the course is to provide students with guidelines and techniques for writing quality code, attempting to fill the gap in the curriculum concerning this aspect of software development. In the talk, the motivation for introducing such a course will be presented, along with information on the course structure and requirements, as well as a brief outline of the contents with a few examples.

2008-04-17 14:00 in a lecture room announced later: Experimental work in explicit model checking

Radek Pelánek (FI MUNI)

We present BEEM (BEnchmarks for Explicit Model checkers) and experiments performed over this benchmark set, e.g., study of properties of state spaces, evaluation of error detection techniques, evaluation of techniques for reducing memory consumption of model checkers. We will also mention the model checker DiVinE and related research by verification group in Brno.

2008-04-09 09:00 in S5: Interact: A general contract model for the guarantee of components and services assemblies

Alain Ozanne (France Telecom R&D)

In industry as well as in research, commonly used component and service frameworks lack tools that enable reasoning in a generic way about the reliability and robustness of their architectural configurations. In this talk, I will present Interact, a contracting framework that meets this need. I will show how it can guarantee an assembly of components or services by organizing, in a generic way, the verification of their specifications. I will also explain how Interact can handle various formalisms and can be applied to different types of components and services.

2008-04-08 14:00 in S9: HelenOS IPC and Behavior Protocols

Martin Děcký

Overview of the features of HelenOS IPC mechanism and considerations about deploying Behavior Protocols as a formal base for run-time IPC communication checking.

2008-04-02 09:00 in S5: SOFA - internal mini-seminar

Michal Malohlava

2008-04-01 14:00 in S9: Typestate for Multiple Interacting Objects

Ondřej Lhoták (University of Waterloo)

Typestate is a formalism for specifying and verifying the temporal behaviour of an object. A finite automaton is associated with each object, and each operation is modelled by a transition of the automaton. However, the behaviour of an object-oriented system depends on interactions between multiple objects. We have extended typestate to collections of interacting objects. The intended behaviour is specified using a "tracematch", a temporal extension of an AspectJ pointcut. The behaviour can be checked dynamically or verified statically. A key challenge in verifying object behaviour is determining which objects are referenced by program variables at different times. This talk will present a precise static analysis that we have developed for verifying temporal safety properties of a system of interacting objects.
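A minimal dynamic sketch of a multi-object typestate property of the kind tracematches can express (the classic collection/iterator example; the class and its API are invented here, not from the talk): a cursor over a list becomes invalid as soon as the underlying list is modified, a property that involves two interacting objects rather than one.

```java
import java.util.ArrayList;
import java.util.List;

// A list whose cursors track a version number; modifying the list
// invalidates every previously created cursor. The pair (list, cursor)
// forms a multi-object typestate: legal use depends on both objects.
public class SafeList<T> {
    private final List<T> items = new ArrayList<>();
    private int version = 0;

    public void add(T item) { items.add(item); version++; }

    public Cursor cursor() { return new Cursor(version); }

    public class Cursor {
        private final int expectedVersion;
        private int index = 0;

        Cursor(int v) { expectedVersion = v; }

        // Returns null to signal a typestate violation (stale cursor)
        // or exhaustion, keeping the demo easy to observe.
        public T next() {
            if (expectedVersion != version) return null; // list was modified
            return index < items.size() ? items.get(index++) : null;
        }
    }
}
```

The static analysis described in the talk aims to prove such properties without running the program, which requires knowing which cursor belongs to which list at each program point.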

March 2008

2008-03-19 09:00 in S5: Q-ImPrESS

František Plášil

2008-03-18 14:00 in S9: Formalizing Threads in Behavior Protocols

Jan Kofroň

When reasoning about the behavior compatibility of communicating software components (e.g. in SOFA2), parallel processing of a method call may cause both the appearance of an artificial error and the hiding of a real one. In this talk, we will present a method of capturing the notion of a thread within a behavior specification to face this problem. We will also discuss related issues and, since this is still work in progress, future work.

2008-03-11 14:00 in S9: Report about "Workshop on CBSE Life Cycle (22-23 January 2008, Manchester, UK)"

Petr Hnětynka

2008-03-10 14:00 in a lecture room announced later: Service Compositions: From Models to Self-Management

Howard Foster (Imperial College London)

The talk and demonstrations will illustrate a rigorous approach to the engineering of services for service-oriented architectures and, in particular, web service compositions. We use formal model checking techniques to cover aspects of architecture, orchestration, choreography and deployment configurations for service compositions. A demonstration will illustrate our techniques using an Eclipse-based tool known as WS-Engineer. WS-Engineer is based upon the Labelled Transition System Analyser (LTSA) and provides mechanisms to assist engineers in developing and analyzing service compositions. The tool has also been adopted as part of academic courses teaching aspects of services science. The talk is based on work in London Software Systems, a grouping that includes academics at Imperial College London (the speaker, Jeff Magee, Jeff Kramer and Sebastian Uchitel) and at University College London (Wolfgang Emmerich, Anthony Finkelstein and David Rosenblum).

Howard Foster is currently a Research Fellow with the Department of Computing, Imperial College London. His research interests include services, compositions, choreography and model-based verification and validation techniques. He is also actively working in the area of software self-management and behavioral synthesis of software components. He obtained his PhD in 2006 at Imperial College London and has over 10 years of industrial experience as a Principal Consultant for leading business and IT professional service organizations.

2008-03-05 09:00 in S5: Behavior analysis with Blast

Ondřej Šerý

Status report on extending the Blast model checker to support checking C code against behavior specification. Presentation of the idea of encoding behavior specification as a separate CPA, summary of necessary changes in the Blast tool, demonstration of the prototype implementation and discussion on open issues and further directions.

2008-03-04 14:00 in S9: Behavior Extraction using Recoder

Tomáš Poch

Ensuring compatibility between a component behavior specification and its implementation is an important part of the component-behavior reasoning puzzle. The presented approach is an alternative to the technique of combining the state spaces of implementation and specification: instead, the specification is abstracted from the implementation. The abstraction is guided by the user to bridge the gap between the power of a general program and the limited expressiveness of the specification language.

February 2008

2008-02-27 09:00 in S5: Configurable Software Verification

Grégory Théoduloz (École Polytechnique Fédérale de Lausanne)

In automatic software verification, we have observed a theoretical convergence of model checking and program analysis. In practice, however, model checkers are still mostly concerned with precision, e.g., the removal of spurious counterexamples; for this purpose they build and refine reachability trees. Lattice-based program analyzers, on the other hand, are primarily concerned with efficiency. In this talk, I shall present an algorithm that can be configured to perform not only a purely tree-based or a purely lattice-based analysis, but offers many intermediate settings that have not been evaluated before. The algorithm and tool implementation take one or more abstract interpreters, such as a predicate abstraction, a shape analysis, or other abstract domains, and configure their execution and interaction using several parameters. Our experiments show that such customization may lead to dramatic improvements in the precision-efficiency spectrum.

2008-02-26 14:00 in S9: Inter-Object and Intra-Object Concurrency in Creol

Jasmin Blanchette (University of Oslo)

In traditional object-oriented languages, method calls are synchronous. This suits tightly coupled systems but leads to unnecessary delays in distributed environments. Another problem shared by thread-based object-oriented languages is that control threads may interfere with each other. Creol is a language for concurrent objects that addresses these issues through two novel language constructs: asynchronous method calls and explicit processor release points. The language is gradually being extended to incorporate facilities for reflective programming, dynamic updates, and a coordination language based on behavioral interfaces.
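Creol's asynchronous calls and release points have no direct Java counterpart, but a rough analogue (purely illustrative, not Creol itself) can be sketched with futures: the caller obtains a handle immediately and decides later when to block on the reply, instead of waiting synchronously at the call site.

```java
import java.util.concurrent.CompletableFuture;

// Rough Java analogue of a Creol asynchronous method call: the call
// returns a future at once, and the caller synchronizes explicitly.
public class AsyncDemo {
    static CompletableFuture<Integer> slowSquare(int x) {
        return CompletableFuture.supplyAsync(() -> x * x);
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> reply = slowSquare(7); // returns immediately
        // ... the caller may do other work here, loosely resembling
        // the point where a Creol object could release its processor ...
        int result = reply.join(); // explicit synchronization on the reply
        System.out.println(result);
    }
}
```

Unlike this sketch, Creol's processor release points also let other pending processes of the same object run while a reply is awaited, which futures alone do not capture.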

2008-02-13 14:00 in S5: Modular Functional Specifications or: How to break down complex things to build them up properly

Bernhard Schätz (TU München)

The construction of reactive systems often requires the combination of different individual functionalities, thus leading to a complex overall behavior. To achieve an efficient construction of reliable systems, a structured approach to the definition of the behavior is needed. Here, functional modularization supports a separation of the overall functionality into individual functions as well as their combination to construct the intended behavior, by using functional modules as basic paradigm together with conjunctive and disjunctive modular composition. Combined with a notion of refinement including the treatment of partiality as well as non-determinism and supported by automatic proof mechanisms, a methodical construction of complex reactive behavior is achieved.

January 2008

2008-01-22 14:00 in S9: Reo Networks and Symbolic Constraint Automata

Tobias Blechmann (TU Dresden)

Reo is a channel-based exogenous coordination language in which complex coordinators, called connectors, are compositionally built out of simpler ones. Constraint automata have been introduced as an operational model for connectors described in Reo. This talk introduces a symbolic representation for constraint automata and addresses the problem of compositionally building them for given Reo connectors. Furthermore, it reports on techniques to efficiently minimize a given symbolic constraint automaton and to check whether two given automata exhibit equivalent behavior.
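A data-free version of the compositional construction mentioned above can be sketched as follows (my own simplification: data constraints are omitted, and the port and state names are a toy example, not from the talk). A joint transition of the product fires only when both automata agree on the ports they share.

```python
# Product of two (data-free) constraint automata -- my simplification of
# the compositional construction: data constraints are dropped, and the
# port/state names below are a toy example, not from the talk.

def product(aut_a, aut_b):
    """Each automaton is (states, ports, transitions);
    a transition is (source, fired_port_set, target)."""
    s1, p1, t1 = aut_a
    s2, p2, t2 = aut_b
    trans = []
    for (q1, n1, r1) in t1:
        for (q2, n2, r2) in t2:
            if n1 & p2 == n2 & p1:          # both agree on shared ports
                trans.append(((q1, q2), n1 | n2, (r1, r2)))
        if not (n1 & p2):                    # aut_a fires alone
            trans.extend(((q1, q2), n1, (r1, q2)) for q2 in s2)
    for (q2, n2, r2) in t2:
        if not (n2 & p1):                    # aut_b fires alone
            trans.extend(((q1, q2), n2, (q1, r2)) for q1 in s1)
    return ({(a, b) for a in s1 for b in s2}, p1 | p2, trans)

# Toy connector: a synchronous channel A-B composed with a FIFO1 buffer B-C.
sync = ({"q"}, {"A", "B"}, [("q", frozenset({"A", "B"}), "q")])
fifo = ({"e", "f"}, {"B", "C"},
        [("e", frozenset({"B"}), "f"), ("f", frozenset({"C"}), "e")])
states, ports, trans = product(sync, fifo)
```

The product of the two toy automata has exactly the expected behavior: data flows through A and B together into the buffer, and leaves later through C.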

December 2007

2007-12-18 14:00 in S9: ProSave

Tomáš Bureš

2007-12-12 09:00 in S5: Miniseminar - Internal meeting

Petr Hnětynka

2007-12-11 14:00 in S9: Spec# and BoogiePL

Pavel Ježek

The Spec# language is a research extension of the standard C# language. It adds the specification of contracts to allow developers to write correct programs with less effort and with the support of standard tools. BoogiePL is the language at the center of the Spec# platform; it allows the platform to be used to verify programs in other languages (such as C, Java, or Eiffel) and also to test theorem provers other than Microsoft's Z3. The presentation is a mixture of talks by K. Rustan M. Leino from Microsoft Research, Redmond (with minor updates to reflect the current version of Spec#). The original versions of his talks can be found at http://research.microsoft.com/~leino/.
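The contract idea that Spec# adds to C# can be sketched in plain Python with a checking decorator; this is only an illustration of requires/ensures clauses, not Spec# syntax or its BoogiePL encoding, and the names are mine.

```python
# A requires/ensures contract checker in plain Python -- only an
# illustration of the Spec# contract idea, not Spec# syntax or its
# BoogiePL encoding. Contract violations surface as AssertionErrors.

def contract(requires=lambda *a: True, ensures=lambda r, *a: True):
    def wrap(func):
        def checked(*args):
            assert requires(*args), "precondition violated"
            result = func(*args)
            assert ensures(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(requires=lambda a, b: b != 0,
          ensures=lambda r, a, b: a == r * b + a % b)
def div(a, b):
    """Integer division: the divisor must be nonzero, and the result
    must satisfy the division identity."""
    return a // b
```

The crucial difference, of course, is that Spec# discharges such obligations statically via BoogiePL and a theorem prover, whereas this sketch only checks them at run time.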

2007-12-05 09:00 in S5: Java 7

Michal Malohlava

The new version of the Java language is being extensively discussed in the Java community. Many proposals for new features and language extensions exist, but only a few of them have actually been specified in a JSR. This talk tries to give a brief overview of the main features and projects that can be expected to become part of Java 7.

Real-time kernels are usually implemented as monolithic kernels, whereby the kernel is organized in a number of service layers that operate on shared data structures. The monolithic approach results in relatively large kernels, which are difficult to scale up or down to the requirements of specific embedded applications. Component-based design offers a solution to this problem by encapsulating data and system functions into subsystems implemented as reconfigurable components, which can be used to build various kernel configurations in accordance with application requirements. This presentation is about the HARTEX kernel framework, which defines kernel component and configuration models, and a formalized approach to component development and kernel configuration. It has been used to develop a repository of components that can be used to configure safe real-time kernels for hard-real-time embedded applications. In this framework, a component is defined as a self-contained unit encapsulating a kernel subsystem, such as task management, event management, resource management, etc. Complex components are decomposed into sub-components that implement an atomic functionality within the subsystem under consideration. Each component is specified in terms of public functions (primitives) and protected functions that are used by other components. Accordingly, kernel configurations are modelled by component call graphs that represent kernel components and their interactions. Configurations are actually developed by deriving a conformance class specification from the requirements specification of a real-time application, and then mapping it onto an appropriate subset of kernel components, augmented with relevant component dependencies. Kernel safety is enhanced by the rigorous design of kernel functions, using advanced algorithms that provide for very small overhead and constant execution time of kernel primitives, independent of the number of tasks involved.
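The component call graph and configuration idea described above can be sketched as a simple completeness check (the component and function names below are invented for illustration): a kernel configuration is acceptable only if every function required by a selected component is provided by some component in the selection.

```python
# A configuration completeness check over a component "call graph"
# (component and function names are invented for illustration): the
# selection is valid only if every required function is provided by
# some selected component.

COMPONENTS = {
    # name: (provided functions, required functions)
    "task_mgmt":  ({"create_task", "schedule"}, {"signal_event"}),
    "event_mgmt": ({"signal_event", "wait_event"}, set()),
    "res_mgmt":   ({"lock", "unlock"}, {"schedule"}),
}

def configuration_ok(selected):
    """Check that all inter-component dependencies are satisfied."""
    provided = set().union(*(COMPONENTS[c][0] for c in selected))
    required = set().union(*(COMPONENTS[c][1] for c in selected))
    return required <= provided
```

Deriving a configuration from a conformance class specification, as in the framework, then amounts to choosing the smallest selection of components for which such a check succeeds.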

2007-11-26 14:00 in S4: MINIX 3: A Reliable and Secure Operating System

Andrew S. Tanenbaum

Most computer users nowadays are nontechnical people and have a mental model of what they expect from a computer based on their experience with TV sets and stereos: you buy it, plug it in, and it works perfectly for the next 10 years. Unfortunately, they are often disappointed, as computers are not very reliable when measured against the standards of other consumer electronics devices. A large part of the problem is the operating system, which often consists of millions of lines of kernel code, each of which can potentially bring the system down. The worst offenders are the device drivers, which have been shown to have bug rates 3-7x higher than the rest of the system. As long as we maintain the current structure of the operating system as a huge single monolithic program full of foreign code and running in kernel mode, the situation will only get worse. While there have been ad hoc attempts to patch legacy systems, what is needed is a different approach. In an attempt to provide much higher reliability, we have created a new multiserver operating system with only about 4000 lines in the kernel and the rest of the operating system split up into small components, each running as a separate user-mode process. For example, each device driver runs as a separate process and is rigidly controlled by the kernel, which gives it the absolute minimum amount of power, to prevent bugs in it from damaging other system components. A reincarnation server periodically tests each user-mode component and automatically replaces failed or failing components on the fly, without bringing the system down and in some cases without affecting user processes. The talk will discuss the architecture of this system, called MINIX 3. The system can be downloaded for free from www.minix3.org.
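The reincarnation server mechanism can be sketched as a toy supervisor loop (structure and names below are mine, not MINIX 3 code): each user-mode driver is pinged periodically, and any driver that fails its health check is replaced in place by a fresh instance while the rest of the system keeps running.

```python
# Toy supervisor in the spirit of the reincarnation server (structure
# and names are mine, not MINIX 3 code): drivers failing a periodic
# health check are replaced by fresh instances in place.

class Driver:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def ping(self):
        """Health check; a real system would exchange a message."""
        return self.healthy

def reincarnate(drivers, log):
    """One supervision round: restart every driver failing its ping."""
    for i, drv in enumerate(drivers):
        if not drv.ping():
            log.append("restarting " + drv.name)
            drivers[i] = Driver(drv.name)   # fresh, healthy instance
    return drivers
```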

Since checking code is one of the crucial tasks necessary for the success of our current projects, several extensions of EBP that bring the specification closer to code will be presented. Moreover, the resulting language could be beneficial for performance evaluation.

2007-11-21 09:00 in S5: Formal Subgroup Miniseminar

František Plášil

2007-11-14 09:00 in S5: Overview of the BLAST model-checker

Ondřej Šerý

2007-11-13 14:00 in S4: Verifying Specifications with Proof Scores in CafeOBJ

Kokichi Futatsugi

Verifying specifications is still one of the most important underdeveloped topics in software engineering. It is important because quite a few critical bugs are introduced at the level of domains, requirements, and/or designs. It is also important in cases where no program code is generated and specifications are analyzed/verified only to justify models of problems in the real world. In this talk, a survey of our research activities in verifying specifications is given. After explaining the fundamental issues and the importance of verifying specifications, the proof score approach in CafeOBJ and its applications to several areas are described.

2007-11-07 09:00 in S5: SOFA – Internal meeting

Petr Hnětynka

2007-11-06 14:00 in S9: Klapper

Antonino Sabetta (ISTI CNR, Pisa)

2007-11-05 14:15 in a lecture room announced later: Platform-independent Performance Prediction

Michael Kuperberg (Universität Karlsruhe)

2007-11-05 12:30 in a lecture room announced later: Palladio

Klaus Krogmann (Universität Karlsruhe)

October 2007

During the last decade, the chair of software and systems engineering has conducted intensive research in the field of distributed systems, as found for example in embedded systems in modern cars and airplanes. Software and systems engineering thereby comprises suitable processes, methods, models and tools. In this talk, I will give an overview of our research results in the different phases of the engineering process. Development starts with the requirements engineering phase. RE is very important, because any errors introduced here can have a serious impact (read: costs) on later development activities. During system design, we obtain a profound understanding of the system under development through its specification using suitable models that range from coarse, abstract ones to fine-grained, implementation-related ones. Along these levels of abstraction, we usually specify the data the system works with, its structure or architecture, and its behavior, i.e., the dynamic aspects of the system parts. The system interface from the user's perspective must also be specified during the design phase. User acceptance relies heavily on the usability of the system; this is an important topic that has to be treated during requirements analysis and system design. In modern model-based software engineering processes, implementation is mostly reduced to code generation. However, deployment of the system into its runtime environment is a complex task that also uses specific models to define, e.g., task and bus schedules. Due to the complexity of modern systems and real applications, system development must follow a rigorous process and defined methods. Furthermore, these have to be supported by suitable tools that take care of development artifact management and allow complex engineering tasks and analyses to be executed in reasonable time.

2007-10-30 14:00 in S4: Modeling and Architecture of an Integrative Environmental Simulation System

Rolf Hennicker (LMU München)

We describe the concepts and design principles of an environmental simulation system which supports the study and analysis of water-related global change scenarios in the Upper Danube Basin. The system provides a Web-based platform integrating the distributed simulation models of all socio-economic and natural science disciplines taking part in the GLOWA-Danube project which is part of the German programme on global change in the hydrological cycle. Crucial aspects of the system development concern the specification of interfaces between simulation models, the treatment of the simulation space, the modeling of socio-economic actors and the coordination of coupled simulations for which we have developed a coordination framework. To ensure the correctness of the synchronization of concurrently running simulation models we have applied formal methods of process algebra.

2007-10-24 09:00 in S5: Reflecting Creation and Destruction of Instances in Component-Based Systems Modelling and Verification

Barbora Zimmerová (FI MUNI)

The talk discusses our solution to the issue of modelling and verifying communication behaviour in component-based systems that allow creation and destruction of component instances. In the talk, I first present a modelling technique for capturing each component type and component instance as a finite-state transition system, and define the system model as a collection of those. Then I present a verification technique we have defined for this type of system, and discuss its application to systems with dynamic instantiation of components at run time.

2007-10-23 14:00 in S9: Evaluating Non-Determinism in Linux with Different Page Allocation Strategies

Tomáš Kalibera

Although physical page allocation is well known to impact both mean performance and performance determinism, existing evaluation work of different page allocation strategies has so far only covered their impact on mean performance. The talk will present statistical methods, benchmarks, and results of an evaluation of non-determinism in performance of Linux applications, comparing bin-hopping, page coloring and the default Linux page allocation strategy.
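For intuition, the two non-default strategies compared in the talk can be sketched as follows (heavily simplified and with an assumed cache geometry; real allocators keep per-color free lists rather than scanning): page coloring matches the physical page's cache color to the virtual page's color, while bin-hopping simply cycles through the colors round-robin.

```python
# Simplified page-placement strategies (the cache geometry below is an
# assumption; real allocators keep per-color free lists).

NUM_COLORS = 64     # = cache size / (associativity * page size), assumed

def color_of(page_number):
    """Cache color: which cache sets a page of this number maps to."""
    return page_number % NUM_COLORS

def page_coloring(virtual_page, free_pages):
    """Pick a free physical page whose color matches the virtual page."""
    want = color_of(virtual_page)
    for p in free_pages:
        if color_of(p) == want:
            free_pages.remove(p)
            return p
    return free_pages.pop(0)            # no match: fall back to any page

def make_bin_hopping():
    """Allocator cycling through colors regardless of virtual address."""
    state = {"next": 0}
    def alloc(virtual_page, free_pages):
        for _ in range(NUM_COLORS):
            want = state["next"]
            state["next"] = (want + 1) % NUM_COLORS
            for p in free_pages:
                if color_of(p) == want:
                    free_pages.remove(p)
                    return p
        return free_pages.pop(0)        # no preferred color available
    return alloc
```

Both strategies spread pages evenly over the cache and thus reduce conflict misses; they differ in whether the mapping is tied to the virtual address (coloring) or to allocation order (bin-hopping), which is exactly what makes their determinism behavior interesting to compare.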

2007-10-19 09:00 in a lecture room announced later: Ph.D. rehearsal

Jaroslav Gergic

2007-10-17 09:00 in S5: Using StrategoXT for generation of software connectors

Michal Malohlava

Software connectors are used in component-based systems as special entities that model and realize component interactions. Besides this basic behavior, connectors can provide extra functionality and benefits (e.g., logging, adaptation, monitoring). This approach requires generating connector code with respect to the requirements of components, the target environment, and the features specified at the design stage. In this talk, I will show how to extend an existing connector generator with the Stratego/XT transformation engine, which includes a language for implementing program transformations and a collection of transformation tools. The toolset helps to realize a simple method of defining a connector implementation, which is then used as a template in the process of generating source code.

2007-10-16 14:00 in S4: Enterprise Content Management

Peter Eklund (The University of Wollongong)

We deal with Enterprise Content Management and some of the tough computing issues that are faced in that domain. This mostly draws on my experience working with my secondment company, www.objective.com, and is a mixed bag of problems: database replication, information retrieval, object caching, security and access control.

2007-10-10 09:00 in S5: EPEW 2007 – Overview of selected papers

Vlastimil Babka

The talk will provide an overview of the keynote and two selected papers presented at the 4th European Performance Evaluation Workshop (EPEW), which was held on September 27-28, 2007 in Berlin, Germany. The keynote and the papers address optimization problems in service provisioning systems, property-driven stochastic model checking using the SPDL logic, and the problem of automated generation of architectural feedback from performance analysis.

2007-10-03 09:00 in S5: UPPAAL 4.0

Tomáš Poch

Uppaal is a state-of-the-art explicit model checker for real-time systems. It has been developed jointly by Uppsala University and Aalborg University. So far, it has been successfully applied to many case studies ranging from communication protocols to multimedia applications. In this seminar, an overview of its basic functions and verification algorithm will be presented.