Past talks

Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. The correctness of GUI code is therefore essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineering an abstract model of a user interface directly from the GUI's legacy code. We also present results from a case study. These results are encouraging and give evidence that the goal of reverse engineering user interfaces can be met with further work on this technique.

Analytical software quality assurance (SQA) constitutes a significant part of the total development costs of a typical software system. Typical estimates attribute up to 50% of those costs to defect detection and removal, which makes this a promising area for cost optimisation. The main question in this context is how to use the available techniques in a cost-optimal way. In detail, this boils down to three questions: (1) which techniques to use, (2) in which order, and (3) with what effort. This talk describes an analytical, stochastic model of the economics of analytical SQA that can be used to analyse and answer these questions. Its practical application is illustrated by a case study on model-based testing in the automotive domain.
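
As a rough illustration of the kind of trade-off such a model captures (a simplified sketch with invented names, not the talk's actual model), consider techniques with a fixed effort and detection rate applied in sequence, with escaped defects incurring a field cost:

```haskell
-- Hypothetical cost model (illustrative only): each technique has a fixed
-- effort and a detection probability; defects escaping all techniques
-- incur a per-defect field cost.
data Technique = Technique { name :: String, effort :: Double, detectRate :: Double }

expectedCost :: Double -> Double -> [Technique] -> Double
expectedCost defects fieldCost = go defects
  where
    go remaining []       = remaining * fieldCost
    go remaining (t:rest) = effort t + go (remaining * (1 - detectRate t)) rest
```

Comparing `expectedCost` over permutations and effort levels of the technique list is then a naive way to phrase questions (1)-(3).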

The end of the Research.PURe project is approaching. We will be able to look back on a successful project with many interesting deliverables. But it now becomes urgent to reflect on what comes after Research.PURe. The deadline for new FCT proposals is July 31. We will initiate a discussion of concrete ideas for such a follow-up project.

The starting point for this project is state-based systems modelled as components, i.e., "black boxes" with "buttons" or "ports" through which they communicate with the rest of the system. A list of operators to connect these components is given. A possible way to implement components as coalgebras in Haskell is shown. Finally, a tool that generates a component from a state-based system definition is implemented.
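
A minimal sketch of the coalgebraic view of components in Haskell (the names and the composition operator below are assumptions for illustration, not the talk's actual definitions):

```haskell
-- A component is a coalgebra on its state space: from a current state and
-- an input it yields a successor state and an output.
newtype Component s i o = Component { step :: s -> i -> (s, o) }

-- One possible connecting operator: sequential composition, feeding the
-- output of the first component into the second.
pipe :: Component s1 i m -> Component s2 m o -> Component (s1, s2) i o
pipe (Component f) (Component g) = Component $ \(s1, s2) i ->
  let (s1', m) = f s1 i
      (s2', o) = g s2 m
  in ((s1', s2'), o)

-- Example: an accumulator that adds its input to its state and emits the sum.
accumulator :: Component Int Int Int
accumulator = Component $ \s i -> (s + i, s + i)
```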

This communication is a follow-up of [1]. We show how to model some of the key concepts of VDM++ in Haskell. Classes, objects, operations, and inheritance are encoded based on the recently developed OOHaskell library.

This communication is an attempt to apply the calculational style underlying the so-called Bird-Meertens formalism to generalised coinduction, as defined in F. Bartels' thesis. In particular, equational properties of such a generic recursion scheme are given, relying on its universal characterisation. We also show how the corresponding calculational kits for particular instances of generalised coinduction are derived by specialisation.
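
For reference, the plain coinduction scheme that the talk generalises is the stream anamorphism, sketched below (standard textbook definitions, not Bartels' generalised form):

```haskell
-- Infinite streams and their anamorphism (unfold): the unique coalgebra
-- homomorphism into the final coalgebra of streams.
data Stream a = Cons a (Stream a)

unfoldS :: (b -> (a, b)) -> b -> Stream a
unfoldS f b = let (a, b') = f b in Cons a (unfoldS f b')

-- Observe a finite prefix of a stream.
takeS :: Int -> Stream a -> [a]
takeS n (Cons a as) | n <= 0    = []
                    | otherwise = a : takeS (n - 1) as

-- Example: the stream of natural numbers.
nats :: Stream Integer
nats = unfoldS (\n -> (n, n + 1)) 0
```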

We present a local graph-rewriting system capable of deciding equality on a fragment of the language of point-free expressions. The fragment under consideration is quite limited, since it does not include exponentials. Nevertheless, it constitutes a non-trivial exercise due to the interactions between additive and multiplicative laws.
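
As a toy illustration of rewriting with point-free laws (a term rewriter over an invented datatype, rather than the talk's graph-rewriting system):

```haskell
-- Point-free expressions over products (no exponentials), with a few
-- product laws applied as rewrite rules: fst . <f, g> = f, snd . <f, g> = g,
-- and the identity laws for composition.
data Expr = Id | Fst | Snd | Comp Expr Expr | Split Expr Expr
  deriving (Eq, Show)

rewrite :: Expr -> Expr
rewrite (Comp Fst (Split f _)) = rewrite f
rewrite (Comp Snd (Split _ g)) = rewrite g
rewrite (Comp Id e)            = rewrite e
rewrite (Comp e Id)            = rewrite e
rewrite (Comp e1 e2)           = Comp (rewrite e1) (rewrite e2)
rewrite (Split e1 e2)          = Split (rewrite e1) (rewrite e2)
rewrite e                      = e
```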

We propose a novel, comonadic approach to dataflow (stream-based) computation. This is based on the observation that both general and causal stream functions can be characterized as coKleisli arrows of comonads and on the intuition that comonads in general must be a good means to structure context-dependent computation. In particular, we develop a generic comonadic interpreter of languages for context-dependent computation and instantiate it for stream-based computation. We also discuss distributive laws of a comonad over a monad as a means to structure combinations of effectful and context-dependent computation. We apply the latter to analyse clocked dataflow (partial-stream-based) computation.
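
The stream comonad at the heart of this approach can be sketched as follows (standard definitions; `sum2` is an invented example of a coKleisli arrow):

```haskell
-- Streams form a comonad: extract yields the current value, and extend
-- runs a coKleisli arrow (Stream a -> b) over every suffix of the stream.
data Stream a = Cons a (Stream a)

extract :: Stream a -> a
extract (Cons a _) = a

extend :: (Stream a -> b) -> Stream a -> Stream b
extend f s@(Cons _ as) = Cons (f s) (extend f as)

takeS :: Int -> Stream a -> [a]
takeS n (Cons a as) | n <= 0    = []
                    | otherwise = a : takeS (n - 1) as

-- A context-dependent computation: sum of the current and next elements.
sum2 :: Stream Int -> Int
sum2 (Cons a (Cons b _)) = a + b

nats :: Stream Int
nats = go 0 where go n = Cons n (go (n + 1))
```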

Joint work with Varmo Vene (University of Tartu). Appeared in Proc. of APLAS 2005.

We define a strongly-typed model of relational databases and operations on them. In this model, table meta-data is represented by type-level entities that guard the semantic correctness of all database operations at compile time. The model relies on type-class-bounded and parametric polymorphism, and we demonstrate its encoding in the functional programming language Haskell. Apart from the standard relational database operations, such as selection and joins, we formalize functional dependencies and normalization. We show how functional dependency information can be represented at the type level and transported through operations from argument types to result types. The model can be used for static query checking, but also as the basis for a formal, calculational approach to database design, programming, and migration. The model is available as a sub-library of the UMinho Haskell Libraries, under the name CoddFish.

Functional programming offers different programming styles in which programmers can write their programs. The differences become even more explicit under a lazy evaluation model, in which the computation of a value is only triggered by its need. When facing the challenge of implementing solutions to multiple-traversal algorithmic problems, the functional programmer often assumes the role of decision maker for all the implementation details: how many traversals to define, in which order to schedule them, which intermediate structures to define and traverse, just to name a few. Another sub-paradigm is available, though. Given the proper will and sufficient practice, multiple-traversal algorithms can be implemented as lazy circular programs, where some of the programming work is handed over to the lazy machinery. In this talk I will present a calculational rule to convert multiple-traversal programs into single-pass circular ones. Feedback is more than expected, it is wanted!
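
The classic example of such a circular program is Bird's repmin, which replaces every leaf of a tree by the overall minimum in a single traversal, relying on lazy evaluation to feed the traversal's result back into itself:

```haskell
-- Bird's repmin: one traversal computes the minimum and, circularly,
-- rebuilds the tree with that (not yet computed) minimum at every leaf.
data Tree = Leaf Int | Fork Tree Tree deriving (Eq, Show)

repmin :: Tree -> Tree
repmin t = t'
  where
    (m, t') = go t                  -- m is defined in terms of go t ...
    go (Leaf n)   = (n, Leaf m)     -- ... and used inside it: circularity
    go (Fork l r) = (min ml mr, Fork l' r')
      where (ml, l') = go l
            (mr, r') = go r
```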

A BIC was recently assigned to Zé Pedro Correia. Its subject is the development of a tool to manipulate point-free expressions according to transformation laws, in order to construct point-free proofs. This talk will serve to introduce this project to the Research.PURe group, presenting the work done so far and the work still to do. We also expect to collect some opinions on the general expected "behaviour" of the tool.

We constructed a tool, called VooDooM, which converts datatypes in VDM-SL into SQL relational data models. The conversion involves the transformation of algebraic types to maps and products, and pointer introduction. The conversion is specified as a theory of refinement by calculation. The implementation technology is strategic term rewriting in Haskell, as supported by the Strafunski bundle. Thanks to these choices of theory and technology, the road from theory to practice is straightforward.

This work consists of the first steps towards the re-implementation of CAMILA, a software development environment intended to promote the use of formal methods in industrial software environments. The new CAMILA, also called Research.PURe CAMILA, is written in Haskell and makes use of monadic programming. A small prototype interpreter was created, but the purpose of this work is to study the concepts behind such a tool, such as datatype invariants and partial functions.
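
A minimal sketch of the two concepts mentioned, datatype invariants and partial functions, in monadic Haskell (the `Percentage` example is invented for illustration):

```haskell
-- A datatype invariant enforced by a smart constructor; partiality is
-- modelled with the Maybe monad rather than runtime errors.
newtype Percentage = Percentage Int deriving (Eq, Show)

mkPercentage :: Int -> Maybe Percentage
mkPercentage n
  | 0 <= n && n <= 100 = Just (Percentage n)
  | otherwise          = Nothing

-- A partial operation: defined only when the invariant is preserved.
addP :: Percentage -> Percentage -> Maybe Percentage
addP (Percentage a) (Percentage b) = mkPercentage (a + b)
```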

In this paper, we present a collection of aspect-oriented refactorings covering both the extraction of aspects from object-oriented legacy code and the subsequent tidying up of the resulting aspects. In some cases, this tidying up entails the replacement of the original implementation with a different, centralized design, made possible by modularization. The collection of refactorings includes the extraction of common code in various aspects into abstract superaspects. We review the traditional object-oriented code smells in the light of aspect-orientation and propose some new smells for the detection of crosscutting concerns. In addition, we propose a new code smell that is specific to aspects.

In the context of Joost Visser's Spreadsheets under Scrutiny talk, I have been looking at refinement laws for spreadsheet normalization. Going back to good old «functional dependency theory» (D. Maier's book, etc.), I ended up rephrasing the standard theory using the pointfree binary relation calculus. It turns out that the theory becomes simpler and more general, thanks to the calculus of «simplicity» and coreflexivity. This research also shows the effectiveness of the binary relation calculus in «explaining» and reasoning about the n-ary relation calculus «à la Codd».
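
At the pointwise level, the «simplicity» condition corresponds to the familiar check that a functional dependency holds in a relation; a small sketch over a list-of-pairs representation (invented for illustration, not the talk's pointfree formulation):

```haskell
-- A relation satisfies the functional dependency A -> B exactly when it is
-- "simple" on A: no two pairs share the same A-value with different B-values.
satisfiesFD :: (Eq a, Eq b) => [(a, b)] -> Bool
satisfiesFD r = and [ b == b' | (x, b) <- r, (x', b') <- r, x == x' ]
```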

Grammars are to parser generators what programming languages are to compilers. Although parsing belongs to such a well-studied area as compilers, grammars have always been looked upon as "the ugly duckling". In this presentation I will propose a methodology, inspired by Extreme Programming, for developing grammars using SDF (Syntax Definition Formalism). This methodology was used to develop a grammar for ISO VDM-SL, one of the most common languages used for formal specification. The grammar is available from VooDooMFront.

In the PURe Workshop last September, a limitation of the hylo-shift law for program calculation was identified, and a possible way of solving it was discussed. In this talk I will present my current understanding of how a generalized version of the law can be formalized.
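
For context, the hylomorphism scheme the law is about, with factorial as a small worked instance (standard definitions):

```haskell
-- A hylomorphism fuses an unfold (coalgebra g builds a virtual structure)
-- with a fold (algebra f consumes it), never materialising the structure.
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo f g = f . fmap (hylo f g) . g

-- The base functor of lists, and factorial as a hylomorphism over it.
data ListF e r = NilF | ConsF e r

instance Functor (ListF e) where
  fmap _ NilF        = NilF
  fmap h (ConsF e r) = ConsF e (h r)

fact :: Integer -> Integer
fact = hylo alg coalg
  where
    coalg 0 = NilF              -- unfold: count down from n
    coalg n = ConsF n (n - 1)
    alg NilF        = 1         -- fold: multiply the counted values
    alg (ConsF n r) = n * r
```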

In this talk we will show how Haskell can be used to process spreadsheets. We will demonstrate a prototype that takes an Excel workbook as input, converts it to Gnumeric's XML representation of spreadsheets with formulas, reads and parses the XML representation into a strongly typed set of Haskell datatypes, and finally computes and visualizes the spreadsheet's data-flow graph. The audience will be invited to suggest interesting spreadsheet analyses and transformations that might be implemented with this infrastructure.

This is a sneak preview of the student presentation João will give at the Summer School on Advanced Functional Programming (Tartu, 21 August 2004). GLR parsing is a generalization of LR parsing in which ambiguous grammars do not lead to parse errors but to several parsers working in parallel. We have implemented GLR in Haskell, relying on lazy evaluation and incremental computation to improve its performance and to come closer to the original imperative formulation of the algorithm.

The data models of most legacy software systems are not explicitly defined. Instead, they are encoded in the program source code. Using a mixture of program understanding techniques it is possible to (partly) reconstruct these models. In this talk we provide details of a reverse engineering project carried out at the Software Improvement Group. In this project, the software portfolio of a large multinational bank, consisting of many million lines of Cobol code, was analyzed. Both hierarchical databases (IMS) and relational ones (DB2) are used in this portfolio. We will survey the program understanding techniques used, and detail the particular challenges posed by the size and heterogeneity of the analyzed portfolio.

Datatype invariants are a significant part of the business logic at the heart of any commercial software application. However, invariants are hard to keep consistent, and their formal verification requires costly «invent and verify» procedures, most often neglected throughout the development life-cycle. We sketch a basis for a calculational approach to maintaining invariants based on a «correct by construction» development principle. We propose that invariants take the place of datatypes in the diagrams that describe computations, and use weakest liberal preconditions to type the arrows in such diagrams. All our reasoning is carried out in the relational calculus, structured by Galois connections.

Parser combinators elegantly and concisely model generalised LL parsers in a purely functional language. They nicely illustrate the concepts of higher-order functions, polymorphic functions, and lazy evaluation. Indeed, parser combinators are often presented as "the" motivating example for functional programming. Generalised LL, however, has an important drawback: it does not handle (directly or indirectly) left-recursive context-free grammars. In a different context, the (non-functional) parsing community has done a considerable amount of work on generalised LR parsing. Such parsers handle virtually any context-free grammar. Surprisingly, no work had been done on generalised LR by the functional programming community. In this talk, I will present a concise (100 lines!), elegant and efficient implementation of an incremental generalised LR parser generator/interpreter in Haskell. Such parsers rely heavily on lazy evaluation. Incremental evaluation is obtained via function memoisation. I will present the HaGlr tool: a prototype implementation of our generalised LR parser generator. The profiling of some (toy) examples will be discussed.
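
To see the left-recursion problem the talk starts from, here is a bare-bones list-of-successes combinator sketch (a naive illustration with invented names, not HaGlr itself):

```haskell
-- List-of-successes parsers: a parser returns all ways to consume a prefix.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

symbol :: Char -> Parser Char
symbol c = Parser $ \s -> case s of
  (x:xs) | x == c -> [(c, xs)]
  _               -> []

-- Non-deterministic choice: try both alternatives.
(<+>) :: Parser a -> Parser a -> Parser a
Parser p <+> Parser q = Parser $ \s -> p s ++ q s

-- A left-recursive grammar  E ::= E 'a' | 'a'  transcribed directly, as in
--   e = (e `followedBy` symbol 'a') <+> symbol 'a'   -- followedBy: hypothetical sequencing
-- loops forever: e calls itself before consuming any input. Generalised LR
-- parsers have no such restriction.
```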

The basic motivation of component-based development is to replace conventional programming by the composition and configuration of reusable off-the-shelf units, externally coordinated through a network of connecting devices to achieve a common goal. This work introduces a new relational model for connectors of software components. The proposed model adopts a coordination point of view in order to deal with components' temporal and spatial decoupling and, therefore, to provide support for looser levels of inter-component dependency and effective external control.

A Unified Approach for the Integration of Distributed Heterogeneous Software Components

Barrett Bryant

A framework is proposed for assembling software systems from distributed heterogeneous components. For the successful deployment of such a software system, it is necessary that its realization not only meets the functional requirements but also non-functional requirements such as Quality of Service (QoS) criteria. The approach described is based on the notions of a meta-component model called the Unified Meta Model (UMM), a generative domain model, and specification of appropriate QoS parameters. A formal specification based on Two-Level Grammar is used to represent these notions in a tightly integrated way so that QoS becomes a part of the generative domain model. A simple case study is described in the context of this framework.

In this talk, I will give a brief overview of Haskell software development around the world, and I will present a light-weight infrastructure to support such development on our side. Among other things, I will talk about development tools such as Haddock, QuickCheck, HUnit, and GHood. I will point out some development projects that seem relevant to the Research.PURe project, such as the refactoring project at Kent.
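
As a taste of the tooling mentioned, a one-property QuickCheck example (requires the QuickCheck package; the property itself is a standard textbook one):

```haskell
import Test.QuickCheck

-- Reversing a list twice yields the original list.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- Run with:  quickCheck prop_reverseTwice
```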

Functional programs are particularly well suited to formal manipulation by equational reasoning. In this setting, it is straightforward to use calculational methods for program transformation. Well-known transformation techniques, like tupling or the introduction of accumulating parameters, can be implemented using calculation through the use of the fusion (or promotion) strategy. In this paper we revisit this transformation method, but, unlike most of the previous work on this subject, we adhere to a pure point-free calculus that emphasizes the advantages of equational reasoning. We focus on the accumulation strategy initially proposed by Bird, where the transformed programs are seen as higher-order folds calculated systematically from a specification. The machinery of the calculus is expanded with higher-order point-free operators that simplify the calculations. A substantial number of examples (both classic and new) are fully developed, and we introduce several shortcut optimization rules that capture typical transformation patterns.
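
The accumulation strategy described above, in its textbook instance: quadratic list reversal transformed into a higher-order fold with an accumulating parameter (standard definitions, shown pointwise rather than in the paper's point-free style):

```haskell
-- Specification: reverse by appending, which is quadratic.
reverseSpec :: [a] -> [a]
reverseSpec []     = []
reverseSpec (x:xs) = reverseSpec xs ++ [x]

-- Calculated version: a higher-order fold whose result is a function of
-- the accumulator, making the program linear.
reverseAcc :: [a] -> [a]
reverseAcc xs = foldr (\x k -> k . (x:)) id xs []
```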

Functional transposition is a technique for converting relations into functions aimed at developing the relational algebra via the algebra of functions. This talk attempts to develop a basis for generic transposition. Two well-known instances of the construction are considered, one applicable to any relation and the other applicable to simple relations only. Our illustration of the usefulness of the generic transpose takes advantage of the free theorem of a polymorphic function. We show how to derive laws of relational combinators as free theorems of their transposes. Finally, we relate the topic of functional transposition with the hash-table technique for data representation.
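
The transpose applicable to any relation ("power transpose") can be sketched over a list-of-pairs representation of finite relations (an illustrative encoding, not the talk's generic construction):

```haskell
-- Power transpose: a relation becomes the function mapping each source
-- value to the set (here, list) of values it relates to.
transposeR :: Eq a => [(a, b)] -> a -> [b]
transposeR r x = [ b | (x', b) <- r, x' == x ]
```

The second instance mentioned in the abstract, applicable to simple relations only, would instead return at most one result, i.e. a `Maybe b`.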

Members of the Research.PURe team seem to see in sorting algorithms a never-ending source of inspiration. In this session JoseBarros and ManuelBernardoBarbosa will explain why they think program-understanding can help optimize sorting algorithms, and JorgeSousaPinto will discuss how all major functional sorting algorithms can be derived from insertion sort.
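
The starting point of the derivation mentioned, insertion sort as a fold (standard definitions):

```haskell
-- Insertion sort: fold the input, inserting each element into an already
-- sorted accumulator.
insert :: Ord a => a -> [a] -> [a]
insert x []                 = [x]
insert x (y:ys) | x <= y    = x : y : ys
                | otherwise = y : insert x ys

isort :: Ord a => [a] -> [a]
isort = foldr insert []
```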

We will explain the basics of collaborative web editing with Wiki. We will discuss the way we have organized our local Wiki and invite you to contribute to it. Finally we will exchange ideas about how to use the Wiki for our research and education.