Presentations

SAC (for Single Assignment C) is in several respects a functional
programming language out of the ordinary. As the name suggests,
SAC combines a C-like syntax (with lots of curly brackets)
with a state-free, purely functional semantics. Originally
motivated by the wish to ease adoption by programmers with an
imperative background, this choice offers surprising insights into
what constitutes a "typical" functional or a "typical" imperative
language construct.

Again on the exotic side, SAC does not favour lists and trees,
or more generally algebraic data types, but puts all emphasis on
multi-dimensional arrays as the primary data structure. Based on a
formal array calculus, SAC supports declarative array processing in
the spirit of interpreted languages such as APL. Array programming
treats multi-dimensional arrays in a holistic way: functions map
potentially huge argument array values into result array values
following a call-by-value semantics, and new array operations are
defined by composition of existing ones.
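Since the abstract shows no SAC code, the following is a hypothetical illustration in Haskell (not SAC syntax) of this array-programming style: whole-array operations are built by composing existing ones, without any element-wise indexing. The operation names (`addM`, `scaleM`) are invented for this sketch.

```haskell
import Data.List (transpose)

-- Element-wise addition of two "arrays" (modelled here as nested lists).
addM :: Num a => [[a]] -> [[a]] -> [[a]]
addM = zipWith (zipWith (+))

-- Scaling every element, defined by composing map with map.
scaleM :: Num a => a -> [[a]] -> [[a]]
scaleM k = map (map (k *))

-- A new whole-array operation composed from existing ones: 2 * (a + b^T).
example :: Num a => [[a]] -> [[a]] -> [[a]]
example a b = scaleM 2 (addM a (transpose b))

main :: IO ()
main = print (example [[1,2],[3,4]] [[0,1],[1,0]])
```

The point is that `example` is defined purely by composition; no loop or index variable ever appears, which is what gives a compiler like SAC's the freedom to transform and parallelise such definitions.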

SAC is a high-productivity language for application domains that
deal with large collections of data in a computationally intensive way.
At the same time, SAC is also a high-performance language that competes
with low-level imperative languages through compilation technology.
The abstract view on arrays combined with the functional semantics
supports far-reaching program transformations. A highly optimised
runtime system takes care of automatic memory management with an
emphasis on immediate reuse. Last but not least, the SAC compiler
exploits the state-free semantics of SAC and the data-parallel nature
of SAC programs for fully compiler-directed acceleration on
contemporary multi- and many-core architectures.

Thomas Arts - Testing AUTOSAR components with QuickCheck

The amount of software in a car is growing exponentially. This software has to be produced quickly and must differentiate itself from the competition in functionality, multiplicity of features, and quality. There are several ingredients for achieving this, among them choosing the right technologies, improving the software process, and being extremely thorough and efficient in testing.

The automotive industry has standardized its components in the AUTOSAR standard. Each component has about 500 pages of thorough specification behind it, but many corners can be cut if the car needs only part of the features, making the software faster and able to run on cheaper hardware.

Integration of components from different vendors is a nightmare for car companies. The vast number of different configurations and scenarios in which the software should operate requires an enormous, practically impossible number of test cases to be written. Smart design of tests is tempting, but it is easy to overlook a corner case or a combination one cannot foresee.

We created QuickCheck models for 5 major AUTOSAR components. The models are about 10% of the size of the implementation and condense 1500 pages of specification into 4500 lines of Erlang code. The models take a configuration and a software component as input and automatically generate and run thousands of tests against that component. We have been able to find anomalies in all provided, well-tested software components. We cover many more scenarios and tricky combinations than manual test cases can. Moreover, we can re-use the model for any given implementation and configuration.
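The AUTOSAR models themselves are Erlang and not shown here; as a minimal sketch of the underlying property-based idea, the following plain-Haskell toy (no libraries, everything invented for illustration) generates many inputs from a seed and checks a property of a stand-in "component" against its specification.

```haskell
-- Minimal linear-congruential generator for reproducible test inputs.
lcg :: Int -> [Int]
lcg seed = tail (iterate (\x -> (1103515245 * x + 12345) `mod` 2147483648) seed)

-- Hypothetical component under test: a "push" that clips its input to a
-- configured limit (a stand-in, not an AUTOSAR module).
pushClipped :: Int -> Int -> Int
pushClipped limit x = min x limit

-- Property from the "specification": output never exceeds the limit.
prop_limit :: Int -> Int -> Bool
prop_limit limit x = pushClipped limit x <= limit

-- Run the property on 1000 generated inputs for a given configuration.
checkAll :: Int -> Bool
checkAll limit = all (prop_limit limit) (take 1000 (lcg 42))

main :: IO ()
main = putStrLn (if checkAll 100 then "all tests passed" else "failure found")
```

Real QuickCheck adds shrinking of failing cases and stateful modelling on top of this generate-and-check loop, which is what makes one model reusable across implementations and configurations.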

With this technology we can increase test efficiency dramatically, find more errors, and invest only a fraction of what it takes to write manual test cases.

Many algorithms that are common in digital signal processing (DSP), such as
video streaming, contain a high degree of instruction-level parallelism. To
accelerate such algorithms, coarse-grained reconfigurable architectures
(CGRA) can be used.

In practice, many existing DSP algorithms are implemented in a sequential
way, and extracting the instruction-level parallelism from these algorithms
is not trivial.

Here we present a programming paradigm for expressing instruction-level
parallelism at a high level. The programming paradigm was implemented
leading to a compiler for a previously developed CGRA. Both the compiler and
the architecture were implemented using the functional programming language
Haskell. This allows algorithms to be implemented in a concise and
straightforward manner by using Haskell's higher-order functions. As these
functions have a notion of structure, all information on parallelism and
flow of data is automatically contained in the resulting expressions, as we
demonstrate on a few examples.
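The abstract's examples are not included; a small sketch of the idea, assuming nothing about the actual CGRA compiler, is a dot product written with higher-order functions, whose structure directly names the parallelism:

```haskell
-- The structure of the expression mirrors a hardware structure:
--   zipWith (*)  -> a row of independent multipliers (ILP)
--   foldl (+) 0  -> a reduction (sequential chain, or a tree with a
--                   tree-shaped fold)
dotp :: Num a => [a] -> [a] -> a
dotp xs ys = foldl (+) 0 (zipWith (*) xs ys)

main :: IO ()
main = print (dotp [1,2,3] [4,5,6])  -- 1*4 + 2*5 + 3*6
```

Because `zipWith` and `foldl` carry this structural meaning, a compiler can map them onto parallel function units without having to rediscover the parallelism from sequential loop code.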

Over the years we have been developing Haskell libraries which enable one to define a compiler as a large collection of components, which can be individually type checked and compiled into machine code. The final compiler then consists of a small program which imports these modules and combines them. The Haskell type checker then checks whether the collection is consistent.

In the talk we will show how we have used our libraries to solve the challenges set in a competition organised by the LDTA workshop, which can be found at: http://ldta.info/tool.html. A combination of n tasks and m language levels gives rise, in our solution, to m*n independent components.
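As a hypothetical miniature of this component idea (the actual libraries are not shown in the abstract), each compiler phase below is an ordinary typed value; the final compiler is a small composition of imported components, and the Haskell type checker rejects any inconsistent combination at compile time. All phase names and types here are invented for illustration.

```haskell
-- Toy typed interfaces for three compiler phases.
type Parse   = String -> [String]      -- toy "parser": tokenise
type Check   = [String] -> [String]    -- toy "type checker": pass-through
type CodeGen = [String] -> Int         -- toy "code generator": count tokens

parsePhase :: Parse
parsePhase = words

checkPhase :: Check
checkPhase = id

genPhase :: CodeGen
genPhase = length

-- The "small program" combining the components; swapping two phases,
-- or combining phases for different language levels, would be a type error.
compiler :: String -> Int
compiler = genPhase . checkPhase . parsePhase

main :: IO ()
main = print (compiler "let x = 1")
```

In the real setting each phase lives in its own separately compiled module; consistency of the whole collection is exactly what the type of the composition checks.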

Task-Oriented Programming (TOP) is a novel programming paradigm for the
construction of interactive multi-user systems.
TOP structures programs into compositions of "tasks": subprograms that
support a user in some work they have to do, or that completely automate
part of the work.
With TOP, complex multi-user interactive systems can be programmed in a
declarative style, simply by defining the tasks that have to be accomplished.

TOP builds on four core concepts: tasks, which represent computations or
work a user has to do and have an observable value that may change over
time; data sharing, which enables tasks to observe each other while the
work is in progress; generic type-driven generation, which facilitates
user interaction; and a set of combinators for sequential and parallel
task composition.
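The combinators above can be sketched in a heavily simplified model. The following Haskell toy (an illustration only, not the iTask3 API; every name here is invented) models a task as a step function over input events with an observable, possibly absent, value, plus one sequential and one parallel combinator.

```haskell
-- A task: an observable value plus a step function consuming input events.
data Task a = Task { observe :: Maybe a, step :: String -> Task a }

-- A task whose value never changes again.
done :: a -> Task a
done v = Task (Just v) (\_ -> done v)

-- A task that waits for one input line, then holds it.
ask :: String -> Task String
ask _ = Task Nothing done

-- Sequential composition: continue once the first task has a value.
andThen :: Task a -> (a -> Task b) -> Task b
andThen t k = case observe t of
  Just v  -> k v
  Nothing -> Task Nothing (\i -> andThen (step t i) k)

-- Parallel composition: observe both tasks side by side.
both :: Task a -> Task b -> Task (Maybe a, Maybe b)
both ta tb = Task (Just (observe ta, observe tb))
                  (\i -> both (step ta i) (step tb i))

-- Drive a task with a list of input events and read its value.
run :: Task a -> [String] -> Maybe a
run t = observe . foldl step t

main :: IO ()
main = print (run (ask "name" `andThen` (\n -> done ("hello " ++ n)))
                  ["world"])
```

Even this toy shows the key TOP property: `andThen` reacts to the *observable value* of a running task, so composition is defined over work in progress rather than over finished results.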

TOP emerged from experiments that blended functional programming with
workflow management concepts.
In this talk we explain the core concepts of TOP, their semantics, and
some history of how and why we arrived at the particular set of
definitions and combinators we use.
We'll illustrate the concepts with examples from the iTask3 framework,
which embeds TOP in the functional programming language Clean.

Betsy Pepels - Time Travelling: the philosophy of Functional Programming applied in the wild

The Dutch Tax and Customs Administration is responsible for the implementation of income-related schemes, known as benefits. For the calculation and payment of these benefits, a newly built system has been in production since December 2011. This Benefits system runs on the Microsoft .NET Framework, using C# as the implementation language.

An important feature of the Benefits application is Time Travelling. This is the ability to do computations with, and to travel within, the three different time axes present in the Benefits domain. These axes are the valid time (keeping track of the situation of a citizen), the report time (keeping track of when information is reported), and the transaction time (keeping track of when information is recorded).

Our group accomplished Time Travelling through a clever combination of several concepts. The application is defined in a dedicated DSL in which the objects and computations are basically untimed. The core idea is domain lifting (gratefully borrowed from the functional programming world): “under the hood”, the untimed DSL is lifted to a timed domain. This is done by translating the DSL via code generators to a timed internal DSL. Finally, FP features of C# (most notably LINQ, lambda expressions, and extension methods) facilitate the implementation of the internal DSL.
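The talk's implementation is in C#; as a minimal sketch of the domain-lifting idea itself (all names and the one-axis simplification are assumptions of this sketch), an untimed value becomes a function of time and untimed operations are lifted pointwise, so existing computations run unchanged on a time axis:

```haskell
type Time  = Int              -- e.g. a month index on one time axis
type Timed a = Time -> a      -- a value that varies over time

-- Lift an untimed constant and an untimed binary operation pointwise.
constT :: a -> Timed a
constT = const

lift2 :: (a -> b -> c) -> Timed a -> Timed b -> Timed c
lift2 f x y = \t -> f (x t) (y t)

-- Untimed domain computation: benefit = income-dependent rate * base.
benefit :: Int -> Int -> Int
benefit rate base = rate * base

-- A citizen's rate changes at time 6; the benefit follows automatically.
income :: Timed Int
income t = if t < 6 then 3 else 5

liftedBenefit :: Timed Int
liftedBenefit = lift2 benefit income (constT 100)

main :: IO ()
main = print (map liftedBenefit [0, 6])  -- before and after the change
```

The Benefits system applies the same move with three axes instead of one, and performs the lifting mechanically via code generation rather than by hand.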

Rinse Wester - Complex hardware design using CλaSH

In order to effectively utilize the growing number of resources available on FPGAs, higher-level abstraction mechanisms are needed to deal with the increasing complexity resulting from large designs. Functional hardware description languages, like the CλaSH HDL, offer adequate abstraction mechanisms such as polymorphism and higher-order functions to address this problem.

A two-step design method to implement a complex DSP application on an FPGA is presented, starting from a mathematical specification, followed by an implementation in CλaSH. A particle filter is chosen as the application to be implemented.

First, a straightforward translation is performed from the mathematical definition of a particle filter to plain Haskell. Secondly, minor changes are applied to the Haskell implementation so that it is accepted by the CλaSH compiler and hardware can be generated. The resulting hardware is evaluated; the evaluation shows that this method eases reasoning about structure and parallelism in both the mathematical definition and the resulting hardware.
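As a hypothetical sketch of that first "math to plain Haskell" step (not the actual CλaSH implementation from the talk, and with an invented toy likelihood model), one stage of a particle filter, the weight update with normalisation, translates directly from its mathematical definition:

```haskell
-- Toy likelihood of a measurement z given a particle state x
-- (an assumed Gaussian-like kernel, for illustration only).
likelihood :: Double -> Double -> Double
likelihood z x = exp (negate ((z - x) ^ 2))

-- w_i' = w_i * p(z | x_i), then normalise so the weights sum to one.
updateWeights :: Double -> [Double] -> [Double] -> [Double]
updateWeights z xs ws = map (/ total) ws'
  where
    ws'   = zipWith (\w x -> w * likelihood z x) ws xs
    total = sum ws'

main :: IO ()
main = print (updateWeights 1.0 [0.0, 1.0, 2.0] [1/3, 1/3, 1/3])
```

Note how `zipWith` exposes the per-particle parallelism and `sum` the reduction, which is precisely the structure a CλaSH-style compiler can carry through to hardware.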

Alex Gerdes - Testing webservices with Erlang QuickCheck

More and more applications are made available as web applications, which run in a web browser. A web application often makes use of web services, and it relies on the quality of those services. It is therefore important to assess the quality of the web services as well as the overall quality of the web application. One way of assessing the quality of a web application or a web service is to test whether it displays the expected behaviour. In this talk we show how we use Erlang QuickCheck to test web applications and web services. We do so by modelling a web application in QuickCheck, stating properties that the web application should have, and testing whether these properties hold. We use the Dudle web application as an example to show how our approach works. Dudle is an open-source version of the popular Doodle web application.