Learning & Testing Meetings

Every Friday at 11:00 in room M1.0.02 a meeting is held in which ongoing work in the area of automata learning and testing is discussed.
From time to time there is also a short informal presentation on a specific topic, followed by discussion.

Upcoming talks

In this presentation, I will present our active, coevolutionary, multi-objective approach to learning deterministic finite automata from examples, and its application to requirements engineering. Formal requirements specifications are usually produced by requirements engineers from vague user input. It is our aim to automate this process.

From user input in the form of positive and negative example behavior of a desired system, we infer automata that completely describe its behavior. The setting of requirements engineering poses specific challenges, such as the small amount of input data and the selection of the right automaton from the many automata that are consistent with that data. Furthermore, the number of queries to the user should be as small as possible.

I will show how our approach deals with these challenges and discuss open questions and future work.
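The consistency requirement described above can be sketched in a few lines (all names here are hypothetical, not the authors' implementation): a candidate DFA is consistent with the input data iff it accepts every positive example and rejects every negative one.

```python
# Minimal sketch (hypothetical names): the consistency check used when
# learning a DFA from positive and negative example behaviours.

def accepts(delta, start, accepting, word):
    """Run the DFA on a word; True iff it ends in an accepting state."""
    state = start
    for symbol in word:
        state = delta.get((state, symbol))
        if state is None:          # undefined transition: reject
            return False
    return state in accepting

def consistent(delta, start, accepting, positives, negatives):
    """Consistent iff every positive example is accepted and every
    negative example is rejected."""
    return (all(accepts(delta, start, accepting, w) for w in positives)
            and not any(accepts(delta, start, accepting, w) for w in negatives))

# Toy example: a two-state DFA accepting words with an even number of 'a's.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
print(consistent(delta, 0, {0}, ["", "aa", "aba"], ["a", "aaa"]))  # True
```

Many automata may pass this check on sparse data, which is exactly the selection problem the abstract mentions.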

Before wishing everyone a very nice holiday, Paul will touch on the key aspects of the determinizer in Tomte:
how it actually works in the tool, its limitations, and a possible move to a more symbolic framework.

In 1989 Rob van Glabbeek and Peter Weijland defined branching bisimulation as
an alternative to Robin Milner's weak bisimulation. Frits Vaandrager and I formulated
an algorithm with complexity O(mn), where m is the number of transitions of a labelled
transition system and n is the number of states. This was quite a spectacular improvement
over weak bisimulation, which is cubic due to the transitive tau-closure. At that time
there was also the O(mn) algorithm for strong bisimulation, by Kanellakis and Smolka,
which was spectacularly improved by Paige and Tarjan to O(m log n). To me it was
open whether the algorithm for branching bisimulation could be improved, until
a discussion with Anton Wijs about implementing the algorithms on a GPU brought
me to investigate the paper of Paige and Tarjan in relation to the branching
bisimulation algorithm again. This led to the insight behind the current algorithm
(except for a small and easily repairable flaw, pointed out by Jansen and Keiren).
The algorithm is amazingly complex, but it outperforms existing algorithms by
orders of magnitude, especially as systems become large.
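To give a feel for the partition-refinement style underlying all of these algorithms, here is a deliberately naive sketch for strong bisimulation (hypothetical names; the branching-bisimulation and Paige-Tarjan algorithms are far more intricate than this): blocks are repeatedly split by one-step behaviour until the partition is stable.

```python
# Naive partition refinement for strong bisimulation (illustration only;
# real O(m log n) algorithms use the Paige-Tarjan "process the smaller
# half" machinery instead of recomputing signatures from scratch).

def strong_bisim_partition(states, transitions):
    """transitions: set of (source, label, target) triples.
    Returns the coarsest stable partition as a set of frozensets."""
    blocks = {frozenset(states)}
    while True:
        def signature(s):
            # the (label, target block) pairs reachable from s in one step
            return frozenset((a, b) for (p, a, q) in transitions
                             for b in blocks if p == s and q in b)
        new_blocks = set()
        for block in blocks:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            new_blocks.update(frozenset(g) for g in groups.values())
        if new_blocks == blocks:
            return blocks
        blocks = new_blocks

# States 0 and 1 both do an 'a'-step to 2, so they end up in one block.
blocks = strong_bisim_partition({0, 1, 2}, {(0, 'a', 2), (1, 'a', 2)})
```

Branching bisimulation additionally has to treat tau-steps inside a block as invisible, which is where most of the complexity of the new algorithm comes from.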

In this talk, I will extend the basic system relations of trace inclusion, trace equivalence, simulation, and bisimulation to a quantitative setting in which propositions are interpreted not as boolean values, but as real values in the interval [0,1]. Trace inclusion and equivalence give rise to asymmetrical and symmetrical linear distances, while simulation and bisimulation give rise to asymmetrical and symmetrical branching distances. I will present the relationships among these distances, and I will provide a full logical characterization of the distances in terms of quantitative versions of LTL and μ-calculus. I will show that, while trace inclusion (resp. equivalence) coincides with simulation (resp. bisimulation) for deterministic boolean transition systems, linear and branching distances do not coincide for deterministic quantitative transition systems. Finally, I will go into algorithms for computing the distances.

(This work dates back to when I was at UC Santa Cruz and is joint work with Luca de Alfaro and Marco Faella)
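As a toy illustration of the quantitative setting (these are not the talk's exact definitions): with propositions valued in [0,1], a natural symmetric "linear" distance between two equal-length quantitative traces is the worst pointwise difference, and an asymmetric variant, mirroring trace inclusion, penalises deviation in one direction only.

```python
# Toy trace distances over a single [0,1]-valued proposition
# (illustration only, not the definitions from the talk).

def symmetric_trace_distance(sigma, rho):
    """Sup over positions of |sigma_i - rho_i| for finite traces."""
    return max((abs(a - b) for a, b in zip(sigma, rho)), default=0.0)

def asymmetric_trace_distance(sigma, rho):
    """Counts only how far sigma exceeds rho pointwise."""
    return max((max(a - b, 0.0) for a, b in zip(sigma, rho)), default=0.0)

# In the boolean case (values 0 or 1) the symmetric distance collapses to
# 0 ("indistinguishable") or 1 ("distinguishable").
d = symmetric_trace_distance([0.2, 0.9], [0.3, 0.5])  # 0.4 up to rounding
```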

Mark Janssen, April 8th: Combining active learning with fuzzing

Petra van den Bos, March 18th: TBA

Trial presentation for ICT.OPEN.

Rick Smetsers, March 11th: Separating sequences for all pairs of states

Rick will practice his talk for LATA 2016 about finding minimal separating sequences for all pairs of states in O(n log n).
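For reference, what such a sequence is can be sketched naively (hypothetical names): a shortest separating sequence for one pair of states of a Mealy machine can be found by breadth-first search over state pairs. The talk's algorithm does far better, computing minimal sequences for all pairs at once in O(n log n).

```python
# Naive BFS for a shortest separating sequence of a single state pair
# (illustration only; not the O(n log n) algorithm from the talk).

from collections import deque

def separating_sequence(delta, out, alphabet, s, t):
    """Shortest input word on which Mealy-machine states s and t produce
    different outputs, or None if the states are equivalent."""
    seen = {(s, t)}
    queue = deque([(s, t, [])])
    while queue:
        p, q, word = queue.popleft()
        for a in alphabet:
            if out[(p, a)] != out[(q, a)]:
                return word + [a]          # outputs differ: done
            pair = (delta[(p, a)], delta[(q, a)])
            if pair not in seen:
                seen.add(pair)
                queue.append((pair[0], pair[1], word + [a]))
    return None
```

Doing this pairwise for all O(n^2) pairs is the baseline the O(n log n) result improves on.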

Joshua Moerman, November 27th: The different learning algorithms for register automata

I will discuss the differences between the various learning algorithms for register automata. This includes the abstract learning algorithm based on nominal automata, our tool Tomte, and RAlib from Uppsala (and maybe more).

As far as I know, only random test methods exist to generate test cases from an LTS/IOTS model. I will discuss an adapted version of the FSM-based method of Lee and Yannakakis to find adaptive distinguishing sequences. These sequences enable a smarter and more directed way of testing.
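The random baseline mentioned above can be sketched as a random walk over the transition relation (hypothetical names; the LTS is given as a set of triples):

```python
# Random test-case generation from an LTS by a uniform random walk
# (the baseline that adaptive distinguishing sequences aim to improve on).

import random

def random_test_case(transitions, start, length, rng=random):
    """Build one trace by repeatedly picking an enabled transition
    uniformly at random, stopping early in a deadlock state."""
    state, trace = start, []
    for _ in range(length):
        enabled = [(a, q) for (p, a, q) in transitions if p == state]
        if not enabled:
            break                  # no outgoing transitions: stop
        a, state = rng.choice(enabled)
        trace.append(a)
    return trace
```

Such walks give no guarantee of reaching or distinguishing particular states, which is what motivates the more directed approach.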

As part of the Cyber-Physical Systems project in partnership with Océ, my current goal is to use Machine Learning (and other) techniques to predict printer failures. I will present not only the current achievements and the Machine Learning techniques employed, but also the main remaining challenges. My presentation will also trigger a debate on the multidisciplinarity of the project.

I will give a brief overview of our CEGAR-based algorithm and detail how we
have extended this algorithm to handle fresh output values. I will also talk
about some of the optimizations we have made to the algorithm, and to the tool
implementing it. The presentation is a draft version of what I am
going to present in Colombia next week.