Standard algorithms for reachability analysis of timed automata are sensitive to the order in which the transitions of the automata are taken. To tackle this problem, we propose a ranking system and a waiting strategy.
This paper discusses why the search order matters and shows how a ranking system and a waiting strategy can be integrated into the standard reachability algorithm to alleviate and prevent the problem, respectively. Experiments show that the combination of the two approaches yields an optimal search order on standard benchmarks, with a single exception. This suggests that it should be used instead of the standard BFS algorithm for reachability analysis of timed automata.

We study logics expressing properties of paths in graphs that are tailored to querying graph databases. The basic construct of such logics, a regular path query, checks for paths whose labels belong to a regular language. These logics fail to capture two commonly needed features: counting properties, and the ability to compare paths. It is known that regular path-comparison relations (e.g., prefix or equality) can be added without significant complexity overhead; however, adding common relations often demanded by applications (e.g., subword, subsequence, suffix) results in either undecidability or astronomical complexity.
We propose, as a way around this problem, to use automata with counting functionalities, namely Parikh automata. They express many counting properties directly, and they approximate many relations of interest. We prove that with Parikh automata defining both languages and relations used in queries, we retain the low complexity of the standard path logics for graphs. In particular, this gives us efficient approximations to queries with prohibitively high complexity. This is joint work with Leonid Libkin.
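To make the counting mechanism concrete: a Parikh automaton can be viewed as an NFA whose transitions additionally add integer vectors to a set of counters, with acceptance requiring the accumulated vector to satisfy a (semilinear) constraint. The following is a hypothetical minimal interpreter for illustration only — the encoding (`delta`, `parikh_accepts`, the constraint as a Python predicate) is not from the talk or any tool's API.

```python
def parikh_accepts(word, delta, start, init_vec, finals, constraint):
    """Run a Parikh automaton on `word`.

    delta maps (state, letter) to a list of (next_state, vector_increment);
    a run accepts iff it ends in a final state and the accumulated counter
    vector satisfies `constraint` (standing in for a semilinear set).
    All nondeterministic runs are explored in parallel.
    """
    confs = {(start, tuple(init_vec))}          # (state, counters) pairs
    for ch in word:
        confs = {(q2, tuple(c + d for c, d in zip(vec, inc)))
                 for (q, vec) in confs
                 for (q2, inc) in delta.get((q, ch), [])}
    return any(q in finals and constraint(vec) for (q, vec) in confs)

# Example: the (non-regular) language of words over {a, b} with #a = #b,
# expressed with a single control state and two counters.
delta = {('s', 'a'): [('s', (1, 0))],
         ('s', 'b'): [('s', (0, 1))]}
same_count = lambda v: v[0] == v[1]
```

This illustrates why such automata go beyond regular path queries while staying algorithmically tame: the control is finite, and all the extra power sits in the final semilinear test.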

Thursday 11 June 2015

Marek Trtik (VERIMAG)

Abstracting Path Conditions

We present a symbolic-execution-based algorithm that for a given
program and a given program location in it produces a nontrivial
necessary condition on input values to drive the program execution to
the given location. The algorithm is based on computation of loop
summaries for loops along acyclic paths leading to the target
location. We also propose an application of necessary conditions in
contemporary bug-finding and test-generation tools. Experimental
results on several small benchmarks show that the presented technique
can in some cases significantly improve performance of the tools.

Thursday 4 June 2015

Igor Walukiewicz (LaBRI)

Safety of parametrized asynchronous shared-memory systems is almost always decidable

Verification of concurrent systems is a difficult problem in general,
and even more so in a parametrized setting where
unboundedly many concurrent components are considered. Recently,
Hague proposed an architecture with a leader process and unboundedly
many copies of a contributor process interacting over a shared memory
for which safety properties can be effectively verified. All
processes in Hague's setting are pushdown automata.
We extend this setting by considering other formal models and, as a
main contribution, find very liberal conditions on the individual
processes under which the safety problem is decidable: the only
substantial condition we require is the effective computability of the
downward closure for the class of the leader processes.
Furthermore, our result allows for a hierarchical approach to
constructing models of concurrent systems with a decidable safety
problem: networks with a tree-like architecture, where each process
shares a register with its children processes (and another register
with its parent). Nodes in such networks can be for instance pushdown
automata, Petri nets, or multi-pushdown systems with decidable
reachability problem.
Joint work with Salvatore La Torre and Anca Muscholl.

Thursday 28 May 2015

Alessia Milani (LaBRI)

An Introduction to Software Transactional Memory

Software Transactional Memory (or STM for short) is a promising programming paradigm that aims at simplifying concurrent programming by using the notion of a transaction. A transaction executes a piece of code containing accesses to data items which are shared by several processes in a concurrent setting. Using transactions, the programmer need only enhance their sequential code with invocations of special operations to read or write data items. It is guaranteed that if any operation of a transaction takes place, all of them do, and that if they do, they appear to other threads to execute atomically, as one indivisible operation.
In this talk, I will present the STM paradigm and the main algorithmic techniques to implement it.
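As a concrete illustration of the general optimistic read/validate/commit scheme behind many STM algorithms, here is a minimal sketch in Python. The names (`TVar`, `atomically`) and the per-variable versioning are illustrative assumptions, not the specific algorithms from the talk; reads record the version they observed, writes are buffered, and a single commit lock serializes validation and publication.

```python
import threading

class TVar:
    """A transactional variable: current value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # serializes validate-and-publish

def atomically(tx):
    """Run tx(read, write) as a transaction, retrying on conflict.

    Reads record the version they saw; writes are buffered locally.
    At commit, the read set is validated under a global lock and, if no
    variable read has changed meanwhile, the buffered writes are published
    (bumping versions); otherwise the whole transaction is re-executed.
    """
    while True:
        read_set, write_set = {}, {}
        def read(tvar):
            if tvar in write_set:                 # read-your-own-writes
                return write_set[tvar]
            read_set[tvar] = tvar.version
            return tvar.value
        def write(tvar, value):
            write_set[tvar] = value
        result = tx(read, write)
        with _commit_lock:
            if all(tv.version == v for tv, v in read_set.items()):
                for tv, val in write_set.items():
                    tv.value = val
                    tv.version += 1
                return result
        # a concurrent commit invalidated one of our reads: retry
```

A lost-update race (two threads incrementing a shared counter) is detected at validation time and resolved by retrying, which is exactly the atomicity guarantee described above.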

Thursday 21 May 2015

Jérôme Leroux (LaBRI)

Ideals And Well Quasi Orders (Part III)

More than 30 years after their inception, the decidability proofs for
reachability in vector addition systems (VAS) still retain much of
their mystery. These proofs rely crucially on a decomposition of runs
successively refined by Mayr, Kosaraju, and Lambert, which appears
rather magical.
This talk offers a justification for this decomposition technique, by
showing that it emerges naturally in the study of the ideals of well
quasi ordered sets. Many applications of this recent breakthrough are
expected on various VAS extensions. This is joint work with Sylvain
Schmitz.
PART I (45 minutes): Ideals for well quasi ordered sets.
PART II (45 minutes): Ideals of the VAS runs.
PART III (45 minutes): End of Part II.

Thursday 7 May 2015

Jérôme Leroux (LaBRI)

Ideals And Well Quasi Orders (Part II)

More than 30 years after their inception, the decidability proofs for
reachability in vector addition systems (VAS) still retain much of
their mystery. These proofs rely crucially on a decomposition of runs
successively refined by Mayr, Kosaraju, and Lambert, which appears
rather magical.
This talk offers a justification for this decomposition technique, by
showing that it emerges naturally in the study of the ideals of well
quasi ordered sets. Many applications of this recent breakthrough are
expected on various VAS extensions. This is joint work with Sylvain
Schmitz.
PART I (45 minutes): Ideals for well quasi ordered sets.
PART II (45 minutes): Ideals of the VAS runs.

Thursday 16 April 2015

Jérôme Leroux (LaBRI)

Ideals And Well Quasi Orders (Part I)

More than 30 years after their inception, the decidability proofs for
reachability in vector addition systems (VAS) still retain much of
their mystery. These proofs rely crucially on a decomposition of runs
successively refined by Mayr, Kosaraju, and Lambert, which appears
rather magical.
This talk offers a justification for this decomposition technique, by
showing that it emerges naturally in the study of the ideals of well
quasi ordered sets. Many applications of this recent breakthrough are
expected on various VAS extensions. This is joint work with Sylvain
Schmitz.
PART I (45 minutes): Ideals for well quasi ordered sets.
PART II (45 minutes): Ideals of the VAS runs.

Formal models for real-time distributed systems, like time Petri nets and networks of timed automata, have proven useful for the verification of real-time systems. On the other hand, using these models as specifications for designing real-time systems raises several difficulties. Here we focus on those related to the distributed nature of the system. Implementing a model may be possible at the cost of some transformations that make it suitable for the target device. In this talk, I present several results about the semantics of distributed real-time systems and provide methods for the design of such systems.

Thursday 5 March 2015

Benjamin Monmege (ULB)

Reachability in MDPs: Refining Convergence of Value Iteration

Markov Decision Processes (MDP) are a widely used model including both
non-deterministic and probabilistic choices. Minimal and maximal
probabilities to reach a target set of states, with respect to a
policy resolving non-determinism, may be computed by several methods
including value iteration. This algorithm, easy to implement and
efficient in terms of space complexity, consists in iteratively
finding the probabilities of paths of increasing length. However, it
raises three issues: (1) defining a stopping criterion ensuring a
bound on the approximation, (2) analyzing the rate of convergence, and
(3) specifying an additional procedure to obtain the exact values once
a sufficient number of iterations has been performed. The first two
issues are still open, and for the third one a "crude" upper bound on
the number of iterations has been proposed. Based on a graph analysis
and transformation of MDPs, we address these problems. First we
introduce an interval iteration algorithm, for which the stopping
criterion is straightforward. Then we exhibit a convergence
rate. Finally, we significantly improve the bound on the number of
iterations required to get the exact values.
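The interval iteration idea can be sketched on a toy example: iterate a lower bound from 0 and an upper bound from 1, and stop when they are close. This is a minimal sketch assuming (without the graph preprocessing discussed in the talk) that the only end components are an absorbing target and an absorbing sink; the MDP encoding and names below are illustrative.

```python
def interval_iteration(mdp, target, sink, eps=1e-6):
    """Maximal reachability probabilities via interval iteration.

    mdp maps each state to {action: [(prob, successor), ...]}; target and
    sink are absorbing. lo and hi converge to the true value from below
    and above, so hi - lo gives a sound stopping criterion -- unlike plain
    value iteration, whose naive "small change" test gives no guarantee.
    """
    lo = {s: 0.0 for s in mdp}; lo[target] = 1.0
    hi = {s: 1.0 for s in mdp}; hi[sink] = 0.0
    def bellman(v):
        return {s: v[s] if s in (target, sink) else
                   max(sum(p * v[q] for p, q in acts)
                       for acts in mdp[s].values())
                for s in mdp}
    while max(hi[s] - lo[s] for s in mdp) > eps:
        lo, hi = bellman(lo), bellman(hi)
    return {s: (lo[s] + hi[s]) / 2 for s in mdp}

# Toy MDP: from s0, the optimal choice 'a' reaches target 't' with
# probability 0.5 + 0.5 * 0.5 = 0.75; 'b' loops and is suboptimal.
mdp = {'s0': {'a': [(0.5, 't'), (0.5, 's1')],
              'b': [(0.1, 's0'), (0.9, 's1')]},
       's1': {'a': [(0.5, 't'), (0.5, 'x')]},
       't': {}, 'x': {}}
```

The point of the sketch is the stopping criterion: when the loop exits, the true value provably lies in the returned interval of width at most eps.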

Thursday 22 January 2015

Anca Muscholl (LaBRI)

On the parameterized verification of asynchronous shared-memory systems

I will present a recent paper by Esparza, Ganty and Majumdar (CAV'13) showing
that under the assumption of non-atomicity of R/W operations on the shared memory,
parameterized verification of pushdown processes becomes decidable.
Link to the paper: http://arxiv.org/pdf/1304.1185

Thursday 18 December 2014

Denis Kuperberg (IRIT/ONERA)

Cost functions and Value 1 problem in practice

I will present a tool implementing algorithms on an algebraic structure called stabilization monoids. These monoids are used to abstract various quantitative behaviours, and make it possible to solve problems like boundedness of particular weighted automata (namely B-automata from the theory of cost functions), or the value 1 problem for a special class of probabilistic automata (called leaktight).
I will briefly explain the theory behind these problems, as well as how cost functions were motivated by historical problems like star-height, and finally demonstrate how
the tool works.
This is joint work with Nathanael Fijalkow; a newer, optimized version is in progress with Nathanael Fijalkow, Hugo Gimbert and Edon Kelmendi.

Thursday 4 December 2014

Sylvain Schmitz (LSV)

Alternating Vector Addition Systems with States

Vector addition systems with the ability to `fork' into independent
computations were originally defined in the study of propositional
linear and relevance logics. Recently, further ties with
multi-dimensional energy games, finite-system simulations, and
zero-reachability games have been noticed. I shall survey both these
relationships and complexity results.
Based on work with Jean-Baptiste Courtois presented at MFCS (with a
full paper available on HAL: <https://hal.inria.fr/hal-00980878v2>),
and on ongoing work with Marcin Jurdzinski and Ranko Lazic.

Thursday 27 November 2014

Igor Walukiewicz & Laurent Simon (LaBRI)

IC3: model checking with SAT, part 2

Thursday 6 November 2014

Paulin de Naurois (LIPN)

Reachability in Vector Addition Systems with States and Split/Join Transitions

We define Vector Addition Systems with States and Split/Join Transitions, a new model that extends VASS. Non-negative reachability in this model without join transitions is known to be equivalent to provability in MELL, and to be TOWER-hard. As a first step towards a reachability result, we define a suitable notion of covering graph for the model, and prove its finiteness and effective constructibility.

Thursday 23 October 2014

Igor Walukiewicz & Laurent Simon (LaBRI)

IC3: model checking with SAT

Thursday 16 October 2014

Ranko Lazic (Univ. Warwick)

Zeno, Hercules and the Hydra

Metric temporal logic (MTL) is one of the most prominent specification
formalisms for real-time systems. Over infinite timed words, full MTL
is undecidable, but satisfiability for its safety fragment was proved
decidable several years ago. The problem is also known to be
equivalent to a fair termination problem for a class of channel
machines with insertion errors. However, the complexity has remained
elusive, except for a non-elementary lower bound. Via another
equivalent problem, namely termination for a class of rational
relations, we show that satisfiability for safety MTL is not primitive
recursive, yet is Ackermannian, i.e., among the simplest non-primitive
recursive problems. This is surprising since decidability was
originally established using Higman's Lemma, suggesting a much higher
non-multiply recursive complexity.
Joint work with Joel Ouaknine and James Worrell.

Thursday 18 September 2014

Piotr Hofman (Univ. Bayreuth)

Branching bisimulation for normed commutative context free grammars

Bisimulation is a well-known and well-studied equivalence relation between states of a labeled transition system; however, there is no canonical extension of it to systems with epsilon transitions.
There are several candidates, among which the best known are weak and branching bisimulation. During the talk I will present some ideas behind one of the algorithms to compute branching bisimulation for infinite-state systems, namely those induced by normed commutative context-free grammars. I will start from basic definitions and then move on to a sketch of the proof.
This talk is based on joint work with Wojciech Czerwinski and Slawomir Lasota published at CONCUR 2011.

Thursday 19 June 2014

Thème MV (LaBRI)

Theme meeting

Thursday 5 June 2014

Salvatore La Torre (Univ. Salerno)

Sequentialization and Bounded Model-checking

Bounded model checking (BMC) has successfully been used
for many practical program verification problems, but concurrency still
poses a challenge. Sequentializations have emerged as an efficient technique
to obtain analysis tools for concurrent programs by reusing tools designed for sequential programs.
In this talk, we describe some recent results concerning the design and the implementation of
efficient sequentializations targeted to using BMC tools as backends.
The presented sequentializations are implemented in the tool CSeq
(http://users.ecs.soton.ac.uk/gp4/cseq/cseq.html).

Monday 2 June 2014

Chris Newcombe (Amazon.com)

Why Amazon Chose TLA+

Since 2011, engineers at Amazon have been using TLA+ to help solve difficult
design problems in critical systems. This talk describes the reasons why we
chose TLA+ instead of other methods, how we’ve applied it so far, and areas in which we would welcome further progress.

Thursday 22 May 2014

Salvatore La Torre (Univ. Salerno)

Analysis of Concurrent Programs via Sequentialization

Thursday 17 April 2014

Patrick Totzke (LaBRI)

Monotonicity in One-Counter-Systems: what gives?

I will discuss several basic problems involving one-counter automata (OCA)
with special focus on the impact of monotonicity.
First we will recall a characterization of reachability based on Valiant's
classic "hill cutting" argument, which leads to an NL decision procedure.
Not surprisingly, many problems involving two OCA are already undecidable:
(Trace) inclusion and equivalence, simulation preorder, weak (and branching)
bisimulation and so on.
Forbidding zero-tests in OCA results in a model called one-counter nets (OCN)
which corresponds to 1-dimensional VASS and lies in the intersection of VASS
and pushdown systems. Due to the monotonicity of the step relation in OCN,
some of the problems mentioned above become decidable.
I will discuss our technique for deciding strong simulation for OCN and
mention some implications, generalizations and open problems.
Keywords: counter-automata, reachability, simulation, energy games.

Thursday 13 March 2014

Sylvain Schmitz (LSV)

The Power of Priority Channel Systems

We introduce Priority Channel Systems, a new class of channel systems where messages carry a numeric priority and where higher-priority messages can supersede lower-priority messages preceding them in the fifo communication buffers. The decidability of safety and inevitability properties is shown via the introduction of a priority embedding, a well-quasi-ordering that has not previously been used in well-structured systems. We then show how Priority Channel Systems can compute Fast-Growing functions and prove that the aforementioned verification problems are F_{epsilon_{0}}-complete.
Joint work with Christoph Haase and Philippe Schnoebelen. A short version was presented at Concur 2013; see http://arxiv.org/abs/1301.5500 for the full paper.

Thursday 20 February 2014

Valentin Perrelle (IRT SystemX)

Static analysis of programs manipulating arrays

Static analysis is a key area in compilation, optimization and software validation. Complex data structures (arrays, dynamic lists, graphs, ...) are ubiquitous in programs and can be challenging to analyze, because they can be large or of unbounded size and because accesses are computed (through indexing or indirection).
Whereas verifying the validity of array accesses was one of the initial motivations of abstract interpretation, the discovery of properties about array contents was only addressed recently. Most analyses of array contents are based on a partitioning of the arrays; they then try to discover properties about each fragment of this partition.
The choice of this partition is a difficult problem, and each method has its flaws. Moreover, classical representations of array partitions induce an exponential complexity for these analyses.
We generalize the concept of array partitioning into the concept of "fragmentation", which allows overlapping fragments, handles potentially empty fragments and selects specialized relations. We then propose an abstraction of these fragmentations in terms of graphs called "slice diagrams", together with the operations to manipulate them, ensuring a polynomial complexity. Finally, we propose a new criterion to compute a semantic fragmentation, inspired by the existing ones while attempting to correct their flaws. These methods have been implemented in a static analyzer. Experiments show that the analyzer can efficiently and precisely prove some challenging examples in the field of static analysis of programs manipulating arrays.

Thursday 13 February 2014

Emmanuel Fleury (LaBRI)

Binary Analysis: Theory and Practice

Until now, program analysis has been performed only on
mathematical models of the program or on high-level source code. We aim at
performing an analysis based only on the binary form of the program and,
if possible, deducing useful properties of the program (either for software
verification or reverse-engineering purposes). In this talk, we will
define and present the main problems encountered when dealing with the
binary format of programs.

Thursday 6 February 2014

Philippe Schnoebelen (LSV)

The Power of Well-Structured Systems

Well-structured systems, aka WSTS, are computational models where the set of possible configurations is equipped with a well-quasi-ordering which is compatible with the transition relation between configurations. This structure supports generic decidability results that are important in verification and several other fields. This paper recalls the basic theory underlying well-structured systems and shows how two classic decision algorithms can be formulated as an exhaustive search for some "bad" sequences. This lets us describe new powerful techniques for the complexity analysis of WSTS algorithms. Recently, these techniques have been successful in precisely characterizing the power, in a complexity-theoretical sense, of several important WSTS models like unreliable channel systems, monotonic counter machines, or networks of timed systems.
[ <a href="http://www.lsv.ens-cachan.fr/~phs/publis-phs.php?onlykey=SS-concur13">Paper</a> ]
Joint work with Sylvain Schmitz.
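One of the two classic generic WSTS decision procedures mentioned above, backward coverability, can be sketched for the special case of vector addition systems: upward-closed sets of markings are represented by their finite bases of minimal elements, and the well-quasi-ordering of (N^d, <=) is what guarantees that the saturation terminates. The code below is a minimal illustrative sketch of that scheme, not tuned for efficiency and not the paper's algorithm.

```python
def backward_coverability(initial, target, transitions):
    """Generic WSTS backward algorithm, instantiated for a d-dim VAS.

    Saturates the upward-closed set of markings from which `target` can
    be covered, keeping only its minimal elements; transitions are integer
    delta vectors. Returns True iff target is coverable from initial.
    """
    leq = lambda m, n: all(a <= b for a, b in zip(m, n))
    def pre(m, t):
        # least marking that can fire t and land componentwise above m
        return tuple(max(x - d, 0) for x, d in zip(m, t))
    def minimize(ms):
        # drop elements strictly dominated by another basis element
        return {m for m in ms if not any(leq(n, m) and n != m for n in ms)}
    basis = {tuple(target)}
    while True:
        new = minimize(basis | {pre(m, t) for m in basis for t in transitions})
        if new == basis:  # fixpoint: basis describes all coverable-from markings
            return any(leq(m, tuple(initial)) for m in basis)
        basis = new
```

The length of this saturation is exactly the kind of "bad sequence" whose length the complexity techniques in the talk bound.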

Thursday 23 January 2014

Yliès Falcone (LIG, Univ. Grenoble 1)

Towards distributed monitoring (of Android applications)

In this talk, I will present some recent results on the topic of monitoring. The first part presents a solution for monitoring distributed systems. The second part presents what the techniques of runtime monitoring allow when trying to improve the security, safety and reliability of Android applications.
Part 1: Decentralized LTL Monitoring. Users wanting to monitor distributed or component-based systems often perceive them as monolithic systems which, seen from the outside, exhibit a uniform behavior as opposed to many components displaying many local behaviors that together constitute the system's global behavior. This level of abstraction is often reasonable, hiding implementation details from users who may want to specify the system's global behavior in terms of an LTL formula. However, the problem that arises then is how such a specification can actually be monitored in a distributed system that has no central data collection point, where all the components' local behaviors are observable.
Part 2: Runtime Verification and Enforcement for Android Applications with RV-Droid. RV-Droid is an implemented framework dedicated to runtime verification (RV) and runtime enforcement (RE) of Android applications. RV-Droid consists of an Android application that interacts closely with a cloud. Running RV-Droid on their devices, users can select targeted Android applications from Google Play (or a dedicated repository) and a property. RV-Droid is generic and currently works with two existing runtime verification frameworks for (pure) Java programs: with Java- MOP and (partially) with RuleR. RV-Droid does not require any modification to the Android kernel and targeted applications can be retrieved off-the-shelf. We carried out several experiments that demonstrated the effectiveness of RV-Droid on monitoring (security) properties.

We extend the quantitative synthesis framework by going beyond the worst case. On the one hand, classical analysis of two-player games involves an adversary (modeling the environment of the system) which is purely antagonistic and asks for strict guarantees. On the other hand, stochastic models like Markov decision processes represent situations where the system faces a purely randomized environment: the aim is then to optimize the expected payoff, with no guarantee on individual outcomes. We introduce the beyond worst-case synthesis problem, which is to construct strategies that guarantee some quantitative requirement in the worst case while providing a higher expected value against a particular stochastic model of the environment given as input. This problem is relevant for producing system controllers that provide good expected performance in everyday situations while ensuring a strict (but relaxed) performance threshold even in the event of very bad (however unlikely) circumstances. We study the beyond worst-case synthesis problem for two important quantitative settings: mean-payoff and shortest path. In both cases, we show how to decide the existence of finite-memory strategies satisfying the problem and how to synthesize one if one exists. We establish algorithms and study complexity bounds and memory requirements.

Concurrency is present at different levels, from applications using high level synchronization mechanisms and abstract data structures under atomicity and isolation assumptions, to low level code implementing concurrent/distributed data structures and system services over multi-core architectures/large scale networks.
We will address the problem of automatically verifying concurrent programs, from the point of view of decidability and complexity: we will show the difficulties that arise in addressing this problem at the different levels mentioned above (huge amounts of computation, complex control due to recursion and dynamic task creation, weak memory models, etc.), and we will overview existing approaches for tackling these issues. We will show in particular how the verification problems that must be considered in this context can be reduced to "simpler" problems (stated as reachability queries either on sequential programs, or on concurrent but sequentially consistent programs) for which there already exist known verification algorithms and tools.

Thursday 5 December 2013

Jean-Marie Lagniez (CRIL)

Knowledge Compilation for Model Counting: Affine Decision Trees

Counting the models of a propositional formula is a key issue for many AI problems, but few propositional languages offer the possibility to count models efficiently. In order to fill this gap, we recently introduced the extended affine decision tree language (EADT). An extended affine decision tree is simply a tree with affine decision nodes and some specific decomposable conjunction or disjunction nodes. Unlike standard decision trees, the decision nodes of an EADT formula are labeled not by variables but by affine clauses. During this talk, a brief overview of the different compilation languages and their theoretical characteristics will be given. Then we will focus on EADT and several of its subsets with respect to the knowledge compilation map. We also describe a CNF-to-EADT compiler and present some experimental results.

We study the complexity of deciding boundedness of Well Structured
Transition Systems equipped with a stack. We introduce a general
scheme of induction for bounding lengths of bad nested words over a well
quasi ordered set. We use this scheme to show that lengths of bad
nested words over vectors of natural numbers are bounded by a
function in the class F_ω^ω of the extended
Grzegorczyk hierarchy. This upper bound is used to derive an upper
bound for the running time of an algorithm we give, which is an
extension of the classical Karp & Miller algorithm for deciding
boundedness. We give a matching lower bound to show that the worst-case
running time of this extended algorithm cannot be below F_ω^ω.

Thursday 14 November 2013

Patrick Totzke (Univ. Edinburgh)

Inclusion problems for One-Counter Nets

A fundamental question in formal verification is the inclusion problem, which
asks whether the behaviour of one process can be reproduced by another. Like
equivalence and model checking problems, inclusion has been extensively studied
for various notions of behavioural preorders and for many computational models.
This talk will focus on One-Counter Nets (OCN), which consist of a finite
control and a single integer counter that cannot be fully tested for zero.
OCN form a natural subclass both of One-Counter Automata (and thus Pushdown
Systems), which allow explicit zero-tests, and of Petri Nets/VASS, which allow
multiple such weak counters. They are arguably the simplest model of discrete
infinite-state systems but make an interesting case study for different
settings of the inclusion problem.
I will highlight recent results on trace/language inclusion for OCN. This
problem is NL-complete if only one of the given processes is deterministic.
At the same time, trace universality for (nondeterministic) OCN is already
Ackermannian.

Thursday 7 November 2013

Salvatore La Torre (Univ. Salerno)

Multistack Pushdown Systems: formal language theory

Multi-stack pushdown systems are natural models for capturing the control flow of multi-threaded programs. The intuition of exploring computations up to a bounded number of context switches has stimulated a series of interesting results in program verification, such as for example the possibility of sequentializing concurrent programs without maintaining the cross-product of the involved threads, and the study of the formal theory of decidable subclasses.
In this talk, we focus on the formal language theory of the classes of MPS with the bounded context-switching, scope-bounded, phase-bounded, and stack-ordered restrictions. We also discuss some related results in verification.

Since the first complete SAT solvers (DPLL, 1962, based on classical backtrack search), major breakthroughs have been observed in the practical – and theoretical – study of SAT. Nowadays, SAT solvers can be used in many critical systems, from very hard problems (formal verification, biology, cryptology) with potentially long runs, to more reactive applications (where a SAT solver can be called many times per second). In this talk, we will review the progress made since the breakthrough of 2001, with the introduction of CDCL solvers, called "modern solvers". We will show how the field is slowly moving from a "complete" vision of solvers to a more "incomplete" one. We will end the talk with some open questions on the future of SAT solvers, and the recent trends in "incremental SAT solving", where SAT solvers are called thousands of times per run.

Thursday 3 October 2013

Philipp Ruemmer (Univ. Uppsala)

Disjunctive Interpolants for Horn-Clause Verification

One of the main challenges in software verification is efficient and
precise compositional analysis of programs with procedures and loops.
Interpolation methods remain one of the most promising techniques for
such verification, and are closely related to solving Horn clause
constraints. In this talk, I introduce a new notion of interpolation,
disjunctive interpolation, which solves a more general class of
interpolation problems in one step, compared to previous notions of
interpolants, such as tree interpolants or inductive sequences of
interpolants. I present algorithms and complexity results for the construction
of disjunctive interpolants, as well as their use within an
abstraction-refinement loop. We have implemented Horn clause
verification algorithms that use disjunctive interpolants and evaluate
them on benchmarks expressed as Horn clauses over the theory of integer
linear arithmetic.
This talk presents joint work with Hossein Hojjat and Viktor Kuncak.

Thursday 20 June 2013

Aiswarya Cyriac (LSV)

Split-width technique for the verification of concurrent programs with data structures

We consider systems of concurrent boolean programs equipped with several data structures. The data structures can be unbounded stacks or unbounded queues. Each data structure allows write access to only one process (call it writer), and read access to only one (possibly different) process (call it reader). A stack whose writer and reader are the same corresponds to an unbounded local stack of the writer. A queue serves as an unbounded reliable FIFO channel from its writer to its reader. As these systems are very powerful, they render most verification problems undecidable.
We introduce a parameter called split-width for its under-approximate verification. Split-width is a notion similar to tree-width, but handier for the systems we consider. Restricting to bounded split-width behaviors gives an EXPTIME decision procedure for model checking these systems against temporal logics and propositional dynamic logics, and a decision procedure for model checking against MSO. We show that split-width can capture several interesting and relaxed scheduling policies.
This is joint work with Paul Gastin and K. Narayan Kumar.

Thursday 13 June 2013

Jérôme Leroux (LaBRI)

Acceleration For Presburger Petri Nets

The reachability problem for Petri nets is a central problem of net theory. The problem is known to be decidable by inductive invariants definable in Presburger arithmetic. When the reachability set is definable in Presburger arithmetic, the existence of such an inductive invariant is immediate. However, in this case, the computation of a Presburger formula denoting the reachability set was an open problem. Recently this problem was closed by proving that if the reachability set of a Petri net is definable in Presburger arithmetic, then the Petri net is flatable, i.e. its reachability set can be obtained by runs labeled by words in a bounded language. As a direct consequence, classical algorithms based on acceleration techniques effectively compute a formula of Presburger arithmetic denoting the reachability set.

Thursday 6 June 2013

Vincent Penelle (LIGM)

On the Context-freeness Problem for Vector Addition Systems

Petri nets, or equivalently vector addition systems (VAS),
are widely recognized as a central model for concurrent systems.
Many interesting properties are decidable for this class,
such as boundedness, reachability, regularity,
as well as context-freeness, which is the focus of this paper.
The context-freeness problem asks whether the trace language
of a given VAS is context-free.
This problem was shown to be decidable by Schwer in 1992,
but the proof is very complex and intricate.
The resulting decision procedure relies on five technical conditions over
a customized coverability graph.
These five conditions are shown to be necessary,
but the proof that they are sufficient is only sketched.
In this paper, we revisit the context-freeness problem for VAS,
and give a simpler proof of decidability.
Our approach is based on witnesses of non-context-freeness,
that are bounded regular languages satisfying a nesting condition.
As a corollary,
we obtain that the trace language of a VAS is context-free if,
and only if,
it has a context-free intersection with every bounded regular language.

jeudi 30 mai 2013

Elias Tsigaridas (LIP6)

Exact algorithms for stochastic games and real algebraic geometry

Shapley's discounted stochastic games and Everett's recursive games are classical
models of game theory describing two-player zero-sum games of potentially infinite
duration. We present an exact algorithm for solving such games based on separation
bounds from real algebraic geometry. When the number of positions of the game is
constant, the algorithm runs in polynomial time and is the first with this property.
We also present lower bounds on the algebraic degree of the values of stochastic
games, induced by the irreducibility of polynomials whose coefficients
depend on the combinatorial parameters of the games, based on a generalization
of the Eisenstein criterion.

jeudi 16 mai 2013

Hugues Cassé (IRIT)

Path Analysis in OTAWA Framework

OTAWA (Open Tool for Adaptive WCET Analysis) is an open framework dedicated to the computation of WCET (Worst Case Execution Time). To be sound, effective and precise, this type of analysis must be performed at the machine-code level, which requires decoding the binary and reconstructing the execution paths of the program.
A challenging and interesting issue of path analysis is that the embedded real-time systems targeted by OTAWA exhibit many different ISAs (Instruction Set Architectures) such as PowerPC, Sparc, ARM, TriCore, etc. Implementing these back-ends by hand would be possible but tedious and error-prone, so OTAWA reuses a well-known technology based on ADLs (Architecture Description Languages). Basically, ADLs were developed to allow the automatic and safe generation of ISSs (Instruction Set Simulators), including the ability to decode instructions from binary code. In OTAWA, we have adapted an existing ADL generator, called GLISS, not only to get an instruction decoder but also to provide other useful pieces of information such as branch targets, used registers, instruction types, etc.
Decoding instructions is necessary to reconstruct execution paths, but we also need a simple and concise representation of them. The CFG (Control Flow Graph) is a good and common candidate: starting from the function entry points, CFGs are built by following the instruction control flow to get edges and by grouping instructions into basic blocks to improve the compactness of the representation. Yet some constructions remain hard to handle, i.e. indirect or computed branches, whose targets cannot be obtained by just looking at the instructions. These constructions are induced by the use of function pointers or by the optimization of switch-like statements in high-level languages. Determining the possible targets of such branches requires data-flow analysis on the machine code and, once again, the ADL has been used in OTAWA to easily produce the semantics of machine-code instructions, making the data-flow analysis independent of the actual ISA.
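The basic-block construction described above can be sketched in Python. This is a toy, ISA-independent instruction format, not OTAWA's actual API; it only assumes the kind of information (addresses and branch targets) that an ADL generator such as GLISS can provide.

```python
# Toy instruction stream: (address, mnemonic, branch_target or None).
insns = [
    (0, "cmp", None),
    (1, "beq", 4),     # conditional branch to address 4
    (2, "add", None),
    (3, "jmp", 5),     # unconditional branch to address 5
    (4, "sub", None),
    (5, "ret", None),
]

def basic_blocks(insns, entry=0):
    """Split the stream into basic blocks: a leader is the entry point,
    any branch target, and any instruction following a branch."""
    leaders = {entry}
    for addr, _, target in insns:
        if target is not None:
            leaders.add(target)      # a branch target starts a block
            leaders.add(addr + 1)    # so does the fall-through successor
    blocks, current = [], []
    for ins in insns:
        if ins[0] in leaders and current:   # new leader: close current block
            blocks.append(current)
            current = []
        current.append(ins)
    if current:
        blocks.append(current)
    return blocks

for block in basic_blocks(insns):
    print([mnem for _, mnem, _ in block])
# -> ['cmp', 'beq'], then ['add', 'jmp'], then ['sub'], then ['ret']
```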
The presentation will be followed by a demonstration of the OTAWA plugin in the Eclipse environment.

Our aim is to develop a formal method for proving cryptographic primitives. We consider cryptographic primitives as small programs, usually just a few lines of code. In the last century, Tony Hoare provided a mechanism for proving that a program is correct, the so-called Hoare logic.
Our approach consists of the following three steps:
Modeling the properties that the studied primitives have to ensure, by proposing invariants.
Defining a language representing the small set of commands used to construct a primitive, at the right level of abstraction.
Proving logical rules that allow us to generate and propagate invariants according to the commands executed in the program.
Once the language is fixed, the invariants developed, and the rules of the Hoare logic established, we can verify a primitive. Using this approach, we have developed three logics to prove the security of several public-key encryption schemes, symmetric encryption modes, and several MACs.

jeudi 04 avril 2013

Aymeric Vincent (LaBRI)

The Insight binary analysis framework

In this talk, we will present the current status of the Insight
project which is aimed at analysing binary code.
Binary code analysis is challenging because of the inherent
difficulties of analysing code where the control flow graph is not
explicitly given, where datatypes are not available, compounded with
the technical challenge of decoding mnemonics of real processors as
complex as the Intel 32-bit architecture.
The formal model used in Insight will be presented, and an application
to CFG recovery will be detailed. Further work will be discussed.

jeudi 28 mars 2013

Loïg Jezequel (ENS Cachan Bretagne)

Cost-optimal factored planning using weighted automata calculus

Factored planning is a relatively new domain in planning. It consists of exploiting the possible decompositions of many planning problems into exponentially smaller subproblems in order to solve them efficiently by parts. Even if, in the worst case, the complexity of factored planning is the same as that of classical planning, in many cases of interest it is much more efficient. However, before this work, no factored planning method existed for computing cost-optimal solutions to planning problems. In this talk we will present an approach to factored planning that finds cost-optimal solutions thanks to the use of weighted automata calculus. The principle of this approach is to represent the set of solutions of each subproblem as a weighted automaton and to apply a standard message-passing algorithm to the network constituted by all these automata. This makes it possible to compute new automata representing exactly those solutions of subproblems which are part of solutions of the global planning problem, and from that to give a globally optimal solution of this planning problem in a modular way.

jeudi 07 mars 2013

Shibashis Guha (IIT Delhi)

A semi-decidable approach for determining language inclusion for timed automata

Language inclusion for timed automata is known to be undecidable. The
universality problem can be considered to be a special case of the
language inclusion problem. Given a universal timed automaton A and any
other timed automaton B, the language inclusion problem asks if the timed
language accepted by A is included in the timed language accepted by B.
Solving this problem implies deciding the universality problem for B which
is known to be undecidable!
In this work, we present a semi-decision procedure which gives three
outputs as follows: 'yes' indicates that we conclude that the language
accepted by A is a subset of the language accepted by B. 'no' indicates
that the language accepted by timed automaton B does not include the
language accepted by A. 'do not know' indicates that we do not know how
the languages accepted by A and B are related.
The language inclusion problem for timed automata is known to be decidable
when timed automaton B is deterministic or determinizable or when timed
automaton B (strongly time) simulates timed automaton A or when B is a
single clock timed automaton. We show that our method works in many cases
where B cannot simulate A or B cannot be determinized.

jeudi 21 février 2013

Stefan Göller (Univ. Bremen)

Equivalence Checking of Pushdown Automata and One-Counter Automata

The first part of my talk will be about bisimilarity of one-counter automata (pushdown automata with a singleton stack alphabet plus a bottom-of-stack symbol). I will show that this problem is PSPACE-complete, improving the previously best-known 3-EXPSPACE upper bound by Yen (joint work with Stanislav Böhm and Petr Jancar).
The second part of the talk will be about the complexity of bisimilarity checking of pushdown automata. While decidability of this problem has been shown by Sénizergues, no complexity-theoretic upper bound of this problem is known to date. I will present a recent nonelementary lower bound for this problem, improving the previously best-known EXPTIME-hardness result of Kucera and Mayr (joint work with Michael Benedikt, Stefan Kiefer and Andrzej Murawski).

jeudi 31 janvier 2013

M. Praveen (LaBRI)

Reasoning about Data Repetitions with Counter Systems

We study linear-time temporal logics interpreted
over data words with multiple attributes. We restrict the atomic
formulas to equalities of attribute values in successive positions
and to repetitions of attribute values in the future or past.
We demonstrate correspondences between satisfiability problems
for logics and reachability-like decision problems for counter
systems. We show that allowing/disallowing atomic formulas
expressing repetitions of values in the past corresponds to the
reachability/coverability problem in Petri nets. This gives us
2EXPSPACE upper bounds for several satisfiability problems. We
prove matching lower bounds by reduction from a reachability
problem for a newly introduced class of counter systems. This
new class is a succinct version of vector addition systems with
states in which counters are accessed via pointers, a potentially useful feature in other contexts. We further strengthen
the correspondences between data logics and counter systems
by characterizing the complexity of fragments, extensions and
variants of the logic. For instance, we precisely characterize the
relationship between the number of attributes allowed in the logic
and the number of counters needed in the counter system.
This is joint work with Stephane Demri and Diego Figueira.

We show how to underapproximate the procedure summaries of recursive programs
over the integers using off-the-shelf analyzers for non-recursive programs. The
novelty of our approach is that the non-recursive program we compute may capture
unboundedly many behaviors of the original recursive program for which stack
usage cannot be bounded. Moreover, we identify a class of recursive programs on
which our method terminates and returns the precise summary relations without
underapproximation. Doing so, we generalize a similar result for non-recursive
programs to the recursive case. Finally, we present experimental results of an
implementation of our method applied to a number of examples.
Joint work with Pierre Ganty (IMDEA) and Filip Konecny (EPFL).

jeudi 17 janvier 2013

Sven Schewe (Univ. Liverpool)

Beautiful Games You Cannot Stop Playing

Parity games are simple two player games played on a finite arena that have all that it takes to attract the attention of researchers: it is a simple problem which is hard to analyse. A pocket version of P vs. NP.
Parity games are known to be in UP, CoUP, and some weirder classes besides, but whether or not they are in P has proven to be a rather elusive question. What is more, when you work with them, you will have the constant feeling that there is a polynomial time solution just around the corner, although it dissolves into nothingness when you look more closely. This talk is about the beauty of these games, the relevant algorithmic approaches and their complexity and development over time. But be careful: they are addictive, don't get hooked!

jeudi 10 janvier 2013

Gilles Geeraerts (ULB, Belgium)

Time-bounded reachability for monotonic hybrid automata

Hybrid automata (HA) form a very popular class of model for describing hybrid systems, i.e. systems that exhibit both discrete and continuous behaviors. An HA is roughly speaking a finite automaton augmented with a finite set of real-valued variables, whose values evolve continuously with time elapsing. The rate of growth of those variables can change along time and depends on the current location of the automaton. Unfortunately, HA are notoriously hard to analyse: even the reachability problem is undecidable on the very restricted subclass of stopwatch automata (where variables can have rates 0 or 1 only).
However, most of the undecidability results on HA rely on the unboundedness of time to encode runs of undecidable models such as two-counter machines. A natural question is thus to study the time-bounded variant of the verification problems on HA, hoping that this additional requirement will allow to recover decidability. A further incentive is the recent line of research by Ouaknine, Worrell, et al., who have shown the benefits of considering time-bounded variants of classical problems in the case of (alternating) timed automata.
In this talk, we study the time-bounded reachability problem for monotonic rectangular hybrid automata (MRHA). An MRHA is a rectangular hybrid automaton in which the rate of each clock is either always non-negative or always non-positive. We show that this problem is decidable (even though the unbounded reachability problem for even very simple classes of hybrid automata is well-known to be undecidable), and give a precise characterisation of its complexity: it is NExpTime-complete. We also show that extending this class by either allowing a clock to have positive and negative rates in the same HA, or by allowing diagonal guards, leads to undecidability, even in bounded time. Finally, we present a more practical extension of our positive result, by showing that we can effectively compute fixpoints characterising the sets of states that are reachable (resp. co-reachable) within T time units from a given state.
This is joint work with Thomas Brihaye, Laurent Doyen, Joël Ouaknine, Jean-François Raskin and James Worrell.

jeudi 20 décembre 2012

Antoine Rollet (LaBRI)

Runtime Enforcement of Timed Properties

Runtime enforcement is a powerful technique to ensure that a running system respects some desired properties. Using an enforcement monitor, an (untrustworthy) input execution (in the form of a sequence of events) is modified into an output sequence that complies with a property. Runtime enforcement has been extensively studied over the last decade in the context of untimed properties. In this talk, we introduce runtime enforcement of timed properties. We revisit the foundations of runtime enforcement when time between events matters. We discuss how runtime enforcers can be synthesized for a safety or co-safety timed property. Proposed runtime enforcers are time retardant: to produce an output sequence, additional delays are introduced between the events of the input sequence to correct it.

jeudi 06 décembre 2012

Javier Esparza (TUM, Germany)

A perfect model for bounded verification

A class of languages C is perfect if it is closed under Boolean operations and the emptiness problem is decidable. Perfect language classes are the basis for the automata-theoretic approach to model checking: a system is correct if the language generated by the system is disjoint from the language of bad traces. Regular languages are perfect, but because the disjointness problem for context-free languages is undecidable, no class containing them can be perfect.
In practice, verification problems for language classes that are not perfect are often under-approximated by checking if the property holds for all behaviors of the system belonging to a fixed subset. A general way to specify a subset of behaviors is by using bounded languages. A class of languages C is perfect modulo bounded languages if it is closed under Boolean operations relative to every bounded language, and if the emptiness problem is decidable relative to every bounded language.
We consider finding perfect classes of languages modulo bounded languages. We show that the class of languages accepted by multi-head pushdown automata is perfect modulo bounded languages, and characterize the complexities of decision problems. We also show that bounded languages form a maximal class for which perfection is obtained. We show that computations of several known models of systems, such as recursive multi-threaded programs, recursive counter machines, and communicating finite-state machines can be encoded as multi-head pushdown automata, giving uniform and optimal underapproximation algorithms modulo bounded languages.

jeudi 06 décembre 2012

Jean-Francois Raskin (ULB, Belgium)

Multi-dimension Quantitative Games: Complexity and Strategy Synthesis

In mean-payoff games, the objective of the protagonist is to ensure that the limit average of an infinite sequence of numeric weights is nonnegative. In energy games, the objective is to ensure that the running sum of weights is always nonnegative. Multi-mean-payoff and multi-energy games replace individual weights by tuples, and the limit average (resp. running sum) of each coordinate must be (resp. remain) nonnegative. These games have applications in the synthesis of resource-bounded processes with multiple resources. In this talk, I will summarize several results that we have obtained recently:
- We prove the finite-memory determinacy of multi-energy games and show the inter-reducibility of multi-mean-payoff and multi-energy games for finite-memory strategies.
- We improve the computational complexity for solving both classes of games with finite-memory strategies: while the previously best known upper bound was ExpSpace, and no lower bound was known, we give an optimal coNP-complete bound. For memoryless strategies, we show that the problem of deciding the existence of a winning strategy for the protagonist is NP-complete.
- We provide an optimal symbolic and incremental algorithm to synthesize strategies in multi-energy games.
- We present the first solution of multi-mean-payoff games with infinite-memory strategies. We show that multi-mean-payoff games with mean-payoff-sup objectives can be decided in NP and in coNP, whereas multi-mean-payoff games with mean-payoff-inf objectives are coNP-complete.
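The two objectives can be illustrated on a finite play (a sketch of the winning conditions only, not of the solving algorithms in the talk): the multi-energy condition asks that the running sum of each coordinate stay nonnegative from some initial credit vector, while the mean payoff is the average weight per coordinate.

```python
def energy_safe(weights, credit):
    """Multi-energy condition: the running sum of each coordinate must
    remain nonnegative along the play, starting from an initial credit."""
    level = list(credit)
    for step in weights:
        level = [l + w for l, w in zip(level, step)]
        if any(l < 0 for l in level):
            return False
    return True

def mean_payoff(weights):
    """Average weight per coordinate of a finite play (the limit average
    of an infinite play is the limit of such averages over its prefixes)."""
    n = len(weights)
    return tuple(sum(w[i] for w in weights) / n for i in range(len(weights[0])))

play = [(1, -1), (-1, 2), (1, -1)]
print(energy_safe(play, credit=(0, 1)))  # -> True: levels (1,0), (0,2), (1,1)
print(mean_payoff(play))                 # the averages (1/3, 0.0)
```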

jeudi 29 novembre 2012

Grégoire Sutre (LaBRI)

Safety Verification of Communicating One-Counter Machines

In order to verify protocols that tag messages with integer
values, we investigate the decidability of the reachability
problem for systems of communicating one-counter machines. These
systems consist of local one-counter machines that asynchronously
communicate by exchanging the value of their counters via, a
priori unbounded, FIFO channels. This model extends communicating
finite-state machines (CFSM) by infinite-state local processes
and an infinite message alphabet. The main result of the paper
is a complete characterization of the communication topologies
that have a solvable reachability question. As CFSMs already
exclude the possibility of automatic verification in the presence of
mutual communication, we also consider an under-approximative
approach to the reachability problem, based on rendezvous
synchronization.
Joint work with Alexander Heußner and Tristan Le Gall.

jeudi 08 novembre 2012

Frédéric Herbreteau (LaBRI)

Coarse abstractions make Zeno behaviors difficult to detect

An infinite run of a timed automaton is Zeno if it spans only a finite amount of time. Such runs are considered unfeasible and hence it is important to detect them, or dually, find runs that are non-Zeno. Over the years important improvements have been obtained in checking reachability properties for timed automata. We show that some of these very efficient optimizations make testing for Zeno runs costly. In particular we show NP-completeness for the LU-extrapolation of Behrmann et al. We analyze the source of this complexity in detail and give general conditions on extrapolation operators that guarantee a (low) polynomial complexity of Zenoness checking. We propose a slight weakening of the LU-extrapolation that satisfies these conditions.

jeudi 04 octobre 2012

Lorenzo Clemente (LaBRI)

Reachability of Communicating Timed Processes

We study the reachability problem for communicating timed processes, both in discrete and dense time. Our model comprises automata with local timing constraints communicating over unbounded FIFO channels. Each automaton can only access its set of local clocks; all clocks evolve at the same rate. Our main contribution is a complete characterization of decidable and undecidable communication topologies, for both discrete and dense time. We also obtain complexity results, by showing that communicating timed processes are at least as hard as Petri nets; in discrete time, we also show equivalence with Petri nets. Our results follow from mutual topology-preserving reductions between timed automata and (untimed) counter automata.

jeudi 27 septembre 2012

Sven Schewe (Liverpool University)

Strategy Synthesis (From Specification to Implementation)

This is a short journey through the pitfalls and beauty of Church's problem, from an automata-theoretic angle. We will admire the simplicity of the problems and some funny effects, from forcing the branching mode of the automata involved to the cost incurred. After looking at how we can keep our life simple for a price too high to pay, we return more and more to the real problem and see with (maybe) some surprise that it is even simpler. Probably. But we will prove that it is unlikely that we will ever discover the truth.

jeudi 07 juin 2012

M Praveen (LSV)

Parameterized Complexity of some Petri Net Problems

Parameterized complexity is the branch of complexity theory that seeks to identify, in the computational complexity of some problems, the dependence on different parameters of the input. The coverability and boundedness problems for Petri nets are known to be EXPSPACE-complete. Previous work [e.g. L.E. Rosier and H.-C. Yen. A multiparameter analysis of the boundedness problem for vector addition systems. JCSS 32, 1986] has already analysed the dependency of this complexity on different parameters, such as the number of places, maximum arc weight, etc. We continue this work and study the dependency of the complexity on two parameters called benefit depth and vertex cover. If k denotes a parameter and n denotes the size of the input for a problem, then the problem is said to be in the parameterized complexity class ParaPSPACE if there is an algorithm to solve the problem that uses memory space O(f(k)·poly(n)), where f(k) is any computable function of the parameter and poly(n) is some polynomial of the input size. We show ParaPSPACE results for the coverability and boundedness problems using the parameters mentioned above. We also show ParaPSPACE results for model checking a logic that can express some extensions of coverability and boundedness. This is joint work with Kamal Lodaya.

jeudi 24 mai 2012

Pierre Bourhis ()

Querying Schemas With Access Restrictions

We study verification of systems whose transitions consist of
accesses to a Web-based data-source. An access is a lookup
on a relation within a relational database, fixing values for
a set of positions in the relation. For example, a transition
can represent access to a Web form, where the user is restricted
to filling in values for a particular set of fields. We
look at verifying properties of a schema describing the possible
accesses of such a system. We present a language where
one can describe the properties of an access path, and also
specify additional restrictions on accesses that are enforced
by the schema. Our main property language, AccLTL, is
based on a first-order extension of linear-time temporal logic,
interpreting access paths as sequences of relational structures.
We also present a lower-level automaton model, A-automata,
into which AccLTL specifications can be compiled.
We show that AccLTL and A-automata can express static
analysis problems related to “querying with limited access
patterns” that have been studied in the database literature
in the past, such as whether an access is relevant to answering
a query, and whether two queries are equivalent in
the accessible data they can return. We prove decidability
and complexity results for several restrictions and variants
of AccLTL, and explain which properties of paths can be
expressed in each restriction.

jeudi 10 mai 2012

Madhavan Mukund (CMI, India)

Tagging Makes Local Testing of Message-Passing Systems Feasible

The only practical way to test distributed message-passing systems is
to use local testing. In this approach, used in formalisms such as
concurrent TTCN-3, some components are replaced by test
processes. Local testing consists of monitoring the interactions
between these test processes and the rest of the system and comparing
these observations with the specification, typically described in
terms of message sequence charts. The main difficulty with this
approach is that local observations can combine in unexpected ways to
define implied scenarios not present in the original
specification. Checking for implied scenarios is known to be
undecidable for regular specifications, even if observations are made
for all but one process at a time. We propose an approach where we
append tags to the messages generated by the system under test. Our
tags are generated in a uniform manner, without referring to or
influencing the internal details of the underlying system. These
enriched behaviours are then compared against a tagged version of the
specification. Our main result is that detecting implied scenarios
becomes decidable in the presence of tagging.
This is joint work with Puneet Bhateja.

jeudi 03 mai 2012

Michael Emmi (LIAFA, Paris)

Bounded Phase Analysis of Message-Passing Programs

We describe a novel technique for bounded analysis of asynchronous
message-passing programs with ordered message queues. Our bounding
parameter does not limit the number of pending messages, nor the
number of “context switches” between processes. Instead, we limit the
number of process communication cycles, in which an unbounded number
of messages are sent to an unbounded number of processes across an
unbounded number of contexts. We show that remarkably, despite the
potential for such vast exploration, our bounding scheme gives rise to
a simple and efficient program analysis by reduction to sequential
programs. As our reduction avoids explicitly representing message
queues, our analysis scales irrespective of queue content and
variation.

jeudi 12 avril 2012

Aiswarya Cyriac (LSV, ENS Cachan)

Model Checking Languages of Data Words

We consider the model-checking problem for data multi-pushdown automata (DMPA). DMPA generate data words, i.e., strings enriched with values from an infinite domain. The latter can be used to represent an unbounded number of process identifiers, so that DMPA are suitable to model concurrent programs with dynamic process creation. To specify properties of data words, we use monadic second-order (MSO) logic, which comes with a predicate to test two word positions for data equality. While satisfiability for MSO logic is undecidable (even for weaker fragments such as first-order logic), our main result states that one can decide if all words generated by a DMPA satisfy a given formula from the full MSO logic.
This is joint work with Benedikt Bollig, Paul Gastin and K. Narayan Kumar.

jeudi 05 avril 2012

Rémi Bonnet (LSV)

Forward Analysis for WSTS : Beyond Regular Accelerations

The well-known Karp and Miller algorithm constructs the
coverability tree of a Vector Addition System, obtaining a finite representation
of the cover (the downward closure of the reachability set).
The series "Forward analysis in WSTS" aims to generalize this procedure to
arbitrary well-ordered state spaces. I'll first recall the earlier works by Finkel
and Goubault-Larrecq, which introduced the notion of complete WSTS, in
which a finite representation of the cover as a set of maximal elements exists.
Then, I'll present our formalization of acceleration strategies and a parameterized Karp-Miller procedure that relies on these strategies in order to compute this
set of maximal elements.
As an illustration of these ideas, I'll present an acceleration strategy for
Vector Addition Systems with two resets that allows the previously defined
procedure to terminate, effectively computing the finite representation of the cover.
This is joint work with Alain Finkel.
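As a minimal illustration of the classical Karp-Miller acceleration for plain Vector Addition Systems (the starting point of this line of work; this sketch does not implement the generalized WSTS procedure of the talk): whenever a reached marking strictly dominates an ancestor, the strictly growing coordinates are replaced by ω, yielding a finite representation of the cover.

```python
def karp_miller(m0, transitions):
    """Cover of a Vector Addition System via Karp-Miller acceleration:
    when a new marking strictly dominates an ancestor, every strictly
    growing coordinate is set to ω (represented here by float('inf'))."""
    INF = float("inf")

    def fire(m, pre, post):
        if all(x >= p for x, p in zip(m, pre)):           # enabled?
            return tuple(x - p + q for x, p, q in zip(m, pre, post))
        return None

    cover = set()
    todo = [(m0, [])]                  # (marking, list of ancestors)
    while todo:
        m, anc = todo.pop()
        if m in cover:
            continue
        cover.add(m)
        for pre, post in transitions:
            m2 = fire(m, pre, post)
            if m2 is None:
                continue
            for a in anc + [m]:        # acceleration step
                if m2 != a and all(x >= y for x, y in zip(m2, a)):
                    m2 = tuple(INF if x > y else x for x, y in zip(m2, a))
            todo.append((m2, anc + [m]))
    return cover

# One transition pumping place p1 without bound: p0 -> p0 + p1
print(sorted(karp_miller((1, 0), [((1, 0), (1, 1))])))  # -> [(1, 0), (1, inf)]
```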

jeudi 29 mars 2012

Uli Fahrenberg (IRISA, Rennes)

The Quantitative Linear-Time–Branching-Time Spectrum

We present a distance-agnostic approach to quantitative verification. Taking as input an unspecified distance on system traces, or executions, we develop a game-based framework which allows us to define a spectrum of different interesting system distances corresponding to the given trace distance. Thus we extend the classic linear-time–branching-time spectrum to a quantitative setting, parametrized by trace distance. We also provide fixed-point characterizations of all system distances, and we prove a general transfer principle which allows us to transfer counterexamples from the qualitative to the quantitative setting, showing that all system distances are mutually topologically inequivalent.

jeudi 22 mars 2012

Emmanuel Filiot (ULB)

Exploiting Structure in LTL Synthesis

The aim of program synthesis is to automatically generate a program that satisfies a given specification, in contrast to program verification, for which both the specification and the program are given as input. The underlying goal is to improve program reliability and optimize design constraints, like time and human errors, and to get rid of the low-level programming tasks, by replacing them with the design of high-level specifications. The old dream of automatic synthesis, which among others was shared by Church, is difficult to realize for general-purpose programming languages. However in recent years, there has been a renewed interest in feasible methods for the synthesis of application specific programs, which have been, for instance, applied to reactive systems, distributed systems, programs manipulating arithmetic or concurrent data-structures.
Reactive systems are non-terminating programs that continuously interact with their environment. They arise both as hardware and software, and are usually part of safety-critical systems, for example microprocessors, air traffic controllers, programs to monitor medical devices, or nuclear plants. It is therefore crucial to guarantee their correctness. The temporal logic LTL is a very important abstract formalism to describe properties of reactive systems. As shown by Pnueli and Rosner in 1989, the synthesis of reactive systems from LTL specifications is a 2-ExpTime-complete problem.
In this talk, I will present recent progress in LTL synthesis based on a bounded synthesis approach inspired by bounded model checking, and show that the high worst-case time complexity of LTL synthesis does not handicap its practical feasibility. This is achieved by exploiting the structure underlying the automata constructions used to solve the synthesis problem.

jeudi 15 mars 2012

Benoît Delahaye ()

Compositional Specification Theories for Stochastic Systems

Markov Chains (MCs) and Probabilistic Automata (PAs) are
widely-recognized mathematical frameworks for the specification and
analysis of systems with non-deterministic and/or stochastic behaviors.
Notions of specification, implementation, satisfaction, and refinement,
together with operators supporting stepwise design, constitute a
specification theory. In the early 1990s, an abstraction of Markov
Chains called Interval Markov Chains (IMCs) was proposed as a
specification theory. This talk shows why IMCs are not perfectly suited
to play their role as a specification theory and instead introduces a
new, more permissive, abstraction called Constraint Markov Chains
(CMCs). We introduce all the operators that make CMCs a complete
specification theory and discuss computability and complexity. We then
show how to extend CMCs by mixing them with Modal Transition Systems in
order to propose a specification theory for Probabilistic Automata.

jeudi 08 mars 2012

Gabriele Puppis (LaBRI)

The Cost of Repairing Regular Specifications

What do you do if a computational object (e.g., a document, a program trace) fails a specification?
An obvious approach is to perform a "repair": modify the object minimally to get something that
satisfies the constraints. This approach has been extensively investigated in the database community
for relational integrity constraints, and in the AI community for propositional logics. Different modification
operators have been considered on the basis of the application scenarios. For instance, a repair of an
XML document usually consists of applying to the underlying tree structure a certain number of editing
operations such as relabelings, deletions, and insertions of nodes.
In this talk I will survey some results related to the worst-case cost of repairing documents between
regular specifications. Precisely, I will focus on the number of edits that are needed to get from a
document (i.e., a word or a tree) that satisfies a source specification (i.e., a regular language S) to
some document that satisfies a target specification (i.e., a regular language T). As this number may
well be unbounded, I will consider the problem of determining those pairs of languages (S,T) such
that one can get from any word/tree in S to a word/tree in T using a finite, uniformly bounded number
of editing operations. I will give effective characterizations of these pairs when S and T are given by
finite state automata (word case) or stepwise tree automata (tree case), and derive some complexity
bounds for the corresponding problems.
The presentation is based on joint work with Michael Benedikt, Cristian Riveros, and Sławek Staworko.
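For words, the editing operations above (relabelings, insertions, deletions) are exactly those of the classical Levenshtein edit distance. As a purely illustrative building block — the standard textbook dynamic program, not the bounded-repair algorithm of the talk — the cost of repairing one word into another can be computed as:

```python
def edit_distance(s, t):
    """Minimum number of relabelings, insertions, and deletions turning
    word s into word t (classical Levenshtein dynamic program)."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                           # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                           # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # delete s[i-1]
                          d[i][j - 1] + 1,          # insert t[j-1]
                          d[i - 1][j - 1] + cost)   # keep or relabel
    return d[m][n]
```

The repair cost from a word in S to the language T is then the minimum of this distance over all target words; the bounded repair problem asks whether the supremum of that quantity over all of S is finite.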

jeudi 16 février 2012

Lorenzo Clemente (LaBRI)

Fixed-word simulations and ranks

Minimization of Büchi automata is an intriguing topic in automata theory,
both for a theoretical understanding of automata over infinite words, and for practical applications.
Ideally, for a given language, one would like to find an automaton recognizing it with the least number of states.
Since exact minimization is computationally hard (e.g., PSPACE-complete),
we concentrate on quotienting, a state-space reduction technique that works by "gluing together" certain states.
Which states can be merged is dictated by suitable preorders:
In this talk, we study fixed-word simulations, which are simulation-like preorders sound for quotienting.
We show that fixed-word simulations are coarser than previously studied simulation-like preorders,
by characterizing them with a natural (but non-trivial) ranking argument.
Our ranking construction is related to the so-called Kupferman-Vardi construction for complementing Büchi automata.

We introduce a framework based on abstract interpretation for reasoning about programs with lists carrying integer numerical data.
In this framework, abstract domains are used to describe and manipulate complex constraints on configurations of these programs mixing constraints on the shape of the heap, sizes of the lists, and on the data stored in the lists.
We consider a domain where data is described by formulas in a universally quantified fragment of first-order logic over sequences, as well as a domain where data is described using constraints on the multisets of data stored in the lists.
Moreover, we provide powerful techniques for automatic validation of Hoare triples and invariant checking, as well as for automatic synthesis of invariants and procedure summaries using modular inter-procedural analysis. The approach has been implemented in a tool called CELIA and successfully evaluated on a large benchmark of programs.

jeudi 26 janvier 2012

Peter Habermehl (LIAFA, Paris)

Forest Automata for Verification of Heap Manipulation

We consider verification of programs manipulating
dynamic linked data structures such as various forms of singly and doubly-linked
lists or trees. We consider properties that are important for this kind of system, such as
the absence of null-pointer dereferences, absence of garbage, shape properties, etc. We
develop a verification method based on a novel use of tree automata to represent
heap configurations. A heap is split into several ``separated'' parts such that
each of them can be represented by a tree automaton. The automata can refer to
each other allowing the different parts of the heaps to mutually refer to their
boundaries. Moreover, we allow for a hierarchical representation of heaps by
allowing alphabets of the tree automata to contain other, nested tree automata.
Program instructions can be easily encoded as operations on our representation
structure. This allows verification of programs based on a symbolic state-space
exploration together with refinable abstraction within the so-called abstract
regular tree model checking. A motivation for the approach is to combine
advantages of automata-based approaches (higher generality and flexibility of
the abstraction) with some advantages of separation-logic-based approaches
(efficiency). We have implemented our approach and tested it successfully on
multiple non-trivial case studies.
(joint work with Lukas Holik, Adam Rogalewicz, Jiri Simacek and Tomas Vojnar)

Inclusion checking is a central algorithmic problem in the theory of automata, with important applications, e.g., in formal methods.
The problem is PSPACE-complete, thus under standard complexity-theoretic assumptions no worst-case polynomial-time deterministic algorithm can be expected.
We optimize the so-called Ramsey-based approach to inclusion checking.
In this approach, one explores a semigroup of exponential size for a counter-example to inclusion.
The exploration starts from a small set of generators, and all elements of the semigroup are progressively computed by composition.
On the way, a test operation is used to check for counter-examples.
The exploration stops if a counter-example is found; otherwise, all elements are generated.
Clearly, exhaustive exploration of the semigroup is in general infeasible, due to its exponential size.
We show how coarse simulation-based subsumption preorders between elements of the semigroup can be designed,
allowing one to prune away significant parts of the search space.
Subsumption allows us to solve inclusion checking for automata with thousands of states, which was unthinkable before.
More details can be found on http://www.languageinclusion.org/.
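The pruning idea can be illustrated independently of the Ramsey-based details. The following generic sketch (the names `compose`, `subsumes`, and `is_counterexample` are placeholders for the semigroup operations, not the actual implementation behind languageinclusion.org) shows a worklist exploration in which subsumed elements are discarded, so only an antichain of "hardest" elements is ever extended:

```python
def explore(generators, compose, is_counterexample, subsumes):
    """Worklist exploration of a composition-closed search space with
    subsumption pruning: an element already covered by a kept element is
    discarded, and kept elements covered by a new one are evicted."""
    kept, work = [], list(generators)
    while work:
        x = work.pop()
        if any(subsumes(y, x) for y in kept):
            continue                      # x is redundant: already covered
        if is_counterexample(x):
            return x                      # inclusion fails; x witnesses it
        kept = [y for y in kept if not subsumes(x, y)] + [x]
        work.extend(compose(x, g) for g in generators)
    return None                           # space exhausted: inclusion holds
```

On a finite domain the loop terminates, and the `kept` list stays small exactly when the subsumption preorder is coarse — which is why designing coarse simulation-based subsumptions pays off.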

jeudi 12 janvier 2012

Ocan Sankur (LSV, ENS Cachan)

Robustness and Implementability of Timed Automata

Timed automata are a well-established model in real-time system design. They offer an automata-theoretic framework to design, verify and synthesize systems with timing constraints. The theory behind timed automata has been extensively studied and mature model-checking tools are available. However, this model makes unrealistic assumptions about the system, such as the perfect continuity of clocks and instantaneous reaction times, which are not preserved in implementations, even in digital hardware with arbitrarily high finite precision. While these assumptions are natural in the design phase, they must be validated before implementing the system.
In this talk, I will first outline recent results on robustness analysis of timed automata, that is, deciding/computing an upper bound on the imprecisions under which a given property holds. I will then concentrate on the implementability problem, by presenting algorithms that render timed automata implementable: given a timed automaton, the goal is to construct a new timed automaton whose behavior under imprecisions is equivalent to the behavior of the first automaton.
Based on joint works with Patricia Bouyer, Kim Larsen, Nicolas Markey, Claus Thrane.

Visibly pushdown transducers (VPTs) form a strict subclass of pushdown transducers (PTs) that extends finite state transducers with a stack. As in visibly pushdown automata, the input symbol determines the stack operation. It has been shown that visibly pushdown languages form a robust subclass of context-free languages. Along the same lines, we show that word transductions defined by VPTs enjoy strong properties, in contrast to PTs. In particular, functionality is decidable in PTIME, k-valuedness is in NPTIME, and equivalence of (non-deterministic) functional VPTs is EXPTIME-complete.
In a second part, we study the problem of evaluating in streaming (i.e. in a single left-to-right pass) the transduction realized by a functional VPT. A transduction is said to be height bounded memory (HBM) if it can be evaluated with a memory that depends only on the height of the input word (and not on its length). We show that it is decidable in coNPTime whether such a transduction is HBM. In this case, the required amount of memory may depend exponentially on the height of the input word. We exhibit a sufficient, decidable condition for a VPT to be evaluated with a memory that depends quadratically on the height of the input word. This condition defines a class of transductions that strictly contains all determinizable VPTs.
This talk is based on the following two papers:
Properties of Visibly Pushdown Transducers. Emmanuel Filiot, Jean-François Raskin, Pierre-Alain Reynier, Frédéric Servais and Jean-Marc Talbot. In Proc. MFCS'10.
Streamability of Nested Word Transductions. Emmanuel Filiot, Olivier Gauwin, Pierre-Alain Reynier and Frédéric Servais. In Proc. FSTTCS’11.
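The defining restriction — the input symbol alone dictates the stack operation — can be sketched as a minimal interpreter for a deterministic visibly pushdown automaton (purely illustrative; the transition tables below are hypothetical, and real VPTs additionally produce output on each step):

```python
def run_vpa(word, calls, returns, delta_call, delta_ret, delta_int, q0):
    """One left-to-right pass of a deterministic visibly pushdown automaton:
    call symbols always push, return symbols always pop, internal symbols
    leave the stack untouched.  Returns the final state, or None when a
    return symbol is read on an empty stack."""
    q, stack = q0, []
    for a in word:
        if a in calls:                       # call symbol: always push
            q, gamma = delta_call[q, a]
            stack.append(gamma)
        elif a in returns:                   # return symbol: always pop
            if not stack:
                return None                  # unmatched return: reject
            q = delta_ret[q, a, stack.pop()]
        else:                                # internal symbol: no stack move
            q = delta_int[q, a]
    return q
```

Because every run on the same input performs the same sequence of pushes and pops, two machines can be compared stack-synchronously — the root cause of the good closure and decidability properties mentioned above.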

We consider the problem of controlling distributed automata that cooperate via
shared variables (rendez-vous). The setting corresponds to the framework of
Ramadge and Wonham, where certain actions (controllable ones) can be forbidden
by the local controller. Although the general question is still open, we
show that the problem is decidable on acyclic architectures, albeit of
non-elementary complexity.
Joint work with B. Genest, H. Gimbert and I. Walukiewicz.

jeudi 13 octobre 2011

Benjamin Monmege (LSV, ENS Cachan)

Weighted Expressions and Pebble Automata over Nested Words

We introduce a calculus over nested words (or equivalently, trees) to express quantitative properties of XML documents or recursive programs. Our weighted expressions borrow purely logical constructs from XPath, but they also involve rational arithmetic expressions. The latter allow us to perform computations in an arbitrary (commutative) semiring. For instance, one can count how often a given entry occurs in an XML document, or compute the memory consumption of a program execution. We characterize a fragment of weighted expressions in terms of a new class of weighted automata. In the spirit of tree-walking automata, our device traverses a nested word along its edges and may place pebbles during a traversal.
After proving this expressiveness result, we give a list of interesting decision and computation problems, with some hints on their resolution, and a set of possible applications of these problems, depending on the chosen semiring.

We consider the model-checking problem for a quantitative
extension of the modal mu-calculus on two classes of infinite
quantitative transition systems. The first class, initialized linear
hybrid systems, is motivated by verification of systems which
exhibit continuous dynamics. We show that the value of a formula
of the quantitative mu-calculus can be approximated with arbitrary
precision on initialized linear hybrid systems. The other class,
increasing tree rewriting systems, is motivated by efforts to allow
counting formulas in discrete verification. On such systems, we
show that the value of a quantitative formula can be computed
exactly. For both these classes of systems, the problem in the end
reduces to solving a new form of parity games with counters.

jeudi 29 septembre 2011

Barbara Jobstmann (Verimag)

Quantitative Verification and Synthesis

Quantitative constraints have been successfully used to state and
analyze non-functional properties such as energy consumption,
performance, or reliability. Functional properties are typically
viewed in a purely qualitative sense. Desired properties are written
in temporal languages and the outcome of verification is a simple Yes
or No answer stating that a system satisfies or does not satisfy the
desired property. We believe that this black and white view is
insufficient both for verification and for synthesis. Instead, we
propose that specifications should have a quantitative aspect.
Our recent research shows that quantitative techniques give new
insights into qualitative specifications. For instance,
average-reward properties allow us to express properties like default
behavior or preference relations between implementations that all
satisfy the functional property. These additional properties are
particularly useful in a synthesis setting, where we aim to
automatically construct a system that satisfies the specification,
because they allow us to guide the synthesis process making the outcome
of synthesis more predictable.
In this talk I will give an overview of
(1) how classical specifications can be augmented with quantitative constraints,
(2) the different quantitative constraints that arise in this way, and
(3) how to verify and synthesize systems that satisfy the
initial specification and optimize such quantitative constraints.
This is joint work with R. Bloem, K. Chatterjee, K. Greimel, T. Henzinger,
A. Radhakrishna, R. Singh, and C. von Essen.

jeudi 22 septembre 2011

Marc Zeitoun (LaBRI)

Model checking vector addition systems with one zero-test

We design a variation of the Karp-Miller algorithm to compute a
finite representation of the cover (i.e., the downward closure of
the reachability set) of a vector addition system with one
zero-test. This algorithm yields decision procedures for several
problems on these systems, open until now, such as
place-boundedness or LTL model-checking.
The proof techniques to handle the zero-test are based on two new
notions of cover: the refined and the filtered cover. The refined
cover is hybrid between the reachability set and the classical
cover. It inherits properties of the reachability set: equality
of two refined covers is undecidable, even for usual Vector
Addition Systems (with no zero-test), but the refined cover of a
Vector Addition System is a recursive set. The second notion of
cover, called the filtered cover, is the central tool of our
algorithms. It inherits properties of the classical cover, and in
particular, one can effectively compute a finite representation
of this set, even for Vector Addition Systems with one zero-test.
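For reference, the classical Karp-Miller construction that the talk's variation builds on can be sketched for plain Vector Addition Systems (without the zero-test, which is precisely the case the refined and filtered covers are designed to handle):

```python
OMEGA = float("inf")  # stands for ω: a coordinate that can be pumped unboundedly

def karp_miller(initial, transitions):
    """Classical Karp-Miller tree for a plain VAS: returns a finite set of
    ω-markings whose downward closure is the cover of the reachability set."""
    def accelerate(m, ancestors):
        # If an ancestor is componentwise <= m, every strictly increased
        # coordinate can be pumped arbitrarily high: set it to ω.
        m = list(m)
        for a in ancestors:
            if all(a[i] <= m[i] for i in range(len(m))):
                for i in range(len(m)):
                    if a[i] < m[i]:
                        m[i] = OMEGA
        return tuple(m)

    labels = set()
    stack = [(initial, [])]       # (marking, ancestors along the current branch)
    while stack:
        m, anc = stack.pop()
        m = accelerate(m, anc)
        labels.add(m)
        if m in anc:              # marking repeats on this branch: leaf node
            continue
        for t in transitions:
            succ = tuple(x + d for x, d in zip(m, t))
            if all(x >= 0 for x in succ):   # fire t only if it stays non-negative
                stack.append((succ, anc + [m]))
    return labels
```

The difficulty addressed in the talk is that this acceleration is unsound in the presence of a zero-test: a coordinate pumped to ω may still have to be exactly zero at the tested transition, which is what the refined and filtered covers repair.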

jeudi 30 juin 2011

Amélie Stainer (INRIA Rennes)

A game approach to determinize timed automata

Timed automata are frequently used to model real-time systems.
Their determinization is a key issue for several validation problems.
However, not all timed automata can be determinized, and determinizability
itself is undecidable. In this talk, we will present a game-based algorithm
which, given a timed automaton,
tries to produce a language-equivalent deterministic timed automaton,
otherwise a deterministic over-approximation. Our method subsumes two
recent contributions: it is at once more general than a recent determinization
procedure and more precise than the existing over-approximation algorithm.Then, we will explain how this method can be adapted to be usefull for test generation.
This talk is a joint work with Nathalie Bertrand, Thierry Jéron and Moez Krichen. Papers will be presented at FoSSaCs'11 and TACAS'11.

We introduce the model of finite state probabilistic monitors (FPM), which are finite state automata on infinite strings that have probabilistic transitions and an absorbing reject state. FPMs are a natural automata model that can be seen as either randomized run-time monitoring algorithms or as models of open, probabilistic reactive systems that can fail. We give a number of results that characterize, topologically as well as with respect to their computational power, the sets of languages recognized by FPMs. We also study the emptiness and universality problems for such automata and give exact complexity bounds for these problems. Joint work with A. Prasad Sistla and Mahesh Viswanathan.

jeudi 12 mai 2011

Renaud Tabary (LaBRI)

Hybrid syntactic/semantic computer virus detection scheme

The detection problem in computer virology is known to be undecidable. Nonetheless, a large body of work is still devoted to the subject. We will present the state of the art on detection schemes: the syntactic and semantic detection schemes.
Subsequently, we will introduce a new hybrid syntactic/semantic detection scheme, where the extraction of the virus signature is performed through abstract interpretation techniques while virus detection is left to efficient syntactic detectors. We will present a new abstract domain that is able to automatically extract the signature of a polymorphic virus as a context-free grammar. Preliminary results on some real polymorphic virus samples will also be shown.

We study Markov decision processes (one-player stochastic games)
equipped with parity and positive-average conditions.
In these games, the goal of the player is to maximize the probability that both the
parity and the positive-average conditions are fulfilled.
We show that the values of these games are computable in polynomial time.
We also show that optimal strategies exist, require only finite memory and can be effectively computed.
Joint work with Hugo Gimbert and Soumya Paul.

jeudi 28 avril 2011

Srivathsan Balaguru (LaBRI)

A lazy reachability algorithm for timed automata

We consider the classic reachability problem for timed automata:
given an automaton, decide if there exists a path from its initial state
to a given target state. The standard solution to this problem involves
computing the *zone graph* of the automaton, which in principle could be
infinite. In order to make the graph finite, zones are approximated using
an extrapolation operator. For reasons of efficiency it is required that an
extrapolation of a zone is always a zone; and in particular that it is *convex*.
We propose a new solution to the reachability problem that uses no
such extrapolation operators. To ensure termination, we provide an
efficient algorithm to check if a zone is included in the
*region closure* of another. Although theoretically better,
closure cannot be used in the standard algorithm since a closure of
a zone may not be convex.
The structure of this new algorithm permits to calculate
approximating parameters on-the-fly during exploration of the zone
graph, as opposed to the current methods which do it by a static
analysis of the automaton prior to the exploration. This allows for
further improvements in the algorithm. Promising experimental results
are presented.
Joint work with Frédéric Herbreteau, Dileep Kini, Igor Walukiewicz
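Zone inclusion itself is usually checked on Difference Bound Matrices. The following is a minimal sketch of the standard DBM inclusion test (ignoring the strict/non-strict distinction on bounds, so a simplification of what a real timed-automata engine does, and not the region-closure inclusion check contributed by the talk):

```python
INF = float("inf")

def canonical(dbm):
    """Tighten a DBM with Floyd-Warshall.  Entry d[i][j] is an upper bound
    on x_i - x_j, where clock 0 is the constant-zero clock; entries are
    finite numbers or INF."""
    n = len(dbm)
    d = [row[:] for row in dbm]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def zone_included(z1, z2):
    """Z1 is included in Z2 iff every bound of Z1's canonical form is at
    least as tight as the corresponding bound of Z2's canonical form."""
    a, b = canonical(z1), canonical(z2)
    n = len(a)
    return all(a[i][j] <= b[i][j] for i in range(n) for j in range(n))
```

For example, with one clock x, the zone x ≤ 5 is included in x ≤ 10 but not conversely. The point of the talk is that the analogous test against the region *closure* of a zone can also be done efficiently, even though the closure need not be a zone.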

In a seminal paper, McMillan proposed a technique for constructing a finite complete prefix of the unfolding of bounded (i.e., finite-state) Petri nets, which can be used for verification purposes.
Contextual nets are a generalisation of Petri nets suited to model systems with read-only access to resources. When working with contextual nets, a finite complete prefix can be obtained by applying McMillan's construction to a suitable encoding of the contextual net into an ordinary net.
However, it has been observed that if the unfolding is itself a contextual net, then the complete prefix can be significantly smaller than the one obtained with the above technique. We propose an unfolding algorithm that works for arbitrary semi-weighted, bounded contextual nets.
The talk is based on joint work with Paolo Baldan, Andrea Corradini, and Barbara Koenig.

jeudi 03 février 2011

Mohamed Faouzi Atig (LIAFA, PARIS)

Global Model Checking of Ordered Multi-Pushdown Systems

In this talk, we address the verification problem for ordered multi-pushdown systems, a multi-stack extension of pushdown systems that comes with a constraint on stack operations: a pop can only be performed on the first non-empty stack (which implies that we assume a linear ordering on the collection of stacks). First, we show that for an ordered multi-pushdown system the set of all predecessors of a regular set of configurations is an effectively constructible regular set. Then, we exploit this result to solve the global model-checking problem, which consists in computing the set of all configurations of an ordered multi-pushdown system that satisfy a given ω-regular property (expressible in linear-time temporal logics or the linear-time µ-calculus). As an immediate consequence of this result, we obtain a 2ETIME upper bound for the model-checking problem of ω-regular properties for ordered multi-pushdown systems (matching the known lower bound).

jeudi 16 décembre 2010

Laurent Doyen (LSV, Cachan)

Energy and Mean-Payoff Games

We consider game models for the design of reactive systems
working in resource-constrained environments.
In mean-payoff games, the resource usage is computed as
the long-run average resource level.
In energy games, the resource usage is the initial amount
of resource necessary to maintain the resource level positive.
The resource can be, for example, memory, battery, or network usage.
While mean-payoff games are a well-established model in game theory
and computer science, energy games have received attention only recently.
The talk reviews recent results about these games and their relationship.
Although they differ fundamentally in their definitions,
it is known that energy and mean-payoff games are equivalent for
the simple decision problem of the existence of a winning strategy.
This observation provides new complexity results for solving
mean-payoff games, and new insights for mean-payoff games combined
with other conditions such as fairness, imperfect information,
or multiple resources, though the strong equivalence with
energy games usually breaks in such cases.
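Energy games are typically solved by a value-iteration scheme in the style of Brim et al. The sketch below (a simplification: it assumes every vertex has at least one outgoing edge, and the edge/owner encoding is illustrative) computes, per vertex, the least initial credit with which the energy player can keep the running sum of weights non-negative forever:

```python
def min_initial_credit(vertices, edges, owner):
    """Value iteration for energy games.  edges are (source, weight, target)
    triples; owner[v] is 0 for the energy player (who minimizes the needed
    credit) and 1 for the adversary (who maximizes it).  A resulting credit
    equal to the returned cap means no finite initial credit suffices."""
    maxw = max((abs(w) for (_, w, _) in edges), default=0)
    cap = len(vertices) * maxw + 1        # credits above this bound are hopeless
    succ = {v: [] for v in vertices}
    for (u, w, v) in edges:
        succ[u].append((w, v))
    credit = {v: 0 for v in vertices}
    changed = True
    while changed:                        # monotone fixpoint: only increases
        changed = False
        for v in vertices:
            needs = [max(0, credit[u] - w) for (w, u) in succ[v]]
            new = min(min(needs) if owner[v] == 0 else max(needs), cap)
            if new > credit[v]:
                credit[v] = new
                changed = True
    return credit, cap
```

A winning strategy for the energy player exists from v exactly when credit[v] stays below the cap, and deciding the mean-payoff threshold problem reduces to this question — the equivalence mentioned above.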

jeudi 09 décembre 2010

Ylies Falcone (projet VERTECS/INRIA, Rennes)

What Can You Verify and Enforce at Runtime ?

Runtime verification is an effective technique to ensure at execution time that a system meets a desirable behavior. It can be used in numerous application domains, and more particularly when integrating together untrusted software components. In runtime verification, a run of the system under scrutiny is analyzed incrementally using a decision procedure: a monitor. This monitor may be generated from a user-provided high level specification (e.g. a temporal property, an automaton). Runtime enforcement is an extension of runtime verification aiming to circumvent property violations. In runtime enforcement the monitors watch the current execution sequence and modify the execution sequence of the underlying program whenever it deviates from the desired property. More than simply halting an underlying program, some enforcement mechanisms can also ``suppress'' (i.e. freeze) and ``insert'' (frozen) actions in the current execution sequence. Runtime verification and enforcement have
proved to be effective techniques to achieve software reliability.
The questions we consider in this talk are the following: what are the classes of properties that can be verified and enforced at runtime, and is there a difference between the techniques we are interested in? These questions are not new, but we propose here to address them within a unified framework: the Safety-Progress classification of properties introduced by Manna and Pnueli.
In this talk, we first customize the Safety-Progress classification to the runtime verification and enforcement context. Then we revisit the classical definition of monitorability, and
characterize the set of monitorable properties according to this definition. Then, we parameterize this definition according to the truth-domain under consideration. Moreover, we introduce a new definition of monitorability based on distinguishability of ``good'' and ``bad'' execution sequences. This definition is weaker than the classical one, but we believe that it better corresponds to practical needs and tool implementations. Furthermore, we characterize the set of enforceable properties in a way that is independent from any enforcement mechanism. A consequence is that the proposed set of enforceable properties is an upper bound of any enforcement mechanism.
This is joint work with Pr. Jean-Claude Fernandez and Dr. Laurent Mounier from Vérimag (Grenoble).

jeudi 02 décembre 2010

Frédéric Herbreteau (LaBRI)

Efficient Emptiness Check for Timed Büchi Automata

The Büchi non-emptiness problem for timed automata concerns deciding
if a given automaton has an infinite non-Zeno run satisfying the
Büchi accepting condition. The standard solution to this problem involves
adding an auxiliary clock to take care of the non-Zenoness. In this talk, we
show that this simple transformation may sometimes result in an
exponential blowup. Then, we propose a method avoiding this blowup.
However, in many cases, no extra construction is needed to ascertain
non-Zenoness. Hence, we finally present an on-the-fly algorithm for the
non-emptiness problem, using an efficient non-Zenoness construction
only when required.

jeudi 21 octobre 2010

Michael Ummels (RWTH Aachen)

The complexity of Nash Equilibria in Reward Games

We study the complexity of Nash equilibria in games with limit-average objectives. While an arbitrary Nash equilibrium of such a game can be found in polynomial space, we show that deciding whether a game has an equilibrium whose payoff meets certain constraints is, in general, undecidable. A more refined analysis takes into account the complexity of the strategies that realise the equilibrium. While for stochastic games, the above problem is undecidable for both pure and randomised strategies, the problem becomes decidable when one looks for a pure-strategy equilibrium in a non-stochastic game.

jeudi 14 octobre 2010

Ashutosh Trivedi (The University of Warwick, UK)

Expected Reachability-Time Games

In an expected reachability-time game (ERTG) two players, Min and
Max, move a token along the transitions of a probabilistic timed automaton,
so as to minimise and maximise, respectively, the expected time to reach a
target. We show that ERTGs are positionally determined. Using the boundary
region graph abstraction, and a generalisation of Asarin and Maler's
simple functions,
we show that the decision problems related to computing the value of ERTGs
are decidable and their complexity is in NEXPTIME ∩ co-NEXPTIME.

The reachability problem for Vector Addition Systems (VASs) is a central problem of net theory. The general problem is known to be decidable, by algorithms exclusively based on the classical Kosaraju-Lambert-Mayr-Sacerdote-Tenney decomposition. Recently, from this decomposition, we deduced that a final configuration is not reachable from an initial one if and only if there exists a Presburger inductive invariant that contains the initial configuration but not the final one. Since we can decide whether a Presburger formula denotes an inductive invariant, we deduce from this result that there exist checkable certificates of non-reachability. In particular, there exists a simple algorithm for deciding the general VAS reachability problem based on two semi-algorithms: a first one that tries to prove reachability by enumerating finite sequences of actions, and a second one that tries to prove non-reachability by enumerating Presburger formulas. In this presentation we provide the first proof of decidability of the VAS reachability problem that is not based on the classical Kosaraju-Lambert-Mayr-Sacerdote-Tenney decomposition. The new proof is based on the notion of productive sequences, inspired by Hauschildt, which directly provides the existence of Presburger inductive invariants.
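The combination of the two semi-algorithms is an instance of a standard dovetailing pattern. A generic sketch (with placeholder generators standing in for the enumeration of action sequences and of Presburger formulas, which are far from trivial in the actual algorithm):

```python
def dovetail(prove_reach, prove_unreach):
    """Interleave two semi-decision procedures.  Each argument is a
    generator yielding None while still searching, and a witness once it
    has found one (here: a reachability witness, or a Presburger inductive
    invariant separating initial from final configurations).  Since exactly
    one side must eventually succeed, the loop terminates."""
    for yes, no in zip(prove_reach, prove_unreach):
        if yes is not None:
            return ("reachable", yes)
        if no is not None:
            return ("unreachable", no)
```

The decidability argument is exactly that this interleaving always halts: either some finite action sequence witnesses reachability, or some Presburger formula is a checkable certificate of non-reachability.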

jeudi 16 septembre 2010

Benedikt Bollig (LSV, Cachan)

Distributed Timed Automata with Independently Evolving Clocks

We propose a model of distributed timed systems where each component is a
timed automaton with a set of local clocks that evolve at a rate independent
of the clocks of the other components. A clock can be read by any component in
the system, but it can only be reset by the automaton it belongs to. Since we
have unrelated time values on different components, we are interested in the
underlying untimed behaviors of these distributed timed automata rather than
their timed behaviors. Thus, the clocks (and time itself) are synchronization
tools rather than being a part of the observation.
There are two natural semantics for such systems. The universal semantics
captures behaviors that hold under any choice of clock rates for the
individual components. Thus, it can be considered as an underapproximation
of the actual behavior. This is a natural choice when checking that a system
always satisfies a positive specification. However, to check if a system
avoids a negative specification, it is better to use the existential
semantics (an overapproximation of the actual system behavior): the set of
behaviors that the system can possibly exhibit under some choice of clock
rates.
We show that the existential semantics always describes a regular set of
behaviors. However, in the case of universal semantics, checking emptiness or
universality turns out to be undecidable, which is shown by a reduction from
Post’s correspondence problem. This result is further strengthened to some
bounded cases, where we have restrictions on the relative time rates such as
fixed slopes.
As an alternative to the universal semantics, we propose a game-based
reactive semantics that allows us to check positive specifications and yet
describes a regular set of behaviors.
This is joint work with S. Akshay, Paul Gastin, Madhavan Mukund, and
K. Narayan Kumar.