Workshop

Non-Classical Models of Automata and Applications

Invited Speakers

Peter Leupold (University of Leipzig, Germany)

Is Computation Observer-Relative?

Summary:

In his 1992 book `The Rediscovery of the Mind', the philosopher John
R. Searle claimed that all computation is observer-relative. This
means that there is no computation as long as nobody observes a
process and interprets it as a computation. Conversely, just about
anything can be considered a computation if only we observe it in a
suitable way. Thus computation is not an intrinsic property of
objects or processes.
We revisit results on Computing by Observing with this point of view
in mind. Since this model contains an explicit observer, it can be
used to investigate Searle's thesis in a more formal way. Even the
observation of very simple systems can lead to computational
completeness, and one and the same process can be used to compute any
computable function when paired with different observers. Results of
this kind support Searle's claim.
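To make the idea concrete, here is a toy sketch (my own illustration, not the formal Computing by Observing model): one fixed process, two observers, two different computations.

```python
# Assumed toy example of observer-relative computation: the process itself
# just halves a number repeatedly; what is "computed" depends on who looks.

def process(n):
    """A fixed process: repeatedly halve n, logging each configuration."""
    trace = [n]
    while n > 0:
        n //= 2
        trace.append(n)
    return trace

def observer_a(trace):
    """Reads off the parity of each configuration: the binary digits
    of the input, least significant bit first."""
    return [c % 2 for c in trace[:-1]]

def observer_b(trace):
    """Merely counts the halving steps taken."""
    return len(trace) - 1

t = process(13)       # configurations: [13, 6, 3, 1, 0]
print(observer_a(t))  # [1, 0, 1, 1] -- 13 in binary, LSB first
print(observer_b(t))  # 4 -- number of halving steps
```

The same trace, read by different observers, yields different results; neither computation is intrinsic to the halving process itself.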

František Mráz (Charles University in Prague, Czech Republic)

Limited Restarting Automata

Summary:

Restarting automata were introduced as a tool for modelling a method
of checking the correctness of sentences called `analysis by reduction',
which consists of iterated (non-)correctness-preserving simplifications
(reductions) of the original input sentence until a simple sentence
is obtained, for which it is easy to decide whether or not it is correct.
While the first model of restarting automata was very limited, it was
nevertheless strong enough to model analysis by reduction for all
deterministic context-free languages and even some languages beyond
this class.
In order to model analysis by reduction for wider classes of languages
and for modelling more complex phenomena, several extensions were added
to the first version of restarting automata. The most general versions
of the model aspire to model the `functional generative description' of
natural languages, which is a complex multilayer description of natural
language sentences.
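The analysis-by-reduction loop described above can be sketched on an assumed toy language (balanced brackets, not any grammar from the talk), where each reduction deletes one innermost pair and therefore preserves (non-)correctness:

```python
# Minimal sketch of analysis by reduction on the language of balanced
# brackets. Deleting one innermost pair "()" is a simplification that
# preserves (non-)correctness; a word is correct iff it reduces to the
# empty word, the trivially simple sentence.

def analyse_by_reduction(word):
    while "()" in word:
        word = word.replace("()", "", 1)  # one reduction step
    return word == ""                     # simple sentence: decide directly

print(analyse_by_reduction("(()())"))  # True
print(analyse_by_reduction("(()"))     # False
```

Each iteration shortens the sentence, so the procedure always terminates, mirroring the length-reducing rewrites of restarting automata.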
In this talk we will concentrate on recent developments in the
opposite direction: from complex to simpler models. These developments
aim to find automata models that are somewhat limited in their power
but have other desirable properties, such as a simpler definition,
more efficient recognition, or learnability. All these properties are
crucial for possible applications of restarting automata.
We will review several classes of `limited context restarting automata',
which are stateless restarting automata whose rewriting depends
only on a limited context around the rewritten substrings. Such automata
can characterize several classes of the Chomsky hierarchy and several
McNaughton families of languages. Various types of such automata are
learnable from positive and negative sample words or reductions.
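As an assumed illustration of stateless, context-limited rewriting (my own encoding, not the formal definition from the talk), the following rules accept the language { a^n b^n : n >= 0 }; the sentinels ¢ and $ mark the ends of the tape:

```python
# Hedged sketch of a stateless rewriting system with limited contexts.
# A rule (left, u, v, right) may rewrite u to v only when u is flanked
# by the given one-symbol contexts; there are no states.

RULES = [
    ("a", "ab", "", "b"),  # delete an inner "ab" between a and b
    ("¢", "ab", "", "$"),  # delete the last "ab" at the sentinels
]

def step(word):
    """Apply the first applicable rule once, or return None."""
    for left, u, v, right in RULES:
        pattern = left + u + right
        if pattern in word:
            return word.replace(pattern, left + v + right, 1)
    return None

def accepts(w):
    word = "¢" + w + "$"
    while word != "¢$":          # "¢$" is the accepting simple sentence
        word = step(word)
        if word is None:
            return False         # no reduction applicable: reject
    return True

print(accepts("aaabbb"))  # True
print(accepts("aab"))     # False
```

Because every rule shortens the word, acceptance is decided in at most |w| reduction steps, and no state information is carried between them.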
Further, the classes of `locally testable' and `single k-reversible'
restarting automata are less restrictive as they are not stateless, but
they still allow inference of such automata from positive samples of
words and reductions.
We will compare various types of limited restarting automata with
respect to inclusions between the accepted language classes, the
complexity of their learning, and some other properties. The talk
concludes by presenting several open problems that are related to the
complexity of the membership problem for the language classes and the
decidability of properties of limited restarting automata that are
necessary for their effective learning.