Peter Wegner and Dina Goldin have for over a decade been publishing papers and books arguing primarily that the Church-Turing thesis is often misrepresented in the CS theory community and elsewhere: it is presented as encompassing all computation, when in fact it applies only to the computation of functions, which is a very small subset of all computation. They suggest we should instead seek to model interactive computation, where communication with the outside world happens during the computation.

The only critique I have seen of this work is in the Lambda the Ultimate forum, where somebody lamented that these authors keep publishing what is already well known. My question, then: is there any further critique of this line of thinking, and in particular of their Persistent Turing Machines? If not, why does it seem to be studied so little (I could be mistaken)? Lastly, how does the notion of universality translate to an interactive domain?

$\begingroup$I think Andrej and Neel have explained here that the answer is negative for higher-type function computation problems. So essentially the Church-Turing thesis is about number-function computation problems. The usual equivalences between models of computation do not hold over higher types. (However, as I understand it, this is more about the interaction mechanisms and how higher-type objects are represented than about the computational power of models.)$\endgroup$
– Kaveh, Aug 23 '12 at 19:20

$\begingroup$actually Wegner's first paper along these lines seems to date to 1996–1997: "Why Interaction Is More Powerful than Algorithms" or "The Paradigm Shift from Algorithms to Interaction". later in the paper there are references to Plato's cave, "the Turing tarpit" (?), Kant's Critique of Pure Reason, Marx's dialectical logic, Descartes, Penrose, Searle. so maybe it should be seen as bordering on the philosophical and not so much in the vein of technical/mathematical TCS: no math, no lemmas, proofs, or theorems. while maybe a bit grandiose, he earnestly seeks to understand "the big picture" of the CT thesis wrt history etc...$\endgroup$
– vzn, Aug 31 '12 at 1:26

7 Answers

Here's my favorite analogy. Suppose I spent a decade publishing books and papers arguing that, contrary to theoretical computer science's dogma, the Church-Turing Thesis fails to capture all of computation, because Turing machines can't toast bread. Therefore, you need my revolutionary new model, the Toaster-Enhanced Turing Machine (TETM), which allows bread as a possible input and includes toasting it as a primitive operation.

You might say: sure, I have a "point", but it's a totally uninteresting one. No one ever claimed that a Turing machine could handle every possible interaction with the external world, without first hooking it up to suitable peripherals. If you want a TM to toast bread, you need to connect it to a toaster; then the TM can easily handle the toaster's internal logic (unless this particular toaster requires solving the halting problem or something like that to determine how brown the bread should be!). In exactly the same way, if you want a TM to handle interactive communication, then you need to hook it up to suitable communication devices, as Neel discussed in his answer. In neither case are we saying anything that wouldn't have been obvious to Turing himself.

So, I'd say the reason why there's been no "followup" to Wegner and Goldin's diatribes is that TCS has known how to model interactivity whenever needed, and has happily done so, since the very beginning of the field.

Update (8/30): A related point is as follows. Does it ever give the critics pause that, here inside the Elite Church-Turing Ivory Tower (the ECTIT), the major research themes for the past two decades have included interactive proofs, multiparty cryptographic protocols, codes for interactive communication, asynchronous protocols for routing, consensus, rumor-spreading, leader-election, etc., and the price of anarchy in economic networks? If putting Turing's notion of computation at the center of the field makes it so hard to discuss interaction, how is it that so few of us have noticed?

Another Update: To the people who keep banging the drum about higher-level formalisms being vastly more intuitive than TMs, and no one thinking in terms of TMs as a practical matter, let me ask an extremely simple question. What is it that lets all those high-level languages exist in the first place, that ensures they can always be compiled down to machine code? Could it be ... err ... THE CHURCH-TURING THESIS, the very same one you've been ragging on? To clarify, the Church-Turing Thesis is not the claim that "TURING MACHINEZ RULE!!" Rather, it's the claim that any reasonable programming language will be equivalent in expressive power to Turing machines -- and as a consequence, that you might as well think in terms of the higher-level languages if it's more convenient to do so. This, of course, was a radical new insight 60-75 years ago.

Final Update: I've created a blog post for further discussion of this answer.

$\begingroup$There is a substantial difference between toasters and interaction: every model of computation has some IO mechanism. Toasters show up only rarely. Some models of computation model IO naively: for example, Turing machines deal with IO only informally. This is not problematic where computation is understood to be functional, i.e. starting with an input and ending with an output, as it is with Turing machines. However, this naive treatment becomes burdensome when you want to deal with genuinely concurrent phenomena, e.g. when are two interactive computations equal? (Continued below.)$\endgroup$
– Martin Berger, Aug 29 '12 at 9:50


$\begingroup$In case my views aren't clear enough yet, I should add that I find the whole "myth of the Church-Turing Thesis" literature not merely hectoring, but (more to the point) depressingly barren of ideas. Reading it brings all the joy of reading someone claiming to refute Newtonian physics, not because of something cool like quantum mechanics or relativity, but because "Newton's laws ignore friction". Or listening to a child explain why she technically won a board game because she moved the pieces while you left to go to the bathroom.$\endgroup$
– Scott Aaronson, Aug 29 '12 at 22:04


$\begingroup$I think the Lance Fortnow quote extracted below in vzn's answer (original article here: ubiquity.acm.org/article.cfm?id=1921573) demonstrates that at least a few sane people do hold the "Strong" thesis. Fortnow claims that the CT thesis can be "simply stated" as "everything computable is computable by a Turing machine", writing "everything" where he should have really written "every $f : \mathbb{N} \to \mathbb{N}$".$\endgroup$
– Noam Zeilberger, Aug 29 '12 at 22:56


$\begingroup$how can we debate a so-called Thesis named after both Turing and Church, neither of whom actually stated it in his own writing in the form it has since been interpreted & evolved into? — See also: Euler's formula, Gaussian elimination, Euclid's algorithm, the Pythagorean theorem.$\endgroup$
– Jeffε, Aug 30 '12 at 11:44

TMs are an inconvenient formalism for research on interactive computation (in most cases) because the interesting issues get drowned out in the noise of encodings.

Everybody working on the mathematisation of interaction knows this.

Let me explain this in more detail.

Turing machines can obviously model all existing interactive models of computing in the following sense: choose some encoding of the relevant syntax as binary strings, and write a TM that takes as input two encoded interactive programs P, Q (in a chosen model of interactive computation) and returns true exactly when there is a one-step reduction from P to Q in the relevant term rewriting system (if your calculus has a ternary transition relation, proceed mutatis mutandis). So you get a TM that does a step-by-step simulation of computation in the interactive calculus. Clearly the pi-calculus, the ambient calculus, CCS, CSP, Petri nets, the timed pi-calculus, and any other interactive model of computation that has been studied can be expressed in this sense. This is what people mean when they say interaction does not go beyond TMs. If you can come up with an interactive formalism that is physically implementable but not expressible in this sense, please apply for your Turing Award.
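To make this concrete, here is a minimal sketch in Haskell of such a decidable one-step reduction relation, for a toy CCS-like calculus rather than any full process calculus. All names (Proc, step, reducesTo) are illustrative, and structural congruence is ignored apart from trying both orientations of a parallel pair:

```haskell
-- Toy CCS-like processes; a sketch, not a standard library.
data Proc
  = Nil              -- inert process
  | Out String Proc  -- send on a channel, then continue
  | In  String Proc  -- receive on a channel, then continue
  | Par Proc Proc    -- parallel composition
  deriving (Eq, Show)

-- All processes reachable in exactly one communication step.
step :: Proc -> [Proc]
step (Par p q) =
     comm p q ++ comm q p          -- synchronise across the bar
  ++ [Par p' q | p' <- step p]     -- or step inside the left component
  ++ [Par p q' | q' <- step q]     -- or step inside the right component
  where
    comm (Out a p') (In b q') | a == b = [Par p' q']
    comm _ _                           = []
step _ = []                        -- Nil and lone prefixes are stuck

-- The relation the simulating TM decides: "P reduces to Q in one step"
-- (up to the order of parallel components).
reducesTo :: Proc -> Proc -> Bool
reducesTo p q = q `elem` step p
```

For instance, `reducesTo (Par (Out "a" Nil) (In "a" Nil)) (Par Nil Nil)` evaluates to True. The point is only that the relation is computable; nothing about the character of interaction is visible at this level.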

N. Krishnaswami refers to a second approach to modelling interactivity using oracle tapes. This approach is different from the interpretation of the reduction/transition relation above, because the notion of TM is changed: we move from plain TMs to TMs with oracle tapes. This approach is popular in complexity theory and cryptography, mostly because it enables researchers in these fields to transfer their tools and results from the sequential to the concurrent world.

The problem with both approaches is that the genuinely concurrency-theoretic issues are obscured. Concurrency theory seeks to understand interaction as a phenomenon sui generis. Both approaches via TMs simply replace a convenient formalism for expressing an interactive programming language with a less convenient formalism.

In neither approach do the genuinely concurrency-theoretic issues, i.e. communication and its supporting infrastructure, have a direct representation. They are there, visible to the trained eye, but hidden in the impenetrable fog of encoding complexity. So both approaches are bad at mathematising the key concerns of interactive computation. Take for example what might be the best idea in the theory of programming languages in the last half century, Milner et al.'s axiomatisation of scope extrusion (which is a key step in a general theory of compositionality):
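The law itself, in its standard textbook formulation, is the structural-congruence rule

$$(\nu x)P \mid Q \;\equiv\; (\nu x)(P \mid Q) \qquad \text{provided } x \notin \mathrm{fn}(Q),$$

which says that the scope of a private name can silently extend over a new communication partner, exactly what happens when a restricted channel name is passed to another process.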

How beautifully simple this idea is when expressed in a tailor-made language like the pi-calculus. Doing the same using the encoding of the pi-calculus into TMs would probably fill 20 pages.

In other words, the invention of explicit formalisms for interaction has made the following contribution to computer science: the direct axiomatisation of the key primitives for communication (e.g. input and output operators) and the supporting mechanisms (e.g. new-name generation, parallel composition, etc.). This axiomatisation has grown into a veritable research tradition with its own conferences, schools, and terminology.

A similar situation obtains in mathematics: most concepts could be written down using the language of set theory (or topos theory), but we mostly prefer higher-level concepts like groups, rings, topological spaces, and so on.

In terms of number computability (i.e., computing functions from $\mathbb{N} \to \mathbb{N}$), all known models of computation are equivalent.

However, it's still true that Turing machines are fairly painful for modelling properties like interactivity. The reason is a little bit subtle, and has to do with the kinds of questions that we want to ask about interactive computations.

The usual first pass at modelling interaction with TMs is with oracle tapes. Intuitively, you can think of the string printed on the oracle tape as a "prediction" of the Turing machine's I/O interaction with the environment. However, consider the sorts of questions we want to ask about interactive programs: for example, we might want to know that a computer program will not output your financial data unless it receives your username and password as input, and furthermore that programs do not leak information about passwords. Talking about this kind of constraint is very painful with oracle strings, since it reflects a temporal, epistemic constraint on the trace of the interaction, and the definition of oracle tapes asks you to supply the whole string up front.
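To illustrate the contrast, here is a sketch in Haskell; the types Oracle and Dialogue and all the names below are illustrative, not from any standard library:

```haskell
-- Oracle-tape view: the environment's entire behaviour is a completed
-- string, supplied before the run begins.
type Oracle = [Bool]

runAgainst :: (Oracle -> Int) -> Oracle -> Int
runAgainst machine tape = machine tape

-- Dialogue view: an explicit interaction tree, where each output can
-- depend only on the inputs received so far. A temporal constraint like
-- "never emit the data before the password arrives" is a constraint on
-- the shape of this tree, with no quantification over completed tapes.
data Dialogue
  = Output String Dialogue      -- emit a message, then continue
  | Input (String -> Dialogue)  -- wait for the environment's next move
  | Halt

-- Example: a process that releases data only after the right password.
guarded :: Dialogue
guarded = Input (\pw ->
  if pw == "secret" then Output "your data" Halt else Halt)
```

In the dialogue view, causality is built into the type; in the oracle view, the same constraint has to be stated as a property quantified over all whole tapes.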

I suspect getting this right is doable, and essentially amounts to (1) considering oracle strings not as a set, but as a topological space whose open sets encode the modal logic of time and knowledge that you want to model, and (2) ensuring that the theorems you prove are all continuous with respect to this topology, viewing predicates as continuous functions from oracle strings to the truth values, regarded as the Sierpinski space. I should emphasize that this is a guess, based on the analogy with domain theory. You'd need to work out the details (and probably submit them to LICS or something) to be sure.

As a result, people prefer to model interaction using things like the Dolev-Yao model, where you explicitly model the interaction between computers and the environment, so that you can explicitly characterize what the attacker knows. This makes it a lot easier to formulate appropriate modal logics for reasoning about security, since the state of the system plus the state of the environment are represented explicitly.
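As a rough illustration of the "what the attacker knows" part, here is a toy fragment of Dolev-Yao-style message analysis in Haskell. The attacker's knowledge is the closure of the observed messages under pair projection and decryption with known keys; all names (Msg, analyze, saturate) are mine, and the synthesis rules (pairing, encrypting) are omitted:

```haskell
import Data.List (nub)

-- Toy message algebra: atoms, pairs, and symmetric encryption.
data Msg
  = Atom String   -- an atomic value, e.g. a key or a nonce
  | Pair Msg Msg  -- a pair of messages
  | Enc Msg Msg   -- Enc payload key: payload encrypted under key
  deriving (Eq, Show)

-- One analysis pass: project pairs, decrypt when the key is known.
analyze :: [Msg] -> [Msg]
analyze ms = nub (ms ++ concatMap open ms)
  where
    open (Pair a b)              = [a, b]
    open (Enc m k) | k `elem` ms = [m]
    open _                       = []

-- The attacker's analysis knowledge: iterate analyze to a fixpoint.
saturate :: [Msg] -> [Msg]
saturate ms
  | all (`elem` ms) ms' = ms
  | otherwise           = saturate ms'
  where ms' = analyze ms
```

For example, `saturate [Enc (Atom "data") (Atom "k"), Atom "k"]` contains `Atom "data"`, but without `Atom "k"` it does not: the model makes the attacker's knowledge an explicit, computable object.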

reading Lance Fortnow's blog, I just ran across this recent/nice/long survey article on the subject with many perspectives & refs [1] (which has not been cited so far); it includes Wegner/Goldin's perspective (among many others). I'll just quote Fortnow's excellent/emphatic summary/declaration/assertion of the near-official/uniform/unanimous TCS party line:

"A few computer scientists nevertheless try to argue that the [Church-Turing] thesis fails to capture some aspects of computation. Some of these have been published in prestigious venues such as Science, Communications of the ACM, and now as a whole series of papers in ACM Ubiquity. Some people outside of computer science might think that there is a serious debate about the nature of computation. There isn't."

I don't understand Milner's work (e.g. the pi-calculus, which Milner invented to describe communicating processes). It is quite unreadable to me, as are nearly all papers on maths and logic, such as Lambek's theories. I have no doubt that Lambek's ideas are very good, but I would like to see them translated into some kind of pidgin English that I can read.

I am thrown by Milner's comment that lambda calculus is fine for "sequential processes" but that something more is needed for communicating processes.

My (perhaps naïve) viewpoint was that this cannot be so, because the pi-calculus is Turing complete, and therefore can be converted mechanically to another Turing-complete notation, i.e. the lambda calculus. Therefore Milner's pi-calculus notation can be converted automatically to lambda calculus.

It seems that I have identified a project: intuitively, it should be possible to mechanically convert from one Turing-complete language to another. Is there an algorithm to do this? I will have to look on Google. Maybe this is incredibly hard to do, perhaps as hard as the halting problem.
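For what it's worth, the routine sense in which such an algorithm exists is an interpreter (or compiler): to convert language A into language B, write an evaluator for A in B. Here is a minimal call-by-value evaluator for the untyped lambda calculus in Haskell, with all names (Term, Val, eval) my own. The caveat, and arguably Milner's point, is that a translation obtained this way preserves the functions computed, not the interactive behaviour of a process:

```haskell
-- Untyped lambda-calculus terms with named variables.
data Term = Var String | Lam String Term | App Term Term

-- Values are closures: a lambda together with its environment.
data Val = Closure String Term Env
type Env = [(String, Val)]

-- Call-by-value evaluation (may diverge on non-normalising terms).
eval :: Env -> Term -> Val
eval env (Var x)   = maybe (error ("unbound " ++ x)) id (lookup x env)
eval env (Lam x b) = Closure x b env
eval env (App f a) =
  case eval env f of
    Closure x body cenv -> eval ((x, eval env a) : cenv) body
```

For example, `eval [] (App (Lam "x" (Var "x")) (Lam "y" (Var "y")))` returns the closure of the identity function.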

I looked yesterday on the net, and found papers on models of the lambda calculus. I was surprised to find that this seems to be a very deep rabbit hole.

Here's the thing: once you add (pure) interactivity, formality goes out the window; it's no longer a "closed" system. The question then is, what is the notion of computation once interactivity enters in? The answer: either the other user/machine is substituting for some of your computation (which can be described by just another, bigger state machine), or you're no longer in a formally definable system and you're now playing a game, in which case there is no application of the Church-Turing thesis.

$\begingroup$Interactive models of computation like process calculi are games in the sense of game semantics.$\endgroup$
– Martin Berger, Aug 30 '12 at 6:22


$\begingroup$Human behaviour is irrelevant. What matters is that computable interactive devices react to their inputs in an algorithmic, mechanical manner.$\endgroup$
– Martin Berger, Sep 3 '12 at 13:05


$\begingroup$@Mark J, I don't understand what you are saying. The interactive approach simply says that a device is computable if it reacts to its inputs in a mechanical way, using finite resources. Yes, if the other party in the interaction does something crazy, like inputting Chaitin's Omega, then the mechanical device can do something crazy, like computing the halting problem. So what?$\endgroup$
– Martin Berger, Sep 4 '12 at 9:58


$\begingroup$In my opinion the CTT is not about what is physically implementable. Instead, it's a crude test that rules out certain clearly non-implementable things: if the CTT says something is not computable, then it is not physically implementable, but I don't think the reverse implication holds.$\endgroup$
– Martin Berger, Sep 4 '12 at 10:00


$\begingroup$@Mark J, the requirement "a device is computable if it reacts to its inputs in a mechanical way, using finite resources" does not require that the inputs be generated mechanically. Certainly Chaitin's Omega cannot be mechanically generated.$\endgroup$
– Martin Berger, Sep 5 '12 at 9:31

skimming Wegner's paper, it's clear he's being a bit melodramatic and contrarian, but he has a point. the future of computing is arguably centered much more significantly in robotics, AI, or data mining (of vast real-world "big data"), which he doesn't seem to mention much by name, but which he is clearly alluding to with his model. and these areas largely focus on the universe outside of a TM's inputs and outputs.

historically this also went by the name cybernetics, as invented/formulated by Wiener. the point of robotics is that inputs and outputs are not merely digital and without meaning, as one might conclude looking at a TM; they are digital, but they have real-world implications/effects/causes etc., and the machine forms a feedback loop with the environment.

so I would argue that TMs and robotics form a very natural synergy or symbiotic relationship, so to speak. but this is not a radical assertion, and what Wegner announces with great fanfare is, phrased in different terms, not very controversial or novel. in other words, Wegner seems to be setting himself up as an intellectual or academic iconoclast in his style on purpose... and who is the TCS community to deny him that melodramatic framing? nevertheless see [2] for a serious rebuttal.

Wegner's example of driving a car is very relevant & other key recent breakthroughs in TCS can be cited:

the DARPA road race challenge, and also Google's closing in on self-driving car technology.[3]

the case of the Deep Blue chess victory over Kasparov

the recent IBM Watson Jeopardy! challenge victory

the increasingly autonomous Mars rover

a recently announced breakthrough in unsupervised object recognition by Google.[4]

commercialized speech recognition

but it is true: what started out decades ago as mere theory with TMs is now a very real-world phenomenon, and segments of the ivory-tower TCS community might be in some resistance to, or even denial of, that fact and the associated fundamental [near-Kuhnian] transformation and shift "currently in play". this is somewhat ironic because Turing was very applied in many of his perspectives & studies, such as his interest in an operational AI test (the Turing test), chemical dynamics, chess-solving computation, etc [5].

you can even see this in microcosm on this site, in clashes over how to define its scope, and heated arguments over whether a specific, seemingly innocuous tag called application-of-theory is legitimate.[7]

and let's note that TCS is in fact studying many interactive models of computation, and much key research is going on in that area... particularly interactive proof systems, in terms of which all the important complexity classes can be defined.[6]

$\begingroup$an analogy that was at the edge of my thoughts while writing this, but that I finally figured out later: I think the distinction between in vivo and in vitro biology is relevant. the TM is analogous to the latter; other (emerging) models are analogous to the former. =)$\endgroup$
– vzn, Aug 29 '12 at 22:35

$\begingroup$anyway the 2006 volume shows many prestigious computer scientists agreeing with the new paradigm. note also the final essay in the collection: Lynn Stein, Interaction, Computation, and Education — "This volume as a whole documents a fundamental shift in the culture of computation from a focus on algorithmic problem solving to a perspective in which interaction plays a central role. In this chapter, Stein points out that such a shift must be accompanied by a corresponding shift in computer science education, in the fundamental 'story' we tell our students in their introductory courses."$\endgroup$
– vzn, Aug 30 '12 at 1:46