Hypercomputation: History of an Emerging Field

B. Jack Copeland (University of Canterbury, New Zealand)

This paper sketches the history of the idea of a hypermachine - a machine
able to generate functions that the universal Turing machine cannot. The
paper also examines the origins of the myth that Turing, or Church, or both,
somehow showed such machines to be an impossibility.

Effective Procedures and Causal Processes

Carol Cleland (University of Colorado at Boulder)

Intuitively, an effective procedure is a way of getting something done
in a finite number of distinct steps. Put another way, what makes a procedure
effective is the consequences of following it. As I have argued, the consequences
of following a procedure are not determined by its instruction set. This
is particularly significant in the case of mundane procedures (recipes,
methods, etc.) and computer programs, where the consequences of following
a procedure crucially depend upon causal processes. Yet the role of causal
processes is usually ignored in theoretical discussions of the computational
capacities of concrete computers. I identify the Turing machine model of
effective procedures as the source of the problem, contending that it collapses
the process which makes a procedure effective into the following (implementation)
of the procedure. I show that there are fundamental differences between
causal processes and implementations of procedures, and argue that failure
to respect these differences has led to a number of puzzles and mistaken
views about the capacities of concrete computers.

A New Concept of Computability

Mark Hogarth (Cambridge University)

The concept of computability takes on a new appearance in the light of
some recent investigations in modern physics. The picture that emerges encompasses
that advanced by Turing and his followers, but is richer and more revealing.
It is also more satisfying in that the pure and physical aspects of computability
are clearly disentangled, and the long-recognised difficulties with the
status of the Church-Turing thesis vanish since, without any theoretical
detriment, the thesis itself is seen to play no role.

Computing Beyond Digital Computers by Neural Models

Hava Siegelmann (Technion, Haifa)

The term "Neural Computation" is frequently used, but the associated research
field has many faces. While neuroscientists mainly look for models to explain
experimental data, computer scientists consider variants of the classical
Turing model in which the additions are reminiscent of particular properties
of the brain. The gap between the models considered by these communities
is immense: The theory of computational complexity requires the assumption
of discrete computation and does not allow for other types of computational
paradigms.

In this work, we consider a basic neural network model: finite number of
neurons, recurrent interconnection, continuous activation function, and
real numbers as weights. This model is considered "analog" both because its
real-valued weights make the phase space continuous and because its activation
function is continuous. In computational terms, this continuity of flow (due
to the activation function) is a genuine restriction on our model. There are
no discrete flow commands such as "If
$z$ is greater than $0$, compute one thing, otherwise continue in another
computation path".

Given the above, how does the analog network compare with the digital computer
in terms of computational capabilities? We show that although the network
is continuous, cannot branch on values, does not have a memory unit in the
classical sense, and uses only a fixed number of neurons, it can still
compute anything that a digital machine can. Furthermore, due to its real-valued
weights, it is inherently richer than the digital computational model and
can compute functions which no digital computer can. The network, although
transcending the Turing model, is still sensitive to resource constraints
and still computes only a small subclass of all possible functions. It thus
constitutes a well-defined super-Turing computational model.

An analogue of the Church-Turing thesis is proposed for the realm of analog
computation: "No possible abstract analog device
can have more computational capabilities (up to polynomial time) than Analog
Recurrent Neural Networks". This can be interpreted as suggesting that the
neural network be considered a standard model in the realm of analog computation,
functioning in a role similar to that of the Turing machine in the realm
of digital computation.

An alternative interpretation is possible, in which the extra power is
not due to the use of real weights, but rather due to the ability of the
machine to tune its internal parameters. This contrasts with the
Turing model, which is inherently static.

(We note that our results concern the theoretical issue of computational
capability only: practical issues such as convergence of the learning
algorithm and availability of training data may prevent a network from learning
functions that it can in principle represent.)

Computation in an Arbitrary Universe

Mike Stannett (Controller, Noisefactory (http://noisefactory.co.uk/))

Because computation concerns the measurement of physical values and the
workings of physical devices, the question "what can be computed?" depends
ultimately on the underlying model of physics one assumes to be true. In
a Newtonian world where time moves forward one tick after another, we naturally
expect to find computation taking place in well-defined steps, and the resulting
models include the ever-familiar model of Turing computation. Allowing quantum
indeterminacy into the model, but keeping the clockwork, we encounter quantum
computation. Similarly, in a Newtonian world where time is thought to pass
continuously rather than in jumps, we have the analog computation central
to Pour-El's researches. Rather than ask what computation would be like
in this physics or that, we ask whether we can introduce universal physical
structure as a parameter; our interest is a general unified model of computation
which will work in any version of physics - you tell the model what your
physics looks like and the model tells you what's computable. Since the
power of computational systems seems to be inherently tied to the assumed
nature of time, we have generated a model of computation over arbitrary
temporal structures, and are looking to extend it to include other physical
'givens' like space and curvature.
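
One way to picture physics as a parameter is sketched below; this is a
hypothetical illustration of ours, not Stannett's own formalism. The temporal
structure becomes an explicit argument, so that the same transition rule
yields different notions of computation over different time domains (a
literal Python loop can only realize discrete, enumerable structures; denser
or transfinite time would need a different realization):

    import itertools
    from typing import Callable, Iterable, TypeVar

    S = TypeVar("S")

    def run(instants: Iterable[int],
            step: Callable[[S], S],
            halted: Callable[[S], bool],
            state: S) -> S:
        # Evolve `step` over an externally supplied temporal structure.
        # With instants = itertools.count() this is ordinary discrete,
        # Turing-style computation; supplying a different structure
        # changes what the same rule can compute.
        for _ in instants:
            if halted(state):
                break
            state = step(state)
        return state

    # Example: counting down over discrete Newtonian time.
    print(run(itertools.count(), lambda n: n - 1, lambda n: n == 0, 5))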

Oron Shagrir and Itamar Pitowsky (Hebrew University of Jerusalem)

The Hypercomputation Thesis (HT) is the claim that some machines (perhaps
only in the abstract) compute functions that cannot be computed by a universal
Turing machine. If HT is true, then three important questions immediately
present themselves:

(1) Are there any constraints on computable functions? Are there functions
that are not computable?

(2) Is the Church-Turing Thesis false?

(3) Can physical systems carry out hypercomputations?

In this paper we primarily address questions (2) and (3). We first argue
that the truth of HT does not falsify the Church-Turing Thesis. We then
examine the relations between the Church-Turing Thesis and some physical
forms of it (e.g., Gandy (1980), Wolfram (1985), Sieg and Byrnes (1999)).
Finally, we explain why these physical forms of the Church-Turing Thesis
might be false.

Am I a Super-Turing Machine?

Gert-Jan Lokhorst (Erasmus University, Rotterdam)

Hava Siegelmann has proven that analog recurrent neural nets are, in general,
more powerful than the universal Turing machine. This result makes one wonder
whether human beings might be regarded as super-Turing machines. However,
there are three reasons to reject the latter hypothesis:

Physics suggests that there can be no super-Turing machines in the real
world (Bekenstein bound; speculations about the ultimate discreteness
of space-time);

The most sophisticated models of the brain which are currently available
(with spiking neurons and a generous amount of noise) imply that brains
are less powerful than finite automata (results by Maass and others);
and

It is doubtful whether the claim that system X is a super-Turing machine
is empirically meaningful.

I will discuss whether these reasons are really convincing, and argue that
they are not.

I am nevertheless reluctant to give up the venerable old hypothesis (which
has been defended by Kolmogorov, Burks, Minsky, Nelson, and others) that
man is, at best, a finite automaton. I do not think that I am a super-Turing
machine, if only out of modesty. But I am only speaking for myself here;
I do not know about you.

Alan Turing and the Mathematical Objection: The
Generation of Non-computable Sequences

Gualtiero Piccinini (University of Pittsburgh)

The focus of this paper is on Turing's ideas about machines, mathematical
methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other
logicians, including Turing himself, had shown that no finite set of rules
could be used to generate all true mathematical statements. Yet according
to Turing, there was no upper bound to the number of mathematical truths
provable by intelligent human beings, for they could invent new rules and
methods of proof. So, the output of a human mathematician, for Turing, was
not a computable sequence (i.e., one that could be generated by a Turing
machine). Since computers only contained a finite number of instructions
(or programs), one might argue, they could not reproduce human intelligence.

Turing called this the mathematical objection to his view that machines
can think. Logico-mathematical reasons, stemming from his own work, helped
to convince Turing that it should be possible to reproduce human intelligence,
and eventually compete with it, by developing the appropriate kind of digital
computer. He felt it should be possible to program a computer so that it
could learn or discover new rules, overcoming the limitations imposed by
the incompleteness and undecidability results in the same way in which human
mathematicians presumably do.

Pseudorecursiveness and Hypercomputation: A Legacy
of Nonuniformity

Benjamin Wells (University of San Francisco)

My thesis, the last dissertation Alfred Tarski signed, showed that
equational theories may be decidable if the number of variables is
restricted, yet undecidable if arbitrary equations are considered (there
are now several nice papers along these lines by six or seven authors).
Tarski always encouraged me to write this up (and I still am!) by saying
that it was a good lesson for mathematicians as well as logicians in making
sure they got the quantifiers in the right order and paid some attention
to requirements of formality in math. But toward the end of the process,
late one early morning, he asked me if I knew why he liked this problem.
I replied that "it was a good lesson ... ," but he said, "Sure. But thatŐs
not why I like it. It is because your equational theory is decidable even
if it is not recursive." I protested, "But you need an oracle ... ." He
would have none of it, saying that he invented the notion of decidable theory,
and this was one.
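
The formal shape of the claim, in notation of our own choosing (the symbol
$\mathrm{Eq}_n$ is illustrative), is this: a variety $V$ is pseudorecursive
when each fragment $\mathrm{Eq}_n(V)$, the set of equations in at most $n$
variables valid in $V$, is recursive, yet the full theory
$\mathrm{Eq}(V) = \bigcup_n \mathrm{Eq}_n(V)$ is not recursive. The quantifier
order is exactly the lesson: "for each $n$ there exists a decision procedure"
does not imply "there exists a decision procedure uniform in $n$."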

In this talk I shall present the context for his bold statement, his informal
argument in support, routes to formalization at lower complexity, and speculation
on what he may have meant.

If you want to prepare, try B. Wells, "Pseudorecursive Varieties of Semigroups
- I", International Journal of Algebra and Computation, vol.6 (1996), pp.457-510.