I wonder if it is possible to specialize the question (a) What is the probability that a random Turing machine program will halt? to (b) What is the probability that a random Turing machine program, when given—say—the integer coordinates of the three vertices of a planar triangle on its tape, will halt with the triangle's area on the tape as its answer?

The answer to (a) is
Chaitin's $\Omega$.
What I am asking is whether "halting" can be narrowed to "useful"
in any meaningful way, with any positive probability attached. It seems
it should be possible, perhaps not to compute a specific probability, but
to guarantee that it is positive...?

This is well beyond my ken, so I'd especially welcome tutorial pointers.
Thanks!

1 Answer

Here is another way to answer (a). (I have posted about this before on MO here, here, here and here.)

The idea is to consider the Turing machine programs as having random instructions. A single line in a Turing machine program is an instruction of the form: when in such-and-such a state, reading such-and-such a symbol, then write such-and-such, move in such-and-such a direction and change to such-and-such a new state. We may consider the collection of all programs having at most $n$ states, and for any fixed $n$, one may consider the notion of a random $n$-state program; every line of the program is chosen randomly, with uniform probability. The question becomes, then, what is the resulting typical nature of a random program?

For any desired behavior, one may consider the set $A$ of programs exhibiting that behavior. What we want is some way to measure the size of $A$. It seems quite natural to use the asymptotic density of the members of $A$ among the $n$-state programs, as $n$ becomes large. That is, the probability that a Turing machine program has property $A$ is the limit $$\lim_{n\to\infty} \frac{|A\cap P_n|}{|P_n|},$$ if this limit exists, where $P_n$ is the collection of all $n$-state Turing machines. For example, if the density is $1$, it means that more than $99\%$ of the $n$-state Turing machine programs are in $A$ once $n$ is large enough, and more than $99.9\%$ for still larger $n$, and so on, with the fraction eventually as close to $1$ as desired.
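To make this concrete, here is a small sketch (my own illustration in Python, with hypothetical helper names) of the uniform model just described: a random $n$-state program over a two-symbol alphabet, together with a Monte Carlo estimate of $|A\cap P_n|/|P_n|$ for a sample property.

```python
import random

def random_program(n, k=2):
    """A uniformly random n-state program over a k-symbol alphabet:
    each (state, symbol) pair gets an independent uniform instruction
    (symbol to write, direction to move, next state), where the next
    state is drawn from {0, ..., n} and 0 is the halt state."""
    return {(s, a): (random.randrange(k),       # symbol to write
                     random.choice((-1, 1)),    # move left or right
                     random.randint(0, n))      # next state (0 = halt)
            for s in range(1, n + 1) for a in range(k)}

def estimated_density(has_property, n, trials=20000):
    """Monte Carlo estimate of |A ∩ P_n| / |P_n| for property A."""
    hits = sum(has_property(random_program(n)) for _ in range(trials))
    return hits / trials

# Sample property A: the program has at least one transition to halt.
has_halt = lambda prog: any(nxt == 0 for (_, _, nxt) in prog.values())
print(estimated_density(has_halt, n=50))
```

For this sample property the density can also be computed exactly, as $1 - (n/(n+1))^{2n}$, so the estimate for $n=50$ should land near $1-(50/51)^{100}\approx 0.86$.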

Your question was: what is the probability that a program halts? The answer, if one uses one of the standard Turing machine models, with a one-way infinite tape, a finite alphabet, and a single halt state, is that the probability is zero. This follows from the main argument of my paper.

What we prove in that paper is that there is a set $A$ of Turing machine programs which consists of almost all Turing machine programs, in the sense that the asymptotic density of $A$ is $1$, such that membership in $A$ is linear-time decidable and the halting problem for programs in $A$ is also linear-time decidable. Thus, the halting problem is decidable with probability one. This is an instance of the black-hole phenomenon, by which the difficulty of an infeasible or intractable problem is concentrated in a very small region, of measure zero, outside of which the problem is easy. Our main point is that even the halting problem admits a black hole. Clearly it will not do to base an encryption scheme on the difficulty of a problem with a black hole, since if the criminals can rob the bank $95\%$ of the time, or even $5\%$ of the time, it is bad enough.

The way the argument proceeds is by showing something more, namely, that for the standard model of computability I mentioned above, the probability-one behavior of a Turing machine is that the head falls off the tape before a state is repeated. Thus, if this behavior is regarded as non-halting, then almost all programs do not halt on a fixed input. (But if having the head fall off the tape is regarded as halting, then almost all programs halt; the main point is that the behavior of a random program in this standard model is trivial in this way.) The proof appeals to the Pólya recurrence phenomenon, the basic argument being that as long as the state has not yet repeated, every new line of the program is random, so the odds of moving left or right make the head behave like a simple random walk; with probability one, therefore, the head returns to the left-most cell and falls off the tape.
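The random-walk heuristic is easy to simulate. The sketch below (my own illustration, not from the paper) generates program lines lazily and uniformly at random, matching the "every new line is random" picture, and checks which event comes first: the head falling off the left edge, a repeated state, or a halt transition.

```python
import random

def falls_off_first(n, k=2, max_steps=100000):
    """Run a random n-state program (k-symbol alphabet, one-way tape,
    instructions generated lazily and uniformly) on a blank tape.
    Return True if the head falls off the left edge before any state
    repeats or a halt transition is taken."""
    program, tape = {}, {}
    pos, state, seen = 0, 1, {1}
    for _ in range(max_steps):
        key = (state, tape.get(pos, 0))
        if key not in program:           # a fresh line: choose it at random
            program[key] = (random.randrange(k),
                            random.choice((-1, 1)),
                            random.randint(0, n))
        write, move, nxt = program[key]
        tape[pos] = write
        pos += move
        if pos < 0:
            return True                  # fell off the tape first
        if nxt == 0 or nxt in seen:
            return False                 # halted or repeated a state first
        seen.add(nxt)
        state = nxt
    return False

trials = 500
frac = sum(falls_off_first(n=500) for _ in range(trials)) / trials
print(frac)   # comfortably above 1/2, creeping toward 1 as n grows
```

Already the first step falls off with probability $1/2$, and since a fresh state is reached at almost every early step, the subsequent motion is nearly a simple random walk, which reaches the left edge well before the birthday-paradox time $\approx\sqrt{2n}$ at which states start to repeat.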

You asked about the probability of useful behavior. But because in the standard model the probability one behavior of a Turing machine is trivial---it falls off the tape before repeating a state---I take it to show that most programs are not useful. Indeed, almost all of them are useless.

Conclusion. Don't let monkeys write your programs.

Unfortunately, the argument depends on the computational model, and the question is open for the two-way infinite tape model.

@Joel, I've heard Alexei speak of this nice result many times. I always wanted to know what happens if you assume the machine stays where it is whenever it tries to move beyond the left end of the tape. This might be different from the 2-way tape because you keep moving to the left and staying awhile before moving right.
– Benjamin Steinberg, Mar 3 '12 at 3:03

With that model, the best we know is that at least $13.5\%$ of programs don't halt (more precisely, a proportion of $\frac{1}{e^2}$), for the trivial reason that this is the limiting density of programs having no transition to the halt state. This is also the best we know for the two-way infinite tape model.
– Joel David Hamkins, Mar 3 '12 at 3:06
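For what it's worth, the $\frac{1}{e^2}$ figure in the last comment is easy to check numerically, assuming the uniform model with a two-symbol alphabet: each of the $2n$ program lines avoids the halt state independently with probability $n/(n+1)$, and $(n/(n+1))^{2n}\to e^{-2}\approx 0.1353$. A quick sketch:

```python
import math

def no_halt_density(n, k=2):
    """Exact density of n-state programs (k-symbol alphabet) in which
    no line transitions to the halt state: each of the k*n lines
    independently avoids halt with probability n/(n+1)."""
    return (n / (n + 1)) ** (k * n)

for n in (10, 100, 1000, 10000):
    print(n, no_halt_density(n))
print("limit:", math.exp(-2))   # ≈ 0.1353, i.e. about 13.5%
```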