Question. Given a Turing-machine program $e$, which
is guaranteed to run in polynomial time, can we computably
find such a polynomial?

In other words, is there a
computable function $e\mapsto p_e$, such that whenever $e$
is a Turing-machine program that runs in polynomial time,
then $p_e$ is such a polynomial time bound? That is, $p_e$
is a polynomial over the integers in one variable and
program $e$ on every input $n$ runs in time at most
$p_e(|n|)$, where $|n|$ is the length of the input $n$.

(Note that I impose no requirement on $p_e$ when $e$ is not
a polynomial-time program, and I am not asking whether the
function $e\mapsto p_e$ is polynomial-time computable, but
rather, just whether it is computable at all.)

In the field of complexity theory, it is common to treat
polynomial-time algorithms as coming equipped with an
explicit polynomial clock, that counts steps during the
computation and forces a halt when expired. This convention
allows for certain conveniences in the theory. In the field
of computability theory, however, one does not usually
assume that a polynomial-time algorithm comes equipped with
such a counter. My question is whether we can computably
produce such a counter just from the Turing machine
program.

I expect a negative answer. I think there is no such
computable function $e\mapsto p_e$, and the question is
really about how we are to prove this. But I don't know...

Of course, given a program $e$, we can get finitely many
sample points for a lower bound on the polynomial, but this
doesn't seem helpful. Furthermore, it seems that the lesson
of Rice's Theorem is
that we cannot expect to compute nontrivial information by
actually looking at the program itself, and I take this as
evidence against an affirmative answer. At the same time,
Rice's theorem does not directly apply, since the
polynomial $p_e$ is not dependent on the set or function
that $e$ computes, but rather on the way that it computes
it. So I'm not sure.

Finally, let me mention that this question is related to and
inspired by this recent interesting MO question about the
impossibility of converting NP algorithms to P
algorithms.
Several of the proposed answers there hinged critically on
whether the polynomial-time counter was part of the input
or not. In particular, an affirmative answer to the present
question leads to a solution of that question by those
answers. My expectation, however, is for a negative answer
here and an answer there ruling out a computable
transformation.

The question naturally generalizes to other classes of functions, and their respective computability classes, such as EXPTIME and so on. (In the silly extreme case of COMPTIME, there is an affirmative answer, since the running time of the program is a computable function.)
– Joel David Hamkins, Jun 13 '10 at 19:28

The two solutions that we have now can be viewed, from a blimp, as modifications of the two standard ways of proving Rice's theorem, by diagonalization and by reduction of the halting problem. You're right that the theorem as usually stated does not cover running time, but the method of proof is very flexible.
– Carl Mummert, Jun 13 '10 at 20:41

Thanks, everyone, for the great answers! MathOverflow is fantastic! I would like to accept both answers, and regret that I am able to accept only one.
– Joel David Hamkins, Jun 15 '10 at 1:13

Timothy Chow's answer was the right one to accept. Reduction proofs are almost always more informative than diagonalization proofs, and his leads immediately to a complete characterization of how uncomputable the bounding problem is. I posted mine just so that both proof methods would be represented.
– Carl Mummert, Jun 15 '10 at 1:45

2 Answers

[Edit: A bug in the original proof has been fixed, thanks to a comment by Francois Dorais.]

The answer is no. This kind of thing can be proved by what I call a "gas tank" argument. First, enumerate all Turing machines $N_1, N_2, N_3, \ldots$ in some standard way. Then construct a sequence of Turing machines $M_1, M_2, M_3, \ldots$ as follows. On an input of length $n$, $M_i$ simulates $N_i$ (on empty input) for up to $n$ steps. If $N_i$ does not halt within that time, then $M_i$ halts immediately after the $n$th step. However, if $N_i$ halts within the first $n$ steps, then $M_i$ "runs out of gas" and starts behaving erratically, which in this context means (say) that it continues running for $n^e$ steps before halting, where $e$ is the number of steps that $N_i$ took to halt.

Now if we had a program $P$ that could compute a polynomial upper bound on any polytime machine, then we could determine whether $N_i$ halts by calling $P$ on $M_i$, reading off the exponent $e$, and simulating $N_i$ for (at most) $e$ steps. If $N_i$ doesn't halt by then, then we know it will never halt.
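As a toy illustration of the argument (a sketch only, not real Turing machines: I model each $N_i$ simply by the number of steps it takes to halt on empty input, or `None` if it never halts, and I represent $M_i$ by its running-time function), the construction and the reduction might look like:

```python
def make_M(steps_to_halt):
    """Running time of the 'gas tank' machine M_i on inputs of length n.

    steps_to_halt: the number of steps N_i takes on empty input,
    or None if N_i never halts.  (In this toy model that number is
    simply handed to us; the real M_i discovers it by simulation.)
    """
    def M_time(n):
        # Simulate N_i for up to n steps.
        if steps_to_halt is None or steps_to_halt > n:
            return n  # N_i still running after n steps: halt at once
        e = steps_to_halt
        return n ** e  # out of gas: waste n**e steps, then halt
    return M_time

def decide_halting(degree_bound, halts_within):
    """Decide whether N_i halts, given the degree of the polynomial
    that the hypothetical bound-finder P returns for M_i.

    halts_within(t): does N_i halt within t steps?  (Computable by
    direct simulation.)
    """
    # If N_i halts in e steps, then M_i runs for n**e steps on all
    # inputs of length n >= e, so any polynomial bound on M_i has
    # degree at least e.  Hence simulating N_i for degree_bound
    # steps settles the question.
    return halts_within(degree_bound)
```

The point is that `decide_halting` never simulates $N_i$ for more than the degree read off from $P$'s output, which is exactly what turns $P$ into a decider for the halting problem.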

Of course this proof technique is very general; for example, $M_i$ can be made to simulate any fixed $M$ as long as it has gas but then do something else when it runs out of gas, proving that it will be undecidable whether an arbitrary given machine behaves like $M$.

Here is a fix: Let $M_i$ keep running for $n^s$ steps where $s$ is the number of steps it takes the machine $N_i$ to converge. Then you can solve the halting problem by reading the exponent of any polynomial upper bound.
– François G. Dorais ♦, Jun 13 '10 at 19:55

+1. For the converse, if we know that a program runs in polynomial time then we can use an oracle for the halting problem to find a polynomial upper bound: just check all the polynomials, one after another, until you find one that is an upper bound. Whether a given total program violates a fixed time bound is a $\Sigma^0_1$ property, and so it can be decided by a query to the halting problem. This shows that the lower bound you give is sharp. (P.S. I doubt that the Turing reduction I describe here could be improved to a stronger reduction.)
– Carl Mummert, Jun 13 '10 at 20:35

Timothy, thanks very much for this answer! Could I ask you kindly to edit the answer to incorporate François' fix?
– Joel David Hamkins, Jun 14 '10 at 2:03

You can also diagonalize directly against a purported bound-producing algorithm. Say that $R(j)$ returns a polynomial when run with any index $j$ of a polynomial time function as input.

Define a function $B(j,n)$ as follows. On input $n$, run $R(j)$ for $n$ steps. If this doesn't halt, return $0$ immediately. Otherwise, if $R(j)$ does not return a polynomial when it halts, return $0$ immediately. Otherwise, if the polynomial is $p(x)$, waste at least $(n+p(n))^2$ steps and then return $0$.

Note that for any $j$, the function $C_j = \lambda n . B(j,n)$ is total and runs in polynomial time, and if $R(j)$ returns a polynomial then that polynomial is not a bound on the running time of $C_j$.
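As a toy sketch of the diagonalization (modeling $R$ on input $j$ just by the number of steps it takes, `None` if it diverges, together with the polynomial it returns, and tracking only $B$'s running time):

```python
def B_time(steps_of_R, poly, n):
    """Toy running time of B(j, n).

    steps_of_R: number of steps R takes on input j (None if R(j)
    never halts); poly: the polynomial R(j) returns, as a Python
    function, or None if R's output is not a polynomial.
    """
    if steps_of_R is None or steps_of_R > n:
        return n                  # R(j) has not halted within n steps
    if poly is None:
        return n                  # R(j) returned a non-polynomial
    return (n + poly(n)) ** 2     # waste (n + p(n))**2 steps, return 0

# For every n >= steps_of_R, the running time (n + p(n))**2
# strictly exceeds p(n), so p is not a bound on B(j, .) -- yet
# B(j, .) is clearly polynomial time, since p is a fixed polynomial.
```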

Now the function that takes a number $j$ and returns an index for the $C_j$ is a total computable function. So we can use the recursion theorem to produce an index $k$ such that $\phi_k(n) = C_k(n)$. Then $\phi_k$ will be a total polynomial-time function, but if $R(k)$ returns a polynomial then this is not an upper bound for $\phi_k$.

Note: The previous paragraph requires more than the usual statement of the recursion theorem; it requires some knowledge of the proof to show that $\phi_k$ is polynomial-time. Here is the construction I need.

Let $s(j,k)$ be the usual polynomial-time function such that $\phi_{s(j,k)}(n) \simeq \phi_j(k,n)$; the key point we need is that the running time of $\phi_{s(j,k)}(n)$ is polynomially bounded if the running time of $\phi_j(k,n)$ is polynomially bounded, and the first of these is no smaller than the second. This can be checked by examining the construction of $s$ in the chosen model of computation.

Now let $d$ be the index for the computable function $\phi_d(j,n) = B(s(j,j),n)$ obtained by simple composition. Let $k = s(d,d)$. Then $\phi_k(n) = \phi_d(d,n) = B(k,n)$ as desired; this is the proof of the recursion theorem. Moreover, the implementation of these functions ensures that $\phi_k(n)$ runs in polynomial time but not faster than $B(k,n)$, because each computation of $\phi_k(n)$ consists of some polynomial-time-in-$n$ invocations of $s$ functions followed by the literal execution of the program for $B(k,n)$.

Carl, I'm a bit confused by your argument. Since the function $C_k$ is identically 0, mightn't the Recursion Theorem simply give me a program k that computes this function very quickly? In this case, I don't see the contradiction. That is, it seems that from $\phi_k=C_k$, we cannot deduce that time bounds are the same, since these programs may compute the same function in different ways. Or do you have in mind a deeper appeal to the proof of the Recursion Theorem, rather than just its statement, in which you get access to how the fixed point program works as well as the function it computes?
– Joel David Hamkins, Jun 14 '10 at 1:52

I'm sorry, it took me a minute to see your point. I do need the proof of the recursion theorem, from that point of view. I'll expand the answer to clarify.
– Carl Mummert, Jun 14 '10 at 2:21

+1. Thanks, Carl! It's very nice. Your fix exactly addresses the point I was trying to make. It seems that your argument proves a version of the Recursion Theorem where one gets control over how long the function takes to compute. Can you make a precise statement about this? I guess it shows something like: if $f$ is any total computable function, then there is a program $k$ for which $\phi_k=\phi_{f(k)}$, and program $k$ is never faster than $f(k)$ on any input and at worst polynomial time slower?
– Joel David Hamkins, Jun 15 '10 at 1:11

Yes, that statement is right. I did some searching and that sort of thing is already present in the computational complexity literature, and I'm sure it must be well known to people who study complexity. It took me a moment to see the issue because I tend to think of the recursion theorem in terms of its proof rather than as a pure existence statement. Very informally: because the proof is uniform, the only way to imitate an arbitrary function is going to be to actually run it, so the amount of slowdown is the main thing that needs to be checked.
– Carl Mummert, Jun 15 '10 at 2:12