Certain problems are known to be undecidable, but it is nevertheless possible to make some progress on solving them. For example, the halting problem is undecidable, but practical progress can be made on creating tools for detecting potential infinite loops in your code. Tiling problems are often undecidable (e.g., does this polyomino tile some rectangle?) but again it is possible to advance the state of the art in this area.
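
To make "tools for detecting potential infinite loops" concrete, here is a minimal sketch of the kind of crude heuristic such tools start from (Python, standard library only; the function name and the heuristic itself are illustrative, not any particular tool's method): flag `while` loops whose condition reads variables that the body never assigns and that contain no `break`. It is both unsound and incomplete, which is exactly the point: partial progress on an undecidable problem.

```python
import ast

def suspicious_while_loops(source):
    """Flag `while` loops whose condition reads variables the body never
    assigns (and that contain no `break`) -- a crude, unsound-and-incomplete
    heuristic for potential infinite loops; a sketch, not a real analyzer."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.While):
            cond_vars = {n.id for n in ast.walk(node.test)
                         if isinstance(n, ast.Name)}
            assigned = {n.id for stmt in node.body for n in ast.walk(stmt)
                        if isinstance(n, ast.Name)
                        and isinstance(n.ctx, ast.Store)}
            has_break = any(isinstance(n, ast.Break)
                            for stmt in node.body for n in ast.walk(stmt))
            if cond_vars and not (cond_vars & assigned) and not has_break:
                findings.append(node.lineno)
    return findings

print(suspicious_while_loops("x = 5\nwhile x > 0:\n    print(x)\n"))  # [2]
```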

What I am wondering is whether there is any decent theoretical method of measuring progress on solving undecidable problems, one that resembles the theoretical apparatus that has been developed for measuring progress on NP-hard problems. Or are we stuck with ad hoc, I-know-progress-when-I-see-it assessments of how much particular breakthroughs advance our understanding of undecidable problems?

Edit: As I think about this question, it occurs to me that parameterized complexity may be relevant here. An undecidable problem may become decidable if we introduce a parameter and fix its value. I'm not sure whether this observation is of any use, though.
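
For instance, with the number of steps $k$ as the parameter, the $k$-bounded halting problem ("does the machine halt within $k$ steps?") is trivially decidable for every fixed $k$, even though the unbounded problem is not. A minimal sketch, assuming machines are presented as a step function, with the invented-for-this-sketch convention that `None` means "halted":

```python
def halts_within(step, state, k):
    """Decide the k-bounded halting problem: run the machine for at most
    k steps and report whether it halted.  Decidable for each fixed k,
    even though the unbounded halting problem is undecidable."""
    for _ in range(k):
        if state is None:          # convention for this sketch: None = halted
            return True
        state = step(state)
    return state is None

# Toy machine: a counter that halts when it reaches zero.
count_down = lambda n: None if n == 0 else n - 1
print(halts_within(count_down, 5, 10))  # True:  halts well within 10 steps
print(halts_within(count_down, 5, 3))   # False: 3 steps are not enough
```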

The obvious measure, which you probably won't like, is to simply order various partial solutions according to their domains (i.e., the set of inputs on which they work). What would you like to use the measure for?
– Andrej Bauer, Dec 30 '10 at 16:07


@Andrej: Let me answer your question indirectly. In the realm of NP-hard problems, we sometimes have very nice results of the form, "Such-and-such an approximation ratio is attainable, and any further improvement is impossible unless P = NP." Being able to prove analogous results for interesting undecidable problems would be nice. It would give us some sense of whether there's some intrinsic barrier to further progress.
– Timothy Chow, Dec 31 '10 at 21:19

There is a proposed concept of "quasialgorithms", with some research in the area.
– vzn, Oct 11 '13 at 23:19

3 Answers

In the case of the halting problem, the answer is "not yet". The reason is that the standard logical method for characterizing how hard a program's termination proof is (e.g., ordinal analysis) tends to lose too much combinatorial and/or number-theoretic structure.

The state of the art in practical termination analysis of imperative programs is something called "rank-function synthesis" (Byron Cook has a forthcoming book, Proving Program Termination, on the subject from CUP). The idea is to compute a linear function of the program variables' values now and at the previous step, which serves as a termination metric. (One cool thing about this method is that it uses Farkas's lemma, which gives a neat geometric viewpoint on what's going on.) The interesting thing is that tools built on this approach can do things like show the termination of the Ackermann function (which is not primitive recursive), yet you can construct non-nested while loops that defeat them, even though proving their termination needs only induction up to $\omega$.
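
To see what these tools actually certify, here is a hedged sketch (the toy loop and helper names are mine, and it does not perform Farkas-style synthesis): it merely tests, on random states, the two conditions a linear ranking function must satisfy -- bounded below whenever the guard holds, and strictly decreasing across each transition. Real tools establish both conditions for all states at once, which is where Farkas's lemma comes in.

```python
import random

def check_ranking(guard, update, f, dim=2, trials=10_000):
    """Randomized sanity check of the two linear-ranking conditions:
      (1) boundedness: guard(s) implies f(s) >= 0
      (2) decrease:    guard(s) implies f(update(s)) <= f(s) - 1
    Passing is only evidence; synthesis tools certify both conditions
    for *all* states at once (e.g., via Farkas's lemma)."""
    for _ in range(trials):
        s = tuple(random.randint(-100, 100) for _ in range(dim))
        if not guard(s):
            continue
        if f(s) < 0:
            return False, ('unbounded below at', s)
        if f(update(s)) > f(s) - 1:
            return False, ('fails to decrease at', s)
    return True, None

# Toy loop:  while x - y > 0:  y = y + 1
guard  = lambda s: s[0] - s[1] > 0
update = lambda s: (s[0], s[1] + 1)
rank   = lambda s: s[0] - s[1]       # candidate linear ranking function
print(check_ranking(guard, update, rank))  # (True, None)
```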

This means that there isn't a neat relationship between the proof-theoretic strength of the metalogic in which you show termination (this is very important in rewriting theory, for example) and the functions that techniques like rank-function synthesis can show termination for.

For the lambda calculus, we have a precise characterization of termination in terms of typability: a lambda term is strongly normalizing if and only if it is typable under the intersection type discipline. Of course, since strong normalization is undecidable, this means that full type inference for intersection types is impossible, but it may also give a way of comparing partial inference algorithms.
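
Intersection-type inference itself is too much for a snippet, but here is a toy example of a partial procedure in the same spirit: a fuel-bounded, leftmost-outermost normalizer over de Bruijn terms that semi-decides normalization, giving up instead of diverging. (Caveats: this tracks weak normalization via normal-order reduction, not strong normalization, and is only a stand-in for the partial inference algorithms mentioned; the term encoding is ad hoc.)

```python
# Terms: ('var', k) | ('lam', body) | ('app', f, a), with de Bruijn indices.

def shift(t, d, c=0):
    """Shift free variables (indices >= cutoff c) by d."""
    if t[0] == 'var':
        return ('var', t[1] + d) if t[1] >= c else t
    if t[0] == 'lam':
        return ('lam', shift(t[1], d, c + 1))
    return ('app', shift(t[1], d, c), shift(t[2], d, c))

def subst(t, s, j=0):
    """Capture-avoiding substitution of s for variable j in t."""
    if t[0] == 'var':
        return s if t[1] == j else t
    if t[0] == 'lam':
        return ('lam', subst(t[1], shift(s, 1), j + 1))
    return ('app', subst(t[1], s, j), subst(t[2], s, j))

def step(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                    # beta-redex: (\. body) a
            return shift(subst(f[1], shift(a, 1)), -1)
        r = step(f)
        if r is not None:
            return ('app', r, a)
        r = step(a)
        if r is not None:
            return ('app', f, r)
    elif t[0] == 'lam':
        r = step(t[1])
        if r is not None:
            return ('lam', r)
    return None

def normalizes(t, fuel=1000):
    """Semi-decide normalization: True if a normal form is reached
    within `fuel` steps, False ('gave up') otherwise."""
    for _ in range(fuel):
        r = step(t)
        if r is None:
            return True
        t = r
    return False

ident = ('lam', ('var', 0))                       # \x. x
omega = ('lam', ('app', ('var', 0), ('var', 0)))  # \x. x x
print(normalizes(('app', ident, ident)))          # True
print(normalizes(('app', omega, omega)))          # False: Omega loops forever
```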

This is answering the title of the question more than its content, but you can also consider "approximations" of the halting problem as algorithms which will give you a correct answer on "almost all" programs.

The notion of "almost all" programs only makes sense if your model of computation is optimal (in the same sense as for Kolmogorov complexity), to avoid situations where the majority of your programs are trivial.

Given an optimal machine $M$ and an integer $n$, can you give a correct answer to the halting problem for all programs of size $\lt n$ except a fraction $\epsilon$, for arbitrarily small $\epsilon$? Lynch proved in the 70s that no, you can't. A few years ago, colleagues and I proved a stronger version of this result, allowing the algorithm to make mistakes or to fail to halt on a small proportion of inputs, or requiring only a probabilistic approximation with probability $p \gt 0$ (allowing the algorithm to flip coins).

Along the way, we proved a bunch of interesting things about the properties of the sequence $\rho_n$, where $\rho_n$ is the fraction of programs of size $\lt n$ that halt (it turns out this sequence is very strange). If the paper above is too dense, you can check out my slides from when we presented the results at ICALP.
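
For intuition only (this is not the $\rho_n$ of the paper, which needs an optimal machine; the instruction set, step budget, and treatment of ill-bracketed strings below are all modeling choices of mine), one can compute the analogous fraction in a toy model: enumerate every well-bracketed program over a one-cell, Brainfuck-like instruction set and run each with a step budget, counting fuel-exhausted runs as non-halting, so the printed value is a lower bound on the toy model's true halting fraction.

```python
from itertools import product

def run(prog, fuel=1000):
    """Run a one-cell, Brainfuck-like program over '+', '-', '[', ']';
    return True if it halts within `fuel` steps.  '-' floors the cell
    at 0 so the state stays a natural number.  Raises ValueError on
    ill-bracketed input."""
    stack, match = [], {}
    for i, c in enumerate(prog):           # precompute bracket matching
        if c == '[':
            stack.append(i)
        elif c == ']':
            if not stack:
                raise ValueError("unmatched ']'")
            j = stack.pop()
            match[i], match[j] = j, i
    if stack:
        raise ValueError("unmatched '['")
    cell, pc = 0, 0
    for _ in range(fuel):
        if pc >= len(prog):
            return True                    # fell off the end: halted
        c = prog[pc]
        if c == '+':
            cell += 1
        elif c == '-':
            cell = max(0, cell - 1)
        elif c == '[' and cell == 0:
            pc = match[pc]                 # skip the loop body
        elif c == ']' and cell != 0:
            pc = match[pc]                 # jump back to the matching '['
        pc += 1
    return False                           # out of fuel: presumed divergent

def rho(n):
    """Fraction of well-bracketed programs of size < n that provably halt."""
    halted = total = 0
    for size in range(n):
        for letters in product('+-[]', repeat=size):
            try:
                h = run(''.join(letters))
            except ValueError:
                continue                   # skip ill-bracketed strings
            total += 1
            halted += h
    return halted / total

print(rho(7))  # lower bound on the toy model's halting fraction
```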