One says that a language $L$ is complete for a complexity class $\mathcal{C}$ if $L$ is in $\mathcal{C}$ and every language in $\mathcal{C}$ is reducible to $L$. Thus, in a sense, $L$ is the hardest language in $\mathcal{C}$. For example, $3SAT$ is the hardest language in $\mathcal{NP}$.

Now recall Rice's theorem: any nontrivial property of the language recognized by a Turing machine is undecidable. The proof of this theorem essentially rests on the fact that if any such property were decidable, then it could be used to decide the $HALTING$ language. Thus, in a sense, $HALTING$ is the easiest of all properties.

And then here is a peculiar thing: the proof of Rice's theorem---at least the one I've seen---amounts to constructing a reduction from $HALTING$ to every property. Contrast this with the definition of completeness, and one realizes that these are ``opposites'' of each other in a sense.

Question: Is there more to this observation? Is there work done on this? If there are experts in the audience, could you elaborate on my observation? Does one ever consider ``easiest'' problems in complexity theory as in my example above, and if so, does it lead to interesting results (like it does to Rice's theorem above)?

Thank you in advance for your comments and for your effort in reading and thinking about my question.

Rev. 1:

Note that when I say $HALTING$ is the easiest property, I don't mean to imply that it is easy. Of course, it is an undecidable language. It is ``easy'' in a relative sense: every other property is at least as hard as $HALTING$, because if any other property can be decided, then it yields a decider for $HALTING$.

When we talk of a complete problem, we use similar terminology: if a language admits a reduction from every other language in its class, then we call it complete and rightly refer to it as one of the hardest languages in the class.

This is the subtle similarity / difference I am trying to point out. These two are in a sense opposites. In one case, we get reductions from all languages in the class. In the other, we get reductions from one language to all other languages in the class. In the first case, we call the target hardest. In the second, we (could) call the source the easiest.

7 Answers

One of the most common and powerful means of showing that a decision problem is undecidable is, as in the case you mention, to show that the halting problem reduces to it. For example, this is the basis of undecidability for almost all of the answers listed in the MO question asking for attractive examples of undecidable problems.

But sometimes people are led astray by this fact, and begin erroneously to believe that a problem $A$ is undecidable if and only if the halting problem reduces to it. That is, to use your terminology, they are led to believe that the halting problem is the easiest undecidable problem. One sometimes sees this error among mathematicians, even very good ones, who first begin to consider decidability issues in their domain. The error is to assume that, because undecidability is often proved by reducing the halting problem to the problem at hand, this is the only way undecidability arises. But this isn't true.

This issue and its resolution are central in the history of computability theory, connected with its birth and maturation as a subject. Specifically, Emil Post asked what is now known as Post's Problem: whether there are any computably enumerable Turing degrees strictly between the decidable ones and the halting problem. To use the jump notation, Post's problem asks whether there is a computably enumerable degree $d$ with $0\lt_T d \lt_T 0'$. One could view Post's problem in your terms: he is asking whether the halting problem is the easiest among all undecidable c.e. problems. A related version would ask whether it is the easiest undecidable problem of all.

The answer was given by Friedberg and Muchnik, who invented the priority argument method specifically to solve it, a method that has become fundamental, used in thousands of constructions in computability theory in order to reveal the nature and structure of the Turing degrees. The answer is that there are many such intermediate degrees! Indeed, there is a rich structure of c.e. degrees strictly below the halting problem. In fact, every countable partial order embeds into the hierarchy of degrees below the halting problem.

Thus, it is not correct to say that HALTING is the easiest undecidable problem. Any of the intermediate sets constructed by the priority method are strictly easier than HALTING, but still undecidable.

This is not to dispute the technical claim you made about Rice's theorem, but only your interpretation of it. The properties you consider are not actually languages, but sets of languages, or rather, sets of Turing machine programs that respect the equivalence of deciding the same set. Thus, your properties are just a special case of the collection of all possible languages. The reason the halting problem reduces to these particular sets has to do with the fact that the halting problem does reduce to the problem of deciding whether two programs compute the same set. Indeed, this equivalence relation, making two programs equivalent when they compute the same function, is strictly harder than the halting problem; I believe it is a complete $\Pi^0_2$ set, which would make it equivalent to the double jump $0''$.

Yes, that is a $\Pi^0_2$-complete set. There is a $\Pi^0_2$ formula $R(i,j)$ which says: for all $n$, if $\varphi_i(n)$ halts then $\varphi_j(n)$ halts with the same value, and vice versa. Conversely, it is easy to decide an arbitrary $\Pi^0_2$ formula if you are given as an oracle the set of indices that compute the constant $0$ function. (Any other total computable function could be substituted in place of the constant $0$ function.)
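Written out in the usual stage notation $\varphi_{i,s}$ (meaning $\varphi_i$ run for $s$ steps; a standard convention, stated here from memory), one way to express this predicate is

$$R(i,j)\;\equiv\;\forall n\,\forall s\,\exists t\,\bigl[(\varphi_{i,s}(n)\!\downarrow\;\rightarrow\;\varphi_{j,t}(n)\!\downarrow\,=\varphi_{i,s}(n))\;\wedge\;(\varphi_{j,s}(n)\!\downarrow\;\rightarrow\;\varphi_{i,t}(n)\!\downarrow\,=\varphi_{j,s}(n))\bigr],$$

whose matrix is decidable, so the prefix $\forall\forall\exists$ collapses to the $\forall\exists$ form characteristic of $\Pi^0_2$.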
– Carl Mummert, Jun 26 '10 at 14:55

Note also that every language $L$ is easiest among all languages that are at least as hard as $L$.
– Joel David Hamkins, Jun 27 '10 at 1:52

Thanks Joel for your informative answer. However, I am not sure I understand the implication of paragraphs 2 and 5 in your answer. Nowhere do I suggest that I believe $HALTING$ to be the easiest undecidable problem. My question is simply this: the proof of Rice's theorem gives an example where one gets a reduction from one language to all languages in the class ``properties of computations''. The proof of $\mathcal{C}$-completeness gives an example where one gets a reduction from every language in the class $\mathcal{C}$. This contrast I find interesting, and I wonder whether it is a symptom of something more general.
– SBH, Jul 4 '10 at 0:07

Suppose we have a set of languages $\mathcal{S}$. Then the problem of deciding whether the language
of a given Turing machine is in $\mathcal{S}$ is undecidable, provided that there exists a Turing machine that recognizes a language in $\mathcal{S}$ and a Turing machine that recognizes a language not in $\mathcal{S}$.

Thus the theorem is not concerned with how "easy" the property of languages may be, but with how hard it is to recognize even easy properties of a language $L$, given only a Turing machine that generates $L$. It says that the only decidable sets of languages are the empty set (decided by saying NO for every Turing machine) and the set of all languages (decided by saying YES for every Turing machine).

This is not so surprising: since you can't in general tell whether a Turing machine halts, you really
don't know anything about what it ultimately does.

Addendum. Your intuition that Rice's theorem shows the halting problem to be "easiest" is correct if you amend your statement as follows: the halting problem is the easiest in the class of problems of the form $P_{\mathcal S}$, where, for a nontrivial set of languages $\mathcal{S}$, $P_{\mathcal S}$ is the problem of deciding, for an arbitrary Turing machine $M$, whether the language generated by $M$ is in $\mathcal{S}$.

The reduction in the proof of Rice's theorem shows that every such problem
$P_{\mathcal S}$ is at least as hard as the halting problem, and one can point
to sets $\mathcal{S}$ for which the problem $P_{\mathcal S}$ is actually harder,
for example, the set of infinite languages.
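The reduction behind this is short enough to sketch concretely. Here is a minimal illustration in Python, modeling Turing machines as callables; the names (`rice_reduction`, `M_yes`) and the framing are assumptions of the sketch, not a standard formalization.

```python
# A sketch of the reduction in Rice's theorem, with Turing machines
# modeled as Python callables (an illustrative assumption).

def rice_reduction(M, w, M_yes):
    """Given an instance (M, w) of HALTING and a machine M_yes whose
    language has the nontrivial property, build M' so that L(M') has
    the property if and only if M halts on w."""
    def M_prime(x):
        M(w)             # phase 1: runs forever if M does not halt on w
        return M_yes(x)  # phase 2: otherwise behave exactly like M_yes
    return M_prime

# If M halts on w, then L(M') = L(M_yes), which has the property; if M
# diverges on w, then L(M') is empty, which we assume lacks it.
M = lambda w: None            # a machine that halts on every input
M_yes = lambda x: x % 2 == 0  # say, a machine accepting even numbers
M_prime = rice_reduction(M, "w", M_yes)
print(M_prime(4), M_prime(3))  # → True False, since M halts on w
```

So a decider for the property, applied to $M'$, would yield a decider for $HALTING$.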

Yes, that is most certainly what I mean. My point, however, is not to insist that Halting is easy, but to ask whether this similarity---reduction from easiest to all properties, and reductions from all languages to hardest---is a symptom of something more interesting.
– SBH, Jun 26 '10 at 10:02


As far as I know, it is only a symptom of the difference between looking downwards and looking upwards from the degree of unsolvability of the halting problem. Looking down, one asks which problems are reducible to the halting problem, and the answer includes, but is not confined to, the decision problems for all computably enumerable languages. Looking up, one asks which are the problems to which the halting problem is reducible, such as the problems $P_{\mathcal{S}}$. It turns out that there are infinitely many degrees of unsolvability both above and below, with very complex structure.
– John Stillwell, Jun 26 '10 at 10:15

+1 for reminding me of Kozen's neat paper. Personally, my favorite snippet from the abstract is "...if P≠NP is provable at all, then it is provable by diagonalization..."
– Ryan Williams, Jul 14 '10 at 6:54

It sounds like what you're talking about is the very basics of recursion theory, particularly the Turing degrees and even more specifically the degree $0'$ that contains the complete set $K$ (defined as the set of all $e$ such that $e \in W_{e}$, using some standard enumeration $W_{e}$ of the computably enumerable sets). While it doesn't really provide a good jumping-off point for the rest of the field, Smullyan's book Recursion Theory for Metamathematics is a good book with a solid elementary approach to the subject and might make interesting reading if you can track down a copy.

And to confirm your intuition, there's quite a bit of overlap between recursion theory and complexity theory; there's a recursion-theoretic hierarchy (the arithmetical hierarchy) that's an exact analog of the polynomial hierarchy (with the important distinction that it's explicitly known not to collapse), and the two subjects share a lot of common concepts (e.g., oracles). I'm actually a bit surprised that there isn't more work 'reflecting' from recursion theory down to complexity theory, but my (novice's) understanding is that there are a number of complications in the approach (obviously, I suppose, since the reflection of the easy result $0 \ne 0'$ is the $P \ne NP$ claim...)

Thank you for your answer. Can you elaborate on my observation? For example, does one ever consider ``easiest'' problem in complexity theory and if so, what results has it led to?
– SBH, Jun 26 '10 at 1:31

This should really be a comment under Steven Stadnicki's answer, but I just found out about this site, so don't have the rep yet...

"I'm actually a bit surprised that there isn't more work 'reflecting' from recursion theory down to complexity theory, but my (novice's) understanding is that there are a number of complications in the approach"

Yes, the problem has to do with oracles. In recursion theory (it's been several years), nothing relativises, so $A^B \neq A$ for almost any nontrivial $B$. However, the opposite is true in complexity theory: once you know that everything halts, oracles either don't help or completely dominate, and so almost all of the techniques used in recursion theory are useless in complexity theory.

You asked, "does one ever consider ``easiest'' problems in complexity theory as in my example above, and if so, does it lead to interesting results (like it does to Rice's theorem above)?"

From the perspective of computability theory the answer is definitely yes. Those problems that are "easiest" can be thought of as problems that are close to computable in some sense. Studying things that are close to computable has led to proving the existence of the following:

a) Minimal Turing degrees - These are degrees other than $0$ (the degree of the decidable problems) such that no Turing degree lies strictly between them and $0$.

b) Computably dominated Turing degrees - Every function computable from such a degree is dominated by a computable function.

c) K-trivial Turing degrees - These have many characterizations, one of which is that these degrees have no compressive power as an oracle.
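For reference, the standard definitions behind (a)-(c) can be written symbolically (stated from memory, so worth checking against a computability text): a degree $d$ is minimal iff $d \neq 0$ and there is no $a$ with $0 \lt_T a \lt_T d$; $d$ is computably dominated iff for every $f \le_T d$ there is a computable $g$ with $f(n) \le g(n)$ for all $n$; and a set $A$ is $K$-trivial iff there is a constant $c$ with $K(A\restriction n) \le K(n) + c$ for all $n$, where $K$ is prefix-free Kolmogorov complexity.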

The point of completeness is not so much to say that a complete problem $L$ is easy relative to the others as that it is hard. We assume (because we believe the class to be intractable, or whatever) that solving at least some problems in the class is hard. Consequently, if $L$ is complete, then $L$ must be at least as hard as anything else in that class, so $L$ is intractable. ($L$ acts as an upper bound for the whole complexity class, and the complexity class in turn provides a lower bound on the intractability of $L$.)

The proof of Rice's theorem shows that $HALTING$ can be reduced to any problem $L$ that describes a nontrivial property of Turing machines depending only on the corresponding language. $HALTING$ is hard (i.e., unsolvable), so the latter problem is also unsolvable. The proof of Rice's theorem itself (as far as I understand it) does not show that $HALTING$ is hard (that is a separate result), but rather that other problems are hard.

About your last point: I'm not sure what you mean. The "easiest" problems in complexity theory would just be the trivial ones, no? (Any decidable language admits a mapping reduction to any language other than $\emptyset$ and $\Sigma^*$: map yes-instances to a fixed member and no-instances to a fixed nonmember.)

This can, incidentally, be formalized via the notion of mapping reduction.
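To make that concrete: a mapping reduction $A \le_m B$ is a computable $f$ with $x \in A \iff f(x) \in B$. A toy sketch in Python, with $A$, $B$, and $f$ chosen purely for illustration:

```python
# Toy mapping reduction A <=_m B. The languages are illustrative:
# A = decimal numerals of even numbers, B = binary strings ending in '0'.

def in_A(x: str) -> bool:
    return int(x) % 2 == 0      # membership in A is decidable

def in_B(x: str) -> bool:
    return x.endswith("0")      # membership in B is decidable

def f(x: str) -> str:
    # The computable reduction: convert the decimal numeral to binary.
    # A number is even exactly when its binary expansion ends in '0'.
    return bin(int(x))[2:]

# The defining property of a mapping reduction: x in A  <=>  f(x) in B.
assert all(in_A(str(n)) == in_B(f(str(n))) for n in range(100))
```

Because $f$ here is total and computable, $A \le_m B$; in complexity theory one additionally requires $f$ to be computable in polynomial time.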