Go away if you want well-reasoned insightful commentary by an expert.
I don't understand this stuff.

The word "paradox", to me, refers to a line of reasoning which leads
to an unacceptable result. It is an indication that either (1) our assumptions
are in error, (2) our mode of reasoning is in error, or (3) we made a trivial
mistake, a mental "typo". Most paradoxes are resolved, with no great fanfare,
by noting (3). Paradoxes which don't immediately succumb to this analysis
usually tell us something much deeper about valid reasoning.

The word "logic", to me, refers to formal reasoning and deduction. The
study of logic -- and the goal of mathematical logic -- is thus to understand
the processes that we use to arrive at and hold our formal beliefs. In
the context of mathematics, this seems at first glance a sufficiently cut-and-dried
subject that we should be able to write down our understanding quite precisely.
As the paradoxes below demonstrate, this does not seem to be the case.
Why?

Note: I am not particularly well trained in mathematical logic, so my
observations below may be missing some crucial considerations; however,
a few conversations with people who should know better have so far failed
to clear things up for me. Perhaps you can?

Note: This page will change when (if) I understand the issues better.

Russell's Paradox

This is a famous paradox, conjured by a famous philosopher, which changed
the course of formal mathematics. Before his time, a set was loosely held
to be any collection of mathematical objects. You could have sets of numbers,
sets of functions, sets of sets... so long as you could give a clear criterion
for whether an object was in the set or not. So. Russell's Paradox states,
simply,

"Consider the set S of all sets which don't contain themselves."

Does S contain itself? If it does, then by definition it must not; but
if it doesn't then by definition it must. Oops.
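The loop can be felt directly if we model a "set" as a membership predicate. This is only a toy analogue of my own (Python functions instead of sets), but the function russell below plays the role of S:

```python
# Toy model: a "set" is just a membership predicate (a function from
# objects to booleans).  russell(x) asks whether x is a set that does
# not contain itself.
def russell(x):
    return not x(x)

# Asking whether the Russell "set" contains itself -- russell(russell)
# -- unwinds to "not russell(russell)", and the recursion never ends.
```

In set theory the same circularity yields a contradiction; here it merely yields a computation that never terminates, which hints at why type restrictions (forbidding x(x)) block the construction.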

Thus various complications have been developed -- for example, "typed"
sets: type 0 might be the integers, and now a set of type N can only contain
objects of lower type. This successfully, though some say inelegantly,
wards off self-referential statements such as Russell's Paradox. There
are alternative complications; it seems a matter of mathematical preference
which insect-repellent you choose.

The main lesson seems to be that it can be hard to tell whether a well-formed
mathematical statement actually defines a mathematical object. This is
elaborated on in the next paradox.

Berry's Paradox

I don't know of a fully satisfying explanation of this paradox. (See Chaitin's
note for who Berry was.) The English version goes:

"What is the smallest positive integer not specifiable in 50 words?"

Suppose that number is n. There are only finitely many sentences of
50 words or less, and each specifies at most one number, so infinitely
many unspecifiable numbers remain. Thus n, the least of them, exists.
But then n is specified by the sentence above, which contains 11 words.
11 < 50. Oops.

Maybe we are too loose in our definitions, and need more rigor.

Sure enough, the paradox goes away if we require "specified" to mean
"computed by a Turing machine". n still exists, but now we don't have any
way to compute it. (For example, the obvious program which tries all TMs
smaller than "size 50" won't be able to deal with ones which don't halt.)
So the paradox disappears, and (if you're Chaitin) a theorem appears in
its place.
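To see why computability dissolves the paradox, here is a toy sketch of my own (not Chaitin's construction) in which "specifiable in k characters" means "is the value of an arithmetic expression of at most k characters". Every such expression halts when evaluated, so the "Berry number" is genuinely computable and no contradiction arises:

```python
import itertools

# Toy Berry: an integer is "specifiable in k characters" if some
# arithmetic expression over CHARS, of length <= k, evaluates to it.
# Every expression evaluation terminates, so -- unlike with general
# Turing machines -- the search below is an honest algorithm.
CHARS = "0123456789+*"

def specifiable(k):
    """All integers denoted by well-formed expressions of length <= k."""
    values = set()
    for length in range(1, k + 1):
        for chars in itertools.product(CHARS, repeat=length):
            try:
                v = eval("".join(chars))
                if isinstance(v, int):
                    values.add(v)
            except SyntaxError:
                pass  # malformed expressions specify nothing
    return values

def berry(k):
    """Smallest non-negative integer not specifiable in k characters."""
    values = specifiable(k)
    n = 0
    while n in values:
        n += 1
    return n
```

Note where the paradox goes: berry(k) is itself computed by a short program, but that program is not an arithmetic expression over CHARS, so the language of descriptions and the language doing the describing stay separate.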

But most mathematicians believe in at least some nonconstructive things
-- things which can be talked about with for-alls and there-exists and
so forth as in second order logic, say with axioms for arithmetic (and
for convenience also alphanumeric strings). Let's formalize Berry's paradox
in that context.

This version goes

"n = min {m in N s.t. not-exists s s.t. S(s,m) and |s| < 2^50}"

where S is a predicate for "formula s specifies integer m"; more precisely,
S says there-exists-uniquely m satisfying s. s is an alphanumeric string,
and S has to translate that string into logical requirements on m -- tedious
but I believe quite straightforward. But the size of S is presumably less
than 2^50, and as above we can prove that the statement above "n = min
..." is uniquely satisfied by some n. So n can't be in the set of which
it is the least member. Bummer.

My humble and naive opinion is that set theory is a can of worms. Can
we found mathematical logic not on set theory, but on a much more concrete
theory of algorithms and string manipulation? I once thought that this
would fix things up, but alas, the following paradox involves essentially
no set theory at all.

Winfree's Paradox

I'm probably not the first to stumble across this puzzle, but I haven't
yet seen any other reference to it. It seems to be related to Loeb's Theorem
and Godel's second incompleteness theorem. It still troubles me.

I stumbled across this paradox years ago when I was thinking about classes
of algorithms... primitive recursive, general recursive, total recursive...
Then I asked, "Is the set {Turing machines which provably halt on
every input} enumerable or non-enumerable?" Let us here fix the formal
system to be F, which happens to be 'Erik's Favorite and most Fulfilling
Formal Theory'.

First I'll give you the intuitive sketch (in the form of a paradox),
and then I'll be a little more careful and identify what I think is the
faulty reasoning, and then mention the conclusions & questions I came
to. Here goes:

Consider the set of provably halting Turing machines, that is, Turing
machines for which you can prove that they always halt, no matter what
the input is.

I will now argue that this set is both enumerable (there is an algorithm
that lists it, as with the integers) and non-enumerable (there is no such
algorithm, as with the real numbers).

(1) We can enumerate all provably halting Turing machines as follows: Enumerate
all syntactically correct, logically valid proofs (e.g. in alphabetical
order, increasing length). It is straightforward to check that each step
of a putative proof is valid. Now, if a proof ends in a statement which
matches "Turing machine M always halts" then output M; in any case, continue.
Thus we have demonstrated an algorithm which enumerates all provably halting
Turing machines (in some order).
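The enumeration just described can be sketched concretely. Everything below is a toy: the checker accepts a hard-coded set of "valid proofs" in place of a real proof verifier for F, but the shape of the algorithm -- generate candidate proofs in order, mechanically check each, output the machine named in any valid halting proof -- is the same:

```python
import itertools

# Toy stand-in for a real proof checker: a "proof" is a string over a
# tiny alphabet, and the only valid proofs are the hard-coded strings
# below, each ending in the conclusion "machine X always halts",
# abbreviated "h:X".  A real checker would instead verify every
# inference step of a candidate proof, which is just as mechanical.
VALID_PROOFS = {"h:A", "h:C"}

def check(proof):
    return proof in VALID_PROOFS

def provably_halting(max_len):
    """Yield machine names in proof-enumeration order."""
    alphabet = "h:AC"
    for length in range(1, max_len + 1):
        for chars in itertools.product(alphabet, repeat=length):
            proof = "".join(chars)
            if check(proof):
                yield proof.split(":", 1)[1]
```

The enumeration order is "alphabetical, increasing length" exactly as in the text; the only fiction is the hard-coded checker.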

(2) Suppose we have an algorithm E which enumerates provably halting Turing
machines (e.g. the one above). We make a Turing machine P which does the
following on input n: runs E until it outputs the nth provably halting
Turing machine M; runs M on input n until it halts (we know it must halt
-- E just found a proof for it); adds 1 to the output of M; and halts.
So, note two things: First, P is different from every single Turing machine
enumerated by E, because for input n they give different results. Second,
by our construction P always halts and therefore P should be on the list
of provably halting Turing machines. These contradict, so our assumption
that E exists must be wrong.
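The diagonal construction of P can likewise be sketched with a toy stand-in for E -- here just three total Python functions playing the role of Turing machines that provably halt on every input:

```python
# Toy diagonalization: E stands in for the enumeration of provably
# halting machines, with ordinary (always-terminating) Python
# functions playing the role of the enumerated Turing machines.
def E():
    yield lambda n: 0        # "machine" 0: constant zero
    yield lambda n: n        # "machine" 1: identity
    yield lambda n: n * n    # "machine" 2: squaring

def P(n):
    """Run the nth machine E enumerates on input n, and add 1."""
    for i, M in enumerate(E()):
        if i == n:
            return M(n) + 1
```

By construction P(i) differs from the ith enumerated machine at input i, which is exactly how P escapes the enumeration.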

Well, this is a paradox so there must be a bug. Where is it?

Let's start by being more careful and explicit. First of all, what you
can prove depends upon what assumptions you make and what you accept as
valid reasoning -- in short, your formal system. So, let's agree to use
your favorite formal system F. (Most mathematicians, myself excluded, would
choose Zermelo-Fraenkel set theory, I guess. But a formal theory just of
Turing Machines and whether or not they halt will suffice.)

Having your formal system explicitly written out allows us to write
an algorithm for checking proofs. Thus (1) goes through fine. The problem
is in (2). We can make P just fine, but let's be more careful answering
"does it always halt?" and "can we prove that it always halts?" First observe
that if you're a loser and your favorite formal system contains the axiom
"0 == 1" (or is otherwise inconsistent), then every Turing machine can
be proved to halt (whether or not it does). Thus P will sometimes try to
run a program which never halts, so P won't halt either. Great: there's
no paradox if F is inconsistent. Progress.

Now maybe you aren't a loser, and you have every reason to believe that
your favorite formal system F is sound and consistent. Fine, let's assume
F is sound and consistent -- and thus if you can prove that a Turing machine
always halts, it always will. It's what we believe anyway (most of us,
at least). So, accepting this assumption, then yes, P will always halt.
And therefore, it follows also that P really is different from all the
M enumerated by E. Next question: can we _prove_ that P always halts?

This, exactly, is the rub. For a good F, you won't be able to prove
(within F!) that P always halts. But I did notice that P provably halts
if F can prove the theorem, or has the axiom, that "if 'S' is provable,
then S is true". Formally, it becomes difficult (but possible) to even
phrase this theorem, but we can work with something simpler, such as "if
proof_checker(proof,'TM m halts') == 'valid', then m halts". If F has this
theorem, then the paradox hits in full force.

Conclusion and interpretation: No consistent and
sufficiently strong formal system can
prove its own correctness. This is not a new result -- it's essentially
Loeb's Theorem (a variant of Godel's second incompleteness theorem) in
the context of rigorous mathematical logic: "If you can show that (if you
can show it then it's true) then you can show anything." I like to think
of it as an analysis of religious (or academic) fanatics: If you don't
accept the possibility that your way of thinking is wrong, then you are,
with certainty, wrong.
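The slogan above has a standard one-line form in provability-logic notation, writing \Box S for "S is provable in F":

```latex
% Loeb's Theorem as a schema:
\Box(\Box S \to S) \;\to\; \Box S
% Taking S to be a contradiction (\bot) recovers Godel's second
% incompleteness theorem: \Box(\neg\Box\bot) \to \Box\bot, i.e. if F
% proves its own consistency, then F is inconsistent.
```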

A few more comments. This all means we can explicitly write down a very
concrete theorem, "P always halts", which must be true of our formal system
if it's worth its salt. But this theorem is unprovable (in F). Now, since
we just agreed that we believe "P always halts" as much as we believe in
F in the first place, we must also believe equally in the new formal system
F' = F union "P always halts". If we apply our "paradox" to F', we get
a new program P' which also always halts (or at least, we believe this
as much as we believe in F).

And it's turtles all the way down.

What bothers me is that it seems to me, intuitively only, that accepting
the formal system F is tantamount to accepting F' and F'' and F''' and
all the rest. And yet we can't say "Here: I have encapsulated all my mathematical
beliefs in this here formal system F"... there always seems to be another
belief you hold, which wasn't a part of F.

That you really do hold this belief is indicated by a fanciful gambling
argument. I present to you a TM T, and I tell you I'm going to determine
its input by coin tosses. I've got a really _smokin_ fast quantum-DNA-chaotic-spectral-manifold-catastrophe-theoretic
universal Turing machine simulator, which takes half as much time to simulate
every successive step of the TM. It has a little green light, which turns
red when the TM halts. I'm taking bets: will T halt on the next input I
give it? What odds would you accept? $500 to 1? You can examine T if you
wish. Suppose you find a proof that T always halts. Now what odds will
you take? I see no reason to wimp out on any odds -- that $1 is yours!
OK, now the next TM T, upon examination, you identify as exactly P, as
we discussed above (it uses the same formal system you used in your proof
to identify the nth provably halting TM, and runs it on input n, and adds
1). What odds will you take? I see no reason to wimp out on any odds --
that $1 is yours! But there is no proof that P always halts within the
original theory... Hence we have not (in F) fully formalized the mathematical
reasoning that you use in practice to make absolute claims.

In other words, I seem to have convinced myself that, if I believe in
the axioms and mechanisms of the formal system F, then I also believe that
P must halt for all inputs. To me, this seems to be what should constitute
"proof". But if we can't prove this in the formal system, then the formal
system does not capture the reasoning that I used to draw my conclusion
about P (which is a very specific statement about a very specific Turing
machine). I would be very disappointed if no formal system can possibly
codify my mathematical thinking, because this would seem to imply that
my own beliefs do not rest on a finite foundation -- if to believe "P always
halts" requires a new "act of faith" (or axiom), and on and on...

(A robot mathematician could surely follow the arguments above; and
yet its beliefs demonstrably rest on a finite foundation. Is the robot
necessarily inconsistent? Or must it necessarily use a different logic
from the one we used here?)

The only way out seems to be to say that I actually don't necessarily
believe the results of formal system F. And thus using F is more a useful
engineering principle than a way to Truth (be it definable or knowable
or not). Either that, or my mathematical beliefs are a different kind
of formal system than that which is commonly studied in mathematics.

So, is math broken? For sure, it's weird.

A few comments on self-reference

All the paradoxes above seem to involve self-reference. Is this the culprit?
Maybe if we could just detect and declare illegal all statements which
involve self-reference, we'd be fine?

To me, this seems draconian, and misses the points that (1) presumably
we're already fine with formal system F, but it omits consideration of
some seemingly meaningful beliefs, and (2) omitting talk of self-referential
objects also makes your system incomplete in coverage, since there really
are self-referential objects.

To wit: It seems to me that self-reference is a real phenomenon, and
thus must be included in any system of logic buff enough to handle real
phenomena. For example, I can easily write an assembly language program
which prints itself. That's pretty well-defined & real. Or (less well
defined): I can study (perhaps not successfully or completely) how my mind
works.
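The self-printing program mentioned above is easy to exhibit -- here is a sketch in Python rather than assembly, using the standard quine construction, where a string holds a template for the whole program and is printed through itself:

```python
# A self-printing program (quine): s is a template for the entire
# source, and "s % s" substitutes the repr of s into the template,
# reproducing the two lines below exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints its own two-line source; feeding that output back in as a program prints the same text again, a fixed point of execution.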

A system of logic can easily be written down which "knows itself", just
as the assembly-language program can dis-assemble itself. (BTW, I don't
think that the "standard" logic systems found in math books can do this.)

It seems to me to be important to understand what's OK to do in a logic
which knows itself. Clearly, not everything that we're used to doing is
OK now -- as shown by the paradoxes.