>To my mind, the ongoing use of the assumption ``factoring is hard'' by
>computer scientists, which Wigderson exposited masterfully in his
>talk, is in the same spirit as how a number theorist might give a talk
>on results that have been proved under the assumption of the Extended
>Riemann Hypothesis. In both cases, the assumption is extremely
>fruitful and seems likely to be true, but its proof presents a well
>known stumbling block, so in the meantime we go ahead and use it.
That sounds good to me, but I think there's a key difference between
the Riemann hypothesis and the assumption that factoring is hard.
The Riemann hypothesis must be true. It plays a beautiful role in the
theory, it seems true numerically for lots of zeros, and most importantly
one can imagine how a proof might go and actually carry this out in
closely analogous circumstances (the Weil conjectures). Factoring, however,
is a completely open problem. Nobody has the faintest idea how one
might prove such a thing (P != NP seems hard enough, and that would be
necessary but nowhere near sufficient for proving factoring is hard),
and there seems to be no theoretical reason why factoring should be hard.
People have made huge progress in factoring algorithms over the last few
decades (giving roughly the same improvement in factoring speed as
the advances in computer speed have given over the same time period),
and the progress will presumably continue; who knows where it will end.
I think factoring has a wildly inflated reputation as a difficult problem,
and that not nearly as much work has been put in on it as one might guess.
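To give a concrete sense of the kind of algorithmic progress meant here,
the following is a minimal sketch (my own illustration, not part of the
original discussion) of Pollard's rho method, one of the classic
improvements over trial division: it finds a factor of n in roughly
n^(1/4) expected steps rather than the n^(1/2) that trial division needs.

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of the composite number n.

    Floyd's cycle-finding variant: iterate x -> x^2 + c (mod n) with
    a "tortoise" x and a "hare" y moving at twice the speed, and take
    gcds of their differences with n until a factor drops out.
    """
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)   # random increment; retry on failure
        d = 1
        while d == 1:
            x = (x * x + c) % n            # tortoise: one step
            y = (y * y + c) % n            # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                         # d == n means this c failed
            return d
```

The later advances mentioned above (the quadratic sieve, the number
field sieve) improve on this again, which is exactly the trend the
paragraph describes.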
I think one reason computer scientists use this assumption anyway
is that it has great practical value. If you show that solving a problem
would let you factor quickly, that tells you little about how hard
the problem will be in a hundred years, but an awful lot about how hard
it is right now (which is of more importance to computer scientists
than to pure mathematicians).
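A standard example of such a reduction (my illustration, not from the
original post): for n = pq, computing Euler's phi function is exactly as
hard as factoring n, because knowing phi(n) = (p-1)(q-1) determines
p + q = n - phi(n) + 1, and then p and q are the roots of a quadratic.

```python
import math

def factor_from_phi(n, phi):
    """Recover the prime factors p, q of n = p*q given phi = (p-1)*(q-1).

    From phi = n - (p + q) + 1 we get s = p + q, and p, q are the
    roots of x^2 - s*x + n = 0.
    """
    s = n - phi + 1              # p + q
    disc = s * s - 4 * n         # discriminant, equals (p - q)^2
    r = math.isqrt(disc)
    assert r * r == disc, "phi is not consistent with n = p*q"
    p, q = (s + r) // 2, (s - r) // 2
    assert p * q == n
    return p, q
```

So any fast method for phi on such n would factor quickly, and by the
reasoning above that tells us phi is hard to compute right now.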
Incidentally, this is similar to how mathematicians sometimes use results
of the form "X implies Y" where Y is a famous conjecture and X is something
one was hoping to prove. For example, when Frey discovered his elliptic
curves (which Wiles eventually used to prove Fermat's Last Theorem), he
was trying to prove a conjecture about bounds on torsion (which Mazur
eventually proved). When Frey realized that these methods seemed to be
leading towards implying FLT, he gave up that approach to the conjecture;
of course, later he realized that this was actually a good way to try to
prove FLT, which typically does not happen in these situations.
Henry Cohn
cohn at math.harvard.edu