Opinion 36:
Don't Ask: What Can The Computer do for ME?, But
Rather: What CAN I do for the COMPUTER?

By Doron Zeilberger

Written: March 5, 1999

Rabbi Levi Ben Gerson, in his pre-algebra text
Sefer Ma'asei Khosev (1321), had about fifty theorems, complete
with rigorous proofs. Nowadays, we no longer call them
theorems, but rather (routine) algebraic identities.
For example, proving (a+b)*c=a*c+b*c took him about half a page,
while proving (a-b)*c+a*(b-c)=b*(a-c) took a page and a half,
and
proving a*(b*c*d)=d*(a*b*c)
took him one page.
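To see just how routine these identities have become, here is a sketch in Python with SymPy (standing in for Maple or Mathematica): each identity that cost Levi Ben Gerson a page or more is verified in one line.

```python
# Verifying Levi Ben Gerson's "theorems" with a modern computer-algebra
# system (SymPy here, as a stand-in for Maple/Mathematica).
from sympy import symbols, expand

a, b, c, d = symbols('a b c d')

# (a+b)*c = a*c + b*c  -- about half a page for Levi
assert expand((a + b) * c - (a * c + b * c)) == 0

# (a-b)*c + a*(b-c) = b*(a-c)  -- a page and a half for Levi
assert expand((a - b) * c + a * (b - c) - b * (a - c)) == 0

# a*(b*c*d) = d*(a*b*c)  -- one page for Levi
assert expand(a * (b * c * d) - d * (a * b * c)) == 0
```

Each `expand` call reduces the difference of the two sides to the normal form 0, which is the whole proof.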

The reason that it took him so long is that, while he already
had the algebraic concepts, he was still too hung up on
words, and while he used symbols (denoted by dotted Hebrew letters),
he did not quite utilize, systematically, the calculus of algebraic
identities. The reason was that he was still in a pre-algebra
frame of mind, and it was more than three hundred years later (even after
Cardano) that Viete probably started modern `high-school' algebra.

So Levi Ben Gerson had an inkling of the algebraic revolution to come, but
still did not go all the way, because we humans are
creatures of habit, and he liked proving these deep theorems
so much that it did not occur to him to streamline them,
and hence he kept repeating the same old arguments again and again
in long-winded natural language.

Believe it or not, our current proofs are just as long-winded
and repetitive, since we use an informal language, a
minor variation on our everyday speech.

We are now on the brink of a much more significant revolution in
mathematics, not of algebra, but of COMPUTER ALGEBRA.
All our current theorems, FLT included, will soon be
considered trivial, in the same way that Levi Ben Gerson's
theorems and `mophetim' (he used the word MOPHET
to designate proof; the literal meaning of mophet is
`perfect', `divine lesson', and sometimes even `miracle'),
are considered trivial today. I have a meta-proof that
FLT is trivial. After all, a mere human (even though
a very talented one, as far as humans go), with a tiny RAM,
disk space, and very unreliable circuitry, did it.
So any theorem that a human can prove is, ipso facto, utterly trivial.
(Of course, this was already known to Richard Feynman, who stated
the theorem (Surely You're Joking, Mr. Feynman!, p. 70):
`mathematicians can prove only trivial theorems, because every theorem
that is proved is trivial'.)

Theorems that only computers can prove, like the Four Color
Theorem, Kepler's Conjecture, and Conway's Lost Cosmological Theorem,
are also trivial, but not quite as
trivial, since, after all, computers are only a few orders of magnitude
better and faster than humans.
In fact, if something is provable (even by a computer),
it is a priori trivial (on complexity-theory
grounds). So Erdos's BOOK may exist, but all the proofs there,
though elegant, are really trivial (since they are short).
So for non-trivial stuff we can only have, at best,
semi-rigorous proofs, and sometimes just empirical evidence.

Since everything that we can prove today will soon be provable,
faster and better,
by computers, it is a waste of time to keep proving in the same
old way, whether with only pencil and paper, or even by doing
`computer-assisted' proofs that regard the computer
as a `pencil with power-steering' (luckily,
George Andrews has come a long way since he made this
unfortunate gross understatement; look at his recent beautiful work,
with Peter Paule and Axel Riese, on automating
MacMahon's Partition Calculus). Very soon all our awkwardly phrased
proofs, in semi-natural language, with its endless redundancy,
will seem just as ludicrous as Levi's half-page statement
of (a+b)c=ac+bc, and his subsequent half-page proof.

We could be much more useful than we are now if,
instead of proving yet another theorem, we started teaching
the computer everything we know, so that it would have a head start.
Of course, eventually
computers will be able to prove everything humans did (and much more!)
ab initio, but if we want to reap the fruits of the
computer revolution as soon as possible, and see the proofs of
the Riemann Hypothesis and the Goldbach conjecture in OUR lifetime,
we better get to work, and transcribe our human mathematical
heritage into Maple, Mathematica, or whatever. Hopefully we
will soon have super-symbolic programming languages,
of higher and higher levels, continuing the sequence: Machine, Assembly, C,
Maple, ... further and further up. This will make the transcription task much
easier. So another worthwhile project is to develop these higher and higher
math systems.

So we can serve our time much better by programming rather
than proving. If you still don't know how to program,
you better get going! And don't worry. If you were smart enough
to earn a Ph.D. in math, you should be able to learn how to program,
once you overcome a possible psychological block.
More important, let's make sure
that our grad students are top-notch programmers, since
very soon, being a good programmer will be a prerequisite
to being a good mathematician.

Once you have learned to PROGRAM (rather than just use) Maple
(or, if you insist, Mathematica, etc.), you should immediately
get to the business of transcribing your math-knowledge into
Maple. You can get a (very crude) prototype by looking
at my own Maple packages, in particular
RENE, my modest effort in stating (and hence proving!)
theorems in Plane Geometry.
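RENE itself is a Maple package; purely as an illustrative stand-in, here is a hedged Python/SymPy sketch of the stating-is-proving idea in plane geometry: once a theorem is stated in free symbolic coordinates, checking it reduces to routine simplification.

```python
# Hypothetical sketch (not RENE): a plane-geometry theorem proved by
# symbolic coordinates.
# Theorem: the diagonals of a parallelogram bisect each other.
from sympy import symbols, simplify

ax, ay, bx, by, dx, dy = symbols('ax ay bx by dx dy')

# Parallelogram ABCD with A, B, D free; C = B + D - A makes ABCD
# a parallelogram (opposite sides are equal as vectors).
A = (ax, ay)
B = (bx, by)
D = (dx, dy)
C = (bx + dx - ax, by + dy - ay)

def midpoint(P, Q):
    # Coordinate-wise average of two symbolic points.
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

M1 = midpoint(A, C)   # midpoint of diagonal AC
M2 = midpoint(B, D)   # midpoint of diagonal BD

# The two midpoints coincide identically in the free coordinates: QED.
assert all(simplify(u - v) == 0 for u, v in zip(M1, M2))
```

Stating the configuration IS the proof: the assertion holds for all values of the free coordinates, so the theorem is established.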
Other noteworthy efforts
are by
Frederic Chyzak (Mgfun and Holonomic),
Christian Krattenthaler (HYP and qHYP),
John Stembridge (SF and Coxeter),
the INRIA gang
(Salvy and Zimmermann's gfun and Automatic Average Case
Analysis), Peter Paule and his RISC gang (WZ-stuff and Omega),
and many others (but still a tiny fraction of all mathematicians).

What if, like me, you are addicted to proving? Don't worry,
you can still do it. I go jogging every day for an hour, even though
I own a car, since jogging is fun, and it keeps my body in shape.
So proving can still be pursued as a very worthy recreation
(it beats watching TV!), and as mental calisthenics, BUT,
PLEASE, not instead of working! The real work of us mathematicians,
from now until, roughly, fifty years from now, when computers won't
need us anymore, is to make the transition from human-centric
math to machine-centric math as smooth and efficient as possible.

If we dawdle, and keep loafing, pretending that `proving'
is real work, we will be doomed never to see non-utterly-trivial
results. Our only hope of seeing the proofs of RH, P!=NP, Goldbach, etc., is
to try to teach our much more reliable, more competent, smarter, and of course
faster, but inexperienced, silicon colleagues what we know, in a language
that they can understand! Once enough edges are established,
we will very soon see a PERCOLATING phase-transition of mathematics
from the UTTERLY TRIVIAL state to the SEMI-TRIVIAL state.

Added April 6, 1999: In addition to being
permanently published here,
I was hoping to have this piece also published in the Notices
of the American Math. Soc., so that non-surfers might benefit from it
too. The Forum section editor, Susan Friedlander, decided to
reject it. I wonder why?
Added April 13, 1999: Javier Thayer reminded me
of the very interesting and promising efforts to
develop (from an AI point of view)
Theorem Provers.
Read
very interesting feedback
from Dror Bar-Natan, Andrej Bauer, Ron Bruck,
Olivier Gerard
(my favorite! You may ALSO view it directly, at
Olivier Gerard's Opinion 006),
Ira Gessel, Roger Howe,
Doug Iannucci and his biologist friend
Richard Hall, Greg Kuperberg (my least favorite!
I got so annoyed that I felt the need to talk back),
Michael Larson, Jaak Peetre, Marko Petkovsek, Simon Plouffe, and
a friend who prefers to stay anonymous.