Still, his novel and series The Hitchhiker’s Guide to the Galaxy begins with the impending destruction of the Earth, which goes ahead 5 minutes too soon. The remainder is post-apocalyptic. It is also pre-apocalyptic, because there are multiple Earths that face destruction at different times. At least having multiple days of prophesied doom is something we’ve recently been dealing with.

Today—the last time we can use this word?—we wish to cover real apocalypses in mathematical and scientific theories.

We have already blogged about what would happen to complexity theory if $P = NP$ were true and proved. As we said, “Most of the 573 pages of [the] Arora-Barak [textbook] would be gone…” Well, this is still hypothetical. Now we will look at cases in the past where whole theories were blown up by a surprise result.

This is different from theories going out of fashion and dying out, even if it was from internal causes. Likewise we don’t consider the fadeouts of past civilizations to be catastrophes, only ones destroyed by things like volcanoes. Ironically, the branch of mathematics called catastrophe theory itself is said to be one of the fadeouts. As mathematical historian David Aubin wrote in “Chapter III: Catastrophes” of his 1998 Princeton PhD thesis:

Catastrophe Theory is dead. Today very, very few scientists identify themselves as ‘catastrophists’; the theory has no institutional basis, department, institute, or journal totally or even partly devoted to it. But do mathematics die?

He goes on to cite an article by Charles Fisher that proclaimed the death of Invariant Theory. To be sure, theories like that sometimes get revived. But first a word about the Mayans and ultimate catastrophe.

Baktun The Future

All the fuss is about today’s ticking over of a Mayan unit of time called a baktun, or more properly b’ak’tun. It’s not even a once-in-5,000-years event like everyone says, but rather once-in-144,000 days, making just over 394 years. The point is there have been 13 of them since the inception of the Mayan creation date according to their “Long Count” calendar, making about 5,125 years in all. So the 14th b’ak’tun starts today—big whoop. The buzz comes from many Mayan inscriptions seeming to max out at 13, but others go as far as 19, and it is known that they counted by 20. Hence the real epoch will be when the 20th and final baktun ticks over to initiate the next piktun. That will be on October 13, 4772. If human civilization lasts that long, that is.
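The Long Count arithmetic is easy to verify; here is a minimal sketch in Python, assuming the standard correlation that places the 13th-baktun rollover at December 21, 2012:

```python
from datetime import date, timedelta

BAKTUN_DAYS = 144_000   # one b'ak'tun = 20 k'atuns of 7,200 days each
PIKTUN_BAKTUNS = 20     # the Maya counted in base 20

# One baktun in mean Gregorian years: just over 394
print(BAKTUN_DAYS / 365.2425)

# 13 baktuns since the Long Count creation date: about 5,125 years
print(13 * BAKTUN_DAYS / 365.2425)

# Python's date type cannot represent BCE years, so rather than start
# from the creation date itself, count forward from today's rollover:
# 7 more baktuns until the 20th completes and the piktun ticks over.
rollover_13 = date(2012, 12, 21)
piktun = rollover_13 + timedelta(days=(PIKTUN_BAKTUNS - 13) * BAKTUN_DAYS)
print(piktun)  # 4772-10-13
```
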

This still has us thinking, what if Earth really were suddenly blown up by Vogons or by Vegans or by a space rock a little bigger than last week’s? What would be left? Anything? The reason is that according to a recently-agreed principle in fundamental physical theory, the answer should be everything.

The principle, as enunciated in small capitals by popular science author Charles Seife in his 2007 book Decoding the Universe, states:

Information can be neither created nor destroyed.

As we mentioned last March, the agreement was symbolized by Stephen Hawking conceding a bet to John Preskill, who has graced these pages. Hawking underscored the point by making it a main part of the plot of a children’s novel written with his daughter Lucy. The father falls into a black hole, but is resurrected by a computer able to piece back all the information because it was all recoverable. At least in theory.

Hence even if Earth really is swallowed up later today, or if we disappear—leaving all our stored information and literary artefacts to decay within a 50,000-year time-span estimated for The History Channel’s series Life After People—all the information would in principle still exist. Is this comforting?

Perhaps not. It could be that while all natural processes conserve information, the more violent ones might embody the computation of a one-way function. It then becomes an issue of complexity theory whether the output of that function could be reverted to its pre-apocalyptic state.
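To illustrate the distinction: a map can lose no information, in the sense that each output has a unique pre-image in its domain, and still be practically irreversible. A toy sketch, with SHA-256 restricted to a small domain standing in for a conjectured one-way function; this is an analogy only, not a claim about physical dynamics:

```python
import hashlib

def forward(x: int) -> str:
    """Easy direction: hash a 4-byte state."""
    return hashlib.sha256(x.to_bytes(4, "big")).hexdigest()

def invert_by_search(target: str, bound: int) -> int:
    """The only generic way back is brute-force search over pre-images,
    which is exponential in the state size, even though the information
    needed to recover x is fully present in the target."""
    for x in range(bound):
        if forward(x) == target:
            return x
    raise ValueError("no pre-image found below bound")

y = forward(123456)
print(invert_by_search(y, 2**20))  # recovers 123456, but only by searching
```
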

Apocalypses In Math

Here are a few examples from mathematics of “extinction” events: usually the extinction was of a theory or whole approach to math.

Bertrand Russell and Gottlob Frege:

Frege was just finishing his tome on logic when the letter from Russell arrived showing that Frege’s system was inconsistent. The letter basically noticed that the set

$R = \{\, x : x \notin x \,\}$

was not well-defined. This destroyed the whole book that Frege had worked so hard on for years. Frege’s reaction was recorded in his revised preface:

A scientist can hardly meet with anything more undesirable than to have the foundations give way just as the work is finished. I was put in this position by a letter from Mr. Bertrand Russell when the work was nearly through the press.

A study in being low-key as we might say today.
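For the record, the contradiction takes only one line: instantiating the defining condition of Russell’s set at the set itself gives

```latex
R = \{\, x : x \notin x \,\}
\quad\Longrightarrow\quad
(R \in R) \iff (R \notin R).
```

No consistent system can prove such a biconditional, which is why Frege’s unrestricted comprehension principle (Basic Law V) had to go.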

David Hilbert and Paul Gordan:

Gordan was known as “the king of invariant theory.” His most famous result is that the ring of invariants of binary forms of fixed degree is finitely generated. Hilbert proved his famous theorem extending this from binary forms to forms in any number of variables, and replaced the horribly complex constructive arguments with a beautiful existence proof. To quote Wikipedia:

[This] almost put an end to classical invariant theory for several decades, though the classical epoch in the subject continued to the final publications of Alfred Young, more than 50 years later.

Gordan was less low-key than Frege, since his comment on Hilbert’s brilliant work was:

“This is not mathematics; this is theology.”

Oh well.

Kurt Gödel and David Hilbert:

Hilbert, again, wanted to create a formal foundation for all of mathematics based on an axiomatic approach. He had already done this for geometry in his famous work of 1899. Now, Euclid did have an axiomatic system thousands of years earlier, but it was not really formal. Some proofs relied on looking at diagrams and other “obvious” facts, so Hilbert added extra notions that put geometry on a complete formal basis. For example, Hilbert added the notion of betweenness of three points: point $B$ is between $A$ and $C$.

Of course Gödel proved via his famous Incompleteness Theorem that what Hilbert could do for geometry was impossible to do for number theory.

Stephen Kleene and Barkley Rosser:

Once Alonzo Church’s lambda calculus and Haskell Curry’s combinators were discovered in the 1930s, it seemed natural to build systems of logic around them. That was the original intent of both Curry and Church. It was therefore a shock when Kleene and Rosser, as students of Church, showed at a stroke that both systems were inconsistent. The reason is that the theories’ standard of “well-defined” claimed too extensive a reach, as with Frege’s formalization of the notion of “set.” It essentially allowed defining an exhaustive countable list of well-defined real numbers, for which the Cantor diagonal number was well-defined within the system, a contradiction. Ken likens this paradox phenomenon to the collapse of the Tower of Babel.
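The engine of the Kleene–Rosser argument is Cantor’s diagonal construction. A sketch of the diagonal step in Python, where a hard-coded enumeration stands in for the system’s purported list of all well-defined reals:

```python
def diagonal(enumeration, n):
    """Return n digits of a real number that differs from the
    k-th enumerated real in its k-th decimal digit."""
    digits = []
    for k in range(n):
        d = enumeration(k, k)              # k-th digit of the k-th real
        digits.append(5 if d != 5 else 6)  # pick any digit != d
    return "0." + "".join(map(str, digits))

# Toy enumeration: the k-th real has constant digit (k mod 10)
enum = lambda k, j: k % 10
print(diagonal(enum, 5))  # 0.55555 -- differs from real k at digit k
```

If the enumeration is itself “well-defined” inside the system, so is the diagonal number, yet by construction it cannot appear anywhere on the list.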

Riemann’s Non-Conjecture Refuted:

At the very end of his famous 1859 paper which included the Riemann Hypothesis, Bernhard Riemann made a carefully-worded statement about the relationship between the prime-counting function $\pi(x)$ and the logarithmic integrals $\mathrm{li}(x)$ and $\mathrm{Li}(x)$:

Indeed, in the comparison of $\mathrm{Li}(x)$ with the number of prime numbers less than $x$, undertaken by Gauss and Goldschmidt and carried through up to $x =$ three million, this number has shown itself out to be, in the first hundred thousand, always less than $\mathrm{Li}(x)$; in fact the difference grows, with many fluctuations, gradually with $x$.

Further calculations were consistent with the inequality $\pi(x) < \mathrm{Li}(x)$ holding in general, until in 1914, John Littlewood refuted this not just once, but infinitely often. That is, he did not find a counterexample by computation, but rather proved that $\pi(x) - \mathrm{Li}(x)$ must change sign infinitely often. In fact, the first number giving a sign flip is still unknown, though it must be below $1.4 \times 10^{316}$.

Although this is included on Wikipedia’s short list of disproved mathematical ideas, its significance is not the inequality hypothesis itself, but the fallible nature of numerical evidence. Michael Rubinstein and Peter Sarnak showed an opposite surprise: the set of integers $x$ giving a negative sign has non-vanishing logarithmic density, in fact about 0.00000026, so it is disturbing that no such $x$ is within the current range of calculation.
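The numerical pattern that fooled Gauss and Goldschmidt is easy to reproduce; a sketch using a prime sieve for $\pi(x)$ and a midpoint-rule integral for $\mathrm{Li}(x)$:

```python
from math import log

def primes_upto(n):
    """Sieve of Eratosthenes; sieve[k] == 1 iff k is prime."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(range(p*p, n + 1, p)))
    return sieve

def Li(x, steps=100_000):
    """Offset logarithmic integral: integral of dt/log t from 2 to x,
    by the midpoint rule (accurate enough for this comparison)."""
    h = (x - 2) / steps
    return h * sum(1 / log(2 + (i + 0.5) * h) for i in range(steps))

N = 100_000
sieve = primes_upto(N)
for x in (1_000, 10_000, 100_000):
    pi_x = sum(sieve[:x + 1])
    print(x, pi_x, round(Li(x), 1), pi_x < Li(x))  # pi(x) < Li(x) throughout
```

Of course, this is exactly the kind of evidence Littlewood showed to be misleading: the first crossover simply lies far beyond any range we can sieve.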

Mertens Conjecture Refuted:

The conjecture, which was first made in 1885 by Thomas Stieltjes, not Franz Mertens, states that the sum $M(n)$ of the first $n$ values of the Möbius function has absolute value at most $\sqrt{n}$. That is,

$\displaystyle |M(n)| = \Big|\sum_{k=1}^{n} \mu(k)\Big| \le \sqrt{n}.$

Despite the fact that all computer calculations still support this, Andrew Odlyzko and Herman te Riele disproved it theoretically in 1985. At least it has an exponentially bigger leeway than the previous one: the best known upper bound on a bad $n$ is currently $\exp(1.59 \times 10^{40}).$

Moreover the following weaker statement, which Stieltjes thought he had proved, is still open:

$M(n) = O(\sqrt{n}).$

The reason this is portentous is that the following slight further weakening,

$M(n) = O(n^{1/2 + \epsilon})$ for every $\epsilon > 0$,

is actually equivalent to the Riemann Hypothesis.
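One can see why the numerical evidence was so seductive; a quick sketch that sieves the Möbius function and tracks the worst ratio $|M(n)|/\sqrt{n}$:

```python
def mobius_upto(n):
    """Compute mu(1..n) with a linear sieve."""
    mu = [1] * (n + 1)
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p, so mu vanishes
                break
            mu[i * p] = -mu[i]     # one more distinct prime factor

    return mu

N = 10_000
mu = mobius_upto(N)
M = 0
worst = 0.0
for n in range(1, N + 1):
    M += mu[n]
    worst = max(worst, abs(M) / n**0.5)
print(worst)  # 1.0, attained only at n = 1; the bound is never exceeded here
```

In this range the conjectured bound looks utterly safe, which is precisely the trap: the first violation, wherever it is, lies astronomically beyond computation.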

The failure of Riemann would have a definite apocalyptic effect: it would wipe away all the many papers that assume it. It is not clear whether those papers could even be saved by the kind of “relativization” we have in complexity theory, whereby results obtained assuming $P \neq NP$ and so on may still be valid relative to oracle languages $A$ such that $P^A \neq NP^A$.

Our Scientific Neighbors’ Houses

Still, the loss of papers assuming Riemann would be nothing compared to what would happen in physics if supersymmetry were disproved, as its failure could take all of string theory down with it. The Standard Model of particle physics seems also to have survived the problems that the absence of the Higgs boson would have caused, although issues with the Higgs are still causing apocalyptic reactions from some physicists. At least the news today is that other bosons are behaving well according to Scott Aaronson and Alexander Arkhipov’s protocol, which is related to our kind of hierarchy collapse.

Perhaps we in computer science theory and mathematics are fortunate to experience less peril. Even so, we are left with this quotation attributed to Hilbert by Howard Eves:

One can measure the importance of a scientific work by the number of earlier publications rendered superfluous by it.

Open Problems

Do you have other favorite examples of results in the mathematical and general sciences that caused the collapse of theories?

Dick Lipton: You know that all of us know that we are left with a single fact about theory, which is that there is no theory. You mention the Kleene-Rosser paradox. You have already examined the proof that the set of all KR-paradoxes is decidable iff it is undecidable. ZFC is inconsistent iff the lambda-calculus is inconsistent. The objects defined by the latter are all objects of the former.

I’m sure that you as well as others (now) believe in this result, but no one is ready to admit it, except in anonymous comments.

Frege’s system is not really “destroyed”; the reality is similar to Cantor’s naive set theory. I remember reading an article by George Boolos in JSL that if we replace Axiom V with Hume’s principle the theory can be saved. (Huh! this is on Wikipedia: http://en.wikipedia.org/wiki/George_Boolos I should donate more often to them).

LOL … the Seattle Seahawks are beating the San Francisco 49ers so overwhelmingly that it’s more fun to read Raymond Streater’s on-line list of Lost Causes in Theoretical Physics (he is the “Streater” of the celebrated “Streater and Wightman” field theory textbook PCT, Spin and Statistics, and All That).

You can obviously reduce the halting problem to the Kleene-Rosser paradox recognition problem. Thus, KRP is undecidable. Then you can write a trivial program to decide it. Hence, ZFC is inconsistent. This is the proof that made Richard Karp say it is far removed from his area of coverage, via Edward Blum as JCSS editor. Stephen Cook and Lance: Albert Meyer, on behalf of them as EiC of Information & Computation, said: “Far-fetched and incomprehensible.”

Dick Lipton: Why the biggest names in theory had to run away. I understand it is harsh on them, as they find meaning in life through mathematics. Now, mathematics is meaningless, so they have meaningless lives. Is it so? Or something else? Why do they have to take it like this?

I’ve been intrigued by the repeating emergence / hype / backlash cycle in areas of nonlinear mathematics, particularly with connections to biology. Examples include catastrophe theory, fractals, chaos theory, general systems theory, L-systems, and some of the more heady parts of cybernetics. Does anyone know if a historian of science has looked at some subset of these in a unifying analysis?

David D Lewis asks “I’ve been intrigued by the repeating emergence / hype / backlash cycle in areas of nonlinear mathematics, particularly with connections to biology … Does anyone know if a historian of science has looked at some subset of these in a unifying analysis?”

David, your question has many good answers, and please let me commend the following articles in particular.

When Gil Kalai posed his MathOverflow question “What is an integrable system?“, Gil quoted from Nigel Hitchin’s admirable first chapter to the book Integrable Systems: Twistors, Loop Groups, and Riemann Surfaces (1999)

Introduction (by Nigel Hitchin) Integrable systems, what are they? It’s not easy to answer precisely. The question can occupy a whole book (Zakharov 1991), or be dismissed as Louis Armstrong is reputed to have done once when asked what jazz was—‘If you gotta ask, you’ll never know!’

If we steer a course between these two extremes, we can say that integrability of a system of differential equations should manifest itself through some generally recognizable features:

• the existence of many conserved quantities;
• the presence of algebraic geometry;
• the ability to give explicit solutions.

The study of integrable systems is not just about cunning methods of solving isolated special equations. Each equation is slightly different, indeed there are many of them: a trawl through a couple of standard books on the subject gives at least the following list of equations which are seriously considered to be related to integrability: [list of 49 named equations follows].

… Another task of the mathematician, apart from solving special equations, is to put some order into a universe like this. Is there some overarching structure of which these are special cases which explains integrability?

For whatever reason, few libraries contain the book in which Hitchin’s essay appears (it languishes unreviewed at rank #3,555,481 in Amazon’s book sales). Fortunately, Amazon, Google, and Oxford Press provide free on-line previews that in aggregate encompass the entirety of Hitchin’s essay.

What David’s question calls the cycle of “emergence/hype/backlash” is thoroughly surveyed in a book-length Physics Report by Martyushev and Seleznev (2006):

The notions of entropy and its production in equilibrium and nonequilibrium processes not only form the basis of modern thermodynamics and statistical physics, but have always been at the core of various ideological discussions concerned with the evolution of the world, the course of time, etc. These issues were raised by many outstanding scientists, including Clausius, Boltzmann, Gibbs and Onsager. As a result, today we have thousands of books, reviews and papers dedicated to properties of the entropy in different systems. The present review deals with the entropy production behavior in nonequilibrium processes.

This topic is not new. What was the impetus to this study?

[…] Two essentially extreme opinions have been formed in the literature. Some scientists glorify the principle and think it is capable of describing various nonequilibrium processes to a certain extent. Other researchers, who observed weak points of the principle and unceasing efforts by Prigogine and his progeny to generalize it, are very skeptical about the possibility to formulate universal entropy principles, which would govern so diverse and dissimilar nonequilibrium processes.

For a survey of more recent developments (there are many!) in this ongoing cycle of discourse relating to entropy, information, biology, cognition, and nonlinear dynamics, see Martyushev and Seleznev’s (shorter, and free!) recent preprint “Fluctuations, trajectory entropy, and Ziegler’s maximum entropy production” (arXiv:1112.2848).

Note: Although the word “quantum” appears nowhere in arXiv:1112.2848 … it arguably should … because anywhere that fundamental questions arise that are associated to “fluctuations”, it is a safe bet that fundamental questions associated to “quantum dynamics” are nearby.

All of these references are at-hand because they bear upon (what seems to our UW QSE Group to be) the key issue of the Kalai/Harrow debate, namely the question: How does QM work — mathematically, physically, and thermodynamically — and what great 21st century objectives can we practically accomplish via our growing understanding of it?

For us the main virtue of modern-day QM/QIT/QSE research in general — and discourse like GLL’s Harrow/Kalai debate in particular — is the wonderful answers that it suggests to this strategically vital question, whose answer (arguably) will very largely determine the technological course of the 21st century.

There’s a form feed character between “out to be, in the ” and “first hundred thousand” that’s causing the RSS feed to fail in my feed reader, Liferea. It’s an XML validation error, so this might affect other feed readers as well.