Triple Century Post

Godfrey Harold Hardy was an avid fan of cricket, besides being one of the 20th Century’s most influential mathematicians. During cricket season at Oxford and Cambridge, he would work on mathematics in the morning, and then spend a long afternoon watching a match at the university cricket ground.

Today is this blog’s 300th post. In cricket terms this means we have scored a triple century and are still not out. Until recently a score in the 300’s was the highest by any individual batsman in a Test Match. Brian Lara, however, scored 400 not out for the West Indies against England in 2004. This gives us our next objective.

Cricket was invented in England but has gone global. For instance, cricket is the true national sport of India’s 1.2 billion inhabitants, and of the surrounding 355 million in Pakistan, Bangladesh, and Sri Lanka besides. This puts baseball, the subject of our last post, in some perspective.

Test Matches run for five days, with three two-hour sessions per day. Whereas baseball games have nine discrete innings of three outs apiece, and last only a few hours, Test Matches have two innings of ten outs apiece, and thus feel continuous. Drawing on some of Hardy’s work, we discuss whether continuous mathematics may have any vital impact on the P=NP question, even though that question belongs to discrete mathematics.

Hardy and Cricket

Hardy himself was a good cricket batsman in his youth, and played various sports actively until a heart problem in 1939 at age 62. He also needed to be outdoors for sunshine, especially in winter. In his late forties he ranked the accomplishments he most desired to achieve in this order:

Prove the Riemann Hypothesis.

Make a match-winning score of 211 not-out in the last innings at The (Kennington) Oval, which traditionally hosts the last Test of a major series in England.

Find an argument for the non-existence of God that would convince the general public.

Be the first to climb Mount Everest.

Lead a communist unified Britain and Germany.

Assassinate Benito Mussolini.

Note that he did not ask for a triple century. At the time no one had scored one. The record then was 287 at the Sydney Cricket Ground by England’s Tip Foster in 1903, while the previous record had been 211 by Australia’s Billy Murdoch at The Oval in 1884. The number 211 is the first prime after 200, while being not-out (written 211*) would still make it better than Murdoch’s score.

Later the man billed as the “Babe Ruth of cricket,” Sir Donald Bradman, would hold the individual Test batting record by scoring 334 for Australia against England at Leeds in 1930. Hardy’s own reaction to Bradman’s feats, compared to England’s “Master” Sir John Hobbs, was:

Bradman is a whole class above any batsman who has ever lived: if Archimedes, Newton and Gauss remain in the Hobbs class, I have to admit the possibility of a class above them, which I find difficult to imagine. They had better be moved from now on into the Bradman class.

Bradman remained active in cricket administration and promotion and commentating well into the 1990’s. I (Ken) remember this version of a story that has become subject to Internet “phone tag”: In 1992 or 1993 as a guest on a Test telecast he was asked how much he thought he would score on a wicket that looked benign but had batsmen getting themselves out cheaply. He replied, “about 50.” Commentator: “Only 50?” Bradman: “Well, I am eighty-four years old!”

Hardy-Littlewood and Notation

The computer science Donald in the Bradman class is of course Donald Knuth. Knuth credited Hardy and his longtime collaborator John E. Littlewood with introducing the “Big-$\Omega$” asymptotic notation, though with the different meaning that $f = \Omega(g)$ when $f$ is not $o(g)$, i.e. $\limsup_{n \to \infty} f(n)/g(n) > 0$. The difference from the currently accepted meaning

$f = \Omega(g) \iff (\exists c > 0)(\exists n_0)(\forall n \geq n_0)\; f(n) \geq c \cdot g(n)$

is that $f$ could oscillate. For instance, if $f(n) = n(1 + \sin n)$, then $f$ would meet Hardy-Littlewood’s definition of being $\Omega(n)$ but not Knuth’s now-standard one.

One can similarly use trigonometric functions to define $f$ and $g$ for which none of $f = o(g)$, $f = \Theta(g)$, and $g = o(f)$ hold, indeed neither $f = O(g)$ nor $g = O(f)$. This is summarized by saying that “trichotomy fails” for asymptotic comparisons. However, Hardy and Littlewood proved that if one limits attention to real functions defined by arithmetic, exponentiation, and logarithms, then trichotomy does hold. Here is a “modern computer science-y” way of stating their result:
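To see the oscillation concretely, take the pair $f(n) = n^{1 + \sin n}$ and $g(n) = n$ (an illustrative choice of ours, not a pair from Hardy-Littlewood): the ratio $f(n)/g(n) = n^{\sin n}$ swings between arbitrarily large and arbitrarily small values, so neither $f = O(g)$ nor $g = O(f)$.

```python
import math

# f(n) = n^(1 + sin n) oscillates: the exponent sweeps through [0, 2],
# so f/g = n^(sin n) takes both huge and tiny values as n grows.
def f(n):
    return n ** (1 + math.sin(n))

ratios = [f(n) / n for n in range(2, 100000)]
print(max(ratios), min(ratios))  # very large vs. very small
```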

Theorem 1 Let $f$ and $g$ be positive functions of $n$ defined recursively from $n$ in the grammar:

$t ::= n \mid c \mid t + t \mid t \cdot t \mid e^t \mid \log t$

Then either $f = o(g)$, $f = \Theta(g)$, or $g = o(f)$.

This says that almost all of the functions we use to define complexity classes fall into a nice total linear ordering. Thus for these so-called “log-exp functions” or “Hardy-Littlewood” functions, there is no difference between their definition and Knuth’s.
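A quick numeric sanity check (our own illustration): for log-exp pairs the ratio $f(n)/g(n)$ settles toward a single limit in $[0, \infty]$ instead of oscillating.

```python
import math

# Three log-exp comparisons; each ratio tends to a definite limit,
# as trichotomy for Hardy-Littlewood functions guarantees.
r1 = lambda n: math.log(n) ** 3 / math.sqrt(n)        # -> 0, so log^3 n = o(sqrt n)
r2 = lambda n: (2 * n + math.log(n)) / n              # -> 2, so the pair is Theta
r3 = lambda n: math.exp(math.sqrt(math.log(n))) / n   # -> 0: subpolynomial growth

for k in (3, 6, 9, 12):
    n = 10.0 ** k
    print(k, r1(n), r2(n), r3(n))
```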

We would like to find a nice, accessible proof of this theorem. As an aside, Dick once used this theorem with his co-author Rich DeMillo in a paper on independence issues surrounding the P=NP question.

From Analysis to Primes

The Riemann Hypothesis itself is stated for continuous functions but has major implications for prime numbers, which are discrete. The Prime Number Theorem (PNT) was first proved via continuous analysis, before an “elementary” proof was found. The PNT states that the number $\pi(x)$ of prime natural numbers below the real number $x$ satisfies $\pi(x) \sim x/\ln x$.
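A quick sieve check (our illustration) of how accurate $x/\ln x$ already is at $x = 10^5$:

```python
import math

def prime_count(x):
    # Sieve of Eratosthenes: count the primes p <= x.
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return sum(sieve)

x = 100000
pi_x = prime_count(x)
print(pi_x, pi_x / (x / math.log(x)))  # 9592 primes; the ratio drifts toward 1
```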

Hardy and Littlewood themselves made two notable conjectures about the distribution of prime numbers. For the first, call a set $A = \{a_1, \dots, a_k\}$ of distinct natural numbers “nice” if for every prime number $p$, the number of distinct values of $a_i$ mod $p$ for $a_i \in A$ is at most $p - 1$. Define

$\pi_A(x) = |\{n \leq x : n + a_1, \dots, n + a_k \text{ are all prime}\}|$

Then when $A = \{0\}$, $\pi_A(x)$ is just $\pi(x)$. Thus the following conjecture generalizes the PNT:

Conjecture 1
For every nice $A$ of size $k$,

$\pi_A(x) \sim C_A \int_2^x \frac{dt}{(\ln t)^k}$

where

$C_A = \prod_{p\ \mathrm{prime}} \frac{1 - w_A(p)/p}{(1 - 1/p)^k}$

and $w_A(p)$ is the number of distinct values of $A$ modulo $p$.
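For the simplest nontrivial case $A = \{0, 2\}$ (twin primes), the conjecture predicts $\pi_A(x) \sim 2C_2 \int_2^x dt/(\ln t)^2$ with the twin-prime constant $C_2 \approx 0.6601618$. A rough empirical check (our illustration; the constant’s value is from the standard literature):

```python
import math

def prime_sieve(x):
    # Sieve of Eratosthenes: s[n] == 1 iff n is prime.
    s = bytearray([1]) * (x + 1)
    s[0] = s[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if s[p]:
            s[p * p::p] = bytearray(len(s[p * p::p]))
    return s

x = 100000
s = prime_sieve(x)
twins = sum(1 for n in range(2, x - 1) if s[n] and s[n + 2])

# Riemann-sum approximation of the Hardy-Littlewood prediction.
C2 = 0.6601618
dt = 0.5
integral = sum(dt / math.log(2 + i * dt) ** 2 for i in range(int((x - 2) / dt)))
prediction = 2 * C2 * integral
print(twins, round(prediction))  # observed vs. predicted counts agree closely
```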
The second conjecture is simpler: it is just that for any real numbers $x$ and $y$, there are always no more primes from $x+1$ to $x+y$ than from $1$ to $y$. In symbols, this says

Conjecture 2
For all natural numbers $x$ and $y$ with $x, y \geq 2$, $\pi(x + y) \leq \pi(x) + \pi(y)$.

This seems obvious, since the sequence of prime numbers thins out. However, it contradicts certain cases of the first conjecture. It seems odd that Hardy-Littlewood would have advanced two mutually contradictory conjectures, but perhaps one came from Hardy while the other came from Littlewood.
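The conjecture is easy to check by machine for small arguments, where no counterexample appears (the suspected failures, coming via the first conjecture, would be astronomically large):

```python
import math

def prime_sieve(x):
    s = bytearray([1]) * (x + 1)
    s[0] = s[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if s[p]:
            s[p * p::p] = bytearray(len(s[p * p::p]))
    return s

N = 600
s = prime_sieve(N)
pi = [0] * (N + 1)
for n in range(1, N + 1):
    pi[n] = pi[n - 1] + s[n]   # prefix sums give pi(n)

# Check pi(a + b) <= pi(a) + pi(b) for all 2 <= a, b <= 300.
violations = [(a, b) for a in range(2, 301) for b in range(2, 301)
              if pi[a + b] > pi[a] + pi[b]]
print(len(violations))
```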

From Primes to Complexity

Further versions of the Riemann Hypothesis have even more direct implications in computational complexity. The generalized Riemann hypothesis (GRH) states that for any group character $\chi$ the corresponding complex L-function

$L(s, \chi) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n^s}$

has all of its nontrivial zeroes on the line $\Re(s) = 1/2$. Here $\chi$ is a character of the multiplicative group of integers modulo some $k$, extended by $\chi(n) = 0$ when $\gcd(n, k) > 1$, and $s$ is a complex variable. (We have talked about group characters on this blog before. GRH is different from the extended Riemann hypothesis (ERH) which involves zeta functions over number fields.)
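To make this concrete with the smallest interesting example: for the nontrivial character mod 4, the series at $s = 1$ is Leibniz’s $1 - 1/3 + 1/5 - \cdots = \pi/4$ (a classical fact; the code is our illustration):

```python
import math

# The nontrivial character mod 4: chi(n) = +1, 0, -1, 0 for n = 1, 2, 3, 0 (mod 4).
def chi(n):
    return (0, 1, 0, -1)[n % 4]

# Partial sum of L(1, chi) = sum_{n>=1} chi(n)/n, which equals pi/4.
N = 10 ** 6
partial = sum(chi(n) / n for n in range(1, N + 1))
print(partial, math.pi / 4)
```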

Pascal Koiran proved (note erratum) that GRH places the problem of whether a set of polynomial equations has a solution over the complex numbers into the second level of the polynomial hierarchy, indeed into the Arthur-Merlin class $\mathsf{AM}$.

Koiran’s main GRH-dependent technique intuitively transfers solvability over $\mathbb{C}$ into statements about solvability modulo many primes $p$. Let $L$ bound the binary length of coefficients in the finite set of $m$-many polynomial equations in $n$-many variables, and let $d$ bound their degree. Set $x$ to be a suitable cutoff, exponential in a polynomial of $n$, $m$, $d$, and $L$.

Theorem 2 For any finite set of polynomial equations as above there exist thresholds $A < B$, with $B$ much larger than $A$, such that:

if the equations are not solvable over $\mathbb{C}$, then the number of primes $p \leq x$ such that they are solvable over $\mathbb{F}_p$ is at most $A$;

if they are solvable over $\mathbb{C}$, then the number of primes $p \leq x$ such that they are solvable over $\mathbb{F}_p$ is at least $B$.
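A toy version of this density dichotomy (examples of our choosing, far simpler than Koiran’s setting): $x^2 + 1 = 0$ is solvable over $\mathbb{C}$ and turns out to be solvable mod $p$ for roughly half of all primes (those with $p \equiv 1 \bmod 4$, plus $p = 2$), while the inconsistent system $\{x = 0,\ x - 1 = 0\}$ is solvable over no $\mathbb{F}_p$ at all.

```python
import math

def primes_upto(x):
    s = bytearray([1]) * (x + 1)
    s[0] = s[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if s[p]:
            s[p * p::p] = bytearray(len(s[p * p::p]))
    return [p for p in range(2, x + 1) if s[p]]

primes = primes_upto(1000)

# Solvable over C: x^2 + 1 = 0.  Brute-force a root mod p.
good = [p for p in primes if any((v * v + 1) % p == 0 for v in range(p))]

# Not solvable over C: x = 0 and x - 1 = 0 simultaneously.
bad = [p for p in primes if any(v % p == 0 and (v - 1) % p == 0 for v in range(p))]

print(len(primes), len(good), len(bad))  # "good" is about half of all primes
```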

From Continuous to Discrete

We view Koiran’s theorem as a way of translating properties of continuous mathematics into discrete domains. The theory of poles and winding numbers that is basic to complex analysis is one of the ways to connect the discrete and the continuous, while Knuth himself co-authored the book Concrete Mathematics which marries them. Three other avenues by which people are applying tools from continuous mathematics to do research on computational complexity are:

The theory of computing over the reals, jump-started by Lenore Blum, Michael Shub, and Stephen Smale. This survey by Mark Braverman and Stephen Cook gives a brief introduction and outlines a competing (older) view.


To pursue your analogy, is there any way, in principle, that you could lose your wicket? The only thing I can think of is that someone sends in a comment with a quick proof that P does not equal NP, and that after an initial flurry of interest you decide that the blog has lost its raison d’etre. But something tells me you’ll be getting that quadruple century …

Much of what one does in continuous mathematics is manipulations of discrete objects. For example, for many analytic estimates that involve discrete objects (such as a sum over the elements of a set, weighted by integers), whenever one writes down an inequality “(Quantity A) ≤ (Quantity B)”, one is implicitly constructing an injection f from a set S to a set T, where |S| = (Quantity A) and |T| = (Quantity B), and f can be explicitly written down by tracing through the proof. Furthermore, many “analytic formulas”, having big-oh error terms, can be converted into algorithms: often, a formula of the type (Quantity A) = (Quantity B) + O(Quantity C) can be interpreted as the algorithm, “To compute Quantity A, first compute Quantity B, then the O(Quantity C) “error”, and lastly add them together”; in many cases the “error” and Quantity B are much easier to compute than the “naive algorithm” for computing Quantity A (by “naive algorithm”, I mean the defining algorithm — e.g. the “naive algorithm” to compute pi(x) is to enumerate, one-by-one, the primes up to x).
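A classical instance of this “formula as algorithm” point (our illustration, not part of the comment above) is Legendre’s identity $\pi(x) = \phi(x, a) + a - 1$ with $a = \pi(\sqrt{x})$, where $\phi(y, b)$ counts integers in $[1, y]$ with no prime factor among the first $b$ primes; the recurrence for $\phi$ computes $\pi(x)$ without enumerating all primes up to $x$:

```python
import math
from functools import lru_cache

def small_primes(n):
    # Trial division is fine here: n is only sqrt(x).
    ps = []
    for m in range(2, n + 1):
        if all(m % p for p in ps):
            ps.append(m)
    return ps

def legendre_pi(x):
    if x < 2:
        return 0
    ps = small_primes(math.isqrt(x))
    a = len(ps)

    @lru_cache(maxsize=None)
    def phi(y, b):
        # Integers in [1, y] not divisible by any of the first b primes:
        # remove multiples of the b-th prime from the count for b-1 primes.
        if b == 0:
            return y
        return phi(y, b - 1) - phi(y // ps[b - 1], b - 1)

    return phi(x, a) + a - 1

print(legendre_pi(100), legendre_pi(10 ** 4))  # 25 and 1229
```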

I do not think that the solution to the P vs NP problem will be solved by any of the methods written on this blog. It will be solved by someone unexpected… trying ideas orthogonal to all of these. No amount of reading up on this or that nifty trick or on the opinions of “big thinkers”, or writing “Will such and so crack P vs NP?”, will get you there. Only luck, profound insight, and a lot of hard work will.

“I do not think that the solution to the P vs NP problem will be solved by any of the methods written on this blog. It will be solved by someone unexpected… trying ideas orthogonal to all of these. No amount of reading up on this or that nifty trick or on the opinions of “big thinkers”, or writing “Will such and so crack P vs NP?”, will get you there. Only luck, profound insight, and a lot of hard work will.”

Yes. Indeed! It will not. The “lot of hard work” is mainly the formalisation of new & simple concepts about Computation.
IMHO, Jean-Yves Girard with his Geometry of Interaction is much closer than anyone. Unfortunately his formalisation is too technical & lacks new concepts.

Every one of us trying to get an answer about P versus NP is like this fly going fast on the glass & knocking it hard. And then after, trying again to fly through the window, forgetting what happened earlier & again & again … The Fly Syndrome. Why not calm down and say: “yes there is an obstacle we don’t see because today’s Theory of Computation makes it INVISIBLE!”

It lies in the very beginning of the formalisation of what we call “computation”. Through P vs NP, Nature is giving the same response it gave some decades ago …

Elandalussi, what you are talking about has been described in Humberto Maturana and Francisco Varela’s Autopoiesis and Cognition in a beautiful and much-quoted phrase:

… the always gaping trap of not saying anything new because the language does not permit it.

My database has multiple quotations upon this theme, of which among the darkest is Ahab’s soliloquy that begins “All visible objects, man, are but as pasteboard masks …” and among the most plainly stated and practically useful is Spivak’s

There are good reasons why theorems should all be easy and the definitions hard … Definitions serve a twofold purpose: they are rigorous replacements for vague notions, and machinery for elegant proofs … Stokes’ Theorem shares three important attributes with many fully evolved major theorems: (1) It is trivial. (2) It is trivial because the terms appearing in it have been properly defined. (3) It has significant consequences.

Elandalussi: I agree that we need new and simple concepts and structures about computation; and it is hard to see where to look for them. But good definitions/structures alone will probably not be enough to see how to crack the P vs NP problem. The resolutions of many great problems often require new definitions and structures; but they also often require a lot of really hard, technical, grinding-through-identities-and-calculations work (e.g. the resolution of the Poincare Conjecture and the proof of Fermat’s Last Theorem [by Wiles]). Even in algebra, a subject full of many deep definitions and concepts, most papers require lots of formal manipulations to prove their theorems, and these often aren’t just simple exercises in “diagram chasing”.

I think that these new definitions/structures will not come wholly from any existing sub-field of mathematics, but rather will need to be invented “from scratch” (as perhaps Girard has done). There is this tendency that some people have to assume that some particular deep theory they know about must somehow have a bearing on whatever deep problem they come in contact with, no matter how unrelated the theory and problem seem to be. It is not unlike the quantum consciousness fallacy: consciousness is a mystery (to some), and quantum physics is deep and what it “means” seems mysterious; and therefore these two mysteries must have something to do with one another.

“Every one of us trying to get an answer about P versus NP is like this fly going fast on the glass & knocking it hard. And then after, trying again to fly through the window, forgetting what happened earlier & again & again … The Fly Syndrome. Why not calm down and say: “yes there is an obstacle we don’t see because today’s Theory of Computation makes it INVISIBLE!””

I completely agree. Not only does the P!=NP conjecture imply the existence of trapdoor functions, but it also behaves like a trapdoor to the brains who try to prove it!

Serge, on a later Gödel’s Lost Letter and P=NP topic titled “Make Your Own World”, Tim Gowers posted a short three-paragraph essay on P vs NP that I admire greatly and recommend highly … very seldom in my experience has any comparably brief essay so enjoyably inspired new reflections on these classic problems.

Thanks for highlighting my work, but may I point out that the survey by Braverman and Cook is not about the Blum-Shub-Smale model. Their survey is about an approach to real analysis based on the usual (discrete) bit model of computation. This line of work was pioneered by none other than Turing himself.

The BSS model is an algebraic (as opposed to discrete) model of computation: the basic entities are field elements and field operations, and the field may well be infinite (think of the real or complex numbers).
In my opinion this model is especially well suited to the study of the complexity of problems from computational algebraic geometry, such as e.g. the feasibility of systems of polynomial equations in several variables.

Thanks Pascal—didn’t yet get time to look in your later papers for technical improvements (owing to events in your country:), so feel welcome to mention… I added a link to the 1989 BSS paper in Bull. AMS, on “jump-started” which I was careful to write rather than “originated”.

Thanks Kenneth for the update. What’s going on in my country ?
The view that these two models (the bit model and the Blum-Sub-Smale model) are competing seems to be commonplace, but my own opinion is different. The reason is that these models are best-suited to address very different types of questions.
Let me give a couple of examples to make this point.

Bit model questions:

a) for a real number x, how many bit operations do we need to compute
an approximation of x to some given accuracy?

b) for a continuous function f from the reals to the reals,
how many bit operations do we need to compute f(x)
to some given accuracy, given (oracle) access to approximations of x?

BSS model questions:

1) is it possible to decide with a polynomial number of arithmetic operations and comparisons whether a degree 4 real polynomial (in many variables) has a real root ?

2) given a system of polynomial equations over the complex numbers,
is it possible to compute the dimension of the solution set with a polynomial number of arithmetic operations and equality tests ?

You can see that questions a-b and 1-2 are of a very different nature.
So I view these models as complementary rather than competing.
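A tiny bit-model example in the spirit of question (a) above (our illustration): approximating $\sqrt{2}$ by bisection over exact dyadic rationals, where each extra bit of accuracy costs one more iteration of exact arithmetic.

```python
from fractions import Fraction

def sqrt2_to_k_bits(k):
    # Bisection on exact dyadic rationals: after k steps the enclosing
    # interval has width 2^-k, so lo approximates sqrt(2) to k bits.
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(k):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

approx = sqrt2_to_k_bits(40)
print(float(approx))
```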

This isn’t really about computational complexity, but some of our favorite discrete algorithms have continuous roots. For instance, one way to view Karmarkar’s interior-point method is as Newton’s method for minimizing a kind of potential energy in the polytope, or even as discrete-time snapshots of a continuous flow. See
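A one-dimensional cartoon of that barrier/flow picture (illustrative, with numbers of our choosing): minimize $x$ over $[0, 1]$ through the log-barrier $f_\mu(x) = x - \mu(\log x + \log(1 - x))$; Newton steps on $f_\mu'$ follow the central path, and shrinking $\mu$ drives the minimizer toward the true optimum $x = 0$.

```python
import math

def barrier_min(mu, x=0.5, iters=60):
    # Newton's method on f'(x) = 1 - mu/x + mu/(1 - x) = 0.
    for _ in range(iters):
        g = 1 - mu / x + mu / (1 - x)        # gradient of the barrier objective
        h = mu / x ** 2 + mu / (1 - x) ** 2  # second derivative; positive on (0, 1)
        x -= g / h
        x = min(max(x, 1e-12), 1 - 1e-12)    # clamp to stay strictly feasible
    return x

for mu in (0.1, 0.01, 0.001):
    print(mu, barrier_min(mu))  # minimizers march toward x = 0 as mu shrinks
```

Here the barrier minimizer has the closed form $x^*(\mu) = \big(1 + 2\mu - \sqrt{(1+2\mu)^2 - 4\mu}\big)/2$, so the Newton iterates can be checked directly.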

Another nice use of continuous methods is the technique of differential equations for analyzing the behavior of greedy algorithms on random graphs, random Boolean formulas, etc. To my knowledge, this started with Karp and Sipser’s algorithm for Max Matching and Max Independent Set in random graphs.

> Are there other striking applications of Koiran’s density-gap phenomenon?

Peter Burgisser has shown that a collapse of complexity classes in Valiant’s model (namely, VP=VNP) implies a similar collapse for boolean complexity classes.
This is covered in his book: Completeness and reduction in algebraic complexity theory.

Results about the complexity of computations change quite radically if we consider [insert burning arrow here].

Hartmanis’ burning arrow is (of course) “properties of computations which can be proven formally.” Braverman and Cook’s burning arrow is “computational models of functions over the [bit-model] reals.” Blum, Shub, and Smale’s burning arrow is to “present a model of computation over the [continuum] reals” and thereby “bring the theory of computation into the domain of analysis, geometry and topology.” Traub and Wozniakowski’s burning arrow is that “information is partial, contaminated, and it costs.”

A major attraction of Gödel’s Lost Letter for systems engineers is that we dearly cherish all four of these burning arrows, as follows: (1) computations are useful only if their correctness can be validated (Hartmanis’ arrow); (2) numerical simulations are essential to the engineering of large systems (Braverman & Cook’s arrow); (3) we engineers understand our simulations mainly via the toolset of analysis, geometry and topology (Blum, Shub & Smale’s arrow); and (4) in practice we systems engineers more commonly sample noisy trajectories of thermalized/turbulent/chaotic systems than we compute deterministic functions (Traub & Wozniakowski’s arrow). It’s not uncommon nowadays for research groups in synthetic biology and/or materials science to generate terabytes of simulated classical and/or quantum trajectories on a daily basis. That is why we engineers enthusiastically lash all four complexity-theoretic arrows together and fire them at practical problems … this computational strategy proves to be both fun and exceedingly useful.

As engineers devote more-and-more computational power to sampling trajectories, our attention is drawn increasingly to the practical verification and validation of sampling computations. In this respect we stand in need of … more Gödel’s Lost Letter arrows! Consider for example the following dialog associated to Braverman & Cook’s discussion of the extended Church-Turing thesis (ECT):

In the absence of any formal association of verification to sampling, and under the assumption that the state-space of quantum mechanics is a linear Hilbert space having exponentially great dimension, Alice’s reasoning is impeccable. So should complexity theorists regard the ECT as having already been formally disproved … decades ago? Formally “yes,” and yet we have the practical intuition that Alice’s reasoning is dubious.

Thus, a future Gödel’s Lost Letter on the general topic of validation in the context of computational sampling would be very welcome. For engineers such topics are both enjoyable and have great practical importance. The more burning arrows the better, and so thanks are given for the wonderful burning arrows that Gödel’s Lost Letter columns have already shot!🙂

A challenge for Gowers (he was asking for one in some previous post)! As a crank, I promised not to follow rules, except for one – a counter-example is a reason for me to shut up. As Dick was asking for something small, here are BASIC (as programming language) arguments (as everybody knows I cannot code in “math assembly”, and I think there are enough translators out there, if needed). As I told, I need neither recognition nor prizes (they are for assembly coders); as a user, I see a lack of attention to a particular interface, which I want to bring to the attention of serious math coders. I would put it into polymath, if I had enough knowledge to communicate – the math here should be elementary, according to modern standards.

The claim is: is sum of squares (sos) of polynomials when non-negative (psd) for small enough .

Proposition 1. The claim is true for – obvious.
Proposition 2. The claim is true for – the minimum for , and is independent from . Due to Hilbert is sos when psd.

Weighting decompositions of Proposition 2 with gives a SOS representation of polynomial in the claim for small enough .

PS. The polynomial in the claim can be used to encode the partition problem. Use SDP to solve for the minimum (a lower bound will be good enough, throwing away terms smaller than the decision precision), and check that you need polynomial precision for the decision (since the problem is polynomial, there are no inversions, and Q is integral, there is no place for exponential precision to appear).

Regarding the transfer of results or tools from the continuous to the discrete, one little favourite of mine (from when I was doing lower bounds) is the “topological method” used to obtain lower bounds on algebraic computation trees: Ben-Or did it for computation over the reals; then Yao refined the method so that it applies to integer data (still using the topology of the real space).

I recently came across Michael Freedman’s paper “Complexity classes as mathematical axioms” ( http://arxiv.org/pdf/0810.0033v4 ) through StackExchange. It assumes the separation $\mathsf{P}^{\#\mathsf{P}} \neq \mathsf{NP}$ as an “axiom” and derives a knot-theoretic theorem that–I am quoting the Introduction here–sounds both “very plausible” and “impossible to prove”. I am surprised this paper is not as well-known as I expected it to be. But I must confess I unfortunately do not understand even elementary knot theory and hence I didn’t get much of the technical aspects.

While your post is about applying classical tools to understand complexity better, this paper takes the seemingly less popular converse approach. Indeed, the possibility that TCS can “give back” to classical, continuous math is as exciting to me as resolving the difficult open questions in complexity. Do you know of other such “TCS to classical math” theorems that may be more accessible for computer science audience?