The Blog of Scott Aaronson

If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

No particular news to report — it’s about the same as it was 400 years ago, I guess. I just wanted to liveblog from the Taj Mahal, is all. (Jonathan Walgate is the one who suggested it.) Now I’ll go back to looking at it.

This entry was posted on Saturday, December 22nd, 2007 at 6:47 am and is filed under Adventures in Meatspace, Embarrassing Myself.

Scott would have to be really selfish to intentionally prove that P != NP. His fame and fortune would come in exchange for everyone’s happy hours of thought spent entertaining P=NP. And all of that to no one’s gain.

I recently heard (on audiobook) Ian Stewart’s essay “The Mathematics of 2050,” from The Next Fifty Years essay collection, and he guessed that P vs NP will be proven formally undecidable by then (which I had never even considered). But I got to thinking about it more, and I realized I don’t really know if he means undecidable like the halting problem, or undecidable like CH (which I now see might more properly be called independent). And that got me wondering whether computational complexity is built on axioms, and if so, what they are. CC seems somehow “more bound” to reality than pure mathematics, so it seems that if P vs NP were independent, an additional axiom would be in order. At the same time, I can’t really picture it being undecidable like the halting problem, though I can’t say why.

Which now has me wondering: do we ever examine the “space complexity” or anything similar of proofs? Maybe these are silly questions; I’m really more of a low-level physics guy than a mathematician or computer scientist.

Cody, being a “physics guy” like you, may I say that the literature on the complexity of search problems seems more natural—or at least, more physically motivated—than the literature on the complexity of decision problems?

The two categories of problems are of course closely related, but they are not the same. That’s my limited understanding anyway … perhaps others have helpful comments?

Ah, yes, I think I see what you mean. Would the relationship between search problems and decision problems be sort of like the relationship between FP and P (or any other functional version of a decision class)?

John: P vs. NP is of course just the “canonical representative” of a very large set of questions (P vs. BQP, P vs. PSPACE, NP vs. P/poly, FP vs. PPAD, NP vs. DTIME(npolylog), existence of OWF’s, etc.) that we’d like to answer but can’t. Personally, I don’t necessarily see it as more important than the rest; I think people just focus on it for concreteness (similarly to how mathematicians talk about the Riemann Hypothesis, even though many of the implications actually require stronger versions like ERH).

In the case of P vs. NP, what justifies focusing on decision problems is their well-known equivalence to search problems. (That is, P=NP iff FP=FNP.) For the TFNP classes (PPAD, PPP, etc.), one has to talk about search problems, so that’s exactly what people do.
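The search-to-decision equivalence mentioned here can be made concrete for SAT. The sketch below is my own illustration (the clause encoding, function names, and brute-force stand-in oracle are not from the post): given any decision oracle for SAT, a satisfying assignment can be recovered with polynomially many oracle calls by self-reducibility, fixing one variable at a time.

```python
from itertools import product

def sat_decision(clauses, n):
    """Brute-force stand-in for a hypothetical SAT decision oracle.

    clauses: list of clauses; each clause is a list of nonzero ints,
    where literal k means variable |k| is True if k > 0, False if k < 0.
    n: number of variables, numbered 1..n.
    """
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def sat_search(clauses, n, decide=sat_decision):
    """Recover a satisfying assignment using only the decision oracle."""
    if not decide(clauses, n):
        return None
    assignment = {}
    current = [list(c) for c in clauses]
    for v in range(1, n + 1):
        for value in (True, False):
            lit = v if value else -v
            # Substitute v := value: drop satisfied clauses,
            # remove the now-false literal from the rest.
            simplified, consistent = [], True
            for c in current:
                if lit in c:
                    continue  # clause satisfied, drop it
                reduced = [l for l in c if l != -lit]
                if not reduced:
                    consistent = False  # clause falsified
                    break
                simplified.append(reduced)
            if consistent and decide(simplified, n):
                assignment[v] = value
                current = simplified
                break
    return assignment
```

Each of the n variables costs at most two oracle calls, so a polynomial-time decision procedure would yield a polynomial-time search procedure — which is the content of “P=NP iff FP=FNP.”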

The title of Hanika’s appendix is “Three Myths About Function Problems”. The three myths are called (by Hanika) “Reducibility to Decision”, “Totality Makes [a?] Difference”, and “Decisions are Two-Valued”.

When I say the appendix is “thought-provoking” what I mean is “it reassured me that I was not the only person to whom these issues were non-obvious”.

That doesn’t mean I understand these issues … far from it. But am I right in guessing that they would be good things to understand better?

Here’s something I don’t understand entirely: if a function f(x) ∈ {0,1} alternates infinitely between chunks (of varying length) that are computable with polynomial lower bounds (of varying degrees) and chunks that are computable only with exponential lower bounds, then what is the lower bound of an algorithm computing f(x): polynomial or exponential?

Roland, do you think that Ian Stewart meant P vs NP would be proven independent of number theory, then? And also, is computational complexity theory founded on number theory, or just ZFC (is there a difference)? And if that were the case, couldn’t we “choose” whatever answer we would like for P vs NP and make it a new axiom? As I understand it, most mathematicians work under the assumption that CH is false, though not all, and that it is up to them.

The idea of “space complexity” of proofs isn’t necessarily a coherent one; it is a curiosity about what might result from looking at the number of steps and amount of information required to prove something (though I do understand that proofs are highly arbitrary).

I guess really I’m wondering whether anything useful could result from applying computability- and complexity-style ideas to proof-style problems, though it feels like a dumb question because proofs seem like such radically different problems than algorithms. Clearly this is beyond my comprehension.

Scott, I just noticed I can resize this comment window; that’s a neat little touch.

Oh, I meant to say that I am comfortable with arbitrarily rejecting CH or not, because in pure mathematics you can choose what sort of system you are studying. My view is that in math, you define your axioms and see what sort of world results, whereas in physics you are handed the resulting world from a set of unknown axioms, and we are trying to work backwards. P vs NP being independent of ZFC would seem weird because P and NP seem somehow more grounded in reality, as if they were handed to us along with electric charge and gravity, so it seems silly to say we could choose to work with either outcome.

John, I took a look at Hanika’s thesis. He’s correct that the reduction from search to decision problems is necessarily a Turing reduction. But we can perform Turing reductions in P — so if all we care about is the P vs. NP question, then we don’t have to talk about function problems.

But as I said before, sometimes you do have to talk about function problems — and when you have to, you do. This is not a deep ideological issue, and there’s really not much to understand.

Cody, my intuition about CH vs. P≠NP accords with yours. When people throw around the idea that P≠NP is independent, they don’t mention that in the whole history of mathematics, we’ve never once seen a “natural” question phrasable in terms of Turing machines and whether they halt proven independent of set theory.

(The Gödel sentence is not “natural,” in the sense that it’s specifically constructed to be independent. The Continuum Hypothesis is not phrasable in terms of Turing machines and whether they halt. The Paris-Harrington example is only independent of arithmetic, not of set theory.)

On the other hand, in the history of mathematics we’ve seen many, many examples of questions that were answerable, but that were asked centuries or millennia before the tools existed to answer them. That’s an extremely well-known phenomenon.

So I’d think the “default” conjecture should be that P≠NP, that there’s indeed a proof, and that mathematics simply hasn’t yet advanced to the point where we can find it.

(If you want to know more about independence and P vs. NP, see this survey paper I wrote a while back. Unfortunately, nowhere in the paper did I bother to state my own opinion, and some people misinterpreted me to mean I actually think independence is a serious possibility.)

Cody says: “I guess really I’m wondering whether anything useful could result from applying computability- and complexity-style ideas to proof-style problems, though it feels like a dumb question because proofs seem like such radically different problems than algorithms.”

The way I understand it is that an algorithm is basically constructing a mathematical object. So in constructive mathematics you always prove that a mathematical object exists by constructing it, which gives you a way to “make” that object “by hand” (e.g. an algorithm). In other words:
algorithm = proof of existence of something (in constructive math)
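This “proofs are programs” reading can be shown in one line of a proof assistant. Here is a minimal Lean 4 illustration (my own example, not from the comment): an existence proof is literally a pair consisting of a witness and a verification.

```lean
-- A constructive existence proof: the witness 2 is supplied explicitly,
-- and `rfl` checks by computation that 2 + 2 = 4.
example : ∃ n : Nat, n + n = 4 := ⟨2, rfl⟩
```

The witness inside the angle brackets is exactly the “object made by hand” that the comment describes.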

Cody says: “I guess really I’m wondering if anything useful could result from applying computability and complexity sort of ideas to proof sort of problems?”

That is a fine topic for an end-of-year “quantum confection”.

The old-school answer is “no,” and the reason is aesthetic. As Chandrasekhar phrased it, “The simple is the seal of the true” and “Beauty is the splendour of truth.” The implication is that proof-related results originating in complexity theory may well be formally true, but results derived by this path will in general lack the simplicity and beauty that Chandra prized.

The new-school answer is “yes”, and Kurt Gödel’s incompleteness proofs are of course the shining example. But this doesn’t mean the old-school thinking is wrong. As Scott noted, Gödel sentences lack Chandra’s “splendour of simple truth.”

The post-modern view might be “yes and no,” with the aesthetic point being that in the real world, truths are always embedded within computational ecosystems, such that the full meaning of a truth is illuminated only in light of its informatic embedding.

Let’s construct an example of a post-modern kind of question. We suppose that Alice is running an error-corrected quantum computation in a finite-dimensional Hilbert space and Bob is doing the same. But without knowing it—and this is the eco-informatic “catch”—Alice and Bob are operating in the same physical Hilbert space.

This sharing is not immediately apparent to either Alice or Bob, because Bob’s qubit basis is random relative to Alice’s. In consequence, Bob’s computation looks like noise to Alice, and vice versa.

As a toy problem, we ask “Is quantum error-correction possible within this reciprocal computational ecosystem?” Just for fun, I coded up a toy computation, and obtained the surprising (to me) result that Alice’s and Bob’s respective computations are fratricidal … each destroys the other.

For example, if Bob implements a computation as simple as a single pi-pulse on a single Bob qubit, this one operation appears in Alice’s frame as a randomizing POVM that completely “mixmasters” her entire computation, beyond any hope of error correction.
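For readers who want to poke at this themselves, here is a small numpy sketch of the effect being described. This is my own toy reconstruction, not the commenter’s actual code: Bob’s pi-pulse (a Pauli X on one of his qubits), conjugated into Alice’s frame by a Haar-random unitary modeling the random relative orientation of their bases, leaves Alice’s state with only about 1/d overlap with what she expects.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d, rng):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the phases of the diagonal of r to make the distribution Haar.
    return q * (np.diag(r) / np.abs(np.diag(r)))

n_qubits = 4
d = 2 ** n_qubits

# Bob's pi-pulse: Pauli X on his first qubit, written in Bob's own basis.
X = np.array([[0, 1], [1, 0]], dtype=complex)
bob_op = np.kron(X, np.eye(d // 2))

fids = []
for _ in range(20):
    U = haar_unitary(d, rng)        # random relative orientation of the two frames
    V = U @ bob_op @ U.conj().T     # Bob's pulse as it appears in Alice's frame
    psi = np.zeros(d, dtype=complex)
    psi[0] = 1.0                    # Alice's (unperturbed) state
    fids.append(abs(np.vdot(psi, V @ psi)) ** 2)

mean_fid = np.mean(fids)            # for Haar-random frames this is ~1/d
```

For 4 qubits the mean fidelity comes out near 1/16, i.e. Bob’s single pulse leaves Alice’s state nearly orthogonal to where it should be, consistent with the “mixmaster” picture (though in Alice’s frame the pulse is still a unitary, not a POVM, for any one fixed relative orientation).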

Does this toy problem have any interest at all? Well, the converse is interesting: if Alice’s computation succeeds, this is convincing evidence for Alice that Bob does not exist in her universe (or if he does, is not carrying out computations).

So this toy problem helps us (well, me at least) understand better why we perceive only one classical universe … it’s because the shared universes all committed quantum fratricide!

Why is the attribute “simple” considered to be the complete equivalent of the attribute “beautiful”? They are two totally different characteristics. Is a beautiful woman a simple woman? In addition, the predicate “elegant” is different from the properties “simple” and “beautiful.” Is such careless talk accepted and condoned in mathematics? If so, it should be consciously recognized as such.

Without offering an opinion on whether Chandra’s book is “true”, it is surely true that its philosophy motivated and guided at least one person—namely Chandra himself—to do some outstanding physics and mathematics. 🙂

That is a good pragmatic definition of beauty. Whatever results in valuable work is true and beautiful, then. Maybe it’s even elegant (von Neumann), pretty (Oppenheimer), tasteful, fine, and nice. Chandrasekhar’s authority must be overwhelming. Did he have as much success in his expensive book on Newton? In it, he tried to convert Newton’s idiosyncratic, pseudo-mathematics into differential equations.

Well, I dunno. Assembly language code is beautiful? Manure shoveling is beautiful? More provocative IMHO is the idea that new forms and ideals of beauty become apparent in every century.

Suppose we adopt for the moment Chandra’s categories of “basic science” and “derived science” (while admitting that the exact boundary between basic and derived science is somewhat indistinct).

Then last century saw a lot of beautiful “basic science” that (for example) united variational principles, path integrals, symmetries, and symmetry-breaking; these ideas added up to the Standard Model.

But as Feynman foresaw, this flowering of basic science was a one-time event:

“We are very lucky to live in an age in which we are still making discoveries. It is like the discovery of America—you only discover it once. The age in which we live is the one in which we are discovering the fundamental laws of nature, and that day will never come again. It is very exciting, it is marvelous, but this excitement will have to go. Of course, in the future there will be other interests. There will be the interest in the connection of one level of phenomena with another—phenomena in biology and so on, or, if you are talking about exploration, exploring other planets, but there will not still be the same things that we are doing now.”

What comes next? Well, it would be cool—which is to say “beautiful”—if in the twenty-first century, one percent of humanity (say) could find employment as scientists or mathematicians.

This will require creating on the order of 100 million new scientific jobs in this century, which is about 3,000 new scientific jobs per day … so we had better get busy! 🙂

There definitely are scientific enterprises that potentially “scale” to this immense size: the sky surveys and the genome surveys come to mind. These are examples of what Chandra called “derived” enterprises.

It is uniquely characteristic of our new century that humanity has begun to undertake these derived enterprises on a global scale … a necessity that John von Neumann foresaw.

And who is better equipped, by training and talent, to think creatively about these global enterprises, and to provide the fundamental tools for undertaking them, than complexity theorists?

That concludes my optimistic New Year’s Essay … whose main point is that the scientific community has in hand both urgent challenges and interesting questions. Perfect! 🙂

I saw Mr Kohli’s comments somewhere else. Mr Kohli is right that you were lucky to have Dr Prem Saran Satsangi during your lecture at Dayalbagh. What you try to compute, probate, and prove — A PARTICLE OR A STRING, HERE AS WELL AS THERE, AND PROBABLY NOWHERE AND POSSIBLY EVERYWHERE — he sees it directly. He has described the phenomenon of ‘Para’ (science of the other world) and ‘Apara’ (of this material world). It would be worthwhile to read his following publication, to know something about his superhuman mind and a most wonderful human being: