Posted
by
Soulskill
on Friday April 04, 2014 @11:17AM
from the schrodingers-cat-is-both-alive-and-equal-to-NP dept.

KentuckyFC writes: "One of the greatest mysteries in science is why we don't see quantum effects on the macroscopic scale; why Schrodinger's famous cat cannot be both alive and dead at the same time. Now one theorist says the answer is because P is NOT equal to NP. Here's the thinking: The equation that describes the state of any quantum object is called Schrodinger's equation. Physicists have always thought it can be used to describe everything in the universe, even large objects, and perhaps the universe itself. But the new idea is that this requires an additional assumption — that an efficient algorithm exists to solve the equation for complex macroscopic systems. But is this true? The new approach involves showing that the problem of solving Schrodinger's equation is NP-hard. So if macroscopic superpositions exist, there must be an algorithm that can solve this NP-hard problem quickly and efficiently. And because all NP-hard problems are mathematically equivalent, this algorithm must also be capable of solving all other NP-hard problems too, such as the traveling salesman problem. In other words, NP-hard problems are equivalent to the class of much easier problems called P. Or P=NP. But here's the thing: computational complexity theorists have good reason to think that P is not equal to NP (although they haven't yet proven it). If they're right, then macroscopic superpositions cannot exist, which explains why we do not (and cannot) observe them in the real world. Voila!"

The very reason why physicists build quantum computers is *because* they suspect or propose this. In fact, the observation about computational complexity is what led to the idea of QC in the first place.

I have worked on QC (experimentally), and as an experimentalist I understand that the existence of Schroedinger-cat-like states is a prerequisite for the generation of e-bits, which are what a succeeding computation needs for the NP speedup.

So his section 3 title is wrong, because it implies that arbitrarily large quantum states can be generated (since he uses the word "explained" and not "equivalent"). However, such states have not been observed for *arbitrarily large systems*. I have observed such states experimentally, and as a matter of fact we were busy observing their decay into a classical state, which is a standard technique in all experimental groups working in this field.

So if NP != P, then QC makes sense, and a prerequisite for QC is the generation of systems with many e-bits (an entanglement measure). Even a large system undergoing quantum dynamics (e.g. the cooled MEMS systems) is not sufficient for claiming (or thinking) that much entanglement exists in the computational sense.

I am sick and tired of mentally short-circuited papers like this one, which restate the obvious and ignore recent developments. I am sick of theorists who dream of being great philosophers while being utterly ignorant of the hard work many people have done over the last 20 years. The citation pattern in the paper screams "shit". I see no reference to the previous literature on entanglement measures. He talks about the "measurement" problem as if it had not received any attention in the last 80 years (and as a matter of fact it has, theoretically and experimentally). The abstract does not state a clear goal, the paper contains a quantum-mechanics-for-beginners lesson, and the paper does not have a "summary" but "final remarks".

Looking at the previous work of the same author, an incredibly weird comment (http://arxiv.org/pdf/1401.1747v4.pdf) can be found in which he gives his personal definition of what is falsifiable. His central idea does not hold, of course, if I can do one or more of the following:

* Apply trace operations before comparing with the observation, and at the same time reduce the complexity of the theoretical calculation

* Do postselection and compare relative probabilities of experimental outcomes, where the ratio verifies or falsifies the theory.

Both are valid standard operations in verifying (i.e. not falsifying) quantum theory.

He seems to be a medical data evaluation guy, has no significant publications as first author (and too few impact points for his role), and, as much as I appreciate people from other disciplines getting interested in physics, I would expect that we distinguish a nice college-level summary from serious research.

P is the set of decision problems that are decidable by a deterministic Turing machine in polynomial time.

NP is the set of decision problems that are decidable by a nondeterministic Turing machine in polynomial time (nondeterministic machines do not exist in the real world; the term means "try all computation paths simultaneously, and accept if any path accepts"); equivalently, NP is the set of decision problems that are verifiable by a deterministic Turing machine in polynomial time. The verification is done by evaluating a (polynomial-size) proof certificate that describes the steps necessary to solve the problem. The "certificate" might be a Circuit Value Problem instance (known to be P-complete), or it might be the execution trace of a deterministic Turing machine, or it might just be a set of inputs that yields the desired output.
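To make the "verifiable in polynomial time" part concrete, here is a minimal Python sketch (my own illustration, not from the paper) for SUBSET-SUM, a classic NP-complete problem. The certificate is just the claimed subset itself, and checking it takes linear time:

```python
# Certificate verification for SUBSET-SUM, an NP-complete decision problem.
# Checking a proposed subset is cheap, even though *finding* one may not be.

def verify_subset_sum(numbers, target, certificate):
    """Return True if `certificate` is a valid proof that some subset
    of `numbers` sums to `target`."""
    counts = {}
    for x in numbers:
        counts[x] = counts.get(x, 0) + 1
    for x in certificate:
        # Every certificate element must actually come from the input multiset.
        if counts.get(x, 0) == 0:
            return False
        counts[x] -= 1
    return sum(certificate) == target

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [4, 5]))  # True
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [3, 7]))  # False: 7 is not in the input
```

The point is the asymmetry: verification runs in a single pass over the data, independent of how hard the search for the certificate was.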

We know that P is a subset of NP (i.e. P <= NP) because a nondeterministic Turing machine can trivially simulate a deterministic Turing machine, but it is not known whether NP is a subset of P (i.e. NP <= P). If they are subsets of one another, then they are equal; if NP is not a subset of P, then they are not equal (i.e. P < NP; in that case P is a strict subset of NP). FWIW, most mathematicians believe P != NP.
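The "simulate nondeterminism deterministically" direction has a cost, and that cost is exactly what the P vs NP question is about. A sketch (again my own illustration): the obvious deterministic decision procedure for SUBSET-SUM tries all 2^n subsets, while checking any single candidate is cheap:

```python
from itertools import combinations

# Brute-force decision procedure for SUBSET-SUM: deterministically try all
# 2^n subsets, i.e. simulate every "path" of the nondeterministic machine.

def subset_sum(numbers, target):
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return True   # some computation path accepts
    return False              # no path accepts

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False (no subset works)
```

If P = NP, there would be a procedure for this (and every other NP problem) that avoids the exponential enumeration; nobody has found one, and most believe none exists.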

Regarding the equivalence of the problems treatable by a QC to NP: it seems some NP problems are treatable by a QC efficiently. Whether that applies to all NP problems (and, if not, which ones) is, to my knowledge, indeed an open question.