
Re: P = np

As humans we can quickly check a map to see which is the quickest route from A to B (NP), although we could also check the map just to verify that the route we have chosen is the quickest (P, trying to solve); looking at the map, you will find there are many routes from A to B. Intuitively, as humans we would disregard extremely long routes, saving time. For a general computer you would have to input all the data of the map, and the time complexity would be well over polynomial time using classical bits. For quantum computing, since it exploits the superposition of electrons which can be 0 and 1 at the same time, the quantum computer would be faster, as it can process more classical bits simultaneously; in theory N qbits is 2^n classic bits. The point of this is that if we can make computers more intuitive, able to compute at the same or higher speeds, then there is a possibility of showing P = NP experimentally; however, before this could even be taken into consideration we will need a thorough, deep understanding of how the human brain works...

Re: P = np

Originally Posted by gaar321

As humans we can quickly check a map to see which is the quickest route from A to B (NP), although we could also check the map just to verify that the route we have chosen is the quickest (P, trying to solve); looking at the map, you will find there are many routes from A to B. Intuitively, as humans we would disregard extremely long routes, saving time. For a general computer you would have to input all the data of the map, and the time complexity would be well over polynomial time using classical bits.

Eh, generally, such shortest path problems can be solved in polynomial time. You're probably thinking of the travelling salesman problem, but I do not think many humans, if any, can intuitively solve non-trivial instances of that problem quickly.
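To make the polynomial-time point concrete, here is a minimal Dijkstra sketch in Python on a made-up toy map (the node names and edge weights are invented for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Single-source shortest paths; graph maps node -> [(neighbor, weight), ...]."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path to u
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A small made-up road map: edge weights are travel times.
road_map = {
    "A": [("B", 7), ("C", 2)],
    "C": [("B", 3)],
    "B": [],
}
print(dijkstra(road_map, "A")["B"])  # shortest A -> B time: 5 (via C)
```

With a binary heap this runs in O((V + E) log V), i.e. comfortably within polynomial time, which is why finding the quickest route on a map is not the hard problem here.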

Re: P = np

just some january 1st after party thoughts ...

Originally Posted by gaar321

electrons which can be 0 and 1 at the same time

being "0 and 1 at the same time" is nonsense classically, and it's nonsense in QM too. QM probabilities are conditional transition probabilities. So, for example, the pure state |z-spin-up> (which according to your reasoning should represent a qbit "that is a 0") is exactly the same thing as the pure state |x-spin-up> + |x-spin-down> (which according to your reasoning should represent a qbit "that is both a 0 and a 1") ...
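The basis-change claim above can be checked numerically; a small sketch (with the usual 1/sqrt(2) normalization factor made explicit, and standard Pauli conventions assumed):

```python
import numpy as np

# z-basis: |z-up> = (1, 0).  The x-basis states are eigenvectors of Pauli X:
z_up = np.array([1.0, 0.0])
x_up = np.array([1.0, 1.0]) / np.sqrt(2)
x_down = np.array([1.0, -1.0]) / np.sqrt(2)

# |z-up> rewritten in the x basis is the equal superposition (|x-up> + |x-down>)/sqrt(2)
superposition = (x_up + x_down) / np.sqrt(2)
print(np.allclose(z_up, superposition))  # True: same physical state, two descriptions
```

The same vector is "definitely 0" in one basis and an equal superposition in another, which is exactly why "0 and 1 at the same time" is not a basis-independent statement.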

Originally Posted by gaar321

in theory N qbits is 2^n classic bits

this is not correct; a single qbit is a continuum with the topology of a 2-sphere, so it can in principle encode infinitely many bits. And even if we accept the orthodox formulation of quantum mechanics, there is still the possibility of so-called weak measurements, by virtue of which you can encode more than 1 bit in a single spin-like degree of freedom.
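The "continuum with the topology of a 2-sphere" corresponds to the usual Bloch-sphere parametrization; a quick numerical sketch (standard Pauli-matrix conventions assumed):

```python
import numpy as np

def bloch_vector(theta, phi):
    """Pure qubit state cos(t/2)|0> + e^{i p} sin(t/2)|1> -> point on the unit 2-sphere."""
    state = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    # Expectation values of the three Pauli matrices give the Bloch coordinates.
    sx = np.array([[0, 1], [1, 0]])
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]])
    return np.array([np.real(state.conj() @ s @ state) for s in (sx, sy, sz)])

# Every (theta, phi) pair lands on the unit sphere: a two-parameter continuum.
print(np.linalg.norm(bloch_vector(1.0, 2.0)))  # ≈ 1.0
```

Two continuous real parameters specify a pure state, which is the sense in which a single qbit is a continuum rather than one bit.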

Originally Posted by 2kaud

But quantum computation may lift this lid. To paraphrase, "it's computing, Jim, but not as we know it".

technologically maybe, but not conceptually. From a purely computational perspective, quantum computational models are nothing more than a form of parallelization (in fact they can be emulated arbitrarily well via classical models too), so they cannot change the consequences of a P/NP proof (quantum computational models are of course very, very interesting from many other points of view, though ...).
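As a concrete illustration of classical emulation, a one-qubit gate is just matrix arithmetic on an ordinary state vector (a toy sketch; a real n-qubit simulation needs a 2^n-entry array, which is where the exponential cost of emulation shows up):

```python
import numpy as np

# One-qubit Hadamard gate, simulated by plain matrix arithmetic on a
# classical machine: the state vector is just an ordinary array.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])   # |0>
state = H @ state              # H|0> = (|0> + |1>)/sqrt(2)
probs = np.abs(state) ** 2     # Born-rule measurement probabilities
print(probs)                   # [0.5 0.5]
```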

indeed, the fact that one can use a quantum system to beat the algorithmic complexity of a Turing machine does not mean that the latter is "classical" (whatever that actually means) and the former is quantum/strange/cool/etc... It simply means that you're using a different computational model, and different computational models can give different algorithmic complexities in the "classical" world too.

For example, take a long tube ending in a funnel, place it vertically and fill it with a viscous liquid. Then drop, simultaneously, at the top of the tube an array of spherical weights, say, balls with the same radius but possibly different masses. If the tube is sufficiently long these balls will reach a terminal speed proportional to their masses, so the balls will leave the funnel at the bottom in heaviest-to-lightest order.
So, we have constructed a "gravitational computer" capable of sorting in linear time ... should we conclude that gravitation implies a different/new/cool/whatever form of computation? I think not ..
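The gravitational computer can even be mimicked in a few lines; in this toy model (my own sketch, with made-up masses) each ball's exit time is inversely proportional to its mass:

```python
# Toy model of the "gravitational computer": in a viscous tube, terminal
# speed is proportional to mass, so exit time ~ tube_length / mass and
# the balls leave the funnel heaviest-first.
def gravitational_sort(masses, tube_length=100.0):
    exit_times = [(tube_length / m, m) for m in masses]  # lighter -> later
    return [m for _, m in sorted(exit_times)]            # read off in exit order

print(gravitational_sort([3.0, 1.0, 4.0, 1.5]))  # [4.0, 3.0, 1.5, 1.0]
```

Note that simulating the device on a classical machine falls back on an ordinary O(n log n) sort, which echoes the point above: the physics hides the computational work, it doesn't abolish it.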

Re: P = np

Superbonzo, awesome post/topic!

A few points:

Originally Posted by superbonzo

QM probabilities are conditional transition probabilities.

They are transition probabilities, but the "conditional" part is most likely not true: the process where one state collapses into some other state according to the squared moduli of the probability amplitudes (components) of the state vector is thought to be truly random. Otherwise, it would imply the existence of so-called "hidden variables" (where, if you knew them, you would be able to deterministically predict the outcome of a measurement; it's the measurement that leads to the transition, or "collapse"), but the predictions of such a theory do not agree with experimental results.

Originally Posted by superbonzo

So, for example, the pure state |z-spin-up> ( that according to your reasoning should represent a qbit "that is a 0" ) is exactly the same thing as the pure state |x-spin-up> + |x-spin-down> ( that according to your reasoning should represent a qbit "that is both a 0 and a 1" ) ...

Well, you can't do that. A qubit's state is represented as a unit vector in a 2D complex Hilbert space, so you can (disregarding the details related to the complex numbers) think of it as a radius vector on a unit circle (the 2-sphere topology comes from the fact that the components of the vector are complex, so it's really 4D over the reals, but then there is the constraint that the total probability equal 1, plus the irrelevance of the global phase).

Picture a 2D real-plane visualization of a qubit's state (note this is not the 3D Bloch sphere representation): the |+> and |-> vectors form a different orthonormal basis, and these states are distinct from the states |0> and |1>.

Once you pick a basis, you can assign its basis vectors labels such as "0" and "1", thus defining the |0> and |1> states for the qubit. If you change (say, rotate) the basis, you can express the |0> and |1> vectors as superpositions (vector sums) of the new basis vectors, but you can't call the new basis vectors "0" and "1", as they are completely different states (themselves superpositions of |0> and |1> in the {|0>, |1>} basis).
The key is that, once a measurement is made (which really means "interaction which reveals something about the state"), depending on the measuring apparatus, a basis is defined on a qubit. By a non-deterministic process that is still not really understood, the qubit then collapses from its current state (generally a superposition) to one of the basis states, with probability given by the squared modulus of the corresponding component of its original state (expressed in that basis), so you either get (or interpret) your answer as a "0" or as a "1" in the end.
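The collapse rule just described (probability = squared modulus of the component in the measurement basis) is straightforward to simulate; a sketch with an invented `measure` helper:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state, basis):
    """Project `state` onto an orthonormal `basis` (given as columns) and
    sample an outcome with Born-rule probabilities |<basis_i|state>|^2."""
    amplitudes = basis.conj().T @ state
    probs = np.abs(amplitudes) ** 2
    outcome = rng.choice(len(probs), p=probs)
    return outcome, basis[:, outcome]  # outcome label and post-collapse state

# Equal superposition measured in the computational {|0>, |1>} basis:
state = np.array([1.0, 1.0]) / np.sqrt(2)
computational = np.eye(2)
outcome, collapsed = measure(state, computational)
print(outcome)  # 0 or 1, each with probability 1/2
```

Repeating the measurement on fresh copies of the state yields roughly half "0"s and half "1"s, while measuring in the {|+>, |->} basis instead would return "+" every time.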

Originally Posted by superbonzo

in fact they can be emulated arbitrarily well via classical models too

So, in the context of the above, you can emulate them on a classical computer (where "classical" means built on the principles of classical physics), but if you are using a classical (again, in the classical physics sense) pseudorandom number generator, then you can't do it arbitrarily well (or accurately).

Re: P = np

>>They are transition probabilities, but the "conditional" part is most likely not true

Firstly, "conditional" means conditional with respect to the choice of an observable (i.e., a Hilbert space operator with a complete spectral measure of projectors). So, claiming that a state is both "here and there" is nonsense, because the result of the wavefunction collapse depends on the specific observable used.

For example, consider the wavefunction of a point-like particle, that is, a square-integrable complex function on the real line. Now, in some physics textbooks |psi|^2 is erroneously interpreted as "the probability density of <finding> the particle here or there", leading students to erroneously conclude that you cannot "find" the particle in a region where that probability is zero.

The problem is, just measure the observable |psi+psi'><psi+psi'| - |psi-psi'><psi-psi'|, where psi' is *any* wavefunction, and you're free to find your particle wherever you want with high probability. So, unless you consider some observables "more important" than others, you cannot conclude that the result of a collapse has any meaning with respect to the state before the measurement occurred; hence it's nonsense to say that the particle *was* both here and there. You can only say that a transition will occur with some probability given a specific observable.

>>the probability amplitudes (components) of the state vector is thought to be truly random. Otherwise, it would imply the existence of so-called "hidden variables" (where, if you knew them, you would be able to deterministically predict the outcome of a measurement; it's the measurement that leads to the transition, or "collapse"), but the predictions of such a theory do not agree with experimental results.

no experiment can verify "true" randomness (whatever such a thing is supposed to mean), and no experiment has excluded the possibility of deterministic descriptions of quantum reductions (yes, things like Bell's inequalities do pose limits on them, though ...). That said, there are also other alternatives, most notably so-called dynamic collapse theories.

>>"hidden variables" (where, if you knew them, you would be able to deterministically predict the outcome of a measurement; it's the measurement that leads to the transition, or "collapse")

this is not true; determinism does not imply predictability, and vice versa (although for some obscure reason they're often confused ...)

>>The key is that, once a measurement is made (which really means "interaction which reveals something about the state"), depending on the measuring apparatus, a basis is defined on a qubit[...]

not true, such a basis is defined only if the spectrum is non-degenerate. In any case, if the "(pure) state" of a system is the thing you need to describe, as best you can, the outcome of all possible observations on that system, then the pure state space of a qbit is the space of rays in the Hilbert space C^2, aka a 2D sphere.

Anyway, maybe you are claiming that once a "spin axis" is fixed, the outcome of observation will in any case always be a 0 or a 1? As I said above, this is also wrong; are you aware of weak measurements?

>>non-deterministic process that is still not really understood,

if it's not really understood, how can you claim it is non-deterministic? Despite its enormous success, quantum mechanics is still a deeply incomplete theory, both foundationally and phenomenologically.

>>So, in the context of the above, you can emulate them on a classical computer (where "classical" means built on the principles of classical physics), but if you are using a classical (again, in the classical physics sense) pseudorandom number generator, then you can't do it arbitrarily well (or accurately).

this is just wrong; people insist on confusing "classical" with finite. You can emulate arbitrarily well a quantum computer with classical physics; you just need essentially infinitely many degrees of freedom to achieve the same "level" of parallelism. Again, just search for dynamic reduction theories (GRW theory being the first) for examples.