The Blog of Scott Aaronson

If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

This entry was posted
on Friday, May 2nd, 2008 at 5:31 pm and is filed under Complexity, GITCS.

In section 1.2 of lecture 13, I don’t think it increases your probability of being executed to know the name of a prisoner who isn’t executed.

Even in the example where 999,998 names of free prisoners are revealed, your probability of being executed is still 1/1,000,000. This is exactly like the Monty Hall problem. Assigning a high chance that the remaining prisoner is going to die is exactly like assigning a high chance that the remaining closed door has the prize.

To clarify, it doesn’t increase your probability of being executed to be told the name of one of the *other* two prisoners who won’t be executed. The exaggerated example is different, because the guard tells you 999,998 names at random, so if you were free, you would have expected to see your name with high probability.
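The distinction between the two reveal protocols is easy to check numerically. Here is a small Monte Carlo sketch in Python (scaled down to 100 prisoners so it runs quickly; the function and parameter names are purely illustrative):

```python
import random

random.seed(0)

def simulate(n_prisoners, n_reveal, guard_avoids_you, trials=50_000):
    """Estimate P(you are executed | your name wasn't among those revealed)."""
    you = 0
    survived_reveal = executed = 0
    for _ in range(trials):
        doomed = random.randrange(n_prisoners)
        not_executed = [p for p in range(n_prisoners) if p != doomed]
        if guard_avoids_you:
            # Protocol 1: the guard deliberately never names you.
            pool = [p for p in not_executed if p != you]
        else:
            # Protocol 2: the guard names non-executed prisoners at random.
            pool = not_executed
        revealed = random.sample(pool, n_reveal)
        if you not in revealed:
            survived_reveal += 1
            executed += (doomed == you)
    return executed / survived_reveal

# 100 prisoners, 98 names of non-executed prisoners revealed.
print(simulate(100, 98, guard_avoids_you=True))   # stays near 1/100
print(simulate(100, 98, guard_avoids_you=False))  # jumps near 1/2
```

Under the first protocol, not hearing your name carries no information, so the conditional probability stays at 1/n; under the second, surviving the reveal is itself evidence, and the probability climbs to about 1/2.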

This is somewhat off topic, but I’m going to be doing computer science research up the road from MIT this summer, and I was wondering: how do I find out about the nifty theory, algorithms, and math talks going on at MIT while I’m in the area?
Thanks!
-Carter

I agree, the prisoners example in lecture 13 is incorrect as stated. Because the prisoner asks the guard a question the guard can answer regardless of whether the prisoner is the one to be executed, his probability of being executed is still 1/3 even after the guard answers. This is exactly analogous to Monty Hall opening a door with a goat behind it — the chance that you have the lucky door is still 1/3, and this is -exactly- why switching is the correct thing to do. From the prisoner’s perspective, he should gladly switch places with the other prisoner still in jeopardy.

I have to agree that Section 1.2 is just wrong. Suppose there are 1,000,000 prisoners and only one will be chosen uniformly at random to be executed, and you ask the guard: “Tell me 999,998 people who won’t be executed.” Upon learning this information, your probability does not somehow miraculously jump to 1/2. On the contrary, your probability is still 1/1,000,000. The other 99.9999% of the time, it will be the other poor fellow.

Scott: In Lecture 12, where you discuss the Baker-Gill-Solovay argument, the lecture notes say “On the other hand, we can also create an oracle B such that P^B = NP^B”, and then go on to construct an oracle B which seems to require P^B ≠ NP^B, assuming I’m understanding the argument correctly. Is this a typo, or am I having a thinko?

Thanks, everyone, for catching embarrassing errors — I wish I had time to go through more carefully prior to posting! (And as I said before, don’t use these notes to build a bridge or a nuclear sub or anything.)

Regarding your March ’08 Scientific American article: I don’t understand the idea of using a CTC to get around the huge runtime of NP-complete problems. This only works if you count the user-perceived runtime of the system. But that’s no different from putting the user into suspended animation after starting the task and waking them back up a million years later, when the program is about to finish. In either case a clock inside the machine still registers a million years. So I don’t see the CTC argument as valid. What am I not getting?

John: No, the argument is different from that. We specifically don’t exploit the user-perceived runtime (which would be trivial), but rather Deutsch’s requirement of “causal consistency.” In other words, if M is the quantum operation acting inside a CTC, then nature needs to find some quantum state ρ such that M(ρ)=ρ — and even if M is polynomial-time computable, one can still set things up so that finding ρ amounts to solving an NP-complete or even PSPACE-complete problem. For more details, see Section 8 of my survey article in SIGACT News — or my actual paper with Watrous, which as it happens, should be finished any week now! (Wish I could retrieve it from the near future for you.)
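To illustrate just the fixed-point idea in a toy classical setting (this is my own sketch, not the construction in the paper; the real argument uses quantum operations acting on density matrices):

```python
import numpy as np

# Classical toy analogue of Deutsch's causal-consistency condition:
# given a map M acting inside the CTC, nature must find a state rho
# with M(rho) = rho.  Here M is just a 2x2 column-stochastic matrix,
# and rho a probability distribution, found by power iteration.
M = np.array([[0.9, 0.5],
              [0.1, 0.5]])

rho = np.array([1.0, 0.0])
for _ in range(1000):       # iterate until rho converges to the fixed point
    rho = M @ rho

print(rho)                  # satisfies M @ rho == rho (here ~[5/6, 1/6])
```

The complexity-theoretic punch line (not shown here) is that even when M itself is easy to compute, one can engineer M so that any fixed point encodes the answer to a hard problem.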

You say that “E[XY] = E[X] E[Y] … only if X and Y are independent.” That’s not true. It IS true that “E[XY] = E[X] E[Y] IF X and Y are independent” and “E[XY] = E[X] E[Y] if and only if X and Y are uncorrelated” (essentially by definition).
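A standard example of dependent-but-uncorrelated variables makes the distinction concrete; here is a quick numerical check (the specific choice of X uniform on {−1, 0, 1} and Y = X² is just one textbook example):

```python
import random

# Dependent but uncorrelated: X uniform on {-1, 0, 1}, Y = X**2.
# Y is a function of X (so maximally dependent), yet
# E[XY] = E[X**3] = 0 = E[X] * E[Y], so X and Y are uncorrelated.
random.seed(0)
samples = [random.choice([-1, 0, 1]) for _ in range(100_000)]
ex  = sum(x         for x in samples) / len(samples)
ey  = sum(x * x     for x in samples) / len(samples)
exy = sum(x * x * x for x in samples) / len(samples)
print(ex, ey, exy)  # ex ~ 0, ey ~ 2/3, exy ~ 0, so exy ~ ex * ey
```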

Are the rules of probability given in lecture as ‘obvious’ as you make them look? I am puzzled, for two reasons:
1- I thought Kolmogorov was such a genius because he came up with them?
2- I’ve read somewhere, although I don’t remember the exact details (and don’t know enough math to understand them anyway, something about non-measurable sets), that by weakening those rules you could explain Bell’s inequalities without the spooky implications. Is that true? I think the author was Pitowsky.

Zelah, very interesting question! No, we don’t know that NP in P/poly would have any such quantum implication. But now that I think about it, a sufficiently strong quantum hypothesis (basically, that “QAM provers can be simulated by poly-size quantum circuits with quantum advice”) would imply that QMA=QAM, for basically the same reason NP in P/poly implies MA=AM. (I.e., the QMA machine could first guess the quantum circuit and then use it to simulate the QAM prover.) To get QMA=QIP(2) or QMA=QIP(3)=QIP, one would need still stronger and less plausible hypotheses.

1 – Kolmogorov was a genius for many reasons, but realizing that Pr[not(A)]=1-Pr[A] was not one of them. His achievement was not showing that this and a few other statements are true of probabilities (which is obvious), but (basically) that they imply everything else you’d want and are therefore suitable as axioms (crucially, even with infinite probability spaces).

2 – I can believe it, but changing probability theory (or worse yet, logic) to explain quantum mechanics always struck me as a grave offense against poor Occam. It’s just so much more drastic than is needed for the job! Why not simply say that we have probabilities that obey the usual Kolmogorov axioms, and that we calculate those probabilities from a quantum state vector?

I always enjoy reading Scott’s blog, and GITCS seems like a very nice course in complexity theory. However, I have to say that I don’t think “Great Ideas in Theoretical Computer Science” is a very appropriate title for the course. It suggests something much more general. In some circles, Theoretical Computer Science seems to be used more or less as a synonym for complexity theory, but there is a large community that interprets it much more broadly. For example, look at the web page of the Theoretical Computer Science journal.

Simon: Well, the course title isn’t All Great Ideas in Theoretical Computer Science. 🙂 Seriously, the question came up, and future iterations of the course will likely include something about (say) distributed computing and concurrency. The choice of topics this semester was sharply limited by my own competence, as well as by time constraints.

There’s some precedent for the title, too. The analogous course at Carnegie Mellon, taught previously by Steven Rudich and now by Luis von Ahn, has been called Great Theoretical Ideas in Computer Science since at least 1997.

Personally I see QMA = P^#P as extraordinarily unlikely. Indeed, I don’t believe QMA even contains coNP (it’s easy to show this is true in the oracle setting, though obviously that isn’t decisive). Still, I don’t think anyone has considered QMA in the setting of additive approximations to #P-complete problems (the Jones polynomial, the Potts model, etc.), and it would be interesting to know if you get anything new there.

Your overview of the envelopes paradox in 1.4 is slightly inaccurate, in that it seems to suggest that the ONLY problem with this setting is that you can’t uniformly select an integer at random.

In fact, you can define a perfectly legitimate probability distribution on the positive integers, and STILL get the (apparent) paradox. Let’s say you pick a positive integer k with probability (1/2)^k, and the envelopes contain 3^k and 3^(k+1) dollars, respectively.

You open an envelope, observe its contents, and are faced with a decision – to switch or not to switch. What do you do?

If you see $3, you should definitely switch.

If you see $3^m with m>1, the other envelope contains either $3^(m-1) or $3^(m+1), depending on whether k was m-1 or m. The former is twice as likely as the latter, so the probabilities are 2/3 for the lower amount and 1/3 for the higher – but that still gives you a positive expected net gain of 2*3^(m-2). Switch!

It’s true that there’s no real mathematical paradox here (the gain from switching turns out to be a random variable with no expectation), but the situation is slightly more subtle than the mere assumption of uniform distribution.
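The setup above is easy to simulate. The sketch below draws k geometrically (Pr[k] = (1/2)^k) and tabulates the empirical gain from switching, conditioned on the amount observed (the sampling loop and variable names are my own):

```python
import random
from collections import defaultdict

random.seed(0)
gains = defaultdict(list)   # observed amount -> list of gains from switching

for _ in range(300_000):
    # Draw k = 1 w.p. 1/2, k = 2 w.p. 1/4, ...  (geometric distribution)
    k = 1
    while random.random() < 0.5:
        k += 1
    envelopes = [3**k, 3**(k + 1)]
    random.shuffle(envelopes)           # open one of the two at random
    seen, other = envelopes
    gains[seen].append(other - seen)

# Conditioned on observing $3^m with m > 1, the average gain from
# switching is positive (about 2 * 3^(m-2)), even though the
# unconditional gain has no expectation.
for amount in sorted(gains)[:4]:
    g = gains[amount]
    print(amount, sum(g) / len(g))
```

Running this shows the per-amount averages matching the 2*3^(m-2) formula (e.g. about 2 when you see $9, about 6 when you see $27), which is exactly the apparent paradox: every conditional expectation favors switching, yet no overall expectation exists.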