The Blog of Scott Aaronson

If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

Gus Gutoski took notes for this “all about cryptography” lecture, and they were so good that I’ve posted them with only moderate editing and joke-reinsertion. I’ve thereby provided you, my readers, with the unique opportunity to experience my lecture as Gus himself experienced it — as if you actually were Gus, sitting in a real Waterloo classroom taking notes.

For those of you who feel the need to prepare yourselves for this experience, here’s a recap of all the lectures so far:

Update: Preparing these notes is a sh&tload of work for me. So dude — if you want me to keep doing it, please let me know in the comments section if you’re actually reading the notes and deriving any benefit therefrom. Constructive criticism would also be fantastic. Thanks very much!

This entry was posted on Monday, December 11th, 2006 at 10:09 am and is filed under Complexity, Democritus.

I’m not sure if this is just a transitory server thing or if the links are mistyped, but as I write this (morning of December 11th), all the lecture links yield “404 Not Found: The requested URL /blog/lecX.html does not exist” error messages.

In addition to Penrose’s The Emperor’s New Mind (whose wonderfully enigmatic concluding paragraph is this: “Adam felt acutely embarrassed. Whatever they should have done, they should not have laughed.”), beginning students might enjoy David Deutsch’s highly readable The Fabric of Reality.

Admittedly, neither of these books will teach a student how to calculate.

Also recommended is Jonathan Israel’s new Enlightenment Contested. Israel’s book is far too long to be assigned as a text, but any student who read the Preface, the Introduction, and the Postscript would gain a solidly grounded appreciation of the motivations and historical circumstances of people like Newton and Leibniz. Also, the chapter “Newtonianism and Anti-Newtonianism in the Early Enlightenment: Science, Philosophy, and Religion” is mandatory reading, due to the many evident parallels to the present era.

The point being, our present era is surely no less interesting than Newton’s! IMHO, Prof. Israel’s book provides an excellent remedy to the watered-down & thought-deadening histories of mathematics, science, and technology that students are usually assigned.

I’ve read up to Lecture 6 so far, and I’m finding them immensely enjoyable. The presentation and the problems are way better than the classes I took years back. Clearest proof of Gödel’s Incompleteness Theorem I’ve seen, too.

The notes are really great. I very much like the way you point out topics to pursue later. Most teachers only say something like “there is something very interesting you will perhaps encounter later” but don’t state its name, so nobody is able to look it up. You instead present very concise teasers but fully state the name of the problem.
Keep at it!

Thanks. I honestly never knew that meaning of “spook” — I just looked it up, and found it in some dictionaries but not others. I assume you know full well that I meant an undercover agent or spy! Even though my sentence said exactly what I thought it did, I decided to change it. I’m reminded of the government employee who was inadvertently fired for using the word “niggardly”…

“…do we ever really talk about the continuum, or do we only ever talk about finite sequences of symbols that talk about the continuum?”

It seems to me the latter. To talk about the continuum, we’d need \aleph_1 names, which we don’t have. I think I’m missing something, though.

Nick: I agree with you, but in the interest of full disclosure, I should tell you that there have been many great thinkers over the millennia who wouldn’t! The key question seems to be whether human beings can form a direct intuition of (say) the real number line, unmediated by any symbolic description of it. For what it’s worth, my own answer is simple: maybe Roger Penrose can form such an intuition, but I can’t! Sure, I can form an intuition of something that looks like the real number line, but then again, it might just be the rationals (the two look pretty similar to me).

One thing I wondered about is what would P=BPP imply about the existence of good pseudo-random generators? You mentioned in the last lecture notes that a good enough pseudo-random generator would imply P=BPP but is there any form of converse?

Scott: “The key question seems to be whether human beings can form a direct intuition of (say) the real number line, unmediated by any symbolic description of it.”

Ah, so that’s how they’d go about doing it.

I’ll have to take your seminar’s advice and reread Penrose; I’m afraid I read _Shadows of the Mind_ about three years ago — but I don’t remember any of it, so I can read only _The Emperor’s New Mind_ and still have a clean conscience.

Scott,
I think it’s great that you are taking the time to make these notes available for people who wouldn’t otherwise be able to learn this stuff. I haven’t read all of them yet, but I will do so over the holidays. Seriously though, don’t stress yourself out just to provide free education to anonymous strangers like me! You’re too kind 😉

One thing I wondered about is what would P=BPP imply about the existence of good pseudo-random generators? You mentioned in the last lecture notes that a good enough pseudo-random generator would imply P=BPP but is there any form of converse?

Excellent, excellent question! The short answer is that right now, we only know very weak converses. So for example, we know that if P=BPP (or more precisely PromiseP=PromiseBPP, which is almost the same), then NEXP requires circuits of half-exponential size (that is, size f(n) where f(f(n)) is exponential). And being a circuit lower bound, that will give you some sort of pseudorandom generator: maybe a PRG computable in nondeterministic half-exponential time? Whatever it is, it will be crappy but better than what we know now.

The fundamental problem is that in principle, there might be a way to derandomize BPP that doesn’t involve going through a pseudorandom generator. BPP is a class of languages, and it would suffice to find P algorithms for all the languages that happen to be in it, whether or not one can “fool” the BPP machines themselves! Having said that, all the actual proposals we have for derandomizing BPP do involve going through a pseudorandom generator.
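[Editor’s note: to make the “going through a pseudorandom generator” step concrete, here is a toy Python sketch, not from the original thread. The LFSR-style generator is a placeholder standing in for a real complexity-theoretic PRG; actual derandomization needs a generator that fools the relevant circuits. The idea is just that if the seed is short, you can afford to enumerate every seed, run the randomized algorithm on each pseudorandom string, and take a majority vote.]

```python
# Toy illustration of PRG-based derandomization of a BPP-style algorithm:
# run the algorithm on the PRG's output for EVERY short seed, then take a
# majority vote over the 2^seed_len deterministic runs.

from itertools import product

def toy_prg(seed_bits, out_len):
    """Stretch a short seed into out_len bits (toy LFSR-like stretcher,
    NOT a generator with any proven hardness properties)."""
    bits, state = [], list(seed_bits)
    for _ in range(out_len):
        new = state[0] ^ state[-1]
        bits.append(new)
        state = state[1:] + [new]
    return bits

def derandomize(randomized_alg, x, seed_len, rand_len):
    """Run randomized_alg(x, coins) on every seed's PRG output; majority wins."""
    votes = sum(randomized_alg(x, toy_prg(list(s), rand_len))
                for s in product([0, 1], repeat=seed_len))
    return votes * 2 > 2 ** seed_len  # strict majority of 2^seed_len runs

# Example: a "randomized" evenness test that happens to ignore its coins,
# so the majority vote trivially recovers the right answer.
is_even = lambda x, coins: x % 2 == 0
print(derandomize(is_even, 10, seed_len=4, rand_len=8))  # True
print(derandomize(is_even, 7, seed_len=4, rand_len=8))   # False
```

Note the exponential blow-up: enumerating seeds costs 2^seed_len runs, which is why everything hinges on how short a seed a good PRG can get away with.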

“The key question seems to be whether human beings can form a direct intuition of (say) the real number line, unmediated by any symbolic description of it. For what it’s worth, my own answer is simple: maybe Roger Penrose can form such an intuition, but I can’t!”

In the way of constructive criticism, I suggest adhering to the general standards of classic literature. That is, carry your words with poise and purpose, like a gentleman. Exhibit eloquence in every phrase and listen carefully to your own phonetic melody. By contrast, the casual jesting, lightly musing, unpoetic tone detracts from your dignity, destroying the air of majesty which would normally surround your lecture notes.

I read all your lecture notes soon after you post them, and find them very entertaining. I appreciate the time you spend on them. Thanks.

I did have one piece of constructive criticism for your lecture notes, on your proof of Gödel’s incompleteness theorem in lecture 3.

The idea that “Gödel’s incompleteness theorem basically boils down to the halting problem” is one I’ve heard from a few current and former CS grad students at Berkeley. I’m not sure where that perspective originates, but I think it misses something.

Your notes mention something about translating the halting problem into a statement about integers, but just as a parenthetical remark. I think the “technical tour de force” part of proving Gödel’s theorem lies there: formally identifying proof checking or program halting (or any primitive recursive predicate) with sentences about natural numbers in the first-order language with plus and times. The diagonalization proof at the end is just the punchline.

The halting problem provides a good way to phrase the incompleteness theorem’s required diagonalization argument, simpler than Gödel’s original proof, and maybe better than whatever way they do it in math class. My point is just that the diagonalization argument is not the entire theorem. (You skipped the hard part.)

For me, the greatest testament to Gödel’s legacy is that what you call “the hard part” of his theorem is no longer hard. From a CS perspective, it’s obvious that whether a computer program halts is a mathematical question, one that could ultimately be “compiled” into a question about integers — in exactly the same way it’s obvious that a C program can be compiled into assembly language. In both cases, formal proofs can be given (and verified by machine), but are unfit for human consumption.

In other words: it took a while, but civilization eventually caught up to where Gödel was in 1931!

The lectures really only get better from here, and you should have reasonably good notes for most of them after lecture 8, no?

To all: rest assured, Scott will be able to fit at least one complexity class into every lecture. It’s a shame these couldn’t’ve gotten done more ‘real-time’ so that the people actually taking the class could comment, but such is life.

Compiling a C program into assembly is conceptually easy because it just involves translating the syntax of one imperative language into a more basic one. The procedure is complicated only because there’s so much stuff to write down.

On the other hand, I would argue that modeling an arbitrary computation by using a first order sentence about natural numbers is less obvious. The procedure is also less complicated in terms of what you need to write down to explain to someone else how to do it. The following example would likely suffice:

Given a Turing program P, P halts iff there exist integers C and n (C a code for a sequence of integers C_0, …, C_n, each of which codes a Turing machine configuration) such that
– C_0 codes a configuration in the initial state, and
– C_n codes a configuration in the halting state, and
– for each j less than n, the configuration coded by C_(j+1) is the result of applying P to the configuration coded by C_j.

Now there’s still some work to do, (e.g. saying what it means for an integer to code a “configuration in the initial state”), but at least now you’ve given enough of a start so that the problem looks more like compiling.
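[Editor’s note: the certificate above can be made concrete with a toy verifier, sketched below. This is my illustration, not part of the thread; configurations are plain Python tuples rather than integer codes, and Gödel numbering would be the extra step that turns each tuple into a single integer.]

```python
# Toy check of the certificate idea: P halts iff there is a finite sequence
# of configurations C_0, ..., C_n with C_0 initial, C_n halting, and each
# C_(j+1) obtained from C_j by one step of P.

def step(program, config):
    """Apply one step of a toy Turing machine. config = (state, tape, head),
    where tape is a tuple of (position, symbol) pairs (blank = 0)."""
    state, tape, head = config
    tape = dict(tape)
    sym = tape.get(head, 0)
    if (state, sym) not in program:   # no applicable rule: machine is stuck
        return config
    new_state, write, move = program[(state, sym)]
    tape[head] = write
    return (new_state, tuple(sorted(tape.items())), head + move)

def is_halting_certificate(program, configs, halt_state):
    """Check C_0 initial, C_n halting, and C_(j+1) = step(P, C_j) for all j."""
    if configs[0][0] != "start" or configs[-1][0] != halt_state:
        return False
    return all(step(program, configs[j]) == configs[j + 1]
               for j in range(len(configs) - 1))

# A two-step machine: write 1, move right, then halt.
prog = {("start", 0): ("moving", 1, +1), ("moving", 0): ("halt", 0, 0)}
c0 = ("start", (), 0)
c1 = step(prog, c0)
c2 = step(prog, c1)
print(is_halting_certificate(prog, [c0, c1, c2], "halt"))  # True
```

The point of the exercise is that this verifier is a simple bounded check, exactly the kind of predicate that can then be pushed down into arithmetic.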

I’ve heard model theorists say that Gödel’s most important result was showing that the first-order language with plus and times can be used as a model of computation. Is there really a big-picture perspective from whose point of view that part of the theorem is obvious? I’d enjoy being enlightened.

Benjamin: Alright, I agree that it takes some ingenuity to show that first-order logic with plus and times can be used as a model of computation.

But here’s what a computer scientist would say: if FOL with plus and times is too hard to prove universal, then just throw in some more operations (exponentials, division, the floor function, etc.) until it becomes obvious! If you do this, you’ll have answered the conceptual question of whether sufficiently powerful formal systems can talk about their own provability, leaving only the technical question of exactly how much power is sufficient.
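[Editor’s note: a small worked illustration of this point, not from the original exchange. Once exponentiation, division, and floor are among the primitives, coding a finite sequence of numbers into one number becomes immediate: pick a base b larger than every C_j and read off digit blocks.]

```latex
% With exponentiation, division, and floor available, a single number C
% codes the sequence C_0, ..., C_n as digits in a large base b:
C \;=\; \sum_{j=0}^{n} C_j \, b^{\,j},
\qquad\text{so that}\qquad
C_j \;=\; \left\lfloor \frac{C}{b^{\,j}} \right\rfloor \bmod b .
% With only plus and times, Goedel instead needed his beta-function and
% the Chinese Remainder Theorem to simulate this sequence coding.
```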

Hi Scott,
Maybe I’m missing the section or lecture where you do this, but you should really explain that ‘solving NP-complete problems on average’ means solving all NP-complete problems with respect to all P-sampleable distributions, not just e.g. solving 3-colorability with respect to the uniform distribution (which is easy).

If you don’t make this distinction it’s problematic to assert (as in your diagram) that solving NP-complete problems on average is harder than breaking cryptosystems.

An elaboration of Scott’s response to Johan’s question: For the class MA, which is just NP with probabilistic verification, it is known that any non-trivial derandomization of MA with a small amount of advice implies non-trivial pseudo-random generators with a small amount of advice. So there are certainly indications that derandomization may be as hard as finding pseudo-random generators, which is rather surprising on the surface of things.

Simply put: If, as Scott says, The Simpsons is what justifies the existence of television as a medium, then resources like these course notes are what justify the existence of the Internet as a medium.

Seriously, where else can one get an entertaining introduction to set theory, mathematical logic, computability theory, complexity theory, quantum computing, cryptography, and the philosophy of mind and language all at once?


I don’t know if any of you read Philip Roth’s The Human Stain, but one summary of the plot is

In April 1996, at 69, Coleman Silk is still professor of classical literature at Athena College…Silk is accused of having made a racist remark about two African-American students who were absent from his class and whom he had never seen before … He called them “spooks” — suggesting they were ghosts, but not considering that spooks is also an old-fashioned epithet for blacks. In the ensuing upheaval, several of his colleagues turn against Silk and support the African-American students. Silk feels monstrously wronged, and though he could have gone on teaching, he decides to resign…

It didn’t seem realistic when I read it, but I may have to change my mind following this thread.

For future reference, you can encode < as &lt;, > as &gt;, and & as &amp;. The problem you experienced will occur with any browser; the WordPress comment editor doesn’t automatically encode characters normally used in specifying HTML markup.