Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch

Friday, March 07, 2008

AI overhyped!!!!!!!!!!!!!!!!!!!!!!!!!!!

(A commenter left an off-topic comment on my last entry.
INSERT GENDER-NEUTRAL PRONOUN HERE SINCE I DO NOT KNOW
GENDER OF COMMENTER raised a question of interest.
I have made a blog entry out of it.
(NOTE- if you want a topic discussed on this blog,
better to email Lance or me directly rather than make
an off-topic comment.))

Kurzweil and others in AI think that computers will
surpass human intelligence in about 30 years.
For example, see
this
overly optimistic entry on wikipedia.
The media also seems to over-hype things.
For example, when Deep Blue beat Kasparov there were headlines
about how computers are smarter than people.
These types of articles seem to overlook the computational
complexity of some of these problems.
(Though one can say that we work on worst-case and asymptotic results
while they work on "real world problems".)

My impression of Computer Chess is that they originally wanted
a domain where computers could learn and adapt, but winning
became too important and computers became too fast, so
that (very clever) brute force searches took over.
They may have more luck with the game of Go, which likely
cannot be won with (even very clever) brute force.
However, the whole story seems to show that
computers are nowhere near human intelligence.
(I grant that these terms are hard to define.)
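The "(very clever) brute force" that took over computer chess can be sketched as depth-limited minimax with alpha-beta pruning. This is a minimal illustration, not any real engine's code; `moves`, `apply_move`, and `score` are hypothetical stand-ins for a real engine's move generator, board update, and evaluation function.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, score):
    """Best achievable evaluation from `state`, searching `depth` plies ahead."""
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state)          # leaf: fall back to static evaluation
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False,
                                       moves, apply_move, score))
            alpha = max(alpha, best)
            if alpha >= beta:        # opponent would never allow this line: prune
                break
        return best
    else:
        best = float("inf")
        for m in legal:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True,
                                       moves, apply_move, score))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

The point of the comment above is that this scheme wins at chess by exhaustive lookahead, not by anything resembling learning or adaptation.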

Sheesh. Not this old, tired meme. I've never understood why AI researchers were willing to publicly aim for such a paltry goal. We already have machines that are as intelligent as humans; why not shoot for something better?

And if we're really honest about it, we have to admit that computers (in the aggregate) surpassed human intelligence long ago. Sure, computers aren't as good as humans at understanding spoken language, playing Go, or proving Fermat's last theorem, but do you think you could do what Google does? You think the New York Stock Exchange is run by HUMANS? Fortunately, computers aren't as interested in primate power games as us hairless apes, so we'll be able to use them as cerebral prostheses for the foreseeable future.

Why do people give timetables for predictions like this? What are we supposed to do with this information? Buy Intel stock? When did a far-out prediction about a spectacular scientific achievement in the distant future ever help plan anything?

As someone who works both in AI (in the original sense) and complexity, I have two opinions to express:

1. Computational complexity has little or nothing to do with "solving" AI.

2. 30 years is far too pessimistic. I make it about 10. And it is hard to overhype the changes this will bring about in society.

Incidentally, Go programs have gotten much, much stronger in the past couple of years, by using Monte-Carlo techniques, oddly enough. At this point it is not clear how much further progress can be made by these programs, but it is no longer the case that an average amateur can trounce the best programs. Not that that has anything to do with AI! We will have real AI when we reverse engineer enough of the brain.
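The Monte-Carlo idea behind those stronger Go programs is easy to sketch: estimate how promising a position is by playing many random games from it and averaging the outcomes. This is a minimal illustration of the evaluation step only, with `playout` as a hypothetical stand-in for random self-play under real Go rules (actual programs combine this with tree search).

```python
import random

def monte_carlo_value(position, playout, n=1000, seed=0):
    """Estimate the win probability of `position` from n random playouts.

    `playout(position, rng)` plays random moves to the end of the game
    and returns 1 for a win and 0 for a loss.
    """
    rng = random.Random(seed)
    return sum(playout(position, rng) for _ in range(n)) / n
```

The appeal for Go is that a playout needs no hand-tuned evaluation function, only the rules, which sidesteps the problem that static evaluation of Go positions is notoriously hard.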

1. I strongly disagree with the idea that humans have been surpassed by computers. Computers, imho, are merely extensions of human intelligence. I believe Google is simply a large version of Babbage's analytical engine; in other words, it has no intrinsic intelligence. It merely demonstrates the fact that groups of people working to codify and reduce their knowledge into technology are much smarter than individuals. I think for computers to be truly intelligent there is going to have to be some sort of paradigm shift in design... maybe running programs that have the 'intrinsic randomness' that Wolfram talks about in NKS are what it will take, maybe it will be something completely different.

2. The idea of self-improving computers supported in the linked wikipedia article commits, I think, a major fallacy. Namely, it assumes that the knowledge necessary for a hyper-intelligent computer to improve itself can be deduced a priori, simply through internal action of the machine. It disregards the significant importance that improved actuators have in the accumulation of knowledge and the generation of new scientific paradigms. For humans these actuators take the form of tools like microscopes or new scientific methods like linear algebra. Historically, the inspiration we take from our world has been far more important in changing thought. Which gets back to my first point: computers, at the moment, are just improved actuators, and we can't extrapolate from their existence to a completely novel type of computation that we can imagine but can only poorly define.

P.S. One final point. In the Einstein legend, at least as it is invoked in internet discussions, some would argue that his insight into relativity was a result of his experience with light-speed communication due to the then-recent development of the telegraph. I'm not going to assert this as historical fact... but it is something to consider.

Isn't vision part of human intelligence? Computer vision systems are incredibly far from the level of human vision. Do people really think that computers will match human vision in that period of time?

Although I think Kurzweil and his followers should be commended for their efforts in supporting the search for strong AI, I think their opinions on timeframes are irrelevant. In his books Kurzweil goes on and on about how current supercomputer computational power is converging to the computational power of the human brain. As far as I am concerned this is (maybe) a necessary but certainly not a sufficient condition for strong AI. I think the last 50 years have shown that a good understanding of the process of human reasoning or understanding the software of the brain is completely lacking. I am convinced that this is a very exciting time for AI as the interaction with neuroscience is slowly unveiling some properties of our reasoning process.

Let me finish by agreeing and disagreeing with Bob: first of all I completely acknowledge that complexity theory has little to do with solving AI. For most AI-related problems we don't even know what to search for or compute so the complexity of algorithms to do so doesn't matter. I would have to disagree with Bob on his other opinion: I just cannot see how you can claim AI will be here in 10 years when we haven't even shown how to reproduce/simulate/emulate any moderately complex subsystem of the brain (like vision).

I will freely admit that I am more optimistic than most AI researchers. ~10 years is just my best guess from where I sit.

Re the comments on human vision, I think it's important to acknowledge that vision is likely not a brain subsystem that can be studied and completely understood in isolation. Our vision systems develop in the context of and perform in interaction with the rest of the brain, and that is likely where a lot of the complexity and power comes from. I expect vision systems to get better as they become components of larger-scale brain simulations. The same applies even more so to natural language understanding.

That said, I think there has been great progress in brain-inspired computer vision in the past decade. Look at the work of, e.g., Poggio and Ullman. On the not-as-brain-inspired front, the SIFT system of David Lowe has been quite successful.

I think my worry is that there isn't going to be an epiphany moment where we realize there's some sort of general algorithm that explains the faculty of human intelligence, but instead we find a lot of complicated finely tuned hacks that are specific to each system and not particularly relevant to reproduction in computer systems.

From what I've learned about the visual system, it seems to embody a lot of knowledge about how the structure of 3d space is likely to be projected on the retina, knowledge that may very well be stored in our genetic code.

This very much parallels human forays into computer vision, where the somewhat successful results are due to incorporating a lot of heuristics and programmer time into finely tuning a solution to a specific task; sadly, the heuristics applied don't look much like the ones used by humans.

But that's just me being my skeptical self. I certainly do hope for a general and broadly applicable algorithm of intelligence... as to when that might come, well, who knows.

(If that did happen think of the CAPTCHA headache that might cause google...lol)

Really, what does chess have to do with AI? Not much, I would say. Why should one particular game, which for some reason gets a lot of attention because it seems to appeal to math geeks and related folks, be a measure for the progress of AI? Same with the game of Go. I cannot see how any result on computer chess could possibly impact my perspective on the state of AI.

For what it's worth, I agree with Kurzweil about the coming "singularity" and its societal implications and all. Unless one is a dualist of some stripe, it is pretty much an unavoidable philosophical conclusion. But these timelines! I suspect that Kurzweil is sensing his mortality and engaging in more than a little bit of wishful thinking. The singularity is not going to come in time for him, or for me.

I also think it should be pretty obvious that the interesting work toward this end is being done by the neuroscientists, and not by people working in "traditional" AI.

The gender-neutral pronoun you are looking for is "he." English cares so little about males that it cannot explicitly indicate the quality of being male using a pronoun (unlike the female case, where "she" is unambiguous).

Kurt, although neuroscientists are doing an amazing job, I think it is a bit too early to say it is obvious they are doing all the interesting work. May I remind you that we built airplanes not by reverse-engineering the bird's ability to fly. Closer to the AI spirit: we have fairly decent machine translation systems which are completely engineered machines: they do not even closely mimic the human translation process. The reverse-engineering endeavour from our neuroscientist friends has given us a lot of interesting perspectives but hasn't produced very many even slightly intelligent systems.

I think that there is a lot of vagueness regarding what it would mean for a machine to be "as intelligent" as a human being. I guess one possible definition would be "Solves the same problems as human beings are able to with comparable efficiency". That may have been what Gasarch was getting at with the computational complexity angle. Brute force searches may win in Chess, but they obviously don't scale to the same level of thinking humans do outside of the isolated example of a chess game.

There is also the distinction between raw computing power and figuring out how to utilize that power. Moore's law gives us a pretty clean exponential picture of the former, Wirth's law is not quite so favorable regarding the latter. Progress in software shouldn't simply be taken for granted, as Kurzweil and others seem prone to do.
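The contrast drawn above is easy to quantify on the hardware side. A back-of-the-envelope sketch, assuming the commonly quoted 18-month doubling period for Moore's law (the specific period is my assumption, not a claim from the comment):

```python
def growth_factor(years, doubling_period_years=1.5):
    """Capacity multiplier after `years` of steady exponential doubling.

    E.g. with an assumed 18-month doubling period, 15 years gives
    10 doublings, i.e. roughly a 1000x increase in raw capacity.
    """
    return 2 ** (years / doubling_period_years)
```

No comparably clean formula exists for the software side, which is exactly the commenter's point: the exponential applies to raw computing power, not to our ability to use it.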