Ray Kurzweil: The Mind and How To Build One (video)

Will we ever create an accurate simulation of the human mind? Can we detect and measure consciousness? When will artificial intelligence surpass human intelligence?

Kurzweil explores the mind and humanity's journey to recreate it in his presentation from Singularity Summit 2010.

Humanity has questions about the development of AI, and for decades Ray Kurzweil has been trying to find the answers. Those familiar with the author, futurist, and inventor's work will know his belief in the exponential growth of information technology, and in the widening range of fields that fall under the IT label. Lately, Kurzweil has become increasingly interested in the human mind: how we may be able to understand it, and eventually how we could recreate it. He's working on his seventh book, How the Mind Works and How to Build One, which will explore those concepts. This past August, at the annual Singularity Summit, Kurzweil gave attendees a sneak peek into the upcoming book via an hour-long presentation with almost the same name: "The Mind and How to Build One". Thanks to the Summit organizers, the Singularity Institute, Kurzweil's talk is now available to watch online; check it out in the video below. From his discussion of consciousness to his explanation of the processing methods of the cerebral cortex, this is one of the best Kurzweil presentations I've ever seen.

I attended this year's Singularity Summit, and had a great time. I remember some commenters at the Summit lamenting that Kurzweil started his talk rather slowly. However, I think the first 15 minutes of his presentation give some really valuable background to what he wants to discuss. Right away Kurzweil points out that the brain is not some mystic device, some quantum-mechanically unknowable system that we'll never be able to understand. We can, for the first time in history, reliably peer inside the brain and see what's happening. That's an important step in creating a comprehensive map of how our brain behaves. But in terms of AI, we may not really need that map. Kurzweil explains that reverse engineering the brain isn't strictly necessary to develop artificial intelligence; rather, understanding the brain can augment our pursuit of AI quite well.
He relates how we've already had success with determining how the brain understands speech and visual input. These pattern recognition tasks have given us insight into how the rest of the organ processes information. With this context, Kurzweil's ready to jump into the future of creating artificial minds.

...But first he takes a bit of a detour. At 14:45 he starts to discuss the reasons why some people believe in the Singularity and others do not. Importantly, he points out that education, intelligence, and age aren't the determining factors. Glad to hear that the people who disagree with the concept of the Singularity aren't dumb, ignorant, or childish. At 17:44 he returns to the brain, explaining how the cerebral cortex is composed of modules, which he calls recognizers, that serve as linked labels for real-world objects and metaphysical ideas. ...and then he gets away from the mind again. From 19:00 to 25:15 he shows evidence supporting the theory that information technologies have experienced exponential growth. If you have seen Kurzweil speak before, you can skip that part of the video. If this is your first Kurzweil presentation, I have some bad news: the Singularity Institute didn't include the slides in the video. Luckily, I tracked down a similar presentation he gave at Google in 2009 (see it below). You can see all the graphs of exponential curves you'd ever want in slides 5 through 44.

The real meat of the presentation starts after 25:00, when Kurzweil digs into concepts related to the brain. Jump to that point in the video and you won't be disappointed. Slides 56 through 71 in the Google presentation are helpful to look through while you listen to him speak.

Unfortunately, Kurzweil was not able to appear in person at the Singularity Summit; instead he teleconferenced in. I was in the auditorium for the presentation, and I remember him looking a little like a giant floating head, but luckily you'll miss out on that when you see the video below.

**UPDATE 12.21.10** The video seems to have been taken down; we are actively working to resolve the issue.
**UPDATE 12.22.10** The video is now back up! A better quality version, without the freezing halfway through, will be available soon. Thanks to Michael Anissimov and the Singularity Institute for all the help.
**UPDATE 1.10.11** A new video, without the freezes, is now available. It's included here.

Here are the slides from Kurzweil's presentation at Google in July of 2009.

Part of why I like this presentation so much is that Kurzweil fills it with memorable statements that encourage the audience to learn more about the nature of their minds. At 26:04 he explains that consciousness, by its very nature, is not measurable. It is a subjective evaluation, not an objective one; science is simply not going to produce a definitive test for consciousness. That's very appealing to me, both as a challenge to experimentalists and as a launching point for philosophers. At 27:30, Kurzweil explains how thoughts shape the brain, saying, "we create who we are by the thoughts we have." Our thought patterns are literally rewiring our brains, and our brains' wiring is influencing our thoughts. Speaking from experience, that's a wonderfully interesting concept to explore with friends over coffee late at night.

At 39:25 he states that "...the cerebral cortex is a LISP processor," referencing the computer language LISP, which uses linked lists as its core data structure. Kurzweil describes the cortex as filled with units ("recognizers") that build complex concepts out of links to other concepts. That's a delightful (and apparently accurate) way to understand how our minds learn, and again, something fun to discuss with friends, or inspiration to read a book about neuroscience. It also jibes very well with Jeff Hawkins' theories about the brain. Hawkins is the founder of Palm, Handspring, and most recently Numenta, a company that uses the architecture of the brain to help design narrow artificial intelligence for interesting tasks such as sorting through video footage. We here at the Hub are fans of Hawkins, and it's nice to see that apparently Kurzweil is too.
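If you want a feel for the linked-concept idea, it can be sketched in a few lines of code. To be clear, this is purely illustrative: the `Recognizer` class and the tiny stroke/letter/word hierarchy below are my own invention, not anything from the talk. The point is just that each concept is a node whose meaning is its links to lower-level concepts, and a higher-level recognizer "fires" when its linked parts do.

```python
class Recognizer:
    """A concept defined by links to the sub-concepts that compose it."""

    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []  # links to lower-level recognizers

    def fires(self, observed):
        # A leaf fires when its name is directly observed; a higher-level
        # recognizer fires when all of its linked parts fire.
        if not self.parts:
            return self.name in observed
        return all(p.fires(observed) for p in self.parts)


# A toy hierarchy: pen strokes -> a letter -> a word
stroke_a = Recognizer("stroke_a")
stroke_b = Recognizer("stroke_b")
letter = Recognizer("letter_A", [stroke_a, stroke_b])
word = Recognizer("word", [letter])

print(word.fires({"stroke_a", "stroke_b"}))  # True: all linked parts present
print(word.fires({"stroke_a"}))              # False: a sub-concept is missing
```

Real cortical recognizers, in Kurzweil's telling, also handle partial and probabilistic matches, which this all-or-nothing sketch deliberately leaves out.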

Further memorable sections:
43:00 - Kurzweil discusses spindle neurons and their importance in our higher reasoning.
45:00 - He explains that we can only really test our perceptions of consciousness, not consciousness itself.
52:00 - The 'Duck Theory' of consciousness: if something looks like a duck, quacks like a duck, etc., it's probably a duck. In the same way, humanity will likely decide to accept artificial entities as 'alive' when they do the things that our consciousnesses do, even if we don't have a test.
55:00 - Questions begin: 1) Is it possible the quantum wave function is a mental field? 2) How accurately do we need to model the brain to get intelligence? Neurons, subcellular, macromolecular? 3) Is scanning a human brain to the molecular level necessary before we get AI?

I should say that this presentation at the Singularity Summit has become a little frustrating to me. Around minute 30, Kurzweil starts to discuss the amount of code it would take to simulate a brain. A poor interpretation of these comments led PZ Myers, a researcher and blogger of some renown, to trash the entire presentation. We covered Myers' original blog post, as well as Kurzweil's response, when it happened, so I won't go into the debate too much here. Suffice it to say that Kurzweil believes our brains are encoded by our DNA, which represents a reasonable amount of code to try to simulate or recreate in the future. However, he also states outright that a simulated brain would need to be 'taught', because experience is a key element in the development of a mind (watch around 29:35). Myers seems to have missed all this and concluded that Kurzweil had a laughably naive comprehension of the complexity of the brain. Ugh. Misunderstandings such as these are not the best basis for reasonable debate.

Over the years Kurzweil's name has become somewhat synonymous with the Singularity. That's to be expected, I guess, since he has written so many books that have directly or indirectly discussed the topic. I often lament that equivalence because it opens up a complex intellectual concept to boring ad hominem counterarguments. Today, however, I'm rather glad that Kurzweil is so often portrayed as the leader of the Singularity. He doesn't always have the best stage presence, but it's hard to ignore the depth of thought and clarity of vision he brings to his presentations. At the Singularity Summit Kurzweil painted a detailed picture of the brain as we know it today, and of the ways we may delve into it more deeply in the future. I look forward to reading his upcoming book to see how he expands upon these ideas.

Discussion
—
10 Responses


I honestly think we are moving way ahead of ourselves here, in terms of AI development.

We haven’t been able to apply and manage the technology we have today very well. IF we decide to pursue Artificial Intelligence today, it is simply going to be a mirror image of man.
Our understanding hasn’t yet developed well enough to create a super-intelligent mind that is Nothing Like Ours.

jBo

You obviously did not watch the video. No one has the goal of creating AI today, only in the near future. You must also not understand the advantages computers have over our comparatively slow brains. Please educate yourself on the topic before making such rash statements in the future.

If you feel your understanding of this issue is more complete than mine, why don’t you just tell me, or refer me to someplace where I can gain more insight — Rather than try to undermine my own understanding of the issue?

– Get over yourself dude.

jBo

I don’t have to refer you to a place to gain more insight; watch the video and look at the slide show. No one has ever stated that they think AI will be created in the present. You use that to argue that you don’t think AI is possible now, when that is a completely irrelevant point. I didn’t say that to be mean, I was just saying that if you would actually read more into the topic, then you would see your objection is beside the point. Sorry if I offended you, but when dealing with this topic so many people just block out the idea that any of these things are possible. Comments like the one you made are what typically support the rationale of this belief. You can’t look at the growth of technology as linear; instead you must see the exponential growth. This is a common understanding among those who believe the Singularity is going to happen in our near future. Obviously now we do not have the understanding and technology to create AI, but look how far we have come in such a small amount of time. If the trend continues, which I believe it will, then we should have the ability to create AI in the near future. Again, sorry, but I assumed that anyone visiting this website and commenting on this video would have a basic understanding of the reasoning behind Ray Kurzweil’s predictions and ideas, especially when the video and slideshow provide all of this evidence.

StupendousMan

jBo,

Nice polite response. I find myself making this argument myself. One argument, which I’m sure some big brain has already developed, is that the exponential increase in scientific knowledge and accompanying technological developments is similar to economic market models. Specifically, the idea of the market as an emergent phenomenon: many actors working in their best interests, whether for financial gain or knowledge, create an environment that produces outputs greater than the inputs; thus wealth, in terms of money or knowledge, keeps growing, similar to a free market economy.

But I think the actors in the tech/sci market produce far greater gains than business markets.

Well that’s my 2 cents.

billb

Chiwuzie Sunday…

(context: by AI, I mean general AI, the kind that would pass the Turing test. We already have loads of extremely narrow superhuman AI.)

For a better understanding, you can always start with the container of all knowledge, Wikipedia. There is an article on the Singularity.

You seem to first question whether superhuman AI is a good idea. That is a very reasonable question. The Wikipedia article mentions several very smart people (most notably, Bill Joy) who also wonder if unchecked technological advance is a good idea or not.

The essence of the Singularity, which Kurzweil and others seem to have forgotten, is that we can not possibly have any idea what will happen after it. If we bother to make predictions on what will happen after an event we are calling the “Singularity”, then the event isn’t the Singularity. Kurzweil and others repeatedly tell us what is going to happen, how it will all turn out. The so-called “end of humanity” won’t hurt a bit, they say.

Maybe they are correct. But if they can see beyond it, then superhuman AI is not the Singularity.

Personally, I am very much in favor of pursuing superhuman AI. I think we are hundreds of years from attaining it (not the 20 or 30 of Vinge and Kurzweil), but we need to work on it. And I believe (stressing “believe”) it is attainable. But at the same time, we must also work on becoming better people. We must become worthy of being scanned and replicated and simulated till the end of time.

I think some may have misinterpreted my initial comment, just a bit. I am not anti-science or technology (total opposite). I am simply against the use of technology as a means to benefit a few, rather than to benefit humanity as a whole (But that’s a different story)

I actually like the idea of AI, and I believe we can attain it within the next 20 or 30 years; but just like any other problem that ever gets solved, there needs to be funding/a business plan — This is not to imply that AI development is an easy task. All it takes is one super-smart-crazy-passionate person to really get the ball rolling.

…and as you mentioned “we must also work on becoming better people.” That’s the main point I was really trying to get across in my original comment above, and I think it’s very important.

Again, thanks for sharing.

Claire

Kurzweil will be releasing his new film, Transcendent Man, this week on the film’s website. It’s a compelling documentary about the role of technology in our future.