
On Tuesday we discussed a scathing critique of Ray Kurzweil's understanding of the brain written by PZ Myers. Reader Amara notes that Kurzweil has now responded on his blog. Quoting: "Myers, who apparently based his second-hand comments on erroneous press reports (he wasn't at my talk), [claims] that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain."

I believe you may be falling prey to what Kurzweil warns about in his response to Myers: linear thinking. Things go from impossible to inevitable without us much noticing. The bottom of a parabola looks a lot like a horizontal line.

Let's say Kurzweil has been too optimistic about the rate of growth of our understanding of how the brain works. Assume the exponent on the rate of growth of our knowledge and technology is greater than one, and assume that Penrose and Searle are full of it (which, IMO, they are) and that there isn't some mystical quantum-mechanical woo-woo just as irrational as the Silicon Valley Deepak Chopra mumbo-jumbo that Myers's crew accuses the Singularity crowd of peddling. Then Kurzweil will ultimately be vindicated, even if he (or his cyborg replacement body) is not around to say, "I told you so."

I don't know that vindication is the right word. Kurzweil's approach is one of several that may have merit and add to the body of knowledge about the lifecycle of the physical and experiential states of the mind. We already know that brains come in numerous varieties, depending on hormonal dosing from gestation through adulthood, as well as predispositions that are genetically influenced. Kurzweil's thinking casts a wide net, and there are huge chasms remaining to be explored.

Even if Penrose isn't full of it, you build yourself a hardware neuron model or a quantum randomness coprocessor and you're good to go. It would be VERY interesting if the brain did rely on quantum effects, because then we could measure (and duplicate) them.

Hell, just look at what we have today, and you can see that brain/computer interfaces are pretty much inevitable. When I was a child of the '70s, many scientists were quick to pooh-pooh "The Six Million Dollar Man" as pure kiddie fantasy stuff, which was understandable considering that most computers at the time were big tape monsters. But now look what we have... we have artificial arms that can be wired into the nerves of the shoulder and that look and move more realistically every day, we have our first crude bio

Kurzweil is obviously optimistic about his timetables. But his theory of accelerating technology growth calls for optimism; there's good reason to believe that experts historically underestimate the rate of advancement.

Clearly, Myers has discovered that being unnecessarily angry and insulting leads to more pageviews in his blog. I'm sure he knows his field, and it's great when he tears into real jokers, but he has moved beyond that. He is now being inflammatory just for page hits.

Kurzweil is more than optimistic - he's just plain guessing. His predictions for the near term are accurate because they don't require big leaps in imagination or technology. His predictions for further out tend to be wrong or loony (many, if not most, of the predictions he made for technology achieved by 2010 back in the 90s were wrong in whole or in part).

His "theory" of technology growth is ridiculous in the face of prima facie evidence. It's true that experts historically underestimate the rate of technology advancement. It's also true that they almost always misjudge the field in which explosive exponential growth takes place. In the 1950s, we were dreaming about flying cars and meals in pill form. Who actually predicted the full extent of the internet in our lives back in 1960? Or ubiquitous cellular communication? Or that we wouldn't have just three broadcast television stations? Technological progress is a given, and the more limited of Kurzweil's predictions are correct because they typically require modest improvements in current technology - but the singularity is far from a given.


Kurzweil does a fine job making the simple types of predictions (the type that led to predicting flying cars in the '50s). The problem is that, like everybody, he can't predict the "next big thing". Exponential growth in technology always relies on discovering and exploiting as-yet-undiscovered technologies, and Kurzweil mostly relies on existing tech. That's fine for 10 or 20 years out but gets progressively worse in predictive power past that (see his predictions for 2010 and beyond made in the '90s, as opposed to the predictions he made in the last 10 years). And, to be honest, most scientists could have made (and did make) the same short-term predictions Kurzweil made. It's not a stretch to think that Moore's Law will keep chugging along for at least 5 years and that people in different fields will exploit that.

Heck, even people in the fields of science related to some advancements don't see those advancements coming. In one of the Futures in Biotech podcasts (a 2007 episode, if I recall) the guest was talking about gene sequencing, and said that as little as four years before they managed to sequence an earthworm genome it was thought to be impossible because of the work/technology involved. And then they did it. Shortly afterward the Human Genome Project began.

He isn't being loony. If he were loony, he would predict things known to be impossible based on our understanding of physics. He is very specifically predicting developments which (a) people want, and (b) the universe (seems to) allow. This is necessarily murky business, but he at least attempts to set his timetables based on quantifiable, empirical observations as best he can.

So accepting that predicting the longer-term future is inherently difficult, he at least makes an attempt. You are the sort to just throw up your hands and sling mud at those who try. It's a good thing we have a few people like him. It would be tragic if everyone thought like you.

P.Z. Myers is not some headline-grabbing putz like half the Republican Party. He would have an interested following regardless of whether he bothered to talk about Kurzweil or not. Kurzweil has a vested interest in trying to shout down dissenting opinion, while Myers has no dog in the fight save illustrating the scientific fallacies and fantasies foisted upon a credulous public by pompous windbags such as Kurzweil.

Kurzweil is obviously optimistic about his timetables. But his theory of accelerating technology growth calls for optimism; there's good reason to believe that experts historically underestimate the rate of advancement.

Hey, optimism regarding the exponential growth of (some) technology, and the unpredictable and amazing consequences of such is fantastic. I try to be optimistic that it will continue myself (being in a field that has been the poster child for exponential improvement and not liking the idea of this ending).

Clearly, Myers has discovered that being unnecessarily angry and insulting leads to more pageviews in his blog. I'm sure he knows his field, and it's great when he tears into real jokers, but he has moved beyond that. He is now being inflammatory just for page hits.

I guess, but what I considered to be the biggest failing that Myers tore into in the previous article still remains. Kurzweil says Myers is mischaracterizing his thesis, and sure maybe he was at some point. But then he goes right on to emphasize that "the genome constrains the amount of information in the brain prior to the brain's interaction with its environment."

Aside from the fact that you can't separate the brain's development from its interaction with the environment even in the womb, and that it's doubtful a brain that somehow developed completely without stimulus would look very much like a functioning human brain at all, the claim still just isn't true. It's like saying that the tiny binary produced by compiling "Hello World" constrains the amount of information needed to actually run the program (especially since it's supposed to tell you how to make the computer it's running on, too). Or that the amount of information on a web page is constrained by the size of the .html file. An img tag is not sufficient information to reconstruct the image it references.

The genome contains instructions for constructing the human body/brain within the context of another human body. The genome itself is not sufficient information to create that body. It's exploiting a huge amount of external information to allow itself to be as compact as it is.
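The img-tag analogy above can be made concrete with a toy sketch (the file name and image dimensions below are made up for illustration): the compressed size of a compact reference says almost nothing about the amount of information in the thing it references, because reconstruction depends on an external resource the reference does not contain.

```python
import zlib

# The "genome" in this toy: a compact reference into an external
# environment, like an <img> tag pointing at an image it does not contain.
# File name and dimensions are hypothetical, chosen for illustration.
img_tag = b'<img src="brain_scan.png" width="4096" height="4096">'

# Size of the referenced resource: 4096 x 4096 RGB pixels.
image_bytes = 4096 * 4096 * 3

print(len(zlib.compress(img_tag)))  # the reference: well under 100 bytes
print(image_bytes)                  # the referenced image: ~50 MB
```

The compressed tag measures the generator-plus-pointer, not the generated result; the missing megabytes live in the environment, which is exactly the parent comment's point about the womb.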

But then he goes right on to emphasize that "the genome constrains the amount of information in the brain prior to the brain's interaction with its environment."

To be sure, the genome must be a major factor. I recognize that human bodies are not the products of genomes alone -- indeed, over 90% of the cells in our bodies (counting by sheer number) don't even have our DNA because they belong to our symbiont species -- but surely the complexity of the blueprint for brain-developing-systems goes a long way towards approximating total complexity of the developed-brain-system. Part of my intuition here is an anticipated relative complexity between environment and the

Clearly, Myers has discovered that being unnecessarily angry and insulting leads to more pageviews in his blog. I'm sure he knows his field, and it's great when he tears into real jokers, but he has moved beyond that. He is now being inflammatory just for page hits.

You missed something. The media will always inaccurately propagate scientific... hell, just about ANY view. They necessarily must summarize, simplify, and downplay. Typically, their own personal interests will cause a bias towards one particularly interesting feature of the advancement or article, and they will focus on that. (Remember the recent "chicken or egg" article whose scientific findings had NOTHING to do with that question?)

PZ Myers made a bit of a mistake in responding so vehemently to a strawman construction of the media's doing.

I'm not entirely certain what strawman construction PZ Myers responded to. Ray Kurzweil said (yes, this is from the article, but presumably he actually said something like this):

Here's how that math works, Kurzweil explains: the design of the brain is in the genome. The human genome has three billion base pairs, or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying lossless compression, that information can be compressed into about 50 million bytes.

there's good reason to believe that experts historically underestimate the rate of advancement

Except in the area of artificial intelligence. About every 5 years, starting back in the early 1950s, some group of experts has proclaimed that human-level intelligence would be simulated on a computer "within the next 20 years". They all overestimated the growth rate in this field... and continue to do so, in all likelihood.

Don't confuse what Moore's Law does for technology with growth of knowledge about the human brain. We know a lot more than we did 60 years ago... but we still don't have a clue how

Compared to Myers, who apparently writes stories slamming third-hand information? Seriously, that should completely invalidate almost all of Myers's arguments in general. If he doesn't bother checking sources, uses poor sources, and proceeds without any caution, his points are going to be widely invalidated.

Kurzweil might come to the wrong conclusions but so what? That is wishful thinking at worst. At least he seems to do lots of research and is very well read.

This whole discussion reminds me way too much of the million partisan pundit sissy fights that rage endlessly on the internet. If I wanted to see two guys argue about what the other did or didn't say, I would gladly head over to DailyKos or BigJournalism and drown myself in their pedantry. This is Slashdot; please save the inanity for the comments and at least give us stories that have meaning!

I'm actually glad to see that Slashdot is participating in such a debate. As a longtime Slashdot resident, I'm happy that Slashdot is attempting to find a niche in the Internet that involves scientific (or semi-scientific) and computer related matters.

The draw to Slashdot needs to be the articles, but also the response to the articles. The comments should be a cut above what you see at other websites.

The draw to Slashdot needs to be the articles, but also the response to the articles. The comments should be a cut above what you see at other websites.

And indeed, they are. Or anyway, a small subset of them, which is all that you can hope for. Slashdot is one of a subset of websites on which [various] people who know about many different things share useful information. It's rare indeed that I encounter any truly significant news item (to me, anyway) that isn't discussed here. Timeliness varies but I have only myself and all the rest of you to blame for that.

Partisan pundit sissy fight? No, this is somebody defending his research after somebody essentially lied to make him look bad and got press from it.

I cannot, for the life of me, determine who it is that you think "essentially lied." Lying requires an intent to deceive; i.e. in order to lie, you have to know the truth, and intentionally communicate in a manner that is contrary to that truth, either by creating facts from whole cloth, or by omitting certain pieces of information in order to get your audienc

Myers may have been focused on the "reverse engineer from the genome" argument but really the main issue is whether Kurzweil is within a few orders of magnitude of guessing the right level of complexity necessary to simulate a brain. The gist of the Myers argument isn't so much about genomics and ontogeny as it is about the emergent complexity of inter-related systems and I think the real nugget there might be something like: "We could model a brain but that wouldn't mean we modeled a mind. To model a mind you need to model a great deal of the environment the mind lives in... and that is many many orders of magnitude more complex."

For the record: I hope Kurzweil is right, but I rather doubt he is. I don't think he's wrong about how powerful machines will be in 2050; I think he may be wrong about whether those machines can simulate a mind well enough, because I really wonder if the complexity of a mind is actually a superpolynomial problem due to the hyper-connectedness of a mind and its environment.

The fundamental assumption is that there is some kind of mystical brain/mind dualism. From where I sit, modeling environments really isn't a hard thing to do. Our brains develop minds by not much more than sensory feedback. Experiments with rat brain cells in petri dishes attached to electrodes that control robots have shown that brain cells respond to sensory feedback even in ad hoc configurations. If we can truly model what the brain is physically, then development will be a simple trial and error experie

Pooh, I cannot get much traction on mystical, but it is blindingly obvious that the brain deals in sensory stuff and there are thousands of years of developments of the claim that the sensory data is not the universe. So you either do some Plato et al or you say that all you can know is your emotional state and figure reality is effectively some sort of psych thing. If you play Plato, then maybe you end up knowing something about fundamental principles of the universe by looking at the contradictions in

To paraphrase something somebody wisely said in the previous thread about this topic, you don't need to model the electrons in the circuit of a machine to emulate an NES.

Except the NES is a mishmash of seemingly random bits and junky non-logical software. You might not need a model of the electrons, but you need the software too. I'm sure someone could, eventually, build a rough facsimile of the human brain, but lacking software you've built nothing but a pile of quivering Jello. Think of it as building

I like how, within your perspective, a difference of opinion is 'total ignorance'. In that context, I will treat you with equivalent respect. Materialism has been so thoroughly disproved that leading biologists like Dr. Richard Dawkins still subscribe to it. Yes, I see what poor company I keep.

Really, you're going to peddle Peter Russell? A guy who makes his living selling pseudopsychological snake oil to businesses? Lynne McTaggart is even worse; she spreads FUD about modern medicine to suit some whackjob personal political agenda. I recognize that I am not assailing their arguments because they are not worth my time, nor are you; as I said, I'm only going to give you as much respect as you've given me, which has been none.

Oh and Max Plan[c]k's [SIC] opinion of consciousness is about as meaningful as Jung's opinion of quantum electrodynamics. Planck did not have the background in the field of neuroscience or psychology to have an educated opinion about consciousness. He simply had an opinion, and that opinion gains no more automatic credence because he happened to have a Nobel prize in an unrelated field. Even if all of that were different, a lot can change in nearly a century.

I don't deny there are levels of consciousness, they're just all physical. Just as the levels in a computer are all physical. Software is nothing more than differential physical states on magnetic media and within circuits. The mind is the same, and below that level is electricity again not "spirit", just like in a computer coincidentally.

"I don't deny there are levels of consciousness, they're just all physical. "

I've had a neurological disease that affected my level of consciousness, and I can still tell you this question is not nearly as clear-cut as you think. I quite certainly believe that all my thoughts and experience originate in my brain, because those were the things that were compromised, or went away, with the disease process, which is physical.

But beyond that, I'm stumped. I can't account for how *I* come to experience my thoughts and sensations. Yes, my brain represents the world in a 3 dimensional mental map - but represents it *to* whom? That sky appears blue. But it appears blue to what?

Furthermore I can't decide whether, when I "woke up" from the illness, I popped back into existence out of nowhere *or* the possibility of my experience was present the entire time, even though my brain wasn't functioning correctly.

There are no certain answers to this question. Anyone who claims they have answered it with any certainty on any side of the issue is mistaken or worse.

This is the hard problem of consciousness, the fundamental problem of consciousness. To repeat: how is subjective awareness, or experience, possible at all? You haven't answered this question. I suspect it's out of our epistemological reach, because we can never 100% verify that a physical machine which speaks and acts like us is actually conscious, actually has subjective experience. If the machine insists it has experience of pain, or pleasure, do we believe it? From an ethical standpoint, I think we have to. But from an epistemological standpoint we can never really know for sure, because our qualia are non-substitutable. There is no way to get your experience into my brain - as soon as it enters my brain it becomes my experience.

So if you reduce awareness to a set of physical propositions, you lose the experience of "what it is like" and "what it is like to be me" - that's the other side of the coin, the subjective side. The best we can come up with as far as how this is possible - physically or spiritually - is at most a hypothesis and at worst a religious assumption, even if we believe in materialism. If we want to be truly scientific, we should begin to view this fundamental question as fundamentally undecidable.

I'd be more worried about concurrency issues. If you have to treat each neuron as its own processor in order to simulate it correctly and get a mind, then even if computers are fast enough to do it, they might not be able to without deadlocking.
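For what it's worth, deadlock isn't forced on you: simulators usually avoid per-neuron locking entirely by stepping all neurons in lockstep against a frozen snapshot of the previous state. A minimal sketch of that double-buffered scheme (the network size and toy rectified-sum update rule here are illustrative, not neuroscience):

```python
import random

random.seed(0)
N = 200  # toy network size

# Random all-to-all weights and an initial state vector.
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.random() for _ in range(N)]

def step(prev):
    """Compute the next state from a frozen snapshot of the previous one.

    Every "neuron" only reads prev and writes into a fresh list, so there
    is nothing to lock and nothing to deadlock on, no matter how the work
    is split across cores.
    """
    return [max(0.0, sum(w * s for w, s in zip(row, prev)) / N)
            for row in weights]

for _ in range(3):
    state = step(state)  # old buffer read, new buffer written

print(len(state))
```

Whether biological timing can be captured by discrete synchronous steps is a separate (and fair) question, but lock contention per se isn't the obstacle.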

This starts turning into a definition problem. A matter of semantics. A mind anything like a human being's runs on a hardware substrate that's built to interact with a physical environment in ways that promote organic survival. A mind that isn't anything like a human mind could run on very differently designed hardware, but then, if it's that different, how do you determine if it's equivalently complex, and ultimately, what justifies calling it a mind at all? People such as Verno

"We could model a brain but that wouldn't mean we modeled a mind. To model a mind you need to model a great deal of the environment the mind lives in... and that is many many orders of magnitude more complex."

Few serious hard AI approaches since the 60s have actually tried to do this as you suggest. Most use the ACTUAL environment rather than trying to model it. This process is usually called "learning."

PZ's meaningful point is that the prenatal development environment affects brain development, in additio

Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome.

So put your speech up on your site; all I can find are videos from previous summits [magnify.net]. TED seemingly posted videos as they happened, and therefore we could openly debate them. Summits are great, but not everyone has the time or resources to attend them. I would suggest you move towards a more open format for disseminating your ideas and the very specific and lengthy details about them. I'm not going to buy a book on futurism and wade through it for the details you provide about neurobiology, and I don't think PZ Myers would do that either.

I mentioned the genome in a completely different context. I presented a number of arguments as to why the design of the brain is not as complex as some theorists have advocated. This is to respond to the notion that it would require trillions of lines of code to create a comparable system. The argument from the amount of information in the genome is one of several such arguments. It is not a proposed strategy for accomplishing reverse-engineering. It is an argument from information theory, which Myers obviously does not understand.

Well, frankly, I don't understand it either. You're applying information theory to lines of code... and that just doesn't make any sense to me. I haven't heard of it. I haven't heard of anyone say "theoretically could be reduced to x lines of code." I don't know why we're talking about information theory when we're talking about simulating the brain or even understanding the brain.

The amount of information in the genome (after lossless compression, which is feasible because of the massive redundancy in the genome) is about 50 million bytes (down from 800 million bytes in the uncompressed genome). It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.
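Kurzweil's arithmetic in the quoted passage is easy to check with a quick sketch (treating each base pair as exactly 2 bits, the standard naive encoding of A/C/G/T):

```python
base_pairs = 3_000_000_000      # ~3 billion base pairs in the human genome
bits_per_base = 2               # four bases (A, C, G, T) -> 2 bits each

total_bits = base_pairs * bits_per_base   # 6 billion bits
total_bytes = total_bits // 8             # 750 million bytes ("about 800 million")

compressed_bytes = 50_000_000   # Kurzweil's claimed post-compression size
ratio = total_bytes / compressed_bytes    # the ~15:1 compression he asserts

print(total_bits, total_bytes, ratio)
```

Note that the uncompressed figure comes out at 750 million bytes, so the "800 million" in the quote is already rounded up; the real dispute in the replies below is not the arithmetic but whether compressed genome size bounds anything about the developed brain at all.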

So first it was information theory on the genome and now you're on about compression of the genome. Great, you've applied theoretical limits to lines of code in order to describe a complex biological system and then argued that due to redundancy we can reduce it to 50 million bytes. And what did that buy us exactly? Look at how many lines of code we've devoted to simulating a single neuron or synapse... and it's not even a complete and accurate simulation. Your theoretical limits are amusing but pointless... to further apply your 'exponential growth' of the lines of code we can program is further amusing.

Kurzweil is a futurist with just enough knowledge to sell people. His exponential growth to a singularity and proof of it doesn't do him much good when he doesn't understand the complexity of the brain and then applies theoretical limits to that from other disciplines. He's free to keep preaching, I just question at what point people will give up on him. If he dies soon and pulls a L. Ron Hubbard what sort of cult then will we have on our hands?

Well, a lack of searching is not a lack of material; you can find several hours of Ray's talks on video from Singularity Summit 2007, 2008, and 2009, TED.com, Singularity University, and plain independent YouTube videos. He also has two movies out (I haven't seen either): Transcendent Man, which criticizes his esoteric side, and The Singularity Is Near (based on his book), which supports his ideas.

It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

So the implication here is that a genome can create a brain without input from the environment (at least any input that carries information). I have some news: every human ever born has come from a womb. That womb has supplied raw materials and information in the form of the mix and timing of resources. There are no exceptions at all. Would you get a blank brain or a malformed brain if the resources were not supplied in the correct mix? Almost certainly, and that means you need to include at least so

Well, frankly, I don't understand it either. You're applying information theory to lines of code... and that just doesn't make any sense to me. I haven't heard of it. I haven't heard of anyone say "theoretically could be reduced to x lines of code." I don't know why we're talking about information theory when we're talking about simulating the brain or even understanding the brain.

Kurzweil doesn't advocate the use of information theory for understanding or modeling the brain. He only used it, in combination with other methods, to get an estimate of how complex the brain actually is (whether his methods and estimates are correct, I can't tell).
That was, IMO, the whole point of the paragraph you quoted...

Personally, I think Kurzweil is still full of shit. Systems are usually way more complex than most "futurists" would like to admit. They are finding that with the human genome. The promise was that once it is decoded, we'll find cures for everything. Errr...yeah, well, it sort of depends on how it gets expressed in proteins wh

You point out what I thought was the failure of Kurzweil's defense against Myers's argument. Kurzweil repeats the claim that Myers said was a wrong assumption on Kurzweil's part: that the genome contains all of the information necessary to create the brain. Myers's argument with Kurzweil boils down to this: the genome does not contain all of the information necessary to reconstruct the brain. There is an awful lot of information about building a living creature contained in various ways in the structure of each cell. For example, if you were to take the nucleus of a fertilized monkey ovum and place it in a fertilized shark ovum (after removing the nucleus of the shark ovum), you would not end up with a monkey, although it would be closer than if you just swapped the genome between the two. There is a lot of information about how to interpret the genome in the cell structure. The same sequence of DNA has been shown to code for significantly different proteins in different creatures.

I said that we would be able to reverse-engineer the brain sufficiently to understand its basic principles of operation within two decades, not one decade, as Myers reports.

We don't have more than a rudimentary understanding of how the brain works, or even what Consciousness [wikipedia.org] is.

Although humans realize what everyday experiences are, consciousness refuses to be defined, philosophers note (e.g. John Searle in The Oxford Companion to Philosophy):[3]

"Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives."--Schneider and Velmans, 2007[4]

A good point. I think Kurzweil is one of those that would say "consciousness is computing" so all you need is enough of the right computations. This is definitely something brain simulations would have to explore. We simply have no idea yet.

Why does it have to fit contemporary science? What we know about the universe is almost nothing whatever, compared to what there is to know. We don't know what consciousness is because biochemistry hasn't advanced far enough to understand it. Remember, all thought and feeling and sense is nothing more than complex chemical reactions.

We have a lot more to learn before we can even ask the question, let alone answer it. If thought is simply computation, why can't a house cat do trigonometry? Trig is easy for a

It has to fit contemporary science because forcing consistency between our models is how we advance them. If we had some object that had "magical" properties and we had physics that didn't cover that object, we'd do well to figure out how to mix them together. Likewise, when we have an unknown object, it makes a lot of sense to assume it's covered by the laws of physics as we know them unless we see strong indicators otherwise. That's how we learn.

I think it's reasonable to have a strong belief that consciousness is computing, based on my general thoughts in philosophy of science and experience with and studies in neuroimaging. I would suggest you look into the state of the field - there's serious progress into making broad maps of brain function, and the characteristics of the neuron strongly suggest reasonable parsimony with computational models.

I would not care to make a guess on timing or methods - I think Kurzweil may have stuck his neck out muc

Of course, but it also will definitively solve many of these questions. Looking at the traditional philosophical questions is interesting in our spare time, but in the neurosciences it's better to just plow ahead and let that stuff sort itself out as an afterthought. We have the assumptions of science (methodological and perhaps philosophical naturalism), so far they've been shown pragmatically pretty decent, and there's been no coherent challenge yet to the idea that the mind is strictly rooted in phenomen

I don't think that style of reasoning really holds water. If you like, I'll say that I reject your second step in the 3-step argument you present, but in general I think that reasoning must be subordinate to empiricism. We might hope that our notions of reason would not allow us to prove things that don't line up with reality (or false by some other metric of truth), but if they do, we might have to either discard them or adopt some variant that works better. I reject the idea that philosophy of the sort yo

The thing is, you won't find consciousness looking at the signals in the brain. The brain is composed of parts that have no consciousness themselves, and the patterns are too complicated to understand anyway. Even if you manage to see all the patterns at once, you still won't see consciousness.

The only solution is to look at the behavior. If the simulated brain can have a discussion about consciousness, it has everything you can possibly want.

It is not inconceivable that we could create a thing like a brain which would give rise to consciousness, and yet still not understand what it really is. If we somehow manage to write a computer program which can be (again, somehow) qualitatively defined as conscious, then we will need to have first understood consciousness. But if we only assemble a collection of technologies which somehow surprises us with consciousness, then we will have a new direction for research, but not an understanding of the thing itself.

There's about as much chance of building a sentient computer when we don't know what sentience is as there is of giving someone who knows nothing about electricity a box of electronic parts and having them build a working radio.

We don't have more than a rudimentary understanding of how the brain works, or even what consciousness is.

People say this a lot, and I don't understand why. Our understanding of how the brain works is a good deal more than rudimentary. The advances we've made in understanding the brain on both the large and small scales in just the last five years are breathtaking. Our understanding is a long way from complete, but Kurzweil is correct at least to the extent that our understanding is significant and appears to be growing at an accelerating rate. It may not be accelerating as fast as he expects, but keeping up with new developments in neurology at even a cursory level is quite challenging. The main difficulty we face at present in implementing the structures we do understand in silicon is the lack of adequate parallelism in current computing hardware, not our understanding of the relevant neural structures.

As for consciousness, unless you believe in some kind of pre-scientific vitalism, a reasonable working assumption is that it is an emergent property of brain-like structures. Unless and until we discover otherwise, there is no reason to wait for an understanding of consciousness to begin working on replicating the functionality of the brain. Quite likely, the attempt to replicate the brain will reveal more about consciousness than idle philosophical inquiries. Those so inclined might want to settle on a definition of consciousness before trying to figure out how it works.

Emergent properties do not emerge out of properties; they emerge out of parts that are joined together. Yes, an emergent property can influence the parts that produce it. You can decide to shoot yourself in the head, for example. Or smoke pot.

Something interesting that happens as a consequence of other properties.

For instance, gliders and Turing completeness in Conway's Game of Life are emergent properties of the rules. The game wasn't designed for them; nowhere in the rules is a glider explicitly coded in. They simply emerged as a consequence of the rules.

This is interesting, because often the things that emerge are not obvious and are things you wouldn't have coded in explicitly. It's also potentially less predictable than behavior that was designed in deliberately.
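To make the glider point concrete, here's a minimal sketch in Python (the coordinates and helper names are mine, not anything from the thread): the B3/S23 update rule never mentions a glider, yet after four generations the same five-cell pattern reappears shifted diagonally by one cell.

```python
from collections import Counter

def step(cells):
    """One Game of Life generation; `cells` is a set of live (x, y) coords."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # B3/S23: a cell is alive next tick if it has exactly 3 live neighbours,
    # or exactly 2 and is currently alive itself.
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 steps the glider has been translated by (1, 1): same shape, new place.
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # → True
```

Nothing in `step` encodes motion; the "glider" is purely a pattern the rules happen to propagate.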

Dennett has already provided some insights. The problem is that people find that it doesn't match their intuition, so they keep looking for something else. The biggest hurdle you have to take is to realize that you can't know your own consciousness. Once you get beyond that, the problem becomes a lot easier.

The major flaw I can see in his response (which I think was addressed by Myers) is this:

but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

He even underlined it. The problem is that the brain doesn't just spring into existence fully formed and THEN get exposed to the environment. The brain starts out as a few cells and is constantly exposed to the environment as it develops. I think this was a major point in Myers's response, and RK just blew right past it.

When are we human? Abortion hinges on this: WHEN is the foetus a human being with a human brain? Is there some magic moment when the brain switches on, or are we a bacterium-like organism that develops rapidly into a complex life form?

Can it be that the brain "knows" the human body and how to operate it because it "grew up" with it? We imagine a robot typically being built on a long assembly line, with the head connected only at the last moment before the robot switches on. Could a brain instead function as a very small, simple "core" that grows up along with the body it controls?

It is even more basic than that. What a particular piece of the genome codes for depends on what structures are in the cell it is in; this starts with the very first cell of the organism. Additionally, what a particular piece of the genome codes for also depends on what cells surround the cell it is in.

I got that too. While I'm largely in the Kurzweil camp in this whole thing, he's misreading Myers's point about the environment. A strand of DNA dropped on the moon isn't ever going to form a brain, any more than dropping a paper containing source code on a computer will cause it to run the program. It needs a very specific environment, and it's easy to see that there are lots of small environmental imperfections that can fuck up brain development in a child. But even though Kurzweil doesn't address that, I still think his broader argument holds.

My point is that the genomic argument isn't relevant for addressing the objection that the brain is a system too complex to describe in any amount of code.

Even referencing the genome weakens the argument if you're using it to describe complexity. The genome is more of a bootstrap code than it is a descriptor of the system itself.
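The bootstrap idea can be illustrated with a toy example (my analogy, not from the thread): in a Lindenmayer system, the "genome" is a one-symbol axiom plus two rewrite rules, while the structure that results exists only after the interpreter has run it through repeated rounds of rewriting, its "environment".

```python
# A minimal L-system: the "genome" is two rewrite rules and a one-symbol
# axiom; the complexity lives in the iterated interpretation, not the rules.
RULES = {"A": "AB", "B": "A"}

def grow(axiom: str, generations: int) -> str:
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

# Ten rounds of rewriting a 1-character "genome" yield a 144-character
# structure (the lengths follow the Fibonacci sequence).
organism = grow("A", 10)
print(len(organism))  # → 144
```

The rules alone don't "describe" the final string any more than the genome describes the brain; the interpreter and the iteration do most of the work.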

My understanding is that Kurzweil is looking at the brain as an existing system to be simulated, and Myers is saying that it is actually a long process that begins at the formation of a few cells and proceeds through exposure to its environment and its own chemistry. That the meaning of the system is actually bound up as much in that growth process as it is in the chemistry. That even the things that we see as redundancies may (or may not) be significant.

Both of these people are way smarter than I am. So like any good slashdotter, I feel compelled to criticize one of them to make myself feel better.

Using the genome does not address the code issue; that's the whole bloody point, which anyone who knows molecular biology (including Myers) sees.

The genome is a SUBSET of the code used to describe a human brain. The real code is in the universe: physics, biology and so on, the computer the genome is run on. It's like using a 10-million-line library to create a JPEG and then saying that making a JPEG takes only a single line of code because the call to the library was one line. Utter idiocy.
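A quick sketch of the "the interpreter carries the complexity" point (the numbers here are illustrative, not anything Kurzweil or Myers used): a pseudo-random generator expands a few bytes of seed into a megabyte of output, but only because the generator itself, the machine the "program" runs on, supplies the rest of the specification.

```python
import random

# The "program" is a tiny integer seed; the Mersenne Twister generator
# plays the role of the physics/chemistry that actually does the work.
SEED = 42

rng = random.Random(SEED)
data = bytes(rng.getrandbits(8) for _ in range(1_000_000))

# A megabyte of perfectly reproducible structure from a handful of bytes --
# but reproducible only on this exact "machine". A different generator fed
# the same seed would produce something entirely different.
print(len(data))  # → 1000000
rng2 = random.Random(SEED)
print(data == bytes(rng2.getrandbits(8) for _ in range(1_000_000)))  # → True
```

Measuring the seed tells you almost nothing about the cost of specifying the generator, which is the analogue of measuring the genome and ignoring the cell.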

Sure we do; it's why biological experiments are possible, but they're rather cumbersome and slow. You can try filing a bug ticket with God to let us use a proper debugger toolset, but he's a paranoid loon, so that's not likely to go over well.

Most computer hardware on the other hand doesn't allow direct access to the underlying physical layer.

If you believe Penrose, it isn't even possible to create intelligence through a simulation on a standard digital computer. But so what? We're not so bad at building (or growing, if necessary) hardware.

is right. Myers's criticism may be off the mark, but Kurzweil's speculation about brain design, like so much of his other speculation, is bullshit. His basic argument in the blog post is that the amount of information in the human genome constrains the amount of information (and the complexity) required to design the brain. This thesis is wrong on a bunch of levels, but let's take the most obvious. The amount of information in the genome is the amount of information that the "body" (to simplify) requires to replicate or create parts of itself. The amount of information required is relative to the machinery that is going to interpret it. There is no reason to believe we are dealing with a Turing machine here, where the number of bits required for a program to perform a function is going to be more or less consistent across languages and platforms (assuming similar complexity of the code). The machine interpreting the bits matters. So while the body may only need "50 million bytes" to create itself, we may need many, many more millions of bits to specify how to build it. Just consider the complexity of protein folding.

More dubious statements follow:

"The goal of reverse-engineering the brain is the same as for any other biological or nonbiological system – to understand its principles of operation. We can then implement these methods using other substrates other than a biochemical system that sends messages at speeds that are a million times slower than contemporary electronics. The goal of engineering is to leverage and focus the powers of principles of operation that are understood, just as we have leveraged the power of Bernoulli’s principle to create the entire world of aviation."

This completely begs the question of whether it can be replicated in another substrate. He just assumes that it can be done, and by doing so he already assumes a model of the brain that could be (and most likely is) wrong. The brain is clearly not a Turing machine. That's not to say it isn't another kind of "computer" (for some expanded definition of computer) or doesn't follow mechanistic principles. Assuming the brain is like a Turing machine (which Kurzweil implicitly does) is one of the biggest obstacles to developing real AI.

Speculation of the Kurzweil kind does not belong in the "Science" category; maybe "Idle".

is right. Myers's criticism may be off the mark, but Kurzweil's speculation about brain design, like so much of his other speculation, is bullshit. His basic argument in the blog post is that the amount of information in the human genome constrains the amount of information (and the complexity) required to design the brain. This thesis is wrong on a bunch of levels, but let's take the most obvious. The amount of information in the genome is the amount of information that the "body" (to simplify) requires to replicate or create parts of itself. The amount of information required is relative to the machinery that is going to interpret it. There is no reason to believe we are dealing with a Turing machine here, where the number of bits required for a program to perform a function is going to be more or less consistent across languages and platforms (assuming similar complexity of the code). The machine interpreting the bits matters. So while the body may only need "50 million bytes" to create itself, we may need many, many more millions of bits to specify how to build it. Just consider the complexity of protein folding.

Exactly so. The genome information assumption is absurd and arbitrary. It's like assuming that because I can buy a book on Amazon by transferring 1500 bytes of information to Amazon's website, I could recreate that book inside a simulation using only 1500 bytes of code. In both this case and the issue of brain complexity, the mechanism for transforming the initial information into the finished product is far more complex than the "input data."
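A back-of-envelope comparison makes the scale of the mismatch vivid (the figures below are the rough order-of-magnitude numbers commonly cited, used here only for illustration): even granting a single byte of state per synapse, the brain's wiring dwarfs the "50 million bytes" figure.

```python
# Rough, commonly cited orders of magnitude -- illustrative only.
genome_bytes = 50_000_000        # Kurzweil's "50 million bytes" (compressed genome)
neurons = 100_000_000_000        # ~10^11 neurons
synapses_per_neuron = 1_000      # ~10^3 connections each (conservative)
bytes_per_synapse = 1            # absurdly optimistic: 1 byte of state per synapse

brain_state_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(brain_state_bytes // genome_bytes)  # → 2000000
```

Over six orders of magnitude of gap, and that's before counting dendritic geometry, neurotransmitter chemistry, or developmental history.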

In other words because Kurzweil's theories are, in your opinion, nonsense they shouldn't be tested?

It's not my "opinion". It's based on theory, the very information theory that he criticizes Myers for not understanding. Obviously he doesn't understand it either.

If you mean the bit about the brain being a Turing machine, then I would say it's fine to try testing that theory (many people have been, for many years), but the assumption that consciousness is substrate-independent is completely unjustified.

Our progress towards "reverse engineering" the brain may actually be SLOWING DOWN, not accelerating. Despite the wishes and dreams of computer scientists, animal rights advocates and folks like Kurzweil, the real nitty-gritty of "figuring out the brain" comes primarily from painstaking experiments in the anatomy and physiology of the brain. The primary funder of this research in the US is the NIH, and its funding has been stagnant if not decreasing in real dollars. Consequently, fewer smart students are entering the field.

As someone who actually does neuroscience research, the tools and techniques available today were almost undreamed of a couple of decades ago. Nothing is slowing down. But more money is always greatly appreciated, of course.

Kurzweil is absolutely correct. His best argument is not the complexity of the genome, but focusing on the actual functional structures in the brain. A cortex composed of a billion repeating units is something we CAN feasibly simulate. Already, we have massive systems that run an algorithm spread across billions of separate instances. (google.com is one)

An "algorithm" could also model the behavior of a few neurons working in circuit.

Also, keep in mind that most of the complexity of the brain and body is completely unrelated to the task of thinking. Much of the genome codes for molecular machine parts needed to maintain and grow the hardware. There are all kinds of defense and circulatory and support systems that we won't have to worry about when designing artificial minds.

And finally, consider the changes made to the brain by the environment: that doesn't make the problem harder. Once you have a self-organizing neural system that works like the human brain but a million times faster, you expose that system to our environment and train it up just like we do with humans. Sure, it might take a few years for such a system to reach super-intelligence, but if your fundamental design was right, then this would eventually happen.

Kurzweil is absolutely correct. His best argument is not the complexity of the genome, but focusing on the actual functional structures in the brain. A cortex composed of a billion repeating units is something we CAN feasibly simulate. Already, we have massive systems that run an algorithm spread across billions of separate instances. (google.com is one)

I would urge you to read the following slashdot post: http://science.slashdot.org/comments.pl?sid=1757102&cid=33278462 [slashdot.org]
The point of the post is that we are unable to model the neural activity of a worm with 302 neurons, and this after an extremely large amount of work. The cortex is not 'composed of a billion repeating units'. It is composed of 100 billion non-repeating units, with thousands of connections (each) to other non-repeating units, and each of the non-repeating units keeps changing both its structure and its connection strengths.

One other thing I did not mention: neurons are complex biological machines that use hundreds of thousands of moving parts to do a very simple task. If a neuron is stimulated enough, it fires: all or nothing. There are also various fine-tuning mechanisms located in the cell membrane at the synapse. We can model this behavior with a teensy fraction of the hardware that nature needs to do it. Just a few hundred transistors, tops.
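The "stimulated enough, it fires, all or nothing" behaviour described above is roughly what a leaky integrate-and-fire model captures. A minimal sketch (the class name and parameter values are arbitrary illustrations, not a claim about how real neurons are tuned):

```python
class LIFNeuron:
    """Leaky integrate-and-fire: sums input, leaks charge, fires all-or-nothing."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # firing threshold (arbitrary units)
        self.leak = leak            # fraction of potential retained each tick
        self.potential = 0.0

    def tick(self, stimulus: float) -> bool:
        self.potential = self.potential * self.leak + stimulus
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after the spike
            return True             # fired
        return False                # sub-threshold: no output at all

neuron = LIFNeuron()
# Weak input leaks away as fast as it accumulates; strong input fires.
print(any(neuron.tick(0.05) for _ in range(100)))  # → False
print(neuron.tick(1.5))                            # → True
```

Of course, this deliberately ignores the synaptic fine-tuning mechanisms mentioned above; the point is only that the spiking decision itself is cheap to model.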

Dendrites, I think, but they move pretty slowly? This all depends on what you consider to be the neuron proper and what to be fiddly additional bits, like ions and neurotransmitters, and what you mean by "move" (rather than "grow" or "slowly shift", perhaps).

But reuptake transporters, vesicles, ion channels... There are moving bits proper, I think. Cells in general do lots of moving.

Myers's primary complaint was that Kurzweil used the number of genes in the genome, and how many bits would be required to store that data, as a predictor of how long it will take to completely understand the complexity of the human mind. Myers's post lays out a glimpse of the additional complexity involved and rightly points out the fallacy of making such a grand prediction based on so small an amount of information and understanding.
Of course, Kurzweil's entire career and fame now depend on people continuing to fall for his dramatic generalizations and overreaching predictions that "Something Big" is right around the corner. I have watched Kurzweil talk, and sometimes it seems as if he has a messianic complex.

He should go back to what he does well: inventing interesting and useful machines. The prickliness he displays towards those who disagree with his worldview (often with good reason) has done more to harm his reputation than any of his critics' corrections. If you can't take criticism, you shouldn't be a futurist. He won't be the first whose hubris laid him low, either. Get back to the lab while you still can, Ray...

Sceptics are adept at making really quite fetching mincemeat sculptures of religion, alternative medicine and the new age, but we need some serious attention paid to the transhumanist/singularist/cryonicist belief cluster. Because these are smart people, they are likely our friends, they share a lot of our notions and they are proving that the main use apes with delusions of grandeur like ourselves put intelligence to is being stupid with far greater efficiency.

Obligatory RationalWiki plug: Cryonics [rationalwiki.org]. I was actually neutral-to-positive on the subject until a friend started looking seriously into spending $120k on freezing his head and I started looking seriously into what he was getting into. And goddamn, it's woo all the way down. Woo by people who are ridiculously smarter than you or me and use it to be dumb. How do you fight that sort of woo? Piece by piece, of course. So I have to learn the bollocks on its own terms to take it down (at which point you see goalpost-moving, reversal of burden of proof, etc., all the things apes with delusions of grandeur do so well). And it's just AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.

tl;dr: Singularitarians talk as much utter bollocks as creationists, climate change deniers, New Age hippies and the tobacco industry. There needs to be more analysis and dissection of said bollocks.

Why would we even want to simulate a human mind? To spare ourselves trouble of thinking? While we're at it, why don't we build a bunch of sex robots to save us the trouble of having sex. Then I guess we'll sit in front of the TV for the rest of eternity. Sounds like a blast.

I think it would be more awesome to have each of them train for a time under a group of east coast and west coast rappers respectively. Then after, say, a year of training, they meet for an ultimate rap battle at an arena and give their best attempts to 'serve' and/or 'school' each other from their given perspectives in mad rhymez.

PZ is no idiot, but he does have a tendency to lash out at those who are, and occasionally also at those who aren't. I used to post on his blog fairly often, until it degenerated into a far-left echo chamber when he couldn't be bothered to require civility from his commenters. I also wouldn't describe him as being one of those high school pricks that you mention, although he certainly does tolerate and cheer on the ones who frequent his blog.