
jamie writes "There he goes again, making up nonsense and making ridiculous claims that have no relationship to reality. Ray Kurzweil must be able to spin out a good line of bafflegab, because he seems to have the tech media convinced..."

...he must be right! He used math, and everything!
I'm a little shocked that Kurzweil equates blueprints with the functioning organ. I am not shocked, however, that the tech media latched onto this--at first blush it sounds so *reasonable.*

Sejnowski says he agrees with Kurzweil's assessment that about a million lines of code may be enough to simulate the human brain.

Here's how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
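The quoted back-of-the-envelope arithmetic can be reproduced as a short sketch; the 2-bits-per-base-pair and 25-bytes-per-line figures are assumptions chosen only to land on Kurzweil's numbers, not endorsements of them:

```python
# Reproducing the quoted math. The bytes-per-line figure is an assumption
# picked to arrive at the "million lines of code" claim.
base_pairs = 3_000_000_000      # "three billion base pairs"
bits = base_pairs * 2           # 2 bits per pair (4 possible pairs)
raw_bytes = bits // 8           # "about 800 million bytes before compression"
compressed = 50_000_000         # his claimed size after lossless compression
brain_bytes = compressed // 2   # "about half of that is the brain"
lines = brain_bytes // 25       # assume ~25 bytes per line of source

print(raw_bytes)   # 750000000 (he rounds this up to 800 million)
print(lines)       # 1000000
```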

Idiot. The design of the brain is encoded in the genome in the same way that the design of a 4KiB program is encoded in its load module: useful for running the program on its original hardware.

But then you have architectural issues. That 4KiB of information does not run unless it's supported by a complex operating system, which itself is supported by complex logic in an ISA and the memory management architecture backing it up. And all of that is implemented on a specific design in a specific physics model.

Translating that program to SPARC takes work, and it comes out roughly the same size. Translating that program to a progression of chemical reactions produces something vastly different, especially since you need new middleware (a chemical environment) running on top of different physics (chemistry).

Translating a physical architectural design from chemistry to computer logic on top of a given ISA is the same problem. You run into odd, messy issues, and then the program running on the brain needs to be rebuilt as well. That program is even more complex and less well understood.

Ray Kurzweil is yet another computer programmer blathering on about things he has no understanding of. The vast majority of software people I know do that; I don't understand why this guy gets to publish books on it.

Would be nice if the summary even hinted at what the ridiculous claim actually WAS...
Namely, that we'll be able to reverse engineer the human brain in the next 10 years.

It's a little more complicated than that. You see, the article actually breaks down the logic behind that statement and points out how poor it is. Here's the initial part of Kurzweil's argument:

Sejnowski says he agrees with Kurzweil's assessment that about a million lines of code may be enough to simulate the human brain.

Here's how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.

I have only taken high school biology, but I know that the genome doesn't magically become the brain. It goes through a very complex process: amino acid sequences fold into proteins, which in turn make cells, which in turn make tissues, which in turn comprise the human brain. To say we fully understand this transformation is a complete and utter falsity, as demonstrated by our novice understanding of the misfolded beta-amyloid protein that we think leads to Alzheimer's. Which proteins a given amino acid sequence folds into is, I believe, largely an unsolved search problem (hence efforts like Folding@Home). And he claims that in ten years we will not only understand this process but... reverse engineer it?

The man is insane. I've posted about this same biologist criticizing him before [slashdot.org] and it looks like P.Z. Myers just decided to take some extra time to point out how imprudent Kurzweil's statements are becoming. Kurzweil will show you tiny pieces of the puzzle that support his wild conclusions and leave you in the dark about the full picture and pieces that directly contradict his statements. This is a dangerous and deceptive practice that -- despite my respect for Kurzweil's work in other fields -- is rapidly turning me off to him and his 'singularity.' He's becoming more Colonel Kurtz than Computer Kurzweil.

"Read it. Other than the solid date he predicts, it's pretty plausable."

No it's not. If it was possible to do in a million lines of code, it would have been done by now. Windows XP had something like 40 million lines of code. While we can agree it was probably coded relatively inefficiently, there is no way that any OS even comes close to the complexity of the brain.

Article aside, it seems arbitrary to apply lossless compression to the LOC.

Code must be very, very compressible losslessly (I'd bet around 90%, since plain text often saves 80% when zipped). That would imply you need roughly ten times as many LOC as the (faulty) premise assumes.
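That compressibility guess is easy to sanity-check with Python's zlib; the toy sample below is an assumption, and real codebases will compress less predictably:

```python
import zlib

# Highly repetitive toy "source"; real code varies, but repetitive text
# like this compresses extremely well.
sample = b"for i in range(10):\n    print(i)\n" * 200
packed = zlib.compress(sample, level=9)
savings = 1 - len(packed) / len(sample)
print(f"compressed away {savings:.0%} of the bytes")
```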

The article itself points out that it is not just a matter of writing the code, but also of simulating the machine it runs on. So yes, if we could accurately build a machine (real or virtual) that computes the way DNA computes, perhaps we could then make a brain that functions in it without too much code; but it does not follow that we can describe it just as tersely on computers as we have them (Turing machines?).

A computer is a fixed system. If you tell it to do A (via software), you know you will get B, based on knowledge of how the circuits are hardwired. The same cannot be said of the human brain, because it has the ability to change its own hardware (by growing new connections between neurons).

as opposed to those who are satisfied with the theory that life evolved from inorganic chemical compounds, totally by chance, with a series of infinitely improbable events occurring in the right sequence over and over and over again.

What a lovely caricature you've constructed there. And like most crappy caricatures of biological evolution, you conveniently gloss over the major role played by natural selection, which is not random.

You're being conveniently trite here, though. That's not a good counter-argument. This particular biologist seems to have a pretty good grasp on the fundamental problem with Kurzweil's argument, and that problem is: Kurzweil confuses the purpose of the genome. It is not "the program"! Myers contends that, really, it's more like data. To me, this sounds like a classic Von Neumann architecture: it's a bit of both, depending on your context. In any case, Kurzweil completely misses out on the fact (and he would know this if he had followed *anything* in genomics over the last 15 years) that the genome, as encoded in DNA, is only a small part of what makes a cell express and function in a particular way. A nice introduction to the epigenome was in this NOVA documentary [pbs.org].

You're looking for a level of effort above pure copy'n'paste and as such are asking for way too much. Slashdot submissions and editing have gotten so bad that the summaries are generally misleading if not entirely wrong. The summaries tend to be nothing more than the submitter taking the most polarizing sentence/paragraph from TFA and pasting it into the summary field. RTFA is no longer to glean more details for the sake of learning more or backing up your opinions in comments... RTFA is now necessary to understand just what the fuck the submitter wants us to learn. The term "summary" appears to be _entirely_ lost now, at least in the Slashdot story submission crowd.

Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes

This is so wrongheaded that it's sad. Why is it so hard to understand how compression algorithms work? Saying that X can be compressed into Y bytes doesn't say ANYTHING on its own. You can "losslessly compress" ANYTHING into 1 bit by using the function that takes that specific input and returns the bit "1" (and takes anything else and returns "0" + the input). What does compression have to do with anything? The stupid hurts...
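That 1-bit "compressor" is easy to make concrete. A toy sketch (one byte instead of one bit, for simplicity) showing that a quoted compressed size is meaningless once the information hides in the codec itself:

```python
# The "compressed size" of TARGET is 1 byte, but only because the whole
# payload is baked into the codec. Quoting the size without counting the
# decompressor says nothing about information content.
TARGET = b"any fixed payload you like"

def compress(data: bytes) -> bytes:
    return b"\x01" if data == TARGET else b"\x00" + data

def decompress(blob: bytes) -> bytes:
    return TARGET if blob == b"\x01" else blob[1:]

assert decompress(compress(TARGET)) == TARGET      # lossless
assert decompress(compress(b"other")) == b"other"  # still lossless
print(len(compress(TARGET)))  # 1
```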

Oh and as a side note, the current state of the field in biological evolution has long since moved past the works of Darwin. Your remark is about as disingenuous as trying to use the failings of Newton's classical mechanics to make criticisms of the current state of quantum mechanics.

The proteins/cells that make up the brain are only part of the story. The protein/cell level is roughly what a newborn can do. The rest of brain development is creating and tearing down billions of interconnections between neurons. It's those interconnections that turn the brain from a pile of goo into something interesting, and we have no understanding of how that mechanism works.

Secondly, 3 billion base pairs does not mean 6 billion bits. First, DNA is base-4, not base-2. Second, the pairs are the units of information, not the 2 nucleotides that make up each pair.

Third, source code isn't compressed.

Fourth, there isn't much redundancy in a gene sequence. There is redundancy in that we have 2 copies of our genome, but that's already accounted for by the '3 billion base pairs' number. While there's a lot of 'junk' DNA, there isn't much (if any) redundant DNA.

it looks like P.Z. Myers just decided to take some extra time to point out how imprudent Kurzweil's statements are becoming. Kurzweil will show you tiny pieces of the puzzle that support his wild conclusions and leave you in the dark about the full picture and pieces that directly contradict his statements.

He staked his reputation on a timeline that everyone but him knew was impossible, and now he hunts for little pieces of evidence to support the idea that we are still on that timeline. As reality and his predictions diverge further from each other, his claims and evidence become weaker, until the day he predicted the singularity would happen passes by and he is forced to revise his proph-... er, prediction. Even assuming his basic premise is correct (an idea for which I feel there isn't enough evidence to judge either way), it should be obvious by now that his time scales are way, way off, probably by at least an order of magnitude. He would serve himself and his causes better by admitting his mistake and reevaluating his predictions.

The genome contains enough information to build the brain from raw materials. However, this data has already been losslessly compressed by countless generations of evolution. We would need to discover the evolved compression algorithm to "unpack" the 800 million bytes into the 3.2 billion bytes (using his factor-of-4 ratio) in order to begin understanding it.

if you rtfa you'll see that the million lines of code only gives you the proteins that make up the brain - i.e., it gives you a parts list and a delivery schedule, not a set of assembly instructions. The genome doesn't tell you how the proteins interact, often in complex ways (i.e., three or more proteins interacting simultaneously), in billions of cells in parallel, over the course of 9 months, to give us an infant brain (even leaving aside the tremendous amount of development that takes place in the brain during childhood).

As the author of tfa writes: To simplify it so a computer science guy can get it, Kurzweil has everything completely wrong. The genome is not the program; it's the data.

IOW, the program is the developing organism itself: the complex protein interactions and its (uterine) environment, none of which are encoded in the genome. The organism uses the data encoded in the genome to produce proteins, which interact with each other, the organism, and its environment to grow cells which eventually form a brain.

The mistake in Kurzweil's thinking is the typical mistake engineers make when dealing with biology: the environments into which engineers place their designs do not typically cooperate spontaneously in the construction of those designs. When an engineer designs a circuit board, his lab bench doesn't spontaneously start soldering connections, adding components, and automatically completing parts of the design without his explicit instructions. But the organism does precisely this with proteins synthesised from the genome.

As a result, the genome alone cannot possibly tell you how to "make" an organism, because the genome only gives you the parts list and delivery schedule for the organism, not the assembly instructions. The assembly instructions are not explicit anywhere in the system; they are implicit in the combination of the complex behavior of the cells of the developing organism, the uterine environment, and the very complex ways the proteins synthesised from the genome interact.

In order to extract the actual assembly instructions we'd need a full-blown molecular biology simulator that could correctly simulate:

1. protein folding (still unsolved)

2. complex multi-protein interaction (still unsolved)

3. the simultaneous behavior and development (i.e., in parallel) of billions of living cells, each undergoing trillions of chemical reactions per second (computationally prohibitive)

What Kurzweil says is pretty reasonable: he used the total amount of information in the genome to get an upper-limit estimate of the amount of library code needed to simulate a brain.

Yes. Well done. Did you try reading the article that you are criticising? It rips your point apart fairly easily. The thing about an upper limit is that it should be at least as large as the thing you are estimating. The article shows quite conclusively that Kurzweil's "upper limit" is far too small, because he knows nothing about brains and pulled some numbers out of his arse.

That "tangent" that Myers went off of was a reasonable argument for why the amount of information described is not sufficient to simulate a brain. Not least because it is a highly compressed description of a process that builds a brain. It is not a description of a brain itself. Furthermore to use that description to build an actual model of the brain you need to understand all of the biological processes that are relevant in executing that construction code, and the environment that they run in.

I think a machine with one million processing cores at 1 GHz would have approximately the same data handling capacity as a human brain. The rest is software.

Oh the irony, it's burning my eyes. You're defending somebody who was caught babbling about something they don't understand by repeating the trick. Well done you.

You also lack an understanding of what is involved in the functioning brain.

Biochemistry is incredibly important. The brain is not just a neural network; it is an electrochemical organ and the chemicals floating around in there greatly affect the operation of neurons. There is no distinction between "hardware" and "software" in the brain--every new thought or stimulus causes the physical structure to change: neurons form new pathways, areas get flooded with neurotransmitters, etc. This shit is way more complex than you believe, and not in a way that is friendly to computation.

Computer people like to think that if we just throw enough cores at the idea it will magically come to fruition. In reality there are many important differences [scienceblogs.com] between brains and computers, enough that I don't think digital computers are going to be more than a dead end. I could maybe see implants that let us control real-world stuff with concentrated thought, but that's about the limit of digital interfaces.

I'm not sure what it is about his claims that is supposed to be so ludicrous. For example, a million lines of code seems at least plausible, as long as we bear in mind the following:

1. We're not trying to mimic the brain at the protein level, but rather at the broader, inter-neuron level (and whatever complex intra-neuron behaviour we discover).

2. The million lines of code don't need to encompass the capacity of the brain, just its general neural architecture and adaptation rules - there will no doubt be many gigabytes (terabytes?) of working memory, which would actually store the neural connections and whatever parameters they may have.
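Point 2 can be sketched concretely: an adaptation rule fits in a line or two, while the learned state lives in a large weight matrix. A toy Hebbian update follows; the sizes and learning rate are illustrative assumptions, not a brain model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000                    # toy neuron count; a brain has ~10^11 neurons
weights = np.zeros((n, n))   # the bulky "working memory": n*n floats

def hebbian_step(w, activity, rate=0.01):
    # The entire adaptation rule: co-active units strengthen their link.
    return w + rate * np.outer(activity, activity)

for _ in range(10):
    weights = hebbian_step(weights, rng.random(n))

print(weights.nbytes)  # 8000000 -- megabytes of state from a two-line rule
```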

To be honest, the authors of this article seem rather too cocksure in dismissing all this. Even the apparent agreement of Terry Sejnowski (co-inventor of the Boltzmann machine) doesn't give them pause. I'm not that familiar with Kurzweil's predictions, but this one seems fairly reasonable to me.

There is a Google tech talk by Geoff Hinton on restricted Boltzmann machines (a sort of stochastic neural network) that's well worth a watch, for those who are interested. They are considered biologically plausible, and he seems mostly to apply them to machine vision tasks.
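For the curious, the core of a restricted Boltzmann machine is small enough to sketch. Below is a minimal CD-1 (one-step contrastive divergence) training loop in numpy, with bias terms omitted and all sizes chosen arbitrarily; treat it as a sketch of the idea, not Hinton's actual code:

```python
import numpy as np

rng = np.random.default_rng(42)
n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    # Up pass: infer hidden probabilities from the data, then sample.
    h_prob = sigmoid(v0 @ W)
    h_sample = (rng.random(n_hidden) < h_prob).astype(float)
    # Down pass: reconstruct visibles, then re-infer hiddens.
    v1 = sigmoid(W @ h_sample)
    h1 = sigmoid(v1 @ W)
    # CD-1: data correlations minus reconstruction correlations.
    return lr * (np.outer(v0, h_prob) - np.outer(v1, h1))

data = (rng.random((20, n_visible)) < 0.5).astype(float)
for _ in range(30):
    for v in data:
        W += cd1_update(v)
```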

There are currently four academic disciplines working on the reverse engineering of a human mind. Linguistics, psychology, computer science, and philosophy. You can count neurology too if you want to start talking about the *actual* brain. Several tens of thousands of individuals are directly and indirectly working on this problem. We've come a long way in the last few decades. Unfortunately, we have a pretty long way to go. For the moment we lack a model which accurately describes how mental processes work. There isn't even a consensus on how the processing is done.

"modeling the brain" is not even really the hard part. One only needs sufficient computing power to model what they *think* is going on logically (there isn't even a consensus here). The trick is modeling the mind. We are very, very far away from that.

A fun number to throw around is how many synaptic connections are present in the brain. Synaptic connections are widely believed to be the best indicator of overall memory storage and processing speed (to an extent). There are about 10 to the 15th (peta-, I believe) synaptic connections in a normal human brain. A significant number of these are active at any given time. In other words, the brain is performing a HUGE number of "calculations" simultaneously at all times. Modeling just the hardware is obviously not easy; modeling the software is currently not possible. I doubt it will be in the next 50 years.
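A toy sizing estimate from that synapse count; the bytes-per-synapse figure is an assumption (one float32 weight per connection), and real synapses are not single numbers:

```python
synapses = 10 ** 15             # rough synapse count from the estimate above
bytes_per_synapse = 4           # assumed: one float32 weight per connection
total = synapses * bytes_per_synapse
print(total // 10 ** 12, "TB")  # 4000 TB just to store one static snapshot
```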

For a good read on what many cognitive scientists think is going on, though it is clearly not an accurate model but rather a best guess, go read up on "connectionism".

Now just find the compiler with the right set of libraries that can compile it. And yes, I am NOT just being anal. Half a million lines of code is MEANINGLESS. Quick: how many lines do you need for a "Hello World" program? In assembly? C? Java? PHP?

If one day someone designs a CPU with a built-in Hello World function, it would require what, 2 instructions in assembly? Meanwhile the Java guy will be pounding out yet another page of code.

Obviously, by your logic, a free market economy is impossible. Our economy is too complex to have evolved on its own. In fact, it is far more complex, with far more different parts, than a human being. It must have had a creator. If most any part of the economy, like the steel industry, say, were removed, the economy would not function. How did the economy function before there was a steel industry? Obviously, it couldn't, and therefore we have demonstrated irreducible specificated complexification or something.

All this free market talk is obvious bullshit, and we actually DO have a centrally planned economy because it is impossible for something so complex to have evolved without a central planner.

Kurzweil hasn't just staked his reputation on this barmy timeline, but his life too. I mean, seriously, the guy is popping vitamin pills like crazy thinking that if he can just extend his life a decade or so, the nerd rapture will finally happen and he'll get to be absorbed into the giant galactic Googlebrain.

But, no, this isn't religious enthusiasm gone too far. No, this is SCIENCE. I mean, the man has graphs, so it has to be science, right?

you don't need to simulate electrons in a semi-conductive material at specific temperatures in order to build a complete working emulator for an old computer.

Maybe not, but you do need to understand the fundamental laws and rules that govern the systems of a computer. The fellow who wrote this article seems to be asserting that we actually don't know the fundamental laws and rules that govern the systems of the human brain - or, at least, Kurzweil doesn't.

In other words, Kurzweil oversimplifies the problem by stating that, since the brain is organically grown from a base set of information, it should be trivial to emulate a brain once we can emulate that base set of information. Myers is asserting that the fundamental laws governing the functions of the human brain appear to be far more complex and to derive from things other than that base set of information. The human brain appears to function under a set of laws and rules different from the set that Kurzweil assumes. That is the fallacy Myers is pointing out in Kurzweil's logic.

Myers may not understand computers very well, but he certainly does seem to have some insight into the rules and laws (biochemistry, protein folding, etc.) that at least partially govern the human brain. Similarly, anyone writing a computer emulator needs to understand the fundamental laws and rules that govern the computer (binary logic, architectural pathways, memory addresses, etc.). Myers goes on to say that our understanding of the fundamental laws of the human brain is incomplete at best and downright ignorant at worst. That's how he derives his argument.

That's not really how Kurzweil is arguing. He's looking at the genome, then saying you can build a working brain from that info alone. It may be theoretically possible, but it's so difficult that we shouldn't even bother trying. It's akin to trying to understand the behavior of a volume of a gas by looking at how just two molecules bounce off each other; it looks very straightforward, but you're actually missing some hugely complicated behavior going on.

A prediction of my own: if the brain is ever simulated by a program, the program itself will be very simple--perhaps a few thousand lines, or even a few hundred. However, that program will be self-organizing in a way that's equivalent to a program trillions of lines long, and the creators won't be able to comprehend the end result.
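That prediction has a classic toy illustration: elementary cellular automata, where a rule that fits in a few lines produces behavior far richer than its source. Rule 110 below is a stand-in example of "simple code, complex result", not a claim about brains:

```python
RULE = 110  # Rule 110 is Turing-complete despite this tiny definition

def step(cells):
    n = len(cells)
    # Each cell's next state is one bit of RULE, indexed by its 3-cell
    # neighborhood (left * 4 + self * 2 + right).
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 63 + [1]          # a 64-cell ring with one live cell
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```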

First off, Ray Kurzweil doesn't want to die. That's a preoccupation that a lot of people have (including one of his critics, Rudy Rucker, who has written whole books hoping to find immortality in the fourth dimension), and it leads them to some pretty fantastic conjectures from time to time. It's not necessarily a bad thing, as long as you keep the proverbial grain of salt handy. Modern chemistry and its not insignificant contributions to our vastly expanded lifespans arose from the alchemical search for immortality. Alchemy was bullshit, of course, but the incidental discoveries of alchemists on the way to their illusory elixir of life paved the way for the real science to follow and build upon after it had ejected the dross.

And secondly, I don't think it's entirely implausible that we can eventually design hardware and software that will match and exceed the performance of the human brain. Our brains, after all, are the end product of evolution, and like pretty much every other part of our bodies, an accumulation of kludges that were just good enough to get passed to the next generation (or not bad enough not to get passed on). It's also implemented using hardware so unreliable that it wouldn't function at all if it wasn't constantly repairing itself, and even then, no matter how well you treat it, it irreparably craps out after about 75 years. And it still doesn't work all that well -- ever seen the long chain of train wrecks that is the history of human civilization? We might be able to engineer something that works a lot better. Granted, it's not going to be by deriving simulated human brains from a copy of the human genome. More likely, it will be very much unlike the way biological brains work.

The fundamental problem, which I think smart and optimistic guys like Ray Kurzweil are particularly prone to forgetting, is that it may not be possible for a mind to understand a mind of equal complexity, i.e., humans may lack the necessary intelligence to duplicate their own intelligence. That will force us back on genetic algorithms to evolve AI, leading to an end product that will likely be just as badly undesigned as natural brains. Worse, it will do little to advance our understanding of how minds work: if we can't reverse-engineer our own brains, we probably won't be able to reverse-engineer even more sophisticated artificial minds, nor will they be able to reverse-engineer themselves. (We can hope that they could reverse-engineer us, and then explain it to us in terms we can understand, if such terms exist, but that takes us so far out on a conjectural limb that I can see Ray Kurzweil from here.)

Anyway, there's room for bold conjectures. That doesn't mean that when Kurzweil completely fails to understand the way molecular biology works that we shouldn't call bullshit on it, but we shouldn't be entirely hostile to futurist speculation. By nature, most of it will be bullshit, but a lot of progress in unexpected areas has been made in the pursuit of mirages (alchemy leading to chemistry, astrology leading to astronomy), and explaining (or discovering) why a conjecture is bullshit is a beneficial exercise in and of itself.

I forgot a few points. A few years back I went to a "singularity talk" by some people doing silicon design, trying to cram denser neural nets onto chips.

Even at the time, it struck me that by the time you've made a "human equivalent" hardware simulator in some sort of neural net, you've got a newborn. Let's assume you're "at" the singularity, with your brand new AI...

I have experience with this. I've participated in the creation of two NIs. They can't do spit at initialization. Actually, they can do 2 things: suck on a nipple and express displeasure. OK, they can also wave their limbs and produce waste, but I'd argue that the brain isn't involved in that, at least on a control basis. Maybe I'd include a 3 and 4: open and close their eyes. It takes weeks to months before they can do much more than that. It takes years before they can tell you anything that doesn't need the "parental interpreter" functioning. It takes more years before they can even think of passing a Turing Test.

I'm not quite sure what researchers expect of a brand new AI. Maybe their expectations are right in line with mine. Maybe it's the popular literature, and therefore the general public, that expects to hear, "Hello Dr. Chandra" in perfect speech.

Then on the parent post, something in TFA made me recalibrate "blank brain", realizing that there's probably quite a bit of "body connection" rewiring happening in the brain well before birth. I'm guessing that the newborn's brain is far from blank.

Entering college, we get students whose goals in life are the following:

Make a true AI / mimic a human brain - if they are good, they will end up getting a PhD, becoming a computer science professor, and perhaps doing some cool research on a limited area of AI.

Make an operating system which can run any code for any platform, faster and more securely than the existing OSes - if they are good, they may work for a software company doing some lower-level programming.

Make the ultimate game which will make them millions, nay billions, of dollars - they end up working as a web designer or, if they are really lucky, working on some small subset of a game.

Then in college they realize there is no magic in computer science: a lot of things that are easy for a human to do are difficult for a computer, and a lot of things difficult for people are easy for computers. You can't just tell a computer an abstract concept and hope it knows what to do with it. It takes real work, and actually a fair amount of brain power, for a lot of the mundane tasks that need to be done.

The human brain is composed of one hundred billion or so neurons. Looks like it's pretty much finite to me. I have ten times as many bytes of information in my hard disk.

Yet while you were typing (presumably not saving anything other than in RAM), was the content of your hard disk changing? (Yes, perhaps a bit, but play along for this example.)

The neurons in your brain are continuously 'remapping'. Even while some may be static, others are making new connections in ways we currently can't predict, nor can we really understand why one connected to 'this' neuron instead of 'that' one.

Not that the brain functions in any quantum manner, but it's one of those things where, even if you were to KNOW the exact mapping of neurons, the very next instant the mapping would be wrong, and it would very quickly become wildly inaccurate (100 billion or so items making new connections along multiple paths).

I suppose it would be something like trying to map the water vapor droplets in a cloud. There is a finite number of droplets there too, but predicting the shape/behavior of a cloud with any precision after only a single second would be very, very difficult.

There's also a "skeptic woo". It means dismissing things you know nothing about because they involve things you don't understand.

It never ceases to amaze me how so many "skeptics" have decided that they've seen it all and know it all. They're the mechanical engineers who have decided that their expertise also qualifies them as experts in quantum mechanics. They're the chemist who has decided to write the "definitive" work on physics that's going to refudiate Einstein.

It's very easy to tell someone serious, who can discern the difference between scientific claims and hokum, from the "professional skeptic" who dismisses anything they don't understand as phony. It's the corollary to the saying that "any sufficiently advanced technology is indistinguishable from magic". Basically, it says "anything I don't understand must be magic", and it's intellectually lazy. Yes, I'm saying that many "skeptics" who tout their intellectual rigor are actually intellectually lazy.

Here's how I tell the difference between a serious skeptic and a "pop" skeptic: I ask them if acupuncture is "woo". One question, that's all. The question works just as well with tai chi chuan.

you don't need to simulate electrons in a semi-conductive material at specific temperatures in order to build a complete working emulator for an old computer

You do, if you have no idea what the higher levels are all about. Our knowledge of how the brain works (hell, even of the biochemistry of a single cell) is so poor that we cannot yet discard "lower details" if we want to get a working system. So finding upper bounds by looking at the lower level of the picture is not such a bad idea.

Myers does not raise any objections to code or data "quantity" -- the big hurdle is that a vital part of the system is outside the DNA, and we are only beginning to explore it. Read up on epigenetics [wikipedia.org].

Also, Kurzweil is not the first person to make such a ridiculous claim. Wolfram makes a comparison in NKS -- claiming that Mathematica is more complex than the human genome based on a "lines of code" argument. I lol'd.

... and the mark of a good skeptic, is somebody who understands that they realistically cannot know all that much of anything, and to defer to the judgement of experts -- and not just ANY experts, but recognised experts.

And I disagree that scientific skeptics are as susceptible to the Dunning-Kruger effect as the cranks and New Agers. At least the skeptics don't pretend they know more than people who've been to university, while having so little grounding in physics and medicine that they effectively know NOTHING about either.

The idea that my Granddad -- who thinks he has magic TK powers -- is practicing some kind of science beyond my comprehension, is not very plausible.

Profit does not have to be the only motivation for making shit up. I suspect that the industrial-grade crazies of the world (like Hulda Clark), are motivated less by cash, than by a desire to be recognised as a maverick genius, etc.

PZ Myers wasn't there; he based his whole critique on gizmodo's writeup.

Speaking as someone who was there and heard Kurzweil's full speech, I can confidently say that PZ Myers does not understand Ray Kurzweil.

First off, a significant factual mistake: Kurzweil -clearly- never said we'd reverse engineer the brain by 2020. He argued against exactly that (his prediction was late 2020s, shading into 2030 -- perhaps also unbelievable, but if you're going to critique someone, why not get the facts right?). Sure, gizmodo's writeup was entitled "Reverse-Engineering of Human Brain Likely by 2020". It'd be an understandable attribution mistake for, say, an undergraduate.

Second, Myers is critiquing Kurzweil's ontological position based on a throwaway writeup dashed off by gizmodo. (Really, Myers? And you wonder why you're a magnet for shitstorms...)

Third, Myers' criticism is essentially that the brain is an emergent system, and we'll have to understand all the protein-protein interactions, functional attributes of proteins, etc. in order to actually model the brain.

This third assumption is arguable, but Kurzweil wasn't actually arguing against it. All Kurzweil meant with his comment about bytes and the genome was that there's an interesting information-theoretic view of how much initial data gives rise to the wonderful complexity of the brain.

The genome of a creature, plus the cytoplasm contents of an egg, plus a complete understanding of the laws of physics should in fact be all that you need in order to fully simulate a human being. Granted, you'd need to simulate it sequentially from conception to adulthood before you get anything useful out of it, which might take more or less than the biological time required depending on the power of your simulator.

Humans are deterministic, after all - we're just a bunch of atoms and molecules. Granted, there are random quantum effects, so three simulations with the same input might not come up with the same output if these are genuinely taken into account. However, all three would be plausible outcomes if we were talking about a real person with a real brain.

The part that is being left out is the little caveat: "plus a complete understanding of the laws of physics."

Here is an illustration. A jpeg of a rendition of the Mandelbrot set might take 20k of space. A mathematical description might take well under 1kb of code. That description might even be enough to fully simulate its behavior. That description is certainly not sufficient to UNDERSTAND its behavior.
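To make that concrete: the entire generating rule for the Mandelbrot set fits in a few lines. Here's a rough Python sketch (the iteration cap of 100 and the escape radius of 2 are the usual conventional choices, nothing more):

```python
# Minimal escape-time membership test for the Mandelbrot set.
# The whole "mathematical description" is just: iterate z -> z^2 + c
# and see whether z stays bounded. A few lines generate endless
# complexity -- which is exactly why "small description" does not
# mean "easy to understand".
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # escaped: c is not in the set
            return False
    return True               # still bounded after max_iter steps
```

Rendering a picture from this, let alone understanding why the boundary looks the way it does, takes vastly more than the ten lines of description -- which is the point.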

Also, don't discount the cytoplasm. Proteins don't fold the same in buffer as they do in a cell, and simply adding non-specific protein doesn't always do the trick either. Gene regulation doesn't work without epigenetics, epigenetics doesn't happen without regulatory proteins, and those proteins don't get there without translation from gene transcripts. DNA alone, without capturing the initial state of the machine, is as useful as a memory dump without the CPU status dump on a CPU with 43 million registers. The last I heard, things like centrioles can't be replicated except in the presence of another centriole.

The bottom line is that there is nothing "magical" about human cells. However, to estimate their total information content at only 2GB or so is probably a gross underestimate.

Is this a slashdot story, or someone's twitter page? At least some kind of objective summary would be nice, other than "Lul Kurzweil, here, a link, he stoopid!"

But before I just hit preview and go, let's take a look at the article itself. Aaand, holy crap, the post is verbatim from the article.

Kurzweil's effective claim is "There's only so much data in the DNA. The brain is about 50 million bytes. If we can reverse engineer the process used to turn those 50 million bytes into a brain, we can then reverse engineer the brain."
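For what it's worth, the quoted arithmetic can be retraced in a few lines. To be clear, these numbers are Kurzweil's claims as reported, not established facts, and the ~25 bytes-per-line figure is the assumption implied by his "million lines of code" conclusion:

```python
# Retracing Kurzweil's arithmetic as quoted in the summary.
# Every constant here is his claim, not a measurement.
base_pairs = 3_000_000_000           # human genome, ~3 billion base pairs
bits = base_pairs * 2                # 2 bits per base pair -> 6 billion bits
raw_bytes = bits // 8                # 750 million bytes ("about 800 million")
compressed = 50_000_000              # his lossless-compression estimate
brain_share = compressed // 2        # "about half of that is the brain"
lines_of_code = brain_share // 25    # implied ~25 bytes/line -> "a million lines"
```

The arithmetic itself checks out; the objections in the thread are to what the arithmetic is supposed to prove.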

Seems logical - and even though the endpoint might not be "brain on a chip", it might be "oh, there's a flaw in the DNA here that's causing the hypothalamus to be malformed, let's start checking for that and maybe fixing it in the womb." There are many, many scientists trying to puzzle out this "source code" for that very reason. It's a perfectly valid point of study.

Kurzweil is a futurist. His scientific area of study is not "You should do X, Y and Z to get to points A, B and C." His area of study is "Scientists are working on X, which may lead someday to Z, and might bring us technology C." There's an important difference there, which I always find amusing when scientists and the anti-singularitarians start hooting, "he forgot Y, A and B!"

His math all points to Technology C and beyond being really amazing [wikipedia.org], but that's beside the point. His area of study is not "every technology field ever", but rather "this is where things are trending". People mix the two up, sometimes intentionally, and hoot hoot hoot, Y A B.

Anyway. Back to the article. The rebuttal in the article is "We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently."

See, the brain might have a source code, one that's remarkably small and turns into something really complex, but that doesn't mean anything cause... maaaagic. And you can't understand magic, right? Everyone knows that something that's so complex that it seems impossible to understand [wikipedia.org] should never be attempted. Worthless endeavor. Everyone knows that. Right?... Maagggiiiiccc~~~

The fact of the matter is, DNA is source code. For a system we don't fully understand, one that's remarkably complex, but ultimately, DNA, even our DNA, is just data. We can understand, change, manipulate, and create data.

To treat it all as magic -- as something that we will just never be able to understand -- is to do a disservice to centuries of scientists, of the past and the future.

A model of the human brain would need to model 10^10 neurons, each connected (not at random) to some 10,000 other neurons to produce a net of 10^14 synapses.

To understand the challenge of modelling a system this vast and complex, consider the state of research on the model organism Caenorhabditis elegans (a tiny worm). After many years of work its nervous system has been (almost) exactly mapped: it contains 302 neurons, 6393 chemical synapses, 890 gap junctions, and 1410 neuromuscular junctions. Imagine now the difficulty of reaching this level of precision in a system 10^7 times larger. Unlike the genome, we have no clues about how to automate mapping of an intact brain.
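A quick back-of-the-envelope using the counts above (all order-of-magnitude figures taken from the comment itself, nothing more precise than that):

```python
# Rough scale comparison behind the "10^7 times larger" figure.
# Human numbers are order-of-magnitude estimates; C. elegans counts
# are the mapped totals quoted above.
human_neurons = 10**10
human_synapses = 10**14                    # ~10,000 connections per neuron
elegans_neurons = 302
elegans_connections = 6393 + 890 + 1410    # chemical + gap junction + neuromuscular

neuron_ratio = human_neurons / elegans_neurons    # ~3 x 10^7
synapse_ratio = human_synapses / elegans_connections   # ~10^10
```

Note that by connection count the gap is even worse than the neuron ratio suggests: roughly ten orders of magnitude.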

But the good news is that with this level of neuro-mapping precision we can now completely simulate the neural network ("brain") of a tiny worm, right? Right?

Wrong. Not by a long shot. We are still struggling with characterizing the behavior of this primitive neural net, and making efforts at simulating some aspects of that behavior. The 302 neuron "brain" is far beyond our abilities to simulate at present.

That's nice, but unfortunately the rate-limiting step isn't processing time or RAM or the cost of sequencing. The rate-limiting step is our understanding of neurobiology and developmental biology. Even PZ misses some of the complexity. One of the really difficult problems is figuring out all of the electrophysiology of the brain (spike-timing-dependent plasticity... of every area, all of the electrotonic structures, how they're modulated, and how that and post-translational modifications muck with everything, etc.). It'll be 10 years before the Blue Brain Project really shows something super cool in this regard, and that's a single cortical column of a mouse brain...
Kurzweil doesn't even know enough to understand what would actually be required to do what he's saying.

One important feature of an AI is that it must be able to prioritize its tasks by urgency. This, in itself, requires intelligence and processing power.

I'm pretty sure that's not the way the brain works in the majority of cases. The priority of a lot of brain tasks has been worked out by evolution, and is hardcoded in as reflex. Conscious thinking is one of the lower-priority ones :)

Here's how I tell the difference between a serious skeptic and a "pop" skeptic: I ask them if acupuncture is "woo". One question, that's all. The question works just as well with tai chi chuan.

And what if this was my answer: a vast majority of the claims relating to acupuncture are woo, though there are some areas that demand further research. I would use chiropractic as a test, personally, since 90% of the claims, and supposed reasons, are pure, unadulterated woo, but 10% of it is actually helpful (even if the reasons for its effectiveness are often pure hokum). The same would go for acupuncture and tai chi: there might be some useful bits in there, but the stated mode of operation (chi, spiritual energy, invisible whatnots) is probably 100% woo. Also, with acupuncture and chiropractic the woo is firmly bundled with the real bits, making it very hard to distinguish where one begins and the other ends.

So.. with acupuncture, at least, we can say some functional aspects of it are non-woo, but if you accept it as it stands, with its traditional rationale, then you are a follower of woo. If you toss out the idiotic bits and accept an actual scientific explanation for its effects, then you can join the woo-less camp.

It doesn't help that there are tons of fake, woo-spreading, "naturopathic"/alt-medicine journals that spread self-aggrandizing quasi-studies.