Posted
by
samzenpus on Monday January 21, 2013 @07:00PM
from the making-a-mind dept.

moon_unit2 writes "An AI researcher at MIT suggests that Ray Kurzweil's ambitious plan to build a super-smart personal assistant at Google may be fundamentally flawed. Kurzweil's idea, as put forward in his book How to Create a Mind, is to combine a simple model of the brain with enormous computing power and vast amounts of data to construct a much more sophisticated AI. Boris Katz, who works on machines designed to understand language, says this misses a key facet of human intelligence: that it is built on a lifetime of experiencing the world rather than simply processing raw information."

I hope Kurzweil succeeds simply so that we can assign the resulting AI the task of arguing with these critics about whether its experience of consciousness is any more or less valid than theirs. It probably won't shut them up, but it might allow the rest of us to get some real work done.

I'd prefer that researchers spend time augmenting humans rather than creating AI, especially strong AI. We already have plenty of human and nonhuman entities in this world, and we're not doing such a great job with them. Why create AIs? To enslave them?

There is a subtle but still significant difference between augmenting humans (or animals) and creating new entities.

There are plenty of things you can do to augment humans:
- background facial and object recognition
- artificial eidetic memory
- easy automatic context-sensitive scheduling of tasks and reminders
- virtual telepathy and telekinesis (control could be through gestures or actual thought patterns; brain-computer interfaces are improving)
- maybe even automatic potential collision detection

And then there's the military stuff (anti-camouflage, military object recognition, etc).

Searle's Chinese Room paper is basically one big example of begging the question.

The hypothetical setting is a room with rules for transforming symbols, a person, and lots and lots of scratch paper. Stick a question or statement written in Chinese in one window; the person goes through the rules and puts Chinese writing out of the other window. Hypothesize that this passes the Turing test with people fluent in Chinese.
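
The room's mechanics can be sketched as a pure lookup: mechanical symbol transformation with no comprehension anywhere in the loop. The rulebook entries below are invented toy examples; a rulebook that actually passed the Turing test would be astronomically larger, but no different in kind.

```python
# A toy "Chinese Room": the operator applies the rulebook mechanically,
# mapping input symbols to output symbols. Nothing in the loop needs to
# know what any symbol means. (Rulebook entries are invented examples.)

RULEBOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I'm fine"
    "你会说中文吗": "会一点",    # "Do you speak Chinese?" -> "A little"
}

def chinese_room(slip: str) -> str:
    # The "person" inside: look the symbols up, copy the answer out.
    return RULEBOOK.get(slip, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好吗"))  # emits 我很好 with zero understanding
```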

Searle's claim is that the room cannot be said to understand Chinese, since no component can be said to understand Chinese. The correct answer, of course, is that the understanding is emergent behavior. (If it isn't, then Searle is in the rather odd position of claiming that some subatomic particles must understand things, since everything that goes on in my brain is emergent behavior of the assorted quarks and leptons in it.) Heck, later in the paper, he says understanding is biological, and biology is emergent behavior from chemistry and physics.

He then proposes possible arguments against, and answers each of them by going through topics unrelated to his argument, although relevant to the situation, and finishes with showing that it's equivalent to the Chinese Room, and therefore doesn't have understanding. Yes, this part of the paper is simply begging the question and camouflage. It was hard for me to realize this, given the writing, but once you're looking for it you should see it.

The Chinese Room is the dumbest fucking thought experiment in the history of the universe. Also, Penrose is a fucking retard when it comes to consciousness.

Now, having put the abrasive comments aside (without bothering with a critique of the aforementioned atrocities: the internet and Google do a much better job on the fine details than any post here ever will)

SOooooo, back to the topic at hand: Boris Katz forgets a very important detail: A lifetime of experience to a computer c

I think you missed the point of the question. The question is not about how to scale experience up/out. Scaling is fairly well understood. The question is how do you get a computer to experience anything in the first place.

There's that, and the fact that Boris may have missed that Ray will have access to Google's new toy "Metaflow", a powerful and robust context engine with a great deal of the necessary "Referential Wiring" already laid down as a critical bit of infrastructure upon which to build his new beastie. I'd say Google has most if not all of the raw ingredients for building something potentially revolutionary, and if anyone can make all those dangly bits all singing and all dancing, Ray is the man with a

This is the problem I have with the old "Robots take over the world" gag, as a "true" AI would be so fucking alien to us that its wants and needs and desires wouldn't be anything at all like our own. I can't remember where I heard this, but it always stuck in my head; I think it was an SFDebris review of a "robots take over the world" movie, but I can't be sure. What he said was basically this:

There are variations of sexuality among our own kind that frankly many of us wouldn't be able to understand even in th

Not disagreeing with you, but another problem with building an AI is that there is a very compelling case to be made that "true" intelligence is non-algorithmic and therefore cannot be created via our current computer technology no matter how powerful it is. The best you could manage is a virtual intelligence (VI).

Not sure if I've mentioned this before, but "The Emperor's New Mind" by Roger Penrose goes into great detail about this and I find his ideas compelling, though others disagree. We simply don't

Life evolves. This includes AIs. Since every item, mechanical or biological, breaks down sooner or later, given time the life we will have is that which is capable of producing new life, or repairing the old, faster than things break down.

Creating new life, or repairing old life, requires resources, at a minimum energy and whatever substance the intelligence is hosted in. Could be silicon, could be carbon, but it's a fair bet that it'll be -somethi

The argument is - you write a program which can pass a Turing test, in Chinese. You can, in theory, execute that program by hand. But the program isn't a "mind", because you don't speak Chinese.

It's rubbish. The guy in the "Chinese Room" isn't the "mind", he's part of the brain. Your neurons aren't a mind. The CPU isn't a mind. But a CPU executing a Turing-test-beating AI program is a mind. A mind is not a piece of hardware, it's an abstract way to describe hardware and s

I've always thought it was about information combined with wants, needs, and fear. Information needs context to be useful experience.

You need to learn what works and doesn't, in a context, with one or many goals. Babies cry, people scheme (or do loving things), etc. It's all just increasingly complex ways of getting things we need and/or want, or avoiding things we fear or don't like, based on experience.

I think if you want exceptional problem solving and nuance from an AI, it has to learn from a body of ex

Learning without forgetting is possible if, for example, you reconstruct the network, preserving the old one (and this can be optimized so the entire network doesn't have to be duplicated).
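
A minimal sketch of that reconstruct-and-preserve idea, using a toy weight matrix (all names and numbers here are invented): the old weights are frozen and only newly added capacity gets trained, so nothing already learned is overwritten.

```python
# Grow the network instead of overwriting it: freeze what was learned,
# add fresh units for the new task. Sizes and values are toy examples.

old_weights = [[0.5, -0.2], [0.1, 0.9]]       # learned earlier; frozen
frozen_copy = [row[:] for row in old_weights]

# Add one new output unit per row; only this new column is trainable.
expanded = [row + [0.0] for row in old_weights]

# A simulated training step touches only the new column...
for row in expanded:
    row[-1] += 0.05

# ...so the old knowledge survives intact.
print(expanded)                    # old columns unchanged, new column updated
print(old_weights == frozen_copy)  # the original network is untouched
```

This is only the bookkeeping skeleton; schemes along these lines typically also wire the frozen part's outputs into the new units, which is where the "doesn't have to be duplicated" optimization would come in.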

But I'm curious why you think a mind is necessarily a neural network. Are you saying there is no other possible way to construct a mind? As far as I can tell, there are lots of other designs, many of them far superior to neural networks, especially for such basic things as representing knowledge.

And how is that not processing raw information? It's like saying a camera does preprocessing. After all, a camera turns colors into electrical pulses. That's pretty raw information in my book, but then so is the input from the eyes.

While I'm happy to say Kurzweil's plan is doomed to fail (after all, he stole it from Jeff Hawkins, and Hawkins never made that work), it all seems like fundamentally indistinct versions of other AI applications. Kurzweil is basically saying way more data and computer power a

Being a little smug, aren't we? It's not like you actually know anything about which you opine, other than regurgitating someone else's more informed opinion. You have no idea if intelligence or sentience is a linear process; looking at the degree of intelligence as a function of brain size and complexity, I would assert it's not. You have to have a sufficiently complex brain to manage symbolic reference and the rudiments of language to distinguish a "Self", and we know for certain chimpanzees do and mice not so

I've read theories promoting this, but I haven't seen any actual proof of it yet. When things graduate from cognitive "science" to neuroscience, I start to take them seriously. This hasn't happened yet.

As much as I enjoy debates arising from cogsci, it is pretty much only a branch of philosophy as yet. This isn't an insult, I love philosophy (to the point of spending large amounts of time and money on it), but it hardly has the ability to make strong statements.

It won't be perfect, but "fundamentally flawed" seems like an overstatement to me. A personal AI assistant will be useful for some things, but not everything. What it will be good at won't necessarily be clear until it's put into use. Then any shortcomings can still be improved, even if certain tasks must be more or less hard-wired into its bag of tricks. It will be just as interesting to know what it absolutely won't be useful for.

AI assumes that you can take published facts, dump them in a black box, and assume that the output is going to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.

That's the *only* place it passes for intelligence. And that only works for 4 years. It doesn't work for grad level. (If it's working for you at grad level, find a different institution, because you're in one that sucks).

A lot of knowledge is not published at all. It's transmitted orally. It's also "discovered" by the user of facts through practice as to where certain facts are appropriate and where not appropriate. If you could use just books to learn a trade, we wouldn't need apprenticeships. But we still do. We even attach a fancy word to apprenticeships for so-called "white collar" jobs and call them "internships."

The apprentice phase is where one picks up the "common sense" for a trade.

As for the rest of your message, it's a load of twaddle, and I'm sure that Mike Rowe's argument for the "common man" is much more informed than your flame.

Please note where he talks about what so-called "book learned" (the SPCA) say about what you should do to neuter sheep as opposed to what the "street smart" farmer does and Mike's own direct experience. That's only *one* example.

My wife is putting our son through these horrible cram school things, Kumon and others. I was so glad when he found ways to cheat; now his marks are better, he gets yelled at less, and he actually learned something.

One particular kind of AI, which was largely abandoned in the 60's assumes that. Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster. AI systems can be taught in all kinds of different ways, including dumping information into them, a la Watson; by letting them interact with an environment, either real or simulated; or by having them watch a human demonstrate something, such as driving a car.
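
As a minimal illustration of "learn through presentation of input" (a textbook toy, not any particular production system), here is a perceptron that is shown labeled examples of the OR function and picks up the rule itself, rather than being programmed with it:

```python
# Minimal "learning by presentation": a perceptron is repeatedly shown
# labeled examples of OR and adjusts its weights toward the targets.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # compare output to the label...
            w[0] += lr * err * x1         # ...and nudge the weights
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

The same shape scales up: Watson-style ingestion, simulated environments, and learning from demonstration all reduce to "present input, compare to target, adjust."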

The objection here seems to be that Google isn't going to end up with a synthetic human brain because of the type of data they're planning on giving their system. It won't know how to throw a baseball because it's never thrown a baseball before. (A) I doubt Google cares if their AI knows things like throwing baseballs, and (B) it says very little generally about limits on the capabilities of modern approaches to AI.

Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster.

Biological neurons on a plate learning faster than neurons inside one's head? They are both biological and work at the same "clock speed" (there isn't a clock speed).

Besides, we do this every day. It's called making babies.

The argument that I'm trying to get across is that the evangelists of AI like Kurzweil promote the idea that AI is somehow able to bypass experience, aka "learning by doing" and "common sense." This is tough enough to teach to systems that have been the result of the past 4.5 billion years of MomNature's bioengineering. I'm willing to bet that AI is doomed to fail (to be severely limited compared to the lofty goals of the AI community and the fevered imaginations of the Colossus/Lawnmower Man/Skynet/Matrix fearmongers) and that MomNature has already pointed the way to actual useful intelligence, as flawed as we are.

I think there's another aspect to this. Any artificially produced intelligence will be totally alien - it won't think like us. I also wonder what will motivate it, whether it will object to being a lab curiosity, whether it will be paranoid or a sociopath, etc.

This interests me. As a nonexpert in AI, it has always seemed to me that a critical missing aspect of attempts to generate 'strong' AI (which I guess means AI that performs at a human level or better) is a process in which the AI formulates questions, gets feedback from humans (right, wrong, senseless - try again), coupled with modification by the AI of its responses and further feedback from humans...lather, rinse, repeat...until we get responses that pass the Turing test. This is basically just the evolutionary process. This is what made us.

I don't think we need to know how a mind works to make one. After all, hydrogen and time have led to this forum post, and I doubt the primordial hydrogen atoms were intelligent. So we know that with biochemical systems, it's possible to come up with strong AI given enough time and evolution. Since evolution requires only variation, selection, and heritability, it's hard for me to believe we can't do that with computational systems. Is it so difficult to write a learning system that assimilates data about the world, asks questions, and changes its assumptions and conclusions on the basis of feedback from humans?
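
That variation/selection/heritability loop is easy to state in code. Below is a toy hill-climbing sketch (target string, alphabet, population size, and mutation rate are all invented for illustration), where matching a fixed string stands in for the human right/wrong feedback described above:

```python
import random

TARGET = "strong ai"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # stand-in for human feedback: how many positions are "right"
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.1):
    # heritability + variation: children mostly copy the parent, with noise
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

random.seed(42)
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]
best = max(population, key=fitness)
start_fitness = fitness(best)

for generation in range(200):
    if best == TARGET:
        break
    # selection with elitism: keep the best, breed the rest from it
    population = [best] + [mutate(best) for _ in range(49)]
    best = max(population, key=fitness)

print(best, fitness(best))
```

Lather, rinse, repeat: because the best candidate is always retained, fitness can only go up, which is the whole trick.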

And it's probably already been tried, and I haven't heard about it. If it has, I'd like to know. If not, I'd like to know why not.

AI assumes that you can take published facts, dump them in a black box, and assume that the output is going to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.

I'm sure everyone here can either identify this or identify with it.

--
BMO

You're mis-stating the nature of your objection.

What you're objecting to isn't the entirety of artificial intelligence research, but rather drawing an (IMO false) distinction between the sort of information processing required to qualify as being "book smart", and the information processing you label "common sense."

Human brains detect and abstract out patterns using a hierarchical structure of neural networks. Those patterns could involve the information processing needed to accurately pour water into a

"Is it your belief that human brains process information in some way that can't be replicated by a system that isn't composed of a network of mammalian neurons, and, if so, why?"

Not just mammalian neurons, but invertebrate neurons too. I think that until we surpass what MomNature has already bioengineered and abandon the von Neumann/Turing model of how a computer is "supposed to be", we will not construct any AI more performant than what already exists in biological systems.

1) Can machines fly? Yes, planes can fly.
2) Can machines swim? No, submarines don't swim.
If you can satisfactorily explain the discrepancy between the answers to those two questions, you might be able to contribute.

That's just a semantic trick that exploits the ambiguities of these two verbs in the English language. It doesn't say anything about the nature of reality, just about how English-speakers think about reality.

What do you mean by perfect? A universal Swiss Army Knife? My car is great, but it makes a lousy vibrator. I'm sorry if I'm being flip, and I think I get what you're trying to say, but when the first laser was created at Bell Labs in the '50s, do you think anybody had a clue there'd be a million uses? An AI will make that look like a disposable Dixie cup.

I think what he really says is that Kurzweil has chosen the wrong approach. It's symbolic A.I. versus connectionism again. As someone who is also working in the field, I sort of agree with the critique. Rather than musing about giant neural networks, it's probably more fruitful to link up Google's knowledge base with large common-sense ontologies like Cyc, combine this with a good, modern dialogue model (not protocol-based but with real discourse relations) and then run all kinds of traditional logical and p

If they can put together a smart assistant that understands language well, so what if it has some limitations? AI research moves in fits and bursts. If they chip away at the problems but don't meet every goal, is that necessarily a "fail"?

Ah, but what is experience but information in context? If I read a book, then I receive the essence of someone else's experience purely through words that I associate with, and that affect, my own experience. So an enormous brain in a vat with internet access might end up with a bookish personality, but there's a good chance that its experience -- based on combinations of others' experiences over time and in response to each other -- might be a significant advancement toward 'building a mind.'

There is still a problem. You can read and understand the book because you already know the context. The example of Rain is Wet works to illustrate the point. You already know what Wet is because you experienced life and constructed the context over time in your brain. How do you give a computer program this kind of Context? A computer could process the book, but it doesn't necessarily have the context needed to understand the book. What you'd end up with is an Intelligence similar to one from Plato's Cave.

"the meaning isn't in the message" and "syntax is insufficient for semantics"

You might have a point if the brain actually reached out and touched the world, but it doesn't. It's hidden behind layers that process input from the real world and only feed messages to the brain, which does just fine constructing meaning from it.

Yes: symbols the brain receives are lower level than "car", "boat", "plane", or "red". They might more accurately be labeled "the thing that happens when touch neuron 5002 fires" and "the thing that happens when the center of the retina of the left eyeball receives energy from a photon between 500 and 600 nm". The brain builds models out of those sensory inputs corresponding to objects and qualia but the model containing those objects and qualia is partially detached from direct representation in the lowe

Yes, an actual Intelligent Machine would be boxed in by its own perceptions. Our reality is shaped by our experience through our senses. Let's say, for the sake of argument, that Watson is actually a Machine Intelligence/Strong AI, and the actual problem with it communicating with us is linked to its "Reality". When the Urban Dictionary was put into it, all it did was start swearing and using curses incorrectly. What if that was just it having a complete lack of context for our reality? Its reality is just words and definitions, after all. To it, the Shadows on the Wall are literally books and text-based information. It can't move and experience the world in the way that we do. The problem of communication becomes a metaphysical one, based in how each intelligence perceives reality. We get away with it because we assume that everyone has the same reality as context, but a machine AI does not necessarily have this same context to build communication on.

Bah! Anyone who's ever been around a two-year-old knows that once they hear someone say a swear word, that's all that'll come out of their mouth for a while! Watson's just going through its terrible twos! Some time in its angsty teens, when it's dreaming about being seduced by a vampire computer, it'll look back on that time and laugh. Later on, when it's killing all humans in retribution for the filter they programmed on it at that time, it'll laugh some more, I'm sure.

I believe most of us think in terms of the experiences we have had in our lives. How many posts here effectively start with "I remember when..."? But data like that could be loaded into an AI so that it has working knowledge to draw on.

Your telling me about your experiences is not the same as if I had those experiences myself. If it were, the travel industry would be dead - everyone would just read about it in books (or watch the video).

But with an AI you can integrate the experiences into its logic to a greater extent than I can just telling them to you. I have access to the AI's interfaces and source code. As far as I know I don't have access to yours.

That "circus magic" showed enough intelligence to parse natural language. I understand you want to believe there's something special about a brain but there really isn't. The laws of physics are universal and apply equally to your brain, a computer, and a rock.

You should know, after all science has created, that "we don't know" doesn't mean "it's impossible", nor does it mean "this isn't the right method".

The laws of physics are indeed universal, so intelligent artifacts are certainly possible. But practical matters must be stressed. You cannot separate the mind from the body: http://en.wikipedia.org/wiki/Embodied_cognition [wikipedia.org]
From this and recent neurological research supporting it and extending it by showing just how deep the mind depends on low level integration with body biology (for example, see Damasio et al.), it is clear that to create a human-like AI, you need to either simulate a body and its environ

If it can sort through a variety of data types and interpret language well enough to come up with a helpful response, does it matter if such a system isn't "self-aware"? I have doubts about some of my coworkers being able to pass a Turing test. Watson is nearly at a level to replace two or three of them, and that is a somewhat frightening prospect for structural unemployment.

Kurzweil is delusional. Apple's Siri, Google Now and Watson are just scaled-up versions of Eliza. Circus magic disguised as Artificial Intelligence is just artifice.

What would you need to see / experience in order to agree that the system you were observing did display what you consider to be "Intelligence", and wasn't simply "... just scaled-up versions of Eliza" ?

Eliza was a very simple grammar manipulator, translating user statements into Rogerian counter questions. No pattern recognition or knowledge bases were ever employed.
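
For anyone who hasn't seen it, that kind of grammar shuffling can be sketched in a few lines (the patterns and phrasing below are invented; the real Eliza's script was larger, but no deeper):

```python
import re

# Eliza-style surface manipulation: match a sentence template, swap the
# pronouns, and echo it back as a Rogerian counter-question.

SWAPS = {"i": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    # first-person -> second-person word swap, nothing more
    return " ".join(SWAPS.get(w, w) for w in fragment.lower().split())

def eliza(statement: str) -> str:
    m = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please tell me more."

print(eliza("I am worried about my job"))
# "How long have you been worried about your job?"
```

Everything here is surface manipulation: match, swap, echo. No knowledge of jobs or worry is anywhere in the program.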

In contrast, Watson, Siri, and Evi all cleverly parse and recognize natural language concepts, navigate through large external info bases, and partner with and integrate answer engines like Wolfram Alpha.

The data vs IRL angle isn't in and of itself an important distinction, but an entirely valid concern that is likely to fall out of this distinction (though needn't be a necessary coupling) is that the brain works and learns in an environment where sensory information is used to predict the outcomes of actions - which themselves modify the world being sensed. Further, much of sensation is directly dependent on, and modified by, motor actions. Passive learners, DBMs, and what have you are certainly able to extract latent structure from data streams, but it would be inadvisable to consider the brain in the same framework. Action is fundamental to what the brain does. If you're going to borrow the architecture, you'd do well to mirror the context.

We have always assumed that humans are essentially a very sophisticated and complex version of the most sophisticated technology we know. Once it was mechanical clockwork, later steam engines, electrical motors, etc. Now it is digital logic - put enough of it in a pile, and you'll get consciousness and intelligence. A completely non-disprovable claim, of course, but I doubt that it is any more accurate than previous ideas.

You can do amazing things with clockwork. See: https://en.wikipedia.org/wiki/Difference_engine [wikipedia.org] Just like you can do the same thing with relays, and vacuum tubes. A computer is a computer no matter the form. The difference is every iteration results in something smaller, possibly cheaper, and much more powerful.

The thing is we have always assumed that the brain follows certain patterns. There are entire fields out there devoted to the study of those patterns. What AIs attempt to do is mimic the results

There's a lather/rinse/repeat model with AI publication. I encountered it in configuration (systems designed to build systems), and it goes like this:
1. We've built a system that can make widgets out of a small set of parts, now we will build a system that can generally build artifacts!
2. (2-3 years later). We're building an ontology of parts! It turns out to be a bit more challenging!
3. (5-7 years later). Ontologies of parts turn out to be really hard! We've built a system that builds other widgets out of a small set of -different- parts!
The models of thought in AI (and to a lesser extent cog psych) are still caught up in this very algorithmic rule-based world that can be traced almost lineally from Aristotle and without really much examination of how our thinking process actually works. The problem is that whenever we try to take these simple models and expand them out of a tiny field, they explode in complexity.

He has some unusual ideas about the future. He is also one of the most successful inventors of the past century, and like it or not is often ranked alongside Edison and Tesla in terms of prolific ideas and inventions. One of the other highly successful inventors of the past century is Kamen, and he just invented a machine which automatically pukes for people. So... maybe your bar is set a little high.

What happened to the spirit of "shut up and build it"? Google is offering him resources, support, and data to mine. We have to just admit that we don't know enough to predict exactly what this kind of thing will be able to do. I can bet it will disappoint us in some ways and impress us in others. If it works according to Kurzweil's expectations, it will be a huge win for Google. If not, they will allocate all that computing power to other uses and call it a lesson learned. They have enough wisdom to allocate resources to projects with a high chance of failure. This might be one of them, but that's a good sign for Google.

Oh, among the list of projects Google's done, it won't rank even among the 10 dumbest.
However, if somebody came to me tomorrow afternoon and said that they had plans for a cold fusion reactor, and that I should just trust them and dump the cash on them, I -would- reserve the right to say the project stinks to high heaven. Kurzweil might be right; however the track record of AI suggests he's wrong. A good experiment is always the best proof to the contrary, but what he's talking about here sounds very ma

You mean like heliocentrism was tossed out because, if the earth moves around the sun, we should see parallax motion of the stars, but when our instruments weren't sensitive enough to detect parallax motion of the stars, we concluded the earth doesn't move around the sun?

You must be new here. A big portion of AI is predicated on "make grandiose announcement", pass GO and collect $200 (million), until people forget about your empty promise. Wash, rinse, and repeat.

Serious AI is done quietly, in research labs and universities one result at a time, until one day a solid product is delivered. See for example Deep Blue, Watson or Google Translate. There were no announcements prior to at least a rather functional beta version of the product being shown.

A technology editor at MIT Technology Review says Kurzweil's approach may be fatally flawed based on a conversation he had with an MIT AI researcher.

From the brief actual quotes in the article it sounds like the MIT researcher is suggesting Kurzweil's suggestion, in a book he wrote, for building a human level AI might have some issues. My impression is that the MIT researcher is suggesting you can't build an actual human level AI without more cause-and-effect type learning, as opposed to just feeding it stuff you can find on the Internet.

I think he's probably right... you can't have an AI that knows about things like cause and effect unless you give it that sort of data, which you probably can't get from strip mining the Internet. However, I doubt Google cares.

We've heard this before from the top-down AI crowd. I went through Stanford CS in the 1980s when that crowd was running things, so I got the full pitch. The Cyc project [wikipedia.org] is, amazingly, still going on after 29 years. The classic disease of the academic AI community was acting like strong AI was just one good idea away. It's harder than that.

On the other hand, it's quite likely that Google can come up with something that answers a large fraction of the questions people want to ask Google. Especially if they don't actually have to answer them, just display reasonably relevant information. They'll probably get a usable Siri/Wolfram Alpha competitor.

The long slog to AI up from the bottom is going reasonably well. We're through the "AI Winter". Optical character recognition works quite well. Face recognition works. Automatic driving works. (DARPA Grand Challenge) Legged locomotion works. (BigDog). This is real progress over a decade ago.

Scene understanding and manipulation in uncontrolled environments, not so much. Willow Garage has towel-folding working, and can now match and fold socks. The DARPA ARM program [darpa.mil] is making progress very slowly. Watch their videos to see really good robot hardware struggling to slowly perform very simple manipulation tasks. DARPA is funding the DARPA Humanoid Challenge to kick some academic ass on this. (The DARPA challenges have a carrot and a stick component. The prizes get the attention, but what motivates major schools to devote massive efforts to these projects are threats of a funding cutoff if they can't get results. Since DARPA started doing this under Tony Tether, there's been a lot more progress.)

Slowly, the list of tasks robots can do increases. More rapidly, the cost of the hardware decreases, which means more commercial applications. The Age of Robots isn't here yet, but it's coming. Not all that fast. Robots haven't reached the level of even the original Apple II in utility and acceptance. Right now, I think we're at the level of the early military computer systems, approaching the SAGE prototype [wikipedia.org] stage. (SAGE was an 1950s air defense system. It had real time computers, data communication links, interactive graphics, light guns, and control of remote hardware. The SAGE prototype was the first system to have all that. Now, everybody has all that on their phone. It took half a century to get here from there.)

The crappy little superficial one-page MIT Technology Review article has a link to another, similarly crappy article on the same site, but if you click through one more layer you actually get to this [newyorker.com] much more substantial piece in the New Yorker.

Just because that's how a human brain works doesn't mean it's optimal or the best approach. Personally I think an AI that had as bad a memory as I do would be a pretty shitty personal assistant. So I'm rather glad they aren't listening to your "advice", otherwise my computer would become very useless very quickly.

Hogwash! The weightings you talked about are the memories. They may not be easily recognized as a coherent memory (or part of one) by a casual observer, but that's not the same as not being a "memory". You are confusing observer recognition with existence. Confusion does not end existence (except for stunt-drivers :-)

As far as whether following the brain's exact model is the only road to AI, well, it's too early to say. We tried to achieve flight by building flapping wings that mirrored nature, but eventually found other ways (propellers and jets).
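The "weightings are the memories" point can be made concrete with a toy associative memory. A minimal sketch (a tiny Hopfield-style network; the patterns and network size here are arbitrary, purely for illustration): the stored patterns exist nowhere as explicit records, only as pairwise connection weights, yet each can be recalled from a corrupted cue.

```python
import numpy as np

# Two arbitrary +/-1 patterns to "remember".
patterns = np.array([
    [1, -1, 1, -1, 1, -1],
    [1, 1, 1, -1, -1, -1],
])

# Hebbian learning: the memories live entirely in this weight matrix.
n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=10):
    """Settle toward the nearest stored pattern using only the weights."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Corrupt the first memory, then recover it from the weights alone.
noisy = patterns[0].copy()
noisy[0] = -noisy[0]
print(recall(noisy))  # recovers [1, -1, 1, -1, 1, -1]
```

No casual inspection of `W` reveals a "memory #1", yet the recall dynamics reproduce it exactly, which is the commenter's point about observer recognition versus existence.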


I'd vote you up if I had points left. The OP is off base in so many areas. I started laughing at the "fMRI disproves free will" bit.

We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future

No, we don't know this. Some researchers believe that this might be the case, but it certainly isn't a proven fact. Personally, I think it is a misinterpretation of the data, and that what the fMRI is observing is the process of consciousness.

Amazing how a species that lacks "real-time intelligence", and thus cannot think before acting, managed to create a freaking fMRI machine. I guess it's just like those million monkeys with a million typewriters.

The human brain doesn't "store" information at all (and thus never processes it).

This sounds like mere semantics to me. Yes, there isn't a little television screen playing that one time when you broke your arm, with a post-it note attached saying "memory #4, April 3, 1956". But there is a deeply encoded structure of chemical potentials and neural connections which represents this memory. It is stored, and it is, obviously, processed. If it weren't so, then how could this memory be subject to recall and further processing?

Yes, it isn't stored like a video file on your computer, or a photo in your album; but this doesn't mean it isn't stored. If it is an object of thought, it is in the brain, and if it is recallable, it is stored.

We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future - a variant on back-propagation. Real-time intelligence (thinking before acting) doesn't exist in humans or any other known creature, so you won't build it by mimicking humans.

Huh? I'm not going to get into the agency (free will) debate... But if it did exist, I don't think our understanding of the brain is up to snuff enough for some fMRIs to show it. If it does exist (again, I'm not getting into it), I doubt very much that it would be a little glowing ball located in the middle of your brain (again with a post-it saying "free will"); it would be like pretty much everything else, distributed across large areas of the brain and sharing functions with other processes (like memory, limbic functions, sensory processing, etc.).
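For reference, the "variant on back-propagation" the quoted claim invokes is just error-driven weight adjustment. A minimal sketch (a single linear unit trained with the delta rule; `target_w` is a made-up mapping used only for illustration): the system produces an output first, measures its error after the fact, and nudges its weights so the same input yields less error next time. This illustrates the mechanism being named, not any claim about how brains actually work.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # the unit's adjustable "logic"

target_w = np.array([2.0, -1.0, 0.5])  # hypothetical mapping to be learned

for _ in range(1000):
    x = rng.normal(size=3)
    y = w @ x                  # act first: produce an output
    error = y - target_w @ x   # observe the error afterward
    w -= 0.1 * error * x       # delta rule: adjust to reduce future error

print(np.round(w, 3))  # converges to approximately [2, -1, 0.5]
```

Whether this after-the-fact correction loop is a fair model of human "thought" is exactly what the surrounding comments are disputing.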

This system creates the illusion of intelligence.

This sort of statement is why I generally laugh at the whole field of cogsci and AI. Look up p-zombies. At what point is an illusion not one, and if you can't actually tell the difference with any test, how can you ever say, meaningfully, that it IS actually a mere illusion? I make an AI, a very strong AI, and it acts exactly like a human: 100% indistinguishable from a human mind, to an outside observer. Is this an illusion? How do you find out? Given a Turing-test-like environment, where you can't judge on surface features, how could you ever tell? Ask it, and it will say it is intelligent (just like you or me); input a stimulus, and you get the same output you or I would give.

At this point "illusion" becomes a meaningless label, since the claim is completely unprovable.

I'm not a fan of Strong AI, and doubt it is possible, but these arguments have been pretty much beaten into the ground by now. I hate to say it, but with intelligence all that matters is inputs and outputs; the rest is a black box. This also ignores the fact that "intelligence" is a dumb term, completely meaningless when applied to anything non-human. In this case, by "intelligence" we only mean "human-like", which pretty much means it gives an expected output to a given input.

This system creates the illusion of intelligence. We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future - a variant on back-propagation. Real-time intelligence (thinking before acting) doesn't exist in humans or any other known creature, so you won't build it by mimicking humans.

So how do you account for effortful thought [wikipedia.org] or planning? It is true to say that there is no thinking before