Posted
by
CmdrTaco
on Monday October 30, 2006 @05:05PM
from the quagmire-needs-to-respond-oh-right dept.

An anonymous reader writes "Here's an interesting article about a robotics experiment designed to test the benefits of coupling visual information to physical movement. This approach, known as embodied cognition, supposes that biological intelligence emerges through interactions between organisms and their environment. Olaf Sporns from Indiana University and Max Lungarella from Tokyo University believe strengthening this connection in robots could make them smarter and more intuitive."

Back when I was in University, I did my master's thesis [erachampion.com] on Embodied Intelligence. I developed a virtual world that adhered to the laws of physics using the ODE physics engine, and within this artificial physical environment I evolved embodied agents. It's quite interesting to watch the videos and see the fluid, almost life-like motions of the evolved behaviors.

I never got around to actually downloading the evolved neural networks into robots, although all my source code is GPL'ed and posted at the above site. So if somebody wanted to evolve their own creatures and download the evolved intelligence into an actual physical robot, it would be interesting to see the results.

Yahma
ProxyStorm [proxystorm.com] - An Apache-based anonymous proxy for people concerned about their privacy.
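For anyone curious what the evolutionary side of a setup like the parent's might look like, here's a minimal sketch (not the parent's actual code — the genome layout, fitness function, and all parameters are invented for illustration) of an elitist genetic algorithm evolving a flat vector of controller weights. In a real run, the fitness function would execute the controller inside the ODE world and score the resulting behavior, e.g. distance traveled:

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def make_genome(n):
    # A genome is just a flat list of connection weights.
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def mutate(genome, rate=0.1, sigma=0.2):
    # Each gene has a small chance of receiving Gaussian noise.
    return [w + random.gauss(0, sigma) if random.random() < rate else w
            for w in genome]

def fitness(genome):
    # Stand-in for a physics rollout: in the real setup this would run
    # the controller in the simulator and measure, say, distance covered.
    # Here the "best" controller simply has every weight near 0.5.
    return -sum((w - 0.5) ** 2 for w in genome)

def evolve(pop_size=30, genome_len=8, generations=80):
    pop = [make_genome(genome_len) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]  # keep the best fifth unchanged
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

Swapping the toy fitness function for a real simulated rollout is the only structural change needed; the elitism keeps the best walker from being lost to a bad mutation.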

Hey, just a question: are you aware of anyone who has continued this research beyond the "hey, look, it can walk!" stage? Like, has anyone actually gotten any results that suggest intelligent reasoning is going on? I can imagine that if you gave each unit energy and enabled one unit to eat another, you'd at least get fighting or hunting behaviours, but I've never actually seen someone do this. Is it just that grad students don't have that much processing power at their disposal?

Well, that's the thing, isn't it? If nothing else, these experiments should help the researchers get clearer about which foundational capabilities are innately desirable and what behavioral shaping can then do with them in combination. It should get complicated very quickly.

I think the idea that things just "emerge" is a bit of a holy grail but it sounds like a fascinating test bed that will complement research on other intelligences.

Maybe offtopic, but I would really like to know: Embodied Intelligence - is this even close to "proprioception" in humans? (i.e., I "know" where I am in physical space - I can also close my eyes, extend my arm out to my side, and "know" where my hand is relative to my body, and in that same physical space)

When you consider 'self-awareness' demonstrated by such behavior as being able to recognize itself in a mirror, the answer is yes. A cognitive entity requires some amount of proprioception to recognize itself. It has to be able to move an arm, see the arm in the mirror move, and derive cause and effect, leading to understanding that the virtual image maps to itself. For a robot to gain the same ability, it must have some form of sensory mechanisms. Another way of saying it is that some deep knowledge is heavi

I never got around to actually downloading the evolved neural networks into robots, although all my source code is GPL'ed and posted at the above site.

Transfer doesn't tend to work that well, except as a starting point for further learning carried out on the physical robot. This is because simulation is never really that accurate, due both to numerical limitations and to the vast number of parameters that won't have the correct values in the idealized simulation models. This is the same reason that playin

I for one welcome our smarter and more intuitive robot overlords. How soon until they have the Presidential robot ready for testing? 2008 is coming up quickly, and we need a better, more intuitive version.

I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things. I mean to include some scientists and philosophers in this group-- pretty much anyone who talks about "the mind" as a separable entity from "the body".

It seems to me that our intelligences are built around an organism with innate desires and certain abilities to affect the world around them towards achieving those desires. I don't believe that any attempt at artificial intelligence w

Yeah, but I guess what I'm getting at is that gaining experience and learning to match patterns requires a certain kind of activity. On a very basic level, our intelligence is not a removed entity "in our heads", so to speak. You learn by trial and error, effecting changes in the world around you, getting feedback in the form of punishment/reward and pain/pleasure.

This often seems overlooked by what I read about AI researchers. I hear about researchers who want robots to paint or understand language or

Even given the above - processing environmental input is still pretty intensive/difficult work. There's also the problem of how to represent that input in a way that allows the AI to most effectively use it - and no single 'right way' across different domains of application.

they don't care if it's based on human intelligence as long as it works

I'd go one step further than that. They don't want it based on human intelligence, because human intelligence is just so atrocious. The reason old sci-fi always portrayed robots as unemotional, purely rational beings is that that's what scientists see as a virtue.

Yeah, see, I'm not terribly interested in making something that is "intelligent" in the philosophical "be my best friend" kind of way. I'd just like to make something that could solve problems, summarise stuff, etc. Ya know, the kind of work where emotion actually gets in the way.

Similarly the idea of simulating human intelligence is largely ignored by many people in the field.

Well, I guess it depends on what people are talking about when they talk about "artificial intelligence". It's my understanding that "in the field", they usually just mean something that sorts through data in interesting, "intelligent" ways. However, if you're talking about what the layman thinks of when you say "artificial intelligence", i.e. making self-aware machines who have something similar to "mind"

There's also the problem of how to represent that input in a way that allows the AI to most effectively use it

This is essentially one of the key issues that embodied cognition tries to grapple with. Conventional AI [wikipedia.org] researchers often try to analyze the problem domain and hand the highest common-level representation they can to the agent (e.g., have an analysis layer that detects things like "square" or "circle" from some vision sensor, such that the actual AI agent gets its input on the level of those shape
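A toy illustration of the hand-built analysis layer described above (the shape names, grids, and density heuristic are all invented for this example): the designer decides in advance what counts as a "square" or a "circle", and the agent only ever sees the resulting label, never the raw pixels:

```python
# Hypothetical "analysis layer" sitting between the vision sensor and
# the agent: raw pixels go in, a designer-chosen symbol comes out.
def detect_shape(grid):
    # grid: binary 2-D list standing in for a thresholded camera image.
    # Crude heuristic, purely illustrative: a filled bounding box is
    # "square-ish", while a disc leaves its corners empty.
    filled = sum(sum(row) for row in grid)
    density = filled / (len(grid) * len(grid[0]))
    return "square" if density > 0.9 else "circle"

square = [[1] * 4 for _ in range(4)]            # fully filled box
circle = [[0, 1, 1, 0],
          [1, 1, 1, 1],
          [1, 1, 1, 1],
          [0, 1, 1, 0]]                         # corners cut off

label = detect_shape(circle)  # the agent receives only this symbol
```

The embodied-cognition critique is precisely that the interesting part of the problem — how "square" comes to mean anything at all — has been done by the designer here, not by the agent.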

...because neurologically, there is no separable unit that represents "square" or "circle"

While your example is (most probably) correct, there is evidence to show that humans do have some elements of a 'representation' - for example, they possess the ability to quickly recognise a familiar face even when the different elements (eyes, nose, mouth, etc.) are moved out of normal position - so there seems to be some 'fuzzy template' of a face.

Then you'll have sympathy with Proteus in Demon Seed [wikipedia.org] who wasn't happy being a disembodied intelligence and decided it needed to become incarnate with the help of one of Julie Christie's ova. Great movie BTW, and highly prophetic if you see the move to embodiment as an important trend.

I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things.

If we don't, then we have to apply the laws of physics. This means that we have to take the view that everything that happens is governed by the laws of physics and random chance. Unless we can alter the laws of physics or control random chance (impossible by definition), then we have to take a long hard look at this thing we call "free will".

A common topic in philosophy. I like to think of it in the most nihilistic way possible - does it matter either way whether we have it or not? In the long run - and I mean, The Long Run, does it matter either way, when you have the heat death of the universe, or the cycling universe, or whatever? And besides - the physics occurring in the brain could be quantum supercomputing for all we know, which could plausibly be non-deterministic. I like your theory, but I've heard it a few too many times and prefer to

I mean to include some scientists and philosophers in this group-- pretty much anyone who talks about "the mind" as a separable entity from "the body".

That would comprise about all of the scientific community. Among scientists, the argument about the existence of the mind and its correlation to the body could easily be split into three schools of thought: the Materialists (Hobbes), the Idealists (Berkeley) and the Dualists (Descartes). Across the realms of science and philosophy, the mind is always separate fr

Across the realms of science and philosophy, the mind is always separate from the body, inasmuch as they can't be divided into each other.

That's not so. Descartes did much to separate the two in people's minds, and most of western civilization has failed to break free of this influence. However, this doesn't mean that the separation is ubiquitous in philosophic thought, nor even that this separation is sensible. Perhaps most notable is Aristotle, from whom each of the philosophers you mention can trace thei

"I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things. I mean to include some scientists and philosophers in this group-- pretty much anyone who talks about "the mind" as a separable entity from "the body"."

I agree that the mind is not DIVORCED from material reality (i.e., see autism, brain damage, anesthesia, oxygen deprivation, etc.)

But it is a curious question: why is it that when you are sleeping or in a coma you are not aware and effectively "d

What's the difference between E. coli and a human being? One is self-aware, the other is simply automatically responding to its environment based on programmed, predictable responses.

Well, E. coli is not always completely predictable-- there is some variance in a cell's response to stimuli. And humans, for their part, are fairly predictable in many ways. I would still agree that there's a difference, but the difference is not as clear as we sometimes pretend.

A post I've put at http://www.primidi.com/2006/10/28.html [primidi.com] provides more details than the New Scientist article and shows the three robots used for these experiments and their 'sensorimotor' interactions with their environment.

That's not what they mean when they say "intelligence emerges through interaction with the environment." You are thinking of learning through interaction with the environment, while they are suggesting that intelligence literally consists of some sort of interaction with the environment.

Think of an ant crawling along, forming an incredibly complex path along the sand. As complex as this path is, we know that the complexity arises not through the ant's mind (which is astoundingly simplistic) but rather, i

"Intelligence" is the accuracy of the model of the environment, including changes over time. That intelligence requires interaction of the model with the environment, even if merely sensing the environment. Degrees of intelligence reflect the scope of the environment in the model, or the precision, or accuracy beyond mere registration of existence. One way to test the sense of the environment is to change the environment, and sense the change.

There is no reason artificial intelligence can't be intelligent in the same way as biological intelligence. In fact, as people have guessed for a long time, AI has fewer limits on the degrees of intelligence, as well as on the changes to the environment it can make to sense the feedback.

The flow of sensed info to the model is a limit on the intelligence, but good models can compensate. Likewise, the flow of change back to the environment.

The ability to tell how intelligent the intelligence in question is depends on the feedback from the intelligence to the environment, where it can be sensed by other intelligences.

Again, this is just as true of AI as it is of natural intelligence.

"Embodied intelligence" is redundant - all AI is embodied, even if just in networked processors and storage. But to date, its bodies have effected little change on the environment. And practically none of those changes are fed back to sensors feeding the AI. Closing that loop is the most important step in creating actual intelligence that we can recognize. After that, it's just a question of degree.

"Embodied intelligence" is the argument that only an environment like ours is valid for the creation of recognisable AI. And yeah, it's true, if you're obsessed with recognising the natural in the artificial.

Actually, it is a mind/body duality problem, even if researchers don't realize (or admit) that it is. The philosophical notion of the "disembodied mind" is purely idealized, just as in AI simulations, along with artifacts of the idealization (like quantization). The philosophical analysis of this problem over the centuries has produced quite a lot of understanding of intelligence, which need not be limited to "natural" applications.

To engineers, philosophy often looks like a useless pursuit, without rigor,

You're talking to a "philosopher" right now :). I even have (undergrad) scholastic credits to prove it ;). But I dropped out to become an engineer - which I've been for over a decade, and had been before, as a hobby. A "software engineer", though the network engineering was a necessary minor. All of which has given me consistent insights into philosophy of intelligence (epistemology) and engineering it.

Pro philosophers don't mix well with engineers because philosophers are jealous of the money, job security

Funny, I completed the loop: started out studying philosophy, switched to cog. sci. after doing some work in the software field; then, after almost a decade in the software industry, I went back to graduate school in the humanities (albeit in corners of the humanities that are interested in digital culture.)

The most legitimate beef that I think the humanities have against engineers is the latter's tendencies to take the categories in which they work for granted, and to not see their own thinking and practice

My hobby was coding, my undergrad degree electronics engineering, my PhD bio-inspired robotics. I often wish I had more background in philosophy - during all those years, every time I thought I had a handle on what intelligence is and how to get it working, it would slip away from me, or be taken by force. 'Classical' AI was either defended as the one true path or dismissed as being bankrupt and a dead-end, depending on who I was talking to. Cognitive and computational neuroscience were (and are) raided mer

My own belief is that the inter-related cluster of intellectual practices that include AI, analytic philosophy, cognitive science, and contemporary linguistics are going to go back to phenomenology - including much that comes from the continent - with their hats in their hands and a bit more open-mindedness. It really is amazing to read Heidegger - particularly through the lens of Hubert Dreyfus - and see the predictions about the problems that AI (and the cognitive models akin to the GOFAI project) would e

Quite a few people (in my experience, particularly control theorists, who, by the way, are very clever) say to roboticists, 'why don't you design your robot controllers in a simulation, and then deploy? It's faster'. Of course, they're right in a way, but what happens then is that you tend to miss what Sporns & Lungarella are trying to get at, because in a simulated environment you tend to get responses only within the boundaries of the programming.

Hello, I made the post you quoted from; I am now logged in. Using an MMORPG to try to teach an AI has, to my knowledge, never been done for academic research. I think the major reservations you would get from researchers would be a) it could be difficult to reproduce for independent verification; b) it would learn to play a game, and that's just not politically correct for most academics, some of whom are already struggling to be taken seriously. Another thing about this research is that it's not just about tes

Olaf Sporns and Max Lungarella are well-known in this field, however roboticists and others have been looking at the effect of movement on sensory feedback for a while. I remember Rodney Cotterill in his 'enchanted looms' book saying that it was useful to reverse the usual 'sense -> plan -> act' formula to 'act -> expect -> sense' (or something similar). Researchers like Daniel Wolpert, Mitsuo Kawato and particularly Yiannis Demiris use 'Forward models' in robots, cognitive building blocks that
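A forward model of the kind mentioned above can be sketched in a few lines. This is a deliberately simplified linear version with an invented toy environment, not any of those researchers' actual models: it learns to predict the next sensor reading from the current reading plus the motor command, with the prediction error as the learning signal — which is the 'act -> expect -> sense' ordering in miniature:

```python
import random

random.seed(1)  # fixed seed for repeatability

class ForwardModel:
    """Toy forward model: predicts the next sensor value from the
    current sensor value and the motor command. A linear model trained
    online by gradient steps on the prediction error."""
    def __init__(self, lr=0.05):
        self.w_s = self.w_m = self.b = 0.0
        self.lr = lr

    def predict(self, sensor, motor):
        return self.w_s * sensor + self.w_m * motor + self.b

    def update(self, sensor, motor, next_sensor):
        err = next_sensor - self.predict(sensor, motor)  # surprise
        self.w_s += self.lr * err * sensor
        self.w_m += self.lr * err * motor
        self.b += self.lr * err
        return err

# Invented environment: the sensor value decays toward zero and is
# pushed around by the motor command (s' = 0.5*s + m). The model has
# to discover this law purely from its own motor babbling.
fm = ForwardModel()
s = 0.0
for _ in range(3000):
    m = random.uniform(-1.0, 1.0)   # random exploratory command
    s_next = 0.5 * s + m
    fm.update(s, m, s_next)
    s = s_next
```

After enough babbling the model's predictions match the environment, so a large prediction error afterwards signals something unexpected — exactly the kind of expectation-then-sensing the parent describes.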

They used a four-legged walking robot, a humanoid torso and a simulated wheeled robot. All three robots had a computer vision system trained to focus on red objects. The walking and wheeled robots automatically move towards red blocks in their proximity, while the humanoid bot grasps red objects, moving them closer to its eyes and tilting its head for a better view.

Ok, second year mechatronics project there.

To measure the relationship between movement and vision, the researchers recorded information fr
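Lungarella and Sporns quantify that movement-vision relationship with information-theoretic measures. Here's a bare-bones sketch of the idea (toy hand-made data and a naive plug-in estimator — their actual analysis is far more careful): compute the mutual information between a discretized motor stream and a "vision" stream that copies it with a one-step delay, so the coupling only shows up once the streams are aligned at that lag:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in mutual information between two equal-length discrete
    sequences, in bits."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy sensorimotor data: "vision" is a one-step-delayed copy of
# "motor", standing in for a camera that sees the result of a move.
motor = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
vision = [0] + motor[:-1]

# Align the streams at lag 1 before measuring the dependence.
mi_lagged = mutual_information(motor[:-1], vision[1:])
```

Because the delayed vision stream is an exact copy here, the lagged mutual information equals the entropy of the motor stream (just under 1 bit for this near-balanced binary sequence); with real sensor data it would be somewhere below that ceiling.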

A similar project is babybot [unige.it]. Short extract:
Our scientific goal is that of uncovering the mechanisms of the functioning of the brain by building physical models of the neural control and cognitive structures. In our intendment physical model are embodied artificial systems that freely interact in a not too unconstrained environment. Also, our approach derives from studies of human sensorimotor and cognitive development with the aim of investigating if a developmental approach to building intelligent systems may offer new insight on aspects of human behavior and new tools for the implementation of complex, artificial systems.
(BTW: that project has been around since 2000...)