Embodied cognition is a topic of research in social and cognitive psychology, covering issues such as social interaction and decision-making.[2] Embodied cognition reflects the argument that the motor system influences our cognition, just as the mind influences bodily actions. For example, when participants hold a pencil in their teeth, engaging the muscles of a smile, they comprehend pleasant sentences faster than unpleasant ones.[3] And it works in reverse: holding a pencil in their lips to engage the muscles of a frown increases the time it takes to comprehend pleasant sentences.[3]

The embodiment movement in AI has fueled the embodiment argument in philosophy; see in particular Andy Clark (1997, 1998, 2008)[15] and Hendriks-Jansen (1996).[16] It has also given emotions a new status in the philosophy of mind, as an indispensable constituent rather than a non-essential addition to rational intellectual thought.
In the philosophy of mind, the idea that cognition is embodied is consonant with other views of cognition such as situated cognition or externalism. This is a radical move towards a total re-localization of mental processes out of the neural domain.[17] It is important to stress that these views are forms of physicalism: they maintain that the mind is identical with physical processes, even though such processes extend outside the nervous system.

One embodied cognition study shows that action intention can affect processing in visual search, with more orientation errors for pointing than for grasping.[18] Participants either pointed to or grasped target objects of 2 colors and 2 orientations (45° and 135°). There were randomized numbers of distractors as well (0, 3, 6, or 9), which differed from the target in color, orientation, or both. A tone sounded to inform participants which target orientation to find. Participants kept their eyes on a fixation point until it turned from red to the target color. The screen then lit up and the participants searched for the target, either pointing to it or grasping it (depending on the block). There were 2 blocks for pointing and 2 for grasping, with the order counterbalanced. Each block had 64 trials.[18]
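The 2 (color) × 2 (orientation) × 4 (distractor count) design with 64-trial blocks can be sketched as trial-list generation. This is an illustrative reconstruction only; the condition labels, repetition counts, and counterbalancing scheme are assumptions based on the description above, not the original experiment code.

```python
import itertools
import random

COLORS = ["color_A", "color_B"]    # 2 target colors (labels assumed)
ORIENTATIONS = [45, 135]           # target orientations in degrees
DISTRACTORS = [0, 3, 6, 9]         # number of distractors per display

def make_block(action, n_trials=64, seed=0):
    """Build one block of trials for a given action ('point' or 'grasp')."""
    rng = random.Random(seed)
    # 2 x 2 x 4 = 16 unique conditions; repeat each 4 times to fill 64 trials
    conditions = list(itertools.product(COLORS, ORIENTATIONS, DISTRACTORS))
    trials = conditions * (n_trials // len(conditions))
    rng.shuffle(trials)
    return [{"action": action, "color": c, "orientation": o, "distractors": d}
            for c, o, d in trials]

# 2 pointing and 2 grasping blocks; order counterbalanced across participants
block_orders = [["point", "point", "grasp", "grasp"],
                ["grasp", "grasp", "point", "point"]]
session = [make_block(action, seed=i)
           for i, action in enumerate(block_orders[0])]
```

Each block is a randomized shuffle of the full factorial design, so every combination of color, orientation, and distractor count appears equally often within a block.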

Results from the experiment show that accuracy decreased as the number of distractors increased.[18] Overall, participants made more orientation errors than color errors.[18] There was no main effect of accuracy between the pointing and grasping conditions, but participants made significantly fewer orientation errors in the grasping condition than in the pointing condition.[18] Color errors were the same in both conditions.[18] Because orientation is important in grasping an object, these results fit the researchers' hypothesis that planning to grasp an object aids orientation accuracy.[18] This supports embodied cognition because action intention (planning to grasp an object) can affect visual processing of task-relevant information (orientation).[18]

Internal states can affect distance perception, which relates to embodied cognition.[19] Researchers randomly assigned college student participants to high-choice, low-choice, and control conditions. Participants in the high-choice condition signed a "freedom of choice" consent form indicating their decision to wear a Carmen Miranda costume and walk across a busy area of campus. Low-choice participants signed an "experimenter choice" consent form, indicating that the experimenter had assigned them to wear the costume. A control group walked across campus without a costume. At the conclusion of the experiment, each participant completed a survey asking them to estimate the distance they had walked.[19]

The high-choice participants perceived the distance walked as significantly shorter than participants in the low-choice and control groups, even though they walked the same distance.[19] The manipulation caused high-choice participants to feel responsible for the choice to walk in the embarrassing costume.[19] This created cognitive dissonance, which refers to a discrepancy between attitudes and behaviors.[19] High-choice participants reconciled their thoughts and actions by perceiving the distance as shorter.[19] These results show the ability of internal states to affect perception of physical distance moved, which illustrates the reciprocal relationship of the body and mind in embodied cognition.[19]

Researchers have found that, when making judgements about objects in photographs, people will take the perspective of a person in the picture instead of their own.[20] They showed college undergraduate participants 1 of 3 photographs and asked where 1 object in the picture was in relation to the other. For example, if the 2 objects were an apple and a banana, participants would answer a question about the location of the apple relative to the banana. The photographs showed either no person, a person looking at the object (in this case the banana), or a person reaching for it. The photograph and question appeared within a larger set of questionnaires unrelated to the study.[20]

Results show that participants who viewed photographs that included a person were significantly more likely to respond from another's perspective than those who saw photographs with no person.[20] There were no differences in perspective of responses for the person looking versus reaching.[20] Participants who saw the scene without a person were significantly more likely to respond from their own perspective.[20] This means that the presence of a person in the photograph affected the perspective used even though the question focused solely on the two objects.[20] The researchers state that these results suggest disembodied cognition, in which the participants put themselves into the body of the person in the photograph.[20]

The motor system is involved in language comprehension: when sentences described actions performable by a human, participants' overall movement of a swinging pendulum changed.[21] Researchers performed an experiment in which college undergraduate participants swung a pendulum while completing a "sentence judgement task." Participants would swing the pendulum with both hands for 10 seconds before a prompt, and then a sentence would appear on the screen until the participant responded. In the control condition, participants swung the pendulum without performing the "sentence judgement task." The trials were evenly split between "plausible" and "implausible" sentences; the "plausible" sentences made sense semantically, while the "implausible" ones did not. The "performable" sentences described actions a human could perform, while the "inanimate" sentences did not. Participants responded by saying "yes" to the "plausible" sentences.[21]

Results show a significant "relative phase shift," or overall change in movement of the swinging pendulum, for the "performable" sentences.[21] This change did not occur for "inanimate" sentences or in the control condition.[21] The researchers did not expect an overall phase shift; instead, they expected a change in the variability of movement, or the "standard deviation of relative phase shift."[21] Although not entirely expected, these results support embodied cognition and show that the motor system is involved in the understanding of language.[21] The researchers suggest that the nature of this relationship needs further study to determine how exactly this task relates to bi-manual motor movements.[21]
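In movement-coordination research, relative phase between two oscillations is commonly computed from their instantaneous phases via the Hilbert transform; its mean captures an overall phase shift and its standard deviation captures movement variability. The following sketch is a generic illustration of that measure applied to synthetic signals, not the analysis code from the study:

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase(signal_a, signal_b):
    """Continuous relative phase (degrees) between two oscillatory signals."""
    phase_a = np.angle(hilbert(signal_a))
    phase_b = np.angle(hilbert(signal_b))
    return np.degrees(np.unwrap(phase_a - phase_b))

# Two synthetic oscillations, the second lagging the first by 45 degrees
t = np.linspace(0, 20 * 2 * np.pi, 4000)
phi = relative_phase(np.sin(t), np.sin(t - np.pi / 4))

# Trim the edges to avoid Hilbert-transform end effects, then summarize
core = phi[len(phi) // 4 : -len(phi) // 4]
mean_shift = core.mean()   # overall change: the "relative phase shift"
variability = core.std()   # the "standard deviation of relative phase"
```

For the synthetic signals above, the mean relative phase recovers the built-in 45-degree lag, and the standard deviation is near zero because the lag is constant.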

Some researchers extend embodied cognition to include language.[2] They describe language as a tool that aids in broadening our sense of body.[2] For instance, when asked to identify “this” object, participants most often choose an object near to them.[2] Conversely, when asked to identify “that” object, participants choose an object further away from them.[2] Language allows us to distinguish between distances in more complex ways than the simple perceptual difference between near and far objects.[2]

A study examining memory and embodied cognition illustrates that people remember more of the gist of a story when they physically act it out.[22] Researchers divided female participants randomly into 5 groups: "Read Only," "Writing," "Collaborative Discussion," "Independent Discussion," and "Improvisation." All participants received a monologue about teen addiction and were told to pay attention to details about the character and action in the monologue. Participants were given 5 minutes to read the monologue twice, unaware of a future recall test.

In the "Read Only" condition participants filled out unrelated questionnaires after reading the monologue. In the "Writing" condition participants responded to 5 questions about the story from the perspective of the character in the monologue, with 6 minutes to answer each question. In the "Collaborative Discussion" condition participants responded from the character's perspective to the same questions as the "Writing" group, but in groups of 4 or 5 women; they were also given 6 minutes per question, and everyone participated in answering each question. The "Independent Discussion" condition was the same as the "Collaborative Discussion," except 1 person answered each question. In the "Improvisation" condition participants acted out 5 scenes from the monologue in groups of 5 women; the researchers suggested that this condition involves embodied cognition and would produce better memory for the monologue. Every participant played the main character once and a supporting character once. Participants were given short prompts from lines in the monologue, which were excluded from the memory test, and had 2 minutes to choose characters and 4 minutes per improvisation.

The recall test was the monologue with 96 words or phrases missing; participants had to fill in the blanks as accurately as possible.[22]

Researchers also gave the recall test to a group who had not read the monologue; they scored significantly lower than the other groups, indicating that guessing was not easy.[22] In coding the answers to the recall test, exact words were labeled "Verbatim," correct content in varied wording was labeled "Gist," and their combination was called "Total Memory." The "Improvisation" group had more "Gist" memories than any other group and more "Total Memory" than both discussion groups.[22] The results fit the researchers' hypothesis that the "Improvisation" group would remember more because they actively rehearsed the information from the monologue.[22] Although the other groups had also elaborately encoded the information, the "Improvisation" group remembered significantly more than the discussion groups and marginally more than the "Read Only" and "Writing" groups.[22] Simply experiencing the monologue in an active way aids in remembering the "Gist."[22] There were no differences across groups for "Verbatim" memory, which the researchers suggest may take longer to develop than the time allotted in the experiment.[22]

In research focused on the approach and avoidance effect, people showed an approach effect for positive words.[23] In the "positive toward condition," participants moved positive words toward the center of the screen and negative words away. In the "negative toward condition," participants moved negative words toward the center and positive words away. Participants were given feedback about their accuracy at the end of each of the 4 experimental blocks. In the first experiment the word at the center of the screen had a positive valence, while in the second experiment the central word had a negative valence. In the third experiment, the center of the screen had an empty box.[23]
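The response mapping in the two conditions can be summarized in a few lines of code; the valence labels and condition names below are illustrative, not the original stimuli or task software:

```python
def correct_response(valence, condition):
    """Return the movement required for a word of the given valence
    under the 'positive toward' or 'negative toward' mapping."""
    toward_valence = {"positive_toward": "positive",
                      "negative_toward": "negative"}[condition]
    return "toward" if valence == toward_valence else "away"

# In the "positive toward" condition, positive words move to the center
assert correct_response("positive", "positive_toward") == "toward"
assert correct_response("negative", "positive_toward") == "away"
```

The approach/avoidance prediction is that responses are faster when the required movement is "toward" for positive words and "away" for negative words, as in the "positive toward" condition.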

As predicted, in the first experiment participants in the "positive toward condition" responded significantly faster than those in the "negative toward condition."[23] This fits the approach/avoidance effect in embodied cognition, which states that people are faster to approach positive things and avoid negative ones.[23] In the second experiment, researchers expected participants in the "negative toward condition" to be faster, yet those in the "positive toward condition" responded significantly faster.[23] Although effects were smaller in the third experiment, participants in the "positive toward condition" were still faster.[23] Overall, people were faster in the "positive toward condition," regardless of the valence of the central word. Despite mixed results regarding the researchers' expectations, they maintain that the motor system is important in processing higher level representations such as the action goal.[23] In this study, participants showed strong approach effects in the "positive toward condition," which supports embodied cognition.[23]

As part of a larger study, researchers separated participants into 5 groups with different instructions.[24] In the "approach" condition, participants were instructed to imagine physically moving the product toward them, while in the "avoid" condition they had to imagine moving the product away from them. In the "control" condition, participants were instructed to simply observe the product. The "correction" condition involved the same instructions as the "approach" condition, except participants were told that the body can affect judgment. In the "approach information" condition, participants had to list 5 reasons why they would obtain the product. After viewing a picture of an aversive product, participants rated on a scale of 1 to 7 how desirable the product was and how much they approached or avoided it. They also indicated how much they would pay for the product.[24]

An approach/avoidance effect was found in relation to product evaluation.[24] Participants in the "approach" condition liked the aversive product significantly more and would pay more for it. There were no differences among the "avoid," "control," "correction," and "approach information" conditions. Simulating approach can thus affect liking of and willingness to pay for a product, but the effect can be reversed if the person knows about this influence.[24] This supports embodied cognition.[24]

As part of a larger study, one experiment randomly assigned college undergraduates to 2 groups.[25] In the "muscle-firming" condition participants grasped a pen in their hand, while in the "control" condition participants held the pen in their fingers. The participants were then asked to place donations to the Red Cross for Haiti in sealed envelopes, and were told to return the envelope regardless of whether they donated. They also filled out questionnaires about their feelings about the Red Cross, their tendency to donate, their feelings about Haiti, and what they thought the purpose of the study was.[25]

Significantly more participants in the "muscle-firming" condition than in the "control" condition donated money.[25] Condition did not affect the actual amount donated when participants chose to donate. As the researchers predicted, the "muscle-firming" manipulation helped participants overcome their aversion to viewing the devastation in Haiti and donate money. Muscle-firming in this experiment may also be related to an increase in self-control, suggesting that embodied cognition can play a role in self-regulation.[25]

Some suggest that the embodied mind serves self-regulatory processes by combining movement and cognition to reach a goal.[26] Thus, the embodied mind has a facilitative effect. Some judgments, such as the emotion of a face, are detected more quickly when a participant mimics the facial expression that is being evaluated.[19] Individuals holding a pen in their mouths to freeze their facial muscles and make them unable to mimic the expression were less able to judge emotions. Goal-relevant actions may be encouraged by embodied cognition, as evidenced by the automated approach and avoidance of certain environmental cues.[19] Embodied cognition is also influenced by the situation. If one moves in a way previously associated with danger, the body may require a greater level of information processing than if the body moves in a way associated with a benign situation.[26]

Some social psychologists hypothesized that embodied cognition would be supported by embodied rapport.[27] Embodied rapport was examined in pairs of same-sex strangers using Aron's paradigm, which instructs participants to alternate asking certain questions and to progressively self-disclose. The researchers predicted that participants would mimic each other's movements, reflecting embodied cognition. Half the participants completed a control task of reading and editing a scientific article, while the other half completed a shortened version of Aron's self-disclosure paradigm.[27]

There was a significant correlation between self-disclosure and positive emotions towards the other participant.[27] Participants randomly assigned to the self-disclosure task displayed more behavioral synchrony (rated by independent judges watching the tapes of each condition on mute) and reported more positive emotions than the control group.[27] Since bodily movements influence the psychological experience of the task, the relationship between self-disclosure and positive feelings towards one's partner may be an example of embodied cognition.[27]

George Lakoff and his collaborators have developed several lines of evidence suggesting that people use their understanding of familiar physical objects, actions and situations (such as containers, spaces, trajectories) to understand other, more complex domains (such as mathematics, relationships or death). Lakoff argues that all cognition is based on knowledge that comes from the body and that other domains are mapped onto our embodied knowledge using a combination of conceptual metaphor, image schema and prototypes.

Lakoff and Mark Johnson[29] showed that humans use metaphor ubiquitously, that metaphors operate at a conceptual level (i.e., they map one conceptual domain onto another), that they involve an unlimited number of individual expressions, and that the same metaphor is used conventionally throughout a culture.
Lakoff and his collaborators have collected thousands of examples of conceptual metaphors in many domains.[29][30]

For example, people will typically use language about journeys to discuss the history and status of a love affair, a metaphor Lakoff and Johnson call "LOVE IS A JOURNEY". It is used in such expressions as: "we arrived at a crossroads," "we parted ways", "we hit the rocks" (as in a sea journey), "she's in the driver's seat", or, simply, "we're together". In cases like these, something complex (a love affair) is described in terms of something that can be done with a body (travel through space).

Prototypes are "typical" members of a category, e.g. a robin is a prototypical bird, but a penguin is not. The role of prototypes in human cognition was first identified and studied by Eleanor Rosch in the 1970s.[31] She was able to show that prototypical objects are more easily categorized than non-prototypical objects, and that people answered questions about a category as a whole by reasoning about a prototype. She also identified basic level categories:[32] categories that have prototypes that are easily visualized (such as a chair) and are associated with basic physical motions (such as "sitting"). Prototypes of basic level categories are used to reason about more general categories.
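Rosch's finding that prototypical members are categorized more readily is often modeled with a distance-to-prototype rule: the closer an exemplar's features lie to the category prototype, the easier it is to categorize. The feature names and values below are invented purely for illustration:

```python
def prototype_distance(exemplar, prototype):
    """Euclidean distance between feature vectors; smaller = more typical."""
    return sum((a - b) ** 2 for a, b in zip(exemplar, prototype)) ** 0.5

# Toy bird features: [flies, sings, body_size] (values invented for illustration)
bird_prototype = [1.0, 1.0, 0.3]
robin = [1.0, 1.0, 0.2]
penguin = [0.0, 0.0, 0.8]

# The robin lies nearer the prototype, predicting easier categorization
robin_dist = prototype_distance(robin, bird_prototype)
penguin_dist = prototype_distance(penguin, bird_prototype)
```

Under this toy model, a robin sits close to the bird prototype while a penguin sits far from it, mirroring the behavioral finding that robins are categorized as birds faster than penguins are.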

Prototype theory has been used to explain human performance on many different cognitive tasks and in a large variety of domains.[33] George Lakoff argues that prototype theory shows that the categories people use are based on our experience of having a body and bear no resemblance to logical classes or types. For Lakoff, this shows that traditional objectivist accounts of truth cannot be correct.[33]

Artificial intelligence research before the 1980s simulated intelligence using logic and high-level abstract symbols (an approach called Good old-fashioned AI). This "disembodied" approach ran into serious difficulties in the 1970s and 80s, as researchers discovered that abstract, disembodied reasoning was highly inefficient and could not achieve human levels of competence on many simple tasks.[34] Funding agencies (such as DARPA) withdrew funding because the field of AI had failed to achieve its stated objectives, leading to a difficult period now known as the "AI winter". Many AI researchers began to doubt that high-level symbolic reasoning could ever perform well enough to solve simple problems. In recent decades, AI research has achieved significant success using "embodied" approaches; that is, by directly simulating the functions we associate with the body (such as perception and motion) without using logic or any similar representation.
The experience of AI research provides another line of evidence supporting the embodied mind thesis.

Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec (whence the name) and others in the 1980s.
As Moravec writes:

Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.[35]

In the early history of AI, successes in programming high-level reasoning tasks such as chess playing led to an unfounded optimism that all AI problems would be solved relatively quickly. When this failed to happen, and the difficulty of tasks based on sensorimotor skills became appreciated, researchers turned to new approaches such as Nouvelle AI and the notion of embodied cognition.

Many artificial intelligence researchers have argued that a machine may need a human-like body to think and speak as well as a human being. As early as 1950, Alan Turing wrote:

It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. That process could follow the normal teaching of a child. Things would be pointed out and named, etc. (Turing, 1950).[36]

Embodiment theory was brought into artificial intelligence most notably by Rodney Brooks in the 1980s. Brooks showed that robots could be more effective if they 'thought' (planned or processed) and perceived as little as possible. The robot's intelligence is geared towards handling only the minimal amount of information necessary to make its behavior appropriate and/or as desired by its creator.

Rohrer (2005) discusses how our neural and developmental embodiment shapes both our mental and linguistic categorizations. The degree of abstraction of a thought has been found to be associated with physical distance, which in turn affects associated ideas and the perception of risk.[37]

Research on embodied cognition is extremely broad, covering a wide range of concepts. Methods to study embodied cognition vary from experiment to experiment based on the operational definition used by researchers. There is much evidence for embodied cognition, although interpretation of results and their significance may be disputed. Researchers continue to search for the best way to study and interpret embodied cognition.

Some[38] criticize the notion that pre-verbal children provide an ideal channel for studying embodied cognition, especially embodied social cognition.[39] It may be impossible to know when a pre-verbal infant is a "pure model" of embodied cognition, since infants experience dramatic changes in social behavior throughout development.[38] A 9-month-old has reached a different developmental stage than a 2-month-old. Looking-time and reaching measures may not capture embodied cognition, since infants develop object permanence for objects they can see before they develop it for objects they can touch.[38] True embodied cognition suggests that children would first have to physically engage with an object to understand object permanence.[38]

The response to this critique is that infants are "ideal models" of embodied cognition.[39] Infants are the best models because they utilize symbols less than adults do.[39] Looking-time is likely a better measure of embodied cognition than reaching, because infants have not yet developed certain fine motor skills.[39] Infants may first develop a passive mode of embodied cognition before they develop the active mode involving fine motor movements.[39]

Some criticize the conclusions made by researchers about embodied cognition.[40] The pencil-in-teeth study is frequently cited as an example of these invalidly drawn conclusions. The researchers believed that the quicker responses to positive sentences by participants engaging their smiling muscles represented embodied cognition.[3] However, opponents argue that the effects of this exercise were primed or facilitated by the engagement of certain facial muscles.[40] Many cases of facilitative movements of the body may be incorrectly labeled as evidence of embodied cognition.[40]

The following “Six Views of Embodied Cognition” are taken from Margaret Wilson:[41]

"Cognition is situated. Cognitive activity takes place in the context of a real-world environment, and inherently involves perception and action." One example of this is moving around a room while, at the same time, trying to decide where the furniture should go.

"Cognition is time-pressured. We are 'mind on the hoof' (Clark, 1997), and cognition must be understood in terms of how it functions under the pressure of real-time interaction with the environment." When you are under pressure to make a decision, the choice that emerges reflects the confluence of pressures you are under; in their absence, the decision might have been made completely differently.

"We off-load cognitive work onto the environment. Because of limits on our information-processing abilities (e.g., limits on attention and working memory), we exploit the environment to reduce the cognitive workload. We make the environment hold or even manipulate information for us, and we harvest that information only on a need-to-know basis." This is seen when people have calendars, agendas, PDAs, or anything to help them with everyday functions. We write things down so we can use the information when we need it, instead of taking the time to memorize or encode it into our minds.

"The environment is part of the cognitive system. The information flow between mind and world is so dense and continuous that, for scientists studying the nature of cognitive activity, the mind alone is not a meaningful unit of analysis." This statement means that the production of cognitive activity does not come from the mind alone, but rather is a mixture of the mind and the environmental situation that we are in. These interactions become part of our cognitive systems. Our thinking, decision-making, and future are all impacted by our environmental situations.

"Cognition is for action. The function of the mind is to guide action and things such as perception and memory must be understood in terms of their contribution to situation-appropriate behavior." This claim concerns visual perception and memory. Vision is encoded into our minds as "what" and "where" concepts, meaning the identity and placement of an object, and our perception of what we see comes from our experience of and exposure to it. Memory in this case does not necessarily mean memorizing something exactly, but rather remembering it from a relevant point of view instead of as it really is. We remember how relevant something is to us and decide whether it is worth remembering.

"Off-line cognition is body-based. Even when decoupled from the environment, the activity of the mind is grounded in mechanisms that evolved for interaction with the environment- that is, mechanisms of sensory processing and motor control." This is best illustrated by infants and toddlers: children utilize skills and abilities they were born with, such as sucking, grasping, and listening, to learn more about the environment. These skills fall into categories that combine sensory and motor abilities, or sensorimotor functions, including:

Mental Imagery: visualizing something based on your perception of it when it is not present. An example would be taking a moment before a race, excited and full of adrenaline, and actually seeing yourself winning it.

Implicit Memory: the means by which we learn certain skills until they become automatic for us. An example would be an adult brushing his or her teeth, or an expert race car driver putting the car in drive.

Reasoning and Problem-Solving: having a mental model of something improves reasoning and problem-solving.

Margaret Wilson adds: "Some authors go so far as to complain that the phrase 'situated cognition' implies, falsely, that there also exists cognition that is not situated (Greeno & Moore, 1993, p. 50)."[42] Of her six claims, she notes in her abstract, "the first three and the fifth claim appear to be at least partially true, and their usefulness is best evaluated in terms of the range of their applicability. The fourth claim, I argue, is deeply problematic. The sixth claim has received the least attention, but it may in fact be the best documented and most powerful of the six claims."[43]