Can you tell what the person is doing? It might be hard to make it out from these still pictures, but when you see the same thing in motion it becomes quite clear. Visit the Biomotion Lab and you’ll quickly understand.

What you see is called a point-light display. Lights are attached to joints on the body and filmed while a person is performing an action. The animated display makes it surprisingly clear that this person is walking. But, could a young child, who has just learned verbs, recognize that this person is walking?

Once children learn verbs, they must be able to generalize them to different people and situations. By showing children point-light displays, it might be possible to understand the process children use to extend verbs. Because point-light displays give no specific hints about a verb, such as an associated location or object, the verb is represented only by the manner and path that define it. For example, a picture of a person using a shovel on the beach could give away the verb if a child recognizes the shovel, whereas showing the action of bending over, pushing towards the ground, and then standing up again shows the manner and path of shoveling.

A research team led by Roberta Golinkoff had 3-year-old children look at point-light displays in an attempt to discover whether children can label verbs presented in this format.
Success with this task would suggest that children learn to extend verbs because they understand the specific motion that makes up an action and then form a word-action association. For example, when you tell a child ‘Look, Dad’s running,’ the movement of the arms, the bend of the knees, and the fast forward motion are then represented by the new word, running, which is later generalized when the same motion is observed in an animal or another person. But for children to be able to label point-light displays as specific verbs, they must first have been introduced to those verbs, especially since children might have a hard time comprehending that a group of lights is actually a person. Because of this, the researchers chose children who already knew the names of at least 7 out of 8 specific verbs.

Point-light displays were created by filming people walking, dancing, shoveling, picking flowers, running, rolling, hopping, and skipping with lights attached to their ankles, knees, hips, wrists, elbows, and shoulders. In experiment 1, children were presented with 2 simultaneous films of point-light displays on separate screens. As they watched, an experimenter said things such as “Do you see dancing?” or “Find dancing! Look at dancing!” The expectation was that if children looked more at the screen depicting the named verb, then they could distinguish between the 2 actions. All children were held facing the screens by their parents, but parents were told not to help the children in any way.

The following diagram shows the setup of the testing room.

Children were first presented with a point-light display of a cat; the cat and its action were verbally labeled to familiarize the children with the concept of point-light displays. Then children saw 2 simultaneous videos, this time depicting people. The experimenter verbalized both actions (for example, “Hey, one is walking and one is dancing!”) so the children would understand what the displays were showing. However, the experimenter did not tell them which screen displayed which action. Next, children were given the test trials. Pairs of verbs were presented while the experimenter named 1 of the 2 verbs. A red light between the 2 screens helped to attract children’s attention before the start of each new trial. Observers who could not see the screens recorded which screen children looked at and how long they looked.

The results show that 29 out of the 32 three-year-olds looked at the point-light display that matched the spoken verb! The average time that children looked at the screen depicting the match was 3.36 seconds compared with only 2.29 seconds spent looking at the screen that did not match. These data suggest that children can extend verbs to something they have never seen before.
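The looking-time comparison above can be sketched as a simple preference score. This is a hypothetical illustration: the function and the idea of dividing match time by total looking time are mine, not the researchers' analysis.

```python
# Hypothetical sketch of the preferential-looking measure: a child's looking
# times (in seconds) to the matching and non-matching screens are combined
# into a single preference score between 0 and 1, where values above 0.5
# indicate a preference for the named action.

def preference_score(match_seconds, nonmatch_seconds):
    """Proportion of total looking time spent on the matching screen."""
    total = match_seconds + nonmatch_seconds
    return match_seconds / total

# The averages reported in the study: 3.36 s to the match, 2.29 s elsewhere.
score = preference_score(3.36, 2.29)
print(round(score, 2))  # a value above 0.5 suggests the children
                        # distinguished the named action
```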

Children could successfully look at the screen depicting the matching verb, but would they be able to name the verb? Perhaps children only recognized the action once they were given the label, which then helped them understand what they were seeing. In order to gather more evidence that children can recognize point-light action without suggestion, the researchers brought in a new group of children and asked them to say the verb that they saw.

In experiment 2, children once again sat on their parents’ laps but saw the point-light displays one at a time. The experimenter then prompted the children, asking, “Can you tell me what that was?” Often the children would label the figure rather than the action, and in this case, the experimenter would prompt again, saying something such as “What was the lady doing?” If a child still did not say a verb, he or she was prompted with the actual label, “Was the lady walking?” Most children successfully answered with some kind of action after the first question. The experimenters accepted a few different descriptions for specific actions based on adults’ ratings of the appropriateness of each response. For instance, walking, jogging, and marching were all considered appropriate responses for running. The following chart shows the number of children who gave appropriate responses for each action over the total number of children who gave responses.

As you can see, skipping and shoveling drew only a few appropriate responses, but children were still able to produce some type of motion verb even when it was not considered appropriate. These results are quite impressive considering that 3-year-olds actually produced a verb for a strange group of moving lights.

Although there are no point-light displays in the real world, this research shows us that children most likely recognize motions by summarizing the components that make up a specific motion and then storing those components in memory. If children can extend familiar verbs to point-light displays, could they learn verbs from point-light displays? Since autistic children struggle to develop language, point-light displays might be incorporated into autism treatment programs as a way to teach motion verbs. Research about how children acquire, categorize, and label information from their surroundings is particularly applicable to this group. As Golinkoff and colleagues point out, verbs are the building blocks of sentences, so understanding how children learn to use them may hint at how children learn to label their surroundings with words.

All this talk about stereotypes can get you thinking. Perhaps some stereotypes reflect actual differences. Take color vision, for example: men often refer to themselves as “color-impaired,” letting the women in their lives make home design decisions and even asking them to match clothing for them. Maybe they’re just behaving in accordance with traditional stereotypes … but maybe there’s something more to it.

In the 1980s, vision researchers began to find some real physical differences between the eyes of many women and those of most men. “Normal” color vision is possible because we have three different types of cone cells in our eyes, each of which responds to a different wavelength of light. The process is basically the reverse of how a TV set or computer monitor works: on a TV, there are three different colored dots—red, green, and blue—and the millions of “colors” we see are based on mixtures of different proportions of those colors. In the eye, cone cells can have three different photopigments. These are usually generalized as red, green, and blue, but their actual values are closer to yellowish green, green, and bluish violet. To avoid confusion, psychologists typically refer to them as long-, medium-, and short-wavelength sensitive cones. Suppose we’re looking at a yellowish-green object: the long-wavelength cones are stimulated the most, the medium-wavelength cones a bit, and the short-wavelength cones not at all. The corresponding signal is sent along the optic nerve to the brain, which recognizes the color as “yellowish-green.”
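The cone story can be sketched as a toy model (not anything from the research described here): each cone type is approximated as a Gaussian sensitivity curve around a rough textbook peak wavelength, and a wavelength of light stimulates each type according to its distance from that peak. The peak values and bandwidth below are illustrative assumptions, not measured quantities.

```python
import math

# A toy model of cone responses (not the study's method): each cone type's
# sensitivity is approximated as a Gaussian around its peak wavelength.
# Peak values are rough textbook figures; the bandwidth is arbitrary.
CONE_PEAKS_NM = {"long": 565, "medium": 535, "short": 420}
BANDWIDTH_NM = 50  # arbitrary spread, for illustration only

def cone_responses(wavelength_nm):
    """Relative stimulation of each cone type by monochromatic light."""
    return {
        cone: math.exp(-((wavelength_nm - peak) / BANDWIDTH_NM) ** 2)
        for cone, peak in CONE_PEAKS_NM.items()
    }

# Yellowish-green light (~560 nm): long cones respond most, medium cones a
# bit less, short cones barely at all -- matching the description above.
resp = cone_responses(560)
print(resp["long"] > resp["medium"] > resp["short"])  # True
```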

What the researchers were finding when they actually looked at the structure of the eye is that many women—perhaps over fifty percent—possessed a fourth photopigment. Was this merely a genetic anomaly? Would the brain even be able to process this fourth input? The early research suggested that it would not. Women were no better at determining whether two very similar color patches were actually the same. They were only slightly better than men at detecting subtle spots of red light, a fact researchers attributed to individual differences.

However, Kimberly Jameson, Susan Highnote, and Linda Wasserman were not convinced by this evidence. Five- and six-year-old girls are better at naming colors than boys, and grown men are not as good at color-naming as women. Jameson and her colleagues felt the existing measures of color sensitivity and color-matching did not capture all the differences between men and women, so they devised a new experiment they believed was more representative of real-world vision.

It’s quite difficult to examine an eye to determine if it has the fourth photopigment—the process generally involves removing the eye itself. Jameson and her colleagues might have had just a bit of difficulty recruiting volunteers to participate in an experiment requiring such extreme measures, so instead they used a genetic test to determine how many different photopigments participants were likely to possess (they estimate this process to be about 90 percent accurate—biologists will recognize this as the genotype versus phenotype problem). Of 64 participants in the study, 23 were women with 4 photopigments, 15 were women with 3 photopigments, 22 were men with 3 photopigments, and 4 were men with 2 photopigments (this is commonly called “color-blindness,” but most people with 2 photopigments can still distinguish between many colors).

Next, participants viewed a spectrum projected on a lucite window covered with tracing paper. Over the next hour and a half, they performed an array of tasks, including marking the edges of the visible rainbow, marking the locations of the “best example” of each of the major colors, and marking the edges of each “band” of color in the rainbow. Between each task, a camera flash was set off to mask the previous spectrum example, and the experimenter mounted a new sheet of tracing paper on the panel.

The most compelling results came from the number of spectral bands task:

Type of participant                Average number of spectral bands   Number of participants
Four-pigment females               10                                 23
Three-pigment females              7.6                                15
Three-pigment males and females    7.3                                37
Two-pigment males                  5.3                                4

Four-pigment females perceived significantly more bands of color than both three-pigment males and females. Further, three-pigment males and females are statistically indistinguishable, suggesting that the result is not due to some cultural difference between men and women.

So why were others unable to find significant results in a color-matching task when we see such dramatic results here? Jameson et al. suggest that there may be two (or more) different modes of seeing color, each processed differently in the brain. The brain may use the data from all four photopigments for some processes, but not for others. But this is still supposition. What’s clear from this study is that the stereotype of women being better with color may reflect real differences between men and women.

Attentional Set: Set in stone?
http://scienceblogs.com/cognitivedaily/2007/07/12/attentional-set-set-in-stone/ (July 12, 2007)

This is a guest post by Daniel Griffin, one of Greta’s top student writers from Spring of 2007.

Does anything seem to stick out about this sentence? I’m sure that if I told you to keep looking for yellow highlighted words, you would not have much trouble finding them in these first few sentences. You could even make it simpler for yourself and just look for any highlighted word. The only highlighted portions are yellow, so what is the difference? Let’s say that by now you are used to searching for these highlighted words by just looking for a different color background than the usual white. Does it take any longer to find the yellow word in this sentence? For most of us the answer would be not especially, but I bet you glanced twice at “find.” If I were to write the rest of this post in this fashion, you would have to change your visual searching strategy to look not just for highlighted words, but for yellow words in particular.

This kind of visual searching strategy is called an “attentional set”. More specifically, an attentional set is an innate part of our information processing that prioritizes certain stimuli, such as yellow highlighted words, for selection. So why would we create this set when looking for things? Using a certain set prevents things other than what we are looking for from distracting us. The problem is that we do not always use the set that is best for what we are doing. With this in mind, Andrew Leber and Howard Egeth studied our visual searching strategies and the effects of past experience.
Leber and Egeth first defined two major types of attentional sets. The first they call singleton detection mode: a broad way of searching that tunes our attention to any item with a unique characteristic, i.e., a singleton. The highlighted word in the opening of this post is a singleton; no matter what color it is, being highlighted makes it stand out. The second kind of attentional set is called feature search mode. This mode is a much narrower way of searching, in which we look only for something’s defining feature (i.e., a yellow highlighted word). There are advantages to both methods: singleton detection mode takes less effort, but it is not as systematic as feature search mode. So how do we decide which one to use? The choice is not entirely ours to make: the presence of a preestablished strategy appears to be the most important factor in which set we use.
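The two attentional sets can be sketched as search strategies over a toy display of (symbol, color) items. These functions illustrate the distinction; they are not the authors' stimuli or code.

```python
# A sketch of the two attentional sets as search strategies over a display
# of (symbol, color) items. Illustrative only, not the authors' experiment.

def singleton_search(display):
    """Singleton detection mode: return any item whose color differs from
    the rest of the display, regardless of what that color is."""
    colors = [color for _, color in display]
    for item in display:
        if colors.count(item[1]) == 1:  # unique color -> it's the singleton
            return item
    return None

def feature_search(display, target_color):
    """Feature search mode: return only an item of the specific color we
    are looking for, ignoring other odd-colored items."""
    for item in display:
        if item[1] == target_color:
            return item
    return None

display = [("A", "grey"), ("B", "grey"), ("C", "red"), ("D", "grey")]
print(singleton_search(display))       # ('C', 'red') -- any odd color
print(feature_search(display, "red"))  # ('C', 'red') -- but only if red
```

Note that singleton search would be captured by any uniquely colored item, while feature search ignores colors other than its target, which mirrors the distractor effects described below.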

To test the importance of a preestablished strategy, Leber and Egeth first divided volunteers into two groups for a “training phase.” The training influenced the groups to use either singleton detection or feature search mode. The general procedure asked subjects to identify a target letter after a rapid presentation of 20 screens with a single, random letter on each and a blank screen in between letters. Each screen was presented for only 50 milliseconds. The group that was influenced to use singleton detection mode searched for any letter that was a different color from the other letters. The feature search group looked for a target of a particular color, either red or green. Two screens before the target letter appeared, a number of different distractors might flash — either grey “#” symbols or “#” symbols that were the same or a different color as the target letter. The training phase lasted 30 minutes so that each subject would become accustomed to the assigned set. Here’s an example of the kind of thing viewers experienced. Play this movie (QuickTime required) and see if you can identify the letter that’s displayed in red.

Remember, some viewers were trained to look for specific colors while others were trained to look for any color that was different.

Next, subjects looked for a letter that was either red or green, but exactly which color was not announced. These letters were preceded by four types of distractor displays: no distractors; four grey distractors; three grey distractors and one distractor the same color as the target; and finally three grey distractors and one distractor of a different color than the target.

The results showed a remarkable effect of preestablished strategy. Subjects continued to use the same search method they had used in the training phase. The group trained in feature searching still looked for a specific color: their performance was no different whether the distractors were all grey or included a different-colored one. The singleton search group kept its strategy as well: they performed worse with both same- and different-colored distractors, because in singleton search mode you are looking for any item with a unique characteristic, which in this case is the presence of color. The graph below shows the performance of the two groups during the test phase.

Notice the difference between groups when presented with a colored distractor: the feature search group can stay focused on the task because they look for a specific color, whereas the singleton search group cannot.

In the “test phase,” subjects could use either attentional set. The results show that they did not change searching styles to match the demands of the task. Had they reevaluated how to find the target most effectively, the singleton search group would have adopted more of a feature-searching style, but they did not. Conversely, the feature search group might have adopted more of a singleton search style when they looked through screens with only grey distractors, but they did not change approaches either.

Our past experience influences how we search, regardless of which method is most effective. This idea leads to broader questions concerning the basic properties of attentional control. Does the past play a larger part in where we direct our attention than we think? Try spotting the yellow highlighted word this time. Faster?

There’s something about kids and dogs. The phrase “A boy and his dog” brings up quite a range of images: from the sweetness of Norman Rockwell to what sounds like a truly bizarre movie from 1975. Despite not being a dog-person myself (okay, not being a pet-person at all), I find the results from a study that looked at kids and dogs amazing. Marina Pavlova and her colleagues at the University of Tübingen were curious about how well kids would understand point-light displays. Imagine placing little lights on the major joints of someone’s body (hips, elbows, etc.) and then watching them move in a dark room. All you can see are little dots, but you can almost instantly identify a person–you can even name them, if it’s a friend. You can play with these displays here, and we have posted on them before.

The speed with which we recognize these figures could be because we have a lot of exposure to human movement, not just visually, but physically, as well. Pavlova and her team wanted to explore what young kids might see in these displays, and figure out if they needed the displays to be in motion. They created four point-light displays: a human walker, a running dog, a flying bird and a walking dog. The flying bird was viewed from the front, and the rest were viewed from the side. Kids and adults were shown the movies and asked to name the figures, and here are the percentages of correct responses.

This is an easy task for adults, and by the time you are 5 years old, this part of your world is the same as adults’—there was not a significant difference between adult and 5-year-old performance. However, the younger kids have trouble: some of the kids can see that the dots show a moving body, but many cannot. Of these movies, you might think that the human walker would be the easiest; after all, this is who you are. But take a look at how the youngest kids do with the walking dog: they do better with the dog than with the human!

What does this mean? Pavlova et al. suggest that the littlest kids have trouble recognizing the walker because it’s the wrong view. Walking adults don’t look like that when you are only 3 feet tall; all the angles are different. If this is true, then we could expect kids’ recognition to improve dramatically with movies made from their point of view, and I hope someone’s taking a look at this, because it would be fun to see.

As a final note, how much do you need the motion for these displays? In their second experiment, Pavlova and her team showed new groups of adults and 5-year-olds comic-book versions of the movies. One problem they faced was even figuring out a way to explain the storyboard form to 3- and 4-year-olds. In the end, this problem didn’t matter, because both adults and children could not do this task at all—their guesses were at the level of random chance. This is something you can probably easily believe as you try to identify this form.

We recognize siblings based solely on facial similarity
http://scienceblogs.com/cognitivedaily/2007/07/09/we-recognize-siblings-based-so/ (July 9, 2007)

This is a guest post by Christy Tucker, one of Greta’s top student writers from Spring of 2007.

Take a look at the following paintings. How alike are they? How can you tell? Which clues help you determine similarity? Now, which of these girls are related? If only two of these young girls are related, how would you determine which two? Would they be the same ones that you thought looked very similar?

Laurence Maloney and Maria Dal Martello studied observers’ ratings of the similarity between two children’s faces in relation to judgments of whether the two are siblings. Do we simply note similarity when trying to identify siblings? Or do we use a different process? Pairs of pictures of children with neutral facial expressions like the one below were shown to two groups of observers.

The first group of viewers rated how similar the children appeared on a scale of 1 (not similar) to 10 (similar). They were not told that the pairs of children could be siblings. The second group viewed the same picture sets and then judged whether the children were siblings or non-siblings. This time, the observers were told that half the pairs were siblings.

Maloney and Dal Martello found that children who were rated to be more similar were more often judged to be siblings, as demonstrated by the graph below.

Consistent with this result, when a pair was not believed to be similar, they were rarely classified as siblings. Subsequent statistical analysis revealed that viewers did not take into account gender or age when determining similarity. In addition, the straight upward-sloping line in the graph suggests that similarity ratings are based on some sort of built-in formula.

A later study by Maloney and Dal Martello addresses the specific features of the face on which observers focus during their assessment of similarity or relatedness. They conclude that the characteristics of the upper face are used to make projections about the relatedness of children, since the lower face is not fully developed until early adulthood. Take a look back at the paintings of the four girls. The girls are actually all sisters, taken from Thomas Gainsborough’s 1787 The Marsham Children. The ratio of upper face size to lower face size generally indicates age. Does that help you determine the relative ages of the sisters? Would observers spotlight the lower face to decide whether adults are related, since the structure of their lower faces is already fully formed? It is truly fascinating to realize that when we walk by a group of people, our minds are engaging in a process of similarity and relatedness assessments that affect our judgments of them and interactions with them!

High IQ: Not as good for you as you thought
http://scienceblogs.com/cognitivedaily/2007/07/07/high-iq-not-as-good-for-you-as-1/ (July 7, 2007)

A continuation of our “greatest hits” from past Cognitive Daily postings:
[originally posted on December 14, 2005]

IQ has been the subject of hundreds, if not thousands, of research studies. Scholars have studied the link between IQ and race, gender, socioeconomic status, even music. Discussions about the relationship between IQ and race and the heritability of IQ (perhaps most notably Stephen Jay Gould’s The Mismeasure of Man) often rise to a fever pitch. Yet for all the interest in the study of IQ, there has been comparatively little research on other influences on performance in school.

Angela Duckworth and Martin Seligman estimate that for every ten articles on intelligence and academic achievement, there is fewer than one about self-discipline. Even so, the small body of research on self-discipline suggests that it has a significant impact on achievement. Walter Mischel and colleagues found in the 1980s that 4-year-olds’ ability to delay gratification (for example, to wait a few minutes for two cookies instead of taking one cookie right away) was predictive of academic achievement a decade later. Others have found links between personality and college grades, and between self-discipline and Phi Beta Kappa awards. Still, most research on self-discipline has produced inconsistent results, possibly due to the difficulty of measuring self-discipline. Could a more robust measure of self-discipline demonstrate that it’s more relevant to academic performance than IQ?

To address this question, Duckworth and Seligman conducted a two-year study of eighth graders, combining several measures of self-discipline for a more reliable measure, and also assessing IQ, achievement test scores, grades, and several other measures of academic performance. Using this better measure of self-discipline, they found that self-discipline was a significantly better predictor of academic performance 7 months later than IQ.

How did they arrive at this result? They studied a group of 8th-graders at the beginning of the school year. They used five different measures of self-discipline: the Eysenck Junior Impulsiveness scale (a 23-question survey about impulsive behavior), the Brief Self-Control Scale (13 questions measuring thoughts, emotions, impulses, and performance), two questionnaires in which parents and teachers rated the student’s self-discipline, and a version of Mischel’s delay of gratification task. Students were given an envelope containing $1, and were told they could spend it immediately or bring it back in a week for a $2 reward. The students were also given an IQ test (OLSAT7, level G).
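Combining several differently-scaled instruments into one composite is commonly done by converting each measure to z-scores and averaging them per student. The sketch below shows that standard approach; the paper's exact aggregation method may differ, and the sample numbers are invented purely for illustration.

```python
# One common way to build a composite from several measures on different
# scales (a sketch of a standard approach, not necessarily the paper's):
# standardize each instrument to z-scores, then average per student.

def zscores(values):
    """Convert raw scores to z-scores (population standard deviation)."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def composite(measures):
    """measures: list of per-student score lists, one list per instrument."""
    standardized = [zscores(m) for m in measures]
    n_students = len(measures[0])
    return [sum(m[i] for m in standardized) / len(measures)
            for i in range(n_students)]

# Toy data for three students on two hypothetical instruments whose raw
# scales differ; the composite puts them on a common footing.
impulsiveness = [10, 15, 20]
self_control = [40, 50, 60]
print(composite([impulsiveness, self_control]))
```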

At the end of the school year, students were surveyed again and several measures of academic performance were taken. The data included final GPA (grade point average), a spring achievement test, whether they had been admitted to the high school of their choice, and the number of hours they spent on homework. All except two measures correlated more strongly with self-discipline than with IQ. Scores on spring achievement tests were correlated with both self-discipline and IQ, but there wasn’t a significant difference between the two. Duckworth and Seligman suggest that this could be partially due to the fact that achievement tests are similar in format to IQ tests. The other area where there was no significant difference was in school absences.

Most impressive was the whopping .67 correlation between self-discipline and final GPA, compared to a .32 correlation for IQ. This graph dramatically shows the difference between the two measures:

Both IQ and self-discipline are correlated with GPA, but self-discipline is a much more important contributor: those with low self-discipline have substantially lower grades than those with low IQs, and high-discipline students have much better grades than high-IQ students. Even after adjusting for the student’s grades during the first marking period of the year, students with higher self-discipline still had higher grades at the end of the year. The same could not be said for IQ. Further, the study found no correlation between IQ and self-discipline—these two traits varied independently.
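The statistic behind the .67 and .32 figures is Pearson's correlation coefficient. The sketch below shows how it is computed; the sample numbers are invented toy data purely to demonstrate the calculation, not the study's data.

```python
import math

# Pearson's r, the correlation statistic behind the reported .67 and .32
# figures. The vectors below are invented toy data for illustration only.

def pearson_r(xs, ys):
    """Correlation between two equal-length score lists, in [-1, 1]."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# A perfectly linear relationship gives r = 1.0.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```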

This is not to say this study will end the debate on IQ and heredity. The study says nothing about whether self-discipline is heritable. Further, self-discipline might be correlated differently with achievement in different populations; this study covered only eighth graders in a relatively privileged school. Perhaps self-discipline has a different role at other ages, or in more diverse populations (though the study group was quite ethnically diverse—52% White, 31% Black, 12% Asian, and 4% Latino). Perhaps the most important question that remains is how best to teach children self-discipline—or whether it can be taught at all.

Synesthesia more prevalent than originally thought
http://scienceblogs.com/cognitivedaily/2007/07/02/synesthesia-more-prevalent-tha/ (July 2, 2007)

This is a guest post by Jonathan Leathers, one of Greta’s top student writers for Spring 2007.

Take a look at this word:

MONDAY

What color do you see? Red? Blue?

While you may see nothing unusual, some people report being able to perceive colors associated with different days of the week when they are written down or heard in conversation. This ability is attributed to a phenomenon known as synesthesia, previously thought to be extremely rare. In synesthesia, the human brain interprets one set of sensory stimuli in terms of another; in other words, two senses cross. But synesthesia goes beyond metaphorically stating that one feels blue on Mondays. Previous sampling methods relied on self-referral, placing the percentage of people with synesthesia at roughly 0.05%. But a recent study led by Julia Simner has shown that the number is actually much higher — about 88 times higher!

There are many different forms of synesthesia, each one a product of different senses crossing — word-color, taste-shape, music-color, people-smell — all were included in Dr. Simner’s study of synesthesia’s prevalence in a population. Students at the Universities of Glasgow and Edinburgh (327 women and 173 men) were asked which, if any, forms of synesthesia applied to them by drawing a line from a list of “triggers” (smells, sounds, words, etc.) to a list of corresponding “experiences” elicited (for example, colors, shapes, or tastes). Those who indicated having some form of synesthesia, 120 in all, were then presented randomly with a trigger and instructed to record whatever they experienced. After 70 trials, the order of the stimuli was re-randomized and each subject retested. Several months later, the students were asked to return and complete a third test; this was done to ensure the consistency and validity of their answers and to verify that they were, in fact, synesthetic. Here are the results:

Subjects had to be able to consistently choose the same response to at least 19 questions on which senses were triggered by which stimuli, in order to be considered synesthetic. About 1 percent of those completing the study met this requirement and were classified as synesthetes. This may not seem like much, but the most recent estimate had indicated that just 0.024 percent of the population was synesthetic.
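The consistency criterion can be sketched as a comparison of responses across test sessions. The 19-item threshold comes from the text; the function names and data structures below are illustrative assumptions, not the researchers' scoring code.

```python
# A sketch of the consistency criterion: a subject counts as synesthetic
# only if they give the same response to a stimulus in every test session,
# for at least 19 of the stimuli. Data structures are illustrative.

def consistent_count(sessions):
    """sessions: list of dicts mapping each trigger to the response given
    in that session. Counts triggers answered identically every time."""
    first, *rest = sessions
    return sum(
        1 for trigger, response in first.items()
        if all(s.get(trigger) == response for s in rest)
    )

def is_synesthete(sessions, threshold=19):
    return consistent_count(sessions) >= threshold

# Toy check with 2 triggers: "Monday" answered consistently, "A" not.
sessions = [{"Monday": "red", "A": "blue"},
            {"Monday": "red", "A": "green"}]
print(consistent_count(sessions))  # 1
```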

Not only was the prevalence of synesthesia in the sample much higher than previously thought, but the results also failed to support the widely held belief of a gender bias in its occurrence. Prior studies had reported that women were as much as six times more likely to experience synesthesia than men. Simner’s team, however, found that the female-to-male ratio was really in the range of 1.1 to 1, not a statistically significant difference between the sexes. By using more effective sampling methods, the researchers were able to debunk two of the longest-running misperceptions about synesthesia. Look again at the top of the page; are you sure you didn’t see a color when you read the word “Monday”?

Why having those annoying little siblings around constantly was probably good for you
Fri, 29 Jun 2007

This is a guest post by Martina Mustroph, one of Greta’s top student writers for Spring 2007

Rats are often useful models for understanding human behavior. Testing drugs on rats before testing them on humans is particularly attractive because it is relatively free of ethical concerns (relative to drugging humans, at least), and the amount of drug required to achieve an effect is small compared to the amount it would take to see an effect in a human. Because rats’ nervous systems are very similar to the human nervous system, they lend themselves well to drug studies, and they have been used to study drug addiction for years.

Why bother doing drug research on rats in the first place? Learning more about the mechanisms of human drug abuse matters because that knowledge can improve drug rehabilitation programs and help identify risk factors in at-risk individuals. Similarly, protective factors against drug abuse can be identified; once they are, we can implement policies that foster the development of these protective factors so that drug addiction and its associated problems (including its enormous costs to society) never develop in the first place. One suspected protective factor against drug addiction is environmental enrichment, meaning a stimulating environment during development. For a rat, this might mean growing up with littermates in a cage full of novel objects that are switched out every few days.
Building on past drug studies with rats, N.T. Bardo led a team investigating whether environmental conditions affect how rats use amphetamine. Amphetamine is highly addictive, so addictive behavior develops in a very short period of time. Bardo and his colleagues took 21-day-old rats and randomly assigned them to be raised in one of three cage conditions: an enriched condition that contained novel objects and social partners (littermates), a social condition that contained social partners only, or an isolated condition that contained neither objects nor social partners. After the rats had lived in their assigned condition for 45 days, the team removed food from the cages and waited until the rats’ body weights had fallen to 85% of their free-feeding weight. They then installed two levers in each cage. One lever (Lever 1) dropped a sucrose pellet into the cage when pressed; the other lever (Lever 2) was inactive, and nothing happened when it was pressed. At first, the rats were rewarded with a sucrose pellet every time they pressed Lever 1. The requirement was gradually increased until they were rewarded only every 5th time they pressed it, after which the rats were again given free access to food in their cage. The type of system on which the rats were rewarded for pressing Lever 1 is called a “fixed ratio (FR) schedule,” and all it means is that every nth response is reinforced. (So an FR1 schedule rewards every response with a sucrose pellet, an FR2 schedule every 2nd response, an FR3 schedule every 3rd response, and so on.)
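
The fixed-ratio logic is simple enough to express directly (a minimal sketch for illustration, not the lab’s actual control software): on an FR-n schedule, every nth lever press delivers a reward.

```python
class FixedRatioSchedule:
    """Reward every nth response (FR-n). A sketch of the schedule
    logic described in the text, not the study's actual apparatus."""

    def __init__(self, n):
        self.n = n
        self.presses = 0

    def press(self):
        """Register one lever press; return True if it earns a reward."""
        self.presses += 1
        return self.presses % self.n == 0

# On an FR5 schedule, only every 5th press pays off:
fr5 = FixedRatioSchedule(5)
rewards = [fr5.press() for _ in range(10)]
print(rewards.count(True))  # presses 5 and 10 are rewarded -> 2
```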

After the rats had been trained to press the lever, they underwent surgery. Each rat was implanted with an intravenous catheter, and Lever 1 was rigged so that pressing it fed amphetamine directly into the rat’s bloodstream. After recovering from surgery, the rats were allowed to self-administer amphetamine, first on an FR1 schedule for a number of sessions and then on a progressive ratio (PR) schedule. A progressive ratio schedule works like an FR schedule in that every nth response is rewarded, but n does not stay fixed as it does in an FR schedule; it keeps increasing. (So, for example, at first every response is rewarded, then every 2nd response, then every 4th response, then only every 16th response, and so on.)
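
The progressive-ratio idea can be sketched the same way (a hypothetical illustration; the exact progression of requirements used in the study may differ, and here the requirement simply doubles after each earned reward):

```python
class ProgressiveRatioSchedule:
    """Reward when presses since the last reward reach the current
    requirement, then raise the requirement. (Hypothetical sketch:
    here the requirement doubles each time; the study's actual
    progression may have been different.)"""

    def __init__(self, start=1, factor=2):
        self.requirement = start
        self.factor = factor
        self.since_reward = 0

    def press(self):
        """Register one press; return True if it earns a reward."""
        self.since_reward += 1
        if self.since_reward >= self.requirement:
            self.since_reward = 0
            self.requirement *= self.factor  # ratio keeps growing
            return True
        return False

# Requirements run 1, 2, 4, 8, 16, ..., so 31 presses earn 5 rewards.
pr = ProgressiveRatioSchedule()
print(sum(pr.press() for _ in range(31)))  # 5
```

The last requirement an animal completes before it stops responding (the “breakpoint”) is the usual PR measure of how hard it will work for the reward.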

A progressive ratio (PR) schedule is, as has probably occurred to you, less reinforcing than a fixed ratio (FR) schedule, because it takes increasingly more effort to obtain the same reward. This is exactly what makes the PR schedule a better indicator of addictive behavior: a rat that keeps responding on a PR schedule is putting in ever more effort to obtain the reward. The FR schedule was used first in all runs, however, because it is the best way to train rats to use the lever.

Now, remember Lever 2, the inactive lever that did not deliver anything when pressed? The researchers counted how many times each of the three groups of rats pressed Lever 2 to get a baseline measurement of how active each group was. This way, if one group self-administered significantly more or less amphetamine than the other two, the researchers could be sure the difference was not simply due to that group being more or less active in general. In fact, all three groups of rats (environmentally enriched, socially enriched, and isolated) pressed the inactive lever the same number of times at first, and this rate of pressing declined equally steadily in all three groups. Since the only way in which the three groups differed was their environmental placement, the team could also be sure that any effect was due to the rats’ environment during development.

What did they find? For the sucrose reward task, regardless of sex or rearing group, the number of pellets earned decreased as the FR value increased. This makes sense: with a higher FR value, the rats had to work harder for each pellet. The environmentally enriched rats at first earned more sucrose pellets than the isolated rats, and this difference between the two rearing groups was more pronounced in female rats than in male rats, but the discrepancy evened out completely over successive FR sessions.

There were no sex differences in the amount of amphetamine used. There was not even a significant difference between the environmentally enriched, socially enriched, and isolated rats in the number of amphetamine infusions earned when the amphetamine infusion was of a high dose (0.1 mg/kg/infusion) each time.

But when the amphetamine infusion was of a low dose (0.03 mg/kg/infusion) each time, there was a significant difference between the three rearing conditions:

As the study progressed, both the environmentally enriched and the socially enriched rats used less amphetamine than the isolated rats did, though the socially enriched rats used more than the environmentally enriched rats.

At the higher dose of amphetamine, the environmentally enriched and the socially enriched rats pressed Lever 1 more often than they did at the lower dose:

Yet for the isolated condition rats, their rate of response stayed the same at both doses.

The fact that the environmentally enriched and socially enriched rats showed similar patterns of response to amphetamine infusions, and that both groups self-administered less amphetamine than the isolated rats, suggests that environmental and social enrichment during development somehow worked as protective factors against amphetamine addiction. The drug simply was not as enticing to the enriched rats as it was to the isolated rats. The absence of any difference between rearing conditions at the higher dose may suggest that at stronger doses, environmental and social enrichment are not powerful enough to protect against drug abuse.

Presumably, the overall response pattern seen in the rats would also be seen in humans raised in environmentally enriched (read: stimulating) and socially enriched (with peers) environments, as opposed to humans raised in relative seclusion. These results suggest that environmental enrichment protects against drug abuse vulnerability, but because there is still much conflicting evidence about its effects, it is too soon to make definitive statements about exactly how that protection works. Further research is needed to clarify the role environmental enrichment plays. Meanwhile, go thank your younger sibling(s) for being their clingy little selves all those years ago. Chances are you’re still indirectly experiencing the benefits today: they were your environmental enrichment.

In connection with Monday’s post, Other-race faces: Why do they seem different?, I thought readers would be interested in a post from early last year concerning implicit attitudes on race. The link to the original post is above if you would like to see previous comments.

Twelve years ago, Greta and I were awakened by a rattling on the door of our Bronx apartment. It was about three A.M.; our children were asleep in the next room. “What should I do?” Greta whispered to me. She had woken first and was holding the deadbolt on the door locked so the intruder couldn’t get in.

“Call the police,” I whispered, and took hold of the lock. I ventured a peek through our peephole. I could see only the grizzled razor stubble of a man who was clearly shorter than I was. He continued to struggle with the door. He was making progress picking our lock — I had to forcefully resist to keep the lock from turning. As I heard Greta talking with the 911 operator on the phone in the other room, I grew bolder. “Who’s there?” I asked, in as gruff and aggressive a voice as I could manage. As soon as he realized there was someone in the apartment, he was gone.

About 30 seconds later, the police appeared at our door. They had been less than a block away when they received the dispatcher’s call, had already searched the stairwells, and had found no one. We told them our story, and they asked for a description. I told them about the man’s height and the razor stubble. “Did you notice anything else?” the officer asked.

“No,” I responded.

“What about race — was he black?”
When I replied “No,” I could see an expression of perplexed astonishment cross the officer’s face. Was it possible that the burglar had walked right past the police, and they had assumed this couldn’t be the guy, since he was white? I’ll never know the answer, because the police didn’t say anything more to me about race. They never caught the intruder.

Let’s suppose the burglar actually had walked right past the police: could we then say that this police officer was racist? After all, he was probably playing the odds — it’s likely that there were more black criminals than white criminals in our area at that time. He never expressed an explicit racial bias to me: he was simply trying to obtain an accurate description of the perpetrator.

A team of researchers at Harvard University has developed another measure of bias, which has been widely reported in the popular press: Project Implicit. As the name of the project suggests, it seeks to measure “implicit attitudes” — biases that people don’t express overtly. The method has been used to measure all sorts of bias: gender, age, even science versus humanities classes. But not surprisingly, racial biases have caught the lion’s share of the attention: a recent article in U.S. News and World Report, for example, suggests ways to counter implicit racial biases. The libertarian blogger “Winterspeak” takes a dim view of implicit bias research, suggesting that such biases are merely the result of rational decisions based on knowledge such as “blacks are more likely to commit crimes.”

But what about when our “knowledge” about racial differences isn’t true? I had a humbling moment a few months ago when I admitted on Cognitive Daily that I was surprised that African American kids are less likely to do drugs or consume alcohol than white kids. How much real knowledge do most people have about racial differences? Could knowledge really be the sole motivator of implicit bias?

Andrew Scott Baron and Mahzarin Banaji recently conducted a study that offers some tentative answers. They gave their racial implicit bias test to white middle-class kids aged 6 and 10, as well as adults. You can try the test for yourself at Project Implicit, but here’s a quick summary of how it works. First, you’re shown pictures of black faces or white faces; the task is to press a button as quickly as possible when you see each face (E for a black face or I for a white face). Next you’re shown a set of words, some good and some bad (love, joy, friend, hate, vomit, bomb), and again you’re asked to press a designated key for each type. Finally, the tasks are combined: “When you see a black face or a good word, press the E key” and “When you see a white face or a bad word, press the I key.” Then the tasks are reversed, so good words are associated with white faces and bad words are associated with black faces. Reaction times are measured, and when a particular association results in a faster response time, participants are said to have an implicit attitude preferring that association.
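
The scoring logic behind such reaction-time comparisons can be sketched numerically (a deliberately simplified, hypothetical illustration; the published IAT scoring algorithm is more elaborate and standardizes differences by response variability):

```python
def implicit_preference(congruent_rts_ms, incongruent_rts_ms):
    """Crude implicit-preference score: mean reaction time in the
    block pairing black faces with good words, minus mean reaction
    time in the block pairing white faces with good words. Positive
    values mean faster white+good responses, i.e. an implicit
    pro-white association. (Simplified sketch; the real IAT D-score
    also divides by pooled variability.)"""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts_ms) - mean(congruent_rts_ms)

# A participant who averages 120 ms faster in the white+good block:
white_good = [620, 650, 640, 610]   # reaction times in ms
black_good = [740, 780, 760, 720]
print(implicit_preference(white_good, black_good))  # 120.0
```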

In this case, the test was modified for the smallest children so that instead of words appearing on the screen, recorded words were played for them. Here are the results:

For every age group, the association of white faces with good words was stronger than the association of black faces with good words: an implicit bias for white faces over black faces. The bias must have formed before the age of six, and is undiminished in adulthood. To make sure everyone understood the task, a similar test was given to measure preference for insects versus flowers. Everyone except six-year-old boys said they preferred the flowers, but when the preference was measured with the implicit task, even the boys showed an implicit bias for flowers.

But Baron and Banaji didn’t stop with measuring implicit preferences. They also performed an explicit preference task, in which participants were asked overtly whether they preferred a white face or a black face. Here are the results:

Unlike the implicit task, these results do change over time, with each age group showing a significant difference from the other groups, and adults showing an equal preference for black and white faces. Though the implicit biases remain until adulthood, explicit biases appear to have been extinguished.

These data are certainly compatible with the idea that people can claim they are “not racist” even when their actions appear to contradict that claim. But what of Winterspeak’s criticism: “The Implicit Project implicitly assumes that any differentiation between blacks and whites is racist”? That’s a difficult notion to defend. Winterspeak offers no data in support of her/his claim, while Project Implicit can demonstrate that people’s actions differ from their words. There’s no mention of racism at all in Baron and Banaji’s report, or in Banaji’s quotes in the U.S. News article.

Worst of all, when people justify racial discrimination based on “knowledge” of racial differences, they are making two assumptions: that their knowledge of the stereotype is correct, and that the individual in question conforms to the stereotype. In three of the four possible cases (inaccurate stereotype, individual doesn’t conform to the stereotype, or both), their judgment is not only incorrect, but immoral. In the remaining case, where the stereotype happens to hold but the person can’t know for certain that the individual conforms to it, the judgment is merely immoral.

Isn’t it better to accurately know what your implicit biases are, and to try to adjust your behavior accordingly?

Other-race faces: Why do they seem different?
Mon, 25 Jun 2007

This is a guest post by Rivka Ihejirika, one of Greta’s top student writers for Spring 2007

Do you find it harder to recognize the face of someone from a race other than your own? Does it take you longer to recall the face of someone from an unfamiliar race? Some researchers believe that we are born with a predisposition to process faces of those from our own race better than faces from other races. Other researchers believe that the own-race face bias is not innate, but that we develop a preference for the race of those in our immediate environment. People of all ages demonstrate this bias toward faces of their own race. Yair Bar-Haim and his colleagues wanted to find out how much of the own-race face bias is due to nature and how much to nurture.

Bar-Haim and colleagues studied the face processing of infants around 14 weeks of age, drawn from three different racial groups. The first group were Caucasian Israelis with a predominantly Caucasian upbringing. The second were African Ethiopians from a predominantly African environment. The third were African Israelis who were surrounded by a mixture of races in an immigration absorption center. The infants were seated on their mothers’ laps and shown eight face pairs of Africans and Caucasians. The photo above shows an example of the African and Caucasian faces used in the experiment. The experimenters recorded the location and duration of each infant’s gaze on each picture pair. Here are the results:

The longer an infant looks at a face, the stronger the infant’s preference for that face is taken to be. The Caucasian Israeli infants looked longer at the Caucasian faces than the African ones. The African Ethiopian infants looked longer at African faces than at Caucasian faces. If the African Israeli infants had exhibited a preference for African faces, a role for nature in the own-race face bias would have been demonstrated. However, the African Israeli infants showed no preference for either the African or the Caucasian faces. These data show that nurture plays a significant role in race-based face perception.
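
The preferential-looking measure amounts to a simple proportion (a hypothetical sketch with made-up numbers): the fraction of total looking time spent on one face type, where 0.5 indicates no preference.

```python
def looking_preference(own_race_ms, other_race_ms):
    """Proportion of total looking time spent on own-race faces.
    Values near 0.5 mean no preference. (Illustrative sketch only;
    the looking times below are invented, not the study's data.)"""
    total = own_race_ms + other_race_ms
    return own_race_ms / total

# Caucasian Israeli pattern: longer looks at own-race faces
print(looking_preference(6300, 4200))  # 0.6
# African Israeli pattern: roughly equal looking, no preference
print(looking_preference(5000, 5000))  # 0.5
```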

This study shows that our environment greatly influences our perceptions. Even at 3 months of age, infants demonstrate signs of racial preference, but the preference is limited to the race that mainly surrounds them. Increasing cross-racial contact mitigates the effects of the bias. Is the own-race face bias a problem? Perhaps: the bias signals a lack of diversity in one’s surroundings, and the influence of the own-race face phenomenon may carry over into our daily perception and cause some racial prejudice beyond our direct control.