From the Washington University in St. Louis press release via ScienceDaily:

Where a child lives makes a difference in how demographics and other factors influence algebra performance, and policies should take into account local variation, research from Washington University in St. Louis suggests.

The findings of William Tate, PhD, chair of the Department of Education and the Edward Mallinckrodt Distinguished University Professor in Arts & Sciences, and Mark Hogrebe, PhD, an institutional researcher in the department, were published in the Journal of Mathematics Education at Teachers College (Columbia University).

Hogrebe and Tate said some of their research would not have been possible even 10 or 12 years ago, but thanks to advances in technology, they were able to use Geographic Information Systems (GIS) data and computer models to analyze relationships between various educational factors on a regional basis.

In the article, “Place, Poverty and Algebra: A Statewide Comparative Spatial Analysis of Variable Relationships,” Hogrebe and Tate wrote that educational data such as test scores are too often analyzed by comparing schools or districts, even though district lines are often arbitrary. A child living on Main Street is likely not that different from a child a block away in a different district, yet those two students may differ notably from two more 20 miles away in a small rural school, they reasoned.

A more logical approach is to see how locations across the state vary in educational contexts and to study how different ecologies affect academic outcomes, they said.

The article’s big takeaway was that place matters in analyzing relationships between algebra performance and other educational variables.

For example, the researchers studied whether a higher percentage of children in poverty was related to lower algebra scores, and whether higher teacher salaries meant higher algebra scores. They found those relationships held true in some districts but not across the board.

Their article used an example that aptly explains the issue: You wouldn’t consider one statewide weather forecast effective or reliable, so why is it acceptable that state education policies are one-size-fits-all?

Algebra was a logical subject to study, Tate said, because in American schools, it’s often viewed as a gateway course. That is, students who perform well in it are able to progress to higher-level math courses that often are necessary for a host of college courses and career fields, while students who can’t master it are foreclosed from such opportunities. Also, in Missouri at least, students take a statewide assessment exam, providing large amounts of comparable data.

Hogrebe and Tate found that a single, global measurement based on aggregated data doesn’t properly account for important local variations.

“There need to be location-specific solutions,” Tate said.

Policies are unlikely to help students or be cost-effective if they apply the same response statewide, the researchers found.

“The evidence suggests that’s not a good way of doing education policy-making,” Tate said.

The researchers hope their work helps inform education policy and guide lawmakers and others as they determine the best use of scarce resources.

From the Society for Research in Child Development press release via EurekAlert!:

Children from low-income families tend to do worse at school than their better-off peers.

Now a new study of a large ethnically and socioeconomically diverse group of children from across the United States has identified poor planning skills as one reason for the income-achievement gap, which can emerge as early as kindergarten and continue through high school.

The study, by researchers at Cornell University, appears in the journal Child Development.

“Low-income children appear to have more difficulty accomplishing planning tasks efficiently, and this, in turn, partially explains the income-achievement gap,” according to Gary Evans, Elizabeth Lee Vincent Professor of Human Ecology at Cornell University, one of the study’s researchers. “Efforts to enhance the academic performance of low-income children need to consider multiple aspects of their development, including the ability to plan in a goal-oriented manner.”

Researchers used data from the Eunice Kennedy Shriver National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, which looked at almost 1,500 children from 10 geographic sites across the United States.

Planning skills were assessed when the children were in third grade, through the widely used Tower of Hanoi game. The Tower of Hanoi starts with a stack of rings placed on a rod so that the biggest ring is at the bottom and the smallest is on the top. Using two other rods and moving only one ring at a time without ever placing a wider ring on a smaller ring, the children have to recreate the original stack on one of the two spare rods.
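The planning demand of the task comes from having to order moves several steps ahead. A minimal recursive solver makes that structure concrete (the function and names here are ours, not part of the study):

```python
# A minimal recursive Tower of Hanoi solver: every legal solution
# requires planning moves several steps ahead of the current one.
def hanoi(n, source, target, spare, moves=None):
    """Move n rings from source to target, never placing a wider ring
    on a narrower one; returns the list of (from, to) moves."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller rings out of the way
    moves.append((source, target))               # move the widest remaining ring
    hanoi(n - 1, spare, target, source, moves)   # re-stack the smaller rings on top of it
    return moves

print(len(hanoi(3, "A", "C", "B")))  # 7 moves: the minimum is 2**n - 1
```

Even a three-ring stack takes seven moves in a fixed order, and a single premature move forces backtracking, which is why the game is a standard probe of goal-oriented planning.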

The study found that the children’s performance in fifth grade could be explained, in part, by how they did on the third-grade planning task, even when taking IQ into consideration. Using household income along with math and reading scores, the study also found that the lower the income during infancy, the worse the children’s performance on reading and math in fifth grade — replicating the well-known income-achievement gap.

The researchers suggest several reasons why poverty may interfere with the development of good planning skills. Individuals living in low-income homes experience greater chaos in their daily lives, including more moves, school changes, family turmoil, and crowded and noisy environments, and fewer structured routines and rituals. In addition, low-income parents may be less successful at planning because of their own stress levels.

Researchers believe the group of skills called executive function, which includes planning skills, can be strengthened through interventions. Such interventions are being developed and tested for children as young as the preschool years.

From the Journal of Occupational and Environmental Medicine press release via ScienceDaily:

A transformational leadership style—valued for stimulating innovation and worker performance—is also associated with increased well-being among employees, reports a study in the July Journal of Occupational and Environmental Medicine, official publication of the American College of Occupational and Environmental Medicine (ACOEM).

“A transformational leadership style, which conveys a sense of trust and meaningfulness and individually challenges and develops employees, could lead to greater employee well-being,” according to the new research by Christine Jacobs of University of Cologne and colleagues.

Workers at six German information and communication technology companies were surveyed regarding their employer’s leadership style. A transformational leadership score was based on qualities such as leading by example, making employees feel they are contributing to a common goal, providing intellectual stimulation, and giving positive feedback for good performance. Employees also completed a standard test of psychological well-being.

Based on the results, “Employees perceiving a higher degree of transformational leadership are more likely to experience well-being,” the researchers write. The effect of transformational leadership remained significant after accounting for other factors linked to well-being, such as age, education, and job strain.

The findings add to studies from other industries suggesting that a transformational style can favorably affect employee well-being. That’s especially important because company leadership and managers can readily learn communication skills used in transformational leadership, such as recognizing the needs of others and resolving conflicts. “Such training programs can be seen as another essential component of workplace health promotion and prevention efforts and therefore should receive wide support,” Jacobs and coauthors conclude.

From the University of Southern California press release via EurekAlert!:

Of the many negative stereotypes that exist about older adults, the most common is that they are forgetful, senile and prone to so-called “senior moments.”

In fact, while cognitive processes do decline with age, simply reminding older adults about ageist ideas actually exacerbates their memory problems, reveals important new research from the USC Davis School of Gerontology.

The study, forthcoming in the journal Psychological Science, is an extension of the idea of “stereotype threat” — that when people are confronted with negative stereotypes about a group with which they identify, they tend to self-handicap and underperform compared to their potential. In doing so, they inadvertently confirm the negative stereotypes they were worried about in the first place.

The results highlight just how crucial it is for older adults, as well as clinicians, to be aware of how ageist beliefs can affect older adults’ real performance on memory tests.

“Older adults should be careful not to buy into negative stereotypes about aging — attributing every forgetful moment to getting older can actually worsen memory problems,” said Sarah Barber, a postdoctoral researcher at the USC Davis School and lead author of the study.

The study also reveals a way to counteract the problem. “No one had yet examined the intriguing possibility that the mechanisms of stereotype threat vary according to age,” Barber said.

Barber and her co-author Mara Mather, professor of gerontology and psychology at USC, conducted two experiments in which adults from the ages of 59 to 79 completed a memory test. Some participants were first asked to read fake news articles about memory loss in older adults, while others were not. Notably, the researchers structured the test so that half of the participants earned a monetary reward for each word they remembered; the other half lost money for each word they forgot.

In past tests, 70 percent of older adults met diagnostic criteria for dementia when examined under stereotype threat, compared to approximately 14 percent when not assessed under threat.

But the latest research shows that stereotype threat can actually improve older adults’ performance on memory tests, under certain conditions.

For participants who had something to gain, being confronted with age stereotypes meant poorer performance on memory tests. They scored about 20 percent worse than people who were not exposed to the stereotype.

But when the test was framed in terms of preventing losses due to forgetting, the results flipped: participants reminded of the stereotypes about aging and memory loss actually scored better than those who were under no stereotype threat.

“Stereotype threat is generally thought to be a bad thing, and it is well established that it can impair older adults’ memory performance. However, our experiments demonstrate that stereotype threat can actually enhance older adults’ memory if the task involves avoiding losses,” Barber said.

Older adults, it seems, respond to stereotype threat by changing their motivational priorities and focusing more on avoiding mistakes. The study is part of a critical body of work on risk taking and decision making among older adults from the USC Davis School of Gerontology, named for AARP founder Leonard Davis and the leading research center in the world on aging and its biological, psychological, political and economic dimensions.

“Our experiments suggest an easy intervention to eliminate the negative effects of stereotype threat on older adults — clinicians should simply change the test instructions to emphasize the importance of not making mistakes,” Barber said.

Our daily routines can become so ingrained that we perform them automatically, such as taking the same route to work every day.

Some behaviors, such as smoking or biting your fingernails, become so habitual that we can’t stop even if we want to.

Although breaking habits can be hard, MIT neuroscientists have now shown that they can prevent them from taking root in the first place, in rats learning to run a maze to earn a reward. The researchers first demonstrated that activity in two distinct brain regions is necessary in order for habits to crystallize.

Then, they were able to block habits from forming by interfering with activity in one of the brain regions — the infralimbic (IL) cortex, which is located in the prefrontal cortex.

The MIT researchers, led by Institute Professor Ann Graybiel, used a technique called optogenetics, which allowed them to control cells of the IL cortex with light. When the cells were turned off during every maze training run, the rats still learned to run the maze correctly, but when the reward was made to taste bad, they stopped running it, showing that a habit had not formed. Had a habit formed, they would have kept going back anyway.

“It’s usually so difficult to break a habit,” Graybiel says. “It’s also difficult to have a habit not form when you get a reward for what you’re doing. But with this manipulation, it’s absolutely easy. You just turn the light on, and bingo.”

Graybiel, a member of MIT’s McGovern Institute for Brain Research, is the senior author of a paper describing the findings in the June 27 issue of the journal Neuron. Kyle Smith, a former MIT postdoc who is now an assistant professor at Dartmouth College, is the paper’s lead author.

Patterns of habitual behavior

Previous studies of how habits are formed and controlled have implicated both the IL cortex and the striatum, a part of the brain related to addiction and repetitive behavioral problems, as well as to normal functions such as decision-making, planning and response to reward. The motor patterns needed to execute a habitual behavior are believed to be stored in the striatum and its circuits.

Recent studies from Graybiel’s lab have shown that disrupting activity in the IL cortex can block the expression of habits that have already been learned and stored in the striatum. Last year, Smith and Graybiel found that the IL cortex appears to decide which of two previously learned habits will be expressed.

“We have evidence that these two areas are important for habits, but they’re not connected at all, and no one has much of an idea of what the cells are doing as a habit is formed, as the habit is lost, and as a new habit takes over,” Smith says.

To investigate that, Smith recorded activity in cells of the IL cortex as rats learned to run a maze. He found activity patterns very similar to those that appear in the striatum during habit formation. Several years ago, Graybiel found that a distinctive “task-bracketing” pattern develops when habits are formed. This means that the cells are very active when the animal begins its run through the maze, are quiet during the run, and then fire up again when the task is finished.

This kind of pattern “chunks” habits into a large unit that the brain can simply turn on when the habitual behavior is triggered, without having to think about each individual action that goes into the habitual behavior.

The researchers found that this pattern took longer to appear in the IL cortex than in the striatum, and it was also less permanent. Unlike the pattern in the striatum, which remains stored even when a habit is broken, the IL cortex pattern appears and disappears as habits are formed and broken. This was the clue that the IL cortex, not the striatum, was tracking the development of the habit.

Multiple layers of control

The researchers’ ability to optogenetically block the formation of new habits suggests that the IL cortex not only exerts real-time control over habits and compulsions, but is also needed for habits to form in the first place.

“The previous idea was that the habits were stored in the sensorimotor system and this cortical area was just selecting the habit to be expressed. Now we think it’s a more fundamental contribution to habits, that the IL cortex is more actively making this happen,” Smith says.

This arrangement offers multiple layers of control over habitual behavior, which could be advantageous in reining in automatic behavior, Graybiel says. It is also possible that the IL cortex is contributing specific pieces of the habitual behavior, in addition to exerting control over whether it occurs, according to the researchers. They are now trying to determine whether the IL cortex and the striatum are communicating with and influencing each other, or simply acting in parallel.

“A role for the IL cortex in the regulation of habit is not a new idea, but the details of the interaction between it and the striatum that emerge from this analysis are novel and interesting,” says Christopher Pittenger, an assistant professor of psychiatry and psychology at Yale University School of Medicine, who was not part of the research team. “Thinking in the long term, it raises the question of whether targeted manipulations of the IL cortex might be useful for breaking habits — an exciting possibility with potential clinical ramifications.”

The study suggests a new way to look for abnormal activity that might cause disorders of repetitive behavior, Smith says. Now that the researchers have identified the neural signature of a normal habit, they can look for signs of habitual behavior that is learned too quickly or becomes too rigid. Finding such a signature could allow scientists to develop new ways to treat disorders of repetitive behavior by using deep brain stimulation, which uses electronic impulses delivered by a pacemaker to suppress abnormal brain activity.

The research was funded by the National Institutes of Health, the Office of Naval Research, the Stanley H. and Sheila G. Sydney Fund and funding from R. Pourian and Julia Madadi.

From the University of Arizona press release by Alexis Blue via MedicalXpress:

While men tend to match their partners’ emotions during mutual cooperation, women may have the opposite response, according to new research.

Cooperation is essential in any successful romantic relationship, but men and women may experience it emotionally in quite different ways, the University of Arizona research suggests.

Ashley Randall, a post-doctoral research associate in the UA’s John & Doris Norton School of Family and Consumer Sciences and the UA’s department of psychiatry, has been interested for some time in how romantic partners’ emotions become coordinated with one another. For example, if someone comes home from work in a bad mood we know their partner’s mood might plummet as well, but what are the long-term implications of this on their relationship?

“Cooperation – having the ability to work things out with your partner, while achieving mutually beneficial outcomes – is so important in relationships, and I wondered what kind of emotional connectivity comes from cooperating with your partner?” she said.

What she found in her recent study – published in SAGE’s Journal of Social and Personal Relationships and featured in the journal’s podcast series, Relationship Matters – were surprising gender differences.

She and her colleagues found that during high mutual levels of cooperation with a romantic partner, men typically experience an “inphase” response to their significant other’s emotions. That is, if the woman in the relationship is feeling more positive, the man will feel more positive. If she feels less positive, he will feel less positive.

On the contrary, it seems women experience more of an “antiphase” pattern during high mutual cooperation. If her partner is feeling more positive, she will tend to feel less positive, and vice versa.

Take, for example, the following familiar scenario: A woman emerges from a department store fitting room and asks her husband what he thinks of a potential new shirt. He likes it, he says, hoping his time at the mall is nearing an end. So does the woman head straight to the cash register and make the purchase? Probably not. Chances are, her husband’s enthusiasm won’t be enough; she’ll want to try on a few more shirts first.

Social psychology literature on cooperation tells us that women generally tend to cooperate more, while men often try to avoid conflict. Thus, men might be subconsciously syncing their emotions with their partners’ during cooperation in an effort to avoid conflict or reach a speedy resolution, Randall says.

If that’s the case, it’s possible, although Randall’s study didn’t test for it, that women may pick up on the fact that their partner’s agreeability is not entirely authentic. If she suspects he’s not really as positive as he seems, or that he has an ulterior motive, she may become less positive herself in an attempt to get at his real feelings and reach a more mutually satisfying resolution, Randall suggests.

“If you think about a couple that is trying to cooperate with one another, the man might go along and say, ‘Oh sure, honey, this is great, are we almost done?’ whereas the woman might say, ‘I’m so glad that you’re happy, but I just want to talk about this one other thing because I think we’re really getting at a resolution,’” Randall said.

In the end, Randall’s results suggest that women may tend to serve as the emotional regulators during cooperation.

Randall based her findings on an analysis of 44 heterosexual couples who were videotaped having a conversation about their shared lifestyle related to diet and health. The couples were asked to watch the video back and, using a rating dial, provide momentary feedback about how they were feeling emotionally. Researchers analyzed the videos as well as the participants’ responses to them.

Co-authored by the UA’s Jesi Post, Rebecca Reed and Emily Butler, the study has implications for better understanding how romantic partners’ emotions are connected.

“Cooperation is something that’s invaluable and instrumental in a successful relationship but men and women experience it differently,” Randall said. “This research provides another avenue to understanding how partners’ emotions can become linked, but future research is needed on how these emotional patterns may ultimately contribute to the longevity, or demise, of the romantic relationship.”

New virtual imaging technology could be used as part of therapy to help people get over social anxiety, according to new research from the University of East Anglia (UEA).

Research published today investigated for the first time whether people with social anxiety could benefit from seeing themselves interacting in social situations via video capture.

The experiment gave participants the chance to experience social interaction in the safety of a virtual environment by seeing their own life-size image projected into specially scripted real-time video scenes.

UEA researchers, led by Dr Lina Gega from UEA’s Norwich Medical School and MHCO’s Northumberland Talking Therapies, worked with Xenodu Virtual Environments to create more than 100 different social scenarios – such as using public transport, buying a drink at a bar, socialising at a party, shopping, and talking to a stranger in an art gallery.

The researchers tested whether this sort of experience could become a valuable part of Cognitive Behavioural Therapy (CBT) by including an hour-long session midway through a 12-week CBT course.

Dr Gega said: “People with social anxiety are afraid that they will draw attention to themselves and be negatively judged by others in social situations. Many will either avoid public places and social gatherings altogether, or use safety behaviours to cope – such as not making eye contact and being guarded or hyper-vigilant towards others.

“Paradoxically, this sort of behaviour draws attention to people with social anxiety and feeds into their beliefs that they don’t fit in.

“We wanted to see whether practising social situations in a virtual environment could help.”

Paul Strickland from Xenodu, the company behind the virtual environment system, said: “Our system uses video capture to project a user’s life-size image on screen so that they can watch themselves interacting with custom-scripted and digitally edited video clips.

“It isn’t a head-mounted display – which anxious people may find uncomfortable,” he added. “Instead, the user observes from an out-of-body perspective. They can then simultaneously view themselves and interact with the characters of the film.”

Dr Gega’s project focused on six young men recovering from psychosis who also have debilitating social anxiety. The participants engaged with a range of scenarios, some of which were designed to feature rude and hostile people. The virtual environments encouraged participants to practise small talk, maintain eye contact, test beliefs that they wouldn’t know what to say, and resist safety behaviours such as looking at the floor or being hyper-vigilant.

The main benefit of using these virtual environments in therapy was that they helped participants notice and change anxious behaviours in a safe, controlled setting where situations could be rehearsed over and over again. Participants were found to drop safety behaviours and take greater social risks. And while realistic to an extent, the ‘fake’ feeling of staged scenarios in itself proved to be a virtue.

“It helped the participants question their interpretation of social cues,” said Dr Gega. “For example, if they thought that one of the characters was looking at them ‘funny’ they could immediately see that there must be an alternative explanation because the scenarios were artificial.

“Another useful aspect of the system is that it can be tailored to address specific fears in social situations – for example a fear of performance, intimacy, or crowds,” she added.

“Two of the patients said that the system felt ‘weird and surreal’, so the element of having an out-of-body experience is something to study further in future – particularly because psychosis itself is defined by a distorted perception of reality.

“This research explored the feasibility and potential added value of using virtual environments as part of CBT. The next stage would be to carry out a randomised, controlled comparison of CBT with and without the virtual environment system to test whether using the system as a therapy tool leads to greater or quicker symptom improvement.”

Mr Strickland added: “I hope our technology can help make a difference to the lives of people experiencing social anxiety and other specific anxiety conditions for which controlled exposure to feared situations is part of therapy. It is particularly versatile because it doesn’t need technical expertise to set up and use. And the library of scenarios can be built on to capture different types of exposure environments needed in day-to-day clinical practice.”

‘Virtual Environments Using Video Capture for Social Phobia with Psychosis’ is published by the journal Cyberpsychology, Behaviour and Social Networking.

A study from Karolinska Institutet shows that our imagination may affect how we experience the world more than we might think. What we imagine hearing or seeing ‘in our head’ can change our actual perception.

The study, which is published in the scientific journal Current Biology, sheds new light on a classic question in psychology and neuroscience – about how our brains combine information from the different senses.

“We often think about the things we imagine and the things we perceive as being clearly dissociable,” says Christopher Berger, doctoral student at the Department of Neuroscience and lead author of the study. “However, what this study shows is that our imagination of a sound or a shape changes how we perceive the world around us in the same way actually hearing that sound or seeing that shape does. Specifically, we found that what we imagine hearing can change what we actually see, and what we imagine seeing can change what we actually hear.”

The study consists of a series of experiments that make use of illusions in which sensory information from one sense changes or distorts one’s perception of another sense. Ninety-six healthy volunteers participated in total.

In the first experiment, participants experienced the illusion that two passing objects collided rather than passed by one another when they imagined a sound at the moment the two objects met. In the second experiment, the participants’ spatial perception of a sound was biased towards a location where they imagined seeing the brief appearance of a white circle. In the third experiment, the participants’ perception of what a person was saying was changed by their imagination of a particular sound.

Illusion of colliding objects.

According to the scientists, the results of the current study may be useful in understanding the mechanisms by which the brain fails to distinguish between thought and reality in certain psychiatric disorders such as schizophrenia. Another area of use could be research on brain computer interfaces, where paralyzed individuals’ imagination is used to control virtual and artificial devices.

“This is the first set of experiments to definitively establish that the sensory signals generated by one’s imagination are strong enough to change one’s real-world perception of a different sensory modality,” says Professor Henrik Ehrsson, the principal investigator behind the study.

This study was funded by the European Research Council, the Swedish Foundation for Strategic Research, the James S. McDonnell Foundation, the Swedish Research Council, and the Söderberg Foundation.

People can plan strategic movements to several different targets at the same time, even when they see far fewer targets than are actually present, according to a new study published in Psychological Science, a journal of the Association for Psychological Science.

A team of researchers at the Brain and Mind Institute at the University of Western Ontario took advantage of a pictorial illusion — known as the “connectedness illusion” — that causes people to underestimate the number of targets they see.

When people act on these targets, however, they can rapidly plan accurate and strategic reaches that reflect the actual number of targets.

Using sophisticated statistical techniques to analyze participants’ responses to multiple potential targets, the researchers found that participants’ reaches to the targets were unaffected by the presence of the connecting lines.

Thus, the “connectedness illusion” seemed to influence the number of targets they perceived but did not impact their ability to plan actions related to the targets.

These findings indicate that the processes in the brain that plan visually guided actions are distinct from those that allow us to perceive the world.

“The design of the experiments allowed us to separate these two processes, even though they normally unfold at the same time,” explained lead researcher Jennifer Milne, a PhD student at the University of Western Ontario.

“It’s as though we have a semi-autonomous robot in our brain that plans and executes actions on our behalf with only the broadest of instructions from us!”

According to Mel Goodale, professor at the University of Western Ontario and senior author on the paper, these findings “not only reveal just how sophisticated the visuomotor systems in the brain are, but could also have important implications for the design and implementation of robotic systems and efficient human-machine interfaces.”

This work was supported by operating grants from the Natural Sciences and Engineering Research Council of Canada to J. C. Culham (Grant No. 249877 RGPIN) and M. A. Goodale (Grant No. 6313 2007 RGPIN).

You might be falling in love with that new car, but you probably wouldn’t pay as much for it if you could resist the feeling.

Researchers at Duke University who study how the brain values things — a field called neuroeconomics — have found that your feelings about something and the value you put on it are calculated similarly in a specific area of the brain.

The region is a small area right between the eyes at the front of the brain. It’s called the ventromedial prefrontal cortex, or vmPFC for short. Scott Huettel, director of Duke’s Center for Interdisciplinary Decision Science, said scientists studying emotion and neuroeconomics had independently singled out this area of the brain in their research, but neither group recognized that the other’s research was focused on it too.

Now, after a series of experiments in which subjects were asked to modify how they felt about something either positively or negatively, the Duke group is arguing that emotional and economic calculations are more closely related than brain scientists had realized. The study appears July 3 in the Journal of Neuroscience.

Earlier research by other groups had shown the vmPFC participates in calculating the value of rewards and that it is engaged by positive stimuli that aren’t really rewards, like a happy memory or a picture of a happy face. A separate line of studies had shown that this brain region also set values on little things like snacks.

The vmPFC handles value tradeoffs such as ‘Is that product worth parting with my hard-earned money?’ “This says that your emotions would enter into that tradeoff,” Huettel said.

“The neuroscience fits with your intuitive understanding,” said Amy Winecoff, a graduate student in psychology and neuroscience who led the research. “Emotions appear to be relying on the same value system.”

In the Duke study, experimental subjects were first trained to do “reappraisal,” in which they could change their emotional response to a situation. “In reappraisal you reassess the meaning of an emotional stimulus, rather than trying to avoid the emotional stimulus or suppress your reaction to it,” Winecoff said.

While the subjects’ brains were being scanned using functional MRI, they were shown images of evocative scenes and faces. After each image the subjects were told to either let their feelings flow or to practice reappraisal to change their thoughts. Then they were asked to rate how positive or negative they felt.

In the case of “an unregulated positive affect” — letting the good feelings flow — the vmPFC was shown to be working harder, which the researchers say could be used to predict how much value a person is putting on something. But when the subjects dampened their emotional responses to positive images, the vmPFC activation diminished, as if the images were less valuable to the subjects.

“This changes our frame of reference for thinking about these things,” Huettel said. He said advertisers have long been using emotional appeals to get people to value their products, “but they didn’t know why it worked.”

Previous studies had focused only on reappraisal of negative emotions, but this time around the Duke scientists wanted to watch people reappraise both negative and positive responses. “We have kind of a skewed picture because this has only been done on the negative,” Winecoff said.

“It’s not the case that you never want to reappraise a positive emotion,” said Huettel. But when buying a house or a car, it’s a good idea to dampen your infatuation a bit, he added.