The Neurocritic

Sunday, January 28, 2007

Well, after a year of blogging here at The Neurocritic, I thought I might still get away with a snarky, end-of-week, throwaway post about a Science paper whose press coverage was overshadowed by the Great Insula Smokeout.

Hey Neurocritic--I think you miss the point of the Tom et al. study that just appeared in _Science_.

(1) It is actually not so obvious that potential losses would "turn down" responses in the reward centers when people make decisions. Several top researchers had speculated that potential losses would instead be associated with negative emotional responses mediated by activity in, say, the anterior insula or amygdala. (2) It is not so obvious that the abstract representation of value during decision making would involve the same reward circuitry that responds to anticipated or experienced reward. (3) Nor is it obvious that the reward circuitry would be substantially more responsive to potential losses than to equivalent gains--a finding that is difficult to reconcile with rational choice models but supports prospect theory, the leading behavioral model of decision under uncertainty, for which Daniel Kahneman won the 2002 Nobel Memorial Prize in Economic Sciences. (4) The finding that individual differences in the sensitivity of this particular circuitry are very highly correlated with individual differences in risk attitudes is exciting, and it also supports the prospect theory interpretation of risk aversion for mixed (gain/loss) gambles. (5) The ancillary finding of a positive correlation between gain and loss sensitivity is rather surprising. (6) Mapping out the relationship between the dopaminergic circuitry and risk-taking behavior is a first step that may eventually lead to better understanding and treatment of compulsive gambling--for instance, as often exhibited by Parkinson's patients after they begin a regimen of dopamine agonist medication.

It is easy to make fun of a study after reading only a headline. But with all due respect I find your critique naive and superficial.
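[An aside before the reply: CF's third point leans on prospect theory's loss-averse value function. As a quick illustration, here is a minimal sketch using the Tversky & Kahneman (1992) functional form with their often-cited median parameter estimates--purely a textbook toy, not anything fit by Tom et al.]

```python
# Prospect theory value function (Tversky & Kahneman, 1992 form).
# alpha and lam are their median parameter estimates; purely illustrative.
def pt_value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x; lam > 1 encodes loss aversion."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

def accept_mixed_gamble(gain, loss):
    """Should a loss-averse agent accept a 50/50 chance of +gain or -loss?"""
    return 0.5 * pt_value(gain) + 0.5 * pt_value(-loss) > 0

print(accept_mixed_gamble(10, 10))  # False: losses loom larger than gains
print(accept_mixed_gamble(30, 10))  # True: the gain clears the ~2.5x bar
```

With these parameters, the break-even gain for a 50/50 mixed gamble is about 2.5 times the loss, which is one way to see why loss aversion predicts rejecting "fair" gambles.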

Sorry, CF, my reward circuitry was turned way down. I did read the paper, but looming deadlines prevented me from writing a full critique. Apologies for that. I'm usually quite thorough.

A few questions and comments for you, if I may:

(1a) Many other studies have indicated that potential losses are associated with negative emotional responses. The difference here is that in the Tom et al. study, participants never learned the results of their decisions, i.e., there were no consequences during scanning. If there are no immediate consequences, doesn't that change one's response?

(1b) There were 3 runs, and subjects were told that only one gamble from each of these runs would be played; i.e., there were only 3 "real" gambles for the entire experiment (although which ones remained unknown).

(1c) Supplement, page 3: In order to encourage participants to reflect on the subjective attractiveness of each gamble rather than revert to a fixed decision rule (e.g., accept only if gain ≥ 2 × loss), we asked them to indicate one of four responses to each gamble (strongly accept, weakly accept, weakly reject, and strongly reject) using a four-button response box. We also instructed them to respond as quickly as possible within the 3-second trial duration.

People typically aren't rushed into making 256 decisions in 12.8 minutes (a total of 24.7 minutes including the interstimulus intervals). I fail to see how this experimental design relates to making momentous life decisions (see point #7 below regarding press coverage). Or to compulsive gambling behavior, for that matter, since win/loss results aren't withheld until the end of the session in Vegas.
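For what it's worth, the arithmetic behind that pace checks out, using only the numbers given above:

```python
# Back-of-envelope timing for the 256-gamble design, from the figures above.
n_trials = 256
trial_s = 3.0                       # 3-second response window per gamble
task_min = n_trials * trial_s / 60  # = 12.8 minutes of decision time
total_min = 24.7                    # stated total including interstimulus intervals
mean_isi_s = (total_min - task_min) * 60 / n_trials  # inferred average ISI

print(task_min)              # 12.8
print(round(mean_isi_s, 2))  # ~2.79 s of "rest" between decisions, on average
```

So each momentous decision gets three seconds of deliberation and under three seconds of recovery.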

ADDENDUM:

(1d) I forgot to mention that entire models (Holroyd and Coles, 2002) and entire research programs (Schultz, 2006) are based around the notion that the activity of mesencephalic dopamine cells is sensitive to potential gains and losses. Didn't see those particular articles cited in the paper.

Schultz and colleagues have proposed that the dopamine neurons are sensitive to changes in the prediction of the "goodness" of ongoing events: A positive dopamine signal is elicited when an event is better than predicted, and a negative dopamine signal is elicited when an event is worse than predicted. [quote from Holroyd and Coles, 2002]

So the Tom et al. study is truly novel in demonstrating that potential losses "turn down" responses in the reward centers??

[end addendum]

(2) ...not so obvious that the abstract representation of value when making decisions would involve the same reward circuitry that responds to anticipated or experienced reward.

(4) Many of the correlations with individual differences in risk aversion are, indeed, interesting. A question about one of the puzzling relationships:

For increasing gains, we observed a significant correlation with behavioral loss aversion in the sensorimotor cortex and superior frontal cortex (fig. S4).

What is the significance of this result? Why sensorimotor cortex??

(5) Page 4: the ancillary finding of a positive correlation between gain and loss sensitivity is rather surprising. OK, I'll take your word for it.

(6) I'm not a funding agency; you don't have to justify the broader impacts ("future patients may benefit...").

(7) It wasn't only the headlines that prompted my admittedly snarky post, it was some of the quotes in the article, e.g., extrapolations to scenarios such as leaving a bad marriage, taking cocaine, eating chocolate, and looking at a beautiful face.

The study, detailed in the Jan. 26 issue of the journal Science, also revealed that people respond more strongly to potential losses than potential gains. Thinking about the possibility of winning money turns on some of the same areas of the brain that are activated when people take cocaine, eat chocolate or look at a beautiful face, explained study team member Russell Poldrack.

These "reward centers" of the brain that get turned on when we think about winning money get turned off when we think about losing money. [NOTE: No! You don't say!!]

"You turn down the reward areas of the brain, and you turn them down more strongly for losses than you turn them up for gains," Poldrack said.

People typically exhibit greater sensitivity to losses than to equivalent gains when making decisions. We investigated neural correlates of loss aversion while individuals decided whether to accept or reject gambles that offered a 50/50 chance of gaining or losing money. A broad set of areas (including midbrain dopaminergic regions and their targets) showed increasing activity as potential gains increased. Potential losses were represented by decreasing activity in several of these same gain-sensitive areas. Finally, individual differences in behavioral loss aversion were predicted by a measure of neural loss aversion in several regions, including the ventral striatum and prefrontal cortex.

Tuesday, January 23, 2007

The most forceful (and, it seems, the only published) objection to the concept of a "default mode" of processing in the human brain has been articulated by Alexa Morcom and Paul Fletcher at Cambridge. The brain's "default mode" or "resting state" has been studied most intensively by Marcus Raichle and colleagues (e.g., Gusnard & Raichle, 2001):

So, we propose that there exists a physiological baseline of the human brain, which can be observed when subjects are awake and resting with their eyes closed (an exception might exist for some extrastriate visual areas — see above). Substituting simple visual fixation or passive viewing of stimuli has little effect on this physiological baseline, except to increase blood flow in the visual areas...

Morcom and Fletcher see multiple problems with this proposal:

The case for a default mode comprises three related ideas. The first is that the resting state constitutes an absolute baseline, and is therefore a fixed point relative to which all cognitive and physiological states can and should be considered (Gusnard and Raichle, 2001). Second is the notion that the level of neural activity in this resting state is substantial and therefore functionally important, with changes produced by task demands representing just the “tip of an iceberg” (Raichle et al., 2001). Finally, relative to a wide range of tasks the resting state is said to be associated with higher levels of activity in a consistent set of brain regions. This has led to the idea that, at rest, we return to a ‘default mode’, which plays a critical role in the ‘intrinsic’ functioning of the brain (Shulman et al., 1997 and Gusnard and Raichle, 2001). We believe that these three claims, and their synthesis, should be evaluated critically for theoretical and practical reasons. If they are valid, then the resting state is indeed a context in which to study brain processes that are fundamental and important relative to the small flickers of activity produced by task demands. It would follow that cognitively driven fluctuations cannot be interpreted except in the context of the default system.

We suggest that the case for a default mode does not survive this critical evaluation. We first explain and evaluate the claim that ‘rest’ is a baseline state for the human brain, summarizing the support for this, and the ways in which a baseline might or might not be necessary in functional neuroimaging. We conclude that despite the interesting characteristics of rest as baseline in terms of oxygen balance, these are not relevant to studies that seek to understand how neural activity underpins cognitive processing. Secondly, while we accept that a high level of energy expenditure of the brain at ‘rest’ indicates that the resting state is active, we do not agree that this activity has a special status compared with that in any other task, or that the brain's energy budget is informative about the nature of a ‘default mode’. Thirdly, with respect to the idea that patterns of brain activity found at ‘rest’ are consistent, we point out that the evidence for this is inconclusive. Furthermore, we would question what conclusion could be drawn from such a consistency, if it is ultimately demonstrated. Finally, we note that the idea of a default mode is based not only on three separate claims but upon their synthesis. We scrutinize this synthesis with regard to the support that it gives to rest/default mode as a state that is qualitatively different from any other mental and neural state. We conclude that even if there is empirical consistency in the patterns of activity observed at rest, and a subjective appeal to the notion that when we rest we are in a default state because there is no explicit task to perform, these are insufficient grounds for affording the resting state a privileged status in accounts of human behavior. We further suggest that, in most situations, the aims of cognitive neuroscience are best served by the study of specific task manipulations, rather than of ‘rest’.

AND

It seems to us that there are problems with the assumption that the resting state is a baseline to which other states should be referred, a notion entailed by the call for a distinction between a subtraction and a ‘reverse subtraction’. If, as we believe, such a distinction is irrelevant in cognitive neuroimaging studies, the utility of the term ‘baseline’ in this context must also be called into question. In considering these problems, we need to distinguish between two views: one that the resting task constitutes a processing baseline and one that it serves as a physiological baseline that does not relate directly to processing. In so doing, we highlight the fact that the notion of a ‘default mode’ rests in part on a relationship between the physiological and processing aspects of ‘rest’. We conclude that although rest may be, subjectively, a ‘default’ state (in the sense that it is what we are doing when we are not doing anything else), it is of no utility as a processing baseline. We find that the suggested link between the processing taking place at rest and its physiology is one that can have no direct relevance for neuroimaging.

Most important for the current argument, which attributes the phenomenon of daydreaming or "mind-wandering" to activity in the default mode network, is an objection on the grounds of functional significance. What happens (cognitively speaking) during "rest"? How do you manipulate it and quantify it? Is introspection a valid measure? Certain manipulations can produce global increases and decreases in the network, but is this enough? Shouldn't the component areas be isolated? Why not try to separate activity in disparate regions of the network? Surely superior medial frontal regions subserve a different function than posterior lateral parieto-occipital regions.

Gusnard and Raichle (2001) emphasize the importance of studying the ‘default mode’. It is therefore particularly notable that in the last section of this article, they consider its possible functions by citing studies in which relative activity increases have been demonstrated in the ‘TID’ [task-independent decreases] regions they highlight, in response to cognitive task manipulations. In the case of the precuneus and posterior cingulate, “associated with the highest resting metabolic rates in the human cerebral cortex”, it is pointed out that these regions show specific increases in activity during visuospatial and emotional processing tasks (Vogt et al., 1992 and Maddock, 1999) and thus may play a role in ‘monitoring’ of the environment and in assessing the emotional significance of events (see also Raichle et al., 2001). We believe this to be a telling inference; they are suggesting that although the precuneus/posterior cingulate may indeed frequently be active at ‘rest’, an understanding of the processing such activity subserves depends on the employment of the right tasks and the delineation of the circumstances of activity changes. Comparable observations about posterior lateral cortices, and dorsal and ventral medial PFC are also made (for a similar line of reasoning see Beckmann et al., 2005). It is clear from these observations that the cognitive nature of rest is at present almost entirely a matter of speculation. More importantly, once again, the message must surely be that the resting task seems cognitively interesting, so we should unpack it by devising appropriate tasks.

That concludes our three-part series on the concept of a "default mode" for the brain, and whether it ushers in an exciting new era in scientific studies of introspection, adding the technological imprimatur of neuroimaging to that musty old Jamesian field, or whether we should relegate this notion to the dustbin of neuroscience history.

There's an extensive body of literature on what is called daydreaming or mind-wandering. Raichle and colleagues have argued for a "default mode" or "resting state" of brain function that engages a certain network of brain regions (posterior cingulate and precuneus and medial prefrontal cortex) during "rest." These regions become DEactivated when people are engaged in the typical types of cognitive tasks they're asked to do in a scanner. So it's really only a "resting state" when compared to doing, say, the Stroop task. When asked to rest and stare at a plus sign, you may engage in idle daydreaming or think about what you'll have for dinner or remember your hot date from last night or silently sing.

[You can read more about one of these "default mode" regions, the precuneus, at the links below]

So what's up in the new experiment? The idea was to try to manipulate the degree of daydreaming during cognitive tasks by training subjects to become proficient at them. A verbal working memory task involved 4 four-letter sequences (e.g., ZVRT) that had to be recalled either forwards or backwards, while a visuospatial task involved 4 finger-tapping sequences. After 3 days of training with the sequences, and a day of "thought-sampling" (described below), scanning was conducted on day 5. On day 4, participants again practiced the learned sequences but were interrupted periodically and asked

to indicate whether they were having an "irrelevant thought." Consistent with previous studies of this nature, the term "irrelevant thought" was defined to participants as "thoughts that do not facilitate performance and are not immediate reactions to perceptual information gleaned over the course of a trial."

Subjects were more likely to report having irrelevant thoughts during learned sequences compared to novel sequences introduced into the experiment. The entire experiment hinges on accurate introspective reports of daydreaming (stimulus-independent thought, SIT) on the day before the scanning session, since it wasn't assessed during scanning:

On day 4, the proportion of sampled thoughts participants classified as SIT varied by block type (baseline, practiced, or novel)... Participants reported a greater proportion of SIT during the baseline blocks (mean=0.93; SD=0.16) than during both practiced blocks (mean=0.32, SD=0.20), t(17)=9.22, p < 0.01, and novel blocks (mean=0.22, SD=0.18), t(17)=10.96, p < 0.01. Participants reported a significantly greater proportion of SIT during the practiced blocks than during the novel blocks, t(17) = 2.11, p < 0.05...
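Since everything hinges on those day-4 self-reports, here is a sketch of the comparison being run. The per-participant values below are fabricated for illustration (n = 18, matching the reported df = 17; the means and SDs are the paper's, the individual data points are invented); only the analysis logic mirrors the study.

```python
import math
import random

random.seed(0)
n = 18  # t(17) implies 18 participants

# Fabricated per-participant SIT proportions with the reported M and SD,
# clipped to the valid [0, 1] range for a proportion:
practiced = [min(max(random.gauss(0.32, 0.20), 0.0), 1.0) for _ in range(n)]
novel = [min(max(random.gauss(0.22, 0.18), 0.0), 1.0) for _ in range(n)]

# Paired t-test by hand: t = mean(d) / (sd(d) / sqrt(n)), df = n - 1,
# because the same participants contribute to both block types.
diffs = [a - b for a, b in zip(practiced, novel)]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))
print(f"t({n - 1}) = {t:.2f}")
```

The paired design is why a mean difference of only 0.10 in reported SIT can reach significance with 18 subjects.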

Scanning took place on day 5, when learned sequences, new sequences, and a baseline condition (no task) were interleaved. Two weeks after the study was over, participants were sent a questionnaire to assess daydream frequency (e.g., "On a long bus, train, or airplane ride I lose myself in thought"). Scores on the daydream scale were correlated with activations in the default network during practiced stimulus sequences (see illustration above), when presumably, people's minds would wander.

How were the data analyzed? The default network for each subject was determined by comparing baseline vs. working memory task conditions.

This comparison revealed significantly greater recruitment at rest in a distributed network of regions that included aspects of the posterior cingulate and the precuneus [Brodmann areas (BAs) 23 and 31], the posterior lateral cortices (BAs 40 and 39), the insular cortices, the cingulate (BA 24), and aspects of both ventral and dorsal medial prefrontal cortex (mPFC) [BAs 6, premotor and supplementary motor cortex; 8, including frontal eye field; 9, dorsolateral prefrontal cortex; and 10, frontopolar area (most rostral part of superior and middle frontal gyri)] (8, 9).

. . .

The resulting default network contrast was subsequently converted to a binary image and used as an ‘inclusive’ mask in subsequent analyses (at a more lenient threshold of p < .05, k = 10). In effect, this made it possible to identify differences in cortical activity during ‘practiced’ blocks relative to ‘novel’ blocks that occurred within the default network.

[However, once the default network was determined, the practiced vs. novel contrast did use a stricter threshold than the lenient p < .05 applied to define the mask itself!!]

So what were the results? Not surprisingly, regions of the default network were more "active" during practiced than novel blocks. The word "active" is in quotes, because really, the finding is that certain default mode regions (e.g., the insula, the posterior cingulate) showed less DEactivation (or no change at all) for practiced than for novel sequences (see figure below). Meaning that daydreams were associated with no brain activity, or a tiny deactivation, in some cases. Hmm.

Despite evidence pointing to a ubiquitous tendency of human minds to wander, little is known about the neural operations that support this core component of human cognition. Using both thought sampling and brain imaging, the current investigation demonstrated that mind-wandering is associated with activity in a default network of cortical regions that are active when the brain is "at rest." In addition, individuals' reports of the tendency of their minds to wander were correlated with activity in this network.

The adult human brain represents about 2% of the body weight, yet accounts for about 20% of the body's total energy consumption, 10 times that predicted by its weight alone. What fraction of this energy is directly related to brain function? Depending on the approach used, it is estimated that 60 to 80% of the energy budget of the brain supports communication among neurons and their supporting cells (2). The additional energy burden associated with momentary demands of the environment may be as little as 0.5 to 1.0% of the total energy budget (2). This cost-based analysis implies that intrinsic activity may be far more significant than evoked activity in terms of overall brain function.

Consideration of brain energy may thus provide new insights into questions that have long puzzled neuroscientists. For example, researchers have sought to explain the relative disproportion of connections (i.e., synapses) among neurons that appear to perform functions intrinsically within the cerebral cortex. Take the visual cortex, whose primary function is to respond to external input to the retina. Less than 10% of all synapses carry incoming information from the external world (3)--a surprisingly small number. From a brain energy perspective, however, the cortex may simply be more involved in intrinsic activities.

What is this intrinsic activity? One possibility is that it simply represents unconstrained, spontaneous cognition--our daydreams or, more technically, stimulus-independent thoughts. But it is highly unlikely to account for more than that elicited by responding to controlled stimuli, which accounts for a very small fraction of total brain activity.
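The arithmetic driving that argument, taking the figures in the excerpt above at face value:

```python
# Raichle's energy-budget numbers, taken at face value from the excerpt above.
brain_weight_frac = 0.02   # brain is ~2% of body weight...
brain_energy_frac = 0.20   # ...but consumes ~20% of the body's energy
print(round(brain_energy_frac / brain_weight_frac))  # 10x weight's prediction

signaling_frac = 0.60      # lower bound: 60-80% supports neuronal signaling
evoked_frac = 0.01         # upper bound: task-evoked demand is 0.5-1.0%
print(round(signaling_frac / evoked_frac))  # intrinsic activity >= ~60x evoked
```

By this accounting, the task-evoked flickers that neuroimaging studies measure are a rounding error on the brain's total energy bill.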


Morcom AM, Fletcher PC. (2006). Does the brain have a baseline? Why we should be resisting a rest. NeuroImage Oct 16; [Epub ahead of print]

In the last few years, the notion that the brain has a default or intrinsic mode of functioning has received increasing attention. The idea derives from observations that a consistent network of brain regions shows high levels of activity when no explicit task is performed and participants are asked simply to rest. The importance of this putative "default mode" is asserted on the basis of the substantial energy demand associated with such a resting state and of the suggestion that rest entails a finely tuned balance between metabolic demand and regionally regulated blood supply. These observations, together with the fact that the default network is more active at rest than it is in a range of explicit tasks, have led some to suggest that it reflects an absolute baseline, one that must be understood and used if we are to develop a comprehensive picture of brain functioning. Here, we examine the assumptions that are generally made in accepting the importance of the "default mode". We question the value, and indeed the interpretability, of the study of the resting state and suggest that observations made under resting conditions have no privileged status as a fundamental metric of brain functioning. In doing so, we challenge the utility of studies of the resting state in a number of important domains of research.

Monday, January 15, 2007

...it appears that brain imaging and psychoanalysis are coming together somewhere in the cognitive neuroscience world, as Dr. Deborah Serani mentions an imaging study of transference in a post titled "Map of the Mind."

Core psychoanalytic constructs may be impossible to study directly using neuroscience and imaging methodologies. [OK, let's all go home now.] Nevertheless, experimental paradigms have been developed and are being applied that are at least relevant to understanding the neural bases of certain core theoretical constructs within psychoanalysis. These paradigms have demonstrated the likely contributions of: (1) the nucleus accumbens and related limbic circuitry in assigning valence within the pleasure/unpleasure continuum of affective experience; (2) the reticular formation, thalamus, amygdala, and cortex within arousal circuits in assigning personal salience to those affective experiences; (3) frontostriatal systems in subserving top-down processing in the CNS, which in turn contributes to numerous important psychological functions, including the control of drives and the construction of experience according to preestablished conceptual schemas—processes that likely underlie cognitive distortions, projection, and transference phenomena; and (4) multiple memory systems, particularly the procedural learning systems based within the dorsal striatum and declarative learning systems in the mesial temporal lobe, that likely contribute to memories within the domain of the descriptive unconscious, and the interactions across affective and cognitive memory systems, that might contribute to memory formations within the dynamic unconscious.

Mostly Brain care is useful and practical advise on looking after za noggin. It is not a good idea valking under large heavy objects being raised by cranes. Running around carrying hard pointed objects is not either recommended unless we are desiring some sort of amateur lobotomy. Helmets iss good, but zey are not too cool eh? Especially on za hot day. Vat about zose cellular telephones? Vell zey work on za microwaves no? But such tiny amounts iss not a problem says industry. Of course zose who are confident zat zese devises are harmless are also zose who vant to sell zem to you. Ziss is a bit of a conflict. Let me illustrate ziss situation for you.

Tom was not crazy. His impression that his missing arm was still there is a classic example of a phantom limb--an arm or a leg that lingers in the minds of patients long after it has been lost in an accident or removed by a surgeon. Some wake up from anesthesia and are in shock when told that their arm had to be sacrificed, because they still vividly feel its presence. Only when they look under the sheets do they come to the shocking realization that the limb is really gone. Moreover, some of these patients experience excruciating pain in the phantom arm, hand, or fingers, so much so that they contemplate suicide. The pain is not only unrelenting, it's also untreatable; no one has the foggiest idea of how it arises or how to deal with it.

There may be many mechanisms underlying phantom limb pain. Damage to nerve endings is often important: subsequent erroneous regrowth can lead to abnormal and painful discharge of neurons in the stump, and may change the way that nerves from the amputated limb connect to neurons within the spinal cord. There is also evidence for altered nervous activity within the brain as a result of the loss of sensory input from the amputated limb.

And she lay on her back
She made sure she was hid
She was mute and staring
Not feeling the thing
That she did

--Suzanne Vega, ibid

What is being done to help those suffering with such excruciating pain? The latest treatment, developed by researchers at the University of Manchester, involves a virtual reality world that lets amputees see and move a 3D phantom limb. Jonathan Cole describes his own work using virtual reality [and explains why these approaches are superior to earlier visual tricks that fool people into thinking they're moving their amputated limb, such as the mirror box]. In essence,

attempts to link the visual and motor systems might be helping patients recreate a coherent body image, and so reduce pain as a result of reduced and disordered input...

...in experiments still being developed, we are constructing an arm in virtual reality which subjects with phantom limb pain will move themselves using motion capture techniques. Movement of their stump will be captured by a movement-tracking device, and used to project the movement of the reconstituted limb in virtual reality. We anticipate that this will lead to a sense of re-embodiment in the virtual arm and hence to a reduction of the pain.

These new approaches are all based on a shift in emphasis in phantom limb pain away from the site of damage – the stump – to the centre of pain processing: the brain. It appears that disordered inputs from the limb's sensory systems, combined with disrupted motor signal back to the limb, generate a mismatch between the brain's built-in map of the physical body and what is actually perceived. For some reason, this mismatch results in pain.

Returning to the title of this post, MEN IN A WAR, what is being said and done by the President of the United States?

Our past efforts to secure Baghdad failed for two principal reasons: There were not enough Iraqi and American troops to secure neighborhoods that had been cleared of terrorists and insurgents. And there were too many restrictions on the troops we did have.

. . .

So America will change our strategy to help the Iraqis carry out their campaign to put down sectarian violence and bring security to the people of Baghdad. This will require increasing American force levels. So I have committed more than 20,000 additional American troops to Iraq.

Wednesday, January 10, 2007

We now return to The Neuroshopping Channel!! Before the last commercial break, Brenda had placed a $7 bid on a box of Godiva chocolates. How will her brain respond when she finds out the consequences? Let's take a look.

Back to you, Bob Barker!

Yes, it's true, an fMRI study on consumer decisions was published in Neuron this month. A shopping task (with the witty and original acronym of S.H.O.P. for "Save Holdings Or Purchase") was performed by subjects in the scanner. It consisted of

a series of trials, identical in temporal structure, in which subjects could purchase products. Subjects saw a labeled product (4 s), saw the product’s price (4 s), and then chose either to purchase the product or not (by selecting either "yes" or "no" presented randomly on the right or left side of the screen; 4 s), before fixating on a crosshair (2 s) prior to the onset of the next trial.

from Knutson et al. (2007)

What were the results? Let's step back a bit. Why was this experiment done in the first place, other than to get written up in the newspaper (and to get a bunch of hits from a Google news search)?

Neuroeconomic methods offer the hope of separating and characterizing distinct components of the purchase decision process in individual consumers.

OK, so the neurogoal was to look at activity in the brain's "reward center" aka nucleus accumbens, which receives input from dopaminergic neurons in the ventral tegmental area. This mesolimbic dopamine pathway has been linked to reward, pleasure, and addiction. Other regions of interest included the medial prefrontal cortex (called mesial in the article, but that word annoys me so I'll use the synonym medial), which the authors linked to gain prediction errors, and the insula (now famous because of its spindle neurons), which was linked to loss prediction. These regions were defined in economic terms in the Introduction, which also discussed previous studies of product preferences. Thankfully, I had not heard of the study in which

men who view pictures of preferred versus nonpreferred brands of beer show increased MPFC activation, and women who view pictures of preferred versus nonpreferred brands of coffee also show increased MPFC activation (Deppe et al., 2005).

[Don't women drink beer? Don't men drink coffee?]

Since we already knew what to expect from the Introduction, the figure below comes as no surprise. Preference was correlated with activation in the NAcc, price differential (i.e., the difference between what the subject was willing to pay and the displayed price of the product) was correlated with activation in MPFC, and purchasing was correlated with deactivation of the bilateral insula.

from Knutson et al. (2007)

There were a bunch of other brain regions listed in 3 tables, but the functional significance of these activations wasn't discussed. The paper's main selling point (so to speak) was that brain activation in all three regions significantly predicted purchasing (determined by logistic regression).
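To make that prediction logic concrete, here is a toy version of a logistic purchase model. The coefficients are invented to reflect only the reported directions of effect (NAcc and MPFC activation favoring purchase, insula activation favoring a pass); nothing here comes from Knutson et al.'s actual fitted values.

```python
import math

# Toy logistic model of the purchase decision. Coefficients are made up to
# illustrate the reported signs of effect, not the paper's estimates.
def purchase_probability(nacc, mpfc, insula,
                         b0=-0.5, b_nacc=1.2, b_mpfc=0.8, b_insula=-1.0):
    """P(purchase) from hypothetical region activations via a logistic link."""
    logit = b0 + b_nacc * nacc + b_mpfc * mpfc + b_insula * insula
    return 1 / (1 + math.exp(-logit))

# Strong NAcc/MPFC response, quiet insula -> likely purchase:
print(round(purchase_probability(1.0, 0.5, -0.5), 2))  # 0.83
# Strong insula response (an excessive price) -> likely pass:
print(round(purchase_probability(0.2, -0.5, 1.0), 2))  # 0.16
```

The paper's claim is the stronger one that such activations predict purchasing above and beyond self-reported preference and price, but the basic machinery is a logistic regression like this.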

Microeconomic theory maintains that purchases are driven by a combination of consumer preference and price. Using event-related fMRI, we investigated how people weigh these factors to make purchasing decisions. Consistent with neuroimaging evidence suggesting that distinct circuits anticipate gain and loss, product preference activated the nucleus accumbens (NAcc), while excessive prices activated the insula and deactivated the mesial prefrontal cortex (MPFC) prior to the purchase decision. Activity from each of these regions independently predicted immediately subsequent purchases above and beyond self-report variables. These findings suggest that activation of distinct neural circuits related to anticipatory affect precedes and supports consumers' purchasing decisions.

Adding to the general glee surrounding the publication of this paper, the Neuron Preview article is entitled "Shopping Centers in the Brain." The author, Alain Dagher, does mention some reasons for caution in interpreting the results:

So, are there shopping centers in the brain? One must be careful in interpreting fMRI data from individual experiments. For example, although the NAcc was activated by product preference in this study, it does not necessarily follow that it encodes this value. Other fMRI studies have demonstrated a dependence of NAcc activation on novelty, unpredictability, salience (Zink et al., 2003), or a change in contingency (Cools et al., 2002), independently of reward or preference.

That's all for now. There's a 36-page supplement to the 10-page article. I really couldn't make it through all the analyses that ruled out alternate explanations of the data, such as product familiarity and price. However, a portion of the shopping list is presented below.

A research team led by Stanford University neuroscientist Brian Knutson has identified the parts of the human brain that respond when presented with a product and those that then act as we decide on whether or not to purchase it. The group reports its findings in this week's issue of Neuron.

. . .

Twenty-six subjects were scanned while they took part in a task called SHOP, short for "Save Holdings Or Purchase." A product, such as a box of Godiva chocolates or a DVD of the popular TV show The Simpsons, would be displayed on a screen in front of them. After four seconds, a price would be shown below the item. Four seconds later, a box would appear on each side of the screen--one labeled "YES," the other labeled "NO"--at which point the participant decided whether to buy or to pass on the product.

. . .

Researchers discovered that when the product first flashed on the screen it activated the nucleus accumbens, a section near the middle of the brain that has been implicated in the brain's reward center, effectively appraising the item. When the price appeared, the scientists noticed activity in the mesial prefrontal cortex, a region of the brain known for higher executive functions. Its activity seemed to vary according to the difference between what someone would pay for an item and its actual cost, as if in error adjustment. Finally, the response of the insula (a lateral section of the brain's cortex known to activate during responses to negative stimuli) depended on the purchasing decision--activity there increased when a participant nixed a purchase. "What we're looking at is not so much the brain's reaction to products and prices as a person's subjective reaction to the products and prices," Knutson says. "Is the product preferable? And is the price too much?"

[NOTE: and below we have the requisite quote from neuroeconomist-for-hire, Colin Camerer]

...the findings of the paper illustrate what behavioral economists label "transaction utility"--"the special pleasure or pain we get from knowing we got a good deal or got ripped off"--because of the response of the insula, the region associated with negative emotions like disgust. "This shows there is an automatic 'is it worth it?' process," he says, "that has a rapid emotional reaction if the answer is 'no.'"

[NOTE: ...and threatens a lot of people's livelihoods. Thanks a lot!]

The failure of Congress to pass new budgets for the current fiscal year has produced a crisis in science financing that threatens to close major facilities, delay new projects and leave thousands of government scientists out of work, federal and private officials say.

“The consequences for American science will be disastrous,” said Michael S. Lubell, a senior official of the American Physical Society, the world’s largest group of physicists. “The message to young scientists and industry leaders, alike, will be, ‘Look outside the U.S. if you want to succeed.’ ”

. . .

Congress and the Bush administration could restore much of the science financing in the 2008 budget. Scientists say it would help enormously, but add that senior staff members by that point may have already abandoned major projects for other jobs that were more stable.

. . .

The National Science Foundation, which supports basic research at universities, had expected a $400 million increase over the $5.7 billion budget it received in 2006. Now, the freeze is prompting program cuts, delays and slowdowns.

Let's divert some of the Pentagon's budget back to NSF, NIH, etc. The Pentagon is the worst-managed federal agency, according to this report:

Iraq isn't the only pressing issue Robert Gates will face when he becomes the 22nd U.S. defense secretary on Monday.

High on his list of priorities will likely be the enormous task of cleaning up the Pentagon's tangled finances, which outside auditors lambaste as so chaotic that no one knows how much money is being spent on defense at any given time.

Great! Just think of how many students and post-docs could be funded from the waste and fraud at the Pentagon... The article continues:

Nowhere is the Pentagon's inability to control costs more glaring than in the surging costs of new weapons projects. The Government Accountability Office, an arm of Congress and headed by Walker, recently concluded that the total cost of all major U.S. military weapons projects under development has doubled in five years to $1.4 trillion.

Financial problems like the Pentagon's "would put any civilian company out of business," said Kwai Chan, a former GAO auditor, assistant inspector general at the Environmental Protection Agency and author of a report entitled "Financial Management in the Department of Defense: No One is Accountable."

The Office of Management and Budget, the GAO and the Pentagon accountants "all cannot tell you and agree on how much the Pentagon is spending at any given time," said Chan.

Thursday, January 04, 2007

UCSF scientists have identified a cell population that is a primary target of the degenerative brain disease known as frontotemporal dementia, which is as common as Alzheimer's disease in patients who develop dementia before age 65.

Because the cells arose only recently in evolutionary history - in a common ancestor of great apes and humans - and are particularly abundant in humans, the finding supports the concept that evolution has rendered the human brain vulnerable to disease, including frontotemporal dementia, and, possibly, disorders such as autism and schizophrenia, the researchers say.

In addition, because the disease erodes aspects of social behavior and emotions – self awareness, moral reasoning and empathy - that are highly developed in humans, the finding suggests that the cells may play a role in what makes humans "human," they say.

. . .

The cells received only limited attention in the ensuing years, but in the meantime scientists determined that the brain regions in which von Economo neurons arise - the anterior cingulate and frontoinsular cortex -- are key targets of frontotemporal dementia. And in 1999, a team of U.S. scientists made the surprising discovery that, among primates, von Economo neurons were seen only in great apes and humans.

The original paper, published in the Annals of Neurology (Seeley et al., 2006), used meticulous neuroanatomical quantification techniques to examine the neurons in layer V of the cerebral cortex from post-mortem brains with frontotemporal dementia, Alzheimer's disease, and neither disease (i.e., controls). The left pregenual anterior cingulate cortex (ACC) was the only region sampled (see figure). Due to tissue availability constraints, there were no samples from the right ACC, or from the frontoinsular cortex of either hemisphere.

OBJECTIVE: Frontotemporal dementia (FTD) is a neurodegenerative disease that erodes uniquely human aspects of social behavior and emotion. The illness features a characteristic pattern of early injury to anterior cingulate and frontoinsular cortex. These regions, though often considered ancient in phylogeny, are the exclusive homes to the von Economo neuron (VEN), a large bipolar projection neuron found only in great apes and humans. Despite progress toward understanding the genetic and molecular bases of FTD, no class of selectively vulnerable neurons has been identified. METHODS: Using unbiased stereology, we quantified anterior cingulate VENs and neighboring Layer 5 neurons in FTD (n = 7), Alzheimer's disease (n = 5), and age-matched nonneurological control subjects (n = 7). Neuronal morphology and immunohistochemical staining patterns provided further information about VEN susceptibility. RESULTS: FTD was associated with early, severe, and selective VEN losses, including a 74% reduction in VENs per section compared with control subjects. VEN dropout was not attributable to general neuronal loss and was seen across FTD pathological subtypes. Surviving VENs were often dysmorphic, with pathological tau protein accumulation in Pick's disease. In contrast, patients with Alzheimer's disease showed normal VEN counts and morphology despite extensive local neurofibrillary pathology. INTERPRETATION: VEN loss links FTD to its signature regional pattern. The findings suggest a new framework for understanding how evolution may have rendered the human brain vulnerable to specific forms of degenerative illness.

Oh, and here's a pretty important fact about von Economo neurons:

VENs are thought to make up 1 to 2% of the Layer 5 neurons in ACC (Nimchinsky et al., 1995).

So can 1-2% of cells in one layer (out of six total) of the cortex, with locations in only two circumscribed brain regions, mediate all of social behavior and emotions in humans (and great apes and humpback whales)? Sounds kind of preposterous when you put it that way.

Seeley and his colleagues conclude that VENs may play a key role in making humans the social creatures that we are, but that they also expose us to a higher risk of degenerative neural diseases. Lary Walker, a neuroscientist at Emory University in Atlanta, Georgia, says that the authors make a "reasonably compelling case that the VENs are selectively vulnerable in FTD". Nevertheless, Walker cautions against ascribing complex behaviors to the action of specific cells or regions in the brain.

About Me

Born in West Virginia in 1980, The Neurocritic embarked upon a roadtrip across America at the age of thirteen with his mother. She abandoned him when they reached San Francisco and The Neurocritic descended into a spiral of drug abuse and prostitution. At fifteen, The Neurocritic's psychiatrist encouraged him to start writing as a form of therapy.