In the last post, I celebrated Eight Years of Neurocriticism but wistfully noted that this blog's popularity peaked in 2012. The traffic last year showed a decline to 2009-2010 levels. Why did this happen? And does it matter? No, it does not, but it gives me the opportunity to comment on the state of a specialized little corner of science blogging. The sort of piece where people say things like “blogging as a chance to exercise our voices doesn’t seem to be going anywhere” and “the blog is dead.”

Except not that.

@practiCalfMRI politely suggested it's the quality of visitor that counts. In 2013 the Average Time on Site for my homepage was indeed up 25%, but I could have been inadvertently cherry-picking the data...

Commenters on the post anticipated some of my thoughts. Perhaps it was related to the demise of Google Reader, said one. A drop did occur when the service stopped in July 2013, but traffic started trending downward in April-May 2013. So I don't think this can explain it. Instead, the format may have been a victim of its own success and run its course. As another commenter aptly put it:

What happened? Well, what always happens: with time, people get bored. Of anything. From marriage to cereal bar flavor. When you started Neurocritic, it was new, and people were sick of all the neurocrap published out there. Then, the neurocrap people and others, started realizing that talking crap about neuro stuff got a lot of hits! And so everybody started doing it, from Voodoo correlations to Retraction Watch. It was the fashionable thing to do. That's when it got boring.

As I've said before, this general trend has been useful in pointing out flawed studies, overblown conclusions, and overly hyped press releases. But some working neuroscientists thought the naysaying had gotten a little out of hand, because expert critiques are easily misinterpreted. A little neuronuance is needed here, the middle ground that acknowledges limitations yet avoids global condemnations. Around this time, I initiated my own little backlash against the anti-neuro backlash by starting a new blog, The Neurocomplimenter.

Daniel Engber went out on a limb and proposed that “the public turned its back on neuro-hype” long ago (in 2008) and that “2008 may also have been the high point for critical neuroscience blogging.” But I place the peak later than that, in 2012. And popular new blogs like Neurobollocks started even after then, while Neuroskeptic shows no signs of slowing down. There's also plenty of blogging to be done that doesn't involve neurocriticism.

Maybe I'm just getting boring (or bored). So perhaps it's time to use a new platform like Tumblr to post animated gifs of brains from cheesy horror movies — OMG Neurocritic!

Instead of blogging, people are posting to Tumblr, tweeting, pinning things to their board, posting to Reddit, Snapchatting, updating Facebook statuses, Instagramming, and publishing on Medium. ... Blogs are for 40-somethings with kids.

We all start to forget things, have word finding problems, and generally slow down cognitively once we get older, right? Wrong, says a recent paper by Ramscar et al. (2014), The Myth of Cognitive Decline: Non-Linear Dynamics of Lifelong Learning [free PDF].

Well, the real answer is more like, “it’s complicated,” as the first author explained in a blog post on the paper. A giant in the field of cognitive aging quickly retorted, oh no it's...

...“Clever-Silly” comes irresistibly to mind, but this must be inadvertent fall-out from an elderly brain overstuffed by failure to assimilate the vast literature on cognitive aging.

The Rise of Academic Blogging

In the last post, I noted the potential Decline of Neurocriticism. At the same time, more and more people have started their own neuroscience and psychology blogs (which magnifies the channel factor, as Roger Dooley noted). And it's not only the SciCom crowd, which includes science journalists and aspiring science writers who aim to leave lab work behind. Some professional societies like the Society for Neuroscience are getting into the game (BrainFacts.org Blog in 2012), while some like APS We're Only Human (2010) and the venerable BPS Research Digest (2005) have been around a while longer.

An increasing number of academics are starting to blog as well. The Myth of Cognitive Decline provides a perfect example of the rapid (and serious) exchange of ideas that's possible in a "non-peer reviewed" format. Certainly, heavyweight academic blogs such as Language Log and Statistical Modeling have existed for 10 years, but I think academic blogging is on the rise, perhaps even more so in psychology than neuroscience.

For almost five decades, Professor Patrick Rabbitt has been among the most distinguished of British cognitive psychologists. His work has been widely influential in theories of mental speed, cognitive control, and ageing, shaping research in experimental psychology, neuropsychology, and individual differences.

So if someone makes a bold new claim about cognitive aging, they really should listen to what he says.

In his inaugural post, Age and the overstuffed mind, Prof. Rabbitt lightly and humorously skewers Ramscar et al.'s (2014) claim that cognitive decline is a myth (which received extensive coverage in the press). He unfavorably compares their model to the Homer Simpson model, summarized as, “Every time I learn something new it pushes something out”:

The Simpson model makes no prediction for decision speed because it posits finite data capacity beyond which no increment, and so no further slowing, can occur. In this respect it is more elegant than the Ramscar model which makes no allowance for stabilisation or even shrinking of the data store by data attrition (forgetting) or displacement.

However, Rabbitt makes the astute observation that the paper may have been deliberately provocative:

In conclusion: unlike the Simpson model, which was arguably first empirically tested seventy years ago and still offers a touching insight into the human condition, the Ramscar model may be intended only as a provocation and to stimulate discussion. The boundary between provocation and exasperation is narrow, and is shifted by the experiences and intellectual commitments of an audience.

Can neuroscience illuminate the nature of human relationships? Or does it primarily serve as a prop to sell self-help books? The neurorelationship cottage industry touts the importance of brain research for understanding romance and commitment. But any knowledge of the brain is completely unnecessary for issuing take-home messages like tips on maintaining a successful marriage.

In an analogous fashion, we can ask whether successful psychotherapy depends on having detailed knowledge of the mechanisms of “neuroplasticity” (a vague and clichéd term). Obviously not (or else everyone's been doing it wrong). Of course the brain changes after 12 sessions of psychotherapy, just as it changes after watching 12 episodes of Dexter. The important question is whether knowing the pattern of neural changes (via fMRI) can inform how treatment is administered. Or whether pre-treatment neuroimaging can predict which therapy will be the most effective.

However, neuroimaging studies of psychotherapy that have absolutely no control conditions are of limited usefulness. We don't know what sort of changes would have happened over an equivalent amount of time with no intervention. More importantly, we don't know whether the specific therapy under consideration is better than another form of psychotherapy, or better than going bowling once a week.

In “The Devil’s Dictionary,” Ambrose Bierce defined love as “a temporary insanity curable by marriage.” Enter Sue Johnson, a clinical psychologist and couples therapist who says that relationships are a basic human need and that “a stable, loving relationship is the absolute cornerstone of human happiness and general well-being.” To repair ailing partnerships, she has developed a new approach in marriage counseling called Emotionally Focused Therapy, or EFT, which she introduces in her new book, “Love Sense.”...

Johnson believes EFT can help couples break out of patterns, “interrupting and dismantling these destructive sequences and then actively constructing a more emotionally open and receptive way of interacting.” She aims to transform relationships “using the megawatt power of the wired-in longing for contact and care that defines our species,” and offers various exercises to restore trust.

Most interesting to me was Johnson’s brain-scanning study. Before EFT therapy, unhappily married women participating in the study reported considerable pain from an electric shock to the ankle as they held their husbands’ hands. After 20 sessions of EFT, however, these now more securely attached women judged their pain as only “uncomfortable” and their brain scans showed no alarm response. Secure attachment appears to change brain function and reduce pain.

Initial questions:

Is there a “wired-in longing for contact and care that defines our species”? {my needy cat seems to long for contact and care}

But Johnson too often focuses on attachment to the exclusion of other “megawatt” brain systems. Remarkably, she lumps romantic love with attachment, saying “adult romantic love is an attachment bond, just like the one between mother and child.” In reality, romantic love is associated with a constellation of thoughts and motivations that are strikingly different from those of attachment. My research bears out that humankind evolved distinct but interrelated brain systems for mating and reproduction: the sex drive (to seek a range of partners); feelings of romantic love (to focus one’s mating energy on a single partner); and feelings of attachment (to drive our forebears to form a pair-bond to rear their young together). Each brain system is associated with different neurochemicals; each is a powerful drive that still plays a continuing role in partnership stability.

More questions:

Are there distinct (but interrelated) brain systems for the sex drive, romantic love, and feelings of attachment? {I actually find this to be plausible}

Is each brain system associated with different neurochemicals? {i.e. testosterone, dopamine, and oxytocin, respectively. I find this to be less plausible, or at least a bit simplistic.}

It's time to correct the misperceptions and overinterpretations that have arisen from this research!!

Since there are a number of issues to tackle – too many for a single post – I'll concentrate on only one of them here.

I Wanna Hold Your Hand

In 2006, Dr. James Coan and colleagues published a neuroimaging paper suggesting that the brains of happily married women showed an attenuation of activity related to emotion and threat when they held the hands of their husbands (Coan et al., 2006). Threat was induced experimentally by presenting a stimulus which occasionally signaled that a mild electric shock would be delivered to the ankle (20% of the time). Holding the hand of a male stranger also attenuated the hemodynamic response in some of these regions, relative to a no hand-holding control condition.2

Backing up a bit, the participants in the study were 16 heterosexual couples who rated their marital satisfaction as at least 40 on the 50-point Satisfaction subscale of the Dyadic Adjustment Scale (DAS). Total scores on the DAS were 126 for husbands (on a 151-point scale) and 127 for wives.3

The experimental design is illustrated below. The red X indicated a 20% chance of shock.

Figure adapted from a 2011 presentation by Coan (PDF), part of which can be viewed here.

At the end of each block, the women rated their subjective levels of unpleasantness and arousal on a 5 point scale. The results of the hand-holding manipulation were a bit weak. Unpleasantness ratings in the husband-hand condition were indeed significantly lower than no-hand (p=.001), but only marginally so compared to the stranger-hand condition (p=.05, with p<.05 being the usual cutoff for significance). The arousal ratings for husband-hand vs. no-hand (p=.07) and stranger-hand vs. no-hand (p=.08) were not officially significant either.

This raises a question I considered in 2006: why were the wives the only ones who were scanned?

...what about married women holding their mothers' hands? married men holding their wives' hands? unmarried women holding their partners' hands? single women holding the hands of their best friends? Perhaps the authors started with the relationship that they most expected to yield significant results...

The subjective effects of spousal hand-holding were not enormous in women, which might explain why we've never seen data from husbands (i.e., perhaps there were no effects on self-report and/or neural activity). The highly touted correlations between the wife's relationship quality rating and attenuation of threat-related brain signals weren't especially impressive either: −.59, p = .02 for the left superior frontal gyrus, −.47, p = .07 (not significant) for the right anterior insula, and −.46, p = .08 (not significant) for the hypothalamus. These numbers represent the magnitude of reduction in threat-related activity when holding the husband's hand, and were interpreted to suggest that the attenuations in pain (insula) and stress (hypothalamus) were related to the strength of attachment.4
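As an aside, those p-values follow directly from the correlation coefficients and the sample size, which makes it easy to see how little room for error a sample of 16 leaves. A quick sketch (assuming all 16 wives contributed to each correlation, as the couples count suggests):

```python
from math import sqrt
from scipy.stats import t as t_dist

def r_to_p(r, n):
    """Two-tailed p-value for a Pearson correlation r computed from n pairs."""
    t = abs(r) * sqrt(n - 2) / sqrt(1 - r**2)  # t statistic with n-2 df
    return 2 * t_dist.sf(t, df=n - 2)

# The three brain-behavior correlations reported by Coan et al. (2006)
for r in (-0.59, -0.47, -0.46):
    print(f"r = {r:+.2f}, n = 16  ->  p = {r_to_p(r, 16):.3f}")
```

With only 14 degrees of freedom, an r of −.46 or −.47 lands squarely in the marginal zone, which is one more reason to read small-sample brain-behavior correlations cautiously.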

Emotionally Focused Therapy

This finally brings us to the recent paper by Johnson et al. (2013). They followed the imaging protocol of Coan et al. in a set of 35 married couples who were screened for relationship dissatisfaction and scanned both before and after an average of 23 sessions of EFT couples counseling (range of 13 to 35 sessions over 3.25 to 8.75 months). On average, the couples were white Canadians 44-45 years of age, married for 17 years. In contrast to the happy couples described above (DAS scores of 127), these couples reported moderate levels of relationship distress, with DAS scores of 80-97. For various understandable reasons, only 23 couples completed pre- and post-EFT fMRI scans. Again, only the wives were scanned.

Still, retaining 23 couples over 6 months of treatment is no mean feat. However, I will again note that there is no control condition in this experiment, so we can't know whether any changes are specifically due to the treatment of interest.

Session and therapy length varied depending on the couples' presenting concerns and their progression through EFT-defined therapeutic change events [18], [28]. Specifically, when a couple was deemed according to EFT guidelines to have achieved 1) “softening” – a state of vulnerability and sharing of attachment related needs between the partners [37]– and 2) “consolidation” – where the therapist works with the couple to review treatment gains – treatment was terminated.

I am not qualified to comment on EFT and will not discuss it further, beyond saying that post-therapy DAS scores were significantly increased (pre-EFT mean=81 and post-EFT mean=96) but still, on average, in the moderately distressed range. Unpleasantness and arousal ratings in the husband-hand fMRI condition were lower after EFT.

The fMRI results after EFT were... complicated, as shown below, and involve what appears to be post-hoc reasoning in relation to initial marital strife. Percent signal change was assessed for all voxels in the ROIs that were reported by Coan et al., which is a good and unbiased method for analyzing an independent dataset.

Fig. 2 (Johnson et al., 2013). Point estimates of percent signal change graphed as a function of EFT (pre vs. post) by handholding (alone, stranger, partner) and DAS score. Point estimates were computed separately for individuals high (+1SD) and low (−1SD) in DAS. Point estimates reflect average percent signal change (threat – safe) from all voxels activated in the original Coan et al. handholding study.

But the results are a little hard to interpret for the wives with high DAS scores, who nonetheless still experienced relationship distress. The intervention had no effect on their global threat-related brain response when holding their husbands' hands. In contrast, those with lower DAS scores showed a post-EFT increase in the threat response in the no-hand condition, a large reduction for stranger-hand, and a very large reduction for husband-hand.

Next the authors moved on to analyzing specific ROIs. I'll skip the husband vs. alone comparisons because these are less relevant. Well, except I'll quote this bizarre finding (which isn't terribly relevant, just hard to explain):

Interestingly, participants with higher DAS scores were generally less active in the substantia nigra/red nucleus when holding hands with their partners relative to when alone, independent of EFT, F(1,49.5)=6.6, p=.01.

OK then. What about the husband vs. stranger comparisons? There were a number of brain areas that showed pre- to post-therapy decreases that did not differ for husband-hand vs. stranger-hand.5 These regions included the right insula, which was related to relationship quality in the Coan et al. (2006) study. The two regions with positive findings (i.e., threat-related reductions in husband-hand and increases in stranger-hand) are right dorsolateral prefrontal cortex (dlPFC) and left supplementary motor area (SMA). No relationships to DAS score were reported.

What have we learned from this study, and how does it inform the practice of EFT? If we take it at face value, the one consistent finding between the two experiments is that the threat response in right dlPFC was attenuated when holding the husband's hand, relative to holding a stranger's hand. If this neural region serves to downregulate negative emotional responses expressed elsewhere (as described below), there were no downstream regions in need of downregulation:

The dlPFC in particular supports explicit, cognitive, or “reappraisal” based self-control strategies active during unpleasant emotional states [54]. ... The relative post-EFT inactivity of the dlPFC implies further that a secure connection with an attachment figure does not help individuals to maintain equilibrium by boosting self-regulatory capabilities per se but by reducing the perception and significance of threats, thus obviating the need for self-regulation to occur [13].

Having some kind of autonomic measure of threat perception (e.g., skin conductance or heart rate) would be useful in verifying this hypothesis. The authors don't interpret their other major finding, a similar effect in the left SMA (a motor control region).

The final question remains unanswered: how does this study inform the practice of EFT? The authors state:

Ultimately, our handholding paradigm has provided a unique opportunity to test some of the proposed mechanisms of social support in general, and EFT in particular, all at the level of brain function, in vivo.

But not all of their predictions were supported. In particular, to explain the changes in neural threat processing observed in the no-hand condition, they resorted to an alternate model of therapeutic change:

We predicted that EFT would not affect neural threat responding during the alone condition. ... [But] threat-related activity during the alone condition actually increased as a function of EFT in regions such as the dACC and portions of the PFC. Increased reactivity in these regions suggests a possible cost to increasing one's dependence upon social resources: that it becomes more difficult to tolerate being alone. ...

This is not what we observed. Although positivity ratings did not change, subjective arousal actually decreased. This suggests an alternative hypothesis: that EFT either trained or motivated clients to be more effective self-regulators even when alone. ... Although EFT focuses strongly on interpersonal attachments and interdependence, doing so may also increase self-regulatory motivation as clients come to value fostering effective relationships in part through self-regulatory effort.

I'm not sure that I understand this formulation, or that a dissociation between behavioral self-report and dACC activity warrants a reinterpretation of EFT's therapeutic effects. Ultimately, I don't feel like a BS-fighting superhero either, because it's not clear whether Magneto has effectively corrected the misperceptions and overinterpretations that have arisen from this fMRI research.

2 The specific neuroimaging results were a bit less straightforward and easily interpreted than this. Regions of interest (ROIs) were defined by determining which areas were activated by the red X threat compared to the safe signal in the no-hand condition. This threat response was attenuated in the husband-hand vs. no-hand condition in the ventral anterior cingulate cortex (vACC), left caudate, superior colliculus, posterior cingulate, left supramarginal gyrus, and right postcentral gyrus. The threat response was also specifically attenuated in husband-hand vs. stranger-hand only in right dorsolateral prefrontal cortex, considered a “cognitive control” area. Finally, the stranger-hand vs. no-hand comparison revealed attenuation in the same bold blue regions above.

3 However, the correlation between husbands' and wives' DAS scores was not significant. Hmm... Would knowledge of this finding create any discord?

4 I won't get into how those single functions were assigned to these two complex and diverse brain regions.

5Johnson et al. (2013): “In the vmPFC, left NAcc, left pallidum, right insula, right pallidum, and right planum polare, main effects of EFT revealed general decreases from pre- to post- therapy in threat activation, regardless of whose hand was held.”

He was studying loneliness and isolation. She was studying love and desire. When they found themselves together, they gravitated toward her end of the continuum of social connection.

John Cacioppo was living in Chicago and Stephanie Ortigue in Geneva when they met—in Shanghai. ... On the last night of the conference, they happened to be seated next to one another at an official dinner, and soon became absorbed in conversation. “She was wonderful and brilliant and funny and I was completely taken by her,” Cacioppo says.

They both felt the chemistry but had to return to their respective homes the next day. Before parting ways they walked out of the restaurant together and noticed a beautiful moon hanging over the city. He snapped a picture of it. “A couple weeks later, she e-mailed me and asked if I could send her the picture,” Cacioppo says—a request his wife now confesses was just an excuse to strike up another conversation.

Within weeks they arranged to meet again, and from there their love unfurled. ... Within eight months they were engaged, and a season later they had married.

Their romantic story and collaborative work have been covered by a number of professional and popular media outlets, including the press office at the University of Chicago. The newsroom issued a press release on February 13, 2014 to coincide with Valentine's Day:

A region deep inside the brain controls how quickly people make decisions about love, according to new research at the University of Chicago.

The finding, made in an examination of a 48-year-old man who suffered a stroke, provides the first causal clinical evidence that an area of the brain called the anterior insula “plays an instrumental role in love,” said UChicago neuroscientist Stephanie Cacioppo, lead author of the study.

The study (Cacioppo et al., 2013) showed no such thing (in my opinion), and I'll return to that in a moment. But for now I'll point out that the Cacioppo spin didn't translate so well to other reports about this neurological patient. According to the Fox News affiliate in Little Rock, AR:

Love at first sight does not exist, claim researchers in the Current Trends in Neurology journal.

A stroke patient had a damaged anterior insula -- which is the part of the brain which controls how quickly we fall for someone.

They found that he could make decisions about lust normally but needed longer to think about love.

The researchers say this finding "makes it possible to disentangle love from other biological drives".

The Chicago researchers never said that love at first sight is a myth. But that didn't stop the British tabloid Metro from running that headline, while the Times of India declared:

A new study suggests that love at first sight is a myth and it does not exist.

According to the study, the speed at which we fall for someone is controlled by a region in the brain called the anterior insula, Metro.co.uk reported.

All this curt tabloid fodder contradicts the meet-cute trope of the Cacioppos' own relationship. But their study itself is also quite problematic. It doesn't support the authors' contention, in my view, and here's why.

WITH Apple widely expected to release its iPhone 5 on Tuesday, Apple addicts across the world are getting ready for their latest fix.

But should we really characterize the intense consumer devotion to the iPhone as an addiction? A recent experiment that I carried out using neuroimaging technology suggests that drug-related terms like “addiction” and “fix” aren’t as scientifically accurate as a word we use to describe our most cherished personal relationships. That word is “love.”. . .

...most striking of all was the flurry of activation in the insular cortex of the brain, which is associated with feelings of love and compassion. The subjects’ brains responded to the sound of their phones as they would respond to the presence or proximity of a girlfriend, boyfriend or family member.

In Tal Yarkoni’s recent paper in Nature Methods [PDF], we found that the anterior insula was one of the most highly activated parts of the brain, showing activation in nearly 1/3 of all imaging studies!

Here's where the Cacioppos and their anterior insulae come in...

The Common Neural Bases Between Sexual Desire and Love

That was the title of a review article that conducted a statistical meta-analysis of the neuroimaging literature on "love" compared to "lust" (Cacioppo et al., 2012). The emphasis was on the similarity of brain regions activated by purported experimental elicitors of these complex behavioral and cognitive states (e.g., "look at a picture of your spouse" vs. close friend, or "watch porn" vs. non-porn). However, they did report a "gradient" of differential activation from the anterior "love" insula to the posterior "lust" insula, as shown below.

In their more recent paper, Cacioppo et al. (2013) wanted to move beyond correlational data by testing a neurological patient with damage in the anterior insula. This is generally a good strategy to evaluate whether your highly vaunted theory based on fMRI data can hold up to causal manipulations, or in this case an accident of nature. If a person with anterior insula damage cannot feel love, then you'd say that region is necessary for feelings of love. If their ability to love is unaffected, then you'd say the anterior insula is not very important.

We can go even further and ask if that patient with damage to anterior insula – but sparing of posterior insula – can still feel lust but not love. In that case, you'd say there's a dissociation between love and lust in the anterior vs. posterior insula. 2

But that's not what the study was about!! Instead, it was about a speeded response task: look at pictures and quickly decide whether the person evokes feelings of love (or desire, in separate blocks). From the outset, I'll say that reaction times (RTs) in this task really have nothing to do with love, even as it was conceived in the fMRI experiments (i.e., "look at a picture of your spouse" and even "look at a picture of your child" - !!)

The participant in the study was a 48-year-old heterosexual man who had a stroke affecting a fairly large portion of the right insula [I think], which is good for the investigators because "lust" seems to "localize" to the left posterior insula in their schematic above. We don't know a whole lot about this man (like, how long ago was his stroke?), other than that "At the moment of evaluation, the patient showed no symptoms and his neurological exam was normal." We'll just have to trust them on that...

Oh, and he was cognitively normal on some brief screening tests, not depressed or anxious, and fine in two social cognition tasks (including empathy for pain, a task where other persons with anterior insular lesions show deficits).

On to the task. The patient and 7 age- and sex-matched controls viewed 40 pictures in blocks of 20. In two of the blocks, the participants decided whether the sexily dressed girl/young woman (aged 18-30) in the photo was "relevant to sexual desire" (yes/no) or "relevant to love" (yes/no). Each image was viewed twice. Only the RTs on "yes" responses were evaluated, for some unknown reason, so we don't know if the patient was faster/slower than controls to reject a photo.

The patient behaved similarly to controls in the "lust" task. It took him just under a second, 926 milliseconds (ms), to respond "yes" when he desired the sexy young girl in the picture, compared to 959 ms for controls [remember, these guys are 48 and the girls are as young as 18], which did not differ. The patient said "yes" to lust 58% of the time vs. 61% for controls. The authors write (PDF):

The anamnesis indicated that the patient was unaware of any differences in his feelings of love or desire, whereas behavior testing revealed a selective deficit for love (but not sexual desire).

In the "love" task, the patient said "yes" to love 35% of the time vs. 43% for controls (which again did not differ). For RT, the patient took 1279 ms to say "yes" to love vs. 1020 ms for controls. And this constitutes his selective deficit for love!! It took him 259 ms longer to decide that a stranger in a photo in a laboratory task was "relevant to love." And we don't know how long it took him to say "no." And he reported no subjective change in his feelings of love, and no significant others or family or friends were queried about this.
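For what it's worth, the standard way to ask whether a single patient differs reliably from a small control group is Crawford and Howell's modified t-test, which treats the controls as a sample rather than a population. The paper gives only the group means, so the control SDs below are made up for illustration; the point of the sketch is that whether 259 ms counts as a "selective deficit" hinges entirely on how variable the seven controls were:

```python
from math import sqrt
from scipy.stats import t as t_dist

def crawford_howell(patient, control_mean, control_sd, n_controls):
    """Crawford & Howell (1998) single-case test: compare one patient's score
    to a small control sample; returns (t, two-tailed p) with n-1 df."""
    t = (patient - control_mean) / (control_sd * sqrt((n_controls + 1) / n_controls))
    return t, 2 * t_dist.sf(abs(t), df=n_controls - 1)

# Patient: 1279 ms; controls: mean 1020 ms, n = 7 (from the paper).
# The control SD is NOT reported, so these values are hypothetical.
for sd in (100, 150, 250):
    t, p = crawford_howell(1279, 1020, sd, 7)
    print(f"control SD = {sd} ms  ->  t(6) = {t:.2f}, p = {p:.3f}")
```

Even with a tight hypothetical SD of 100 ms, the 259 ms difference only hovers around p = .05 on 6 degrees of freedom; with anything looser it isn't close.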

The patient could have been slower to make that decision for any number of reasons that have nothing to do with “playing an instrumental role in love.” I won't belabor the point, but this particular region of the brain is implicated in many different functions.

With all due respect to the authors, I don't understand how this paper was published in its current form.3

The dissociative anesthetic and ravey club drug ketamine has been hailed as a possible “miracle” cure for depression. In contrast to the delayed action of standard antidepressants such as SSRIs, the uplifting effects of Special K are noticeable within an hour. “Experimental Medication Kicks Depression in Hours Instead of Weeks,” says the National Institute of Mental Health. NIMH has been bullish on ketamine for years now. Prominent researchers Duman and Aghajanian called it the “the most important discovery in half a century” in a recent Science review.

But in 2010, I pondered whether this use of ketamine was entirely positive:

Now, in the latest issue of the American Journal of Psychiatry, Dr. Alan F. Schatzberg of Stanford University School of Medicine has a commentary entitled, A Word to the Wise About Ketamine. He first acknowledges the excitement about acute ketamine for refractory depression, then raises several cautionary notes and warns:

“This unbridled enthusiasm needs to be tempered by a more rational and guarded perspective.”

He notes that the drug is administered off-label in free-standing private psychiatry clinics without regulation by the FDA. Some leading proponents have advocated for strictly inpatient use, but that cat is already out of the bag.

Another potential issue is abuse liability. The antidepressant effects of ketamine are short-lived (less than a week), which means that repeated infusions are required. The published literature suggests a relatively safe profile over two weeks in a hospital setting, but patients at commercial clinics are unlikely to be monitored as closely.

The commentary also suggests that “We Need To Know More About the Mechanism of Action of the Mood-Elevating Effects” — but that is true of all drugs with antidepressant properties.

Without more data on what ketamine can do clinically, except to produce brief euphoriant effects after acute administration, and knowing it can be a drug of abuse, it is difficult to argue that patients should receive an acute trial of ketamine for refractory depression. ... The recent ketamine studies are exciting, and they open up important avenues for investigation that should be supported; however, until we know more, clinicians should be wary about embarking on a slippery ketamine slope.

However, in the midst of all this naysaying, it's important to note that Dr. Schatzberg has extensive ties to the pharmaceutical and biotech industries. He receives consulting fees from 19 different companies and has equity in 16 different companies, including one for which he is a co-founder. Ketamine, of course, is not under patent and is cheap to purchase. Perhaps not coincidentally, he does not receive fees from AstraZeneca, which (until recently) was developing a “low-trapping” NMDA antagonist that does not cause the hallucinogenic effects of ketamine (AZD6765, aka lanicemine).

In the past, I have suggested that short-term use for immediate relief of life-threatening symptoms (i.e., suicidal ideation) or for end-of-life depression seems to be the best indication. Neuroskeptic has argued for the use of an active placebo condition (i.e., a non-dissociative comparison drug) in clinical trials, which has happened only rarely (Murrough et al., 2013), and for better assessment of dissociative behavioral effects.

At this point, the long-term ramifications of ketamine use for treatment-resistant depression remain to be seen...

In a future post I'll investigate the potential side effects in more detail.

Declaration

I have no financial conflicts to declare. But if some company wants to employ a critic for some bizarre reason, I'll take this under advisement.

In 1987, over 100 Canadians became ill after eating cultivated mussels from Prince Edward Island. Symptoms included the typical gastrointestinal issues, but serious neurological findings such as disorientation, confusion, and memory loss were also observed (Perl et al., 1990). In the worst cases, patients developed seizures or fell into comas. Three elderly people died. The cognitive changes were persistent and had not resolved at a two-year follow-up.

The toxin was identified as domoic acid, which received the well-deserved moniker of Amnesic Shellfish Poison. Domoic acid is a potent excitatory amino acid that activates kainate and AMPA receptors, the binding sites for the ubiquitous excitatory neurotransmitter glutamate. It acts as an excitotoxin by overstimulating these receptors, causing a flood of calcium ions into the cells. Particularly vulnerable are neurons in medial temporal lobe structures such as the amygdala and the hippocampus, which is critical for memory.

Postmortem examination of four brains revealed hippocampal pathology that could account for the clinically significant anterograde amnesia seen in other (still living) patients (Teitelbaum et al., 1990). The pattern of neuronal loss was consistent with the damage observed in kainic acid animal models of epilepsy.

Fig. 3 (modified from Teitelbaum et al., 1990). Panel A: Section of hippocampus from a patient who died 24 days after mussel-induced intoxication, showing severe loss of neurons in all fields except CA2 (arrow), and tissue collapse is evident in part of field CA1 (double arrow). Panel B: Control Subject.

What was the source of the Amnesic Shellfish Poison that had accumulated in the mussels? A “red tide” of phytoplankton created a harmful algal bloom that produced domoic acid, which accumulates not only in shellfish but also in fish such as anchovies and sardines. This is where the California sea lions make their noisy entrance...

The Marine Mammal Center in Sausalito, California rescues and rehabilitates sick, stranded, and malnourished marine mammals, including seals, sea lions, and cetaceans. An up-to-date list of their current patients is available here. They are the premier institution for the diagnosis, treatment, and scientific study of domoic acid toxicity in California sea lions:

The Marine Mammal Center was the first group to definitively diagnose DA poisoning in marine mammals, during a large outbreak in California sea lions in 1998. In September 2004, the Center received a grant from the Oceans and Human Health Initiative to study the long-term effects of domoic acid in sea lions. This project studied the impact of DA on health, survival, and reproduction. Part of this project focused on the neurological effects of DA. Effects were evaluated using magnetic resonance imaging (MRI), cognitive behavior tests (how the animal behaves), and histopathology (tissue samples from dead animals).

Their website on the topic is highly recommended, and contains links to published papers such as Magnetic resonance imaging quality and volumes of brain structures from live and postmortem imaging of California sea lions with clinical signs of domoic acid toxicosis [PDF].

Most recently, a team of researchers from Stanford University collaborated with the Marine Mammal Center to conduct a detailed neuropathological investigation of the brains of sea lions who suffered from seizures due to domoic acid toxicity (Buckmaster et al., 2014). Unfortunately, this is not an uncommon occurrence: the current census of pinniped patients includes five sea lions diagnosed with acute domoic acid toxicity. In the chronic state, the animals can experience recurrent seizures, leading to failure to thrive and a poor prognosis. The authors hypothesize that the animals develop temporal lobe epilepsy, making them an unfortunate accidental model of the human disease.

The researchers examined the brains of 14 domoic acid-exposed (DA) animals and 9 control animals. Five of the affected sea lions were admitted in status epilepticus, a state of continual seizure that can cause severe brain damage and even death. The study expanded on earlier work by using stereological methods to obtain an unbiased estimate of the total number of neurons in each hippocampus (left and right hemispheres).

In control sea lions, Buckmaster and colleagues (2014) estimated that each hippocampus contains over 6 million neurons! For the comparative hippocampal anatomy aficionados, sea lions had a relatively small proportion of neurons in the dentate gyrus granule cell layer relative to other mammals (i.e., macaque monkeys, squirrel monkeys, dogs, rats, and mice), and the granule cell layer was thinner than in other species.
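A number like 6 million can't come from counting every cell; it comes from sampling. The excerpt doesn't detail the study's exact procedure, but a common unbiased stereological estimator, the optical fractionator, scales the neurons counted in a systematic sample by the inverse of each sampling fraction. A minimal sketch in Python (all numbers are illustrative, not from Buckmaster et al.):

```python
def optical_fractionator(counted_neurons, ssf, asf, tsf):
    """Optical fractionator estimate of total neuron number.

    counted_neurons: neurons counted in the sampled disector probes
    ssf: section sampling fraction (e.g., every 10th section -> 0.1)
    asf: area sampling fraction (counting frame area / sampling grid area)
    tsf: thickness sampling fraction (disector height / section thickness)
    """
    return counted_neurons / (ssf * asf * tsf)

# Illustrative numbers only (not from the paper): 1,200 neurons counted
# under modest sampling fractions scale up to an estimate in the millions.
estimate = optical_fractionator(1200, ssf=0.1, asf=0.05, tsf=0.04)
print(f"{estimate:,.0f}")
```

With these toy fractions, counting 1,200 neurons scales up to an estimate of 6,000,000, the order of magnitude reported for the control sea lion hippocampus.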

Importantly, the authors observed significant neuronal loss in the DA-exposed animals, with substantial variation across the hippocampal subfields (see Fig. 3). And interestingly, the damage was unilateral in most cases: left hippocampus in four animals, right hippocampus in seven, and bilateral in only three.

Fig. 1 (modified from Buckmaster et al., 2014). Nissl-stained cell bodies in the hippocampi from (A) control and (B-D) chronic domoic acid sea lions. Note the increasing levels of neuron loss in the three chronic DA cases. All were admitted in status epilepticus with DA toxicity. In (A), lines indicate border between the hilus (h) and CA3 field. g, granule cell.

In addition, the authors compared the pattern of neuronal loss in sea lions to that observed in human patients with temporal lobe epilepsy, using tissue obtained at autopsy or after temporal lobe resection (for seizure control):

Substantial neuron loss was evident in all hippocampal subfields of patients with temporal lobe epilepsy and chronic DA sea lions compared with controls (Fig. 3B). In sea lions neuron loss was more severe in the hilus, CA3, and CA2 subfields compared with humans. In humans neuron loss was more severe in CA1. Sea lions and humans displayed similar levels of granule cell loss.

As we saw in the earlier cases of Amnesic Shellfish Poisoning in Canada, the CA1 region of the hippocampus was especially vulnerable, and this is also true in cases of hypoxia or anoxia. However, it's notable that significant neuron loss was observed throughout the hippocampus.

Why does CA1 fare differently in sea lions? The reasons are unclear. Nonetheless, when examining the brain as a whole, it is remarkable that the hippocampus shows such qualitatively similar pathology in sea lions and humans poisoned by domoic acid, and in humans with temporal lobe epilepsy. The authors speculate that the misfortune of chronic DA sea lions may yield an opportunity to test new anti-seizure treatments, for the benefit of both marine and terrestrial mammals.

NCAA college basketball isn't the only hot competition involving a team from the University of Virginia. UVa Psychology Professor Brian Nosek is one of three founders of Project Implicit, a collaborative nonprofit dedicated to the study of implicit social cognition — how unconscious thoughts and feelings can influence attitudes and behavior.

Prof Nosek is also heavily involved in the Open Science and Replication movements. Along with graduate student Calvin Lai, he led a multinational group of 22 other researchers in a competition to see who could devise the best intervention to reduce racial bias scores on a widely administered implicit test, the race IAT (Lai et al., 2014).

Or does it...? There have been some vocal critics of the IAT over the years who have questioned what the test actually measures. I'll return to this point later, but for now let's look at the impressive aspects of the new paper.

Performance on the Black-White IAT was compared after 17 brief interventions aimed at changing pro-White bias (and a "faking" condition) relative to a control condition of no pre-test intervention. Participants were over 20,000 non-Black individuals registered at the Project Implicit website, randomized into groups of 300-400. Most of the interventions were tested in four different studies. The contest rules allowed changes to the design between studies. The goal was to lower pro-White bias scores to the point of no preference between Blacks and Whites.

In the IAT, participants classify faces as Black or White and words as good or bad. Some blocks contain only faces or only words. The two critical conditions are shown in the figure above. The stimulus-response mappings are rotated in different blocks to either reinforce stereotypes (bottom) or go against stereotype (top). In the Stereotype condition, participants press the same key when they see White faces or “good” words. They press the other key when they see Black faces or “bad” words. Most White participants (and many African Americans) show a pro-White “preference” or bias, with faster responses when White/good and Black/bad share a key than when the mappings are reversed.

Conversely, in the Against Stereotype condition, Black faces and positive words are mapped to one key, and White faces and negative words are mapped to the other key. In essence, this induces a response conflict similar to that seen in many classic cognitive psychology tasks such as the color-word Stroop task, e.g. BLUE (say “red”) and the Eriksen flanker task, e.g. ← ← → ← ← (press right button). Slower response times in the IAT conflict conditions have been interpreted as an implicit bias against Black people (Greenwald et al., 2009), although one could argue that executive control abilities play a role here, just as they do in the Stroop task (Siegel et al., 2012).1
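For the curious, the bias score itself is conventionally computed with the “D” algorithm (Greenwald et al., 2003): the mean latency difference between the two critical conditions, scaled by the pooled standard deviation of all trials. Here's a bare-bones sketch (the real scoring also drops outlier trials and penalizes errors, which I've omitted):

```python
from statistics import mean, stdev

def iat_d_score(stereotype_rts, against_rts):
    """Simplified IAT D score: mean latency difference between the
    Against Stereotype and Stereotype blocks, divided by the standard
    deviation of all trials pooled. On the race IAT, a positive score
    is the pro-White pattern (slower when Black/good share a key)."""
    pooled_sd = stdev(stereotype_rts + against_rts)
    return (mean(against_rts) - mean(stereotype_rts)) / pooled_sd

# Toy response latencies in milliseconds.
stereotype_block = [650, 700, 675, 640, 690]  # White/good on the same key
against_block = [820, 790, 805, 840, 770]     # Black/good on the same key
print(round(iat_d_score(stereotype_block, against_block), 2))
```

With these toy latencies the function returns a D of 1.79, implausibly large for real data, where trial-level noise shrinks the score considerably; the point is only the direction of the effect.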

The Interventions

The interventions were divided into six different descriptive categories. Although the descriptions were based on existing hypotheses in the literature, they do not imply the operation of any specific psychological mechanism. The interventions had to be brief in length (5 min or less), yield interpretable scores, and have a low attrition rate. See Appendix 1 at the end of this post for a detailed list.

(2) Exposure to counterstereotypical exemplars: assigned to fictional groups with positive Black ingroup members and/or negative White outgroup members; OR think about famous Black people and infamous White people (Interventions #4–8).

(6) Intentional strategies to overcome biases: provide strategies to override or suppress the influence of automatic biases, rather than trying to shift associations directly (Interventions #17 and #18).

To reveal my own a priori biases regarding these descriptive categories, I favor (6) Intentional strategies to overcome biases, which I have written about previously (in 2008). These were intervention #17, Using Implementation Intentions, and #18, Faking the IAT, as proposed by Calvin K. Lai, the first author of the manuscript.

Results indicated that nine of the interventions were effective, and nine were ineffective. The interventions that tried to change attitudes (Appeals to egalitarian values), increase empathy or perspective-taking (Engage with others’ perspectives), or elicit an elevated sense of morality (Inducing emotion - Haidt) were completely ineffective.

I note here that the failed interventions all tried to challenge the racially biased attitudes and prejudice purportedly measured by the IAT. These interventions are below the red line in the figure below.

Some of the most effective interventions showed variability across studies, because the parameters were altered between studies (which was allowed). Importantly, some of the interventions included multiple manipulations. The top three, Vivid Counterstereotypic Scenario, Practicing an IAT With Counterstereotypical Exemplars, and Shifting Group Boundaries Through Competition all employed Implementation Intentions in addition to the primary intervention.

What are Implementation Intentions? [in brief, think “Black = good”]

The mechanism connects an environmental cue with the goal intention, making associations between the behavior and the cue more accessible in memory. ... The task gave participants a short tutorial on how to take the IAT and informed them about the tendency for people to exhibit an implicit preference for Whites compared with Blacks. Participants were then asked to commit themselves to an implementation intention by saying to themselves silently, “I definitely want to respond to the Black face by thinking ‘good.’”

On its own, this manipulation was effective in reducing bias scores (p = .032, d = .19). The effect size was enhanced by allowing participants to practice the task before the instructions were given (p = .00037, d = .32). In other words, once subjects were even superficially familiar with the task, being told to think “Black = good” significantly reduced pro-White sentiment (i.e., IAT scores).
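For readers rusty on effect sizes, Cohen's d is the difference between group means in pooled-standard-deviation units (by convention, d near 0.2 is “small” and near 0.5 is “medium,” so these intervention effects are modest). A quick sketch with made-up data, not the study's:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups: difference in means
    divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = sqrt(((na - 1) * stdev(group_a) ** 2 +
                      (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Made-up IAT D scores: control group vs. an intervention group.
control = [0.50, 0.60, 0.40, 0.55, 0.45]
intervention = [0.35, 0.45, 0.30, 0.40, 0.50]
print(round(cohens_d(control, intervention), 2))  # tiny samples inflate d
```

With samples this small the d comes out absurdly large; the Lai et al. estimates are credible precisely because they come from hundreds of participants per condition.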

This intervention is remarkably similar to my previous anecdotal findings (n=1) for the Human or Alien? test and the Dead or Alive? test. My 2008 results are below. I showed similar effects for the Black-White test and the Women in Science test, but I couldn't find the results for those.

The Neurocritic is Human AND Alien. Coming soon: “Tips for Manipulating the IAT.”

You have completed the study.

Your Result

Your data suggest little to no automatic identification with Human compared to Alien.

If your results, provided above, indicate a stronger identity with alien relative to human, then you are probably an alien.

The Neurocritic is NEITHER Dead NOR Alive. Or both Dead AND Alive. Plus, as promised, today we'll cover “Tips for Manipulating the IAT.”

You have completed the study.

Your Result

Your data suggest little to no automatic identification with Alive compared to Dead.

Your results, summarized above, are an implicit indicator of whether you are alive or dead. Implicit measures are superior to self-report because the latter is notoriously unreliable. People may report being alive because social pressures suggest that it is more desirable to be alive. Also, people may not have introspective access to their animate-status, making such self-report untrustworthy.

Super Secret Tip for Manipulating the IAT

My “faking” strategy was simple, and relied on neither deliberate slowing of response times nor a long-standing affiliation with aliens. When SELF and ALIEN were mapped to the same key, I merely said to myself, “I'm an alien.” This strategy was transient, applied only when those stimulus-response mappings were the same, not when SELF and ALIEN were mapped to different keys. I used the same strategy for the Dead or Alive IAT. In both cases, I responded as quickly and as accurately as possible.

Here, what I'm calling “faking” is the Using Implementation Intentions instructions (and not the Faking the IAT intervention of Lai et al., 2014). Again, the top three contest winners combined this strategic feature with another manipulation, as noted by the authors:

The three most effective interventions appear to leverage multiple mechanisms to increase their impact on implicit preferences... The most effective intervention, Vivid Counterstereotypic Scenario, involved the participant as the subject of the story, had the participant imagine his- or herself under a highly threatening life-or-death situation, exposed participants to counterstereotypical exemplars (malevolent White villain, dashing Black hero), and provided strategies to overcome bias (goal intentions to associate good with Black and bad with White) to reduce implicit preferences.

This vivid intervention is illustrated by using a TV example in Appendix 2. [Note: participants in the actual experiment read a story; they did not watch an episode of Criminal Minds.] The strategy was receiving the instruction that “the task following the story (i.e., the race IAT) was supposed to affirm the associations: White = Bad, Black = Good.”

The conclusion I draw from this impressive project is that performance on the IAT is subject to strategic control, supporting the notion that the IAT is not a pure measure of implicit attitudes. Even a brief training session is sufficient to reduce (or reverse) stereotypical preferences and associations that are supposed to be unconscious in nature (also see Hu et al., 2012; Siegel et al, 2012).

1 Another common paradigm in cognitive psychology, semantic priming, can explain a goodly portion of the effect as well. In one study, the bias shown in IAT scores was based on statistical co-occurrence of words and concepts in the ambient culture and not on prejudiced attitudes. A discussion of those findings is beyond the scope of this post.

...participants read an evocative story told in second-person narrative in which a White man assaults the participant and a Black man rescues the participant ( “With sadistic pleasure, he beats you again and again. First to the body, then to the head. You fight to keep your eyes open and your hands up. The last things you remember are the faint smells of alcohol and chewing tobacco and his wicked grin”).

The 21st Annual Cognitive Neuroscience Society Meeting was held in Boston from April 4–8, 2014. We'll kick off our recapping festivities with a contest of "Name that Soundbyte!" from an invited symposium on how developmental cognitive neuroscience can (and cannot) inform policy.

The burgeoning field of developmental cognitive neuroscience is yielding important insights into how the human brain develops and changes with experience. These findings are proving to be of great interest not only to other scientists, but also to practitioners and policymakers from various corners of society. What have we learned so far that warrants consideration by those in a position to shape policy and practice in education, healthcare, or the judicial system? In this symposium, leading cognitive neuroscientists will discuss the potential applications of their research.

Given the list of symposium speakers, can you name who said each of these quotes? [or close paraphrases?] Be sure to chime in by leaving your best guesses in the comments.

Postpublication peer review through traditional scientific publishing is like kabuki theater: a slow, rehearsed drama in which the viewer must recognize the subtle profundities of performers wearing deliberately ambiguous masks.

Postpublication peer review on social media is like the mosh pit at a punk rock show. It’s fast, uncoordinated, a lot less subtle, more in your face, and involves a few more risks.

I thank Dr. Faulkes (“Dr. Zen”) for mentioning this blog and its cousin, The Neurocomplimenter. I started the latter in response to overt dismissals of neuroscience as providing anything useful to our understanding of human cognition (and to emphasize the importance of studying brain function, more broadly). I see it as “a new project designed to counter gratuitous anti-neuroscience sentiment. It’s part of my campaign to combat pop neurobashing profiteers.”

Lately, the compliments have been sparse (perhaps due to my pessimistic nature) but I will try to highlight the positives anon. Meanwhile, the critical element soldiers on. Dr. Zen nailed my reason for starting an anonymous blog right on the head: professional peer review is anonymous, so why should bloggers identify themselves? Many do, of course, and some researchers sign their reviews.

Regardless, there are strong traditions for using both anonymity and pseudonyms in science (Neuroskeptic, 2013), not the least of which is journal peer review itself. It is a little audacious for authors and editors to decry the negative effects of “anonymous bloggers” when essentially every journal practices anonymous peer review. Bloggers are often easier to identify than journal reviewers. We still don’t know who reviewed Wolfe-Simon et al. (2011) for Science. But we know Rosie Redfield critiqued it on her blog (http://rrresearch.fieldofscience.com/2010/12/arsenic-associated-bacteria-nasas.html), which ultimately led to a paper that failed to replicate key claims of the original paper (Reaves et al., 2012).

The voodoo correlations neuroimaging brouhaha is another major example of post-publication peer review (that went a little pre-publication due to a preprint going public), much to the dismay of the journal editor and some of the implicated researchers (see The Voodoo of Peer Review).

Dr. Zen also made the important distinction between completely anonymous commenters and those who identify as a pseudonym, a point that I often fail to make, e.g. Anonymous Peer Review Means Never Having to Say You're Sorry: “The Neurocritic is happy to provide a new form of anonymous peer review, free of charge.”

Rescuing US biomedical research from its systemic flaws

(2) A commentary in PNAS relayed the sad structural state of biomedical research in the U.S. and made [some unrealistic, some potentially helpful] suggestions for change (by four distinguished scientists in positions of power):

There is no more worrisome consequence of the hypercompetitive culture of biomedical science than the pall it is casting on early careers of graduate students, postdoctoral fellows, and young investigators.

The commentary discusses the unsustainable growth of the biomedical research enterprise and the overly competitive yet conservative culture it has spawned. This win-at-all-costs mentality squelches creativity and collaboration, and concentrates resources in the hands of fewer and fewer investigators. The average age for landing one’s first tenure-track position has risen to 37. And some estimates project that fewer than 8% of new PhDs will get tenure, with the figure an abysmal 0.45% according to one report.

The “post-doc apocalypse”1 has been widely discussed on social media (and in the popular press) for some time now. Potential solutions have been expressed by researchers at all levels, but these voices have been scattered. Those of us who are not in certain informal social circles may overlook a site of discussion somewhere, but who has time to search for it?

On the other hand, specific responses to the Alberts et al. PNAS article have already appeared on PubPeer (which allows anonymity) and PubMed Commons (which does not). Professor Dorothy Bishop made some trenchant points on the three major solutions proposed by the authors. You can read these at PubMed Commons and click on links to blog posts, where she expands on the topics of academic workload, grant review, and evaluation.

Here's my worthless $0.02 on the matter, from a position of no power and no influence: I agree with Prof. Bishop that the “Predictable and Stable Funding of Science” solution is a pipe dream...

We encourage Congressional appropriators and the executive branch to consider adding a 5-y projected fiscal plan to the current budgetary process. This plan would be updated each year, at the same time that annual appropriation bills are written.

Ha ha ha ha ha! Have they forgotten the Great Budget Sequestration of 2013 already? Getting the political parties to agree (or factions within parties) seems impossible to me. Even if all four authors win plum positions in Congress or the White House, good luck with that.

Perhaps more plausible are recommendations for downsizing the future workforce to reduce the glut of under- and unemployed junior scientists. However, they have to persuade graduate programs to admit fewer students and ban principal investigators from funding students through research grants (rather than through training grants). Some funding sources/agencies already forbid PIs from paying student stipends, and Alberts et al. propose that NIH should move towards this model.

They also suggest “Broadening the career paths for young scientists” so they're not looked down upon as failures if they don't become a clone of their advisor at MIT. This would also require changing the minds of Bob Graybeards everywhere. Good luck with that.

Another modest proposal is to force universities to pay their faculty. Here, however, they have to convince Chancellors and Deans of medical schools to forgo the “perverse incentives” of “soft money” positions, where the institution benefits from the indirect costs awarded to the university in conjunction with salary money paid by NIH. This would do away with a whole army of productive researchers. What a great idea.

Perhaps these unemployed “soft money” faculty could apply for the proposed Staff Scientist positions, displacing all the post-docs who are supposed to be in line for them. What a great idea. Good luck with that.

To be fair, let's see if the proposed Staff Scientist position might be a good idea. Post-post-docs could move into higher paying jobs that would better prepare them to run their own labs. This is because they will actually run their supervisor's lab, at much lower pay and with no prestige:

We believe that staff scientists can and should play increasingly important roles in the biomedical workforce. Within individual laboratories, they can oversee the day-to-day work of the laboratory, taking on some of the administrative burdens that now tend to fall on the shoulders of the laboratory head; orient and train new members of the laboratory; manage large equipment and common facilities; and perform scientific projects independently or in collaboration with other members of the group. Within institutions, they can serve as leaders and technical experts in core laboratories serving multiple investigators and even multiple institutions.

Or here's an idea: there could actually be a place in the system for individuals who may not want to run their own lab! These people could conduct their own independent research and help with the grant application process, if so inclined. We'd call them Staff Scientists. We'd hire different people to take on the administrative burdens and call them Lab Managers. Still others could be hired as “technical experts in core laboratories serving multiple investigators and even multiple institutions.” We'd call them Technical Experts.

But this would be prohibitively expensive. Back to cheap graduate student labor...

The most important point is that I can lobby all I want for the Snarky Policy Consultant position, since this is my blog.

NIH and AHRQ Announce Updated Policy for Application Submission

(3) A MAJOR, MAJOR revision in how the NIH reviews research grants was released today (April 17, 2014). At first I thought it was an April Fool's joke (and that I must have been dreaming for the last 16 days, which would explain a lot of things).

Effective immediately … following an unsuccessful resubmission (A1) application, applicants may submit the same idea as a new (A0) application for the next appropriate due date. The NIH and AHRQ will not assess the similarity of the science in the new (A0) application to any previously reviewed submission when accepting an application for review.

In essence, this does away with the “two strikes” rule – which meant that rejected proposals were barred from being submitted again sans a complete overhaul.

Within minutes of this announcement, Drug Monkey had 159 tweets and a blog post on this new policy, which would allow investigators to submit multiple revisions of basically the same grant (hence the R01 A7, a grant funded on the eighth try). But it wouldn't really be considered the same grant, so the numbers only go up to A1 (one amendment), thereby avoiding the stigma of A7. Thank you, NIH, that's very considerate.

And here we are again on social media, providing immediate feedback and discussion of important issues that impact the biomedical research enterprise. Can I get a full-time job doing that? We'd call it Blogger-in-Residence.

Footnote

1 I refuse to use the neologism because of what Google will turn up. And this is nothing new; it's been brewing since the early 90s: “From the early 1990s, every labor economist who has studied the pipeline for the biomedical workforce has proclaimed it to be broken.”

Corkin also discussed the man behind the initials, describing his gentle and remarkably upbeat disposition, given that he was repeatedly confronting a confusing, context-free present. Her talk included a poignant and powerful audio recording of Corkin and H.M. chatting in 1992. In the excerpt, H.M. professes to “not mind” all of the tests and studies, saying simply, “I figure what’s wrong about me helps you help others.”

Henry Molaison died on December 6, 2008. Corkin described the post-mortem handling of H.M.'s brain, which was first scanned before autopsy. Then the brain was removed and preserved in formaldehyde for 10 weeks, and later scanned in a 7T magnet (see Annese et al., 2014 for details).1

H.M.'s brain flew Jet Blue

H.M.'s brain was transported across the country, where it underwent lengthy processing prior to sectioning into 2,401 slices on a heavy-duty frozen microtome (Annese et al., 2014).2 This event was webcast live at the Brain Observatory, which Corkin said was “like watching paint dry.” I beg to differ. I thought the live coverage was like the Stanley Cup of Neuroscience, as mesmerizing as watching the Zamboni clean the ice at a hockey game.

At the time, I noted that “H.M.'s ventricles are quite enlarged. Then again, he was 82 when he died (so that's not unexpected).”

H.M. was, in fact, demented when he died. His cerebellum was severely atrophied after years on the anticonvulsant drug Dilantin. Cerebellar dysfunction on its own can be associated with explicit memory deficits (Baillieux et al., 2008). And finally, his amygdalae were gone bilaterally (Annese et al., 2014):

The excision of the anterior hippocampus, together with the bulk of the amygdala, may explain H.M.’s dampened expression of emotions, poor motivation and lack of initiative. The fact that he was impaired in reporting internal states such as pain, hunger and thirst and his apparent lack of initiative was ascribed to the almost complete removal of the amygdala...

Dr. Corkin has long said that “H.M.'s amnesia was pure.” But these additional issues, along with some reports that his language production and visual cognition were not entirely normal, raise questions about his status as the definitive hippocampal amnesic. Nonetheless, there's no denying the immense importance of what H.M. so generously taught us about memory. “It’s a funny thing,” he said, “you live and learn. I’m living and you’re learning.”

...fixed in standard buffered formalin (4% formaldehyde; postmortem interval of ∼14 h). The brain was fixed for 10 weeks at 4 °C with three changes of fixative during that time; it was suspended upside down, hung by the basilar artery. When the tissue was firm enough, the brain was immersed in fixative laying on a cushion of hydrophilic cotton. Subsequently, multiple series of MRI scans of the fixed specimen were acquired in 3T and 7T scanners.

The results of our examination are based on 2,401 digital anatomical images and selected corresponding histological sections that were collected at an interval of 70 μm over the course of an uninterrupted 53-hour procedure. The series of digital images of the block’s surface was obtained using a digital camera mounted directly above the microtome stage. Volumetric reconstruction from these images was the basis for subsequent visualization and 3D measurements along arbitrary planes. The dissection of the brain was video-recorded and streamed live on the web to permit scientific scrutiny and to foster public engagement in the study.

(4) Martha Farah - “IMHO still premature to dictate policy based on neuro”

(5) Martha Farah - “Here's where going ‘neuro’ earns its keep”

(6) Martha Farah - ‘descriptive’ often considered derogatory in science

(7) John Gabrieli - SCHOOLS MATTER!

Dr. Gabrieli's soundbite was in the context of discussing charter schools, which have a positive impact on standardized test scores (crystallized intelligence) but no effect on fluid intelligence. Another large-scale study identified structural differences in the brains of pre-literate kindergartners that predicted later reading ability. The CNS 2014 Blog covered his talk in much greater detail. Another Gabrieli soundbite: “Children are born into a neurodevelopmental lottery.”

Dr. Sheridan spoke about the effects of profound deprivation in institutionalized Romanian orphans and the results of the Bucharest Early Intervention Project. This is a very depressing topic, not one that inspires witty quips or quotable soundbites. As her abstract put it, “Many aspects of postnatal brain development depend critically on experience for development to proceed normally. In this talk we will discuss what happens to children whose postnatal experience violates what we have come to expect as a species.”

Dr. Farah's talk was about socioeconomic status and the developing brain. Low SES affects some cognitive domains more than others: language, executive function, and declarative memory are the most heavily impacted. For language development, environmental stimulation is the sole factor (other than age). For declarative memory, parental nurturance is the sole factor. “Here's where going ‘neuro’ earns its keep” – e.g., it's been shown that the deleterious effects of stress on the hippocampi of rat pups are buffered by maternal care. Farah believes it's “still premature to dictate policy based on neuro” but acknowledged the tactical advantages of using neuroscience for framing/spin – e.g., “science speaks with authority” and appeals to government technocrats. “We're not sentimental old fluffs” for promoting social justice (now in the clever guise of brain plasticity). The negative neural consequences of “toxic” environments can replace social justice frames on the Left and “poverty as a moral failing” frames on the Right. Neuroscience for the bipartisan win!

Dr. Neville spoke about specific training programs designed to narrow the socioeconomic gap in achievement. She was less circumspect about policy implications than Farah, arguing in favor of “evidence-based politics” – e.g., publicly proclaiming [in the U.S.] that social equality will improve school performance “up to the levels of Cuba and Sweden,” she said in a deadpan manner. Partnering with Head Start school programs in Oregon, she and her colleagues have implemented an attentional training program in low SES children (the Brain Train) and their parents. She said we should convince the public and policy makers to be guided by evidence from brain research. Again, see the CNS 2014 Blog for more details on her talk.

Neuroscience and Education

I'll end with a collection of links that opine on whether neuroscience really has much to say about “evidence-based” education. Teachers in the UK think it does, according to this article:

Thousands of teachers are set to receive training in neuroscience after union members called for guidance on how the subject could be applied in the classroom.

Members of the Association of Teachers and Lecturers (ATL) at the union’s annual conference narrowly voted for a motion calling for training materials and policies on applying neuroscience to education and for further research on how technology can be used to develop better teaching.

. . .

“It is true that the emerging world of neuroscience presents opportunities as well as challenges for education, and it’s important that we bridge the gulf between educators, psychologists and neuroscientists.”

Neuroscience could also help teachers tailor their lessons for creative “right brain thinkers”, who tend to struggle with conventional lessons but often have more advanced entrepreneurial skills, Ms Neal said.

If nothing else, perhaps the teachers could learn there's no such thing as “right brain thinkers.”

I really don't want to seem like a Cassandra who rubbishes every attempt at doing neuroscientific studies of development or developmental disorders. ... But my concern is that we are prioritising neuroscience approaches to developmental problems, and this is happening in part because researchers are offering the promise of educational relevance. In contrast, clinical trials of behavioural interventions, which have more potential for helping children, are much harder to fund, and are deemed far less exciting.

We now have definitive proof that the propensity of womankind to postpone sex due to a headache is of evolutionary origin! This annoying habit has been traced back directly to a strain of ovariectomized CD-1® IGS mice supplied by Charles River.

In a naturalistic design that precisely mimics the mating habits of humans, sexual receptivity was induced in the female mice with subcutaneous injections of estradiol. Then the female mice and their preferred male partners were injected in various body parts with two different compounds to induce inflammatory pain. Lo and behold, the mounting behaviors of male mice were hardly deterred by these painful treatments, but the females declined sexual congress and hid from the males.

“These findings suggest that the well known context sensitivity of the human female libido can be explained by evolutionary rather than sociocultural factors, as female mice can be similarly affected,” concluded the authors (Farmer et al., 2014).

Of Mice and Women

This study was published in the Journal of Neuroscience, and the strongly worded quote above is how the authors chose to conclude their abstract. They go to great lengths to “prove” that the loss of libido was due to lack of sexual motivation in the female mice, rather than a direct consequence of pain. The authors also stretch the clinical applicability (and evolutionary validity) of their work a bit beyond belief, in my view. Why? Perhaps because promoting a viable animal model of low sexual motivation in women will ultimately serve drug development purposes (Farmer et al., 2014):

The link between pain and sexual motivation is evident in human sexual relations. The widespread aphorism, “Not tonight, dear, I have a headache” refers to a lack of sexual motivation due to pain. No clinical data exist on the direct impact of pain on sexual motivation, yet high prevalence of reduced sexual desire in chronic pain populations (Basson et al., 2010; Fine, 2011) suggest that pain may adversely influence sexual motivation.

It's not exactly true that “No clinical data exist on the direct impact of pain on sexual motivation...” (as we'll see later), but first let's take a look at the actual study.1

Pairs of vigorously mating mice were assigned to either male “open field” or female “paced mating” situations, which mimic their respective natural preferences. One member of each pair was injected with a pain-inducing inflammatory compound (zymosan A or λ-carrageenan) into their genital or nongenital (hind paw, tail, cheek) regions. Sexual behavior was measured by mounting in open field (for males) or in paced mating (for females) conditions. In the latter situation, the smaller females could run into their safe room to avoid the males.

The results generally indicated that the females hid from the males when injected with painful substances (Fig. 1A), but the males were not bothered (based on the total number of mounts) with the exception of a non-significant decline when the penis was injected with zymosan (Fig. 1C).

Fig. 1 (modified from Farmer et al., 2014). Reduction of sexual behavior in female but not male mice by inflammatory pain. A. Decreased mounting behavior in a paced mating paradigm when female mice receive zymosan (ZYM) or carrageenan (CARR) injections to the vulva, hind paw, tail, or cheek, compared with uninjected female mice (No Inj.). Bars represent mean ± SEM mounts with (shaded) or without (open) intromissions. C. No decreases in mounting behavior in an open field when male mice are treated in a similar fashion. *p < 0.05, **p < 0.01 compared with vehicle [NOTE: using uncorrected t-tests].

[As an aside, one could imagine that the mating behavior of human males might be more greatly affected by penile injections of any sort, and by inflammogen injections into the hand or cheek than what we're seeing here in the male mice.]

In addition to mounts, Table 1 in Farmer et al. lists 8 other behaviors × 2 treatments. Of these 16 comparisons to the vehicle control, five indicated reductions and one indicated an increase in activity of some sort, meaning that certain behaviors (number of ejaculations, latency to first mount, number of crossings to the male side, latency to return to the male side) were unaffected by one or both treatments [NOTE: using Dunnett's post-hoc comparisons that do correct for multiple comparisons]. Make of that what you will.
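To see why the bracketed notes about correction matter, here's a minimal simulation (group sizes and counts are illustrative, not taken from the paper) of many null comparisons against a shared control group, using a Bonferroni adjustment as a simple stand-in for Dunnett's test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_groups, n = 2000, 16, 12  # 16 null comparisons vs. one control, per simulated "study"
alpha = 0.05

uncorrected = 0
corrected = 0
for _ in range(n_sims):
    control = rng.normal(size=n)
    groups = rng.normal(size=(n_groups, n))  # no true effect anywhere
    p = np.array([stats.ttest_ind(g, control).pvalue for g in groups])
    uncorrected += (p < alpha).any()            # any "significant" t-test at all?
    corrected += (p < alpha / n_groups).any()   # Bonferroni stand-in for Dunnett

print(f"family-wise false-positive rate, uncorrected t-tests: {uncorrected / n_sims:.2f}")
print(f"family-wise false-positive rate, corrected:           {corrected / n_sims:.2f}")
```

With 16 comparisons and no real effects, uncorrected t-tests "find" at least one significant difference in roughly half of the simulated studies, while the corrected analysis stays near the nominal 5%. That's the difference between the starred comparisons in Fig. 1 and the more conservative Table 1 results.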

However, the pained females did indeed spend significantly less time in their male partner's side of the apparatus.

Next, some of the pained female mice were given pregabalin (Lyrica), an anticonvulsant drug used to treat neuropathic pain (kindly provided by one of the study sponsors, Pfizer). You'll be comforted to hear that analgesic administration and concomitant pain relief will lead to increased sexual activity in injured female mice (and probably in injured creatures of any sort).

On the other hand, administration of the non-selective dopamine agonist apomorphine, which is “pro-sexual” in mice but strongly emetic in humans, is unlikely to be welcomed by women as an antidote to a pain-quashed libido. Apomorphine (not related to morphine) is sometimes given to Parkinson's patients, but always in conjunction with other drugs to prevent vomiting. In fact, apomorphine is so unpleasant to humans that it has been used for aversion therapy in gay people (an “anti-sexual” agent if there ever was one).

Anyway, it was interesting to learn that rodents do not vomit, and that apomorphine reverses the pain-induced reduction in sexual behavior exhibited by female mice. This was interpreted to mean that sexual motivation was enhanced. But since apomorphine also increases locomotion in rodents, I wonder if other appetitive behaviors were enhanced as well.

When given to injured female mice, melanotan-II reversed the reduction in sexual behavior. Unlike apomorphine, however, melanotan-II: (1) does not increase locomotion, and (2) is undergoing further testing in humans via a subcutaneous route of administration that doesn't increase blood pressure. Moreover, the authors are highly aware of their potential animal model:

Thus, the reversal of pain-induced reductions in female-paced sexual behavior likely reflects an enhanced incentive value of the paced mating context, indicating that motivational mechanisms can overcome the effects of pain. We suggest that restoration of pain-induced loss of libido may provide a more sensitive test of prosexual drugs than current paradigms.2

...Originally developed as a self-tanning agent, the drug had been repurposed when male study subjects reported a surprising side effect: erections. ...

Pfaus showed me stunning testimonials from human test subjects. "On the five-point scale, I would rate the erection I had as a six," said one of the 1,300 anonymous testers. "You get this humming feeling," said another. "You're ready to take your pants off and go."

The drug worked equally well on women, who chronicled "an intense arousal" that lasted from six to 72 hours. "I was focused on sex," said one of the women.

However, the pesky side effects of increased blood pressure (in some men) and nausea (in one third of the women) were still an issue. That didn't stop the black market bremelanotide distributors.

But was it safe? "Well," says Pfaus, "we never resolved that blood pressure thing. There's no guarantee of purity. The FDA won't regulate it."

Five years later, Palatin has its subcutaneous version of bremelanotide in Phase 2B clinical trials for Hypoactive Sexual Desire Disorder (poster PDF) and other Female Sexual Dysfunctions (poster PDF). HSDD is a controversial diagnosis, discussion of which is beyond the scope of this post.3

The media furor over the “not tonight, dear, I have a headache” evolutionary interpretation of sexual behavior in mice has completely overshadowed the potential drug marketing angle.

“Not tonight, dear, I have a headache.” Generally speaking, that line is attributed to the wife in a couple, implying that women’s sexual desire is more affected by pain than men’s.

Now, researchers from McGill University and Concordia University in Montreal have investigated, possibly for the first time in any species, the direct impact of pain on sexual behaviour in mice. ...

“We know from other studies that women’s sexual desire is far more dependent on context than men’s – but whether this is due to biological or social/cultural factors, such as upbringing and media influence, isn’t known,” says Jeffrey Mogil, a psychology professor at McGill and corresponding author of the new study. “Our finding that female mice, too, show pain-inhibited sexual desire suggests there may be an evolutionary biology explanation for these effects in humans – and not simply a sociocultural one.”

I've written at length about whether animal models of sexual problems are appropriate stand-ins for the human condition:

Which brings us to animal models for what we typically regard as profoundly human states: longing, angst, futility. Or Desire, Dread, and Despair. The words don't easily lend themselves to rodent analogues, because they remind us of an unrequited crush or an existential crisis...

The animal models of these states are more mundane and less abstract, yet important for potentially explaining the neural mechanisms underlying human suffering: addiction, anxiety, and depression. But are they really adequate stand-ins for the human condition? Of course not. My purpose here isn't to critique animal research, but rather to consider actual behaviors and how they map onto the terminology used to describe them.

Does the model of pain-induced reduction of sexual behavior in female mice hold up in humans? The claim is that a lack of sexual motivation (or libido, if you will) is the inhibiting factor, rather than the pain itself.

Real Pain in Real Humans

How does chronic pain affect human sexual behavior? Is there a pronounced difference between men and women in terms of responsiveness? Is it true that this topic has never been studied?

One survey of 327 chronic pain patients (Ambler et al., 2001) found few differences between men and women:

Seventy-three percent of respondents had pain-related difficulty with sexual activity; most had several, in various combinations of problems with arousal, position, exacerbating pain, low confidence, performance worries, and relationship problems. ... There were few differences between men and women, and only weak relations emerged between specific problems and mood and disability.

Furthermore, it wasn't easy to attribute the problems to reduced libido versus physical limitations, as there was no simple relationship between primarily physical or primarily psychological problems and overall physical, psychological, and emotional health.

Several other studies have examined sexual function specifically in patients with arthritis, a chronic pain condition. van Berlo et al. (2006) analyzed surveys from 271 patients with rheumatoid arthritis and found that men felt less sexual desire, while women masturbated and fantasized less often than controls. However, the patients did not report a difference in sexual satisfaction (although we don't know about the 77% who did not return the questionnaire).

An earlier study examined the effects of osteoarthritis of the hip joint on sexual activity (Currey, 1970). The author mailed a questionnaire to 235 potential patients and received replies from 121. He found that sexual problems were more commonly reported by women, but these were attributed to stiffness and pain; a decline in sexual motivation was not the primary factor. In fact, the causes of sexual difficulty (i.e., interference with heterosexual intercourse) did not differ between men and women: for women, pain in 49%, stiffness in 76%, and loss of libido in only 20%; for men, those numbers were 50%, 75%, and 27%.

So much for bremelanotide in female patients with chronic pain...

You may complain about demand characteristics and biased samples among those who complete and return surveys about sexual behavior, even when anonymous. Mice are so much simpler, they're not embarrassed to talk about it, they're not influenced by how their partner or doctor may react. Male mice are not less inclined to report sexual problems because they might be perceived as less macho. And female mice don't become sexually disinterested if their husbands are inconsiderate at any number of levels.

Oh wait, those are all sociocultural factors, which simply cannot explain the flighty female libido.

1 DISCLAIMER: Note that I am not an expert in mouse sexual behavior, so I am not qualified to critique the study on those grounds. I recognize that the present experiments represent a huge amount of work that builds upon a body of research by established investigators.

4 But Were the Experimenters Male or Female? Another study by the same research team received even more press coverage: the finding that male experimenters stress out laboratory rodents to a much greater extent than female experimenters. However, we don't know whether the animal handlers in the present study were male, female, or both.

ADDENDUM May 5 2014: Bethany Brookshire has a fantastic summary of that study, You smell, and mice can tell. A closer examination of the author contributions on the Farmer et al. paper suggests that the majority of investigators handling the mice (perhaps 10 out of 11) were female.

A new study has tricked undergraduates into believing that “Spintronics,” a whimsical new “mind reading” technology constructed using an old hair dryer, was able to accurately read their thoughts (Ali et al., 2014). This held even for students enrolled in a class on the pros and cons of neuroimaging methods taught by the senior author (McGill Professor Amir Raz). The paper coined the phrase “empirical neuroenchantment” to explain why a highly dubious experimental setup would lead to such a deficit in critical thinking.

The participants were 58 McGill students, 26 of whom were upper-level psychology, neuroscience or cognitive science majors enrolled in a skeptical neuroimaging course that warned them about overblown claims. Furthermore, the professor had lectured about his experience as a “mind reading” magician who fools audiences into believing he has paranormal abilities:

The professor in the course (AR) repeatedly harped on the present impossibility of mind-reading and tested this information on the final examination verifying that students internalized these points. He also spoke about his background as a mentalist – a magician who performs psychological tricks, such as mind-reading – and led class demonstrations to exemplify why the public often misinterprets these effects and takes them for genuine paranormal powers.

And in fact, sleight of hand was used to further the ruse that the hair dryer contraption was able to read their minds. Subjects were told they were participating in a study on “The Neural Correlates of Thought” (amusingly described in the Methods) where they...

...encountered a rickety mock brain scanner built from discarded medical scraps from the 1960s and adorned with an old-fashioned hair-dryer dome [shown in the figure above]. We told participants that scientists at the Montreal Neurological Institute had developed new experimental technology to decode resting state brain activity and read the human mind. We labeled the technology Spintronics and displayed warning signs around the scanning equipment similar to those found in MRI environments.

The participants were told to think of a two-digit number, a three-digit number, a color, and a country and to write down their answers on a piece of paper. The first author cleverly pocketed their answers,1 then participants were told to think about their choices while their brains were faux scanned. During this time, “a pre-recorded video displayed rotating three-dimensional brain slices with accompanying scanner-like audio, lending the appearance of collecting and analyzing patterns of brain activity.”

Afterwards, the subjects were shown the results of the scan. Lo and behold, the machine could read their minds! A brief questionnaire rated their level of belief on a 0 to 6 point scale (from “not at all” to “extremely”).

Can we conclude from the present study that neuroimaging is special in the annals of scientific technology in its ability to dupe even those who should know better? No, and the authors acknowledge as much. We don't know whether the dual phenomena of deferring to experts in a professional laboratory, and overriding scientific knowledge on the basis of one compelling experience, would occur in other fields of study. We could potentially see meteoroenchantment or roboenchantment in the realms of weather prediction and artificial intelligence, respectively.

Remember the “seductive allure” of colorful brain images? This was the idea that college undergraduates could be swayed to believe implausible explanations for psychological phenomena if accompanied by brain images (McCabe & Castel, 2008). For example, a fictitious news article explaining that ‘Watching TV is Related to Math Ability’ — since watching television and completing math problems both lead to activation in the temporal lobe, watching TV will of course improve math skills — was more believable when accompanied by a brain scan than by a bar graph.

The Not So Seductive Allure of Colorful Brain Images

However, this finding was not replicated in more recent studies (Farah & Hook, 2013; Michael et al., 2013; Schweitzer et al., 2013). Is this because participants in psychology experiments have gotten more sophisticated in the past five years?2 Or is it because the results weren't that strong to begin with?

1 I should add here that the first author, Sabrina Ali, was an undergraduate researcher at the time, and thus the participants may have had fewer suspicions that she would try to dupe them (as opposed to the magician, Dr. Raz). The present experiment was a portion of Ali's Master's Thesis at McGill.

2 More sophisticated, say, from reading critical neuroscience blogs? Or much more likely, reading critical coverage in places like the New York Times? Or am I living in a bubble which assumes way too much public interest in these topics?

In our continuing twilight saga on the seductive allure of all things neuroscientific comes this new entry by Rhodes et al. (2014). The paper isn't available yet so the abstract will have to do for now:

Abstract

Previous studies have investigated the influence of neuroscience information or images on ratings of scientific evidence quality but have yielded mixed results. We examined the influence of neuroscience information on evaluations of flawed scientific studies after taking into account individual differences in scientific reasoning skills, thinking dispositions, and prior beliefs about a claim. We found that neuroscience information, even though irrelevant, made people believe they had a better understanding of the mechanism underlying a behavioral phenomenon. Neuroscience information had a smaller effect on ratings of article quality and scientist quality. Our study suggests that neuroscience information may provide an illusion of explanatory depth.

Do colorful brain images and neuroscientific information hold powerful sway over the unsuspecting reader's logic, leading them to overlook shoddy science coverage? From what I can gather, the seductive allure of neuroimages has not replicated (Farah & Hook, 2013; Michael et al., 2013; Schweitzer et al., 2013), but the appeal of neuroscience information (à la Weisberg et al., 2008) has yet to lose all its luster.1

Farah and Hook also debunked the study of Weisberg et al. (2008), which didn't use images at all but added neuroscience-y explanations to 18 actual psychological phenomena. The problem was that the neuroscience-y paragraphs were longer than the no-neuroscience paragraphs. The author of the excellent but now-defunct Brain In A Vat blog had a similar objection, as explained in I Was a Subject in Deena Weisberg's Study...

Although the scores on some LuCiD factors were indeed significantly higher after frontal stimulation at 25 Hz (beta, actually) and/or 40 Hz (gamma) frequencies (relative to sham or other frequencies), this did not mean the dreams were technically “lucid”.

The LuCiD scale consists of 28 statements, each followed by a 6-point rating scale (0: strongly disagree, 5: strongly agree). Insight is the awareness that one is currently dreaming, Dissociation is taking a third-person perspective, and Control is control over the dream plot.

Of the eight LuCiD factors, Insight is the single most important criterion for lucid dreaming (Voss et al., 2013). However, the mean Insight score in the current study is well below that reported for lucid dreams in the earlier study used to construct the scale.

In other words, the 25 Hz and 40 Hz brain stimulation significantly increased Insight and Control, but not to the levels reported in lucid dreams (according to the authors' previous definition). The definition in the present study was less stringent: “Lucidity was assumed when subjects reported elevated ratings (>mean + 2 s.e.) on either or both of the LuCiD scale factors insight and dissociation.”
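This less stringent criterion is easy to state concretely. Here's a small sketch of the mean + 2 s.e. cutoff, using made-up sham-condition ratings (the paper's actual numbers aren't reproduced here):

```python
import numpy as np

def lucidity_threshold(sham_scores):
    """Cutoff in the style of the paper's criterion: 'elevated' means
    greater than the baseline mean plus two standard errors."""
    scores = np.asarray(sham_scores, dtype=float)
    sem = scores.std(ddof=1) / np.sqrt(scores.size)  # standard error of the mean
    return scores.mean() + 2 * sem

# Hypothetical sham-stimulation Insight ratings on the 0-5 LuCiD scale
sham = [0.4, 0.9, 0.6, 1.1, 0.7, 0.5]
threshold = lucidity_threshold(sham)
print(f"a report counts as 'lucid' if Insight exceeds {threshold:.2f}")
```

Note how low such a cutoff can sit on a 0-5 scale: a report can clear "mean + 2 s.e." while remaining far below the Insight scores of bona fide lucid dreams.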

Nonetheless, induced gamma band oscillations did result in a heightened perception of self-awareness during REM sleep, in particular the ability to view the ongoing dream activities as a detached observer. But don't waste your money investing in the latest neurocrap that claims to induce lucid dreaming... As Seen On Nature Neuroscience.

1 Note that tACS is different from the usual DIY tDCS (transcranial direct current stimulation). tACS is thought to modulate and entrain brain oscillations in a frequency-specific manner, although others are much more cautious in their interpretation.

Just when we thought it was safe to bury the dead salmon of uncorrected statistical thresholds in neuroimaging studies, a new and incendiary study on face processing in pedophiles emerges (Ponseti et al., 2014). Even if it were surprising and informative that “Human face processing is tuned to sexual age preferences,” the fMRI data analyses failed to apply the correction for multiple statistical comparisons that is standard in the field. By using a very liberal threshold of p < 0.01, uncorrected for the large number of tests, the results could be a series of untrustworthy false positives.1
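For intuition about what an uncorrected voxel-wise threshold buys you, here's a back-of-the-envelope simulation (voxel and subject counts are made up for illustration, not taken from Ponseti et al.) of how many "significant" voxels pure noise produces at p < 0.01:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_voxels, n_subjects, alpha = 50_000, 20, 0.01  # illustrative numbers only

# Pure-noise "contrast" values: one t-test per voxel against zero, no real effect
data = rng.normal(size=(n_voxels, n_subjects))
p = stats.ttest_1samp(data, 0.0, axis=1).pvalue

print(f"voxels passing p < {alpha} uncorrected: {(p < alpha).sum()}")
print(f"expected by chance alone: {n_voxels * alpha:.0f}")
```

Roughly n_voxels × alpha voxels pass the threshold despite there being nothing to find, which is exactly why family-wise or false-discovery-rate correction is demanded, and why the dead salmon study was so memorable.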

Importantly, the basic pattern of findings, that visual parts of the brain are more responsive to pictures of faces that fall within the broad category of “sexual attractiveness”, does not tell us why someone has a particular sexual orientation, nor does it tell us if this preference is “hard wired” (i.e., innate).

The participants in the study were 56 men, 11 of whom were heterosexual pedophiles (prefer young girls), 13 homosexual pedophiles (prefer young boys), 18 heterosexual teleiophiles (prefer adult women) and 14 homosexual teleiophiles (prefer adult men). These are small groups, but to complicate matters, half of the pedophiles had committed sexual offenses and the other half had not. This is a critical difference, as one might expect differences between men who could refrain from acting on their impulses and those who could not. Yet, activation in the dorsal striatum was interpreted as a potential indicator of “efforts in withholding actions”.

Furthermore, the results presented here were part of a larger study that aimed to classify pedophiles solely on the basis of their brain responses to nude photos showing whole-body frontal views or genitals only (Ponseti et al., 2012). The authors claimed an astounding 95% accuracy in distinguishing between pedophiles and non-pedophiles.2

Overall, the participants viewed 14 different categories of visual stimuli in these two papers, so you can see that the number of potential statistical comparisons is astronomical.

The take-home message is that the participants' subjective attractiveness ratings of each face (completed after the fMRI study) were much more reliable at identifying their sexual preferences (p < 0.001) than the brain imaging data. Neuroscientists working with such controversial populations need to be especially careful in analyzing their data, and aware of how their work may be used in a broader social context.

I went on this trip once, back to my hometown after a long absence. Have you ever felt that your surroundings seem odd and distant, and that you're completely detached from them? That the things and places around you aren't real? This can happen to me, on occasion.

It did on this trip, perhaps because I've dreamed about those places so many times that the real places and the dream places are blurred in memory.

Of course time marches on. The stores in the strip mall have changed, and you go to Starbucks with your father. But sometimes new and surprising things appear in the landscape.

Or maybe old and unexpected things pop up in the background, renewing a long-standing confusion between rural and suburban.

These nostalgic travel vignettes illustrate the phenomenon of derealization, a subjective alteration in one's perception or experience of the outside world. The pervasive unreality of the external environment is a key feature, along with emotional blunting. The world loses its vividness, coloring, and tone. Some even report seeing things as if they're looking through a fog or a haze. Or a pane of blurry glass.

Not surprisingly, these dissociative states can be induced by drugs such as ketamine (a dissociative anesthetic) and hallucinogens (e.g., LSD, psilocybin). The symptoms can also be induced by stress and anxiety, or by trauma, or by sleep deprivation. Not all instances of derealization and depersonalization qualify as a disorder, however.

Derealization: "Experiences of unreality or detachment with respect to surroundings (e.g., individuals or objects are experienced as unreal, dreamlike, foggy, lifeless, or visually distorted)."

B. "During the depersonalization or derealization experiences, reality testing remains intact."

C. "The disturbance is not attributable to the physiological effects of a substance (e.g., a drug of abuse, medication) or another medical condition (e.g., seizures)."

D. "The disturbance is not better explained by another mental disorder."

What other mental disorders can manifest as derealization (included as one of a core set of symptoms)? Among the most curious of these is an unusual neurological disorder called Kleine-Levin syndrome (KLS).

KLS is considered a relapsing/remitting disease that typically begins during adolescence; there is no known cause, no objective laboratory findings, and no cure. In the review by Arnulf et al. (2012), episodes lasted 10-12 days on average, followed by almost 6 months of normal sleep, cognition, and behavior. The disease can resolve spontaneously once the patient reaches their 30s. Those with childhood or adult onset can show a different disease course.

The review suggested that confusion, apathy, and/or derealization are the best diagnostic indicators, when coupled with recurrent hypersomnia.

The Phenomenology of Derealization in KLS

Since derealization is such a prominent symptom of KLS, Arnulf et al. (2012) provided examples reported by patients during Kleine-Levin episodes:

In the shower, patients might see the water flowing on their bodies, but not feel its temperature

Patients who injure themselves might not understand when or how the injury happened or that it has happened at all

Actions do not have consequences

Patients might do something to test for a normal action, such as breaking an object (e.g., a cup)

Patients might ask whether they are dead or alive

Are there any changes in brain activity during symptomatic periods in KLS? A Paris-based research group led by Dr. Isabelle Arnulf recently reported on a functional imaging study in 41 asymptomatic patients (Kas et al., 2014), 11 of whom were also scanned during an episode. The authors used SPECT (single photon emission computed tomography) to measure blood perfusion in the brain. SPECT is a relatively inexpensive cousin of PET scanning, albeit with lower spatial resolution. Although there is a place for SPECT in nuclear medicine, it is not accepted as a method to diagnose psychiatric disorders, and Kas et al. did not treat it as such.

I found it remarkable that 11 patients were scanned during an episode, a phenomenal number considering the rarity of the disease and the nature of the presenting symptoms. In fact, two additional patients could not be scanned because they were so agitated and delusional. The patients completed questionnaires related to KLS symptoms, sleep disturbances, apathy, depression, and the Depersonalization/Derealization Inventory (Cox and Swinson, 2002).

One major finding was reduced perfusion in the general region of the temporal-parietal junction (TPJ), which was associated with more severe symptoms of derealization. The TPJ has been related to multimodal sensory integration – the integration of information from the somatosensory system (body knowledge) and the external world (visual, auditory) – among other things (like theory of mind, attention, and language). Damage or dysfunction of the TPJ can result in out-of-body experiences (Blanke & Arzy, 2005).

Changes in perfusion between episodes were also observed (relative to controls). KLS patients showed hypoperfusion in the hypothalamus, thalamus, caudate nucleus, and some cortical association areas that persisted during asymptomatic periods.

Although we must issue the appropriate caveats (small patient group, imprecise localization, limitations of the methodology, etc.), the current results are suggestive of a neural correlate of derealization. I'll keep this in mind the next time I visit my hometown after a long absence...

“We believe this to be a moment in the science of the brain where our knowledge base, our new technical capabilities, and our dedicated and coordinated efforts can generate great leaps forward in just a few years or decades. Like other great leaps in the history of science—the development of atomic and nuclear physics, the unraveling of the genetic code—this one will change human society forever. Through deepened knowledge of how our brains actually work, we will understand ourselves differently, treat disease more incisively, educate our children more effectively, practice law and governance with greater insight, and develop more understanding of others whose brains have been molded in different circumstances.”

That modest quote jumped out from the Preamble to the BRAIN Working Group Report to the Advisory Committee of the NIH Director. A decade-long $4.5 billion project that focuses on technology development and neural circuits in model systems will change government, society, and human interactions forever.

In any project, decisions must be made about where to focus. Neuroscience addresses brain function from the level of molecules to the level of psychology, and at many levels in between. This plan for the BRAIN Initiative proposes a concerted attack on brain activity at the level of circuits and systems, rather than suggesting incremental advances in every area. All areas of neuroscience are important, however, and the BRAIN Initiative should therefore supplement, not replace, existing efforts in basic, translational, and clinical neuroscience.

Work on DARPA’s Systems-Based Neurotechnology for Emerging Therapies (SUBNETS) program is set to begin with teams led by UC San Francisco (UCSF), and Massachusetts General Hospital (MGH). The SUBNETS program seeks to reduce the severity of neuropsychological illness in service members and veterans by developing closed-loop therapies that incorporate recording and analysis of brain activity with near-real-time neural stimulation. The program, which will use next-generation devices inspired by current Deep Brain Stimulation (DBS) technology, was launched in support of President Obama’s brain initiative.

UCSF and MGH will oversee teams of physicians, engineers, and neuroscientists who are working together to develop advanced brain interfaces, computational models of neural activity, and clinical therapies for treating networks of the brain. The teams will collaborate with commercial industry and government, including researchers from Lawrence Livermore National Laboratory and Medtronic, to apply a broad range of perspectives to the technological challenges involved.