It is probably impossible to look at a map of brain activity and predict or even understand the emotions, reactions, hopes and desires of the mind.

The first basic problem is that regions of the brain handle a wide variety of different tasks. As Sally Satel and Scott O. Lilienfeld explained in their compelling and highly readable book, “Brainwashed: The Seductive Appeal of Mindless Neuroscience,” you put somebody in an fMRI machine and see that the amygdala or the insula lights up during certain activities. But the amygdala lights up during fear, happiness, novelty, anger or sexual arousal (at least in women). The insula plays a role in processing trust, insight, empathy, aversion and disbelief. So what are you really looking at?

...the uses and abuses of neuroscience and brain imaging. Sally and Scott describe their book as an anchor in this discussion to expose "MINDLESS NEUROSCIENCE" and also as a critique of the assumption that the brain is the most important level of analysis for understanding human behavior.

Brooks began by talking about himself, presenting a revisionist history of his own pop-neuro cheerleading: in his own writings, he said, neuroscience didn't help that much, but experimental psychology helped him a lot.

"I wrote a book a few years ago about mindless neuroscience, and it did very well, so you can explain the seductive appeal of that book."

... "I started a book that I thought was going to popularize neuroscientific findings and how it'd apply to public policy and the sort of things that apply in this world, the world we deal with here in this building [the American Enterprise Institute]."

The Positive Side of the Cognitive Neuroscience Revolution

Brooks asked each of the speakers about the bright side of brain science "before we talk about the extremism." What follows below are my notes and paraphrases of the conversation.

Lilienfeld said the field was "brainless" when he came of age in the '80s and '90s -- genetics could not possibly cause behavior, and environmental factors were considered the primary causes of autism and schizophrenia. [NOTE: I found this a bit odd, since he attended graduate school at the University of Minnesota, site of the famous Minnesota twin studies. But I wasn't there, so what do I know.] Ultimately, the injection of neuroscience was helpful, he said.

Satel mentioned the '70s biological revolution in psychiatry. In her fields of addiction and PTSD, she felt things were too biological. She worked with Vietnam veterans with PTSD, noting both a biological component (failure of fear extinction, the adrenergic system, the hypothalamus, etc.) and a profound existential dimension -- a challenge or threat that undermined their integrity -- "the meaning they attributed to it was as important as the mechanism, and they both interacted" -- we should come back to somewhere in the middle. "Not to lose the mind in the age of brain science."

Democrat Brain, Republican Brain

A bit of conservative humor was injected into this portion of the conversation. Brooks mentioned the infamous This Is Your Brain on Politics opinion piece that masqueraded as actual science. "Very good pictures, aside from the fact that one is obviously a lot larger, more crenulated." [NOTE: perhaps he meant convoluted?]

"Was that a typically accurate story? If we scanned the brains here [AEI] and the Brookings brains would we see a big difference?"

Lilienfeld: "No." Ha ha [laughter].

Satel: "Yeah, ours would be bigger." Ha ha. She said that article was a bit of a fiasco -- one of the articles that called our attention to this. It almost read as a parody, and it made a mockery of fMRI.

Free Will and Addiction

This might be considered the most controversial portion of the program, since it had policy implications. Brooks called the highbrow version of determinism "nothing but neurons" (Bryan Appleyard's "nothing but-ism"), where neuroscience will replace psychology. [NOTE: There is a well-articulated philosophical version of this view called eliminative materialism.] So we have no free will.

Satel (being more measured and erudite than Brooks) said there are a lot of steps in there... We're materialists, decapitation will prove it.

Which phenomena are best observed at the level of the brain vs. at the level of the mind? Satel said that addiction illustrates the problem of 'neurocentrism' (neurons, genes, transmitters, proteins) -- it's a good approach for curing Alzheimer's disease but not for dealing with addiction. Neural mechanisms underlie addiction, sure, but if you're a clinician or a policy maker, is that the best way to interact with patients or develop policies? NO, she said. Regarding those with addictions: "in some ways their situation is fundamentally voluntary - let me define that - it's not easy to throw away that meth pipe but do these brain changes make them so helpless or out of control that they can't modify their behavior in response to reason or incentive or consequences?"

Brooks asked: how do we define the boundary in this case?

Satel answered that "most people do overcome their addiction" - she referred to the clinician's illusion - they see the worst patients who have comorbidities - but "most people make a choice" - life is hard - essentially it's a self-medicating enterprise. In essence, she suggested that most people quit drugs/alcohol on their own, and that it's a matter of free will. Propensities for addiction and changes in the brain due to substance abuse be damned.

Lilienfeld said that the free will vs. determinism debate will not be resolved any time soon. Interestingly, he holds the view that neuroscience does not inform the free will debate, because it still doesn't say whether there's a ghost in the machine even if determinism is true. [NOTE: uh....] There's no question that people make decisions in everyday life -- but rarely does addiction make it impossible, he said.

The Q & A with a bunch of old white guys in the audience (and three young white guys) came about halfway through the video.

There we learn that Lilienfeld is a reductionist but not an eliminative materialist. An interesting point comes up when he addresses RDoC, one that somewhat questions his reductionist credentials. He's afraid of privileging biological indices as the best way of measuring a psychological system. "If you want to find out if someone's an impulsive person, you ask them 'are you an impulsive person?' or you could give them laboratory tests and brain imaging. Will the latter tests give more information than just asking them and their families? We don't know that," he said.

Brooks blurts out, "That seems ridiculous to me! If you want to take an impulsive person you flick them on the ear and see what they do."

Or perhaps you give them a microphone and a column in the New York Times and see what they do...

Satel ends the conversation by mentioning their book's contribution to general neuroliteracy and "...how these levels of analysis can be bridged.1 ... it's a highly dynamic system that goes back and forth, and not to get seduced by these beautiful pictures which led to our preferred title, which was 50 Shades of Grey Matter."2

-- Scott Lilienfeld received his BA in Psychology from Cornell University in 1982 and his PhD in Clinical Psychology from the University of Minnesota in 1990.

-- Sally Satel [PDF of CV] received her BS from Cornell University in 1977, her MS from the University of Chicago in 1980, and her MD from Brown University in 1984.

So that's the movie version of the book (which I have not read, other than excerpts). Neuroskeptic wrote an actual book review:

I wanted to dislike this book.

You see, I was suspicious of the fact that one of the authors is a resident scholar with the American Enterprise Institute (AEI), an organization whose political values I oppose, and, insofar as it’s an organization with political values, has little business going near science.

Then, when I found that the book cites me (with fellow neurobloggers Mind Hacks and Neurocritic) in the Acknowledgements and elsewhere, that actually made it worse. A sense of intellectual possessiveness joined my ideological reasons for not liking the thing.3

I was hoping that it would be dreadful so that I could unleash the venom I had brewed up: “Ayn Rand, Please Get Off My Bandwagon”; “The only good bits here are the bits they stole from me” – it would have been glorious.

However, sadly, Brainwashed turned out to be good.

As usual, the book was better than the movie.

Footnotes

1 Those "levels of analysis in neuroscience" figures are as old as time, so there might not be much new ground covered there. Behavior is an explicit part of many of these. It's not a new concept to cognitive scientists, either.

3 I reacted the same way when I first read an excerpt of the book in Salon (Pop Neuroscience is Bunk!). I had blogged about at least 18 of the examples given in that excerpt alone, so I felt that someone else had written a book based largely on neuroblogs (mine and others).

Take the USA Today article above. It wasn't an egregious offender, yet it used an unfortunate mixed metaphor to make a point:

The brain is the temple of the mind, but are images of it in action a little over-worshipped?

The 'temple of the mind' terminology was from a Nature News story on the structure of the rat glutamate receptor GluA2. I found it bizarre that a reference to an article on protein crystallography, a method that produces an atomic-level picture of a single protein, was being used to prop up the 'seductive allure' of human brain images (a finding which hasn't replicated). In turn, someone decided that 'temple of the mind' was an appropriate way to describe the structure of the GluA2 receptor.2

...Tallis's beef is not just with crude methodologies. In detail so pitiless it threatens to be unreadable in parts, Aping Mankind argues that neuroscientific approaches to things like love, wisdom, and beauty are flawed because you can't reduce the mind to brain activity alone.

I believe that the brain is the ultimate arbiter of behavior (and that "the mind" is ultimately reducible to brain activity), but this doesn't preclude the view that interpersonal and social factors influence people's actions. Does anyone actually think that brains exist in isolation from any complex external influence (other than sensory stimulation)? It's a false dichotomy and a straw man argument foisted on neuroscientists by the neurotrashers. There are many different levels of analysis within the field, from molecules to synapses to systems to behavior.

Graduate neuroscience programs are interdisciplinary in nature, and subfields like Cultural Neuroscience and Neuroanthropology are on the leading edge of incorporating culture into brain function (and vice versa). But very few fields cover that broad a scope.

Scientists and scholars in various disciplines generally do specialized research based on narrow areas of expertise. Are historians negligent because they're not incorporating knowledge of the brain into their analyses of past events? Of course not. So why are neuroscientists remiss if they fail to include detailed sociological and developmental accounts of crime in their Human Brain Mapping journal articles?3

It's not surprising that each discipline privileges one level of explanation over another. The danger lies in discarding all other explanatory models in favor of your own. This also holds for theorizing within a field.

The crudity of Adrian Raine's opening arguments in The Anatomy of Violence – his manifesto for neurocriminology – is curiously refreshing. It is a long time since evolutionary psychology was served so neat.

A number of scientists have been vocal proponents of study pre-registration, in which detailed methodological and statistical plans for an experiment are registered in advance of data collection. The admirable goal is to eliminate questionable research practices such as failing to report all of a study's dependent measures, deciding whether to collect more data after looking to see whether the results are significant, and selectively reporting studies that 'worked.' Along these lines, the journal Cortex has recently launched a new type of article called Registered Reports, largely through the efforts of Dr Chris Chambers:

Unlike conventional publishing models, Registered Reports split the review process into two stages. Initially, experimental methods and proposed analyses are pre-registered and reviewed before data are collected. Then, if peer reviews are favourable, we offer authors “in-principle acceptance” of their paper. This guarantees publication of their future results providing that they adhere precisely to their registered protocol. Once their experiment is complete, authors then resubmit their full manuscript for final consideration.

Many of these same researchers are also strong advocates for publication reform, some even calling for journals to be eliminated altogether in favor of post-publication, crowd-sourced review and reputation ranking. But supporters of both pre-registration and the Open Science Framework haven't yet used the OSF's capability to register their own new work (as opposed to the Reproducibility Project's replications).

Since calls for pre-registration of basic research studies have been ongoing for years, perhaps its proponents have been too conservative about taking matters into their own hands. One might even say there's a distinct lack of risk-taking among the strongest believers. What's to prevent them from pre-registering studies in public databases or on their own blogs (without formal peer review but perhaps soliciting comments)? Or publishing a Study Protocol in a BMC journal (like @SallyScientist did)?

Motor Cortex and Monkeys are Responsive to Statistical Regularities of Letter Strings

A cool new study in the Journal of Cognitive Neuroscience questions the notion that the premotor cortex response to action words is due to implicit motor simulation (de Zubicaray et al., 2013). Previously, the conceptual representation and/or simulation of action words in motor regions of the brain had been taken as evidence for embodied theories of language comprehension (Glenberg & Kaschak, 2002). These theories have been based on fMRI and EEG experiments showing that reading or listening to verbs that depict actions of the face, arm or leg activates somatotopically specific regions of motor cortex (Hauk et al., 2004).

Their novel fMRI study showed that the motor cortex response to action verbs was actually due to ortho-phonologic probabilistic cues to grammatical class (de Zubicaray et al., 2013). In other words, the spelling and pronunciation of word endings influenced activity in motor regions of the brain. This does present a problem for grounded theories of language comprehension...

The eighth and final season of the hit series Dexter takes a scientific look at serial killers. British actress Charlotte Rampling plays Dr. Evelyn Vogel, a neuropsychiatrist who has written the definitive book on the brains of psychopaths. She's consulting with Miami Metro Homicide on an unusual case where the killer saws open the skull post mortem and scoops out part of the brain (with a melon baller).

Neuroscience is depicted as a somewhat ghoulish yet artistic and stylish endeavor (the corpse with a sawed off head is not shown in this still). This autopsy scene is particularly artsy with its use of red lit retro cabinetry and colorized MRI films on an old school light box.

Dr. Vogel is philosophical about her chosen field. In a conversation with our favorite serial killer and "blood spatter guy" she says:

"I was drawn to forensics too, but I chose to focus on neuroscience. Psychopaths. We both chose murder. Maybe we're both a little crazy."

WARNING! Fake grisly image below the jump reveals the role of the anterior insula in psychopathy.

"Looks like a piece of it has been scooped out," says Dexter at the crime scene.

And indeed, a piece of insular cortex has been scooped out with a melon baller. Looks like someone is trying to send Dr. Vogel a message. And a collection of insulae in jars...

At the police station, Dr. Vogel points out what is supposed to be the insula on a bizarre-looking MRI scan (the sawed off skull is presumably slapped back onto the rest of the body, and the genu of the corpus callosum and the ventricles look odd).

"See this part here? That's the anterior insula cortex, the portion of the brain that processes empathy. And, the hallmark of a psychopath is -- they have no empathy."

Shame is a negative self-conscious emotion that encompasses the feeling that something is terribly wrong with the self as a human being. Feelings of shame are a prominent factor in suicidal thoughts, behaviors, and self-harm (Hastings et al., 2002; Gilbert et al., 2010). Lately I've encountered several articles saying that shame really isn't that bad for you, after all. The first theme of these commentaries is the spectacle of public shaming, and the second theme concerns private shame as a means of social control.

The Futility of Public Shaming

One news article bemoaned the end of shame in American politics. Sex scandals, drug use, and other egregious mistakes just don't have the same permanently negative consequences they used to. If you're a powerful male politician, that is.

Those words used to add up to shame. Put them in the same sentence as a politician’s name, and they ended careers.

Not anymore. The latest batch of unlikely back-from-the-swamp hopefuls are Anthony Weiner and Eliot Spitzer. Weiner resigned his New York City congressional seat two years ago after revelations that he’d tweeted a sexually suggestive picture of himself to a woman who was following him on Twitter. Spitzer left the state’s governorship in 2008 after reports surfaced that federal investigators had tagged him as “Client 9,” soliciting high-end prostitutes.

Scott DesJarlais - pro-life doctor who had multiple affairs with patients and pushed them to have abortions

The lesson from all this: Wind up on the ever-increasing roll of tainted celebrities and re-emerge as the friendly, professional politician that vaulted you into office in the first place, and you’ll probably be OK.

"You'll probably be OK"... if you're a man. Has a female politician ever emerged intact from a federal sex scandal? Or even been involved in such a scandal? If so, the double standard of slut shaming would likely put an end to her career. One of the few women on the state and local list is:

Minnesota Senate Majority Leader Amy Koch (R)... The married mother of one resigned from her leadership position and announced that she would not be seeking reelection shortly before four fellow Republicans indicated that she had been engaged in an "inappropriate" relationship with a male staffer. (2011).

Shaming on the Internet

How about for obnoxious offenders in the general public? Does the public shaming of those who spew idiotic sexist, racist, and homophobic comments on social media do any good?

I started retweeting people complaining about welfare, food stamps, etc. and then following it up with a previous tweet of theirs that makes them look hypocritical/dumb/etc.

I discovered that as I would retweet these, my followers would start @replying these people and let them know they were idiots. They would then delete their offending tweet.

Well, I couldn’t let that happen. So, I screenshot away.

One continuous stream of vile sexist hatred was directed at Women's Wimbledon champion Marion Bartoli. Why? Because she's not a blond model. Even BBC presenter John Inverdale took part in the insults, saying she was "never going to be a looker"... to which Bartoli responded:

"It doesn't matter, honestly. I am not blonde, yes. That is a fact. Have I dreamt about having a model contract? No. I'm sorry. But have I dreamed about winning Wimbledon? Absolutely, yes."

This is the bottom line. She won Wimbledon, and her detractors will never accomplish anything that monumental. Does shaming the immature little boys for their pathetic cries for attention help anyone?

Some of those dudes deleted their accounts (to perhaps reappear another day), but others just go on their merry way with earth-shattering pronouncements like, "I have to untangle my earphones at least 3 times a day" and "Playstation is better than Xbox."

It’s no surprise to anyone that Twitter and Facebook are filled with vile, racist, homophobic, bigoted awfulness. Because humanity is filled with vile, racist, homophobic bigots. ...

The consequence of these shaming sites is that us “enlightened” folk then pile in on the bigots and abuse them and tell them how awful they are. And I’m willing to bet that the number of individuals who have rescinded what is probably years’ of built up bigotry is the same number of terrorist attacks that the Wellington airport security screeners have stopped: Zero. ...

That’s not to say we should let people get away with awfulness, but when we publicly name and shame and by proxy invite the internet to start tormenting these people, we are becoming them. No better than they are because we now have a figure to poke a stick at.

The shame sweepstakes become more costly and damaging once we enter the world of mental illness, addiction, and difference. Yet some still argue in favor of shaming.

Has there been a resurgence of shame as a means of social control?

1. Shame is good for you! Shame is biological, so it's inevitable that those who are different or disabled will feel it. That was the premise of an article in the Atlantic, which in my opinion was complete and utter bullshit.

In response to a spate of teen suicides last year, a number of celebrities (Anne Hathaway, Justin Timberlake, Ellen DeGeneres, among others) used their visibility to castigate people who bully others. When public figures denounce bullying, they draw attention to the power of shame: A victim's experience at the hands of a bully can be so excruciating that life becomes unendurable. . . .

Everywhere we look, pride is on the march, and shame is on the run. ...

If shame is such a bad thing, why did evolution see fit to program it into our genes? Evolutionary psychologists and sociobiologists believe that guilt and shame evolved to promote stable social relationships. According to the Oxford Encyclopedia of Evolution, "conformity to cultural values, beliefs, and practices makes behavior predictable and allows for the advent of complex coordination and cooperation." While the anti-shame zeitgeist views conformity to norms as oppressive, support for a great many of our social norms and the shame that enforces them is virtually unanimous.

For example, many would agree that fathers who walk out on their families, neglect their offspring, and fail to make child support payments should feel ashamed. Shame is the appropriate emotion for those men to feel: if powerful enough, the experience of shame might help them to fulfill their obligations as fathers and members of society.

Is there any scientific evidence that shaming deadbeat dads causes them to pay child support?

But it gets worse, with justifications for the biological and social inevitability of shame. Disabled children, little people, LGBT folks - be ashamed of yourselves! and stay in the closet.

While the efforts of all the parents in Solomon's book [Far From the Tree] to promote healthy self-esteem in their children are worthy and admirable, here is the unfortunate reality: those afflicted with a major disability will inevitably experience a sense of shame for the ways in which they are different, regardless of whether they have been shunned or actively shamed by their peers. Shame spontaneously arises from the perception of unfavorable difference, whether or not society inflicts it upon the person.

Shame springs from the knowledge that your development didn't unfold as might have been expected under normal conditions.

So here it is, according to Joseph Burgo: if you're different in any way, you should feel ashamed for who you are. For simply existing in a less than perfect state. Because you are "pre-programmed" to feel that way.

Here's what I think: shame is a toxic social construction. It's used by religions to control the sexual behavior of their congregations. It's used by bullies to promote their social standing over the weak. It's used by parents to ostensibly make their children into high achievers, but they end up depressed, anxious, eating disordered. It's used by the media to make women (and men) so ashamed of their bodies that they go out and buy products to lose weight, improve their looks, enhance their private parts. Shame seems to hold a central role in the perception of an adverse self-image in young women with eating disorders (Franzoni et al., 2013).

2. Shame is good for you! Addiction is a choice, so those who have a substance use disorder should be shamed into getting clean and sober. That was the territory covered in a 2007 Slate article by Sally Satel and Scott Lilienfeld, authors of Brainwashed: The Seductive Appeal of Mindless Neuroscience. An old blog post by Dirk Hanson at Addiction Inbox pointed me to their essay, which took exception to the notion that addiction is a brain disease:

A full-scale campaign is under way to change the public perception of drug addiction, from a moral failing to a brain disease. Last spring, HBO aired an ambitious series that touted addiction as a "chronic and relapsing brain disease." In early July, a Time magazine cover story suggested that addiction is the doing of the neurotransmitter dopamine, which courses through the brain's reward circuits. And now Congress is weighing in.

Addiction is defined as a chronic, relapsing brain disease that is characterized by compulsive drug seeking and use, despite harmful consequences. It is considered a brain disease because drugs change the brain; they change its structure and how it works. These brain changes can be long lasting and can lead to many harmful, often self-destructive, behaviors.

Characterizing addiction as a brain disease misappropriates language more properly used to describe conditions such as multiple sclerosis or schizophrenia—afflictions that are neither brought on by sufferers themselves nor modifiable by their desire to be well. Also, the brain disease rhetoric is fatalistic, implying that users can never fully free themselves of their drug or alcohol problems. Finally, and most important, it threatens to obscure the vast role personal agency plays in perpetuating the cycle of use and relapse to drugs and alcohol.

And now we get to their justification for shaming:

Finally, dare we ask: Why is stigma bad? It is surely unfortunate if it keeps people from getting help (although we believe the real issue is not embarrassment but fear of a breach of confidentiality). The push to destigmatize overlooks the healthy role that shame can play, by motivating many otherwise reluctant people to seek treatment in the first place and jolting others into quitting before they spiral down too far.

Really??? There is absolutely no evidence that shame motivates an addicted person to seek help. Quite the contrary, shame prevents people from getting the treatment they need (Wiechelt, 2007). Note that shame is different from guilt - with shame you're a bad person, and with guilt you did a bad thing. Why would shaming someone already filled with shame about their own undesirable behaviors be a motivating force for change?

Embarrassment over an excessive-drinking session doesn’t necessarily lead to more sobriety.

In a study of alcoholics and relapse rates, researchers found that the more shame-ridden a drinker looked when talking about drinking — interpreted through body language like hunched shoulders — the more likely he or she was to relapse and the more drinks he or she downed during that relapse. ...

The results add to a body of literature suggesting that widely used shaming and humiliating methods of treating alcohol and other drug problems — such as those seen on shows like Celebrity Rehab — are not only ineffective but also may be counterproductive.

For example, a review of the research on the use of humiliating, confrontational tactics, which attempt to induce shame, found that none of the studies done in four decades supported this approach. In one study included in the analysis, the more the counselor confronted the client with past mistakes or other shaming information about his problem, the more the client drank.

So let's not challenge the anti-shame zeitgeist or encourage public shaming of those with addictions, mental illnesses, disabilities, or differences of any sort.

But note that Lewis holds more nuanced views than his categorical statement indicates (e.g., "it's accurate in some ways"),2 and Szalavitz has reported on predispositions towards addiction that are based on pre-existing differences in brain structure.

Arguing that addiction is either completely a matter of choice or entirely caused by a faulty brain misses the complexity of a person with a brain in a social environment.

Like depression, addiction is a real medical disorder that affects the brain. But if we want to reduce the stigma associated with it, emphasizing recovery and resilience is probably more useful than focusing on definitions of brain disease.

It’s accurate in some ways. It accounts for the neurobiology of addiction better than the “choice” model and other contenders. It explains the helplessness addicts feel: they are in the grip of a disease, and so they can’t get better by themselves. It also helps alleviate guilt, shame, and blame, and it gets people on track to seek treatment. Moreover, addiction is indeed like a disease, and a good metaphor and a good model may not be so different.

But we might also have a seriously confused author.

...an advocate of directed panspermia and has developed his own hypothesis. He believes life did not originate on Earth but was transplanted to Earth by "cosmic seeds" encased in space debris 700 million years after the formation of the Earth. He claims that these genetic seeds filled with DNA contained the genetic instructions for the metamorphosis of all life, including woman and man. He also rejects the neo-Darwinian synthesis, instead replacing it with a form of non-Darwinian evolution which he describes as a "pre-determined evolutionary metamorphosis" which is pre-programmed in DNA of all life on earth.

ADDENDUM (July 18, 2013): That ridiculous Evolution of Sexual Consciousness travesty was originally published in the "Journal" of Cosmology but later retracted due to "censorship" (only to reappear on Joseph's own site):

In the summer of 2011, the journal published an article by Rhawn Joseph entitled "Sexual Consciousness: The Evolution of Breasts, Buttocks and the Big Brain". PZ Myers made fun[12] of the article's thesis and the random images of naked women that it contained. Apparently some other people were not happy about it either, because the article was retracted and moved to Rhawn Joseph's personal site (BrainMind.com), and its original page got replaced with a whiny rant decrying censorship.[13]

Almost all subjects regard the experiment as a test of imagination. This conception is so general that it becomes, practically, a condition of the experiment. Nevertheless, the interpretation of the figures actually has little to do with imagination, and it is unnecessary to consider imagination a pre-requisite. ...

The interpretation of the chance forms falls in the field of perception and apperception rather than imagination.

Rorschach denied that it was projective in nature (p. 123, ibid):

The test cannot be considered as a means of delving into the unconscious. At best, it is far inferior to the other more profound psychological methods such as dream interpretation and association experiments. This is not difficult to understand. The test does not induce a «free flow from the subconscious» but requires adaptation to external stimuli, participation in the «fonction du réel».

Schott (2013) views the inkblots as both artistic entities (noting that Rorschach was a "gifted draughtsman and an excellent art critic") and as visual stimuli for scientific study. Perceptual features of the inkblots are considered in detail:

The pivotal graphic features which constitute the blots, and which give rise to the blots’ perceptual effects, include:

form: their amorphous shape

symmetry

the perception of movement: ‘Movement without Motion’

the blank spaces: figure–ground relationships

the use of colour

shading.

Finally, the article summarizes several neuroimaging experiments that have used the inkblots as stimuli. For example, Asari and colleagues reported that unusual or unique perceptions of the blots were associated with greater activation in the right temporal pole (2008) and with larger amygdala volumes (2010).

Pareidolia is the phenomenon of perceiving a meaningful stimulus (such as a face or a hidden message) in fairly random everyday objects or sounds. We do have quite a propensity to see faces everywhere, and some religious people see the face of god (and other religious iconography) everywhere.

Schott (2013) concludes by suggesting that the images merit further investigation by neuroscientists for studies of pareidolia:

...these iconic ink-blots—which straddle iconography, psychology and neuroscience—deserve further study, and may yet illuminate important aspects of cerebral function, and even dysfunction.

There has been a lively debate recently about study pre-registration, a publishing model (or online repository) where detailed methodological and statistical plans for an experiment are registered in advance of data collection. The idea is to eliminate questionable research practices such as failing to report all of a study's dependent measures, deciding whether to collect more data after looking to see whether the results are significant, and selectively reporting studies that 'worked.'

Chris Chambers and Marcus Munafo wrote a widely discussed article that appeared in the Guardian:

Open letter [with over 80 signatories]: We must encourage scientific journals to accept studies before the results are in

. . .

[The current] publishing culture is toxic to science. Recent studies have shown how intense career pressures encourage life scientists to engage in a range of questionable practices to generate publications – behaviours such as cherry-picking data or analyses that allow clear narratives to be presented, reinventing the aims of a study after it has finished to "predict" unexpected findings, and failing to ensure adequate statistical power. These are not the actions of a small minority; they are common, and result from the environment and incentive structures that most scientists work within.

The Open Data badge is earned for making publicly available the digitally shareable data necessary to reproduce the reported results.

The Open Materials badge is earned by making publicly available the components of the research methodology needed to reproduce the reported procedure and analysis.

The Preregistered badge is earned for having a preregistered design and analysis plan for the reported research and reporting results according to that plan. An analysis plan includes specification of the variables and the analyses that will be conducted.

One could imagine the introduction of two new demerit badges for Questionable and Rejected work.1

Questionable badges are issued when the committee suspects that questionable research practices have been used, as outlined in the paper by John et al. (2012).

The Rejected badge is earned when there is a suspicion that outright fraud may have occurred. This will typically spur an inquiry.

While an admirable goal, there may be aspects of this scheme that the proponents haven't fully considered.

The pre-registration of study designs must be resisted, says Sophie Scott

. . .

...there are numerous problems with the idea. Limiting more speculative aspects of data interpretation risks making papers more one-dimensional in perspective. And the commitment to publish with the journal concerned would curtail researchers’ freedom to choose the most appropriate forum for their work after they have considered the results.

. . .

Moreover, in my fields (cognitive neuroscience and psychology), a significant proportion of studies would simply be impossible to run on a pre-registration model because many are not designed simply to test hypotheses. Some, for instance, are observational, while many of the participant populations introduce significant sources of complexity and noise; as introductions to psychology often point out, humans are very dirty test tubes.

One possible outcome is that certain types of research are privileged over others.2 The badge manifesto states that...

Badges do not define good practice, they certify that a particular practice was followed.

I find this assertion to be kind of hollow in the absence of badges issued for these other types of research, considered unsuitable for Preregistration. Therefore, in the spirit of fair play, I hereby introduce three new badges!

The Exploratory badge is issued to meritorious research that is not hypothesis-driven. This could include characterization of disease states and vast swaths of the neuroimaging literature ("Human Brain Mapping"), particularly in the early days. Not to mention the entire Human Connectome Project...

The Fishing Expedition badge can be earned by imaging studies that use exciting new methods like multi-voxel pattern analysis in neural decoding ("mind reading") applications, machine learning approaches to classify patient vs. control groups, and the latest in data mining ("Big Data").

Sophie Scott has compiled the thoughts of researchers with varying degrees of opposition to pre-registration. Some are not totally opposed, but have questions on how it will be implemented and how it might be problematic for certain types of research. I fall into this latter camp.

The one current publication format for Registered Reports, in the journal Cortex, "guarantees publication of their future results providing that they adhere precisely to their registered protocol."

I'm not sure this would work in studies with children, patients, or other difficult populations, where everything is not always predictable in terms of task performance, nature of the brain response, etc. In my blurb on Sophie's blog, I said:

Another of your examples, neuropsychological case studies, is particularly difficult. Are you not supposed to test the rare individual with hemi-prosopagnosia or a unique form of synesthesia? Many aging and developmental studies could be problematic too. What if your elderly group is no better than chance in a memory test that undergrads could do at 80% accuracy? Maybe your small pilot sample of elderly were very high performers and not representative? Obviously, being locked into publishing such a study would set you back the time it would take to make the task easier and re-run the experiment. You could even say in the new paper that you ran the experiment with 500 items in the study list and the elderly were no better than chance. Who's to say that a reviewer would have caught that error in advance?

At any rate, I think it's important to have these kinds of discussions. And to freely distribute new kinds of badges.

Footnotes

1 Just to be clear, I made these up.

2 I'm not at all opposed to pre-registration, and I think it'll be an interesting experiment to see whether research practices improve and whether "scientific quality," or replicability, increases. But I can see the danger of that being viewed as "saintly" research with the rest of it tainted.

That quote was from a recent review article in The Lancet, which did not hint at any impending pharmacological breakthroughs in the treatment of bipolar disorder. In other words, the future of bipolar treatment doesn't look much different from the present (at least in the immediate term).

Bipolar disorder, an illness defined by the existence of manic or hypomanic highs, alternating with depressive lows, can be especially difficult to treat. And the mood episode known as a mixed state, where irritability, expansive mood, anxiety, and/or agitation occur simultaneously with depressive symptoms, is an under-recognized, moving-target diagnosis (Koukopoulos et al., 2013). Mood stabilizers such as lithium and divalproex have long been the first line pharmacological choices. But these don't always work, and polypharmacy seems to be the rule, rather than the exception.

The spinning molecule above is haloperidol, a first generation antipsychotic drug developed in 1958 and approved by the FDA in 1967 as a treatment for schizophrenia. It's a dopamine blocker known for producing untoward extrapyramidal side effects, or movement disorders such as tremors and tardive dyskinesia. Nonetheless, haloperidol (Haldol®) is still the most effective drug for the acute treatment of mania, and fairly well tolerated (see HAL in the figure below). The second generation (atypical) antipsychotics risperidone (RIS) and olanzapine (OLZ) also turn out pretty well in the antimanic sweepstakes. But these drugs can also have untoward side effects, notably substantial weight gain that can lead to high cholesterol, diabetes, and metabolic syndrome.

Figure (Geddes & Miklowitz, 2013). Ranking of antimanic drugs according to primary outcomes derived from multiple treatment meta-analysis. Efficacy is shown as a continuous outcome against the dropout rate. Treatments toward the red section combine the worst efficacy and tolerability profiles and treatments towards the green[ish] section combine the best profiles.1

Clearly, effective medications with fewer side effects are needed. Unfortunately, there doesn't seem to be anything new on the horizon, according to Geddes and Miklowitz:

Overall, advances in drug treatment remain quite modest. Antipsychotic drugs are effective in the acute treatment of mania; their efficacy in the treatment of depression is variable with the clearest evidence for quetiapine. Despite their widespread use, considerable uncertainty and controversy remains about the use of antidepressant drugs in the management of depressive episodes. Lithium has the strongest evidence for long-term relapse prevention; the evidence for anticonvulsants such as divalproex and lamotrigine is less robust and there is much uncertainty about the longer term benefits of antipsychotics.

The article is actually more bullish on combining existing drugs with various psychosocial interventions (e.g., family-focused approaches, strict regulation of social and circadian schedules, etc.), which are touched on below in the Appendix (Table 1 of Geddes & Miklowitz, 2013). That table also mentions some of the usual drug suspects.

To find out what else might be in the works, I looked through ClinicalTrials.gov for open interventional drug studies in adults. There were a few surprises... foremost among these was Methylphenidate for the Treatment of Acute Mania. It seems bizarre to me that methylphenidate (the stimulant drug Ritalin) would be proposed as a treatment for mania, since 40% of patients prescribed stimulants for bipolar depression (or comorbid ADHD) experienced stimulant-induced mania/hypomania (Wingo & Ghaemi, 2008).

The Ritalin trial was submitted to ClinicalTrials.gov in Feb. 2012, but the study is not yet open for patient recruitment 1.5 years later. The investigators recently published the study protocol in BMC Psychiatry, however (Kluge et al., 2013). They proposed the ‘vigilance regulation model of mania’ where:

The outlined model ... is related to personality theories about extraversion [9] and sensation seeking [10] which comparably explain these traits as an attempt to compensate for low central nervous system arousal.

Basically, it works for ADHD, and there are a handful of uncontrolled case reports, so.... let's conduct a clinical trial.

Bipolar Depression

Depressive episodes in bipolar disorder are longer in duration and considered more difficult to treat. Again, ClinicalTrials.gov did not disappoint, revealing a grab bag of "repurposed" treatments:

We propose to conduct a double-blind placebo-controlled trial with a widely available and prototypical non-steroidal anti-inflammatory agent, aspirin, and an antioxidant agent, NAC, involving symptomatic Bipolar Disorder type I and II patients having a depressive or mixed episode currently. This will be the first controlled study to test the hypothesis that aspirin and NAC, by themselves or in combination, will be beneficial in treating depression in bipolar disorder patients and in promoting mood stabilization.

Some companies and organizations that employ door-to-door sales tactics are known for their cult-like practices (e.g., Amway, traveling magazine sales, and Jehovah's Witnesses). An unusual psychiatric report included this religious brainwashing element in presenting the case of a 47 year old Japanese housewife who felt possessed by God after a visit by a door-to-door salesman (Saitoh et al., 1996):

In Japan, psychiatry has generally regarded the possessive state as symptomatic of religion- related mental disorders. ... Recently, there has been a proliferation of direct sales enterprises that incite anxiety in prospective customers in order to sell their products. Due to the prevalence of door-to-door peddling of items such as amulets and talismans to ward off curses and misfortune, the term ‘door-to-door sales’ has come to have a religious connotation.

Recently, we treated a case of possessive state accompanied with suicidal tendencies which are thought to have developed in connection with door-to-door sales. Religious factors and elements of brainwashing were seen both in the conditions that promoted the possessive state and in the state itself.

The patient grew up on a family farm in the Tokyo area. She was described as laconic, withdrawn, quiet, unsocial and nervous.

When the patient was 47 years old, a male she described as a ‘salesperson type’ came to her home in May. He read her palm and asked for her husband’s family name and birth date. When she gave him this information he predicted that some misfortune would befall her husband. The patient’s husband had fallen in an accident a few days earlier, and she became extremely anxious. The man then said, ‘I have a talisman, a lucky name chop (family seal) which will protect your husband from misfortune’. Although she was hesitant at first, she finally agreed... When she paid for the chop the man recommended that she go to a certain room in a hotel in Saitama prefecture for a more in-depth palm reading ... where she was one of 20 women who received a lecture on subjects such as lineage, marriage, health and happiness.

Approximately 1 week later, again at the salesman’s advice, she went to a rented room in a building in Tokyo where she received a scroll called a prayer book. At the same time she was urged to buy a sculpture which was called a ‘Fortune Tree’. Two days later she went to her bank with the salesman and a woman whom she did not know and paid the ¥5,400,000. The patient went to this room twice a month during June, July and August. The room was divided by a partition and she was shown biblical videotapes. In September, she complained of an inability to sleep, and stated, ‘I can hear God’s voice. He possesses me and is controlling my bodily movements’. Thereafter, she episodically gave orders to her family in an uninflected monotone, making unrealistic assertions such as, ‘Don’t eat that or you will die’ and ‘Don’t go out or you won’t come back’. In mid-September, she filed a complaint that she had been deceived into buying the ‘Fortune Tree’ at an exorbitant price.

Shortly thereafter, she was taken to a private mental hospital and treated with the antipsychotic drug haloperidol. Two weeks later, she was able to recount her ordeal:

... ‘I felt like God had taken over my body. I was ordered by Him to do this or do that. Even if I wasn’t talking, my mouth just moved on its own. I didn’t go so far as to be One with God, but it was almost like that. That’s why I gave orders to my husband and child as though I were God’. The patient showed no subsequent objective signs of abnormality, and was released 2 months after admission.

The authors discussed her case in terms of the DSM-IV diagnosis, Dissociative Disorder Not Otherwise Specified, along with depressive symptoms and somatic complaints. Her attendance at the video lectures was described as a form of brainwashing. More specifically, her condition would fall under the category of Dissociative Trance Disorder (possession trance), a disturbance in consciousness or identity with a culturally specific element:

Dissociative trance involves narrowing of awareness of immediate surroundings or stereotyped behaviors or movements that are experienced as being beyond one's control. Possession trance involves replacement of the customary sense of personal identity by a new identity, attributed to the influence of a spirit, power, deity, or other person and associated with stereotyped involuntary movements or amnesia...

This case is rare not only because of its association with a business practice, but also because possession is usually seen in more isolated communities with traditional belief systems, quite unlike contemporary Tokyo.

Exposure to subliminal cues can help us choose the apple instead of the cake. Or can it... Let's take a look.

Our Brains Can (Unconsciously) Save Us from Temptation

Aug. 8, 2013 — Inhibitory self control -- not picking up a cigarette, not having a second drink, not spending when we should be saving -- can operate without our awareness or intention.

That was the finding by scientists at the University of Pennsylvania's Annenberg School for Communication and the University of Illinois at Urbana-Champaign. They demonstrated through neuroscience research that inaction-related words in our environment can unconsciously influence our self-control. Although we may mindlessly eat cookies at a party, stopping ourselves from over-indulging may seem impossible without a deliberate, conscious effort. However, it turns out that overhearing someone -- even in a completely unrelated conversation -- say something as simple as "calm down" might trigger us to stop our cookie eating frenzy without realizing it.

The press release states that overhearing a message of restraint in a background conversation might prevent us from reaching for a second piece of cake at the holiday party. What's the evidence for this?

A study by Hepler and Albarracin (2013) recorded EEG activity (brain waves) while 20 participants performed a "go/no-go" task that tests their inhibitory control abilities. The subjects responded every time they saw an "X" on the screen but refrained from responding when they saw a "Y". These target letters were preceded by a visual masking stimulus (&&&&&&) for 16.7 msec, a subliminal prime word for 33.4 msec, and then another masking stimulus (&&&&&&) for 50.1 msec. The idea here is to show the prime word very briefly and to "mask" conscious perception of the word.

The prime words were general action words (go, run, move, hit, start), general inaction words (still, sit, rest, calm, stop), and control stimuli (scrambled action and inaction prime words – e.g., rnu). One obvious hypothesis would be that exposure to the masked inaction words would make you better at inhibiting a response to "Y". The authors didn't exactly say that, instead predicting that the amplitude of the P3 component extracted from averaged EEG on no-go trials would reflect the engagement of unconscious inhibitory processes.
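For concreteness, the trial structure described above can be sketched in a few lines of Python. The frame counts are my assumption, not something stated in the paper: the reported durations (16.7, 33.4, and 50.1 msec) happen to correspond to 1, 2, and 3 refreshes of a standard 60 Hz display.

```python
# Hypothetical sketch of one masked-prime trial from the go/no-go study
# described above. Frame counts are an assumption: the reported durations
# match 1, 2, and 3 refreshes of a 60 Hz monitor (~16.7 ms per frame).
FRAME_MS = 16.7

def trial_timeline(prime, target):
    """Return the stimulus sequence for one trial as (event, text, duration_ms)."""
    return [
        ("forward mask",  "&&&&&&", 1 * FRAME_MS),  # 16.7 ms
        ("prime",         prime,    2 * FRAME_MS),  # 33.4 ms, below conscious report
        ("backward mask", "&&&&&&", 3 * FRAME_MS),  # 50.1 ms
        ("target",        target,   None),          # respond to "X", withhold to "Y"
    ]

for event, text, ms in trial_timeline("calm", "Y"):
    print(f"{event:13s} {text:7s} {ms if ms is not None else 'until response'}")
```

The brevity of the prime plus the flanking masks is what keeps the word below the threshold of conscious report while still allowing it to be processed implicitly.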

However, if behavior is unaffected by the masked inaction words, it ultimately doesn't matter what happens to the P3 component. There is nothing you can say about "resisting temptation" -- behavioral change is not the same thing as a change in the size of the P3 component. The latter may indicate that a subject's brain registered sit, rest, calm, or stop implicitly, but this neural activity wasn't enough to improve stopping ability.

And in fact, this is exactly what the study demonstrated. The masked primes had a modest effect on the size of the P3 wave to the subsequent no-go stimulus, which reached its peak at around 400 msec post-stimulus (i.e., less than half a second after the "Y"). The inaction primes were significantly different from the action primes, but neither one differed from the neutral condition.1

The authors interpreted this effect to indicate that inhibition processes were "engaged" by the subliminal primes.

However, the primes had absolutely no impact on how well participants could resist responding to the no-go stimuli [F(2, 38) = .00, p = .99]. Accuracy in the inaction prime condition was exactly the same as in the action prime condition. In other words, the study showed that Our Brains Cannot (Unconsciously) Save Us from Temptation.

"I had been in labor for my daughter for 16 hours. The labor was difficult and the Dr. approached me and told me it may come down to a choice between the child or myself. ... The labor dragged on and on and finally they came in and broke my water. I was rushed into delivery and within minutes my heart had stopped. I remember seeing a beautiful being of light enter the room. She told me I had to return as it was not my time yet. I was sucked back into my body as they restarted my breathing. My daughter began crying the moment I opened my eyes."

Are you afraid to die? We all are. Fear of pain and suffering, fear of the unknown, fear of eternal damnation (for the religious), fear of nothingness (for the atheist). Fear of the end. The finality of it all.

The existential fear of death is part of the human condition. For a neuroscientist, studying what happens to conscious thought during the brain's own demise is one of the most profound of all questions. Short of conducting ill-advised sci-fi experiments on your med school classmates, how does one go about studying such a phenomenon? By using an animal model of cardiac arrest.

Surge of neurophysiological coherence and connectivity in the dying brain

A popular new study by Borjigin et al. (2013) recorded EEG activity directly from the brains of nine dying rats. This paper was widely reported in mainstream media outlets, and has been nicely covered by bloggers Ed Yong, Mark Stokes, Chris Chambers, and Shelly Fan. What I would like to do here is to more closely examine the conditions surrounding the clinical death of these rats.

The figure above shows brain waves recorded from six electrodes implanted on the cerebral cortex, along with electrical activity from the muscles (EMG) and heart (EKG). The time period is 80 minutes before and 20 minutes after cardiac arrest (at time zero), which was induced by injection of potassium chloride into the heart. On its own, potassium chloride would cause a very painful death. Along with anesthetic and paralytic agents, potassium chloride is part of the drug sequence used for lethal injection in some U.S. states.

In the present study, the animals were deeply anesthetized using ketamine (a dissociative anesthetic) and xylazine (veterinary sedative/analgesic which affects alpha-2 adrenergic receptors), a commonly used method of anesthesia in rodents. Fig. 1A shows that the animals were anesthetized for 30 min before cardiac arrest. The EEG exhibits fairly constant large amplitude activity during this time, shown spread out for a small interval of time in Fig. 1B below.

To briefly summarize, the rats' brains were surprisingly active during the CAS3 period, showing highly coherent neural oscillations in the low gamma frequency band for a 20 sec interval after the heart and lungs stopped working.

Fig. 1C below expands the vertical gray bars in Fig. 1B to show greater detail. Of note are the high-amplitude rhythmic oscillations during CAS3. This low gamma activity (35-55 Hz) was strongly coupled to EEG activity in other frequency bands (theta and alpha) -- to an even greater extent than during active waking. The authors viewed this as a state of heightened consciousness, but such speculation is premature.
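For readers unfamiliar with cross-frequency coupling, the kind of phase-amplitude relationship reported here is often quantified with a modulation index. Here is a minimal sketch, assuming a Tort-style KL-divergence measure computed with NumPy/SciPy (the paper's exact analysis pipeline may differ):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(sig, lo, hi, fs, order=4):
    """Zero-phase Butterworth bandpass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def modulation_index(sig, fs, phase_band=(4, 8), amp_band=(35, 55), n_bins=18):
    """Tort-style modulation index: how strongly low-gamma amplitude is
    locked to theta phase (0 = no coupling, 1 = perfectly concentrated)."""
    phase = np.angle(hilbert(bandpass(sig, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(sig, *amp_band, fs)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= edges[i]) & (phase < edges[i + 1])].mean()
                         for i in range(n_bins)])
    p = mean_amp / mean_amp.sum()  # amplitude distribution over phase bins
    return (np.log(n_bins) + (p * np.log(p)).sum()) / np.log(n_bins)

# Synthetic check: gamma bursts riding the theta peak (coupled) vs.
# constant-amplitude gamma (uncoupled).
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = np.sin(2 * np.pi * 45 * t)
coupled = theta + 0.3 * (1 + theta) * gamma + 0.1 * rng.standard_normal(t.size)
uncoupled = theta + 0.3 * gamma + 0.1 * rng.standard_normal(t.size)
```

On the synthetic signals, the coupled trace yields a clearly higher index than the uncoupled one, which is the sense in which gamma can be "strongly coupled" to theta phase.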

Why would the authors maintain that a dying brain can generate the neural correlates of heightened conscious processing? Gamma (aka 40 Hz activity) has been viewed as a possible solution to the "binding problem" of how consciousness arises since the late 80s. In the visual system, synchronous gamma might be how the brain combines distributed activity conveying separate aspects of a stimulus (e.g., its color, shape, and form) into a unified percept. Furthermore, gamma might account for phenomenal awareness and consciousness, according to some. However, more recent evidence suggests that gamma band responses do not reflect conscious experience.

In addition, it is not at all clear how highly synchronized low gamma can index "heightened conscious processing" in deeply anesthetized dying rats. Do the rats transition from ketamine/xylazine anesthesia (associated with altered thalamocortical connectivity) to a hyperaware internal state of....? Of what? The CAS3 activity is so abnormal that it might be artifactual or epiphenomenal, "a tale told by an idiot, full of sound and fury, signifying nothing" (Shakespeare, 1606).

Near-death experience (NDE) researcher Sam Parnia believes the low gamma activity could be caused by a massive influx of calcium, as he stated in Ed Yong's fine piece:

...Parnia says that there could be other explanations for the results. “After blood flow to the brain is stopped, there is an influx of calcium inside brain cells that eventually leads to cell damage and death,” he says. “That would lead to measurable electroencephalography (EEG) activity, which could be what is being measured.” This would explain why Borjigin saw the same pattern in every dying rat, while only 20 percent of people experience NDEs after a heart attack.

Ketamine administration itself is associated with an increase in gamma activity in cortical and subcortical structures. And most importantly, ketamine-altered states of consciousness have been used as a model of NDEs (Jansen, 1997). Although Borjigin et al. note differences in the specific oscillatory couplings seen during ketamine/xylazine anesthesia and cardiac arrest state #3, extrapolation of their findings to NDEs in humans “is extremely premature and unsupported by evidence” (Parnia, quoted in Yong).2

Despite these limitations, the results provide a fascinating beginning to a line of research exploring consciousness at the end of life. Obviously, the use of a rat model precludes any recounting of NDEs by those who might be brought back from the brink in the future. Although the precise neurobiological mechanisms are largely unknown, NDEs do have a scientific explanation (Mobbs & Watt, 2011).3

Contrary to popular belief, research suggests that there is nothing paranormal about these experiences. Instead, near-death experiences are the manifestation of normal brain function gone awry, during a traumatic, and sometimes harmless, event.

"It seems that when consciousness shuts down in death, psyche, or soul – by which I don't mean ghosts, I mean your individual self – persists for at least those hours before you are resuscitated. From which we might justifiably begin to conclude that the brain is acting as an intermediary to manifest your idea of soul or self but it may not be the source or originator of it… I think that the evidence is beginning to suggest that we should keep open our minds to the possibility that memory, while obviously a scientific entity of some kind – I'm not saying it is magic or anything like that – is not neuronal."

Sam Parnia MD has a highly sought after medical speciality: resurrection. His patients can be dead for several hours before they are restored to their former selves, with decades of life ahead of them.

That's a pretty outrageous claim! Clinically dead for several hours? No brain activity the entire time? Even if anyone could emerge alive and conscious from such a state (unless it's a state of suspended animation, perhaps), they'd have severe brain damage (as we'll see below). There'd be no way they could have encoded their near-death experiences (NDEs), much less remembered any light at the end of a tunnel or a soothing presence drawing them home.

There may be a semantic problem here: the definition of “death.”

I haven't read the book, but the issue is described in a one-star review at Amazon:

The core of this linguistic mess is his inconsistent use of the word "death". At times he uses this term properly, as defined by the Uniform Determination of Death Act (UDDA, 1981): "An individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brain stem is dead." This definition was developed in cooperation with the American Medical Association ... [etc.] and has been adopted by most states. It is the standard definition of death. [NOTE: I thought brain death is THE standard definition of death.] 1

Unfortunately, he also refers to "death" as cardiac arrest (e.g. pages 1, 2, 23, 42, 43, 128, 131, 139, 140, and many more). This definition of death is inconsistent with the UDDA because cardiac arrest is reversible in some cases. In fact, much of this book includes accounts of individuals who have suffered cardiac arrest and been resuscitated...

Dr. Parnia was quoted in my previous post about the “End of Life Gamma Waves” study in rats. He was skeptical that EEG during the 30 second interval after the heart stopped beating was anything more than a massive influx of calcium into the dying neurons. It wasn't a state of heightened consciousness that can explain the NDEs reported by 10-20% of his cardiac arrest patients.

Instead, Parnia is a mind-body dualist, believing that the soul (or self) can persist separately from the body for several hours at a time:

"It seems that when consciousness shuts down in death, psyche, or soul – by which I don't mean ghosts, I mean your individual self – persists for at least those hours before you are resuscitated. From which we might justifiably begin to conclude that the brain is acting as an intermediary to manifest your idea of soul or self but it may not be the source or originator of it… I think that the evidence is beginning to suggest that we should keep open our minds to the possibility that memory, while obviously a scientific entity of some kind – I'm not saying it is magic or anything like that – is not neuronal."

Memory is not neuronal! And Death can be cured. Who knew. But how?? [NOTE: according to Parnia and The Observer, at least.]

Extracorporeal membrane oxygenation (ECMO) is a temporary method of life support for patients with acute respiratory failure or cardiac failure. It involves placing one or more large catheters into the patient's vessels (cannulation) and relies on an external pump to circulate and oxygenate the blood and remove carbon dioxide (PDF). Primarily used in critically ill infants, its application to adults is risky and controversial, and the benefits are unclear.

A meta-analysis of ECMO in adult patients found a mortality rate of 54% at 30 day follow-up, with almost half the fatalities occurring during ECMO (Zangrillo et al., 2013). On the other hand, the procedure is a last-ditch life saving effort in critically ill patients, so a 46% survival rate seems like an improvement over probable death. However, one review stated that "Credible evidence for mortality benefit of ECMO is lacking" in cases of acute respiratory distress (Hirshberg et al, 2013). Another study concluded that ECMO is even less successful in cases of acute heart failure, with the worst survival rate for those who experience cardiac arrest (Tsuneyoshi & Rao, 2012).

The rest of the post will focus on the possible neurological complications of ECMO (Mateen et al., 2011).

Neurological Injury Associated with Heroic Resuscitation

I do not want to detract in any way from the dedication of practitioners who do heroic things every day to save people's lives, or from advances in medicine. What I would like to point out, however, is that sometimes one may resuscitate the heart but lose the brain (to paraphrase Horstman et al., 2010).

Neurological events occurred in at least 50% (n=42) of patients treated with ECMO at one medical center over an 8-year period (Mateen et al., 2011). This is a conservative estimate, because a neurological exam was not performed in 21%, and over 70% did not have neuroimaging. Clinical presentation included new onset of coma and new loss of brainstem reflexes. Diffuse brain injury due to lack of oxygen (anoxia), global brain dysfunction (encephalopathy), subarachnoid hemorrhage (bleeds), and ischemic watershed infarction (stroke) were among the diagnoses. Of the 24 patients with brain scans, the findings were pathologically abnormal in 15 (see examples in figure above).

Autopsy was performed on 10 brains (out of 40 patients who died). Nine of these brains showed gross abnormalities (see examples in figure below).

Brain sections stained for microscopic examination showed abnormalities in areas vulnerable to anoxia, including hippocampal pyramidal cells (the CA1 field) and cerebellar Purkinje cells. The hippocampus is a structure located in the medial temporal lobes that is critical for memory. Even mild hypoxia due to cardiac arrest (30 sec to 7 min until initiation of CPR) can lead to memory impairments. The residual cognitive deficits seen in post-cardiac arrest patients comatose for >24 hours have been well-characterized (Lim et al., 2004).

A group of 12 cardiac arrest survivors (not treated with ECMO) underwent MRI scans and neuropsychological testing (Horstman et al., 2010). Compared to controls, abnormalities in gray matter density were observed in regions important for memory and "drive" (subjectively rated motivation). ECMO is of course meant to preserve functioning of the brain and cardiopulmonary system, but I don't see how that's possible if the patient is "dead" for several hours.

Disclaimer: I am not a medical professional, and this post is not to be taken as medical advice.

Many recent headlines have heralded a new use for the old veterinary anesthetic ketamine, which can provide rapid-onset (albeit short-lived) relief for some patients with treatment-resistant depression (aan het Rot et al., 2012). This finding has been inflated into “arguably the most important discovery in half a century” by Duman and Aghajanian (2012). While finding a cure for refractory depression is undoubtedly an important research priority, might ketamine be useful for other conditions that cause profound human misery?

The care of terminally ill patients suffering from unbearable pain is not a sexy topic, and hospice and palliative medicine is not a glamorous subspecialty. You probably haven't seen the studies examining whether ketamine is effective as an add-on agent to opioid analgesics for cancer pain (Hardy et al., 2012), or as a treatment for depression and anxiety in patients receiving hospice care (Irwin et al., 2013).

Three years ago, my father died of cancer. He had been released from the palliative care unit to a hospice, suffering with uncontrolled cancer pain. It was unbearable to watch, and beyond excruciating for him. During this time, I was writing a post for the Nature Blog Focus on hallucinogenic drugs in medicine and mental health. It included a section on drugs that might alleviate pain and anxiety in cancer patients. I told him about this, and he said to “get the word out.”

As a tribute to my father, I wanted to present a brief overview of new developments in the field.

Efficacy and Toxicity of Ketamine in the Management of Cancer Pain

In 2008, BMJ published a set of clinical practice guidelines on pain control in adults with cancer. They called for further research to investigate the role of ketamine as an adjuvant analgesic – a drug with a primary indication other than pain that might have analgesic properties in some conditions.

A recent Cochrane review evaluated the state of the literature on ketamine to alleviate cancer pain (Bell et al., 2012). Three new randomized controlled trials (RCTs) were identified since 2003, and all were excluded from further analysis. Among the older studies, the adverse effects of ketamine included hallucinations (as expected, since the drug is a dissociative anesthetic used at raves), drowsiness, nausea and vomiting, dry mouth, and confusion. The authors concluded that “Current evidence is insufficient to assess the benefits and harms of ketamine as an adjuvant to opioids for the relief of cancer pain. More RCTs are needed.” They also noted that clinical trials were ongoing, and that data by Hardy and colleagues were awaiting assessment.

Unfortunately, the outcome of the trial conducted by Hardy et al. (2012) was not positive. In this large RCT, 185 cancer patients with refractory chronic pain were randomized to receive either ketamine or placebo as an adjunct to their regular doses of opioids and other analgesics. Ketamine was administered subcutaneously in a dose-escalating regimen over 5 days. The response rate was 31% (29 of 93) in the treatment group compared to 27% (25 of 92) for placebo, which was not significantly different (p=.55). In addition, ketamine was associated with twice the number of adverse events relative to placebo. The authors concluded that ketamine did not have a net clinical benefit when used along with standard medications to treat cancer pain.
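For the curious, the reported figures can be checked with a simple two-proportion z-test. This is a standard way to compare two response rates; I'm not claiming it's the exact analysis Hardy et al. ran, but the counts from the post reproduce the reported non-significant result. A minimal sketch:

```python
# Rough check of the Hardy et al. (2012) comparison: 29/93 responders on
# ketamine vs. 25/92 on placebo. Two-sided two-proportion z-test with a
# pooled standard error (normal approximation) -- my own illustration,
# not necessarily the trial's prespecified analysis.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(29, 93, 25, 92)
print(f"response rates: {29/93:.1%} vs {25/92:.1%}, z = {z:.2f}, p = {p:.2f}")
```

The normal approximation gives p ≈ .55, matching the non-significant difference reported in the trial.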

However, Jackson and colleagues (2013) objected to this “sweeping conclusion” in a letter to the Journal of Clinical Oncology titled “Ketamine and Cancer Pain: The Reports of My Death Have Been Greatly Exaggerated”. Their major arguments were that ketamine has been used in this fashion for the last decade, and previous open-label studies were more successful. They also suggested that Hardy et al. were too quick to call ketamine a treatment failure, and too late in administering drugs to counteract any hallucinogenic side effects.1

Hardy et al. (2013) replied to the first set of objections by stating the obvious about the value of RCTs: “Open-label studies do not meet the specific scientific definition of control.” They stood by their “sweeping conclusions” that ketamine was not beneficial in this population. On the other hand, I can see why clinicians would be desperate to help their patients. The 27% placebo response in Hardy's study is quite high. So if you're a patient in terrible pain and grape Kool-Aid improves your condition, why argue with that?

Ketamine for the Treatment of Depression and Anxiety in Hospice Patients

Speaking of open-label studies, a 2010 report on two hospice patients, each with a prognosis of only weeks or months to live, showed beneficial effects of ketamine in the treatment of anxiety and depression (Irwin & Iglewicz, 2010). A single oral dose produced rapid improvement of symptoms and improved end-of-life quality. The authors emphasized the importance of conducting clinical trials for this particular indication, in order to disentangle the pain-relieving and antidepressant effects of ketamine.2

A more recent open-label study by Irwin et al. (2013) enrolled 14 hospice patients with depression or depression + anxiety to receive oral ketamine for 28 days. Only 8 patients completed the study, but all showed a 30% or greater improvement in their depression or anxiety scores. Four withdrew from the study at day 14 because of no response to the drug, one dropped out earlier due to unrelated rapid decline, and one withdrew at day 21 because of a change in mental status (apparently unrelated to ketamine). “Few adverse events” were noted, the most common being diarrhea, trouble sleeping, and trouble sitting still (which to me sound problematic in an extremely ill population). It seems that dissociative symptoms, hallucinations, etc. were not evaluated. The authors again call for further studies using RCT designs to evaluate whether ketamine can improve the quality of the end-of-life experience.
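As a back-of-the-envelope illustration of why open-label completer analyses can flatter a drug, compare the completers-only response rate with a worst-case intention-to-treat (ITT) rate for the numbers above. The ITT framing here is my own, not an analysis the authors performed:

```python
# Irwin et al. (2013): 14 hospice patients enrolled, 8 completed the
# 28-day course, and all 8 completers showed >= 30% improvement.
# Worst-case ITT counts every non-completer as a non-responder.
enrolled = 14
completers = 8
responders = 8   # all completers improved by >= 30%

completer_rate = responders / completers   # looks like a 100% response
itt_rate = responders / enrolled           # worst-case ITT estimate
print(f"completers-only: {completer_rate:.0%}, worst-case ITT: {itt_rate:.0%}")
```

A 100% completer response shrinks to roughly 57% under the most conservative ITT assumption, which is one reason the authors' call for RCTs matters.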

Although they were not entirely successful, these studies have aimed to achieve an important goal of any civil, caring society: to provide a manner of death that minimizes fear, pain, and suffering.

If you were forced to sacrifice one of your five senses, which would it be? Most people wouldn't consider losing their vision or hearing. It would be really dangerous to completely lose your sense of touch, so that won't be an option in our hypothetical scenario. So we're left with the chemical senses of smell and taste. I think most of us would choose one of these two.

But what about someone who can't smell? How can they miss something they've never known?

“If I had to lose one of my senses, it would probably be smell... even though it's gone. I mean, if I had to choose between them, because it's the least hindering...”

Congenital anosmia is a rare condition where individuals are born without the ability to smell. This condition might not seem so bad to the osmic population (especially when cleaning up after your pet), but lack of smell can affect safety (e.g., can't detect a gas leak or burning toast), body weight, hygiene, and mate selection (Karstensen & Tommerup, 2011). But if you've never had the experience of odors, whether they're from cinnamon buns or rotting fish, this is a completely normal state of affairs (Tafalla, 2013).

Isolated congenital anosmia unrelated to another condition (such as Kallmann syndrome) has been linked to genetic locus 18p11.23-q12.2 in two different families (Ghadami et al., 2004). However, no disease-causing mutations were found by sequencing eight candidate genes in this region. Studies in other families have suggested that the genetic basis of this trait is heterogeneous. Therefore, the specific genetic causes of isolated congenital anosmia remain elusive (Karstensen & Tommerup, 2011).

Rebecca Steinitz has congenital anosmia. She reached adulthood without knowing that other people actually do possess a sense of smell, as opposed to just pretending that they do. In an essay she describes what it's like to live in a world without smell.

I don’t know what a rose smells like, though when I hold my nose to a full-blown bloom and inhale deeply, I sense a vague sweetness.

I don’t know what my husband’s shirt smells like. If he died, I wouldn’t think to sleep in it so I could feel that he was with me.

I don’t know what a baby’s head smells like – not my babies, not anyone else’s babies. I couldn’t pick my babies out of a crowd with my eyes closed, and I don’t miss that baby smell when I hug my growing children.

“I learned smells from books, which made me think they were fictional. When real people said That stinks, or I can smell the sea from here, I thought they were faking, that they were willing to pretend those smells existed beyond the page. I only discovered the word for people like me a few years ago. We are anosmic; we have anosmia: lack of the sense of smell.”

Steinitz noted that she can't smell her husband's shirt or her baby's head. These types of scents bind us to people and cement our relationships. Odors have a way of linking us to times and places of the past, evoking remembrances in a stereotypically literary way, eliciting endless soliloquies of youthful memories.

Do smells have a uniquely intimate connection to memory? Olfactory information reaches the piriform cortex after only two synapses. A chemical odorant activates receptors on sensory neurons in the nasal epithelium, which synapse onto mitral cells in the olfactory bulb. These mitral cells synapse onto neurons in the piriform (olfactory) cortex located in the temporal lobe, near the hippocampus and amygdala.

What is it like to be a smeller?

Anosmic philosopher Marta Tafalla, in a nod to the famous paper by Thomas Nagel, compares the foreignness of olfactory qualia to the exotic system of echolocation in bats (Tafalla, 2013).

Neither do I have the sensation that I lack a sense, a window onto reality, that something in my body or my brain does not function properly. And because of all that, it would have been impossible for me to come up with the improbable idea that everyone else can perceive another dimension of reality, which consists of volatile chemical particles that are perceived in the mere act of breathing. It sounds as strange to me as the echolocation system of bats or some birds' capacity to align their flight with the earth's magnetic field.

In this meditative and philosophical piece, Tafalla tells the story of when she first realized she was different from other people. At the age of eight, she spent some time at a summer boarding school. One of the teachers tactfully remarked on her smelly feet. Tafalla didn't understand, and didn't connect body odor to a lack of hygienic rituals. In response, she put cologne on her feet and in her shoes. Her teacher and her mother both found this odd, so she started wondering if something was wrong. A year or two later, she was able to articulate the problem: “I can't smell!”

She then contemplates how this knowledge changes her experience of the world: her perception of food, her relationships with other people, her sense of her own body, perception of natural and urban environments, time perception (olfaction and its relation to memory), and "aesthetic appreciation of scents and stenches":

I was cleaning the fallen leafs in my patio, and I found a dead blackbird. It had probably been there for days. It must have stunk. But I had been sitting there, enjoying the first days of spring, and I had not noticed anything.

In conclusion, I believe that to be anosmic means that the world seems not so beautiful and also not so ugly. I believe that, aesthetically speaking, the world seems less.

A Sense of Loss

Individuals with congenital anosmia are in the minority of those who have olfactory dysfunctions, comprising only 3% of that population (Keller & Malaspina, 2013). The most common causes of smell problems are sinus and nasal disease, upper respiratory infection, and head trauma (see below). Traumatic brain injury can damage the olfactory bulbs if the brain bounces against the orbits and other bony protuberances inside the skull.

Becoming anosmic in adulthood, after experiencing the smells of fragrance and filth, can lead to a pronounced sense of loss. These negative consequences are often trivialized and misunderstood by others.2 To assess the effects of altered smell on everyday life, Keller and Malaspina (2013) administered an online survey to 1,000 patients with olfactory dysfunction. Complete results from the 43 survey questions, along with edited excerpts from 1,000 reports (179 pages), are freely available with the open access article.

Here's an example from a woman with asthma and nose polyps:

I spent ridiculous amounts of time every day with my nose to my son's little head, just inhaling his smell. I don't know if anyone can comprehend what it's like missing that primal connection to your child. There is something profound and powerful about a mother smelling her baby that I cannot explain, but it is viscerally important. So I don't know when I ceased to smell, but it was gradual enough that I didn't notice. That said, the absence of smell is unspeakably painful.

A person with severe allergies:

It's a huge loss. I fully understand the risk of depression from this condition. Besides the loss of smell, I've suffered a complete loss of flavor-tasting ability. That is an immense loss as well. Even more so is the loss of memories that smell used to so vividly unlock. I so miss the fragrance of a pine forest to take me back to my childhood camping in the mountains. I want to smell the turkey cooking on Thanksgiving. I want to smell the chocolate when I walk into a candy store! It's a weird affliction. People don't really get it.

The major practical problems are difficulties avoiding hazardous substances and situations, food-related issues, and problems with managing odors (body odor, pet smells, rotten food, etc.). Negative psychological consequences include smell loss-induced anhedonia and social isolation, which can result in a lowered quality of life.

It's a truism to say this, but our sense of smell is something that most of us take for granted. I spend so much time inside my own head that it's a great idea to stop and smell the nectarines, the tomatoes, the coffee, and the cat.

Footnotes

1 Kurt Cobain didn't realize that Teen Spirit was an antiperspirant marketed to girls. Hanna meant that he smelled like his girlfriend's deodorant, but Cobain thought the graffiti made a profound statement on disaffected youth.

2Tafalla (2013) has helpfully classified the most typical types of responses:

Even doctors say “don't worry, it is not a serious problem.”

Tasteless jokes and “how lucky you are!”

Infrequent but thoughtful: “smell is something very difficult to explain”.

We rationalize, we dissimilate, we pretend: we pretend that modern medicine is a rational science, all facts, no nonsense, and just what it seems. But we have only to tap its glossy veneer for it to split wide open, and reveal to us its roots and foundations, its old dark heart of metaphysics, mysticism, magic and myth. Medicine is the oldest of the arts, and the oldest of the sciences: would one not expect it to spring from the deepest knowledge and feelings we have?

The talented artist is often assumed to be Jan Steven van Calcar from the studio of Titian, but the actual identity of the artist(s) is unclear (Russell, 2013):

But who were the artists? Who created such compelling images? Vesalius neither identifies nor acknowledges his exceptional artist(s) or his woodblock cutters. The absence of their identity has remained a subject of debate.

Proto-Bloggers or Plagiarists?

Vesalius tried valiantly to preserve his intellectual property from unlawful reproduction, to no avail. This is a bit ironic, since he never gave credit to the artists, and even seemed a bit annoyed with them (Lanska & Lanska, 2013):

In 1546, 3 years after publication of the first edition of the Fabrica, Vesalii expressed frustration at the plurality of artists he had supervised: “[No longer] shall I have to put up with the bad temper of artists and sculptors [wood-block cutters] who made me more miserable than did the bodies I was dissecting” (translation in O’Malley, 1964, p. 124).

Regardless of Vesalii’s frustrations with the artists, the beauty, accuracy, and utility of these woodcuts led to frequent plagiarism, despite Vesalii’s attempts to protect his work with the various privileges that were listed at the foot of the title page.

Piracy and plagiarism of images vs. "fair use" for educational [or entertainment] purposes isn't a new problem that began with commercialization of the internet in 1995, nor with the rise in the popularity of Tumblr about four or five years ago.1

Lanska and Lanska (2013) raised the issue of how the printing press made image theft easier in their chapter, "Medieval and Renaissance anatomists: The printing and unauthorized copying of illustrations, and the dissemination of ideas":

With the advent of the printing press and moveable type at this time, printed books began to supersede hand-copied medieval manuscripts, and labor-intensive techniques were soon developed to integrate text and illustrations on the printed page. The same technology was used to pirate the illustrations of prior authors with varying fidelity. ... The most important milestone in the development of anatomy and anatomical illustration was the publication in 1543 by Andreas Vesalii of De humani corporis fabrica. With this work, Vesalii succeeded in coordinating a publication production team (author, artists, block cutters, publisher, and typesetters) to achieve an unprecedented integration of scientific discourse, medical illustration, and typography. However, despite Vesalii’s valiant efforts to prevent unauthorized duplication, the illustrations from the Fabrica were extensively plagiarized. Although Vesalii found such piracy frustrating and annoying, the long-term effect was to make Vesalii’s ideas known to a wider readership and to help solidify his own revolutionary contributions to anatomy.

Vesalius was angry because of the amount of work he put into the dissections, but the benefit was greater exposure of his ideas and an increase in stature. Kind of like high profile (non-critical) science blogging??2 Except unauthorized reproduction was more laborious in the 16th century... e.g. copying woodcut prints in a close but approximate form by freehand engraving onto copper plates (Lanska & Lanska, 2013b). But at least one imitator did give him credit and even corrected his mistakes:

Vesalius bitterly complained about Valverde's unauthorized abridgement of his work: “Valverde who never put his hand to a dissection and is ignorant of medicine as well as of the primary disciplines, undertook to expound our art in the Spanish language only for the sake of shameful profit.” (O'Malley translation). Nevertheless, Valverde did make several corrections to the images (e.g., anatomy of the extraocular muscles), described the intracranial course of the carotid arteries, and made the first drawing of the stapes. In addition, Valverde acknowledged using illustrations from Vesalius because, “his illustrations are so well done it would look like envy or malignity not to take advantage of them.”

modified from Figure 2 (Lanska & Lanska, 2013b). The left set of 7 black-and-white images are from Vesalius' Fabrica ... Individual woodcuts have been arranged in a montage, corresponding to that from a single copperplate engraving in Valverde's abridgement shown on the right. The entire Vesalius montage is an approximate mirror image of the single-page, multi-image print in Valverde's abridgement. Dissection stages, brain levels, and structures illustrated all correspond closely. Note the absent mustache in the third stage of the dissection in prints from both Vesalius and Valverde. Shading is absent in the Valverde copperplate images, and there are minor differences in both perspective and fine details (e.g., the pattern of the gray-white junction, branching pattern of the middle meningeal artery, and features of the corresponding mustaches).

The chapters in this volume make for fascinating reading and cover not only these early artistic contributions to the neurosciences, but also include neuroscientists with artistic talent (e.g., Santiago Ramón y Cajal) and artists with neurological disorders (e.g., Giorgio de Chirico, who may have had complex partial seizures or migraine with visual auras).

Footnotes

1 No one ever knows the actual origin of images on tumblr, do they? Just try to find out who created this Black Cat Club image...

2 Obviously, the images in this post are hundreds of years old (and in the public domain). In the last few years, I've become more sensitive to the issue of copyright infringement and try not to do this. I assume that judicious reproduction (and appropriate attribution) of figures from journal articles falls under "fair use." I haven't made one cent from this blog so I'm certainly not profiting from others' work.

Until recently, scientists believed our brains were fixed, their circuits formed and finalised in childhood, or "hardwired". Now we know the brain is "neuroplastic", and not only can it change, but that it works by changing its structure in response to repeated mental experience.

Wow! I never knew that! You mean the brain can actually learn? And it changes with experience? Really?? Thank you, Norman Doidge, for that brilliant insight, and for many other gems in your wonderful Comment is Free piece on porn addiction in the Guardian.

Let's see what physicians and psychologists of yesteryear have to say about these newly discovered "neuroplastic" brains.

Here it may be asked whether the organs [of the brain] increase by exercise? This may certainly happen in the brain as well as in the muscles; nay, it seems more than probable, because the blood is carried in greater abundance to the parts which are excited, and nutrition is performed by the blood. In order however, to be able to answer this question positively, we ought to observe the same persons when exercised and when not exercised; or at least observe many persons who are, and many others who are not, exercised during all periods of life.

-J.G. Spurzheim (1815). The physiognomical system of Drs. Gall and Spurzheim; founded on an anatomical and physiological examination of the nervous system in general, and of the brain in particular; and indicating the dispositions and manifestations of the mind.

The question is not whether neural events change the status of the tissue in which they occur. The only question which may still be debated is: whether such changes as do undoubtedly occur have the permanence and those other properties which we must attribute to memory-traces. According to our present knowledge the primary effect which nerve impulses produce in ganglionic layers is chemical activity. . .

The authors (Rosenzweig, Krech, Bennett, and colleagues) compared the brains of rats exposed to complex, enriched environments to those housed in isolated cages. They found increases in cortical thickness, increases in cortical tissue weight (not related to overall brain or body size), and increases in acetylcholinesterase activity in rats who had lived in the fun and social cages. The project was launched 60 years ago, in 1953... so it's a bit disingenuous for Dr. Doidge to call neuroplasticity a "recent" discovery.

Porn sites are also filled with the complexes Freud described: "Milf" ("mothers I'd like to fuck") sites show us the Oedipus complex is alive; spanking sites sexualise a childhood trauma; and many other oral and anal fixations. All these features indicate that porn's dirty little secret is that what distinguishes "adult sites" is how "infantile" they are, in terms of how much power they derive from our infantile complexes and forms of sexuality and aggression. Porn doesn't "cause" these complexes, but it can strengthen them, by wiring them into the reward system.

“Krech (1960), Rosenzweig (1964), Bennett (1964), and others have successfully identified and measured physiological changes in the brain that relate directly to early experiences in carefully controlled studies with laboratory rats.”

In his review of various approaches to early childhood education in the 1960s (e.g., Operation Head Start, Perry Preschool Project, etc.), psychologist David P. Weikart cited literature on neuroplasticity in adult rats (Weikart, 1966). Although written in the context of an early life “critical period” for learning, and contrasting the effects of exposure to deprived vs. enriched conditions on educational attainment in young children, he was aware of animal studies showing that neuroplastic changes continued into adulthood.

He was also ahead of his time in his beliefs that the causes of racial differences in IQ were not inherent, but a result of differences in socioeconomic status and access to resources (Weikart, 1966):

Pasamanick & Knoblock (1961) have documented the impact of deprivation most vividly in their study of infant development. Employing samples of [Black] and White infants selected for equal birth weights and absence of defects or premature birth, and using the Gesell Development Scale, they found no significant difference between the two groups at 40 weeks of age; the White babies obtained a developmental quotient of 105.4 and the [Black] babies a DQ of 104.5. At age 3, the first 300 of the original 1,000 children studied were retested and a highly significant difference was found. The developmental quotient of the White children rose to 110.9, while the DQ of the [Black] children fell to 97.4.

"Over the previous 10 years of standardized achievement testing (1948-1957), no class in the predominantly African American school ever exceeded the 10th percentile on national norms for any tested subject," he writes in his memoir. "Yet in the elementary school across town, which primarily served the children of white, middle-class university professionals, no class ever scored less than the 90th percentile."

To Weikart's mind, there was something wrong with the schools if one group of children was failing, while another group was doing fine. But the principals didn't see it that way. They said there was nothing wrong with the schools. It was the children. They weren't intelligent enough, they weren't capable of doing well. . . .

"I was working in a context where most people felt that IQ was God-given, and unfortunately, low-IQ minority children were just born that way," writes Weikart in his memoir.

But some people were beginning to question this idea, including Weikart. He recalls learning about studies of cage-reared versus playground-reared rats in zoology courses he took during graduate school.

"These studies strongly suggested that problems caused by limited environments could be ameliorated by stimulus-rich opportunities," he wrote. "For me, the idea of enriched opportunities for poor children from limited backgrounds seemed justified by the findings from these ... studies."

If you read the journal Social Cognitive and Affective Neuroscience (SCAN), you might think that Existential Neuroscience is a hot new field, since three recent papers on the topic have been published there. Can it provide profound new insights into the human condition? From what I can tell, these references to a formal discipline of “Existential Neuroscience” are based entirely on terror management theory, which was developed by Greenberg and colleagues in the 1980s (Greenberg et al., 1986; Rosenblatt et al., 1989). How does this relate to existentialism?

By the mid 1970s the cultural image of existentialism had become a cliché, parodied in countless books and films by Woody Allen. It is sometimes suggested, therefore, that existentialism just is this bygone cultural movement rather than an identifiable philosophical position; or, alternatively, that the term should be restricted to Sartre's philosophy alone.

The Stanford Encyclopedia of Philosophy eventually tells us that the most distinctive aspect of existentialism is that standard notions of identity are wrong:

The fundamental contribution of existential thought lies in the idea that one's identity is constituted neither by nature nor by culture, since to “exist” is precisely to constitute such an identity. It is in light of this idea that key existential notions such as facticity, transcendence (project), alienation, and authenticity must be understood.

The first known account of “Existential Neuroscience” (EN) was written by mirror neuron researcher Dr. Marco Iacoboni in 2006 (PDF).1 It was published as a book chapter in Social Neuroscience: Integrating Biological and Psychological Explanations of Social Behavior (Harmon-Jones & Winkielman, 2007). Thus, EN appears to be a branch of Social Neuroscience.

But what is Existential Neuroscience, exactly? A group of French intellectuals discussing brain research in a cafe while smoking and sipping espresso? An authentic neuroscience of utter freedom that embraces a state of perpetual despair2 over the meaninglessness of existence? Or independent groups of German-speaking neuroscientists who scan subjects while they ponder death?

Sartre and Friends

If you guessed the last of these, you'd be correct. More precisely, EN thus far consists of neuroimaging studies of mortality salience, as you might expect given its reliance on terror management theory (TMT).3 Therefore, EN should be called “Fear of Death” Neuroscience. TMT holds that when people are confronted with their own mortality, they respond in ways that boost their self-esteem, reinforce their own values, and punish outsiders.

In an ironic twist for the existentialist neuroscientists, however, Existentialism rejects science as a means of understanding what it is to be human. Here's Sartre on the futility of science:

From the outset physiology is condemned to understand nothing of life since it conceives life simply as a particular modality of death, since it sees the infinite divisibility of the corpse as primary, and since it does not know the synthetic unity of the "surpassing towards" for which infinite divisibility is the pure and simple past. Even the study of life in the living person, even vivisection, even the study of the life of protoplasm, even embryology or the study of the egg can not rediscover life; the organ which is observed is living, but it is not established in the synthetic unity of a particular life; it is understood in terms of anatomy—i.e., in terms of death.

Even if one is a firm believer in the potential of neuroscience to lead to better treatments for mental illness, it's hard to envision what brain research can tell us about a philosophical system opposed to science (or most other philosophies, for that matter). Can we imagine what a Taoist Neuroscience or an Epicurean Neuroscience would be like? Not to mention the prospect of a Nihilist Neuroscience or a Post-Structural Neuroscience...

By necessity, a true Existential Neuroscience must deal with human beings as the focus of study, since the withdrawal reflex of Aplysia might not be a valid model of existential angst. It's unlikely we'll see circuit models and optogenetic studies of the alienated self any time soon. As currently formulated, EN has a more concrete goal: to study one specific element of existentialist thought that might be more closely related to Heidegger's views (see Quirin et al., 2012).

This leads us to the most recent of the EN studies in SCAN (Silveira et al., 2013), which I'll discuss in some detail. This study is based on a different reaction to mortality salience, one that is derived from evolutionary psychology: the drive to reproduce. The heterosexual participants in the study viewed attractive opposite-sex faces and made decisions about whether they would like to meet them (a proxy for sexual desire) after being primed by death-related words (or not). Already, this seems like a bridge too far, but let us go on.

Sixteen female and 16 male subjects participated in this fMRI experiment. They viewed a series of attractive faces (as judged by an independent group of participants) and decided, in separate blocks, whether the faces were attractive or not (explicit evaluation) or whether they'd like to meet the person or not ("implicit" evaluation). The task was cued at the beginning of a block by the words Meet? or Attractive? Participants made their choice when the ? appeared on the monitor. Below is an example of the implicit no-prime condition shown to the male subjects.

The participants viewed a blank screen before each face was presented, instead of viewing control words unrelated to death. The lack of such a control condition is problematic, as we'll see later. For comparison, an example of the death-prime condition is illustrated below.

modified from Fig. 1 (Silveira et al., 2013). I added two English translations for the original German exemplars that were given in the text.

Here we can see the participants are reading words, a condition that entails a number of visual, lexical (e.g., decoding the letter string), and semantic (meaning-related) processes that are completely absent from the no-prime condition. Therefore, we can't know if any differential brain activation in the death-prime vs. no-prime conditions is caused by reading a word (any word) or by comprehending a specific reference to death, thereby triggering mortality salience.

To compound matters, the study used a block design, so the discrete hemodynamic responses to prime presentation, face presentation, or the decision screen could not be determined, as they could have been with an event-related design. The figure below shows the death-prime vs. no-prime comparison for the male participants (left) and female participants (right), who did not differ from each other.

The figure shows activation in the left anterior insula and adjacent inferior prefrontal cortex, regions known to be involved in language, particularly in coordinating speech (articulatory planning). Although such activity is usually associated with speaking aloud, left anterior insula activation has also been observed during silent reading. To reiterate, the present result may be due to the absence of any words in the no-prime condition. This interpretation is at odds with Silveira et al.'s claim that the activity "reflects an approach-motivated defense mechanism to overcome concerns that are induced by being reminded of death and dying."
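The block-design problem described above can be sketched with a toy simulation. This is entirely my own illustration, not anything from Silveira et al.: the double-gamma HRF shape, the block structure, and the event timings are all assumptions. The point is simply that when prime and face events always co-occur within the same blocks, their convolved regressors overlap in time and are correlated, so a GLM cannot cleanly attribute activation to one event type or the other:

```python
import math
import numpy as np

def hrf(t):
    """Toy double-gamma hemodynamic response: positive peak ~5 s, undershoot ~15 s."""
    def g(x, k):
        return np.where(x > 0, x ** (k - 1) * np.exp(-x) / math.gamma(k), 0.0)
    return g(t, 6) - g(t, 16) / 6

kernel = hrf(np.arange(0, 32, dtype=float))  # 32 s kernel at 1 s resolution

run_len = 160                  # 160 s run: four 20 s blocks with 20 s rest between
prime = np.zeros(run_len)      # hypothetical death-prime word at each block start
face = np.zeros(run_len)       # hypothetical face presentations a few seconds later
for start in (0, 40, 80, 120):
    prime[start] = 1.0
    for offset in (2, 6, 10, 14):
        face[start + offset] = 1.0

# Convolve the event trains with the HRF to get the GLM regressors
prime_reg = np.convolve(prime, kernel)[:run_len]
face_reg = np.convolve(face, kernel)[:run_len]

r = np.corrcoef(prime_reg, face_reg)[0, 1]
print(f"prime/face regressor correlation: r = {r:.2f}")
```

Because the two regressors rise and fall within the same blocks, they are positively correlated and their separate contributions are not identifiable. An event-related design jitters the intervals between event types precisely to decorrelate such regressors.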

Another wrinkle in the authors' world view is the fact that the death-priming manipulation increased interest in meeting an attractive member of the opposite sex only in males (76% vs. 68% in the no-prime condition) and not in females (47% vs. 48%). It's hard to know how the approach-motivated defense mechanism is operating in women, since it didn't increase their desire to meet potential [fictitious] partners.

It seems a stretch, then, to claim:

Thus insular activation suggests an increase in mating motivation under mortality salience. This interpretation is in accordance with previous findings that mortality salience motivates the formation of romantic relationships as well as reproductive desire.

Hardly. The female participants expressed no greater interest in even meeting potential partners (let alone having babies with them), and yet their insular activations were highly similar to those seen in the male participants (who may or may not have felt greater reproductive desire, as this was not queried or investigated in any way).

In sum, I don't know if we've learned anything about existentialism, or sex and death, or even mortality salience and the left anterior insula.

C'est la guerre.

I emerge alone and in anguish confronting the unique and original project which constitutes my being; all the barriers, all the guard rails collapse, nihilated by the consciousness of my freedom. I do not have nor can I have recourse to any value against the fact that it is I who sustain values in being. Nothing can ensure me against myself, cut off from the world and from my essence by this nothingness which I am. I have to realize the meaning of the world and of my essence; I make my decision concerning them—without justification and without excuse.

1 Iacoboni called EN a “Quiet Revolution” (that to me seems more like embodied cognition... which has not been quiet in announcing itself to the world):

In fact, some empirical work in the neuroscience of sociality seems to suggest - quietly, but resolutely - that the assumptions of the subject/world, inner/outer dichotomy, of representations independent of the things they represent, and of the atomism of the input may not be easily applied in some cases. Thus, rather than the picture of a meaning-giving brain that looks at the outside world and makes sense of it with a reflective and analytic approach, what emerges from some work in social cognitive neuroscience is the view of a human brain that needs a body to exist in a world of shared social norms in which meaning originates from being-in-the-world. This view is reminiscent of motives recurring in at least one flavor of what is called existential phenomenology (Heidegger 1927). For this reason, I call this view existential neuroscience.

2 What sets the existentialist notion of despair apart from the conventional definition is that existentialist despair is a state one is in even when he isn't overtly in despair. So long as a person's identity depends on qualities that can crumble, he is considered to be in perpetual despair. And as there is, in Sartrean terms, no human essence found in conventional reality on which to constitute the individual's sense of identity, despair is a universal human condition.

3 One might consider that Iacoboni's studies of existential mirror neurons fall under a different branch of EN; they're not cited by the "fear of death" faction and vice versa.