Regular use of over-the-counter pain relievers like aspirin, ibuprofen, naproxen, and acetaminophen was associated with three times the risk of committing a homicide in a new Finnish study (Tiihonen et al., 2015). The association between NSAID use and murderous acts was far greater than the risk posed by antidepressants.

Clearly, drug companies are pushing dangerous, toxic chemicals and we should ban the substances that are causing school massacres — Advil and Aleve and Tylenol are evil!!

Wait..... what?

Tiihonen and colleagues wanted to test the hypothesis that antidepressant treatment is associated with an increased risk of committing a homicide. Because, you know, the Scientology-backed Citizens Commission on Human Rights of Colorado thinks so (and their blog is cited in the paper!!):

After a high-profile homicide case, there is often discussion in the media on whether or not the killing was caused or facilitated by a psychotropic medication. Antidepressants have especially been blamed by non-scientific organizations for a large number of senseless acts of violence, e.g., 13 school shootings in the last decade in the U.S. and Finland [1].

The authors reviewed a database of all homicides investigated by the police in Finland between 2003 and 2011. A total of 959 offenders were included in the analysis. Each offender was matched to 10 controls selected from the Population Information System. Then the authors checked purchases in the Finnish Prescription Register. A participant was considered a "user" if they had a current purchase in the system.1

The main drug classes examined were antidepressants, benzodiazepines, and antipsychotics. The primary outcome measure was risk of offending for current use vs. no use of those drugs (with significance set to p<0.016 to correct for multiple comparisons). Seven other drug classes were examined as secondary outcome measures (with α adjusted to .005): opioid analgesics, non-opioid analgesics (e.g., NSAIDs), antiepileptics, lithium, stimulants, meds for addictive disorders, and non-benzo anxiolytics.
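For readers who want the arithmetic, the corrected thresholds are consistent with a standard Bonferroni adjustment. This is a minimal sketch of my reading of the design; the exact divisors are my inference from the quoted thresholds, not values stated explicitly in the paper:

```python
# Bonferroni-style correction: divide the family-wise alpha by the number of
# comparisons. The divisors below are inferred from the thresholds quoted in
# the post (p < 0.016 and alpha = .005), not taken verbatim from the authors.
alpha = 0.05

primary_threshold = alpha / 3      # three primary drug classes
secondary_threshold = alpha / 10   # ten drug classes examined in total

print(f"primary:   p < {primary_threshold:.4f}")
print(f"secondary: p < {secondary_threshold:.4f}")
```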

Lo and behold, current use of antidepressants in the adult offender population was associated with a 31% greater risk of committing a homicide, but this did not reach significance (p=0.022). On the other hand, benzodiazepine use was associated with a 45% greater risk (p<.001), while antipsychotics were not associated with greater risk of offending (p=0.54).

Most dangerous of all were pain relievers. Current use of opioid analgesics (like Oxycontin and Vicodin) was associated with 92% greater risk. Non-opioid analgesics were even worse: individuals taking these meds were at 206% greater risk of offending — that's a threefold increase.2 Taken in the context of this surprising result, the anti-psych-med faction doth complain too much about antidepressants.
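As a sanity check on the arithmetic, an odds ratio maps onto the "% greater risk" phrasing like this. A minimal sketch: "risk" here loosely stands in for odds, and the ORs are back-calculated from the percentages quoted in the post rather than copied from the paper's tables:

```python
# Translate an odds ratio (OR) into the "% greater risk" phrasing used above.
# ORs below are back-calculated from the percentages quoted in the text.
def pct_greater(odds_ratio):
    """Express an odds ratio as a percent increase in odds."""
    return (odds_ratio - 1) * 100

print(round(pct_greater(1.31)))  # antidepressants: 31% greater
print(round(pct_greater(1.45)))  # benzodiazepines: 45% greater
print(round(pct_greater(3.06)))  # non-opioid analgesics: 206% greater, i.e. roughly threefold
```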

Furthermore, analysis of young offenders (25 yrs or less) revealed that none of the medications were associated with greater risk of committing a homicide (benzos and opioids were p=.07 and .04 respectively). To repeat: In Finland at least, there was no association between antidepressant use and the risk of becoming a school shooter.

What are we to make of the provocative NSAIDs? More study is needed:

The surprisingly high risk associated with opioid and non-opioid analgesics deserves further attention in the treatment of pain among individuals with criminal history.

Drug-related murders in oxycodone abusers don't come as a great surprise, but aspirin-related violence is hard to explain...3

Footnotes

1 Having a purchase doesn't mean the individual was actually taking the drug before/during the time of the offense, however.

2 The analysis based on case-control design showed an adjusted OR of 1.30 (95% CI: 0.97-1.75) as the risk of homicide for the current use of an antidepressant, 2.52 (95% CI: 1.90-3.35) for benzodiazepines, 0.62 (95% CI: 0.41-0.93) for antipsychotics, and 2.16 (95% CI: 1.41-3.30) for opioid analgesics.

3 P.S. Just to be clear here, correlation ≠ causation. Disregarding the anomalous nature of the finding in the first place, it could be that murderers have more headaches and muscle pain, so they take more anti-inflammatories (rather than ibuprofen "causing" violence). But if the anti-med faction uses these results to argue that "antidepressants cause school shootings" then explain how ibuprofen raises the risk threefold...

Jessica is depressed again. After six straight weeks of overtime, her boss blandly praised her teamwork at the product launch party. And the following week she was passed over for a promotion in favor of Jason, her junior co-worker. "It's always that way, I'll never get ahead..." She arrives at her therapist's office late, looking stressed, disheveled, and dejected. The same old feelings of worthlessness and despair prompted her to resume her medication and CBT routine.

"You deserve to be recognized for your work," said Dr. Harrison. "The things you're telling yourself right now are cognitive distortions: the black and white thinking, the overgeneralization, the self-blame, jumping to conclusions... "

"I guess so," muttered Jessica, looking down.

"And you need a vacation!"

. . .

A brilliant suggestion, Dr. Harrison. As we all know, taking time off to relax and recharge after a stressful time will do wonders for our mental health. And building up a reserve of happy memories to draw upon during darker times is a cornerstone of positive psychology.

Jessica and her husband Michael take a week-long vacation in Hawaii, creating new episodic memories that involve snorkeling, parasailing, luaus, and mai tais on the beach. Jessica ultimately decides to quit her job and sell jewelry on Etsy.

2015

Michael is depressed after losing his job. His self-esteem has plummeted, and he feels useless. But he's too proud to ask for help. "Depression is something that happens to other people (like my wife), but not to me." He grows increasingly angry and starts drinking too much.

Jessica finally convinces him to see Dr. Harrison's colleague. Dr. Roberts is a psychiatrist with a Ph.D. in neuroscience. She's adopted a translational approach and tries to incorporate the latest preclinical research into her practice. She's intrigued by the latest finding from Tonegawa's lab, which suggests that the reactivation of a happy memory is more effective in alleviating depression than experiencing a similar event in the present.

So instead of telling Michael to take time off and travel and practice mindfulness and live in the present, she tells him to recall his fondest memory from last year's vacation in Hawaii.

It doesn't work.

Michael goes to see Dr. Harrison, who prescribes bupropion and venlafaxine. Four weeks later, he feels much better, and starts a popular website that repudiates positive psychology. Seligman and Zimbardo are secretly chagrined.

. . .

Happy Hippocampus

photo credit: S. Ramirez

Artificially reactivating positive [sexual] memories [in male mice] could offer an alternative to traditional antidepressants [read: makes them struggle more when you hold them by the tail after 10 days of confinement].1

The findings ... offer a possible explanation for the success of psychotherapies in which depression patients are encouraged to recall pleasant experiences. They also suggest new ways to treat depression by manipulating the brain cells where memories are stored...

“Once you identify specific sites in the memory circuit which are not functioning well, or whose boosting will bring a beneficial consequence, there is a possibility of inventing new medical technology where the improvement will be targeted to the specific part of the circuit, rather than administering a drug and letting that drug function everywhere in the brain,” says Susumu Tonegawa, ... senior author of the paper.

Although this type of intervention is not yet possible in humans, “This type of analysis gives information as to where to target specific disorders,” Tonegawa adds.

Before considering what the mice might actually experience when their happy memory cells are activated with light, let's all marvel at what was accomplished here.

Ramirez et al. (2015) studied mice that were genetically engineered to allow blue light to activate a specific set of granule cells in the dentate gyrus subfield of the hippocampus. These neurons are critical for the formation of new memories and are considered “engram cells” that undergo physical changes and store discrete memories (Liu et al., 2014). When a cue reactivates the same set of neurons, the episodic memory is retrieved. In this study, the engram cells were part of a larger circuit that included the amygdala and the nucleus accumbens, regions important for processing emotion, motivation, and reward.

Ramirez, Liu, Tonegawa and colleagues have repeatedly demonstrated their masterful manipulation of mouse memories: activating fear memories, implanting false memories, and changing the valence of memories. These experiments are technically challenging and far outside my areas of expertise (greater detail in the Appendix below). In brief, the authors were able to label discrete sets of dentate gyrus cells while they were naturally activated during an interval of positive, neutral, or negative treatment. Then some groups of animals were stressed for 10 days, and others remained in their home cages.

The stressed mice exhibited signs of “depression-like” and “anxiety-like” behaviors.2 I'll spare you the long digression about whether the tail suspension test successfully models the anguished human experience of abject states, but you can read my earlier musings on the topic.

The most astounding part of the experiment is that optical stimulation of positive-memory engram cells in stressed mice induced a reversal of “depressive” behaviors (but not “anxious” behaviors; see Appendix). Curiously, re-exposing the stressed male mice to an actual female did not have this positive benefit. So mediated experience — artificial reactivation of the engram — is even better than the real thing.

“People who suffer from depression have those positive experiences in the brain, but the brain pieces necessary to recall them are broken. What we’re doing, in mice, is bypassing that circuitry and forcing it to be jump-started,” Ramirez says. “We’re harnessing the brain’s power from within itself and forcing the activation of that positive memory, whereas if you give a natural positive memory to the person or the animal, the depression that they have prevents them from finding that experience rewarding.”

In other words, “We'll force you to be happy [i.e., possibly remember a positive experience], whether you like it or not.” And since the authors discussed therapeutic implications in the paper, they have to deal with the problem of phenomenology, whether they like it or not. What do the mice actually remember? Generic sexual experiences, a feeling of reward? An episodic-like memory, e.g. a specific act and all its spatiotemporal contextual information? Even if we allow mice to have “episodic-like” memories, the latter seems unlikely given the highly artificial and non-physiological method of neural stimulation that bypasses the precisely timed patterns of activity thought to “represent” past experience. These memory manipulation studies seem very futuristic and scary but Inception they are not.

Our memories are plastic and malleable, and their physical instantiation changes each time we recall them. Which version of the Hawaii trip shall we target? What other memories show the greatest overlap with the happy one? Has the problem of hippocampal pattern separation been solved already?? Garden-variety deep brain stimulation seems easy in comparison (and we know how well that's gone in humans, so far). But: “In rodents, optogenetic stimulation of mPFC neurons, mPFC to raphe projections, and ventral tegmental dopaminergic neurons achieved a rapid reversal of stress-induced maladaptive behaviours” (Ramirez et al., 2015).

Why can't we just appreciate the basic knowledge gained from these experiments? But no. There has to be a human application right around the corner.

That link between the neural circuit manipulations in mice and therapies now used in humans makes the findings particularly exciting, says Tom Insel, director of the National Institute of Mental Health.

“This is a big step toward helping to understand not only the underlying circuits for a really serious illness like depression, but also the circuits that underlie treatment,” says Insel...

Was that actually an endorsement of mediated experience? If we go down that road, we must acknowledge that an artificially created reality, albeit one that originates within a being's own brain, is superior to real life. This is the most profound implication of activating positive memory engrams.

When Mediated Experience Replaces a Medicated Existence

Mediated experiences increasingly dominate our lives. Movies and television already confuse the real and the mediated. New technology is blurring the line further. Video games and virtual reality are becoming increasingly realistic. “Augmented reality” technology is on its way to the public. Wearable computers will allow people to enter a news story and see and feel the events the way the journalist who was there did and no doubt eventually we’ll be able to experience the events live. As the line between real and mediated gets harder to see, presence increases. An important and overlooked consequence of this trend is an increasing confusion from the other direction, in which “real life” seems to be mediated. People will have more and more trouble distinguishing reality, and some may not even appreciate that there is a difference. It will get harder for people to trust their own senses and judgment and it will be more difficult to impress people with non-mediated experiences.

Heavy social media users already accept a reality filtered through Instagram and Facebook. As the interest in personal biometrics and the Quantified Self movement rises, so too will tolerance of increasingly invasive performance enhancing and “lifestyle” brain stimulation methods (see DIY tDCS). No one has said that optogenetic-type treatments are (or will be) possible in humans (OK, almost no one; see Albert, 2014). Others are more modest, and see the translational potential in non-invasive transcranial magnetic stimulation (Deisseroth et al., 2015).

. . .

2035

DARPA has mandated that all depressed Americans must be implanted with its CyberNeuroTron WritBit device, which cost $100 billion to develop. CNTWB is a closed-loop DBS system that automatically adjusts the stimulation parameters at 12 different customized target locations. It uses state-of-the-art syringe-injectable mesh electronics, incorporating silicon nanowires and microvoltammetry. Electrical and chemical signals are continuously recorded and uploaded to a centralized data center, where machine learning algorithms determine with high accuracy whether a given pattern of activity signals a significant change in mood.

The data are compiled, analyzed, and stored by the global search engine conglomerate BlueBook, which in 2032 swallowed up Google, Facebook, Apple, and every other internet data mining company.

. . .

2055

Sophia, the daughter of Jessica and Michael, is depressed again. The Ramirez et al. (2050) protocol for Positive Memory Engram Activation is in widespread use. Sophia searches for her dentate gyrus recordings from a vacation in Hawaii five months earlier. Then she selects the specific memory she wants to be artificially reactivated: watching the sunset on the beach with her partner, drinking mai tais and eating taro chips.

"We had a great time on that trip, didn't we Lucas?"

Lucas the intelligent AI nods in agreement. "It's true," he thinks. "Humans can no longer distinguish between virtual reality and the real thing."

This has been especially useful for the Ramirez protocol, since most Pacific Island nations have been underwater since 2047.

Footnotes

1 As an aside, I wonder what the female mice think of all this. What would be an equivalently positive experience? Is sex as rewarding for them? Will there be a new animal model of shopping at Nordstrom? Fortunately, this work was funded by RIKEN Brain Science Institute and Howard Hughes Medical Institute, so the authors don't have to follow the pesky impending NIH guidelines to include females in animal research.

2 “Depression-related” behaviors were assessed using the Tail Suspension Test (TST) and the Sucrose Preference Test (SPT), which are supposed to mimic giving up hope and loss of pleasure, respectively. Different tests were used to measure “anxiety-related” behaviors. Interestingly, none of the happy engram manipulations improved anxiety-like behavior in the mice. Not a very good model of anxious depression, then.

These experiments are indeed difficult, but if you successfully execute them, a publication in Nature is nearly guaranteed. A review by Liu et al. (2014) explained their general protocol in an easier-to-understand fashion:

...we combined activity-dependent, drug-regulatable expression system with optogenetics (Liu et al. 2012). We used a transgenic mouse model where the artificial tetracycline transactivator (tTA), which can be blocked by doxycycline (Dox), is driven by the promoter of immediate early gene (IEG) c-fos (Reijmers et al. 2007). The activity dependency of c-fos promoter poses a natural spatial constrain on the identities of the neurons that can be labeled, reflecting the normal biological selection process of the brain during memory formation, whereas the Dox-dependency of the system poses an artificial temporal constrain as to when these neurons can be labeled, which can be controlled by the experimenters. With these two constraints, the down-stream effector of tTA can express selectively in neurons that are active during a particular behavior episode, only if the animals are off Dox diet. Using this system, we expressed channelrhodopsin-2 (ChR2) delivered by a viral vector AAV-TRE-ChR2-EYFP targeting the dentate gyrus (DG) of the hippocampus and implanted optical fibers right above the infected areas.

One of the major treatment protocols is shown below (adapted from Fig. 1A).

There were a number of control conditions too. Reactivation of neutral or negative engram neurons didn't change depression-like behaviors on the TST and SPT. Reactivation of positive engram neurons in non-stressed mice didn't alter behavior, either.

A very impressive body of work, with a special dedication by the authors: "We dedicate this study to the memory of Xu Liu, who made major contributions to memory engram research."

Xu Liu, in memoriam.

Recently, Science and Nature had news features on big BRAIN funding for the development of deep brain stimulation technologies. The ultimate aim of this research is to treat and correct malfunctioning neural circuits in psychiatric and neurological disorders. Both pieces raised ethical issues, focused on device manufacturers and potential military applications, respectively.

A different ethical concern, not mentioned in either article, is who will have access to these new devices, and who is going to pay the medical costs once they hit the market. DBS for movement disorders is a test case, because Medicare (U.S.) approved coverage for Parkinson's disease (PD) and essential tremor in 2003. Which is good, given that unilateral surgery costs about $50,000.

Willis et al. (2014) examined Medicare records for 657,000 PD patients and found striking racial disparities. The odds of receiving DBS in white PD patients were five times higher than for African Americans, and 1.8 times higher than for Asians. And living in a neighborhood with high socioeconomic status was associated with 1.4-fold higher odds of receiving DBS. Out-of-pocket costs for Medicare patients receiving DBS are over $2,000 per year, which is quite a lot of money for low-income senior citizens.

Aaron Saenz raised a similar issue regarding the cost of the DEKA prosthetic arm (aka "Luke"):

But if you're not a veteran, neither DARPA project may really help you much. The Luke Arm is slated to cost $100,000+.... That's well beyond the means of most amputees if they do not have the insurance coverage provided by the Veteran's Administration. ... As most amputees are not veterans, I think that the Luke Arm has a good chance of being priced out of a large market share.

The availability of qualified neurosurgeons, even in affluent areas, will be another problem once future indications are FDA-approved (or even trialed).

The situation in one Canadian province (British Columbia, with a population of 4.6 million) is instructive. An article in the Vancouver Sun noted that in March 2013, only one neurosurgeon was qualified to perform DBS surgeries for Parkinson's disease (or for dystonia). This resulted in a three-year waiting list. Imagine, all these eligible patients with Parkinson's have to endure their current condition (and worse) for years longer, instead of having a vastly improved quality of life.

... “But here’s the problem: We already have a waiting list of almost three years, from the time family doctors first put in the referral to the DBS clinic. And I’m the only one in B.C. doing this. So we really aren’t able to do more than 40 cases a year,” [Dr. Christopher Honey] said.

...The health authority allocates funding of $1.1 million annually, which includes the cost of the $20,000 devices, and $14,000 for each battery replacement. On average, batteries need to be replaced every three years.

...To reduce wait times, the budget would have to increase and a Honey clone would have to be trained and hired.

Devices for DBS have been approved by the FDA for use in treating Parkinson disease, essential tremor, obsessive-compulsive disorder, and dystonia,2 but expanding DBS use to include new indications has proven difficult—specifically because of the high cost of DBS devices and generally because of disincentives for device manufacturers to sponsor studies when disease populations are small and the potential for a return on investment is not clear. In many of these cases, Medicare coverage will determine whether a study will proceed. ... Ultimately, uncertain Medicare coverage coupled with the lack of economic incentives for industry sponsorship could limit investigators’ freedom of inquiry and ability to conduct clinical trials for new uses of DBS therapy.

But the question remains, where is all this health care money supposed to come from?

The device manufacturers aren't off the hook, either, but BRAIN is trying to reel them in. NIH recently sponsored a two-day workshop, BRAIN Initiative Program for Industry Partnerships to Facilitate Early Access to Neuromodulation and Recording Devices for Human Clinical Studies [agenda PDF]. The purpose was to:

Bring together stakeholders and interested parties to disseminate information on opportunities for research using latest-generation devices for CNS neuromodulation and interfacing with the brain in humans.

Describe the proposed NIH framework for facilitating and lowering the cost of new studies using these devices.

...we hope to spur human research bridging the “valley of death” that has been a barrier to translating pre-clinical research into therapeutic outcomes. We expect the new framework will allow academic researchers to test innovative ideas for new therapies, or to address scientific unknowns regarding mechanisms of disease or device action, which will facilitate the creation of solid business cases by industry and venture capital for the larger clinical trials required to take these ideas to market.

To advance these goals, NIH is pursuing general agreements (Memoranda of Understanding, MOUs) with device manufacturers to set up a framework for this funding program. In the MOUs, we expect each company to specify the capabilities of their devices, along with information, support and any other concessions they are willing to provide to researchers.

In other words, it's a public/private partnership to advance the goal of having all depressed Americans implanted with the CyberNeuroTron WritBit device by 2035 (just kidding!!).

But seriously... before touting the impending clinical relevance of a study in rodents, basic scientists and bureaucrats alike should listen to patients with the current generation of DBS devices. Participants in the halted BROADEN Trial for refractory depression reported outcomes ranging from “...the side effects caused by the device were, at times, worse than the depression itself” to “I feel like I have a second chance at life.”

What do you do with a medical device that causes great physical harm to one person but is a godsend for another? What are the factors involved? Sloppy patient selection criteria? Surgeon ineptitude? Anatomical variation? All of the above and more are likely to contribute to the wildly divergent outcomes.

One anonymous commenter on a previous post recently said that the study sponsor had abandoned them:

The BROADEN study isn't continuing the 4 year follow-up study. I'm in it and just got a phone call. They'll put in a rechargeable device for those of us enrolled and will not follow up with us. The FDA approved it just for us who had the surgery. It looks like St. Judes isn't going foe FDA approval anymore. I have no public reference for this but it was what I was just told over the phone. It has helped me and I don't know what I'm going to do about follow-up care except with my psychiatrist who doesn't have DBS experience. Scary.

Why isn't the manufacturer providing medical care for the study participants? Because they don't have to! In her Science piece, Emily Underwood reported:

Recent failures of several large clinical trials of deep brain stimulation for depression loomed large over the meeting. In the United States, companies or institutions sponsoring research are rarely, if ever, required to pay medical costs that trial subjects incur as a result of their participation, [Hank] Greely points out. “Many people who work in research ethics, including me, think this is wrong,” he says.

Hopefully the workshop attendees considered not only how to lower the cost of new DBS studies, but also how to provide equitable circuit-based health care in the future.

For some inexplicable reason, you watched the torture gore horror film Hostel over the weekend. On Monday, you're having trouble concentrating at work. Images of severed limbs and bludgeoned heads keep intruding on your attempts to code or write a paper. So you decide to read about the making of Hostel. You end up seeing pictures of the most horrifying scenes from the movie. It's all way too much to simply shake off, so you decide to play Tetris.

But a funny thing happens. The unwelcome images start to become less frequent. By Friday, the gory mental snapshots are no longer forcing their way into your mind's eye. The ugly flashbacks are gone.

Meanwhile, your partner in crime is having similar images of eye gouging pop into his head. Except he didn't review the torturous highlights on Monday, and he didn't play Tetris. He continues to have involuntary intrusions of Hostel images once or twice a day for the rest of the week.

This is basically the premise (and outcome) of a new paper in Psychological Science by Ella James and colleagues at Cambridge and Oxford. It builds on earlier work suggesting that healthy participants who play Tetris shortly after watching a “trauma” film will have fewer intrusive memories (Holmes et al., 2009, 2010). This is based on the idea that involuntary “flashbacks” in real post-traumatic stress disorder (PTSD) are visual in nature, and require visuospatial processing resources to generate and maintain. Playing Tetris will interfere with consolidation and subsequent intrusion of the images, at least in an experimental setting (Holmes et al., 2009).

The timing is key here. In the earlier experiments, Tetris play commenced 30 min after the trauma film experience, during the 6 hour window when memories for the event are stabilized and consolidated. Newly formed memories are thought to be malleable during this time.

However, if one wants to extrapolate directly to clinical application in cases of real life trauma exposure (and this is problematic, as we'll see later), it's pretty impractical to play Tetris right after an earthquake, auto accident, mortar attack, or sexual assault. So the new paper relies on the process of reconsolidation, when an act of remembering will place the memory in a labile state once again, so it can be modified (James et al., 2015).

The procedure was as follows: 52 participants came into the lab on Day 0 and completed questionnaires about depression, anxiety, and previous trauma exposure. Then they watched a 12 min trauma film that included 11 scenes of actual death (or threatened death) or serious injury (James et al., 2015):

...the film functioned as an experimental analogue of viewing a traumatic event in real life. Scenes contained different types of context; examples include a young girl hit by a car with blood dripping out of her ear, a man drowning in the sea, and a van hitting a teenage boy while he was using his mobile phone crossing the road. This film footage has been used in previous studies to evoke intrusive memories...

After the film, they rated “how sad, hopeless, depressed, fearful, horrified, and anxious they felt right at this very moment” and “how distressing did you find the film you just watched?” They were instructed to keep a diary of intrusive images and come back to the lab 24 hours later.

On Day 1, participants were randomized to either the experimental group (memory reactivation + Tetris) or the control group (neither manipulation). The experimental group viewed 11 still images from the film that served as reminder cues to initiate reconsolidation. This was followed by a 10 min filler task and then 12 min of playing Tetris (the Marathon mode shown above). The game instructions aimed to maximize the amount of mental rotation the subjects would use. The controls did the filler task and then sat quietly for 12 min.

Both groups kept a diary of intrusions for the next week, and then returned on Day 7. All participants performed the Intrusion Provocation Task (IPT). Eleven blurred pictures from the film were shown, and subjects indicated when any intrusive mental images were provoked. Finally, the participants completed a few more questionnaires, as well as a recognition task that tested their verbal (T/F written statements) and visual (Y/N for scenes) memories of the film.1

The results indicated that the Reactivation + Tetris manipulation was successful in decreasing the number of visual memory intrusions in both the 7-day diary and the IPT (as shown below).

Cool little snowman plots (actually frequency scatter plots) illustrate the time course of intrusive memories in the two groups.

modified from Fig. 2 (James et al., 2015). Frequency scatter plots showing the time course of intrusive memories reported in the diary daily from Day 0 (prior to intervention) to Day 7. The intervention was on Day 1, and the red arrow is 24 hrs later (when the intervention starts working). The solid lines are the results of a generalized additive model. The size of the bubbles represents the number of participants who reported the indicated number of intrusive memories on that particular day.

But now, you might be asking yourself if the critical element was Tetris or the reconsolidation update procedure (or both), since the control group did neither. Not to worry. Experiment 2 tried to disentangle this by recruiting four groups of participants (n=18 in each) — the original two groups plus two new ones: Reactivation only and Tetris only.

And the results from Exp. 2 demonstrated that both were needed.

modified from Fig. 4 (James et al., 2015). Asterisks indicate that results for the Reactivation + Tetris group were significantly different from results for the other three groups (*p < .01). Error bars represent +1 SEM. The No-Task Control and Tetris Only groups did not differ for diary intrusions (n.s.).

The authors' interpretation:

Overall, the results of the present experiments indicate that the frequency of intrusive memories induced by experimental trauma can be reduced by disrupting reconsolidation via a competing cognitive-task procedure, even for established memories (here, events viewed 24 hours previously). ... Critically, neither playing Tetris alone (a nonreactivation control condition) nor the control of memory reactivation alone was sufficient to reduce intrusions... Rather, their combination is required, which supports a reconsolidation-theory account. We suggest that intrusive-memory reduction is due to engaging in a visuospatial task within the window of memory reconsolidation, which interferes with intrusive image reconsolidation (via competition for shared resources).

Surprisingly (perhaps), I don't have anything negative to say about the study. It was carefully conducted and interpreted with restraint. They don't overextrapolate to PTSD. They don't use the word “flashback” to describe the memory phenomenon. And they repeatedly point out that it's “experimental trauma.” I actually considered reviving The Neurocomplimenter for this post, but that would be going too far...

Compare this flattering post with one I wrote in 2010, about a related study by the same authors (Holmes et al., 2010). That paper certainly had a modest title: Key Steps in Developing a Cognitive Vaccine against Traumatic Flashbacks: Visuospatial Tetris versus Verbal Pub Quiz.

Is there really nothing wrong with this study?? Being The Neurocritic, I always have to find something to criticize... and here I had to dig through the Supplemental Material to find issues that may affect the translational potential of Tetris-based interventions.

The Intrusion subscale of the Impact of Event Scale–Revised (IES-R) was used as an exploratory measure, and subject ratings were between 0 and 1.

The Intrusion subscale consists of 8 questions like “I found myself acting or feeling like I was back at that time” and “I had dreams about it” that are rated from 0 (not at all) to 4 (extremely). The IES-R is given to people after distressing, traumatic life events. These individuals may have actual PTSD symptoms like flashbacks and nightmares.

In Exp. 1, the Reactivation + Tetris group (M = .68) had significantly lower scores (p = .016) on Day 7 than the control group (M = 1.01). BUT this is not terribly meaningful, due to a floor effect. And in Exp. 2 there was no difference between the four groups, with scores ranging from 0.61 to 0.81.3

As an overall comment, watching a film of a girl getting hit by a car is not the same as witnessing it in person (obviously). But this real-life scenario may be the most amenable to Tetris, because the witness was not in the accident themselves and did not know the girl (both of which would heighten the emotional intensity and vividness of the trauma, elements that transcend visual imagery).

It's true that in PTSD, the involuntary intrusions of trauma memories (i.e., flashbacks) have a distinctly sensory quality to them (Ehlers et al., 2004). Visual images are most common, but bodily sensations, sounds, and smells can be incorporated into a multimodal flashback, or can occur on their own.

The effectiveness of the Tetris intervention was related to game score and self-rated task difficulty.

This means that people who were better at playing Tetris showed a greater decrease in intrusive memories. This result wasn't covered in the main paper, but it makes you wonder about cause and effect. Is it because the game was more enjoyable for them? Or could it be that their superior visual-spatial abilities (or greater game experience) resulted in greater interference, perhaps by using up more processing resources? That's always a dicey argument, as you could also predict that better, more efficient game play uses fewer visual-spatial resources.

An interesting recent paper found that individuals with PTSD (who presumably experience intrusive visual memories) have worse allocentric spatial processing abilities than controls (Smith et al., 2015). This means they have problems representing the locations of environmental features relative to each other (instead of relative to the self). So are weak spatial processing and spatial memory abilities caused by the trauma, or are weak spatial abilities a vulnerability factor for developing PTSD?

As noted by the authors, the modality-specificity of the intervention needs to be assessed.

Their previous paper showed that the effect was indeed specific to Tetris. A verbally based video game (Pub Quiz) actually increased the frequency of intrusive images (Holmes et al., 2010).

It would be interesting to disentangle the interfering elements of Tetris even further. Would any old mental rotation task do the trick? How about passive viewing of Tetris blocks, or is active game play necessary? Would a visuospatial n-back working memory task work? It wouldn't be as fun, but it obviously uses up visual working memory processing resources. What about Asteroids or Pac-Man or...? 4

This body of work raises a number of interesting questions about the nature of intrusive visual memories, traumatic and non-traumatic alike. Do avid players of action video games (or Tetris) have fewer intrusive memories of past trauma or trauma-analogues in everyday life? I'm not sure this is likely, but you could find out pretty quickly on Amazon Mechanical Turk or one of its alternatives.

There are also many hurdles to surmount before Doctors Prescribe 'Tetris Therapy'. For instance, what does it mean to have the number of weekly Hostel intrusions drop from five to two? How would that scale to an actual trauma flashback, which may involve a fear or panic response?

The authors conclude the paper by briefly addressing these points:

A critical next step is to investigate whether findings extend to reducing the psychological impact of real-world emotional events and media. Conversely, could computer gaming be affecting intrusions of everyday events?

A number of different research avenues await these investigators (and other interested parties). And — wait for it — a clinical trial of Tetris for flashback reduction has already been completed by the investigators at Oxford and Cambridge!

Holmes and colleagues took the consolidation window very seriously: participants played Tetris in the emergency room within 6 hours of experiencing or witnessing an accident. I'll be very curious to see how this turns out...

Footnotes

1 Interestingly, voluntary retrieval of visual and verbal memories was not affected by the manipulation, highlighting the uniqueness of flashback-like phenomena.

2 It does no such thing. But they did embed a video of Dr. Tom Stafford explaining why Tetris is so compelling...

3 The maximum total score on the IES-R is 32. The mean total score in a group of car accident survivors was 17; in Croatian war veterans it was 25. At first I assumed the authors reported the total score out of 32, rather than the mean score per item. I could be very wrong, however. By way of comparison, the mean item score in female survivors of intimate partner violence was 2.26. Either way, the impact of the trauma film was pretty low in this study, as you might expect.

4 OK, now I'm getting ridiculous. I'm also leaving aside modern first-person shooter games as potentially too traumatic and triggering.

In case you've been living under a rock the past few weeks, Google's foray into artificial neural networks has yielded hundreds of thousands of phantasmagoric images. The company has an obvious interest in image classification, and here's how they explain the DeepDream process in their Research Blog:

. . .We train an artificial neural network by showing it millions of training examples [of dogs and eyes and pagodas, let's say] and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.

. . . One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana... By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
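The gradient-ascent idea in that excerpt can be sketched in a few lines. This toy (my own, not Google's code) stands in a fixed linear template for the network's class unit, and uses a penalty on neighboring-pixel differences as the "natural image statistics" prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained network's "banana" output unit: a fixed random
# linear template. Real DeepDream backpropagates through a deep CNN.
H = W = 16
template = rng.normal(size=(H, W))

def class_score(img):
    return float((img * template).sum())

def smoothness_penalty_grad(img):
    # Gradient of a penalty on squared differences between neighboring
    # pixels: the "neighboring pixels should be correlated" prior.
    g = np.zeros_like(img)
    g[1:, :] += img[1:, :] - img[:-1, :]
    g[:-1, :] += img[:-1, :] - img[1:, :]
    g[:, 1:] += img[:, 1:] - img[:, :-1]
    g[:, :-1] += img[:, :-1] - img[:, 1:]
    return g

img = rng.normal(size=(H, W))          # start from random noise
start = class_score(img)
for _ in range(200):                   # gradient ascent on the class score
    grad = template - 0.1 * smoothness_penalty_grad(img)
    img += 0.05 * grad

print(class_score(img) > start)        # the image now scores higher as "banana"
```

The released deepdream code does this with a trained CNN instead of a linear template, but the loop (score the image, take the gradient, nudge the pixels) has the same shape.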

After Google released the deepdream code on GitHub, Psychic VR Lab set up a Deep Dream web interface, which currently has over 300,000 groovy and scary images.

I've taken an interest in the hallucinogenic and distorted brain images, including the one above. I can't properly credit the human input interface (which wasn't me), but I found it after submitting a file of my own in the early stages of http://psychic-vr-lab.com/deepdream/. I can't find the url hosting my image, but I came across the frightening brain here, along with the original.

I've included a few more for your viewing pleasure. Brain Decoder posted a dreamy mouse hippocampus Brainbow.

Rogier said: "According to #deepdream the homunculus in our brains is a terrifying bird-dog hybrid."

Aw, I thought it was kind of cute. More small birds, fewer staring judgmental eyeballs.

And the grand finale isn't a brain at all. But who doesn't want to see the dreamified version of The Garden of Earthly Delights, by Hieronymus Bosch? Here it is, via @aut0mata. Click on image for a larger view.

When nothing's right, just close your eyes
Close your eyes and you're gone

- Beck, "Dreams"

In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.

You should really head over there right now to view it, because it's very impressive.

Computational neuroscience types are using machine learning algorithms to classify all sorts of brain states, and diagnose brain disorders, in humans. How accurate are these classifications? Do the studies all use separate training sets and test sets, as shown in the example above?

Let's say your fMRI measure is able to differentiate individuals with panic disorder (n=33) from those with panic disorder + depression (n=26) with 79% accuracy.1 Or with structural MRI scans you can distinguish 20 participants with treatment-refractory depression from 21 never-depressed individuals with 85% accuracy.2 Besides the issues outlined in the footnotes, the “reality check” is that the model must be able to predict group membership for a new (untrained) data set. And most studies don't seem to do this.

I was originally drawn to the topic by a 3 page article entitled, Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression (Sato et al., 2015). Wow! Really? How accurate? Which fMRI signature? Let's take a look.

The authors used a “standard leave-one-subject-out procedure in which the classification is cross-validated iteratively by using a model based on the sample after excluding one subject to independently predict group membership” but they did not test their fMRI signature in completely independent groups of participants.
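For readers unfamiliar with the procedure, here's a minimal sketch (synthetic data, assuming scikit-learn) of leave-one-subject-out cross-validation, alongside the independent-cohort test the authors did not run:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in data: 30 "subjects" x 50 features, random binary labels.
X = rng.normal(size=(30, 50))
y = rng.integers(0, 2, size=30)

clf = SVC(kernel="linear")

# Leave-one-subject-out: each fold trains on 29 subjects, tests on 1.
loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {loo_acc:.2f}")

# The stronger test: fit once on the whole study cohort, then score a
# completely independent cohort.
X_new = rng.normal(size=(20, 50))
y_new = rng.integers(0, 2, size=20)
ind_acc = clf.fit(X, y).score(X_new, y_new)
print(f"independent-cohort accuracy: {ind_acc:.2f}")
```

With pure-noise features like these, both numbers should hover around chance; a real biomarker study would hope to beat chance on the independent cohort, not just inside the leave-one-out loop.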

Nor did they try to compare individuals who are currently depressed to those who are currently remitted. That didn't matter, apparently, because the authors suggest the fMRI signature is a trait marker of vulnerability, not a state marker of current mood. But the classifier missed 28% of the remitted group who did not have the “guilt-selective anterior temporal functional connectivity changes.”

What is that, you ask? This is a set of mini-regions (i.e., not too many voxels in each) functionally connected to a right superior anterior temporal lobe seed region of interest during a contrast of guilt vs. anger feelings (selected from a number of other possible emotions) for self or best friend, based on written imaginary scenarios like “Angela [self] does act stingily towards Rachel [friend]” and “Rachel does act stingily towards Angela” conducted outside the scanner (after the fMRI session is over). Got that?

You really need to read a bunch of other articles to understand what that means, because the current paper is less than 3 pages long. Did I say that already?

The patients were previously diagnosed according to DSM-IV-TR (which was current at the time), and in remission for at least 12 months. The study was conducted by investigators from Brazil and the UK, so they didn't have to worry about RDoC, i.e. “new ways of classifying mental disorders based on behavioral dimensions and neurobiological measures” (instead of DSM-5 criteria). A “guilt-proneness” behavioral construct, along with the “guilt-selective” network of idiosyncratic brain regions, might be more in line with RDoC than past major depression diagnosis.

Could these results possibly generalize to other populations of remitted and never-depressed individuals? Well, the fMRI signature seems a bit specialized (and convoluted). And overfitting is another likely problem here...

Ideally, the [decision] tree should perform similarly on both known and unknown data.

So this one is less than ideal. [NOTE: the one that's 90% in the top figure]

These errors are due to overfitting. Our model has learned to treat every detail in the training data as important, even details that turned out to be irrelevant.
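The quoted point is easy to reproduce. In this sketch (scikit-learn, synthetic data with 20% label noise), an unconstrained decision tree scores perfectly on its training data but worse on held-out data, because it has memorized the noise:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 samples, a simple true rule (x0 + x1 > 0) plus 20% flipped labels.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
flip = rng.random(200) < 0.2
y[flip] = 1 - y[flip]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # no depth limit
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print("deep tree:    train", deep.score(X_tr, y_tr), "test", deep.score(X_te, y_te))
print("shallow tree: train", shallow.score(X_tr, y_tr), "test", shallow.score(X_te, y_te))
```

The deep tree hits 100% on the training set by carving out a leaf for every noisy label; the gap between its train and test accuracy is the overfitting the excerpt describes.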

In my next post, I'll present an unsystematic review of machine learning as applied to the classification of major depression. It's notable that Sato et al. (2015) used the word “classification” instead of “diagnosis.”3

ADDENDUM (Aug 3 2015): In the comments, I've presented more specific critiques of: (1) the leave-one-out procedure and (2) how the biomarker is temporally disconnected from when the participants identify their feeling as 'guilt' or 'anger' or etc. (and why shame is more closely related to depression than guilt).

Footnotes

1 The sensitivity (true positive rate) was 73% and the specificity (true negative rate) was 85%. After correcting for confounding variables, these numbers were 77% and 70%, respectively.

2 The abstract concludes this is a “high degree of accuracy.” Not to pick on these particular authors (this is a typical study), but Dr. Dorothy Bishop explains why this is not very helpful for screening or diagnostic purposes. And what you'd really want to do here is to discriminate between treatment-resistant vs. treatment-responsive depression. If an individual does not respond to standard treatments, it would be highly beneficial to avoid a long futile period of medication trials.

3 In case you're wondering, the title of this post was based on The Dark Side of Diagnosis by Brain Scan, which is about Dr Daniel Amen. The work of the investigators discussed here is in no way, shape, or form related to any of the issues discussed in that post.

In the coming era of Precision Medicine, we'll all want customized treatments that “take into account individual differences in people’s genes, environments, and lifestyles.” To do this, we'll need precise diagnostic tools to identify the specific disease process in each individual. Although focused on cancer in the near-term, the longer-term goal of the White House initiative is to apply Precision Medicine to all areas of health. This presumably includes psychiatry, but the links between Precision Medicine, the BRAIN initiative, and RDoC seem a bit murky at present.1

But there's nothing a good infographic can't fix. Science recently published a Perspective piece by the NIMH Director and the chief architect of the Research Domain Criteria (RDoC) initiative (Insel & Cuthbert, 2015). There's Deconstruction involved, so what's not to like? 2

ILLUSTRATION: V. Altounian and C. Smith / SCIENCE

In this massively ambitious future scenario, the totality of one's genetic risk factors, brain activity, physiology, immune function, behavioral symptom profile, and life experience (social, cultural, environmental) will be deconstructed and stratified and recompiled into a neat little cohort. 3

The new categories will be data driven. The project might start by collecting colossal quantities of expensive data from millions of people, and continue by running classifiers on exceptionally powerful computers (powered by exceptionally bright scientists/engineers/coders) to extract meaningful patterns that can categorize the data with high levels of sensitivity and specificity. Perhaps I am filled with pathologically high levels of negative affect (Loss? Frustrative Nonreward?), but I find it hard to be optimistic about progress in the immediate future. You know, for a Precision Medicine treatment for me (and my pessimism)...

But let's just focus on the brain for now. For a long time, most neuroscientists have viewed mental disorders as brain disorders. [But that's not to say that environment, culture, experience, etc. play no role! cf. Footnote 3]. So our opening question becomes, How do we classify and diagnose brain disorders (er, neural circuit disorders) in a fashion consistent with RDoC principles? Is there really One Brain Network for All Mental Illness, for instance? (I didn't think so.)

Our colleagues in Asia and Australia and Europe and Canada may not have gotten the funding memo, however, and continue to run classifiers based on DSM categories. 5 In my previous post, I promised an unsystematic review of machine learning as applied to the classification of major depression. You can skip directly to the Appendix to see that.

Regardless of whether we use DSM-5 categories or RDoC matrix constructs, what we need are robust and reproducible biomarkers (see Table 1 above). A brief but excellent primer by Woo and Wager (2015) outlined the characteristics of a useful neuroimaging biomarker:

1. Criterion 1: diagnosticity

Good biomarkers should produce high diagnostic performance in classification or prediction. Diagnostic performance can be evaluated by sensitivity and specificity. Sensitivity concerns whether a model can correctly detect signal when signal exists. Effect size is a closely related concept; larger effect sizes are related to higher sensitivity. Specificity concerns whether the model produces negative results when there is no signal. Specificity can be evaluated relative to a range of specific alternative conditions that may be confusable with the condition of interest.

2. Criterion 2: interpretability

Brain-based biomarkers should be meaningful and interpretable in terms of neuroscience, including previous neuroimaging studies and converging evidence from multiple sources (eg, animal models, lesion studies, etc). One potential pitfall in developing neuroimaging biomarkers is that classification or prediction models can capitalize on confounding variables that are not neuroscientifically meaningful or interesting at all (eg, in-scanner head movement). Therefore, neuroimaging biomarkers should be evaluated and interpreted in the light of existing neuroscientific findings.

3. Criterion 3: deployability

Once the classification or outcome-prediction model has been developed as a neuroimaging biomarker, the model and the testing procedure should be precisely defined so that it can be prospectively applied to new data. Any flexibility in the testing procedures could introduce potential overoptimistic biases into test results, rendering them useless and potentially misleading. For example, “amygdala activity” cannot be a good neuroimaging biomarker without a precise definition of which “voxels” in the amygdala should be activated and the relative expected intensity of activity across each voxel. A well-defined model and standardized testing procedure are crucial aspects of turning neuroimaging results into a “research product,” a biomarker that can be shared and tested across laboratories.

4. Criterion 4: generalizability

Clinically useful neuroimaging biomarkers aim to provide predictions about new individuals. Therefore, they should be validated through prospective testing to prove that their performance is generalizable across different laboratories, different scanners or scanning procedures, different populations, and variants of testing conditions (eg, other types of chronic pain). Generalizability tests inherently require multistudy and multisite efforts. With a precisely defined model and standardized testing procedure (criterion 3), we can easily test the generalizability of biomarkers and define the boundary conditions under which they are valid and useful.

[Then the authors evaluated the performance of a structural MRI signature for IBS presented in an accompanying paper.]
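Criterion 1's quantities reduce to four counts from a confusion matrix. A quick sketch with hypothetical numbers (the tp/fn/tn/fp counts are made up for illustration, not taken from any study discussed here):

```python
# Hypothetical classifier results: 33 patients, 26 controls (made-up counts).
tp, fn = 24, 9      # patients labeled correctly / incorrectly
tn, fp = 22, 4      # controls labeled correctly / incorrectly

sensitivity = tp / (tp + fn)            # true positive rate
specificity = tn / (tn + fp)            # true negative rate
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(f"sensitivity {sensitivity:.2f}, "
      f"specificity {specificity:.2f}, "
      f"accuracy {accuracy:.2f}")
```

Note that overall accuracy depends on the group sizes, so a headline accuracy figure can hide a lopsided sensitivity/specificity trade-off.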

Should we try to improve on a neuroimaging biomarker (or “neural signature”) for classic disorders in which “Neuroanatomical diagnosis was correct in 80% and 72% of patients with major depression and schizophrenia, respectively...” (Koutsouleris et al., 2015)? That study used large cohorts and evaluated the trained biomarker against an independent validation database (i.e., it was more thorough than many other investigations). Or is the field better served by classifying when loss and agency and auditory perception go awry? What would individualized treatments for these constructs look like? Presumably, the goal is to develop better treatments, and to predict who will respond to a specific treatment(s).

OR should we adopt the surprisingly cynical view of some prominent investigators, who say:

...identifying a genuine neural signature would necessitate the discovery of a specific pattern of brain responses that possesses nearly perfect sensitivity and specificity for a given condition or other phenotype. At the present time, neuroscientists are not remotely close to pinpointing such a signature for any psychological disorder or trait...

If that's true, then we'll have an awfully hard time with our resting state fMRI classifier for neuro-nihilism.

2 Derrida's Deconstruction and RDoC are diametrically opposed, as irony would have it.

3 Or maybe an n of 1... I'm especially curious about how life experience will be incorporated into the mix. Perhaps the patient of the future will upload all the data recorded by their memory implants, as in The Entire History of You (an episode of Black Mirror).

4 The word “shroud” always makes everything sound so dire and deathly important... especially when used as a noun.

5 As do many research groups in the US. This is meant to be snarky, but not condescending to anyone who follows DSM-5 in their research.

Below are 34 references on MRI/fMRI applications of machine learning used to classify individuals with major depression (I excluded EEG/MEG for this particular unsystematic review). The search terms were combinations of "major depression", "machine learning", "support vector", and "classifier".

This last one is especially important, since an accurate diagnosis can avoid the potentially disastrous prescribing of antidepressants in bipolar depression.

Idea that may already be implemented somewhere: Individual labs or research groups could perhaps contribute to a support vector machine clearing house (e.g., at NITRC or OpenfMRI or GitHub) where everyone can upload the code for data processing streams and various learning/classification algorithms to try out on each others' data.

I'm not blogging about any of these events. Many many others have already written about them (see selected reading list below). And The Neurocritic has been feeling tapped out lately.

Hence the cats on treadmills. They're here to introduce a new study which demonstrated that early visual experience is not necessary for the perception of biological motion (Bottari et al., 2015). Biological motion perception involves the ability to understand and visually track the movement of a living being. This phenomenon is often studied using point light displays, as shown below in a demo from the BioMotion Lab. You should really check out their flash animation that allows you to view human, feline, and pigeon walkers moving from right to left, scrambled and unscrambled, masked and unmasked, inverted and right side up.

People born with dense, bilateral cataracts that are surgically removed at a later date show deficits in higher visual processing, including the perception of global motion, global form, faces, and illusory contours. Proper neural development during the critical, or sensitive period early in life is dependent on experience, in this case visual input. However, it seems that the perception of biological motion (BM) does not require early visual experience (Bottari et al., 2015).

Participants in the study were 12 individuals with congenital cataracts that were removed at a mean age of 7.8 years (range 4 months to 16 yrs). Age at testing was 17.8 years (range 10-35 yrs). The study assessed their biological motion thresholds (extracting BM from noise) and recorded their EEG to point light displays of a walking man and to scrambled versions of the walking man (see demo).

Behavioral performance on the BM threshold task didn't differ much between the congenital cataract (cc) and matched control (mc) groups (i.e., there was a lot of overlap between the filled diamonds and the open triangles below).

The event-related potentials (ERPs) averaged to presentations of the walking man vs. scrambled man showed the same pattern in cc and mc groups as well: larger to walking man (BM) than scrambled man (SBM).

The N1 component (the peak at about 0.25 sec post-stimulus) seems a little smaller in cc but that wasn't significant. On the other hand, the earlier P1 was significantly reduced in the cc group. Interestingly, the duration of visual deprivation, amount of visual experience, and post-surgical visual acuity did not correlate with the size of the N1.

The authors discuss three possible explanations for these results:

(1) The neural circuitries associated with the processing of BM can specialize in late childhood or adulthood. That is, as soon as visual input becomes available, it initiates the functional maturation of the BM system. Alternatively, the neural systems for BM might mature independently of vision: (2) either they are shaped cross-modally or (3) they mature independent of experience.

They ultimately favor the third explanation, that "the neural systems for BM specialize independently of visual experience." They also point out that the ERPs to faces vs. scrambled faces in the cc group do not show the characteristic difference between these stimulus types. What's so special about biological motion, then? Here the authors wave their hands and arms a bit:

We can only speculate why these different developmental trajectories for faces and BM emerge: BM is characteristic for any type of living being and the major properties are shared across species. ... By contrast, faces are highly specific for a species and biases for the processing of faces from our own ethnicity and age have been shown.

It's more important to see if a bear is running towards you than it is to recognize faces, as anyone with congenital prosopagnosia ("face blindness") might tell you...

"The third sequence showed a walking cat. The data are based on a high-speed (200 fps) video sequence showing a cat walking on a treadmill. Fourteen feature points were manually sampled from single frames. As with the pigeon sequence, data were approximated with a third-order Fourier series to obtain a generic walking cycle."

"How am I supposed to work knowing that guy is listening to every thought that's going through my head? This is insane..."

David Thorogood and Ryan Cates are poor but brilliant Caltech grad students in Listening, a new neuro science fiction film by writer-director Khalil Sullins. Their secret garage lab invention of direct brain-to-brain communication has been hijacked by the CIA, who put it to nefarious use.

I'll take a closer look at the neuroscience (good and bad) in the next post.

Brain decoding experiments that use fMRI or ECoG (direct recordings of the brain in epilepsy patients) to deduce what a person is looking at or saying or thinking have become increasingly popular as well.

They're still quite limited in scope, but any study that can invoke “mind reading” or “brain-to-brain” scenarios will attract the press like moths to a flame....

For example, here's how NeuroNews site Brain Decoder covered the latest “brain-to-brain communication” stunt and the requisite sci fi predictions:

Human brains can now be linked well enough for two people to play guessing games without speaking to each other, scientists report. The researchers hooked up several pairs of people to machines that connected their brains, allowing one to deduce what was on the other's mind. . . .

This brain-to-brain interface technology could one day allow people to empathize or see each other's perspectives more easily by sending others concepts too difficult to explain in words, [author Andrea Stocco] said.

Mind reading! Yay! But this isn't what happened. No thoughts were decoded in the making of this paper (Stocco et al., 2015).

Instead, stimulation of visual cortex did all the “talking.” Player One looked at an LED that indicated “yes” (13 Hz flashes) or “no” (12 Hz flashes). Steady-state visual evoked potentials (a type of EEG signal very common in BCI research) varied according to flicker rate, and this binary code was transmitted to a second computer, which triggered a magnetic pulse delivered to the visual cortex of Player Two if the answer was yes. The TMS pulse in turn elicited a phosphene (a brief visual percept) that indicated yes (no phosphene indicated a “no” answer).
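The sending side of that pipeline is simple enough to sketch. This toy simulation (mine, with made-up parameters like the 250 Hz sampling rate) decodes the attended flicker frequency from a noisy EEG-like signal by comparing spectral power at the two candidate bins:

```python
import numpy as np

fs = 250                           # sampling rate (Hz), plausible for EEG
t = np.arange(0, 2.0, 1 / fs)      # a 2-second analysis window

def decode_answer(eeg):
    # Compare power at the two flicker frequencies; the larger one is
    # taken as the attended LED ("yes" = 13 Hz, "no" = 12 Hz).
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    p12 = spectrum[np.argmin(np.abs(freqs - 12))]
    p13 = spectrum[np.argmin(np.abs(freqs - 13))]
    return "yes" if p13 > p12 else "no"

rng = np.random.default_rng(1)
# Simulated SSVEP: a 13 Hz oscillation buried in noise, as if Player One
# were looking at the "yes" LED.
eeg = np.sin(2 * np.pi * 13 * t) + rng.normal(scale=1.0, size=t.size)
print(decode_answer(eeg))          # prints "yes"
```

A 2-second window at 250 Hz gives 0.5 Hz frequency resolution, so 12 and 13 Hz land on exact FFT bins; real SSVEP classifiers add tricks like canonical correlation analysis, but the binary decision is this simple.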

Ideally, brain-to-brain interfaces would one day allow one person to think about an object, say a hammer, and another to know this, along with the hammer's shape and what the first person wanted to use it for. "That would be the ideal type of complexity of information we want to achieve," Stocco said. "We don't know whether that future is possible."

Well, um, we already have the first half of the equation to some small degree (Naselaris et al. 2015 decoded mental images of remembered scenes)...

The new film Listening starts off with a riff on this work and spins into a dark and dangerous place where no thought is private. Given the preponderance of “hearing” metaphors above, it's fitting that the title is Listening, where fiction (in this case near-future science fiction) is stranger than truth. The hazard of watching a movie that depicts your field of expertise is that you nitpick every little thing (like the scalp EEG sensors that record from individual neurons). This impulse was exacerbated by a setting that is so near-future, it's present day.

From Marilyn Monroe Neurons to Carbon Nanotubes

But there were many things I did like about Listening.1 In particular, I enjoyed the way the plot developed in the second half of the film, especially in the last 30 minutes. On the lighter side was this amusing scene of a pompous professor lecturing on the real-life finding of Marilyn Monroe neurons (Quian Quiroga et al., 2005, 2009).

Caltech Professor: “For example, the subject is asked to think about Marilyn Monroe. My study suggests not only conscious control in the hippocampus and parahippocampal cortex, when the neuron....”

Conversation between two grad students in back of class: “Hey, you hear about the new bioengineering transfer?” ...

Caltech Professor: “Mr. Thorogood, perhaps you can enlighten us all with Ryan's gossip? Or tell us what else we can conclude from this study?”

Ryan the douchy hardware guy: “We can conclude that all neurosurgeons are in love with Marilyn Monroe.”

David the thoughtful software guy: “A single neuron has not only the ability to carry complex code and abstract form but is also able to override sensory input through cognitive effort. It suggests thought is a stronger reality than the world around us.”

Caltech Professor: “Unfortunately, I think you're both correct.”

Ryan and David are grad students with Big Plans. They've set up a garage lab (with stolen computer equipment) to work on their secret EEG decoding project. Ryan the douche lets Jordan the hot bioengineering transfer into their boys' club, much to David's dismay.

So she gets to stay in the garage. For the demonstration, Ryan sports an EEG net that looks remarkably like the ones made by EGI (shown below on the right).

Ryan reckons they'll put cell phone companies out of business with their mind reading invention, but David realizes they have a long way to go...

Jordan the hot bioengineering transfer: “Your mind can have a dozen thoughts in a millisecond 2 [really? how can you possibly assert this?] but it takes you five seconds to say 'hi sexy'?”

Ryan the douchy hardware guy: “It's not perfect.”

Jordan: “It's crap.”

.....

Jordan points out the decoding algorithm's response time is way too slow to be useful, and that recording from “a thousand neurons” 3 isn't enough... “you have to open the books.” David points out they're not neurosurgeons (who would implant intracranial electrodes for ECoG).

Jordan: “You don't need surgery... you need nanotubes.”

...and this leads to the most ridiculous scenario: intrathecal administration of said nanotubes [along with microscopic transistors to form molecular electrodes] via lumbar puncture (spinal injection) performed by complete novices wielding foot-long needles. [Direct administration into the cerebrospinal fluid bypasses difficulties with the impermeable blood-brain barrier.] But if you can get through that, and the heavy-handed use of color filters...

...you will be transported to the Red Room, where scary bald men “listen” to every thought [the direct brain-to-brain communication is one way only to avoid that nasty "circular feedback loop"].

Then more THINGS happen. It's not perfect. But it's not crap. I thought Listening was worth $4.99.

1Some of the dialogue and the interpersonal relationships? Not as much.

2Dozens of thoughts in 1/1000 of a second?? Perhaps she's being hyperbolic here... Well, popular lore says we have 70,000 thoughts per day, which comes out to only 0.8101851851851852 thoughts per second. But this is also absurd, since we haven't yet defined what a “thought” even is. Interesting factoid: the Laboratory of Neuroimaging (LONI) at UCLA has taken credit for this number. But they did offer some caveats:

*This is still an open question (how many thoughts does the average human brain processes in 1 day). LONI faculty have done some very preliminary studies using undergraduate student volunteers and have estimated that one may expect around 60-70K thoughts per day. These results are not peer-reviewed/published. There is no generally accepted definition of what "thought" is or how it is created. In our study, we had assumed that a "thought" is a sporadic single-idea cognitive concept resulting from the act of thinking, or produced by spontaneous systems-level cognitive brain activations.

So there's the heart of the problem: No one really knows what the biological basis for a 'thought' is, so we can't compute how fast a brain can produce them. Once you figure out the biological basis for a thought (and return from the Nobel ceremony) you can ask the question again and expect a reasonable scientific answer. In the meantime, you could probably get a bunch of psychologists to argue about the definition of a thought for a while, and get a varying set of answers that depend highly on the definitions.

Oh, and I think they also said 30 thoughts per second at another point in the movie...
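The arithmetic behind the footnote's numbers, spelled out (a trivial back-of-the-envelope check, nothing more):

```python
# Back-of-the-envelope check of the "thoughts per second" figures above.
SECONDS_PER_DAY = 24 * 60 * 60      # 86,400

thoughts_per_day = 70_000           # the popular (unvalidated) LONI-attributed figure
print(thoughts_per_day / SECONDS_PER_DAY)   # ~0.81 thoughts per second

# Conversely, the movie's ~30 thoughts per second would imply:
print(30 * SECONDS_PER_DAY)         # 2,592,000 thoughts per day
```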

3 Yeah, here's the “one electrode, one neuron” fallacy. The reality is that a single EEG electrode records summed, synchronous activity from thousands of neurons, at the very least.

The brain’s wiring patterns can shed light on a person’s positive and negative traits, researchers report in Nature Neuroscience1. The finding, published on 28 September, is the first from the Human Connectome Project (HCP), an international effort to map active connections between neurons in different parts of the brain.

“We identified one strong mode of population co-variation: subjects were predominantly spread along a single 'positive-negative' axis linking lifestyle, demographic and psychometric measures to each other and to a specific pattern of brain connectivity.”

Well. This sounds an awful lot like the Hegemony of the Western Binary as applied to resting state functional connectivity to me...

And hey, looks like IQ, years of education, socioeconomic status, the ability to delay reward, and life satisfaction give you a good brain.

“You can distinguish people with successful traits and successful lives versus those who are not so successful,” [Marcus Raichle] says.

The authors used canonical correlation analysis (CCA) to estimate how 280 demographic and behavioral subject measures and patterns of brain connectivity co-varied in a similar way across subjects (Smith et al., 2015):
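For readers unfamiliar with the method, here's a minimal CCA sketch on toy data — emphatically not the HCP pipeline (which involved 280 measures, partial correlation netmats, and heavy preprocessing), just the core idea: find a weighted sum of "behavioral" columns and a weighted sum of "connectivity" columns whose subject-by-subject scores correlate maximally. All dimensions and the shared latent factor below are invented for illustration.

```python
import numpy as np

# Canonical correlations via QR + SVD: the singular values of Qx'Qy,
# where Qx and Qy are orthonormal bases for the centered data matrices.
def canonical_correlations(X, Y):
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

# Toy data: 200 "subjects", one shared latent factor linking
# 5 behavioral measures to 8 connectivity edges (all hypothetical).
rng = np.random.default_rng(42)
latent = rng.standard_normal(200)
X = np.outer(latent, rng.standard_normal(5)) + 0.5 * rng.standard_normal((200, 5))
Y = np.outer(latent, rng.standard_normal(8)) + 0.5 * rng.standard_normal((200, 8))

r = canonical_correlations(X, Y)
print(round(r[0], 2))  # first canonical correlation -- close to 1 for this toy data
```

The "single positive-negative axis" in the paper is, in this vocabulary, one dominant canonical mode whose subject scores line up with both the behavioral measures and the connectomes.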

And who is not so “successful” (at least according to their chaotic and disconnected brains)?

Regular pot smokers: “...one of the negative traits that pulled a brain farthest down the negative axis was marijuana use in recent weeks.” Cue up additional funding for NIDA: “...the finding emphasizes the importance of projects such as one launched by the US National Institute on Drug Abuse last week, which will follow 10,000 adolescents for 10 years to determine how marijuana and other drugs affect their brains.”

In terms of alcohol content, the distinction is silly these days, since you can buy craft beers like Boatswain Double IPA (8.4% alcohol) for $2.29 at Trader Joe's. Unless those questions were retained as a code for race and socioeconomic status...

“As a black woman interested in feminist movement, I am often asked whether being black is more important than being a woman; whether feminist struggle to end sexist oppression is more important than the struggle to end racism or vice versa. All such questions are rooted in competitive either/or thinking, the belief that the self is formed in opposition to an other...Most people are socialized to think in terms of opposition rather than compatibility. Rather than seeing anti-racist work as totally compatible with working to end sexist oppression, they often see them as two movements competing for first place.”

(2) Good reporting / bad reporting. Smith et al. (2015) are to be commended for such an impressive body of work.1 But I still think it was remiss to report a population along a judgmental good/bad binary axis in a cursory manner. The correlation/causation conundrum needs more of a caveat than:

These analyses were driven by and report only correlations; inferring and interpreting the (presumably complex and diverse) causalities remains a challenging issue for the future.

Are some brains wired for a lifestyle that includes education and high levels of satisfaction, while others are wired for anger, rule-breaking, and substance use?

“Wired” implies born that way – no effects of living in poverty in a shitty neighborhood.

Oh, and my flippant observation about the wine cooler/malt liquor axis wasn't actually a major player in the canonical correlation analysis. But race and ethnicity information was indeed collected (but not used: “partly because the race measure is not quantitative, but consists of several distinct categories”).

(3) Ethics! The data release brings up a larger set of ethical issues. A whole host of personal participant information (e.g., genomics from everyone, including hundreds of identical twins) is included in the package. From Van Essen et al. (2013):

The released HCP data are not considered de-identified, insofar as certain combinations of HCP Restricted Data (available through a separate process) might allow identification of individuals as discussed below. It is accordingly important that all investigators who agree to Open Access Data Use Terms consult with their local IRB or Ethics Committee to determine whether the research needs to be approved or declared exempt. If needed and upon request, the HCP will provide a certificate stating that an investigator has accepted the HCP Open Access Data Use Terms. Because HCP participants come from families with twins and non-twin siblings, there is a risk that combinations of information about an individual (e.g., age by year; body weight and height; handedness) might lead to inadvertent identification, particularly by other family members, if these combinations were publicly released.

Oops.

Important Notice to Recipients and System Administrators of HCP Connectome In A Box Hard Drives

Thank you for acquiring a Connectome-in-a-Box that contains HCP image data. This provides an easy and efficient way to transfer large HCP datasets to other labs and institutions wanting to process lots of data, especially when multiple investigators are involved. With it comes a need to insure compliance with HCP’s Data Use Terms as well as any institutional requirements.

And any participant in the study can look at the results and infer, because of their regular cannabis use and their father's history of heavy drinking, that they must have a “bad brain.” Do the investigators have an obligation to counsel them on what this might mean (and what they should do)? Yeah, stop smoking cigarettes and pot, but there's not much they can do about their father's substance abuse or their fluid intelligence.

(4) Biology. Finally, I'm not sure what the finding means biologically. Across a population, there's a general mode of functional connectivity while participants lie in a scanner with nothing to do. That falls along an axis of “positive” and “negative” traits. And this pattern of correlated hemodynamic activity across 30 node-pair edges means....... what, exactly?

Every person's connectome is unique (“I am my connectome” for the thousandth time).2 But this mantra more commonly refers to the fine-grained structural connectome. You know, the kind that will live forever and be uploaded to a computer (see Amy Harmon's article on The Neuroscience of Immortality, which caused quite a splash).

What is the relationship between resting state functional connectivity and the implementation of thought and behavior via neural codes? This must be exceptionally unique for each person. We know this because even in lowly organisms like flies, neurons in an olfactory region called the mushroom bodies show a striking degree of individuality in neural coding across animals.3

At the single-cell level, we show that uniquely identifiable MBONs [mushroom body output neurons, n=34] display profoundly different tuning across different animals, but that tuning of the same neuron across the two hemispheres of an individual fly was nearly identical.

In other words, a fly's unique olfactory experience shapes the response properties of a tiny set of neurons, even for animals reared under the same conditions. “In several cases, we even recorded on the same day from progeny of the same cross, raised in the same food vial” (Hige et al., 2015).

For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another (though there are some exceptions).1

Ketamine is a dissociative anesthetic and PCP-derivative that can produce hallucinations and feelings of detachment in non-clinical populations. Pharmacologically it's an NMDA receptor antagonist that also acts on other systems (e.g., opioid). Today I'll focus on a recent neuroimaging study that looked at the downsides of ketamine: anhedonia, cognitive disorganization, and perceptual distortions (Pollak et al., 2015).

Imaging Phenomenologically Distinct Effects of Ketamine

In this study, 23 healthy male participants underwent arterial spin labeling (ASL) fMRI scanning while they were infused with either a high dose (0.26 mg/kg bolus + slow infusion) or a low dose (0.13 mg/kg bolus + slow infusion) of ketamine 2 (Pollak et al., 2015). For comparison, the typical dose used in depression studies is 0.5 mg/kg (Wan et al., 2015). Keep in mind that the number of participants in each condition was low, n=12 (after one was dropped) and n=10 respectively, so the results are quite preliminary.

ASL is a post-PET and BOLD-less technique for measuring cerebral blood flow (CBF) without the use of a radioactive tracer (Petcharunpaisan et al., 2010). Instead, water in arterial blood serves as a contrast agent, after being magnetically labeled by applying a 180 degree radiofrequency inversion pulse. Basically, it's a good method for monitoring CBF over a number of minutes.
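To give a flavor of how the labeled-water signal becomes a flow number, here is the standard single-compartment (p)CASL quantification model widely used in the ASL literature — not necessarily the exact pipeline of the study above, and all parameter defaults below are typical 3T textbook values, not the study's:

```python
import math

# Standard single-compartment (p)CASL quantification sketch.
# Typical 3T parameter defaults (assumed), not this study's settings.
def asl_cbf(delta_m, m0, pld=1.8, tau=1.8, t1_blood=1.65,
            alpha=0.85, lam=0.9):
    """CBF in ml/100g/min from the mean control-minus-label signal.

    delta_m : mean control - label difference signal
    m0      : equilibrium (proton density) signal
    pld     : post-labeling delay, s
    tau     : label duration, s
    t1_blood: T1 of arterial blood at 3T, s
    alpha   : labeling efficiency
    lam     : blood-brain partition coefficient, ml/g
    """
    return (6000.0 * lam * delta_m * math.exp(pld / t1_blood)) / (
        2.0 * alpha * t1_blood * m0 * (1.0 - math.exp(-tau / t1_blood)))

# A control-label difference of ~1% of M0 yields a physiologically
# plausible gray-matter-range flow value:
print(round(asl_cbf(delta_m=0.01, m0=1.0), 1))
```

The key point for this study is simply that the control-label difference is tiny (on the order of 1% of the raw signal), which is why ASL is best suited to tracking CBF changes over minutes rather than fast event-related designs.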

ASL sequences were obtained before and 10 min after the start of ketamine infusion. Before and after the scan, participants rated their subjective symptoms of delusional thinking, perceptual distortion, cognitive disorganization, anhedonia, mania, and paranoia on the Psychotomimetic States Inventory (PSI). The study was completely open label, so it's not like they didn't know they were getting a mind-altering drug.

Behavioral ratings were quite variable (note the large error bars below), but generally the effects were larger in the high-dose group, as one might expect.

The changes in Perceptual Distortion and Cognitive Disorganization scores were significant for the low-dose group, with the addition of Delusional Thinking, Anhedonia, and Mania in the high-dose group. But again, it's important to remember there was no placebo condition, the significance levels were not all that impressive, and the n's were low.

The CBF results (below) show increases in anterior and subgenual cingulate cortex and decreases in superior and medial temporal cortex, similar to previous studies using PET.

Fig 2a (Pollak et al., 2015). Changes in CBF with ketamine in the low- and high-dose groups overlaid on a high-resolution T1-weighted image.

Did I say the n's were low? The Fig. 2b maps (not shown here) illustrated significant correlations with the Anhedonia and Cognitive Disorganization subscales, but these were based on 10 and 12 data points, when outliers can drive phenomenally large effects. One might like to say...

For [the high-dose] group, ketamine-induced anhedonia inversely related to orbitofrontal cortex CBF changes and cognitive disorganisation was positively correlated with CBF changes in posterior thalamus and the left inferior and middle temporal gyrus. Perceptual distortion was correlated with different regional CBF changes in the low- and high-dose groups.

Nonetheless, the fact remains that ketamine administration in healthy participants caused negative effects like anhedonia and cognitive disorganization at doses lower than those used in studies of treatment-resistant depression (many of which were also open label). Now you can say, “well, controls are not the same as patients with refractory depression” and you'd be right (see Footnote 1). “Glutamatergic signaling profiles” and symptom reports could show a variable relationship, with severe depression at the low end and schizophrenia at the high end (with controls somewhere in the middle).

The antidepressant efficacy of ketamine ... holds promise for future glutamate-modulating strategies; however, the ineffectiveness of other NMDA antagonists suggests that any forthcoming advances will depend on improving our understanding of ketamine’s mechanism of action. The fleeting nature of ketamine’s therapeutic benefit, coupled with its potential for abuse and neurotoxicity, suggest that its use in the clinical setting warrants caution.

The mysterious and paradoxical ways of ketamine continue...

So take it in don't hold your breath
The bottom's all I've found
We can't get higher than we get
On the Long Way Down

1 One exception is the present study, which discussed the divergent anhedonia results (compared to previous findings of reduced anhedonia in depression). Another example is the work of Dr. John H. Krystal, which includes papers in both the schizophrenia and the treatment-resistant depression realms. However, most of the papers discuss only one and not the other. One notable exception (schizophrenia-related) said this:

...it is important to note that studies examining its effects on glutamateric pathways in the context of mood symptoms (178) may be highly informative for developing our understanding of its relevance to schizophrenia (111). Briefly, emerging models in this area postulate that ketamine may act as anti-depressant by promoting synaptic plasticity via intra-cellular signaling pathways, ultimately promoting brain-derived neurotrophic factor expression via synaptic potentiation (179) and in turns synaptic growth (178). In that sense, acute NMDAR antagonism may promote synaptic plasticity along specific pathways impacted in mood disorders, such as ventral medial PFC (180, 181, p. 916). Conversely, when administered to patients diagnosed with schizophrenia, NMDAR antagonists seem to worsen their symptom profile (182), perhaps by “pushing” an already aberrantly elevated glutamatergic signaling profile upward. Collectively such dissociable effects of ketamine may imply that along distinct circuits there may be an inverted-U relationship between ketamine’s effects and symptoms: depressed patients may be positioned on the low end of the inverted-U (178) and schizophrenia patents may be positioned on the higher end (183). Both task-based and resting-state functional connectivity techniques are well positioned to interrogate such system-level effects of NMDAR antagonists in humans.

2Low-dose ketamine: target plasma level of 50–75 ng/mL was specified (in practice this approximated a rapid bolus of an average of 0.12 mg/kg over 20 s followed by a slow infusion of 0.31 mg/kg/h).

High-dose ketamine: target plasma level of 150 ng/mL was specified (in practice this approximated a rapid bolus of 0.26 mg/kg over 20 s followed by a slow infusion of 0.42 mg/kg/h).

Horror movies where people turn into snakes are relatively common (30 by one count), but clinical reports of delusional transmogrification into snakes are quite rare. This is in contrast to clinical lycanthropy, the delusion of turning into a wolf.

What follows are two frightening tales of unresolved mental illness, minimal followup, and oversharing (plus mistaking an April Fool's joke for a real finding).

THERE ARE NO ACTUAL PICTURES OF SNAKES in this post [an important note for snake phobics].

A 24 year young girl presented to us with complaints that she had died 15 days before and that in her stead she had been turned into a live snake. At times she would try to bite others claiming that she was a snake. ... We showed her photos of snakes and when she was made to face the large mirror she failed to identify herself as her real human self and described herself as snake. She described having snake skin covering her and that her entire body was that of snake except for her spirit inside. ... She was distressed that others did not understand or share her conviction. She felt hopeless that nothing could make her turn into real self. She made suicidal gestures and attempted to hang herself twice on the ward...

The initial diagnosis was severe depressive disorder with psychotic features. A series of drug trials was unsuccessful (Prozac and four different antipsychotics), and a course of 10 ECT sessions had no lasting effect on her delusions. The authors couldn't decide whether the patient should be formally diagnosed with schizophrenia or a more general psychotic illness. Her most recent treatment regime (escitalopram plus quetiapine) was also a failure because the snake delusion persisted.

“Our next plan is to employ supportive psychotherapy in combination with pharmacotherapy,” said the authors (but we never find out what happened to her). Not a positive outcome...

Ophidianthropy with paranoid schizophrenia, cannabis use, bestiality, and history of epilepsy

The second case is even more bizarre, with a laundry list of delusions and syndromes (Mondal, 2014):

A 23 year old, married, Hindu male, with past history of ... seizures..., personal history of non pathological consumption of bhang and alcohol for the last nine years and one incident of illicit sexual intercourse with a buffalo at the age of 18 years presented ... with the chief complains of muttering, fearfulness, wandering tendency ... and hearing of voices inaudible to others for the last one month. ... he sat cross legged with hands folded in a typical posture resembling the hood of a snake. ... The patient said that he inhaled the breath of a snake passing by him following which he changed into a snake. Though he had a human figure, he could feel himself poisonous inside and to have grown a fang on the lower set of his teeth. He also had the urge to bite others but somehow controlled the desire. He said that he was not comfortable with humans then but would be happy on seeing a snake, identifying it belonging to his species. ... He says that he was converted back to a human being by the help of a parrot, which took away his snake fangs by inhaling his breath and by a cat who ate up his snake flesh once when he was lying on the ground. ... the patient also had thought alienation phenomena in the form of thought blocking, thought withdrawal and thought broadcasting, delusion of persecution, delusion of reference, delusion of infidelity [Othello syndrome], the Fregoli delusion, bizarre delusion, nihilistic delusion [Cotard's syndrome], somatic passivity, somatic hallucinations, made act [?], third person auditory hallucinations, derealization and depersonalisation. He was diagnosed as a case of paranoid schizophrenia as per ICD 10.

Wow.

He was given the antipsychotic haloperidol while being treated as an inpatient for 10 days. Some of his symptoms improved but others did not. “Long term follow up is not available.”

The discussion of this case is a bit... terrifying:

Lycanthropy encompasses two aspects, the first one consisting of primary lupine delusions and associated behavioural deviations termed as lycomania, and the second aspect being a psychosomatic problem called as lycosomatization (Kydd et al., 1991).

Endogenous lycanthropogens responsible for lycomania are lupinone and buldogone which differ by only one carbon atom in their ring structure; their plasma level having a lunar periodicity with peak level during the week of full moon. Lycosomatization likely depends on the simultaneous secretion of suprathreshold levels of both lupinone and the peptide lycanthrokinin, a second mediator, reported to be secreted by the pineal gland, that “initiates and maintains the lycanthropic process” (Davis et al., 1992). Thus, secretion of lupinone without lycanthrokinin results in only lycomania. In our patient these molecular changes were not investigated.

THE MORGUE: David Blake - Our hapless victim. David is a college student who gets recruited by Dr. Stoner to help out at his farm, and be his latest test subject. He's a nice guy, and there really is not much to say about him, as he's pretty bland until he starts growing scales.

Dr. Carl Stoner - The villain of our piece. He's a snake researcher looking for new grant money, and a new test subject. He actually means well enough, and is looking to advance humanity, but in classic horror movie fashion, he plays God and things go too far.

Kristine Stoner - The doctor's daughter, who is also interested in snakes. Especially David's. She's smart, and kind, and again a bit of a blank slate beyond those traits. Loyal to a fault with her father.

Dr. Daniels - A minor character, but Stoner's chief rival, and the man who holds the purse strings. The two doctors have an antagonistic relationship, but there seems to be an undercurrent of past friendship as well, overshadowed by Daniels' position. Or I'm reading too much into things.

The pathological fear of being buried alive is called taphophobia.1 This seems like a perfectly rational fear to me, especially if one is claustrophobic and enjoys horror movies and Edgar Allan Poe short stories. Within a modern medical context, however, it is simply not possible that a person will be buried while still alive.

But this wasn't always the case. In the 19th century, true stories of premature burial were common, appearing in newspapers and medical journals of the day. Tebb and Vollum (1896) published a 400 page tome (Premature burial and how it may be prevented: with special reference to trance, catalepsy, and other forms of suspended animation) that was full of such examples:

The British Medical Journal, December 8, 1877, p. 819, inserts the following : —

"BURIED ALIVE.

"A correspondent at Naples states that the Appeal Court has had before it a case not likely to inspire confidence in the minds of those who look forward with horror to the possibility of being buried alive. It appeared from the evidence that some time ago a woman was interred with all the usual formalities, it being believed that she was dead, while she was only in a trance. Some days afterwards, the grave in which she had been placed being opened for the reception of another body, it was found that the clothes which covered the unfortunate woman were torn to pieces, and that she had even broken her limbs in attempting to extricate herself from the living tomb. The Court, after hearing the case, sentenced the doctor who had signed the certificate of decease, and the mayor who had authorised the interment, each to three months' imprisonment for involuntary manslaughter."

To avoid this fate worse than death, contraptions known as “safety coffins” were popular, with air tubes, bells, flags, and/or burning lamps (Dossey, 2007). Some taphophobes went to great lengths to outline specific instructions for handling their corpse, to prevent such an ante-mortem horror from happening to them. Some might even say these directives were a form of “overkill”...

From the Lancet, August 20, 1864, p. 219.

"PREMATURE INTERMENT.

"Amongst the papers left by the great Meyerbeer, were some which showed that he had a profound dread of premature interment. He directed, it is stated, that his body should be left for ten days undisturbed, with the face uncovered, and watched night and day. Bells were to be fastened to his feet. And at the end of the second day veins were to be opened in the arm and leg. This is the gossip of the capital in which he died. The first impression is that such a fear is morbid. No doubt fewer precautions would suffice, but now and again cases occur which seem to warrant such a feeling, and to show that want of caution may lead to premature interment in cases unknown. An instance is mentioned by the Ost. Deutsche Post of Vienna. A few days since, runs the story, in the establishment of the Brothers of Charity in that capital, the bell of the dead-room was heard to ring violently, and on one of the attendants proceeding to the place to ascertain the cause, he was surprised at seeing one of the supposed dead men pulling the bell-rope. He was removed immediately to another room, and hopes are entertained of his recovery."

Here's a particularly gruesome one:

From the Daily Telegraph, January 18, 1889.

"A gendarme was buried alive the other day in a village near Grenoble. The man had become intoxicated on potato brandy, and fell into a profound sleep. After twenty hours passed in slumber, his friends considered him to be dead, particularly as his body assumed the usual rigidity of a corpse. When the sexton, however, was lowering the remains of the ill-fated gendarme into the grave, he heard moans and knocks proceeding from the interior of the 'four-boards.' He immediately bored holes in the sides of the coffin, to let in air, and then knocked off the lid. The gendarme had, however, ceased to live, having horribly mutilated his head in his frantic but futile efforts to burst his coffin open."

Doesn't that sound like fun? Wouldn't you like to experience this yourself? Now you can!

The game uses a real life coffin, an Oculus Rift, a PC and some microphones. One player gets in the coffin with the Rift on, together with a headset + microphone. The other player plays on a PC again with mic + headset, this player will play a first person game where they must work with the buried player to uncover where the coffin is and rescue the trapped player before their oxygen runs out. This is all powered by the Unity engine.

This work is intended to explore “uncomfortable experiences and interactions” as part of academic research in the Human Computer Interaction field (HCI) from an MSc by Research in Computer Science student, James Brown. The player inside the coffin will experience various emotions as they are put in and then try to get out of the confined space. Claustrophobia as well as the fear of being buried alive “taphophobia” may well affect players of the game and they must cope with these emotions as they play.

Tebb W, Vollum EP. (1896). Premature burial and how it may be prevented: with special reference to trance, catalepsy, and other forms of suspended animation. SWAN SONNENSCHEIN & CO., LIM.: London. {archive.org}

Is it possible to be “addicted” to food, much like an addiction to substances (e.g., alcohol, cocaine, opiates) or behaviors (gambling, shopping, Facebook)? An extensive and growing literature uses this terminology in the context of the “obesity epidemic”, and looks for the root genetic and neurobiological causes (Carlier et al., 2015; Volkow & Bailer, 2015).

Figure 1 might lead you to believe that the term “food addiction” was invented in the late 2000s by NIDA. But this term is not new at all, as Adrian Meule (2015) explained in his historical overview, Back by Popular Demand: A Narrative Review on the History of Food Addiction Research. Dr. Theron G. Randolph wrote about food addiction in 1956 (he also wrote about food allergies).

One problem with the “food addiction” construct is that you can live without alcohol and gambling, but you'll die if you don't eat. Complete abstinence is not an option.2

Another problem is that most obese people simply don't show signs of addiction (Hebebrand, 2015):

...irrespective of whether scientific evidence will justify use of the term food and/or eating addiction, most obese individuals have neither a food nor an eating addiction.3 Obesity frequently develops slowly over many years; only a slight energy surplus is required to in the longer term develop overweight. Genetic, neuroendocrine, physiological and environmental research has taught us that obesity is a complex disorder with many risk factors, each of which have small individual effects and interact in a complex manner. The notion of addiction as a major cause of obesity potentially entails endless and fruitless debates, when it is clearly not relevant to the great majority of cases of overweight and obesity.

Still not convinced? Surely, differences in the brains of obese individuals point to an addiction. The dopamine system is altered, right, so this must mean they're addicted to food? Well, think again, because the evidence for this is inconsistent (Volkow et al., 2013; Ziauddeen & Fletcher, 2013).

An important new paper by a Finnish research group has shown that D2 dopamine receptor binding in obese women is not different from that in lean participants (Karlsson et al., 2015). In contrast, μ-opioid receptor (MOR) binding is reduced, consistent with lowered hedonic processing. After the women had bariatric surgery (resulting in a mean weight loss of 26.1 kg, or 57.5 lbs), MOR binding returned to control values, while the unaltered D2 receptors stayed the same.

In the study, 16 obese women (mean BMI=40.4, age 42.8) had PET scans before and six months after undergoing the standard Gastric Bypass procedure (Roux-en-Y Gastric Bypass) or the Sleeve Gastrectomy. A comparison group of non-obese women (BMI=22.7, age 44.9) was also scanned. The radiotracer [11C]carfentanil measured MOR availability and [11C]raclopride measured D2R availability in two separate sessions. The opioid and dopamine systems are famous for their roles in neural circuits for “liking” (pleasurable consumption) and “wanting” (incentive/motivation), respectively (Castro & Berridge, 2014).

The pre-operative PET scans in the obese women showed that MOR binding was significantly lower in a number of reward-related regions, including ventral striatum, dorsal caudate, putamen, insula, amygdala, thalamus, orbitofrontal cortex and posterior cingulate cortex. Six months after surgery, there was an overall 23% increase in MOR availability, which was no longer different from controls.

The MOR system promotes hedonic [pleasurable] aspects of feeding, and this can make obese individuals susceptible to overeating in order to gain the desired hedonic response from food consumption, which may further promote pathological eating. We propose that at the initial stages of weight gain, excessive eating may cause perpetual overstimulation of the MOR system, leading to subsequent MOR downregulation. ... However, bariatric surgery-induced weight loss and decreased food intake may reverse this process.

The unchanging striatal dopamine D2 receptor densities in the obese participants are in stark contrast to what is seen in individuals who are addicted to stimulant drugs, such as cocaine and methamphetamine (Volkow et al., 2001). Drugs of abuse are consistently associated with decreases in D2 receptors.

Fig. 1 (modified from Volkow et al., 2001). Ratio of the Distribution Volume of [11C]Raclopride in the Striatum (Normalized to the Distribution Volume in the Cerebellum) in a Non-Drug-Abusing Comparison Subject and a Methamphetamine Abuser.
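The normalization described in this caption is the standard reference-region approach in receptor PET: the cerebellum, which has negligible D2 receptor density, serves as the reference, and specific binding is summarized as the distribution volume ratio (DVR), or equivalently the binding potential BP_ND = DVR - 1. A minimal sketch of the arithmetic, with made-up distribution volumes (not values from the study):

```python
# Reference-region quantification sketch for [11C]raclopride PET.
# The distribution volumes below are hypothetical, chosen only to show
# the arithmetic; they are not values from Volkow et al. (2001).

def binding_potential(dv_target: float, dv_reference: float) -> float:
    """Non-displaceable binding potential: BP_ND = DVR - 1,
    where DVR = DV_target / DV_reference."""
    return dv_target / dv_reference - 1.0

dv_striatum = 3.0    # hypothetical distribution volume, striatum
dv_cerebellum = 1.2  # reference region (negligible D2 receptor density)

dvr = dv_striatum / dv_cerebellum            # the ratio plotted in the figure
print(round(dvr, 2), round(binding_potential(dv_striatum, dv_cerebellum), 2))
# 2.5 1.5
```

A lower ratio in the abuser's striatum, as in the figure, is what "decreases in D2 receptors" means operationally.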

There's a new article in Trends in Cognitive Sciences about how neuroscientists can incorporate social media into their research on the neural correlates of social cognition (Meshi et al., 2015). The authors outlined the sorts of social behaviors that can be studied via participants' use of Twitter, Facebook, Instagram, etc.: (1) broadcasting information; (2) receiving feedback; (3) observing others' broadcasts; (4) providing feedback; (5) comparing self to others.

More broadly, these activities tap into processes and constructs like emotional state, personality, social conformity, and how people manage their self-presentation and social connections. You know, things that exist IRL (this is an important point to keep in mind for later).

The neural systems that mediate these phenomena, as studied by social cognitive neuroscience types, are the Mentalizing Network (in blue below), the Self-Referential Network (red), and the Reward Network (green).

I anticipated this day in 2009, when I wrote several satirical articles about the neurology of Twitter. I proposed that someone should do a study to examine the neural correlates of Twitter use:

It was bound to happen. Some neuroimaging lab will conduct an actual fMRI experiment to examine the so-called "Neural Correlates of Twitter" -- so why not write a preemptive blog post to report on the predicted results from such a study, before anyone can publish the actual findings?

Here are the conditions I proposed, and the predicted results (a portion of the original post is reproduced below).

A low-level baseline condition (viewing "+") and an active baseline condition (reading the public timeline [public timeline no longer exists] of random tweets from strangers) will be compared to three active conditions:

(1) Celebrity Fluff

(2) Social Media Marketing Drivel

(3) Friends on your Following List

... The hemodynamic response function to the active control condition will be compared to those from Conditions 1-3 above. Contrasts between each of these conditions and the low-level baseline will also be performed.

Fig. 2A. (Mitchell et al., 2006). A region of ventral mPFC showed greater activation during judgments of the target to whom participants considered themselves to be more similar.

Reading the stream of Celebrity Fluff will activate the frontal eye fields to a much greater extent than the control condition, as the participants will be engaged in rolling their eyes in response to the inane banter.

Figure from Paul Pietsch, Ph.D. The frontal eye fields are in a stamp-sized zone at the posterior end of the middle frontal gyri.

Reading the stream of Social Media Marketing Drivel will tax the neural circuits involved in generating a feeling of disgust, including the anterior insula, ventrolateral prefrontal cortex-temporal pole, and putamen-globus pallidus (Mataix-Cols et al., 2008).

Fig. 1A (Jabbi et al., 2008). Coronal slice (y = 18) showing the location of the ROI (white) previously shown to be involved in the experience and observation of disgust.

In conclusion, we predict that the observed patterns of brain activity will be dependent on the nature of the Twitter material being read. These distinct neural networks are expected to reflect the cognitive, emotional, and visceral processes underlying the rapidly changing content of digital media, which ultimately results in "rewiring" of the brain.

Social networking websites are causing alarming changes in the brains of young users, an eminent scientist has warned.

Sites such as Facebook, Twitter and Bebo are said to shorten attention spans, encourage instant gratification and make young people more self-centred.

The claims from neuroscientist Susan Greenfield will make disturbing reading for the millions whose social lives depend on logging on to their favourite websites each day.

Susan Greenfield

No history of social media neuroscience is complete without the unsubstantiated claims of Baroness Susan Greenfield, an extremely prominent British neuroscientist, author, and broadcaster: 'My fear is that these technologies are infantilising the brain into the state of small children who are attracted by buzzing noises and bright lights, who have a small attention span and who live for the moment.' Although she declares the dangers of digital Mind Change far and wide, such statements are not backed by careful peer-reviewed studies.

She is concerned that those who live only in the present, online, don’t allow their malleable brains to develop properly. “It’s not going to destroy the planet but is it going to be a planet worth living in if you have a load of breezy people who go around saying yaka-wow. Is that the society we want?”

A team of British psychologists, neuroscientists, bloggers, and science writers has been trying for ages to rebut the Baroness, asking her to produce reliable evidence for her dire assertions (see Appendix).

The neuroscience of social media isn't just emerging. It's been with us for over ten years.

Footnote

1One of these seven references is not a peer-reviewed paper; it's an abstract for a conference that's starting in a few days. I found it here: Facebook Network Structure and Brain Reactivity to Social Exclusion.

Social media sites like Facebook and Twitter have left a generation of young adults vulnerable to degeneration of the brain, we can exclusively reveal for aboutthefifthtime. Symptoms include self-obsession, short attention spans and a childlike desire for constant feedback, according to a 'top scientist' with no record of published research on the issue. . . .

The scientist believes that use of the internet – and computer games – could 'rewire' the brain, causing neurons to establish new connections and pathways. "Rewiring itself is something that the brain does naturally all the time," the professor said, "but the phrase 'rewiring the brain' sounds really dramatic and chilling, so I like to use it to make it seem like I'm talking about a profound and unnatural change, even though it isn't."

The Pursuit of Happiness is an Unalienable Right granted to all human beings, but it also generates billions of dollars for the self-help industry.

And now the search for happiness is over! Scientists have determined that happiness is located in a small region of your right medial parietal lobe. Positive psychology gurus will have to adapt to the changing landscape or lose their market edge. “My seven practical, actionable principles are guaranteed to increase the size of your precuneus or your money back.”

The structural neural substrate of subjective happiness is the precuneus.

A new paper has reported that happiness is related to the volume of gray matter in a 222.8 mm3 cluster of the right precuneus (Sato et al., 2015). What does this mean? Taking the finding at face value, there was a correlation (not a causal relationship) between precuneus gray matter volume and scores on the Japanese version of the Subjective Happiness Scale.1
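One reason to be wary of this kind of finding: the paper's scatter plot is drawn at the single peak voxel, and selecting the voxel where a correlation happens to be largest inflates the apparent effect even when no voxel carries any real signal. A pure-noise simulation makes the point (the subject count matches the study's n = 51; everything else is arbitrary):

```python
# The "peak voxel" critique, demonstrated on pure noise: test many voxels,
# keep the one with the largest correlation, and the resulting scatter plot
# looks convincing even though no voxel has any true relationship to the
# behavioral score. n matches the study's 51 subjects; other numbers are
# arbitrary.
import random
import statistics

random.seed(0)
n_subjects, n_voxels = 51, 5000
happiness = [random.gauss(0, 1) for _ in range(n_subjects)]  # noise "scores"

def pearson_r(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rs = [pearson_r(happiness, [random.gauss(0, 1) for _ in range(n_subjects)])
      for _ in range(n_voxels)]

print(max(abs(r) for r in rs))               # the "peak voxel" correlation
print(statistics.fmean(abs(r) for r in rs))  # the typical voxel's |r|
```

The peak correlation comes out several times larger than the typical one, purely by chance, which is why a regression line through peak-voxel data overstates the effect.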

Fig. 1 (modified from Sato et al., 2015). Left: Statistical parametric map (p < 0.001, peak-level uncorrected for display purposes). The blue cross indicates the location of the peak voxel. Right: Scatter plot of the adjusted gray matter volume as a function of the subjective happiness score at the peak voxel. [NOTE: Haven't we agreed to not show regression lines through scatter plots based on the single voxel where the effect is the largest??]

NO. Of course not. And the experimental subjects were not actively involved in any sort of task at all. The study used a static measure of gray matter volume in four brain Regions of Interest (ROIs): left anterior cingulate gyrus, left posterior cingulate gyrus, right precuneus, and left amygdala. These ROIs were based on an fMRI activation study in 26 German men (mean age 33 yrs) who underwent a mood induction procedure (Habel et al., 2005). The German participants viewed pictures of faces with happy expressions and were told to “Look at each face and use it to help you to feel happy.” The brain activity elicited by happy faces was compared to activity elicited by a non-emotional control condition. Eight regions were reported in their Table 1.

Before you say I'm being overly pedantic, we can agree that the selected coordinates are at the border of the precuneus and the paracentral lobule. The more interesting fact is that the sadness induction of Habel et al. (2005) implicated a very large region of the posterior precuneus and surrounding regions (1562 voxels). An area over 100 times larger than the Happy Precuneus.

But it seems a bit problematic to use hand picked ROIs from a study of transient and mild “happy” states (in a population of German males) to predict a stable trait of subjective happiness in a culturally distinct group of younger Japanese college students (26 women, 25 men).

Should we expect “the neural correlates of happiness” (or well-being) to be the same in Japanese and Chinese and British college students? In the Chinese study, life satisfaction was positively correlated with gray matter volume in the right parahippocampal gyrus but negatively correlated with gray matter volume in the left precuneus... So the participants with the largest precuneus volumes in that study had the lowest well-being.

What does a bigger (or smaller) size even mean for actual neural processing? Does a larger gray matter volume in the precuneus allow for a higher computational capacity that can generate greater happiness?? We have absolutely no idea: “...there is no clear evidence of correlation between GM volume measured by VBM and any histological measure, including neuronal density” (Gilaie-Dotan et al., 2014).

Sato et al. (2015) concluded that their results have important practical implications: Are you happy? We don't have to take your word for it any more!

In terms of public policy, subjective happiness is thought to be a better indicator of happiness than economic success. However, the subjective measures of happiness have inherent limitations, such as the imprecise nature of comparing data across different cultures and the difficulties associated with the applications of these measures to specific populations, including the intellectually disabled. Our results show that structural neuroimaging may serve as a complementary objective measure of subjective happiness.

Neurology and Psychiatry are two distinct specialties within medicine, both of which treat disorders of the brain. It's completely uncontroversial to say that neurologists treat patients with brain disorders like Alzheimer's disease and Parkinson's disease. These two diseases produce distinct patterns of neurodegeneration that are visible on brain scans. For example, Parkinson's disease (PD) is a movement disorder caused by the loss of dopamine neurons in the midbrain.

It's also uncontroversial to say that drugs like L-DOPA and invasive neurosurgical interventions like deep brain stimulation (DBS) are used to treat PD.

On the other hand, some people will balk when you say that psychiatric illnesses like bipolar disorder and depression are brain disorders, and that drugs and DBS (in severe intractable cases) may be used to treat them. You can't always point to clear cut differences in the MRI or PET scans of psychiatric patients, as you can with PD (which is a particularly obvious example).

The diagnostic methods used in neurology and psychiatry are quite different as well. The standard neurological exam assesses sensory and motor responses (e.g., reflexes) and basic mental status. PD has sharply defined motor symptoms including tremor, rigidity, impaired balance, and slowness of movement. There are definitely cases where the symptoms of PD should be attributed to another disease (most notably Lewy body dementia)1, and other examples where neurological diagnosis is not immediately possible. But by and large, no one questions the existence of a brain disorder.

Things are different in psychiatry. Diagnosis is not based on a physical exam. Psychiatrists and psychologists give clinical interviews based on the Diagnostic and Statistical Manual (DSM-5), a handbook of mental disorders defined by a panel of experts with opinions that are not universally accepted. The update from DSM-IV to DSM-5 was highly controversial (and widely discussed).

The causes of mental disorders are not only biological, but often include important social and interpersonal factors. And their manifestations can vary across cultures.

The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure.

For years, NIMH has been working on an alternate classification scheme, the Research Domain Criteria (RDoC) project, which treats mental illnesses as brain disorders that should be studied according to domains of functioning (e.g., negative valence). Dimensional constructs such as acute threat (“fear”) are key, rather than categorical DSM diagnosis. RDoC has been widely discussed on this blog and elsewhere — it's the best thing since sliced bread, it's necessary but very oversold, or it's ill-advised.

Just as research during the Decade of the Brain (1990-2000) forged the bridge between the mind and the brain, research in the current decade is helping us to understand mental illnesses as brain disorders. As a result, the distinction between disorders of neurology (e.g., Parkinson's and Alzheimer's diseases) and disorders of psychiatry (e.g., schizophrenia and depression) may turn out to be increasingly subtle. That is, the former may result from focal lesions in the brain, whereas the latter arise from abnormal activity in specific brain circuits in the absence of a detectable lesion. As we become more adept at detecting lesions that lead to abnormal function, it is even possible that the distinction between neurological and psychiatric disorders will vanish, leading to a combined discipline of clinical neuroscience.

Future training might begin with two post-graduate years of clinical neuroscience shared by the disciplines we now call neurology and psychiatry, followed by two or three years of specialty training in one of several sub-disciplines (ranging from peripheral neuropathies to public sector and transcultural psychiatry). This model recognizes that the clinical neurosciences have matured sufficiently to resemble internal medicine, with core training required prior to specializing.

Neurology and psychiatry have, for much of the past century, been separated by an artificial wall created by the divergence of their philosophical approaches and research and treatment methods. Scientific advances in recent decades have made it clear that this separation is arbitrary and counterproductive. .... Further progress in understanding brain diseases and behavior demands fuller collaboration and integration of these fields. Leaders in academic medicine and science must work to break down the barriers between disciplines.

Contemporary leaders and observers of academic medicine are not all equally ecstatic about this prospect, however. Taylor et al. (2015) are enthusiastic advocates of a move beyond “Neural Cubism”, to increased integration of neurology and psychiatry. Dr. Sheldon Benjamin agrees that greater cross-discipline training is needed, but wants the two fields to remain separate. But Dr. Jose de Leon thinks the psychiatry/neurology integration is a big mistake that revives early 20th century debates (see table below, in the footnotes).3

I think a distinction can (and should) be made between the research agenda of neuroscience and the current practice of psychiatry. Neuroscientists who work on such questions assume that mental illnesses are brain disorders and act accordingly, by studying the brain. They study animal models and brain slices and genes and humans with implanted or attached electrodes and humans in scanners. And they study the holy grail of neural circuits using DREADDs and optogenetics. This doesn't invalidate the existence of social, cultural, and interpersonal factors that affect the development and manifestation of mental illnesses. As a non-clinician, I have less to say about medical practice. I'm not grandiose enough to claim that neuroscience research (or RDoC, for that matter) will transform the practice of psychiatry (or neurology) in the near future. [Though you might think differently if you read Public Health Relevance Statements or articles in high profile journals.]

Basic researchers may not even think about the distinction between neurology and psychiatry. Is the abnormal deposition of amyloid-β peptide in Alzheimer's disease (AD) an appropriate target for treatment? Are metabotropic glutamate receptors an appropriate target in schizophrenia? These are similar questions, despite the fact that one disease is neurological and the other psychiatric. There are defined behavioral endpoints that mark treatment-related improvements in either case. It's very useful to measure a change in amyloid burden4 using florbetapir PET imaging in AD [there's nothing similar in schizophrenia], but the most important measure is cognitive improvement (or a flattening of cognitive decline).

Does Location Matter?

In response to the pro-merger cavalcade, a recent meta-analysis asked whether the entire category of neurological disorders affects different brain regions than the entire category of psychiatric disorders (Crossley et al., 2015). The answer was why yes, the two categories affect different brain areas, and for this reason neurology and psychiatry should remain separate.

I thought this was an odd question to begin with, and an even odder conclusion. It's not surprising that disorders of movement, for example, involve different brain regions than disorders of mood or disorders of thought. From my perspective, it's more interesting to look at where the two categories overlap, with an eye to specific comparisons (not global lumping). For instance, are compulsive and repetitive behaviors in OCD associated with alterations in some of the subcortical circuits implicated in movement disorders? Why yes.

But let's take a closer look at the technical details of the study.

Crossley et al. (2015) searched for structural MRI articles that observed decreases in gray matter in patients compared to controls. The papers used voxel-based morphometry (VBM) to quantify regional gray matter volumes across the entire brain. For inclusion, disorders needed to have at least seven published studies to be entered into the analysis. A weighted method was used to control for the number of published studies (e.g., AD and schizophrenia were way over-represented in their respective categories), and 7 papers were chosen at random for each disorder. The papers were either in the brainmap.org VBM database or found via electronic searches. The x y z peak coordinates were extracted from each paper and entered into the GingerALE program, which performed a meta-analysis via the activation likelihood estimation (ALE) method (see these references: [pdf], [pdf], [pdf]).
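For readers unfamiliar with ALE, the core idea is this: each reported peak is blurred with a 3D Gaussian to form a per-study "modeled activation" probability map, the maps are combined voxel-wise as a probabilistic union, and the resulting statistic is tested against a null of spatially random foci. A toy 1D sketch of that core computation (coordinates, kernel width, and grid are invented; real ALE works on 3D brain volumes with empirically derived kernels and a formal null model):

```python
# Toy 1D illustration of the activation likelihood estimation (ALE) idea.
# Peak coordinates and the kernel width are invented for illustration.
import math

def modeled_activation(peaks, grid, sigma=5.0):
    """Per-study map: at each grid position, the probability that at least
    one of the study's reported peaks reflects a 'true' activation there
    (union of Gaussian kernels centered on the peaks)."""
    ma = []
    for x in grid:
        p_none = 1.0
        for peak in peaks:
            g = math.exp(-((x - peak) ** 2) / (2 * sigma ** 2))
            p_none *= 1.0 - g
        ma.append(1.0 - p_none)
    return ma

def ale(study_maps):
    """Combine studies voxel-wise as a probabilistic union."""
    n = len(study_maps[0])
    return [1.0 - math.prod(1.0 - m[i] for m in study_maps) for i in range(n)]

grid = list(range(101))
studies = [[40.3, 43.7], [41.9], [44.2, 80.6]]   # one list of peaks per study
scores = ale([modeled_activation(s, grid) for s in studies])

# ALE is highest where the studies converge (the low 40s), not at the
# isolated peak near x ~ 80 reported by only one study.
print(grid[scores.index(max(scores))])
```

This is why ALE rewards spatial convergence across studies: an isolated focus contributes to the map but never dominates it.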

They found that the basal ganglia, insula, lateral and medial temporal cortex, and sensorimotor areas were affected to a greater extent in neurological disorders. Meanwhile, anterior and posterior cingulate, medial frontal cortex, superior frontal gyrus, and occipital cortex were more affected in psychiatric disorders.


The authors also looked at network differences, with networks based on previous resting state fMRI studies. Some of these results were uninformative. For example, psychiatric disorders affect visual networks more than neurological disorders do. That was because neurological disorders affect visual regions much less than expected (based on the total number of affected voxels).

Another finding was that abnormalities in the cerebellum occurred less often than expected in neurological disorders. But this is obviously not the case in cerebellar ataxia, which affects (you guessed it) THE CEREBELLUM. So I'm not sure how useful it is to make global statements about cerebellar involvement in neurological disorders.

ALE map (FDR pN < 0.05) from 16 VBM studies of ataxia.

ALE map above was based on 16 papers in the BrainMap database (from a search including 'Ataxia', 'Friedreich ataxia', or 'Spinocerebellar Ataxia'). Gray matter decreases are seen in the cerebellum.

It was sort of interesting to see all the neurological disorders lumped together and compared to all the psychiatric disorders (the coarsest carving imaginable), but I guess I'm more of a splitter. But an integrative one who also looks for commonalities and overlap. The intersection of neurology and psychiatry is a fascinating topic that could fill many future blog posts.

If these data support a regional association between amyloid plaque burden and metabolism, it is for the somewhat heretical inversion of the amyloid hypothesis. That is, regional amyloid plaque deposition is protective, possibly by pulling the more toxic amyloid oligomers out of circulation and binding them up in inert plaques, or via other mechanisms...