Death by suicide is a preventable tragedy if the suicidal individual is identified and receives appropriate treatment. Unfortunately, some suicidal individuals do not signal their intent, and others do not receive essential assistance. Youths with severe suicidal ideation are not taken seriously in many cases, and thus are not admitted to emergency rooms. A common scenario is that resources are scarce, the ER is backed up, and a cursory clinical assessment will determine who is admitted and who will be triaged. From a practical standpoint, using fMRI to determine suicide risk is a non-starter.

Let me unpack that a bit. The scans of 17 young adults with suicidal ideation (thoughts about suicide) were compared to those from another 17 participants without suicidal ideation. A computer algorithm (Gaussian Naive Bayes) was trained on the neural responses to death-related and suicide-related words, and correctly classified 15 out of 17 suicidal ideators (88% sensitivity) and 16 out of 17 controls (94% specificity). Are these results too good to be true? Yes, probably. And yet they're not good enough, because two at-risk individuals were not picked up.
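For reference, those percentages are simple ratios of the confusion counts reported in the paper; a minimal sketch:

```python
# Sensitivity and specificity from the reported confusion counts:
# 15 of 17 ideators and 16 of 17 controls were classified correctly.

def sensitivity(true_pos, false_neg):
    """Proportion of actual positives correctly identified."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of actual negatives correctly identified."""
    return true_neg / (true_neg + false_pos)

sens = sensitivity(true_pos=15, false_neg=2)   # ideators: 15 hits, 2 misses
spec = specificity(true_neg=16, false_pos=1)   # controls: 16 correct, 1 false alarm
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```

The two false negatives in the sensitivity calculation are exactly the two at-risk individuals the classifier failed to flag.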

The computational methods used to classify the suicidal vs. control groups are suspect, according to many machine learning experts on social media. One problem is known as “overfitting” — fitting too many parameters to a small sample, producing a model that may not generalize to new samples. The key metric is whether the algorithm can classify individuals from independent, out-of-sample populations, and we don't know that for sure. Another problem concerns the leave-one-out cross-validation procedure. I'm not an expert here, so the Twitter threads that start below (and here) are your best bet.
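To illustrate the overfitting worry, here is a toy demonstration with pure-noise data (not the authors' actual pipeline): if the discriminating features are chosen using the full sample, leave-one-out accuracy can look impressive even when there is no signal at all.

```python
import random

def mean(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

def top_features(X, y, k):
    """Indices of the k features whose class means differ most."""
    n, p = len(X), len(X[0])
    diff = []
    for j in range(p):
        g0 = [X[i][j] for i in range(n) if y[i] == 0]
        g1 = [X[i][j] for i in range(n) if y[i] == 1]
        diff.append(abs(mean(g0) - mean(g1)))
    return sorted(range(p), key=lambda j: -diff[j])[:k]

def loo_accuracy(X, y, select_inside):
    """Leave-one-out accuracy of a nearest-centroid classifier on 10 features."""
    n = len(X)
    feats_outside = top_features(X, y, 10)   # selection "peeks" at the test sample
    correct = 0
    for i in range(n):
        tr = [j for j in range(n) if j != i]
        Xtr, ytr = [X[j] for j in tr], [y[j] for j in tr]
        feats = top_features(Xtr, ytr, 10) if select_inside else feats_outside
        cents = {c: [mean(Xtr[j][f] for j in range(n - 1) if ytr[j] == c)
                     for f in feats] for c in (0, 1)}
        xs = [X[i][f] for f in feats]
        pred = min((0, 1), key=lambda c: sum((a - b) ** 2
                                             for a, b in zip(xs, cents[c])))
        correct += (pred == y[i])
    return correct / n

rng = random.Random(0)
biased, honest = [], []
for _ in range(10):
    # 20 "participants", 200 pure-noise features, balanced group labels
    X = [[rng.gauss(0, 1) for _ in range(200)] for _ in range(20)]
    y = [i % 2 for i in range(20)]
    biased.append(loo_accuracy(X, y, select_inside=False))
    honest.append(loo_accuracy(X, y, select_inside=True))

print("feature selection outside CV:", mean(biased))   # optimistic
print("feature selection inside CV: ", mean(honest))   # near chance (0.5)
```

The honest version, which repeats feature selection inside every fold, should hover near chance on noise data; the version that peeks at the held-out sample does not.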

For the rest of this post, I'll raise other issues about this study that concerned me.

Why use an expensive technology in the first place?

The rationale for this included some questionable statements.

...predictions by both clinicians and patients of future suicide risk have been shown to be relatively poor predictors of future suicide attempt2,3.

One of the papers cited as a poor predictor (Nock et al., 2010) was actually touted as a breakthrough when it was published: Implicit Cognition Predicts Suicidal Behavior. [n.b. Nock is an author on the Just et al. paper that trashes his earlier work]. Anyway, Nock et al. (2010) developed the death/suicide Implicit Association Test (IAT),1 which was able to identify ER patients at greatest risk for another suicide attempt in the future:

...the implicit association of death/suicide with self was associated with an approximately 6-fold increase in the odds of making a suicide attempt in the next 6 months, exceeding the predictive validity of known risk factors (e.g., depression, suicide-attempt history) and both patients’ and clinicians’ predictions.

But let's go ahead with an fMRI study that will be far more accurate than a short and easy-to-administer computerized test!

Nearly 80% of patients who die by suicide deny suicidal ideation in their last contact with a mental healthcare professional4.

This 2003 study was based on psychiatric inpatients who died by suicide while in hospital (5-6% of all suicides) or else shortly thereafter, and may not be representative of the entire at-risk population. Nonetheless, other research shows that current risk scales are indeed of limited use and may even waste valuable clinical resources. The scales “may be missing important aspects relevant to repeat suicidal behaviour (for example social, cultural, economic or psychological processes).” But a focus on brain scans would also miss social, cultural, and economic factors.

How do you measure the neural correlates of suicidal thoughts?

This is a tough one, but the authors propose to uncover the neural signatures of specific concepts, as well as the emotions they evoke:

...the neural signature of the test concepts was treated as a decomposable biomarker of thought processes that can be used to pinpoint particular components of the alteration [in participants with suicidal ideation]. This decomposition attempts to specify a particular component of the neural signature that is altered, namely, the emotional component...

How do you choose which concepts and emotions to measure?

The “concepts” were words from three different categories (although the designation of Suicide vs. Negative seems arbitrary for some of the stimuli). The set of 30 words was presented six times, with each word shown for three seconds followed by a four-second blank screen. Subjects were “asked to actively think about the concepts ... while they were displayed, thinking about their main properties (and filling in details that come to mind) and attempting consistency across presentations.”

The “emotion signatures” were derived from a prior study (Kassam et al., 2013) that asked method actors to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame). The emotional states selected for the present study were anger, pride, sadness, and shame (all chosen post hoc). Should we expect emotion signatures that are self-induced by actors to be the same as emotion signatures that are evoked by words? Should we expect a universal emotional response to Comfort or Evil or Apathy?

Six words (death, carefree, good, cruelty, praise, and trouble — in descending order) and five brain regions (left superior medial frontal, medial frontal/anterior cingulate, right middle temporal, left inferior parietal, and left inferior frontal) from a whole-brain analysis (that excluded bilateral occipital lobes for some reason) provided the most accurate discrimination between the two groups. Why these specific words and voxels? Twenty-five voxels, specifically. It doesn't matter.

The neural representation of each concept, as used by the classifier, consisted of the mean activation level of the five most stable voxels in each of the five most discriminating locations.

...and...

All of these regions, especially the left superior medial frontal area and medial frontal/anterior cingulate, have repeatedly been strongly associated with self-referential thought...

...and...

...the concept of ‘death’ evoked more shame, whereas the concept of ‘trouble’ evoked more sadness in the suicidal ideator group. ‘Trouble’ also evoked less anger in the suicidal ideator group than in the control group. The positive concept ‘carefree’ evoked less pride in the suicidal ideator group. This pattern of differences in emotional response suggests that the altered perspective in suicidal ideation may reflect a resigned acceptance of a current or future negative state of affairs, manifested by listlessness, defeat and a degree of anhedonia (less pride evoked in the concept of ‘carefree’) [why not less pride to 'praise' or 'superior'? who knows...]

How can a method that excludes data from 55% of the target participants be useful??

This one seems like a showstopper. A total of 38 suicidal participants were scanned, but those who did not show the desired semantic effects were excluded due to “poor data quality”:

The neurosemantic analyses ... are based on 34 participants, 17 participants per group whose fMRI data quality was sufficient for accurate (normalized rank accuracy > 0.6) identification of the 30 individual concepts from their fMRI signatures. The selection of participants included in the primary analyses was based only on the technical quality of the fMRI data. The data quality was assessed in terms of the ability of a classifier to identify which of the 30 individual concepts they were thinking about with a rank accuracy of at least 0.6, based on the neural signatures evoked by the concepts. The participants who met this criterion also showed less head motion (t(77) = 2.73, P < 0.01). The criterion was not based on group discriminability.

This logic seems circular to me, despite the claim that inclusion wasn't based on group classification accuracy. Seriously, if you throw out over half of your subjects, how can your method ever be useful? Nonetheless, the 21 “poor data quality” ideators with excessive head motion and bad semantic signatures were used in an out-of-sample analysis that also revealed relatively high classification accuracy (87%) compared to the data from the same 17 “good” controls (the data from 24 “bad” controls were excluded, apparently).

We attribute the suboptimal fMRI data quality (inaccurate concept identification from its neural signature) of the excluded participants to some combination of excessive head motion and an inability to sustain attention to the task of repeatedly thinking about each stimulus concept for 3 s over a 30-min testing period.

Furthermore, another classifier was even more accurate (94%) in discriminating between suicidal ideators who had made a suicide attempt (n=9) from those who had not (n=8), although the out-of-sample accuracy for the excluded 21 was only 61%. Perhaps I'm misunderstanding something here, but I'm puzzled...

I commend the authors for studying a neglected clinical group, but wish they were more rigorous, didn't overinterpret their results, and didn't overhype the miracle of machine learning.

Crisis Text Line [741741 in the US] uses machine learning to prioritize their call load based on word usage and emojis. A great variety of intersectional risk factors may lead someone to death by suicide. At present, no method can capture the full diversity of those who will cross that line.

If you are feeling suicidal or know someone who might be, here is a link to a directory of online and mobile suicide help services.

Chronic traumatic encephalopathy (CTE) is the neurodegenerative disease of the moment, made famous by the violent and untimely deaths of many retired professional athletes. Repeated blows to the head sustained in contact sports such as boxing and American football can result in abnormal accumulations of tau protein (usually many years later). The autopsied brains from two of these individuals are shown below.

Both men played professional football in the NFL. Both came upon some troubled times after leaving the game. And although the CTE pathology in their brains has been attributed directly to football — repeated concussive and sub-concussive events — other potential factors have been mostly ignored. Below I'll discuss these events and phenomena, and whether they could have contributed to the condition of the post-mortem brains.

Talented ex-NFL football star, PCP addict, convicted murderer, and suicide by hanging. The Rolling Stone ran two riveting articles that detailed the life (and death) of Mr. Hernandez. Despite a difficult upbringing surrounded by violence and tragedy, he was a serious and stellar athlete at Bristol High School. The tragic death of his father from a medical accident led Aaron to hang out with a less savory crowd. He fortunately ended up at the University of Florida for college football. There he failed several drug tests, but the administration mostly looked the other way. He was on a national championship team, named an all-American, and involved in a shooting where he was not charged.

Most NFL teams took a pass because of his use of recreational drugs and reputation as a hot-head:

After seeing his pre-draft psychological report, where he received the lowest possible score, one out of 10, in the category of “social maturity” and which also noted that he enjoyed “living on the edge of acceptable behavior,” a handful of teams pulled him off their boards, and 25 others let him sink like a stone on draft day.

But he ended up signing with the New England Patriots in a $40 million deal. He smoked pot constantly and avoided hanging out with the other players. “Instead of teammates, Hernandez built a cohort of thugs, bringing stone-cold gangsters over to the house to play pool, smoke chronic and carouse.” Things spiraled downwards, in terms of thug life, use of PCP (angel dust), and ultimately the murder of a friend that ended in a life sentence without parole.

He was also tried and acquitted of a separate double homicide, but his days were numbered. Two days later he hanged himself with a bedsheet in his jail cell. He was rumored to have smoked K2 (nasty synthetic cannabis) just before his death, but this was ultimately unsubstantiated.

These complicating factors — lengthy history of drug abuse, death by asphyxiation — must have had some effect on his brain, I mused in another post.

Meanwhile, the New York Times had a splashy piece about how the pristine brain of Aaron Hernandez presented an opportunity to study a case of “pure” CTE:

What made the brain extraordinary, for the purpose of science, was not just the extent of the damage, but its singular cause. Most brains with that kind of damage have sustained a lifetime of other problems, too, from strokes to other diseases, like Alzheimer’s. Their samples are muddled, and not everything found can be connected to one particular disease.

I’ve been struggling to write a post that highlights the misleading nature of this claim. How much of that was [the writer's] own hyperbole? Or was he merely paraphrasing the famous neuropathologists who presented their results to the media, not to peer reviewers? Is it my job to find autopsied brains from PCP abusers and suicides by hanging? Searching for the latter, by the way, will turn up some very unsavory material in forensic journals and elsewhere. At any rate, I think much of this literature glosses over any complicating elements, and neglects to mention all of the cognitively intact former football players whose brains haven’t been autopsied.


Part 1 of this series looked at complicating factors in the life of Aaron Hernandez — PCP abuse, death by asphyxiation — that presumably had some impact on his brain beyond the effects of concussions in football.

Part 2 will discuss the tragic case of Fred McNeill, former star linebacker for the Minnesota Vikings. He died in 2015 from complications of Amyotrophic Lateral Sclerosis (ALS), suggesting that his was not a “pure” case of CTE, either.

Obituary: Standout of the 1970s and 1980s was suffering from dementia and died from complications from ALS, according to Matt Blair [close friend and former teammate]

ALS is a motor neuron disease that causes progressive wasting and death of neurons that control voluntary muscles of the limbs and ultimately the muscles that control breathing and swallowing. Around 30-50% of individuals with ALS show cognitive and behavioral impairments.

Overlap between ALS and other neurodegenerative diseases, in particular frontotemporal dementia (FTD) and parkinsonism, is increasingly recognized. ...

Approximately 10–15% of patients with ALS show signs of FTD ... typically behavioural variant of FTD. A further 50% experience mild cognitive or behavioural changes. Patients with executive dysfunction have a worse prognosis, and behavioural changes have a negative impact on carer quality of life.

This raises the issue that repetitive head trauma can result in multiple neurodegenerative diseases, not only CTE.2 In fact, this has been recognized by other researchers who studied 14 retired soccer players who were experts at heading the ball (Ling et al., 2017). Only four had pathologically confirmed CTE:

So the blanket term of “CTE” can include build-up of not only tau, but other abnormal proteins typically seen in Alzheimer's disease (Aβ) and the ALS-FTD spectrum (TDP-43). This lowers the utility of an in vivo marker specific to tau in diagnosing CTE in living individuals, an important enterprise because definitive diagnosis is only obtained post-mortem.

This brings us to the problematic report on Mr. McNeill's brain and the news coverage surrounding it.

The recent study by Omalu and colleagues (2017) reported on a PET scan performed on Mr. McNeill almost 4.5 years before he died, before any motor signs of ALS had appeared. Clearly, 4.5 years is a very long time in the course of a progressive neurodegenerative disease, so right off the bat a comparison of his PET scan and post-mortem pathology is highly problematic.

Another reason this study was not the “breakthrough” of news headlines is because the type of pathology plainly visible on MRI, and the type of cognitive deficits shown on neuropsychological tests, were quite typical of Alzheimer's disease and perhaps also vascular dementia. The MRI scan taken at the time of PET “showed mild, global brain atrophy with enlarged ventricles, moderate bilateral hippocampal atrophy, and diffuse white matter hyperintensities.”

Among his worst cognitive deficits at the time of testing were memory and picture naming, which is characteristic of Alzheimer's disease (AD). Likewise, the behavioral deficits reported by his wife are typically seen in AD.

Two years after the PET scan, he developed motor symptoms of ALS. His wife noted he could no longer tie his shoes or button his shirts. He developed muscle twitching in his arms and showed decreased muscle mass in his arms and shoulders. He was diagnosed with ALS 17 months prior to death, which was in addition to his presumed diagnosis of CTE.

Finally, the molecular imaging probe used to identify abnormal tau protein in the living brain, [18F]-FDDNP, is not specific for tau. It also binds to beta-amyloid and a variety of other misfolded proteins. Or maybe not!

I certainly acknowledge that these types of pre- and post-mortem studies are very difficult to conduct, and although the n=1 is a known weakness, you have to start somewhere. Nonetheless, the stats relating FDDNP binding to tau pathology were very thin and not all that believable. The paragraph below presents the results in their entirety. Note that p=.0202 was considered “highly correlated” while p=.1066 was not significant.

Correlation analysis was performed to investigate whether the in vivo regional [F-18]FDDNP binding level agreed with the density of tau pathology based on autopsy findings. Spearman rank-order correlation coefficient (rs) was calculated for the regional [F-18]FDDNP DVRs (Figure 1) and the density of tau pathology, as well as for amyloid and TDP-43 substrates (Table 5). Our results showed that the tau regional findings and densities obtained from antemortem [F-18]FDDNP-PET imaging and postmortem autopsy were highly correlated (rs = 0.592, P = .0202). However, no statistical correlation was found with the presence of amyloid deposition (rs = −0.481; P = .0695) or of TDP-43 (rs = 0.433; P = .1066).
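To get a feel for how fragile a rank correlation is at this sample size, here is a hand-rolled Spearman coefficient with a leave-one-region-out check. The regional values below are invented for illustration (14 scattered regions plus one extreme region), not the paper's data:

```python
import math

def ranks(xs):
    """1-based ranks; ties get the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rs = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den

# Invented per-region values: PET binding (DVR) vs. tau density at autopsy,
# with one region that is highest on both measures.
dvr = [1.2, 1.0, 1.4, 1.1, 1.3, 1.0, 1.5, 1.2, 1.4, 1.1, 1.3, 1.5, 1.0, 1.2, 2.4]
tau = [0.6, 0.4, 0.2, 0.8, 0.3, 0.7, 0.5, 0.1, 0.9, 0.5, 0.2, 0.6, 0.8, 0.4, 3.0]

loo = [spearman(dvr[:i] + dvr[i+1:], tau[:i] + tau[i+1:])
       for i in range(len(dvr))]
print("full-sample rs:", round(spearman(dvr, tau), 3))
print("leave-one-region-out range:", round(min(loo), 3), "to", round(max(loo), 3))
```

When one extreme region carries most of the rank agreement, dropping it noticeably shifts rs; that is the kind of instability that makes p = .0202 at n = 15 thin evidence.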

Also, FDDNP-PET showed that in cortical regions, the medial temporal lobes showed the highest distribution volume ratio (DVR), along with anterior and posterior cingulate cortices. Isn't this typical of the Aβ distribution in AD?

I'm not denying the existence of CTE as a complex clinical entity, or saying that multiple concussions don't harm your brain. Along with others (e.g., Iverson et al., 2018), I'm merely suggesting that the clinical, cognitive, behavioral, and pathological sequelae of repeated head trauma should be carefully studied, and not presented in a sensationalistic manner.

The amygdala is a small structure located within the medial temporal lobes (MTL), consisting of a discrete set of nuclei. It has a reputation as the “fear center” or “emotion center” of the brain, although it performs multiple functions. One well-known activity of the amygdala, via its connections with other MTL areas, involves an enhancement of memories that are emotional in nature (compared to neutral). Humans and rodents with damaged or inactivated amygdalae fail to show this emotion-related enhancement, although memory for neutral items is relatively preserved (Adolphs et al., 1997; Phelps & Anderson, 1997; McGaugh, 2013).

A new brain stimulation study (Inman et al., 2017) raises interesting questions about the necessity of subjective emotional experience in the memory enhancement effect. A group of 14 refractory epilepsy patients underwent surgery to implant electrodes in the left or right amygdala (and elsewhere) for the sole purpose of monitoring the source of their seizures. In a boon for affiliated research programs everywhere, patients are able to participate in experiments while waiting around for seizures to occur.

The stimulating electrodes were located in or near the basolateral complex of the amygdala (BLA), shown below. The stimulation protocol was developed from similar studies in rats, which demonstrated that direct electrical stimulation of BLA can improve memory for non-emotional events when tested on subsequent days (Bass et al., 2012; 2014; 2015).

The direct translation from animals to humans is a clear strength of the paper (Inman et al., 2017):

...direct activation of the BLA modulated neuronal activity and markers of synaptic plasticity in the hippocampus and perirhinal cortex, two structures important for declarative memory that are directly innervated by the BLA. ... These and other studies [in animals] have led to the view that an emotional experience engages the amygdala, which in turn enhances memory for that experience through modulation of synaptic plasticity-related processes underlying memory consolidation in other brain regions. This model predicts that direct stimulation of the human amygdala could enhance memory in a manner analogous to emotion’s enhancing effects on long-term memory.

The experimental task was a test of object recognition memory. Pictures of 160 neutral objects were presented on Day 1 while the participants made “indoor” or “outdoor” decisions (which were quite ambiguous in many cases). The purpose of this task was to engage a deep level of semantic encoding of each object, which was presented for 3 seconds. Immediately after stimulus offset for half the items (n=80), a train of electrical stimulation pulses was presented for 1 second (each pulse = 500 μs biphasic square wave; pulse frequency = 50 Hz; train frequency = 8 Hz). For the other half (n=80), no stimulation was presented. Each trial was separated by a 5 second interval.

An immediate recognition memory test was presented after completion of the study phase. Yes/no decisions were made on 40 old objects with post-stimulation, 40 old objects with no stimulation, and 40 new objects (“foils”). Then 24 hours later, a similar yes/no recognition test was presented, but this time with the other set of items not tested previously, along with a new set of foils. The prediction was that electrical stimulation of the amygdala would act as an artificial “boost” of performance on the 24 hour test, after memory consolidation had occurred.

This prediction was (mostly) supported as shown below, with one caveat I'll explain shortly. In Panel A, a commonly used measure of discrimination performance (d′) is shown for the Immediate and One-Day tests, with red dots indicating stimulation and blue dots no stimulation (one dot per patient). Most participants performed better on stimulated items regardless of whether on the Immediate test or One-Day test, although variability was higher on the Immediate test. Panel B shows a summary of the performance difference for stimulation − no stimulation trials. Paired-samples t-tests (two sided) were conducted for each recognition-memory interval. The result for One-Day was significant (p=.003), but the result for Immediate was not (p=.30). This would seem to be convincing evidence that amygdala stimulation during encoding enhanced delayed recognition memory selectively.
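For readers unfamiliar with d′, it is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch using the standard-normal inverse CDF from the Python standard library; the counts below are invented for illustration:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).  A log-linear correction
    (add 0.5 to each cell) keeps the z-transform finite at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one patient: 40 old (stimulated) and 40 new items.
print(round(d_prime(hits=33, misses=7, false_alarms=8, correct_rejections=32), 3))
```

The log-linear correction is one common convention for handling extreme rates; the paper may use a different one.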

HOWEVER, from the statistics presented thus far, we don't know whether the memory enhancement effect was statistically larger for the One-Day test. My guess is not, because an ANOVA showed a main effect of test day (p < 0.001) and a main effect of stimulation (p = 0.03). But no interaction between these variables was reported.
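For what it's worth, the unreported interaction test is easy to state: compute each patient's (stimulation − no stimulation) difference on each day, subtract the two, and test whether that difference-of-differences departs from zero. A sketch with invented per-patient d′ values (not the paper's data):

```python
import math
import statistics

def paired_t(diffs):
    """One-sample t statistic for mean(diffs) != 0, with df = n - 1."""
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical (stim, no-stim) d' pairs per patient, one list per test day.
immediate = [(1.9, 1.8), (2.1, 2.2), (1.5, 1.4), (2.4, 2.3), (1.7, 1.8),
             (2.0, 1.9), (1.6, 1.6), (2.2, 2.1), (1.8, 1.9), (2.3, 2.2),
             (1.4, 1.5), (2.5, 2.4), (1.9, 2.0), (2.0, 2.0)]
one_day = [(1.6, 1.2), (1.8, 1.5), (1.3, 0.9), (2.0, 1.7), (1.5, 1.3),
           (1.7, 1.4), (1.2, 1.0), (1.9, 1.5), (1.4, 1.3), (2.0, 1.6),
           (1.1, 0.9), (2.1, 1.8), (1.5, 1.3), (1.6, 1.4)]

# Stimulation effect per patient on each day, then the difference of differences.
interaction = [(s1 - n1) - (s0 - n0)
               for (s0, n0), (s1, n1) in zip(immediate, one_day)]
t, df = paired_t(interaction)
print(f"interaction t({df}) = {t:.2f}")  # look up p in a t table; stdlib has no t CDF
```

If that interaction is not reliably positive, the selective "one-day only" interpretation rests on weaker ground than the two separate t-tests suggest.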

Nonetheless, the study was fascinating because the patients were unable to say whether or not stimulation was delivered in a subsequent test of awareness (10 trials of each condition):

The take-home message is that subjective and objective indicators of emotion were not necessary for amygdala stimulation during encoding to enhance subsequent recognition of neutral material. “This memory enhancement was accompanied by neuronal oscillations during retrieval that reflected increased interactions between the amygdala, hippocampus, and perirhinal cortex”1 (as had been shown previously in animals).2

So it seems that subjective emotional experience may be an unnecessary epiphenomenon for the boosting effect of emotion in the formation of declarative memories. Or at least in this limited (albeit impressive) laboratory setting. And here I will step aside from being overly critical. Anyone who wants to slam the reproducibility of an n=14 rare patient sample size should be prepared to run the same study with 42 individuals with amygdala depth electrodes.

For [n = 5 patients] with electrodes localized concurrently in the amygdala, hippocampus, and perirhinal cortex, local field potentials (LFPs) from each region were recorded simultaneously during the immediate and one-day recognition-memory tests... LFP oscillations were apparent in the theta (here 5–7 Hz) and gamma (30–55 Hz) ranges... Recognition during the one-day test but not during the immediate test exhibited increased power in perirhinal cortex in the gamma frequency range for remembered objects previously followed by stimulation compared with remembered objects without stimulation. Furthermore, LFPs during the one-day test, but not during the immediate test, revealed increased coherence of hippocampal–perirhinal oscillations in the theta frequency range for remembered objects previously followed by stimulation compared with remembered objects without stimulation.

2 If you think the 14 patients with epilepsy were variable, wait until you see the [overly honest] results from even smaller studies with rats.

While everyone else rings in the New Year by commemorating the best and brightest of 2017 in formulaic Top Whatever lists, The Neurocritic has decided to wallow in shame. To mark this Celebration of Failure, I have compiled a Bottom Five list,4 the year's least popular posts as measured by Google Analytics. The last time I compiled a “Worst of” list was in 2012.

Methods: The number of pageviews per post was copied and pasted into an Excel file, sorted by date. Then the total pageviews for each post was prorated by the vintage of the post, to give an estimate of daily views.5
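The proration amounts to dividing each post's total pageviews by its age in days at the time of the snapshot; a minimal sketch with made-up numbers:

```python
from datetime import date

# Hypothetical (title, publication date, total pageviews) rows from the Excel file.
posts = [
    ("Post A", date(2017, 2, 1), 9000),
    ("Post B", date(2017, 11, 1), 1200),
]
snapshot = date(2017, 12, 31)

# Prorate total views by the post's age in days to estimate daily views.
daily = {title: views / (snapshot - published).days
         for title, published, views in posts}
for title, rate in sorted(daily.items(), key=lambda kv: kv[1]):
    print(f"{title}: {rate:.1f} views/day")
```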

Results: The posts are listed in inverse order, starting with #5 and ending with #1 (least popular).

"At the brain level, empathy for social exclusion of personalized women recruited areas coding the affective component of pain (i.e., anterior insula and cingulate cortex), the somatosensory components of pain (i.e., posterior insula and secondary somatosensory cortex) together with the mentalizing network (i.e., middle frontal cortex) to a greater extent than for the sexually objectified women. This diminished empathy is discussed in light of the gender-based violence that is afflicting the modern society" (Cogoni et al., 2018).

A new brain imaging paper on Cyberball, social exclusion, objectification, and empathy went WAY out on a limb and linked the results to sexual violence, despite the lack of differences between male and female participants. It's quite a leap from watching a video of women in differing attire, comparing levels of empathy when “objectified” vs. “personalized” women are excluded from the game, and actually perpetrating violence against women in the real world.

I'm not a social psychologist (so I've always been a bit skeptical), but Cyberball is a virtual game designed as a model for social rejection and ostracism (Williams et al., 2000). The participant is led to believe they are playing an online ball-tossing game with other people, who then proceed to exclude them from the game. It's been widely used to study exclusion, social pain, and empathy for another person's pain.

The present version went beyond this simple animation and used 15–21 second videos (see still image in Fig. 1) with the “self” condition represented by a pair of hands. More important, though, was a comparison of the two “other person” conditions.

“Each video displayed either a ‘social inclusion’ or a ‘social exclusion’ trial. ... At the end of each trial, the participant was asked to rate the valence of the emotion felt by themselves (self condition), or by the other person (other conditions), during the game on a Likert-type rating scale going from −10 = ‘very negative’ over 0 to +10 = ‘very positive’.”

The participants were 19 women and 17 men, who showed no differences in their emotion ratings. Curiously, the negative emotion ratings on exclusion trials did not differ between the Self, Objectified, and Personalized conditions. So there appears to be no empathy gap for objectified women who were excluded from Cyberball. The difference was on the inclusion trials, when the subjects didn't feel as positively towards women in little black dresses when they were included in the game (in comparison to when women in pants were included, or when they themselves were included).

At this point, I won't delve deeper into the neuroimaging results, because the differences shown at the top of the post were for the exclusion condition, when the behavioral ratings were all the same. And any potential sex differences in the imaging data weren't reported.1 Or else I'm confused. At any rate, perhaps an fMRI study of perpetrators would be more informative in the future. But ultimately, culture and social conditions and power differentials (all outside the brain) are the major determinants of violence against women.

When discussing the objectification of women in the present era, it's hard to escape the Harvey Weinstein scandal. One of the main purposes of Miramax2 was to turn young women into sex objects. Powerful essays by Lupita Nyong’o, Salma Hayek, and Brit Marling (to name just a few) describe the indignities, sexual harassment, and outright assault they endured from this highly influential career-maker or breaker. Further, they describe the identical circumstances, the lingering doubt, the self-blame, and the commodification of themselves. Here's Marling:

Hollywood was, of course, a rude awakening to that kind of idealism. I quickly realized that a large portion of the town functioned inside a soft and sometimes literal trafficking or prostitution of young women (a commodity with an endless supply and an endless demand). The storytellers—the people with economic and artistic power—are, by and large, straight, white men. As of 2017, women make up only 23 percent of the Directors Guild of America and only 11 percent are people of color. ...

Once, when I was standing in line for some open-call audition for a horror film, I remember catching my reflection in the mirror and realizing that I was dressed like a sex object. Every woman in line to audition for “Nurse” was, it seemed. We had all internalized on some level the idea that if we were going to be cast we’d better sell what was desired—not our artistry, not our imaginations—but our bodies.

1 Although they listed a variety of reasons, the authors didn't do themselves any favors with this explanation for the lack of sex differences:

“Although this issue is still debated, in this study we refer to gender violence as a phenomenon that mainly entails not only active participation, but also passive acceptance or compliance and therefore involving both men and women’ behaviors.”

It was for exactly the same reason that reviewers of papers and grants are anonymous: it gives you the ability to provide an honest critique without fear of retaliation. If peer review ever becomes completely open and transparent, then I’d have no need for a pseudonym any more.

In an ideal world, reviewers should be identified and held accountable for what they write. Then shoddy reviews and nasty comments would (presumably) become less common. We’ve all seen anonymous reviews that are incredibly insulting, mean, and unprofessional. So it’s hypocritical to say that bloggers are cowardly for hiding under pseudonyms, while staunchly upholding the institution of anonymous peer review. ...

Have you ever been tempted to drop the pseudonym and use your real name? What do you think would happen (positive and negative) if you did?

My answer:

. . .

If I were to drop the pseudonym, it might be good (and bad) for my career as a neuroscientist. I could finally take credit for my writing, but then I’d have to take all the blame too! But overall, it’s likely that less would happen than I currently imagine.

{At this point, most people probably don't care who I am.}

So what has changed? Have I left the field? No. But some serious and tragic life events have rendered my anonymity irrelevant. I just don't care any more.

We all agree that repeated blows to the head are bad for the brain. What we don't yet know is:

who will show lasting cognitive and behavioral impairments

who will show only transient sequelae (and for how long)

who will manifest long-term neurodegeneration

...and by which specific cellular mechanism(s)

Adding to the confusion is the unclear terminology used to describe impact-related head injuries. Is a concussion the same as a mild traumatic brain injury (TBI)? Sharp and Jenkins say absolutely not, and contend that Concussion is confusing us all:

It is time to stop using the term concussion as it has no clear definition and no pathological meaning. This confusion is increasingly problematic as the management of ‘concussed’ individuals is a pressing concern. Historically, it has been used to describe patients briefly disabled following a head injury, with the assumption that this was due to a transient disorder of brain function without long-term sequelae. However, the symptoms of concussion are highly variable in duration, and can persist for many years with no reliable early predictors of outcome. Using vague terminology for post-traumatic problems leads to misconceptions and biases in the diagnostic process, producing uninterpretable science, poor clinical guidelines and confused policy. We propose that the term concussion should be avoided. Instead neurologists and other healthcare professionals should classify the severity of traumatic brain injury and then attempt to precisely diagnose the underlying cause of post-traumatic symptoms.

When it comes to head injuries and CTE, Goldstein spoke of three categories that are being jumbled: concussions, TBI and CTE. Concussion, he says, is a syndrome defined “by consensus really every couple of years, based on the signs and symptoms of neurological syndrome, what happens after you get hit in the head. It’s nothing more than that, a syndrome...

A TBI is different. “it is an injury, an event,” he said. “It’s not a syndrome. It’s an event and it involves damage to tissue. If you don’t have a concussion, you can absolutely have brain injury and the converse is true.” . . .

“So concussion may or may not be a TBI and equally important not having a concussion may or may not be associated with a TBI. A concussion doesn’t tell you anything about a TBI. Nor does it tell you anything about CTE.”

I think I'm even more confused now... you can have concussion (the syndrome) without an injury or an event?

But I'm really here to tell you about 8 post-mortem brains from teenage males who had engaged in contact sports. These were from Dr. Ann McKee's brain bank at BU, and were included in the paper along with extensive data from a mouse model (Tagge, Fisher, Minaeva, et al., 2018). Four brains were in the acute-subacute phase after mild closed-head impact injury and had previous diagnoses of concussion. The other 4 brains were control cases, including individuals who also had previous diagnoses of concussion. Let me repeat that. The controls had ALSO suffered head impact injuries at unknown (“not recent”) pre-mortem dates (>7 years prior in one case).

This amazing and important work was made possible by magnanimous donations from grieving parents. I am very sorry for the losses they have suffered.

Below is a summary of the cases.

Case 1

18 year old multisport athlete – American football (9 yrs), baseball, basketball, weight-lifting

history of 10 sports concussions

died by suicide (hanging) 4.2 months after a snowboarding accident with head injury

The goal of this study was to look at pathology after acute-subacute head injury (e.g., astrocytosis, macrophages, and activated microglia). Only 2 of the cases showed hyperphosphorylated tau protein, which is characteristic of CTE. But in the media (e.g., It's not concussions that cause CTE. It's repeated hits), all of these changes have been conflated with CTE, a neurodegenerative condition that presumably develops over a longer time scale. Overall, the argument for a neat and tidy causal cascade is inconclusive in humans (in my view), because hyperphosphorylated tau was not observed in any of the controls, including those with significant histories of concussion. Or in Cases 2 and 3. Are we to assume, then, that concussions do not produce tauopathy in all cases? Is there a specific “dose” of head impact required? The mouse model is more precise in this realm, and those results seemed to drive the credulous headlines.

Importantly, the authors admit that “Clearly, not every individual who sustains a head injury, even if repeated, will develop CTE brain pathology.” Conversely, CTE pathology can occur without having suffered a single blow to the head (Gao et al., 2017).

If Tylenol and Advil were so effective in “mending broken hearts”, “easing heartaches”, and providing a “cure for a broken heart”, we would be a society of perpetually happy automatons, wiping away the suffering of breakup and divorce with a mere dose of acetaminophen. We'd have Tylenol epidemics and Advil epidemics to rival the scourge of the present Opioid Epidemic.

Really, people,1 words have meanings. If you exaggerate, readers will believe statements that are blown way out of proportion. And they may even start taking doses of drugs that can harm their kidneys and livers.

... some popular painkillers like ibuprofen and acetaminophen have been found to reduce people’s empathy, dull their emotions and change how people process information.

A new scientific review of studies suggests over-the-counter pain medication could be having all sorts of psychological effects that consumers do not expect.

Not only do they block people’s physical pain, they also block emotions.

The authors of the study, published in the journal Policy Insights from the Behavioral and Brain Sciences, write: “In many ways, the reviewed findings are alarming. Consumers assume that when they take an over-the-counter pain medication, it will relieve their physical symptoms, but they do not anticipate broader psychological effects.”

Taking painkillers could ease the pain of hurt feelings as well as headaches, new research has discovered.

The review of studies by the University of California found that women taking drugs such as ibuprofen and paracetamol reported less heartache from emotionally painful experiences, compared with those taking a placebo.

However, the same could not be said for men as the study found their emotions appeared to be heightened by taking the pills. Researchers said the findings of the review were 'in many ways...alarming'.

I'm here to tell you these worries are greatly exaggerated. Just like there's a Trump tweet for every occasion, there's a Neurocritic post for most of these studies (see below).

. . . This work suggests that drugs like acetaminophen and ibuprofen might influence how people experience emotional distress, process cognitive discrepancies, and evaluate stimuli in their environment. These studies have the potential to change our understanding of how popular pain medications influence the millions of people who take them. However, this research is still in its infancy. Further studies are necessary to address the robustness of reported findings and fully characterize the psychological effects of these drugs.

The studies are potentially transformative, yet the research is still in its infancy. The press didn't read the “further studies are necessary” caveat. But I did find one article that took a more modest stance:

Ratner wrote that the findings are “in many ways alarming,” but he told MD Magazine that his goal is not so much to raise alarm as it is to prompt additional research. “Something that I want to strongly emphasize is that there are really only a handful of studies that have looked at the psychological effects of these drugs,” he said.

Ratner said a number of questions still need to be answered. For one, there is not enough evidence out there to know to what extent these psychological effects are merely the result of people being in better moods once their pain is gone.

. . .

Ratner also noted that the participants in the studies were not taking the medications because of physical pain, and so the psychological effects might be different in cases where the person experienced physical pain and then relief. For now, Ratner is urging caution and nuanced interpretation of the data. He said stoking fears of these drugs could have negative consequences, as could a full embrace of the pills as mood-altering therapies.

Ha! Not so alarming after all, we see on a blog with 5,732 Twitter followers (as opposed to 2.4 million and 2.9 million for the most popular news pieces). I took 800 mg of ibuprofen before writing this post, and I do not feel any less anxious or disturbed about events in my life. Or even about feeling the need to write this post, with my newly “out” status and all.

There's a Neurocritic post for every occasion...

As a preface to my blog oeuvre, these are topics I care about deeply. I'm someone who has suffered heartache and emotional pain (as most of us have), as well as chronic pain conditions, four invasive surgeries, tremendous loss, depression, anxiety, insomnia, etc.... My criticism does not come lightly.

I'm not entirely on board with studies showing that one dose (or 3 weeks) of Tylenol MAY {or may not} modestly reduce social pain or “existential distress” or empathy as sufficient models of human suffering and its alleviation by OTC drugs. In fact, I have questions about all of these studies.

1And by “people” I mean scientists and journalists alike. Read this tweetstorm from Chris Chambers, including:

Two years later the 1st results were in & they were striking: most exaggeration in science/health news was already in the press releases issued by universities. https://t.co/iYxj2G0Dpg Just process that fact for a moment. Lay write up of the study: https://t.co/3rh9pgTu0S 14/x

Fourth – if you really want accurate science news, avoid exaggeration in your own press releases and anticipate likely misunderstandings by including a section “What this study does NOT show”. If you allow hype in your PR then YOU share culpability for misreporting. 21/x

No, they're not. They're really not. They're “everywhere” to me, because I've been listening to Black Celebration. How did I go from “death is everywhere” to “universal linguistic decoders are everywhere”? I don't imagine this particular semantic leap has occurred to anyone before. Actually, the association travelled in the opposite direction, because the original title of this piece was Decoders Are Everywhere.1 {I was listening to the record weeks ago, the silly title of the post reminded me of this, and the semantic association was remote.}

This is linguistic meaning in all its idiosyncratic glory, a space for infinite semantic vectors that are unexpected and novel. My rambling is also an excuse to not start out by saying, oh my god, what were you thinking with a title like, Toward a universal decoder of linguistic meaning from brain activation (Pereira et al., 2018). Does the word “toward” absolve you from what such a sage, all-knowing clustering algorithm would actually entail? And of course, “universal” implies applicability to every human language, not just English. How about, Toward a better clustering algorithm (using GloVe vectors) for inferring meaning from the distribution of voxels, as determined by an n=16 database of brain activation elicited by reading English sentences?

But it's unfair (and inaccurate) to suggest that the linguistic decoder can decipher a meandering train of thought when given a specific neural activity pattern. Therefore, I do not want to take anything away from what Pereira et al. (2018) have achieved in this paper. They say:

“Our work goes substantially beyond prior work in three key ways. First, we develop a novel sampling procedure for selecting the training stimuli so as to cover the entire semantic space. This comprehensive sampling of possible meanings in training the decoder maximizes generalizability to potentially any new meaning.”

“Second, we show that although our decoder is trained on a limited set of individual word meanings, it can robustly decode meanings of sentences represented as a simple average of the meanings of the content words. ... To our knowledge, this is the ﬁrst demonstration of generalization from single-word meanings to meanings of sentences.”

“Third, we test our decoder on two independent imaging datasets, in line with current emphasis in the ﬁeld on robust and replicable science. The materials (constructed fully independently of each other and of the materials used in the training experiment) consist of sentences about a wide variety of topics—including abstract ones—that go well beyond those encountered in training.”
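The averaging scheme in the second excerpt can be sketched in a few lines: a sentence's vector is just the mean of its content words' embedding vectors, so it lives in the same space as single-word meanings. The 4-dimensional vectors below are toy stand-ins I made up, not real 300-dimensional GloVe values.

```python
import numpy as np

glove = {  # toy embeddings, NOT actual GloVe vectors
    "dog":    np.array([0.9, 0.1, 0.0, 0.2]),
    "chased": np.array([0.2, 0.8, 0.1, 0.0]),
    "ball":   np.array([0.7, 0.0, 0.6, 0.1]),
}

def sentence_vector(content_words):
    """Represent a sentence as the simple average of its content words."""
    return np.mean([glove[w] for w in content_words], axis=0)

v = sentence_vector(["dog", "chased", "ball"])
print(v)  # one vector, same dimensionality as the word vectors
```

This is why the decoder can be trained on single-word meanings yet still score decoded sentence patterns: both sides reduce to points in one shared vector space.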

Unfortunately, it would take me days to adequately pore over the methods, and even then my understanding would be only cursory. The heavy lifting would need to be done by experts in linguistics, unsupervised learning, and neural decoding models. But until then...

Death is everywhere
There are flies on the windscreen
For a start
Reminding us
We could be torn apart
Tonight

(2) It depends. (on what you want to do: predict behavior2 (or some mental state), explain behavior, control behavior, etc.)

Abstract: All areas of the sciences are excited about the innovative new ways in which data can be acquired and analyzed. In the neurosciences, there exists a veritable orgy of data – but is that what we need? Will the colossal datasets we now enjoy solve the questions we seek to answer, or do we need more ‘big theory’ to provide the necessary intellectual infrastructure? Four leading researchers, with expertise in neurophysiology, neuroimaging, artificial intelligence, language, and computation will debate these big questions, arguing for what steps are most likely to pay off and yield substantive new explanatory insight.

Talk 1: Eve Marder– The Importance of the Small for Understanding the Big

Talk 2: Jack Gallant– Which Presents the Biggest Obstacle to Advances in Cognitive Neuroscience Today: Lack of Theory or Lack of Data?

Talk 3: Alona Fyshe– Data Driven Everything

Talk 4: Gary Marcus– Neuroscience, Deep Learning, and the Urgent Need for an Enriched Set of Computational Primitives

Levels of analysis! Marr! [Poeppel is the moderator] New new new! Transformative techniques, game-changing paradigms, groundbreaking schools of thought, and multiple theories for myriad neural circuits. There is no single computational system that can possibly explain brain function at all levels of analysis (gasp! not even the Free Energy Principle).3

A Q&A or panel discussion would be nice... (although not on the schedule)

This Special Symposium will be preceded by the ever-exciting Data Blitz (a series of 5 minute talks) and followed by a Keynote Address by the Godfather of Cognitive Neuroscience:

How do neurons turn into minds? How does physical “stuff”—atoms, molecules, chemicals, and cells—create the vivid and various alive worlds inside our heads? This problem has gnawed at us for millennia. In the last century there have been massive breakthroughs that have rewritten the science of the brain, and yet the puzzles faced by the ancient Greeks are still present. In this lecture I review the history of human thinking about the mind/brain problem, giving a big-picture view of what science has revealed. Understanding how consciousness could emanate from a confederation of independent brain modules working together will help define the future of brain science and artificial intelligence, and close the gap between brain and mind.

Plus there is a jam packed schedule of posters, talks, and prestigious award recipients/presenters on Sunday through Tuesday. Another highlight:

Everyone forgets. As we grow older or have a brain injury or a stroke or develop a neurodegenerative disease, we forget much more often. Is there a technological intervention that can help us remember? That is the $50 million question funded by DARPA's Restoring Active Memory (RAM) Program, which has focused on intracranial electrodes implanted in epilepsy patients to monitor seizure activity.

Led by Michael Kahana's group at the University of Pennsylvania and including nine other universities, agencies, and companies, this Big Science project is trying to establish a “closed-loop” system that records brain activity and stimulates appropriate regions when a state indicative of poor memory function is detected (Ezzyat et al., 2018).

Meanwhile, the Penn group and their collaborators moved to a different target region, which was also discussed in the CNS 2018 symposium: “Closed-loop stimulation of temporal cortex rescues functional networks and improves memory” (based on Ezzyat et al., 2018).

Twenty-five patients performed a memory task in which they were shown a list of 12 nouns, followed by a distractor task, and finally a free recall phase, where they were asked to remember as many of the words as they could. The participants went through a total of 25 rounds of this study-test procedure.

The first three rounds were “record-only” sessions, during which the investigators developed a classifier — a pattern of brain activity — that could predict whether or not the patient would recall the word at better than chance (AUC = 0.61, where chance = 0.50).3 The classifier relied on activity across all electrodes that were placed in an individual patient.
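A rough sketch of that record-only step, in case AUC is unfamiliar: fit some scorer to neural features from the initial lists, then ask how often a later-recalled word outscores a non-recalled one (that probability is the AUC; chance = 0.50). Everything below is a simulated stand-in — random features, random recall labels, a least-squares scorer — not the study's actual spectral-power classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_features = 3 * 12, 40            # three 12-word record-only lists
X = rng.normal(size=(n_words, n_features))  # stand-in for electrode features
recalled = rng.integers(0, 2, size=n_words) # 1 = word was later recalled

# Least-squares weights as a minimal stand-in for the real classifier
w, *_ = np.linalg.lstsq(X, recalled, rcond=None)
scores = X @ w

def auc(labels, scores):
    """Probability that a recalled word outscores a non-recalled word."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float(np.mean([s > t for s in pos for t in neg]))

print(f"training AUC = {auc(recalled, scores):.2f}")
```

With 40 features and only 36 words, the training AUC will look inflated — which is exactly why out-of-sample evaluation matters for numbers like the reported 0.61.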

Memory blocks #4-25 alternated between Stimulation (Stim) and No Stimulation (NoStim) lists. In Stim blocks, 0.5-2.25 mA stimulation was delivered for 500 ms when the classifier predicted a recall probability below 0.5 during word presentation. In NoStim lists, stimulation was not delivered on analogous trials, and the comparison between those two conditions comprised the main contrast shown below.
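The closed-loop rule itself is simple enough to write down. This is my illustrative paraphrase of the design, not the study's code: stimulate only on Stim lists, and only when the classifier's predicted recall probability falls below threshold.

```python
def closed_loop_decision(p_recall, stim_list, threshold=0.5):
    """Deliver stimulation only on Stim lists, and only when the
    classifier's predicted recall probability is below threshold."""
    return stim_list and p_recall < threshold

print(closed_loop_decision(0.3, stim_list=True))   # True: poor-memory state, Stim list
print(closed_loop_decision(0.3, stim_list=False))  # False: NoStim list, trial only flagged
print(closed_loop_decision(0.7, stim_list=True))   # False: word likely to be recalled anyway
```

The NoStim lists still run the classifier and flag the same kind of trials; they just withhold the current, which is what makes the Stim vs. NoStim contrast interpretable.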

The authors found that lateral temporal cortex stimulation increased the relative probability of item recall by 15% (using a log-binomial model to estimate the relative change in recall probability). {But if you want to see all of the data, peruse the Appendix below. Overall recall isn't that great...}
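A back-of-the-envelope version of what "15% relative increase" means: in a log-binomial model the exponentiated coefficient is a risk ratio, which for aggregate counts reduces to a simple ratio of recall proportions. The counts below are invented for illustration; see the paper (and the Appendix below) for the real numbers.

```python
# Hypothetical counts chosen so the ratio comes out to 1.15
recalled_stim, total_stim = 276, 1200      # Stim-list words recalled / shown
recalled_nostim, total_nostim = 240, 1200  # NoStim-list words recalled / shown

p_stim = recalled_stim / total_stim        # 0.23
p_nostim = recalled_nostim / total_nostim  # 0.20
risk_ratio = p_stim / p_nostim             # 1.15

print(f"relative change in recall: {(risk_ratio - 1) * 100:.0f}%")  # 15%
```

Note that a 15% relative change on a low baseline is a small absolute gain (here, 0.23 vs. 0.20), which is consistent with the "overall recall isn't that great" caveat.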

Lateral temporal cortex (n=18) meant MTG, STG, and IFG (mostly on the left). Non-lateral temporal cortex (n=11) meant elsewhere (see Appendix below). The improvements were greatest with stimulation in the middle portion of the left middle temporal gyrus. There are many reasons for poor encoding, and one could be that subjects were not paying enough attention, but the authors didn't have the electrode coverage to test that explicitly. Instead, I suspect that electrical stimulation was enhancing the semantic encoding of the words. The MTG is thought to be critical for semantic representations and language comprehension in general (Turken & Dronkers, 2011).

Thus, my interpretation of the results is that stimulation may have boosted semantic encoding of the words, given the nature of the stimuli (words, obviously), the left lateralization with a focus in MTG, and the lack of an encoding task. The verbal memory literature clearly demonstrates that when subjects have a deep semantic encoding task (e.g., living/non-living decision), compared to shallow orthographic (are there letters that extend above/below?) or phonological tasks, recall and recognition are improved. Which led me to ask some questions, and one of the authors kindly replied (Dan Rizzuto, personal communication). 4

Did you ever have conditions that contrasted different encoding tasks? Here I meant to ask about semantic vs orthographic encoding (because the instructions were always to “remember the words” with no specific encoding task).

We studied three verbal learning tasks (uncategorized free recall, categorized free recall, paired associates learning) and one spatial navigation task during the DARPA RAM project. We were able to successfully decode recalled / non-recalled words using the same classifier across the three different verbal memory tasks, but we never got sufficient paired associates data to determine whether we could reliably increase memory performance on this task.

Did you ever test nonverbal stimuli (not nameable pictures, which have a verbal code), but visual-spatial stimuli? Here I was trying to assess the lexical-semantic nature of the effect.

With regard to the spatial navigation task, we did observe a few individual patients with LTC stimulation-related enhancement, but we haven't yet replicated the effect across the population.

Although this method may have therapeutic implications in the future, at present it is too impractical, and the gains were quite small. Nonetheless, it is an accomplished piece of work to demonstrate closed-loop memory enhancement in humans.

....the right entorhinal area during learning significantly improved subsequent memory specificity for novel portraits; participants were able both to recognize previously-viewed photos and reject similar lures. These results suggest that microstimulation with physiologic level currents—a radical departure from commonly used deep brain stimulation protocols—is sufficient to modulate human behavior and provides an avenue for refined interrogation of the circuits involved in human memory.

2 Unfortunately, I was running between two sessions and missed that particular talk.

3 This level of prediction is more like a proof of concept and would not be clinically acceptable at this point.

4 Thanks also to Youssef Ezzyat and Cory Inman, whom I met at the symposium.

In the table above, Stim and NoStim recall percentages are for ALL words in the blocks. But:

Only half of the words in each Stim list were stimulated, however, so this comparison is conservative. The numbers improve slightly if you compare just the stimulated words with the matched non-stimulated words. Not all subjects exhibited a significant within-subject effect, but the effect is reliable across the population (Figure 3a).

But I believe that Dr. Eve Marder, the first speaker, posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers. Her talk was a treasure trove of quotable witticisms (paraphrased):

How much ambiguity can you live with in your attempt to understand the brain? For me, I get uncomfortable with anything more than 100 neurons.

If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!

I believe the talks from the present symposium will be on the CNS YouTube channel as well, and I'll update the post if/when that happens.

Speaking of canonical computation, now I know why Gary Marcus was apoplectic at the thought of “one canonical cortical circuit to rule them all.” More on that in a moment...

The next speaker was Dr. Alona Fyshe, who spoke about computational vision. MLE, MAP, ImageNet, CNNs. I'm afraid I can't enlighten you here. Like everyone else, she thought theory vs. data is a false dichotomy. Her memorable tag line was “Kill Your Darlings.” At first I thought this meant delete your best line [of code? of your paper?], but in reality “our theories need to be flexible enough to adapt to data” (always follow @vukovicnikola #cns2018 for the best real-time conference coverage).

Next up was Dr. Gary Marcus, who started out endorsing the famous Jonas and Kording (2017) paper —Could a Neuroscientist Understand a Microprocessor?— which suggested that current data analysis methods in neuroscience are inadequate for producing a true understanding of the brain. Later, during the discussion, Dr. Jack Gallant quipped that the title of that paper should have been “Neuroscience is Hard” (on Twitter, @KordingLab thought this was unfair). For that matter, Gallant told Marcus, “I think you just don't like the brain.” [Gallant is big on data, but not mindlessly]

Anyway, back to Marcus. “Parsimony is a false god,” he said. I've long agreed with this sentiment, especially when it comes to the brain — the simplest explanation isn't always true. Marcus is pessimistic that deep learning will lead to great advances in explaining neural systems (or AI). It's that pesky canonical computation again. The cerebral cortex (and the computations it performs) isn't uniform across regions (Marcus et al., 2014).

This is not a new idea. In my ancient dissertation, I cited Swindale (1990) and said:

Swindale (1990) argues that the idea of mini-columns and macro-columns was drawn on insufficient data. Instead, the diversity of cell types in different cortical areas may result in more varied and complex organization schemes which would adequately reflect the different types of information stored there [updated version would be “types of computations performed there”].1

Finally, Dr. Jack Gallant came out of the gate saying the entire debate is silly, and that we need both theory and data. But he also thinks it's silly that we'll get there with theory alone. We need to build better measurement tools, stop faulty analysis practices, and develop improved experimental paradigms. He clearly favors the collection of more data, but in a refined way. For the moment, collect large rich naturalistic data sets using existing technology.

As finer analyses are applied to both local circuitry and network properties, our theoretical understanding of neocortical operation may require further revision, if not total replacement with other metaphors. At our current state of knowledge, a number of different conceptual frameworks can be overlaid on the existing data to derive an order that may not be there. Or conversely, the data can be made to fit into one's larger theoretical view.

How is semantic knowledge represented and stored in the brain? A classic way of addressing this question is via single-case studies of patients with brain lesions that lead to a unique pattern of deficits. Agnosia is the inability to recognize some class (or classes) of entities such as objects or persons. Agnosia in the visual modality is most widely studied, but agnosias in the auditory and olfactory modalities have been reported as well. A key element is that basic sensory processing is intact, but higher-order recognition of complex entities is impaired.

Agnosias that are specific for items in a particular category (e.g., animals, fruits/vegetables, tools, etc.) are sometimes observed. An ongoing debate posits that some category-specific dissociations may fall out along sensory/functional lines (the Warrington view), or along domain-specific lines (the Caramazza view).1 The former suggests that knowledge of living things is more reliant on vision (you don't pick up and use an alligator), while knowledge of tools is more reliant on how you use them. The latter hypothesis suggests that evolutionary pressures led to distinct neural systems for processing different categories of objects.2

Much less work has examined how nonverbal auditory knowledge is represented in the brain. A new paper reports on a novel category-specific deficit in an expert bird-watcher who developed semantic dementia (Muhammed et al., 2018). Patient BA lost the ability to identify birds by their songs, but not by their appearance. As explained by the authors:

BA is a dedicated amateur birder with some 30 years’ experience, including around 10 weeks each spring spent in birdwatching expeditions and over the years had also regularly attended courses in bird call recognition, visual identification and bird behaviour. He had extensive exposure to a range of bird species representing all major regions and habitats of the British Isles. He had noted waning of his ability to name birds or identify them from their calls over a similar timeframe to his evolving difficulty with general vocabulary. At the time of assessment, he was also becoming less competent at identifying birds visually but he continued to enjoy recognising and feeding the birds that visited his garden. There had been no suggestion of any difficulty recognising familiar faces or household items nor any difficulty recognising the voices of telephone callers or everyday noises. There had been no evident change in BA's appreciation of music.

BA's brain showed a pattern of degeneration characteristic of semantic dementia, with asymmetric atrophy affecting the anterior, medial, and inferior temporal lobes, to a greater extent in the left hemisphere.

Fig. 1 (modified from Muhammed et al., 2018). Note that L side of brain shown on R side of scan. Coronal sections of BA's T1-weighted volumetric brain MRI through (A) temporal poles; (B) mid-anterior temporal lobes; and (C) temporo-parietal junctional zones. There is more severe involvement of the left temporal lobe.

The authors developed a specialized test of bird knowledge in the auditory, visual, and verbal modalities. The performance of BA was compared to that of three birders similar in age and experience.

Results indicated that “BA performed below the control range for bird knowledge derived from calls and names but within the control range for knowledge derived from appearance.” There was a complicated pattern of results for his knowledge of specific semantic characteristics in the different modalities, but the basic finding suggested an agnosia for bird calls. Interestingly, he performed as well as controls on tests of famous voices and famous face pictures.

Thus, the findings suggest separate auditory and visual routes to avian conceptual knowledge, at least in this expert birder. Also fascinating was the preservation of famous person identification via voice and image. The authors conclude with a ringing endorsement of single case studies in neuropsychology:

This analysis transcends the effects of acquired expertise and illustrates how single case experiments that address apparently idiosyncratic phenomena can illuminate neuropsychological processes of more general relevance.

The study that convinced Caramazza that the assumption that object representations are organised around perceptual properties (vs abstract category membership) is wrong #CNS2018 pic.twitter.com/nsCPp0QGTE

Caramazza: domain specific organisation is an evolutionary adaptation - the brain organised around these categories not because of the structure of experience, but because it is evolutionarily primed to do so #CNS2018 pic.twitter.com/bSzzKtQ4iP

Deep brain stimulation (DBS) of the subthalamic nucleus in Parkinson's disease (PD) has been highly successful in controlling the motor symptoms of this disorder, which include tremor, slowed movement (akinesia), and muscle stiffness or rigidity. The figure above shows the electrode implantation procedure for PD, where a stimulating electrode is placed in either the subthalamic nucleus (STN), a tiny collection of neurons within the basal ganglia circuit, or in the internal segment of the globus pallidus, another structure in the basal ganglia (Okun, 2012). DBS of the STN is more common, and more often a source of disturbing non-motor side effects.

DBS surgery may be recommended for some patients in whom dopamine (DA) replacement therapy has become ineffective, usually after a few years. DA medications include the classic DA precursor L-DOPA, followed by DA agonists such as pramipexole, ropinirole, and bromocriptine. But unfortunately, impulse control disorders (ICDs, e.g., compulsive shopping, excessive gambling, binge eating, and compulsive sexual behavior) occur in about 17% of PD patients on DA agonists (Voon et al., 2017).

There are many first-person accounts from PD patients who describe uncharacteristic and embarrassing behavior after taking DA agonists, like this grandpa who started seeing prostitutes for the first time in his life:

For most of his life John Smithers was a respected family man who ran a successful business. Then he started paying for sex. Now, in his 70s, he explains how his behaviour has left him broke, alone and tormented

I am 70 years old and used to be respectable. I was a magistrate for 25 years, and worked hard to feed my children and build up the family business. I was not the most faithful of husbands, but I tried to be discreet about my affairs.1 Now I seem to be a liability. Over the last two decades I have spent a fortune on prostitutes and lost two wives. I have made irrational business decisions that took me to the point of bankruptcy. I have become an embarrassment to my nearest and dearest.

New-onset ICDs can also occur in patients receiving STN DBS, but the effects are mixed across the entire population: ICD symptoms can also improve or remain unchanged. Why this is the case is a vexing problem, with contributing factors that include premorbid personality, genetics, family history, past and present addictions, and demographic factors (Weintraub & Claassen).


Neuroethicists are weighing in on the potential side effects of DBS that may alter a patient's perception of identity and self. A recent paper included a first-person account of altered personality and a sense of self-estrangement in a 46 year old woman undergoing STN DBS for PD (Gilbert & Viaña, 2018):

The patient reported a persistent state of self-perceived changes following implantation. More than one year after surgery, her narratives explicitly refer to a persistent perception of strangeness and alteration of her concept of self. For instance, she reported:

"can't be the real me anymore—I can't pretend . . . I think that I felt that the person that I have been [since the intervention] was somehow observing somebody else, but it wasn't me. . . . I feel like I am who I am now. But it's not the me that went into the surgery that time. . . . My family say they grieve for the old [me]. . . ."

Many of her quotes are striking in their similarity to behaviors that occur in the manic phase of bipolar disorder {loss of control, grandiosity}:

The patient also reported developing severe postoperative impulsivity: "I cannot control the impulse to go off if I'm angry." In parallel, while describing a sense of loss of control over some impulsions, she has also recognized that DBS gave her increased feelings of strength: "I never had felt this lack of power or this giving of power—until I had deep brain stimulation."

...she experienced radically enhanced capacities, in the form of increased uncontrollable sexual urges:

"I know this is a bit embarrassing. But I had 35 staples in my head, and we made love in the hospital bathroom and that wasn't just me. It was just I had felt more sexual with the surgery than without."

And greater physical energy:

"I remember about a week after the surgery, I still had the 35 staples in my head and I was just starting to enter the cooler months of winter but my kids had got me winter clothes so I had nothing to wear to the follow up appointment and when I went back there of the morning, I thought "I can walk into the doctor's" even though it was 5 kilometers into town. It's like the psychologist said: "For a woman who had a very invasive brain surgery 9 days ago and you've just almost walked 10 kilometers." And on the way, I stopped and bought a very uncharacteristic dress, backless—completely different to what I usually do."

Examining the DSM-5 criteria for bipolar mania, it seems clear (to me, at least) that the patient is indeed having a prolonged manic episode induced by STN DBS.

In order for a manic episode to be diagnosed, three (3) or more of the following symptoms must be present:

Inflated self-esteem or grandiosity

Decreased need for sleep (e.g., one feels rested after only 3 hours of sleep)

More talkative than usual or pressure to keep talking

Flight of ideas or subjective experience that thoughts are racing

Attention is easily drawn to unimportant or irrelevant items

Increase in goal-directed activity (either socially, at work or school; or sexually) or psychomotor agitation

Excessive involvement in pleasurable activities that have a high potential for painful consequences (e.g., engaging in unrestrained buying sprees, sexual indiscretions, or foolish business investments)

It's also notable that she divorced her husband, moved to another state, ruptured the therapeutic relationship with her neurologist and surgical team, and made a suicide attempt. She also took up painting and perceived the world in a more vibrant, colorful way {which resembles narratives of persons experiencing manic episodes}:

"I don't know, all the senses came alive. I wanted to listen to Paul Kelly and all of my favorite music really loud in the toilet. And you know, also everything was colourful. . . . Well, since brain surgery I can. I didn't bother before. I can see the light . . . the light that is underlying every masterpiece in photography. . . . I've seen it like I've never seen it before . . . I am a totally different person. I like it that I love photography and music and colourful clothes, but where is the old me now?"

However, she appears to display more insight into her altered behavior than {most} people in the midst of bipolar mania. Perhaps her reality monitoring abilities are more intact? Or it's because her symptoms wax and wane.2 But like in many manic individuals, she did not want this feeling to stop:

"I went to the psychiatrist, and he said, 'Right, well, this is bordering on mania [NOTE: that is an understatement], you need to turn the settings right down to manage it.' I said to him, 'Please don't, this is not over the top—this is just joy.'"

I think this line of research — studying individuals with Parkinson's who have impulse control disorders due to DA replacement or DBS — can provide insight into bipolar mania. Certainly, drugs that act as antagonists at multiple DA receptor subtypes (typical and atypical antipsychotics) are used in the management of bipolar disorder.

Patient narratives are also informative in this regard, and provide critical information for individuals considering various types of therapies for PD. In this paper, the patient was not informed by the medical team that there could be undesirable psychiatric side effects. She has taken legal action against the lead neurosurgeon, and the proceedings were ongoing when the article was written.

Footnote

1 One might wonder whether Mr. Smithers' premorbid propensity for affairs made him more vulnerable to compulsive sexual activity after DA agonists. And that is one consideration displayed in the box-and-circle diagram above.

2 She did experience bouts of depression as well as mania, perhaps related to the stimulation parameters and precise location. And bipolar individuals also gain insight once the manic episode subsides.

A new paper by Bédécarrats et al. (2018) is the latest entry into the iconoclastic hullabaloo claiming a non-synaptic basis for learning and memory. In short, “RNA extracted from the central nervous system of Aplysia given long-term sensitization training induced sensitization when injected into untrained animals...” The results support the minority view that long-term memory is not encoded by synaptic strength, according to the authors, but instead by molecules inside cells (à la Randy Gallistel).

...there is a particular reflex1 (memory) that changes when they [Aplysia] have experienced a lot of shocks. How memory is encoded is a bit debated but one strongly-supported mechanism (especially in these snails) is that there are changes in the amount of particular proteins that are expressed in some neurons. These proteins might make more of one channel or receptor that makes it more or less likely to respond to signals from other neurons. So for instance, when a snail receives its first shock a neuron responds and it withdraws its gills. Over time, each shock builds up more proteins that make the neuron respond more and more. These proteins are built up by the amount of RNA (the “blueprint” for the proteins, if you will) that are located in the vicinity of the neuron that can receive this information. ...

This new paper shows that in these snails, you can just dump the RNA on these neurons from someone else and the RNA has already encoded something about the type of protein it will produce.
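The build-up account in that explanation can be caricatured in a few lines of Python. This is a toy model of my own, not anything from the paper; the gain and decay numbers are arbitrary, chosen only to show a response that grows with repeated shocks and saturates:

```python
def sensitized_response(n_shocks, gain=0.3, decay=0.9, baseline=1.0):
    """Toy sensitization model: each shock adds protein (gain), while
    existing protein decays between shocks. The withdrawal response is
    baseline + accumulated protein, so it grows with repeated shocks
    and saturates at baseline + gain / (1 - decay).
    All numbers are illustrative, not measured values.
    """
    protein = 0.0
    for _ in range(n_shocks):
        protein = protein * decay + gain
    return baseline + protein

# Response after 1, 5, and 50 shocks: grows, then approaches the ceiling.
responses = [sensitized_response(n) for n in (1, 5, 50)]
```

With these parameters the response approaches baseline + gain / (1 - decay) = 4.0, a crude stand-in for a ceiling on sensitization.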

Neuroskeptic has a more contentious take on the study, casting doubt on the notion that sensitization of a simple reflex to any noxious stimulus (a form of non-associative “learning”) produces “memories” as we typically think of them. But senior author Dr. David Glanzman tolerated none of this, and expressed strong disagreement in the comments:

“I’m afraid you have a fundamental misconception of what memory is. We claim that our experiments demonstrate transfer of the memory—or essential components of the memory—for sensitization. Now, although sensitization may not comport with the common notion of memory—it’s not like the memory of my Midwestern grandmother’s superb blueberry pies, for example—it nevertheless has unambiguous status as memory. ... [didactic lesson continues] ... We do not claim in our paper that declarative memories—such as my memory of my grandmother’s blueberry pies—or even simpler forms of associative memories like those induced during classical conditioning—can be transferred by RNA. That remains to be seen.”

OK, so Glanzman gets to define what memory is. But later on he's caught in a trap and has to admit:

“Of course, there are many phenomena that can be loosely regarded as memory—the crease in folded paper, for example, can be said to represent the memory of a physical action.”

“So a transfer of RNA that activates a cellular mechanism associated with touch isn't memory, but rather just exogenously turning on a cellular pathway. By that logic, gene therapy to treat sickle cell anemia changes blood "memory".” 2

“Kandel set the precedent that reflexes in Aplysia are "memories", and now we're stuck with it.”

This reminded me of Dr. Kandel's bold [outlandish?] attempt to link psychoanalysis, Aplysia withdrawal reflexes, and human anxiety (Kandel, 1983). I was a bit flabbergasted that gill withdrawal in a sea slug was considered “mentation” (thought) and could support Freudian views.3

In the past, ascribing a particular behavioral feature to an unobservable mental process essentially excluded the problem from direct biological study because the complexity of the brain posed a barrier to any complementary biological analysis. But the nervous systems of invertebrates are quite accessible to a cellular analysis of behavior, including certain internal representations of environmental experiences that can now be explored in detail. This encourages the belief that elements of cognitive mentation relevant to humans and related to psychoanalytic theory can be explored directly [in Aplysia] and need no longer be merely inferred.


So anticipatory anxiety in humans is isomorphic to invertebrate responses in a classical aversive conditioning paradigm, and chronic anxiety is recreated by long-term sensitization paradigms. Perhaps I missed the translational advances here, or any fully realized application to Psychoanalytic and Neuropsychoanalytic practice.

If we want to accept a flexible definition of learning and memory in animals, why not consider associative learning experiments in pea plants, where a neutral cue predicting the location of a light source had a greater effect on the direction of plant growth than innate phototropism (Gagliano et al., 2016)? Or review the literature on associative and non-associative learning in Mimosa? (Abramson & Chicas-Mosier, 2016). Or evaluate the field of ‘plant neurobiology’ and even the ‘Philosophy of Plant Neurobiology’ (Calvo, 2016). Or are the possibilities of chloroplast-based consciousness and “mentation” without neurons too threatening (or too fringe)?

1 Edited to indicate my emphasis on reflex — more specifically, the gill withdrawal reflex in Aplysia — which can only go so far as a model of other forms of memory, in my view.

2 Another skeptic (but for different reasons) is Dr. Tomás Ryan, who was paraphrased in Scientific American:

But [Ryan] doesn’t think the behavior of the snails, or the cells, proves that RNA is transferring memories. He said he doesn’t understand how RNA, which works on a time scale of minutes to hours, could be causing memory recall that is almost instantaneous, or how RNA could connect numerous parts of the brain, like the auditory and visual systems, that are involved in more complex memories.

The authors were interested in whether they could extract a general factor of risk preference (R), analogous to the general factor of intelligence (g). They used a bifactor model to account for the general factor as well as specific, orthogonal factors (seven in this case). The differing measures above are often used interchangeably and called “risk”, but the general factor R only...

...explained substantial variance across propensity measures and frequency measures of risky activities but did not generalize to behavioral measures. Moreover, there was only one specific factor that captured common variance across behavioral measures, specifically, choices among different types of risky lotteries (F7). Beyond the variance accounted for by R, the remaining six factors captured specific variance associated with health risk taking (F1), financial risk taking (F2), recreational risk taking (F3), impulsivity (F4), traffic risk taking (F5), and risk taking at work (F6).

In other words, R didn't explain the behavioral tasks at all, and most of the tasks didn't even share common variance with one another (F7 below).
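A toy simulation can illustrate this pattern. It is my own sketch, not the authors' bifactor model: two self-report propensity measures that both load on a simulated general factor correlate with each other, while a behavioral task driven only by its own specific factor correlates with neither:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(42)
n = 2000
R = [rng.gauss(0, 1) for _ in range(n)]          # general risk factor

# Two propensity measures load on R (plus independent noise) ...
prop1 = [r + rng.gauss(0, 1) for r in R]
prop2 = [r + rng.gauss(0, 1) for r in R]
# ... while the behavioral task loads only on its own specific factor.
behav = [rng.gauss(0, 1) for _ in range(n)]

r_props = pearson(prop1, prop2)   # substantial: shared general factor
r_task  = pearson(prop1, behav)   # near zero: no shared variance
```

With unit loadings and unit noise, the propensity measures should correlate around 0.5, and the task around 0, which is the qualitative shape of the published result.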

To assess risk-taking, Vi and Obrist (2018) administered the Balloon Analog Risk Task (BART) to 70 participants in the UK and 71 in Vietnam. They were randomly assigned to one of five taste groups [yes, n=14 each] of Bitter (caffeine), Salty (sodium chloride), Sour (citric acid), Umami (MSG), and Sweet (sugar, presumably). They were given two rounds of BART and consumed 20 ml of flavored drink or plain water before each (in counterbalanced order).

[Remember that BART didn't load on a general factor of risk-taking, nor did it capture common variance across behavioral tasks.]

As in the animation above (and a video made by the authors)2, the participant “inflates” a virtual balloon via mouse click until they either stop and win a monetary reward, or else they pop the balloon and lose money. The number of clicks (pumps) indicates risk-taking behavior. Overall, the Vietnamese students (all recruited from the School of Biotechnology and Food Technology at Hanoi University) appeared to be riskier than the UK students (but I don't know if this was tested directly). The main finding was that both groups clicked more after drinking citric acid than the other solutions.
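For intuition about what the pump count trades off, here is a minimal BART-style simulation. The capacity, per-pump reward, and uniform pop rule are simplifications of my own, not the parameters of the actual task:

```python
import random

def bart_trial(pumps, max_capacity=128, reward_per_pump=0.05, rng=None):
    """Simulate one balloon: the pop point is uniform over 1..max_capacity.

    Returns money earned (0.0 if the balloon popped before the
    participant banked their winnings).
    """
    rng = rng or random.Random()
    pop_point = rng.randint(1, max_capacity)
    if pumps >= pop_point:            # balloon popped, trial forfeited
        return 0.0
    return pumps * reward_per_pump    # banked winnings

def mean_earnings(pumps, n_trials=10_000, seed=0):
    """Average earnings of a fixed-pump strategy over many balloons."""
    rng = random.Random(seed)
    return sum(bart_trial(pumps, rng=rng) for _ in range(n_trials)) / n_trials

# A cautious strategy (20 pumps) vs. a riskier one (90 pumps):
cautious = mean_earnings(20)
risky = mean_earnings(90)
```

Under a uniform pop point, expected earnings peak at half the capacity, so riskier pumping pays off up to a point and then collapses; the pump count is the behavioral index of where a participant sits on that curve.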

Why would this balloon pumping be more vigorous after tasting a sour solution? We could also ask, why were the Vietnamese subjects more risk-averse after drinking salt water, and riskier (relative to UK subjects) after drinking sugar water?3 We simply don't know the answer to any of these questions, but the authors weren't shy about extrapolating to clinical populations:

For example, people who are risk-averse (e.g., people with anxiety disorders or depression) may benefit from a sour additive in their diet.

Prior work has, for instance, shown that in cases of psychiatric disorders such as depression, anxiety, or stress-related disorders the use of lemon oils proved efficient and was further demonstrated to reduce stress. While lemon and sour are not the same, they share common properties that can be further investigated with respect to risk-taking.

We're really not sure how any of this works. The authors offered many more analyses in the Supplementary Materials, but they didn't help explain the results. Although the sour finding was interesting and observed cross culturally, would it replicate using groups larger than n=14?

The term “risk” refers to properties of the world, yet without a clear agreement on its definition, which has ranged from probability, chance, outcome variance, expected values, undesirable events, danger, losses, to uncertainties. People’s responses to those properties, on the other hand, are typically described as their “risk preference.”

2 The video conveniently starts by illustrating risk as skydiving, which bears no relation to being an adventurous eater.

This post will be my own personalized rant about the false promises of personalized medicine. It will not be about neurological or psychiatric diseases, the typical topics for this blog. It will be about oncology, for very personal reasons: misery, frustration, and grief. After seven months of research on immunotherapy clinical trials, I couldn't find a single [acceptable] one1 in either Canada or the US that would enroll my partner with stage 4 cancer. For arbitrary reasons, for financial reasons, because it's not the “right” kind of cancer, because the tumor's too rare, because it's too common, because of unlisted exclusionary criteria, because one trial will not accept the genomic testing done for another trial.2 Because of endless waiting and bureaucracy.

But first, I'll let NIH explain a few terms. Is precision medicine the same as personalized medicine? Yes and no. Seems to me it's a bit of a branding issue.

[it's a defense of the old-fashioned family doctor (solo practitioner) by Gibson]:

...will the solo practitioner's demise be welcomed, his replacement being a battery of experts in the fields of medicine, surgery, psychiatry and all the new allied health sciences, infinitely better trained than their singlehanded predecessor?

However, there was concern that the word "personalized" could be misinterpreted to imply that treatments and preventions are being developed uniquely for each individual; in precision medicine, the focus is on identifying which approaches will be effective for which patients based on genetic, environmental, and lifestyle factors.

The Council therefore preferred the term "precision medicine" to "personalized medicine." However, some people still use the two terms interchangeably.

So “precision medicine” is considered a more contemporary and cutting-edge term.

Pharmacogenomics is a part of precision medicine. Pharmacogenomics is the study of how genes affect a person’s response to particular drugs. This relatively new field combines pharmacology (the science of drugs) and genomics (the study of genes and their functions) to develop effective, safe medications and doses3 that are tailored to variations in a person’s genes.

At present, precision pharmacogenomics is just a “tumor grab” with no promise of treatment in most cases. There are some serious and admirable efforts, but accessibility and costs are major barriers.

Cancer chemotherapy is in evolution from non-specific cytotoxic drugs that damage both tumour and normal cells to more specific agents and immunotherapy approaches. Targeted agents are directed at unique molecular features of cancer cells, and immunotherapeutics modulate the tumour immune response; both approaches aim to produce greater effectiveness with less toxicity. The development and use of such agents in biomarker-defined populations enables a more personalized approach to cancer treatment than previously possible and has the potential to reduce the cost of cancer care.

Certainly, there are success stories for specific types of cancer (e.g., Herceptin). A more recent example is the PD-1 inhibitor pembrolizumab (Keytruda®), which has shown remarkable results in patients with melanoma, including Jimmy Carter. The problem is, direct-to-consumer marketing creates false hope about the probability that a patient with another form of cancer will respond to this treatment, or one of the many other immunotherapies with PR machines. But if there's a 25% chance or even a 10% chance it'll extend the life of your loved one, you'll go to great lengths to try to acquire it, one way or another. Speaking from personal experience.

Oh, well, there was the one trial with a highly toxic combo -- "you cannot travel more than X miles away from our center, because only WE can treat your terrible side effects." And that fun bit was omitted from the consent form.

skin inflammation causing hives or rash which may rarely be severe and become life threatening

anemia which may cause tiredness, or may require blood transfusion

itching

abnormal liver function seen by blood tests. This may rarely lead to jaundice (yellowing of the skin and whites of eyes) and be severe or life threatening

abnormal function of your thyroid gland which cause changes in hormonal levels. A decrease in thyroid function as seen on blood tests may cause you to feel tired, cold or gain weight while an increase in thyroid function may cause you to feel shaky, have a fast pulse or lose weight.

Swelling of arms and/or legs (fluid retention)

Changes in the level of body salts as seen on blood tests. You may not have symptoms.

Inflammation of the pancreas that results in increased levels of digestive enzymes (lipase, amylase) seen in blood tests and may cause abdominal pain

Inflammation of the lungs (including fluid in the lungs) which could cause shortness of breath, chest pain, new or worse cough. It could be serious and/or life threatening. May occur more frequently if you are receiving radiation treatment to your chest or if you are Japanese.

Serious bleeding events leading to death may occur in patients with head and neck tumors. Please talk to your doctor immediately if you are experiencing bleeding.

Decrease of a protein in your blood called albumin that may cause fluid retention and results in swelling of your legs or arms

In the last year or so, it has become acceptable to question the dominant systems/circuit paradigm of “manipulate and measure” as THE method to gain insight into how the brain produces behavior (Krakauer et al., 2017; Gomez-Marin, 2017). Detailed analysis of an organism's natural behavior is indispensable for progress in understanding brain-behavior relationships. Claims that optogenetic and other manipulations of a neuronal population can demonstrate that it is “necessary and sufficient” (N&S) for a complex behavior have also been challenged. Gomez-Marin (2017) pulled no punches and stated:

I argue that to upgrade intervention to explanation is prone to logical fallacies, interpretational leaps and carries a weak explanatory force, thus settling and maintaining low standards for intelligibility in neuroscience. To claim that behavior is explained by a “necessary and sufficient” neural circuit is, at best, misleading.

The latest entry into this fault-fest goes further, indicating that most N&S claims in biology violate the principles of formal logic and should be called ‘misapplied-N&S’ (Yoshihara & Yoshihara, 2018). They say the use of “necessary and sufficient” terminology should be banned and replaced with “indispensable and inducing” (except for a handful of instances). 2

modified from Fig. 1A (Yoshihara & Yoshihara, 2018). The relationship between squares and rectangles as a typical example of true necessary (being a rectangle; pale green) and sufficient condition (being a square; magenta) in formal logic.
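The figure's logic is easy to state in code. As a hypothetical illustration (the shape definitions are mine): being a square is sufficient but not necessary for being a rectangle, and being a rectangle is necessary but not sufficient for being a square:

```python
# A shape is a (width, height) pair.
def is_rectangle(shape):
    w, h = shape
    return w > 0 and h > 0          # toy definition: any positive w x h

def is_square(shape):
    return is_rectangle(shape) and shape[0] == shape[1]

shapes = [(2, 2), (3, 1), (4, 4), (5, 2)]

# Sufficient: square => rectangle holds for every square in the set...
square_implies_rect = all(is_rectangle(s) for s in shapes if is_square(s))
# ...but not necessary: there are rectangles that are not squares.
rect_without_square = any(is_rectangle(s) and not is_square(s) for s in shapes)
```

The misapplied-N&S complaint is precisely that activating a neuron may be sufficient-as-a-trigger without the neuron being sufficient in this strict logical sense.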

N&S claims are very popular in optogenetics, which has become a crucial technique in neuroscience. But demonstrating true N&S is nearly impossible, because the terminology disregards: activity in the rest of the brain, whether all the activated neurons are “necessary” (instead of only a subset), what actually happens under natural conditions (rather than artificially induced), the requirement of equivalence, etc. Yoshihara & Yoshihara (2018) are especially disturbed by the incorrect use of “sufficient”, which leads to results being overstated and misinterpreted:

The main problem comes from the word ‘sufficient,’ which is often used to emphasize that artificial expression of only a single gene or activation of only a single neuron can cause a substantial and presumably relevant effect on the whole process of interest. Although it may be sufficient as an experimental manipulation for triggering the effect, it is not actually sufficient for executing the whole effect itself.

And for optogenetics:

Rather, the importance of ‘sufficiency’ experiments lies in demonstrating a causal link through optogenetic activation of neurons... Thus, words such as triggers, promotes, induces, switches, or initiates may better reflect or express the desired nuance without creating such confusion.

Y & Y (2018) aren't shy about naming names in their Commentary, and even say that misapplied-N&S has generated unproductive and misleading studies that offer no scientific insight whatsoever. Although one could say that N&S has a different meaning in biology, or is merely a figure of speech, such strong statements have consequences for the future directions of a field.

1 “...neurons necessary and sufficient for inter-male aggression are located within the ventrolateral subdivision of the ventromedial hypothalamic nucleus (VMHvl)...”

2 One of the instances uses the old discredited “command neuron” concept of Ikeda & Wiersma (1964). They call it a ‘Witch Hunt’ of Command Neurons and note that only three command neurons meet the true N&S criteria (one each in lobster, Aplysia, and Drosophila).

adapted from Figure 3 (Koroshetz et al., 2018). Magnetic resonance angiography highlighting the vasculature in the human brain in high resolution, without the use of any contrast agent, on a 7T MRI scanner. Courtesy of Polimeni & Wald (MGH). [ed. note: here's a great summary, “If, how, and when fMRI goes clinical,” by Dr. Peter Bandettini.]

The Journal of Neuroscience recently published a paywalled article on The State of the NIH BRAIN Initiative. This paper reviewed the research and technology development funded by the “moonshot between our ears” [a newly coined phrase]. The program has yielded a raft of publications (461 to date) since its start in 2014. Although the early emphasis has not been on Human Neuroscience, NIH is ramping up its funding for human imaging and neuromodulation.

...neuroscience research in general and the BRAIN Initiative specifically, with its focus on unraveling the mysteries of the human brain, generate many important ethical questions about how these new tools could be responsibly incorporated into medical research and clinical practice.

I don't think most of the current grant recipients are focused on “unraveling the mysteries of the human brain”, however. They're interested in cell types, circuit diagrams, and monitoring and manipulating neural activity in model organisms such as Drosophila, zebrafish, and mice. There are aspirations for a Human Cell Atlas, but many of the other tools are very far away (or impossible) for use in humans.


Some aspects of the terminology used by Koroshetz et al. (2018) are vague to the savvy but non-expert eye. What is a neural circuit? The authors never actually define the term. You'll get different answers depending on whom you ask. We know that “individual neuroscientists have chosen to work at specific spatial scales, ranging from ... ion channels ... to systems level” and we know there is a range of temporal scales, “from the millisecond of synaptic firing to the entire lifespan” (Koroshetz et al., 2018):

Within this diverse set of scales, the circuit is a key point of focus for two primary reasons: (1) neural circuits perform the calculations necessary to produce behavior; and (2) dysfunction at the level of the circuit is the basis of disability in many neurological and psychiatric disorders.

So maybe key point #1 is a generic working definition of a neural circuit, and is the focus of many NIH BRAIN-funded neuroscientists. But there's a huge leap from the impressive work on e.g. mapping, manipulating, and controlling stress-related feeding behaviors in rodents, and key point #2: isolating circuit dysfunction and ultimately treating eating disorders in humans. There is a lot of “promise” and many “aspirational goals”, but the concluding sentence is just too aspirational and promises too much:

With diverse scientists jointly working in novel team structures, often in partnership with industry, and sharing unprecedented types and quantities of data, the BRAIN Initiative offers a unique opportunity to open the door to a golden age in brain science and improved brain health for all.

The research that gets closest to bridging this gap is electrocorticography (ECoG) and deep brain stimulation (DBS) in human patients.1 The exemplar cited in the NIH paper is by Swann et al. (2018), and involved testing a closed-loop DBS system in two Parkinson's patients. The Activa PC + S system (Medtronic) is able to both stimulate the brain target region (subthalamic nucleus, STN) and record neural activity at the same time. The local field potential (LFP) activity is then fed back to the stimulator, which adjusts its parameters based on a complex control algorithm derived from the neural data.

The unique aspect here is that the authors recorded gamma oscillations (60–90 Hz in this case) from a subdural lead over motor cortex to adjust stimulation. In earlier work, they showed this gamma power was indicative of dyskinesia (abnormal, uncontrolled, involuntary movement), so STN stimulation was adjusted when gamma was above a certain threshold. The study demonstrated feasibility, and its greatest benefit at this early point was energy savings that preserved the battery.
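As a rough sketch of the control idea (all thresholds, voltage limits, and step sizes below are invented for illustration; the actual Activa PC + S algorithm is more complex than a simple threshold rule):

```python
def closed_loop_step(gamma_power, voltage, threshold=1.0,
                     v_min=0.5, v_max=3.0, step=0.1):
    """One hypothetical control update: cortical gamma power above
    threshold is treated as a dyskinesia marker, so stimulation is
    stepped down; otherwise it is stepped back up toward v_max.
    All parameter values are illustrative, not from the paper.
    """
    if gamma_power > threshold:
        voltage = max(v_min, voltage - step)
    else:
        voltage = min(v_max, voltage + step)
    return voltage

# Feed a stream of (simulated) gamma-power readings through the controller:
readings = [0.4, 0.6, 1.4, 1.6, 1.2, 0.8, 0.5]
v = 2.0
trace = []
for g in readings:
    v = closed_loop_step(g, v)
    trace.append(round(v, 2))
```

Stepping the voltage down only when the biomarker crosses threshold is also what produces the reported battery savings: the device is not stimulating at full amplitude all the time.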

It's cool work that has been promoted by NIH, but unfortunately the first author was not mentioned in the press release, not featured in the accompanying video, and her name isn't even visible on a shot of the poster that appears in the video.2 [the last author gets all the credit.]

2 We interrupt the NIH press coverage of this paper to acknowledge the first author, Dr. Nicki Swann. Dr. Swann and many of her female colleagues have described the difficulties of traveling and attending conferences while being a new mother, and offered some possible solutions. If the BRAIN Initiative is serious about addressing Neuroethics (for animals and futuristic sci-fi applications to human patients), they should also be actively involved in issues affecting women and minority researchers. And I imagine they are, it just wasn't apparent here.