Alas, it is unlikely that the field of psychophysiology will un-mangle its measurement of emotions with reflexes in such a short span of time.

My lab uses two reflexes to assess the experience of emotion, both of which can be elicited through short, loud noise probes. The startle blink reflex, recorded underneath the eye, indexes a defensive negative emotional state. The postauricular reflex, a tiny reflex behind the ear, indexes a variety of positive emotional states. Unfortunately, neither reflex assesses emotion reliably.

When I say “reliably”, I mean an old-school meaning of reliability that addresses what percentage of variability in a measurement’s score is due to the construct it’s supposed to measure. The higher that percentage, the more reliable the measurement. In the case of these reflexes, in the best-case scenarios, about half of the variability in scores is due to the emotion they’re supposed to assess.
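To make that definition concrete, here’s a minimal simulation – with illustrative variance values, not data from my lab – of reliability as the share of a score’s variance that comes from the underlying construct:

```python
import random

random.seed(1)

# Sketch: each observed score = true construct value + measurement noise.
# With equal true and error variances, about half the variability in
# scores is due to the construct, i.e., reliability near 0.50.
n = 10_000
true_sd, error_sd = 1.0, 1.0
true = [random.gauss(0, true_sd) for _ in range(n)]
scores = [t + random.gauss(0, error_sd) for t in true]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Reliability = construct (true) variance / total observed variance.
reliability = true_sd ** 2 / variance(scores)
print(f"estimated reliability: {reliability:.2f}")
```

Doubling the noise relative to the signal drops that percentage fast, which is exactly the situation these reflexes are in.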

That’s pretty bad.

For comparison, the reliability of many personality traits is at least 80%, especially from modern scales with good attention to the internal consistency of what’s being measured. The reliability of height measurements is almost 95%.

Why is reflexive emotion’s reliability so bad?

Part of it likely stems from the fact that (at least in my lab), we measure emotion as a difference of reactivity during a specific emotion versus during neutral. For the postauricular reflex, we take the reflex magnitude during pleasant pictures and subtract from that the reflex magnitude during neutral pictures. For the startle blink, we take the reflex magnitude during aversive pictures and subtract from that the reflex magnitude during neutral pictures. Differences can have lower reliabilities than single measurements because the unreliability in both emotion and neutral measures compound when making the difference scores.
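The compounding can be seen in the classical formula for the reliability of a difference score between two equal-variance measures. A sketch, with illustrative values rather than my lab’s actual numbers:

```python
def difference_score_reliability(rel_x, rel_y, r_xy):
    """Reliability of the difference X - Y for two equal-variance measures.

    Classical formula: ((rel_x + rel_y) / 2 - r_xy) / (1 - r_xy),
    where r_xy is the correlation between the two measures.
    """
    return ((rel_x + rel_y) / 2 - r_xy) / (1 - r_xy)

# Hypothetical numbers: two measures (emotion and neutral) that are each
# moderately reliable (.70) but correlate .40 with each other lose
# reliability when differenced.
print(difference_score_reliability(0.70, 0.70, 0.40))  # 0.5
```

The more strongly the emotion and neutral measures correlate, the more the shared (reliable) variance cancels out of the difference, leaving mostly noise behind.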

However, it’s even worse when we use reflex magnitudes just during pleasant or aversive pictures. In fact, it’s so bad that I’ve found both reflexes have negative reliabilities when measured just as the average magnitude during either pleasant or aversive pictures! That’s a recipe for a terrible, awful, no good, very bad day in the lab. That’s why I don’t look at reflexes during single emotions by themselves as good measures of emotion.

Now, some of these difficulties look like they can be alleviated if you look at raw reflex magnitude during each emotion. If you do that, it looks like we could get reliabilities of 98% or more! So why don’t I do this?

Because from person to person, reflex magnitudes during any stimulus can differ by a factor of more than 100, which means that raw reflex magnitudes mostly capture a person’s overall reflex magnitude – irrespective of any emotional state the person’s in at that moment.

Let’s take the example of height again. Let’s also suppose that feeling sad makes people’s shoulders stoop and heads droop, so they should be shorter (that is, have a lower height measurement) whenever they’re feeling sad. I have people stand up while watching a neutral movie and a sad movie, and I measure their height four times during each movie to get a sense of how reliable the measurement of height is.

If all I do is measure the reliability of people’s mean height across the four sadness measurements, I’m likely to get a really high value. But what have I really measured there? Well, it’s just basically how tall people are – it doesn’t have anything to do with the effect of sadness on their height! To understand how sadness specifically affects people’s heights, I’d have to subtract their measured height in the neutral condition from that in the sad condition: a difference score.

Furthermore, if I wanted to take out entirely the variability associated with people’s heights from the effects of sadness I’m measuring (perhaps because I’m measuring participants whose heights vary from 1 inch to 100 inches), I can use a process called “within-subject z scoring”, which is what I use in my work. It doesn’t seem like the overall reflex magnitude people have predicts many interesting psychological states, so I feel confident in this procedure. Though my measurements aren’t great, at least they measure what I want to some degree.
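As a sketch of that procedure – with made-up magnitudes, not real reflex data – within-subject z-scoring rescales each participant against their own mean and standard deviation, so only the relative, within-person pattern survives:

```python
from statistics import mean, stdev

def within_subject_z(values):
    """Z-score one participant's magnitudes against their own mean and SD,
    removing overall-magnitude differences between people."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Two hypothetical participants whose overall magnitudes differ 100-fold
# but whose trial-to-trial modulation follows the same pattern.
small = [1.0, 1.2, 0.8, 1.4]
large = [100.0, 120.0, 80.0, 140.0]
print(within_subject_z(small))
print(within_subject_z(large))  # same z-scored pattern as `small`
```

After this transformation, a participant with huge reflexes and one with tiny reflexes contribute on the same scale, so condition differences reflect emotional modulation rather than overall reactivity.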

What could I do to make reflexive measures of emotion better? Well, I’ve used four noise probes in each of four different picture contents to cover a broad range of positive emotions. One thing I could do is target a specific emotion within the positive or negative emotional domain and probe it sixteen times. Though it would reduce the generalizability of my findings, it would substantially improve reliability of the reflexes, as reliabilities tend to increase the more trials you include (because random variations have more opportunities to get cancelled out through averaging). For the postauricular reflex, I could also present lots of noise clicks instead of probes to increase the number of reflexes elicited during each picture. Unfortunately, click-elicited and probe-elicited reflexes only share about 16% of their variability, so it may be difficult to argue they’re measuring the same thing. That paper also shows you can’t do that for startle blinks, so that’s a dead end method for that reflex.
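The trials-and-reliability relationship described above is captured by the Spearman-Brown prophecy formula. A quick sketch, using an illustrative starting reliability rather than an actual estimate from my data:

```python
def spearman_brown(reliability, k):
    """Projected reliability when the number of trials is multiplied by k
    (Spearman-Brown prophecy formula)."""
    return k * reliability / (1 + (k - 1) * reliability)

# Illustrative: if 4 probes per emotion yield a reliability of 0.50,
# quadrupling to 16 probes of a single targeted emotion projects to 0.80.
print(round(spearman_brown(0.50, 4), 2))  # 0.8
```

The formula assumes the added trials behave like the originals, which is exactly why swapping probes for clicks is problematic: reflexes that share only 16% of their variability are not interchangeable trials.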

In short, there’s a lot of work to do before the psychophysiology of reflexive emotion can relax with its cup of tea after redeeming itself with a reliable, well-received performance (in the lab).

One of the moments that always stuck with me was one in the Captain’s quarters as the Enterprise and its Romulan counterpart wait each other out in silence. Dr. McCoy comes to speak with Captain Kirk, who expresses a rare moment of self-doubt regarding his decisions during tactical combat. The doctor’s compassionate nature comes through as he reminds the captain how across 3 million Earth-like planets that might exist, recapitulated across 3 million million galaxies, there’s only one of each of us – and not to destroy the one named Kirk. The lesson of that moment resonates 50 years later and is one I like to revisit when I feel myself beset by doubts about myself or my career.

Another moment I appreciate is the imperfection allowed in Spock’s character without being under the influence of spores, temporal vortices, or other sci-fi contrivances. Already, he has been accused of being a Romulan spy by a bigoted member of the crew who lost multiple family members in a war with the Romulans decades before visual communication was possible. Now, Spock breaks the silence under which the Enterprise was operating with a clumsy grip on the console he is repairing. Is this the action of a spy? Or just an errant mistake that anyone could make, especially when under heightened scrutiny?

Indeed, this error might be expected when Mr. Spock operates under stereotype threat. Just hours earlier, he was revealed to share striking physiological similarities with the Romulan enemies, who Spock described as possible warrior offshoots of the Vulcan race before Vulcans embraced logic. This revelation caused Lt. Stiles, who had branches of his family wiped out in the prior war with the Romulans, to view Spock with distrust and outright bigotry that was so blatant that the captain called him on it on the bridge. Still, Stiles’s prejudice against Spock is keenly displayed throughout the episode, making it more likely that Spock would conform to the sabotaging behavior expected of him by his bridgemate.

On their own ship, the sneaky and cunning Romulans were not depicted as mere stereotypes of those adjectives but instead as a richly developed martial culture. Their commander and his centurion have a deep bond that extends over a hundred campaigns; the regard these two have for each other is highlighted in the actors’ subtle inflections and camaraderie. The internal politics of the Romulan empire are detailed through select lines of dialog surrounding the character of Decius and the pique that character elicits in his commander. In the end, the Romulan commander is shown to be sensitive to the demands of his culture and his subordinates in the culminating action of the episode, though the conflict between these and his own plans is palpable.

The contrast between Romulans and Spock highlights how alien Vulcan logic seems to everyone else. Spock is a character who represents the outsider, the one struggling for acceptance among an emotional human crew even as he struggles to maintain his culture’s logical discipline. Authors with autism have even remarked how Spock helped them understand how they perceive the world differently from neurotypicals in a highly logical fashion. However, given the emotional militarism of the Romulans, I believe that Vulcan logic is a strongly culturally conditioned behavior rather than a reflection of fundamental differences in baseline neurobiological processing.

This is a long post written for both professionals and curious lay people; the links below allow you to jump among the post’s sections. The links in all CAPS represent the portions of this post I view as its unique intellectual contributions.

Psychology is beset with ways to find things that are untrue. Many famous and influential findings in the field are not standing up to closer scrutiny with tightly controlled designs and methods for analyzing data. For instance, a registered replication report in which my lab was involved found that holding a pen between your lips in a smiling pose does not, in fact, make cartoons funnier. Indeed, less than half of 100 studies published in top-tier psychology journals replicated.

One proposal for solving these problems is preregistration. Preregistration refers to making available – in an accessible repository – a detailed plan about how researchers will conduct a study and analyze its results. Any report that is subsequently written on the study would ideally refer to this plan and hew closely to it in its initial methods and results descriptions. Preregistration can help mitigate a host of questionable research practices that take advantage of researcher degrees of freedom, or the hidden steps behind the scenes that researchers can take to influence their results. This garden of forking paths can transmute data from almost any study into something statistically significant that could be written up somewhere; preregistration prunes this garden into a single, well-defined shrub for any set of studies.

Yet prominent figures doubt the benefits of preregistration. Some even deny there’s a replication crisis that would require these kinds of corrections. And to be sure, there are other steps to take to solve the reproducibility crisis. However, I argue that preregistration has three virtues, which I describe below. In addition to enhancing reproducibility of scientific findings, it provides a method for managing conflicts of interest in a transparent way above and beyond required institutional disclosures. Furthermore, I also believe preregistration permits a lab to demonstrate its increasing competence and a field’s cumulative knowledge.

Enhancing reproducibility

Chief among the proposed benefits of preregistration is the ability of science to know what actually happened in a study. Preregistration is one part of a larger open science movement that aims to make science more transparent to everyone – fellow researchers and the public alike. Preregistration is probably more useful for people on the inside, though, as it helps people knowledgeable in the field assess how a study was done and what the boundaries were on the initial design and analysis. Nevertheless, letting the general public see how science is conducted would hopefully foster trust in the research enterprise, even if it may be challenging to understand the particulars without formal training.

Hypothesizing After the Results are Known (HARKing): You can’t say you thought all along something you found in your data if it’s not described in your preregistration.

Altering sample sizes to stop data collection prematurely (if you find the effect you want) or prolong it (to increase power, or the likelihood of detecting effects): You said how many observations you were going to make, so you have a preregistered point to stop. Ideally, this stopping point would be determined from a power analysis using reasonable assumptions from the literature or basic study design about the expected effect sizes (e.g., differences between conditions or strengths of relationships between variables).

Eliminating participants or data points that don’t yield the effect you want: There are many reasons to drop participants after you’ve seen the data, but preregistering reasons for eliminating any participants or data from your analyses stops you from doing so to “jazz up” your results.

Dropping variables that were analyzed: If you collect lots of measures, you’ve got lots of ways to avoid putting your hypotheses to rigorous tests; preregistration forces you to specify which variables are focal tests of your hypothesis beforehand. It also ensures you think about making appropriate corrections for running lots of tests. If you run 20 different analyses, each with a 5% chance (or .05 probability) of yielding a result you want (a typical setup in psychology), then you can expect about 1 spurious significant result by chance alone!

Dropping conditions or groups that “didn’t work”: Though it may be convenient to collect some conditions “just to see what happens”, preregistering your conditions and groups makes you consider them when you write them up.

Invoking hidden moderators to explain group differences: Preregistering all the things you believe might change your results ensures you won’t pull an analytic rabbit out of your hat.
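The multiple-testing arithmetic from the dropped-variables point above is easy to verify. A quick sketch, with the Bonferroni correction shown as one simple (hypothetical) fix:

```python
# With 20 independent tests each run at alpha = .05, the expected number
# of false positives is 1, and the chance of at least one is about 64%.
alpha, n_tests = 0.05, 20

expected_false_positives = alpha * n_tests          # 1.0
prob_at_least_one = 1 - (1 - alpha) ** n_tests      # ~0.64
bonferroni_alpha = alpha / n_tests                  # 0.0025 per test

print(expected_false_positives)
print(round(prob_at_least_one, 2))
print(bonferroni_alpha)
```

So even a researcher with no intent to deceive, running a typical battery of analyses, will stumble onto “significant” results; preregistering the focal tests and the correction strategy is what keeps that from contaminating the record.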

Many of these solutions can be summed up in 21 words. Ultimately, rather than having lots of hidden “lab secrets” about how to get an effect to work or a multitude of unknown ingredients working their way into the fruit of the garden of forking paths, research will be cleanly defined and obvious, with bright and shiny fruit from its shrubbery.

Managing conflicts of interest

As I was renewing my CITI training (the stuff we researchers have to refresh every 4 years to ensure we keep up to date on performing research ethically and responsibly), I realized that preregistration of analytic plans also creates a conflict of interest management plan. Preregistered methods and data analytic plans require researchers to describe exactly what they’re going to do in a study. Those plans can be reviewed by experts – including officials at an individual’s university, at a funding agency, or in a journal’s editorial processes – to detect ways in which researchers’ own interests might be put ahead of the integrity of the data or analyses in the study. Conscientious researchers can also scrutinize their own plans to see how their own best interests might have crept ahead of the most scientifically justifiable procedures to follow in a study.

These considerations led the clinical trials field to adopt a set of guidelines to prevent conflicts of interest from altering the scientific record. Far more than institutional disclosure forms, these guidelines force scientists to show their work and stick to the script of their initial study design. Since adopting these guidelines, the number of clinical trials showing null outcomes has increased dramatically. This pattern suggests that conflicts of interest may have guided some of the positive findings for various therapies rather than scientific evidence analyzed according to best practices. The preregistered shrub may not bear as much fruit as the garden of forking paths, but the fruit preregistered science bears is less likely to be poisonous to the consumer of the research literature.

Demonstrating scientific competence and cumulative knowledge

One underappreciated benefit of preregistration is the way it allows researchers to demonstrate their increasing competence in an area of study. When we start out exploring something totally new, we have ideas about basic things to consider in designing, implementing, and analyzing our studies. However, we often don’t think of all the probable ways that data might not comport with our assumptions, the procedural shifts that might be needed to make things work better, or the optimal analytic paths to follow.

When you run a first study, loads of these issues crop up. For example, I didn’t realize how hard it was going to be to recruit depressed patients from our clinic for my grant work on depression (especially after changing institutions right as the grant started), so I had to switch recruitment strategies. Right as we were starting to recruit participants, a conference talk in 2013 totally changed the way I wanted to analyze our data, as the mood reactivity item was better for what we wanted to look at than an entire set of diagnostic subtypes. In dealing with those challenges, you learn a lot for the second time you run a similar study. Now I know how to specify my recruitment population, and I can point to that talk as a reason for doing things a different way than my grant described. Over time, I’ll know more and more about this topic and the experimental methods in it, plugging additional things into my preregistrations to reflect my increased mastery of the domain.

Ideally, the transition from less detailed exploratory analyses to more detailed confirmatory work is a marker of a lab’s competence with a specific set of techniques. One could even judge a lab’s technical proficiency by the number of considerations advanced in their preregistrations. Surveying preregistered projects for various studies might let you know who the really skilled scientists in an area are. That information could be useful to graduate students wanting to know with whom they’d like to work – or potential collaborators seeking out expertise in a particular topic. Ideally, a set of techniques would be well-established enough within a lab to develop a standard operating procedure (SOP) for analyzing data, just as many labs have SOPs for collecting data.

In this way, the fruits of research become clearer and more readily picked. Rather than taking fruitless dead ends down the garden of forking paths with hidden practices and ad hoc revisions to study designs, the well-manicured shrubbery of preregistered research and SOPs gives everyone a way to evaluate the soundness of a lab’s methods without ever having to visit. Indeed, some journals take preregistration so seriously now that they are willing to provisionally pre-accept papers with sound, rigorous, and preregistered methodology. Tenure committees can likewise look under the hood of the studies you’ve conducted, which could alleviate a bit of the publish-or-perish culture in academia. A university’s standards could even reward an investigator’s rigor of research beyond a publication history (which may be more like a lottery than a meritocracy).

A model for confirmatory and exploratory reporting and review

In my ideal world, results sections would be divided into confirmatory and exploratory sections. Literally. Whether written as RESULTS: CONFIRMATORY and RESULTS: EXPLORATORY, PREREGISTERED RESULTS and EXPLORATORY RESULTS, or some other set of headings, it should be glaringly obvious to the reader which is which. The confirmatory section contains all the stuff in the preregistered plan; the exploratory section contains all the stuff that came after. Right now, I would prefer that details about the exploratory analyses be kept in that exploratory results section to make it clear it came after the fact and to create a narrative of the process of discovery. However, similar Data Analysis: Confirmatory and Data Analysis: Exploratory or Preregistered Data Analysis and Exploratory Data Analysis sections might make it easier to separate the data analytics from the meat of the results.

It’s also important to recognize that exploratory analyses shouldn’t be pooh-poohed. Curious scientists who didn’t find what they expected could systematically explore a number of questions in their data subsequent to its collection and preliminary analysis. However, it is critical that all deviations from the preregistration be reported in full detail and with sufficient justification to convince the skeptical reader that the extra analyses were reasonable to perform. Much of the problem with our existing literature is that we haven’t reported these details and justifications; in my view, we just need to make them explicit to bolster confidence in exploratory findings.

Reviewers should ask about those justifications if they’re not present, but exploratory analyses should be held to essentially the same standards as we hold current results sections. After all, without preregistration, we’re all basically doing exploratory analyses! As time passes, confirmatory analyses will likely hold more weight with reviewers. However, for the next 5-10 years, we should all recall that we came from an exploratory framework, and to an exploratory framework we may return when justified. When considering an article, reviewers should also look carefully at the confirmatory plan (which should be provided as an appendix to a reviewed article if a link that would not compromise reviewer anonymity cannot be provided). If the researchers deviated from their preregistered plan, call them on it and make them run their preregistered analyses! In any case, preregistration’s goals can fail if reviewers don’t exercise due diligence in following up the correspondence between the preregistration and the final report.

The broad strokes of a paper I’m working on right now demonstrate the value of preregistration in correcting mistakes and the ways exploratory results might be described. I was showing a graduate student a dataset I’d collected years before, and there were three primary dependent variables I planned on analyzing. To my chagrin, when the student looked through the data, that student pointed out that one of those three variables had never been computed! Had I preregistered my data analytic plan, I would have remembered to compute that variable before conducting all of my analyses. When that variable turned out to be the only one with interesting effects, we also thought of ways to drill down and better understand the conditions under which the effect we found held true. We found these breakdowns were justifiable in the literature but were not part of our original analytic plan. Preregistration would have given us a cleaner way to separate these exploratory analyses from the original confirmatory analyses.

In any future work with the experimental paradigm, we’ll preregister both our original and follow-up analyses so there’s no confusion. Such preregistration also acts as a signal of our growing competence with this paradigm. We’ll be able to give sample sizes based on power analyses from the original work, prespecify criteria for excluding data and methods of dealing with missing values, and more precisely articulate how we will conduct our analyses.

My template

Many people talk about the difficulties of preregistering studies, so I advance a template I’ve been working on. In it, I pose a bunch of questions in a format structured like a journal article to guide researchers through questions I’d like to have answered as I start a study. It’s a work in progress, and I hope to add to it as my own thoughts on what all could be preregistered grow. I also hope we can publish some data analytic SOPs along with the psychophysiological SOPs that we use in the lab (a shortened version of which we have available for participants to view). I hope it’s useful in considering your own work and the way you’d preregister. If this seems too daunting, a simplified version of preregistration that hosts the registration for you can get you started!

As the heat of summer washes over the country, basic home safety becomes a concern. Sometimes, parents become worried that their messy houses might cause Child Protective Services to view them as unfit parents. A new paper from my research collaborators and me has shown that even in homes with genuine safety concerns, the beauty of a home (or lack thereof) isn’t associated with child abuse potential or socioeconomic status. Thus, it doesn’t appear that messy homes come from abusive parenting environments, and unattractive or unsafe homes are just as likely to be found in poorer and richer neighborhoods.

We found that trained assessors and people inhabiting homes had reasonable agreement about the beauty of the homes, but they didn’t agree on the safety risks present in the home. Part of that may have been because the trained assessors had checklists with over 50 items to check over in each room to assess safety and appearance, whereas the occupants of the homes only provided summary ratings of room safety and appearance on a 1-6 scale. It’s probably easier to give an overall judgment of the attractiveness of a room than to summarize in your mind all the possible safety risks that exist.

Because it’s so hard to notice these safety risks without a detailed guide, the assessment we developed can also be used as a way to point parents to specific things to fix in the home to make their children’s environment safer. We didn’t want people overwhelmed when thinking about what to clean up or make safer – rather, we wanted to give people specific things to address. We’ll be interested to see if people are better able to make their homes cleaner and safer places with the help of that assessment.

In 2013, two remarkable TV shows hit the air- and cable-waves that provide backstories of two of cinema’s most notable villains. Hannibal features a retelling of the story of Hannibal Lecter and Will Graham that surprises even the most die-hard connoisseurs of Thomas Harris’s original novels and the movies that have been made from them. Bates Motel fills in the history of Norman Bates, tracing his descent from a gawky teenager into the Psycho murderer.

Some of my recent work has examined how absorption is related to initial attention to emotional pictures and subsequent attention to noise probes. We found that people high in absorption paid more attention to emotional pictures (both pleasant and aversive) than to neutral pictures. Thus, people high in absorption get wrapped up in what they’re seeing when it’s emotionally evocative. Furthermore, we found that people high in absorption show less attention to a loud noise probe during all pictures. It’s as if they’re so wrapped up in processing the pictures that they don’t have as strong an ability to disengage attention to process something else coming in a different channel (that is, hearing as opposed to sight).

How does this apply to our two fictional characters? Both of them get really absorbed in the imaginal part of their internal experience, which wreaks havoc on their emotional lives. Will Graham’s unique perceptual gifts entail mentally reconstructing a crime from the residues left at the crime scene. He may be a perceptive person, but his genius lies in absorbing himself in what he sees and piecing people’s last moments together through the eyes of a killer. This kind of perspective taking is rare in individuals on the autism spectrum, as Graham claims himself to be. Therefore, I would argue that absorption is the key trait allowing Graham to get inside killers’ heads; his inability to disengage from the disturbing images that run through his head confuses him and creates untoward consequences that demonstrate the perils as well as the promise of high levels of absorption.

Norman Bates is a more purely maladaptive face of high absorption. Absorption is also associated with dissociation, which refers either to the feeling that one’s self or surroundings aren’t real or to the experience of having done something without recalling having done it. As the seasons progress, Norman’s increasing absorption in his fantasies about his mother propel him from committing murders of women he desires to taking on his mother’s identity without recalling having done it in the morning. Norman’s emotions overwhelm him, and he uses his absorption to retreat into a mental world that’s safer for him, that’s anchored by his mother. It’s this fantasy component of openness and absorption that’s related to psychoticism, which represents a vulnerability to experiencing odd and unusual perceptual experiences consistent with schizotypal personality disorder and certain forms of schizophrenia. In essence, Norman Bates isn’t a psychopathic killer; he’s one of the rare serial murderers with psychotic experiences – in this case, that may be underpinned by absorption. Will Graham exhibits a form of dissociation that might superficially seem related to absorption as well, but instead (SPOILER ALERT) is more likely due to encephalitis than his personality.