Successfully perceiving and recognizing the actions of others is of utmost importance for the survival of many species. For humans, action perception is considered to support important higher order social skills, such as communication, intention understanding and empathy, some of which may be uniquely human. Over the last two decades, neurophysiological and neuroimaging studies in primates have identified a network of brain regions in occipito-temporal, parietal and premotor cortex that are associated with the perception of actions, also known as the Action Observation Network. Despite a growing body of literature, the functional properties and connectivity patterns of this network remain largely unknown.

The goal of this dissertation is to address these general questions about functional properties and connectivity patterns with a specific focus on whether this system shows specificity for biological agents. To this end, we collaborated with a robotics lab, and manipulated the humanlikeness of agents that perform recognizable actions by varying visual appearance and movement kinematics. We then used a range of measurement modalities including cortical EEG oscillations, event-related brain potentials (ERPs), and fMRI together with a range of analytical techniques including pattern classification, representational similarity analysis (RSA), and dynamic causal modeling (DCM) to study the functional properties, temporal dynamics, and connectivity patterns of the Action Observation Network.

While our findings shed light on whether the human brain shows specificity for biological agents, the interdisciplinary work with robotics also allowed us to address questions regarding human factors in artificial agent design in social robotics and human-robot interaction, such as the uncanny valley: what kinds of robots should we design so that humans can easily accept them as social partners?

Please join us and find out more about some of Burcu’s exciting and interdisciplinary work in the lab!

Tool use is a hallmark of the human species and an essential aspect of daily life. Tools serve to functionally extend the body, allowing the user to overcome physical limitations and interact with the environment in previously impossible ways. Tool-body interactions lead to significant modulation in the user’s representations of body size, a phenomenon known as tool embodiment. In the present dissertation, I used psychophysics and event-related brain potentials to investigate several aspects of tool embodiment that are otherwise poorly understood.

First, we investigated the sensory boundary conditions of tool embodiment, specifically the role of visual feedback during tool use. In several studies, we demonstrate that visual feedback of tool use is a critical driver of tool embodiment. In one such study, we find that participants can embody a visual illusion of tool use, suggesting that visual feedback may be sufficient for tool-induced plasticity.

Second, we investigated the level of representation modulated by tool use. Is embodiment confined to sensorimotor body representations, as several researchers have claimed, or can it extend to levels of self-representation (often called the body image)? Utilizing well-established psychophysical tasks, we found that using a tool modulated the body image in a similar manner as sensorimotor representations. This finding suggests that similar embodiment mechanisms are involved at multiple levels of body representation.

Third, we used event-related brain potentials to investigate the electrophysiological correlates of tool embodiment. Several studies with tool-trained macaques have implicated multisensory stages of somatosensory processing in embodiment. Whether the same is true for humans is unknown. Consistent with what is found in macaques, we found that using a tool modulates an ERP component (the P100) thought to index the multisensory representation of the body.

The work presented in this dissertation advances our understanding of tool embodiment, both at the behavioral and neural level, and opens up novel avenues of research.

Please join us and find out more about some of Luke’s exciting research in the lab over the past few years!

I woke up this morning to see a link (shared by James C. Coyne, @CoyneoftheRealm on Twitter) to Russ Poldrack's blog post about bad reporting of fMRI results. While I get what Poldrack is saying, and have agreed with him and others in the field about the basic problem with this kind of inference using fMRI, what I want to talk about is the fact that the post was shared with the opinion that "fMRI is not replicable and is pseudoscience".

This is a serious insult that should not be thrown around lightly.

fMRI is one of the methods I use in my work. I do so because I believe it can answer some of the biological questions that interest me. I also know it is not just random bits of noise that we evil scientists sell to journals, taxpayers and newspapers for fame and glory. (Speaking of which, where is all the fame and glory I've been promised?)

When I called Coyne on the assertion that fMRI was pseudoscience, he pointed to this paper as his "proof". Many of you know it as the famous voodoo correlations paper that caused quite a bit of kerfuffle a few years ago. It is both puzzling and unfortunate that Coyne took a paper making a specific statistical point (inflated effect sizes from non-independently selected samples), a point that affects some neuroimaging studies and is by no means limited to neuroimaging, to mean that neuroimagers are pseudoscientists. But it is indicative of a problem I want to point out.
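To make that statistical point concrete: here is a minimal, hypothetical simulation (Python with NumPy; this code and its numbers are my own illustration, not from the paper or any study) of how selecting voxels and then estimating their effect size in the same data inflates the estimate, while an independent dataset reveals there was never anything there.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 10_000

# Pure noise: no true relationship between "brain" and "behavior".
brain = rng.standard_normal((n_subjects, n_voxels))
behavior = rng.standard_normal(n_subjects)

def corr_with_behavior(data, behav):
    # Pearson correlation of each voxel's values with the behavioral score.
    d = data - data.mean(axis=0)
    b = behav - behav.mean()
    return (d * b[:, None]).sum(axis=0) / (
        np.sqrt((d ** 2).sum(axis=0)) * np.sqrt((b ** 2).sum())
    )

r = corr_with_behavior(brain, behavior)

# Non-independent ("circular") analysis: pick voxels that survived a
# threshold, then report their mean correlation in the SAME data.
selected = np.abs(r) > 0.5
print(f"circular estimate:    {np.abs(r[selected]).mean():.2f}")

# Independent estimate: measure the same voxels in fresh noise data.
brain2 = rng.standard_normal((n_subjects, n_voxels))
behavior2 = rng.standard_normal(n_subjects)
r2 = corr_with_behavior(brain2, behavior2)
print(f"independent estimate: {np.abs(r2[selected]).mean():.2f}")
```

The circular estimate is guaranteed to exceed the selection threshold, while the independent estimate for the very same voxels hovers near the null level, which is exactly the inflation the voodoo paper was about; a real fix is to select and estimate on separate data.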

Nearly four years on, the voodoo paper is being thrown around by people outside of our field as proof that all fMRI work is pseudoscience. It's not just outsiders, either. Many a paper that has nothing to do with inflated effect sizes has been rejected with reference to this work. Yes, editors should call reviewers out on this, but often they don't (and in this field one negative reviewer is enough to kill your paper). The voodoo paper made some important points, and perhaps some of the papers that did make errors in their effect size calculations deserved to be called on their mistakes, but this makes little practical difference to the validity of the vast majority of fMRI findings. More than anything, this whole affair shows that people are very quick to jump in and hate on fMRI, pulling together as evidence bits of argument that are at best loosely connected, or appealing to statistical minutiae that do not affect the vast majority of research findings after all.

Another favorite is that salmon "study" everyone thought was brilliant, which can be summed up as "it would be really silly if people ran a ton of statistics and published the false positives". Indeed, which is why no one ever runs a single run of an fMRI study with one subject and publishes without any correction. But don't spoil the fun! They put a dead salmon in the scanner doing social cognition. I want to find it funny, but it's really tiresome to have that damn salmon brought up all the time, as if the entire field was in the dark about multiple comparisons until the hero fishermen came along. They should have scanned a herring. A red herring.
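For what it's worth, the multiple-comparisons point the salmon paper dramatizes takes a few lines to simulate. This sketch (Python/NumPy; the voxel count and thresholds are my own illustrative choices, not from any study) shows why uncorrected voxelwise tests on pure noise produce "activations", and why the standard corrections the field actually uses remove them.

```python
import numpy as np

rng = np.random.default_rng(42)
n_voxels = 100_000

# A "brain" of pure noise: z-statistics with no true activation anywhere.
z = rng.standard_normal(n_voxels)

# Uncorrected voxelwise threshold, one-tailed p < 0.001 (z > 3.09):
# with 100,000 independent tests we expect ~100 false positives.
uncorrected = int((z > 3.09).sum())
print(f"'active' voxels, uncorrected: {uncorrected}")

# Bonferroni correction for familywise error p < 0.05:
# per-voxel p < 0.05 / 100,000, i.e. roughly z > 4.89.
corrected = int((z > 4.89).sum())
print(f"'active' voxels, corrected:   {corrected}")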

If anything, publishing an fMRI study today takes more statistical sophistication and care than work in many other areas of science. I don't see people attacking neurophysiology as pseudoscience (and it isn't). Yet how many times have I read results like "n neurons of m recorded" with nary a chi-squared? Routine statistical methods in electrophysiology research, especially event-related potential (ERP) analyses, suffer from the same double-dipping issue highlighted by the voodoo paper and others. However, this is not considered a deal-breaker the way double dipping in fMRI is: choosing which ERP components to test statistically by "visual inspection" is standard practice.

In an ideal world, scientists would exercise critical thinking before throwing around insults about others’ work. If there’s any group of people who should resist loose reasoning and cult mentality, it should be scientists.

When the issue of fMRI-hating comes up, some colleagues say we must bring bad fMRI under scrutiny so that the public knows better and the science can be improved. Exactly. I am a passionate believer in public engagement. But I believe there are many areas where actual pseudoscience gets a lot of press and has larger consequences. While a minority of neuroimagers may indeed have overhyped their results (probably because they're trying to sell them to glamour magazines that insist your work be hot!), most are fully aware of the limitations of their work. Second, peer review usually takes care of a lot of issues. Perhaps in the 90s, when fMRI was shiny and new, publication standards were lower, but today it is not at all easy to publish fMRI data. Peer review of course isn't perfect and bad stuff does get out, but this is not specific to neuroimaging. And third, is it really hard to understand why "well, what we're recording is an indirect measure of brain activity that we model in this way and find that it correlates with …" often turns into "scientists have found the brain area that does X" in a newspaper article? As Poldrack mentions in his post, the NYT published a widely criticized article a few years ago, drawing critiques from leaders in the field of cognitive neuroscience. Now the NYT has gone and published a new article with the exact same error in reasoning. Calling other scientists names has hardly solved the problem.

Don’t get me wrong, it would be great if we could teach the public to be more critical in their reasoning. Informing people on how to interpret those brains with blobs they occasionally see would be nice and I try to do so in my courses and in the lab. But is it as important as helping people be more critical of the link between autism and vaccines, or astrology and destiny, or magnets and pain relief, or gender/race and intelligence, or to be bothered to think through and understand the evidence supporting evolution?

All this to say, I do not think all of this fMRI-hate is happening primarily in the interest of the public or good science. Discussions about methods and statistics can and should be held. But these should be civil, fact-based discussions among scientists (and perhaps the media). There is no need for sweeping statements and name-calling. Let's be fair and apply some critical thinking to the issues at hand before we jump in and say fMRI (or genetics, or whatever) has been "discredited" (another word I keep hearing about fMRI). It's anyone's prerogative to take to a blog and write (as I am doing now). Fighting bad science and actual pseudoscience is important and should be done responsibly. Let's not pretend that calling people names, without really questioning the content or logic of the arguments, is some selfless, holy work being done in the name of science and the taxpayers' enlightenment.

Now, of course I got defensive about the pseudoscience allegation, but let's also address the assertion that fMRI is not replicable. This is not correct. There is good work on the replicability of fMRI; it may not be as fun and easy a read as the salmon study, but I recommend looking into it. Many people replicate effects across different studies (I like to replicate and extend with other complementary methods such as neuropsychological patient work and TMS). But allow me to add something qualitative: below we have fMRI data from the same subject, scanned four different times over the span of five years in slight variants of the same paradigm. If you don't know what these data are showing, don't worry about what the experiment is for a second; just look at it with your human eyes and human brain. Do you really believe those colorful swathes look like random false positives? The same patterns emerging year after year? Can that be just noise?

Perhaps the pleasure people get from seeing fMRI lose its glamour is like finding out at the high school reunion that the prom queen is no longer a perfect size 6. But the prom queen’s jeans size does not make you any different. It may be tempting to hate on the popular/glamorous, but it’s also not right to do so just because it is popular/glamorous. Scientists should do better than that.

I want to point out that, despite all my passion and defense here, fMRI is only one of the tools I use in my research. I am in fact very critical of a lot of fMRI research. I use it sparingly and I try to be conservative in interpretation. So I am hardly the best person to defend fMRI in this context. My irritation here is that irrational and unsubstantiated attacks are being made. fMRI is a pretty darn amazing feat of research and technology that has allowed us to noninvasively image human brain function at high spatial resolution. It also has many inherent limitations. The biggest problems I encounter in fMRI have little to do with statistics or false positives (which are easily addressed by replications); they are human error in experimental design and/or interpretation of the data. This applies to any area of science, and fMRI is not immune.

I hope this misdirected fMRI hate is soon replaced by actual critical thinking among scientists. Public understanding of science is poor in many countries and federal funding has been declining. The brain is complicated enough without creating factions and cliques and name-calling. There is room for all of us as we try to tackle difficult scientific problems. If you don't like fMRI, if it's not the right method for your research, don't use it. But researchers in this field are by and large honest individuals who are more than open to using and developing the best statistical practices and experimental approaches to address their research goals.

There is no grand fMRI scam and neuroimagers do not deserve to be called pseudoscientists.