Clocks and Clouds

I recently had a short article in Wired on the danger of getting too enthralled with our empirical tools, which leads us to neglect everything that our tools can't explain:

A typical experiment in functional magnetic resonance imaging goes like this: A subject is slid into a claustrophobia-inducing tube, the core of a machine the size of a delivery truck. The person is told to lie perfectly still and perform some task -- look at a screen, say, or make a decision. Noisy superconducting magnets whir. The contraption analyzes the magnetic properties of blood to determine the amount of oxygen present, operating on the assumption that more-active brain cells require more-oxygenated blood. It can't tell what you're thinking, but it can tell where you're thinking it.

Functional MRI has been used to study all sorts of sexy psychological properties. You've probably seen the headlines: "Scientists Discover Love in the Brain!" and "This Is Your Brain on God!" Such claims are often accompanied by a pretty silhouette of a skull, highlighted with splotches of primary color. It's like staring at a portrait of the soul. It's also false. In reality, huge swaths of the cortex are involved in every aspect of cognition. The mind is a knot of interconnections, so interpreting the scan depends on leaving lots of stuff out, sifting through noise for the signal. We make sense of the data by deleting what we don't understand.

What's disappointing here isn't just that these early fMRI studies are overhyped or miss important facts. It's that this mistake is all too familiar. Time and time again, an experimental gadget gets introduced -- it doesn't matter if it's a supercollider or a gene chip or an fMRI machine -- and we're told it will allow us to glimpse the underlying logic of everything. But the tool always disappoints, doesn't it? We soon realize that those pretty pictures are incomplete and that we can't reduce our complex subject to a few colorful spots. So here's a pitch: Scientists should learn to expect this cycle -- to anticipate that the universe is always more networked and complicated than reductionist approaches can reveal.

Look at genetics: When the Human Genome Project was launched in the early 1990s, it was sold as a means of finally making sense of our DNA by documenting the slight differences that encode our individuality. But that didn't happen. Instead, the project has mostly demonstrated that we are more than a text, and that our base pairs rarely explain anything in isolation. It has forced researchers to focus on the much broader study of how our genes interact with the environment.

This same story plays out over and over -- only the nouns change. Once upon a time, physicists thought they had the universe mostly solved, thanks to their fancy telescopes and elegant Newtonian equations. But then came a century of complications, from the theory of relativity to the uncertainty principle; string theorists, in their attempts to reconcile ever-widening theoretical gaps, started talking about 11 dimensions. Dark matter remains a total mystery. We used to assume that it was enough to understand atoms -- the bits that compose the cosmos -- but it's now clear that these particles can't be deciphered in a vacuum.

Not surprisingly, this is exactly what neuroscientists are coming to grips with. In the mid-'90s, Marcus Raichle started wondering about all the mental activity exhibited by subjects between tasks, when they appeared to be doing nothing at all. Although Raichle's colleagues discouraged him from trying to make sense of all this noisy activity -- "They told me I was wasting my time," he says -- his team's work led to the discovery of what he calls the default network, which has since been linked to a wide range of phenomena, from daydreaming to autism. However, it can't be accurately described with the kind of distinct spots of a typical fMRI image. There's too much to see: It's a network of colorful complexity. Thanks to the work of Raichle and others, neuroscience now has a mandate to forgo the measurement of local spikes in blood flow in favor of teasing apart the vast electrical loom of the cortex. God and love are nowhere to be found -- and most of the time we have no idea what we're looking at. But that confusion is a good sign. The brain isn't simple; our pictures of the brain shouldn't be, either.

Karl Popper, the great philosopher of science, once divided the world into two categories: clocks and clouds. Clocks are neat, orderly systems that can be solved through reduction; clouds are an epistemic mess, "highly irregular, disorderly, and more or less unpredictable." The mistake of modern science is to pretend that everything is a clock, which is why we get seduced again and again by the false promises of brain scanners and gene sequencers. We want to believe we will understand nature if we find the exact right tool to cut its joints. But that approach is doomed to failure. We live in a universe not of clocks but of clouds.

So how do we see the clouds? I think the answer returns us to the vintage approach of the Victorians. Right now, the life sciences follow a very deductive model, in which researchers begin with a testable hypothesis, and then find precisely the right set of tools to test their conjecture. Needless to say, this has been a fantastically successful approach. But I wonder if our most difficult questions will require a more inductive method, in which we first observe and stare and ponder, and only then theorize. (This was the patient process of Darwin and his 19th-century peers.) After all, the power of our newest neuroscientific tools (such as those associated with the connectome) is that they allow us to observe the brain directly, without the frame of a conjecture. (The problem with such conjectures is that they force us to sort the noise from the signal before we really understand what we're looking at, which helps explain why entities like the default network were ignored for so many years.) Such an approach might seem anachronistic, but when it comes to deciphering the intractable mysteries of the brain it might be necessary. The human cortex is the most complex object in the universe: Before we can speculate about it, we need to see it, even if we don't always understand what we're looking at.

Or do we accept that the scientific method, based on empiricism, has brought us to the moon and doubled our lifespans? It's lifted the veil of ignorance, so that while we don't necessarily know our place in the universe, we at least know that it exists. Empiricism has shaped the world you live in, so keep blogging on your computer and the internet, both based on technology derived from quantum mechanics, the ultimate method for understanding nature's methods, and keep questioning the science that gives you a voice.

From the perspective of a working scientist, it seems that it's _journalists_ who over-hype new technology. MRI is treated with a decent amount of skepticism within the field, and there are many people who don't use it because they think it has nothing to offer them. The research papers themselves rarely (but sometimes!) overhype the results -- there's strong internal pressure against doing so, as overselling your results is a sure way of getting trashed in the peer review process.

Media reports, however, are rarely so cautious. I've read many overblown reports, tracked back to the original paper, and found something much more mild. I understand that some of this has to do with university press releases ... though the fact that so much science journalism is just a reworked press release is in itself an indictment.

Great article. I think that you will be interested in a recently developed set of methods for fMRI analysis, which use machine-learning methods to analyse how distributed parts of the brain jointly encode information, as opposed to just seeing which particular part of the brain lights up. This allows much richer analyses, for example looking at how neural representational structure relates to behavioural performance.

This approach of looking at how multiple parts of the brain jointly encode information closely parallels the trend that you point out in genetics. Instead of asking "What's the gene for X?", people these days instead use machine-learning tools to investigate how multiple genes act together. There is no single gene for X (except for a handful of monogenic conditions such as Huntington's), just as there is no single brain area which "does the task X". Both approaches involve trying to explore high-dimensional datasets, which is difficult but worth it. e.g. http://www.ncbi.nlm.nih.gov/pubmed/18097463

These pattern-based approaches can be, and usually are, every bit as hypothesis-driven as any other type of analysis. There also exist more inductive multivariate fMRI analysis approaches, such as ICA, which try to let interesting structure bubble up spontaneously from large datasets, but the multivoxel pattern analyses of brain activation that I am talking about are not like that. Instead, they ask a question, e.g. how does the multidimensional similarity space of neural activation patterns relate to people's behavioural performance?
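To make the idea concrete, here is a toy sketch of the kind of multivoxel pattern analysis described above: decoding which category a subject is viewing ("face" vs. "house") from a distributed activation pattern, using a correlation-based nearest-centroid classifier in the spirit of Haxby-style analyses. Everything here is simulated and illustrative -- the voxel counts, noise levels, and condition names are assumptions, not any published pipeline -- but it shows why information that no single voxel carries can still be read out from the joint pattern.

```python
import random

random.seed(0)

N_VOXELS = 50   # voxels in a hypothetical region of interest
N_TRIALS = 40   # trials per condition

CONDITIONS = ("face", "house")

# Each condition gets its own distributed "template" across all voxels;
# no single voxel separates the two conditions by itself.
templates = {c: [random.gauss(0, 1) for _ in range(N_VOXELS)]
             for c in CONDITIONS}

def simulate_trial(cond):
    """Observed activation pattern = template + measurement noise."""
    return [t + random.gauss(0, 1.5) for t in templates[cond]]

def pearson(x, y):
    """Pearson correlation between two voxel patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Split the data in half: estimate mean patterns on one half,
# decode the held-out half.
train = {c: [simulate_trial(c) for _ in range(N_TRIALS // 2)]
         for c in CONDITIONS}
held_out = [(c, simulate_trial(c))
            for c in CONDITIONS for _ in range(N_TRIALS // 2)]

def mean_pattern(trials):
    return [sum(vox) / len(trials) for vox in zip(*trials)]

centroids = {c: mean_pattern(train[c]) for c in CONDITIONS}

def decode(pattern):
    # Predicted label = condition whose mean training pattern the
    # held-out pattern correlates with most strongly.
    return max(CONDITIONS, key=lambda c: pearson(pattern, centroids[c]))

n_correct = sum(decode(p) == c for c, p in held_out)
print(f"decoding accuracy: {n_correct}/{len(held_out)}")
```

Because the signal lives in the correlation structure across all fifty voxels, the decoder succeeds even though each individual voxel is dominated by noise -- which is exactly the contrast with "which blob lights up" analyses that the comment above is drawing.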

@erikthebassist... did you even read the article? Where on earth do you get "dismiss empirical observation" from? Jonah's exact words on the scientific method are as follows: "Needless to say, this has been a fantastically successful approach." That doesn't sound much like a dismissal, does it? Suggesting a possible adaptation or expansion on those methods isn't exactly "questioning the science", I'd call it "furthering" science. Try reading and thinking before YOU so quickly dismiss something.

I agree totally, but there are two more points that need to be made. One involves marketing parading as science. New gizmos are often oversold as revelatory mainly to serve researcher bids for research grants. The second point has to do with our preoccupation with technological applications. Truly basic research virtually does not exist today. When some new gizmo participates in a "breakthrough," that advance is always at the engineering level; we miss the fact that it has not brought greater fundamental understanding because we are so enthralled with technological applications. This leads us to mistake engineering-level insight for fundamental understanding.

You're right that everything is more complicated than we would like to think, and because of that important insight, I think you provided a needed perspective to humble the claims of science and technology. But here and elsewhere I see a tendency in your writing to conflate that point with another -- that these so-called "clouds" (DNA, the brain, etc.) are fundamentally irreducible.

These are two different claims, and I think it wise to stay away from the second.

I really like your writing and am constantly inspired by your work. I am encouraged to see a science journalist discuss the limitations of the reductionist approach, and fMRI specifically. I do think, though, that in the spirit of observing and staring and pondering, we have to have something to stare at... which isn't just the big picture, but the pieces the reductionist approach provides us. I think when we illuminate the issues with reductionist science and the temptation to overstate the conclusions thereof, we should propose a marriage of the reductionist and integrative approaches and not go too far in the other direction (not that I think you have, necessarily, but I think it's worth mentioning).
Another issue common to neuroscience that I'd like to see discussed is "the curse of averages". You touch on this when you mention "noise", but I'd love to hear what you have to say about the blessings and curses of individual variation to neuroscience research...

A wonderfully cogent and concise piece.
I might add that the late 19th/early 20th century philosopher Charles S. Peirce proposed the notion of "abduction". It seems very much like what you are speaking of.
Lovely work.

I'm not sure the default network is the best example here as, in fact, it can be, and is, accurately described with the kind of distinct spots of a typical fMRI image.

Indeed, the first study to demonstrate the default network just collated distinct spots of deactivation across several studies. Subsequent 'default network' experiments have used nothing but established neuroimaging paradigms.

The innovation here was not a new use of technology but simply using it to ask a novel question. Raichle covers this well in a recent review article.
