I’ve got a review of Stanislas Dehaene’s new book, Reading in the Brain, over at the Barnes and Noble Review:

Right now, your mind is performing an astonishing feat. Photons are bouncing off these black squiggles and lines — the letters in this sentence — and colliding with a thin wall of flesh at the back of your eyeball. The photons contain just enough energy to activate sensory neurons, each of which is responsible for a particular plot of visual space on the page. The end result is that, as you stare at the letters, they become more than mere marks on a page. You’ve begun to read.

Seeing the letters, of course, is just the start of the reading process. As the neuroscientist Stanislas Dehaene reveals in his fascinating new book, Reading in the Brain, the real wonder is what happens next. Although our eyes are focused on the letters, we quickly learn to ignore them. Instead, we perceive whole words, chunks of meaning. (The irregularities of English require such flexibility. As George Bernard Shaw once pointed out, the word “fish” could also be spelled ghoti, assuming that we used the gh from “enough,” the o from “women,” and the ti from “lotion.”) In fact, once we become proficient at reading, the precise shape of the letters — not to mention the arbitrariness of the spelling — doesn’t even matter, which is why we read word, WORD, and WoRd the same way.

In this clearly written summary of the field, Dehaene is primarily interested in two separate mysteries. The first mystery is how the individual human brain learns to read. What changes take place inside our head between kindergarten and second grade, when most of us start to take literacy for granted? How do we go from sounding out syllables, carefully parsing the phonetics of each word, to becoming fluent readers? And how does this incredibly complicated act become automatic, so that evn ths sntnce cn b quikly undrstd?

Dehaene begins by introducing the reader to the “letterbox area,” a small bit of brain just behind the left ear. The crucial role of this cortical region was first revealed by Mr. C, a 19th-century neurological patient who, after a mild stroke, lost the ability to read. What made Mr. C’s case so peculiar is that his vision was perfectly fine; he could make sense of objects and faces and even numbers. However, when he opened up a book or glanced at a newspaper, the letters on the page were utterly inscrutable, a mess of inchoate lines and curves. “He [Mr. C] thinks that he has lost his mind,” his doctor dryly noted.

Subsequent studies of patients with pure alexia — they can see everything but written language — have located the specific contours of the letterbox area. Not surprisingly, it takes up a significant chunk of our visual cortex, as the invention of the alphabet seems to have usurped brain cells previously devoted to object recognition. (Dehaene refers to this process as “neuronal recycling.”) He also speculates that, while “learning to read induces massive cognitive gains,” it also comes with a hidden mental cost: because so much of our visual cortex is now devoted to literacy, we’re less able to “read” the details of the natural world.

But reading isn’t just about seeing — we still have to imbue those syllabic sounds with meaning. This is why, once the letterbox area deciphers the word — this takes less than 150 milliseconds — the information is immediately sent to other brain areas, which help us interpret the semantic content. Such a complex act requires a variety of brain areas scattered across both hemispheres, all of which must work together to make sense of a sentence. If any of these particular areas are damaged, people tend to lose specific elements of language, such as the ability to conjugate verbs or decipher metaphors.

One of the most intriguing findings of this new science of reading is that the literate brain actually has two distinct pathways for reading. One pathway is direct and efficient, and accounts for the vast majority of reading comprehension — we see a group of letters, convert those letters into a word, and then directly grasp the word’s meaning. However, there’s also a second pathway, which we use whenever we encounter a rare or obscure word that isn’t in our mental dictionary. In that case, we’re forced to decipher the sound of the word before we can make a guess at its definition, which requires a second or two of conscious effort.

The second major mystery explored by Dehaene is how reading came to exist. It’s a mystery that’s only deepened by the recency of literacy: the first alphabets were invented less than 4,000 years ago, appearing near the Sinai Peninsula. (Egyptian hieroglyphic characters were used to represent a Semitic language.) This means that our brain wasn’t “designed” for reading; we haven’t had time to evolve a purpose-built set of circuits for letters and words. As Dehaene eloquently notes, “Our cortex did not specifically evolve for writing. Rather, writing evolved to fit the cortex.”

Dehaene goes on to provide a wealth of evidence showing this cultural evolution in action, as written language tweaked itself until it became ubiquitous. In fact, even the shape of letters — their odd graphic design — has been molded by the habits and constraints of our perceptual system. For instance, the neuroscientists Marc Changizi and Shinsuke Shimojo have demonstrated that the vast majority of characters in 115 different writing systems are composed of three distinct strokes, which likely reflect the sensory limitations of cells in the retina. (As Dehaene observes, “The world over, characters appear to have evolved an almost optimal combination that can easily be grasped by a single neuron.”) The moral is that our cultural forms reflect the biological form of the brain; the details of language are largely a biological accident.

Dehaene ends the book with a discussion of education — he’s a supporter of phonics and ridicules the whole-language method, “which does not fit with the architecture of our visual brain.” It’s an interesting chapter, and it’s always nice to see scientists grapple with the practical implications of their work, but the most compelling themes of the book remain rooted in basic science. As Dehaene and others have demonstrated, the brain is much more than the seat of the soul — it’s also the fleshy source of our culture. By studying the wet stuff inside our head, we can begin to understand why this sentence has this structure, and why this letter, this one right here, has its shape.

Comments

Thanks for the review. Dehaene’s title reads “brain,” but the opening of your review reads “mind.” Do you distinguish between the mind and the brain (as Bucky Fuller did)? If so, what’s the difference; if not, why confuse the issue by using the word “mind”?

Alan Turing (see the “computational theory of mind”) held that the mind is not equal to the brain and the interaction of its parts; rather, the mind is equivalent to the information processing, the computations, supported by the brain. This is a less sharp division than the Cartesian distinction between the mind and the brain.

Intriguing review, Jonah, on what looks like a long-overdue book. I’m curious: were you convinced by his argument about phonics versus whole language? The fact that printed language is perceptually geared for a certain brain architecture does not sound, on the face of it, like a compelling reason to rule out so-called whole-language approaches. Considering the two-path architecture along with the multiple areas required for semantic processing that you summarize earlier in the review, wouldn’t the best pedagogy demand some cross-training?

Jonah
You still have not answered my emails about whether we need a new word for reading on screens vs. reading on paper, in order to better study and differentiate the two reading modes. What do you think now? I wrote to Dr. D in Paris and he told me yes, we need or could benefit from a new term for screen-reading. If Marshall McLuhan were around today, he would say the same thing. What word would YOU suggest, Jonah? Even Matthew Battles agrees with me. Why your refusal to reply to my emails? Professional arrogance?

Editors reviewed your entry and have decided to publish it on
urbandictionary.com.

snailpaper

(n.) – the print edition of the daily newspaper which arrives in the
morning on your doorstep with news that is already old and stale by at
least 12 hours

“I am sick and tired of reading the snailpaper edition of the New York
Times! By the time it arrives, the front page is already old news. I
much prefer reading the Times website, with the news updated as
it happens.”

— overheard at a bar in the East Village in Manhattan, November 15, 2009

”By studying the wet stuff inside our head, we can begin to understand why this sentence has this structure, and why this letter, this one right here, has its shape.”

Jonah, this is KEY: by studying MRI scans of the brain while reading on paper vs. reading on a screen, we will see that different parts of the brain light up for processing, retention, analysis, and critical thinking, and screen reading will be seen to be inferior. However, this will not stop the digital revolution, which is money-funded…. SIGH

If proficient readers ignore letters, then why would Dehaene “ridicule” whole-language theory, which basically argues that the reading process resembles language more than it does decoding letters? Ken Goodman, one of the founders of whole language, recently wrote an interesting article about reading and the brain with Steve Strauss, a neurologist, and Eric Paulson, a reading researcher: “Brain research and reading: How emerging concepts in neuroscience support a meaning construction view of the reading process.” In light of this book, I think you would enjoy reading a different perspective on what neuroscience confirms about the reading process. It sounds like Dehaene doesn’t give a fair account of whole-language theory.

Daniel Ferguson in Japan, I like your ideas and thoughts here. You know a lot about all this, and your comments are always well put. I emailed you once in Japan; I am a former Tokyoite of five years. Do email me at danbloom in the gmail dept and let’s chat. You know much more about this than I do. Jonah is never going to reply to my emails, although Matthew Battles and Maryanne Wolf do. I don’t know why Jonah is so shy. Jonah?

I agree with Daniel that recent trends in neuroscience support a meaning construction view of reading more than Dehaene admits.

Whole reading is linked in particular to Jeff Hawkins’ book “On Intelligence”, and Hawkins’ influential concept of the brain as a memory-prediction machine. The idea that much of the brain’s drive is internal, and that much of perception is shaped by prior experience and prediction rather than bottom-up sensory input, has been convincingly argued by Marcus Raichle in ‘Science’ (‘The brain’s dark energy’, 2006), in his experiments on the ‘default network’ of brain areas active during rest and hypothesized to be involved in day-dreaming, memory-prediction, and the updating of mental models. Raichle’s paper cites both Hawkins and William James in support of the idea, in language that sounds a lot like Jonah Lehrer. Raichle, Randy Buckner, and Jeffrey Zacks at Washington University are all involved in this line of research on memory-prediction and meaning-making in the brain. In particular, Zacks’ recent work on narrative processing and event segmentation (‘Nature Neuroscience’ 2001, ‘Psychological Science’ 2007) addresses the brain processes behind reading directly, in a way that seems to support whole reading much more than it refutes it. Zacks’ fMRI work with Nicole Speer has shown that readers’ brains simulate the sensory-motor processes that would be stimulated by actual experience, or re-activated during memory recall.

Jonah, I’d be interested to hear your thoughts on this work if you’re familiar with it. It seems from your arguments in “How We Decide”, and your frequent allusions to William James, that you would agree with the concept of the brain as a meaning-making machine, and the idea that narrative-making in reading is informed by a reader’s prior experience and expectations, rather than a pure sensory process of symbol decoding.

I have not read Dr. Dehaene’s book, so I base these comments on this review. However, the problem I see immediately with Dehaene’s analysis is that he implicitly defines “reading” as word recognition instead of reading as a function of language (whole language). Words are only one unit of meaning, and they rarely stand alone to carry meaning. Language is made up of several sub-systems: phonology (sounds), morphology (parts of words), syntax (word order and phrases), and grammar (the way language is shaped and changed to serve certain functions such as defining tense, person, possession, etc.). All languages have these features. Written language (orthography) records language in symbols in a way that captures the regularities of any given language.

When we read, we interpret written symbols into language in the head, not just words. We can conceivably understand every word we can decode and still not grasp the meaning of the language. One of the first things that children who are learning to read and write must understand is the symbolic and representational nature of written language, whose purpose is conveying a message (communication) between writer and reader. Until they grasp this notion, all of the phonics instruction in the world won’t turn them into readers and writers.

If Dr. Dehaene had studied the nature of (whole) language processing in the brain neurolinguistically rather than just focusing on the “recognition” of isolated words, syllables, and letters, his work might have made a contribution to our understanding of reading. He further discredits his own work by extending his overly simplified theories into the debate over approaches to reading instruction. In many countries around the world, unlike in the United States, literacy educators agree that reading and writing are all about communicating through language, and that meaning resides in the language and not in the results of deciphering the written code.
Phonics as an instructional tool in teaching children to read and write is a means to an end, and not an end in itself.

Jill, you have not read the book, so you are not qualified to comment. Dehaene explains with solid scientific research how our brains ‘read’ and of course does not suggest that decoding is the end of reading. Once our brains have deciphered the black marks on the page, other areas of the brain ‘light up’ to determine the meaning. I am currently reading his book, and I also attended a lecture he gave in Melbourne last month.
I am a Prep teacher and have achieved stunning results with my students after abandoning the whole-language pedagogy. My students can decode the words on the page; sounding out is now only used for unfamiliar words. The words they have read regularly are pronounced automatically. AND they are far better at comprehension! Funnily enough, being able to read the actual words on the page helps them to understand the author’s intent! This has been achieved through systematic, explicit instruction in the alphabetic principle along with phonological awareness, vocabulary, fluency, and comprehension strategies. My students LOVE reading because they are successful at it. And English is NOT their first language! They all reached the reading benchmark this year, and more than half finished more than 12 months ahead on the reading of levelled texts. An unprecedented result! Just go and read Dehaene’s book.

I have been a big fan of Dehaene’s work on math and the brain. However, the reading piece leaves me wondering if he has made the same mistake that other reading researchers, such as the Shaywitzes, have made: phonics isn’t phonemic awareness. One needs the ability to discern the phonemes in words before phonics makes sense.

Phonics has only a minimal impact on reading ability compared to the dramatic and permanent changes phonemic instruction has on reading. See http://www.SoundReading.com for the science.

Letters don’t flow through our brains, phonemes do. Very little visual processing is done while reading. Phonemes are involved in all the language tasks involved in reading.
The reason we can read is that reading hijacks the auditory processing centers of the brain.

In some recent work I did with Walter Freeman at his lab in Berkeley, citing the seminal work of Raichle, we established that during the synchronized gamma typical of meditation and consciousness, the analytic power of the brain may momentarily approach zero.

I’ve just published what I believe is a breakthrough paper on meditation and consciousness (formal abstract and citation below). It is the first to interrelate the work on synchronized gamma in consciousness with the well-attested work on gamma in meditation in an experimental context. It adduces experimental and simulated data to show that what both have in common is the ability to put the brain into a state in which it is maximally sensitive and consumes zero power, briefly. It is argued that this may correspond to a “selfless” state, and that the more typical non-zero state, in which gamma is not so prominent, corresponds to a state of empirical self. Thus, the “zero power” in the title refers not only to the power spectrum of the brain as measured by the Hilbert transform, but also to a psychological state of personal renunciation.
While the general perspective is compatible with panpsychism, a more practical consequence is that the beneficial health effects of meditation may partly be due to the fact that the brain’s “dark energy” consumption normally absorbs about 18% of the body’s metabolic production. During these moments of “zero power” this energy is freed up for repair and healing. In fact, it may also lead to differential gene expression.

The paper is:

O Nuallain, Sean (2009). “Zero Power and Selflessness: What Meditation and Conscious Perception Have in Common.” Cognitive Sciences 4(2).

This paper attempts to reconstrue the art and science of meditation in the context of an overall theory of cognition, and with reference to evidence from simulated and real data analysed in a neurodynamical framework. First, we discuss the phenomenology of meditation and its relation to the known evidence. It is argued that meditation is on a continuum with the types of conscious mental activity characterized by synchronized gamma. Specifically, it is suggested that gamma synchrony in meditation allows the normally prominent background noise of the brain momentarily to subside. Secondly, a set of experiments using both simulated and real data and interpreted in a neurodynamical context that bear on the issue of meditation is described. Thirdly, the theoretical and experimental frameworks are brought together into an overall perspective that impacts on cognition as on applied experientialism. Most of the material alludes to books and other refereed published material by the author.

Kids arrive in kindergarten with a spoken vocabulary of about 25,000 words. You can’t speak without using phonemes. Giving something a fancy name doesn’t make it suddenly inaccessible to 5-year-olds. They can say “Hi” without knowing that it contains two phonemes.

The best beginning phonics program cuts through all the hype and gets right down to teaching kids to sound out words and read stories as naturally as they learn to speak. See http://www.BrodenBooks.com