The MindSee project aims to develop an information-seeking application that exemplifies the fruitful symbiosis of modern brain-computer interface (BCI) technology with real-world human-computer interaction. The result will be a cutting-edge information retrieval system that outperforms state-of-the-art tools, more than doubling the performance of information seeking in realistic tasks.

As your eyes scan these words, your brain seems to derive their meaning instantaneously. How are we able to recognize and interpret marks on a page so rapidly? A small new study confirms that a specialized brain area recognizes printed words as pictures rather than by their meaning.

Researchers led by neuroscientist Maximilian Riesenhuber of Georgetown University Medical Center scanned the brains of 12 subjects with functional MRI. They focused on a tiny area of the brain known to be involved in recognizing words, the visual word form area (VWFA), found on the surface of the brain, behind the left ear. The VWFA's right-hemisphere analogue is the fusiform face area, which allows us to recognize faces. In young children and people who are illiterate, the VWFA region and the fusiform face area both respond to faces. As people learn to read, the VWFA region is co-opted for word recognition.

The researchers presented the subjects with a series of real words and made-up words. The nonsense words elicited responses from a wide pool of neurons in the VWFA, whereas distinct subsets of neurons responded to real words. After subjects were trained to recognize the nonsense words, however, neurons responded to them as they did to real words, according to the paper published in March in the Journal of Neuroscience. Because the nonsense words had no meaning, Riesenhuber deduced that our neurons must respond to words' orthography—how they look—rather than their meaning.

As we become more proficient at reading, then, we build up a visual dictionary in the VWFA—much as we accumulate a catalogue of familiar faces on the opposite side of our brain.

We “hear” written words in our head

Sound may have been the original vehicle for language, but writing allows us to create and understand words without it. Yet new research shows that sound remains a critical element of reading.

When people listen to speech, neural activity is correlated with each word's “sound envelope”—the fluctuation of the audio signal over time corresponds to the fluctuation of neural activity over time. In the new study, Lorenzo Magrassi, a neurosurgeon at the University of Pavia in Italy, and his colleagues made electrocorticographic (ECoG) recordings from 16 individuals. The researchers measured neural activity directly from the surface of the language-generating structure known as Broca's area as subjects read text silently or aloud. (This measurement was made possible by the fact that participants were undergoing brain surgery while awake.)
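
The envelope-tracking analysis described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' pipeline: it extracts a crude amplitude envelope (rectification plus moving-average smoothing) from a synthetic audio signal and measures its Pearson correlation with a simulated neural trace that follows the same slow modulation.

```python
import numpy as np

def amplitude_envelope(signal, fs, win_ms=50):
    """Crude amplitude envelope: rectify, then smooth with a moving average."""
    rectified = np.abs(signal)
    win = max(1, int(fs * win_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def envelope_correlation(audio, neural, fs):
    """Pearson correlation between the audio envelope and a neural trace."""
    env = amplitude_envelope(audio, fs)
    return np.corrcoef(env, neural)[0, 1]

# Toy demo: a 100 Hz tone amplitude-modulated at 2 Hz stands in for speech;
# the "neural" trace tracks the same slow modulation plus a little noise,
# so the envelope correlation should be strong.
fs = 1000
t = np.arange(0, 2, 1 / fs)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))   # slow envelope
audio = modulation * np.sin(2 * np.pi * 100 * t)     # fast carrier
neural = modulation + 0.05 * np.random.default_rng(0).normal(size=t.size)

r = envelope_correlation(audio, neural, fs)
```

Real analyses typically use band-limited Hilbert envelopes and control for autocorrelation before assessing significance; this sketch only shows the core idea of correlating two slow time courses.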

Their neural activity was correlated with the sound envelope of the text they read, which was generated well before they spoke and even when they were not planning to speak, according to the report published in February in the Proceedings of the National Academy of Sciences USA. In other words, Broca's area responded to silent reading in much the same way auditory neurons respond to text spoken aloud, as if Broca's area were generating the sound of the words so the readers heard them internally. The finding speaks to a debate about whether words are encoded in the brain by a neural pattern symbolic of their meaning or via simpler attributes, such as how they sound. The results add to mounting evidence that words are fundamentally processed and catalogued by their basic sounds and shapes.