Breaking news from the frontiers of neuroscience.

Having trouble remembering where you left your keys? You can improve with a little practice, says a new study.

"I've forgotten more than you'll ever...wait, what was I saying?"

It’s an idea that had never occurred to me before, but one that seems weirdly obvious once you think about it: people who train their brains to recall the locations of objects for a few minutes each day show greatly improved ability to remember where they’ve left things.

No matter what age you are, you’ve probably had your share of “Alzheimer’s moments,” when you’ve walked into a room only to forget why you’re there, or set something down and immediately forgotten where you put it. Attention is a limited resource, and when you’re multitasking, there’s not always enough of it to go around.

For people in the early stages of Alzheimer’s disease, though, these little moments of forgetfulness can add up to a frustrating inability to complete even simple tasks from start to finish. This stage is known as mild cognitive impairment (MCI), and its symptoms can range from amnesia to problems with counting and logical reasoning.

That’s because all these tasks depend on memory – even if it’s just the working memory that holds our sense of the present moment together – and most of our memories are dependent on a brain structure called the hippocampus, which is one of the major areas attacked by Alzheimer’s.

What exactly the hippocampus does is still a hotly debated question, but it seems to help sync up neural activity when new memories are “written down” in the brain, as well as when they’re recalled (a process that rewrites the memory anew each time). So it makes sense that the more we associate a particular memory with other memories – and with strong emotions – the more easily even a damaged hippocampus will be able to help retrieve it.

But now, a team led by Benjamin Hampstead at the Emory University School of Medicine has made a significant breakthrough in rehabilitating people with impaired memories, the journal Hippocampus reports: the researchers have demonstrated that patients suffering from MCI can learn to remember better with practice.

The team took a group of volunteers with MCI and taught them a three-step memory-training strategy: 1) the subjects focused their attention on a visual feature of the room that was near the object they wanted to remember, 2) they memorized a short explanation for why the object was there, and 3) they imagined a mental picture that contained all that information.

Not only did the patients’ memory measurably improve after a few training sessions – fMRI scans also showed that the training physically changed their brains:

Before training, MCI patients showed reduced hippocampal activity during both encoding and retrieval, relative to HEC. Following training, the MCI MS group demonstrated increased activity during both encoding and retrieval. There were significant differences between the MCI MS and MCI XP groups during retrieval, especially within the right hippocampus.

In other words, the hippocampus in these patients became much more active during memory storage and retrieval than it had been before the training.

Now, it’s important to point out that that finding doesn’t necessarily imply improvement – studies have shown that decreased neural activity is often more strongly correlated with mastery of a task than increased activity is – but it does show that these people’s brains were learning to work differently as their memories improved.

So next time you experience a memory slipup, think of it as an opportunity to learn something new. You’d be surprised what you can train your brain to do with a bit of practice.

A gene that may underlie the molecular mechanisms of memory has been identified, says a new study.

Some of us feel that "yellow" and "red" are open to interpretation...

The gene’s called neuronal PAS domain protein 4 (Npas4 to its friends). When a brain has a new experience, Npas4 leaps into action, activating a whole series of other genes that modify the strength of synapses – the connections that allow neurons to pass electrochemical signals around.

You can think of synapses as being a bit like traffic lights: a very strong synapse is like a green light, allowing lots of traffic (i.e., signals) to pass down a particular neural path when the neuron fires. A weaker synapse is like a yellow light – some signals might slip through now and then, but most won’t make it. Some synapses can inhibit others, acting like red lights – stopping any signals from getting through. And if a particular synapse goes untraveled for long enough, the road starts to crumble away – until finally, there’s no synapse left.

There’s a saying in neuroscience: “Cells that fire together wire together.” (And vice versa.) In other words, synaptic plasticity – the ability of neurons to modify their connectivity patterns – is what allows neural networks to physically change as they take in new information. It’s what gives our brains the ability to learn.
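That saying describes the Hebbian learning rule, which is simple enough to sketch in a few lines of code. Here’s a toy illustration (all numbers invented; this is nothing like a real biophysical simulation): a synapse strengthens when its two neurons fire together, and slowly decays when it goes untraveled.

```python
# Toy Hebbian plasticity: "cells that fire together wire together."
# A weight strengthens when pre- and post-synaptic neurons are co-active,
# and slowly decays toward zero when the synapse goes unused.

def hebbian_step(w, pre, post, lr=0.1, decay=0.01):
    """One plasticity update for a single synapse.

    w:    current synaptic weight (how "green" the traffic light is)
    pre:  1 if the presynaptic neuron fired, else 0
    post: 1 if the postsynaptic neuron fired, else 0
    """
    w += lr * pre * post   # co-activation strengthens the synapse
    w -= decay * w         # disuse lets the "road" crumble away
    return w

w = 0.5
for _ in range(20):        # the two neurons repeatedly fire together
    w = hebbian_step(w, pre=1, post=1)
print(round(w, 3))         # the weight has grown

for _ in range(200):       # then the path goes untraveled for a long time
    w = hebbian_step(w, pre=0, post=0)
print(round(w, 3))         # the weight has decayed back toward zero
```

Real synapses involve neurotransmitter chemistry, timing-dependent rules, and inhibitory circuits, but the core idea – connection strength tracking co-activation – really is this simple.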

In fact, millions of neurons are delicately tinkering with their connectivity patterns right now, inside your head, as you learn this stuff. Pretty cool, huh?

Anyway, synaptic plasticity’s not exactly breaking news – scientists have been studying it in animals like squid and sea slugs since the 1970s. Neurons in those animals are pretty easy to study with electrodes and a microscope, because a) the animals are anatomically simple compared to humans, and b) some of their neurons are so huge they can be seen with the naked eye.

Studying synapses in humans isn’t quite so simple, though. For one thing, most people wouldn’t like it if you cut open their brain and started poking around while they were alive and conscious – and besides, a lot of the really interesting stuff happens down at the molecular level.

That brings up an important point: though you normally hear about genes in connection with traits – say, a “gene for baldness” and so on – these complex molecular strands actually play all sorts of roles in the body, from building cells to adjusting chemical levels to telling other genes what to do.

That’s why MIT’s Yingxi Lin and her team set out to study the functions of certain genes found in the hippocampus – a brain structure central to memory formation – the journal Science reports. The researchers taught a group of mice to avoid a little room in which they received a mild electric shock, then used a precise chemical tracking technique to isolate which genes in the mouse hippocampus were activated right when the mice learned which room to avoid.

In particular, they focused on a hippocampal region with the sci-fi-sounding name of Cornu Ammonis 3 – or CA3 for short:

We found that the activity-dependent transcription factor Npas4 regulates a transcriptional program in CA3 that is required for contextual memory formation. Npas4 was specifically expressed in CA3 after contextual learning.

By “transcriptional program,” the paper’s authors mean a series of genetic “switches” – genes that Npas4 activates – which in turn make chemical adjustments that strengthen or weaken synaptic connections. In short, Npas4 appears to be part of the master “traffic conductor program” for many of the brain’s synapses.

Though they were pretty excited by this discovery (who wouldn’t be?), the researchers took a deep breath, calmed down, and double-checked their results by testing memory formation in mice whose brains were unable to produce Npas4:

Global knockout or selective deletion of Npas4 in CA3 both resulted in impaired contextual memory, and restoration of Npas4 in CA3 was sufficient to reverse the deficit in global knockout mice.

In short, they make a pretty convincing argument that Npas4 is a necessary ingredient in a mouse’s ability – and probably our ability – to form certain types of new memories.

Exactly how that program relates to our experience of memory remains unclear, but it’s a promising starting point for fine-tuning future memory research. I don’t know about you, but I’d be thrilled to green-light such a project.

Our capacity for short-term memory depends on the synchronization of two types of brainwaves – rapid cycles of electrical activation – says a new study.

Theta and gamma waves try to get their dance steps synced up.

When the patterns of theta waves (4-7 Hz) and gamma waves (25-50 Hz) are closely synchronized, pieces of verbal information seem to be “written” into our short-term memory. But it also turns out that longer theta cycles help us remember more bits of information, while longer gamma cycles are correlated with lower recall.

These patterns are measured using electroencephalography (EEG), a lab technique with a long and successful history. Back in the 1950s, it helped scientists unravel the distinct brainwave patterns associated with REM (rapid-eye movement) and deep sleep. More recently, it’s been used to help people with disabilities control computers, and it’s even helped home users get an up-close look at their own brain activity.

Though more modern techniques like fMRI and DTI are much better at mapping tiny activity patterns deep within the brain, EEG remains a useful tool for measuring the overall patterns of synchronized electrical activity that sweep across the entire brain in various wave-like patterns – hence the term “brainwaves.”

Several types of brainwaves have been well studied since the 1950s: alpha waves, which are associated with relaxed wakefulness; beta waves, which are associated with active concentration and logical processing; delta waves, which dominate deep, dreamless sleep; theta waves, which are associated with drowsiness and meditation; and gamma waves, which burst rapidly across the brain when we come to a realization or an understanding.
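The way researchers separate a raw trace into these bands can be sketched with a toy example. The code below is purely illustrative – a synthetic signal and a hand-rolled, one-frequency-at-a-time Fourier transform; real EEG pipelines use proper filtering and artifact rejection:

```python
import math

# A fake 4-second "EEG" trace: a strong 6 Hz theta rhythm
# plus a weaker 40 Hz gamma component.
fs = 250                        # sampling rate in Hz, typical for EEG
n = fs * 4
signal = [math.sin(2 * math.pi * 6 * k / fs)
          + 0.5 * math.sin(2 * math.pi * 40 * k / fs)
          for k in range(n)]

def power_at(freq):
    """Spectral power at one frequency: correlate with sine and cosine."""
    c = sum(x * math.cos(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    return c * c + s * s

def band_power(lo, hi):
    """Total power over whole-Hz frequencies in [lo, hi]."""
    return sum(power_at(f) for f in range(lo, hi + 1))

bands = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 12),
         "beta": (13, 24), "gamma": (25, 50)}
powers = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}
print(max(powers, key=powers.get))   # prints "theta" – it dominates this trace
```

Because the full-amplitude 6 Hz component falls in the theta range, the theta band comes out on top, with the 40 Hz component showing up as gamma power – the same kind of decomposition that lets researchers compare theta and gamma activity in a real recording.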

And now, as the International Journal of Psychophysiology reports, a team led by Jan Kamiński at the Polish Academy of Sciences has discovered a new way of mapping relationships between these patterns of wave activity, arriving at a new understanding of how theta and gamma waves work together: they studied the lengths of these two cycles relative to one another – and what they found was pretty amazing:

We have observed that the longer the theta cycles, the more information ‘bites’ the subject was able to remember; the longer the gamma cycle, the less the subject remembered.

The researchers discovered this in a very straightforward way – they simply kept tabs on volunteers’ EEG activity as they sat with eyes closed and let their minds wander; then they compared these recordings against ones taken as the volunteers memorized longer and longer strings of numbers – from three digits up to nine.

The correlation between long theta cycles and greater memory for digits turned out to be quite strong – and for gamma waves, the reverse turned out to be true. This suggests that gamma waves may be more crucial for forming ideas than for rote memorization.
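The statistic behind a finding like this is a plain Pearson correlation between cycle length and digits recalled. Here’s the computation on entirely made-up numbers that mimic the reported pattern – the real study’s data, of course, looked nothing like this clean:

```python
# Pearson correlation between cycle length and digit span,
# on hypothetical toy numbers (not from the actual study).

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

digits_recalled = [4, 5, 5, 6, 7, 8]              # hypothetical digit spans
theta_cycle_ms = [140, 150, 160, 170, 180, 190]   # longer theta, better recall
gamma_cycle_ms = [32, 30, 28, 26, 24, 22]         # longer gamma, worse recall

print(pearson(theta_cycle_ms, digits_recalled))   # strongly positive
print(pearson(gamma_cycle_ms, digits_recalled))   # strongly negative
```

A correlation near +1 for theta and near −1 for gamma is the idealized version of the pattern the team reported; actual EEG correlations are far noisier.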

Though this finding might not seem all that revolutionary, it provides an elegant demonstration of how even older technologies like EEG can still be used to help us make brand-new discoveries. Which means that in the brains of those of us who keep pluggin’ away at home EEG experiments, there’s probably still a place of honor for those wonderful little gamma waves.

New technology may soon enable us to download knowledge directly into our brains, says a new study.

"OK so there's people and planes and OH GOD what the hell am I learning??"

By decoding activation patterns from fMRI scans and then reproducing them as direct input to a precise area of the brain, the new system may be able to “teach” neural networks by example – priming them to fire in a certain way until they learn to do it on their own.

This has led everyone from io9 to the National Science Foundation to make Matrix references – and it’s hard to blame them. After all, immersive virtual reality isn’t too hard to imagine – but learning kung-fu via download, like grabbing an mp3? Sounds like pure sci-fi – especially since we know that the way muscle pathways form memories is pretty different from how we remember facts and images.

The basic idea is this: when you learn to perform a physical action – say, riding a bike or shooting a basketball – your muscles learn to coordinate through repetition. This is called procedural memory, because your muscles (and parts of your brain) are learning by repeating a procedure – in other words, a sequence of electrochemical actions – and (hopefully) improving the precision of that procedure with each run-through.

In contrast to this, we have declarative memories – memories of, say, the color of your favorite shirt, or where you had lunch today. Though declarative memories can certainly improve with practice – think of the last time you studied for an exam – there’s typically not an “awkward” stage as your brain struggles to learn how to recreate these memories. In short, once a bit of information is “downloaded” into your conscious awareness, it’s pretty much instantly available (until you forget it, of course).

Now, I could give some examples that blur the lines between these two types of memory – reciting a long list of words, for instance, seems to involve both procedural and declarative memory – but my point here is that procedural memories tend to require practice.

So it’s pretty surprising to read, in the journal Science, that a team led by Kazuhisa Shibata at Boston University’s Visual Science Laboratory may have found a way to bridge the gap between these two types of memory.

The team began by taking fMRI scans of the visual cortex as volunteers looked at particular visual images – objects rotated at various angles. Once the team had isolated a precise activation pattern corresponding to a particular angle of orientation, they turned around and directly induced that same activation pattern in the volunteers’ brains:

We induced activity patterns only in early visual cortex corresponding to an orientation without stimulus presentation or participants’ awareness of what was to be learned. The induced activation caused VPL specific to the orientation.

In other words, the researchers triggered brain activity patterns corresponding to specific angles of visual orientation without telling the volunteers what the stimulus was going to be.

Then, when the scientists asked the volunteers what they’d “learned,” the volunteers had no idea. But when the researchers asked them to pick a “best guess” orientation that seemed “right” to them, a significant percentage chose the orientation their brains had been trained to remember.

This isn’t exactly downloadable kung-fu – but it provides some of the first conclusive evidence that repeated visual stimuli aren’t the only thing that can sculpt brain networks: direct stimulation can also shape the way those networks learn about visual stimuli.

Could we use technology like this to teach people to feel less anxious or depressed? Or to understand concepts they have trouble grasping? And what might happen if we could harness this technology to our own mental output? What if we could literally dream our way to new skills, or more helpful beliefs?

We’re not quite there yet, but it may very well happen in our lifetime.

Yup, this is what we’re doing today. I finally got to see Deathly Hallows Part 2, and it got me thinking about neuroscience like frickin’ everything always does, and I came home and wrote an essay about the nature of consciousness in the Harry Potter universe.

And we’re going to talk about it, because it’s the holidays and can we please just pull it together and act like a normal family for the length of one blog post? Thank you. I really mean it. Besides, I guarantee you that this stuff is gonna bug you too once I’ve brought it up.

So in the movie, there’s this concept of Harry and Voldemort sharing minds and mental resources – each of them can occasionally see what the other one sees, and sometimes even remember what the other one remembers.

That idea is not explored to anywhere near a respectable modicum of its full extent.

First of all, are these guys the only two wizards in history who this has happened to? Yeah, I’m sure the mythology already has an answer for this – one that I will devote hours to researching just as soon as that grant money comes through. Ahem. Anyway, the odds are overwhelming that at least some other wizards have been joined in mental pairs previously – I mean, these are guys who can store their subjective memories in pools of water to be re-experienced at will; you can’t tell me nobody’s ever experimented; bathed in another person’s memories; tried to become someone else, or be two people at once. Someone, at some point, must’ve pulled it off. Probably more than one someone.

OK, so there’ve been a few pairs of wizards who shared each other’s minds. Cool. Well, if two works fine, why not three? Hell, why not twelve, or a thousand? With enough know-how and the right set of minds to work with, the wizarding world could whip us up a Magic Consciousness Singularity by next Tuesday.

But there’s the rub: Who all should be included in this great meeting of the minds? Can centaurs and house-elves join? What about, say, dragons, or deer, or birds? Where exactly is the cutoff, where the contents of one mind are no longer useful or comprehensible to another? As a matter of fact, given the – ah – not-infrequent occurrence of miscommunication in our own societies, I’d say it’s pretty remarkable that this kind of mental communion is even possible between two individuals of the same species.

Which brings us to an intriguing wrinkle in the endless debate about qualia – those mental qualities like the “redness” of red, or the “painfulness” of pain, which are only describable in terms of other subjective experiences. Up until now, of course, it’s been impossible to prove whether Harry’s qualia for, say, redness are exactly the same as Voldemort’s – or to explain just how the concept of “exactly the same” would even apply in this particular scenario. But now Harry can magically see through Voldemort’s eyes; feel Voldemort’s feelings – he can experience Voldemort’s qualia for himself.

Ah, but can he, really? I mean, wouldn’t Harry still be experiencing Voldemort’s qualia through his own qualia? Like I said, this is a pretty intriguing wrinkle.

The more fundamental question, though, is this: What does this all tell us about the concept of the Self in Wizard Metaphysics? (It’s capitalized because it’s magical.) Do Harry and Voldemort together constitute a single person? A single self? Is there a difference between those two concepts? Should there be?

I don’t ask these questions idly – in fact, here’s a much more pointed query: What do we rely on when we ask ourselves who we are? A: Memories, of course; and our thoughts and feelings about those memories. Now, if some of Harry’s thoughts and feelings and memories are of things he experienced while “in” Voldemort’s mind (whatever that means) then don’t some of Voldemort’s thoughts and feelings and memories comprise a portion of Harry’s? You can see where we run into problems.

Just one last question, and then I promise I’ll let this drop. When you read about Harry’s and Voldemort’s thoughts and feelings and memories, and you experience them for yourself, what does that say about what your Self is made of?

The size of certain brain regions correlates with the size of people’s social networks, says a new study – and intriguingly, some of these regions seem to correlate only with the size of people’s online social networks, not their real-world ones. It’s not clear yet, though, which factor is cause and which is effect – whether increased development in these regions enables people to develop larger online social networks, or vice versa. Even so, this is one of the first studies to directly link neuroanatomical data with online behavior.

As the journal Proceedings of the Royal Society B reports, a team led by Geraint Rees at University College London (UCL) performed MRI scans of the brains of 125 university students, and correlated this data with information on the size of these students’ Facebook friend groups:

The number of social contacts declared publicly on a major web-based social networking site was strongly associated with the structure of focal regions of the human brain. Specifically, we found that variation in the number of friends on Facebook strongly and significantly predicted grey matter volume in left MTG, right STS and right entorhinal cortex.

The exact links between these regions and online communication remain to be studied – but many of them have been correlated with social interaction in other studies.

The amygdala helps us process negative emotions like fear and sadness, both in ourselves and others – and people with larger amygdalas tend to have larger social networks overall, both online and otherwise.

Though no studies so far have correlated the size of the right STS with social network size, this structure is known to be involved in our ability to think of some objects as alive, as well as in helping us understand what others are looking at, and what emotions they’re expressing. A malfunctioning STS is thought to be a major factor in autism.

The exact role of the left MTG isn’t precisely understood, but many neuroscientists think this structure is involved in our ability to recognize familiar faces, and to process the semantic associations (i.e., meanings) of words.

The right entorhinal cortex (EC) works closely with the hippocampus to help us form and consolidate explicit/declarative memories – i.e., memories of specific facts and events, and specific associations between them (e.g., “kitties and bunnies are both mammals”). The EC is also one of the first areas attacked by Alzheimer’s. The researchers found that this region correlated especially strongly with online friend group size, but not particularly with real-world friend group size:

The right entorhinal cortex is implicated in associative memory formation for pairs of items including pairs of names and faces. Such memory capacity for name–face associates would constitute an important function for maintaining a large social network as observed in social network websites.

In short, the ability to mentally “tag” photos and posts with the correct associations is a central skill for maintaining digital friendships.

It’s easy to see how all the brain regions above could play central roles in a person’s ability to maintain a wide-ranging network of online friends. What’s especially interesting, though, is that all of them deal with features of social interaction that port from the real world to the online interaction space in straightforward ways – social hierarchies, facial expressions, repeatable facts, and so on – but that many other vital aspects of real-world social interaction – such as body language, and tone of voice – don’t appear to be nearly as crucial in an online social network. It’s enough to make you wonder what natural selection may have in store for our brains.

I don’t know about you, but I’m picturing a future where men compete in flame-wars for the right to woo attractive females. So while you, my competitors, are out hitting the bars and clubs this weekend, I’ll be – ah – honing my skills on 4chan and Reddit. Which is basically what I’d be doing this weekend anyway.

Each time we retell a story, our actual memories of its events change, says a new study.

"I don't remember being an alcoholic...but I guess if you guys say so..."

When we receive hints – true or not-so-true – about a story’s details from our friends, we often revise our version if what they say makes sense to us. But what’s incredible is that it isn’t just our retelling of the story that changes – fMRI scans show that our brains actually rewrite our memories, and we end up remembering the new version as “what really happened.”

To understand how this can work at a neurological level, a team led by Micah Edelson at Israel’s Weizmann Institute of Science gathered thirty adult volunteers, split them into groups of five, and showed those groups a short film about police arresting people. Three days later, the volunteers returned to the lab and answered a questionnaire that tested their memory of the film; and four days after that, they returned and lay in an fMRI scanner while answering more questions.

For the second questionnaire, though, the volunteers got to see “lifeline” answers, which were supposedly taken from correct responses by other participants. What the subjects didn’t know, though, was that these lifeline answers were actually incorrect answers to questions they themselves had answered correctly and confidently on the first questionnaire.

How often would you guess they revised their answers? As it turned out, a full 70 percent of the time.

But the researchers wanted to know if these changes reflected something deeper than a willingness to buckle to peer pressure – so they performed two more tests to check.

First, the researchers brought the subjects back into the lab a fourth time, told them the incorrect lifelines were a mix of truths and falsehoods that had been generated at random by a computer, and asked if the volunteers would like to change back to their original answers. As the subjects checked their false answers against their memories, 40 percent chose to remain with the incorrect answers to questions they’d originally answered correctly – even when they’d been sure of their correct answers to begin with.

And finally, by correlating the subjects’ answers with their fMRI data, the researchers noticed an intriguing phenomenon: as the volunteers were changing their answers from correct ones to incorrect ones, their brains showed a strong co-activation between the hippocampus – a structure known to be involved in the consolidation of long-term memories from short-term ones – and the amygdala – a structure crucial for processing strong negative emotions, such as fear, embarrassment, and sadness:

Enhanced activation in the bilateral amygdala and heightened functional connectivity with the anterior hippocampus were a signature of longterm memory change induced by the social environment. This indicates that the incorporation of external social information into memory may involve the amygdala’s intercedence, in accordance with its special position at the crossroads of social cognition and memory.

In short, it seems that as the amygdala processes stressful feelings of social pressure, it signals the hippocampus to revise our long-term memories to fit the socially agreed-upon version of events.

You don’t need me to tell you how huge the implications of this are. How many of your own long-term memories do you think coincide with what actually happened? How much do the stories you tell yourself and others about your past shape your and their actions in the present? In a court of law, would you stake your future on the testimony of an eyewitness?

It’s hard not to be reminded of Alan Moore’s oft-repeated comments to the effect that art – in his case, writing in particular – is magic: it reshapes people’s thoughts and memories, which reshape their perceptions of the past, present, and future – and those perceptions, in turn, reshape reality itself. It’s pretty amazing to think that you have such abilities.

But what’s really going to bake your noodle later on is, Would your brain still be doing all this if I hadn’t told you this story?

Connectomics

The human brain contains more than 80 billion neurons, making several hundred trillion interconnections. The better we understand these patterns of connectivity, the better we understand ourselves.
In short, neuroscience is awesome.
This is a blog about it.