Published by Stanford Medicine

Neuroscience

Welcome to Biomed Bites, a weekly feature that introduces readers to some of Stanford’s most innovative researchers.

Initially, Stephen Baccus, PhD, wanted to understand how computers work. It didn’t take him very long to discover that the snazziest computer around is the human brain. Now an associate professor of neurobiology, Baccus needed a simple way to study neural circuits. He picked the retina, a component that is relatively well understood.

As Baccus explains in the video above:

In choosing the retina, I wanted to choose a set of experiments we could do where we could control the brain very accurately in order to study it, and I found that the retina was one of the places that we could most accurately control what the input to the nervous system is doing.

It’s a simple enough part of the brain that we can really hope to understand how it works.

Although Baccus and his team are interested in the general principles of neural function that can be observed using the retina, they’re also eager to discover clinical applications of their research such as electronic retinal prostheses.

“From our basic studies on how the retina performs computations, this information can be and actually has been used in the design of prostheses that we believe can actually restore sight,” Baccus says.

Learn more about Stanford Medicine’s Biomedical Innovation Initiative and about other faculty leaders who are driving biomedical innovation here.

A few years ago, a team led by Stanford researcher Krishna Shenoy, PhD, published a paper that proposed a new theory for how neurons in the brain controlled the movement of muscles: Rather than sending out signals with parceled bits of information about the direction and size of movement, Shenoy’s team found that groups of neurons fired in rhythmic patterns to get muscles to act.

That research, done in 2012, was in animals. Now, Shenoy and Stanford neurosurgeon Jamie Henderson, MD, have followed up on that work to demonstrate that human neurons function in the same way, in what the researchers call a dynamical system. The work is described in a paper published in the scientific journal eLife today. In our news release on the study, the lead author, postdoctoral scholar Chethan Pandarinath, PhD, said of the work:

The earlier research with animals showed that many of the firing patterns that seem so confusing when we look at individual neurons become clear when we look at large groups of neurons together as a dynamical system.

The researchers implanted electrode arrays into the brains of two patients with amyotrophic lateral sclerosis (ALS), a neurodegenerative condition also known as Lou Gehrig’s disease. The new study provides further support for the initial findings and also lays the groundwork for advanced prosthetics like robotic arms that can be controlled by a person’s thoughts. The team now plans to develop computer algorithms that translate neural signals into electrical impulses for controlling prosthetic limbs.
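To see what "a dynamical system" means here, consider a toy model (entirely invented for illustration, not the team's actual analysis): a hidden two-dimensional state rotates rhythmically over time, and each simulated neuron reads out a mixture of that state. Individual traces look like confusing oscillations, but the underlying trajectory is a clean, lawful rotation.

```python
import numpy as np

# Hypothetical sketch of rotational population dynamics.
# All names and numbers are invented for illustration.
dt = 0.01
theta = 2 * np.pi * dt          # rotation per step: one full cycle per 100 steps
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # dynamics rule: x_{t+1} = A x_t

rng = np.random.default_rng(0)
C = rng.normal(size=(6, 2))     # readout: 6 "neurons" each mix the 2-D latent state

x = np.array([1.0, 0.0])
states, rates = [], []
for _ in range(100):
    states.append(x)
    rates.append(C @ x)         # each neuron's firing is a fixed blend of the state
    x = A @ x                   # the population state evolves by a simple rule

states = np.array(states)
rates = np.array(rates)

# Single "neurons" oscillate messily, but the latent trajectory is a pure
# rotation: its radius is conserved at every time step.
radii = np.linalg.norm(states, axis=1)
print(radii.min(), radii.max())   # both ~1.0
```

The point of the sketch is the shift in perspective the researchers describe: the structure lives in the joint state of the population, not in any single neuron's firing pattern.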

Have you seen the movie “Inside Out” yet? I went over the weekend with my family, and despite reports that some parents weep throughout the last 20 minutes, I only shed a few tears. (A real miracle given what a sap I normally am when it comes to Pixar films – don’t even get me started on the last scene of “Monsters, Inc.”)

The movie takes place inside the brain of an 11-year-old girl, Riley, with different characters playing the role of various emotions (joy, anger, sadness, etc.). I found the movie’s journey through the brain visually stunning and highly entertaining, but I admit to not thinking much about its accuracy – until yesterday, when I came across this post on the NeuroLogica Blog.

Neurologist Steven Novella, MD, writes that he loved the movie and would highly recommend it, but “as a metaphor for brain function, the movie was highly problematic.” He outlines the various ways in which accuracy was sacrificed for plot, or for the sake of simple storytelling, starting with the control panel used in the “command center” of Riley’s brain. “There does not appear to be any equivalent of a command center or control panel in our brains. There is no ‘seat of consciousness’ or ‘global workspace,'” he writes. “Rather, consciousness appears to be highly distributed, with each part of the brain contributing its little bit.”

The entire post is an entertaining and educational read, and I know I’ll keep it in the back of my mind – no pun intended – upon my next viewing of the movie. (Anyone with kids knows there’s no way I’m getting away with seeing a Pixar movie only once.)

If you find yourself forgetting information, you have only your synapses to blame. These connections between neurons are what hold on to memories. When they break, there in a flash goes the name of that new coworker.

That’s been the theory for some time now, but Mark Schnitzer, PhD, a professor of biology and applied physics, has now shown it to be true. He was able to watch connections form and break in a region of the brain called the hippocampus, where memories are stored for about 30 days in the mice his team worked with.

He and his collaborators found that the average synapse also lasts about 30 days in that region, suggesting that the synapse and the memory are related.

For a story I wrote about the work, Schnitzer told me, “Just because the community has had a longstanding idea, that doesn’t make it right.”

He said that his findings, which were published today in Nature, open up the field to investigating other aspects of memory including in stress or disease models.

I thought the idea was so intriguing I wrote a series of stories about what it would take to reverse engineer the brain, and how close we are to succeeding at each. We’re still a ways from computers that mimic our own agile noggins, but a number of people are making progress in everything from figuring out where the brain’s wiring goes to creating computers that can learn.

These are the steps Newsome outlined to take us from our own grey goo to electronics with human-like capacities:

Teach electronics to interact: Engineer Fei-Fei Li, PhD, has taught a computer to recognize images with almost human-like precision. This kind of ability will be needed by electronics of the future like self-driving cars or smarter robots.

As noted in the report, women are more at risk for dementia than men for two primary reasons: age and biology. Women’s longer lifespans leave them more vulnerable to the age-related condition. In addition, there are biological factors that make women more likely to suffer from dementia.

Women are also more likely to be the caregivers to those with the disease. Women care not only for family members — they’re often also employed in low-paid caregiving professions. This is particularly true in lower income countries, where as many as 62 percent of people with dementia live, according to the report.

The burden of dementia strains family structures and community dynamics in these disadvantaged nations. In the report, Faraneh Farin, who is involved with the Iran Alzheimer Association, describes the situation in countries like Iran:

Nowadays, more women are working to support their families but should they need to care for a family member, then it is expected that they quit their jobs resulting in their marginalization. It seems that either way, whether a woman has dementia or she cares for a loved one, she is trapped in the cycle which has been constructed by the society. Dementia is an issue that engages a woman’s entire life.

The global costs of dementia amount to more than $600 billion, yet many sufferers, caregivers and programs lack adequate funds. The report calls for additional resources for female dementia victims and caregivers, and it highlights the need for additional research on dementia’s effects, especially in countries with lower incomes. These countries also need to develop national strategies that consider the needs of women, the report states.

Alzheimer’s Disease International aims to elevate the awareness of dementia’s impact on women globally and to spur national efforts to improve care. As Executive Director Mark Wortmann wrote in the Foreword: “I hope the report will find its way onto the desks of policy makers to help improve the quality of life for women living with dementia, as well as the millions of women all around the world who provide care and support for them.”

Alex Giacomini is an English literature major at UC Berkeley and a writing and social media intern in the medical school’s Office of Communication and Public Affairs.

Neuroscience has come a long way since the days of phrenology, when lumps on the outside of the skull were believed to reveal the enhanced size and strength of the brain regions responsible for particular mental functions. Today’s far more advanced neuroimaging technologies allow scientists to peer deep into the living brain, revealing not only its anatomical structures and the tracts connecting them but, in recent years, physiological descriptions of the brain at work.

Visualized this way, the brain appears to contain numerous “functional networks”: clusters of remote brain regions that are connected directly via white-matter tracts or indirectly through connections with mediating regions. These networks’ tightly coupled brain regions not only are wired together, but fire together. Their pulses, purrs and pauses, so to speak, are closely coordinated in phase and frequency.

Well over a dozen functional networks, responsible for brain operations such as memory, language processing, vision and emotion, have been identified via a technique called resting-state functional magnetic resonance imaging. In a resting-state fMRI scan, the individual is asked to simply lie still, eyes closed, for several minutes and relax. These scans indicate that even at rest, the brain’s functional networks continue to hum along — albeit at lower volumes — at distinguishable frequencies and phases, like so many different radio stations playing simultaneously on the same radio.
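In practice, "firing together" is often quantified as the correlation between two regions' activity time series. The following is a minimal, invented sketch of that idea (the region signals and numbers are made up, and real resting-state analyses involve far more preprocessing):

```python
import numpy as np

# Hypothetical sketch: functional connectivity as the Pearson correlation
# of two regional time series. Signals and parameters are invented.
rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.5)                 # one minute, sampled every 0.5 s

hum = np.sin(2 * np.pi * 0.05 * t)        # a shared slow fluctuation (~0.05 Hz)
region_a = hum + 0.3 * rng.normal(size=t.size)   # two regions "tuned" to the
region_b = hum + 0.3 * rng.normal(size=t.size)   # same network hum
region_c = rng.normal(size=t.size)               # an unrelated region

def connectivity(x, y):
    """Pearson correlation between two regional time series."""
    return np.corrcoef(x, y)[0, 1]

print(connectivity(region_a, region_b))   # high: same "station"
print(connectivity(region_a, region_c))   # near zero: different "stations"
```

Regions humming along at a shared frequency and phase correlate strongly even at rest, which is the signature these resting-state scans pick up.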

But whether the images obtained via resting-state fMRI truly reflect neuronal activity or are some kind of artifact has been controversial. Now, a new study led by neuroscientist Michael Greicius, MD, and just published in Science, has found genetic evidence that convincingly bolsters neuroimaging-based depictions of these brain-activity patterns.

Ricard is on a cross-country speaking tour spreading his belief that “altruism is the vital thread that can address the main challenges of our time, from economic inequality to environmental sustainability, from life satisfaction to conflict resolution.”

The French native is an internationally bestselling author (an earlier book, Happiness, A Guide to Developing Life’s Most Important Skill, was a huge global success), a scientist with a PhD in molecular genetics, and a photographer. The new book is a tome and perhaps even a salve for these turbulent times when the seams of the world seem to be tearing apart.

Ricard is a gentle man. He resides in a monastery outside Kathmandu that sustained significant damage when an earthquake devastated Nepal in April. (His foundation, Karuna-Shechen, is raising funds for disaster relief.) At the age of 20, he went to India to meet the great masters of Tibetan Buddhism. He returned in 1972 to study full time and lead a contemplative life, often in isolation for long stretches. I asked him if the life of solitude was difficult; he told me it’s a question he’s frequently asked.

In this 1:2:1 podcast, we spoke about the intersection of neuroscience and meditation and the enormous growth of mindfulness in the U.S. I wondered whether he thought mindfulness was becoming too commoditized. For instance, would the world be better off with mindful drone operators? He thinks not. I also asked him about his purpose in life. But the main focus of the interview is his new book and his view that now is the time to spread altruism as the world desperately needs it and is primed to respond.

Ricard left the Bay Area over the weekend, moving on to Los Angeles, Washington and New York to spread the word and demonstrate altruism in action.

Imagine the usefulness of knowing if someone is drawing on a memory or experiencing something for the first time. “No, officer, I’ve never seen that person before.”

That’s possible, using an algorithm, developed by a team of Stanford researchers led by psychology professor Anthony Wagner, PhD, that interprets brain scans. But according to a Stanford Report article, it’s also possible to fool that same program when subjects are coached to hide their memories.

The program, or decoder, capitalizes on the complexity of memory, which taps many different regions of the brain. The researchers use functional magnetic resonance imaging (fMRI) to see which parts of the brain are active.

Hoping to illustrate the limits of their own creation, the researchers asked 24 study participants to study a series of faces. The next day, they exposed them to some of the same faces mixed with entirely new faces:

“We gave them two very specific strategies: If you remember seeing the face before, conceal your memory of that face by specifically focusing on features of the photo that you hadn’t noticed before, such as the lighting or the contours of the face, anything that’s novel to distract you from attending to the memory,” said Melina Uncapher, PhD, a research scientist in Wagner’s lab. “Likewise, if you see a brand-new face, think of a memory or a person that this face reminds you of and try to generate as much rich detail about that face as you can, which will make your brain look like it’s in a state of remembering.”

With just two minutes of coaching and training, the subjects became proficient at fooling the algorithm: The accuracy of the decoder fell to 50 percent, or no better than a coin-flip decision.

The new study shows that imaging technology alone will not be able to “pull out the truth about memory in all contexts,” Wagner said. And, as pointed out in the article, he “sees [the results] as potentially troubling for the goals of one day using fMRI to judge ‘ground truth’ in law cases.”
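The basic decoding logic, and why the countermeasure works, can be sketched with a toy pattern classifier. Everything below is invented for illustration (the real decoder operates on fMRI voxel patterns, not these made-up features): the classifier learns the "remembering" pattern from training data, then fails to chance level when that pattern is suppressed.

```python
import numpy as np

# Hypothetical sketch of pattern decoding and its countermeasure.
# Features, sizes, and the classifier are invented for illustration.
rng = np.random.default_rng(2)

def patterns(n, signal):
    """n trials per class of 20-feature 'activity'; `signal` shifts the
    remembered-face class away from the novel-face class."""
    old = rng.normal(size=(n, 20)) + signal   # remembered faces
    new = rng.normal(size=(n, 20))            # novel faces
    X = np.vstack([old, new])
    y = np.array([1] * n + [0] * n)
    return X, y

def fit_centroids(X, y):
    """Nearest-centroid 'decoder': learn each class's average pattern."""
    return X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)

def accuracy(X, y, centroids):
    c1, c0 = centroids
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

X_train, y_train = patterns(200, signal=0.5)
centroids = fit_centroids(X_train, y_train)

X_honest, y_honest = patterns(200, signal=0.5)   # subjects behaving normally
X_covert, y_covert = patterns(200, signal=0.0)   # countermeasure: the memory
                                                 # signature is suppressed

print(accuracy(X_honest, y_honest, centroids))   # well above 0.5
print(accuracy(X_covert, y_covert, centroids))   # near 0.5: a coin flip
```

The coached strategies in the study work analogously: by redirecting attention, subjects alter the activity pattern the decoder was trained on, collapsing its accuracy to chance.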

For years, early childhood teachers have seen that students taught to read using a phonics approach — sounding out the letters in each word — tended to become better readers than those taught to recognize whole words by sight. Now a new study, published in the scientific journal Brain and Language, has given researchers insight into why, providing some of the earliest neurological data about early readers’ learning processes.

During the study, which was co-authored by Bruce McCandliss, PhD, a Stanford education professor who is part of the Stanford Neurosciences Institute, researchers developed a new written language and compared how 16 adult study participants learned when they were taught using a phonics versus a whole-word approach. The researchers then used a brain mapping technique that employs an electroencephalograph, or EEG, to track participants’ responses to newly learned words. As described in a Stanford Report story:

[T]hese very rapid brain responses to the newly learned words were influenced by how they were learned.

McCandliss noted that this strong left hemisphere engagement during early word recognition is a hallmark of skilled readers, and is characteristically lacking in children and adults who are struggling with reading.

The study also showed that as long as study participants used the letter-sound pattern, they were able to read words they had never seen before. As noted in the piece, the researchers believe this work “could eventually lead to better-designed interventions to help struggling readers.”