It’s hard to believe we started this blog over eight years ago – all the way back when we were grad students. What a long way we’ve come. Patryk is now Director of R&D at Brain Corporation, while Michael is an assistant professor at Rutgers University.

Today we are relaunching Neurevolution, with a new design and plans for more frequent posts. Why today? It’s not only Albert Einstein’s birthday, but a very special Pi Day (3/14/15).

It is our hope that this blog continues to provide a forum for brain research knowledge and ideas that is more accessible to the general public than our scientific publications and presentations. We hope you enjoy – please contact us with any questions, ideas, or requests for the blog.

I’m excited to announce that my latest scientific publication – “Multi-task connectivity reveals flexible hubs for adaptive task control” – was just published in Nature Neuroscience. The paper reports on a project I (along with my co-authors) have been working on for over a year. The goal was to use network science to better understand how human intelligence happens in the brain – specifically, our ability to rapidly adapt to new circumstances, as when learning to perform a task for the first time (e.g., how to use new technology).

The project built on our previous finding (from last year) showing that the amount of connectivity of a well-connected “hub” brain region in prefrontal cortex is linked to human intelligence. That study suggested (indirectly) that there may be hub regions that are flexible – capable of dynamically updating what brain regions they communicate with depending on the current goal.

Typical methods were not capable of more directly testing this hypothesis, however, so we took the latest functional connectivity approaches and pushed the limit, going well beyond the previous paper and what others have done in this area. The key innovation was to look at how functional connectivity changes across dozens of distinct task states (specifically, 64 tasks per participant). This allowed us to look for flexible hubs in the fronto-parietal brain network.

We found that this network contained regions that updated their global pattern of functional connectivity (i.e., inter-regional correlations) depending on which task was being performed.

In other words, the fronto-parietal network changed its brain-wide functional connectivity more than any other major brain network, and this updating appeared to code which task was being performed.
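To make this concrete, here is a minimal sketch in Python of how connectivity “flexibility” can be quantified – using purely synthetic data rather than our actual dataset, with the region counts, task counts, and coupling strengths invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints, n_tasks = 10, 200, 4

# Synthetic data: region 0 acts as a "flexible hub" whose coupling
# partner changes with the task; other regions keep fixed (noise) couplings.
fc_per_task = []
for task in range(n_tasks):
    data = rng.standard_normal((n_regions, n_timepoints))
    partner = 1 + task            # the hub couples with a different region per task
    data[0] += data[partner]      # induce a task-dependent correlation
    fc = np.corrcoef(data)        # inter-regional correlation matrix for this task
    fc_per_task.append(fc)

fc_per_task = np.array(fc_per_task)   # shape: (tasks, regions, regions)

# "Flexibility" of each region: how much its whole-brain connectivity
# pattern varies across task states.
flexibility = fc_per_task.std(axis=0).mean(axis=1)

print("Most flexible region:", flexibility.argmax())  # expect region 0
```

In the synthetic data, region 0 switches coupling partners across tasks, so it scores highest on the flexibility measure – a cartoon version of how a flexible hub would stand out in real functional connectivity data.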

What’s the significance?

These results suggest a potential mechanism for adaptive cognitive abilities in humans:
Prefrontal and parietal cortices form a network with extensive connections projecting to other functionally specialized networks throughout the brain. Incoming instructions activate component representations – coded as neuronal ensembles with unique connectivity patterns – that produce a unique global connectivity pattern throughout the brain. Since these component representations are interchangeable, it’s possible to implement combinations of instructions never seen before, allowing for rapid learning of new tasks from instructions.

Important points not mentioned or not emphasized in the journal article:

This study was highly hypothesis-driven, as it tested some predictions of our recent compositional theory of prefrontal cortex function (extended to include parietal cortex as well). That theory was first proposed earlier this year in Cole, Laurent, & Stocco (2013).

Also, as described in our online supplemental FAQ for the paper, we identified ‘adaptive task control’ flexible hubs, but there may be other kinds of flexible hubs in the brain. For instance, there may be flexible hubs for stable task control (maintaining task information via connectivity patterns over extended periods of time, only updating when necessary).

We measured intelligence using “fluid intelligence” tests, which measure your ability to solve novel visual puzzles. It turns out that scores on these tests correlate with important life outcomes like academic and job success. So, finding a neuroscientific factor underlying fluid intelligence might have some fairly important implications.

It turns out that it’s relatively unclear exactly what fluid intelligence tests actually test (what helps you solve novel puzzles, exactly?), so we also measured a more basic “cognitive control” ability thought to be related to fluid intelligence – working memory. This measures your ability to maintain and manipulate information in mind in a goal-directed manner.

Overall (i.e., global) brain connectivity with a part of left lateral prefrontal cortex (see figure above) could predict both fluid intelligence and cognitive control abilities.
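For the curious, here is a toy sketch of the global connectivity measure in Python – entirely synthetic data, with the seed region, coupling strengths, and “ability” scores invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_regions, n_timepoints = 30, 20, 150
seed = 0  # hypothetical lateral-prefrontal seed region

ability = rng.uniform(0.0, 1.0, n_subjects)  # stand-in cognitive score
gbc = np.empty(n_subjects)

for s in range(n_subjects):
    shared = rng.standard_normal(n_timepoints)   # a brain-wide shared signal
    data = rng.standard_normal((n_regions, n_timepoints)) + 0.5 * shared
    # By construction, the seed's coupling to the shared signal scales with ability
    data[seed] = rng.standard_normal(n_timepoints) + ability[s] * shared
    corr = np.corrcoef(data)
    gbc[s] = (corr[seed].sum() - 1.0) / (n_regions - 1)  # mean corr, excluding self

r = np.corrcoef(gbc, ability)[0, 1]
print(f"GBC-ability correlation: r = {r:.2f}")
```

Because the toy data were built so that the seed’s brain-wide coupling tracks ability, its global brain connectivity (GBC) predicts the behavioral score – the same logic, in miniature, as the analysis described above.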

What does this mean? One possibility is that this prefrontal region is a “flexible hub” that uses its extensive brain-wide connectivity to monitor and influence other brain regions in a goal-directed manner. This may sound a bit like a “homunculus” (little man) that single-handedly implements all brain functions, but we’re suggesting something more modest: the region acts much like the feedback control systems commonly used in engineering, it only helps implement cognitive control (which in turn supports fluid intelligence), and it doesn’t do this alone.

Indeed, we found other independent factors that were important for predicting intelligence, suggesting there are several fundamental neural factors underlying intelligence. The global connectivity of this prefrontal region could account for 10% of the variability in fluid intelligence, while activity in this region accounts (independently) for 5% of the variability, and overall gray matter volume accounts (again independently) for an additional 6.7% of the variance. Together, these three factors accounted for 26% of the variance in fluid intelligence across individuals.

There are several important questions that this study raises. For instance, does this region change its connectivity depending on the task being performed, as the “flexible hub” hypothesis would suggest? Are there other regions whose global (or local) connectivity contributes substantially to intelligence and cognitive control abilities? Finally, what other factors are there in the brain that might be able to predict fluid intelligence across individuals?

We are rarely alone when learning something for the first time. We are social creatures, and whether it’s a new technology or an ancient tradition, we typically benefit from instruction when learning new tasks. This form of learning – in which a task is rapidly (within seconds) learned from instruction – can be referred to as rapid instructed task learning (RITL; pronounced “rittle”). Despite the fundamental role this kind of learning plays in our lives, it has been largely ignored by researchers until recently.

RITL almost certainly played a tremendous role in shaping human evolution. The selective advantages of RITL for our species are clear: having RITL abilities allows us to partake in a giant web of knowledge shared with anyone willing to instruct us. We might have received instructions to avoid a dangerous animal we have never seen before (e.g., a large cat with a big mane), or instructions on how to make a spear and kill a lion with it. The possible scenarios in which RITL would have helped increase our chances of survival are virtually endless.

When you type a search into Google it figures out the most important websites based in part on how many links each has from other websites. Taking up precious website space with a link is costly, making each additional link to a page a good indicator of importance.

We thought the same logic might apply to brain regions. Making a new brain connection (and keeping it) is metabolically and developmentally costly, suggesting that regions with many connections must be providing important enough functions to make those connections worth the sacrifice.
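The web analogy can be made concrete with a PageRank-style power iteration on a toy graph (the graph is invented for illustration; this sketches the general link-counting idea, not our actual analysis):

```python
import numpy as np

# Toy directed "connection" graph: node 3 receives links from all other
# nodes, mirroring a heavily linked website (or a densely connected hub).
adj = np.array([
    [0, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
], dtype=float)

# Column-stochastic transition matrix: follow a random outgoing link.
transition = adj.T / adj.sum(axis=1)

damping = 0.85
rank = np.full(4, 0.25)
for _ in range(100):  # power iteration converges quickly on a graph this small
    rank = (1 - damping) / 4 + damping * transition @ rank

print("Most 'important' node:", rank.argmax())  # node 3, the best-linked one
```

Node 3, the one every other node links to, ends up with the highest rank – the same reasoning that leads us to expect densely connected brain regions to be functionally important.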

We found that two large-scale brain networks were among the top 5% of globally connected regions using both metrics (see figure above). The cognitive control network (CCN) is involved in attention, working memory, decision-making and other important high-level cognitive processes (see Cole & Schneider, 2007). In contrast, the default-mode network (DMN) is typically anti-correlated with the CCN and is involved in mind-wandering, long-term memory retrieval, and self-reflection.

Needless to say, these networks have highly important roles! Without them we would have no sense of self-control (via the CCN) or even a sense of self to begin with (via the DMN).

However, there are other important functions (such as arousal, sleep regulation, breathing, etc.) that are not reflected here, most of which involve subcortical regions. These regions are known to project widely throughout the brain, so why aren’t they showing up?

It turns out that these subcortical regions only show up for one of the two metrics we used. This metric – unlike the other one – includes low-strength connections. Subcortical regions tend to be small and project weak connections all over the brain, such that only the metric including weak connections could identify them.
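A toy example shows why the choice of metric matters – here a hypothetical “subcortical-like” node has many weak connections, so it tops a metric that counts all connections but vanishes once weak connections are thresholded away (the connection matrix is invented for illustration):

```python
import numpy as np

n = 12
conn = np.zeros((n, n))

# Cortical-like nodes: a few strong connections to nearby neighbors
for i in range(1, n):
    for j in (i + 1, i + 2, i + 3):
        if j < n:
            conn[i, j] = conn[j, i] = 0.8

# Node 0: a "subcortical-like" region with weak connections everywhere
conn[0, 1:] = conn[1:, 0] = 0.1

degree_all = (conn > 0).sum(axis=1)       # metric that includes weak links
degree_strong = (conn > 0.5).sum(axis=1)  # metric that excludes weak links

print(degree_all.argmax())     # node 0 tops the weak-inclusive metric
print(degree_strong[0])        # 0: node 0 vanishes when weak links are dropped
```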

I recently found out that this article received the 2010 NeuroImage Editor’s Choice Award (Methods section). I was somewhat surprised by this, since I thought there wasn’t much interest in the study. When I looked up the most popular articles at NeuroImage, however, I found out it was the 7th most downloaded article from January to May 2010. Hopefully this interest will lead to some innovative follow-ups to try to understand what makes these brain regions so special!

Figuring out how the brain decides between two options is difficult. This is especially true for the human brain, whose activity is typically accessible only via the small and occasionally distorted window provided by new imaging technologies (such as functional MRI (fMRI)).

In contrast, monkey brains can typically be observed with far greater precision, since the skull can be opened and brain activity recorded directly.

Despite this, if you were to look just at the human research, you would consider it a fact that the anterior cingulate cortex (ACC) increases its activity during response conflict. The thought is that this brain region detects that you are having trouble making decisions, and signals other brain regions to pay more attention.

If you were to only look at research with monkeys, however, you would think otherwise. No research with macaque monkeys (the ‘non-human primate’ typically used in neuroscience research) has found conflict activity in ACC.

My most recent publication looks at two possible explanations for this discrepancy: 1) Differences in methods used to study these two species, and 2) Fundamental evolutionary differences between the species.

Thousands of brain imaging studies are published each year. A subset of these studies are replications, or slight variations, of previous studies. Attempting to come to a solid conclusion based on the complex brain activity patterns reported by all these replications can be daunting. Meta-analysis is one tool that has been used to make sense of it all.

Meta-analyses take locations of brain activity in published scientific papers and pool them together to see if there is any consistency.

This is typically done using a standardized brain space that all the studies fit their data to (e.g., Talairach). Activation coordinates are then placed on a template brain as dots. When the dots clump together, the author can claim that some consistency is present across studies. See the first figure for an example of this kind of result.

More sophisticated ways of doing this have emerged, however. One of these advanced methods is called activation likelihood estimation (ALE). This method was developed by Peter Turkeltaub et al. (in conjunction with Jason Chein and Julie Fiez) in 2002 and extended by Laird et al. in 2005.

ALE computes the probability of each part of the brain being active across studies. This is much more powerful than simple point-plotting because it takes much of the guess-work out of deciding if a result is consistent across studies or not.
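Here is a simplified one-dimensional sketch of the ALE idea (the real method operates on a 3-D voxel grid with empirically motivated kernels; the kernel width and activation coordinates below are invented):

```python
import numpy as np

# 1-D "brain" for illustration; real ALE works on a 3-D voxel grid.
x = np.linspace(0, 100, 101)
sigma = 5.0  # smoothing kernel width (hypothetical)

def ma_map(foci):
    """Modeled-activation map: the probability each point was active in one
    study, taking the union of Gaussian kernels centered on its foci."""
    p_inactive = np.ones_like(x)
    for focus in foci:
        kernel = np.exp(-((x - focus) ** 2) / (2 * sigma ** 2))
        p_inactive *= 1 - kernel
    return 1 - p_inactive

# Reported activation coordinates from three hypothetical studies
studies = [[40, 70], [42], [39, 85]]

# ALE: probability that at least one study activates each point
ale = 1 - np.prod([1 - ma_map(f) for f in studies], axis=0)

print("Peak ALE location:", x[ale.argmax()])  # within the cluster around x = 40
```

The three studies agree around x = 40, so that is where the activation likelihood peaks – exactly the kind of cross-study consistency the point-plotting approach can only eyeball.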

Causal understanding is an important part of human cognition. How do we understand that a particular event or force has caused another event? How do we realize that inserting coins into a soda machine results in a cool beverage appearing below? And ultimately, how do we understand people’s reactions to events?

The NSF workshop panel on the Grand Challenges of Mind and Brain highlighted the question of ‘causal understanding’ as their 6th research topic. (This was the final topic in their report.)

In addition to studying causal understanding, it is probably just as important to study causal misunderstanding: that is, why do individuals infer the wrong causes for events, or the wrong effects from causes? Studying the errors we make in causal inference and understanding may help us discover the underlying neural mechanisms.

It isn’t too difficult to imagine that progress on causal understanding, and improvements in our ability to reason correctly about causation, will be important for the well-being of humanity. But what kinds of experiments and methods could be used to study the human brain mechanisms of causal understanding?

I recently watched this talk (below) by Joaquin Fuster. His theories provide a good integration of cortical functions and distributed processing in working and long-term memory. He also has some cool videos of likely network interactions across cortex (in real time) in his talk.

Modern technologies allow eye movements to be used as a tool for studying language processing during tasks such as natural reading. Saccadic eye movements during reading turn out to be highly sensitive to a number of linguistic variables. A number of computational models of eye movement control have been developed to explain how these variables affect eye movements. Although these models have focused on relatively low-level cognitive, perceptual and motor variables, there has been a concerted effort in the past few years (spurred by psycholinguists) to extend these computational models to syntactic processing.

During a modeling symposium at ECEM2007 (the 14th European Conference on Eye Movements), Dr. Ronan Reilly presented a first attempt to take syntax into account in his eye-movement control model (GLENMORE; Reilly & Radach, Cognitive Systems Research, 2006).

In the dark confines behind our eyes lies flesh full of mysterious patterns, constituting our hopes, desires, knowledge, and everything else fundamental to who we are. Since at least the time of Hippocrates we have wondered about the nature of this flesh and its functions. Finally, after thousands of years of wondering we are now able to observe the mysterious patterns of the living brain, with the help of neuroimaging.

First, electroencephalography (EEG) showed us that these brain patterns have some relation in time to our behaviors. EEG showed us when things happen in the brain. More recent technologies such as functional magnetic resonance imaging (fMRI) then showed us where things happen in the brain.

It has been suggested that true insights into these brain patterns will arise when we can understand the patterns’ complex spatio-temporal nature. Thus, only with sufficient spatial and temporal resolution will we be able to decipher the mechanisms behind the brain patterns, and as a result the mechanisms behind ourselves.

Magnetoencephalography (MEG) may help to provide such insight. This method uses superconducting sensors to detect subtle changes in the magnetic fields surrounding the head. These changes reflect the patterns of neural activity as they occur in the brain. Unlike fMRI (and similar methods), MEG can measure neural activity at a very high temporal resolution (>1 kHz). In this respect it is similar to EEG. However, unlike EEG, MEG patterns are not distorted by the skull and scalp, thus providing an unprecedented level of spatio-temporal resolution for observing the neural activity underlying our selves.

Although MEG has been around for several decades, new advances in the technology are providing unprecedented abilities to observe brain activity. Of course, the method is not perfect by any means. As always, it is complementary to other methods, and should be used in conjunction with other noninvasive (and, where appropriate, invasive) neuroimaging methods.

MEG relies on something called a superconducting quantum interference device (SQUID). Many of these SQUIDs are built into a helmet, which is cooled with liquid helium and placed around the head. Extremely small magnetic fields created by neural activity can then be detected with these SQUIDs and recorded to a computer for later analysis.

I recently got back from a trip to Finland, where I learned a great deal about MEG. I’m planning to use the method to observe the flow of information among brain regions during cognitive control tasks involving decision making, learning, and memory. I’m sure news of my work in this area will eventually make it onto this website.

It depicts the “evolution of the matter distribution in a cubic region of the Universe over 2 billion light-years”, as computed by the Millennium Simulation. (Click the image above for a better view.)

The next image, of a neuron, is included for comparison.

It is tempting to wax philosophical on this structure equivalence. How is it that both the external and internal universes can have such similar structure, and at such vastly different physical scales?

If we choose to go philosophical, we may as well ponder something even more fundamental: Why is it that all complex systems seem to have a similar underlying network-like structure?

Generally, cognitive neuroscience aims to explain how mental processes such as believing, knowing, and inferring arise in the brain and affect behavior. Two behaviors that have important effects on the survival of humans are cooperation and conflict.

According to the NSF committee convened last year, conflict and cooperation is an important focus area for future cognitive neuroscience work. Although research in this area has typically been the domain of psychologists, it seems that the time is ripe to apply findings from neuroscience to ground psychological theories in the underlying biology.

Neuroscience has produced a large amount of information about the brain regions that are relevant to social interactions. For example, the amygdala has been shown to be involved in strong emotional responses. The “mirror” neuron system in the frontal lobe allows us to put ourselves in someone else’s shoes by allowing us to understand their actions as though they were our own. Finally, the superior temporal gyrus and orbitofrontal cortex, normally involved in language and reward respectively, have also been shown to be involved in social behaviors.

Experiments?

The committee has left it up to us to come up with a way to study these phenomena! How can we study conflict and cooperation from a cognitive neuroscience perspective?

At least two general approaches come to mind. The first is fMRI studies in which social interactions are simulated (or carried out remotely) over a computer link to the experiment participant. A range of studies of this sort have recently begun to appear investigating trust and decision-making in social contexts.

The second general approach that comes to mind is that of using neurocomputational simulations of simple acting organisms with common or differing goals. Over the past few years, researchers have been carrying out studies with multiple interacting “agents” that “learn” through the method of Reinforcement Learning.

Reinforcement Learning is a family of artificial intelligence algorithms that allows “agents” to develop behaviors through trial and error in pursuit of a goal that provides reward in the form of positive numbers. Each agent is defined as a small program with a state (e.g., location, sensory input) and a memory, or “value function”, which keeps track of how much numerical reward the agent expects to obtain by choosing each possible action.

Although normally thought to be of interest only to computer scientists, Reinforcement Learning has recently attracted the attention of cognitive neuroscientists because of emerging evidence that something like it might be used in the brain.

By providing these agents with a goal that can only be achieved through some measure of cooperation, or under some competitive pressure, issues of conflict and cooperation can be studied in a perfectly controlled computer simulation environment.
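As a drastically simplified sketch of this approach, here are two independent Q-learning-style agents in a repeated game that pays off only under mutual cooperation (the game and all parameters are invented for illustration):

```python
import random

random.seed(0)
ACTIONS = ["cooperate", "defect"]
EPS, ALPHA = 0.1, 0.2  # exploration rate, learning rate (hypothetical values)

# Each agent's "value function": expected reward for each possible action
q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]

def choose(qtable):
    if random.random() < EPS:            # occasional trial-and-error exploration
        return random.choice(ACTIONS)
    return max(qtable, key=qtable.get)   # otherwise act greedily on the values

for _ in range(2000):
    acts = [choose(q[0]), choose(q[1])]
    # Reward arrives only when both agents cooperate
    reward = 1.0 if acts[0] == acts[1] == "cooperate" else 0.0
    for agent, act in enumerate(acts):
        q[agent][act] += ALPHA * (reward - q[agent][act])

print([max(qt, key=qt.get) for qt in q])  # both agents learn to cooperate
```

Because reward only arrives when both agents choose to cooperate, each agent’s value function comes to favor cooperation purely through trial and error – a miniature, perfectly controlled testbed for questions about conflict and cooperation.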

Following up on MC's posts about the significant insights in the history of neuroscience, I'll now take Neurevolution for a short jaunt into neuroscience's potential future.

In light of recent advances in technologies and methodologies applicable to neuroscience research, the National Science Foundation last summer released a document on the "Grand Challenges of Neuroscience". These grand challenges were identified by a committee of leading members of the cognitive neuroscience community.

The document, available at http://www.nsf.gov/sbe/grand_chall.pdf, describes six domains of research the committee deemed to be important for progress in understanding the relationship between mind and brain.

Over the next few posts, I will discuss each of the research domains and explain in layperson's terms why these questions are interesting and worth pursuing. I'll also describe potential experimental approaches to address these questions in a cognitive neuroscience framework.

Topic 1: "Adaptive Plasticity"

One research topic brought up by the committee was that of adaptive plasticity. In this context, plasticity refers to the idea that the connections in the brain, and the behavior governed by the brain, can be changed through experience and learning.

Learning allows us to adapt to new circumstances and environments. Arguably, understanding how we learn and how to improve learning could be one of the greatest contributions of neuroscience.

Although it is widely believed that memory is based on the synaptic changes that occur during long-term potentiation and long-term depression (see our earlier post), this has not been conclusively shown!

What has been shown is that drugs that prevent synaptic changes also prevent learning. However, that finding only demonstrates a correlation between synaptic change and memory formation, not causation. (For example, it is possible that those drugs are interfering with some other process that truly underlies memory.)

The overarching question the committee raises is: What are the rules and principles of neural plasticity that implement [the] diverse forms of memory?

This question aims to quantify the exact relationships between changes at the neuronal level and at the level of behavior. For instance, do rapid changes at the synapse reflect rapid learning? And, how do the physical limitations on the changes at the neuronal level relate to cognitive limitations at the behavioral level?
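One way to picture the first question is with a delta-rule toy model, in which a single “synaptic” learning rate determines how quickly a behavioral association reaches criterion (purely illustrative, not a model of any specific synapse):

```python
def trials_to_learn(learning_rate, criterion=0.9, target=1.0):
    """Delta-rule sketch: one 'synaptic weight' is driven toward an
    association; returns how many trials until the response reaches
    criterion (here, 90% of the target association strength)."""
    w, trials = 0.0, 0
    while w < criterion * target:
        w += learning_rate * (target - w)   # synaptic change on each trial
        trials += 1
    return trials

for rate in (0.05, 0.2, 0.5):
    print(f"synaptic learning rate {rate}: {trials_to_learn(rate)} trials")
```

In this cartoon, faster synaptic change directly produces faster behavioral learning – the kind of quantitative neuron-to-behavior relationship the committee’s question asks us to pin down experimentally.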

Experiments?
My personal opinion is that the answers to these questions will be obtained through new experiments that either implant new memories or alter existing ones (e.g., through electrical stimulation protocols).

There is every indication that experimenters will soon be able to select and stimulate particular cells in an awake, behaving animal to alter the strength of the connection between those cells. The experimenters can then test the behavior of the animals to see if their memory for the association that might be represented by that connection has been altered.